Now has 3 different model classes available (VAE, GAN, VAE/GAN).
All models have both convolutional and linear mode architectures.
Updated to use Chainer 1.6.0
Will output intermediate generated images so that users can inspect training progress when run in a Jupyter notebook.
This package contains classes for training three different unsupervised, generative image models: Variational Auto-encoders (VAE), Generative Adversarial Networks (GAN), and the newly developed combination of the two (VAE/GAN). Descriptions of the inner workings of these algorithms can be found in the papers that introduced each model.
All models take in a series of images and can be trained to perform either an encoding transform step or a generative inverse_transform step (or both). It's built on top of the Chainer framework and has an easy-to-use command line interface for training and generating images with a Variational Auto-encoder.
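As a rough sketch of what this looks like from Python: the transform and inverse_transform steps are the ones named above, and the VAE class is implied by the description, but load_images and the constructor defaults shown here are assumptions that should be checked against the module.

# Hypothetical usage sketch -- names other than transform/inverse_transform
# are assumptions and may differ from the actual fauxtograph API.
from fauxtograph import VAE

vae = VAE()                                        # assumed default constructor
images = vae.load_images(['./images/img1.jpg',
                          './images/img2.jpg'])    # assumed image-loading helper
vae.fit(images)                                    # train the model on the loaded images

latent = vae.transform(images)                     # encode images into latent vectors
reconstructions = vae.inverse_transform(latent)    # decode latent vectors back into images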
Both the module itself and the training script are available by installing this package through PyPI. Otherwise, the module containing the main class that does all the heavy lifting is in
fauxtograph/fauxtograph.py, which has dependencies in
fauxtograph/vaegan.py, while the training/generation CLI script is in
To learn more about the command line tool's functionality and to get a better sense of how one might use it, please see the blog post on the Stitch Fix tech blog, MultiThreaded.
The simplest way to start using the module is to install it via pip:
$ pip install fauxtograph
This should additionally grab all necessary dependencies, including the main backend NN framework, Chainer. However, if you plan on using CUDA to train the model with a GPU, you'll need to additionally install the Chainer CUDA dependencies with
$ pip install chainer-cuda-deps
To get started, you can either find your own image set to use or use the downloading tool to grab some of the Hubble/ESA space images, which I've found make for interesting results.
To grab the images and place them in an images folder, run
$ fauxtograph download ./images
This process can take some time depending on your internet connection.
Then you can train a model and output it to disk with
$ fauxtograph train --kl_ratio 0.005 ./images ./models/model_name
Finally, you can generate new images based on your trained model with
$ fauxtograph generate ./models/model_name_model.h5 ./models/model_name_opt.h5 ./models/model_name_meta.json ./generated_images_folder
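If you'd prefer to generate images from Python rather than the CLI, the pattern would look roughly like the following. The VAE.load interface and latent_width attribute shown here are assumptions inferred from the three model files the generate command takes; only inverse_transform is named above, so verify the rest against the module.

# Hypothetical sketch of in-Python generation. VAE.load and latent_width are
# assumed names based on the CLI arguments above; check fauxtograph/fauxtograph.py.
import numpy as np
from fauxtograph import VAE

vae = VAE.load('./models/model_name_model.h5',
               './models/model_name_opt.h5',
               './models/model_name_meta.json')

z = np.random.standard_normal((10, vae.latent_width)).astype('float32')  # sample 10 latent vectors
generated = vae.inverse_transform(z)  # decode the latent samples into new images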
Each command comes with a --help option to see possible optional arguments.
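For example, to see the optional arguments for training:

$ fauxtograph train --help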
In order to get the best results for generated images, you'll need either a rather large number of images (on the order of several hundred thousand or more) or images that are all quite similar with minimal backgrounds.
As the model trains, you should see the KL divergence and the reconstruction loss, each averaged over the batches, printed as output. You might wish to adjust the ratio of these two terms with the
--kl_ratio option in order to get better performance, should you find that the learning rate is driving one or the other term to zero too quickly (or too slowly).
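As a mental model, the ratio weights the KL term relative to the reconstruction term in the objective being minimized. The exact form below is an assumption, not a quote from the code; see fauxtograph/fauxtograph.py for the real expression.

def weighted_loss(reconstruction_loss, kl_divergence, kl_ratio=0.005):
    # Assumed form of the objective: kl_ratio scales the KL term against
    # the reconstruction term (0.005 matches the training example above).
    return reconstruction_loss + kl_ratio * kl_divergence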
If you have a CUDA-capable NVIDIA GPU, use it. The model can train over 10 times faster by taking advantage of GPU processing.
Sometimes you will want to brighten your images when saving them, which can be done with the
If you manage to train a particularly interesting model and generate some neat images, then we'd like to see them. Use #fauxtograph if you decide to put them up on social media.
When training a GAN or VAE/GAN model, the two adversarial networks can learn at different rates; you can adjust the beta1 parameters of either network (usually the discriminator) to help train them at a similar rate.