Style transfer is the task of producing a pastiche image 'p' that shares the content of a content image 'c' and the style of a style image 's'. This code implements the paper "A Learned Representation for Artistic Style":
A Learned Representation for Artistic Style. Vincent Dumoulin, Jonathon Shlens, Manjunath Kudlur.
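The paper's key idea is conditional instance normalization: a single pastiche network is shared across all styles, and each style contributes only a per-channel scale (gamma) and shift (beta) applied after instance normalization. Here is a minimal NumPy sketch of that operation; the NHWC layout and names are illustrative, not the repo's actual code:

import numpy as np

def conditional_instance_norm(x, gamma, beta, style, eps=1e-5):
    # x: activations of shape [batch, height, width, channels] (NHWC).
    # gamma, beta: per-style parameter tables of shape [num_styles, channels].
    # style: integer index selecting which style's parameters to apply.
    mu = x.mean(axis=(1, 2), keepdims=True)    # per-image, per-channel mean
    var = x.var(axis=(1, 2), keepdims=True)    # per-image, per-channel variance
    x_hat = (x - mu) / np.sqrt(var + eps)      # instance-normalize
    return gamma[style] * x_hat + beta[style]  # re-scale/shift for this style

# Example: a 32-style table over 64 channels.
rng = np.random.default_rng(0)
x = rng.normal(size=(1, 8, 8, 64))
gamma = rng.normal(size=(32, 64))
beta = rng.normal(size=(32, 64))
y = conditional_instance_norm(x, gamma, beta, style=5)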
Whether you want to stylize an image with one of our pre-trained models or train your own, you first need to set up your environment and install the dependencies listed in requirements.txt.
The Jupyter notebook Image_Stylization.ipynb shows how to apply style transfer using a trained model.
First, download one of our pre-trained models to /checkpoints:
(You can also train your own model, but if you're just getting started we recommend using a pre-trained model first.)
Then, run the following command:
python image_stylization_transform.py \
--num_styles=<NUMBER_OF_STYLES> \
--checkpoint=/path/to/model.ckpt \
--input_image=/path/to/image.jpg \
--which_styles="[0,1,2,5,14]" \
--output_dir=/tmp/image_stylization/output \
--output_basename="stylized"
You'll have to specify the correct number of styles for the model you're using: 10 for the Monet model and 32 for the varied model. The which_styles argument takes a Python list of integer style indices to render.

which_styles can also specify a linear combination of styles to blend in a single image. Pass a Python dictionary that maps each style index to its weight; any style index left unspecified gets a weight of zero. Note that the weights are not normalized.
Here's an example that produces a stylization that is an average of all ten Monet styles:
python image_stylization_transform.py \
--num_styles=10 \
--checkpoint=/checkpoints/multistyle-pastiche-generator-monet.ckpt \
--input_image=/evaluation_images/benjamin_harrison.jpg \
--which_styles="{0:0.1,1:0.1,2:0.1,3:0.1,4:0.1,5:0.1,6:0.1,7:0.1,8:0.1,9:0.1}" \
--output_dir=/tmp/image_stylization/output \
--output_basename="all_monet_styles"
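Conceptually, a weighted combination blends the per-style normalization parameters before they are applied, which is what makes convex combinations of styles possible with a single network. A small sketch of that blend, using the same dictionary format as which_styles (the parameter tables here are illustrative, not loaded from a real checkpoint):

import numpy as np

def blend_style_params(gamma, beta, which_styles):
    # gamma, beta: per-style tables of shape [num_styles, channels].
    # which_styles: dict mapping style index -> weight; unspecified
    # indices get zero weight, and weights are used exactly as given
    # (they are not normalized).
    w = np.zeros(gamma.shape[0])
    for index, weight in which_styles.items():
        w[index] = weight
    # Weighted sums of the rows produce a single "virtual" style.
    return w @ gamma, w @ beta

rng = np.random.default_rng(0)
gamma = rng.normal(size=(10, 64))
beta = rng.normal(size=(10, 64))
# Average of all ten Monet styles, matching the command above.
g, b = blend_style_params(gamma, beta, {i: 0.1 for i in range(10)})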
To train your own model, you'll need three things:
- A directory of images to use as styles.
- A trained VGG model checkpoint (a download sketch follows this list).
- The ImageNet dataset. Instructions for downloading the dataset can be found here.
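If you don't have a VGG checkpoint yet, the TF-Slim pre-trained VGG-16 checkpoint is a common choice. The URL below is the TF-Slim model zoo location at the time of writing; verify it before relying on it:

$ wget http://download.tensorflow.org/models/vgg_16_2016_08_28.tar.gz
$ tar -xvzf vgg_16_2016_08_28.tar.gz  # extracts vgg_16.ckpt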
First, you need to prepare your style images:
$ image_stylization_create_dataset \
--vgg_checkpoint=/path/to/vgg_16.ckpt \
--style_files=/path/to/style/images/*.jpg \
--output_file=/tmp/image_stylization/style_images.tfrecord
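For intuition, dataset creation boils down to serializing each style image (plus a style index) into a TFRecord file. Below is a minimal sketch with an illustrative schema; the script's actual feature names and preprocessing may differ:

import glob
import tensorflow as tf

def _bytes_feature(value):
    return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))

def _int64_feature(value):
    return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))

with tf.io.TFRecordWriter('/tmp/image_stylization/style_images.tfrecord') as writer:
    for index, path in enumerate(sorted(glob.glob('/path/to/style/images/*.jpg'))):
        with open(path, 'rb') as f:
            encoded = f.read()  # raw JPEG bytes
        example = tf.train.Example(features=tf.train.Features(feature={
            'label': _int64_feature(index),        # style index
            'image_raw': _bytes_feature(encoded),  # encoded image
        }))
        writer.write(example.SerializeToString())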
Then, to train a model:
$ image_stylization_train \
--train_dir=/tmp/image_stylization/run1/train \
--style_dataset_file=/tmp/image_stylization/style_images.tfrecord \
--num_styles=<NUMBER_OF_STYLES> \
--vgg_checkpoint=/path/to/vgg_16.ckpt \
--imagenet_data_dir=/path/to/imagenet-2012-tfrecord
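Training minimizes the usual perceptual losses computed on VGG activations: a content loss between the pastiche and the content image, and a style loss between Gram matrices of the pastiche and the style image. A NumPy sketch of the per-layer terms; the actual layer choices and loss weights are hyperparameters from the paper, not reproduced here:

import numpy as np

def gram_matrix(feats):
    # feats: one VGG layer's activations, shape [height, width, channels].
    h, w, c = feats.shape
    f = feats.reshape(h * w, c)
    return f.T @ f / (h * w * c)  # normalized channel correlations

def content_loss(pastiche_feats, content_feats):
    return np.mean((pastiche_feats - content_feats) ** 2)

def style_loss(pastiche_feats, style_feats):
    return np.sum((gram_matrix(pastiche_feats) - gram_matrix(style_feats)) ** 2)

rng = np.random.default_rng(0)
p, c = rng.normal(size=(16, 16, 8)), rng.normal(size=(16, 16, 8))
total = content_loss(p, c) + 1e-3 * style_loss(p, c)  # weight is illustrative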
To evaluate the model:
$ image_stylization_evaluate \
--style_dataset_file=/tmp/image_stylization/style_images.tfrecord \
--train_dir=/tmp/image_stylization/run1/train \
--eval_dir=/tmp/image_stylization/run1/eval \
--num_styles=<NUMBER_OF_STYLES> \
--vgg_checkpoint=/path/to/vgg_16.ckpt \
--imagenet_data_dir=/path/to/imagenet-2012-tfrecord \
--style_grid
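Training and evaluation write summaries (including the style grid) that you can follow in TensorBoard:

$ tensorboard --logdir=/tmp/image_stylization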
You can also finetune a pre-trained model for new styles:
$ image_stylization_finetune \
--checkpoint=/path/to/model.ckpt \
--train_dir=/tmp/image_stylization/run2/train \
--style_dataset_file=/tmp/image_stylization/style_images.tfrecord \
--num_styles=<NUMBER_OF_STYLES> \
--vgg_checkpoint=/path/to/vgg_16.ckpt \
--imagenet_data_dir=/path/to/imagenet-2012-tfrecord
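Finetuning is cheap because of conditional instance normalization: per the paper, a new style only requires learning fresh gamma/beta rows, while the rest of the network can stay fixed (whether the script actually freezes the convolutional weights is an implementation detail). A NumPy sketch of the idea, with illustrative shapes:

import numpy as np

rng = np.random.default_rng(0)
channels, old_styles, new_styles = 64, 10, 5

# Per-style tables restored from the pre-trained checkpoint.
gamma = rng.normal(size=(old_styles, channels))
beta = rng.normal(size=(old_styles, channels))

# Append freshly initialized rows for the new styles; during
# finetuning, gradients flow only into these new rows.
gamma = np.concatenate([gamma, np.ones((new_styles, channels))])
beta = np.concatenate([beta, np.zeros((new_styles, channels))])
trainable_rows = range(old_styles, old_styles + new_styles)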