Multi-Content GAN for Few-Shot Font Style Transfer at CVPR 2018


MC-GAN in PyTorch

This is the implementation of the Multi-Content GAN for Few-Shot Font Style Transfer. The code was written by Samaneh Azadi. If you use this code or our collected font dataset for your research, please cite:

Multi-Content GAN for Few-Shot Font Style Transfer; Samaneh Azadi, Matthew Fisher, Vladimir Kim, Zhaowen Wang, Eli Shechtman, Trevor Darrell, in CVPR, 2018.

Prerequisites:

  • Linux or macOS
  • Python 2.7
  • CPU or NVIDIA GPU + CUDA CuDNN

Getting Started

Installation

  • Install PyTorch and dependencies from http://pytorch.org
  • Install torchvision from source:
git clone https://github.com/pytorch/vision
cd vision
python setup.py install
pip install visdom
pip install dominate
pip install scikit-image
  • Clone this repo:
mkdir FontTransfer
cd FontTransfer
git clone https://github.com/azadis/MC-GAN
cd MC-GAN

MC-GAN train/test

  • Download our gray-scale 10K font data set:

./datasets/download_font_dataset.sh Capitals64

../datasets/Capitals64/test_dict/dict.pkl ensures that the same random set of observed glyphs is used across different test runs on the Capitals64 dataset. It is a dictionary with font names as keys and random arrays of indices in [0, 26) as values; the length of each array equals the number of non-observed glyphs in that font.
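
To sanity-check the dictionary, you can load it directly (a minimal sketch; the path and the key/value layout are taken from the description above):

import pickle

# Load the fixed test dictionary: font name -> array of held-out glyph indices.
with open('../datasets/Capitals64/test_dict/dict.pkl', 'rb') as f:
    test_dict = pickle.load(f)

font_name = sorted(test_dict)[0]
indices = test_dict[font_name]  # indices in [0, 26) of the non-observed glyphs
print('%s: %d held-out glyphs' % (font_name, len(indices)))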

../datasets/Capitals64/BASE/Code New Roman.0.0.png is an image of a fixed simple font, used for training the conditional GAN in the End-to-End model.

  • Download our real-world font data set:

./datasets/download_font_dataset.sh public_web_fonts

Given a few letters of font ${DATA}, for example the 5 letters {T,O,W,E,R}, the training directory ${DATA}/A should contain 5 images, each of dimension 64x(64x26)x3, in which 5 - 1 = 4 of the given letters are present and the rest are zeroed out. Each image should be saved as ${DATA}_${IND}.png, where ${IND} is the index (in [0,26)) of the letter omitted from the observed set. The training directory ${DATA}/B contains images of dimension 64x64x3 in which only the omitted letter is present; image names match those in ${DATA}/A. ${DATA}/A/test/${DATA}.png contains all 5 given letters as a single 64x(64x26)x3 image. The directory structure for such real-world fonts (including only a few observed letters) is shown below, followed by a sketch of how such a training image can be assembled; see the examples in ../datasets/public_web_fonts for more information.

../datasets/public_web_fonts
                      └── ${DATA}/
                          ├── A/
                          │  ├──train/${DATA}_${IND}.png
                          │  └──test/${DATA}.png
                          └── B/
                             ├──train/${DATA}_${IND}.png
                             └──test/${DATA}.png
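
The following sketch shows how one such ${DATA}/A training image could be assembled; the helper name and the glyph inputs are illustrative assumptions, not part of the released code:

import numpy as np
from PIL import Image

def make_A_image(glyphs, omitted_idx):
    # glyphs: dict mapping letter index (0-25) to a 64x64x3 uint8 array.
    canvas = np.zeros((64, 64 * 26, 3), dtype=np.uint8)  # non-observed slots stay zero
    for idx, glyph in glyphs.items():
        if idx == omitted_idx:
            continue  # hold out this letter; its slot stays blank
        canvas[:, idx * 64:(idx + 1) * 64, :] = glyph
    return canvas

# e.g. with letters T,O,W,E,R observed and 'R' (index 17) held out:
# Image.fromarray(make_A_image(observed_glyphs, 17)).save('${DATA}_17.png')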
  • (Optional) Download our synthetic color gradient font data set:

./datasets/download_font_dataset.sh Capitals_colorGrad64
  • Train Glyph Network:
./scripts/train_cGAN.sh Capitals64

Model parameters will be saved under ./checkpoints/GlyphNet_pretrain.

  • Test Glyph Network after a specific number of epochs (e.g. 400, by setting EPOCH=400 in ./scripts/test_cGAN.sh):
./scripts/test_cGAN.sh Capitals64
  • (Optional) View the generated images (e.g. after 400 epochs):
cd ./results/GlyphNet_pretrain/test_400/

If you are running the code on your local machine, open index.html. If you are running remotely via ssh, run the following on your remote machine:

python -m SimpleHTTPServer 8881

(With Python 3, the equivalent is python -m http.server 8881.)

Then on your local machine, start an SSH tunnel:

ssh -N -f -L localhost:8881:localhost:8881 user@remote_host

Now open your browser on the local machine and type in the address bar:

localhost:8881
  • (Optional) Plot loss function values during training, from the MC-GAN directory:
python util/plot_loss.py --logRoot ./checkpoints/GlyphNet_pretrain/
  • Train End-to-End network (e.g. on DATA=ft37_1): you can train the Glyph Network following the instructions above, or download our pre-trained model by running:
./pretrained_models/download_cGAN_models.sh

Now, you can train the full model:

./scripts/train_StackGAN.sh ${DATA}
  • Test End-to-End network:
./scripts/test_StackGAN.sh ${DATA}

Results will be saved under ./results/${DATA}_MCGAN_train.

  • (Optional) Make a video from your results in different training epochs:

First, train your model and save the model weights at every epoch by setting opt.save_epoch_freq=1 in scripts/train_StackGAN.sh. Then test at different epochs and make the video by running:

./scripts/make_video.sh ${DATA}

Follow the previous steps to visualize the generated images and training curves, replacing GlyphNet_pretrain with ${DATA}_StackGAN_train.

Training/test Details

  • Flags: see options/train_options.py, options/base_options.py, and options/test_options.py for explanations of each flag.

  • Baselines: if you want to use this code to reproduce the Image Translation baseline, or to try tiling glyphs rather than stacking them, refer to the end of scripts/train_cGAN.sh. If you only want to train OrnaNet on top of clean glyphs, refer to the end of scripts/train_StackGAN.sh.

  • Image Dimension: we have tried this network only on 64x64 images of letters. Images are not scaled or cropped, since both opt.FineSize and opt.LoadSize are set to 64.
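
As a quick sanity check before training on a new dataset, you can verify that a glyph-stack image matches the expected layout (an illustrative helper, not part of the repo; the path below is an example):

from PIL import Image

def check_glyph_stack(path):
    # A glyph-stack image should be 64 pixels tall and 26 glyphs of 64px wide.
    w, h = Image.open(path).size
    assert (w, h) == (64 * 26, 64), 'unexpected size %dx%d' % (w, h)

# check_glyph_stack('../datasets/public_web_fonts/ft37_1/A/test/ft37_1.png')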

Citation

If you use this code or the provided dataset for your research, please cite our paper:

@inproceedings{azadi2018multi,
  title={Multi-content gan for few-shot font style transfer},
  author={Azadi, Samaneh and Fisher, Matthew and Kim, Vladimir and Wang, Zhaowen and Shechtman, Eli and Darrell, Trevor},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  volume={11},
  pages={13},
  year={2018}
}

Acknowledgements

We thank Elena Sizikova for downloading all fonts used in the 10K font data set.

Code is inspired by pytorch-CycleGAN-and-pix2pix.
