
Technion - Israel Institute of Technology

Google Research

SinGAN: Learning a Generative Model from a Single Natural Image

ICCV 2019 Best paper award (Marr prize)

 

Tamar Rott Shaham, Tali Dekel, Tomer Michaeli

[Paper]

[Code]

[Figure: a single training image and random samples generated from it]

Image generation learned from a single training image. We propose SinGAN, a new unconditional generative model trained on a single natural image. Our model learns the image's patch statistics across multiple scales using a dedicated multi-scale adversarial training scheme; it can then be used to generate new realistic image samples that preserve the original patch distribution while creating new object configurations and structures.
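
To make the multi-scale idea concrete, the snippet below is a minimal PyTorch sketch (not the authors' released code) of a SinGAN-style generator pyramid: each scale is a small fully convolutional network that adds a residual on top of the upsampled output of the coarser scale, and a random sample is drawn coarse-to-fine by injecting a fresh noise map at every scale. The names ScaleGenerator and sample, the layer widths, and the depth are illustrative assumptions rather than the paper's exact configuration.

# Minimal sketch of a SinGAN-style generator pyramid (illustrative, not the
# official implementation); layer widths and depths are placeholder choices.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ConvBlock(nn.Sequential):
    """Conv -> BatchNorm -> LeakyReLU, the basic unit of each scale."""
    def __init__(self, in_ch, out_ch):
        super().__init__(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.LeakyReLU(0.2, inplace=True),
        )


class ScaleGenerator(nn.Module):
    """Fully convolutional generator for one scale of the pyramid."""
    def __init__(self, channels=32):
        super().__init__()
        self.head = ConvBlock(3, channels)
        self.body = nn.Sequential(*[ConvBlock(channels, channels) for _ in range(3)])
        self.tail = nn.Sequential(nn.Conv2d(channels, 3, kernel_size=3, padding=1), nn.Tanh())

    def forward(self, noise, prev_up):
        # Residual formulation: refine the upsampled output of the coarser scale.
        return self.tail(self.body(self.head(noise + prev_up))) + prev_up


def sample(generators, sizes, device="cpu"):
    """Draw one random sample, coarsest scale to finest. `generators` is
    ordered coarse-to-fine and `sizes` holds the (H, W) of each scale; since
    every generator is fully convolutional, choosing wider or taller noise
    maps yields samples of a different size or aspect ratio."""
    x = torch.zeros(1, 3, *sizes[0], device=device)
    for G, (h, w) in zip(generators, sizes):
        x = F.interpolate(x, size=(h, w), mode="bilinear", align_corners=False)
        x = G(torch.randn(1, 3, h, w, device=device), x)
    return x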

 

Abstract

 

We introduce SinGAN, an unconditional generative model that can be learned from a single natural image. Our model is trained to capture the internal distribution of patches within the image, and is then able to generate high quality, diverse samples that carry the same visual content as the image. SinGAN contains a pyramid of fully convolutional GANs, each responsible for learning the patch distribution at a different scale of the image. This allows generating new samples of arbitrary size and aspect ratio, that have significant variability, yet maintain both the global structure and the fine textures of the training image. In contrast to previous single image GAN schemes, our approach is not limited to texture images, and is not conditional (i.e. it generates samples from noise). User studies confirm that the generated samples are commonly confused to be real images. We illustrate the utility of SinGAN in a wide range of image manipulation tasks.
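
The pyramid is trained sequentially, coarsest scale first, each scale against its own patch (Markovian) discriminator. The snippet below is a simplified, hypothetical sketch of one training step at a single scale, combining an adversarial term with a reconstruction term that ties a fixed noise map to the training image; the paper's actual objective is WGAN-GP, whose gradient penalty is omitted here, and the function name train_step and the weight alpha are illustrative.

# Simplified sketch of one training step at a single scale (illustrative only).
# G and D are small fully convolutional networks; D outputs a map of per-patch
# scores rather than a single scalar.
import torch
import torch.nn.functional as F


def train_step(G, D, opt_g, opt_d, real, prev_up, z_rec, alpha=10.0):
    """real    : the training image downsampled to this scale
    prev_up : upsampled output of the coarser scales (zeros at the coarsest)
    z_rec   : a fixed noise map used only for the reconstruction loss"""
    # Discriminator: raise patch scores on the real image, lower them on fakes.
    fake = G(torch.randn_like(real), prev_up).detach()
    d_loss = D(fake).mean() - D(real).mean()  # WGAN critic loss (gradient penalty omitted)
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator: fool the patch discriminator and reconstruct the training image.
    fake = G(torch.randn_like(real), prev_up)
    rec = G(z_rec, prev_up)
    g_loss = -D(fake).mean() + alpha * F.mse_loss(rec, real)
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()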

 

Talk

SinGAN for image manipulation

 

[Figure: Paint to image, Editing, Harmonization, Super-resolution, and Animation examples; each application shows the Training Image, the Input, and the Output]

SinGAN can be used in various image manipulation tasks, including transforming a painting (clipart) into a realistic photo, rearranging and editing objects in an image, realistically blending a pasted object into an image (harmonization), image super-resolution, and creating a short animation from a single input image. In all these cases, our model observes only the training image (first row) and is trained in the same manner for all applications, with no architectural changes or further tuning.
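
All of these applications reuse the same trained pyramid at test time: instead of starting from noise at the coarsest scale, the guide image (a clipart paint, an edited photo, or a naively pasted composite) is downsampled and injected at some intermediate scale, and only the finer generators refine it. The sketch below illustrates this idea under the conventions of the earlier snippets; the function name inject and the noise amplitude are assumptions, not the released code. Super-resolution follows the same logic by repeatedly upsampling the image and injecting it at the finest scale.

# Illustrative sketch of injection-based manipulation with a trained pyramid
# (paint-to-image, editing, harmonization). Not the official implementation.
import torch
import torch.nn.functional as F


def inject(generators, sizes, guide, start_scale, noise_amp=0.1):
    """Refine `guide` (e.g. a clipart paint or an edited photo) using only the
    generators from `start_scale` up to the finest scale. Injecting near the
    coarsest scale gives the model more freedom to repaint the guide, while a
    finer injection scale preserves it more faithfully."""
    x = guide
    for G, (h, w) in zip(generators[start_scale:], sizes[start_scale:]):
        x = F.interpolate(x, size=(h, w), mode="bilinear", align_corners=False)
        x = G(noise_amp * torch.randn_like(x), x)
    return x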

 

Downloads

[ Paper ]

[ Supplementary Material ]

[ Single Image Animation Video ]

[ Github ]

 

BibTex

@inproceedings{rottshaham2019singan,
  title={SinGAN: Learning a Generative Model from a Single Natural Image},
  author={Rott Shaham, Tamar and Dekel, Tali and Michaeli, Tomer},
  booktitle={Computer Vision (ICCV), IEEE International Conference on},
  year={2019}
}