Paper Review - StyleGANv1
StyleGAN
Paper: A Style-Based Generator Architecture for Generative Adversarial Networks
Note: You need to first understand ProGAN before understanding StyleGAN.
StyleGAN is based on ProGAN, but it redesigns the generator (the discriminator and training scheme are unchanged).
Key features:
- offers control over the style of generated images at different levels of detail.
- Based on ProGAN's progressive growing to produce high-resolution images
- Capable of generating very high-resolution images, up to 1024×1024
- Control the generated images via style mixing
- Style-based Generator
- The style-based generator consists of a Mapping network and a Synthesis network
- An intermediate latent space is introduced between the mapping network and the synthesis network
- Affine transforms produce styles that control the layers of the synthesis network
- Adaptive instance normalization (AdaIN) lets the injected style control the image locally at each layer
- manipulates the per-channel mean and variance to control the style of an image effectively
- Noise injection to introduce stochastic details / stochastic variation
- Style injection into different conv layers controls the style at different scales (levels of detail)
- Affine transform of latent space W + Adaptive instance normalization (AdaIN)
Our generator architecture makes it possible to control the image synthesis via scale-specific modifications to the styles. We can view the mapping network and affine transformations as a way to draw samples for each style from a learned distribution, and the synthesis network as a way to generate a novel image based on a collection of styles. The effects of each style are localized in the network, i.e., modifying a specific subset of the styles can be expected to affect only certain aspects of the image.
Style-based generator
The style-based generator is very different from a traditional generator.
- Traditionally the latent code is provided to the generator through an input layer, i.e., the first layer of a feedforward network (a).
- Note the generator (a) is the generator of ProGAN.
- The style-based generator omits the input layer and starts from a learned constant instead. The latent code is mapped to an intermediate latent space W, which controls the generator at each convolution layer through AdaIN (adaptive instance normalization).
Where:
- A is a learned affine transform
- It specializes w to styles y = (y_s, y_b)
- B applies learned per-channel scaling factors to the noise input
- Noise is used to control fine details (low-level features); it does not affect high-level features
- generates stochastic (i.e., random) detail by introducing explicit noise inputs
- AdaIN is the Adaptive instance normalization
- scales and shifts the normalized feature maps using styles derived from the intermediate latent code, controlling the generator at each convolution layer
- The Mapping network is an 8-layer MLP
- Why a mapping network is needed is discussed below ("Why do we need a mapping network?")
- The Synthesis network consists of 18 layers (2 layers for each resolution, from 4×4 up to 1024×1024)
- The output of the last layer is converted to RGB using a separate 1×1 convolution, as in ProGAN
For StyleGAN and StyleGAN2, the number of layers in the synthesis network is determined by the output image size; at the maximum resolution of 1024×1024, the synthesis network has 18 layers.
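As a quick sanity check of that layer count, here is a small illustrative helper (not code from the paper): resolutions double from 4×4 up to the output size, and each resolution contributes two conv layers.

```python
import math

def num_synthesis_layers(output_resolution: int) -> int:
    # Resolutions go 4, 8, ..., output_resolution; each contributes 2 conv layers.
    num_resolutions = int(math.log2(output_resolution)) - 1  # 4 -> 1, 8 -> 2, ..., 1024 -> 9
    return 2 * num_resolutions

print(num_synthesis_layers(1024))  # 18
print(num_synthesis_layers(256))   # 14
```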
Why do we need a mapping network?
StackOverflow: How does Mapping Network in StyleGAN work?
Note that z and w have the same dimensions.
- However, W is more disentangled than Z.
Finding a w in the intermediate latent space W for a given image (GAN inversion) enables specific image editing.
- The intermediate latent space does not have to support sampling according to any fixed distribution
- The intermediate latent space more faithfully reflects the distribution of the training data compared to the standard Gaussian latent space.
- This mapping can be adapted to "unwarp" W so that the factors of variation become more linear.
- We expect the training to yield a less entangled W in an unsupervised setting, i.e., when the factors of variation are not known in advance
- The disentangled properties allow one to perform extensive image manipulations by leveraging a pretrained StyleGAN.
- It should be easier to generate realistic images based on a disentangled representation than based on an entangled representation.
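A minimal PyTorch sketch of the mapping network as an 8-layer MLP follows. The 512-dimensional width follows the paper; the latent normalization and activation details are simplifications, and all names are illustrative.

```python
import torch
import torch.nn as nn

class MappingNetwork(nn.Module):
    """Maps a latent code z to the intermediate latent code w (sketch)."""
    def __init__(self, latent_dim: int = 512, num_layers: int = 8):
        super().__init__()
        layers = []
        for _ in range(num_layers):
            layers += [nn.Linear(latent_dim, latent_dim), nn.LeakyReLU(0.2)]
        self.net = nn.Sequential(*layers)

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # Normalize z, as is common in StyleGAN implementations.
        z = z / torch.sqrt(torch.mean(z ** 2, dim=1, keepdim=True) + 1e-8)
        return self.net(z)

w = MappingNetwork()(torch.randn(4, 512))  # -> shape (4, 512)
```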
Z Space
The generative model in the GAN architecture learns to map values z (sampled from a normal or uniform distribution) to generated images.
These values are called latent codes or latent representations (denoted by z).
- The Z latent space applies to all unconditional GAN models.
- However, constraining Z to a normal distribution limits its representation capacity and the disentanglement of semantic attributes.
- Limited representation capacity due to the normal distribution
W and W+ Space
Recent GAN inversion methods mostly adopt the latent spaces used in StyleGANs. These latent spaces have higher degrees of freedom and are thus significantly more expressive than the Z space.
- StyleGAN converts the native latent code z to the mapped style vector w with a nonlinear mapping network implemented as an 8-layer MLP.
- Due to the mapping network and affine transformations, the W space contains more disentangled features than the Z space.
- The W+ space used by many inversion methods goes further: instead of one shared w, it uses a separate w for each layer of the synthesis network.
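To make the difference concrete, a small illustrative snippet of the tensor shapes involved (assuming a 1024×1024 generator with 18 layers; shapes are for illustration only):

```python
import torch

latent_dim, n_layers = 512, 18

z = torch.randn(1, latent_dim)                  # Z space: sampled from N(0, I)
w = torch.randn(1, latent_dim)                  # W space: one vector shared by all layers
w_plus = torch.randn(1, n_layers, latent_dim)   # W+ space: a separate w for every layer
```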
Adaptive instance normalization
- Adaptive instance normalization is used so that the injected w code can control the style locally at each layer.
- each style controls only one convolution before being overridden by the next AdaIN operation
- The idea is to normalize each channel to zero mean and unit variance,
- each feature map is normalized separately,
- and then apply scales and biases derived from the style to the normalized feature maps to achieve style transfer.
- Thus the dimensionality of y is twice the number of feature maps on that layer.
$$\mathrm{AdaIN}(x_i, y) = y_{s,i}\,\frac{x_i - \mu(x_i)}{\sigma(x_i)} + y_{b,i}$$
Where:
- $x_i$ is a feature map
- $y = (y_s, y_b)$ is the style
- $y_{s,i}$ is the scale
- $y_{b,i}$ is the bias
By applying these per-channel scales and biases to each feature map, the style of the generated image is changed.
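A minimal sketch of AdaIN in PyTorch: the split of the style y into a per-channel scale and bias follows the paper, while the tensor layout and variable names are assumptions.

```python
import torch

def adain(x: torch.Tensor, y_scale: torch.Tensor, y_bias: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Adaptive instance normalization (sketch).

    x:       feature maps, shape (N, C, H, W)
    y_scale: per-channel style scales, shape (N, C)
    y_bias:  per-channel style biases, shape (N, C)
    """
    mu = x.mean(dim=(2, 3), keepdim=True)           # per-sample, per-channel mean
    sigma = x.std(dim=(2, 3), keepdim=True) + eps   # per-sample, per-channel std
    x_norm = (x - mu) / sigma                       # each feature map normalized separately
    return y_scale[:, :, None, None] * x_norm + y_bias[:, :, None, None]

# The style y = (y_scale, y_bias) comes from the learned affine transform A applied to w,
# so its dimensionality is 2 * C for a layer with C feature maps.
```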
Why inject the W code in different convolution layers?
- Convolutions at different resolutions represent styles at different scales
- Coarse (4×4 – 8×8): high-level aspects such as pose and general hairstyle
- Middle (16×16 – 32×32): smaller-scale features such as finer hairstyle and eyes open/closed
- Fine (64×64 – 1024×1024): mainly the color scheme and microstructure
Importance of Noise input | Stochastic Variation
Noise is used to control fine details (low-level features) and does not affect high-level features such as pose and identity.
- generates stochastic (i.e., random) detail, such as the exact placement of hairs and freckles, by introducing explicit noise inputs
- This stochastic detail noticeably increases the perceived quality of the image
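A sketch of the noise input with the learned per-channel scaling B; the module and parameter names are illustrative, not taken from the official implementation.

```python
import torch
import torch.nn as nn

class NoiseInjection(nn.Module):
    """Adds single-channel Gaussian noise, scaled per channel by a learned factor B (sketch)."""
    def __init__(self, channels: int):
        super().__init__()
        self.weight = nn.Parameter(torch.zeros(channels))  # learned per-channel scaling B

    def forward(self, x: torch.Tensor, noise: torch.Tensor = None) -> torch.Tensor:
        n, _, h, w = x.shape
        if noise is None:
            noise = torch.randn(n, 1, h, w, device=x.device)  # fresh noise per image
        return x + self.weight[None, :, None, None] * noise
```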
Mixing Regularization for Style Mixing
If we only use a single z (and thus a single w) for all layers, the synthesis network may learn to assume that adjacent styles are correlated.
Therefore, during training we can pass different z vectors through the mapping network to obtain different w vectors, and then mix those w vectors across layers.
- A given percentage of images are generated using two random latent codes instead of one during training
- Run two latent codes z1, z2 through the mapping network, and have the corresponding w1, w2 control the styles before and after a randomly chosen crossover point
- This regularization technique prevents the network from assuming that adjacent styles are correlated.
def get_w(self, batch_size: int, style_mix=True):
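Filling in the snippet above, a minimal sketch of such a `get_w` helper with mixing regularization. The 0.9 mixing probability follows the setting reported in the paper for its strongest configuration, but the attribute names (`self.mapping`, `self.n_layers`, `self.latent_dim`) are assumptions.

```python
import torch

def get_w(self, batch_size: int, style_mix: bool = True, mix_prob: float = 0.9):
    """Sample per-layer w vectors, optionally applying mixing regularization (sketch)."""
    if style_mix and torch.rand(()).item() < mix_prob:
        # Pick a random crossover layer: earlier layers use w1, later layers use w2.
        crossover = int(torch.randint(1, self.n_layers, (1,)).item())
        z1 = torch.randn(batch_size, self.latent_dim)
        z2 = torch.randn(batch_size, self.latent_dim)
        w1 = self.mapping(z1)[None].expand(crossover, -1, -1)
        w2 = self.mapping(z2)[None].expand(self.n_layers - crossover, -1, -1)
        return torch.cat([w1, w2], dim=0)              # shape (n_layers, batch, latent_dim)
    # No mixing: a single w is broadcast to every layer.
    z = torch.randn(batch_size, self.latent_dim)
    return self.mapping(z)[None].expand(self.n_layers, -1, -1)
```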
Perceptual Path Length (PPL)
PPL is used to measure how smooth the interpolation of latent vectors is.
- The idea is that the path between interpolated images should be short in some perceptual space.
- In StyleGAN (v1), PPL is an evaluation metric; StyleGAN2 later turns the same idea into a path length regularization that encourages a fixed-size step in w to result in a fixed-magnitude change in the image.
$$l_{\mathcal{W}} = \mathbb{E}\left[\frac{1}{\epsilon^2}\, d\Big(g\big(\mathrm{lerp}(f(z_1), f(z_2);\, t)\big),\; g\big(\mathrm{lerp}(f(z_1), f(z_2);\, t+\epsilon)\big)\Big)\right]$$
Where
- $d(\cdot,\cdot)$ is the perceptual distance between the two generated images (a weighted L2 distance between VGG16 embeddings, see below)
- $g$ is the synthesis network, $f$ is the mapping network, $z_1, z_2 \sim P(z)$, $t \sim U(0, 1)$, and $\epsilon$ is a small subdivision step (e.g., $10^{-4}$)
- In $\mathcal{Z}$ space, spherical interpolation (slerp) is used instead of lerp
As a basis for our metric, we use a perceptually-based pairwise image distance that is calculated as a weighted difference between two VGG16 embeddings, where the weights are fit so that the metric agrees with human perceptual similarity judgments. If we subdivide a latent space interpolation path into linear segments, we can define the total perceptual length of this segmented path as the sum of perceptual differences over each segment, as reported by the image distance metric.
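A sketch of the W-space PPL estimate, assuming `mapping` is the mapping network, `generator` is the synthesis network, and `perceptual_dist` returns a per-sample perceptual distance (the paper uses a VGG16-based metric fit to human judgments).

```python
import torch

def lerp(a, b, t):
    # Linear interpolation, used for paths in W space (slerp is used in Z space).
    return a + (b - a) * t

@torch.no_grad()
def ppl_w(generator, mapping, perceptual_dist, batch_size=16, latent_dim=512, eps=1e-4):
    """Estimate perceptual path length in W space over one batch (sketch)."""
    z1, z2 = torch.randn(batch_size, latent_dim), torch.randn(batch_size, latent_dim)
    w1, w2 = mapping(z1), mapping(z2)
    t = torch.rand(batch_size, 1)                  # t ~ U(0, 1)
    img_a = generator(lerp(w1, w2, t))             # endpoints of a short path segment
    img_b = generator(lerp(w1, w2, t + eps))
    return (perceptual_dist(img_a, img_b) / eps ** 2).mean().item()
```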
Truncation Trick
Generally, the Truncation Trick is used to improve the average quality of generated images at the cost of some variation (StyleGAN reports FID without truncation).
- When the latent is far away from the mean,
- the quality of the image is usually less stable, but there is high variation.
- When the latent is close to the mean,
- the quality of the image is usually stable, but variation is limited.
it is known that drawing latent vectors from a truncated or otherwise shrunk sampling space tends to improve average image quality, although some amount of variation is lost.
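A minimal sketch of the truncation trick in W space; `w_avg` is assumed to be a running average of mapping-network outputs, and ψ = 0.7 is a typical choice.

```python
import torch

def truncate_w(w: torch.Tensor, w_avg: torch.Tensor, psi: float = 0.7) -> torch.Tensor:
    """Pull w toward the center of mass of W: w' = w_avg + psi * (w - w_avg)."""
    return w_avg + psi * (w - w_avg)

# psi = 1.0 disables truncation; smaller psi gives more stable but less varied images.
```

In the paper, truncation is applied only to the coarse (low-resolution) styles when generating the showcased figures.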
Training Details
- Adam Optimizer
- WGAN-GP loss (the paper uses WGAN-GP for CelebA-HQ, and the non-saturating loss with R1 regularization for the FFHQ configurations)
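For reference, a sketch of the WGAN-GP critic loss with gradient penalty; λ = 10 is the commonly used weight, and the function and variable names are illustrative.

```python
import torch

def wgan_gp_loss_d(discriminator, real, fake, gp_weight: float = 10.0):
    """WGAN-GP discriminator (critic) loss with gradient penalty (sketch)."""
    # Critic objective: maximize D(real) - D(fake), i.e. minimize the negation.
    loss = discriminator(fake).mean() - discriminator(real).mean()

    # Gradient penalty on random interpolations between real and fake samples.
    alpha = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    mixed = (alpha * real + (1 - alpha) * fake).requires_grad_(True)
    scores = discriminator(mixed)
    grads, = torch.autograd.grad(outputs=scores.sum(), inputs=mixed, create_graph=True)
    gp = ((grads.flatten(1).norm(2, dim=1) - 1) ** 2).mean()

    return loss + gp_weight * gp
```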