Title |
Improving the Performance of WGAN Using Stabilization of Lipschitz Continuity of the Discriminator |
DOI |
https://doi.org/10.5573/ieie.2020.57.2.73 |
Keywords |
Deep learning; Generative models; Wasserstein GAN; Lipschitz continuity; Training stability |
Abstract |
Generative Adversarial Networks (GANs) have made breakthroughs in the field of generative models but suffer from training instability. The recently proposed Wasserstein GAN (WGAN) is an attractive alternative to the standard GAN owing to its improved training stability. However, there are still cases where it generates poor samples or fails to converge. It is widely accepted that this problem depends on how precisely the discriminator can be constrained to be Lipschitz continuous. Representative approaches either forcibly clip the weights of the discriminator or add a regularization term to the loss function that penalizes the norm of the discriminator's gradient, but both can exhibit undesired behavior. In this paper, we propose techniques that restrict the gradient norm of the discriminator to be less than or equal to one, so that its Lipschitz continuity is maintained stably regardless of the choice of training dataset, and we evaluate the performance of the proposed techniques through various experiments. |
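For orientation, the sketch below (a minimal PyTorch illustration, not the paper's code) shows the two standard Lipschitz-enforcement mechanisms the abstract refers to: hard weight clipping from the original WGAN and the gradient penalty of WGAN-GP. The one-sided variant, which penalizes only gradient norms exceeding one, is one plausible reading of "restricting the norm to be less than or equal to one"; the paper's exact formulation may differ.

```python
import torch

def clip_weights(discriminator, c=0.01):
    """Original WGAN: clamp every discriminator weight into [-c, c]
    after each optimizer step (hard Lipschitz enforcement)."""
    for p in discriminator.parameters():
        p.data.clamp_(-c, c)

def gradient_penalty(discriminator, real, fake, one_sided=False, lam=10.0):
    """Penalize the discriminator's gradient norm on samples interpolated
    between real and fake batches (assumes image-like 4D tensors)."""
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    x_hat = (eps * real + (1.0 - eps) * fake).requires_grad_(True)
    d_out = discriminator(x_hat)
    grads, = torch.autograd.grad(
        outputs=d_out.sum(), inputs=x_hat, create_graph=True)
    norms = grads.view(grads.size(0), -1).norm(2, dim=1)
    if one_sided:
        # Penalize only norms above 1, i.e. encourage ||grad|| <= 1.
        penalty = torch.clamp(norms - 1.0, min=0.0).pow(2).mean()
    else:
        # Two-sided WGAN-GP penalty: push the norm toward exactly 1.
        penalty = (norms - 1.0).pow(2).mean()
    return lam * penalty
```

In a training loop, `clip_weights` would be called after each discriminator update, whereas `gradient_penalty` would be added to the discriminator loss; the clipping constant `c` and penalty weight `lam` shown here are the conventional defaults from the WGAN and WGAN-GP papers, not values taken from this article.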