Exploring the Effectiveness of Hyperparameters in Deep Convolution Generative Adversarial Networks

Authors

  • Zhiqiao Kang

DOI:

https://doi.org/10.62051/h3sxs218

Keywords:

Generative adversarial network; convolutional neural network; image generation.

Abstract

Traditional machine learning models can typically only classify or predict from existing data and cannot generate new simulated data, whereas the Generative Adversarial Network (GAN) makes it possible for machines to generate high-quality and diverse data. GANs can be used for image generation, image-to-image transformation, mutual translation between image and text information, and more, and different GAN variants realize different functions. GANs have strong generative ability and can produce high-quality simulated data for human reference. The GAN framework is also highly innovative and has been widely developed for applications in unsupervised and semi-supervised learning. In addition, GANs can be integrated with other machine learning models to form more powerful models that solve problems difficult for traditional models to handle. GAN is therefore a very promising model in the field of artificial intelligence. In this paper, the author reproduces the basic Deep Convolution Generative Adversarial Network (DCGAN) and analyzes the quality of the images it generates. Using the control variable method, parameters such as the learning rate, batch size, and latent size are modified to explore their influence on the training of DCGAN, and the effects of these basic parameters on DCGAN's image generation results are summarized.
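The abstract describes modifying the learning rate, batch size, and latent size of a DCGAN with the control variable method. The following is a minimal, hypothetical PyTorch sketch showing where these three hyperparameters enter a DCGAN training step; the network architecture, the 64x64 image size, and the default values are illustrative assumptions, not the author's exact configuration.

import torch
import torch.nn as nn

# Hyperparameters studied in the paper (values here are illustrative defaults).
latent_size = 100      # dimension of the input noise vector
batch_size = 128       # number of images per training step
learning_rate = 2e-4   # Adam learning rate for both networks

# Generator: maps a latent vector to a 64x64 RGB image via transposed convolutions.
generator = nn.Sequential(
    nn.ConvTranspose2d(latent_size, 512, 4, 1, 0, bias=False),
    nn.BatchNorm2d(512), nn.ReLU(True),
    nn.ConvTranspose2d(512, 256, 4, 2, 1, bias=False),
    nn.BatchNorm2d(256), nn.ReLU(True),
    nn.ConvTranspose2d(256, 128, 4, 2, 1, bias=False),
    nn.BatchNorm2d(128), nn.ReLU(True),
    nn.ConvTranspose2d(128, 64, 4, 2, 1, bias=False),
    nn.BatchNorm2d(64), nn.ReLU(True),
    nn.ConvTranspose2d(64, 3, 4, 2, 1, bias=False),
    nn.Tanh(),
)

# Discriminator: maps a 64x64 RGB image to a single real/fake probability.
discriminator = nn.Sequential(
    nn.Conv2d(3, 64, 4, 2, 1, bias=False), nn.LeakyReLU(0.2, inplace=True),
    nn.Conv2d(64, 128, 4, 2, 1, bias=False),
    nn.BatchNorm2d(128), nn.LeakyReLU(0.2, inplace=True),
    nn.Conv2d(128, 256, 4, 2, 1, bias=False),
    nn.BatchNorm2d(256), nn.LeakyReLU(0.2, inplace=True),
    nn.Conv2d(256, 512, 4, 2, 1, bias=False),
    nn.BatchNorm2d(512), nn.LeakyReLU(0.2, inplace=True),
    nn.Conv2d(512, 1, 4, 1, 0, bias=False),
    nn.Sigmoid(),
)

criterion = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=learning_rate, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(discriminator.parameters(), lr=learning_rate, betas=(0.5, 0.999))

# One illustrative training step on random data; a real run iterates over the dataset.
real_images = torch.randn(batch_size, 3, 64, 64)
noise = torch.randn(batch_size, latent_size, 1, 1)
fake_images = generator(noise)

# Discriminator step: push scores on real images toward 1 and on fakes toward 0.
d_loss = (criterion(discriminator(real_images).view(-1), torch.ones(batch_size)) +
          criterion(discriminator(fake_images.detach()).view(-1), torch.zeros(batch_size)))
opt_d.zero_grad()
d_loss.backward()
opt_d.step()

# Generator step: push the discriminator's scores on fakes toward 1.
g_loss = criterion(discriminator(fake_images).view(-1), torch.ones(batch_size))
opt_g.zero_grad()
g_loss.backward()
opt_g.step()

The Adam momentum setting betas=(0.5, 0.999) follows the original DCGAN recommendation; in a control-variable experiment only one of latent_size, batch_size, or learning_rate would be changed at a time while the other settings are held fixed.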

Published

12-08-2024

How to Cite

Kang, Z. (2024) “Exploring the Effectiveness of Hyperparameters in Deep Convolution Generative Adversarial Networks”, Transactions on Computer Science and Intelligent Systems Research, 5, pp. 178–188. doi:10.62051/h3sxs218.