Exploiting Feature Space Manipulation for Image Generation in Deep Convolutional Generative Adversarial Networks
DOI: https://doi.org/10.62051/v2j19752
Keywords: Image generation; image manipulation; deep convolutional generative adversarial networks.
Abstract
Image generation and manipulation are important research directions in computer vision and graphics. Traditional approaches often require substantial manual intervention or are constrained by the quality and diversity of the training data, limiting both the realism of and the control over the generated images. This paper examines image generation and manipulation using Deep Convolutional Generative Adversarial Networks (DCGAN) combined with feature weighting operations in the feature space. The trained model captures the basic structure of a face, and the color and texture of the generated images are realistic, indicating that the model has learned fundamental facial features. Through the feature weighting operations, specific features are successfully transferred from one image to another. This research contributes insights to the field of computer vision and graphics and paves the way for future advancements in image generation and manipulation. Ethical considerations and more efficient training methods are discussed as important directions for future work.
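To make the approach concrete, the sketch below shows a minimal DCGAN-style generator and a feature-transfer operation implemented as weighted latent-vector arithmetic, in the spirit of Radford et al. (2015). It is only an illustrative assumption of how such a feature weighting operation might look, not the paper's actual architecture or code; the latent dimensionality, layer widths, and the transfer_feature helper are all hypothetical.

# Illustrative sketch only: a minimal DCGAN-style generator and a latent-vector
# "feature weighting" operation in PyTorch. Layer sizes, names, and the weighting
# scheme are assumptions for demonstration, not the paper's actual implementation.
import torch
import torch.nn as nn

LATENT_DIM = 100  # assumed latent dimensionality


class Generator(nn.Module):
    """Maps a latent vector z to a 64x64 RGB image via transposed convolutions."""

    def __init__(self, latent_dim=LATENT_DIM, feature_maps=64):
        super().__init__()
        self.net = nn.Sequential(
            # latent_dim x 1 x 1 -> (feature_maps*8) x 4 x 4
            nn.ConvTranspose2d(latent_dim, feature_maps * 8, 4, 1, 0, bias=False),
            nn.BatchNorm2d(feature_maps * 8),
            nn.ReLU(True),
            # -> (feature_maps*4) x 8 x 8
            nn.ConvTranspose2d(feature_maps * 8, feature_maps * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feature_maps * 4),
            nn.ReLU(True),
            # -> (feature_maps*2) x 16 x 16
            nn.ConvTranspose2d(feature_maps * 4, feature_maps * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feature_maps * 2),
            nn.ReLU(True),
            # -> feature_maps x 32 x 32
            nn.ConvTranspose2d(feature_maps * 2, feature_maps, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feature_maps),
            nn.ReLU(True),
            # -> 3 x 64 x 64, pixel values in [-1, 1]
            nn.ConvTranspose2d(feature_maps, 3, 4, 2, 1, bias=False),
            nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z)


def transfer_feature(z_source, z_with_attr, z_without_attr, weight=1.0):
    """Add a weighted attribute direction (e.g. 'smiling') to a source latent code.

    The direction is estimated as the difference between latent codes of images
    that do and do not show the attribute; `weight` controls how strongly the
    feature is applied. This mirrors the latent arithmetic popularised by
    Radford et al. (2015) and is an assumed form of the feature weighting
    operation described above.
    """
    attribute_direction = z_with_attr - z_without_attr
    return z_source + weight * attribute_direction


if __name__ == "__main__":
    gen = Generator()
    # Random latent codes stand in for codes associated with real images.
    z_src = torch.randn(1, LATENT_DIM, 1, 1)
    z_attr = torch.randn(1, LATENT_DIM, 1, 1)
    z_no_attr = torch.randn(1, LATENT_DIM, 1, 1)
    z_edited = transfer_feature(z_src, z_attr, z_no_attr, weight=0.8)
    with torch.no_grad():
        image = gen(z_edited)
    print(image.shape)  # torch.Size([1, 3, 64, 64])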
License
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.