Exploiting Deep Convolutional Generative Adversarial Network Generated Images for Enhanced Image Classification

Authors

  • Jin Xing

DOI:

https://doi.org/10.62051/vq4pyb84

Keywords:

Deep learning; Generative adversarial network; Image augmentation.

Abstract

The power of deep neural networks relies heavily on the quantity and quality of training data. However, collecting and annotating data at large scale is expensive and time-consuming. Traditional augmentation methods, such as applying transformations to copies of existing data, are not always effective, especially in biomedical fields where large anonymized datasets are generally not publicly available. This paper therefore tackles the problem by generating additional training data with a Deep Convolutional Generative Adversarial Network (DCGAN). The DCGAN architecture combines convolutional layers with the traditional generative adversarial framework and produces clearer images than the vanilla Generative Adversarial Network (GAN). The training data come from the CIFAR-10 dataset, which consists of 10 classes of natural-object images. To measure whether the generated images are useful, three classifiers, LeNet, AlexNet, and InceptionNet, are trained on the original dataset and on the original dataset mixed with generated data, and their accuracies are compared. Accuracy improves as more DCGAN-generated data are added to the original data, which shows that DCGAN can effectively augment training data.
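The paper itself does not include code, but the pipeline described above can be sketched as follows. This is a minimal PyTorch illustration, not the authors' implementation: the generator layout, latent dimension, batch size, and the way labels are assigned to generated images are all assumptions made here for illustration. It shows a DCGAN-style generator producing CIFAR-10-sized 3x32x32 images and how its outputs could be concatenated with the real training set before the classifiers are trained.

import torch
import torch.nn as nn
from torch.utils.data import TensorDataset, ConcatDataset

class Generator(nn.Module):
    # DCGAN-style generator: latent vector z -> 3x32x32 image (CIFAR-10 size).
    def __init__(self, z_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, 256, 4, 1, 0, bias=False),  # 1x1 -> 4x4
            nn.BatchNorm2d(256), nn.ReLU(True),
            nn.ConvTranspose2d(256, 128, 4, 2, 1, bias=False),    # 4x4 -> 8x8
            nn.BatchNorm2d(128), nn.ReLU(True),
            nn.ConvTranspose2d(128, 64, 4, 2, 1, bias=False),     # 8x8 -> 16x16
            nn.BatchNorm2d(64), nn.ReLU(True),
            nn.ConvTranspose2d(64, 3, 4, 2, 1, bias=False),       # 16x16 -> 32x32
            nn.Tanh(),                                            # pixel values in [-1, 1]
        )

    def forward(self, z):
        return self.net(z)

# After adversarial training, sample synthetic images from the generator.
generator = Generator()
z = torch.randn(64, 100, 1, 1)
fake_images = generator(z).detach()              # 64 synthetic 3x32x32 images
fake_labels = torch.zeros(64, dtype=torch.long)  # placeholder labels; a per-class or
                                                 # conditional setup would assign real class ids

# The augmented training set simply concatenates real and generated samples;
# real_dataset would be torchvision.datasets.CIFAR10(...) with a ToTensor transform.
fake_dataset = TensorDataset(fake_images, fake_labels)
# augmented_dataset = ConcatDataset([real_dataset, fake_dataset])

Each classifier (LeNet, AlexNet, InceptionNet) would then be trained once on the original CIFAR-10 set and once on the augmented set, and test accuracy compared, mirroring the evaluation described in the abstract.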


References

Hemkens, Lars G., et al. The reporting of studies using routinely collected health data was often insufficient. Journal of Clinical Epidemiology, 2016, 79: 104-111.

Yang, Xiangli, Zixing Song, et al. A survey on deep semi-supervised learning. IEEE Transactions on Knowledge and Data Engineering, 2022, 35(9): 8934-8954.

Goodfellow, Ian, Jean Pouget-Abadie, Mehdi Mirza, et al. Generative adversarial nets. Advances in Neural Information Processing Systems, 2014, 27: 1-9.

Radford, Alec, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint, 2015: arXiv:1511.06434.

Zhang, Yifan, Daquan Zhou, et al. Expanding small-scale datasets with guided imagination. Advances in Neural Information Processing Systems, 2024, 36: 1-61.

Jiang, Yifan, Shiyu Chang, and Zhangyang Wang. TransGAN: Two pure transformers can make one strong GAN, and that can scale up. Advances in Neural Information Processing Systems, 2021, 34: 14745-14758.

Uppal, Hardik, Alireza Sepas-Moghaddam, et al. Teacher-student adversarial depth hallucination to improve face recognition. Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021: 3671-3680.

LeCun, Yann, et al. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 1998, 86(11): 2278-2324.

Krizhevsky, Alex, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems, 2012, 25: 1-9.

Szegedy, Christian, et al. Going deeper with convolutions. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015: 1-9.

Published

12-08-2024

How to Cite

Xing, J. (2024) “Exploiting Deep Convolutional Generative Adversarial Network Generated Images for Enhanced Image Classification”, Transactions on Computer Science and Intelligent Systems Research, 5, pp. 476–481. doi:10.62051/vq4pyb84.