Background Augmentation Generative Adversarial Networks (BAGANs): Effective Data Generation Based on GAN-Augmented 3D Synthesizing

Abstract
Augmented Reality (AR) is crucial for immersive Human-Computer Interaction (HCI) and for the vision of Artificial Intelligence (AI). Labeled data drives object recognition in AR; however, manual annotation is expensive and labor-intensive, and often yields asymmetric data distributions. This scarcity of labeled data limits the application of AR. To address insufficient and asymmetric training data in AR object recognition, this paper proposes an automated vision data synthesis method, background augmentation generative adversarial networks (BAGANs), which combines 3D modeling with the Generative Adversarial Network (GAN) algorithm. The approach is validated on image recognition tasks over the natural image database ObjectNet3D, where it outperforms other methods. This study can shorten AR algorithm development time and expand the scope of AR applications, which is of great significance for immersive interactive systems.
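
The abstract describes synthesizing labeled training images by pairing renderings of 3D object models with GAN-augmented backgrounds. The sketch below illustrates that idea in PyTorch under loose assumptions: a DCGAN-style generator (here called BackgroundGenerator) produces a background image, and a rendered object with its silhouette mask is alpha-composited onto it via compose_sample. The class and function names, network sizes, and compositing step are illustrative assumptions, not the authors' published implementation.

# Minimal sketch of the BAGAN idea from the abstract: a GAN generates backgrounds,
# and renderings of a 3D object are composited onto them to create labeled training
# data. Names and sizes here are illustrative assumptions only.
import torch
import torch.nn as nn


class BackgroundGenerator(nn.Module):
    """DCGAN-style generator mapping a latent vector to a 32x32 RGB background."""

    def __init__(self, latent_dim: int = 100):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(latent_dim, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(True),
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(True),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),
            nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh(),  # pixel values in [-1, 1]
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # Reshape the latent code to (batch, latent_dim, 1, 1) before upsampling.
        return self.net(z.view(z.size(0), -1, 1, 1))


def compose_sample(background: torch.Tensor,
                   rendering: torch.Tensor,
                   mask: torch.Tensor) -> torch.Tensor:
    """Alpha-composite a rendered 3D object (foreground) over a generated background.

    background, rendering: (3, H, W) tensors in [-1, 1];
    mask: (1, H, W) tensor in [0, 1] marking the object's silhouette.
    """
    return mask * rendering + (1.0 - mask) * background


if __name__ == "__main__":
    gen = BackgroundGenerator()
    z = torch.randn(1, 100)          # latent code
    background = gen(z)[0]           # (3, 32, 32) generated background

    # Placeholder for an object rendered from a 3D model plus its silhouette mask;
    # in practice these would come from a renderer applied to CAD models such as
    # those underlying ObjectNet3D.
    rendering = torch.zeros(3, 32, 32)
    mask = torch.zeros(1, 32, 32)
    mask[:, 8:24, 8:24] = 1.0        # toy square "object"

    synthetic = compose_sample(background, rendering, mask)
    print(synthetic.shape)           # torch.Size([3, 32, 32])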
Reference Key: ma2018backgroundsymmetry
Authors: Ma, Yan; Liu, Kang; Guan, Zhibin; Xu, Xinkai; Qian, Xu; Bao, Hong
Journal: Symmetry
Year: 2018
