Published in

SAGE Publications, Textile Research Journal, 2024

DOI: 10.1177/00405175241233942

Fabric defect image generation method based on the dual-stage W-net generative adversarial network

This paper was not found in any repository, but could be made available legally by the author.

Full text: Unavailable

Preprint: archiving allowed
Postprint: archiving allowed
Published version: archiving forbidden
Data provided by SHERPA/RoMEO

Abstract

Owing to the intricate and diverse nature of textile defects, detecting them is an exceptionally challenging task. Compared with conventional defect detection methods, deep learning-based methods generally achieve higher precision. However, deep learning-based detection requires a substantial volume of training data, which is particularly difficult to accumulate for textile flaws. To augment the fabric defect dataset and improve fabric defect detection accuracy, we propose a fabric defect image generation method based on the Pix2Pix generative adversarial network. This approach introduces a novel dual-stage W-net generative adversarial network: by increasing the network depth, the model can extract intricate textile image features more effectively and expand its information-sharing capacity. The dual-stage W-net generative adversarial network can generate the desired defects on defect-free textile images. We assess the quality of the generated fabric defect images: peak signal-to-noise ratio and structural similarity values exceed 30 and 0.930, respectively, and the learned perceptual image patch similarity value is no greater than 0.085, demonstrating the effectiveness of the fabric defect data augmentation. The effectiveness of the dual-stage W-net generative adversarial network is further established through multiple comparative experiments on the generated images. Comparing detection performance before and after data augmentation, mean average precision improves by 6.13% and 14.57% on the YOLOv5 and Faster R-CNN (region-based convolutional neural network) detection models, respectively.
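The abstract evaluates generated images with three standard full-reference metrics: peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and learned perceptual image patch similarity (LPIPS). The sketch below shows one common way such an evaluation could be set up; it is not the authors' code. It assumes paired real/generated 8-bit RGB images as NumPy arrays, scikit-image (>= 0.19, for the channel_axis argument) for PSNR/SSIM, and the lpips package with an AlexNet backbone; all file-free dummy data and function names are illustrative only.

    # Hedged sketch: image-quality evaluation with PSNR, SSIM, and LPIPS.
    # Thresholds mirror those reported in the abstract (PSNR > 30, SSIM > 0.930,
    # LPIPS <= 0.085); the code itself is an assumption, not the paper's pipeline.
    import numpy as np
    import torch
    import lpips
    from skimage.metrics import peak_signal_noise_ratio, structural_similarity

    def to_lpips_tensor(img: np.ndarray) -> torch.Tensor:
        """Convert an HxWx3 uint8 image to the (1, 3, H, W) float tensor in [-1, 1] expected by LPIPS."""
        return torch.from_numpy(img).permute(2, 0, 1).unsqueeze(0).float() / 127.5 - 1.0

    def evaluate_pair(real: np.ndarray, generated: np.ndarray, lpips_net) -> dict:
        """Compute PSNR, SSIM, and LPIPS between a real defect image and a generated one."""
        psnr = peak_signal_noise_ratio(real, generated, data_range=255)
        ssim = structural_similarity(real, generated, channel_axis=2, data_range=255)
        with torch.no_grad():
            dist = lpips_net(to_lpips_tensor(real), to_lpips_tensor(generated)).item()
        return {"psnr": psnr, "ssim": ssim, "lpips": dist}

    if __name__ == "__main__":
        net = lpips.LPIPS(net="alex")  # AlexNet backbone is the package's common default
        # Dummy arrays stand in for a paired real/generated fabric-defect image.
        rng = np.random.default_rng(0)
        real = rng.integers(0, 256, size=(256, 256, 3), dtype=np.uint8)
        fake = np.clip(real.astype(int) + rng.integers(-5, 6, real.shape), 0, 255).astype(np.uint8)
        scores = evaluate_pair(real, fake, net)
        print(scores)  # e.g. check psnr > 30, ssim > 0.930, lpips <= 0.085

In practice the metrics would be averaged over the whole set of generated defect images before comparing against the thresholds quoted in the abstract.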