Fabric defect detection is an essential step of quality control in the textile manufacturing industry. Traditional fabric inspection is usually performed by manual visual methods, which are inefficient and imprecise for long-term industrial applications. In this paper, we propose an unsupervised learning-based automated approach that detects and localizes fabric defects without any manual intervention. The approach reconstructs image patches with a convolutional denoising autoencoder network at multiple Gaussian pyramid levels and synthesizes detection results from the corresponding resolution channels. The reconstruction residual of each image patch is used as the indicator for direct pixel-wise prediction. By segmenting and synthesizing the reconstruction residual map at each resolution level, the final inspection result is generated. This newly developed method has several prominent advantages for fabric defect detection. First, it can be trained with only a small number of defect-free samples. This is especially important for situations in which collecting large numbers of defective samples is difficult or impractical. Second, owing to the multi-modal integration strategy, it is more robust and accurate than general inspection methods (the results at each resolution level can be viewed as a modality). Third, according to our results, it can handle multiple types of textile fabric, from simple to complex. Experimental results demonstrate that the proposed model is robust and yields good overall performance with high precision and acceptable recall rates.
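For illustration, the sketch below outlines one way the multi-resolution reconstruction-residual scheme described in this abstract could be realized; it is not the authors' code. The PyTorch framework, the toy `ConvDenoisingAE` architecture, the `detect_defects` helper, the fixed residual threshold, and the average-pooling stand-in for Gaussian pyramid downsampling are all assumptions made for the example.

```python
# Sketch only: assumes PyTorch, a toy architecture, and pre-trained per-level models.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvDenoisingAE(nn.Module):
    """Small convolutional denoising autoencoder for grayscale fabric patches."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def detect_defects(image, models, num_levels=3, thresh=0.1):
    """Reconstruct the image at each pyramid level and fuse the residual masks.

    image:  (1, 1, H, W) tensor, H and W divisible by 4
    models: one trained ConvDenoisingAE per pyramid level
    """
    masks = []
    level_img = image
    for level in range(num_levels):
        with torch.no_grad():
            recon = models[level](level_img)
        residual = (level_img - recon).abs()       # pixel-wise reconstruction residual
        mask = (residual > thresh).float()         # segment the residual map
        # bring each level's mask back to the original resolution for fusion
        masks.append(F.interpolate(mask, size=image.shape[-2:], mode="nearest"))
        # coarser pyramid level; avg-pooling stands in for Gaussian blur + downsample
        level_img = F.avg_pool2d(level_img, kernel_size=2)
    # fuse resolution channels: a pixel is defective if flagged at any level
    return torch.clamp(torch.stack(masks).sum(dim=0), max=1.0)
```

In practice, each level's autoencoder would be trained only on noise-corrupted defect-free patches, and the segmentation threshold would be estimated from the residual statistics of those defect-free samples rather than fixed by hand.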
The exploitation of multi-view synthetic aperture radar (SAR) images can effectively improve the performance of target recognition. However, due to the various extended operating conditions (EOCs) in practical applications, some of the collected views may not be discriminative enough for target recognition. Therefore, each input view should be examined before being passed through to multi-view recognition. This paper proposes a novel structure for multi-view SAR target recognition. The multi-view images are first classified by sparse representation-based classification (SRC). Based on the output residuals, a reliability level is calculated to evaluate the effectiveness of each view for multi-view recognition. Meanwhile, the support samples selected by SRC for each view collaborate to construct an enhanced local dictionary. Then, the selected views are classified by joint sparse representation (JSR) based on the enhanced local dictionary for target recognition. The proposed method can eliminate invalid views while enhancing the representation capability of JSR. Therefore, both the individual discriminability of each valid view and the inner correlation among the selected views can be exploited for robust target recognition. Experiments are conducted on the moving and stationary target acquisition and recognition (MSTAR) dataset to demonstrate the validity of the proposed method.
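The sketch below illustrates the first stage described in this abstract, SRC-based classification of a single view together with a reliability score derived from the class residuals; it is not the paper's implementation. NumPy and scikit-learn's `OrthogonalMatchingPursuit` are used as a stand-in sparse solver, and the `reliability` formula (gap between the two smallest class residuals) is an assumed, simplified form rather than the paper's exact definition.

```python
# Sketch only: OMP as a generic sparse solver; dictionary columns are training samples.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def src_classify(dictionary, labels, view, n_nonzero=10):
    """Sparse-representation classification of a single SAR view.

    dictionary: (d, n) matrix whose columns are training feature vectors
    labels:     (n,) class label of each dictionary column
    view:       (d,) feature vector of the query view
    Returns the per-class residuals and the indices of the selected support samples.
    """
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero, fit_intercept=False)
    omp.fit(dictionary, view)
    coef = omp.coef_
    residuals = {}
    for c in np.unique(labels):
        coef_c = np.where(labels == c, coef, 0.0)     # keep only class-c coefficients
        residuals[c] = np.linalg.norm(view - dictionary @ coef_c)
    support = np.flatnonzero(coef)                    # support samples chosen by SRC
    return residuals, support

def reliability(residuals):
    """Reliability level of a view: how clearly the best class stands out."""
    r = np.sort(np.array(list(residuals.values())))
    return (r[1] - r[0]) / (r[1] + 1e-12)             # larger = more discriminative view
```

Following the pipeline in the abstract, views whose reliability falls below a chosen threshold would be discarded, and the support samples of the retained views would be pooled into the enhanced local dictionary on which JSR is then performed.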