Small-Sample Seabed Sediment Classification Based on Deep Learning

  1. Zhao, Yuxin 2,3
  2. Zhu, Kexin 2,3
  3. Zhao, Ting 1
  4. Zheng, Liangfeng 2,3
  5. Deng, Xiong 2,3
  6. Rodríguez-Gonzálvez, Pablo
  7. González Aguilera, Diego 4
  1. College of Underwater Acoustic Engineering, Harbin Engineering University, Harbin 150001, China
  2. College of Intelligent Systems Science and Engineering, Harbin Engineering University, Harbin 150001, China
  3. Engineering Research Center of Navigation Instruments, Ministry of Education, Harbin 150001, China
  4. Universidad de Salamanca, Salamanca, Spain (ROR: https://ror.org/02f40zc51)

Journal:
Remote Sensing

ISSN: 2072-4292

Year of publication: 2023

Volume: 15

Issue: 8

Pages: 2178

Type: Article

DOI: 10.3390/rs15082178 (open access)

Abstract

Seabed sediment classification is of great significance in acoustic remote sensing. Accurate classification normally requires a large amount of data to train the classifier, yet acquiring seabed sediment information is expensive and time-consuming, so designing a well-performing classifier from small-sample seabed sediment data is crucial. To mitigate the data shortage, a self-attention generative adversarial network (SAGAN) was trained for data augmentation in this study. SAGAN consists of a generator, which produces data similar to the real images, and a discriminator, which distinguishes real images from generated ones. Furthermore, a new seabed sediment classifier based on a self-attention densely connected convolutional network (SADenseNet) is proposed to improve classification accuracy; it was trained on the augmented images. The self-attention mechanism scans the whole image to capture global features of the sediment image and highlights key regions, improving both the efficiency and the accuracy of visual information processing. The proposed SADenseNet trained with the augmented dataset performed best, with classification accuracies of 92.31%, 95.72%, 97.85%, and 95.28% for rock, sand, mud, and overall, respectively, and a kappa coefficient of 0.934. Across the twelve classifiers, training with the augmented dataset improved classification accuracy by 2.25%, 5.12%, 0.97%, and 2.64% for rock, sand, mud, and overall, respectively, and raised the kappa coefficient by 0.041 compared to the original dataset. SAGAN thus enriches the features of the data, giving the trained classification networks better generalization. Compared with state-of-the-art classifiers, the proposed SADenseNet delivers better classification performance.
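
The self-attention operation summarized in the abstract can be illustrated with a minimal NumPy sketch of a SAGAN-style attention layer: query/key/value projections over a flattened feature map, a softmax attention map across all spatial positions, and a learnable residual weight. All names, shapes, and the `gamma` default here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, Wf, Wg, Wh, gamma=0.5):
    """SAGAN-style self-attention over a (C, H, W) feature map.

    Wf, Wg : (C', C) query/key projections; Wh : (C, C) value projection.
    gamma  : learnable residual weight (0 => identity, as in early training).
    """
    C, H, W = x.shape
    flat = x.reshape(C, H * W)             # N = H*W spatial positions
    f = Wf @ flat                          # queries, shape (C', N)
    g = Wg @ flat                          # keys,    shape (C', N)
    h = Wh @ flat                          # values,  shape (C, N)
    attn = softmax(f.T @ g, axis=-1)       # (N, N); each row sums to 1
    o = h @ attn.T                         # every position attends to all others
    return (gamma * o + flat).reshape(C, H, W)
```

Because every output position is a weighted sum over all positions, the layer captures the global context the abstract refers to, while `gamma` lets the network fall back to purely local convolutional features when attention is not helpful.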

Funding information

Funders

  • Major Project of Chinese National Programs for Fundamental Research and Development
    • 613317
