
Advanced Vision and Learning Lab (AVLL)

Texas A&M University College of Engineering

Publications

Cross-Domain Knowledge Transfer for Underwater Acoustic Classification Using Pre-trained Models

June 2025

A. Mohammadi, T. Kelhe, D. Carreiro, A. V. Dine, and J. Peeples, “Cross-Domain Knowledge Transfer for Underwater Acoustic Classification Using Pre-trained Models,” in OCEANS 2025 Brest, BREST, France, 2025, pp. 1-6, 2025, doi: 10.1109/OCEANS58557.2025.11104545.

Transfer learning is commonly employed to leverage large, pre-trained models and perform fine-tuning for downstream tasks. The most prevalent pre-trained models are initially trained on ImageNet. However, their ability to generalize can vary across data modalities. This study compares pre-trained Audio Neural Networks (PANNs) and ImageNet pre-trained models within the context of underwater acoustic target recognition (UATR). It was observed that the ImageNet pre-trained models slightly outperform pre-trained audio models in passive sonar classification. We also analyzed the impact of audio sampling rates on model pre-training and fine-tuning. This study contributes to transfer learning applications in UATR, illustrating the potential of pre-trained models to address limitations caused by scarce, labeled data in the UATR domain. https://ieeexplore.ieee.org/document/11104545

Lacunarity Pooling Layers for Plant Image Texture Analysis

  • Abstract: Pooling layers (e.g., max and average) may overlook important information encoded in the spatial arrangement of pixel intensity and/or feature values. We propose a novel lacunarity pooling layer that aims to capture the spatial heterogeneity of the feature maps by evaluating the variability within local windows. The layer operates at multiple scales, allowing the network to adaptively learn hierarchical features. The lacunarity pooling layer can be seamlessly integrated into any artificial neural network architecture. Experimental results demonstrate the layer’s effectiveness in capturing intricate spatial patterns, leading to improved feature extraction capabilities. The proposed approach holds promise in various domains, especially agricultural image analysis tasks. This work contributes to the evolving landscape of artificial neural network architectures by introducing a novel pooling layer that enriches the representation of spatial features.
  • Link: https://openaccess.thecvf.com/content/CVPR2024W/Vision4Ag/html/Mohan_Lacunarity_Pooling_Layers_for_Plant_Image_Classification_using_Texture_Analysis_CVPRW_2024_paper.html
  • Publication date: June 2024
  • Citation: A. Mohan and J. Peeples, “Lacunarity Pooling Layers for Plant Image Texture Analysis,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2024, pp. 5384-5392.
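The statistic at the heart of the layer is easy to illustrate. Below is a minimal NumPy sketch of fixed-window lacunarity pooling (variance divided by squared mean, plus one, per local window); the published layer is multi-scale and learned end-to-end inside a network, so the function name, window size, and exact lacunarity variant here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def lacunarity_pool(feature_map, window=2, eps=1e-6):
    """Pool a 2-D feature map by local lacunarity: var / mean^2 + 1 per window.

    Simplified sketch; the published layer operates at multiple scales and is
    trained inside a CNN.
    """
    h, w = feature_map.shape
    out = np.empty((h // window, w // window))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = feature_map[i * window:(i + 1) * window,
                                j * window:(j + 1) * window]
            mu = patch.mean()
            out[i, j] = patch.var() / (mu ** 2 + eps) + 1.0
    return out

fm = np.array([[1., 1., 4., 0.],
               [1., 1., 0., 4.],
               [2., 2., 2., 2.],
               [2., 2., 2., 2.]])
pooled = lacunarity_pool(fm)
```

A uniform window scores about 1 while a heterogeneous one scores higher (the checkerboard patch above scores 2), which is exactly the spatial-heterogeneity signal that max and average pooling discard.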

Texture Analysis of Lung Cell Surface Morphology After Nanoparticle Exposure

Abstract: Nanoparticle exposure induces significant morphological changes in cellular surfaces, necessitating robust methods for quantitative analysis. Scanning Electron Microscopy (SEM) provides high-resolution imaging of these surfaces, enabling the study of cellular responses to nanoparticle exposure. To objectively quantify these changes, we propose a novel texture analysis framework that leverages fractal dimension and lacunarity analysis, combined with a Quantized Co-Occurrence (QCO) operator and Earth Mover’s Distance (EMD) to capture both local and global textural features directly from grayscale SEM images. The QCO operator enables the discretization of textural features into quantized levels, facilitating the generation of feature distributions that retain spatial information while summarizing surface variability. Using EMD, we assess the differences in these distributions between classes, providing a robust measure to quantify the structural and morphological differences between untreated cells and those exposed to various nanoparticles. This combined framework enables us to visualize and rank nanoparticle-induced changes in cellular morphology, revealing key insights into the differential effects of sensitizers such as nickel oxide (NiO) and irritants such as crystalline silica (CS). Our results demonstrate the effectiveness of the proposed framework in highlighting distributional differences, with rankings validated against expert knowledge using statistical measures such as Cohen’s kappa. This approach not only advances the objective quantification of cellular texture changes, but also establishes a scalable method for analyzing complex morphological features in biomedical imaging.

Link: https://www.spiedigitallibrary.org/conference-proceedings-of-spie/13413/134130X/Texture-analysis-of-lung-cell-surface-morphology-after-nanoparticle-exposure/10.1117/12.3047087.full

Publication date: February 2025

Citation: A. Mohan, T. Jefferis, C. Sayes, and J. Peeples, “Texture Analysis of Lung Cell Surface Morphology After Nanoparticle Exposure,” in Medical Imaging 2025: Digital and Computational Pathology, vol. 13413, pp. 221-230. SPIE, 2025.
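The distributional comparison the framework relies on can be sketched compactly. The snippet below is a simplified stand-in: it quantizes grayscale intensities into discrete levels and compares the resulting distributions with a 1-D Earth Mover's Distance computed from CDF differences. The paper's QCO operator additionally encodes co-occurrence (spatial) structure, which this sketch omits; the function names and toy images are illustrative assumptions.

```python
import numpy as np

def quantized_histogram(image, levels=8):
    """Quantize grayscale intensities in [0, 1) into `levels` bins and return
    the normalized occurrence distribution (a simplified stand-in for the
    paper's Quantized Co-Occurrence operator, without spatial co-occurrence)."""
    bins = np.floor(image * levels).clip(0, levels - 1).astype(int)
    hist = np.bincount(bins.ravel(), minlength=levels).astype(float)
    return hist / hist.sum()

def emd_1d(p, q):
    """Earth Mover's Distance between two 1-D distributions on the same
    equally spaced bins: the L1 distance between their CDFs."""
    return np.abs(np.cumsum(p) - np.cumsum(q)).sum()

smooth = np.full((8, 8), 0.5)                     # untreated-like: uniform surface
rough = np.random.default_rng(0).random((8, 8))   # exposed-like: varied texture
d = emd_1d(quantized_histogram(smooth), quantized_histogram(rough))
```

Because EMD compares whole distributions rather than single summary statistics, pairwise distances like `d` can be used to rank how far each exposure condition moves the texture away from the untreated baseline.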

Spatial Transformer Network YOLO Model for Agricultural Object Detection

Abstract: Object detection plays a crucial role in the field of computer vision by autonomously locating and identifying objects of interest. The You Only Look Once (YOLO) model is an effective single-shot detector. However, YOLO faces challenges in cluttered or partially occluded scenes and can struggle with small, low-contrast objects. We propose a new method that integrates spatial transformer networks (STNs) into YOLO to improve performance. The proposed STN-YOLO aims to enhance the model’s effectiveness by focusing on important areas of the image and improving the spatial invariance of the model before the detection process. Our proposed method improved object detection performance both qualitatively and quantitatively. We explore the impact of different localization networks within the STN module as well as the robustness of the model across different spatial transformations. We apply STN-YOLO to benchmark datasets for agricultural object detection as well as a new dataset from a state-of-the-art plant phenotyping greenhouse facility. Our code and dataset are publicly available: https://github.com/Advanced-Vision-and-Learning-Lab/STN-YOLO.

Link: https://arxiv.org/abs/2407.21652

Publication date: December 19, 2024

Citation: Y. Zambre, E. Rajkitkul, A. Mohan, and J. Peeples, “Spatial Transformer Network YOLO Model for Agricultural Object Detection,” in IEEE International Conference on Machine Learning and Applications (ICMLA), Miami, FL, 2024, pp. 115-121, doi: 10.1109/ICMLA61862.2024.00022.
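The differentiable warp at the core of a spatial transformer can be illustrated with a fixed affine matrix. The NumPy sketch below applies a 2x3 affine transform to an image on a normalized [-1, 1] grid; nearest-neighbour sampling is used for brevity, whereas an actual STN uses bilinear sampling to stay differentiable, and the localization network that predicts the matrix in STN-YOLO is omitted here.

```python
import numpy as np

def affine_grid_sample(image, theta):
    """Warp `image` with a 2x3 affine matrix `theta` on a normalized [-1, 1]
    grid -- the core grid-generator/sampler operation of a spatial transformer.
    Nearest-neighbour sampling for brevity; STNs use bilinear sampling."""
    h, w = image.shape
    ys, xs = np.meshgrid(np.linspace(-1, 1, h), np.linspace(-1, 1, w),
                         indexing="ij")
    coords = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])  # homogeneous
    sx, sy = theta @ coords                    # source coordinates in [-1, 1]
    ix = np.round((sx + 1) / 2 * (w - 1)).astype(int).clip(0, w - 1)
    iy = np.round((sy + 1) / 2 * (h - 1)).astype(int).clip(0, h - 1)
    return image[iy, ix].reshape(h, w)

img = np.arange(16, dtype=float).reshape(4, 4)
identity = np.array([[1., 0., 0.],
                     [0., 1., 0.]])
out = affine_grid_sample(img, identity)   # identity theta returns the input
```

In an STN, a small localization network regresses the six entries of `theta` from the input itself, letting the model learn to crop, scale, and rotate toward the regions that matter before detection.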

Evaluation of Machine Learning Based QSAR Models for the Classification of Lung Surfactant Inhibitors

  • Abstract: Inhaled chemicals can cause dysfunction in the lung surfactant, a protein–lipid complex with critical biophysical and biochemical functions. This inhibition has many structure-related and dose-dependent mechanisms, making hazard identification challenging. We developed quantitative structure–activity relationships for predicting lung surfactant inhibition using machine learning. Logistic regression, support vector machines, random forests, gradient-boosted trees, prior-data-fitted networks, and multilayer perceptrons were evaluated. The multilayer perceptron had the strongest performance, with 96% accuracy and an F1 score of 0.97. Support vector machines and logistic regression also performed well at lower computational cost. This serves as a proof-of-concept for efficient hazard screening in the emerging area of lung surfactant inhibition.
  • Publication date: September 2024
  • Link: https://pubs.acs.org/doi/full/10.1021/envhealth.4c00118
  • Citation: J. Liu, J. Peeples, C. Sayes, “Evaluation of Machine Learning Based QSAR Models for the Classification of Lung Surfactant Inhibitors,” in Environment & Health, 2024. doi: 10.1021/envhealth.4c00118.
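The simplest of the compared baselines, logistic regression, fits in a few lines. The sketch below trains it by full-batch gradient descent on synthetic two-dimensional "descriptor" data; the real molecular descriptors, preprocessing, and evaluation protocol of the paper are not reproduced here, and all names are illustrative.

```python
import numpy as np

def fit_logistic(X, y, lr=0.5, steps=2000):
    """Minimal logistic-regression classifier trained by full-batch gradient
    descent -- a sketch of the simplest QSAR baseline compared in the paper."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted inhibitor probability
        grad = p - y                              # gradient of the log-loss
        w -= lr * X.T @ grad / len(y)
        b -= lr * grad.mean()
    return w, b

# toy, well-separated "descriptor" data: class 1 has larger feature values
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 0.3, (20, 2)), rng.normal(1, 0.3, (20, 2))])
y = np.concatenate([np.zeros(20), np.ones(20)])
w, b = fit_logistic(X, y)
preds = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(float)
accuracy = (preds == y).mean()
```

On separable data like this the linear model suffices; the paper's finding is that a multilayer perceptron wins on the real, messier descriptor space, while linear models remain attractive when compute is limited.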

Histogram Layer Time Delay Neural Networks for Passive Sonar Classification

July 12, 2023

J. Ritu, E. Barnes, R. Martell, A. Dine and J. Peeples, "Histogram Layer Time Delay Neural Network For Passive Sonar Classification," 2023 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA). doi: 10.1109/WASPAA58266.2023.10248102.

Underwater acoustic target detection in remote marine sensing operations is challenging due to complex sound wave propagation. Despite the availability of reliable sonar systems, target recognition remains a difficult problem. Various methods have been proposed to improve target recognition, but most struggle to disentangle the high-dimensional, non-linear patterns in the observed target recordings. In this work, a novel method combines a time delay neural network and histogram layer to incorporate statistical contexts for improved feature learning and underwater acoustic target classification. The proposed method outperforms the baseline model, demonstrating the utility of incorporating statistical contexts for passive sonar target recognition. The code for this work is publicly available: https://github.com/Peeples-Lab/HLTDNN.
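One common way to make a histogram differentiable, so that bin centers and widths can be learned by backpropagation, is soft binning with an RBF kernel. The NumPy sketch below shows that idea on a feature vector; the paper's histogram layer sits on top of a time delay neural network, and its exact binning function and parameterization may differ from this illustration.

```python
import numpy as np

def soft_histogram(x, centers, width):
    """Differentiable histogram: each sample contributes to every bin through
    an RBF kernel, so `centers` and `width` could be learned by backprop.
    Simplified sketch of a histogram layer; the paper pairs it with a TDNN."""
    x = np.asarray(x, dtype=float).ravel()
    # shape (n_samples, n_bins): soft assignment of each sample to each bin
    weights = np.exp(-((x[:, None] - centers[None, :]) ** 2)
                     / (2 * width ** 2))
    return weights.mean(axis=0)  # average soft count per bin

features = np.array([0.1, 0.12, 0.5, 0.9])
centers = np.linspace(0.0, 1.0, 5)   # bin centers 0.0, 0.25, ..., 1.0
hist = soft_histogram(features, centers, width=0.1)
```

Unlike a hard histogram, every operation here is smooth in the inputs and the bin parameters, which is what lets such a layer summarize the statistical context of deep features while remaining trainable end-to-end.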

Quantitative Analysis of Primary Attribution Explainable Artificial Intelligence Methods for Remote Sensing Image Classification

April 4, 2023

A. Mohan and J. Peeples, “Quantitative Analysis of Primary Attribution Explainable Artificial Intelligence Methods for Remote Sensing Image Classification,” in IEEE International Geoscience and Remote Sensing Symposium (IGARSS), 2023, in Press. doi: 10.48550/arXiv.2306.04037.

We present a comprehensive analysis of quantitatively evaluating explainable artificial intelligence (XAI) techniques for remote sensing image classification. Our approach leverages state-of-the-art machine learning approaches to perform remote sensing image classification across multiple modalities. We investigate the results of the models qualitatively through XAI methods. Additionally, we compare the XAI methods quantitatively through various categories of desired properties. Through our analysis, we offer insights and recommendations for selecting the most appropriate XAI method(s) to gain a deeper understanding of the models’ decision-making processes. The code for this work is publicly available: https://github.com/Peeples-Lab/XAI_Analysis.

Spatial and Texture Analysis of Root System Distribution with Earth Mover’s Distance (STARSEED)

January 5, 2023

J. Peeples, W. Xu, R. Gloaguen, D. Rowland, A. Zare, and Z. Brym, “Spatial and Texture Analysis of Root System Distribution with Earth Mover’s Distance (STARSEED),” in Plant Methods 19, 2023. doi: 10.1186/s13007-022-00974-z.

Purpose: Root system architectures are complex and challenging to characterize effectively for agronomic and ecological discovery.
Methods: We propose a new method, Spatial and Texture Analysis of Root SystEm distribution with Earth mover’s Distance (STARSEED), for comparing root system distributions that incorporates spatial information through a novel application of the Earth Mover’s Distance (EMD).
Results: We illustrate that the approach captures the response of sesame root systems for different genotypes and soil moisture levels. STARSEED provides quantitative and visual insights into changes that occur in root architectures across experimental treatments.
Conclusion: STARSEED can be generalized to other plants and provides insight into root system architecture development and response to varying growth conditions not captured by existing root architecture metrics and models. The code and data for our experiments are publicly available: https://github.com/GatorSense/STARSEED.

Histogram Layers for Synthetic Aperture Sonar Imagery

September 2, 2022

J. Peeples, A. Zare, J. Dale, and J. Keller, "Histogram Layers for Synthetic Aperture Sonar Imagery," in IEEE International Conference on Machine Learning and Applications (ICMLA), 2022, doi: 10.1109/ICMLA55696.2022.00032.

Synthetic aperture sonar (SAS) imagery is crucial for several applications, including target recognition and environmental segmentation. Deep learning models have led to much success in SAS analysis; however, the features extracted by these approaches may not be suitable for capturing certain textural information. To address this problem, we present a novel application of histogram layers on SAS imagery. The addition of histogram layer(s) within the deep learning models improved performance by incorporating statistical texture information on both synthetic and real-world datasets.
