
Advanced Vision and Learning Lab (AVLL)

Texas A&M University College of Engineering

Publications

Evaluating GAN-LSTM for Smart Meter Anomaly Detection in Power Systems

  • F.O. Nia, S. Salehi, and J. Peeples, “Evaluating GAN-LSTM for Smart Meter Anomaly Detection in Power Systems,” in IEEE Texas Power and Energy Conference (TPEC), 2026, in Press, arXiv:2601.09701.
  • Abstract: Advanced metering infrastructure (AMI) provides high-resolution electricity consumption data that can enhance monitoring, diagnosis, and decision making in modern power distribution systems. Detecting anomalies in these time-series measurements is challenging due to nonlinear, nonstationary, and multi-scale temporal behavior across diverse building types and operating conditions. This work presents a systematic, power-system-oriented evaluation of a GAN-LSTM framework for smart meter anomaly detection using the Large-scale Energy Anomaly Detection (LEAD) dataset, which contains one year of hourly measurements from 406 buildings. The proposed pipeline applies consistent preprocessing, temporal windowing, and threshold selection across all methods, and compares the GAN-LSTM approach against six widely used baselines, including statistical, kernel-based, reconstruction-based, and GAN-based models. Experimental results demonstrate that the GAN-LSTM significantly improves detection performance, achieving an F1-score of 0.89. These findings highlight the potential of adversarial temporal modeling as a practical tool for supporting asset monitoring, non-technical loss detection, and situational awareness in real-world power distribution networks. The code for this work is publicly available.
  • Link: https://arxiv.org/abs/2601.09701
  • Publication date: February 2026
  • Citation: F.O. Nia, S. Salehi, and J. Peeples, “Evaluating GAN-LSTM for Smart Meter Anomaly Detection in Power Systems,” in IEEE Texas Power and Energy Conference (TPEC), 2026, in Press, arXiv:2601.09701.
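The windowed detection pipeline described in the abstract (temporal windowing, anomaly scoring, threshold selection) can be illustrated with a toy numpy example. The moving-average reconstructor, 24-hour window, and 95th-percentile threshold below are placeholder assumptions standing in for the paper's trained GAN-LSTM generator and its tuned threshold:

```python
import numpy as np

def make_windows(series, size):
    # Slice a 1-D consumption series into overlapping temporal windows.
    return np.stack([series[i:i + size] for i in range(len(series) - size + 1)])

def smooth_reconstruct(windows, k=5):
    # Stand-in for a trained generator: reconstruct each window with a
    # moving average, which cannot reproduce abrupt anomalous spikes.
    kernel = np.ones(k) / k
    return np.stack([np.convolve(w, kernel, mode="same") for w in windows])

def anomaly_scores(windows, reconstruct):
    # Score each window by its mean squared reconstruction error.
    return np.mean((windows - reconstruct(windows)) ** 2, axis=1)

rng = np.random.default_rng(0)
hours = 24 * 30                                   # one month of hourly readings
load = np.sin(np.linspace(0, 20 * np.pi, hours)) + 0.05 * rng.normal(size=hours)
load[400:410] += 3.0                              # injected consumption anomaly
scores = anomaly_scores(make_windows(load, 24), smooth_reconstruct)
threshold = np.percentile(scores, 95)             # flag the top 5% of windows
flagged = np.flatnonzero(scores > threshold)
```

Windows overlapping the injected spike receive the largest reconstruction errors and are the ones flagged.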

Morphological Change Detection for Scanning Electron Microscope Images of Lung Cell Surfaces

  • J. Ritu, T. Jefferis, C. Sayes, and J. Peeples, “Morphological Change Detection for Scanning Electron Microscope Images of Lung Cell Surfaces,” in International Conference on Advances in Artificial Intelligence and Machine Learning (AAIML), 2026, in Press.
  • Abstract: Exposure to nanoparticles can alter cellular morphology, which serves as an indicator of toxic response and can be visualized using scanning electron microscopy (SEM). Texture-based features have been widely used to quantify nanoscale surface complexity, but they primarily operate at the pixel level. To provide complementary biological context, this study introduces a morphology-driven SEM analysis framework that extracts interpretable features from both cells and their protrusions (cilia or dendrites). An adaptation of the Segment Anything Model (SAM) tailored for microscopy images, MicroSAM is used for semantic segmentation to quantify cell-level (e.g., area, circularity) and dendrite-level features (e.g., length, waviness). Statistical tests and effect size analysis identify significant differences across exposure groups, while correlation analysis reveals inter-feature relationships. Together, these results demonstrate interpretable patterns of structural changes across nanoparticle types, offering biologically grounded insights that enhance and extend texture-based analysis through morphology-aware characterization. This framework provides biologically grounded insights with direct relevance for automated toxicology assessment and monitoring of nanoparticle-induced cellular changes. The code for this work is publicly available: https://github.com/Advanced-Vision-and-Learning-Lab/SEM_Morphology.
  • Link: TBD
  • Publication date: March 2026
  • Citation: J. Ritu, T. Jefferis, C. Sayes, and J. Peeples, “Morphological Change Detection for Scanning Electron Microscope Images of Lung Cell Surfaces,” in International Conference on Advances in Artificial Intelligence and Machine Learning (AAIML), 2026, in Press.
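Two of the interpretable features named in the abstract, cell circularity and dendrite waviness, can be computed from segmentation outputs with elementary geometry. The masks and paths below are synthetic stand-ins (this is not the MicroSAM pipeline, and the perimeter estimate is a coarse digital approximation):

```python
import numpy as np

def circularity(mask):
    # 4*pi*A / P^2: near 1 for round cells, smaller for elongated shapes.
    # Perimeter is approximated by counting foreground pixels that have at
    # least one 4-connected background neighbor (coarse digital estimate).
    area = mask.sum()
    padded = np.pad(mask, 1)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    perimeter = (mask & ~interior).sum()
    return 4 * np.pi * area / perimeter ** 2

def waviness(points):
    # Arc length divided by end-to-end distance: 1.0 for a straight dendrite.
    segments = np.diff(points, axis=0)
    arc = np.linalg.norm(segments, axis=1).sum()
    chord = np.linalg.norm(points[-1] - points[0])
    return arc / chord

yy, xx = np.mgrid[:64, :64]
disk = (yy - 32) ** 2 + (xx - 32) ** 2 <= 20 ** 2     # round "cell"
rect = np.zeros((64, 64), dtype=bool)
rect[27:37, 12:52] = True                             # elongated "cell"

t = np.linspace(0.0, 1.0, 50)
line = np.stack([t, t], axis=1)                       # straight "dendrite"
s = np.linspace(0.0, 2 * np.pi, 200)
sine_path = np.stack([s, np.sin(s)], axis=1)          # wavy "dendrite"
```

As expected, the disk scores higher circularity than the rectangle, and the sinusoidal path scores higher waviness than the straight one.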

Quantitative Measures for Passive Sonar Texture Analysis

  • J. Ritu, A.V. Dine, and J. Peeples, “Quantitative Measures for Passive Sonar Texture Analysis,” in Synthetic Data for Artificial Intelligence and Machine Learning: Tools, Techniques, and Applications IV, International Society for Optics and Photonics (SPIE), 2026, in Press, arXiv:2504.14843.
  • Abstract: Passive sonar signals contain complex characteristics often arising from environmental noise, vessel machinery, and propagation effects. While convolutional neural networks (CNNs) perform well on passive sonar classification tasks, they can struggle with statistical variations that occur in the data. To investigate this limitation, synthetic underwater acoustic datasets centered on amplitude and period variations are generated. Two metrics are proposed to quantify and validate these characteristics in the context of statistical and structural texture for passive sonar. These measures are applied to real-world passive sonar datasets to assess texture information in the signals and to correlate it with model performance. Results show that CNNs underperform on statistically textured signals, but incorporating explicit statistical texture modeling yields consistent improvements. These findings highlight the importance of quantifying texture information for passive sonar classification.
  • Link: https://arxiv.org/abs/2504.14843
  • Publication date: April 2026
  • Citation: J. Ritu, A.V. Dine, and J. Peeples, “Quantitative Measures for Passive Sonar Texture Analysis,” in Synthetic Data for Artificial Intelligence and Machine Learning: Tools, Techniques, and Applications IV, International Society for Optics and Photonics (SPIE), 2026, in Press, arXiv:2504.14843.
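The idea of quantifying amplitude variation in a synthetic acoustic signal can be sketched with a simple statistic. The coefficient of variation of windowed RMS energy below is an illustrative proxy only, not one of the paper's two proposed measures, and the carrier/modulation frequencies are arbitrary choices:

```python
import numpy as np

def amplitude_variation(signal, win=256):
    # Coefficient of variation of windowed RMS energy: a crude proxy for how
    # much statistical amplitude texture a signal carries.
    n = len(signal) // win
    rms = np.sqrt((signal[:n * win].reshape(n, win) ** 2).mean(axis=1))
    return float(rms.std() / rms.mean())

t = np.linspace(0, 1, 8192)
steady = np.sin(2 * np.pi * 440 * t)                       # constant amplitude
modulated = (1 + 0.8 * np.sin(2 * np.pi * 3 * t)) * steady  # amplitude-varied
```

The amplitude-modulated signal scores far higher than the steady tone, mirroring the kind of statistical-texture contrast the synthetic datasets are built around.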

Neighborhood Feature Pooling for Remote Sensing Image Classification

  • F.O. Nia, A. Mohammadi, S.A. Kharsa, P. Naikare, Z. Hampel-Arias, and J. Peeples, “Neighborhood Feature Pooling for Remote Sensing Image Classification,” in Proceedings of the Winter Conference on Applications of Computer Vision (WACV) Workshops, 2026, in Press, arXiv:2510.25077.
  • Abstract: In this work, we introduce Neighborhood Feature Pooling (NFP), a novel pooling layer designed to enhance texture-aware representation learning for remote sensing image classification. The proposed NFP layer captures relationships between neighboring spatial features by aggregating local similarity patterns across feature dimensions. Implemented using standard convolutional operations, NFP can be seamlessly integrated into existing neural network architectures with minimal additional parameters. Extensive experiments across multiple benchmark datasets and backbone models demonstrate that NFP consistently improves classification performance compared to conventional pooling strategies, while maintaining computational efficiency. These results highlight the effectiveness of neighborhood-based feature aggregation for capturing discriminative texture information in remote sensing imagery.
  • Link: https://arxiv.org/abs/2510.25077
  • Publication date: March 2026
  • Citation: F.O. Nia, A. Mohammadi, S.A. Kharsa, P. Naikare, Z. Hampel-Arias, and J. Peeples, “Neighborhood Feature Pooling for Remote Sensing Image Classification,” in Proceedings of the Winter Conference on Applications of Computer Vision (WACV) Workshops, 2026, in Press, arXiv:2510.25077.
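The core idea, aggregating similarity between neighboring spatial features, can be sketched in a few lines of numpy. This is a forward-pass illustration only; the paper implements the layer with standard convolutional operations inside a network, and the wrap-around edge handling here is purely for brevity:

```python
import numpy as np

def neighborhood_similarity(fmap):
    # fmap: (H, W, C) feature map. For each spatial position, average the
    # cosine similarity with its 8 neighbors (edges wrap via np.roll,
    # solely to keep the sketch short).
    unit = fmap / (np.linalg.norm(fmap, axis=-1, keepdims=True) + 1e-8)
    sims = []
    for dy, dx in [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                   (0, 1), (1, -1), (1, 0), (1, 1)]:
        shifted = np.roll(np.roll(unit, dy, axis=0), dx, axis=1)
        sims.append((unit * shifted).sum(axis=-1))
    return np.mean(sims, axis=0)  # (H, W) local-similarity map

rng = np.random.default_rng(0)
smooth_map = np.ones((8, 8, 16))          # perfectly homogeneous features
noisy_map = rng.normal(size=(8, 8, 16))   # spatially uncorrelated features
```

Homogeneous regions pool to similarity near 1, while uncorrelated features pool to near 0, which is the texture signal the layer exposes to the classifier.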

Cover Damage Detection of Round Cotton Modules using Convolutional Neural Networks (CNN)

  • Md Z. Iqbal, R. Hardin, J. Peeples, and E. Barnes, “Cover Damage Detection of Round Cotton Modules using Convolutional Neural Networks (CNN),” in Computers and Electronics in Agriculture, Volume 239, Part B, 111023, 2025. doi: 10.1016/j.compag.2025.111023.
  • Abstract: Plastic contamination in cotton threatens the economic viability and global reputation of US cotton. In the US, most contaminants likely originate from damaged plastic covers on round cotton modules, as loose pieces of cover can be torn and entangled in cotton by handling equipment. This study aimed to develop a robust convolutional neural network (CNN)-based detection model to identify cover damage on modules during handling, enabling necessary interventions to mitigate contamination. To achieve this objective, several models, including two-stage, one-stage, and detection transformers, were trained using images of modules with damaged covers. Following evaluation, the most effective model (YOLOv8l) was further optimized through pruning and fine-tuning, resulting in the proposed YOLOv8-wd model. This model achieved a mean average precision (mAP) of 92% for detecting module cover damage, with an inference speed of 6.20 ms per image using a sparse-aware engine. The proposed model demonstrated comparable accuracy to YOLOv8l while being 62.71% lighter and 50.40% faster. Model testing was conducted using images collected by a system installed on a module truck and a loader used for module handling at a fully operational gin and field. The loader handled 1,801 modules, capturing 6,935 images, while the truck handled 2,094 modules, yielding 32,584 images. From these images, YOLOv8-wd identified cover damage in 4.72% of loader-handled and 3.92% of truck-handled modules, though actual rates may be higher. Furthermore, using the model, the system provided clear status indicators (cover-damaged or undamaged) and unique IDs for each module. The findings of this study could be used to reduce economic losses resulting from damaged covers of round cotton modules.
  • Link: https://www.sciencedirect.com/science/article/pii/S0168169925011299
  • Publication date: December 2025
  • Citation: Md Z. Iqbal, R. Hardin, J. Peeples, and E. Barnes, “Cover Damage Detection of Round Cotton Modules using Convolutional Neural Networks (CNN),” in Computers and Electronics in Agriculture, Volume 239, Part B, 111023, 2025. doi: 10.1016/j.compag.2025.111023.
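The pruning step mentioned in the abstract can be illustrated generically. The sketch below shows global unstructured magnitude pruning on toy weight arrays; the paper's actual pruning of YOLOv8l (and the 60% sparsity used here) may differ in method and level:

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.6):
    # Global unstructured magnitude pruning: zero the smallest-magnitude
    # fraction of weights across all layers, then (in practice) fine-tune
    # the remaining weights to recover accuracy.
    flat = np.concatenate([w.ravel() for w in weights])
    thresh = np.quantile(np.abs(flat), sparsity)
    return [np.where(np.abs(w) < thresh, 0.0, w) for w in weights]

rng = np.random.default_rng(0)
layers = [rng.normal(size=(64, 32)), rng.normal(size=(128, 64))]
pruned = magnitude_prune(layers, sparsity=0.6)
zero_frac = sum(int((w == 0).sum()) for w in pruned) / sum(w.size for w in layers)
```

A sparse-aware inference engine can then skip the zeroed weights, which is the source of the size and speed savings reported above.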

Histogram Layers for Neural Engineered Features

  • J. Peeples, S. A. Kharsa, L. Saleh, and A. Zare, “Histogram Layers for Neural Engineered Features,” in IEEE Transactions on Artificial Intelligence, July 2025. doi: 10.1109/TAI.2025.3593445.
  • Abstract: In the computer vision literature, many effective histogram-based features have been developed. These “engineered” features include local binary patterns and edge histogram descriptors, among others, and have been shown to be informative features for a variety of computer vision tasks. In this paper, we explore whether these features can be learned through histogram layers embedded in a neural network and, therefore, be leveraged within deep learning frameworks. By using histogram features, local statistics of the feature maps from the convolutional neural networks can be used to better represent the data. We present neural versions of local binary pattern and edge histogram descriptors that jointly improve feature representation for image classification. Experiments are presented on benchmark and real-world datasets. Our code is publicly available.
  • Link: https://ieeexplore.ieee.org/abstract/document/11099042
  • Publication date: July 2025
  • Citation: J. Peeples, S. A. Kharsa, L. Saleh, and A. Zare, “Histogram Layers for Neural Engineered Features,” in IEEE Transactions on Artificial Intelligence, July 2025. doi: 10.1109/TAI.2025.3593445.
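The key mechanism, a histogram whose binning is differentiable and hence learnable, can be sketched in numpy. This shows only the forward pass with RBF-style soft bin assignments; the bin centers and width here are fixed by hand, whereas the paper's histogram layers learn them end-to-end inside a network:

```python
import numpy as np

def soft_histogram(x, centers, width):
    # Each value casts a soft, RBF-weighted vote into every bin, so the bin
    # centers and width are differentiable (and hence learnable) parameters.
    votes = np.exp(-((x[:, None] - centers[None, :]) / width) ** 2)
    votes /= votes.sum(axis=1, keepdims=True)   # normalize votes per sample
    return votes.sum(axis=0) / len(x)

rng = np.random.default_rng(0)
x = rng.normal(0.2, 0.02, size=100)             # values clustered near 0.2
centers = np.linspace(0, 1, 5)                  # 5 bins over [0, 1]
hist = soft_histogram(x, centers, width=0.1)
```

The resulting histogram sums to one and peaks at the bin nearest the data, just as a hard histogram would, but gradients can flow back to `centers` and `width`.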

Benchmarking suite for synthetic aperture radar imagery anomaly detection (SARIAD) algorithms

  • L. Chauvin, S. Gupta, A. Ibarra, and J. Peeples, “Benchmarking suite for synthetic aperture radar imagery anomaly detection (SARIAD) algorithms,” in Algorithms for Synthetic Aperture Radar Imagery XXXII, International Society for Optics and Photonics (SPIE), 2025, doi: 10.1117/12.3052519.
  • Abstract: Anomaly detection is a key research challenge in computer vision and machine learning, with applications in many fields from quality control to radar imaging. In radar imaging, specifically synthetic aperture radar (SAR), anomaly detection can be used for the classification, detection, and segmentation of objects of interest. However, there is no standard framework for developing and benchmarking these methods on SAR imagery. To address this issue, we introduce SAR imagery anomaly detection (SARIAD). In conjunction with Anomalib, a deep learning library for anomaly detection, SARIAD provides a comprehensive suite of algorithms and datasets for assessing and developing anomaly detection approaches on SAR imagery. SARIAD integrates multiple SAR datasets along with tools to effectively apply various anomaly detection algorithms to SAR imagery, and several anomaly detection metrics and visualizations are available. Overall, SARIAD acts as a central package for benchmarking SAR models and datasets, enabling reproducible research in the field of anomaly detection in SAR imagery.
  • Link: https://www.spiedigitallibrary.org/conference-proceedings-of-spie/13456/134560C/Benchmarking-suite-for-synthetic-aperture-radar-imagery-anomaly-detection-SARIAD/10.1117/12.3052519.short
  • Publication date: April 2025

Patch distribution modeling framework learnable adaptive cosine estimator (PaDiM-LACE) for anomaly detection in synthetic aperture radar imagery

  • A. Ibarra and J. Peeples, “Patch distribution modeling framework learnable adaptive cosine estimator (PaDiM-LACE) for anomaly detection in synthetic aperture radar imagery,” in Algorithms for Synthetic Aperture Radar Imagery XXXII, International Society for Optics and Photonics (SPIE), 2025, doi: 10.1117/12.3052541.
  • Abstract: This work presents a new approach to anomaly detection and localization in synthetic aperture radar (SAR) imagery, expanding upon the existing patch distribution modeling framework (PaDiM). We introduce the adaptive cosine estimator (ACE) detection statistic. PaDiM uses the Mahalanobis distance at inference, an unbounded metric; ACE instead uses the cosine similarity metric, providing bounded anomaly detection scores. The proposed method is evaluated across multiple SAR datasets, with performance metrics including the area under the receiver operating characteristic curve (AUROC) at the image and pixel level, aiming for increased performance in anomaly detection and localization in SAR imagery.
  • Link: https://www.spiedigitallibrary.org/conference-proceedings-of-spie/13456/134560D/Patch-distribution-modeling-framework-adaptive-cosine-estimator-PaDiM-ACE-
  • Publication date: April 2025
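The bounded-versus-unbounded contrast at the heart of this work can be sketched directly. The ACE-style score below is a simplified, non-learnable stand-in (cosine similarity between whitened patch features and the whitened nominal mean), and the Gaussian patch features are synthetic:

```python
import numpy as np

def mahalanobis(x, mean, cov_inv):
    # PaDiM-style score: unbounded, grows without limit for far-away features.
    d = x - mean
    return float(np.sqrt(d @ cov_inv @ d))

def ace_score(x, mean, cov_inv):
    # Cosine similarity between the whitened feature and the whitened mean
    # direction; bounded in [-1, 1] by Cauchy-Schwarz. A simplified stand-in
    # for the learnable ACE statistic in the paper.
    num = mean @ cov_inv @ x
    den = np.sqrt((mean @ cov_inv @ mean) * (x @ cov_inv @ x))
    return float(num / den)

rng = np.random.default_rng(1)
feats = rng.normal(size=(500, 8)) + 2.0     # nominal patch features
mean = feats.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(feats.T))
far = mean + 100.0                          # extreme out-of-distribution patch
```

Even for the extreme test feature, the cosine-based score stays within [-1, 1], while the Mahalanobis distance grows without bound.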

Neural Edge Histogram Descriptors for Underwater Acoustic Target Recognition

  • A. Agashe, D. Carreiro, A. V. Dine, and J. Peeples, “Neural Edge Histogram Descriptors for Underwater Acoustic Target Recognition,” in OCEANS 2025 Brest, Brest, France, 2025, pp. 1-6, doi: 10.1109/OCEANS58557.2025.11104298.
  • Abstract: Numerous maritime applications rely on the ability to recognize acoustic targets using passive sonar. While there is a growing reliance on pre-trained models for classification tasks, these models often require extensive computational resources and may not perform optimally when transferred to new domains due to dataset variations. To address these challenges, this work adapts the neural edge histogram descriptors (NEHD) method, originally developed for image classification, to classify passive sonar signals. We conduct a comprehensive evaluation of statistical and structural texture features, demonstrating that their combination achieves competitive performance with large pre-trained models. The proposed NEHD-based approach offers a lightweight and efficient solution for underwater target recognition, significantly reducing computational costs while maintaining accuracy.
  • Link: https://arxiv.org/abs/2503.13763
  • Publication date: June 2025
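The classical edge histogram descriptor that NEHD neuralizes can be sketched on a 2-D array (for passive sonar, the input would be a spectrogram). This is the hand-crafted version only, with arbitrary bin count and toy stripe patterns, not the learnable layer from the paper:

```python
import numpy as np

def edge_histogram(img, bins=4):
    # Edge-histogram-style descriptor: bin gradient orientations (mod pi,
    # so opposite directions share a bin), weighted by gradient magnitude.
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx) % np.pi
    idx = np.minimum((ang / np.pi * bins).astype(int), bins - 1)
    hist = np.bincount(idx.ravel(), weights=mag.ravel(), minlength=bins)
    return hist / (hist.sum() + 1e-8)

stripes_v = np.tile(np.repeat([0.0, 1.0], 4), (16, 2))  # vertical stripes
stripes_h = stripes_v.T                                  # horizontal stripes
h_v = edge_histogram(stripes_v)
h_h = edge_histogram(stripes_h)
```

Vertical stripes put all edge energy into the horizontal-gradient bin and horizontal stripes into the vertical-gradient bin, which is the structural texture cue the descriptor captures.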

Investigation of Time-Frequency Feature Combinations with Histogram Layer Time Delay Neural Networks

  • A. Mohammadi, I. Masabarakiza, E. Barnes, D. Carreiro, A. V. Dine, and J. Peeples, “Investigation of Time-Frequency Feature Combinations with Histogram Layer Time Delay Neural Networks,” in OCEANS 2025 Brest, Brest, France, 2025, pp. 1-6, doi: 10.1109/OCEANS58557.2025.11104751.
  • Abstract: While deep learning has reduced the prevalence of manual feature extraction, transformation of data via feature engineering remains essential for improving model performance, particularly for underwater acoustic signals. The methods by which audio signals are converted into time-frequency representations, and the subsequent handling of these spectrograms, can significantly impact performance. This work demonstrates the performance impact of using different combinations of time-frequency features in a histogram layer time delay neural network. An optimal set of features is identified, with results indicating that specific feature combinations outperform single data features.
  • Link: https://ieeexplore.ieee.org/document/11104751
  • Publication date: June 2025
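The mechanics of combining time-frequency features into a multi-channel input can be sketched generically. The two channels below (a magnitude spectrogram and its frame-to-frame delta) and the FFT/hop sizes are illustrative assumptions, not the specific feature set studied in the paper:

```python
import numpy as np

def stft_mag(x, n_fft=64, hop=32):
    # Magnitude spectrogram via framing + Hann window + real FFT.
    win = np.hanning(n_fft)
    frames = np.stack([x[i:i + n_fft] * win
                       for i in range(0, len(x) - n_fft + 1, hop)])
    return np.abs(np.fft.rfft(frames, axis=1))           # (time, freq)

def combine_features(x):
    # Stack two time-frequency "channels": the magnitude spectrogram and
    # its frame-to-frame delta, giving the network a combined input.
    mag = stft_mag(x)
    delta = np.diff(mag, axis=0, prepend=mag[:1])
    return np.stack([mag, delta])                        # (channels, time, freq)

rng = np.random.default_rng(0)
feats = combine_features(rng.normal(size=1024))
```

Each additional representation simply becomes another input channel, which is how different feature combinations can be swapped in and compared.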


© 2016–2026 Advanced Vision and Learning Lab (AVLL)