94 research outputs found
Recommended from our members
Automatic X-ray Image Segmentation and Clustering for Threat Detection
Firearms currently pose a known risk at the borders. The enormous volume of X-ray images of parcels, luggage and freight entering each country by rail, air and sea presents a continual challenge to screening officers. To further improve UK capability and aid officers in their search for firearms, we propose an automated object segmentation and clustering architecture that focuses officers' attention on high-risk threat objects. Our proposal utilizes dual-view single- and dual-energy 2D X-ray imagery and blends concepts from radiology, image processing and computer vision. It consists of a triple-layered processing scheme that segments the luggage contents based on the effective atomic number of each object, followed by a dual-layered clustering procedure. The latter comprises a mild and a hard clustering phase. The former is based on a number of morphological operations from the image-processing domain and aims at disjoining mildly connected objects and filtering noise. The hard clustering phase exploits local feature matching techniques from the computer vision domain, aiming at sub-clustering the clusters obtained from the mild clustering stage. Evaluation on highly challenging single- and dual-energy X-ray imagery reveals the architecture's promising performance.
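The mild clustering phase described above can be illustrated with standard morphological operations. The sketch below is a minimal Python/SciPy stand-in, not the authors' implementation; the binary mask, the one-pixel bridge and the 3×3 structuring element are all hypothetical, chosen only to show an opening disjoining two mildly connected objects before the components are labelled.

```python
import numpy as np
from scipy import ndimage

def mild_cluster(mask, structure_size=3):
    """Morphological 'mild clustering': an opening disjoins mildly
    connected objects and filters small noise, then the surviving
    connected components are labelled."""
    structure = np.ones((structure_size, structure_size), dtype=bool)
    opened = ndimage.binary_opening(mask, structure=structure)
    labels, n_clusters = ndimage.label(opened)
    return labels, n_clusters

# Two blobs joined by a one-pixel bridge (a "mildly connected" pair).
mask = np.zeros((12, 24), dtype=bool)
mask[2:10, 2:10] = True    # blob A
mask[2:10, 14:22] = True   # blob B
mask[5, 10:14] = True      # thin bridge between them
labels, n_clusters = mild_cluster(mask)
print(n_clusters)  # the opening erodes away the bridge, leaving 2 clusters
```

The erosion step of the opening removes the one-pixel bridge, and the subsequent dilation cannot restore it, so the two objects are counted separately.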
SAR image segmentation with GMMs
This paper proposes a new approach to Synthetic Aperture Radar (SAR) image segmentation. Segmenting SAR images is challenging because of blurry edges and strong speckle. The proposed segmentation is based on a machine learning technique: Gaussian Mixture Models (GMMs), already used to segment images in the visible domain, are here adapted to work with single-channel SAR images. The segmentation is designed as a first step towards feature- and model-based classification, so the recall rate is the most important metric, as the goal is to retain most of the target's features. A high recall rate of 88%, higher than that of other segmentation methods on the Moving and Stationary Target Acquisition and Recognition (MSTAR) dataset, was obtained. The subsequent classification stage is thus not starved of information, while its computational load drops. With this method, the inclusion of disruptive features in target models is limited, providing computationally lighter models and a speed-up in further classification, as the narrower segmented areas foster convergence of models and provide refined features to compare. This segmentation method is hence an asset to template-, feature- and model-based classification methods. A comparison between variants of the GMM segmentation and a classical segmentation is also provided.
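The intensity-based GMM segmentation, and the recall metric it is judged by, can be sketched as follows. This is a minimal scikit-learn stand-in on a synthetic speckled image, not the paper's pipeline or the MSTAR data; the image size, gamma-distributed clutter and target brightness are all assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic single-channel "SAR-like" image: a bright target region on
# speckled (exponential-like) clutter.
image = rng.gamma(shape=1.0, scale=0.2, size=(64, 64))
image[20:40, 20:40] += 1.5
truth = np.zeros((64, 64), dtype=bool)
truth[20:40, 20:40] = True

# Fit a 2-component GMM on pixel intensities; call the brighter
# component "target" and the darker one "clutter".
gmm = GaussianMixture(n_components=2, random_state=0)
labels = gmm.fit_predict(image.reshape(-1, 1)).reshape(64, 64)
target_label = int(np.argmax(gmm.means_.ravel()))
pred = labels == target_label

# Recall = TP / (TP + FN): the fraction of true target pixels retained,
# the quantity the paper prioritises so later stages keep the features.
recall = (pred & truth).sum() / truth.sum()
print(f"recall = {recall:.2f}")
```

A high recall here means the segmented region passes nearly all target pixels to the downstream classifier, even if some clutter leaks through.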
Evaluating 3D local descriptors for future LIDAR missiles with automatic target recognition capabilities
Future light detection and ranging (LIDAR) seeker missiles incorporating 3D automatic target recognition (ATR) capabilities can improve a missile's effectiveness in complex battlefield environments. Considering the progress of local 3D descriptors in the computer vision domain, this paper evaluates a number of these on highly credible simulated air-to-ground missile engagement scenarios. The latter take into account numerous parameters not yet investigated in the literature, including variable missile-target range, 6-degrees-of-freedom missile motion and atmospheric disturbances. Additionally, the evaluation process utilizes our suggested 3D ATR architecture, which, compared to current pipelines, involves more post-processing layers aimed at further enhancing 3D ATR performance. Our trials reveal that computer vision algorithms are appealing for missile-oriented 3D ATR.
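Evaluating local 3D descriptors typically reduces to scoring nearest-neighbour correspondences, as in the sketch below: Lowe's ratio test over synthetic descriptor vectors. The 33-dimensional size (FPFH-like), the noise level and the 0.8 ratio threshold are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 33-dimensional local descriptors computed on a model
# point cloud, and a noisy copy standing in for the sensed scene
# (ground-truth correspondence is index i <-> i).
model = rng.normal(size=(50, 33))
scene = model + rng.normal(scale=0.05, size=model.shape)

def match_descriptors(query, reference, ratio=0.8):
    """Nearest-neighbour matching with Lowe's ratio test: keep a match
    only if the best distance is clearly below the second best."""
    matches = []
    for i, d in enumerate(query):
        dists = np.linalg.norm(reference - d, axis=1)
        best, second = np.argsort(dists)[:2]
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches

matches = match_descriptors(scene, model)
correct = sum(i == j for i, j in matches)
print(len(matches), correct)
```

With low sensor noise every descriptor should match its true partner; raising the noise or shrinking the descriptor dimension degrades this score, which is the kind of sensitivity such an evaluation measures.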
Depth-Enhanced Deep Learning Approach For Monocular Camera Based 3D Object Detection
Automatic 3D object detection using monocular cameras presents significant challenges in the context of autonomous driving. Precise labeling of 3D object scales requires accurate spatial information, which is difficult to obtain from a single image due to the inherent lack of depth information in monocular images, compared to LiDAR data. In this paper, we propose a novel approach to address this issue by enhancing deep neural networks with depth information for monocular 3D object detection. The proposed method comprises three key components. 1) Feature Enhancement Pyramid Module: we extend conventional Feature Pyramid Networks (FPN) by introducing a feature enhancement pyramid network. This module fuses feature maps from the original pyramid and captures contextual correlations across multiple scales; additional pathways are incorporated to increase the connectivity between low-level and high-level features. 2) Auxiliary Dense Depth Estimator: we introduce an auxiliary dense depth estimator that generates dense depth maps to enhance the spatial perception capabilities of the deep network without adding computational burden. 3) Augmented Center Depth Regression: to aid center depth estimation, we employ additional bounding-box vertex depth regression based on geometry. Our experimental results demonstrate the superiority of the proposed technique over existing competitive methods reported in the literature. The approach shows remarkable performance improvements in monocular 3D object detection, making it a promising solution for autonomous driving applications.
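The geometric link between a bounding box's pixel height and its centre depth, which component 3) builds on, reduces to the pinhole relation Z = f·H/h. The sketch below is only an illustration of that relation; the KITTI-like focal length and the car height are assumed numbers, not values from the paper.

```python
def depth_from_box(focal_px, object_height_m, box_height_px):
    """Pinhole geometry: a vertical extent H (metres) at depth Z images to
    h = f * H / Z pixels, hence Z = f * H / h."""
    return focal_px * object_height_m / box_height_px

# Illustrative (assumed) numbers: a KITTI-like focal length of 721 px and
# a 1.5 m tall car whose 2D box is 54 px tall.
z = depth_from_box(721.0, 1.5, 54.0)
print(f"estimated centre depth: {z:.1f} m")  # ~20.0 m
```

Regressing depths for several box vertices and combining them, as the paper's augmented regression does, gives more constraints on Z than a single height measurement.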
Robust Multi‐Agent Reinforcement Learning Against Adversarial Attacks for Cooperative Self‐Driving Vehicles
Multi-agent deep reinforcement learning (MARL) for self-driving vehicles aims to address the complex challenge of coordinating multiple autonomous agents in shared road environments. MARL creates a more stable system and improves vehicle performance in typical traffic scenarios compared to single-agent DRL systems. However, despite its sophisticated cooperative training, MARL remains vulnerable to unforeseen adversarial attacks. Perturbed observation states can lead one or more vehicles to make critical errors in decision-making, triggering chain reactions that often result in severe collisions. To ensure the safety and reliability of multi-agent autonomous driving systems, this paper proposes a robust constrained cooperative multi-agent reinforcement learning (R-CCMARL) algorithm for self-driving vehicles, enabling a robust driving policy that can handle strong and unpredictable adversarial attacks. Unlike most existing works, our R-CCMARL framework employs a universal policy for each agent, achieving a more practical, non-task-oriented driving agent for real-world applications. This enables us to integrate shared observations with mean-field theory to model interactions within the MARL system. A risk formulation and a risk-estimation network are developed to minimise the defined long-term risks. To further enhance robustness, this risk estimator is then used to construct a constrained optimisation objective with a regulariser that maximises long-term rewards in worst-case scenarios. Experiments conducted in the CARLA simulator on intersection scenarios demonstrate that our method remains robust against adversarial state perturbations while maintaining high performance, both with and without attacks.
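The constrained-objective idea can be sketched with a simple Lagrangian-style penalty standing in for the paper's learned risk-estimation network; the reward values, risk values, multiplier and budget below are all hypothetical.

```python
def constrained_objective(rewards, risks, lam=1.0, risk_budget=0.1):
    """Penalised objective: mean long-term reward minus a Lagrangian-style
    penalty whenever the estimated mean risk exceeds the budget."""
    mean_reward = sum(rewards) / len(rewards)
    mean_risk = sum(risks) / len(risks)
    return mean_reward - lam * max(0.0, mean_risk - risk_budget)

# Same rewards, but one policy incurs far higher estimated risk
# under adversarial perturbation.
safe = constrained_objective([1.0, 0.9, 1.1], [0.05, 0.08, 0.06])
risky = constrained_objective([1.0, 0.9, 1.1], [0.30, 0.25, 0.35])
print(safe > risky)  # True: the risky policy is penalised
```

The regulariser thus steers optimisation toward policies whose worst-case risk stays within budget, rather than toward raw reward alone.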
Robust Adversarial Attacks Detection for Deep Learning based Relative Pose Estimation for Space Rendezvous
Research on deep learning techniques for autonomous spacecraft relative navigation has grown continuously in recent years. Adopting these techniques offers enhanced performance; however, it also raises concerns regarding the trustworthiness and security of such deep learning methods, owing to their susceptibility to adversarial attacks. In this work, we propose a novel approach for adversarial attack detection in deep neural network-based relative pose estimation schemes, based on the concept of explainability. For an orbital rendezvous scenario, we develop an innovative relative pose estimation technique adopting our proposed Convolutional Neural Network (CNN), which takes an image from the chaser's onboard camera and accurately outputs the target's relative position and rotation. We seamlessly perturb the input images using adversarial attacks generated by the Fast Gradient Sign Method (FGSM). The adversarial attack detector is then built on a Long Short-Term Memory (LSTM) network that takes an explainability measure, namely the SHapley value, from the CNN-based pose estimator and flags adversarial attacks as they occur. Simulation results show that the proposed adversarial attack detector achieves a detection accuracy of 99.21%. Both the deep relative pose estimator and the adversarial attack detector were then tested on real data captured from our laboratory-designed setup, where the detector achieves an average detection accuracy of 96.29%.
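The FGSM perturbation used to attack the pose estimator has a one-line closed form. The sketch below applies it to a random stand-in image and gradient; in the paper, the gradient would come from the CNN's pose loss with respect to the input image.

```python
import numpy as np

def fgsm(x, grad, eps=0.01):
    """Fast Gradient Sign Method: step the input by eps in the sign of the
    loss gradient, clipped back to the valid image range [0, 1]."""
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

rng = np.random.default_rng(0)
x = rng.uniform(size=(8, 8))     # stand-in for the chaser camera image
grad = rng.normal(size=(8, 8))   # stand-in for dLoss/dx from the pose CNN
x_adv = fgsm(x, grad)

# The perturbation is bounded by eps in the infinity norm, which is what
# makes the attack visually imperceptible at small eps.
print(float(np.abs(x_adv - x).max()) <= 0.01 + 1e-12)
```

Because each pixel moves by at most eps, the attacked image looks unchanged to an operator while the pose estimate can drift badly, which is exactly what the SHAP-based detector is trained to flag.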
SEASTAR: a mission to study ocean submesoscale dynamics and small-scale atmosphere-ocean processes in coastal, shelf and polar seas
High-resolution satellite images of ocean color and sea surface temperature reveal an abundance of ocean fronts, vortices and filaments at scales below 10 km, but measurements of ocean surface dynamics at these scales are rare. There is increasing recognition of the role played by small-scale ocean processes in ocean-atmosphere coupling, upper-ocean mixing and ocean vertical transports, with advanced numerical models and in situ observations highlighting fundamental changes in dynamics when scales reach 1 km. Numerous scientific publications highlight the global impact of small oceanic scales on marine ecosystems, operational forecasts and long-term climate projections through strong ageostrophic circulations, large vertical ocean velocities and mixed-layer re-stratification. Small-scale processes particularly dominate in coastal, shelf and polar seas, where they mediate important exchanges between land, ocean, atmosphere and the cryosphere, e.g., freshwater and pollutants. As numerical models continue to evolve toward finer spatial resolution and increasingly complex coupled atmosphere-wave-ice-ocean systems, modern observing capability lags behind, unable to deliver the high-resolution synoptic measurements of total currents, wind vectors and waves needed to advance understanding, develop better parameterizations and improve model validations, forecasts and projections. SEASTAR is a satellite mission concept that proposes to directly address this critical observational gap with synoptic two-dimensional imaging of total ocean surface current vectors and wind vectors at 1 km resolution, together with coincident directional wave spectra. Based on major recent advances in squinted along-track Synthetic Aperture Radar interferometry, SEASTAR is an innovative, mature concept with unique demonstrated capabilities, seeking to proceed toward spaceborne implementation within Europe and beyond.
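The along-track interferometric principle underpinning such current measurements links interferometric phase to line-of-sight surface velocity. The sketch below shows the standard simplified relation only, with squint-projection terms omitted; the X-band wavelength and time lag are illustrative numbers, not SEASTAR's actual design parameters.

```python
import math

def radial_velocity(phase_rad, wavelength_m, time_lag_s):
    """Along-track interferometry: phase = 4*pi*tau*v_r / wavelength, so
    the line-of-sight surface velocity is
    v_r = wavelength * phase / (4*pi*tau)."""
    return wavelength_m * phase_rad / (4.0 * math.pi * time_lag_s)

# Illustrative X-band numbers: 0.031 m wavelength, 3 ms along-track time
# lag, 1 rad of measured interferometric phase.
v = radial_velocity(phase_rad=1.0, wavelength_m=0.031, time_lag_s=0.003)
print(f"{v:.2f} m/s")  # ~0.82 m/s line-of-sight surface velocity
```

Combining such line-of-sight measurements from two squinted look directions is what allows a two-dimensional current vector to be recovered.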
Altimetry for the future: Building on 25 years of progress
In 2018 we celebrated 25 years of development of radar altimetry, and the progress achieved by this methodology in the fields of global and coastal oceanography, hydrology, geodesy and cryospheric sciences. Many symbolic major events have celebrated these developments, e.g., in Venice, Italy, the 15th (2006) and 20th (2012) years of progress and, more recently, in 2018, in Ponta Delgada, Portugal, 25 Years of Progress in Radar Altimetry. On this latter occasion it was decided to collect contributions of scientists, engineers and managers involved in the worldwide altimetry community to depict the state of altimetry and propose recommendations for the altimetry of the future. This paper summarizes the contributions and recommendations that were collected and provides guidance for future mission design, research activities, and sustainable operational radar altimetry data exploitation. The recommendations provided are fundamental for optimizing further scientific and operational advances of oceanographic observations by altimetry, including requirements for the spatial and temporal resolution of altimetric measurements, their accuracy and their continuity. There are also new challenges and new openings mentioned in the paper that are particularly crucial for observations at higher latitudes, for coastal oceanography, for cryospheric studies and for hydrology. The paper starts with a general introduction followed by a section on Earth System Science including Ocean Dynamics, Sea Level, the Coastal Ocean, Hydrology, the Cryosphere and Polar Oceans and the "Green" Ocean, extending the frontier from biogeochemistry to marine ecology. Applications are described in a subsequent section, which covers Operational Oceanography; Weather, Hurricane, Wave and Wind Forecasting; and Climate Projection. Instrument development and the evolution of satellite missions are described in a fourth section.
A fifth section covers the key observations that altimeters provide and their potential complements, from other Earth observation measurements to in situ data. Section 6 identifies the data and methods and provides some accuracy and resolution requirements for the wet tropospheric correction, the orbit and other geodetic requirements, the Mean Sea Surface, Geoid and Mean Dynamic Topography, Calibration and Validation, data accuracy, and data access and handling (including the DUACS system). Section 7 brings a transversal view on scales, integration, artificial intelligence and capacity building (education and training). Section 8 reviews the programmatic issues, followed by a conclusion.
Pose-informed deep learning method for SAR ATR
Synthetic aperture radar (SAR) images for automatic target recognition (ATR) have attracted significant interest, as they can be acquired day and night under a wide range of weather conditions. However, SAR images can be time-consuming to analyse, even for experts. ATR can alleviate this burden, and deep learning is an attractive solution. A new pose-informed deep learning architecture, which takes into account the impact of target orientation on the SAR image as the scatterer configuration changes, is proposed. Classification is achieved in two stages. First, the orientation of the target is determined using a Hough transform and a convolutional neural network (CNN). Then, classification is performed by a CNN specifically trained on targets with orientations similar to that of the target under test. The networks are trained with translation and SAR-specific data augmentation. The proposed pose-informed deep network architecture was successfully tested on the Military Ground Target Dataset (MGTD) and the Moving and Stationary Target Acquisition and Recognition (MSTAR) datasets. Results show the proposed solution outperformed standard AlexNets on the MGTD, MSTAR extended operating condition (EOC)1, EOC2 and standard operating condition (SOC)10 datasets, with a score of 99.13% on MSTAR SOC10.
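The orientation-estimation stage can be approximated, outside a full Hough/CNN pipeline, by a magnitude-weighted gradient-orientation histogram. The sketch below is a lightweight stand-in for that first stage, not the paper's method; the synthetic bar target and bin count are assumptions.

```python
import numpy as np

def dominant_orientation(img, n_bins=36):
    """Dominant edge orientation in degrees [0, 180): build a
    magnitude-weighted gradient-orientation histogram and return the
    centre of its peak bin."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0
    hist, edges = np.histogram(ang, bins=n_bins, range=(0.0, 180.0),
                               weights=mag)
    peak = int(np.argmax(hist))
    return 0.5 * (edges[peak] + edges[peak + 1])

# Synthetic "target": a horizontal bar; its long edges concentrate the
# gradient energy near 90 degrees (perpendicular to the bar).
img = np.zeros((32, 32))
img[14:18, 4:28] = 1.0
print(dominant_orientation(img))
```

Once an orientation estimate is available, routing the chip to a classifier trained on similarly oriented targets is what gives the pose-informed architecture its advantage over a single pose-agnostic network.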
