269 research outputs found
Designing Information Technology Architectures: A Cost-Oriented Methodology
This paper proposes a design methodology for information technology architectures that ties organizational requirements to technical choices and costs. The primary goal is to provide structured support for selecting the minimum-cost architecture that satisfies given organizational requirements. Previous empirical studies have attempted absolute cost comparisons of different architectural solutions, relying primarily on practitioner expertise and a priori beliefs, but have rarely taken into account the impact of organizational requirements on costs. Requirements are modelled as information processes, composed of tasks that exchange information and are characterized by varying levels of computational complexity. Different architectural distributions of presentation, computation and data management applications are compared, and the cost implications of organizational requirements for processing intensity, communication intensity and networking are analysed. The results show a relationship between structural features of information processes and architectural costs, and indicate that architectural design should be based on organizational as well as technological considerations.
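The selection step the abstract describes — picking the minimum-cost architecture among those satisfying given organizational requirements — can be sketched as a simple feasibility filter followed by a cost minimization. The candidate architectures, the requirement attributes, and the satisfies() rule below are entirely illustrative, not taken from the paper.

```python
# Hypothetical candidates; the cost and capacity figures are made up
# for illustration only.
candidates = [
    {"name": "centralized", "cost": 120, "processing": 10, "networking": 2},
    {"name": "distributed", "cost": 90,  "processing": 6,  "networking": 8},
    {"name": "hybrid",      "cost": 100, "processing": 8,  "networking": 6},
]

def satisfies(arch, req):
    """Illustrative requirement check: the architecture must meet or
    exceed each required capability level."""
    return (arch["processing"] >= req["processing"]
            and arch["networking"] >= req["networking"])

# Organizational requirements (hypothetical levels)
req = {"processing": 7, "networking": 5}

# Keep only feasible architectures, then take the cheapest one
feasible = [a for a in candidates if satisfies(a, req)]
best = min(feasible, key=lambda a: a["cost"])
print(best["name"])  # 'hybrid' under these made-up numbers
```

In practice the paper derives costs from the structural features of the modelled information processes rather than from fixed per-architecture figures, but the minimum-cost-subject-to-requirements structure of the selection is the same.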
Editorial Foreword to the Special Issue on Artificial Intelligence for Hyper- and Multi-spectral Remote Sensing Image Processing
In the current age of widespread application of artificial intelligence (AI) across various facets of life, satellite remote sensing is no exception. Thanks to ongoing enhancements in the spatial and temporal resolution of satellite images, they are emerging as invaluable assets in areas such as land-use analysis, meteorology, change detection, and beyond. Accurate analysis and classification of hyperspectral images (HSIs) and multispectral remote sensing images (RSIs) at various levels are essential for extracting valuable insights from these datasets.
Evaluating the Repair of System-on-Chip (SoC) using Connectivity
This paper presents a new model for analyzing the repairability of reconfigurable system-on-chip (RSoC) instrumentation during the repair process. It exploits the connectivity of the interconnected cores, taking into account unreliability factors due to both neighboring cores and the interconnect structure. Based on this connectivity, two RSoC repair scheduling strategies are proposed: Minimum Number of Interconnections First (I-MIN) and Minimum Number of Neighboring Cores First (C-MIN). Two further strategies, Maximum Number of Interconnections First (I-MAX) and Maximum Number of Neighboring Cores First (C-MAX), are also introduced and analyzed to further explore the impact of connectivity-based repair scheduling on the overall repairability of RSoCs. Extensive parametric simulations demonstrate the efficiency of the proposed repair scheduling strategies, enabling the manufacture of highly reliable RSoC instrumentation.
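The four scheduling strategies named above all reduce to ordering the faulty cores by a connectivity metric. A minimal sketch, assuming a simple dictionary representation of the interconnect (the core names, counts, and adjacency data are invented for illustration and do not come from the paper):

```python
def i_min_order(faulty_cores, interconnect_counts):
    """Minimum Number of Interconnections First (I-MIN): schedule cores
    with the fewest interconnections for repair first. The I-MAX variant
    would simply sort with reverse=True."""
    return sorted(faulty_cores, key=lambda c: interconnect_counts[c])

def c_min_order(faulty_cores, neighbors):
    """Minimum Number of Neighboring Cores First (C-MIN): schedule cores
    with the fewest neighboring cores first. C-MAX reverses the order."""
    return sorted(faulty_cores, key=lambda c: len(neighbors[c]))

# Illustrative core graph: A has 4 interconnections and 2 neighbors, etc.
interconnects = {"A": 4, "B": 1, "C": 3}
neighbors = {"A": ["B", "C"], "B": ["A"], "C": ["A"]}
faulty = ["A", "B", "C"]

print(i_min_order(faulty, interconnects))  # B first (fewest interconnections)
print(c_min_order(faulty, neighbors))      # B and C (1 neighbor each) before A
```

Python's sort is stable, so cores with equal connectivity keep their original relative order; the paper's model additionally weighs the unreliability contributed by neighbors and interconnect, which this sketch omits.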
Robust DDoS attack detection with adaptive transfer learning
In the evolving cybersecurity landscape, the rising frequency of Distributed Denial of Service (DDoS) attacks requires robust defense mechanisms to safeguard the availability and integrity of network infrastructure. Deep Learning (DL) models have emerged as a promising approach for DDoS attack detection and mitigation due to their capability of automatically learning feature representations and distinguishing complex patterns within network traffic data. However, the effectiveness of DL models in protecting against evolving attacks also depends on the design of adaptive architectures, through the combination of appropriate models, quality data, and thorough hyperparameter optimization, which is rarely performed in the literature. Moreover, within adaptive architectures for DDoS detection, no method has yet addressed how to transfer knowledge between different datasets to improve classification accuracy. In this paper, we propose an innovative approach to DDoS detection that leverages Convolutional Neural Networks (CNNs), adaptive architectures, and transfer learning techniques. Experimental results on publicly available datasets show that the proposed adaptive transfer learning method effectively identifies benign and malicious activities as well as specific attack categories.
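The core idea of transferring knowledge between traffic datasets — reuse a feature extractor learned on a source dataset and retrain only the classification head on the target dataset — can be sketched in a few lines. This is a toy NumPy stand-in, not the paper's CNN: the frozen random-weight "extractor", the synthetic flow data, and the logistic-regression head are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_features(x, w_frozen):
    """Frozen 'pretrained' feature extractor (stand-in for the CNN's
    convolutional layers, whose weights are not updated)."""
    return np.maximum(x @ w_frozen, 0.0)  # ReLU features

def train_head(feats, labels, lr=0.1, epochs=200):
    """Retrain only the final linear head (logistic regression) on the
    target dataset, leaving the extractor untouched."""
    w = np.zeros(feats.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(feats @ w)))
        w -= lr * feats.T @ (p - labels) / len(labels)
    return w

# Synthetic 'target' dataset: 8-feature flows, benign vs malicious
x = rng.normal(size=(200, 8))
y = (x[:, 0] + x[:, 1] > 0).astype(float)

w_frozen = rng.normal(size=(8, 16))      # weights 'learned' on a source dataset
feats = extract_features(x, w_frozen)    # frozen representation
w_head = train_head(feats, y)            # only this part is retrained

p = 1.0 / (1.0 + np.exp(-(feats @ w_head)))
acc = np.mean((p > 0.5) == y)
print(f"target-set accuracy: {acc:.2f}")
```

In a real setting the extractor's weights would come from training on the source dataset, and fine-tuning could also unfreeze some extractor layers with a small learning rate.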
An intelligent monitoring system for assessing bee hive health
Up to one third of global food production depends on pollination by honey bees, making them vital. This study defines a methodology for creating a bee hive health monitoring system using image processing techniques. The approach consists of two models: one detects bees in an image and the other classifies each detected bee's health. The main contribution of the methodology is the increased efficacy of the models while maintaining the efficiency found in the state of the art. Two databases were used to create models based on Convolutional Neural Networks (CNNs). The best results are 95% accuracy for bee health classification and 82% accuracy for detecting the presence of bees in an image, higher than those reported in the state of the art.
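The two-model structure described above — a detector that proposes bee regions, followed by a classifier that labels each detected bee's health — is a standard two-stage pipeline. A minimal sketch with trivial stub models standing in for the paper's CNNs (the detector, classifier, image data, and 0.5 confidence threshold are all hypothetical):

```python
def detect_bees(image, detector):
    """Stage 1: return bounding boxes the detector scores above a
    confidence threshold (threshold chosen arbitrarily here)."""
    return [box for box, score in detector(image) if score > 0.5]

def crop(image, box):
    """Cut a (x0, y0, x1, y1) region out of a row-major image."""
    x0, y0, x1, y1 = box
    return [row[x0:x1] for row in image[y0:y1]]

def classify_health(image, boxes, classifier):
    """Stage 2: classify each detected bee crop as healthy/unhealthy."""
    return [classifier(crop(image, box)) for box in boxes]

# Stub image and models to exercise the pipeline
image = [[0] * 10 for _ in range(10)]
detector = lambda img: [((1, 1, 4, 4), 0.9), ((5, 5, 8, 8), 0.2)]
classifier = lambda c: "healthy" if sum(map(sum, c)) == 0 else "unhealthy"

boxes = detect_bees(image, detector)
print(boxes)                                    # only the high-confidence box
print(classify_health(image, boxes, classifier))
```

Keeping the two stages separate lets each model be trained on its own database, which matches the study's use of two databases.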
Explainability and Interpretability Concepts for Edge AI Systems
The increased complexity of artificial intelligence (AI), machine learning (ML) and deep learning (DL) methods, models, and training data needed to satisfy industrial application needs has emphasised the need for AI models providing explainability and interpretability. Model explainability aims to communicate the reasoning of AI/ML/DL technology to end users, while model interpretability focuses on empowering model transparency so that users understand precisely why and how a model generates its results.
Edge AI, which combines AI, the Internet of Things (IoT) and edge computing to enable real-time collection, processing, analytics, and decision-making, introduces new challenges to achieving explainable and interpretable methods. This is due to the compromises among performance, constrained resources, model complexity, power consumption, and the lack of benchmarking and standardisation in edge environments.
This chapter presents the state of play of AI explainability and interpretability methods and techniques, discussing different benchmarking approaches and highlighting the state-of-the-art development directions.
Spare Line Borrowing Technique for Distributed Memory Cores in SoC
In this paper, a new architecture for distributed embedded memory cores in SoCs is proposed, and an effective memory repair method using the proposed Spare Line Borrowing (software-driven reconfiguration) technique is investigated. It is known that faulty cells in memory cores show spatial locality, also known as fault clustering. This physical phenomenon tends to occur more often as deep submicron technology advances, due to defects that span multiple circuit elements and increasingly sophisticated circuit design. The combination of the new architecture and repair method proposed in this paper enhances fault tolerance in SoCs, especially in the case of fault clustering. This fault tolerance enhancement is obtained through optimal redundancy utilization: spare redundancy in a fault-resistant memory core is used to fix faults in a fault-prone memory core. The effect of the Spare Line Borrowing technique on the reliability of distributed memory cores is analyzed through modeling and extensive parametric simulation.
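The redundancy-sharing idea — a core that exhausts its own spare lines borrows unused spares from a fault-resistant neighbor — can be captured in a small allocation sketch. The dictionary representation, core names, and greedy donor selection below are illustrative assumptions, not the paper's implementation:

```python
def repair(cores):
    """cores: dict name -> {'faulty_lines': int, 'spares': int}.
    Each core consumes its own spares first; remaining faulty lines are
    covered by borrowing free spares from other cores (greedy order).
    Returns (borrow plan, unrepaired line counts per core)."""
    remaining = {n: max(0, c["faulty_lines"] - c["spares"]) for n, c in cores.items()}
    free = {n: max(0, c["spares"] - c["faulty_lines"]) for n, c in cores.items()}
    plan = {}
    for name in cores:
        borrowed = []
        for donor in cores:
            if donor == name or remaining[name] == 0:
                continue
            take = min(free[donor], remaining[name])
            if take:
                borrowed.append((donor, take))
                free[donor] -= take
                remaining[name] -= take
        plan[name] = borrowed
    return plan, remaining

# Illustrative cluster: one fault-prone core, one fault-resistant donor
cores = {
    "fault_prone": {"faulty_lines": 3, "spares": 1},    # 2 lines short
    "fault_resistant": {"faulty_lines": 0, "spares": 4},
}
plan, remaining = repair(cores)
print(plan["fault_prone"])   # [('fault_resistant', 2)]
print(remaining)             # all zero -> fully repaired
```

Without borrowing, the fault-prone core would be left with two unrepairable lines while four spares sat idle in its neighbor, which is exactly the under-utilization the technique targets; the paper's software-driven reconfiguration additionally accounts for the physical line remapping that this sketch abstracts away.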
- …
