Analysis and application of digital spectral warping in analog and mixed-signal testing
Spectral warping is a digital signal processing transform which shifts the frequencies contained within a signal along the frequency axis. The Fourier transform coefficients of a warped signal correspond to frequency-domain 'samples' of the original signal which are unevenly spaced along the frequency axis. This property allows the technique to be efficiently used for DSP-based analog and mixed-signal testing. The analysis and application of spectral warping for test signal generation, response analysis, filter design, frequency response evaluation, etc. are discussed in this paper, along with examples of software and hardware implementations
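The uneven frequency-domain sampling described above can be illustrated with the classic first-order allpass warping map, omega' = omega + 2*arctan(lambda*sin(omega) / (1 - lambda*cos(omega))); this is a generic sketch of that map, not the paper's implementation, and the function names and the example value lambda = 0.5 are illustrative assumptions:

```python
import math

def warp(omega, lam):
    """Bilinear (first-order allpass) frequency warping map.

    Maps a normalized frequency omega in [0, pi] to its warped position;
    lam in (-1, 1) controls the warping (lam = 0 gives the identity map,
    i.e. a plain uniform DFT grid).
    """
    return omega + 2.0 * math.atan2(lam * math.sin(omega),
                                    1.0 - lam * math.cos(omega))

def warped_dft_grid(n_bins, lam):
    """Uniform DFT bin centers mapped onto the warped axis.

    The resulting bins are unevenly spaced on the original frequency
    axis, which is the property exploited for mixed-signal testing.
    """
    return [warp(math.pi * k / (n_bins - 1), lam) for k in range(n_bins)]
```

The endpoints 0 and pi are fixed points of the map, so the warped grid still covers the full band; a positive lambda expands the low-frequency region at the expense of the high-frequency one.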
A Web-Based Distributed Virtual Educational Laboratory
The evolution and cost of measurement equipment, together with the demands of continuous training and distance learning, make it difficult to provide a complete set of updated workbenches to every student. For a preliminary familiarization and experimentation with instrumentation and measurement procedures, the use of virtual equipment is often considered more than sufficient from the didactic point of view, while the hands-on approach with real instrumentation and measurement systems still remains necessary to complete and refine the student's practical expertise. Creation and distribution of workbenches in networked computer laboratories therefore becomes attractive and convenient. This paper describes the specification and design of a geographically distributed system based on standard commercial components
Anomaly-based intrusion detection system for DDoS attack with Deep Learning techniques
The increasing number of connected devices is fostering a rising frequency of cyber attacks, with Distributed Denial of Service (DDoS) attacks among the most common. To counteract DDoS, companies and large organizations are increasingly deploying anomaly-based Intrusion Detection Systems (IDS), which detect attack patterns by analyzing differences in malicious network traffic against a baseline of legitimate traffic. To differentiate malicious and normal traffic, methods based on artificial intelligence and, in particular, Deep Learning (DL) are being increasingly considered, due to their ability to automatically learn feature representations for the different traffic types, without the need for explicit programming or handcrafted feature extraction. In this paper, we propose a novel methodology for simulating an anomaly-based IDS based on adaptive DL by designing multiple DL models working with both binary and multi-label classification on multiple datasets with different degrees of complexity. To make the DL models adaptable to different conditions, we consider adaptive architectures obtained by automatically tuning the number of neurons for each situation. Results on publicly-available datasets confirm the validity of our proposed methodology, with DL models adapting to the different conditions by increasing the number of neurons on more complex datasets and achieving the highest accuracy in the binary classification configuration
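The core idea of anomaly-based detection (flagging traffic that deviates from a baseline of legitimate traffic) can be sketched without any deep learning machinery; this toy statistical detector is an illustrative assumption of ours, not the adaptive DL models the paper actually proposes:

```python
import statistics

class BaselineAnomalyIDS:
    """Toy anomaly-based IDS: learn per-feature mean/std from
    legitimate traffic, then flag samples whose maximum z-score
    exceeds a threshold. Feature vectors are plain tuples, e.g.
    (packets_per_second, mean_packet_size)."""

    def __init__(self, threshold=3.0):
        self.threshold = threshold
        self.means = []
        self.stds = []

    def fit(self, baseline):
        # baseline: list of feature vectors of legitimate traffic
        cols = list(zip(*baseline))
        self.means = [statistics.fmean(c) for c in cols]
        # guard against zero variance with a unit fallback
        self.stds = [statistics.pstdev(c) or 1.0 for c in cols]

    def is_attack(self, sample):
        z = max(abs(x - m) / s
                for x, m, s in zip(sample, self.means, self.stds))
        return z > self.threshold
```

A DL-based IDS replaces the hand-built z-score with learned feature representations, which is precisely the advantage the abstract attributes to Deep Learning.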
Tasks Scheduling with Load Balancing in Fog Computing: a Bi-level Multi-Objective Optimization Approach
Fog computing is characterized by its proximity to edge devices, allowing it to handle data near the source. This capability alleviates the computational burden on data centers and minimizes latency. Ensuring high throughput and reliability of services in Fog environments depends on two critical functions: load balancing of resources and task scheduling. A significant challenge in task scheduling is allocating tasks to optimal nodes. In this paper, we tackle the challenge posed by the dependency between optimally scheduled tasks and the optimal nodes for task scheduling and propose a novel bi-level multi-objective task scheduling approach. At the upper level, which pertains to task scheduling optimization, the objective functions include the minimization of makespan, cost, and energy. At the lower level, corresponding to load balancing optimization, the objective functions include the minimization of response time and maximization of resource utilization. Our approach is based on an Improved Multi-Objective Ant Colony algorithm (IMOACO). Simulation experiments using iFogSim confirm the performance of our approach and its advantage over existing algorithms, including heuristic and meta-heuristic approaches
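The bi-level structure (an upper level optimizing scheduling objectives and a lower level balancing load among near-tied candidates) can be sketched with a simple greedy assignment; this is a minimal illustration of the two-level idea under assumed node attributes (`mips`, `cost`, `energy`), not the IMOACO algorithm the paper develops:

```python
def schedule(tasks, nodes, w=(0.5, 0.3, 0.2)):
    """Toy bi-level scheduler.

    Upper level: score each candidate node by a weighted sum of the
    task's finish time (makespan proxy), cost, and energy.
    Lower level: among candidates within 5% of the best upper-level
    score, pick the least-loaded node (load balancing).
    Returns {task_id: node_id}.
    """
    load = {n["id"]: 0.0 for n in nodes}
    plan = {}
    for t in tasks:
        scored = []
        for n in nodes:
            finish = load[n["id"]] + t["len"] / n["mips"]
            score = (w[0] * finish
                     + w[1] * t["len"] * n["cost"]
                     + w[2] * t["len"] * n["energy"])
            scored.append((score, n))
        best = min(s for s, _ in scored)
        tied = [n for s, n in scored if s <= best * 1.05]
        node = min(tied, key=lambda n: load[n["id"]])
        load[node["id"]] += t["len"] / node["mips"]
        plan[t["id"]] = node["id"]
    return plan
```

Even this greedy version exhibits the dependency the abstract highlights: the "optimal node" for a task depends on the load created by previously scheduled tasks, which is what motivates solving the two levels jointly.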
Towards explainable face aging with Generative Adversarial Networks
Generative Adversarial Networks (GAN) are being increasingly used to perform face aging due to their capabilities of automatically generating highly-realistic synthetic images by using an adversarial model often based on Convolutional Neural Networks (CNN). However, GANs currently represent black box models since it is not known how the CNNs store and process the information learned from data. In this paper, we propose the first method that deals with explaining GANs, by introducing a novel qualitative and quantitative analysis of the inner structure of the model. Similarly to analyzing the common genes in two DNA sequences, we analyze the common filters in two CNNs. We show that the GANs for face aging partially share their parameters with GANs trained for heterogeneous applications and that the aging transformation can be learned using general purpose image databases and a fine-tuning step. Results on public databases confirm the validity of our approach, also enabling future studies on similar models
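The "common genes" analogy (counting filters of one CNN that have a near-duplicate in another) can be sketched with a plain cosine-similarity match over flattened filter weights; the function names and the 0.9 threshold are illustrative assumptions, not the paper's exact quantitative analysis:

```python
import math

def cosine(u, v):
    """Cosine similarity between two flattened weight vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def common_filters(layer_a, layer_b, tau=0.9):
    """Count filters of layer_a with a near-duplicate in layer_b
    (cosine similarity >= tau) -- a toy version of comparing the
    'common genes' of two CNN layers. Filters are flattened lists."""
    return sum(1 for fa in layer_a
               if any(cosine(fa, fb) >= tau for fb in layer_b))
```

Cosine similarity is scale-invariant, so a filter and a rescaled copy of it count as "common", which matches the intuition that two networks may encode the same feature at different magnitudes.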
On the use of fuzzy logic in dependable cloud management
The effective and efficient use of dependable cloud infrastructures requires the agreement between users and cloud providers on resources, services, operating conditions, and features as well as the mapping of users' requirements onto the cloud architecture. In this paper, we identify the different ways in which fuzzy logic can be profitably adopted in performing these tasks, providing flexibility in capturing users' needs and dealing with complex architectures and conflicting or hardly-satisfiable requirements. We specifically put forward the idea of using fuzzy logic at the user side, to enable the specification of users' needs in crisp or fuzzy ways and their homogeneous processing
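The idea of letting users state requirements "in crisp or fuzzy ways" and processing them uniformly can be sketched with a triangular membership function, where a crisp requirement is just the degenerate triangle; the "response time around 20 ms" requirement below is a hypothetical example of ours, not one from the paper:

```python
def triangular(a, b, c):
    """Triangular fuzzy membership function with support [a, c] and
    peak at b; a crisp requirement is the degenerate case a == b == c,
    so crisp and fuzzy needs are processed by the same machinery."""
    def mu(x):
        if x <= a or x >= c:
            return 1.0 if a == b == c and x == b else 0.0
        if x <= b:
            return (x - a) / (b - a) if b != a else 1.0
        return (c - x) / (c - b) if c != b else 1.0
    return mu

# hypothetical fuzzy user requirement: "response time around 20 ms"
mu_latency = triangular(10.0, 20.0, 30.0)
```

A provider-side matcher can then rank candidate configurations by their membership degree instead of rejecting everything that misses a hard threshold, which is the flexibility the abstract refers to.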
Adversarial defect synthesis for industrial products in low data regime
Synthetic defect generation is an important aid for advanced manufacturing and production processes. Industrial scenarios rely on automated image-based quality control methods to avoid time-consuming manual inspections and promptly identify products not complying with specific quality standards. However, these methods show poor performance in the case of ill-posed low-data training regimes, and the lack of defective samples, due to operational costs or privacy policies, strongly limits their large-scale applicability. To overcome these limitations, we propose an innovative architecture based on an unpaired image-to-image (I2I) translation model to guide a transformation from a defect-free to a defective domain for common industrial products, while simultaneously localizing the synthesized defects through a segmentation mask. As a performance evaluation, we measure image similarity and variability using standard metrics employed for generative models. Finally, we demonstrate that inspection networks, trained on synthesized samples, improve their accuracy in spotting real defective products
Applications and limits of image-to-image translation models
Image-to-image (I2I) translation models are widely employed in several fields, e.g., computer vision, security or medicine. Their goal is to map images from a source domain to a target domain while preserving content information. Despite their success, these models suffer from multiple weaknesses. For example, many practical scenarios do not allow collecting a sufficient amount of images, leading to imbalanced domains. Furthermore, mode collapse and training instability require a careful design and further discourage their deployment on edge devices. Finally, I2I models require intensive computation to learn conditional probability distributions and are difficult to adapt to different contexts. These drawbacks mainly limit their large-scale applicability. In this work, we shed light on the main solutions adopted to overcome the above issues and their impact on performance. We also investigate several approaches to deploy these models on low-powered devices, as well as weight sharing techniques to reduce the number of parameters and resources used
Synthetic and (Un)Secure: Evaluating Generalized Membership Inference Attacks on Image Data
Synthetic data are widely employed across diverse fields, including computer vision, robotics, and cybersecurity. However, generative models are prone to unintentionally revealing sensitive information from their training datasets, primarily due to overfitting phenomena. In this context, membership inference attacks (MIAs) have emerged as a significant privacy threat. These attacks employ binary classifiers to verify whether a specific data sample was part of the model’s training set, thereby discriminating between member and non-member samples. Despite their growing relevance, the interpretation of MIA outcomes can be misleading without a detailed understanding of the data domains involved during both model development and evaluation. To bridge this gap, we performed an analysis focused on a particular category (i.e., vehicles) to assess the effectiveness of MIA under scenarios with limited overlap in data distribution. First, we introduce a data selection strategy, based on the Fréchet Coefficient, to filter and curate the evaluation datasets, followed by the execution of membership inference attacks under varying degrees of distributional overlap. Our findings indicate that MIAs are highly effective when the training and evaluation data distributions are well aligned, but their accuracy drops significantly under distribution shifts or when domain knowledge is limited. These results highlight the limitations of current MIA methodologies in reliably assessing privacy risks in generative modeling contexts
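The binary member/non-member classifier at the heart of a membership inference attack can be sketched in its simplest form, a loss-threshold attack: predict "member" when the target model's loss on a sample is unusually low, exploiting the overfitting phenomenon the abstract mentions. This is a generic textbook-style MIA sketch, not the attack configuration evaluated in the paper:

```python
import math

def nll(p):
    """Negative log-likelihood the target model assigns to the true
    label of a sample (p = predicted probability of that label)."""
    return -math.log(max(p, 1e-12))

def loss_threshold_mia(confidences, tau):
    """Toy loss-threshold membership inference attack: predict
    'member' when the loss is below tau. Overfit models tend to
    assign lower loss (higher confidence) to their training data."""
    return [nll(p) < tau for p in confidences]
```

The paper's finding that MIA accuracy collapses under distribution shift has a direct reading here: when evaluation data come from a different domain, non-members also receive atypical losses, so no single threshold separates the two classes well.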
DL4ALL: Multi-task cross-dataset transfer learning for Acute Lymphoblastic Leukemia detection
Methods for the detection of Acute Lymphoblastic (or Lymphocytic) Leukemia (ALL) are increasingly considering Deep Learning (DL) due to its high accuracy in several fields, including medical imaging. In most cases, such methods use transfer learning techniques to compensate for the limited availability of labeled data. However, current methods for ALL detection use traditional transfer learning, which requires the models to be fully trained on the source domain, then fine-tuned on the target domain, with the drawback of possibly overfitting the source domain and reducing the generalization capability on the target domain. To overcome this drawback and increase the classification accuracy that can be obtained using transfer learning, in this paper we propose our method named Deep Learning for Acute Lymphoblastic Leukemia (DL4ALL), a novel multi-task learning DL model for ALL detection, trained using a cross-dataset transfer learning approach. The method adapts an existing model into a multi-task classification problem, then trains it using transfer learning procedures that consider both source and target databases at the same time, interleaving batches from the two domains even when they are significantly different. The proposed DL4ALL represents the first work in the literature using a multi-task cross-dataset transfer learning procedure for ALL detection. Results on a publicly-available ALL database confirm the validity of our approach, which achieves a higher accuracy in detecting ALL with respect to existing methods, even when not using manual labels for the source domain
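The batch-interleaving idea (feeding the model alternating batches from the source and target datasets within the same training loop, rather than training on them sequentially) can be sketched as a simple batch iterator; the function names and the cycling of the shorter dataset are illustrative assumptions, not the paper's exact training procedure:

```python
from itertools import cycle, islice

def interleaved_batches(source, target, batch_size):
    """Yield (domain, batch) pairs alternating between the source and
    target datasets, cycling the source so every target batch is paired
    with a source batch -- a toy version of cross-dataset interleaving."""
    def chunks(data):
        for i in range(0, len(data), batch_size):
            yield data[i:i + batch_size]
    tgt = list(chunks(target))
    src = list(islice(cycle(chunks(source)), len(tgt)))
    out = []
    for s, t in zip(src, tgt):
        out.append(("source", s))
        out.append(("target", t))
    return out
```

A multi-task training loop would then dispatch each batch to the corresponding task head, so gradients from both domains update the shared backbone in every epoch instead of in two separate phases.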
