LivDet in Action - Fingerprint Liveness Detection Competition 2019
The International Fingerprint Liveness Detection Competition (LivDet) is an open and well-established meeting point for academia and private companies dealing with the problem of distinguishing images of fingerprint reproductions made of artificial materials from images of real fingerprints. In this edition of LivDet we invited competitors to propose algorithms integrated with matching systems. The goal was to investigate to what extent this integration impacts overall performance. Twelve algorithms were submitted to the competition, eight of which worked on integrated systems.
Fusion of fingerprint presentation attack detection and matching: a real approach from the LivDet perspective
The liveness detection ability is explicitly required of current personal verification systems in many security applications. As a matter of fact, the design of any biometric verification system cannot ignore its vulnerability to spoofing, or presentation attacks (PAs), which must be addressed with effective countermeasures from the beginning of the design process. However, despite significant improvements, especially through the adoption of deep learning approaches to fingerprint Presentation Attack Detectors (PADs), current research has said little about their effectiveness when embedded in fingerprint verification systems. We believe this gap is explained by the lack of instruments to investigate the problem, that is, to model the cause-effect relationships that arise when two systems (spoof detection and matching) with non-zero error rates are integrated.
To address this gap in the literature, this PhD thesis presents a novel performance simulation model based on the probabilistic relationships between the Receiver Operating Characteristics (ROCs) of the two systems when they are implemented sequentially, which is the most straightforward, flexible, and widespread integration approach. We carry out simulations on the ROCs of the PAD algorithms submitted to the LivDet 2017-2019 editions, combined with the NIST Bozorth3 and the top-level VeriFinger 12.0 matchers. With the help of this simulator, the overall system performance can be predicted before actual implementation, thus simplifying the process of setting the best trade-off among error rates.
In the second part of this thesis, we exploit this model to define a practical evaluation criterion that assesses whether operational points of the PAD exist that do not alter the expected performance of the verification system alone. Experimental simulations coupled with theoretical expectations confirm that this trade-off offers a complete view of the potential of sequential embedding, worthy of being extended to other integration approaches.
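Because the thesis builds on the sequential PAD-then-matcher scheme, a minimal sketch may help fix ideas. The snippet below is not the thesis simulator: it is an illustrative Monte Carlo estimate of the overall Genuine Acceptance Rate (GAR) and Impostor Attack Presentation Accept Rate (IAPAR) for a sequential pipeline, assuming the two stages decide independently at fixed operating points (all function and variable names are hypothetical).

```python
import numpy as np

def sequential_rates(pad_tpr, pad_fpr, match_gar, match_iapar, n=100_000, seed=0):
    """Monte Carlo estimate of overall rates for a PAD -> matcher cascade.

    pad_tpr:     probability the PAD accepts a live (bona fide) finger
    pad_fpr:     probability the PAD accepts a spoof (presentation attack)
    match_gar:   probability the matcher accepts a genuine live user
    match_iapar: probability the matcher accepts a spoof that reached it
    Assumes the two stages' errors are independent.
    """
    rng = np.random.default_rng(seed)
    # Genuine users must pass both the PAD and the matcher.
    live_pass_pad = rng.random(n) < pad_tpr
    live_pass_match = rng.random(n) < match_gar
    overall_gar = np.mean(live_pass_pad & live_pass_match)
    # Attackers must fool both stages.
    spoof_pass_pad = rng.random(n) < pad_fpr
    spoof_pass_match = rng.random(n) < match_iapar
    overall_iapar = np.mean(spoof_pass_pad & spoof_pass_match)
    return overall_gar, overall_iapar

gar, iapar = sequential_rates(pad_tpr=0.97, pad_fpr=0.05,
                              match_gar=0.99, match_iapar=0.60)
print(f"overall GAR ~ {gar:.3f}, overall IAPAR ~ {iapar:.3f}")
```

Under the independence assumption the closed form is simply the product of the stage rates; sweeping the PAD operating point along its ROC then traces the overall system behavior that the simulation model predicts.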
Balancing Accuracy and Error Rates in Fingerprint Verification Systems Under Presentation Attacks With Sequential Fusion
The assessment of fingerprint PADs embedded into a comparison system is an emerging topic in biometric recognition. Providing models and methods for this purpose helps scientists, technologists, and companies simulate multiple scenarios and obtain a realistic view of the process’s consequences on the recognition system. The most recent models aimed at deriving the overall system performance, especially in the sequential assessment of fingerprint liveness and comparison, pointed out a significant decrease in the Genuine Acceptance Rate (GAR). In particular, our previous studies showed that the PAD contributes predominantly to this drop, regardless of the comparison system used. This paper’s goal is to establish a systematic approach to computing the “trade-off” between the gain in Impostor Attack Presentation Accept Rate (IAPAR) and the loss in GAR mentioned above. We propose a formal “trade-off” definition to measure the balance between tackling presentation attacks and the performance drop on genuine users. Experimental simulations and theoretical expectations confirm that an appropriate “trade-off” definition allows a complete view of the potential of sequential embedding.
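The paper’s formal definition is not reproduced here; as a hedged illustration only, one plausible way to score such a trade-off is to weigh the IAPAR reduction against the GAR loss incurred when a PAD is placed in front of a matcher. The function below and its weighting scheme are assumptions for exposition, not the paper’s metric.

```python
def tradeoff_gain(gar_alone, iapar_alone, gar_seq, iapar_seq, weight=1.0):
    """Illustrative trade-off score: security gained minus usability lost.

    gar_alone / iapar_alone: matcher operating alone
    gar_seq   / iapar_seq:   PAD + matcher in sequence
    weight: how many points of GAR we are willing to pay for one point
            of IAPAR reduction (an assumed exchange rate).
    """
    security_gain = iapar_alone - iapar_seq   # attacks rejected thanks to the PAD
    usability_loss = gar_alone - gar_seq      # genuine users wrongly rejected
    return security_gain - weight * usability_loss

# Under this score, a PAD operating point is worth adopting when the value is positive.
print(tradeoff_gain(gar_alone=0.99, iapar_alone=0.60,
                    gar_seq=0.96, iapar_seq=0.03))
```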
Interpretability of fingerprint presentation attack detection systems: a look at the “representativeness” of samples against never-seen-before attacks
Nowadays, fingerprint Presentation Attack Detection systems (PADs) are primarily based on deep learning architectures subjected to massive training. However, their performance degrades against never-seen-before attacks. With the goal of helping explain this issue, we hypothesized that this limited ability to generalize is due to the lack of "representativeness" of the samples available for PAD training. "Representativeness" is treated here from a geometrical perspective: the spread of samples across the feature space, especially near the decision boundaries. In particular, we explored the adoption of three dimensionality reduction methods to make the problem tractable through visual inspection. These methods project the data into two-dimensional spaces, facilitating the identification of weak areas in the decision regions estimated after the training phase. Our analysis delineates the benefits and drawbacks of each dimensionality reduction method and leads us to make substantial recommendations for the crucial phase of training design.
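The abstract does not name the three methods; as a minimal sketch under that caveat, the snippet below projects hypothetical PAD feature vectors into 2D with two common choices (PCA and t-SNE from scikit-learn) and colors them by class, which is the kind of inspection the paper describes.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

# Hypothetical PAD features: rows are samples, columns are deep-feature dimensions.
rng = np.random.default_rng(0)
live = rng.normal(0.0, 1.0, size=(200, 128))
spoof = rng.normal(0.8, 1.2, size=(200, 128))
X = np.vstack([live, spoof])
y = np.array([0] * 200 + [1] * 200)  # 0 = bona fide, 1 = presentation attack

for name, proj in [("PCA", PCA(n_components=2)),
                   ("t-SNE", TSNE(n_components=2, random_state=0))]:
    Z = proj.fit_transform(X)
    plt.figure()
    plt.scatter(Z[y == 0, 0], Z[y == 0, 1], s=8, label="bona fide")
    plt.scatter(Z[y == 1, 0], Z[y == 1, 1], s=8, label="spoof")
    plt.title(f"{name} projection of PAD features")
    plt.legend()
plt.show()
```

Sparse or empty regions where the two classes meet are the kind of "weak areas" the paper associates with poor generalization to unseen attack materials.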
Data generation via diffusion models for crowd anomaly detection
Crowd analysis is a critical aspect of public security and video surveillance. One of the primary challenges in developing effective crowd anomaly detectors is the lack of comprehensive training data. To address this issue, we investigate the use of synthetic data to enhance training for anomaly detection in crowded environments, generating a dataset of synthetic videos with two open-source diffusion models. Each synthetic video depicts typical crowded scenes that may be either normal or anomalous. To assess the effectiveness of our approach, we compare the detector's performance across three training scenarios: using only real videos, only synthetic videos, and a combination of both. This preliminary analysis highlights the potential of data generated via diffusion models to improve the stability and classification capabilities of crowd anomaly detectors.
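The abstract does not name the two diffusion models; as an illustrative sketch only, the snippet below uses the Hugging Face diffusers library to generate still crowd-scene images of both normal and anomalous situations. The checkpoint and prompts are assumptions, and stills stand in for the paper's videos to keep the sketch short.

```python
import torch
from diffusers import StableDiffusionPipeline

# Model choice and prompts are illustrative assumptions, not the paper's setup.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompts = {
    "normal": "a crowded city square, people walking calmly, cctv viewpoint",
    "anomalous": "a crowded city square, people running and scattering in panic, cctv viewpoint",
}

for label, prompt in prompts.items():
    for i in range(4):  # a handful of samples per class for the sketch
        image = pipe(prompt).images[0]
        image.save(f"synthetic_{label}_{i:02d}.png")
```

A text-to-video pipeline would follow the same pattern; the synthetic clips are then mixed with real footage in the three training scenarios the abstract lists.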
Texture and artifact decomposition for improving generalization in deep-learning-based deepfake detection
The harmful utilization of DeepFake technology poses a significant threat to public welfare, precipitating a crisis in public opinion. Existing detection methodologies, predominantly relying on convolutional neural networks and deep learning paradigms, focus on achieving high in-domain recognition accuracy across many forgery techniques. However, overlooking the intricate interplay between textures and artifacts results in compromised performance across diverse forgery scenarios. This paper introduces a framework, denoted the Texture and Artifact Detector (TAD), to mitigate the limited generalization ability stemming from the mutual neglect of textures and artifacts. Specifically, our approach delves into the similarities among disparate forged datasets, discerning synthetic content based on the consistency of textures and the presence of artifacts. Furthermore, we use a model ensemble learning strategy to judiciously aggregate the texture disparities and artifact patterns inherent in various forgery types, thereby enhancing the model’s generalization ability. Our comprehensive experimental analysis, encompassing extensive intra-dataset and cross-dataset validations along with evaluations on both video sequences and individual frames, confirms the effectiveness of TAD. The results on four benchmark datasets highlight the significant impact of the synergistic consideration of texture and artifact information, leading to a marked improvement in detection capabilities.
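TAD's actual architecture is not specified at this level of detail; as a hedged sketch of the general model-ensemble idea it describes, the snippet below averages the scores of a texture-oriented branch and an artifact-oriented branch. The backbones, the late-fusion rule, and the weight are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class TwoBranchDetector(nn.Module):
    """Illustrative two-branch deepfake detector: one branch for texture cues,
    one for artifact cues; the final score is a weighted average (late fusion)."""

    def __init__(self, fusion_weight=0.5):
        super().__init__()
        self.texture_branch = models.resnet18(num_classes=1)   # stand-in backbone
        self.artifact_branch = models.resnet18(num_classes=1)  # stand-in backbone
        self.w = fusion_weight

    def forward(self, x):
        t = torch.sigmoid(self.texture_branch(x))   # texture-based fake probability
        a = torch.sigmoid(self.artifact_branch(x))  # artifact-based fake probability
        return self.w * t + (1 - self.w) * a        # ensemble score in [0, 1]

model = TwoBranchDetector().eval()                  # inference mode
score = model(torch.randn(1, 3, 224, 224))          # one RGB frame
```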
Development and validation of a prediction score to assess the risk of incurring COPD-related exacerbations: a population-based study in primary care
Background: Chronic obstructive pulmonary disease (COPD) is the fourth leading cause of death in high-income countries. Inappropriate use of COPD inhaled therapy, including low adherence (only 10-40% of patients report adequate compliance), may reduce or even nullify the proven benefits of these medications. As such, an accurate prediction algorithm to assess, at the national level, the risk of COPD exacerbation might help general practitioners (GPs) improve patients' therapy. Methods: We formed a cohort of patients aged 45 years or older diagnosed with COPD between January 2013 and December 2021. Each patient was followed until the occurrence of a COPD exacerbation or the end of 2021. Sixteen determinants were adopted to assemble the CopdEX-Health Search score (CEX-HScore), which was developed and validated on the two related sub-cohorts. Results: We identified 63,763 patients aged 45 years or older diagnosed with COPD (mean age: 67.8, SD: 11.7; 57.7% males). When the risk of COPD exacerbation was estimated via the CEX-HScore, its predicted value was 14.22% over a 6-month event horizon. Discrimination accuracy and explained variation were 66% (95% CI: 65-67%) and 10% (95% CI: 9-11%), respectively. The calibration slope did not significantly differ from unity (p = 0.514). Conclusions: The CEX-HScore showed fair accuracy in predicting COPD-related exacerbations over a 6-month follow-up. Such a tool might therefore support GPs in enhancing COPD patients' care and improving their outcomes by facilitating personalized approaches through a score-based decision support system.
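For readers unfamiliar with the reported validation metrics, the sketch below shows one standard way a calibration slope can be checked: regressing observed outcomes on the log-odds of the predicted risk, where a slope near 1 indicates good calibration. The data here are simulated, and this is a generic illustration rather than the paper's exact procedure.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
p_hat = rng.uniform(0.02, 0.5, size=5000)   # hypothetical predicted 6-month risks
y = (rng.random(5000) < p_hat).astype(int)  # simulated outcomes, perfectly calibrated

log_odds = np.log(p_hat / (1 - p_hat)).reshape(-1, 1)
# penalty=None (scikit-learn >= 1.2) gives plain maximum-likelihood logistic regression.
slope = LogisticRegression(penalty=None).fit(log_odds, y).coef_[0, 0]
print(f"calibration slope ~ {slope:.2f}")   # ~1.0 indicates good calibration
```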
Biliary pancreatic diversion and laparoscopic adjustable gastric banding in morbid obesity: their long-term effects on metabolic syndrome and on cardiovascular parameters
Background: Bariatric surgery can improve glucose and lipid metabolism and cardiovascular function in morbid obesity. The aim of this study was to compare the long-term effects of malabsorptive (biliary pancreatic diversion, BPD) and restrictive (laparoscopic adjustable gastric banding, LAGB) procedures on metabolic and cardiovascular parameters, as well as on metabolic syndrome, in morbidly obese patients. Methods: 170 patients studied between 1989 and 2001 were called back after a mean period of 65 months. 138 patients, comprising those undergoing BPD (n = 23) or LAGB (n = 78) and control patients (refusing surgery and treated with diet, n = 37), were analysed for body mass index (BMI), blood glucose, cholesterol, triglycerides, blood pressure, heart rate, and ECG indexes (QTc, Cornell voltage-duration product, and rate-pressure product). Results: After a mean period of 65 months, surgery was more effective than diet on all items under evaluation; diabetes, hypertension, and metabolic syndrome disappeared more frequently in surgical than in control patients, and new cases appeared only in controls. BPD was more effective than LAGB on BMI, on almost all cardiovascular parameters, and on cholesterol, but not on triglycerides and blood glucose. Disappearance of diabetes, hypertension, and metabolic syndrome was similar with BPD and LAGB, and no new cases were observed. Conclusion: These data indicate that BPD, likely owing to a greater BMI decrease, is more effective than LAGB in improving cardiovascular parameters, and similar to LAGB on metabolic parameters, in obese patients. The greater effect on cholesterol levels is probably due to the different mechanism of action.
LivDet2023 - Fingerprint Liveness Detection Competition: Advancing Generalization
The International Fingerprint Liveness Detection Competition (LivDet) is a biennial event that invites academic and industry participants to demonstrate their advances in Fingerprint Presentation Attack Detection (PAD). This edition, LivDet2023, proposed two challenges, "Liveness Detection in Action" and "Fingerprint Representation", to evaluate the efficacy of PAD embedded in verification systems and the effectiveness and compactness of feature sets. A third, "hidden" challenge was the inclusion of two subsets in the training set whose sensor information is unknown, testing participants' ability to generalize their models. Only bona fide fingerprint samples were provided to participants, and the competition reports and assesses the performance of their algorithms under this limitation in data availability.
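Training a PAD when only bona fide samples are available is essentially one-class modelling. As a hedged sketch of that general idea (not any competitor's method), the snippet below fits a one-class SVM on live-sample features and flags outliers as presumed presentation attacks; the features and the nu setting are assumptions.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
bona_fide_train = rng.normal(0.0, 1.0, size=(500, 64))  # hypothetical live-only features

# nu bounds the fraction of training data treated as outliers (an assumed setting).
detector = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(bona_fide_train)

test = np.vstack([rng.normal(0.0, 1.0, size=(50, 64)),   # unseen live samples
                  rng.normal(2.5, 1.0, size=(50, 64))])  # unseen spoof-like samples
pred = detector.predict(test)  # +1 = consistent with bona fide, -1 = presumed attack
print((pred[:50] == 1).mean(), (pred[50:] == -1).mean())
```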
Normative growth charts for Shwachman-Diamond syndrome from an Italian cohort aged 0-8 years
Shwachman-Diamond syndrome (SDS) is a rare autosomal recessive disorder. Its predominant manifestations include exocrine pancreatic insufficiency, bone marrow failure, and skeletal abnormalities. Patients frequently present failure to thrive and susceptibility to short stature. Average birth weight is at the 25th percentile; by the first birthday, more than 50% of patients drop below the third percentile for height and weight. This study aims to estimate growth charts for patients affected by SDS in order to provide a reference tool for medical care and growth surveillance throughout the first 8 years of a patient's life.
