144 research outputs found
Location Privacy in Spatial Crowdsourcing
Spatial crowdsourcing (SC) is a new platform that engages individuals in
collecting and analyzing environmental, social and other spatiotemporal
information. With SC, requesters outsource their spatiotemporal tasks to a set
of workers, who will perform the tasks by physically traveling to the tasks'
locations. This chapter identifies privacy threats toward both workers and
requesters during the two main phases of spatial crowdsourcing, tasking and
reporting. Tasking is the process of identifying which tasks should be assigned
to which workers. This process is handled by a spatial crowdsourcing server
(SC-server). The latter phase is reporting, in which workers travel to the
tasks' locations, complete the tasks and upload their reports to the SC-server.
The challenge is to enable effective and efficient tasking as well as reporting
in SC without disclosing the actual locations of workers (at least until they
agree to perform a task) and the tasks themselves (at least to workers who are
not assigned to those tasks). This chapter aims to provide an overview of the
state-of-the-art in protecting users' location privacy in spatial
crowdsourcing. We provide a comparative study of a diverse set of solutions in
terms of task publishing modes (push vs. pull), problem focuses (tasking and
reporting), threats (server, requester and worker), and underlying technical
approaches (from pseudonymity, cloaking, and perturbation to exchange-based and
encryption-based techniques). The strengths and drawbacks of the techniques are
highlighted, leading to a discussion of open problems and future work.
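To make the perturbation category concrete, the sketch below adds planar Laplace (geo-indistinguishability-style) noise to a worker's location before it is sent to the SC-server. The function name, parameters, and sampling approach are our own illustration, not a specific scheme from the chapter; the radial distance of planar Laplace noise follows a Gamma(2, 1/epsilon) distribution, which makes sampling straightforward.

```python
import math
import random

def perturb_location(x_m, y_m, epsilon, rng):
    """Perturb a planar location with polar (planar) Laplace noise.

    x_m, y_m: coordinates in metres on a local plane.
    epsilon:  privacy parameter per metre (smaller = more noise).
    Illustrative sketch only.
    """
    theta = rng.uniform(0.0, 2.0 * math.pi)   # uniform random direction
    r = rng.gammavariate(2.0, 1.0 / epsilon)  # radius ~ Gamma(2, 1/epsilon)
    return x_m + r * math.cos(theta), y_m + r * math.sin(theta)

rng = random.Random(42)
# The SC-server sees only the perturbed point, never the true location.
reported = perturb_location(0.0, 0.0, epsilon=0.01, rng=rng)
```

With epsilon = 0.01 per metre, the expected displacement is 2/epsilon = 200 m, illustrating the privacy/utility trade-off that the tasking phase must then work around.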
Differential privacy and game theory in cybersecurity
University of Technology Sydney, Faculty of Engineering and Information Technology.
The vast majority of cybersecurity solutions are founded on game theory, and differential privacy is emerging as perhaps the most rigorous and widely adopted privacy paradigm in the field. Yet, despite the advances in both fields, applications remain vulnerable to privacy violations, security breaches, and manipulation by adversaries, and the current understanding of the interactions between differential privacy and game-theoretic solutions is limited. This thesis therefore undertook a comprehensive exploration of differential privacy and game theory in cybersecurity, finding that differential privacy has several advantageous properties that let it contribute more to game theory than privacy protection alone. It can also be used to build heuristic game-theoretic models for cybersecurity solutions, to avert strategic manipulation by adversaries, and to quantify the cost of information leakage. With a focus on cybersecurity, this thesis aims to provide new perspectives on currently held impossibility results in privacy and security, potential avenues to circumvent those impossibilities, and opportunities to improve the performance of cybersecurity solutions with game-theoretic and differentially private techniques.
Really Unlearned? Verifying Machine Unlearning via Influential Sample Pairs
Machine unlearning enables pre-trained models to eliminate the effects of
partial training samples. Previous research has mainly focused on proposing
efficient unlearning strategies. However, the verification of machine
unlearning, or in other words, how to guarantee that a sample has been
successfully unlearned, has been overlooked for a long time. Existing
verification schemes typically rely on machine learning attack techniques, such
as backdoor or membership inference attacks. Because these techniques were not
formally designed for verification, they are easily bypassed when an
untrustworthy MLaaS provider rapidly fine-tunes the model merely to meet the
verification conditions rather than executing real unlearning. In this paper, we propose a
formal verification scheme, IndirectVerify, to determine whether unlearning
requests have been successfully executed. We design influential sample pairs:
one referred to as trigger samples and the other as reaction samples. Users
send unlearning requests regarding trigger samples and use reaction samples to
verify if the unlearning operation has been successfully carried out. We
propose a perturbation-based scheme to generate those influential sample pairs.
The objective is to perturb only a small fraction of trigger samples, leading
to the reclassification of reaction samples. This indirect influence will be
used for our verification purposes. In contrast to existing schemes that employ
the same samples throughout, our scheme, IndirectVerify, provides enhanced
robustness, making it far less susceptible to being bypassed.
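The core verification idea can be sketched with a toy model: trigger samples are crafted so that their presence in training shifts the decision boundary near a reaction sample, and an honest unlearning of the triggers flips the reaction sample's predicted label. Everything below (the nearest-centroid stand-in model, the data layout) is our own minimal illustration of the mechanism, not the paper's construction.

```python
import numpy as np

class NearestCentroid:
    """Toy classifier standing in for the MLaaS model."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        return self
    def predict(self, X):
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None, :, :], axis=2)
        return self.classes_[d.argmin(axis=1)]

rng = np.random.default_rng(0)
# Base training data: two well-separated classes.
X0 = rng.normal([0.0, 0.0], 0.1, size=(20, 2))
X1 = rng.normal([4.0, 0.0], 0.1, size=(20, 2))
# Trigger samples: class-0 points crafted to drag the class-0
# centroid toward the reaction sample.
Xt = rng.normal([3.0, 0.0], 0.1, size=(20, 2))
X = np.vstack([X0, X1, Xt])
y = np.array([0] * 20 + [1] * 20 + [0] * 20)

reaction = np.array([[2.2, 0.0]])  # the user's reaction sample

before = NearestCentroid().fit(X, y).predict(reaction)[0]
# Honest unlearning: retrain without the trigger samples (rows 40:).
after = NearestCentroid().fit(X[:40], y[:40]).predict(reaction)[0]
# The reaction label flips iff the triggers were genuinely unlearned.
```

If the provider merely fine-tunes to pass a direct check on the trigger samples, the indirect influence on the reaction sample remains, which is what makes the pair-based check harder to game.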
Towards Efficient Target-Level Machine Unlearning Based on Essential Graph
Machine unlearning is an emerging technology that has come to attract
widespread attention. A number of factors, including regulations and laws,
privacy, and usability concerns, have resulted in this need to allow a trained
model to forget some of its training data. Existing studies of machine
unlearning mainly focus on unlearning requests that forget a cluster of
instances or all instances from one class. While these approaches are effective
in removing instances, they do not scale to scenarios where partial targets
within an instance need to be forgotten. For example, one would like to only
unlearn a person from all instances that simultaneously contain the person and
other targets. Directly migrating instance-level unlearning to target-level
unlearning either degrades the model's performance after the unlearning
process or fails to erase the information completely. To address these
concerns, we propose a more effective and efficient unlearning scheme that
focuses on removing partial targets from the model, which we name "target
unlearning".
Specifically, we first construct an essential graph data structure to describe
the relationships between all important parameters that are selected based on
the model explanation method. We then filter out parameters that are also
important for the remaining targets and apply a pruning-based unlearning
method, a simple but effective way to remove information about the target that
needs to be forgotten. Experiments with different training models on various
datasets demonstrate the effectiveness of the proposed approach.
Machine Unlearning: A Survey
Machine learning has attracted widespread attention and evolved into an
enabling technology for a wide range of highly successful applications, such as
intelligent computer vision, speech recognition, medical diagnosis, and more.
Yet a special need has arisen where, due to privacy, usability, and/or the
right to be forgotten, information about some specific samples needs to be
removed from a model, called machine unlearning. This emerging technology has
drawn significant interest from both academics and industry due to its
innovation and practicality. At the same time, this ambitious problem has led
to numerous research efforts aimed at confronting its challenges. To the best
of our knowledge, no study has analyzed this complex topic or compared the
feasibility of existing unlearning solutions in different kinds of scenarios.
Accordingly, with this survey, we aim to capture the key concepts of unlearning
techniques. The existing solutions are classified and summarized based on their
characteristics within an up-to-date and comprehensive review of each
category's advantages and limitations. The survey concludes by highlighting
some of the outstanding issues with unlearning techniques, along with some
feasible directions for new research opportunities.
Federated Learning with Blockchain-Enhanced Machine Unlearning: A Trustworthy Approach
With the growing need to comply with privacy regulations and respond to user
data deletion requests, integrating machine unlearning into IoT-based federated
learning has become imperative. Traditional unlearning methods, however, often
lack verifiable mechanisms, leading to challenges in establishing trust. This
paper delves into the innovative integration of blockchain technology with
federated learning to surmount these obstacles. Blockchain fortifies the
unlearning process through its inherent qualities of immutability,
transparency, and robust security. It facilitates verifiable certification,
harmonizes security with privacy, and sustains system efficiency. We introduce
a framework that melds blockchain with federated learning, thereby ensuring an
immutable record of unlearning requests and actions. This strategy not only
bolsters the trustworthiness and integrity of the federated learning model but
also adeptly addresses efficiency and security challenges typical in IoT
environments. Our key contributions encompass a certification mechanism for the
unlearning process, the enhancement of data security and privacy, and the
optimization of data management to ensure system responsiveness in IoT
scenarios. Comment: 13 pages, 25 figures.
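The immutable-record idea can be illustrated with a toy hash chain over unlearning requests and actions: each entry commits to its predecessor, so any tampering with a past record breaks verification. This is a minimal sketch of the tamper-evidence property only; the record fields are hypothetical and a real deployment would use an actual blockchain, as the framework proposes.

```python
import hashlib
import json

def append_record(chain, record):
    """Append a record to a toy hash chain (illustration only)."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps(record, sort_keys=True)
    h = hashlib.sha256((prev_hash + body).encode()).hexdigest()
    chain.append({"prev": prev_hash, "record": record, "hash": h})
    return chain

def verify_chain(chain):
    """Recompute every link; any modified record breaks the chain."""
    prev_hash = "0" * 64
    for block in chain:
        body = json.dumps(block["record"], sort_keys=True)
        if block["prev"] != prev_hash or \
           hashlib.sha256((prev_hash + body).encode()).hexdigest() != block["hash"]:
            return False
        prev_hash = block["hash"]
    return True

chain = []
append_record(chain, {"op": "unlearn_request", "client": "dev-17", "sample_ids": [3, 9]})
append_record(chain, {"op": "unlearn_done", "client": "dev-17", "round": 42})
ok_before = verify_chain(chain)                # True: chain is intact
chain[0]["record"]["sample_ids"] = [3]         # tampering with history...
ok_after = verify_chain(chain)                 # ...is detected: False
```

This is what "verifiable certification" buys: an IoT client can check that its deletion request and the server's unlearning action are both permanently and checkably on record.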
Federated TrustChain: Blockchain-Enhanced LLM Training and Unlearning
The development of Large Language Models (LLMs) faces a significant
challenge: the exhaustion of publicly available fresh data, since training an
LLM demands large volumes of new data. Federated learning emerges as a
promising solution, enabling collaborating parties to contribute their private
data to training a global LLM. However, integrating federated learning with
LLMs introduces new challenges, including the lack of transparency and the need
for effective unlearning mechanisms. Transparency is essential to ensuring
trust and fairness among participants, while accountability is crucial for
deterring malicious behaviour and enabling corrective actions when necessary.
To address these challenges, we propose a novel blockchain-based federated
learning framework for LLMs that enhances transparency, accountability, and
unlearning capabilities. Our framework leverages blockchain technology to
create a tamper-proof record of each model's contributions and introduces an
innovative unlearning function that seamlessly integrates with the federated
learning mechanism. We investigate the impact of Low-Rank Adaptation (LoRA)
hyperparameters on unlearning performance and integrate Hyperledger Fabric to
ensure the security, transparency, and verifiability of the unlearning process.
Through comprehensive experiments and analysis, we showcase the effectiveness
of our proposed framework in achieving highly effective unlearning in LLMs
trained using federated learning. Our findings highlight the feasibility of
integrating blockchain technology into federated learning frameworks for LLMs. Comment: 16 pages, 7 figures.
QUEEN: Query Unlearning against Model Extraction
Model extraction attacks currently pose a non-negligible threat to the
security and privacy of deep learning models. By querying the model with a
small dataset and using the query results as ground-truth labels, an adversary
can steal a piracy model with performance comparable to the original model.
Two key issues underlie this threat: the adversary can obtain accurate and
unlimited query results, and can aggregate those results to train a piracy
model step by step. The
existing defenses usually employ model watermarking or fingerprinting to
protect the ownership. However, these methods cannot proactively prevent the
violation from happening. To mitigate the threat, we propose QUEEN (QUEry
unlEarNing) that proactively launches counterattacks on potential model
extraction attacks from the very beginning. To limit the potential threat,
QUEEN combines sensitivity measurement with output perturbation to prevent the
adversary from training a high-performance piracy model. In sensitivity
measurement, QUEEN measures the sensitivity of a single query by its distance
from the center of its cluster in the feature space. To reduce the learning
accuracy of attacks, QUEEN applies query unlearning to highly sensitive query
batches: gradient reversal is used to perturb the softmax output so that the
piracy model unknowingly trains on reversed gradients that degrade its
performance. Experiments show that QUEEN outperforms the
state-of-the-art defenses against various model extraction attacks with a
relatively low cost to the model accuracy. The artifact is publicly available
at https://anonymous.4open.science/r/queen implementation-5408/
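The sensitivity-then-perturb pipeline can be sketched as follows. Distance to the nearest cluster center serves as the sensitivity score, as the abstract describes; the threshold rule and the perturbation itself (flattening confident outputs toward uniform) are our stand-ins for the paper's gradient-reversal mechanism, kept simple for illustration.

```python
import numpy as np

def sensitivity(features, centers):
    """Sensitivity of each query = distance from its feature vector
    to the nearest cluster center in feature space."""
    d = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=2)
    return d.min(axis=1)

def defend(softmax_out, features, centers, threshold):
    """Perturb the softmax output of high-sensitivity queries.
    QUEEN reverses gradients; here we merely mix the distribution
    toward uniform as a simplified stand-in perturbation."""
    out = softmax_out.copy()
    hot = sensitivity(features, centers) > threshold   # flagged queries
    k = out.shape[1]
    out[hot] = 0.5 * out[hot] + 0.5 / k                # damp confident answers
    return out

centers = np.array([[0.0, 0.0]])                 # one cluster center
feats = np.array([[0.1, 0.0],                    # benign query, near center
                  [5.0, 0.0]])                   # suspicious query, far away
sm = np.array([[0.9, 0.1],
               [0.9, 0.1]])
out = defend(sm, feats, centers, threshold=1.0)
# Benign query answered faithfully; suspicious query's output is perturbed.
```

Only the flagged query's output is degraded, which is how this style of defense keeps the cost to honest users' accuracy low while spoiling the adversary's training signal.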
A clinical study on the application of three-dimensionally printed splints combined with finite element analysis in paediatric distal radius fractures
Purpose: This single-centre randomised clinical trial assessed the clinical efficacy and patient satisfaction of 3D-printed splints optimised via finite element analysis (FEA) for paediatric distal radius fractures. Methods: This retrospective study included 56 children diagnosed with forearm fractures at our hospital between August 2023 and August 2024. Those who underwent traditional U-shaped forearm plaster immobilisation were compared with those who received a customised 3D-printed splint. FEA was conducted based on the biomechanical characteristics of the forearm; the splint structure was optimised based on the analysis results and fabricated via 3D printing. Outcomes were evaluated using the Patient Satisfaction Questionnaire and the Wong-Baker Faces Pain Scale–Revised. Forearm function was evaluated using the Mayo Wrist Score and radiological outcomes. A power calculation was not performed owing to the exploratory scope and resource limitations of this preliminary study. Results: Treatment costs differed significantly between the two groups (p = 0.001). In the Patient Satisfaction Questionnaire, the hot-and-sweaty item showed no significant difference (p = 0.089), whereas the last week's comfort (p = 0.001), first-applied comfort (p = 0.004), weight (p = 0.001), itchiness (p = 0.015), smell (p = 0.003), and overall satisfaction (p = 0.004) items differed significantly between the two groups. The Mayo Wrist Score showed a statistically significant difference between the two groups after external fixation removal (p = 0.044). There were no significant differences between the two groups in the palmar tilt angle (p = 0.196), ulnar deviation angle (p = 0.460), or height of the radial styloid (p = 0.111). Conclusion: Both 3D-printed splint and plaster cast fixation can effectively treat distal radius fractures in children, but the 3D-printed splint showed superior patient acceptance.
Medical Economic Consequences, Predictors, and Outcomes of Immediate Atrial Fibrillation Recurrence after Radiofrequency Ablation
Background and aims: Immediate recurrence (Im-Recurr), a type of atrial fibrillation (AF) recurrence occurring during the blanking period after radiofrequency catheter ablation (RFCA), has received little attention. Therefore, this study aimed to explore the clinical significance of Im-Recurr in patients with AF after RFCA. Methods: This study retrospectively included patients with AF who underwent RFCA at our center. Regression, propensity score matching (PSM), and survival curve analyses were conducted to investigate the effects of Im-Recurr on costs, hospitalization durations, and AF recurrence rates, and to identify predictors of Im-Recurr. Results: A total of 898 patients were included, among whom 128 developed Im-Recurr after RFCA. Multiple linear regression analysis revealed that Im-Recurr correlated with higher costs, longer overall hospitalization, and longer post-ablation hospitalization. Logistic regression and PSM analyses indicated that intraoperative electric cardioversion (IEC) was an independent predictor of Im-Recurr. The follow-up results suggested a significantly higher 1-year cumulative AF recurrence rate in the Im-Recurr group than in the control group. Conclusions: Im-Recurr significantly increases the cost and length of hospitalization for patients with AF undergoing RFCA and is associated with an elevated 1-year cumulative AF recurrence rate. IEC serves as an independent predictor of Im-Recurr. Registration number: ChiCTR2200065235.
