Bounded incentives in manipulating the probabilistic serial rule
The Probabilistic Serial mechanism is valued for its fairness and efficiency in addressing the random assignment problem. However, it lacks truthfulness, meaning it works well only when agents' stated preferences match their true ones. Significant utility gains from strategic behavior may lead self-interested agents to manipulate the mechanism, undermining its practical adoption. To gauge the potential for manipulation, we study an extreme scenario in which a manipulator has complete knowledge of the other agents' reports and unlimited computational resources to find its best strategy. We establish tight bounds on the incentive ratio of the mechanism. We complement these worst-case guarantees with experiments that assess an agent's average utility gain from manipulation; the findings reveal that the incentive to manipulate is very small. These results offer insight into the mechanism's resilience against strategic manipulation, moving beyond the mere recognition that it is not incentive compatible.
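For concreteness, the rule itself can be stated as a simultaneous-eating procedure: all agents "eat" their most-preferred remaining item at unit speed, and the fraction of each item an agent has eaten by time 1 is the probability of receiving it. Below is a minimal Python sketch of this procedure under the usual assumptions (strict preferences, unit-capacity items, as many items as agents); the function name and input format are illustrative, not taken from the paper.

```python
def probabilistic_serial(prefs):
    """Probabilistic Serial (simultaneous eating) rule.

    prefs: one strict preference list per agent, e.g. prefs[i] = [best item, ...]
           over items 0..m-1. Assumes at least as many items as agents, so
           eating lasts until time 1.
    Returns p with p[i][j] = probability that agent i receives item j.
    """
    n, m = len(prefs), len(prefs[0])
    remaining = [1.0] * m                       # unconsumed fraction of each item
    p = [[0.0] * m for _ in range(n)]
    t = 0.0
    while t < 1.0 - 1e-12:
        # Each agent eats their most-preferred item that is not yet exhausted.
        targets = [next(j for j in prefs[i] if remaining[j] > 1e-12)
                   for i in range(n)]
        eaters = {j: targets.count(j) for j in set(targets)}
        # Advance time until some item being eaten runs out (unit eating speed).
        dt = min(1.0 - t, min(remaining[j] / eaters[j] for j in eaters))
        for i, j in enumerate(targets):
            p[i][j] += dt
        for j, k in eaters.items():
            remaining[j] -= dt * k
        t += dt
    return p

# Example: two agents with identical preferences -> each gets 1/2 of each item.
print(probabilistic_serial([[0, 1], [0, 1]]))
```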
Cost Minimization for Equilibrium Transition
In this paper, we delve into the problem of using monetary incentives to
encourage players to shift from an initial Nash equilibrium to a more favorable
one within a game. Our main focus revolves around computing the minimum reward
required to facilitate this equilibrium transition. The game involves a single row player and multiple column players, each endowed with their own set of strategies. Our findings reveal that determining whether the minimum reward is zero is NP-complete, and computing the minimum reward is APX-hard. Nonetheless, we bring some positive news: the problem can be handled efficiently if either the number of the row player's strategies or the number of column players is a fixed constant. Furthermore, we devise a polynomial-time approximation algorithm with an additive error. Lastly, we explore the special case in which the utility functions are single-peaked and show that the optimal reward can then be computed in polynomial time.
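As a toy illustration of the equilibrium-transition objective (not the paper's algorithm or its one-row-player, many-column-player setting), the sketch below computes, for a two-player bimatrix game, the minimum total bonus paid at a target profile that turns that profile into a pure Nash equilibrium; the names and the payment scheme are assumptions made for illustration.

```python
import numpy as np

def min_reward_to_stabilize(A, B, target):
    """Minimum total subsidy, paid only at `target`, that makes `target` a pure
    Nash equilibrium of the bimatrix game (A, B).

    A[i, j]: row player's utility, B[i, j]: column player's utility.
    Toy two-player illustration of the equilibrium-transition idea.
    """
    i, j = target
    row_gap = max(A[:, j].max() - A[i, j], 0.0)   # row player's gain from deviating
    col_gap = max(B[i, :].max() - B[i, j], 0.0)   # column player's gain from deviating
    return row_gap + col_gap

# Coordination game: stabilizing the good profile (0, 0) costs nothing because
# it is already an equilibrium, whereas (0, 1) requires a positive reward.
A = np.array([[3.0, 0.0], [0.0, 1.0]])
B = np.array([[3.0, 0.0], [0.0, 1.0]])
print(min_reward_to_stabilize(A, B, (0, 0)))   # 0.0
print(min_reward_to_stabilize(A, B, (0, 1)))   # 4.0
```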
Evaluating Cryptocurrency Market Risk on the Blockchain: An Empirical Study Using the ARMA-GARCH-VaR Model
Cryptocurrency, a novel digital asset within the blockchain technology ecosystem, has recently garnered significant attention in the investment world. Despite its growing popularity, the inherent volatility and instability of cryptocurrency investments necessitate a thorough risk evaluation. This study combines the Autoregressive Moving Average (ARMA) model with the Generalized Autoregressive Conditional Heteroskedasticity (GARCH) model to analyze the volatility of three major cryptocurrencies—Bitcoin (BTC), Ethereum (ETH), and Binance Coin (BNB)—over the period from January 1, 2017, to October 29, 2022. The dataset comprises daily closing prices, offering a comprehensive view of the market's fluctuations. Our analysis revealed that the value-at-risk (VaR) curves for these cryptocurrencies exhibit significant volatility, spanning a broad range of returns. The overall risk profile is relatively high, with ETH exhibiting the highest risk, followed by BTC and BNB. The ARMA-GARCH-VaR model has proven effective in quantifying and assessing the market risks associated with cryptocurrencies, providing valuable insights for investors and policymakers navigating the complex landscape of digital assets.
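A minimal sketch of such an estimation pipeline is given below, assuming the Python `arch` package, an AR(1) mean as a stand-in for the ARMA mean equation, a GARCH(1,1) variance, normal innovations, and a one-day 95% VaR; the paper's exact model orders, distributional choices, and data handling may differ.

```python
import numpy as np
import pandas as pd
from arch import arch_model
from scipy.stats import norm

def one_day_var(close: pd.Series, alpha: float = 0.05) -> float:
    """One-day value-at-risk from an AR(1)-GARCH(1,1) fit to daily log returns.

    Illustrative sketch only: model orders and the normal-innovation assumption
    are simplifications of the ARMA-GARCH-VaR setup described in the abstract.
    """
    returns = 100 * np.log(close).diff().dropna()        # daily log returns in %
    model = arch_model(returns, mean="AR", lags=1, vol="GARCH", p=1, q=1)
    res = model.fit(disp="off")
    fc = res.forecast(horizon=1)
    mu = fc.mean.iloc[-1, 0]
    sigma = np.sqrt(fc.variance.iloc[-1, 0])
    return mu + sigma * norm.ppf(alpha)                  # left-tail quantile, in %

# Usage (hypothetical CSV of BTC daily closes):
# btc = pd.read_csv("btc_daily.csv", index_col=0, parse_dates=True)["close"]
# print(one_day_var(btc))
```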
Spread and Control of Mobile Benign Worm Based on Two-Stage Repairing Mechanism
Both in traditional social networks and in mobile network environments, worms are a serious and growing threat. Mobile smartphones have broadly promoted the development of mobile networks, and traditional antivirus technologies are largely ineffective against worms in this setting. The development of benign worms, especially active and passive benign worms, has become a new network security measure. In this paper, we focus on the spread of worms in the mobile environment and propose a benign-worm control and repair mechanism. The control process of mobile benign worms is divided into two stages: the first stage is rapid repair control, which uses active benign worms to counter malicious worms in the mobile network; when the network becomes relatively stable, the second stage of post-repair begins, using passive mode to optimize the environment and keep the mobile network under control. Depending on whether benign worms are present, we simplify the model and analyze four situations, and finally verify the model by simulation. This control mechanism for benign-worm propagation offers practical guidance for securing mobile networks.
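The two-stage idea can be illustrated with a toy compartmental simulation. The sketch below is a minimal Euler integration with assumed compartments and rates (susceptible, infected, active benign worm, repaired); it is not the paper's model or parameterization.

```python
import numpy as np

def simulate_two_stage(T=100.0, dt=0.05, switch_time=30.0,
                       beta_m=0.5, beta_b=0.6, gamma_p=0.05):
    """Toy two-stage control dynamics (illustrative assumptions only).

    Fractions of nodes:
      S: susceptible, I: infected by the malicious worm,
      B: carrying the active benign worm (immune and spreading the patch),
      R: repaired and quiescent.
    Stage 1 (t < switch_time): the active benign worm converts S and I to B.
    Stage 2: B nodes go quiescent and passive repair slowly moves I to R.
    """
    S, I, B, R = 0.90, 0.09, 0.01, 0.0
    traj = []
    for k in range(int(T / dt)):
        t = k * dt
        infect = beta_m * S * I                      # malicious worm spread
        if t < switch_time:                          # stage 1: rapid active repair
            patch_S, patch_I = beta_b * S * B, beta_b * I * B
            quiesce, passive = 0.0, 0.0
        else:                                        # stage 2: passive post-repair
            patch_S = patch_I = 0.0
            quiesce, passive = 0.2 * B, gamma_p * I
        S += dt * (-infect - patch_S)
        I += dt * (infect - patch_I - passive)
        B += dt * (patch_S + patch_I - quiesce)
        R += dt * (quiesce + passive)
        traj.append((t, S, I, B, R))
    return np.array(traj)

traj = simulate_two_stage()
print("final infected fraction: %.4f" % traj[-1, 2])
```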
Rnnotator: an automated de novo transcriptome assembly pipeline from stranded RNA-Seq reads
Background: Comprehensive annotation and quantification of transcriptomes are outstanding problems in functional genomics. While high-throughput mRNA sequencing (RNA-Seq) has emerged as a powerful tool for addressing these problems, its success depends on the availability and quality of reference genome sequences, thus limiting the organisms to which it can be applied. Results: Here, we describe Rnnotator, an automated software pipeline that generates transcript models by de novo assembly of RNA-Seq data without the need for a reference genome. We have applied the Rnnotator assembly pipeline to two yeast transcriptomes and compared the results to the reference gene catalogs of these organisms. The contigs produced by Rnnotator are highly accurate (95%) and reconstruct full-length genes for the majority of the existing gene models (54.3%). Furthermore, our analyses revealed many novel transcribed regions that are absent from well-annotated genomes, suggesting that Rnnotator serves as a complementary approach to reference-genome-based analysis for comprehensive transcriptomics. Conclusions: These results demonstrate that the Rnnotator pipeline is able to reconstruct full-length transcripts in the absence of a complete reference genome.
Deep quantum neural networks equipped with backpropagation on a superconducting processor
Deep learning and quantum computing have achieved dramatic progress in
recent years. The interplay between these two fast-growing fields gives rise to
a new research frontier of quantum machine learning. In this work, we report
the first experimental demonstration of training deep quantum neural networks
via the backpropagation algorithm with a six-qubit programmable superconducting
processor. In particular, we show that three-layer deep quantum neural networks
can be trained efficiently to learn two-qubit quantum channels with a mean
fidelity up to 96.0% and the ground state energy of molecular hydrogen with an
accuracy up to 93.3% compared to the theoretical value. In addition, six-layer
deep quantum neural networks can be trained in a similar fashion to achieve a
mean fidelity up to 94.8% for learning single-qubit quantum channels. Our
experimental results explicitly showcase the advantages of deep quantum neural
networks, including a quantum analogue of the backpropagation algorithm and a less stringent coherence-time requirement for their constituent physical qubits, thus providing a valuable guide for quantum machine learning applications with both near-term and future quantum devices.
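As a classical toy analogue of such a training loop (not the six-qubit experiment or the paper's backpropagation scheme), the sketch below trains a single parameterized qubit to maximize fidelity with a target state using parameter-shift gradients; the circuit, target, and hyperparameters are illustrative assumptions.

```python
import numpy as np

# Learn (theta, phi) of U = Rz(phi) @ Ry(theta) so that U|0> matches a target
# state, using parameter-shift gradients on the fidelity.

def ry(t):
    return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                     [np.sin(t / 2),  np.cos(t / 2)]], dtype=complex)

def rz(p):
    return np.array([[np.exp(-1j * p / 2), 0],
                     [0, np.exp(1j * p / 2)]], dtype=complex)

def fidelity(params, target):
    theta, phi = params
    state = rz(phi) @ ry(theta) @ np.array([1.0, 0.0], dtype=complex)
    return abs(np.vdot(target, state)) ** 2

target = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)   # |+> state
params = np.array([0.1, 0.0])
lr = 0.3
for step in range(200):
    grad = np.zeros_like(params)
    for k in range(len(params)):                 # parameter-shift rule
        shift = np.zeros_like(params)
        shift[k] = np.pi / 2
        grad[k] = 0.5 * (fidelity(params + shift, target)
                         - fidelity(params - shift, target))
    params += lr * grad                          # gradient ascent on fidelity
print("final fidelity: %.4f" % fidelity(params, target))
```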
Experimental demonstration of reconstructing quantum states with generative models
Quantum state tomography, a process that reconstructs a quantum state from
measurements on an ensemble of identically prepared copies, plays a crucial
role in benchmarking quantum devices. However, brute-force approaches to
quantum state tomography would become impractical for large systems, as the
required resources scale exponentially with the system size. Here, we explore a
machine learning approach and report an experimental demonstration of
reconstructing quantum states based on neural network generative models with an
array of programmable superconducting transmon qubits. In particular, we
experimentally prepare Greenberger-Horne-Zeilinger states and random states of up to five qubits and demonstrate that the machine learning approach can
efficiently reconstruct these states with the number of required experimental
samples scaling linearly with system size. Our results experimentally showcase
the intriguing potential for exploiting machine learning techniques in
validating and characterizing complex quantum devices, offering a valuable
guide for the future development of quantum technologies.
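A heavily simplified sketch of the idea follows: fit a softmax generative model over bitstrings to computational-basis samples of a GHZ state by maximum likelihood. The actual experiment uses richer neural generative models and measurements in multiple bases; everything here is an illustrative assumption.

```python
import numpy as np

# Toy generative-model tomography: learn a distribution over n-qubit bitstrings
# from measurement samples. Computational-basis samples alone only recover the
# diagonal of the density matrix; this merely illustrates sample-driven
# reconstruction, not the full neural-network tomography used in the paper.

rng = np.random.default_rng(0)
n = 3
# GHZ state measured in the computational basis: half |000>, half |111>.
samples = rng.choice([0, 2 ** n - 1], size=2000)
counts = np.bincount(samples, minlength=2 ** n)

logits = np.zeros(2 ** n)                     # generative model parameters
lr = 0.5
for step in range(300):
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    grad = counts / counts.sum() - probs      # gradient of average log-likelihood
    logits += lr * grad

probs = np.exp(logits - logits.max()); probs /= probs.sum()
print("reconstructed P(000), P(111):", probs[0].round(3), probs[-1].round(3))
```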
Intake patterns of specific alcoholic beverages by prostate cancer status
Background: Previous studies have shown that different alcoholic beverage types impact prostate cancer (PCa) clinical outcomes differently. However, intake patterns of specific alcoholic beverages by PCa status are understudied. The study's objective is to evaluate intake patterns of total alcohol and three beverage types (beer, wine, and spirits) by PCa risk and aggressiveness status. Method: This is a cross-sectional study using 10,029 men (4676 non-PCa men and 5353 PCa patients) with European ancestry from the PCa consortium. Associations between PCa status and alcohol intake patterns (infrequent, light/moderate, and heavy) were tested using multinomial logistic regressions. Results: Intake frequency patterns of total alcohol were similar for non-PCa men and PCa patients after adjusting for demographic and other factors. However, PCa patients were more likely to drink wine (light/moderate, OR = 1.11, p = 0.018) and spirits (light/moderate, OR = 1.14, p = 0.003; and heavy, OR = 1.34, p = 0.04) than non-PCa men. Patients with aggressive PCa drank more beer than patients with non-aggressive PCa (heavy, OR = 1.48, p = 0.013). Interestingly, heavy wine intake was inversely associated with PCa aggressiveness (OR = 0.56, p = 0.009). Conclusions: The intake patterns of some alcoholic beverage types differed by PCa status. Our findings can provide valuable information for developing custom alcohol interventions for PCa patients.
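For readers unfamiliar with the stated method, the sketch below shows a multinomial logistic regression of a three-level intake pattern on case status and covariates using statsmodels, fitted on synthetic data; the variable names and covariates are placeholders, not the consortium dataset or the paper's adjustment set.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Illustrative multinomial logistic regression on synthetic data.
rng = np.random.default_rng(42)
n = 1000
df = pd.DataFrame({
    "pca_case": rng.integers(0, 2, n),            # 0 = non-PCa man, 1 = PCa patient
    "age": rng.normal(65, 8, n),
    "bmi": rng.normal(27, 4, n),
})
# Outcome: intake pattern coded 0 = infrequent, 1 = light/moderate, 2 = heavy.
df["intake"] = rng.choice([0, 1, 2], size=n, p=[0.3, 0.5, 0.2])

X = sm.add_constant(df[["pca_case", "age", "bmi"]])
model = sm.MNLogit(df["intake"], X).fit(disp=False)
print(model.summary())
# Odds ratios for each non-reference intake level relative to "infrequent":
print(np.exp(model.params))
```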
Probing many-body Bell correlation depth with superconducting qubits
Quantum nonlocality describes a stronger form of quantum correlation than
that of entanglement. It refutes Einstein's belief in local realism and is
among the most distinctive and enigmatic features of quantum mechanics. It is a
crucial resource for achieving quantum advantages in a variety of practical
applications, ranging from cryptography and certified random number generation
via self-testing to machine learning. Nevertheless, the detection of
nonlocality, especially in quantum many-body systems, is notoriously
challenging. Here, we report an experimental certification of genuine
multipartite Bell correlations, which signal nonlocality in quantum many-body
systems, up to 24 qubits with a fully programmable superconducting quantum
processor. In particular, we employ energy as a Bell correlation witness and
variationally decrease the energy of a many-body system across a hierarchy of
thresholds, below which an increasing Bell correlation depth can be certified
from experimental data. As an illustrative example, we variationally prepare
the low-energy state of a two-dimensional honeycomb model with 73 qubits and
certify its Bell correlations by measuring an energy that surpasses the
corresponding classical bound by up to 48 standard deviations. In addition,
we variationally prepare a sequence of low-energy states and certify their
genuine multipartite Bell correlations up to 24 qubits via energies measured
efficiently by parity oscillation and multiple quantum coherence techniques.
Our results establish a viable approach for preparing and certifying
multipartite Bell correlations, which provide not only a finer benchmark beyond
entanglement for quantum devices, but also a valuable guide towards exploiting
multipartite Bell correlations in a wide spectrum of practical applications.
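As a two-qubit toy of the witness-versus-classical-bound idea (far simpler than the paper's many-body energy witnesses), the sketch below evaluates the CHSH value of a maximally entangled state at optimal measurement settings and compares it with the classical bound of 2; the state and angles are standard textbook choices, not the paper's setup.

```python
import numpy as np

def pauli_meas(angle):
    """Spin measurement cos(a) Z + sin(a) X."""
    Z = np.array([[1, 0], [0, -1]], dtype=complex)
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    return np.cos(angle) * Z + np.sin(angle) * X

bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)   # |Phi+>

def corr(a, b):
    """Expectation <A(a) x B(b)> in the Bell state."""
    op = np.kron(pauli_meas(a), pauli_meas(b))
    return np.real(bell.conj() @ op @ bell)

# Optimal CHSH settings for |Phi+>: the witness value reaches 2*sqrt(2),
# exceeding the classical bound of 2.
a0, a1 = 0.0, np.pi / 2
b0, b1 = np.pi / 4, -np.pi / 4
chsh = corr(a0, b0) + corr(a0, b1) + corr(a1, b0) - corr(a1, b1)
print("CHSH value: %.3f (classical bound 2, quantum maximum %.3f)"
      % (chsh, 2 * np.sqrt(2)))
```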