Demonstration of Adiabatic Variational Quantum Computing with a Superconducting Quantum Coprocessor
Adiabatic quantum computing enables the preparation of many-body ground
states. This is key for applications in chemistry, materials science, and
beyond. Realisation poses major experimental challenges: Direct analog
implementation requires complex Hamiltonian engineering, while the digitised
version needs deep quantum gate circuits. To bypass these obstacles, we suggest
an adiabatic variational hybrid algorithm, which employs short quantum circuits
and provides a systematic quantum adiabatic optimisation of the circuit
parameters. The quantum adiabatic theorem promises that not only the ground
state but also excited eigenstates can be found. We report the first
experimental demonstration that many-body eigenstates can be efficiently
prepared by an adiabatic variational algorithm assisted by a multi-qubit
superconducting coprocessor. We track the real-time evolution of the ground
and excited states of transverse-field Ising spins with a fidelity that can
reach about 99%.
Comment: 12 pages, 4 figures
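The adiabatic ground-state tracking that underlies such schemes can be illustrated numerically. The sketch below is not the authors' variational implementation; it simply evolves a small transverse-field Ising chain along a linear adiabatic schedule by exact matrix exponentiation, with the chain length, couplings, total time, and step count chosen purely for illustration.

```python
# Illustrative sketch: adiabatic ground-state tracking for a small
# transverse-field Ising chain (all parameters are assumptions).
import numpy as np
from scipy.linalg import expm

n = 4                                   # number of spins (assumption)
I2 = np.eye(2)
sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])

def op(single, site):
    """Embed a single-qubit operator at `site` in an n-qubit chain."""
    mats = [single if k == site else I2 for k in range(n)]
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

# H0: pure transverse field (easy ground state |+...+>)
# H1: transverse-field Ising target Hamiltonian (illustrative couplings)
H0 = -sum(op(sx, k) for k in range(n))
H1 = (-sum(op(sz, k) @ op(sz, (k + 1) % n) for k in range(n))
      - 0.5 * sum(op(sx, k) for k in range(n)))

T, steps = 50.0, 500                    # total time and time steps (assumptions)
dt = T / steps
psi = np.linalg.eigh(H0)[1][:, 0]       # start in the ground state of H0

for j in range(steps):
    s = (j + 0.5) / steps               # linear schedule s(t) = t / T
    H = (1 - s) * H0 + s * H1
    psi = expm(-1j * H * dt) @ psi      # exact short-time evolution

gs = np.linalg.eigh(H1)[1][:, 0]        # exact final ground state
print("final ground-state fidelity:", abs(gs.conj() @ psi) ** 2)
```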
THREE ESSAYS ON THE HIGH-SPEED RAIL NETWORK IN CHINA
My dissertation consists of three essays that study the economic consequences of China’s high-speed rail (HSR) expansion.
In the first essay, I use college admission cutoff scores to reveal students’ college preferences under the enrollment quota. Exploiting quasi-experimental variation in whether or not college cities are connected to the HSR network, I document a two-point increase in cutoff scores following an HSR station opening in the college city, using a difference-in-differences (DD) approach. Colleges in megacities experience a larger increase in cutoff scores after the station opening. These findings suggest that the HSR network stimulates “brain drain” from unconnected cities to connected cities, especially connected megacities.
The second essay examines the impact of improved HSR accessibility on housing prices in Jiangsu Province. Using transaction data on new houses aggregated to the complex level, I compare the prices of properties close to the new HSR stations with those close to pre-existing HSR stations, before and after the new station openings. In a DD specification, I document that housing prices decrease by twenty percent in areas where the distance to the nearest HSR station is reduced by a new station opening outside the city.
The third essay investigates the impacts on household income. Using a DD approach, I document that urban households experience a significant increase in total household income following the opening of an HSR station in their city. While labor earnings increase, the probability of having business income decreases. Moreover, labor income increases little for households whose heads work in the manufacturing sector, but much more for households whose heads work in the transport or communications sectors than for other households, suggesting that the HSR network facilitates urban industry specialization.
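As a hedged illustration of the difference-in-differences design used throughout these essays (this is not the dissertation's code; the panel file and the column names cutoff, treated, post, city, and year are assumptions), a two-way fixed-effects DD estimate might be obtained as follows:

```python
# Hypothetical two-way fixed-effects DD regression on a city-by-year panel.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("hsr_panel.csv")            # hypothetical panel data
df["treated_post"] = df["treated"] * df["post"]

# cutoff_it = beta * (treated_i x post_t) + city FE + year FE + error
model = smf.ols("cutoff ~ treated_post + C(city) + C(year)", data=df)
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["city"]})
print(result.params["treated_post"])         # the DD estimate
```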
When Do LLMs Need Retrieval Augmentation? Mitigating LLMs' Overconfidence Helps Retrieval Augmentation
Large Language Models (LLMs) have been found to have difficulty knowing they
do not possess certain knowledge and tend to provide specious answers in such
cases. Retrieval Augmentation (RA) has been extensively studied to mitigate
LLMs' hallucinations. However, due to the extra overhead and unassured quality
of retrieval, it may not be optimal to conduct RA all the time. A
straightforward idea is to only conduct retrieval when LLMs are uncertain about
a question. This motivates us to enhance the LLMs' ability to perceive their
knowledge boundaries to help RA. In this paper, we first quantitatively
measure this ability of LLMs and confirm their overconfidence. Then, we study how LLMs'
certainty about a question correlates with their dependence on external
retrieved information. We propose several methods to enhance LLMs' perception
of knowledge boundaries and show that they are effective in reducing
overconfidence. Additionally, equipped with these methods, LLMs can achieve
comparable or even better RA performance with far fewer retrieval calls
- …
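A minimal sketch of the certainty-gated retrieval idea described in the abstract is given below. It is not the paper's method or code; llm_answer, llm_certainty, and retrieve are hypothetical stand-ins for a model API and a retriever, and the threshold is an illustrative assumption.

```python
# Illustrative certainty-gated retrieval augmentation (all callables hypothetical).
from typing import Callable, List

def answer_with_selective_ra(
    question: str,
    llm_answer: Callable[[str], str],        # prompt -> answer
    llm_certainty: Callable[[str], float],   # question -> certainty in [0, 1]
    retrieve: Callable[[str], List[str]],    # question -> supporting passages
    threshold: float = 0.7,                  # gating threshold (assumption)
) -> str:
    """Answer directly when the model is confident; otherwise retrieve first."""
    if llm_certainty(question) >= threshold:
        # The model judges the question to be inside its knowledge boundary.
        return llm_answer(question)
    # Low certainty: augment the prompt with retrieved evidence, then answer.
    passages = retrieve(question)
    prompt = "\n".join(passages) + "\n\nQuestion: " + question
    return llm_answer(prompt)
```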
