Three-dimensional analysis of the surface mode supported in \v{C}erenkov and Smith-Purcell free-electron lasers
In \v{C}erenkov and Smith-Purcell free-electron lasers (FELs), a resonant
interaction between the electron beam and the co-propagating surface mode can
produce copious amounts of coherent terahertz (THz) radiation. We perform a
three-dimensional (3D) analysis of the surface mode, taking the effect of
attenuation into account, and set up 3D Maxwell-Lorentz equations for both
these systems. Based on this analysis, we determine the requirements on the
electron beam parameters, i.e., beam emittance, beam size, and beam current,
for the successful operation of a \v{C}erenkov FEL
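For context, the resonant interaction referred to above is conventionally expressed
as a synchronism condition between the beam and the co-propagating surface mode. The
relations below are standard textbook expressions quoted for illustration, not results
taken from this particular abstract:

```latex
% Synchronism condition: the surface mode's phase velocity matches the
% electron beam velocity (standard notation, assumed for illustration).
v_{\mathrm{ph}} = \frac{\omega}{k_z} \approx v_b = \beta c
% Smith-Purcell relation for a grating of period D, emission angle \theta,
% and harmonic order n:
\lambda_n = \frac{D}{|n|}\left(\frac{1}{\beta} - \cos\theta\right)
```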
HT-Paxos: High Throughput State-Machine Replication Protocol for Large Clustered Data Centers
Paxos is a prominent protocol family for state machine replication. Recent
data-intensive systems that implement state machine replication generally require
high throughput. Earlier variants of Paxos, such as classical Paxos, Fast Paxos,
and Generalized Paxos, focus mainly on fault tolerance and latency but fall short
in terms of throughput and scalability. A major reason for this is the heavily
loaded leader. By offloading the leader, we can further increase the throughput of
the system. Ring Paxos, Multi-Ring Paxos, and S-Paxos are a few prominent attempts
in this direction for clustered data centers. In this paper, we propose HT-Paxos, a
variant of Paxos that is well suited to large clustered data centers. HT-Paxos
offloads the leader even further and hence increases the throughput and scalability
of the system, while at the same time, among high-throughput state-machine
replication protocols, providing reasonably low latency and response time.
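To make the leader-offloading idea concrete: a common pattern in this line of work
(used, for example, by S-Paxos) is to separate request dissemination from ordering,
so that replicas exchange the bulky client payloads among themselves while the
leader agrees only on the order of small request identifiers. The Python toy below
is an illustrative sketch of that split under those assumptions; it is not the
actual HT-Paxos message flow.

```python
# Illustrative sketch of leader offloading in a Paxos-style protocol:
# replicas disseminate request payloads themselves, and the leader only
# orders small request IDs. Names and structure are hypothetical.
import uuid


class Replica:
    def __init__(self, name, peers):
        self.name = name
        self.peers = peers          # other replicas in the cluster
        self.payloads = {}          # request_id -> client request payload

    def disseminate(self, payload):
        """Store the payload locally and forward it to all peers,
        so the leader never has to ship the (large) payload itself."""
        request_id = str(uuid.uuid4())
        self.payloads[request_id] = payload
        for peer in self.peers:
            peer.payloads[request_id] = payload
        return request_id


class Leader:
    def __init__(self):
        self.log = []               # totally ordered request IDs

    def order(self, request_id):
        """Agree only on the order of small request IDs; the heavy
        payload traffic has already been handled by the replicas."""
        self.log.append(request_id)
        return len(self.log) - 1    # slot number


# Usage: replicas share the payload, the leader orders only the ID.
r1, r2 = Replica("r1", []), Replica("r2", [])
r1.peers, r2.peers = [r2], [r1]
leader = Leader()
rid = r1.disseminate(b"PUT key=1 value=42")
slot = leader.order(rid)
print(slot, r2.payloads[rid])
```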
Driving with Style: Inverse Reinforcement Learning in General-Purpose Planning for Automated Driving
Behavior and motion planning play an important role in automated driving.
Traditionally, behavior planners instruct local motion planners with predefined
behaviors. Due to the high scene complexity in urban environments,
unpredictable situations may occur in which behavior planners fail to match
predefined behavior templates. Recently, general-purpose planners have been
introduced, combining behavior and local motion planning. These general-purpose
planners allow behavior-aware motion planning given a single reward function.
However, two challenges arise: First, this function has to map a complex
feature space into rewards. Second, the reward function has to be manually
tuned by an expert. Manually tuning this reward function becomes a tedious
task. In this paper, we propose an approach that relies on human driving
demonstrations to automatically tune reward functions. This study offers
important insights into the driving style optimization of general-purpose
planners with maximum entropy inverse reinforcement learning. We evaluate our
approach based on the expected value difference between learned and
demonstrated policies. Furthermore, we compare the similarity of human-driven
trajectories with optimal policies of our planner under learned and
expert-tuned reward functions. Our experiments show that we are able to learn
reward functions exceeding the level of manual expert tuning without prior
domain knowledge.
Comment: Appeared at IROS 2019. Accepted version. Added/updated footnote,
minor correction in preliminaries.
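For readers unfamiliar with the method, the maximum entropy IRL update that this
work builds on can be summarized as feature-expectation matching: the linear reward
weights are moved along the difference between demonstrated and policy-induced
expected features. The sketch below is my own minimal illustration under that
assumption (feature dimension, trajectories, and the lack of policy re-planning are
all simplifications), not the paper's code.

```python
# Minimal sketch of the maximum-entropy IRL weight update: move the linear
# reward weights theta along (expert feature expectations - feature
# expectations induced by the current policy). All data here is synthetic.
import numpy as np


def feature_expectations(trajectories):
    """Average summed feature vector over a set of trajectories, where each
    trajectory is an array of per-step feature vectors."""
    return np.mean([np.sum(traj, axis=0) for traj in trajectories], axis=0)


def maxent_irl_step(theta, expert_trajs, policy_trajs, lr=0.1):
    """One gradient-ascent step on the MaxEnt IRL log-likelihood for a
    linear reward r(s) = theta . phi(s)."""
    grad = feature_expectations(expert_trajs) - feature_expectations(policy_trajs)
    return theta + lr * grad


# Toy usage with 3-dimensional features and random trajectories.
# In a full implementation, the policy trajectories would be regenerated
# under the updated reward after every step; this toy loop omits that.
rng = np.random.default_rng(0)
expert = [rng.normal(1.0, 0.1, size=(20, 3)) for _ in range(5)]
policy = [rng.normal(0.0, 0.1, size=(20, 3)) for _ in range(5)]
theta = np.zeros(3)
for _ in range(50):
    theta = maxent_irl_step(theta, expert, policy)
print(theta)  # weights drift toward features the expert exhibits more often
```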
