Controlled Experimentation in Naturalistic Mobile Settings
Performing controlled user experiments on small devices in naturalistic
mobile settings has always proved to be a difficult undertaking for many Human
Factors researchers. Difficulties exist, not least, because mimicking natural
small-device usage suffers from a lack of unobtrusive data to guide
experimental design and then validate that the experiment is proceeding
naturally. Here we use observational data to derive a set of protocols and a
simple checklist of validations which can be built into the design of any
controlled experiment focused on the user interface of a small device. These
have been used within a series of experimental designs to measure the utility
and application of experimental software. The key point is the validation
checks -- based on the observed behaviour of 400 mobile users -- to ratify that
a controlled experiment is being perceived as natural by the user. While the
design of the experimental route which the user follows is a major factor in
the experimental setup, without check validations based on unobtrusive observed
data there can be no certainty that an experiment designed to be natural is
actually progressing as the design implies.
Comment: 12 pages, 3 tables
Random Feature-based Online Multi-kernel Learning in Environments with Unknown Dynamics
Kernel-based methods exhibit well-documented performance in various nonlinear
learning tasks. Most of them rely on a preselected kernel, whose prudent choice
presumes task-specific prior information. Especially when the latter is not
available, multi-kernel learning has gained popularity thanks to its
flexibility in choosing kernels from a prescribed kernel dictionary. Leveraging
the random feature approximation and its recent orthogonality-promoting
variant, the present contribution develops a scalable multi-kernel learning
scheme (termed Raker) to obtain the sought nonlinear learning function `on the
fly,' first for static environments. To further boost performance in dynamic
environments, an adaptive multi-kernel learning scheme (termed AdaRaker) is
developed. AdaRaker accounts not only for data-driven learning of kernel
combination, but also for the unknown dynamics. Performance is analyzed in
terms of both static and dynamic regrets. AdaRaker is uniquely capable of
tracking nonlinear learning functions in environments with unknown dynamics,
and with analytic performance guarantees. Tests with synthetic and real
datasets are carried out to showcase the effectiveness of the novel algorithms.
Comment: 36 pages
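As a rough illustration of the random-feature idea behind such schemes (this is not the authors' Raker/AdaRaker code; the class name, kernel bandwidths, step sizes, and the exponential-weight combination rule below are all illustrative assumptions), a single RBF kernel can be approximated by random Fourier features and several such single-kernel learners combined online:

```python
import numpy as np

rng = np.random.default_rng(0)

class RFKernelLearner:
    """Online learner for one RBF kernel via random Fourier features."""
    def __init__(self, dim, bandwidth, D=100, lr=0.01):
        # spectral samples for an RBF kernel with the given bandwidth
        self.W = rng.normal(scale=1.0 / bandwidth, size=(D, dim))
        self.b = rng.uniform(0.0, 2.0 * np.pi, size=D)
        self.theta = np.zeros(D)
        self.D, self.lr = D, lr

    def features(self, x):
        return np.sqrt(2.0 / self.D) * np.cos(self.W @ x + self.b)

    def predict(self, x):
        return self.theta @ self.features(x)

    def update(self, x, y):
        # SGD step on the squared loss; returns the instantaneous loss
        z = self.features(x)
        err = self.theta @ z - y
        self.theta -= self.lr * err * z
        return err ** 2

# multi-kernel combination: weight each kernel's learner by exponential
# weights driven by its own online loss (an assumed, simplified rule)
learners = [RFKernelLearner(dim=1, bandwidth=s) for s in (0.5, 1.0, 2.0)]
w = np.ones(len(learners))
eta = 0.5
for t in range(200):
    x = rng.uniform(-3.0, 3.0, size=1)
    y = np.sin(2.0 * x[0])
    p = w / w.sum()
    yhat = sum(pi * L.predict(x) for pi, L in zip(p, learners))
    losses = np.array([L.update(x, y) for L in learners])
    w *= np.exp(-eta * np.clip(losses, 0.0, 10.0))
```

The combination weights shift toward whichever bandwidth fits the stream best, which is the flexibility the multi-kernel dictionary provides.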
Energy-Efficient Transmission Schedule for Delay-Limited Bursty Data Arrivals under Non-Ideal Circuit Power Consumption
This paper develops a novel approach to obtaining energy-efficient
transmission schedules for delay-limited bursty data arrivals under non-ideal
circuit power consumption. Assuming a priori knowledge of packet arrivals,
deadlines and channel realizations, we show that the problem can be formulated
as a convex program. For both time-invariant and time-varying fading channels,
it is revealed that the optimal transmission between any two consecutive
channel- or data-state changing instants, termed an epoch, can take only one of
three strategies: (i) no transmission, (ii) transmission with an
energy-efficiency (EE) maximizing rate over part of the epoch, or (iii)
transmission with a rate greater than the EE-maximizing rate over the whole
epoch. Based on this specific structure, efficient algorithms are then
developed to find the optimal policies that minimize the total energy
consumption with a low computational complexity. The proposed approach can
provide the optimal benchmarks for practical schemes designed for transmissions
of delay-limited data arrivals, and can be employed to develop efficient online
scheduling schemes which require only causal knowledge of data arrivals and
deadline requirements.
Comment: 30 pages, 7 figures
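The energy-efficiency (EE) maximizing rate that characterizes strategy (ii) can be illustrated numerically. In the sketch below, the power model (a fixed circuit power plus a Shannon-type transmit power) and all parameter values are assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

def power(r, alpha=1.0, gain=1.0):
    # assumed model: circuit power alpha plus exponential transmit power
    return alpha + (2.0 ** r - 1.0) / gain

def ee_maximizing_rate(alpha=1.0, gain=1.0):
    # maximize bits-per-joule r / power(r) over a fine rate grid
    rs = np.linspace(1e-6, 20.0, 200001)
    ee = rs / power(rs, alpha, gain)
    return rs[np.argmax(ee)]

r_star = ee_maximizing_rate()
```

At the optimum the standard fractional-programming condition p(r*) = r* p'(r*) holds; for alpha = gain = 1 this reduces to r* = 1/ln 2, which the grid search recovers. Rates below r* waste proportionally more energy on the circuit, which is why "no transmission" or "EE rate over part of the epoch" can dominate.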
An Improved Algorithm for Incremental DFS Tree in Undirected Graphs
Depth first search (DFS) tree is one of the most well-known data structures
for designing efficient graph algorithms. Given an undirected graph with n
vertices and m edges, the textbook algorithm takes O(n + m) time to construct
a DFS tree. In this paper, we study the problem of maintaining a DFS tree when
the graph is undergoing incremental updates. Formally, we show: given an
arbitrary online sequence of edge or vertex insertions, there is an
algorithm that reports a DFS tree in worst case time per operation, and
requires preprocessing time.
Our result improves the previous worst case update time
algorithm by Baswana et al. and the time by Nakamura and
Sadakane, and matches the trivial lower bound when it is required
to explicitly output a DFS tree.
Our result builds on the framework introduced in the breakthrough work by
Baswana et al., together with a novel use of a tree-partition lemma by Duan and
Zhan, and the celebrated fractional cascading technique by Chazelle and Guibas.
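For reference, the textbook O(n + m) construction that incremental algorithms build on can be sketched as follows (an illustrative iterative version, not the paper's algorithm; the adjacency-list representation is an assumption):

```python
def dfs_tree(n, adj, root=0):
    """Textbook O(n + m) DFS: returns a parent array encoding the DFS tree."""
    parent = [-1] * n
    visited = [False] * n
    stack = [(root, -1)]
    while stack:
        u, p = stack.pop()
        if visited[u]:
            continue
        visited[u] = True
        parent[u] = p
        # push neighbors in reverse so the first listed neighbor is explored first
        for v in reversed(adj[u]):
            if not visited[v]:
                stack.append((v, u))
    return parent

# triangle 0-1-2: DFS from 0 descends 0 -> 1 -> 2, and edge (0, 2) becomes a back edge
parent = dfs_tree(3, [[1, 2], [0, 2], [0, 1]])
```

The incremental problem is to keep such a parent array valid as edges and vertices arrive, without rerunning this O(n + m) pass per update.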
RSA: Byzantine-Robust Stochastic Aggregation Methods for Distributed Learning from Heterogeneous Datasets
In this paper, we propose a class of robust stochastic subgradient methods
for distributed learning from heterogeneous datasets in the presence of an unknown
number of Byzantine workers. The Byzantine workers, during the learning
process, may send arbitrary incorrect messages to the master due to data
corruptions, communication failures or malicious attacks, and consequently bias
the learned model. The key to the proposed methods is a regularization term
incorporated with the objective function so as to robustify the learning task
and mitigate the negative effects of Byzantine attacks. The resultant
subgradient-based algorithms are termed Byzantine-Robust Stochastic Aggregation
methods, justifying our acronym RSA used henceforth. In contrast to most of the
existing algorithms, RSA does not rely on the assumption that the data are
independent and identically distributed (i.i.d.) across the workers, and hence
fits a wider class of applications. Theoretically, we show that: i) RSA
converges to a near-optimal solution with the learning error dependent on the
number of Byzantine workers; ii) the convergence rate of RSA under Byzantine
attacks is the same as that of the stochastic gradient descent method, which is
free of Byzantine attacks. Numerically, experiments on real datasets
corroborate the competitive performance of RSA and a complexity reduction
compared to the state-of-the-art alternatives.
Comment: To appear in AAAI 201
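A minimal sketch of the regularization idea (not the paper's exact algorithm; the quadratic local losses, step size, and penalty weight below are assumptions): each worker follows its stochastic subgradient plus an l1 penalty pulling it toward the master's iterate, while the master aggregates only the signs of the differences, so any single Byzantine report shifts it by at most a bounded amount per step:

```python
import numpy as np

def rsa_step(x0, xs, grads, reports, lam=0.5, lr=0.01):
    # workers: local (sub)gradient plus l1 penalty toward the master iterate
    xs = [x - lr * (g + lam * np.sign(x - x0)) for x, g in zip(xs, grads)]
    # master: sees only sign(x0 - report), bounding Byzantine influence
    x0 = x0 - lr * lam * sum(np.sign(x0 - r) for r in reports)
    return x0, xs

# toy problem: honest worker i minimizes 0.5 * (x - a[i])**2
a = [1.0, 1.2, 0.8, 1.1]      # honest workers' local optima
xs = [0.0] * len(a)
x0 = 0.0
for _ in range(2000):
    grads = [x - ai for x, ai in zip(xs, a)]
    reports = xs + [1e6]       # one Byzantine worker reports garbage
    x0, xs = rsa_step(x0, xs, grads, reports)
```

Despite the Byzantine worker's arbitrarily large report, the master's iterate settles near the honest workers' optima, within an offset controlled by the penalty weight, mirroring the learning-error dependence on the number of Byzantine workers stated above.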
