A new scope of penalized empirical likelihood with high-dimensional estimating equations
Statistical methods with empirical likelihood (EL) are appealing and
effective, especially in conjunction with estimating equations, through which
useful data information can be adaptively and flexibly incorporated. It is also
known in the literature that EL approaches encounter difficulties when dealing
with problems having high-dimensional model parameters and estimating
equations. To overcome these challenges, we begin our study with a careful
investigation of high-dimensional EL from a new scope that targets estimating
high-dimensional sparse model parameters. We show that the new scope provides
an opportunity for relaxing the stringent requirement on the dimensionality of
the model parameter. Motivated by the new scope, we then propose a new
penalized EL by applying two penalty functions that respectively regularize the
model parameters and the associated Lagrange multipliers in the EL
optimization. By penalizing the Lagrange multiplier to encourage its sparsity, we show
that drastic dimension reduction in the number of estimating equations can be
effectively achieved without compromising the validity and consistency of the
resulting estimators. Most attractively, such a reduction in dimensionality of
estimating equations is actually equivalent to a selection among those
high-dimensional estimating equations, resulting in a highly parsimonious and
effective device for high-dimensional sparse model parameters. Allowing both
the dimensionalities of the model parameters and the estimating equations to grow
exponentially with the sample size, our theory demonstrates that the estimator
from our new penalized EL is sparse and consistent, with asymptotically normally
distributed nonzero components. Numerical simulations and a real data analysis
show that the proposed penalized EL works promisingly.
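As a concrete illustration, here is a hedged sketch of a doubly penalized EL criterion of the kind described above, with all notation assumed here rather than taken from the paper: data $X_1,\dots,X_n$, estimating functions $g(X_i;\theta)\in\mathbb{R}^r$ with parameter $\theta\in\mathbb{R}^p$, Lagrange multiplier $\lambda\in\mathbb{R}^r$ from the dual form of EL, and generic penalty functions $P_{1,\nu}$ and $P_{2,\tau}$ acting on the parameter and the multiplier, respectively:
\[
\hat{\theta} \;=\; \arg\min_{\theta}\ \Big\{\ \max_{\lambda}\ \sum_{i=1}^{n}\log\!\big(1+\lambda^{\top} g(X_i;\theta)\big)\;-\;n\sum_{j=1}^{r} P_{2,\tau}\!\big(|\lambda_j|\big)\ \Big\}\;+\;n\sum_{k=1}^{p} P_{1,\nu}\!\big(|\theta_k|\big).
\]
Penalizing $|\lambda_j|$ pushes most multiplier components to exactly zero, and an estimating equation whose multiplier vanishes drops out of the profiled objective; this is the sense in which sparsity of $\lambda$ acts as a selection among the high-dimensional estimating equations.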
Structured Low-Rank Matrix Factorization with Missing and Grossly Corrupted Observations
Recovering low-rank and sparse matrices from incomplete or corrupted
observations is an important problem in machine learning, statistics,
bioinformatics, computer vision, as well as signal and image processing. In
theory, this problem can be solved by the natural convex joint/mixed
relaxations (i.e., the $\ell_{1}$-norm and the trace norm) under certain conditions.
However, all current provable algorithms suffer from superlinear per-iteration
cost, which severely limits their applicability to large-scale problems. In
this paper, we propose a scalable, provable structured low-rank matrix
factorization method to recover low-rank and sparse matrices from missing and
grossly corrupted data, i.e., robust matrix completion (RMC) problems, or
incomplete and grossly corrupted measurements, i.e., compressive principal
component pursuit (CPCP) problems. Specifically, we first present two
small-scale matrix trace norm regularized bilinear structured factorization
models for RMC and CPCP problems, in which repeatedly computing the SVD of a
large-scale matrix is replaced by updating two much smaller factor matrices.
Then, we apply the alternating direction method of multipliers (ADMM) to
efficiently solve the RMC problems. Finally, we provide the convergence
analysis of our algorithm, and extend it to address general CPCP problems.
Experimental results verify both the efficiency and the effectiveness of our
method compared with state-of-the-art methods. Comment: 28 pages, 9 figures
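As a rough illustration of the bilinear idea, with notation assumed here rather than taken from the paper: let $D$ be the observed data matrix, $\Omega$ the set of observed entries with sampling operator $\mathcal{P}_{\Omega}$, and factor the low-rank unknown as $U V^{\top}$ with thin factors $U$ and $V$. A trace-norm-regularized RMC model of this flavor reads
\[
\min_{U,\,V,\,S}\ \ \frac{\lambda}{2}\Big(\|U\|_F^{2}+\|V\|_F^{2}\Big)\;+\;\big\|\mathcal{P}_{\Omega}(S)\big\|_{1}
\qquad\text{s.t.}\qquad
\mathcal{P}_{\Omega}\big(UV^{\top}+S\big)=\mathcal{P}_{\Omega}(D),
\]
using the variational identity $\|X\|_{*}=\min_{X=UV^{\top}}\tfrac12(\|U\|_F^{2}+\|V\|_F^{2})$; ADMM can then alternate updates of $U$, $V$, $S$, and the multipliers, so no SVD of a large matrix is ever required.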
Ill-posedness of the Prandtl equations in Sobolev spaces around a shear flow with general decay
Motivated by the paper by D. Gerard-Varet and E. Dormy [JAMS, 2010] on the
linear ill-posedness of the Prandtl equations around a shear flow with
exponential decay in the normal variable, and by the recent study of
well-posedness of the Prandtl equations in Sobolev spaces, this paper aims to
extend the result in \cite{GV-D} to the case when the shear flow has general
decay. The key observation is the construction of an approximate solution to the
linearized problem that captures the initial layer, motivated by the precise
formulation of solutions to the inviscid Prandtl equations.
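For reference, a standard form of the two-dimensional Prandtl system discussed in this and the following abstracts, in the usual notation with a given outer tangential flow $U(t,x)$:
\[
\begin{cases}
\partial_t u + u\,\partial_x u + v\,\partial_y u - \partial_y^2 u = -\partial_x p(t,x),\\
\partial_x u + \partial_y v = 0,\\
u|_{y=0}=v|_{y=0}=0,\qquad u \to U(t,x)\ \ \text{as}\ y\to+\infty,
\end{cases}
\qquad \partial_x p = -\big(\partial_t U + U\,\partial_x U\big),
\]
and a shear flow is a solution of the form $(u,v)=(u_s(t,y),0)$; the decay assumption on the shear profile in the normal variable (exponential versus general) is the hypothesis being relaxed here.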
A well-posedness theory for the Prandtl equations in three space variables
The well-posedness of the Prandtl equations in three space variables is
studied under a constraint on the flow structure. The analysis reveals that the
classical Burgers equation plays an important role in determining this type of
flow with special structure, which avoids the appearance of the complicated
secondary flow in three-dimensional Prandtl boundary layers. In addition, the
sufficiency of the monotonicity condition on the tangential velocity field for
the existence of solutions to the Prandtl boundary layer equations is
illustrated in the three-dimensional setting. Moreover, it is shown that this
structured flow is linearly stable under any three-dimensional perturbation. Comment: 40 pages
Justification of Prandtl Ansatz for MHD boundary layer
As a continuation of \cite{LXY}, this paper aims to justify the high Reynolds
number limit for the MHD system with the Prandtl boundary layer expansion when
the no-slip boundary condition is imposed on the velocity field and the perfectly
conducting boundary condition on the magnetic field. Under the assumption that
the viscosity and resistivity coefficients are of the same order and the initial
tangential magnetic field on the boundary is not degenerate, we justify the
validity of the Prandtl boundary layer expansion and give an estimate of the
error by multi-scale analysis. Comment: 34 pages
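Schematically, and with all notation assumed here, the Prandtl ansatz being justified writes the solution of the viscous and resistive MHD system as an outer ideal-MHD flow plus boundary layer correctors supported in a layer of thickness $O(\sqrt{\varepsilon})$, where $\varepsilon$ denotes the common order of the viscosity and resistivity:
\[
\big(u^{\varepsilon},\, b^{\varepsilon}\big)(t,x,y)\;\approx\;\big(u^{0},\, b^{0}\big)(t,x,y)\;+\;\big(u^{b},\, b^{b}\big)\!\Big(t,\,x,\,\tfrac{y}{\sqrt{\varepsilon}}\Big),
\]
with the correctors decaying rapidly in the fast variable $y/\sqrt{\varepsilon}$; justifying the expansion amounts to bounding the remainder between the true solution and this approximation uniformly as $\varepsilon\to 0$.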
Local-in-time well-posedness for Compressible MHD boundary layer
In this paper, we are concerned with the motion of an electrically conducting
fluid governed by the two-dimensional non-isentropic viscous compressible MHD
system on the half plane, with the no-slip condition for the velocity field, the
perfectly conducting condition for the magnetic field, and the Dirichlet boundary
condition for the temperature on the boundary. When the viscosity, heat
conductivity, and magnetic diffusivity coefficients tend to zero at the same
rate, there is a boundary layer that is described by a Prandtl-type system. By
applying a coordinate transformation in terms of a stream function, as motivated
by the recent work \cite{liu2016mhdboundarylayer} on the incompressible MHD
system, we obtain, under the non-degeneracy condition on the tangential magnetic
field, the local-in-time well-posedness of the boundary layer system in weighted
Sobolev spaces. Comment: 29 pages
The Formation and Early Evolution of a Coronal Mass Ejection and its Associated Shock Wave on 2014 January 8
In this paper, we study the formation and early evolution of a limb coronal
mass ejection (CME) and its associated shock wave that occurred on 2014 January
8. The extreme ultraviolet (EUV) images provided by the Atmospheric Imaging
Assembly (AIA) on board the \textit{Solar Dynamics Observatory} reveal that the
CME first appears as a bubble-like structure. Subsequently, its expansion forms
the CME and causes a quasi-circular EUV wave. Interestingly, both the CME and
the wave front are clearly visible in all of the AIA EUV passbands. Through a
detailed kinematical analysis, it is found that the expansion of the CME
undergoes two phases: a first phase with a strong but transient lateral
over-expansion followed by a second phase with a self-similar expansion. The
temporal evolution of the expansion velocity coincides very well with the
variation of the 25--50 keV hard X-ray flux of the associated flare, which
indicates that magnetic reconnection most likely plays an important role in
driving the expansion. Moreover, we find that, when the velocity of the CME
reaches 600 km s$^{-1}$, the EUV wave starts to evolve into a shock wave,
which is evidenced by the appearance of a type II radio burst. The shock's
formation height is estimated to be 0.2, which is much lower
than the height derived previously. Finally, we also study the thermal
properties of the CME and the EUV wave. We find that the plasma in the CME
leading front and the wave front has a temperature of about 2 MK, while that in
the CME core region and the flare region has a much higher temperature of about
8 MK. Comment: 11 pages, 7 figures, accepted by Ap
High-dimensional empirical likelihood inference
High-dimensional statistical inference with general estimating equations is
challenging and remains less explored. In this paper, we study two problems in
this area: confidence set estimation for multiple components of the model
parameters, and model specification testing. For the first problem, we propose to
construct a new set of estimating equations such that the impact from
estimating the high-dimensional nuisance parameters becomes asymptotically
negligible. The new construction enables us to estimate a valid confidence
region by the empirical likelihood ratio. For the second problem, we propose a
test statistic, defined as the maximum of the marginal empirical likelihood
ratios, to quantify the data evidence against the model specification. Our
theory establishes the validity of the proposed empirical likelihood
approaches, accommodating over-identification and exponentially growing data
dimensionality. The numerical studies demonstrate promising performance and
potential practical benefits of the new methods. Comment: The original title of
this paper is "High-dimensional statistical inference with over-identification:
confidence set estimation and specification test".
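As a rough sketch of the test statistic described above, with notation assumed here: for data $X_1,\dots,X_n$, estimating functions $g_j(\cdot\,;\theta)$, $j=1,\dots,r$, and a fitted parameter $\hat\theta$, the marginal empirical likelihood ratio for the $j$-th moment restriction and the resulting maximum-type statistic are
\[
\ell_j(\hat\theta) \;=\; -2\,\max\Big\{\textstyle\sum_{i=1}^{n}\log\big(n\,w_i\big)\ :\ w_i>0,\ \sum_{i=1}^{n} w_i=1,\ \sum_{i=1}^{n} w_i\, g_j(X_i;\hat\theta)=0\Big\},
\qquad
T_n \;=\; \max_{1\le j\le r}\ \ell_j(\hat\theta),
\]
so each $\ell_j$ measures how strongly the data contradict the $j$-th restriction, and a large $T_n$ constitutes evidence against the model specification.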
Local Adaption for Approximation and Minimization of Univariate Functions
Most commonly used \emph{adaptive} algorithms for univariate real-valued
function approximation and global minimization lack theoretical guarantees. Our
new locally adaptive algorithms are guaranteed to provide answers that satisfy
a user-specified absolute error tolerance for a cone of non-spiky input
functions in a Sobolev space. Our algorithms automatically determine where to
sample the function, sampling more densely where the second derivative is
larger. The computational cost of our algorithm for approximating a univariate
function on a bounded interval to within the prescribed error tolerance is of
the same order as that of the best function approximation algorithm for this
class of functions. The computational cost of our global minimization algorithm
is of the same order, and the cost can be substantially less if the function
significantly exceeds its minimum over much of the domain. Our Guaranteed
Automatic Integration Library (GAIL) contains these new algorithms. We provide
numerical experiments to illustrate their superior performance.
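To make the locally adaptive sampling idea concrete, below is a minimal, illustrative Python sketch; it is not the GAIL algorithm, and the second-difference error heuristic, stopping rule, and default values are assumptions made purely for illustration.

    import numpy as np

    def adaptive_approx(f, a, b, tol, max_nodes=10_000):
        """Illustrative locally adaptive sampler (not the GAIL algorithm).

        Maintains a sorted node list and refines where a centered
        second-difference suggests the largest local interpolation error.
        """
        x = list(np.linspace(a, b, 5))   # start from a few equally spaced nodes
        y = [f(t) for t in x]
        while len(x) < max_nodes:
            # Crude per-node error heuristic: |f''| * h^2 / 8 (assumed, for illustration).
            errs = []
            for i in range(1, len(x) - 1):
                h1, h2 = x[i] - x[i - 1], x[i + 1] - x[i]
                d2 = abs(2.0 * ((y[i + 1] - y[i]) / h2 - (y[i] - y[i - 1]) / h1) / (h1 + h2))
                errs.append((d2 * max(h1, h2) ** 2 / 8.0, i))
            worst, i = max(errs)
            if worst <= tol:             # heuristic stopping rule
                break
            # Bisect the wider subinterval adjacent to the worst node.
            if x[i] - x[i - 1] >= x[i + 1] - x[i]:
                j, new = i, 0.5 * (x[i - 1] + x[i])
            else:
                j, new = i + 1, 0.5 * (x[i] + x[i + 1])
            x.insert(j, new)
            y.insert(j, f(new))
        return np.array(x), np.array(y)

    # Example: nodes cluster where the curvature of f is largest.
    xs, ys = adaptive_approx(lambda t: np.exp(-100.0 * (t - 0.3) ** 2), 0.0, 1.0, 1e-4)

The refinement loop spends samples where the estimated second derivative (and hence the local interpolation error) is largest, which is the qualitative behavior the abstract describes.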
