20 research outputs found

    Adaptive robust estimation in sparse vector model

    Get PDF
    For the sparse vector model, we consider estimation of the target vector, of its l2-norm and of the noise variance. We construct adaptive estimators and establish the optimal rates of adaptive estimation when adaptation is considered with respect to the triplet "noise level - noise distribution - sparsity". We consider classes of noise distributions with polynomially and exponentially decreasing tails, as well as the case of Gaussian noise. The obtained rates turn out to be different from the minimax non-adaptive rates when the triplet is known. A crucial issue is whether the noise variance is known. Moreover, knowing or not knowing the noise distribution can also influence the rate. For example, the rates of estimation of the noise variance can differ depending on whether the noise is Gaussian or sub-Gaussian without precise knowledge of the distribution. Estimation of the noise variance in our setting can be viewed as an adaptive variant of robust estimation of scale in the contamination model, where instead of fixing the "nominal" distribution in advance, we assume that it belongs to some class of distributions.
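    As a point of reference for the sparse vector model described above, the following is a minimal sketch of a non-adaptive thresholding estimator. It is not the paper's adaptive procedure: it assumes Gaussian noise with a known variance, and uses the standard universal threshold sigma * sqrt(2 log p); the function name and all parameters are illustrative.

```python
import numpy as np

def soft_threshold_estimate(y, sigma):
    """Estimate a sparse mean vector from y = theta + sigma * noise.

    Illustrative non-adaptive baseline: soft-thresholding at the
    universal level sigma * sqrt(2 log p), which assumes Gaussian
    noise with KNOWN variance (the paper's point is precisely what
    happens when sigma and the noise law are unknown).
    """
    p = len(y)
    t = sigma * np.sqrt(2.0 * np.log(p))
    return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)

# Small simulation: an s-sparse signal observed in Gaussian noise.
rng = np.random.default_rng(0)
p, s, sigma = 1000, 10, 1.0
theta = np.zeros(p)
theta[:s] = 5.0
y = theta + sigma * rng.standard_normal(p)
theta_hat = soft_threshold_estimate(y, sigma)
```

The thresholded estimate zeroes out almost all off-support coordinates while retaining most of the true support; the adaptive estimators in the paper aim for comparable behavior without knowing sigma or the noise distribution.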

    Highly efficient formic acid and carbon dioxide electro-reduction to alcohols on indium oxide electrodes

    Get PDF
    Formic acid is often assumed to be the first intermediate of carbon dioxide reduction to alcohols or hydrocarbons. Here we use co-electrolysis of water and aqueous formic acid in a PEM electrolysis cell with Nafion® as the polymer electrolyte, a standard TaC-supported IrO2 water-splitting catalyst at the anode, and nanosized In2O3 with a small amount of added polytetrafluoroethylene (PTFE) as the cathode. This results in a mixture of methanol, ethanol and iso-propanol with a maximum combined Faraday efficiency of 82.5%. In the absence of diffusion limitation, a current density of up to 70 mA cm−2 is reached, and the space-time yield compares well with results from heterogeneous In2O3 catalysis. Reduction works more efficiently with dissolved CO2 than with formic acid, but the product distribution is different, suggesting that CO2 reduction occurs primarily via a competing pathway that bypasses formic acid as an intermediate.

    Acknowledgements: The University of Pretoria for financial support via the IRT Energy, and the South African NRF for support via the SSAJRP Program (UID 87401) and the PROTEA Program (Nr. 42442PF) together with France (NAF 8542 Z). K. A. Adegoke thanks the NRF-TWAS fellowship for the Doctoral Scholarship Award (Nr. 42442PF/NRF UID: 105453, Reference: SFH160618172220, MND190603441389, Unique Grant No: 121108) and the UP Postgraduate Doctoral Research bursary award. The PhD fellowship to P. Rayess from the ANR (EClock project) is gratefully acknowledged.

    On estimation of nonsmooth functionals of sparse normal means

    No full text

    Adaptive robust estimation in sparse vector model

    No full text

    Minimax Rate of Testing in Sparse Linear Regression

    Full text link

    Highly efficient formic acid and carbon dioxide electro-reduction to alcohols on indium oxide electrodes

    Full text link
    Co-electrolysis of formic acid and water using an indium oxide cathode catalyst yields a mixture of methanol, ethanol and iso-propanol with a Faraday efficiency of up to 82.4%. The reaction of aqueous carbon dioxide occurs via a competing pathway.

    Estimating linear functionals of a sparse family of Poisson means

    No full text
    Assume that we observe a sample of size n composed of p-dimensional signals, each signal having independent entries drawn from a scaled Poisson distribution with an unknown intensity. We are interested in estimating the sum of the n unknown intensity vectors, under the assumption that most of them coincide with a given 'background' signal. The number s of p-dimensional signals different from the background signal plays the role of sparsity, and the goal is to leverage this sparsity assumption in order to improve the quality of estimation as compared to the naive estimator that computes the sum of the observed signals. We first introduce the group hard thresholding estimator and analyze its mean squared error measured by the squared Euclidean norm. We establish a nonasymptotic upper bound showing that the risk is at most of the order σ²(sp + s²√p) log^{3/2}(np). We then establish lower bounds on the minimax risk over a properly defined class of collections of s-sparse signals. These lower bounds match the upper bound, up to logarithmic terms, when the dimension p is fixed or of larger order than s². In the case where the dimension p increases but remains of smaller order than s², our results show a gap between the lower and the upper bounds, which can be up to order √p.
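    The group hard thresholding estimator described above can be sketched as follows. This is a schematic under stated assumptions: the threshold tau is left as a free tuning parameter (the paper calibrates it from the noise scale, s and p), and the function name is illustrative.

```python
import numpy as np

def group_hard_threshold_sum(X, background, tau):
    """Sketch of a group hard thresholding estimate of the intensity sum.

    X: (n, p) array of observed signals; background: (p,) nominal signal;
    tau: tuning threshold (hypothetical free parameter here).
    Each observation is treated as coming from the background unless its
    deviation from the background exceeds tau in Euclidean norm; the
    estimate is n * background plus the sum of the retained deviations.
    """
    D = X - background                        # per-signal deviations
    keep = np.linalg.norm(D, axis=1) > tau    # groups flagged non-background
    return X.shape[0] * background + D[keep].sum(axis=0)

# Tiny illustration: rows 0 and 2 equal the background, row 1 deviates.
X = np.array([[1.0, 2.0], [3.0, 4.0], [1.0, 2.0]])
bg = np.array([1.0, 2.0])
est = group_hard_threshold_sum(X, bg, tau=1.0)  # retains only row 1
```

With tau = 0 the estimator reduces to the naive sum of the observed signals; with tau large it returns n times the background, which is how the sparsity assumption reduces variance when few signals deviate.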