Smoothed Functional Algorithms for Stochastic Optimization using q-Gaussian Distributions
Smoothed functional (SF) schemes for gradient estimation are known to be
efficient in stochastic optimization algorithms, especially when the objective
is to improve the performance of a stochastic system. However, the performance
of these methods depends on several parameters, such as the choice of a
suitable smoothing kernel. Different kernels have been studied in the
literature, including the Gaussian, Cauchy, and uniform distributions, among
others. This paper studies a new class of kernels based on the q-Gaussian
distribution, which has gained popularity in statistical physics over the last
decade. Though the importance of this family of distributions is attributed to
its ability to generalize the Gaussian distribution, we observe that this class
encompasses almost all existing smoothing kernels. This motivates us to study
SF schemes for gradient estimation using the q-Gaussian distribution. Using the
derived gradient estimates, we propose two-timescale algorithms for optimizing
a stochastic objective function in a constrained setting via a projected
gradient search approach. We prove the convergence of our algorithms to the set
of stationary points of an associated ODE, and we demonstrate their performance
numerically through simulations on a queueing model.
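As an illustration of the smoothed functional idea described above, here is a minimal Python sketch of a two-sided SF gradient estimate followed by a projected-gradient step. It uses Gaussian perturbations for simplicity (the paper's kernels are q-Gaussian), and the function names, step sizes, and box constraint are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sf_gradient_estimate(J, x, beta=0.1, n_samples=200, rng=None):
    """Two-sided smoothed-functional gradient estimate of a (possibly noisy)
    objective J at x, using Gaussian perturbations as the smoothing kernel.
    Replacing the Gaussian draw below with a q-Gaussian sampler would recover
    the kernel family studied in the paper."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x, dtype=float)
    grad = np.zeros_like(x)
    for _ in range(n_samples):
        eta = rng.standard_normal(x.shape)            # smoothing-kernel sample
        grad += eta * (J(x + beta * eta) - J(x - beta * eta)) / (2.0 * beta)
    return grad / n_samples

def projected_step(x, grad, step=0.01, lo=0.0, hi=1.0):
    """One projected-gradient update onto a box constraint, mirroring the
    constrained projected gradient search mentioned in the abstract."""
    return np.clip(np.asarray(x, dtype=float) - step * grad, lo, hi)
```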
Validation, reproducibility and safety of transdermal electrical stimulation in chronic pain patients and healthy volunteers
Background: Surrogate pain models have been extensively tested in Normal Human Volunteers (NHVs). Few studies have examined pain models in chronic pain patients, who are likely to have altered pain mechanisms. It is therefore of interest to test patients' responses to selective pain stimuli under controlled laboratory conditions. Methods: The Institutional Ethics Committee approved the study. 16 patients with chronic neuropathic radiculopathy and 16 healthy volunteers were enrolled in the study after giving informed consent. During electrical stimulation (150 minutes for volunteers and 75 minutes for patients), the following endpoints were measured every 10 minutes: ongoing pain (Visual Analogue Scale, VAS, and Numeric Rating Scale, NRS), allodynia (soft foam brush), hyperalgesia (von Frey monofilament, 20 g), and flare. For each endpoint, the area under the curve (AUC) from the start to the end of stimulation was estimated by the trapezoidal rule. The individual AUC values for both periods were plotted to show the inter- and intra-subject variability. For each endpoint, a mixed-effects model was fitted with subject as a random effect and visit as a fixed effect. The estimated intra-subject variance and mean were then used to estimate the sample size of a crossover study required to have a probability of 0.80 of detecting a 25% change in the mean. Analysis was done using GenStat 8th edition. Results: Each endpoint achieved very good reproducibility in patients and NHVs. Comparison between the groups revealed trends towards faster habituation to painful stimuli in patients, larger areas of hyperalgesia in patients, and similar areas of allodynia and flare (not statistically significant). Conclusion: The differences demonstrated between patients and NHVs suggest that the electrical stimulation device used here may stimulate pathways that are affected in the pathological state.
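For the AUC and sample-size calculations described in the Methods, the following is a rough Python sketch: a trapezoidal-rule AUC and a normal-approximation crossover sample size. The helper names are hypothetical, and the sample-size formula is the standard paired-design approximation, not necessarily the exact GenStat procedure used in the study.

```python
from math import ceil, sqrt
from scipy.stats import norm

def auc_trapezoid(times, scores):
    """Area under an endpoint-vs-time curve (e.g. VAS recorded every 10 min),
    computed by the trapezoidal rule."""
    return sum((t1 - t0) * (y0 + y1) / 2.0
               for t0, t1, y0, y1 in zip(times, times[1:], scores, scores[1:]))

def crossover_sample_size(mean, within_sd, rel_change=0.25, power=0.80, alpha=0.05):
    """Approximate sample size for a two-period crossover to detect a
    rel_change shift in the mean at the given power, using the within-subject
    SD estimated from the mixed-effects model (paired-design approximation)."""
    delta = rel_change * mean
    sd_diff = sqrt(2.0) * within_sd        # SD of within-subject period differences
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return ceil((z * sd_diff / delta) ** 2)
```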
Laminar Forced Convection of Nanofluids in a Circular Tube: A New Nonhomogeneous Flow Model
The Design of a Large Single-Screw Melt Extruder Using a Quasi Two-Dimensional Conducting Screw Computer Model
Hypergraph Clustering Using a New Laplacian Tensor with Applications in Image Processing
Best Increments for the Average Case of Shellsort
This paper presents the results of using sequential analysis to find increment sequences that minimize the average running time of Shellsort, for array sizes up to several thousand elements. The obtained sequences outperform the best ones known so far by about 3%, and there is plausible evidence that they are the optimal ones.
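For context, here is a minimal Python Shellsort using the increment sequence commonly attributed to this paper (1, 4, 10, 23, 57, 132, 301, 701); treat the exact values as an assumption and consult the paper for the definitive sequence.

```python
# Gap sequence commonly credited to this work, largest to smallest.
CIURA_GAPS = [701, 301, 132, 57, 23, 10, 4, 1]

def shellsort(a):
    """In-place Shellsort: a gapped insertion sort for each increment."""
    for gap in CIURA_GAPS:
        for i in range(gap, len(a)):
            tmp, j = a[i], i
            while j >= gap and a[j - gap] > tmp:
                a[j] = a[j - gap]
                j -= gap
            a[j] = tmp
    return a
```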
HOLISMOKES VII. Time-delay measurement of strongly lensed Type Ia supernovae using machine learning
The Hubble constant (H0) is one of the fundamental parameters in cosmology, but there is a heated debate around the >4σ tension between the local Cepheid distance ladder and the early Universe measurements. Strongly lensed Type Ia supernovae (LSNe Ia) are an independent and direct way to measure H0, where a time-delay measurement between the multiple supernova (SN) images is required. In this work, we present two machine learning approaches for measuring time delays in LSNe Ia, namely, a fully connected neural network (FCNN) and a random forest (RF). For the training of the FCNN and the RF, we simulate mock LSNe Ia from theoretical SN Ia models that include observational noise and microlensing. We test the generalizability of the machine learning models by using a final test set based on empirical LSN Ia light curves not used in the training process, and we find that only the RF provides a low enough bias to achieve precision cosmology; as such, the RF is preferred over our FCNN approach for applications to real systems. For the RF with single-band photometry in the i band, we obtain an accuracy better than 1% in all investigated cases for time delays longer than 15 days, assuming follow-up observations with a 5σ point-source depth of 24.7, a two-day cadence with a few random gaps, and a detection of the LSNe Ia 8 to 10 days before peak in the observer frame. In terms of precision, we can achieve an approximately 1.5-day uncertainty in the i band for a typical source redshift of ∼0.8 under the same assumptions. To improve the measurement, we find that using three bands, where we train an RF for each band separately and combine them afterward, helps to reduce the uncertainty to ∼1.0 day. The dominant source of uncertainty is the observational noise, and therefore the depth is an especially important factor when follow-up observations are triggered. We have publicly released the microlensed spectra and light curves used in this work.
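As a rough illustration of the random-forest approach (not the authors' pipeline), the sketch below trains an RF regressor to recover a time delay from two noisy, two-day-cadence toy light curves; the light-curve model, noise level, mock-set size, and feature layout are placeholder assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n_mock, n_epochs = 5000, 30                    # hypothetical mock-set size, 2-day cadence
t = np.arange(n_epochs) * 2.0                  # observation epochs in days

def toy_sn_light_curve(t, t_peak):
    """Crude stand-in for a SN Ia template light curve (flux vs. time)."""
    return np.exp(-0.5 * ((t - t_peak) / 10.0) ** 2)

delays = rng.uniform(5.0, 40.0, n_mock)        # true time delays in days
X = np.empty((n_mock, 2 * n_epochs))
for k, dt in enumerate(delays):
    img1 = toy_sn_light_curve(t, t_peak=15.0)
    img2 = 0.6 * toy_sn_light_curve(t, t_peak=15.0 + dt)
    noise = rng.normal(0.0, 0.02, 2 * n_epochs)  # stand-in for photometric noise
    X[k] = np.concatenate([img1, img2]) + noise

rf = RandomForestRegressor(n_estimators=200, n_jobs=-1, random_state=0)
rf.fit(X, delays)
print(rf.predict(X[:3]), delays[:3])           # sanity check on training examples
```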
