High-Resolution Mid-Infrared Molecular Line Survey of the Orion Hot Core
The basic building blocks of life are synthesized in space as part of the natural stellar evolutionary cycle, whereby elements ejected into the interstellar medium by dying stars are incorporated back into the dense clouds that form the next generation of stars and planets. The formation of stars and planets is fundamental to the evolution of matter in the Universe, as complex molecules are created and destroyed during this step. Understanding these processes will allow us to answer the question: what is the relation between the molecules we see in the ISM and the molecular inventory of Earth and the terrestrial planets in the Solar System? Measuring and cataloging the inventory of organic molecules and understanding their evolution requires observations over a broad wavelength range (IR, MIR, FIR, (sub)mm, and radio) to cover all stages of this evolutionary cycle and link interstellar material to that delivered to planets. High-resolution molecular line surveys provide chemical inventories for star-forming regions and are essential for studying their chemistry, kinematics, and physical conditions. Previous high spectral resolution surveys have been limited to radio, sub-mm, and FIR wavelengths; however, mid-infrared observations are the only way to study symmetric molecules that have no dipole moment and thus cannot be detected in the (sub)mm line surveys from ALMA. Past mid-infrared missions such as ISO and Spitzer had low to moderate resolving power and were only able to link broad features with particular molecular bands; they could not resolve individual rovibrational transitions. JWST will provide exceptional sensitivity in the MIR, but it too will lack sufficient spectral resolution, which can lead to confusion in identifying the contributions of strong to moderate strength molecular species. We present new results from an ongoing high-resolution (R ~ 60,000) line survey of the Orion hot core between 12.5-28.3 μm and 7-8 μm, using the EXES instrument on the SOFIA airborne observatory. SOFIA's higher resolution and smaller beam compared to ISO allow us to spatially and spectrally isolate the emission towards the hot core. This survey will provide the best infrared measurements to date of molecular column densities and physical conditions, placing strong constraints on current chemical network models for star-forming regions. It will also greatly enhance the inventory of resolved line features in the MIR, making it an invaluable reference for the JWST and ALMA scientific communities.
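As a quick back-of-the-envelope sketch (not part of the abstract), the resolving power R = λ/Δλ quoted above translates directly into a velocity resolution Δv ≈ c/R; the snippet below evaluates this for the survey's quoted R and band edges:

```python
# Sketch: convert spectral resolving power to wavelength and velocity resolution.
# The R value and wavelengths are the survey parameters quoted in the abstract.
C_KM_S = 299_792.458  # speed of light in km/s

def velocity_resolution(resolving_power: float) -> float:
    """Velocity resolution (km/s) for resolving power R = lambda / d_lambda."""
    return C_KM_S / resolving_power

def wavelength_resolution(wavelength_um: float, resolving_power: float) -> float:
    """Wavelength resolution element (microns) at a given wavelength."""
    return wavelength_um / resolving_power

R = 60_000  # EXES high-resolution mode
for lam in (7.0, 12.5, 28.3):  # microns, survey band edges
    print(f"lambda = {lam:5.1f} um -> d_lambda = {wavelength_resolution(lam, R):.5f} um, "
          f"dv = {velocity_resolution(R):.1f} km/s")
```

At R ~ 60,000 this corresponds to roughly 5 km/s, fine enough to separate individual rovibrational lines that blend together at the lower resolving powers of ISO and Spitzer.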
Embedding Feature Selection for Large-scale Hierarchical Classification
Large-scale Hierarchical Classification (HC) involves datasets consisting of thousands of classes and millions of training instances with high-dimensional features, posing several big data challenges. Feature selection, which aims to select a subset of discriminant features, is an effective strategy for dealing with the large-scale HC problem: it speeds up the training process, reduces prediction time, and minimizes memory requirements by compressing the total size of the learned model weight vectors. The majority of studies have also shown feature selection to be effective at improving classification accuracy by removing irrelevant features. In this work, we investigate various filter-based feature selection methods for dimensionality reduction to solve the large-scale HC problem. Our experimental evaluation on text and image datasets with varying distributions of features, classes, and instances shows up to a 3x speed-up on massive datasets and up to 45% lower memory requirements for storing the weight vectors of the learned model, without any significant loss (and an improvement for some datasets) in classification accuracy. Source code: https://cs.gmu.edu/~mlbio/featureselection. Comment: IEEE International Conference on Big Data (IEEE BigData 2016).
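The abstract does not name the specific filter criteria used, so as a hedged illustration only, the sketch below applies a chi-squared filter before training a classifier on high-dimensional sparse text features; the scikit-learn calls, the k=1000 setting, and the use of a flat classifier in place of per-node HC models are assumptions, not the paper's setup:

```python
# Hedged sketch of filter-based feature selection before classification.
# The chi-squared filter and k=1000 are illustrative choices, not the paper's settings.
from sklearn.datasets import fetch_20newsgroups_vectorized
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# High-dimensional sparse text features (stand-in for a large-scale HC dataset).
data = fetch_20newsgroups_vectorized(subset="all")
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.2, random_state=0
)

# Filter step: rank features independently of the classifier, keep only the top k.
# Training on the reduced matrix shrinks the learned weight vectors accordingly.
model = make_pipeline(
    SelectKBest(chi2, k=1000),
    LogisticRegression(max_iter=1000),
)
model.fit(X_train, y_train)
print("accuracy with 1000 selected features:", model.score(X_test, y_test))
```

The same pattern applies per node in a hierarchy: each node's classifier is trained only on its filtered feature subset, which is where the memory and speed savings reported above come from.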
Upward flame spread over corrugated cardboard
As part of a study of the combustion of boxes of commodities, rates of upward flame spread during early-stage burning were observed in experiments on wide samples of corrugated cardboard. The rate of spread of the flame front, defined by the burning pyrolysis region, was determined by visually averaging the pyrolysis front position across the fuel surface. The resulting best fit produced a power-law progression of the pyrolysis front, x_p = A t^n, where x_p is the average height of the pyrolysis front at time t, n = 3/2, and A is a constant. This corresponds to a slower acceleration than was obtained in previous measurements and theories (e.g. n = 2), an observation which suggests that an alternative description of the upward flame spread rate over wide, inhomogeneous materials may be worth developing for applications such as warehouse fires. Based upon the experimental results and overall conservation principles, it is hypothesized that the non-homogeneity of the cardboard reduced the acceleration of the upward spread rate by physically disrupting flow in the boundary layer close to the vertical surface, thereby modifying heating rates of the solid fuel above the pyrolysis region. As a result of this phenomenon, a distinct difference was observed between scalings of peak flame heights, or maximum "flame tip" measurements, and the average location of the flame. The results yield alternative scalings that may be more applicable to some situations encountered in practice in warehouse fires. © 2010 The Combustion Institute.
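To make the power-law fit concrete, here is a hedged sketch of how A and n can be recovered from pyrolysis-front height measurements by a least-squares fit in log-log space; the data arrays are fabricated placeholders for illustration, not the experimental results:

```python
# Hedged sketch: fit x_p = A * t**n to (time, pyrolysis-front height) data.
# The arrays below are fabricated placeholders, not the measured data.
import numpy as np

t = np.array([2.0, 4.0, 6.0, 8.0, 10.0, 12.0])         # s, illustrative times
x_p = np.array([0.05, 0.14, 0.27, 0.40, 0.57, 0.74])   # m, illustrative front heights

# A power law is linear in log-log space: log x_p = log A + n * log t,
# so an ordinary least-squares fit of log x_p on log t gives n (slope) and A (exp of intercept).
n, log_A = np.polyfit(np.log(t), np.log(x_p), 1)
A = np.exp(log_A)
print(f"fitted exponent n = {n:.2f}, prefactor A = {A:.3f}")
```

With placeholder data generated near n = 3/2, the fit recovers an exponent close to 1.5, the value the experiments favor over the classical n = 2 acceleration.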
Warehouse commodity classification from fundamental principles. Part II: Flame heights and flame spread
In warehouse storage applications, it is important to classify the burning behavior of commodities and rank them according to their material flammability for early fire detection and suppression operations. In this study, a preliminary approach to commodity classification is presented that models the early stage of large-scale warehouse fires by decoupling the problem into separate processes of heat and mass transfer. Two existing nondimensional parameters are used to represent the physical phenomena at the large scale: a mass transfer number that directly incorporates the material properties of a fuel, and the soot yield of the fuel, which controls the radiation observed at the large scale. To facilitate modeling, the mass transfer number (or B-number) was experimentally obtained using mass-loss (burning-rate) measurements from bench-scale tests, following a procedure developed in Part I of this paper. Two fuels are considered: corrugated cardboard and polystyrene. Corrugated cardboard provides a source of flaming combustion in a warehouse and is usually the first item to ignite and sustain flame spread. Polystyrene is typically used as the most hazardous product in large-scale fire testing. The nondimensional mass transfer number was then used to model in-rack flame heights on 6.1-9.1 m (20-30 ft) stacks of 'C' flute corrugated cardboard boxes in rack storage during the initial period of flame spread (involving flame spread over the corrugated cardboard face only). Good agreement was observed between the model and large-scale experiments during the initial stages of fire growth, and a comparison to previous correlations for in-rack flame heights is included. © 2011 Elsevier Ltd. All rights reserved.
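As a hedged illustration of how a B-number can be backed out of a bench-scale mass-loss measurement, the sketch below inverts the classical convective blowing relation m_dot'' = (h/c_p) ln(1 + B); both the choice of this relation and every numerical value are illustrative assumptions, not the procedure or data of Part I:

```python
# Hedged sketch: backing out a B-number from a bench-scale mass-loss measurement,
# assuming the classical convective blowing relation m_dot'' = (h / c_p) * ln(1 + B).
# The relation choice and all numbers below are illustrative assumptions,
# not the paper's procedure or measured data.
import math

def b_from_mass_flux(m_dot_pp, h_conv, c_p_gas):
    """Invert m_dot'' = (h/c_p) * ln(1 + B) for the mass transfer number B."""
    return math.exp(m_dot_pp * c_p_gas / h_conv) - 1.0

B = b_from_mass_flux(
    m_dot_pp=0.014,   # kg/(m^2 s), illustrative steady mass-loss flux
    h_conv=12.0,      # W/(m^2 K), illustrative convective heat transfer coefficient
    c_p_gas=1000.0,   # J/(kg K), gas specific heat
)
print(f"illustrative B-number: {B:.2f}")  # ~2.2 with these placeholder values
```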
Model Extraction Warning in MLaaS Paradigm
Cloud vendors are increasingly offering machine learning services as part of their platform and services portfolios. These services enable the deployment of machine learning models on the cloud, offered on a pay-per-query basis to application developers and end users. However, recent work has shown that the hosted models are susceptible to extraction attacks: adversaries may launch queries to steal the model and compromise future query payments or the privacy of the training data. In this work, we present a cloud-based extraction monitor that can quantify the extraction status of models by observing the query and response streams of both individual and colluding adversarial users. We present a novel technique that uses information gain to measure the model learning rate of users as their number of queries increases. Additionally, we present an alternate technique that maintains intelligent query summaries to measure the learning rate relative to the coverage of the input feature space in the presence of collusion. Both approaches have low computational overhead and can easily be offered as services to model owners to warn them of possible extraction attacks by adversaries. We present performance results for these approaches for decision tree models deployed on the BigML MLaaS platform, using open-source datasets and different adversarial attack strategies.
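As a hedged illustration of the underlying idea (the monitor's actual metric and interfaces are not described beyond the abstract, so the surrogate-model formulation below is an assumption), one can track how much of a deployed decision tree an adversary could have learned by fitting a surrogate tree to the observed query-response stream and measuring its agreement with the deployed model as the query count grows:

```python
# Hedged sketch: estimate "extraction status" by training a surrogate decision tree
# on the observed (query, response) stream and checking how well it mimics the
# deployed model. The agreement metric is an illustrative stand-in for the paper's
# information-gain-based measure.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
deployed = DecisionTreeClassifier(random_state=0).fit(X, y)  # the hosted MLaaS model

rng = np.random.default_rng(0)
lo, hi = X.min(axis=0), X.max(axis=0)

for n_queries in (25, 100, 400, 1600):
    # Adversary issues random queries over the input domain and records responses.
    queries = rng.uniform(lo, hi, size=(n_queries, X.shape[1]))
    responses = deployed.predict(queries)

    # Monitor-side surrogate: what could be learned from this query/response stream?
    surrogate = DecisionTreeClassifier(random_state=0).fit(queries, responses)

    # Agreement on a held-out probe set approximates how far extraction has progressed.
    probe = rng.uniform(lo, hi, size=(2000, X.shape[1]))
    agreement = (surrogate.predict(probe) == deployed.predict(probe)).mean()
    print(f"{n_queries:5d} queries -> surrogate agreement {agreement:.2%}")
```

A rising agreement curve is the kind of signal a monitor could threshold to warn the model owner of a likely extraction attempt.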
