
    Coherent Backscattering of Light From an Ultra Cold Gas of Rubidium-85 Atoms

    This thesis reports on an experimental study of coherent radiative transport in an ultracold gas of 85Rb atoms confined in a magneto-optical trap. Measurements are made of the polarization dependence of the spatial and spectral profile of light backscattered from the sample. The results show an interferometric enhancement sensitive to coherent multiple scattering in the atomic gas, with strong variations with the polarization of the incident and detected light. Effects due to coherent enhancement of weak non-resonant transitions are also observed. Comparison of the measurements with realistic quantum Monte Carlo simulations by Kupriyanov et al. [1] yields very good agreement.

    What Do Self-Supervised Speech Models Know About Words?

    Many self-supervised speech models (S3Ms) have been introduced over the last few years, improving performance and data efficiency on various speech tasks. However, these empirical successes alone do not give a complete picture of what is learned during pre-training. Recent work has begun analyzing how S3Ms encode certain properties, such as phonetic and speaker information, but we still lack a proper understanding of knowledge encoded at the word level and beyond. In this work, we use lightweight analysis methods to study segment-level linguistic properties -- word identity, boundaries, pronunciation, syntactic features, and semantic features -- encoded in S3Ms. We present a comparative study of layer-wise representations from ten S3Ms and find that (i) the frame-level representations within each word segment are not all equally informative, and (ii) the pre-training objective and model size heavily influence the accessibility and distribution of linguistic information across layers. We also find that on several tasks -- word discrimination, word segmentation, and semantic sentence similarity -- S3Ms trained with visual grounding outperform their speech-only counterparts. Finally, our task-based analyses demonstrate improved performance on word segmentation and acoustic word discrimination while using simpler methods than prior work. Comment: Pre-MIT Press publication version.
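The layer-wise lightweight probing this abstract describes can be sketched roughly as follows. This is a minimal illustration of the *method* only: the "features" are synthetic stand-ins for real S3M layer activations (no model is loaded), and all dimensions, class counts, and names here are invented for the example, not taken from the paper.

```python
# Minimal sketch of a layer-wise linear probe on (synthetic) segment features.
# A real setup would mean-pool frame-level S3M activations per word segment;
# here class-structured Gaussian features stand in for those activations.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_segments, n_layers, dim, n_classes = 400, 4, 32, 5

# Fake word-identity labels and per-layer segment features.
labels = rng.integers(0, n_classes, size=n_segments)
class_means = rng.normal(size=(n_layers, n_classes, dim))
features = class_means[:, labels, :] + rng.normal(scale=2.0,
                                                  size=(n_layers, n_segments, dim))

# Train one lightweight probe per layer and compare held-out accuracy.
for layer in range(n_layers):
    X_tr, X_te, y_tr, y_te = train_test_split(
        features[layer], labels, test_size=0.25, random_state=0)
    probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    print(f"layer {layer}: probe accuracy = {probe.score(X_te, y_te):.2f}")
```

Comparing the per-layer accuracies is what reveals where in the network a given property is most accessible, which is the kind of comparison the study runs across ten S3Ms.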

    Formaldehyde-releasers: relationship to formaldehyde contact allergy. Contact allergy to formaldehyde and inventory of formaldehyde-releasers

    This is one of a series of review articles on formaldehyde and formaldehyde-releasers (the others cover formaldehyde in cosmetics, in clothes, in metalworking fluids, and miscellaneous sources). Thirty-five chemicals were identified as formaldehyde-releasers. Although a further seven are listed in the literature as formaldehyde-releasers, the data are inadequate to consider them as such beyond doubt. Several (nomenclature) mistakes and instances of outdated information are discussed. Formaldehyde and formaldehyde allergy are reviewed: applications, exposure scenarios, legislation, patch testing problems, frequency of sensitization, relevance of positive patch test reactions, the clinical pattern of allergic contact dermatitis from formaldehyde, prognosis, the threshold for elicitation of allergic contact dermatitis, analytical tests to determine formaldehyde in products, and the frequency of exposure to formaldehyde and releasers. The frequency of contact allergy to formaldehyde is consistently higher in the USA (8-9%) than in Europe (2-3%). Patch testing with formaldehyde is problematic; the currently used 1% solution may result in both false-positive and false-negative (up to 40%) reactions. Determining the relevance of patch test reactions is often challenging. What concentration of formaldehyde is safe for sensitive patients remains unknown. Levels of 200-300 p.p.m. free formaldehyde in cosmetic products have been shown to induce dermatitis from short-term use on normal skin.

    SLUE: New Benchmark Tasks for Spoken Language Understanding Evaluation on Natural Speech

    Progress in speech processing has been facilitated by shared datasets and benchmarks. Historically, these have focused on automatic speech recognition (ASR), speaker identification, or other lower-level tasks. Interest has been growing in higher-level spoken language understanding tasks, including those using end-to-end models, but there are fewer annotated datasets for such tasks. At the same time, recent work shows the possibility of pre-training generic representations and then fine-tuning for several tasks using relatively little labeled data. We propose to create a suite of benchmark tasks for Spoken Language Understanding Evaluation (SLUE), consisting of limited-size labeled training sets and corresponding evaluation sets. This resource would allow the research community to track progress, evaluate pre-trained representations for higher-level tasks, and study open questions such as the utility of pipeline versus end-to-end approaches. We present the first phase of the SLUE benchmark suite, consisting of named entity recognition, sentiment analysis, and ASR on the corresponding datasets. We focus on naturally produced (not read or synthesized) speech and on freely available datasets. We provide new transcriptions and annotations on subsets of the VoxCeleb and VoxPopuli datasets, evaluation metrics and results for baseline models, and an open-source toolkit to reproduce the baselines and evaluate new models. Comment: Updated preprint (sentiment annotation on the test set was updated). Toolkit: https://github.com/asappresearch/slue-toolkit

    On the Evaluation of Speech Foundation Models for Spoken Language Understanding

    The Spoken Language Understanding Evaluation (SLUE) suite of benchmark tasks was recently introduced to address the need for open resources and benchmarking of complex spoken language understanding (SLU) tasks, including both classification and sequence generation tasks, on natural speech. The benchmark has demonstrated preliminary success in using pre-trained speech foundation models (SFMs) for these SLU tasks. However, the community still lacks a fine-grained understanding of the comparative utility of different SFMs. Inspired by this, we ask: which SFMs offer the most benefits for these complex SLU tasks, and what is the most effective approach for incorporating them? To answer this, we perform an extensive evaluation of multiple supervised and self-supervised SFMs using several evaluation protocols: (i) frozen SFMs with a lightweight prediction head, (ii) frozen SFMs with a complex prediction head, and (iii) fine-tuned SFMs with a lightweight prediction head. Although the supervised SFMs are pre-trained on much more speech recognition data (with labels), they do not always outperform self-supervised SFMs; the latter tend to perform at least as well as, and sometimes better than, supervised SFMs, especially on the sequence generation tasks in SLUE. While there is no universally optimal way of incorporating SFMs, the complex prediction head gives the best performance for most tasks, although it increases inference time. We also introduce an open-source toolkit and performance leaderboard, SLUE-PERB, for these tasks and modeling strategies. Comment: Accepted at Findings of ACL 2024.
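Evaluation protocol (i) above, a frozen SFM with a lightweight prediction head, can be sketched as follows. Everything here is a stand-in: a fixed random projection plays the role of the frozen pre-trained encoder, and the "lightweight head" is a single linear softmax layer trained by gradient descent on synthetic data, so this shows only the shape of the protocol, not any of the actual models evaluated in the paper.

```python
# Sketch of the "frozen encoder + lightweight prediction head" protocol.
# A fixed random projection stands in for a real frozen SFM.
import numpy as np

rng = np.random.default_rng(1)
n, d_in, d_enc, n_classes = 300, 20, 16, 3

# Synthetic inputs with class-dependent structure (fake utterance features).
y = rng.integers(0, n_classes, size=n)
class_dirs = rng.normal(size=(n_classes, d_in))
x = rng.normal(size=(n, d_in)) + 3.0 * class_dirs[y]

# "Frozen SFM": weights are fixed and never updated during training.
W_frozen = rng.normal(size=(d_in, d_enc)) / np.sqrt(d_in)
feats = np.tanh(x @ W_frozen)

# Lightweight head: one trainable linear layer (softmax regression).
W_head = np.zeros((d_enc, n_classes))
onehot = np.eye(n_classes)[y]
for _ in range(500):
    logits = feats @ W_head
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    W_head -= 0.5 * feats.T @ (p - onehot) / n  # cross-entropy gradient step

acc = (np.argmax(feats @ W_head, axis=1) == y).mean()
print(f"frozen encoder + linear head, train accuracy = {acc:.2f}")
```

Protocols (ii) and (iii) differ only in what is trainable: (ii) swaps the linear head for a deeper one, and (iii) also updates the encoder weights, which is why the frozen-plus-lightweight variant is the cheapest point of comparison.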

    Studies in Durable-Press Finishing of Cotton (DMDHEU, Pad-Bath pH, Residues)

    Durable-press (DP) finishes, when used in the presence of catalysts, impart crease-resistant and smooth-drying properties to cotton fabrics. Various aspects of these DP finishes and the DP finishing process are investigated. These studies are carried out using mainly dimethyloldihydroxyethyleneurea (DMDHEU, the most commonly used DP finish) and include the investigation of (i) high-performance liquid chromatography (HPLC) as an analytical tool for quantitative analysis of commercial finishes, (ii) the storage stability of DP finishes on fabrics when applied from pad-baths of varying pH values, (iii) rate studies of the reaction between cellulose and a DP finish, and (iv) the influence of (a) the reagent residues generated from finishes during finishing and (b) the catalysts on formaldehyde release from finished fabrics. The findings from these studies are summarized as follows. HPLC is a useful analytical tool for quantifying the components of commercial DMDHEU-based finishes; however, the data obtained with this technique did not correlate well either with data from other techniques or with performance data from finished fabrics. Pad-bath pH and the reactivity of the DP finish are important factors influencing the storage stability of DP finishes on fabrics: DMDHEU, a less reactive finish, is stable under acidic conditions (pH 3-6), whereas DMEU, a more reactive finish, is not stable at any pH. Thus, DMDHEU can be used in the post-cure finishing operation when applied from slightly acidic pad-baths. HPLC is also a useful tool for studying the rate of reaction of DP finishes with cellulose. The rate constants for the DMDHEU-cellulose reaction in the presence of magnesium chloride at three temperatures were determined by following the concentration of DMDHEU on the fabric by HPLC. With the help of an Arrhenius plot from these data, the rate constant at any given temperature can be predicted.
Reagent residues generated on the fabric during the finishing process contribute to formaldehyde release from fabrics finished with a monofunctional model compound, N-methylolpyrrolidone. When the studies were extended to DMDHEU, this conclusion could not be established definitively, since the exact nature of some of the residues could not be determined. For both finishes, it was found that the nature and amount of the catalysts used in the finishing process have a significant influence on the extent of formaldehyde release from the finished fabrics.
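The Arrhenius treatment mentioned above (fitting ln k against 1/T from rate constants measured at a few temperatures, then predicting k at any other temperature) can be sketched as follows. The rate constants and temperatures below are invented for illustration; they are not the thesis's measured DMDHEU data.

```python
# Arrhenius fit: k = A * exp(-Ea / (R*T))  =>  ln k = ln A - (Ea/R)*(1/T).
# Fit ln k vs 1/T linearly, then predict k at any temperature.
import numpy as np

# Hypothetical rate constants (1/min) at three temperatures (K) --
# placeholders, NOT the thesis's HPLC-derived DMDHEU values.
T = np.array([393.0, 413.0, 433.0])
k = np.array([0.010, 0.040, 0.130])

slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
R = 8.314                 # gas constant, J/(mol*K)
Ea = -slope * R           # activation energy, J/mol (slope = -Ea/R)
A = np.exp(intercept)     # pre-exponential factor, 1/min

def k_at(temp_K):
    """Predict the rate constant at a given temperature from the fit."""
    return A * np.exp(-Ea / (R * temp_K))

print(f"Ea ≈ {Ea / 1000:.0f} kJ/mol; predicted k at 423 K ≈ {k_at(423.0):.3f} 1/min")
```

This is exactly the convenience the abstract points to: once the Arrhenius parameters are fitted from a few temperatures, no further rate measurements are needed to estimate k at an intermediate curing temperature.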