Enhanced transmission versus localization of a light pulse by a subwavelength metal slit: Can the pulse have both characteristics?
The existence of resonant enhanced transmission and collimation of light
waves by subwavelength slits in metal films [for example, see T.W. Ebbesen et
al., Nature (London) 391, 667 (1998) and H.J. Lezec et al., Science, 297, 820
(2002)] leads to the basic question: Can light be enhanced and simultaneously
localized in space and time by a subwavelength slit? To address this question,
the spatial distribution of the energy flux of an ultrashort (femtosecond)
wave-packet diffracted by a subwavelength (nanometer-size) slit was analyzed by
using the conventional approach based on the Neerhoff and Mur solution of
Maxwell's equations. The results show that light can be enhanced by orders of
magnitude and simultaneously localized in the near-field diffraction zone at
the nm- and fs-scales. Possible applications in nanophotonics are discussed.
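For reference, the "energy flux" evaluated in such diffraction calculations is conventionally the time-averaged Poynting vector; this standard definition is stated here as context and is not quoted from the paper itself:

\langle \mathbf{S} \rangle \;=\; \tfrac{1}{2}\,\mathrm{Re}\!\left(\mathbf{E} \times \mathbf{H}^{*}\right),

where E and H are the complex field amplitudes obtained (in this case) from the Neerhoff and Mur solution behind the slit.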
In need of mediation: The relation between syntax and information structure
This paper defends the view that syntax does not directly interact with information structure. Rather, information structure affects prosody, and only the latter has an interface with syntax. We illustrate this point by discussing scrambling, focus preposing, and topicalization. The position entertained here implies that syntax is not very informative when one wants to narrow down the interpretation of terms such as “focus”, “topic”, etc.
A Generic Transferable EEG Decoder for Online Detection of Error Potential in Target Selection
Reliable detection of errors from electroencephalography (EEG) signals as feedback while performing a discrete target selection task, across sessions and subjects, has huge scope in real-time rehabilitative applications of Brain-computer Interfacing (BCI). Error-Related Potentials (ErrP) are EEG signals which occur when the participant observes erroneous feedback from the system. ErrP holds significance in such a closed-loop system, as BCI is prone to error and an effective method of systematic error detection is needed as feedback for correction. In this paper, we have proposed a novel scheme for online detection of error feedback directly from the EEG signal in a transferable environment (i.e., across sessions and across subjects). For this purpose, we have used a P300-speller dataset available on a BCI competition website. The task requires the subject to select a letter of a word, which is followed by a feedback period. The feedback period displays the selected letter and, if the selection is wrong, the subject perceives the error, generating an ErrP signal. Our proposed system is designed to detect ErrP in EEG from new, independent datasets not involved in its training. Thus, the decoder is trained using EEG features of 16 subjects for single-trial classification and tested on 10 independent subjects. The decoder designed for this task is an ensemble of linear discriminant analysis, quadratic discriminant analysis, and logistic regression classifiers. The performance of the decoder is evaluated using accuracy, F1-score, and Area Under the Curve metrics, and the results obtained are 73.97%, 83.53%, and 73.18%, respectively.
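A minimal sketch of the kind of ensemble named in the abstract (LDA, QDA, and logistic regression applied to single-trial EEG features); the random feature matrices, the soft-voting fusion, and the scikit-learn usage are illustrative assumptions, not the authors' exact pipeline:

import numpy as np
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                            QuadraticDiscriminantAnalysis)
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import VotingClassifier
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score

# Placeholder single-trial EEG feature vectors; y = 1 for ErrP trials, 0 otherwise.
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(600, 64)), rng.integers(0, 2, 600)
X_test, y_test = rng.normal(size=(200, 64)), rng.integers(0, 2, 200)

# Ensemble of the three classifiers mentioned in the abstract; soft voting is an
# assumption about how their outputs are combined.
decoder = VotingClassifier(
    estimators=[("lda", LinearDiscriminantAnalysis()),
                ("qda", QuadraticDiscriminantAnalysis()),
                ("lr", LogisticRegression(max_iter=1000))],
    voting="soft")
decoder.fit(X_train, y_train)

y_pred = decoder.predict(X_test)
y_prob = decoder.predict_proba(X_test)[:, 1]
print("accuracy:", accuracy_score(y_test, y_pred))
print("F1-score:", f1_score(y_test, y_pred))
print("AUC:", roc_auc_score(y_test, y_prob))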
Laser-induced breakdown spectroscopy: a tool for real-time, in vitro and in vivo identification of carious teeth
BACKGROUND: Laser Induced Breakdown Spectroscopy (LIBS) can be used to measure trace element concentrations in solids, liquids and gases, with spatial resolution and absolute quantification being feasible down to parts-per-million concentration levels. Some applications of LIBS do not necessarily require exact, quantitative measurements. These include applications in dentistry, which are of a more "identify-and-sort" nature – e.g. identification of teeth affected by caries. METHODS: A one-fibre light delivery / collection assembly for LIBS analysis was used, which in principle lends itself to routine in vitro / in vivo applications in a dental practice. A number of evaluation algorithms for LIBS data can be used to assess the similarity of a spectrum, measured at specific sample locations, to a training set of reference spectra. Here, the description has been restricted to one pattern recognition algorithm, namely the so-called Mahalanobis Distance method. RESULTS: The plasma created when the laser pulse ablates the sample (in vitro / in vivo) was spectrally analysed. We demonstrated that, using the Mahalanobis Distance pattern recognition algorithm, we could unambiguously determine the identity of an "unknown" tooth sample in real time. Based on single spectra obtained from the sample, the transition from caries-affected to healthy tooth material could be distinguished, with high spatial resolution. CONCLUSIONS: The combination of LIBS and pattern recognition algorithms provides a potentially useful tool for dentists for fast material identification problems, such as the precise control of the laser drilling / cleaning process.
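A minimal sketch of Mahalanobis-distance pattern matching of the kind described (scoring an unknown LIBS spectrum against training sets of reference spectra per class); the synthetic spectra, class names, and nearest-distance decision rule are illustrative assumptions:

import numpy as np

def mahalanobis_distance(x, reference_spectra):
    """Distance of spectrum x to the distribution of one reference class."""
    mu = reference_spectra.mean(axis=0)
    cov = np.cov(reference_spectra, rowvar=False)
    cov_inv = np.linalg.pinv(cov)          # pseudo-inverse for numerical stability
    diff = x - mu
    return float(np.sqrt(diff @ cov_inv @ diff))

# Illustrative training sets: rows are LIBS spectra (intensity per spectral channel).
rng = np.random.default_rng(1)
healthy = rng.normal(loc=1.0, scale=0.1, size=(50, 200))
carious = rng.normal(loc=1.3, scale=0.1, size=(50, 200))

unknown = rng.normal(loc=1.28, scale=0.1, size=200)   # spectrum to classify
distances = {"healthy": mahalanobis_distance(unknown, healthy),
             "carious": mahalanobis_distance(unknown, carious)}
print(distances, "->", min(distances, key=distances.get))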
Results of the first European Source Apportionment intercomparison for Receptor and Chemical Transport Models
In this study, the performance of the source apportionment model applications was evaluated by comparing the model results provided by 44 participants, adopting a methodology based on performance indicators: z-scores and RMSEu, with pre-established acceptability criteria. Involving models based on completely different and independent input data, such as receptor models (RMs) and chemical transport models (CTMs), provided a unique opportunity to cross-validate them. In addition, comparing the modelled source chemical profiles with those measured directly at the source helped corroborate the chemical profiles of the tested model results. The most used RM was EPA-PMF5. RMs showed very good performance for the overall dataset (91% of z-scores accepted), while more difficulties were observed with the SCE time series (72% of RMSEu accepted). Industry proved to be the most problematic source for RMs due to the high variability among participants. The results obtained with CTMs were also quite comparable to their ensemble reference using all models for the overall average (>92% of successful z-scores), while the comparability of the time series is more problematic (between 58% and 77% of the candidates’ RMSEu accepted). In the CTMs, a gap was observed between the sum of source contributions and the gravimetric PM10 mass, likely due to PM underestimation in the base case. Interestingly, when only the tagged-species CTM results were used in the reference, the differences between the two CTM approaches (brute force and tagged species) were evident. In this case the percentages of candidates passing the z-score and RMSEu tests were only 50% and 86%, respectively. CTMs showed good comparability with RMs for the overall dataset (83% of z-scores accepted), while more differences were observed when dealing with the time series of the single source categories; in this case the share of successful RMSEu was in the range 25%–34%.
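A schematic of the two acceptance tests mentioned (a proficiency-style z-score for a candidate's average source contribution and an uncertainty-weighted RMSE for its time series); the uncertainty terms and the |z| ≤ 2 and RMSEu ≤ 1 cut-offs used below are illustrative assumptions and not necessarily the study's exact protocol:

import numpy as np

def z_score(candidate_mean, reference_mean, sigma_ref):
    """Proficiency-style z-score of a candidate's average source contribution."""
    return (candidate_mean - reference_mean) / sigma_ref

def rmse_u(candidate_series, reference_series, u_series):
    """RMSE of a time series, weighted by the reference uncertainty."""
    return float(np.sqrt(np.mean(((candidate_series - reference_series) / u_series) ** 2)))

# Illustrative data: one participant's daily contributions for one source (ug/m3).
rng = np.random.default_rng(2)
ref = rng.uniform(2.0, 8.0, size=30)            # ensemble reference
u = 0.3 * ref                                   # assumed uncertainty of the reference
cand = ref + rng.normal(scale=1.0, size=30)     # candidate model result

z = z_score(cand.mean(), ref.mean(), sigma_ref=0.5)
r = rmse_u(cand, ref, u)
print(f"z = {z:+.2f} (accepted: {abs(z) <= 2})")
print(f"RMSEu = {r:.2f} (accepted: {r <= 1})")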
A Taxonomy of Explainable Bayesian Networks
Artificial Intelligence (AI), and in particular, the explainability thereof,
has gained phenomenal attention over the last few years. Whilst we usually do
not question the decision-making process of these systems in situations where
only the outcome is of interest, we do however pay close attention when these
systems are applied in areas where the decisions directly influence the lives
of humans. In particular, noisy and uncertain observations close to the
decision boundary can result in predictions that cannot readily be
explained, which may foster mistrust among end-users. This has drawn attention to AI
methods for which the outcomes can be explained. Bayesian networks are
probabilistic graphical models that can be used as a tool to manage
uncertainty. The probabilistic framework of a Bayesian network allows for
explainability in the model, reasoning and evidence. The use of these methods
is mostly ad hoc and not as well organised as explainability methods in the
wider AI research field. As such, we introduce a taxonomy of explainability in
Bayesian networks. We extend the existing categorisation of explainability in
the model, reasoning or evidence to include explanation of decisions. The
explanations obtained from the explainability methods are illustrated by means
of a simple medical diagnostic scenario. The taxonomy introduced in this paper
has the potential not only to encourage end-users to communicate the outcomes
obtained efficiently, but also to support their understanding of how and, more
importantly, why certain predictions were made.
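A minimal sketch of the kind of Bayesian network that could underlie the simple medical diagnostic scenario mentioned: a hypothetical Disease -> Symptom model queried with evidence. The variable names, probabilities, and the use of the pgmpy library are illustrative assumptions, not the paper's own example:

from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Hypothetical two-node diagnostic network: Disease -> Symptom.
model = BayesianNetwork([("Disease", "Symptom")])

cpd_disease = TabularCPD("Disease", 2, [[0.99], [0.01]])   # P(absent), P(present)
cpd_symptom = TabularCPD("Symptom", 2,
                         [[0.90, 0.20],                    # P(no symptom | Disease state)
                          [0.10, 0.80]],                   # P(symptom | Disease state)
                         evidence=["Disease"], evidence_card=[2])
model.add_cpds(cpd_disease, cpd_symptom)
assert model.check_model()

# Evidence-based reasoning: how does observing the symptom change the diagnosis?
infer = VariableElimination(model)
posterior = infer.query(variables=["Disease"], evidence={"Symptom": 1})
print(posterior)   # the posterior is the kind of object an explanation method unpacks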
Enriching Visual with Verbal Explanations for Relational Concepts -- Combining LIME with Aleph
With the increasing number of deep learning applications, there is a growing
demand for explanations. Visual explanations provide information about which
parts of an image are relevant for a classifier's decision. However,
highlighting of image parts (e.g., an eye) cannot capture the relevance of a
specific feature value for a class (e.g., that the eye is wide open).
Furthermore, highlighting cannot convey whether the classification depends on
the mere presence of parts or on a specific spatial relation between them.
Consequently, we present an approach that is capable of explaining a
classifier's decision in terms of logic rules obtained by the Inductive Logic
Programming system Aleph. The examples and the background knowledge needed for
Aleph are based on the explanation generation method LIME. We demonstrate our
approach with images of a blocksworld domain. First, we show that our approach
is capable of identifying a single relation as an important explanatory construct.
Afterwards, we present the more complex relational concept of towers. Finally,
we show how the generated relational rules can be explicitly related to the
input image, resulting in richer explanations.
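A minimal sketch of the visual half of such a pipeline: LIME applied to an image classifier to obtain the most relevant superpixels, which in the paper are the basis for the examples and background knowledge handed to Aleph. The dummy classifier and random image are placeholders, and the coupling to Aleph is not shown:

import numpy as np
from lime import lime_image

# Placeholder classifier: returns class probabilities for a batch of images.
# In the paper's setting this would be the trained blocksworld classifier.
def classifier_fn(images):
    scores = images.mean(axis=(1, 2, 3))          # dummy score per image
    return np.stack([1 - scores, scores], axis=1)

image = np.random.rand(64, 64, 3)                 # placeholder input image

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(image, classifier_fn,
                                         top_labels=1, hide_color=0,
                                         num_samples=500)

# Superpixels most relevant to the predicted class; their attributes and spatial
# relations are what a relational learner like Aleph could generalise over.
_, mask = explanation.get_image_and_mask(explanation.top_labels[0],
                                         positive_only=True, num_features=5)
print("relevant superpixel mask shape:", mask.shape)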
More data, more problems: Strategically addressing data ethics and policy issues in LIS curricula and courses
Library and information science (LIS) schools are revising undergraduate and graduate curricula and individual courses to prepare students for data-centric careers, as well as for participation in a data-driven society. To meet these new challenges, programs are developing courses on, among other things, data curation, analytics, visualization, algorithm design, and artificial intelligence. While such changes reflect new workforce and societal needs, it remains to be seen whether such efforts adequately address the very real and serious ethics and policy issues associated with related data practices (e.g., privacy, bias, fairness, and justice). The Information Ethics SIG and the Information Policy SIG have merged to present a panel on data ethics and policy issues in LIS education. In this session, two recent books on information ethics and information policy will be discussed to bring context to the panel, three papers will be presented, and the audience will have an opportunity to participate in a structured discussion. The papers will address three topics that explore the implications and concerns of living in a data-driven society: collaborative strategies for contributing to the data ethics education landscape, young adult information privacy concerns when using mobile devices, and artificial intelligence and social responsibility. The structured discussion will invite participation on issues raised by the papers, as well as implications for practice in LIS education.
