164 research outputs found
Ultimate limits for quantum magnetometry via time-continuous measurements
We address the estimation of the magnetic field B acting on an ensemble of
atoms with total spin J subjected to collective transverse noise. By preparing
an initial spin coherent state, for any measurement performed after the
evolution, the mean-square error of the estimate is known to scale as 1/J,
i.e. no quantum enhancement is obtained. Here, we consider the possibility of
continuously monitoring the atomic environment, and conclusively show that
strategies based on time-continuous non-demolition measurements followed by a
final strong measurement may achieve Heisenberg-limited scaling 1/J^2 and
also a monitoring-enhanced scaling in terms of the interrogation time. We also
find that time-continuous schemes are robust against detection losses, as we
prove that the quantum enhancement can also be recovered for finite measurement
efficiency. Finally, we analytically prove the optimality of our strategy.
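For context, the two scaling regimes contrasted in this abstract correspond to the standard definitions used in quantum metrology; the expressions below are the textbook forms for a probe of total spin J, not equations quoted from the paper:

\Delta^2 B_{\mathrm{SQL}} \propto 1/J , \qquad \Delta^2 B_{\mathrm{HL}} \propto 1/J^{2} ,

where \Delta^2 B is the mean-square error of the estimate of B, SQL denotes the standard quantum limit (shot-noise) scaling and HL the Heisenberg-limited scaling.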
Enhanced estimation of loss in the presence of Kerr nonlinearity
We address the characterization of dissipative bosonic channels and show that
estimation of the loss rate by Gaussian probes (coherent or squeezed) is
improved in the presence of Kerr nonlinearity. In particular, enhancement of
precision may be substantial for short interaction time, i.e. for media of
moderate size, e.g. biological samples. We analyze in detail the behaviour of
the quantum Fisher information (QFI), and determine the values of nonlinearity
maximizing the QFI as a function of the interaction time and of the parameters
of the input signal. We also discuss the precision achievable by photon
counting and quadrature measurement and present additional results for
truncated, few-photon, probe signals. Finally, we discuss the origin of the
precision enhancement, showing that it cannot be linked quantitatively to the
non-Gaussianity of the interacting probe signal.
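The precision benchmarks discussed above stem from the quantum Cramér-Rao bound, which fixes the role of the QFI. The following is the standard textbook statement, reported here only for context and not taken from the paper:

\mathrm{Var}(\hat{\gamma}) \ge \frac{1}{M\,F_Q(\gamma)}, \qquad F_Q(\gamma)=\mathrm{Tr}\!\left[\rho_\gamma L_\gamma^{2}\right], \qquad \partial_\gamma \rho_\gamma = \tfrac{1}{2}\left(L_\gamma\rho_\gamma+\rho_\gamma L_\gamma\right),

where \gamma is the loss rate, M the number of independent repetitions and L_\gamma the symmetric logarithmic derivative.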
Recommendation Systems: An Insight Into Current Development and Future Research Challenges
Research on recommendation systems is swiftly producing an abundance of novel methods, constantly challenging the current state-of-the-art. Inspired by advancements in many related fields, like Natural Language Processing and Computer Vision, many hybrid approaches based on deep learning are being proposed, making solid improvements over traditional methods. On the downside, this flurry of research activity, often focused on improving over a small number of baselines, makes it hard to identify reference methods and standardized evaluation protocols. Furthermore, the traditional categorization of recommendation systems into content-based, collaborative filtering and hybrid systems lacks the informativeness it once had. With this work, we provide a gentle introduction to recommendation systems, describing the task they are designed to solve and the challenges faced in research. Building on previous work, an extension to the standard taxonomy is presented, to better reflect the latest research trends, including the diverse use of content and temporal information. To ease the approach to the technical methodologies recently proposed in this field, we review several representative methods selected primarily from top conferences and systematically describe their goals and novelty. We formalize the main evaluation metrics adopted by researchers and identify the most commonly used benchmarks. Lastly, we discuss issues in current research practices by analyzing experimental results reported on three popular datasets.
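As a concrete illustration of the kind of evaluation metrics such a survey formalizes, the sketch below computes Recall@K and binary-relevance NDCG@K for a single user from a ranked list of item ids. The function names, the toy data and the choice of these two particular metrics are ours, intended only as a minimal, generic example rather than the paper's own formalization.

import math

def recall_at_k(ranked_items, relevant_items, k):
    """Fraction of the user's relevant items that appear in the top-k ranking."""
    hits = len(set(ranked_items[:k]) & set(relevant_items))
    return hits / len(relevant_items) if relevant_items else 0.0

def ndcg_at_k(ranked_items, relevant_items, k):
    """Binary-relevance NDCG: discounted gain of hits, normalized by the ideal ranking."""
    relevant = set(relevant_items)
    dcg = sum(1.0 / math.log2(i + 2)
              for i, item in enumerate(ranked_items[:k]) if item in relevant)
    ideal = sum(1.0 / math.log2(i + 2) for i in range(min(k, len(relevant))))
    return dcg / ideal if ideal > 0 else 0.0

# Example: items ranked by a recommender for one user, plus held-out positives.
print(recall_at_k([3, 7, 1, 9, 4], [7, 4, 8], k=5))   # 2/3 of the positives retrieved
print(ndcg_at_k([3, 7, 1, 9, 4], [7, 4, 8], k=5))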
On the Application of a Common Theoretical Explainability Framework in Information Retrieval
Most of the current state-of-the-art models used to solve the search and ranking tasks in Information Retrieval (IR) are considered “black boxes” due to the enormous number of parameters employed, which makes it difficult for humans to understand the relation between input and output. Thus, in the current literature, several approaches are proposed to explain their outputs, trying to make the models more explainable while maintaining the high level of effectiveness achieved. Even though many methods have been developed, there is still a lack of a common way of describing and evaluating the models and methods of the Explainable IR (ExIR) field. This work shows how a common theoretical framework for explainability (previously presented in the biomedical field) can be applied to IR. We first describe the general framework and then turn to specific explanation techniques in the IR field, focusing on the core tasks of search and ranking. We show how well-known methods in ExIR fit into the framework and how specific IR explainability evaluation metrics can be described using this new setting.
Cylinders extraction in non-oriented point clouds as a clustering problem
Finding geometric primitives in 3D point clouds is a fundamental task in many engineering applications such as robotics, autonomous vehicles and automated industrial inspection. Among all solid shapes, cylinders are frequently found in a variety of scenes, comprising natural or man-made objects. Despite their ubiquitous presence, automated extraction and fitting can become challenging if performed “in the wild”, when the number of primitives is unknown or the point cloud is noisy and not oriented. In this paper, we pose the problem of extracting multiple cylinders in a scene by means of a Game-Theoretic inlier selection process exploiting the geometrical relations between pairs of axis candidates. First, we formulate the similarity between two possible cylinders considering the rigid motion aligning the two axes to the same line. This motion is represented with a unitary dual quaternion, so that the distance between two cylinders is induced by the length of the shortest geodesic path in SE(3). Then, a Game-Theoretic process exploits such a similarity function to extract sets of primitives maximizing their inner mutual consensus. The outcome of the evolutionary process is a probability distribution over the sets of candidates (i.e. axes), which in turn is used to directly estimate the final cylinder parameters. An extensive experimental section shows that the proposed algorithm offers high resilience to noise, since the process inherently discards inconsistent data. Compared to other methods, it does not need point normals and does not require fine tuning of multiple parameters.
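The Game-Theoretic inlier selection summarized above can be read as an evolutionary game over candidate axes. The sketch below runs discrete-time replicator dynamics on a pairwise similarity (payoff) matrix, so that probability mass concentrates on mutually consistent candidates; the toy payoff matrix, the tolerance and the use of replicator dynamics itself are assumptions made for illustration, not details taken from the paper.

import numpy as np

def replicator_dynamics(A, tol=1e-8, max_iter=10000):
    """Evolve a distribution x over candidates so that mutually consistent
    candidates (high pairwise payoff A[i, j]) end up with high probability."""
    n = A.shape[0]
    x = np.full(n, 1.0 / n)          # start from the uniform distribution
    for _ in range(max_iter):
        payoff = A @ x               # expected payoff of each candidate
        x_new = x * payoff
        x_new /= x_new.sum()         # renormalize to a probability distribution
        if np.abs(x_new - x).sum() < tol:
            break
        x = x_new
    return x

# Toy payoff matrix: candidates 0-2 are mutually consistent, candidate 3 is an outlier.
A = np.array([[0.0, 0.9, 0.8, 0.1],
              [0.9, 0.0, 0.85, 0.1],
              [0.8, 0.85, 0.0, 0.2],
              [0.1, 0.1, 0.2, 0.0]])
print(replicator_dynamics(A))        # mass concentrates on the consistent set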
A Survey on Text Classification Algorithms: From Text to Predictions
In recent years, the exponential growth of digital documents has been met by rapid progress in text classification techniques. Newly proposed machine learning algorithms leverage the latest advancements in deep learning methods, allowing for the automatic extraction of expressive features. The swift development of these methods has led to a plethora of strategies to encode natural language into machine-interpretable data. The latest language modelling algorithms are used in conjunction with ad hoc preprocessing procedures, whose description is often omitted in favour of a more detailed explanation of the classification step. This paper offers a concise review of recent text classification models, with emphasis on the flow of data, from raw text to output labels. We highlight the differences between earlier methods and more recent, deep learning-based methods, both in how they function and in how they transform input data. To give a better perspective on the text classification landscape, we provide an overview of datasets for the English language, as well as instructions for the synthesis of two new multilabel datasets, which we found to be particularly scarce in this setting. Finally, we provide an outline of new experimental results and discuss the open research challenges posed by deep learning-based language models.
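To make the "from raw text to output labels" data flow concrete, here is a minimal classical baseline (TF-IDF features followed by a linear classifier). It sits at the opposite end of the spectrum from the deep learning models the survey reviews, and the tiny corpus is invented purely for illustration.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy corpus: raw text in, labels out.
texts = ["the striker scored a late goal",
         "parliament passed the new budget",
         "the keeper saved a penalty",
         "the senate debated the tax bill"]
labels = ["sport", "politics", "sport", "politics"]

# Preprocessing (tokenization + weighting) and classification in a single pipeline.
model = make_pipeline(TfidfVectorizer(lowercase=True, stop_words="english"),
                      LogisticRegression(max_iter=1000))
model.fit(texts, labels)
print(model.predict(["a goal in the final minute"]))   # "goal" should push this toward 'sport'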
Unsupervised Semantic Discovery Through Visual Patterns Detection
We propose a new, fast, fully unsupervised method to discover semantic patterns. Our algorithm is able to hierarchically find visual categories and produce a segmentation mask where previous methods fail. By modeling what constitutes a visual pattern in an image, we introduce the notion of “semantic levels” and devise a conceptual framework along with measures and a dedicated benchmark dataset for future comparisons. Our algorithm is composed of two phases: a filtering phase, which selects semantic hotspots by means of an accumulator space, and a clustering phase, which propagates the semantic properties of the hotspots on a superpixel basis. We provide both qualitative and quantitative experimental validation, achieving optimal results in terms of robustness to noise and semantic consistency. We also make the code and dataset publicly available.
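One possible reading of the clustering phase, which propagates hotspot information on a superpixel basis, is sketched below using SLIC superpixels: each superpixel inherits the mean score of its pixels. The score map, the threshold and the use of SLIC are placeholders, and the paper's accumulator-space construction is not reproduced here.

import numpy as np
from skimage.segmentation import slic

def propagate_hotspots(image, hotspot_score, n_segments=200, threshold=0.5):
    """Assign each superpixel the mean hotspot score of its pixels and return a
    binary mask of superpixels whose mean score exceeds the threshold.
    hotspot_score is a 2D array with the same height and width as image."""
    segments = slic(image, n_segments=n_segments, compactness=10.0, start_label=0)
    mask = np.zeros(segments.shape, dtype=bool)
    for label in np.unique(segments):
        region = segments == label
        if hotspot_score[region].mean() > threshold:
            mask[region] = True
    return mask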
An Analysis of Errors in Graph-Based Keypoint Matching and Proposed Solutions
An error occurs in graph-based keypoint matching when keypoints in two different images are matched by an algorithm but do not correspond to the same physical point. Most previous methods acquire keypoints in a black-box manner and focus on developing better algorithms to match the provided points. However, to study the complete performance of a matching system, one has to study errors through the whole matching pipeline, from keypoint detection and candidate selection to graph optimisation. We show that in the full pipeline there are six different types of errors that cause mismatches. We then present a matching framework designed to reduce these errors. We achieve this by adapting keypoint detectors to better suit the needs of graph-based matching, and by building better graph constraints that exploit more information from the detected keypoints. Our framework is applicable to general images and can handle clutter and motion discontinuities. Finally, owing to the asymmetric way we detect keypoints and define the graph, we also propose a method, inspired by Left-Right Consistency in stereo matching, to identify many mismatches a posteriori.
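The Left-Right Consistency test mentioned at the end can be illustrated with a simple mutual nearest-neighbour filter over descriptor distances. The brute-force matching below is a generic stand-in chosen for clarity, not the paper's graph-based pipeline.

import numpy as np

def left_right_consistent_matches(desc_a, desc_b):
    """Keep only matches (i, j) where j is the nearest neighbour of i in B
    and i is, in turn, the nearest neighbour of j in A."""
    # Pairwise Euclidean distances between the two descriptor sets.
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    a_to_b = d.argmin(axis=1)           # best match in B for each keypoint in A
    b_to_a = d.argmin(axis=0)           # best match in A for each keypoint in B
    return [(i, j) for i, j in enumerate(a_to_b) if b_to_a[j] == i]

# Toy descriptors: each row is one keypoint descriptor.
desc_a = np.array([[0.0, 1.0], [1.0, 0.0], [5.0, 5.0]])
desc_b = np.array([[1.1, 0.1], [0.1, 0.9], [4.0, 6.0]])
print(left_right_consistent_matches(desc_a, desc_b))  # [(0, 1), (1, 0), (2, 2)]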
A stable graph-based representation for object recognition through high-order matching
Many object recognition techniques perform some flavour of point pattern matching between a model and a scene. Such points are usually selected through a feature detection algorithm that is robust to a class of image transformations, and a suitable descriptor is computed over them in order to obtain a reliable matching. Moreover, some approaches take an additional step by casting the correspondence problem into a matching between graphs defined over the feature points. The motivation is that the relational model would add more discriminative power; however, the overall effectiveness strongly depends on the ability to build a graph that is stable with respect to changes both in the object appearance and in the spatial distribution of interest points. In fact, widely used graph-based representations have been shown to suffer from some limitations, especially with respect to changes in the Euclidean organization of the feature points. In this paper, we introduce a technique to build relational structures over corner points that does not depend on the spatial distribution of the features.
