Bandwidth enhancement: correcting magnitude and phase distortion in wideband piezoelectric transducer systems
Acoustic ultrasonic measurements are widespread and commonly use transducers exhibiting resonant behaviour due to the piezoelectric nature of their active elements, which are designed to give maximum sensitivity in the bandwidth of interest. Consequently, these devices work efficiently and linearly over only a very narrow band of their overall frequency range, which in turn causes phase and magnitude distortion of linear signals. We present a characterisation of such transducers that provides both magnitude and phase information describing how the receiver responds to a surface displacement over its frequency range. To correct for this distortion, we introduce a software technique that considers only the input and the final output signals of the whole system and is therefore generally applicable to any acoustic system. By correcting the distortion of the magnitude and phase responses, we ensure that the signal seen at the receiver replicates the desired signal. In a test system, we demonstrate a bandwidth extension of the received signal from 60-130 kHz at -6 dB to 40-200 kHz at -1 dB. The linear chirp signal used to demonstrate this method showed the received signal to be almost identical to the desired linear chirp. Such system characterisation will improve ultrasonic techniques for investigating material properties by maximising the accuracy of magnitude and phase estimations.
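The abstract describes the correction only at a high level. As an illustration of the general idea (estimate the system's complex frequency response from one known input/output pair, then pre-distort the drive spectrum with a regularised inverse filter), here is a minimal Python sketch; the function name, the single-pair calibration, and the regularisation constant eps are our assumptions, not the paper's implementation:

    import numpy as np

    def equalised_drive(desired, measured, drive, eps=1e-3):
        """Pre-distort a drive signal so the received signal approximates
        `desired`.  The complex response H(f) is estimated from one
        calibration pair (`drive` in, `measured` out); the corrected drive
        spectrum is the desired spectrum times a regularised inverse of H,
        so that both magnitude and phase distortion are undone.
        """
        n = len(drive)
        D = np.fft.rfft(drive, n)                  # calibration input spectrum
        M = np.fft.rfft(measured, n)               # calibration output spectrum
        H = M / np.where(np.abs(D) > 0, D, 1.0)    # estimated complex response
        H2 = np.abs(H) ** 2
        inv = np.conj(H) / (H2 + eps * H2.max())   # Wiener-style inverse filter
        W = np.fft.rfft(desired, n) * inv
        return np.fft.irfft(W, n)

In practice one would average H over several calibration measurements and choose eps from the measured noise floor, so that bands where the transducer is insensitive are not amplified into noise.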
Challenging Lucas: From overlapping generations to infinite-lived agent models
The canonical history of macroeconomics, one of the rival schools of thought and the great economists, gives Robert Lucas a prominent role in shaping the recent developments in the area. According to it, his followers were initially split into two camps, the "real business cycle" theorists with models of efficient fluctuations, and the "new-Keynesians" with models in which fluctuations are costly, and the government has a role to play, due to departures from the competitive equilibrium (such as nominal rigidities and imperfect competition). Later on, a consensus view emerged (the so-called new neoclassical synthesis), based on the dynamic stochastic general equilibrium (DSGE) model, which combines elements of the models developed by economists of those two groups. However, this account misses critical developments, as already pointed out by Cherrier and Säidi (2015). As a reaction to Lucas's 1972 policy ineffectiveness results, based on an overlapping generations (OLG) model, a group of macroeconomists realized that a competitive OLG model may have a continuum of equilibria, that this indeterminacy justified government intervention, and that competitive cycles emerged even in deterministic models. We can identify here two distinct, but related, groups: one around the deterministic cycles of David Gale, David Cass, and Jean-Michel Grandmont, and another around the stochastic models and sunspots of Karl Shell, Roger Guesnerie, Roger Farmer and Costas Azariadis (Lucas's PhD student). Here, the OLG model was the workhorse. Following from these works, a number of authors, including Michael Woodford, argued that similar results could occur in models with infinitely lived agents when there are various kinds of market imperfections. With such generalization, some of these macroeconomists concluded that, once these imperfections are introduced, nothing important for business cycle modeling was lost, and they could therefore set the OLG model aside as a model of business fluctuations, to the dismay of authors such as Grandmont, Robert Solow and Frank Hahn. In this paper, we scrutinize the differences between the deterministic cycles and sunspot groups and explore the many efforts to build a dynamic competitive business cycle model that implies a role for the government to play. We then assess the transformation process that took place in the late 1980s, when several macroeconomists switched from OLG to infinitely lived agent models with imperfections that eventually became central to the DSGE literature. With this we hope to shed more light on the origins of the new neoclassical synthesis.
A finite element method for a curlcurl-graddiv eigenvalue interface problem
In this paper we propose and study a finite element method for a curlcurl-graddiv eigenvalue interface problem. Its solution may be of piecewise non-H^1 regularity. We would like to approximate such a solution in an H^1-conforming finite element space. With the discretizations of both the curl and div operators of the underlying eigenvalue problem in two finite element spaces, the proposed method is essentially a standard H^1-conforming element method, up to element bubbles which can be statically eliminated at element level. We first analyze the proposed method for the related source interface problem by establishing the stability and the error bounds. We then analyze the underlying eigenvalue interface problem and obtain the error bounds O(h^{2r_0}) for eigenvalues whose eigenfunctions lie in the space ∏_{j=1}^{J} (H^r(Ω_j))^3 ∩ (H^{r_0}(Ω))^3, where the piecewise regularity r and the global regularity r_0 may belong to the most interesting interval [0, 1].
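For orientation only: the abstract does not state the PDE, but a typical strong form of a curlcurl-graddiv eigenvalue interface problem is sketched below in LaTeX. The piecewise-constant coefficients α, β and the boundary conditions are illustrative assumptions, not the paper's exact setting.

    % Illustrative strong form (assumed, not taken from the paper):
    % find eigenpairs (lambda, u) such that
    \[
      \nabla \times (\alpha \, \nabla \times u)
        - \nabla (\beta \, \nabla \cdot u) = \lambda \, u
      \qquad \text{in } \Omega = \bigcup_{j=1}^{J} \Omega_j ,
    \]
    % where \alpha and \beta are piecewise constant on the subdomains
    % \Omega_j (jumping across the interfaces), with, for example,
    % u \times n = 0 and \nabla \cdot u = 0 on \partial\Omega.

The interface character of the problem comes from the coefficient jumps, which is what limits the regularity r, r_0 of the eigenfunctions to the interval [0, 1] discussed above.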
Bioinspired low-frequency material characterisation
New coded signals, transmitted by high-sensitivity broadband transducers in the 40-200 kHz range, allow subwavelength material discrimination and thickness determination of polypropylene, polyvinylchloride, and brass samples. Frequency domain spectra enable simultaneous measurement of material properties including longitudinal sound velocity and the attenuation constant as well as thickness measurements. Laboratory test measurements agree well with model results, with sound velocity prediction errors of less than 1%, and thickness discrimination of at least wavelength/15. The resolution of these measurements has only been matched in the past through methods that utilise higher frequencies. The ability to obtain the same resolution using low frequencies has many advantages, particularly when dealing with highly attenuating materials. This approach differs significantly from past biomimetic approaches, where actual or simulated animal signals have been used, and consequently has the potential for application in a range of fields where both improved penetration and high resolution are required, such as nondestructive testing and evaluation, geophysics, and medical physics.
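The abstract reports simultaneous velocity and thickness measurement from frequency-domain spectra but does not give the relations used. One standard relation such measurements can rest on is the half-wave resonance condition for a plate, f_m = m*c/(2d), so the spacing Δf of transmission maxima gives c = 2*d*Δf (or d = c/(2Δf) when the velocity is known). A minimal Python sketch under that assumption, with illustrative numbers:

    import numpy as np

    def speed_from_resonances(peak_freqs_hz, thickness_m):
        """Longitudinal sound speed from thickness-resonance spacing.

        Through-transmission spectra of a plate show maxima roughly every
        c / (2 d) in frequency, so the mean peak spacing df gives
        c = 2 * d * df.
        """
        df = np.mean(np.diff(np.sort(peak_freqs_hz)))
        return 2.0 * thickness_m * df

    # Illustrative numbers: peaks every ~115 kHz for a 10 mm plate
    # give c ~ 2300 m/s, in the right range for a polymer.
    print(speed_from_resonances([115e3, 230e3, 345e3], 0.010))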
On Maximal Unbordered Factors
Given a string of length n, its maximal unbordered factor is the longest factor which does not have a border. In this work we investigate the relationship between n and the length of the maximal unbordered factor of the string. We prove that for an alphabet of size σ the expected length of the maximal unbordered factor of a string of length n is at least … (for sufficiently large values of n). As an application of this result, we propose a new algorithm for computing the maximal unbordered factor of a string.
Accepted to the 26th Annual Symposium on Combinatorial Pattern Matching (CPM 2015).
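The paper's new algorithm is not reproduced in the abstract. For context, here is a minimal Python sketch of the simple border-array based method that such work improves on (our illustration, not the authors' code): a prefix of the suffix s[i:] is unbordered exactly when its KMP failure value is zero.

    def maximal_unbordered_factor(s: str) -> str:
        """Longest factor of s with no border, i.e. no nonempty proper
        prefix that is also a suffix.  For each start i, compute the KMP
        failure function of s[i:]; the prefix s[i:i+j+1] is unbordered
        exactly when its failure value is 0.  O(n^2) worst-case time.
        """
        n = len(s)
        best = s[:1]                  # any single character is unbordered
        for i in range(n):
            pi = [0] * (n - i)        # failure function of the suffix s[i:]
            k = 0
            for j in range(1, n - i):
                while k > 0 and s[i + j] != s[i + k]:
                    k = pi[k - 1]
                if s[i + j] == s[i + k]:
                    k += 1
                pi[j] = k
                if k == 0 and j + 1 > len(best):
                    best = s[i : i + j + 1]
        return best

For example, maximal_unbordered_factor("abaab") returns "baa", an unbordered factor of length 3, while "abaab" itself is bordered by "ab".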
Computing the Longest Unbordered Substring
A substring of a string is unbordered if its only border is the empty string. The study of unbordered substrings goes back to the paper of Ehrenfeucht and Silberger [7]. The main focus of [7] and of subsequent papers was to elucidate the relationship between the longest unbordered substring and the minimal period of strings. In this paper, we consider the algorithmic problem of computing the longest unbordered substring of a string. The problem was introduced recently in [12], where the authors showed that the average-case running time of the simple, border-array based algorithm can be bounded by O(n^2/σ^4), for σ being the size of the alphabet. (The worst-case running time remained O(n^2).) Here we propose two algorithms, both presenting substantial theoretical improvements over the result of [12]. The first algorithm has O(n log n) average-case running time and O(n^2) worst-case running time, and the second algorithm has O(n^1.5) worst-case running time.
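The Ehrenfeucht-Silberger line of work mentioned above revolves around the classical fact that an unbordered substring can never be longer than the minimal period of the string (a factor longer than the period inherits that period and hence has a border). A small self-contained Python check of that inequality, using brute force for illustration only:

    import random

    def failure(s):
        """KMP failure function: pi[j] = longest proper border of s[:j+1]."""
        pi, k = [0] * len(s), 0
        for j in range(1, len(s)):
            while k and s[j] != s[k]:
                k = pi[k - 1]
            if s[j] == s[k]:
                k += 1
            pi[j] = k
        return pi

    def minimal_period(s):
        return len(s) - failure(s)[-1] if s else 0

    def is_unbordered(t):
        return not any(t[:k] == t[-k:] for k in range(1, len(t)))

    def longest_unbordered(s):
        """Brute-force length of the longest unbordered substring."""
        n = len(s)
        for m in range(n, 0, -1):
            if any(is_unbordered(s[i:i + m]) for i in range(n - m + 1)):
                return m
        return 0

    # Longest unbordered substring <= minimal period, on random inputs.
    for _ in range(500):
        s = "".join(random.choices("ab", k=random.randint(1, 14)))
        assert longest_unbordered(s) <= minimal_period(s)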
