Counterterrorism Policy Responses After a Major Attack
Both domestically and internationally, counterterrorism policy is a crucial and universal challenge for countries in the modern era. Where do countries derive their counterterrorism policy, and how does it change after terrorist attacks? Using Policy Convergence Theory (PCT), this paper explores how counterterrorism policy changes and is adopted after a large-scale attack. PCT argues that governments on similar economic tracks will ultimately create similar policies across all policy areas, but the theory has not been examined in light of the threat of terrorism. This paper's objective is to evaluate the role and presence of PCT in counterterrorism policy. Using original case studies of Indonesia, Turkey, Russia, Spain, the United Kingdom, India, and France, I find positive evidence that policy convergence is present in nearly all cases of response to major terrorist attacks.
XUV digital in-line holography using high-order harmonics
A step towards a successful implementation of time-resolved digital in-line
holography with extreme ultraviolet radiation is presented. Ultrashort XUV
pulses are produced as high-order harmonics of a femtosecond laser and a
Schwarzschild objective is used to focus harmonic radiation at 38 nm and to
produce a strongly divergent reference beam for holographic recording.
Experimental holograms of thin wires are recorded and the objects
reconstructed. Descriptions of the simulation and reconstruction theory and
algorithms are also given. A spatial resolution of a few hundred nm is
potentially achievable, and micrometre-range resolution is demonstrated. Comment: 8 pages, 8 figures
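The numerical reconstruction step mentioned in this abstract is typically performed by back-propagating the recorded hologram; a minimal sketch of the standard angular-spectrum propagation method is given below (grid size, wavelength, and distances are arbitrary illustration values, not the paper's 38 nm experimental parameters):

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a complex optical field by distance z with the
    angular-spectrum method; hologram reconstruction corresponds to
    back-propagation (negative z). Units are arbitrary but consistent."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)          # spatial frequencies (cycles/unit)
    fy = np.fft.fftfreq(ny, d=dx)
    f2 = fx[None, :] ** 2 + fy[:, None] ** 2
    kz2 = 1.0 / wavelength ** 2 - f2       # squared longitudinal frequency
    kz = np.sqrt(np.maximum(kz2, 0.0))
    H = np.exp(2j * np.pi * z * kz)        # free-space transfer function
    H[kz2 < 0] = 0.0                       # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Round trip: propagating forward then backward recovers the field.
y, x = np.mgrid[-32:32, -32:32]
field = np.exp(-(x**2 + y**2) / 50.0).astype(complex)
recon = angular_spectrum_propagate(
    angular_spectrum_propagate(field, 0.5, 1.0, 10.0), 0.5, 1.0, -10.0)
```

The forward/backward round trip is exact here because the chosen sampling keeps all spatial frequencies in the propagating (non-evanescent) regime.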
Image quality optimization, via application of contextual contrast sensitivity and discrimination functions
What is the best luminance contrast weighting-function for image quality optimization? Traditionally measured contrast sensitivity functions (CSFs) have often been used as weighting-functions in image quality and difference metrics. Such weightings have been shown to increase the sharpness and perceived quality of test images. We suggest that contextual CSFs (cCSFs) and contextual discrimination functions (cVPFs) should provide bases for further improvement, since these are measured directly from pictorial scenes, modeling threshold and suprathreshold sensitivities within the context of complex masking information. Image quality assessment is understood to require detection and discrimination of masked signals, making contextual sensitivity and discrimination functions directly relevant. In this investigation, test images are weighted with a traditional CSF, a cCSF, a cVPF and a constant function. Controlled mutations of these functions are also applied as weighting-functions, seeking the optimal spatial-frequency band weighting for quality optimization. Image quality, sharpness and naturalness are then assessed in two-alternative forced-choice psychophysical tests. We show that maximal quality for our test images results from cCSFs and cVPFs mutated to boost contrast in the higher visible frequencies.
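The weighting procedure described in this abstract amounts to scaling an image's spatial-frequency content by a sensitivity function; a minimal sketch follows, using a generic band-pass curve as a stand-in for the paper's measured cCSF/cVPF (the peak location and the cycles-per-degree mapping are illustrative assumptions):

```python
import numpy as np

def csf_weight(f, peak=4.0):
    # Hypothetical band-pass CSF-like curve (NOT the paper's measured cCSF):
    # sensitivity rises to a maximum near `peak` cycles/degree, then falls.
    f = np.maximum(f, 1e-6)
    return (f / peak) * np.exp(1.0 - f / peak)

def apply_frequency_weighting(image, weight_fn):
    """Weight a luminance image's spectrum by a contrast-sensitivity-like
    function, as in CSF-based image-quality weighting."""
    F = np.fft.fft2(image)
    fy = np.fft.fftfreq(image.shape[0])
    fx = np.fft.fftfreq(image.shape[1])
    radial = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)
    # Map normalized frequency to nominal cycles/degree (assumed viewing setup).
    W = weight_fn(radial * 60.0)
    W[0, 0] = 1.0  # preserve mean luminance (DC term)
    return np.real(np.fft.ifft2(F * W))

img = np.random.rand(64, 64)
weighted = apply_frequency_weighting(img, csf_weight)
```

"Mutating" a weighting-function, as the study does, corresponds to perturbing the per-band gains of `weight_fn` and re-running the psychophysical comparison.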
The Design of Equal Complexity FIR Perfect Reconstruction Filter Banks Incorporating Symmetries
In this report, we present a new approach to the design of perfect reconstruction filter banks (PRFBs) with equal-length FIR analysis and synthesis filters. To achieve perfect reconstruction, necessary and sufficient conditions are incorporated directly into a numerical design procedure as a set of quadratic equality constraints among the impulse-response coefficients of the filters. Any symmetry inherent in a particular application, such as quadrature mirror symmetry, linear phase, or symmetry between analysis and synthesis filters, may be exploited to reduce the number of variables and constraints in the design problem. A novel feature of our approach is that it allows the design of filter banks that perform functions other than flat-passband band-splitting.
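The perfect-reconstruction property that the quadratic constraints enforce can be checked numerically: for a two-channel bank, analysis filtering, 2:1 decimation, interpolation, and synthesis filtering should return the input delayed by a fixed number of samples. A minimal sketch, using the well-known Haar (quadrature mirror) pair rather than filters from the report's design procedure:

```python
import numpy as np

def analysis_synthesis(x, h0, h1, g0, g1):
    """Pass x through a two-channel filter bank: analysis filters h0/h1,
    2:1 decimation, zero-stuffing interpolation, synthesis filters g0/g1."""
    def branch(h, g):
        v = np.convolve(x, h)      # analysis filter
        v[1::2] = 0.0              # decimate by 2 then upsample = zero odd samples
        return np.convolve(v, g)   # synthesis filter
    return branch(h0, g0) + branch(h1, g1)

# Haar pair: a known perfect-reconstruction solution with one-sample delay.
rt2 = np.sqrt(2.0)
h0 = np.array([1.0, 1.0]) / rt2    # lowpass analysis
h1 = np.array([1.0, -1.0]) / rt2   # highpass analysis
g0 = np.array([1.0, 1.0]) / rt2    # lowpass synthesis
g1 = np.array([-1.0, 1.0]) / rt2   # highpass synthesis (cancels aliasing)

x = np.random.randn(32)
y = analysis_synthesis(x, h0, h1, g0, g1)   # y[1:33] reproduces x
```

In the report's framework, the analogous check on candidate impulse responses becomes the set of quadratic equality constraints imposed during numerical optimization.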
Seq-UPS: Sequential Uncertainty-aware Pseudo-label Selection for Semi-Supervised Text Recognition
This paper looks at semi-supervised learning (SSL) for image-based text
recognition. One of the most popular SSL approaches is pseudo-labeling (PL). PL
approaches assign labels to unlabeled data before re-training the model with a
combination of labeled and pseudo-labeled data. However, PL methods are
severely degraded by noise and are prone to over-fitting to noisy labels, due
to the inclusion of erroneous high confidence pseudo-labels generated from
poorly calibrated models, thus, rendering threshold-based selection
ineffective. Moreover, the combinatorial complexity of the hypothesis space and
the error accumulation over multiple incorrect autoregressive steps make
pseudo-labeling challenging for sequence models. To this end, we propose a
pseudo-label generation and an uncertainty-based data selection framework for
semi-supervised text recognition. We first use Beam-Search inference to yield
highly probable hypotheses to assign pseudo-labels to the unlabeled examples.
Then we adopt an ensemble of models, sampled by applying dropout, to obtain a
robust estimate of the uncertainty associated with the prediction, considering
both the character-level and word-level predictive distribution to select good
quality pseudo-labels. Extensive experiments on several benchmark handwriting
and scene-text datasets show that our method outperforms the baseline
approaches and the previous state-of-the-art semi-supervised text-recognition
methods. Comment: Accepted at WACV 202
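The selection idea described in this abstract — averaging stochastic (e.g. MC-dropout) forward passes and accepting a pseudo-label only when both character- and word-level confidence are high — can be sketched as follows. The thresholds and the product-of-probabilities word score are illustrative assumptions, not the paper's exact criterion:

```python
import numpy as np

def select_pseudo_label(ensemble_probs, char_thresh=0.9, word_thresh=0.9):
    """Uncertainty-aware pseudo-label selection over an ensemble of
    stochastic forward passes. `ensemble_probs` has shape (T, L, V):
    T dropout passes, sequence length L, character vocabulary size V."""
    mean_probs = ensemble_probs.mean(axis=0)   # (L, V) ensemble-averaged dist.
    chars = mean_probs.argmax(axis=1)          # candidate pseudo-label sequence
    char_conf = mean_probs.max(axis=1)         # character-level confidence
    word_conf = char_conf.prod()               # word-level confidence
    accept = bool((char_conf >= char_thresh).all() and word_conf >= word_thresh)
    return chars, accept

# A confident ensemble (mass concentrated on one character per position)
# is accepted; a uniform, high-uncertainty one is rejected.
T, L, V = 5, 3, 4
confident = np.full((T, L, V), 0.01)
confident[:, :, 2] = 0.97
chars, ok = select_pseudo_label(confident)     # ok is True, chars all 2
_, ok_noisy = select_pseudo_label(np.full((T, L, V), 0.25))  # False
```

Accepted sequences would then be mixed with the labeled data for re-training, as in the pseudo-labeling loop the abstract describes.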
