Active redundancy allocation in systems
An effective way of improving the reliability of a system is the allocation of active redundancy. Let $X_1$, $X_2$ be independent lifetimes of the components $C_1$ and $C_2$, respectively, which form a series system. Let $U_1 = \min(\max(X, X_1), X_2)$ and $U_2 = \min(X_1, \max(X, X_2))$, where $X$ is the lifetime of a redundancy (say $S$) independent of $X_1$ and $X_2$. That is, $U_1$ ($U_2$) denotes the lifetime of the system obtained by allocating $S$ to $C_1$ ($C_2$) as an active redundancy. Singh and Misra (1994) considered the criterion under which $C_1$ is preferred to $C_2$ for redundancy allocation if $P(U_1 > U_2) \ge P(U_2 > U_1)$. In this paper we use the same criterion of Singh and Misra (1994) and investigate the allocation of one active redundancy when it differs depending on the component with which it is to be allocated. We find sufficient conditions for the optimal allocation which depend on the probability distributions of the components and the redundancies. We also compare the allocation of two active redundancies (say $S_1$ and $S_2$) in two different ways: $S_1$ with $C_1$ and $S_2$ with $C_2$, and vice versa. For this case the hazard rate order plays an important role. We also obtain results for the allocation of more than two active redundancies to a k-out-of-n system.
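As a quick illustration of the Singh-Misra criterion above, the following Monte Carlo sketch (not from the paper; the exponential rates are hypothetical) estimates $P(U_1 > U_2)$ and $P(U_2 > U_1)$ directly from the two definitions, showing that the spare is preferred on the weaker component of the series system.

```python
# Monte Carlo sketch of the Singh-Misra allocation criterion.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
l1, l2, l = 2.0, 1.0, 1.5   # hypothetical failure rates; C1 (rate l1) is the weaker component
X1 = rng.exponential(1 / l1, n)   # lifetime of C1
X2 = rng.exponential(1 / l2, n)   # lifetime of C2
X = rng.exponential(1 / l, n)     # lifetime of the redundancy S

U1 = np.minimum(np.maximum(X, X1), X2)   # system lifetime with S allocated to C1
U2 = np.minimum(X1, np.maximum(X, X2))   # system lifetime with S allocated to C2

# Criterion: prefer C1 iff P(U1 > U2) >= P(U2 > U1)
print(np.mean(U1 > U2), np.mean(U2 > U1))
```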
Predicting Future Instance Segmentation by Forecasting Convolutional Features
Anticipating future events is an important prerequisite for intelligent
behavior. Video forecasting has been studied as a proxy task towards this goal.
Recent work has shown that to predict semantic segmentation of future frames,
forecasting at the semantic level is more effective than forecasting RGB frames
and then segmenting these. In this paper we consider the more challenging
problem of future instance segmentation, which additionally segments out
individual objects. To deal with a varying number of output labels per image,
we develop a predictive model in the space of fixed-sized convolutional
features of the Mask R-CNN instance segmentation model. We apply the "detection
head" of Mask R-CNN on the predicted features to produce the instance
segmentation of future frames. Experiments show that this approach
significantly improves over strong baselines based on optical flow and
repurposed instance segmentation architectures.
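A schematic sketch of the feature-forecasting idea (assumed layer sizes; not the authors' exact architecture): a small convolutional network maps stacked Mask R-CNN feature maps from past frames to the feature map of a future frame, on which a frozen detection head would then be run.

```python
# Schematic feature-level forecaster: past FPN features -> future features.
import torch
import torch.nn as nn

class FeatureForecaster(nn.Module):
    def __init__(self, channels=256, past_frames=4):
        super().__init__()
        # Past feature maps are stacked along the channel axis.
        self.net = nn.Sequential(
            nn.Conv2d(channels * past_frames, 512, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(512, 512, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(512, channels, 3, padding=1),  # predicted future features
        )

    def forward(self, past_feats):                 # list of T tensors [B, C, H, W]
        return self.net(torch.cat(past_feats, dim=1))

# Usage: one forecaster per feature level; the predicted map replaces the
# features of a real frame as input to the (frozen) detection head.
f = FeatureForecaster()
past = [torch.randn(1, 256, 64, 64) for _ in range(4)]
future = f(past)   # [1, 256, 64, 64]
```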
Zero-Shot Hashing via Transferring Supervised Knowledge
Hashing has shown its efficiency and effectiveness in facilitating
large-scale multimedia applications. Supervised knowledge (e.g., semantic labels
or pair-wise relationships) associated with data can significantly improve
the quality of hash codes and hash functions. However, confronted
with the rapid growth of newly-emerging concepts and multimedia data on the
Web, existing supervised hashing approaches may easily suffer from the scarcity
and validity of supervised information due to the expensive cost of manual
labelling. In this paper, we propose a novel hashing scheme, termed
zero-shot hashing (ZSH), which compresses images of "unseen" categories
to binary codes with hash functions learned from limited training data of
"seen" categories. Specifically, we project independent data labels i.e.
0/1-form label vectors) into semantic embedding space, where semantic
relationships among all the labels can be precisely characterized and thus seen
supervised knowledge can be transferred to unseen classes. Moreover, in order
to cope with the semantic shift problem, we rotate the embedded space to more
suitably align the embedded semantics with the low-level visual feature space,
thereby alleviating the influence of semantic gap. In the meantime, to exert
positive effects on learning high-quality hash functions, we further propose to
preserve local structural property and discrete nature in binary codes.
Besides, we develop an efficient alternating algorithm to solve the ZSH model.
Extensive experiments conducted on various real-life datasets show the superior
zero-shot image retrieval performance of ZSH as compared to several
state-of-the-art hashing methods.
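The rotation step can be pictured as a classical orthogonal Procrustes alignment; the sketch below (a simplification with assumed inputs, not the paper's actual objective or solver) finds the orthogonal matrix that best aligns label embeddings with projected visual features.

```python
# Orthogonal Procrustes sketch of the embedding-rotation step.
import numpy as np

def align_rotation(E_sem, F_vis):
    """E_sem: [n, d] semantic embeddings of training labels (e.g. word vectors);
       F_vis: [n, d] visual features projected to the same dimension d."""
    # R = argmin over orthogonal R of ||E_sem @ R - F_vis||_F
    U, _, Vt = np.linalg.svd(E_sem.T @ F_vis)
    return U @ Vt

rng = np.random.default_rng(0)
E = rng.normal(size=(100, 32))   # hypothetical label embeddings
F = rng.normal(size=(100, 32))   # hypothetical projected visual features
R = align_rotation(E, F)
assert np.allclose(R @ R.T, np.eye(32), atol=1e-8)   # R is orthogonal
```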
Configuration Complexities of Hydrogenic Atoms
The Fisher-Shannon and Cramer-Rao information measures, and the LMC-like or
shape complexity (i.e., the disequilibrium times the Shannon entropic power) of
hydrogenic stationary states are investigated in both position and momentum
spaces. First, it is shown that not only the Fisher information and the
variance (and thus the Cramer-Rao measure) but also the disequilibrium associated
with the quantum-mechanical probability density can be explicitly expressed in
terms of the three quantum numbers (n, l, m) of the corresponding state.
Second, the three composite measures mentioned above are analytically,
numerically and physically discussed for both ground and excited states. It is
observed, in particular, that these configuration complexities do not depend on
the nuclear charge Z. Moreover, the Fisher-Shannon measure is shown to
depend quadratically on the principal quantum number n. Finally, sharp upper
bounds to the Fisher-Shannon measure and the shape complexity of a general
hydrogenic orbital are given in terms of the quantum numbers.
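As a numerical illustration of the Z-independence claim, the sketch below (using the standard LMC shape complexity C = D * exp(S) and the hydrogenic ground-state density in atomic units; not code from the paper) computes the disequilibrium D and Shannon entropy S for several nuclear charges.

```python
# Numerical check that the LMC shape complexity of the hydrogenic ground
# state, rho(r) = (Z^3/pi) * exp(-2 Z r) in atomic units, is independent of Z.
import numpy as np
from scipy.integrate import quad

def lmc_complexity(Z):
    rho = lambda r: (Z**3 / np.pi) * np.exp(-2 * Z * r)
    # Disequilibrium D = integral of rho^2 over 3D space (spherical symmetry)
    D, _ = quad(lambda r: rho(r)**2 * 4 * np.pi * r**2, 0, np.inf)
    # Shannon entropy S = -integral of rho * ln(rho)
    S, _ = quad(lambda r: -rho(r) * np.log(rho(r)) * 4 * np.pi * r**2, 0, np.inf)
    return D * np.exp(S)

for Z in (1, 2, 5):
    print(Z, lmc_complexity(Z))   # all approx. e^3 / 8 = 2.5107, independent of Z
```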
Effects of Social Attitude Change on Smoking Heritability
Societal attitudes and norms toward female smoking changed in Spain in the mid-twentieth century from a restrictive to a tolerant, even pro-smoking, posture, while social attitudes remained stable for males. We explored whether this difference in gender-related social norms influenced the heritability of two tobacco use measures: lifetime smoking and number of years smoking. We used a population-based sample of 2285 twins (mean age = 55.78; SD = 7.45; 58% females) whose adolescence began between the mid-1950s and the early 1980s. After modeling the effect of sex and year of birth on the variance components, we observed that the impact of the genetic and shared environmental factors varied differently by birth cohort between males and females. For females, shared environment explained a higher proportion of variance than the genetic factors in the older cohorts, but this situation was inverted in the younger female cohorts. In contrast, no birth cohort effect was observed for males, where the impact of the genetic and environmental factors remained constant throughout the study period. These results suggest that heritability is larger in a permissive social environment, whereas shared-environmental factors are more relevant in a society that is less tolerant of smoking.
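For readers unfamiliar with twin designs, the classical Falconer decomposition below (a schematic sketch with hypothetical correlations, not the paper's full biometric model) shows how a shift between shared-environment-dominated and gene-dominated variance shows up in MZ/DZ twin correlations.

```python
# Falconer approximation: decompose trait variance from twin correlations.
def falconer(r_mz, r_dz):
    a2 = 2 * (r_mz - r_dz)   # additive genetic variance (heritability)
    c2 = 2 * r_dz - r_mz     # shared-environment variance
    e2 = 1 - r_mz            # unique environment (incl. measurement error)
    return a2, c2, e2

# Hypothetical correlations for an older, restrictive cohort vs a younger one.
print(falconer(r_mz=0.6, r_dz=0.5))   # C-dominated: (0.2, 0.4, 0.4)
print(falconer(r_mz=0.7, r_dz=0.4))   # A-dominated: (0.6, 0.1, 0.3)
```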
Quantum properties of classical Fisher information
The Fisher information of a quantum observable is shown to be proportional to
both (i) the difference of a quantum and a classical variance, thus providing a
measure of nonclassicality; and (ii) the rate of entropy increase under
Gaussian diffusion, thus providing a measure of robustness. The joint
nonclassicality of position and momentum observables is shown to be
complementary to their joint robustness in an exact sense.
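Claim (ii) echoes the classical de Bruijn identity, dS/dt = F/2 under Gaussian diffusion; the sketch below (an illustrative numerical check, not the paper's derivation) verifies it on a bimodal density.

```python
# Numerical check of de Bruijn's identity: dS/dt = F/2 when rho_t = rho * N(0, t).
import numpy as np

x = np.linspace(-20, 20, 4001)
dx = x[1] - x[0]

def gauss(x, s2):
    return np.exp(-x**2 / (2 * s2)) / np.sqrt(2 * np.pi * s2)

def entropy(p):
    return -np.sum(p * np.log(p)) * dx

def fisher(p):
    dp = np.gradient(p, dx)
    return np.sum(dp**2 / p) * dx

# Start from a bimodal density and diffuse it by convolving with a Gaussian.
p0 = 0.5 * gauss(x - 2, 0.5) + 0.5 * gauss(x + 2, 0.5)
t, dt = 1.0, 1e-3
pt  = np.convolve(p0, gauss(x, t),      mode="same") * dx
pt2 = np.convolve(p0, gauss(x, t + dt), mode="same") * dx

dS_dt = (entropy(pt2) - entropy(pt)) / dt
print(dS_dt, 0.5 * fisher(pt))   # approximately equal: dS/dt = F/2
```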
The LHC Post Mortem Analysis Framework
The LHC, with its unprecedented complexity and criticality of beam operation, will need thorough analysis of data taken from systems such as power converters, interlocks and beam instrumentation during events like magnet quenches and beam loss. The causes of beam aborts or, in the worst case, equipment damage have to be revealed to improve operational procedures and protection systems. The correct functioning of the protection systems, with their required redundancy, has to be verified after each such event. Post mortem analysis software for the control room has been prepared with automated analysis packages in view of the large number of systems and the data volume. This paper recalls the requirements for the LHC Beam Post Mortem System (PM) and the necessity for highly reliable data collection. It describes in detail the redundant architecture for data collection as well as the chosen implementation of a multi-level analysis framework, allowing for automated analysis and qualification of a beam dump event based on expert-provided analysis modules. It concludes with an example of the data taken during the first beam tests in September 2008 with a first version of the system.
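The "expert-provided analysis modules" can be pictured as a plugin registry; the sketch below (a hypothetical API, not CERN's actual framework) shows how per-system modules could return verdicts that the framework combines to qualify a beam dump event.

```python
# Plugin-registry sketch of expert-provided post-mortem analysis modules.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Verdict:
    system: str
    ok: bool
    detail: str

MODULES: Dict[str, Callable[[dict], Verdict]] = {}

def analysis_module(system: str):
    """Decorator letting each equipment expert register an analysis module."""
    def register(fn):
        MODULES[system] = fn
        return fn
    return register

@analysis_module("beam_loss_monitors")
def check_blm(data: dict) -> Verdict:
    # Hypothetical check: losses stayed below the dump threshold.
    peak = max(data.get("losses", [0.0]))
    threshold = data.get("dump_threshold", 1.0)
    return Verdict("beam_loss_monitors", peak < threshold, f"peak loss {peak:.3g}")

def qualify_event(event_data: dict) -> bool:
    """An event is qualified only if every registered module reports ok."""
    return all(fn(event_data.get(name, {})).ok for name, fn in MODULES.items())

print(qualify_event({"beam_loss_monitors":
                     {"losses": [0.2, 0.4], "dump_threshold": 1.0}}))   # True
```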
