
    A model driven approach for software systems reliability

    A model-driven approach for assuring the reliability of software systems from the design level through to deployment, using transformation techniques, is described. Once the reliability mechanisms provided by current component-based development architectures (CBDAs) are designed in a platform-independent way, the platform-specific design and implementation models must be extended accordingly. Current CBDAs, such as Enterprise Java Beans, support a considerable range of features for system reliability. The evaluation aims to test the maturity of the approach, its applicability, and the effectiveness of the reliability models. Alternative techniques, such as process algebras, are generally considered time consuming in the context of software development.

    Sensitivity Analysis for a Scenario-Based Reliability Prediction Model

    As a popular means for capturing behavioural requirements, scenarios show how components interact to provide system-level functionality. If component reliability information is available, scenarios can be used to perform early system reliability assessment. In previous work we presented an automated approach for predicting software system reliability that extends a scenario specification to model (1) the probability of component failure, and (2) scenario transition probabilities. Probabilistic behaviour models of the system are then synthesized from the extended scenario specification, and reliability predictions can be computed from the system behaviour model. This paper complements our previous work and presents a sensitivity analysis that supports reasoning about how component reliability and usage profiles impact the overall system reliability. For this purpose, we show how the system reliability varies as a function of the component reliabilities and the scenario transition probabilities. Taking into account the concurrent nature of component-based software systems, we also analyse the effect of implied-scenario prevention on the sensitivity analysis of our reliability prediction technique.
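    The flavour of such a sensitivity analysis can be illustrated with a minimal sketch. This is not the authors' synthesis technique (which builds probabilistic behaviour models from scenario specifications); the scenario structure, usage profile, and component reliabilities below are all hypothetical. System reliability is taken as a usage-weighted combination of per-scenario reliabilities, and the sensitivity to one component is estimated by finite differences:

    ```python
    import numpy as np

    def system_reliability(comp_rel, usage_profile, scenario_components):
        """Each scenario's reliability = product of the reliabilities of the
        components it exercises; system reliability = usage-weighted average."""
        scen_rel = [np.prod([comp_rel[c] for c in comps])
                    for comps in scenario_components]
        return float(np.dot(usage_profile, scen_rel))

    def sensitivity(comp_rel, usage_profile, scenario_components, i, eps=1e-6):
        """Finite-difference sensitivity of system reliability to component i."""
        lo, hi = comp_rel.copy(), comp_rel.copy()
        lo[i] -= eps
        hi[i] += eps
        return (system_reliability(hi, usage_profile, scenario_components)
                - system_reliability(lo, usage_profile, scenario_components)) / (2 * eps)

    # Hypothetical system: 2 components, 3 scenarios with a fixed usage profile.
    comp_rel = [0.99, 0.95]
    usage = [0.5, 0.3, 0.2]
    scen = [[0], [0, 1], [1]]       # components exercised by each scenario

    R = system_reliability(comp_rel, usage, scen)       # ~0.96715
    dR = sensitivity(comp_rel, usage, scen, 0)          # how much component 0 matters
    ```

    Plotting `R` while sweeping one entry of `comp_rel` or `usage` reproduces, in miniature, the "reliability as a function of component reliabilities and transition probabilities" view the paper advocates.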

    Lautum Regularization for Semi-supervised Transfer Learning

    Transfer learning is an important tool in deep learning, as it allows propagating information from one "source dataset" to another "target dataset", especially when the latter has only a small number of training examples. Yet, discrepancies between the underlying distributions of the source and target data are commonplace and are known to have a substantial impact on algorithm performance. In this work we suggest a novel information-theoretic approach for analysing the performance of deep neural networks in the context of transfer learning. We focus on the task of semi-supervised transfer learning, in which unlabeled samples from the target dataset are available during the network training on the source dataset. Our theory suggests that one may improve the transferability of a deep neural network by imposing a Lautum-information-based regularization that relates the network weights to the target data. We demonstrate the effectiveness of the proposed approach in various transfer learning experiments.
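    For intuition about the quantity being regularized: Lautum information is the KL divergence taken in the reverse direction from mutual information, L(X;Y) = D(p(x)p(y) || p(x,y)), and for jointly Gaussian variables it has a closed form. The snippet below is a toy illustration only (the paper applies the quantity to network weights and target data, not to scalar Gaussians):

    ```python
    import math

    def gaussian_lautum(rho):
        """Lautum information L(X;Y) = D(p(x)p(y) || p(x,y)) for zero-mean,
        unit-variance jointly Gaussian X, Y with correlation coefficient rho.
        Compare mutual information I(X;Y) = -0.5*log(1 - rho^2); reversing the
        KL direction gives L(X;Y) = rho^2/(1 - rho^2) + 0.5*log(1 - rho^2)."""
        return rho ** 2 / (1 - rho ** 2) + 0.5 * math.log(1 - rho ** 2)

    # Like mutual information, Lautum information vanishes at independence
    # (rho = 0) and grows monotonically with |rho|.
    values = [gaussian_lautum(r) for r in (0.0, 0.3, 0.6, 0.9)]
    ```

    In the Gaussian case both information measures behave qualitatively alike, but Lautum information's expectation under the product of marginals is what makes it tractable as a training-time regularizer.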

    Evidence of spontaneous spin polarized transport in magnetic nanowires

    The exploitation of the spin in charge-based systems is opening revolutionary opportunities for device architecture. Surprisingly, room-temperature electrical transport through magnetic nanowires is still an unresolved issue. Here, we show that ferromagnetic (Co) suspended atom chains spontaneously display an electron transport of half a conductance quantum, as expected for a fully polarized conduction channel. Similar behavior has been observed for Pd (a quasi-magnetic 4d metal) and Pt (a non-magnetic 5d metal). These results suggest that the nanowires' low dimensionality reinforces or induces magnetic behavior, lifting spin degeneracy even at room temperature and zero external magnetic field.
    Comment: 4 pages, 3 EPS figures

    Unveiling The Sigma-Discrepancy II: Revisiting the Evolution of ULIRGs & The Origin of Quasars

    We present the first central velocity dispersions (sigma_o) measured from the 0.85 micron Calcium II Triplet (CaT) for 8 advanced (i.e. single-nucleus) local (z < 0.15) Ultraluminous Infrared Galaxies (ULIRGs). First, these measurements are used to test the prediction that the "sigma-Discrepancy," in which the CaT sigma_o is systematically larger than the sigma_o obtained from the 1.6 or 2.3 micron stellar CO band-heads, extends to ULIRG luminosities. Next, we combine the CaT data with rest-frame I-band photometry obtained from archival Hubble Space Telescope data and the Sloan Digital Sky Survey (SDSS) to derive dynamical properties for the 8 ULIRGs. These are then compared to the dynamical properties of 9,255 elliptical galaxies from the SDSS within the same redshift volume and of a relatively nearby (z < 0.4) sample of 53 QSO host galaxies. A comparison is also made between the I-band and H-band dynamical properties of the ULIRGs. We find four key results: 1) the sigma-Discrepancy extends to ULIRG luminosities; 2) at I-band, ULIRGs lie on the Fundamental Plane (FP) in a region consistent with the most massive elliptical galaxies, and not with low- to intermediate-mass ellipticals as previously reported in the near-infrared; 3) the I-band M/L of ULIRGs are consistent with an old stellar population, while at H-band ULIRGs appear significantly younger and less massive; and 4) we derive an I-band Kormendy Relation from the SDSS ellipticals and demonstrate that ULIRGs and QSO host galaxies are dynamically similar.
    Comment: Accepted to The Astrophysical Journal. 6 Figures, 5 Tables, 4 Appendices. Version 2 changes: corrects errors in Table 1 of Appendix C; now formatted using ApJ emulat

    Asymptotic Task-Based Quantization with Application to Massive MIMO

    Quantizers take part in nearly every digital signal processing system which operates on physical signals. They are commonly designed to accurately represent the underlying signal, regardless of the specific task to be performed on the quantized data. In systems working with high-dimensional signals, such as massive multiple-input multiple-output (MIMO) systems, it is beneficial to utilize low-resolution quantizers, due to cost, power, and memory constraints. In this work we study quantization of high-dimensional inputs, aiming at improving performance under resolution constraints by accounting for the system task in the quantizer's design. We focus on the task of recovering a desired signal statistically related to the high-dimensional input, and analyze two quantization approaches: we first consider vector quantization, which is typically computationally infeasible, and characterize the optimal performance achievable with this approach. Next, we focus on practical systems which utilize hardware-limited uniform scalar analog-to-digital converters (ADCs), and design a task-based quantizer under this model. The resulting system accounts for the task by linearly combining the observed signal into a lower dimension prior to quantization. We then apply our proposed technique to channel estimation in massive MIMO networks. Our results demonstrate that a system utilizing low-resolution scalar ADCs can approach the optimal channel estimation performance by properly accounting for the task in the system design.
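    The core idea — reduce the high-dimensional observation to the task's dimension in analog before feeding it to low-resolution scalar ADCs — can be sketched as follows. This is a simplified illustration, not the paper's optimized design: the pseudo-inverse combiner, the dimensions, the noise level, and the mid-rise quantizer are all assumptions made here.

    ```python
    import numpy as np

    def uniform_scalar_quantize(x, n_bits, dyn_range):
        """Mid-rise uniform scalar quantizer: 2**n_bits levels on [-dyn_range, dyn_range]."""
        step = 2.0 * dyn_range / (2 ** n_bits)
        q = step * (np.floor(x / step) + 0.5)
        return np.clip(q, -dyn_range + step / 2, dyn_range - step / 2)

    rng = np.random.default_rng(0)
    n, k = 32, 4                        # observation dim >> task dim
    H = rng.standard_normal((n, k))     # known linear model: y = H s + noise
    s = rng.standard_normal(k)          # desired signal (e.g. channel coefficients)
    y = H @ s + 0.05 * rng.standard_normal(n)

    # Task-ignorant front end: quantize all n observations, then estimate s digitally.
    s_ignorant = np.linalg.lstsq(H, uniform_scalar_quantize(y, 3, 4.0), rcond=None)[0]

    # Task-based front end: combine down to k dimensions in analog, then
    # quantize only those k values -- they are already estimates of s.
    A = np.linalg.pinv(H)               # k x n analog combining matrix
    s_task = uniform_scalar_quantize(A @ y, 3, 4.0)
    ```

    With 3-bit ADCs, the task-based front end spends 12 bits per snapshot instead of 96, and its quantizers operate directly on the quantities the task cares about rather than on the raw high-dimensional observation.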