    Testing the New Keynesian Phillips Curve Without Assuming Identification

    We re-examine the evidence on the new Phillips curve model of Gali and Gertler (Journal of Monetary Economics 1999) using the conditional score test of Kleibergen (Econometrica 2005), which is robust to weak identification. In contrast to earlier studies, we find that US postwar data are consistent both with the view that inflation dynamics are forward-looking, and with the opposite view that they are predominantly backward-looking. Moreover, the labor share does not appear to be a relevant determinant of inflation. We show that this is an important factor contributing to the weak identification of the Phillips curve.

    An Evaluation of Score Level Fusion Approaches for Fingerprint and Finger-vein Biometrics

    Biometric systems have to address many requirements, such as large population coverage, demographic diversity, varied deployment environments, and practical aspects like performance and resistance to spoofing attacks. Traditional unimodal biometric systems do not fully meet these requirements, making them vulnerable and susceptible to different types of attacks. In response, modern biometric systems combine multiple biometric modalities at different fusion levels. The fused score is decisive in classifying an unknown user as a genuine user or an impostor. In this paper, we evaluate combinations of score normalization and fusion techniques using two modalities (fingerprint and finger-vein) with the goal of identifying which combination achieves the best improvement rate over traditional unimodal biometric systems. The individual scores obtained from finger-veins and fingerprints are combined at score level using three score normalization techniques (min-max, z-score, hyperbolic tangent) and four score fusion approaches (minimum score, maximum score, simple sum, user weighting). The experimental results show that the combination of the hyperbolic tangent normalization technique with the simple sum fusion approach achieves the best improvement rate of 99.98%.
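    The abstract names three normalization techniques and four fusion approaches but gives no formulas. A minimal numpy sketch of those steps (function names and the simplified tanh scaling are my own assumptions, not taken from the paper):

```python
import numpy as np

def min_max(s):
    """Min-max normalization: rescale scores to [0, 1]."""
    return (s - s.min()) / (s.max() - s.min())

def z_score(s):
    """Z-score normalization: zero mean, unit variance."""
    return (s - s.mean()) / s.std()

def tanh_norm(s, c=0.01):
    """Hyperbolic tangent normalization, simplified: maps scores into (0, 1).
    The scale constant c is an illustrative choice, not the paper's."""
    return 0.5 * (np.tanh(c * (s - s.mean()) / s.std()) + 1.0)

def fuse(a, b, method="sum", w=0.5):
    """Combine two normalized score arrays with one of the four approaches."""
    if method == "min":
        return np.minimum(a, b)       # minimum score
    if method == "max":
        return np.maximum(a, b)       # maximum score
    if method == "sum":
        return a + b                  # simple sum
    if method == "weighted":
        return w * a + (1.0 - w) * b  # user weighting
    raise ValueError(method)
```

Under this sketch, the paper's best-performing pipeline would correspond to `fuse(tanh_norm(fp), tanh_norm(fv), method="sum")`.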

    Automatic Detection of Malware-Generated Domains with Recurrent Neural Models

    Modern malware families often rely on domain-generation algorithms (DGAs) to determine rendezvous points to their command-and-control servers. Traditional defence strategies (such as blacklisting domains or IP addresses) are inadequate against such techniques due to the large and continuously changing list of domains produced by these algorithms. This paper demonstrates that a machine learning approach based on recurrent neural networks is able to detect domain names generated by DGAs with high precision. The neural models are estimated on a large training set of domains generated by various malware families. Experimental results show that this data-driven approach can detect malware-generated domain names with an F_1 score of 0.971. Put differently, the model can automatically detect 93% of malware-generated domain names at a false positive rate of 1:100.
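    A recurrent model consumes each domain name as a sequence of character indices. The abstract does not describe the preprocessing; the following is a plausible minimal sketch of that encoding step (alphabet, padding scheme, and the 63-character limit — the DNS label maximum — are my assumptions):

```python
import string

# Valid hostname characters; index 0 is reserved for padding/unknown
CHARS = string.ascii_lowercase + string.digits + "-."
CHAR2IDX = {c: i + 1 for i, c in enumerate(CHARS)}

def encode_domain(domain, max_len=63):
    """Map a domain name to a fixed-length sequence of integer indices,
    right-padded with zeros, ready to feed into an embedding + RNN layer."""
    idx = [CHAR2IDX.get(c, 0) for c in domain.lower()[:max_len]]
    return idx + [0] * (max_len - len(idx))
```

An RNN classifier would then embed these indices, run them through e.g. an LSTM, and emit a single malicious-vs-benign probability per domain.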

    Anomaly Detection for imbalanced datasets with Deep Generative Models

    Many important data analysis applications involve severely imbalanced datasets with respect to the target variable. A typical example is medical image analysis, where positive samples are scarce, while performance is commonly measured by the correct detection of these positive examples. We approach this challenge by formulating the problem as anomaly detection with generative models. We train a generative model without supervision on the `negative' (common) datapoints and use this model to estimate the likelihood of unseen data. A successful model allows us to detect the `positive' cases as low-likelihood datapoints. In this position paper, we present the use of state-of-the-art deep generative models (GAN and VAE) for estimating the likelihood of the data. Our results show that, on the one hand, both GANs and VAEs are able to separate the `positive' and `negative' samples in the MNIST case. On the other hand, for the NLST case, neither GANs nor VAEs were able to capture the complexity of the data and discriminate anomalies at the level this task requires. These results show that, even though a number of successes using generative models in similar applications have been reported in the literature, further challenges remain for broad successful implementation.
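    The core recipe — fit a density model on the common class only, then flag low-likelihood points as anomalies — can be shown with a much simpler density model than a GAN or VAE. A sketch using a multivariate Gaussian as a stand-in (the thresholding-at-a-training-quantile rule is my illustrative choice, not the paper's):

```python
import numpy as np

def fit_gaussian(X):
    """Fit mean and covariance to the 'negative' (common) training data."""
    return X.mean(axis=0), np.cov(X, rowvar=False)

def log_likelihood(X, mu, cov):
    """Per-sample Gaussian log-density."""
    d = X.shape[1]
    diff = X - mu
    maha = np.einsum("ij,jk,ik->i", diff, np.linalg.inv(cov), diff)
    return -0.5 * (maha + d * np.log(2 * np.pi)
                   + np.log(np.linalg.det(cov)))

def detect_anomalies(X_train_neg, X_test, quantile=0.01):
    """Flag test points whose likelihood falls below the chosen
    quantile of the training-data likelihoods."""
    mu, cov = fit_gaussian(X_train_neg)
    threshold = np.quantile(log_likelihood(X_train_neg, mu, cov), quantile)
    return log_likelihood(X_test, mu, cov) < threshold
```

A deep generative model plays the same role as `log_likelihood` here: a VAE supplies an evidence lower bound, and a GAN an indirect likelihood proxy, for high-dimensional data where a single Gaussian is far too crude.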

    Bargaining and Wage Rigidity in a Matching Model for the US

    The Mortensen and Pissarides (1994) matching model with all wages negotiated each period is shown to be inconsistent with macroeconomic wage dynamics in the US. This holds even when heterogeneous match productivities, time to build vacancies, and credible bargaining are incorporated. Wage rigidity consistent with micro evidence that wages of job changers are more flexible than those of job stayers allows the model to capture these dynamics and is not inconsistent with parameter calibrations in the literature. Such wage rigidity affects only the timing of wage payments over the duration of matches, so conclusions about characteristics based on calibrations continue to apply.

    Large Scale Spectral Clustering Using Approximate Commute Time Embedding

    Spectral clustering is a novel clustering method which can detect complex shapes of data clusters. However, it requires the eigendecomposition of the graph Laplacian matrix, which costs O(n^3) time and is thus not suitable for large-scale systems. Recently, many methods have been proposed to reduce the computational cost of spectral clustering. These approximate methods usually involve sampling techniques, through which a lot of information about the original data may be lost. In this work, we propose a fast and accurate spectral clustering approach using an approximate commute time embedding, which is similar to the spectral embedding. The method does not require any sampling technique and does not compute any eigenvector at all. Instead, it uses random projection and a linear-time solver to find the approximate embedding. Experiments on several synthetic and real datasets show that the proposed approach has better clustering quality and is faster than state-of-the-art approximate spectral clustering methods.
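    The commute-time embedding rests on the identity that effective resistance R(i,j) equals ||W^{1/2} B L^+ (e_i - e_j)||^2, which random projection preserves approximately. A small dense sketch of this idea — note the paper uses a near-linear-time Laplacian solver, whereas this illustration substitutes a pseudoinverse, so it only conveys the construction, not the speed:

```python
import numpy as np

def approx_commute_embedding(W, k=8, seed=0):
    """Approximate commute-time embedding of a weighted adjacency matrix W.
    Returns an (n, k) matrix Y with ||y_i - y_j||^2 approximating the
    effective resistance between nodes i and j (commute time / vol(G)).
    Simplified sketch: np.linalg.pinv stands in for the fast solver."""
    n = W.shape[0]
    L = np.diag(W.sum(axis=1)) - W  # graph Laplacian
    # Signed edge-incidence matrix, rows scaled by sqrt of edge weights
    rows, cols = np.triu_indices(n, k=1)
    mask = W[rows, cols] > 0
    rows, cols = rows[mask], cols[mask]
    m = len(rows)
    B = np.zeros((m, n))
    B[np.arange(m), rows] = np.sqrt(W[rows, cols])
    B[np.arange(m), cols] = -np.sqrt(W[rows, cols])
    # Random +/-1 projection from m edge-dimensions down to k
    rng = np.random.default_rng(seed)
    Q = rng.choice([-1.0, 1.0], size=(k, m)) / np.sqrt(k)
    # Each row of (Q @ B) defines one linear system L y = b;
    # the paper solves these in near-linear time, we use the pseudoinverse.
    return np.linalg.pinv(L) @ (Q @ B).T  # shape (n, k)
```

Clustering then proceeds by running k-means on the rows of the returned embedding, exactly as one would on a spectral embedding.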