
    A Layer Decomposition-Recomposition Framework for Neuron Pruning towards Accurate Lightweight Networks

    Neuron pruning is an efficient method to compress a network into a slimmer one, reducing computational cost and storage overhead. Most state-of-the-art results are obtained in a layer-by-layer optimization mode: unimportant input neurons are discarded and the surviving ones are used to reconstruct output neurons that approximate the original ones, one layer at a time. However, an often-unnoticed problem arises: the information loss accumulates as the network gets deeper, since the surviving neurons no longer encode the entire information as before. A better alternative is to propagate the entire useful information to reconstruct the pruned layer instead of directly discarding the less important neurons. To this end, we propose a novel Layer Decomposition-Recomposition Framework (LDRF) for neuron pruning, by which each layer's output information is recovered in an embedding space and then propagated to reconstruct the following pruned layers with the useful information preserved. We mainly conduct our experiments on the ILSVRC-12 benchmark with VGG-16 and ResNet-50. Notably, our results before end-to-end fine-tuning are significantly superior owing to the information-preserving property of the proposed framework. With end-to-end fine-tuning, we achieve state-of-the-art results of 5.13x and 3x speed-up with only 0.5% and 0.65% top-5 accuracy drops respectively, outperforming existing neuron pruning methods. Comment: accepted by AAAI19 as oral
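    As a rough illustration of the layer-by-layer pruning-and-reconstruction setup the abstract contrasts LDRF against (not the authors' framework itself), the sketch below prunes the input neurons of a single fully connected layer using a simple weight-norm importance score and refits the surviving weights by least squares so that the layer's outputs stay close to the originals. The function name, the importance score, and the keep_ratio parameter are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Minimal sketch of layer-by-layer neuron pruning with output reconstruction.
# W has shape (n_out, n_in); X holds calibration activations, shape (n_samples, n_in).
def prune_and_reconstruct(W, X, keep_ratio=0.5):
    n_out, n_in = W.shape
    # Rank input neurons by a simple importance score: L2 norm of their outgoing weights.
    importance = np.linalg.norm(W, axis=0)
    n_keep = max(1, int(round(keep_ratio * n_in)))
    keep = np.sort(np.argsort(importance)[-n_keep:])

    # Outputs of the original (unpruned) layer on the calibration samples.
    Y = X @ W.T                              # (n_samples, n_out)

    # Least-squares reconstruction: find W_new so that X[:, keep] @ W_new.T ~= Y.
    X_kept = X[:, keep]                      # (n_samples, n_keep)
    W_new, *_ = np.linalg.lstsq(X_kept, Y, rcond=None)
    W_new = W_new.T                          # (n_out, n_keep)

    rel_err = np.linalg.norm(X_kept @ W_new.T - Y) / np.linalg.norm(Y)
    return keep, W_new, rel_err

# Toy usage with random weights and activations.
rng = np.random.default_rng(0)
W = rng.standard_normal((64, 128))
X = rng.standard_normal((256, 128))
keep, W_new, err = prune_and_reconstruct(W, X, keep_ratio=0.5)
print(f"kept {len(keep)} of 128 inputs, relative reconstruction error {err:.3f}")
```

    The accumulation problem the abstract points out appears when this step is repeated layer after layer: each reconstruction is fit only to the already-pruned inputs of the previous layer, so the approximation error compounds with depth.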

    Small Open Economy Study for Hong Kong

    In this paper we derive a dynamic stochastic general equilibrium (DSGE) model for Hong Kong, following Gali and Monacelli (2005). The model features a small open economy with a currency board. We simulate the model and illustrate impulse response functions, comparing three different monetary rules: an exchange rate peg (PEG), a domestic inflation target (DIT), and a Taylor rule. The model is estimated with a conventional Bayesian approach; we then compare the PEG against the other two rules, and the PEG wins overwhelming support from the data. Our results show substantial openness of Hong Kong, and that firms reset prices roughly every three quarters. Cyclical variations in Hong Kong seem to come mostly from productivity and cost-push shocks. Finally, a DSGE-VAR model is estimated; the results are similar to those of the DSGE model, but the estimated weight parameter indicates that the cross-equation restrictions are too stylised to capture the essential dynamics of the data compared with a pure VAR model.
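    The abstract does not spell out the rules it compares; as a reminder, the standard textbook forms of the three regimes in Gali-Monacelli-style small open economy models are sketched below, with notation assumed here rather than taken from the paper (i_t the nominal interest rate, e_t the nominal exchange rate, \pi_{H,t} domestic inflation, \pi_t CPI inflation, y_t the output gap).

```latex
% Standard textbook forms of the three monetary rules (assumed notation,
% not the paper's exact specification).
\begin{align}
  \text{PEG (currency board / exchange rate peg):} &\quad e_t = 0, \\
  \text{DIT (domestic inflation targeting):}       &\quad \pi_{H,t} = 0, \\
  \text{Taylor rule:}                              &\quad i_t = \rho + \phi_\pi \pi_t + \phi_y y_t .
\end{align}
```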

    Fast and Robust Rank Aggregation against Model Misspecification

    In rank aggregation, preferences from different users are summarized into a total order under the homogeneous data assumption. When this assumption fails in practice, model misspecification arises, and existing rank aggregation methods account for it through specific noise models. However, they all rely on particular noise-model assumptions and cannot handle agnostic noise in the real world. In this paper, we propose CoarsenRank, which rectifies the underlying data distribution directly and aligns it with the homogeneous data assumption without involving any noise model. To this end, we define a neighborhood of the data distribution over which Bayesian inference of CoarsenRank is performed, so that the resulting posterior enjoys robustness against model misspecification. Further, we derive a tractable closed-form solution for CoarsenRank, making it computationally efficient. Experiments on real-world datasets show that CoarsenRank is fast and robust, achieving consistent improvements over baseline methods.
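    The "neighborhood of the data distribution" in the abstract is in the spirit of coarsened Bayesian inference; a generic form of such a posterior, shown below as an assumed illustration rather than the paper's exact derivation, tempers the likelihood, which is what yields robustness to misspecification.

```latex
% Generic coarsened (tempered) posterior: an assumed illustration of the kind of
% robust Bayesian update the abstract describes, not the paper's exact formula.
% Conditioning on the data lying within a relative-entropy neighbourhood of the
% model yields, approximately, a power posterior with exponent 0 < \zeta <= 1:
\begin{equation}
  \pi_\zeta(\theta \mid x_{1:n})
  \;\propto\;
  \pi(\theta)\,\prod_{i=1}^{n} p(x_i \mid \theta)^{\zeta},
\end{equation}
% where \zeta decreases as the allowed neighbourhood widens; \zeta = 1 recovers
% the standard posterior.
```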