260 research outputs found

    Compressed Learning of Deep Neural Networks for OpenCL-Capable Embedded Systems

    Deep neural networks (DNNs) have been quite successful in solving many complex learning problems. However, DNNs tend to have a large number of learning parameters, leading to large memory and computation requirements. In this paper, we propose a model compression framework for efficient training and inference of deep neural networks on embedded systems. Our framework provides data structures and kernels for OpenCL-based parallel forward and backward computation in a compressed form. In particular, our method learns sparse representations of parameters using ℓ1-based sparse coding while training, storing them in compressed sparse matrices. Unlike previous work, our method does not require a pre-trained model as an input and can therefore be more versatile for different application environments. Even though the use of ℓ1-based sparse coding for model compression is not new, we show that it can be far more effective than previously reported when we use proximal point algorithms and the technique of debiasing. Our experiments show that our method can produce minimal learning models suitable for small embedded devices.
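    The two ingredients highlighted in the abstract are a proximal step for the ℓ1 penalty (soft-thresholding) and debiasing on the learned support. A minimal NumPy sketch of both is below; the function names and the least-squares refit used for debiasing are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def soft_threshold(w, t):
    """Proximal operator of the l1 norm: shrink each weight toward zero."""
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def proximal_step(w, grad, lr, lam):
    """One proximal gradient step: a gradient move, then l1 shrinkage."""
    return soft_threshold(w - lr * grad, lr * lam)

def debias(w_sparse, X, y):
    """Debiasing sketch: refit unregularized least squares on the learned
    support, removing the shrinkage bias introduced by the l1 penalty."""
    support = np.flatnonzero(w_sparse)
    w = np.zeros_like(w_sparse)
    if support.size:
        w[support] = np.linalg.lstsq(X[:, support], y, rcond=None)[0]
    return w
```

    Weights driven exactly to zero by the shrinkage step are what make storage in compressed sparse matrices effective.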

    Scalable stochastic gradient descent with improved confidence

    Stochastic gradient descent methods have been quite successful for solving large-scale and online learning problems. We provide a simple parallel framework to obtain solutions of high confidence, where the confidence can be easily controlled by the number of processes, independently of the length of learning processes. Our framework is implemented as scalable open-source software which can be configured for a single multicore machine or for a cluster of computers, where the training outcomes from independent parallel processes are combined to produce the final output.
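    The combination scheme described above can be illustrated by running several independent SGD processes and averaging their outcomes, so that confidence grows with the number of runs rather than the run length. A hedged NumPy sketch (the least-squares loss and simple averaging are illustrative assumptions; the actual software may combine outcomes differently):

```python
import numpy as np

def sgd_run(X, y, lr=0.01, epochs=5, seed=0):
    """One independent SGD run for a least-squares model."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            grad = (X[i] @ w - y[i]) * X[i]  # per-sample gradient
            w -= lr * grad
    return w

def combined_sgd(X, y, n_runs=8):
    """Average the outcomes of independent runs; confidence in the
    combined solution is controlled by n_runs, not by run length."""
    return np.mean([sgd_run(X, y, seed=s) for s in range(n_runs)], axis=0)
```

    In a real deployment each run would execute in its own process on a multicore machine or cluster node, with only the final parameter vectors communicated.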

    Feature Selection for High-Dimensional Data with RapidMiner

    Feature selection is an important task in machine learning, reducing the dimensionality of learning problems by selecting a few relevant features without losing too much information. Focusing on smaller sets of features, we can learn simpler models from data that are easier to understand and to apply. In fact, simpler models are more robust to input noise and outliers, often leading to better prediction performance than models trained in higher dimensions with all features. We implement several feature selection algorithms in an extension of RapidMiner that scale well with the number of features compared to the existing feature selection operators in RapidMiner.
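    A filter-style selector of the kind described above can be sketched in a few lines; ranking by absolute Pearson correlation with the target is an illustrative choice here, not necessarily one of the operators in the RapidMiner extension:

```python
import numpy as np

def select_top_k(X, y, k):
    """Filter-style feature selection: rank features by absolute Pearson
    correlation with the target and keep the indices of the top k."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    corr = np.abs(Xc.T @ yc) / (np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc) + 1e-12)
    return np.argsort(corr)[::-1][:k]
```

    Filter methods like this scale linearly in the number of features, which is why they remain practical when wrapper-style selection becomes too expensive.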

    Sensitivity to cdk1-inhibition is modulated by p53 status in preclinical models of embryonal tumors

    Dysregulation of the cell cycle and cyclin-dependent kinases (cdks) is a hallmark of cancer cells. Intervention with cdk function is currently being evaluated as a therapeutic option in many cancer types including neuroblastoma (NB), a common solid tumor of childhood. Re-analyses of mRNA profiling data from primary NB revealed that high-level mRNA expression of both cdk1 and its corresponding cyclin, CCNB1, was significantly associated with worse patient outcome independent of MYCN amplification, a strong indicator of adverse NB prognosis. Both cdk1 and CCNB1 expression were readily detectable in all embryonal tumor cell lines investigated. Pharmacological inhibition or siRNA-mediated knockdown of cdk1/CCNB1 induced proliferation arrest independent of MYCN status in NB cells. Sensitivity to cdk1 inhibition was modulated by TP53, which was demonstrated using isogenic cells with wild-type TP53 expressing either dominant-negative p53 or a short hairpin RNA directed against TP53. Apoptosis induced by cdk1 inhibition was dependent on caspase activation and was concomitant with upregulation of transcriptional targets of TP53. Our results confirm an essential role for the cdk1/CCNB1 complex in tumor cell survival. As relapsing embryonal tumors often present with p53 pathway alterations, these findings have potential implications for therapy approaches targeting cdks.

    Preprocessing of Affymetrix Exon Expression Arrays

    The activity of genes can be captured by measuring the amount of messenger RNAs transcribed from the genes, or from their subunits called exons. In our study, we use the Affymetrix Human Exon ST v1.0 microarrays to measure the activity of exons in neuroblastoma cancer patients. The purpose is to discover a small number of genes or exons that play important roles in differentiating high-risk patients from low-risk counterparts. Although the technology has improved over the past 15 years, array measurements can still be contaminated by various factors, including human error. Since the number of arrays is often only a few hundred, atypical errors can hardly be canceled out by large numbers of normal arrays. In this article we describe how we filter out low-quality arrays in a principled way, so that we can obtain more reliable results in downstream analyses.
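    One principled way to filter low-quality arrays, in the spirit described above, is to flag arrays whose summary statistics deviate robustly from the rest of the cohort. The MAD-based z-score below is an illustrative sketch, not the article's exact quality-control procedure:

```python
import numpy as np

def flag_outlier_arrays(expr, z_cut=3.5):
    """Flag arrays (rows) whose median intensity deviates strongly from
    the cohort, using a robust MAD-based z-score."""
    med = np.median(expr, axis=1)               # per-array median intensity
    center = np.median(med)                     # cohort-level center
    mad = np.median(np.abs(med - center)) + 1e-12
    z = 0.6745 * (med - center) / mad           # robust z-score
    return np.abs(z) > z_cut
```

    Using the median and MAD instead of the mean and standard deviation keeps the threshold itself from being distorted by the very arrays one is trying to detect.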