542 research outputs found

    DeepDriving: Learning Affordance for Direct Perception in Autonomous Driving

    Today, there are two major paradigms for vision-based autonomous driving systems: mediated perception approaches that parse an entire scene to make a driving decision, and behavior reflex approaches that directly map an input image to a driving action by a regressor. In this paper, we propose a third paradigm: a direct perception approach to estimate the affordance for driving. We propose to map an input image to a small number of key perception indicators that directly relate to the affordance of a road/traffic state for driving. Our representation provides a compact yet complete description of the scene that enables a simple controller to drive autonomously. Falling in between the two extremes of mediated perception and behavior reflex, we argue that our direct perception representation provides the right level of abstraction. To demonstrate this, we train a deep Convolutional Neural Network using recordings from 12 hours of human driving in a video game and show that our model can drive a car well in a very diverse set of virtual environments. We also train a model for car distance estimation on the KITTI dataset. Results show that our direct perception approach can generalize well to real driving images. Source code and data are available on our project website.
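    The direct perception pipeline above can be sketched as two stages: a perception model predicts a handful of affordance indicators, and a simple controller maps them to a steering command. A minimal Python sketch follows; the indicator names, the stubbed perceive function, and the controller gains are illustrative assumptions, not the paper's exact design.

```python
def perceive(image):
    """Stand-in for the deep CNN: maps an image to affordance indicators."""
    # In the paper these would be predicted from the image; here they are fixed.
    return {
        "angle": 0.05,             # heading angle relative to the lane (rad)
        "dist_to_center": -0.3,    # lateral offset from the lane center (m)
        "dist_to_lead_car": 25.0,  # distance to the preceding car (m)
    }

def controller(aff, gain_angle=2.0, gain_offset=0.5):
    """Toy proportional controller driving from the compact representation."""
    return -gain_angle * aff["angle"] - gain_offset * aff["dist_to_center"]

steer = controller(perceive(None))
```

    The point of the sketch is the division of labor: the learned model handles perception only, and the hand-written controller stays trivially simple because the representation is compact.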

    Legality of the Maryland Public Utilities Disputes Act


    The Concept of I'jaz Balaghi from the Perspective of the Qur'an (A Study of I'jaz Balaghi in the Qur'an)

    The Qur'an is the word of God, inimitable in its composition and style, and God has undertaken its preservation so that it endures across time. The Arabs were unable to produce anything like it, even though they were masters of eloquence and rhetoric. The revelation of the Qur'an thus stands as a rhetorical proof for all humankind and jinn. It occupies the highest rank of speech in the beauty of its wording, the excellence of its composition, the loftiness of its meanings, and its effect on the soul, for the Qur'an is inimitable in its construction while conveying the soundest of meanings.

    LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop

    While there has been remarkable progress in the performance of visual recognition algorithms, the state-of-the-art models tend to be exceptionally data-hungry. Large labeled training datasets, expensive and tedious to produce, are required to optimize millions of parameters in deep network models. Lagging behind the growth in model capacity, the available datasets are quickly becoming outdated in terms of size and density. To circumvent this bottleneck, we propose to amplify human effort through a partially automated labeling scheme, leveraging deep learning with humans in the loop. Starting from a large set of candidate images for each category, we iteratively sample a subset, ask people to label them, classify the others with a trained model, split the set into positives, negatives, and unlabeled based on the classification confidence, and then iterate with the unlabeled set. To assess the effectiveness of this cascading procedure and enable further progress in visual recognition research, we construct a new image dataset, LSUN. It contains around one million labeled images for each of 10 scene categories and 20 object categories. We experiment with training popular convolutional networks and find that they achieve substantial performance gains when trained on this dataset.
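    The cascading labeling procedure can be sketched as a loop: humans label a sampled subset, a model trained on the labels so far scores the remainder, high- and low-confidence images are auto-labeled, and only the ambiguous ones carry over to the next round. A minimal sketch, assuming toy stand-ins for the human labeler, the training step, the classifier, and the confidence thresholds:

```python
def cascade_label(candidates, human_label, train, predict,
                  sample_size=100, hi=0.9, lo=0.1):
    """Humans-in-the-loop cascade: iterate until nothing is left unlabeled."""
    positives, negatives, unlabeled = [], [], list(candidates)
    while unlabeled:
        batch = unlabeled[:sample_size]       # subset sent to human annotators
        rest = unlabeled[sample_size:]
        for x in batch:
            (positives if human_label(x) else negatives).append(x)
        model = train(positives, negatives)   # retrain on all labels so far
        unlabeled = []
        for x in rest:
            p = predict(model, x)
            if p >= hi:
                positives.append(x)           # confident positive
            elif p <= lo:
                negatives.append(x)           # confident negative
            else:
                unlabeled.append(x)           # ambiguous: next iteration
    return positives, negatives

# toy demo: items >= 5 are positive; the "classifier" just thresholds at 5
pos, neg = cascade_label(
    candidates=range(10),
    human_label=lambda x: x >= 5,
    train=lambda p, n: None,
    predict=lambda m, x: 1.0 if x >= 5 else 0.0,
    sample_size=4,
)
```

    Each round consumes at least `sample_size` items through human labeling, so the loop always terminates; the human cost per round stays fixed while the model absorbs the confident bulk of the data.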

    A New 2.5D Representation for Lymph Node Detection using Random Sets of Deep Convolutional Neural Network Observations

    Automated Lymph Node (LN) detection is an important clinical diagnostic task but very challenging due to the low contrast of surrounding structures in Computed Tomography (CT) and to their varying sizes, poses, shapes and sparsely distributed locations. State-of-the-art studies show the performance range of 52.9% sensitivity at 3.1 false-positives per volume (FP/vol.), or 60.9% at 6.1 FP/vol. for mediastinal LN, by one-shot boosting on 3D HAAR features. In this paper, we first run a preliminary candidate generation stage, towards 100% sensitivity at the cost of high FP levels (40 per patient), to harvest volumes of interest (VOI). Our 2.5D approach then decomposes any 3D VOI by resampling 2D reformatted orthogonal views N times, via scale, random translations, and rotations with respect to the VOI centroid coordinates. These random views are then used to train a deep Convolutional Neural Network (CNN) classifier. In testing, the CNN is employed to assign LN probabilities for all N random views that can be simply averaged (as a set) to compute the final classification probability per VOI. We validate the approach on two datasets: 90 CT volumes with 388 mediastinal LNs and 86 patients with 595 abdominal LNs. We achieve sensitivities of 70%/83% at 3 FP/vol. and 84%/90% at 6 FP/vol. in the mediastinum and abdomen respectively, which drastically improves over the previous state-of-the-art work.
    Comment: This article will be presented at MICCAI (Medical Image Computing and Computer-Assisted Interventions) 201
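    The testing step described above (score N random views of a VOI with the CNN, then average the per-view probabilities) can be sketched as follows; the view-sampling ranges and the stubbed CNN scorer are illustrative assumptions, not the paper's exact parameters:

```python
import random

def voi_probability(voi, cnn_score, n_views=20, rng=None):
    """Average CNN probabilities over n_views random reformatted views of one VOI."""
    rng = rng or random.Random(0)
    probs = []
    for _ in range(n_views):
        view = {
            "scale": rng.uniform(0.8, 1.2),                        # random scale
            "shift_mm": (rng.uniform(-2.0, 2.0),
                         rng.uniform(-2.0, 2.0)),                  # random translation
            "angle_deg": rng.uniform(0.0, 360.0),                  # random rotation
        }
        probs.append(cnn_score(voi, view))    # per-view LN probability
    return sum(probs) / len(probs)            # set average = final probability

# demo with a constant stand-in scorer
p = voi_probability(voi=None, cnn_score=lambda voi, view: 0.7, n_views=10)
```

    Averaging over the random-view set acts as test-time ensembling: a single badly oriented view cannot dominate the final per-VOI decision.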

    Anatomy-specific classification of medical images using deep convolutional nets

    Automated classification of human anatomy is an important prerequisite for many computer-aided diagnosis systems. The spatial complexity and variability of anatomy throughout the human body make classification difficult. "Deep learning" methods such as convolutional networks (ConvNets) outperform other state-of-the-art methods in image classification tasks. In this work, we present a method for organ- or body-part-specific anatomical classification of medical images acquired using computed tomography (CT) with ConvNets. We train a ConvNet, using 4,298 separate axial 2D key-images to learn 5 anatomical classes. Key-images were mined from a hospital PACS archive, using a set of 1,675 patients. We show that a data augmentation approach can help to enrich the data set and improve classification performance. Using ConvNets and data augmentation, we achieve an anatomy-specific classification error of 5.9% and an average area-under-the-curve (AUC) value of 0.998 in testing. We demonstrate that deep learning can be used to train very reliable and accurate classifiers that could initialize further computer-aided diagnosis.
    Comment: Presented at: 2015 IEEE International Symposium on Biomedical Imaging, April 16-19, 2015, New York Marriott at Brooklyn Bridge, NY, US
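    The data augmentation step can be sketched as generating randomly transformed copies of each key-image to enrich a small training set; the transform types and parameter ranges below are illustrative assumptions, not the paper's exact settings:

```python
import random

def augment(images, copies=5, rng=None):
    """Return (image, transform) pairs: each original plus `copies` random variants."""
    rng = rng or random.Random(0)
    out = []
    for img in images:
        out.append((img, None))               # keep the original, untransformed
        for _ in range(copies):
            params = {
                "rotation_deg": rng.uniform(-10, 10),              # small rotation
                "shift_px": (rng.randint(-5, 5), rng.randint(-5, 5)),
                "flip": rng.random() < 0.5,                        # horizontal flip
            }
            out.append((img, params))         # transform applied lazily at load time
    return out

aug = augment(["key_image_1", "key_image_2"], copies=3)
```

    Each transformed copy shares the original's anatomical label, so the effective training set grows by a factor of `copies + 1` without extra annotation effort.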

    How gender- and violence-related norms affect self-esteem among adolescent refugee girls living in Ethiopia.

    BACKGROUND: Evidence suggests adolescent self-esteem is influenced by beliefs of how individuals in their reference group perceive them. However, few studies examine how gender- and violence-related social norms affect self-esteem among refugee populations. This paper explores relationships between gender inequitable and victim-blaming social norms, personal attitudes, and self-esteem among adolescent girls participating in a life skills program in three Ethiopian refugee camps. METHODS: Ordinary least squares multivariable regression analysis was used to assess the associations between attitudes and social norms, and self-esteem. Key independent variables of interest included a scale measuring personal attitudes toward gender inequitable norms, a measure of perceived injunctive norms capturing how a girl believed her family and community would react if she was raped, and a peer-group measure of collective descriptive norms surrounding gender inequity. The key outcome variable, self-esteem, was measured using the Rosenberg self-esteem scale. RESULTS: Girls' personal attitudes toward gender inequitable norms were not significantly predictive of self-esteem at endline, when adjusting for other covariates. Collective peer norms surrounding the same gender inequitable statements were significantly predictive of self-esteem at endline (β = -0.130; p = 0.024). Additionally, perceived injunctive norms surrounding family and community-based sanctions for victims of forced sex were associated with a decline in self-esteem at endline (β = -0.103; p = 0.014). Significant findings for collective descriptive norms and injunctive norms remained when controlling for all three constructs simultaneously. CONCLUSIONS: Findings suggest shifting collective norms around gender inequity, particularly at the community and peer levels, may sustainably support the safety and well-being of adolescent girls in refugee settings.
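    The analysis above fits self-esteem on attitude and norm measures with ordinary least squares multivariable regression. A minimal sketch of that model form with a toy design matrix (the variable roles and numbers are illustrative, not the study's data):

```python
import numpy as np

def ols(X, y):
    """Return OLS coefficients for y = X @ beta (intercept column included in X)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# toy rows: [intercept, gender-attitude score, injunctive-norm score]
X = np.array([[1.0, 0.0, 0.0],
              [1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [1.0, 1.0, 1.0]])
y = np.array([2.0, 1.5, 1.0, 0.5])   # toy self-esteem outcomes

beta = ols(X, y)   # negative slopes mirror the sign of the reported associations
```

    In the study's terms, each slope plays the role of a reported β: the change in the self-esteem outcome per unit change in one norm measure, holding the others fixed.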

    Notes and Comments: Right to Counsel at Prison Disciplinary Hearings
