
    LATERAL EARTH PRESSURE, SLOPE STABILITY AND PILE SHEETING (Sheet piled constructions)

    This master's thesis studies how sheet pile walls resist lateral earth pressures in slope areas. Sheet pile walls are construction elements built by continuously interlocking pile segments side by side. They are embedded in the soil to provide lateral support by resisting the horizontal pressures exerted by sea water or other earth masses. Sheet pile walls are flexible retaining systems and can therefore tolerate larger horizontal deformations than rigid walls such as those of stone or concrete. Sheet piles are manufactured from different materials in different forms and sizes. Although steel sheet piles corrode at a comparatively high rate, they are the most widely used because of their availability in markets all over the world and their high resistance during installation by hydraulic pressing. PVC can substitute for steel sheet piles with respect to environmental impact, sustainability, and maintenance requirements against corrosion. The thesis question is: how can one maintain the stability and durability of a sheet piled wall construction in slope areas? Sub-questions that widen the thesis question are: 1. What is the concept of pile sheeting? 2. How can one ensure the stability of sheet piled constructions? 3. How can one select the size and material type of a sheet pile for a given construction? 4. How can one maintain the durability of sheet piled wall constructions? The maximum moment induced by the lateral earth pressures and the section modulus of the sheet pile are the key quantities for selecting a sheet pile size, while the environmental conditions, strength sustainability, and maintenance requirements play a great role in selecting the material type of the sheet piles.
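    The sizing rule stated above can be sketched numerically. The sketch below assumes the standard bending relation sigma = M / S; the numbers and the allowable stress are illustrative, not values from the thesis.

```python
# Hedged sketch: selecting a sheet pile section from the maximum bending
# moment via S = M / sigma_allow. All figures are illustrative.

def required_section_modulus(m_max_knm: float, sigma_allow_mpa: float) -> float:
    """Return the required section modulus in cm^3 per metre of wall.

    m_max_knm      : maximum bending moment from lateral earth pressure, kN*m/m
    sigma_allow_mpa: allowable bending stress of the pile material, MPa
    """
    m_nmm = m_max_knm * 1e6          # 1 kN*m = 1e6 N*mm
    s_mm3 = m_nmm / sigma_allow_mpa  # MPa = N/mm^2, so S comes out in mm^3
    return s_mm3 / 1e3               # 1 cm^3 = 1e3 mm^3

# Example: M_max = 200 kN*m/m on steel with sigma_allow = 180 MPa
print(required_section_modulus(200.0, 180.0))  # ~1111 cm^3 per metre of wall
```

    A catalogue section with a section modulus at or above this value would then be chosen, with material type decided by the environmental and maintenance considerations the abstract mentions.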

    An assessment of cash management (in the case of Dashen Bank S. Co.)

    Cash is the most important current asset for the operations of a business. Cash is the basic input needed to keep the business running on a continuous basis, and it is also the ultimate output expected to be realized by selling the services or products the firm produces. The study attempts to identify problems related to cash management at the Head Office of Dashen Bank S.Co. and thus to provide valuable information and a better approach to maintaining an adequate cash management level in the organization. Accordingly, this study investigates the cash management practices of Dashen Bank S.Co. over the years 2009-2013. The researchers use a descriptive research method and present the data in tabular form; in addition, financial ratios are used to analyze the last five years. The study finds that the Bank holds idle cash, which reduces its future profit. Moreover, although Dashen Bank may lend up to 80% of its own deposits, in almost all of the years examined the bank did not use the deposits for loans, and the way Dashen Bank administers the cash gap between the limit and the excess still presents some unresolved problems. The objective of the study is to assess whether Dashen Bank renders its banking transaction services in a fair and acceptable way.
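    The 80% lending limit described above lends itself to a simple ratio check. The sketch below uses illustrative figures, not Dashen Bank's actual data; the function names are assumptions for this example.

```python
# Hedged sketch: comparing a bank's loan-to-deposit ratio against an
# 80% lending limit, as in the analysis described above (toy numbers).

def loan_to_deposit_ratio(loans: float, deposits: float) -> float:
    """Fraction of deposits currently lent out."""
    return loans / deposits

def unused_lending_capacity(loans: float, deposits: float,
                            limit: float = 0.80) -> float:
    """Share of deposits the bank could still lend under the limit."""
    return max(0.0, limit - loan_to_deposit_ratio(loans, deposits))

# Example: 45 units of loans against 100 units of deposits
print(unused_lending_capacity(45.0, 100.0))  # 0.35 -> 35% of deposits unused
```

    A persistently large unused capacity, as the study reports, corresponds to idle cash that earns no lending income.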

    Unlearning Spurious Correlations in Chest X-ray Classification

    Medical image classification models are frequently trained on datasets derived from multiple data sources. While leveraging multiple data sources is crucial for achieving model generalization, the diverse nature of these sources inherently introduces unintended confounders and other challenges that can affect both model accuracy and transparency. A notable confounding factor in medical image classification, particularly in musculoskeletal image classification, is the skeletal-maturation-induced bone growth observed during adolescence. We train a deep learning model using a Covid-19 chest X-ray dataset and showcase how this dataset can lead to spurious correlations due to unintended confounding regions. eXplanation Based Learning (XBL) is a deep learning approach that goes beyond interpretability by utilizing model explanations to interactively unlearn spurious correlations. This is achieved by integrating interactive user feedback, specifically feature annotations. In our study, we employ two non-demanding manual feedback mechanisms to implement an XBL-based approach for effectively eliminating these spurious correlations. Our results underscore the promising potential of XBL in constructing robust models even in the presence of confounding factors.
    Comment: Accepted at the Discovery Science 2023 conference. arXiv admin note: text overlap with arXiv:2307.0602
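    The core XBL mechanism described above, augmenting the classification loss with a penalty on explanation mass inside user-annotated confounding regions, can be sketched as follows. The weighting and names are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

# Hedged sketch of an XBL-style augmented loss: cross-entropy plus a
# penalty on saliency falling inside annotated confounding regions.

def xbl_loss(class_probs: np.ndarray, label: int,
             saliency: np.ndarray, confounder_mask: np.ndarray,
             lam: float = 1.0) -> float:
    # Standard cross-entropy on the predicted class distribution.
    ce = -np.log(class_probs[label] + 1e-12)
    # Penalize explanation weight inside user-annotated confounder regions.
    penalty = float(np.sum(saliency * confounder_mask))
    return float(ce + lam * penalty)

probs = np.array([0.1, 0.9])                # toy class probabilities
sal = np.array([[0.2, 0.8], [0.0, 0.0]])    # toy 2x2 saliency map
mask = np.array([[0, 1], [0, 0]])           # annotated confounding pixel
print(xbl_loss(probs, 1, sal, mask))        # CE(0.9) plus 0.8 penalty
```

    Minimizing this loss drives the model both to predict the correct class and to move its attention away from the annotated confounders.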

    Automated Smartphone based System for Diagnosis of Diabetic Retinopathy

    Early diagnosis of diabetic retinopathy, needed for timely treatment of the disease, has been failing to reach diabetic people living in rural areas. A shortage of trained ophthalmologists, limited availability of healthcare centers, and the expense of diagnostic equipment are among the reasons. Although many deep-learning-based techniques for automatic diagnosis of diabetic retinopathy have been reported in the literature, these methods still fail to provide a point-of-care diagnosis. This raises the need for an independent diagnostic tool for diabetic retinopathy that can be used by a non-expert. Smartphone usage has recently been increasing across the world, so automated diagnosis of diabetic retinopathy can be deployed on smartphones to provide an instant diagnosis to diabetic people residing in remote areas. In this paper, an Inception-based convolutional neural network and a binary-decision-tree-based ensemble of classifiers are proposed and implemented to detect and classify diabetic retinopathy. The proposed method was further imported into a smartphone application for mobile-based classification, providing an offline and automatic system for diagnosis of diabetic retinopathy.
    Comment: 12 pages, 4 figures, 4 tables, 1 appendix. Copyright © 2019, IEEE. Published in: 2019 International Conference on Computing, Communication, and Intelligent Systems (ICCCIS
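    A binary-decision-tree ensemble of the kind named above routes each image through a sequence of binary classifiers until a leaf class is reached. The tree layout and the stand-in classifiers below are assumptions for illustration; in the paper each node would be a trained CNN-based binary classifier.

```python
# Hedged sketch of a binary-decision-tree ensemble for multi-class grading:
# each internal node is a binary classifier routing the input left or right.

def tree_classify(image, node):
    """node = ('leaf', label) or ('split', classifier, left, right)."""
    while node[0] == 'split':
        _, clf, left, right = node
        node = right if clf(image) else left
    return node[1]

# Toy rule-based classifiers standing in for trained binary CNNs:
has_dr = lambda img: img['lesions'] > 0
is_severe = lambda img: img['lesions'] > 10

tree = ('split', has_dr,
        ('leaf', 'no_DR'),
        ('split', is_severe, ('leaf', 'mild'), ('leaf', 'severe')))

print(tree_classify({'lesions': 3}, tree))   # mild
print(tree_classify({'lesions': 0}, tree))   # no_DR
```

    Because each node is a lightweight binary decision, such an ensemble suits offline, on-device inference of the kind the smartphone application requires.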

    FewSOME: One-Class Few Shot Anomaly Detection with Siamese Networks

    Recent Anomaly Detection techniques have progressed the field considerably but at the cost of increasingly complex training pipelines. Such techniques require large amounts of training data, resulting in computationally expensive algorithms that are unsuitable for settings where only a small number of normal samples are available for training. We propose 'Few Shot anOMaly detection' (FewSOME), a deep One-Class Anomaly Detection algorithm with the ability to accurately detect anomalies having trained on 'few' examples of the normal class and no examples of the anomalous class. FewSOME is of low complexity given its low data requirement and short training time. FewSOME is aided by pretrained weights with an architecture based on Siamese Networks. By means of an ablation study, we demonstrate how our proposed loss, 'Stop Loss', improves the robustness of FewSOME. Our experiments demonstrate that FewSOME performs at state-of-the-art level on the benchmark datasets MNIST, CIFAR-10, F-MNIST and MVTec AD while training on only 30 normal samples, a minute fraction of the data that existing methods are trained on. Moreover, our experiments show FewSOME to be robust to contaminated datasets. We also report F1 score and balanced accuracy in addition to AUC as a benchmark for future techniques to be compared against. Code available: https://github.com/niamhbelton/FewSOME
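    The one-class idea behind an approach like FewSOME can be sketched as scoring a sample by its distance to the embeddings of the few known-normal examples: a large distance suggests an anomaly. The toy embeddings and scoring rule below are stand-ins for illustration, not the paper's trained Siamese network or its 'Stop Loss'.

```python
import numpy as np

# Hedged sketch of distance-based one-class anomaly scoring with a small
# reference bank of normal-class embeddings.

def anomaly_score(z: np.ndarray, normal_bank: np.ndarray) -> float:
    """Mean Euclidean distance from embedding z to the normal reference set."""
    return float(np.mean(np.linalg.norm(normal_bank - z, axis=1)))

# 'Few' normal embeddings (toy 2-D vectors in place of learned features):
bank = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1]])

print(anomaly_score(np.array([0.05, 0.05]), bank))  # small -> likely normal
print(anomaly_score(np.array([3.0, 3.0]), bank))    # large -> likely anomalous
```

    A threshold on this score, chosen on held-out normal data, then separates normal from anomalous samples.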

    Rethinking Knee Osteoarthritis Severity Grading: A Few Shot Self-Supervised Contrastive Learning Approach

    Knee Osteoarthritis (OA) is a debilitating disease affecting over 250 million people worldwide. Currently, radiologists grade the severity of OA on an ordinal scale from zero to four using the Kellgren-Lawrence (KL) system. Recent studies have raised concerns about the subjectivity of the KL grading system, highlighting the need for an automated system, while also indicating that five ordinal classes may not be the most appropriate approach for assessing OA severity. This work presents preliminary results of an automated system with a continuous grading scale. This system, namely SS-FewSOME, uses self-supervised pre-training to learn robust representations of the features of healthy knee X-rays. It then assesses OA severity by the X-rays' distance to the normal representation space. SS-FewSOME initially trains on only 'few' examples of healthy knee X-rays, thus reducing the barriers to clinical implementation by eliminating the need for large training sets and costly expert annotations that existing automated systems require. The work reports promising initial results, obtaining a positive Spearman Rank Correlation Coefficient of 0.43, having had access to only 30 ground truth labels at training time.
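    The Spearman Rank Correlation used above to validate continuous severity scores against ordinal KL grades compares the rankings of the two variables rather than their raw values. The sketch below implements the no-ties case for brevity (real KL grades contain ties, which standard implementations handle with averaged ranks).

```python
# Hedged sketch: Spearman rank correlation for comparing continuous
# severity scores against ordinal grades (no-ties case for brevity).

def spearman_rho(xs, ys):
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank + 1
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Perfectly monotone scores give rho = 1.0:
print(spearman_rho([0.1, 0.4, 0.9], [0, 2, 4]))  # 1.0
```

    A coefficient of 0.43, as reported, indicates a moderate positive monotone relationship between the distance-based scores and the expert KL grades.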

    Distance-Aware eXplanation Based Learning

    eXplanation Based Learning (XBL) is an interactive learning approach that provides a transparent method of training deep learning models by interacting with their explanations. XBL augments loss functions to penalize a model based on the deviation of its explanations from user annotations of image features. The literature on XBL mostly depends on the intersection of visual model explanations and image feature annotations. We present a method that adds a distance-aware explanation loss to categorical losses, training a learner to focus on the important regions of a training dataset. Distance is an appropriate basis for the explanation loss because visual model explanations such as Gradient-weighted Class Activation Maps (Grad-CAMs) are not strictly bounded the way annotations are, and their intersections may not provide complete information on the deviation of a model's focus from relevant image regions. In addition to assessing our model using existing metrics, we propose an interpretability metric for evaluating visual feature-attribution-based model explanations that is more informative of the model's performance than existing metrics. We demonstrate the performance of our proposed method on three image classification tasks.
    Comment: Accepted at the 35th IEEE International Conference on Tools with Artificial Intelligence, ICTAI 202
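    A distance-aware explanation loss of the kind described above can be sketched by weighting each unit of saliency by its distance to the nearest annotated relevant pixel, so attention far from the relevant region is penalized more than a near miss, unlike a pure intersection measure. This is an illustration of the idea, not the paper's exact loss.

```python
import numpy as np

# Hedged sketch: penalize saliency in proportion to its distance from the
# nearest annotated relevant pixel (brute-force over a small grid).

def distance_aware_expl_loss(saliency: np.ndarray, relevant: np.ndarray) -> float:
    h, w = saliency.shape
    rel = np.argwhere(relevant > 0)  # coordinates of annotated pixels
    loss = 0.0
    for y in range(h):
        for x in range(w):
            # Distance from this pixel to the nearest relevant pixel.
            d = np.min(np.linalg.norm(rel - np.array([y, x]), axis=1))
            loss += saliency[y, x] * d
    return float(loss)

sal = np.array([[0.0, 0.0], [0.0, 1.0]])  # all saliency at bottom-right
rel = np.array([[1, 0], [0, 0]])          # relevant region at top-left
print(distance_aware_expl_loss(sal, rel))  # sqrt(2): saliency one diagonal away
```

    Saliency that lies inside the annotated region contributes zero, so minimizing this term pulls the model's focus onto the relevant regions.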