218 research outputs found

    Iatrogenic meningitis caused by Neisseria sicca/subflava after intrathecal contrast injection, Australia

    We report a case of invasive Neisseria sicca/subflava meningitis after a spinal injection procedure during which a face mask was not worn by the proceduralist. The report highlights the importance of awareness of, and adherence to, guidelines for protective face mask use during procedures that require sterile conditions.

    Automated Reachability Analysis of Neural Network-Controlled Systems via Adaptive Polytopes

    Over-approximating the reachable sets of dynamical systems is a fundamental problem in safety verification and robust control synthesis. The representation of these sets is a key factor that affects the computational complexity and the approximation error. In this paper, we develop a new approach for over-approximating the reachable sets of neural network dynamical systems using adaptive template polytopes. We use the singular value decomposition of linear layers along with the shape of the activation functions to adapt the geometry of the polytopes at each time step to the geometry of the true reachable sets. We then propose a branch-and-bound method to compute accurate over-approximations of the reachable sets with the inferred templates. We illustrate the utility of the proposed approach in the reachability analysis of linear systems driven by neural network controllers.
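The template-adaptation idea the abstract describes can be sketched in a few lines. This is a minimal illustration, not the paper's algorithm: it takes template normals from the SVD of a single linear layer and bounds the layer's image of a box via the box's support function. All function names are illustrative.

```python
import numpy as np

def template_from_svd(W):
    # Right singular vectors of the layer weight point along the directions
    # the layer stretches most; use +/- each row as a template normal.
    _, _, Vt = np.linalg.svd(W)
    return np.vstack([Vt, -Vt])

def box_support(C, lo, hi):
    # Support function of the box [lo, hi] in each row direction of C:
    # max_{lo <= x <= hi} c . x = c_+ . hi + c_- . lo
    pos, neg = np.maximum(C, 0.0), np.minimum(C, 0.0)
    return pos @ hi + neg @ lo

# The image {W x : lo <= x <= hi} lies inside {y : C y <= d} with
# d = box_support(C @ W, lo, hi), since max_x c.(W x) = max_x (W^T c).x.
W = np.diag([2.0, 0.5])
C = template_from_svd(W)
d = box_support(C @ W, np.array([-1.0, -1.0]), np.array([1.0, 1.0]))
```

For a full network, the paper additionally accounts for the activation shapes and re-adapts the template at every time step; the sketch above only shows the linear-layer ingredient.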

    Compositional Curvature Bounds for Deep Neural Networks

    A key challenge that threatens the widespread use of neural networks in safety-critical applications is their vulnerability to adversarial attacks. In this paper, we study the second-order behavior of continuously differentiable deep neural networks, focusing on robustness against adversarial perturbations. First, we provide a theoretical analysis of robustness and attack certificates for deep classifiers by leveraging local gradients and upper bounds on the second derivative (curvature constant). Next, we introduce a novel algorithm to analytically compute provable upper bounds on the second derivative of neural networks. This algorithm leverages the compositional structure of the model to propagate the curvature bound layer-by-layer, giving rise to a scalable and modular approach. The proposed bound can serve as a differentiable regularizer to control the curvature of neural networks during training, thereby enhancing robustness. Finally, we demonstrate the efficacy of our method on classification tasks using the MNIST and CIFAR-10 datasets.
    Comment: Proceedings of the 41st International Conference on Machine Learning (ICML 2024).
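To make the "curvature constant" concrete, here is a hedged sketch for the simplest case, a two-layer scalar network f(x) = w2 . sigma(W1 x) with |sigma'| <= g and |sigma''| <= h. Its Hessian is W1^T diag(w2 * sigma''(W1 x)) W1, so ||H||_2 <= h * max_i |w2_i| * ||W1||_2^2. This is only the base case of a compositional bound, not the paper's full layer-by-layer algorithm.

```python
import numpy as np

def curvature_bound(W1, w2, h):
    # ||W1^T D W1||_2 <= ||D||_2 * ||W1||_2^2 with
    # ||D||_2 = max_i |w2_i sigma''(z_i)| <= h * max_i |w2_i|.
    return h * np.max(np.abs(w2)) * np.linalg.norm(W1, 2) ** 2

def lipschitz_bound(W1, w2, g):
    # ||grad f|| = ||W1^T (w2 * sigma'(z))|| <= g * ||w2|| * ||W1||_2.
    return g * np.linalg.norm(w2) * np.linalg.norm(W1, 2)
```

A bound of this form is differentiable in the weights, which is what lets it act as a curvature regularizer during training, as the abstract notes.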

    ReachLipBnB: A branch-and-bound method for reachability analysis of neural autonomous systems using Lipschitz bounds

    We propose a novel Branch-and-Bound method for reachability analysis of neural networks in both open-loop and closed-loop settings. Our idea is to first compute accurate bounds on the Lipschitz constant of the neural network in certain directions of interest offline using a convex program. We then use these bounds to obtain an instantaneous but conservative polyhedral approximation of the reachable set using Lipschitz continuity arguments. To reduce conservatism, we incorporate our bounding algorithm within a branching strategy to decrease the over-approximation error within an arbitrary accuracy. We then extend our method to reachability analysis of control systems with neural network controllers. Finally, to capture the shape of the reachable sets as accurately as possible, we use sample trajectories to inform the directions of the reachable set over-approximations using Principal Component Analysis (PCA). We evaluate the performance of the proposed method in several open-loop and closed-loop settings.
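The Lipschitz argument and the PCA step can each be sketched briefly. The following is an illustrative sketch under stated assumptions, not ReachLipBnB itself: if L_c bounds the Lipschitz constant of c . f, then for all x within radius r of x0, c . f(x) <= c . f(x0) + L_c * r, giving one face of a polyhedral over-approximation per direction; the directions themselves can come from PCA of sampled outputs.

```python
import numpy as np

def face_offset(f, c, x0, L_c, r):
    # One face of the over-approximation: for ||x - x0|| <= r,
    # c . f(x) <= c . f(x0) + L_c * r.
    return float(c @ f(x0)) + L_c * r

def pca_directions(samples):
    # PCA of sampled outputs (e.g., trajectory endpoints) to choose the
    # face directions, as the abstract describes; +/- each component.
    centered = samples - samples.mean(axis=0)
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    return np.vstack([Vt, -Vt])
```

The branching strategy then splits the input region so that each piece gets a tighter L_c, shrinking the over-approximation error to any desired accuracy.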

    A Case of Kingella kingae Endocarditis Complicated by Native Mitral Valve Rupture

    We report a case of Kingella kingae endocarditis in a patient with a history of recent respiratory tract infection and dental extraction. This case is remarkable for embolic and vasculitic phenomena in association with a large valve vegetation and valve perforation. Kingella kingae is an organism known to cause endocarditis; however, early major complications are uncommon. Our case of Kingella endocarditis behaved in a virulent fashion necessitating a combined approach of intravenous antibiotic therapy and a valve replacement. It highlights the importance of expedited investigation for endocarditis in patients with Kingella bacteraemia.

    Certified Robustness via Dynamic Margin Maximization and Improved Lipschitz Regularization

    To improve the robustness of deep classifiers against adversarial perturbations, many approaches have been proposed, such as designing new architectures with better robustness properties (e.g., Lipschitz-capped networks), or modifying the training process itself (e.g., min-max optimization, constrained learning, or regularization). These approaches, however, might not be effective at increasing the margin in the input (feature) space. As a result, there has been an increasing interest in developing training procedures that can directly manipulate the decision boundary in the input space. In this paper, we build upon recent developments in this category by developing a robust training algorithm whose objective is to increase the margin in the output (logit) space while regularizing the Lipschitz constant of the model along vulnerable directions. We show that these two objectives can directly promote larger margins in the input space. To this end, we develop a scalable method for calculating guaranteed differentiable upper bounds on the Lipschitz constant of neural networks accurately and efficiently. The relative accuracy of the bounds prevents excessive regularization and allows for more direct manipulation of the decision boundary. Furthermore, our Lipschitz bounding algorithm exploits the monotonicity and Lipschitz continuity of the activation layers, and the resulting bounds can be used to design new layers with controllable bounds on their Lipschitz constant. Experiments on the MNIST, CIFAR-10, and Tiny-ImageNet data sets verify that our proposed algorithm obtains competitively improved results compared to the state-of-the-art.
    Comment: 37th Conference on Neural Information Processing Systems (NeurIPS 2023).
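For context on what such a Lipschitz bound improves upon, here is the classical baseline: the product of per-layer spectral norms, estimated by power iteration. This is a guaranteed but often loose upper bound; the paper's algorithm computes tighter, differentiable bounds, and this sketch is only the standard starting point.

```python
import numpy as np

def spectral_norm(W, iters=100):
    # Power iteration on W^T W to estimate the largest singular value of W.
    v = np.random.default_rng(0).standard_normal(W.shape[1])
    v /= np.linalg.norm(v)
    for _ in range(iters):
        v = W.T @ (W @ v)
        v /= np.linalg.norm(v)
    return float(np.linalg.norm(W @ v))

def product_lipschitz_bound(weights, activation_lip=1.0):
    # Classical bound: product of layer spectral norms, times the
    # activation's Lipschitz constant once per hidden layer.
    L = 1.0
    for W in weights:
        L *= spectral_norm(W)
    return L * activation_lip ** (len(weights) - 1)
```

Because this product ignores how activations align across layers, it can overestimate badly for deep networks, which is exactly why tighter bounds matter for avoiding excessive regularization.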

    Adverse Conditions in the Professional Lives of English Instructors: Recommendations to Ameliorate Teacher Resilience

    The present qualitative study examined Iranian English instructors' experiences of adverse conditions in their teaching environments. To this end, 30 novice English instructors from language institutes in Tabriz (Iran) were selected as participants. To triangulate the data, we utilized interview and narrative data-collection techniques. Adopting an inductive, bottom-up approach to analyzing the themes, three main themes were generated: teacher factors, contextual factors, and student factors. The findings of the study were ascribed to the participants' resilience in their academic settings and can provide teacher educators and educational psychologists with certain guiding principles regarding teacher resilience. The results of the study have a number of implications for pedagogy and teaching practice, as well as for educational psychology and teacher education.
    Keywords: adverse conditions, emotional regulation, psychological wellbeing, teacher education, teacher resilience

    Gradient-Regularized Out-of-Distribution Detection

    One of the challenges for neural networks in real-life applications is the overconfident errors these models make when the data is not from the original training distribution. Addressing this issue is known as Out-of-Distribution (OOD) detection. Many state-of-the-art OOD methods employ an auxiliary dataset as a surrogate for OOD data during training to achieve improved performance. However, these methods fail to fully exploit the local information embedded in the auxiliary dataset. In this work, we propose the idea of leveraging the information embedded in the gradient of the loss function during training to enable the network to not only learn a desired OOD score for each sample but also to exhibit similar behavior in a local neighborhood around each sample. We also develop a novel energy-based sampling method to allow the network to be exposed to more informative OOD samples during the training phase. This is especially important when the auxiliary dataset is large. We demonstrate the effectiveness of our method through extensive experiments on several OOD benchmarks, improving the existing state-of-the-art FPR95 by 4% on our ImageNet experiment. We further provide a theoretical analysis through the lens of certified robustness and Lipschitz analysis to showcase the theoretical foundation of our work. Our code is available at https://github.com/o4lc/Greg-OOD.
    Comment: Accepted to ECCV 202
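The energy-based sampling the abstract mentions builds on a standard OOD score. As a point of reference (this is the widely used energy score, not the paper's full sampling method), the score is a temperature-scaled negative log-sum-exp of the logits, with lower energy typically assigned to in-distribution samples.

```python
import numpy as np

def energy_score(logits, T=1.0):
    # Numerically stable E(x) = -T * logsumexp(logits / T).
    z = np.asarray(logits, dtype=float) / T
    m = z.max()
    return float(-T * (m + np.log(np.exp(z - m).sum())))
```

A method like the one described would rank auxiliary samples by such a score to expose the network to the most informative OOD examples during training.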