40 research outputs found

    Utilizing Adversarial Examples for Bias Mitigation and Accuracy Enhancement

    We propose a novel approach to mitigating biases in computer vision models through counterfactual generation and fine-tuning. While counterfactuals have been used to analyze and address biases in DNN models, the counterfactuals themselves are often produced by biased generative models, which can introduce additional biases or spurious correlations. To address this issue, we propose using adversarial images, that is, images that deceive a deep neural network but not humans, as counterfactuals for fair model training. Our approach leverages a curriculum learning framework combined with a fine-grained adversarial loss to fine-tune the model using adversarial examples. By incorporating adversarial images into the training data, we aim to prevent biases from propagating through the pipeline. We validate our approach through both qualitative and quantitative assessments, demonstrating improved bias mitigation and accuracy compared to existing methods. Qualitatively, our results indicate that, post-training, the model's decisions are less dependent on the sensitive attribute, and the model better disentangles the relationship between sensitive attributes and classification variables.
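    As an illustration of the kind of pipeline this abstract describes, the sketch below generates adversarial counterfactuals with FGSM and mixes them into a fine-tuning step. The FGSM attack, the loss weighting, and the function names are assumptions for illustration; the paper's actual curriculum schedule and fine-grained adversarial loss are not specified here.

```python
# Minimal sketch (PyTorch assumed): adversarial images as counterfactuals for fine-tuning.
import torch
import torch.nn.functional as F

def fgsm_counterfactual(model, images, labels, eps=0.03):
    """Generate FGSM adversarial images: perturbations that change the model's
    prediction while remaining nearly imperceptible to humans."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    return (images + eps * images.grad.sign()).clamp(0, 1).detach()

def finetune_step(model, optimizer, images, labels, adv_weight=0.5):
    """One fine-tuning step that mixes clean images with their adversarial
    counterfactuals; `adv_weight` is a stand-in for the paper's fine-grained loss."""
    adv_images = fgsm_counterfactual(model, images, labels)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(images), labels) \
         + adv_weight * F.cross_entropy(model(adv_images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```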

    Text-based RL Agents with Commonsense Knowledge: New Challenges, Environments and Baselines

    Text-based games have emerged as an important test-bed for Reinforcement Learning (RL) research, requiring RL agents to combine grounded language understanding with sequential decision making. In this paper, we examine the problem of infusing RL agents with commonsense knowledge. Such knowledge would allow agents to act efficiently in the world by pruning out implausible actions and to perform look-ahead planning to determine how current actions might affect future world states. We design a new text-based gaming environment called TextWorld Commonsense (TWC) for training and evaluating RL agents with a specific kind of commonsense knowledge about objects, their attributes, and affordances. We also introduce several baseline RL agents that track the sequential context and dynamically retrieve the relevant commonsense knowledge from ConceptNet. We show that agents which incorporate commonsense knowledge in TWC perform better while acting more efficiently. We conduct user studies to estimate human performance on TWC and show that there is ample room for future improvement.
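    For a concrete sense of how ConceptNet retrieval can prune implausible actions, here is a small sketch. The public ConceptNet REST endpoint, the action format, and the helper names are assumptions; the TWC baselines' actual retrieval and scoring are more involved.

```python
# Minimal sketch: prune text-game actions using ConceptNet AtLocation edges.
import requests

def at_locations(obj: str, limit: int = 20) -> set:
    """Fetch plausible locations for an object from ConceptNet (AtLocation relation)."""
    url = "http://api.conceptnet.io/query"
    params = {"start": f"/c/en/{obj}", "rel": "/r/AtLocation", "limit": limit}
    edges = requests.get(url, params=params).json().get("edges", [])
    return {e["end"]["label"].lower() for e in edges}

def prune_actions(candidate_actions, obj):
    """Keep 'put <obj> ...' actions only if their target location is commonsense-plausible."""
    plausible = at_locations(obj)
    return [a for a in candidate_actions
            if not a.startswith(f"put {obj}") or any(p in a for p in plausible)]

# Hypothetical usage: deciding where an apple belongs.
actions = ["put apple in refrigerator", "put apple on counter", "put apple in washing machine"]
print(prune_actions(actions, "apple"))
```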

    Management of Late Onset Perthes: Evaluation of Distraction by External Fixator—5-Year Follow-Up

    Background. Hip distraction in Perthes’ disease unloads the joint, negating the harmful effect of stresses on the articular surfaces, which may promote sound healing of the area of necrosis. We examined the effect of arthrodiastasis on preservation of the femoral head in older children with Perthes’ disease. Methods and Materials. Twelve children older than 8 years, with Perthes’ disease of less than one year’s duration, were treated with hip distraction by a hinged monolateral external fixator. Observation and Results. The mean duration of distraction was 13.9 days. The children were evaluated by clinicoradiological parameters over a mean period of 32.4 months. There was a significant improvement in the range of movement and the mean epiphyseal index, but the change in the percentage of uncovered femoral head was not significant. There was a significant improvement in the Harris Hip Score. Conclusions. Hip distraction by a hinged monolateral external fixator appears to be a valid treatment option in the selected group of patients with Perthes’ disease in whom poor results are expected from conventional treatment.

    TIBET: Identifying and Evaluating Biases in Text-to-Image Generative Models

    Text-to-Image (TTI) generative models have shown great progress in recent years in their ability to generate complex and high-quality imagery. At the same time, these models have been shown to suffer from harmful biases, including exaggerated societal biases (e.g., gender, ethnicity) as well as incidental correlations that limit such a model's ability to generate more diverse imagery. In this paper, we propose a general approach to study and quantify a broad spectrum of biases, for any TTI model and for any prompt, using counterfactual reasoning. Unlike other works that evaluate generated images on a predefined set of bias axes, our approach automatically identifies potential biases that might be relevant to the given prompt and measures those biases. In addition, we complement quantitative scores with post-hoc explanations in terms of semantic concepts in the generated images. We show that our method is uniquely capable of explaining complex multi-dimensional biases through semantic concepts, as well as the intersectionality between different biases, for any given prompt. We perform extensive user studies to illustrate that the results of our method and analysis are consistent with human judgements. Accepted to ECCV 2024. Code and data available at https://tibet-ai.github.io
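    A toy version of counterfactual bias measurement might look like the sketch below: detect semantic concepts in images generated for the original prompt and for a counterfactual prompt along one bias axis, then compare the concept distributions. The distance measure and the concept detections are illustrative assumptions, not the paper's exact scoring.

```python
# Minimal sketch: score one bias axis by comparing concept distributions
# across original and counterfactual prompts.
from collections import Counter

def concept_distribution(concepts_per_image):
    """Normalized frequency of semantic concepts detected across a set of images."""
    counts = Counter(c for img in concepts_per_image for c in img)
    total = sum(counts.values())
    return {c: n / total for c, n in counts.items()}

def bias_score(original, counterfactual):
    """Total-variation distance between the two distributions (0 = identical, 1 = disjoint)."""
    keys = set(original) | set(counterfactual)
    return 0.5 * sum(abs(original.get(k, 0.0) - counterfactual.get(k, 0.0)) for k in keys)

# Hypothetical detections for "a photo of a doctor" vs. a gender counterfactual prompt.
orig = concept_distribution([["stethoscope", "man"], ["man", "lab coat"]])
cf = concept_distribution([["stethoscope", "woman"], ["woman", "lab coat"]])
print(round(bias_score(orig, cf), 3))
```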

    Mitigate One, Skew Another? Tackling Intersectional Biases in Text-to-Image Models

    The biases exhibited by text-to-image (TTI) models are often treated as independent, though in reality they may be deeply interrelated. Addressing bias along one dimension (such as ethnicity or age) can inadvertently affect another, like gender, either mitigating or exacerbating existing disparities. Understanding these interdependencies is crucial for designing fairer generative models, yet measuring such effects quantitatively remains a challenge. To address this, we introduce BiasConnect, a novel tool for analyzing and quantifying bias interactions in TTI models. BiasConnect uses counterfactual interventions along different bias axes to reveal the underlying structure of these interactions and estimates the effect of mitigating one bias axis on another. These estimates show strong correlation (+0.65) with observed post-mitigation outcomes. Building on BiasConnect, we propose InterMit, an intersectional bias mitigation algorithm guided by a user-defined target distribution and priority weights. InterMit achieves lower bias (0.33 vs. 0.52) with fewer mitigation steps (2.38 vs. 3.15 on average) and yields superior image quality compared to traditional techniques. Although our implementation is training-free, InterMit is modular and can be integrated with many existing debiasing approaches for TTI models, making it a flexible and extensible solution.
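    To make the idea of counterfactual interventions along bias axes concrete, the sketch below estimates how an intervention on one axis (e.g., ethnicity) shifts the skew of another (e.g., gender). The skew measure and the toy counts are assumptions; BiasConnect's actual estimator is not given in the abstract.

```python
# Minimal sketch: signed effect of intervening on bias axis A on the skew of axis B.
import numpy as np

def skew(dist, labels):
    """Deviation of a label distribution from uniform (0 = perfectly balanced)."""
    p = np.array([dist.get(l, 0) for l in labels], dtype=float)
    p = p / p.sum()
    return float(p.max() - 1.0 / len(p))

def interaction_effect(baseline, intervened, axis_b_labels):
    """Positive: the intervention worsened axis-B skew; negative: it reduced it."""
    return skew(intervened, axis_b_labels) - skew(baseline, axis_b_labels)

# Hypothetical gender counts before and after an ethnicity-axis intervention.
baseline = {"male": 70, "female": 30}
intervened = {"male": 55, "female": 45}
print(interaction_effect(baseline, intervened, ["male", "female"]))  # < 0: gender skew reduced
```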

    Real valued classification using complex neural networks

    This report details the conception, design, implementation, and analysis through comparative testing of a complex-valued neural network designed to classify datasets containing real values. The proposed network consists of an input layer, which uses a circular (sine) function to map the real-valued input onto the complex plane, followed by a hidden layer employing a Gaussian-like sech activation function, followed by an output layer consisting of a single neuron whose encoded outputs correspond to the class labels used for classifying the input data. Training is posed as a least mean square error minimization problem, where the error between the obtained output and the encoded desired outputs is minimized. The testing results show that the network design performs competitively with both real-valued and complex-valued designs and could provide a foundation for improvements on the faster-performing Circular Complex-Valued Neural Networks. Bachelor of Engineering (Computer Engineering)
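    The architecture described above can be sketched in a few lines. The exponential mapping onto the unit circle, the layer sizes, and the random weights below are illustrative assumptions standing in for the report's exact circular (sine) input encoding and output encoding scheme.

```python
# Minimal sketch: forward pass of a complex-valued network with a circular input
# mapping, a sech hidden layer, and a single complex output neuron.
import numpy as np

def forward(x, W1, W2):
    z = np.exp(1j * np.pi * x)        # map real inputs onto the complex unit circle
    h = 1.0 / np.cosh(W1 @ z)         # Gaussian-like sech activation in the hidden layer
    return W2 @ h                     # single output neuron; class read off the encoded output

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 4)                                     # toy real-valued sample
W1 = rng.standard_normal((8, 4)) + 1j * rng.standard_normal((8, 4))
W2 = rng.standard_normal((1, 8)) + 1j * rng.standard_normal((1, 8))
print(forward(x, W1, W2))
# Training would minimize the least mean square error |output - encoded target|^2;
# the exact target encoding and update rule are not reproduced here.
```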