132 research outputs found
Measurement of Cytokines TNF-α and IFN-γ Concentration Levels in Asthma Patients
Objective: Asthma is a chronic inflammation of the pulmonary airways in which many cells and cellular elements, especially T lymphocytes, play a significant part. In sensitive individuals this inflammation causes frequent symptoms such as coughing and difficulty breathing, and the disease is characterized by an exaggerated immune response in the bronchi, driven by several triggers, including inflammatory cytokines that are crucial in the development of asthma. Among these cytokines are TNF-α and IFN-γ, which contribute to the excessive immune response of the bronchial passages, leading to bronchial constriction. This study aimed to measure the immunological markers TNF-α and IFN-γ in the serum of asthma patients, to assess the extent to which these two cytokines increase the severity of the disease, and to understand their specific roles in the development of asthma and its pathological mechanisms. Method: 60 samples were collected from asthma patients attending the Allergy and Asthma Center at Marjan Hospital in Babylon governorate, together with 40 healthy people used as a control group. Serum samples from patients and healthy subjects were assayed by enzyme-linked immunosorbent assay to measure TNF-α and IFN-γ. Results: There was a significant increase in TNF-α and IFN-γ levels in the serum of asthma patients (p<0.05), whereas the concentrations of these immunological markers were lower in the control group serum. Novelty: According to this study, asthma patients have higher serum concentrations of TNF-α and IFN-γ than the control group, which sheds new light on the role of these two cytokines in the exacerbation of asthma and may guide the development of inhibitory drugs that target them.
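The group comparison behind the reported p<0.05 can be illustrated with a small sketch. The cytokine values below are invented toy numbers, not the study's data, and Welch's t-test is one common choice for such a two-group comparison (the abstract does not state which test was used):

```python
# Illustrative sketch: comparing serum cytokine concentrations (pg/mL)
# between an asthma group and a control group with Welch's t-test.
# All numbers are toy values, not the study's measurements.
import math
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic and approximate degrees of freedom."""
    va, vb = variance(a) / len(a), variance(b) / len(b)
    t = (mean(a) - mean(b)) / math.sqrt(va + vb)
    df = (va + vb) ** 2 / (va**2 / (len(a) - 1) + vb**2 / (len(b) - 1))
    return t, df

patients = [38.2, 41.5, 35.9, 44.0, 39.7, 42.3]   # hypothetical TNF-α, asthma group
controls = [21.1, 19.8, 23.4, 20.6, 22.0, 18.9]   # hypothetical TNF-α, healthy group
t, df = welch_t(patients, controls)
print(f"t = {t:.2f}, df = {df:.1f}")  # a large |t| supports p < 0.05
```

A large t statistic relative to the degrees of freedom corresponds to a small p-value, i.e., a significant difference between the two groups.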
Biologically-inspired hierarchical architectures for object recognition
PhD Thesis. The existing methods for machine vision translate three-dimensional objects in the real world into two-dimensional images. These methods have achieved acceptable performance in recognising objects. However, recognition performance drops dramatically when objects are transformed, for instance in background, orientation, position in the image, or scale. The human visual cortex has evolved to form an efficient invariant representation of objects within a scene. The superior performance of humans can be explained by the feed-forward, multi-layer hierarchical structure of the human visual cortex, in addition to the utilisation of different fields of vision depending on the recognition task. Therefore, the research community has investigated building systems that mimic the hierarchical architecture of the human visual cortex as an ultimate objective.
The aim of this thesis can be summarised as developing hierarchical models of visual processing that tackle the remaining challenges of object recognition. To enhance the existing models of object recognition and to overcome the above-mentioned issues, three major contributions are made, which can be summarised as follows:
1. building a hierarchical model within an abstract architecture that achieves good performance on challenging image object datasets;
2. investigating the contribution of each region of vision for object and scene images, in order to increase recognition performance and decrease the size of the processed data;
3. further enhancing the performance of existing models of object recognition by introducing hierarchical topologies that utilise the context in which an object is found to determine its identity.
Sponsorship: Higher Committee For Education Development in Iraq (HCED).
Objects and scenes classification with selective use of central and peripheral image content
The human visual recognition system is more efficient than any current robotic vision setting. One reason for this superiority is that humans utilize different fields of vision depending on the recognition task. For instance, experiments on human subjects show that peripheral vision is more useful than central vision in recognizing scenes. We tested our recently developed model, the elastic net-regularized hierarchical MAX (En-HMAX), in recognizing objects and scenes. In various experimental conditions, images were occluded with windows and scotomas of varying sizes. With this model, classification accuracies of up to 90% for objects and scenes were possible. Modelling human experiments, window and scotoma analysis with the En-HMAX model revealed that object and scene recognition are sensitive to the availability of data in the centre and the periphery of the images, respectively. Similarly, results of deep learning models have shown that classification accuracy diminishes dramatically in the absence of peripheral vision. These differences led us to further analyse the performance of the En-HMAX model with the parafoveal versus peripheral areas of vision in a second study. Results of the second study show that approximately 50% of the visual field would be sufficient to achieve 96% accuracy in the classification of unseen images. The En-HMAX model adopts a relative order of importance, similar to the human visual system, depending on the image category. We showed that utilizing the relevant regions of vision can significantly reduce image processing time and size.
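The window/scotoma occlusion analysis described above can be sketched with simple radial masks. This is an illustrative construction, not the En-HMAX code; the image size and radius are arbitrary choices:

```python
# Illustrative sketch of "window" (central vision only) and "scotoma"
# (peripheral vision only) occlusions, as in the analysis described above.
# Image size and radius are arbitrary, not the study's parameters.
import numpy as np

def radial_mask(h, w, radius):
    """Boolean mask: True inside a central disc of the given radius."""
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2, (w - 1) / 2
    return (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2

img = np.random.rand(64, 64)
disc = radial_mask(64, 64, radius=16)

window = np.where(disc, img, 0.0)    # keep only the central region
scotoma = np.where(disc, 0.0, img)   # keep only the periphery

print(window[32, 32] == img[32, 32])  # centre survives the window
print(scotoma[32, 32] == 0.0)         # centre removed by the scotoma
```

Feeding a classifier images masked this way, at varying radii, reveals how much each region of the visual field contributes to accuracy.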
Molecular Detection of Bacterial Vaginosis Isolated from Preterm Labor Patients in Wasit City, Iraq
The aim of this study was to evaluate the role of bacterial vaginosis (BV) in preterm labor. There is growing evidence that infections, particularly those that spread from the lower genital tract, might trigger preterm labor, and preterm birth is one of the main causes of newborn morbidity and mortality. This cross-sectional study was carried out in the labor facilities of a local hospital in Wasit. A total of 90 swab samples were collected by a gynecologist from patients in preterm and full-term labor, and bacteriological diagnosis was done using molecular quantification methods; such methods have been reported recently, but the specific risk factors they might identify remain unclear. The 90 pregnant women were divided into two groups: the first group delivered preterm and the other at full term. In the full-term group, 38 out of 45 women had no bacterial infection, compared to 19 out of 45 in the preterm group, so the incidence of BV was significantly higher in the preterm group. The commonest isolated pathogen was G. vaginalis, followed by Megasphaera and Atopobium vaginalis. More than one-third (37.8%) of patients with preterm delivery had Gardnerella infection, compared to only 6.7% of those who delivered at term. Around one-quarter (26.7%) of patients who delivered preterm were diagnosed with Megasphaera, versus only 4.4% of full-term patients, and 22.2% of patients with preterm labor, compared to 4.4% of full-term pregnancies, were diagnosed with Atopobium infection.
Deep learning-based artificial vision for grasp classification in myoelectric hands
Objective. Computer vision-based assistive technology solutions can revolutionise the quality of care for people with sensorimotor disorders. The goal of this work was to enable trans-radial amputees to use a simple yet efficient computer vision system to grasp and move common household objects with a two-channel myoelectric prosthetic hand. Approach. We developed a deep learning-based artificial vision system to augment the grasp functionality of a commercial prosthesis. Our main conceptual novelty is that we classify objects with regard to the grasp pattern without explicitly identifying them or measuring their dimensions. A convolutional neural network (CNN) structure was trained with images of over 500 graspable objects. For each object, 72 images, at intervals, were available. Objects were categorised into four grasp classes, namely: pinch, tripod, palmar wrist neutral and palmar wrist pronated. The CNN setting was first tuned and tested offline and then in real time with objects or object views that were not included in the training set. Main results. The classification accuracy in the offline tests reached for the seen and for the novel objects, reflecting the generalisability of grasp classification. We then implemented the proposed framework in real time on a standard laptop computer and achieved an overall score of in classifying a set of novel as well as seen but randomly rotated objects. Finally, the system was tested with two trans-radial amputee volunteers controlling an i-limb Ultra™ prosthetic hand and a Motion Control™ prosthetic wrist, augmented with a webcam. After training, subjects successfully picked up and moved the target objects with an overall success of up to . In addition, we show that with training, subjects' performance improved in terms of the time required to accomplish a block of 24 trials, despite a decreasing level of visual feedback. Significance.
The proposed design constitutes a substantial conceptual improvement for the control of multi-functional prosthetic hands. We show for the first time that deep learning-based computer vision systems can considerably enhance the grip functionality of myoelectric hands.
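The four-way grasp classification described above can be sketched as a small CNN. The layer sizes, input resolution, and names below are illustrative assumptions, not the architecture reported in the paper:

```python
# Hedged sketch of a four-class grasp classifier (pinch, tripod, palmar
# wrist neutral, palmar wrist pronated). Layer sizes and the 64x64 input
# are illustrative choices, not the paper's actual network.
import torch
import torch.nn as nn

GRASPS = ["pinch", "tripod", "palmar wrist neutral", "palmar wrist pronated"]

class GraspNet(nn.Module):
    def __init__(self, n_classes=len(GRASPS)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, n_classes)

    def forward(self, x):                 # x: (batch, 3, 64, 64)
        z = self.features(x).flatten(1)   # flatten spatial feature maps
        return self.classifier(z)         # raw class scores (logits)

net = GraspNet()
logits = net(torch.randn(1, 3, 64, 64))   # one random test image
print(GRASPS[logits.argmax(dim=1).item()])
```

The key point matches the abstract's conceptual novelty: the output is a grasp class, not an object identity, so the prosthesis controller only needs the argmax over four grasp patterns.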
Vision transformers for automated detection of pig interactions in groups
The interactive behaviour of pigs is an important determinant of their social development and overall well-being. Manual observation and identification of contact behaviour can be time-consuming and potentially subjective. This study presents a new method for the dynamic detection of pig head-to-rear interactions using the Vision Transformer (ViT). Trained on the pig contact datasets, the ViT model achieved high accuracy in detecting and classifying specific interaction behaviour. The model's ability to exploit contextual spatial data enables robust detection even in complex scenes, owing to the Gaussian Error Linear Unit (GELU), the activation function that introduces non-linearity into the model, and the multi-head attention mechanism, which ensures that all relevant details in the data are captured by the Vision Transformer. The method provides an efficient way to monitor swine behaviour, for instance contact between pigs, facilitating better livestock management and welfare. The ViT represents a significant improvement on current automated behaviour detection, opening new possibilities for accurate animal behaviour assessment, with an accuracy of 82.8%, an F1 score of 82.7%, and an AUC of 85%.
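The two ingredients the abstract highlights, GELU and multi-head attention, come together in a standard ViT encoder block. The sketch below is a generic transformer block with illustrative dimensions, not the study's specific model:

```python
# Hedged sketch: one Vision Transformer encoder block combining
# multi-head attention and a GELU MLP, as named in the abstract above.
# Embedding size, head count, and token count are illustrative.
import torch
import torch.nn as nn

class ViTBlock(nn.Module):
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, x):                       # x: (batch, tokens, dim)
        h = self.norm1(x)
        a, _ = self.attn(h, h, h)               # self-attention over patches
        x = x + a                               # residual around attention
        return x + self.mlp(self.norm2(x))      # residual around GELU MLP

tokens = torch.randn(1, 16, 64)                 # 16 patch embeddings
out = ViTBlock()(tokens)
print(out.shape)
```

Because every patch token attends to every other, the block can relate a pig's head region to a distant rear region in the same frame, which is what makes the interaction detection context-aware.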
A Comprehensive Review of AI Techniques for Addressing Algorithmic Bias in Job Hiring
The study comprehensively reviews artificial intelligence (AI) techniques for addressing algorithmic bias in job hiring. More businesses are using AI in curriculum vitae (CV) screening. While this move improves efficiency in the recruitment process, it is vulnerable to biases, which have adverse effects on organizations and the broader society. This research aims to analyze case studies on AI hiring to demonstrate both successful implementations and instances of bias, and to evaluate the impact of algorithmic bias and the strategies to mitigate it. The basic design of the study entails a systematic review of existing literature and research studies on artificial intelligence techniques employed to mitigate bias in hiring. The results demonstrate that correction of the vector space and data augmentation are effective natural language processing (NLP) and deep learning techniques for mitigating algorithmic bias in hiring. The findings underscore the potential of AI techniques in promoting fairness and diversity in the hiring process. The study contributes to human resource practice by enhancing the fairness of hiring algorithms. It recommends collaboration between machines and humans to enhance the fairness of the hiring process. The results can help AI developers make the algorithmic changes needed to enhance fairness in AI-driven tools, enabling the development of ethical hiring tools and contributing to fairness in society.
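The "correction of the vector space" idea the review identifies can be illustrated by projecting a bias direction out of word embeddings, in the spirit of hard-debiasing approaches. The vectors below are toy data and the bias axis is an assumption for illustration:

```python
# Illustrative sketch of vector-space correction: removing a bias
# direction from an embedding by projecting it out. All vectors here
# are toy data, not embeddings from any real hiring system.
import numpy as np

def debias(vec, bias_dir):
    """Remove the component of `vec` along the bias direction."""
    b = bias_dir / np.linalg.norm(bias_dir)   # unit bias axis
    return vec - np.dot(vec, b) * b           # orthogonal projection

bias_dir = np.array([1.0, 0.0, 0.0])          # hypothetical gender axis
embedding = np.array([0.8, 0.3, 0.5])         # toy embedding of a job term

clean = debias(embedding, bias_dir)
print(np.dot(clean, bias_dir))                # bias component removed
```

After the projection, the embedding is orthogonal to the bias axis, so a downstream screening model can no longer pick up that direction as a feature.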
Review of farmer-centered AI systems technologies in livestock operations
The assessment of livestock welfare helps to monitor the health, physiology, and environment of the animals in order to prevent deterioration, detect injuries and stress, and sustain productivity. It has also grown into a significant marketing tactic, because consumer pressure pushes farming industries to change how animals are treated and to make husbandry more humane. Common visual welfare procedures followed by experts and vets can be expensive, subjective, and dependent on specialized staff. Recent developments in artificial intelligence (AI), integrated with farmers' expertise, have aided the creation of novel, cutting-edge livestock biometrics technologies that extract important physiological data linked to animal welfare. A thorough examination of physiological, behavioral, and health variables highlights AI's ability to provide accurate, rapid, and impartial assessments. The review takes a farmer-focused approach: it emphasizes the crucial role that farmers play in the skillful adoption and prudent application of AI and sensor technologies, and discusses the development of logical, practical, and affordable solutions tailored to farmers' needs.
Labeled projective dictionary pair learning: application to handwritten numbers recognition
Dictionary learning was introduced for sparse image representation; today, it is a cornerstone of image classification. We propose a novel dictionary learning method to recognise images of handwritten numbers. Our focus is to maximise the sparse-representation and discrimination power of the class-specific dictionaries. We, for the first time, adopt a new feature space, i.e., the histogram of oriented gradients (HOG), to generate dictionary columns (atoms). The HOG features robustly describe fine details of handwriting. We design an objective function, followed by a minimisation technique, to simultaneously incorporate these features. The proposed cost function benefits from a novel class-label penalty term constraining the associated minimisation approach to obtain class-specific dictionaries. The results of applying the proposed method to various handwritten image databases in three different languages show enhanced classification performance (~98%) compared to other relevant methods. Moreover, we show that combining HOG features with dictionary learning enhances accuracy by 11% compared to using raw data. Finally, we demonstrate that our proposed approach achieves results comparable to those of existing deep learning models under the same experimental conditions, but with a fraction of the parameters.
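The HOG feature space used for the dictionary atoms above can be sketched at its simplest: a single-cell histogram of gradient orientations. Real HOG descriptors use many cells with block normalisation; the cell size and bin count here are illustrative choices:

```python
# Hedged sketch of the HOG idea: a single-cell histogram of gradient
# orientations, magnitude-weighted and L2-normalised. Real HOG tiles
# the image into many cells; parameters here are illustrative.
import numpy as np

def hog_cell(patch, n_bins=9):
    gy, gx = np.gradient(patch.astype(float))          # image gradients
    mag = np.hypot(gx, gy)                             # gradient magnitude
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180         # unsigned orientation
    hist, _ = np.histogram(ang, bins=n_bins, range=(0, 180), weights=mag)
    return hist / (np.linalg.norm(hist) + 1e-8)        # L2-normalised cell

patch = np.zeros((8, 8))
patch[:, 4:] = 1.0                                     # vertical edge
feat = hog_cell(patch)
print(feat.argmax())                                   # dominant orientation bin
```

Stacking such histograms over all cells of a digit image yields the feature vector from which the class-specific dictionary atoms are learned, which is what makes the atoms robust to fine stroke variations.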
