    An Automated System for Epilepsy Detection using EEG Brain Signals based on Deep Learning Approach

    Epilepsy is a neurological disorder, and electroencephalography (EEG) is a commonly used clinical approach for its detection. Manual inspection of EEG brain signals is a time-consuming and laborious process, which places a heavy burden on neurologists and affects their performance. Several automatic techniques based on traditional approaches have been proposed to assist neurologists in detecting binary epilepsy scenarios, e.g. seizure vs. non-seizure or normal vs. ictal. These methods do not perform well when classifying the ternary case, e.g. ictal vs. normal vs. inter-ictal; the maximum accuracy reported for this case by state-of-the-art methods is 97±1%. To overcome this problem, we propose a deep-learning system that is an ensemble of pyramidal one-dimensional convolutional neural network (P-1D-CNN) models. In a CNN model, the bottleneck is the large number of learnable parameters. P-1D-CNN works on the concept of a refinement approach, which results in 60% fewer parameters compared to traditional CNN models. Further, to overcome the limitation of a small amount of data, we propose augmentation schemes for learning the P-1D-CNN model. In almost all cases concerning epilepsy detection, the proposed system gives an accuracy of 99.1±0.9% on the University of Bonn dataset.
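
    A minimal sketch of a pyramidal 1D-CNN classifier in PyTorch is given below; it only illustrates the general idea of 1D convolutions whose filter counts shrink ("refine") with depth. The 512-sample segment length, the three-class output (normal / inter-ictal / ictal), and all layer sizes are illustrative assumptions, not the parameters of the paper's P-1D-CNN.

    # Sketch of a pyramidal 1D CNN for EEG segment classification.
    # Assumptions (not taken from the paper): single-channel segments of
    # 512 samples, 3 classes, and filter counts that shrink with depth.
    import torch
    import torch.nn as nn

    class P1DCNN(nn.Module):
        def __init__(self, n_classes: int = 3, seg_len: int = 512):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv1d(1, 24, kernel_size=5, stride=3), nn.BatchNorm1d(24), nn.ReLU(),
                nn.Conv1d(24, 16, kernel_size=3, stride=2), nn.BatchNorm1d(16), nn.ReLU(),
                nn.Conv1d(16, 8, kernel_size=3, stride=2), nn.BatchNorm1d(8), nn.ReLU(),
            )
            with torch.no_grad():  # infer the flattened feature size once
                n_flat = self.features(torch.zeros(1, 1, seg_len)).numel()
            self.classifier = nn.Sequential(nn.Flatten(), nn.Linear(n_flat, n_classes))

        def forward(self, x):  # x shape: (batch, 1, seg_len)
            return self.classifier(self.features(x))

    # Example: class logits for a batch of 4 random EEG segments.
    print(P1DCNN()(torch.randn(4, 1, 512)).shape)  # torch.Size([4, 3])

    The paper's ensemble of P-1D-CNN models and its augmentation schemes would sit on top of a per-segment classifier of this general shape.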

    Efficient PPA-SiO2-catalyzed synthesis of β-enaminones under solvent-free conditions

    An efficient method has been developed for the synthesis of β-enaminones under solvent-free reaction conditions using PPA-SiO2 as catalyst. The reaction yields were good to excellent (up to 90%). This methodology affords high selectivity and good tolerance of a variety of functional groups present on both aromatic and aliphatic amines. In addition, the methodology is environmentally benign and cost-effective due to the absence of solvent and the easy work-up.

    Social messages in the Paradigma photojournalism exhibition: a Charles Sanders Peirce semiotic analysis of the Photo's Speak community exhibition at UIN Bandung

    Social Messages in the Paradigma Photojournalism Exhibition (A Charles Sanders Peirce Semiotic Analysis of the Photo's Speak Community Exhibition at UIN Bandung). A journalistic photograph presents information visually and complements the written news report. A photograph can also shape the understanding of those who view it. On this basis, the Photo's Speak community held the Paradigma photojournalism exhibition to influence visitors' perceptions so that they could understand a subject from a different point of view. However, misunderstandings often arise when interpreting the messages of the photographs on display. This problem motivated the present study, a semiotic analysis of the visual signs in the photo stories exhibited at Paradigma. The aim is to uncover the deeper social messages in the photo stories shown at the exhibition. The results show that the photo-story series in the exhibition carry social messages when analysed with Charles Sanders Peirce's triangle-of-meaning concept. At the sign stage, the study identified signs such as colours, living creatures, inanimate objects, and human activities. At the object stage, the analysis found that the colours displayed represent particular emotions or feelings; in addition, there are activities carried out by people, objects used in everyday life, and the places where these activities take place. After the sign and object stages, at the final interpretant stage the study concluded and identified the various social messages contained in the photo-story series exhibited at Paradigma. Broadly, the social message running through all of the photo stories shown at this exhibition is that the way something is viewed, or the meaning it carries, can differ depending on how the viewer sees and interprets it.

    The MIDAS score after Memantine in patients with migraine at a tertiary care Hospital.

    ABSTRACT INTRODUCTION: Memantine has been suggested as a migraine prophylaxis therapy in some observational studies. OBJECTIVE: To determine the mean change in MIDAS score after Memantine in patients with migraine. SAMPLING TECHNIQUE: Non-probability, consecutive sampling.

    Cytotoxicity, In vitro anti-Leishmanial and fingerprint HPLC- photodiode array analysis of the roots of Trillium govanianum.

    Trillium govanianum Wall. ex D. Don (Melanthiaceae alt. Trilliaceae), commonly known as 'nagchhatry' or 'teen patra', is distributed from Pakistan to Bhutan at altitudes of about 2500-3800 m and is indigenous to the Himalayan region. In folk medicine the plant has been reported for the treatment of wound healing, sepsis, and various sexual disorders. This paper reports, for the first time, the cytotoxicity, in vitro anti-leishmanial activity (promastigotes), and fingerprint HPLC-photodiode array analysis of the MeOH extract of the roots of T. govanianum and its solid-phase extraction fractions. Reverse-phase HPLC-PDA based quantification revealed the presence of significant amounts of quercetin, myricetin, and kaempferol, ranging from 0.221 to 0.528 μg/mg DW. The MeOH extract showed distinguishable protein kinase inhibitory activity against the Streptomyces 85E strain, with an 18 mm bald phenotype. A remarkable toxicity profile against brine shrimp and Leishmania promastigotes was shown by the MeOH extract, with LC50 values of 10 and 38.5 μg/mL, respectively.

    AI Vision for Health Care: Virtual Keyboard and Mouse Empowering Partially Disabled Patients

    This paper introduces a machine-learning-based virtual keyboard and mouse system designed to assist individuals with physical disabilities. The system recognizes hand gestures using computer vision techniques and translates them into keyboard inputs and mouse controls. By utilizing Convolutional Neural Networks (CNNs) and the YOLOv8 model, the system achieves real-time performance with an average accuracy of 92%, enabling touchless interaction with computers. The solution uses widely available hardware such as standard webcams, making it accessible, affordable, and easy to deploy. The system improves the usability of computing devices for people with motor impairments, offering an innovative, touchless alternative to traditional input methods, and supports essential tasks such as scrolling, clicking, and zooming through simple gestures. The framework is adaptable to various environments, ensuring it is easy to use in different settings. Overall, the system offers a complete virtual keyboard and mouse solution using a common webcam and real-time gesture recognition, making computer use easier and more affordable for users with motor impairments.
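
    A minimal sketch of such a detection-and-control loop is shown below: a YOLOv8 detector locates the hand in each webcam frame and the bounding-box centre is mapped to screen coordinates to move the cursor via pyautogui. The weights file "hand_yolov8n.pt", the single detected class, and the direct centre-to-cursor mapping are illustrative assumptions; the paper's own gesture-to-command mapping (clicks, scrolling, zooming) is not reproduced here.

    # Sketch: track a detected hand with YOLOv8 and move the mouse cursor.
    # "hand_yolov8n.pt" is a hypothetical fine-tuned hand-detection model.
    import cv2
    import pyautogui
    from ultralytics import YOLO

    model = YOLO("hand_yolov8n.pt")        # hypothetical hand-detection weights
    screen_w, screen_h = pyautogui.size()  # target coordinate space
    cap = cv2.VideoCapture(0)              # standard webcam

    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        result = model(frame, verbose=False)[0]
        if len(result.boxes) > 0:
            # Use the first detection's box centre as the pointer position.
            x1, y1, x2, y2 = result.boxes.xyxy[0].tolist()
            cx = (x1 + x2) / 2 / frame.shape[1]  # normalise to 0..1
            cy = (y1 + y2) / 2 / frame.shape[0]
            pyautogui.moveTo(cx * screen_w, cy * screen_h)
        cv2.imshow("hand view", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):    # press 'q' to quit
            break
    cap.release()
    cv2.destroyAllWindows()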

    A Computer Vision Based Child Safety Solution Using YOLOv8 Architecture

    Child safety continues to be a major concern in homes, public spaces, and schools. Physical barriers and supervision by parents or guardians are often not enough to prevent accidents in restricted or high-risk areas such as swimming pools, staircases, areas near sharp objects, electrical sockets, or places where drugs are stored. This project proposes a real-time computer-vision-based solution to enhance child safety by detecting the presence of children in restricted zones and immediately alerting guardians, caregivers, or authorities. The system is built using YOLOv8 (You Only Look Once, version 8) for object detection, combined with distance estimation and an alarm-triggering mechanism. A custom dataset containing over 30,000 labeled images across eight categories was used for model training and validation. The Euclidean distance formula was applied to measure the spatial relationship between detected children and nearby hazards, enabling accurate risk assessment in real time. The proposed model achieved a mean Average Precision (mAP) of 90% and showed high accuracy in detecting critical proximity scenarios instantly. The solution is scalable and can be deployed in various environments, offering a proactive approach to preventing accidents. This project aims to deliver an effective system using readily available hardware, making it easy to install in both private and public spaces. Early testing demonstrated high levels of accuracy, speed, and real-time performance, positioning this system as a potential breakthrough in child safety technology.
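
    A minimal sketch of the proximity check is shown below: given YOLOv8 detections for one frame, the Euclidean distance between the centre of each detected child box and each hazard box is computed, and an alert is raised when it falls below a pixel threshold. The class names ("child", "pool", "staircase", "socket") and the threshold value are illustrative assumptions, not the project's eight dataset categories.

    # Sketch: flag frames in which a detected child is close to a hazard.
    import math
    from ultralytics import YOLO

    HAZARD_CLASSES = {"pool", "staircase", "socket"}  # assumed hazard names
    ALERT_DISTANCE_PX = 150                           # assumed proximity threshold

    def box_centre(box):
        x1, y1, x2, y2 = box
        return (x1 + x2) / 2, (y1 + y2) / 2

    def check_frame(frame, model: YOLO) -> bool:
        """Return True (and print an alert) if any child is near any hazard."""
        result = model(frame, verbose=False)[0]
        names = result.names                          # class-id -> name mapping
        dets = [(names[int(c)], box.tolist())
                for c, box in zip(result.boxes.cls, result.boxes.xyxy)]
        children = [box_centre(b) for n, b in dets if n == "child"]
        hazards = [box_centre(b) for n, b in dets if n in HAZARD_CLASSES]
        for cx, cy in children:
            for hx, hy in hazards:
                if math.hypot(cx - hx, cy - hy) < ALERT_DISTANCE_PX:
                    print("ALERT: child detected near a hazard")  # hook the alarm here
                    return True
        return False

    A pixel-space threshold is only a rough proxy for physical distance; the project's distance estimation would calibrate this against camera placement.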
