61 research outputs found

    Kinerja Skema Pemberian Tanda Air Video Dijital Berbasis DWT-SVD Dengan Detektor Semi-Blind

    Full text link
    On the Performance of SVD-DWT Based Digital Video Watermarking Technique with Semi-Blind Detector. This paper presents a watermarking technique for digital video. The proposed scheme is developed based on the work of Ganic and Chan, which took advantage of SVD and DWT. While the previous work of Chan has the blind-detector property, our attempt is to develop a scheme with a semi-blind detector, using the merit of the DWT-SVD technique proposed by Ganic, which was originally applied to still images. Overall, our experimental results show that the proposed scheme has very good imperceptibility and is reasonably robust, especially under several attacks such as compression, blurring, cropping, and sharpening.
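    The embedding and semi-blind detection described above can be sketched roughly as follows: a single-level Haar LL subband of a frame is decomposed with SVD, the watermark's singular values perturb the frame's, and the detector needs only the original singular values to recover the watermark spectrum. This is a minimal illustration, not the authors' exact scheme; all function names and the strength `alpha` are our own.

```python
import numpy as np

def haar_ll(frame):
    """LL subband of a single-level Haar DWT: 2x2 block averages."""
    return (frame[0::2, 0::2] + frame[0::2, 1::2]
            + frame[1::2, 0::2] + frame[1::2, 1::2]) / 4.0

def embed(ll, watermark, alpha=0.05):
    """Perturb the LL subband's singular values with the watermark's."""
    U, S, Vt = np.linalg.svd(ll)
    Sw = np.linalg.svd(watermark, compute_uv=False)
    ll_marked = U @ np.diag(S + alpha * Sw) @ Vt
    return ll_marked, S  # original S is the semi-blind side information

def extract_sv(ll_marked, S_orig, alpha=0.05):
    """Recover the watermark's singular values from a marked subband."""
    S_m = np.linalg.svd(ll_marked, compute_uv=False)
    return (S_m - S_orig) / alpha
```

    The detector is semi-blind because it needs the host's original singular values, but not the host frame itself.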

    Klasifikasi Beat Aritmia Pada Sinyal Ekg Menggunakan Fuzzy Wavelet Learning Vector Quantization

    Full text link
    The recognition of beat patterns in the analysis of electrocardiogram (ECG) recordings is an important part of detecting heart disease, especially arrhythmia. Many methods have been developed for beat pattern recognition, but most still use classical classification algorithms, which cannot recognize classification outliers. Fuzzy Learning Vector Quantization (FLVQ) is one algorithm that can recognize classification outliers, but it performs poorly when the test data are not grouped. In this paper we propose Fuzzy Wavelet Learning Vector Quantization (FWLVQ), a modification of FLVQ that can handle both crisp and fuzzy data and whose inference system combines the Takagi-Sugeno-Kang fuzzy model with wavelets. The ECG signals were obtained from the MIT-BIH database. The beat pattern recognition system as a whole is divided into two parts: data preprocessing and classification. The experimental results show that FWLVQ achieves an accuracy of 90.20% on data without classification outliers and 87.19% on data with classification outliers, with a 1:1 ratio of outlier to non-outlier test data.
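    The prototype-based classification that FWLVQ extends can be illustrated with plain LVQ1 (the fuzzy and wavelet modifications of the paper are not reproduced here; this is only the baseline update rule, with our own function names):

```python
import numpy as np

def lvq1_train(X, y, prototypes, proto_labels, lr=0.1, epochs=20):
    """Plain LVQ1: attract the nearest prototype on a correct match, repel otherwise."""
    P = prototypes.astype(float).copy()
    for _ in range(epochs):
        for x, label in zip(X, y):
            j = int(np.argmin(np.linalg.norm(P - x, axis=1)))  # nearest prototype
            step = lr * (x - P[j])
            P[j] += step if proto_labels[j] == label else -step
    return P

def lvq1_predict(X, P, proto_labels):
    """Assign each sample the label of its nearest prototype."""
    return np.array([proto_labels[int(np.argmin(np.linalg.norm(P - x, axis=1)))]
                     for x in X])
```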

    Particle Filter with Binary Gaussian Weighting and Support Vector Machine for Human Pose Interpretation

    Get PDF
    Human pose interpretation using a particle filter with binary Gaussian weighting and a Support Vector Machine is proposed. In the proposed system, the particle filter is used to track the human object; this object is then skeletonized using a thinning algorithm and classified with a Support Vector Machine. The classification identifies the human pose as either normal or abnormal behavior. Here the particle filter is modified by computing weights from a Gaussian distribution to reduce computation time. The modified particle filter consists of four main phases. First, particles are generated to predict the target's location. Second, the weights of certain particles are calculated, and these particles are used to build a Gaussian distribution. Third, the weights of all particles are calculated based on this Gaussian distribution. Fourth, the particles are updated based on their weights. The modified particle filter reduces the computation time of object tracking because it does not have to calculate each particle's weight one by one; instead, it builds a Gaussian distribution and computes the particle weights from it. In experiments on video data taken in front of the cashier of a convenience store, the proposed method reduced the computation time of the tracking process by 68.34% on average compared to the conventional one, while its tracking accuracy is comparable to the standard particle filter, i.e. 90.3%. The combination of a particle filter with binary Gaussian weighting and a Support Vector Machine is promising for advanced early crime scene investigation.
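    The four phases above can be sketched as follows. The likelihood function, the subset fraction, and the exact way the Gaussian is fitted are our own simplifications of the paper's scheme, not its published form:

```python
import numpy as np

def gaussian_weighted_update(particles, likelihood, subset_frac=0.2, rng=None):
    """One update step: exact weights for a subset only, then a fitted
    Gaussian assigns weights to every particle."""
    rng = rng or np.random.default_rng(0)
    n = len(particles)
    k = max(2, int(subset_frac * n))
    idx = rng.choice(n, size=k, replace=False)             # phase 2: pick certain particles
    w_sub = np.array([likelihood(p) for p in particles[idx]])
    w_sub /= w_sub.sum()
    mu = (w_sub[:, None] * particles[idx]).sum(axis=0)     # phase 3: fit a Gaussian
    var = (w_sub[:, None] * (particles[idx] - mu) ** 2).sum(axis=0) + 1e-9
    w = np.exp(-0.5 * ((particles - mu) ** 2 / var).sum(axis=1))  # phase 4: weight all
    return w / w.sum()
```

    The target estimate (phase 1 having generated the predicted particles) is then the weighted mean of all particles.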

    Particle Filter with Gaussian Weighting for Human Tracking

    Get PDF
    A particle filter for object tracking can achieve high tracking accuracy. To track the object, the method generates a number of particles, each representing a candidate target; the location of the target object is determined by the particles and their weights. The disadvantage of the conventional particle filter is its computation time, especially in computing the particle weights. A particle filter with Gaussian weighting is proposed to address this computational problem. The method has two main stages, prediction and update, and differs from the conventional particle filter in the update stage: in the conventional method the weight is calculated for every particle, whereas in the proposed method only certain particles' weights are calculated exactly and the remaining weights are obtained from the Gaussian weighting. Experiments were done using an artificial dataset. The average accuracy is 80.862%. The high accuracy achieved by this method makes it usable for real-time tracking systems.

    Dataset Suara dan Teks Berbahasa Indonesia Pada Rekaman Podcast dan Talk show

    Get PDF
    One factor in the success of a machine learning or deep learning model is the dataset used. This paper presents a speech dataset taken from podcast and talk show recordings, together with Indonesian-language transcriptions. The dataset is provided because no publicly accessible Indonesian dataset has been available for training Text-to-Speech or Automatic Speech Recognition models. The dataset consists of 3270 recordings, processed to obtain transcriptions in the form of Indonesian text or sentences. Building the dataset involved several stages: preprocessing, translation, a first validation stage, and a second validation stage. The dataset follows the format of the LJSpeech dataset, so that it is easy to process when used as input to a model. It is expected to help improve training quality for Indonesian Text-to-Speech processing, such as the Tacotron2 model, as well as for Indonesian Automatic Speech Recognition.
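    The LJSpeech convention the dataset follows is a pipe-delimited `metadata.csv` with no header, one clip per line: `<clip id>|<raw transcription>|<normalized transcription>`. A minimal reader/writer sketch; the clip IDs below are hypothetical, not taken from the dataset:

```python
import io

def write_metadata(rows):
    """Serialize (clip_id, raw, normalized) triples in LJSpeech metadata style."""
    buf = io.StringIO()
    for clip_id, raw, norm in rows:
        buf.write(f"{clip_id}|{raw}|{norm}\n")
    return buf.getvalue()

def read_metadata(text):
    """Split each line on the first two pipes into [clip_id, raw, normalized]."""
    return [line.split("|", 2) for line in text.strip().splitlines()]
```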

    Text Preprocessing using Annotated Suffix Tree with Matching Keyphrase

    Get PDF
    Text documents are an important source of information and knowledge. Much of the knowledge needed in various domains, for different purposes, is implicit in the content. The content of a text is represented by keyphrases, which consist of one or more meaningful words. Keyphrases can be extracted from text through several processing steps, including text preprocessing. An Annotated Suffix Tree (AST) built from the document collection itself is used to extract the keyphrases, after basic text preprocessing that includes stop-word removal and stemming. Four combinations of preprocessing variants are used. Extracted two-word (bi-word) and three-word phrases serve as a list of keyphrase candidates that can help a user who needs keyphrase information to understand the content of the documents. The candidates can be processed further by a learning process, with manual validation, to decide whether each is a keyphrase for the text domain. Experiments on a simulated corpus with known keyphrases show that more than 90% of the two- and three-word keyphrases can be extracted, and on a real corpus from the economics domain about 70% of the keyphrases or meaningful phrases can be extracted. The proposed method is an effective way to find candidate keyphrases in a collection of text documents: it reduces the number of non-keyphrases in the candidate list and detects keyphrases that are separated by stop words.
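    The candidate-generation step (not the AST itself) can be sketched as follows: stop words are removed first, so a phrase split by a stop word in the running text is still found as a contiguous bi-word or tri-word candidate. The stop-word list here is illustrative only:

```python
import re

STOP_WORDS = {"the", "of", "a", "an", "and", "in", "is", "to"}  # illustrative list

def candidate_phrases(text, sizes=(2, 3)):
    """Bi-word and tri-word candidates over the stop-word-filtered token stream."""
    tokens = [t for t in re.findall(r"[a-z]+", text.lower()) if t not in STOP_WORDS]
    out = set()
    for n in sizes:
        for i in range(len(tokens) - n + 1):
            out.add(" ".join(tokens[i:i + n]))
    return out
```

    For example, "the rate of inflation" yields the candidate "rate inflation" even though a stop word separates the two content words in the text.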

    PENYISIPAN TANDA AIR PADA CITRA DIJITAL BERBASIS DEKOMPOSISI NILAI SINGULIR (DNS)

    Get PDF
    Imperceptible Watermarking in Digital Images Based on Singular Value Decomposition. Watermarking is a commonly used technique to protect digital images from unintended uses such as counterfeiting. This paper addresses one technique for embedding a watermark in a digital image, based on the singular value decomposition (SVD). The primary targets of a good watermarking technique are that the watermarked image is imperceptibly different from the original and that the inserted image can still be perfectly retrieved even after various transformations of the watermarked image. Our work shows that SVD-based watermarking achieves both imperceptibility and robustness, as indicated by a significantly high correlation between the inserted and retrieved logo under tests such as PSNR, RML, and compression.
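    A minimal sketch of the embedding step and of PSNR, a standard imperceptibility measure, under the usual SVD-watermarking assumption that the host's singular values are perturbed by the logo's (the function names and strength `alpha` are our own, not the paper's):

```python
import numpy as np

def svd_embed(image, logo, alpha=0.01):
    """Embed by perturbing the host image's singular values with the logo's."""
    U, S, Vt = np.linalg.svd(image, full_matrices=False)
    Sw = np.linalg.svd(logo, compute_uv=False)
    return U @ np.diag(S + alpha * Sw) @ Vt

def psnr(a, b, peak=1.0):
    """Peak signal-to-noise ratio in dB between two same-sized images."""
    return 10.0 * np.log10(peak ** 2 / np.mean((a - b) ** 2))
```

    A small `alpha` keeps the PSNR of the marked image high, which is what "imperceptible" means quantitatively.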

    KINERJA SKEMA PEMBERIAN TANDA AIR VIDEO DIJITAL BERBASIS DWT-SVD DENGAN DETEKTOR SEMI-BLIND

    Get PDF
    On the Performance of SVD-DWT Based Digital Video Watermarking Technique with Semi-Blind Detector. This paper presents a watermarking technique for digital video. The proposed scheme is developed based on the work of Ganic and Chan, which took advantage of SVD and DWT. While the previous work of Chan has the blind-detector property, our attempt is to develop a scheme with a semi-blind detector, using the merit of the DWT-SVD technique proposed by Ganic, which was originally applied to still images. Overall, our experimental results show that the proposed scheme has very good imperceptibility and is reasonably robust, especially under several attacks such as compression, blurring, cropping, and sharpening.
    Keywords: discrete-wavelet-transform, imperceptibility, robustness, singular-value-decomposition, watermarking

    DETEKSI PEMALSUAN CITRA BERBASIS DEKOMPOSISI NILAI SINGULIR

    Get PDF
    Image Fakery Detection Based on Singular Value Decomposition. The growth of image processing technology nowadays makes it easier for users to modify and fake images. Image fakery is the manipulation of part or all of an image, in either its content or its context, with the help of digital image processing techniques. Faked images are hard to recognize because they look so natural; yet with numerical computation techniques it is possible to detect the evidence of faking. This research successfully applies the singular value decomposition method to detect image fakery. The image preprocessing algorithm applied before detection yields two vectors orthogonal to the singular value vector, which are important for detecting fake images. Experiments on images in several conditions successfully detect the fakes with a threshold value of 0.2. SVD-based detection of image fakery can be used to accurately investigate fake images modified from an original image.
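    The general idea of SVD-based tamper detection, thresholded comparison of singular-value features between a reference and a suspect image, can be sketched as follows. This is a deliberate simplification: the paper's actual features involve the orthogonal vectors from its preprocessing step, not the bare spectrum used here; only the 0.2 threshold is taken from the abstract.

```python
import numpy as np

def sv_signature(img):
    """Normalized singular-value spectrum, a scale-invariant summary of an image."""
    S = np.linalg.svd(img, compute_uv=False)
    return S / np.linalg.norm(S)

def looks_faked(original, suspect, threshold=0.2):
    """Flag the suspect if its spectrum drifts past the threshold (0.2 in the paper)."""
    return bool(np.linalg.norm(sv_signature(original) - sv_signature(suspect)) > threshold)
```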