A hybrid method based on time–frequency images for classification of alcohol and control EEG signals
Education Policy of Audit Firms in Turkey
Enhanced transparency of audit firms provides information regarding audit firms' corporate governance and quality control practices. Moreover, transparency improves audit quality. Focusing on the education of partners and staff may also contribute to a culture that promotes audit quality. This paper investigates whether the continuing education section of transparency reports complies with the Independent Audit By-Law and whether transparency levels differed across 2014, 2016 and 2017. In the study, the 2017 transparency reports of 88 audit firms are examined and content analysis is conducted. According to the results of the study, it can be concluded that there has been no significant increase in the transparency level of firms regarding their education policy since 2014. It is highly recommended that all audit firms consider disclosing the Continuing Professional Education (CPE) hours of each auditor in their transparency reports in the future. Keywords: Transparency report, audit firms, continuous education, Turkey DOI: 10.7176/EJBM/11-11-13 Publication date: April 30th 201
TRANSPARENCY OF AUDIT FIRMS: ANALYSIS OF TRANSPARENCY REPORTS IN TURKEY
Transparency may improve audit quality. In order to increase the transparency of audit firms, regulators require them to issue a transparency report. In this regard, the Eighth EU Directive requires audit firms to publish annual transparency reports. In Turkey, the Independent Audit By-Law has requirements similar to those contained in Article 40 of the EU Directive. The paper examines the contents of the 2017 transparency reports of audit firms in Turkey and assesses the level of audit firms' compliance with the transparency reporting requirements set by the By-Law. The sample of the study consists of 88 transparency reports of audit firms that conducted audits of public interest entities (PIEs) in Turkey in 2016. Keywords: Transparency report, audit firms, Turkey, KGK DOI: 10.7176/RJFA/10-8-22 Publication date: April 30th 201
Managing Uncertainty in the Airline Industry: The Interaction of Strategic Flexibility, Organizational Learning, and Dynamic Capabilities
The airline industry is highly competitive, dynamic, and uncertain. In addition, the sector's constantly evolving, growing structure and its vulnerability to economic conditions often render it unstable and unpredictable. In such an environment, one of the most important strategic tools airline companies can use to keep their activities sustainable is increasing their level of strategic flexibility. In this context, this study aims to reveal whether dynamic capabilities mediate the relationship between organizational learning and strategic flexibility in the airline industry. The mediating effects proposed in the research model were analyzed with the PROCESS macro developed by Hayes and integrated with SPSS software. The results indicate positive relationships between organizational learning, dynamic capabilities, and strategic flexibility in the airline industry. Furthermore, the study reveals that dynamic capabilities mediate the relationship between organizational learning and strategic flexibility. These results highlight the importance of developing dynamic capabilities and organizational learning to enhance strategic flexibility in the airline industry.
Neutrosophic Hough Transform
The Hough transform (HT) is a useful tool for both the pattern recognition and image processing communities. From a pattern recognition perspective, it can extract distinctive features for describing various shapes, such as lines, circles, and ellipses. From an image processing perspective, many applications can be handled with the HT, such as lane detection for autonomous cars and blood cell detection in microscope images. Although the HT is a straightforward shape detector, its detection ability degrades in noisy images. To alleviate this weakness and improve its shape detection performance, in this paper we propose the neutrosophic Hough transform (NHT). As shown in earlier work, image processing applications based on neutrosophy theory have been successful in noisy environments. To this end, the Hough space is first transferred into the neutrosophic (NS) domain by calculating the NS membership triple (T, I, F). An indeterminacy filter that uses neighborhood information is then constructed to remove indeterminacy in the spatial neighborhood of the neutrosophic Hough space. Potential peaks are detected by thresholding the neutrosophic Hough space, and these peak locations are then used to detect the lines in the image domain. Extensive experiments on noisy and noise-free images are performed to show the efficiency of the proposed NHT algorithm. We also compare the proposed NHT with the traditional HT and fuzzy HT methods on a variety of images. The obtained results show the efficiency of the proposed NHT on noisy images.
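The following is a minimal, hypothetical Python sketch of the NHT pipeline summarised above, using NumPy, SciPy and scikit-image. The particular membership definitions for T, I and F, the indeterminacy-filtering rule, the window size and the peak threshold are illustrative assumptions, not the authors' exact formulations.

import numpy as np
from scipy.ndimage import uniform_filter
from skimage.transform import hough_line

def neutrosophic_hough(edge_image, peak_threshold=0.5, window=5):
    # 1. Standard Hough accumulator over the binary edge map.
    hspace, angles, dists = hough_line(edge_image)
    h = hspace.astype(float)

    # 2. Map the accumulator into the neutrosophic domain.
    #    T: normalised local mean (truth), I: normalised local variation
    #    (indeterminacy), F = 1 - T (falsity). These forms are assumptions;
    #    F is kept only to show the full (T, I, F) triple.
    local_mean = uniform_filter(h, size=window)
    T = (local_mean - local_mean.min()) / (np.ptp(local_mean) + 1e-12)
    local_var = uniform_filter((h - local_mean) ** 2, size=window)
    I = (local_var - local_var.min()) / (np.ptp(local_var) + 1e-12)
    F = 1.0 - T

    # 3. Indeterminacy filtering: blend T with its neighbourhood mean,
    #    smoothing more strongly where indeterminacy I is high.
    T_filtered = (1.0 - I) * T + I * uniform_filter(T, size=window)

    # 4. Threshold the filtered truth map and read peaks back as (angle, dist) lines.
    rows, cols = np.nonzero(T_filtered > peak_threshold)
    return [(angles[c], dists[r]) for r, c in zip(rows, cols)]

For example, feeding a Canny edge map into neutrosophic_hough would return candidate (theta, rho) pairs that can be drawn back onto the original image.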
Linear dimensionality reduction for classification via a sequential Bayes error minimisation with an application to flow meter diagnostics
Supervised linear dimensionality reduction (LDR) performed prior to classification often improves classification accuracy by reducing overfitting and removing multicollinearity. If a Bayes classifier is to be used, then reduction to a dimensionality of K − 1 is necessary and sufficient to preserve the classification information in the original feature space for the K-class problem. However, most existing algorithms provide no optimal dimensionality to which to reduce the data, so classification information can be lost in the reduced space if fewer than K − 1 dimensions are used. In this paper, we present a novel LDR technique that reduces the dimensionality of the original data to K − 1, such that it is well-primed for Bayesian classification. This is done by sequentially constructing linear classifiers that minimise the Bayes error via a gradient descent procedure, under an assumption of normality. We experimentally validate the proposed algorithm on UCI datasets. Our algorithm is shown to be superior in terms of classification accuracy when compared to existing algorithms, including LDR based on Fisher's criterion and the Chernoff criterion. The applicability of our algorithm is then demonstrated by employing it to diagnose the health states of ultrasonic flow meters. As with the UCI datasets, the proposed algorithm is found to outperform the existing algorithms on the two flow meters, and such high classification accuracies promise significant cost benefits in oil and gas operations. (Author's accepted version of an article published in Expert Systems with Applications, 91 (2017). DOI: 10.1016/j.eswa.2017.09.010. © 2017 Elsevier; licensed under CC BY-NC-ND 4.0: http://creativecommons.org/licenses/by-nc-nd/4.0)
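As a rough illustration of the idea rather than the authors' algorithm, the sketch below builds K − 1 projection directions one at a time, each chosen to minimise a pairwise Bhattacharyya bound on the two-class Gaussian Bayes error along that direction. The paper instead minimises the Bayes error directly by gradient descent, so the surrogate objective, the optimiser and the orthogonalisation step here are stand-in assumptions.

import numpy as np
from scipy.optimize import minimize

def _pairwise_bhattacharyya_bound(w, means, covs):
    """Sum of two-class Gaussian Bayes-error bounds along direction w."""
    total = 0.0
    for i in range(len(means)):
        for j in range(i + 1, len(means)):
            m = w @ means[i] - w @ means[j]
            s_i, s_j = w @ covs[i] @ w, w @ covs[j] @ w
            s = 0.5 * (s_i + s_j)
            b = 0.125 * m * m / s + 0.5 * np.log(s / np.sqrt(s_i * s_j))
            total += 0.5 * np.exp(-b)  # Bhattacharyya bound on the pairwise error
    return total

def sequential_ldr(X, y, n_components=None):
    classes = np.unique(y)
    means = [X[y == c].mean(axis=0) for c in classes]
    covs = [np.cov(X[y == c], rowvar=False) for c in classes]
    n_components = n_components or len(classes) - 1
    W = []
    for _ in range(n_components):
        def objective(w):
            # Keep each new direction orthogonal to those already found.
            for u in W:
                w = w - (w @ u) * u
            w = w / (np.linalg.norm(w) + 1e-12)
            return _pairwise_bhattacharyya_bound(w, means, covs)
        w0 = np.random.default_rng(len(W)).normal(size=X.shape[1])
        res = minimize(objective, w0, method="L-BFGS-B")
        w = res.x
        for u in W:
            w = w - (w @ u) * u
        W.append(w / (np.linalg.norm(w) + 1e-12))
    return np.array(W)  # rows are projection directions; project with X @ W.T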
Linear classifier design under heteroscedasticity in Linear Discriminant Analysis
Under normality and homoscedasticity assumptions, Linear Discriminant Analysis (LDA) is known to be optimal in terms of minimising the Bayes error for binary classification. In the heteroscedastic case, LDA is not guaranteed to minimise this error. Assuming heteroscedasticity, we derive a linear classifier, the Gaussian Linear Discriminant (GLD), that directly minimises the Bayes error for binary classification. In addition, we propose a local neighbourhood search (LNS) algorithm to obtain a more robust classifier if the data are known to have a non-normal distribution. We evaluate the proposed classifiers on two artificial and ten real-world datasets that cut across a wide range of application areas, including handwriting recognition, medical diagnosis and remote sensing, and compare our algorithms against existing LDA approaches and other linear classifiers. The GLD is shown to outperform the original LDA procedure in terms of classification accuracy under heteroscedasticity. While it compares favourably with other existing heteroscedastic LDA approaches, the GLD requires up to 60 times less training time on some datasets. Our comparison with the support vector machine (SVM) also shows that the GLD, together with the LNS, requires up to 150 times less training time to achieve an equivalent classification accuracy on some of the datasets. Thus, our algorithms can provide a cheap and reliable option for classification in many expert systems.
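A minimal sketch of the GLD idea, assuming Gaussian class-conditional densities: estimate each class's mean and covariance, write the exact Bayes error of a linear rule sign(w·x + b) under those estimates, and minimise it numerically starting from the ordinary LDA solution. The optimiser, the warm start and the function names are illustrative choices rather than the authors' implementation, and the LNS refinement is not shown.

import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def fit_gld(X, y):
    """Heteroscedastic linear classifier minimising the Gaussian Bayes error (sketch)."""
    X0, X1 = X[y == 0], X[y == 1]
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    S0, S1 = np.cov(X0, rowvar=False), np.cov(X1, rowvar=False)
    p0, p1 = len(X0) / len(X), len(X1) / len(X)

    def bayes_error(theta):
        w, b = theta[:-1], theta[-1]
        s0 = np.sqrt(w @ S0 @ w) + 1e-12
        s1 = np.sqrt(w @ S1 @ w) + 1e-12
        # Probability of predicting 1 for class 0, plus predicting 0 for class 1,
        # weighted by the class priors.
        return p0 * norm.cdf((w @ mu0 + b) / s0) + p1 * norm.cdf(-(w @ mu1 + b) / s1)

    # Warm-start from the ordinary (homoscedastic) LDA direction.
    Sw = 0.5 * (S0 + S1)
    w0 = np.linalg.solve(Sw, mu1 - mu0)
    b0 = -w0 @ (mu0 + mu1) / 2.0
    res = minimize(bayes_error, np.append(w0, b0), method="L-BFGS-B")
    return res.x[:-1], res.x[-1]  # predict class 1 when w @ x + b > 0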
Fused faster RCNNs for efficient detection of the license plates
Automatic license plate detection and recognition (ALPD-R) is an important and challenging application for traffic surveillance, traffic safety, security, service and parking-management purposes. Traditionally, classical image processing routines have been used for ALPD-R. Although these general approaches perform well, new and efficient approaches are needed to improve detection accuracy. Thus, in this paper, a new approach based on fusing multiple Faster Regions with Convolutional Neural Network (Faster R-CNN) architectures is proposed. More specifically, deep learning (DL) is used to detect license plates in given images. The proposed license plate detection method uses three Faster R-CNN modules, each built on a pre-trained CNN model, namely AlexNet, VGG16 and VGG19. Each Faster R-CNN module is trained independently and their results are combined in a fusing layer. The fusing layer applies an average operator to the X and Y coordinates of the outputs of the Faster R-CNN modules and a maximum operator to their width and height outputs. A publicly available dataset is used in the experiments, with accuracy as the performance indicator of the proposed method. For 100 test images, the proposed method detects the exact location of the license plate in 97 images, giving an accuracy of 97%.
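The fusion rule itself is simple enough to spell out. The sketch below is a hypothetical illustration assuming each module returns a single box in (x, y, width, height) form, with the average applied to the coordinates and the maximum taken over the sizes, as described above.

import numpy as np

def fuse_boxes(box_alexnet, box_vgg16, box_vgg19):
    # Boxes are assumed to be (x, y, width, height) from the three detectors.
    boxes = np.array([box_alexnet, box_vgg16, box_vgg19], dtype=float)
    x, y = boxes[:, 0].mean(), boxes[:, 1].mean()  # average operator on X and Y
    w, h = boxes[:, 2].max(), boxes[:, 3].max()    # maximum operator on width and height
    return np.array([x, y, w, h])

# Example: three plate detections from the AlexNet-, VGG16- and VGG19-based
# modules fused into one box.
fused = fuse_boxes([120, 48, 90, 30], [118, 50, 94, 28], [122, 47, 92, 31])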
Improved PSO With Visit Table and Multiple Direction Search Strategies for Skin Cancer Image Segmentation
Automated screening is employed to assist skin specialists in accurately detecting skin lesions at an early stage. Multilevel thresholding is a widely used and efficient technique for enhancing the classification of skin cancer images. This paper proposes an improved PSO with a novel visit table and multiple-direction search strategies to improve the performance of multilevel thresholding. The visit table strategy avoids unnecessary searches of the original particle swarm optimization (PSO) algorithm by encouraging the discovery of new points and reducing visits to frequently visited points and their neighbors. In addition, a multiple-direction search strategy is introduced into the PSO to increase the diversity of the population and to avoid getting stuck in local optima by enhancing exploration ability. Qualitative, quantitative, and scalability analyses of the improved PSO (IPSO) were carried out on 50 benchmark functions, and the proposed method achieved the highest performance on most of them. Secondly, a multilevel image segmentation application on skin cancer images is presented using two-dimensional (2D) non-local means histograms, the improved PSO, and Renyi's entropy. The ISIC 2017 skin cancer image dataset is used for the segmentation application, and various performance evaluation metrics are reported. The obtained results are compared with seven state-of-the-art approaches to show the efficiency of the proposed approach. The proposed method outperforms the compared methods on the average of the evaluation metrics across all skin cancer images, achieving the best SSIM of 0.8285, FSIM of 0.7332, and PSNR of 19.0576 in skin cancer image segmentation. Hence, the proposed method is ready to be tested on large databases and can aid skin specialists in making an accurate diagnosis.
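The sketch below is a heavily simplified, hypothetical reading of the two IPSO ingredients named above: a visit table that discourages re-sampling of frequently visited regions, and a multiple-direction search around the global best. The grid resolution, visit limit, coefficients and probe step are assumed values, and the 2D non-local means histogram and Renyi-entropy objective from the paper are not reproduced; any function of a candidate threshold vector can be plugged in as objective.

import numpy as np

def ipso_sketch(objective, lb, ub, n_particles=20, n_iters=100, grid=16, seed=0):
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    dim = len(lb)
    pos = rng.uniform(lb, ub, size=(n_particles, dim))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([objective(p) for p in pos])
    g = pbest[pbest_val.argmin()].copy()
    visits = {}  # visit table over a coarse grid of the search space

    def cell(x):
        return tuple(((x - lb) / (ub - lb + 1e-12) * grid).astype(int))

    for _ in range(n_iters):
        for i in range(n_particles):
            r1, r2 = rng.random(dim), rng.random(dim)
            vel[i] = 0.7 * vel[i] + 1.5 * r1 * (pbest[i] - pos[i]) + 1.5 * r2 * (g - pos[i])
            cand = np.clip(pos[i] + vel[i], lb, ub)
            # Visit table: if the candidate's cell is heavily visited already,
            # jump to an unexplored random point instead of re-sampling it.
            c = cell(cand)
            if visits.get(c, 0) > 3:
                cand = rng.uniform(lb, ub)
                c = cell(cand)
            visits[c] = visits.get(c, 0) + 1
            pos[i] = cand
            val = objective(cand)
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = cand, val
        g = pbest[pbest_val.argmin()].copy()
        # Multiple-direction search: probe a few random directions around the
        # global best and keep any improvement (extra exploration).
        for d in rng.normal(size=(4, dim)):
            probe = np.clip(g + 0.05 * (ub - lb) * d / np.linalg.norm(d), lb, ub)
            if objective(probe) < objective(g):
                g = probe
    return g, objective(g)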
Automated Detection of Neurological and Mental Health Disorders Using EEG Signals and Artificial Intelligence: A Systematic Review
Mental and neurological disorders significantly impact global health. This systematic review examines the use of artificial intelligence (AI) techniques to automatically detect these conditions from electroencephalography (EEG) signals. Guided by the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA), we reviewed 74 carefully selected studies published between 2013 and August 2024 that used machine learning (ML), deep learning (DL), or both to automatically detect neurological and mental health disorders from EEG signals. Studies covering the most prevalent neurological and mental health disorders were sourced from major databases, including Scopus, Web of Science, ScienceDirect, PubMed, and IEEE Xplore. Epilepsy, depression, and Alzheimer's disease are the most studied conditions meeting our evaluation criteria, with 32, 12, and 10 studies identified on these topics, respectively. The number of studies meeting our criteria on stress, schizophrenia, Parkinson's disease, and autism spectrum disorders was more modest: 6, 4, 3, and 3, respectively. The least-represented conditions were seizure, stroke, and anxiety, with one study each, plus one study examining Alzheimer's disease and epilepsy together. Support Vector Machines (SVMs) were the most widely used ML method, while Convolutional Neural Networks (CNNs) dominated the DL approaches. DL methods generally outperformed traditional ML, yielding higher performance on large EEG datasets. We observed that the complex decision process involved in extracting features from EEG signals significantly affected the results of ML-based models, whereas DL-based models handled this more efficiently. AI-based EEG analysis shows promise for the automated detection of neurological and mental health conditions. Future research should focus on multi-disease studies, standardizing datasets, improving model interpretability, and developing clinical decision support systems to assist in the diagnosis and treatment of these disorders.
- …
