DIVERGENCE IN STAKEHOLDER PERCEPTIONS OF SECURITY POLICIES: A REPGRID ANALYSIS FOR NORM-RULE COMPLIANCE
Many organizations struggle to synchronize individuals' values regarding information security with the expectations set by the relevant security policy. Such discordance leads to failures in compliance or outright subversion of existing or imposed controls. A mismatch in how individuals in an organization understand security policies can have a devastating effect on the organization's security. Different individuals hold different understandings and knowledge about IS security, which is reflected in the design and practice of IS security policies (Vaast, 2007). Albrechtsen and Hovden (2009) argue that users and managers practice IS security differently because they have different rationalities. This difference in rationalities may underlie the mismatch between security policies and individuals' values.
In this research, we argue that the occurrence of a security breach can change individuals' values with respect to the organization's security policy. These changes in values are, in turn, reflected in the degree of compliance between individuals' norms and security rules and standards. Organizations therefore need to ensure alignment between their security policy and the values of their employees, so that violations of organizational security can be alleviated or prevented. However, it is difficult to find a common method that all organizations can adopt to keep security rules and individuals' norms in sync.
The main aim of this research is to investigate how people perceive information security policy and how their perceptions change in response to security breaches. In addition, this research investigates the relationship between individuals' values and security policy, so that organizations can achieve the intended level of compliance between individual norms and security rules and standards.
With the aid of the Repertory Grid technique, this research examines how a security breach shapes people's values with respect to an organization's security policy. To support this argument, the research offers an assessment mechanism that helps an organization evaluate employees' values with regard to its security policy. Based on that evaluation, the organization can develop an appropriate mechanism to ensure compliance between individuals' norms and security rules. The results show that employees in an organization hold different perceptions of the security policy, and that these perceptions change in response to a security incident. This change in perceptions does not necessarily result in better compliance with the security policy; factors such as the type of breach and people's experience affect the extent of the change. Contributions, implications, and directions for future research are discussed.
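As a rough illustration of the kind of assessment mechanism described above, the sketch below (Python/NumPy) compares two stakeholders' repertory-grid ratings and reports a simple divergence score; the constructs, policy elements, and ratings are hypothetical and not taken from the study. Repeating the comparison before and after a breach would give one crude indicator of how perceptions shift.

```python
import numpy as np

# Hypothetical repertory grids: rows are personal constructs elicited from each
# stakeholder (e.g., "protects assets" vs. "hinders daily work"); columns are
# policy elements (e.g., password rules, access control, incident reporting).
manager_grid = np.array([[5, 4, 5, 2],
                         [4, 5, 4, 3],
                         [5, 5, 4, 4]], dtype=float)
user_grid = np.array([[3, 2, 4, 1],
                      [2, 3, 2, 2],
                      [4, 3, 3, 5]], dtype=float)

def divergence(grid_a, grid_b):
    """Mean absolute rating gap across all construct/element pairs (0 = identical views)."""
    return float(np.abs(grid_a - grid_b).mean())

print("manager vs. user divergence:", divergence(manager_grid, user_grid))
# Re-running the elicitation after a security incident and comparing the scores
# would indicate whether perceptions converged toward, or away from, the policy.
```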
Mismatched Understanding of IS Security Policy: A RepGrid Analysis
Professional and academic literature indicates that organizational stakeholders may hold different perceptions of security rules and policies. This discrepancy of perceptions may be rooted in a conflict between stakeholders' compliance with organizational norms on the one hand and security rules on the other. The paper argues that a mismatched understanding of security policy can have a devastating effect on the security of organizations and should therefore be treated as a key reason for non-compliance with security policy. Using Personal Construct Theory and Repertory Grids, we explore how different stakeholder groups within an organization can hold divergent views on the same security policies. Our findings have implications for the design of security policy training and awareness programs, as well as for the institution and internalization of good IS governance practices.
Resistance of multiple stakeholders to e-health innovations: Integration of fundamental insights and guiding research paths
Consumer/user resistance is considered a key factor responsible for the failure of digital innovations. Yet, existing scholarship has not given it due attention while examining user responses to e-health innovations. The present study addressed this need by consolidating the existing findings to provide a platform to motivate future research. We used a systematic literature review (SLR) approach to identify and analyze the relevant literature. To execute the SLR, we first specified a stringent search protocol with specific inclusion and exclusion criteria to identify relevant studies. Thereafter, we undertook an in-depth analysis of 72 congruent studies, thus presenting a comprehensive structure of findings, gaps, and opportunities for future research. Specifically, we mapped the relevant literature to elucidate the nature and causes of resistance offered by three key constituent groups of the healthcare ecosystem—patients, healthcare organizational actors, and other stakeholders. Finally, based on the understanding acquired through our critical synthesis, we formulated a conceptual framework, classifying user resistance into micro, meso, and macro barriers which provide context to the interventions and strategies required to counter resistance and motivate adoption, continued usage, and positive recommendation intent. Being the first SLR in the area to present a multi-stakeholder perspective, our study offers fine-grained insights for hospital management, policymakers, and community leaders to develop an effective plan of action to overcome barriers that impede the diffusion of e-health innovations.
Exploring the impact of smart cities on improving the quality of life for people with disabilities in Saudi Arabia
By using advanced technologies and data analytics, smart cities can create conditions that are both inclusive and accessible, addressing the distinctive needs of people with disabilities. This research aims to examine the benefits of smart city technologies and develop strategies for creating environments that serve the requirements of individuals with disabilities in Saudi Arabia. Using a sequential mixed-methods design, the study draws on the social model of disability. The initial phase involves gathering quantitative data from 427 individuals with disabilities in Saudi Arabia; qualitative data were then obtained through semi-structured interviews with four professionals employed in Saudi smart city initiatives. The quantitative data are analyzed using Partial Least Squares Structural Equation Modeling (PLS-SEM), while the qualitative data are analyzed using thematic analysis. The quantitative findings confirmed the robustness of the measurement model and the significant effects of smart city initiatives on Accessibility Enhancement, Inclusive Information, and Health and Wellbeing Improvement. Respondents indicated that they are satisfied with the initiatives and their effectiveness, which provide them with equal services and opportunities without discrimination. The qualitative analysis further revealed themes such as Technology Integration for Accessibility, Inclusive Design, and Inclusive Planning for Health. Participants emphasized the need for special consideration in implementing designs and approaches that ensure inclusivity and the availability of services to people with disabilities, as well as infrastructure and policies that safeguard their health and wellbeing. It is therefore concluded that smart city initiatives break down barriers and improve the wellbeing of individuals with disabilities. Improved healthcare services and inclusive urban planning highlight the transformative effect of these initiatives on health and wellbeing, promoting an equitable and sustainable services environment. Finally, research implications and limitations are discussed.
GAN-enhanced deep learning for improved Alzheimer's disease classification and longitudinal brain change analysis
Alzheimer's disease (AD) is commonly defined by a progressive decline in cognitive functions and memory. Early detection is crucial to mitigate the devastating impacts of AD, which can significantly impair a person's quality of life. Traditional methods for diagnosing AD, while still in use, often involve time-consuming processes that are prone to errors and inefficiencies. These manual techniques are limited in their ability to handle the vast amount of data associated with the disease, leading to slower diagnosis and potential misclassification. Advancements in artificial intelligence (AI), specifically machine learning (ML) and deep learning (DL), offer promising solutions to these challenges. AI techniques can process large datasets with high accuracy, significantly improving the speed and precision of AD detection. However, despite these advancements, issues such as limited accuracy, computational complexity, and the risk of overfitting still pose challenges in the field of AD classification. To address these challenges, the proposed study integrates deep learning architectures, particularly ResNet101 and long short-term memory (LSTM) networks, to enhance both feature extraction and classification of AD. The ResNet101 model is augmented with innovative layers such as the pattern descriptor parsing operation (PDPO) and the detection convolutional kernel (DCK) layer, which are designed to extract the most relevant features from datasets such as ADNI and OASIS. These features are then processed through the LSTM model, which classifies individuals into categories such as cognitively normal (CN), mild cognitive impairment (MCI), and Alzheimer's disease (AD). Another key aspect of the research is the use of generative adversarial networks (GANs) to identify the progressive or non-progressive nature of AD. By employing both a generator and a discriminator, the GAN model detects whether the AD state is advancing: if the original and predicted classes align, AD is deemed non-progressive; if they differ, the disease is progressing. This approach provides a nuanced view of AD, which could lead to more precise and personalized treatment plans. The proposed model achieves an accuracy of 0.9931 on the ADNI dataset and 0.9985 on the OASIS dataset. Ultimately, this research aims to offer significant contributions to the medical field, helping healthcare professionals diagnose AD more accurately and efficiently, thus improving patient outcomes. Furthermore, brain simulation models are integrated into this framework to provide deeper insights into the underlying neural mechanisms of AD. These brain simulation models help visualize and predict how AD may evolve in different regions of the brain, enhancing both diagnosis and treatment planning.
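The slice-wise feature extraction followed by sequence classification described above can be sketched roughly as below (PyTorch/torchvision). This is only a schematic of a ResNet101-to-LSTM pipeline under assumed input shapes; the paper's custom PDPO and DCK layers, preprocessing, and the GAN progression module are not reproduced here.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet101

class ADClassifier(nn.Module):
    """ResNet101 backbone for per-slice features, LSTM over the slice sequence."""
    def __init__(self, num_classes=3, hidden=256):
        super().__init__()
        backbone = resnet101(weights=None)  # pretrained weights optional
        self.features = nn.Sequential(*list(backbone.children())[:-1])  # drop FC head
        self.lstm = nn.LSTM(input_size=2048, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)  # CN / MCI / AD

    def forward(self, x):                       # x: (batch, slices, 3, H, W)
        b, t = x.shape[:2]
        feats = self.features(x.flatten(0, 1))  # (b*t, 2048, 1, 1)
        feats = feats.flatten(1).view(b, t, -1) # (b, t, 2048)
        out, _ = self.lstm(feats)
        return self.head(out[:, -1])            # logits from the last time step

logits = ADClassifier()(torch.randn(2, 4, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 3])
```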
Decoupled SculptorGAN Framework for 3D Reconstruction and Enhanced Segmentation of Kidney Tumors in CT Images
Our proposed framework, SculptorGAN, represents a novel advancement in medical imaging for the accurate and automatic diagnosis of renal tumors, using the techniques and principles of Generative Adversarial Networks (GANs). In contrast to standard segmentation models such as U-Net, the framework is built on a strategy that couples reconstruction and segmentation of CT images of renal malignancies. The core of the SculptorGAN methodology is a GAN-based approach for precise three-dimensional rendering of renal anatomy from CT scans, followed by a segmentation phase that separates neoplastic from non-neoplastic tissue. SculptorGAN was designed to circumvent limitations inherent in conventional segmentation techniques. With this architecture, diagnostic accuracy increases to 96.5%, which supports early detection and thus timely treatment of renal tumors. The improved results are attributed to the more accurate and detailed reconstruction of renal structures that the framework enables, in addition to better segmentation. Performance analyses report quantitative results on the presented datasets, and validation shows that SculptorGAN outperforms traditional models such as U-Net. In particular, SculptorGAN decreased 3D reconstruction time by about 35% while increasing segmentation accuracy by 20% or more. These gains in efficiency and reliability of renal tumor diagnosis have far-reaching implications for patient treatment and outcomes. In conclusion, the framework addresses the challenges of accurately diagnosing renal tumors and advances the field of medical image analysis by leveraging GANs for improved image reconstruction and segmentation.
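A minimal, generic sketch of a 3D GAN pair for volumetric CT data is shown below (PyTorch). It is not the SculptorGAN architecture itself; the layer counts, the 32-cubed output resolution, and the latent size are assumptions made only to illustrate the reconstruction/discrimination split the abstract describes.

```python
import torch
import torch.nn as nn

class Generator3D(nn.Module):
    """Maps a latent vector to a coarse 32^3 volume (schematic, not the paper's exact net)."""
    def __init__(self, z_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose3d(z_dim, 128, 4, 1, 0), nn.BatchNorm3d(128), nn.ReLU(),
            nn.ConvTranspose3d(128, 64, 4, 2, 1), nn.BatchNorm3d(64), nn.ReLU(),
            nn.ConvTranspose3d(64, 32, 4, 2, 1), nn.BatchNorm3d(32), nn.ReLU(),
            nn.ConvTranspose3d(32, 1, 4, 2, 1), nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(z.view(z.size(0), -1, 1, 1, 1))

class Discriminator3D(nn.Module):
    """Scores whether a 32^3 volume looks like a real reconstruction."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 32, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv3d(32, 64, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv3d(64, 1, 8),
        )

    def forward(self, v):
        return self.net(v).flatten(1)

vol = Generator3D()(torch.randn(2, 100))
print(vol.shape, Discriminator3D()(vol).shape)  # (2, 1, 32, 32, 32) (2, 1)
```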
Quality of Experience Aware Service Selection Model to Empower Edge Computing in IoT
Quality-of-experience-aware service selection can significantly alleviate the well-known scalability issues of an Internet of Things (IoT) architecture. In traditional IoT architectures, several heterogeneous data streams from connected nodes are transmitted through gateways to remote mobile cloud servers. This procedure is time- and energy-consuming, even when the target dataset is comparatively small and uninterrupted. Moreover, with this conventional technique, the reliability grade drops significantly when additional security-related quality of service (QoS) requirements must be met relative to the service cost. We propose a quality-of-experience-aware task rescheduling model using edge modules that offers territory-based, three-layered edge IoT data analysis and service selection. The observation module at the application layer makes a near-optimal assessment of each usage metric and its distinct QoS components. Meanwhile, the QoS manager at the network layer handles network traffic arising from the load associated with heterogeneous service needs. The sensing layer assures the precision of this knowledge to the service manager, with a few adaptability characteristics for assorted service requests. The proposed three-layered, energy-efficient model helps minimize data delivery time at minimal cost, with optimized quality assurance for service-based IoT infrastructures such as smart agriculture, patient monitoring, and student monitoring.
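The layered selection idea can be illustrated with a toy QoE score that weighs per-candidate QoS metrics (plain Python). The metric names, weights, and candidate values below are illustrative assumptions, not the paper's model.

```python
# Candidate services with assumed QoS metrics: lower latency/energy/cost is better,
# higher reliability is better.
candidates = {
    "edge-gateway":  {"latency_ms": 12,  "energy_mj": 30,  "cost": 0.8, "reliability": 0.97},
    "regional-edge": {"latency_ms": 35,  "energy_mj": 55,  "cost": 0.5, "reliability": 0.99},
    "remote-cloud":  {"latency_ms": 140, "energy_mj": 120, "cost": 0.3, "reliability": 0.995},
}
# Negative weights penalize cost-type metrics; the positive weight rewards reliability.
weights = {"latency_ms": -0.4, "energy_mj": -0.3, "cost": -0.1, "reliability": 0.2}

def qoe_score(metrics):
    """Normalize each metric against the candidate pool, then apply the signed weights."""
    score = 0.0
    for key, weight in weights.items():
        pool_max = max(c[key] for c in candidates.values())
        score += weight * (metrics[key] / pool_max)
    return score

best = max(candidates, key=lambda name: qoe_score(candidates[name]))
print("selected service:", best)  # the nearby edge node wins for small, latency-sensitive data
```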
Multi-class Breast Cancer Classification Using CNN Features Hybridization
Breast cancer has become the leading cause of cancer mortality among women worldwide. Timely diagnosis of such cancer is therefore in constant demand among researchers. This research sheds light on improving the design of computer-aided detection (CAD) for earlier breast cancer classification. The design of CAD tools using deep learning is becoming popular and robust in biomedical classification systems. However, deep learning performs inadequately on multilabel classification problems, especially when the dataset has an uneven distribution of output targets, a problem that is prevalent in publicly available breast cancer datasets. To overcome this, the paper integrates the learning and discrimination ability of multiple convolutional neural networks, namely the VGG16, VGG19, ResNet50, and DenseNet121 architectures, for breast cancer classification. Accordingly, a fusion of hybrid deep features (FHDF) approach is proposed to capture more potential information and attain improved classification performance. The research utilizes digital mammogram images for earlier breast tumor detection. The proposed approach is evaluated on three public breast cancer datasets: the Mammographic Image Analysis Society (MIAS) database, the Curated Breast Imaging Subset of the Digital Database for Screening Mammography (CBIS-DDSM), and the INbreast database. The results are compared with base convolutional neural network (CNN) architectures and a late-fusion approach. For the MIAS, CBIS-DDSM, and INbreast datasets, the proposed FHDF approach achieves maximum accuracies of 98.706%, 97.734%, and 98.834%, respectively, in classifying three classes of breast cancer severity.
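A rough sketch of the hybrid feature fusion idea is given below (PyTorch/torchvision): globally pooled features from the four named backbones are concatenated and fed to a small classifier. The paper's exact fusion layers, preprocessing, and training setup are not reproduced; the input size and pooling strategy are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

def pooled_features(model: nn.Module, x: torch.Tensor) -> torch.Tensor:
    """Strip the classifier head and return a global-average-pooled feature vector."""
    trunk = nn.Sequential(*list(model.children())[:-1])
    return F.adaptive_avg_pool2d(trunk(x), 1).flatten(1)

backbones = [models.vgg16(weights=None), models.vgg19(weights=None),
             models.resnet50(weights=None), models.densenet121(weights=None)]

x = torch.randn(2, 3, 224, 224)  # stand-in batch of mammogram patches (channel-replicated)
with torch.no_grad():
    fused = torch.cat([pooled_features(m.eval(), x) for m in backbones], dim=1)

classifier = nn.Linear(fused.shape[1], 3)      # 3 severity classes, as in the abstract
print(fused.shape, classifier(fused).shape)    # (2, 4096) (2, 3)
```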
An Approach to Binary Classification of Alzheimer’s Disease Using LSTM
In this study, we use Long Short-Term Memory (LSTM) networks to evaluate Magnetic Resonance Imaging (MRI) data and overcome the shortcomings of conventional Alzheimer's disease (AD) detection techniques. Our method offers greater reliability and accuracy in predicting the likelihood of AD than cognitive testing and brain structure analyses. We trained our LSTM network on an MRI dataset downloaded from Kaggle. Utilizing the temporal memory characteristics of LSTMs, the network was created to efficiently capture and evaluate the sequential patterns inherent in MRI scans. Our model scored a remarkable AUC of 0.97 and an accuracy of 98.62%. During training, we used stratified shuffle-split cross-validation to ensure that our findings were reliable and generalizable. Our study adds to the body of knowledge by demonstrating the potential of LSTM networks for AD prediction and extending the variety of methods investigated for image classification in AD research. We have also designed a user-friendly web-based application to improve the accessibility of our developed model, bridging the gap between research and practical deployment.
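The evaluation protocol mentioned above can be sketched as follows (Python, scikit-learn). For brevity the LSTM is replaced by a placeholder classifier and random features stand in for the MRI data, so this only illustrates the stratified shuffle-split cross-validation and metric reporting, not the study's actual model or dataset.

```python
import numpy as np
from sklearn.model_selection import StratifiedShuffleSplit
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, accuracy_score

X = np.random.rand(200, 64)       # stand-in for per-scan MRI features
y = np.random.randint(0, 2, 200)  # binary labels: AD vs. non-AD

sss = StratifiedShuffleSplit(n_splits=5, test_size=0.2, random_state=0)
for fold, (tr, te) in enumerate(sss.split(X, y)):
    clf = LogisticRegression(max_iter=1000).fit(X[tr], y[tr])   # placeholder for the LSTM
    prob = clf.predict_proba(X[te])[:, 1]
    print(f"fold {fold}: acc={accuracy_score(y[te], prob > 0.5):.3f} "
          f"auc={roc_auc_score(y[te], prob):.3f}")
```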
Load Balancing Using Artificial Intelligence for Cloud-Enabled Internet of Everything in Healthcare Domain
The emergence of the Internet of Things (IoT) and its subsequent evolution into the Internet of Everything (IoE) is a result of the rapid growth of information and communication technologies (ICT). However, implementing these technologies comes with certain obstacles, such as the limited availability of energy resources and processing power. Consequently, there is a need for energy-efficient and intelligent load-balancing models, particularly in healthcare, where real-time applications generate large volumes of data. This paper proposes a novel, energy-aware artificial intelligence (AI)-based load balancing model that employs the Chaotic Horse Ride Optimization Algorithm (CHROA) and big data analytics (BDA) for cloud-enabled IoT environments. The CHROA technique enhances the optimization capacity of the Horse Ride Optimization Algorithm (HROA) using chaotic principles. The proposed CHROA model balances the load, optimizes available energy resources using AI techniques, and is evaluated using various metrics. Experimental results show that the CHROA model outperforms existing models. For instance, while the Artificial Bee Colony (ABC), Gravitational Search Algorithm (GSA), and Whale Defense Algorithm with Firefly Algorithm (WD-FA) techniques attain average throughputs of 58.247 Kbps, 59.957 Kbps, and 60.819 Kbps, respectively, the CHROA model achieves an average throughput of 70.122 Kbps. The proposed CHROA-based model presents an innovative approach to intelligent load balancing and energy optimization in cloud-enabled IoT environments. The results highlight its potential to address critical challenges and contribute to developing efficient and sustainable IoT/IoE solutions.
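As a loose illustration of chaotic-map-driven search for load balancing, the sketch below (Python/NumPy) perturbs a task-to-server assignment with a logistic chaotic sequence to reduce load imbalance. It is emphatically not the paper's CHROA or HROA; the objective, task loads, and update rule are assumptions made for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
task_load = rng.uniform(1, 10, size=50)  # assumed per-task demands
n_servers = 5

def imbalance(assign):
    """Standard deviation of per-server load; lower means a more even spread."""
    loads = np.bincount(assign, weights=task_load, minlength=n_servers)
    return loads.std()

best = rng.integers(0, n_servers, task_load.size)   # random initial assignment
best_cost, x = imbalance(best), 0.7                 # x seeds the logistic chaotic map
for _ in range(2000):
    x = 4.0 * x * (1.0 - x)                         # logistic map in its chaotic regime
    cand = best.copy()
    i = int(x * task_load.size) % task_load.size
    cand[i] = int(x * n_servers) % n_servers        # chaotic perturbation of one assignment
    if (cost := imbalance(cand)) < best_cost:
        best, best_cost = cand, cost

print("load std across servers:", round(float(best_cost), 3))
```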
- …
