
    Design and Implementation of a State-Driven Operating System for Highly Reconfigurable Sensor Networks

    Because each sensor node must be low-cost and low-power, its available computing resources are very limited, with a small memory footprint and an irreplaceable battery. Sensed data may need to be fused before transmission, trading processing against transmission to save power. Moreover, the application program must be sophisticated enough to be self-organizing and dynamically reconfigurable, because the operating environment keeps changing even after deployment. A state-driven operating system platform offers numerous benefits in this challenging situation. It provides a powerful way to accommodate complex reactive systems such as diverse wireless sensor network applications. Memory usage can be bounded by a state transition table, and complicated issues such as concurrency control and asynchronous event handling can be addressed through the well-defined behavior of a state transition diagram. In this paper, we present an efficient and effective design of a state-driven operating system for wireless sensor nodes. We show that the new platform can operate under extreme resource constraints while providing the desired concurrency, reactivity, and reconfigurability. We also compare benchmark test results against those obtained with TinyOS.
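
    To make the table-driven idea concrete, here is a minimal sketch in Python of a sensor-node task encoded as a bounded state transition table. The event names, stub functions, and table entries are illustrative assumptions, not the operating system's actual API.

        # Illustrative sketch only (hypothetical names, not the platform's real API): a
        # sensor-node task expressed as a bounded state transition table. Each (state,
        # event) pair maps to (action, next_state), so RAM usage is fixed by the table
        # size and event handling stays non-blocking.
        import random

        IDLE, SAMPLING, FUSING, SENDING = range(4)

        def read_adc():                  # stub standing in for a hardware ADC read
            return random.randint(0, 1023)

        def radio_send(packet):          # stub standing in for a radio transmission
            print(f"tx: {packet:.1f}")

        def sample(ctx):
            ctx["readings"].append(read_adc())

        def fuse(ctx):
            ctx["packet"] = sum(ctx["readings"]) / len(ctx["readings"])

        def transmit(ctx):
            radio_send(ctx["packet"])

        TRANSITIONS = {
            (IDLE,     "timer"):    (sample,   SAMPLING),
            (SAMPLING, "adc_done"): (fuse,     FUSING),
            (FUSING,   "fused"):    (transmit, SENDING),
            (SENDING,  "tx_done"):  (None,     IDLE),
        }

        def dispatch(state, event, ctx):
            """Run the bounded transition for (state, event); unknown events are ignored."""
            action, next_state = TRANSITIONS.get((state, event), (None, state))
            if action:
                action(ctx)
            return next_state

        # A short event trace driving one sample-fuse-send cycle.
        ctx, state = {"readings": []}, IDLE
        for event in ["timer", "adc_done", "fused", "tx_done"]:
            state = dispatch(state, event, ctx)

    Because the dispatcher only ever consults entries in the fixed table, worst-case memory and control flow are bounded by the table itself, which is the property the abstract highlights.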

    A novel cutting strategy for measuring two components of residual stresses using slitting method

    Accurate knowledge of the residual stresses within engineering components is essential for maintaining structural integrity. All mechanical strain relief (MSR) techniques for measuring residual stresses rely on removing a section of material that contains residual stresses; they are therefore destructive, as the integrity of the component is compromised. In the slitting method, an MSR technique, a slot of increasing depth is cut into the part incrementally, producing deformations. By measuring these deformations, the residual stress component normal to the cut can be determined. In this work, two orthogonal components of residual stress were measured using the slitting method, both experimentally and numerically. Different levels of residual stress were induced in beam-shaped specimens by quenching at different temperatures, and the experimental results were compared with numerical predictions. It was shown that measuring the first stress component inevitably affected the component normal to the first cut. Finally, a new cutting configuration was proposed in which the two components of residual stress are measured simultaneously. The results of the proposed method agreed well with those of the conventional slitting method.
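
    To illustrate the data-reduction step common to slitting analyses, the sketch below shows a generic compliance-matrix least-squares fit: strains measured at successive slot depths are related to unknown stress coefficients through a calibration matrix (normally computed by finite element analysis). The matrix and readings here are made-up numbers, not results from the paper.

        # Generic illustration of slitting-type data reduction: strain[i] = sum_j C[i, j] * sigma[j],
        # where C is a calibration (compliance) matrix obtained separately, e.g. from an FE model.
        # All numbers below are made up for demonstration.
        import numpy as np

        C = np.array([[1.0, 0.2, 0.0],     # strain response at each slot depth to a unit
                      [0.8, 0.9, 0.1],     # stress in each basis term (hypothetical values)
                      [0.5, 1.1, 0.7],
                      [0.3, 1.0, 1.2]]) * 1e-6

        measured_strain = np.array([55.0, 120.0, 160.0, 175.0]) * 1e-6   # made-up gauge readings

        # Least-squares fit of the stress coefficients (MPa) to the measured strains.
        sigma, *_ = np.linalg.lstsq(C, measured_strain, rcond=None)
        print("fitted stress coefficients [MPa]:", sigma)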

    Efficient In-Network Processing for a Hardware-Heterogeneous IoT

    As the number of small, battery-operated, wireless-enabled devices deployed in Internet of Things (IoT), Wireless Sensor Network (WSN), and Cyber-Physical System (CPS) applications grows rapidly, so does the number of data streams that must be processed. In cases where data do not need to be archived, centrally processed, or federated, in-network data processing is becoming more common. Various platforms, such as DRAGON, Innet, and CJF, have been proposed for this purpose. However, these platforms assume that all nodes in the network are the same, i.e. that the network is homogeneous. As Moore's law still applies, nodes become smaller, more powerful, and more energy-efficient each year, a trend that will continue for the foreseeable future. We can therefore expect that, as sensor networks are extended and updated, hardware heterogeneity will soon be common in networks - the same trend seen in cloud computing infrastructures. This heterogeneity introduces new challenges when choosing an in-network data processing node, as not only its location but also its capabilities must be considered. This paper introduces a new methodology to tackle this challenge, comprising three new algorithms - Request, Traverse, and Mixed - for efficiently locating an in-network data processing node while taking into account both its position within the network and its hardware capabilities. The proposed algorithms are evaluated against a naïve approach and achieve up to a 90% reduction in network traffic during long-term data processing, while spending a similar amount of time in the discovery phase.
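
    The sketch below does not reproduce the Request, Traverse, or Mixed algorithms; it only illustrates the selection criterion the abstract describes, namely scoring candidate nodes by hardware capability as well as network distance rather than by distance alone. The node attributes and weights are invented for the example.

        # Illustrative only (not the paper's Request/Traverse/Mixed algorithms): pick an
        # in-network processing node by weighing both network distance and hardware
        # capability, rather than simply choosing the nearest node.
        from dataclasses import dataclass

        @dataclass
        class Node:
            name: str
            hops: int           # distance from the data source, in hops
            cpu_mhz: int        # simple stand-ins for "hardware capability"
            free_ram_kb: int

        def score(node, w_dist=1.0, w_cap=1.0):
            capability = node.cpu_mhz / 100 + node.free_ram_kb / 64
            return w_cap * capability - w_dist * node.hops   # higher is better

        candidates = [
            Node("mote-a", hops=1, cpu_mhz=48,  free_ram_kb=16),
            Node("mote-b", hops=3, cpu_mhz=480, free_ram_kb=256),
            Node("mote-c", hops=2, cpu_mhz=120, free_ram_kb=64),
        ]

        best = max(candidates, key=score)
        print("selected processing node:", best.name)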

    Unsupervised Feature Selection Based on Matrix Factorization with Redundancy Minimization


    Effect of two supplementary zinc regimens on serum lipids oxidizability in type II diabetic patients

    Background: Chronic complications (e.g. cardiovascular failure) are among the most common problems in diabetics. Oxidative stress and lipid peroxidation are thought to play a key role in chronic diabetic complications, and supplementation with agents that have antioxidant properties can suppress lipid peroxidation. Many studies have confirmed the antioxidant properties of zinc in biological systems. The aim of the present study was to evaluate the effect of zinc supplements on serum lipid oxidizability in diabetic patients. Materials and Methods: In this clinical trial, 60 diabetic patients were chosen and randomly divided into two groups. Serum lipid oxidizability and serum zinc level were evaluated in each group before and after zinc supplementation (25 or 50 mg/day for 2 months). Lipid oxidizability was assessed spectrophotometrically by monitoring the change in conjugated compounds in diluted serum after adding Cu2+. Serum zinc level was measured with an atomic absorption spectrophotometer. Results: While there was no significant change in the post-supplementation zinc level in the first group (25 mg), serum zinc level increased significantly (

    Prediction of blood cancer using leukemia gene expression data and sparsity-based gene selection methods

    Background: DNA microarray is a useful technology that simultaneously assesses the expression of thousands of genes. It can be utilized for the detection of cancer types and cancer biomarkers. This study aimed to predict blood cancer using leukemia gene expression data and a robust l2,p-norm sparsity-based gene selection method. Materials and Methods: In this descriptive study, microarray gene expression data from 72 patients with acute myeloid leukemia (AML) and acute lymphoblastic leukemia (ALL) were used. To remove redundant genes and identify the genes most important for predicting AML and ALL, a robust l2,p-norm (0 < p <= 1) sparsity-based gene selection method was applied, with the parameter p set to 1/4, 1/2, 3/4, and 1. The most important genes were then used by random forest (RF) and support vector machine (SVM) classifiers to predict AML and ALL. Results: The RF and SVM classifiers correctly classified all AML and ALL samples. The RF classifier achieved a performance of 100% using 10 genes selected by the l2,1/2-norm and l2,1-norm sparsity-based gene selection methods. Moreover, the SVM classifier achieved a performance of 100% using 10 genes selected by the l2,1/2-norm method. Seven common genes were identified by all four values of the parameter p in the l2,p-norm method as the most important genes in the classification of AML and ALL, and the gene described as "PRTN3 Proteinase 3 (serine proteinase, neutrophil, Wegener granulomatosis autoantigen)" was identified as the most important gene. Conclusion: The results indicate that blood cancer can be predicted from leukemia microarray gene expression data using the robust l2,p-norm sparsity-based gene selection method together with classification algorithms. Examining the expression levels of the genes identified in this study may be useful for predicting leukemia.
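
    As a rough, self-contained illustration of the idea (synthetic data, and the common l2,1 special case rather than the paper's full robust l2,p-norm method), the sketch below ranks genes by the row norms of a weight matrix obtained from l2,1-regularized least squares solved with the standard iteratively reweighted update, then trains a random forest on the top-ranked genes.

        # Rough illustration on synthetic data (l2,1 special case, not the paper's exact
        # robust l2,p-norm method): solve min ||XW - Y||_F^2 + lam * ||W||_{2,1} by the
        # usual iteratively reweighted update, rank genes by row norms of W, then classify
        # with a random forest on the selected genes.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(0)
        n, d = 72, 200                               # 72 samples, 200 synthetic "genes"
        X = rng.normal(size=(n, d))
        y = (X[:, :5].sum(axis=1) > 0).astype(int)   # labels driven by the first 5 genes
        Y = np.eye(2)[y]                             # one-hot targets for the regression step

        lam, D = 1.0, np.eye(d)
        for _ in range(30):                          # reweighted update: W = (X^T X + lam*D)^-1 X^T Y
            W = np.linalg.solve(X.T @ X + lam * D, X.T @ Y)
            D = np.diag(1.0 / (2.0 * np.linalg.norm(W, axis=1) + 1e-8))

        top_genes = np.argsort(-np.linalg.norm(W, axis=1))[:10]   # 10 highest-scoring genes
        clf = RandomForestClassifier(random_state=0).fit(X[:, top_genes], y)
        print("selected genes:", top_genes)
        print("training accuracy:", clf.score(X[:, top_genes], y))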

    The Cluster-Heads Selection Method considering Energy Balancing for Wireless Sensor Networks

    Wireless sensor networks (WSNs) are self-organizing networks in which sensor nodes with limited resources are scattered over an area of interest to gather information. WSNs need effective node energy management for stable and seamless communication. Clustering has been widely proposed as a good technical solution for reducing energy consumption in WSNs; it can also prevent data duplication among sensor nodes. Generally, to reduce energy consumption as much as possible, cluster heads (CHs) are selected dynamically based on a cluster rotation mechanism. However, a node that has already served as a CH cannot be selected again until the round is over, even if it has more energy than other nodes. As a result, energy consumption becomes unevenly distributed among sensor nodes because of the overhead borne by CHs. To solve this problem, every sensor node should remain a CH candidate without exception, even if it has been chosen before. Therefore, in this paper we propose an energy-balanced CH selection mechanism that distributes the energy consumption of sensor nodes for equal and stable energy management.
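
    A minimal sketch of the energy-balancing idea follows, assuming a LEACH-style probabilistic election in which a node's chance of becoming a cluster head scales with its residual energy, so higher-energy nodes can serve again instead of being excluded for the rest of the round. This illustrates the principle only, not the paper's exact selection mechanism.

        # Minimal sketch (not the paper's exact mechanism): cluster-head election with a
        # probability weighted by residual energy relative to the network average, so nodes
        # with more remaining energy are more likely to be (re)selected.
        import random

        random.seed(1)
        nodes = {f"n{i}": random.uniform(0.2, 1.0) for i in range(20)}   # node -> residual energy (J)
        P_CH = 0.2                                                       # target fraction of CHs per round

        def elect_cluster_heads(energies, p=P_CH):
            avg = sum(energies.values()) / len(energies)
            heads = []
            for node, energy in energies.items():
                # Election threshold scales with the node's energy relative to the average.
                if random.random() < p * (energy / avg):
                    heads.append(node)
            return heads

        print("cluster heads this round:", elect_cluster_heads(nodes))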

    A novel proteochemometrics model for predicting the inhibition of nine carbonic anhydrase isoforms based on supervised Laplacian score and k-nearest neighbour regression

    Carbonic anhydrases (CAs) are essential enzymes in biological processes. The activity of compounds towards CA isoforms can be predicted with computational techniques to discover novel therapeutic inhibitors. Studies such as quantitative structure–activity relationship (QSAR) modelling, molecular docking, and pharmacophore modelling have been carried out to design potent inhibitors. Unfortunately, QSAR does not incorporate information about the target space into the model. We successfully developed an in silico proteochemometrics model that simultaneously uses target and ligand descriptors to predict the activities of CA inhibitors. Herein, a strong predictive model was built for the prediction of protein–ligand binding affinity between nine human CA isoforms and 549 ligands. Protein descriptors were obtained from the PROFEAT webserver, and ligands were encoded by descriptors from the PaDEL-Descriptor software. Supervised Laplacian score (SLS) and particle swarm optimization were used for feature selection, and models were derived using k-nearest neighbour (KNN) regression and a kernel smoother. The predictive ability of the models was evaluated by an external validation test. Statistical results (Q2ext = 0.7806, r2test = 0.7811, and RMSEtest = 0.5549) showed that the model generated using SLS and KNN regression outperformed the other models. Consequently, the selectivity of compounds towards these enzymes can be predicted prior to synthesis.
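
    A minimal proteochemometrics-style sketch on synthetic data is given below: target and ligand descriptors are concatenated into one feature vector per protein–ligand pair, and binding affinity is fitted with k-nearest neighbour regression. It omits the supervised Laplacian score and particle swarm optimization steps described in the abstract, and all descriptor values are randomly generated.

        # Proteochemometrics-style sketch on synthetic data (feature selection from the
        # abstract omitted): concatenate protein and ligand descriptors, then fit binding
        # affinity with k-nearest neighbour regression and check it on a held-out set.
        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.neighbors import KNeighborsRegressor

        rng = np.random.default_rng(0)
        n_pairs, d_prot, d_lig = 500, 30, 50
        protein_desc = rng.normal(size=(n_pairs, d_prot))   # stand-in for PROFEAT-style descriptors
        ligand_desc = rng.normal(size=(n_pairs, d_lig))     # stand-in for PaDEL-style descriptors
        X = np.hstack([protein_desc, ligand_desc])          # one row per protein-ligand pair
        y = X[:, 0] - 0.5 * X[:, d_prot] + rng.normal(scale=0.1, size=n_pairs)   # synthetic affinity

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
        model = KNeighborsRegressor(n_neighbors=5).fit(X_tr, y_tr)
        print("held-out R^2:", round(model.score(X_te, y_te), 3))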