
    Secure management of logs in internet of things

    Ever since the advent of computing, managing data has been of extreme importance. With innumerable devices being added to network infrastructure, there has been a proportionate increase in the data that needs to be stored. With the advent of the Internet of Things (IoT), it is anticipated that billions of devices will be part of the internet within another decade. Since those devices will communicate with each other regularly with little or no human intervention, a plethora of real-time data will be generated quickly, resulting in a large number of log files. Apart from the complexity of storage, it will be mandatory to maintain the confidentiality and integrity of these logs on IoT-enabled devices. This paper provides a brief overview of how logs can be efficiently and securely stored on IoT devices. Comment: 6 pages, 1 table
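
    As a minimal sketch of one common way to get the log integrity this abstract calls for (not necessarily the paper's scheme), each entry below carries an HMAC that chains over the previous entry's MAC, so tampering anywhere breaks the chain. The pre-shared key is a hypothetical placeholder.

```python
# A minimal sketch of tamper-evident log storage, assuming an HMAC key
# has been provisioned on the device; not the paper's specific scheme.
import hashlib
import hmac
import json
import time

SECRET_KEY = b"device-provisioned-key"  # hypothetical pre-shared key

def append_entry(log, message):
    """Append an entry whose MAC chains over the previous entry's MAC,
    so any insertion, deletion, or edit breaks the chain."""
    prev_mac = log[-1]["mac"] if log else ""
    payload = json.dumps({"ts": time.time(), "msg": message, "prev": prev_mac})
    mac = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    log.append({"payload": payload, "mac": mac})
    return log

def verify(log):
    """Recompute the chain; True only if every entry is intact and in order."""
    prev_mac = ""
    for entry in log:
        expected = hmac.new(SECRET_KEY, entry["payload"].encode(),
                            hashlib.sha256).hexdigest()
        if expected != entry["mac"] or json.loads(entry["payload"])["prev"] != prev_mac:
            return False
        prev_mac = entry["mac"]
    return True

log = append_entry([], "sensor boot")
append_entry(log, "temp=21.4C")
print(verify(log))  # True; flipping any stored byte makes this False
```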

    Survey on security issues in file management in cloud computing environment

    Cloud computing has pervaded every aspect of information technology over the past decade. With the advent of cloud networks, it has become easier to process the plethora of data generated by various devices in real time. The privacy of users' data is maintained by data centers around the world, and hence it has become feasible to operate on that data from lightweight portable devices. But with ease of processing comes the security aspect of the data. One such aspect is secure file transfer, either internally within a cloud or externally from one cloud network to another. File management is central to cloud computing, and it is paramount to address the security concerns that arise from it. This survey paper aims to elucidate the various protocols that can be used for secure file transfer and to analyze the ramifications of using each protocol. Comment: 5 pages, 1 table
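
    For concreteness, here is a minimal sketch of one widely used protocol such a survey would cover: uploading a file over SFTP (SSH File Transfer Protocol) using the paramiko library. The host, credentials, and paths are placeholders, and this is an illustration of the protocol class, not a protocol the paper specifically endorses.

```python
# A minimal sketch of an SFTP upload with paramiko; all connection
# details are hypothetical placeholders.
import paramiko

def sftp_upload(host, user, password, local_path, remote_path):
    """Open an SSH session, then push a file over the encrypted channel."""
    client = paramiko.SSHClient()
    client.load_system_host_keys()
    client.set_missing_host_key_policy(paramiko.RejectPolicy())  # refuse unknown hosts
    client.connect(host, username=user, password=password)
    try:
        sftp = client.open_sftp()
        sftp.put(local_path, remote_path)  # data is encrypted in transit by SSH
        sftp.close()
    finally:
        client.close()

# sftp_upload("cloud.example.com", "alice", "s3cret", "report.csv", "/data/report.csv")
```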

    Weightless: Lossy Weight Encoding For Deep Neural Network Compression

    The large memory requirements of deep neural networks limit their deployment and adoption on many devices. Model compression methods effectively reduce the memory requirements of these models, usually by applying transformations such as weight pruning or quantization. In this paper, we present a novel scheme for lossy weight encoding which complements conventional compression techniques. The encoding is based on the Bloomier filter, a probabilistic data structure that can save space at the cost of introducing random errors. Leveraging the ability of neural networks to tolerate these imperfections and by re-training around the errors, the proposed technique, Weightless, can compress DNN weights by up to 496x with the same model accuracy. This results in up to a 1.51x improvement over the state of the art.
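
    As a point of reference for the conventional transformations the abstract says its encoding complements, here is a minimal sketch of magnitude pruning followed by scalar quantization of a weight matrix. This is illustrative only; it is not the paper's Bloomier-filter encoding, and the sparsity and codebook sizes are arbitrary.

```python
# A minimal sketch of conventional compression (pruning + quantization);
# not the Bloomier-filter scheme from the paper.
import numpy as np

def prune_and_quantize(weights, sparsity=0.9, n_clusters=16):
    """Zero the smallest-magnitude weights, then snap the survivors to a
    small codebook so each weight is stored as a short cluster index."""
    flat = weights.ravel().copy()
    threshold = np.quantile(np.abs(flat), sparsity)
    flat[np.abs(flat) < threshold] = 0.0                  # pruning step

    nonzero = flat[flat != 0]
    # Codebook from evenly spaced quantiles of the surviving weights.
    codebook = np.quantile(nonzero, np.linspace(0, 1, n_clusters))
    idx = np.abs(nonzero[:, None] - codebook[None, :]).argmin(axis=1)
    flat[flat != 0] = codebook[idx]                       # quantization step
    return flat.reshape(weights.shape), codebook

rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=(256, 256)).astype(np.float32)
w_hat, codebook = prune_and_quantize(w)
print(f"nonzero: {np.count_nonzero(w_hat)}, codebook size: {codebook.size}")
```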

    GPT-InvestAR: Enhancing Stock Investment Strategies through Annual Report Analysis with Large Language Models

    Annual reports of publicly listed companies contain vital information about their financial health, which can help assess the potential impact on the firm's stock price. These reports are comprehensive, often reaching and sometimes exceeding 100 pages. Analysing these reports is cumbersome even for a single firm, let alone the whole universe of firms. Over the years, financial experts have become proficient in extracting valuable information from these documents relatively quickly, but this requires years of practice and experience. This paper aims to simplify the process of assessing the annual reports of all firms by leveraging the capabilities of Large Language Models (LLMs). The insights generated by the LLM are compiled in a quant-style dataset and augmented with historical stock price data. A machine learning model is then trained with the LLM outputs as features. The walk-forward test results show promising outperformance with respect to S&P 500 returns. This paper intends to provide a framework for future work in this direction; to facilitate this, the code has been released as open source.
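
    To make the evaluation style concrete, here is a minimal sketch of a walk-forward test: the model is always trained on chronologically earlier folds and scored on the next one, never the reverse. The feature and return arrays are synthetic placeholders standing in for the paper's LLM-derived features, and the Ridge model is an arbitrary choice, not the paper's.

```python
# A minimal walk-forward evaluation sketch; data and model are
# illustrative placeholders, not the paper's pipeline.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import TimeSeriesSplit

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 8))            # stand-in for LLM-derived features
y = X @ rng.normal(size=8) * 0.01 + rng.normal(scale=0.02, size=500)  # returns

scores = []
for train_idx, test_idx in TimeSeriesSplit(n_splits=5).split(X):
    model = Ridge(alpha=1.0).fit(X[train_idx], y[train_idx])
    # Score only on the chronologically later, out-of-sample fold.
    scores.append(model.score(X[test_idx], y[test_idx]))

print([round(s, 3) for s in scores])
```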

    Microkinetic Modeling of Complex Reaction Networks Using Automated Network Generation

    University of Minnesota Ph.D. dissertation. April 2018. Major: Chemical Engineering. Advisors: Prodromos Daoutidis, Aditya Bhan. 1 computer file (PDF); xiv, 193 pages. Complex reaction networks are found in a variety of engineered and natural chemical systems, ranging from petroleum processing to atmospheric chemistry, and including biomass conversion, materials synthesis, metabolism, and biological degradation of chemicals. These systems comprise several thousand reactions and species interrelated through a highly interconnected network. Such networks can be constructed automatically from a small set of initial reactants and chemical transformation rules. Detailed kinetic modeling of these complex reaction systems is becoming increasingly important in the development, analysis, design, and control of chemical reaction processes. The key challenges faced in developing a kinetic model for complex reaction systems are (1) multi-time-scale behavior due to the presence of fast and slow reactions, which introduces stiffness in the system, (2) the lack of lumping schemes that scale well with the large size of the network, and (3) the unavailability of accurate reaction rate constants (activation energies and pre-exponential factors). Model simplification and order-reduction methods involving lumping, sensitivity analysis, and time-scale analysis address the challenges of size and stiffness. Although numerical methods exist for simulating large-scale, stiff models, using such models in optimization-based tasks (e.g. parameter estimation, control) results in ill-conditioning of the corresponding optimization task. This research presents methods, computational tools, and applications to address the two challenges that emerge in developing microkinetic models of complex reaction networks in the context of chemical and biochemical conversion: (a) identifying the different time scales within the reaction system irrespective of the chemistry, and (b) identifying lumping and parameterization schemes to address the computational challenge of parameter estimation. The first question arises due to the simultaneous presence of fast and slow reactions within the system; the second is directly related to estimating the reaction rate constants, which are unknown for these networks. Addressing these questions is a key step towards the modeling, design, operation, and control of reactors involving complex systems. In this context, this thesis presents methods to address the computational challenges in developing microkinetic models for complex reaction networks. Rule Input Network Generator (RING), a network-generation computational tool, is used for network generation and analysis. First, stiffness is addressed with the implementation of a graph-theoretic framework. Second, lumping and parameterization schemes are studied to address the size challenge of these reaction networks. A particular lumping and parameterization scheme is used to develop the microkinetic model for an olefin interconversion reaction system. Further, RING is extended to the generation and analysis of biochemical reaction networks.
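
    The stiffness challenge the abstract describes can be seen in a toy two-reaction network with widely separated rate constants, sketched below with an implicit (BDF) solver suited to stiff systems. The rate constants are illustrative only, not taken from the dissertation.

```python
# A minimal stiff-kinetics sketch: fast A->B, slow B->C, six orders of
# magnitude apart in rate; constants are illustrative placeholders.
import numpy as np
from scipy.integrate import solve_ivp

k_fast, k_slow = 1e6, 1.0

def rhs(t, c):
    a, b, c_ = c
    r1 = k_fast * a          # fast reaction consumes A almost instantly
    r2 = k_slow * b          # slow reaction sets the long time scale
    return [-r1, r1 - r2, r2]

sol = solve_ivp(rhs, (0.0, 5.0), [1.0, 0.0, 0.0], method="BDF", rtol=1e-8)
print(f"steps taken: {sol.t.size}, final concentrations: {sol.y[:, -1].round(4)}")
# An explicit solver (method="RK45") would need step sizes ~1/k_fast
# throughout, even long after A is exhausted; BDF does not.
```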

    A Feasibility Study of Distributed Spectrum Sensing using Mobile Devices

    Given the exponential increase in mobile data traffic, there is a growing fear of an impending spectrum crunch. Shared use of the so-called ‘Whitespace’ spectrum offers a solution. Whitespace spectra are those that are already licensed for specific use but are understood to be ill-utilized over space and time; examples include spectra used by terrestrial TV broadcast or various radars. Shared use of such Whitespace spectra requires that incumbent licensed devices (primaries) have priority. One straightforward mechanism to detect the existence of primary transmissions is spectrum sensing. We envision a spectrum sensing model where mobile devices have built-in spectrum sensing capabilities and upload such data to a cloud-based server that in turn builds a spatial map of spectrum occupancy. This enables spatially granular data collection, for example via crowd-sourcing. However, such mobile device-based spectrum sensing is challenging, as mobile devices are generally resource constrained while spectrum sensing is energy- and computation-intensive. In this thesis, we perform a measurement study to understand the general feasibility of the mobile sensing approach. For the experiments, we use several low-power software radio platforms that are powered over USB so that they can be interfaced to a mobile phone/tablet class device with appropriate USB support. We develop software/hardware testbeds to carry out latency and energy measurements for mobile-based spectrum sensing on such platforms for a suite of well-known sensing algorithms operating in the TV white space. We describe the setup and discuss insights gained from these measurements, along with possible optimizations such as the use of pipelining or of a GPU. Finally, we demonstrate end-to-end operation by showcasing a system comprising i) a number of distributed mobile spectrum sensors, ii) an indoor small cell comprising an access point and a client device (secondaries) operating in the TV band, and iii) a cloud-based spectrum server that builds a spectrum map from the collected sensing data and instructs/allows secondaries to operate in multiple non-interfering Whitespace channels based on availability at the location. 84 pages
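
    Of the well-known sensing algorithms such a study would benchmark, the simplest is energy detection, sketched below: compare the average power of received samples against a noise-floor threshold. The signal, noise model, and threshold are synthetic placeholders, not the thesis's measured configuration.

```python
# A minimal energy-detection sketch over synthetic complex baseband
# samples; all values are illustrative placeholders.
import numpy as np

def energy_detect(samples, noise_power, threshold_db=3.0):
    """Declare the channel occupied if the measured energy exceeds the
    noise floor by threshold_db decibels."""
    energy = np.mean(np.abs(samples) ** 2)
    return 10 * np.log10(energy / noise_power) > threshold_db

rng = np.random.default_rng(7)
n = 4096
noise = (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)  # unit-power noise
tone = 1.5 * np.exp(2j * np.pi * 0.1 * np.arange(n))                 # primary signal

print(energy_detect(noise, noise_power=1.0))         # expected: False (vacant)
print(energy_detect(noise + tone, noise_power=1.0))  # expected: True (occupied)
```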

    Analyzing Disparity and Temporal Progression of Internet Quality through Crowdsourced Measurements with Bias-Correction

    Crowdsourced speedtest measurements are an important tool for studying internet performance from the end-user perspective. Nevertheless, despite the accuracy of individual measurements, simplistic aggregation of these data points is problematic due to their intrinsic sampling bias. In this work, we utilize a dataset of nearly 1 million individual Ookla Speedtest measurements, correlate each data point with 2019 Census demographic data, and develop new methods for a novel analysis that quantifies regional sampling bias and the relationship of internet performance to demographic profile. We find that the crowdsourced Ookla Speedtest data points contain significant sampling bias across different census block groups, based on a statistical test of homogeneity. We introduce two methods to correct the regional bias by the population of each census block group. While the sampling bias leads to a discrepancy in a city's overall cumulative distribution function of internet speed between the estimate from the original samples and the bias-corrected estimate, this discrepancy is small compared to the size of the sampling heterogeneity across regions. Further, we show that the sampling bias is strongly associated with a few demographic variables, such as income, education level, age, and ethnic distribution. Through regression analysis, we find that regions with higher income, younger populations, and lower representation of Hispanic residents tend to measure faster internet speeds, along with substantial collinearity amongst socioeconomic attributes and ethnic composition. Finally, we find that average internet speed increases over time based on both linear and nonlinear analysis from state space models, though the regional sampling bias may result in a small overestimation of this temporal increase.
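
    One plausible form of the population-based correction the abstract describes is post-stratification: reweight each block group's samples by its population share rather than by how many volunteers happened to run a test there. The sketch below uses synthetic data and is an assumption about the general technique, not the paper's exact method.

```python
# A minimal post-stratification sketch; data are synthetic placeholders.
import numpy as np
import pandas as pd

samples = pd.DataFrame({
    "block_group": ["A", "A", "A", "A", "B"],     # A is heavily oversampled
    "speed_mbps":  [220.0, 180.0, 250.0, 210.0, 40.0],
})
population = {"A": 1000, "B": 3000}               # census block-group populations

# Weight each sample so every block group counts in proportion to its
# population, not its (biased) number of volunteered measurements.
counts = samples["block_group"].map(samples["block_group"].value_counts())
weights = samples["block_group"].map(population) / counts

naive = samples["speed_mbps"].mean()
corrected = np.average(samples["speed_mbps"], weights=weights)
print(f"naive mean: {naive:.1f} Mbps, population-weighted: {corrected:.1f} Mbps")
# The oversampled fast region A inflates the naive mean (180.0 vs 83.8).
```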

    Serum Homocysteine as a Risk Factor for Stroke: A Prospective Study from a Rural Tertiary Care Centre

    Objective: Stroke is one of the leading causes of mortality and long-term disability in both developed and developing countries. Serum homocysteine level is one of the emerging modifiable risk factors for atherosclerosis, which may result in a cerebrovascular accident. This study was designed to examine the association of serum homocysteine level with the development of acute stroke at a rural tertiary care centre in North India. Methods: The present study was a prospective cross-sectional study conducted in the Department of Medicine, Maharishi Markandeshwar Institute of Medical Sciences and Research, Mullana, Ambala. The study population included 100 patients presenting with stroke (either ischemic or hemorrhagic) in the indoor and outdoor facilities of the Department of Medicine; 50 age- and sex-matched healthy individuals were taken as controls. Serum total homocysteine level was measured in all cases and controls. Results: The majority of patients suffered ischemic stroke (78%), while 22% had hemorrhagic stroke. The mean serum homocysteine level in stroke patients (19.88±8.78 μmol/l) was significantly higher than in controls (10.48±4.39 μmol/l) (p<0.01). In a subgroup analysis, stroke patients with a positive history of smoking had significantly higher homocysteine levels than non-smokers (p<0.05). Conclusion: An increased level of serum homocysteine is significantly associated with the risk of cerebrovascular accident, independent of the risk attributed to traditional risk factors.
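
    The reported group comparison can be reproduced from the abstract's own summary statistics with a two-sample t-test, sketched below. Whether the authors used exactly this variant (Welch's rather than a pooled-variance test) is an assumption.

```python
# A minimal check of the reported p<0.01 from the abstract's summary
# statistics; the choice of Welch's t-test is an assumption.
from scipy.stats import ttest_ind_from_stats

t_stat, p_value = ttest_ind_from_stats(
    mean1=19.88, std1=8.78, nobs1=100,   # stroke patients
    mean2=10.48, std2=4.39, nobs2=50,    # healthy controls
    equal_var=False,                     # Welch's t-test
)
print(f"t = {t_stat:.2f}, p = {p_value:.2e}")  # p comes out well below 0.01
```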