Unsupervised Anomaly-based Malware Detection using Hardware Features
Recent works have shown promise in using microarchitectural execution
patterns to detect malware programs. These detectors belong to a class of
detectors known as signature-based detectors as they catch malware by comparing
a program's execution pattern (signature) to execution patterns of known
malware programs. In this work, we propose a new class of detectors -
anomaly-based hardware malware detectors - that do not require signatures for
malware detection, and thus can catch a wider range of malware including
potentially novel ones. We use unsupervised machine learning to build profiles
of normal program execution based on data from performance counters, and use
these profiles to detect significant deviations in program behavior that occur
as a result of malware exploitation. We show that real-world exploitation of
popular programs such as IE and Adobe PDF Reader on a Windows/x86 platform can
be detected with nearly perfect certainty. We also examine the limits and
challenges in implementing this approach in the face of a sophisticated adversary
attempting to evade anomaly-based detection. The proposed detector is
complementary to previously proposed signature-based detectors and can be used
together to improve security.
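A minimal sketch of the idea in this abstract, assuming a one-class SVM over performance-counter features; the abstract does not name a specific model, so the model choice, counter names, and scikit-learn usage below are illustrative assumptions rather than the authors' design.

    import numpy as np
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import OneClassSVM

    # Each row holds hardware performance counters sampled over one epoch of
    # normal (clean) program execution, e.g. branch misses, cache misses, loads.
    normal_epochs = np.random.rand(500, 4)  # placeholder for real counter data

    # Learn a profile of normal behavior without any malware labels.
    scaler = StandardScaler().fit(normal_epochs)
    profile = OneClassSVM(nu=0.01, kernel="rbf").fit(scaler.transform(normal_epochs))

    def is_anomalous(epoch_counters):
        """Flag an epoch whose counters deviate significantly from the profile."""
        x = scaler.transform(np.asarray(epoch_counters, dtype=float).reshape(1, -1))
        return profile.predict(x)[0] == -1  # -1 = outlier under the one-class SVM

At run time, epochs flagged as outliers would indicate a significant deviation from the learned baseline, which the paper attributes to malware exploitation.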
Applied constant gain amplification in circulating loop experiments
The reconfiguration of channel or wavelength routes in optically transparent mesh networks can lead to deviations in channel power that may impact transmission performance. A new experimental approach, applied constant gain, is used to maintain constant gain in a circulating loop, enabling the study of gain error effects on long-haul transmission under reconfigured channel loading. Using this technique, we examine a number of channel configurations and system tuning operations for both full-span dispersion-compensated and optimized dispersion-managed systems. For each system design, large power divergence was observed, with a maximum of 15 dB at 2240 km, when switching was implemented without additional system tuning. For a bit error rate of 10^-3, the maximum number of loop circulations was reduced by up to 33%
Rapid Parallelization by Collaboration
The widespread adoption of Chip Multiprocessors has renewed the emphasis on the use of parallelism to improve performance. The present and growing diversity in hardware architectures and software environments, however, continues to pose difficulties in the effective use of parallelism, thus delaying a quick and smooth transition to the concurrency era. In this document, we describe the research being conducted at the Computer Science Department at Columbia University on a system called COMPASS that aims to simplify this transition by providing advice to programmers considering parallelizing their code. The advice proffered to the programmer is based on the wisdom collected from programmers who have already parallelized some code. The utility of COMPASS rests not only on its ability to collect the wisdom unintrusively but also on its ability to automatically seek, find and synthesize this wisdom into advice that is tailored to the code the user is considering parallelizing and to the environment in which the optimized program will execute. COMPASS provides a platform and an extensible framework for sharing human expertise about code parallelization -- widely and on diverse hardware and software. By leveraging the "wisdom of crowds" model, which has been conjectured to scale exponentially and which has successfully worked for wikis, COMPASS aims to enable rapid parallelization of code and thus continue to extend the benefits of Moore's law scaling to science and society
COMPASS: A Community-driven Parallelization Advisor for Sequential Software
The widespread adoption of multicores has renewed the emphasis on the use of parallelism to improve performance. The present and growing diversity in hardware architectures and software environments, however, continues to pose difficulties in the effective use of parallelism, thus delaying a quick and smooth transition to the concurrency era. In this paper, we describe the research being conducted at Columbia University on a system called COMPASS that aims to simplify this transition by providing advice to programmers while they reengineer their code for parallelism. The advice proffered to the programmer is based on the wisdom collected from programmers who have already parallelized some similar code. The utility of COMPASS rests not only on its ability to collect the wisdom unintrusively but also on its ability to automatically seek, find and synthesize this wisdom into advice that is tailored to the task at hand, i.e., the code the user is considering parallelizing and the environment in which the optimized program is planned to execute. COMPASS provides a platform and an extensible framework for sharing human expertise about code parallelization — widely, and on diverse hardware and software. By leveraging the "wisdom of crowds" model, which has been conjectured to scale exponentially and which has successfully worked for wikis, COMPASS aims to enable rapid propagation of knowledge about code parallelization in the context of the actual parallelization reengineering, and thus continue to extend the benefits of Moore's law scaling to science and society
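As a toy illustration only: the abstract does not describe COMPASS's matching or synthesis machinery, so the similarity measure and advice records below are hypothetical, but they convey the retrieval idea of reusing advice attached to already-parallelized code that resembles the user's.

    import difflib

    # Hypothetical "wisdom" records: (previously parallelized fragment, advice).
    advice_db = [
        ("for i in range(n): out[i] = f(data[i])",
         "Iterations are independent; consider a parallel-for or a worker pool."),
        ("while queue: item = queue.pop(); process(item)",
         "Use a concurrent work queue and guard shared state with a lock."),
    ]

    def suggest(fragment):
        """Return the advice attached to the most similar known fragment."""
        best = max(advice_db,
                   key=lambda rec: difflib.SequenceMatcher(None, fragment, rec[0]).ratio())
        return best[1]

    print(suggest("for i in range(len(xs)): ys[i] = g(xs[i])"))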
Self-monitoring Monitors
Many different monitoring systems have been created to identify system state conditions in order to detect or prevent a myriad of deliberate attacks, or the arbitrary faults inherent in any complex system. Monitoring systems are themselves vulnerable to attack. A stealthy attacker can simply turn off or disable these monitoring systems without being detected, and would thus be able to perpetrate the very attacks that these systems were designed to stop. For instance, virus attacks against antivirus scanners have appeared in the wild. In this paper, we present a novel technique to "monitor the monitors" in such a way that (a) unauthorized shutdowns of critical monitors are detected with high probability, (b) authorized shutdowns raise no alarm, and (c) the proper shutdown sequence for authorized shutdowns cannot be inferred from reading memory. The techniques proposed to prevent unauthorized shutdown (turning off) of monitoring systems were inspired by the duality with safety technology devised to prevent unauthorized discharge (turning on) of nuclear weapons
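A minimal sketch of the shutdown property described above, not the paper's mechanism: a watchdog expects heartbeats from a monitor, and an authorized shutdown must present a token whose hash matches a stored digest, so only the digest, rather than the token itself, needs to sit in memory. The timeout and token handling are illustrative assumptions.

    import hashlib
    import time

    # Only the digest of the shutdown token is kept in memory; in practice it
    # would be provisioned out of band rather than derived here. (Illustrative.)
    SHUTDOWN_DIGEST = hashlib.sha256(b"operator-supplied-secret").hexdigest()

    class Watchdog:
        def __init__(self, timeout=2.0):
            self.timeout = timeout
            self.last_beat = time.monotonic()
            self.authorized_stop = False

        def heartbeat(self):
            # Called periodically by the monitored monitor while it is running.
            self.last_beat = time.monotonic()

        def request_shutdown(self, token):
            # Authorized only if the presented token hashes to the stored digest.
            if hashlib.sha256(token).hexdigest() == SHUTDOWN_DIGEST:
                self.authorized_stop = True

        def check(self):
            # Missed heartbeats without an authorized shutdown raise an alarm.
            if self.authorized_stop:
                return "monitor stopped (authorized)"
            if time.monotonic() - self.last_beat > self.timeout:
                return "ALARM: monitor silenced without authorization"
            return "monitor alive"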
Increasing Tetrahydrobiopterin in Cardiomyocytes Adversely Affects Cardiac Redox State and Mitochondrial Function Independently of Changes in NO Production
Tetrahydrobiopterin (BH4) represents a potential strategy for the treatment of cardiac remodeling, fibrosis and/or diastolic dysfunction. The effects of oral treatment with BH4 (Sapropterin™ or Kuvan™) are, however, dose-limiting, with high doses negating functional improvements. Cardiomyocyte-specific overexpression of GTP cyclohydrolase I (mGCH) increases BH4 several-fold in the heart. Using this model, we aimed to establish the cardiomyocyte-specific responses to high levels of BH4. Quantification of BH4 and BH2 in mGCH transgenic hearts showed age-based variations in BH4:BH2 ratios. Hearts of mice (<6 months) have lower BH4:BH2 ratios than hearts of older mice, while both GTPCH activity and tissue ascorbate levels were higher in hearts of young than of older mice. No evident changes in nitric oxide (NO) production, assessed by nitrite and endogenous iron–nitrosyl complexes, were detected in any of the age groups. Increased BH4 production in cardiomyocytes resulted in a significant loss of mitochondrial function. Diminished oxygen consumption and reserve capacity were verified in mitochondria isolated from hearts of 12-month-old compared to 3-month-old mice, even though at 12 months an improved BH4:BH2 ratio is established. Accumulation of 4-hydroxynonenal (4-HNE) and decreased glutathione levels were found in the mGCH hearts and isolated mitochondria. Taken together, our results indicate that the BH4:BH2 ratio does not predict changes in either NO levels or cellular redox state in the heart. BH4 oxidation essentially limits the capacity of cardiomyocytes to reduce oxidant stress. Cardiomyocytes with chronically high levels of BH4 show a significant decline in redox state and mitochondrial function
Dynamic circulating-loop methods for transmission experiments in optically transparent networks
Recent experiments incorporating multiple fast switching elements and automated system configuration in a circulating loop apparatus have enabled the study of aspects of long-haul WDM transmission unique to optically transparent networks. Techniques include per-span switching to measure the performance limits due to dispersion compensation granularity and mesh network walk-off, and applied constant-gain amplification to evaluate wavelength reconfiguration penalties
Concurrency Attacks
Just as errors in sequential programs can lead to security exploits, errors in concurrent programs can lead to concurrency attacks. Questions such as whether these attacks are real and what characteristics they have remain largely unknown. In this paper, we present a preliminary study of concurrency attacks and the security implications of real concurrency errors. Our study yields several interesting findings. For instance, we observe that the exploitability of a concurrency error depends on the duration of the timing window within which the error may occur. We further observe that attackers can increase this window through carefully crafted inputs. We also find that four out of five commonly used sequential defense mechanisms become unsafe when applied to concurrent programs. Based on our findings, we propose new defense directions and fixes to existing defenses
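A self-contained toy example of the timing-window observation above (our illustration, not one of the paper's case studies): an unsynchronized check-then-act sequence races across threads, and input-dependent work between the check and the act widens the window in which the race can be won.

    import threading
    import time

    balance = 100
    window = 0.001  # stands in for input-dependent work between check and act

    def withdraw(amount):
        global balance
        if balance >= amount:      # check
            time.sleep(window)     # attacker-widened window (e.g. larger inputs)
            balance -= amount      # act, with no lock: the concurrency error

    threads = [threading.Thread(target=withdraw, args=(100,)) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print("final balance:", balance)  # can reach -100 when both checks pass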
