1,527 research outputs found

    Economic Factors of Vulnerability Trade and Exploitation

    Cybercrime markets support the development and diffusion of new attack technologies, vulnerability exploits, and malware. Whereas the revenue streams of cyber attackers have been studied multiple times in the literature, no quantitative account currently exists of the economics of attack acquisition and deployment. Yet this understanding is critical to characterize the production of (traded) exploits, the economy that drives it, and its effects on the overall attack scenario. In this paper we provide an empirical investigation of the economics of vulnerability exploitation and of the effects of market factors on the likelihood of exploit. Our data is collected first-hand from a prominent Russian cybercrime market where the most active attack tools reported by the security industry are traded. Our findings reveal that exploits in the underground are priced similarly to or above vulnerabilities in legitimate bug-hunting programs, and that the refresh cycle of exploits is slower than often assumed. On the other hand, cybercriminals are becoming faster at introducing selected vulnerabilities, and the market is in clear expansion in terms of players, traded exploits, and exploit pricing. We then evaluate the effects of these market variables on the likelihood of attack realization, and find strong evidence of a correlation between market activity and exploit deployment. We discuss implications for vulnerability metrics, economics, and exploit measurement. Comment: 17 pages, 11 figures, 14 tables

    Towards Realistic Threat Modeling: Attack Commodification, Irrelevant Vulnerabilities, and Unrealistic Assumptions

    Current threat models typically consider all possible ways an attacker can penetrate a system and assign probabilities to each path according to some metric (e.g. time-to-compromise). In this paper we discuss how this view undermines the realism of both technical (e.g. attack graphs) and strategic (e.g. game theory) approaches to current threat modeling, and propose to steer away from it by looking more carefully at attack characteristics and the attacker's environment. We use a toy threat model for ICS attacks to show how a realistic view of attack instances can emerge from a simple analysis of attack phases and attacker limitations. Comment: Proceedings of the 2017 Workshop on Automated Decision Making for Active Cyber Defense

    A numerical method to calculate the muon relaxation function in the presence of diffusion

    We present an accurate and efficient method to calculate the effect of random fluctuations of the local field at the muon, for instance in the case of muon diffusion, within the framework of the strong collision approximation. The method is based on a reformulation of the Markovian process over a discretized time base, leading to a summation equation for the muon polarization function which is solved by discrete Fourier transform. The latter is formally analogous, though not identical, to the integral equation of the original continuous-time model, which is solved by Laplace transform. With real-case parameter values, the solution of the discrete-time strong collision model is found to approximate the continuous-time solution with excellent accuracy even with coarse-grained time sampling. Its calculation by the fast Fourier transform algorithm is very efficient and suitable for real-time fitting of experimental data even on a slow computer. Comment: 7 pages, 3 figures. Submitted to Journal of Physics: Condensed Matter
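    The scheme described above can be sketched in a few lines. The discretization below is our own minimal rectangle-rule version, not the paper's exact reformulation: the collision-free polarization is taken as the static Gaussian Kubo-Toyabe function, the summation (discrete convolution) equation is solved per frequency bin, and zero-padding is used to suppress circular wrap-around in the DFT.

```python
import numpy as np

def kubo_toyabe(t, delta):
    """Static Gaussian Kubo-Toyabe polarization function (zero field)."""
    d2t2 = (delta * t) ** 2
    return 1.0 / 3.0 + 2.0 / 3.0 * (1.0 - d2t2) * np.exp(-d2t2 / 2.0)

def strong_collision_fft(nu, delta, dt, n):
    """Solve a rectangle-rule discretization of the strong-collision equation
        P_k = A_k + nu*dt * sum_{m=0..k} A_m * P_{k-m},
    with A_k = exp(-nu*t_k) * P_static(t_k) and hop rate nu, by discrete
    Fourier transform: P_hat = A_hat / (1 - nu*dt*A_hat)."""
    t = np.arange(n) * dt
    a = np.exp(-nu * t) * kubo_toyabe(t, delta)
    a_hat = np.fft.fft(np.concatenate([a, np.zeros(n)]))  # zero-pad to 2n
    p = np.fft.ifft(a_hat / (1.0 - nu * dt * a_hat)).real
    return t, p[:n]
```

    For nu = 0 this reduces exactly to the static function; for nu > 0 the rectangle rule introduces an O(nu*dt) error at t = 0, consistent with the paper's observation that even coarse time sampling remains accurate.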

    The Effect of Security Education and Expertise on Security Assessments: the Case of Software Vulnerabilities

    In spite of the growing importance of software security and the industry demand for more cyber security expertise in the workforce, the effect of security education and experience on the ability to assess complex software security problems has only recently been investigated. As a proxy for the full range of software security skills, we considered the problem of assessing the severity of software vulnerabilities by means of a structured analysis methodology widely used in industry (the Common Vulnerability Scoring System (CVSS) v3), and designed a study to compare how accurately individuals with a background in information technology but different professional experience and education in cyber security are able to assess the severity of software vulnerabilities. Our results provide some structural insights into the complex relationship between the education or experience of assessors and the quality of their assessments. In particular, we find that individual characteristics matter more than professional experience or formal education; apparently it is the combination of skills one possesses (including actual knowledge of the system under study), rather than specialization or years of experience, that most influences assessment quality. Similarly, we find that the overall advantage conferred by professional expertise depends significantly on the composition of the individual's security skills as well as on the available information. Comment: Presented at the Workshop on the Economics of Information Security (WEIS 2018), Innsbruck, Austria, June 2018
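    The CVSS v3 base score that the assessors above had to produce is itself a deterministic formula over the chosen metric values. A minimal sketch for scope-unchanged vectors, with the metric weights and rounding rule taken from the FIRST CVSS v3.1 specification (the function names are ours):

```python
# CVSS v3.1 metric weights for scope-unchanged vectors (FIRST specification)
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20}   # Attack Vector
AC = {"L": 0.77, "H": 0.44}                          # Attack Complexity
PR = {"N": 0.85, "L": 0.62, "H": 0.27}               # Privileges Required
UI = {"N": 0.85, "R": 0.62}                          # User Interaction
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}               # C/I/A impact

def roundup(x):
    """CVSS v3.1 'Roundup': smallest number to one decimal place >= x,
    using integer arithmetic to avoid floating-point surprises."""
    i = round(x * 100000)
    return i / 100000 if i % 10000 == 0 else (i // 10000 + 1) / 10.0

def base_score(av, ac, pr, ui, c, i, a):
    """Base score for a scope-unchanged CVSS v3.1 vector."""
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = 6.42 * iss
    exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))
```

    For example, the worst-case network vector AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H evaluates to 9.8 (Critical).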

    My Software has a Vulnerability, should I worry?

    Rule-based policies to mitigate software risk suggest using the CVSS score to measure individual vulnerability risk and acting accordingly: a HIGH CVSS score according to the NVD (U.S. National Vulnerability Database) is therefore translated into a "Yes". A key issue is whether such a rule is economically sensible, in particular whether reported vulnerabilities have actually been exploited in the wild, and whether the risk score actually matches the risk of exploitation. We compare the NVD dataset with two additional datasets: EDB for the white market of vulnerabilities (such as those present in Metasploit), and EKITS for exploits traded in the black market. We benchmark them against Symantec's threat explorer dataset (SYM) of actual exploits in the wild. We analyze the whole spectrum of CVSS submetrics and use these characteristics to perform a case-controlled analysis of CVSS scores (similar to those used to link lung cancer and smoking) to test their reliability as a risk factor for actual exploitation. We conclude that (a) fixing just because of a high CVSS score in NVD yields only a negligible risk reduction, (b) the additional existence of proof-of-concept exploits (e.g. in EDB) may yield some additional but not large risk reduction, (c) fixing in response to presence in black markets yields a risk reduction equivalent to wearing a seat belt in a car (you might still die, but...). On the negative side, our study shows that as an industry we lack a metric with high specificity (ruling out vulnerabilities we should not worry about). In order to address feedback from the BlackHat 2013 audience, the final revision (V3) provides additional data in Appendix A detailing how the control variables in the study affect the results. Comment: 12 pages, 4 figures
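    For a single "fix if flagged" policy such as the HIGH-CVSS rule above, the case-control comparison reduces to a 2x2 contingency table. A minimal sketch of the derived quantities (the function name and the illustrative counts below are ours, not the paper's actual data):

```python
def risk_metrics(exploited_flagged, exploited_unflagged,
                 benign_flagged, benign_unflagged):
    """Sensitivity, specificity, baseline risk, and residual risk of a
    'fix if flagged' policy, from 2x2 contingency counts of vulnerabilities
    (exploited-in-the-wild vs. not) against the policy's flag."""
    exploited = exploited_flagged + exploited_unflagged
    benign = benign_flagged + benign_unflagged
    sensitivity = exploited_flagged / exploited        # exploited vulns caught
    specificity = benign_unflagged / benign            # benign vulns ruled out
    baseline_risk = exploited / (exploited + benign)   # risk with no fixing
    # residual risk: exploited fraction among the vulns left unfixed
    residual_risk = exploited_unflagged / (exploited_unflagged + benign_unflagged)
    return sensitivity, specificity, baseline_risk, residual_risk
```

    With hypothetical counts like (90, 10, 600, 300), the flag catches 90% of exploited vulnerabilities (high sensitivity) but also flags two thirds of the benign ones (low specificity), which is exactly the pattern the paper reports: a HIGH score is a weak instrument for deciding what *not* to worry about.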

    A preliminary analysis of vulnerability scores for attacks in wild

    NVD and Exploit-DB are the de facto standard databases used for research on vulnerabilities, and the CVSS score is the standard measure for risk. One open question is whether such databases and scores are actually representative of attacks found in the wild. To address this question we have constructed a database (EKITS) based on the vulnerabilities currently used in exploit kits from the black market, and extracted another database of vulnerabilities from Symantec's Threat Database (SYM). Our final conclusion is that the NVD and EDB databases are not a reliable source of information for exploits in the wild, even after controlling for the CVSS score and exploitability subscore. A high or medium CVSS score shows significant sensitivity (i.e. prediction of attacks in the wild) only for vulnerabilities present in exploit kits (EKITS) in the black market. All datasets exhibit a low specificity.

    MalwareLab: Experimentation with Cybercrime Attack Tools

    Cybercrime attack tools (i.e. exploit kits) are reportedly responsible for the majority of attacks affecting home users. Exploit kits are traded in the black markets at different prices, advertising different capabilities and functionalities. In this paper we present our experimental approach in testing 10 exploit kits leaked from the markets, which we deployed in an isolated environment, our MalwareLab. The purpose of this experiment is to test these tools in terms of resiliency against changing software configurations over time. We present our experiment design and implementation, discuss challenges, lessons learned, and open problems, and present a preliminary analysis of the results

    THz time-domain spectroscopy of mixed CO2–CH3OH interstellar ice analogs

    The icy mantles of interstellar dust grains are the birthplaces of the primordial prebiotic molecular inventory that may eventually seed nascent solar systems and the planets and planetesimals that form therein. Here, we present a study of two of the most abundant species in these ices after water: carbon dioxide (CO2) and methanol (CH3OH), using terahertz (THz) time-domain spectroscopy and mid-infrared spectroscopy. We study pure and mixed ices of these species, and demonstrate the power of the THz region of the spectrum to elucidate the long-range structure (i.e. crystalline versus amorphous) of the ice, the degree of segregation of these species within the ice, and the thermal history of the species within the ice. Finally, we comment on the utility of the THz transitions arising from these ices for use in astronomical observations of interstellar ices

    Critical chain length and superconductivity emergence in oxygen-equalized pairs of YBa2Cu3O6.30

    The oxygen-order dependent emergence of superconductivity in YBa2Cu3O6+x is studied, for the first time in a comparative way, on paired samples having the same oxygen content and thermal history but different Cu(1)Ox chain arrangements deriving from their intercalated or deintercalated nature. Structural and electronic non-equivalence of the paired samples is detected in the critical region and found to be related, on a microscopic scale, to a different average chain length, which, being experimentally determined by nuclear quadrupole resonance (NQR), sheds new light on the concept of a critical chain length for hole-doping efficiency. Comment: 7 RevTex pages, 2 Postscript figures. Submitted to Phys. Rev.