
    Techniques for large-scale automatic detection of web site defacements.

    Web site defacement, the process of introducing unauthorized modifications to a web site, is a very common form of attack. This thesis describes the design and experimental evaluation of a framework that may constitute the basis for a defacement detection service capable of monitoring thousands of remote web sites systematically and automatically. With this framework an organization may join the service by simply providing the URL of the resource to be monitored along with the contact point of an administrator. The monitored organization may thus take advantage of the service with just a few mouse clicks, without installing any software locally or changing its own daily operational processes. The main proposed approach is based on anomaly detection and allows monitoring the integrity of many remote web resources automatically while remaining fully decoupled from them, in particular, without requiring any prior knowledge about those resources. During a preliminary learning phase a profile of the monitored resource is built automatically. Then, while monitoring, the remote resource is retrieved periodically and an alert is generated whenever something "unusual" shows up. The thesis discusses the effectiveness of the approach in terms of accuracy of detection, i.e., missed detections and false alarms. The thesis also considers the problem of misclassified readings in the learning set. The effectiveness of the anomaly detection approach, and hence of the proposed framework, rests on the assumption that the profile is computed from a learning set which is not corrupted by attacks; this assumption is often taken for granted. The influence of learning set corruption on the effectiveness of the framework is assessed, and a procedure aimed at discovering when a given unknown learning set is corrupted by positive readings is proposed and evaluated experimentally. An approach to automatic defacement detection based on Genetic Programming (GP), an automatic method for creating computer programs by means of artificial evolution, is also proposed and evaluated experimentally. Moreover, a set of techniques that have been used in the literature for designing host-based or network-based Intrusion Detection Systems is considered and evaluated experimentally, in comparison with the proposed approach. Finally, the thesis presents the findings of a large-scale study on reaction time to web site defacement. Several statistics indicate the number of incidents of this sort, but a crucial piece of information is still lacking: the typical duration of a defacement. A two-month monitoring activity was performed over more than 62000 defacements in order to determine whether and when a reaction to the defacement is taken. It is shown that the reaction time tends to be unacceptably long, in the order of several days, and with a long-tailed distribution
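
    As an illustration of the learn-then-monitor scheme described above, the Python sketch below builds a statistical profile from benign snapshots and flags readings that deviate from it. The features (page length, link and script counts) are assumptions chosen for brevity, not the sensors actually used in the thesis.

        import statistics

        def features(page_html):
            # Toy feature vector: page size, link count, script count
            # (hypothetical sensors, for illustration only).
            return [
                float(len(page_html)),
                float(page_html.count("<a ")),
                float(page_html.count("<script")),
            ]

        def build_profile(learning_readings):
            # Learning phase: per-feature mean and standard deviation.
            columns = zip(*(features(r) for r in learning_readings))
            return [(statistics.mean(c), statistics.stdev(c) or 1.0) for c in columns]

        def is_anomalous(profile, page_html, threshold=3.0):
            # Monitoring phase: alert when any feature's z-score exceeds the threshold.
            return any(
                abs(x - mu) / sigma > threshold
                for x, (mu, sigma) in zip(features(page_html), profile)
            )

        snapshots = ["<html><a href=x>home</a></html>"] * 5
        profile = build_profile(snapshots)
        print(is_anomalous(profile, "<html><script>deface()</script></html>"))  # True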

    Exploring the Usage of Topic Modeling for Android Malware Static Analysis

    The rapid growth in smartphone and tablet usage in recent years has led to the inevitable rise in targeting of these devices by cyber-criminals. The exponential growth of Android devices, and the buoyant and largely unregulated Android app market, produced a sharp rise in malware targeting that platform. Furthermore, malware writers have been developing detection-evasion techniques which rapidly make anti-malware technologies ineffective. It is hence advisable that security experts be provided with tools which can aid them in the analysis of existing and new Android malware. In this paper, we explore the use of topic modeling as a technique which can assist experts in analysing malware applications in order to discover their characteristics. We apply Latent Dirichlet Allocation (LDA) to mobile applications represented as opcode sequences, hence considering a topic as a discrete distribution over opcodes. Our experiments on a dataset of 900 malware applications of different families show that the information provided by topic modeling may help in better understanding malware characteristics and similarities
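
    A minimal sketch of this pipeline with scikit-learn, assuming toy opcode sequences in place of real disassembled Android apps:

        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.decomposition import LatentDirichletAllocation

        # Each "document" is an app represented as its opcode sequence
        # (toy sequences; real ones come from disassembled dex code).
        apps = [
            "move const invoke-virtual move-result goto",
            "const const invoke-static move-result return-void",
            "iget iput invoke-virtual if-eqz goto",
            "iget if-eqz iput invoke-direct return-void",
        ]

        vectorizer = CountVectorizer(token_pattern=r"[\w\-]+")
        X = vectorizer.fit_transform(apps)

        # A topic is a discrete distribution over opcodes.
        lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

        opcodes = vectorizer.get_feature_names_out()
        for k, topic in enumerate(lda.components_):
            top = topic.argsort()[::-1][:3]
            print(f"topic {k}:", [opcodes[i] for i in top])
        print(lda.transform(X))  # per-app topic mixtures, usable for similarity analysis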

    Learning Text Patterns using Separate-and-Conquer Genetic Programming

    The problem of extracting knowledge from large volumes of unstructured textual information has become increasingly important. We consider the problem of extracting text slices that adhere to a syntactic pattern and propose an approach capable of generating the desired pattern automatically, from a few annotated examples. Our approach is based on Genetic Programming and generates extraction patterns in the form of regular expressions that may be input to existing engines without any post-processing. A key feature of our proposal is its ability to discover automatically whether the extraction task may be solved by a single pattern, or whether a set of multiple patterns is required. We obtain this property by means of a separate-and-conquer strategy: once a candidate pattern provides adequate performance on a subset of the examples, the pattern is inserted into the set of final solutions and the evolutionary search continues on a smaller set of examples including only those not yet solved adequately. Our proposal outperforms an earlier state-of-the-art approach on three challenging datasets
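
    The separate-and-conquer loop itself can be sketched in a few lines of Python. The pattern inducer below (runs of digits become \d+, runs of word characters become \w+) is a deliberately crude stand-in for the GP search of the paper; the point is the outer strategy, which keeps a pattern that solves part of the examples and continues on the rest.

        import re

        def induce_pattern(example):
            # Crude generalizer standing in for the evolved pattern:
            # runs of digits -> \d+, runs of word chars -> \w+.
            out, prev = [], None
            for ch in example:
                cls = r"\d+" if ch.isdigit() else (r"\w+" if ch.isalnum() else re.escape(ch))
                if cls != prev:
                    out.append(cls)
                    prev = cls
            return "".join(out)

        def separate_and_conquer(examples):
            # Once a pattern solves a subset of the examples, keep it and
            # continue the search on the examples not yet solved.
            patterns, remaining = [], list(examples)
            while remaining:
                candidate = induce_pattern(remaining[0])
                solved = {e for e in remaining if re.fullmatch(candidate, e)}
                patterns.append(candidate)
                remaining = [e for e in remaining if e not in solved]
            return patterns

        print(separate_and_conquer(["2024-01-02", "1999-12-31", "foo@example.org"]))
        # -> one date pattern plus one email pattern, not a single merged pattern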

    Syntactical Similarity Learning by Means of Grammatical Evolution

    Several research efforts have shown that a similarity function synthesized from examples may capture an application-specific similarity criterion in a way that fits the application needs more effectively than a generic distance definition. In this work, we propose a similarity learning algorithm tailored to problems of syntax-based entity extraction from unstructured text streams. The algorithm takes as input pairs of strings, along with an indication of whether or not they adhere to the same syntactic pattern. Our approach is based on Grammatical Evolution and systematically explores a space of similarity definitions including all functions that may be expressed with a specialized, simple language that we have defined for this purpose. We assessed our proposal on patterns representative of practical applications. The results suggest that the proposed approach is indeed feasible and that the learned similarity function is more effective than the Levenshtein distance and the Jaccard similarity index
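
    For context, the two baseline measures named above are easy to state in code. The sketch below implements the Levenshtein distance and a Jaccard index over character trigrams (the trigram choice is an assumption made here for illustration); note how a generic measure can score two strings following the same date pattern as quite dissimilar.

        def levenshtein(a, b):
            # Classic dynamic-programming edit distance, row by row.
            prev = list(range(len(b) + 1))
            for i, ca in enumerate(a, 1):
                curr = [i]
                for j, cb in enumerate(b, 1):
                    curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + (ca != cb)))
                prev = curr
            return prev[-1]

        def jaccard_trigrams(a, b):
            # Jaccard index over sets of character trigrams.
            grams = lambda s: {s[i:i + 3] for i in range(max(len(s) - 2, 1))}
            ga, gb = grams(a), grams(b)
            return len(ga & gb) / len(ga | gb) if ga | gb else 1.0

        print(levenshtein("2016-05-17", "2016/05/17"))       # 2
        print(jaccard_trigrams("2016-05-17", "2016/05/17"))  # ~0.14, despite same pattern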

    On the Automatic Construction of Regular Expressions from Examples (GP vs. Humans 1-0)

    Regular expressions are systematically used in a number of different application domains. Writing a regular expression for solving a specific task is usually quite difficult, requiring significant technical skills and creativity. We have developed a tool based on Genetic Programming capable of constructing regular expressions for text extraction automatically, based on examples of the text to be extracted. We have recently demonstrated that our tool is human-competitive in terms of both accuracy of the regular expressions and time required for their construction. We base this claim on a large-scale experiment involving more than 1700 users on 10 text extraction tasks of realistic complexity. The F-measure of the expressions constructed by our tool was almost always higher than the average F-measure of the expressions constructed by each of the three categories of users involved in our experiment (Novice, Intermediate, Experienced). The time required by our tool was almost always smaller than the average time required by each of the three categories of users. The experiment is described in full detail in "Can a Machine Replace Humans in Building Regular Expressions? A Case Study" (IEEE Intelligent Systems, 2016)
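
    The F-measure used in the comparison is the standard harmonic mean of precision and recall over the extracted text slices; a worked example with illustrative values:

        def f_measure(extracted, expected):
            # Precision: fraction of extracted slices that are correct.
            # Recall: fraction of expected slices that were extracted.
            tp = len(extracted & expected)
            precision = tp / len(extracted) if extracted else 0.0
            recall = tp / len(expected) if expected else 0.0
            return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

        expected = {"10.0.0.1", "192.168.1.1", "8.8.8.8"}     # ground truth
        extracted = {"10.0.0.1", "192.168.1.1", "256.1.1.1"}  # tool output
        print(round(f_measure(extracted, expected), 3))       # 2/3 precision, 2/3 recall -> 0.667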

    An Author Verification Approach Based on Differential Features

    We describe the approach that we submitted to the 2015 PAN competition for the author identification task. The task consists in determining whether an unknown document was written by the same author as a given set of documents, all of which were authored by a single person. We propose a machine learning approach based on a number of different features that characterize documents from widely different points of view. We construct non-overlapping groups of homogeneous features, use a random forest regressor for each feature group, and combine the outputs of all regressors by their arithmetic mean. We train a different regressor for each language. Our approach achieved the first position in the final rank for the Spanish language
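
    A minimal sketch of this ensemble with scikit-learn; the feature-group names, sizes and targets below are synthetic placeholders, not the features of the paper.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(0)
        n = 200
        groups = {                      # non-overlapping homogeneous feature groups
            "char_ngrams": rng.random((n, 5)),
            "word_lengths": rng.random((n, 3)),
            "punctuation": rng.random((n, 4)),
        }
        y = rng.random(n)               # 1.0 = same author, 0.0 = different author

        # One random forest regressor per feature group.
        regressors = {g: RandomForestRegressor(random_state=0).fit(X, y)
                      for g, X in groups.items()}

        def predict(sample):
            # Combine the per-group outputs by their arithmetic mean.
            return float(np.mean([regressors[g].predict(x) for g, x in sample.items()]))

        sample = {g: X[:1] for g, X in groups.items()}  # one questioned document
        print(predict(sample))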

    Automatic Synthesis of Regular Expressions from Examples

    We propose a system for the automatic generation of regular expressions for text-extraction tasks. The user describes the desired task only by means of a set of labeled examples. The generated regexes may be used with common engines such as those that are part of Java, PHP, Perl and so on. Usage of the system does not require any familiarity with regular expression syntax. We performed an extensive experimental evaluation on 12 different extraction tasks applied to real-world datasets. We obtained very good results in terms of precision and recall, even in comparison to earlier state-of-the-art proposals. Our results are highly promising toward the achievement of a practical surrogate for the specific skills required for generating regular expressions, and significant as a demonstration of what can be achieved with GP-based approaches on modern IT technology

    Your Paper has been Accepted, Rejected, or Whatever: Automatic Generation of Scientific Paper Reviews

    Peer review is widely viewed as an essential step for ensuring scientific quality of a work and is a cornerstone of scholarly publishing. On the other hand, the actors involved in the publishing process are often driven by incentives which may, and increasingly do, undermine the quality of published work, especially in the presence of unethical conduct. In this work we investigate the feasibility of a tool capable of generating fake reviews for a given scientific paper automatically. While a tool of this kind cannot possibly deceive any rigorous editorial procedure, it could nevertheless find a role in several questionable scenarios and magnify the scale of scholarly frauds. A key feature of our tool is that it is built upon a small knowledge base, which is very important in our context due to the difficulty of finding large amounts of scientific reviews. We experimentally assessed our method with 16 human subjects. We presented to these subjects a mix of genuine and machine-generated reviews and we measured the ability of our proposal to actually deceive the subjects' judgment. The results highlight the ability of our method to produce reviews that often look credible and may subvert the decision
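
    The flavour of text generation from a small knowledge base can be conveyed with a toy word-bigram Markov chain; this is purely illustrative and is not the authors' actual generator.

        import random
        from collections import defaultdict

        # Word-bigram chain built from a tiny corpus of review sentences
        # (the "knowledge base"); sampling then stitches review-like text.
        corpus = ("the paper is well written but the evaluation is weak . "
                  "the paper addresses an interesting problem but the novelty is limited . "
                  "the evaluation is convincing and the paper is easy to follow .")

        chain = defaultdict(list)
        words = corpus.split()
        for w1, w2 in zip(words, words[1:]):
            chain[w1].append(w2)

        random.seed(4)
        w, out = "the", ["the"]
        while w != "." and len(out) < 20:
            w = random.choice(chain[w])
            out.append(w)
        print(" ".join(out))  # a plausible-sounding, machine-stitched "review"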

    Can a Machine Replace Humans in Building Regular Expressions? A Case Study

    Regular expressions are routinely used in a variety of different application domains. But building a regular expression involves a considerable amount of skill, expertise, and creativity. In this work, the authors investigate whether a machine can surrogate these qualities and automatically construct regular expressions for tasks of realistic complexity. They discuss a large-scale experiment involving more than 1,700 users on 10 challenging tasks. The authors compare the solutions constructed by these users to those constructed by a tool based on genetic programming that they recently developed and made publicly available. The quality of automatically constructed solutions turned out to be similar to the quality of those constructed by the most skilled user group; the time for automatic construction was likewise similar to the time required by human users

    Regex-based Entity Extraction with Active Learning and Genetic Programming

    We consider the long-standing problem of the automatic generation of regular expressions for text extraction, based solely on examples of the desired behavior. We investigate several active learning approaches in which the user annotates only one desired extraction and then merely answers extraction queries generated by the system. The resulting framework is attractive because it is the system, not the user, which digs out the data in search of the samples most suitable to the specific learning task. We tailor our proposals to a state-of-the-art learner based on Genetic Programming and we assess them experimentally on a number of challenging tasks of realistic complexity. The results indicate that active learning is indeed a viable framework in this application domain and may thus significantly decrease the amount of costly annotation effort required
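
    The query loop at the core of this framework can be sketched as follows. The "learner" here just picks from a fixed pool of candidate regexes (a stand-in for the GP search), and the user's answers are simulated, but the protocol (one initial annotation, then yes/no answers to system-generated extraction queries) is the one described above.

        import re

        # Candidate pool stands in for the GP search space; TEXT and the
        # simulated user are toy assumptions.
        CANDIDATES = [r"\d+", r"\d+\.\d+\.\d+\.\d+", r"[a-z]+@[a-z]+\.[a-z]+"]
        TEXT = "ping 10.0.0.1 then mail root@host.com, retry 3 times from 10.0.0.2"

        def best_pattern(pos, neg):
            # Return the first candidate consistent with all answers so far.
            for p in CANDIDATES:
                matches = set(re.findall(p, TEXT))
                if pos <= matches and not (neg & matches):
                    return p
            return None

        pos, neg = {"10.0.0.1"}, set()  # the single annotation the user provides
        while True:
            p = best_pattern(pos, neg)
            if p is None:
                break
            unlabeled = set(re.findall(p, TEXT)) - pos - neg
            if not unlabeled:
                break
            query = unlabeled.pop()  # system asks: is this a desired extraction?
            is_yes = re.fullmatch(r"\d+\.\d+\.\d+\.\d+", query)  # simulated user
            (pos if is_yes else neg).add(query)
        print("learned pattern:", p)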