4,813 research outputs found

    On the Effect of Semantically Enriched Context Models on Software Modularization

    Many of the existing approaches for program comprehension rely on the linguistic information found in source code, such as identifier names and comments. Semantic clustering is one such technique for modularization of a system; it relies on the informal semantics of the program, encoded in the vocabulary used in the source code. Treating the source code as a plain collection of tokens, however, loses the semantic information embedded within the identifiers. We try to overcome this problem by introducing context models for source code identifiers to obtain a semantic kernel, which can be used both for deriving the topics that run through the system and for clustering it. In the first model, we abstract an identifier to its type representation and build on this notion of context to construct a contextual vector representation of the source code. The second notion of context is defined by the flow of data between identifiers: a module is represented as a dependency graph whose nodes correspond to identifiers and whose edges represent the data dependencies between pairs of identifiers. We have applied our approach to 10 medium-sized open source Java projects and show that introducing contexts for identifiers improves the quality of the modularization of the software systems. Both context models give results that are superior to the plain vector representation of documents; in some cases, the authoritativeness of decompositions is improved by 67%. Furthermore, a more detailed evaluation of our approach on JEdit, an open source editor, demonstrates that topics inferred from the contextual representations are more meaningful than those inferred from the plain representation of the documents.
The proposed approach of introducing a context model for source code identifiers paves the way for building tools that support developers in program comprehension tasks such as application and domain concept location, software modularization, and topic analysis.
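The second context model described above (a dependency graph over identifiers) can be sketched as follows. This is a minimal illustration, not the paper's implementation; the identifier names and data-flow pairs are hypothetical:

```python
from collections import defaultdict

def build_context_graph(dependencies):
    """Build a dependency graph where nodes are identifiers and a
    directed edge (a, b) means data flows from identifier a to b."""
    graph = defaultdict(set)
    for src, dst in dependencies:
        graph[src].add(dst)
    return graph

def context_of(graph, identifier):
    """The context of an identifier: the identifiers it feeds into
    plus the identifiers that feed into it."""
    successors = graph.get(identifier, set())
    predecessors = {n for n, targets in graph.items() if identifier in targets}
    return successors | predecessors

# Illustrative data-flow pairs from a hypothetical module
deps = [("rawInput", "parsedRecord"),
        ("parsedRecord", "accountBalance"),
        ("interestRate", "accountBalance")]

g = build_context_graph(deps)
print(sorted(context_of(g, "accountBalance")))  # → ['interestRate', 'parsedRecord']
```

A graph like this gives each identifier a neighborhood of related identifiers, which can then feed a vector or kernel representation of the module instead of treating its tokens as an unordered bag.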

    The role of chief risk officer in adoption and implementation of enterprise risk management-A literature review

    Recently, many companies have come to view risk management from a holistic perspective instead of a silo-based one. This holistic approach is called Enterprise Risk Management (ERM). ERM is designed to assess the ability of the board of directors and senior management to manage the total portfolio of risks faced by an enterprise. Based on the relevant literature, the Chief Risk Officer (CRO) is one important factor that may influence companies in deciding whether to adopt ERM. The role of the CRO is to work with other managers to set up an effective and efficient risk management system and to disseminate risk information across the entire enterprise. The main purpose of this paper is to provide a comprehensive overview of the influence of the CRO on the adoption and implementation of ERM. It was found that the presence and quality of a CRO are important determinants of ERM adoption and implementation. This research also clarifies that there is a lack of research on the effect of the CRO on ERM implementation in developing countries. This study is useful for companies that want to adopt ERM or to improve the stage and level of ERM implementation.

    Why do you take that route?

    The purpose of this paper is to determine whether a particular context factor, among the variables that a researcher is interested in, causally affects the route choice behavior of drivers. To our knowledge, there is limited literature that considers the effects of various factors on route choice based on causal inference. Moreover, collecting data sets that are sensitive to the aforementioned factors is challenging, and the existing approaches usually take into account only the general factors motivating drivers' route choice behavior. To fill these gaps, we carried out a study using Immersive Virtual Environment (IVE) tools to elicit drivers' route choice behavioral data, covering drivers' network familiarity, education level, financial concern, etc., in addition to conventional measurement variables. Having context-aware, high-fidelity properties, IVE data affords the opportunity to incorporate the impact of human-related factors into route choice causal analysis and advances a more customizable research tool for investigating causal factors in path selection in network routing. This causal analysis provides quantitative evidence to support drivers' diversion decisions. Comment: 7 pages, 3 figures

    An extension of analytical methods for building damage evaluation in subsidence regions to anisotropic beams

    Ore and mineral extraction by underground mining often causes ground subsidence phenomena and may induce severe damage to buildings. Analytical methods based on the Timoshenko beam theory are widely used to assess building damage in subsidence regions. These methods are used to develop abacuses that allow damage assessment in relation to the ground curvature and the horizontal ground strain transmitted to the building. These abacuses are currently developed for buildings of equivalent length and height, and they assume that buildings can be modelled by a beam with isotropic properties, while many authors suggest that anisotropic properties would be more representative. This paper presents an extension of the analytical methods to transversely anisotropic beams. Results are first validated against finite element models. Then, 72 abacuses are developed for a large set of geometries and mechanical properties.
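As a reminder of the isotropic model that the abstract's extension builds on, the standard Timoshenko beam relations couple bending and shear deformation. The symbols below are the conventional ones, not taken from the paper:

```latex
% Standard Timoshenko beam constitutive relations (isotropic case):
% w  : transverse deflection        \varphi : cross-section rotation
% EI : bending stiffness            \kappa G A : shear stiffness
M = EI \,\frac{d\varphi}{dx},
\qquad
Q = \kappa G A \left( \frac{dw}{dx} - \varphi \right)
```

In a transversely anisotropic extension, the Young's modulus E and the shear modulus G are independent material constants, so the ratio E/G (and hence the relative weight of shear deformation) can be varied rather than fixed by isotropy.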

    Interpretation of Natural Language Rules in Conversational Machine Reading

    Most work in machine reading focuses on question answering problems where the answer is directly expressed in the text to read. However, many real-world question answering problems require the reading of text not because it contains the literal answer, but because it contains a recipe to derive an answer together with the reader's background knowledge. One example is the task of interpreting regulations to answer "Can I...?" or "Do I have to...?" questions such as "I am working in Canada. Do I have to carry on paying UK National Insurance?" after reading a UK government website about this topic. This task requires both the interpretation of rules and the application of background knowledge. It is further complicated by the fact that, in practice, most questions are underspecified, and a human assistant will regularly have to ask clarification questions such as "How long have you been working abroad?" when the answer cannot be directly derived from the question and text. In this paper, we formalise this task and develop a crowd-sourcing strategy to collect 32k task instances based on real-world rules and crowd-generated questions and scenarios. We analyse the challenges of this task and assess its difficulty by evaluating the performance of rule-based and machine-learning baselines. We observe promising results when no background knowledge is necessary, and substantial room for improvement whenever background knowledge is needed. Comment: EMNLP 201