
    Machine-Readable Privacy Certificates for Services

    Privacy-aware processing of personal data on the web of services requires managing a number of issues arising from both the technical and the legal domain. Several approaches have been proposed for matching privacy requirements (on the client side) and privacy guarantees (on the service provider side). Still, the assurance of effective data protection (when possible) relies on substantial human effort and exposes organizations to significant (non-)compliance risks. In this paper we put forward the idea that a privacy certification scheme producing and managing machine-readable artifacts in the form of privacy certificates can play an important role towards the solution of this problem. Digital privacy certificates represent the reasons why a privacy property holds for a service and describe the privacy measures supporting it. Privacy certificates can also be used to automatically select services whose certificates match the client policies (privacy requirements). Our proposal relies on an evolution of the conceptual model developed in the Assert4Soa project and on a certificate format specifically tailored to representing privacy properties. To validate our approach, we present a worked-out instance showing how the privacy property Retention-based unlinkability can be certified for a banking financial service. Comment: 20 pages, 6 figures
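The automatic selection of services whose certificates match client policies could be sketched as a simple matching check. This is an illustrative assumption, not the Assert4Soa certificate format: the attribute names (`property`, `measures`) and the matching rule are hypothetical.

```python
# Hypothetical sketch of matching machine-readable privacy certificates
# against client privacy requirements. Attribute names and the matching
# rule are illustrative, not the Assert4Soa certificate format.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class PrivacyCertificate:
    service: str
    property: str                  # e.g. "retention-based-unlinkability"
    measures: frozenset = field(default_factory=frozenset)

def matches(cert, required_property, required_measures):
    """A certificate satisfies a client policy if it asserts the required
    property and supports every required privacy measure."""
    return (cert.property == required_property
            and required_measures <= cert.measures)

def select_services(certs, required_property, required_measures):
    """Automatically select services whose certificates match the policy."""
    return [c.service for c in certs
            if matches(c, required_property, frozenset(required_measures))]

certs = [
    PrivacyCertificate("bank-svc", "retention-based-unlinkability",
                       frozenset({"data-deletion"})),
    PrivacyCertificate("other-svc", "confidentiality", frozenset()),
]
# Only the banking service certificate matches this client policy.
selected = select_services(certs, "retention-based-unlinkability",
                           {"data-deletion"})  # → ["bank-svc"]
```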

    A Declarative Framework for Specifying and Enforcing Purpose-aware Policies

    Purpose is crucial for privacy protection as it makes users confident that their personal data are processed as intended. Available proposals for the specification and enforcement of purpose-aware policies are unsatisfactory because of their ambiguous semantics of purposes and/or their lack of support for the run-time enforcement of policies. In this paper, we propose a declarative framework based on a first-order temporal logic that allows us to give a precise semantics to purpose-aware policies and to reuse algorithms for the design of a run-time monitor enforcing purpose-aware policies. We also establish the complexity of the generation and use of the monitor, which, to the best of our knowledge, is the first such result in the literature on purpose-aware policies. Comment: Extended version of the paper accepted at the 11th International Workshop on Security and Trust Management (STM 2015)
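The idea of a run-time monitor for purpose-aware policies can be sketched in a few lines. This is a strong simplification of the paper's first-order temporal-logic framework: here each access event simply declares the purpose it serves, and the monitor checks it against the purposes declared for that data item.

```python
# Illustrative run-time monitor for purpose-aware policies: a drastic
# simplification of the paper's temporal-logic framework. Each access
# event carries its declared purpose; the monitor flags violations.
class PurposeMonitor:
    def __init__(self, declared):
        # declared: dict mapping data item -> set of allowed purposes
        self.declared = declared
        self.violations = []

    def observe(self, step, data_item, purpose):
        """Check one access event in the trace; record any violation."""
        allowed = self.declared.get(data_item, set())
        if purpose not in allowed:
            self.violations.append((step, data_item, purpose))
            return False
        return True

monitor = PurposeMonitor({"email": {"billing", "support"}})
monitor.observe(1, "email", "billing")    # permitted use
monitor.observe(2, "email", "marketing")  # violation is recorded
```

A real monitor would additionally track temporal conditions (e.g. "data may be used for purpose p only after consent and before retention expiry"), which is where the first-order temporal logic comes in.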

    Fiscal Multipliers and Public Debt Dynamics in Consolidations

    The success of a consolidation in reducing the debt ratio depends crucially on the value of the multiplier, which measures the impact of consolidation on growth, and on the reaction of sovereign yields to such a consolidation. We present a theoretical framework that formalizes the response of the public debt ratio to fiscal consolidations in relation to the value of fiscal multipliers, the starting debt level and the cyclical elasticity of the budget balance. We also assess the role of market confidence in fiscal consolidations under alternative scenarios. We find that with high levels of public debt and sizeable fiscal multipliers, debt ratios are likely to increase in the short term in response to fiscal consolidations. Hence, the typical horizon for a consolidation during crisis episodes to reduce the debt ratio is two to three years, although this horizon depends critically on the size and persistence of fiscal multipliers and on the reaction of financial markets. In any case, such undesired debt responses are mainly short-lived. This effect is very unlikely in non-crisis times, as it requires a number of conditions that are difficult to observe at the same time, especially high fiscal multipliers.
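The mechanism described above can be sketched with the standard debt-accounting identity. The notation here ($b$ debt ratio, $pb$ primary balance, $i$ interest rate, $g$ growth, $m$ multiplier, $\varepsilon$ cyclical elasticity) is illustrative and not necessarily the paper's own:

```latex
% Standard debt-ratio accounting identity:
b_t = \frac{1+i_t}{1+g_t}\, b_{t-1} - pb_t

% A consolidation of size c lowers growth by mc and, through the
% cyclical elasticity \varepsilon, offsets part of the balance gain:
\Delta g_t = -\,m c, \qquad \Delta pb_t = c\,(1 - \varepsilon m)

% To first order, the impact effect on the debt ratio is
\Delta b_t \approx -\,c\,(1 - \varepsilon m) + b_{t-1}\, m c
           = c\,\bigl(m\,(b_{t-1} + \varepsilon) - 1\bigr)
```

Under this stylized accounting, the debt ratio rises on impact whenever $m\,(b_{t-1} + \varepsilon) > 1$, which is why high initial debt combined with sizeable multipliers makes a short-run increase likely, consistent with the finding stated in the abstract.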

    Modeling performance of Hadoop applications: A journey from queueing networks to stochastic well formed nets

    Nowadays, many enterprises commit to the extraction of actionable knowledge from huge datasets as part of their core business activities. Applications belong to very different domains, such as fraud detection or one-to-one marketing, and encompass business analytics and support to decision making in both the private and public sectors. In these scenarios, a central place is held by the MapReduce framework and in particular its open source implementation, Apache Hadoop. In such environments, new challenges arise in the area of job performance prediction, with the need to provide Service Level Agreement guarantees to the end user and to avoid waste of computational resources. In this paper we provide performance analysis models to estimate MapReduce job execution times in Hadoop clusters governed by the YARN Capacity Scheduler. We propose models of increasing complexity and accuracy, ranging from queueing networks to stochastic well formed nets, able to estimate job performance under a number of scenarios of interest, including unreliable resources. The accuracy of our models is evaluated on the TPC-DS industry benchmark, running experiments on Amazon EC2 and at the CINECA Italian supercomputing center. The results show that the average accuracy we can achieve is in the range 9–14%.
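As a point of reference for what such models estimate, a much coarser back-of-the-envelope approximation of a MapReduce job's makespan counts task "waves" over the available slots. This wave-based sketch is my own illustration of the prediction problem, far simpler than the queueing-network and stochastic well formed net models the paper actually develops:

```python
# Coarse wave-based makespan estimate for a MapReduce job: tasks run in
# waves of size equal to the available slots. This is an illustrative
# baseline, NOT the queueing-network or SWN models of the paper.
import math

def mapreduce_makespan(n_map, n_reduce, map_slots, reduce_slots,
                       avg_map_time, avg_reduce_time):
    """Estimate job time as (map waves x avg map task time) plus
    (reduce waves x avg reduce task time)."""
    map_waves = math.ceil(n_map / map_slots)
    reduce_waves = math.ceil(n_reduce / reduce_slots)
    return map_waves * avg_map_time + reduce_waves * avg_reduce_time

# e.g. 100 map tasks on 20 slots (5 waves) and 10 reduce tasks on
# 10 slots (1 wave):
est = mapreduce_makespan(100, 10, 20, 10,
                         avg_map_time=30.0, avg_reduce_time=60.0)
# est = 5 * 30 + 1 * 60 = 210.0 seconds
```

Such estimates ignore contention, stragglers, and failures, which is precisely what the paper's richer models capture (including the unreliable-resource scenarios).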

    Minimizing disclosure of private information in credential-based interactions : a graph-based approach

    We address the problem of enabling clients to regulate the disclosure of their credentials and properties when interacting with servers in open scenarios. We provide a means for clients to specify the sensitivity of the information in their portfolio at a fine-grained level and to determine the credentials and properties to disclose to satisfy a server request while minimizing the sensitivity of the information disclosed. Exploiting a graph modeling of the problem, we develop a heuristic approach for determining a disclosure that minimizes the released information, offering execution times compatible with the requirements of interactive access to Web resources.
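The flavor of such a heuristic can be illustrated as a greedy weighted set cover over the portfolio: pick the credential with the lowest sensitivity per newly satisfied property until the request is covered. This sketch is an assumption on my part, not the paper's graph-based algorithm.

```python
# Illustrative greedy heuristic for sensitivity-minimizing disclosure,
# phrased as weighted set cover. This is NOT the paper's graph-based
# approach, only a sketch of the optimization it performs.
def min_sensitivity_disclosure(request, credentials):
    """Repeatedly pick the credential with the lowest sensitivity per
    newly covered property.
    credentials: list of (name, sensitivity, set_of_properties)."""
    uncovered = set(request)
    chosen = []
    while uncovered:
        best = min(
            (c for c in credentials if c[2] & uncovered),
            key=lambda c: c[1] / len(c[2] & uncovered),
            default=None,
        )
        if best is None:
            raise ValueError("server request cannot be satisfied")
        chosen.append(best[0])
        uncovered -= best[2]
    return chosen

creds = [("id_card", 3, {"name", "age"}),
         ("passport", 5, {"name", "age", "nationality"})]
# The id card satisfies the request at lower sensitivity (3 vs 5):
disclosure = min_sensitivity_disclosure({"name", "age"}, creds)
# → ["id_card"]
```

Greedy set cover is not optimal in general, which is why a dedicated heuristic with interactive-time performance, as in the paper, is of interest.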

    A specification-based QoS-aware design framework for service-based applications

    Effective and accurate service discovery and composition rely on complete specifications of service behaviour, containing the inputs and preconditions required before service execution; the outputs, effects and ramifications of a successful execution; and explanations for unsuccessful executions. The previously defined Web Service Specification Language (WSSL) relies on the fluent calculus formalism to produce such rich specifications for atomic and composite services. In this work, we propose further extensions that focus on the specification of QoS profiles, as well as partially observable service states. Additionally, a design framework for service-based applications is implemented on top of WSSL, advancing the state of the art as the first service framework to simultaneously provide several desirable capabilities: supporting ramifications and partial observability, as well as non-determinism in composition schemas using heuristic encodings; providing explanations for unexpected behaviour; and offering QoS-awareness through goal-based techniques. These capabilities are illustrated through a comparative evaluation against prominent state-of-the-art approaches based on a typical SBA design scenario.
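QoS-aware selection among candidate services is often reduced to scoring each candidate's QoS profile with a weighted utility. The sketch below is a generic illustration of that idea; the attribute names, weights, and normalization convention are my assumptions, not part of WSSL.

```python
# Generic weighted-utility scoring of QoS profiles, as commonly used in
# QoS-aware service selection. Attributes and weights are illustrative;
# this is not the WSSL framework's actual goal-based technique.
def qos_utility(profile, weights, maximize):
    """Score a QoS profile as a weighted sum. Attributes in `maximize`
    (e.g. availability) count positively; the rest (e.g. latency) count
    as 1 - value. The caller must normalize all values to [0, 1]."""
    score = 0.0
    for attr, w in weights.items():
        v = profile[attr]
        score += w * (v if attr in maximize else 1.0 - v)
    return score

candidates = {
    "svc_a": {"availability": 0.99, "latency": 0.2},
    "svc_b": {"availability": 0.95, "latency": 0.1},
}
weights = {"availability": 0.5, "latency": 0.5}
# svc_b wins: its lower latency outweighs its slightly lower availability.
best = max(candidates,
           key=lambda s: qos_utility(candidates[s], weights, {"availability"}))
```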

    Towards improving adaptability of capability driven development methodology in complex environment

    We are prompted to incorporate adaptability into information system designs and methodologies in response to the complex and unpredictable environment of today and tomorrow, and to the complex adaptive systems they are aimed at. Adaptability as a non-functional requirement is portrayed and investigated from a broad multidisciplinary perspective that influences how dynamic business-IT alignment can be accomplished. The Capability Driven Development (CDD) methodology has supported the delivery of dynamic capabilities by providing a context-aware, self-adaptive platform in the CaaS project implementations, which serve as our case study. Alongside the already incorporated mechanisms and components that enable adaptability, there is room for further evolutionary and deliberate change towards a methodology truly suited to the dynamic reconfiguration of capabilities in organizations and business ecosystems that operate under complexity and uncertainty. The analysis and evaluation of the adaptability of the CDD methodology along three dimensions (complexity of the external and internal environment, managerial profiling, and artifact-integrated components) concludes with starting points towards achieving higher adaptability of the CDD methodology in complex environments.