145 research outputs found

    A UI-centric Approach for the End-User Development of Multidevice Mashups

    In recent years, models, composition paradigms, and tools for mashup development have been proposed to support the integration of information sources, services, and APIs available on the Web. The challenge is to provide a gate to a “programmable Web,” where end users can easily construct composite applications that merge content and functions so as to satisfy the long tail of their specific needs. The approaches proposed so far do not fully accommodate this vision. This article therefore proposes a mashup development framework oriented toward End-User Development. Given the fundamental role of user interfaces (UIs) as a medium easily understandable by end users, the proposed approach is characterized by UI-centric models that support a WYSIWYG (What You See Is What You Get) specification of data integration and service orchestration. It therefore contributes to the definition of adequate abstractions that, by hiding technology and implementation complexity, can be adopted by end users in a kind of “democratic” paradigm for mashup development. The article also shows how model-to-code generative techniques translate models into application schemas, which in turn guide the dynamic instantiation of the composite applications at runtime. This is achieved through lightweight execution environments that can be deployed on the Web and on mobile devices to support the pervasive use of the created applications.
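    A minimal sketch of the pipeline described above (declarative UI-centric model, model-to-code generation of an application schema, and runtime instantiation), under assumed model and schema formats that are purely illustrative and not the paper's actual framework:

```python
# Illustrative sketch only: a declarative, UI-centric mashup model is turned
# into an application "schema" that a lightweight runtime then instantiates.
# The model format, component types, and URLs are assumptions.

from dataclasses import dataclass
from typing import Dict, List

# Toy UI-centric model: components bound to services, plus event couplings.
mashup_model = {
    "components": [
        {"id": "map", "type": "MapWidget", "service": "https://example.org/pois"},
        {"id": "list", "type": "ListWidget", "service": "https://example.org/events"},
    ],
    "couplings": [
        {"from": "map", "event": "itemSelected", "to": "list", "operation": "filter"},
    ],
}

@dataclass
class ComponentSchema:
    id: str
    widget_type: str
    service_url: str

def generate_schema(model: Dict) -> Dict:
    """'Model-to-code' step: derive an application schema from the model."""
    return {
        "components": [ComponentSchema(c["id"], c["type"], c["service"])
                       for c in model["components"]],
        "couplings": list(model["couplings"]),
    }

def instantiate(schema: Dict) -> List[str]:
    """Runtime step: a lightweight engine walks the schema and builds the app."""
    created = [f"created {c.widget_type} '{c.id}' bound to {c.service_url}"
               for c in schema["components"]]
    created += [f"wired {w['from']}.{w['event']} -> {w['to']}.{w['operation']}"
                for w in schema["couplings"]]
    return created

if __name__ == "__main__":
    for step in instantiate(generate_schema(mashup_model)):
        print(step)
```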

    Context-aware Data Quality Assessment for Big Data

    Big data has changed the way in which we collect and analyze data. In particular, the amount of available information is constantly growing, and organizations rely more and more on data analysis in order to achieve their competitive advantage. However, such an amount of data can create real value only if combined with quality: good decisions and actions are the result of correct, reliable, and complete data. In such a scenario, methods and techniques for data quality assessment can support the identification of suitable data to process. While numerous assessment methods have been proposed for traditional databases, in the big data scenario new algorithms have to be designed in order to deal with novel requirements related to variety, volume, and velocity issues. In particular, in this paper we highlight that dealing with heterogeneous sources requires an adaptive approach able to trigger the suitable quality assessment methods on the basis of the data type and the context in which the data are to be used. Furthermore, we show that in some situations it is not possible to evaluate the quality of the entire dataset due to performance and time constraints. For this reason, we suggest focusing the data quality assessment on only a portion of the dataset and taking into account the consequent loss of accuracy by introducing a confidence factor as a measure of the reliability of the quality assessment procedure. We propose a methodology to build a data quality adapter module, which selects the best configuration for the data quality assessment based on the user's main requirements: time minimization, confidence maximization, and budget minimization. Experiments are performed by considering real data gathered from a smart city case study.
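    A minimal sketch of the idea of assessing quality on a sample of the data and attaching a confidence factor, not the paper's actual algorithm; the sampling rate, the completeness metric, and the confidence formula are illustrative assumptions:

```python
# Illustrative sketch: evaluate a quality dimension (completeness) on a random
# sample instead of the full dataset, and report a confidence factor that grows
# with the sampled fraction.

import random
from typing import Dict, List, Tuple

def completeness(records: List[Dict], field: str) -> float:
    """Fraction of records with a non-missing value for `field`."""
    if not records:
        return 0.0
    present = sum(1 for r in records if r.get(field) not in (None, ""))
    return present / len(records)

def sampled_assessment(records: List[Dict], field: str,
                       sample_fraction: float) -> Tuple[float, float]:
    """Return (quality score on the sample, confidence factor).

    The confidence factor here is simply the sampled fraction of the dataset;
    the paper's actual reliability measure may differ.
    """
    n = max(1, int(len(records) * sample_fraction))
    sample = random.sample(records, n)
    return completeness(sample, field), n / len(records)

if __name__ == "__main__":
    data = [{"temp": 21.5}, {"temp": None}, {"temp": 19.0},
            {"temp": 22.1}, {"temp": ""}, {"temp": 20.3}] * 1000  # mock sensor data
    score, conf = sampled_assessment(data, "temp", sample_fraction=0.1)
    print(f"estimated completeness={score:.2f} (confidence={conf:.2f})")
```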

    A utility-based model to define the optimal data quality level in IT service offerings

    In the information age, enterprises base or enrich their core business activities with the provision of informative services. For this reason, organizations are becoming increasingly aware of data quality issues, which concern the evaluation of the ability of a data collection to meet users’ needs. Data quality is a multidimensional and subjective issue, since it is defined by a variety of criteria whose definition and evaluation are strictly dependent on the context and the users involved. Thus, when considering data quality, the users’ perspective should always be considered fundamental. Authors in the data quality literature agree that providers should adapt, and consequently improve, their service offerings in order to completely satisfy users’ demands. However, we argue that, in service provisioning, providers are subject to restrictions stemming, for instance, from cost and benefit assessments. Therefore, we identify the need for a conciliation of providers’ and users’ quality targets in defining the optimal data quality level of an informative service. The definition of such an equilibrium is a complex issue, since each type of user accessing the service may define different utilities regarding the provided information. Considering this scenario, the paper presents a utility-based model of the providers’ and customers’ interests developed on the basis of multi-class offerings. The model is exploited to analyze the optimal service offerings that allow the efficient allocation of quality improvement activities for the provider.
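    A minimal sketch of the kind of trade-off such a utility-based model captures, under assumed functional forms (linear per-class utilities, quadratic improvement cost) that are illustrative and not the paper's model:

```python
# Illustrative sketch: choose the data quality level q in [0, 1] that maximizes
# total user utility minus the provider's improvement cost. Class weights,
# valuations, and the cost function are assumptions.

from typing import Dict

def net_utility(q: float, classes: Dict[str, Dict], cost_coeff: float) -> float:
    """Total user utility minus provider improvement cost at quality level q."""
    user_utility = sum(c["weight"] * c["value"] * q for c in classes.values())
    provider_cost = cost_coeff * q ** 2  # convex cost of raising quality
    return user_utility - provider_cost

def optimal_quality(classes: Dict[str, Dict], cost_coeff: float,
                    step: float = 0.01) -> float:
    """Grid search for the quality level with maximal net utility."""
    levels = [i * step for i in range(int(1 / step) + 1)]
    return max(levels, key=lambda q: net_utility(q, classes, cost_coeff))

if __name__ == "__main__":
    user_classes = {
        "premium": {"weight": 0.3, "value": 10.0},  # values quality highly
        "basic": {"weight": 0.7, "value": 3.0},
    }
    print(f"optimal quality level: {optimal_quality(user_classes, cost_coeff=4.0):.2f}")
```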

    A Survey on Service Quality Description

    Quality of service (QoS) can be a critical element for achieving the business goals of a service provider, for the acceptance of a service by the user, or for guaranteeing service characteristics in a composition of services, where a service is defined as either a software or a software-support (i.e., infrastructural) service that is available on any type of network or electronic channel. The goal of this article is to compare the approaches to QoS description in the literature, where several models and metamodels are included. We consider a large spectrum of models and metamodels to describe service quality, ranging from ontological approaches to define quality measures, metrics, and dimensions, to metamodels enabling the specification of quality-based service requirements and capabilities as well as of SLAs (Service-Level Agreements) and SLA templates for service provisioning. Our survey is performed by inspecting the characteristics of the available approaches to reveal which are the consolidated ones and which are specific to given aspects, and to analyze where the need for further research and investigation lies. The approaches illustrated here have been selected based on a systematic review of conference proceedings and journals spanning various research areas in computer science.
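    As a rough illustration of the artifacts such metamodels describe (quality metrics, requirements, and SLA templates), here is a tiny in-code representation; field names are assumptions and do not correspond to any specific metamodel in the survey:

```python
# Illustrative sketch only: minimal structures for a QoS metric, a quality
# requirement, and an SLA template.

from dataclasses import dataclass, field
from typing import List

@dataclass
class QualityMetric:
    name: str          # e.g. "responseTime"
    unit: str          # e.g. "ms"
    direction: str     # "lower_is_better" or "higher_is_better"

@dataclass
class QualityRequirement:
    metric: QualityMetric
    threshold: float   # bound the provider must respect

@dataclass
class SLATemplate:
    service_name: str
    guarantees: List[QualityRequirement] = field(default_factory=list)

if __name__ == "__main__":
    latency = QualityMetric("responseTime", "ms", "lower_is_better")
    sla = SLATemplate("weatherService", [QualityRequirement(latency, 200.0)])
    print(sla)
```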

    An ontology-driven topic mapping approach to multi-level management of e-learning resources

    An appropriate use of various pedagogical strategies is fundamental for the effective transfer of knowledge in a flourishing e-learning environment. The resulting information superfluity, however, needs to be tackled in order to develop sustainable e-learning. This necessitates an effective representation of, and intelligent access to, learning resources. Topic maps address these problems of representation and retrieval of information in a distributed environment: the former aspect is particularly relevant where the subject domain is complex, and the latter aspect is important where resources are abundant but not easily accessible. Moreover, effective presentation of learning resources based on various pedagogical strategies, along with global capturing and authentication of learning resources, is an intrinsic part of effective management of learning resources. Towards fulfilling this objective, this paper proposes a multi-level ontology-driven topic mapping approach to facilitate effective visualization, classification, and global authoring of learning resources in e-learning.
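    A minimal sketch of the general Topic Maps idea referenced above (topics, associations between them, and occurrences linking to resources); this illustrates the generic model, not the paper's multi-level ontology:

```python
# Illustrative sketch: topics with occurrences (resource URLs) and typed
# associations between topics.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Topic:
    name: str
    occurrences: List[str] = field(default_factory=list)  # URLs of resources

@dataclass
class Association:
    kind: str       # e.g. "prerequisite-of", "part-of"
    source: Topic
    target: Topic

if __name__ == "__main__":
    algebra = Topic("Linear Algebra", ["https://example.org/lectures/la01.pdf"])
    ml = Topic("Machine Learning", ["https://example.org/lectures/ml01.pdf"])
    for a in [Association("prerequisite-of", algebra, ml)]:
        print(f"{a.source.name} --{a.kind}--> {a.target.name}")
```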

    Information logistics and fog computing: The DITAS approach

    Data-intensive applications are usually developed based on Cloud resources, whose service delivery model helps towards building reliable and scalable solutions. However, especially in the context of Internet of Things-based applications, Cloud Computing comes with some limitations: data generated at the edge of the network are processed at the core of the network, producing security, privacy, and latency issues. On the other hand, Fog Computing is emerging as an extension of Cloud Computing, where resources located at the edge of the network are used in combination with cloud services. The goal of this paper is to present the approach adopted in the recently started DITAS project: the design of a Cloud platform is proposed to optimize the development of data-intensive applications by providing information logistics tools that are able to deliver information and computation resources at the right time and place, with the right quality. Applications developed with DITAS tools live in a Fog Computing environment, where data move from the cloud to the edge and vice versa to provide secure, reliable, and scalable solutions with excellent performance.
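    As a rough illustration of the information-logistics decision sketched above, here is a naive edge-versus-cloud placement rule; it is not the DITAS platform, and the task fields and thresholds are assumptions:

```python
# Illustrative sketch: decide whether a data-processing task should run at the
# edge or in the cloud, based on latency sensitivity and privacy constraints.

from dataclasses import dataclass

@dataclass
class Task:
    name: str
    max_latency_ms: float      # how quickly results are needed
    privacy_sensitive: bool    # must raw data stay near its source?
    cpu_heavy: bool            # does it need large-scale compute?

def place(task: Task, cloud_latency_ms: float = 120) -> str:
    """Return 'edge' or 'cloud' for the task under a simple rule of thumb."""
    if task.privacy_sensitive or task.max_latency_ms < cloud_latency_ms:
        return "edge"
    return "cloud" if task.cpu_heavy else "edge"

if __name__ == "__main__":
    tasks = [
        Task("anonymize-health-records", 500, True, False),
        Task("train-prediction-model", 10_000, False, True),
        Task("alert-on-sensor-spike", 50, False, False),
    ]
    for t in tasks:
        print(f"{t.name}: {place(t)}")
```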

    Limitations of Weighted Sum Measures for Information Quality

    In an age dominated by information, information quality (IQ) is one of the most important factors to consider for obtaining competitive advantages. The general approach to the study of IQ has relied heavily on management approaches, IQ frameworks, and dimensions. Many IQ measures have been proposed; however, in most frameworks the dimensions are analyzed and assessed independently. Approaches to aggregating values have been discussed, and most research suggests estimating the overall quality of information by summing all weighted dimension scores. In this paper, we review the suitability of this assessment approach. In our research we focus on IQ dependencies and trade-offs, and we aim to demonstrate by means of an experiment that IQ dimensions are dependent. Based on our result of dependent IQ dimensions, we discuss implications for IQ improvement. Further research can build on our observations.
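    A minimal sketch of the weighted-sum aggregation the paper critiques; the dimension names, scores, and weights are illustrative, and the paper's point is precisely that such independent aggregation ignores dependencies between dimensions:

```python
# Illustrative sketch: overall IQ computed as the sum of weighted dimension
# scores, with weights normalized to 1.

from typing import Dict

def weighted_sum_iq(scores: Dict[str, float], weights: Dict[str, float]) -> float:
    """Overall IQ = sum over dimensions d of (normalized weight_d * score_d)."""
    total_weight = sum(weights.values())
    return sum(weights[d] / total_weight * scores[d] for d in scores)

if __name__ == "__main__":
    dimension_scores = {"accuracy": 0.9, "completeness": 0.6, "timeliness": 0.8}
    dimension_weights = {"accuracy": 0.5, "completeness": 0.3, "timeliness": 0.2}
    print(f"overall IQ: {weighted_sum_iq(dimension_scores, dimension_weights):.2f}")
```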

    A capacity and value based model for data architectures adopting integration technologies

    The paper discusses two concepts that have been associated with various approaches to data and information, namely capacity and value, focusing on database architectures and on two types of technologies widely used in integration projects: data integration, in the area of Enterprise Information Integration, and publish & subscribe, in the area of Enterprise Application Integration. Furthermore, the paper proposes and discusses a unifying model for information capacity and value that also considers quality constraints and the runtime costs of the database architecture.

    Quality Control in Crowdsourcing: A Survey of Quality Attributes, Assessment Techniques and Assurance Actions

    Crowdsourcing enables one to leverage the intelligence and wisdom of potentially large groups of individuals toward solving problems. Common problems approached with crowdsourcing are labeling images, translating or transcribing text, providing opinions or ideas, and similar tasks that computers are not good at or where they may even fail altogether. The introduction of humans into computations and/or everyday work, however, also poses critical, novel challenges in terms of quality control, as the crowd is typically composed of people with unknown and very diverse abilities, skills, interests, personal objectives, and technological resources. This survey studies quality in the context of crowdsourcing along several dimensions, so as to define and characterize it and to understand the current state of the art. Specifically, the survey derives a quality model for crowdsourcing tasks, identifies the methods and techniques that can be used to assess the attributes of the model, and describes the actions and strategies that help prevent and mitigate quality problems. An analysis of how these features are supported by the state of the art further identifies open issues and informs an outlook on hot future research directions.
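    A minimal sketch of two quality-control techniques commonly discussed in the crowdsourcing literature, gold questions to estimate worker accuracy and majority voting to aggregate redundant labels; this is a generic illustration, not the survey's quality model, and the task data are assumptions:

```python
# Illustrative sketch: estimate a worker's accuracy on known-answer (gold)
# tasks, and aggregate redundant labels for a task by majority vote.

from collections import Counter
from typing import Dict, List

def worker_accuracy(answers: Dict[str, str], gold: Dict[str, str]) -> float:
    """Share of gold tasks the worker labeled correctly."""
    checked = [t for t in gold if t in answers]
    if not checked:
        return 0.0
    return sum(answers[t] == gold[t] for t in checked) / len(checked)

def majority_vote(labels: List[str]) -> str:
    """Most common label among the redundant answers collected for one task."""
    return Counter(labels).most_common(1)[0][0]

if __name__ == "__main__":
    gold_tasks = {"t1": "cat", "t2": "dog"}
    worker_answers = {"t1": "cat", "t2": "dog", "t3": "cat"}
    print("worker accuracy:", worker_accuracy(worker_answers, gold_tasks))
    print("aggregated label for t3:", majority_vote(["cat", "cat", "dog"]))
```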