    Clustering-Based Predictive Process Monitoring

    Business process enactment is generally supported by information systems that record data about process executions, which can be extracted as event logs. Predictive process monitoring is concerned with exploiting such event logs to predict how running (uncompleted) cases will unfold up to their completion. In this paper, we propose a predictive process monitoring framework for estimating the probability that a given predicate will be fulfilled upon completion of a running case. The predicate can be, for example, a temporal logic constraint, a time constraint, or any predicate that can be evaluated over a completed trace. The framework takes into account both the sequence of events observed in the current trace and the data attributes associated with these events. The prediction problem is approached in two phases. First, prefixes of previous traces are clustered according to control-flow information. Second, a classifier is built for each cluster, using event data to discriminate between fulfillments and violations. At runtime, a prediction is made on a running case by mapping it to a cluster and applying the corresponding classifier. The framework has been implemented in the ProM toolset and validated on a log pertaining to the treatment of cancer patients in a large hospital.
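
    The two-phase scheme lends itself to a compact sketch. The following is a minimal illustration, assuming a frequency-based control-flow encoding and toy event attributes; `control_flow_vector`, `data_vector`, and the activity alphabet are hypothetical stand-ins, not the paper's actual encodings.

```python
from collections import Counter

import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

ACTIVITIES = ["register", "exam", "treat", "discharge"]  # hypothetical alphabet

def control_flow_vector(prefix):
    # One plausible control-flow encoding (activity frequencies);
    # the paper's exact encoding may differ.
    counts = Counter(e["activity"] for e in prefix)
    return [counts[a] for a in ACTIVITIES]

def data_vector(prefix):
    # Use the data attributes of the last observed event (an assumption).
    last = prefix[-1]
    return [last["age"], last["lab_value"]]

def train(prefixes, labels, n_clusters=3):
    # Phase 1: cluster prefixes by control-flow information only.
    cf = np.array([control_flow_vector(p) for p in prefixes])
    clusterer = KMeans(n_clusters=n_clusters, n_init=10).fit(cf)
    # Phase 2: one classifier per cluster, trained on event-data
    # attributes to discriminate fulfillments (1) from violations (0).
    classifiers = {}
    for c in range(n_clusters):
        idx = np.where(clusterer.labels_ == c)[0]
        X = np.array([data_vector(prefixes[i]) for i in idx])
        y = np.array([labels[i] for i in idx])
        classifiers[c] = RandomForestClassifier().fit(X, y)
    return clusterer, classifiers

def predict(clusterer, classifiers, running_case):
    # At runtime: map the running case to its cluster, apply that classifier.
    c = clusterer.predict([control_flow_vector(running_case)])[0]
    clf = classifiers[c]
    proba = clf.predict_proba([data_vector(running_case)])[0]
    # Estimated probability that the predicate is fulfilled on completion.
    return dict(zip(clf.classes_, proba)).get(1, 0.0)
```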

    Incremental Predictive Process Monitoring: How to Deal with the Variability of Real Environments

    A characteristic of existing predictive process monitoring techniques is to first construct a predictive model based on past process executions and then use it to predict the future of new ongoing cases, without the possibility of updating it with new cases when they complete their execution. This can make predictive process monitoring too rigid to deal with the variability of processes operating in real environments, which continuously evolve and/or exhibit new variant behaviors over time. As a solution to this problem, we propose the use of algorithms that allow the incremental construction of the predictive model. These incremental learning algorithms update the model whenever new cases become available, so that the predictive model evolves over time to fit the current circumstances. The algorithms have been implemented using different case-encoding strategies and evaluated on a number of real and synthetic datasets. The results provide first evidence of the potential of incremental learning strategies for predictive process monitoring in real environments, and of the impact of different case-encoding strategies in this setting.
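
    A minimal sketch of the incremental idea, using scikit-learn's `partial_fit` as one possible incremental learner; `encode_case`, the activity alphabet, and the example cases are hypothetical, and the paper evaluates several encoding strategies rather than this toy one.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

ALPHABET = ["register", "exam", "treat", "discharge"]  # hypothetical activities

def encode_case(case):
    # Toy frequency encoding over a fixed activity alphabet (an assumption).
    return np.array([case.count(a) for a in ALPHABET], dtype=float)

# A linear model trained by stochastic gradient descent supports
# incremental updates out of the box via partial_fit.
model = SGDClassifier(loss="log_loss")
CLASSES = np.array([0, 1])  # 0 = violation, 1 = fulfillment

def on_case_completed(case, outcome):
    # Update the predictor as soon as a case completes, so the model
    # keeps tracking evolving and variant process behavior.
    X = encode_case(case).reshape(1, -1)
    model.partial_fit(X, np.array([outcome]), classes=CLASSES)

def predict_outcome(running_case):
    # Predict the outcome of an ongoing case with the current model.
    return model.predict(encode_case(running_case).reshape(1, -1))[0]

# Example: a stream of completed cases arriving over time.
on_case_completed(["register", "exam", "treat"], 1)
on_case_completed(["register", "exam"], 0)
print(predict_outcome(["register", "exam", "treat", "discharge"]))
```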

    Runtime integration of machine learning and simulation for business processes: Time and decision mining predictions

    Recent research in Computer Science has investigated the use of Deep Learning (DL) techniques to complement outcomes or decisions within a Discrete Event Simulation (DES) model. The main idea of this combination is to maintain a white-box simulation model and complement it with information provided by DL models, so as to overcome the unrealistic or oversimplified assumptions of traditional DESs. State-of-the-art techniques in BPM combine Deep Learning and Discrete Event Simulation in a post-integration fashion: first an entire simulation is performed, and then a DL model is used to add waiting times and processing times to the events produced by the simulation model. In this paper, we take a step further by introducing Rims (Runtime Integration of Machine Learning and Simulation). Instead of complementing the outcome of a complete simulation with the results of predictions a posteriori, Rims tightly integrates the predictions of the DL model at runtime, during the simulation. This runtime integration makes it possible to fully exploit the predictions while respecting the simulation execution, enhancing the performance of the overall system with respect to both the single techniques (Business Process Simulation and DL) applied separately and the post-integration approach. In particular, the runtime integration ensures the accuracy of intercase features for time prediction, such as the number of ongoing traces at a given time, by computing them directly during the simulation, where all traces are executed in parallel. Additionally, it allows for the incorporation of online queue information in the DL model and enables the integration of other predictive models into the simulator to enhance decision-point management within the process model. These enhancements improve the ability of Rims to accurately simulate the real process in terms of control flow, as well as in terms of time and congestion dimensions. Especially in process scenarios with significant congestion, when a limited availability of resources leads to long event queues for their allocation, the ability of Rims to use queue features to predict waiting times allows it to surpass the state of the art. We evaluated our approach on real-world and synthetic event logs, using various metrics to assess the simulation model's quality along the control-flow, time, and congestion dimensions.
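
    As a rough illustration of what runtime integration means in practice (this is not the Rims codebase), the sketch below queries a stand-in predictive model inside a SimPy simulation, so that intercase features such as the queue length and the number of ongoing traces are read from the live simulation state. The activities, durations, and `predict_duration` model are all illustrative assumptions.

```python
import random

import simpy

def predict_duration(activity, queue_len, ongoing):
    # Stand-in for the trained DL model: predicted duration grows with
    # congestion (a purely illustrative assumption).
    base = {"triage": 5.0, "treat": 20.0}[activity]
    return base * (1 + 0.1 * queue_len + 0.05 * ongoing)

def case(env, resource, stats):
    stats["ongoing"] += 1
    for activity in ["triage", "treat"]:
        with resource.request() as req:
            yield req
            # The model is queried *during* the run, so intercase features
            # (queue length, number of ongoing traces) reflect the true
            # simulation state rather than a post-hoc estimate.
            d = predict_duration(activity, len(resource.queue), stats["ongoing"])
            yield env.timeout(d)
    stats["ongoing"] -= 1

def arrivals(env, resource, stats):
    for _ in range(50):
        env.process(case(env, resource, stats))
        yield env.timeout(random.expovariate(1 / 10))  # mean inter-arrival 10

env = simpy.Environment()
nurse = simpy.Resource(env, capacity=2)  # scarce resource, so queues form
stats = {"ongoing": 0}
env.process(arrivals(env, nurse, stats))
env.run()
```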

    Explain, Adapt and Retrain: How to improve the accuracy of a PPM classifier through different explanation styles

    Recent papers have introduced a novel approach to explain why a Predictive Process Monitoring (PPM) model for outcome-oriented predictions provides wrong predictions. Moreover, they have shown how to exploit the explanations, obtained using state-of-the-art post-hoc explainers, to identify in a semi-automated way the most common features that induce a predictor to make mistakes and, in turn, to reduce the impact of those features and increase the accuracy of the predictive model. This work starts from the assumption that frequent control-flow patterns in event logs may represent important features that characterize, and therefore explain, a certain prediction. In this paper, we therefore (i) employ a novel encoding able to leverage DECLARE constraints in Predictive Process Monitoring and compare its effectiveness with state-of-the-art Predictive Process Monitoring encodings, in particular for the task of outcome-oriented predictions; (ii) introduce a completely automated pipeline for identifying the most common features that induce a predictor to make mistakes; and (iii) show the effectiveness of the proposed pipeline in increasing the accuracy of the predictive model by validating it on different real-life datasets.
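
    For intuition, a DECLARE-style encoding can be sketched as a vector of boolean constraint checks over a trace. The templates, constraint set, and activity names below are illustrative, not the paper's exact encoding.

```python
def existence(trace, a):
    # a occurs at least once in the trace.
    return a in trace

def response(trace, a, b):
    # Every occurrence of a is eventually followed by b.
    return all(b in trace[i + 1:] for i, e in enumerate(trace) if e == a)

def precedence(trace, a, b):
    # b occurs only if a has occurred before it.
    return all(a in trace[:i] for i, e in enumerate(trace) if e == b)

CONSTRAINTS = [
    ("existence(exam)", lambda t: existence(t, "exam")),
    ("response(exam, treat)", lambda t: response(t, "exam", "treat")),
    ("precedence(register, exam)", lambda t: precedence(t, "register", "exam")),
]

def declare_encode(trace):
    # Boolean feature vector that can be fed to an outcome classifier.
    return [int(check(trace)) for _, check in CONSTRAINTS]

print(declare_encode(["register", "exam", "treat"]))  # [1, 1, 1]
```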

    Evaluating Wiki Collaborative Features in Ontology Authoring (Extended abstract)

    Abstract: This extended abstract summarizes a rigorous investigation of the impact of wiki collaborative functionalities on collaborative ontology authoring; the work summarized here has previously been published in full. Good-quality ontology modelling often demands multiple competencies and skills, which are difficult to find in a single person. This results in the need to involve several actors, possibly with different roles and expertise, collaborating on the ontology construction. Collaborative ontology authoring has recently been widely investigated in the literature. A first requirement deals with the collaboration between those who know the domain to be modelled, i.e., the Domain Experts (DEs), and those who have the technical skills to formalize the domain model, i.e., the Knowledge Engineers (KEs). Traditional methodologies and tools were mainly based on the idea that knowledge engineers should drive the modelling process (producing ontologies in a formalism that is usually not understandable to domain experts) and domain experts should only report their knowledge of the domain to KEs. However, these methodologies often create an unnecessary extra layer of indirection, an imbalance between the two roles, and the impossibility for domain experts to understand the modelled ontology. DEs should be actively involved in the ontology modelling process rather than only provide domain knowledge to KEs. A second important requirement deals with the support of distributed teams of actors. Independently of their geographical position or role, team members should be made aware of the collaborative development of the modelled artefacts and supported in communicating modelling choices and in coordinating their work. Wiki tools for ontology authoring offer an appealing option for tackling these collaborative aspects, since wikis typically provide such collaborative features.

    Analysing and Improving Business Processes Through Hybrid Simulation Model: A Case Study

    The increasing amount of process execution data, i.e. the event logs stored by companies, can be exploited using Business Process Simulation (BPS). BPS serves as a valuable tool for business analysts, enabling them to analyze and compare business processes and identify changes that optimize key performance measures. Especially when evaluating alternative scenarios, it is crucial to start from an accurate simulation of the current process. Recent research in the field of BPS has demonstrated that Hybrid Simulation Model (HSM) approaches reliably replicate business process behaviour, overcoming the unrealistic or oversimplified assumptions often found in traditional discrete event simulators. In this paper, we present a case study conducted in collaboration with EY, in which we apply the HSM to a real-life business process log. This study demonstrates the benefits of the HSM for business process analysis and its potential to improve process performance.

    Explainable predictive process monitoring: a user evaluation

    Explainability is motivated by the lack of transparency of black-box machine learning approaches, which does not foster trust in and acceptance of machine learning algorithms. This also happens in the predictive process monitoring field, where predictions obtained by applying machine learning techniques need to be explained to users so as to gain their trust and acceptance. In this work, we carry out a user evaluation of explanation approaches for predictive process monitoring, aimed at investigating whether and how the explanations provided (i) are understandable; (ii) are useful in decision-making tasks; (iii) can be further improved for process analysts with different predictive process monitoring expertise levels. The results of the user evaluation show that, although explanation plots are overall understandable and useful for decision-making tasks for business process management users, with and without experience in predictive process monitoring, differences exist in the comprehension and usage of different plots, as well as in the way users with different predictive process monitoring expertise understand and use them.

    Nirdizati Light: A Modular Framework for Explainable Predictive Process Monitoring

    Nirdizati Light is an innovative Python package designed for Explainable Predictive Process Monitoring (XPPM). It addresses the need for a modular, flexible tool to compare predictive models and to generate explanations for the predictions they make. By integrating consolidated libraries for process mining, machine learning, and explainable AI, it offers a comprehensive approach to predictive model construction and explanation generation. This paper discusses the tool's key features and its significance for the BPM community.
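
    The package's actual API is not shown here; the following sketch merely illustrates, on toy stand-in data, the kind of pipeline such a framework modularizes: training an outcome predictor with scikit-learn and explaining it post hoc with SHAP. In a real pipeline the encoded prefixes and outcomes would come from an event log, e.g. loaded via pm4py.

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Toy stand-in data: frequency-encoded prefixes with binary outcomes
# (purely synthetic; a real pipeline would encode an event log instead).
rng = np.random.default_rng(0)
X = rng.integers(0, 5, size=(200, 4)).astype(float)  # 4 activity counts
y = (X[:, 1] > X[:, 2]).astype(int)                  # synthetic outcome rule

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print("accuracy:", model.score(X_test, y_test))

# Post-hoc explanation of individual predictions with SHAP.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
```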