Lego Rovers (Home Set)
Android application supporting the Lego Rovers activity for children in Key Stages 2 and 3, together with a Parent/Teacher handbook, robot build instructions, and an activity workbook. Version for use with the LEGO EV3 Home Set.
Advising Autonomous Cars about the Rules of the Road
This paper describes the Rules of The Road Advisor (RoTRA), an agent that provides recommended and possible actions generated from a set of human-level rules. We describe the architecture and design of RoTRA, both formally and with an example. Specifically, we use RoTRA to formalise and implement the UK “Rules of the Road”, and describe how this can be incorporated into autonomous cars such that they can reason internally about obeying the rules of the road. In addition, the possible actions generated are annotated to indicate whether the rules state that the action must be taken or only recommend that it should be taken, as per the UK Highway Code (Rules of The Road). The benefits of utilising this system include being able to adapt to different regulations in different jurisdictions, allowing clear traceability from rules to behaviour, and providing an external automated accountability mechanism that can check whether the rules were obeyed in a given situation. A simulation of an autonomous car shows, via a concrete example, how trust can be built by putting the autonomous vehicle through a number of scenarios that test the car's ability to obey the rules of the road. Autonomous cars that incorporate this system are able to ensure that they are obeying the rules of the road, and external (legal or regulatory) bodies can verify that this is the case without the vehicle or its manufacturer having to expose their source code or make their workings transparent, thus allowing greater trust between car companies, jurisdictions, and the general public.
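As a rough illustration of the must/should annotation the abstract describes, the sketch below encodes a few human-level driving rules and returns every applicable action together with its strength and the rule it came from. This is a hedged sketch, not RoTRA itself; the rule names, situation fields, and the `advise` helper are all hypothetical.

```python
# Minimal sketch of a rules-of-the-road advisor: each rule that fires yields an
# action annotated as "must" (mandatory) or "should" (recommended), giving
# traceability from behaviour back to the rule that demanded it.
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

@dataclass
class Rule:
    name: str
    applies: Callable[[Dict], bool]   # does the rule fire in this situation?
    action: str                       # the action the rule talks about
    strength: str                     # "must" or "should"

RULES: List[Rule] = [
    Rule("red_light", lambda s: s["signal"] == "red", "stop", "must"),
    Rule("amber_light", lambda s: s["signal"] == "amber", "prepare_to_stop", "should"),
    Rule("pedestrian_waiting", lambda s: s["pedestrian_at_crossing"], "give_way", "must"),
]

def advise(situation: Dict) -> List[Tuple[str, str, str]]:
    """Return (action, strength, rule) triples for every rule that fires."""
    return [(r.action, r.strength, r.name) for r in RULES if r.applies(situation)]

if __name__ == "__main__":
    scene = {"signal": "red", "pedestrian_at_crossing": True}
    for action, strength, rule in advise(scene):
        print(f"{strength.upper()}: {action} (from rule '{rule}')")
```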
An explainable approach to deducing outcomes in European Court of Human Rights cases using ADFs
In this paper we present an argumentation-based approach to representing and reasoning about a domain of law that has previously been addressed through a machine learning approach. The domain concerns cases that all fall within the remit of a specific Article of the European Convention on Human Rights. We perform a comparison between the approaches based on two criteria: the ability of the model to accurately replicate the decisions made in the real-life legal cases within the particular domain, and the quality of the explanation provided by the models. Our initial results show that the system based on the argumentation approach improves on the machine learning results in terms of accuracy, and can explain its outcomes in terms of the issue on which the case turned and the factors that were crucial in arriving at the conclusion.
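The general idea behind Abstract Dialectical Frameworks (ADFs) in this setting is that each issue and the final outcome carry an acceptance condition over their parent nodes, so the decision can be explained by the issue it turned on and the factors behind that issue. The sketch below is a minimal, hedged illustration of that structure only; the factor and issue names are invented and do not come from the authors' model.

```python
# Toy ADF-style evaluation: base factors feed intermediate issues via Boolean
# acceptance conditions, and the outcome is a condition over the issues.
from typing import Dict

# Base-level factors established from the facts of a hypothetical case.
factors: Dict[str, bool] = {
    "interference_with_correspondence": True,
    "prescribed_by_law": False,
    "legitimate_aim": True,
}

def issue_interference(f): return f["interference_with_correspondence"]
def issue_justified(f): return f["prescribed_by_law"] and f["legitimate_aim"]
def outcome_violation(f): return issue_interference(f) and not issue_justified(f)

if __name__ == "__main__":
    if outcome_violation(factors):
        print("Outcome: violation found")
        print("Turned on the issue of justification:",
              "not prescribed by law" if not factors["prescribed_by_law"]
              else "no legitimate aim")
    else:
        print("Outcome: no violation")
```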
Dialogue Explanations for Rule-Based AI Systems
The need for AI systems to explain themselves is increasingly recognised as a priority, particularly in domains where incorrect decisions can result in harm and, in the worst cases, death. Explainable Artificial Intelligence (XAI) tries to produce human-understandable explanations for AI decisions. However, most XAI systems prioritise factors such as technical complexity and research-oriented goals over end-user needs, risking information overload. This research attempts to bridge a gap in current understanding and provide insights for assisting users in comprehending a rule-based system's reasoning through dialogue. The hypothesis is that employing dialogue as a mechanism can be effective in constructing explanations. A dialogue framework for rule-based AI systems is presented, allowing the system to explain its decisions by engaging in “Why?” and “Why not?” questions and answers. We establish formal properties of this framework and present a small user study, with encouraging results, that compares dialogue-based explanations with proof trees produced by the AI system.
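To make the “Why?” / “Why not?” idea concrete, the sketch below answers a “Why?” question by pointing at the rule that derived a conclusion, and a “Why not?” question by pointing at an unsatisfied condition. It is a hedged sketch under the assumption of a simple forward-chaining rule base; the rule and fact names, and the `why`/`why_not` helpers, are illustrative rather than the authors' framework.

```python
# Minimal forward-chaining rule base with why / why-not answers.
from typing import Dict, List, Set, Tuple

Rule = Tuple[str, List[str], str]          # (name, conditions, conclusion)
RULES: List[Rule] = [
    ("r1", ["has_fever", "has_cough"], "suspect_flu"),
    ("r2", ["suspect_flu", "is_high_risk"], "recommend_test"),
]
GIVEN: Set[str] = {"has_fever", "has_cough"}

def derive(facts: Set[str], rules: List[Rule]):
    """Fire rules until a fixed point; remember which rule derived each fact."""
    facts, trace, changed = set(facts), {}, True
    while changed:
        changed = False
        for name, conds, concl in rules:
            if concl not in facts and all(c in facts for c in conds):
                facts.add(concl)
                trace[concl] = name
                changed = True
    return facts, trace

def why(goal, facts, rules, trace) -> str:
    """Answer 'Why goal?' with the rule that derived it and its conditions."""
    if goal in trace:
        name, conds, _ = next(r for r in rules if r[0] == trace[goal])
        return f"{goal} holds because rule {name} fired: {' and '.join(conds)}"
    return f"{goal} was given as a fact" if goal in facts else f"{goal} does not hold"

def why_not(goal, facts, rules) -> str:
    """Answer 'Why not goal?' by pointing at an unsatisfied condition."""
    for name, conds, concl in rules:
        if concl == goal:
            missing = [c for c in conds if c not in facts]
            return f"rule {name} could conclude {goal}, but {', '.join(missing)} is not established"
    return f"no rule concludes {goal}"

if __name__ == "__main__":
    facts, trace = derive(GIVEN, RULES)
    print(why("suspect_flu", facts, RULES, trace))
    print(why_not("recommend_test", facts, RULES))
```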
Dialogue-Based Explanations of Reasoning in Rule-based Systems
The recent focus on explainable artificial intelligence has been driven by a perception that complex statistical models are opaque to users. Rule-based systems, in contrast, have often been presented as self-explanatory: all the system needs to do is provide a log of its reasoning process and its operations are clear. We believe that such logs are often difficult for users to understand, in part because of their size and complexity. We propose dialogue as an explanatory mechanism for rule-based AI systems, allowing users and systems to co-create an explanation that focuses on the user's particular interests or concerns. Our hypothesis is that when a system makes a deduction that was, in some way, unexpected by the user, locating the source of the disagreement or misunderstanding is best achieved through a collaborative dialogue process that allows the participants to gradually isolate the cause. We have implemented a system with this mechanism and performed a user evaluation showing that in many cases a dialogue is preferred to a reasoning log presented as a tree. These results provide further support for the hypothesis that dialogue can provide a good explanation for a rule-based AI system.
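For contrast with the dialogue approach, the baseline mentioned above is the full reasoning log presented as a tree. The sketch below shows roughly what such a proof-tree rendering looks like; the structure and names are illustrative and not taken from the authors' implementation.

```python
# Render a reasoning log as a proof tree: each derived conclusion is shown with
# the sub-conclusions (or given facts) that support it.
from typing import Dict, List

# proof: conclusion -> supporting premises (empty list means a given fact)
proof: Dict[str, List[str]] = {
    "recommend_test": ["suspect_flu", "is_high_risk"],
    "suspect_flu": ["has_fever", "has_cough"],
    "has_fever": [], "has_cough": [], "is_high_risk": [],
}

def render(node: str, depth: int = 0) -> None:
    """Print the proof tree rooted at `node`, indented one level per step."""
    marker = "fact" if not proof.get(node) else "derived"
    print("  " * depth + f"{node} [{marker}]")
    for child in proof.get(node, []):
        render(child, depth + 1)

if __name__ == "__main__":
    render("recommend_test")
```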
Towards Forward Responsibility in BDI Agents
In this paper, we discuss forward responsibilities in Belief-Desire-Intention agents, that is, responsibilities that can drive future decision-making. We focus on individual rather than global notions of responsibility. Our contributions include: (a) extended operational semantics for responsibility-aware rational agents; (b) hierarchical responsibilities for improving intention selection based on the priorities (i.e., hierarchical level) of a responsibility; and (c) shared responsibilities, which allow agents with the same responsibility to update their priority levels (and consequently commit or not to the responsibility) depending on the lack (or surplus) of agents that are currently engaged with it.
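A rough sense of how hierarchical and shared responsibilities could steer intention selection is sketched below: intentions backed by a higher-priority responsibility are chosen first, and a shared responsibility is demoted when enough other agents already hold it. This is a hedged sketch under those assumptions, not the authors' operational semantics; all class and field names are hypothetical.

```python
# Toy responsibility-aware intention selection for a BDI-style agent.
from dataclasses import dataclass
from typing import List

@dataclass
class Responsibility:
    name: str
    priority: int          # higher value = higher hierarchical level
    holders: int = 1       # agents currently engaged with it (shared case)
    required: int = 1      # how many holders the responsibility needs

    def effective_priority(self) -> int:
        # If enough other agents are already engaged, this agent can demote it.
        return 0 if self.holders > self.required else self.priority

@dataclass
class Intention:
    goal: str
    responsibility: Responsibility

def select_intention(intentions: List[Intention]) -> Intention:
    """Pick the intention backed by the highest effective responsibility priority."""
    return max(intentions, key=lambda i: i.responsibility.effective_priority())

if __name__ == "__main__":
    clean = Responsibility("keep_area_clean", priority=1, holders=3, required=1)
    rescue = Responsibility("respond_to_alarm", priority=5, holders=1, required=1)
    chosen = select_intention([Intention("sweep_floor", clean),
                               Intention("go_to_alarm", rescue)])
    print("Selected:", chosen.goal)
```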
