Constructing futures: a social constructionist perspective on foresight methodology
The aim of this paper is to demonstrate the relationship between a particular epistemological perspective and foresight methodology. We draw on a body of social theory concerned with the way that meaning is produced and assimilated by society; specifically, the social construction of knowledge, which is distinguished from its near-neighbour constructivism by its focus on inter-subjectivity. We show that social constructionism, at least in its weak form, seems to be implicit in many epistemological assumptions underlying futures studies. We identify a range of distinctive methodological features in foresight studies, such as time, descriptions of difference, participation and values, and examine these from a social constructionist perspective. It appears that social constructionism is highly resonant with the way in which knowledge of the future is produced and used. A social constructionist perspective enables a methodological reflection on how, with what legitimacy, and to what social good, knowledge is produced. Foresight that produces symbols without inter-subjective meaning neither anticipates nor produces futures. Our conclusion is that foresight is both a social construction and a mechanism for social construction. Methodologically, foresight projects should acknowledge the socially constructed nature of their process and outcomes, as this will lead to greater rigour and legitimacy.
Accessible Cultural Heritage through Explainable Artificial Intelligence
The Ethics Guidelines for Trustworthy AI advocate for AI technology that is, among other things, more inclusive. Explainable AI (XAI) aims at making state-of-the-art opaque models more transparent, and defends AI-based outcomes endorsed with a rationale explanation, i.e., an explanation targeted at non-technical users. XAI and Responsible AI principles hold that audience expertise should be included in the evaluation of explainable AI systems. However, AI has not yet reached all publics and audiences, some of which may need it the most. One example of a domain where accessibility has so far been little influenced by the latest AI advances is cultural heritage. We propose including minorities as special users and evaluators of the latest XAI techniques. In order to define catalytic scenarios for collaboration and improved user experience, we pose some challenges and research questions yet to be addressed by the latest AI models likely to be involved in such synergy.
