Cooperation and confusion in public goods games: confusion cannot explain contribution patterns
People behave much more cooperatively in social dilemmas such as public goods games than the self-interest hypothesis predicts. Some studies have suggested that many decision makers cooperate not because of genuine cooperative preferences but because they are confused about the incentive structure of the game, and are therefore unaware of the dominant strategy. In this research, we experimentally manipulate whether or not decision makers receive explicit information about which strategies maximize individual income and group income. Our data reveal no statistically significant effects of the treatment variation on either elicited contribution preferences or unconditional contributions and beliefs in a repeated linear public goods game. We conclude that it is unlikely that confusion about optimal strategies explains the widely observed cooperation patterns in social dilemmas such as public goods games.
Initial validation of the general attitudes towards Artificial Intelligence Scale
Author Accepted Manuscript with Appendix A (Sources of News Stories) and Appendix B (General Attitudes Towards Artificial Intelligence Scale, with instructions and scoring). For data files, please follow the DOI https://doi.org/10.1016/j.chbr.2020.100014 to the publisher's site. This article is available Open Access via the publisher's site: https://www.sciencedirect.com/science/article/pii/S2451958820300142

A new General Attitudes towards Artificial Intelligence Scale (GAAIS) was developed. The scale underwent initial statistical validation via Exploratory Factor Analysis, which identified positive and negative subscales. Both subscales captured emotions in line with their valence. In addition, the positive subscale reflected societal and personal utility, whereas the negative subscale reflected concerns. The scale showed good psychometric indices and convergent and discriminant validity against existing measures. To cross-validate general attitudes with attitudes towards specific instances of AI applications, summaries of tasks accomplished by specific applications of Artificial Intelligence were sourced from newspaper articles. These were rated for comfortableness and perceived capability. Comfortableness with specific applications was a strong predictor of general attitudes as measured by the GAAIS, but perceived capability was a weaker predictor. Participants viewed AI applications involving big data (e.g. astronomy, law, pharmacology) positively, but viewed applications for tasks involving human judgement (e.g. medical treatment, psychological counselling) negatively. Applications with a strong ethical dimension led to stronger discomfort than their rated capabilities would predict. The survey data suggested that people held mixed views of AI. The initially validated two-factor GAAIS to measure General Attitudes towards Artificial Intelligence is included in the Appendix.
Deployment of Algorithms in Management Tasks Reduces Prosocial Motivation
Psychological reactions to human versus robotic job replacement
Collective Layoffs and Offshoring: A Social Contract Account
Psychological reactance to system-level policies before and after their implementation
Offshoring, Automation, and the Legitimacy of Efficiency: A Social Contract Account of Consumer Reactions to Collective Layoffs
