
    The Impact of COVID-19 and “Emergency Remote Teaching” on the UK Computer Science Education Community

    The COVID-19 pandemic has imposed "emergency remote teaching" across education globally, leading to the closure of institutions across a variety of settings, from early years through to higher education. This paper looks specifically at the impact of these changes on those teaching the discipline of computer science in the UK. Drawing on the quantitative and qualitative findings from a large-scale survey of the educational workforce (N=2,197) conducted in the immediate aftermath of institutional closures in March 2020 and the shift to online delivery, we report how those teaching computer science in various UK settings (n=214) show significantly more positive attitudes towards the move to online learning, teaching and assessment than those working in other disciplines; these perceptions were consistent across schools, colleges and higher education institutions. However, whilst practitioners noted the opportunities these changes present for their respective sectors, especially a renewed focus on the importance of digital skills, they raised a number of generalisable concerns about the impact of the shift online on their roles, their institutions and their sectors as a whole; for example, the impact on workload, effective pedagogy and job security. More specifically for computer science practitioners, curricula and qualifications, concerns were raised regarding the ability to meaningfully deliver certain core topics, such as mathematical foundations and programming, as well as the impact on various types of formal examinations and assessment. Based on the data obtained from this rapid-response survey, we offer informed commentary, evaluation and recommendations for emerging learning and teaching policy and practice in the UK computer science community as we move into the 2020-2021 academic year and beyond.

    Keeping AI Legal

    AI programs make numerous decisions on their own, lack transparency, and may change frequently. Hence, unassisted human agents, such as auditors, accountants, inspectors, and police, cannot ensure that AI-guided instruments will abide by the law. This Article suggests that human agents need the assistance of AI oversight programs that analyze and oversee operational AI programs. This Article asks whether operational AI programs should be programmed to enable human users to override them; without such override capability, the legal order would be undermined. This Article also points out that AI oversight programs provide high surveillance capacities and, therefore, are essential for protecting individual rights in the cyber age. This Article closes by discussing the argument that AI-guided instruments, like robots, endanger much more than the legal order: that they may turn on their makers, or even destroy humanity.