Risks: An Obstacle or a Lever for Education?
1.1. Introduction
Given the challenges that new technologies pose to society, the notion of risk has become difficult to grasp, because it refers to dreaded events and to individuals’ worries and fears. For the sociologist Ulrich Beck (2008), risks no longer come only from outside (natural risks). Modern society, in particular through its technological progress, generates risks in the sense that it debates them extensively and seeks ever more effective ways to prevent them: this is the emergence of a “risk society”. The introduction of digital technology into schools and families has generated numerous myths (Musso 2008; Amadieu and Tricot 2014) and unfounded fears (Cordier 2015; Plantard 2016) that are also widely debated. In discourse, “digital” refers both to computer tools and techniques (the Internet, computers, smartphones, web platforms, etc.) and to their uses. Indeed, the digitization of many practices has multiplied the situations in which digital tools are used, whether for information, communication, transmission and so on. Students, as well as teachers starting their careers, now belong to a generation that has grown up with the evolution of digital technologies and has built its practices and representations around them.
This leads us to examine how this new generation of teachers conceives of digital technology and the status they grant to digital literacy. In this chapter, we propose to characterize different types of digital risks and then to compare them with the perceptions of teachers at the beginning of their careers. The objective is to understand how these representations of “digital risks” can act as an obstacle or a lever for educating young people about digital issues.
This text builds on the research of the eRISK project, Digital Risks and Education 2.0, which was conducted between 2016 and 2019 with the support of the Maif Foundation.
First, we will define the notion of “digital risk” and characterize the diversity of risks it may cover, before examining which of these risks schools face with digital technology. We will then present the methodology used to collect new teachers’ representations and declared practices regarding these risks. Finally, the third section will be devoted to analyzing the teachers’ representations and practices, and to reflecting on their impact on students’ education in the digital world.
1.2. Digital risks and education: what are we talking about?
1.2.1. Digital risks
Digital risks can be considered threats, of varying levels of danger, which can manifest themselves during or after a digital activity and which are likely to affect the user or to have harmful consequences for others. These risks cover a wide variety of domains, which we have grouped into nine types following a preliminary study carried out collectively within the eRISK project, based on a thematic exploration of the mainstream media. This step allowed us to develop a typology that we then revisited and grounded in the scientific literature. We have thus identified technical, informational and political, cognitive, psychosocial, health, socio-economic, ethical, legal and ecological risks. Here, we briefly characterize these different types of risks.
Box 1.1. Typology of digital risks
Technical risks
Technical risks are related to potential malicious intrusions likely to damage digital equipment in one way or another (computer, tablet, smartphone, USB key, CD-ROM, etc.) and to spread to other equipment. The motives behind these attacks vary: destroying documents to harm an individual or an organization (economic, military, governmental, etc.); infiltrating an information system to steal information, whether to expose it or to conduct industrial espionage, for example; or simply taking up a challenge. We use the term hackers to designate individuals who illegally break into other people’s computer systems. These acts fall under cybercrime and are punishable by law.
Informational risks
Informational risks can be very diverse in nature. They may involve the difficulty of assessing the reliability of information on the web, given the proliferation of falsified information (rumors, disinformation) disseminated deliberately and with the aim of causing harm. Getting information on the Internet also carries the risk of being conditioned by the Web giants (GAFAM), which steer individuals, according to their profiles, towards information intended to “correspond” to them. This phenomenon, described by the notion of filter bubbles (Pariser 2011), stems from the fact that, on search engines and platforms, the algorithms that filter and select information can draw on the user’s profile (built from personal data and from search and browsing habits); their network of relationships (i.e. the relationships that generate frequent interactions and are considered more influential); and the popularity of information (measured by the number of clicks, retweets on social networks, likes, etc.). The manipulation of information on the Web also manifests itself through “conspiracy theories” (Bronner 2013), disseminated in order to sow doubt about certain scientific facts or political powers. In this way, ideologies can be relayed and generate socio-cultural or political conflicts.
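To make the filtering mechanism described above more concrete, the following sketch is purely illustrative: it is not drawn from the eRISK project or from any real platform, and all names, weights and data structures are hypothetical. It simply shows how a toy recommendation score might combine the three signals mentioned (user profile, network of relationships and popularity) and why such a combination tends to favor content that already matches the user.

```python
# Purely illustrative toy model of a personalization score combining three signals:
# profile match, network of relationships and popularity. Weights and field names
# are hypothetical, not taken from any real platform.

def personalization_score(item, user, w_profile=0.5, w_network=0.3, w_popularity=0.2):
    """Return a relevance score for `item` as seen by `user` (toy model)."""
    # Profile match: overlap between the item's topics and the user's interests
    # (inferred from personal data, searches and browsing history).
    profile_match = len(item["topics"] & user["interests"]) / max(len(item["topics"]), 1)

    # Network signal: stronger if the item was shared by contacts the user
    # interacts with frequently.
    network_signal = sum(user["interaction_weight"].get(contact, 0.0)
                         for contact in item["shared_by"])

    # Popularity: clicks, likes, retweets, normalized here to [0, 1].
    popularity = min(item["engagements"] / 10_000, 1.0)

    return w_profile * profile_match + w_network * network_signal + w_popularity * popularity


# Example: two items competing for the same user's feed.
user = {"interests": {"sport", "politics"},
        "interaction_weight": {"alice": 0.8, "bob": 0.1}}
items = [
    {"id": "a", "topics": {"politics"}, "shared_by": ["alice"], "engagements": 500},
    {"id": "b", "topics": {"science"}, "shared_by": ["bob"], "engagements": 9000},
]
ranked = sorted(items, key=lambda it: personalization_score(it, user), reverse=True)
# Item "a" ranks first despite being far less popular: the feed reinforces what
# already matches the user's profile and close network, which is the dynamic
# behind the "filter bubble" effect.
```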
Cognitive risks
Cognitive risks refer to the disruption of attentional capacities that can be generated by intensive use of digital tools, or to cognitive disorders. In 2011, the American writer Nicholas Carr denounced the Internet in a book entitled “The Shallows”, on the grounds that it leads to an impoverishment of reading practices and of thinking. Researchers in cognitive psychology (Amadieu and Tricot 2014) have shown that reading skills on the Web are indeed different from those required on paper, a thesis supported by the American scholar N. Katherine Hayles (2016), who explains that the human brain has adapted to digital environments by acquiring a new mode of attention: hyperattention. This form of attention allows us to navigate web content through hyperlinks.
For some, this evolution in ways of paying attention, with varying levels of concentration and of retention of information across everyday activities, is the harmful consequence of overexposure to screens. For others (Citton 2014), the cause should rather be sought in the commercial model of platforms, which above all seek to capture monetizable attention.
Psychosocial risks
Among the psychosocial risks, exposure to shocking, hateful or pornographic content (Jehel 2015) on certain websites or in games can generate psychological and social difficulties, particularly during adolescence. The phenomenon of screen addiction is also denounced on the grounds that it distances some users from everyday social activities, to the point of isolation. Even if these individuals’ distress most often originates in their personal and family context, the digital environment can reveal such difficulties (Stora 2018). Verbal violence or humiliation repeatedly inflicted through the Internet (Blaya 2013) is another threat likely to affect individuals’ psychological and moral health. We use the term cyberharassment to refer to these practices, in which aggressors act with a feeling of total impunity, believing themselves to be hidden by the anonymity of the Internet.
Health risks
Among the health risks worth mentioning are the potentially harmful effects of screens on eyesight, which can lead to visual fatigue as well as sleep disturbances. Posture disorders linked to digital uses and exposure to radio waves also generate concerns, as conveyed in public discourse and in the media.
Socio-economic risks
Socio-economic risks correspond to what have been called digital divides, linked to inequalities among users (Plantard and Le Mentec 2013). These divides operate at several levels: access to the Internet or to digital equipment (equipment inequalities); the urban or rural living environment (access inequalities); an individual’s social environment and socio-professional category, or that of their parents (usage inequalities); the support available to users and their ability to train