Innovations in Digital Research Methods. Group of authors

      Call 219 nuisance call #gmp24;

      Call 220 aggressive shoplifter held at supermarket in Stockport #gmp24.

      A social science researcher could code such data for: incident, time, location and language. Follow-up analysis could be conducted using outcome and administrative data on legal prosecution and offender rehabilitation. Qualitative research with people who live in the area and with police officers could also be conducted, and comparisons could be made with other police forces. As outlined above, a multiple data type approach to social science research may create a step change in the explanatory power of research. Other applications for analysing digital data, including social media data, are being developed, such as monitoring message and usage volumes to anticipate events and track how people are responding to them.
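The coding step described above can be sketched in a few lines. The sketch below is a toy illustration, not part of the original study: the field names, the regular expression and the small gazetteer of place names are all assumptions made for the example.

```python
import re

# Toy coder for tweets in the '#gmp24' call-log format quoted above.
# Matches 'Call <number> <description> #gmp24'.
CALL_RE = re.compile(r"[Cc]all\s+(\d+)\s+(.*?)\s*#gmp24")

# Hypothetical gazetteer for crude location coding.
KNOWN_PLACES = {"Stockport", "Salford", "Bolton"}

def code_tweet(text):
    """Code a tweet for incident number, description and any
    recognised place name; return None if it does not match."""
    m = CALL_RE.search(text)
    if not m:
        return None
    number, description = int(m.group(1)), m.group(2)
    location = next((p for p in KNOWN_PLACES if p in description), None)
    return {"call": number, "incident": description, "location": location}

coded = code_tweet(
    "Call 220 aggressive shoplifter held at supermarket in Stockport #gmp24"
)
```

A real study would need far more robust parsing and a proper gazetteer, but even this sketch shows how free-text posts can be turned into structured records for follow-up analysis.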

      As well as these various forms of data regarding social behaviour, primary intentional type data are increasingly being generated using experimental techniques. These methods involve comparing interventions on test and control groups. The UK Government is pushing forward this approach with its Behavioural Insights initiative (Cabinet Office, 2011). Examples include: the use of information interventions in relation to voting and recycling, public recognition in charitable giving, the use of peer effects in voting and tax payment, and choice framing in relation to organ donation. We consider this kind of data in more detail in Chapter 3.

      2.4 Rethinking Data – Challenges and Opportunities for Social Science Research

      New types of data and data gathering are continuing to emerge and will enable new ways of researching social issues, sometimes through links with orthodox intentional data and traditional research designs such as sample surveys. However, these new types of data raise important research design and methodological questions.

      Social science should continue to be about testing theories and hypotheses but it needs to embrace the potential value of new sources of evidence and re-evaluate the existing ones. Below we consider some of the emerging opportunities and challenges for social science research.

      2.4.1 Researcher/Subject Boundaries

      There is a tradition within social science research of involving respondents in the research process and breaking down boundaries between researcher and subject. This tradition has been described as action research and participant research (Bryman, 2013; Emerson et al., 1995; McCall and Simmons, 1969). Here, research is done with participants rather than on them. Moreover, the research might be led or co-led by a particular interest group, such as service users or organization members.

      Extending this, several authors have argued that we are in a time where conventional social science boundaries are being blurred. Elliot (2011) posits that, as the proportion of our lives spent online grows, so the boundary between data and subject becomes less distinct. In the same sense that a person’s real life identity is partially constructed in the memories of others as they interact with him or her, so the person’s online self is partially constructed in the data footprints that they leave, intentionally or unintentionally. The activities of others also contribute to constructing these footprints; for example, a photograph of a person might be in the public domain as a result of being posted online by someone else. The photograph might contain identification information and meta-identity information. Given the apparent socio-technical trends, one need not go as far as the Singularitarians (e.g. Kurzweil, 2005) to acknowledge that this transfer of identity is likely to intensify.

      Along similar lines, Martin (2012) predicts that the distinction between data and analysis will become less clear. Undoubtedly, as we move from datasets to data streams and data arrays, analytical processes will be less divisible from the data that are analysed. Extending this idea, Perceptual Control Theory (see, for example, Marken, 2010) and its analogues suggest that the data collection-analysis-policy impact workflow could eventually become a closed loop system, even to the extent of policy makers having a ‘hands-on’ role in its management. So, rather than researchers analysing data and then the results feeding through into policy impact in a lagged and somewhat ad hoc manner, we might envisage researchers-cum-policy analysts directly intervening in social processes using real time data systems as a tool and combining what, in the past, might have been seen as very different data types and different stages of the conventional social research process.

      On a more immediate and less speculative note, we observe that as more ‘found’ data are used in research, the distinction between primary and secondary data itself becomes less consequential. But the use and legal status of any data for social science research need to be clearly understood by citizens and researchers alike. There are major data literacy and training issues here that need to be addressed (Elliot et al., 2013). This includes how the new types of data and information may be affecting more traditional data types. For example, how are the ways in which people’s attitudes are formulated and expressed changing under the influence of social media?

      2.4.2 Data Quality – Reliability, Validity and Generalizability

      Data quality is a key issue in any form of social science research. Data quality includes the reliability, validity and completeness of the data. In the rush to use new data there is a risk that the core values of social science, including rigorous research design and hypothesis testing, are put to one side. Orthodox social science research has developed quality control mechanisms over the long term to test the reliability, validity and generalizability of its explanations but at present these mechanisms do not easily extend to many new data types.

      Reliability and Validity. A key data quality issue relates to understanding the motivations of the producers of the data and how accurate the data is in relation to its use and the claims that are made from it. For example, a tweet might be generated for fun, to provide information or to persuade or mislead; the motivation obviously affects the meaning of the tweet. With survey data and even, to some extent, administrative data, the impact of respondent motivations is, at least in principle, structured by (or perhaps mediated by) the data collection instrument itself (see Chapter 4). Thus, a well-designed social science research instrument can constrain motivational impact. But this is not so with Twitter data; here people’s motivations are given full rein – a tweet might be designed to manipulate or obfuscate, to attract truth or to repel it. It might be designed to fantasize or ‘try out an opinion’, to provoke a response or simply to create controversy.

      As we have outlined above, the interpretability of tweets is subject to some debate. Verification techniques can be used to check the quality of the data or to profile a person’s tweets in order to assess their veridicality, and some media organizations are already wise to this. This can involve collating and analysing individual people’s tweets over time to look for consistency and changes in attitudes.
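The idea of profiling a person’s tweets over time for consistency can be sketched as follows. This is a deliberately minimal illustration: the keyword lexicon, the stance score and the use of score spread as a consistency signal are all assumptions for the example, not an established verification method.

```python
from statistics import pstdev

# Tiny illustrative stance lexicon (an assumption, not a real resource).
POSITIVE = {"support", "great", "agree"}
NEGATIVE = {"oppose", "bad", "disagree"}

def stance(text):
    """Crude stance score: positive minus negative keyword counts."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def consistency(tweets):
    """Return (mean stance, spread) for a user's time-ordered tweets.
    A large spread might prompt closer qualitative inspection,
    not an automatic verdict on the account."""
    scores = [stance(t) for t in tweets]
    return sum(scores) / len(scores), pstdev(scores)

mean_stance, spread = consistency([
    "I support the new policy",
    "Still agree, a great idea",
    "Actually I oppose this, bad plan",
])
```

In practice researchers would use far richer language models and metadata, but the shape of the analysis — score each post, then examine the distribution over time — is the same.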

      Generalizability. A common concern for social science researchers is what can be claimed on the basis of the data and, specifically, the question of generalizability. Since the development of sampling theory, more data is not necessarily better in terms of explanatory power. A good illustration is the development of random sample opinion polls, in particular by the Gallup Organization in the USA in the 1930s. Gallup’s reputation grew because a random sample survey of several thousand voters predicted election results more accurately than a straw poll of millions of Literary Digest readers conducted with no particular sampling strategy. In the same way, at present, Twitter data is, at best, representative only of Twitter users (complete with fake accounts and performative behaviour) rather than of a wider population. As such, depending on the research question, Twitter data can be either very useful or potentially misleading.
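The sampling-theory lesson can be made concrete with a toy simulation, in the spirit of the Gallup example: a small random sample recovers the population figure, while a far larger sample drawn from a biased frame does not. All numbers here (the 55% support level, the reachability probabilities) are invented for the demonstration.

```python
import random

random.seed(0)

# Synthetic population: 55% support an option (1), 45% do not (0).
N = 1_000_000
population = [1 if random.random() < 0.55 else 0 for _ in range(N)]

# Biased frame: non-supporters are much more likely to be reached,
# as in a straw poll of an unrepresentative readership.
biased = [v for v in population
          if (v == 0 and random.random() < 0.5)
          or (v == 1 and random.random() < 0.2)]

# Small simple random sample from the whole population.
random_sample = random.sample(population, 2_000)

true_share = sum(population) / N
biased_share = sum(biased) / len(biased)
random_share = sum(random_sample) / 2_000

print(f"true support:   {true_share:.3f}")
print(f"biased poll  (n={len(biased)}): {biased_share:.3f}")
print(f"random poll  (n=2000): {random_share:.3f}")
```

Despite being hundreds of times larger, the biased poll lands far from the true figure, while the 2,000-person random sample sits within a couple of percentage points of it — which is precisely why Twitter data, however voluminous, can only represent the population of Twitter users.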

      A tweet in 2013 by a researcher reads: ‘Twitter is of great value to historians as you can analyse and archive public reaction to events’. The question is which public’s reaction? Estimates suggest that over 7 million adults in the

