The Concise Encyclopedia of Applied Linguistics. Carol A. Chapelle

the computer does not recognize that the test taker has comprehended the input; the scoring algorithm only assigns points for vocabulary (or common synonyms of the vocabulary) found in the input. It is therefore important that test developers who use computers to assess listening recognize the strengths and limitations of this technology.

      An increasing number of researchers support listening assessments that include as much contextual information in the input as possible (Gruba, 1999; Buck, 2001; Ockey & Wagner, 2018). Of particular interest to most of these researchers is that the acoustic signal be accompanied by associated visual stimuli, such as a video that shows the speaker.

      Research comparing test takers' scores on audio‐only assessments with their scores on tests that include visual stimuli alongside the audio input has produced mixed results. Some studies have found that including visuals leads to increased scores (Shin, 1998; Sueyoshi & Hardison, 2005; Wagner, 2010), while others have failed to find a difference in scores for the two conditions (Gruba, 1989; Coniam, 2000). A recent study by Batty (2018) may help to explain these contradictory findings. He found that particular question types are affected in different ways by the inclusion of visuals: implicit items were made much easier by visuals, while explicit items were less affected by visual stimuli. It may be that studies consisting mostly of explicit items failed to find a difference between the audio‐only condition and the condition in which the audio was accompanied by visual information.

      Eye‐tracking research has also provided increased understanding of the processes test takers engage in while attempting to comprehend listening input. Using eye‐tracking technology, Suvorov (2015) considered dwell time (how long eye gaze fixates on a particular visual stimulus) and found that test takers paid more attention to content videos than to context videos. Also using dwell time with eye‐tracking technology, Batty (2016) found that test takers spent over 80% of their time observing facial cues when watching videos.

      Researchers increasingly argue that the aim of assessing listening should not necessarily be to isolate comprehension from other language abilities. These researchers contend that listening is commonly interactive, meaning that most listening includes opportunities to ask for clarification and that listeners are typically expected to respond after listening (Douglas, 1997; Ockey & Wagner, 2018). Other research indicates that listening and speaking cannot be separated in interactive discussions between two or more individuals (Ducasse & Brown, 2009; Galaczi, 2014). As a result of these conceptualizations of listening and these research findings, test developers have begun to create listen–speak tasks (and other integrated listening items), which require both listening and speaking. They contend that it may not be appropriate, or even possible, to measure listening as distinct from oral production in an interactive communication context. Such an approach limits concerns about measuring more than “listening” with listening assessments.

      SEE ALSO: Assessment of Integrated Skills; Uses of Language Assessments; Validation of Language Assessments

      1 Bachman, L. F., & Palmer, A. S. (2010). Language assessment in the real world. Oxford, England: Oxford University Press.

      2 Batty, A. O. (2016). The impact of visual cues on item response in video‐mediated tests of foreign language listening comprehension (Unpublished doctoral thesis). Lancaster University, England.

      3 Batty, A. O. (2018). Investigating the impact of nonverbal communication cues on listening items. In G. J. Ockey & E. Wagner (Eds.), Assessing L2 listening: Moving towards authenticity (pp. 161–85). Philadelphia, PA: John Benjamins.

      4 Brindley, G., & Slatyer, H. (2002). Exploring task difficulty in ESL listening assessment. Language Testing, 19(4), 369–94.

      5 Buck, G. (2001). Assessing listening. Cambridge, England: Cambridge University Press.

      6 Buck, G. (2018). Preface. In G. J. Ockey & E. Wagner (Eds.), Assessing L2 listening: Moving towards authenticity (pp. xi–xvi). Philadelphia, PA: John Benjamins.

      7 Carr, N. T. (2014). Computer‐automated scoring of written responses. In A. Kunnan (Ed.), The companion to language assessment (Vol. 2, Pt. 8, p. 64). Malden, MA: John Wiley.

      8 Condon, W. (2006). Why less is not more: What we lose by letting a computer score writing samples. In P. Ericsson & R. Haswell (Eds.), Machine scoring of student essays: Truth and consequences (pp. 209–20). Logan: Utah State University Press.

      9 Coniam, D. (2000). The use of audio or video comprehension as an assessment instrument in the certification of English language teachers: A case study. System, 29, 1–14.

      10 Douglas, D. (1997). Testing speaking ability in academic contexts: Theoretical considerations. TOEFL monograph series, 8. Princeton, NJ: Educational Testing Service.

      11 Ducasse, A. M., & Brown, A. (2009). Assessing paired orals: Raters' orientation to interaction. Language Testing, 26(3), 423–43.

      12 Dunkel, P., & Davis, A. (1994). The effects of rhetorical signaling cues on the recall of English lecture information by speakers of English as a second or native language. In J. Flowerdew (Ed.), Academic listening: Research perspectives (pp. 55–74). Cambridge, England: Cambridge University Press.

      13 Feyten, C. (1991). The power of listening ability: An overlooked dimension in language acquisition. The Modern Language Journal, 75(2), 173–80.

      14 Freedle, R., & Kostin, I. (1999). Does the text matter in a multiple‐choice test of comprehension? The case for the construct validity of TOEFL's minitalks. Language Testing, 16, 2–32.

      15 Galaczi, E. (2014). Interactional competence across proficiency levels: How do learners manage interaction in paired tests? Applied Linguistics, 35(5), 553–74.

      16 Gruba, P. (1989). A comparison study of audio and video presentation modes in tests of ESL listening comprehension (Unpublished MA thesis). University of California, Los Angeles.

      17 Gruba, P. (1999). The role of digital video media in second language listening comprehension (Unpublished doctoral dissertation). University of Melbourne, Australia.

      18 Harding, L. (2012). Accent, listening assessment and the potential for a shared‐L1 advantage: A DIF perspective. Language Testing, 29(2), 163–80.

      19 Henricksen, L. (1984). Sandhi variation: A filter of input for learners of ESL. Language Learning, 34, 103–26.

      20 In'nami, Y., & Koizumi, R. (2009). A meta‐analysis of test format effects on reading and listening test performance: Focus on multiple‐choice and open‐ended formats. Language Testing, 26(2), 219–44.

      21 Jensen, C., & Hansen, C. (1995). The effect of prior knowledge on EAP listening performance. Language Testing, 12(1), 99–119.

      22 Jung, E.‐H. (2006). Misunderstanding of academic monologues by nonnative speakers of English. Journal of Pragmatics, 38(11), 1928–42.

      23 Koyama,
