The Concise Encyclopedia of Applied Linguistics. Carol A. Chapelle

may need to comprehend the source texts and plan a response on the topic, but does not have to integrate the texts in the product of the assessment. In content‐responsible integrated tasks, both the process and the products require skill integration, and, therefore, the rating rubric should include criteria for assessing the test takers' use of more than one skill. The third test type considered is integrated assessment, which includes several test sections that are thematically linked (Esmaeili, 2002). For example, a section assessing reading comprehension would include a text that is also the topic for a subsequent writing prompt.

      In addition to these three types of integration, other tasks that may be considered integrated are also being used to assess language. Some are new, while others are not but are being viewed in a new light. For example, story‐completion writing tasks have been used in language acquisition research for some time; however, scholars are exploring the potential for these tasks to elicit integrated reading‐into‐writing performances (Wang & Qi, 2013). Another familiar task, short‐answer questions in a reading test, can be considered for its assessment of both writing and reading (Weigle, Yang, & Montee, 2013). Although the writing in these tasks is much shorter, they afford a means to assess integration for lower proficiency students who may not be able to produce a written essay. Another way to vary integrated skills assessment is to reverse the direction of the skills in a task. For example, asking writers to free‐write on a topic before reading texts that delve into it can activate background knowledge to support comprehension (Plakans et al., 2018). There is great potential for continued innovation or reframing of language tasks to elicit skills integration.

      Researchers have attempted to understand integrated tasks by comparing test takers' performances on them with their performances on tasks requiring only one skill. Research comparing independent and integrated writing task performance has found that overall scores show similarities (Brown, Hilgers, & Marsella, 1991) and are positively correlated (Sawaki et al., 2013; Zhu et al., 2016). Yet closer investigation of discourse features has revealed some differences, in such features as grammatical accuracy, development, and rhetorical stance (Cumming et al., 2005). For example, in studying the prototype TOEFL iBT task, Cumming et al. (2005) found that integrated task responses were shorter, but used longer words and more variety in words when compared to independent writing tasks. The independent writing responses were scored higher in certain rhetorical features, such as the quality of propositions, claims, and warrants.

      Studies investigating the test‐taking process across task types have found evidence that some test takers follow a similar approach for both independent and integrated tasks, while others treat integrated tasks as requiring synthesis and integration strategies, such as scanning the text for ideas to include in their essay (Plakans, 2009; Barkaoui, 2015). However, the study of test‐taking processes on two different integrated tasks also revealed differences across tasks: Ascención (2005) found that read‐and‐respond writing tasks required more planning and monitoring than a read‐and‐summarize task.

      The profession's current interest in integrating skills for assessment rests on their apparent authenticity. Particularly for specific purposes, such as assessing academic language, needs analyses of language use have shown that skills are used in tandem rather than in isolation (e.g., Leki & Carson, 1997). Thus, including this integration in assessment creates test tasks that appear authentic because they align with real language‐use contexts. The connection between the test and the real world is intended to strengthen test users' confidence in the scores, increase test takers' motivation, and yield scores that are more predictive of future performance. Integrated assessments that provide test takers with content or ideas for their performances may mitigate nonlanguage factors such as creativity, background knowledge, or prior education, or a combination of these (Read, 1990). Some research has reported that test takers prefer integrated tasks because they understand the task topic better than they do on single‐skill tasks and may generate ideas from the sources given (Plakans, 2009). However, Huang and Hung (2013) found that actual performance and anxiety measures did not support test takers' perceptions that integrated tasks lower anxiety in comparison with independent speaking tasks.

      Another advantage with this kind of assessment is the emphasis on the skills working together rather than viewing them as individual components of language ability. Integrated assessment may fit well with current language‐teaching approaches, such as task‐based language teaching (TBLT), which move away from teaching separate skills to focusing on accomplishing tasks using language holistically. Such tests may also have a positive washback, or impact, on classrooms that integrate skills, focus on content and language integrated learning (CLIL), or have goals for specific‐purposes language use.

      Although visible benefits exist with integrating skills in assessment, a number of challenges remain, such as developing high‐quality integrated tasks, rating learners' performance appropriately, and justifying the validity of interpretations and uses.

      Although several studies have found assessment of integrated skills tasks can lead to reliable rating (Ascención, 2005; Gebril, 2010), the issue of scoring these performance‐based tasks remains difficult. The rubric for integrated skills assessment needs to reflect skill integration in some way unless there is a clearly dominant skill that is of primary concern, such as with stimulus tasks or thematically linked tasks that do not require a content‐responsible response. Thus, a clear definition of the role of the integrated skills and what constitutes evidence for them in the performance is needed for meaningful scoring. The example below presents a detailed rubric checklist for assessing integrated reading and writing skills.

