lists of written vocabulary, including two new general service lists (Browne, Culligan, & Phillips, 2013; Brezina & Gablasova, 2015), two lists of high‐frequency words in academic texts (Coxhead, 2000; Gardner & Davies, 2014), and two of academic collocations (Simpson‐Vlach & Ellis, 2010; Ackermann & Chen, 2013). Corpus software can not only count the occurrence of words but also help to identify vocabulary that is distinctive to particular disciplines, genres, or types of text. Although spoken vocabulary is still greatly underrepresented in corpora, an academic spoken word list (Dang, Coxhead, & Webb, 2017) has recently appeared.
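As a rough illustration of the kind of counting involved, the Python sketch below builds a frequency list from raw text and flags words that are over-represented in one collection of texts relative to another. It is a toy example rather than the procedure of any particular corpus tool, and the simple frequency-ratio comparison stands in for the more sophisticated statistical measures of distinctiveness that such tools typically apply.

```python
from collections import Counter
import re

def word_frequencies(text):
    """Count word occurrences in a text, lowercased, ignoring punctuation."""
    return Counter(re.findall(r"[a-z]+(?:'[a-z]+)?", text.lower()))

def distinctive_words(target_counts, reference_counts, min_freq=5):
    """Rank words by how much more frequent they are (relative to corpus size)
    in the target corpus than in the reference corpus."""
    target_total = sum(target_counts.values())
    reference_total = sum(reference_counts.values())
    ratios = {}
    for word, count in target_counts.items():
        if count < min_freq:
            continue
        target_rel = count / target_total
        # Add 1 to the reference count so unseen words do not divide by zero.
        reference_rel = (reference_counts.get(word, 0) + 1) / reference_total
        ratios[word] = target_rel / reference_rel
    return sorted(ratios, key=ratios.get, reverse=True)
```

Applied, say, to a set of discipline-specific research articles as the target and a general corpus as the reference, a function of this kind would tend to place technical terms of that discipline near the top of the list.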
It should be pointed out, though, that not all vocabulary measures involve preselection of the lexical items to be assessed. With measures of comprehension and use of vocabulary, described later in this entry, it is texts or tasks that are selected rather than words.
Design of Assessments
Once the target lexical items have been identified, the other main issue is the design of the assessment items or tasks. The choice of design involves a variety of considerations but, to provide a framework for the discussion, let us take the widely recognized distinction between receptive and productive knowledge of vocabulary. Although the exact nature of this division is still debated by scholars, all language users have a strong sense that the number of words they can understand is rather larger than the number that they use actively in their own speech or writing. Read (2000, pp. 154–7) further subdivides the distinction into recognition versus recall and comprehension versus use, which can provide a useful basis for classifying types of assessment and reflecting on what kind of vocabulary knowledge is actually being assessed.
Recognition
The first level of vocabulary knowledge involves whether learners can recognize and attribute meaning to a lexical item when it is presented to them in isolation, based on the idea that the core element of word knowledge is the ability to establish a link (which ultimately should be automatic) between the form of an L2 word and its meaning. This kind of test has often been criticized for encouraging learners to engage in too much supposedly unproductive study of decontextualized word lists, but there is a strong counterargument that the learning of high‐frequency vocabulary using mnemonic techniques is an efficient means of establishing a foundation for rich vocabulary development (Elgort, 2011; Nation, 2013, pp. 437–78). This issue aside, recognition‐type tests have been shown to work effectively for a variety of assessment purposes.
The simplest form of recognition assessment is the yes/no format, in which the test takers are presented with a set of words and are simply asked to indicate whether they know the meaning of each one or not. The list might begin as follows:
bag
ill
predict
estle
seminar
broccoli
sanglous
Obviously this format depends on self‐report and is unsuitable for higher‐stakes assessment situations in which learners have a vested interest in overstating their knowledge. It includes an indirect validity check in that a certain proportion of the items are not actual words (like estle and sanglous in the list above), which provides a basis for adjusting the scores of test takers who claim knowledge of such words. Research shows that the effectiveness of the format varies to some extent according to the learners' background, but it can be a very useful means of estimating vocabulary size and even a quick method of indicating the learner's level of language competence.
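One simple way of making such an adjustment concrete is sketched below in Python. This is a generic illustration in which the proportion of nonwords a test taker claims to know is treated as an index of overclaiming and subtracted from the proportion of real words claimed; it is not the scoring formula of any particular published yes/no test, and published instruments typically apply more elaborate corrections.

```python
def adjusted_yes_no_score(responses, real_words, nonwords):
    """Estimate the proportion of real words known, discounted for overclaiming.

    responses: dict mapping each presented item to True ("yes, I know it") or False.
    real_words / nonwords: the two subsets of presented items.
    Returns a value between 0 and 1.
    """
    hit_rate = sum(responses[w] for w in real_words) / len(real_words)
    false_alarm_rate = sum(responses[w] for w in nonwords) / len(nonwords)
    # A test taker who says "yes" to many nonwords is assumed to be
    # overstating knowledge of real words by a similar margin.
    return max(0.0, hit_rate - false_alarm_rate)

# Example with the seven-item list above: "estle" and "sanglous" are the nonwords.
responses = {"bag": True, "ill": True, "predict": True, "seminar": True,
             "broccoli": False, "estle": True, "sanglous": False}
score = adjusted_yes_no_score(
    responses,
    real_words=["bag", "ill", "predict", "seminar", "broccoli"],
    nonwords=["estle", "sanglous"],
)  # hit rate 0.8, false alarm rate 0.5, so the adjusted score is about 0.3
```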
A more widely used approach to assessing recognition knowledge is to require the test takers to show that they can associate each L2 word with an expression of its meaning, which may be in the form of a synonym or short definition in L2, or—if the learners share a common language background—an equivalent expression in their own language. The classic format of this kind is multiple choice, where typically the target word is presented in a short, nondefining sentence, together with four possible synonyms or definitions. For young children and other nonliterate learners, an oral version of multiple choice is the picture–vocabulary format, in which the test taker listens to a word as it is spoken by the person administering the test and chooses which of four pictures represents the word meaning.
Recall
The second level of vocabulary knowledge involves the ability to recall the target word from memory when prompted to do so. This is the converse of recognition and it is acknowledged to be a more challenging task, requiring a stronger form–meaning link in the learner's mind. Another way in which a recall item demands more of test takers is that, whereas recognition assessment typically requires them to select a response from those provided, with recall the test takers must supply the target word. Thus, one kind of recall task is simply to present the learners with a set of meanings, perhaps in the form of words or phrases in their own language, and require them to provide the L2 equivalent of each one. Tasks such as labeling objects in a picture or processes represented in a diagram are other ways in which the learners' ability to recall vocabulary items can be assessed.
Perhaps the most commonly used form of recall assessment is the gap‐filling task. In its simplest form, the items consist of sentences from which one content word has been deleted, like this:
I went to borrow a book from the ________.
The task is to write in the missing word. It should be noted that this type of item is more difficult to score than a selected‐response item. First, there are often several possible words that can fill the blank, some of which may be more acceptable than others. In this case, library is the intended answer, but librarian, teacher, and shelf also fit the gap. Second, having to supply the word means that test takers may misspell it or provide the incorrect grammatical form. This raises the question of whether the focus of a recall item is simply on meaning, so that a word which fits semantically is acceptable even if it is supplied in the wrong form, or whether the gap‐filling task provides the opportunity to assess various other aspects of word knowledge as well. These include not only spelling and grammar but also, for more advanced learners, aspects such as collocation and idiom, as in these examples:
To stay awake, they drank several cups of ______ coffee. [strong]
Several people claimed to have seen the missing girl, but these were all red _______. [herrings]
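The scoring decisions described above can be made concrete with a small sketch. The Python scorer below is purely illustrative, not a description of how any actual test is marked: it accepts a response only if it appears in a predetermined answer key, and it uses a rough string-similarity check to flag near misses, such as misspellings or wrong grammatical forms of an intended word, for a human marker to adjudicate.

```python
from difflib import SequenceMatcher

def score_gap_response(response, accepted, near_miss_threshold=0.8):
    """Score one gap-filling response against a list of accepted answers.

    Returns ("correct", matched_word), ("near_miss", closest_word),
    or ("incorrect", None). A near miss is a response that closely
    resembles an accepted answer, which may indicate a misspelling
    or the wrong grammatical form.
    """
    answer = response.strip().lower()
    if answer in accepted:
        return "correct", answer
    closest = max(accepted, key=lambda w: SequenceMatcher(None, answer, w).ratio())
    if SequenceMatcher(None, answer, closest).ratio() >= near_miss_threshold:
        return "near_miss", closest
    return "incorrect", None

# For the library item above, the test designer must decide in advance which
# of the semantically possible words to accept; this key is hypothetical.
accepted = ["library", "librarian"]
print(score_gap_response("libary", accepted))     # ('near_miss', 'library')
print(score_gap_response("librarian", accepted))  # ('correct', 'librarian')
print(score_gap_response("shelf", accepted))      # ('incorrect', None)
```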
Recall tasks can be based on paragraphs or longer texts, with multiple deletions, rather than just separate, unrelated sentences with a single gap. Such tasks are often referred to loosely as cloze tests. However, it is important to note that in the standard cloze procedure words are deleted at fixed intervals (e.g., every seventh word) so that all types of word can be omitted, including articles, prepositions, conjunctions, and other function words. A standard cloze obviously draws on vocabulary knowledge but, if a text‐based gap‐filling task is to assess directly the ability to recall lexical items, it should be nouns, verbs, adjectives, and adverbs that are selectively deleted.
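The contrast between the two deletion procedures can be shown schematically. In the Python sketch below, which is only a toy illustration, the first function deletes every nth word regardless of type, as in a standard cloze, while the second skips items on a short, hypothetical function-word list so that only (roughly) content words are gapped; a real implementation would rely on proper part-of-speech tagging rather than such a list.

```python
import re

GAP = "________"

# A toy list of function words standing in for proper part-of-speech tagging.
FUNCTION_WORDS = {
    "the", "a", "an", "of", "to", "in", "on", "at", "for", "from", "with",
    "and", "but", "or", "so", "that", "this", "these", "those",
    "is", "are", "was", "were", "be", "been", "it", "they", "he", "she", "we",
}

def standard_cloze(text, interval=7):
    """Delete every nth word regardless of type (the standard cloze procedure)."""
    words = text.split()
    return " ".join(GAP if (i + 1) % interval == 0 else w
                    for i, w in enumerate(words))

def selective_gap_fill(text, interval=7):
    """Delete only content words (here: anything not on the toy function-word
    list), keeping roughly one deletion per interval of words."""
    words = text.split()
    out, since_last_gap = [], 0
    for w in words:
        since_last_gap += 1
        bare = re.sub(r"[^\w]", "", w).lower()
        if since_last_gap >= interval and bare not in FUNCTION_WORDS:
            out.append(GAP)
            since_last_gap = 0
        else:
            out.append(w)
    return " ".join(out)
```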
Comprehension
If a gap‐filling task can be text based, can it really be considered a measure of vocabulary rather than, say, of reading comprehension ability? It may no longer be seen as a pure vocabulary test, but the underlying issue here is whether vocabulary acquisition is an end in itself or a means for the learner to use the second language more effectively for a variety of communicative purposes. At this point we move on to the second way of distinguishing receptive and productive vocabulary knowledge, namely comprehension versus use. The recognition and recall formats we have just discussed are intended to monitor learners' developing vocabulary knowledge, but they provide at best quite indirect evidence of whether learners can access their knowledge of words and exploit them effectively in performing real language‐use tasks.
In the case of comprehension, we are interested in the learners' ability to deal with vocabulary in texts