A Companion to Chomsky
(25) [CP C [TP T [vP v [VP V ]]]]
The CP domain is the discourse domain where, e.g., topicalized constituents would move, as in John, Mary really likes. The TP domain typically hosts the (derived) subject and certain adverbs, whereas the vP and VP domains together are the domains for argument structure properties. However, frequently formulations such as the following can also be found: “C is shorthand for the region that Rizzi (1997) calls the ‘left periphery,’ possibly involving feature spread from fewer functional heads (maybe only one), […]” (Chomsky 2008, p. 9). This suggests that it is actually rather unclear what the size of the functional architecture across languages is, and how to capture the cross‐linguistic generalizations that can be found.
Ramchand and Svenonius (2014) engage with this controversy (see also Wiltschko 2014). Their argument is that a fine‐grained hierarchy of functional heads cannot be part of UG, as it does not fit with a minimal UG whose origin needs to be evolutionarily plausible. While such hierarchies emerge in some languages, Ramchand and Svenonius argue that they emerge in a highly constrained way. Specifically, there is a core tripartition of the clause into the domains V, T, and C (much like in (25)). Put differently, no language exists that has any different order, which is to say that T never appears above C in the hierarchy, and V never appears above T. In terms of why this hierarchy is the way it is, Ramchand and Svenonius argue that the tripartition has its source in extralinguistic cognition. As the authors state: “the most important source that we identify is grounded, we argue, in extralinguistic cognition: A cognitive proclivity to perceive experience in terms of events, situations, and propositions […]” (Ramchand and Svenonius 2014, p. 172). They continue by saying that “Granted, we have little direct evidence for these posited proclivities apart from the explananda themselves; but at present we do not know of plausible alternatives” (Ramchand and Svenonius 2014, p. 172). In years to come, it will be an important task to develop this approach further and better understand the relationship between the linguistic hierarchies and extralinguistic cognition.
3.6.2 The Nature of Phrase Structure Representations: Labels and Labeling
Across the history of generative grammar, the nature of phrase structure representations and how they are generated have always occupied a pivotal role (see e.g. Lasnik and Lohndal 2013). The development has gone from phrase structure rules to the X‐bar (X') schema in (26a) to the bare phrase structure representation that emerged with the Minimalist Program (Chomsky 1995), shown in (26b).
(26)
Within X‐bar theory, the positions were fixed: A specifier is a sister to X′ (and a daughter of XP), whereas a complement is a sister to the head X.
(27)
In (27), nothing beyond the lexical items is part of the representation, and whether a given element is a specifier or a complement is determined relationally (Chomsky 1995).
The change from X‐bar theory to bare phrase structure was seen as a major success, yet in the past 5–10 years, Chomsky has once again pushed the research frontiers by focusing on the labels themselves and on whether the algorithms need to incorporate facts about endocentricity. In previous models, each phrase had a head, thereby deriving endocentricity. In Chomsky's recent developments, endocentricity is still important for the syntactic derivation and for the interface systems, yet endocentricity is not a fundamental property of phrase structure qua phrase structure.
In deconstructing endocentricity, Chomsky has directed attention to the role of phrase structural labels: Are they needed (cf. Collins 2002), and if they are needed, at which level of the grammar may they be needed, i.e. what role do they play in syntactic derivations? As we have seen above, in Chomsky (1995), the operation merge yielded a labeled syntactic structure. That is, merge(X,Y) yields {L, {X, Y}} where L ∈ {X, Y}.
(28) merge(X, Y) = {L, {X, Y}}, where L ∈ {X, Y}
However, adding the label is an extra assumption: merge in its simplest form takes only two objects and puts them together; there is no labeling (Chomsky 2013, 2015; Chomsky, Gallego, and Ott 2019; Collins 2017). Thus, labeling should be scrutinized carefully to see whether its existence is properly motivated. Chomsky (2013) argues that the simplest merge needs to be supplemented with another operation, which he calls Label. Essentially, this provides a way in which some of the effects of labeling can be accounted for without postulating that constituents are ever labeled. This operation locates the “structurally most prominent lexical item” within a syntactic object (Chomsky, Gallego, and Ott 2019, p. 247). For instance, if we have the syntactic object {H, XP}, where H is a lexical item and XP is a complex object, then H will be the most prominent lexical item. This follows from H carrying a feature that can be located by the search algorithm as the most prominent item, which is to say that prominence is really determined by particular features. Since the search algorithm always looks for the closest possible target, the features of H will be more prominent than the features embedded within the XP.
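The logic of simplest merge plus labeling-by-minimal-search can be illustrated with a small sketch. This is not Chomsky's formal system but a deliberately simplified model: lexical items are represented as plain strings, complex syntactic objects as unordered two-membered sets, and the search for the most prominent lexical item is reduced to checking which member of a set is itself a lexical item. Cases with no unique result, such as {XP, YP} or two bare lexical items, are simply left unlabeled here.

```python
# A minimal sketch (an illustrative assumption, not the actual theory):
# syntactic objects are either lexical items (strings) or two-membered
# frozensets built by simplest merge, which imposes no label of its own.

def merge(x, y):
    """Simplest merge: combine two syntactic objects into an unordered set."""
    return frozenset({x, y})

def label(so):
    """Labeling as minimal search: in {H, XP}, where H is a lexical item
    and XP is complex, H is the structurally most prominent lexical item.
    Where the search finds no unique lexical item (e.g. {XP, YP}), the
    label is left undetermined (None) in this simplified model."""
    if isinstance(so, str):          # a bare lexical item labels itself
        return so
    heads = [m for m in so if isinstance(m, str)]
    if len(heads) == 1:              # {H, XP}: search finds H first
        return heads[0]
    return None                      # {H, H'} or {XP, YP}: no unique result

vp = merge("likes", merge("the", "book"))   # {likes, {the, book}}
print(label(vp))                            # -> likes
```

Note that merge itself never mentions labels: labeling is a separate operation applied to an already-built object, which is the conceptual point of separating the two.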
Chomsky and others have developed this approach to labeling in order to account for a variety of empirical patterns, ranging from movement and constraints on movement to diachronic change (see van Gelderen, Chapter 13 of this volume). We cannot do justice to the rich technical details surrounding labels and labeling, except to make the point that this has become a core area of current developments and that once again Chomsky has provided important contributions in shaping this research.
3.6.3 Extensions of the Theory: Multilingualism
Chomsky (1965) defined the focus of most formal linguistics when he argued the following:
Linguistic theory is concerned primarily with an ideal speaker–listener, in a completely homogeneous speech‐community, who knows its language perfectly and is unaffected by such grammatically irrelevant conditions as memory limitations, distractions, shifts of attention and interest, and errors (random or characteristic) in applying his knowledge of the language in actual performance (Chomsky 1965, p. 3).
Put differently, the monolingual speaker has been given the primary role in formal (and many nonformal) investigations of language (cf. Benmamoun, Montrul, and Polinsky 2013). Concretely, when a native speaker Y of language X is asked to provide judgments on strings of words, the researcher has deliberately ignored the fact that Y may also know other languages to various degrees. It has been argued that this kind of simplification made it possible to create new theories of hitherto unattested complex empirical patterns (Lohndal 2013).
There are several reasons why formal grammar would want to extend its empirical and theoretical scope. Speakers who know multiple languages to different degrees provide another type of data which in turn presents new theoretical opportunities. Such speakers come in different guises, ranging from balanced multilingual speakers to receptive heritage speakers who are able to understand their heritage language but not produce it.13 Given that a core objective in Chomskyan generative grammar is to model what a possible mental grammar is, data from multilingual speakers are extremely relevant (cf. Alexiadou and Lohndal 2016). These speakers also provide examples of what a possible grammar is, which is to say that the ecological validity of generative grammar is increased if it is able to account for what is after all the most common phenomenon today – namely that of being multilingual in one way or another (cf. Kupisch,