Johannes Woschitz
University of Edinburgh
The following text is based on, and where appropriate elaborates on, Woschitz (2019), a paper I recently published and which forms the centrepiece of my PhD thesis. A different title could have been: the structuralist heritage in sociolinguistics. Yet another title could have been: the ongoing clarification of the register concept within sociolinguistics.
Consider the following phenomenon: throughout North America, a range of sound changes – more specifically, phonological changes – have been reported (Labov, Ash, & Boberg, 2006). Speakers of the Inland North are undergoing the so-called Northern Cities Vowel Shift, Canadians are undergoing the Canadian Vowel Shift, the West merges CAUGHT with COT, Philadelphians show fronting of back vowels, and so on. Similar changes have been studied across the Atlantic Ocean. In the Danish spoken in Denmark, for instance, vigorous sound changes are reported to have taken place over the course of the 20th century, which is why Swedes find it increasingly hard to understand their neighbours (see Gregersen, 2003, p. 145).
In their analysis of such phenomena, linguists are faced with at least three challenges. First, the above-mentioned sound changes typically affect thousands, if not millions, of speakers. They don’t usually affect them all at the same time; typically, some social strata “lead” in e.g. the fronting of a vowel, with other strata following suit. Nonetheless, changes are “complete” at some point, namely when even the most resistant members of a speech community have caught up. Second, speakers who are undergoing the change are typically not aware of their changing pronunciation. This fact alone has occupied linguists for more than half a century, and an ironclad solution to the problem of how millions of speakers can unknowingly change their pronunciation is still to be found. Third, changes in the pronunciation of a vowel, such as the tensing of /æ/ in the Inland North, typically trigger changes in the pronunciation of neighbouring vowels in the same phonological subset (ingliding or upgliding diphthongs, long/short monophthongs, etc.). The prime example of this is the Great Vowel Shift, which, down to the present day, is taught at universities as an example of a chain shift in which the long high vowels diphthongised to avoid a merger (even though it remains a moot point whether this was a case of a drag or a push shift, on which see Martinet, 1952, p. 11).
William Labov, the founding father of mainstream sociolinguistics, has spearheaded many routes of explanation for sound change, all addressing the above-mentioned challenges in one way or another. Many of the concepts he introduced are still used in present-day sociolinguistics and phonology, such as changes from above and below (first introduced in Labov, 2006 [1966]), or his so-called “internal” principles of language change that seem to constrain how and in which direction phonological change is likely to happen (laid out in considerable detail in Labov, 1994).
These concepts aim to provide some insight into the inner workings of sound change. The change from above/below dichotomy, for instance, addresses the second of the challenges listed above, in that it distinguishes between conscious and unconscious driving forces behind sound changes. When New Yorkers rhoticise syllable codas (such that “car” [kɑː] is pronounced as [kaɹ]), they do (or did) so to sound high class (Labov, 2006 [1966], chapter 3). Labov found their changing pronunciation to be dependent on the stylistic context: when speakers felt that their pronunciation was being monitored, they adopted a more prestigious pronunciation.
If every sound change could be explained along similar lines (such that every sound change is initiated by people wanting to speak like the upper class), most historical phonologists and sociolinguists would probably be without a job by now – all that would be left to do would be empirical mop-up work, because the “riddle” of sound change would have been solved. As it happens, however, unconscious driving forces are, by a wide margin, more productive when it comes to sound changes. In most cases of sound change, comparable social meaning attributed to the changing variant is practically non-existent. Linguists are thus faced with a phenomenon in which thousands or millions of speakers systematically change their pronunciations (that is, they do so at the level of the phoneme) without being able to put into words that they are doing so, let alone why they are doing it. Worse, social commentary such as “/r/ in syllable codas indexes high class” cannot explain why changes in one variant typically lead to changes in others (as, for instance, in the Great Vowel Shift). Thus, the above-mentioned third challenge remains unaddressed.
Linguists have, for a long time, drawn on structuralist explanations to deal with challenges 1 and 3 (hence the “structuralist heritage in sociolinguistics” alluded to in the opening paragraph). That is to say, the adjustment of neighbouring variables has been attributed to “the inner workings” of the respective language. Labov (2010, p. 396), for instance, holds that the change of a phoneme leads to a disequilibrium in the respective phonological subsystem. Languages, however, “want” their phonemes in equidistant positions (as per Martinet’s 1955 principle of maximum dispersion). (Notice: who is the agent here?) If, so the logic goes, members of a speech community begin to front a vowel, the phonological subsystem of their respective language readjusts itself to ensure equidistance.
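To make the notion of equidistance a little more tangible, here is a minimal sketch – not from Labov or Martinet, just an illustration with invented formant values – of one way dispersion in a vowel system could be operationalised: treat each phoneme as a point in the F1/F2 plane and look at the pairwise distances between them.

```python
import itertools
import math

# Hypothetical F1/F2 values (in Hz) for a five-vowel system. These are
# invented for illustration only, not measurements from any study cited here.
vowels = {
    "i": (300, 2300),
    "e": (450, 2000),
    "a": (750, 1300),
    "o": (450, 900),
    "u": (300, 700),
}

def distance(v1, v2):
    """Euclidean distance in the F1/F2 plane (a crude proxy for perceptual distance)."""
    return math.dist(v1, v2)

# Pairwise distances between all vowel phonemes.
pairs = {
    (a, b): distance(vowels[a], vowels[b])
    for a, b in itertools.combinations(vowels, 2)
}

# One simple operationalisation of "dispersion": the smallest pairwise distance.
# A functionalist account along Martinet's lines would expect changes to keep
# this value large; the Turkish and Mapudungun cases discussed below suggest
# that real vowel systems need not behave this way.
closest = min(pairs, key=pairs.get)
print(f"Closest pair: {closest}, distance {pairs[closest]:.0f} Hz")
```

Euclidean distance in raw hertz is, of course, a very crude stand-in for perceptual distance; the point is only to show what kind of quantity a claim like “languages want their phonemes equidistant” would have to be about.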
In the same vein, languages “constrain” speakers by dictating in which direction a changing variant can go. Every language, as argued in Labov (1994), subdivides its phonetic space (more specifically, the tongue positions when pronouncing a vowel) into a peripheral and a nonperipheral area (following Stockwell, 1966). What counts as peripheral varies from one language to another: for Germanic and Baltic languages, for instance, the peripheral/nonperipheral distinction is congruent with the tense/lax distinction, while, in Romance languages, only front unrounded and back rounded vowels are peripheral (Labov, 1994, p. 601). Under this assumption, a generalisation can be made that peripheral vowels become less open, while nonperipheral vowels become more open (Labov, 1994, p. 262). A speaker of French is therefore unlikely to raise their /œ/, because it is nonperipheral. For a speaker of Danish, raising would be perfectly fine.
A lot in this view evidently hinges on structural principles like the above-mentioned principle of maximum dispersion, or the division of the phonetic space into a peripheral and a nonperipheral area. But none of these concepts are undisputed. Martinet himself was a student of Danish, whose vowel system is far from being equidistantly dispersed (Gregersen, 2019). Mapudungun, a language isolate spoken in Chile and Argentina, doesn’t seem to conform to the principle of maximum dispersion at all (Sadowsky, 2019; Sadowsky, Painequeo, Salamanca, & Avelino, 2013). The same, apparently, holds for Standard Turkish. Figure 1 shows the dispersion of the unrounded vowels of a male speaker of Turkish who grew up in Turkey. Why the gap between his /e/ and /a/? It could easily be filled to maximise contrast, as functionalist explanations along the lines of Martinet would suggest.

Figure 1: Normalised unrounded vowels of a male speaker of Turkish. Ellipses show 30% confidence intervals. Data taken from Woschitz (in preparation).
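For readers unfamiliar with the term, “normalised” in the caption above means that the raw formant measurements have been transformed so that speakers with different vocal tract sizes can be compared on the same plot. One widely used method is Lobanov (z-score) normalisation; the sketch below, with invented measurements, shows the idea. I am not claiming this is the exact procedure behind Figure 1, which is taken from work in preparation.

```python
import statistics

# Invented raw F1/F2 measurements (Hz) for a handful of vowel tokens from one
# speaker. Lobanov normalisation z-scores each formant within a speaker, so
# that vowel plots from different speakers become comparable.
tokens = [
    ("i", 310, 2250), ("i", 290, 2320),
    ("e", 440, 1980), ("e", 470, 2050),
    ("a", 760, 1310), ("a", 730, 1280),
]

def lobanov(tokens):
    """Return tokens with F1 and F2 replaced by speaker-internal z-scores."""
    f1s = [f1 for _, f1, _ in tokens]
    f2s = [f2 for _, _, f2 in tokens]
    m1, s1 = statistics.mean(f1s), statistics.stdev(f1s)
    m2, s2 = statistics.mean(f2s), statistics.stdev(f2s)
    return [(v, (f1 - m1) / s1, (f2 - m2) / s2) for v, f1, f2 in tokens]

for vowel, z1, z2 in lobanov(tokens):
    print(f"{vowel}: F1 z = {z1:+.2f}, F2 z = {z2:+.2f}")
```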
These inconsistencies notwithstanding, Labov (2010, p. 92) referred to the principle of maximum dispersion as a “well-recognized principle” – recognised enough, at least, to shoulder the responsibility of re-establishing a phonological equilibrium after a triggering event such as language contact or a phonemic merger has wreaked havoc.
Only three years later, Labov, Rosenfelder, and Fruehwald (2013, p. 48) found in their study of changes in Philadelphian English that “[n]one of [the observed] movements can be accounted for by structural adjustments to maximize the functional economy of the system”. Some changes, such as the raising of /ey/ in checked syllables, actually move in the opposite direction to what the principle of maximum dispersion would predict!
Rather parenthetically, they conclude that “the long-term evolution of language is the result of […] micro-fluctuations in the social context, controlled by the structural imperatives that govern the production of speech in everyday life” (Labov et al., 2013, p. 61). But are such structural principles strong or valid enough to explain the above-mentioned challenges 1 and 3, namely that sound change typically affects thousands, if not millions, of speakers, and not only the variant in question but also its neighbouring phonemes? How much weight should one attribute to “micro-fluctuations in the social context”?
The short answer is: probably more weight than linguists have been willing to grant. Even when speakers are not consciously aware of their raising of a vowel, their unawareness is probably not enough to deny sound changes (more specifically, changes from below) their social functionality in general (for a more detailed discussion, see Woschitz, 2019). In the same vein, the readjustment of neighbouring phonemes (or the lack thereof) is too complex a process to be simply attributed to “structural principles” like the principle of maximum dispersion, over which speakers seemingly have no control.
Changes like the Philadelphian Shift (Labov et al., 2013) or the Northern Cities Shift (Labov, 2010, chapter 10) are only adequately accounted for when linguists acknowledge (as the above authors do) that sound changes always happen in the context of the formation of new registers. Philadelphians, for instance, had a desire to distance themselves from the South, and therefore moved away from pronunciations associated with the South. The Northern Cities Shift has been theorised along similar lines, namely as an amplification of a cultural dividing line that can be traced back to North America’s settlement history (Labov, 2010, chapter 10). In both cases, sound changes have allowed speakers to position themselves in a “semiotic landscape” (Eckert, 2019) via the “enregisterment” (Agha, 2005) of a register that is perceptually different from others.
From a historiographical point of view, the study of Labov’s œuvre is revealing. In earlier work, structural or “internal” factors were attributed much explanatory value in his accounts of sound change. In recent work, it seems, structural principles are mere explanatory fictions – idealisations at best – that have to bow to the fact that sound changes are best understood in the context of the formation of new registers (Agha, 2006). And registers are always meaningful in the cultural nexus because they communicate cultural differentiation (Gal, 2016), regardless of speakers’ inability to put into words that their /æ/ is 50 Hz more front than their parents’ generation’s average.
References
Agha, A. (2005). Voice, Footing, Enregisterment. Journal of Linguistic Anthropology, 15(1), 38-59.
Agha, A. (2006). Registers of Language. In A. Duranti (Ed.), A Companion to Linguistic Anthropology (pp. 23-45). Malden, Mass.: Blackwell.
Eckert, P. (2019). The individual in the semiotic landscape. Glossa: A Journal of General Linguistics, 4(1), 1-15.
Gal, S. (2016). Sociolinguistic differentiation. In N. Coupland (Ed.), Sociolinguistics: Theoretical Debates (pp. 113-135). Cambridge: Cambridge University Press.
Gregersen, F. (2003). Factors influencing the linguistic development in the Øresund region. International Journal of the Sociology of Language, 159, 139-152.
Gregersen, F. (2019). The role of the situation in providing comparable data for studies of language change and variation. Paper presented at the Language in Context Research Group, University of Edinburgh.
Labov, W. (1994). Principles of Linguistic Change, Volume 1: Internal Factors. Cambridge, Mass.: Blackwell Publishers.
Labov, W. (2006 [1966]). The Social Stratification of English in New York City. Cambridge: Cambridge University Press.
Labov, W. (2010). Principles of Linguistic Change, Volume 3: Cognitive and Cultural Factors. Chichester: Wiley-Blackwell.
Labov, W., Ash, S., & Boberg, C. (2006). Atlas of North American English: Phonetics, Phonology and Sound Change. Berlin, New York: Mouton de Gruyter.
Labov, W., Rosenfelder, I., & Fruehwald, J. (2013). One Hundred Years of Sound Change in Philadelphia: Linear Incrementation, Reversal, and Reanalysis. Language, 89(1), 30-65.
Martinet, A. (1952). Function, Structure and Sound Change. Word, 8(1), 1-32.
Martinet, A. (1955). Economie des changements phonétiques. Berne: A. Francke.
Sadowsky, S. (2019). Using contemporary phonetic and phonological data to shed light on historic contact situations: The case of Mapudungun and Spanish. Paper presented at the Linguistic Circle, University of Edinburgh.
Sadowsky, S., Painequeo, H., Salamanca, G., & Avelino, H. (2013). Mapudungun. Journal of the International Phonetic Association, 43(1), 87-96.
Stockwell, R. (1966). Problems in the Interpretation of the Great English Vowel Shift. Paper presented at the Conference on Generative Phonology, University of Texas.
Woschitz, J. (2019). Language in and out of society: Converging critiques of the Labovian paradigm. Language & Communication, 64, 53-67.
How to cite this post
Woschitz, Johannes. 2019. Language in and out of society: Converging critiques of the Labovian paradigm. History and Philosophy of the Language Sciences. https://hiphilangsci.net/2019/11/06/language-in-and-out/
I think this is a fascinating account of one of the big mysteries of language, and a couple of things occur to me as relevant.

Firstly, how much account is taken by linguists of any stripe of the role of the unconscious in language? So much of our linguistic behaviour is habitual, outward directed, non-language-focused, that it stands to reason most of our “choices” (or perhaps “options” is better, since it doesn’t necessarily imply conscious choice) must be “unconscious” – but exactly what kind of “unconscious” is involved in language use, and what are its implications for language change?

Secondly, it is not often stressed that Saussure, who of course placed language within the larger group of semiological systems, then placed THOSE within social psychology. Where are the linguists who take the social and the psychological equally seriously and incorporate both into a comprehensive model of langue and parole, system and discourse, in language? Mainstream cognitive science seems locked within a very North American model of the autonomous individual that doesn’t allow most of the relevant questions even to be raised, but are there sociolinguists out there who are interested in the psychological aspects of language variation?

Getting back to Saussure again, my feeling would be that it’s only by focusing on the notion of the linguistic sign, that psychological-social hybrid, and trying to understand how it functions both as system and as process, that we can begin to come up with some of the right questions to ask here. Thanks for an inspiring read!