Munich Center for Mathematical Philosophy, LMU, Munich
The question how language, a sequence of events in spacetime, can have meaning — which seems not to be in spacetime — has puzzled philosophers since antiquity, though it only came to dominate philosophy explicitly over a century or so of wide-ranging developments that culminated, provisionally, in the work of the later Wittgenstein. Philosophers continue to discuss these questions, and to read Wittgenstein, but they also seem largely unaware that their central questions about the nature and possibility of meaning, which they approach in the abstract, as conceptual questions, have also become a thriving subject of empirical research. This post briefly explores some consequences of this new research for philosophical questions about meaning.
One relevant context for such questions has always been the puzzle of how to coordinate two aspects of language, its subjective or cultural aspect (“meaning” in the sense of what a sentence, or any other object or action, “means to me”) and its computational aspect (“meaning” in the sense of generative linguistics or in the sense of a computer program, in which a syntactic structure built up from atomic components is endowed – or not – with an analogously compositional semantics). The subjective aspect is inherently more salient with respect to natural languages, while the computational aspect is more in the foreground where constructed languages are concerned, especially axiomatically constructed (or otherwise clearly defined) languages of mathematics or computation. What they gain in precision, though, such languages sacrifice in rhetorical power and suggestiveness; there would appear to be a trade-off between the cultural and the computational aspects of language. However, while this Janus-faced property of language has been apparent to all major theorists of language from Locke to Frege and Saussure, their interest in language has usually focussed mainly on only one of the two aspects at the expense of the other, and theories of the coordination between these two aspects have remained quite superficial. The few attempts at bringing these aspects into some sort of relation to each other have usually tried to reduce one of them to the other. The later Wittgenstein (followed in this respect by ordinary language philosophers) tried to bring logic and constructed languages more generally within the scope of social practices and the natural languages mediating them, while Chomsky tried to bring ordinary language within the scope of the combinatorial.
The empirical research of recent decades on these subjects has mostly avoided this kind of reductionism. It does usually focus on only one of these aspects or kinds of language — naturally enough, because you have to start somewhere, and where you start generally influences which kinds or aspects of language you consider. There is, on the whole, a recognition in most of this research that both aspects (and both kinds of language) exist, but little explicit discussion of the relation between them.
I will focus here on only one research project from each of the two sides, one concerned with ordinary language and one concerned with more artificial constructed languages: N.J. Enfield’s (2015) investigations of how meaning is arrived at in the ordinary languages spoken in Laos, and Edwin Hutchins’s (1995) well-known work on the context in which constructed languages are employed in the navigation of large ships. Neither of these approaches is reductionistic in the sense that it regards one or the other of the two kinds or aspects of language as fundamental, or that it regards one as parasitic on the other. But nor does either really pay attention to the aspect it is not mainly focussing on. Each of these two approaches sees the emergence of meaning in the kind of language it studies as traceable to the utility of the respective language system to the society in which that system is used. Each has an account of how the linguistic subsystem it studies is embedded in an overall social system that maintains it. Also, each has an account of the subjectivity of each participating individual’s perspective within the social network that nonetheless gives rise to objective meaning.
However, they focus on very different scales of linguistic activity, so there are some obvious surface differences between them. Enfield (2015) looks closely at how particular words function in specific practical situations. He shows how the minimization of effort (on the model of Gigerenzer’s “fast and frugal” heuristics) leads individuals to widely different hypotheses about word meaning, but as these are confronted with evidence of actual use in the course of language learning and everyday practical interaction, public word significations converge and become sufficiently precise for purposes of ordinary communication. The utility of a word for this interpersonal communicative purpose, he argues, not the utility of its referent for any other practical or social purpose, is what underpins a word’s meaning and keeps it in circulation. The utility that motivates meaning is that of the signifier, not of the signified.
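This convergence mechanism can be illustrated, very loosely, with a toy “naming game” simulation. This is my own sketch, not a model Enfield himself proposes; the agent count, word inventory, and update rules are all illustrative assumptions. Agents begin with divergent hypotheses about which word names a concept; in each interaction, a failed hearer adopts the speaker’s word as a new hypothesis, while a successful pair prunes its hypotheses to the agreed word:

```python
import random

def naming_game(n_agents=20, n_words=5, max_rounds=20000, seed=0):
    """Toy sketch of Enfield-style convergence (illustrative parameters):
    agents start with divergent hypotheses about which word names a
    concept; repeated pairwise interactions drive the whole population
    to a single shared word."""
    rng = random.Random(seed)
    # Each agent starts with one randomly guessed candidate word.
    lexicons = [{rng.randrange(n_words)} for _ in range(n_agents)]
    for step in range(max_rounds):
        speaker, hearer = rng.sample(range(n_agents), 2)
        word = rng.choice(sorted(lexicons[speaker]))
        if word in lexicons[hearer]:
            # Success: both prune their hypotheses to the agreed word.
            lexicons[speaker] = {word}
            lexicons[hearer] = {word}
        else:
            # Failure: the hearer adds the speaker's word as a hypothesis.
            lexicons[hearer].add(word)
        # Converged when every agent holds the same single word.
        if len(lexicons[0]) == 1 and all(lex == lexicons[0] for lex in lexicons):
            return step + 1  # interactions needed for full convergence
    return None  # did not converge within max_rounds

rounds = naming_game()
```

Under these (admittedly crude) assumptions, self-interested local accommodation alone, with no central enforcement, suffices for population-wide convergence on a shared convention, which is just Enfield’s point about ordinary language.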
Hutchins (1995), meanwhile, is more concerned to show that the practice of navigation is a widely distributed expertise, not localizable in any particular individual brain but requiring the coordination of many individuals with different areas of specific expertise, transmitted by apprenticeship, within the framework of various languages in which concepts directly relevant to navigation are encoded. The picture is a Vygotskian one, of a widely distributed social software being installed, via a learning process of active assimilation, in the individual hardware (with machine code) of those who participate in it. There is room in this picture for both of the kinds of language we’ve been discussing, ordinary natural languages and constructed languages, but the relation between them is not specifically probed. Natural languages serve as the social interface for everyday communication while the navigation-specific concepts employed by (or in the background of) navigational practice are encoded in constructed languages, but these are not specifically discussed in their relation to the natural language in which they are mediated to their practitioner-users, and in which the concepts are embedded in the social interactions surrounding their practical application.
So while there are certain analogies between Enfield’s and Hutchins’s empirically detailed pictures of language and the basis of meaning, there is still a gap between the characterization of meaning in ordinary languages and the characterization of meaning in constructed languages. Even in Hutchins himself there is more or less that same gap between his characterizations of natural language and constructed language. But in Hutchins there is also a level of explanation that is missing in Enfield, perhaps because Enfield’s attention is devoted almost exclusively to the basis of meaning in ordinary language. For Enfield, the emergence of meaning proceeds directly from individual utilities to social outcomes, as in economic or other rational-choice models, with no intermediate levels of explanation. For Hutchins’s Vygotskian account, in contrast, social practices play an essential role, at an explanatory level between that of individual rationality (e.g. effort minimization in the generation of hypotheses about meaning) and social outcomes (e.g. convergence of word meanings to a sufficient degree of precision for successful interpersonal communication).
I would suggest that, to bridge this gap, it could make sense to think of evolved natural languages and constructed languages as two different kinds of system, between which, however, one could recognize a continuum of intermediate gradations. Instead of considering evolved and constructed languages categorically, then, one could think in terms of degrees of constructedness.
This is perhaps easiest to discuss from Enfield’s perspective, which emphasizes utility more explicitly and systematically. One can concede everything he says about natural languages (the subjectivity of the individual perspective, and also the mechanism of convergence on relatively clear meanings when these are useful from a social actor’s viewpoint) — but still note that he leaves out an important class of utilities: those that involve the creation, i.e. rule-evolution, of a new artificial language for a particular common purpose among a subgroup. We don’t know much about the early beginnings of such more constructed languages. Perhaps the earliest tendencies toward such artificial languages manifested themselves in the practice of the law, which evolved over millennia. Much more sudden and epoch-making (though not at the time of much worldly significance) was the invention of geometry and the discovery of mathematical proof (Netz 1999). Also known in antiquity were accounting systems and (in various antiquities around the world, as Hutchins explains at length) systems of navigation. If we put ordinary language at one endpoint, and geometry at the other, then on a scale of constructedness of languages available in western antiquity the languages of law, navigation, and accounting would be in between those two endpoints, perhaps in that order.
Enfield’s account of convergence, which works brilliantly for natural languages, is less convincing for more artificial systems. It seems (though this obviously would require a broad program of detailed empirical exploration to investigate) that such systems require actual enforcement of some kind. Enfield too invokes enforcement, but makes a persuasive case that in the convergence of natural-language meanings, the enforcement is almost entirely self-enforcement, guaranteed by a convergence of interests, since language users have a common interest in conforming to established word usage to make their participation in society possible. On a larger scale than particular words, there is an analogous convergence; however widely human phenotypes differ along countless dimensions (and however divergent their subjectivities), they also recognize a common interest in a framework of communication, and will therefore tend to defer to equilibria that maximize its communicative efficiency (Enfield does not explicitly invoke optimality theory, but it is clearly in the background of his account).
There is no such automatic or self-propelled convergence in the case of specialized minority preoccupations such as physics or accounting. In such cases, a highly technical and refined system of intermediation lies at the heart of the enterprise, and acculturation to that particular system is essential for anyone to make a contribution to the discussion that other subgroup-participants can recognize as such. Therefore, enforcement in such cases is much more explicit, and is not spontaneous; it is entrusted to organizations that specialize in the enforcement of subgroup norms on new recruits (Campbell 1979, 1986), and in the ejection of recruits from their apprenticeship when these norms are not respected and the use of the discipline’s terms is not well internalized. (This is all plain from Hutchins’s account of navigation as well, but he doesn’t stress this enforcement dimension, which would actually reinforce his picture.)
It may be hypothesized that a language’s degree of constructedness corresponds directly to, and results from, its degree of enforcement. This would be an empirical hypothesis — and in fact one of the motivations for introducing the terminology of constructedness is to make it possible to ask and address such empirical questions.
We can now also, in these terms of constructedness, express the trade-off between the kinds of expressive power available to evolved and constructed languages a little more precisely. For the two endpoints of the scale of constructedness correspond also to two different modes of communicative behavior, between which there are many gradations. Borrowing a term from Malinowski (1923), we can label the behavior corresponding to the less constructed end “phatic” communication, while at the more constructed end we have “literal” communication. Phatic communication need not even use language as a vehicle, though it often does. When it does, the literal, computational aspect of the language is far in the background; the burden of the intended communication is carried by an affective dramatization of which the words are a subordinate, almost arbitrary, part. The speaker may be using words, but the purpose is not to convey literal semantic meaning; it is to threaten, for instance, or to ingratiate, or flirt. Language certainly has “meaning,” in phatic communication, but the meaning is not the literal, semantic content of the speech, it is the meaning conveyed by the overall performance of which speech is a subordinate part. The words hardly matter.
Most language use, most of the time, is undoubtedly phatic. Certainly this is true in ordinary language; more constructed languages would appear to leave less leeway for phatic employment; presumably the more constructed, the less phatic. In ordinary language, though, the phatic and the literal components of language are subjectively co-present, much of the time, and inherently difficult to distinguish. Phenomenologically, the literal and phatic dimensions, together with all the affective associations and other connotations of words and usages, blend seamlessly together into a familiar toolkit of das Zuhandene used to negotiate one’s physical and social surroundings. This is the “user interface” of language: this is how it comes across to its speakers and listeners. This cultural front end acts as a user interface for the (largely) self-enforcing syntactic and computational system of literal meaning-conveyance. How do these components interact? From an evolutionary viewpoint, the cultural, subjective component was there first (Donald 1991, Burling 2005, Tomasello 1999), but does that make the literal, computational part a “mere superstructure” of the cultural part? Or is the computational part autonomous to some degree? These are fundamental questions — not to be addressed here! — and again, one of the motivations for suggesting that we talk in terms of degrees of constructedness is to make it possible to ask them. Without such an apparatus, and the associated complications concerning enforcement, an answer is presupposed, and the question can’t be asked.
The reconsideration of these two perennial aspects of language in terms of constructedness also, finally, yields an empirical (or empirically tractable) restatement of a widely discussed philosophical problem — the question whether constructed languages are “parasitic” on evolved languages, which lay at the root of the differences between ordinary-language philosophers and more formally inclined analytic philosophers (Strawson 1963), and is also at the bottom of (part of) the notorious debate between Quine and Carnap about analyticity. In natural languages, there is (as Quine argued convincingly) no possible behavioral criterion of analyticity; definitions are attempts to give loose guidelines to current (or past) actual use. But in constructed languages (as Carnap responded), definitions can be strict, and more or less precise — think, for instance, of the defined terms in any legal contract, which in the rest of the contract simply mean what they are there defined to mean; no court would think of questioning such definitions unless they are unclear or contradictory.
One remaining question concerns the utility of constructed languages. In Enfield’s very appropriate starting point, the utility in question (that gives rise to converging sharpness of meaning) is the utility to all participants, i.e. all users of the language in question. There are individual differences in utility functions, but the convergences of word meanings are on core functions that are of use to the language-user population more or less as a whole, with no subgroup differences. But as societies get more complex, specialized subgroups crystallize out, performing functions needed or desired in (some or all parts of) the society outside the subgroup itself, and nothing in Enfield’s utility model would appear to prevent its being applied (modulo some attention to the different forms of enforcement involved) to these situations of increasing complexity.
In those cases, an equilibrium results (and can sometimes be sustained for long periods) in which the subgroup provides sufficient perceived benefits to (at least some members of) the wider society that the enforcement of its constructed language and other institutional components of its subculture (sub-form-of-life) is left to an organized elite of the subgroup. This creates a permanent tension pervasive in modern societies: the subgroup is privileged in certain ways (e.g. is compensated well) for tasks that are at best imperfectly understood by the wider society, since a full understanding would require acculturation into the subgroup.
Burling, R. (2005) The Talking Ape: How Language Evolved (Oxford).
Campbell, D.T. (1979) “A Tribal Model of the Social System Vehicle Carrying Scientific Knowledge” Knowledge: Creation, Diffusion, Utilization 1, repr. in Campbell 1988.
Campbell, D.T. (1986) “Science’s Social System of Validity-Enhancing Collective Belief Change and the Problems of the Social Sciences” in D.W. Fiske and R.A. Schweder, eds. Metatheory in Social Science: Pluralisms and Subjectivities (Chicago), repr. in Campbell 1988.
Campbell, D.T. (1988) Methodology and Epistemology for Social Science: Selected Papers (Chicago).
Donald, M. (1991) Origins of the Modern Mind: Three Stages in the Evolution of Culture and Cognition (Cambridge, MA).
Enfield, N.J. (2015) The Utility of Meaning: What Words Mean and Why (Oxford).
Hutchins, E. (1995) Cognition in the Wild (Cambridge, MA).
Malinowski, B. (1923) “The Problem of Meaning in Primitive Languages” in C.K. Ogden and I.A. Richards The Meaning of Meaning: A Study of the Influence of Language upon Thought and of the Science of Symbolism (London), Supplement I, pp. 296-336.
Netz, R. (1999) The Shaping of Deduction in Greek Mathematics: A Study in Cognitive History (Cambridge).
Strawson, P.F. (1963) “Carnap’s Views on Constructed Systems versus Natural Languages in Analytic Philosophy” in P. Schilpp, ed. The Philosophy of Rudolf Carnap (LaSalle, IL), pp. 503-18.
Tomasello, M. (1999) The Cultural Origins of Human Cognition (Cambridge, MA).
How to cite this post
Carus, A.W. 2016. The utility of constructed languages. History and Philosophy of the Language Sciences. https://hiphilangsci.net/2016/06/22/the-utility-of-constructed-languages/