A rebuttal of classical semantic theory: word meanings are not fixed
Rosch’s work on prototypes presents a body of experimental data on how people interpret the meanings of words, data that cannot be readily explained by the decompositional approach of classical semantic theory.
Words are the fundamental building blocks of language, the channels through which internal thoughts are conveyed to the external world. A natural question therefore arises: what is the meaning of a word? Or, to put it differently, how do words link with their corresponding concepts? Linguists and philosophers have been trying for ages to answer this question, and many theories have been proposed.
1. Classical Semantic Theory
In classical semantic theory, a word’s meaning is viewed as fixed: it remains unchanged across different usages in various situations. In other words, words are a fixed set of tools, and anyone with adequate fluency in a language should know exactly which tools to use when trying to express the concepts in their mind.
In order to describe the meanings of words, Nida (1975) and Katz & Fodor (1963), along with several other European and American linguists, developed componential analysis. This analysis claims that every word in any human language can be decomposed into a finite number of indivisible, universal, meaningful subsets. These subsets are the building blocks of words in languages, and different combinations of them give words different meanings.
For example, “bachelor” can be decomposed into [+HUMAN] [+ADULT] [+MALE] [-MARRIED]. It is worth noting that the labels used here, such as HUMAN, ADULT and MALE, are not words; they are representations of particular concepts, and each such concept is believed to be indivisible (not further divisible into more basic subsets), universal (understood by all human beings regardless of their personal experiences and upbringing) and constant (relatively stable and resistant to change). On this view, every word can be decomposed into its own set of indivisible meaningful subsets, and since those subsets are relatively stable, the meanings of words cannot and will not change easily.
If every word, as classical semantic theory describes, can be decomposed into these universal elementary basis sets, this is equivalent to saying that there is a clear cut around the boundaries of word meanings. Moreover, it implies that words sharing the same basis sets stand on common ground with respect to those sets. The argument is easier to follow in set-theoretic terms. According to the decompositional approach, for words x and y we can write x = {a₁, a₂, …, aₙ} and y = {b₁, b₂, …, bₘ}, where each element is an elementary meaningful subset; if x and y have certain similarities in semantics, they share certain subsets, which we denote by S = x ∩ y. First of all, this notation eliminates any possibility of ambiguity: a word either belongs to S or it does not. There is no grey area in which it is controversial to determine whether a word belongs to S. However, as we see in nature, not everything exhibits a clear boundary. Secondly, for words sharing the same basis set S, in this case x and y, the fact that S is absolute and invariable implies that, when we are prompted with the concept represented by S, our brains should treat x and y equally. However, as we will soon see, this implication has been found problematic by the experiments on prototypes.
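To make the set-theoretic formulation concrete, the following sketch works through one small example. It reuses the decomposition of “bachelor” from above together with “spinster”, which introductory treatments of componential analysis commonly decompose as [+HUMAN] [+ADULT] [-MALE] [-MARRIED]; the particular feature labels are illustrative rather than canonical.

\[
\begin{aligned}
x = \textit{bachelor} &= \{+\text{HUMAN},\ +\text{ADULT},\ +\text{MALE},\ -\text{MARRIED}\}\\
y = \textit{spinster} &= \{+\text{HUMAN},\ +\text{ADULT},\ -\text{MALE},\ -\text{MARRIED}\}\\
S = x \cap y &= \{+\text{HUMAN},\ +\text{ADULT},\ -\text{MARRIED}\}
\end{aligned}
\]

On the classical view, membership in the concept represented by S is all-or-nothing: a word’s decomposition either contains S or it does not, and any two words whose decompositions contain S, here “bachelor” and “spinster”, should be treated as equally good instances of that concept. It is precisely this all-or-nothing picture that the prototype experiments below put under pressure.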
2. Prototype Theory
Rosch (1975) formulated prototype theory from experimental data obtained from students at the University of California, Berkeley. In this experiment, subjects were asked to rank words within a category according to their judgments of how well or poorly each word fit the category.
For example, in the category “bird”, robin, sparrow and bluejay are considered the “birdiest” birds, whereas owl, goose and peacock are less prototypical, followed by turkey, ostrich and penguin. It is worth mentioning that most people, after thorough consideration, will still agree that all the animals mentioned above, even outliers such as the penguin, should be categorised as birds. The experiment is therefore more than a simple “yes or no” question. It touches on something deeper, namely that different words show varying degrees of agreement with the ideal representation of a category.
The experiment was carefully designed: a preliminary study had been conducted to find appropriate categories and corresponding member words (Rosch, 1973). Subjects were asked to verify statements of the form “A (member) is a (category)”, and their reaction times were used as an indicator of whether a particular member is a good representative of its category.
In this way, the words were very carefully chosen, so that everyone would agree that, technically, they all belong to their category. However, some are better representations of the category than others; or, as the paper put it, some birds are birdier than others. This prototypical effect is a phenomenon often encountered in daily life. For example, if a friend offers me salad for dinner and it turns out to be a bowl of potato salad, I would naturally be surprised, because in my mind the most prototypical salad is a Caesar salad, with a lot of leafy greens and fresh vegetables, maybe topped with a bit of Parmesan cheese. Even though potato salad fits the definition of salad, and I certainly recognise it as a type of salad, it is far from my prototypical salad, and as a result I am startled by the outcome.

This raises a huge problem for advocates of classical semantic theory, because one can easily deduce from the decompositional approach that words like “robin”, “peacock” and “ostrich” will share a common elementary subset, [+BIRD]. However, the approach cannot explain why, when those North American university students were prompted with “bird”, it was a robin that emerged first in their minds rather than the other two, even though everyone would agree that all three belong to the very same category, birds. The experimental results demonstrate that the meanings of words exist on a spectrum: on one end are words closely associated with the prototypes, which require less reaction time to think about; on the other end are words only remotely related to the prototypes, which require more time. Unfortunately, the decompositional approach, and stable-meaning theory in general, fails to address these finer-grained distinctions in the meanings of closely related words. A word either belongs to the category or it does not; there is no middle ground to account for the subtle differences in reaction time among words belonging to the same category. This is a major problem for classical semantic theory, because it simply does not do justice to the complexity of the human brain’s cognitive activity in associating meanings with concepts.
Another issue concerns the meanings of the same word across different cultures, and in particular the prototypes involved. For example, in China, pak choi, together with some other leafy greens, is considered the most prototypical vegetable, whereas in North America the most prototypical vegetable is the pea (Rosch, 1975). Another example is the sandwich in North America versus Western Europe. In North America, when prompted with the word sandwich (as in the utterance “shall we have a sandwich later”), people are more likely to associate it with a toasted cheese sandwich. In the European context, however, people are more likely to expect a baguette with some slices of cheese and ham.
Once again, if meanings were fixed, this would be impossible. How could the very same word, uttered in a different land, give people different expectations? Of course, some advocates of stable-meaning theory may dismiss such distinctions as insignificant, as merely part of a word’s connotations. However, the boundary between connotation and denotation was drawn rather arbitrarily in the first place, which is in turn a strong argument for the innate fuzziness of words: their meanings are so fuzzy that it is impossible to find a clear-cut way to draw a boundary between a word’s “real meaning” and its implications.
Lastly, Putnam (1975) and other advocates of the fixed-meaning theory sense the challenges it faces, and instead of abandoning the theory completely, they try to moderate it to account for the prototypical effect. What they argue is that for most people, ordinary people, words are fuzzy and there is no clean cut around their boundaries; this is why the prototypical effect exists. However, in order to know the “real meanings” of words, one must consult the proper scholars. Therefore, the meanings of words are still fundamentally stable, except that most people use them inattentively and inaccurately. I find this argument extremely illogical, not only because of its elitist attitude towards ordinary people, but also because I believe every fluent language user is an expert in their own language. Taking my personal experience as an example, after learning about the prototypical effect I started a small experiment, asking my friends what their prototypes for various categories are. I realised that everyone had so many ideas to share and things to say about it. That is exactly what I mean by saying that every active user of a language is an expert on word meanings, because otherwise how could they convey their thoughts to others through the daily exchange of utterances? In short, it is unreasonable to assume that only so-called “scholars” truly understand what the meanings of words are.
In conclusion, even though there is a certain innate, simple beauty in the decompositional approach, and perhaps pedagogical value in classical semantic theory, its shortcomings, namely the challenges raised by the prototypical effect, are too prominent to be neglected. To start with, it cannot account for the fact that certain members of a category require less reaction time than others. Moreover, the differences in word meanings that arise from different cultural backgrounds cannot be readily explained by the indivisible, universal, elementary sets of the decompositional approach. Lastly, even though some supporters of the stable-meaning theory have proposed a modified version, it remains unsatisfying. Therefore, based on Rosch’s experimental results on prototypes, the belief that word meanings are stable is decisively rebutted.