just semantics

examining how and why language, education, and adjacent topics matter

“At any one time, language is a kaleidoscope of styles, genres, and dialects.”

— David Crystal

“Linguistic prestige is not an indication of intrinsic beauty in languages but rather of the perceived status of those who speak them.”

— Sarah J. Shin

If you’ve been around children for any amount of time, you’re sure to have heard some unique utterances — asking what you “buyed” when you “go-ed” to the store, telling you their drawing was “gooder” than yours. While we often associate such occurrences with children, I suspect most of us have caught ourselves realizing we’ve misspelled a word because it “seemed” correct — “calender” rather than “calendar”, swapping “affect” for “effect”, or writing “principle” instead of “principal”. The reason we clock these phrases and spellings as incorrect is “standardization”, a process in which a codified version of a language or group of language varieties is created for use in some official capacity. Standardization is so familiar to most of us that it operates like a fish-in-water scenario — that is, we’re so submerged in it from the beginning that it’s difficult to spot. What would destandardized language even look like? While it may be hard to imagine, I’m here to tell you not only that we can destandardize our language, but that we must.

Part I: The Creation of the Western Standard

In order to understand how and why we must counteract standardization, it’s necessary to have a baseline understanding of where the modern “Standard Language” came from. I imagine that the average person has a rather neutral, harmless, perhaps even positive view of the Standard Language; however, this is far from reality. While this section may seem a bit heavy on exposition, believe me, it’s non-exhaustive.

Western Standardization

While you may be familiar with definitions or spellings changing over time, this process is commonly seen as a series of piecemeal adjustments to an ancient standard that has existed since time immemorial. However, standardization is not necessarily the default state of languages — a language is not a car that mechanics have merely tweaked over the years. “Ancient Greek”, as many refer to it, existed as a cluster of dialects, with none considered more correct or proper than the others, until the emergence of Koine Greek in the 4th century BC. Language negotiation is present even as far back as the period of contact between Sumerian and Semitic languages such as Akkadian. The English language didn’t standardize until the 15th century, when the printing press permitted the proliferation of the Chancery Standard — a version of English used by the bureaucracy in documents intended for the king — beyond the walls of Westminster.

English wasn’t alone in this endeavor. The Spanish language underwent its first form of standardization in the 13th century under King Alfonso X of Castile and León. Though other standards arose at the same time, Castilian won out in the end. Italian was crafted from the Florentine Tuscan dialect employed by Dante Alighieri in the writing of the Commedia in the 14th century. The reach of this new standard would remain limited until the passing of the Coppino Act in 1877, which made schooling compulsory throughout the newly unified Italy. Prior to this, Standard Italian had mainly been a written language, spoken by practically no one. The French language emerged out of local northern dialects and saw instances of standardization in the form of several dictionaries and grammars throughout the 16th century — culminating in the Ordonnance de Villers-Cotterêts, in which François I declared that French was the official language of the Kingdom of France. German wouldn’t see standardization until the turn of the 20th century, when the Second Orthographic Conference drew inspiration from the works of academics Wilhelm Wilmanns and Konrad Duden in order to form one language for “all German-speaking states”.

In the midst of the 17th century, the Treaty of Westphalia contributed to the emergence of many modern European states. As it laid the groundwork for this new nationalism (defined by uniting the aristocracy and government of a place), the goal of these nascent nation-states shifted — to generate social, political, and economic capital by convincing citizens to contribute through the manipulation of their loyalty.

Linguistic Nationalism 

As I detailed in my post on conlang ethics, languages are inextricably linked to the culture of their speakers. This is true for vocabulary, such as the word “candidate”, which comes from the Latin “candidus” (shining white) due to the white togas candidates for office wore. We also see it in phrases which reveal differences in perspectives — such as asking someone what they “do for a living” versus what they “do for work”. These kinds of cultural facets are often the most salient features that the average person uses to distinguish between dialects. A dialect may have a unique term for geographical features characteristic of that locale (holler, in Appalachian English — meaning a small sheltered valley, usually with some water source), or terms that reveal historic interactions (agbada, in Nigerian English — meaning a loose fitting robe worn by men). 

When a standard language is created and propagated by a State, it is removed from the context in which it co-evolved. Instead, these organic influences are replaced by the values and perspectives of the State which has co-opted it, and the language becomes a mechanism for spreading the State’s socio-political culture. The first step of this strategy involves the formation of regulatory bodies. The trend started with the 1583 founding of the Accademia della Crusca in Florence, which went on to publish the standard Italian Vocabolario in 1612, based largely on the work of Dante. Inspired by the Italians, the Académie Française came into being in 1637 when King Louis XIII granted it legal recognition to “give certain rules to [our] language and to make it pure, eloquent and capable of treating the arts and sciences” where “rules will be issued for spelling that will be imposed on all”. Prompted by increasing French influence, the Real Academia Española followed a similar trajectory a bit later in 1713. To this day, these institutions make regular attempts to assert dominion over their respective languages. In the case of the Académie, their battles include gender-inclusive language as well as North and West African influence. Similar disputes have been raised by the Real Academia, particularly regarding gender inclusivity.

Once these regulating bodies were in place, the next step was to minimize local languages. This was explicitly called for by bishop and revolutionary Abbé Grégoire, who insisted in 1794 that the “annihilation of patois” was necessary for the universalization of French. Similarly, Italian left-wing philosopher Antonio Gramsci expressed that Italians should be willing to abandon their “dialects”, lest they be left behind by modernity. This would prove to be a driving philosophy of the 18th and 19th centuries as the West sought to modernize. Though State-run public schooling was driven largely by the class conflict that followed in the wake of several revolutions, it was swiftly re-tooled as a means of statecraft. As historian James B. Collins states:

“We see in this history a transformation of literacy: from a plurality of scriptal practices embedded in a commonplace working-class culture of political dissent, to a unified conception and execution, centered on the school, with deviations from the school norm attributed to deficiencies and deviations in working-class homes, communities, and minds”. 

He goes on to explain how this “universal literacy” becomes synonymous with literacy itself, despite the persistence of other literacies (perhaps I’ll write a post on Freirean literacy in the future…). This reined in the revolutionary attitude, re-establishing power in the State.

In a similar vein, Dr. Stephanie Hackert discusses (in her article Linguistic Nationalism and the Emergence of the English Native Speaker) how language became the single “constituting element of national belonging”. Essentially, the concept of a “native speaker” functioned as a way to validate being born into the citizenry of a nation-state. In order for this to work, States must adhere to what is called “monoglossic language ideology”, in which instances of multilingualism are ignored or punished in order to maintain the veneer of monolingualism in a society. Standard languages came to be used as a litmus test of belonging to the nation — since the State culture was largely artificial. In their article Undoing Appropriateness: Raciolinguistic Ideologies and Language Diversity in Education (2015), professors Nelson Flores and Jonathan Rosa discuss how this construction of the native speaker is often racialized. This categorization led to Irish, Scottish, and Welsh English speakers being relegated to second-class citizenship due to their non-standard Englishes. Dr. Tomasz Kamusella demonstrates the inverse of this idea in the context of Central Europe, where the newly standardized German language was used to stake claims concerning which nations “should” be included in the German nation-state. In his doctoral thesis, Dr. Pontus Lindgren Ciampi demonstrates a similar 19th-century state-building strategy in Eastern Europe through the example of Serbia and Bulgaria. Even today, the French state rejects the “arabization” of the French language through wesh or kiffe, ignoring former Arabic loanwords such as algèbre, alcool, or magasin.

Linguistic Imperialism and Linguistic Violence 

This strategy was integral beyond the bounds of Europe as well, as newly centralized Western states pursued new sources of capital through colonization. In the U.S., there has been a history of linguistic violence from the country’s inception — including cutting off the tongues of enslaved Africans who spoke their first languages, the restriction of/punishment for using Indigenous languages in boarding schools, and the forced implementation of English as the language of instruction in Puerto Rican schools.

The Welsh Not, a physical token of shame for those who spoke Welsh in some schools, mirrored practices in other, later British colonial holdings. The use of English in colonial institutions established it as the language of prestige and, later, of mobility, mirroring the establishment and proliferation of standard languages by the aristocracy a few centuries earlier. Along with the languages themselves, European colonization imported concepts of language standardization into its colonies.

Violence also took the form of crafting writing systems for African languages that manipulated the languages’ sounds in the hope of eliminating sounds unfamiliar to Europeans, which they had deemed “less than human”. Since standardized languages were adopted by States as vehicles for spreading their ideologies, their use as tools of cultural genocide, which became rampant in the age of imperialism, is predictable. This system wasn’t unique to the West either, as evidenced by the use of “hōgenfuda” dialect cards in post-Meiji Japan, particularly in Okinawan schools in the Ryukyu Islands — used to shame speakers of non-Tokyo dialects.

This kind of violence persisted through the 20th century, and it continues to this day. The European Union receives criticism for its exclusion of minority languages and its selective preference for specific, standardized languages — the Council of Europe’s Charter for Regional or Minority Languages expressly excludes “dialects” of Standards and the languages of migrant communities. Though the United States lacks an official language, many attempts have been made to render English the official language of the federal government — most recently in 2005 and 2021.

The appropriation and misuse of AAVE (African American Vernacular English) continues a long tradition of decontextualizing Black speech and canonizing it into Standard American English, all while Black Americans face racist, material disadvantage for speaking in a manner regarded as “less professional” or “less intelligent” than the Standard. Latinx communities in the United States face a similar discrediting when Chicano English is mistaken for “incorrect English” and, simultaneously, “incorrect Spanish”, rather than recognized as a distinct dialect. Casual hatred of the discursive filler “like” or colloquial quotatives like “to be like” — which originated in the Southern Californian sociolect “Valleyspeak” — is regularly employed to discredit women. This tactic is also used against vocal fry, which can be found in other languages as a phonemic feature. (Known as “laryngealization”, creaky voice is used to distinguish the meanings of words in languages such as Jalapa Mazatec, as well as Danish.) Southern American English, Appalachian English, and various other dialects are regularly associated with a lack of intelligence or naivety due to their connection to working-class and/or rural communities, and a similar tendency exists in the United Kingdom with regard to its regional and working-class varieties. Despite regularly innovating new terms such as “clock”, “slay”, “shade”, and “tea”, queer and trans communities face regular policing of the terms they use to describe themselves.

Part II: Understanding Destandardization

So now that I’ve established the history of Western language standardization, and the harm born of it, I can present the alternative — destandardization.

What Is Destandardization?

In my experience, people’s initial reaction to the concept of destandardizing is to jeer. “What, so we should just let anyone spell anything the way they want?” This misunderstands destandardization as a series of individual acts of linguistic rebellion, rather than a systematic shift in our perception of and relation to language. Standardization IS a system. When the language of a select demographic of people is established as the “right” form of the language, said demographic will be afforded an advantage withheld from others. In the case of the United States, maintaining white, middle class American English as the standard means that white, middle class Americans are more likely to have already encountered the language of media, of education, of the workplace, of politics — of society. Meanwhile, speakers of other Englishes are left to learn a second form of their language all while being chastised for the version they were raised with. 

If standardization serves to spread and reinforce State power, destandardization is the movement to remove the control the State wields over the use, perception, and validity of language. We weaponize this State-backed power all the time. An interesting challenge to pose to someone whose gut instinct is to defend standardization is to point out that in order to “correct” someone, you have to understand what they meant. When someone says “Where are you at” and you reply “You mean, where are you”, you’re not conveying that you didn’t understand them — you clearly did, because you were able to tell them what they “should” have said. This adherence to the Standard only serves to position you above them.

A core tenet of destandardization is the communicative principle of language, which emphasizes the ability to communicate ideas, expressions, or thoughts as the primary function of language. As a result, language can’t be deemed “right” or “wrong”; rather, it can be deemed “successful” or “unsuccessful”. This is the systemic shift in priorities needed to create changes across the board. It leads to more effective communication across dialects and accents — and eliminates the preferential treatment of any one of them.

(Interestingly enough, when this principle is applied and State influence is diminished, mutual intelligibility becomes one of the key markers of language/dialect differences. This can really rock our perception of languages as demonstrated in NativLang’s video on asymmetrical intelligibility, and Name Explain’s video on mutual intelligibility between Scandinavian languages.)

What Does It Look Like?

What might destandardization look like in practice? The classroom — the domain I’m most familiar with — is rife with opportunities to destandardize. A child is telling a story and uses the word “chile” (AAVE) rather than “child” (“Standard”), or incorporates “güey” (Chicano English) to refer to a friend. The difference between reinforcing and dismantling standardization lies in how we respond to these scenarios.

Responding with “Say child, instead” or “don’t use that, that’s not English” invalidates the student’s language and culture, deprioritizing them for the sake of the Standard. If there’s any confusion, you approach it like you might any linguistic confusion: “I’m sorry, I’m not familiar with that term, what does it mean?” or “Some English speakers use ‘chile’ instead of ‘child’. It’s similar in many ways and different in others.” This can be applied to grammar as well — communicatively, there’s simply no need to correct “Where are you at?” or “I ain’t had no dinner”.

You may have been on board so far, but what about writing? This is where many people have a hard time wrapping their mind around destandardization, because “correcting” is so ingrained in our reading brain. Look no further than the comments section of a YouTube video, or the replies to a tweet, and you’ll see several snappy “*you’re” and “*there”. Despite being the target of some of the most normalized corrections, homophones (words that sound the same but are spelled differently) are some of the most glaring sirens of the absurdity of standardization. For many English speakers, there is little if any functional difference in the pronunciation of their, there, and they’re — and even less so between to, two, and too. Yet, we don’t normally struggle to ascertain which is meant in speech. I’d argue that this is similarly true in writing. If the message is communicated, why “correct”?

What About…?

Now that the idea is out there, I can address some common critiques of destandardization.

What if I don’t understand what they said? 

This is certain to happen. Truly, it’s a wonder that we’re able to communicate at all, ever. Language is a massive game of telephone where speakers try to externalize their thoughts to others through limited biological (vocal cords, manual gestures, and cavities) and tactile (writing implements) means. Not only that, it is done across space, across time, and across contexts — complicated by social norms and expectations. You WILL miscommunicate. The goal is not to avoid miscommunication; it’s to mitigate the effects of a miscommunication. Your objective isn’t to establish dominance over another speaker by flexing your employment of the arbitrary standard. Rather, it’s to understand and be understood.

When what you mean is “I didn’t understand what you said”, say just that. “I’m not familiar with that term” and “I’ve never heard that word before” both demonstrate your willingness to learn from others and establish that you are not looming over them. This happens within dialects all the time; it’s not too different cross-dialectically. When encountering a non-standard spelling, “I think you spelled that wrong” wields the Standard against your interlocutor. If they pronounce the word that way, is that spelling wrong for them? Instead, “I’m unfamiliar with this spelling, what are you referencing here?” indicates that the goal is to understand, and that you do not believe your dialect to be better or more “right” than theirs.

Flores and Rosa, mentioned earlier, define this outlook as “additive approaches” which “promote the development of standardized language skills while encouraging students to maintain the minoritized linguistic practices they bring to the classroom”.

What if I’m an academic, or a professional?

The contexts I’ve mentioned so far have mostly been classroom settings, but how does destandardization work elsewhere? For example, I’m writing this article in what many would consider Standard American English — why don’t I write it in my own dialect?

Throughout history, lingua francas (common languages) have been established for the purposes of commerce, diplomacy, and other intercultural interactions. Though people may retain their own language in their everyday lives, or even in their communities, an additional language may be used to facilitate communication across spaces where many languages are spoken. While some may claim that standardized languages serve this purpose for dialects, this is not usually the case. As we’ve seen, standards elevate the language of the elite at the expense of others. This happened first in proto-colonial contexts within France or the British Isles, and later in (post-)colonial contexts, where a Standard was imposed outright. If a standard is imposed in this way, erasing local languages, how could it ever operate as a genuine lingua franca?

Co-Creation

As we’ve covered, the main issues with Standardization are 1) that it operates contrary to the fluid, evolving nature of language, and 2) that it implicitly tends to reinforce State hegemony through imposition. Would it be possible for standardization to occur without these two problems?

In the 1970s and 80s, the opening of a new school for deaf children in Managua attracted students from many different areas. While these children primarily used a variety of different home sign systems, they quickly co-created a new, standardized system known as Lenguaje de Signos Nicaragüense (LSN), which later evolved into the Idioma de Signos Nicaragüense (ISN) as new students arrived and acquired it.

Lorenzo Dow Turner, building on and alongside the work of many others, demonstrated how the influences of several African languages spoken by enslaved Africans trafficked to the East Coast of the United States were standardized into Gullah — a language still spoken by Black communities along the coast today. A similar process occurred between French and several Niger-Congo languages in Ayiti — becoming Kreyòl. Turner would go on to be known as a seminal figure in dialectology, creole linguistics, and African American studies.

Closed Practice

A significant point is frequently raised concerning destandardization in the context of Indigenous and Minoritized languages. On one hand, the creation of standardized forms for the purpose of revitalization is often criticized for decontextualizing Indigenous languages — subjecting them to a Western-style language-as-resource treatment that co-opts them. In many cases, Indigenous languages carry Indigenous knowledge, whose dissemination is meant to be closely guarded by specific community members. On the other hand, some Indigenous activists and linguists argue that without standardization, Indigenous languages are deliberately kept subordinate to standardized Western languages.

Here, I return to the emphasis that co-creation must be a communal practice. The standard should not be imposed. In (post-)colonial contexts, this means that settler communities should not dictate the standardization status of Indigenous or Minoritized languages. A lot of advocacy is being done for models of learning Indigenous and Minoritized languages in ways that do not strip them from their context or commodify them — including this article by linguists Susan Chiblow and Paul Meighan. 

Applications

As stated earlier, the implementation of standardized languages carries in itself a cultural violence. Today in the United States, however, we’re seeing that cultural violence morph into physical and structural violence in several areas as the State weaponizes language further and further — ICE has arrested US citizens for not speaking English, a college has threatened to fail students who include pronouns in their profiles, and English has been made the sole official language of 28 states despite the United States having no official language at the federal level. Queer, trans, and non-conforming people continue to have the way they speak about themselves and each other policed and punished by the State.

Destandardization not only serves to defang the State’s weaponization of language but also to facilitate productive communication — where we see each other and hear each other as we are instead of how we’re told we should be. Paulo Freire’s book Pedagogy of the Oppressed is an excellent read for anyone interested in understanding how the State sees and defines “literacy”, and to what end language and education can be used to empower participation in society and self-emancipation.

~Jon

This brief post serves as an introduction to Just Semantics, what inspired it, and its purpose. I’m starting this blog not only due to the fact that I like to yap, but also because I’m passionate about several “niche” topics which have suddenly become more broadly relevant in light of recent events. Such topics include linguistic justice, language revitalization, education, pedagogy, community building, etc. Since I have experiences and perspectives regarding these areas, it may be useful to share them.

The Title of the Blog

The turn of phrase “It’s just semantics” is employed to dismiss subtle discrepancies in meaning between various descriptions of a situation. While the phrase traditionally invokes a reductive definition of “just” (simply, only), my usage is quite the opposite. In the context of this blog, “just” indicates “fair” and “equitable”. Rather than dismiss discursive differences, I hope to draw attention to the ways we discuss such topics. My hope is that this collection of accessibly-academic writing can introduce and elaborate on these subjects in a constructive way. Perhaps I can demonstrate how widespread their impacts really are.

Whether you find this blog entertaining, informative, provocative, or a mixture of the three, thank you for taking the time to read it over. I look forward to covering new topics as they arise.

(I probably should have posted this first, but I was eager to cover my first topic — conlang ethics.) 

Sounds, Foreign

Often when discussing bias in worldbuilding media, discourse focuses on representation of race in fictional settings — usually pertaining to Fantasy, Historical Fiction, and Sci-Fi. On her website, Fantasy and Sci-Fi writer N. K. Jemisin points out that the implementation of orcs in many Fantasy settings serves as a way to create one-dimensional, mindless, evil characters against whom violence has no moral repercussions. For WIRED, games industry and gaming culture writer Cecilia D’Anastasio elaborates on the prevalence of such racism in worldbuilding. Citing Helen Young, author of Race and Popular Fantasy Literature, she reflects on decidedly white depictions of elves in archetypal Fantasy narratives, and the “anti-Black, antisemitic, and Orientalist stereotypes” that are prevalent in the works of genre heavyweights such as Tolkien.


“I like to pronounce it…”

Jemisin published her post in 2013, while D’Anastasio’s article was written in 2021, demonstrating that this discourse has existed for around a decade if not longer. However, it seems rare that this conversation touches on an integral aspect of worldbuilding — the conlanging community, a group of linguists and non-linguists alike who create languages for fun, for study, or for works of media. One controversy, though, was able to break through and elucidate the prevalence of this problem among conlangers. American Fantasy author Rebecca Yarros published Fourth Wing, the first novel in her Empyrean series, in May 2023. In anticipation of the book’s sequel, a TIME article by Moises Mendez II lays out the author’s rise to fame, facilitated by online communities on TikTok. With this rise, however, came a wave of criticism, sparked by an interview in which Yarros was asked to clarify the pronunciation of several common phrases in her novel — which took clear inspiration from Gàidhlig/Scots Gaelic. Among the examples are “Basgiath” (a military school the characters attend), which Yarros pronounces as buzz-guy-eth, as well as “Teine” (the name of a character’s dragon), which she pronounces as “tine”.

As pointed out by user Muireann (@ceartguleabhar), both of these terms are based on real Gàidhlig words or compounds — “bàs” + “sgiath” and “teine”. Besides mispronouncing both terms, which would more accurately be “bahs-skee-uh” and “cheyn-yuh”, Yarros also mispronounces the name of the language she derived these terms from, as Gàidhlig is pronounced “gaa-lick”. Yarros’ pronunciation refers instead to Gaeilge, the endonym of the Irish language. As Muireann puts it, “She’s just sprinkling Gaelic words in there to add a bit of spice to a Fantasy book.”

While this may be the most recent and salient example of linguistic malpractice in Fantasy worldbuilding, it is by no means the only one. Several other, much more severe examples fly under the radar of many conlangers and consumers of fiction media alike.

Sounds, Mystical

Yarros’ wholesale implementation of Gàidhlig words in her worldbuilding is the strategy of a non-linguist; however, there are several ways a writer can go about creating a language for their work. Two very common methods, both involving phonology or sounds, are “reduction/modification” and “blending”. “Reduction” was the strategy of Fantasy forefather J.R.R. Tolkien, who listed the Cymraeg (Welsh) language as the influence for Sindarin, the language of the Elves. While Tolkien did engage in the same kind of tit-for-tat lexical borrowing that Yarros did, most of his influence is found in the sounds of Sindarin. While slashing 9 of the language’s 31 consonants, he maintained ones that are characteristically absent from British English (RP), such as “lh” /ɬ/, “rh” /r̥/, and “ch” /χ/. The second strategy, “blending”, was used by linguist David J. Peterson, who has constructed languages for Game of Thrones and the CW’s The 100. In an interview for Reactor, Peterson describes Dothraki (the language of a nation of nomadic warriors) as sounding like a mix of Arabic and Spanish, citing various features taken from either language.

We’ve established “who” is being borrowed from in many of these cases. This is the conversation most present in spaces where conlang discourse occurs. However, the unexamined question remains — the “why”.

Tolkien has been regarded as the architect of a deep, polylingual, polycultural world — claims that I support. However, I think it remains important to consider the implications of an English author employing the aesthetics and sounds of Cymraeg to construct a world that would appear mythical (see: fictional) to his audience. (It should be noted that, unlike Yarros, Tolkien possessed a thorough understanding of Welsh phonology and phonotactics — the language’s sounds and how they worked. Unfortunately, the possession of this background knowledge has become less common in contemporary works. If you’d like to learn more about it, Welsh YouTuber TheWelshViking recently put out a video that expounds on the way this trend has evolved from Tolkien into modern ‘romantasy’.)

The Dothraki are depicted as aggressive pillagers, raiders who take what they want by force, enslavers who put little value on human life. Peterson has said that he ignored this characterization when fleshing out the Dothraki language for the show; meanwhile, the author of the show’s source material, George R.R. Martin, states that the Dothraki nation was inspired by “several plains cultures such as the Alans, Sioux, Cheyenne, and other various Amerindian tribes”, along with the Mongols and Huns. Though he makes clear that any similarity to Arab cultures is coincidental, the use of Arabic as an inspiration for the Dothraki language — especially in juxtaposition to the world’s other languages such as Valyrian (which is partially inspired by, and functions similarly to, Latin) — draws on a historic Western perspective on these cultures, even if unintentionally.

Sounds, Alien

This phenomenon is not only present in Fantasy, but in Sci-Fi as well. Everyone’s favorite, might we say ‘space orcs’, the Klingons of Star Trek present a similar conundrum. “tlhIngan Hol”, or “Klingon”, boasts one of the largest fan communities outside of the conlang community — possessing its own moderating body and several published works. However, the origin of the language is often left unexamined. 

Prior to the creation of Klingon, American linguist Marc Okrand worked with Indigenous American languages, specifically the now-dormant Mutsun. This context casts a shadow on much of what Okrand has said in regard to creating Klingon. In a 2009 Slate article, linguist and author Dr. Arika Okrent describes Okrand’s task for Star Trek as creating a language that “was supposed to be tough-sounding, befitting of a warrior race” with characteristics such as “rough”, “crude”, and “violent”. It’s clear that the goal was to create a language that came across as foreign — inhuman. Okrand himself acknowledges this. His admission that his background working with various minoritized and/or Indigenous languages contributed to his design for Klingon leaves a sour taste in the mouth once you see the languages Okrent associates with Klingon — Hindi, Arabic, Tlingit, Yiddish, Japanese, Turkish, and Mohawk. The author of an article describing a UC Santa Cruz course designed to examine constructed languages comments that “By learning common features of spoken languages, Okrand devised a language with distinctly alien characteristic.” What does it mean to categorize /t͡ɬ/ (featured in Nahuatl, Cherokee, Tlingit, Ladin, and Tswana), or /q͡χ/ (featured in Adyghe, Uzbek, and Avar) as “alien”?

Another powerhouse Sci-Fi franchise that has recently re-entered the public’s eye is Frank Herbert’s Dune. I am by no means the first to point out the clear Islamic/Arabic influence on the worldbuilding in Dune. Knowing the narrative’s purpose as an allegory of Western dependence on oil, the connections are at least understandable. However, it’s crucial to critically examine Herbert’s one-for-one use of the Caucasian language Chakobsa in his story. Several words are adopted for use unchanged, much in the way Yarros employs Gàidhlig. (As with Game of Thrones, David J. Peterson was tasked with expanding this conlang for the novel’s transition to the silver screen; the expanded version, which implemented additional influences, is usually referred to as “Neo-Chakobsa”.) This worldbuilding strategy is meant to provide the audience with recognizable archetypes that allow them to identify socio-political dynamics in the narrative. However, this implementation is not unidirectional. It’s necessary to recognize what happens when these strategies start to operate in reverse — influencing our understanding of our own world.

Underpinnings 

All of the examples described above were created either toward the end of the colonial era or in the era of post-colonialism, which has undeniably influenced their construction and implementation. In his article Unraveling Post-Colonial Identity through Language, Dr. Rakesh M. Bhatt explains that the function of language education in British colonial holdings was to alter the culture of those who were granted access to education; the colonizers believed that teaching English to those they deemed “less civilized” would alter the culture of the local inhabitants. This exposes the colonial philosophy that language is an arbiter of some “civilized” quality — simply, some languages are civilized, while others are not.

Even in what some may call more “neutral” examples of linguistic scholarship that came later, this line of thinking is still evident. In a 1940 article titled Science and Linguistics, published in MIT’s Technology Review, American linguistic anthropologist Benjamin Lee Whorf insisted that speakers of Inuit languages had an expanded understanding of snow compared to English speakers due to the variety of terms the languages used to refer to snow in different contexts. Similarly, he claimed speakers of Hopi could not perceive the passage of time due to their language’s lack of tenses. On page 28 of his 1950 text An American Indian Model of the Universe, Whorf even describes the Hopi perspective as being “mystical”. Mirroring the dehumanizing construction of orcs as mentioned by Jemisin, the narrativization of the Hopi in this way serves a similar purpose. The mythologizing of Indigenous and minoritized cultures serves the aims of white supremacy by reducing vast demographics of people down to one-dimensional, simpleminded beings against whom violence has no repercussions.

This weaponization of the link between language and culture is the exact mechanism being invoked when worldbuilders, authors, and conlangers carelessly craft their narratives. It is naive to claim that linguistic features such as phonology (sounds), grammar, and verbiage exist in a vacuum, when our perception of them is always relative to us and the language we speak. The “common” language spoken by the protagonists found in many of these fictional settings is simply English reskinned. Just as Whorf neglects to examine the fact that English also lacks a grammatical future tense (and thus, by his logic, its speakers can’t conceive of the future either), these characters are written for a white, Western audience. Characters who speak “common”, such as in Game of Thrones, are not othered by their language — regarded as foreign or alien. Rather, they’re “just like you”.

The authors and linguists mentioned in the article above — Yarros, Tolkien, Martin, Okrand, Peterson, and Herbert — are either English or American. Regardless of their individual intentions, their perspectives emanate from within the imperial cores of the post-colonial era. In their selective plucking of features from Indigenous and minoritized languages, they decontextualize them and display them as exemplars of the “foreign”, “alien”, and “mystical” to an often global audience. When viewers, readers, or fans interact with these languages in real life, there’s a risk that they’ll associate them with the dehumanized, magical, or vicious depictions they encountered in fictional narratives. This does considerable harm in contexts where Arabic may already be wrongfully associated with violence due to Islamophobia, or may bolster arguments that Cymraeg is a language of a forgotten past — when in reality it’s spoken by several hundred thousand people. It is not only unjust, but dangerous, to draw on Indigenous and minoritized languages when seeking to depict something as foreign to your audience.

Moving Forward

So what is the point of acknowledging these systemic ties? Am I calling for the end of constructed languages in media? Certainly not. As a conlanger myself, I think conlangs offer a rich way for audiences to engage with a text in depth and share that interest with a community. I’m not making any claims about the beliefs of the authors I mentioned in this article. This is the result of a systemic issue, and I believe the best way forward is to be aware of its presence and work against it.

When constructing a language, beyond asking “what” you are drawing inspiration from, it’s important to ask “why”. When seeking to depict a warring culture, what draws you to your influences? What implicit connections might your audience draw from that influence? Is it necessary in order to achieve your narrative end goal? As an audience member, I encourage you to ask similar questions of the media you consume. If a certain constructed language or description evokes a certain image, why is that? Is it due to your own experiences? Is it intentional on the part of the author? If the author lists a specific source as an influence, how much do you know about that source? Is your knowledge of that culture being more influenced by this fictional work than by non-fiction sources?

It’s important to keep in mind that life influences art AND art influences life. We face yet another instance in the United States where racial profiling, stereotypes, and deceptive narratives are being perpetuated to cause harm to specific communities. Media can reinforce such ideas that exist in our culture, and if we know that those ideas can be racist or xenophobic, we should be swift to critique them. It is irresponsible to consume fantastical media uncritically once we are aware of how it contributes to the understanding of our own world.