Language is a powerful tool. It creates ideas, shapes opinions, and drives action. Without some form of communication, society could not improve.
Yet one could argue that the opposite is also true. Language can serve as a powerful impetus for the degradation of society. Hate speech can skew perceptions, inflame emotions, and drive people to discriminatory, hateful, and violent action.
For example, during their reign of murder, Hitler and the Nazis called upon “good” Aryans to destroy all associations with “Untermensch” (subhuman) and “Judenschwein” (Jew-pig), the names they used for Jews. In the early months of 1994, Radio Television Libre des Mille Collines spurred Hutu listeners to “exterminate the cockroaches,” referring to the Tutsis.
The aim of these linguistic tricks, and the implicit aim of all hate speech, is to dehumanize the other. Once a group is dehumanized, doing the unimaginably unacceptable to it comes to seem acceptable. Indeed, according to Genocide Watch, such language constitutes an identifiable step on the path toward genocide. So the question arises: if one can stymie the language that prompts the perceptual shift, can one stop the action?
The Sentinel Project, a Canadian group that aims to use social media and other technology to identify early warning signs of ethnic conflict, believes the answer is “yes.” In conjunction with software developer Mobiocracy, it has developed Hatebase, a tool meant to map hate speech in an effort to prevent violence.
Hatebase consists of two features. The first is a database that allows users to classify terms of hate speech according to the region where they are used and the group to which they refer. The second allows users to report sightings, detailing occurrences in which they have heard hate speech used. The hope is that this second feature will provide the context necessary to make accurate judgments.
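In rough outline, the two features described above amount to a term registry paired with a sightings log. The sketch below is purely illustrative; the class names, fields, and the region-count heuristic are assumptions for the sake of the example, not Hatebase's actual schema or API.

```python
from dataclasses import dataclass

# Hypothetical model of the two features: a vocabulary database
# (terms tagged by region and targeted group) and a sightings log
# recording where a term was actually heard, with free-text context.

@dataclass
class Term:
    word: str
    region: str          # region where the term is in use
    targeted_group: str  # group the term refers to

@dataclass
class Sighting:
    word: str
    location: str
    context: str         # background for judging how the term was used

class HateSpeechMap:
    def __init__(self):
        self.terms: list[Term] = []
        self.sightings: list[Sighting] = []

    def add_term(self, term: Term) -> None:
        self.terms.append(term)

    def report_sighting(self, sighting: Sighting) -> None:
        self.sightings.append(sighting)

    def sightings_in_region(self, region: str) -> int:
        # Count sightings of terms registered for a given region --
        # a crude proxy for local hate-speech activity.
        words = {t.word for t in self.terms if t.region == region}
        return sum(1 for s in self.sightings if s.word in words)
```

Even in this toy form, the design choice is visible: vocabulary and sightings are kept separate, so a raw word list can be enriched over time with the contextual reports needed to judge intent.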
Some question whether concrete action could actually result from this data, given the protections of freedom of speech. Additionally, there are concerns about the impossibility of properly contextualizing hate speech. An article in Foreign Policy Magazine raises the Human Stain conundrum—that is, what about words that are hateful only in specific contexts?
The site responds that “[its] goal at this point is to focus on vocabulary as low-hanging fruit, and over time broaden our focus to encompass a broader perspective on hate speech.”
Given that hate speech is, to one degree or another, present in almost every society, the site focuses on regions believed to already have potential for ethnic conflict. While the site’s developers are aware that their data cannot serve as a sole predictor of conflict, they hope it will help prevent violent words from becoming violent actions.