Speech perception

A big focus of this blog has been understanding and producing speech, but something I have ignored up until this point is how speech is perceived. Speech perception deals with hearing, decoding, and interpreting speech. As we will see today, our brains are often not as reliable as we might think.


So rather than just turn this into a lecture about speech perception and the multitude of theories behind it (let’s face it, this is an educational blog, not a university course) I am just going to show off something weird and wild that our brains do and talk a little bit about the mechanics behind it. Alright, so raise your hand if you have heard of the McGurk effect. (Oh wait, sorry. Blog, not lecture)

The McGurk effect is an auditory illusion where certain speech sounds are miscategorized and misheard based on a conflict in what we are hearing versus what we are seeing. We can see this in action by watching the short video below.

So what is actually going on here? The audio that is being played in all three of those clips is exactly the same. You are hearing the same speaker say “ba ba” over and over. But when the audio is played over a video of someone mouthing “da da” or “va va” we are able to hear it as those instead.

Well, as it turns out, this illusion provides positive evidence for something called the motor theory of speech perception. This theory argues that people perceive speech by identifying how the sounds were articulated in the vocal tract, as opposed to relying solely on the information that the sound itself contains.

This motor theory is supported by something like the McGurk effect because we are taking this audio information and supplementing it with what we are visually observing in the video in order to decide what is being said. It also explains why it is easier to hear someone in a crowded or noisy setting if you can look at their mouth and watch them speak as opposed to not being able to see their mouth.

But it’s not as though we follow along with what people are saying by moving our own articulators or imagining how their mouths are moving while we listen to them. Supporters of the motor theory argue that this process is carried out by specialized cells in our brains known as mirror neurons.

A mirror neuron is a specialized neuron in the brain that activates (or fires, if you prefer) under two different conditions: it will activate when the individual performs an action, and it will also activate when the individual observes another performing the same action. In speech, this would mean the same part of your brain that activates when you move your mouth to produce a “ba” sound will also activate when you watch someone else produce a “ba” sound.

With this knowledge in mind, it should be easier to see why we are able to get something like the McGurk effect to occur. If perception of speech is influenced by visual information, and we are observing someone producing a sound that is activating these mirror neurons, it makes sense that our perceptions might change slightly so that what we are hearing matches what we are seeing.


It is important to note that, as I mentioned earlier, this is not the only theory of speech perception that we have right now, and the motor theory is not without its flaws. It relies on a person’s ability to produce the sounds themselves. According to the motor theory, if you were unable to produce a sound yourself, and you could not visually see how the speaker was articulating it, you should not be able to perceive it.

So what about prelinguistic infants? An infant who has not developed the ability to speak yet should not be able to perceive the difference between a “ba” and a “da” without visual assistance because acoustically these sounds are quite similar.

Some studies have used a novel methodology where the infant sucks on a specialized soother of sorts that measures the rate at which they are sucking. Using this soother and presenting the infants with audio stimuli through a speaker (no visual input), researchers have found that presenting infants with novel stimuli causes them to suck faster, while presenting them with familiar stimuli means that they will suck at a slower rate.

So, presenting these infants with a series of “ba ba ba” followed by a sudden change to “da da da” results in an increased sucking rate. These findings contradict the motor theory of speech perception because the infants in these studies are too young to speak on their own and their articulators are not refined enough to produce both a “ba” and a “da” sound. Because the infants cannot produce these sounds at this point, their mirror neurons should not activate, as they would not have developed fully yet.

This is not to say that the motor theory of perception is wrong, though. The fact that we are able to perceive the McGurk effect means that there must be some truth to it. It just calls into question whether this theory captures the whole story. This is something that almost every science deals with at some point. There is almost never a perfect explanation or theory that deals with every problem. If you look hard enough, there will be counterevidence to almost any theory, but it becomes a matter of refining theories as we learn more and more about the way that the world works.

There are many other theories of speech perception that have their own explanations and their own problems. I will likely return to discuss some of the other big ones such as Exemplar theory, but for now I think this is a good place to leave this one.

Thank you for reading folks! I hope this was informative and interesting to you. Be sure to come back next week for more interesting linguistic insights. If you have any topics that you want to know more about, please reach out and I will do my best to write about them. In the meantime, remember to speak up and give linguists more data.

The anatomy of speech


Have you ever thought about how you talk? I don’t just mean the way that you say certain words, or maybe the fact that you slur your words after a few too many drinks. I mean HOW you talk. The anatomy of the mouth and the way that your tongue makes such quick and precise movements is truly fascinating. I also want to issue a pre-emptive apology, because if you are anything like me, after reading this you will spend way too much time being aware of your tongue. But enough of the preamble, let’s just get into it.

If you think about it for too long, tongues are just gross muscular things in our mouths. We use them when we eat to move food around in our mouths and to free food that was trapped between our teeth, and of course they are primarily responsible for tasting. An often underappreciated function of tongues is their involvement in speech. This is not to say that tongues are essential for all speech, but they play a major part in the formation of both consonant and vowel sounds.

For reference, of the 23 English consonants in the International Phonetic Alphabet, only 7 do not directly involve the tongue. But this is just a little taste of what is to come. For now, let’s talk about all of the things we need to classify a sound. When it comes to identifying a sound, there are three things we need to consider: voicing, place of articulation, and manner of articulation.

Voicing is not something that involves the tongue at all, but it is something that we have talked about previously. As a reminder, voiced sounds are produced with your vocal folds being held close together so they vibrate when air passes through them. You can feel this in a word like “zit” by placing your fingers on your neck as you say it. Compare this to a word like “sit” which has a voiceless sound at the beginning. Voiceless sounds are produced by keeping your vocal folds spread open so that there is no vibration.

Moving up from the vocal folds, let’s get back to the tongue. We will begin talking about the tongue by discussing the different places of articulation. The places of articulation are mostly self-explanatory, with names like interdental (between the teeth) and bilabial (involving both lips), but the one we will discuss first deals with the “s” and “z” sounds we have discussed previously. These sounds are classified as alveolar sounds, meaning that they are articulated with the tongue at a place in your mouth known as the alveolar ridge. The alveolar ridge is just behind your upper front teeth, and if you feel around with your tongue, you can feel a small protuberance where the roof of your mouth raises slightly. The diagram below shows a midsagittal cross-section of the oral cavity, with the alveolar ridge and all of the other places of articulation in the mouth.

Places of articulation

Not all of these places involve the tongue, as we previously discussed. All bilabial sounds like “b”, “p” and “m” are produced with only the lips, and the tongue is not involved at all. Sounds like “f” and “v” combine two articulators (the teeth and the lips) to produce sound, and these are known as labiodental sounds which, again, do not use the tongue.

Before we move on to the manner of articulation, I want to talk about “r” for a second. “r” is a unique sound in English because it can be produced in two different ways depending on how you move your tongue. So now is when I ask you: are you a buncher or a curler?

To figure out whether you are a buncher or a curler, there is a simple test you can do. Go grab a toothpick or something similar that you are comfortable putting in your mouth and just poke your tongue as you are producing an “r” sound. If the thing you are poking is the bottom of your tongue, congratulations, that means you are a curler. If you are poking the top, then also congratulations, you are a buncher.

It turns out that an “r” sound can either be produced by curling your tongue tip back toward the rear of your mouth, or by just bunching up your tongue blade toward the back of the tongue. It is important for speech language pathologists to know about this so they can be prepared to teach techniques for both of them. There is no advantage or disadvantage to either technique, bunchers and curlers can both produce “r” sounds just fine.  This is just a weird quirk of our bodies that we can observe.

Now back to the different sounds. Let’s talk about manner of articulation. Manner of articulation deals with the finer aspects of the tongue and how it directly impacts the airflow in the oral cavity. For example, let’s return to the “s” and “z” alveolar sounds that we talked about earlier. These sounds are known as fricatives because they are produced with the tongue very close to the place of articulation, but not touching it, leaving a small amount of space that results in frication in the airflow, hence the name.

So now think about a sound like “t” or “d”. These are both alveolar sounds as well, but they are produced by having the tongue touch the place of articulation and momentarily stop the airflow entirely. Unsurprisingly, these are called stops. Now what about a sound like “n” or “m”? When you produce these sounds, you are producing them like you would a stop, but you can feel a little bit of reverberation in your sinuses as you do. These sounds are nasal stops, and they are produced by lowering the velum at the back of your oral cavity, which allows the air to flow into your nasal cavity and resonate there.
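The three-way classification we have built up (voicing, place, manner) can be pictured as a small lookup table. Here is a toy Python sketch covering just the handful of sounds discussed above; a real phonological description would of course be far richer:

```python
# A toy classification of a few English consonants discussed above,
# stored as (voicing, place of articulation, manner of articulation) triples.
SOUNDS = {
    "s": ("voiceless", "alveolar", "fricative"),
    "z": ("voiced", "alveolar", "fricative"),
    "t": ("voiceless", "alveolar", "stop"),
    "d": ("voiced", "alveolar", "stop"),
    "n": ("voiced", "alveolar", "nasal stop"),
    "p": ("voiceless", "bilabial", "stop"),
    "b": ("voiced", "bilabial", "stop"),
    "m": ("voiced", "bilabial", "nasal stop"),
    "f": ("voiceless", "labiodental", "fricative"),
    "v": ("voiced", "labiodental", "fricative"),
}

def describe(sound):
    """Return a phrase like 'voiced alveolar fricative' for a sound."""
    voicing, place, manner = SOUNDS[sound]
    return f"{voicing} {place} {manner}"
```

Notice that “s” and “z” differ only in the first slot (voicing), while “t” and “n” differ only in the last (manner); that is exactly the three-way system doing its job.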

The amazing thing about all these actions is that they are not things that you actively need to think about to do. In fact, you probably put zero thought into how this works until you read this post. Our bodies can do all of this effortlessly and automatically.

As always, this is just a brief overview. We don’t have time to get into all the different places and manners of articulation. We will likely return to talk about more unique language sounds (like clicks), but for now, I think this is a good place to leave it.

Thank you for reading folks! I hope this was informative and interesting to you. Be sure to come back next week for more interesting linguistic insights. If you have any topics that you want to know more about, please reach out and I will do my best to write about them. In the meantime, remember to speak up and give linguists more data.

When you can’t see the sentence for the trees

Syntax is the greatest subfield of linguistics, and I say this as a syntactician with absolutely zero bias (wink wink). The field of syntax cares about the ordering of words in a sentence, and the operations that took place to create that word order, known as the derivation. The thing that I love about syntax is that it is basically a series of math and logic problems. We can take a sentence and work backwards from it to learn how it was constructed. Now, before everyone panics about the fact that I am trying to equate math and language, let’s all just take a deep breath and I will walk us through how a syntactic derivation works.


Before we start talking about full sentences though, we need to start a little bit smaller. We will start by talking about verbs. A verb, as you already know, is an action word that tells us what happened, what is happening, or what will happen depending on the tense used. Think of the verb in a sentence as the conceptual seed of a sentence. There are three major verb types that we will talk about today that are separated based on how many arguments they have. These verb types are transitive, intransitive, and ditransitive.

When you picture a basic sentence with a subject, a verb, and an object, you are likely picturing a transitive verb. Transitive verbs are verbs that have two obligatory arguments (the subject and the object). What this means is that the information contained in a sentence with a transitive verb includes: an action (the verb itself), the thing that is doing the action (the subject), and the thing that the action is happening to (the object). For instance, the verb ‘discuss’ is a good example of a transitive verb. ‘Discuss’ needs at least two arguments (nouns in this case) in order to create a grammatical sentence. This means that you are not able to say something like “John discussed.” or “Discussed the contract.”; you would need to say “John discussed the contract.” (Note that you can say “Discuss the contract!” This is a null subject imperative, though, and we will talk about those another time.)

So, to bring the math connection back around: one way that we can represent a sentence like “John discussed the contract.” is by using brackets. It would look something like this:

                [John [discussed [the contract]]]

Now I am simplifying things quite a bit from how it would actually be represented, but for the purposes of this article it’s good enough. Let’s break down the bracketing so you can see why it is organized like this. In the innermost layer, we have “the contract”, which is the thing that is being discussed. It is contained within the bracketing for the verb “discussed” because there is a connection between these two words. On the outermost layer, we have “John”, who is the one doing the discussing. “John” has the verb contained within his bracketing because he has the same sort of connection to the act of discussing that the act of discussing has to the contract. Again, this is simplifying things a bit, but I am just trying to explain why we have this embedded bracketing as opposed to something like [John][discussed][the contract], where there is no clear connection between any of the words.
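If you like, you can even play with this embedded bracketing directly as a data structure. Here is a small Python sketch (purely illustrative, not anything a syntactician would actually use) that stores the bracketing as nested lists and reads information back out of it:

```python
# The bracketing [John [discussed [the contract]]] as nested lists.
sentence = ["John", ["discussed", ["the", "contract"]]]

def depth(node):
    """How many layers of brackets deep does the structure go?"""
    if isinstance(node, str):
        return 0  # a bare word has no internal structure here
    return 1 + max(depth(child) for child in node)

def words(node):
    """Read the words back out in their linear order."""
    if isinstance(node, str):
        return [node]
    return [w for child in node for w in words(child)]
```

Running `words(sentence)` recovers the flat string “John discussed the contract”, while `depth(sentence)` tells you the structure is three brackets deep, which is exactly the information that the flat version [John][discussed][the contract] throws away.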

Now, let’s branch out to the other verb types. Intransitive verbs are verbs which have only one obligatory argument rather than two. Intransitive verbs are a little bit tricky because they are actually divided into two subtypes: unergatives and unaccusatives. Full disclosure: I constantly get tripped up on the difference between these two because it is subtle and a little unintuitive, but it is my hope that maybe teaching people about it on the internet will also help to clear it up in my head!

The biggest difference between these two verb types deals with whether their subject is semantically an agent or not. If you kick something or hit something, then YOU are a semantic agent in that case because (it is safe to assume) you are doing those things intentionally. Conversely, if you fall, it is not likely that you are doing this on purpose, so we can say that you are not the agent in this case but rather the experiencer; the one who experiences the fall.

Unergative intransitive verbs are single-argument verbs that have only an obligatory subject (which is a semantic agent) and no obligatory object. I have used the word obligatory twice in this sentence to really drive home the fact that there is a difference between an object that needs to be there and one that does not. Take the sentence “John ate the cake” for example. This sentence has two arguments, “John” and “the cake”, but the fact that it has two arguments does not make the verb transitive. It is perfectly grammatical and acceptable to simply say “John ate” and leave “the cake” off. This is because “the cake” is an optional argument in this sentence, or what we might loosely call an ADJUNCT in syntax terms. Adjuncts are optional elements that can be removed without making the sentence ungrammatical.

So we see that a verb like “eat” is a great example of an unergative verb because you only need to specify the thing that is doing the eating, and you are not required to specify the thing that is eaten. Other good examples are things like “run” or “walk” because they are things that require intent and agency to do, but there is no need to specify where you are running or walking to. You can simply just specify that movement is occurring and leave it at that.

An unaccusative intransitive verb is a verb whose subject is not a semantic agent. The best example of an unaccusative sentence would be something like “the tree fell” or “the window broke”, because things like trees and windows do not have any agency and are certainly not falling or breaking of their own accord. It is also important to note that unaccusative verbs cannot have any type of object after the verb.

Both unergative and unaccusative verb sentences have simple bracket representations like this:
                [John [ran]]

                [The tree [fell]]

The third type of verb that we will talk about is the ditransitive verb. A ditransitive verb is a verb that requires three arguments (two nouns and a preposition usually) in order to be grammatical. Take the verb “put” for instance. You can’t use “put” as a transitive verb and simply say something like “John put the book”, you also need to specify where the book was put! This is why we would need to make it “John put the book on the table” instead. The labeled bracketing for these ditransitives gets a little more complicated:

                [John [put [the book][on the table]]]

Now keep in mind that these are just simple sentences, but you can imagine as they get bigger and more complex that the labeled bracketing will become very hard to read. Luckily, syntacticians have figured out a more visually pleasing way to represent these sentences that serves the same purpose. Allow me to introduce you to the sentence tree:

John put the book on the table

These trees are drawn with a program known as LaTeX, which is a typesetting system used in many scientific and academic settings. If you haven’t worked with LaTeX before, it is hard to describe, but it is essentially a halfway point between typing and programming. For example, for the sentence tree above I had to provide LaTeX with a command to draw the tree, and that command makes use of the bracketing that I have been talking about all along. Here is the code that I used:

\documentclass[12pt]{article}
\usepackage{qtree}

\begin{document}

\Tree [ John [ put [{the book} {on the table} ] ] ]

\end{document}

So essentially, this command is taking the same information that is contained in the bracketing and turning it into a visual representation that allows us to easily see how all of the parts connect without having to count all of the brackets by hand.

Now my biggest fear at this point is that my supervisor will somehow stumble across this post and think less of me for these trees. It is at this point that I will remind you that there is SOOOOOOOO MUCH that I am glossing over. I am just trying to give a brief overview of the things that I do so that people like my mom will have a better understanding of them. There are entire four-month university courses dedicated to almost all of the subjects that I talk about on this blog, so I can’t cover them all in detail, but if people are interested, I will certainly keep writing more. I am a syntactician at heart and I could go on about this stuff until I pass out, but I don’t want to keep you here forever either.

This is a logical stopping point for this one, but I am sure that I will be returning to syntax again in the future so keep an eye out for that. Thank you for reading folks! I hope this was informative and interesting to you. Be sure to come back next week for more interesting linguistic insights. If you have any topics that you want to know more about, please reach out and I will do my best to write about them. In the meantime, remember to speak up and give linguists more data.

What makes a language? (Part 4)

For the past few weeks, I have been discussing the design features of language, and how we can use them to compare our communication systems to those of animals. Today we will discuss the final language design features and how they set us as humans apart from all the other animals.

First, let’s talk about the idea of discreteness. Discreteness is the idea that spoken language can be broken down into individual parts that can be combined in many different ways to create many words and phrases. If you say a word like “pan”, all you are really doing is saying a combination of a “p” sound, an “a” sound, and an “n” sound. These same sounds also exist in other words with completely different meanings. The “p” sound also shows up at the end of the word “rip”, while the “a” sound can show up at the beginning of “apple”, and all three of these sounds show up in different places in a word like “eggplant”!
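To get a feel for how quickly a small inventory of discrete sounds multiplies, here is a quick back-of-the-envelope Python sketch. The tiny inventory is a toy illustration, not real English phonology:

```python
from itertools import product

# A toy sound inventory: 3 consonants and 2 vowels.
consonants = ["p", "n", "r"]
vowels = ["a", "i"]

# Every consonant-vowel-consonant "word" we can build from it:
# 3 x 2 x 3 = 18 possible combinations, including real words
# like "pan", "pin", "rip", and "nip".
cvc_words = ["".join(w) for w in product(consonants, vowels, consonants)]
```

With just five sounds and one syllable shape we already get 18 candidate words; English has dozens of sounds and far more syllable shapes, which is why the combinatorics feel effectively limitless.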


The point here is that we have infinite ways to combine these same sounds to make infinitely complex words and phrases. Like we discussed previously with recursion, there is no limit to the length and complexity of human language other than our own mortality.

This design feature also pairs nicely with the next feature, duality of patterning. Duality of patterning is defined as the ability to combine these meaningless individual units of sound into something meaningful. In human language, we can create a sort of hierarchy of meaning, so to speak. We start at the bottom with individual sounds. Like we have already discussed, these sounds can be combined into words, which we can use to form sentences, and it just builds up from there.

There is a basic unit of meaning at some stage between a sound and a word that we haven’t really talked about, and that unit is called a morpheme. A morpheme is a combination of sounds that carries some sort of meaning. Some words are made up of several morphemes and some words stand alone as a single morpheme. We know that morphemes are different from words because some morphemes cannot stand on their own.

Take the word “unrecoverable” for instance. This word is composed of three morphemes: the base word “recover”, the suffix “-able”, and the prefix “un-”. These three morphemes each contribute their own meaning to the word, but of the three, only “recover” can stand on its own. You cannot use the word “able” on its own… well… you can, but it doesn’t quite carry the same meaning as it does when used as a suffix. The stronger example in this case is the prefix “un-”. A prefix like this absolutely cannot stand on its own. There is an entry for the word “un” in the Oxford English Dictionary, but it is listed as rare, and the only two cited instances use it to refer to several “un-” prefixed words.
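The three-morpheme analysis can be made concrete with a toy affix-stripping sketch in Python. The affix lists here are hypothetical placeholders, and a real morphological parser is a much harder problem (this one would happily mis-split a word like “under”):

```python
# Toy affix inventories; a real analyzer would have many more
# and would check that the remaining base is an actual word.
PREFIXES = ["un-"]
SUFFIXES = ["-able"]

def split_morphemes(word):
    """Peel off known prefixes and suffixes, leaving the base in the middle."""
    front = []
    for prefix in PREFIXES:
        bare = prefix.rstrip("-")
        if word.startswith(bare):
            front.append(prefix)
            word = word[len(bare):]
    back = []
    for suffix in SUFFIXES:
        bare = suffix.lstrip("-")
        if word.endswith(bare):
            back.insert(0, suffix)
            word = word[: -len(bare)]
    return front + [word] + back
```

Applied to “unrecoverable”, this yields the three pieces [“un-”, “recover”, “-able”], with only the middle one able to stand alone.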

All of this is to say that we have evidence for combining units in human language in meaningful ways, but this does not show up to the same degree in animal communication. Of course, there is evidence of animals using the same sounds repeatedly such as bird calls, but we do not have evidence that they are combining smaller units in unique ways to form new, novel meanings like we do as humans.

And speaking of novel meaning, this leads us nicely into our final feature, which is productivity and creativity. This feature is self-explanatory, but it is extremely important. It is the main thing that allows something like this blog to exist! I mean, think about it. Everything that I am saying here is off the top of my head. The actual ideas and concepts are not brand new; like I mentioned in part 1, they have been discussed since the 1960s. But even though they are not my own original concepts, I am still able to create new sentences and find unique ways to express them that have never been used before.

Every single day, you are being creative with language. You are saying things that have likely never been said before. You are expressing old concepts in new ways. It is not as if you are creating new sounds and using them in unique ways either. You are using the same sounds over and over in different ways to make new sentences. We can do so much as humans with what seems like a finite language. Our ability to utilize all of these features that I have discussed over these past four weeks is what sets us apart.

At this point, we simply don’t have the evidence to support animals being creative in the same way that we are. Sure, animals might produce unique sounds from time to time, but there is no way for any other animal in that species to innately understand what they mean by that new sound the same way we as humans can.

For example, let’s take a completely made-up verb like “flup”. Now let’s assign a meaning to this verb. Let’s say that “to flup” means to hide an object under your desk. With this verb, we can start to describe objects around us according to their flupability. I mean, a car is certainly not a very flupable object, but a paper clip is quite flupable. This word does not exist. I completely made this up, but we can have intuitions about it and use existing meaningful morphemes to do creative things with it!
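We can even mimic that little burst of productivity mechanically. Here is a hypothetical Python sketch that derives new words from the made-up base “flup” using real English suffixes (the doubling of the final consonant in the agent form is hard-coded just for this toy example):

```python
def derive(base):
    """Build some derived forms from a verb base using real English suffixes."""
    return {
        "adjective": base + "able",      # flupable: able to be flupped
        "noun": base + "ability",        # flupability: the quality of being flupable
        "agent": base + base[-1] + "er", # flupper: one who flups (final consonant doubled)
    }
```

The point is the same one made above: “flup” does not exist, yet because the suffixes carry their own meanings, every English speaker instantly understands what “flupability” would mean.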

This has been a very long and drawn-out series. I feel like I have really broken form on these posts several times and turned them into something that is much too “lectury” for my taste (more creativity!). Once I started part 1, though, I kinda had to finish it, so I really want to thank everyone for sticking it out with me. Everything will return to the casual format starting next week, I promise, so be sure to come back then so we can talk about more fun language facts. In the meantime, remember to speak up and give linguists more data.

What is a question?


For many years I have been a fan of the TV gameshow Jeopardy! I have been told tales of how, when I was two years old, I would dance in front of the television to the Final Jeopardy! music. I haven’t watched it in quite some time now, unfortunately (partly because cable is too expensive and partly because I am still not entirely over the passing of Alex Trebek), but for some reason the show popped into my head, and I started to think about questions.

For those of you who may not have seen the show, Jeopardy! is a trivia gameshow where contestants are given an answer as a prompt, and they then must respond to the host with the question that would generate that answer. It is not as tricky as it may sound though. For instance, contestants would be read the prompt “Snake Island off Brazil’s coast is filled with golden lanceheads a deadly pit type of this snake” and would respond with the question “what is a viper?”

We won’t focus on the other rules regarding the dollar amounts of the questions, the Daily Doubles, or the wagering in Final Jeopardy! For now, let’s just focus on the question formation.

In a random survey of 60 Jeopardy! categories from the Jeopardy! YouTube page (the equivalent of 5 games), contestants responded with 185 ‘what’ questions, and 115 ‘who’ questions. Let’s also note that this number may be slightly skewed thanks in part to current reigning champion Matt Amodio who has drawn some heat for his propensity to respond to every prompt with ‘what’s’, even if a ‘who’ would be more appropriate. Just take a listen to his response from the category named Audible (most contestants would respond with “Who is (Matthew) McConaughey?”).

Unexpected responses aside, we do see that ‘who’ and ‘what’ are the two question words that these contestants are using. This is not mandated by the rules of Jeopardy! in any way. Although I was not able to find examples, there are reported instances of contestants responding with ‘where’ questions. According to the Jeopardy! rules, you are only required to respond in the form of a question. So how can you know for sure if your Jeopardy! response is valid?

The first thing we need to ask ourselves is: what is a question, and how is it formed in English? Questions are sentences that aim to have the addressee (the person you are speaking to) provide information. There are several different types of questions that a person could ask, and not all of them are acceptable Jeopardy! responses.

Typical Jeopardy! responses are known as wh-questions. These types of questions use words like who, what, where, when, or why to signal to the addressee what kind of response you are expecting. If you ask a question with “when”, you are likely expecting some sort of response dealing with time. In the same vein, if you ask a “where” question, you would be looking for a place.

The funny thing about Jeopardy! is that even though there are plenty of responses that are formed with place names or specific years, the contestants will use “what” for these questions. There are a few reasons for this. In the case of the place names, using a where would seem a little strange based on how the answers are usually worded. If you asked someone the question “Where is New Orleans?”, you would probably be confused if someone answered with something like “This city was founded in 1718 by Jean Baptiste le Moyne, Sieur de Bienville, y’all” (A Round of Gulf Coast category from June 11, 2013).

The second reason for these “what” questions is that contestants are under a large amount of pressure to perform quickly. If you were on the show, and you were asked to respond to a prompt quickly in the form of a question, it would just be easier to use a default “what” than it would be to stress about whether a “when” or a “where” could apply.

So this tells us “what” a question is, but “how” are questions made? For wh-questions, the sentence is first generated to mirror how the answer would look. So, imagine you had a declarative statement like “The capital of Ontario is Toronto.” To ask what the capital of Ontario is, the initial form of the question would be “The capital of Ontario is what?” This is acceptable as it is, although you could imagine someone saying something like this in an incredulous way (“The capital of Ontario is WHAT?!?!”).

Once we have this form of the sentence, the wh-word will move to the front of the sentence and give us the question “What is the capital of Ontario?” What this means for Jeopardy! contestants is that, when faced with the prompt “This city is the capital of Ontario”, it would be perfectly fine for them to answer with a question like “Toronto is what?” This would likely raise a few eyebrows though, and it would be tough to do on the spot. Again, with the pressure these contestants are under, it’s easier to try and keep things simple and consistent.

Another way to form a question is through the process of auxiliary inversion, where the auxiliary verb (can, may, is) is moved to the front of the sentence. These questions are known as yes/no questions because the answer to them is, unsurprisingly, a yes or a no. This type of question is, surprisingly, permitted by the rules of Jeopardy!, even though it results in seemingly strange question and answer pairings; “Is it Toronto?” is not the type of question that would naturally elicit the answer “It is the capital of Ontario.”

This type of question, apparently, has been used in the past by some contestants. I was unable to track down any physical evidence of this, but it is possible within the rules.

The rules of Jeopardy! (while not explicitly published anywhere) do not require the questions provided by the contestants to be grammatical or to match up with the answer explicitly. The questions are only required to be clearly identifiable as questions.

Questions that present two or more possible options are known as alternative questions. This would be the type of question that you might ask your child at dessert time. “Would you like cake, or ice cream?” These types of questions could not be used in a game of Jeopardy! because the goal of Jeopardy! is to provide a question response that would satisfy the answer provided, and the answer to an alternative question is one of the alternatives presented. It would be confusing and difficult to construct an answer prompt that would elicit this type of a question. I also don’t imagine you could convince the judges that “What is Queen Elizabeth or Queen Victoria?” is a good response to “This female ruler was the first member of the Royal family to live at Buckingham Palace” even though one of those alternatives is correct.

Another question type not allowed in the rules of Jeopardy! is known as a tag question. It is called this because you are adding a ‘tag’ to a declarative sentence that turns it into a question. An example of this would be if you think that Tobias likes jean shorts, but you aren’t 100% confident and you would like some confirmation, so you would say “Tobias likes jean shorts, doesn’t he?” The first portion of the sentence is just a declarative statement (Tobias likes jean shorts), and it is the “doesn’t he” that turns the whole thing into a question.

The response to a question like this would either be true because the initial statement was true, or false because the initial statement was false. Because of this, it would be difficult to construct a Jeopardy! style answer where the contestant could provide a tag question response. This has not been tried by a contestant, but I cannot imagine it being accepted by the judges.

Photo by Andrea Piacquadio on Pexels.com

A final type of question that would not be allowed by the rules of Jeopardy! is an inflection question. In English, these are formed not by using question words or by rearranging the sentence in any way. Instead, you raise the pitch of your voice at the end of the sentence, as if to say “I think this is right??” A question like this would not be permitted in Jeopardy! because you are not explicitly forming a question with your statement; you are instead questioning whether the statement that you said is correct.

All of this is not to say that there are no fun ways to bend the rules of Jeopardy! So long as your response is a proper question, you can still respond in some clever, corner-case ways.

For example, if the answer provided was something like “This book series has children around the globe searching crowded malls and beaches for the title character, portrayed in a red and white sweater and toque,” a perfectly acceptable and legal response question would be “Where’s Waldo?” The Jeopardy! team even cites similar instances of this situation that are permitted in this article.

So now we have seen that there are several ways that you can make a question in English, and many of them are permitted in Jeopardy! However, if you ever find yourself on the show, it is likely easier to just stick with the traditional “What/Who is __?” type responses. Under those bright studio lights with real money on the line, it would be a shame for you to make a silly error that gets your response disqualified just so you could look a little bit clever.

Thank you for reading folks! I hope this was informative and interesting to you. Be sure to come back next week for more interesting linguistic insights. If you have any topics that you want to know more about, please reach out and I will do my best to write about them. In the meantime, remember to speak up and give linguists more data.

Voldemort has nothing on bears!

Historical linguistics is an incredibly fascinating subfield of linguistics with so many areas to explore. Rather than focussing on the language that we have now, historical linguists spend their days looking through old documents to try and understand how language has evolved over long periods of time.

It turns out there are so many consistent changes that have happened over time, and doing this type of analysis has taught us so much about the universal rules of language change. We are going to talk about one peculiar case of language change, though, that was brought about by a strange superstitious belief. Let’s talk about the English word “bear”.

Photo by Photo Collections on Pexels.com

Before we talk about “bear”, let’s introduce some historical knowledge and general knowledge about language families. As you may already know, all of the languages in the world can be divided into groupings of languages called families that are related to each other in some way and share a common ancestor, like a family tree of sorts. Two of the most well-known language families, and the ones we will be focussing on today, are the Romance family (Spanish, Italian, French) and the Germanic family (German, Dutch, English). There are many more language families out there that are all doing their own unique things, but these are the two that we will talk about for today.

First, let’s dive into Romance languages. Romance languages are all directly descended from Vulgar Latin and as a result of this close common ancestor, they share much of their vocabulary and grammatical rules. There are of course large distinctions in pronunciation and such that have developed over time, but you have likely noticed in your own life that a lot of the words in a language like French are quite similar to Spanish and Italian.

So let’s bring it back around to “bear”. “Bear” is no exception to the above facts. The French word for “bear” ours/ourse is similar to the Spanish oso/osa, which is similar to the Italian orso/orsa. These are all so similar because they all derive from the Latin word ursus/ursa, which is likely not a huge surprise when you think about the name of the constellation Ursa Major, which was named using the Latin word for bear.

This is all very cool and interesting, but we need to remember at this stage that Latin was not the first language on earth. It’s not like Latin just showed up and created the word ursa for bear and things evolved over time. If we back it up even further, we arrive at a language that is known as Proto-Indo-European. Proto-Indo-European (or PIE for short) is a theorized language that existed from around 4500 BC to 2500 BC. I say theorized because, at this point at least, there are little to no written records that prove the existence of PIE. The reason we believe PIE existed is because there are many examples of languages across Europe and parts of Asia that have common words and patterns, and they can all be traced back to this hypothetical ancestor in some way or another.

A quick example of this can be shown by looking at some Italian and English comparisons. For instance: piede and foot, padre and father, pesce and fish… there are so many words in just these two languages that have developed into different forms over time, but their patterns are very consistent. The “p” sounds in Italian seem to be roughly parallel to “f” sounds in English in all of these words, for instance. Now I know this is just three words in two languages, but trust me when I say that there are hundreds of examples across dozens of languages that give extra weight to this theory.

So if we trace the Latin word for “bear” back to PIE, we end up with something like this: *h₂ŕ̥tḱos (note here that the asterisk is to mark the fact that this is a hypothetical reconstruction based on comparing many, many languages. Like I said, we don’t actually have writings that include this word).

Where this starts to get really interesting is the fact that this PIE word can trace down to other languages in the PIE family that are not Romance languages. Let’s look at the Greek word for “bear” now.

In Ancient Greek, the word for “bear” is άρκτος (pronounced “arktos”), and you can notice two things from this. First off, the English word arctic is derived from this in some way, which is how we get something like the Arctic Ocean: it is the ocean in the northern direction, where the Ursa Major constellation is (it’s all tying together again!). The second thing that you can notice is that “arktos” looks roughly like how the PIE word for “bear” would be pronounced. This gives us more evidence that maybe PIE is a real thing and that all of these languages are tied together!

Photo by Magda Ehlers on Pexels.com

Now those of you with keen perceptions may have noticed something that I did when I first introduced PIE. I used evidence of English and Italian words to convince you that PIE was real. But if PIE is real, and we can trace back words to this common ancestor… how the heck did we end up with bear instead of something more closely related to *h₂ŕ̥tḱos?

It turns out that it’s not just English that has this “bear” problem either. This is where we start to talk about the Germanic lineage. Germanic is also descended from PIE, but not through Latin (English has a lot of Latin influence, but let’s thank the Norman invasions for that). The family tree in this instance splits off directly from PIE and gives us two subfamilies: one with Latin that bears (no pun intended) the Romance languages, and the other with the Proto-Germanic languages. There are many other divisions and such, but we are only going to talk about these two for now.

All of the Germanic languages have similar words for “bear”. German has Bär, Dutch has beer, and the North Germanic languages (also descended from Proto-Germanic) have it too (Swedish björn and Norwegian bjørn, for instance). So with this evidence, it is clear that something happened to the Proto-Germanic word for “bear” that caused this shift.

Let’s take a closer look at some of the Romance languages and see what words they do have that are similar to “bear”; it might give us a clue.

French has the word brun meaning brown, attested in Medieval Latin as brunus. While this may not appear to have much to do with “bear” on the surface, it turns out that when we trace back the word “bear” in the Germanic family, it is in fact derived from a word meaning “brown” somehow.

So what happened to the Proto-Germanic people to make them start referring to “bears” as “the brown ones”? Historical linguists theorize that the Proto-Germanic people were very superstitious, and that they worried that calling a “bear” by its true name would somehow summon it and increase the likelihood of “bear” attacks in one’s life.

By a process known as euphemism, it is thought that the Proto-Germanic people collectively started to refer to *h₂ŕ̥tḱos as “the brown one” simply because they needed a way to talk about them without risking summoning them to their camps or hunting excursions.

The Proto-Germanic people had their own version of “he-who-should-not-be-named”, but instead of being some literary euphemism, it ended up influencing thousands of years of language use and giving me something to write about! This is the sort of stuff about language that I find truly fascinating. The fact that we can take this weird thing that you have likely never given a second of thought to and develop entire theories and papers and blog posts to talk about it in an informed and educated way.

So the moral of the story is: keep your friends close and your *h₂ŕ̥tḱos far away by calling them bears instead!

Thank you for reading folks! I hope this was informative and interesting to you. Be sure to come back next week for more interesting linguistic insights. If you have any topics that you want to know more about, please reach out and I will do my best to write about them. In the meantime, remember to speak up and give linguists more data.

There’s an elephant in my pajamas!

Last night I shot an elephant in my pajamas.

Elephant in pajamas – by: Amy Block

A sentence like this one above has two possible meanings, even though you probably only thought of one. One option is the logical meaning where “I” am the one wearing the pyjamas while shooting the elephant. The other possible meaning is that “the elephant” is the one in my pyjamas last night, and that’s why I shot it. Now obviously, this meaning is a bit of a stretch (ha!), but that’s only because it is an elephant that was shot. If you change out “elephant” for something a little more realistic, it is easier to convince yourself of this alternate meaning.

Last night I shot a burglar in my pyjamas.

Here you can likely imagine both interpretations, although it does raise the new question of why this burglar is wearing your pyjamas.

There is also a way that we can modify this sentence so that the “I” subject is likely not the one that is “in” something.

Last night I trapped a burglar in my closet.

Just by changing two words, we have made it so it is most likely the burglar who is in the closet, and not me.

Now obviously these sentences are just one silly example of how changing a word or two can change how we might interpret a sentence, but ambiguous sentences show up quite often in one particular context: newspaper headlines.

In a headline like “Cops shoot man with knife”, we can see the same kind of ambiguity problem. Namely, there is a prepositional phrase (with knife) at the end of the sentence that could reasonably apply to either the subject of the sentence (the cops), or the direct object of it (the man).

Sentences like these don’t often pose a problem for us because we have our own logic and intuition to rely on. Let’s take it one step further and imagine the effects that this might have on a computer. If a computer were to try and “read” these sentences, what conclusion do you think it would draw?

Computers rely on several processes when it comes to interpreting language, but one of the biggest ones (and the easiest one to explain here) is known as statistical learning. Statistical learning is a process by which you take a large set of data, known as a training set, and feed it to a computer program that reads the data one chunk at a time, and makes note of what comes after each chunk. These chunks can be set to a certain number of words to be processed all at once, known as a window.

If you feed the computer a large enough set of data, you can then ask it to start making predictions (like you see in the predictive text on your phone). The computer is able to make guesses on what is most likely to come next based on how often that combination appeared in the training data that was fed to it. This is where all of the statistical stuff comes in.
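The counting-and-predicting process described above can be sketched in a few lines of Python. This is a toy illustration, not a real language model: the tiny training text and the `train`/`predict` names are invented for this example, and real systems train on millions or billions of words.

```python
from collections import Counter, defaultdict

# Toy "training set" of text; real models use vastly more data.
training_text = (
    "to kill two birds with one stone "
    "a bird in the hand is worth two in the bush "
    "you can kill two birds with one stone"
)

def train(text, window=2):
    """Read the text one chunk at a time and note what follows each chunk."""
    words = text.split()
    counts = defaultdict(Counter)
    for i in range(len(words) - window):
        chunk = tuple(words[i:i + window])
        counts[chunk][words[i + window]] += 1
    return counts

def predict(model, chunk):
    """Guess the continuation seen most often after this chunk in training."""
    if chunk not in model:
        return None  # never saw this chunk, so no basis for a guess
    return model[chunk].most_common(1)[0][0]

model = train(training_text, window=2)
print(predict(model, ("with", "one")))  # -> stone
```

The `window` parameter here is exactly the trade-off the post goes on to discuss: a bigger window gives more specific context, but each chunk then needs to appear often enough in the training data to be worth counting.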

This process is all very math heavy and quite hard to wrap your head around, but let’s try and simplify it with an example. Imagine I asked you to fill in the remainder of this phrase:

To kill two birds with one _______.

If you guessed stone, then congratulations! Your internal statistical learning system is working normally. If you put in a word like bullet, you might not be incorrect based on your own experience; it might just mean you are working from a different set of training data from most people and are not familiar with this idiom.

The idiom “to kill two birds with one stone” is very common in North American English and you have likely seen or heard it so many times that you can intuitively know how to finish it. You can probably think of other examples too where after seeing one word come up, you would know for certain what the next word is.

Computers are working on the exact same principle that you just employed to complete that idiomatic expression, but they are doing it on a much different level than you are. Being able to change the scale of the “window” (how big of a chunk) that they are looking through allows them to notice patterns in language that you or I could never notice on our own.

The biggest problem with this from a computing standpoint is that memory is finite for computers so if you make these windows too big, the computer will not be able to handle it. If you make it too small, you won’t get enough useful data to make good predictions. You were able to easily predict the last word of that idiom because you have a large window and you are able to have access to the entire sentence at once. Imagine you were only able to see something like “with one ___”. It would be a lot harder to make a good prediction with this small amount of information.

Another problem is that computers don’t know the meaning of these phrases that they are reading and predicting. This leads us back to the ambiguous sentences from the beginning of this post.

Imagine you could design a program where you could give a “trained” computer the sentence “I shot an elephant in my pyjamas” and then ask it who was wearing the pyjamas. The computer would likely wrongly assume that the elephant was the one in the pyjamas because more often than not in English, when we have a preposition like “in” after a noun, it is meant to be associated with that nearest noun.

There is a chance that the computer might be tipped off by the fact that they are MY pyjamas though, and because of this first person possessive pronoun it would correctly associate them. What about a sentence that only uses inanimate objects and pronouns?

The trophy would not fit in the cabinet because it was too big.

We as humans are able to reason that the trophy being too big is the most likely problem here. But again, the computer would likely make the wrong prediction here because it would want to associate the it pronoun with the closest possible noun in the sentence.
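The nearest-noun behaviour described above is easy to simulate. This is a deliberately naive sketch: the hand-made noun list stands in for a real part-of-speech tagger, and the function name is invented for this post.

```python
# Hand-made noun list; a real system would use a part-of-speech tagger.
NOUNS = {"trophy", "cabinet", "elephant", "pyjamas", "burglar"}

def nearest_noun_referent(sentence, pronoun="it"):
    """Attach the pronoun to the closest noun that precedes it."""
    words = [w.strip(".,").lower() for w in sentence.split()]
    position = words.index(pronoun)
    # Scan backwards from the pronoun toward the start of the sentence.
    for word in reversed(words[:position]):
        if word in NOUNS:
            return word
    return None

sentence = "The trophy would not fit in the cabinet because it was too big."
print(nearest_noun_referent(sentence))  # -> cabinet (the wrong answer!)
```

Running this on the trophy sentence picks out “cabinet”, just as the post predicts the computer would, even though a human reader knows it must be the trophy that was too big.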

All these sentences can be easily disambiguated to ensure that the computer makes the right choice every time.

I shot an elephant while I was in my pyjamas.

The trophy would not fit in the cabinet because the trophy was too big.

Without any ambiguities the computers will be happier knowing that they can understand the sentences just like we can. All of this is to say that when you are writing, be kind to your computer and make sure that you are writing in clear, unambiguous sentences for their benefit too.

Alternatively, the takeaway might be that we should write needlessly ambiguous sentences to confuse the computers and hope it slows down the inevitable terminator-style uprising. I’ll leave the interpretation of this blog post to you the reader.

Thank you for reading folks! I hope this was informative and interesting to you. Be sure to come back next week for more interesting linguistic insights. If you have any topics that you want to know more about, please reach out and I will do my best to write about them. In the meantime, remember to speak up and give linguists more data.

When does “this” become “that”?

Earlier this week, beloved internet nerd Hank Green posted a tweet expressing his frustration about not understanding the relationship between what/that, where/there and when/then. The actual answer to the question is incredibly fascinating and is summed up brilliantly in this short video by Jess Zafarris.

But I am not here to try and take credit for this answer or to expand on it further, I want to take a minute to talk more about the relationship between “this” and “that”.

As Jess pointed out in the video, “this” and “that” are demonstratives that we have in English that are used to locate things in space. But when exactly does “this” become “that”? “This” is usually reserved for things that are in our grasp or are comparatively closer to us than “that”. For instance, if you were holding a pen, you could easily say “This pen is quite reliable” but it would be weird to talk about the same pen you are holding and say “That pen is quite reliable”.

If there were two pens on a table, you could pick up one of them and easily talk about “this” pen that you are holding versus “that” pen which you are not holding. But as we know, the concept of “that” is not as spatially confined as “this” is. We can talk about a “that” that isn’t even in the same room.

If you find a pen on a table that writes significantly better than your friend’s favorite pen they keep at home, you could probably pick it up and say to them “this pen is so much better than that pen.” And your stationery-obsessed friend would probably be able to figure it out. You may need to provide them with a few more specifics, but the point here is that “that” does not have to be within your eyesight. “That” could be anywhere other than here, and it is always going to be comparatively farther away than “this”.

Photo by Jess Bailey Designs on Pexels.com

Now what about this scenario? You walk into a room and there are two pens on the table. One of these pens writes significantly better than the other, but you know that your friend has an incredible fountain pen at home that makes both of these pens look like utter trash. You turn to your friend and say “This pen is much better than that pen, but that other pen you have is the best”. This is a perfectly acceptable and understandable statement, but wouldn’t it just feel so much better if we had a nice way to talk about “that’s” that are really far away, as opposed to “that’s” that are here but are not “this”?

This is where we get into the concept of deixis. Deixis is the use of words and phrases to refer to a specific place, time, or person in context. The demonstrative words “this” and “that” can both be used to locate things in space meaning that they are also deictic words. When you are speaking to someone else, you usually use yourself as a default centre point for these words which is how we get this distinction where “this” is closer to you than “that”.

So the concept of “this” is a proximal deictic word, meaning close in proximity to the centre while “that” is a distal deictic word meaning it is further away from the centre. This is a deictic system that all natural languages have to some degree (at least based on the evidence we have). In addition to spatial terms, deixis can also help us differentiate between “now” and “then”, and it can even give us a three way contrast in English between “you”, “me”, and “them”, but in English we seem to be confined to just a “this” and “that” contrast for spatial location.

I say confined here because there are actually languages that go above and beyond in their spatial location capabilities. A language like Korean for instance has a three-way distinction on spatial reference, much like we have a three-way distinction on personal pronouns. In Korean, you can use the word yogi to talk about something that is near the speaker, kogi to talk about something near the listener, and chogi to talk about something that is far away from both the speaker and the listener.

Japanese also has this same pattern with the words koko, soko, and asoko, while Tamil uses the words inge, unge, and ange to express the same thing. This pattern also shows up in Thai, Filipino, Macedonian, Yaqui, Turkish, and many more languages, so it is certainly not a rare or obscure possibility; it is just something that we English speakers don’t have the ability to take advantage of.

Photo by Karolina Grabowska on Pexels.com

Moving away from “this” and “that”, let’s talk a little bit about the temporal aspect of deixis. In English, we have a similar two-tiered system that we use to talk about the proximal “now” versus the distal “then”.

The thing about “then” is that it is slightly ambiguous in terms of which “then” we are talking about. Are we talking about the “then” that just happened now? Are we talking about the “then” from a few days ago? Or are we talking about the “then” that is going to happen at some point in the future?

And here we have another shortcoming of English. Like the spatial terms, this is not an insurmountable shortcoming, we just have to do some extra work to differentiate between all of these “then’s”. But like the spatial stuff, there are languages that do a much better job than English of differentiating between “yesterday”, “the day before yesterday” and “that one Tuesday six months ago” (okay, maybe not that specific, but let me explain a bit more).

Take the language Zulu, a Bantu language spoken primarily in South Africa. In Zulu, you can make the distinction between the recent past tense and the remote past tense just by changing up the suffix on the word and altering the initial vowel (Bantu languages love to change many things in different places to accomplish one thing. I promise this is just one thing). For instance, sihambile in Zulu means “we went” (whole phrases in Zulu can be expressed by a single word), but it has a recent sense of time. Compare this to sāhamba, which also means “we went”, but further in the past than the first example.

Now, let’s just imagine a scenario. Let’s say you are out with your friends on Monday July 26th, 2021, and you are having so much fun that you want to try and get together again on Saturday August 7th, 2021. You could say to your friends “Hey, we should hang out again next weekend”, and it would likely start some debate about “Wait, do you mean this next weekend beginning in five days? Or do you mean the one twelve days from now?”.

And again, this is another shortcoming of English that it turns out Zulu does not have! Like its past tense, Zulu can make a distinction between recent future tense and remote future tense, but it is incredibly subtle and not in the place you would expect. Zulu changes something in the middle of the word to achieve this effect. Let’s walk through an example and you can see what I mean.

The word Ngizokuza translates to the phrase “I will come”, but this is going to happen before Ngiyokuza, which also means “I will come”. Simply by changing a zo to a yo in the middle of the word, Zulu speakers are able to easily differentiate between a near future and a more distant future.

Now I am not here to say that we all need to go out and learn Zulu to make plans with our friends in a more accurate way, I am just trying to show off all the cool and interesting systems that languages of the world have. English is a serviceable language for sure. If it wasn’t, we would have abandoned it long ago. I just think that learning a little bit more about how other languages handle things like this is incredibly interesting, and that’s why I love this field so much.

Thank you for reading folks! I hope this was informative and interesting to you. Be sure to come back next week for more interesting linguistic insights. If you have any topics that you want to know more about, please reach out to me at talkinglinguist@gmail.com and I will do my best to write about them. In the meantime, remember to speak up and give linguists more data.

Semantic Illusions and How to Spot Them

How many animals of each kind did Moses take on the ark?

It’s two right?

The answer is none. I’m not trying to start some sort of religious debate here. According to the Bible, Moses didn’t build the ark; Noah did.

Maybe you are a theology buff and your sharp eye caught this on the first read through. Maybe you aren’t familiar with the bible story though so it might not have been a fair test. Let’s try another one.

What is the name of the holiday where kids dress up and go out to give candy?

If you answered Halloween, you’d be wrong again. Kids don’t give candy on Halloween, they get candy handed out to them. So what causes our brains to skip over the most important part of the sentence and just decide that “It’s fine, I know the answer to this”?

If you remember last week when we talked about garden path sentences, I mentioned that our brains are driven by efficiency. That desire for maximum efficiency might also be able to explain why these sentences, known to linguists as “Moses illusions”, seem to trip us up. One theory with these Moses illusions is that our brains reach a point in processing these sentences where they feel they have enough information to answer the question, and so they ignore the information that doesn’t fit.

According to some research out of the University of Maryland, this is likely due to what is known as shallow processing. Shallow processing is a bit of a broad term, and the definition of it changes depending on the “thing” that our brains are processing. In garden path sentences for instance, our brains decide what the most likely interpretation of the sentence is before we get to the end of it.

With these Moses illusions, our brains process the sentence to the point where they see key words in the sentence like “holiday”, “dress up”, and “candy”. A word like “give” goes undetected on a quick glance. One reason for this is that our brain feels like it has enough information to answer the question. An even bigger reason though is that “giving” is closely related to “receiving”. If I had given you a sentence like the one below, you likely would not have been fooled as easily.

What is the name of the holiday where kids dress up and go out to grow candy?

Using the word “grow” here might make the substitution easier to catch because it’s such a weird thing to say. But this might not be a fair comparison, because our brain now has the benefit of hindsight.

Not all Moses illusions are created equal though. You can’t just change out one word in any sentence with something closely related and have it trip people up. It’s not just the similarity of the words that’s causing you to fall for this. The position of the substitution in relation to the other key pieces of information also plays a role in whether people fall for these illusions or not. Sentences with substitutions at the beginning of the sentence are more likely to be noticed by readers than those with subtle substitutions near the end when the brain already has “enough information”.

The explanation that we have at this point in the research is by no means perfect. This is still a growing area of research and there is certainly more to learn about how people read and interpret sentences before we have a definitive answer to why people fall for these. So the next time someone asks you “Which British monarch lit the torch at the London Olympic winter games in 2012?”, you will take a moment to remind them that those were the summer games rather than blindly answering with “Queen Elizabeth”.

Thank you for reading folks! I hope this was informative and interesting to you. If you want to see more of these posts, be sure to follow my Facebook page and get updates when new posts go live. If you have any topics that you want to know more about, please reach out and I will do my best to write about them. In the meantime, remember to speak up and give linguists more data.

Things I learned while walking in my garden

Photo by Daria Obymaha on Pexels.com

Have you ever come across a sentence where something feels off the first time you read it?

The horse raced past the barn fell.

If this is your first time encountering a sentence like this, you probably had to read it a few times before you figured out that it was the horse that was falling, not the barn. Although this sentence is weird, it’s perfectly grammatical, and you have no problem understanding it once you know the trick. What about a sentence like this?

After the man paid the clerk asked for more money.

Jerry Seinfeld performing stand up on Seinfeld. (NBC/Youtube)

So what’s the deal with these sentences anyway? Sentences like these are called garden path sentences. They get their name from the fact that you feel like you are being led down a lovely garden path as you read the sentence, before you are suddenly brought to the edge of a cliff looking into the void of “ungrammaticality” and realizing that you should have taken that left turn at Albuquerque.

Garden path sentences rarely show up in natural writing, but they are used in psycholinguistic research to figure out how our brains respond to the unexpected. One way that psycholinguists can test this is with what is called a self-paced reading task, which presents a sentence to readers one word at a time.

By presenting a sentence like this and recording how long it takes the reader to move to the next word, we can look at the exact point where they encounter the oddness of the sentence and see how they react.
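To make the method concrete, here is a minimal sketch of how per-word reading times could be derived in a self-paced reading task. The idea is that the participant presses a key to reveal each new word, we log the keypress timestamps, and the time spent on each word is just the gap between consecutive presses. The function name and the timing numbers below are hypothetical, not from any real experiment.

```python
# Sketch: turning keypress timestamps from a self-paced reading task
# into per-word reading times. (Hypothetical data, not a real study.)

def reading_times(keypress_times):
    """Given timestamps (in seconds) of each keypress, return the time
    spent on each word: the gap between consecutive keypresses."""
    return [round(b - a, 3) for a, b in zip(keypress_times, keypress_times[1:])]

# Hypothetical timestamps for "The horse raced past the barn fell."
# Note the longer gap when the reader hits the disambiguating word "fell".
presses = [0.0, 0.35, 0.72, 1.05, 1.38, 1.70, 2.04, 2.95]
words = ["The", "horse", "raced", "past", "the", "barn", "fell"]
for word, rt in zip(words, reading_times(presses)):
    print(f"{word:>6}: {rt:.3f} s")
```

With data like this, the spike at “fell” is exactly the slowdown researchers look for at the point of reanalysis.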

Before we get lost in that rose patch though, let’s think about the possible ways people could process something they are reading. Because English has a relatively strict word order, seeing certain words can be a good signal of what is coming next. As you read one word at a time, are you simply entertaining the single most likely way the sentence will end, based on what you know about your own language? Or are you considering all the possible ways the sentence could go, and then cutting off the impossible ones with some hedge trimmers as you encounter more words?

Both explanations seem possible, but the second one does feel a bit more cumbersome. Because language is recursive and sentences could be infinitely long, trying to keep every possible structure in mind would be impossible. But what if we relied on the fact that our brains are essentially supercomputers? What if our brains understand that most sentences aren’t infinitely long, and that there are actually only a few things to look out for if we don’t care which specific word comes next, only whether the sentence could continue from this point? Now it seems a little easier to imagine that we could be processing things this way.

So how exactly are our brains interpreting sentences? And how can garden path sentences help confirm that? Well, if we isolate the point of weirdness in a self-paced reading task, we can see whether readers slow down when they reach it. As I mentioned before, sentences can be infinitely long, so if our brains really were tracking every possible continuation, the sentence carrying on shouldn’t come as a surprise, and there shouldn’t be any slowing down.

But readers do slow down. When they reach the odd point in the sentence where the second verb appears, a significant portion of readers take a little more time to figure out just what in the fertilizer is going on.

The key thing to realize, though, is that while our brains might be supercomputers that could conceivably keep all of this in mind, they are also driven by efficiency. In other words: is it worth spending all that energy considering the infinite possibilities of a sentence when we could just keep the most likely one in mind and re-evaluate the rest when it doesn’t work out?

Let’s take another look at the sentence “the horse raced past the barn fell”. As most of you have probably noticed by now, the reason we get tripped up by this sentence is the fact that “the horse raced past the barn” could stand on its own as a sentence.

As we work our way through the sentence one word at a time, our brains, being the efficient machines that they are, try to consider only the most likely possibility. This means that by the time we reach the word “barn”, we have come across a subject “the horse”, a past tense verb “raced”, and a prepositional phrase “past the barn”. The possibility our brain doesn’t consider is that this whole phrase refers to a horse that was raced past the barn, presumably by a jockey who needed to get home to water the carrots.

When we encounter the next word “fell”, the first thing our brain thinks is “oh, barns fall all the time, so it must be the barn that fell”. After trying that angle and realizing it doesn’t work, our brain panics and thinks “wait, I must have missed something” before going back to reanalyze the sentence from the beginning with this new information in mind, arriving at the correct interpretation: it was the horse that was raced past the barn that fell.
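The commit-then-backtrack story above can be sketched as a toy program. This is a deliberately oversimplified illustration of serial “most likely parse first” processing with reanalysis, not a real parser; the function name and the two-verb toy lexicon are inventions for this example.

```python
# Toy illustration of serial parsing with reanalysis: greedily treat the
# first verb as the main verb, and backtrack only if a second verb shows
# up after the sentence already looked complete. (Not a real parser.)

def parse_incrementally(words):
    """Return the final main verb and a trace of processing events."""
    events = []
    main_verb = None
    for word in words:
        if word in {"raced", "fell"}:  # the verbs in our toy lexicon
            if main_verb is None:
                # Commit to the cheapest analysis: first verb = main verb.
                main_verb = word
                events.append(f"commit: '{word}' is the main verb")
            else:
                # Surprise! Backtrack and reanalyze the earlier verb as
                # a reduced relative clause ("that was raced...").
                events.append(f"surprise at '{word}': sentence looked complete")
                events.append(f"reanalyze: '{main_verb}' starts a reduced "
                              f"relative clause; '{word}' is the main verb")
                main_verb = word
    return main_verb, events

verb, trace = parse_incrementally("the horse raced past the barn fell".split())
print(verb)  # fell
for step in trace:
    print(step)
```

Run it on “the horse raced past the barn” alone and the parser happily commits to “raced” with no surprise event, which is exactly why the full sentence trips us up.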

So why didn’t they just say “the horse that raced past the barn fell” in the first place? Because then, I would have nothing interesting to write about! This is another big part of linguistics where we try to push the limits of what is grammatical and see how people will react to it. After planting a small seed of an idea in someone’s head, we are able to grow our understanding of how the human brain processes sentences. There are so many more amazing things that research has been able to teach us about language, and I can’t wait to keep sharing them with you all every week.

Thank you for reading folks! I hope this was informative and interesting to you. Be sure to come back next week for more interesting linguistic insights. If you have any topics that you want to know more about, please reach out and I will do my best to write about them. In the meantime, remember to speak up and give linguists more data.