Summary of Study
For the past few weeks, I’ve been fortunate enough to work with Caitlin Fausey, a PhD candidate in the Stanford Psychology Department, on her study comparing the frequency of agentive versus non-agentive speech in English and Korean. To collect data for the study, a large sample of native Korean and English speakers was asked to watch video recordings of accidental and purposeful events (e.g. a man popping a balloon on purpose and by accident, a man stepping on a can on purpose and by accident) and to describe what they saw.
The data collected from this process was then coded, with a “1” assigned to sentences that contained an agentive verb (e.g. “he popped the balloon”), a “2” assigned to sentences that contained a non-agentive verb (e.g. “the balloon popped”), and a “5” assigned to sentences that were un-codeable (completely irrelevant descriptions, sentences that did not contain a key verb, etc.). An initial joint meeting was held between the two coders, and any disagreements in coding were resolved; any outstanding disagreements were then resolved over email. A second joint meeting was conducted to begin analyzing and graphing the data collected during the experiment. The data was transferred from Excel to SPSS, where it was consolidated and graphed. This analysis examined how often Korean speakers used agentive language to describe accidental and non-accidental events and also identified the mean, potential outliers, and the significance factor of the experiment. A third and final meeting was conducted to compare these results with those of English speakers and to identify any significant relationship between the two sets of data.
The data suggested that Korean speakers tended to use non-agentive speech more frequently than their English-speaking counterparts to describe accidental events. Korean and English speakers used agentive speech to describe non-accidental events with similar frequency.
The Difficulties of Coding
Although I considered myself fairly fluent in Korean prior to the start of this experiment, I realized that the Korean language was far more complicated than I had known. In the Korean sentences I coded, the test subjects rarely included the agent when describing the events they saw. Instead, they marked their sentences as agentive or non-agentive by adding either a subject marker or an object marker after the noun being acted upon. This pattern was particularly evident in the “balloon-popping” scene:
풍선을 터트렸다.
(He) popped the balloon.
This sentence demonstrates the use of the object marker “eul”: the balloon is the object of an unknown (or rather, implied) agent.
풍선이 터트렸다.
The balloon popped.
This sentence demonstrates the use of the subject marker “ee”: here, the balloon is the subject.
Unfortunately, not all the data sets were this clear. Some sentences included a lot of extraneous information: the agent’s shirt color, what he was doing before he popped the balloon, potential reasons behind this seemingly senseless action, etc. Needless to say, it became progressively more difficult to find the single marker embedded within a sentence that determined whether the sentence represented agentive or non-agentive language.
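The marker check can be sketched as a toy heuristic. This is emphatically not the study's actual coding procedure (human coders handled all the messy cases above); it only illustrates the 을/를-versus-이/가 distinction on the clean balloon examples, given the key noun for the scene.

```python
# Code values from the study's scheme.
AGENTIVE = 1        # object marker on the key noun: "(He) popped the balloon."
NON_AGENTIVE = 2    # subject marker on the key noun: "The balloon popped."
UNCODEABLE = 5      # key noun or marker not found

def code_sentence(sentence: str, key_noun: str) -> int:
    """Toy classifier: look at the particle immediately after the key noun."""
    idx = sentence.find(key_noun)
    if idx == -1:
        return UNCODEABLE
    # The single character right after the noun, if any.
    marker = sentence[idx + len(key_noun): idx + len(key_noun) + 1]
    if marker in ("을", "를"):   # object markers
        return AGENTIVE
    if marker in ("이", "가"):   # subject markers
        return NON_AGENTIVE
    return UNCODEABLE

print(code_sentence("풍선을 터트렸다.", "풍선"))  # 1 (agentive)
print(code_sentence("풍선이 터트렸다.", "풍선"))  # 2 (non-agentive)
```

A real coder would of course also have to handle dropped particles, extraneous clauses, and honorific forms, which is exactly why the sentences described above were hard to code automatically.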
Experiences Consolidating Data
Although I’d taken an introductory college-level psychology course prior to this study, I’d never used SPSS or Excel to analyze data first-hand. Therefore, I didn’t fully understand the concepts and processes we discussed during our meetings at first. But I gradually became familiar with the process of turning raw data into graphs to observe any general trends or patterns. One significant observation we made was that Korean speakers used non-agentive speech much more frequently than English speakers to describe accidental events, which supported Caitlin’s initial hypothesis.
One new thing I learned during this process was the importance of the “significance” factor, or p-value, which estimates how likely it is that the patterns in the data occurred just by chance. To make a publishable claim, the p-value should be less than .05, which suggests that the observed difference (in this case, in how often English and Korean speakers describe accidental events with agentive versus non-agentive speech) is very unlikely to be due to chance alone. At first, the p-value was around .075, a little too high to make a definitive claim about the data. However, after two outliers out of a pool of 90 test subjects were eliminated (one English speaker who described accidental events with non-agentive language and one Korean speaker who described accidental events with agentive language, both with unusually high frequency), the p-value fell to .015, well below the magic number for publishing your results.
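The intuition behind a p-value can be sketched without SPSS at all. Below is a rough illustration (not the study's actual analysis) using a permutation test: shuffle the group labels many times and ask how often a group difference at least as large as the observed one arises by chance. The per-subject agentive rates are made up for illustration.

```python
import random

def perm_test(group_a, group_b, n_iter=10000, seed=0):
    """Estimate a p-value for the difference in group means by label shuffling."""
    rng = random.Random(seed)
    mean = lambda xs: sum(xs) / len(xs)
    observed = abs(mean(group_a) - mean(group_b))
    pooled = list(group_a) + list(group_b)
    hits = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)  # randomly reassign subjects to groups
        a, b = pooled[:len(group_a)], pooled[len(group_a):]
        if abs(mean(a) - mean(b)) >= observed:
            hits += 1
    return hits / n_iter  # fraction of shuffles as extreme as the real data

# Hypothetical per-subject agentive rates for accidental events.
english = [0.8, 0.7, 0.9, 0.6, 0.8]
korean = [0.3, 0.4, 0.2, 0.5, 0.3]
print(perm_test(english, korean))  # small value: difference unlikely by chance
```

This also shows why removing extreme outliers moved the p-value so much: a single subject who behaves like the opposite group inflates the overlap between the two distributions, making the observed difference look more compatible with chance.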
Future Prospects for this Study
One interesting trend I noticed while coding was that certain individuals tended to use either agentive or non-agentive speech more frequently when describing accidental events. In other words, the frequency of agentive speech may depend on the test subject as well as on the particular event being described. There also seemed to be a loose correlation between the use of agentive speech and the particular form of speech used to describe the event. Korean is a language with multiple levels of formality (honorific, deferential, humble, polite, blunt, half-talk) that depend on the speaker’s relationship with the individuals being spoken to. Perhaps another step for this study would be to code each sentence into one of the formality levels mentioned above and to see whether there is any statistically significant pattern across the different levels of speech formality. However, as many of the sentences within the data set are incomplete, it would be difficult to accurately and consistently code each sentence this way. But if it could be done, one might be able to draw further conclusions on which levels of speech lend themselves to agentive or non-agentive language.
Final Thoughts on the Study
Overall, I thought this study was a great opportunity to explore an entirely new aspect of language, one that takes a quantitative rather than qualitative scope of study. Coding the data allowed me to explore the intricacies of the Korean language, and I was definitely able to put the grammatical concepts I’ve learned in my Korean language classes to use. The data analysis process provided a thorough refresher course in statistics, and I was able to see how statistics can be applied differently in various disciplines, particularly psychology. When I first applied for this introductory seminar, I mentioned that I wasn’t very interested in learning more about the technical aspects of language. However, after seeing how such details of language can be extrapolated and applied in a greater context, I’m happy to say that I am now very much interested in learning more about the structure and syntax of various languages. I thoroughly enjoyed my experience working with Caitlin on her project, and I hope to become more active in research studies in the future.
Friday, June 4, 2010
playing video games to learn a new language
I'm traveling to Ecuador this summer, and I was a little apprehensive because my Spanish is a little rusty. Maybe rusty is the wrong term, because that implies my Spanish was at some point in good working order (which may not be entirely true). Anyway, I was looking for programs to help me brush up on my Spanish before leaving, and I ran across this:
http://www.amazon.com/My-Spanish-Coach-Nintendo-DS/dp/B000SQ5LOQ
My Spanish Coach, developed by Ubisoft, is a videogame for the Nintendo DS system that promises to teach Spanish in just 15-20 minutes a day! Finally, a videogame my parents might actually want me to buy! Ubisoft has developed games for several other languages, including Japanese, French, and even Mandarin Chinese. Fascinated, I clicked through all the reviews of the product to learn more. It just seemed too good to be true.
And it was, sort of. Yes, the game is an excellent tool for beginners, but it's hardly the tool to master the language that it promises to be. Very few verb tenses are covered, and the game skips over a couple of critical lessons, including "por" vs. "para," grammatical gender, and much more. The game is useful for learning more vocabulary, which it presents in a random fashion after you reach a certain level of expertise. Also useful is the ability to record your voice and listen to how you pronounce various words (something I need very, very much). Finally, the game also includes a Spanish-English dictionary, but I don't think I'll be able to walk around the streets of Quito consulting my DS to communicate with the locals.
In any case, it still seems to be an interesting buy. Who knows, I might go ahead and purchase it in a few weeks. I'm that desperate for Spanish help, and I'm a sucker for good marketing.
Tuesday, June 1, 2010
become familiar with accents ... or die?
http://www.dailyfinance.com/story/learning-understand-unfamiliar-accents-save-life/19495301/
This fascinating article, entitled "Why Learning to Understand Unfamiliar Accents May Save Your Life," examines the issue of non-American accents and the role they play in communicating with others. In an extreme example, the article opens by considering a situation where a miscommunication caused by an accent may have been seriously life-threatening. A pilot landing a jet in Korea radioed the control tower to ask for permission to land. The Korean operator signaled that the plane was clear to land on airstrip "Zulu" (code for airstrip Z) but pronounced the Z like a J, as is common among Korean English speakers. The pilot thought the controller was mispronouncing "Juliette" and began to land on airstrip "J," narrowly missing another airplane that was landing on the same strip.
The issue of understanding accents also plays a more everyday role in business. According to a recent study, multinational corporations equip their employees with "accent reduction" training better than American companies do. As a result, foreign businesses are more likely to conduct business with these trained employees than with untrained workers.
These two situations bring up the issue of a "standard" accent for a particular language. Is it important to speak a language in a common accent? Do the above two examples justify the recent Arizona law that prohibits English teachers from speaking with an accent? Is there even such a thing as a universally spoken accent within a language?
The one thing that this article does make clear is the importance of adjusting the way you speak a certain language to convey maximum clarity. Who knows, it might save your life one day.
the flaws of language submersion?
In this article, Antonio Groceffo, a self-proclaimed language "expert," disparages language submersion as "the worst possible way to learn a language." His tone is highly arrogant and condescending at times, but I will attempt to look past his enormous ego to examine his key argument about the best way to learn a new language.
Language "submersion" is essentially the act of placing an individual in an environment where a certain language is spoken without giving him or her any instruction in that language. Essentially, it is like teaching someone how to read by throwing Don Quixote at him and expecting him to learn. Certainly, one can see the inherent flaws in this approach, which may validate the claims Groceffo makes in his editorial. However, while the task is monumental, it is certainly not impossible. When the Dutch first made contact with Japan in the early 17th century, Japanese scholars learned to speak Dutch simply by listening to and attempting to communicate with the foreigners, with no textbooks, flash cards, or language classes involved. Still, learning a language this way, without any prior experience, is undeniably a difficult task.
Yet at the same time, the language-learning method that Groceffo seems to support, traditional classroom learning, isn't exactly the best way to learn a language either. Groceffo concedes this fact himself, whether intentionally or unintentionally, when he mentions that some individuals graduate with an advanced degree in a foreign language without achieving native-level fluency. I believe Groceffo discounts the value of language immersion and the importance of speaking with native speakers.
If simply talking to people who speak a certain language is such a bad idea, how did we learn our first languages to begin with? When I was a baby, my parents certainly didn't enroll me in English-language classes before I could start to talk. Instead, I learned English through the exact method that Groceffo completely disregards: language submersion. Certain software programs, namely Rosetta Stone, have recognized this as an excellent way to learn a language and have built their products around it. Perhaps language submersion isn't that bad of an idea after all.
Tuesday, May 25, 2010
translating the Word of God
This article deals with the translation of the Bible, a potentially sensitive text to render into various obscure languages. According to the article, Jon Riding and his extensive team of scientists, linguists, and theologians at the Bible Society are developing a revolutionary technology called Paratext that will make the process of Biblical translation much more efficient.
Currently, translating the Bible into a new language is a laborious task that can take anywhere from 10 to 20 years. To uphold the authenticity of the original texts, the Bible can only be translated from the Greek or Hebrew, and the result must be translated back into the original language to cross-check for any potential discrepancies. As this task is incredibly time-consuming, the Bible remains virtually unknown to speakers of thousands of languages around the world.
Apparently, this new software will change all that. Paratext won't necessarily translate giant chunks of text into a desired language, like Google Translate does, but will rather serve as an important tool for human translators to conduct their work much more effectively and efficiently.
In class, we've discussed the huge implications of possible mistranslations (e.g. how notions of divine conception may have been shaped by translation), and this new technology may increase the possibility of minute mistranslations that carry great disparities in meaning. Although this article doesn't describe the inner workings of the new software in depth, one can expect opposition to such a controversial use of technology. But the implications of Paratext are even more significant. Perhaps this is the key to giving everyone on Earth access to the Bible, a new religion, and an entirely new take on life.
Thursday, May 20, 2010
"Korean, the language of love"
http://www.globalpost.com/dispatch/south-korea/100506/learn-korean-language-dating-expat-life
This fun but fascinating article discusses one possible motivation for learning the Korean language. According to the article, many foreigners, particularly men, learn Korean so they can better communicate with native Koreans and increase their chances of a romantic relationship. The influx of English-language academies, or hagwons, has guaranteed a steady supply of foreign men looking for love in Korea.
While this phenomenon may seem quite random to someone not familiar with the subject, my experience in a Stanford Korean class supports this hypothesis. Out of the 12 people in my class, 4 are non-native speakers, 2 of whom are men. Both of these men are learning Korean to better communicate with their Korean girlfriends.
But this situation brings up many interesting issues regarding bilingual romantic relationships. How many of these individuals apply their language skills for a long-term relationship rather than a one-night stand? Is a meaningful relationship with a significant language barrier even possible? This article suggests that most men looking for a relationship are only interested in "hooking up" rather than settling down with their Korean counterparts. If this is true across the spectrum, this is certainly an interesting use of one's language skills.
Also, for your viewing pleasure:
Monday, May 10, 2010
Deafness in one ear lowers language skills?
A recent WebMD article entitled "1-sided hearing loss lowers language skills" caught my attention because, well, it seemed like such a random topic at first. According to the article, new research shows that loss of hearing in one ear at an early age may affect a child's ability to grasp certain language skills and concepts. Judith C. E. Lieu, M.D., of Washington University claims that "on average, children with hearing loss in one ear have poorer oral language scores than children with hearing in both ears."
The reasons behind the possible correlation between this variable (hearing loss in one ear) and this outcome (poorer language skills) are unclear. Lieu suggests that children with hearing loss in one ear may disengage from group activities because the noise and sound overwhelm them. To (very unscientifically) test this hypothesis, I plugged my left ear to see if the sound I heard in a group setting was any different from when I didn't have a finger stuck in one ear.
The differences I observed were minimal. However, that brief test certainly does not discount Lieu's evidence in any way. Instead, I propose an expansion of Lieu's experiment: provide each child with unilateral hearing loss with a hearing aid that amplifies sound so that they, too, can effectively hear with two ears, then re-record the students' language performance and compare it with that of children with full hearing. Or an entirely separate experiment could be performed, with one group of children with hearing in both ears, one group with unilateral hearing loss and no hearing aid, and another group with unilateral hearing loss and a hearing aid. Of course, this brings up the ethical issue of withholding a potentially life-changing device from an entire experimental group of children ...
In any case, the original article can be found here: http://children.webmd.com/news/20100510/1-sided-hearing-loss-lowers-language-skills