Sunday, July 21, 2019
Dichotic Listening Experiment
George Papamanolioudakis

Abstract: In this experiment we collected data from seventeen (17) first-year psychology students in order to identify differences in speech recognition between the left and the right ear. Based on previous findings, we expected a significant difference between them, because the left hemisphere of the brain, which controls the right side of the body, contains the major areas responsible for speech production and recognition (Gallese & Stamenov, 2002). A dichotic listening test was administered over headphones, presenting participants with nonsense syllables such as "ka" and "ta" simultaneously, one to each ear. Our goal was to analyse the scores from both ears and determine whether they differed. The data were ratio-level and within participants, and they were analysed with a non-parametric test (Mann-Whitney) because of the small sample. The results confirmed the hypothesis, although later research with a larger sample would help us consolidate the findings and provide converging evidence from different methods.

Introduction: In this study we examine whether people's ability to report words accurately is affected by which ear they hear them in. To investigate this, we performed a dichotic listening task. Previous research on this subject (Kimura, 1961) showed that the left hemisphere recognises speech sounds better than the right. Because the brain is connected to the body largely contralaterally, we assume that the right ear will be better at receiving words than the left. That original experiment can be questioned, however, as it was performed on patients with epileptogenic foci in different parts of the brain. Later, following the 1966 annual meeting of the Academy of Aphasia in Chicago, Doreen Kimura (1967) reviewed the evidence on the asymmetry in speech recognition between the two hemispheres and confirmed that the right ear is better able to recognise verbal stimuli because of its stronger connections to the left hemisphere. Another experiment (Molfese, Freeman, & Palermo, 1975), which recorded auditory evoked responses from both cerebral hemispheres in participants of all ages, likewise found that the left hemisphere responded more strongly to speech stimuli, whereas the right responded better to non-speech stimuli. The reason the left hemisphere is more accurate with verbal and speech stimuli is that many speech-related areas are located there. A variety of evidence supports this, such as case studies in which damage to the left hemisphere caused speech dysfunctions. More specifically, Broca's area, among other left-hemisphere regions, has repeatedly been reported to be very important in the verbal domain (Gallese & Stamenov, 2002). These studies would not be as accurate if scientists could not analyse brain activity with specialised technology such as magnetoencephalography (MEG), fMRI, and PET scans. Using fMRI, Embick, Marantz, Miyashita and O'Neil (2000) concluded that Broca's area is specialised for syntactic processing, which links a specific left-hemisphere region to a specific language function. Another area of the brain also seems to play a crucial role in language understanding.
Scientists found that when they pharmacologically increased the mean arterial pressure of a patient with a left fronto-temporal stroke, they managed to improve his language deficits, because Wernicke's area (located in the left hemisphere) received improved perfusion (Hillis et al., 2001). Other interesting findings have come from examining "split-brain" patients. These patients had their corpus callosum (the structure that connects the left and right hemispheres) severed for medical reasons, giving scientists the opportunity to explore the differences between the "connected" brain and the "split" brain. Those findings showed that, in the split-brain condition, an individual could not verbally identify an object presented only in the left visual field (left visual field, right hemisphere), because there was no longer a connection between the two hemispheres (Gazzaniga, 1967). Many researchers have used the dichotic listening test to examine whether the left or the right ear (and therefore the right or left hemisphere) better analyses speech stimuli or other sounds (birdsong, music, etc.). In this experiment we use the same method, expecting a significant difference between the left and the right ear.

Method:

Participants: Seventeen first-year undergraduate psychology students participated in this experiment: ten (10) males and seven (7) females. The mean age was 22.3 years, and the range was eighteen (18) to twenty-nine (29). All participants were right-handed.

Design: The independent variable was the ear of presentation (left or right), and the dependent variable was the number of syllables correctly identified from each ear. The design was within participants, as correct answers were measured from each participant for both ears.

Materials: Each participant used a pair of headphones that delivered a separate stimulus to each ear. The stimuli were 15 combinations of nonsense syllables, each consisting of one of a series of consonants (b, d, g, k, p, t) paired with the vowel "a". These sounds were recorded in 16-bit monaural mode and edited to a duration of 500 milliseconds. Each person listened to 30 presentations of the stimuli, carefully balanced across both ears, each presentation providing a different consonant-vowel pairing to each ear. For example, the sounds "ka" and "ta" were presented at the same time, one to each ear; the ear of presentation was then reversed over the 30 trials, so that each pairing appeared in both ears equally (a sketch of one way to build such a balanced trial list appears after the Procedure section). Here is the link to the test (Dichotic Listening).

Procedure: All participants arrived at CityU on time. They were welcomed by the instructors and shown to their seats. They were asked to read the information sheet, and after all their questions were answered they signed the consent form. Each participant used their own computer and headphones. They were asked to visit the link to the test, and when everyone was ready they completed the dichotic listening test individually. The test used was taken from the APA website (http://opl.apa.org/Experiments/AlphabetList.aspx), in the "experiments" section under the letter "d" (for dichotic listening). After opening the test, participants were asked to enter the class ID number so that the data could be collected from each of them. After they finished, they were thanked for their participation in the study and left.
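The counterbalancing described in the Materials section (15 syllable pairings, each presented twice with the ear assignment reversed, giving 30 trials) can be illustrated with a short sketch. The Python below is only a hypothetical reconstruction of that trial structure, assuming the six consonants and the vowel "a" described above; the function name build_trial_list and the shuffling seed are my own assumptions, and the actual APA Online Psychology Laboratory implementation may differ.

```python
# Minimal sketch of a balanced dichotic trial list (illustrative only).
import itertools
import random

CONSONANTS = ["b", "d", "g", "k", "p", "t"]
SYLLABLES = [c + "a" for c in CONSONANTS]        # "ba", "da", "ga", "ka", "pa", "ta"

def build_trial_list(seed=0):
    """Return 30 trials: each unordered syllable pair occurs twice,
    once with each member assigned to the left ear."""
    pairs = list(itertools.combinations(SYLLABLES, 2))   # 15 distinct pairings
    trials = []
    for a, b in pairs:
        trials.append({"left": a, "right": b})           # e.g. "ka" left, "ta" right
        trials.append({"left": b, "right": a})           # same pairing, ears reversed
    random.Random(seed).shuffle(trials)                  # randomise presentation order
    return trials                                        # 15 pairs x 2 orders = 30 trials

if __name__ == "__main__":
    for i, t in enumerate(build_trial_list(), start=1):
        print(f"trial {i:02d}: left={t['left']}  right={t['right']}")
```

Because every pairing appears once in each ear assignment, neither ear receives systematically easier or harder syllables, which is the point of the balancing described above.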
Results: This experiment took place in order to confirm that the right ear, because of its direct connection to the left hemisphere, would recognise syllables better than the left ear. The data collected were ratio-level and within participants, and a non-parametric test (Mann-Whitney) was carried out because of the small number of participants. The data show a significant difference in syllable recognition between the left and the right ear. More specifically, the right ear scored much higher (M = 11.76, SD = 3.63) than the left (M = 6.71, SD = 3.08). The hypothesis was two-tailed, and the Mann-Whitney test gave z = 3.64, p < .001 (a code sketch of this analysis appears after the reference list).

Discussion: Based on previous research, we were able to perform a dichotic listening test to confirm that there would be a difference in syllable recognition between the right and the left ear. As Doreen Kimura (1961) suggested, the right ear is more capable of recognising verbal stimuli because it is connected directly to the left hemisphere of the brain. This conclusion rests on many dichotic listening tests (Kimura, 1961), on brain dysfunctions, especially in Broca's and Wernicke's areas (Gallese & Stamenov, 2002), and on specialised brain scanning with MEG, fMRI, and PET technology (Embick, Marantz, Miyashita, & O'Neil, 2000). Split-brain case studies confirmed that, after the two hemispheres were separated, patients were not able to verbally identify an object presented in their left visual field, because the connection to the left hemisphere was lost (Gazzaniga, 1967). Our hypothesis was that there would be a significant difference in understanding speech stimuli between the left and the right ear, and our findings confirm that difference, suggesting a strong likelihood that the same result would be found in the wider population.

References:

Embick, D., Marantz, A., Miyashita, Y., O'Neil, W., & Sakai, K. L. (2000). A syntactic specialization for Broca's area. Proceedings of the National Academy of Sciences, 97(11), 6150-6154.

Etard, O., Mellet, E., Papathanassiou, D., Benali, K., Houdé, O., Mazoyer, B., & Tzourio-Mazoyer, N. (2000). Picture naming without Broca's and Wernicke's area. NeuroReport, 11(3), 617-622.

Gallese, V., & Stamenov, M. (2002, April 1). Mirror neurons and the evolution of brain and language. Retrieved from EBSCOhost: http://web.a.ebscohost.com

Gazzaniga, M. S. (1967). The split brain in man. Scientific American, 217(2), 24-29.

Hillis, A. E., Barker, P. B., Beauchamp, N. J., Winters, B. D., Mirski, M., & Wityk, R. J. (2001). Restoring blood pressure reperfused Wernicke's area and improved language. Neurology, 56(5), 670-672.

Kimura, D. (1961). Cerebral dominance and the perception of verbal stimuli. Canadian Journal of Psychology/Revue canadienne de psychologie, 15(3), 166.

Kimura, D. (1967). Functional asymmetry of the brain in dichotic listening. Cortex, 3(2), 163-178.

Molfese, D. L., Freeman, R. B., & Palermo, D. S. (1975). The ontogeny of brain lateralization for speech and nonspeech stimuli. Brain and Language, 2, 356-368.
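As referenced in the Results section, the following Python sketch shows how the per-ear summary statistics and the two-tailed Mann-Whitney comparison reported there might be computed with SciPy. The function name analyse_dichotic_scores is an assumption of mine, and the numbers in the example call are arbitrary demonstration values, not the study's actual participant data (only the group means and standard deviations were reported).

```python
# Hypothetical analysis sketch for the dichotic listening scores.
import numpy as np
from scipy import stats

def analyse_dichotic_scores(left_ear, right_ear):
    """Summarise correct-identification counts per ear and run the
    two-tailed Mann-Whitney comparison named in the report."""
    left_ear = np.asarray(left_ear, dtype=float)
    right_ear = np.asarray(right_ear, dtype=float)

    # Descriptive statistics per ear (M and SD, as in the Results section).
    print(f"right ear: M = {right_ear.mean():.2f}, SD = {right_ear.std(ddof=1):.2f}")
    print(f"left ear:  M = {left_ear.mean():.2f}, SD = {left_ear.std(ddof=1):.2f}")

    # Non-parametric two-tailed comparison of the two score distributions.
    u_stat, p_value = stats.mannwhitneyu(right_ear, left_ear, alternative="two-sided")
    print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_value:.4f}")
    return u_stat, p_value

if __name__ == "__main__":
    # Example call with arbitrary demonstration numbers (not the study's data).
    analyse_dichotic_scores(
        left_ear=[7, 9, 5, 8, 6, 10, 4, 7, 3, 6, 11, 5, 8, 4, 7, 6, 8],
        right_ear=[12, 15, 9, 14, 11, 16, 10, 13, 8, 12, 17, 11, 14, 9, 13, 12, 14],
    )
```

Because the design was within participants, a paired alternative such as scipy.stats.wilcoxon could also be applied to the same two score vectors; the sketch above simply follows the Mann-Whitney choice stated in the report.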