Description: This lecture covers the layout and organization of the course as well as separate introductions to the visual system and the auditory system.
Instructors: Peter H. Schiller and Chris Brown
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.
PROFESSOR: This is our first introductory meeting of the course, which is 9.04. And we are going to cover vision and audition in this course, and there are going to be two of us lecturing. My name is Peter Schiller, and this is Chris Brown. And I will be talking about the vision portion, and Chris will be lecturing about the auditory portion.
Now, what I'm going to do is I'm going to hand out the syllabi that we have, in this case, for the first half of the course. And that we are going to discuss in some detail today for the first half of the lecture, and Chris is going to discuss it for the second half. So that is the basic plan for today. And I will go through some of the basic procedures and issues that we may want to deal with at this very introductory portion.
So first of all, let me talk about the reading assignments. If you have the handout, they are ready for you. If you look at the second page, that's where we have the assigned readings for the vision half of the course. Now, for that half of the course, the top eight assignments are all articles in various journals. We don't have a textbook for this portion of the course. And then in addition to the assigned readings, we have recommended readings that are listed there.
And then another important factor that is listed there-- let me first say that the lectures will be put on Stellar, in most cases, after each lecture. And in addition, the videos that we are now recording will also become available, but they will not be available until well after each lecture. So I would advise each of you to come to the lectures rather than hoping to read the assigned material only or to eventually look at the videos.
The reason I'm telling you this is that our analysis has shown that students who attend the lectures regularly get much better grades on the exams than students who do not. So I strongly urge all of you to come to as many lectures as you possibly can.
Now, the additional requirement that you're going to have for this course is to write two research reports, one for vision and one for audition. And the assigned written report that you need to put together is based on a paper listed at the bottom of the second page. In this case, it's going to be a paper that was written quite some years ago, a very important and remarkable paper published by Oster and Barlow, as you can see.
And the task for you will be to not just report what they had reported, because that's repetitious, but to do a bit of research and write about what has been discovered since the remarkable findings that these two people had made at the time. All right. So that's the research report.
And then I want to specify the exams. We are going to have a midterm exam, and the exact date for this has already been set: October 23. All right? You can find this in the handout, and I will specify it in more detail in just a minute. And then we are going to have a final exam at the end of the term. The exact date for that, as always at MIT, will not be set until probably sometime in November.
So now let me also specify the grade breakdown. I think that's important for all of us. The written report for each half of the course-- there's going to be one report, as I've already said, for vision and one for audition-- and that will constitute 10% of the grade for each. And the midterm exam, this constitutes 25%.
The final exam constitutes 55% of the overall grade. And within that, 15% will be on vision and 40% on audition. So if you add it up, you can see that vision and audition are set up to be exactly equally weighted on the exams.
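To make the arithmetic concrete, here is a quick check in Python (a sketch added for illustration; the component labels are mine, not the course's):

    weights = {"vision report": 10, "audition report": 10,
               "midterm (vision)": 25,
               "final, vision part": 15, "final, audition part": 40}
    assert sum(weights.values()) == 100          # the components cover the whole grade
    exam_vision = weights["midterm (vision)"] + weights["final, vision part"]
    exam_audition = weights["final, audition part"]
    print(exam_vision, exam_audition)            # 40 40: the exams weight the halves equally
    print(exam_vision + weights["vision report"],
          exam_audition + weights["audition report"])   # 50 50: equal overall as well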
MICHELLE: Hi. I'm Michelle. I'll be helping the professors, especially with [INAUDIBLE].
PROFESSOR: So I'm Chris Brown, and I'm one of the instructors. I'll be teaching the second half. And my research is in two areas: brainstem auditory reflexes, like the startle reflex and the middle ear muscle reflex, and animal models of the auditory brainstem implant, which is a neural prosthesis used in deaf individuals.
PROFESSOR: All right. And I'm Peter Schiller, and I work on the visual system. And I'm a professor here in our department. So that's very nice. Thank you for the introductions. And I hope, you guys, we all get to know each other. I'm very impressed that there's so many seniors here. That's actually unusual. I don't remember having this high a percentage of seniors in the class. That's really very nice, very nice. OK.
So now we are going to talk, for the first part of today's lecture, about what aspects of visual processing we are trying to understand and, therefore, what we are going to try to cover in this course in terms of topics. OK?
So first of all, what we are going to do for several lectures is to talk about the layout and organization of the visual system itself. Most of it we will discuss as it applies to higher mammals, in particular primates, meaning monkeys and humans.
Then we are going to talk about specific aspects of visual processing. We're going to try to understand how we adapt in vision, and, very interestingly, how we are able to perceive colors and process them accurately. Another fascinating topic is how we are capable of analyzing motion. That's a complex, very interesting topic, as is depth perception.
And the reason depth perception is particularly interesting is because, as you know, the retinal surface is essentially a two-dimensional arrangement. And yet from whatever falls on these two dimensions in the left and right eyes, somehow the brain needs to convert to be able to see the third dimension. And as a result, several mechanisms have evolved to accomplish that, and we are going to discuss them.
Then, again, another very complex topic is how we recognize objects. Perhaps the most complex of those is our incredible ability to recognize faces. And that is highlighted, of course, by the fact that if you look at simpler organisms, like, I don't know, monkeys, they all look the same to you.
But human beings, who are actually more similar to each other than monkeys perhaps are, we are capable of telling apart and readily recognizing over long periods of time. So it's a very interesting topic.
And yet another topic that we will discuss is how we make eye movements. As you're probably aware, we are constantly moving our eyes. You make saccadic eye movements about three times a second, thousands of times a day, hundreds of thousands of times, to be able to see things clearly in the world. So we are going to try to understand how that incredible ability has evolved and how it is realized by the brain.
OK. So now to look at exactly how we are going to cover this, let me go through this. During the next lecture, which is September 9, we are going to look at the basic layout of the retina and the lateral geniculate system, as well as how the visual system in general is wired.
Then on September 11, we're going to look at the visual cortex, then at the so-called ON and OFF channels, and you'll realize what they are once we talk about them. And then there's another set of channels that originates in the retina, the midget and parasol channels. We'll discuss those and try to figure out why they evolved and what their role is in being able to see the world in a realistic fashion.
Then we're going to talk about adaptation and color, depth perception, and form perception. And then we're going to have a lot of fun on October 2, when we're going to look at illusions and also visual prostheses, because one of you, in particular, is interested in that topic.
Then we are going to talk about the neural control of visually guided eye movements. That's going to consist of two sessions. And then we're going to talk about motion perception and another aspect of eye movements when we pursue something with smooth eye movements. And then we're going to have an overview.
And then, lastly, on October 23rd, we are going to have the midterm exam. That's going to cover questions from all of these lectures. I should tell you right now that the midterm exam is going to consist of multiple-choice questions. So you're not going to be asked to write anything. You're just going to have to pick the correct answer for each of the questions.
All right. So now what I would like to talk about next, in summary fashion, is what we call the tools of the trade: over the many years that scientists have tried to understand how the visual system and, for that matter, the brain work, what kinds of methods have been employed? And so I'm going to talk about each of these just very briefly this time, and then they will come up repeatedly during all of the lectures.
Now, the first method I'm going to talk about is called psychophysics. I'm sure most of you know what that is. It's a scientific way to study the behavior of humans and animals to determine how well they can see.
Now, there are several procedures for this. I'm going to describe one that's used both in humans and monkeys: nowadays you can use a color monitor, and I will describe that in just a second. After that, I will talk about anatomy, electrophysiology, pharmacology, brain lesions, imaging, and optogenetics.
So now let's start with psychophysics in more detail. So here is a color monitor, and either monkeys or humans can be trained to first look at a fixation spot. And that's important because we want to always be able to present stimuli in selected locations of the visual field or selected locations along the retina.
This is particularly important, because when you study the brain, different regions of the visual field representation are located in different areas, for example, in the visual cortex. So what you do then is you can present a single stimulus like this, and the task of the human or the monkey is to either make a saccadic eye movement to it, say that's where it is, or to press a lever that's in front of them.
And then on each trial, it appears someplace else. You can present it in many different locations, and maybe one of those locations will be relevant to what part of the brain you are studying. And then what you do, you can systematically vary all kinds of aspects of the stimulus. You can vary the color. You can vary the contrast. You can vary the size. You can vary the shape.
And by systematically varying this, you can create curves to describe exactly how well you can see any particular thing like, for example, just how much contrast you need to perceive something. All right. So that's called the detection task.
Now, a related task, which has also been used extensively, is called the discrimination task. In this case, you present a fixation spot again. The person or the monkey fixates, and then you present a whole bunch of stimuli, one of which is different from the others. And you have to select where that one had appeared, the one that's different, by making an eye movement or pressing the appropriate lever.
Now, when you do this, you can systematically vary the difference between the so-called distractor stimuli, which are all identical, and the target until the person is no longer able to tell the difference. And that way you can, again, generate a curve.
And you can specify just how much of a difference is needed, say, to tell how good you are at perceiving slightly different colors. All right? And by doing that systematically, you can generate these functions using these psychophysical procedures to determine pretty well how you're able to see.
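To make this concrete, here is a minimal sketch of how such detection or discrimination data become a psychophysical curve (illustrative Python, not part of the course materials; the data points and parameter names are made up):

    import numpy as np
    from scipy.optimize import curve_fit

    contrast = np.array([0.01, 0.02, 0.04, 0.08, 0.16, 0.32])   # stimulus contrast per block
    p_detect = np.array([0.05, 0.10, 0.35, 0.80, 0.95, 1.00])   # fraction of trials detected

    def logistic(c, threshold, slope):
        # Probability of detection as a function of contrast.
        return 1.0 / (1.0 + np.exp(-slope * (np.log(c) - np.log(threshold))))

    (threshold, slope), _ = curve_fit(logistic, contrast, p_detect, p0=[0.05, 2.0])
    print(f"contrast threshold (50% detection): {threshold:.3f}")

The fitted threshold is the contrast at which detection reaches 50%, which is exactly the kind of summary number these psychophysical procedures are designed to produce.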
And now this particular approach is also very useful when it comes to studying individuals, humans in this case, who have some problems with vision. So if they have a problem in seeing colors, you can readily determine, well, what's the magnitude of that problem? And thereby it can tell you what procedures might be used to try to ameliorate their shortcoming.
Now, another method that has been used extensively in not only vision, but in many, many different areas of studying the brain, including audition of course, is anatomy. Numerous methods have evolved in the course of anatomists working on these problems, and the first is a very simple one. You just look at the whole brain.
And I'm showing this because this is a monkey brain, and you will encounter the monkey brain repeatedly. And it so happens that after people have studied this extensively, they were able to give names to just about every gyrus or every brain area and also relate it to what the function is of those areas.
And so, for example, just to name a few of these, this is called the central sulcus. I need to do one more thing here. OK. So this here is the central sulcus. All right? Just for you to remember, humans also have a central sulcus, of course. And this is the lunate sulcus. And this region back here-- let's see, did I label this?
Oh, here's a couple more, the arcuate and the principalis. You will encounter these repeatedly. And this region back here is the one which is the primary visual cortex in monkeys that has been extensively studied and has yielded remarkable discoveries about the way it works. So that is called area V1, or primary visual cortex. All right.
So now, just a few more examples of anatomy. Here's another example showing what the eye looks like. And the remarkable thing about the human eye, and the eye of monkeys and other primates as well, is that it has become highly specialized.
There's a region here which is called the fovea. And in that region, you have a very, very dense distribution of photoreceptors and other cells in the retina. And because of that, you have very high acuity there.
Now, because of that, eye movements have to be effective for you to be able to see fine detail. For example, what do you do when you read? You make three or four saccadic eye movements across a line, then you go down to the next line, and so on down the page.
And you do that because you cannot make out the details of letters in the periphery because, there, the distribution of the photoreceptors and the cells in the retina in general become less and less dense. So that is a high degree of specialization that we will discuss in much more detail next time.
Now then, all the fibers from the retinal ganglion cells course across the inner surface of the retina and gather into the so-called optic nerve, through which over a million fibers from the retina project into the nervous system. Exactly how they project I will discuss in considerable detail next time.
Now, this area here, where the fibers exit, is often also called the blind spot, and you don't notice it even if you close one eye. But if you do a careful experiment-- I'll explain that next time-- you can actually map out this little region where you don't see anything.
The blind spots are in different locations in the two eyes, so they do not overlap. Consequently, when you look with both eyes, you don't have a, quote, blind spot. So that's an example of what the human retina looks like, and it has been studied extensively using a whole array of anatomical procedures.
Now, the third anatomical procedure I want to tell you about is labeling individual cells. The way this was done, and still is being done, is that you slice the brain into very, very thin sections, put them on glass slides, and then look at them under a microscope.
Now, here's an example of a coronal section through the monkey lateral geniculate nucleus. That's one region of the brain to which the retinal ganglion cells project. And it's a beautifully layered structure, which I'll describe in detail next time. And these little spots that you see here are actual cells, labeled using a so-called Nissl stain.
Now, another method used in staining cells is the famous Golgi stain, which was discovered, or invented perhaps, by Golgi, for which he received the Nobel Prize in 1906. The remarkable quality of this procedure is that the label-- it's a silver stain-- marks not only the cell bodies, as the Nissl stain you have just seen does, but also all the processes, the dendrites as well as the axons. So you see the whole cell as a result of this staining procedure.
Yet another way to do this, which is more sophisticated and used nowadays, is to record intracellularly from a cell and then inject a label. This happens to be the so-called Procion yellow labeling substance. You inject it into a cell in the eye, and then you process the tissue, again in thin sections. And this is an example of what that looks like.
So this also stains all the processes of the cell. And the advantage here is that you can study the cell electrophysiologically to determine what it is like, and then stain it, so you can establish the relationship between what the cell does and what the cell looks like.
All right. So now let's turn to the electrophysiological method, which is a logical consequence of this. Once again, here we have a monkey brain. And what you do here is put microelectrodes into the brain. Now, this was a development that came around the turn of the century, or a little after.
Initially, microelectrodes were made from very thin tubes of glass, which were heated and then pulled so that the tip became smaller than a micron. So it was very, very small. Subsequently, other methods were developed: fine pieces of wire were etched until the tip was very, very small and then coated with insulation. People were then able to put these electrodes into the brain and record single cells just as with the glass pipettes.
So the example here then is that you take a microelectrode, put it into the brain, and then that is connected to an amplifier system and a computer. And when you do that, you can record from a single cell. Now, as you well know, single cells generate action potentials. And that is shown here on an oscilloscope in a schematic fashion.
And what some clever people realized is that an easy way to monitor the manner in which single cells generate action potentials is to put this signal onto a loudspeaker system. And so every time a cell fired, what you would literally hear is beep, beep, like that. OK? And so if you shine a light on it several times, it will go like brrrp, like that.
And the big advantage of this was that many cells see only a tiny portion of the world. And if you don't know where it is, you have to take some projector or something and move it around. And if the receptive field is here, you go, brrrp, brrrp, brrrp, brrrp, like that. OK? And so then you can map it out very accurately.
Instead of having to watch the oscilloscope, you hear the responses of the cell, and that enables you to perform all sorts of experiments without having to keep your eyes on the oscilloscope.
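For the curious, the loudspeaker trick is easy to mimic in software. Here is a sketch (illustrative Python; the function and numbers are my assumptions, not the lab's actual setup) that turns a list of spike times into an audible click train, one brief click per action potential:

    import numpy as np

    def spikes_to_audio(spike_times_s, duration_s, rate=44100):
        # Return an audio waveform with a 1 ms click at each spike time.
        audio = np.zeros(int(duration_s * rate))
        click = np.ones(int(0.001 * rate))          # 1 ms rectangular click
        for t in spike_times_s:
            i = int(t * rate)
            audio[i:i + click.size] = click[:audio.size - i]
        return audio

    # e.g. a cell firing a brief burst: played back, you would hear "brrrp"
    burst = spikes_to_audio([0.100, 0.105, 0.110, 0.115], duration_s=0.5)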
Now, another method that is used extensively in electrophysiology is called electrical stimulation. It's a similar process. You take a microelectrode, for example, you put it in the brain, and then you pass electric current. Typically, that electric current is passed in such a way as to mimic action potentials.
So if you were to listen when it's activated, again, brrrp, you hear that. But this time, instead of the cell firing on its own, you are firing the cells in that area, and that can then elicit all kinds of responses.
If you stimulate here, for example-- remember now, this is the visual system in the monkey-- as has been shown in humans, electrical stimulation elicits a percept, a small spot, a star-like image. In the auditory system when you stimulate, you can hear something.
And if you stimulate in the areas that are related to moving your eyes, then the stimulation causes a saccadic eye movement. So all these methods are very, very good for trying to better understand the organization of the brain, in this case for vision and for eye movements.
Now, yet another set of methods is pharmacology. When you do pharmacological experiments, there are many different procedures; I'll just describe one here. Once again, you can stick a glass pipette into the brain, and then you can inject through it.
You can inject either with an actual syringe or using several other methods, and you can inject all kinds of agents. For example, you can inject a neurotransmitter analog or a neurotransmitter antagonist to determine what effects it has in various parts of the brain and, in our case, in the visual system or the oculomotor system.
Now, yet another method that has been used is brain inactivation. Several procedures are available to inactivate parts of the brain. Once again, here's a monkey brain. And this region here, which I already told you a little bit about when naming the sulci, is called the frontal eye fields.
This region has something to do with moving your eyes. So if you want to study and find out just what this area does, one procedure-- again, here is V1-- is to make a lesion. Now, sometimes you actually do that in a monkey.
But sometimes humans have had an accident or a war injury-- say, from serving in Vietnam-- in which a region of the brain was destroyed by a bullet or something similar. And that way you can find out what the consequence of having a lesion like that is. You can use a psychophysical procedure, as I described to you, to determine just what the consequence of having lost this area is.
This is a huge region of research. A great many types of experiments have been done. One of the famous individuals who had done this kind of work is Hans-Lukas Teuber, who used to be the chairman of our department starting way, way, way back in 1962.
And his specialty was to study Second World War veterans who had sustained various kinds of brain injuries. On the basis of that, studying them using psychophysical procedures, he assessed what kinds of functions various areas in the brain have and what kinds of tasks they perform.
Now, yet another method, which has some major advantages, is to make reversible inactivations rather than permanent ones. And this you can do by, for example, using a method of cooling. There's a device, the so-called Peltier device, that you can put on top of the brain.
And then, electronically, you can cool the surface that is in touch with the brain and see what happens when you cool it. And when you warm it up again, you can see what happens during recovery, which, in almost all cases like this, leads to full recovery and the same performance as prior to the cooling. Yes?
AUDIENCE: Can you only use this method for surface structures or?
PROFESSOR: No. They have now developed methods where you can actually lower them into the brain. And they're usually much finer, and very often they're sort of a loop type device. And you can lower that into the gyri or wherever you like, and, again, do the reversible cooling. Yes. That's a fairly recently developed device, and it works extremely well.
Now, yet another approach is to inject substances into the brain that anesthetize, if you will, a particular region after the injection, but only for a limited time. And then you can see how the behavior is affected.
Now, a variant of that, which I referred to before, of course, is that you can use agents that are selective, that don't inactivate the whole area but, for example, only affect excitatory neurotransmitters or only inhibitory ones. So you can do all kinds of selective things and establish the role that various neurotransmitters play in various brain areas.
Now, yet another method, very recently developed, which has been incredibly successful, is imaging. As you probably all know, we do have an MRI facility here in our department on the ground floor, and some of you may even have been subjects in it. The variant of it used for these experiments is called fMRI, functional Magnetic Resonance Imaging.
And so if you put a person in the magnet and present a certain set of stimuli repeatedly, OK, or differential ones, whatever, the brain areas that are active performing the analysis you asked for light up. So that method-- here's a complex picture of it. This is a setup in Freiburg, Germany. This is done with monkeys again.
You have this device and you put the monkey in there. You lower the device on it, and then you can have the subject perform trained tasks. And then you can analyze the brain and look at the nature of activation.
And here's an example of that, whatever this task is, doesn't matter. You can see that a particular brain region has been very heavily activated as a result of whatever manipulation they did. Now, there are lots and lots and lots of experiments of this sort, and we will talk about several of those as we look at various aspects of the visual system.
Now, the last method I want to just mention very, very briefly is optogenetics. That's a new method, and we have actually several people in our department here who are using this method. Now, this particular method-- let me explain just very quickly what this is all about.
Does everybody know what an opsin is? OK. How about if I say rhodopsin? OK. Most of you will know that. Rhodopsin is a light-sensitive molecule found in the rods among the photoreceptors. Now, there are all kinds of variants. That's why I said opsins rather than rhodopsins.
And what can be done is that you can selectively place these various substances into selected types of cells in the brain. And then, because these cells become light sensitive just like the photoreceptors, when you shine light onto the area where you had placed these opsins, you can drive those cells. You can make them respond by turning the light on.
Now, the amazing thing, which makes it a more powerful technique than electrical stimulation, is that you can set this up in such a way that, for example, if you use an opsin variant that is sensitive to red light, the cells will be excited by red light.
But if you have a slightly different substance that would be inhibited by blue light, then you can see what happens if you excite those cells, and then you inhibit those cells. So this gives you two sides to the coin, whereas electrical stimulation provides only one side.
So that's a wonderful technique. And here's a good example of it. Here we have-- I shouldn't say injected-- genetically labeled cells with so-called channelrhodopsin. When that's done and you shine in a blue light, you excite the cells. If instead you put in so-called halorhodopsin, then yellow light inhibits the cells.
So that's a remarkable technique. It's just at the very beginning of things, so we won't talk too much about this technique in studying the visual system yet. But I bet you that in another 10 years this is going to be a central topic.
All right. So to summarize these techniques, other than psychophysics: just to remind you again, number one, we have electrical recording using microelectrodes. OK? Secondly, we have electrical stimulation. Thirdly, we have the injection of pharmacological agents.
Then we have methods to inactivate regions, either permanently by lesions or reversibly by cooling or by injecting various substances. And lastly, we have optogenetics that enables you to activate cells or inhibit cells by shining light onto the brain.
So these are quite a remarkable set of techniques. And individuals who want to become neuroscientists are going to have to learn to master maybe not all of these techniques, but certainly some of them, so that they can carry out new and original experiments to determine how the brain works and, in our case, of course, how the visual system works.
So that is, in essence then, what I wanted to cover. And now we are going to move on and have Chris tell you about his portion of the course, which will be taught during the second half of this semester. So as I said, next time we are going to start talking, first of all, about the wiring of the visual system, the basic wiring. And then we're going to talk about the retina in detail and a little bit about the lateral geniculate nucleus. That's the next lecture. Please.
AUDIENCE: So for the eight assigned readings that you have on here, how will we know when to read what?
PROFESSOR: Well, that's a good question. The readings that have to do with eye movements, as you can see, you don't have to read until we get to the eye movements in the latter part.
Initially, when we talk about the retina, for example, you definitely want to read the [INAUDIBLE] paper and the Schiller paper on parallel information processing channels. Then the ones that have to do with the Hermann grid illusion and visual prostheses you don't have to read until we come to the section on illusions and visual prostheses.
PROFESSOR: OK. Welcome, everybody. I'm Chris Brown. And I just, in the remaining time, wanted to give you a synopsis of what's going to happen during the second half of the term. So I'll be giving the lectures during the second half on the topic of audition, or hearing.
And there's my email, [email protected]. So as you can see by my email, I'm associated with Harvard, in fact, Harvard Medical School. And I'm in the so-called ENT department at Harvard Med School, and that stands for Ear, Nose, and Throat.
So some of you who are going to be going to medical school will certainly do an ENT rotation, where you learn about the various aspects of ENT. And much of it, of course, is the subject of otology, what happens when people have disorders of hearing, problems with their hearing. And in addition, many ENT doctors also operate on people who have head and neck cancers.
So surgeries of those two types go on at my hospital, which is Massachusetts Eye and Ear Infirmary. And that's across the river in Boston, and it is, of course, one of the main teaching hospitals for ENT as well as ophthalmology. There's a big Ophthalmology department where the ophthalmologists deal with disorders of sight and vision.
So I have an introductory reading, which is a book chapter that I wrote with Joe Santos-Sacchi, and which is actually now in the fourth edition of a textbook called Fundamental Neuroscience. And I believe this book chapter is on the course website now.
And it summarizes pretty much what I'll cover during the semester in a reading that you could probably do in an hour or less, and it has many of the figures that I'll use. So if you're shopping around for a course and want to know what's going to happen in the second half here, you can look at that book chapter.
There's a nice textbook that I'll also be assigning a number of readings from throughout the term, called Auditory Neuroscience: Making Sense of Sound by Schnupp, Nelken, and King. The first and last of these fellows are at Oxford University, and they work on psychophysics, that is, how we perceive hearing. They test humans, and they also do a fair amount of animal psychophysics.
And in the case of Israel Nelken, he's at Hebrew University in Israel. And he does a lot of electrophysiological recordings-- you heard about electrophysiology from Peter's talk just now-- especially recordings from the auditory cortex. But this is a very nice book, especially for its coverage of the central auditory pathway and psychophysics.
And it's pretty cheap. I think it's $30. And I believe I was told earlier that you can get it, as an MIT student, online free. And what's good about the online edition is there are lots of demonstrations, each indicated by the little icon in the margin of the text. And when you click on that demonstration, if you have your earbuds in you can hear what the sound demonstration is.
And I'll be doing quite a lot of sound demonstrations through the course of the semester because I think it livens up the class a little bit, and this book is especially good for sound demos. So I encourage you at least to get the online edition of that textbook.
So also on the course website is the syllabus for what the audition lectures will cover, and I just put a couple here to give you a flavor. In the first lecture on October 28th, we'll talk about the physical characteristics of sound and what happens to that sound when it strikes your external, and then your middle, and inner ears.
And associated with each lecture is an original research article that, in this case, is Hofman, et al., and the title is "Relearning Sound Localization with New Ears." And so, in a nutshell, what they did in that research report was take little pieces of clay, insert them into their external ears, or pinnae, and thereby distort the pinnae quite a lot.
They couldn't do what van Gogh did, which was cut off the pinna, but they certainly distorted their pinnae quite a lot on both sides. And then they tested, using themselves and other volunteers as subjects, how good their ability to localize sound was.
And especially when the sound varied in elevation, they found huge differences: with these distortions, the subjects couldn't distinguish sounds that were straight ahead from sounds coming from above them.
But what was funny, and what harks back to the title of the article, is that when they had the volunteers go out and live their normal lives with these pinna distortions in place for a few weeks and then come back into the lab to be tested again, they found that the volunteers could now localize sounds in elevation with these new ears. They had relearned how to localize sound.
So this is a nice demonstration of learning or plasticity, at least in psychophysical responses. And it also emphasizes the function of your external ear, which helps you localize sounds in space.
In the second lecture, we'll be talking about the receptor cells for hearing, which are called hair cells because they have little appendages at their tops that looked like hairs to the early neuroanatomists. And of course, sound is a mechanical energy that moves the fluid in which these hair cells are immersed, and so moves these little hairs, or appendages, at the top of the cell, and that's how the cell can respond to sound. And so hair cells are the very important receptor cells for hearing, which are, of course, the analogs of the rods and cones in the visual system.
And the research report associated with our discussion of hair cells will be about how a special protein called prestin is required for electromotility, which is a function of outer hair cells, one of the two types of hair cells, and which allows them to actually move, to flex and change their membrane length, when their stereocilia sense these mechanical disturbances.
So it's a pretty interesting paper in what we call a knockout animal. So the prestin is genetically knocked out in this particular animal, and the sense of hearing is then tested again in these knockout animals.
So we have a whole bunch of lectures. I haven't indicated all of them here; they're on the course website. Toward the end of the semester, we'll have, as Peter indicated, a written assignment for audition. I haven't actually thought it up yet, but let me just give you an example.
This year, as last year, we'll be talking a lot about neural prostheses for audition. And the most famous neural prosthesis for audition, of course, is the cochlear implant, which I'll talk a little bit about later, and which works quite well.
There's also a neural prosthesis that goes into the auditory brainstem. It's called the auditory brainstem implant, and it's used in some other types of individuals who are deaf. And it doesn't work anywhere near as well as the cochlear implant.
It's sometimes called a lip reading assist device because people who have the brainstem implant usually can't understand speech over the telephone. They need to be facing you and looking at your lips as you're speaking. They need additional cues.
And so there'll be a lot of discussion this term about the differences between these two types of implant, where they're placed in the auditory pathway, and why one works much better than the other. That was the written assignment for last year: a discussion of why the cochlear implant works a lot better than the auditory brainstem implant. So I'll have something along those lines, a question that you can answer using the material from our course.
OK. Let me just go through, in half a dozen slides or so, what I consider to be the high points of the auditory part of the course. About the first third of it will be a discussion of the auditory periphery. The periphery is usually divided into three basic divisions. First is the external ear, which most of us think of as the ear: the pinna and the ear canal, which leads to the tympanic membrane, here in yellow, or eardrum.
And at that point begins the middle ear. The middle ear is an air-filled space. If you've been on a recent plane flight and your ears are a little bit stuffed up, it can be very uncomfortable, especially when the plane is coming down in altitude.
And your eardrum bulges, because the eardrum is just a very thin layer of skin, and it can bulge very easily. But it's painful when the eardrum bulges as the pressure changes while you're descending in a plane.
In the air-filled space of the middle ear are three small bones, the malleus, the incus, and the stapes-- if you remember from high school biology, the hammer, the anvil, and the stirrup. The stirrup looks like what a cowboy has on his saddle to put his boots through.
It's a very, very small bone. In fact, it's the smallest bone in the body. These bones are very small because they have to vibrate in response to sound, and so they can't be big and massive. Massive things don't vibrate very well.
And so the stapes sometimes becomes encompassed by bony growths and is prevented from vibrating, in a disease called otosclerosis. And so at my hospital, the Massachusetts Eye and Ear Infirmary, they do an operation to cure that type of deafness, called a stapedectomy. So what's an -ectomy? You medical types, what does that mean?
AUDIENCE: Removal.
PROFESSOR: It means removal, right. So they take the stapes out. And that's because if they just loosen it up, the bone regrows and again keeps the stapes from vibrating. So they replace the natural stapes with a stapes prosthesis, either a little piston or a tube, which they hook onto the incus with a wire and put into the so-called footplate area, or oval window, of the cochlea, which is the next structure I'll talk about. And that very nicely restores the sense of hearing.
In fact, when I was a postdoc fellow at Mass Eye and Ear, I could go watch the surgeries, and I watched a stapedectomy. And the patient was anesthetized, but not so much that she was really out. She was more sedated.
And at the end of the operation, the surgeon was adjusting the artificial prosthesis, and the surgeon said, well, can you hear me? And the patient didn't respond. So he moved it around a little bit or adjusted the wire, I don't know which. He says, can you hear me now? And there was no response from the patient.
And he did a little more manipulation and adjustment. And he finally said, can you hear me? And the patient said, why are you yelling at me? I can hear you just fine. So usually toward the end of the operation, the anesthesiologist turns off the anesthesia and the patient becomes more lightly sedated, and they adjust the stapes prosthesis so it works well.
So that type of operation is fairly common and very successful at restoring hearing in so-called conductive hearing loss, where the problem lies with the bones that conduct the acoustic vibration into the cochlea.
Now, on to the cochlea-- this is the structure here. "Cochlea" is the word for the inner ear. It comes from the Greek word "kokhlias," which means snail, and the cochlea certainly looks like a coiled snail shell. Inside it are the receptor cells for hearing, the hair cells, and the dendrites of the auditory nerve fibers. OK? And the cochlea is a bony capsule filled with fluid, membranes, and cells.
And this anatomy is a little bit complex, so I brought in a model of the auditory periphery. So we have the external ear, the long ear canal here, and the eardrum; it's kind of slanted here. And this part that I'll lift out here is the cochlea, or the inner ear, the snail-shell-shaped structure. And leading from it is the yellow-colored, in this case, auditory nerve, which sends messages from the cochlea into the brain.
And these funny, loop-shaped structures that I'm grasping are the semicircular canals, which mediate the sense of balance or angular acceleration. When you rotate your head, those hair cells in there are sensitive to those rotations.
OK. So I'll pass this model around; you can take a closer look at it. And in our hospital, the surgeons practice on real specimens like this, from postmortem material, because in otologic surgery there's a lot of drilling to access, for example, the middle ear or the inner ear.
And there are a lot of important structures that you don't want to run into with your drill bit; the jugular bulb is that red thing there, and the facial nerve goes right through the middle ear. So the surgeons need to know their way around the middle ear so that they can avoid important structures and go to the structure they intend to operate on.
OK. So you heard Dr. Schiller talk about electrophysiology and recordings from individual neurons. And a lot of what we know about how the inner ear works comes from such types of experiments. And this is an experiment at the top here that gives the responses in the form of action potentials.
Each one of these blips is a little action potential, or impulse, the response of one single auditory nerve fiber recorded in the auditory nerve of a cat, which is a very common model for auditory neuroscience, or at least was in years past.
So this response area is a mapping of sound frequency; this axis is tone frequency, graphed in kilohertz. The frequency of a sound waveform is simply how many times it repeats back and forth per second. Frequencies that are common in human hearing are, for example, 1,000 hertz.
And the upper limit of human hearing is approximately 20 kilohertz. The lower limit of human hearing is down around 50 or 100 hertz. Smaller animals like the cat are shifted up in frequency by perhaps an octave, a doubling, in the frequencies they're most sensitive to.
This auditory nerve fiber responded to a variety of frequencies, except at the very lowest sound level. The y-axis is a graph of sound level. This is very low level or soft sound, this would be a medium sound, and this would be a very high level sound. At the lowest levels of sound, the auditory nerve fiber only gave a response to frequencies around 10 kilohertz, or 10,000 cycles per second.
There are some spontaneous firings from the nerve fiber, and those can happen even if there's no sound presentation. These neurons can be spontaneously active.
If you outline this response area, you can see that it's very sharply tuned to sound frequency: it only responds around 10 kilohertz. And this exquisitely sharp tuning of the auditory nerve is the way, perhaps, that the auditory nerve tells the brain that the ear is hearing 10 kilohertz, and not 9 kilohertz and not 11 kilohertz. OK?
If you increase the sound to higher levels, this auditory nerve fiber, like others, responds to a wide variety of sound frequencies, but it has a very sharp cutoff at the high-frequency edge. Maybe at 11 kilohertz it responds, but at 11.1 kilohertz there's no response. So the tuning becomes broader, but there's still a really nice, sharp high-frequency cutoff.
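As a rough illustration of how a tuning curve is read off such a response area, here is a sketch (made-up interface in Python, not the actual analysis code from these studies) that finds, for each tone frequency, the lowest level at which the fiber fires above its spontaneous rate:

    import numpy as np

    def tuning_curve(rates, levels_db, spont_rate, criterion=10.0):
        # rates: firing rates, shape (n_levels, n_freqs), levels_db sorted ascending.
        # Returns the threshold level in dB per frequency, or NaN if never driven.
        thresholds = np.full(rates.shape[1], np.nan)
        for f in range(rates.shape[1]):
            driven = np.where(rates[:, f] > spont_rate + criterion)[0]
            if driven.size:
                thresholds[f] = levels_db[driven[0]]   # lowest driven level
        return thresholds

The V-shaped curve of thresholds against frequency, with its tip at 10 kilohertz for this fiber, is the tuning curve being described here.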
So what good is this for? Well, the ear is very good at resolving frequency, saying there's 10 kilohertz but not 9 kilohertz. And that's very important for identification of sounds. For example, how do we know, if we're talking on the telephone or not seeing the subject who's talking to us, that it's a female speaker or a male speaker or an infant speaker?
Well, male speakers have lower frequencies in their speech sounds. And so right away, if we hear a lot of low frequencies in the speech sounds, we assume we're talking to a male speaker. And that's, of course, a very important identification. How do we know we're hearing the vowel A and not the vowel E, ah or eh? Because of the different frequencies in the speech sounds for those two vowels.
So frequency coding is a very important subject in the auditory pathway for identification and distinguishing different types of sounds. And one way we know what frequencies we're listening to is if the auditory nerve fiber's tuned to a particular frequency [INAUDIBLE] responding and not the others.
Now, some very elegant studies have been done to look at the mapping of frequency along the spiral of the cochlea. And what those show is that way down at the base of the cochlea, the very highest frequencies are sensed by the hair cells and the auditory nerve fibers there.
And as you go more and more apically, you arrive first at middle and then at lower frequencies. So there's a very nice, orderly mapping of frequency along the receptor epithelium, that is, along the cochlea, in the sense of hearing. And so, obviously, the hearing organ is set up to distinguish frequencies and identify sounds.
OK. So that's not the only code for sound frequency that we have. We'll talk extensively about another code that uses a time-coding sense. And this comes from the way that auditory nerve fibers so-called phase lock to the sound's waveform.
Here's a sound stimulus, and here's auditory nerve firing, first with no stimulus; it's a fairly random pattern. And here it is with a stimulus turned on, and you can see that the spikes line up during a certain phase, or part, of the stimulus waveform. And that's not that impressive until you look at the time scale here. As I said before, sounds that are important in human hearing are as high in frequency as 1 kilohertz.
So if this is going back and forth 1,000 times per second, then the scale bar here for one period would be 1 millisecond. And so the auditory nerve is keeping track and firing only at a certain, preferential phase of the stimulus waveform, with a precision of a fraction of a millisecond. OK?
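Phase locking of this kind is commonly quantified by "vector strength" (a standard measure introduced by Goldberg and Brown): treat each spike as a unit vector at its phase within the stimulus cycle and take the length of the mean vector, which runs from 0 (no locking) to 1 (perfect locking). A minimal sketch with made-up spike times, tied to the 1 kilohertz, 1 millisecond example above:

    import numpy as np

    def vector_strength(spike_times_s, freq_hz):
        phases = 2 * np.pi * freq_hz * np.asarray(spike_times_s)  # phase of each spike
        return np.abs(np.mean(np.exp(1j * phases)))

    # Spikes landing near the same phase of every 1 ms cycle give a value near 1.
    spikes = 0.001 * np.arange(50) + np.random.normal(0, 0.0001, 50)
    print(vector_strength(spikes, 1000.0))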
And this is a much better phase-locking pattern than you get in other senses. For example, in the visual system, when you flash a light on and off even just 100 times per second, everything blurs out, and you don't have any phase locking the way you do in the auditory nerve's firing.
So this is a very nice code for sound frequency, a sort of secondary way to code it. And it's a very important code for musical sounds. Notes an octave apart, for example 1 kilohertz and 2 kilohertz, a doubling of sound frequency, have very similar temporal response patterns, which probably makes the octave a very beautiful musical interval to listen to. And it appears in the music of many different cultures.
So one of the demonstrations that I'll play for you is an A at 440 hertz and an octave above that, 880 hertz, and you'll hear how beautiful the two sound together. And then I'll mistune them, which is easy for me to do because I'm an amateur violinist, and I'll be doing this on a violin.
It's pretty easy to produce a mistuned octave, and it sounds awful, very dissonant, when you listen to it. And one of the reasons for that is the difference in phase locking between the two dissonant sounds and the two consonant sounds.
OK. So we will talk about what happens when you have problems with your hearing. One of the main problems, one that causes complete loss of hearing, is when the receptor cells are attacked by various types of insults: diseases, drugs, the aging process, or overstimulation from listening to very high-level sounds.
These can all kill the hair cells. And in the mammalian cochlea, once the hair cells are lost they never grow back. And there's very active interest in trying to get hair cells to regenerate by using stem cells or various growth factors, but so far that can't be achieved.
Luckily, in the auditory periphery, even if you lose the hair cells, which is the major cause of deafness, you retain your auditory nerve fibers. So these blue structures here are the auditory nerve fibers coming from the hair cells. And even if the hair cells are killed by these various insults, the auditory nerve fibers, or at least many of them, remain.
So you heard Dr. Schiller talk about electrical stimulation. You can put an electrical stimulating electrode in the inner ear and stimulate those remaining auditory nerve fibers to fire impulses that are sent to the brain. And if you hook that system up right, you have a so-called cochlear implant.
The cochlear implant has a microphone that detects sound. It has a processor that converts that sound into various pulses of electrical stimulating current, which can be applied to a system of electrodes that is inserted into the cochlea.
The cochlea is a beautiful bony capsule. You can snake this electrode in; it doesn't move away, and you can glue it in place. You can lead the electrode out to the processor, which activates it when the microphone detects a sound.
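To give a flavor of what such a processor does, here is a toy sketch in the spirit of envelope-based strategies such as CIS (an illustration under stated assumptions, not the specification of any real device): band-pass filter the microphone signal into a few channels, extract each channel's envelope, and use the envelopes to set the amplitudes of the pulses delivered to the corresponding electrodes:

    import numpy as np
    from scipy.signal import butter, sosfilt, hilbert

    def channel_envelopes(mic_samples, rate, band_edges_hz):
        # Return one amplitude envelope per electrode channel.
        envelopes = []
        for lo, hi in zip(band_edges_hz[:-1], band_edges_hz[1:]):
            sos = butter(4, [lo, hi], btype="bandpass", fs=rate, output="sos")
            band = sosfilt(sos, mic_samples)
            envelopes.append(np.abs(hilbert(band)))   # envelope via Hilbert transform
        return envelopes

    # e.g. four channels spanning speech frequencies (band edges are illustrative):
    # envs = channel_envelopes(samples, 16000, [200, 500, 1000, 2000, 4000])

Real implants use many more channels and more elaborate pulse timing, but the frequency-to-electrode mapping mirrors the cochlea's own frequency map described earlier.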
And this cochlear implant is the most successful neural prosthesis that's been developed. It's implanted all the time at Mass Eye and Ear Infirmary. It's paid for by insurance. These days, insurance pays for a cochlear implant in your left cochlea if you're deaf, and it will also pay for another cochlear implant in your right cochlea if you're deaf.
So the metric for how successful this is, is whether the subject who's using the cochlear implant can understand speech. And so you can have various tests of speech. A speaker can give a sentence, and the person can respond. The speaker can give various simple words, and the cochlear implant user can respond. You can test these individuals in a very straightforward manner.
And we will have a demonstration by a cochlear implant user. I have an undergraduate demonstrator who's here at MIT. She'll come in, describe and show you her cochlear implant, and you can ask her questions. And I guarantee you, with this particular demonstration that I have in mind, that she won't always understand your questions.
I have had really great cochlear implant users who are superstars and understand every word. But more typically, they understand much of what you say, but not everything. And this particular room is a little bit challenging: there's some noise in the background, and it's not one-on-one, so the implant user won't know exactly who's speaking at any moment.
And in this case, the person just has one ear implanted, so her ability to localize sounds is compromised. And she won't know who's asking the question until she sort of zeroes in a little bit on it. So you'll see the cochlear implant is not perfect, but it's pretty good in its metric for speech comprehension.
OK. Now, I said this particular cochlear implant user only has one implant in one ear. To really be good at localizing sound, we need to have two ears, and there are the so-called binaural cues for sound localization.
Here's a subject listening to a sound source. The sound source is emitting sound here, and it gets to the subject's left ear first. And a short time later, it gets to the subject's right ear. And that's one of the cues for binaural localization of sound, which is interaural time difference.
The velocity of sound in air is 342 meters per second. And depending on how big your head is, you can calculate the Interaural Time Difference. And of course, that ITD is maximal if the sound source is located off to one side, and it's exactly zero if the sound source is located straight ahead. OK?
So that is a cue for localization of sounds. We'll listen to sounds that differ in interaural time difference. We can play these through headphones for you. And we'll have some demonstrations of that.
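Using the numbers from the lecture, the back-of-the-envelope ITD calculation looks like this (a sketch; the head width is an assumed value, and the simplest model just takes the extra path length across the head):

    import math

    SPEED_OF_SOUND = 342.0   # m/s, from the lecture
    HEAD_WIDTH = 0.175       # m, an assumed ear-to-ear distance

    def itd_seconds(azimuth_deg):
        # Path-length-difference model: zero straight ahead, maximal at 90 degrees.
        return (HEAD_WIDTH / SPEED_OF_SOUND) * math.sin(math.radians(azimuth_deg))

    print(itd_seconds(0.0))    # 0.0 s for a source straight ahead
    print(itd_seconds(90.0))   # ~0.0005 s, roughly half a millisecond off to one side

So the largest ITDs the brain ever has to work with are on the order of half a millisecond, which is why the sub-millisecond phase-locking precision discussed earlier matters so much.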
The other binaural cue for sound localization is the interaural level difference. Here is the same sound source, same subject. This ear will experience a slightly higher sound level than the other ear because sound can't perfectly bend around the head. The head creates a sound shadow. And this is especially important for high frequencies of sound, like above 3 kilohertz. So that is a second binaural cue for sound localization. We'll listen to those cues.
And we'll talk a lot about the brainstem processing of those binaural cues for sound localization. If you compare the visual and the auditory pathways-- this is a block diagram of the auditory pathway, and there are a lot of brainstem nuclei. We have the cochlear nucleus, the superior olivary complex, and the inferior colliculus. And these latter two get input from the two ears on the two sides, and they probably do the bulk of the neural processing for sound localization.
You don't have to do that so much in the visual system because the visual system, in a sense, has a mapping of external space already on the retina. But remember, the inner ears map sound frequency. And so the inner ears themselves don't know or don't have a good cue for where the sound is localized in space.
Instead, you need input from the two sides, and you need a lot of neural processing to help you determine where that sound source is coming from in space. OK. So we'll talk about the neural processing of those binaural cues.
We'll talk toward the end of the course about the various auditory cortical fields. These are the cortical fields in the side of the cat's brain. So this is the front of the brain. This is looking at the left side of the brain. This is the rear of the brain where the occipital lobes are, where V1 is. And on the side or temporal cortex, you have the auditory fields, including A1 or primary auditory cortex.
And toward the end of the course we'll at least touch a little bit upon the human auditory cortex. The human primary auditory cortical field is right here on the superior surface of the superior temporal gyrus. And just near it, in what could be called an auditory association area, is an area called Wernicke's area, which is very important in the processing of language.
And of course, connected with that is Broca's area, which is another important language center, in the dominant hemisphere at least, of humans. So we'll touch upon that at the very end of the course. And we'll also have a special topic called bat echolocation: how bats use their auditory sense to navigate around the world at night, even without their vision.
And finally, at the very last class period before the review sessions, we'll all go over for a tour of the research lab where I work at the Massachusetts Eye and Ear Infirmary. So it's across the Longfellow Bridge right at the end there where the arrow is. And we'll have some demonstrations there.
I think last year we had a demonstration on single-unit recording from an awake animal that's listening to sound. We had measurements from humans of the so-called otoacoustic emissions. These are sounds that can be detected in the ear canal with a very sensitive microphone. They're used in tests of hearing. And I also think we had a discussion of imaging of the auditory system.
And of course, if you've ever listened to an MRI machine, it's sometimes described as being as loud as the inside of a washing machine. And it's very challenging to image subjects who are listening to very low-level sounds in particular when there's all this background noise coming from the imaging machine. So some special things are done in auditory studies to minimize the noise coming from the imager.