EEG Studies of Social Perception, Dr. James McPartland


Uploaded by YaleUniversity on 08.03.2010

Transcript:
So I am the guy who has sat over there for the last four weeks and who now has the pleasure of talking to you about what I do.
So I'm an assistant professor here at the Child Study Center and I'm the associate director of the Developmental Electrophysiology Laboratory, which is run by Linda Mayes, and we kind of pool our resources because we all apply the same methods to study child development, both typical and atypical development.
And we are also all interested in social development, for different kinds of reasons. And I should also acknowledge Michael Crowley, who is the other associate director of the DEL.
So when we start a talk like this in the medical school, we make disclosures; we present everything that gives us a bias. There are the financial biases: these are the people who fund the research that we do. I get money to do my research from a couple of different government institutes, and soon from NARSAD. And I also get money from book sales.
And then in this particular instance I want to disclose a non-financial conflict of interest, because, and you've heard all the presenters kind of gripe about this, it's weird for us. What we do as clinical psychologists is show lots of pictures of people, and we show you videos of people, and they make things interesting. They keep the people who are listening to you awake, which is something that we all like. We can't do that for this, but for me it's a different situation, because every moment that I am speaking to an audience and showing them pictures of people who are not my children, I am experiencing internal tension.
So now I have the nice circumstance where my only choice is to show you pictures of people whose right to consent I control. And so you'll be seeing lots of Nora and Agnes in the next hour or so.
So what I'm going to talk about today is the research that I have been doing here. I have been here since 2004. I first came here as a clinical intern, I'm a clinical psychologist, and then I did a post-doc, and I've been on the research faculty and the regular faculty since then.
I am very interested in autism and the way people with autism process information. Because I think of autism as a social disorder, I am very interested in the way they process social information. And one of the best ways to study social information processing is with faces.
So we're going to start off talking some about
behavioral and brain specialization for faces.
And then I'm going to talk a little bit about the method of electrophysiology, because it's not familiar to everybody and some things about it are a little idiosyncratic.
I'm going to talk about face perception as a tool to understand autism and social development in autism, and whether the problems that we see in autism are better explained as a general information processing problem or as a problem specific to social information. And then I'm going to talk about some of the work that's going on now and some of the things that we have in store. And please do interrupt; just because the cameras are rolling doesn't mean that you can't ask questions, so please do. Maybe I'll try to be unclear so you have to ask questions.
So when we think about autism, and this should all be familiar to you now, we think about it as a problem in three areas. So: social problems; problems with communication, and that can be lots of different kinds of problems with communication. It could mean that you just don't talk. It could mean that you have lots of language but have problems communicating with people, mostly revolving around social communication. And then a third category that's a lot more nebulous and less well defined. We describe this as rigid, repetitive behaviors and interests, and this can be anything ranging from repetitive motor movements like hand flapping to being really insistent on a particular schedule or a particular routine. And when we have problems in all those areas, we describe them as autism spectrum disorders.
One of the things that's really crucial to think about when you think about autism is that it happens in a developing child. So autism isn't just something that happens, right. It's something that a person is most likely born with, but then they carry it with them as they move through the world. And in that respect autism shapes the world the person lives in. So whatever is different about their brain when they're born is then also affected by the very different world that they live in. There are the primary things that first shape the brain, but then there are secondary things that a person experiences as they move through the world as a person with autism.
So think of typical social development as representing this course. We come into the world really attuned to people. And I know you all are going to have face-development overload, because Kasia talked about some of it already, but I'll try to talk about some of the facts that she didn't touch upon, because these are some of the things I think are the coolest about human development. I'll talk about how we come into the world paying close attention to people, and we remain that way. For most of us, people are really what make the world go around, and we spend our lives thinking and learning about people. We become social experts. But we can think about people with autism, for example, charting a different course: kind of becoming expert in other things, paying attention to things besides people, learning about those things, becoming experts in other things.
What's important about this way of understanding autism is that we're not saying this is a brain that can't do certain kinds of things; we're saying it's a brain that is paying attention to the wrong kinds of information. That's a crucial distinction when we think about the different kinds of theories that people have put forward to explain autism. They grossly fall into two categories: the idea that there's something specifically wrong with social information processing, or the idea that this is a brain that can't handle the processing demands of things that happen to be social. And by that I mean, for example, complex information: the idea that people with autism only have trouble with social information because it's complex, versus there being something uniquely social about it. And so when we look at it this way, we're saying that this is a brain that can do many sophisticated, many complex things; it just can't do them with social information.
And so, as a person interested in development, when we look at these pathways, right, we think of this person getting older as they move along, and we want to understand how we can trace it further back. If we think about this person, the further they get down here, the harder a time they're having with many of the things that really matter in the world, because a lot of the things that matter in the world, from having friends to checking out of a supermarket, involve people. And so one of the ways we study this is with faces, because we can learn so much about faces very early in life and we can apply it throughout the lifespan. It's a good way for us, as people interested in child development, to get down to brass tacks in terms of where things start, both in healthy social development and in terms of where things go awry in atypical social development.
So let's talk some about
faces
in typical social development.
So we know that faces are probably the first way, well, I guess sound too, and some of my colleagues have interesting theories about the way sound could shape social development even within the womb, but conventionally faces are one of the first ways that we learn to interact with other people, that we learn about other people.
And we also know a lot about faces: developmental psychologists, cognitive neuroscientists, and neurologists have all done a lot of great research to help us understand the way people approach faces, both in early infancy and throughout the lifespan. So it's a really nice comparison for autism, which we think has something to do with social interaction, which we think affects us very early in life and carries through the lifespan, but which we know very little about in terms of the neural bases. And so faces are a nice way to tap into those areas that have yet to be explained in autism.
So, just some of the, I guess I wouldn't even call them facts, you can call them factoids, about faces. Bless you. Very early in life, so infants nine minutes old, are more likely to look at faces, or even things that are just like faces. So if you take a schematic face, just two squares on the top and one square on the bottom, kind of like two eyes and a mouth, not even the right shapes, just the configuration, infants are going to look at that more than other things. They are going to track it further: if you move one across an infant's field of vision, they're going to follow it further. And this is compared to even an upside-down one or a bull's-eye. You can take lots of things that you might think would be interesting to infants, all the Fisher-Price toys, oh, we're going to have to take that out of the YouTube cut, all the Fisher-Price toys at Toys"R"Us are not as interesting to most infants as faces.
We also know that babies are more likely to smile at human faces.
This is my older daughter
doing her best pirate impression as a five-week-old.
We know that, again, infants seem to enjoy looking at faces, which is a great tool.
So how many people in this room have ever interacted with a really small baby, say a five-week-old baby?
So if you do something
to that baby
and that baby smiles at you, what do you do?
I'm sorry.
You smile back. You do it again.
Over the past three years I've spent tremendous amounts of time making stupid faces and stupid sounds to make this happen again.
And so here is this little helpless baby
that has this one tool they can use
that can then control
the behavior of an adult very effectively, right. So what a great way to draw out the kind of information that we need to develop: people talking to us and people smiling at us and making silly sounds is what's going to help us grow.
It's pretty amazing that human infants come into the world
able to elicit that information from their environment.
So by the second day of life, very young children are able to recognize the face of their mother. And these have been very elegant studies, controlling for whether a person has also just given birth, so it's not new-motherness they're recognizing. It's not the scent of their mother they're recognizing. It's not her blondness or her brunetteness or her blue-eyedness. They're recognizing the face of their mother by the second day of life.
And in one of my favorite studies of all time, a study done by Andy Meltzoff, who is at the University of Washington and who I was fortunate to study with as a graduate student, what Andy did was partner with a maternity ward in a hospital in Seattle. He found some parents who were apparently very, very motivated to help science, and they got his beeper number. When the baby was born, he was beeped, and whether it was noon or midnight or three a.m. or nine a.m., Andy would zip down to the hospital, and that newborn baby would be put in a baby carrier, and Andy would hover over the baby with a video camera behind him, filming the baby but not filming Andy. And then Andy would do things to the baby. He would stick out his tongue. He would pucker his lips. He would open his mouth, all while the video camera was recording the baby.
So then he would take these videotapes back to his research assistants
and they would code what the baby's face was doing
and sure enough you were able to predict
what face Andy had been making
by looking at the baby.
So, the youngest kids they saw this happening in were kids who were forty-two minutes old. What's so remarkable about this is that, when we think about the cliche of nature versus nurture, there hasn't been a lot of nurturing in forty-two minutes, right. These are kids who haven't had a ton of time to learn about their own face. They haven't had a ton of time to learn about other people's faces. And yet they're able to see what's happening there and understand: what's there is what I've got here, and what that is doing, I can do.
And Andy really takes this as the basis for all social interaction, an idea he calls the "Like Me" hypothesis: the idea that children are born with an internal, intermodal map, so that they can map what they see onto their sense of their own body. And he thinks that that's where social interaction emerges from.
We also theorize that face processing gives us a foundation for much more sophisticated social interaction skills. So does everyone know what theory of mind is? Who can tell me what theory of mind is? It's the idea that others have their own ideas and beliefs. Exactly. Right, that other people have their own intentions, their own wants, their own needs, their own beliefs. And so one of the ideas for the development of a theory of mind, and Simon Baron-Cohen, who's a very productive autism researcher, has written eloquently about this, is that understanding other people's faces and understanding other people's eyes is the first way we can learn about other people's intentions. And by understanding other people's intentions, we can then build the idea that their intentions aren't necessarily the same as ours, and that's the basis of a theory of mind.
So we've talked about these aspects of early development that are really precocious. We're very good with faces before we're very good with many other things. And as we develop, we see that the human brain has special ways of handling faces, and this is evident through two lines of evidence. One line of evidence is processing strategies: behavioral studies that I'll talk about show us that we seem to handle information in a face differently than we handle most other kinds of information. And also, through brain studies, from autopsies to intracranial recordings, through fMRI and through electrophysiological studies on the outside of the scalp, we find that there are special brain regions to handle faces.
In terms of processing strategies, two really robust findings about faces are the inversion effect and the decomposition effect. The inversion effect is really simple: when we turn a face upside down, it's much harder to recognize. We're all very good at recognizing faces, and it's pretty remarkable when you think about it. Right, so here's a room full of people, and you all look completely distinct to me, but you also all have two eyes, a nose, and a mouth. I apologize if I'm mistaken about anybody. I'm assuming. But you all have this same set of features on your face, right. We all are kind of the same, and yet we look totally different. If you think about carrots or apples, right, or monkeys, we have monkey researchers in the room. I have tried. I have done experiments where I use monkey faces, and monkey faces are at least as variable as ours, right. They've got different hair that might even make a better clue for telling monkeys apart, but I can't tell them apart. We're really good at telling apart these things, these faces that are all pretty similar, but turn them upside down and we're not much better at telling them apart than anything else.
Then there's also a finding called the decomposition effect: even though we can know people's faces very well, we're very bad at recognizing facial features outside of the context of a face. So even though this is my younger daughter, Aggie, and I recognize her face very well, if you take her eyes or her nose and put them outside the context of her face, next to another baby's nose or eyes, I'm not going to do nearly as well as I would with the entire face. And so these kinds of findings, the idea that faces upside down or faces in pieces are much harder to process, tell us that we might be encoding the information in a face in a different way.
So we think that most objects we encode in a piecemeal way, kind of the idea that a monkey face or an apple or a carrot is really just the sum of its parts, but a human face isn't. A human face is encoded as a whole, using a configural processing strategy. We learn faces as a unit, recognizing their features with respect to one another. So the idea is that this is a really distinct processing strategy that we apply selectively to human faces and that we learn to use over time; as we become experts in faces, we start to apply this processing strategy.
We also know that we have special parts of the brain to process faces. The first evidence for this came from the work of a neurologist who found patients who could lose the ability to recognize individual faces while still being fine at recognizing other things. So if they go to get their keys, they're not going to pick up their co-worker's keys, but if they go to pick up their wife, they might pick up their co-worker's wife. He looked at the brains of these people, and he found that, almost without variation, this disorder was secondary to damage to certain parts of the cortex, part of the inferotemporal cortex. And most of the time it was on the right; sometimes it could be on both sides, but most of the time it was on the right. And so this was the first clue: okay, maybe if people who get this part of the brain damaged lose this skill, maybe this part of the brain has something to do with the application of this skill.
And now we have much more elegant ways to look at the brain, and you'll hear, I believe, next week from my colleague Kevin Pelphrey, who applies the method that's named on this slide, functional magnetic resonance imaging. What functional magnetic resonance imaging, or fMRI, does is let us look at blood flow within the brain. So when we have a person performing a task inside an fMRI magnet, we get a good sense of where in their brain blood is flowing, and that gives us very good spatial resolution of activity, so where in the brain things happen.
So, in 1997, a very famous paper was published, a seminal work, and some of you may recognize the third author of the paper. He is now a faculty member in the Department of Psychology, Marvin Chun. And if I were allowed to show copyrighted information, you would see a picture of Marvin from that journal article with a little fusiform gyrus lighting up right next to him, but we can't, so instead this is another excuse to put up a picture of my daughter. There she's modeling a neuroscientist-in-training t-shirt from a mentor of mine at Harvard, Chuck Nelson, who has done a lot of pioneering work in understanding infant brain activity using ERP.
So, what they found using fMRI is that if you show people faces, compared to many, many other things, animal faces, places, hands, you see this part of the brain selectively activated: a part of the brain called the fusiform gyrus, which they called the fusiform face area. There's a more nuanced story than just the fusiform face area that I'll get into in a little bit.
But now let's talk a little bit about electrophysiology. I think that electrophysiology is a little more mysterious than some of the other methods that we use to study brains and to look at people's behavior. When we talk about electrophysiology, we often talk about ERP; ERP stands for event-related potential. Have you all heard of EEG before? Maybe you've heard about it because EEGs are used clinically. EEG is electroencephalogram, and that's just a recording of the electrical activity that your brain produces.
All the time, when we're doing anything, as long as we're alive, our brains are producing a small amount of electricity. And it's noisy; lots of different things are happening in that electricity. If I just show you a face and look at the electricity your brain is making, it's going to be kind of confusing, and if I show you a second face, it might not look exactly like what we saw in response to the first face. But what we can do is show you many, many faces, and eventually the electrical activity that is elicited specifically by the face is going to emerge from the noise, right. So if I do the same thing many times, sixty times, a hundred times, and I collapse all of those trials together and average them, I'm going to get a decent enough signal-to-noise ratio that I will see the electrical truth. I'll see the activity that is happening in response to this discrete event, this happening that is related to this event: this event-related potential.
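To make that averaging idea concrete, here is a minimal sketch in Python; the waveform shape, sampling rate, noise level, and trial count are all made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 500                         # assumed sampling rate in Hz
t = np.arange(0, 0.6, 1 / fs)    # one 600 ms epoch, time-locked to the face

# A made-up "true" event-related response: a negative dip near 170 ms.
true_erp = -4e-6 * np.exp(-((t - 0.17) ** 2) / (2 * 0.02 ** 2))

# Each trial is that same small response buried in much larger noise.
n_trials = 100
trials = true_erp + 20e-6 * rng.standard_normal((n_trials, len(t)))

# Averaging the time-locked trials cancels the noise (roughly by a factor
# of sqrt(n_trials)) while the event-related signal survives.
erp = trials.mean(axis=0)
print(f"noise per trial ~20.0 uV, after averaging ~{20 / np.sqrt(n_trials):.1f} uV")
```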
ERPs don't reflect the firing of a single neuron; they reflect the firing of many, many neurons that are firing in synchrony. So it really gives us a window on a particular time of brain activity.
The nice thing about ERPs is their timing. I stressed before that with fMRI we get a very good sense of where in the brain things happen; with an ERP we don't. I think about ERPs almost like shadows, electrical shadows on the scalp. You know that a shadow has been cast, and you can show two different kinds of images and see whether the shadows are the same or different, but you really don't know exactly where the light source creating that shadow is, or what exactly the size of the object creating that shadow is. So we have a very difficult time knowing exactly where in the brain ERPs come from, but we can really pinpoint the timing in a way that we can't with other neuroimaging methods. ERPs give us millisecond resolution to understand differences in cognitive events in the brain. The resolution of ERPs parallels the resolution of actual brain processes, so we can get a sense of things like the timing of a response that we can't get from other methods.
Now, when I say that we can't really tell where in the brain things are coming from with ERP, that's kind of a debated point. That's certainly the conservative stance to take. As an ERP researcher, maybe I should come out fighting a little bit and take a more aggressive stance, because what we can do is a process called source localization. If I have many, many data points on the scalp and I can measure the electrical activity at all of them, I can test hypotheses. I can say, well, if I see a pattern of activity that is happening in response to a face, and I know from fMRI studies that the fusiform gyrus is involved in faces, I can create a seed in that fusiform gyrus and see if electrical activity propagating from there could potentially result in the pattern of activity I see on the scalp. So we can do that, and that's called source localization analysis, and I'll actually show you an example of that later today.
The problem is the inverse problem, and the inverse problem is really simply this: because there could be more than one source of electrical activity in the brain, for any given scalp distribution there could be an infinite number of potential solutions inside the brain.
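Here is a minimal numerical sketch of why the inverse problem is underdetermined, and of the kind of choice a source localization has to make. The lead field below is just random numbers for illustration; a real one comes from a head model.

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(1)

# Hypothetical lead field: how 3 scalp sensors "see" 6 brain sources.
n_sensors, n_sources = 3, 6
L = rng.standard_normal((n_sensors, n_sources))

true_sources = np.array([1.0, 0.0, 0.0, 0.0, 0.5, 0.0])
scalp = L @ true_sources

# Adding any pattern from the null space of L changes the sources but
# leaves the scalp recording identical: many answers, one measurement.
alt_sources = true_sources + 2.0 * null_space(L)[:, 0]
print(np.allclose(scalp, L @ alt_sources))   # True

# A minimum-norm estimate resolves the ambiguity by picking the
# smallest-energy source pattern consistent with the scalp data.
estimate = np.linalg.pinv(L) @ scalp
```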
So we can get a sense with ERP of where in the brain things come from, but it's really not as robust as fMRI. And really, we're increasingly starting to think about them as complementary tools. Now we have the technology to record ERPs inside fMRI magnets, so that we can look at both the timing and the places in the brain where things happen.
So what does an ERP look like? This is a Geodesic Sensor Net. I'm going to give lots of credit to the creators of this net, because I'm using pictures from their website. This is Electrical Geodesics; the director, Don Tucker, is a close collaborator of ours. This is actually an updated version that they sent me several years ago, but basically the idea is you have many electrodes. This is a two-hundred-fifty-six-electrode net. All of those white things that you see are plastic pedestals. Inside each one of those plastic pedestals, and you can kind of see it right there, is a small sponge. Inside that sponge is a small electrical sensor, which is connected by all of these wires to an amplifier to boost the signal, because it's a really small signal; that's why we don't shock each other. And then that amplifier feeds to a computer that records the brain activity.
Now, who can guess, maybe it's not a guess if you've been involved with an EEG, why would we want to have sponges in there? I'll give you a clue: why don't you wade in the surf during a lightning storm? Exactly right. And what kind of water best conducts electricity? I'll give you another hint: we would wade in a lake before we waded in the ocean in a lightning storm. Exactly. So we soak all of these sponges in an electrolyte solution; it's actually potassium chloride, not sodium chloride. And that helps pick up the electricity from the scalp.
So this is a really nice technology. There are different ways to record EEG. The traditional way is to scratch the scalp a little bit, with something like a needle, and use an abrasive gel that creates the contact. But this is a really nice solution, especially if you're working with babies, or, is anybody familiar with a developmental disability sometimes associated with sensory sensitivities? Let's say, hypothetically, you're sitting in a class about autism, right. So it's a nicer solution for working with people with autism, because it's a little bit easier to tolerate than scratching someone's scalp and using the gel base. The way you apply it is you just stretch it over a person's head. And here's a close-up of what those electrodes look like, and this is from the newer version, what we call the HydroCel net, which just has smaller sponges embedded in little cups that help things not dry out so fast, so you can do longer experiments.
So, any questions about the technique, or this hat, or any of this stuff, or any questions about anything I've talked about so far?
How do they, you mean, how do they do it? It's a really good question. Well, let me compound your question with a question: Professor McPartland, how do they do it when infants have such poor visual acuity? And the answer is they probably do it with configuration. Infants see low spatial frequencies better, so they're going to kind of see the dark spots. And even that's remarkable, right; it's not like they're saying, my mom's the one with the really salient crow's feet.
No, they're recognizing, my mom's the one with this configuration of eyes, nose, and mouth. My question is, when do they encode the face? Obviously they have a test phase in this study, so when do they encode the face before this time?
It's a good question. We can't be sure, right.
My guess, and this is coming less from being a cognitive neuroscientist, a child psychologist, a person who studies human development, and more from being a father, is that when a baby is born and you put them to the breast to nurse, or even if you don't breastfeed, you give them a bottle, that infant gazes so intently at a person's face that that's probably how.
And really, I don't know the different theories about what exactly is happening in the brains of babies in the moments when they are born, but little babies can be quite ornery, yet they're generally pretty peaceful. There's this period they call wakeful alertness right after infants are born, when they kind of look at everything and take everything in, so who knows. But it's a good question, and it's not known. We could look at it; I mean, we can use ERPs on very, very early babies, so who knows. You could look at the response to what's happening, though I guess it would be pretty hard to look at, because you need multiple trials. But we could record their responses.
It could also be some kind of pairing, right. So even though an infant is seeing his or her mother's face for the very first time, they've been hearing the voice for a long time, right. And so maybe it's seeing this face come with this familiar voice. But it's a good question. Any other questions? I'm so happy we got a question, right, mid-lecture. This is a banner day for Psychology 350. Any other questions? Okay.
So the nice thing about ERPs is really all you have to be able to do is tolerate wearing a hat. We can put this hat on very, very young babies: infants, children, adults. When working with people with autism, we can put this hat on people with autism who are much, much smarter than anybody in this room. We can also put this on people with autism who don't have any language or who have ostensible cognitive impairments. So it's widely applicable in terms of studying brain function.
The way we do an ERP is we have a person wear this hat, and they sit in a chair in front of a TV monitor. It doesn't have to be a TV monitor; you can also do ERPs to auditory stimuli. I'll talk about face perception today, and all the faces I show you, they're not real faces. They're pictures of faces on a computer screen. So people have reasoned, well, how can we be sure that people will treat the face on a computer screen the same way as a regular face? People have actually done experiments where they have a person behind a, what's it called, piezoelectric piece of glass, so that when you apply a charge, you can make the glass opaque or transparent. So they've actually done ERPs to real faces, but here we do ERPs to pictures on a computer screen.
You make the same thing happen many, many, many times. One thing that's really important about ERPs is making sure that you have some measure of a person's attention. Because what if, you know, I do a study of face processing in people with autism and typically developing peers and I see differences, but what if the people with autism were all looking out the window during the course of the experiment, right? That could be the reason for different brain activity. And so it's very important that you have some kind of measure that the person is paying attention.
These images are taken from the website of the Luck lab, led by Steve Luck, who's really one of the pioneers of ERP. If any of you are interested in reading more about the basics of ERP, I'd be happy to lend you his very excellent introductory book.
What you can see is you have sort of ongoing EEG, and then we have things that happen. So if you see here, this is the first, let's say, he just calls it a stimulus, but let's say it's the first face, and then we see a response; and then the second face, and then we see a response; and so on, to the nth face. We see lots of different responses, and then what we can do is take all of those responses and average them together. So if you see this one, this one, this one, they have some commonalities, right. All of them have some kind of dip here. There's something else over here, and then there's something else over here. But when we average them, all the different pieces of noise cancel out, because lots of things are happening: you might be blinking, you might be scratching your leg, you might be thinking about your homework assignments or what you're going to do this weekend. But you're not going to be doing all of those things in perfect synchrony with the presentation of each face each time, and so when we only look at the activity that's in perfect synchrony with the face, we eventually get an ERP waveform, which looks like this. It's a little bit smoother, and yet it preserves the features of each individual trial.
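As a rough sketch of that pipeline, here is what stimulus-locked averaging might look like for one channel of continuous EEG; the function name and epoch window are illustrative, and a real pipeline would also filter the data and reject artifact trials such as blinks.

```python
import numpy as np

def erp_from_continuous(eeg, stim_samples, fs, tmin=-0.1, tmax=0.5):
    """Average stimulus-locked epochs from one channel of continuous EEG.

    eeg          : 1-D array of continuous EEG samples
    stim_samples : sample indices at which each face appeared
    fs           : sampling rate in Hz
    """
    pre, post = int(-tmin * fs), int(tmax * fs)
    # Cut a window around every stimulus marker that fits in the record.
    epochs = np.stack([eeg[s - pre:s + post] for s in stim_samples
                       if s - pre >= 0 and s + post <= len(eeg)])
    # Subtract each epoch's pre-stimulus baseline, then average:
    # activity that is not time-locked to the face cancels out.
    epochs = epochs - epochs[:, :pre].mean(axis=1, keepdims=True)
    return epochs.mean(axis=0)
```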
When we think about ERPs, this is the most basic way to analyze them; there are now more sophisticated ways using multivariate statistics that I'm not going to talk about today. But we can think about ERP components, and an ERP component is really just a feature of an ERP waveform that we reason to be linked to some kind of cognitive event. The two aspects of an ERP component that we pay attention to are the amplitude, which is usually expressed in microvolts, and that's shown by that purple arrow there, and the latency, which is expressed in milliseconds, and that's shown by that green arrow there. So this would be a component, and its amplitude is how far down it goes, and its latency is when it peaks.
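On an averaged waveform like that, a component's amplitude and latency can be pulled out in a few lines; the search window below is just an example, not a standard value.

```python
import numpy as np

def component_peak(erp, times, window=(0.13, 0.21), negative=True):
    """Return (peak amplitude in volts, peak latency in seconds) of a
    component inside a search window, e.g. a negative-going peak."""
    idx = np.where((times >= window[0]) & (times <= window[1]))[0]
    peak = idx[np.argmin(erp[idx])] if negative else idx[np.argmax(erp[idx])]
    return erp[peak], times[peak]
```

Run on the simulated ERP from the earlier sketch, component_peak(erp, t) would land on the dip near 170 ms.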
When we talk about ERP components, the way we refer to them is usually with a combination of one letter and a number or series of numbers, though there are certain exceptions to this rule. Generally speaking, we talk about things being N or P: N meaning it's a negative-going electrical component, P meaning it's a positive-going electrical component. And then the number, as in N1 or N2, can reflect an ordinal value: an N1 would be the first negative peak in a waveform, an N2 the second negative peak in a waveform. We can also refer to a component specifically by the timing of its peak, and that would be, for example, what we'll talk about today, the N170.
So, would an N170 be a component that goes positive or negative? Come on, this is an easy one, guys. That's right, negative. And what would be the latency of the N170? So, is a hundred seventy milliseconds fast or slow? That's fast. A hundred seventy milliseconds is not a long time, right. So that's one of the neat things about ERP, and that's one of the reasons we're going to talk about the N170. The first study that really talked about the N170 was in 1996.
It's one of my favorite studies of all time, by a guy named Shlomo Bentin, who I believe actually did a post-doc here at Yale earlier in his career.
And he found that when you show people lots of different kinds of things, you get this negative spike at a hundred and seventy milliseconds that was largest and fastest to faces. It was a really robust finding. And what was so interesting about this, because this had been looked at in other ways, was to see that this happens so fast: a hundred and seventy milliseconds after seeing a face, your brain is treating it differently from everything else. So that was kind of news.
Since then there have been many, many, many studies looking at the N170, with all kinds of manipulations. What happens when you turn faces upside down? What happens when you do a photographic inversion of a face, so it looks like a photonegative? What does the N170 look like in little kids? What does it look like in people with autism? In people with schizophrenia? So there have been many, many studies of the N170.
Some of the basic things to know about it are that it represents the earliest stages of face processing, maybe not the earliest stage; there's some evidence that activity at a hundred milliseconds might also individuate faces from other things, or distinguish faces from other kinds of individual stimuli. We think that it represents structural encoding. This is the stage of face perception in which your brain registers a face as a face: not, this is my mother, or this is my best friend, or this is my neighbor, but just, this is a face.
The N170 is also sensitive to inversion, and this is kind of neat. Remember when we talked about processing strategies and what happens when we turn a face upside down? Easier or harder to recognize? I'm going to keep you all awake. Harder to recognize. So if you were going to make a guess about what would happen to a face-specific component, when we turn faces upside down and we stop treating them like faces, what should happen to a face-specific ERP component? It should go away, right? We wouldn't treat it as a face. Exact opposite. It actually gets bigger, and it gets slower. And that's called the inversion effect.
And I mentioned that we can study ERPs in very young children. People have looked at face perception in very young children, and by three months we see specific activity elicited by faces. It happens later, and it's kind of an amalgamation of activity distributed between an N290 and a P400. Over time those seem to merge together, and both get faster. And we have the adult-like pattern by the time a person is around fourteen years old.
So this is a face-specific component, or, I should say, a face-sensitive component. But we can also just look at other brain components and put faces into paradigms in which other things could be used. We could look at recognition memory for faces. We could look at the effect of salience for faces. And I'll talk about all those different kinds of things in some of the ERP studies of autism that I'm going to talk about.
So let's move on and talk a little bit more about autism. What do we know about faces in autism? And I'm going to make a disclaimer right at the start: faces are now one of the most well-studied areas of research in autism. And I guess well-studied gives an inappropriate connotation, because well-studied seems to suggest that they're well understood, and it's the exact opposite. There's been so much research on faces, but the research has really painted a confusing picture. So I'm going to give you a story that can fit into one class today, but it's a more nuanced story. I'll actually tell you about some of the work that we're doing to shed light on those nuances at the very end of my lecture, because that's one of our lab's goals now: to try and understand, well, why do some studies find certain problems with faces and other studies don't? Why do we see this heterogeneity in face perception in autism?
And it's worth mentioning, one of Dr. Klin's first slides showed the idea of an autism spectrum, right; there were lots of people, all different colors, under a rainbow. It looked like some kind of happy autism party. If we think about the way we diagnose autism, right, there's a list of twelve symptoms. How many symptoms do you need to get a diagnosis of autism, not even autism spectrum disorder, autism, the most severe diagnosis on the spectrum? Does anybody know? Six. Six. So, six of twelve. So you could theoretically have two children diagnosed with autism with non-overlapping symptoms. So it makes absolutely perfect sense that we would have heterogeneity in research studies of people with autism.
One of the most robust findings in autism, both clinically, and I'm sure many of you have noticed this now in your practicum placements, is that people with autism seem to approach faces in a different way. Before we had the really elegant studies that we have now, where we take infants who are at risk for autism by virtue of having an older sibling with autism and study the development of the disorder in a prospective way, we used to study it in retrospective ways. One way to study things retrospectively is using videotapes. And so my graduate advisor, Geraldine Dawson, and a person who was a graduate student at the time, Julie Osterling, had this idea to look at first birthday parties. It's a nice experiment, right, because what do we want in an experiment? We want things to be well controlled, standardized, the same for everybody, and we can't really do that with home videos. But at birthday parties the same kinds of things happen for everybody, so we have a nice comparison to see how infants might be different.
And also, when we're interested in things like social referencing, social referencing is the idea that you look to another person's face to figure out how to make sense of something novel, what better situation to elicit social referencing in a one-year-old child than, for the first time in their life, to take out a piece of dessert, put it in front of them, and then set it on fire, right. Most kids have never had parents set their highchair on fire before. And so it's a nice chance for kids to try to figure out: is this something good, is this something bad? So they looked at these home videos, and they found that at first birthdays, and in subsequent studies even earlier, by six months, kids who would go on to be diagnosed with autism looked different from kids who went on to develop typically, and also from kids who went on to have non-autistic developmental delays.
We know that throughout the lifespan, people with autism have more trouble with recognizing faces, with interpreting emotional expressions, with maintaining eye gaze. It's really striking; I have encountered striking anecdotal examples. A fellow I worked with at the University of Washington was a man with high-functioning autism, and his preoccupation was food choices. In the halls of the building he would approach me and say, Jamie, do you like apple pie or do you like pumpkin pie? And I would say, I like both, and he would say, well, do you like them or do you love them? And he would ask this series of questions to really get at your food preferences. But when I would see him outside of work, if I walked past him on the street, he wouldn't recognize me, he wouldn't say anything. But then if I said, hey, how're you doing, he would say, do you like apples or do you love them? He needed the context of my voice to recognize me.
We also see different processing strategies for faces in autism. We see a reduced inversion effect; people with autism don't show the same decrement in performance when we turn faces upside down. We don't see the same decomposition effect; in fact, some studies have shown that they're better at feature-based matching, especially if it revolves around the mouth. And so we infer that people with autism aren't using this configural processing strategy that we've talked about. They're approaching faces in a piecemeal way, the way most of us approach objects.
You've heard someone speak much more eloquently than I would be able to about this already, but Dr. Klin and Warren Jones and others have looked at the way people with autism look at faces and look at social information. We know that people with autism look less at the eyes; they look more at mouths and things. When there are three people interacting on screen, they have a harder time tracking those interactions. And they fail to follow attentional cues, things like a person pointing their finger on screen.
From fMRI research, and this actually happened here, Bob Schultz, another close colleague, who has since moved on to the Children's Hospital of Philadelphia to start the Center for Autism Research there, produced another seminal paper at the Child Study Center looking at activity in the fusiform face area. We've talked about how Marvin Chun and Nancy Kanwisher found this hot spot in the brain for faces. Well, Bob looked at people with autism and saw how their brains activated to faces, and he saw that this activity was reduced in people with autism, despite activity to objects looking very similar.
So now let's talk a little bit about ERP studies of face-related brain function in autism. We're going to talk about three different areas: face recognition, emotion recognition, and structural encoding. First I'm going to talk about face recognition, and this is work that we did at the University of Washington as part of a study of early development in children with autism. We did this work in three-year-olds, and it was actually based very closely on a paradigm used in typical development, a paradigm created by Chuck Nelson.
What we did is we showed children familiar and unfamiliar faces, and this is the slide where, someday, when my daughters are old enough to log on to YouTube, they can still think that I'm a jerk, but at least they'll know I wasn't a hypocrite. So we showed them familiar and unfamiliar faces and familiar and unfamiliar toys, and we contrasted brain activity. So what's the difference between a face and a toy? Think about the developmental paths I talked about earlier. Which path is for faces? The social path or the non-social path? Which path is for toys? The less social path, right. And so we have a nice design where we can look at familiarity for social information and familiarity for non-social information.
And we had a reason to have certain predictions based on typical development. We thought that in typical development we would see differential activity to familiar versus unfamiliar faces and objects, because that's what we'd seen before. When we looked at the children with autism, we saw something very different. These are very young kids, three-year-olds. I was there when we collected the data. I cast a rosy picture earlier about the applicability of ERP for studying a range of developmental functional levels and a range of ages in autism, but it's still really an adventure to try to do an ERP experiment with a three-year-old with autism who doesn't have any language. It involves lots of energy, lots of toys, and lots of M&Ms.
And so what we saw is, if we look at objects, and these are both waveforms for the young children with autism, we saw this component, the P400 it's called, which is associated with recognition. We saw an enhancement to novel objects relative to familiar objects. Now, these are faces. We see the same kind of peak, but what's different between the faces and the objects? Right. When we looked at the non-social information, they looked just like their typical counterparts. For the faces they didn't; they weren't showing differential brain activity.
When this study came out, one of the concerns was that we were saying that kids with autism don't recognize their mothers, which isn't the case at all. I was there; I was running these kids. They knew their mothers, and they often ran to them frantically when we tried to do the ERPs with them. They certainly recognize their mothers, but it's telling us they're using different mechanisms to do that. And that's one of the things I think it's really important for us to learn about autism: what are the compensatory mechanisms?
Another aspect of social information processing that we've used ERP to study in very young children with autism, and these are the same group of three-year-olds, is emotion recognition. For this study we contrasted their brain responses to neutral versus fearful faces. Fear is a really useful emotion to use because fear is closely linked to the function of the amygdala, which is one of the social brain regions involved in autism. And how do you get an eight-month-old child to show fear? You show her her older sister coming at her with one more sticker that she's about to put on her face. So we showed these three-year-olds neutral and fearful faces, and to be clear, we did not show them neutral and fearful faces of my daughters. These were from a standardized series of faces, the Ekman series, they're called, but we can't show them for the purposes of this lecture.
And again, this is a different ERP component. This is, again, not a face-specific component; it's just associated with salience. It's called the negative slow wave. In the typical kids we saw a big difference between the fearful face and the neutral face. But what did we see in the kids with autism? Difference or no difference? No difference.
One of the nice things about this study is that ERPs are kind of abstract, right; we do them in the lab. We can make guesses about how looking at pictures of emotional faces versus neutral faces maps onto things that happen in real life, but it's kind of a stretch. So in this study we also did a response-to-distress task, where graduate students like myself played with the kids. We sat at a table; it was videotaped. We had this hammering toy, and we would play with it, and we would pretend to hit our finger with the hammer, and then we would pretend to cry, and we would just cry for like a minute. Afterwards we'd look at the videotapes and determine what the kid did. Did the kid just think, "Oh great, this guy stopped playing with the hammer. Now it's my turn"? Or would the kid look sad? Would the kid pay attention to me at all?
And this is one of the times I regret that we can't show the picture, because we have a great picture from that study where I'm sitting there crying and a little three-year-old boy with autism is hammering away, and his mother is looking at me, but it's not an empathetic look. It's a look like, what is wrong with you? And what we found was, if we wanted to make a jump about how neural sensitivity to emotional expressions would map onto behavior, we might think of these things being connected. And that's exactly what we saw.
A different component, the N300: its latency, how long it took to happen, was associated with the amount of time these children spent attending to feigned distress.
Looking now at some different work: this was, again, work done at the University of Washington, and this is work done with adolescents and adults with autism. We took that study done by Shlomo Bentin, the classic 1996 one, and looked at the N170 in people with autism. We showed them upright and inverted faces. So I described the N170, but I hadn't shown you one. This is an example of an N170, from fourteen typical adolescents and adults. And what we see is that negative dip in electrical activity that happens around a hundred seventy milliseconds, and this is associated with faces.
And when we showed these same faces to the people with autism, we saw a different kind of response. It wasn't that they didn't have an N170, but it was different from their typical counterparts. And I should make clear that these people were just as smart as their typical counterparts; they performed similarly on many non-social behavioral measures. But they had a very different response, and their response was actually something that we could only pick up with ERP: their response was slower. So while most people have N170s around 170 to 180 milliseconds, the people with autism were much slower.
Then we also looked at the inversion effect. So if we show them the faces upside down, do we see a different response? This is what an inversion effect looks like in typical people. The red line is the upside-down faces; the blue line is the upright faces. And what you see is a bigger, slower response to upside-down faces. In the adolescents and adults with autism, we saw a slightly different response. We did see maybe a little difference in amplitude, but what was really salient was, again, the timing. Whereas typical people slowed down when they saw an upside-down face, for people with autism there wasn't any difference: in terms of the timing, their upright faces were equivalent to their inverted faces. And that's shown by the slope of those purple lines there; those purple lines are the inversion effect.
So what we saw in these adolescents and adults was slower processing speed and insensitivity to inversion. Excuse me. Again, we wanted to see if we could tie this to some kind of behavioral measure that we could be sure was more ecologically valid, or at least could be optimistic was ecologically valid. So we administered a standardized test of face recognition; it's part of a memory scale called the Wechsler Memory Scale, and we looked at how good people were at recognizing faces. And again, despite being comparably intelligent, the people with autism made twice as many mistakes on a face-recognition task. Moreover, these mistakes correlated with their processing speed. So how fast a person showed this spike to a face was actually associated with how good they were at face recognition.
So this was actually the first study to look at the N170 in autism, but since then there have been many studies, and what has emerged across a number of studies is delayed latency to faces. This has been found in children as young as three years of age. What's very interesting is that the same effect has been found in parents of children with autism, and, in a very recent study by my colleague Joe McCleery in the UK, in ten-month-olds who are at risk for autism but haven't been diagnosed. We've replicated in other studies the insensitivity to inversion, and there have been other forms of atypicality. Some studies have found decreased amplitude. Some have found abnormal lateralization, or patterns of activity at the scalp that are more characteristic of younger children. And again, as I said before, not all of these results are replicated across all studies; it's really a heterogeneous picture. And I'll talk about some experimental manipulations that we're doing now to understand more about why we see some effects in some kids and different effects in other kids. It seems like part of the picture you're suggesting is that kids process faces like objects. But it seems like in one of the presentations you were showing, they don't process faces like objects: they recognize objects better than they recognize faces. So aren't those two things kind of contradictory?
Well, let me go on to the next part and see. I don't think that faces are processed like objects. I think that faces may not be processed in a special way by people with autism. But it actually turns out that objects can also be processed in a very special way, which I'll talk about right now.
So what do we make of this? Do we think that people with autism are just born with a hole in their brain in their face-processing regions? Do we think that they're unable to do this kind of information processing in their brain? No. We explain it in a developmental way. We know that faces are processed in special ways. We know that the specialization associated with face processing happens over time. So what if people with autism come into the world less drawn to other people, and therefore they fail to pay attention to people and they fail to become face experts? If we think about our path toward social expertise culminating in face expertise, we could think about people with autism failing to chart that course.
And so we published this and we called it the social motivation hypothesis.
And the idea is that the problem in autism
is really decreased social motivation,
which leads to inattention to faces,
which leads to a failure develop expertise
and then that's reflected in the,
the atypical
specialization we see both in terms of brain and behavior.
But there's an important presumption here, and this is something that's been very interesting to me, that I've studied over the past couple of years here at Yale: this piece.
I started off today talking about how we have different explanations for autism: it might be a social problem, but social information could all just be a red herring.
Maybe it's a problem with complex information processing.
And we know
that the development of expertise
is something
that requires complex information processing.
So what if people with autism
just have a problem
developing perceptual expertise?
So how can we look at that and
what is perceptual expertise?
So when we talk about perceptual expertise, it's the idea that as we get a lot of experience distinguishing among things that are very similar, like faces, or, if you're a monkey expert, like monkeys, we start to treat them differently. We start to treat those objects, those non-face things, the way we treat faces. We start to see signs of holistic processing.
We start to see inversion effects.
We start to see activity in the fusiform gyrus, and most relevant to us,
we start to see enhanced N170 amplitude.
One thing that hasn't been studied yet, well, isn't well understood yet, is what role your affective experience plays.
So I have there an example of a car and a bird, and those are both things that have been studied in neuroscience research
to show expertise effects. When a person gets really good at individuating among birds or
different kinds of cars,
they get a bigger N170
than people who don't care about them.
But what we don't know is, well, it's rare that a person develops some kind of expertise without some kind of personal interest. So car experts aren't just car experts in the way that, you know, if you think about a baggage handler, they see lots of bags, but they probably feel differently about bags than car experts feel about cars, right?
And so that's something that isn't studied.
And I think it's relevant to studying perceptual expertise in autism because the two ways
that people have gone about it so far
are to either train people with autism to be expert in something
or to look at something they're expert in already.
So the training studies teach people to become experts in a particular area. What's nice about this is it makes for a very clean, elegant experiment
because you have a group of people then
who have
all comparable degrees of expertise.
You've controlled their access to it, you know.
But the problem with it is that it's not really the way we develop expertise in real life.
So there's not intrinsic interest
and there's not much affective
involvement we would assume.
Now naturalistic studies seem to solve those problems. People become experts in things
because they're interested in them.
And many people with autism
really become experts in things.
The problem is that
it's hard to find groups of children
who are all expert in the same thing,
so that we can do a clean experiment.
So, this is a problem that
we had kind of thought about quite a bit.
One of the nice things about being a clinician
is that sometimes you don't have to have good ideas; good ideas kind of walk into your clinic.
And so one day I was working with a kid, a four-year-old boy,
who was very bright. We had followed him since he was two.
I'm talking about this because it's my favorite video to show, but, again, we can't show it, so we'll do a little re-enactment.
But we were doing a play session
and
he would see things very differently than I would.
So, you are unfortunate to be sitting right up front.
So you can be, you're going to be the boy. What's in this bag? W. I was afraid that was going to happen. These are foam blocks. We'll try a different one. No, you... I got it wrong. Okay. What do you see there? A boy snorkeling. Can you tell me anything else about it? No. Okay.
So these are actually pieces of a diagnostic assessment that we use called the ADOS.
Whenever I would bring one out, the boy would see it as a letter.
So, the boy and one of my students saw
these as Ws.
To me they were just blocks.
When I brought this out
I said "Oh look! There's a guy snorkeling." And he said "Yeah, the snorkel is a J."
When I brought this out, I didn't even realize this was happening. I moved it past his face really quickly and he said "WPS." I said "WPS?" And then I looked: oh, Western Psychological Services, the company that makes it.
So this boy was seeing letters
everywhere.
And it actually turns out it's a pretty common area of expertise in autism.
So many young children with autism
get very interested in letters. We're not exactly sure why. We think it's because letters are omnipresent, and they're nice because they're always the same twenty-six and they're always in the same sequence,
so they kind of fit the rules for a person who likes ritual
and routine.
We find that's an area of strength for
people on the spectrum.
Even though problems with language are a really salient part of autism, we see intact word reading and decoding in most children with autism, or at least at the level that would be predicted by their intellectual level.
And then there's a subgroup of children,
who have what we call hyper-lexia, who are really
precocious, self-taught readers.
And that's a five to fifteen percent prevalence.
A lot of the really good work in hyper-lexia has been done right here by Elena Grigorenko
and Tina Newman
out of
Naples.
So
we've got an area
in which many people with autism develop an expertise, but
it's only good to us as people who are interested in the brain,
if it gives us a way to measure perceptual expertise in terms
of brain function.
And it turns out it does.
So a colleague Alan Wong, who at the time
was a graduate student with Isabel Gauthier
at Vanderbilt and is now at the University of Hong Kong
looked at how people learn to process letters
and he did a very nice experiment,
where he took letters in the Roman alphabet,
which is what we read,
you know, what you're seeing here.
Then he took Chinese characters
and then he made up an alphabet
and he showed these letters to people and measured their brain response.
And what he found
was that you got an enhanced N170
if you could read the letters of an alphabet. It didn't matter which alphabet. So if you were bilingual, reading Chinese and English, you'd get a bigger N170 to both Chinese characters and Roman characters. If you were a person who could only read English, then you'd get a bigger N170 to Roman characters, but not to Chinese or pseudo-font; those were equivalent.
So what this lets us do then
is kind of map out
those endpoints. So if we think about our social expertise path ending in faces
we could take a look experimentally
at a non-social expertise path ending in letters.
And so we contrast things that we think we're experts in versus things we're not: faces with houses, and then letters with pseudo-letters.
And this is some of the recent work that we've done
in people with autism.
And so what we've compared is
expert and then non-expert stimuli in the social domain, faces versus houses,
and in the non-social domain, letters versus pseudo-letters.
And then what we do is we take the N170, recorded from this two-hundred-and-fifty-six-electrode net, and we measure it from patches of electrodes over the right and left hemispheres.
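For readers who want a feel for what "patches" means in practice, here is a rough sketch of pooling a few electrodes per hemisphere and pulling out the N170 peak; the sampling rate, electrode indices, and simulated data are assumptions, not the lab's actual montage or parameters.

```python
import numpy as np

fs = 250                                   # assumed sampling rate (Hz)
times = np.arange(-0.1, 0.5, 1 / fs)       # epoch from -100 ms to 500 ms
erp = np.random.randn(256, times.size)     # stand-in (channels x time) average ERP

left_patch = [58, 59, 64, 65, 69]          # hypothetical electrode indices
right_patch = [90, 91, 95, 96, 100]

def n170_peak(erp, patch, times, tmin=0.13, tmax=0.20):
    """Average the patch, then find the most negative point in the N170 window."""
    wave = erp[patch].mean(axis=0)
    win = (times >= tmin) & (times <= tmax)
    i = np.argmin(wave[win])
    return wave[win][i], times[win][i]     # amplitude (uV), latency (s)

amp, lat = n170_peak(erp, right_patch, times)
print(f"right-hemisphere N170: {amp:.2f} uV at {lat * 1000:.0f} ms")
```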
And so for faces
we saw the kinds of things that we've seen before.
This was a younger group of children than I've studied before, but
we found in our test of face recognition
they performed significantly worse
than their typical counterparts, despite being as generally intelligent as them.
And we saw in the right hemisphere, the hemisphere that's most often associated
with face perception,
we saw that they had a slower latency
just to faces
than typical individuals.
When we look at houses, both groups were equivalent.
I talked earlier about
source localization and
we're starting to dabble in it, although we're not necessarily zealots.
But for both groups we saw that the N170 localized to the fusiform gyrus.
So it's not like there are really qualitatively different kinds of things happening in their brains; it's just happening at a different pace.
But what was really interesting to us
was letters. So first we looked at behavior. We looked at reading
in two different ways.
So we can look at reading
words.
Reading words can happen with different kinds of strategies: we might recognize them based on the shape of the word, or we might sound them out.
But we can force a person to sound them out, testing their mastery of all the individual pieces of the word, if we make up words. If we use words that nobody has ever seen before, the only way you can pronounce them correctly is if you understand the rules of sounding out words. And on both kinds of measures, people with autism scored just like their typical counterparts.
And both groups scored in the average range.
So we know, because this is a normed test,
that it's not something idiosyncratic about people who live in New Haven.
They're performing just as well as anybody in their age range would.
When we look at the N170, the first question really was: would we see an enhanced N170 to letters even in typical children and adolescents? Because it hadn't really been looked at in a group this young before. And that is what we saw.
So the pink highlights the N170.
And you see the purple line is the letters. The green line is the pseudo-letters.
And we see a larger amplitude
to letters.
And in the group with autism, we did too.
We actually saw a bigger difference between letters and pseudo-letters in the group with autism.
So
when we,
when we compared the groups,
both groups showed an enhanced amplitude to letters.
There were no differences between the groups.
The typical group showed the expected pattern of a left-lateralized response.
But
the group with autism showed bilateral specialization. You can really see it here.
So this is a picture; think of it as a heat map, where the darker the purple color, the more negative the activity.
And so what we found is that in the typical group you see things localized to the left, while the autism group actually recruited right-hemisphere regions, which is kind of interesting
because those are the regions typically involved
in face perception.
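One simple way to put a number on that left-versus-bilateral contrast is a lateralization index on N170 amplitude; this is a generic sketch with hypothetical amplitudes, not the study's actual computation.

```python
def lateralization_index(left_amp, right_amp):
    """(|R| - |L|) / (|R| + |L|): near -1 = left-lateralized, near 0 = bilateral."""
    left_amp, right_amp = abs(left_amp), abs(right_amp)
    return (right_amp - left_amp) / (right_amp + left_amp)

# hypothetical: a strongly left-lateralized letter response...
print(lateralization_index(left_amp=-6.0, right_amp=-2.0))   # ~ -0.5
# ...versus a bilateral response like the pattern described for the autism group
print(lateralization_index(left_amp=-5.0, right_amp=-4.5))   # ~ -0.05
```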
So
we see in this experiment that they have impaired face recognition and atypical neural responses to faces, but they show intact word reading and a preserved, even enhanced, neural response to letters, with recruitment of the right hemisphere. So what
does this tell us about autism?
Well it makes the case
that their brains can learn. None of us are born knowing how to read, I'm pretty sure. We have Yale students here; if anybody could, you guys could, and I'm pretty sure you couldn't.
So we know that these are things they've learned over the course of development.
We know that to learn these things
is kind of a complicated brain operation.
It requires your prefrontal cortex to communicate with cortical regions way in the back.
So we know that things like this can work in the brain.
It tells us we need a more nuanced story.
So we need to look at connectivity within specific systems
rather than just generic dysfunction.
And this also gives us encouragement that all these different kinds of interventions that we do, aiming to improve social information processing by driving people with autism to attend to these things, may pay off.
So now I want to talk about the more cutting-edge stuff. This is stuff that isn't published, hasn't been talked about before.
These are some of the things that we're working on now.
So I've alluded to this
earlier, but
one of the things we're very interested in understanding now is what accounts for the heterogeneity in autism, especially in terms of faces.
So why do we see some kids showing delayed responding and other kids not showing
delayed responses
across studies?
And there's different kinds of ideas that have been put forward to explain it.
One involves visual attention and this is pretty straightforward.
If people with autism are always looking at different parts of the face
than typically developing people,
we're going to expect
different
patterns of brain activity.
So we could look at how visual attention
maps onto
face perception.
Another piece is social motivation.
And this was a neat interaction for me. There was a group of parents, actually here at the Child Study Center, and I gave a talk on much of this. I showed our social motivation hypothesis model and a guy came up to me afterwards
and he said "Jamie,
what you just said doesn't make any sense.
My son has autism,
but he is super socially motivated.
He has spent his whole life getting in people's faces trying to interact with them
and he has a really odd, awkward, and often unsuccessful approach,
but he's very motivated."
And it's a very good point, right.
In 1979,
Lorna Wing talked about different groups of children who,
who were active, but odd
versus passive versus aloof.
And we really don't have a sense
of how those kinds of differences
within the very diverse group of people that we would call people with autism
map onto differences in brain activity.
And then finally the third potential explanation
is looking at
connectivity within these face perception systems
or dysfunction within individual components, because we've really talked today about face processing as a thing, but it's not; it's really a collection of a lot of individual things that happen in the brain.
So I'm going to now show you some preliminary data
that we're using to start to look at these things.
So in terms of visual attention,
what we've done, and this is with a graduate student who was here last year, Celeste Chung, who's now getting her PhD at the Institute of Psychology in the UK, is, well, if you want to know how looking at different parts of a face could affect the neural response, what kind of experiment could you do? Right, what could you do?
Sure, you could do that, and that's actually been done. The very first N170 paper showed different chunks of faces, different pieces, but the problem is you're changing the face; you're just showing the eyes.
But one thing you can do is force people to look at different parts of the face, right.
So you can have a cross-hair on the screen that makes them look at the eyes or
makes them look at the nose
or makes them look at the mouth.
And this is what we've actually been doing
and this is in typical adults.
And the idea is that
maybe people with autism are showing different brain activity because they tend to look at the mouths
rather than the eyes.
And indeed we expected that when we force people to look at the eyes,
we'd get stronger brain activity,
a stronger N170
than if we forced them to look anywhere else, but that's actually not what we're finding.
We're finding that
whether you look at the eyes or whether you look at the mouth you get a bigger
N170
than if you look at the center of the face or if there's no cross-hair at all.
So it's almost like looking at informative parts of the face, whichever parts those are, elicits a bigger N170, not just the eyes. In isolation, the eyes actually do elicit a bigger N170.
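Here is a sketch of how that fixation comparison might be tested, with hypothetical within-subject amplitudes (more negative = bigger N170); the paired tests are generic, not the lab's actual statistics.

```python
import numpy as np
from scipy.stats import ttest_rel

# hypothetical per-subject N170 amplitudes (uV) under each crosshair condition
eyes   = np.array([-7.1, -6.4, -8.0, -5.9, -6.8])
mouth  = np.array([-6.9, -6.1, -7.7, -5.6, -6.5])
center = np.array([-4.8, -4.2, -5.5, -3.9, -4.6])

print(ttest_rel(eyes, center))   # eyes vs. center: bigger N170 at the eyes
print(ttest_rel(mouth, center))  # mouth vs. center: bigger N170 at the mouth too
print(ttest_rel(eyes, mouth))    # eyes vs. mouth: little difference, on this account
```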
So we're now starting to do this
paradigm with people with autism. The question will be
when we compare people with autism
and typically developing peers,
controlling for where they're looking,
are we still going to find
these same kinds of brain differences, or is it really explained away by where they look? And hopefully next week Kevin Pelphrey will talk to you about some of the really neat fMRI work he's done showing that when you manipulate a person's scan path in the magnet, you can make typical people look like they have the brain activity of a person with autism, and vice versa.
So the other piece is social motivation.
So we haven't yet started to tease apart groups of people with autism who are motivated for social interaction versus those who are more aloof,
but we're actually looking at it in the typical population because
if you think about it, we all vary, right. There are people in this room who are going to spend
Friday night in, you know,
in some party, talking to lots of people and there's going to be people in this room who are going to spend Friday night
in the library. I'm assuming you all are going to spend Friday night in the library, reading
about autism, but I know that's not true.
Right, and so you might say that the people who are at the party are
extroverts and the people who are at the library are
introverts.
And so actually, what we've done is we've screened people to be really extreme introverts or extreme extroverts, and we've brought them in for ERPs.
And we've really been surprised by what we've found,
really striking differences in brain activity. And these are all people, these aren't people with autism.
These are people out and about.
In blue are the extroverts and in red are the introverts. And again, this is work done by Celeste Chung.
You see that
the extroverts have bigger N170s
and really what's also salient
is
extroverts show an inversion effect and introverts don't.
So this is very much in line with the idea that social
motivation shapes
our brain activity. We can think about it in a developmental way, or we could also think about it in terms of salience: maybe social information is just more salient in the moment for extroverts.
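To show what "an inversion effect in one group but not the other" looks like as an analysis, here is a sketch with invented per-subject latency differences (inverted minus upright, in milliseconds); the group names echo the study, but the numbers are placeholders.

```python
import numpy as np
from scipy.stats import ttest_ind

# hypothetical inversion effects: N170 latency(inverted) - latency(upright), ms
extroverts = np.array([12, 9, 15, 11, 13])   # the classic inversion delay
introverts = np.array([2, -1, 3, 0, 1])      # little to no delay

t, p = ttest_ind(extroverts, introverts)
print(f"group difference in inversion effect: t = {t:.2f}, p = {p:.4f}")
```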
And now, this is a big slide, so I'll get you ready for it. The idea is that, well, I've talked about the N170, but the N170 represents one stage in a series of stages of the face processing system.
And so, because of the temporal acuity of ERP, we can tease it apart.
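As a concrete illustration of that teasing apart, here is a sketch that pulls successive components out of one waveform by time window; the windows are rough textbook conventions and the waveform is simulated, so none of this reflects the lab's actual parameters.

```python
import numpy as np

fs = 250                                 # assumed sampling rate (Hz)
times = np.arange(-0.1, 0.5, 1 / fs)
wave = np.random.randn(times.size)       # stand-in for an occipito-temporal ERP

windows = {                              # rough, conventional windows (assumed)
    "P100": (0.08, 0.13, "pos"),
    "N170": (0.13, 0.20, "neg"),
    "N250": (0.20, 0.30, "neg"),
}

for name, (tmin, tmax, polarity) in windows.items():
    win = (times >= tmin) & (times <= tmax)
    pick = np.argmax(wave[win]) if polarity == "pos" else np.argmin(wave[win])
    print(f"{name}: {wave[win][pick]:+.2f} uV at {times[win][pick] * 1000:.0f} ms")
```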
So this is kind of a mock-up of a theoretical, possible explanation of the way face processing
works in the brain. So if we think about a component called the P100,
showing us basic visual processing,
the N170 showing us structural encoding
and things like the N250 showing us higher-order face processing like affect recognition or identity
recognition, we can think about how those things tie in
to the mirror neuron system
and then we think about how those things tie into related behaviors like imitation or face perception, and then how those things influence social interaction. And we have ways of measuring social interaction in real life. And so we've actually just submitted this as a grant, actually today.
A grant went in to do a study with colleagues at the University of Washington to look at just this: to use ERP to break face processing into individual components and then to see, well, if we are finding that
some people have delays in the N170 and other people don't, maybe if we look at the broader array of neural functions associated with face perception and beyond, we're going to start to see different profiles. So maybe there are slow-N170 people who compensate with fast N250s and actually look okay here. Or maybe there are slow-N170 people who have even slower N250s and have really big problems here.
But with statistical approaches like structural equation modeling, we can actually test the directionality of these arrows and look, in an empirical way, at which models best fit different groups of people. So we can compare
for sure people with autism
and typical people, but even more interestingly,
we can then try to tease apart
different groups of people within the category of autism.
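For a rough sense of what fitting and comparing such path models could look like, here is a heavily hedged sketch using the third-party semopy package; the model string, simulated data, and variable names are hypothetical stand-ins, not the design of the submitted grant.

```python
import numpy as np
import pandas as pd
import semopy

rng = np.random.default_rng(0)

def fake_group(n=40):
    """Simulate a hypothetical chain: P100 -> N170 -> N250 -> social behavior."""
    p100 = rng.normal(100, 8, n)
    n170 = 60 + 1.1 * p100 + rng.normal(0, 6, n)
    n250 = 70 + 1.0 * n170 + rng.normal(0, 8, n)
    social = 50 - 0.1 * n250 + rng.normal(0, 3, n)
    return pd.DataFrame({"P100_latency": p100, "N170_latency": n170,
                         "N250_latency": n250, "social_interaction": social})

# each arrow in the mock-up becomes a regression in the model description
model_desc = """
N170_latency ~ P100_latency
N250_latency ~ N170_latency
social_interaction ~ N250_latency
"""

for label in ["group A", "group B"]:          # e.g., autism vs. typical
    model = semopy.Model(model_desc)
    model.fit(fake_group())
    print(label)
    print(semopy.calc_stats(model)[["CFI", "RMSEA"]])   # fit indices per group
```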
And this, I should say, was developed with my colleague Ralph Bernier at the University of Washington.
And then the next step, and this is work that we're not doing yet, but it is imminent, is that we're going to take advantage of the applicability of ERP to babies. We're going to start doing ERPs on little babies who are at risk for autism to try to track the development of social behavior in a prospective way. And what we're also going to try to do, and are already doing in adults, is try to get around that problem of what people are looking at by recording ERP and eye-tracking at the same time.
So with Dr. Klin and Warren Jones, we're working on systems so that, well, they have, as you've seen, really elegant ways of eye-tracking in infants, and we can record babies' brain activity and what they're looking at at the same time.
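One plausible way to line the two recordings up, sketched with invented numbers: both systems timestamp a shared hardware trigger, and the gaze trace is then interpolated onto the EEG clock. The sampling rates, offsets, and signals here are all assumptions.

```python
import numpy as np

fs_eeg, fs_gaze = 250, 60                      # assumed sampling rates (Hz)
eeg_t = np.arange(0, 10, 1 / fs_eeg)           # EEG clock, seconds
gaze_t = np.arange(0, 10, 1 / fs_gaze) + 0.03  # gaze clock, offset by 30 ms

# both systems record the same trigger; subtract the clock difference
trig_eeg, trig_gaze = 1.000, 1.030             # hypothetical trigger timestamps
gaze_t_aligned = gaze_t - (trig_gaze - trig_eeg)

gaze_x = np.random.rand(gaze_t.size)           # stand-in horizontal gaze trace
gaze_on_eeg_clock = np.interp(eeg_t, gaze_t_aligned, gaze_x)
print(gaze_on_eeg_clock.shape == eeg_t.shape)  # one gaze sample per EEG sample
```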
And the idea is to really take broad measures across domains of perception: looking at visual information by comparing faces and toys, looking at auditory information by comparing voices, and then, as Warren and I are inclined to do, using more naturalistic stimuli, so, for example, mothers and other people talking and objects moving to music, and examining in real time the brain response to that. So I want to quickly acknowledge
all the people who have helped with this research, most
importantly the children with autism and their families who come in
and
tolerate all the things that we ask them to do
in the hope that we're going to
help future generations of children with autism. Many colleagues here at Yale,
colleagues in other places, and Nora and Agnus, who, although they didn't have a say in the matter, will have retrospectively permitted me to plaster their faces all over the internet. Thank you.