Do My Thoughts Deceive Me? Human Factors and Design


Uploaded by GoogleTechTalks on 08.02.2010

Transcript:
>>
RICARDO: Thank you all for coming. I see we've had a pretty good turn out. It's my pleasure
to introduce my friend, Dr. Jason Wong. Jason is a Human Factors Scientist with the Naval
Undersea Warfare Center. He received his Ph.D. from George Mason University in Human Factors
and Applied Cognitive Psychology in 2009. And today, he'll be giving us a talk about
whether our thoughts deceive us, Human Factors and Design. Please welcome, Jason.
>> WONG: Thank you very much. Thanks for having me and thanks for such a great turnout today.
As Ricardo mentioned, my name is Jason Wong. I work with the Naval Undersea Warfare Center.
Pretty much doing what we call human systems integration, but essentially human factors
on submarines which is certainly a unique environment, not one that people typically
think about. But I’ll be talking about some of the work that I do throughout the talk.
But in the meantime, the primary focus is the question of whether or not our thoughts
deceive us. And when I ask that question, a lot of people are like, what are you talking
about? These are my thoughts, I'm thinking them. Why would I work on deceiving myself?
But it turns out that the way we think and the way our cognition operates is not always
necessarily ideal. So I wanted to start off with an example. This is taken from, actually,
a television show called Derren Brown's Mind Control which was on SyFy. He doesn’t actually
control minds but it's SyFy so, you know, they got to play with the title. But, nonetheless,
I'm going to show a short video, the video is not going to have sound, but you can kind
of get the point. Essentially, by the way, Derren Brown took this from a researcher by
the name of Daniel Simons at the University of Illinois, who originally conducted this
experiment. But this is a nice video showing what's happening. So the guy here is Derren
Brown and he's playing the part of a lost tourist in the city. He has a map and he's
going to try and find someone off the street to give him directions. And then you're going
to find something interesting happens. So here he goes looking around. He finds someone
and the guy starts giving directions. Then this thing happens, you know, painting comes,
interrupts them. And then right off the bat, without missing a beat, the guy off the street keeps giving directions. So we saw a couple of smiles, heard a couple
of chuckles, a lot of people didn’t seem to notice anything interesting. So if I--this
might be a little bit easier. So if you notice what Derren Brown is wearing here: black shirt,
black sports jacket, and after the painting, yeah, different guy wearing a white shirt,
black jacket, dark hair. And, of course, the guy off the street has no idea what just happened;
he keeps giving the directions. So, just as another example because someone might say,
well, you know, the guy looked similar, similar hairstyles. As another example, same set up
here Derren Brown walks up and finds a woman off the street. She starts giving directions,
here comes the painting and it changes from a white guy to an Asian woman. And the woman
off the street does not miss a beat. So, returning back to that question, do our thoughts deceive
us? Well, to some extent, especially in this example, they most definitely do. Human cognition
has its limits. And the goal of the field of human factors is to understand these limitations of our cognition and, despite them, still design usable systems.
So whenever I talked about human factors and these principles, I would get this, well,
a lot of this seems like common sense. And a perfect example of that--and something you
still learn in a human factors class is that you don’t use the color red unless that
means something bad or something dangerous is about to happen. That's the kind of association
that we have. And it seems like common sense and it seems like something that's innate
in everyone that we wouldn't violate this principle, yet it does happen. So this is
a screen shot from, actually the US government Dental and Vision Benefits site. So there's
a separate dental plan and a separate vision plan, and this is actually off of mine. I'm not enrolled in the dental plan right now, and you can see up here it says that you are not enrolled. Not enrolled being in red, red being a bad thing, that's perfectly fine.
But I have terrible eyesight, so I definitely need the vision plan and it says you are enrolled
in big red text. This common sense principle of use green if it's good, red if it's bad
is violated here. It might just be that the designer was terribly lazy and didn’t really
care, but it's also entirely possible that the designer made this conscious decision
to make that text red as well; violating the supposedly common sense principle. So throughout
this talk, I want to talk about how we’re making some of these basic mistakes, talk
about some of the limits of low-level and high-level cognition. I'll talk a little bit
more about what that means later on. I want to talk about how this affects design and also how we can go about fixing things. So the first topic I want to talk about is that
of visual attention. This is what I studied in graduate school, so it's near and dear
to my heart. But this is what I consider a little more low-level cognition, a little more basic cognition that we don't have as much control over. Of course, we have some control over where we pay attention, or the things that we pay attention to, but a lot of times we don't have that conscious control. So I call it a little more low-level.
But, nonetheless, the concept of visual attention centers on the fact that there is only so
much stuff we can focus on at once. For example, that ridiculous noise outside is really easy
for us to focus on, and it makes it a little bit harder to focus on me or to read the text
on the slides. Visual attention means we can only be focusing on so much at once. And also, especially with visual attention, we have a broad area known as our visual field that we could pay attention to, but it's only in a very small portion of that, about three angular degrees right in front of us, that we actually have very good fine detail. So if we want to actually extract fine detail from the environment, we only have a small window in
which to do so. So the big principle from this is that cluttered design is bad and you
don't want that. So, for a good example of uncluttered design, since I'm speaking at Google, I have to use it a little bit. The Google homepage here is nice and clean, especially with this redesign where a lot of the stuff up top with all the different functionality has been taken away unless you actually move the mouse. If you imagine you're trying to extract information, if you've never been
to the Google homepage, there's only so much to look at; it's a nice uncluttered design
that's very easy to pay attention to. Again, it almost seems like a common sense principle
but you have people that completely violate this. So this slide right here is actually
the slide that we use at the government with the US Navy. If you want money for an internal
project, this is the slide that you create. You fill in this template. You have the project
title on top; you fill in the project description, justification, approach, all in Times New
Roman, 10-point font, which is barely acceptable for print, let alone for a slide that goes up in front of a bunch of people trying to figure out what your project is about. It's a completely cluttered design that really doesn't make any sense, but the bosses say you have to use
it, so we have to use it. Okay. So, that’s kind of a broad overview of visual attention.
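As an aside, that three-angular-degree fine-detail window mentioned a moment ago can be made concrete with a little geometry. The sketch below is purely illustrative; the screen size and viewing distance are assumptions, not numbers from the talk's slides.

```python
import math

def visual_angle_deg(size_inches, distance_inches):
    """Visual angle (in degrees) subtended by an object of a given
    size viewed from a given distance."""
    return math.degrees(2 * math.atan((size_inches / 2) / distance_inches))

def foveal_window_inches(distance_inches, fovea_deg=3.0):
    """Width of the high-acuity window (roughly three degrees, per the
    talk) at a given viewing distance."""
    return 2 * distance_inches * math.tan(math.radians(fovea_deg / 2))

# Hypothetical setup: a display about 19 inches wide viewed from about
# 20 inches away (roughly the setup mentioned later in the Q&A).
print(round(visual_angle_deg(19, 20), 1))   # whole screen, in degrees
print(round(foveal_window_inches(20), 2))   # fine-detail window, in inches
```

So of a screen spanning some fifty degrees of visual field, the fine-detail window covers only about an inch at that distance, which is why clutter outside that window is so costly.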
But I want to talk about some specific sub-phenomena of visual attention. The first being Scene Gist, which is exactly what it sounds like: just understanding the gist of a scene, kind of the most basic, fundamental aspect of the scene. So Scene Gist was actually originally
discovered and talked about by Mary Potter at MIT back in kind of the late '60s early
'70s. So as a brief example of this, I'm going to show you three pictures very quickly. All
I want you to do is kind of just give me the gist or just remember the gist of what the
picture is and after the three, I’ll ask you for what they are, okay? Here we go. Okay,
so what were the pictures? What was the first one?
>> Classroom. >> WONG: Classroom. Second one?
>> The library. >> WONG: Library. Third one?
>> Cottage garden. >> WONG: Cottage garden, yeah, that one is a little bit harder; I kind of threw that in there to throw you off a little bit. It's actually a Japanese tea house, but that's a lot to get out of this. But there's a cottage,
there's a garden, there's a library, and there's the classroom. So what Mary Potter and her
colleagues found, and it has been replicated all over and in the brief demo I gave you, is that people can figure out the gist of a scene within tens of milliseconds, a millisecond being a thousandth of a second, which is pretty impressive. A very quick flash and you get the gist of
what's going on. So this is a very useful thing; it's not a quirk of cognition but a very good thing that we should try and take advantage of. So how can we do
that with design? One example of that would be through advertisements. We know, and you guys at Google do a lot of advertising, that people don't really stare at advertisements. They kind of look at them and then glance away. So this is a great place where we can
use the principle of scene gist to make sure that people get the gist of an advertisement
before looking away. So, here's actually a bad example of this. I found this on a website, maybe September or October 2008, when the election was still going on. A nice image. So what do we have here? It uses primarily blue, which is the color most closely associated with Democrats; it uses the Obama logo; in big white bold text it says, Senator Obama. There's a red button that says, Speak Out, which is neither here nor there. And then you actually read the text, "Tell Senator Obama It's not 2004." Then you read the super tiny text that says, "Paid for by McCain-Palin 2008." And you go, "What in God's name were these people
thinking?" The principle of scene gist, the fact that, you know, most people really don’t
care that much about politics, they might look at the ad and look away. But within that
20 or 30 milliseconds, or half a second typically, what are they going to get from this? They're not going to get that it was paid for by McCain-Palin. They're going to get that this was an ad for Obama. So this is an example of a complete violation of scene
gist and it takes a lot of effort to actually realize that this is not an Obama ad. Okay,
so moving on. Another concept I want to talk about is Inattentional Blindness. This is a concept that was originally found by, again, Daniel Simons and other researchers from the University of Illinois. But here is one of the most interesting demos of this. So,
what I'm going to show you is a video. Some of you might have seen this; it's fairly popular within pop culture. So if you've seen this, no cheating, kind of stay quiet. How many of you have seen this, by the way? Oh, god. Okay. Well, it might be worth it
for the people who haven't seen it. I'll just explain it briefly. There are six people: three people wearing black shirts, that's the black team, and three people wearing white shirts, that's the white team. They're going to be
bouncing a ball or throwing a ball back and forth. And your task is to focus on the white
team here, completely ignore the black team and count how many times the ball passes between
members of the white team, okay? This video goes for about 30 seconds or so. So focus on the white team and count the number of passes that occur between
members of the white team. Okay. How many did you guys count?
>> Sixteen. >> WONG: Sixteen? That's the right answer.
How many of you that have not seen this video before notice something interesting? Notice
something kind of strange happening? What did you guys notice?
>> Gorilla. >> WONG: Gorilla, guy in a gorilla suit? Who
didn't notice the guy in the gorilla suit? A few--okay. This demo is totally worth it for the people who didn't notice it. So, oops, let me come back here. So instead
of counting now, just kind of pay attention to--just watch the video. But, yeah, so up
comes a guy in a gorilla suit, pounds his chest and then walks out. So for those of you who totally missed that, it's, you know, it's like, how in the world did you miss that? And for the people that have seen this already, at
least now you know, it still works on other people. Yeah?
>> Okay. Do you know, if you ask people to watch the black-dressed people and count the number of passes, how many people [INDISTINCT]? >> WONG: Yes. So in this condition, when you're
focused on the white team, you tend to filter out the people wearing black and of course
the gorilla is in black. In this condition, I think only about 42% of people actually
noticed the gorilla. If you ask people to focus on the black team so they’re ignoring
everything that's white, that number jumps up to about 80%. So I set it up so you guys would miss the gorilla. The interesting part of this as well, by the way: the task that you had is what Daniel Simons calls the easy condition, just counting the total number of passes within the team. If you make the task much harder, which is counting the number of aerial throws and the number of bounces separately, so you're keeping two different counters in your head, the number of people who notice the gorilla drops significantly. So even if you're paying attention to the black team, if you're trying to count the number of bounces and throws separately, the proportion of people that see the gorilla is only about 55% or so. So that kind of illustrates this principle of inattentional blindness
where if you're paying attention to a task that's relatively difficult, then your attention
is focused on that and you tend to miss other things in the environment, specifically guys
in gorilla suits. So how can this come in handy? Well, one example that I'd like to
talk about is that of proofreading. We’ve all made stupid mistakes, and this is an example
from Jorge Cham's PhD Comics, which some of you might be familiar with. The guy on the left, Mike, just submitted his PhD dissertation, and the woman on the right is his wife, who volunteered to go ahead and sit down and read the PhD dissertation. And it is not until his wife actually sits down and reads it that she realizes he misspelled his name. But typically when
we're reading important things like our PhD dissertations, we’re reading for content,
maybe a little bit of grammar, but you're not really focused on spelling. Microsoft Word, no matter what, is going to underline that word in red, but your attention is busy doing something else, so you're going to totally miss that proofreading error. And, you know,
this is in comic form, but of course proofreading errors happen all the time. This is pulled
up at the mbc.com website, which presumably gets a lot of page views, and you see the Air Force telling you to, you know, never let him out of your "site." I kind of
wish there was a little thing underneath that asked you to visit their website for more
information. And amusingly enough, they're talking about Air Force vigilance, which is being very vigilant, really paying attention to the task at hand; obviously, the designer was not in the least bit vigilant when he was typing out the text for this banner ad.
Okay. So moving on quickly through the tour of visual attention, there's the concept of Attention Capture, which is not quite the flip side of Inattentional Blindness but it's kind of close. What this is saying is that if you're doing a task that's not especially attentionally demanding, you are likely to be distracted by other things. And this field is devoted to studying what exactly is going to distract you. So in the most basic task,
what you have here on the left is a visual search task. So once the
display on the right appears, you’re looking for the square amongst the circles, pretty
basic stuff. But the first screen that you see is on the left, six place holder objects
telling you where the objects will eventually appear and then you'll see the actual search
display. What's unique here--is this going to work? Yes--is this new object here where there wasn't one before. This is known as an object that abruptly onsets to the display, because there wasn't a little placeholder for it before. And what you find,
in these kinds of conditions, 25 to 30% of the time, the observer's eyes move from the center of the screen to the new object and then to the target. The new
object is never the target, it's always a circle, it's never the square, they never
need to look at it, but 25 to 30% of the time they look at it. If you ask them later, did
you notice anything funny? They might say yes. If you ask them, you know, they might
actually volunteer the information. But, yeah, I noticed, you know, a new circle appeared.
But were you distracted by it? No, of course not; you know, I knew it was there, why would I pay attention to it? But there's an involuntary eye movement made to that object that's totally distracting. Of course, they can get back on task fairly quickly, but it's a distracting eye movement that people aren't even aware of. So,
one more thing just to briefly talk about: it's not just abrupt onsets of new objects that can suddenly capture your attention. An example here, and this is from my PhD dissertation, where I had the visual search task, the two boxes in the middle, plus the boxes on the outer sides as the addition of a memory task. You saw a colored disk, in this case a certain shade of yellow, and you had to remember it. Then you switched to the visual search task, and here you can notice that not only
is there the abrupt onset of a new object but it's also the color that you were currently
memorizing for the separate memory task. And what you find here is that on average about 40% of the time, for some people as high as 50% or 60%, their eyes would totally get distracted and look at this object before moving to the target. And then at the end they were just tested on which shade of yellow they had to remember, which comes out okay on the projector.
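As a side note on the numbers quoted above, here's a toy sketch of how capture rates like that 25 to 30% (or roughly 40% in the memory-matching condition) get tallied from per-trial eye-movement records. The trial data and field names here are entirely made up for illustration.

```python
# Toy tally of oculomotor-capture rates: each record notes whether the
# first eye movement on that trial went to the abrupt-onset object.
# All data below is fabricated for illustration.

def capture_rate(trials):
    """Percentage of trials whose first saccade landed on the onset."""
    captured = sum(1 for t in trials if t["first_saccade_to_onset"])
    return 100.0 * captured / len(trials)

# Hypothetical logs: a plain-onset condition and a condition where the
# onset matched the color being held in memory.
plain_onset = [{"first_saccade_to_onset": i < 28} for i in range(100)]
memory_match = [{"first_saccade_to_onset": i < 40} for i in range(100)]

print(capture_rate(plain_onset))   # 28.0
print(capture_rate(memory_match))  # 40.0
```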
But, nonetheless, there are a lot of different things and a lot of different factors that
can distract us from the main task and capture our attention. In terms of design, this is
really important for things like system alerts. We're always trying to alert our users to things that are very important, or things that aren't very important, and we want to figure out how exactly to design those. Those of you who are users of OS X might
have the system application called Growl installed. Are people familiar with this? I see some nods. Okay. Essentially, if you get, like, a new instant message or a new email, or if your Xcode compiler is done, whatever an application deems worthy of alerting you about, a new box will appear in the upper right. So you get, essentially, the onset of a new object that gives you the alert. You can move your eyes and look at it; after maybe five seconds or so, the box fades away and you can go back to doing your
work. This, of course, is an example where--we've all used Windows at some point. We've all had these stupid boxes come up in the lower right alerting you of whatever, you know, your trial version of Windows is about to expire, who knows what. But, nonetheless, you have to move your eyes down there. Even more insidious, though, with these boxes you often have to move the mouse and click on this tiny little X right here in order to close the box; otherwise, the box sometimes doesn't disappear. Even worse, if you miss the X and click on the message itself, it brings up the application window, so you have to go and close that in order to get the whole thing to go away, and that's very annoying. But, nonetheless, it brings to mind the fact that we need to design
our alerts very carefully. If it's something really important, like if you have a virus
or your computer is going to explode, you probably want to interrupt their task and
flash something right in front of their face. But, otherwise, if it's not that important,
maybe we can think about different alerting strategies; there's a whole lot of human factors literature about alerts. Okay. So, I want to talk a little bit more about how we can understand and train visual attention, beyond the different phenomena I've talked about that are of particular interest. How can we seek to understand visual attention? Well, one way to do that is we
can do this through eye tracking which is one of my personal favorites. I know that
there's a lot of eye tracking work being done here at Google, but here's how we're applying it at the Navy. This is a project done by Helen Wright at the Navy that I got to work on last summer, doing some of the data analysis.
It deals with people who are Fire Control Technicians on a submarine. These are not firemen; they're not running around with fire extinguishers. With the sub sitting, you know, right underwater, these are the people that localize where the different ships are at the surface, or other subs for that matter. So here's a shipping vessel, here's a fishing vessel, here is, you know, a Russian boat that you might want to keep track of. And fire control, for the Fire Control Technician, is about things like firing torpedoes. So it's a very important job that we want to make sure they can do properly. So, here's an experiment looking at--you know, they have a lot of tools available
to them. We want to know what kind of tools they use that creates the best performance.
So, the experimental setup goes like this: in the upper left is the sonar display. Sonar being sound: we send out sound, it bounces off things, and the sound that comes back has a certain profile that shows up here. Typically, that data is dealt with by the sonar technician; fire control people don't have access to it, but we're going to give it to them here anyway. All that data comes in, gets computed through a bunch of computers, and gets put up on a display in the upper right that shows the data in a slightly more friendly format. That goes into another computer that does more processing on the data and spits it out onto these two identical displays. And these are the main displays that the Fire
Control Technicians work on. They have a geographical display here, they work with the actual tools here, and they do a bunch of other things. So, essentially, what we were able to do is get a bunch of junior Fire Control Technicians to come in, sit down, strap an eye tracker to their head, and we were able to present them with scenarios where they actually had to localize targets in the environment while we tracked their eyes. So, we
got information that looks like this. In the upper left is a display of fixations. A fixation
is when you stop moving your eyes and you're focused on a location and you're gathering
information from that place. So, if I focus on a particular face and stop there, I can fixate on that face and gather information. So each individual blue dot is a different,
separate fixation that the Fire Control Technician made. So you can see here, there are a lot
of blue dots right there, around very specific tools within their main display. Down here
is saccade data. A saccade is essentially a very quick eye movement that occurs between
fixations. So, if I want to scan the room and look at each individual person, I fixate
on a person, make a saccade to move to the next person and so on. So this way, I can
see how the eyes are moving through what's called the scan path and how it's gathering
information in sequence. So, you can see here the faint red lines on the screen that run between the individual blue dots. From this, we can find typical scan
paths. And what we're able to do with this is we were able to break down some of the
scan paths in a generalized manner and correlate them with performance. So we were able to come up with a specific workflow--they would go from this location to over here, press the button down here, then use this to inform what's going on here--and we were able to make these observations and also interview Fire Control Technicians about what they did, to say, "Hey, this is kind of a best practice for how to use the system to most accurately localize your targets and best do your job." So this is how, by tracking where visual attention and the eyes were going,
we're able to understand kind of the thought process behind these Fire Control Technicians.
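To make fixations and saccades a little more concrete, here's a minimal sketch of the kind of dispersion-based fixation detection that gaze analyses like this typically start from. It's a simplified illustration with made-up thresholds and synthetic gaze samples, not the actual algorithm or data from the Navy study.

```python
# A run of consecutive gaze samples counts as one fixation if every point
# stays within a small dispersion window and the run lasts long enough;
# the jumps between fixations are the saccades. Thresholds are illustrative.

def detect_fixations(samples, max_dispersion=25, min_samples=5):
    """Group consecutive (x, y) gaze samples (pixels) into fixations.

    Returns (mean_x, mean_y, n_samples) per fixation; n_samples times the
    sampling interval gives the fixation duration."""
    fixations, run = [], []

    def close_run(run):
        if len(run) >= min_samples:
            fixations.append((sum(x for x, _ in run) / len(run),
                              sum(y for _, y in run) / len(run),
                              len(run)))

    for pt in samples:
        candidate = run + [pt]
        xs = [p[0] for p in candidate]
        ys = [p[1] for p in candidate]
        if (max(xs) - min(xs)) + (max(ys) - min(ys)) <= max_dispersion:
            run = candidate            # still dwelling in one place
        else:
            close_run(run)             # dispersion exceeded: a saccade
            run = [pt]
    close_run(run)
    return fixations

# Synthetic samples: two dwells with one saccade between them.
gaze = [(100, 100), (102, 101), (99, 103), (101, 100), (100, 102),
        (400, 300), (402, 298), (401, 301), (399, 300), (400, 299)]
for x, y, n in detect_fixations(gaze):
    print(round(x), round(y), n)
```

Once samples are grouped this way, the jumps between consecutive fixations are the saccades, and summary metrics like fixation frequency and average fixation duration fall right out of the grouping.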
So, that’s kind of my favorite way of doing this but, of course, there are other ways
as well. I'll talk a little bit about those. In terms of training visual attention, one
of the most fun ways of doing it is through first-person shooter video games. A bunch
of researchers, most notably and early on Green and Bavelier, brought in people who were novice and expert video game players, specifically players of first-person shooter video games; so Doom, Quake, all those kinds of games. And what they found here--if you
think about it too, if you've ever played these games is that you're focused on the
central point on the screen, because that's where your gun is aimed, but you have guys, enemies, coming at you that could be, you know, way up top, away off to the side, or even coming in pretty close. But you have to be able to track all these guys to shoot and kill them. So, what Green and Bavelier did with their expert video game players and
non-expert video game players was run them through a series of basic attention tasks.
So, in the task up top right there, they'd first be focused on the center, and then kind of a little blip would appear on the screen: these circles here weren't actually there, but a little bright dot would appear either close in to the center of the screen or progressively farther out. Then the screen was wiped and they had to say, hey, where was the object. And what they found, with the dashed line as video game players
and the solid line as non-video game players: the video game players were consistently able to detect objects better. They had, you know, a greater capacity of visual attention, to pay attention to more of the screen, and were able to detect the small blip that occurred very quickly; the experts were better than the novices, just from playing video games. These guys have also done some training studies where they
bring in novices and have them play hours upon hours of video games which is not a bad
way to get your undergrad psychology credits and then test them on the same visual attention
task, and you do see an improvement after hours of video game playing. So some of the
work that I've done at George Mason University with Matt Peterson and his group is that we
did this with eye tracking. We brought in video game experts and novices, had them play video games, and tracked their eyes. This is some very early pilot data that we have; we're working on expanding this into an entire study. It was a 10-minute pilot study, so they only played games for about 10 minutes. But up top, that's a graph of fixation frequency: how many fixations they made, how often they stopped to gather information from the world. You see the experts actually made fewer fixations than the novices did, which is the blue bar.
So it's interesting that the experts actually made fewer eye movements; you might think that if they're better at the game they would make more. But that's actually explained down here with the average fixation duration, which is how long they actually stopped to gather information. And you can see here that when the experts did stop, they stopped for longer than the novices did.
So, combining these two results together, what we can say is that an expert will make
fewer fixations, but when they stop to make a fixation, they stay there longer, presumably because they can gather more information--which fits what Green and Bavelier showed, that experts have a wider field of view and can gather more information from a single fixation. So at George Mason we're definitely working more on training video game players and seeing how their eye movement patterns change over time. But this is definitely an interesting way to train visual attention. Okay. So, I want to--yes, absolutely.
>> I have a question. So, probably that fixation would be consistent [INDISTINCT] so that those
results would probably be consistent with the hypothesis that the experts have better
peripheral vision. >> WONG: Yes.
>> Was that--was peripheral vision measured directly in any way or does this...?
>> WONG: Excuse me, can you repeat the question? >> Yeah. Is there any sort of like measurement
of like how good your peripheral vision is? >> WONG: Sure.
>> And if so, were the experts better? >> WONG: To some extent, this task right here up top can be thought of as measuring peripheral vision, because your eyes are forced to stay at the center of the screen and you present objects that are close in or farther and farther
away. So you're tapping into that measure of how well can they pay attention in the
periphery. >> Okay.
>> WONG: Experts are definitely able to expand their field of view, or expand how much they can process in their peripheral vision. >> So, the whole screen, about what's, like,
percentage of--like about what was the angle relative to the person? Like about what angle
of their vision was the size of the screen? >> WONG: Sure. They typically--the screen--I
don’t have that off the top of my head, but they're sitting about 20 inches or so
away from like a 19-inch screen or so. >> Okay.
>> WONG: So, it fills a fair amount of the visual field. And actually, in the Green and Bavelier studies, I think they had a really large screen and people were sitting very close, so for those studies it took in a lot of their visual attention. For the studies we ran, it was just a standard
distance you'd sit away from the monitor. >> Thanks.
>> WONG: Yup, no problem. Okay. So, switching gears a little bit to higher level cognition--it's higher level because we generally have more control over it--there's Problem Solving and Decision Making. Problem Solving is interesting because one of the most interesting psychological concepts, when we're studying how people solve problems, is the term perseveration, which is when we have a solution to a previously solved problem and we keep trying to apply that solution to everything. It's like we've developed the hammer, and everything
else looks like a nail. So, an interesting book that I found that’s relevant to software
development is this book called, Anti-Patterns, which is, of course, to play on software patterns
and development where people have written a certain piece of code to solve a problem
and any new problem that come out, they keep trying--did you read you this piece of code
to fit and to other software--to fit into this other problems. And this is the problem
to some extent as programmers being lazy but, of course, to some extent, it's just that
we've put in all this effort and we don’t want to see this effort go to waste. We want
to maximize the efficiency of the effort we put into something. And, of course, a lot
of the time reusing solutions can be a good thing, but perseveration--when you refuse to switch away from a solution you've already found--can be a very bad thing. But we all have these kinds of problems. This is an xkcd cartoon: it's the holidays, we're all going to have to go home and fix our parents' computers, or else they're going to ask us for specific tech support, and this is the diagram we all generally use. We have a problem. We're going to try and find the menu item or button that looks like what we want to do, we're going to click it, and we're going to see
if it works or not. And there's this kind of loop that we generally go through: trying a bunch of different buttons, Googling, and seeing what's going to work. And to some extent, if you go at this for too long, you're going to be perseverating, because you're trying this one particular solution that's worked before on all these different problems. Thankfully, the xkcd comic has kind of an exit clause: if you've been trying for more than half an hour, you give up. You ask someone for help, you call tech support, god forbid, or else you just plain give up and say, yeah, we'll try something else next time. But, nonetheless, this behavior we have is generally good; we just have to be aware of when we're perseverating, of when we're trying too darn hard to make a solution fit something else. We have to recognize that and say, okay, let's wipe the slate clean and start over.
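The flowchart logic described here, with its half-hour exit clause, can be sketched as a loop; the callback names and return values are illustrative paraphrases of the comic, not anything from the talk:

```python
import time

def troubleshoot(pick_menu_item, click_and_check, ask_for_help,
                 time_limit_s=30 * 60):
    """Sketch of the tech-support flowchart: keep trying menu items or
    buttons that look related to the goal, but bail out after the time
    limit instead of perseverating on an approach that isn't working."""
    start = time.monotonic()
    while time.monotonic() - start < time_limit_s:
        item = pick_menu_item()          # find something that looks right
        if item is not None and click_and_check(item):
            return "solved"              # it worked; done
    return ask_for_help()                # exit clause: stop perseverating
```

The time limit is the anti-perseveration device: without it, the loop happily retries the same class of solution forever.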
So, the related concept of decision making is an interesting one. Some experimenters gave people two different glasses of wine and told them that one wine was significantly more expensive than the other, even though, of course, the two wines were very, very similar. I don't think they were identical, but they had similar properties.
And they told them, you know, drink the wine and tell me how much you like it on a scale
of one to five. Naturally, as you would expect, people tended to like the wine that they were told was more expensive, even though the chemical composition was very similar. What's more interesting is when they had people do this in an fMRI scanner.
An fMRI scanner is able to look at the brain and see which particular areas--bundles of neurons--are active. What the researchers did is image specifically what's called the orbitofrontal cortex, an area of frontal cortex that's known for emotion and pleasure and kind of happiness. So, what you see here is that on the x-axis you have the rating of how much they liked it--here's where the more expensive wines go and here's where the less expensive wines go--and on the y-axis is how much activation you saw in the orbitofrontal cortex. So as you're told it's a more expensive wine, you tend to like it more, and that area of your brain is more active.
So, you might think that if you're told it's a more expensive wine, you're just lying to yourself--that deep down you know they're really the same, but because you paid more and don't want to feel ripped off, you say you like this one better. But it turns out that at the neural level, your neurons are helping perpetuate this lie by saying, hey, you really do like this one better. So this is kind of the ultimate example of "do my thoughts deceive me," where your brain is actually supporting the lie that you like the more expensive wine better.
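As a toy illustration of the behavioral half of that result, the analysis amounts to comparing ratings of the same wine under two price labels. The numbers below are simulated purely for the sketch; the 0.8-point effect size is invented, not the study's:

```python
import random

random.seed(0)

def rating(label_bias):
    """Simulated 1-5 rating of the *same* wine; the only difference is a
    bias added when the taster is told the wine is expensive.  The 0.8
    effect size is made up for illustration."""
    return min(5.0, max(1.0, random.gauss(3.0, 0.5) + label_bias))

told_expensive = [rating(0.8) for _ in range(200)]
told_cheap = [rating(0.0) for _ in range(200)]

mean_expensive = sum(told_expensive) / len(told_expensive)
mean_cheap = sum(told_cheap) / len(told_cheap)
print(f"mean rating when told expensive: {mean_expensive:.2f}")
print(f"mean rating when told cheap:     {mean_cheap:.2f}")
```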
Okay. So, real quickly, because I'm starting to run out of time: understanding cognition from a broader perspective--not just visual attention, but also how we can break down these higher-level problem solving and decision making strategies I've been talking about. One way we can do this is through something called Task Analysis, which we're applying at the Navy right now, where we have a system for pilots and navigators on surface ships trying to navigate a giant ship. Imagine trying to parallel-park a giant, really expensive, really heavy ship against the dock without bumping into it. It's a big task, and we don't want to just send them out to sea and say, "Hey, have fun." So we've built a virtual simulation of this. Here's a screenshot of that simulation.
It's pretty high fidelity; they have all the physics built in. But the issue right now is that every time a student gets into the simulator, you need to have an instructor there as well giving feedback: hey, you're going too fast; hey, you made this turn too early; or good job, though it's only occasionally that they get the positive feedback. Nonetheless, it's very expensive to have these instructors there. They're often hard to find, because they'd rather be out on the boats, and also they get really cranky. You know, nobody wants to see a student screw up a turn 50 times over; they'd rather be somewhere else. So the goal here is to develop an intelligent tutor: a software-based tutor that can monitor what the student is doing and give feedback, so you can have one instructor for eight students instead of one instructor for one student. Stanford University is developing the intelligent tutor--that's actually the main purpose I'm here this week, working with the people at Stanford--but the Navy is developing essentially what I like to call the answer key. If the student is doing something and the tutor is monitoring, but the tutor doesn't know what the right answer is or what an expert would be doing, it's not able to give feedback properly. So at the Navy, what I'm working on is
understanding what an expert would do. So we actually went across the street to the
Surface Warfare Officers School. For anonymity purposes, that's why that guy's face is blurred out. But we got them into the system and we said, "Hey, can you dock a ship? Can you undock
a ship." And we could pepper them with questions: why did you do this? Why did you make that turn? Why were you going this fast? How did you know when to stop? We interviewed, we videotaped, and got all this information. We read a bunch of books, and we worked on distilling this into something that works essentially like a flowchart--in this case, a text-based flowchart: they have a goal, which means they develop this sub-goal; if they haven't hit the threshold they're looking for, keep monitoring; if they have, then move on to the next goal; things like that. But essentially, we're breaking the task down into its primary cognitive pieces and then
laying those out so that it's easy to understand. And then from there, we can take that and actually implement it in what's called a cognitive architecture, which is a piece of software where you can actually click go and the software will go through similar steps and make decisions and solve problems in a similar way. And from that piece of software, the intelligent tutor can pull out information about what an expert would be doing. Okay,
so finally, the last thing--and then I'm done--is the idea of Training Higher-Level Cognition: problem solving and decision making skills. And really, one of the best ways to do that, I think, is to make sure that the people you're trying to train have as full an understanding of the problem at hand as they possibly can, without going all the way down to, you know, atoms and chemistry and things like that. In the upper left you see a sonar display; I actually showed an example of that earlier. It's all this
sonar sound data coming in. And the interesting thing is that when you have a certain sound coming in, you don't necessarily know what that means for where the target is in the environment. The target could be 3,000 yards away at this angle, or it could be 2,500 yards away at that angle. It's not an exact science, figuring out from the sound I'm getting back that the target is here. So you end up trying a bunch of different solutions: what about this one, and this one, and this one? Each of those solutions has a better or worse goodness of fit--more or less error--and that's displayed on a display that looks like one of these four, known as a PEP display, a Parameter Evaluation Plot. The x-axis here is actually time. So you can see
with not a lot of data, you get a display that's all red. In this case, red is actually good--which is probably backwards from what I said at the very beginning--but red means very little error: the solution fits very well, so it's very likely that the target is over there. So without much data, what this is saying is, oh, the target could be anywhere, which is not very helpful if you want to actually fire a torpedo. But with more time you get more information, and by 20 minutes you've got lots of data and the red piece is a very small portion of the plot. So this is the kind of display that these Fire Control Technicians are working with, but they pretty much only read about it in a book. They might have a classroom with lots of explaining, but it's not very good. We want to make sure they really understand it. So what we've actually done--this is built, believe
it or not, in Second Life--that virtual world technology that nobody uses anymore. I'm not surprised the government is still using it. But they're not using it to chat with other people; they're using it as a modeling and simulation platform that's been built by someone else--in this case, Linden Lab--so we're just leveraging that technology. They built a 3D plot, because the color, as I said, represents more or less error, but in this case you can also represent spatially, by the height on the z-axis, how much error there is. And there's all kinds of functionality in here: people can click a play button and it will actually go up top, try different solutions, show you how good the fit is, and actually build each of those little boxes individually. So if there's enough time and the Fire Control Technician is really curious, they can sit there and play around and see how this graph is actually built, square by square, to figure out where the best solutions are. So from here, they get a really comprehensive picture of what this PEP display is actually doing, and hopefully they're able to perform their job much better. So in summary,
as a whole, our cognition doesn't always work the way that we want it to, but there are many different ways to understand cognition and many ways to make it better through training. Human factors sometimes feels like common sense, but it isn't always, and creating usable designs is critical, whether people are searching for something on the web or trying to localize targets on a submarine. Thanks a lot.
>> So at this point, we have 10 minutes for questions.
>> WONG: Sure. Anyone have questions? I'm happy to take them. If not, that's okay--oh.
>> Thanks for the talk. I was curious about
a couple of points. >> WONG: Sure.
>> Let's match up a couple of points that you mentioned. So the $10, $30 sort of over a [INDISTINCT] of cortex thing, the biases that are created…
>> WONG: Yes.
>> …in interpreting something. How do you actually deal with that when you're designing an interface--maybe the correctness of the data versus any biases, like in terms of color? Is color biased, and will that cause someone to misread a result, or leave an impression of the result that's more favorable when it may not be?
>> WONG: Sure. Absolutely. And it's definitely something that you have to be aware of. In this case, I would see it more as a need for designers to really understand the presence of that bias, versus trying to train those biases out of people. If something is happening at the neural level--it's almost like an optical illusion--then training people that "this is really what's happening" might be very difficult. Versus if we're able to understand these biases through things like human factors research--you know, not necessarily reading the literature yourself, but talking to someone who has read the literature--then, understanding how people actually work, we can design around that. Something like color is certainly one of those issues that comes up a lot. People are color blind. People have different associations with color. But there's a lot of research out there that we're going to have to seek out to understand. Oops, there's a question right here.
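On the color point, one constraint designers can check mechanically is luminance contrast. Here is a sketch of the WCAG contrast-ratio formula; this is a general accessibility check from the web guidelines, not something from the talk:

```python
def srgb_to_linear(channel):
    """Linearize one 0-255 sRGB channel per the WCAG definition."""
    c = channel / 255.0
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb):
    """Relative luminance of an (R, G, B) color, 0.0 (black) to 1.0 (white)."""
    r, g, b = (srgb_to_linear(v) for v in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """WCAG contrast ratio between two colors, from 1:1 up to 21:1."""
    lighter, darker = sorted((relative_luminance(fg), relative_luminance(bg)),
                             reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# Black text on a white background is the maximum, 21:1.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # 21.0
```

Contrast only addresses legibility, of course; it says nothing about learned color associations like the red-means-error convention discussed earlier.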
>> So the attention anti-focus--the distraction from what you ought to be paying attention to, but it's not quite interesting enough.
>> WONG: Yes.
>> Are there good ways of trying to keep someone's attention on that even though it's a boring thing? I mean, this is like the airline pilot problem of…
>> WONG: Yes.
>> …about 250 people, and if he screws up, they die, and your job is boring.
>> WONG: Yes, absolutely. And that's actually
a field within cognitive psychology known as vigilance, where people have to pay sustained attention to something--like baggage screeners at the airport who have to sit there and look at these bags. They've got, you know, five, ten seconds to look at each one, and how often is there going to be a knife or a gun or anything like that? There's actually been a lot of research on this as well. For example, TSA baggage screeners get a lot of training; it doesn't really seem to help. Every so often you get a news article saying someone was testing security and passed five guns through and they totally missed them. But there are some strategies, things like giving people lots of breaks. TSA employees get a break fairly often--I want to say maybe every half hour; don't quote me on that. So breaks, for example, and then upping what's called the hit rate. Essentially, in a task where you're trying to detect something, like baggage screening, there's very rarely going to be a gun or a knife in the bag for you to detect, so it's not a very satisfying job. However, if you start inserting guns--which they do; they'll insert an image of a gun--then someone says, oh, there's a gun, and it turns out it's a false one, but at least they get some satisfaction that, hey, they did their job correctly. So if it's their job and it's a really boring task, those are ways to give them an incentive to pay attention. If it's something that's not necessarily their job, then it might be harder to keep their attention versus them just totally turning away.
>> Okay. I'll put you on the spot. What do
you think of the search results page that we have? Not the homepage, the search results
page.
>> WONG: The search results page. I personally like the search results page. Everything is on the left-hand side. You're putting me on the spot. Thanks.
>> Okay. I'll talk to you about it afterwards.
>> WONG: In general, I think it's relatively uncluttered. Things like the ads right on top are highlighted as ads, so you're able to skip past them, and the actual content is nicely segregated, so it's really easy to find.
>> The question I did want to ask you is around
the study that you mentioned where they're using video games for…
>> WONG: Yes.
>> …attention and peripheral vision. Were they also measuring attention in experienced players? Are there two different types of attention neurologically--sort of active versus passive attention? And are they actually focusing? Because I thought maybe novices would focus too hard and not actually see things in their peripheral vision--it's a matter of being too specific--whereas if you don't focus, you actually increase your peripheral vision. So are experienced players just relaxing more and not moving, but then actually seeing less detail?
>> WONG: Sure. Absolutely. And it's actually
kind of--the best way to answer your question is that there was one specific task, the useful field of view, where there's a central task in which they have to discriminate between one item and another, and you can make that very difficult--for example, line orientation from horizontal to five or ten degrees off. But at the same time the central task is presented, something in the periphery is presented that they also have to detect. So it's a central discrimination plus detection in the periphery, and what you find a lot of the time--the standard effect, ignoring expert versus novice--is that the harder the central task is, it's almost like attention gets sucked in, so people are worse at detecting things in the periphery. What you find with experts is that attention doesn't get quite as sucked in: they're still good at the central task, but they're also still able to pay attention to the periphery. So if you buy into the resource theory of attention--we have kind of a pool of how much we can pay attention to--experts can pay attention more, both centrally and peripherally.
>> Does anyone else--do you have a question?
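The dual task just described can be sketched as a trial loop with a schematic observer; the thresholds and eccentricities below are invented for illustration, not values from the actual studies:

```python
import random

random.seed(1)

def run_trial(central_tilt_deg, peripheral_ecc_deg,
              tilt_threshold_deg, field_of_view_deg):
    """One dual-task trial for a schematic observer: the central line
    discrimination succeeds if the tilt exceeds the observer's threshold,
    and the peripheral item is detected if it falls inside the observer's
    useful field of view."""
    central_ok = abs(central_tilt_deg) >= tilt_threshold_deg
    peripheral_ok = peripheral_ecc_deg <= field_of_view_deg
    return central_ok, peripheral_ok

# Same trials for both observers; the "expert" is modeled simply as having
# a useful field of view that shrinks less under the hard central task.
trials = [(random.choice([5.0, 10.0]), random.uniform(2.0, 25.0))
          for _ in range(100)]
novice_hits = sum(run_trial(t, e, 5.0, 8.0)[1] for t, e in trials)
expert_hits = sum(run_trial(t, e, 5.0, 18.0)[1] for t, e in trials)
print(f"peripheral detections out of 100: novice {novice_hits}, expert {expert_hits}")
```

Here the expert/novice difference is baked in as a wider field-of-view parameter, which is the resource-pool idea in its crudest possible form.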
>> Following up on his question--this is drifting a little bit--there's, you know, a thing where people can get in the zone, or in flow.
>> WONG: Yes.
>> …and I'm wondering if you've ever tried to, you know, see how attention changes when someone's in flow.
>> WONG: Sure. And that's an excellent question, and I'm not familiar with any research off the top of my head that's actually looked at that. Ricardo, are you familiar with that? Any research…
>> RICARDO: I've done a lot of research in flow but that question was specific [INDISTINCT].
>> WONG: Yeah. It's definitely interesting, but I'm not familiar with anything like that, but…
>> Because I know that one thing that happens is things get totally screened out.
>> WONG: Yes.
>> Everything that does not…
>> WONG: Yeah, your attention is fully focused on something.
>> But when you've got, you know, sort of a central and a peripheral task, I wonder, can you get into flow with that, or is it impossible?
>> WONG: Yeah. Yeah, absolutely. We typically think of flow as focusing on one task very intensely. So things like alerts and pop-ups, you might totally ignore them--like when the phone rings and you totally ignore it. But I'm not familiar with too many studies on that. One place I might look, for example, is the attention deficit disorder literature. Obviously that's a specific case of how attention works--people with ADD tend to get very focused on one particular task and it's hard to distract them. So studying that phenomenon could inform normal attention.
>> It feels like a [INDISTINCT].
>> WONG: Okay. >> That'll be nice.
>> All right. What's your perspective on level of detail? Because I was at a talk recently and it surprised me--it was a Tufte tech talk, and he's very adamant about detail and that…
>> WONG: Sure.
>> …the throughput into the eyes is like 20 megabytes of data a second, and he's all for adding as much detail as possible. So when you're trying to show data, if it's a graph line, it should be a sparkline, where if you look at it really sharply, your eyes can actually pick up all the very minor differences.
>> WONG: Sure.
>> And so when you're trying to present data or an idea, someone needs a larger version of it to grab their attention, but then once they have their attention they sort of need the detail, and usually we only have one state versus a graduated state. So do you have any opinions on that?
>> WONG: Sure. Absolutely. And I think with proper design, it can definitely be done. There are some examples, like fisheye displays that might be tied to something like eye tracking. So as you move your eyes, the part where your eyes are focused becomes almost zoomed in. From the zoomed-out perspective you can see where the data you want is, and as you look at it, that data actually comes into focus. It just needs to be designed correctly, in such a way that from the zoomed-out perspective--when you're standing back and looking at everything--you know where the data you want is; otherwise, if you really need to focus on something and you don't know where it is, it's going to be impossible to find. So you have this problem of designing for the zoomed-out perspective, but also designing so that once someone has actually walked up to the poster, or computer screen, or whatever it is, and looked much more closely, they're still able to retain a sense of where they are in the entire thing while receiving that information right at the center. So it makes sense; it's just going to be a really tough design challenge. Someone like Tufte, of course, could probably pull it off, but I'm not sure about it as a standard usage.
>> Thank you.
>> WONG: Thank you all.