Video Games and the Future of Learning (Jan Plass and Bruce Homer)


Uploaded by GoogleTechTalks on 07.06.2011

Transcript:
>>
LANDIS: Hi everyone, I'm Matt Landis with GDU, and we're pleased to finally have Jan
Plass and Bruce Homer from the Games for Learning Institute here at Google as part of our
Games at Google Tech Talks series. The serious and educational games space is at a pivotal
stage right now, where the skeptics are declining in numbers and people are really interested.
But they're starting to ask some questions. They want to know if this is effective, how
can it be effective, and they want to see some data. They want to see some results
and they want to see some metrics. And that's--one of the reasons I'm so happy to have
you guys here today because I think you are at the forefront of trying to do serious research
to demonstrate the efficacy of some of these games and what can happen in a classroom.
Please join me in welcoming Jan Plass and Bruce Homer to Google.
>> PLASS: Thank you Matt for the introduction and for inviting us to speak here today and
hello and good afternoon to you here in the audience and on the broadcast. The Games and
the Future of Learning is our title. It's supposed to be a little bit provocative in some
audiences, but I don't imagine that's the case here. Still, it's my favorite
question to ask: who is a gamer in the audience? All right. So, we come from the Games for Learning
Institute, and what I want to talk about today is really, you know, a series of presentations
combined. I want to bring up the national challenge that we're facing right now, just
so that you get a context for what we're up against, and then talk about the Games for
Learning Institute and how we try to address that. Bruce is going to talk about some previous
research that we've been doing that is the foundation of what we are doing now, and then
I'll talk about some current issues and challenges that I think very much relate to some of the
work that some of you are doing here. So, national-challenge-wise, I don't want to spend
too much time on that, because you're a high-tech organization and corporation yourselves.
Just looking around in the audience, I don't see too many women here. In many cases there
are very few women engineers, and when we look at computer science departments, you see between
2-4% of women actually choosing computer science, and we're asking, among other things, the
question, why? So, the national challenge is really that U.S. students are falling behind
their peers in other countries, and studies like PISA and TIMSS show that. PISA is an
international comparison study administered to 15-year-old students on all different kinds
of topics: science, literacy, math, etc. But it's also that the 21st century workforce requires
skills that are not taught in schools designed for the 19th century. We have science and
engineering positions that are open in many companies, and they try to fill them with
qualified applicants, especially to get some more female applicants, and that is a very
difficult thing to do. And so one of the things we're asking is why that is and how we can
address it. And we believe in part that it is because the education
system that we have is not preparing or providing materials in a way that really addresses a
broad range of learners and their needs. It's really very specialized in how we deliver
instruction because we essentially have a one-size-fits-all approach to this. So, here's
a PISA study. The red markings are the United States, and when you look at the countries
that are ahead of us, you come to one simple conclusion: it doesn't have to be that
way. The fact that Belgium, Iceland, Norway, South Korea, Australia and New Zealand are
ahead of us--Canada is number two in reading literacy, which was the focus in 2000; we're
24th in math and 21st in science--it doesn't have to be that way. And so we believe that technology
can actually be an answer to that, and that's what we do in the Games for Learning Institute
and in CREATE, the Consortium for Research and Evaluation of Advanced Technologies in Education,
which does the educational research within the Games for Learning Institute. So we are--we
are interested in providing guidance for designers who decide to build educational interventions
because currently there's very little research that does that, and certainly very little
research that's translated into something designers can work with. We want to show best
practice examples and we especially want to focus on a variety of digital media and not
just multimedia learning or [INDISTINCT] based learning, but a broad variety of digital media
for learners of all ages, coming from various cultures, and for a broad range of subjects.
And we want to look at conditions for the successful integration of media into schools,
because we're not working in isolation; there are formal and informal learning environments
we need to address. So that's what we're interested in--and bridging that with out-of-school
experiences on the go, on [INDISTINCT] mobile devices, etc. And what we're trying to do, in
order to get this all done, is to drive the research agenda, because currently there isn't
really one. So we have a number of collaborators, we have an amazing board of advisers with
game designers--and Alan Kay is on it, and many others--who are advising us in this process
and helping us do this right. Anybody who can tell me who the guy is in the--[INDISTINCT]
in the middle on the right? Ken--exactly. So, Will Wright is on the advisory
board--so we have the people who know what to do and how to do it. And the guy in the
middle there is Ken Perlin, who is the director of the institute, [INDISTINCT] Games for Learning
Institute. I'll talk about them in a second. So we're bringing a lot of people together
to do this work that is the Games for Learning Institute, and there are faculty members from
nine different universities, including Teachers College, Columbia University, Dartmouth--you
see the list here. All looking at the question, how can we design games so that they are effective
for learning? There's the question of the design of games, and then there's what we are
interested in: empirically-based and theoretically-derived design patterns for
games for learning. So, we're not interested so much in just observations of game
play; we're interested in collecting the data--data mining, or analyzing the data with statistical
methods and then coming to conclusions that inform the design of games. And so the mission
then is to look at design patterns for effective games for learning, combining what we know
from educational psychology research to translate that into design for Games for Learning and
those theory-based empirically validated design patterns I've talked about. And really, in
the end we want to help create a nation with informed citizens that have the necessary
skills--digital literacy skills to be successful in the 21st century. So we use a variety of
research methods, experimental research, video observations, play testing, and we use all
different kinds of measures to get at the kind of data we want, and this is, in many
ways, something where you have to break new ground, and I'm going to talk about this in
a little while. But why games, then? We think that they provide highly contextualized
places for learning, which is exactly the problem in schools, which de-contextualize and
compartmentalize. Learning is situated: everything that happens within a context that makes
sense to you is something you have a much higher likelihood of actually applying in the
future. Games are highly engaging and highly individualized, which we currently can't do
in school settings or in many out-of-school settings. And they don't just teach 21st century skills
but also content that is important, so we can combine the two. And we're bridging in-school
and out of school learning. They have an emotional impact by design which is something that's
often overlooked when it comes to design for learning. And they allow for embedded assessment
and that's what we're really excited about, not just of learning but also of a number
of other variables, and I'm going to talk about that in a minute also. However, we really
don't understand yet, from a research perspective, how to do all that right and how to make games
that are effective for learning and fun. And so that's the core mission of the institute.
So, we are building adventure games for science learning with strong narratives and science
problems; we're building co-located AR games with geo-located hot zones and authentic scientific
data feeds--here about Times Square in New York City, and renewable energy sources there.
We're building games for simple math skills--for math skills like Super Transformation, Sound
Transformations. And so we're going to talk about some of the research on that, and it's
all grounded in a lot of psychology research and Bruce is going to give a presentation
on some of the studies that were done outside of games that inform how we're now looking
at games and the kind of research questions we have there.
>> HOMER: So this is just a sort of high-level overview of our research objectives. We
use cognitive, social, cultural and affective theories to influence our design
of various digital media; we're looking at learner variables, both in how they influence
our design of digital media and how they're influenced by the digital media. We're trying
to build embedded assessments and biometrics into the products that we're developing and
studying. And for outcomes, we're looking at cognitive outcomes, meta-cognitive outcomes,
engagement and learning engagement, and affective outcomes. What I want to do is just give
a quick overview of some of our main findings. I'm happy to talk at the Q&A if anyone has
more questions about these. But I just want to kind of highlight some of the main findings
for the work that we've done already. So this is one of the projects looking at computational
thinking. It was particularly targeting computational thinking in young women, and it was a dance
game where you could change outfits and influence, like, dance moves by basically scripting
the activity. This was funded by the National Science Foundation. The participants
were 56 middle school students, roughly half female, and it was a pre-test/
post-test design; it lasted for 4 weeks with a 50-minute session per week. The key findings:
although there wasn't a [INDISTINCT] increase in programming-related knowledge--
that wasn't really the main focus--what we did find was that there was an increase in general
self-efficacy for the young women that took part in the project. There was a significant
pre/post increase in game and programming self-efficacy for girls and a marginally significant
increase for boys, and a significant pre/post-test increase in self-esteem. And this increase
in self-efficacy and self-esteem is what's necessary if you're going to have girls go
into computer science and programming and stay with it. So, you know, the short-term gain is
not as vital as some of these kinds of increases in self-efficacy--you know, that's the sort of
exciting thing from that work. This is from a project--Molecules and Minds--where we've
been developing chemistry simulations with high school students. This is the simulation
exploring the ideal gas law. And one of the things that we've looked at
in this design is representational format: how to represent the information in these simulations
so that a variety of learners can understand the content of what's being presented. So,
we looked at adding iconic representations. So if you look at the simulation on your
right, you see that we've added weights, which represent pressure, and flames, which
represent temperature, heat. And the empirical question was: does
adding these iconic representations help improve learning for these learners? This is grounded
in the semiotic theory of Peirce and some other literature. We did a study with 93 New
York City high school students in 11th grade--a 2x2 study. So we looked at
adding the icons versus not having the icons, and whether it was more
direct instruction, where it was very guided, or whether they were allowed
to just kind of explore it on their own, and looked at how these two factors influence
learning. So for the representational format, what we found is that adding icons did
improve comprehension, especially for learners with low prior knowledge--and those
were our target audience. So if you came into this not knowing a lot about chemistry, those
icons really helped you make sense of the simulations, and your learning really improved
on comprehension, which is kind of the basic information that you're trying to get
from the simulation. When we looked at younger learners, middle school students, though, we
found that even those younger students that knew the information benefited
from the icons. So there's also an age effect going on here as well. The second
factor that we looked at was how much guidance they required in using this simulation.
And we compared more discovery learning versus more direct instruction again with the 93
high school students. And what we found is that, for comprehension, there's a general--
significant--trend for the exploration to be better than the more direct instruction,
so there was a benefit to exploration. But when we looked at some of the individual factors,
it became slightly more complex and, to me, more interesting, in that--we
looked at executive functions and how they predicted learning. So executive functions
are related to frontal lobe functioning; it's executive functions that allow us to set goals,
monitor whether or not we're meeting our goals, and adjust our behaviors if we're
not meeting our goals. It's a very important but low-level neurocognitive
feature. And we looked at how they did [INDISTINCT] on the executive function task, and found that,
controlling for their prior knowledge, if they had low executive functions then they benefited
much more from the direct instruction. Yes.
>> How do you measure executive function?
>> HOMER: This was--we used the Stroop task, which is--there are various versions of it,
but the version we used is: we'd have a color word, like "red," written in green ink, and you
have to not pay attention to the semantic content of the word but just name the color
of the ink, and we looked at how long it takes you to do that. So you have to inhibit the prepotent
response of saying, "Oh, that's red." "No, no. It's green ink." And we compare that to when
it's just written in neutral ink. So that's the executive function measure. Yes.
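As a side note, the Stroop interference measure Homer describes is typically scored as a reaction-time difference between incongruent and neutral trials. A minimal sketch of such scoring--the trial records, field names, and numbers below are hypothetical, not the study's actual instrument:

```python
# Sketch of scoring a Stroop task: executive function (inhibition) is
# estimated as the extra time needed on incongruent trials (the word
# "red" printed in green ink) versus neutral trials.
# The trial data below is hypothetical, for illustration only.

def stroop_interference(trials):
    """Mean reaction-time difference in ms: incongruent minus neutral."""
    def mean_rt(condition):
        rts = [t["rt_ms"] for t in trials
               if t["condition"] == condition and t["correct"]]
        return sum(rts) / len(rts)
    return mean_rt("incongruent") - mean_rt("neutral")

trials = [
    {"condition": "neutral",     "rt_ms": 520, "correct": True},
    {"condition": "neutral",     "rt_ms": 540, "correct": True},
    {"condition": "incongruent", "rt_ms": 700, "correct": True},
    {"condition": "incongruent", "rt_ms": 660, "correct": True},
    {"condition": "incongruent", "rt_ms": 900, "correct": False},  # error trials excluded
]

print(stroop_interference(trials))  # 150.0 -- smaller means better inhibition
```

A larger interference score suggests weaker inhibitory control, which is the individual-difference measure the study relates to learning condition.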
>> Did they spend the same amount trying [INDISTINCT]? >> HOMER: Yes they did. We controlled for
time as well. Yes. But the kids with the higher executive functions, controlling for
prior knowledge, did better with the exploration. So, this is one of the big debates in educational
psychology: How much direct instruction? How much exploration? It's very political
in the field of educational psychology, and so this is a study that says, "You know what?
It's actually slightly more nuanced than A is better or B is better." And then
with this whole project, we did a big efficacy study to see if the simulations
actually helped, and we put them into real classrooms. So we had a big study with over
700 students involved, where we randomly assigned classrooms to use the simulations or not use
the simulations. And we had two sites: one urban site in New York City, and one rural
site in Texas, where the students used this for their chemistry. Again it was pre-test/post-test
plus some individual measures. And what we found is that, for our rural sample in Texas, the
simulation group had greater transfer of knowledge--so what they learned, they were
able to apply to new situations in chemistry. They had greater self-efficacy
for chemistry, and they also acquired some graphing skills. For our urban sample, which
had lower prior knowledge, they had increased comprehension and increased transfer when they
used our simulations, and they also had higher engagement in class. So we actually
set video cameras in and recorded how engaged the students were in the classroom, and the
simulations kept them engaged and asking--they'd been asking key questions. Another factor
that we've looked at is emotional design. So, what happens if you try to design materials
to create positive affect? Does that help learning? This was a 2x2 factorial design,
where we looked at an external way of boosting positive affect and also at the design itself.
You can probably guess which one of these is kind of the happy design--the positive-affect
design is the one on the right. Yes?
>> [INDISTINCT]
>> HOMER: It's reading a passage with either, like a sad story or happy, you know, narrative.
>> [INDISTINCT] >> HOMER: Yes. Well, they read some themselves.
It's kind of a standard. In social psychology it's a standard mood induction technique,
and we adapted it for the study. And just quickly, the main finding was that the internal
mood induction was preferable to the external induction. The external induced
more positive emotions, but they decreased during learning, whereas the internal induction,
which is actually part of the learning, kept the positive affect up. And most importantly,
what we found is that with positive affect you see a boost in learning outcomes.
So, having learning materials that make you feel happy increases your learning. This is
just [INDISTINCT] up with the two studies from the Games for Learning Institute. This
is the Factor Reactor game that Jan briefly mentioned during his introduction, where we
looked at play modes. We looked at three modes of play: kids playing solo, kids
playing competitively, or kids playing cooperatively. And this was a study with 63 middle school
students. What we found is that for the collaborative and competitive play--so when
there's a social component to the play--there's greater situational interest: the students that
had either collaborative or competitive play actually were more interested in playing the
game in and of itself. It was more engaging for them--greater interest in it. Stronger mastery
goal orientation, so they actually wanted to learn the content more. But the solo
play group, although they reported it to be less enjoyable than
the collaborative, actually demonstrated a greater amount of fluency.
So we did a post-test, and in the post-test they learned more from the solo play, but
they liked it better and had better goal orientation in the group play. So again, if you're a teacher
who wants to use a math game, if you're going to be using it repeatedly, maybe it's worthwhile
doing the social play even if there's some decrease in learning; the increased
motivation might keep the students engaged and learning over time, and you'd get greater
learning. Yes. >> [INDISTINCT]
>> Just a quick question: when you say social, collaborative/competitive, how many students
are at a terminal? Are they playing each other, one at a terminal, or are there multiple people?
>> HOMER: They were both at one terminal. So for the collaborative,
there are like two rings, and each person controls one of the rings, so they have to
get rid of the factors in the middle by collaborating together. For competitive,
they both see each other's rings and they have to, you know, try to reduce their factors
and reduce more factors than their competitor. >> Did you break the [INDISTINCT] down by prior mastery
to see whether people who had greater prior mastery did better in the solo or the competitive environment?
>> HOMER: We did--I don't think that it had an influence.
>> No influence? >> HOMER: We control for prior mastery in
our analysis but I don't think that there was a--an interaction between prior mastery
and [INDISTINCT] play. >> These results are averages or you looked
at the best performance of the worst student or...
>> HOMER: These are averages so... >> I would expect [INDISTINCT] in collaborative
situations... >> HOMER: So we...
>> [INDISTINCT] effect
>> HOMER: With all these--sorry. We, you know, we screened the data for any outliers;
I think we had two outliers in this study who came in either having no math knowledge
or way too much math knowledge--I can't remember which now. And then it's averages,
looking at group averages.
>> Okay, because when you have group activities, you sort of expect different people to carry
the weight differently, right? So you'd expect a different shape of curve.
>> HOMER: Right, but the outcome measure is an individual measure. So the post-test
is something they do on their own. The post-test is just a standard math fluency test--
a paper-and-pencil test--that they did individually. So I don't know if that...
>> Okay. I think it's just okay. >> HOMER: Okay. And this is another game that
we've developed. We've tried to look at different learning mechanics and
see what influence they have on learning outcomes. So this is an angles game where you have to
complete the angle, and there are two mechanics: one is to solve it by just computing the number,
and the other is choosing a rule and dragging it in. So you're either using a number approach,
where you [INDISTINCT] have to compute it, or just using the rule, which is a more conceptual
approach and...
approach and... [PAUSE]
>> HOMER: ...and what we found in this--so we compared rule use versus number solving:
either actually using the numbers to solve it, or just using the conceptual rule.
We had 89 middle school students from 6th and 8th grade. And, you know, this is
a study that we're still completing, but the preliminary results indicate that they solved
more problems in the rule-based game, and in the game where they drag
the rules, the learning actually continues. So there's--I didn't say
that quite right. There are diminishing returns--so they learn from both games. That's the key
thing. Whether they're doing the arithmetic or they're just using the
rules, they learn in both cases. But what we find is that after about 30 levels, the
learning flattens out for the computational one, where they calculate
the numbers. But if they're dragging the rules, then you just keep seeing learning
gains the more they play the game. So there are greater long-term gains from playing
the game more conceptually rather than just computing the answers. So this is a summary
of the key findings which I won't go over again, but if we put them back into our original
layout of what our goals are: we've been looking at exploration and icons versus visualizations
as cognitive design factors; at engagement and collaborative interactions; and at cultural and localized
designs--so does it matter where you're learning, if you are
in a more rural or urban environment? We've been looking at self-efficacy, self-esteem, and emotional
design. And all of that feeds into developing games for learning. We've looked at various
learner variables--executive functions, prior knowledge--and at building embedded assessment
to further explore what's going on with these learning games. And we're looking at cognitive
outcomes, meta-cognitive outcomes, engagement, and affective outcomes. All right, so now.
>> PLASS: So, all of this is kind of--it's just a very brief overview of the research
that we're doing, and we're happy to talk afterwards about some of the details of how we're approaching
this and what kinds of measures we use. We're using game-based measures and out-of-game
measures, because right now you have a lot of skeptics that say, "If you only measure
performance through the game, it's very hard to make the case that it actually means
anything." So we have paper-based instruments after the game play--we can talk
about all of that. But I want to mention and get into some questions and challenges that
we're facing related to games research, because they actually are the ones that keep
us in this work. It's so interesting to do this work outside of traditional learning
environments, going into games, and many challenges have come with that. And just
to summarize something that we haven't really talked about, there are four functions that
we see for games for learning. One: preparation for future learning, where you don't even attempt
to teach something, you just set up future learning--this goes back to the work by [INDISTINCT]
at Stanford. Two: games for specific learning goals, where we can teach new content or skills.
But many games just practice those--that's number three: practice existing skills, leading
to automatization of those skills; very important, but not actually teaching. And then
four: development of 21st century skills. What we find is that most... yes?
>> [INDISTINCT] >> PLASS: Okay. Thanks for bringing this up
because some of those things we should have defined and didn't. "21st century skills" is actually
more a hype word than anything else, which I really should take out and replace with more
concrete ideas. It means teamwork, it means collaboration, it means more of--instead of
IQ, more on EQ, the emotional quotient. Being able to work in teams, creatively solve problems--
all of the things that you most likely face every day in your work, but that are
not taught in schools, or at least not in all schools consistently. And so, when we talk
about 21st century skills, we differentiate that from the 19th century skills which would
mean to know how to add and subtract and to regurgitate some facts about history and so
on. So that's the main difference here. Thank you for bringing that up. So, there's a lot
of interest in using games to teach those 21st century skills because games have that
collaborative component and they have ways of engaging in teamwork and creative problem
solving, etc. So, when we found this result--that solo play in this
math game is better than collaborative and competitive play--it is very clear that this is for a
game that falls into category number three, which is practicing existing skills and automatization.
If we have a game for the development of 21st century skills--which we're actually working
on, to use with the same study design to see if the patterns hold--we would expect
very different results for a game like that. So it's useful to think of games not
as one kind of monolithic thing, with not only different genres of games but also different
functions of games for learning. But the most generalizable research--experimental
research, quantitative research--focuses on games that practice those STEM skills,
and qualitative research--which is few participants, thick descriptions, but typically not involving
numbers--focuses on games to develop 21st century skills. And we feel we can go into
that area and do more comprehensive work using data-driven approaches as well. So, those
challenges. The first one that we're very interested in is embedded assessment, and there are
many opportunities for assessment in games. We keep logs, obviously, of what people do--
user logs, event logs--but also biometrics. And there's a number of variables that we're
interested in measuring, in addition to learner variables. We believe we can
measure what we call general trait variables--things like spatial ability, verbal ability,
executive functions; state variables--your knowledge, your strategies, your goal orientations,
self-regulation; and very specific variables for our particular context of play: engagement,
cognitive load and so on. We believe we can measure all of that with embedded
assessment, in addition to the learning outcomes, which are important but which are typically
what people focus on when they talk about embedded assessment in games. So, embedded
assessment is extracting that information out of an instrumented game and out of the
log files within that game. What is required, though, is the design of thoughtful game mechanics,
and that's what I want to spend some time talking about. We separate learning mechanics
and assessment mechanics, and I want to go into those. So, talking about learning mechanics.
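The embedded assessment just described--extracting variables from an instrumented game's logs--amounts to deriving measures from an event stream. A minimal sketch; the event schema and the engagement proxy below are assumptions for illustration, not G4LI's actual instrumentation:

```python
# Sketch of embedded assessment over a game's event log: each event is a
# small record with a timestamp (seconds) and a type, and simple in-game
# measures are derived from the stream. Schema and measures are hypothetical.

def derive_measures(events):
    attempts = [e for e in events if e["type"] == "answer"]
    correct  = [e for e in attempts if e["ok"]]
    hints    = [e for e in events if e["type"] == "hint"]
    accuracy = len(correct) / len(attempts) if attempts else 0.0
    duration_min = (events[-1]["t"] - events[0]["t"]) / 60
    return {
        "accuracy": accuracy,
        "hint_rate": len(hints) / max(len(attempts), 1),
        # Crude engagement proxy (an assumption): actions per minute of play.
        "actions_per_min": len(events) / duration_min,
    }

log = [
    {"t": 0,   "type": "start"},
    {"t": 20,  "type": "answer", "ok": False},
    {"t": 35,  "type": "hint"},
    {"t": 50,  "type": "answer", "ok": True},
    {"t": 120, "type": "answer", "ok": True},
]
print(derive_measures(log)["actions_per_min"])  # 2.5
```

The point of the thoughtful mechanics Plass calls for is that a log like this only carries meaning if the game elicited the behaviors on purpose.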
Starting with the definition of mechanics in games in general: methods that are invoked
by agents--the game player or the agents in the game--for interacting with the game world,
and that are constrained by the game rules. In other words, the pattern that you show
when you play a game, the essential game play, which is either a single action or a group of
actions that you play. When you look at games like Implode! or Osmos
or Angry Birds, there are very specific mechanics that we typically use to describe a game--
you know, "this is a first-person shooter" refers to the mechanics, right? None of these
are particularly learning games, but all of them have some relevance for learning, and
I'm going to talk about that in just a moment. So, game mechanics for learning then need
to not just define the play but also facilitate learning. That's a completely
different challenge: the mechanic may provide details on learning process and outcome, but
not necessarily. It may reveal insights into learner variables, but only if it's designed
to do that. And so we think that game mechanics for learning need to meet some very different
functions and definitions: they need to engage the player in meaningful learning activities
rather than just play activities, and elicit behaviors that we can observe so we actually have
meaningful log data--and be fun and engaging. In a way
then when you think of game mechanics, we really need to think about learning mechanics
that help the learner learn, and assessment mechanics that help the learner assess or
be assessed. And they need to be translated into game mechanics. That's something
that came out of our research, and it has helped us communicate amongst our team: when
learning scientists like me talk to psychologists or game designers and I use "game mechanics,"
people think about different things than when I say "learning mechanics" or "assessment mechanics."
And so we started developing that vocabulary, and it actually mushroomed into a lot of other things.
And learning mechanics are building blocks of learning interactivity not play interactivity
and have essential learning activity that they define by doing that. And we--for instance
if you take one game mechanic--one learning mechanic where you say, "I want to apply rules
to solve problems." You saw that a moment going there--geometry game that we used, that's
a game for quadrilateral and other topics in 6th,7th and 8th grade. We want to, for
instance say, "Well, in this game the learners selects different rules and shows where and
how they apply." So we have a problem here--some more abstract problem--and I can choose the
rule that is represented by an icon--iconic representation and icon of some sort or another,
of complimentary angle, supplementary angles, opposite angle rule, etc. Number of the angles
inside of the triangle, etc. So, rather than asking you for the arithmetic answer, for
the numeric answer, I'm asking you for which rule applies? So, that might be a learning
mechanic that I define based on my understanding as a learning scientist of how people should
learn. Now, I drag that rule over, it might even show you the answer, and that's my response. I
could translate that into a game mechanic, and that game mechanic
might be an Angry Birds mechanic where I take the rule and fling it over to the
response, right? But is that the right way to do that? And so what we asked
in our research is: if I fling my bird, is that Angry Birds mechanic
a good mechanic to use here? And the answer is, it's not. It's a really fun mechanic
for games, but for this particular case it's not a good mechanic for learning and
for assessment. Why is that? It's because, and I'm going to talk about that in a moment, I
introduce additional skills in using the mechanic that might prevent me from doing
something that I know I want to do; I just can't get that darn bird to that particular
part of the structure where I want to shoot it. Implode might be another mechanic;
I don't know if anybody plays that, it's one of my favorite games on an unnamed touch
device that I carry with me on occasion. You have a structure and you have bombs, and
you place the bombs in the structure, then you hit the button on the
right that says "implode," and it shows you what would've happened if you had actually
placed those bombs in that structure; you need to get the structure below the dotted line.
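The pattern being described, one learning mechanic admitting several game-mechanic translations, some of which add task-irrelevant skill demands, can be sketched as a lookup table. The entries and function are illustrative assumptions, not an artifact from the talk:

```python
# Candidate game-mechanic translations for each learning mechanic, flagged by
# whether the translation adds skills irrelevant to the learning task.
CANDIDATES = {
    "apply rules to solve problems": [
        # (game mechanic, adds extraneous skill?)
        ("drag rule icon onto the figure", False),  # Implode-style placement
        ("fling rule icon at the figure", True),    # Angry Birds-style: adds aiming demand
    ],
    "arrange concepts to solve problems": [
        ("freely drag items in space", False),      # Gravity-style
        ("route items under time pressure", True),  # Flight Control-style: adds time pressure
    ],
}

def good_translations(learning_mechanic: str) -> list[str]:
    """Keep only translations that do not add task-irrelevant skill demands."""
    return [gm for gm, extraneous in CANDIDATES.get(learning_mechanic, []) if not extraneous]
```

For example, `good_translations("apply rules to solve problems")` keeps only the drag-and-drop translation, which is exactly the judgment the speaker makes about the fling mechanic.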
So, yes, in Implode I just drag an icon from the bottom to each location, right? I don't have to fling
it and get it right or not; I just drag it, and it's just as much fun. But I don't
have to add another skill of getting the angle right and getting the
power right to fling it to the right location. So both follow the same learning mechanic,
translated into a different game mechanic. Here is another learning mechanic: arrange
concepts to solve problems. This is the game Gravity on the same unnamed touch device, where
you can use different items that can be arranged in time or space to solve a problem. Here
the problem is to get that ball, or something, to hit that red button that
then ends the game, the one that you see in the lower part of the screen. So "arrange
concepts to solve problems" could be translated this way, where I have a free way of dragging
those balls around. But I can also do that in Flight Control, where I'm also arranging things
in time and space; in this case, the airplanes need to be landed
on particular landing strips. So again, one learning mechanic, arranging
things in time and space, translated into different kinds of game mechanics. Tubes, or Plumber,
the same thing: I place the tubes in a way that they construct a system that doesn't
leak. Again, I can drag them to a particular place, but then they're locked in place, so
I've constrained the flexibility to change things around. I'll give you another example of a
learning mechanic: select sets with certain properties to solve problems, where in this
case the items need to belong to each other in time or space. Bejeweled is an example
of that, where I need to select a certain set of items and
bring them together to eliminate them. Here's another one,
Osmos, where I carry out the joining of objects that I think belong together in
time or space in a very different way. So my whole point is that the same learning mechanic
can be translated into different game mechanics, and we need criteria for those learning
mechanics, and for their translation into game mechanics, that will actually help us build
games that collect the data we want to collect, or in this particular case,
facilitate the learning that we want to facilitate. The learning mechanics themselves
need to be grounded in the learning sciences and what we know about learning in general;
there's a lot of research on that, it's just that those researchers typically don't know
how to design games. The mechanics need to describe meaningful interaction with specific
subject matter and need to be based on a theoretical model of interactivity, which is a paper I wrote
a while ago with two colleagues, looking at interactivity as something that
happens on a behavioral level, a cognitive level, and an emotional level; we need
to understand all three in order to usefully design those mechanics. And they
need to provide different but equally useful or appropriate solutions to problems, so you
actually have a choice. So what we then started doing is building a library of learning mechanics
where you see those different items that I already mentioned, with the examples. So we
have this learning mechanics library we're constructing, where we say, "Here's one
mechanic that learning designers find useful," and here are several ideas
for what the game mechanics could look like. We're starting to build that as a library,
and we're going to post it online on our website, g4li.org, to have it available for
other designers, as kind of a community project where people can say, "Oh, I have an
idea how to do that in a meaningful way," and then add new game mechanics to existing
learning mechanics or come up with new learning mechanics to begin
with. The point, though, is that the approach many people are taking, which is to say,
"Well, there are so many game mechanics, why invent new ones?" Starts from the wrong perspective.
We need to start from understanding how we learn and then pick game mechanics rather
than saying, "Here is a game mechanic, now can I learn with that?" Right? An that's the--that's
the idea. And there's some requirements for selecting game mechanics based on learning
mechanics and I don't if you have played this game, this is Dimension M, the only first
person shooter algebra game that has made it into the mainstream. It's a tabular digital
game and to be applauded for, for actually trying that. But, this a good example for
cases where game mechanics introduce excessive amounts of extraneous cognitive load.
Extraneous cognitive load arises when you have to process unnecessary
information, which might be related to the narrative, or to resource management if
it's excessive. Yes, it might make the game fun, but if I have to manage so many resources,
I have to think about so many other things; in this case, collecting certain packets of data
while constantly running up against obstacles that prevent
you from doing that. That might not be useful. Or in this case, solving equations
or transforming equations. If the game mechanic reduces germane load, which is
the investment of mental effort, too much, then I don't even have to think
about what I'm doing anymore; the game does it all for me. In this case, if I move the
b from the right side to the left side, the y kind of moves aside and I can
drop it right in, so I don't have to think about whether that's the right way to make that
transformation; I don't have to invest any thought into it. So that would be another
requirement: you actually still need to think. The game mechanic shouldn't
do that for you. Or with mechanics like Angry Birds, you add fine motor skills, or content
knowledge or skills, to what otherwise
would be a learning task. So this is all to say that the idea of starting to think
about learning mechanics rather than just game mechanics really helped us design those
games, and I think it's a very interesting way forward. Interestingly, this
also applies to assessment mechanics, and that's something that relates to what
many of you are doing. Now we're talking about building blocks of diagnostic
activity: how can I build something that can diagnose a particular variable
of interest during game play? Coming back to my example, the idea that I respond
to my problem with a number versus with a conceptual rule is diagnostically
fundamentally different. If I get the number wrong, if I say that first angle
is 25 or 35 degrees, I don't know why I got it wrong: it could be that I didn't
know the rule, or that I didn't know the arithmetic for coming up with
that answer. So diagnostically it's not helpful unless I have other ways of getting
at that answer, and that's why this is necessary. When we use rules to answer the question,
we have much more insight into what you were actually processing cognitively before you gave
that response. We built all of this work on the framework by Mislevy called
Evidence-Centered Design, which is an approach that says we need to think in terms of modeling
all of that. We need a competency model: what competencies
am I interested in, and how do they break down into their elements,
math-related, science-related, and others? Then an evidence model: what kind of evidence
would I actually accept as support for those competencies; what behaviors reveal those
constructs? So if I say, "I want somebody to be able to apply rules to solve for angles
in triangles or quadrilaterals," what would I actually accept as an expression of that?
And then the task model: what kinds of actions should elicit those behaviors? Having
those three steps really clarifies how we need to think about this: modeling the domain
itself and then going from evidence to tasks. So criteria for assessment mechanics
are that they need to be based on this evidence model, so we need to understand what we want
to see as evidence, and that they describe aspects of the task model: what tasks are actually required
to be able to use those mechanics for assessment purposes? And then there are the test-theoretical concerns.
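The three Evidence-Centered Design models just outlined, competency, evidence, and task, can be sketched as linked records. This is an informal illustration, not Mislevy's formal ECD notation; all names are assumptions:

```python
from dataclasses import dataclass

@dataclass
class CompetencyModel:
    """What we want to measure and how it breaks down."""
    competency: str          # e.g. "apply angle rules" (illustrative)
    sub_elements: list[str]  # finer-grained elements of the competency

@dataclass
class EvidenceModel:
    """What observable behaviors we accept as evidence of a competency."""
    competency: str
    accepted_behaviors: list[str]

@dataclass
class TaskModel:
    """What in-game actions are designed to elicit those behaviors."""
    evidence: EvidenceModel
    eliciting_actions: list[str]
```

The chain runs in the direction the speaker emphasizes: model the domain first, then decide what counts as evidence, then design the tasks that elicit it, rather than starting from a task and hoping it measures something.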
Test theory prescribes that no two test items should depend on one another, but
that's not what games do; games typically rely heavily on what you did previously. So
if I make something like this an assessment of learning, and you messed something up in
the beginning, maybe because you were just playing around, that shouldn't affect your performance
on the next task; but in many games it does, and that would not necessarily be a good
assessment mechanic unless I find a way around that. Those are test-theoretical
concerns, which is also why we separate learning mechanics from assessment
mechanics: for learning mechanics it would be fine to build on what you did before, but
for assessment mechanics that's not always the case. So it's very helpful to think about
that in separate terms. We need to create repeated exposures to the same problems, so
we can have multiple observations of the behavior of interest. And then we need to
somehow help the learner make the steps of learning explicit, rather than "just give me
the answer," because I want to be able to diagnose how you got to the answer. And it may or may
not be obvious to the learner that they are being assessed: if it's not obvious, a colleague
of mine [INDISTINCT] talks of stealth assessment, and if it's more obvious, which we typically
are friends of, we just call it embedded assessment; but those are just different flavors of the same
idea. So again, you can build a library saying, "Here are different ways in which
I want to assess." That translates into different game mechanics. So again, starting now from
a test theoretical approach, I can develop assessment mechanics that translate into game
mechanics and then I have this way of working with game designers who know how to make this
fun and engagement. And say, "What are some requirements then to do that?" We need to
reduce unnecessary processing, which we call extraneous cognitive load. We need to make
sure that a certain level of mental effort remains involved in the processing, that it's
not reduced too much. There should be no fine motor skills involved that have no
relevance for the task; in other words, I shouldn't need a certain proficiency with
my mouse, to very quickly move around the screen, to do things. And the content knowledge or skills
shouldn't be added to by irrelevant things: if I have a math game and all of a sudden
I put you in a situation where you also have
to know physics to do it right, then that's not helpful, because it might be the other
content knowledge I introduced, which wasn't the target, that explains a low performance.
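Taken together, those requirements amount to a checklist a designer could run a candidate mechanic through. The following sketch is illustrative; the field names and the coarse high/none levels are invented for the example:

```python
def suitable_for_assessment(mechanic: dict) -> list[str]:
    """Return the reasons a candidate game mechanic fails the requirements
    discussed above; an empty list means it passes the checklist."""
    problems = []
    if mechanic.get("extraneous_load") == "high":
        problems.append("adds unnecessary processing (narrative, resource management)")
    if mechanic.get("germane_load") == "none":
        problems.append("removes all mental effort: the mechanic answers for the learner")
    if mechanic.get("fine_motor_demand"):
        problems.append("requires task-irrelevant fine motor skill")
    if mechanic.get("extra_content_knowledge"):
        problems.append("requires content knowledge outside the target domain")
    return problems
```

For instance, a fling mechanic described as `{"germane_load": "some", "fine_motor_demand": True}` would fail on exactly one count, the fine-motor demand, matching the Angry Birds discussion earlier.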
So I need to add a control for that, or eliminate it. And then there's the emotional response
that those kinds of mechanics can produce. "Bird anger" is a reference to a talk I gave earlier, where
some people said, "Well, we have this game that's a little bit like Prince of Persia.
Every time the character dies, there's a bird that comes and picks you up and puts you
back." And people hated the bird. They were so angry at the bird, which is interesting
because that's angry birds, too. They were so angry at that bird that there was an
emotional response that completely ruined the diagnostic ability of the game.
So we need to understand how emotions are affected by this kind of environment as well.
So in summary, there are a lot of interesting things to be said about separating learning
mechanics and assessment mechanics and then turning those into game mechanics. That's
part of the work we're doing and where we're heading with it, and we have
workshops every month; if you're ever in New York City and want to be part of that,
just drop me a line. We invite outside game designers and other people to those
workshops to think about those mechanics, and we have different themes every
month: themes related to movement-based and speech-recognition-based
systems that are coming out now, touch-based themes, etcetera. So there are other challenges
though, and other things we're doing. Biometrics: I want to just briefly touch on them because
I know we're running short on time and want to leave some time for questions. We're looking
into supplementing or complementing the assessment we get through
user behavior and system events in the log files with some biometrics that triangulate
the variables we're interested in. So if I have a posture or movement sensor
that sees how much I move, I might, with the first or second
derivative, get at changes in my movements or behaviors. And so we're looking
at the idea of how posture can predict that. [INDISTINCT], who came out of [INDISTINCT],
has done some work on that with my team, so we're working with him on some
of those methods. The idea is to have biometrics that help us explain what we're seeing in
the log files, synchronized biometrics that we use to look for patterns in the log files.
We use eye tracking, where we find very interesting results that we typically can't see in log
files, because we don't know where users are looking; we just know where they're clicking, or what
the results of their actions are. So the in-between space that the log file can't capture, eye
tracking can capture, if it's used for the right purpose, which is always a tricky thing,
but if it's used for the right purpose it's a useful thing. Here we have the simulation
that you saw earlier. You see the actual simulation on the left and the
chart on the right, and we actually find that frequent transitions back and forth between
the two help the learner process the information and improve comprehension and transfer.
So if we know from eye tracking that frequent transitions back and forth
predict learning, then we can use some measures to actually support that in the design and
see if it helps learners, or if it was just a byproduct. Other research
methods we're using, other measures, are EMG to look at emotions, galvanic skin response
and EKG to look at engagement and emotions, and some EEG measures as well. So there's
a lot of branching out and getting those biometrics synchronized with the
log files, which of course raises questions of log file analysis. So what we're working
on is what I currently call a "data crypt," because I'm a Cryptonomicon
fan: a combination of an open log file standard for games research, defining
how we want to write our logs; tags that refer to, for instance, the Common Core Standards,
which for the first time give us something like a national curriculum
of what standards in the various disciplines we should all teach in the schools, so
we can tag specific actions and events in games to those standards
and skills; and then actions, game events, and biometric data, put into
a log file with specific analyses run on it. Put all that in a place where I don't
have to worry about where it's stored and how I have access to it, because I'm really more interested
in the learning sciences, so that needs to be taken care of. And then find analysis tools
for visualization and data mining on that. So that's work we're doing, and work
we're actually looking for partners to do with us. There are a number of issues on the granularity
of that which I won't go into, because there's one more slide I thought was useful
to talk about. And that is App Inventor. We just talked about the idea that, if it were instrumented,
we could get a lot of interesting insights into the process of putting those apps together:
what kinds of decisions are people making? If it were instrumented and the data were
collected, we could look at App Inventor essentially as an entry-level programming
language and then ask what a "super App Inventor" might look like
if we added a number of things to it that are currently not there: variable
scoping, data typing, object classes, instancing, etcetera. And we could make the code
something the learner could actually edit. So you open a window, you get the code, and you do what
Scratch and other systems don't do, which is to say, "Let me
first build it, understand how it works on a conceptual level, and now peel
away the layer that actually lets me look at the code," and then make the seamless
transition into actual coding. We could use game-like features and apply our research.
This is something that especially Ken [INDISTINCT], the director of the institute, who is a computer
scientist, is very interested in, from how to design that to how it could possibly
teach programming. I oversee the educational assessment side of things, so we're very interested
in the data mining and data analysis issues and what they could show us. So that's what
we've got prepared; we're happy to answer questions. I just want to point out that even
though it's Bruce and I who are here, there are a lot of collaborators who have contributed
to all of this over the years, and here are their names. If you have questions, I'll be
happy to. Yes? >> [INDISTINCT] the use of education did work
or not? Can you elaborate on why this link? >> PLASS: So, it's not our preferred approach
to say, "Here's a game. Let's see if we can add educational content to it," because games--many
games are designed for very different purposes, for entertainment purposes, for, you know,
things other than learning of subject matter. Almost any game, probably any game teaches
you something, at least to learn the game. But often it's kind of this reverse engineering
that causes more problems than it solves. However, having said that, it is very likely
that there are games to which you can do that. Yes.
>> [INDISTINCT] possible alternatives. There are lots of games out there; which of these
can we actually use for education and have it work, with pre/post tests, and then say,
"What do these have in common? What are the patterns for [INDISTINCT]?" It's often easier
if you [INDISTINCT] examples and go the other way and try to meet in the middle.
>> PLASS: Well, that is an interesting approach, and if you have a lot of money, that probably
would be one approach to take. We found that being inspired by those games, learning from
how they've done it, and then building our own games that are much more based on
what we know, first of all, about how learning takes place and, second, about what content we
want to add, and then designing game mechanics that directly correspond to
the educational goals, certainly was the approach that we chose and
with which we're pretty happy. I could see doing the other approach, but we're kind of
a very theory-based bunch: we like the idea of starting with a theoretical approach,
deriving our own steps from that, and then looking empirically at whether that
holds. You're doing more of the casting-a-wide-net approach: "I have a lot of ideas and examples." You
can do that if you're able to put those online and get the large data sets that you might
get. I can completely see why somebody here would ask that question.
>> I find that actually achievable.
>> PLASS: Yes. If you use some of that type of data collection, you could do that. We're
currently asking questions that require us to know a lot about learners, things they
would never give us online, so you have to strike that balance. But there are people
who are actually using approaches like that with large data sets, and it's obviously a very
valid approach. Ideally you would use both, and then figure out different questions
based on the two different approaches.
>> So this sort of [INDISTINCT] from that. I was a little confused when you kept saying,
"Reduce the cognitive load, reduce the cognitive load." It seems like there are a lot of games,
say World of Warcraft, where students, kids in particular, are very good at realizing
when they're being taught, and then certain kids will stop learning, right? Like, "I don't
want to be taught, I'm playing." It seems that in a game like World of Warcraft, [INDISTINCT]
literacy goes up, mathematical skills go up, their ability to
parse information on the internet goes up. But you kept saying, "Reduce cognitive load,
reduce cognitive load." Is that for you, so you can research and assess better? Or is
that for the students? Because I'm not sure I agree that the students are better
off by reducing their cognitive load.
>> PLASS: All right. A very good point, too,
and I apologize for just throwing this in. There are three types of load, and I always
had "cognitive load" preceded by a type. There is intrinsic load, the difficulty of the
material; extraneous load, the excessive or unnecessary processing;
and germane load, the amount of mental effort you invest. What I was typically talking about
is reducing the extraneous load, the unnecessary processing. That is layers
you add that don't really help you solve the problem you're solving, but in many
cases have other functions. And that is the type of load where I would say looking at
whether to reduce it is useful to entertain, especially, and my point was,
when you do that for assessment mechanics, right? If you want to assess something but
you add a narrative on top of it and so on, then it doesn't become a good
assessment mechanic. But I completely agree with you that there should not be an overall
goal to reduce cognitive load, because cognitive load is the investment of thinking and
mental effort, and that's the only thing that brings about learning. So, yes, I don't think our
positions are that far apart.
>> So you're inherently against extraneous
load. So I was trying to sort of pick at the idea that extraneous load is what hides
the learning from those students that otherwise would choose not to learn, right? They think,
"I'm playing World of Warcraft." They don't think, "I'm learning spreadsheets and I'm
learning how to use multiplication to make my damage higher." Right?
>> PLASS: Right. This is a question where you really would have to go into some more
detail to come to a sufficient answer, but the short answer perhaps might be this. We
are in the luxurious position of designing our own games, not taking existing games like
World of Warcraft, which works for many reasons that, you know, Constance and others
are looking into. When you are in the luxurious position of designing your own game, where you
can say, "Well, then let's try to reduce that unnecessary processing and focus on the essential
processing," then that's a good approach to take. When you have something that works and
you want to study it, then I wouldn't even make that argument. So we take very different
positions from Constance and people like her, who look at existing games and how you can
benefit from those, and we say, "Yes, that's one type of research." The other type
of research is to help the designers who want to build new games to think about
the decisions that they're making. So if you are in the position of designing a
new game, then you can think about that, and you should. And if your answer is, "No, this
is still very important to me," then at least you've thought about it, right? And you have
an answer for why you did that. Whereas World of Warcraft was never designed to teach you
something, so they made a lot of decisions that a learning game designer probably wouldn't
have made. Now, that's probably also the reason that we don't have any really good learning
game that people play in the same numbers as World of Warcraft but, you know.
>> Thank you.
>> This is an embarrassing question, and I'm not claiming to be representative either.
But how can you compete with a book? Boyle's law is less than 60 seconds of reading;
it's not hard to understand once you have the intuition that it's about momentum. I
can't believe that I could learn to play the game that's supposed to teach me the concept
in less time than I can read it in a book and understand it. I'm trying not to be hostile,
but I've been subjected to many learning innovations in my life and none of them were an
improvement over reading a book.
>> PLASS: Do you want to talk? There are many ways of responding to that. I'll bring out
the big guns in [INDISTINCT] to respond first.
>> HOMER: Two things. One, you are not our target audience. I mean this as a
compliment: you're an outlier. Students like you will learn no matter what gets thrown
at them. The students that we are typically targeting had the books and they were
failing. So for example, with the games we talked about for understanding graphs,
these students were taught graphs in the traditional way, with the book. They had no
clue, but after using the simulations, they actually gained some knowledge that wasn't
there before. So books are great tools. Two things: one, this is
targeting students that maybe books were not working so well for. Secondly, games
for learning are never meant to be a replacement for teachers or for classrooms; they're meant
to be a supplementary tool that can be used in the classroom. The same with our simulations:
we didn't say, "Okay, we get rid of the teachers and put the simulations in." We worked with
the teachers to develop the curriculum that the simulations were embedded into. So books
are great, but they're not reaching all of the learners, and this is another tool that
may reach learners that books were failing.
>> Yes, because what I fear when you start
talking about using pre-evaluation and so on is that, rather than taking the role of the Lego
bricks that you can go and play with and see if they improve your understanding
of something, the games sort of become the mandatory thing that everybody has to do, and become
a brake on the entire experience for the people who aren't quick. You
were talking about the frustration from firing the angry birds: having
to learn the game in addition to learning the material, when the material is not hard
to learn, is going to be a great source of frustration for some people.
>> HOMER: Yes. In the same way that you say that having to learn the game... I'm sorry,
there's some [INDISTINCT], so [INDISTINCT] got the question, but said, "Having to play
the game when you've already got the concept may be frustrating." In the same way, you know,
as a student I was good at math: having to sit through the lecture on how to
do derivatives [INDISTINCT], how do you do derivatives, after I got it after the
first lecture? Then the next three lectures on it. Where the heck [INDISTINCT]? Yes, there's
always the trade-off, but the advantage with games, if they're well designed,
is that they pick up on that and you suddenly level up. So it's always at that
sweet spot of where the learning is taking place, which is not always the case
in some traditional [INDISTINCT] educational formats. That's our goal.
>> Thank you.
>> Thank you very much.