Neukom Institute presents Ge Wang: Music, Computing, People

Uploaded by Dartmouth on 24.10.2012

>> Hi. I'm Dan Rockmore, Chair of the Department of Mathematics here at Dartmouth
and also Director of the William H. Neukom Institute for Computational Science.
And on behalf of the College and the Neukom Institute, I'd like to welcome you
to this year's Fall Donoho Colloquium: Music, Computing, People, an Emerging Dimension
for Engaging One Another Creatively, Expressively, and Socially,
to be delivered by Professor Ge Wang of Stanford University.
The Donoho Colloquium is an ongoing series of public lectures aimed at increasing awareness
of the many important and sometimes surprising places
in which computational ideas are relevant.
These lectures are made possible by a generous gift from David, Mickey and Dan Donoho in honor
of Dan's graduation as a member of the Class of 2006.
The Donoho Colloquium is a central piece of the larger mission of the Neukom Institute,
whose aim is to support and integrate computational thinking
and computational ideas throughout Dartmouth.
In particular, as we find ourselves in the early stages of the Year of the Arts here
at Dartmouth, it seems especially appropriate to use the Donoho Colloquium to showcase some
of the ways in which computation has been used to transform the process
and products of artistic creation.
This is especially true in the acoustic arts,
where a long line of computational achievements have combined with the creative spirit
of modern composers to give us today's amazing world
of digital music, instruments, and software.
These range from the early mathematical discoveries of Pythagoras that link harmony
with rational numbers to technologies like analog to digital converters
and algorithmic advances such as the Fast Fourier Transform.
As computer music pioneer Max Mathews presciently said in the early 1960s:
"We have made sound and music directly from numbers.
The musical universe is now circumscribed only by man's perceptions and creativity."
Today's Donoho lecturer, Professor Ge Wang, widely known as a co-founder
and the creative force behind the mobile music-making company Smule,
is among the most innovative of those using computational ideas and technologies
to push the boundaries of our musical universe.
Ge's body of work includes the authorship of hugely popular music-making mobile apps,
such as Ocarina, which generated over half a million downloads soon
after its release in 2008.
For those of you yet to experience Ocarina, I'd like to give you a brief sample.
[ Music ]
>> So to put this kind of simple-to-use music-making environment in context,
it's worth comparing it with the first piece of synthesized computer music, Silver Scale,
created by Max Mathews in 1957 at Bell Labs on what was a room-sized IBM 704.
So there it is, and here for comparison is Silver Scale.
>> I think it's fair to say we've come a long way.
The accessible and beautiful design of Ocarina and others
of Ge's creations have made it possible for anyone to join the new generation
of digital troubadours making music on their modern hand-held devices, either alone
or together, even when separated by thousands of miles.
With innovations like these, Ge and other acoustic pioneers continue
to reveal the transformative possibilities, artistic and social, inherent in the hybridization
of the arts with modern computing.
This is but the most recent chapter in an ongoing story
in which Dartmouth has played a significant part on many occasions.
Notably, in the late 1970s, when Dartmouth music professor and founding faculty member
of Dartmouth's Bregman Studio, Jon Appleton, teamed with Thayer professor, Sydney Alonso
and Thayer student Cameron Jones, to build the first commercial portable digital synthesizer,
the Synclavier, which was then produced and sold by the New England Digital Corporation,
just across the river in Norwich, Vermont.
Enabled by a gift from Gerald Bregman, class of '54,
the Bregman Studio was among the earliest computer music centers,
and it continues to be well-known for its creative environment and innovative faculty,
which in 2006 included today's speaker.
Professor Wang received his doctorate in computer science from Princeton University
for his development of the well-known audio programming language, ChucK.
He has taught at Princeton and Dartmouth and since 2007 has been an assistant professor
at CCRMA, the Center for Computer Research in Music and Acoustics at Stanford University,
where he is also the co-founder and director of the Stanford Mobile Phone Orchestra as well
as the founding director of the Stanford Laptop Orchestra.
In addition, Ge continues his work at Smule, where he is Chief Creative Officer and CTO.
His work has already been recognized by numerous honors, including being named as one
of Creativity Magazine's Creativity 50 in 2009 and 2010.
He was also a winner of an App Nation Pioneer Award in 2010.
We look forward to his sharing his thoughts and ideas with us here today.
Please join me in welcoming our Fall 2012 Donoho Colloquium speaker, Professor Ge Wang.
[ Music ]
>> All right.
I guess first of all, thank you, Dan, and thanks to the Neukom Institute for having me.
It's a great honor to be here and it's wonderful to be back and hang
out with some really good friends I made when I taught here in 2006
and also to make some new friends.
So overall it's awesome to be here, and maybe it's good to kind of start at the beginning,
probably even earlier than you may actually want,
and that's going back kind of far.
I grew up in Beijing with my grandparents.
I was part of a generation of kids that, for one reason or another,
was taken care of and raised at an early age
by grandparents, and these are my two grandparents.
And this is for me, I guess, significant, because as they say, Chinese grandparents tend
to spoil kids more than parents do; maybe that's universal across cultures.
But I think both my grandparents and parents were really supportive
of well, of my interest in general.
They didn't really have anything particular in mind.
They just said -- they just kind of --
they only pushed me to try to figure out what is it that I like doing.
And one of the attempts was getting me an accordion at age 7,
which actually was my first instrument.
And I played that for you know, a few years,
and there was actually this hot-shot accordion teacher that rolled into town back when I was 7
and set up shop and said, parents and grandparents,
your children need to learn the accordion.
And so I remember sitting in a room full of about thirty 5-to-9-year-olds,
all with an accordion, playing in unison.
And accordion's interesting.
It's one of those instruments that, one, is not necessarily easy to play softly.
In fact, dynamics was not one of the attributes
of the particular accordion I had control over.
And two, it doesn't necessarily improve when you increase the number
of people playing in a room together in unison.
But my grandfather helped me carry this in a cart every week
to the lesson, and I'll never forget that.
Fast forwarding a bit, at age of 9 I came to the U.S., lived with my parents in Atlanta,
where my dad was getting his Ph.D. in Operations Research and later we moved
to the Midwest to St. Joseph, Missouri.
And at age 13 something in retrospect really strange happened.
My parents for no reason that I could recall or discern,
decided to get me an electric guitar for my 13th birthday.
I did not ask for this, but they said, we're getting you an electric guitar.
And in retrospect, this was a really bold move by my parents.
It's like, why arm your kids with the very instrument of rebellion?
I guess they could have gotten me a drum set, which would have probably been worse for them.
But in another way, I guess it's kind of smart, by sanctioning rebellion.
By giving you the very instrument, you've kind of squashed it,
because it's just not that fun anymore.
So I just started actually learning the guitar and had a great time,
and I still play the guitar to this day.
And actually, it was great starting out on the guitar.
It's not my first instrument, but it's the first instrument
that I really got into, and it's around that age where
so many things are so easy, so many things make an impression.
And I think when I got the guitar, I think I just wanted to rock,
because the first song I asked my guitar teacher to teach me was Metallica's Enter Sandman.
And I played a lot of heavy metal for some years.
We didn't have a lot of money, so I got this Ibanez SoundTank Distortion Pedal,
did not have an amplifier because well, didn't have a lot of money.
Dad was nice enough to convert our stereo into our amplifier by adding a quarter-inch jack
to the back, and so I could plug my guitar into the distortion pedal and the pedal
into the stereo, and that's how I rocked for a few years
until I saved enough money to get an amp.
But that was enough to rock.
And I look back and I really want to thank my parents and grandparents for, I guess,
really asking me to explore my interests,
because to this day I really don't know what exactly it is that I'm doing,
except let me say I really like what I'm doing.
I just don't know exactly what it is, and I know it has to do with music,
and that's pretty much what I do know.
And this is where I currently work.
And I think between the last picture and this one, a lot of things happened.
I went to undergrad at Duke, studied computer science.
Also, I took at the time the only computer music course that Duke had to offer,
which was offered by the wonderful Scott Lindroth.
And it was offered once every other year, so thinking about that, I feel like places
like Dartmouth and Stanford kind of have an embarrassment of riches,
where there's this wonderful number of courses, faculty, resources,
and fellow peers with which you can explore this wonderful world of computer music.
But just like I guess having a stereo as your guitar amp, the one course is also kind
of enough to rock out in computer music.
In that course, I decided that I wanted to go to grad school
in computer -- something to do with music.
So went to Princeton, and actually one of the first things I did getting to Princeton
as a grad student was to come to Dartmouth,
and this was before I came here to teach the semester.
This was for a symposium on computer music programming languages.
And it's kind of interesting that five years later I'd be writing a dissertation
on a computer music programming language.
That may or may not have had -- well, it had a lot to do with it.
And there I met Jon.
Actually, I slept in your guest bedroom, and that was very nice of you to host all of us,
and met Max Mathews and Gareth Loy and that was the only time I met Barry Vercoe.
That was full of memories, and it got me started on thinking about computers,
but also really what you can do with it.
And at the end of the day, technology is really central to everything we do,
but I think it's kind of what we do with it that at the end of the day really matters.
So I guess one of the things I really like doing in undergrad was making things.
I've always liked doing that.
And one of the ways I've found to do that was through computation, through programming,
through this wonderful world of computing.
And I guess coming here to this symposium made me really feel that well, geez, there's --
it's not such a ridiculous thing to think about a computer music programming language.
In fact, there are a lot of really interesting, smart,
wonderful people that have come before me doing this.
So when I decided to make a new programming language, it seemed like,
oh, well, I'm going to go for it.
And I'll come back to ChucK in a bit.
And fast-forwarding now to CCRMA.
And oh, yeah, and I should mention that just right before coming to CCRMA in 2007,
that's when I did I think the most epic commute of my life,
which was for a whole semester I was teaching as a grad student at Princeton
and also teaching one of the seminars here, and I'll always thank Jon
and Larry for inviting me to do that.
But I taught at Princeton on Thursday and I taught one Monday
and Wednesday here at Hallgarten.
So every Sunday I would pack up and drive from Princeton, New Jersey, to here,
and I would get in around 3:00 a.m. and then I would like pass out.
I would wake up and I would usually freak out and think about, oh, my God,
do I really know what I'm going to talk about in the seminar?
And then after I teach on Wednesday, I would then --
actually I sometimes go into the Jewel of India which is right behind Hallgarten,
and then I drive on back to Princeton, and the next day I'd be teaching the laptop orchestra.
That was a super wonderful experience, for many reasons.
And one, I think actually in retrospect, teaching the class on digital signal processing,
that was I think when I learned the most about digital signal processing,
and I'd studied it before, but nothing like having to teach it for the first time.
And since then I'm actually still teaching it.
And this is where I'm doing it, I guess.
This is Stanford University's Center for Computer Research in Music and Acoustics.
This nexus is electrical engineering, signal processing, music cognition, computer science,
and a lot of other disciplines, all in the service of music and people.
And I think it's this intersection of technology, people, creativity, and especially music,
I think that's kind of the common thread, I guess, I've been finding in my life.
And I really don't know what I'm doing, to be honest,
but apparently the New York Times thinks this is what I'm trying to do,
which is apparently getting a lot of random people
to play instruments out in the street together.
And in a way, yeah, this is kind of cool.
That is kind of what I'm trying to do,
except maybe these traditional instruments perhaps are difficult, for one reason
or another, at least for people to get started, or to even have access to.
And perhaps I've been looking for ways to do it through other means,
namely computers and even mobile devices.
Well, I'm going to jump back in time once again.
This is kind of, at least until this talk stabilizes,
going to be a little bit of time travel.
So, "supposing, for instance, that the fundamental relations of pitched sounds in the science
of harmony and of musical composition were susceptible of such expression and adaptations,
the engine might compose elaborate and scientific pieces
of music of any degree of complexity or extent."
I think some of you will recognize this quote, and for those of you
who have not seen this before, you might care to guess as to when, and who, might have said this.
Well, maybe you guys have all seen this before, but: Ada Lovelace, 1843.
Ada, commonly thought of as perhaps the world's very first programmer, theorized a lot
about what computers were, and she worked with Charles Babbage on the Analytical Engine,
which in a sense had the constructs that really make computers what they are,
looping and decision, and even though the Analytical Engine was never built,
essentially she said, might we not make music with computers?
But it wasn't until 100 years later, as in Dan's opening,
that computers actually started making sound.
But once they started making sounds, it was kind of definitely the beginning of a new era,
where suddenly we felt like, ha, here is another technology for music.
But, perhaps unlike other technologies that come before it,
this technology is a very general purpose.
With it, we might make sounds that are yet unimagined.
We can craft the timbre of the sounds that we make.
We can also create fantastical automations with these computers,
and of course this is an IBM 360.
Compare this, and also the image in Dan's opening of the mainframe computer, with the kind
of computers we think of now, ones that we can put in our pockets, in our hands.
And this is actually the symbol for ChucK, the programming language that I worked
on while I was at Princeton, which ended up being my dissertation.
In a sense it was made possible by the advancement of computing,
because ChucK is, I think, probably the slowest,
least efficient computer music programming language ever made.
And it's flexible but it's not efficient.
Computers getting so fast, we're like ah, we're not going to worry about it.
Let's just focus on what this language does.
And so that's one of the things I worked on.
I'm going to give you a quick demo of ChucK here.
>> And whether you write code or not,
I hope this will give you a sense of what ChucK is like.
So what I'm going to do here -- I'm going to want to make that bigger.
I'm going to create a sine oscillator, call it foo.
I'm going to use this symbol, =>, which is the ChucK operator, and ChucK it to the dac,
which is an abstraction for the digital-to-analog converter, which is just a way of saying connect
that to the output so it can be heard.
This is a valid ChUCK program.
You can try to run it, and it's a correct program but it doesn't actually make any sound.
That's because ChucK is what we call a strongly-timed language, in which you have
to interact with time in order for sound to happen.
So what I'm going to do here is just to say 2::second, ChucK to now, which is ChucK's way
of saying, hey, let's wait around for 2 seconds and let things happen.
So if I do this, we hear that for 2 seconds, and you can set the frequency explicitly if you like,
so it's actually the same frequency.
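The little program he's describing would look roughly like this in ChucK (the variable name foo is from the demo; 220 Hz is an assumption, chosen because it is SinOsc's default, matching "the same frequency"):

```chuck
// connect a sine oscillator, foo, to the dac (the audio output)
SinOsc foo => dac;
// optionally set the frequency explicitly, in Hz
220 => foo.freq;
// "ChucK 2 seconds to now": advance time so the sound is actually heard
2::second => now;
```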
And what I'm doing now is going to -- I'm just going to copy and paste this
and double the frequency every time.
We're just going to go up in octaves and change the amount
of time we wait each time: two seconds, half a second, one second.
And if we were to play this, you should get, that.
So now we've created a sequence, and of course you can imagine creating pieces of music
within this very straight line fashion.
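A sketch of that sequence, doubling the frequency at each step (the exact durations are approximated from the demo, not transcribed from the screen):

```chuck
SinOsc foo => dac;
// each copy-paste doubles the frequency: up an octave each time
220 => foo.freq;  2::second => now;
440 => foo.freq;  1::second => now;
880 => foo.freq;  .5::second => now;
```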
But one of the things a computer's really good
in doing is repeating stuff tirelessly, precisely.
So I'm going to put this into a loop, make it a little smaller so you can see it all.
That's going to keep going, I can't really empirically prove that to you, but trust me --
and also you don't have to indent, but indenting's just a good habit, so I do it.
Let's go ahead and not hard code this guy, but let's change this into a random number generator
between -- and generate a number from the uniform distribution between 30 and 1,000.
Set that as the frequency of the sine wave.
And -- okay.
Now let's mess with its time.
Instead of half a second, let's do 200 -- let's do 100 milliseconds.
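Putting the randomization into a loop, roughly as described (Math.random2f is the uniform random draw in current versions of ChucK):

```chuck
SinOsc foo => dac;
// loop tirelessly, precisely: a new random frequency every 100 ms
while( true )
{
    // draw from the uniform distribution between 30 and 1000 Hz
    Math.random2f( 30, 1000 ) => foo.freq;
    // shrink this to 10::ms or 1::ms to approach the perceptual threshold
    100::ms => now;
}
```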
[ Computer music ]
Here we get to this point where, I think, that to me will always be the canonical computer music sound.
This is the sound that, growing up, we were taught from the sci-fi movies
that mainframe computers make when they're thinking really hard; it sounds like this.
[computer sounds] Okay, but let's keep going.
So that was changing the frequency randomly every 100 milliseconds.
If we were to change the time scale again
and do this randomization every 10 milliseconds, what does that sound like?
[ Computer music ]
This is the computer's version of going bla-bla-bla-bla-bla-bla-bla.
And then if we keep going, 1 millisecond.
[ noise ]
It sounds like this crunchy carpet of a sound.
And now there are a couple of interesting things here of note,
and one is that we actually crossed a very interesting threshold,
and that's the perceptual threshold.
Between 100 and 10 milliseconds, it's already blurring
at the boundaries of when these things were happening.
We stop hearing individual events and start hearing more of a singular, consistent carpet
of sound. And indeed, somewhere in there is the very important number of around 30 Hz or so,
and that's when we stop perceiving individual events and, beyond that,
start perceiving kind of a continuum.
And ChucK can actually abuse this even more, so we can say, well, every digital sample I'm going
to keep the phase where it is, but I'm going to randomize how fast --
basically how fast the phasor's turning,
or actually randomizing the frequency every digital sample, of which there are,
in this case, 44,100 every second.
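Taken to the extreme, the per-sample version is the same loop with a one-samp wait (in ChucK, changing a SinOsc's frequency does not reset its phase, which matches "keep the phase where it is"):

```chuck
SinOsc foo => dac;
while( true )
{
    // randomize the frequency on every single digital sample
    Math.random2f( 30, 1000 ) => foo.freq;
    // 1::samp is one sample: 44,100 of these per second at 44.1 kHz
    1::samp => now;
}
```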
And if we were to do this, you get kind of another signal.
There's that, which you can actually think of as having two components.
One, you hear this kind of hissy, noisy component.
The other is that whistle.
And that whistle turns out to be a frequency which is really the average
of the bounds of this uniform distribution we're randomly drawing from.
That's pretty interesting, even if it's just to hear what that's like.
And this is the kind of thing that ChucK allows you to do to try out,
is to actually be very precise about time and to really zoom in to time
and to control things at really any rate you want.
And if you want, you can even go sub-sample.
Or we can go in the other direction.
So this is one duration of a sample, a samp.
We saw there was a millisecond.
There was a second; you saw the minute.
You can do an hour.
You can do a day.
Now, this is a very slowly evolving sound.
Here's week.
We kind of stopped at week, because months are kind of tricky,
because not all months are the same length.
So that was a very practical concern in designing this aspect of the language.
Some people have suggested a fortnight, because that would be
a non-changing duration of time, but you would have to add that yourself.
And the idea is that you can really unify, kind of talking about time,
in working with time in kind of this one system.
And that's kind of one aspect of ChucK,
and it's why we call it kind of a strongly-timed language.
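The built-in duration units he's stepping through span that whole range in one system, something like:

```chuck
// ChucK durations, from a single sample up to a week
1::samp => now;     // one digital sample (1/44100 second at 44.1 kHz)
1::ms => now;       // one millisecond
1::second => now;
1::minute => now;
1::hour => now;
1::day => now;
1::week => now;     // the largest built-in unit; months vary in length
```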
I guess it's kind of a joke based off of the strongly typed languages,
but I feel like at least half the things I do are based off
of things I started as a joke or as a bad bet.
So that's one of them.
I'll show you another aspect of ChucK, and this is one where we're still going
to use the sine wave, even though you can do a lot of things other than sine waves.
Or, when I say that, if you have sine waves you can do everything, which is true in some sense,
but in this case we're just sticking with the one sine wave.
And what I'm going to do here -- I already have this program kind of constructed here.
We're basically going to connect it to the output, set the gain here.
We have an array of pitch classes that we're going to draw from.
Right now there's just one value in there.
We're going to draw randomly from this bag of numbers, and basically offset
that into a pitch register, and then we're going to do so every 200 milliseconds.
Now if we were to play this, it sounds like that.
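A rough reconstruction of that program (the particular pitch classes, register offset, and gain are assumptions for illustration, not necessarily what was on screen):

```chuck
// sine wave to the output, with a gain
SinOsc foo => dac;
.5 => foo.gain;
// an array of pitch classes to draw from (here, a pentatonic set)
[ 0, 2, 4, 7, 9 ] @=> int pitches[];
while( true )
{
    // draw randomly from the bag and offset into a register (60 = middle C)
    pitches[ Math.random2( 0, pitches.size() - 1 ) ] + 60 => int note;
    // convert the MIDI note number to a frequency in Hz
    Std.mtof( note ) => foo.freq;
    // do so every 200 milliseconds
    200::ms => now;
}
```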
What I'm going to do now is actually to change this program as it's running
and basically replace the existing version with a new version, and doing this as a way
to essentially experiment with stuff.
So here there's just one pitch, because we only have one element here,
but we've randomized the register here.
And every time you see that, I'm actually replacing the existing code
with this new version of the code.
Okay, so there's that; going to add a second to that, a 3rd,
a perfect 5th, make it a little faster.
And see if we add a little reverb to all this.
Shift it up a little bit more, add a 6th, a little faster, higher yet, add a 7th,
drop this guy, and there.
So this is kind of designed as one way you could work with ChucK, which is in a way,
very rapid prototyping, by giving you a tool with which you can either use that to sculpt
and zoom in into a particular sound, particular passage of music or a particular interaction
when you connect, for example, a controller to ChucK to make sounds.
So that's a quick demo of ChucK.
ChucK is open source, it's freely available,
and I always tell people it will crash equally well on all major operating systems.
So if you're interested, I definitely encourage you to give it a try.
So that's a little bit about ChucK.
Now, when I was at Princeton, I was also very fortunate to be involved
with a thing called the Laptop Orchestra.
My advisor, Perry Cook, and music professor Dan Trueman had been working
with musical performances and with different ways
to project the sound in computer music performance.
In 2005, they started the Princeton Laptop Orchestra, or PLOrk, and I was very fortunate
to be among them -- with Perry and Dan and Scott Smallwood,
we got to figure out what the heck a laptop orchestra actually might be,
or one way in which it could be, and also how to teach the thing as a class
and how to start writing pieces for it.
And this is the Princeton Laptop Orchestra.
Actually, that's a [inaudible] in the middle, flanked by So Percussion.
And it's people, it's laptops, it's these hemispherical speaker arrays that are local
to each performer and each computer, and it's a lot of those.
When I started at Stanford, I brought the idea, I guess, to Stanford,
and we started the Stanford Laptop Orchestra.
Of course, we were kind of lazy with the name, and just called it SLOrk.
So there's PLOrk at Princeton and there's SLOrk at Stanford,
and here is us doing one last SVN update before we go to perform in the Sculpture Garden,
where there's no wireless connection.
And we also have -- this is one aspect of this type of laptop orchestra,
which is this focus on individualized sound.
These are speaker arrays for the Stanford Laptop Orchestra.
We actually built these out of IKEA salad bowls, and the idea is really to have the sound come
from near the instrument, which is the person or the computer, the performer.
Just like if you were to play an instrument, a traditional acoustical instrument,
like say a violin or cello, the sound doesn't actually go through a mixing console
and out a PA system; it comes from the artifact itself.
And along with that is a certain opportunity to explore kind
of what it means to be sonically intimate.
And this is a concept that's been used for the longest time in things
like chamber music performance, where there's such an intimate sense
of the sound of the music of the performance.
And the Laptop Orchestra, and this configuration, is meant to explore that,
paired with kind of what's possible now with the computer.
I'm going to give you a quick rundown of how we built the speakers, because it was kind of fun.
And that was what happened Spring break of 2008, and a lot of people kind of went into this.
This is an 11-inch BLANDA MATT bowl.
That's what it's called; I got that at IKEA.
First step is you turn it upside down, you drill six holes in this.
In our case, this is one of six channels.
And we lovingly routed the bottom base plate,
which we carved out of these giant boards we got from Ace Hardware.
And then we got six speakers.
These are four-inch drivers.
They're meant as kind of car speaker drivers, as well as these fairly efficient T-amps.
These amplifiers are pretty low-powered,
but actually sound pretty good, and they're fairly inexpensive.
We of course have to make 20 of everything.
And this is the old Max Lab at CCRMA.
These are the brave people that were designing and prototyping
and eventually building the speaker array.
And to this, we add a laptop and an audio interface,
so you can get both pristine audio and multiple channels.
We sit on mats and pillows.
These actually were mail ordered from New Hampshire,
this place that sells Zabuton mats and pillows.
We were very happy with those.
This is a -- it's hard to see, but that's a breakfast tray
from IKEA in which we put laptops.
A lot of things come from IKEA in this ensemble.
They really should give us a sponsorship.
If IKEA happens to be here if you're watching this, hint, hint.
Here is power, which we daisy-chain.
A lot of our pieces are actually networked, so we have wireless routers, sometimes wired.
Power conditioning, make sure the power is nice and smooth.
And to it we connect sometimes various kind
of musical interfaces, as well as gaming interfaces.
So to get an idea of what a laptop orchestra looks and sounds like,
I'm just going to play a few excerpts.
It's kind of a commercial for a piece, Spring is an Empty Mind.
In this case everyone has -- actually, of course, there's a laptop and this hemispherical speaker.
We also have this other Gametrak controller, which is kind of hard to see,
but everybody's wearing the same glove
that's tethered to a base at the bottom, and the glove is basically tracking
the locations of their two hands at one time.
It's basically like a 3D position tracker.
It used to be this golfing game controller that completely failed
and tanked as a commercial product,
and I guess one man's failed commercial product
for gaming is sometimes a computer musician's paradise, and we got like 100 of these.
Here's something that uses the Wii-mote.
It's another -- it's by [inaudible].
This is called Monk-Wii-See Monk-Wii-Do.
It's a piece about chanting.
It's a Wii-mote piece and a
horrendously brilliant three-way pun: Monk-Wii-See, Monk-Wii-Do.
Here's a piece that I like for many reasons.
One is just that it has a very simple name.
It's a very unpretentious name.
It's just called "Barrel."
And this is created by Mike Rotondo and Nick Kruge.
Mike is actually standing on top of a steel drum.
And around the edges are actually attached eight of these Gametrak controllers.
And these players around him are actually playing this eight person audio part.
He's conducting.
Nick is using another Gametrak controller [inaudible].
The Gametrak has been really fun to work with because it offers a lot of flexibility.
Certainly it's not only for use of [inaudible].
This is a piece that Jieun Oh and myself did, called "Converge."
And what we did with this piece is actually collect hundreds of images,
sounds, locations, and text data through mobile devices and essentially put them
through a visual and sound blend during the
performance, spatialized to the orchestra, with performers controlling the graphics
and processing the sound in time.
These all sound [inaudible].
And it's a piece that kind of explores just memory and place and time,
and just something beautiful about the mundaneness of everyday life.
It's also telling you when these are happening relative to how [inaudible].
Move it forward a bit.
Gametrak again.
This is a little more self-explanatory.
Can you guys see these Gametraks up close?
>> That is as close as we are.
Here we're using a [inaudible], a string and a base, and by pulling on it [inaudible].
So these are already two very different instruments.
And of course, offered a great opportunity to try out an air drum.
Let's see...
This is Experimental Headbang Orchestra.
[ Inaudible voice over loud music ]
>> They shake their heads, or tilt their heads to the side.
They're actually controlling vibrato, or [inaudible], tethered together.
[ Music ]
>> And actually part of the orchestra is actually not present on stage.
They're located in [inaudible] locations.
And the sound of their instruments in their studios or homes or classrooms is actually coming
over here and being processed by the [inaudible].
So this is actually networked, real-time networked.
And another noteworthy piece is what eventually evolved into this piece called "Tweet Dreams,"
as in, Tweet Dreams Are Made of These.
In this case, we're using Twitter to actually sonify kind
of various Twitter hashtags related to the concert.
It's actually a chance for the audience to participate in real time.
This is an instrument called Intervals,
and it's actually using the accelerometer on the laptops, and the keyboard.
This is a pad of kind of harmonies, a little bit of [inaudible].
[ Music ]
Everything here is kind of this rotating sound,
the fact that it's actually happening around the speakers.
This is a piece called We're All [Inaudible]. And all the players are lizards
in this virtual world, and the way they interact
in the world is [inaudible], and organize musical structure.
This is a work by Robert Hamilton in Exploring.
This is a cultural performance.
This is a piece called Clair de Lune, [inaudible] in a group to explore Clair de Lune.
And this is kind of a live piece.
The conductor is actually typing instructions to the [inaudible].
These are spelled out incrementally, which sometimes isn't easy.
This is a very exaggerated finish.
So that's a sampling of the Laptop Orchestra.
And to date, between Princeton, Stanford,
and now many other laptop orchestras, well over probably 200
or 300 pieces have been created and performed.
And I think working through this medium of the laptop orchestra is really informative
for what came later,
like creating interfaces for something like a mobile phone.
This was an adventure in turning commodity computers into musically expressive things.
And having done that, I'd say, was a lot of fun and certainly informed how we actually think
about doing this sort of thing on mobile.
But before I go there, there's always kind of an interesting question,
which I don't think I'll try to answer so much as just put out there,
and it's what at the end of the day is an instrument?
You look at a laptop, you look at an instrument in the laptop orchestra.
When does it become an instrument?
People might look at a 1750 Stradivarius violin and say, that's an instrument.
Well, in my hands, it's not an instrument.
If it's in someone's hands who can play it, that's an instrument,
that's a gorgeous, beautiful instrument.
And the other side, you can look at a tin can and some sticks and say,
that's not really an instrument, but in the right hands
that can be used very expressively and very musically.
So at least to me it seemed like not everything is necessarily an instrument
or non-instrument out of the box.
It's kind of what you do with it, and maybe these guys,
these mobile phones aren't that different in that regard.
Out of the box, I think they're just what they are, a piece of hardware.
And so I think about them that way,
and that's the only way I can describe how Smule got started.
Smule was founded in 2008, and I like to tell people --
people sometimes say, I think I'll start a company.
Should I start a company?
What should I do?
I usually say, well, in my experience, no one in their right mind should ever start a company,
unless there's something that you've got to do, got to find out,
got to try, that you can't afford not to do.
And to me, it's probably the only good reason for starting a company, because otherwise --
I guess -- this is just my opinion, but I feel like starting a company should not be a goal.
It's just a vehicle for doing something.
And what that has meant, at least to me with Smule, is that as a researcher, it seemed like, dang,
these mobile devices are getting really powerful.
Working with a mobile phone orchestra at Stanford in late 2007,
we were using Nokia N95 smartphones back then.
It was just like, wow, these are as powerful as computers were, maybe eight, ten years ago.
They got a lot of sensors, and more than that, these guys are super personal.
They're more personal than what we think of as a PC because we have
such an attachment of identity to these guys.
It's my phone, it's my phone number.
You text this, you call this to reach me.
It's also very portable.
That's part of the mobile aspect, I suppose, to the point where, for better or for worse,
it feels like a natural extension of ourselves.
So it's kind of with this in mind that Smule was started,
and one of the first things we did was Ocarina.
And maybe I'll just play you a little ditty here.
So the design of Ocarina was meant to be something that --
well, it was a question of like, how can we best take advantage
of everything that's on an iPhone?
It was kind of this backwards design process that took place.
It didn't really feel right, and by the way,
when I'm talking really loud, the Ocarina picks it up.
Ahhh. It didn't feel quite right to just take an instrument and then try to squish it
and force it into anything, really, including a mobile phone.
And instead, I think we try to ask the question, hey, what is this really essentially good for?
What fits its profile?
And I remember actually being in Rome when I was a teenager with my parents
and buying an Ocarina off the street and annoying my parents with it
for several days before it mysteriously disappeared.
And if you think about an Ocarina and the iPhone, there's some similarities.
One is just the form factor.
It's about the right size.
And the other part is that there's a certain simplicity to it.
Ocarina is -- anything can be difficult, is difficult to master,
but to start on as an instrument, Ocarina is among the simplest instruments.
It's like a recorder.
In that regard, you can start playing something very fast.
And that seemed to fit the profile here as well.
And then looking at the sensors: on the screen, there's multitouch.
There's a microphone.
It's a phone, after all, and there's an accelerometer.
>> So that's how we actually mapped this, and we mapped vibrato to the accelerometer so --
that's initially how Ocarina started, and let's see, I'll play a little ditty here.
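The mapping he describes, with touch selecting the fingering, breath into the microphone driving loudness, and the accelerometer driving vibrato, might look something like the following. This is a hypothetical sketch under assumed parameter values, not Smule's actual code; the fingering table and scaling constants are invented.

```python
import math

# Hypothetical Ocarina-style sensor mapping (not Smule's actual code):
# four on-screen "finger holes" select a pitch, the microphone's breath
# envelope drives amplitude, and accelerometer tilt drives vibrato depth.

FINGERING_TO_MIDI = {
    (0, 0, 0, 0): 72,  # all holes open -> C5 (table is illustrative)
    (1, 0, 0, 0): 71,
    (1, 1, 0, 0): 69,
    (1, 1, 1, 0): 67,
    (1, 1, 1, 1): 65,  # all holes covered
}

def synth_params(holes, breath_rms, tilt_g, t):
    """Map raw sensor readings to (frequency_hz, amplitude) at time t."""
    midi = FINGERING_TO_MIDI.get(tuple(holes), 72)
    vibrato_depth = min(abs(tilt_g), 1.0) * 0.5              # semitones
    vibrato = vibrato_depth * math.sin(2 * math.pi * 5 * t)  # 5 Hz LFO
    freq = 440.0 * 2 ** ((midi + vibrato - 69) / 12)         # MIDI -> Hz
    amp = min(breath_rms * 4.0, 1.0)                         # blow harder -> louder
    return freq, amp
```

The key design idea is that each sensor maps to a different expressive dimension, rather than any one sensor doing everything.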
I'm actually going to play this on the newer Ocarina II, which actually has like prompts
to tell you which notes, or how to do the fingering for the next notes.
But you can kind of embellish off that if you want.
So this is the Legend of Zelda theme song, which I always play,
because when first making the Ocarina, there was another part to this, which was, man,
doesn't anybody know what the heck an Ocarina is?
When you make an app -- and you make an app for a lot of people --
potentially you kind of want them to know about it and want to download it.
So what I did was go on YouTube and I typed in Ocarina,
and as if that's like the most definitive source -- in some sense, it kind of is --
and came up with this like these videos from Doc Jazz, who's an online Ocarina tutor.
He teaches people how to play Ocarina.
And his most popular videos had 2 to 4 million views,
and they were like, Zelda, Legend of Zelda.
So I was like, yes.
If people know Zelda, then maybe they know Ocarina because there was
that game, Legend of Zelda Ocarina of Time.
Commonly regarded as one of the best games ever made -- yeah!
Nice. So after that we started making the thing.
>> Yo. That's one of two videos we made.
The other was the Stairway to Heaven video, in which we were obviously trying
to reach a slightly different demographic with that.
And so I can play something else here.
Earlier you heard the Laptop Orchestra version of this, but--
[ Music ]
>> Kind of has this harmony behind it, and it follows you as you play,
and you can kind of just hang out here and whenever you're ready--
[ Music ]
>> And notice that I'm kind of tilting this, and that's of course trying
to engage the accelerometer differently throughout the note.
[ Music ]
>> Just like wait for it.
It's kind of a game of sorts, Ocarina II, but it's kind of a game where it doesn't --
I guess part of the design is that we didn't want
to make people feel like they were pushed into it.
And so this is kind of one of those things where you kind of just play it
at your own pace, as you discover the music.
At the end of the day, if I were to give like a really rough definition of what --
basically the one thing an instrument has to have is that potential to be expressive.
We can kind of bring that expressiveness and maybe a little bit of joy of making music
to people who use these apps -- that seems like a really cool start.
So that's part of the Ocarina, and I think what we were hearing was actually the other part
of the Ocarina, which is the globe.
And this also came out of this question of like, how can we take advantage of everything that's
on the iPhone, or at least as much as we can, and part of that is using GPS and the fact
that your iPhone is often just connected to the network.
So we added this feature where you can actually listen
in on other people playing Ocarina around the world.
So let me see if I can get that going.
>> Hold on, let me log onto the Wi-Fi here.
>> Let me switch devices here.
Playing a very minimal piece of music.
Novi or Navi from, maybe, Paris.
>> Dragon Breath from Chicago, and you can kind of--
>> So this is kind of a -- today, people have actually --
Zelda is always a popular tune on the globe.
So that's the social feature to Ocarina.
It's a pretty simple feature.
That feature really literally just means we capture the gesture, the accelerometer data,
the multitouch data, the breath envelope, and we basically put
that into a neat, compact little binary file.
We send that up to the server, and we just store these with a location.
And what we're actually hearing is just the server kind of picking out which stream to listen to,
and it sends down just the gesture data, which gets re-rendered
on the globe using the same audio engine that makes the Ocarina sound.
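The streaming scheme described above, sending compact gesture records rather than audio and re-synthesizing on the listener's device, can be sketched as follows. This is a hypothetical illustration, not the actual Ocarina wire format; the field layout is invented.

```python
import struct

# Hypothetical sketch of the gesture-streaming idea (not the actual Ocarina
# protocol): pack each timestamped gesture frame into a fixed-size binary
# record, ship it to a server, and re-render it later through the same
# synthesis engine instead of streaming audio.

# time, breath envelope, tilt_x, tilt_y (float32), finger-hole bitmask (byte)
FRAME = struct.Struct("<ffffB")

def encode(frames):
    """frames: list of (t, breath, tilt_x, tilt_y, holes_bitmask) tuples."""
    return b"".join(FRAME.pack(*f) for f in frames)

def decode(blob):
    """Recover the gesture frames from a binary blob."""
    return [FRAME.unpack_from(blob, i) for i in range(0, len(blob), FRAME.size)]
```

At 17 bytes per frame, even a long performance is tiny compared to compressed audio, which is why gesture data plus a shared synth engine is such an efficient way to "listen in" across the globe.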
So that's a little bit of the social feature.
And then people really took to YouTube, kind of to the streets in a different way, I suppose,
and started making videos and performing for the world.
This was super shocking, pleasantly so, to us, in that wow, people, they're actually --
they're performing for their friends for the world, for whoever sees these videos,
and that's pretty cool, including Lindsay here.
>> This is why I love the iPhone.
>> So she's a winner in this Smule Ocarina contest,
and apparently she's been a nose flautist like all her life, she says,
and this is just her most recent nose flute.
So we sent her and all the other winners like $1,000 and a Smule shirt,
but for her we also sent a box of Kleenex for her troubles.
And then you see comments like this on YouTube, which is like --
we don't even really know where to begin.
And I'll just let you read that.
And I think earlier I had a slide that I kind of glanced over and said,
there's a notion actually back from the old ubiquitous computing days
that technology should create calm.
There's this notion of calm technologies, that can be taken in a lot of different ways,
one of which is technology should not be in the foreground of our lives.
This is something that was championed by Mark Weiser and others at Xerox PARC,
when they were working on ubiquitous computing about 20 years ago.
And they thought, wouldn't it be nice if computers were powerful but also invisible?
They were all around us, they were pervasive, and at the same time,
they didn't really require us to really mind them
or to really even be aware that they're there.
I guess this is a different calm, and this maybe has more to do with the idea that there's
an opportunity to do something that engages esthetically or emotionally, viscerally.
And just seeing comments like this was, whoa, that's pretty deep, that's pretty cool,
and it's very humbling; we never thought that
something like this would be happening.
And then you get other stuff like this.
Before Ocarina we actually had another app called the Sonic Lighter.
And I don't have time to show you that today, but it had the same GPS features in there.
It's a fake lighter that you kind of just like --
you basically just ignite, and it also records the location when you do that.
And so this blogger was looking at the Sonic Lighter map and saw that there was like --
they wondered if that location up there was the location
of the northernmost recorded iPhone user at the time.
And it's a place where someone apparently lit up their Sonic Lighter.
And we looked at this location, there was actually no visible land mass there.
So we wonder, is that like an oil platform?
They had network out there.
For a while we thought, man, is that a call for help, a distress signal?
But if you have your iPhone, you have network, because you need it for this to work.
There's GPS on this.
There must be better ways to call for help than with your fake virtual lighter.
And then there's another one of these unintended artifacts, I think, that came
out of this: zooming into the Sonic Lighter ignition map in late 2008 in the Pasadena,
California area, we saw this pattern emerge, where someone was trying to tell us something.
And I think the easiest way to do this is probably to just walk down the street,
and every few steps, you'd light the lighter and blow it out.
And you can see the length of the football stadium over there, and this is actually someone
who really had a lot of time that day.
And another aspect of this was an experiment we tried after Ocarina, and that's Leaf Trombone.
This is another social experiment.
And the Leaf Trombone is kind of like Ocarina, but like Ocarina II eventually,
is kind of this thing that gives you hints as to how to play a particular song.
But you didn't have to follow those, and you weren't really scored.
But thinking about how to really give people feedback on how they're playing was
where we came up with these questions: people want feedback.
How do we give it to them? Like, how well are they playing Yellow Submarine
on the Leaf Trombone, for example?
And that's when we started thinking about another part of computation and computer science,
this idea of human computation, one central tenet of which is that certain tasks
are inherently difficult for computers but easy for humans.
We encounter this actually pretty regularly through various things like CAPTCHA.
These are supposed to be easy for people, hard for computers.
Though I don't know about you guys, but for me, CAPTCHAs lately have become difficult.
But they don't always have to be like these scrambled words that make it really hard
for computer vision to kind of figure out what's actually there.
But like this: Please prove that you are more than a mindless spam-bot
by identifying who gets the beer.
That's a CAPTCHA.
And of course, things like the Amazon Mechanical Turk,
where you can do these human intelligence tasks to actually --
it's kind of this micro economy where you can set up tasks and other people do them,
and they call it artificial artificial intelligence because it's really people-powered,
but at the same time you're using technology simply
to harness the computational power that's in the brains of humans.
So we thought, hey, why don't we do that and make Leaf Trombone this
like crowdsourced platform for musical critique.
So basically in Leaf Trombone, what happens is
someone composes a particular song or piece for the Leaf Trombone,
that gets made available to all the users,
and users perform a particular piece; for example, Auld Lang Syne.
And those go on to these online, real-time, collaborative judging sessions,
where judges, in real time with each other, are listening,
giving feedback on what they're hearing, and they're actually getting rewarded and leveling
up as judges, while performers are leveling up in this game as performers,
and at the end they kind of give each other a score.
And you kind of do this three at a time,
and that completes this feedback loop that we're going after.
They also use this set of emoticons to indicate, at any given time, the closest thing
to how they're feeling, everything ranging from that's funny
to I'm falling asleep to I'm rocking out, in the middle, I guess.
And these are of course, animated.
And at the end you give them kind of a score between 1 and 10 and maybe a comment.
This is all time-tagged and recorded, and the performer can go back and view it.
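The bookkeeping for a judging session, three judges, a time-tagged emoticon stream, and a final 1-to-10 score, might be modeled like this. This is a hypothetical sketch of the feedback loop described in the talk, not Leaf Trombone's actual logic; the function and field names are invented.

```python
# Hypothetical sketch of a Leaf Trombone-style judging session (names and
# structure are invented, not the app's actual implementation).

def close_session(performance, judge_scores, emoticon_events):
    """Summarize one judging session.

    performance:     name of the judged performance
    judge_scores:    {judge_id: score in 1..10}
    emoticon_events: [(timestamp, judge_id, emoticon)] emitted while listening
    """
    assert len(judge_scores) == 3, "sessions run with three judges at a time"
    final = sum(judge_scores.values()) / len(judge_scores)
    timeline = sorted(emoticon_events)  # time-tagged, so it can be replayed
    return {"performance": performance,
            "score": round(final, 1),
            "timeline": timeline}
```

Storing the emoticon events with timestamps is what lets the performer replay the session later and see exactly when judges were rocking out or falling asleep.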
Leaf Trombone is actually no longer an active product.
It's rather an old product that we shelved, but during its time,
it had 6,000 user-generated songs, 700,000 performances judged.
And that's three judges per session.
It had nearly a million users, and the most judged and played songs
on the Leaf Trombone included the following.
And the other point about this is that once you put a system that has these gaming aspects
in place, people are amazingly willing to spend time doing stuff.
So the most prolific composer has put out 177 songs.
We do not pay them anything.
The most prolific performer performed 2,600 times.
That's like a three-minute performance each time.
The most prolific judge judged 10,000 sessions.
Who are these people, and what are they doing?
It might be the same guy that's like walking around spelling letters in Pasadena.
And of course, this is one end of this distribution.
So that's the Leaf Trombone.
That's another experiment, and then there's the Stanford Mobile Phone Orchestra,
which abbreviates to MoPho with a "ph," and who actually wear these speaker gloves.
And this is kind of a very mobile version of the salad bowl speakers,
in that these help amplify the sound of a mobile device,
but we cut the fingertips off of these gloves
so that you can manipulate the touch screens on mobile devices.
These are actually sewn on by hand to the back of these speakers.
All of this you can get off of Amazon, or somewhere online, and you can give performances
that are both more traditional but also some that are very geographically
and even temporally disjoint.
And by way of creating more kinds of instruments, and thinking about this,
and thinking about how we can bring more of this idea of expression and just the fun
of playing music to a wide audience, we created things like Magic Piano and Magic Fiddle.
And the Magic Fiddle is something that really came out of like a bad bet.
Walking out of a San Francisco Symphony performance with Lang Lang,
who performed on his iPad, I commented to some friends and said, man, wouldn't it be funny
if there's an iPad app for a violin that forced you to put your chin on it to play it?
Wouldn't you look ridiculous?
And a few months later we had the Magic Fiddle, and I'll just play a little bit for you here.
Magic Fiddle wasn't really created to emulate the fiddle so much as it was
to emulate the nostalgia of learning a fiddle or instrument,
or having a neighbor who's learning the violin.
So this controls the dynamics.
>> I'm horrible at this.
>> And you can sound even worse.
>> So it's -- there's -- you can -- as bad as it sounds, it could always sound worse.
So that's the Magic Fiddle.
It came with a storybook that has eight chapters
of instructions teaching you how to play the Magic Fiddle, including starting
with how you should actually position it, how you should hold it.
There's even a mode here, which originally I had it
so you had to put your chin on this.
And you can kind of see this reacting,
but I also found that not all chins will activate the touch screen.
Like some said I should just shave, and I did and it still didn't work.
So my chin doesn't really do it, but other people's chins will consistently activate this.
So that was the reason why -- otherwise it was totally going to go in as the feature.
So there's that.
And let me show you a Magic Piano video.
[ Music ]
>> So you can see the interaction is one where again, it kind of doesn't put you on --
it kind of puts you on rails, but the timing and how you express these notes are kind of up
to you, and I have no idea what I'm doing there.
I was trying to say -- actually, these are the foothills behind Stanford.
I actually rented like a slow-motion camera to make this.
Just to completely lower your expectation, I was going to let this finish.
It's only going to get worse from here.
[ Music ]
>> All right, so okay, continuing on.
I just have a few more things to show you.
Oh, that's the St. Lawrence String Quartet, trying out the Magic Fiddle.
That's June and me playing a duet, and there's a quote I want to just throw out there,
and it seems very relevant to these computers.
And that's: the most profound technologies are those that disappear; they weave themselves
into the fabric of everyday life until they're indistinguishable from it.
And if you think about kind of these mobile phones,
that's kind of what they're starting to do.
And we kind of took this somewhat literally and actually created an instrument from it.
This was created by Nick Kruge, and he later published it.
Nick is a Master's student at CCRMA and actually started this app while a student
in my mobile music class, and he really took it a distance.
So MadPad is an instrument -- well, it's kind of a meta-instrument that allows you
to make instruments out of sounds and images from your surroundings and your daily life.
So here's an example.
Bad puns everywhere.
[ Background sounds ]
>> People have created instruments out of everything from kind
of actual traditional instruments to just banging on things to the sound of money
to the sound of food, things being cooked to pets, you name it.
There's also -- it's something that's also meant
to be shared, so you can create a MadPad set.
You can share, you can also browse for those around you who have created this.
And then there's T-Pain.
We worked with T-Pain to bring, for better or for worse, Auto-Tune to the iPhone back in 2009.
Now it's called the T-Pain effect, but it's pitch detection and correction,
and of course it's that T-Pain sound.
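The core idea of pitch correction he mentions, detect the sung pitch, snap it to the nearest allowed note, and transpose, can be sketched as below. This is a hypothetical illustration of the general technique, not the actual T-Pain effect or Auto-Tune algorithm; the scale and snapping rule are simplifying assumptions.

```python
import math

# Hypothetical sketch of pitch correction (not the actual T-Pain effect):
# estimate the sung frequency, snap it to the nearest note of a scale, and
# return the ratio by which a pitch-shifter should transpose the voice.

def correct(freq_hz, scale=(0, 2, 4, 5, 7, 9, 11)):  # default: major scale
    midi = 69 + 12 * math.log2(freq_hz / 440.0)       # frequency -> MIDI number
    octave, pc = divmod(midi, 12)                     # split into octave + pitch class
    target_pc = min(scale, key=lambda s: abs(s - pc)) # nearest allowed pitch class
    target = 12 * octave + target_pc
    target_hz = 440.0 * 2 ** ((target - 69) / 12)     # MIDI -> frequency
    return target_hz, target_hz / freq_hz             # corrected pitch, shift ratio
```

In a real effect, the shift ratio would feed a time-domain or phase-vocoder pitch shifter running continuously on the voice signal; snapping hard to the target with no smoothing is exactly what produces the characteristic robotic T-Pain sound.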
And I seem to have set myself on a path of total self-humiliation, so I'm going to continue that,
and to show you this with a song, or part of a song.
Check-check- check-check.
Shawty Snap (Yeah) T-Pain Damn Shawty Snap Young Joc (Shawty)
[Young Joc:] Ay Ay She Snappin Ah She Snappin Shawty Snappin
[T-Pain:] Snap Ya Fingers 2 Step You Can Do It All
By Yourself Baby Girl What's Your Name Let Me Talk
To You Let Me Buy You A Drink Theen I'm T-Pain,
You Know Me Konvict Music nappy boy ohh wee I Know The Club Close
At 3 What's The Chance A You Rollin Wit Me Back
To The Crib Show You How I Live Let's Get Drunk Forget What We Did Buy You A Drank
>> He sings about a lot of different serious subject matters,
most of which involve either getting drunk or a song called "I'm in Love with a Stripper."
But he's like a really cool, really cool dude.
And when we actually talked to him about building a T-Pain app, T-Pain was like,
yeah, Auto-Tune, Auto-Tune the iPhone.
And it was kind of his -- in many ways his vision for kind
of how this app was supposed to go.
And so then there's I Am T-Pain.
And then maybe related to this, is a more recent app called AutoRap.
And I'll demo this with another video.
If you thought the piano one was bad, this one may be even worse.
It's called the Cornholio test, so that should already be--
>> This is the AutoRap stress test.
>> Here we're going to put AutoRap through a process known as the Cornholio test.
The idea is that if we can AutoRap this, we can AutoRap just about anything.
Let's go to it.
>> So basically turn speech into rap, or whatever this is--
>> I am the Al-migh-ty Bumholio- o-.
My bum-hole is speaketh, is ticket-ta-ka-ta-ka-ta-ka-ta-ka-
ta-ka-ta-ka-ta-ka-ta-ka-ta-ka- ta-ka-ta-ka-ta-ka-ta-ka-ta-ka-
ta-ka-ta-ka-ta-ka-ta-ka-ta-ka- ta-ka-ta-ka-ta-ka-ta-ka-ta-ka-
ta-ka-ta-ka-ta-wooo-a-ja-ba-ja- ba-ja-ba-ja-ja-ba-ja-ba-ja-ba-
ja-ba-ja-ba-ja-ang-ang-ang-ang- ang-ang-ang-ang-yang-yang-yang-
yang-yang-mng-mng-mng-maaow-ng- ng-ng-ngng-ng-ng-ng-ng-naw-rng.
Tepe tepe Tepe tepe Tepe tepe Tepe Daaaaaa!
Let's AutoRap this.
>> So this is the result.
[ Auto rap music ]
>> So that was actually the output of that one take
and with actually absolutely no additional tweaking or doctoring.
That was actually the output.
And the person crawling behind me was actually trying to avoid the camera.
He thought he could, but -- so that's AutoRap,
and this was actually created by the Khush team at Smule.
Khush joined the Smule family as a company late last year,
and they've been creating really wonderful audio technologies.
It started with LaDiDa, which is reverse karaoke.
You basically sing a melody and it'll intelligently put a backing track
in a certain style under you.
Then Songify, which turns speech into song, and then there's AutoRap.
So let's go back here.
And if we look back and think about what we're trying to do
with this, from the beginning, part of that is just to get people to really feel
that music is just a lot of fun, really before anything else, and especially people
who might otherwise not play an instrument or not play music.
And in some ways these people -- with T-Pain, actually, in this case --
have really taken to the streets.
And I'm just going to show you one more video here.
And these are the lyrics from this user-produced video, which is a parody of I'm on a Boat,
which is itself kind of a parody. There's actually a fair amount
of swearing in here, so here we go.
>> Dude, the new BBM is sick.
>> Dude, I am T-Pain.
>> I'm on a phone.
>> Shortayyyy
>> Aww shit
>> Get your phones ready it's about to go down (shorty, yeah)
Everybody in the place hit the fuckin deck (shorty,
yeah) But stay on your motherfuckin toes We runnin this, let's go
>> I'm on a phone (I'm on a phone) I'm on a phone (I'm on a phone) Everybody look
at me cause I'm talkin on a phone (talkin on a phone)
I'm on a phone (I'm on a phone) I'm on a phone Take a good hard look
at the motherfuckin phone (phone, yeah)
>> I'm on a phone motherfucker take a look at me Straight whilin, speed dialing like 1-2-3,
Talkin loud as fuck like I'm all alone
You can't stop me motherfucker cause I'm on a phone
I take a picture, click (click)
On my phone, bitch (bitch)
I send that shit to your phone, cause I got MMS (MMS)
I got Safari son, I got that Google Maps,
They call me Steve Jobs, cause I got so many apps,
I'm talkin on my bluetooth, makin deals and shit,
No cords are clashin, so my hands are free to knit.
They think I'm talkin to myself, but I'm just calling my Vet.
I'm on a phone motherfucker, don't you ever forget.
I'm on a phone and, my batries lastin
>> Okay, you got the idea.
So for better or for worse, this is the generation --
part of the generation that is growing up with mobile technology kind
of entrenched already in their daily lives.
And it's interesting to see where things go.
I do want to leave you with one last thing, and this is thinking about,
more about the social dimension of all of this.
And in that, I think building instruments for these things, it's definitely probably good
to ask why, and I'm not sure if we have the answer to that, but to the extent that we do it,
I think we've been trying to do it in a way that's expressive,
that whatever we make can afford some type of musical expression.
And if it even has gaming elements,
and just because it's gaming elements doesn't mean it can't be expressive.
The other is that simply making ports or facsimiles of instruments on here --
I don't really see the point in that as much.
At the end of the day, if you really want a great piano or a great cello,
then go learn the piano or play the cello; these are not that --
these can only be worse, potentially much worse.
But if you can do something here that you potentially can't really do
on a traditional instrument, then I think it's interesting from a research point of view.
For example, if you can get Ocarina users to listen to each other,
like other people play this instrument, there's something potentially interesting in that.
It's kind of maybe one of the first instruments that allows its users to casually listen
in, in a kind of voyeuristic way, to other people.
Okay, so that's -- maybe that gives us more of a reason to try things.
And the other is can we bring more people to start making music and understand how fun
and awesome, how much it rocks to play music.
Because we've become a very consumer-oriented people, and ironically it's really technology
that kind of took us there as a civilization.
Apparently there was a time when families would play music together as entertainment,
and to consume music 100 years ago, you had to play it or be around someone who played it.
So people didn't think twice about making music or learning an instrument.
And the word amateur actually used to mean something quite good:
someone who plays for the love of it. Now we think of amateur
as not professional, but there was a time when there were a lot of amateurs,
in the really great sense of the word, who played music.
And then technology came in and recorded media and radio and television and made it so easy
for us to consume things like music that perhaps took away some impetus for making it.
And to me it almost seems so hard to make it now.
Children don't really have this problem.
They seem to have very little inhibition about doing just about anything.
For some reason, as we grow up, we seem to develop inhibitions, saying, well,
I'm not an artist, I'm not a musician, I'm not this, I'm not that.
You become very specific to what you do.
I don't see why it has to be that way.
It didn't use to be that way, and I can't think of any actual reason
to have this --
to cave in, essentially, to this external social pressure, perhaps.
So it's good to find ways to try to stave that off.
And now I feel like technology's at a point where we actually have a chance
to use the same -- to use technology again to turn things around, to get more people
to make music, and also to make music in ways that we previously couldn't.
So back to this example.
In the wake of the 2011 earthquake and tsunami disasters in Japan,
a woman started a rendition of Lean on Me in the Glee app.
And this is an app we did with Fox Digital Entertainment based on the television show
"Glee," which is all about singing.
And in this we had a feature where you could actually add your voice to any other song
that someone has already performed on the globe.
So she said, hey, join me in singing "Lean on Me."
And so people did.
And then it started to snowball to the point where something like 4 or 5,000 --
there's actually a rendition of Lean on Me that has 5,000 people on it.
And in this visualization, which is actually in the app, you can kind of see all these points
of all of these performances converging on Japan, which is at one end of this,
and at the other end it's kind of from all around the world.
And these are strangers kind of coming together and making music.
And there's certainly curiosity that I have, wondering what kind
of new musical instruments can we make.
It isn't just -- certainly we want to find ways of re-envisioning certain traditional instruments,
and maybe some we should just leave the frigging heck alone.
But in other cases, can we actually make truly new instruments?
Can we get a million people to contribute meaningfully to a piece of music:
creating it, composing it, making it and rendering it?
I don't know, but I feel like the stage is set.
This is not a new instrument.
This is just kind of almost just something that happened.
Let me see if I can have an audio clip of it here.
These are faces.
>> Sometimes in my life.
>> At the point this rendering was made, there were about 1,000 voices on it.
>> We all have sorrow But if we are wise We know that there's Always tomorrow Lean on me,
when you're not strong And I'll be your friend I'll help you carry
on For it won't be long Till I'm gonna need Somebody to lean on
>> So I guess in this case, I just want to end by saying, the hope is that when someone listens
to this, or if they listen to just a little ditty that someone's playing on the Ocarina,
I'm hoping that the first thing that's going through their mind isn't, wow,
the technology's pretty cool, or like, this is a cool product or whatever.
I think it should just be a very simple, or hopefully visceral reaction that wow,
there's other people out there somewhere.
And then through this music, they feel just a little bit more of a sense of connection.
And I think there is a lot of potential in the social dimension here for using,
combining music, technology and people to --
just to kind of get back almost to more tribal versions of ourselves in some sense.
So with that, I'll stop talking and see if any questions.
[ Applause ]
>> That's the kind of talk that makes you want to go home and create something.
Any questions for Ge?
I have a question about the choir.
[Inaudible], everybody was just on --
>> It was -- so in the app you have an option to turn on pitch correction or to leave it off.
And so actually if you heard that, it was kind of, you know,
some people were actually pitch corrected and some people weren't.
And so if you have a lot of people who are essentially Auto Tuned together,
it actually doesn't sound that great.
It sounds hollow.
Everyone is at this same perfect intonation.
To be honest, we had no idea that that would actually even work.
The system wasn't built to support like 1,000 voices.
It's built to support like 5 or 6, like you and so many of your friends.
But it ended up working, at least as well as you heard, and now there's like thousands.
>> I was just wondering, did anybody see their neighbors, or was it just --
>> I think what audio they heard was people singing on their phones,
and conceivably some people may have sung in other languages.
Is that what you're asking?
>> Yeah.
>> Yeah, I think everyone was -- it's kind of -- part of it's just karaoke, so it has the words
and everything on there, as well as kind of the pitch we're supposed to be singing
as a visual indicator, and then whatever people want to sing into.
Some people could just even talk or conceivably even sing something entirely different.
So it's up to -- what we're hearing is kind of a mix
where any single voice isn't really that prominent.
And to be honest, it seems like -- they say birds, counting-wise,
kind of have this concept of 1, 2 or many.
Beyond like 2 things, they just think of it as many.
Beyond -- I don't know, maybe like 30 to 100 voices,
it kind of just sounds like a lot of people.
And the 1,000-voice version and the 3,000-person version sound kind of similar.
And part of it may have to do with the limitations of kind of the technology we have
in here, and also our human perception system and just
how much dynamic range we can actually perceive.
So yeah, it's an interesting -- it's been an interesting ride, trying to figure that out.
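As an aside on why 1,000 and 3,000 voices can sound so alike: when many uncorrelated signals are summed, the combined level grows only with the square root of the voice count, so tripling the crowd raises the loudness by just a few decibels. A minimal Python sketch (toy white-noise "voices", not Smule's actual mixing code) illustrates this:

```python
import math
import random

def rms(signal):
    """Root-mean-square level of a signal."""
    return math.sqrt(sum(x * x for x in signal) / len(signal))

def mix(num_voices, length=1000, seed=0):
    """Sum `num_voices` independent zero-mean 'voices' (white-noise stand-ins)."""
    rng = random.Random(seed)
    mixed = [0.0] * length
    for _ in range(num_voices):
        for i in range(length):
            mixed[i] += rng.uniform(-1.0, 1.0)
    return mixed

# For uncorrelated voices, level scales like sqrt(num_voices):
r1000 = rms(mix(1000))
r3000 = rms(mix(3000))
print(r3000 / r1000)                   # close to sqrt(3), about 1.73
print(20 * math.log10(r3000 / r1000))  # roughly +4.8 dB
```

Tripling the singers adds under 5 dB, which the ear registers as only modestly louder, consistent with the observation that past a certain point a huge choir just sounds like "a lot of people."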
>> One more question?
>> I just wondered if your grandparents and parents are still alive
to see what's become of their experiment?
>> Yeah, they're -- I have to say, well, my mom I think finally knows what I'm doing,
after many years, and my grandfather passed away in 2008, when Smule was started,
but I think he got a sense I was on to something really crazy and wacky,
and I think that fit him very much.
I went to Beijing this summer, where I found my old guitar.
But I found it with this like sawed-off cooler at the nut, fitted into the guitar,
raising the strings about two-thirds of an inch up.
And I looked at it and it's like whoa, I don't know when this happened.
My grandfather must have done this some years back to convert this
into a slide guitar, because he played slide guitar.
And that's kind of like my dad fixing the stereo to make it into a guitar amp.
Looking at that was like -- it was like yeah, that's my grandfather.
And my grandma, I think she has an idea of --
she actually e-mails with my dad now pretty regularly on her iPad.
She had never used a computer before, and she's 95 this year.
So yeah, they seem to -- I feel very fortunate.
>> Last question?
>> What's next?
>> Oh, good question.
I don't really know.
It depends, probably.
There are always some new things cooking up, but I think this idea
of exploring the social dimension a lot more.
I'm really interested in thinking about how we can use what we know
about how we make music currently, and try to extend that in meaningful ways,
across the world, but also thinking: can we make instruments
that we don't even know of today, and have a lot of people,
or different people, kind of make music in a different way?
And a lot of this, I think, has to do with who we make music with.
There are -- music is this wonderful -- it's like the best icebreaker sometimes.
When you just meet someone, you don't exchange that much meaningful information: nice weather,
my name is such and such, this is what I do --
but when you play music with someone, that's a very true bond.
You don't even need to say anything else.
And it doesn't matter how well you make the music necessarily, but just the act of trying
to connect to make music, that's pretty cool.
And so I think making music with your friends is one.
Making music with total strangers, I think would be quite interesting,
and especially now that we have this global positioning possibility.
And also there are people who are in the middle,
who you might refer to as like familiar strangers.
These people share the same location and time.
It's like the same people who take the train with you into the city to work every day.
You see the same cats kind of on the train, but you never talk to them, and if you don't talk
to them in two years, you're probably never going to talk to them.
But being in the same location, can we use mobile software, musical things,
to get them to at least interact together?
They still don't have to talk to each other.
That's not the goal, but because they share the time and place,
I'm sure there can be interesting musical interactions
that arise out of that.
So really thinking about not just what we do in music, but who people make music with,
it's just one of I think many really interesting questions out there,
and I think the time's never been better just for us all to just explore what is now possible.
Making an app on an iPhone is not easy, to be sure, but it's a lot easier than it used to be --
there's the same spirit that exists now as there was maybe back
in the early kind of garage programming days.
And I hope that spirit doesn't really leave.
Now I think it's wonderful that it really doesn't feel that hard to make an app,
and the truth of it, it kind of isn't.
So I hope more music apps come out there.
>> All right, before I let you go, I just want to thank first Wayne Cloud
and Victoria Smith who made this happen.
I want to thank you for coming, [inaudible].
[ Applause ]