Meaningful Innovation: Whether to Design or Evolve?


Uploaded by GoogleTechTalks on October 18, 2010

Transcript:
>>
I've known Steve for about 20 years now and Steve, more than most people on the planet,
has been in a unique place to help drive innovation and to watch as other people try to drive
innovation and be sort of up close for the successes and the failures of those innovations.
Steve has gravitated to the harder problems and has seen probably more of the fat tails
of innovation, on both the positive and negative side, than anyone else I know. And Steve is going
to talk to us today a little bit about what he's seen and some thoughts about how innovation
can happen, does and doesn't happen, and relate it to some things that I'm personally interested
in, about evolution. I hope you guys will find it as interesting as I do. Please join me in welcoming Steve.
>> JURVETSON: Thank you, Astro. What he may not have mentioned is that we used to sleep together, which is to say we were in the same undergraduate dorm, and many stories could be had in Q&A if you really want to get to know Astro. But he's obviously a pretty special individual. And it's an honor for
me to be here today, addressing both you people here in the room and those out in TV land,
as we used to call it back in undergrad days when literally it was, you know, analog TV
streams. What I hope to share with you today is part of our enthusiasm for the future as
embodied in our views learned from many entrepreneurs and visionary scientists that we've met with
over the decades, on what the future portends in terms of technology trends, the major forces shaping our lives, and some of the underlying dynamics of how we get to those
futures. As Astro alluded to, what I hope to spend a fair bit of the back end of this talk discussing is the dichotomy of design and evolution: different processes for building
complex systems, be that code, synthetic biology, or nanotech, and various futuristic visions
of the big interesting artifacts of the future. How will we build them? And there's a grand
engineering challenge that I think remains there in that sort of open question of how
best we can build things that transcend human understanding. I will try to also maybe make
some passing references to organizational design, maybe there's one slide on that, as
it applies to Google. And for those who read my abstract, I sort of gave a nudge to that there. Also, when Astro and I were brainstorming over lunch a few days back, it was a wide range of topics we were talking about that led him to think, "Hey, maybe he should be brought in for a TechTalk." So this will probably seem a bit scattered, which is appropriate
for someone like me because, you know, I think I found the perfect career for someone with
ADD, which is you get to meet with a bunch of interesting entrepreneurs, hear their pitches,
and then move on to the next one. And so do take, of course, everything I say with a grain of salt, as pattern-recognizing generalization across many smart people who actually have
original ideas. And what I'm sharing with you today is perhaps the synthesis or catch-all
of some of the patterns we're seeing across those ideas, as opposed to, you know, scientific
work. I mean venture capitalists don't directly contribute to the economy. If anything we
help lubricate those who do. So with that preamble, let me just mention some of the
things I'm going to talk about. I'm going to start with a generalization about innovation
and disruptive technologies in general. Then I'll move on to how that is now intersecting energy markets, whose sheer trillion-dollar size is, I think, an important economic reason these trends now matter to industries for which they may not have been as important before. And I'll end with a topic that naturally segues from that: many of the new, powerful capabilities we have today open really important questions about how we build these systems. Okay, one quick overview on innovation and disruption.
From all the business plans we're seeing, we're seeing more innovation than ever before.
Probably no surprise there, but at least I want to say it because in this state of economic tumult that may not be obvious. But the nature and quality of that innovation has changed dramatically over the last decade. It is increasingly globalized, and that uber-trend is not reversing. The companies we invest in today sell to international customers from day one. Ten years ago, unbelievably, we did not see business plans from non-U.S. companies. If you can just ponder that for a moment: just 10 years ago, we had not invested in a single non-U.S. company.
The companies we invested in, those startups, only serviced U.S. customers. The only time
they looked to a European or Japanese customer was about the time they were going public.
That was our software companies, the Internet companies, if you will; it's remarkably different
today. The vast majority of the wealth created in the last 10 years has been outside the
U.S. for us. And there are reasons why those network effects over the Internet reinforced
the globalization trends. Entrepreneurs can be anywhere. They can service customers globally
with lower transaction costs than ever before. We think these are mutually reinforcing. So
it's just sort of a philosophical preamble. Here's the one and only slide I'll show about DFJ itself, for those not familiar with, you know, what this firm is I'm referring to: I work for a venture capital firm that focuses on early-stage startup investments. And maybe the most unique aspect of our structure is that we have a distributed set of partnerships. You can think of them almost like franchises, or more like a pre-war Japanese zaibatsu,
where every one of these entities has cross-linked equity ownership in the others. They want
to see each other succeed. They want to--in a sense, what we've tried to instantiate is
a repeated game dynamic where everyone knows they're in business together for decades and
so therefore we help each other. It's as simple as that, and it allows us to scale horizontally across the planet more easily than we otherwise could. In aggregate, there's about $6.5 billion under management, which for an early-stage venture firm is large. But any one of these funds is a reasonably small pool of capital; no single fund is a billion dollars in size. Okay, innovation. Let me
start with a thought experiment, just in case you're only half here in terms of engaging with what I'm saying. I want to pose a question to everyone in the
room, which is, think of the most important company 20 years from now and actually try
to pick a tangible example in your mind, so that when I sort of make the next statement,
you can judge it based on what you're thinking. So pick what you think would be the most important
company 20 years from now. It would presumably be the one that's on an incredible
growth trajectory, that's changing the world, and that people generally think is the most
exciting company on the planet, like Google was or maybe still is. So 20 years from now,
what is it? Is it Google? So you have something in mind, hopefully. I'm willing to bet against
everyone in the room and everyone out in TV land, even money. And actually, anyone who wants to take me up on this bet, we can do a Long Now bet and actually bet real money on this. I'm willing to bet every single one of you is wrong, not even knowing what your guesses are. And that may seem cheeky and, oh, I don't know, a stunt, which it is, and the reason is that that company doesn't exist yet. I'm willing to bet on a new entrant that doesn't yet have a name over all existing companies any of you might think of today. Looking forward, that may seem like quite a claim. Some of you may agree, "Oh, yeah, of course"; others may say, "Oh, that seems a bit cheeky." But in retrospect, that's always the case, right? In retrospect, we realize the new, the important, the vector of disruptive change in the world always comes from new entrants, never from incumbents. It's a variation on Clayton Christensen's innovator's dilemma. But looking forward, people
tend not to apply this logic. You could ask who's the most important business executive 20 years from now, if you don't want to think about companies, and you get the same answer. Here's a variation on the same thing: imagine you saw a parade of every startup that we
see. We have about 50,000 business plans a year. And let's say they could somehow encapsulate in a short pitch whatever they do, and they present to this audience their new idea of how they're going to change the world. Then imagine we took a popular vote on which one we'd want to invest all of our capital in. Again, that
would be the wrong selection process. I'd be willing to bet against whoever wins a popular
vote in any investment domain of startup ventures that are trying to change the world. Instead,
what you should look to, and what you'd want to invest in, is the one that maybe 5 or 10% of the room thinks is brilliant and the rest think is completely idiotic. That is exactly the kind of company you want to invest in. Because if it's obvious to the majority, right, it's not revolutionary enough to change the world. And this is a truism that in retrospect of course makes sense. Despite what venture capitalists will say today, in the moment, they laughed at eBay, Hotmail, Google, and just about every company that has in fact changed the world. At the time of their founding, a good idea is generally not regarded as such by the average observer. Some general points on innovation. Now, I've qualified innovation as disruptive
innovation, distinguishing it from the sustaining innovations that big companies do all the
time and they're very good at, right? Disruptive innovation is what really matters. And why
do I focus on that? Well, you can ask yourself the question, if it weren't for disruption,
why would any startup ever exist? And I would contend that they wouldn't. If you had market stasis and predictability, the big get bigger, the rich get richer, and the new entrants have no chance. They don't have brand, they don't have capital, they don't have distribution relationships. You can look to old, crusty industries like, you know, aluminum powder, where it is very difficult to compete with Alcoa, and not because they make better powders; it's because the distribution channels have been co-opted for so many years that they're not about to give a chance to a new entrant even if you have better pricing. So it's an
interesting thing: where does disruption come from? Well, there are periodic, unpredictable sources, such as privatization of industries, deregulation of industries, right? When they happen, we're all over them for short periods of time, but we don't build an investment thesis that spans decades on privatization or deregulation. It's hard to find the next one. You might be waiting a few years for the next interesting opportunity, though you'd certainly be scanning the planet for where the frontier of that sort of market deregulation takes place. Another source: new distribution channels, right? For those, maybe three or four of you in the room, who might remember Dell versus Compaq competing in the old days, it seemed almost written in stone that Dell would take the business from Compaq due to
channel conflict, the fact that Compaq sold through a distribution channel that couldn't
be revoked. It's kind of like automotive companies in the U.S. having dealers with contracts they cannot revoke; that makes it easier for new entrants to sell direct and bypass the channel. The Internet was the greatest new distribution channel we've ever seen in terms of power and scope, and that impact still hasn't been fully felt. The next one seems to be mobility as a way to reach customers more directly. I couldn't tell you what the one after that is. In Q&A, please tell me if you think you know what the next great distribution channel to reach customers is, because that would be a good investment thesis for us. But, and this is the whole buildup, there is one source of disruption you can count on every year, every decade, and every century, so that 50 years from now, even 100 years from now, there will be venture capitalists and there will be startups. And the reason is that technology is nonlinear in its pace of change. The accelerating pace of technological
change is the fundamental perpetual driver of disruptive innovation. It's why new entrants
continue to exist, why they continue to enter the market, and why venture capital, by proxy, exists; that's why we're enthusiasts for it. Okay, well, what's the most canonical
example? Of course, Moore's Law, right? And this is Gordon Moore himself--oh, the graphic doesn't come out so well on the screen, at least from my angle. But he's a happy salmon fisherman; he grew up here in the Half Moon Bay area, Pacifica. And, you know, he plotted a pretty important curve with just five data points, which, as everyone knows, has held remarkably true to this day, even though it's been recast several times into forms he never actually stated. But, you know, the consensus we have today, at least the gestalt everyone has, is that we're doubling computer power roughly every 12 to 18 months, depending who you talk to. Now, the cool thing about Moore's Law--and I show this graph for two reasons: one, because I think it's the most important graph in all of technology business, and second, because every time I show it, at any presentation I'm giving regardless of topic, I always ask the question, how many people have seen Ray Kurzweil's curve of Moore's Law, which is the 100-year abstraction of Moore's Law, now 110 years? Some of you must have seen it. Don't be shy. But it's actually maybe 22% of the room, and that is the average. Every talk I give, the percentage doesn't change, and I'm not sure why. So let me reemphasize why I think it's so important. For the...
>> There's no units on this graph.
>> JURVETSON: What's that?
>> There's no units on this graph, that Kurzweil one.
>> JURVETSON: Well, not all of them. This one--this one is...
>> It's by Kurzweil, isn't it? Kurzweil has these--all these graphs on all kinds of distributions, about anything.
>> JURVETSON: Which are a whole book full of graphs, right?
>> Yes, he does.
>> JURVETSON: Well, some of those are bogus--but that would be a jump. I mean, it's an interesting point. So this graph has actually been vetted recently by the Santa Fe Institute in some degree of depth. What I don't like is the sort of white, squiggly double exponential that's just eyeballed in; I could see a straight line just as easily. But they at least looked at the data points to ask: is it a power law? Is it an exponential? Is it a double exponential? Is it sub-linear? On a log scale, of course. So Kurzweil's argument is a double exponential, you know, a slightly upticking curve. SFI thinks it fits better to a power law. But in any case, it's massive: a compounding curve over 110 years, for which there's probably no parallel in business, certainly not in technology business. What we're plotting here is calculations per second that $1,000 can buy. Why plot that? Because no one buys transistors, right? There's no customer for transistors, at least in the integrated circuit world; people buy computation, or they buy storage. That's what matters; that's what Moore's Law is all about.
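To make that fitting debate concrete, here's a minimal sketch--my own illustration, with made-up data points rather than the actual Kurzweil or SFI dataset--of how you'd tell an exponential from a power law: both look straight on the right axes, so you fit each model in log space and compare residuals.

# Sketch: exponential vs. power-law fit on price-performance data.
# The data points below are hypothetical placeholders, not the real dataset.
import numpy as np

years = np.array([1900, 1920, 1940, 1960, 1980, 2000, 2010], dtype=float)
ops_per_1000usd = np.array([1e-7, 1e-5, 1e-3, 1e1, 1e4, 1e9, 1e11])

t = years - years[0] + 1.0            # elapsed time, shifted to avoid log(0)
log_y = np.log(ops_per_1000usd)

# Exponential y = a * exp(b*t): log y is linear in t.
exp_fit = np.polyfit(t, log_y, 1)
# Power law y = a * t**k: log y is linear in log t.
pow_fit = np.polyfit(np.log(t), log_y, 1)

exp_resid = np.sum((np.polyval(exp_fit, t) - log_y) ** 2)
pow_resid = np.sum((np.polyval(pow_fit, np.log(t)) - log_y) ** 2)
print(f"exponential residual: {exp_resid:.2f}  power-law residual: {pow_resid:.2f}")

Whichever model leaves the smaller residuals fits better, and a double exponential would show up as systematic upward curvature left over in the exponential fit's residuals.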
And, you know, by either metric you see a similar curve over long periods of time, so this abstraction allows you, across the color bands, to compare different technologies, right? And what's astounding is that people didn't know they were fitting to a curve. There are dots below the curve; this is the frontier of human capacity, the best price-performance computer of the day. And the takeaways are, well, first, we use our tools to build better tools and so on
and so forth: computer-controlled machines, then better lithography equipment, and better simulation tools, most importantly of late, to build better and better systems. The second major takeaway is that it spans five paradigms of technology already; what might be the sixth? That's been an investment thesis for us, in things like molecular electronics and quantum computing. But I think the most important points are that no one knew they were fitting to a curve, which is kind of interesting--it begs almost the cosmological question of why, and will it continue, and will it saturate?--and also that it's completely exogenous to the economy, right? World War I, World War II, the Great Depression had no impact on the pace of innovation. Scientists don't rest during recessions. Now, the companies behind these dots may have gone out of business like flies on the windscreen based on the economy, but they haven't changed the pace of innovation, which is an astounding observation over a long period of time, and one that I think should really take some time to sink in. People recently,
Brian Arthur and others, have written--Kevin Kelly in his most recent book, "What Technology Wants"--that not only is this exogenous to the economy, it is curves like this that create the economy. And they make the argument that every idea, every innovation, is the product of prior innovations, usually a pair, sometimes more; the really radical ones are interdisciplinary. And if you take that premise as possibly true--that all good ideas happen not in isolation, that they occur in a remarkably time-synchronized fashion across the planet, and that they are the compounding of prior innovations--then the immediate outgrowth of that observation is that the combinatorial possibility space of idea combinations is itself growing exponentially, right? With n prior ideas there are n(n-1)/2 possible pairings, so every new idea multiplies the number of combinations available. We have an understanding of electromagnetism today, and the circuitry that ensues from it allows us to build things we otherwise couldn't, and there's a sequence of innovations that allows more innovation, and so on and so forth. So there's at least one theory that
this isn't just people weirdly fitting themselves to a curve, like people used to say in the early days of Moore's Law--"Oh, it's the whole industry roadmap, and therefore everyone's purposely trying to fit a forecast that was put up by the industry roadmap." Well, you know, these earlier folks didn't know there was an industry roadmap, and they weren't fitting to anything; and it's that concatenation of ideas that I think is most powerful. Okay, now let me actually go back for just one second. And then [INDISTINCT] takeaway is that this is an enormous scale change, right? As you reach certain thresholds, a formerly lab science becomes a simulation science, right? The life sciences industry is going through that revolution as we speak, over the last decade, where things that were done through trial-and-error experimentation are now done in silico, and therefore the pace of innovation dramatically improves. What's
happening of late is that there are some new thresholds in energy and clean-tech markets, fuels and chemicals, even rockets and cars, where you can ask, "Do I need to actually crash the car, or can I just simulate how it will crash?" and send your very first vehicle to the government for crash testing, confident it will work based on the simulation. Just like Boeing designs its planes entirely in silico, without the wind tunnels, computational power allows you to do things much more quickly, and it then revolutionizes industry after industry. So that's why Moore's Law is so important, not just to computers first, then communications, telecom and datacom, the obvious ones, right, but to life sciences, vehicle design, fuels and chemicals, and now biology. Just one
random example again--and a gratuitous opportunity to show off one of my favorite activities, launching rockets on a Sunday--but also a company we've invested in that's used Moore's Law in a variety of different ways to redesign how rockets are built, at roughly a tenth of the cost of incumbents: you use the same electronics chassis throughout, the same FPGA module for all control elements, and all kinds of new approaches. It's the way you'd build a rocket today, not the way you built it 23 years ago or so, when Atlas and Delta were coming together. And that allowed them, on their Falcon 9 launch, to succeed in getting to orbit on the first attempt--a lot of simulation went into that--whereas the last attempt to build a new rocket in the United States, the Atlas, took 13 crashes in a row, 13 explosions and dramatic failures, before they finally put something in orbit. Thanks to Moore's Law. Now, for a historic throwback, you know, back to the '90s
and then into early 2000s. You know if you look at the Internet, I'm preaching to the
choir here about how important that is and how it accelerates technology's proliferation
itself, right? So beyond Moore's Law as a phenomenon, the rate at which humans are networking is itself increasing the pace of change. Think about packets of ideas: how many good entrepreneurs on the planet can have their ideas see the light of day? That's dramatically changing. Here we're normalizing, as if all these companies had launched on the same date, which they didn't; it's really interesting how their early growth compares--the buddy communication systems: ICQ in Israel, Hotmail--actually, does anyone know where Hotmail was based? Pretty close. They purposely chose Fremont--they did move later, to a location that was closer--but they weren't really putting a lot of marketing thought into this. They were working in Fremont, on Liberty Avenue, and launched free email on the Fourth of July, and of course got no press with that, because no one's working on the Fourth of July. But, you know, in prior talks people usually guess India. Then, of course, these companies could've been anywhere.
Skype was built in Estonia by four programmers in four months. Hotmail by, you know, two
programmers in three months. And those companies could be, and increasingly are, anywhere today--wherever, you know, Serbia--you can build and launch a product; that's the obvious point. But also the pace at which products can proliferate globally is, you know, unprecedented. At the time, these were record setters for the rate at which a new product had been subscribed to by users over time. And, of course, now, as you know, Skype has 400 million users--like the third-largest telecom in the world, and growing great guns still today.
So this globalization effect is important and it further accelerates the proliferation
of ideas. And if you think about ideas mating--ideas having sex, the way Matt Ridley describes it--as a source of innovation, that's what makes Silicon Valley so magical: the rate at which interdisciplinary ideas and geographically dispersed ideas can cross-pollinate is a powerful driver of the pace of innovation. There's a loose analogy one might want to draw to differential immunity, like when, you know, the missionaries first came to the New World and killed whole populations through their transmission of smallpox and related diseases; ideas are the same way. If disruption is good in your mind, if change is good, if killing off all the incumbents is a good thing, then the extent to which ideas can cross-pollinate across academic disciplines--disciplines that had perhaps been segregated from each other like islands through their vernaculars and systems theories--and then finally come back together around some new locus of conversation like nanotech or synthetic biology, where the folks who weren't talking to each other are now starting to talk to each other, usually at universities, that's where you see a proliferation of radical ideas. An example recently of how that spills into energy markets, now a trillion
dollar industry: water purification, solid-state lighting, energy and clean tech in fuels and chemicals are all markets that are 10 to 100 times bigger than anything the venture community has ever aspired to attack, if you will. An example, just from our portfolio: EnerNOC, which went public a few years back. It looks a lot like an Internet company under
the covers. What they do is stitch together demand-shedding opportunities and demand-generation opportunities across industrial accounts. So imagine a hospital, a Safeway store, what have you, and in some cases diesel gen-sets for backup power, with the ability to shed load--tune down the lighting, lower the temperature on the fridge--for, you know, an hour at a time, let's say. By stitching all these industrial accounts together, they can present virtual load shedding to a utility and say, "Look, instead of having a brownout, or instead of having to build that natural gas peaking power plant that you only turn on a few hours each year for those peak-load moments, let's use our information network to turn on diesel gen-sets in the worst case, or just shed load in your region on demand." In exchange for that option value--the option to do it whenever the utility wants--they get a constant, steady source of money every month that they then share with these industrial partners. So it's a network effect.
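To illustrate the aggregation idea, here's a minimal sketch; the account names and kilowatt figures are hypothetical placeholders, not EnerNOC's actual system or data.

# Toy model of demand-response aggregation: hypothetical numbers only.
# Each account offers some sheddable kilowatts; the aggregator presents the
# pooled total to the utility as one "virtual" peaker and dispatches
# accounts until the utility's request is met.
accounts = {
    "hospital_hvac": 400,      # kW it can shed for an hour
    "safeway_fridges": 250,
    "office_lighting": 150,
    "backup_gensets": 1200,    # kW it can generate on demand
}

def dispatch(request_kw):
    """Greedily dispatch the largest accounts until the request is covered."""
    plan, remaining = {}, request_kw
    for name, capacity in sorted(accounts.items(), key=lambda kv: -kv[1]):
        if remaining <= 0:
            break
        used = min(capacity, remaining)
        plan[name] = used
        remaining -= used
    return plan, max(remaining, 0)

plan, shortfall = dispatch(1500)   # utility asks for 1.5 MW of relief
print(plan, "unmet:", shortfall)

The point of the sketch is the network effect: no single account matters, but the pooled option value across many accounts substitutes for a physical peaking plant.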
It's an Internet company. It's only made possible by the ubiquitous networks that span these industrial accounts, and so it grows like an Internet company, right? They got to profitability on $3 million of invested capital, which is, again, contrary to what people might expect for energy and clean-tech markets. It's places like that where we're most excited, where it feels like an information technology business applied to energy markets. And by the way, they've already prevented the construction of 70 gas-fired plants on the East Coast of the United States alone through their virtual network. I'm not going to spend any time on this slide; if people really care about energy and clean tech,
we can come back to it in Q&A if you want. But we've been investing, and are probably
the most active investor globally in energy and clean tech. This is about a couple of months old, but there are about 75 companies across the network, about 45 from this office here alone. We started in energy generation, distribution, and storage kinds of places; we're much more interested now in the ones on the right side of the page, which is, you know, the more efficient use of energy: solid-state lighting, building materials, what have you, fuels and chemicals around synthetic biology, and resource management as a broad catch-all, everything from agriculture to water purification and things like that. Just for a sense that there are real businesses here: revenues have been doubling about every year for this aggregate portfolio, and this group alone, I think, will do about $1.4 billion of revenue this year, up 100% from last year. Now, what's one example that's kind of interesting
and sexy? I love Tesla--you know, I'm biased, of course--but I put together a couple of slides for a Castrol management retreat, and it blew my mind. This is a company that has $10 billion a year in sales, and the only product they make is lubricants for internal combustion engines, right? Imagine having a $10 billion industry like that; it's like being the buggy-whip supplier of the last century. And gleefully, that's all they do; they have no intent to go outside of that. And I had to tell them that that entire industry is going away. There will be no internal combustion engine, right? In some near-term future, like 50 to 75 years from now. And they were aghast. One of the reasons, just for those who don't know, is that the gas engine is about the least efficient thing on the planet. It wastes 80% of its energy as heat that is not used for anything useful. And only 0.3% of the fuel you burn in your car, if you're driving a gas car, moves you, the passenger: 0.3%. Roughly 20% of the fuel's energy reaches the wheels, and the passenger is a small fraction of the total moving mass, so well under 1% of the energy is moving the person. I believe all transportation will be electric. Airplanes
are a much longer debate--you know, they'll take the longest time--but for planes, trains, automobiles: the new trains already use diesel gen-sets inline to run electric motors, and this is just a better way to build every vehicle you can imagine. Even if you generate electricity by burning oil--which you wouldn't, of course; eventually it'll be all fusion and fission--but even if you use stupid ways, like one big centralized fossil-fuel-burning machine, it's still better to transmit that power, go through the battery inefficiencies, and run an electric motor. In a car, for sure; in a train, if you have better storage. Trains today, and tanks, and heavy equipment, use electric motors, but they don't have good storage density. How can we store enough energy on the vehicle to make that make sense? Sooner or later we will, and along the way--whenever I think about transportation, it is in itself an enormous industry, ripe for change, and all kinds of information economies are going to percolate through the vehicles; it's going to be the mobile iPad, if you will. Oh,
let me go back one. But that's the high end. When we think about disruptive change, it's
pretty interesting. Clayton Christensen talks about this pincer movement, and the auto industry
is an easy example I think everyone can understand: you've got this high-end fancy brand, a luxury kind of segment if you will, that the mainstream auto companies can try to ignore and say, "Well, that's just going to be, as with past luxury cars, a high-end niche, a Ferrari, a Lamborghini kind of category. It doesn't matter to us, right?" Tesla will disagree, but that's one way to dismiss the new entrant. The other category that can be dismissed is the low end, the products that are useless to your current customer base. If you're selling, you know, Hummers or F-150 pickup trucks, there are certain categories of vehicles you are just not watching with a lot of detail. An example I love to give is that China today has 120 million electric vehicles buzzing around its roads. Think about that for a second: 120 million electric vehicles, sold by 1,300 different Chinese manufacturers, today--that's not the future. I gave a sneak peek, unfortunately, before, so you didn't have time to think about what the hell I'm talking about. It's these two-wheelers, right? They're the equivalent of electric Vespas, e-bikes and scooters. What's astounding
is the pace at which they've taken off in China. And just as an example, let me show
you that graph--and I've updated it. This was some work initially done by Jonathan Weinert, now at Chevron, and I've been gathering the last few years of data to update his curve, because ever since he joined Chevron he isn't really supposed to be focusing on this so much. Electric two-wheelers are the green curve. It's the most popular powered-vehicle category in China today, and has been for a while. Now, what's interesting is: what made this change occur? There have been a few things. First, there was policy, right? It's amazing when you have a government run by engineers; they actually set policy, you know, in a slightly different way than the democracies around the world. So they just say, "We're going to outlaw these two-cycle, you know, polluting bikes from our cities." Eventually, they got up to 150 cities that banned petroleum two-wheelers. Then they started scrapping them: 53,000 of them were scrapped in Shanghai. Then they passed a law granting them the right to the bike lanes, kind of like a commuter-lane policy here, and you can see that correlating with growth. And by the way, technology improved too, right? They switched to VRLA batteries and got about a 30% efficiency improvement there. They switched to brushless motors early on, which also netted about a 30% efficiency gain. So there's technology improvement in the background as well. And, oh, by the way, the purchasing power of the average person in China, you know, roughly doubled during this period, so that also helps. Then
there were some other changes. Now, here's an interesting question: what do you think was the biggest kicker, the knee in the curve, the biggest driver, at least in Jonathan's mind, of this transition? Does anyone want to venture a guess? It had nothing to do with policy; it was nothing intentional. It was SARS. People didn't want to ride on the bus, and so they bought these electric two-wheelers as a substitute for bus transport. And once they had actually paid for them, they realized it costs less than riding the bus. If you do an all-in cost analysis of the E2W, it's actually cheaper to buy and operate one of these for solo passenger use than to ride the buses in China. And that, by the way, was perceived to be even cheaper than it really is, because the all-in number factors in battery replacement over the vehicle's life, which most people, when they first buy one, don't realize they'll have to do--replacing those crappy VRLA batteries they use over there every year and a half or two. But even when you factor it all in and do the real cost economics, it's cheaper, which is kind of interesting. Those 1,300 different vendors--in the top right corner are examples of their latest products. The four largest manufacturers are now doing four-wheelers. These cars sell for between $2,000 and $4,000. A lot of them, to the criticism of the government, are not even regulated as cars, so they're just being sold with absolutely no nod to the crash-safety laws of the country. But nevertheless, that's the bottom edge, right? That's a great example of Clayton Christensen's disruption coming from the bottom.
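Going back to that all-in cost claim for a second, here's a back-of-envelope sketch of the comparison; every number in it is a hypothetical placeholder, not Weinert's actual data.

# Back-of-envelope all-in cost: electric two-wheeler vs. riding the bus.
# All figures are hypothetical placeholders for illustration only.
purchase_price = 300.0        # E2W purchase, USD
vehicle_life_years = 5
battery_cost = 40.0           # VRLA pack replacement, USD
battery_life_years = 1.5      # the part buyers tend to forget
electricity_per_year = 15.0   # charging cost, USD/year

bus_fare = 0.25               # USD per ride
rides_per_year = 2 * 250      # two rides per working day

e2w_annual = (purchase_price / vehicle_life_years
              + battery_cost / battery_life_years
              + electricity_per_year)
bus_annual = bus_fare * rides_per_year

print(f"E2W: ${e2w_annual:.0f}/yr vs bus: ${bus_annual:.0f}/yr")

The structure of the calculation is the point: even after amortizing the battery replacements that first-time buyers forget about, the per-year number can come out below the bus fares.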
You know, by the time they have shipped more units than anyone else, it'll be a bit too
late for other auto companies. And, by the way, looking back in time 100 years: in 1900, transportation was the single largest consumer of electricity in the U.S., and I believe it will become something closer to that again in the future--just for the long view. Okay, that was a brief digression on something I think is pretty interesting, but not the main point. The main point of that example was that disruptive change is occurring in industries that had been sheltered from disruptive change for a while, right? When Tesla went public, it was the first U.S. car company to go public since Henry Ford's. So the industry had been sheltered for a while, just as, I would argue, fuels and chemicals have been, and other huge industries. So now, accelerating change, things like Moore's Law. And here's where
Sebastian's good point about Kurzweil's curves comes in: they're this fire hose of, how should I put it, biased perspectives. Nevertheless, if you look at some of the examples where the curves hold really well--and let's say other people did the analysis, if you just want to look at it that way--there are, of course, IT-related topics where you see similar curves. IBM's done an analysis of the disk drive industry, or mass storage if you will, and it holds even more powerfully than Moore's Law, over a slightly shorter period of time but with a more dramatic slope of change. Obviously networking, all kinds of different metrics of the size and scope of the Internet. But also biological things: evolution itself, over hundreds of millions of years, follows a very similar accelerating pace of change, which is fascinating. And how does that acceleration continue in our lifetimes? How do we see accelerating change in technologies? Of course, the vector of evolution is moving toward technologies now, not our biological systems. So if you take that broader definition of us--our technological substrates, meaning our societal means and norms, the technologies we build, the world that we live in, our extended self--that is evolving at an accelerating pace of change even in the current day. There are also a lot of life-science-like examples, like that curve there: that's
Dickerson's Law--not a very famous guy, but it's even more predictive than Moore's Law itself. It's held to 0.5% accuracy to the current day, over the 35 years since he first set it, for how many protein structures are going to be crystallized each year. You know, he predicted it in 1965, and the number--this formula, that is--has held true till today. And then, the number of genes mapped. I think genes mapped is an interesting proxy for the accrual and accumulation of information in the life sciences, so I'll give you an example of that. On the left here is the Human Genome Project: how many genes have been checked into GenBank, the public effort. And, you know, this exponential curve looks familiar by now. At the time, it
was deeply contentious, especially midway, when very little progress had been made and naysayers were saying, "Hey, the shotgun sequencing approach will never work." And, you know, Craig Venter and the team realized, "Hey, you guys think it's going to take this long? We can look forward to the computers that will be available four years from now. We don't really need to work so hard in the early days, because we don't really want to use the computers of the early days. Let's just collect samples, do all the shotgun sequencing, and do half the project in the last six months." And that's exactly how it played out. The pace of Moore's Law drove the shotgun sequencing technology--basically, the computers reassemble the genetic code from the nebulized random fragments, which overlap. And then, sure enough, it kicked in right on schedule. Now, what's intriguing is that the scientists didn't rest on their laurels and do a stop-and-digest, as one might have assumed; in fact, there's some criticism that human health hasn't improved much from this research. They moved on to microbes.
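Since I hand-waved at how the computers reassemble those overlapping fragments, here's a toy sketch of the core idea--greedily merging the pair of reads with the largest overlap. It's an illustration only; real assemblers handle sequencing errors, repeats, and billions of reads very differently.

# Toy shotgun assembly: repeatedly merge the two fragments with the
# largest suffix/prefix overlap. Illustration only.
def overlap(a, b):
    """Length of the longest suffix of a that is a prefix of b."""
    for k in range(min(len(a), len(b)), 0, -1):
        if a.endswith(b[:k]):
            return k
    return 0

def assemble(fragments):
    frags = list(fragments)
    while len(frags) > 1:
        k, a, b = max(((overlap(a, b), a, b)
                       for a in frags for b in frags if a is not b),
                      key=lambda t: t[0])
        frags.remove(a)
        frags.remove(b)
        frags.append(a + b[k:])          # merge on the overlap
    return frags[0]

reads = ["ATGGCC", "GCCTTA", "TTAGAC"]   # overlapping reads of ATGGCCTTAGAC
print(assemble(reads))                   # -> ATGGCCTTAGAC

The key point for the Moore's Law story is that this merging step is almost pure computation, which is why waiting for faster computers and back-loading the assembly worked.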
And what's astounding is, if you look at the right-hand curve, this whole Human Genome Project is that little part of the curve, and since then it has just mushroomed like crazy. Now, you might say, "Well, surely that's some sort of local peak." But what's even more astounding, because this doesn't even take us to the current date, is what's happened in the last 18 months: one team, again Craig Venter, sailing around the world, shotgun sequencing whole ecosystems at a time. Again, using Moore's Law to say, "Hey, now the computers are good enough that we don't even care to segregate the tens of millions of organisms, the bacteria and viruses, that are in one little milliliter of the seawater we sampled. We're just going to put the whole damn ecosystem, unsegregated, through our methodology and deduce all the interesting genes that matter out there," right? Well, that team alone has grown this dataset ten-fold, so everything that came before is now just 10% of the known genes on the planet today. The number of known genes for energy transduction--how we harvest energy from the sun--has grown a hundredfold: things like chlorophyll and the rhodopsins, things that have evolved in the ocean over longer periods of time and are more efficient than land-based plants and animals. Fascinating stuff, right?
The problem is, why would you ever have done this? Because in Biotech 1.0, if you don't have the host organism from which to physically cut and paste the genes, what good is the data, right? And so people may not have thought to do this 10 years ago, because it would've just been an academic exercise at best: "Here's a bunch of genes, but we don't know what to do with them." It could've been, you know, maybe a statistical study, but nothing
saw, was another exponential curve. These ones by the way done by Carlsson. Don't even
worry about the axis at this point because what matters is the relative slopes. Red is
Moore's Law, blue is the advances that have been made in gene sequencing which is reading
the code of life, and green is the pace of change in the synthesis of genes, writing
the code of life. And slight differences in slope have a dramatic compounding impact over
time. So if you put this on a linear graph, gene synthesis technology advances will make
Moore's Law look flat-line in comparison, for example. Now what are we referring to
here? We're literally talking about emailing files to various places around the planet.
"Give me an A. Give me a T. Give me G-G-C," whatever and then getting a FedEx back and
get all of the nucleotide, sometimes several series of these. And they're all over the
place. This is a local company just down the street, DNA 2.0. They literally have bakers,
affectionately labeled ATG and C and the things are just printing them out. There is--no animals
are involved, right, this is just, you know, all chemistry in the lab. Well, the ability
to predict that falling cost of gene synthesis let a lot of people ask, "Well, when will we build the first virus? 2002." Sure enough, it happened. "When will we build the first free-living organism that can propagate--the first organism whose parents are a computer, in the sense that it's just code pulled out of the air, with, you know, embedded URLs and all kinds of interesting poems and quotes in its code, built base pair by base pair from the ground up with no animals involved? When would that happen?" Just a few months ago. A company on the right-hand side, Synthetic Genomics, again led by Craig Venter, did that. It's an astounding accomplishment. First, that you can take 100% of the DNA out of an organism, put it into an entirely foreign host, and have a changed species, a changed phenotype, from one to the other. And they showed that across organisms that were as genetically different as mice and men--just for symbolic value--which, it turns out, is actually pretty darn close. But these are single-cell organisms, don't get me wrong. No one has taken on a mammal; mammalian cells are quite a bit larger and more complex. But it's a profound,
I think, watershed achievement in a field that won't have business applications this year, at least not directly; it will indirectly, but not directly. We're not going to build new life forms right away, don't get me wrong, but that's where we're heading in this conversation. What we will do is high-throughput experimentation at a pace we haven't seen before. It will radically change the process of innovation, where you don't have to find the host organism to test the hypothesis: "Well, what if we spliced in this version of rhodopsin that has a frequency-tuning ability for a different spectrum of light?" You can run, as George Church at Harvard has shown using a different methodology, four billion experiments a day on genetically different organisms. It's just mind-blowingly different.
Biotech 2.0 is mind-blowingly different than Biotech 1.0 in both the pace of innovation
and the rate at which we can run experiments. All of this begs the question: what would one design, now that the capacity to design is with us? And by the way, when teenagers are playing with it, then you know it's somehow hit mainstream. So, for those familiar with the International Genetically Engineered Machines competition, iGEM--started by Drew Endy, now at Stanford, formerly at MIT--teenagers and people in their early 20s are, you know, reengineering E. coli like it was Lego bricks, right? You know, make it into an oscillating repeater with logic gates, put in a [indistinct] and add a chemical sensor. Just, you know, drag and drop in a Mindstorms NXT kind of workspace and make organisms. They do fascinating things, right? There was a binary system where one organism was a sensor and the other was a therapeutic for Crohn's disease. It's not yet in animal trials, but at least on paper it seems quite plausible that it could work.
Another team reengineered E. coli to smell better, because it smells like that from which it comes, and they just didn't want to work with something that smelled like that all day, so they made it smell like bananas. And a third team created a biofilm and had it change--kind of like the e-ink screen on a Kindle--the way it displays color based on light exposure, so it's really like a film that you can expose to light, with higher resolution than normal film, in what I call "E.colorite," and that, I think, should at least be played with some more. It's a fun one. And now, as you know, you just take these little paper blotters and blot them in, and then they're just playing around with the stuff, splicing it together; they even have their own comic book. Oh, I accidentally left that slide in. Well, what is the business application in the near term--
not the immediate term, but, you know, the next few years, maybe a decade out? It's, you know, 90% of all organic chemicals. In fact, everything you can see and touch in this room--not the underlying metals, but the surface coatings--just about anything in this room probably came from petroleum or natural gas, which is pretty astounding: there are organic coatings on just about everything in here, for example. And inevitably, it'll be known as the formerly-petrochemical industry, because I think we'll look back at some point in the distant future, maybe 50 years from the present day, and marvel that we had an incredibly rich resource like coal and burned it, right? It's the least creative thing you could do with a resource like that. Think of it like the Amazon Basin: there are a lot of things you could do with the Amazon Basin, and burning it is the least creative of them. Instead you could bio-convert it to all kinds of fuels and chemicals that have richer uses, that aren't just burned; I should say [INDISTINCT] but chemicals. And what these organisms can do with the precision of their stereochemistry is very valuable: is it the left-handed or the right-handed version of the molecule? Is it the sugar that's indigestible or digestible? They can do that not as a statistical sample but as, you know, a metabolic pathway, screening for one and not the other. So there are some big companies
in the world connected to this. So ExxonMobil has probably spent more in marketing than
science, but they have put $600 million towards the science with Synthetic Genomics. And there's
a bunch of other companies working on, in this case for example, reengineering algae
to continuously secrete their oils across the cell membrane. So unlike a sort of farming
approach, we have to grow, kill, extract the oils to make biofuels or chem or fish foods
or any other number of other lipids, you have continuous secretion in a very inexpensive
separation process which is going to continuously get separated off the organism, it makes more
and more and more. That could be revolutionary to algae-based fuels and chemicals. And there's
other really interesting projects, like converting coal to natural gas underground. It already happens biogenically; 10% of our natural gas comes from that. There have been some recent efforts showing you can get a hundred, maybe thousand-fold improvement on that, where you take the dirtiest burning of the fossil fuels, coal, [INDISTINCT], sulfur and mercury, leave it in the ground, and convert it to natural gas, which is the cleanest burning. Now you can burn it in your kitchen without a fume hood, and without, in a sense, any energy infusion: you don't actually have to crack or frack the wells; you just let the microbes do the work. And when they did genetic sampling of those wells, it was kind of amazing. They drilled down and pulled up a genetic time capsule of sorts. The organisms living down there formed a whole rich ecosystem, completely isolated from all sources of light or contact with the outside world. They were stripping hydrogen--excuse me, stripping electrons--off the carbon bonds. There were spirochetes living on these things, a whole little time capsule that had been segregated from the rest of the world for 70 million years. Fascinating. You know, everywhere you look on the planet, you find life doing very interesting stuff.
What does all that mean? Well, there's an increasing sense that this world we live in, at least of late, is completely unpredictable: financial crises and collapses, which, by the way, are good for startups too. I forgot to mention it among sources of disruption: a good financial collapse helps new entrants more than it hurts them, and it hurts incumbents, with their, you know, debt-laden balance sheets and union contracts and pension contracts, more than it hurts the startup. So if you're in the automotive industry, what better time to compete with GM and Toyota, right? I'm just thinking about this from the startup's point of view. Anyway, Black Swan events could be considered good or bad depending on your point of view. But there are things that make sense only in retrospect, like Google being successful. Of course, we all understand now why Google is successful. But when they launched, I'm not sure any of the investors, at least--to make fun of, you know, our brethren--could have told you, "Here's how it's going to play out; here's the broad map of how Google will succeed." It's what someone like Nassim Taleb would call, in retrospect, a Black Swan event. Those aren't going away, right? Financial crises, near-collapses of
financial systems, the pace at which those events occur is going to accelerate and is
going to increasingly define our world. You can do a retrospective analysis of the stock market and see that it's only those huge single-day swings, the ones that seem like outliers, that really drive change over time. If you took those out, the stock market wouldn't look anything like it really does today, and that will increasingly be so. Well, you might ask yourself, "That's kind of weird. How do I plan around that?" And, you know, as venture firms we think about that too. We build portfolios, and we try to only be equity investors, not debt lenders, because you can take advantage of the upside and you have limited downside: you only lose 100% of your money at any one company, unlike some other kinds of synthetic vehicles.
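You can check that outlier claim yourself in a few lines; this sketch uses synthetic fat-tailed returns rather than real market data, purely as an illustration.

# How a handful of outlier days dominates compound returns.
# Synthetic fat-tailed daily returns; not real market data.
import numpy as np

rng = np.random.default_rng(0)
daily = rng.standard_t(df=3, size=2520) * 0.01   # ~10 years of fat-tailed days

def compound(returns):
    return np.prod(1 + returns) - 1

trimmed = np.sort(daily)[10:-10]     # remove the 10 best and 10 worst days
print(f"all days:     {compound(daily):+.1%}")
print(f"w/o outliers: {compound(trimmed):+.1%}")

Dropping just 20 days out of roughly 2,500 typically changes the compound result dramatically, which is the fat-tail point: the outliers are the story, not noise around it.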
But you might also ask, "Well, you know, is there anything completely out of left field that could change computing, or is that kind of stable?" Processors as we kind of know them, yeah; parallel processing, not that radically different; but, you know, is computing more or less a stable thing? Well, some of you may have caught the Google announcement with Hartmut Neven earlier in the year on the use of a quantum computer from Vancouver for machine learning tasks in Google Goggles, in the recognition of cars in a particular demo study. They compared, you know, discrete machine learning versus continuous variables and [INDISTINCT], the quantum computer versus a classical computer, and they got improvements over the way machine learning is done today--and my guess is almost everyone here has some interaction with machine
learning algorithms. What's fascinating about quantum computers is, for those familiar with
them, they're just completely different ways of computing. The only physics explanation for how they work is the engagement of parallel universes--just to get a sense of how different this is--in the sort of interference patterns that occur across nearby neighboring parallel universes. And I'm not joking. That is literally the only explanation that actually matches with the measurements of reality. And David Deutsch from Oxford is one of the few, you know, professors willing to go out on a limb and say, "I mean, guys, you have to admit it, this is what the two-slit experiment and everything else in physics points to."
But what's even more interesting is that certain classes of problems--things that you can map to a graph; this is an application-specific quantum processor in this particular case--problems like the traveling salesman problem, Monte Carlo simulations. It turns out that molecular modeling and optimization algorithms of almost every type can be mapped onto this quite efficiently, but not all computation can. So there are certain classes of really hairy problems, like FedEx route optimization. Look at that class of problems, the ones that people really care about, that are, you know, at the cutting edge of what we can do computationally today in a high-end datacenter--all machine learning, for example--and consider what happens if the pace of progress in quantum computing follows what I've come to call Rose's Law, because Geordie Rose, a founder of this company, had sort of plotted the points. He made a claim early on, when I first met him, that every year they're going to do twice as many qubits as the prior year. By the way, when you add a qubit, you almost double the power of the computer. It's not like the classical thing, where adding a bit doesn't double anything--well, from a memory-register-size perspective it does, but not in computational power. Here it really does double the computational power, each added qubit. And we've been doubling the number of qubits each year, right?
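So you get a doubly exponential trend: the qubit count doubles every year, and the machine's reach grows roughly like two to the number of qubits. Here's a minimal sketch of that arithmetic--the starting qubit count and the classical comparison figure are illustrative assumptions, not real specs.

# Rose's Law arithmetic sketch: qubit count doubles yearly, and the
# state space explored grows like 2**qubits. Starting count and the
# classical stand-in below are illustrative assumptions only.
classical_stand_in = 2 ** 80          # stand-in for "all classical computers"

qubits = 16                           # assumed starting point
for year in range(8):
    power = 2 ** qubits               # size of the state space explored
    note = " <-- passes the classical stand-in" if power > classical_stand_in else ""
    print(f"year {year}: {qubits:5d} qubits ~ 2^{qubits} states{note}")
    qubits *= 2                       # Rose's Law: double the qubits each year

With these assumed numbers the crossover comes after only a few doublings, which is the whole point of compounding a doubling on top of a doubling.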
So if that pace of progress continues--and that curve's held for seven years now; remember, Moore's original curve had only five data points, and we've got seven data points versus five. If that curve continues for the next seven years--and I really have no reason to believe why it would or wouldn't, because it's just, you know, a semiconductor process, pretty linear tiling; we're not talking about changing the design as you scale, just putting more of these next to each other--then one of these computers, which would fit roughly in the space of this podium area, could outperform all computers on the planet combined, all computers ever made. Then you give it a few more spins, and soon it's outperforming all computers that ever could be made with any traditional architecture, even if you used all the matter in the universe to build them and gave them the entire lifetime of the universe as compute time; they still wouldn't solve the problems. Now, today we use heuristics to get around those problems, right? We're talking about the brute-force solving of problems versus using heuristics, where we've long since given up on solving the problem exactly--NP-complete problems, or NP-hard problems rather. So it's pretty interesting, and it begs some interesting questions. Okay, let me finish up soon here. Good, I think I'm right on time. So this is
the last topic. Now, I think a lot of folks here in some ways--well, I shouldn't speak for you. I don't know if you dream of building complex systems. I sure do. You know, as I think about the big unsolved problems in science, especially information science, what are the ones that at least some people say will never be done? I see that as a challenge: "that's impossible" or "we'll never do that"--like artificial intelligence that exceeds human intelligence, or nano-robots doing amazing things (that's actually probably the hardest of all of them), or designing microbes from scratch that do our bidding, right, on the synthetic biology side. We can build the DNA, but what's the code? We don't have the, you know, programming environment, if you will, or an understanding of the interpreters and compilers of that world. Well, all of these have something in common: they are, in many ways, attempts to build a complex system. And we're already building things no one fully grasps--I mean, no single human understands Windows, right--and we're trying to build artifacts in many areas of our world that are beyond the understanding of any one human. So we have teams of humans sort of at that frontier of human understanding in the design paradigm, who are hitting limits in certain areas. So I first put this slide together
in 2001 or 2002 around nanotech, so there was a lot of nanotech-specific language and the like,
and I pulled it out because I realized the big labels kind of apply to all these areas.
But imagine we're thinking about, you know, dreams of nanotech futures, these little
nano robots that do amazing stuff. How would we get there from here, right? Because as
investors we also have to think about this: we can't invest in a dream that just says
let's hunker down for 20 years and something amazing pops out at the other end.
That's a bad investment thesis. But is there a way to engage with markets and customers
and almost inadvertently get to some of those visions, some sort of shining star that motivates the team to
work hard? And there have been two approaches, two paradigms, that we can think of for how you get
there. And this applies to all kinds of things: architecture, art, a whole
bunch of domains of activity for which you can talk about a top-down or a bottom-up.
But specifically, what I mean by this is: top-down is sort of the engineering approach to things,
the Germanic tradition, if you will. Take the semiconductor industry: we build
tools to scale down from above. We inherit interfaces of scale from above, meaning things
that are big, you know, discrete capacitors, printed circuit boards, plugs that go in walls,
the big stuff. We couple down to smaller and smaller things with technologies whose
frontier is that red point coming down, working at that frontier.
We're not reinventing the plug so much as working at the cutting edge of, you
know, gate oxides, or whatever can be done at the nano scale today. And the pace of change
is slow, incremental, and generates revenue along the way, right? Every Intel processor
has a market, right? They didn't hunker down for 20 years and then come out with some new
architecture at the other end of it. And so it's slow and steady; that's why HP, IBM, and
everyone doing research in nanotech said, "All of our interesting products will be 20
years out." It made sense. It probably will be. But there was this other group, which Kevin
Kelly and others wrote about, that really inspired me, doing radical things
in the near term, like Angela Belcher's early work reengineering viruses to do, you know,
lithography; it's fascinating work. A little bit different, though: grown, iterated, a little
bit out of control; you know, movies have been written about that. Very powerful, but
lacking a lot of systems theory. How do you really dictate the outcome of this? It's
an area of active research in, you know, evolutionary design and things of that sort. But most interestingly,
at least there is the heuristic existence proof of transcendence: if you think about evolutionary
algorithms, they are capable of producing products that are more complex than their antecedents.
And what I mean by antecedents is people like us, if we're putting it together, or the
components of biological evolution looking back in time. Now, there is one asterisk to
put on that. Some, deep in this theory, argue that in fact evolutionary algorithms don't
produce anything more complex than their selection pressure, and that how you design the selection
pressure, which organisms survive, which get killed off, is actually the embodiment
of the magic. In our real world, where we co-evolve with a lot of artifacts, that selection
pressure is remarkably complex; it's the matrix, if you will, that we live in. Whereas in a rarefied laboratory environment
we usually simplify it dramatically. Let's say I'm just going to select for the fastest
bubble sort; no, not even a bubble sort, the fastest sorting algorithm, and just
genetically mutate sorting algorithms and eventually pick the fastest one. Well, that's
a very simple selection mechanism, and yet you generate a sorting algorithm. What's subtle
in that argument is whether there's still as much power as I'm about to describe, and
that's the asterisk that I want to put there, because it's something I don't
fully understand myself, but I find fascinating.
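Here is a minimal genetic-algorithm sketch of exactly that setup: the genome is a sequence of compare-and-swap operations, and the entire designed selection pressure is one line of fitness. It illustrates the mechanism only; it is not a reconstruction of any real experiment, and all parameters are arbitrary.

    import random
    random.seed(0)

    N = 6  # length of the lists to sort
    tests = [random.sample(range(N), N) for _ in range(50)]

    def run(genome, xs):
        xs = list(xs)
        for i, j in genome:            # each gene is a compare-and-swap
            if xs[i] > xs[j]:
                xs[i], xs[j] = xs[j], xs[i]
        return xs

    def fitness(genome):
        # The whole selection pressure: sort the tests, prefer fewer ops.
        return sum(run(genome, t) == sorted(t) for t in tests) - 0.01 * len(genome)

    def mutate(genome):
        g = [gene for gene in genome if random.random() > 0.05]   # drop genes
        if random.random() < 0.7:                                 # add a gene
            g.insert(random.randrange(len(g) + 1),
                     (random.randrange(N), random.randrange(N)))
        return g

    pop = [[(random.randrange(N), random.randrange(N)) for _ in range(8)]
           for _ in range(100)]
    for generation in range(300):
        pop.sort(key=fitness, reverse=True)
        pop = pop[:20] + [mutate(random.choice(pop[:20])) for _ in range(80)]

    champion = pop[0]
    print(fitness(champion), champion)   # it sorts, but the gene list itself
                                         # offers no explanation of "how"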
So let me give an example. Let's imagine we're thinking about building AI, because I would
hope everyone here thinks that would be interesting if it could be done, and may at some
point in their life have debated with somebody whether it can be done. One of my favorite
books is On Intelligence by Jeff Hawkins. I think I
agree with everything I read in the book about how the cortex works and how we're going to
build intelligent machines in the future, except for his comments on page 219, which
you don't actually have to read; I'll just summarize them. He believes that once
we build iteratively, you know, not like the rules-based systems of traditional AI back
from the '60s, but when we, you know, grow an artificial intelligence, we'll somehow
be able to magically cut and paste things like English- and French-speaking capabilities
as if they were modules. Or, down here, sell and swap memory configurations to reprogram
intelligent machines to do entirely new things, as if you're just installing new software.
At some level, he's right: yes, it could be just software running on a computer; that's
true, and that's powerful and worth thinking about. But the profound error, in my opinion,
in that analysis, which conflates the power of design and the power of evolution, is
assuming we would know how to do this at all without expending perhaps more effort on
reverse engineering the system than it took to build it. Take, for example, growing your
own neural network to do speech recognition. Actually deciphering how the resulting
speech-recognition algorithm works, what the mechanisms are, where the knowledge lies,
does not come for free; the decomposition of a design is not free. Similarly, think about
how long it has taken to reverse engineer the human brain, which is itself, of course,
a product of iterative algorithms: evolution, and organic growth and pruning based on
usage feedback. Understanding the brain and its subcomponents is remarkably difficult,
right? If we could very quickly run experiments, billions of experiments per day, per
second even, and could grow an artificial intelligence that way, it would be just as
difficult to reverse engineer as the human brain is today, if not more so, because it
would have started from a foreign substrate and its interfaces would be kind of alien
to us. So I don't think there's any reason why we poor little humans, given our cognitive
capacities and our lack of theory for this, could in any way take a really complex system
and just say, "Oh, yes, here's the decomposition of the design" (because there's no design),
"here's the decomposition into subsystems," and start to work from that. Think about how
many human minds are currently trying to understand our own brains.
So what is it about these evolved systems that makes them so inscrutable? Well, you get
these emergent layers of abstraction, and that helps. In human evolution, right, we didn't
tinker with the basic genetic code when we went from primates to humans. We don't tinker
with amino acids or protein signaling. We don't really tinker with much of the hierarchy
of evolutionary historical artifacts. It's all in the network; it's all in the number
of nodes in our brain, right? And that's really fundamental. It wasn't even a new cell
or a new neuron. Some might argue the mirror neuron, but it's really how it's wired, and
the mirror neuron is not so novel a concept as the neuron itself. And that's a new vector
of indirection. I would argue that the fastest evolution today is not anything in the
biological world; it's all technological. We'll get to that later.
Now, the subsystem inscrutability. For anyone who has worked in these areas, it sort of
becomes part and parcel, I mean it's obvious. But when Danny Hillis for example, did genetic
programming to build a better sort algorithm which will compete with bubble sort, it did
a pretty good job at sorting. It wasn't--it didn't beat the best human design, that's
a pretty simple task, design does fairly well in it. But it did better than most programmers
can program. So this completely artificial evolution process beat most computer scientists
in their attempt to write from scratch the best sorting algorithm they could write. But
it made absolutely no sense. The code was full of all kinds of--it was a very large
code base. It had all kinds of crazy stuff in it that may have made some sense and he
never actually got around to figuring out how it works. The same is true in neural networks,
an area I did some research in. So it sensitizes you to the fact that it's the process
you learn about, not the product, right? When you run another experiment, "let's try
that one again," you can tweak the mutation rate, you may tweak the selection pressure,
tweak whatever parameters of communication, if you will, in the growing system. But you
don't say, "Okay, here's the output and here's what makes sense; let's take what we've
built and build more on it." You really can't do that. So the learning is about the
process, not the system level, and the system is a black box defined by its interfaces.
What really is understandable, when you think about the brain, is the interfaces: the
sensory cortex, right? We can hack in from the outside. With something like the retina,
there's an interface to a world we know. So most brain studies come in through the
auditory cortex, the sensory system, the visual system, and that makes a lot more sense
than the core cortex itself. And maybe
what underlies this, and I'd be curious, maybe in Q&A, whether anyone has an argument
against it, because I find it fascinating, is that there's no mathematical shortcut.
There's no way to take an iterative algorithm, like a fractal, or rule 30 in a cellular
automaton, or evolution itself, and say: given the output of that process, run the feed
backwards and tell me how we got there, having only the output as your input. You have
only the artifact of creation, an evolved human. Show me how evolution got us here, right?
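Rule 30 makes the one-way-ness easy to see: running it forward is one line, while running it backward means searching for predecessors, of which a row may have many or none. A small sketch; the width and step count are arbitrary.

    def rule30_step(cells):
        n = len(cells)
        # new cell = left XOR (center OR right): the rule 30 update
        return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
                for i in range(n)]

    row = [0] * 31
    row[15] = 1                     # single seed cell
    for _ in range(12):
        print("".join(".#"[c] for c in row))
        row = rule30_step(row)

    # Forward: trivial. Backward: given only the final row, recovering the
    # path means searching 2**31 candidate predecessor rows at every step,
    # and several distinct rows can map to the same successor; information
    # about "how we got here" is discarded as the system runs.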
There's no term "reverse evolve" the way there is "reverse engineer," right? And our
intuition does not lend itself well to figuring out that process, because information
is lost at each step along the way. From a Wolfram sense of computational equivalence,
you can argue that such a shortcut will never be found, that there is no mathematical
shortcut, and that the logical depth, if you will, the complexity of an emergent system,
is really where all the action is. So what is this dichotomy of design I'm talking about? Again, just
to jump to the chase so I don't mislead people: I think the richness is in exploring where
this dichotomy doesn't hold, in not believing it has to be this break point. So I'm going
to describe it as, you know, diverging paths to the future, where, let's say, two different
research teams working on AI have very little to say to each other if one's going
the design route and the other is going the evolve route. But I want to argue that hopefully
someone can figure out a way to break that dichotomy. So when you design something, you
generally have control. What you build is very brittle (just think of Windows) versus
things that are evolved, and evolved things can transcend, in terms of complexity, their
antecedents, or their creators if you will; you can create a process that creates
complexity more easily than you can create the complexity itself. And, not to denigrate
all of computer science, but these are generally simple problems compared to really
complex problems. Just for a sense of the code-density difference: if you take the human
genome and burn it onto a CD, even without any sophisticated compression algorithm, just,
you know, brute-force encoding, it's much smaller than Microsoft Office. That says a lot
about the different methodologies of code representation and, of course, about the value
and power of the interpreter in the case of the human genome.
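The back-of-the-envelope version of that claim, with round-number assumptions rather than measured sizes:

    genome_bases = 3.1e9        # approx. haploid human genome length
    bits_per_base = 2           # A, C, G, T: two bits, no compression at all
    genome_mb = genome_bases * bits_per_base / 8 / 1e6
    print(f"raw 2-bit genome: ~{genome_mb:.0f} MB")     # ~775 MB

    cd_mb, office_mb = 700, 2000   # a CD-ROM; an Office install, roughly
    # Even this naive encoding is far smaller than Office, and the mildest
    # compression squeezes it onto a single CD.
    print(genome_mb < office_mb, genome_mb / 1.2 < cd_mb)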
Subsystem clarity: not always, but usually, when you design something, if you're good
about commenting your code, and even if you're not good at commenting your code, other
humans can usually figure out what you're up to most of the time, unless you're really
trying to obfuscate what you're doing. And that allows for portability, modular reuse,
object orientation in programming.
Everything comes out of this, right? The whole field of computer science focuses on making
that more efficient, right, in a compartmentalized way. Or much of computer science; I
shouldn't make such sweeping generalizations. But it is in drastic contrast to the
hierarchical subsumption architecture of, let's say, Rodney Brooks, the way he builds
robotic control systems. And, I
don't know if I'll mention this now, but if we think about the AI example, there's
potentially a path dependence that makes a lot of sense. Some would argue you can't build
an AI unless you build a physical robot for it to be instantiated in; that the brain in
a box would never work, and you'd have all kinds of weird, non-recognizable intelligence
emerging. That's a whole other interesting side debate, but Rodney, I believe, would
agree with that. And how you get there is very interesting. If you do a series of
iterative selection pressures, then inevitably, when you're talking about intelligence,
do you build in a survival instinct, right? If only those organisms that survive some
selection pressure are selected for, then regardless of your attempt to, you know, evolve
for friendliness, or for sacrificial, I, Robot-like rule codes, it may be an inevitable
outcome that any evolutionary algorithm creates a survival instinct, which is kind of interesting.
And there are also co-evolutionary elements; back to the point about not being able
to transcend the complexity of your selection pressure. If you evolve anything in a
restricted environment, it will only work, and only be robust, resilient, and adaptive,
in that environment. The moment you take it out of that environment, it could just get
completely slaughtered. I'll
give you a great example. There's this group in the Midwest that was evolving FPGAs to
do some interesting task; I forget what it was, some computational task. These are digital
circuits with, you know, an SRAM bank and some gates, but you can put code in, and the
code reprograms the wiring and the function of the circuit; the code effectively is the
circuit design in an FPGA. And they had banks and banks of these, running this set of
genetic algorithms and selecting for the best ones.
Then they found one that just blew the doors off; one that, in fact, solved the problem
at hand better than any human design, better than was theoretically possible for that chip.
And they're like, "Oh, my god. This is unbelievable. The breakthrough has occurred." And they took
it somewhere else and it stopped working. And they're like, "Oh, yeah, it's not robust
outside of its environment. Maybe the temperature was off, or there was vibration in
the floor, who knows what, right? Okay, so it didn't work when we moved it. Let's take
it back, put it back where it was. Okay, it starts working. What is going on?" It had
found a way to RF-couple to neighboring cells and engage their computational resources
in a completely undocumented manner. "Life will find a way," from Jurassic Park, comes to mind.
So what are some of the implications of this? Well, I don't think Jeff Hawkins' cut-and-paste
thing is easy. I don't think Ray Kurzweil's idea that you upload your consciousness to silicon is
so easy. Take phantom limb pain, right? You know, a lot of amputees, a large percentage
of them, have this horrible ringing effect given their neuroplasticity where they have
this excruciating pain in the missing limb area even though the limb is not there and
they just want to scratch at something that's not there. Those kinds of effects live at
the interfaces, right? We might say, "Well, we've got the brain in the box, but that brain
in the box may be ringing horribly with its sensory system, unless we really understand
how the sensory cortex works." That's just an AI kind of geeky point, but the most
important point is the second
one: that in all of this, and maybe it's obvious, you don't learn about what you're
building; you learn about the process of building it, which I think is important and
will have some implications for management as well. This co-evolutionary island thing
I don't think I'll get into so much, but the differential-immunity concept carries across
here, too: if you evolve something in a lab, it's not just going to be ready for the real
world. It does beg the question: do we need to have these technologies robustly among us
from day one for them to ever become intelligent in a way that we would understand? Oh, and then one thing,
maybe one last funny anecdote on this slide, about the uploading folks. There's a handful
of people who believe that we will do better and better brain scans, and nondestructive
methods of reading the neural states, everything down to every ion-channel state. I mean,
it's massive in complexity, it's mind-boggling, but, you know, they say, "Hey, follow
the curves; one day we'll be able to do this." Why couldn't you reinstantiate that on a
silicon simulator? And an interesting thought experiment, in terms of race conditions to AI futures,
is: given FDA regulations if nothing else, what would you upload first? A primate, right?
You'd do a test to see whether this works before you try a human. And especially if it
were to be a disruptive upload, as a thought experiment, Brad Templeton has pointed out:
make sure you do bonobos and not chimps. By the way, bonobos, for those who don't know,
are these incredibly cool animals that aren't really seen in zoos much, because they mate
face to face and kids freak out; they're basically bisexual sisterhoods of hedonism, and
they do everything with each other that you could imagine you'd see in a porn flick. And
they have a wonderful social structure. They don't fight nearly as much as chimps, which
are like the opposite: male-centric kind of battle bots. So, imagine you upload a primate, a bonobo, let's say. The
pace of silicon progress will be so great that you wouldn't bother to reverse engineer
it. You wouldn't bother to ask how it worked or didn't; you'd just start to run the
silicon faster. And it could very well be that the AI that takes over wouldn't even be
human-based, for that reason: in the race condition, by the time someone gets around to
uploading a human, the human upload hasn't had time to catch up. By the way, here's an
interesting little time capsule from 2007, for those who don't realize where the state
of the art is today, on attempts to unify some of these things, let's say evolving for
purposeful outputs. John Koza has gone more into
politics of late, but he was breeding programs for quite a while, and he found that any
application space where it's very easy to validate a good solution but very difficult to
think of one, like analog circuit design or antenna design, lends itself greatly to these
kinds of algorithms, right? And so what he did is breed programs to build these things;
some of them have flown in space, an antenna that flew in space. It was this cockamamie
thing that no one understands why it works, but it was the best antenna for a given need,
or a given analog circuit design. And he's already gotten patents on this. So, you know,
back in 2007 he was starting to get patents, reproducing many U.S. patents, human-patented
inventions, with no input beyond, you know, "Here's what the filter needs to do; find the
filter." But then he got new designs that were actually winning patents as being novel.
And he believed that in this domain he was routinely delivering human-competitive machine
intelligence. Another example that I like, from one of our companies,
is the way in which you can design for evolvability, and this will be the last example that I use.
Imagine you're trying, again, back to synthetic biology, to build an organism that does
something useful, and the most common example is making a chemical you want. In this case
it's butanediol, which is a major precursor of Spandex and automotive plastics. So the
Spandex of the future may be built from sugar, and may be green, through their work. The
goal is: how do you make the microbe, via the metabolic pathway shown here, do what you
want? So they started off with simulation tools. They simulated all 40,000 different
pathways to get from A to B, and analyzed, in a mathematical and engineering sense, which
is the most carbon-efficient, the most efficient use of energy. One design team worked
on that and got about a five-fold improvement in yield of the chemical they wanted, from
the start. By the way, the chemical is not made naturally in any organism; it's a
completely synthetic pathway. A different team in the same company instead selectively
crippled the organism. For this design they didn't pick the most efficient pathway
through; they picked one that would allow them to knock out all the other pathways of
survival, so that it can't eat a variety of feedstocks, you know, it can only live on
certain sugars. It's an incredibly weak organism, poorly suited to survival, but it
happens to make the chemical you want as a byproduct. And then they just skimmed the
fastest growers for three months: take whatever organisms are growing fastest, put them
in a new beaker, give it another day, put those in a new beaker. And after a few months
they had improved the chemical yield 20,000-fold.
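The skimming loop itself is almost embarrassingly simple to write down. This simulation is illustrative only, with invented growth rates and mutation sizes; it shows the enrichment mechanism, not the real process.

    import random
    random.seed(3)

    population = [1.0] * 1000    # relative growth rates, identical at start

    def daily_passage(pop):
        # Grow: faster strains become proportionally more common.
        grown = random.choices(pop, weights=pop, k=len(pop))
        # Mutate slightly, then skim: seed tomorrow's beaker from the top 10%.
        mutated = [max(0.01, g * random.gauss(1.0, 0.03)) for g in grown]
        cutoff = sorted(mutated, reverse=True)[len(mutated) // 10]
        winners = [g for g in mutated if g >= cutoff]
        return random.choices(winners, k=len(pop))

    for day in range(90):        # roughly "three months" of daily transfers
        population = daily_passage(population)

    print(f"mean relative growth rate: {sum(population) / len(population):.1f}")
    # No one designed the winning strain; the loop found it.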
So: human design, a five-fold improvement; random search, a 20,000-fold improvement.
They have no idea why or how, but what's interesting is they don't need to. And the
process may keep improving even as they scale. One
of the problems with synthetic biology is that you have a lab-scale demo, where the
surface-area-to-volume ratio of a beaker is different from that of, let's say, a huge
tank, and all of a sudden the process falls apart when you try to scale up, say, the
yeast fermentation process. And scaling up in biochemicals and biofuels
has always been a risk factor that scares people. Well, with this evolutionary approach
in a way, you know, if the temperature varies, if the environment varies, they get better
and better over time. So it's not just that over three months they got a 20,000-fold
improvement; it got better a month later, and a month later, and so on and so forth, so
it could be more robust, resilient, and capable of deployment. I'll just end with one quote. Danny Hillis
said, on the last page of his computer science summary book, that he believes the greatest
challenge will be to create tools that go beyond engineering, to create more than we can
understand. And then there's a Kevin Kelly line I like: that we are, in fact, evolving
evolvability. I'll leave
it at this. I'll go to Q&A. I'll just leave these four bullet points up there to say that
if you think about how you manage people--actually, I'll just make these four points then and
I'll be done. If you want to exploit the wisdom of crowds to be innovative as an organization
there's certain policies you might want to consider, like diversity is more important
than ability of the individuals. This all falls out of the same analysis in genetic
algorithms and such. Disagreement in the team about a topic is more important than
reaching a consensus, if you want to find disruptive, innovative change. Your voting
policies, what it takes to commit resources to a project or not, are more important than
having any coherence or comprehensibility in what you're actually doing. And the function
of management is to tune the parameters of communication: when do we meet, how do we
communicate, what's the fan-out, and, most importantly, what's the maximum group size,
you know? Teams of three to seven get stuff done; with anything larger than that,
productivity goes through the floor.
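One plain-arithmetic reading of that group-size cliff: pairwise communication channels grow quadratically with headcount. A sketch, not an organizational law:

    for team in (3, 5, 7, 15, 50, 150):
        channels = team * (team - 1) // 2
        print(f"{team:4d} people -> {channels:6d} pairwise channels")
    # 7 people share 21 channels; 150 share 11,175. Past some point the
    # coordination overhead, not the work, is what fills the day.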
That's more important than direction-setting. It's not leaders doing the symbolic
leadership stuff; it's setting up the parameters for a hive mind to emerge in the
organization. So, what have I tried to share with you today?
I think accelerating technological change continues forever. It leads to an
interdisciplinary renaissance of learning, which is fascinating and exciting just to be
a student of, and it is now reaching trillion-dollar markets where it hadn't before,
revolutionizing energy, fuels, chemicals, and such. And it is the perpetual driver of
disruption. I think you can count on it, 50 years from now, to create new entrants, new
entrepreneurs, and therefore time to build new companies. So, to make this tangible with
a Google example: I do not believe Google will ever reinvent search. I don't think Google
would ever be expected to disrupt search marketing, or search as it exists today; you
would never expect Google to do that. But to reinvent the datacenter and how computers
are built? Absolutely, right? In terms of deciding to punt on ever using an Intel
chip again, Google could absolutely lead the charge there. Just like in music, right? Apple
was a new entrant with the iPod and iTunes, right? It wasn't going to be Sony. Sony would
never revolutionize the Walkman. It took a new entrant. New entrants can be big companies
like Apple. So, again, if you want to think about how and where these theories apply,
it's not to anything that you have already researched. And I think the Technium, as
Kevin Kelly calls it, is moving increasingly from design to search. So, on a positive
note: concatenating code that already exists, rather than designing new code, is, I
think, going to be the paradigm of the future. So let me leave you with that, so we have
time for questions. Thank you.
>> ASTRO: Thank you, Steve. >> JURVETSON: One minute. One minute for questions.
>> ASTRO: So we have run over a little bit, but we're willing to take a couple of questions.
>> JURVETSON: I'll leave this up there, so. >> That was really interesting. And with all
the talk of technological progress increasing in speed and everything, how concerned are
you about, you know, the technological singularity where we become unable to predict the future
or model our future because of things that are smarter than us in charge?
>> JURVETSON: Right. That's a good question about the--well, there's one subset of the
risk of singularity, other people are scared of it. There's sort of, for those not familiar,
just imagine exponential does continue forever and never becomes an S curve, what is the
sort of ultimate endpoint of that completely life changing event, Ray Kurzweil writes about
this quite a bit, where perhaps, you know, all matter, you know, gets really compiled
in some computronium in the future, you know, just really crazy out of the bend thinking.
When I say crazy, crazy to our thinking, right? So, again, all great ideas are considered
crazy so don't take that as a negative. But specifically, this idea of us not being able
to understand it, made it necessitate the AIs that I was describing. That we may in
fact need to birth technologies that can assist us in this. And in fact, we may not even need
to do that, right; some argue the Internet has already got all the attributes of a living
organism, all the attributes of life, you know, and those who study the Internet in its
totality have to resort to the kinds of techniques that biologists use, not the techniques
that computer scientists use, to make sense of the patterns. So let me phrase it a
different way. If someone were to ask you, "Would you want your great grandchildren
to be smarter, healthier and more capable than yourself?" Yes, that sounds good. If
instead you say, "Hey, there's going to be machine intelligence next month that's going
to be healthier or happier and more intelligent than yourself," that feels threatening, right,
and most sci-fi novels are based on it. But I think once we shift our sense from an entitlement
that humans are the endpoint of evolution to a sense of parenthood that we're going
to create something greater than ourselves, then we could actualize our universal quest
for symbolic immortality. I think every human wants to be symbolically immortal, meaning
they create something that will outlast their short time on this planet: a work of art;
their children, most tangibly; a company that will live on past the founders
(Hewlett-Packard being, you know, the easy-to-remember example of something like that);
or belief in the afterlife, if all else fails. And I think creating those artifacts will
be one of our most powerful vectors of transcendence: to say, you know, we didn't end
evolution. Rather than some clinging motion of saying, "God, it has to include us,"
hopefully it will find some way to include us. Will it be augmented intelligence or
artificial intelligence that we create? I think that's a profound debate. And there's
one more. Maybe that would be
the last question if--okay, last question. >> So you had one slide about managing
disruptive innovation, and you mentioned the challenges of group size; you know, that
teams of two or three are very efficient and larger teams aren't.
>> JURVETSON: Right. >> What do you think about the challenge of
innovation in like social technology to allow us to get more brains working on the same
problem in parallel? I feel like there's a lot of opportunity there that's kind of outside
this whole branch of physical technology, if you--what are your thoughts on that?
>> JURVETSON: I think it's a great question, because clearly and historically in meet space,
you know, it's physically getting together for meetings that those are metaphor are not
virtually and to do some of the lightweight periodic basis. So let me first say a little
bit more about the group size and then see if a thought occurs here because I don't have
a quick answer other than, "Yes, that sounds exciting and I'd love to hear what your thoughts
on it." Because as a backup, venture capitalists almost never come up with an innovation or
an idea or the solution to a problem. But what they try to do is recognize big problems
like when some smart person does to glum unto it and say, "That does sound like it could
work," you know, that's where the mind goes. So I don't know the answer to almost any useful
question other than to share the enthusiasm, so the--actually, I would say groups of three
to seven. Boards of directors, in my experience: with more than seven, they're horrible.
I do believe there's some attempt here to keep programming group sizes down low. Two
weeks ago I had a whole weekend with Bill Gates, which was fascinating. He is just sort
of off the record on all subjects; unbelievable. But one of the things he did share,
which is no surprise, is that he thought Microsoft was never as productive or as fun as
when it was five people, and he thinks they are just a complete train wreck right now.
No surprise. So at least he's honest now; you know, he's not behind the corporate veil,
he can speak openly. And you know, you end up inevitably putting in policies that, you
know, migrate to the mean. You don't have outlying experiments running, and it gets
difficult. Some organizations, by the way, keep fighting that: Gore, a manufacturing
company, actually physically moves manufacturing plants when they get over 150 people,
and think how expensive that is for a manufacturing company, but the logic is that no
one knows the names of their coworkers beyond about a hundred. That's where startups
usually hit a major growth curve, I mean, a growth hiccup: the size at which you no
longer know the members of your tribe in the same way, your management style has to
change, and the transition gets very unpleasant. So, back to how can social
software help with this, you know, I'm a big believer that we have really barely scratched
the surface of how to exploit the hive-like mind in any organization, right? For a group
of people to be more productive than the sum of its members, there has to be something
going on at that higher level of abstraction that allows it, and so the questions I would
ask are: What should be the frequency of interactions? What should be the fan-out? Should
information flows be bidirectional or not? If you think about a cortical analogy, the
fan-out has a dramatic impact on learning phases. You may want to prune, like the brain
does: 10,000 synapses per neuron dropping to about 1,000 between the ages of two and
three. It's just kind of interesting, you know; you lose 90% of your synapses. Those
kinds of parameters could have a huge impact, and I generally think the best answer could
be to run experiments, because whenever you get into the social sciences, I think you can
run experiments on organizational form much more quickly than you can form the theory
for why they work.
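In that experimental spirit, even a toy model lets you sweep one communication parameter and watch the effect. The rumor-spreading model below is entirely illustrative, not a model anyone in the talk proposed.

    import random
    random.seed(4)

    def rounds_to_spread(n_people, fan_out):
        informed, rounds = {0}, 0
        while len(informed) < n_people:
            rounds += 1
            # Each round, every informed person tells fan_out random others.
            for _ in range(len(informed) * fan_out):
                informed.add(random.randrange(n_people))
        return rounds

    for fan_out in (1, 2, 4, 8):
        avg = sum(rounds_to_spread(100, fan_out) for _ in range(20)) / 20
        print(f"fan-out {fan_out}: ~{avg:.1f} rounds to inform everyone")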
Yes. >> ASTRO: Great. Thank you, Steve. I appreciate
you're taking the time for a talk here at Google.