>>Presenter: I first met Cory Doctorow in 1986 when he was working for Metallica and
trying to anticipate the coming Napster wars. People don't know this about him, but he is
a fervent defender of the RIAA and the MPAA. This has all been just a big lie, he says,
[laughter] to make people feel that he's one of us.
Recently, actually, I ran into him in Washington, DC, and he was telling me, "You guys do the
legal minimum of compliance with the DMCA. We think that you're just pirates." I said,
"You've been working for the RIAA now for 16 years. I don't see how you can possibly
say that with any sort of honesty." Then I realized who I was talking to, of course.
[laughs] [laughter] Realistically, I met Cory when he was doing
sugar water. Right? For the first time.
>>Cory Doctorow: [inaudible]
>>Presenter: Yeah. And then Zelig-like, I noticed as he rose as an author. I actually
don't want to spend much time introducing him. So I won't. Everyone, please welcome
Cory Doctorow.
[applause]
>>Doctorow: Hi folks. I give-- I write science fiction novels and stories. You've got some
of my short story collections there in front of you. They all relate, in one way or another,
they kind of circle these issues. I feel like science fiction stories put the sinew and
the marrow into the argument. Before George Orwell came along, if you wanted to talk about
surveillance, you could say, "I kinda feel like it might change my behavior if I were
being watched all the time in some abstract way." And someone else might say, "Yeah, but
if we knew everything about you, we could provide services to you and we could know
when bad things were going to happen" and so on. And now we have this great word we
can use to describe what that means. You can say it's Orwellian.
So there's now a lot of muscle on the bone when you talk about this stuff. That's what
I do in the fiction. But I don't want to stand here and read stories to you, although I have
a podcast where you can hear me read stories. What I'm going to do today is make the argument
that the books you're holding in your hand are the blood and sinew for, and take
it from there. The talk runs 35 minutes. And then there's
time for Q&A. The one thing I want to say as a caveat to this: I've given this talk
twice now. I gave it at the Long Now Foundation and I gave it at DEFCON, both in the last
week or ten days. Both times, there was a little bit of feedback. There's a hypothetical,
technical solution I propose, and I'll tell you when I get there. I want to clarify that
it is purely hypothetical by way of example, and not a thing that I think we should do.
With that said, [laughter] I'm going to get to it.
I gave this talk in late 2011-- Ha. Ah! There we go. I gave this talk in late 2011 at the
28C3 in Berlin called "The Coming War on General Purpose Computation." In a nutshell, the hypothesis
of that talk was computers and the internet are everywhere, the world is increasingly
made of computers and the internet. We used to have these separate categories of devices
like washing machines, VCRs, phones, and cars, and now we just have computers in different
boxes. Cars are computers we put our bodies into. 747s are badly secured Solaris boxes
connected to SCADA controllers. [laughter] Hearing aids, pacemakers, other prostheses: computers
we put in our body. That means that from now on, all of our socio-political
problems in the future are going to have a computer in the middle of them. That will
beget a regulator who says, "Can't you just make me a computer that solves the problem?
Can't you make me a self-driving car that can't be programmed to drag race? Can't you
make me a bioscale 3D printer that doesn't print out an organism that puts the human
race at risk or blows Monsanto's quarterly profits?" That is, "Can you make me a general
purpose computer that runs all the programs except for the one that pisses me off?" [laughter]
Now, we don't know how to make that computer. We don't have a theoretical model for Turing
Complete minus one. Our closest approximation to a computer that runs every program except
for the one that abets a criminal or evinces a social problem is a computer with spyware
on it when it comes out of the box. That is, a computer that watches everything that you
do all the time, so that when the moment comes, it can say, "I can't let you do that, Dave."
A computer that runs secret programs that the user isn't supposed to even know about.
If the user finds out about it, the user can't terminate these processes, even if the user
really thinks that they run contrary to their interests, and even if the computer that they're
running on belongs to the user. In other words, digital rights management.
Now, digital rights management's a bad idea for solving social problems for at least two
significant reasons. The first one is that it doesn't actually solve the problem. Breaking
DRM isn't hard for bad guys. As the copyright wars have shown us, digital rights management
is a solution that stops working within 24 hours. As soon as a bored Norwegian teenager encounters
the DRM, it goes away. DRM only works if the "I can't let you do that, Dave" program remains
a secret. Once the most sophisticated attacker in the world finds out that secret and puts
it on the internet, everybody else on the internet has the secret, too.
Now, the second reason is that DRM not only has weak security, but it weakens security.
In order to be secure, you need to be certain about what software is running on your computer.
You can't secure the software on your computer if you don't know what software is running
on your computer. When you design the "I can't let you do that, Dave" facility into a computer,
you create this enormous security vulnerability. You now have a program running that users
aren't even supposed to know about, and if they do know about it, they can't find the details of
it, terminate it, or override it. When some bad guy hijacks this, they can do things to your computer
that, by design, your computer doesn't show you.
You probably remember, Sony BMG put rootkits on 6 million CDs, across 51 audio CD titles,
and distributed them to their customers. They stealthily installed malware. The rootkit
made any process or file whose name was prepended with "$sys$" invisible to the file
manager and process manager. Immediately, malware writers started prepending "$sys$"
to their program files and their processes, because if they ever found themselves
on a computer whose immune system had been blown by the Sony rootkit, that immune system
would no longer even be able to see their process.
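To make the mechanism concrete, here is a minimal Python sketch of what that prefix cloaking amounts to. It is purely conceptual: the real rootkit hooked Windows kernel APIs rather than filtering listings in user space, and the filenames below are made up.

```python
# Conceptual sketch only: the real rootkit hooked Windows kernel APIs, but its
# visible effect was equivalent to filtering every file and process listing like this.
HIDDEN_PREFIX = "$sys$"

def visible_entries(entries):
    """Return only the entries a cloaked file or process manager would show."""
    return [name for name in entries if not name.startswith(HIDDEN_PREFIX)]

listing = ["report.doc", "$sys$drm_driver.sys", "$sys$someone_elses_malware.exe", "song.mp3"]
print(visible_entries(listing))   # ['report.doc', 'song.mp3']
# Anything named "$sys$..." -- including a third party's malware that adopts the
# same prefix -- simply vanishes from view.
```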
Now, once governments solve problems with DRM, there's this perverse incentive to make
it illegal to tell people things that might override the DRM, things like "This is how
the DRM works" or "Here's a flaw in the DRM that might allow an attacker to secretly activate
the microphone or turn on the camera or grab your keystrokes."
Now after I gave this talk at 28C3, I got a lot of feedback from various civil libertarians
and other people, including some very distinguished computer scientists. I got a very thought
provoking email from Vint Cerf after I wrote this, which really made my day. It led me
to the conclusion that within the fields of civil liberties and technology and policy,
there's a kind of good guy consensus that if you own your computer, you should be in
charge of what's running on it, at least as between you and corporations, or you and the
government. That mandating what software you may or may not run on your computer is just
not a good idea if it belongs to you. Now, most computers-- Let's examine, for a
minute, what it would mean, as an owner, to be able to absolutely control the software
that was running on your computer in an adversarial relationship against, not an advanced persistent
threat, but at least against script kiddies or griefers or just your garden variety deputy
dog cop who wants to screw with your computer. Most computers today are fitted with these:
TPMs, trusted platform modules, a secure code processor mounted on the motherboard. The
specification for TPM is published. There's an industry body that certifies that devices
that advertise a TPM actually have a real TPM in them and not a fake TPM. To the extent
that that spec is good, and to the extent that these people are diligent in doing their
jobs and sue people who list a device as having a TPM when it doesn't, it's possible to be
reasonably certain that if you think you have a TPM, you do have a TPM, and that it faithfully
implements the spec. TPM is secure. One of the ways in which it's
secure is that it has some secrets. But it's also secure in that it's designed to be tamper
evident. If you try to extract the keys from a TPM, it's supposed to be really obvious
that something has been done to your computer. If someone takes your real TPM out and puts in a
fake TPM that they 3D printed or cooked up in their hack lab or made down in Quantico,
it's supposed to be really obvious that it's happened. There's
a TPM threat model that crooks or governments or police forces or some other adversary try
to compromise your computer, and TPM tamper evidence lets you know when that's happened.
But there's another TPM threat model. It's that a piece of malicious software infects
your computer. When that happens, all the sensors that are attached
to your computer-- the mic, the camera, the accelerometer, the fingerprint reader, the
GPS, and so on-- can be switched on without your knowledge, and the data can be cached
on the device or sent to a bad guy, or both. Not only that, of course, all of the data
on your computer-- your sensitive files, your stored passwords, your web history-- can also
be harvested and sent to a bad guy, or harvested and cached for a later retrieval, as can all
your keystrokes. All the peripherals that are attached to your computer can either be
subtly altered, turned off, or turned on to do bad things. Today, those peripherals might
be your printer, your scanner, your SCADA controller, your MRI machine, your car, your
avionics, your 3D printer. You can understand why that would be a bit freaky, but of course,
in the future, those peripherals might also include your optic nerve, your cochlea, and
the stumps of your legs. When your computer boots up, the TPM can ask
your bootloader for a signed hash of itself and verify that the signature of the hash
comes from a trusted party, someone you trust. Once you trust the bootloader to faithfully
perform its duties, you can ask it to check the signatures on the operating system, which,
once verified, can check the signatures on the programs that run on it. And so on and
so on up the stack, ensuring that you know which programs are running on your computer,
and that any programs running in secret have gotten there by leveraging a defect in the
bootloader or operating system or the other components, and not because this computer
was designed to actually hide things from you.
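Here is a rough sketch of that chain-of-trust idea in Python. It is a toy: a real TPM uses PCR registers and asymmetric signatures, whereas this stand-in fakes a "signature" with an HMAC and a hypothetical signer name. The shape is the point: each stage is verified before it is allowed to vouch for the next.

```python
import hashlib
import hmac

# Toy stand-in for signature checking: real systems use asymmetric signatures
# and TPM measurement registers, not a shared HMAC secret.
TRUSTED_SIGNING_KEYS = {"OwnerChosenVendor": b"hypothetical-signing-secret"}

def sign(signer: str, image: bytes) -> str:
    key = TRUSTED_SIGNING_KEYS[signer]
    return hmac.new(key, hashlib.sha256(image).digest(), hashlib.sha256).hexdigest()

def verify(signer: str, image: bytes, signature: str) -> bool:
    if signer not in TRUSTED_SIGNING_KEYS:
        return False
    return hmac.compare_digest(sign(signer, image), signature)

bootloader_image = b"bootloader code"
os_image = b"operating system code"

bootloader_sig = sign("OwnerChosenVendor", bootloader_image)   # ships alongside the bootloader
os_sig = sign("OwnerChosenVendor", os_image)                   # ships alongside the OS

# The TPM-analog checks the bootloader first...
assert verify("OwnerChosenVendor", bootloader_image, bootloader_sig), "untrusted bootloader"
# ...and only a verified bootloader goes on to check the OS, and so on up the stack.
assert verify("OwnerChosenVendor", os_image, os_sig), "untrusted operating system"
print("chain of trust established")
```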
Now, this story always reminds me of Descartes: he starts off by saying that he can't tell
what's true and what's not true, because he doesn't know if he can trust his senses, he
doesn't know if he can trust his reason. He does some mental gymnastics, which I won't
get into here, although that's generally the thing people find interesting about him, but
what's interesting to me is that once he establishes this tiny little nub of certainty, a kind
of mental gymnastic exercise that says, "Well, I can trust my reason, I can trust my senses,"
he is able to erect this stable edifice of a worldview on it. He knows one thing to
be true, and everything else can be hung off of that one thing. He can build it up.
Now, a TPM is like that. It's a nub of stable certainty: if it's there, it can reliably
inform you about your bootloader, and thus, your operating system, and thus, the processes
running on your computer. Now, you may find it weird to hear someone
like me talking warmly about TPMs. After all, these are the technologies that make it possible
to lock phones, tablets, consoles, and even some PCs so that they can't run software of
the owner's choosing. Jailbreaking usually means finding a way to subvert a TPM. Why
on earth would I want a TPM in my computer? As with everything interesting in tech and
policy, the devil is in the details. Imagine for a moment that there's two different ways
of implementing a TPM. There may be more, but imagine these two. The first one we'll
call lockdown. In the lockdown world, your TPM comes with a set of signing keys it trusts,
and unless your bootloader is signed by one of those signing keys,
it won't run it. It won't boot, the operating system won't run. You're just stuck there
in whatever it is that people who installed the bootloader on your computer want you to
run. You can't change that. There's another mode that I'll call certainty.
In the certainty mode, you tell your TPM which signing keys you trust. The first time you
turn your computer on, you initialize it with some authentication token-- like
a key or a password or a biometric-- so it knows who it
belongs to. Then you, the owner, are the only person who gets to say what it trusts. You
can say, "I don't trust this person's operating system" or "I do trust that person's operating
system." "Only run operating systems that are signed by Cononicle, EFF, ACLU and Wikileaks.
[laughter] Approximately speaking, these two modes correspond to, of course, iOS and Android.
iOS only lets you run the code that's been approved by Apple. Android lets you tick a
box and say, "I'm a grown-up. Let me choose who I trust." Critically, Android lacks
an important facility: it lacks the facility to verify that what you think you're running
is what you are running. It's freedom without certainty.
Now, freedom without certainty is a big deal in a world where the computers we're discussing
can see you and hear you, where we put them in your pocket and take them into the toilet,
where they sit by your bedside, where they fly airplanes, where we put our bodies into
them, and they drive our cars around, which is why I like the idea of a TPM, provided
it's implemented in the certainty mode and not in the lockdown mode.
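One way to see the difference between the two modes is a sketch like this (hypothetical Python, with made-up key names): the data structure is identical in both cases, a list of trusted signing keys, and the only thing that changes is who gets to edit that list.

```python
class LockdownTPM:
    """Lockdown mode: the trusted-key list is baked in before you ever see the device."""
    def __init__(self):
        self._trusted = {"VendorKey"}            # fixed at the factory; the owner has no say

    def will_boot(self, bootloader_signer: str) -> bool:
        return bootloader_signer in self._trusted


class CertaintyTPM:
    """Certainty mode: the owner initializes the device and owns the trust list."""
    def __init__(self, owner_secret: str):
        self._owner_secret = owner_secret        # set on first boot
        self._trusted = set()

    def add_trusted(self, signer: str, secret: str) -> None:
        if secret != self._owner_secret:
            raise PermissionError("only the owner may change the trust list")
        self._trusted.add(signer)

    def will_boot(self, bootloader_signer: str) -> bool:
        return bootloader_signer in self._trusted


tpm = CertaintyTPM(owner_secret="correct horse battery staple")
tpm.add_trusted("Canonical", "correct horse battery staple")
print(tpm.will_boot("Canonical"), tpm.will_boot("VendorKey"))   # True False
```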
Now, if that's not clear, think of it this way: there's the war on general-purpose computation,
and that's what happens when control freaks in governments or companies decide that they
should have the final say in what you do on your computer. And then there's
the counter position, which is that the defenders against those people are
also control freaks, but they're control freaks like me. We want to be people with the ultimate
destiny over what we install on our computers. Both sides want control, they just differ
in where the nexus of control should be. Control requires knowledge. If you want to
be sure that songs that are moved onto an iPod stay on the iPod, and don't come off
of the iPod, the iPod needs to know that the instructions that it's getting are coming
from an Apple-approved version of iTunes, and not one pretending to be iTunes. Otherwise,
you don't get the roach motel. If you want to be sure that my PVR won't record a watch-once,
video-on-demand program, or if it does record a program, that it won't output it to anything
except something that will honor whatever business rules came along with it, you have
to be sure that you know what programs I'm running and what they do.
But if I want to be sure that you aren't watching me through my webcam, I need to know what
firmware is running, and I need to know that the little green light always comes on when
my webcam switches itself on. If I want to be sure that you aren't capturing my passwords
through my keyboard using a software keylogger, I need to know that the OS isn't lying when
it says there aren't any keyloggers resident in the system. Whether you want to be free
or whether you want to enslave, you need control and you need knowledge. That's the coming
war on general purpose computation. Now I want to investigate what happens if
we win it. That's the civil war over general purpose computation. Let's stipulate that
we have a victory for the "freedom side." It means that we have computers where owners
always know what was running on them, because the computers would faithfully report the
hash and the associated signatures for any bootloaders they find, and control over what
was running on the computer goes to you, because the computers would allow their owners to
specify who was allowed to sign their bootloaders, operating systems, and so on.
There are two arguments that we can make in favor of this victory, why this victory would
be a good one. The first one is a human rights argument. If your world is made of computers,
then designing computers to override their owner's decisions has significant human rights
implications. Today, there are people who worry that the Iranian government might demand
import controls, so that all the computers that come in have a UEFI-style locked bootloader that
only boots operating systems that have lawful interception back doors built in. You can
move the spying right to the edge, to the user of the computer, the owner of the computer.
But tomorrow, it may be that I live in the UK, and it may be that our Home Secretary
says, "If the NHS gives you a cochlear implant, it has to intercept and report all the extremist
speech it hears." The human rights stuff is easy to understand.
The second argument comes from property rights. The doctrine of first sale is a very important
piece of law. It says once you buy something, you own it. You should have the freedom to
do anything you want to it, even if it gores the ox of the person who sold it to you. DRM
opponents like me, we love the slogan, "You bought it, you own it."
Property rights are an incredibly powerful argument to have on your side, anywhere. But
they're especially powerful to have on your side in a nerd fight, because you can't swing
a cat in Silicon Valley without hitting someone who thinks that property rights are an important
way of solving most social problems. But it's not just nerd fights. Copyfighters get really
pissed off about the term "intellectual property," because property is also a really good way
to win arguments in policy circles. Before the term "intellectual property" came into
prominence, we had other terms like "creators' monopoly". It's very hard to go to a regulator
or a lawmaker and say, "My monopoly isn't large enough", but going and saying, "My property
rights aren't being respected enough, or need to be expanded so that I can make sure that
they're policed adequately," that's a very powerful argument to have on your side.
That's where the civil war part comes in. Human rights and property rights both demand
that computers not be designed for remote control by corporations or governments, and that owners
be allowed to specify the OS and the programs running on them-- to freely choose that nub
of certainty in the void that allows them to build their whole stable edifice of certainty
on. Now, remember that security is relative. You
are secured from attacks on your ability to freely use your music if you can control your
computing environment. But, if you can control your computing environment, the Recording
Industry Association of America is now vulnerable to attacks on their ability to rent you music
on a single-use basis. We have this notion of streaming, this consensus hallucination
that there's a difference between a stream and a download, as though there's some means
of transmitting a stream of bits to someone's computer without actually having them download
that stream of bits, like the internet is made of mirrors and speaking tubes. [laughter]
We say "Stream", we mean "I think that your receiving software doesn't have a 'save as'
button." Now, if you get to choose the nub from which
the scaffold dangles, you get control and power to secure yourself against people who
attack your interests. If the Recording Industry Association of America, or the government,
or Monsanto get to choose the nub, then they get control and the power to secure themselves
against you. So we all agree that at the very least, owners should control what runs on
their computers, or I'll ask you to stipulate that.
Now, what about users of computers? Users of computers don't always have the same
interests as the owners of computers. Increasingly, we will be users of computers that we don't
own. Where you come down on the conflict between owners and users of computers, I think, is
going to end up being one of the most important both technological and moral questions of
the coming decades. There's no easy answer I have, no bright line, for when users or
owners should trump one another when it comes to computers.
Let's start with a position I'll call "property maximalism": "If I own my computer, I should
have the absolute right to dictate terms of use to anyone who wants to use it. If you
don't like it, find someone else's computer to use. This one's mine. I set the rules."
How would that work in practice? Well, you got some combination of an initialization
routine where you set the root of trust, tamper evidence, law, and physical control. For example,
you turn on your computer for the first time, and you initialize a good secret password,
possibly signed by your private key, and without that key, no one is allowed to change the
list of trusted parties who are allowed to sign your bootloader. We can make it against
the law to subvert this for the purpose of taking control away from the owner. That makes
writing malware that hijacks your computer extra special, super duper illegal, but it
also makes stealthy DRM installation even more illegal. We can design the TPM so that
if you remove it, or tamper with it, it's really obvious. You give it a fragile housing,
so that when it's changed out, you can tell at a glance that it's happened. Then, if you
still trust physical locks, you can put it under lock and key, too.
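As a purely illustrative sketch of the tamper-evidence piece (the serial number and signer names are invented): if the device exposes a short fingerprint of its trust configuration, the owner can note it down after initialization and spot at a glance when the trusted-signer list has been changed behind their back.

```python
import hashlib

def trust_fingerprint(device_serial: str, trusted_signers: list) -> str:
    """A short fingerprint of the device's trust configuration that the owner
    can write down (or compare at a glance) after initialization."""
    material = device_serial + "|" + ",".join(sorted(trusted_signers))
    return hashlib.sha256(material.encode()).hexdigest()[:12]

# Owner initializes the machine and records the fingerprint somewhere safe.
original = trust_fingerprint("TPM-0042", ["EFF", "Canonical"])

# Later, someone quietly adds a signing key the owner never approved...
tampered = trust_fingerprint("TPM-0042", ["EFF", "Canonical", "MysteryIntercept"])

# ...and the mismatch makes the tampering evident.
print(original, tampered, original == tampered)   # two different values, then False
```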
Now, I can see a lot of benefits to this, but there are unquestionably some downsides
to giving owners absolute control over their computers. One wedge issue is probably going
to be a self-driving car. There's a lot of these around already. They come out of places
like this and other places. It's easy to understand, on the one hand, why self-driving cars would
be insanely great. We are terrible drivers. Cars totally kill the shit out of us. [laughter]
They are the number 1 cause of death in America for people aged 5-34. I saw my friend, Katherine
[indistinct], last night. She pointed out that it's also the number 1 way for humans
to kill other humans. If you kill another person in your life, you're almost certainly
going to do it with a car. I've been hit by a car. I've also cracked up a car. I'm willing
to stipulate that humans have no business driving. It's also easy to understand how
we might be nervous about the prospect of people homebrewing their own self-driving
car firmware. On the one hand, we do want the sourcecode for these cars to be open,
public, and subject to scrutiny so that defects can be discovered, so that hidden features
that may act against their owners' or users' interests can be quickly found out. You'd
want to know it if there's a kill switch built in. You'd want to know it if your car secretly
drives you past more McDonalds when the kids are in the back seat. It's going to be plausible,
I think, to say, "Cars are safer if they have a locked bootloader and if that bootloader
only runs a firmware that's been signed by the Department of Motor Vehicles or by the
FTC." But now we're back to the question of whether you get
to decide whether your computer is running the software that you want it to run. Now,
there are two problems with this solution, the solution of giving the state a veto over
your self-driving car. The first one is that it won't work. As the copyright wars have
shown us, firmware locks aren't effective against dedicated attackers. People who want
to sow mayhem with custom firmware will be able to. We need a security model that doesn't
believe that all the other cars on the road are going to be well-behaved. If that's our
security model, then we are all dead meat. Self-driving cars must be conservative in
their approach to their own conduct, and liberal in the conduct they expect from others, a venerable
principle familiar to people who work in computers, and also, the advice that you got on your
first day of driver's ed. And it remains good advice today.
Now, the second problem with this is that it invites some pretty sticky parallels. Do
you remember the information superhighway? Now, if we can justify securing physical roads
by demanding that the state or a state-like entity gets to certify the firmware on our
cars, how would we articulate a policy explaining why the devices on our equally vital virtual
roads, our information superhighways, shouldn't also be locked with comparable firmware locks
for PCs, phones, tablets, and other devices? After all, we have a general-purpose network
now. That means that MRIs, space-ships, and air-traffic control systems share the information
superhighway with Game Boys, Arduino-linked fart machines, and the dodgy voyeur cams sold
by spammers from the Pearl River Delta. In addition to that, you're going to have
more wedge issues. You'll have things like avionics and power-station automation.
These are a lot trickier. If the FAA mandates a certain firmware for a 747, it's probably
going to want those 747s designed so that the FAA and the FAA alone gets to choose what
runs on it. Just as the Nuclear Regulatory Commission is going to want the final say
on the firmware for a reactor pile. This may be a problem for the same reason that a ban
on modifying firmware in self-driving cars is. Once you start saying it's the place of
government to sign and certify firmware on computers that they don't own, it invites
people to find other computers that the government should sign the firmware for. But on the other hand,
cars and nukes exist in a completely different regulatory framework to most of the other
computers we use. Or rather, planes and nukes. Remember, a 747 is just a Solaris box. A nuke
is just a specialized computer as well, with a particularly exotic housing.
It may be that since these things already exist in this regulatory regime where they
have no-notice inspection and so on, that adding signed firmware locks is not going
to be something that invites comparisons to all the other computers in the world.
But there's a bigger problem with owner control. What about people who use computers, but don't
own them? This is not a group of people that the IT industry as a whole has a lot of sympathy
for. We spent an enormous amount of energy as a group, devoting ourselves to stopping
non-users-- or non-owners from harming owners. Users can do things like inadvertently break
the computers they're using, they download menu-bars, they type random shit they find
on the Internet into terminals, they plug malware-infected USB sticks into their computers,
they disable the firewalls, they install plugins or add repositories or add certificates to
their machine's root of trust, they punch holes in the network perimeter by accident,
and they accidentally cross-connect networks that are absolutely, positively not supposed
to be cross-connected. We also try to stop users from doing deliberately bad things,
like installing keyloggers and spyware to attack future users, misappropriating secrets,
snooping on network traffic, deliberately breaking their machines, deliberately punching
holes in the network perimeter, deliberately disabling their firewalls, deliberately interconnecting
networks that are supposed to remain secret-- separate, rather.
There's a kind of symmetry here. DRM and its cousins are deployed by people who believe
that you can't and shouldn't be trusted to run the code that you want on your
own computer. IT systems are deployed by computer owners who believe that computer users can't
and shouldn't be trusted to set policy on the computers that they use. Now, as a former
systems administrator and a former CIO, I'm not going to pretend that users aren't a terrible
challenge. But I think that there are good reasons to treat users as having rights to
set policy on the computers that they don't own.
Let's start with the business case, because I think that's the easy one to make. When
we demand freedom for owners, we do so for lots of reasons, but one of them is the possibility
that programmers won't have anticipated all the contingencies that their code might run
up against. There may be a day where the code says no and the owner needs to say yes. Owners
sometimes possess local situational awareness that can't be captured in nested "if-then"
statements, no matter how deeply you nest them.
This is where communism and libertarianism both converge. This guy, Hayek, thought that
expertise was very diffuse, and that you were more likely to find the situational awareness
necessary for good decision making very close to the decision itself. Devolution gave you
better results than centralization. And then there was this guy, Marx, who believed in
the legitimacy of workers' claims over their working environment, saying that the contribution
of labor was just as important as the contribution of capital, and demanding that workers be
treated as the rightful "owners" of their workplace, with the power to set policy. For
totally opposite reasons, they both believed that the people at the coal-face should have
the first cut at running the operation. The death of mainframes was attended by an
awful lot of Sturm und Drang and hand-wringing and concern over users and what they were
going to do to the enterprise. In those days, users were even more constrained than they
are today. They could only see the screens the mainframe let them see, and only undertake
the operations the mainframe was programmed to let them undertake. When the PC and Visicalc
and Lotus 1-2-3 appeared, employees risked being fired by bringing these machines into
their offices, or bringing home office data to use with these machines. They did this
because they had a computing need that couldn't be met within the constraints set by their
employer and its IT department, and because they didn't think that the legitimacy of their
request would be recognized. The standard response to a request from an
employee to do something that the IT department doesn't like is one or more of: "Regulatory
compliance prohibits you from doing the thing that you think will help you do your job better"
or "If you do your job that way, we won't know if you're doing it right" or "You only
think you want to do your job that way" or "It's impossible to make a computer that works
the way that you think it does" or "Corporate policy prohibits you from doing it."
Now, these may be true, although sometimes they aren't. And even when they are, they're
the kind of "soft truths" that we pay bright young things millions in VC money to try to
falsify, while if you're a middle-aged admin assistant, you merely get written up by HR
for doing the same thing. The personal computer arrived in the enterprise
through the back door, over the objections of the IT department, without the knowledge
of management, at the risk of censure and termination. It made the companies that fought
it trillions. The reason that giving workers more powerful, more flexible tools was good
for firms is that people are generally smart, and they generally want to do their jobs,
and because they know stuff that their bosses don't know. As an owner, you don't want the
devices you buy locked, because you might want to do something the designer didn't anticipate.
And employees don't want the devices that they use all day locked, because they might
want to do something that their bosses didn't anticipate. This is the soul of Hayekism:
that we're smarter at the edge than we are in the middle.
The business world pays a lot of lip service to Hayek's 1940s ideas about free markets.
But when it comes to freedom within the companies they run, they're stuck a good 50 years earlier,
mired in the ideology of Frederick Winslow Taylor and his notions of "scientific management":
The idea that workers are just particularly unreliable kinds of machines whose movements
and actions should be scripted and constrained by all-knowing management consultants, who
would work with the equally wise company bosses to find the one true way to do their jobs.
In other words, the exact same ideology that let Toyota cream all three of Detroit's big
automakers during the 1980s. Letting enterprise users do the stuff that
they think will allow them to make more money for their employers often results in making
more money for their employers. For the record, scientific management is about as scientific
as trepanation and Myers-Briggs tests. [laughter] The business case for user rights is a good
one, but I really wanted to just get it out of the way so we could dig into the real meat
of the argument: the human rights case. This may seem a little weird on its face, but bear
with me. This is a guy named Hugh Herr, and I saw him give a talk earlier this year. He's
the Director of the Biomechatronics lab at The MIT Media Lab. You may have seen him do
a TED talk. There's a bunch of them on YouTube. It's electrifying to see him give these talks.
You should go and watch one after this. He starts out with a bunch of slides of cool
prostheses his lab has cooked up. There's legs and feet, and hands and arms, and even
this awesome thing that if you have untreatable clinical depression, they stick your head
in a magnet, and the magnet suppresses activity in the parts of your brain that are overreacting,
and people with untreatable clinical depression become treatable. It changes their lives and
brings them from the brink of suicide back into a happy place.
Then he shows this slide of him, and he's climbing up a mountain. You can see he's clinging
to the mountain like a gecko. He's super buff. He clearly knows what he's about. And he doesn't
have any legs, he just has these awesome mountain climbing prostheses. Now, he's been standing
at a podium like this. In fact, he does it wearing a little lav mic or an ear mic, and
he walks up and down while he's giving the talk. Then he stops and he says, "Oh yeah,
didn't I mention? I'm a robot from the knee down. I lost my legs to frostbite. These are
my legs." Then he does this cool thing. He runs up and down the stage, jumping up and
down like a mountain goat. It's the coolest thing you've ever seen.
When I saw him give this talk, the first person who asked a question stuck their hand up and
said, "So what do those cost?" He named a price that would buy you a brownstone in Manhattan
or a nice terraced Victorian in Zone One. A pretty penny. The second question that was
asked was, "Whose going to be able to afford these?" And he said, "Well, of course, everybody.
If it's a choice between owning legs and owning a house, you'll take the 40 year mortgage
on your legs." Which is by way of asking you to consider the possibility that there are
going to be people, potentially a lot of people, potentially you someday-- remember, we are
only temporarily able-bodied-- who are "users" of computers that they don't own, where those
computers are going to be parts of their bodies. I think that most of the tech world should
be able to understand why you, as the owner of your cochlear implant, should be legally
allowed to choose the firmware that runs on it. After all, when you own a device that
is surgically implanted in your skull, it makes a lot of sense that you have the freedom
to change software vendors. Maybe the company that made your implant had the best algorithm
for signal processing at the time that it was stuck in your head, but what if a competitor
patents a superior algorithm next year? Should you be doomed to inferior hearing for the
rest of your life or the 20 year span of the patent, whichever comes first?
This is a problem that can't be overcome merely by escrowing the code of important embedded
systems. That might help you if the company goes bust. It also can't be helped by code
publication, the thing you would want anyway for your cochlear implant, just to make sure
that it was good code. This is a problem that you can only overcome by having the unambiguous
right to change the software, even if the company that made your implant requires you
not to. So that helps owners. But what about users?
Consider the following scenario: you are a minor child and you have deeply religious
parents who pay for your cochlear implants. They ask for the software that makes it impossible
for you to hear blasphemy. You are broke, and a commercial payday loan company wants
to sell you ad-supported implants that listen in on your conversations and insert contextual
ads that trigger discussions about the brands you love. Or your government is willing to
install cochlear implants, but they want to archive everything you hear and review it
without your knowledge or consent. It sounds far-fetched, but remember, the Canadian border
agency, just a few months ago, had to be slapped down from its plan to put hidden microphones
through the entirety of all of the country's airports, so they could listen in on and record
all the conversations taking place in every airport in real time and later. Will the Iranian
government, will the Chinese government, will other repressive governments take advantage
of this if they get the chance? Speaking of Iran and China, there are plenty
of human rights activists who believe that boot-locking will be the start of a human
rights disaster. It's no secret that there are high-tech companies who have been happy
to build "lawful intercept" back-doors into their equipment to allow for warrantless,
secret access to their communications. These backdoors are now standard, so even if your
country doesn't want the capability, it's still there.
In Greece, for example, there is no lawful interception requirement, but of course, all
the telecoms equipment they buy is made for jurisdictions in which there is. They just
don't turn on lawful intercept. During the 2004/5 Olympics, someone, we don't know
who, broke into the Greek telecom switches, turned on the lawful intercept capability,
listened in on the conversations at the highest levels in government, turned it off again,
and walked away. It's only because they didn't erase the logs that we know about it.
Surveillance in the middle of the network is nowhere near as interesting as surveillance
at the edge of the network. As the ghosts of Misters Hayek and Marx will tell you, there's
a lot of interesting stuff happening at the coal-face that never makes it back to the
central office. Even so-called "democratic" governments know this. This is why, for example,
last year, the government of Bavaria started illegally installing the "Bundestrojaner",
or the state-trojan, on people's computers, when they were of interest, something that
allowed them to access cameras, microphones, hard drives, and so on. And of course, it
was very badly written, so it allowed anyone else to do that, too. Once you were infected,
you were infected for everybody. It's a safe bet that the totalitarian governments
will happily take advantage of boot-locking and move surveillance right into the box.
You may not import a computer into Iran unless you limit its trust-model so that it only
boots up lawful intercept operating systems. Now, assume that we get an owner-controls
model, wherein the first person to use the machine gets to initialize its root of trust.
You still get the problem, because in Iran, every computer that comes into the country
is first opened by the customs authority, who installs a root of trust that's run by
the government. Because it's tamper-evident, even if you figure out how to override it,
the next time a snitch or a policeman looks at your computer, they can tell that you've
been up to something naughty, locking the government out of your computer.
Of course, repressive states aren't the only people who like this. There are four major
customers for the existing complex of censorware, spyware, and lockware. There's repressive
governments, there's large corporations, there's schools, and helicopter parents. That is to
say, the technical needs of protective parents, school systems, and enterprises are convergent
with the governments of Syria and China. I don't mean that they have the same ideological
grounds, but they have awfully similar technological means to attain their ends.
We are very forgiving of any institution that pursues those ends, provided that they're
doing so in order to protect either shareholders or children. For example, you may remember
that there was widespread indignation, from all sides, when it was revealed that employers
were asking prospective employees to turn over their Facebook login credentials. Employers
argued that they needed to be able to review your list of friends, what you said to them,
and what you did with them, in order to make sure that you didn't have any skeletons in
your closet that would compromise your ability to work for them. Facebook logins were fast
on their way to becoming the workplace urine test of the 21st century. A means of ensuring
that your private life didn't have any unsavory secrets lurking in it, secrets that might
compromise your work life. Now, the country wasn't buying this. From Senate hearings to
op-eds, the country rose up against this practice. But no one seemed to mind that many employers
routinely insert their own intermediate keys into their employees' devices-- their phones,
their tablets, and their computers-- that allows them to spy on their employees' Internet
traffic, even when it's "secure", with a little lock showing in their browser. This gives
your employer access to all the sensitive sites you access while you're on the job,
from your union's message board to your bank website to your Gmail to your HMO or private
repository managed by your doctor's office to Facebook.
Now, there's a wide consensus that this is okay because the laptop, the phone, and the
tablet that your employer issues to you are not your property. They are company property.
And yet, the reason that employers give us these mobile devices is because there is no
longer any meaningful distinction between home and work, between personal life and professional
life. Corporate sociologists who study the way that we use our devices have found consistently
that employees are not capable of maintaining strict boundaries between "work" and "personal"
accounts and their devices. And of course, in America, we have the land of the 55+ hour
work week, where few professionals take any meaningful vacation time, and when they do
get away for a day or two, they bring their Blackberry along.
Even in the old, predigital, traditional workplace, we recognized that workers had human rights.
We didn't put cameras in the toilets to curtail employee theft. If your spouse came by the
office on your lunch break and the two of you went into the parking lot so that she
or he could tell you that the doctor said the cancer was terminal, you would be rightfully
furious to discover that your employer had been listening in on the conversation with
a hidden mic and watching through a hidden camera.
But if you take your laptop on your lunch break and access Facebook and discover that
your spouse has left you a message saying that the cancer is terminal, you're supposed
to be okay with that because the laptop is your employer's property. There are plenty
of instances in which not just peons, but important and powerful people, not kids and
corporate employees, are going to find themselves users of computers that they don't own.
Every car-rental agency would love to be able to lo-jack the car they rent to you. Remember,
cars are just computers you put your body into. They'd also like to log all the places
you've been for "marketing" purposes and analytics. And there's lots of money to be made in finagling
the way your GPS routes you around to make sure that you drive past certain billboards.
But in general, the poorer and the younger you are, the more likely you are to be a tenant
farmer in some feudal lord's computational land. The more likely it'll be that your legs
will cease to walk if you get behind on payments on them. That means that any thug who buys
your debts from a payday lender could literally — and legally — threaten to take away
your legs (or your eyes, or your ears, or your arms, or your insulin, or your pacemaker)
if you don't come up with the next payment. Before, I discussed how an owner override
might work. You have some kind of combination of physical access-control and tamper-evidence,
designed to give owners of computers the power to know and control what bootloader and OS
was running on their machine. How will user-override work? I think an effective user-override has
to leave the underlying computer and its programs intact, so that when the owner takes it back,
she can be sure that it was in the state she believed it was in when she handed it over.
In other words, we need to protect users from owners and owners from users, as well as users
from other users. Here's one model for that. This is the hypothetical.
I'm not suggesting we do this, I'm suggesting it by way of example. Imagine that there is
a bootloader that can reliably and accurately report on the kernels and OSs it finds on
your computer. This is a prerequisite for all the scenarios we've discussed: the one
in which the state controls your computer, the one in which the owner controls your computer,
and the one in which users may be able to control their computers some of the time.
Now, give the bootloader the power to suspend any running operating system to disk, encrypting
all its threads and parking them, and the power to select another operating system from the
network or an external drive. So I walk into an Internet cafe, and there's an OS running
that I can verify. It has a lawful interception back-door for the police, it stores all my
keystrokes. It stores all my files, all my screens in an encrypted blob that the state
can decrypt. Now I'm an attorney, or a doctor, or a corporate executive, or just a human
being who doesn't want all of his communications being available to anyone who can bribe a
cop. So I do some kind of three-finger salute on my keyboard. It drops into a minimal bootloader
shell, and I can give the net-address of an alternative operating system, or insert a
thumbdrive. Now the cafe owner's operating system gets parked. I can't see inside it.
But the bootloader can assure me that it's dormant and not spying on me as my operating
system fires up. When it's done, all my working files are trashed, and the bootloader confirms
it. Not just because this keeps the computer's owner from spying on me, but because it keeps me from spying on the computer's owner.
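Purely by way of sketching that hypothetical, and emphatically not as a proposal, the session logic might look something like this in Python. The "encryption" here is a toy XOR stand-in and every name is invented; the point is only the shape: park the owner's OS in an opaque blob, verify the guest OS against the hash the user asked for, and destroy the guest's working state when the session ends.

```python
import hashlib
import secrets

class HypotheticalBootloader:
    """Toy model of the user-override flow described above. Nothing here is real
    firmware; the XOR 'encryption' is a stand-in for the real thing."""

    def __init__(self):
        self._parked = None                      # encrypted owner OS, opaque to the guest

    def park_owner_os(self, owner_os_state: bytes) -> str:
        key = secrets.token_bytes(32)            # key stays inside the bootloader/TPM
        blob = bytes(b ^ key[i % 32] for i, b in enumerate(owner_os_state))
        self._parked = (blob, key)
        return "owner OS suspended; guest can verify it is dormant but cannot read it"

    def boot_guest_os(self, guest_image: bytes, expected_hash: str) -> str:
        # The guest supplies an OS plus the hash they expect it to have,
        # so they know they are running exactly what they asked for.
        if hashlib.sha256(guest_image).hexdigest() != expected_hash:
            raise RuntimeError("guest OS does not match what the user asked for")
        return "guest OS verified and running"

    def end_session(self, guest_working_files: list) -> bytes:
        guest_working_files.clear()              # the guest's traces are trashed...
        blob, key = self._parked
        # ...and the owner's OS comes back exactly as it was parked.
        return bytes(b ^ key[i % 32] for i, b in enumerate(blob))

bl = HypotheticalBootloader()
bl.park_owner_os(b"cafe owner's operating system state")
my_os = b"an operating system I trust"
bl.boot_guest_os(my_os, hashlib.sha256(my_os).hexdigest())
bl.end_session(["draft-letter.txt"])
```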
Now, there will be technological means of subverting this. You could make a thing that looks like the bootloader but isn't the bootloader.
But there is a world of difference between starting from a design spec that aims to protect
users from owners and vice-versa, and one that says that users should always be subservient
to owners. Now, human rights and property rights often
come into conflict with one another. For example, landlords aren't allowed to enter your home
without adequate notice. In many places, the hotelier
can't throw you out if you keep paying for your room, even if you overstay your reservation.
Repo men can't take away your car without serving you a notice and giving you the opportunity
to dispute it. When these laws are streamlined, we get all kinds of bad effects. Robo-signers
taking away people's houses even though they've paid their mortgage or don't even have a mortgage.
The potential for abuse in a world where everything is made of computers is, of course, much greater.
Your car might drive itself to the repo yard. Or your high-rise apartment building may switch
off its elevators and its climate systems, stranding thousands of people until a disputed
license payment is settled. Now this has already happened with a parking
garage. Back in 2006, there was a 314-car Robotic Parking model RPS1000 garage in Hoboken,
New Jersey, whose owners believed that they were up to date on their software license
payments and whose vendor disagreed. So the vendor shut off the garage and took 314 cars
hostage. The owner said that they were paid up, but they paid again because what the hell
else were they going to do? Now what will you do when your dispute with
a vendor means that you can go blind, or deaf, or lose the ability to walk, or become suicidally
depressed? The negotiating leverage that accrues to owners over users in this scenario is total
and terrifying. Users will be strongly incentivized to settle quickly, rather than face the dreadful
penalties that could be visited on them in the event of a dispute. And when the owner
of the device is the state or a state-sized corporate actor, the potential for human rights
abuses skyrockets. Now, this is not to say that owner override
is an unmitigated evil. There are lots of reasons why you might not want users to override
their computers. Think of a smart meter. Smart meters need to be able to turn down your building's
temperature by a couple of degrees, otherwise we have to keep using dirty coal because it's
the power source that we can raise and lower on demand. Now, that works best if users can't
override the meter on their wall. But what happens if there's a big freeze, and a griefer
or a crook or a government turns off your heat? What happens if the HVAC in your house
is cranked to 110 degrees during a heat-wave and you can't override it? Once we create
a design norm of devices that users can't override, how far does that end up creeping?
Especially risky would be the use of owner override to offer payday loan-style services
to vulnerable people. If you can't afford artificial eyes for your blind kid, we'll
subsidize them, but you have to let us redirect your kid's visual focus to sponsored toys
and sugar-snacks when you go to the grocery store.
But foreclosing on owner override probably means that there will be poor people who won't
get offers that they would get otherwise. I can lease you something, even if you're
a bad credit risk, if I know I can repossess it handily. But if your legs can decide to
walk away to the repo depot without your consent, you will be totally screwed the day that muggers,
rapists, griefers, and the secret police figure out how to hijack that facility.
It gets even more complicated, of course, because you're the user of many systems that
you don't own, and use only in the most transitory of ways: the subway turnstile, the elevator, the blood-pressure
cuff at the doctor's office, public buses and airplanes. It's going to be hard to figure
out how to create "user overrides" that aren't nonsensical, although we can start by saying
that "users" are someone who are the sole user of a device for a meaningful amount of
time, although we'd then have to define "meaningful." This is not a problem I know how to solve.
Unlike the War on General Purpose Computers, the Civil War over computers seems to present
a series of conundra without any obvious solutions, at least, obvious to me. Which is why I'm
talking about them to you. These problems are a long way off, and of course, they'll
only arise if we win the war on general purpose computers first. But come victory day, when
we start planning the constitutional congress for the new world, where regulating computers
is acknowledged as the wrong way to solve problems, let's not paper over this division
between property rights and human rights. This is the sort of division that, while it
festers, puts the most vulnerable people in our society in harm's way. Agreeing to disagree
on this one is not good enough. We need to start thinking now about the principles we'll
apply when the day comes. Because if we don't start now, it may be too late.
Thank you.
[applause]
So I've got some time for questions now. You don't have to ask me questions about this.
You can ask me questions about books and stuff.
>>Male #1: I have a question. So one thing usually people don't talk about is why we
should not allow people to inspect what we're doing. If you're not doing anything wrong,
why do you care? I've only seen an argument to this once, when people said, basically,
it violates a fundamental expectation of humans to be individuals rather than part of a collective
ant hive or ant colony of some sort. Now I wonder if you could speak to that.
>>Doctorow: Yeah. I mean, there's-- I think that that argument starts by presupposing
that everything private is secret, and everything secret is private. We say, "Oh, well, it's
not a secret what you're doing, so why do you need to keep it private?" But I can make
a pretty good guess about what you do when you go to the toilet. I'm pretty sure I knew
what your parents did to get you here. But it takes a pretty special kind of person to
want to do that in public. There are behaviors, and not nefarious ones-- In fact, some of
the most important ones, the ones that, you know, are the origin of all life and the reason
you don't explode in a shower of poo, that we habitually do in private, and that aren't
the same when you have to do them in public, particularly if you're coerced to doing them
in public. The modern concept of privacy is pretty new,
but there are elements of our privacy that are quite old: the privacy of thought, the
privacy to make mistakes. I mean, remember this notion that if you want to double your
success rate, you triple your failure rate. It's very hard if you have to make all your
mistakes in public. You may-- Does anyone here work on Blogger?
I mean, before Blogger was a really big deal, when it was a little deal, it was running
on an NT box that Ev found somewhere. It went down all the time. And no one cared, because
he wasn't in the public eye. Now, you guys can't afford to experiment with the Blogger
backend the way Ev could. Ev could refactor his code altogether, take it offline for two
days, and then put it back on again. He was able to innovate really, really fast, in a
way that you guys can't, because you don't have privacy in what you do. What you're doing
is public. If you've ever watched a kid play, and play
in a way that's sort of pushing at their boundaries, they do this thing where if they don't know
you're watching, they make a lot of mistakes, and they just keep pushing through them. But
if they catch you secretly watching them while they make mistakes, they put the thing away
and they walk away from it. It just kills their play. As a father, it's the thing that
breaks my heart when I do it, because it's very tempting to look over at your kids when
they're doing something awesome and intense. But then, you humiliate them and you embarrass
them. So there's something about us that wants to have vulnerable moments not take place
in public, that wants to choose the moment of disclosure. I think that that's-- that
doesn't change just because we have Facebook, or just because we can track user behavior
with 1x1 pixel gifs. You know? Yeah?
>>Male #2: You've published a lot under the Creative Commons license. So I was curious,
from the point of view of someone who's incredibly cynical and just wants to make a living writing
things, would you advise it? Will it catch on?
>>Doctorow: So if you want to make a living writing things, I would advise you to stop
trying, because [laughter] that's a bit like saying, "I want to make a living buying lottery
tickets." It's like--. That sounds like a great plan if you can find the winning lottery
tickets, but if you don't have a plan B for earning a living, you have the wrong career.
Writing is a very, very high-risk entrepreneurial venture that almost everybody who tries it
fails at. Some fraction of the people who try it succeed using Creative Commons, and
some fraction succeed without using Creative Commons, but they're rounding errors against
all the people who try to earn a living with writing.
So for me, the reason to use Creative Commons is not just commercial, although I think in
my case, it enhances my commercial fortunes, because people who get the book for free then
go on to buy the book. That may not be true of everyone. There isn't a kind of global
theory about this. But there's two other dimensions to it. The first one is moral, and then the
second one is artistic. The moral case is that I copy all day long, you copy, everybody
copies. If it wasn't for mix-tapes, I would have been a virgin until my mid twenties.
You know. Copying is what we do. If I were 17 years old today, I would have a giant hard
drive and it would have a copy of everything. So I--. To pretend that when I copy, it's
like part of a legitimate, artistic adventure that allows you to assemble your influences
and recall them as you need them, but when you copy, you're just a thief, that's just
dumb, right? And moreover, it leads to all these crazy consequences, where we're talking
about three strikes rules in New Zealand, for example, where they're saying if you--
I know that's an Australia shirt, but very close to Australia, where they're saying--
>>Male #2: We don't think very much of the Kiwis.
>>Doctorow: I understand. It's like Canadians and Americans. But they, you know, they're
saying if you are accused of three acts of copyright infringement, we take away your
internet access and all the stuff that comes with it, and that's partly being driven by
people who say, "Well, if you copy my stuff, it hurts my fortune." Being able to give my
stuff away means that I'm not part of the rubric for Draconian network policy. But then
there's a third dimension, which is the artistic dimension. It's the 21st century, and if you're
making art that you don't intend to have copied by people who like it, you're not making contemporary
art, because the realpolitik is if someone likes it, they'll just copy it. Right? I mean,
we put DRM on ebooks. This is crazy. It's like they've never heard of typists. [laughter]
It's like there have-- It's like they don't know that we live in the moment with the largest
number of skilled typists in the entire history of the world, you know? This is an amazing--.
My grandmother was like a 75 word per minute administrative assistant, and she was like
a circus freak. Today, she's not even in the top quintile. Everybody can type. If you're
making art that you don't intend to have copied, you're not making contemporary art. That's
cool. I mean, if you want to be the blacksmith at Pioneer Village or reenact the Civil War,
that's awesome. Go follow your weird. But I'm a science fiction writer, so I'm supposed
to make at least contemporary work, if not futuristic work. It gives me great satisfaction
to allow it to be copied.
>>Male #3: So, a question about the average user. So if somebody is sitting out watching
this broadcast on YouTube, and they think, "I want to be a part of the solution", it
doesn't feel like there's a great avenue for them to express the need for digital freedom
other than disobedience. Do you have any suggestions as to what people might do?
>>Doctorow: Well, the first talk, the 28C3 talk which is on YouTube, ends with a pretty
good, compact call to action for people of all stripes. If you're a hacker, get involved
in things like Free Software Foundation and policy stuff that the Free Software Foundation
does, or get involved with the Electronic Frontier Foundation. If you're a lawyer, join
the Cooperating Attorneys list for EFF and get involved in other groups like FSF and
so on. ORG, or Netzpolitik, or Bits of Freedom: all over the world, we have them. If you're
an artist, use Creative Commons. And so on. I actually think that there's a lot of venues.
Today, between Defective by Design and Fight for the Future, who led the SOPA/PIPA
fight, we have so many different groups that are doing really exciting things that need
people to do everything from send an email to their congressman at the right moment to
design logos and packaging and write copy and put the word out and blog and give talks
to their school. If you're a student, you can join Students for a Free Culture.
That's all great. I think we don't have that answer for the civil war thing yet. We don't.
And we kind of--. And my point about the civil war is that we'll get to the civil war
pretty quickly after we win the war. We get to the thing where as soon as you give owners
total dominion over their computers, you immediately get to the moment where users can't trivially
change what's on their computers. It would be really nice if, as we sit here advocating
for owners having total control over their computers, we also start thinking about when
users should be able to change that.
>>Male #4: So you gave some examples comparing and contrasting physical devices versus electronic
goods, and devices that are used by many users and maybe only one user. And you said, "Well,
some of these policies, there's analogies between physical and electronic that you should
use, and some of them don't really make sense. There's a discord." Are you saying that, overall,
we should go by case-by-case in order to examine: does this make sense?
>>Doctorow: No. I mean, I hope that's not what we end up having to do. I mean, that
was the point at the end and the beginning. I don't know how to solve this one. And it
would suck if, basically, you solved this with nested if-then loops. It would be really
nice if we had a nice, generalizable case that we could say, you know, "If it's not
a nuclear power plant, anything goes"? Or something else. Right? I mean, if every single
thing that is Turing Complete has a different set of rules for when users and owners get
to control it, that's going to be a really big rule book. So I would love to have a better
one. I don't know what that is yet. Maybe, after I've given this talk for another year
or two, between all the feedback I get from people like you, I'll be able to propose a
solution.
>>Male #5: It was interesting that you talked about having a user override mode where you
could change what the employer was doing and come back. I don't know if you followed much
of it, but [beep] Chrome OS does that now.
>>Doctorow: Yeah, I just heard at the weekend that there's a user override. And
it's funny, because the first email that got me thinking about this was Vint Cerf saying,
"Why shouldn't Google be able to choose what software runs on my Chromebook if they bought
it?"
>>Male #5: And we don't, and I work on that. I'll be happy to talk to you about that. And
for the rest of you guys, I'm about two months away from having a way that you can put your
own keys on it, so you can sign your own images and boot your own stuff. Right now, you have
to turn the security off. But we're doing that, and I would love to do more.
>>Doctorow: I think that's a really cool model. I'm done. Of course, the really challenging
thing is going to be computers that don't have interfaces, like your legs.
>>Male #6: So this is a little tangential, but we are reaching the point where a lot
of third parties can maintain public databases of public information about you. So, like
the American credit report is one of the original settlers. So is there any legal theory that
would give you rights over the database?
>>Doctorow: Is there a legal theory that gives you rights over that database? I don't know,
but I like the fact that you said rights instead of property rights, because I think that we
have started-- We have the best of intentions sometimes. We created property rights in
facts, or property-like rights in facts about you, that don't make any sense. Like,
you know, the WELL, which is very old, and now endangered-- Salon just put it up for sale--
this online conferencing system. Its motto is "YOYOW": "You own your own words." And
that sounds like a really cool idea, but it has all these weird, fraught things, like if
we're in a conversation and I quote something you said, do you get to tell me not to quote
it? I mean, this is a contract, not fair use; this is what our contract says. And the European
data-protection norms are starting to move towards ownership of your personal information. But what does
it mean to own your phone number? You know? Does that mean that if your phone number happens
to contain the first seven digits of pi that other people can be enjoined from writing
pi? And it's funny because we do actually have
ways of expressing value about things that aren't property that we may be able to bring
in here. We talked about interests a lot. My daughter is not my property, she's pretty
important to me. And if you kidnap her, the charge isn't theft. But we can acknowledge
that my daughter has an interest in herself, that I have an interest in her, that my wife
has an interest in her, that her grandparents have an interest in her, that the state has
an interest in her, that her friends have an interest in her. That's what it means to
be a person in a society. We need to start, I think, talking about information
that way. It's crazy, I think, to talk about things like phone numbers or your address.
This is where I think, you know, even though I'm a privacy advocate, I think the Germans
were crazy to say that you own the likeness of the front of your house. I mean, that's
just dumb to me. 'Cause it means that, like, as you move through time and space with prostheses
that record the world, you can't record your neighbor's house. You can't record your kid's
first run down the street on her bicycle without the training wheels because she rides past
your neighbor's house and they own the likeness of their house. Right? That's just dumb.
We need to be able to express--. And property is a bad organizing metaphor for a thing that
a million people own. Right? You end up with, like, shareholder corporations or, you know,
there's this whole Spider Robinson aphorism: when 700 people share an apple, no one benefits,
especially the apple. You know. [laughter] That's true of physical, rivalrous property,
but non-rivalrous information doesn't have that characteristic. We still may want to
give exclusive access or semi-exclusive access to certain parties. Like, the image of your
colonoscopy may be something between you and your doctor. But to call it your property
is the wrong thing. It doesn't organize well that way. I don't know what does, but I know
what doesn't work.
>>Male #6: No, but the point I'm making is that, let's say ten years from now, somebody
could run a background check on you without your opting into it?
>>Doctorow: They can already do that. I mean, they already can.
>>Male #6: Okay.
>>Doctorow: Yeah, I mean, it would be-- So Lessig talks about four ways of organizing--
of regulation. He talks about law, code, markets, and norms. So we don't have a lot of code
to help people protect their privacy. Like, when you fire up your laptop, it doesn't--
So here's an example. If you had a browser that, every time you turned it on, loaded
the-- checked to see whether it was being asked to load the Google Analytics JavaScript,
and suppressed it, but implemented all the features, all the libraries locally so that
pages didn't break. That would be code out of the box that treated-- that defaulted to
treating privacy as though it's valuable. And so now, if Google wants to get your private
information from you, information about where you are on the internet, they have to offer
something of value to you that is inextricably linked to it, because right now it's extricable,
right? We can, in fact, conceptually understand how you divide it.
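A minimal sketch of what "code that defaults to treating privacy as valuable" could look like, assuming a Chromium-style Manifest V3 extension: a declarativeNetRequest rule redirects requests for the Google Analytics script to a bundled no-op stub, so tracking is suppressed by default but pages that expect the analytics API don't break. The rule ID, the ga-stub.js filename, and the manifest wiring are hypothetical, and this is purely an illustration of the idea, not a description of any shipping product.

```typescript
// Hypothetical browser-extension sketch (Manifest V3 service worker).
// Redirects requests for the Google Analytics script to a bundled no-op stub
// so that pages which call the analytics functions keep working but report nothing.
// GA_RULE_ID and '/ga-stub.js' are made up for illustration; the stub would need
// to be listed under web_accessible_resources in the (hypothetical) manifest.

const GA_RULE_ID = 1; // arbitrary dynamic-rule ID

async function installPrivacyDefaults(): Promise<void> {
  await chrome.declarativeNetRequest.updateDynamicRules({
    removeRuleIds: [GA_RULE_ID], // make repeated installs idempotent
    addRules: [
      {
        id: GA_RULE_ID,
        priority: 1,
        action: {
          type: chrome.declarativeNetRequest.RuleActionType.REDIRECT,
          redirect: { extensionPath: '/ga-stub.js' },
        },
        condition: {
          urlFilter: '||google-analytics.com/analytics.js',
          resourceTypes: [chrome.declarativeNetRequest.ResourceType.SCRIPT],
        },
      },
    ],
  });
}

chrome.runtime.onInstalled.addListener(() => {
  installPrivacyDefaults().catch(console.error);
});
```

With something like this in place, a page that asks for analytics.js gets the local stub instead: the features still "work" from the page's point of view, but the location and browsing data never leave the machine by default.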
So, like, when I installed an app like my daughter's Connect the Dots app on my Android phone,
it said, "In order to use this app, you need to tell us where you are all the time."
If it let me say, "Tell this programmer where I am all the time, but make it up," then the
programmer would actually have to devise an offer where where I am was actually a piece of using it.
it. So I use another Android app all the time
called Hailo, for hailing black cabs in London, which are a pain in the ass to get when it's
raining and so on. And Hailo knows where I am all the time. And if everybody else couldn't
get my location trivially just by, like, getting me to download a Connect the Dots app for
my kid, then Hailo would be sitting on a giant asset. And you'd have
real privacy markets. Like right now, we have this idea that we know what your privacy is
worth, and it's worth nothing because you trade it for zero. But you don't have the
option of not trading it for zero. So you could imagine--. One way that you could
stop people from being able to do background checks on you really trivially is if all the
devices that you have didn't hemorrhage information about you all the time, as though it had no
worth.
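Again purely as a hypothetical illustration of the "tell them where I am, but make it up" idea: this is roughly what a location-spoofing shim could look like if it were injected into a web page before the page's own scripts run. It replaces the geolocation calls with versions that hand back a fabricated fix. The coordinates and the injection mechanism are assumptions for the sake of the example; native phone apps would need OS-level support for the same trick.

```typescript
// Hypothetical content-script sketch: answer every geolocation request with a
// made-up position instead of the real one. The coordinates below are arbitrary
// (roughly central London) and exist only to illustrate the idea.

const FAKE_POSITION = {
  coords: {
    latitude: 51.5072,
    longitude: -0.1276,
    accuracy: 50,
    altitude: null,
    altitudeAccuracy: null,
    heading: null,
    speed: null,
  },
  timestamp: Date.now(),
} as unknown as GeolocationPosition;

// Shadow the real methods with versions that always "succeed" with the fake fix.
navigator.geolocation.getCurrentPosition = (success: PositionCallback): void => {
  success(FAKE_POSITION);
};

navigator.geolocation.watchPosition = (success: PositionCallback): number => {
  success(FAKE_POSITION);
  return 0; // nothing to clear; the fabricated position never changes
};
```

The point of the sketch is the market effect described above: once the real location can't be had for free, anyone who actually needs it has to make an offer in which your whereabouts are genuinely part of the service.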
>>Male #7: Okay, change of gears to a fluffy, lighthearted thought experiment.
>>Doctorow: Sure.
>>Male #7: Since you mentioned the Norwegian script kiddies, imagine the Marcus Yallow
Memorial Pentathlon. Which countries take gold, silver, bronze there?
>>Doctorow: Oh, wow. I'd like to think-- I mean, without being ideological about their
governments and just thinking about their track records, I would think that you'd get
the four BRIC countries plus Israel, probably: Brazil, Russia, India, China, Israel. Without
endorsing or condemning any of those governments.
>>Male #8: All right. Thanks for coming. It seems like you're looking for sort of an overall
rule for what people can and can't do with their devices. But I'm afraid it's going to
end up just a whole pile of special cases, kinda the way it is now. Like, if you look
at your car and what you can do to its hardware and software, some things are legal, some things
violate smog laws, some things violate safety laws, for some things you'll be charged with criminal
negligence if something goes wrong, and for some things you'll get sued if something goes wrong.
>>Doctorow: Are you saying that if you modify the firmware, or if you modify the firmware
and then something bad happens?
>>Male #8: Well, you know, with the smog law, just modifying the firmware, I believe, is
illegal.
>>Doctorow: Is that right? I didn't know that.
>>Male #8: Well, I think in California. You know, there's other things you can do that--
If I decide to reprogram my brake system and I crash into something, I'm likely to get
either sued or go to jail. But other parts of the system, you know, it's probably okay
for me to redo-- if I want to put in a new engine, or put in new shock absorbers. So it's--
I think these issues apply to both the hardware and the software and firmware.
>>Doctorow: So, it makes a certain amount of sense to me. I think you're describing
after the fact, largely, modulo this question about whether changing the way your car is
smogged gets you in trouble or doesn't. I think, mostly, you're describing after the
fact stuff. So in the same way that if I program my nuclear power plant so that it melts down,
I'm held liable for having written bad code. Or, if I program my software-defined radio
so it turns into a spark-gap generator and blows out all the RF in my region, again, I have
done something bad and I'm punished for that, but it's not against the law to write my own
software-defined radio code. You can check code in and out of GNU Radio on GitHub without
breaking the law. If you use that code in a way that ends up breaking something, that
may be illegal. And that's kinda what I'm talking about here.
It may be that users take control of their legs to run up to someone and kick them in
the face. And I don't think that-- I think that writing code that lets you take over
your legs is good. I think that, having taken over your legs, kicking someone in the face
is bad. And I think that we can punish the one without punishing the other.
>>Male #8: Okay, that seems fair.
>>Doctorow: Yeah.
>>Male #9: You touched earlier on the concept of an illegal number, which is something that
I've thought about a lot, because all information can be a number. Which raises the question,
which I think is central to all of this, of where you draw the line.
>>Doctorow: Sure.
>>Male #9: And obviously, to anyone who has studied any kind of mathematics, there is
no line in numbers. So the question is: why is there a line in anything?
>>Doctorow: Right.
>>Male #9: In essence, why do you believe that this problem is at all soluble?
>>Doctorow: [inhales] [laughter] Let me find my illegal number here. There we go. Yeah.
That's a really good question. You're right. Everything can be encoded as a number. I mean,
now we're getting into Gödel and incompleteness and whether numbers are special.
>>Male #9: That was just an example, though, of why it seems a priori that this is likely,
technically, not soluble. So the question is: why?
>>Doctorow: So I think it's soluble in time scales. It's not soluble in infinity. So,
for example, we may say that--. So today, we have a bunch of rules about locking and
unlocking that are largely governed by the Copyright Office, because the relevant law
is the DMCA and its anti-circumvention rules. And so you may have heard that in the triennial
review, it was made legal to jailbreak phones and tablets, iPhones and tablets, and also
to unlock phones so that they can switch carriers. That's a thing that works for now. It won't
work against more robust bootloaders, and it doesn't help certain classes of users. But,
I mean, I don't think we pass technology laws that are supposed to last through the ages.
I think we pass technology laws that are supposed to last, we hope, through the half-life of
the technology. You're right that there will come a time when the rainbow table of all
numbers exists, and all possible decodings for them exist. I mean, it may occupy all
the hydrogen atoms in several universes parallel to ours. But it's at least within the realm
of contemplation. But that doesn't mean that between now and
then we should try not to have any rules about how numbers are used. You know. I mean, you
might say, "Well, the specifications necessary to get your AR15 to go full auto can be expressed
as a series of numbers." That's a bad example, because you don't suppress the rule about
the specification. But, like, the 3D mesh-- that currently exists as a 3D mesh; I think
it was uploaded to Thingiverse, and I think they've taken it down-- that converts AR15s
from semi to full auto. I may not agree with regulating that, but I wouldn't disagree with it on the
grounds that it's impossible. Right? Like, the argument that a rule against giving people
AR15 automatic modification kits can't be made to work, or shouldn't be
made to work, because those numbers might also be poetry or something. Like, it just
seems like, although that's true, it's true in some sense that the law can safely ignore
for a while. Does that sound right? It's hard.
>>Male #9: It's good enough that I don't want to take up any more time following up. Thank
you.
>>Doctorow: Okay. Thanks.
[applause] [whistling]