GCCIS Dean's Lecture Series: Dr. Gary McGraw

Uploaded by ritetcvideos on 04.12.2012


>> Okay, good afternoon everyone and welcome
to the B. Thomas Golisano College of Computing
and Information Sciences and the Dean's Lecture series.
My name is Andrew Sears, I'm the dean of the college,
and it's my pleasure to introduce our speaker today.
The purpose of this lecture series, which has been
around for about 10 years now, is to bring talented
and insightful individuals from industry and academia
and government to campus to share their insights
with our faculty, with our students,
and with the community at large.
And I believe this is the 46th speaker in this series.
And today, it's my pleasure to introduce Dr. Gary McGraw.
Gary is globally-- a globally recognized authority
on software security and is the Chief Technology Officer
at Cigital, I got it right.
>> You got it right.
>> We were talking about that earlier.
A leading software security firm that is headquartered
in the Washington D.C. area.
In addition, he's the author of eight best-selling text--
or books, including Building Secure Software,
and a strategic adviser to quite a few IT executives.
In today's lecture, "Bug Parades, Zombies,
and Building Security In Maturity Models:
A Decade of Software Security"-- you've got a long title.
>> That's a long title.
>> Gary will take us on a journey through the last decade
of developments in the area of software security,
and we'll talk about how we've moved really from very much
of an art to more of a measurable science,
and kind of talk about how far we've come and the steps
that we have left to take.
And without any further delay, it's my pleasure
to introduce our speaker, Gary.
>> Thanks.
[ Applause ]
>> Hi!
>> Oh good, everybody is awake.
It's the coveted after-lunch speaking slot.
It's always everyone's favorite.
Let me tell you where I'm coming from,
so you know what we're going to do.
The dean just said exactly kind of what I'm going
to talk about, so that's good.
My company called Cigital does software security all day,
and that's all that we do.
So if you don't know what software security is yet,
by the end of the talk you ought to.
But what I want to let you know is, I'm not going to talk
about theory, I'm not going to talk about, you know,
possibility space, I'm going to actually tell you
about what we do on planet earth to make software secure
for huge software companies and huge multinational banks.
At the end of this-- I want to start at the beginning and sort
of give you my view of a decade of this stuff,
but at the end I want to show you kind of a way we've come
up with to measure software security.
Just so you know though, this is all stuff that's based
on experience over the last 15 years
or so doing software security.
I got started in this field myself in 1996
when this language called Java came out.
Anybody heard of Java in here?
Yes, so you might remember that there were some people
who were breaking Java a lot back in the mid-'90s,
well that was me and Ed Felten from Princeton
and we wrote this book, which is hidden behind this unimportant
book, this really important book right here.
And it was really about how the Java virtual machine was busted
and, you know, ways to fix it.
And the question that it brought to mind for me after, you know,
doing it for a few years was why were these amazing guys,
like Guy Steele who's probably the best programming language
person on the planet, or Bill Joy who's no slouch,
he wrote Berkeley UNIX among other things.
Why did they screw it all up when it came to security?
And if you were not a super wizard, but you were just sort
of a mere mortal, how would you not screw it up when it came
to security, and more importantly where would you go
to learn any of that crap?
The answer, around 2000, was nowhere.
There were a couple of books,
I mean there were a couple of papers.
There was a paper called "Smashing the Stack for Fun
and Profit" that Aleph One wrote.
There's a paper called Bishop and Dilger's "TOCTTOU:
Time of Check to Time of Use" stuff.
But really, that was about it back then.
So, I wrote this book with John Viega called
"Building Secure Software" in 2001.
And it's been, I guess, 11 years since that book was published,
and things have changed a lot in those 11 years.
Now there are plenty of places to look, we actually know how
to do software security, we're beginning to understand how
to measure some aspects of software security,
and that's really what I'm going to talk about today in the talk.
So, let's go back to the beginning of time.
In the beginning of time, pictures were black
and white [laughter] and they didn't have bits, right?
And people had funny-looking hairdos like that.
So, you used to have computers that were a room,
and you would walk into the room and, you know,
there were certain high priests of computer land
that were allowed to walk in the room and everybody else had
to genuflect in front of the little tiny window, right?
And so, what you would do is you had your punch cards--
ah, there's some genuflectors in here, I can tell.
[Laughter] You had your punch cards
which you had neither folded, spindled, nor mutilated
and you brought them to the window and you sort of bowed
to the high priest and say,
"Would you please run this eventually," right?
And so, they would take it in and they would schedule it
for three days later and there would be a bug
and you would get it back and they'd go,
"Don't ever bring this thing with a bug again," right?
By the way, the next cycles are in seven weeks.
Right, so people were a little more careful with their code
and it was pretty easy to secure the computer because, you know,
the only way in was through that little window.
And the only code that ran was the code that was
on those punch cards that those guys ran.
And usually one thing ran at a time.
Well, IBM fixed all that.
I guess in around the '60s, they unbundled the computer software
and the computer hardware
and started selling things differently
so the hardware could have multiple software possibilities
and you could change what the thing did.
Although, you know, for the most part,
the computers were still in the room.
What happened with the computer room was people started running
wires out of the room and the wires made it all the way
to the terminal room next door, so 20 feet, right?
And there were a bunch of dumb terminals connected
into the mainframe.
The dumb terminals were like VT100s and ADM5s
and you might remember that.
You know, remember how to do a Bell on an ADM5 or VT100?
It's control G. You can still send a control G to a browser
by the way, and it'll bing, which is ridiculous,
but it still does it, right?
So what happened was the guys--
the high priests of computers lost track
of what exactly was running
because we had multiple users on terminals.
Now we still could keep our eye on them
and we could still make them log in.
And basically the wires only went about 25 feet or so.
So, security was still possible.
But it was beginning to get worse
because the situation was actually beginning
to get geographically distributed.
And this was before the net really caught on.
And, you know, then there was the internet
and then it was big trouble.
So, let's fast forward all the way to now.
[Laughter] Now, in a lot of corporations,
you have this problem in computer security.
You've got the people that write code that really needs
to be secure, who may not be doing it right,
the super rad developer dudes.
And you have the poor networks security people who are supposed
to be doing security for the whole entire system.
So you go over to the computer security guys, you go,
"All right you guys, what about software security?"
And they go, "Yeah, software.
We hate those developer people, you know.
It-- we have a perfect computer.
Everything is all set up.
The network is all great.
If it weren't for the users, it would be awesome.
And the worst users of all are the users
with compilers over there.
So why don't you go talk to them."
So you go over here and you talk to the people in software land
and you go, "Hey, what do you guys know about security?
Are you thinking about security while you're creating all
that software?"
And they go, "Yeah, security, I think those guys are over there.
They live in the basement.
Yeah, they don't get much sun.
And I think they, you know, we hate those guys.
They take away our privileges.
We can't even run a compiler.
They won't let us do anything.
They run antivirus and slow our machine down
and eat all those cycles for my really wonderfully tight code.
And you know, could you just take care of those guys for me?"
[Laughter] This was the state of the practice about the mid-'90s.
So, here in the middle was who?
It says it on the slide, in red.
Nobody. So, right, who was getting software security done?
Nobody. And who was getting fired when it failed?
Nobody. Who was getting a huge bonus
when everything went peachy keen in software security?
CEOs, that's right.
[Laughter] Good answer.
There's always one.

[Laughter] Right, so really, we've made some progress
since then, but the first question to ask yourself,
as a firm, is whether anybody actually has the job
of doing software security.
If you work for a company and you ask who's in charge
of software security and the answer is nobody,
don't be surprised
when that's who's getting the job done, right,
and they're not doing anything.
So Management 101 dictates that you identify somebody
and you give them both authority and responsibility
and maybe some resources to get that thing done.
Now, we're at that state.
But it's taken us a long way to get there
and we've got all sorts of problems.
So I want to try to give you a very quick history
through the bug parade.
The bug parade is important because, you know,
in the commercial world, there's this massive overemphasis
on bugs in software security,
sometimes called application security.
So you might have heard of, say, the OWASP Top Ten.
Who's heard of that around here?
Some nods, yeah.
Yeah. Well, the top ten according to what criterion?
Do you guys know?
Well, it was like some guys sat around drinking beer
and they're like, "Well I think SQL injection."
"All right, that's cool."
[Laughter] "Well how about,
like cross-site request forgery, dude?"
"Oh yeah, yeah.
That's important."
"Well how many do we have?"
"Two." "Okay, think of some more."
[Laughter] So you might think--
sorry, it's the way it really works.
[Laughter] So you might think that, you know,
the bug of the day is like the most important thing
and if we only eradicated that bug, it would be great,
but that's just a bunch of hocus pocus
and to make clear how hocusy pocusy it is,
think of the first bug in the bug parade,
the dreaded buffer overflow from long,
long ago in the way back machine.
You know, let's look at some code real quick.
Here is a declaration of an array and then
in a little thing, we're setting the 12th thing
in the array to zero.
What's wrong with that code?
Quick. [Inaudible Remark] There's no 12th thing.
There are 12 things.
Look, right there, there are 12 things.
Now, and I'm setting the 12th, right?
[Inaudible Remark] [Laughter] Yeah,
well in C, we start with zero.
Why? Because no other humans do that, [laughter] right?
Who has a 2-year-old?
You're like, "Okay kids, start counting."
"No, no, zero is first."
"No, no, no, zero is first."
Yeah, yeah.
I didn't think so.
So why would you start with zero?
Well there's a perfectly legitimate reason for that.
Anybody know what it is?
[Inaudible Remark] Yeah, it's offsets.
You start with that address and you've got an offset times,
you know, you multiply the offset times that thingy there
and you get zero times that zero, so you start
with the first address.
Wow! That's great.
So it makes it more convenient for the computer
because the computer is more important than the human.
Right, that's no longer the case, right?
So now, computers are cheap.
They're almost free.
In fact, you know, the net seems to be free and disk space is free.
About the only thing that's not free now is the people
who run the machines, but the ones
at Google are better than us.
So we'll just let them do it, right?
But we wonder why there are a million problems in C. One
of the issues is silly things like this.
Offset makes sense if you're a deep geek,
but it's a really crazy way to think about it
if you're just coming at this fresh.
Who the hell starts with zero, right?
So if you look at the dreaded buffer overflow over time,
this was a little bit of data that was first captured
by David Wagner who's at Berkeley now.
And I remember David gave his first talk at ISOC NDSS
in maybe '97, he was a nervous wreck, and now he's
like a super amazing guy.
So he did a calculation of the number
of major CERT alerts,
which at the time were the most important bugs
because they were bugs that were being, you know,
found all over the place in all sorts of installed bases
that needed to be fixed and patched.
And how many of those were buffer overflows?
And the answer was about half.
So that was a good reason to stick that on the bug parade.
And in fact, you know, we still have all sorts of problems
that are caused by C. In fact, C is bad.
Let's all say it.
You ready?
C is bad. Now, one more time.
He's not going to say it.
Let's make him say it.
Here we go.
Ready? C is bad.
No, no? [Laughter] These two,
the first two rows, are problematic.
So, I once gave this talk at AT&T Research and Dennis Ritchie
[laughter] was there, before he died.
I guess it was 2 years ago or last year.
And I used to get everybody to chant "C is bad, C is bad,"
and nobody would do it until Dennis kind of went.
And then everybody is like, "C is bad.
[Laughter] C is bad," which was pretty funny.
But I have another Kernighan and Ritchie story, it's about Brian
and we did a workshop in 2003 where we got all
of the software security people in the whole world together
and there were like 25 of us, right?
So, everybody got together and Kernighan gave a great talk
about beautiful code and trying to do high-reliability stuff
and how you do that, how he did the phone switch code and so on.
And at the end of the talk, there was some smart ass person
from software security who goes, "Yeah, that's a great talk
about the beautiful code and all that stuff,
but I just got one question for you."
And Brian said, "What is it?"
He said, "Well, what's up with those string handling
functions anyway?"
You know what Brian said?
He didn't even slow down.
He goes, "Yeah, that was Dennis's code."
[Laughter] And, really that's what he really said.
So, if you want to know the problem in software security,
you got it right there in a nutshell, right?
So we started with some obscure stuff and we blamed all the bugs
on the other guy and there you have it.
So, we teach everybody who learns how to code in C
on Chapter 7 of the Bible on page 164,
we have this function called "gets"
which is a total disaster.
Like why the hell was that still in the language?
What does "gets" do?
It gets a string, and it keeps getting until the attacker decides
to stop sending you data,
[laughter] which means your buffer ain't big enough, right?
And it can be worse.
We can stick "gets" up here in, you know,
where we declare a buffer locally,
where is that buffer going to be stuck?
On the stack.
Superb. So we stick a buffer on the stack that's only 1K big
and then we get as much input as we want from the attacker.
And you know, when you do a clever buffer overflow attack,
the idea is to overwrite the return address
on the stack frame so that you point back to your own self
and you execute some code of your own, right?
So you can build a tight little thing
and then you take control flow over and all
of the sudden you have the privilege
of whatever process happened to be running
when it did that buffer overflow.
That's bad.
[Laughter] It's very bad, and there are a lot of those.
So this was an early, early bug.
And sadly, in C, there are a whole bunch of these things.
There are a whole ton of things that you have to avoid
when you're writing C code.
And in C++, that's even worse.
That's the worst language ever foisted on planet earth.
[Laughter] Truly, horrendously bad.
It's horrible and we should stop using it.
That would help security a lot, but we can't.
So instead when we review code that's written in C or C++,
we have to remember all of this stuff,
at the same time we're trying to figure out what that person
that wrote the code was thinking.
Like, "Whoa, who's done a ton of code review in here?
Who's ever done a multimillion-line code review?"
We got one right here, so was it really fun?
No. Exciting?
No, no. There are two people who agree: no.
So here's how you do it when you're doing code review.
You're really diligent at first.
You sit down with the code and you go, "Wow,
I'm going to check all the includes and the ifdefs
and I'm going to check all the hash defines
and everything's going to be great,"
and then you get to the first line of the code and you go, "Yep,
there's main and here we go," or "That variable is declared
over here," and you're very diligent until you get to page 2
and then you go, "Huh, that looks like a bunch of functions.
Some curly braces, that's good.
Oh, there's a curly brace, yup."
Then you get to page 3 and you're like,
"Well that's code too.
In fact, it all looks like code from here on out.
Okay, I'm done."
[Laughter] Anybody ever do that?
You didn't do that?
Yeah, that's how code review works.
So when things are tedious and boring, and you know,
they take a lot of keeping track of all sorts of stuff,
what do we generally do as a species to solve that problem?
Ignore it, bad, bad people in the middle.
No, we, we have a computer do that, right?
A computer can do that,
so we can invent some static analysis technology to look
for some of these problems and debug our code for us
and you know, we've come a long way
with static analysis technology and that's good.
But really, what we might want
to do is just ban C. Really, it might be better.
Or, maybe there should be a test,
like a driver's license, right?
So you go-- you take a test, and if you pass the test,
you can use C. [Laughter] And if you fail the test, Visual Basic.
>> Ohh.
>> Ohh. Oh I'm sorry, it has a grammar now, never mind, right?
It used to be, what does the interpreter think today?
I don't know.
That's one bug in the bug parade.
Bug number 2 in the bug parade
and a bug that's even more important
than the buffer overflow has to do with time and state.
Everything that you learn in operating system class
about race conditions and deadlock and starvation and timeout
and all that jazz and semaphores
and critical sections, all that stuff is super important
and every single piece of code that we write today,
like if you use Java, guess what, Java's multithreaded.
So you get a Java guy in for an interview and you go,
"Do you know how to write reentrant code?"
And they go "Re who?"
And you go, "Do you know what reentrant means?"
They go, "Re who?"
[Laughter] And then you hire them anyway, right?
So, the problem is best shown graphically.
Here's the blue stuff that we need to get done
in one atomic operation and instead we divide it
into three things, and a clever attacker can change the state
in between the things.
And you know the problem in computer science
and in programming is when we think about programs as humans,
generally speaking, we think, "Well, here I am at step one
and then I do this and then I set A to four and then I go
from one to A, do some stuff and, you know,
you do a little loop and whatever and then there's a--
" and you think it's like one thing at a time, right?
But what really happens on a computer these days is,
you do step one and then the entire world changes.
[Laughter] And then you do step two,
as if step one might still be valid.
Well, is it?
That's a really good question.
And if it isn't, you can be completely, totally hosed.
And we seem to have all sorts of problems like this.
I don't know if I'm going to talk about one later on.
But time and state-- wooh, that's a biggy.
All right, now I got into this
with the whole Java Security stuff,
you might remember there was a little mascot named Duke, right?
Here's Duke having a little suicidal issue [laughter],
so sad.
So, we discovered a whole series of problems in Java,
this is like a little list of all of their--
and you know, they all got fixed and everything
and we wrote a book about them in order for people
to understand these are the kinds of bugs
that really good people actually make in their design
and maybe you should go eradicate those bugs
in your own code pile.
And you know what was weird?
You guys remember that Java Security bug that happened what?
Three months ago?
It got all the news.
You know, we looked at that and it was like woah, [noise].
Is that how it goes?
[Noise] You guys are too young, wow!
[Laughter] That used to mean something, really.
[Laughter] Time warp, you know it seems like the mid-'90s,
there's some problems in the current version of Java,
which is of course made by Oracle
who doesn't know what they're doing.
And, you know, although you should definitely base your
class finding system on Oracle, I'm pretty sure of that, right?

[Laughter] That's your fault.
[ Laughter ]
Right. So, deja vu means we've got these same problems cropping
up again in the Java virtual machine that look exactly
like the ones from the mid-'90s, and that's sort of bad.
I wish we'd learned something about that.
Then there's the modern bugs,
everybody knows SQL is a database thing
and they're just database language and do some queries
and blah, blah, blah, they can be in code
and yadda, yadda, yadda.
And some people think this is the most important bug
on earth because, I don't know, they're clueless, right?
So, here's one of my favorite SQL stories.
We had a competitor who had one
of our current customers as a customer.
And they said, "Hey," you know, the customer said, "Hey,
will you guys do a review of the security of this system?"
And they said, "Oh yeah, yeah we'll do it."
So the competitor did a review and they said
to the customer who's now their ex-customer, "We did a review
and we got some good news.
You have no SQL injection vulnerabilities
in this system at all."
And the customer said, "We don't have a database.
[Laughter] You're fired."

Right? Really?
And now they're our customer, right?
So, don't look for SQL injection bugs in code
that doesn't have a database.
And be all excited when it's not there, okay?
That would be helpful.
There's still consultants that get away
with that nonsense, right?
There's cross-site scripting, that's another one,
you know, flavor of the day.
There are millions of these things.
And the problem is, there are millions of these things.
It's a sort of an endless parade.
And what we really need to do is figure out a way
to glob these things into categories.
So when I was writing Software Security, I got together
with Brian Chess and a couple of other people and we sat around
and we tried to make seven or so categories.
Now, why seven?
Well, because that's how many people can remember.
That was our constraint.
Our constraint was seven plus or minus two.
Really? And so, we came up with eight, right?
And there they are and it took some bulldozing and some baloney
and some squinting and standing way back.
But listen, if we're going to come up with the science
about vulnerabilities and bugs,
we need to think like biologists here.
What we have in computer security
and in software security, in particular, is a lot of people
who want to explain this particular bug
and that piece of code right there.
And there's no way to generalize from that.
So when we wrote Exploiting Software,
part of the challenge of writing that book was figuring out how
to pile up the bugs into categories
which we call the attack patterns in that book.
But there are categories of bugs that you can think
about like this, we call them The Kingdoms,
because we were trying to echo biology, right?
So the idea was, these are the kingdoms.
They are the ones at the top.
And if you think about all of these categories instead
of just thinking about a particular bug
or god knows 10,000 particular bugs,
maybe you'll make some more progress, right?
So, here's what I really think about the bug parade.
[Noise] Now, let's talk about why.
We can actually get rid of bugs because bugs are syntactical,
generally speaking, and they're, you know,
they're not so distributed, generally speaking, and they are
in the code on line 42.
So you can go, "Oh look, 'gets,' I shouldn't have that in there."
And you can even grep for "gets,"
which is great unless you use something like fatherBegetsSon
as a method because it has "gets" in the middle,
you know, so whatever.
Or, you know, you put in a comment that says don't use
"gets" or I didn't use "gets."
Really, grep will be like, "Dude, 'gets.'"
"I know. Come on."
So, we have to be a little bit clever
about how we look for things.
Grep's not so good, there are better ways to do this.
We can build an abstract syntax tree and we can slide
around looking for bugs in that syntax tree, and that's good,
but that solves about half the problem
because there are two kinds of defects in the world.
There are bugs and there are flaws.
If you had to divide up bugs and flaws into percentages
of the defect pile, what would your division be?
Anybody have a guess?
Yeah? [Inaudible Remark] Fifteen percent bugs,
eighty five percent flaws.
Wrong, but good guess.
Anybody else?
>> Eighty twenty.
>> Eighty twenty which way?
>> Eighty percent flaw.
>> Oh you two are going to have to fight it out, yeah.
Yup. Also wrong, but interesting.
Well, we have a mathematician in the audience.
[Laughing] I heard you say that earlier, but I ignored you.
50/50, 50/50, going once, going twice,
oh that is correct, 50/50.
My favorite answer of all time to a quiz that I did like this
out in California once, some guy said, "70/70!"
[Laughter] And everybody said, "Woah,
he should not have a compiler."
Moved away from him slowly.
>> Yeah. It was C++, right.
So, [laughter] it was in the SDL.
So, really maybe the guy was right after all.
Maybe there were 40 percent more than we thought.

[Laughter] I don't know.
I don't know, but I got to tell you this,
when you're solving the software security problem,
realize that security is not just
about bugs and the bug parade.
Sure we got to get rid of bugs, sure we got to find those,
let's do that with some automated technology,
but let's realize that that'll solve just
about how, much of the problem?
No, 70 percent.
My God. [Laughter] All right, so I've been talking
about software security for about 10 years
and we have 250 people at Cigital,
so a couple of years ago, I started bringing them together
for what we call a Technology Fair, and the idea is, you know,
the guys in New York, they think they're better than the guys
in Silicon Valley at something like code review
and you get them all together and they realize
that we're all actually pretty good and maybe we can learn
from each other and kumbaya and all that jazz, right?
So, I always give a big talk for that
and I put my slides together a couple years ago
and I showed them to my boss, the CEO, and he looked at them
and he goes, "Dude, that's what you say all of the time.
You can't say that again."
And I said, "I have to say that again."
He said, "No, no, no.
You can't say that again."
Like I have to say that again
until I'm dead [laughter] really, really.
I mean we-- this is just barely getting started.
We have to make sure we don't forget the obvious stuff.
So I had to come up with a clever way
of presenting the same old crap, and so I used zombies, right?
And it-- these are ideas that are-- that should never die.
They should just eat your brains, okay.
So, [laughter] here are the zombies right here,
we'll go over them real quick: network security doesn't work,
I'm sorry, more code more bugs, SDLC,
bugs and flaws, and badness-ometer.
So let's talk about those zombies real quick.
Number one, perimeter security.
Now, that would be really cool, it's an excellent paradigm
if we had a perimeter, right?
What happened to the perimeter?
Well it disappeared, right?
So we-- now we're having massively distributed systems
and finding the perimeter is hard.
And if you don't have a perimeter,
how the heck are you going to do perimeter security,
like where do you stick the firewall?
I really don't know because there's not really an edge,
like do you stick a wall-- the firewall between yourself
and your cloud provider, if so, why?
So, I mean, you know, these are things we got to think about.
And then it gets worse.
So if you ask a typical set of developers,
what do you guys think security is?
First they point over there, and then they say,
"Well we're pretty sure it's a thing,"
what do you think the thing is?
[Inaudible Remark] Close, it's a certain kind
of technology, anybody?
>> Cryptography.
>> Cryptography, exactly right.
They go, "Yeah, yeah!
It's secure because we added SSL."
Yeah. [Laughter] And the idea is
if you just sprinkle magic crypto fairy dust everywhere,
surely it'll be secure, surely.
Now why would people who build stuff for a living think that?
The reason is we've been training them for years to think
about software as features and functions and to think
about their planning in terms of features and functions
and how long it's going to take and how much it's going to cost.
And so naturally, when you go to them and you say "Security,"
they go, "Well that must be a feature
and I know one called Crypto,
so let's just pretend it's that," right?
And we have to tell those guys, "Look, security is not a thing.
We can use those things to try to get some security properties,
but security is a property, it's kind of like quality."
Wouldn't it be cool if you could just like select quality
and check the check box?

Why can't we do that?
We can do that with security, "Yeah,
I'll turn the security on."
[Laughter] Yeah, wrong.
Yeah, wrong, wrong.
So that's bad.
And then there's a-- so, overreliance on security functions.
Then there's the worst thing of all, reviewing products
when they're complete.
So imagine you're from security and you're here to help--
you're here to help, and you go talk to the developers
and you're like, "Yes, we just did a security test
that you didn't know was coming and we like broke your stuff,
and so you can't ship it and you suck."
[Laughter] And they're like, "Who are you again?"
"Security and you suck."
[Laughter] Seriously, that's what we do.
It's like the first time you meet somebody,
you throw a rocket at their face [laughter]
and then you wonder why they hate you.
[Laughter] They-- because they thought they were done
and they were getting ready to ship, right,
and you said that they suck.
It's like calling somebody's baby ugly.
[Laughter] Never, don't do that, no, don't even try to spin it,
like you can't say, "Oh those three arms will come
in handy, right?"
[Laughter] No spin.
That's just when you be quiet, all right?
So, yeah, [laughter] I know,
that's beyond the pale, but that's okay.
It was amusing.
So we have to figure out a way to engage with developers
that doesn't involve telling them they suck or throwing rocks
at them, you know, and trying to teach them how
to do the right thing.
Here's the good news, developers actually, by and large,
want to do things right.
They just don't know how.
So if we teach them how, then their natural propensity
to want to do things right will get them to do some stuff right,
and that's something that we can realize.
So, we got to fix that in security,
and probably the worst way of doing this
by far is penetration testing.
So, penetration testing works like this in the real world.
You hire some reformed hackers, and you know they're reformed
because they told you [laughter] that they were reformed.
And you give them two weeks to do system analysis for 20k and--
or whatever, and they find six bugs and they tell you
about four of them and you figure out you have enough time
to fix two of them, so you fix those two.
And then, "Yay, done!"
That's how penetration testing really works in the real world.
That's one problem with it.
So that leaves a whole bunch of bugs,
the ones that they didn't tell us
about plus the two they did tell us
about that we couldn't fix anyway,
still in the bug pile, and it gets worse.
How much does it cost to fix a bug when you're done?
A lot. Remember that study from Barry Boehm
that we've all been citing since 1972,
I'm so sick of that work, right?
I asked Barry if he would please,
please just do a new version,
and instead he's doing COCOMO II
or schmo-COCOMO or like-- I don't know.
And, you know, we-- so we're all relying on that,
but guess what, that study was right.
It costs about 1,500 times more to fix a bug in production,
when the thing's already shipped, or maybe more
if you just lost control when you shipped it to 40 million people,
than it costs when you're thinking about the problem.
If you get your thinking wrong and you fix your thinking,
guess how much that costs?
Zero, maybe one, right?
So, we need to do thinking about security way earlier
in the life cycle and that idea of penetration testing is not
so good for that since it's late, it's economically silly.
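The economics being described here can be sketched in a few lines. The phase multipliers below are illustrative placeholders echoing the talk's roughly 1,500x production-vs-design gap, not measured data:

```python
# Back-of-the-envelope cost of fixing one bug, by the phase in which
# it is found. Multipliers are hypothetical, echoing the talk's claim
# that a production fix can cost ~1,500x a design-time fix.
RELATIVE_FIX_COST = {
    "design": 1,        # fix your thinking: nearly free
    "coding": 10,
    "testing": 100,
    "production": 1500, # already shipped; maybe worse at 40 million seats
}

def fix_cost(phase: str, base_cost_dollars: float = 100.0) -> float:
    """Estimated cost to fix one bug discovered in the given phase."""
    return base_cost_dollars * RELATIVE_FIX_COST[phase]

for phase in RELATIVE_FIX_COST:
    print(f"{phase:>10}: ${fix_cost(phase):,.0f}")
```

Whatever the exact multiplier, the ratio is what makes late-lifecycle penetration testing economically silly as the only control.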
All right, that's one thing.
Another thing, more code, more bugs.
You can go to any development group on earth and you can say,
"Who's planning on having more code at the end of the quarter?"
They're all like, "Yeah, yeah, me.
Oh, oh, we get paid to make that.
Oh, oh, oh, oh, oh."
Really? And then you say, "Well who's planning
on having more bugs at the end of the quarter?
[Laughter] So let me get this straight, you guys, you're going
to have lots more code and no more bugs."
"Yeah, you're all correct."
[Laughter] The fact of the matter is
that if you have more code, you're going to have more bugs.
And in fact, the data backs this up. Dan Geer--
this is Dan Geer and this is not Dan Geer-- wrote a paper
that got him fired from @Stake about Microsoft being a danger
to national security because they have a monopoly grip
on the market and their software was broken
and the funny thing was @Stake's biggest customer was Microsoft,
so guess how long Dan's job lasted,
[laughter] like 15 seconds, we were all waiting
for the phone to ring.
Ring. "Hello, Redmond." "Yes?"
"Can you fire Geer?"
"Okay, you're fired."
So he knew he was going to get fired, right?
But he had in his paper this prediction that the more code,
the more bugs, and I said, "Oh yeah, well here's the data."
So I gave him the data about, you know,
the number of bugs tracked by CERT up through 2007
and I also got data about the Windows code pile which is
like pulling teeth but here's the Windows code pile over time
up to XP, you know, two and a half million lines in 1990,
XP was 40 million lines, Windows Vista doesn't really matter
'cause nobody uses it and Windows 8 is something
like a trillion lines, right?
So-- 'cause it does some stuff.
It does tiles and stuff.
Tiles [laughter], stuff, it's cool,
it's like squares are cool again.
It's the '50s-- green text,
yeah, ADM-5s for you, right?
So I gave this data to Dan and I said, "What do you think?"
And you know what?
You should never give actual data to people with a PhD
in biostatistics, because he made these things correlate
like you would not believe, which is in Chapter 1
of Software Security; you can read that if you want.
But it's not surprising to say more code, more bugs.
Now, there is a subtle message here and I spoke
about this earlier with a small batch of students,
but I'll repeat myself here.
So we're getting better at software security
and the defect density ratio is dropping over time in terms
of KLOC, and you can see that in actual measurements made
by Microsoft and others, you know,
I'm sure you've seen that at some point.
So the defect density ratio is going down.
That is bugs per square inch going down.
The problem is that we're building more square miles
of code every day than we ever made before in the history
of the planet so the problem, the bug pile appears
to be growing even though we're getting better at it
in little globs of [inaudible] lines.
Everybody see that?
It's got two whole variables which is way too hard
for technology analysts to grasp.
So they say, "Well, all the stuff
that the software security guys are doing doesn't work 'cause
there's more bugs than ever," and you just go, "Oh my God,
you're so dumb, stop telling me what to do."
Right? But we're making progress
and the defect density ratio is going down, okay.
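The two-variable point here (falling defect density, growing code base) can be illustrated with made-up numbers:

```python
# Bugs per square inch going down, square miles of code going up.
# Both figures below are hypothetical, for illustration only.

def total_bugs(kloc: float, defects_per_kloc: float) -> float:
    """Total bugs = code size (KLOC) times defect density (bugs/KLOC)."""
    return kloc * defects_per_kloc

early = total_bugs(2_500, 5.0)    # small code base, sloppy practices
later = total_bugs(40_000, 1.0)   # density dropped 5x, pile grew anyway
print(early, later)
```

The density improved fivefold, yet the total pile more than tripled, which is exactly the effect the analysts miss.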
Next, zombie.
You should put software security activities
into your software development lifecycles, right?
There are many firms that think they have a software development
lifecycle-- apart from Microsoft,
who are fascists and they kill people and fire them
if they do not use the special SDLC.
Everybody else has 25 million SDLCs.
My favorite is when there's a department of bureaucracy
who believes there's one, like at EMC.
We talked to the department of bureaucracy and they're like,
"Oh yes, everybody follows the one special thing here,
it's all written down in this piece of paper."
And you go over there, you go, "Are you guys using that crap
like a [noise] [laughter]?"
"No." No, when they come with a clipboard, we just say, "Oh,
yes, yes, yes, yes, oh yes, yes, yes, we did some testing,
we did it, we did it."
And, you know, the people with the clipboard
from the software bureaucracy,
they're too stupid to ask for results.
So they're like, "Oh, testing, good, check," you know,
and then you're done and instead, there are 40 SDLCs,
maybe every single possible SDLC.
Let's name a few.
Well, there's Agile, there's extremely bad programming
which is related to Agile.
There's scrummy Scrum, Scrum, there's this spiral thingy,
there's a waterfall one.
Let's see, there's Tim's way, that's the best way.
If you want a bonus, work for Tim's group.
Tim always ships, Tim gets a bonus.
Tim's way is the way.
You'll find Tim's way in every corporation.
So the thing is you have to come up with a way
to describe software security activities that fit
in no matter what the SDLC is.
That's what I talked about in the Touchpoints which is similar
to the stuff that is in Microsoft's SDL
but Microsoft's SDL assumes that you're using their SDLC,
which is incorrect, and you should never ever go
to a dev group and say, "Well, you guys,
I know that you do Agile scrumy Scrum, Scrum,
but you really should switch to CMM level 53 and use UML."
And they'll be like, "Okay, I quit."

That's what really happens, right?
So you can't come from security and go, "Well,
you guys are going to have to change your entire SDLC
because I have a better one."
They'll be like, "Okay, I quit and get another job next door
where they don't have an SDLC."
Okay, bugs and flaws, we already did the bugs and flaws thing.
What's the division there?
Middle guy?
Division between bugs and flaws?
It's 50/50, right? Too bad.
All right, so that's another zombie.
Here's another zombie, this one is really important,
and testing people know this and we need to tell the non-geeks
and the normals out there that this is the way the stuff works.
Imagine that you took a hacker and you put the hacker
in a can, right?
So who's the hacker?
We put XF in a can or whatever.
And the Can Test can be run
against any arbitrary program, let's call it program A.
So you run the Can against program A
and the program gets hacked by the Can.
Like, the Can Test hacked program A. What do you know
about program A?
>> It sucks.
>> It sucks, that's exactly right.
That's a technical term [laughter],
and it's an important one, yes, Can Test broke your code.
Your code sucks.
Go back. And do it again.
So imagine that you run the same Can Test
because there's only one Can against program B
and it doesn't find any problems.
What do you know about program B?
>> Nothing!
>> Not nothing! Who said nothing?
That's wrong.
No, it's epsilon.
You know epsilon, right?
So you know that you ran some tests
and it didn't fail those tests.
It's a smidgen but it's better than nothing.
Now, what you have is a badness-ometer.
It's not a security meter, it's a badness-ometer
and it goes all the way from your code fell prey
to the Can Test so your code sucks and you're
in deep trouble all the way to--
we ran some tests and didn't find anything so, we don't know.
Now, I dare you to find any security people that go,
"We don't know, we did some testing
and we didn't find anything.
So who knows?"
You know what they say?
"Well, we did-- we hired some reformed hackers
and they found two, and then we fixed them."
Right? So, you know, so never forget
that badness-ometers are badness-ometers
and not security meters.
And watch out when you go get your job out of school,
there's going to be some dingbat VP who wants
to treat a black-box testing tool as a security meter.
That guy is wrong.
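The badness-ometer logic is simple enough to write down. This sketch just encodes the asymmetry between a failed and a passed black-box test:

```python
# A badness-ometer, not a security meter: a failed Can Test proves the
# code is bad; a clean run proves only epsilon, "we found nothing."
from enum import Enum

class Verdict(Enum):
    YOUR_CODE_SUCKS = "Can Test broke it: definitely bad"
    UNKNOWN = "no findings: epsilon evidence, not proof of security"

def badness_ometer(can_test_broke_it: bool) -> Verdict:
    """Interpret a black-box test outcome without overclaiming."""
    return Verdict.YOUR_CODE_SUCKS if can_test_broke_it else Verdict.UNKNOWN

print(badness_ometer(True).value)
print(badness_ometer(False).value)
```

The dingbat VP's error is collapsing `UNKNOWN` into "secure"; the two outcomes carry very different amounts of evidence.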
Okay, those are the normal zombies.
Now, I have one extra zombie for you.
This is a zombie baby.
I gave this talk in Argentina just a couple of weeks ago
and there's this woman in the front row who is pregnant
and it was like [noise] [laughter] not your baby,
not your baby.
It's really rather awkward [laughter].
We spent a lot of time in security
and in software security talking
about what's the best way to find bugs.
My way is better than your way.
We should do static analysis, we should do dynamic analysis,
we should do hybrid glass-house upside-down, left-handed,
bottle-washing static analysis, you know, and there's a lot
of argument over what's the best way.
Let me tell you the truth of the matter, you guys:
it doesn't matter what the best way is.
We already have too many bugs.
They're already in the pile.
And if we don't fix the damn bugs,
who cares if we find more, right?
You go anywhere on earth and the bug pile
for security is still large and those bugs need to be fixed.
And we're all arguing in software security land
about how to find more?
How about if we instead figure out how
to fix the ones we already know about?
Now, we spend a lot of time in Cigital fixing stuff
because if you find bugs and you don't fix them,
you are part of the problem.
You're not part of the solution.
And we need the solution done
which is actually fix the dang software
and so that's the zombie baby.
You'll hear me saying that a lot for the next 10 years.
Okay, Touchpoints we sort of already talked about.
Remember at the beginning of the talk
where we had the security people and the development people
and nobody in the middle?
That "nobody" has been replaced by the software security group.
Every firm that does a software security initiative
in a reasonable fashion has a software security group.
And the software security group does software security all day.
I'm going to tell you about a study that studies the work
of a thousand people who do software security full-time
on planet earth for 51 different firms.
That's full-time software security people,
and they have a boss who runs the software security group.
Sometimes it's called the product security group.
Sometimes it's called the application security group.
It doesn't really matter, but there's one of these in each firm.
If you figure out how to be a productive member
of a software security group, you will have a job
for the next 30 years, guaranteed,
and you'll get paid a ton of money, guaranteed.
Right, so we all know about this SDLC thing.
You should buy my book instead of Microsoft's,
that's the only point of that slide.
And here's why, because there are multiple SDLCs so we have
to focus on artifacts instead of process.
And if we look at artifacts and we put little boxes containing
the artifacts, there are these standard-issue artifacts no
matter what your SDLC is.
Then we can figure out what to associate
with those artifacts when we have them.
What's the one artifact in those boxes up there that is--
that should be associated with every single software project?
Read the boxes. Code, who said code?
That is right.
There is some government projects
that don't have code yet.
But they're going to have code any day, any day, any day.
Now, the good news is
in software land, there's usually code.
So if you have some code, then you do code review with a tool.
Yay! There are tools for that.
They're pretty good.
They're a lot better than they used to be.
They're not perfect but they sure beat code review
by hand, right?
Yes they do.
Yes they do because we suffer from the "get done,
go home" phenomenon when we're doing it by hand, right?
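As a toy illustration of "code review with a tool," the sketch below is a grep-level scanner for a few classically dangerous C calls. Real static analysis tools do data-flow analysis and far more; this only shows why a tireless tool beats the "get done, go home" human reviewer:

```python
# Toy static checker: flag a few classically dangerous C library calls.
# Real commercial tools are far more sophisticated; illustration only.
import re

DANGEROUS = ("gets", "strcpy", "sprintf", "system")

def flag_dangerous_calls(c_source: str) -> list[tuple[int, str]]:
    """Return (line number, function name) for each dangerous call found."""
    findings = []
    for lineno, line in enumerate(c_source.splitlines(), start=1):
        for fn in DANGEROUS:
            if re.search(rf"\b{fn}\s*\(", line):
                findings.append((lineno, fn))
    return findings

sample = 'int main(void) {\n  char buf[16];\n  gets(buf);\n  return 0;\n}\n'
print(flag_dangerous_calls(sample))  # [(3, 'gets')]
```

A tool applies the same rules to line 4,000,000 as to line 4; a tired human does not.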
What's the second most important artifact up there on that list?
Architecture and design. And here's how it goes:
we used to get calls back in the early 2000s and people would go,
"Hey, you guys,
you do architecture analysis, don't you?"
We said, "Yes".
"Well, how much does it cost?"
We said, "Well, sir it depends on what you built."
"Can you send us your spec?"
And there we go.
[Laughter] "Yeah, we were going to write that down."
And we would go, "The number you have called is no longer
in service."
And we would go, "All right, all right,
we'll come help you reverse out your architecture,
because you can't do an architecture analysis unless you
have an architecture or a spec."
Now, here's the dirty little secret of software.
The dirty little secret.
If you don't have a spec, your code cannot be wrong.
It can only be surprising.
[Laughter] So when you go to a coding group
and they don't have a spec, well they can't be wrong, right?
Do you think they're going to want a spec?
No because then they might screw up, seriously.
So it's sometimes as simple as that.
I like to do a one-page overview of the system
and sometimes they don't exist and you have to build them
and the most valuable part
of the analysis is the one-page overview where you can look
at the forest instead of looking at all the trees
which security tends to do.
All right, in the last 10 minutes, I'm going to talk
to you-- or 15 or something, I'm going to talk to you
about what we're doing
to measure software security initiatives.
So here's a little story
about how this project got started four years ago.
I was at the technical advisory board meeting
at Fortify-- I chaired their technical advisory board--
and we had one of the Fortify guys present
yet another software security methodology.
This was going to be the one that Fortify was going to use
for all their customers, which is a rather silly idea.
And so this guy presented a fairly reasonable thing based
on the experience of working in one firm.
So it's a one-data-point generalization, and if you know
who Fred Schneider is from Cornell, Fred was in the room
and you know Fred has this way of asking a question
that like somehow takes all the skin off of the questioner
and the skin is like laying around their ankles.
And they're trying to think about Fred's question
with their skin around their ankles and all of a sudden,
they realize they've just bled to death.
Well, so Fred was doing the Fred thing and I stepped in
and was like, "Oh, don't kill the guy, he's just a kid."
Well, I think maybe what we should do is instead of talking
about yet another software security methodology
because we've got the Touchpoints, we've got SDL,
we got CLASP, we got LMNOP, why don't we go out
and gather data from real firms and then describe what we see
in a model based on the actual data.
We'll call it science, right?
So we decided that that would be a good thing to do.
And so I called up ten of my friends and I'm like, "Dude,
we're going to do some science."
So I called up Lipner at Microsoft,
"Do you want to do some science?
We're going to measure some stuff.
We're gathering data."
He said, "I'm in".
So we went out there and we gathered some data at Microsoft.
We did it at, I don't know, Goldman Sachs, at Fidelity,
and a few other places, ten places.
Ten places where I knew they did a reasonable job
with software security because I was either helping them
or I'd known them for 15 years and I trusted these guys
and we were trying hard to build a descriptive model.
Well, four years later, we have 51 firms,
actually there are 58 now, 51 in BSIMM 4
which was released in September.
There are now over a hundred measurements,
at the time we were at BSIMM 4, there were 95.
So how can there be 51 firms and 95 measurements?
And the answer is some firms are measured twice.
One firm has been measured three times.
Some firms are so big that we have to measure their subparts
and then roll them up into one score
and that counts as one firm.
For example, Bank of America is one firm
and they have this small little division called Merrill Lynch,
and another one called Countrywide
which they really shouldn't have, but they have it,
so they're stuck with it.
[Laughs] And, you know, we measured them separately
and rolled that into one.
So we gathered a whole bunch of data
and we've published those data for free.
Here are the firms that we gathered the data from.
See if you recognize any of these, right?
So, biggest software firms on earth?
Yeah, probably; the three biggest are there.
Biggest banks on earth, we got JP Morgan Chase,
we got Bank of America, we got Wells Fargo.
People doing hardcore new stuff, well, how about these guys?
This is pretty darn hardcore new stuff.
How about Cloudy Cloud?
Oh yeah, we got Salesforce, dude, there's the Box guys,
maybe you haven't heard of them yet, you know.
So, we even have some Germans, look, the Germans are up there.
And my favorite of all is Intel whose name we can use
but we can't use their logo.
So I invented a new logo
for Intel myself called "Times New Roman".

[Laughter] It's pretty cool, huh?
And then there are some chicken firms, 17 chicken firms
who don't want us to name them.
I could tell you, like, they don't-- they--
oh, I shouldn't name them.
All right, so here is the idea behind this work.
The idea is not to tell you what you should do but rather,
tell you what they're doing.
So we go out into the jungle-- we went to 51 jungles-- and we go,
oh look, in this jungle, monkeys eat bananas.
And in that jungle, monkeys eat bananas.
This jungle, monkeys eat bananas.
And in fact, in 38 out of 51 jungles studied
so far, monkeys eat bananas.
What? Does that mean monkeys should eat bananas?
I don't know.
It just means monkeys eat bananas in 38
out of 51 jungles, right?
So what we're talking about is the descriptive model.
It's not prescriptive.
It does not say, "Do not run while eating bananas."
It does not say, "Only eat yellow bananas."
And, it does not say,
"Thou shall not steal thy neighbor's bananas."
[Laughter] It says none of those things.
Instead, what it says is a whole bunch of facts.
Now, the thing is, it's hard to argue with facts.
And, the interesting thing
about the BSIMM is there's a whole ton of facts.
And so, if you have a theory about software security,
go check our data and see whether it--
whether it holds up against the data.
And if you believe that the firms
that we've studied are not full of complete idiots,
which is debatable, then maybe those data will be useful.
But guess what, there is data.
The data is published.
The data is tracked over time.
We have a consistent way of measuring the data.
Let me say this one more time.
There is data.
We have a measurement for software security.
Now, the measurement is not: we took some software
and we stuck it in a box and the box either turned red or green.
There's a problem with that box.
You know what that is?
It can't exist.
Anybody heard of the halting problem?
Right. Apparently, there are some security vendors who have not.
If someone would please let them know about the halting problem,
that would be helpful.
Now, we gathered up all these data and we crammed them
into these little boxes.
There are 12 boxes here called the software security framework
and those boxes are kind of like an archeological dig.
You know, when you go to archeology land
and you put a little stick in the sand
and then you put a little string and then you say, look,
in square number 4, we found like a--
what did you find when you guys dug?
We found an Indian head, right?
The arrowhead.
And if you're in Europe, we found like a piece
of metal from Rome, right?
So, those are the two things that you find
and you just describe which square you found those things in.
We have identified 111 activities and we divided them
into these 12 squares, for example,
architecture analysis has these activities
that we have observed in it.
There are 1, 2, 3, 4, 5, 6, 7-- there are 9 activities.
Why are there 9?
[Laughter] There are 9 activities right?
Now, if these were some sort of theoretically perfect model,
there would be 120 activities and they would fit nicely
into the 12, there'd be 10 for each and--
that's not how it worked.
Some practices have a lot of activities
and some have only a few, right?
We divided those further into 3 levels.
Level 1 stuff, super easy.
Level 2 stuff, kind of harder because sometimes,
you got to do level 1 stuff before you can do the level
2 stuff.
It has nothing to do with the CMM levels by the way.
And level 3, rocket science.
Really? So, if you were out looking for stuff,
how often do you think you'd see the level 1 stuff?
Oh, pretty often 'cause that's easy.
How about rocket science?
Oh, not very often 'cause that's hard
and that's the way the data are organized.
And we've run statistical analysis against our model
since we hit 30 firms long ago.
Now, to give you an idea about the intensity of this model,
you can download the model for free at bsimm.com.
Elvis has downloaded many, many versions
and Bill Gates has done a surprisingly large
number of downloads.
There are people who call themselves Bill Gates.
So each of those 111 activities has a paragraph associated
with it, and the paragraph has real examples.
This is not a theory.
This is a description of two or three ways
that we saw this activity carried out in particular firms.
We don't out the firm names, but we could, we could
but I just want to make clear this is real data,
the observations are based
on descriptions that look like this.
We've done it for four years and we have a whole ton of data
and we actually got some accidental side effect data
like this.
We said, "Well, how many-- how long have you guys been doing that?"
And here's the average, five and a half years.
So you don't just sort
of do software security and done, right?
Check the box and you're done?
That's not how it works.
It takes a long time.
Everybody has a software security group.
The smallest has one poor guy,
but that development group has only 11 people.
And you can do some interesting things.
You can divide up those numbers and you can say
in our population the average of averages-- that is,
if we take the two vectors, SSG size and development group size,
and we put them together and we don't roll them up first
and we do the math-- the average is two developers--
sorry, two software security guys
for every hundred developers.
Let me say it again.
One person for every 50 developers, how about that?
Now, is that good?
I don't know.
Is that right?
It's right.
Is that what it should be?
I don't know.
That's what it is.
That's just a fact-- sorry, I know the Republicans don't let
us use those anymore.
But it's-- [laughter], oops, that just slipped out.
But imagine that we're using facts in science stuff, right?
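The "average of averages" being described, averaging each firm's SSG-to-developer ratio rather than pooling totals first, can differ from the pooled ratio. The firm data below is made up purely to show the distinction:

```python
# Average of per-firm SSG:developer ratios vs. one pooled ratio.
# Firm sizes here are hypothetical, not BSIMM data.
firms = [
    {"ssg": 1,   "devs": 11},     # smallest observed: one poor guy
    {"ssg": 20,  "devs": 1000},
    {"ssg": 100, "devs": 10000},
]

avg_of_ratios = sum(f["ssg"] / f["devs"] for f in firms) / len(firms)
pooled_ratio = sum(f["ssg"] for f in firms) / sum(f["devs"] for f in firms)

print(f"average of per-firm ratios: {avg_of_ratios:.4f}")  # small firms weigh in
print(f"pooled ratio:               {pooled_ratio:.4f}")   # big firms dominate
```

Not rolling the numbers up first is exactly what makes each firm count equally in the population statistic.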
Okay, so we can count up the number
of times these activities have been observed among the 51 firms.
And we can divide those into the 111 things, and we can come
up with a chart like this.
So here's all 111 activities and you see the 12 chunks,
those are the 12 practices.
And we can say, for example,
this particular activity was observed 45 times out of 51.
Wow, that's a lot, right?
So we know which things are popular.
We know who does what.
And we just let you take a look.
And you can decide whether it's important to you that 45
out of 51 firms do that activity and you don't or not.
It's your call.
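Counting how often each activity is observed across firms, as in the "45 out of 51" chart, is a straightforward tally. The firm names and activity IDs below are invented for illustration:

```python
# Tally BSIMM-style activity observations across firms.
# Firm names and activity IDs here are hypothetical.
from collections import Counter

observations = {
    "firm_a": {"AA1.1", "CR1.2", "T1.1"},
    "firm_b": {"AA1.1", "CR1.2"},
    "firm_c": {"AA1.1"},
}

counts = Counter()
for activities in observations.values():
    counts.update(activities)

for activity, n in counts.most_common():
    print(f"{activity}: observed in {n} of {len(observations)} firms")
```

The output is a popularity ranking of activities, which is data, not a prescription.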
And in fact, we can do a measurement and compare you
to the world population or we can compare you
to a sub population, right?
So I just was at NASDAQ on Wednesday
and showed them their BSIMM measurement and compared them
to 19 other financial services organizations.
And I was sitting with Mark Graff who's the new
CISO over there, and we're talking about budgets
for next year and he said, "Damn, I'm taking this chart
into budget land, because now I know what I'm going
to spend stuff on."
Bing, mission accomplished, right?
So this is a good thing.
You can measure these measurements over time
and you can notice things like, "Oh look, spiky,
spiky, not round," or something like that, or, "Oh look,
we seem to need to do some work in policy and compliance
because we're way behind the curve there,
and we're way behind the curve in architecture analysis."
Of course, that's a very low-resolution graph right there
because we do it with what we call the high water mark thing.
We can do a high-resolution graph too.
Look at this, there's a little 1 if you do the activity,
otherwise there's a blank.
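The high water mark scoring mentioned here can be sketched as taking the highest activity level (1 to 3) observed per practice. The practice data below is invented:

```python
# High-water-mark scoring: a practice scores the highest activity
# level (1-3) observed in it; 0 if nothing was observed.
# Practice names and observed levels below are hypothetical.
def high_water_mark(levels_observed: set[int]) -> int:
    return max(levels_observed, default=0)

firm = {
    "Architecture Analysis": {1},       # only easy level-1 stuff seen
    "Code Review": {1, 2},
    "Training": {1, 2, 3},              # rocket science observed
    "Policy & Compliance": set(),       # behind the curve: nothing seen
}

scores = {practice: high_water_mark(levels) for practice, levels in firm.items()}
print(scores)
```

Collapsing each practice to one number is exactly why this view is low-resolution; the per-activity chart keeps all 111 bits.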
And remember those things that are really popular,
like this one here that 45 firms do-- this firm doesn't do it.
Does that mean that firm should do it?
No. It means they don't and everybody else does.
Remember when your mom said,
just 'cause everybody else is going to the mall
by themselves doesn't mean you can?
Your mom was right, right?
Remember that guy who got arrested?
That was not you, right?
So sometimes, mom is right, this doesn't mean you should do it.
This is just data.
In fact, this whole little chart with all these special colors
and everything, it was done
with this amazing tool called Excel, right?
And Excel is pretty stupid.
Excel is not a strategy.
This is just data.
You take this data, you drive your strategy
with the data then you have a smart strategy,
and you have a measurement tool.
So there are some firms-- like, Fidelity has used the BSIMM
for three years to measure their software security initiative
and adjust it and figure out how to make investments
and where they're behind and where they're ahead
and what they want to do next year, and it's working
like crazy for them which is kind of cool.
So this is a little science experiment
that accidentally escaped the test tube.
We started with ten, now we got 51, actually we got more
like 60, shooting for 75, it says 60
but we've already hit that, so now it's 75.
My guys don't know that yet though.
And we've translated it into German and Italian,
there's somebody working on Portuguese
and I think there is a Japanese version that may never get done.
But we're going to collect more data and guess what happens,
when we collect enough data we go back
and we fix the model 'cause the model is describing the data
and not the other way around.
So we did data first, model second, adjust the model,
use math, use statistical modeling techniques
to actually make the model super tight.
And the thing works like crazy.
It's a beautiful measurement tool.
I would encourage you to steal that work.
Now, in the remaining 34 seconds,
'cause that's what I've got, to hit an hour,
I got to tell you where to learn more.
I write a column for SearchSecurity.
And this month's column is about cyber war.
We're in the middle of debates in Washington about, you know,
whether we should spend more on defense and less on offense
when it comes to cyber weaponry, and unfortunately,
the people in Washington are truly confused about this.
But my belief is that we live in a glass house,
and so we should stop working
on fast accurate rock throwing, right?
And instead make our house less glassy.
We have a blog.
Everybody else has a blog, but ours has a cool name.
I have this podcast called Silver Bullet.
It has about 10,000 listeners.
Who is the current one?
We just released one today.
I think it just got released maybe 10 minutes ago.
And who did I do?
Oh, it is Thomas Rid, from King's College London, yes?
[Inaudible Remark] I will give the slides to the dean.
And the dean will somehow miraculously make them appear
in all of your e-mail inboxes.
Or maybe he'll put it on that oracle system, never mind.
[ Laughter ]
There is this magazine called "Security & Privacy".
It's good because it's peer-reviewed and some
of the articles are even reasonably scientific.
But on occasion, they may apply to the real world.
It's a very small sliver but this magazine seems to hit
that sliver more often than most.
I'm sure that you guys have an electronic subscription
to the IEEE Library, so that's for free.
The dean said he would buy everyone in this room a copy
of the book so just ask him later.

[Laughter] Is that wrong?
And-- [Laughter] And now I'm done.
Thank you very much.
[ Applause ]

We have time for negative seven questions.
So who's got one?
Anybody want to break the ice?
We've got mics that will run to you, over here.
>> So.
>> Right behind you, mic coming.
>> So just from a methodology.
>> Talk really loud.
>> So just from a methodology perspective,
how are you collecting the data for this model?
>> Right. So the data has been collected by the three people
who built the model, me and Sammy
and a guy named Jacob West.
And every single measurement activity takes a team
of an odd number of people, usually three, to do a set
of in-person interviews.
And we also look at various artifacts.
And then we go-- and we have about a two-week fight
over what we observed and then we show the measurement
to the people.
So the only way that it's objective is
by having the same measurers doing it
the same way all the time.
And we've never let the cat out of the bag.
I'm sure that one day, KPMG will be using the BSIMM incorrectly
so what can I tell you, right?
But right now, we're very concerned
about data consistency.
And I got to tell you this other thing.
I'm a software guy and I'm like an anti-process guy
if you've known me for years.
And I used to have huge raging battles
with Watts Humphrey like on stage.
And we were yelling stuff at each other.
And here, I built a process thing
that describes activities-- it's really very distressing for me.
[Laughter] Very distressing.
>> You're talking about more code, more bugs.
Do you want to address the myth or abuse of code reuse?
>> [Laughs] Well, code reuse can be kind of helpful and one
of the activities in the BSIMM involves building middleware
that's secure for some of your developers to use.
So-- that instead of running out
and doing their own implementation of how to do OpenSSL
and screwing it up again, you say, "Well,
when you're going to use OpenSSL, use it like this.
Here's some code, steal this code."
If the code that you're providing doesn't suck,
then you will have made some progress.
But if the code that you're providing is broken,
then you will have introduced a single point of failure into all
of your systems and so--
>> Or if it's misapplied.
>> Or if it's misapplied and it goes both ways
and you know what Ross Anderson said about distributed computation.
Maybe that was Leslie Lamport, I can't remember.
I guess Lamport said, "A distributed system is one
in which a computer you didn't even know existed could screw
up your entire system."
And Ross said, "Programming a distributed system
with an attacker's computer in it is
like programming Satan's computer."
Both are right, right?
And so, you know, we can't provide perfect code
to everybody all the time if they will misapply our stuff,
but if we do it carefully it seems
to be an activity some people have found use for.
Fair enough?
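The "use it like this, steal this code" idea in that answer can be sketched with Python's stdlib ssl module (which itself wraps OpenSSL): one vetted helper that every team reuses instead of hand-rolling TLS setup. A minimal sketch, not a complete hardening guide:

```python
# One blessed TLS client context for developers to reuse, instead of
# each team hand-rolling OpenSSL setup and getting it wrong.
# A minimal sketch under stated assumptions, not a hardening guide.
import ssl

def make_client_tls_context() -> ssl.SSLContext:
    """Return a TLS client context with certificate verification on."""
    ctx = ssl.create_default_context()            # sane defaults, CA verification
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocol versions
    ctx.check_hostname = True                     # explicit, though the default here
    ctx.verify_mode = ssl.CERT_REQUIRED
    return ctx

ctx = make_client_tls_context()
print(ctx.verify_mode, ctx.minimum_version)
```

The trade-off from the answer applies directly: if this one helper is broken, it is a single point of failure across every system that steals it, so it has to be reviewed carefully.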
Okay, who else?
Don't be shy.
It's not all bad.
It can be good.
>> I know the first principle
of security is never assume secrecy.
>> Yeah.
>> Secrecy is terrible.
On the other hand, by publishing this, aren't you, in some sense,
giving a broad audience attack vectors?
>> Yes. Welcome to planet earth.
Yeah, I think it's okay to do that.
In fact, I was at a meeting in Washington where, you know,
I thought we were done with this debate long ago, and this is more
about activities so it's not really so dangerous.
Although it might help you think, "Well,
they secured it that way.
They're using, you know, gothic arches or whatever,
so we can attack those."
But I was at a meeting in Washington of, you know--
they've been talking about software assurance for 8 years
in a row, they have the same meeting every six months,
and so far I don't think they've done a damn thing.
But they have good meetings,
and so somebody was presenting the CVE or the CWE
or the LMNOP, whatever it's called. CWE?
CWE. And gave a pretty good talk about this bug collection
that they've built over at MITRE,
and they put it up for free.
You can use it if you want. It's a disorganized mess,
but there are lots of bugs in it.
And a questioner went up to the mic and said, "Well,
I think it's real dangerous
to publish them bugs 'cause Al Qaeda's going
to use those bugs against us."
And you're just like, "Oh my God, take the mic away."
Seriously, seriously.
So my belief is this.
If you want to know how to fix your systems,
you damn well better know how they're going to be attacked.
We have to talk about attacks.
We have to talk about exploits, and we need to publish them.
Sunlight is the only way to solve this problem.
Who else? Are we done?
One more.

>> So with your bug parade, you mentioned these sorts
of well-known bugs: buffer overflow, SQL injection.
We've known about these bugs for ages.
>> Yeah, and still we use C.
>> So-- yeah, the question is what can be done
because we're still seeing software
with the same old well-known bugs if we already know
about the problem and it's not being fixed,
what else is there to do?
>> What department are you in?
>> Computer science.
>> Are you professor?
>> Yes.
>> Are you in theory?
>> Here's a challenge for you.

[Laughter] Got him, could tell.
Theory guys, they stick out, right?
So imagine that you spent since the '40s trying to figure
out how to make a universal computer and describe it with,
you know, grammars and a Turing machine model, and then imagine
that you decided that this thing could compute a lot
of stuff, and then imagine that you used it to compute a lot
of stuff, and it computed some stuff you didn't want computed.
Damn it. The question is, why does it compute
that stuff anyway?
So is there a way that we can shrink-wrap things
down, so we can have a theory of a machine
that might not have universal power, but that can compute less
and can't compute things we don't want computed?
Does that sound hard?
Good, work on that.
[Laughter] Yeah.
I think we're done.
Thank you.
[ Applause ]