>> DOUGLAS CROCKFORD: Thank you very much. Welcome to the YUIConf Gala Costume Masquerade.
It's great to see all of you here in your festive costumes; you all look fabulous. We
are here at YUIConf at Yahoo!'s -- where are we? -- Great America campus in Santa Clara. I have two
unusual things to talk about in one talk -- how could these things possibly be related to
each other? It turns out there is a connection, and it's surprising and hopefully entertaining.
This talk comes from a scientific discovery. There were some social scientists who were
doing some work, as they do, in trying to understand human intelligence. How do we think?
How do our brains work? They came up with a set of simple experiments and their results
were really surprising. They published their results and the scientific community said
that can't be right, there must have been an error in the design of the experiments,
there must have been some error in the evaluation of the data. This cannot possibly be the correct
explanation for how we think.
But the results were at least interesting enough that other people attempted to replicate
the studies, and then did follow on studies and new experiments. The conclusion was: yeah,
it's true, we actually think like this. This is sort of how science works, where someone
will come up with an experiment which has a conclusion which contradicts what is commonly
understood by the scientific consensus, and it will immediately be challenged. But then
we'll try to replicate it, and if it bears out then the scientific consensus changes
to adopt it, until further experiments contradict it again. Then there's a Nobel Prize.
All of that happened in this case. This is how science happens.
This is all told in a wonderful book: The Science of Fear by Daniel Gardner. Gardner
talks about this research and its implications on society, and the fact that because of the
way we think, we tend to misunderstand what danger is. We fear the wrong things. The consequence
of fearing the wrong things is that you actually put yourself in greater danger. Great book,
I highly recommend it.
What was this discovery that was so amazing and unusual and probably true? It was that
there are two systems in the brain. System two could be called the head. It's where we
do analytical thinking, it's where we do arithmetic and mathematics. It's where we think deep
thoughts. It's great, but the problem with it is it's kind of slow. It's because the
head is kind of slow that we had to invent computers, because the modern world demanded
a lot of number crunching, and we're just not very good at that. We needed machines
that were faster than we were at doing that stuff.
Then there's system one, which could be called the gut. It's evolved from the fight or flight
response. It is very, very fast, but there are tradeoffs in the way it's fast, so it
makes mistakes. If you ever saw a cat just standing there and then suddenly he freaks
out and runs, and then he stops and goes yeah, I meant to do that, that's the fight or flight
response. The biases in that system are set so that false positives are OK, because it's
better to get a false positive than a slow positive, because then you don't have time
to get out of the way of whatever's coming at you. You'll see a flock of birds suddenly take
off for no reason and come back; that's that mechanism. We got that.
The gut is really fast, but it's only approximate because it doesn't know arithmetic, it can't
solve hard problems. But it's not trying to solve hard problems, it's just looking for
certain kinds of patterns, and when it sees those patterns it reacts.
Now, neither the head nor the gut are surprising. I mean, we all know about that. My head tells
me this but my gut tells me that. We're always experiencing that kind of stuff. The surprise
in this research was the connection between those two systems. It turns out that the gut
informs the head. The gut sets up the assumptions that the head is going to use in doing its
computation. The head is not generally aware that that's the case. The things that the
gut tells the head are weighed as more important than things that the head itself has experienced.
That makes it really easy to confuse people by confusing their guts.
This was news to the scientific community, but there is another community which has known
about this stuff for a long, long time, and that is of course the advertising community.
They figured this stuff out decades ago; this is not news to them, because a lot of their
business is selling stuff to us that we don't need. They create wants for things that we
don't really want. How do they do that? You don't do that by talking to the head, you
do that by talking to the gut, and nobody is better than the tobacco industry.
Because look at tobacco. What does tobacco do? It makes you smell bad. From the first
time you light up, you stink. Tobacco does that. Then your teeth turn yellow. Then you
get sick, and then you die. That's what tobacco does. So how do you sell a product like that?
You don't sell it to the head, you sell it to the gut.
If you ask a smoker "why do you smoke?" Well, why, that's an analytical question, so that
goes to the head. But the head is going to now start thinking with facts that it got
from the gut, and the gut says cigarette smoking is not dangerous because of the way the gut
looks at danger; it's looking at immediate danger, it's looking at shocking danger. Slow
death doesn't register. So to the gut, slow death is as good as no death at all.
If you have any logical system and you start with false premises, you can reach a false
conclusion. I mean, we know that; logic tells us that. But that's how we think. You can
ask a smoker "why do you smoke?" and you can be sitting with them in a restaurant and if
he were rational he'd go "I smoke, well that's stupid, I should stop immediately." But that's
not what happens. Instead he's trying to think "why do I smoke? There must be good reasons
for it. Um, it makes me popular." And he'll say that not noticing that the healthy good
smelling people who are near him are trying to get away so he doesn't taint his food.
So people will smoke until it kills them, until they die. You could say that's because
of nicotine addiction, they're addicts and that's why it happens, but that doesn't explain
brand loyalty. Most smokers will stick with the brand that got them hooked until it kills
them. That's not addiction, that's something else. That's some confusion that's happening
in the gut.
Let's switch for a minute and talk about computer programs, a completely different topic. Computer
programs are the most complicated things humans make. We don't make anything else which is
as complicated as programs. Programs are made of a large number of parts, usually expressed
as statements or variables or functions or other things, and they all have to work together
in a way which is amazingly complicated. There's nothing we do that's more complicated than
software. You could point at systems which are very complicated, like an airplane is
very complicated, and that's true, but the most complicated component in an airplane
is the software. That's true of all systems today. Software is complicated.
People are not good at that, so early on when we started programming there was this idea
that programming is just too difficult and we should write programs that can write better
programs than we can, because people are just not very good at this. Once we can get the
program to do that, we can then tell the programs "make a program that's better than you," and
we keep doing that until they become our overlords.
That didn't happen. Artificial intelligence failed. Artificial intelligence has done a
lot of amazing things -- it can play grand master chess and a pretty good game of Jeopardy
now -- but you can't give a set of requirements to a computer and
have it write the program for you. It can't do that. We're still writing programs by hand.
We've made extremely small progress in the evolution of how we write programs.
The most powerful tool we have is the programming language. That is a partnership between us
and the computer where it's doing some of the work and we're doing some of the work.
The programming language allows us to work at a higher level of abstraction, which allows
us to be more productive, but still it's basically handmade stuff.
Programming languages are extremely important in that they control, or shape, the way that
we think about things. Sometimes that's a really good thing if the programming language
helps us to organize things well, but that can also create blind spots in us. If you're
using a programming language that does not offer recursion, you're not aware of what
a valuable tool recursion is. Or if you have a language that doesn't have closures, you're
not aware of what a valuable tool closures are. So, programming languages work for us
and against us.
The thing that makes programming so difficult is that it has to be perfect. A program has
to be absolutely perfect in every detail for all possible inputs and all possible uses,
even including uses which were not anticipated in the design of the program. If any flaw
is found in the program, the computer has license to do the worst possible thing at
the worst possible time, and it is not the computer's fault. Whose fault is it? It's
the programmer's fault. It is the programmer's fault for having created a program that was not perfect.
You would think that the way we should do this stuff is we should hold onto the programs
and keep working on them and refining them and cleaning them until finally they are perfect,
and then we'd release them. We don't do that because we would never release anything. Instead
we put out stuff which is known to be imperfect and just hope that we get away with it, that
we won't find a bug in it until the next cycle. Sometimes we get away with that, and very
often we don't.
So we have this problem of perfection. Humans are terrible at perfection. We are greatly
imperfect creatures. I am a very imperfect person, but I program because we've got no
one else to do it.
We are hunters and gatherers. I don't mean that metaphorically. There has been no human
evolution since the Paleolithic era. We are those guys, we've got their brains. We were
selected for running around, our heads were designed for thinking about strategies for
getting food, and our guts were designed to get us out of danger when the predators were
coming after us. We're using that brain now to program computers, and there is no reason
to think that that should work. Somehow it does. It's miraculous, it's crazy.
We're using everything we got. Obviously we're using our head to do the analytical stuff,
and it can do it. It needs to keep the problem and the state of the program and all of the
stuff it's working on. You try to keep the whole model in your head so you can reason
about it. But if you try to teach programming, it's really hard. We don't have an algorithm
for how we do it. As we're decomposing a problem, sometimes we're going top down, sometimes
we're going bottom up, inside out, outside in. We're constantly changing strategies as
we're working on this. Nobody can describe how we do that. Somehow we all figured it
out, and we do that every day, but we don't know how we do that.
It occurs to me that it must be the gut that's allowing us to do that. It's the thing that
shifts us from one strategy to another, that gives us the occasional flash of insight which
makes the whole thing work. We could not program without our gut. I have absolutely no evidence
to justify that statement. But my gut tells me it's true, so I believe it.
Much of the craft of programming is making tradeoffs. There's never a perfect solution
to anything, which is one of the things that makes perfection unattainable, so we're constantly
having to tradeoff space for speed, or other things. The gut, unfortunately, is extremely
bad at making tradeoffs. Sometimes the gut, instead of helping us, significantly hinders
us. We'll talk about some of the consequences of that.
My examples will be in JavaScript, but the things I'm going to talk about are applicable to all
programming languages. JavaScript was famously created in a hurry. Brendan Eich managed to get
some good parts in it -- and they are really, really good -- but he also got a lot of bad stuff.
There was just not time to try it out and make sure it worked right, and it doesn't, and it's horrible.
When I came to JavaScript, I decided not to learn the language before I started working in it, which was about as stupid a
thing as you could do. I did that. But I kept running into all these sharp edges that the
language had, so I decided I needed to make a tool to help me manage this crazy language.
I created something called JSLint.
I started with a Pratt parser. Pratt parsers are amazing things. You can write compilers
really efficiently. It's a lovely technique. I owe you a talk on that, because it's fascinating.
Just a little bit of fluff, and suddenly you've got a compiler. It's great stuff. Having this
parser, I could use it to help me find problems in my programs, because there were a lot of them.
People would come onto the forums and say "hey, my program doesn't work," and they'd post it.
I'd take it and throw it in JSLint and see if I could automatically figure out what the
problem was. Sometimes I could and sometimes I couldn't; sometimes the way they were coding
was just so complicated or so error-prone that there was no way that you could tell
if it was right or wrong. In cases where these examples suggested rules, I'd put them into
JSLint. Where they didn't, sometimes they suggested a discipline: if you followed the
discipline, then it would be possible to determine whether the code was right or wrong. The
program evolved that way. When I started I had no plan for how JSLint was going to grow; it just grew.
JSLint comes with a warning: JSLint will hurt your feelings. This is true, this is absolutely
true. I hear from people all the time "waaaah, JSLint..." You wouldn't believe how much mail
I get from people whining about JSLint. Well, wait a minute. Let's look at this. JSLint
is a code quality tool. Its intention is to help people become slightly more perfect in
their programming. They're running the program on purpose, for that purpose. Why are they
complaining that it's telling them how to make their programs better? I think there's
some interference going on with the gut, that the gut is telling them they're doing things
which are bad. I determined these things were bad empirically by looking at bad programs
and trying to identify flaws in bad programs. They haven't been through the process that
I have, so they just don't understand why JSLint is telling them to do the things it tells them to do.
We've talked about this argument. Should curly braces be on the left or the right? When Ken
Thompson designed the B language he decided to put them on the right. They could be anywhere.
When the late Dennis Ritchie did the C language he copied Thompson's syntax, so he put them
on the right too. But there were guys in their lab who wanted to put them on the left because
they thought it looked better. They probably argued about that for a week, and then Thompson
and Ritchie said: do it however you want, just leave us out of this, because this is a stupid argument.
There is no reason for why you should do it one way or the other. There isn't. There is
no argument which is right or wrong; left or right, it's completely arbitrary. It's
like driving -- should we drive on the left or the right? The British can demonstrate
that you can drive on the left. We drive on the right. It's OK. It's good there aren't
any roads connecting us with England so we don't have to deal with that. It doesn't matter
which side we're on; it's just important that we all be on the same side. I told that story in
Bangalore, and there was confusion: "what is he talking about?"
If you have someone who's always put them on the left come into a shop where they put
them on the right, and they say you've got to put them on the right now, he'll start
to cry. "No, I'm not going to put them on the right, that's so wrong. Can't you see
how wrong that is?" He'll start making up reasons. The head is now working, trying to
figure out why it's wrong. "It's more readable?" He gets more frustrated because it doesn't
make any sense, and he might even hear himself saying these things and know this doesn't
make any sense, but there's got to be a reason, because why would I be this passionate, why
would I be crying, if this weren't important? We'd like to think we're rational. We're programming
computers, we're analytical, we're mathematicians, we're laying this stuff out, and we're crying
because we had to move a piece of punctuation from one line to another. There is something
else going on here.
In JavaScript, though, this argument does have a right answer. You want to put them on the right. You've got to be consistent,
no matter what you do. Everybody will agree that it looks stupid to sometimes put them
on the left, sometimes put them on the right. You've got to be consistent. It's like driving.
You've got to always do it the same way, and everyone will agree that everybody should
do it like they do it. But we can't agree on how it should be.
I think Thompson did the world a disservice. He could have easily fixed his B compiler
and said it's got to be on the right, dammit, and shut up or you get a syntax error, end
of story. But he didn't, he left it sloppy. That introduced some personal expression into
the thing, so we can't agree.
JavaScript has a feature called automatic semicolon insertion, which does this stupid thing:
if you have a return statement that's returning an object literal, and the opening brace is on
the left, then instead of returning the object it returns undefined. You get no
compile time warning, no runtime warning. Your program may not fail at that point;
it may fail some minutes later, which makes debugging really hard. In debugging if you
finally track it down to this line, you can look at that for hours and not understand
what's wrong with that. That should work.
But if you always put your curly braces on the right, you will never suffer. Once after
explaining this at a talk, someone came up to me and said "wow, that thing about the
return statement, that is so right. From now on on all my return statements I'm going to
put them on the right, but the rest of the time I'm still going to put them on the left."
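A minimal sketch of the hazard (the function names here are illustrative, not from the talk's slides). With the brace on the left, semicolon insertion terminates the return statement and the object literal becomes unreachable code:

```javascript
// Brace on the left: a semicolon is silently inserted after
// `return`, so the function returns undefined and the object
// literal below it is unreachable (and is not even an object
// literal anymore -- it parses as a block with a label).
function broken() {
    return
    {
        ok: false
    };
}

// Brace on the right: no insertion point, so the object literal
// is returned as intended.
function works() {
    return {
        ok: true
    };
}
```

There is no warning in either case; the only defense is the convention.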
Prefer forms that are error resistant. For no cost, if you just say I'll do this thing
for which there isn't a right answer and do it this way, you avoid a terrible problem,
and the net cost of doing that is zero. If you look at it in terms of tradeoffs, this
is a good tradeoff: zero cost, big benefit. If you're trying to be perfect, this is good.
Your gut might be saying no, this is so wrong, can't you see how wrong this is? The gut is
really bad at making tradeoffs, but the gut will tell the head yeah, we're making a good
tradeoff. But you're not.
The switch statement. I have to tell the switch statement story again. Early in the development
of JSLint someone wrote to me and said "you know, there's a problem with the switch statement."
The switch statement was designed by Thompson. He modeled it after the Fortran computed goto
statement, which is now recognized as harmful. But that hadn't been completely accepted yet.
There's this goto-ness in it, in which you really don't have a set of cases, you've got
a set of labels and the code will fall through to each other. It's really a computed goto
statement, which is terrible. You can have fall-through going on.
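The hazard can be sketched in a few lines (the `describe` function is an illustrative stand-in, not code from the talk). A missing break lets one case silently run into the next:

```javascript
// A case that falls through: case 1 has no break, so control
// continues into case 2 and both actions run.
function describe(n) {
    var result = [];
    switch (n) {
    case 1:
        result.push('one');     // no break here: falls through!
    case 2:
        result.push('two');
        break;
    default:
        result.push('other');
    }
    return result.join(' ');
}
// describe(1) yields "one two", not "one" -- and nothing in the
// text of the program tells you whether that was intended.
```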
This guy wrote to me and said "that's sometimes a problem. When I'm looking at the code it's
really hard to see where the fall through occurs, and that can be a difficult thing
to debug, so your Lint program should warn about that." I thought about it, and I wrote
back to him, I said "well yeah, that's true, I've seen it, but that hardly ever happens.
On the other hand, you can optimize the flow going through all these cases and that could
be quite elegant. There's value in that, so looking at the tradeoff, I think it's probably
better not to warn on that."
OK. The next day, the same guy wrote to me and said he found a bug in JSLint. I'm always
happy to hear of that, so I got into the debugger, and it turned out I had a case that was falling
through. In that moment, I achieved enlightenment. We like to think we spend most of our time
power typing. "Yeah, I'm being productive, I'm writing programs!" But we don't. We spend
most of our time looking into the abyss, saying "My God, what have I done? How am I ever going
to make this work?" Once we figure it out, we forget that we did all of that. It's sort
of like the way we forget the false positives that the gut triggers. If we observe all of
the false positives then the gut couldn't work anymore, because we'd be analyzing our
reactions instead of responding, which is what the gut wants us to do.
I got that really wrong. What was my mistake? "That hardly ever happens" means the same thing
as "it happens." This is the gut talking, and the gut is confused about number. The gut
thinks that a lot has more value than all. The gut thinks that not very much, or not
very often, is the same as none, or never. That's not true; I mean mathematically we
know that's not true, but the gut tells us that. So that caused me to reach a very wrong
conclusion about the safety of switch statements.
Normally when we make mistakes we just forget about it and keep going. We spend most of
our time making mistakes, not power typing, because there's something wrong with us. I
used to think that everybody should learn programming when I first started learning
it and thinking about how to organize the world in terms of data structures and algorithms.
I thought wow, this is such an amazing way to organize information -- everybody should
learn to do this. I don't think that anymore. I think there has to be something seriously
wrong with you in order to do this work. A normal person, once they've looked into the
abyss, would say I'm done, this is stupid, I'm going to go do something else. But not
us, because there's something really wrong with us.
A good style can help produce better programs. Style should not be about personal preference
or self-expression. Some people think that style is the stuff in your program that the
compiler ignores, and because the compiler ignores it, you've got license to do whatever
you want with that because it doesn't matter, it's not part of the important information.
You can structure it any way you want, just to show off or to amuse yourself or to express
your individuality. That's not what it's for.
Let's talk about style. The Romans wrote Latin all in uppercase with no word breaks or punctuation.
They were the biggest empire in the world at one time, and they did fine with this.
It's definitely harder to read from our perspective. Like, on the third line, you could read that
as "NOW OR DB REAKS". You have to work a lot harder in order to parse the text and understand
what it says.
This worked fine for the Romans up until about the time Constantine established Christianity
as the state religion of the Roman Empire. At that point it became necessary to replicate
all of the Holy documents and distribute them over the empire. There was a problem, because
they didn't have the originals of any of these documents; all they had were copies of copies
of copies, and none of the copies agreed. That was a problem because they were claiming
their spiritual authority on the written word, and nobody was really sure what the written word actually was.
They decided well, we need to get better at this. It turned out this style of writing
was contributing to the problem. This turns out to be really hard to copy by hand accurately.
Medieval copyists introduced lower case, word breaks, and punctuation in order to reduce
their error rate. It made it easier to extract the information from the texts and to copy
them into good copies. It worked -- this was a very effective way of making their copying
much more reliable.
Gutenberg copied this convention when he laid out his Bible, and it's been in printing ever
since then. It's on the internet now. This is the way we do text, not just in religious
texts now but in academic work, popular work, fiction; everything uses these conventions
because they work. We've been using these now for many centuries. It's a good idea,
and I think to the extent it makes sense, we should continue to use these in programming
too, because the most important thing for a program is for people to understand it.
This is a convention that people can understand. We have centuries of evidence about it.
What the copyists demonstrated was that use of good style can help reduce the occurrence of errors.
Since we're going after perfection, we want a style which helps us to do that.
There have been lots of books written about style in literature. One of my favorites is
William Strunk's Elements of Style. It's very influential. It was written almost one hundred
years ago. Some of his advice is a little dated now, just because English has continued
to evolve as a language, but much of his advice is still really, really good.
Programs must communicate to people. There are some people who think as long as the compiler
understands what I meant, that's all that matters. That may be the least important thing.
It's more important that humans be able to understand it, because we need to keep that
thing working. Particularly as we get more and more agile, programs are never finished,
so you can't say it's in the can, it's done. It needs to always be in a form which is always
editable, and in order to be editable it needs to be readable.
We want to use the elements of good composition where applicable. For example, if you have
a comma, you put a space after it, not in front of it. Even though the compiler will
let you put it in front, there's no reason to do that. It looks stupid, and you don't
want to write programs that look stupid.
Now, the rules of English are not good enough for software. There were attempts at trying
to create programming languages which look like English, and they didn't work because
English is not precise enough. You need extreme precision as you're trying to approach perfection.
The computers are not smart enough to make sense out of wacky prose, so we need to be
much more specific. We have new kinds of ambiguities that we need to be concerned about that we
didn't need to be concerned about when we were writing in English.
For example, we've got parentheses which can indicate grouping, and they can also indicate
invocation. One way to help us distinguish which of those is going on is that when we're
grouping, we put a space in front of it, and when we're invoking we don't. In these examples
foo is a function, so there should not be a space; the paren should be adjacent. Return is
not a function, so we should make it clear it's not a function. There should be a space there. Here we have
an anonymous function, but it looks like we're calling a function called function, and that's
not what we're doing. We should put one more space in there to make it clearer.
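The convention might be sketched like this (the names `foo`, `a`, `b`, and `c` are illustrative):

```javascript
// Invocation parens sit adjacent to the function name; grouping
// parens get a space in front of them, so the reader can tell at
// a glance which is which.
function foo(x) {
    return x + 1;
}

var a = foo(1);             // invocation: no space before the paren
var b = (1 + 2) * 3;        // grouping: space before the paren
var c = function (x) {      // space: we are not invoking something
    return x * 2;           // called "function", this is a literal
};
```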
You could argue that the reader ought to be able to figure this out, and in fact the reader
can. You could figure out all these bad examples and make sense out of what was really going
on, but that is not where the reader should be spending their time and energy. They shouldn't
be trying to get past your syntax and your lousy grammar, they should be trying to understand
the thoughts and the structures of the program so they can make it work.
Immediately invoked function expressions are a really useful form, but there's a syntactic hazard in that you can't put them in as the
first thing in a statement because of an ambiguity in the language. If the word function appears
in statement position, then that's assumed to be a function declaration, and you can't
immediately invoke a function declaration. I think you ought to be able to, but you can't.
The way we get around that is by putting parentheses around it, and those parens change it from
being in statement position to expression position, and then there's no ambiguity; we
know it's a function literal.
I see a lot of people wrap the function in parens, but I think that looks goofy. What
we're talking about is the whole invocation but we've got these things hanging outside
of it looking sort of like dog balls.
The parens are a message to the reader, saying: no, what's important is the whole invocation. I would put the parens around
it. That's better, I think. We can see everything between the parens should be treated as a
unit, which is the function being invoked. That's what we're talking about in this expression.
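The two placements, sketched (the variable names are illustrative). Both forms run identically; the difference is purely what the reader sees as the unit:

```javascript
// "Dog balls" style: the invoking () dangles outside the wrapper,
// as if the parenthesized function were the interesting thing.
var x = (function () {
    return 1;
})();

// Preferred: the entire invocation sits inside the wrapping
// parens, so everything between them reads as one unit whose
// value is what gets assigned.
var y = (function () {
    return 2;
}());
```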
Now, there's another hazard in the language. We've talked before about automatic semicolon
insertion. Here's another example of that. We can have almost any statement before this
immediately invoked function and it will be misinterpreted. Instead of setting x to y and then doing our
function thing, it will treat y as a function and pass the other function to it as a parameter.
That's screwed up. This is majorly screwed up. The moral of this is: do not rely on automatic
semicolon insertion. Put all of the semicolons in all of the places where they're
supposed to go and nowhere else, because if you don't, things like this happen to you.
This is another thing where you could read this code for hours and go "I don't see what's
wrong with this." If you put it in JSLint it'll tell you right away what's wrong with
it, and that's why you should pay attention to everything JSLint tells you.
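A sketch of the misparse (the names `x` and `y` follow the talk; the bodies are illustrative). Without the semicolon, the parenthesized function on the next line is read as an argument list:

```javascript
function y(arg) {
    return 'y called with: ' + arg;
}

// Intended: assign y to x, then run the function below.
// Actual: the missing semicolon makes the next line an argument
// list, so y is invoked and x gets its return value.
var x = y
(function () {
    return 42;
}());

// x is now the string 'y called with: 42', not the function y,
// and nothing failed at this line -- the damage surfaces later.
```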
The with statement was well intentioned, but it just doesn't work right. We've got this with statement. It does
one of those. Can anyone tell me which one it does? Yeah, well you can't tell. It could
do any of them. In fact, every time this statement is executed, it could do a different one.
There's no way you can tell, reading the program, what it's going to do. When JSLint's looking
at a program trying to figure out what's wrong, it says oh, you've got a with statement -- that's what's wrong.
Stop using those with statements! You don't need it. There's nothing you can do with a
with statement that you can't do just as well without one, so just don't do it and you avoid
the confusion that this causes. Now, some people argue, well yeah, I know with is screwed
up, but still sometimes it's really useful. You can do these really clever, useful things
with it. I'm not saying it's not useful, I'm saying there isn't ever a case where it isn't
confusing. Confusion must be avoided. Confusion is the enemy. Confusion is what causes bugs
and security mishaps and all the other things that make us miserable. We want to be precise
and clear and clean and elegant and simple; we don't want confusion. We can't afford it.
Then there's the double equal operator, which does type coercion before comparing, with confusing results. When JavaScript was standardized Brendan was trying to fix it in the standardization committee, but Microsoft
insisted no, it stays wrong in the spec, so we've still got it. Fortunately Brendan was
able to get triple equal in kind of late, so I recommend to always use triple equal
and never use double equal, because triple equal avoids these stupidities.
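A few of the stupidities in question. None of these double-equal results are guessable from the others; with triple equal, operands of different types are simply never equal:

```javascript
// Some results of double equal's type coercion.
var loose = [
    '' == '0',          // false: two strings, compared as strings
    0 == '',            // true: '' coerces to 0
    0 == '0',           // true: '0' coerces to 0
    false == 'false',   // false: 'false' coerces to NaN
    false == '0'        // true: both coerce to 0
];

// Triple equal does no coercion: different types, never equal.
var strict = [
    0 === '',           // false
    0 === '0'           // false
];
```

Note that `'' == '0'` is false while `0 == ''` and `0 == '0'` are both true, so double equal is not even transitive.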
Now, some people will ask, well, what about in the case where double equal actually does
what I want? Can I use it in that case? I'd recommend no, don't even use it then, because
someone who's reading your program, they have to stop and ask is he using it because this
is one of the rare cases where it actually makes sense, or is he just incompetent? Which
is it? Maybe it's undecidable. Maybe he's read other things you've written and gone
"I don't know." So don't use it, because it's hard to tell if it's being used right. Anything
which is indistinguishable from a common error, if there's a better alternative then go with
that. If there's a feature of the language that is sometimes problematic and if it could
be replaced by another feature that is more reliable, then always use the more reliable
feature. The cost is zero. The benefit is that you avoid a whole class of bugs and confusions.
Another good tradeoff.
Multiline string literals. This is a new feature in the language; it was added in ES5. I don't
like this feature for two reasons. One is it breaks indentation. We have deeply nested
structures in our programs, and this breaks it. The text has to go all over to the left
margin, so it makes the program harder to read, it really does. But there's a worse
problem than that; there's a syntactic problem. The first one is OK, and the second one has
an error in it. Can anyone spot the error?
Yeah, yeah. There's a space here. I mean it's obvious once it's pointed out, right? But
I don't want to be putting things in my program that I cannot distinguish from errors. I want
my program to be obviously right, or at least obviously trying to achieve rightness. I don't
want to be putting in things which are clearly wrong. That makes it much harder to spot the
things that are really wrong.
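A sketch of the hazard: the continuation form below is legal ES5, but one invisible space after the backslash turns it into a syntax error, while plain concatenation has no such trap.

```javascript
// Backslash-newline continuation: legal, but fragile.
// A single trailing space after the backslash is a SyntaxError.
var risky = "first line \
second line";

// Explicit concatenation survives reformatting and stray whitespace:
var safer = "first line " +
    "second line";
```

Note that the continuation inserts no newline, so both forms produce the same string here.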
So avoid forms that are difficult to distinguish from common errors. Some people say "but I
did that on purpose! I'm intentionally using this thing that looks wrong." I don't even
have to answer that, do I? I mean it's just stupid. Often I hear people saying "it's OK
because I know what I'm doing." If you knew what you were doing, you wouldn't be doing that.
This is a common bug. You see this in most of the C-like languages. We've got the first
statement, which appears to be doing what that one does, but actually does what that
one does. When you're reading this program you have to stop and ask "OK, which did he
mean? Did he accidentally get it right, or not?" All we know for sure is that the programmer's
incompetent. Beyond that we really don't know what he's thinking. My advice is don't write
the top one, write one of these two. Figure out which one of these you mean and write
that. Make it clear.
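Sketched in code (the variable names are invented for illustration):

```javascript
var a = 0;
var b = 1;

// Appears to compare, but assigns b to a and tests the result:
if (a = b) {
    // reached whenever b is truthy; a has been silently overwritten
}

// Write whichever of these you actually meant:
if (a === b) { /* compare, change nothing */ }
a = b;          // assign, deliberately
```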
Block scope is the idea that you've got some block, and anything that's declared, any variables declared inside
of it, are not visible outside of it. That turns out to be a really good thing. It came
with ALGOL 60, it's one of the great ideas in programming language history. Most languages
have something that's called block scope where any block defines the scope, so any variable
you declare inside of that block is not visible outside of the block. Nice.
JavaScript instead has function scope: a variable declared anywhere within a function is visible
everywhere within the function. That turns out to be OK. You can write good programs just having
function scope. The problem is that it's confusing for people who learned programming in another
language like Java or C or C++; they get confused about what the scopes of variables are, and
confusion leads to errors, so that's a bad thing.
In a block scope language, the advice is to declare the variable as close to the first
use as possible. That's good advice in those languages. In a function scoped language,
that is really bad advice. You should declare the variables at the top of the function,
because that's what's actually happening anyway.
Let me show you what's actually happening with a variable statement, with a var statement.
A var statement gets split into two pieces. The var part gets hoisted to the top of the
function and initialized to undefined, and then the initialization part stays where it
is and later gets executed. If you declare a variable inside of a block it doesn't get
declared inside of a block; the declaration part gets hoisted up to the top of the function
so it actually gets declared at the top, so it's visible everywhere within the function.
Again, not a bad thing, except the syntax suggests that's not what's happening.
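The hoisting described above is easy to observe (a minimal sketch):

```javascript
function hoistDemo() {
    // The declaration of x is hoisted to here and initialized to undefined,
    // so this reads undefined rather than throwing a ReferenceError:
    var before = x;
    if (true) {
        var x = 1;   // looks block-local, but the declaration already moved up
    }
    return { before: before, after: x };   // x is visible here too
}
```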
Because of that confusion, I recommend declaring all of your variables at the top of the function.
The language has these hoisting tricks which allow you to not do those things, but confusion happens
because most people don't understand how that hoisting happens. Even if you understand it
the people who are reading your program may not, and you want to write the programs so
that they can make sense of your work so that they can transform your program into something
with greater value. That's hard if the conventions you were using were causing them to make bugs.
So make your programs look like what they do.
Now the thing that seems to be hardest for people to accept is the for var statement,
where you're declaring the induction variable in the initialization part of the for statement,
because that's how you do it in Java. I mean, it makes so much sense there. But in JavaScript
the induction variable is not scoped to the loop, and if you do anything that depends on the scoping of that variable inside of
the loop, your program will fail. It works differently than what you're thinking, and
even if you think you understand it, the person who's reading your program may not.
I've seen really good programmers get hung up on this. You've just got to tell them you're
not working in Java. This is not Java, this is a different language. Write in the language
you're writing in. This is a different language; it has different rules, different conventions,
different good parts, and you need to respect that.
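The classic symptom, sketched as callbacks created in a loop:

```javascript
// Every function closes over the same function-scoped i, not a fresh
// per-iteration binding, so all of them see its final value:
var callbacks = [];
var i;
for (i = 0; i < 3; i += 1) {
    callbacks.push(function () {
        return i;
    });
}
// callbacks[0]() returns 3, not 0
```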
Maybe we'll fix it in the next edition. Dave Herman will be here tomorrow morning to tell us about
some of the stuff we're going to do here. But for now we're stuck with global variables,
and that's a problem because global variables are evil. They cause all sorts of reliability
problems, security problems, really bad things, and the language is dependent on them. My
advice is to avoid global variables as much as possible. You should try to reduce your
use of global variables in a program or a subsystem down to one, or maybe zero.
In the few cases where you have to use global variables, I recommend making your global
variable names all uppercase. I want my global variables to be as rare as hen's teeth, and
I want them to stick out like a sore thumb. All uppercase seems to be the most obnoxious
way you can write a variable name, so that's the way I want to write it so it's really
clear that this is stupid.
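In practice (MYAPP here is an invented name; use whatever single name your project claims):

```javascript
// One deliberately loud global carries everything the subsystem needs:
var MYAPP = {};

MYAPP.counter = 0;
MYAPP.increment = function () {
    MYAPP.counter += 1;
};
```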
The biggest resistance I see people having to that is that they come from other languages
like C where all uppercase means something completely different. C has a different class
of problems. C has a preprocessor, a macro processor on the frontend, and in the language you
can create things that look like variables but are not variables; they're macros, and that is
a source of confusion.
People at Bell Labs really early on discovered that confusion was intolerable, so they established
a convention: let's make all the macros all uppercase, and that way we can tell which
is which. That convention of having constants be in uppercase traveled over to Java for
no good reason, because Java had finals, so there was no reason to do that. But they did it
anyway. It means even less here, because we don't even have final. It's nothing. My advice is let's use that convention for
something that has value in this language, never mind what it might mean in other languages.
I don't use new anymore. I use Object.create now exclusively. But some people still use
new, so that has to be respected. But there's this big problem in the language that if you
write a constructor function and someone forgets to put the new prefix in front of it, instead
of creating and initializing the new object it clobbers global variables, which is horrible.
It's the worst -- maybe second worst -- thing it could do. Maybe it could kill somebody;
that might be worse.
So that's a really bad thing. Fortunately that gets fixed in ES5 strict. Unfortunately
ES5 strict is not in IE6. It's not in IE7. It's not in IE8. And it's not in IE9. But
someday we'll get some value out of ES5 strict, and that'll be good.
In the meantime we have the InitialCaps convention. If a function starts with an InitialCap, we
know it has to be used with new, and that should be the only use ever of the InitialCaps.
Having that convention, it becomes possible to read a program and figure out if it's using
new correctly, if the constructors are being used correctly. Without that it's hopeless;
there's no way you can read a program and know if it's right. So this is an important
convention. In some languages they don't have this convention. Microsoft still hasn't figured
this out. But it's important. This should be respected.
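A sketch of the hazard the convention guards against (Widget is an invented name; the global-clobbering behavior applies to non-strict code):

```javascript
function Widget(name) {
    // Without `new`, in non-strict code, `this` is the global object,
    // so this line would clobber a global variable named `name`:
    this.name = name;
}

// var broken = Widget("gear");   // forgot new: returns undefined, pollutes globals
var working = new Widget("gear"); // correct: creates and initializes a fresh object
```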
Here's another fun thing you can do with var statements. The top statement appears to mean
this, but it actually means this. Again, you have to look at it and ask what the intention
was here, and again, all you really know for sure is that the programmer was incompetent.
Again, figure out which of these you meant and write one of those. Don't ever write the
one on top. Write in a way that clearly communicates your intent; that's what we should be doing.
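The trap, sketched (the function names are invented):

```javascript
function confusing() {
    var a = b = 0;   // reads as "declare a and b"; actually means
                     //   b = 0;      <-- global leak in non-strict code
                     //   var a = b;
    return a;
}

function clear() {
    var a = 0;
    var b = 0;       // both genuinely local
    return a + b;
}
```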
Eventually we'll get the let statement: it respects block scope, so there's no more hoisting, and that'll be great. When
that happens, my advice will change from declare all your vars at the top to saying never use
the var statement ever again, we're using let from now on. But we're not there yet,
and we're not going to be there until we get rid of IE6, and IE7, and IE8, and IE9.
++. This came out of C. It was intended for doing pointer arithmetic, so you could increment a
pointer and get to the next thing that was being pointed at. We've since determined that
pointer arithmetic is a really bad idea. The last popular language with pointer arithmetic
was C++, a language so bad it was named after this operator. But we still have it, so it's
in all of our languages now even though we don't have pointers anymore. It adds one to
something. It is generally used to create side effects, and side effects are a source
of confusion anyway.
But I've found that when I'm writing stuff with ++, I cannot stop myself from trying
to pack as much stuff into a statement as I can. I'm not the only one. A generation
of programmers has this affliction. Most of the buffer overruns of the eighties and
nineties, where it was so easy to take over operating systems, were because of this operator;
it just is so easy for the enemy to get something into memory and get it executed because this
encourages you to write stuff which is just way too damn clever, way too difficult to
understand. It's bad.
I've found in my own practice, I do not trust myself to use this operator ever, because
if I use it anywhere, even in a for statement, I start getting the twitch and I'm trying
to push too much stuff in, I'm using commas, I'm doing all sorts of stupid stuff trying
to get it all on one line. I'm trying to optimize a thing which has absolutely no value, has
negative value. Getting stuff all onto one line saves us nothing today, and it
presents a tremendous cost in terms of reliability, readability, all of the good abilities. It's
a bad thing. In my own practice, I say no. I do not trust myself to use this ever. I
use += 1 instead, everywhere, because I'm no good with the other one.
I get people complaining "wait a minute, I should be able to write x++ because it's one
character shorter, and it means the same thing." Saving one character here has no value at
all. It is not a good tradeoff. In fact, it doesn't mean the same thing. ++x means the
same thing. This actually means something slightly different, and something slightly
different means you've got a subtle off by one error that is really hard to detect because
it's only off by one for a moment, so debugging that is really hard.
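The slight difference, made concrete:

```javascript
var x = 5;
var post = x++;   // post is 5: the old value, then x becomes 6
x = 5;
var pre = ++x;    // x becomes 6 first, so pre is 6

// x += 1 has no before/after ambiguity to get backwards:
x = 5;
x += 1;           // x is 6, and that's the whole story
```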
When I see someone using that interchangeably with that, I have to look at every ++ in his
stupid program and ask OK, did he get it backwards or not? Does he understand the difference
between pre-increment and post-increment? Because clearly he doesn't. I don't have time
for that, it's just way too much work. I'm just trying to make the program work, I don't
want to have to figure out what he's doing.
Recently I was reviewing some code and I saw this: ++x; ++x. Now had he originally written
x += 1, he could have easily changed that to x += 2 and it would have been great, but
now it's too much work so he doubled the line. Or maybe it was a copy and paste error? I
don't know. I don't know. It's stupid. I would call a bug on that. I don't know that it actually
is the wrong thing, but it sure looks stupid. It would cause me to wonder if there are other
really bad things in this program.
So for no cost, by adopting a more rigorous style, many classes of errors can be automatically
avoided. That is a good tradeoff. No cost, big benefit.
Here's another one. The biggest design error that Thompson made when he designed B -- and
Ritchie copied the error in C, and everybody's copied it in everything since then except
for maybe Python -- was that he made the curly braces optional on these structured statements.
He should have made the parentheses optional, but he decided to make the curly braces optional
instead, and that was a huge mistake. As a result of that you have statements like this,
where it looks like we're going to conditionally call B and C, but that's not what happens.
It looks like it wants to do this, but it actually does that. This is another one of
these confusion things.
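Sketched (b and c are invented stand-ins for the statements on the slide):

```javascript
var log = [];
function b() { log.push("b"); }
function c() { log.push("c"); }
var a = false;

// Indentation suggests both calls are conditional; only the first one is:
if (a)
    b();
    c();          // always runs, whatever the indentation implies

// With braces, the structure is exactly what it looks like:
if (a) {
    b();
    c();
}
```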
I talk to people all the time saying my advice is put the braces in, put them in everywhere.
It's only two characters and it makes your programs so much more resilient and it's so
much more likely that someone who's going to be maintaining your program doesn't accidentally
break it. It makes your program stronger, less error prone, and that's a good thing.
"But it's two characters! So hard!" It's not that hard. Again, they're thinking we spend
all of our time typing. If we can somehow save those two keystrokes, we're going to
be so much more productive. And you're not. You're going to be so much less productive
because you're going to be chasing down problems due to false structures in your code.
As our processes become more agile, our coding must be more resilient. We need to anticipate
that this code is going to be changed many, many times, and will not break as a result
of that change. We can't do anything that's too tricky or too intricate, because it will
Programming is the most complicated thing that humans do. Computer programs must be
perfect and humans are not perfect, so that is the struggle; that is the core problem
in our craft. That is what we do.
Designing a programming style demands discipline. We're not selecting features because they
are light, or pretty, or familiar. We're selecting features because we want to improve our error
rate. We spend way too much time staring into the abyss; if you want to be
more productive, spend less time staring into the abyss. That's what I'm talking about.
The JSLint style was driven by the need to automatically detect errors. I decided that
forms that can hide defects are themselves considered to be defects. I had no idea I
was going to reach that conclusion when I started working on JSLint.
I don't claim to have come into this with perfect knowledge, and I'm not trying to impose
my personal standard on anybody. This all came about empirically, as a result of trying
to detect errors automatically. If there's a feature I can live without, I will happily live without it. It's been said only a madman would use
all of C++. You could also say only a madman would use C++, but I'm not going to argue
that tonight. But this approach to subsetting works in every language. It's really easy
to add features to a language. Some designers will think of a clever feature and put it
in, and sometimes it works and sometimes it doesn't. If it doesn't, they cannot take it
out. If a language is popular, if there is significant use, they cannot remove a bad
part without breaking things. Even if that part is dangerous and causes people to write
bad programs, it's stuck there.
You have a power that language designers and standards bodies do not have -- you can take
features out of any language at any time by simply refusing to use those features. My
advice is not that you be ignorant about the features that you don't need -- I think it's
important that you know the whole language -- but that you know enough to know to avoid
the features which are working against you and not for you.
There will be bugs. I'm not promising that by using JSLint or using these techniques
that you're going to be bug free. What we're trying to do is to move the odds slightly
in your favor. Any bugs we can help you avoid will make you more productive and happier.
My conclusion: good style is good for your gut, even if your gut doesn't think so. Thank
you, and good night.
I think we have time for some questions. Do you want to run the mic, or should I repeat?
>> FELIPE GASPER: Has anyone written a JS tidy tool? For example, if I write a var statement
in the middle of my function, I would love some tool that strips out that var and sticks
one at the beginning of the function for me so I don't have to worry about adding two
lines just because I added a new variable to the function.
>> DOUGLAS CROCKFORD: Has anyone written a JS tidy tool?
>> FELIPE GASPER: Like, Perl has a beautiful one.
>> DOUGLAS CROCKFORD: Someone may have, I don't know. For a long time I've been thinking
about writing something called JSMax, which would be the reverse of JSMin. It would take
something and pump it all back up, and it would do things like that. But I haven't.
Maybe somebody else will.
>> AUDIENCE MEMBER: Do you have any comments on CoffeeScript?
>> DOUGLAS CROCKFORD: Do I have any comments on CoffeeScript? I really like CoffeeScript.
CoffeeScript takes the good parts and puts a minimal syntax on top of it, which is lovely.
I like CoffeeScript a lot.
I don't recommend using it in production because it comes with its own world of problems, but
I would like to see the language move in the direction of CoffeeScript in the long run. There are a lot of pressures on the language, and I'm
not sure it's ever going to get there, but that would be my preference if we could.
>> AUDIENCE MEMBER: What are your thoughts on Google Dart?
>> DOUGLAS CROCKFORD: I'm sorry?
>> AUDIENCE MEMBER: Google Dart.
>> DOUGLAS CROCKFORD: Google what? Oh, Dart? I've thought for a long time that if I could take
the language and remove the bad parts but retain all of the goodness in it -- because there is
a lot of goodness in it -- I would not have come up with anything like Dart. That's not to say
there aren't some good ideas in it, but mostly it seems to go back and be more Java-like. And
its syntax stinks. I don't get it. There's something
wrong over at Google, I don't know what it is. Apparently there are a whole lot of guys
over there who think it's a good idea; I don't know what's going on.
>> AUDIENCE MEMBER: Are there any other languages you would borrow any other goodness from to
add to JavaScript?
>> DOUGLAS CROCKFORD: Rather than going the other way, I'd like to make it smaller. My goal for the next edition of the standard,
which will not be met, but my personal goal is to make JSLint unnecessary. We're not going
to accomplish that. Instead we're going to add a lot more stuff which will probably introduce
more bad parts unintentionally; that's just how this stuff happens if you try to add too
much stuff too quickly.
The brilliance of JavaScript is that it took lambdas, and dynamic objects with prototypal inheritance, two things that were never put together, two
experimental ideas, and put it into the mainstream wrapped in a familiar syntax that worked.
Anything else it needs can be added as libraries.
>> FELIPE GASPER: You did mention it tonight, but JSLint also has a thing against -- I forgot
how you put it -- if you use a for in but you don't have a hasOwnProperty check in it.
>> DOUGLAS CROCKFORD: Yeah, JSLint complains about unfiltered for in statements.
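The filtered form looks like this (a minimal sketch):

```javascript
var parent = { inherited: "from the prototype" };
var child = Object.create(parent);
child.own = "on the object itself";

var keys = [];
var key;
for (key in child) {
    if (Object.prototype.hasOwnProperty.call(child, key)) {
        keys.push(key);   // only own, enumerable properties get through
    }
}
// keys is ["own"]; "inherited" was screened out
```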
>> FELIPE GASPER: Doesn't that filtering defeat a lot of the benefits of prototypal inheritance
though? Like, one of my beefs with YUI is that they do this. If you use prototypal
inheritance to define your configuration object, YUI will reject anything that that configuration
object has inherited from the parent object, which was really frustrating until I realized
that's what was going on, because it seems so counter-intuitive. It's this great thing
in the language but I can't use it. Even Object.keys has this, what seems to me, a flaw.
>> DOUGLAS CROCKFORD: Well, no, it's not a flaw, because unfiltered enumeration is so hazardous. So again,
the argument is not that it isn't useful, it's that it's dangerous and error prone.
The reason that I'm recommending you not use it isn't because you can't figure out a clever
use for it -- I admire you for having found a clever use -- I still recommend you not use it.
>> AUDIENCE MEMBER: Why in the catch statement do you have to use a different variable name?
>> DOUGLAS CROCKFORD: Why in the catch statement do you have to use a different name in every
catch? It's because in some browsers -- and I don't need to iterate through that list
again -- all of those become the same variable. The correct way would be to make every
catch clause, every catch block, be a separate scope. But in some browsers it isn't, so
you might think that each one of these is a unique variable, and they should be, but
they're not. If they all have the same name they are all the same variable, and that is
a confusion, and we don't want to be confused.
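So the advice reduces to giving each catch parameter its own name (a sketch; errorA and errorB are invented names):

```javascript
var caught = [];
try {
    throw new Error("first");
} catch (errorA) {            // a distinct name for this catch
    caught.push(errorA.message);
}
try {
    throw new Error("second");
} catch (errorB) {            // and a different one here
    caught.push(errorB.message);
}
```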
OK, so that's that. Thank you very much everybody. Be careful going home. Good night.