How to Write Clean, Testable Code

Uploaded by GoogleTechTalks on 26.01.2011

Open source projects. Recently, his interest in test-driven development turned into Testability
Explorer and JsTestDriver, with which he hopes to change the testing culture of the open
source community. If everyone could put your hands together and welcome Misko Hevery.
>> HEVERY: Howard, you are too kind with words here. So thank you all for coming. We're going
to talk about something different today. I want to talk about the psychology of testing and,
rather than talking about how to do the testing, I'm more interested in what is actually going
through our heads when we are confronted with writing a test and how we look at this particular
thing. So hopefully it's going to be a little bit different. I love questions, so please
ask as many as you want and to encourage questions, we have a little bit of schwag that Igor over
here will help to hand out. So for good questions, we'll give schwag. So do ask. Don't wait for
the Q&A session at the end. Okay, let's get started. So I'd like to--or rather, my boss
always likes to say "Testing is not like frosting," meaning that you can't bake a cake without
sugar and then decide that you're just going to frost the cake on the--after the fact.
Like, it just doesn't taste right, right? You got to put the sugar in the process of
baking. And so it works the same way for testing as well. You can't just build your project
and then assume that after the fact you're going to, like, sprinkle on some test and
then everything is going to be nice and fine because it doesn't work. It just doesn't work
after the fact. So, let's look at how we actually build a code. And I think whether you do testing
or not this is basically how everybody does it, right? The red area is where you put the
bugs in and then in the green area is where you take the bugs out, right? I mean, that's
kind of how it works. And it's an endless cycle. Like, whether you do test driven or
whether you do manual, you code for a bit and then you try it. Like, you can maybe code
for half an hour max, but after that, like, you got to try whatever you coded up because
sooner or later, like, you're just going to get yourself off track. Like, it's an inevitable
process whether you like it or not. And at some point, you decide, "Yeah, that's a good
thing. I'm going to check it in," right? But the thing is, in the green box, that
we do it manually, or a lot of us do it manually, and we'd like to transition to
doing it automatically. And so the difference is writing your test or rather deciding what
you're about to implement and then writing a little scenario, a little story about it
and then implementing it and we'll talk about that later. So the whole trick is how do we
transition from us doing the test to a machine doing the test for us? And so the way most
people kind of look at it is they kind of do this. So they say, "Well, we'll put the
bugs in over here and then we expect that we're going to have some kind of QA folks
that--it's not our problem, right? It's like software engineers. We write the code and
then somebody else's problem is to kind of find the bugs after us." And then at some
point people will say, "Well, the whole manual QA doesn't really work out so we got
to have some kind of automated test." And so at this level, people are always looking for
some kind of magic, like "If only we bought some software X, then all of our testing problems
will go away." And there's lots and lots of companies that do this and it never works.
And the reason is it's too late at that particular point in time. There's another problem with
this picture and that is that the people who put the bugs in are not the people who feel
the pain. And unless you have the closed loop, unless you feel your own misery, you're never
going to learn and get better at this particular thing. So the magic to testing is not here.
It's up here. It has to happen up there. If you do this right, everything else will just
fall automatically in place. And so we'll talk about this a little more. So, the funny
thing is you come to an engineer and you say, "Why don't you write a test?" And they always
have some kind of a beautiful excuse for you. And usually it goes, you know, a long list.
I'm not going to go into individual details as to what all these excuses are, but one
thing that nobody ever says is "I don't know how." That's a valid excuse. You know, just
say "I don't know how." Like it's perfectly valid, right? I mean, if I come to you and say,
"Do you know JavaScript?" You can say no and, you know, it's not shameful to say that I
don't know a particular language. But somehow it is shameful to say, "I don't know how to
write a test." Now, why is that? It's a skill like any other thing, right? You could say,
"Do you know how to ski?" And it's not shameful to say, "No, I don't ski." But somehow as
engineers it's kind of programmed into our head, "Of course, I know how to write a test."
Nobody ever says they don't know how. And, really, this is the reason why most tests
don't get written is that writing tests is a skill like any other skill and you have
to learn it like you learn all other skills. It's not innate to the whole process. So,
let me demonstrate this thing. This is my favorite interview question when I do interviews
at Google. I say, "Suppose you're an evil engineer and you want to make hard to write
code--I'm sorry, hard to test code. What do you do?" It's an open question to all of you
guys. What do you do? Yes. >> Put singletons on the code.
>> HEVERY: Okay, so you've been reading my blog, good. Yes.
>> Yes. I'm sorry, but he... >> HEVERY: He said put singletons everywhere,
global states in other words. What about--so I'll give the gentleman some schwag to motivate.
Yes? >> I think you could, you know, put new operators.
>> HEVERY: Yes. Put new operators everywhere. Okay.
>> Do the hard code dependency. >> HEVERY: Hard code dependency, it's a different
way of saying put news everywhere, yes. >> Lots of static method.
>> HEVERY: Lots of static methods. Those cannot be overridden.
>> Overload a lot of things in hard ways. >> HEVERY: Yes, that's general just complexity,
right? >> Yes. You increase complexity and you'll
blow the [INDISTINCT]. >> HEVERY: Right. So that's an interesting
statement. So a lot of people say that on an interview that, "Oh, I'll just make it
really complicated." And while it is true that that makes it hard to test, it doesn't--that
in itself doesn't fundamentally prevent testing. It just makes it miserably difficult, right?
Whereas... >> [INDISTINCT]
>> HEVERY: It makes it miserably difficult. >> And it becomes very difficult to understand.
>> HEVERY: Yes. Yes. Whereas, new operators fundamentally make it impossible, right, that's
the kind of the difference that I'm going for here. But excellent. I mean, you guys made
a lot of good points. Usually I get a lot of blank stares. People are not quite sure what they're--what
do I mean by this particular question. But I figured if you know how to write hard to
test code then you probably know how to avoid it, right? So let's talk about less theory
and a little more hands on. So I'm going to have some examples to go through. And, yes,
this is kind of hard to see, but I was kind of hoping that some of you would have computers
and you can just go this URL and actually see the code. But it doesn't matter. We'll
go through them in more in-depth in particular thing. So, usually in an interview, I present
the candidate with a piece of code something similar to this and I say, "Pretend you have
to write a test. And specifically, I'm interested in what would you change about this code in
order to make this testable?" And, I mean, you're squinting. There are going to be slides
that are zoomed in a little more, so don't worry about it just yet. The interesting thing about
this question is that everybody immediately takes a piece of paper and, you know, sticks
their nose into it and tries to understand what the code does. And this is interesting
because why would you think that knowing how to write a test is related to knowing what
the code do--does? In other words, whether or not a piece of code is easy to test is
a function of the structure of that code; it's not a function of what the code does. And
this is something that people don't seem to understand. You were going to say something?
>> Yes. The problem is a lot of code bases don't have mission statements or specs that
are simple to understand and it tends to make the code [INDISTINCT] spec.
>> HEVERY: Yes. >> And that's one of the reasons why you get
that let-me-see code... >> HEVERY: Right.
>> ...because the spec are inherently non-existent. >> HEVERY: Right. So the message I'm trying
to say here is that when you're presented with a piece of code and you're trying to
say, "How can I test this?" don't try to understand what the code does. That's really a secondary
problem. Rather look for how the code is structured. And it's something that you can train your
eye to do over time and I'm going to show you what I look for when I see a piece of
code and how I really don't think that what the code actually does is relevant to this
question. So this particular code is designed to slap you around. It's the most evil piece
of code I could think of and from--I was really inspired over the years of looking at other
people's code and also mine, especially if I go back in history. So there are a couple of red flags
that I look for and I think, you know, we should learn to recognize as a software engineering
industry is global state and singletons, somebody already pointed that out; Law of Demeter Violation
and I'll have examples of why that is a problem; global references to time because, really,
time is a global function that changes underneath you all the time; hard-coded new operators
and just the general lack of dependency injection or really dependency management. So let's
do an exercise. Let us assume the code is fixed, right? This is where most people, when
they first come to testing, they have this assumption, "Oh the code is already written.
I've done all my work and now I'm going to go on and try to write a test," right? And
so, there's this attitude that the code is perfect and I'm just trying to write some tests
on top of it, all right? And what I'm trying to say is that that really is just
the wrong attitude when it comes to writing code. So--but for this exercise,
let's just assume for a second that the code is fixed and we're going to try to do our
best effort at writing a test for it and in the process we're going to identify problems
with it and then maybe hopefully in the end we'll suggest some refactoring. So, hopefully,
this is a little more zoomed in. The part on the left is the code that I'm looking at
and then that part on the right is the test case. I'm going to try to write a test. Keep
in mind that this is what would the world look like if we assumed that the code is fixed
and we're just trying to write a test and this code is specifically badly written to
make testing really, really difficult. So the first thing I look for is the static keyword
because that tells me that there's global state. Now what does it mean? What it means
in a test is if there is a static keyword here, it means I only get a single chance of instantiating
this thing in my test system, right, because it's a--once the inbox synchronizer is instantiated,
I can't make a new one because it's final, right? So, good old singleton, which means
my test set up will have to have some kind of like--if you're not initialized, initialize
it. And the trouble with that is you really don't know when the class gets initialized.
You're hoping that you are the first class to reference this thing, but you have no idea.
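The lazily initialized singleton being described can be sketched like this. All class and method names here are invented for illustration, not taken from the slides; the point is only that a static instance gives a test exactly one shot at configuration, and state then leaks between tests.

```java
// A sketch of the static/global-state red flag: the holder hands out one
// shared instance for the whole JVM, so tests cannot get a fresh one and
// whichever test touches the class first "wins" the initialization.
class InboxSyncerHolder {
    // Global state: every test in the process shares this one instance.
    private static InboxSyncerHolder instance;

    private int syncedCount = 0;

    private InboxSyncerHolder() {}

    public static InboxSyncerHolder getInstance() {
        // "If you're not initialized, initialize it" -- the test setUp has
        // to hope it runs before anyone else references this class.
        if (instance == null) {
            instance = new InboxSyncerHolder();
        }
        return instance;
    }

    public void recordSync() { syncedCount++; }
    public int getSyncedCount() { return syncedCount; }
}
```

A second "test" in the same process gets the same object back, complete with whatever the first test did to it.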
So, this is a big assumption whether we can even do this in our test. But nevertheless,
we'll kind of break down this thing. Let's see what else--the piece over there. Referencing
more global state over here. The next thing I look at is the constructor and I say, "Well,
it looks like I need a certificate, but I need to read it from what looks like some config
object, from which I have to indirectly look up some path property and then
read the certificate from the file system." So, the equivalent on the other side from
the test point of view is, well, you got to generate a certificate, you got to
attempt to create a temporary file, you got to write the file somewhere, close the output
stream and then you're hoping that the configuration object is writable. Now, given that whoever
wrote this piece of code was paranoid, chances are they were also paranoid here, which means
this assumption I'm making that I can actually modify the configuration object might not
actually hold in that code base. So that might be troublesome as
well. So, here the next piece is, you know, I'm getting a user, username and password.
And in order to do this, I'm going to have to, let's see, create a user object. And here's
another big question mark is can you actually instantiate a user without any arguments?
Like it could be that the user requires a whole bunch of other things, for example,
the person could have decided the user's constructor takes a servlet in order to get a hold of
the cookie, in order to initialize itself in the cookie, right? If that was the case,
then instantiating the user would get really, really difficult. So here I'm making a big
assumption that the users are going to be actually easy to instantiate. I'm going to
put the username and the password in there and I'm going to be able to set it on a config.
Again, these are all assumptions that one would have to make in order to write this
test. And these are really tough assumptions to swallow. What else is in there? So then
when you do all this, the next piece of code I look for is these are your true dependencies
almost--usually. Not always, but usually the true dependencies are the fields that you
have stored in your object and therefore, those are the things that you should--that
you would want to mock and inject, but that's not actually what you're asking for inside
of the constructor. In other words, you're pretending that what you need in order to
get your work done is the configuration object. But the reality is you need all these other
stuff and you are actually going to go look for stuff, right? So, I always like to say
"Ask for what you need. Don't look for things." This is what I mean by that. Like, you should
be really asking for the certificate username and password directly rather than looking
for all these things. So, look at all the code that we have written over here. That's
actually quite a lot of code to just get through the constructor and the constructor is something
that we need to do all the time when we're testing. So, let's go further. So now here's
the sync method we want to test. And, again, there's new operators here but this is a tough
one. It's going to say "Make something called pop connector," doesn't matter what it is
and it's just going to connect to and disconnect, which means you'll probably have some socket
communication going on. So in our set up method, we're probably going to have to create a server
because there is no way to give you a fake connector over here because there's a new
operator here, right? So, I can't instantiate something else for you over there. So in order
to test this, I actually have to have kind of a fake server, which is going to have real,
you know, TCP/IP stack to get it up and running. And then, hopefully, [INDISTINCT] I will be
able to shut it down, which is also usually a big question with servers whether you can
actually shut them down. So, they--these are just assumption after assumption after assumption
I'm making that if everything else is written in a correct way, then maybe I can test this
untestable piece of code if I just jump through all these hoops, right? But the reality is
if this piece of code is not written well, chances are the rest of the system isn't written
well either which means many of these assumptions I have made over here are probably false and
will probably require a lot more code to get this thing tested. Right. So, paying attention?
Did we forget anything? No. Well, we haven't written the test yet. This is just the set
up, right? We're just trying to get through the new operators so that we actually get
an instance of this thing. So when you survive that crazy set up we just went through, then
maybe you can write a test that actually does something. And what's interesting when people
write tests after the fact, they'll usually say, "testSync." It's usually a dead giveaway.
What exactly does that mean? Does this tell you a story of what you're trying to do? It
doesn't really tell a story here, right? You're just--you were told by somebody "Go test
that class." You saw that that class had a single method called sync so you made a
test called testSync. And then, you hack that in until you were able to get something to
execute and then you said, "Well, I'm executing stuff and it looks like I've got some coverage,
so ship it." So, here's another mental exercise. Now that you've written this test, which is
a little scary, could you reconstruct your project if I deleted your source code? That
is, I--you get to keep your tests, but the production code is gone. Would you be able
to reconstruct your code base? I'm going to say you're going to
have a really, really hard time. And the reason for that is tests like these, they don't tell
a story. They say if you do all this set of crazy things, then the following set of crazy
things should be true but, like, they don't seem to be related. Why is it that I have
to be writing certificates to this special file and setting these global variables? And,
like, it's not--there's no rhyme or reason to this thing. So, let's see if we can flip
it around. So before we can flip it around, I'm going to say tests are so yesterday. It's
all about the executable specs and executable specs are just tests with syntactic sugar
and sugar is always good and makes things taste better. So how many of you guys have
heard about specs? All right, good. Do you like them?
>> [INDISTINCT] >> HEVERY: Really? What language are you familiar
with the specs in? >> [INDISTINCT]
>> HEVERY: I don't mean that kind of specs. I mean like specs as an executable test as in
RSpec. Well, I'll show you an example in a second. Who else raised their hands?
And do you like specs over tests? >> [INDISTINCT]
>> HEVERY: Yes. >> Yes. I use [INDISTINCT].
>> HEVERY: You are? Okay. So we're talking the same language.
>> Yes. I use BDD, but it's also [INDISTINCT]. >> HEVERY: Yes, BDD is another fancy word
for this stuff. It's just that--I mean, people laugh. BDD is really just testing done
right. So, let's change our assumptions here and that is assume the code we were about
to test is yet to be written, right? And the other thing--assumption you want to do is
that you want to demonstrate a single behavior per spec or rather, to put it differently,
we want each test to tell a story, right? Imagine I tried to explain to you what a particular
product does. What I'm going to do is I'm going to tell you a lot of, you know, a lot
of examples, like, I would say "Oh, if you do this, this and that, then this is going
to happen. And then if you do this other set of things, then this other thing is going
to happen." And then once you--what humans are really good at is that if I give you
a series of examples of how something works, humans are really good at generalizing.
So if I say, you know, it does the following set of things and the following set of things
and the following and then you say, "It's a car," right? But the other way around, it
doesn't work so well. If I come to you and I say, you know, "It's a syncer for email"
and you're like "It's too general of an abstract thing. I don't really know what it means concretely."
So what it means to tell a story is that I give you a whole bunch of examples that demonstrate
how that piece of code works. Let me show you that. So, the biggest difference--and this
is Java, and Java's BDD is a little--not as good as, for example, Ruby's BDD, or in
JavaScript there is Jasmine, which has pretty good BDD syntax. But the basic idea is that
you want to tell a story. And so, you're saying itShouldSyncMessagesFromPopSinceLastSync.
"Oh, that's a story. Like, I understand what it means. It's much better than saying testSync."
It should close the connection even when an exception happens. I understand what that
means and I understand how to code that, right. itShouldSyncMessageOnlyWhenNotAlreadyInTheInbox,
right? So, these are the specifications of what that particular code should do and then
we fill in those specifications with little stories to demonstrate this particular scenario.
So, let me give you an example. So, it should sync messages from pop is really doing, you
know, something like that. I'm going to create a pop class, I'm going to add two messages
to it, then I'm going to construct my syncer, whatever that means. I'm going to pass in
all the dependencies like pop, inbox, filter, when was the last time I synced, and then
I'm going to say, "Sync now". And then I'm going to assert that this particular message
actually got copied--there are two messages I put in but only one got copied because one
was already synced because it was after a particular date, right? The test like this
tells a story. And the other thing that test does is notice that it's much easier to follow.
There isn't all this distraction about creating certificates and writing them to
temporary locations and then cleaning them up and starting servers and, you know, all--all
just distractions that aren't telling a story. So, you want to make sure that these are short,
you know, they fit on the page and they're easy to read. And to me, that's
really what specs are, little stories like that. So, maybe another story would be, you
know, it should close a connection even when an exception happens. So here again
I can create a filter that throws an exception and I can simulate an exception throwing--because
when I try to sync, I expect to catch it. Again, I'm telling a story. And something
like this wouldn't be possible if we already have written our implementation and had new
operators everywhere, which most people have. So notice another thing: if you do
this right, there usually isn't a set up method, right? The set up method
is only needed when you need to execute some complicated piece of code, right, which maybe
has some if statements, loops or something like that. All we're doing here is we're instantiating
an object and wiring them together and as a result, your set up is essentially just
declaring your fields. And these fields actually become part of your language. They become
kind of like a DSL, right? So, now, your test reads nice because you can say, let's see,
right--you have these things like long ago, past, now, future, inbox, filter, long ago
message, past message, current message. And then if you're trying to write a test, can
we go back up a second, like over here, notice that you could say add message long ago, add
message, past message. And then when I do a sync, I say sync now or last time I synced
was long ago so that it reads more like--I'm trying to--the DSL--do you know what DSL stands for?
Domain Specific Language [INDISTINCT] I'm trying to get at. So, you want to make
sure that you're--you really tell a story with these things. Now, I can do all of these
is because none of the code was actually written yet. I'm just making it up and it's easy to
make stuff up. And the other thing is if I'm making it--this stuff up, nobody, really,
in their right mind will start making crazy things up like, "Oh, let me start a server.
Let me write a bunch of things to a file system. Let me create some obscure or global variable
that I set," right? You don't think that way. You create the most simple scenario that you
can think of because creating complicated scenarios is a lot of effort, right, and nobody
really wants to do that. So, writing the test before the code is written, it really allows
you to just think about it in a really pure form and then worry about how these pieces
get together or wired up later. So, given the specs, let's write an implementation.
And so here's an example of what an implementation would look like. And the first thing, what
do you notice? What's the difference between the previous one?
>> Shorter? >> HEVERY: Maybe. It's about the same, I think.
But more importantly, with the exception of the new Date, there are really no new operators
here. It just got an exemption because it's kind of a value object. It's like [INDISTINCT]
strings, so it doesn't really count. And I'm really not creating a date from the global timer,
I'm really passing in the date because it's being passed in. So it's kind of an exemption
here. But in general, you don't have new operators. So, for example, the most important one is
that the pop connector, which was actually the thing that connects to the server, is
now passed in from the outside so that I can pass in a fake implementation if I choose
to. So, when you look at code like this, the thing that I would look for is I either
want to see a whole bunch of new operators and no if statements or loops, or a whole
bunch of if statements but no new operators. Because I either want to see code that's
a factory, in which case I'm wiring things together and I'm not doing any work or I'm
doing work in which case somebody else is responsible for the wiring process because if
I mix the two together, then I'm going to have a really hard time writing a test. The
other thing you're going to notice is that a good constructor looks like this, and that
is list your dependencies. Unfortunately, in Java, we have to do this three times. We
have to list your dependencies once here, then once over here and then once copying them
over. I think Scala has fixed this. But this is what a good constructor looks like. You
know, you just say, "I need a whole bunch of these things. It's somebody else's problem
on how you get there." So, what have we learned? We learned that the new operators are really
the thing that gets you. If you need it, then you must interact with it. So what does that
mean? I don't have a slide. The slide was too far back. But in the original code,
there was--we were asking for, for example, for config and we really weren't interacting
with the config. All we were doing is we were getting stuff out of it, right? And so, the
rule is to not be violating the Law of Demeter, or you must be sending a message for
the other object to do something rather than sending it a getter, right? That doesn't count.
So, passing in a config and then having the other object say config.getUser().getUsername()
just so I can take the username and password to the pop connector so the pop connector
can connect on my behalf is kind of very convoluted. Rather, you should just say, "Look, I just
need a pop connector and it's not really my problem how you construct one. It's somebody
else's business and that whoever constructs it should be passing in the username and password
and certificate as well." >> What about factories? Are they valid [INDISTINCT]
the user? >> HEVERY: Factories are valid in all of these.
You can be basically more lenient with them, right? Because they're...
>> Can you repeat the question please? >> HEVERY: Oh, sorry. The question was do
factories violate the Law of Demeter because you don't really interact with them. And to
some degree--I mean, you--the factories shouldn't because you're just instantiating a whole
bunch of things and passing it back in, but sometimes you really have to say, "Well, get
that object from over there and pass it over here." So, to some degree, you can be lenient
about Law of Demeter violations in factory code, but not too lenient. Even there, it
just looks kind of suspicious. There was a second question over there? No. Okay. Oh,
yes. And lastly, doing work in constructor is just a bad idea because constructor is
one method that I cannot override. It's the method that I absolutely have to call every
time I want to call--instantiate--any call--any method I'm [INDISTINCT]. And so as you put
work in the constructor, you're basically saying that every time you want to do something
with this object, you better be prepared to go through whatever that constructor wants
to do. And if that constructor wants to read certificates or wants to, you know, send e-mails,
well then you're stuck. You're sending e-mails and you're reading certificates. So--and the
last piece is that what we talked about is we talked about unit testing, which is at the
individual level all the way at the bottom, but the same rules also apply for end to end tests.
To give you an idea, a unit test is to end to end test as classes are to components.
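The component-level version of this can be sketched with invented names: the application takes its authentication component through the constructor, so an end-to-end test can swap in an always-authenticated fake and never log in.

```java
interface Authenticator {
    boolean authenticate(String user, String password);
}

class RealAuthenticator implements Authenticator {
    public boolean authenticate(String user, String password) {
        // Stand-in for a real credential check against some backend.
        return "secret".equals(password);
    }
}

class AlwaysAuthenticated implements Authenticator {
    // The fake component for end-to-end tests: logging in is never the
    // behavior under test, so it always succeeds.
    public boolean authenticate(String user, String password) { return true; }
}

class MailApp {
    private final Authenticator authenticator;
    // Component injection: which authenticator runs is decided by whoever
    // wires the application together, not by the application itself.
    MailApp(Authenticator authenticator) { this.authenticator = authenticator; }

    String open(String user, String password) {
        return authenticator.authenticate(user, password) ? "inbox" : "login-denied";
    }
}
```

The same wiring seam that made the class unit-testable is what lets the end-to-end test replace a whole component.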
So, if my class has followed proper dependency injection so that I can instantiate some individual
classes, then I should also be able to do the same thing for components. So if I have
an application--I don't know, I'm going to use Gmail as an example--but if I have an
application like Gmail, I should be able to say--in the end to end test, I am going to
remove the authentication component and then replace it with a fake always authenticated
component, so when my end to end test run, I don't have to worry about logging in. And
that's another form of dependency injection, but really just at a slightly higher level. So,
just like your individual classes need to be properly managed and you can replace them
through dependency injection, the same rule really has to apply to components. And it
turns out that if you do class injection properly then you get the component injection kind
of for free. So, if you do your unit testing properly, then your end to end test should
just kind of fall into place. So, what are components? Well, just well-defined [INDISTINCT]
like I just pointed out, like being able to replace the authenticator with something else
or maybe the persistence engine with something else or something like that. Your app should
really be global state free because global state is going to haunt you whether it's with
unit test or end to end test or at any level. You really don't want to have any. And last
piece is that when you are writing a unit test--you know, we kind of went over the idea
of creating a DSL out of your variables so that you can easily write sentences. That
is even more important with end to end test because when--in the end to end test, usually
you want to tell a very complicated story, while in the unit test, you might say something
very simple like "If I have an account and I deposit five bucks to it, then the account
should have now--you know, if the account started with $5, I deposit $5, I should
have $10 in there now." That's an easy story to tell. But if you have an end to end test,
the stories you're going to tell are much more complicated. You probably are going to
say a story along the lines of, "Oh, the user logs in as--you know, into his Gmail account
and then he checks the inbox. And then if he sees an unread message, he clicks on it,
and then when he goes back to the inbox, that message should not be showing
as unread but it should be showing as read, and then he should be able to mark it unread again."
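A story like that is what a small fluent DSL is for. A toy sketch, with every name invented, of how the sentence might become code:

```java
import java.util.HashMap;
import java.util.Map;

// A tiny end-to-end DSL: each method returns the DSL itself so the test
// chains into something close to the spoken story.
class InboxDsl {
    private final Map<String, Boolean> readFlags = new HashMap<>();
    private boolean loggedIn = false;

    InboxDsl login() { loggedIn = true; return this; }
    InboxDsl receive(String subject) { readFlags.put(subject, false); return this; }
    InboxDsl click(String subject) {
        if (!loggedIn) throw new IllegalStateException("log in first");
        readFlags.put(subject, true);
        return this;
    }
    InboxDsl markUnread(String subject) { readFlags.put(subject, false); return this; }
    boolean isUnread(String subject) { return !readFlags.get(subject); }
}
```

The test then reads: log in, receive a message, it shows as unread; click it, it shows as read; mark it unread, and it is unread again.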
It's a very complicated story, and unless you have a very good DSL, a good Domain Specific
Language of telling the stories, you're going to get lost in the process of building your
end to end test. Last thing is I oftentimes demo how I do test-driven development, and
so I usually show up at a meeting and, you know, I have Eclipse going or whatever your
favorite IDE is and I start typing, I hit save and all of a sudden my tests run. And what
that really means is you really have to spend time on making the environment easy for you
so that it doesn't fight you, right? And I oftentimes forget that I have maybe spent,
you know, four hours trying to get Eclipse to behave properly so that when I do a demo
to the students or whoever I'm giving the demo to, they kind of see, "Oh, look, if you
hit save it just magically runs," right? But that magical part is actually the hard part.
When we're talking about being able to write tests, it's not just being able to write tests,
it's also being able to set up your whole environment so that the environment doesn't fight
you every step of the way in writing those tests. And that's kind of the tricky part
to the whole business. So that when you set everything up, of course, it's easy, right?
If you come to a project where everybody does this thing and you're a new kid on the block
and they just say, "Look how easy it is to write a test." And then you say, "Well, it
takes me three lines of code to write a test, or I can start a server, which takes 30 seconds.
And then I can go and log in, which takes another 30 seconds, and then I go to inbox
and I click on the message and I can verify that, you know, the message is no longer unread
but it's read or I can just run three lines of code." Of course the person is going to
write three lines of code because it's easier, right? But getting to that point is something
that is a lot of effort and it's a lot of know-how. So a lot of times when you're talking
about doing test-driven development, it's this unspoken thing that nobody talks
about, which is: how do you get your environment set up so that these things are natural, so
that it's the easiest thing for you to do? And that's not exactly simple or easy and
sometimes finding the right tool is tricky. So, I want to talk about one last thing which
I call the Theory of Bugs. Yes. >> I have a question.
>> HEVERY: Yes. >> When I started testing as an intern at
IBM, we were testing in two different environments [INDISTINCT] which is the backend of WebSphere
>> HEVERY: Yes. >> And the problem was I was given like a
junk file that was in Python and XML, but it was really hard to test from one environment
that they'd been using into the new one. >> HEVERY: Yes.
>> So how do you do that when you're testing in multiple environments? Because that had
to do with software as well as the web where... >> HEVERY: Right.
>> need to test in different... >> HEVERY: So you're asking an open-ended
question of... >> Yes.
>> HEVERY: You know, "We were in a situation where it's hard to test. What do you do?"
Unfortunately, there isn't one magical answer I can give you. It's basically fighting the
environment trying to figure out some kind of a harness or something to make your life
easy, and there aren't really easy answers to it. There just has to be this goal and
this vision that you have to have, that is: if I somehow solve all these problems--and
for every project the set of problems is different--and I get to this happy place, then
life makes sense and it's easy. But getting there is always the tricky part and unfortunately
I don't know--I can't really help you on an individual basis. Did you--thank you.
>> You're basically circling on the same concept that all testing [INDISTINCT].
>> HEVERY: Yes. >> If you're testing hardware, even mechanical
hardware, you have to build it with testing out [INDISTINCT].
>> HEVERY: Yes. >> You have to build--if you want interchangeable
parts, take it back another 150 or 200 years. If you want interchangeable parts, you
need to change your tools.
>> You can't do it by hand. >> HEVERY: Yes.
>> Because if you do it by hand, you won't have the interchangeable parts. If you don't
have interchangeable parts, you can't attach the test [INDISTINCT].
>> HEVERY: Yes. So the thing you're bringing up is that the cost of making a mistake in
hardware world and in anything but software is just so expensive that people have no choice
but to test. Unfortunately, in the software world, the cost of making a mistake is that
feeling you get like, "Oh, if I just fix this, everything's going to work" and then you just
iterate on it for three months, right? >>
Yes. >> I know.
>> But the only problem is, as you noted, the trivial case, it's easy to do by hand.
>> HEVERY: Yes. >> The large case is difficult to do by hand.
>> HEVERY: Yes. >> However, along that path, that one meandering
path from A to B, it's not clear that there's a bright light at any point that says, "Here's
the cliff." >> HEVERY: No, there's isn't a cliff. Yes.
>> [INDISTINCT] >> HEVERY: Yes, this...
>> [INDISTINCT] no problem and all of a sudden, wham.
>> HEVERY: Yes, it's--yes, it's--there's no cliff. You're right. That's a good way of
putting it. >> And yet the cliff is clear in retrospect.
>> HEVERY: Which is why it's always so hard to sell somebody any idea of testing, because
there's this mountain in front of you and I'm saying, "Look, if you just climb that
mountain, on the other side, there's a paradise." >> Would you suggest that we go about software
testing as if it was hardware testing, as if--or designing things in a way that...
>> HEVERY: Yes. >> ...the [INDISTINCT] would be really bad
as opposed to just something? >> HEVERY: Absolutely. Testing is something
that has to be baked in. From the moment you start writing the first line of code, you have
to think about, "What am I going to do to test this thing?" And really, better yet,
you need to have the harness ready before you write the first line of code because then
you don't have to say, "Well, am I sure that this design is actually testable?" Well, you
know it because you wrote the test first and then you write an implementation.
>> So, the test and the design are interconnected? >> HEVERY: They are very much interconnected.
And it's something that people fail to acknowledge, especially when they have this attitude that
you can just sprinkle the test after the fact to the code base and you just can't do that.
>> Even the--if I may. The test--the concept between test gates on [INDISTINCT]...
>> HEVERY: Yes. >> easy to do upfront than part of your
[INDISTINCT] and it's viciously productive. >> HEVERY: Yes. So let me finish a couple
of slides and then we can throw this into an open-ended conversation. I promise I only
have about five or so more slides. So, I call this a theory of bugs, and that is, I think
there are three different kinds of bugs. So, one bug--kind of bug is what I call the scorpion,
which is you thought about the problem and you thought wrong, and so it's a thinking
mistake, right? The other kind of problem is you got the pieces right, but you miswired
it. Like, you got yourself a stereo system and you got yourself a TV but you plug the
input to the output. I wouldn't really call it a thinking mistake; you just happen
to miswire the pieces together, and that happens sometimes. It's a wiring
problem. And finally, the ladybug problem, which is you put it together and it just
looks funny. And this is your UI, right? Like you put it together it just--it doesn't quite
look right. And why do we talk about these different kinds of bugs? It's because
each one of these kinds of bugs can be caught by different kinds of tests. The thinking
problem is really your if statements and your loops, and that's what unit testing is really
wonderful at: I have a class and there's a whole bunch of complicated if statements
and loops and logic and I want to make sure that I test those particular things. And so,
unit testing over there is just what the doctor ordered. Once you know that the individual
pieces work, you could have very well miswired them, right? Unit tests
don't help with that. What you need is some kind of an end-to-end or at least medium-level
test to make sure that the pieces work together. And so, this is where end-to-end comes in.
And that basically verifies that the--at least the happy paths work because if I can get
the system up--like, I like to use the example of a car. You know, if you put the car together
and at the end of the line they put the key in and it starts and they drive out of the parking
lot, chances are they got the right parts in the right places, right? I mean, for the most
part, you know, the battery is there, the fuel's there, it's hooked up properly, the
engine's there, the brake's there. So, for the most part it looks like nothing got forgotten.
You don't know whether the car will perform at some really cold temperatures or will it
be able to perform, you know, with the ABS and all the other stuff but, hopefully, there
are other unit tests that kind of prove that yes, the brake system works when it's minus
10 degrees or something like that. And so, really, we're just interested in whether the pieces
got put together properly. And so, that's what end-to-end tests are for. They're really not
meant to make sure that every single scenario works. They are really meant to verify that the
basic flow works because that proves to us that we wired this thing together properly.
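The car analogy translates directly into a happy-path test: wire the real parts together and check only that the whole thing starts. A toy sketch, with every class invented purely for illustration:

```python
# Happy-path "end to end" sketch in the spirit of the car example:
# wire real (here: toy) parts together and check only that the whole
# thing starts -- not every cold-weather scenario.

class Battery:
    def has_charge(self):
        return True

class Engine:
    def __init__(self, battery):
        self.battery = battery

    def start(self):
        return self.battery.has_charge()

class Car:
    def __init__(self):
        # Production wiring: real parts, no test doubles.
        self.engine = Engine(Battery())

    def drive_off_the_lot(self):
        return self.engine.start()


def test_happy_path():
    # If the car drives off the lot, chances are the parts are
    # in the right places; unit tests cover the corner cases.
    assert Car().drive_off_the_lot() is True


test_happy_path()
```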
The last one, I don't have an answer for you because that's like the font color is off
or, you know, it looks weird when it's on some weird [INDISTINCT] monitor or maybe when
you translate it to a different language, it doesn't look right. And so, unfortunately,
there is no test for this. This still requires humans, so we can't really automate this thing. But
there's good news, and that is the probability of finding a bug of a particular
category and the cost of fixing that particular bug is different for each category. So, logical bugs, they're
really, really--the probability of you finding a logical bug is really high. You make lots
of them every day, you fix lots of them every day. You probably don't even think about it
as you're changing the code around. But actually finding one is hard.
Like, if something doesn't work only under some corner cases, it's really difficult
to find these things. And then when you find it and fix it, the probability that you introduced
a new bug is actually quite high. So, luckily for this set of bugs, the logical set of
bugs, you know, we have unit tests, and that's what assures us that everything is kind of
working and we're not regressing. The wiring bugs, this is the end to end test, right,
chances are, usually, they're kind of easier to find because if you miswired, usually the
whole system just crashes spectacularly, you know. Like, it's not like a logical bug where
only on leap year and at the end of the year, at 12 o'clock, our system doesn't work, right?
The wiring problem, usually if you have one, the system just doesn't come up. It throws
an exception on the main method or something crazy like that. And so, fixing those is much
easier, finding them is much easier. And when you fix it, chances that--of you introducing
a new bug is actually quite low. The rendering bugs are the easiest to kind of spot, to fix,
but there is no real way of making sure that we don't reintroduce them, right? That's kind
of the end of it. But when I explained that, people say "But my code is different. You
know, I have a super bug." That's because you mixed all the concerns, right? The reason
I could have two different kinds of tests is because my code was either responsible
for wiring or it was responsible for logic. I didn't mix them together--or I had, you know,
the rendering of the UI or whatever view templating system you have--separate. I didn't
mix the pieces together. When you mix the pieces together, all bets are off, right?
You have the super bug where it's a rendering and a wiring problem and a logical problem
all tied together. Okay. That's everything. So, I'm open to questions if you have any.
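The separation that makes the two kinds of tests possible--wiring code that contains no logic, and logic code that does no wiring--can be sketched as follows (all names hypothetical):

```python
# Separating wiring from logic so each bug category has its own test.
# Authenticator holds the if-statements (unit tested); build_app() only
# constructs and connects things (covered by a happy-path test).

class UserStore:
    def __init__(self, users):
        self.users = users

    def password_for(self, name):
        return self.users.get(name)

class Authenticator:
    """Pure logic: easy to unit test with a tiny in-memory store."""
    def __init__(self, store):
        self.store = store

    def is_valid(self, name, password):
        return self.store.password_for(name) == password

def build_app():
    """Pure wiring: no ifs or loops, only construction."""
    return Authenticator(UserStore({"ada": "s3cret"}))

# Unit tests exercise the logic with a tiny in-memory store...
auth = Authenticator(UserStore({"bob": "pw"}))
assert auth.is_valid("bob", "pw")
assert not auth.is_valid("bob", "wrong")
# ...while the happy-path test only checks that the wiring comes up.
assert build_app().is_valid("ada", "s3cret")
```

Mix the two in one class and you get exactly the "super bug" described above: no test category fits it cleanly.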
Yes? >> Okay. So I have two questions.
>> HEVERY: Yes. >> One comes from a work problem I'm having
right now, which is what is the best way to test multi-threaded code?
>> HEVERY: That's the question that always comes up. So, the best way that I can
come up with is that you separate it into two problems. So, one is there is the problem
of being the traffic cop, making sure that your--your threads don't deadlock and making
sure the right stuff is happening at the right moment. That's a deadlock problem or rather
a traffic problem. And then the second problem is the actual work that you have to do. So,
you need to structure your code in such a way that you can schedule the threads
at any point in time so that you can simulate what happens if this thing runs first or
that thing runs first. And that really means that you have to separate the scheduling from
the actual logic that is happening inside of the scheduled blocks.
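One way to sketch that separation, with the work as a plain object and the "traffic cop" reduced to an explicit, test-controllable schedule (all names are hypothetical):

```python
# Separating the "traffic cop" from the work: the logic inside each
# scheduled block is a plain method, so a test can run any interleaving
# deterministically, without starting real threads.

class Counter:
    """The work itself: no threads, trivially unit-testable."""
    def __init__(self):
        self.value = 0

    def increment(self):
        self.value += 1

def run_schedule(counter, schedule):
    """The traffic cop, reduced to an explicit list of steps so a test
    can simulate 'this runs first, then that' in any order it likes."""
    for step in schedule:
        step(counter)

c = Counter()
# Simulate two "threads" each incrementing once, in a chosen order.
run_schedule(c, [Counter.increment, Counter.increment])
assert c.value == 2
```

In production the schedule is driven by real threads; in tests it is just a list, so every ordering can be exercised on purpose instead of by luck.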
>> I mean--well, the way that I'm going to approach it was that once you set certain
frame points... >> HEVERY: Yes.
>> ...they would meet at each point. >> HEVERY: Yes.
>> They'd be able to [INDISTINCT]. >> HEVERY: Something like that. There is really
no good easy answer. >> But then, I realized even though, for me,
it was like a one-line code, it's several lines of codes. And even if it's...
>> HEVERY: Yes. >> [INDISTINCT], there's no guarantee.
>> HEVERY: And it's--what you're getting is flakiness, right? Because even
if you test, sometimes it works, sometimes it doesn't. It doesn't prove anything. So,
threads are hard and there is no--yes, it's a hard question. But to digress a little bit, I think
the answer to your problem might be something like [INDISTINCT]. How many people aren't
familiar with [INDISTINCT]? And the interesting thing there is you have a single-threaded system,
therefore these problems don't exist, and instead of scheduling, you have callbacks that execute
in this [INDISTINCT] system, and you're guaranteed that each callback happens by itself, exclusively,
so you don't really have any thread-locking issue. And so, there you can--you don't really
have this particular problem. But that's digressing. So, the answer to the question is look at
the [INDISTINCT], maybe that's what you're looking for.
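The single-threaded callback model he alludes to can be sketched with a toy event loop. This is not any particular framework, just an illustration of why exclusive, one-at-a-time callbacks need no locks:

```python
# Toy event loop: a single thread pops callbacks off a queue and runs
# each one to completion by itself, so no locking is ever needed.

from collections import deque

class EventLoop:
    def __init__(self):
        self.queue = deque()

    def schedule(self, callback):
        self.queue.append(callback)

    def run(self):
        while self.queue:
            # One callback at a time, exclusively -- no interleaving.
            self.queue.popleft()()

results = []
loop = EventLoop()
loop.schedule(lambda: results.append("first"))
loop.schedule(lambda: results.append("second"))
loop.run()
assert results == ["first", "second"]  # deterministic order, no races
```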
>> And then another one is, is there a difference between whether something is logically correct
but not optimized [INDISTINCT]. >> HEVERY: So, you can have a little micro
benchmark test in your code. Those are trickier. So, I usually have a separate test suite that
runs them so that they don't pollute [INDISTINCT]. But there's no reason why you can't say, you
know, "I should be able to run 10,000 operations in a second" and then just run it for 10,000
operations, measure the time. Yes, it might be flaky because you set the threshold too
tight, right, but the other thing that you might want to do is you might want to collect
these numbers. So, one thing is to fail a test outright, but the other thing which you
can do is you collect these numbers and then you put them on your continuous build and
you make a chart. If you use Hudson, it's actually--there's a plug-in for Hudson that
allows you to collect these things and chart them. So, as long as your tests produce some
kind of a properties file that has all these performance numbers, then you can chart
them and then you can see, as commits go in, whether any of them has regressed.
Yes, it's kind of a visual. So there's really no 100% guaranteed way.
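A sketch of both options--failing outright on a threshold, or emitting a number for the build server to chart. The operation, the count, and the threshold here are all assumptions for illustration:

```python
# Micro-benchmark sketch: run 10,000 operations, measure elapsed time,
# then either assert a (possibly flaky) threshold or record the number
# for a chart on the continuous build.

import time

def operation():
    return sum(range(100))  # stand-in for the real code under test

def benchmark(n=10_000):
    start = time.perf_counter()
    for _ in range(n):
        operation()
    return time.perf_counter() - start

elapsed = benchmark()
# Option 1: fail outright if it regressed past a chosen threshold.
assert elapsed < 5.0, f"10,000 ops took {elapsed:.3f}s"
# Option 2: emit a properties-style line for the build server to chart.
print(f"ops_per_second={10_000 / elapsed:.0f}")
```

Keeping this in a separate suite, as suggested, stops its timing noise from polluting the fast unit-test run.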
>> Regarding the running testing [INDISTINCT]. So, you put a lot of codes [INDISTINCT] and
you not only [INDISTINCT]. It's not perfect, but it's [INDISTINCT].
>> HEVERY: I think you were first. >> Do you prefer mocks or objects?
>> HEVERY: Do I prefer mocks or objects? That's always a religious debate. I guess you can
say that I'm kind of the guy who likes state-based testing to some degree. I think
mocks are good and you should use them. I have almost these rules that if you're going
to have to use a mock, you should never have more than two mocks involved in a test.
And then if you're going to have expectations on these mocks, then you shouldn't have more
than, like, three expectations per mock. If you're doing anything more complicated than that,
you're doing it wrong. And if a mock returns a mock, if you train a mock to return another
mock, you're doing it wrong again. But in general, I find that I don't really use mocks
that often. I much rather prefer to do state-based testing, which is that I put the state in
a known situation like, "Oh, the account has $10." And then, I perform a story on it which
is, you know, add some money to it, subtract, add interest or whatever. And then I ascertain
that now the account has a particular amount of money on it. I don't know whether it's
like a personal preference or it's actually better or not, so. I mean, I heard arguments
both ways, but that's just what I do. >> The test vector.
>> HEVERY: I'm sorry? >> The test vector. >> HEVERY: Test vector, okay.
>> The same thing as you do with a chip. >> HEVERY: Yes. I think you had a question
there, right? >> [INDISTINCT]
>> HEVERY: Really? What team are you on? >> [INDISTINCT]. But it looks like the performance
is kind of like [INDISTINCT] actually you want to do is [INDISTINCT]. So now it's like
you do [INDISTINCT] and say, "This is what I always did, so this is the way we should
work." >> HEVERY: Yes, it is in a way like that.
So, business analysis and all these things are important and it's really outside of the
scope of our discussion. The discussion really starts when the [INDISTINCT] PM or whoever
knows what needs to be built comes to you and says, "We need to build an e-mail program,
right, and it should have the following set of features." Then you can turn those features
into specs or stories, and then the stories are what you end up writing. And it turns out
that as you're writing the spec, you're indirectly designing the code because you're saying,
"Well, this class needs to do X and in order to do X it needs to collaborate with this
other concept over here. And so, I'm instantiating this concept and wiring it together." And
in the process of it, you're really doing a design because you're looking at it like,
what does the API look like? You're starting from the user point of view. Like, what does
the API look like and then I'm going to back track and do the implementation, which is
very different than doing the implementation and then really having accidental or incidental
APIs is what most of code has. >> [INDISTINCT]
>> HEVERY: It's all mixed in together. Like, I don't really see boundaries for it. It's
like you write a little code, you write a little test, you run some test, you--something
breaks, you fix it. It's just a continuous, iterative process.
>> [INDISTINCT] >> HEVERY: I have no idea. I'm really the
wrong person to ask about that. But I'm assuming you're referring to Android. I believe that
there is a company called Pivotal and they have a testing framework for Android called
something robot, electric robot, electric sheep. Something. I can't remember what it
is, but they did some--a whole bunch of interesting work around the--around testing Android stuff.
I can't remember the name though. It's something about electric and something about a robot.
>> Robolectric. >> HEVERY: Robolectric? There you go. Robolectric,
it's called. >> Robolectric?
>> HEVERY: Yes. Yes? >> [INDISTINCT]
>> HEVERY: What advice do I have for somebody who is interning in quality assurance
testing? I don't have specific advice, but my general advice is that a good way to learn
all these things is to be part of some open source project. And the reason for that is
because you can try all kinds of different things. You can kind of experiment. And, you
know, the worst thing that's going to happen is they're not going to accept your patch
and you don't have the time pressure or anything like that which you have in most companies.
So where I learned most of my stuff--and where I see the people that I talk to having
learned most of their stuff--is really at open source projects, where they tried different
things and tried to add testability. So, my general advice is be part of open source
and try to do something. And in the process of doing so, you'll probably discover and learn
from other people. >> So what you're saying is rely on your team
that's there with you because there's certain things that you're not going to know and be
open to learn and be flexible because sometimes it requires you think of--and then we learn
Jython performing live, which is Java and Python.
>> HEVERY: Yes. >> So being open to learn those new things
to be able to fix whatever defects that... >> HEVERY: You know, I want to make a more
general statement about what you just said, it is that there's something interesting about
engineers in general and that is that they're really--I don't know if they're afraid or
there's something about saying "I don't know" that they have trouble with, right?
And there's nothing wrong with saying that because there's so much information out there.
Like, you don't know most of it and you just have to accept it. So, a lot of times when
I actually--especially with interns and younger people who are new to work--what I always find is
that they really have a hard time asking for help and saying, "Look, I just don't know.
I need some help." And that's really the best way to learn, just swallow your pride and
say, "Yes, I need it. Help me out" and you will learn all kinds of stuff from there.
>> [INDISTINCT] >> HEVERY: That's a tough one because you're
not--really, it's not a--what you're asking is not an engineering question. So the question
was how do you inject process into an organization and that's not an engineering problem, that's
a people problem. >> Maybe just on the technical side.
>> HEVERY: On the technical side, it's easy. Get something like a [INDISTINCT] injection
system. If you're working in Java, I like Guice, but there are other ones--Spring, and I
can't think of any right now. >> [INDISTINCT]
>> HEVERY: [INDISTINCT] thank you. Both are very good. They're all about the same in terms
of capabilities, so you can't go wrong with any of them. And just start at one location
of the code base and slowly grow it and then people will slowly see, "Oh, yes, that piece
of code base looks much nicer. It's more testable." And just slowly as you need more and more
things just grow it from there. From the people point of view, for a while what I
was doing at Google was actually--there was a group called Test Mercenaries where we went
from group to group and tried to inject these practices, as you say, into the teams. It's
very, very difficult. It's very project-specific because each team has different kinds of hang-ups,
different problems that they're faced with. But in general, what
works is that pairing with people works tremendously. Just--you know, instead of arguing about some
arbitrary, not-real thing, sit behind a keyboard--sometimes you have to trick them into it--and two people
behind one keyboard solving one problem and just work on the one problem together. And
in the process of working together, all kinds of questions come up, and instead of talking
about things in general, you can talk about concrete things that you're actually doing
in a code base. So that is by far the only thing that I know that works, and expect it
to take a long time. Yes. >> [INDISTINCT]
>> HEVERY: If you--if you must go with that, yes, but sometimes it's a dirty world, especially
in corporations. Yes, it is. You can actually get two USB keyboards; they
work together and they can fight over the mouse. Yes. It's called extreme programming
sometimes or agile. >> [INDISTINCT]
>> HEVERY: Again, so you're asking this general open-ended question and it's very specific
to what you're doing and what you're building. So what is--what we might advise is choose
a framework that--where they thought about testability, right? So, what would be an example
of extreme? So, for example, Ruby, and I'm not saying Ruby is the answer. I'm just saying
Ruby and Rails--they spent a lot of time making sure that they have a testability story.
So Rails, I think part of the success of Rails is the very fact that not only do they give
you a language and a framework but they also give you the whole methodology. So, you know,
here's how you build a Rails application. This is where you put the controllers. This is
where you put your tests. This is how you run the tests. This is how the whole thing is broken
down. It's all laid out for you so that you can just kind of go ahead and start. On
the other hand, if you choose a different framework--let me think of one. What was the
thing popular in Java for a while, a long time ago? Struts, for example. Struts never
really came with any testability story. Like, it was just a framework by itself, right?
And so already, you know, it's harder for you because if that's what you're introducing
to the company and not only do people have to learn Struts, but now you also have to
introduce some kind of a testability story. And given that Struts was probably not written with
testability in mind, you're going to have a much harder time doing it. So, my
advice is just choose pieces that already have a testability story in there. So, my
hope is that as the industry evolves, basically they realize that when frameworks are being
done, the testability story of that framework really is just a feature like anything
else. And so I'm hoping that when people are going to be choosing framework A over framework
B, one of the features would be, "Hey, does the framework have tests? What's the coverage,
and do they have a story for both unit testing and end to end testing?"
>> [INDISTINCT] >> HEVERY: To make sure you have a testability
story. Part of the framework that you are building should really be "How do I test different
pieces?" Like, you should think about it from day one. It's not something that your user
should really be figuring out. You need to deliver to them not just the framework but
also a story of how you're going to test this thing.
>> [INDISTINCT] >> HEVERY: I'm sorry. Can you speak up? I'm
having a hard time--there's a--there's a microphone coming your way.
>> Check. Check. Applications that are cross-platform--how would you test for that if you
have environments like that? >> HEVERY: Cross-platform, you mean like the
thing with the multiple browsers? >> Multiple browsers, or even the new frameworks
that allow you to develop one application that will work on multiple platforms.
>> HEVERY: Yes. I think that it really comes down to the same story. I'm sorry, I'm going
to sound like a broken record, but I just--you need to have a testability story with it.
So, if you have a choice between two frameworks, you know, one is promising that they have
a testability story and the other one isn't, that makes--that should be part of your decision
making process. And if you--if you're going to choose a framework that doesn't have it,
it really should--part of the whole design process is you figure out like "How am I going
to test this thing?" If it's a web browser, maybe you can use something like Selenium
for end to end tests, and then ask, can I write my classes for that particular framework in such
a way that there is a testing story involved in this as well? If you, for example,
choose a framework that you absolutely have to use but it doesn't have a good testability
story--for example, servlets are a perfect example of that--testing a servlet is next
to impossible. And part of the reason is because the method, for example, doGet
or doPost, takes an HTTP request and an HTTP response, and instantiating those classes is next
to impossible. And that requires other things. And so it's [INDISTINCT] like pulling
on a thread of a sweater--[INDISTINCT] never comes. Now I'm unthreading a sweater, right? So, in that
case, what you need to do is you need to build a tiny layer between yourself and the framework
that presents the testability for you. So, if you're going to--if, for example, servlets
are the answer, then the first thing you need to do is build a really thin layer, almost
like a veneer between your code and the servlets so that your code never talks to these offending
objects like HTTP request and HTTP response, because those things will be causing troubles
for you later on. >> Do you have any advice with regards to
design, like dev data so... >> HEVERY: Dev data?
>> Like development data to, like, [INDISTINCT] data?
>> HEVERY: Oh, I see what you're saying. You're saying that if you want to do an end to end
testing you probably want to have some data sets to kind of--yes, sorry that's kind of
a little out of my--what I focus on because my focus really is unit test and you're really
asking an end to end question, and I don't really have a good story for you there. Sorry.
>> I want to [INDISTINCT] questions about the servlets.
>> HEVERY: Where is the mic? >> Is it reasonable, for example, to use stubs
for this? For example, [INDISTINCT] has, like, nice stubs for request and response. Like, create,
like, a default constructor for servlets and instantiate and [INDISTINCT].
>> HEVERY: Is it reasonable to use that, right? >> Yes.
>> HEVERY: Yes and no. It is reasonable because it's better than what you had, but in the
long term, you don't want to be mocking out or stubbing out HTTP request and HTTP response,
right? Because, to get back to this example we had over here: if I had a user class, and suppose
the user asks for a cookie in the constructor, and so the Spring framework comes along and says,
"No worries. We've got a fake cookie for you that you can make." Now it becomes
this troublesome thing, because you've got to make the fake cookie, then you have to compute
the cookie, because a cookie is just a string. You've got to put the string into the cookie, instantiate
the user, and then hope that the user reads the cookie and then probably talks to the database
in order to get the user data, right? It's this convoluted way. Like, it's testable but
it's not clean. So a much better design is to say, "No, no, no. User class is just a
value object. It is not responsible for reading its state," right? Put that information somewhere
else and then have the users simply take a string, which is the username. So now if you
have a third class called authenticator and it says "I need a user," it's a piece of cake
to instantiate because you just make a user, set the username and you're done. So these
patches that we have that allow you to instantiate hard-to-test objects like servlets and cookies,
et cetera, they're better than nothing but, really, they're not going to save you
from death by a thousand cuts because, yes, it's a little less miserable, but it's still
miserable. >> Misko, we have a question over here.
>> HEVERY: Yes. >> Hi. My case is a bit surprising maybe.
It's that I hardly have specs or requirements or cases...
>> HEVERY: Yes. >> ...and my application is like a software
for emergency departments in the hospitals and they used to hire like clinical nurses
who are smart enough technically also. So now they want an automation
application, so that's why they hired me. >> HEVERY: Yes.
>> So it's not exactly automation they want; it's auto-magic. So they want me to perform
everything. So they hardly have specs or test cases. So, what do you suggest--which framework
do you want me to follow up with? Do you want me to write test cases first, or would that
be [INDISTINCT] scenarios.
right, like "How do I get myself going over here?" And in a situation like that, you know,
test first has no meaning. What it really comes down to is step one, I need to figure
out a way to write an automated whatever, right? And that might be a really hard thing
and maybe it might be something else. If it's a web app, maybe you can look into Selenium,
which means the only thing you have is end to end test.
>> It's a desktop application. >> HEVERY: It's a what?
>> A desktop application. >> HEVERY: So, desktop application. Then I'm
sure there is some kind of robots that will pretend to be user clickers. Yes, it's a horrible
way... >> Yes. I used to do the record and replay
thing, but that doesn't work at all. >> HEVERY: Right. So that doesn't really work
and the reason for that is because you don't want to record a replay. What you want to
do is you want to build up a DSL. You want to save yourself. Like, you want to say, "Go
click on patient details," right? And that method should go figure out of where the button
is and do the right stuff, et cetera. It's not the best solution because, really, what
you want to do is you want it to be a layer below it which is that you want to say, "Forget
the UI. I'm just talking directly to the controller object and calling the [INDISTINCT] to verify
that the right data is being uploaded." But given the fact that it probably was written
without testability in mind, it might be the only option that you have.
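The DSL idea Hevery describes here can be sketched as a thin layer of intention-revealing methods over the raw clicking robot. This is an illustrative sketch only; the driver is a stand-in for a real UI-automation tool, and all names are hypothetical.

```python
# Instead of recording and replaying raw clicks, tests call methods like
# "open patient details", and one place knows where the buttons live.
# FakeUiDriver stands in for a real UI robot; names are hypothetical.

class FakeUiDriver:
    """Stand-in for a UI-automation robot; records which widgets were clicked."""
    def __init__(self):
        self.clicks = []

    def click(self, widget_id):
        self.clicks.append(widget_id)

class HospitalAppDsl:
    """The domain-specific language the tests talk to."""
    def __init__(self, driver):
        self.driver = driver

    def open_patient_details(self):
        # Only this method knows where the button is and what the
        # right low-level sequence of steps looks like.
        self.driver.click("nav.patients")
        self.driver.click("button.details")

driver = FakeUiDriver()
HospitalAppDsl(driver).open_patient_details()  # the test reads as a story
```

When the UI changes, only the DSL method is updated; every recorded-replay script would have broken instead.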
>> There were no tests before. It was a Fox application and they are upgrading to .net.
>> HEVERY: Right. >> And it's all like cut and paste, like,
patches all over. It's like hell for me. I've been working there two years.
>> HEVERY: So, there is no magical--like I said earlier, like, there is no magical answer
for you, right? It's just, you know, one miserable step after another, like figuring out how
I can put some kind of a test framework in it, how I can build some kind of a domain-specific
language around what I'm about to test, because doing raw clicks will simply drown
me early on in the whole mess of things. So just abstract away
from it and just accept the fact that this will probably never be the sexy super fast
application where you can [INDISTINCT] verify everything. But it is possible through a lot
of blood and sweat to get there. >> Yes. Right now, I'm automating on a scenario
basis like, you know, they have discussed--you know, issuing an order, completing an order.
>> HEVERY: Yes, it is important to have stories like I said earlier.
>> Yes. All right, thank you. >> HEVERY: Yes.
>> I was just going to say more about how I got into testing. I mean, I'm sure almost
everyone in the room has done this where you're running some piece of code and you're looking
at the console or you're looking at webpage and you put some sort of print line system
or something like that and you're looking for a value. So, you are already doing a test there,
you're expecting some kind of value. So if you run that even once--and certainly
if you run it more than once, you might as well write the same amount of code as a test
where you're saying instead of expecting to see this on the screen, you just--in the test,
you say, "This is the value I expect when I put this value into a method." And I got
started in a very simple way, just testing a handful of tiny, really simple methods,
and it's gradually built up on the project I'm doing and I have hundreds
of tests now. And to me, that's a much better kind of metric for how successful the project
is and how much progress I've made because I've actually--each one of these tests test
something real, which I really want to happen in the system.
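The conversion this speaker describes, from eyeballing a print line to asserting the same value in a test, might look like this. The function itself is a made-up example, not from the talk.

```python
# The same check, first as a print line you eyeball on the console,
# then as the same amount of code written as a self-checking test.
# monthly_payment is a hypothetical example function.

def monthly_payment(principal, annual_rate, months):
    """Fixed-rate amortized loan payment (standard annuity formula)."""
    r = annual_rate / 12
    return principal * r / (1 - (1 + r) ** -months)

# Step 1: what most of us start with -- run it and look at the output.
print(monthly_payment(1000, 0.12, 12))

# Step 2: instead of expecting to see the value on screen, assert it.
def test_monthly_payment():
    assert abs(monthly_payment(1000, 0.12, 12) - 88.85) < 0.01

test_monthly_payment()
```

The test costs roughly the same keystrokes as the print line, but it keeps checking the value on every future run.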
>> Yes. >> And the other thing I'd say as well is
to try--when you start off with this, it's quite easy to end up testing things that aren't your
code. So, you end up testing libraries and things like that. So, really think about
where your own logic is. What is it that your own code should be doing, rather
than testing whether it will get something out of the database, because, hopefully, your
ORM is already going to be tested. >> HEVERY: You know, that's a great story.
Anybody else who has a story like that to share?
>> [INDISTINCT] >> HEVERY: Yes. Do you have a story?
>> I actually have a question. >> HEVERY: Yes.
>> The question was: do you have any texts or papers or books that you would recommend that
expound upon this subject well? And you're premising as I ask that.
>> HEVERY: There are good books. Igor, can you suggest something?
>> [INDISTINCT] >> HEVERY: Yes, I think that's for JavaScript.
I mean, you were doing JavaScript. There's another one, Growing Object-Oriented Software,
Guided by Tests, I think it was called. There's always my blog.
>> Also, another Googler, James Whittaker. >> HEVERY: James Whittaker, yes.
>> He's written a couple of books. Whittaker with two Ts.
>> HEVERY: That's right. >> There is one of them that [INDISTINCT].
>> HEVERY: Yes, Google's Java Reviewer's Guide. >> [INDISTINCT]
>> HEVERY: Yes. >> And so, I personally found the Google Java
Reviewer's Guide very useful, so I really recommend that one.
>> HEVERY: I have this bug in my Mac that if it wakes up sometimes I don't have a keyboard.
But if I put it to sleep and wake it up again, the keyboard comes back.
>> [INDISTINCT] I work at an agency where we do a lot of web development in the server
side and front-end and their automated builds are really good at testing every time there's
a check in. However, we have problems with the front end of the JavaScript code. Is there
anything that you know of or Google might use to emulate a web browser or a dom or a
JavaScript rendering engine so we can actually test our [INDISTINCT]?
>> HEVERY: Right. So, there's a couple of options. One is something like Selenium,
right, that's an end-to-end runner. What else? Well, WebDriver and Selenium are kind of for
end to end. It depends on what you want to assert, right? Do you want to assert that
the page looks--renders properly or do you want to assert that the--when I click a button,
the right stuff happens, right? So they're different extremes over there.
>> What's sort of the logic in order to have [INDISTINCT]?
>> HEVERY: So, a good framework would help with rendering. Again, a little shameless
plug. So, we're working on a project called Angular [INDISTINCT]. There, we focus quite
a lot on testability and also making sure that rendering is easy. So, again, it's just
a choice of your framework that helps a lot in that situation. That's the URL
and I'm not sure why my internet is not working, but that's the URL.
>> Maybe you're on [INDISTINCT]. >> HEVERY: I thought I have them, but I don't
know why it's not running. >> Do you want to [INDISTINCT]?
>> HEVERY: Let's see this. Yes. Let's see what's going on with my network. Sorry, I'm
going to try to be back in--other questions? Yes, back there.
>> Sorry. I had a--a long work [INDISTINCT]. >> HEVERY: Yes.
>> So, are there recommendations or [INDISTINCT] any copies in commencing communication? Just
wanting--or is there a key to design [INDISTINCT] for sandboxing that--so that it doesn't impact
the test? >> HEVERY: Yes. So, the general answer to
that is it's always the--everybody has a, like a--you know, I showed you the super bug
on the Android. Everybody's like, "Yes, my code is different. I've got this big problem."
And usually what it comes down to is you're doing too many things and you need to decompose
it, right? So break this down into smaller problems and then the smaller problems usually
are testable. And these very specific questions, it really comes down to, you know, somebody
on the team really caring about it and spending a lot of time trying to figure out like how
can I break this thing down so that different concerns are separated so that I can test
these things in pieces. >> So, yes, we have some--we've got a question
over on the other side of the room as well. >> HEVERY: Go ahead.
>> Hi. I'm a TDD newbie. I'm a veteran of the Cowboy Camp of Coding and I have a project
up--I'm right over here. >> HEVERY: Oh.
>> I've got a project upcoming which is just me and it's about two weeks long. It's an
ideal candidate to sort of do this for the first time. It's in Python and I'm planning
on using Django for it. Is there anybody out here who's built a lot of TDD
stuff with Django or Python? If they could suggest maybe a quick start path, something
that I could look into to actually run and learn during the course of this two-week
project. >> HEVERY: Does anybody have any suggestions?
I'm not a Python guy. >> [INDISTINCT]
>> When I'm trying to learn a new language or anything like that, I usually try, like--they
call them Python Challenges, where you have to write code to fix what they're looking
for. You can look them up on Google, Python Challenge. They give you a scenario where
you have to decode something and you have to write Python code to fix it or to decode
their code. There's a lot of them like that. Like, when I was learning about security,
I went to a site that will show you how to write--or how things are hacked
so that way you know how to secure it. And they've got different levels and tips and stuff
like that. I did the Python one where you had to decode a layer of code and you have
to translate it back to English to know how it worked, and you have to use Python to fix
that problem. It's called the Python Challenge. >> HEVERY: So, to add to that, this might
not be obvious: a lot of people, when they start to do unit testing, somehow
come to it from the point of view of "I've got to test the whole system end to end," right?
Try to give up on that idea and rather start small and say, "Hey, look, there's
this tiny little method that almost does nothing. It sorts people by age," right? Can I write
a small little test that basically asserts that that method works properly? Maybe this
is part of a larger web application that allows you to manipulate your address book and that's
one of the features, to sort them by their birth date. And that's a method, right? So
start small. Start saying, "Hey, can I write a test for such and such utility method?"
And then, maybe you can grow from there and say, "Well, can I write a test to read from
the database?" and "Can I write a test for a controller such that I can replace the real
database with a fake database?" And that would naturally force you to restructure
the code so that you can decouple the controller, or rather the behavior of a page, from the
persistent storage. All along, what you're doing is you're not really doing any kind
of assertions about the UI because, as I said, the UI is really difficult to test and it's hard,
right? So don't try to pretend that you're the user by writing an end to end
test. Usually, that's more than it's worth fighting for. Rather, start small and slowly
build up kind of experience and get better at it and grow it. And then over time when
you have this experience you'll be able to say, "Well, this is worth testing and maybe
this one is more effort than it's worth where I can do some trade off." I mean, it's just
a UI or something like that. Then maybe you have an end to end test framework added to
the mix. So, it really is a continuous process that you just simply get better at it over
time. It's not just some magical thing that just happens, right?
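Hevery's "start small" example, a tiny method that sorts people by birth date, plus the one small test that asserts it works, might look like this. The function name and data are illustrative, not from the talk.

```python
# "Start small": one tiny method from a hypothetical address-book app,
# sorting contacts by birth date, and the small test that covers it.
from datetime import date

def sort_by_birth_date(people):
    """Return contacts ordered oldest-first; people are (name, birth_date) pairs."""
    return sorted(people, key=lambda person: person[1])

def test_sort_by_birth_date():
    oldest_first = [("Ada", date(1815, 12, 10)),
                    ("Grace", date(1906, 12, 9)),
                    ("Alan", date(1912, 6, 23))]
    shuffled = [oldest_first[2], oldest_first[0], oldest_first[1]]
    assert sort_by_birth_date(shuffled) == oldest_first

test_sort_by_birth_date()
```

Nothing here touches the UI or the database, which is exactly why it is an easy place to begin before growing toward controller tests with a fake database.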
>> You got time for maybe two more questions. >> HEVERY: Okay.
>> [INDISTINCT] >> HEVERY: And as you go--and then it's all--try
to push the limits and get... >> Repeat the question.
>> HEVERY: Sorry. The question was, "You're saying start with the most trivial of methods
and then grow from there to get methods that are more and more complicated?" and the answer
is yes. That's exactly how you learn. You just get better and better at it. You know,
stuff that--even the most trivial method to test would require you to have some kind of
a test harness, require you to have the ability to run this thing in a continuous fashion,
to run it maybe on a regular basis, to manage your test, and as the test grows, to have
some kind of a strategy for it. And all of these are learnings that you're going to gather
as you're doing this. All of these are kind of unspoken things that are in the
background that you just have to know how to manage this process. And you can learn
it whether it's something complicated or something simple. So, start small and then just grow
from there. So, two more questions. I think you already asked. Wait, you guys all had a--who
hasn't had the chance to speak? >> [INDISTINCT]
>> HEVERY: Okay, sorry. I thought you did. >> Do you have any thoughts on developer testing
versus QA testing? >> HEVERY: Developer testing versus
QA testing, yes. So my thought is that developers absolutely have to write tests and their job
is to write a unit test. The second level would be the QA. So, there's two kinds of
QA. There is what I would call exploratory QA, which is that you have a system and nobody
really wants to be that guy who reads the script and goes and clicks on it, right? You don't
want to [INDISTINCT], but you don't want to have them. But it is useful to have people,
and especially some people are better at this than others, who really think, "Hey,
can I enter some crazy value into this thing?" This is exploratory testing, right? And when
they discover something, ideally, you want to turn that into another story inside of
your reservoir of tests. That's one kind of testing. The other kind of thing is--well,
at least at Google we call SET, Software Engineer in Test. And their job is to worry more about
the end to end harnesses in the framework, right? So as an engineer, as a software engineer,
my responsibility is to write the unit test. I think a good engineer should also worry
about how the end to end framework should work, but in this situation, it is useful
to have a specialist to basically come in and say, "Okay, if you've done your unit testing
story properly, then let me help you write an end to end story," and then it's everybody's
responsibility to add to the reservoir of end to end tests. But it's really a different
kind of a thing. So that's kind of how the breakdown should happen. In theory and in practice,
you know, things are always slightly different. And one last question?
>> Thank you for the talk. My question is based on a lot of legacy [INDISTINCT] PL/SQL.
Do you have any advice on setting up a test harness for that and testing that [INDISTINCT]?
>> HEVERY: For PL/SQL? Well, if I was faced with that, right, the first thing I would try
to figure out is the right set of scripts so that I can bring up a database and reset it--
reset it into a known state, probably an empty state. And then, have some kind of
a harness that can go and write and read into the database and then build up scenarios from
there. What you're describing is very much an end to end test, right? And it comes
down to: I need to build some way of executing a set of SQL so that I can create scenarios.
And once I have scenarios, I'm going to drown because it's nothing but a whole bunch of
SQL, so I need some kind of a domain-specific language so I can tell stories. So, it really
is kind of a progression. Can I execute SQL? If so, can I turn it into stories at the next level
up? I think we're out of time, Howard. >> Yes, exactly. So, please--everyone, please
help me thank Misko for coming out. >> HEVERY: It was a pleasure.
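The progression in that final answer (reset the database to a known state, execute SQL, then build a DSL so tests read as stories) can be sketched like this, using Python's built-in SQLite as a stand-in for a real PL/SQL database. The harness class, table, and story methods are all hypothetical.

```python
# Step 1: reset the database into a known (empty) state.
# Step 2: a harness that can execute SQL against it.
# Step 3: a small DSL so scenarios read as stories, not raw SQL.
# SQLite stands in for PL/SQL here; all names are hypothetical.
import sqlite3

class DbStoryHarness:
    def __init__(self):
        self.conn = sqlite3.connect(":memory:")
        self.reset()

    def reset(self):
        """Bring the database up in a known, empty state."""
        self.conn.executescript(
            "DROP TABLE IF EXISTS orders;"
            "CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT);")

    # The DSL layer: tests say "issue an order", never raw SQL.
    def issue_order(self):
        cur = self.conn.execute("INSERT INTO orders (status) VALUES ('open')")
        return cur.lastrowid

    def complete_order(self, order_id):
        self.conn.execute("UPDATE orders SET status = 'done' WHERE id = ?",
                          (order_id,))

    def order_status(self, order_id):
        row = self.conn.execute("SELECT status FROM orders WHERE id = ?",
                                (order_id,)).fetchone()
        return row[0]

# A scenario now reads as a story:
db = DbStoryHarness()
order = db.issue_order()
db.complete_order(order)
assert db.order_status(order) == "done"
```

Each test starts by calling `reset()`, so scenarios never depend on leftovers from a previous run.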