Agile Connect 2011 - Keynote: Software Design in the 21st Century


Uploaded by CMCMediaInc on 10.06.2011

Transcript:
I am pleased to, I'm very pleased to introduce Martin Fowler. Martin is an author, speaker
and, in his words, a loudmouth pundit on the topic of software development. He's worked
in the industry since the mid-eighties, starting in what was the brand new world of object
oriented software. He spent much of the nineties as a consultant and trainer, and joined ThoughtWorks
in 2000.
He's the author of six books. The first one, the way that I discovered Martin, was his book
"UML Distilled," because I was one of those OO UML guys a long time ago, trying to
puzzle through giant documents trying to figure out what UML was all about. And then, he has
this wonderful little book, it's what, half an inch thick? And it's just a marvelous book
that takes you right to the basics. So, that was the first book I read of Martin's.
He's also written "Patterns of Enterprise Application Architecture," a marvelous book
on refactoring subtitled "Improving the Design of Existing Code," and a book on analysis
patterns, patterns that we see over and over again as we do analysis. His latest one, called
"Domain Specific Languages," is fantastic also. I've been after Martin for a number of years
to come and present. I'm so thrilled that he's here.
Let's hear it for Martin Fowler.
[applause]
Thanks. Yeah. Of course, if you're quieter than the applause you gave to the previous
speaker, I'm going to feel all depressed and unwanted.
Okay, so I'm going to do what I've been doing for a year or two now with my keynotes, which
is instead of - obviously the title, in case you haven't noticed, is one of those anonymous
titles that allows me to talk about whatever I want to talk about. It's what Lee refers
to as an IOU title. Meaning I'm not going to tell you what I'm going to talk about,
I'll make it up sometime before I actually go on stage.
And in fact, what I've been doing with the keynotes is I've decided that rather than
do one long, boring keynote, I'm going to give you three short, boring keynotes instead.
The first one, I'm going to look at a common problem in testing, why it's important and
why we should fix it. In the second one, I'm going to look at why software design is something
that's important and worthwhile to us and why we should care about it.
And then, in the third one, I'm going to muse a bit on the fact that it's been ten years
since the Agile manifesto and where the Agile movement has been going and where I think
it might go next. So, I will kick off by talking about non-determinism and testing. This is,
if you read my website and stuff regularly, an article I posted a month or
so ago.
In fact, everything I'm talking about is stuff that's posted on my website. So, if you read
stuff on my website, you might as well head out and enjoy the glorious architecture of
downtown Las Vegas. So, what do I mean by non-determinism and testing? Well, let's imagine
I want to run a test and it goes red on me. And I go, "Oh, okay, what's going on here?
Let's run that test suite again," so I run it again and, oh, now this time it's green.
OK, let's run it again. No, no, maybe things are okay after all. Well,
let's just be absolutely sure and just run it one more time. Oh, now it's
gone red again! What the hell's going on here? Has anyone had this kind of reaction to tests
that seem to work and not work, almost at random?
OK. Yes, that's what I call something that's non-deterministic. By the way, anyone recognize
this GUI window? Anyone remember this GUI window? Anyone actually used it in their
youth? This is version 1.0 of JUnit. I had to dig that out from some old CDs to
find it. But, I thought, you know, if I'm going to show anything else as a test window,
it really has to be the original JUnit.
Anyway, people talk about these test failures, they might talk about them as intermittent
failures. You hear the term "flaky" a great deal. You know, "Those tests are flaky." But,
the term I like to use is non-deterministic, and it's non-deterministic because it seems
to pass or fail in ways that are seemingly random to me.
Now there's another term for non-deterministic when it comes to testing, and that is that
non-deterministic tests are a complete waste of space. They are useless. And they are useless
because the whole point of tests, with a regression suite like this, is that what you're doing
is you're using it essentially as a bug detection mechanism.
Did I screw up somewhere when I was doing something over here in the code, and break an
assumption over there in the code? And its value is the fact that they tell you
right away. They go red and they say, "Whoops, you screwed up." And if you're like me you'll
say, "Uh-uh, yet again. But thank you very much, tests, because since I know I've screwed
up quickly, I can quickly fix the problem."
The beauty of using this kind of automated testing suite is that the time between making
the mistake and realizing you've made a mistake is very short, and because it's short, you haven't
done that much and therefore you can figure out what you did wrong and you can fix it.
But the problem with a non-deterministic test is when it goes red, you just don't know what
to think of it, because then you run it again and it will go green, and then it
will go red, then it will go green.
It's not giving you reliable information. So it's effectively a waste of time. You might
as well ditch it for all the good it's doing you. But non-deterministic tests
are actually worse than useless. They're a very nasty thing. Let me explain a little
bit why they're worse than useless.
Let's imagine I've got a suite of tests, a whole bunch of tests, and they're all okay. If I
run that suite, I don't look at all the individual tests, I look at the overall results of the
tests and say, "Okay, the thing is green." If I have a suite of tests with one failing
test in it, and I run it, my first reaction is, "Oh, the whole suite is red." I then dig
into it to find out, okay, which failed test caused the suite to go red.
But if I have a suite that's got a non-deterministic test in it, what happens to the overall
suite? Well, it's sometimes green, sometimes red, sometimes green, sometimes red. The whole
suite is affected by that test. What's happened is the non-deterministic test is kind
of like an infection that goes and infects the entire suite and causes no end
of trouble.
And that's why non-deterministic tests are such a dangerous thing. So what do we do about
them? It is certainly important to realize that we care a great deal about this. If you
allow some non-deterministic tests into your test suites, essentially what's going to happen
is, early on, people will say, oh okay, the test suite failed, but it was that test and
we know it's flaky. But after a while, people say, oh, that test suite, it's
flaky. And that basically makes every test useless. People stop trusting the tests. And when
tests fail because you screw up, you don't notice, because yes it goes red, but all those tests
are flaky anyway.
And at that point, all of your tests, the good ones and the bad ones, become useless.
And that's why non-deterministic tests are a virulent infection and why you have to do
something about them, because if you don't do something about them, your entire regression
suite will become useless and you'll lose all the benefits of automated testing.
And I've seen this happen. I've talked to teams where they said, oh yeah, well our tests
are useless and we don't really trust them very much, we will look at them occasionally,
but we don't do anything seriously as a result of them. So, we absolutely must take action
quickly when it comes to dealing with non-deterministic tests.
So, what do we do about them? Well, the first step is relatively simple. What would
you do if I told you that my great friend Linda here has a virulent infection
of smallpox and bubonic plague? What would be your reaction?
Run away.
Run away. Get her out of the room. That's the thing, right. What happens when
somebody's infectious? You want to put them into quarantine. And that's the first step
for dealing with a non-deterministic test. Set up a quarantine area. Move that test to
quarantine. Now, the quarantined tests aren't dead.
You haven't got rid of them. You know, we wouldn't kill Linda just because she had bubonic
plague and smallpox, we would want to cure her, of course. But the first step is to get
the test away from everything else. That way the rest of the test suite can carry on okay,
we'll continue to trust it, and we've at least pulled the test out of the main line. If you're
using a deployment pipeline, then you take the quarantine tests out and you don't make
them part of your regular pipeline. That way, the regular pipeline keeps thundering along,
it's able to give you useful information, and you're not infecting the rest of your
test suite.
But when you do use quarantine, you've got to be careful. You've got to be careful to limit
that quarantine, because the danger is that if more and more tests become non-deterministic,
you'll sling them all over to quarantine. And, of course, any of those tests in quarantine
aren't actually doing any good. They are not infecting the rest of your tests anymore, but
they're still useless. And that means you've not got important pieces of test coverage
for your application. So it's important with quarantines to set some kind of limits.
Limits I've come across are things like: no more than three tests are allowed in quarantine
at once, and if it goes above three, then we have to take action. Or it might be
that no test is allowed to stay in quarantine for more than a week, or something like that.
Set up some kind of limit on how much stuff is going to be in quarantine. Because what
we have to do is figure out, okay, how do we deal with the non-determinism? And
that means understanding the causes of non-deterministic tests. Now, I'm not going to
have time to go through all of the causes in detail in the talk, but I'm going to pick
out the highlighted ones.
You can go through to the article on my website to get more of the details. But I will go
through a few of these common causes. And I'll begin with the first one, which is
probably one of the most pernicious.
So, say we have a test. We've got some orders that are coming in from
the outside world and we want to make sure that we can sum the value of these orders
up. So, we have some test that says, given three orders in my test fixture, the sum of
the orders equals the value of what I know it should be. A simple kind of test. And somewhere
else I have another test, and in this test I test that I can load a new order into the
system, and then once I've got that new order, it's there, it's part of my test fixture.
A great test, works really nicely. But what happens if, instead, I do the load-new-order
test first and then I do my sum-of-orders test? What happens, of course, is that sum of orders
is going to go red. And what's happening here is a lack of isolation. The two tests aren't
isolated from each other. And as a result, depending on which order you run them in,
one will succeed and one will fail.
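Roughly the kind of pair of tests being described - just a sketch in JUnit-style Java, where the order repository and its seeding are invented for illustration:

    import static org.junit.Assert.*;
    import org.junit.Test;

    public class OrderTests {
        // Shared fixture, seeded once with three orders worth 300 in total --
        // sharing it between tests is exactly the isolation trap described above.
        static OrderRepository orders = OrderRepository.seededWithThreeOrders();

        @Test
        public void sumOfOrders() {
            // Only passes if no other test has slipped an extra order into the fixture.
            assertEquals(300, orders.sumOfValues());
        }

        @Test
        public void loadNewOrder() {
            orders.add(new Order(50));
            assertTrue(orders.contains(new Order(50)));
            // The new order is now part of the shared fixture, so if this test runs
            // before sumOfOrders, that test sees 350 and goes red.
        }
    }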
And this is a common problem with non-determinism because, depending on the
order the tests run in, they don't necessarily pass or fail consistently. A particularly nasty
consequence of lack of isolation is that when the test suite fails, and you say, oh,
it was this test that failed, usually one of the first things you do is run that test
on its own, to see what's going on.
But if you've got an isolation problem and you run it on its own, it will typically pass.
It's only when it's in the suite that it fails. And typically, as I said, depending on where
in the suite it runs, in what order. So, isolation is something that you need to deal with. And
there are two broad strategies for dealing with this problem.
The first approach is to track dependencies. And that is to know that, oh, I have to run
my check-the-sum test before my add-the-new-order test. The other approach
is to isolate, and to isolate means to write all of your tests in such a way that they
never stamp on each other, that they don't interfere with each other
in any way.
And, when it comes to looking at this pair, I have a very strong preference. I don't
like to track dependencies, and the reason I don't like to track dependencies is because
it's complicated. You're always in this position of saying, oh, this test has to run after that
test but before this test, and it's a pain in the neck to keep track of.
It also makes it difficult to run individual tests on their own. And, furthermore, as you
get a larger and larger test suite and you start thinking, oh, maybe we should, kind
of, spin up some cloud instances and run this stuff in parallel, dependencies will get in
the way, because you have to be aware of the dependencies to decide what order to run your
tests.
Of course, I have to mention, because it's kind of obvious to me here, that anything complicated
to keep track of is a perfect opportunity for a tool. So, I'm sure there are tool vendors
out there saying, oh, you've got to track your dependencies, and for that
you'll need a tool and, hey, we happen to have one.
But, on the whole, I don't like tracking test dependencies. What I prefer is isolation.
And there are two basic strategies for keeping things isolated. The first approach is a cleanup
strategy, and that basically says every test has to ensure that it leaves the
world the same way that it found it.
So, in the case of loading the new order here, one of its responsibilities is
deleting anything new that it creates. And this is why you find, in
a lot of testing frameworks, there is usually a tear-down part of the framework
that allows you to get rid of any resources that you may have created but need to
explicitly destroy at the end of a test.
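In JUnit terms, the cleanup strategy might look something like this - again just a sketch, with the order repository an invented stand-in:

    import org.junit.After;
    import org.junit.Test;
    import static org.junit.Assert.assertTrue;

    public class LoadNewOrderTest {
        // Hypothetical shared repository that outlives any individual test.
        private final OrderRepository orders = OrderRepository.shared();
        private Order newOrder;

        @Test
        public void loadNewOrder() {
            newOrder = new Order(50);
            orders.add(newOrder);
            assertTrue(orders.contains(newOrder));
        }

        @After
        public void tearDown() {
            // Cleanup strategy: the test deletes whatever it created,
            // leaving the world the same way it found it.
            orders.remove(newOrder);
        }
    }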
And this is a pretty good strategy; it just has one noticeable disadvantage, and that
is that if you make a mistake, it's hard to deal with afterwards, because the fault is in
the test over here that didn't do its tear-down properly, but you don't see a problem
until some other test that depends on having a clean slate fails.
And there's a big distance between the fault and the failure - the cause of the problem
and what you can see of the problem. And as a result, cleanup is, you know, an OK
approach, but it's not my preferred one, because it means you've got to do a good bit of debugging
should you get into an isolation problem.
The other approach is to use a blank slate, and basically the idea with a blank slate
is that every test creates what it needs for itself automatically at the beginning of
the run of the test. And certainly a lot of unit test frameworks encourage this kind of
behavior. The nice thing about the blank slate is that each test really is independently
responsible for its own behavior.
And, as a result, that limits the amount of isolation problems that you might run into.
The disadvantage of using a blank slate is that sometimes building up that initial fixture
can be expensive. This is particularly the case in larger-scope functional
tests that often rely on, for instance, a database being set up, and loading up a database's
worth of stuff for every single test can take time.
And there are tricks you can use to get around it, like copying database files rather than
doing inserts and that kind of thing, but that is a factor, so sometimes you have to
balance the cleanup and the blank slate approaches. A common trick that people use for cleanup,
for instance, when you're running database-oriented tests, is to run them inside
a transaction and not commit the transaction, but roll it back at the end of the test. And
that's a good way of helping to do that, providing, of course, the thing
you're testing isn't dependent upon transactions being committed.
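A minimal sketch of that rollback trick over plain JDBC - the test database and the insert/count helpers here are made up:

    import java.sql.Connection;
    import org.junit.After;
    import org.junit.Before;
    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class OrderDatabaseTest {
        private Connection connection;

        @Before
        public void beginTransaction() throws Exception {
            connection = TestDatabase.openConnection();  // hypothetical helper
            connection.setAutoCommit(false);             // run everything in one transaction
        }

        @Test
        public void loadNewOrder() throws Exception {
            insertOrder(connection, 50);                 // hypothetical INSERT helper
            assertEquals(4, countOrders(connection));    // hypothetical SELECT COUNT helper
        }

        @After
        public void rollBack() throws Exception {
            // Never commit: rolling back leaves the database exactly as we found it,
            // so the next test starts from the same state.
            connection.rollback();
            connection.close();
        }
    }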
But, broadly, those are the two strategies you need to follow. And, on the whole, I prefer
the blank slate one, unless other constraints push me towards a cleanup approach.
But, that's about dealing with isolation. It's hard to say, you know, what proportion
of flaky tests this cause accounts for. But isolation is certainly a really common one, and it's
also one of the messier ones to deal with, because, particularly if you've got that failure
of cleanup, it often requires a bit of detective work.
But that's what you need to do in order to keep your tests healthy.
That's our first cause, so let's move on to our second cause: our good friend, asynchrony.
So, asynchrony is very useful to us in many ways. It allows us to build systems that
are responsive even though they're carrying out long-running operations. But it does,
of course, lead to testing difficulties.
And the basic problem is we invoke some function on, typically, some remote service, but
it could be something more localized. We then have to wait, in order to see if we get
some kind of answer. And the common approach people typically take when they do this
is something along these lines, what I refer to
as a bare sleep.
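Roughly the sort of thing that was on the slide - a sketch in JUnit-style Java, with the asynchronous pricing call invented for illustration:

    @Test
    public void priceGetsCalculated() throws Exception {
        pricingService.requestPriceAsync(order);  // kick off the long-running operation
        Thread.sleep(5000);                       // bare sleep: just hope five seconds is enough
        assertEquals(300, order.getPrice());      // goes red whenever the machine is slow or loaded
    }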
How many people have done this in their tests? Go on, admit it. Even Jez is doing it, hey?
Wrong! Never do this. Even good people make this mistake, but no, no, no, don't
ever do a bare sleep. The problem with a bare sleep is that you are always struggling with
how long to sleep for. If you sleep for too long, your tests run really slowly. But if
you don't sleep for long enough, all it takes is moving to a slower machine, or
the machine being a bit more loaded that day than it was yesterday, and you'll get a test
failure. A classic intermittent problem, because you didn't sleep for long enough. And
that tension between how long do I sleep for in order to get fast tests and reliable tests
is pretty overwhelming.
So just don't do this. You have two alternatives to fix this. And which one you want to use
will depend a lot on the testing framework and the circumstances that you're in.
One simple approach is a simple polling loop. And the trick here is you're kind of doing
two sleeps, in a way. You've got the sleep inside the polling interval, and then you've
got your timeout length; that's part of it as well. And the nice thing is you can keep
your polling interval nice and short.
That way your test will never take longer to run than it takes to actually detect what it is
you're looking for, and your timeout can be left nice and long. It does mean that if you
get a slow response, your test will run slowly, but then it's inevitably going to run slowly.
And if you set the timeout too low, then yes, you can get that kind of elusive problem,
but that's less of an issue because you can afford to keep a fairly long timeout. And
that polling loop is much, much better than a bare sleep for that reason: you've got two
times to play with.
And usually, as well, that timeout should be set as some kind of easily changeable
global constant, or somewhat global constant, so that you can easily increase the timeout
if, say, you have to move to a new machine and you're beginning to get
problems.
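A minimal sketch of that polling loop, with the two times pulled out as easily changeable constants (the pricing call is the same invented example as before):

    // Keep the polling interval short and the timeout generously long.
    static final long POLL_INTERVAL_MILLIS = 50;
    static final long TIMEOUT_MILLIS = 30_000;

    @Test
    public void priceGetsCalculated() throws Exception {
        pricingService.requestPriceAsync(order);
        long deadline = System.currentTimeMillis() + TIMEOUT_MILLIS;
        // Poll until the answer shows up, or give up at the timeout.
        while (!order.hasPrice()) {
            if (System.currentTimeMillis() > deadline) {
                fail("Timed out waiting for the price to be calculated");
            }
            Thread.sleep(POLL_INTERVAL_MILLIS);
        }
        assertEquals(300, order.getPrice());
    }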
The other tactic for asynchrony is a callback. And basically the idea with a callback is
that you invoke the long-running function and you say, once that
long-running function is finished, call this callback method, and the callback method itself
contains the appropriate verifications as part of your test.
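One way that might look in plain Java, using a latch so the test can tell whether the callback ever fired - the asynchronous pricing API is, as before, made up:

    import java.util.concurrent.CountDownLatch;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.atomic.AtomicInteger;
    import org.junit.Test;
    import static org.junit.Assert.*;

    public class PriceCallbackTest {
        // pricingService and order are hypothetical fixtures set up elsewhere.

        @Test
        public void priceGetsCalculated() throws Exception {
            CountDownLatch done = new CountDownLatch(1);
            AtomicInteger receivedPrice = new AtomicInteger();

            // Hand the long-running operation a callback; it records the result
            // and signals that it has run.
            pricingService.requestPriceAsync(order, price -> {
                receivedPrice.set(price);
                done.countDown();
            });

            // Fail if the callback is never invoked; otherwise verify what it received.
            assertTrue("Callback was never invoked", done.await(30, TimeUnit.SECONDS));
            assertEquals(300, receivedPrice.get());
        }
    }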
Now this all, of course, depends upon your testing framework. Your testing framework
typically needs some way to collect callbacks and to make sure that they all get run
and that none of them are left hanging, and things of that kind. But this is quite nice
because it'll never take longer to run than it needs to, but at the same time you've
got the tolerance for things taking longer when they do take longer. Those are basically the two
strategies you have; which is the right one to use will depend upon the
testing framework you're looking at.
And sometimes it's based on the language. In some languages it's easier to do callbacks, in some
it's not. I've seen this problem of asynchronous behavior a great deal in web applications
these days, because of asynchronous JavaScript operations. And, again, whether you use one
or the other will depend on which one the testing framework you use for your JavaScript
makes easier.
But whatever you do, don't use a bare sleep. And if you do use a bare sleep, replace it
as soon as you can, because it's just asking to go non-deterministic on you. It's only
a matter of time. In both senses, I guess.
Third cause: interaction with remote services. So, the interaction-with-remote-services problem:
typically you've got some app that you're working on and there's some remote system
that you need to rely on in order to do something with your tests.
Maybe it's a pricing engine or a billing system or a customer credit check, or you know, whatever
it is you're having to deal with. Most of us have to deal with remote systems, some
way or another. The thing is that there are many things that can go wrong when you're
dealing with a remote system that have nothing to do with your application code itself.
You know, you may have problems in the connection. The network may go down or the network may
become incredibly slow, something of that kind. The remote service itself may have limited
availability. It might go down and then that would bring the tests down. You might find
that you're relying on certain data being present and the remote end isn't very good
at keeping that data in place. Sometimes they may give you a test instance of the remote
service to hit. Sometimes they might want you to call the live remote service.
Either way, you have no control, really, over what data is available to you to make
your tests. So as a result, there are two levels of instability: all sorts of ways that your
test can fail, not because of a problem with your code but because of a problem with the connection
to that remote service.
So the general solution to dealing with remote services, which also solves another problem,
that remote-service calls are often very slow, is to use some kind of test double, a fake
or a stub, for that remote service. That way, you control what data it's going to respond
with. You control any changes that it makes, and you also make sure that you have a
fast connection and all the rest of it.
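A rough sketch of what such a test double might look like, assuming the application talks to, say, the credit-check service through an interface it owns (all of these names are invented for illustration):

    // The interface our application code depends on.
    public interface CreditCheckService {
        boolean isCreditworthy(String customerId);
    }

    // Test double: canned, deterministic answers, no network involved.
    class StubCreditCheckService implements CreditCheckService {
        @Override
        public boolean isCreditworthy(String customerId) {
            // We control exactly what data comes back, so the tests are fast
            // and can't be broken by the network or the remote system.
            return !customerId.startsWith("BAD-");
        }
    }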
So your tests will run faster, and they are less likely to suffer any problems due to
non-determinism. But, of course, a lot of people blanch a bit at this. They say, well,
you're not testing the real remote service, something could change there, and your test
double hasn't changed and you've...it's basically a consistency question between the two.
How do we know your double is really a good double of what that remote service does? And
the way you fix that is with a different style of test, a test that I refer to as an integration
contract test. And what the integration contract test does is pretty straightforward. It basically
makes the same call to the remote service and to your test double and verifies that
it gets back the same answer.
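In sketch form, an integration contract test might be as simple as this, reusing the invented credit-check names from above (the real-service adapter and its URL are hypothetical too):

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class CreditCheckContractTest {
        @Test
        public void stubAgreesWithRealService() {
            CreditCheckService real = new RemoteCreditCheckService("https://credit.example.com");
            CreditCheckService stub = new StubCreditCheckService();

            // Make the same call against both; if the answers diverge,
            // the double has drifted out of sync with the real service.
            String customerId = "CUST-42";
            assertEquals(real.isCreditworthy(customerId), stub.isCreditworthy(customerId));
        }
    }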
And this way, you can ensure that your test double and the remote service are in sync.
Now, integration contract tests don't have to be run as part of your main pipeline, because
they're going to fail, not because of what you do, but because of what the remote service
does. So the question you have is, how often is the remote service likely to change
in a way that means you need to check the consistency?
And that will depend on what's going on on that remote service side. If they only deploy
changes once in a blue moon, in theory, you only have to run the integration contract
tests once in a blue moon. In reality, most teams are probably going to run these at least
once a day or so, just to make sure that everything's OK.
But it's not part of your pipeline. It won't fail your build if the integration contract
tests go down. If an integration contract test goes down, it does mean you have to figure
out what went wrong, and it often may trigger a conversation between you and the remote service
team to find out what's been going on.
But that's a different style of problem and should be reacted to differently from
your main pipeline itself. So, there are more causes. In the article online, I talk about
problems to do with time, problems to do with resource leaks, but I don't have enough time
to talk about those now. So that's enough on non-deterministic tests.
I do recommend, though, that you take this stuff seriously, whether you're a tester,
developer, manager, whatever. Flaky tests are, time and time again, a cause
of losing faith in the tests, of the tests no longer giving you the value that justifies having
them. And since a good regression suite is such a wonderful thing, you really have to
work at making sure it stays good.
And, you know, I've identified this as a common problem by my totally unscientific analysis
of projects that I've happened to bump into. But, I do think it's a more common problem
than we like to talk about and it's eminently fixable. The combination of quarantines and
the kinds of investigations I've talked about here are capable of fixing a hell of a lot
of these kinds of problems. And you know, it's not rocket science, you can do it.
So, that's my first talk. Now we move on to something a bit different; that was a kind
of very down-to-earth, concrete, nitty-gritty-details thing. Now we move to something a
bit more waffly and philosophical, although I still think it's a very important topic:
why is it important and valuable to think about well-designed software?
And really, it's triggered by things like this that I hear. Anybody heard this being
touted around, or people using this argument? A few hands, not that many hands being raised.
Wow, you're all a lot more lucky or disciplined than I think you are. Or maybe you're just
lazy and it's early in the morning, who knows.
So this is a notion I think of as tradable quality: that we think of quality as something
that we can trade off for other things. And it leads to the question of, okay, why should we care
about design? Why should we put the effort into design in order to counter this kind
of argument? And one person who has quite a good argument for why we should do something
about this is this guy.
Anybody recognize him? A smattering of you at least. This is Uncle Bob Martin. Now, he
has an argument over why we should care about software design, which I would like to briefly
summarize: Badly designed software is a sin! Global variables were created by Satan as
a trap for developers! And every time you name a method badly, it will be branded into
your flesh in the burning fires of hell!
Fair summary of his point of view, do you think?
Yep.
It works for Bob, but it doesn't quite work for me; I can't really pull that off. So,
as a result, I look at it from a slightly different point of view. What's happening
here is that people are following a notion that they usually have about quality, which
is that quality is something that we can trade off for other things.
You make this decision every day in your life, right? You say to yourself, do I want to buy
the cheap car or the more expensive fancy car? I mean, I'd like to have the more expensive
fancy car, but I can't really afford it. I'd rather put that money towards other things, so
I'll buy the cheaper one.
That's a decision we make all the time. We trade off quality for cost. And it's a natural
part of how we think about the things that we buy every day. But this then raises the
question, what do we mean by quality in terms of software? Well in fact there are several
things in software that we can talk about in terms of quality.
But there's a very important distinction between some of them and others. And
that is, are they visible to our users and our customers? A nice user interface, rounded
corners, pleasant gradients, things like that - you can appreciate that difference. You know
when you look at a website whether it has that nice modern look, or whether it's one of
those old ones that look like they were written in the 90s.
You can feel the quality difference there. So you can make a statement as a user or customer
of, do I want the rounded corners or not? But when it comes to the internals of software,
have I got well factored code? Am I using global variables? That kind of stuff. The
users can't see it at all. It's a distinction that I make between internal and external
quality.
Now, as a buyer of software, if I've got a quality that's going to cost me some more,
but not actually be anything I can perceive, why would I want it? Linda offers me a piece
of software. Beautifully crafted, perfect internal design, does five really good functions.
Jez offers me software that does the same five functions, but it's a sprawling mass
of spaghetti, stinking to high heaven.
Linda's software is a hundred bucks, Jez's software is eighty bucks. Which do I pick?
As a user, I'm going to pick Jez's. It's cheaper software, does the same thing. What do I care
what the insides look like? When you think about quality in those terms, you can see
why no one takes it seriously. So why do I take it seriously? Let's face it, my whole
life, my professional raison d'être, is about promoting good internal design of software.
It kind of feels like a bit of a hopeless errand at the moment, but I have a reason.
And this reason is something I give the kind of ugly, but very googleable, name of
the design stamina hypothesis. It basically looks like this: if we plot a pseudograph
of cumulative functionality versus time, what we see with badly designed software is a curve
that looks like this.
It basically means that we can make progress rapidly early on, but then over time things
slow down. They slow down because, due to not paying attention to refactoring, we're
not keeping our code clean, we're not having those regular conversations about keeping
the design good. We're just hacking stuff in, and over time everything gets slow and
difficult to build.
How many people have been on projects where they have had that kind of feeling? Yeah,
pretty much everyone; it's usually the case. But there is an alternative - to put attention
into design, to refactor regularly, to keep your code clean, to make it understandable.
And the hypothesis is you can change the shape of that curve.
Not only can you stop that slowing-down effect, maybe you can even get a speeding-up effect,
where you have some new functionality and you're able to put it together really
quickly because you're able to grab this object and that object and wire them together, stick
a little thing there with that object, and you're done.
How many people have been on software projects that have been like that? Well, a few of you...
fewer, but a few of you. And that's the essence of why this matters, because we feel there
is this difference between those two curves. I refer to it as design stamina because what
I'm saying is, design gives us the stamina to keep building quickly.
Yeah, I can buy Jez's software for eighty bucks now, but in a year's time, Linda's come
up with five new, absolutely killer features for her software, and Jez has barely managed
to crank out one. Who's the fool now? What's the better choice? Internal quality gives
us that long running stamina to keep developing new features and keep doing new stuff cheaply
and quickly.
And it's a hypothesis because, you know, we're in the software business. We don't have
any proof of any of this. We have no way of measuring our output and, as a result, we can't
prove this. But a lot of us believe this hypothesis to be true. How many people would put their
hands up on that? Do you think that's true, that those curves exist? Yeah.
Most people I talk to think that happens and that's why we care about design. So, let's
run with this a little bit more. Let's take this hypothesis a bit further. Think about
another pseudograph. I want to add a new feature and I've got two systems: a clean system (Linda's
system) and a typical system (Jez's system).
I'm really picking on you today, aren't I? So what we see, of course, is that the design stamina
hypothesis tells us there's a difference between the two. Effectively, this is an economic
cost of poor design. And this is where I feel my argument basically sits.
Many people in the software industry take this kind of line: you should do
design because it's good for you, because it's the right thing to do, etc., etc.
But, you know, especially in this place, in Las Vegas, I know one thing. Morality people
can talk about, but money almost always trumps it. This is money; this is the economics of
software design. No morality needed, just an argument in terms of money. The problem, of course,
is it's unmeasurable money and not properly accountable, so it's not as good as it could
be, but it does provide the line of argument.
Another good way to help communicate this to people is to think: well, what's happening
here is, if we've got complexity in the software that we don't really need, that causes this
extra effort to add new features. There is a relationship between the two, the quality of the code
base and the effort to add new features. Pulling this all together, there's a lovely
metaphor that Ward Cunningham came up with, called "technical debt." What Ward said about
this is that you can think of it in terms of financial debt.
The unnecessary complexity in your code base is like the principal of a debt. And
the extra cost that you have when you implement new features is the interest payment on that
principal. Now, what I like about this metaphor is that it gives us a way to communicate to
non-software people something about what's going on in the code base. It's very hard to talk
to them about badly named variables, and poor factoring, and things of that kind.
But this metaphor does seem to communicate something. And it also, in particular, leads
us to a very important part of the decision of what to do about it. If you have got accidental
complexity in the code base, what do you do? Well, it's like what you do when you've taken
out a loan. You've got two choices: you can pay off the principal, or you can keep paying
the interest. Or you can obviously do some combination of the two. But there is always
a trade-off, a choice, going on. And when you're building new software, you
can ask: am I going to do the quick and dirty approach and increase my debt, and therefore
increase the interest payments I'm going to make in the future, or am I going to try and
pay down the principal a little bit and reduce those interest payments? And then you can begin
to get some connection with the economics.
Again it's not a perfect metaphor but it does work rather well. And I see teams beginning
to track this.
It's quite common now, I think, for teams I've talked to to actually add technical debt
stories to their backlog and say, these are things we know we need to do because we've
got debt that we need to pay down. The tricky thing, though, is trying to get some way of
conveying what the interest payments look like, because you need both halves of the picture
to be able to make that trade-off.
And that's something I think people are finding a lot harder.
So, that's the basic metaphor of technical debt. I want to explore it a little further
and think about how we get into technical debt, because actually technical debt comes
in different forms. The seeds of this came from one of those blog arguments we have -
actually, it was with Uncle Bob - and it turned on the
nature of different kinds of technical debt. Often when people talk about technical debt,
they talk about it in the sense of a conscious decision: I've got a deadline coming up,
I know I have to get the software, or certain functionality, out by that deadline, and I'm prepared
to trade off some of the quality, take on some debt, in order to hit that deadline.
And, if you think about it, that's the way in which we actually normally use debt in
life. We say, when I need something, it's more valuable for me to get it earlier. It can often
be a very sensible economic decision. I want to build a factory to build some new product:
I take out a loan and pay it off over time, but because the product sells, you know,
it's a good economic decision; everybody wins.
But people are also using the technical debt metaphor for situations where people have just
created a whole load of crap without really knowing what the hell they're doing. This
happens a fair few times: some client calls us up and says, we're
having trouble with our software. We send in some poor souls, they open up the source
code control system, and they go, oh my God.
These people clearly hadn't known the first thing about software design.
And the question was, in that case where people don't know about software design, is that
actually technical debt? Because it hasn't been taken on in that kind of conscious way. And Uncle Bob's
argument, at least initially, was that it wasn't. And it's certainly a different kind
of thing from that conscious decision I described a moment ago. But I still think the debt metaphor
is handy, because it helps us with that question of, what do we do about it?
Do we clean this code up? Or do we keep adding new features and paying the interest? That
trade-off still exists, which is rather handy. But it also makes us think about what is different
between those two cases. Well, one of the things that's different is the thoughtfulness
that went into it.
If you're making a conscious decision about what you're going to do in order to hit the
deadline, the example I gave is kind of a prudent decision. And, you know, just
getting into tons of debt is a reckless one. If we think about it in financial terms, the
parallels are obvious. I think we all know of people who have gotten themselves into
reckless debt by not paying any attention to what it is that they are borrowing. In fact
we can even, perhaps, think of some governments who have done the same.
Another difference, though, is whether the debt is deliberate or inadvertent. In the
first case, my scenario, the people knew they were taking on debt. They were deliberate
about the fact that they were taking it on. Whereas in the second case they had no idea
what they were doing and had no idea that they were taking on so much debt. So, I think
it's an interesting pair of distinctions between those two examples, but of course something
even more important has just happened.
The dream of every consultant: we have created a quadrant. You're supposed to cheer and clap
at that, you know, it kind of adds to the moment. I'm sure the people in the virtual
world did, but the people here are a bit slow on the uptake, haven't had
enough coffee. Now, quadrants are, of course, a very important part of any consultant's armory.
I'm told that Gartner has a special bonus program for how many quadrants you create a year.
But the interesting thing about a quadrant is it makes us ask the question, what's in
those other two boxes? The first one, which is very interesting, is: what is reckless
and deliberate debt? And the answer is, it's very, very close to prudent and deliberate
debt.
It's both along the lines of saying, we don't have time to do design here, we've got to
go fast, so, therefore, let's skip the design bit. But this decision is often made without
really understanding what's going on. Let's go back to our little curve here. We talked
here about the difference between good design and no design, or poor design.
There's a very important point, which is where those two lines cross, a point
I call the design payoff line. If you sacrifice effort on design in order to go
fast for a time period that takes you beyond that design payoff line, you haven't actually
benefited. You've deliberately taken on technical debt and ended up going slower
anyway.
The decision to trade off debt for speed only makes sense below that design payoff line
and of course, it only makes sense there if it really balances off all the
various other factors that we might think about. But it never makes any sense beyond
that design payoff line. Of course, the question then is, how far away is that design payoff
line? And of course, we can't measure this, but my gut feeling, which is generally echoed
by people I talk to, is that it's somewhere in the order of weeks, not in the order of
months.
It's a lot shorter than most people tend to think it is. And that is an important point
about the distinction between a prudent and reckless debt. You have to think about, am
I at a point where I'm going to tip myself over the design payoff line? In which case
there's no point in sacrificing design for speed, because I'll end up losing both.
One last space in the quadrant, and a really weird one: prudent, inadvertent debt. At this
point, the financial metaphor kind of breaks down. A number of people have suggested financial
parallels for prudent inadvertent debt, but none of them have really worked for me.
What the hell is that? This struck me when I was in London, a year or more ago, and I
was chatting to a solid lead developer at ThoughtWorks. One of the guys that we happily
trust on software projects, very solid. He'd been on a project for a year and I
popped in. I wanted to chat with him, find out how his project had gone, gathering little
tidbits of information, which is a lot of what I do.
And he talked about the project he'd worked on for a year: delivered to the customer,
the customer's really happy, things have gone, you know, generally sounding like they went
pretty well. Everybody was reasonably pleased with the whole thing, but he didn't
seem terribly happy. And I said, what's up here, what's wrong?
And he said, well, the design really wasn't very good. We didn't really deliver that great
code. I said, but Ben, you're one of our best developers. How did this happen? And he said,
well, when we started off, we made decisions that seemed like good decisions, but now looking
back at it from a year on, I realize they weren't the right decisions.
We should have done some things slightly differently.
Anybody else had that impression before? Pretty much everyone, yeah. The point of prudent
inadvertent debt is that even the very best software teams in the world do this all the
time because the nature of software is that you don't really know the best way to design
a piece of software until you've been designing it for about a year.
And then you begin to realize, oh, this is exactly how things should fit together. And
at that point, you realize you've taken on a debt without even noticing it. Not because
you've been stupid, but you just didn't know because we're always learning.
And this is a very important form of debt, and it's actually the debt that Ward was mostly
talking about when he first came up with this notion of technical debt. It means that even
in the best systems with the best people and the best attention, you'll still build up
some debt that you then have to decide how to deal with.
That's a natural consequence of even the best-functioning teams. And it's also, of course, another reason
why you've got to be extra wary about taking on any other forms of debt: because you've
always got a certain amount that's inevitably going to be drawn down anyway.
So I found that quadrant a handy way of thinking about debt. If you want more - generally,
if you want to find out more about anything I'm talking about, go to the top link up there,
the talk notes link on the Bliki. Because I have notes and links and I have several
Bliki articles in which I've talked about these ideas.
And therein ends my second talk.
So it's that time of life now where those of us who got involved in writing the Agile manifesto
are constantly being reminded of our age and gradually growing decrepitude
by being told it's our tenth anniversary. The Agile conference is in a couple of months.
We've all got to turn up and parade around in tee shirts saying, we are the genii who
created the Agile manifesto.
Hopefully it won't say exactly that. But it's a natural thing to reflect a little bit on
where we are in the Agile movement after ten years. And I'm going to do something slightly
different for this segment, because I'm not going to use slides. I haven't really come
up with a good set of slides for this.
So, I will need slides at one point but, until then, we'll just kill the visuals. So the
first thing I want to say about this is to think a little bit about where we actually
were ten years ago. I think this is important because, when talking about things
like where the Agile movement is and what we should mean when we talk about Agile software,
it's easy to forget the history. Maybe it's because I'm a history buff, but I always think
knowing how things got the way they are, is a very important part of understanding why
something is the way it is. History is very useful. It's true in code bases and in companies,
in the way you do things organizationally, and you know things like the Agile software
movement.
So back in 2000, what we saw was a world with a lot of chaotic, badly managed, uncontrolled
software projects. I don't actually know if it was any worse than it is now; I think a bit.
But it was definitely the case. And there was also, I think, a growing sense
that there was a group of people who felt they had the answer to this. And the answer
was big methodology - what we called plan-driven methodology, or what I often refer to
as the engineering approach to software development.
You know, get all those requirements pinned down, make sure you've got them straight.
Only once you're really sure you understand all your requirements do you go on to design,
and the whole waterfall stuff. Lots of documents, lots of process in order to build things.
And that very much seemed the direction in which people were saying we should build
software.
But there were some of us who used different approaches, a very different style of approach,
what we now call Agile thinking. Rapid iterations, lots of collaboration, lots of movement
towards an evolutionary approach to requirements, design, and architecture. Everything that
we now talk about as Agile.
And we had had success with those techniques. And what we felt was that there was a danger
that the industry was going to go running so hard down this heavy-methodology route that we'd
kind of get trampled and not be allowed to do what we knew would work in many situations. I don't
think most of us thought - certainly I didn't feel - that the Agile approach was the right
one to use in all situations.
But what we felt is that it certainly was the right one to use in many situations. And
we wanted to ensure that we could continue doing that.
And these approaches were all kind of different flavors. There was extreme programming, which
was probably the most visible one at that point 10 years ago. There was Scrum, and there
was feature-driven development, a whole bunch of things like that. And the origin of the
get-together at Snowbird - where we got together for the Agile meeting - was actually a year
earlier, when Kent Beck organized a workshop to talk about extreme programming.
It was near his home in Oregon which is, you kind of go into the middle of nowhere and
then you go further on into nowhere for a couple of hours and then you get to where
he is. And at that workshop he brought together a bunch of people who were active in the extreme
programming world, but he also brought together a few people who were kind of hovering on
the outskirts of the extreme programming world.
I think Pragmatic Dave Thomas was there and Jim Highsmith was there. And one of the questions
that we faced was, what really is extreme programming? Kent had described it as a very
particular set of practices governed by a set of principles and values. And the whole thing
worked pretty nicely as a way to start off and to think about building software.
But there were people who liked the values, and to some extent the principles, but didn't
like the particular practices. And those values were very powerful. So the question was, should
extreme programming be an expression primarily of the values? Or should it be something
more concrete? And Kent felt he wanted it to be concrete, because that way it gave a
bit more concrete advice to people about what to do.
But then, of course, that left a question: what was the commonality in values? And that's kind
of what led to the Snowbird thing and why, as part of the Agile manifesto, we focused
so much on values. We tried to say, "This is what we have got in common in terms of
the values of the way we think about software." But the actual concrete practices of how
you do things, they can vary enormously.
It's the values that kind of hold us together.
We didn't actually go into that meeting intending to write a manifesto. We were just kind of
invited to get together and discuss our different approaches, and my hope was just that we'd
get together and learn some ideas from each other. I mean, the various approaches had
stolen ideas left and right from each other before, and I am always happy to steal ideas,
so that was what I was looking for.
As I remember it, it was Uncle Bob who said, "We need a manifesto, a statement of what
we believe in." And I was kind of thinking, "Well, OK. I'll go with it." And as it turned
out, I think the manifesto had a really good beneficial effect. It really helped coalesce
people around that kind of thinking.
It's surprising to me, but it did work really rather well.
But in the end it's worth remembering that the people who turned up and wrote it
just happened to be the people who were free that week and turned up. There were actually
quite a lot of people who were invited who didn't make it. We got a good set.
I think that we were fairly lucky about how that worked. And a very collaborative group
as well, I must say.
As I look at the world ten years on, and I talk to the people who were there in the early
days, both the manifesto authors and the other people who were active in what
is now called the Agile community at that time, I actually get a sense of unhappiness.
People say, well, you know, Agile is really not that interesting, or it's gone sour, or
I'm so over the Agile thing, and Agile doesn't matter anymore, and that kind
of stuff. It's not a feeling of, yes, we've blazed a big new direction for
the industry; there's no feeling of triumph at all. It's actually a kind of feeling of, eh, bleh.
Eh, bleh. You know, that kind of "eh" feeling. That's what I actually detect most. Which
is kind of surprising in a way, considering there are conferences left,
right, and center talking about Agile and all the rest of it. Why are people so "bleh" about
it? Well, I think there are two main
reasons.
The first thing is something that's kind of an inevitable consequence of success. And
it's something that was very obvious in the days of object oriented programming as well.
People got interested in objects, and then other people started talking about them and
passing them on, and the ideas spread out. But the problem is, as the ideas spread out, Chinese
whispers began to set in.
Somebody starts talking about, "Oh, this is what objects are." And you'd look at that
and you'd say, "That's not what I understand them to be." And the same here with Agile:
people are talking about, "Oh, we have a Scrum team and the Scrum master assigns work to
everybody on the team every day at the stand-up meeting." And you go, "No, I don't think that
was what Ken was talking about."
It's a process I - 'cause I love coining these googleable phrases - call semantic
diffusion. Over time, as things get passed on, the semantics, the meaning of what we're
talking about gets diffused. And I think a lot of the feeling of bleh about Agile comes
because we see that semantic diffusion happening.
Now, I see semantic diffusion as an inevitable consequence of success. I mean, the alternative
is we're all very conscious and active in Agile stuff, but there's not very many of
us and we don't get to do very much. The benefit is that we actually get more opportunities.
When we at ThoughtWorks were doing stuff with Agile in the early part of the decade,
we had to be very careful about what we were doing. I remember
one case very vividly, being told of a client who had said, "Well, we really liked the way
you've developed this software and things have been happening really fast and really
low defect, but we've heard rumors that you're doing this Agile stuff. We don't want any
of that around." Now, clients come to us saying, "Oh, we want to transform our gazillion member
IT department, and turn them Agile in six months. You can do that for us, can't you?"
And, yeah, this is pretty ugly as well, but at least we're no longer, sort of,
having to do Agile under the covers.
We can be much more open about it, and that's a good thing. But it is a consequence of success.
And I see semantic diffusion as an inevitable part of success. Anything that's successful
spreads faster than the semantics can follow it. It's kind of running quicker than the semantics
can keep up.
And our job as people who believe we understand the semantics is we've just got to keep plodding
away. We've got to keep reminding people, what is Agile about? What are the core concepts?
No, it's not about Scrum masters assigning tasks. We have to keep saying that and we
have to have a lot of patience for it.
Because this is actually a very difficult time for any movement. You've got the initial
enthusiasm, that's kind of blasted out really fast, and we can't keep up, and it's a slog.
But it's what's needed if it's actually going to have the degree of traction and change
that we want to achieve. And this is where it sort of flows into the second reason,
I think, why people are unhappy: because we are in the early days of a very long-running
change.
I think about the object-oriented revolution, as it were, where objects kind of started
in the late 60s, and kind of coalesced and got put together by the Smalltalkers in the 70s.
We had a good definition of what object orientation really was with Smalltalk-80 in 1980, but
it still took about 20 years or so before I would say objects were mainstream -
the major languages like Java and C++ and C# were object-oriented. So that's a 20-
to 30-year process to get objects into mainstream languages. And still today, now forty years
on, I am constantly told by my colleagues that they go into clients and look at their
object-oriented Java code and they say, "There aren't any objects here, it's just procedures
and naked data structures. You know, a few getters and setters, that's not objects."
So the object revolution still hasn't actually become truly mainstream yet. Forty years. And
Agile - well, for a start, that gives you some sense of how long it takes. But I think Agile
thinking is actually going to take longer, because Agile thinking affects way more people.
It alters the whole relationship and the power structures around software development organizations.
Testers have a completely different role in the software development process than
they had before. Managers have a whole different role. Developers have to do things differently.
It changes everybody.
It's going to take much longer for the Agile movement to have its effect, and we're only
10 years in. The Agile manifesto is the equivalent of Smalltalk-80. You know, we are basically
objects circa 1990, but with a much longer horizon to go. I'm hoping that things will
be well understood in the mainstream by the time I die, but I realize it could take that
long. And, unfortunately, that's one of the things I think that makes people feel depressed.
They wanted the revolution to have finished by now. Sorry, it's going to take decades.
We still get benefit along the way, but it's a long process.
So that's why I think people are feeling a bit "bleh." And, of course, one of the consequences
of people feeling bleh is they say, "Well, why should we care about Agile at all?" And
I do hear this quite often from people saying, "I don't care about Agile anymore. It doesn't
matter to me." Well, I have a bit of a question about pointing the view, and this is where
I do need to go back to the slides.
I couldn't get them to appear. Ah, there we are. Thought I was doing something.
So this is the front page of the manifesto for Agile software development on the web
page, which hopefully all of you have seen at some point. And I hope many of you have signed it. How many people here have signed the manifesto? You know, there's a little page where you can sign it. There have been quite a lot of signatures over the years.
I want to focus on the values. This is, of course, a very striking way of writing it that we somehow came up with, I have no idea how, but it's really effective. The basic
idea is we come up with eight valuable, useful, good things in software development. But,
we arrange them in pairs, so that one valuable useful thing is more valuable than the other
valuable useful thing.
Very important that the things on the right are good, valuable things. It's just that we prefer the ones on the left more. Now, when we did this, we actually had a very important
guiding light for us, which was that we were very conscious of this drive towards the engineering methodologies. Part of the structure of this is that we could imagine flipping all of those values around, and that would be the value system of the engineering methodologies. You know, they care about process and tools more than individuals, because they want individuals to be these plug-compatible people that they can just move around. In an engineering process mindset, that's what you want. Valuing comprehensive documentation over working software sounds a little bit weird to some minds, but I remember hearing it.
I remember people saying, the important thing is to produce the design diagrams. That's where the intellectual work is. Once they've produced the design diagrams, we can just sling them out to a bunch of coders who can just code it up.
Preferably in India, where it's cheap.
And that is definitely the attitude. You know, we have to have contracts decided and sorted
out. We follow that contract, the whole basis of our conversation has to be, is it in the
contract? A lot of organizations work that way. And of course, we absolutely have to
follow the plan because our definition of success is, did it go according to plan?
In an Agile world, "did it go according to plan" is... it's kind of interesting in a kind of weird way, but no one would define success as, yes, we were on time and on budget. I mean, that notion of on time, on budget, that just says things went according to plan. For Agile people, success is the customer being more successful in what they do because of the software we produced.
You know, we made the customers' lives better. That's the Agile definition of success. And
whether it followed the plan or not is kind of irrelevant, really. Which doesn't mean the planning isn't important, but it doesn't become the measure of success. So when
people say they don't care about Agile, what does that mean?
Well, what it means is that they don't care about which way around those values are. They'll
be equally happy, with all the values flipped, or all the values as they are in the manifesto.
And I personally believe that many of the people who tell me, oh I don't care about
Agile anymore, would not in fact be equally happy working in a flipped environment as in an Agile one.
They actually do care a great deal. And they may be a bit sick and tired of the hype of
the semantic diffusion, and of big companies launching their Agile practices, and all the
rest of it. But they do care about the values in which they work. And I think that's something
important to remember. We care about this because of that value system.
At least, that's what I care about, of course. So, the next thing that I wanted to talk about a little bit is where things are going in the future. Now, those who know me know that this is the kind of line that comes up when people who don't know me come up and talk to me. People will come up to me and they'll say, "Martin, what are the
big future trends in software development"?
And I have this line that I always say that, I don't know anything about future trends,
I'm not interested in the future, I'm interested in the past. I'm a patterns guy. I'm rummaging
through the software projects in the past, and finding, well this is a good idea, we
should do this more often. That's my life.
I'm an intellectual dumpster diver, looking for stuff that people kind of discarded that actually is really kind of interesting. So I'm not very good on this future-looking stuff. I leave it to these futurists that get up on stage and tell you what the future is going to look like, hoping fervently, of course, that nobody will look back in ten years at what they told you and find out how wrong they usually were.
But, there are two things that I wouldn't say are futures, but are definitely current
things that I think are really interesting. And they're both things at, sort of, what would be the extreme edges of traditional software life cycle thinking, which of course kind of goes away in the Agile world, but it's still a useful way of thinking about it.
The first of these is user interface, user experience work. And when we actually did
the manifesto, there were various discussions, I remember particularly with Larry Constantine, because he was very into user experience at that time. And he was saying, well, you are not talking enough about user experience stuff, and we said, well, you know, we've got a broad reach as it is, we've got some broad ideas, but that's one of those
things that we expect will develop.
And it's actually developed a bit slowly, for my taste. When I do my travels around ThoughtWorks projects, a common and rather sad thing that I hear is people say, "Oh well, we engaged this design agency and they gave us this beautiful book full of Photoshopped images of what the website should look like and how it should interact and everything, and said, 'Build that, please.'" And we look at it and we go, "Well, you know, they're asking for this? That's quite expensive, and we could do that, which is just as good, but a hell of a lot cheaper."
And they say, "Ah, but the thing says this, that's what the design agency said, ok?" And
furthermore we can't launch a minimum viable product and then build and change and learn
from the experience because got to do what the design agency said. But, that's slowly
changing. More and more, people are beginning to get into the notion of saying, how can
we evolve user experience at the same time as we're building the software?
And a lot of the more serious websites do this kind of stuff, with things like A/B testing and canary releasing and stuff like that, where they'll actually say, "Hey, here's a new feature. Let's put it out to a subset of our audience and see if they like it."
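As a rough illustration of what "put it out to a subset of our audience" can look like, here is a minimal canary-rollout sketch in Java. The CanaryRollout class, the "new-checkout" feature name, and the 5% threshold are all made up for the example; the point is just that users are bucketed deterministically, so the same person always sees the same variant while you compare the two groups.

```java
import java.nio.charset.StandardCharsets;
import java.util.zip.CRC32;

// Deterministically bucket users so that a fixed percentage sees the new
// feature; the same user always lands in the same bucket on every visit.
class CanaryRollout {
    private final String featureName;
    private final int percentEnabled;

    CanaryRollout(String featureName, int percentEnabled) {
        this.featureName = featureName;
        this.percentEnabled = percentEnabled;
    }

    boolean isEnabledFor(String userId) {
        CRC32 crc = new CRC32();
        crc.update((featureName + ":" + userId).getBytes(StandardCharsets.UTF_8));
        return (crc.getValue() % 100) < percentEnabled;
    }
}

// Usage sketch: show the hypothetical "new-checkout" flow to 5% of users and
// compare how they behave against everyone else before rolling it out wider.
//   CanaryRollout checkout = new CanaryRollout("new-checkout", 5);
//   if (checkout.isEnabledFor(userId)) { /* new feature */ } else { /* old feature */ }
```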
We're actually getting to this lovely situation, I particularly love this, where people are
figuring out the requirements of the software by watching what people do with the existing
software, and thinking, hm, that might be an interesting idea.
That's the total antithesis of traditional requirements thinking. Right? You only know
your requirements once you've built the software, and then watch what people do with it. And
I think this notion of how we combine Agile thinking and user experience design, so that it's a constant, ongoing process, is something that we're going to see more and more of. We've done a bit of that. It's still a minority, I would say, of our projects at ThoughtWorks, but it's definitely the way we want to see things done more and
more in the future.
And I think we're seeing a shift in the user experience community. A while ago, it was
definitely the view, it seemed to be, that oh, you have to figure out the whole user
experience before you begin, because you have to.
And now it's much more a sense of, "Oh! Maybe we can change our user experience and evolve
it as we're building the software." That notion that it can be much more combined seems to
be gaining a lot more credence. And I think that's an encouraging change that will increase over the next few years.
The other one is at the other end. One of the big struggles that we've seen is that we
can build software very effectively within the software development team, get it all
integrated, get it all running and tested, and all the rest of it, but then have difficulties
getting the whole thing to production, running, making money, all the rest of it. Some people
refer to this as the last mile of software development. And the problem is it's beset with all sorts of difficulties.
People have not treated it seriously. There are organizational differences between development
teams, testing teams, and operations teams. There's a real lack of knowledge about how to kind of make that software delivery process go smoothly. A lack of tools and automation, too much heroic 2 a.m. fiddling around with server controls and things, or the opposite, which is people given these paper scripts that they have to go through to figure out how to do a delivery. All that kind of stuff going on.
But we're seeing a big shift in that, and the heart of this is a technique we call "Continuous
Delivery". This is where I do my book plug.
I'm not going to plug my own book, "Domain Specific Languages," although it is, of course,
wonderful. And you should all buy a copy and read it, and all the rest of it. I would certainly
love it if you do.
But actually, before you buy my book, buy the "Continuous Delivery" book, which you'll find in the bookstore, by my colleague Jez Humble, who you saw at the tutorial earlier on. I really do think this is a hugely important thing. I mean, we've seen clients that we've been into go from where they would barely get a few bug fixes out every six months, to a situation where they were rolling new features out every two weeks, and instead of spending the whole weekend doing things in the middle of the night, they were doing it Friday at 5, hitting the command to deploy the software to production, and at 5:30 going down to the pub. And that, to me, is the most important
thing, of course, because you don't want anything to stop you from going down to the pub at
5:30 on a Friday night. So those are the two things that I think are the really interesting
next steps.
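To give a flavour of what "hitting the command" to deploy can boil down to once the release process is automated, here is a toy sketch in Java; the stage scripts named in it are hypothetical, not anything from the Continuous Delivery book or the talk. A real deployment pipeline would be considerably richer, but the shape is the same: run each automated stage in order and stop at the first failure.

```java
import java.io.IOException;

// A toy "deploy" command: run each automated stage in order and stop at the
// first failure, so a Friday-afternoon release is one command, not a weekend.
public class Deploy {
    private static final String[][] STAGES = {
        {"./gradlew", "test"},                // hypothetical build + unit tests
        {"./scripts/smoke-test.sh"},          // hypothetical automated acceptance checks
        {"./scripts/push-to-production.sh"}   // hypothetical scripted rollout
    };

    public static void main(String[] args) throws IOException, InterruptedException {
        for (String[] stage : STAGES) {
            Process p = new ProcessBuilder(stage).inheritIO().start();
            if (p.waitFor() != 0) {
                System.err.println("Stage failed: " + String.join(" ", stage));
                System.exit(1);
            }
        }
        System.out.println("Deployed to production.");
    }
}
```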
The one last thing I want to leave you with: In recent years, another thing that's happened is that we've seen a bit of an argument appear over the last year or so between what I might call the software craftsmanship movement and the Agile, particularly the Scrum, communities. It's a bit of a backlash, really, to the fact that one of the things that's
happened with all the semantic diffusion is that a lot of attention is paid to project management
stuff, and not programming stuff.
Now, I think this is part of the working out and fiddling through of software development, and I like the fact that the software craftsmanship community is paying good attention to internal quality of design; I've already told you why I think that stuff's important. But it also reminded me, really, of something that I hadn't fully appreciated before. One of the things I really like about extreme programming, about the way Kent originally described his view of Agile software development, was the fact that it unified the technical practices that you need to get software out with the more human interaction stuff, about how we manage that software development process and how we communicate with the customer of the software.
In fact, at Snowbird, Kent was asked to sort of summarize extreme programming, and say
what it was about at its core, and he didn't talk about test driven development or continuous
integration, or any of that stuff. He said, "I want to cure the division between customers
and builders of software, so they collaborate more closely together."
That for him is the essence of what development improvement is about. And it reminds us, as
anyone involved in the software development process, that our job is to provide stuff
that is valuable and useful and makes the customers and users more successful with their
software. And we should always concentrate on that, and always keep that at the front
of our minds.
It's good to get better at TDD, it's good to do NoSQL databases, it's good to learn how to do requirements in a nice Agile way, and all the rest of it.
But at the heart, always guiding us, is how do we make the customers and users of our software more effective, and how does our software work towards that purpose.
And in thinking about that, I decided to take another step back. It's very easy, as software
developers, to get very focused on, you know, doing what we do, and making sure we do our work better, but we also have a broader responsibility, and a growing responsibility, to say: is what we're doing not just making life better for our customers and our
users, but is it making the world a better place?
One of the things that I see a lot in software development teams is they kind of say, "Well,
you know, we are order takers. You tell us what to build, we'll build it." But I think
we have to show a greater responsibility in what we do and say, are we really building
something that is better? One of the frustrations of many people including myself about the
rise of the financial industry in the last twenty or so years, is that a hell of a lot of very bright brains are exerting all of that intellectual firepower on how to screw over
more people when it comes to money.
And how do I make a bit more money than that guy over there? And how do I, you know, play
in that casino? Very appropriate considering our setting. And that's a terrible waste to me. It's a waste of intellectual energy that should be saying, we can do things better. And this
doesn't necessarily mean you sort of go off to Africa and build houses or something.
In fact I'll argue very much it doesn't mean that, because that's not really using your
intellectual capabilities, but I think it does mean that all of us must stop every so
often and think about, you know, I'm using my skills for my employer or for whoever is
paying me, and that's good, but, is that contributing to making the world a better place?
Sometimes those things can be very simple. Sometimes they can be a bit more broad.
But I think it's something we have to think about a lot more as software developers. Our
industry is growing increasingly influential. The world is becoming much more connected
through websites. We're seeing software everywhere we look.
And we have to start stepping up, and saying we want to take responsibility for what that
software does and what impact that software has on society.
We have to think about how we exert that responsibility, and what we do with it.
I don't have the answers, and I don't want to make any suggestions, even with my carping
about the financial industry.
But I do think it's something that you should all individually ask yourselves. When
you're traveling home, on your flight home, think about, you know, how does what you're
doing have an effect?
How can you make it have a better effect?
And on that mushy, mushy note, I'll finish the talk.
Thank you.