GTAC 2010: Twist, A Next Generation Functional Testing Tool for Building and Evolving Test Suites


Uploaded by GoogleTechTalks on 27.12.2010

Transcript:
>>
PRAHLAD: So I'm going to be talking about building and evolving test suites with Twist.
This is the broad agenda. So we're going to be talking a little bit of what the current
automation landscape looks like. We'll talk a little about what it takes to do test
evolution and maintenance. I hope the bulk of my talk will really be
about a Twist demo. I'll be showing you a few examples of test suites that have been
built using Twist. And finally, I will talk a little bit about Twist's roadmap and where
we're going in the future. So where are we at the moment? At this point, I think it's
fair to say that for most of the technologies that we use to build our applications, so
when we're talking about web applications, .Net applications, Mac applications, we have
testing drivers that we can use to test them, right? So at ThoughtWorks we've had something
of a fairly rich tradition of building testing tools. So just to introduce you to some people
who have actually contributed, this is a partial rogues gallery of tool authors from ThoughtWorks.
See if you can recognize them. So, that's Jason Huggins on the left, author of Selenium. He
wrote Selenium when we were building our internal timesheet application. Everybody at ThoughtWorks
uses one operating system or another, so we had to have a tool that works
across browsers and across operating systems. And we couldn't really say, use IE6 or Firefox
or whatever it is, okay, so that's why Selenium was born. That's Ketan Padegaonkar, author
of a tool called SWTBot. He actually was with the Twist team for a fair amount of time and
is currently working out of our San Francisco office now. We wanted to test Twist using
Twist, so that sounds a bit twisted but we need to have a way in which we could eat our
own dog food, so to speak. And we found, as usual, that the current crop of testing tools
at that point for testing Eclipse applications was fairly poor, to say the least, so we ended
up having to build one. That's Paul Hammant, author of Selenium RC. The guy with the white
hair is Narayan. He thought he could do a better job than Selenium and he wrote a tool
called Sahi, a pretty interesting little web testing tool. Part of my demo is going to
use a Sahi test suite and you'll see examples of what Sahi tests look like as well. That's
my namesake, Vivek Singh, in the bottom right corner, author of two testing tools for .Net,
SharpRobo and White, both of which can be used to test .Net applications. Simon, who's
sitting there right at the back, author of WebDriver. I really don't know who that is.
Oh, that's me, author of Frankenstein. Okay, so testing drivers I would say at this point
mostly solve the automation problem, right? So, we can use tools like Selenium to automate
our web interface, for instance. We can interact with web pages. Yes, when new technologies
come in, like when Flex came in, it took a little bit of time for tools to catch up and
give us the level of support that we needed in order to test Flex applications,
for instance. But invariably, the community does tend to close that gap quite quickly
and we do tend to have tools now that can pretty much test most applications. Yes, there
are issues sometimes with specific things like file uploads and so on and so forth,
but those issues eventually do end up getting solved, right? And we can then start building
abstractions. So for instance, take the first block that we see right on top. We can
see that what we're trying to do is log in to an application. That's the username. That's
the password. And finally, we are clicking on the [INDISTINCT] part, maybe. So we can
now pull that out into a method or a function that we now call login, with the username
and a password. So if we take this one level further, we can build even more abstractions,
right? So some of you may know about what we call the page object pattern. Where what
we try and do is basically encapsulate the business level actions that we're trying to
do on a page and put that into a single place. So what that gives us is a nice high level
interface with which we can then interact with the application. So what's next? So we see
that the driver problem is partially solved, right, and now we have lots and lots of tests.
So what problems do those pose? So now we are in a situation where we deal with
multiple test suites--tests, right? So is it valid for us now to think of test suites
as a first class entity? So, just a question for the audience, how many of us have thrown
away our test suites and redone them from scratch? One, okay, that's--okay, that's much more
than I thought. Well, that's almost half of the crowd. Yes. So why does it happen? Have
you ever stopped to think, would we do that to production code? Yes? More often? Yes.
Okay. So but we tend to do it--for some reason we tend to do it a lot more with test suites.
And that's been my personal experience, right? So if you look at tests, functional tests, essentially
what we try and do is, we tend to have a defined series of steps that we go through. So you'd
say login to the system, check whether your account balance is a certain amount, transfer
money to another place, check whether balance is reduced. So it is usually a well-defined
sequence of steps and usually our tests are linear. So I think it's only the early days
of writing our test suite, we don't--anybody who's used recursion, for instance, in their
test suites or their tests? No? Okay, somebody at the back, that's Simon. Are you talking
of WebDriver or your tests? Okay. Okay, going on. Some of us like putting together tests with
branching. So let's say that this is our test that says, "What insurance premium should
I pay if I'm a user of a certain type?" So if we say that this is for professionals between
the ages of 18 and 60, maybe, that's the sequence of steps that we have to go through to
do our premium calculation; that's what happens if you are less than 18, and that's what
happens if we have to deal with someone who's more than 60, right? So
you could have branching in your test. But what some of us prefer to do is to say that
those three things are really three different scenarios, and that actually simplifies the
way that we look at tests quite significantly, right? So again, when we look at tests we
may see patterns that repeat themselves. So we may have the same series of steps that
we do, possibly with different data right down there. So what do we do then? We can
refactor our test, pull things out into something common, and just vary the parameters that
we pass into it. So this is what we tend to do a lot when we build test suites. Of
course, there are other things that we can do to clean them up. For instance, we can
take care of logging, database abstractions, exception handling, and all of these kinds
of things. But what I'm really getting at is that the way we write our tests tends
to be linear and fairly well-defined, and there is a fairly small set of refactorings that
we can use when we are talking about building and evolving test suites, okay? And, of course,
that's the equivalent of a function with parameters, which can take in parameters and do
whatever it is we'd like it to do. So once you get to a reasonably-sized test suite,
test maintenance and evolution becomes a fairly significant problem, right? Oops--yeah, especially
when the test suites tend to grow large and getting the abstractions right is crucial,
so again, coming back to the page object pattern. So let's say that we are a banking website.
An example of what you could put into a page object for a bank account could be, check
the balance, transfer your funds, say request for a loan, we could ask for an annual statement.
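That bank-account page object might look roughly like this--a sketch in Python with in-memory state standing in for the real UI (the class and method names are assumptions for illustration, not from any real suite):

```python
class BankAccountPage:
    """Page object exposing business-level actions and hiding UI details.
    Balances are kept in memory here purely for illustration."""
    def __init__(self, balance=0):
        self._balance = balance

    def check_balance(self):
        return self._balance

    def transfer_funds(self, other, amount):
        self._balance -= amount
        other._balance += amount

def transfer_workflow(source, target, amount):
    """A 'workflow' combining page-level actions: check, transfer, re-check."""
    before = source.check_balance()
    source.transfer_funds(target, amount)
    # Did the transfer actually go through?
    return before - source.check_balance() == amount

savings = BankAccountPage(100)
checking = BankAccountPage(0)
ok = transfer_workflow(savings, checking, 40)
```

Because the workflow only speaks in business actions, it survives a UI rewrite unchanged.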
So these are--this is what I meant by high level business actions. Now, even if the user
interface of the application changes, or if the bank comes out with, say, an Android application
that allows you to access your bank account, the intent of the page object abstraction is
still valid. You would still want to check your balance, you'd still want to transfer
funds and so on. So, if you combine multiple pages, you could create a new abstraction
that you could call a workflow. So you could check your balance, transfer some money, check
your balance again, and that's where you could figure out whether, say your transfer went
through successfully or not. So, another problem that we tend to face is collaboration across
roles. So, what usually ends up happening is that when we put together our test suites,
those test suites are really understood only by technical folks. So, if you're working
in an Agile context for instance and we have say, acceptance criteria that go with the
feature that basically tell you when you're done, there's usually no link, no direct link
between the acceptance criteria and your functional test. So for instance, if your features are
described in a Wiki page or a Word document or something like that, someone on the team
has to go through the effort of translating that into an executable test. I'm not saying
that nothing works in this space--some things do, right? So we have a representation problem.
Typically, our tests are not in a form where non-technical users can understand them and
participate in them. So once they end up getting translated, it's very difficult
for someone to look at the result and make sense of it. So, good luck trying to ask your domain expert
or your business analyst to understand what's going on here. Once it gets to this form,
it's very hard for them to really participate in the whole testing process, right? So, how
do we evolve and maintain our test? I'd say that refactoring is part of the answer. So,
what is refactoring? Just a quick recap. We're talking about improving design without changing
behavior. In other words, in the context of testing, we're really saying that when we
refactor our test we design them in a better way. They still do exactly what they were
doing before, it's just that they're structured in a cleaner way, which helps us
maintain them better. So the intent of the code that we refactor does not change when
we do a refactoring, that's a key characteristic, right? So, what implications does all of this
have for functional testing? So, when a functional test changes, the underlying domain concepts
do not. So like I said earlier, coming back to my banking example, let's say that the
way that we transfer funds from one account to the other changes. So previously we had
to go to a different page, enter the target account number, click on okay, maybe wait
for a confirmation, and then work with that. We could have a change which makes the user
interface a lot more streamlined, makes it [INDISTINCT] just allows you to do everything
with a single link in a single page, right? So, even though the user interface has changed,
the act of trying to do a transfer has not. So we're still talking about transferring
funds, it's just that we're now doing it in a different way by interacting with the application
in a different way, right? So again, the way in which we transfer funds could now be different
but we are still transferring funds, and that's really critical. So, when the domain concepts
remain the same, the intent of a test also remains the same, right? I'm sure most of
you know what DRY means. It's a--it means, Don't Repeat Yourself. In the context of testing,
it really means that it helps when we have one representation of our testing activity
in the entire test suite. So for instance, what I mean by this is one place where we
define how we log in to the system, one place where we define how we transfer funds,
one place where we define how we check the balance, and so on. The reason why this is
important is because it's really in our interest to minimize the amount of work that we do
when our application actually changes. And this is really about the idea that fixing a test
should require the minimum number of changes. So, how do we represent tests effectively, right?
One way of doing this is by using something called a domain-specific language.
So, what is a domain-specific language and do we need to be Shakespeare to write one?
Not quite. So what is a DSL? A DSL is essentially a mini-language that's designed to express
the requirements of a particular domain, right? It helps to involve the whole team because
when you're talking in terms of the domain, it's something that everyone on the team can
hopefully understand. If--how many of you have read Domain Driven Design by Eric Evans?
Just a couple of you? It's an awesome book. What it really talks about is that when you're
building software, it really helps to have a rich domain model and to use the terminology
and the language that we use in the domain in the software itself. So when we talk about
accounts, accounting transactions, balances and so on, it helps to have the same concepts
being defined in your code base as well, right? It just makes sure that there's no translation
loss when developers talk about accounts and your business analysts talk
about accounts and so on, right? So it helps solve the representation problem. So, a sneak
peek at what we'll see a lot more of later in this talk. This is an example of a Twist test
for Go, which is a continuous integration product. So, you'll notice that the language
that we use here is highly specific to what Go does. So, Go supports something called
a build pipeline; you can think of it as a staged kind of build. So you could say that
if you're building software, you'd first compile it, then run your unit tests, then
run your functional tests, and if necessary, your performance tests. So a pipeline really
refers to a whole series of things that you do in your build. So this particular test
is trying to see whether a pipeline is triggered if a particular user is added to an approval
list. In other words, it's trying to see whether authorization works fine or not in the context
of Go. So, the thing to notice here is that the language is extremely compact.
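The Go test itself isn't reproduced in this transcript, but the shape of such a domain-level DSL can be sketched like this--an in-memory model with made-up names (`Pipeline`, `approve_user`), not Go's actual API:

```python
# A mini domain DSL for a build-pipeline world, modelled entirely in memory.
class Pipeline:
    def __init__(self, name):
        self.name = name
        self.approved_users = set()
        self.triggered_by = []

    def approve_user(self, user):
        self.approved_users.add(user)

    def trigger(self, user):
        # Only users on the approval list may trigger the pipeline.
        if user in self.approved_users:
            self.triggered_by.append(user)
            return True
        return False

# The test reads at the level of the domain, not the web page:
pipeline = Pipeline("build")
pipeline.approve_user("jez")
allowed = pipeline.trigger("jez")
denied = pipeline.trigger("mallory")
```

Nothing here mentions links, buttons, or pages, which is exactly why such a test stays valid as long as the domain concept of a pipeline survives.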
Once you understand what a pipeline is, it's fairly clear what the test is trying
to do. We're not talking about how we interact with the web page or what locators
we're using. It's technology agnostic, right? And if Go continues to have the notion of
a pipeline, this test would still be valid far into the future, right? So, where are
we again? If you look at the current crop of testing tools, we actually lack powerful
IDEs. Yeah, I know some of you will say, "What about Eclipse and IntelliJ and all of that?"
But do we have IDEs that can be actually used by a team, right? It's not just about us as
developers and testers; it's also about involving everyone. Evolving test suites is still seen
as a black art, right? So, there's this really interesting paper written by Jennitta Andrea,
who's fairly well-known in the Agile testing community. It's something like serendipity
because the time that she wrote this paper was around the time that we were starting to
build Twist. And funnily enough, she happened to be talking about the same set of ideas.
So, the emphasis is mine, "We desperately need functional testing development environments
that support the same type of capabilities that IDEs do," right? Why is refactoring important?
Powerful refactorings are even more important in the context of a testing IDE because now you
don't have a safety net; the tests are the safety net. So how can we make sure that we
change them without breaking the way that they work? So, refactoring becomes even more important.
So those of you who have attempted refactoring in the past would know that you can't really
refactor unless you have tests, right? Because how would you know whether you broke something?
So do you refactor your functional tests in the first place? What tests the tests? We are
not yet at that level, right? So with that, I'd like to talk about Twist. It's a testing
IDE built on the Eclipse platform. It allows you to test by building testing DSLs. The language
that you use in the Twist test suite would really vary from test suite to test suite.
In other words, the--if you're testing a banking application, you could expect to talk about
accounts, transactions, interests, transfers, all the kind of concepts that you'd have when
you're talking about banking. If you're testing an email application, you'll be talking about
forwarding emails, adding attachments, and so on and so forth, right? So, you'll see
examples of this in a short while. What it really allows you to do is to link what are
called your acceptance tests, your high-level domain tests or test intent, to the underlying
implementation. You'll see why this is important as part of the demo as well. There are several
refactorings that are relevant to testing and I'll be showing you examples of them in
a short while. So it's built out of [INDISTINCT] by a fairly small team. We've been building it
for close to three years now. And it's something that we are very proud of. So with that, let
me get into a demo. Hopefully, this will work. Okay, so what we have here is something that
we refer to as a scenario editor. You'll notice that it pretty much reads like plain English
so nothing really technical going on here. Is that visible at the back, by the way? Yes.
Okay, so let me zoom in a little bit. Yes, better? Good. Okay, so this particular test
suite that I'm showing you attempts to test Mingle. The reason why I've chosen it is because
it's a real-world application. Don't worry too much about the application itself. What
I'd really like you to look at is what the test suite is about, how it's written, and
how it's actually put together. So Mingle is a project management tool. And like all
project management tools, it has a notion of a project. It's--it can be used by Agile
teams, so its basic unit of work is something called a card, right? And cards can have custom
fields that are called properties, right? You'll see examples of how all of that works
in a little while. But the main thing is, if I was starting off Mingle development and
I said, "What is the very first test that I would like to write?" Maybe I would start
by saying, "Let me try and create a project and see whether that actually works out or
not." Okay, so whatever is highlighted in green is really the test itself, the executable
part of it, at least. So let me start off by saying, let's create a new project with
the description test project. So let me change that a bit. Oh, we're going to check that
the project, once it's created, is listed. And then once that happens, we're going to
clean up after ourselves, delete the project. So this particular test will clean up after
itself, leave the system in exactly the same state as it was before it ran. That's something
that's very useful, because then you can make sure that your tests are somewhat independent
of each other. Okay, so let me run this test and we'll see what happens next. That's if
I can find the execute button. One second. This test is going to launch a browser. It
is going to try and get it over to the right. Oops, it's flying past right before my eyes.
It's tried to create a new project and oh, that's it. Okay. Let me give it one more shot.
There we go. Logging into the system, I'm trying to create a new project, created a
new project, showed that in the list, and that is it. So what this test really tried to do
was log in to Mingle, create a new project with the description, check that the project
exists, and then delete it, right? So what exactly is going on under the covers? So let
me navigate down to the underlying code that actually makes this test work. Okay, this
particular test happens to use a Sahi driver. Twist supports Selenium as well as Sahi and
a bunch of other drivers, but I'll just show how this one works. So, we click on a link called
"new project," we enter the project description, basically enter the project name, and finally
create the project. And that's how it all worked. The way that it verified whether the
project exists is to see whether it shows up as a link in the list of all
the projects. So what's really going on here? The IDE is actually trying to link what we
call, a test scenario with this underlying implementation. It's interactive and I'll
show you how that works in a moment. So if I now go to this test, and then say--I just
press control space, I get a list of all the steps that are available to me across the
entire test suite, right? So this is not just stuff that is available in the context of
this test, these are all the actions that have been automated across the entire test
suite. Now obviously that's a fairly large list, but if I know the names of the entities
that I can deal with, then that list gets narrowed down fairly quickly. So if I
say, for instance, "What can I do with projects?" You can see that I can do just a couple of
things. I can create a new project with a description. I can delete the project. I can
check if the project exists. What about something else? So I said that Mingle allows you to
work with cards. So if I type the word, "card," you can see that I can create cards, add comments
to cards, add properties to cards, and so on. So what this really means is that if you're
new to this test suite you can actually go in there and very quickly get a feel for what's
out there, what's already automated, and what is not, right? So, let me now try and change
this test. So what I'm going to do is that I'm going to add a new property. It's like
a custom field, okay? So on ThoughtWorks projects, some of them track this thing called volatility.
I'm not quite sure what it means. I think it has something to do with things exploding.
Yeah, that's it. Okay, so you can see that the editor pretty much gives me immediate
feedback saying that that's highlighted in a slightly different way. It's not yet implemented
so what can I do to change that? I now have two options. I can either hand-write that
line of code or I can record it, right? I'll just show you how the Twist recorder works.
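As background, the way a plain-English step gets bound to code can be sketched roughly like this--a toy step registry in Python with made-up names, not Twist's actual mechanism:

```python
import re

# Maps a sentence pattern to the function that implements it.
STEPS = {}
created_projects = []

def step(pattern):
    """Register a function as the implementation of a scenario sentence."""
    def register(fn):
        STEPS[pattern] = fn
        return fn
    return register

@step(r'Create a new project with description "(.+)"')
def create_project(description):
    # A real implementation would drive a browser; we just record the call.
    created_projects.append(description)

def run(sentence):
    """Find the matching step and call it with the captured parameters."""
    for pattern, fn in STEPS.items():
        match = re.fullmatch(pattern, sentence)
        if match:
            return fn(*match.groups())
    raise LookupError("no step implementation for: " + sentence)

run('Create a new project with description "test project"')
```

The capture group is what lets the data in the sentence ("test project") flow into the implementation as a parameter, which is the same idea the recorder exploits when it substitutes values for you.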
By the way, from what I've seen, folks who use Twist tend to use the recorder in the
early stages; it helps them bootstrap their project fairly quickly, but once they start
learning the underlying driver APIs, they usually go ahead and write this by hand. Yes. So what
I'm going to show you is a recorder. Let's get everything in order and finally, we'll
get to see Mingle in action. Okay. It actually plays back the scenario up to
the step that you actually want to record. So it's not as if you're recording the entire
thing here; you're actually just recording one small snippet. So if you remember, the
step that I've added is, "How do I create a new custom property called volatility?"
So what this has done is that it's executed the test for me until that particular step
and now it's waiting for me to interact with that. So that's exactly what I'm going to
do. So, I'm going to go to the project that I just created, and go to project
admin--and again, just struggling with the display a bit--you can see that I have a "done"
button right there and that's pretty much it. So now, I actually have--just a second--I
actually have a new implementation which allows me to add a new property called volatility.
So what does it look like, what is the underlying implementation? So let me just navigate down
to that. You can see that it's gone ahead and recorded some stuff for me. Just
a second--so it pretty much did exactly what I was doing at that point, which is,
it went to the project admin link, I clicked on a link called card properties, I created
a new card property--it's not immediately visible, but you'll see that it's figured out that
the value that I had in my test, the data that I had in my test, the property called
volatility, has actually been substituted out. So the recorder actually infers parameters
and how they work and will automatically substitute them for you. So, I've added a new property
and then finally I clicked on the link called "create property label." Yup, so that's pretty
much what I did. And yes, there's nothing to stop you from going at it and typing this in once
you're familiar with the API as well. So, one other thing that I can do is that I can actually
go back and re-record the step in case something changes. But I'm not going to be going ahead
and showing you that, I'd like to show you some other stuff that's related. So, one of
the things that you'll notice in the editor is--though, I know that the test actually
flew past, but one of the things that happened was we actually logged in to the system. You'll
notice that's not actually mentioned in the tests. Where did we actually do that? So,
this supports something that's called a context. The idea behind this is that there are often
setup-like activities that you need to do in order to run your tests--things that are required
for your tests to work but are peripheral to the tests themselves. So, the idea is to be able
to do that as part of a test, but in a slightly different way. So, what you can then do is have
that as a context. You can have any number of contexts. The way that it all works is, if you
have a sequence of contexts defined on top--you can log in first, you can maybe have a
test project to work with, and so on--all of those actually execute in sequence and will
tear down after themselves as well. Okay, another thing that you'll notice is that this supports
tagging. So let me just zoom out. Right on top, you'll see that there's a whole bunch
of tags. Tags can be used in various ways. So, you can use them to define all the different
domain level concepts that are involved. So in this case we are dealing with projects
so it's tagged with projects. We can use them for tracking. So, in this case one of the tags
that we have is Iteration 3; maybe this thing was developed during Iteration 3. This is part
of a regression test suite and that's why it's tagged with regression. So, what do I
do with tags, right? So, one other thing that's possible is that you can actually use tags
to filter your test suites. So, if I'd like to now figure out what are all the smoke tests
that I have in this particular test suite, I can now get a list of three scenarios that
have that smoke tag. And what this also allows you to do is to run these tests based on the
tag from your CI, which then means that you no longer need to maintain, say, two different
sets of tests: one for your smoke tests, another one for your regression tests, and so on. So,
you can have one test suite that can pretty much do all of this for you. Another use for
tags is to build your test suite incrementally. So I'm sure most of us would have been in
situations where the functionality may have been developed but the test is not yet ready.
You don't want it to run as part of CI, so how do you actually approach that? So, on the
Twist team, one of the things that we do is to use a tag that we call "in progress." What it
allows us to do is to mark tests that are not yet ready and then, as part of our CI, say, "Run
only those scenarios that are not tagged with the tag 'in progress.'" So that's something
that we use internally as well. Okay, so now let's get on to something a little bit more
interesting, okay? So, I'm going to show you examples of how Twist actually supports refactoring.
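Before moving on, the tag-based selection just described can be sketched like this (illustrative Python; the scenario names and tags are made up for the example):

```python
# Scenarios carry tags; CI runs a filtered subset instead of separate suites.
scenarios = [
    {"name": "create project",  "tags": {"projects", "smoke", "regression"}},
    {"name": "card properties", "tags": {"cards", "regression"}},
    {"name": "new card wall",   "tags": {"cards", "in progress"}},
]

def select(scenarios, with_tag=None, without_tag=None):
    """Pick scenarios that carry `with_tag` and lack `without_tag`."""
    result = []
    for s in scenarios:
        if with_tag and with_tag not in s["tags"]:
            continue
        if without_tag and without_tag in s["tags"]:
            continue
        result.append(s)
    return result

smoke = select(scenarios, with_tag="smoke")          # the smoke subset
ci_run = select(scenarios, without_tag="in progress")  # skip unfinished tests
```

One suite, two views of it: the smoke run and the CI run are just different filters over the same scenarios.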
Okay, so what I have here is a slightly more complex test. Okay, so what I'm doing here
is that I'm creating a whole bunch of cards. So, I'm creating a card called first card.
It's going to have a property called "status," it's going to be in QA. I'm going to create
another card with a priority of "high," and then I'm going to try and see whether
when I filter them, will they show up in a list or not? So, you'll notice that I'm kind
of doing the exact same thing in these two steps. It's just that the data is different.
So, I'm creating a new card and then editing a property. Here, again, creating a new card
and editing another property in this case. So, obviously this is a sequence of repetitive
steps. So what can I now do to pull them out, perhaps raise them to a higher level? So what
I can now do is actually invoke a refactoring that Twist supports, called "extract concept."
I'll name the extracted step "create first card with description as in QA," okay? It's a very
long sentence, but I think it describes what we're doing here. Let me click on okay and we'll
see what happens. You'll notice that it looks for that sequence of steps across this test
and also the entire test suite, right? So, for those of you who know the extract method
refactoring, this is the equivalent of saying, do an extract method across my full code base.
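The "extract concept" refactoring is essentially extract-method applied to scenario steps; a before/after sketch in plain Python (the step helpers are illustrative, not Twist's API):

```python
# Before: two scenarios repeat the same pair of steps with different data.
# After "extract concept", the sequence lives in one place.
log = []

def create_card(name):
    log.append(f"create card {name}")

def set_property(name, prop, value):
    log.append(f"set {prop}={value} on {name}")

def create_card_with_property(name, prop, value):
    """The extracted concept: one sentence replacing a repeated pair of steps."""
    create_card(name)
    set_property(name, prop, value)

# Both scenarios now reuse the single abstraction:
create_card_with_property("first card", "status", "QA")
create_card_with_property("second card", "priority", "high")
```

The suite-wide part is what makes the refactoring powerful: every scenario containing the repeated pair is rewritten to call the one extracted concept.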
You know, it's something that's actually pretty useful because as soon as you have sequences
of steps that repeat themselves all over the place then you can actually use this to pretty
much pull out an abstraction, replace it with a single sentence which can then be used everywhere,
right? So, that's something that's pretty powerful. Let me show you another example
of a similar kind of refactoring, okay? It's in a slightly different test suite. So, what
happens if we get the level of abstraction wrong, right? So, let's say that I have a
simple login test and I say, "enter username, enter password, submit form." What's wrong
with this test? Anything wrong with this test? Anybody? Sorry? Okay. One of the problems
with this test is that it's very coupled to the user interface. So, it's talking about
interacting with the webpage, it says "enter the username, enter the password." In some
cases you would see tests that actually have XPaths in the test itself, right? So maybe
this particular test is at the wrong level of abstraction, and it may be better to put
the low-level interaction with the web page into the underlying code itself. So, maybe what I'd
like to do is push this into the code. So that's another example of a refactoring that
we can use. So, again, say, select these three steps, push to implementation--let me zoom
out a bit. What I'm actually doing is a login. That's my username. You can see there it substitutes
parameters again. And password. Yeah, so when I actually click on okay, I now have pushed
the underlying implementation into the code. So, now if I navigate into it, it's exactly
what I was doing earlier, right? So, I'm entering the username, the password and submitting
the form. But now I've stopped polluting my test scenario by talking about how to interact
with the application that we are testing, and pushed it into the underlying code. So,
what I'm really getting at is that these refactorings help us modify, improve the design of our
tests in case we get it wrong. And this invariably happens. So, one of the hardest things to
do is to be able to write tests at the right level of abstraction and to know that it's
fine. So, in our experience, a good Twist test typically tends to be really concise
and really short. It helps if it's under 10 steps. It's not always possible, of course,
but it really helps when it's boiled down to its absolute essence. And in some cases, in
case we get the abstraction wrong it's always useful to be able to pull that down to a different
level, yeah. So those are a couple of examples of refactorings. I'll just show you one more
and then we move on to something else. So, you'll notice that the language that we use
for creating cards is a bit strange, right? So, we're talking about creating a detail
card. What does that mean? So, maybe I'd want to define that and make it read a little bit
better. So instead of saying, "create detail card" what I'll say instead is, "create a
card with name." And again, this refactoring applies not just to the test itself but every
single place where it's used. So, if I now look at usages you'll see that--I apologize,
struggling with the display a little bit. I'll just try and see what I can do. You'll
see that this particular step is used in a whole bunch of scenarios. And I'm just going
to pick up one at random and you can see that one of the things that it does is it actually
respects your test data and tries to ensure that whatever test data that we have is still
preserved. It's just that it literally rephrases the sentence and reorders everything so that
it still makes sense. Now, one final feature that I like to talk about is how we can then
do data-driven testing, which is if I have the same sequence of steps that I want to do
and I just want to vary the data that we're going to deal with in the context of the test,
how do we actually do that? So, we can essentially do that by saying let's create a data table
for the scenario. Twist actually provides a little bit of assistance. What it can do is
that it can merge duplicates for you and kind of figure out whether you're talking about
the same thing or not. So I'm going to ask it to go ahead and do that. Zoom out again.
I now have a little table on my right, which basically describes the parameters that I'm
working with. So, that's the card name that I have right there. So, what I can now do--oops,
that looks like I did--okay, let me just give that one more shot. This time, I'm just going
to make it a little bit faster. I'm not going to bother typing the parameters.
That looks like the one. Yeah, so there we go. So, essentially what we have now is a
little table that has all the data that we need. Let me add a new row to this. By the
way, this is--if you have been following what I've been trying to do, this is equivalent
to saying, "add a new scenario," right? So, let's say that I just have card two. I have
card two's description and other comments. It doesn't matter. So, one of the things that
I'd like to point out now is that if I select a particular row, you'll see that what Twist
actually does is that it substitutes the data for you so you can now look at it. Once I
have a new set of data that's in there, what does my test now look like? So, when the test
itself runs, the report that you get allows you to do exactly the same thing. You can
basically click on a row and look at what the underlying test really looks like. Yeah,
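As a rough sketch of what the pushed-down step code and the data-table substitution amount to--the class, method, and column names here are hypothetical stand-ins (not Twist's actual API), and a real step implementation would drive the browser rather than build a log:

```java
import java.util.List;
import java.util.Map;

// Hypothetical stand-in for a Twist step class: the scenario sentence
// "Create a card with name <cardName>" would map onto a method like this.
public class CardSteps {
    private final StringBuilder log = new StringBuilder();

    // Low-level interaction pushed down from the scenario into code.
    // In a real suite this would drive the browser (e.g. via Selenium);
    // here we just record what would happen.
    public void createCardWithName(String cardName, String description) {
        log.append("created card '").append(cardName)
           .append("' with description '").append(description).append("'\n");
    }

    public String log() { return log.toString(); }

    // Data-driven execution: each row of the scenario's data table is
    // substituted into the same sequence of steps, which is equivalent
    // to adding a new scenario per row.
    public static String runWithTable(List<Map<String, String>> table) {
        CardSteps steps = new CardSteps();
        for (Map<String, String> row : table) {
            steps.createCardWithName(row.get("cardName"), row.get("description"));
        }
        return steps.log();
    }

    public static void main(String[] args) {
        System.out.print(runWithTable(List.of(
            Map.of("cardName", "card one", "description", "first card"),
            Map.of("cardName", "card two", "description", "other comments"))));
    }
}
```

Adding a row to the table is then exactly the "add a new scenario" move described above: the steps stay the same, only the data varies.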
okay. So, what we have looked at is really a kind of feature tour of what this
allows you to do. So, we looked at a few examples of what these tests look like, what kind of
style we can use to write them. We looked at what the underlying driver code looks like
and how that can work. We talked a little about how the auto-complete works, how some
of those refactoring features actually help you build and evolve your test suite and
so on, yeah? One of the things that I do notice is that there's this little "execute manually"
button in there. So, one of the cool things that Twist allows you to do is to use what
we call hybrid execution. So, this allows you to run your manual tests as well as what
we call hybrid tests. Some of you may know this as a sort of exploratory testing. What
it allows you to do is leverage the automation that you already have in order to do exploratory
testing, right? So for instance, in the test that I just had--let me just pick
another one, maybe the comment one, right? I have something that I'd like to check. So,
I'm just going to add a new step that says "explore," and you'll notice that it's not yet
implemented but that's fine. How this works is kind of very similar to what we have previously.
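The hybrid execution being described--automated steps interleaved with a step that pauses and waits for a tester--might be sketched roughly like this; all the names here are hypothetical illustrations, not Twist's API, and the `Supplier` stands in for a real prompt to a human:

```java
import java.util.function.Supplier;

// Hypothetical sketch of hybrid execution: automated steps run as code,
// while an unimplemented "explore" step defers to a human for its result.
public class HybridRunner {
    interface Step { boolean run(); }

    // An automated step: just executes and is considered passed.
    static Step automated(Runnable action) {
        return () -> { action.run(); return true; };
    }

    // A manual step: announces itself, then waits for the tester's verdict
    // (injected here as a Supplier so the sketch stays self-contained).
    static Step manual(String prompt, Supplier<Boolean> testerVerdict) {
        return () -> {
            System.out.println("Waiting for tester: " + prompt);
            return testerVerdict.get();
        };
    }

    // Run steps in order; stop at the first failure.
    static boolean runAll(Step... steps) {
        for (Step s : steps) {
            if (!s.run()) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        boolean passed = runAll(
            automated(() -> System.out.println("login and open the card")),
            manual("explore the comment form", () -> true)); // canned "pass"
        System.out.println(passed);
    }
}
```

The point is that the automated steps carry you to the interesting state, and only the exploratory part needs a human in the loop.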
It's essentially going to wait for me to provide input. Okay, so let me just stop at this point
and get back to my presentation. Kind of running out of time. Okay, so you saw an example of
the Mingle test suite. So, what does the Twist architecture look like? Essentially what we
have is something that we refer to as a scenario editor. You can write your executable
specifications in English, or pretty much any language that you would like. We have customers that
will actually write their tests in Chinese, for instance. The underlying implementation
mechanics, what actually makes the test work, is more involved--we won't have time to talk about
this. So, I just want to quickly talk about a little case study that we have. Autotrader.co.uk
is the U.K.'s second largest website. That's what it looks like right now, if you
go over there. Basically it allows you to put up your car for sale and put in promotions
and things like that to sell your car if you'd like to. So, the Autotrader team actually
went from six weeks of testing before each release to two days. So they were using a
fairly large test suite. They initially started at around a thousand Twist tests but
then refactored that down to around 100 fairly concise tests, right?
So, that's the difference in terms of productivity that they were able to get to. Twist is currently
used by around 60 folks across four teams, across a multitude of roles. So that means
business analysts, developers, testers, use it. They currently release twice a month with
zero defects. Now I know this is fairly contentious but that's what they say, okay? So, this is
not what I'm saying. Yeah, yeah. So, anyway, so what does the roadmap of this look like?
So, one of the things that we are--that we are currently working on is adding support
for more languages. So, we have support for Java at this point. We are adding
support for Ruby and .NET in the future. New refactoring options--so you just saw examples
of them. So, we'll be adding stuff like being able to inline concepts, being able to inline
code back into your tests if that makes sense and so on. So, just so that it allows you to
write your test in a natural way and then improve the design of your tests if that is
so required. Another piece that we are working on is something called test management. It
really allows you to have a unified place where you can look at all your testing activity.
Which means, if you've got manual tests, which Twist allows you to run, as well as automated
tests, you're going to have one place where you can get a consolidated view of what
everything looks like. And finally, load and performance testing. I don't have enough time
to get into the details but this has some pretty interesting applications when you write
tests in the style that I just showed you, okay? So, with that, I'm more or less done.
So, if there are any questions, I'll be happy to take them.
>> Hey, looks fantastic. >> PRAHLAD: Thank you.
>> What's--what are the edges of this thing? Like, at what types of scenarios am I going
to have trouble automating with the tool? >> PRAHLAD: Okay. So--okay, interesting question.
One of the things to be aware of is that, to some extent, what you can do
in the scenarios is actually limited by the driver that you use. So for instance, if we
happen to be testing, say, an ActiveX control and Selenium doesn't support testing ActiveX
controls, then you're in some trouble. So, that's one limitation. And that's really a
technology limitation in terms of what the underlying driver allows you to do. One of
the things you can do though is that you can actually use combinations of drivers and that
actually makes sense. So, to give you an example, if you want to do something like drive the
application using Selenium or Sahi or any other driver and check something like an audit
trail by connecting to the database using say a JDBC driver, you can actually do that
because one of the things that I didn't mention is that you can pretty much use any third-party
library under the covers for the underlying implementation. So, it's not as if it's a
closed kind of system, it's a fairly open kind of runtime. It's effectively a Java runtime,
which means you can use all the libraries that you want. So, that's one issue, which
is the technology limitation. The second thing that we have seen is and we saw this in the
[INDISTINCT] case as well, when you--let's say that you're doing something like a Java-based
development and you have, say, multiple scenarios for each story, you can rapidly have an explosion
of scenarios. So, in their case, they actually ended up with close to
a thousand. Obviously, that is a huge number. So, another pattern that we have seen is that
it's very important to kind of consolidate your scenarios as your test suite starts growing.
So, if I have a test for login already and it's being used, you know, in a million different
places, because that's something that I'll always have to do, do I really need to have
a login test anymore, right? So, some of these things, when they start getting used
more and more in the application, can now just be a part of other tests. So, is there
something else that… >> Maybe that would be a good feature to add--
that it could automatically tell me where I can eliminate tests.
>> PRAHLAD: Okay, that's exactly what the test management piece will do.
>> Woo-hoo. Okay. >> PRAHLAD: One of the things, yeah.
>> Cool. >> PRAHLAD: Okay.
>> It looks like it's something like a Wiki web server--I'm wondering if we can
get the same features or the stuff you showed us, you know, by combining Selenium
with other web acceptance frameworks, like Selenium and Cucumber or Selenium and [INDISTINCT]?
Don't you think we can achieve the same stuff that you showed us? Of course, you guys have
much better UI. >> PRAHLAD: Yes.
>> So… >> PRAHLAD: Okay.
>> …what's your take on that? >> PRAHLAD: Yeah. So, his question was, "Can't
you use, say, a driver like Selenium along with, say, other frameworks like Cucumber
and so on?" Yeah, the answer is yes. Obviously you can. But one of the things that Twist
allows you to do is to look at your whole test suite. So, you saw some of the refactorings
that are possible. If you're using some of the other tools that you talked about,
you have to do some of those refactorings by hand. This starts becoming important once
your test suite grows. So, yes, you can definitely write tests in a similar kind of style but
the difference is really in terms of your personal productivity and how long does it
take you to make a change when the application changes? In some cases, that becomes really
critical, especially when you're talking about software that's released twice a month where
the application is changing really rapidly, the amount of time and effort it takes to
make changes to a test suite starts mattering really quickly, right? And that's really something
that Twist allows you to do. So, again, I mean, really the emphasis of my talk and what this is really
all about is, let's go beyond one test and let's look at the test suite. It is a--it's
a first-class entity in its own right and it's something that we probably need to look
at in a different way than we have in the past.
>> By the way, is Twist an open source tool or commercial?
>> PRAHLAD: Twist is a commercial tool. >> [INDISTINCT] Twist, I guess.
>> PRAHLAD: Yeah, it is a commercial tool. Yes. Yeah?
>> Hey, Prahlad, it's a beautiful tool and like, you know, the refactoring is amazing.
>> PRAHLAD: Thank you. >> At the same time, one of the challenges
generally in refactoring the code is that your data is going to change the scenario. Say,
for example, if I created a table considering the current context. And I would have built
the table over and over again and it has grown by itself, right?
>> PRAHLAD: Okay. >> And this is beautiful in terms of code
refactoring. But when you have to change the data and it has to implement all the functionality,
if I'm right, when you went to each one of these rows, it substituted the same value
but it still went through the same scenario. >> PRAHLAD: Yes, that's right.
>> Right? >> PRAHLAD: That's right.
>> If it has to change the scenario, right, which is--I know it's a huge change by itself,
right--but how does the system handle it? Is there a thought in terms of, you know,
whether you're planning to build that, or do you have any solutions or…
>> PRAHLAD: Sure. >> Because it's a common challenge all the
time, right? >> PRAHLAD: Sure. That's a very good question.
It's kind of related to the branching thing that I showed you right at the beginning.
It's almost as if you have one scenario that's actually two scenarios or more. That's one
way to handle it. All I'd like to say is that the data management support that we have right
now, data-driven testing support that we have right now, is pretty much in its infancy.
So, we are thinking about the problem that you are talking about. And one of the ideas
was to see whether we can actually talk about data not just at a test level but almost
at a workflow level. So, to take one block and see whether it can be written in different
ways with different data inputs. That, of course, is a fairly complex way of handling
it and it's something that we are still currently working on. Yeah, is there a question at the
back? >> Vivek. Vivek, I have a question. So…
>> PRAHLAD: Can you raise your hand please? Yeah, thanks.
>> Yeah. >> PRAHLAD: Yeah.
>> So, first, is it a test automation tool or is it a test management tool?
>> PRAHLAD: Good question. >> Because I'm confused. I downloaded the
trial version of both Mingle and Twist. >> PRAHLAD: Sure.
>> I've been using it for about three months. >> PRAHLAD: Sure.
>> I'm confused. When you say the… >> PRAHLAD: Okay.
>> …you know, automated steps and then you have the manual testing also.
>> PRAHLAD: Sure. Sure, sure. >> Additional features are like you…
>> PRAHLAD: Correct. >> …are going to put the manual results,
automation results. >> PRAHLAD: Sure, sure.
>> That confuses me. >> PRAHLAD: That's actually a very good question.
I think it depends on what your definition of test management is. So, the way I look
at it is that Twist is a tool that helps you work with your entire test suite. It's one
place to keep your automated test, your manual test and all of that, to get reporting on
your tests when they execute from your CI, and then, to be able to evolve your tests once
you've moved on. Now, I don't really want to get into like redefining or defining test
management and all of that. Maybe we can catch up later and talk about it. But really, that's
how I would look at it. The test management stuff that I talked about is really about
visualization, getting a sense of what is the overall health of your test suite and
being able to interact with the application, not just using this interface that I showed you
but also [INDISTINCT], so that you can have one place to look at your test reports for
your automated tests as well as your manual tests. So, when Twist tests run as part
of the CI, you don't need to have the editor installed, you can actually run that from
the command line. So, that's really what the test management piece is about.
>> One additional question on there. >> PRAHLAD: Sure.
>> So, is it limited to GUI automation or… >> PRAHLAD: No.
>> …can we do service layer automation also? >> PRAHLAD: It is not. So, as I was saying
earlier, the Twist test itself runs on the Java runtime. So, pretty much anything that
you can do with Java code can be part of your test. So you can, for instance, test services
if you want to. >> Yes.
>> PRAHLAD: You can, as I said earlier, interact with databases using JDBC drivers. So, pretty
much the sky's the limit. The examples of the tests that I showed you, yes, are tied
to user interfaces, but it's not necessary that you write them that way. So, you can
write a test in the [INDISTINCT] style if you'd like to--if that's what you're getting
at. >> Yeah, yeah. Thanks.
>> PRAHLAD: And thank you again for waiting for this talk and it's a pleasure to be here.
So again, if there are any further questions, I'll be available at any point during the
conference and we can talk. Thank you.