10 Tips for Blazing Fast Web, Mobile and API Performance

Uploaded by SmartBearSoftware on 25.07.2012


MIKE PUNSKY: Hi, I'm Mike Punsky.
And I'm the load test manager at SmartBear.
And today we're going to have a
presentation from Mike Gualtieri.
And it is 11 tips for blazing fast web, mobile, and API
performance this holiday season, or how to optimize
web, mobile, and API performance.
So on today's agenda, we're going to talk about a few of
the performance challenges that teams face.
We're going to provide 11 steps to improve performance
before we get into the holiday season.
We'll also talk about the next big performance challenge.
And then at the very end, we're going to answer as many
of your questions as possible.

Just remember this is our housekeeping slide, or what we
like to call our housekeeping slide.
And to let you know, you can submit your
questions at any time.
Just use the panel that you have from GoToWebinar.
And as we get questions, we'll answer them as we're able to.
And we'll also save some of those for the end and present
those answers to you.
We're also going to be sharing this event on Twitter.
We're going to be using the hashtag BlazingFast.
And we're going to be providing a follow-up session
at the same hashtag also, following the presentation.
And in case you want to go back in and take a look at the
presentation after the fact or want to share it with somebody
who did not or was unable to attend, a recording of this
event will be available at SmartBear.com within about 24
hours after the event, if not sooner.
And you'll receive a follow-up email with instructions on how
to access the recording.

So before we get started, we'd like to present you with a
little survey--
and I mean little--
and find out a little bit more about you.
So if you take a look at the poll that's up on your screen,
please select all that apply, and let us know why you're
here today.

Some of the things that are up here are improving performance
of your apps in development, better test for performance in
QA and production, monitor or optimize the performance of my
apps in production, architect apps with performance in mind,
or you're just curious about different ways to improve performance.

So it's very interesting.
It looks like the majority of you, by a margin of 1%, are
just curious about different ways to improve performance.
But then again, in second place was better testing
performance in QA and production, at 65%.
So that's a good thing to know.
And now I would like to present Mike Gualtieri,
principal analyst at Forrester.
Mike has more than 25 years of experience in the industry,
helping firms architect, design, and develop
mission-critical applications in e-commerce, insurance,
banking, travel and hospitality, manufacturing,
health care, and scientific research for organizations
including NASA, eBay, Bank of America, Liberty Mutual,
Nielsen, EMC, and others.
Mike has written thousands of lines of code, managed
development teams, and consulted with dozens of
technology firms on product, marketing, and R&D strategy.
So go ahead, Mike.
MIKE GUALTIERI: Thank you, Mike.
And welcome, everyone.
My name is Mike Gualtieri, as Mike said.
And I'm a principal analyst here at Forrester.
And I'm in our application development team.
So my research focuses on most things application development.
And one of those areas is what we call blazing fast website performance.
And so I'm going to talk to you about 11 tips for blazing
fast performance.
Now, we've set this in the holiday season because there's
a lot of high-profile things that can happen, especially in
e-commerce sites, surrounding Cyber Monday and that whole
holiday shopping season.
Now, originally they asked me to do 10 tips.
And I just couldn't do it.
I had to add an 11th.
So you probably got the email about this session
that it's 10 tips.
Well, that's why it's 11.
Because I needed 11.
I couldn't do it in 10.
So first let's look at some data.
Let's look at some of the outlook for the holiday season
and what we can expect.
And what does it really mean for traffic to
increase at that time?
Well, you can see here that shopping definitely ramps up,
starting on Thanksgiving and then rapidly increases.
Traditionally, Black Friday is more of the "let's go to the
store," "let's go to the Wrentham outlet mall," or
wherever you happen to be.
And then Cyber Monday is where the online really ramps up.
And one of the reasons for that is because people are
shopping at work.
I know that I do most of my shopping at work.
I hope my boss isn't listening to this.
But it does account for about 50% of all the
Cyber Monday spending.
And for retailers--
for online retailers, e-commerce retailers--
this entire holiday season-- not just Cyber Monday, but
the entire holiday season-- can make up 40% of
the retailers' online revenue.
So for just a six-week period, that's a ton of revenue.
And you don't want anything to prevent you from getting that
revenue, especially websites that people abandon because
they're too slow or because they can't handle the load and
they go down.
And there's a number of high-profile situations that
always seem to happen on Cyber Monday.
A lot of you probably know Newegg, the computer retailer--
Newegg, Nordstrom, and Target all had some problems last Cyber Monday.
So we also know that shoppers are shifting more of their
wallet share to online during the holidays.
So they're going to the store, and, increasingly, the money
they have to spend, they're spending online.
So what this really means is that the load is only going to increase.
So it's going to continue to be a challenge to handle
that peak traffic as it keeps growing.
But it's not just about e-commerce.
It's not just about the store.
Because you might be saying, well, I don't have an
e-commerce site.
But you may have another site.
And the thing is, web-influenced sales are much
bigger than online sales in themselves.
So even if you're not selling something directly online,
your traffic may increase just because people are searching
for things and looking for things and trying to get information.
Now, of course, we're talking about online holiday shopping.
But performance is important at any time, as well.
It just-- the problem gets exacerbated during this
holiday season.
So the bottom line here is that performance has a direct
impact on revenue.
And we don't want to let performance, availability, or
scalability be the reason they leave.
And as technical people, as developers or operations
people, as testing people, it's pretty much our
responsibility to make the site perform to be available
and to be scalable.

Online, there's so many other choices.
So online shoppers, they have pretty much
unlimited shopping choices.
And they know it.
So if the site is not performing, or it's not
performing well, they can just go to another
site and buy from that site.
So it's not like they're going to stick around, per se.
So it's critical that that performance
be as fast as possible.
So that's what we're here to answer.
How can we avoid an embarrassing online disaster?
And I'm going to go through, now, 11 tips to prevent that.
And you'll see that there's a mix of these tips that have to
do with development and infrastructure and a bunch of
other stuff.
But let me start first with Vitruvius, who was the
first-century Roman architect and author of a brilliant
10-volume book called De Architectura.
And he defined architecture broadly as having three
qualities: firmitas, utilitas, and venustas.
Does anyone take Latin?
Or can you figure out what those are?
Ah, it's pretty easy.
Firmitas means it's strong.
Utilitas, the function.
And venustas, that's like Venus; that's beauty.
So in software, though, we have seven qualities that we
think are important.
So in architecture, there's three.
And in software, there's seven.
Now, I'm not going to go through all seven of these.
But I am going to highlight three of them that are
critical, which are performance, availability, and scalability.
And that's what these 11 tips will cover.
Now, let me distinguish between these three.
Performance is the latency.
It's the speed.
It's the response time that the user experiences.
Availability is whether the site is up or not.
So I guess if your site goes down, you have zero availability.
So then, scalability is the ability to handle an
increasing-- or in some cases decreasing-- load.
Handling decreasing load is about optimizing the use of resources.
But in the holiday season, we're concerned with the
increasing load.
So a lot of people just say, oh, scalability.
But scalability is really separate from performance.
Obviously, all of these things are interrelated.
So all of these tips center around performance,
availability, and scalability.

So performance isn't just about one thing.
And I think of it in three layers.
There's the channels where the clients access it-- the web,
the mobile, the APIs.
Because increasingly, shopping channels are
exposed through APIs.
And even if you don't have your front end yourself, you
may have your APIs exposed.
And those have to perform.
There's the thing you don't really control, which is the
internet and third-party services you may be using
inside your website.
And then there's your own application and your own
infrastructure that's running largely on the back end,
whether it's hosted or on prem or in the cloud.
That's your code.
That's your infrastructure that you control.
So there's things here that you control.
And there's things here that you don't control.
But there's techniques that you can use to mitigate the
risks of things you don't control, and techniques that
you can do for the things that you do control.
So let's look at those tips to keep Melissa shopping.
First one is to benchmark.
And I can't tell you how shocked I am, how many people
talk to me and I ask them, well, what's your peak load?
Many don't know that.
Or what's your performance?
They don't test it.
They don't benchmark it.
So before you start any performance improvement
program, you have to know where you're starting from.
You have to test to benchmark the capacity-- not just the
performance of one transaction, but performance at load.
And you need to find the breaking point.
You need to know if we get 10,000 people, if we get
500,000 people, the site is going to break.
Now, "break" could mean that it's just locked
up, it's not available.
Or it could mean that performance is so slow that
the site is virtually unusable.
So it might be a threshold.
So instead of a page loading in two seconds or one and a
half seconds, it loads in 12 seconds.
You may consider that to be the breaking point.
But bottom line is you absolutely have to test to
create a benchmark, so you know where
you're beginning from.
Because how else will you know you're improving?
And you have to test from the customer's point of view.
You can't just test one server, your database server.
So you have to create scripts of maybe your shopping flow--
searching for a product, putting in a shopping cart,
check-out process, payment.
So you have to script that and play that back and run it at
increasing loads and then measure the performance.
And you have to have multiple scenarios.
And one important one is to find that breaking point and
to find what the performance is at different loads.
So that's the first one.
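The load-ramping idea above can be sketched in a few lines of Python. This is a minimal illustration, not a real load-testing tool: `fake_transaction` is a hypothetical stand-in for a scripted shopping flow (search, cart, checkout), and a real benchmark would drive the actual site from the customer's point of view.

```python
import time
from concurrent.futures import ThreadPoolExecutor
from statistics import quantiles

def run_benchmark(transaction, load_levels, requests_per_level=50):
    """Run `transaction` at increasing concurrency and record p95 latency."""
    results = {}
    for users in load_levels:
        def timed():
            start = time.perf_counter()
            transaction()
            return time.perf_counter() - start
        with ThreadPoolExecutor(max_workers=users) as pool:
            futures = [pool.submit(timed) for _ in range(requests_per_level)]
            latencies = [f.result() for f in futures]
        # 95th percentile: the response time most users do better than.
        results[users] = quantiles(latencies, n=20)[-1]
    return results

# Hypothetical stand-in for a real scripted shopping-flow transaction.
def fake_transaction():
    time.sleep(0.01)

if __name__ == "__main__":
    baseline = run_benchmark(fake_transaction, load_levels=[1, 10, 50])
    for users, p95 in baseline.items():
        print(f"{users:>4} users: p95 = {p95 * 1000:.1f} ms")
```

Plotting the p95 latency at each load level is one way to spot the breaking point the speaker describes: the level at which latency jumps past your acceptable threshold.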
The other thing is people often ask, well, how fast
should it be?
Should it be four seconds?
There used to be something called the four-second rule
and the two-second rule.
And what those rules said, it was like, well, if your page
load's less than four seconds, then shoppers
will start to abandon.
And then they revised it to two seconds.
But the reality is you have to benchmark it against your competitors.
You have to find out, if you're in the retail business, what
are your competitors' load times?

So the answer as to how fast your page should load: it
should load as fast as, or within a tolerance of, your competitors'.
And a good example of this is travel sites.
Travel sites were really slow.
And then when one person boosts the performance, then
it's kind of like a performance war.
And there's an expectation that it's faster and faster
and faster.
So do your own benchmark.
But also compare that to benchmarks against your competitors.
The second tip is to run Google PageSpeed.
Now, how many of you are aware of Google PageSpeed?
I find that, I'd say, maybe 25% when I tell people about
this have heard of it.
But Google PageSpeed, you can just google Google PageSpeed,
and it'll come up.
And what it is, it's a site that has a list of best
practices that Google is constantly revising.
And all of these best practices have to do with how the
HTML and the JavaScript and the CSS are structured, and how
your site, within the browser, interacts with the server.
So there's a way to optimize the HTML.
And it will do a quick test against those best practices.
For example, if your site loads 30 images, it may say,
hey, that's too many images.
Or, it would be helpful if you made that
a sprite, for example.
So the reason I put this as number two is because
sometimes the fastest way to increase the performance of
your site is to run this.
Find a few of the best practices.
You might be able to see here there's some--
combine images into CSS sprites,
leverage browser caching.
So I encourage you all to go run this against your site--
or against any site, for that matter-- and familiarize
yourself with this tool.
Because this will tell you high priorities.
And it will be like, boom!
I can just do this inside my HTML, and I will get a
performance boost.
So that's why that's number two.
Now, the third tip is what's near and dear to my heart.
Because this is the only tip I used to follow back in the
day, which is the code.
Because if you're a programmer, you're obsessed
with your code.
And you want that to be as efficient and
performant as possible.
And that's why the third tip is to know your code-- to
test your code to eliminate bugs.
Because bugs can be performance bugs.
And so it's good in your bug-tracking system to track
those as performance bugs.
Or they can also destabilize your app or even shut it down
or shut performance down.
And that's an availability consideration.
So availability, not from the whole system's down, but maybe
the availability of a certain key function.
So one way to do this is have a fun time.
And the fun time is peer code reviews.
And these are great because everyone loves to talk about
their code and explain the decisions they made.
And then everyone else reviewing it loves to point
out what you did wrong.
But the end result is better code and trying to head off
bugs or performance-sapping things that will cause
performance to go down.
Now, the usual culprit in this is always database connections
and connections to the data.
So that's kind of where the low-hanging fruit usually
exists when it comes to code.
But there's performance profiling tools you can use to
quickly identify loops or areas of the code that could
be rewritten to be faster.
And the thing is, once you change this code, you've got
to rerun those benchmarks, which is tip one.
And you should have an automated way
to run those tasks.
We'll get into that a little bit later.
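As a sketch of what a performance profiler gives you, Python's built-in cProfile can point at hot spots like this. The `slow_lookup` function here is a made-up example of the kind of loop a profiler would flag for rewriting; it is not from the presentation.

```python
import cProfile
import io
import pstats

def slow_lookup(items, targets):
    # O(n*m) membership test against a list: a classic hot loop.
    # (Converting `items` to a set first would make this much faster.)
    return [t for t in targets if t in items]

def profile(func, *args):
    """Profile one call and return (result, human-readable report)."""
    profiler = cProfile.Profile()
    profiler.enable()
    result = func(*args)
    profiler.disable()
    buf = io.StringIO()
    pstats.Stats(profiler, stream=buf).sort_stats("cumulative").print_stats(5)
    return result, buf.getvalue()

if __name__ == "__main__":
    items = list(range(10_000))
    hits, report = profile(slow_lookup, items, list(range(0, 10_000, 7)))
    print(report)  # the report shows where the time went
```

The point of the tip stands either way: after you rewrite a hot spot the profiler found, rerun the tip-one benchmarks to confirm the change actually helped.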
So tip four, which isn't something you can control as a
developer or necessarily a tester, but it is the cause of
many sites going down.
And that is hackers.
They can launch what's known as DDoS attacks, or
distributed denial-of-service attacks.
So you have to have a solution for DDoS.
There are some outside services from firms like
Akamai that can examine your traffic and then deflect
attacks like this.
I was involved in-- well, no, I wasn't involved in it.
I didn't do this.
But, all right.
Let me just tell you the story.
So the US House of Representatives--
I don't know if you remember, a few years ago Obama made a
speech to Congress about the health care.
And then there was that one guy who shouted out, you lie!
or something like that.
Remember that?
It was a few years ago.
What happened the next day is that the US House of
Representatives website, the .gov website,
it basically choked.
It went down.
It was a combination that it was hacked--
because someone just hacked the site-- but it was also a
lot of people running traffic against the site, saying, hey,
who is this guy who stood up and said this?
Well, anyway, the site went down.
And the Speaker of the House at the time said, look, we're
the US government.
Our congressional site can't go down.
How could this happen?
And it was a combination of these things.
Part of it was hackers.
But part of it was they weren't--
and this is another tip I'll get to--
they weren't monitoring the site.
They didn't know that it was down.
Anyway, you have to have a strategy
to handle DDoS attacks.
Fifth is infrastructure.
I'm not going to spend too much time on this.
But you have to have redundant infrastructure to achieve high availability.
This is pretty obvious.
But what's often missed is there's different components.
I can't believe how many people say, well, yeah, I have
multiple web servers.
But then there's one database server.
And sure, that database server may be hardened.
But there's a lot of things that can happen.
And there's a lot of different components.
So some sort of redundancy is needed in that infrastructure.
So this is merely to examine all the infrastructure
components and make sure that they're duplicated.
Oh, you think cloud's the solution?
Well, ask Netflix and some other
people who are on Amazon.
And this isn't a dis at Amazon.
Because Amazon is cool, and Amazon Cloud's cool.
But it underscores the point you can't just assume that you
have infrastructure redundancy.
So you have to have that.
So that's tip five.
Oh, this was meant to be a quiz.
Well, I'm going to quiz you.
Don't look at the slide.
What does "five nines" mean?
Because you know how people like to--
especially operations guys, they like to throw around
"five nines" or "four nines," and I always wonder, do you
actually know what that means?
Well, five nines means that your downtime per year can be
5.26 minutes.
That's pretty hard to achieve.
And you can see some of the other levels-- what it means to
be at 98% uptime, which is 7.3 days of downtime a year, or at 99.5%.
Would you be proud if you had 99.5% uptime a year?
I would.
Because that's pretty good, unless that 1.83 days was on
Cyber Monday.
Then you'd be in trouble.
So you really have to think about what is the likelihood
of a particular downtime occurring on that day.
And that's why we're focusing on these peak periods.
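The arithmetic behind those availability numbers is easy to check. A quick sketch in Python, assuming a 365.25-day year:

```python
def downtime_per_year(availability_pct):
    """Allowed downtime in minutes per year for a given availability."""
    minutes_per_year = 365.25 * 24 * 60
    return minutes_per_year * (1 - availability_pct / 100)

if __name__ == "__main__":
    for pct in (98.0, 99.5, 99.9, 99.99, 99.999):
        print(f"{pct}% uptime -> {downtime_per_year(pct):.2f} minutes/year")
```

Running this reproduces the figures in the talk: five nines (99.999%) allows about 5.26 minutes of downtime a year, 99.5% allows about 1.83 days, and 98% about 7.3 days.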
The sixth tip is to cache.
And caching is used a ton at many different levels, to
reduce internet latency.
And there's really three levels of
caching that I'll discuss.
It roughly goes with those three layers of architecture I
talked about in the beginning.
But the user's distance from the content actually matters.
So look at this example here.
So you've got a web server in Tampa, Florida.
And you have a user who's accessing that
website from Orlando.
Well, the network latency-- that is, the
internet latency itself--
is 1.6 milliseconds.
That's cool.
So every request to that server costs 1.6 milliseconds.
Now, you might think, well, that's not a lot.
But if your page is loading 30 things, you have to multiply
that by 30.
You follow me?
But now let's go down.
Say we're in Pennsylvania, and we access
that server in Tampa.
It goes to 16 milliseconds.
And then across continent.
So say you had a server in New Jersey, and people are
accessing content in San Diego.
It goes to 48 milliseconds.
And multi-continent, 96 milliseconds.
So there is lots of internet latency there.
And this really adds up.
And that's why the companies such as Akamai--
you might have heard of Limelight; you may have heard
of EdgeCast and some of the telecoms--
have what's known as a CDN service, a
Content Delivery Network.
And what they do is they just forward-cache that content.
So they cache a lot of the static content.
So, again, if you have that server in New Jersey, they
have servers in San Diego.
And a lot of the static content will be served
locally, while your dynamic content
will be served remotely.
So that's what edge caching is, or a
content delivery network.
The other fast way, though, of caching is in the browser.
And remember tip two, Google PageSpeed?
You can run that.
And one of the things it's going to test is whether
you're caching content.
I mean, you guys know, you have a 60K JavaScript file,
you're not going to want to load that every
single time, right?
You're going to want to cache that in the browser.
So that's the fastest way to serve content to the user is
through that browser cache for static elements.
And most sites are dynamic.
But there are still a lot of static elements there.
And then, finally, a server cache.
This is usually the database cache.
Imagine a site like Facebook, where every page is really dynamic.
There's a lot of personalized stuff like that.
And it has to go to the database.
They have a huge farm of in-memory servers that
assemble your page as page fragments so that they can be
served up really, really quickly.
So there's many different ways of caching on the
back end, as well.
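As one illustration of the server-side idea, here is a minimal in-memory cache with a time-to-live, in Python. This is a toy sketch, not how any particular site does it (production systems use dedicated caches such as memcached), and `expensive_fragment` is a hypothetical stand-in for a database-backed page fragment.

```python
import time

class TTLCache:
    """A tiny in-memory cache with a per-entry time-to-live, in seconds."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, timestamp)

    def get(self, key, compute):
        now = time.monotonic()
        entry = self._store.get(key)
        if entry is not None and now - entry[1] < self.ttl:
            return entry[0]            # cache hit: skip the expensive call
        value = compute()              # cache miss: hit the database/service
        self._store[key] = (value, now)
        return value

if __name__ == "__main__":
    calls = 0
    def expensive_fragment():
        # Hypothetical stand-in for assembling a personalized page fragment.
        global calls
        calls += 1
        return "<div>personalized page fragment</div>"

    cache = TTLCache(ttl_seconds=60)
    cache.get("home:user42", expensive_fragment)
    cache.get("home:user42", expensive_fragment)  # served from cache
    print(calls)  # the expensive call ran only once
```

The same get-or-compute pattern, at very different scales, is what the browser cache, the CDN edge cache, and the back-end fragment cache all implement.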

So tip seven really has to do with mobile.
Because the other thing we know is that mobile shopping,
and especially tablet shopping, is
also a reality here.
And it makes testing much more complicated.
So you have to test your site in multiple browsers, in
multiple versions, in multiple OS versions to optimize the experience.
Because you cannot just test in one browser.

You might test a site in Google Chrome, in Firefox.
And you can have two- or three-second differences in
just those different versions.
Here's a classic inquiry I get from clients.
Oh, a vice president of such and such is complaining that
our website is really, really slow.
But when we test it, it's lightning fast.
And it turns out that, yes, they're using some older
version of Firefox, an older version of IE.
So you have to test in these multiple versions.
Here's some mobile home-page performance differences.
And this has less to do with the version than it has to do
with the bandwidth.
I mean, you can see here that this page takes 8.18 seconds
on a 3G network versus 6 seconds on 4G.
Gee, you'd think the way they talk about 4G, it would be a
half a second.
But there are big differences here.
And you have to understand these differences to
understand what the customer's experience is going to be.
The eighth tip is really, really scary.
It's third-party APIs.
And the reason it's scary is you don't control it.
And what am I talking about when I'm talking about
third-party APIs?
I'm talking about the shopping cart services, the rating
services, the ad services, the other components that you use
and you integrate on the glass in your website.
You may call these APIs from the server, as well.
But many people think, oh, well, I have an SLA with them,
so I'm cool.
But if that SLA fails on Cyber Monday, for example, you can
go yell at them and say that they failed the SLA.
But your customers are still unhappy.
So you can't be held hostage by these third-party components.
So how do you do that?
It can be pretty complicated.
Here's one example.
Battery Junction, small e-commerce provider, but big
in what they do, flashlights and batteries.
That site uses Yahoo! Shopping Cart.
And it uses many other third-party components.
Some sites use Google Maps.
So this is just their social plug-ins, where
you have the code.
So you know what I'm talking about, right?
Third-party APIs that you integrate into your site.
We have to choose these wisely.
The first thing, do you even know all the third-party
components you are using?
So you're a developer.
And you're developing these pages.
But you also have a web content management system.
There could be some people in marketing inserting code that
makes call-outs to third-party apps.
You may not even know it.
So you have to know what your components are.
So you have to take inventory of that.
And you have to ask everyone--
e-commerce professionals, marketing, everyone.
And then you have to do a bit of due diligence on each one
of these component providers.
Is it an established firm, like Yahoo!
Shopping Cart, Google Analytics, Google Maps?
Is it a start-up that hasn't gained momentum?
Did they just start the company in July?
Maybe they're not prepared for Cyber Monday.
Have they published usage, outage, and performance data
for components?
This is a wonderful trend.
Twitter, for example, produces that data so
you can analyze it.
You can see what the outages actually were.
Do your competitors actually use those components?
Because that might be a good sign and a good way to know if
it's a cool component.
And if you choose to be an early adopter of that
component to gain a competitive advantage, you
have to mitigate those risks.
So what I suggest is that you assign a confidence level of
one to five to each one of these components and their
service-level agreements.

I thought I had another slide after that, but--
So you assign a confidence level.
And then you have to come up with a
mitigation strategy on each.
I think I'm missing a slide on that mitigation strategy, but
we'll just go on.
I think you understand third-party components.
And the third-party components might not just go down, but
they may also slow you down, as well.
Because they may not handle the load.
And I have a little story on that, too.
It was a large Midwest retailer--
I mean, big brick and mortar.
But they had an online presence.
And during this holiday season, they had a third-party
payment service.
And anyway, they focused all of their effort on keeping
their infrastructure and their e-commerce system up.
And they did a great job.
It didn't go down.
But the bottleneck was actually the payment provider.
So you have to identify, eliminate, and mitigate all of
those architectural bottlenecks--
not just your own, but potentially the third party.
So this is the confidence level I was talking about.
You assign every component in your architecture a confidence
level, so one to five.
Five means the SLA exceeds your requirements, and it has
an insignificant history of breaches.
So this is ideal.
You can't always find five.
So sometimes it's going to be OK to use one
that's a one or a two.
But you have to have a mitigation strategy.
And here's how you decide on a mitigation strategy for each
component that's used in your website.
Choose one of these mitigation strategies that's on the next
slide, after you choose a failure condition.
So what if it has a functionality
problem or a bug?
If you're using a third-party component, they may release a
new version of the code.
That happens all the time.
And that could break your site.
So what are you going to do if there's
a bug in that component?
Second failure condition is availability.
What if the component is simply not
responding to requests?
That's what happened to this retailer when they started
using the payment service.
Well, what if the component slows down, it's just simply
taking too long?
And what if it can't handle the volume, scalability?
So, unfortunately, you need a strategy for
each of these four.
So some of the potential strategies here are none.
You've just decided that that risk is low, or the cost to
mitigate the risk is too high.
What's really important here--
for example, what if Google Analytics goes down, like it
did several years ago?
That actually happened at Forrester Research.
Our site went down.
And why did our site go down?
It was like, Google Analytics was down.
And could we have built something in?
Did we?
No, because there's certain expectations of certain
components that they don't go down.
But you have to assess that risk for yourself.
Code-around fault tolerance.
You can write code that will detect the component failure.
Or you can use a monitoring service to detect the
component failure and then find some work-around.
So you can imagine that e-commerce situation where you
detect that the payment service is down.
You gather the information but then
confirm the payment later.
Another idea there is, say you're using a third-party
rating and review service.
And that goes down.
You could simply not display that on the page, for example.
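That code-around idea can be sketched as a small wrapper. This is an illustration under stated assumptions, not the retailer's actual code: the function names are hypothetical, and a real implementation would add timeouts, logging, and alerting.

```python
def call_with_fallback(primary, fallback, max_attempts=2):
    """Call a third-party service; on repeated failure, degrade gracefully."""
    for _ in range(max_attempts):
        try:
            return primary()
        except Exception:
            continue  # transient failure: retry
    # All attempts failed: use the work-around instead of taking
    # the page down, e.g. hide the ratings widget or queue the payment.
    return fallback()

# Hypothetical third-party service that is down.
def flaky_ratings_service():
    raise ConnectionError("ratings provider is down")

# Work-around: render the page without the ratings section.
def no_ratings():
    return None

if __name__ == "__main__":
    result = call_with_fallback(flaky_ratings_service, no_ratings)
    print(result)  # None: the page still renders, just without ratings
```

The same wrapper shape covers the payment-provider story above: the fallback would gather the order information and confirm the payment later, instead of failing the checkout.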
The other strategy is to have a
completely alternate website.

Maybe you're doing some A/B testing.
You could just go to B for everything.
And the other thing is you could swap out.
If a component fails, you could quickly code
and swap out one.
That's probably the worst-case scenario, but it's a strategy.
Now, the tenth tip is that once you do all of these
performance-boosting activities--
the benchmarking, caching, and everything else--
then you have to continuously monitor from your customer's
point of view.
And this is where you have to use some third-party
monitoring service, which is constantly-- on a regular
basis, it could be every 30 seconds, every five minutes,
whatever makes sense-- running scripts against your site to
measure the performance and then alerting you
when there's a problem.
Because you're going to have to react to
whatever problems occur.
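A single synthetic check from that kind of monitor might look like this sketch. The load-time measurement is a stand-in; a real monitoring service would replay a scripted transaction from outside your own network, on a schedule.

```python
def check_site(measure_load_time, threshold_seconds, alert):
    """Run one synthetic check; alert if the page is slow or unreachable."""
    try:
        elapsed = measure_load_time()   # replay the scripted flow, time it
    except Exception as exc:
        alert(f"site DOWN: {exc}")
        return False
    if elapsed > threshold_seconds:
        alert(f"site SLOW: {elapsed:.2f}s (threshold {threshold_seconds}s)")
        return False
    return True

if __name__ == "__main__":
    alerts = []
    # Hypothetical measurement: pretend the page took 5.4 seconds.
    ok = check_site(lambda: 5.4, threshold_seconds=2.0, alert=alerts.append)
    print(ok, alerts)  # False, with one "site SLOW" alert recorded
```

The threshold is where your benchmarks from tip one come back in: the alert level should reflect the point at which your own data says shoppers start abandoning.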
And when you measure that performance from the
customer's point of view, it's real important that you're
measuring the perceived performance.
So I'm not sure if you're all familiar with this concept of
"above the fold" in a website.
So if you look here at the Walmart, the full page, this
is loading the entire page.
And this test took 3.89 seconds.
And then it took 2.17 seconds.
But above the fold, it took 1.4 seconds.
So the user's perception is that the entire page loaded in
1.4 seconds.
Above the fold, that's when they can see everything.
That's when they can start interacting.
And to do that requires--
it's how you output the HTML and the
JavaScript and the CSS.
And you have to monitor from what that user's perspective--
not from your network, not from the browser, but from
what the user actually sees.

So the best practice here is also to baseline the
performance metrics with frequent reporting so that you
know the ups and downs.
And you'll start to get an idea of, I get a lot of weekend
traffic, I get daily traffic.
It's important to test it after you do a
new release of software--
so in addition to continuously monitoring, whenever you do a
new release.
And know what elements of your infrastructure are actually
affecting that performance.
So that's the next step.
Once you know how fast it is from the user's perspective,
what are the elements affecting it?
And really the nine prior tips all have an impact on that.

And then finally, problems do occur despite
the best-laid plans.
And there has to be a team and contingency plans in place to
react quickly to whatever issue arises.
So many teams are on call.
But part of the problem with this on-call strategy is that
it just means everyone's going to get on a conference call.
And then they're going to start talking about it.

They have no plans.
They have no scenarios.
So you have to create these failure scenarios.
Decide who can react to those failure scenarios and what
they're going to do.
And this doesn't just include developers or
infrastructure and ops.
It also includes the e-commerce professionals and
the marketing professionals, because there may be some
steps they have to take to mitigate the fallout of a site
that's down or a performance that's lower.
So you have to react quickly to that.
And here's some data that shows only six out of every 20
e-business leaders even know what their disaster recovery
strategy is.
So it is a problem.
So these are the 11 tips.
I'm not going to go through them again.
But you can access this from the site.
And the next big performance challenge is going
to be in the cloud.
I indicated that third-party components and
things that aren't in your control
are the biggest challenge.
So the cloud has a promise to make performance scale and
availability easier in some ways.
But it can also be more challenging because you're
going to have to architect your systems very differently.
You're going to have to architect them to be elastic
and fault tolerant.
And many traditional e-commerce systems or web
applications, in general, are not.
So that's going to be the next big performance challenge.
Cloud adoption is accelerating, so we know
what's coming.
But most companies aren't using it.
And few of the clouds have significant traction.
You can see here that Amazon is still the king.
But there's really no clear second place at this point.
So Amazon is still the king on this one.
So the whole point of the blazing fast techniques and
the whole point of these tips in performance is to keep your
customer shopping, to keep them happy, to make them loyal
so they'll keep coming back and telling their friends.
And Cyber Monday is only four months and one day away from
today, and Christmas is five months away.
So this period's coming up.
So actually, most of these tips--
it's not too late to actually benchmark, do the testing, run
Google PageSpeed.
Caching may be a challenge if you don't already have it.
But most of those tips are completely doable within this
time frame.
So I hope you found that helpful.
And I'm going to pass it back to Mike.
MIKE PUNSKY: Thank you, Mike.
Hi, everybody.
I just want to remind you that we're still taking questions.
And we're going to have Q&A in just a few moments.
So if you have any questions you'd like to have Mike
answer, please get those over to us.
And we'll present them very soon.
So part of this slide here points out that here at
SmartBear we have a number of tools that address a lot of
the issues that you'll come across in performance,
including AQtime Pro, which profiles performance, memory, and
resource usage for debugging.
Also LoadUI Pro, to load-test APIs and web services.
And LoadComplete, for load, stress, and performance testing of
web applications.
The AlertSite division of SmartBear also does website monitoring.
So we monitor both performance and availability of
your websites and their components.
And that includes API monitoring.
And also AlertSite provides mobile monitoring, as well.
Next slide, just another reminder that you can still
submit your questions.
And I will present some of the questions that we already have
to Mike, now.
So one of the first questions that came in, Mike, was, what
is the return on investment of web performance monitoring?

MIKE GUALTIERI: Well, I think the answer to that is how you
calculate what the cost is of not doing it.
And that's going to be different for every situation.
So it's really easy to actually look at your past
e-commerce sales data.
So I'm going to talk about, first, availability, when
people can't do anything.
And then we'll back into performance.
So say you take a month period and a daily period of what
those e-commerce sales are.
And then just create a model in Excel that says, all right,
well, what if my site is down at any one of
these particular times?
And then, boom!
You're going to get a calculation.
It's kind of an oversimplified calculation, because it's
going to assume that if it's down for an hour, it's going
to come right back up and you're going to get that same
amount of shopping.
So it's a little bit of a simplified model, but it's a
good starting point.
And then what you do is create
an abandonment figure.
So you create a cell in that Excel spreadsheet that says
abandonment rate, next to performance.
So if the performance goes down by 30%, meaning it's 30%
slower, I'm going to lose, say, 10% of the
customers, or 5%.
I'm not saying those are the figures you should use, but
that's how you can model it.
And then you can run several different scenarios for your
site about what that abandonment rate will
actually cost you.
So that's the starting point.
Then what you have to do is you actually have to get data
for what that abandonment rate might be.
Now if you're lucky, you can use past performance data to
actually model that.
You can actually see, when performance went down, what
happened to the number of customers.
And so you can use actual numbers.
And this is a key to doing ongoing monitoring.
If you're going to continue to monitor, you're going to have
that data available.
You take your performance monitoring data from your
external monitoring site.
You marry it with the sales data and your web log data.
And now you have a data set that you can work with to
calculate that.
But if you don't have all of that, or you don't have all
that right away, you can do what I said.
You can do a simplified model and
look at multiple scenarios.
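The simplified spreadsheet model Mike describes can also be sketched in a few lines of code. This is an illustrative sketch only: the daily revenue figure, outage durations, and abandonment rates below are placeholder assumptions, not recommendations.

```python
# Illustrative sketch of the simplified downtime/abandonment model
# described above. All figures are placeholder assumptions.

DAILY_SALES = 50_000.0   # assumed average daily e-commerce revenue
HOURS_PER_DAY = 24

def downtime_cost(hours_down: float) -> float:
    """Revenue lost while the site is fully down. Assumes sales are
    spread evenly across the day and that shoppers do not come back
    later (the oversimplification noted above)."""
    return DAILY_SALES * (hours_down / HOURS_PER_DAY)

def slowdown_cost(hours_slow: float, abandonment_rate: float) -> float:
    """Revenue lost while the site is merely slow: only the fraction
    of shoppers who abandon is lost."""
    return downtime_cost(hours_slow) * abandonment_rate

# Run a few scenarios, just as you would with rows in a spreadsheet.
for label, cost in [
    ("2-hour outage", downtime_cost(2)),
    ("6 slow hours, 10% abandonment", slowdown_cost(6, 0.10)),
    ("6 slow hours, 5% abandonment", slowdown_cost(6, 0.05)),
]:
    print(f"{label}: ${cost:,.2f}")
```

Once real monitoring history is available, the placeholder abandonment rates can be replaced with rates actually observed when performance dipped, as described next.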

MIKE PUNSKY: The next question that came in was, how much or
how close of a relationship is there between performance and
usability?
MIKE GUALTIERI: One of my favorite anecdotes is that
many customers, consumers using a website,
will say it is slow.
And this is very frustrating to a lot of technical people.
Because the technical people will run the test.
And they'll say, oh, well, the pages load in a sub-second.
And the transitions to new pages are sub-second.
And they're still saying it's slow.
And the reason why they're saying it's slow is often not
because of the performance.
It is fast, in terms of loading pages.
But because of the usability, how it's designed,
it has a poor design.
So, for example, a user who has to click on four items--
they have to do a four-step buy process versus a one-step
or a two-step buy process--
the four-step process with all these clicks, the consumer
will say it's slow.
So they're using the word "slow" not to describe the
actual technical performance of the rendering.
But they're using it to describe the user interface.
This is really important for technical people to understand
so they don't go crazy.
So they can go back to the e-commerce people or the
marketing people or the executives and say, well,
let's look at that user experience.
Because the way that site is designed may be the culprit,
versus the actual performance.
MIKE PUNSKY: Thank you, Mike.
Another question that came in was, how can we test distance
dependency of performance?
Normally we'll be sitting in one place to test an application.
MIKE GUALTIERI: I mean, you have to use an external
monitoring service like AlertSite.
That will test from different locations.

And that'll just give you actuals.
That'll give you actual performance from Seattle,
Washington versus Florida, versus Texas, versus London.
MIKE PUNSKY: Or South America.
MIKE GUALTIERI: Or South America, yeah.
Yeah, wherever it is.
And then you'll actually see that actual data.
And if you already have a CDN, like Akamai, you'll still
notice some differences.
But if you don't have a CDN, you'll notice huge differences
in that internet latency.
MIKE PUNSKY: Another one that came in was, how do you do a benchmark on
your production environment without actually taking down
your production environment?

MIKE GUALTIERI: Well, you should actually be
benchmarking your production environment constantly.
That's continuous monitoring.
So using an external monitoring service is a best
practice to monitor for performance degradation from
the user standpoint, where you'll get alerts to know that
something's happening, and you have to investigate.
But at the same time it's doing that on a periodic
basis, it's also storing and tracking this data.
So you're going to have a lot of data, depending upon how
frequently you monitor and what you monitor.
But you could set that all up on a schedule.
And then you can download and use that data, analyze
it, to figure out the relationship, for example,
between the number of users and performance.
But you also want to load-test, too.
You want to load-test periodically, as well.
And you can load-test against production data, as long as
you ramp it up and you have workflows into test accounts.

It can be more complicated than that.
But the simple answer is monitoring.
The continuous monitoring is a continuous test.
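The monitor-and-store loop Mike describes can be sketched minimally. This is a hedged illustration, not AlertSite's API: the 2-second threshold, the timeout, and the function names are assumptions for the sake of the example.

```python
# Minimal sketch of the continuous-monitoring idea above: time each
# page fetch and flag degradation against a threshold. The threshold
# and timeout values are illustrative assumptions.
import time
import urllib.request

THRESHOLD_SECONDS = 2.0  # assumed alerting threshold

def check(url: str) -> float:
    """Fetch the page once and return the elapsed seconds."""
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=10) as resp:
        resp.read()
    return time.monotonic() - start

def record(history: list, elapsed: float,
           threshold: float = THRESHOLD_SECONDS) -> bool:
    """Store one measurement and report whether it breached the
    threshold. A monitoring service runs this on a schedule, from
    multiple geographic locations, and keeps the history around
    for later analysis."""
    history.append(elapsed)
    return elapsed > threshold
```

On a schedule you would call `record(history, check(url))` and alert whenever it returns True; the accumulated history is the data set you can later marry with sales and web-log data, as described earlier.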
MIKE PUNSKY: Thank you.
One other question that came in that's a little bit more
complicated is, in what way must a cloud-based system
actually be elastic?
What contingencies can be set to alleviate
cloud performance failures?
MIKE GUALTIERI: Well, the failures that we've seen--
I mean, there's different levels of failure.
The failures that we've seen, that have made the news, have
been data center failures, like the Amazon data center.
Lightning, I think, struck it.
And it was down for a little bit.
And what Amazon says--
and they're right--
is that you can deploy in multiple data centers.
They have two data centers.
But what's not so easy is that your software has to be
architected to handle that.
So it's real easy to just reroute web servers.
But what about the data?
It's not so easy, necessarily, to reroute the data if you
can't access the data.
And all these things are becoming interdependent.
So you have to design for fault tolerance.
And it really goes back to what I said about identifying
all the architectural components and then creating a
mitigation strategy.
So elastic is about automatically provisioning
your resources to handle load.
But it's also about fault tolerance.
So it is a more complicated question.
But my answer is kind of the parameters of what you
have to look at.
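The fault-tolerance point, rerouting when a whole region goes down, can be sketched as a simple failover loop. The region names and the `fetch` callback below are hypothetical; as Mike notes, real elasticity and data replication are considerably more involved than this.

```python
# Hedged sketch of multi-data-center failover: try each regional
# endpoint in order and fall through when one is unreachable.
# Endpoint names are made up for illustration.
def fetch_with_failover(fetch, endpoints):
    """Call fetch(endpoint) against each endpoint in turn and return
    the first successful result. Raises if every region fails --
    the case a contingency plan still has to cover."""
    last_error = None
    for endpoint in endpoints:
        try:
            return fetch(endpoint)
        except Exception as err:  # a real system would be more selective
            last_error = err
    raise RuntimeError(f"all regions failed: {last_error}")
```

Routing web requests this way is the easy part; the hard part, as noted above, is architecting the data tier so the second region actually has the data to serve.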
MIKE PUNSKY: Here's one.
For any application, who fixes the minimum system requirements?
Is this done by testers, developers, or somebody else?
MIKE GUALTIERI: It's done by somebody else.
And that somebody else is the data.

Someone doesn't need to arbitrarily say, these are the
minimum system requirements.
You can test it.
I mean, by system requirements I assume you mean the browser
client and maybe the mobile client.
The point is you need to test it against all those versions
on different platforms.
And then the testing will tell you, this is not going to work
on IE6, or, we're not going to go to IE6.
Now, if there's some sort of business requirement that says
it must work on there, and then testing says, well, our
current one won't, well, then there's a business decision.
Because there's a cost associated
with making it work.
But I love data because data can give us shortcuts to
helping make some decisions exactly like that.
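The "let the data decide" idea can be sketched as a tiny aggregation over cross-platform test results. The browsers, versions, and pass/fail outcomes below are invented for illustration.

```python
# Hedged sketch: derive the minimum-support matrix from test data
# rather than from someone's arbitrary declaration. The results
# dictionary is a made-up example.
def minimum_support(results):
    """Given {(browser, version): passed}, return the sorted list of
    supported combinations, so the 'minimum system requirements'
    fall out of the testing data."""
    return sorted(combo for combo, passed in results.items() if passed)

results = {
    ("IE", 6): False,       # testing says the current code won't work here
    ("IE", 9): True,
    ("Chrome", 20): True,
    ("Firefox", 13): True,
}
```

If a business requirement insists on a failing combination, the gap between this matrix and the requirement is exactly the business decision (and cost) Mike describes.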
MIKE PUNSKY: OK, let's see.
We have time for a few more questions.
Is it good to use a fancy UI component over a simple one
provided by the UI frameworks, like ASP.NET controls?

MIKE GUALTIERI: Can you ask that again?
Because I was fixated on the word "fancy."
MIKE PUNSKY: So is it better to use--
let's drop the word "fancy"--
is it better to use a different UI component than
the simple ones provided by the UI frameworks, like
ASP.NET control?

MIKE GUALTIERI: I don't know what you mean by "simple." I
mean, I wouldn't avoid the ASP.NET controls, per se.
There's a balance between functionality and performance.
So I really don't know how to answer that question.
I wouldn't say that there's a simple rule there.
I mean, I'm not a big fan of "keep it simple," because in
reality, things are complicated.
And if we're talking about e-commerce sites, it's very competitive.
And so sometimes a fancy tool that lets you try on the
clothing or view something from multiple angles, that has
trade-offs.
That can be slow to load.
It could be sluggish to use.
It could be hard to support on multiple platforms.
But on the other hand, it can provide a differentiated
shopping experience.
So I think it's a little more complicated than I can answer
based upon just one control, if you know what I mean.
MIKE PUNSKY: I just got a follow-up from the person that
submitted that question.
And according to him, "fancy" means third-party components
which might impact performance of page loads.
MIKE GUALTIERI: It's really the same answer.
Third-party components are great.
I saw some recent data--
I can't remember where I saw it-- but the average
e-commerce site makes calls out to 9 or
10 different domains.
So the use of third-party components is rampant.
And it's easy to understand why.
It gives you immediate functionality.
And it's not just the GUI visual component.
But there's a lot behind some of this functionality.
There's links to social media.
Or there's different shopping experience or catalogs.
I mean, just look at the number of
third-party APIs out there.
There's a huge number of them.

My advice is you can't make a rule that says, we're not
going to use third-party components.
Because there's just too much value there.
And it can help you differentiate.
It can help you get features faster.
But you have to go through that exercise that I showed,
on the bottlenecks, where you have to assign a confidence
level to those components.
And then you have to support each of those
four failure scenarios.
You have to have a mitigation strategy.
So is it costly?
Is it a bit scary?
But there are upsides to it, too.
So you can't just say no to it.

MIKE PUNSKY: And the final one for right now: for a good
test of an application, is it good to have different people
doing development and testing?
Or is it a good idea for the developer to do both?
MIKE GUALTIERI: Well, I have a very controversial blog post
out about that.

There's definitely a need to have separate testing.
But sometimes developers use testers as debuggers.

You're trying to meet the deadline and you can be a
little, yeah, that probably works, so I'll just throw it
over the wall.
And then the testers become debuggers.
And then the developers don't get--
it's just like a bug.
But no one goes in there and says, that's a bug you could
have avoided.
Don't do it again.
So I think there has to be some responsibility for the
developers for their own testing.
And there has to be some tracking mechanism that says
who has more bugs.
But even that's fraught with some problems.
So I think more of the burden of testing can be on
developers if they have the right tools.
But I think you also need that QA safety net, as well.
Thank you very much, Mike.
This was a great presentation.
And I just want to point out to everybody that we've gotten
a lot of questions over the past 20 minutes or so about
whether or not this session was recorded and, if so, where
the download will be available from.
And our plan is to have that recording available within the
next 24 to 48 hours.
We will send out an email to everybody who registered and
let you know where you can download that from.
And I want to thank everybody for attending today.
And thank you, Mike, for a great presentation.
And you can contact us if you have any further questions.
MIKE GUALTIERI: Thanks everyone.
MIKE PUNSKY: Thanks, everyone.