Mobile Web Performance

Uploaded by GoogleTechTalks on 14.12.2011

>> Guy Podjarny is the CTO and co-founder of Blaze. He's also the creator of Mobitest,
which is a mobile performance measurement tool. And he's also a contributor to the HTTP
Archive Mobile. And he also works with Pat on WebPagetest. He is a researcher
of mobile web performance optimizations and maintains the Blaze blog, which is almost like the Khan Academy
for performance optimization, where he does whiteboarding and explains concepts. And
he's a frequent presenter at security and performance conferences and events. His Twitter
ID is guypod. That's it. And he has a Google+ account, and you can see that Guypo has
been working on laptops from a very early age, so almost two years old. So, here is
Guypo. Thank you.
>> PODJARNY: So, everybody--thanks Ranku for
the intro. So yeah, the picture is of my son. He's two years old and enjoying my laptop
in the Google profile. So, I'm here to talk to you a little bit today about mobile web
performance. And basically, what we'll do is I'll give a brief intro. I think it's probably
preaching to the choir, but I'll give a brief intro about why speed matters and about my--why
mobile web performance matters. And then I'll dig into our view of mobile web performance
from my view of how to--how to split mobile web performance into the different aspects,
the different attributes of what mobile is, and then how's--what are the performance implications
of each of those and what can you do about it. And of course, then we'll summarize and
open for questions. So, Ranku already called out these intros--I'm the CTO of Blaze. We,
in a sense, try to save you the need to know any of these techniques by automatically making your
site fast by applying a variety of these optimizations. I work on Mobitest, which is today probably
the only way to measure page load performance in a kind of reliable manner on mobile devices:
on iPhone, on Android, and recently on BlackBerry. And yeah, in my spare time, to the extent such a thing exists,
I kind of research and blog, and try to take advantage of those two tools from the different
aspects--the ability to manipulate websites and the ability to measure performance--to
try and reach conclusions and understand what makes sites faster versus what--well,
what doesn't. So, onto--this is just kind of a ground setting--I think, probably, everybody
here--I'm going to be using waterfall charts through this presentation a little bit to
explain what is--what is the effect of some of these optimizations. So, you're probably
all familiar with it, but basically, quick intro: a waterfall chart visualizes
the load of a page. Each line represents a resource--a request being made within the
page, and a response. And the two metrics that I will be talking about are page load time,
which, in the generic sense, is when the browser decides the page is fully loaded, the document
is complete--sort of the load event, where the browser stops showing any progress indicators--and
start render time, the point at which a user stops staring at a blank screen and
something starts getting painted for him. So, I'll just be using this through this presentation.
So, a little bit about why we're spinning the cycles. So in general, users expect fast
sites, right? Google is, probably, one of the bigger, if not the biggest advocate of
this. This is one chart from an Akamai study; there are many others that show the majority
of users are looking for websites to load in about two seconds, three seconds at most.
They sort of have that high expectation, that fast expectation, about the website loading.
If they don't get it, they go away. They abandon--they abandon the site. They go to a competitor
site or a different site. The best illustration of that I can think of, from my own use,
is when I Google something and I click the first result. And if that first
result doesn't respond in a certain amount of time, I go back and I choose the second
result. And I think that materializes in many ways when you browse around the web. So, the
longer your delay, the higher the abandonment rate. Again, that has been demonstrated in
a variety of studies. I think this data is based on Forrester. It was published by KISSmetrics.
If you do give them the fast sites, they reciprocate by giving you better business and this is
a study from Shopzilla. There is a variety of those where if you improve the site, pretty
much any aspect of the business would improve from conversion rates to repeat views, et
cetera. As well, a fast site tends to be a leaner site and, therefore, makes
better use of your infrastructure; uptime is better, your costs are lower, et cetera.
So, switching a little bit to mobile, a parallel trend is mobile browsing. Mobile browsing
is clearly growing. These are some big examples: on Facebook,
mobile accounts for 19% of the traffic; on Twitter, 14%. The Financial Times sort
of--they abandoned apps and they created an HTML5 website, and they have 700,000 active
users on it. So, there's a lot of different indicators to show how mobile browsing is
growing. And the users are not really cutting you any slack for the fact that it's
mobile. They're expecting similar performance on mobile to what they get on desktop.
This is a study by Gomez. And what it demonstrates--they asked users, they
flipped the question: "Do you expect your desktop performance to be better or worse
than your mobile performance?" And, you know, in a nutshell, the
vast majority of users expect websites to perform equally or better on their mobile
phones than they do on their desktops. So basically, we have that expectation. That
expectation is growing. And again, the best illustration of that is your iPad on a Wi-Fi
network, right? You're browsing, you're staying at home, you're opening your iPad, you want
to browse the web. The expectation is that the browsing experience
would be equally good, if not better, than what you would get on your desktop or on
your laptop. And once again, if you don't deliver it, they will abandon the site. And the
numbers are similar--this is not exactly consistent
with the previous data; maybe they give you a second more, but generally they expect
those fast sites. So, we aim to make sites faster and we need to make mobile websites
faster. And the reason is to basically help the bottom line. When you talk about performance,
there's a variety of components to a performance of a webpage. Loosely speaking, we are splitting
that up into Front-End and Back-End time. A little bit simplistic where Back-End is
the generation of the webpage--the HTML page; that includes the load balancers, the
database access, getting the HTML page, generating it and sending it down. And the Front-End
is everything that happens after, which is primarily network time and browser
processing time. So today, I'll be talking about Front-End
performance and how to accelerate that. And that also depends on the
mobile components more than the Back-End, which is the same for mobile and desktop. Okay,
with all that intro--it's just background to be, you know, why are we here, why are
we spending time on this--into a little bit about what is mobile. So, what is mobile?
It's kind of a philosophical question--or, I don't know if philosophical,
but a hard question. Specifically for performance, I find the best way to split mobile up is
to talk about the mobile network, mobile hardware, and mobile software. And that helps address
questions like, you know, "Is the laptop with a rocket stick, is that mobile? Is an iPad
on a Wi-Fi network mobile?" And the answer is each of them represents an aspect of mobile.
Some of them mobile software, some of them mobile hardware and some of them are mobile
network. And each one of those comes with its own baggage, with its own performance
implications. And basically, through this presentation, we'll talk a little bit about
each of those components and what it means for performance. Feel free to interrupt me
with questions through the process, you know, no need to sort of wait for the Q and A, we
can have the longer conversations at the Q and A section. Okay. So, that is just background.
Now, digging into the meat. Start with mobile networks. So, the problem with mobile networks
is they're slow. When you compare stats about download speeds and upload speeds of mobile
networks, mobile is very inconsistent, but they range between five and nine times
slower. They're very, very variable: if you're browsing in the middle
of the day standing by the antenna versus, you know, out in the country or at
night from your home, numbers vary a lot. But while download speeds and upload
speeds are much slower, the even bigger killer in mobile networks is latency. So, on your
desktop connection on your broadband, you would expect latencies of 20 milliseconds,
of 30 milliseconds, the time it takes data to go to the server and back or at least get
to the web and back. So you'd expect very, very, very quick connections. On mobile, if
you get a hundred--if you get a hundred and fifty, you're in pretty good shape. And if
you are looking at averages, the variability's very, very great. Some measurements we've
done, as well as some studies we found about this, trying to average across different networks,
talk about a 300 millisecond average latency. And that's slow. And it means every time you
do an interaction, every time you try to talk to a server or you send a request, you try
to get a response, there's a lot of overhead, there's a lot of slowness. That's probably
the most important attribute of the mobile networks is that, they're slow and that they
have that high latency. So, looking at what you can do about it--and in general, this
is the format we're going to follow, talking about the problem and then talking about solutions--
the primary thing you can do is reduce requests. Why reduce requests?
First of all, logically: the cost of an interaction with the web server is high.
Every time you make a request and get a response, you pay a third of a second
just for that interaction; it matters and it slows you down. So, reducing those
interactions accelerates your site. The kind of more mathematical reason is, we tried
to assess a variety of devices--we measured, I think this was the Fortune 500
websites, but it doesn't really matter; it's a set of 500 websites--and tried to
see which variable within the page correlated the most with load time. And across the
mobile devices, the number one correlating factor was the number of requests. So, the
first one is a little bit more of a logical understanding and the second is a more maybe
statistical analysis. And of course, there's a lot of different ways you can measure but
both can lead you to the same conclusion of reducing the number of requests being the
number one thing you want to do to reduce the page load time. And of course, the second
thing you want to do, which is a little bit more intuitive, is reduce bytes, right? If
you need to download a megabyte, it's going to take you more time than if you need
to download 100K. That's--it's a little bit more intuitive. Yup?
>> [INDISTINCT] question [INDISTINCT] the latency. Do we expect the latency to go down
in the next five years by the new radio technology or...
>> With LTE and those components. Well... >> [INDISTINCT]
>> So, the question is whether we expect the latency
problem to be temporary--whether, as cellular technologies improve, it
will go away. >> Yeah.
>> And--I guess there's no--I don't have a crystal ball but some initial studies on LTE
show that the latency is still significantly higher on LTE than it is on the broadband
environments. So, I think the gap would always exist. And it's also logical, there is more
interference. And there is--you can only optimize the communication channels so much as compared
to broadband. So, they will improve but they would probably still--when you talk about
mobile versus desktop, and the mobile network versus broadband, the gap would probably persist.
>> Well so this is--the question is whether--why is the bar low on the Nexus and on the Android
devices for the number of bytes. Why do they not correlate? So, I don't--I don't have,
sort of, an exact answer; these are, sort of, the results that we got. Our interpretation
was that there are other variables. These are the variables that correlate the
most with page load time. So, if there are other components or other reasons that
drown out the correlation--there's a relationship between the
bars, is what I'm trying to say--if one component drives page load time
enough, then the other components would not contribute to page load time above
and beyond it too much. So--I will kind of [INDISTINCT] at this a little
bit, but this is about four or five months old, and we constantly improve the measurement
technology. So it's also possible there are, kind of, some tweaks in the
data that can still be improved. So, let's see a visual about this. Now generally, when
I give this presentation, I choose the company in question--the one I'm
presenting at--to demonstrate this. I've got to say, on the Google websites, I couldn't
apply this optimization and get this effect. But this is an example on the Wal-Mart website
where I applied a single optimization called on-demand images, or just-in-time images, which
makes images on the page only load as they scroll into view. So, this is a good optimization
that basically, in one stroke, reduces both the number of requests and the number of bytes.
The images that are below the fold do not get loaded until they actually scroll
into view. So, on the Wal-Mart website--and this is a mobile site, this is just the apparel
section--you can see that that took us down--it's kind of hard to see here--from
about 330K to about 222K. So, it cut down about a third of the amount of bytes
and also about a third of the number of requests. And when you look at the effect that that
had on the visuals, it was dramatic. I probably should have put the waterfall charts here.
But what happens is all of those images were downloaded in parallel. So, because we cut
down a lot of those image requests, the bandwidth was used entirely for the images above
the fold. They weren't contending for that download speed, and
they downloaded that much faster. So--we ran multiple measurements
and we pretty much consistently got those numbers. These are two-second intervals. So
basically, we go down from the eight or ten seconds it took for the page to load to about four seconds.
And granted, when the user scrolls down, the images would
need to be fetched. And if you want, you can proactively fetch them after onload, but
you focus your attention on only loading things that are within view in the initial page load.
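The on-demand image technique can be sketched in a few lines of JavaScript. This is an illustrative sketch, not Blaze's actual implementation: the `data-src` attribute convention and the function names are assumptions, and a production version would also throttle the scroll handler and react to resize and orientation changes.

```javascript
// Pure check: does an element's bounding box overlap the visible viewport?
// rect is {top, bottom} in pixels relative to the viewport.
function isInViewport(rect, viewportHeight) {
  return rect.bottom > 0 && rect.top < viewportHeight;
}

// Browser wiring, guarded so the sketch also runs outside a browser.
// Images are authored as <img src="blank.gif" data-src="real.jpg">, and the
// real URL is swapped in only when the image scrolls into view.
if (typeof document !== 'undefined') {
  var loadVisibleImages = function () {
    var imgs = document.querySelectorAll('img[data-src]');
    for (var i = 0; i < imgs.length; i++) {
      var img = imgs[i];
      if (isInViewport(img.getBoundingClientRect(), window.innerHeight)) {
        img.src = img.getAttribute('data-src');
        img.removeAttribute('data-src'); // load each image only once
      }
    }
  };
  window.addEventListener('scroll', loadVisibleImages);
  loadVisibleImages(); // load whatever is above the fold right away
}
```

In one stroke this trims both requests and bytes for everything below the fold; if you want the remaining images anyway, you can kick off the same kind of fetch proactively after onload.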
So, you get an idea of the dramatic effect. This is the same data on MSNBC. It's just to say,
you know, this spans a variety of different mobile websites--e-commerce, news. So, same thing:
saved about 15 out of 43 requests, reduced about a third. This is consistent
with stats we'd see on a lot of different mobile websites. And once again, it shaved
a lot of time as well. So, that's it about slowness. The other aspect of mobile networks
is, as though slowness weren't bad enough, that they're unreliable. Packet loss is very
high. That one is incredibly variable; it's really hard to give a stat for the average.
You know, when you're getting out of a subway and you have four
bars on your phone and you still can't get anything done, that's usually packet loss
in play. Everybody is using it at the same time, there's a lot of noise, the
channels aren't being maintained well, and packets get dropped. So, you just
can't get anything to actually complete the transactions. So, it goes from very, very
low packet loss if you're, you know, in a good reception area at night to terrible
packet loss in other environments. The impact of that on your performance is that you need
to think about the fact that no resource is safe. At any point in time, any resource, any file
on the page may get delayed, may suddenly take a long time because packets got dropped
and it takes a while to retransmit. So, you can't rely on a, sort of, fixed amount
of throughput or speed. So, the best solution we have for that is to eliminate the single
points of failure. As much as you can, you want to make all the parts of the page as
independent of each other as possible. So that one would not delay or block another.
And this way, you're addressing this problem a little bit, so that, at the
very least, if one resource gets delayed, it does not hold back another resource. This
is also good for latency: because we're doing more things in parallel and not blocking
one resource on another, we're also helping accelerate the site in high-latency environments.
>> [INDISTINCT]
>> For sure, yes. So the question
is: doesn't this collide with the notion of reducing the number of requests?
And, you know, it does to a certain extent, and to another extent it doesn't. There
is always a trade-off. If you combine a lot of different files and you download them as
one request, for instance, then you're making that file more of a single point of failure
while reducing the number of requests. You probably need to measure and to test what is the right
thing on your site. But what we do, and what I would recommend, is once you've
reduced the number of requests, try to make those requests that remain on the page as independent
of each other as possible. So let me give you a couple of examples. So when you look
at a webpage today, the single points of failure are primarily scripts and CSS files.
These are the files, in a browser in general and specifically in the mobile browser, that
would block downloading of other resources. They would keep the browser from doing anything
else with the rest of the page until that content got downloaded--until the scripts
and CSS files completed. IFrames and fonts in various settings are also blockers. "Single
point of failure" may not be the right term, maybe--just kind of blocking resources.
Either way, those are resources that delay other resources: until they get downloaded,
nothing else does. So the way to address these, above and beyond reducing
requests, is to just make them not block. You can use asynchronous JavaScript to make scripts
download without blocking other resources--download them and execute them with your
own simple inline scripts, not
the naïve script source links. You can do a similar thing with CSS, where you can
set the body to visibility: hidden if you don't want a flash of unstyled
content to show up, download the different CSS files, and then once you've got all the CSS
files, turn the body visible. But in both of those cases, you haven't really accelerated
the scripts or the CSS files themselves, but you kept them from delaying other resources.
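The two non-blocking patterns just described can be sketched as below. The helper name, the URLs, and the decision to pass the document in (which keeps the sketch testable) are my own; `link.onload` support also varies in older browsers, so treat this as the shape of the idea rather than production code.

```javascript
// Build a script element that downloads without blocking the parser,
// unlike a naive <script src="..."> tag in the markup.
function makeAsyncScript(doc, src) {
  var s = doc.createElement('script');
  s.async = true; // execute whenever ready; don't block other resources
  s.src = src;
  return s;
}

if (typeof document !== 'undefined') {
  var head = document.getElementsByTagName('head')[0];
  head.appendChild(makeAsyncScript(document, '/js/app.js'));

  // CSS variant: hide the body, fetch the stylesheet, then reveal it,
  // so a slow stylesheet can't hold back the rest of the downloads.
  document.body.style.visibility = 'hidden';
  var link = document.createElement('link');
  link.rel = 'stylesheet';
  link.href = '/css/site.css';
  link.onload = function () {
    document.body.style.visibility = 'visible';
  };
  head.appendChild(link);
}
```

Neither trick makes the script or stylesheet itself arrive sooner; it only stops them from delaying everything else.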
So here's a visual--it's a little bit hard to see--of the New York Times website,
showing what asynchronous JavaScript alone does for the initial render time, talking about those
single points of failure. I've only applied asynchronous JavaScript
to this page, so the scripts are not blocking the rest of the content
on the page. And while the load time at the end is the same--this is about 12 seconds,
the load time is the same for both cases--the user experience is drastically different.
In the first experience, the script got downloaded--it's hard to see, but there's a little bit of text
over here at the top, and then there's a script that generates a couple of ads here at the
top, so they got downloaded after. And then the script is going off and it's fetching
another piece of ad that's showing up here and kind of rendering over here. So every one
of those scripts delays the rendering, delays the downloading of resources; the user
experience is that much worse, instead of getting the bulk of the
page at around four seconds and then having the data fill in. And when you look at kind of a snapshot,
the filmstrip--oh, sorry, the waterfall chart--you can basically see these stairs,
these blockers, where each one blocked other resources. Yup.
>> This is an analysis. This is some of the worst case analysis [INDISTINCT] so it has
this--it has [INDISTINCT].
>> PODJARNY: Yes. So the question is how does
this get affected by caching--is the situation in the real
world that a lot of these resources are cached, so maybe it's not a big deal, or the impact
is not as drastic? I have a whole slide about mobile cache sizes, which is a whole pain point
by itself. So I don't know that on mobile the expectation that they will be cached holds.
But also, if you make scripts asynchronous, you're not really getting in the way of caching;
you're still going to take advantage of caching. So maybe there are scenarios
in which this is not a problem, but there are many scenarios where there is a problem
and this will optimize it. >> [INDISTINCT]
>> PODJARNY: Yes. So in this case, it's hard to give an exact example
of packet loss. Packet loss is random. This actually demonstrates the latency
impact, maybe, or just the blocking effect of the resources--we are downloading more in
parallel--as opposed to packet loss. But it's the same effect that could happen if scripts
got delayed. I guess my point is that, if scripts got delayed, they would push
off any rendering on the page, while if you use asynchronous JavaScript and the script
got delayed, the bulk of the page would still be rendered.
>> [INDISTINCT]
>> PODJARNY: Correct. Yeah, yeah--if one
of those scripts suddenly acts out and gets delayed. Yup.
>> [INDISTINCT] >> PODJARNY: This is--yes. So this is a real
measurement from an iPhone. >> iPhone?
>> PODJARNY: Yeah. And iPhone indeed supports up to six connections per domain; I
think on the latest one, I got it to do at least 60 connections in parallel overall.
So it makes a lot of requests in parallel. >> [INDISTINCT]
>> PODJARNY: I know that android limits the number of connections more. And it's true
that if the number of connections becomes the bottleneck, you've got a different problem
as opposed to making things asynchronous. So that one was a specific example. Just
to do a bigger example, I grabbed the top
ten mobile news sites and ran the same experiment on them, applying asynchronous JavaScript
and looking at the start render time. And it showed a similar take on the same bit of info
as you increase the latency--and this was us simulating latencies. So again, a little
bit more about latency than single points of failure, but it's the same notion, the same
problem. The start render doesn't get very affected if you're making scripts asynchronous,
if you're not blocking the entire rendering because of those scripts that need to get
fetched, parsed and executed. So that's probably the
second aspect of mobile networks. Now, this applies not just to mobile devices, right?
This is also for your laptop browsing over a rocket stick. The last thing I'll talk about
is the non-persistent channel. So mobile networks
have a limited amount of spectrum, and mobile devices have a limited amount of power. Those
components combined make mobile clients and cell towers not maintain
a constant communication channel. If you pick your phone up out of your pocket,
a new connection gets established to the cell tower just
to get a single byte through. And that has a toll; it takes a couple of seconds to set
up. And if you were scrolling the page and reading it for, you know, 10, 20 seconds, you
weren't really invoking any additional network activity, then that connection would get
dropped--maybe going to half power first and then powering down entirely. So when you
click the next link on that page, the same thing is going to happen; it's going to need
to reestablish that connection. So that's a behavior that's sort of needed in a shared environment.
You as a website owner don't really have much to do about this when somebody just picked
up their phone and loaded the first page. But what you can do is try to keep
that connection alive as the user browses the site, to
accelerate the next click they're going to do. So if you are browsing the page and you're
going to click another link, you save the user a second or two of loading once they click that
link, if you maintained the communication channel alive. This is a little bit
of a bad-network-citizen type of behavior. It means that you're going to do
a dummy Ajax request that runs every couple of seconds to the cell tower
and just gets, like, a one-pixel bogus response. And all you're doing is maintaining
that communication channel to the cell tower. And it's a little bit of a gray area;
you're kind of cheating. You know, like with TCP Fast Start, everybody is concerned
about: if everybody does this, would the internet melt down, right? Same thing here. My view
on it is, it's not a big deal compared to video streaming or compared to online
gaming on your phone. Next to those, doing a very lightweight Ajax request every
two seconds while the user is actively browsing the page is probably a drop in the ocean,
both in battery consumption and in, kind of, cellular consumption.
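The keep-alive ping could look roughly like this. The endpoint name, the two-second interval, and the two-minute cap are illustrative choices based on the numbers mentioned in the talk, not a real implementation.

```javascript
var PING_INTERVAL_MS = 2000;     // ping every ~2 seconds
var PING_MAX_MS = 2 * 60 * 1000; // ...but give up after 2 minutes

// Pure decision: should we still be pinging at this elapsed time?
function shouldPing(elapsedMs, maxMs) {
  return elapsedMs < maxMs;
}

// Browser wiring, guarded so the sketch also runs outside a browser.
if (typeof window !== 'undefined') {
  var started = Date.now();
  var timer = setInterval(function () {
    if (!shouldPing(Date.now() - started, PING_MAX_MS)) {
      clearInterval(timer); // stop being a bad network citizen
      return;
    }
    // /keepalive.gif is a placeholder for any tiny, uncacheable response.
    var img = new Image();
    img.src = '/keepalive.gif?' + Date.now(); // cache-bust so it hits the network
  }, PING_INTERVAL_MS);
}
```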
But you're optimizing for your use case. That said, I would suggest that you put some timeout
on it in your page and you don't keep doing it forever. So you know, we go with
two minutes--sort of something you can do. The exact time of how long it takes to
establish a connection, and how long it takes for an idle connection to get dropped, seems to
vary by device and by carrier. Two or three seconds seems safe to maintain
that connection alive.
>> [INDISTINCT] there's a--we have a thorough
discussion on this [INDISTINCT] signal that's actually causing power [INDISTINCT] just by
sending all these little packets. And in fact the [INDISTINCT] so loaded.
>> PODJARNY: Fully loaded? >> Just to avoid any sort of [INDISTINCT]
>> PODJARNY: Right. So, yes. The comment is that the power consumption is
actually quite significant. I would say I can see that it has a toll. We tried
to measure it; it was kind of hard for us to really get a good estimate on how much
impact it makes. Logically, we thought it wouldn't be a big deal. But I do agree
that this one is a little bit of a gray area from a kind of cleanliness perspective, because
there's a reason those connections are made ad hoc. Okay. So that's basically
it about mobile networks and those are problem or problematic component--the most problematic
component when we talk about mobile. But we still have a couple of more--we've got mobile
software and we've got mobile hardware. Each of them coming with its own baggage. So mobile
software in general is cool, it's good, it's a friend. It's modern, it's performance-aware,
it uses advanced browsers that are HTML5- and CSS3-compliant, or support those, with smart
look-ahead; you know, it's good browser software. It's probably better on average than
desktop software because it's newer, it's fresher, and it's more performance-aware.
So generally, it's a friend for performance. The challenge from a website owner's perspective
is that it sometimes imposes limitations or makes some decisions you may not agree with,
or that may not fit your model or what you want to do. So I'll look at a couple
of examples of that. And the first is mobile cache sizes. It's probably the, sort of,
single most--yeah, impactful... >> [INDISTINCT]
>> ...problem that mobile software imposes on you today. So mobile cache sizes are very, very
small. On Android, the kind of factory setting of the stock handset is about four megabytes
of cache. From our measurements, on the Xoom, Motorola changed that; it
went up to twenty. But generally, it's still a very small cache. And from what
we can tell, that's both memory and disk cache. On iOS, the memory cache goes to pretty significant
sizes, but it's very, very volatile. So, if you browsed a website, then you went on and
played a little bit with apps and read some emails, when you get back to your
browser, that cache is very inconsistent--you really can't rely on it. And the persistent cache is
zero. So if you kill the browser process, or if you restarted your device, the cache just
completely goes away. BlackBerry--this is about the only race BlackBerry wins--has 25 megabytes
of cache. So when you look at the mobile HTTP Archive today, a mobile page is about 400K
in size. So four megabytes on Android would give you about 10 pages. It's not that clean,
because maybe some resources are shared, but it's a very, very small number. Even on desktop,
cache sizes are too small--[INDISTINCT] keeps kind of mentioning that fact, and it's true.
But on mobile, you have a bigger problem. So you can always assume on mobile that your
resources would not be there in the cache when you come back to it the next
day. I know there are now smarter eviction policies in play. I
know that Android tries to keep far-future-expiry components over older
ones. And the browsers are attempting to do things now like prioritizing--say, keeping
CSS in the cache over images, things of that nature. But this has been sort of
supported in some of our measurements, trying to see the real cache status on our
own website: it really is just very small sizes. So, what can you do about it? And this
has sort of a side benefit as well. You can use local storage instead: while mobile
cache sizes are small, pretty much all the mobile devices today offer you local storage.
They give you five megabytes it's really two and half megabytes in text because UTF16 characters
take two bytes each of data. And that data is dedicated to you, it's--it doesn't go away
easily. It's actually even manually often pretty hard to clear. And you can use it to
store primarily CSS and JavaScript files and cache them yourself. You need to do a little
bit more work to get this done. The advantage of it is your using the space in a way that
is--fits your needs or you support, whatever makes sense for your websites. So, you can
store the things that are most impact full for performance and you can manage that. The
other-- the other side effect of that is you're creating what we call scriptable cache. So
you're creating a cache that you have script access to. So you can do smart things like,
what we call adaptive consolidations. So you can download one file and then store fragments
of it in the cache's individual items, so that when you go to the next page you would
know that, you know, if only one piece of the file is different, you only fetch that
piece. That's sort of a good way to reduce the request without really interfering with
caching. So, using local storage is more effort but it's a dedicated cache. It wouldn't be
robust, you know if you restart the device, if a user browse a thousand website, doesn't
come back to your site afterwards, generally it will be there. And it also helps you with
giving you that scriptable cache and you hope the browsers over time and, you know, maybe
the android browser can leave that--would give you that scriptable access because then
you can do smart things with it. The next piece about--yeah, I'm sorry.
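The localStorage-as-cache idea described above can be sketched roughly like this. The resource names and version signatures are hypothetical, and the in-memory fallback object exists only so the sketch is self-contained outside a browser; in a real page, `store` would simply be `window.localStorage`:

```javascript
// Sketch: treat localStorage as a scriptable cache for CSS/JS, keyed by a
// content signature so entries never need time-based expiry.
var store = (typeof localStorage !== 'undefined') ? localStorage : (function () {
  var m = {};
  return {
    getItem: function (k) { return Object.prototype.hasOwnProperty.call(m, k) ? m[k] : null; },
    setItem: function (k, v) { m[k] = String(v); }
  };
})();

// Save a resource under a signature-based key. Because the signature changes
// whenever the content changes, cached entries can be kept "forever".
function cachePut(name, signature, content) {
  store.setItem('cache:' + name, JSON.stringify({ sig: signature, body: content }));
}

// Return the cached body only if the signature still matches the one the
// server advertises for the current page; a mismatch forces a re-fetch.
function cacheGet(name, signature) {
  var raw = store.getItem('cache:' + name);
  if (raw === null) return null;
  var entry = JSON.parse(raw);
  return entry.sig === signature ? entry.body : null;
}
```

The adaptive consolidation idea mentioned above is the same mechanism applied at a finer grain: store fragments of one combined download under individual signature keys, and re-fetch only the fragments whose signatures changed.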
>> [INDISTINCT] >> PODJARNY: App cache and local storage are
the same and not the same. The most noteworthy--so today, app cache is pretty finicky to
use. And it's also a little bit more designed for the page itself to be cacheable, and then,
alongside that, all the different resources, while local storage can be used for a dynamic
page that's not cacheable itself but where you want to cache the different resources on
it. The other aspect is you get scriptable access to it, which you don't for app cache,
as far as I know. >> So [INDISTINCT] applies that the webpage
I guess [INDISTINCT] it's going to go away, right? [INDISTINCT]
>> PODJARNY: Correct. Yes. So if you use local storage for caching, you need to make sure
that you build the cache logic into it. It's not a big deal; we actually had a webinar,
posted on our website, with a set of slides that walk through exactly what you need to
do. But yeah, it puts the [INDISTINCT] on it. This is a little bit--I guess this presentation
is tuned for a website owner versus a browser developer. But basically, as a website owner,
you need to put some logic into managing that cache. It's not a ton of code, but the
[INDISTINCT] is on you. The other aspect is that using local storage for caching the way
we recommend it assumes versioning. So it assumes there's no real expiry due to time; it's
just a cache, and you save something in it. And once you save it, it's signature-based,
so you can cache it forever. It's only as you cross the threshold of how much data you're
willing to store that you start churning files out.
>> So people [INDISTINCT] your email [INDISTINCT]
or storage, or is there some way that the user could...
>> So local storage is not a cache from a browser perspective. And therefore, if people--it
has quota limitations, so it can only store up to a certain amount of data; whether that
data is used wisely or not is up to the webpage. If the webpage is bad, its use of the data
or of the space would be bad. >> But it's per domain?
>> Yeah. That's what the products are [INDISTINCT] >> Yeah.
>> So it's not like some other group of sites, you know. [INDISTINCT]
>> Yeah, the quota is per top-level domain... >> Yeah.
>> Yeah. [INDISTINCT] >> So the next important software difference
between Mobile and Desktop is Pipelining. HTTP Pipelining, the idea of sending multiple
requests on a connection at once, without waiting for the responses, and then waiting for
those responses to get back, is an idea that's been around for a while, since HTTP 1.1. And
it's barely used on desktop: it's enabled by default on Opera, and it's an option on
Firefox, but it basically is non-existent on desktop. On mobile, Android uses it, at least
stock Android; Opera uses it; and now, with iOS 5, iOS uses it. So it is much more
prevalent, and it actually accounts for the majority of browsing, 65% worldwide, probably
more in North America. So you definitely want to acknowledge that, or you want to, sort
of, be aware of the
difference between mobile and desktop. Really, the single biggest risk about Pipelining
when you're building your own website is the head-of-line blocking: the idea that you
would be sending requests for three resources; the first one would be slow, the next two
would be fast, but the slow resource would delay the next ones. If the first resource took
three seconds, five seconds to return, the next ones could be immediate; it doesn't
matter, it's still going to take five seconds until you get those back. Well, there's no
way that I know of to just kind of tell the browsers, yeah, don't combine these components
in a pipe, etcetera, except putting them on a different domain. So the best suggestion I
have for how to handle that as a website owner is to separate out the dynamic pieces and
the static pieces onto different domains. The browser would not pipe requests for
different domains on the same connection. So you kind of have a fast domain and a slow,
or possibly slow, one: you put the static resources on the fast one and the dynamic ones
on the slow one. It has the disadvantage, maybe, of creating yet another connection by
going to another domain; on some browsers, like on iOS, that's sometimes even an
advantage. On Android, the max connection limit is the same per domain and for the entire
device, so it's a little bit neutral. But it at least addresses this problem. The other
aspect is, you want to be aware that different browsers have different heuristics for how
they identify whether pipelining is supported. Sometimes it's per connection, sometimes
it's per server. Sometimes requests are first put into a pipe and then distributed across
connections; sometimes the other way around. So we have a detailed blog post digging into
all the different browsers and how they do it. And then on Android, it also varies by
manufacturer. So this is just an example of it on the Galaxy S. The Galaxy S has been
modified to allow 12 connections in total, as well as per domain. It may pipe as many as
six requests on a single connection. This is just a screenshot from Wireshark capturing
that data.
And then, you know, like on those other details about whether data gets put into a pipe or
a connection first. >> [INDISTINCT]
>> So, yeah. I guess I already talked about this. So you basically want to make sure you
support Pipelining. And I would sort of suggest separating out the slow resources and the
fast resources onto different domains to address the head-of-line problem. I know there
are protocols like SPDY. I think they're better conceptually than HTTP Pipelining, but
they're not the reality today. They might be the future, but they're not the reality
today. It's probably going to be a while before they become the reality, both from a web
server infrastructure perspective and a browser support perspective. Specifically on
mobile today, no browser, except maybe the Kindle Fire, which is now building on it,
supports it publicly. All right. Sir.
>> Last piece of mobile software is that there's too much of it. And this is a little bit
the despairing part of the presentation. So if you're a website owner, you're looking at
this: there are a lot of OSes, there are a lot of different browsers, and there's very
little visibility. Even Android, which is open source, is only open-sourced in a delayed
manner. And the vendors, the different device manufacturers, can change things, and they
don't advertise that. It changes frequently; I don't remember the exact number of Android
versions recently, but just in October we got a new version of iOS, Blackberry, Android,
and I think even Nokia. So it changes all the time, and there's very little visibility and
understanding into it. There are kind of fewer tools; what follows is just a snapshot. So,
beyond the high-level split of iOS, Android, Blackberry--even within Android, it actually
varies quite a bit. This is just a kind of a thread snapshot
of the Samsung Galaxy S, the Samsung Nexus S and the Motorola XOOM. And just specifically
on the number of connections aspect. It's really hard to see in this view, but the Galaxy
S opens up to 12 connections and seems to have a thread per connection; I guess you guys
probably know this better than I do. The Nexus S seems to work the same as the simulator:
it opens four connections, again a thread per connection, and uses Pipelining. Motorola,
well, decided to scrap all of that and go to a more classic desktop browser: no
Pipelining, no thread per connection, and support for up to 35 connections. So even
within Android, and this is just a snapshot, you're not guaranteed what behavior an
individual device has. And yeah, this is again another example on Android: website owners
also have the challenge of identifying Android tablets, which is difficult. We've done a
study and saw that, at least a few months ago, the Motorola XOOM pretty much always got
the same sites that the Nexus S did, while the iPad would often not; so the XOOM would
very often get a mobile site, and that's a bad user experience. I'm hoping these things
can improve, but it's hard from a website owner's perspective to separate that, because
it's not the same as just looking for the word iPad in the user agent. And then changes
happen all the time. Just with the iOS 5 change, we enumerated about 10 different
performance-related changes that mattered: things like adding support for async scripts;
on the flip side, making CSS in many cases block downloading of other resources; and
differences or changes in JavaScript performance. So again, these are all posted, and the
goal is not to go through each one of these, but there are important changes about
performance happening in every one of these versions. And they change all the time, so
it's a pain for a website owner. Now, unfortunately, I don't have a magic solution to
this. As a website owner, all I can say is: focus. Try to make sure that you focus on the
right platforms, the right devices for you: Android and iOS in North America; Opera and
Blackberry a little bit more when you talk globally.
So this is just the browser stats from the last three months showing Android. And then
this one is iOS: iPhone and iPod Touch.
>> [INDISTINCT] >> These are percentage market shares of browsing
based on >> [INDISTINCT]
>> No. Browsing sessions--like how much of the browsing activity, browsed pages or
browsed data. I'm not entirely sure if it's pages or data, but it's browsing activity
as opposed to number of units out there. >> [INDISTINCT]
>> This is mobile browsers in general. iPads don't really populate anywhere up here, so
I'm not sure if they measure iPads. But the iPod Touch shows up, and the iPhone shows up.
So I think either way, iOS is already important enough for you to optimize for, and then
the iPad is obvious--yeah. >> [INDISTINCT]
>> Yeah. >> [INDISTINCT]
>> PODJARNY: So the question is, how does HTML5 play into this world of fragmentation
and get supported... >> fragmentation of a major [INDISTINCT]
>> So, I think the HTML5 standard is not as standardized as we would want it. But what
really varies is not people's interpretation of it, just the support level. The
fragmentation does matter, because you can't assume the same level of HTML5 support
across the different devices. So it's just about needing to, you know, either settle for
the least common denominator and only use things that are supported across browsers, or
get fancy and use the features available to you where they are, like you do with async.
I'd say in general, HTML5 is a good thing to help address the fragmentation, by creating
a standard. But because it's not fully supported by anybody, and it's not really fully
defined yet, it's not an entirely full solution.
>> [INDISTINCT] >> PODJARNY: From what I see, people don't
tend to take advantage of the sophisticated HTML5 capabilities too much. I think that's
affected by fragmentation, the different levels of support, and just keeping track of
that. But some of it is just about the time it takes to adopt these things. So anyway,
that's about all I have on this right now. Basically, your best bet is to measure as much
as you can, try to focus on the right aspects, sort of understand what environments
you're aiming at, and keep track of when they change and what their exact attributes are.
So it's not a magic solution; it's just something that you need to work on. So the last
bit I have, and I'm going to kind of rush through this just because we started a little
late and we're already probably over time--is it a big deal if I go a few minutes over?
I guess I'm going a few minutes
over. It's Mobile Hardware. So we talked about software, we talked about networking, and
the last piece is Mobile Hardware. So the first, and probably most well-understood or
discussed, aspect of mobile hardware is that it's slower; it doesn't have the CPU power,
the capabilities, that a desktop or a laptop has. If you look at JavaScript benchmarking:
software alone--and Android was already here--has taken iOS from 10 seconds to about 3.5
seconds to complete a SunSpider benchmark. So it improved quite significantly. Additional
hardware, with the iPhone 4S doubling the CPU, got it down to 2.2 seconds. Running the
same test, my laptop takes 230 milliseconds, so it's still 10 times faster. The gap is
still very, very big. Android is on par--the numbers today pretty much match--and
Blackberry is far behind. So the weaker CPU is significant.
We also tried to understand how this actually plays a role in real websites. So is this
just a problem if you're building a JavaScript game, or does it impact your day-to-day,
regular website? So we took the top 100 websites in the U.S., as measured by Alexa, and
measured them on an iPhone 4, an iPhone 4S, and the iOS Simulator, all running iOS 5. The
advantage of using iOS for this is that the iOS 5 Simulator actually compiles to the
hardware on which it runs, so it's truly the laptop hardware that's running this
measurement. And we saw a big impact. Doubling the CPU, the iPhone 4S went from 3.4 to
2.9 seconds on average, and this is over a fast connection, Wi-Fi, overnight; we tried to
take the network component out of the equation. So it went from 3.4 to 2.9 seconds, and
running it on a laptop halved that speed again. So some of it is JavaScript, some of it
is the cost to render images, some of it is the cost to open connections and how fast
those components work. All those components...
>> [INDISTINCT] >> PODJARNY: This measurement isn't for rendering;
it's for load time, and it's basically the point in time the browser told you. So it's
the load event... >> Oh, okay.
>> PODJARNY: ...of the browser. That may or may not reflect the right load time for that
page, but either way, it's consistent across these three different devices; it's all
iOS 5 running just on different hardware. So it matters. It matters for a regular website
as well. So, I'm missing a slide about what you can do about it; I had it here and I keep
forgetting to unhide it. But basically, what you can do about it is, as much as you can,
you should avoid JavaScript. That's a good option--well, it's not a good option for a
tablet, for iPads, where you need rich content; it's a little bit of a better option for
your Smartphone. But you definitely want to try and reduce your use of JavaScript.
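One common shape for that advice is to defer non-critical script work until after the page has loaded. This is a generic sketch, not code from the talk: the queueing logic is kept separate from the DOM so it's self-contained, and in a browser you would call `loader.onLoad()` from a window "load" listener, as the comments indicate.

```javascript
// Sketch: defer non-critical script work until after the page load event,
// so it doesn't block anything during page load.
function createDeferredLoader() {
  var loaded = false;
  var queue = [];
  return {
    // Run fn now if the page has already loaded, otherwise queue it.
    defer: function (fn) {
      if (loaded) { fn(); } else { queue.push(fn); }
    },
    // Call once when the load event fires; drains the queue in order.
    onLoad: function () {
      loaded = true;
      while (queue.length) { queue.shift()(); }
    }
  };
}

// Browser wiring (hypothetical):
//   var loader = createDeferredLoader();
//   window.addEventListener('load', function () { loader.onLoad(); });
//   loader.defer(function () { /* inject analytics, widgets, etc. */ });
```
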
It's not just JavaScript; it's also things like reflows, like rendering an image. If you
add components or change the visual elements in a significant manner--you inject some big
node at the top of the page, you inject div elements across the page--you're causing the
browser to re-render. Try to reduce that, or reduce the complexity of it. It's not
trivial. The easiest thing you can do is just try to eliminate JavaScript or reduce
JavaScript. Since you can't eliminate JavaScript entirely, the second best option is to
defer JavaScript: try to push it off, try to make it run after the page load, not having
it block anything during the page load time. So, my apologies for not having--I have a
slide that kind of demonstrates those, but I think that is probably the aspect that is
most well-understood. The second aspect of mobile hardware is that it's smaller. It kind
of needs to be, to get into your pocket, to be carried around, to be mobile. And by being
smaller--and we're talking primarily about the Smartphone and Tablet form factors--the
smaller devices are a design challenge, but for performance, they're an opportunity.
There's nothing slower about the smaller device; it's an opportunity, because you can
create a lighter-weight website. And even if you don't create the lighter-weight website,
what you can do is adapt to the size. The most obvious example of taking advantage of the
smaller screen sizes is called Responsive Images: the idea of resizing your images to the
display size, and it can make a very big impact. This is an example from the Lonely
Planet website. If you just take one of these banners that they have here at the top--not
banner ads, but kind of the main images--the full-resolution image is 50K; it's a pretty
sizable image. If you reduce that to the resolution that an iPhone 4 can present, 480
pixels, you cut 60% of that size. If you go to what a 3GS can present, you're going to
10K; you're halving that again, and you can go further. Now, this is a pretty rich page,
and basically, even at the high resolution of an iPhone 4 and the equivalent Android
devices today, you're still cutting the page size by a third, let alone on older devices.
The iPad resolution is actually lower than the iPhone 4's, so on some tablets you'd be
gaining as much or more. And if you go for a higher resolution, you're not really gaining
much. And if you want to be very, very advanced, what you could do is load the
full-resolution images on zoom, just in case--if you're kind of finicky about the quality
of the image that you're displaying. But when the page is zoomed out, the user cannot
appreciate a better image; they're just not going to see it. And the last bit again
is a bit more technical, about the mobile devices that use a touch screen. Again, I'm
trying to think about the differences between desktop and mobile, and the input device is
different: touch screens are basically non-existent on desktop, and they're prevalent on
mobile devices. And this one is a very, very specific one: when you touch a mobile
device, the device waits intentionally to decide whether you were doing a pinch, whether
you're zooming, whether you're scrolling, whether you're clicking--what is the action
that you're doing. As far as I understand it, it's not a CPU problem or any limitation;
it's a design decision about distinguishing what it is you really want to do. What you
can do is make your clicks more aggressive by using Touch Events to implement them. So
you would use an ontouchstart and an ontouchend as the invocation of a click; if you want
to be a little bit more careful, you want to say: without an ontouchmove in between. And
if that happens, then you're clicking.
>> [INDISTINCT] iOS and all the other? >> We've seen that also work on Android.
I'm not sure we tested the very latest Android, but we've seen this work on the Nexuses
and on the XOOM. It has a bit of a negative: if you're making your clicks aggressive, you
might be accidentally clicking something when the user was intending to scroll, so it's a
little bit of a tradeoff. This is a case where you're not necessarily doing something
better than the device; you're just revising it towards what you think is the better user
experience for your users on your site. Okay. So that's
my presentation, or that's the data I have. The key point we talked about is taking
mobile web performance and separating it out--sort of a divide-and-conquer strategy. We
talked about network, we talked about hardware, we talked about software. And for each of
those: what are the different components, what performance implications do they have, and
how can you, as a website developer, as a website owner--and it's true from a browser
developer perspective as well; it's just, again, this presentation is a little bit more
aimed at the website owner--how can you help with that, how can you address that? And we
talked about the different aspects, the different challenges. These are not all the
attributes and all the aspects of each of these devices, but those are the major ones
that make an impact. And that's it. Any questions? No?
>> What's your opinion on this Silk Browser and [INDISTINCT] to make browser which software
[INDISTINCT] >> PODJARNY: Yeah. So the question is, what
is my view on the Silk Browser and the Split Browser architecture, where there's a
server-side component to it. I think it's interesting. I think it's not really as novel
as they present it to be; Opera has been doing it for a little while. But this is
different--what they do, like, takes it to the next level. I think there are real
performance problems that get solved by it. So from a performance perspective, I think
it's a good move, and it could increase performance. I would expect that it would break a
lot of sites. I'm a little bit concerned about how HTTPS is handled. If it doesn't go
directly, I'm concerned about the privacy implications. If it goes directly, I'm
concerned about the fact that HTTPS is something we want you to have more of and not less
of, so you're kind of eliminating the whole value. And I'm also curious about the costs
associated with it. So I think for Amazon, it's an interesting play, because you need to
buy a Kindle Fire, and they probably have some math about how many Amazon purchases are
being made by every user who bought a Kindle Fire. So they can probably afford to lose
some money on every user, because they're making money through other channels. But I'm
not sure the math makes sense for every browser out there. So the bottom line is, from a
performance perspective, I think they're solving problems. They're using SPDY, and
there's a lot that can be done there. In fact, what we do at Blaze, when we try to
optimize, part of it is similar logic, but we're oriented at the website. But the idea of
moving processing from the mobile device to a server, and ideally doing it ahead of time,
or caching the results of processing--these are the types of things that we do as well,
and we see that they have real results. So we do think they matter. The question is
whether all the other parameters work well alongside it. Will it be faster, will it be
better? I don't know. >> [INDISTINCT] Okay.
>> PODJARNY: Okay. Well, feel free to contact me afterwards if you want to talk about
any of this. Thank you very much.