Sunday, September 04, 2005

AJAX Latency problems: myth or reality?

I've read many articles on AJAX and network latency in the last year, but every article seems to claim something different: one presents latency as a problem specific to AJAX, another claims that AJAX applications are especially well suited to overcoming network latency, and yet others take a hands-on approach and build tools to simulate network latency on localhost. Confused...

So I searched Google for 'AJAX Latency' and read many of these articles again. Below is a short overview of what I've found.

The first hit is from Wikipedia (3 Sept 2005):

"An additional complaint about Ajax revolves around latency, or the duration of time it takes a web application to respond to user input. Given the fact that Ajax applications depend on network communication between web browser and web server, any delay in that communication can introduce delay in the interface of the web application, something which users might not expect or understand."

They refer to 'Listen kids, AJAX is not cool':

"If you writing a user interface, make sure it responds in 1/10th of a second. That’s a pretty simple rule, and if you break it, you will distract the user."

I agree with one thing: you do want a response in 1/10th of a second. But is this realistic? Cédric Savarese points out that - even when it's not 1/10th of a second - the user expects to see something loading: he suggests using a loading indicator as a replacement for the traditional page refresh. But then he also mentions another perspective:

"What happens really is that XmlHttpRequest is not used for what it is good at: asynchronous, behind-the-scene, access to the server, but in the context of a synchronous transaction by the user. Users want instant feedback from an application, and a better way to achieve that is by freeing the application from its over-reliance on the server."

And Michael Mahemoff (of Ajax Patterns fame) adds the following:

"It's not an all-or-none thing. With AJAX, you can continue to download in the background if you want. "

So they suggest moving more intelligence to the client, and loading data in a smart way, ideally asynchronously without having the user wait for it: I think that's really what that first 'A' of AJAX is all about. But this could mean that you're loading more data on application startup, using precious bandwidth. Or maybe you're even loading data that the user will never see (pre-loading). I found a very punchy quote in a discussion on TSS:

"These days, bandwidth is cheap, latency expensive."

I can confirm this from my own experience: on www.backbase.com around 80% of the download time is caused by latency, and 20% by download speed (bandwidth). So it's better to load some extra data up front than to make very frequent requests for small files.
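
A quick back-of-the-envelope illustration (the numbers are made up for the example, not measured): with a 100 ms round trip, ten sequential requests for 5 KB files spend a full second on round trips alone, while a single 50 KB request spends only 100 ms. On a 512 kbit/s line the transfer time is roughly 0.8 seconds in both cases, so batching moves the same bytes with almost a second less waiting - and the gap only grows as bandwidth gets cheaper.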

We have measured latency and download speed with a global performance measurement system (more about this in part 2), but that's not convenient to use during development because it takes at least a couple of days before you have enough data. Then I read the article by Harry Fuecks:

"(...) alot of AJAX development is happening @localhost so these problems simply aren’t showing up."

So he has created an AJAX Proxy to simulate a high-latency environment on localhost: kudos! In another article he indicates that the use of synchronous requests should be avoided at all times. But even asynchronous requests need to be handled carefully: "Can multiple asynchronous XMLHttpRequests be outstanding at the same time?", asks Weiqu Gao. Harry again did some research, and came up with a couple of recommendations (sketched in code below):

  • Avoid Uncaught Exceptions: don't call the XMLHttpRequest object when it's still processing another request
  • Implement a solution for timeouts: XMLHttpRequest doesn't handle this automatically as do other socket client APIs
  • Make it possible to abort a request gracefully
  • Make sure that responses arrive in the right sequence
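
To make these recommendations concrete, here is a minimal sketch in JavaScript (the function names createXHR and sendRequest are my own, and the 5-second timeout is an arbitrary choice):

    // Create a fresh request object per call, so we never touch an
    // object that is still busy processing an earlier request.
    function createXHR() {
        if (window.XMLHttpRequest) {
            return new XMLHttpRequest();                  // Mozilla, Safari, Opera
        }
        return new ActiveXObject("Microsoft.XMLHTTP");    // Internet Explorer
    }

    var requestSeq  = 0;  // sequence number handed out to each request
    var lastHandled = 0;  // highest sequence number handled so far

    function sendRequest(url, callback) {
        var xhr = createXHR();
        var seq = ++requestSeq;
        var timedOut = false;

        // Implement a timeout ourselves: XMLHttpRequest won't do it for us.
        var timer = setTimeout(function () {
            timedOut = true;
            xhr.abort();          // abort gracefully instead of hanging forever
            callback(null, seq);
        }, 5000);

        xhr.onreadystatechange = function () {
            if (xhr.readyState != 4 || timedOut) return;
            clearTimeout(timer);
            // Ignore responses that arrive after a newer one was already
            // handled, so a stale response never overwrites fresh data.
            if (seq < lastHandled) return;
            lastHandled = seq;
            callback(xhr.status == 200 ? xhr.responseText : null, seq);
        };

        xhr.open("GET", url, true);  // true: always asynchronous
        xhr.send(null);
    }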

These are the technical aspects. I have found few articles about the usability aspects (probably because I didn't look hard enough). Marco Tabini quotes this on his weblog: "one of the fundamental elements of AJAX programming (...) was to always give your users the appropriate feedback, so that they know when something happens." I agree with that: the user should not be surprised by unexpected behavior of the user interface. Interaction designers should therefore also be aware of some of the latency issues. For them I would summarize it as follows:

  • If a user's action causes a server request, don't expect a response within 1/10th of a second: consider showing a 'loading' message
  • Specify the usage patterns of an application so that the developers know how preloading of data can best be implemented (think Google Maps, which prefetches maps just outside the border of the screen; see the sketch after this list)
  • Be careful with 'hidden' functionality such as auto-save, because it might conflict with other actions the user performs: cooperate closely with the developer(s) to avoid usability problems.
  • Clearly specify the sequence of events, e.g. 'action 1 has to be completed before the user should be allowed to start with action 2', which gives developers relevant information to avoid concurrency issues.
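
As a rough sketch of that Google Maps-style preloading (the tile grid and URL scheme below are invented for the example):

    // Pre-fetch the tiles just outside the visible grid, so that panning
    // one tile in any direction never has to wait for the network.
    function tileUrl(x, y) {
        return "/tiles/" + x + "_" + y + ".png";  // hypothetical URL scheme
    }

    function prefetchBorderTiles(firstX, firstY, cols, rows) {
        var cache = [];
        for (var x = firstX - 1; x <= firstX + cols; x++) {
            for (var y = firstY - 1; y <= firstY + rows; y++) {
                // skip the tiles that are already on screen
                if (x >= firstX && x < firstX + cols &&
                    y >= firstY && y < firstY + rows) continue;
                var img = new Image();  // the browser fetches and caches the tile
                img.src = tileUrl(x, y);
                cache.push(img);
            }
        }
        return cache;  // keep references so the images stay cached
    }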

And to make the link between technology and usability, here is a quote from Jonathan Boutelle, who already understood all of this more than a year ago:

"Predownloading data is critical to providing low-latency experiences. But blindly downloading data without consideration for how likely the user is to need it is not a scaleable approach. RIA architects will have to consider these issues carefully, and ground their decisions about preloading in user research, in order to create superior user experiences."

After reading all of this I've come to a tentative conclusion: network latency is an important issue to consider during the implementation of an AJAX application, by the developer as well as by the interaction designer. If you make the wrong decisions, usability can be terrible. If you make the right ones, AJAX will significantly improve web application usability. It is still a tentative conclusion, because I'm pretty sure I haven't read all the relevant articles: let me know what your thoughts are.

7 Comments:

Anonymous said...

I'd like to expand a bit on what I meant by 'freeing the application from the server'. Latency is less of a problem if the user doesn't need a response from the server.

When an AJAX framework uses a server-side MVC pattern, the user interface cannot be updated without a round trip to the server, and that round trip is where the latency comes from. The typical example is Ruby on Rails and the to-do lists in Basecamp/Backpack. It works well, it's fast, but you still see that little loading animation whenever you add a new item to the list.

Ajax gives you all the tools needed for a client-side MVC framework: XML for the Model, Javascript for the Controller, and XSL for the Views.

I see two benefits to this approach. First, you avoid latency, since the application can basically run offline. Second, the full state of the application is contained in one XML document, which makes it REST-friendly and allows you to save all your data with one asynchronous HTTP request.
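
For what it's worth, a bare-bones version of that pattern could look like this in Mozilla (IE would use transformNode instead of XSLTProcessor; loading the XSL document is left out):

    // Model: application state lives in an XML document (the 'model' DOM).
    // View: an XSL stylesheet renders it, with no server round trip.
    var processor = new XSLTProcessor();
    processor.importStylesheet(xslDoc);  // xslDoc: a previously loaded XSL DOM

    function render(model) {
        var fragment = processor.transformToFragment(model, document);
        var target = document.getElementById("view");
        while (target.firstChild) target.removeChild(target.firstChild);
        target.appendChild(fragment);
    }

    // Controller: event handlers update the XML model, call render(),
    // and sync the whole document to the server asynchronously later.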

Anyway, your article is excellent. I'll have to check out all these links...

8:58 PM  
Israelits said...

Excellent article. Just wanted to mention that doing MVC on the client also takes its performance toll, especially in Explorer, where I have found cases where it takes forever to render and execute JS code.

This blog entry was added to the Ajax resources search engine at: http://www.rawsugar.com/collections/guyt/ajax

1:23 AM  
Anonymous said...

> Avoid Uncaught Exceptions: don't
> call the XMLHttpRequest object when
> it's still processing another
> request

You shouldn't of course call the same XMLHttpRequest object, but you can make an array of objects (a "request pool") and simply add a new object to the pool each time you need one.

3:39 AM  
Anonymous said...

> You shouldn't of course call the same XMLHttpRequest object, but you
> can make an array of objects (a "request pool") and simply add a new
> object to the pool each time you need one.

I'm not sure that would help. In most cases, IE will have a maximum of 2 simultaneous connections, whether you use XMLHttpRequest or not. It definitely is a good idea to queue the requests so that new requests can be triggered while others are still finishing up; it won't, however, really improve latency.

Windows will limit connections to a single HTTP 1.0 server to four simultaneous connections. Connections to a single HTTP 1.1 server will be limited to two simultaneous connections. The HTTP 1.1 specification (RFC2068) mandates the two connection limit while the four connection limit for HTTP 1.0 is a self-imposed restriction which coincides with the standard used by a number of popular Web browsers.

http://www.winguides.com/

It turns out that this is a case where IE strictly follows the standards: in this case RFC 2616, which covers HTTP 1.1. As noted in the RFC:

Clients that use persistent connections SHOULD limit the number of simultaneous connections that they maintain to a given server. A single-user client SHOULD NOT maintain more than 2 connections with any server or proxy.


http://blogs.msdn.com/ie/
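
A small request queue reconciles the pool idea with that limit: keep at most two requests in flight and start the next one as soon as a slot frees up. A sketch (sendRequest is a hypothetical helper that invokes its callback when the response arrives):

    var MAX_IN_FLIGHT = 2;  // matches the HTTP 1.1 connection limit
    var pending = [];       // queued {url, callback} pairs
    var inFlight = 0;

    function enqueue(url, callback) {
        pending.push({ url: url, callback: callback });
        pump();
    }

    function pump() {
        if (inFlight >= MAX_IN_FLIGHT || pending.length == 0) return;
        var job = pending.shift();
        inFlight++;
        sendRequest(job.url, function (response) {
            inFlight--;          // a slot is free again
            job.callback(response);
            pump();              // start the next queued request, if any
        });
        pump();                  // try to fill the second slot as well
    }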

12:42 AM  
Anonymous said...

OK, ok. Let's see how we can fix that latency issue. This is the same problem you run into when you try to implement a web chat application: there the trigger lies on the server side, and the server has no way to signal new data to the web client. The typical solution is to check every x seconds, but that is far from ideal (a lot of wasted bandwidth, and extra latency after a message is posted). The best solution is to create a permanent connection with the server. Because XMLHttpRequest doesn't allow this at the moment, the solution I implemented 3 years ago was to use the XML channel that Flash 5 provides. It can be used from JavaScript, and whatever the client receives can be passed on to a JavaScript function. No latency. The only con is that you need to implement a connection server on the server side.
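
For reference, the poll-every-x-seconds approach dismissed above looks roughly like this (the names and the interval are illustrative):

    // Naive polling: ask the server for new messages every few seconds.
    // Simple, but it wastes bandwidth and adds up to a full interval of
    // delay between a message being posted and the client seeing it.
    function pollForMessages(url, interval, onMessage) {
        setInterval(function () {
            var xhr = window.XMLHttpRequest
                ? new XMLHttpRequest()
                : new ActiveXObject("Microsoft.XMLHTTP");
            xhr.onreadystatechange = function () {
                if (xhr.readyState == 4 && xhr.status == 200) {
                    onMessage(xhr.responseText);
                }
            };
            xhr.open("GET", url, true);
            xhr.send(null);
        }, interval);
    }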

4:50 AM  
Anonymous said...

An HTTP proxy seemed like an overkill solution for injecting latency, especially since Harry appears to be a Firefox user. Instead, I wrote a Greasemonkey user script that emulates the proxy's behavior and removes the HTTPS limitation.

It's at http://blog.monstuff.com/archives/000264.html

3:08 PM  
Anonymous said...

Another interesting and very easy way to do latency testing, if you use .NET web services, is to implement a SoapExtension.

In the ProcessMessage method of the extension, during the AfterSerialize stage, you just add:
System.Threading.Thread.Sleep(new Random().Next(10000));

And voila! Each request will take between 0 and 10 seconds longer than 'normal' on localhost.

Of course this is not the ideal way to test latency, but it at least tests what happens when the server-to-client communication takes longer.

4:49 AM  
