Where Africa and Technology Collide!

Never Good Enough: Speed (pt 1/3)

We’re never good enough when it comes to the speed, stability or simplicity of our mobile and web applications. This is a three-part series where I unpack my experience building apps, taking one of these subjects at a time. It’s not just for those of us working on Ushahidi: these are the three most crucial attributes of any web or mobile application.

Me in a cyber cafe in Monrovia

Let me tell you a personal story:

Liberia, March 2009

I’m sitting, sweating, in the sweltering heat of a Monrovia cyber cafe. I have my notebook out and am watching the clock. My goal is to see how fast I can load up the Ushahidi home page for the Democratic Republic of the Congo; it has a map, a timeline and a list of recent events tracking the current level of unrest in the country.

It’s not looking good. As I look around, waiting for the page to load, I count 8 others in the room – 6 of whom have fired up stuttering and unusable Yahoo and Skype video chat windows. Why this is the channel and usage of choice, when it so obviously doesn’t work, I cannot answer. But this is reality, and if we expect ordinary Africans to use our application, we had better make sure that it loads relatively fast on the low-bandwidth, shared internet connections that proliferate across the continent.

Utter failure. After 20 minutes of painfully watching the page load byte by byte, I give up. I quickly type out a message to our team, imploring everyone to streamline this “fat, squealing pig of a page” and peppering them with questions… Can I buy some caching please? What can we do with this map to make it not kill the load? Can we get rid of 75% of the images on the page? Do we need to redesign this from the ground up?

Granted, Liberia’s internet situation is worse than almost any other on the continent. Especially when it comes to the grinding halt you see in the cyber cafes during the daylight hours as the local population piles on at the same time, completely overwhelming the limited satellite connection. That’s no excuse though. Ushahidi is built on the idea that the lowest common denominator, whether it’s PC or mobile-phone based access, will work. The PC-side is clearly failing.

Worst of all, my patience is short: Liberia is pissing me off with the heat, humidity, lack of bandwidth and nonexistent electricity grid. Objectively, this is the perfect state to be in – I am now able to come up with a solution for normal users in Africa.

What others know

Speed… if there’s only one thing that you do with your application, make it faster. No, it’s not fast enough.

This isn’t news to anyone, or it shouldn’t be. For years the major web sites around the world have known this and have been building for it. Mozilla, Amazon, Google and Facebook are all aware of just how critical speed is to their success. It boils down to attention threshold and what we, as users ourselves, are willing to put up with.

There is no area in which our team has felt more pain than in trying to speed up the page loads of our apps. Maps tend to be page killers by themselves. Once we add multiple calls to the database we start to get some truly agonizing speeds. It’s a constant pressure that sits on every one of our development cycles, and for which we dedicate a great deal of energy.

User experience research needed in Africa

One area that hasn’t seen enough true user experience testing is Africa. We know that internet speeds are slower, sometimes by orders of magnitude. I’ve got a lot of questions, more than answers at this point. Should we cut out the maps and all images? What’s the true cost of a page load +/- 7 seconds? What is the real value of maps in Africa compared to the West – do they matter?

Jessica Colaco is a top-notch programmer who has shifted to doing research in Kenya. I hope that she, and others like Eric Osiakwan and his team from Internet Research in Ghana, will help us dig out these answers. More than that, I hope they will help us ask the right questions.


  1. In Malawi all connectivity is sat-based. This is what I feel the real speed problem in Africa is: latency. In general, it takes 253 ms to reach a geostationary sat (http://en.wikipedia.org/wiki/Geostationary_orbit). So if you request the home page, that request is sent to the sat (253 ms), which sends it back down to the server (253 ms). The server handles it (x ms) and sends the response to the sat (253 ms), which sends it to your browser (253 ms). So far you have already lost 1s in the best possible scenario. But suppose you have a nice CSS stylesheet linked in that response… let’s go get that (up, down, up, down, another second gone), and that CSS has a background image in it, let’s go get that (up, down, up, down) and we are at 3 seconds. Or let’s say we are using a Google map… connection after connection later, we have eaten away a lot of time just because the speed of light is too slow. In such a case, making the page bigger (including the CSS, JavaScript, etc. inline) will make the page feel consistently faster, especially in browsers that do progressive rendering (read: Firefox and IE).
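Jeff’s back-of-the-envelope numbers can be sketched as a toy calculation (this ignores server time and bandwidth entirely, and assumes every request is fully sequential):

```python
# Toy model of sequential request latency over a geostationary link.
# Each HTTP request costs two full up-down round trips before the
# browser sees anything: request up/down, response up/down.
HOP_MS = 253  # one-way ground-to-satellite hop

def page_latency_ms(sequential_requests):
    """Latency cost of N dependent requests (HTML -> CSS -> image...)."""
    return sequential_requests * 4 * HOP_MS

# HTML alone: ~1 second lost before anything renders.
print(page_latency_ms(1))  # 1012
# HTML -> CSS -> background image: ~3 seconds, matching the comment.
print(page_latency_ms(3))  # 3036
```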

  2. @Jeff – You’ve nailed the technical problem. Your solution of inline CSS is interesting, especially as it’s what the rest of the world has been moving away from within applications, due to wanting design and code to be separately editable entities. It’s definitely not the solution for a global app, but maybe it’s something that can be triggered on the admin side, where you create an on/off switch for the person to set as “streamline my version for low-bandwidth”, which can automatically adjust (still thinking this out…).

    By the way, SocialRange looks good. Thanks for linking it.

  3. The folks at Aptivate have web design guidelines for low bandwidth [ http://www.aptivate.org/webguidelines/Home.html ]

  4. Erik, I’m not sure if you noticed, but internet users in Africa usually don’t have a problem with waiting for a few minutes for a YouTube video to load.

    They are unsophisticated when it comes to using the web. Their experience is totally different from that of the West. They are where the West was when we used dial-up for browsing.

    As to the solution to your problem, you will have to weigh the pros of usability against that of functionality.

  5. Using CSS and JavaScript inline to reduce latency is a great tip! Very useful in situations where latency is very high, although I wouldn’t recommend it when the inline code is several tens of kilobytes in size and always the same on all pages. You might want to put that code into a separate file and cache it. There is a trick to save requests once a file is cached: tell browsers not to send requests to check if there is a newer file than the cached one http://www.askapache.com/htaccess/apache-speed-last-modified.html
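For Apache, the trick in that link boils down to a few lines of configuration along these lines (a rough sketch, assuming mod_expires is enabled; the file extensions and cache lifetime here are illustrative):

```apache
# Tell browsers to keep static assets for 30 days without revalidating
<IfModule mod_expires.c>
  ExpiresActive On
  <FilesMatch "\.(css|js|png|jpg|gif)$">
    ExpiresDefault "access plus 30 days"
  </FilesMatch>
</IfModule>
```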

  6. @christian, I’ve played around with the 304 not-modified status header and caching on the client side – it definitely helps. CSS sprites have also been a major help, because then only one request for images is made – all the images are in a single CSS sprite (master image) – http://css-tricks.com/css-sprites/.

    I think my only issue at this point is that the rest of the world is quickly forgetting about designing for slow speeds as everyone else gets on the broadband bandwagon. Can anyone give an estimate of how much longer African countries will have to endure this?

    @Ssembonge if hypothetically one spent 5 hours extra on the net every week waiting for something to load, that’s approximately 10 days out of the year that one could have spent doing other productive things. Our continent can’t afford to waste anything at this point, especially time since we’re so far behind.
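That figure checks out, roughly:

```python
hours_per_week = 5                     # hypothetical extra time spent waiting
hours_per_year = hours_per_week * 52   # 260 hours a year
days_per_year = hours_per_year / 24    # full 24-hour days
print(round(days_per_year, 1))  # 10.8 – roughly 10 days lost to waiting
```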

  7. Here is my .htaccess http://pastie.org/616785 . I hope it helps someone.

    If you want to see what on your website needs optimization, install Google’s Firefox add-on “Page Speed”: http://code.google.com/speed/page-speed/

  8. Many site optimizations are extremely “easy wins”, e.g. (on http://drc.ushahidi.com):

    1. Setting “Expires” headers on static assets (1-2 lines of web server config)
    2. Using a Content Delivery Network (particularly one which works in Africa, if there is one–if not, there’s a market here!). They’re very easy to set up, though they cost money. As a last resort “poor man’s” CDN, use multiple asset hostnames (e.g., “assets0.ushahidi.com”, “assets1.ushahidi.com”, etc., all pointing to the same web server) so the browser can parallelize downloads. Downloading eight files at a time is much faster than two, especially in high-latency environments (such as Africa).
    3. Combine common JS into a single file. For instance, putting OpenLayers.js and jquery.js into the same file, if they’re used on every page, can probably win you half a second off the initial page load in Africa. Ditto for CSS.
    4. Serve images, CSS, and JavaScript from a different domain than the website’s domain: if the website uses cookies and serves all files from the same domain, the browser has to send the cookie with each request (which costs the browser upstream bandwidth, another bottleneck). If your cookies are tied to “*.ushahidi.com”, then register another domain name, e.g. “ushahidi-assets.com,” for this big win.
    5. Skip images or make them into CSS sprites. The navbar on drc.ushahidi.com is a perfect place to use a sprite; the language flags at the top of the page could be made into a sprite, also. These aren’t enormous wins, but they’re not difficult and the improvements would be easily measurable.

    These basic tips aren’t exactly difficult to implement: they can probably all be finished, with cursory testing, within a day of one developer’s time. The effort would make the initial page load several seconds faster. And #1 alone would make subsequent page loads many times faster.
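For win #3 in the list above (combining common JS into one file), even a tiny build step will do – a hypothetical sketch, where the filenames are illustrative:

```python
from pathlib import Path

def bundle_assets(paths, out_path):
    """Concatenate several asset files into one bundle so the
    browser pays the request round trip once instead of N times."""
    combined = "\n".join(Path(p).read_text() for p in paths)
    Path(out_path).write_text(combined)
    return out_path

# e.g. bundle_assets(["OpenLayers.js", "jquery.js"], "combined.js")
```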

    I’m not the cleverest guy: I’m just parroting what Google’s “Page Speed” Firefox add-on says. It’s a fantastic tool: get it at http://code.google.com/speed/page-speed/.

    After that you can work on the appearance of speed; there are some tips for that at http://developer.yahoo.com/performance/rules.html.

    But every change is meaningless without metrics, right? Test your site’s actual performance at http://www.webpagetest.org. It may have a stupid name; it may be near-impossible to find on Google (bookmark it!); but I guarantee it will help you make your website faster.

    Don’t worry too much about database calls–at least, not until you get these big fish out of the way. Even if the page takes 3 seconds to render on the server side, that’s nothing compared to the 97 subsequent seconds it takes to load all the assets and run all the scripts.

    And as for, “do we need to redesign this from the ground up?” The answer is no. Every site – and every redesign – experiences these kinds of growing pains. A software engineering mantra: “features first; optimizations after.”

    I would expect the above list (and tools’ recommendations) to make ushahidi.com twice as fast or even faster, with moderately low risk and no negative side-effects.

    Good luck!

  9. @Hash you may want to talk to Prof. Vivek Pai and his grad students, who developed HashCache http://www.technologyreview.com/web/22119/ & http://www.cs.princeton.edu/~vivek/

  10. Yeah, I’m aware of this just a little bit. Same situation in all of DRC (a country of 60 million) which is why we started doing what we’re doing with Maneno. In fact, dealing with this kind of high latency, narrow bandwidth is one of the cornerstones to the project.

  11. Here are my bandwidth observations at various places in Africa: http://www.flickr.com/photos/dcmetroblogger/tags/bandwidth/

    My favorite, which probably approaches Liberia, is this 7.47Kbps download speed in Nigeria: http://www.flickr.com/photos/dcmetroblogger/3799347808/

    The speedtest I use is http://www.internetfrog.com/mypc/speedtest/

  12. @Christian – thank you for the .htaccess (I put it in my httpd.conf to speed things up a bit more). For my extranet services, it cut initial requests down from 37 to 25; requests on repeat visits are down to 5 (the site is mostly static css and js). Page load times here in Tanzania are down to about 6-7 seconds on initial visits, and 2-3 seconds for repeat visits. Thanks!

    Due to factors explained by Jeff Rafter, reducing the number of requests has a dramatic effect in Malawi and Tanzania, where we share bandwidth with much of East Africa on a single satellite that hovers over the Atlantic (I’ve seen working satellite dishes pointed almost parallel to the ground here in Tanzania). There’s also (supposedly) some connectivity through another satellite over the Indian Ocean, but the latency issues remain. Sometimes round-trip pings here are up to 2000ms.

    @David K: The SEACOM fiber optic line landed in Tanzania about two months ago, but most users are not seeing a difference yet. Startup costs are difficult for local ISPs to manage, i.e., upgrading and physically moving equipment to the cable landing site, and, it is rumored, the fact that ISPs need to pay up front for a year of access to bandwidth – many just don’t have the cash reserves to afford that. I expect that the EaSSY and TEAMS fiber lines scheduled to arrive next year (2010) will face similar startup issues.

    I think it will probably be up to another year or so before many users in Dar es Salaam start seeing any difference (ISPs need to upgrade infrastructure, build up cash reserves, contracts with sat. providers need to expire, new fiber competitors may arrive). With the possible exceptions of Uganda, Rwanda, and portions of Kenya (Nairobi) and SA (Jo’burg), it will likely be years, if not decades, before inland East African users see any impact. I’m not aware of any new fiber coming to West Africa anytime soon. Sadly, it seems that in terms of lit fiber, Africa will be a “Dark Continent” for the next several years (see also http://bit.ly/17rZCz ).

  13. @Alex Low earth orbit satellites and WiMax might have an impact sooner, at least in Malawi. Last year there was a lot of talk about the possibility of tracking satellites to reduce latency… but like you said, equipment purchase is expensive. I thought I should bring up one more point which is the fact that HTTP operates over TCP/IP which is a lossless protocol and therefore requires acknowledgment of packet receipt. To achieve this TCP/IP requires a packet buffer to be kept to hold onto packets until they are acknowledged (which we know takes a long time). The bad part: the default packet buffer is only 4K. Doing the math this means that you can only achieve ~64Kbps max when using TCP/IP over Sat. Fwiw, Skype generally uses UDP sockets (and you can tell when you get a direct TCP/IP connection because the call jumps). Some of this is solvable but it won’t be solved for a while.

    More reading here:

    This means that my solution does have serious limits and caching will win in repeat cases, hands down. @Christian’s .htaccess is great for that and also enables HTTP compression which is even better. Just make sure you have mod_deflate enabled so that his .htaccess can take advantage of it (which based on your speedup for initial visits, I am guessing you do).
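The window-size arithmetic in that comment can be checked quickly (a toy formula; the 4 KB default buffer is the figure cited above, and ~506 ms is the two-hop geostationary round trip):

```python
def max_throughput_kbps(window_bytes, rtt_ms):
    """TCP can never move more than one window of unacknowledged
    data per round trip, so throughput is capped at window / RTT."""
    return window_bytes * 8 / (rtt_ms / 1000) / 1000

print(round(max_throughput_kbps(4096, 506)))   # 65 – the ~64 Kbps ceiling cited above
print(round(max_throughput_kbps(65536, 506)))  # 1036 – a 64 KB window raises it 16x
```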

  14. I say we should all just move to HTML5. I know this does not solve the latency issues, but it gives developers more flexibility in dealing with the bandwidth and latency issues, and probably improves the user experience too.

  15. I’m completely with you on this issue. From the standpoint of trying to get grants from online funding organizations for development in low-bandwidth areas of the world, I have gone bonkers over some of this stuff – to the point where it prompted me to write this post as a guide to thinking about the issue from the perspective of funding organizations: Web Design for Donor Organizations for Low Bandwidth.


© 2023 WhiteAfrican
