Design Considerations, Part 3

In past posts, I’ve written about the application considerations and choices for a new site I’m faced with creating, as well as plans – or perhaps goals – for the overall design of the site and its content, all chosen or guided with an eye towards performance. In this third post, I’m going to explore some decisions regarding server hardware and software, and the basis for those choices.

As explained previously, I’m designing an informational, content-driven website for a one-time event. WordPress has been chosen as the CMS, with a number of plugins to provide functionality and improve performance, and I’ve chosen a theme and laid out design guidelines with an eye towards functionality and user convenience, but also towards performance under load.

What do I mean by “under load”? Well, I think there’s a very good chance the site in question can see some reasonably high levels of traffic. This is problematic, as everyone has a different definition of “high traffic”. That said, I’m a fan of conservative (pessimistic) estimates, and planning ahead for worst-case scenarios. I’ve looked at traffic stats from people who’ve been Dugg, Slashdotted, Farked, and so on; I’ve looked at the site I’m designing, and I’ve come up with some numbers.

I think the absolute worst-case scenario I could face would be sustained traffic of two pageviews per second. That’s not really all that bad – 86,400 seconds per day times two is 172,800 pageviews a day, if that level of traffic were sustained 24/7. But those are pageviews. Assuming a worst-case scenario where each pageview is a first-time visitor with no cached stylesheet or images (the largest parts of each page), I’m looking at eight to ten requests per page – and suddenly we’re in the vicinity of 1.4 to 1.7 million requests, or hits, per day.

That isn’t really all that bad, either, and a properly-configured webserver shouldn’t have any problem handling sixteen to twenty requests per second (two pages per second, times eight to ten requests per page) – call it thirty to forty requests in flight at any moment, if each page takes a couple of seconds to load. It’s well out of shared-hosting territory, though, even if you consider that, realistically, that’s a peak traffic prediction, not a level of traffic to expect 24/7.

The question of bandwidth and transfer usage is next to be considered. Again, using pessimistic worst-case conditions: none of the visitors have browsers that can take advantage of gzip compression, none have any cached images or stylesheets, and all are requesting pages that happen to be the absolute maximum size I set out in Part Two of this series – 200KB. With those numbers, traffic works out to ever so slightly over 3 megabits per second – roughly a terabyte (call it 1,000-1,100 gigabytes) per month, if sustained 24/7. Definitely not shared-hosting territory!
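For anyone who wants to redo this napkin math with their own numbers, here’s the arithmetic from the last few paragraphs as a tiny Python script. The inputs are the pessimistic assumptions described above, not measurements.

```python
# Back-of-the-napkin worst-case traffic estimate. All of the inputs are the
# pessimistic assumptions described above, not measured figures.

PAGEVIEWS_PER_SEC = 2    # sustained, 24/7
REQUESTS_PER_PAGE = 10   # first-time visitor, nothing cached: page, stylesheet, images
PAGE_SIZE_KB = 200       # the absolute maximum page weight from Part Two

SECONDS_PER_DAY = 86_400
DAYS_PER_MONTH = 30

pageviews_per_day = PAGEVIEWS_PER_SEC * SECONDS_PER_DAY        # 172,800
requests_per_day = pageviews_per_day * REQUESTS_PER_PAGE       # ~1.7 million
requests_per_sec = PAGEVIEWS_PER_SEC * REQUESTS_PER_PAGE       # 20

kb_per_sec = PAGEVIEWS_PER_SEC * PAGE_SIZE_KB                  # 400 KB/s
mbit_per_sec = kb_per_sec * 8 / 1000                           # ~3.2 Mbit/s
gb_per_month = kb_per_sec * SECONDS_PER_DAY * DAYS_PER_MONTH / 1_000_000  # ~1,037 GB

print(f"{pageviews_per_day:,} pageviews/day, {requests_per_day:,} requests/day")
print(f"{requests_per_sec} requests/sec, {mbit_per_sec:.1f} Mbit/s, {gb_per_month:,.0f} GB/month")
```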

But those are extraordinarily bad worst-case numbers, reflecting conditions unlikely to ever be seen in reality. That’s the point of this sort of exercise, as we’ll see in a minute.

If you’ve read the previous parts of this series, you’ll recall I’ve set up the site so all images are served from a subdomain. It sounds silly, but there was a method to my madness. I’ve chosen Apache as the webserver on this project, because of some of the URL-rewriting and access-control capabilities it has, and because that’s what’s most readily available in inexpensive shared hosting, where the site is being tested and developed before going “live”. It might not be the highest-performance webserver, but I’m not 100% confident the pretty URLs and everything else I want can be ported over to something like Lighttpd, which would otherwise be my first choice.

Images, though, can be split off to another server (or servers), and be better served by something like Lighttpd. That’s the contingency plan I have in mind, incidentally, if the need arises.

Looking at just the text-ish parts of the website, I’m unlikely (because of design considerations) to have more than 20KB of text per page, including markup and stylesheet. Even if we assume that the stylesheet is never cached by any browser, and nothing ever gets gzipped, we’ve suddenly dropped the theoretical load on the main, Apache, server to two requests per page – around four requests per second. Transfer would be something like 0.3 megabits per second, roughly 100 gigabytes per month if sustained 24/7. We’re still, if we want the best performance, looking at a dedicated server (or a very high-end VPS), but the number of requests Apache will be facing has dropped dramatically, and the worst-case bandwidth requirements are quite reasonable and affordable.
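Plugging those text-only assumptions into the same napkin math (the 20KB page weight and the two-requests-per-page figure are my design targets, not measurements) gives:

```python
# Same napkin math, but for the text-only worst case served by Apache:
# two requests per page (the page itself plus an uncached stylesheet), 20 KB total.
PAGEVIEWS_PER_SEC = 2
REQUESTS_PER_PAGE = 2
PAGE_SIZE_KB = 20

requests_per_sec = PAGEVIEWS_PER_SEC * REQUESTS_PER_PAGE                    # 4
mbit_per_sec = PAGEVIEWS_PER_SEC * PAGE_SIZE_KB * 8 / 1000                  # ~0.3
gb_per_month = PAGEVIEWS_PER_SEC * PAGE_SIZE_KB * 86_400 * 30 / 1_000_000   # ~104
```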

We’re kind of considering this backwards, and, yes, there’s still a worst-case scenario of over a terabyte of images transferred per month. Based on my experiences with Lighttpd, the hardware required to handle the traffic we’re looking at is minimal; I’d say a well-configured PIII/800 with fast disks and 512MB RAM would do the job easily. Since it’s hard to find a machine that low-end for lease as a dedicated server these days, we’ll probably wind up with something around 2GHz, a P4 or an AMD machine – something that can be found from a reputable and reliable dedicated server provider, with the requisite amount of transfer, for in the ballpark of $60-$75 USD per month. If that seems low, keep in mind we’re talking an unmanaged server with no control panel, probably as part of a sale or promotion. Heck, with 1GB of RAM, a 2GHz P4 should be able to run both Apache and Lighttpd simultaneously, handling the worst-case traffic without difficulty.

If it becomes necessary to split the images off, the requirements for just the text-based parts – tweaked WordPress with largely static content, running under well-optimized Apache, with PHP and eAccelerator, and MySQL with query caching (and the WP-Cache2 plugin reducing database queries) – are minimal. We’re strongly tempted to colocate an available 1U, dual PIII/733 machine with 512MB RAM and a pair of SCSI drives – one 15,000 RPM drive for /var, /tmp, and swap, and a 7,200 RPM drive for everything else – for this purpose. We’re still undecided whether one fast processor and comparatively slow disks is better than two slower processors and comparatively fast disks for this job. We’ll see.

But is this wankery, planning a site around extraordinarily pessimistic worst-case numbers? Not at all, because it provides an extremely healthy safety margin, both in performance and expense.

Using more realistic numbers, the 100GB of text-based content is going to be mostly compressed (Gods bless you, ob_gzhandler) and partially cached (the 15KB stylesheet). High levels of traffic also aren’t going to exist 24/7; 8/7 or even 5/7 is much more realistic. That drops the realistic expectation down to something in the vicinity of 20-30GB of text-ish content in a peak month. So too for the images: much of that transfer (all the design elements) is going to be cached by browsers, and thus only requested once per visitor – a benefit whenever visitors view more than one page per visit – and few pages are likely to be anything like 200KB. The terabyte of image transfer is more realistically going to be something like a tenth of that, 100GB or even less. Still, that’s more than you want to be trying to host on a shared account, mind you.
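To make that hand-waving explicit, here’s roughly how I’m discounting the worst-case transfer figures; both discount factors are guesses on my part, not measurements.

```python
# Rough reality-check on the worst-case transfer numbers. Both discount
# factors below are guesses, not measurements.
WORST_CASE_TEXT_GB = 100     # 20 KB pages, nothing gzipped, nothing cached
WORST_CASE_IMAGE_GB = 1000   # 200 KB pages, no browser caching at all

TEXT_FACTOR = 0.25    # gzip plus the cached stylesheet: maybe a quarter actually gets sent
IMAGE_FACTOR = 0.10   # cached design elements, smaller real pages: order-of-magnitude guess

print(f"text:   ~{WORST_CASE_TEXT_GB * TEXT_FACTOR:.0f} GB in a peak month")    # ~25
print(f"images: ~{WORST_CASE_IMAGE_GB * IMAGE_FACTOR:.0f} GB in a peak month")  # ~100
# ...and none of that counts the fact that traffic won't actually be sustained 24/7.
```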

In all likelihood, the site is going to live on a single dedicated server, running Apache and PHP with eAccelerator, serving both the site and the images. If serving the images becomes too much of a load, it’s trivial to offload them to a separate server, as we’ve planned for. It might not become necessary, but the preparations are in place, and we’ve got a plan. (It’s a cunning plan, albeit perhaps not as cunning as a fox what’s got a degree in cunning from Oxford, I admit.)

The point of planning for the almost-impossibly-bad worst-case scenario is that in doing so, you’re prepared for otherwise unpredictable contingencies. If the site gets Slashdotted the day before the event, when traffic is expected to be at or near its peak, it’s not a huge deal, because that level of traffic has been allowed for. If it turns out that two pages per second is actually low, that’s not a huge problem either, because real-world considerations (browser caching, transfer compression, small pages) balance it out to a great extent.

The site might never get that big, or that popular, or that highly trafficked. We’re developing it on shared hosting, and it will go live in that environment, just like most sites do. The question that we – and everyone with a website in these circumstances – face is when to move up to a server of our own. The answer comes with no science behind it, no empirical evidence to support it; it’s this: when site performance becomes unacceptable. If we start seeing page generation times getting too high, if our host complains about the load the site is causing, or if bandwidth usage starts getting expensively high, it’s time to move. Arbitrarily, for our site, a gigabyte a day (realistically, around 30,000-45,000 pageviews per day – but remember we’re dealing with, realistically, 30-50KB pages, and have the magic of gzip compression) is probably the point where we start sniffing around for good deals on dedicated servers.
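As a sanity check on that gigabyte-a-day threshold – assuming gzip and browser caching bring the effective transfer down to somewhere around 22-33KB per pageview, which is a guess on my part – the arithmetic comes out where you’d expect:

```python
# How many pageviews is "a gigabyte a day"? The effective per-pageview transfer
# figures here (post-gzip, post-caching) are guesses, not measurements.
GB_PER_DAY = 1
for kb_per_pageview in (22, 33):
    pageviews_per_day = GB_PER_DAY * 1_000_000 / kb_per_pageview
    print(f"{kb_per_pageview} KB/pageview -> ~{pageviews_per_day:,.0f} pageviews/day")
# Roughly 30,000-45,000 pageviews/day, matching the ballpark above.
```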

I know I’ve not touched on a lot of technical aspects, like why eAccelerator and not APC, or what, exactly, optimizing Apache consists of. The former is easy to answer (it works with WP-Cache out of the box when you have gzip enabled server-wide through PHP, and seems to play better with WordPress than other caches); the latter is also easy enough to answer, but the answer here would be meaningless, because it’s specific to our circumstances, site, and hardware. Thousands and thousands of pages have been written on tweaking Apache, PHP, and MySQL for best performance, and it would be pointless to try and repeat their (oft-contradictory, alas) advice here. Instead, I’ve looked at the oft-asked, and rarely satisfactorily-answered, question “how much traffic can my server handle” from the other end, as it were, and explored the slightly less-often-asked “what server do I need for my site” by way of its kissing cousin “how much traffic can my site be expected to handle”. If it’s not directly applicable to your site, you should at least have a rough idea where to start your own back-of-the-napkin calculations.

Hopefully, the preceding fifteen-hundred or so words provided some food for thought. Remember, it’s pretty much all guesses – in this case, educated guesses – supported by some maths and relatively sound reasoning. It’ll be a year or more before they’re proved right; if they’re wrong, well, I could find out a lot sooner. Such is the nature of the internets. :D

