Design Considerations, Part Two

In a previous post, I detailed some of the application-level decisions I’ve made for a new site I’m designing. In this post, I’m going to go into some design goals for the site, and how I’m going to (hopefully!) achieve them.

Deep down, every web author and designer wants their site to be popular. More pragmatically, sites should perform well – be usable across a wide range of browsers and operating systems, load quickly, and be user-friendly in terms of design and navigation. Oh, and it needs to be able to do this under load – that is, while handling a large number of requests.

I’ve decided on a slightly-modified version of Chris Pearson’s Cutline theme for WordPress as the base design for the site I’m creating, because it’s incredibly powerful, incredibly flexible, and does everything I want it to. It’s also relatively lightweight as far as size goes, and the (small) images in the design are used on every page, which allows most browsers to cache them, reducing both server load and bandwidth usage. No offense to Chris, but it’s not the design visitors really care about (though it certainly helps); it’s the content.

On the site under design, better than ninety percent of the content is going to be text. Partly this is for accessibility reasons – better ease-of-use on PDAs and internet-capable cellphones, for instance – and partly for performance: (server-side) gzip compression should roughly halve the bandwidth required to transfer the content (along with everything else text-based, like the page HTML), and well-tuned server software should cut the time required to serve it, too – though that’s a subject for part three.
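That rough halving estimate is easy to sanity-check. In production the compression would be done by the webserver (Apache’s mod_deflate, for instance), but a quick sketch in Python shows the idea; the sample markup here is invented, and repetitive HTML like this often compresses even better than 2:1:

```python
import gzip

# A stand-in for a typical text-heavy page; real article markup is
# similarly repetitive, which is why gzip does so well on it.
html = ("<p>Sample paragraph of page content, repeated to simulate "
        "a full article body with markup.</p>\n") * 200

raw = html.encode("utf-8")
compressed = gzip.compress(raw)

print(f"raw: {len(raw)} bytes, gzipped: {len(compressed)} bytes")
print(f"ratio: {len(compressed) / len(raw):.2%}")
```

The visitor’s browser transparently decompresses on arrival, so the saving is pure bandwidth, at a small cost in server CPU.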

That said, my goal is to keep the entire size of any given page under 200kb, and under three screens in length (at 1024×768 resolution). The “overhead” markup and so on for a WordPress page is around 5kb; the images for the Cutline theme are around 3kb, and the header images are 30-50kb apiece. With a stylesheet of some 15kb, this leaves more than 125kb for content per page. That shouldn’t be a problem; 10kb of text, even allowing for markup, is quite a bit for a single webpage. Odds are excellent that nearly every page will come in under 100kb (and compression should cut that roughly in half). By keeping everything under three screens in length, visitors should be better able to find what they’re looking for, and will hopefully not be faced with “information overload”. If this means some subjects need to be split across two or even three pages, so be it. With images and stylesheets (hopefully!) cached by the visitor’s browser, the additional bandwidth cost of the extra pages is negligible.
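The budget arithmetic above works out as follows, taking the worst-case 50kb header image (all figures are the rough estimates from this post):

```python
# Per-page size budget, in kb (figures are the rough estimates above)
BUDGET = 200

overhead = {
    "WordPress markup": 5,
    "Cutline theme images": 3,
    "header image (worst case)": 50,
    "stylesheet": 15,
}

remaining = BUDGET - sum(overhead.values())
print(f"left for content: {remaining}kb per page")  # → 127kb
```

And since the theme images and stylesheet should be cached after the first visit, repeat page-loads are effectively just the markup and content.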

I’ve been writing content for the web since 2000; the times and fashions in (X)HTML have changed, and I freely admit I’ve not, perhaps, moved with them. HTML Dog’s site provides a nice guide to the difference between presentational and meaningful markup; the former is purely aesthetic, while the latter is what we used to call “semantic”, back in the olden days. I’m not convinced it’s a one hundred percent meaningful debate; I think everyone understands that when I italicize something, a certain degree of emphasis is implied. That said, if there are screen-readers and other machines which require (or benefit from) explicit, rather than implicit, markup, so be it. As such, part of the site-design goal is the use of meaningful (and validating) markup. It requires a minimum of effort on my part, and is truly a “best practice”.
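A minimal sketch of the presentational-to-meaningful shift, in Python. The tag pairs are the classic textbook examples; a real cleanup pass over old markup would want a proper HTML parser rather than naive string replacement:

```python
# Presentational tags and their meaningful counterparts: <i> says
# "italicize this", while <em> says "this is emphasized" and lets the
# browser or screen-reader decide how to render that.
MEANINGFUL = {
    "<i>": "<em>", "</i>": "</em>",
    "<b>": "<strong>", "</b>": "</strong>",
}

def make_meaningful(html: str) -> str:
    """Naive substitution; fine for a demo, not for production markup."""
    for old, new in MEANINGFUL.items():
        html = html.replace(old, new)
    return html

print(make_meaningful("I <i>really</i> mean it."))
# → I <em>really</em> mean it.
```

To a sighted visitor the rendered page looks identical either way, which is exactly the point: the meaning moves into the markup instead of the styling.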

My only concern about content design is navigation; WordPress of course offers visitors a search capability – but it only searches “Posts”, not “Pages” – and the majority of pages on the site are going to be Posts. (Isn’t WP jargon fun?) I’m exploring sitemap options – but with an estimated 150+ pages, that’s a lot for visitors to wade thru. I expect the final solution will be, for lack of a better term, “directory” pages linked from the front page, each covering a subtopic or subset of the site and linking to the individual, meat-and-bones content pages. I know this is not necessarily optimal from a navigation and usability standpoint, and I don’t want to worry too much about SEO quackery and tomfoolery, but I believe it’ll at least be intuitive.
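The directory-page idea is just a two-level structure: front page, to directory pages, to content pages. A sketch of rendering one such page (the topics and slugs here are invented placeholders; in practice the groupings would come from WordPress categories, not a hardcoded dict):

```python
# Hypothetical site structure -- topic names and slugs are placeholders.
site = {
    "hardware": ["choosing-a-server", "raid-basics"],
    "software": ["apache-tuning", "wordpress-setup"],
}

def directory_page(topic, slugs):
    """Render one directory page as a plain HTML list of links."""
    links = "\n".join(
        '<li><a href="/{0}/{1}/">{1}</a></li>'.format(topic, s) for s in slugs
    )
    return "<h2>{0}</h2>\n<ul>\n{1}\n</ul>".format(topic, links)

for topic, slugs in site.items():
    print(directory_page(topic, slugs))
```

With 150+ pages across, say, a half-dozen subtopics, each directory page stays at a couple dozen links – well within the three-screen limit.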

If you look at the sizes of page elements a couple paragraphs up, you’ll see that the header images are the single largest objects being served. In fact, on nearly every page an image is going to be the biggest single item, and images, which don’t compress further, are going to consume the majority of the bandwidth. Because of this, right from the beginning I’m working to make sure that all images are served from a subdomain – images.thedomainname.tld. The reason is to allow the eventual painless migration of images to a separate webserver (or servers), if and when that becomes necessary. Going thru after the fact and changing links on a hundred-fifty pages isn’t going to be fun, so I’m avoiding that foreseeable unpleasantness entirely. It’s much easier to mirror static content (like images) than dynamic content (like a database). It’s also often images that produce the greatest load on a webserver, so being able to readily distribute or even delegate that load can only do good things for performance as a whole.
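For the curious, here’s a sketch of the sort of after-the-fact link-rewriting this avoids – pointing existing root-relative `src` attributes at the image host. The domain is a placeholder, and in practice this would be a WordPress filter or a one-off pass over the database, not a per-page regex:

```python
import re

IMG_HOST = "https://images.example.tld"  # placeholder for the real images subdomain

def rewrite_image_srcs(html):
    """Point root-relative <img> paths at the dedicated image host."""
    return re.sub(
        r'(<img\b[^>]*\bsrc=")(/[^"]+)',   # capture prefix and root-relative path
        lambda m: m.group(1) + IMG_HOST + m.group(2),
        html,
    )

page = '<p>Header:</p><img src="/images/header1.jpg" alt="header">'
print(rewrite_image_srcs(page))
# → <p>Header:</p><img src="https://images.example.tld/images/header1.jpg" alt="header">
```

Serving images from their own hostname from day one means this script never needs to exist; moving the images later is just a DNS change.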

I think that about covers it for the content guidelines; in a couple days, I’ll share a few thoughts about the hardware and server software that will run the whole thing.

Published in: Geekiness, General | on December 12th, 2006
