First off -- What is Caching?
From Wikipedia:
In computer engineering, a cache (/ˈkæʃ/ kash) is a component that transparently stores data so that future requests for that data can be served faster. The data that is stored within a cache might be values that have been computed earlier or duplicates of original values that are stored elsewhere. If requested data is contained in the cache (cache hit), this request can be served by simply reading the cache, which is comparatively faster. Otherwise (cache miss), the data has to be recomputed or fetched from its original storage location, which is comparatively slower. Hence, the greater the number of requests that can be served from the cache, the faster the overall system performance becomes.
What that means is that the "cache" server builds the page you want ahead of time and hands you that "pre-built" or "cached" copy when you ask for it. The cache server periodically updates the page it stores, so you get a recent copy of the page you ask for. This allows the server to serve 1 pre-built copy of, say, the front page to 100 people trying to access it, instead of building a new copy of the front page 100 times in 5 seconds. This saves us a LOT of CPU and MEMORY usage.
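In code, the whole idea fits in a few lines. This is just a sketch with made-up names (build_front_page stands in for all the real work), not our actual cache server:

    cache = {}  # page name -> pre-built HTML

    def build_front_page():
        # Imagine lots of slow database work happening here.
        return "<html>...front page...</html>"

    def serve(page):
        if page in cache:            # cache hit: just read the stored copy
            return cache[page]
        html = build_front_page()    # cache miss: do the expensive build once
        cache[page] = html           # keep it for the next visitor
        return html

    # 100 requests in 5 seconds -> 1 expensive build, 99 cheap reads.
    for _ in range(100):
        serve("front")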
Does this mean that our data is accessible to an outside server?
No! Currently the "cache" server is located on the same server as the basic web-server; they just use different "ports" and access things in different ways. In the future, we plan to "off-load" the "cache" server to its own dedicated machine where it can perform "load balancing" as well as caching, but that will still be our OWN server on our OWN hardware, directly administered by me (Piper), Erin and our support team.
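If you're curious what "same machine, different ports" looks like, here is a toy sketch of a caching front-end that listens on one port and fetches pages from the real web-server on another. The port numbers are made up and the real setup is more involved:

    import urllib.request
    from http.server import BaseHTTPRequestHandler, HTTPServer

    BACKEND = "http://127.0.0.1:8080"  # the "real" web-server, on its own port
    cache = {}                         # path -> response body

    class CachingFrontEnd(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path not in cache:
                # Cache miss: ask the backend on the other port, then store it.
                with urllib.request.urlopen(BACKEND + self.path) as resp:
                    cache[self.path] = resp.read()
            body = cache[self.path]
            self.send_response(200)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    # The front-end listens on the public-facing port; 8000 here.
    HTTPServer(("", 8000), CachingFrontEnd).serve_forever()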
So will the site ALWAYS be fast now that we have this "cache" server in place?
There will be initial slowdowns from time to time as the caching server dumps its cache and rebuilds as things expire. Also, with any caching server, some pages aren't cached until they are accessed, and we have a max time limit in place that is still fairly low in order to keep the site fresh. What this means is that some pages that aren't accessed very often will always load as if they weren't cached, because they aren't accessed often enough to be held in the cache. That being said, for the most part the caching server should keep the site loading faster and allow us to do a lot more "upgrades" behind the scenes with less downtime for the users.
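The "max time limit" part works roughly like this; the 60-second number below is made up, ours is tuned for the site:

    import time

    MAX_AGE = 60  # seconds a cached page stays "fresh" (made-up number)
    cache = {}    # page name -> (built_at, html)

    def serve(page, build):
        now = time.time()
        entry = cache.get(page)
        if entry and now - entry[0] < MAX_AGE:
            return entry[1]          # still fresh: serve the stored copy
        html = build()               # expired or never cached: rebuild it
        cache[page] = (now, html)
        return html

    # A rarely-visited page's entry is always expired by the next visit,
    # so it rebuilds every time -- the "loads like it wasn't cached" case.
    serve("quiet-page", lambda: "<html>...</html>")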
I have to admit, I still don't understand "confusers"
But I'll take your word for it. From what I understand, it sounds like a great way to provide the same service MUCH more efficiently (as soon as all the bugs are worked out, but that's what happens with upgrades). I'll figure it out eventually. Thanks for everything!
Wren
I'll try.
Think of the old days of the web, when browsers were just getting going, and how fast the pages loaded. Those pages were HTML files stored on the server: your browser asked for the page and the server sent it to you, straight off the hard disk. Those pages didn't change very much, though, because somebody had to hand-edit in any changes. The kind of automatic pushdown you see on the front page here, where new stories push older ones down, didn't happen -- all that editing was just too much work for one person.
Here at BCTS, Drupal has been composing that front page automatically, so we get the pushdown, but it does that by fetching all those teaser boxes from the database one by one to pull together a page, along with all the other stuff that makes up the BCTS front page -- those are database fetches too. That's a *lot* of database fetches. And, the way the site was set up, as soon as the page was composed in memory Drupal sent it right out to the browser that asked for it, and forgot all about it as soon as it was out the door... And then repeated the whole process for the next browser, just as if that first one had never happened. That's a lot of wasted computer-work.
With caching, the front page that Drupal composes is copied to disk (into a cache) before being sent out. As long as nothing changes in the front-page layout, that copy is still good, so the next browser, and the next, and the next, gets that copy of the front page sent out from disk just like the old days. If the database is touched at all, it's just to check that nothing's changed that would require the front page to be recomposed. Much work is saved, and the whole site runs faster because of it.
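A rough sketch of that "check cheaply, recompose only when needed" idea, with hypothetical names (Drupal's real cache is more elaborate):

    cached_html = None
    cached_as_of = 0.0

    def last_change_time():
        # One tiny database query: "when did front-page content last change?"
        return 12345.0  # placeholder timestamp

    def compose_from_database():
        # Imagine dozens of per-teaser fetches being stitched together here.
        return "<html>...teasers...</html>"

    def front_page():
        global cached_html, cached_as_of
        changed_at = last_change_time()            # the only DB touch on a hit
        if cached_html is None or changed_at > cached_as_of:
            cached_html = compose_from_database()  # the expensive path
            cached_as_of = changed_at
        return cached_html

    front_page()  # composes once
    front_page()  # served from the copy; only the cheap check runs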
So let me get this straight
You are trying to tell us that *ahem* 'Cache is King?'
Arr us cheapskate Chinese know dat arready :)
In all seriousness, it is unfortunate that it might be difficult to cache the commenting pages, since they by their very nature are probably kinda custom. I have noticed that commenting seems to introduce a lot of lag in the response, because every comment in effect makes the last page 'dirty' in the cache, and it would need to be replaced.
Kim
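Kim's point in miniature: every new comment has to knock the affected page out of the cache, so a busy comment thread gets few cache hits. A sketch, with made-up names:

    cache = {}     # page name -> rendered HTML
    comments = {}  # page name -> list of comments

    def render(page):
        return f"<html>{page}: {len(comments.get(page, []))} comments</html>"

    def serve(page):
        if page not in cache:           # after every comment, a miss...
            cache[page] = render(page)  # ...so the page must be rebuilt
        return cache[page]

    def post_comment(page, text):
        comments.setdefault(page, []).append(text)
        cache.pop(page, None)           # the stored copy is now "dirty"

    serve("story-42")                  # miss: builds and caches
    post_comment("story-42", "Nice!")  # drops the cached copy
    serve("story-42")                  # miss again: rebuilds with the comment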