Adventures in server babysitting: I abandoned OpenLiteSpeed and went back to good ol' Nginx
One weather site's sudden struggles, and musings on why change isn't always good.
Lee Hutchinson – Jan 26, 2024 3:29 pm UTC
[Image: Ish is on fire, yo. Credit: Tim Macpherson / Getty Images]
Since 2017, in what spare time I have (ha!), I help my colleague Eric Berger host his Houston-area weather forecasting site, Space City Weather. It's an interesting hosting challenge: on a typical day, SCW does maybe 20,000–30,000 page views to 10,000–15,000 unique visitors, which is a relatively easy load to handle with minimal work. But when severe weather events happen (especially in the summer, when hurricanes lurk in the Gulf of Mexico), the site's traffic can spike to more than a million page views in 12 hours. That level of traffic requires a bit more prep to handle.
[Image: Hey, it's Space City Weather! Credit: Lee Hutchinson]
For a very long time, I ran SCW on a backend stack made up of HAProxy for SSL termination, Varnish Cache for on-box caching, and Nginx for the actual web server application, all fronted by Cloudflare to absorb the majority of the load. (I wrote about this setup at length on Ars a few years ago for folks who want some more in-depth details.) This stack was fully battle-tested and ready to devour whatever traffic we threw at it, but it was also annoyingly complex, with multiple cache layers to contend with, and that complexity made troubleshooting issues more difficult than I would have liked.
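To give a rough idea of how those layers chained together, here's a minimal sketch of that kind of setup. The ports, paths, and file names below are illustrative stand-ins rather than the actual Space City Weather configuration; the older Ars write-up has the real details.

```
# haproxy.cfg (fragment): HAProxy terminates TLS on :443 and hands
# plain HTTP to Varnish on a local port (6081 is Varnish's default).
frontend https_in
    bind :443 ssl crt /etc/haproxy/certs/example.pem
    default_backend varnish

backend varnish
    server varnish1 127.0.0.1:6081 check

# default.vcl (fragment): Varnish caches what it can and forwards
# misses to Nginx on another local port.
vcl 4.1;
backend default {
    .host = "127.0.0.1";
    .port = "8080";
}

# nginx (fragment): the actual origin, serving the WordPress docroot
# over plain HTTP on localhost only.
server {
    listen 127.0.0.1:8080;
    root /var/www/wordpress;
    index index.php;
}
```

Every request crosses Cloudflare, HAProxy, Varnish, and Nginx in turn, which is exactly the kind of multi-layer caching arrangement that turns "why is this page stale?" into a four-part question.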
So during some winter downtime two years ago, I took the opportunity to jettison some complexity and reduce the hosting stack down to a single monolithic web server application: OpenLiteSpeed.

Out with the old, in with the new
I didn't know too much about OpenLiteSpeed (OLS to its friends) other than that it's mentioned a bunch in discussions about WordPress hosting, and since SCW runs WordPress, I started to get interested. OLS seemed to get a lot of praise for its integrated caching, especially when WordPress was involved; it was purported to be quite quick compared to Nginx; and, frankly, after five-ish years of admining the same stack, I was interested in changing things up. OpenLiteSpeed it was!
[Image: The OLS admin console, showing vhosts. This is from my personal web server rather than the Space City Weather server, but it looks the same. If you want some deeper details on the OLS config I was using, check my blog. Yeah, I still have a blog. I'm old. Credit: Lee Hutchinson]
The first significant adjustment to deal with was that OLS is primarily configured through an actual GUI, with all the annoying potential issues that brings with it (another port to secure, another password to manage, another public point of entry into the backend, more PHP resources dedicated just to the admin interface). But the GUI was fast, and it mostly exposed the settings that needed exposing. Translating the existing Nginx WordPress configuration into OLS-speak was a good acclimation exercise, and I eventually settled on Cloudflare tunnels as an acceptable method for keeping the admin console hidden away and notionally secure.
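For the curious, the tunnel approach looks roughly like the sketch below: cloudflared runs on the box and forwards a private hostname to the admin console on localhost, so port 7080 (OLS's default WebAdmin port) never has to be exposed publicly. The hostname, tunnel name, and credentials path here are hypothetical stand-ins.

```
# /etc/cloudflared/config.yml (sketch)
tunnel: ols-admin                       # hypothetical tunnel name/UUID
credentials-file: /etc/cloudflared/ols-admin.json

ingress:
  - hostname: ols-admin.example.com     # private hostname for the console
    service: https://localhost:7080     # OLS WebAdmin default port
    originRequest:
      noTLSVerify: true                 # OLS ships a self-signed cert
  - service: http_status:404            # catch-all: refuse everything else
```

Pair that with an access policy on the Cloudflare side and the admin GUI stays reachable for you and invisible to everyone else.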
[Image: Just a taste of the options that await within the LiteSpeed Cache WordPress plugin. Credit: Lee Hutchinson]
The other major adjustment was the OLS LiteSpeed Cache plugin for WordPress, which is the primary tool one uses to configure how WordPress itself interacts with OLS and its built-in cache. It's a massive plugin with pages and pages of configurable options, many of which are concerned with driving utilization of the Quic.Cloud CDN service (which is operated by LiteSpeed Technologies, the company that created OpenLiteSpeed and its for-pay sibling, LiteSpeed).
Getting the most out of WordPress on OLS meant spending some time in the plugin, figuring out which of the options would help and which would hurt. (Perhaps unsurprisingly, there are plenty of ways in there to get oneself into stupid amounts of trouble by being too aggressive with caching.) Fortunately, Space City Weather provides a great testing ground for web servers, being a nicely active site with a very cache-friendly workload, and so I hammered out a starting configuration with which I was reasonably happy and, while speaking the ancient holy words of ritual, flipped the cutover switch. HAProxy, Varnish, and Nginx went silent, and OLS took up the load.

Promoted Comments

Mungus the Unhyphenated:
"At some point, being a good analyst means knowing when to call for help. Was there no tech support that could have validated logging configs or speculated on the behavior?"
When it's your own self-hosted web server, you, unfortunately, are the bulk of your own support. Which, of course, sucks, and is why outsourcing can be helpful. Of course, when the outsourcing's support bogs down, it's back to you to try to fix the outsourced problem, which also sucks.
And in this case, perhaps Cloudflare could offer some help within their realm, but in my experience with my company’s outsource-hosted website, it’s not that Cloudflare’s support is unhelpful, it’s just limited to their realm and then it’s a three-way situation of you, your outsource provider’s support, plus Cloudflare’s. Which, arguably, sucks multiplied by three.
WordPress plugins and projects like OLS often have support only via forums and online documentation. It's helpful but not speedy, especially when things are on fire. That's where Google-Fu under pressure is a life skill, as you try to piece together the logs and the smoking ruins and correlate it with the docs and posted wisdom of those who've suffered the same fate. Contrast that with outsourced provider-hosted solutions, where you get to play "stump the chumps" with online and phone support, while still madly researching everything you can to try to move things forward. It still sucks, it's just shared between you and the support team, and nobody's getting much sleep.
The XKCD reference is so on-point… (January 26, 2024 at 4:02 pm)

pokrface:
"I'll be honest, seeing MOTDs with the fancy dynamic server details makes me cringe a bit. All well and good until the server is under load and logging into SSH takes 10 minutes because a load of scripts have to run before your shell comes up."
Ah HA, well, joke's on you, then: I'm too dumb to set up that kind of fancy dynamic server detail-gathering script, so that's a static MOTD that I only change manually. In fact, that static MOTD+app list takes less time to display than the standard scriptified Canonical MOTD! (January 28, 2024 at 12:47 pm)
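(If you want that kind of static MOTD on an Ubuntu-style box, a rough sketch is below; the banner text is made up, and the paths assume the stock update-motd layout that Canonical ships.)

```
# Disable the per-login dynamic MOTD scripts so nothing runs at SSH time:
sudo chmod -x /etc/update-motd.d/*

# Drop a hand-maintained banner into /etc/motd instead:
sudo tee /etc/motd > /dev/null <<'EOF'
web01 - WordPress behind Cloudflare - edit this file by hand
EOF
```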