Look at our backend

Behind the scenes: How we host Ars Technica, part 1

Join us on a multipart journey into our place in the cloud!

Lee Hutchinson – Jul 19, 2023 1:00 pm UTC

[Image: Take a peek inside the Ars vault with us! Credit: Aurich Lawson | Getty Images]

A bit over three years ago, just before COVID hit, we ran a long piece on the tools and tricks that make Ars function without a physical office. Ars has spent decades perfecting how to get things done as a distributed remote workforce, and as it turns out, we were even more fortunate than we realized because that distributed nature made working through the pandemic more or less a non-event for us. While other companies were scrambling to get work-from-home arranged for their employees, we kept on trucking without needing to do anything different.

However, there was a significant change that Ars went through right around the time that article was published. January 2020 marked our transition away from physical infrastructure and into a wholly cloud-based hosting environment. After years of great service from the folks at Server Central (now Deft), the time had come for a leap into the clouds, and leap we did.

There were a few big reasons to make the change, but the ones that mattered most were feature- and cost-related. Ars fiercely believes in running its own tech stack, mainly because we can iterate new features faster that way, and our community platform is unique among Condé Nast brands. So when the rest of the company was either moving to or already on Amazon Web Services (AWS), we could hop on the bandwagon and take advantage of Condé's enterprise pricing. That, combined with no longer having to maintain physical reserve infrastructure to absorb big traffic spikes and being able to rely on scaling, fundamentally changed the equation for us.

In addition to cost, we also jumped at the chance to rearchitect how the Ars Technica website and its components were structured and served. We were using a virtual private cloud setup at our previous host (a pile of dedicated physical servers running VMware vSphere), but rolling everything into AWS gave us the opportunity to reassess the site and adopt some solid reference architecture.

Cloudy with a chance of infrastructure

And now, with that redesign having been functional and stable for a couple of years and a few billion page views (really!), we want to invite you all behind the curtain to peek at how we keep a major site like Ars online and functional. This article will be the first in a four-part series on how Ars Technica works; we'll examine both the basic technology choices that power Ars and the software with which we hook everything together.

This first piece, which we're embarking on now, will look at the setup from a high level and then focus on the actual technology components; we'll show the building blocks and how those blocks are arranged. Next week, we'll follow up with a more detailed look at the applications that run Ars and how those applications fit together within the infrastructure; after that, we'll dig into the development environment and look at how Ars Tech Director Jason Marlin creates and deploys changes to the site.

Finally, in part four, we'll take a bit of a peek into the future. There are some changes that we're thinking of making; the lure (and price!) of 64-bit ARM offerings is a powerful thing, and we'll look at that stuff and talk about our upcoming plans to migrate to it.

Ars Technica: What we're doing

But before we look at what we want to do tomorrow, let's look at what we're doing today. Gird your loins, dear readers, and let's dive in.

To start, here's a block diagram of the specific AWS services Ars uses. It's a relatively simple way to represent a complex interlinked structure:

[Image: A high-level diagram of the Ars AWS setup. Credit: Lee Hutchinson]

Ars leans on multiple pieces of the AWS tech stack. We're dependent on an Application Load Balancer (ALB) to first route incoming visitor traffic to the appropriate Ars back-end service (more on those services in part two). Downstream of the ALB, we use two services, Elastic Container Service (ECS) and Fargate, in conjunction with each other to spin up Docker-like containers to do work. Another service, Lambda, is used to run cron jobs for the WordPress application that forms the core of the Ars website (yes, Ars runs WordPress; we'll get into that in part two).
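The article doesn't show any of this configuration directly, but as a rough illustration of how those three pieces relate, here's a minimal boto3 sketch of the same pattern: a Fargate task registered with an ALB target group, plus a scheduled rule driving a cron-style Lambda. Every name, account ID, ARN, subnet, and sizing value below is a made-up placeholder, not Ars's actual setup.

```python
import boto3

ecs = boto3.client("ecs")
events = boto3.client("events")

# Describe the container to run. Fargate tasks require awsvpc
# networking and explicit CPU/memory sizing.
ecs.register_task_definition(
    family="frontend",  # hypothetical service name
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="512",
    memory="1024",
    containerDefinitions=[{
        "name": "frontend",
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/frontend:latest",
        "portMappings": [{"containerPort": 8080, "protocol": "tcp"}],
        "essential": True,
    }],
)

# Keep several copies of the task running and register each one with
# an ALB target group, so the load balancer can route incoming
# traffic to whichever containers are healthy.
ecs.create_service(
    cluster="site-cluster",
    serviceName="frontend",
    taskDefinition="frontend",
    desiredCount=4,
    launchType="FARGATE",
    networkConfiguration={"awsvpcConfiguration": {
        "subnets": ["subnet-aaa", "subnet-bbb"],
        "securityGroups": ["sg-ccc"],
        "assignPublicIp": "DISABLED",
    }},
    loadBalancers=[{
        "targetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
                          "targetgroup/frontend/0123456789abcdef",
        "containerName": "frontend",
        "containerPort": 8080,
    }],
)

# Stand-in for WordPress's wp-cron: an EventBridge rule fires a
# Lambda function on a fixed schedule instead of relying on page
# loads to trigger cron tasks. (The Lambda also needs a resource
# policy allowing events.amazonaws.com to invoke it, omitted here.)
events.put_rule(Name="wp-cron", ScheduleExpression="rate(5 minutes)",
                State="ENABLED")
events.put_targets(
    Rule="wp-cron",
    Targets=[{"Id": "wp-cron-fn",
              "Arn": "arn:aws:lambda:us-east-1:123456789012:function:wp-cron"}],
)
```

The appeal of this arrangement is that none of it is a pet server: the task definition describes the workload, ECS keeps the desired number of copies alive behind the ALB, and scheduled work runs without a box sitting around waiting for cron to fire.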
