How I used Ansible and Terraform to refresh an online esports league's infrastructure

The Counter-Strike Confederation, also known as CSC, is a community-run league for Counter-Strike: Global Offensive. For those not familiar, it's a 5v5 first-person shooter with a massive competitive esports scene. CSC was created to foster an open and inclusive environment where players of all skill levels can compete for fun (and bragging rights, of course!) in a game everyone shares a collective passion for.

Prior to my administrative involvement in the league I competed as a player; granted, I was not the best, but there was still plenty of fun to be had. My first season, and the first seasons of CSC, had been hosted on Faceit, a much larger community that opened its toolbox of servers to let smaller communities grow using their systems. It provided a lot, but prior to the third season of the league it became clear that the platform was not "open enough" for changes players wanted to see implemented. So Season 3 kicked off in roughly September–November of 2020 with private servers hosted on DatHost, a fairly well-known game server host. These servers cost about $0.40 per hour (per server); for scale, there were 5 concurrent matches played every Tuesday/Thursday night for 2 hours, across 13 match nights (not including playoffs). The cost added up quickly, and the experience was not at all automated. Because of that, there were issues obtaining match recordings from the game servers, meaning match stats couldn't be properly parsed for each match, and players weren't able to easily download the recordings to review their performance.
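For a rough sense of the bill under those rates, here's the back-of-the-envelope math (numbers from above; this is my own arithmetic, not a figure from the league's books):

```python
# Rough cost of the DatHost regular season, using the rates above.
rate_per_server_hour = 0.40   # USD per server, per hour
servers = 5                   # one per concurrent match
hours_per_night = 2
match_nights = 13             # regular season, not including playoffs

dathost_total = rate_per_server_hour * servers * hours_per_night * match_nights
print(f"${dathost_total:.2f}")  # → $52.00
```

And that's before playoffs, scrimmages, or any extra practice time on the clock.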

In late November/early December I stepped up to the plate to tackle a wide array of technical issues players were facing with server infrastructure. Since it was also an extended break between semesters at university (thanks, COVID), I had a bit too much time on my hands, a rarity.

My first order of business was getting a Terraform configuration up and running. For those unfamiliar, Terraform enables IaC (Infrastructure as Code), meaning the servers you want to install/set up/purchase are specified as code in a .tf file. Terraform has multiple providers, i.e. preconfigured API integrations with well-known hosts like AWS, GCP, Vultr, and many more. By specifying your infrastructure in a Terraform file, you can simply type `terraform apply` and Terraform will go to work contacting the server host(s) and setting up the specified instances.

The benefit in CSC's case was that servers could be spawned every match night at a cost of about $0.04/hr (per server) for roughly 3 hours (~1 hour install, ~1 hour practice, ~1 hour match). Beyond the money savings, by utilizing Vultr (the host we picked for its server locations) we had control via SSH over the hardware and operating system each game server ran on. The servers had 2 cores, 4 GB of RAM, and ~75 GB of SSD, with plenty of bandwidth and an uplink speed to support anything you could imagine. Each ran the latest LTS version of Ubuntu Server, meaning we didn't have to pay for Windows licensing or waste power on a useless desktop environment. Lightweight, configurable, and Linux: just what I needed to install and configure the Counter-Strike dedicated game server on each node.
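A minimal sketch of what such a .tf file might look like with the Vultr provider — the plan, region, and OS IDs here are placeholders for illustration, not our actual values:

```hcl
# Hypothetical sketch: six identical match servers on Vultr.
resource "vultr_instance" "gameserver" {
  count  = 6              # 5 match servers + 1 backup
  plan   = "vc2-2c-4gb"   # 2 vCPU / 4 GB RAM tier (placeholder plan ID)
  region = "ewr"          # placeholder region
  os_id  = 387            # Ubuntu LTS (placeholder OS ID)
  label  = "csc-match-${count.index + 1}"

  # Drop my SSH key onto each instance so `ssh root@<server>` just works
  ssh_key_ids = [vultr_ssh_key.admin.id]
}
```

One `terraform apply`, and all six come up in parallel.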

So now we have 5 (technically 6, because we always wanted a backup) VPS instances from Vultr running Ubuntu; now what do we do? We need to install and configure the actual game server that facilitates the match between the 2 teams. Well, Linux Game Server Managers (LGSM) exists, and allows for installation of game servers like Counter-Strike's onto Linux operating systems. Great, so type up a bash script and wget it from each machine every match night? NO. Ansible exists, silly goose. For those not familiar, Ansible executes 'playbooks' made up of 'plays' against remote instances, allowing you to configure all of them simultaneously. I personally love Ansible, and it wasn't too hard to get it going for this project.

When configuring Ansible I made things much easier for myself in 2 ways. First, setting up a DNS A record pointing to each of the Terraform-spawned instances meant that IP addresses didn't matter at all. Players could connect using a subdomain instead of juggling different IP addresses every night. Terraform allowed for a direct implementation of this thanks to Cloudflare's provider; I literally never knew a single IP address of any of the servers. Furthermore, I could SSH to the servers to configure them using just the subdomain, and to make it even better I had Terraform drop my SSH key onto each of the remote servers. All I had to do was `ssh root@<server>` and I had an SSH session to that server. Back to the Ansible configuration: I set up my hosts file with groups whose children pointed to each server's subdomain. It looked like so…
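Reconstructed here as a hypothetical example — the real subdomains are omitted, and `example.com` is a placeholder:

```ini
[gameservers:children]
server1
server2

[server1]
match1.example.com

[server2]
match2.example.com

[gameservers:vars]
ansible_user=root
ansible_ssh_private_key_file=~/.ssh/id_rsa
```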

(I had a variables section initially for troubleshooting, but it served almost no purpose aside from specifying the SSH key path.)

… and so on for each server.

So now that I have Ansible configured for SSH sessions on each remote instance, I need to tell Ansible what to do; that's what a playbook does. My TL;DR for what the playbook needed to accomplish was:

  • Update and upgrade all existing packages
  • Install required packages by LGSM (needed for the Counter-Strike game server)
  • Install the game server (~30 minutes)
  • Download plugins and configurations from a CDN for the game server
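Condensed into a hypothetical playbook, those four steps might look something like this — the package list, paths, and the CDN URL are placeholders, not the real file:

```yaml
- hosts: gameservers
  become: true
  tasks:
    - name: Update and upgrade all existing packages
      ansible.builtin.apt:
        update_cache: true
        upgrade: dist

    - name: Install the dependencies LGSM needs
      ansible.builtin.apt:
        name: [curl, wget, tmux, bzip2, unzip, bc, jq, lib32gcc-s1, lib32stdc++6]
        state: present

    - name: Install the CS:GO server via LGSM (~30 minutes)
      ansible.builtin.shell: |
        wget -O linuxgsm.sh https://linuxgsm.sh && chmod +x linuxgsm.sh
        ./linuxgsm.sh csgoserver
        ./csgoserver auto-install
      args:
        chdir: /home/csgoserver
      become_user: csgoserver

    - name: Download plugins and configurations from the CDN
      ansible.builtin.unarchive:
        src: https://cdn.example.com/csc-plugins.tar.gz  # placeholder URL
        dest: /home/csgoserver/serverfiles/csgo
        remote_src: true
```

Run it once against the whole `gameservers` group and all six instances install in parallel.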

When completed every match night looked as follows…

  • Terraform apply – have vultr setup our remote instances (~3 minutes max for all 6 instances)
  • ansible-playbook playbook.yaml – Get ansible rolling on configuring the server
  • Setup the server token
  • Something I had tried to accomplish with Ansible j2 templates; however, I never finished it, so I had to do it manually. It took a minute or two max to add the token to a config file on all servers
  • Run scrimmages – utilize a simple plugin called pugsetup to facilitate a PUG (pickup game) or practice between 2 teams
  • Once completed it was time to start matches, matches utilized a different plugin called get5.
    • When configured each match had a set config between 2 teams and set players (meaning no random person could join and start playing)
    • After the scrimmage, the pugsetup plugin had to be disabled and the get5 plugin had to be enabled
    • I easily handled this with another small playbook that powered off the game server and moved the plugin file to a disabled directory
  • When matches were completed a plugin would zip up the match recording (a .dem file) and upload it via SFTP to a remote server that I owned
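The pugsetup-to-get5 swap above can be sketched as a tiny playbook of its own — the LGSM and SourceMod paths here are assumptions for illustration:

```yaml
- hosts: gameservers
  become: true
  become_user: csgoserver
  tasks:
    - name: Stop the game server before touching plugins
      ansible.builtin.command: ./csgoserver stop
      args:
        chdir: /home/csgoserver

    - name: Move pugsetup into a disabled directory so get5 takes over
      ansible.builtin.command: >
        mv addons/sourcemod/plugins/pugsetup.smx
           addons/sourcemod/plugins/disabled/pugsetup.smx
      args:
        chdir: /home/csgoserver/serverfiles/csgo
```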

Once all matches were complete and I had verified that the match demo recordings were successfully uploaded to the remote server, it was time to destroy the remote instances and the Cloudflare DNS A records.

`terraform destroy`

Respond to the prompt with "yes", and within 30 seconds all the remote instances are completely destroyed and I'm no longer getting billed.


Now that the season has ended and a champion has been crowned (not my team, sad), let's reflect on how this implementation worked. Overall it worked great. I was able to overcome a few obstacles with the Cloudflare records, and linking the Vultr instances to Cloudflare is a very neat feature that showcases the power Terraform can have. Server stability was good, and routing to the servers was good for the most part, with only 3 or 4 cases of bad routing across the entire season.

The only shortfall I have with this setup is the game server token: the servers were not able to start themselves up automatically until I manually entered the token. It didn't mean much of a slowdown, but leveraging the power of templates and variables within Ansible might have meant that matches could've autostarted themselves instead of me copy-pasting a preset command into the game server console.
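For what it's worth, the unfinished templating idea would probably have looked something like this — the file names, paths, and the environment-variable source for the token are all hypothetical:

```yaml
- hosts: gameservers
  vars:
    gslt_token: "{{ lookup('env', 'CSGO_GSLT_TOKEN') }}"  # hypothetical source
  tasks:
    - name: Render the server config with the login token baked in
      ansible.builtin.template:
        # server.cfg.j2 would contain a line like: sv_setsteamaccount {{ gslt_token }}
        src: server.cfg.j2
        dest: /home/csgoserver/serverfiles/csgo/cfg/csgoserver.cfg
```

One templated config per server, and the manual copy-paste step disappears.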