Automation (Puppet, code and schedules)
11 October 2016 · Filed in Infrastructure

Once you know what your servers should be doing every day, and when, you are in a position to automate the processes that build and manage them.
There are three levels to consider when building servers automatically.
- What does the infrastructure look like? Network, firewalls, load balancers, subnets and so on.
- What is the role of the server? Software to be installed, configuration and so on.
- How is new code deployed, and when? Scheduling and code deployments.
The phrase "infrastructure as code" describes creating, storing (under version control) and running the code that builds the environment in which the server(s) live. Of course you don't have to automate everything at once, and when you are ready there are tools to help set up the infrastructure for you.
Terraform (from the same people who brought you Vagrant) and AWS CloudFormation, to name but two. These tools let you define exactly what you want your servers and their surrounding environment to look like, so you get a completely predictable build every time you run the code. When something needs to change, you modify the code and rerun it to apply the change, which also keeps everything self-documenting (if you can read the code).
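To make that modify-and-rerun loop concrete, here is a minimal sketch in Python of the Terraform workflow. It assumes the Terraform CLI is installed and that a directory (here a placeholder called "infrastructure") contains your .tf definitions; the plan file name is likewise just an example.

```python
import subprocess

# Assumes the Terraform CLI is on PATH and the working directory
# holds the .tf files that describe the desired infrastructure.
def terraform_apply(working_dir="infrastructure"):
    # Download the providers and modules the configuration references.
    subprocess.run(["terraform", "init"], cwd=working_dir, check=True)

    # Work out what would change compared to the current state
    # and save that plan to a file so it can be reviewed.
    subprocess.run(["terraform", "plan", "-out=tfplan"], cwd=working_dir, check=True)

    # Apply exactly the reviewed plan; rerunning after a code change
    # applies only the difference, which is what keeps builds predictable.
    subprocess.run(["terraform", "apply", "tfplan"], cwd=working_dir, check=True)

if __name__ == "__main__":
    terraform_apply()
```

CloudFormation offers the same edit-and-reapply cycle through its own templates and CLI.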
Regardless of how your infrastructure is deployed (manually or automatically), you can define the server build process as code too. This is the code the server will process to build itself from the moment it is switched on. Again there are tools to assist: in Linux/Unix land, Puppet, Chef, Ansible and others; more specific to Microsoft are System Center Configuration Manager and LANDESK. All of these tools let you define how you want your server to look, either by following a series of scripts or by declaring the desired end state, so your team gets a known good result every time they build a new server.
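As an illustration of that declarative, end-state style, the sketch below writes a tiny Puppet manifest to a temporary file and applies it locally. It assumes the Puppet agent is installed; the nginx package is only an example, and the --noop flag first reports what would change without changing anything.

```python
import subprocess
import tempfile

# A minimal example manifest: declare the end state (package installed,
# service running) rather than the steps needed to get there.
MANIFEST = """
package { 'nginx':
  ensure => installed,
}

service { 'nginx':
  ensure  => running,
  enable  => true,
  require => Package['nginx'],
}
"""

def apply_manifest(noop=True):
    with tempfile.NamedTemporaryFile("w", suffix=".pp", delete=False) as f:
        f.write(MANIFEST)
        path = f.name

    cmd = ["puppet", "apply", path]
    if noop:
        # Dry run: report what Puppet would change without changing it.
        cmd.append("--noop")
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    apply_manifest()
```

Because the manifest describes the end result, running it again on a server that is already correct changes nothing, which is exactly the "known good result, every time" property described above.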
Now that the servers are built and working in an environment just for them, things change. Most clients we work with are developing software, and software changes a lot. Deployments of new code can also be automated, either as part of the initial server build process (update the software code, then rebuild the server(s)) or through a dedicated deployment pipeline. The next set of tools are often referred to as Continuous Integration/Continuous Delivery (CI/CD) or build tools. They let you define a sequence of steps that the tool runs to build, test and deploy the new code; this can be as simple as building a single server or as complex as building a complete software package bundle. The most common tools are Jenkins, TeamCity, Travis CI and Team Foundation Server, and they work in conjunction with source control repositories such as Git and SVN.
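At heart, a CI/CD pipeline is an ordered list of steps that stops at the first failure so a broken build never reaches production. The sketch below shows that idea in Python rather than any particular tool's own format; the stage names and make targets are hypothetical placeholders for your project's real build, test and deploy commands.

```python
import subprocess
import sys

# Hypothetical pipeline stages; in Jenkins, TeamCity and similar tools these
# would be defined in the tool's own job or pipeline configuration.
PIPELINE = [
    ("checkout", ["git", "pull", "--ff-only"]),
    ("build",    ["make", "build"]),
    ("test",     ["make", "test"]),
    ("deploy",   ["make", "deploy"]),
]

def run_pipeline():
    for stage, command in PIPELINE:
        print(f"== {stage} ==")
        result = subprocess.run(command)
        if result.returncode != 0:
            # Stop at the first failing stage so untested code is never deployed.
            print(f"Stage '{stage}' failed; aborting pipeline.")
            sys.exit(result.returncode)
    print("Pipeline completed successfully.")

if __name__ == "__main__":
    run_pipeline()
```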
Should you want any assistance with automation or just want to discuss your options, contact us.