21 Oct 2016
In my talk - "Can Puppet help you run Docker on a T2.Micro?" - you will learn about the technologies we use to create environments from scratch every day.
It walks through a number of the key concepts of Puppet - stages, roles and profiles, Hiera data and the Puppet Forge - as well as giving a brief introduction to Docker.
I use these to explain a solution that runs a Puppet manifest to configure Amazon's smallest server (yes, I've run this on a t2.nano too) to run a Docker containerised web service.
You will learn why Puppet stages help in this solution, how roles and profiles are defined and used, and finally how the Puppet Forge and Hiera data are used together to install Docker and run containers.
The talk will include links to code you can use afterwards, and I'll touch on what Docker is and how to configure the Puppet module to run containers automatically.
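In the meantime, here is a minimal sketch of the kind of role and profile the talk builds towards. It assumes the garethr/docker module from the Puppet Forge; the class names, image and port values are illustrative only, with the parameters expected to come from Hiera via automatic parameter lookup.

```puppet
# profile/manifests/webservice.pp - illustrative sketch only
# Parameter values come from Hiera, e.g. in common.yaml:
#   profile::webservice::image: 'nginx:1.10'
#   profile::webservice::ports: '80:80'
class profile::webservice (
  String $image = 'nginx',
  String $ports = '80:80',
) {
  include ::docker                # installs and runs the Docker engine (garethr/docker)

  docker::image { $image: }       # pull the image

  docker::run { 'webservice':     # keep a container running as a system service
    image   => $image,
    ports   => [$ports],
    require => Docker::Image[$image],
  }
}

# role/manifests/webserver.pp - a role simply composes profiles
class role::webserver {
  include profile::webservice
}
```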
Come and see my talk at Puppet Camp - 8th November, 11:15, in London.
19 Oct 2016
After many years of working with developers, we find that one of the most common problems faced by ops and support teams is code that doesn't work on production systems.
This is often caused by subtle differences on the developer's machine: different versions of software components, different install locations and different shared libraries. Any of these can mean that code which works fine on one machine or server won't install or run on another.
With our structure and automation, we can provide a machine that looks exactly the same as test and production. By giving developers access to the tools needed to build their own machines, via Jenkins for example, everyone is empowered to build and develop knowing that the whole team is on the same page and working with the same environment or environments.
Some customers have a number of environments to develop and test systems in. Often these are:
Development - often shortened to DEV. This can be virtual machines on the developer's own computer.
Testing - TEST, where code must pass the initial automated tests (yes, those can be automated too, giving your testers more time to diagnose issues).
Integration - usually a bigger environment, with links to other systems in the business to test integration between components and systems.
UAT - user acceptance testing, where user journeys are tested end to end.
Performance - load testing happens here, ensuring that the servers can cope with high load, stress testing all the components and exercising any scaling code. This helps defend against the Slashdot effect.
Preprod or staging - the final environment before production, allowing for dry runs of installation or roll-out of new code.
Production - live, the final stage and the one where your customers really see the efforts of the team.
With so many environments it is important to have version control at every step of the way, and a release process that lets everyone involved understand where they stand and how their efforts affect the overall position of the company.
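To make the "same environment everywhere" idea concrete, here is a minimal, hypothetical sketch of the pattern we use with Puppet: the profile code is identical in every environment and only the Hiera data changes as a release is promoted from DEV through to Production. The class name, parameter and hostnames below are illustrative, not taken from a real customer.

```puppet
# profile/manifests/app.pp - identical code applied in every environment.
# Only the Hiera data differs, e.g. with a per-environment hierarchy:
#   data/dev.yaml:        profile::app::api_url: 'https://api.dev.example.internal'
#   data/test.yaml:       profile::app::api_url: 'https://api.test.example.internal'
#   data/production.yaml: profile::app::api_url: 'https://api.example.com'
class profile::app (
  String $api_url,
) {
  file { '/etc/myapp/settings.conf':
    ensure  => file,
    content => "api_url = ${api_url}\n",
  }
}
```

Because the code is versioned alongside the data, releases become a matter of promoting the same tested code through each environment in turn.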
Please contact us about this and any projects you are thinking about.
12 Oct 2016
Ahead of Tuesday's £150M+ Lottery draw, an email was sent to thousands of users of the UK National Lottery website.
"This is a service message to let you know that we expect to see high volumes of visitors to The National Lottery website and app in the hours leading up to the close of ticket sales for the EuroMillions draw, at 7.30pm, on Tuesday 11th October.
If you plan to buy a ticket for any of our games this week and you need to add funds to your account to play, we would recommend that you do this as early as possible in order to avoid disappointment."
We suspect that something in the infrastructure doesn't scale automatically. No doubt the web or front-end servers scale; however, they are expecting a performance bottleneck somewhere. This is where performance testing of the whole infrastructure is important, to identify and work on those bottlenecks. Once they are identified, automation of the service can be planned and provided. Often the limit is the database layer, but that too can be scaled with read replicas or sharding, allowing for a greater number of active connections.
For more information or a quick chat, contact us.
11 Oct 2016
Once you know what your servers should be doing every day, and when, you are in a position to automate the processes that manage and build the servers.
There are three levels to consider when building servers automatically.
- What does the infrastructure look like? Network, firewalls, load balancers, subnets etc
- What is the role of the server? Software to be installed, configuration etc
- Scheduling and code deployments.
The phrase 'infrastructure as code' describes the process of creating, storing (with version control) and running the code that builds the environment in which the server(s) live. Of course, you don't have to automate everything at once, and when you are ready there are tools to help set up the infrastructure for you.
Terraform (from the same people that brought you Vagrant) and CloudFormation, to name but two. These tools allow you to define exactly what you want your servers and their associated environments to look like, so you can build in a completely predictable way every time you run the code. They also let you modify the code and rerun it to apply the changes, keeping everything self-documenting (if you can read the code) too.
Regardless of how your infrastructure is deployed (manually or automated), you can define the server build process as code too. This is the code the server will process to build itself from the moment it is switched on. Again there are a few tools to assist us: in Linux/Unix land, Puppet, Chef and Ansible; more specific to Microsoft are System Center Configuration Manager and LANDESK. All of these tools allow you to define how you want your server to look, either by following a series of scripts or by declaring the end configuration result, allowing your team to produce a known good result every time they build a new server.
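As a small illustration of the declarative, end-state style (shown with Puppet, since that is the tool we work with most; the package and service names are the usual Linux NTP ones, which can differ between distributions, and the config file source path is illustrative):

```puppet
# Declare the end result: the ntp package installed, its configuration managed,
# and the service running. Puppet works out what, if anything, needs to change
# on each run, so every server converges on the same known-good state.
package { 'ntp':
  ensure => installed,
}

file { '/etc/ntp.conf':
  ensure  => file,
  source  => 'puppet:///modules/profile/ntp.conf',  # illustrative module path
  require => Package['ntp'],
  notify  => Service['ntp'],   # restart the service whenever the config changes
}

service { 'ntp':
  ensure => running,
  enable => true,
}
```

Run the same manifest on ten servers or ten thousand and they all end up configured the same way.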
Now that our servers are built and working in an environment just for them, things change. Most clients we work with are developing software, so there is a lot of change. Deployments of new code can also be automated, either as part of the initial server build process (update the software code, then rebuild the server(s)) or as a dedicated pipeline. The next set of tools are often referred to as Continuous Integration/Continuous Delivery (CI/CD) or build tools. They allow you to define a sequence of events that the tool will run through to build, test and deploy the new code. This can be as simple as building a server, or as complex as building a complete software package bundle. The most common tools are Jenkins, TeamCity, Travis CI and Team Foundation Server, and they work in conjunction with source control repositories such as Git and SVN.
Should you want any assistance with automation or just want to discuss your options, contact us.
10 Oct 2016
Following on from my article about pets vs cattle, we shall go into more detail about what it means to have an elastic and scalable application or service.
Is my application scalable?
The first test to answer this question is: does it run, and work, on more than one server? This is not a trivial question. One of our clients has a website sitting on two servers behind a load balancer (a network device that shares the load between servers based on a set of rules). The load balancer was set to a 'sticky' configuration, meaning that a customer visiting the website would always be served by the server they originally talked to. However, should that server stop working, die, develop a fault or otherwise go out of service, the customer would find themselves directed to the other server (failover). Sounds good, until we tried it. When the failover happened, the other server didn't 'know' the customer and asked them to log in again. Not very seamless!
We worked with the development team to alter the website code so that the servers would play nicely together. In this case it meant making sure shared session data (who the customer was logged in as) was available to both (or all) servers delivering the application. After making the necessary changes and testing them, our customer was ready to deploy to production, with the side benefit of being able to switch off 'sticky sessions'. These servers, however, were fixed: there were always two, and you could say they were pets.
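For illustration only, here is one way that kind of change can look once the session store is moved out of the individual servers. It is a hypothetical sketch that assumes a PHP application, the phpredis extension, the puppetlabs-inifile module and a shared Redis instance; the hostname and php.ini path are made up, and the right fix always depends on the application.

```puppet
# Hypothetical sketch: point PHP sessions at a shared Redis instance so that any
# web server behind the load balancer recognises a logged-in customer.
# Assumes the phpredis extension is installed and php.ini lives at this path.
ini_setting { 'php session handler':
  ensure  => present,
  path    => '/etc/php5/apache2/php.ini',
  section => 'Session',
  setting => 'session.save_handler',
  value   => 'redis',
}

ini_setting { 'php session save path':
  ensure  => present,
  path    => '/etc/php5/apache2/php.ini',
  section => 'Session',
  setting => 'session.save_path',
  value   => '"tcp://sessions.example.internal:6379"',
}
```

With every server reading and writing sessions in the same shared place, sticky sessions are no longer needed.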
Is my application elastic?
Elastic applications go one step further. Imagine the marketing manager has just told you they are running a TV advert for this application tomorrow. With so little warning, there isn't much that can be done but hope that two servers will be enough to cope with demand. If they are virtual servers, you could arrange for them to be upgraded with more CPU and memory (vertical scaling).
With some automation, however, the answer is very different. Another of our clients automates the build of their servers (with our help). This means that, on demand or on a schedule, a new server is created and built from scratch by a computer following a set of instructions, so the application can be elastic and autoscale. In the scenario above, everyone can rest easy knowing that should the monitoring detect higher demand, more servers (usually one at a time) will be built and added to the load balancer automatically, until demand is met or a predetermined limit is reached (so the internet doesn't put a hole in your budget). The monitoring also spots when a server has been idle for a configured time and will remove and destroy it (virtual servers don't feel a thing), until the configured minimum number of servers is reached.
I hope this article helps show the benefits of having your applications elastic and scalable. Please contact us should you want to learn more.