
12 steps to better application modernisation

If you haven’t yet integrated the principles of the “12-factor app” into your application, it’s probably costing your business time, money and engineering efficiency.

App modernisation ensures that our solutions are reliable, scalable and well organised.

Marc Firth (Managing Director at Firney) gives an overview of the 12 factors.

A transcript of the video above is included below.

12 steps to better application modernisation

Marc: I remember, in my first job as a software engineer, seeing all these frameworks, methods, and tools that seemed to me at the time to be incredibly complex. But I love to learn and expand my knowledge, so I adopted them.

Over the years, I watched businesses grow to recognize the importance of building reliable systems that provide an awesome user experience and scale with traffic. These days, by harnessing best practices in the way we build modern applications, we don’t have to manage any hardware, and we very rarely get P1 or P2 alerts.

The engineering industry grew, and a set of guiding principles, known as the 12-Factor app, was laid out by some of the top experts in the industry. If you haven’t yet integrated them into your engineering team’s workflow, it’s probably costing you more than necessary, or you’re having frequent incidents or outages.

So in this video, I’m going to share some of my expertise with you and talk about the 12 things that you can do to:

  • Achieve excellent website, API or application reliability
  • Set up your application to be massively scalable and reduce your costs

This is what’s known as “Application Modernisation”.

Why might you want to consider application modernization?

Marc: Hey, everyone.

So why might you want to consider application modernization?

Well, it can save your business a huge amount of money by reducing engineering time and improving the scalability and the reliability of your application and improving your overall engineering processes.

Why you should move to the cloud

Marc: But first, I want to mention why, if you’re on dedicated servers, you should consider moving to the cloud before or while you work towards app modernization.

If you’re managing your own servers, think about all the time that’s spent purchasing, managing, provisioning hardware, installing software, upgrading software versions, managing security and decommissioning old hardware. It adds up to a significant amount of time. And if you move to the cloud, you’ll manage less and can focus on features rather than maintenance.

Why modernise an app?

Marc: When it comes to app modernization, you will improve your brand perception, because a more reliable service makes for a better customer experience. You’ll avoid downtime and keep your customers happy with great service. For marketing websites, you can ensure you never miss a lead or send paid traffic to a site that isn’t working. You’ll be able to launch features more easily and have a happier development team. You’ll reduce your overall risk because you’ll have better processes and will be able to recover from incidents faster.

The goal is to make your web service a smooth and efficient marketing or revenue source. For example, eliminating version mismatches between environments removes a common cause of wasted engineering time.

A better user experience for customers means happier customers who will be more loyal and lead to an increase in revenue.

The 12-factor app solution

Marc: There is a solution, which is a series of steps to follow.

It starts by auditing what you have to get a clear picture. This can be done through Infrastructure Maps for the architecture, Domain Maps for the services and functionality, and technical reviews to understand the detail of what’s running on each server.

Then you need to understand your critical user path and design your SLA (Service Level Agreement) and decide the SLOs (Service Level Objectives) you want to design for, as a 99.9% uptime SLA would be pretty different to a 99.99% uptime SLA. It’s the difference between 43 minutes versus 4 minutes of downtime per month.
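To make those numbers concrete, here’s a quick back-of-the-envelope sketch in Node.js/TypeScript (assuming a 30-day month for simplicity) of how an uptime target translates into a monthly downtime budget:

```typescript
// Rough monthly downtime budget for a given uptime target,
// assuming a 30-day month (43,200 minutes) for simplicity.
function downtimeBudgetMinutes(uptimePercent: number): number {
  const minutesPerMonth = 30 * 24 * 60; // 43,200
  return minutesPerMonth * (1 - uptimePercent / 100);
}

console.log(downtimeBudgetMinutes(99.9).toFixed(1));  // ~43.2 minutes per month
console.log(downtimeBudgetMinutes(99.99).toFixed(1)); // ~4.3 minutes per month
```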

Once you’ve designed your SLA, you know what you’re targeting.

The 12 Factor App

Marc: There is a guiding set of principles for modern application development that ensure the most agile, scalable and effective outcome for your web service or website. These are known as the “12-Factor app”. It’s a well-established methodology we’ve used for years to build highly scalable sites and web applications handling millions of visits per day.

1. One codebase tracked in revision control, many deploys

Marc: The first thing is that you should have one codebase tracked in version control and many deployments. This enables you to make rapid changes because your codebase is versioned.

Now combine that with a CI/CD (Continuous Integration/Continuous Deployment) tool to automate your deployments, and you can quickly release new versions of your software.

Because of version control, you can roll back to an earlier deployment if you need to. You can also see what changes may have led to an incident.

2. Explicitly declare and isolate dependencies

Marc: The second thing is that you should explicitly declare and isolate dependencies. All first- and third-party dependencies should be declared in a package file such as composer.json for PHP or package.json for Node.js. This ensures that you can load them in when you need to, and the isolation means you can replace them if needed.

Now, the dependency declarations are committed to the codebase and tracked in version control. So, for subsequent deployments, you can roll forward or roll back and always be sure you have a consistent set of dependencies for any version of the application. If you change a dependency over time, that change travels with the application, and if you need to roll back, the change is undone with it.
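As a rough sketch of what that looks like for a Node.js service (the `express` dependency here is just an illustrative example, not something from a specific project): every dependency is declared in package.json, pinned in the lockfile, and imported explicitly rather than assumed to exist on the host.

```typescript
// The dependency is declared in package.json (e.g. "express": "^4.18.2")
// and pinned by the lockfile, so `npm ci` reproduces exactly the same
// dependency tree for every build, roll-forward or rollback.
import express from "express";

const app = express();

// Nothing here relies on a system-wide package being pre-installed.
app.get("/health", (_req, res) => {
  res.json({ status: "ok" });
});

app.listen(Number(process.env.PORT ?? 8080));
```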

3. Store config in the environment

Marc: The third thing is that you should store config in the environment. Putting the config in the environment and not in the code lets you quickly spin up additional environments as needed. So if you need another environment for UAT, Stage or Production, you can spin one up quickly because all the environment variables are attached to that environment and not held in the codebase; you don’t have to make any code changes.
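A minimal sketch of that idea in Node.js/TypeScript (the variable names are illustrative, not a required convention): the code reads its settings from environment variables, so a new environment only needs different values attached to it, not a different build.

```typescript
// Configuration comes from the environment, not from the codebase, so the
// same build can run in UAT, Stage or Production with different values.
const config = {
  databaseUrl: process.env.DATABASE_URL ?? "mysql://localhost:3306/app",
  redisUrl: process.env.REDIS_URL ?? "redis://localhost:6379",
  logLevel: process.env.LOG_LEVEL ?? "info",
};

// Fail fast if a required value is missing in a deployed environment.
if (process.env.NODE_ENV === "production" && !process.env.DATABASE_URL) {
  throw new Error("DATABASE_URL must be set in production");
}

export default config;
```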

4. Treat backing services as attached resources

Marc: The fourth thing is that you want to treat backing services as attached resources. The attached resources are things like databases, queues, SMTP, file stores and APIs that your services use. Basically, you want to be able to swap out attached services quickly by making changes to the environment config without having to make changes to the code.

Now, as an example, if you’re using AWS (Amazon Web Services) in production, this would let you swap out the MySQL instance you were running locally for the Amazon RDS instance that you intend to use in production.
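As a sketch of the idea (the `createDatabaseClient` helper below is hypothetical, standing in for whichever MySQL driver the project actually uses), the backing service is identified purely by a URL in the environment, so swapping a local MySQL container for an Amazon RDS or Cloud SQL instance needs no code change:

```typescript
// Hypothetical client type and factory standing in for a real MySQL driver;
// stubbed here so the sketch is self-contained.
type DatabaseClient = { query(sql: string): Promise<unknown> };

function createDatabaseClient(url: string): DatabaseClient {
  // A real implementation would delegate to the driver using this URL.
  return { query: async (sql) => console.log(`[${url}] ${sql}`) };
}

// Locally DATABASE_URL points at a MySQL container; in production it points
// at the managed instance. The application code never changes.
const databaseUrl =
  process.env.DATABASE_URL ?? "mysql://app:secret@localhost:3306/app";
const db = createDatabaseClient(databaseUrl);

db.query("SELECT 1");
```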

5. Strictly separate build and run stages

Marc: Talking about environments, the fifth thing is that you want to strictly separate the build and run stages, so that you can have separate environments in which to perform different actions: local development, integration, testing, user acceptance and production environments, all separated so that they don’t collide with each other as they’re being used.

6. Execute the app as one or more stateless processes

Marc: The sixth thing is that you want to execute the app as one or more stateless processes. Statelessness means you’re separating processing from data. An easy way to think of this is that your app can be destroyed and recreated and the data still remains intact. This could be a Docker container for your code connected to separate databases or file stores, but basically, you should be able to destroy and recreate that application processing container, and the app should continue running as normal. It also means that when you need to scale up, you can have many stateless containers accessing the same databases and file stores, allowing you to serve more people simply by running more processing containers.

Now you need something to handle all that container orchestration, that is, the spinning up and shutting down of those containers. That’s where a tool like Kubernetes or Google Cloud Run comes into play and handles it for you.
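Here’s a small sketch of the stateless idea (using the ioredis client as an illustrative choice, with a hypothetical page-view counter): any state worth keeping lives in a backing service such as Redis, never in the container’s own memory, so any container can serve any request and can be destroyed at will.

```typescript
import Redis from "ioredis";

// State lives in a backing service, not in the process. A variable like
// `let pageViews = 0;` in process memory would be lost on restart and would
// differ between containers; a counter in Redis is shared by all of them.
const redis = new Redis(process.env.REDIS_URL ?? "redis://localhost:6379");

async function recordPageView(path: string): Promise<number> {
  // Atomically increment the shared counter for this path.
  return redis.incr(`pageviews:${path}`);
}

recordPageView("/pricing").then((count) => {
  console.log(`Total views for /pricing: ${count}`);
});
```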

7. Export services via port binding

Marc: The seventh thing is that you want to export services via port binding. This lets you ensure that your stateless services are only accessible via a specific port.

For example, this could be port 80 for a web server; your routing layer will handle SSL termination, and your load balancer will share the traffic across the many instances of that web server container.
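A minimal sketch in Node.js/TypeScript: the service is self-contained and exports itself by binding to a port taken from the environment, while SSL termination and traffic distribution stay in the routing layer and load balancer in front of it.

```typescript
import http from "node:http";

// The app exports its service by binding to a port; SSL termination and
// load balancing across many instances happen in front of it.
const port = Number(process.env.PORT ?? 8080);

const server = http.createServer((req, res) => {
  res.writeHead(200, { "Content-Type": "text/plain" });
  res.end("Hello from a port-bound service\n");
});

server.listen(port, () => {
  console.log(`Listening on port ${port}`);
});
```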

8. Scale out via the process model

Marc: The eighth thing is that you want to scale out via the process model.

Now, this is where your stateless containers can scale to handle the demand. This is what enables concurrency in your application and allows you to scale up your service to meet demand, as each stateless container handles one type of process. For example, HTTP requests may be handled by a web process, while a long-running background task, such as generating reports or re-indexing a database, can be handled by a worker process in the background.

Now, what that means is that scaling up becomes a simple matter of monitoring the load a particular process, or type of stateless container, is getting.
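As a sketch of the worker side of that model (using a Redis list as the job queue purely for illustration; the queue name is hypothetical), the web process serves HTTP while a separate worker process pulls long-running jobs, and you scale whichever process type is under load by running more copies of it:

```typescript
import Redis from "ioredis";

// A dedicated worker process: it handles long-running jobs such as report
// generation so the web process stays free to serve HTTP requests.
const redis = new Redis(process.env.REDIS_URL ?? "redis://localhost:6379");

async function runWorker(): Promise<void> {
  for (;;) {
    // Block for up to 5 seconds waiting for the next job on the queue.
    const job = await redis.brpop("jobs:reports", 5);
    if (!job) continue; // timed out, loop and wait again
    const [, payload] = job;
    console.log(`Processing report job: ${payload}`);
    // ...generate the report here...
  }
}

runWorker();
```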

9. Maximize robustness with fast startup and graceful shutdown

Marc: The ninth thing is that you want to maximize robustness with fast startup and graceful shutdown of those containers.

You want to ensure that your containers can spin up fast and shut down gracefully. If they spin up quickly, ideally in seconds, you’ll be able to quickly meet a rise in demand due to increased requests or traffic to your service.

You also want them to shut down gracefully so that any current processes finish processing or, in the case of background tasks, return their job to the queue, and then the container shuts down. So there are no hanging processes or partially processed data and users have a good experience without being cut off from the action they’re performing.
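A small sketch of what graceful shutdown can look like for a Node.js web process: when the orchestrator sends SIGTERM before replacing the container, the server stops accepting new connections, lets in-flight requests finish, and then exits.

```typescript
import http from "node:http";

const port = Number(process.env.PORT ?? 8080);
const server = http.createServer((req, res) => {
  res.end("ok\n");
});
server.listen(port);

// Kubernetes, Cloud Run and similar platforms send SIGTERM before stopping
// a container; handle it so in-flight requests complete cleanly.
process.on("SIGTERM", () => {
  console.log("SIGTERM received, shutting down gracefully");
  server.close(() => {
    // Close any queue or database connections here as well.
    process.exit(0);
  });
});
```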

10. Keep development, staging, and production as similar as possible

Marc: Now, the 10th thing can really sap your time if you’re not getting it right, but basically, you want to keep your development, staging and production environments as similar as possible, so that all developers and every environment (UAT and Production, for instance) are using the same software versions and the same dependencies, and everything stays consistent.

We expect that engineers will be developing on their local machines, but that’s where you can use a containerization tool such as Docker to ensure consistency between local development and production, so you’re using the same version of MySQL, for instance. Keeping those environments as consistent as possible, in both architecture and dependencies, will ensure that you don’t get any surprises when you move previously approved code to the next environment.

In conjunction with CI/CD tooling, this consistency should allow engineers to take on more responsibility when it comes to the deployments.

11. Treat logs as event streams

Marc: The 11th thing is to treat logs as event streams. Logs provide visibility into the behaviour of a running app. Now, by default, these write to the local file system. But you want to convert that into a stream of aggregated time-ordered events collected from the output of all running processes and backing services. They are typically in the format of one event per line.

All of those logs should write to STDOUT rather than the local file system, and this is what enables them to be watched in real-time or picked up by an external log indexing and analysis system for easy access and incident diagnosis. It helps you keep your container stateless because you’re not storing that log data inside the container.

Remember, we want that separation between processing and data in our containers; that is what enables us to scale.
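As a sketch (the event and field names are made up for illustration), each log entry is a single JSON line written to STDOUT; the platform or a log shipper then aggregates those streams into a tool such as New Relic, and the container itself stores nothing.

```typescript
// One event per line, written to STDOUT as JSON. Nothing is written to the
// container's file system, which keeps the container stateless.
function logEvent(
  level: "info" | "warn" | "error",
  message: string,
  fields: Record<string, unknown> = {}
): void {
  console.log(
    JSON.stringify({
      timestamp: new Date().toISOString(),
      level,
      message,
      ...fields,
    })
  );
}

logEvent("info", "order.created", { orderId: "12345", durationMs: 42 });
```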

12. Run admin/management tasks as one-off processes

Marc: The 12th and final factor is that you want to run admin and management tasks as one-off processes. Those processes are the things that are secondary to serving users, such as running reports in the background, console sessions, database migrations or one-time scripts. Each of those processes should be run against a new container instance that is identical to the environment of the app’s normal long-running processes, such as the web server. This ensures that those background tasks don’t affect the standard processes.
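A sketch of what such a one-off process might look like (the file name and the commented-out SQL are hypothetical): a script like this runs in a container built from the same image and the same environment config as the web process, does its job once, and exits.

```typescript
// migrate.ts: a one-off admin process. It is run in its own short-lived
// container (for example as a Kubernetes Job) rather than inside the
// long-running web container, so it cannot affect the standard processes.
async function migrate(): Promise<void> {
  const databaseUrl = process.env.DATABASE_URL;
  if (!databaseUrl) throw new Error("DATABASE_URL must be set");

  console.log(`Running migrations against ${databaseUrl}`);
  // e.g. await db.query("ALTER TABLE orders ADD COLUMN shipped_at DATETIME");
  console.log("Migrations complete");
}

migrate()
  .then(() => process.exit(0))
  .catch((err) => {
    console.error(err);
    process.exit(1);
  });
```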

What does this look like in real life?

Marc: So you might be wondering how this looks from a technical implementation perspective.

I will use Google Cloud as the example here, but you can do the same on other cloud providers such as AWS and Azure by using similar services. At Firney, we work across all three of those platforms successfully. But for simplicity, I’m going to talk about how you would do this on Google Cloud.

It’s important to state that there isn’t really “a one size fits all” approach here, as each business will have different services with different staff motivations and customer needs.

Let’s use this theoretical example of a simple web service hosted on a dedicated server with a web server, PHP, database and Redis for caching and queues. All that software is installed on one dedicated server.

Example: Background for the app

Marc: Now, this theoretical business wants to move faster but has had a variety of issues, from slow responses, downtime, unreliable services, and slow feature development, and a lot of knowledge was lost when certain members of staff exited the business.

They’ve audited what they have, and documented the SLA they would like to meet, which is a 99.9% uptime, and they’re now looking to improve their setup.

So we’ll discuss how we would modernize the app and make it more reliable and scalable.

Example: A modern, reliable, scalable structure

Marc: First, you want to make each service immutable, so that it can be destroyed and restored rapidly and you have an image of each service. To do so, we’ll migrate them to run as containers using a tool such as Docker, or serverless tools such as Cloud Functions or App Engine for simpler microservices. Now we have stateless containers for the web server, PHP and Redis, plus a MySQL image, which will only be used for local development, but I’ll come back to that.

We’ll make sure that any data is stored on separate services, such as in the database or use a network file share or cloud storage for files and logs.

Remember that separating that data and processing is necessary as we want to be able to freely destroy and restore the processing services without losing any of that data.

We will also make sure that we have backups for those data services so that we can restore those if needed.

Next, you would need infrastructure to run those containers on, such as Kubernetes or Google Cloud Run.

For the databases, we don’t really want to be managing dedicated servers, so we can use something like Google Cloud SQL to manage that MySQL database. That means we don’t have to manage the hardware for the database, thus saving time and money.

We can create a container for Redis to handle the queues and caching, and we’ll host that on Kubernetes (there are other ways of doing this).

We’ll put a load balancer in front to distribute the traffic among those containers that get spun up. We’ll also use a content delivery network to handle caching of static assets such as images. Now all services will have failover to another region, and in Kubernetes, we’ll add namespaces for a Staging and UAT environment so that they all use the same set-up and we don’t have any differences between those environments.

Example: Development and Debugging

Marc: For our local developer set-up, we’ll use the same Docker containers for the web server and Redis that we’re going to use in production. And as we’re using a managed Cloud SQL instance for MySQL in production, we’ll also run a MySQL Docker image locally with the same version that we’re using in the cloud production environment.

We also have to make sure that all our code is deployed consistently. So for this, we’re going to use Harness to connect to our GitHub repository and handle all the deployments. We’ll make sure that all containers are logging to STDOUT, and we can ingest all of those logs into New Relic so that they’re searchable, and we can delve into any incidents if we have to. So we end up with a solution that looks a bit like this (see video).

Example: Managed Cloud Support

Marc: We’ll also set up an incident support team with a managed cloud support layer to respond to any alerts we get from the observability stack (New Relic). The Managed Cloud Support team will serve as an escalation point for our internal team and will support them with the solution as it’s designed and built.

So what did we do?

Marc: Well:

  • We moved our application off a single point of failure (a dedicated server).
  • We made our solution immutable so it could be destroyed and redeployed with a single click.
  • We set up consistent environments to reduce human error in the deployment.
  • We set it up as a scalable solution, and it can also be redeployed for other customers or projects or to other cloud providers.
  • We’ve reduced the amount of hardware we have to manage by putting everything in the cloud.
  • We set up monitoring and alerting for our stack.
  • We ensured that we had sufficient personnel to support the solution.
  • We’ll make sure we document everything as we go so that anyone joining the project can quickly understand how it’s built.

There’s a lot more you can do. This is just the first step of the journey.

If you’re interested in learning more and taking your business down the path of Application Modernisation then please reach out to us via the contact form, which I link to in the comments.

Now we’ll be delving into those 12 factors in more detail in future videos, but I hope that was helpful.

Please like the video if it is. Share it with anyone you think could benefit, subscribe to this channel for more helpful cloud and engineering tips and advice, and I’ll see you in the next video.

Subscribe to our YouTube channel here.



Ready to get started?

Get in touch