Modernizing Traditional Applications with Docker

A few weeks ago I attended my second DockerCon; this year it was in Austin, TX. Last year there was some focus on the enterprise, but this year they stepped up their game. Microsoft was all over the place, whether on the show floor, sponsoring the Hands-on Labs, or presenting in multiple sessions; you couldn't walk 20 feet without being reminded that Microsoft was there. By contrast, you wouldn't have even known AWS was there unless you happened upon their booth in the expo hall.

A big focus for Docker this year is what they call Modernizing Traditional Applications, or MTA. MTA is a terrible acronym, by the way; if the whale people are reading this, you might consider changing it so as not to confuse people with local transportation systems. Before the conference officially started I was lucky enough to be part of the Tech Field Day Extra event. We got an inside scoop on some of the things Docker was excited to be talking about (MTA), as well as an awesome presentation from Portworx, which specializes in software-defined storage (SDS) specifically for containers. I highly recommend you check them out.

If you look at the different elements, you can easily conclude Docker is aiming for enterprise dollars:

  1. Huge presence from Microsoft
  2. Docker presented on Modernizing Traditional Applications during the Tech Field Day Extra event
  3. Docker had many sessions around MTA
  4. Docker announced support for IBM i-Series and Z platform (mainframes)
  5. Docker had many enterprise customers present on stage

The list goes on, but you get the idea. In this post I want to give you an overview of Docker's MTA strategy and how they are trying to help enterprises start using Docker today, without having to refactor their applications at the outset.

One of the first questions I always hear from customers and peers alike:

Why in the world would I want to put my legacy/traditional applications in a container?

Docker will give you a few different reasons as to why, but the biggest benefit in my view is portability. Now, containerizing a legacy application doesn't immediately allow you to move it anywhere you want without thinking about all the dependencies, but it does help you maintain a base platform (think RHEL or Fedora) and internal dependencies without worrying about the underlying operating system. It also helps ensure that as you develop, test, and QA code you'll get predictable results in each of those environments, as well as when deploying to production. The four high-level stages that Docker envisions for modernization are: Lift and Shift the app, Deploy and Manage via Docker Enterprise, Refactor or Revise (over time), and implement Containers as a Service, or CaaS.


Lift and Shift

In my view, this is the most important step in the entire process. Even though the first step is listed as Lift and Shift, the thing you must do before that, or as part of it I suppose, is to identify the application. Don't start with your business-critical applications. Start with an application that can withstand some pain as you make this transition. Once you identify the app, the idea is to lift and shift it. Most of you are probably familiar with lift and shift as it relates to the public cloud; this is similar. Docker has developed an open source tool called Image2Docker (Windows, Linux). Image2Docker discovers all the dependencies of your application, extracts everything that's required, and creates a docker compose file with all those dependencies. It works by pointing the tool at a virtual machine. Think about how we used to do P2Vs back in the day (and sadly, still do today); it's a very similar concept. Some things to think about:

  • Linux and Windows are currently supported, but not all applications/dependencies are
  • Your application may have hundreds of dependencies. The quickest way to get up and running is to leave them as is; the smartest move is to go through the dependencies with the application owners and determine the bare minimum
  • Look out for things that are stored on the system, such as plaintext passwords, shared secrets, etc.

Once you've identified the application and discovered and extracted its dependencies, you'll end up with a docker compose file. This is what enables portability. You can take this docker compose file and deploy it in AWS, Azure, on-prem, and probably your refrigerator.
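To make that concrete, here's a minimal sketch of the kind of compose file you might end up with for a legacy two-tier app. The image names, service names, and ports here are hypothetical, invented for illustration; they are not actual Image2Docker output.

```yaml
version: "3"
services:
  web:
    # Hypothetical image built from the extracted app artifacts
    image: dtr.example.com/legacy/intranet-web:1.0
    ports:
      - "8080:80"
    depends_on:
      - db
  db:
    # The app's existing database engine, pinned to a known version
    image: mysql:5.7
    environment:
      # Per the plaintext-secrets warning above: don't bake real
      # credentials into the file; inject them at deploy time instead.
      MYSQL_ROOT_PASSWORD: ${DB_ROOT_PASSWORD}
```

Because everything the app needs is declared in one file, the same definition can be handed to any Docker host or cluster, which is where the portability claim comes from.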

Deploy & Manage via Docker Enterprise Edition (Docker EE)

Now that you have your docker compose file, you'll need a place to run it. You can of course run it with the Community Edition of the Docker engine, but the target here is Docker Enterprise Edition, which consists of: the Docker engine, tested, certified, and supported, along with certified third-party plugins; Docker Datacenter, which provides things like multi-tenancy and full support for the Docker API; and Docker Trusted Registry (DTR), which allows you to host a private docker registry on your premises and ensure the images stored there are secure. The whole idea is that Docker EE is the landing zone for all your containers, whether they are greenfield, or containers you are deploying via a docker compose file generated by Image2Docker. Docker EE gives you that "single pane of glass" feeling for all your container management and orchestration needs.

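As an illustration, pushing an image into DTR and then deploying the compose file onto a Docker EE swarm might look something like the following. The registry hostname, image name, and stack name are placeholders, not real endpoints.

```
# Tag the locally built image against your private DTR instance
docker tag intranet-web:1.0 dtr.example.com/legacy/intranet-web:1.0

# Push it to DTR so the swarm nodes can pull it
docker push dtr.example.com/legacy/intranet-web:1.0

# Deploy the compose file as a stack on the Docker EE cluster
docker stack deploy -c docker-compose.yml legacy-intranet
```

Once the stack is deployed, Docker EE handles scheduling the containers across the cluster and gives you that management view over all of them.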
Refactor & Revise

Once you have your traditional application containerized and running on an enterprise platform (Docker EE), you have the opportunity, if you want it, to either refactor some or all of your application in a microservices fashion, or start adding functionality to the application in a "modern" way. The concept here is to get you up and running as soon as possible so that you start to gain the advantages that Docker, and containers, provide. Now that you're running, let's start to think about refactoring the application to take even more advantage of the platform. One way to accomplish this is to break up the traditional application piece by piece and refactor it into a microservices architecture. If you don't want to refactor your existing app in this fashion, you can start adding functionality to the app by writing new code. Again, you'd want to do this in a microservices form factor.

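That piece-by-piece extraction is often described as the strangler pattern: carve one function out of the monolith into its own service and route traffic to it alongside the original app. A hypothetical compose-level sketch of what that shape looks like (all service and image names are made up, and the proxy's routing config is omitted):

```yaml
version: "3"
services:
  # The original containerized monolith, still serving most routes
  legacy-app:
    image: dtr.example.com/legacy/intranet-web:1.0
  # A newly written microservice that takes over one piece of functionality
  reporting:
    image: dtr.example.com/modern/reporting-svc:0.1
  # A reverse proxy that sends /reports to the new service and
  # everything else to the monolith
  proxy:
    image: nginx:alpine
    ports:
      - "80:80"
```

The monolith shrinks one service at a time, while everything keeps running on the same platform throughout.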
Containers as a Service (CaaS)

This is the Mecca for Docker, and really for containers in general. Being able to provide a platform for microservices-based applications and scale them rapidly is what Docker really wants to get enterprises focused on. You can't have CaaS without refactoring and revising your applications. CaaS can provide you with rapid scaling, faster deployment times, and a smaller infrastructure footprint. There's not a ton of meat here, but CaaS is more of an operating model that Docker EE and modern application development techniques via CI/CD enable as you go through the different stages of application development, and eventually, to production.

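The rapid-scaling piece is largely just orchestration once your services are stateless. With swarm-mode services, for example, it comes down to commands like these (the stack and service names below are hypothetical, carried over from the earlier sketch):

```
# Scale one refactored service from its current replica count to 10
docker service scale legacy-intranet_reporting=10

# Watch the new tasks get scheduled across the cluster
docker service ps legacy-intranet_reporting
```

That's the payoff of the earlier stages: a monolith can't be scaled this way, but an individual microservice can.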
At the end of the day, MTA is a methodology. Docker's initiative around MTA is to get enterprises started with containers, container orchestration, and, probably most importantly, Docker, sooner rather than later. To that last point, MTA isn't about using Docker, or containers for that matter, just for the sake of using containers. There is real value in taking a traditional application and running it in a container framework. If there's one thing you take from this post, it's this: start small. Don't use your SAP environment or your billing systems as your starting point for MTA. Find something relevant, but not your revenue-driving applications. I'll be following along to see how this methodology matures, and where customers are using it successfully.
