A few weeks ago I attended my second DockerCon; this year it was in Austin, TX. Last year there was some focus on the enterprise, but this year they stepped up their game. Microsoft was all over the place. Whether it was on the show floor, sponsoring the Hands-on Labs, or presenting in multiple sessions, you couldn’t walk 20ft without being reminded that Microsoft was there. By contrast, you wouldn’t have even known AWS was there unless you happened upon their booth in the expo hall.
A big focus for Docker this year is what they call Modernizing Traditional Applications, or MTA. MTA is a terrible acronym, by the way; if the whale people are reading this, you might consider changing it so as not to confuse people with local transportation systems. Before the conference officially started I was lucky enough to be a part of the Tech Field Day Extra event. We got an inside scoop on some of the things Docker was excited to be talking about (MTA), as well as an awesome presentation from Portworx, which specializes in SDS specifically for containers. I highly recommend you check them out.
If you look at the different elements, you can easily conclude Docker is aiming for enterprise dollars:
- Huge presence from Microsoft
- Docker presented on Modernizing Traditional Applications during the Tech Field Day Extra event
- Docker had many sessions around MTA
- Docker announced support for IBM i-Series and Z platform (mainframes)
- Docker had many enterprise customers present on stage
The list goes on, but you get the idea. In this post I want to give you an overview of Docker’s MTA strategy and how they are trying to help enterprises start using Docker today, without having to refactor their applications at the onset.
One of the first questions I always hear from customers and peers alike:
Why in the world would I want to put my legacy/traditional applications in a container?
Docker will give you a few different reasons why, but the biggest benefit in my view is portability. Now, containerizing a legacy application doesn’t immediately allow you to move it anywhere you want without thinking about all the dependencies, but it does help you maintain a base platform (think RHEL or Fedora) and internal dependencies without worrying about the underlying operating system. It also helps ensure that as you develop, test and QA code you’ll get predictable results in each of those environments, as well as when deploying to production. The four high-level stages that Docker envisions for modernization are: Lift and Shift the app, Deploy and Manage via Docker Enterprise, Refactor or Revise (over time), and implement Containers as a Service, or CaaS.
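To make the "base platform" idea concrete, here is a minimal sketch of what a lift-and-shift Dockerfile for a legacy web app might look like. The application name, paths, and choice of httpd are all hypothetical; the point is simply that the OS layer and internal dependencies get pinned inside the image rather than depending on whatever is installed on the host.

```dockerfile
# Hypothetical example: pin a known base platform (a RHEL-style UBI image here)
# and bake the app's dependencies into the image itself.
FROM registry.access.redhat.com/ubi8/ubi

# Install the runtime dependency the app expects to find on the host
RUN yum install -y httpd && yum clean all

# Copy the existing application tree in unchanged -- lift and shift,
# no refactoring at this stage
COPY ./legacy-app/ /var/www/html/

EXPOSE 80
CMD ["httpd", "-DFOREGROUND"]
```

Because the base image and dependencies are declared in the Dockerfile, the same image runs identically in dev, test, QA, and production, which is where the portability benefit actually comes from.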
Lift and Shift
In my view, this is the most important step in the entire process. Even though the first step is listed as Lift and Shift, the thing you must do before that, or as part of it I suppose, is to identify the application. Don’t start with your business-critical applications. Start with an application that can withstand some pain as you make this transition. Once you identify the app, the idea is to lift and shift it. Most of you are probably familiar with lift and shift as it relates to the public cloud; this is similar. Docker has developed an open source tool called Image2Docker (Windows, Linux). Image2Docker discovers all the dependencies of your application, extracts everything that’s required, and creates a Docker Compose file with all those dependencies. It works by pointing the tool at a virtual machine. Think about how we used to do P2Vs back in the day (and sadly, still today); it's a very similar concept. Some things to think about:
- Linux and Windows are currently supported, but not all applications/dependencies are
- Your application may have hundreds of dependencies. The quickest way to get up and running is to leave this as is. The smartest move is to go through the dependencies with the application owners and determine the bare minimum
- Look out for things that are stored on the system, such as plaintext passwords, shared secrets, etc.
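For a sense of what this looks like in practice, here is a rough sketch of running the Windows flavor of Image2Docker against a VM disk image. This is based on my understanding of the tool's PowerShell module; the paths and the choice of the IIS artifact are hypothetical, so check the project's own docs for the exact parameters it supports today.

```powershell
# Grab the Image2Docker module from the PowerShell Gallery
Install-Module -Name Image2Docker
Import-Module Image2Docker

# Point the tool at an existing VM disk (hypothetical path), ask it to
# extract the IIS artifact, and write the generated output to a folder
ConvertTo-Dockerfile -ImagePath C:\vms\legacy-app.vhd `
                     -Artifact IIS `
                     -OutputPath C:\i2d-output
```

Just like a P2V conversion, the output is a starting point, not a finished product; this is where you sit down with the application owners and trim the discovered dependencies to the bare minimum.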