Windows container portability can affect your DR strategy

Source: http://searchdisasterrecovery.techtarget.com/tip/Windows-container-portability-can-affect-your-DR-strategy

Windows containers are easy to move, which makes them a compelling option for disaster recovery, but you should first understand when and how to use them.

A new Windows Server 2016 feature that has received a great deal of attention is containers. Based on Docker, these containers offer application virtualization with the added benefit of making applications portable. A Windows container can be created on a desktop computer, run on a server in the data center and migrated to the cloud without any special coding. Containers are an interesting option for disaster recovery because their portability could make it possible to easily move an application to a public cloud or alternate data center.

Although containers are portable, blindly moving a container from one environment to another can potentially cause problems for the containerized application. Such problems stem directly from the way containers work.

Windows Server 2016 allows for two different types of containers to be created: Windows Server containers and Hyper-V containers. In both cases, the Windows container is run by a container host, which is the physical or virtual machine on which the containers reside.
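
The container type is chosen when the container is started. As a rough sketch, assuming Docker is installed on the container host and the Windows Server 2016-era microsoft/windowsservercore base image has already been pulled, the isolation mode makes the difference:
# Windows Server container: shares the kernel of the container host
docker run --isolation=process microsoft/windowsservercore cmd /c echo hello
# Hyper-V container: runs inside a lightweight utility VM for stronger isolation
docker run --isolation=hyperv microsoft/windowsservercore cmd /c echo hello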

It is tempting to think of a container host as similar to a virtualization host, but there are major differences. A VM is completely self-contained, with its own operating system, hardware configuration and so on. In contrast, a container does not include its own built-in OS.

Windows container components, portability issues

A container OS image acts as the operating system for each container, and it is common for multiple containers to share a common container OS image. The advantage to this approach is that host resources are not wasted by running multiple copies of a server OS, as would be the case for a virtualization host that is hosting a collection of VMs. Because the container OS image can be shared by multiple containers, the operating system image becomes read-only.
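
As an illustration, and assuming the microsoft/windowsservercore base image is already present on the host, several containers can be started from the same read-only OS image without duplicating it:
# Both containers are layered on the same read-only base image
docker run -d --name app1 microsoft/windowsservercore ping -t localhost
docker run -d --name app2 microsoft/windowsservercore ping -t localhost
# The shared base image is stored on the host only once
docker images microsoft/windowsservercore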

A container consists of two main components: a container image and a sandbox. The image typically stores a virtualized application. A container might, for example, store an application’s binaries, registry keys and so on. The image is read-only. Write operations such as registry updates are written to the container’s sandbox.
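
One rough way to see this split in practice, using a hypothetical container named demo and assuming a Windows Server Core base image, is to write a file inside a container and then list the changes recorded in its sandbox:
# Write a file inside the container; the change lands in the sandbox, not the image
docker run --name demo microsoft/windowsservercore cmd /c "echo test > c:\sandbox-test.txt"
# List the filesystem changes held in the container's writable layer
docker diff demo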

Even though containers are designed to be portable, some factors can limit that portability. Chief among them is the dependency on a container OS image. Suppose, for example, that you create a Windows Server container whose OS image is based on Windows Server 2016. A disaster then occurs, and you copy the Windows container to the public cloud, where it is hosted on a cloud-based container host. If that host runs Linux, you probably wouldn't be able to make the container work, even though Windows and Linux both support Docker containers, because the underlying kernels are too different.
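
A simple sanity check before a failover is to compare the OS a container image targets with the OS of the intended container host. A sketch, with the image name assumed:
# Reports "windows" for a Windows-based image; a Linux host cannot run it
docker inspect --format "{{.Os}}" microsoft/windowsservercore
# Reports the OS and architecture of the Docker host for comparison
docker version --format "{{.Server.Os}}/{{.Server.Arch}}"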

A cloud-based Windows Server container host will need a copy of the container’s OS image and any other dependency images if the container is to run properly.
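
In practice, that usually means staging the same base and dependency images on the recovery host before the container is imported. A sketch, assuming the Windows Server 2016-era image names:
# On the cloud-based container host, pull the images the container depends on
docker pull microsoft/windowsservercore
docker pull microsoft/nanoserver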

The good news is that a container OS image tends to be standardized. In a Windows Server 2016 environment, you can install a container image package provider and use it to generate the container OS image. If you want to create a Windows Server Core base OS image, you could — if Docker is already installed and configured — enter the following three commands:
# Register the ContainerImage package provider
Install-PackageProvider ContainerImage -Force
# Download the Windows Server Core base OS image
Install-ContainerImage -Name WindowsServerCore
# Restart Docker so it recognizes the new base image
Restart-Service Docker
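
Once the provider finishes downloading and Docker restarts, a quick way to confirm the base OS image is available is to list the images Docker knows about:
# The Windows Server Core base OS image should now appear in the list
docker images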

Caution required when moving data volumes

One of the biggest challenges with container portability is moving container data volumes. Data volumes allow you to create a data directory for a container or mount an existing host directory into it. A data volume can also be shared between multiple containers.

It is important to understand that data volumes exist outside of a container. A data volume will persist even if you delete the container using it.
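
A minimal sketch of both points, using hypothetical host and container paths (C:\data and C:\appdata):
# Mount an existing host directory into a container as a data volume
docker run -d --name web1 -v c:\data:c:\appdata microsoft/windowsservercore ping -t localhost
# A second container can share the same data volume
docker run -d --name web2 -v c:\data:c:\appdata microsoft/windowsservercore ping -t localhost
# Removing both containers leaves c:\data, and everything in it, intact on the host
docker rm -f web1 web2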

Moving a container typically involves the Docker export command, which writes the container's filesystem to an archive regardless of whether the container is running, paused or stopped; the related Docker save command does the same for an image. Both produce a file that can be copied to and loaded on another host. The problem is that, because data volumes exist outside of containers, they are not included in the exported file.
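
A hedged example of the export path, with the container name app1 and the file names assumed:
# On the source host, write the container's filesystem to an archive
docker export -o app1.tar app1
# On the target host, import the archive as an image and run it from there
docker import app1.tar app1:recovered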

For now, the best way to deal with data volumes is to back them up and then restore them after the container has been moved.
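
One simple approach is to archive the volume directory with PowerShell before the move and unpack it on the target host. A sketch, with the volume path C:\data and the backup location assumed:
# On the source host, archive the data volume directory
Compress-Archive -Path C:\data\* -DestinationPath C:\backup\data-volume.zip
# On the target host, restore the archive before starting the imported container
Expand-Archive -Path C:\backup\data-volume.zip -DestinationPath C:\data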

A Windows container could be a good tool for disaster recovery since it is possible to port containerized applications to the cloud or an alternate host. However, care must be taken to avoid breaking any dependencies or losing any data volumes.

 
