Containers are the new cloud

The world is moving towards various Linux container technologies to manage its workloads. Containers are a way of creating “virtual machines” (sort of) that contain “just enough” of the operating system to get the job done. If you need a web server, the container has to have it installed locally; the host operating system then has practically nothing installed. Now, with unprivileged containers, each “tenant” of the server gets its own user ID namespace, and what appears to be “root” inside that private namespace isn’t actually running as a privileged user on the host OS, which reduces the security risk.
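
To make that concrete, here is roughly what the ID mapping looks like for an unprivileged LXC container. The user name and the 100000/65536 range are common defaults rather than requirements, and recent LXC versions spell the key lxc.idmap (older ones used lxc.id_map):

    # /etc/subuid and /etc/subgid give the host user a block of subordinate IDs
    # (hypothetical user "alice"):
    #   alice:100000:65536

    # Per-user LXC config (e.g. ~/.config/lxc/default.conf) mapping container
    # IDs 0-65535 onto that block:
    lxc.idmap = u 0 100000 65536
    lxc.idmap = g 0 100000 65536

    # Result: UID 0 ("root") inside the container is really UID 100000 on the
    # host -- an ordinary, unprivileged user.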

Because all the containers running on a host are really just regular processes on a shared server, you get all the benefits of a single process scheduler, I/O scheduler, and network scheduler, plus shared physical and virtual memory pools. There are also plenty of disk management tools and techniques (copy-on-write filesystems and snapshot clones, for example) that make deploying a new LXC container take only a few seconds and a few MB.
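
For example, with a copy-on-write backing store (btrfs, ZFS, or overlayfs), a snapshot clone of a pre-built base container is nearly instant. This is only a sketch; the container names and the Ubuntu release are placeholders:

    # Build a base container once, using LXC's "download" template:
    lxc-create -n base -t download -- -d ubuntu -r jammy -a amd64

    # Deploy new containers as snapshot clones (-s); on a copy-on-write
    # filesystem this takes seconds and only a few MB of new space:
    lxc-copy -n base -N web1 -s
    lxc-start -n web1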

Launching Docker or other containers is usually just as easy, although setting up the virtual networks can be challenging at first. Once you have a bridge for the containers, either bridged directly to the external physical Ethernet interface or routed through a firewall container, you just point your containers at the bridge and they should be able to reach what they need to reach.
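
For the simple case of bridging straight to the physical interface, the host-side setup can be as small as the sketch below. The interface names (eth0, br0), the subnet, and the network name are examples, and the LXC keys shown are the modern lxc.net.* form:

    # Create a bridge on the host and enslave the physical NIC to it:
    ip link add name br0 type bridge
    ip link set eth0 master br0
    ip link set br0 up

    # Then point each container at the bridge, e.g. in an LXC config file:
    #   lxc.net.0.type  = veth
    #   lxc.net.0.link  = br0
    #   lxc.net.0.flags = up

    # Docker can do roughly the same thing with a macvlan network attached to
    # the physical interface (subnet and gateway are placeholders):
    docker network create -d macvlan --subnet=192.168.1.0/24 \
        --gateway=192.168.1.1 -o parent=eth0 pubnet
    docker run -d --network pubnet nginx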

I suppose another challenge is presenting a service to the external world. You probably need a reverse proxy server (most likely running in your firewall container, e.g. nginx) where you define the ports to listen on, the URLs to match, and which backend server(s) to send that traffic to. Like I said, setting up the network can be a little challenging.
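
Here is a minimal sketch of such a reverse proxy, dropped into the firewall container. The hostname, config path, and backend addresses are placeholders, not a recommended layout:

    cat > /etc/nginx/conf.d/proxy.conf <<'EOF'
    server {
        listen 80;
        server_name example.com;

        # Everything under / goes to the web app container:
        location / {
            proxy_pass http://10.0.3.10:8080;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }

        # /api/ traffic goes to a different backend container:
        location /api/ {
            proxy_pass http://10.0.3.11:3000;
        }
    }
    EOF

    # Check the config and reload nginx:
    nginx -t && nginx -s reload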