Cloud Foundry Containers: The Difference Between Warden, Docker, and Garden

By Maksim Zhylinski | September 21, 2016

Does Cloud Foundry work with containers? According to this video from the Cloud Foundry Foundation, containers are “part of the platform’s DNA.”

For those just getting started with Cloud Foundry and containers, we’ve put together an overview on different types of container implementations and how they might be used.

Until recently, Warden was the main container implementation used in Cloud Foundry. Docker is another popular way to manage containers easily and efficiently, which is why a lot of effort has been put into enabling Docker support in the Cloud Foundry Diego runtime.

Here, we briefly compare Warden and Docker, as well as explore the internals of Garden—the current container back end in Cloud Foundry.

What is Similar?

Warden and Docker containers have a number of similarities in their internal implementation; for instance:

  • Both employ cgroups to isolate usage of resources and namespaces to separate applications running inside containers from each other and from host processes. (These two features are provided by the Linux kernel.)
  • Both Warden and Docker use layers combined with a union file system, which organizes them into a single isolated root file system to be used inside a container.
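A real union mount (for example, OverlayFS) needs kernel support and root privileges, but its core semantics can be sketched with plain directories: files in the upper, writable layer shadow same-named files in the read-only layer beneath it. The file names below are purely illustrative.

```shell
# Simulate union-mount semantics with plain directories:
# the upper (read/write) layer shadows the lower (read-only) layer.
lower=$(mktemp -d)
upper=$(mktemp -d)
merged=$(mktemp -d)

echo "from lower" > "$lower/os-release"
echo "from lower" > "$lower/config"
echo "from upper" > "$upper/config"   # container's own copy shadows the base

# Build the merged view: copy the lower layer first, then let the upper win on conflicts.
cp "$lower"/* "$merged"/
cp "$upper"/* "$merged"/

cat "$merged/config"      # -> from upper
cat "$merged/os-release"  # -> from lower
```

A real union file system does this at mount time without copying, which is what makes sharing a read-only OS layer between many containers cheap.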

What is Different?

There are also some differences. Warden is part of Cloud Foundry, so it doesn’t need to support many file systems; at the moment, it works with AUFS and OverlayFS. Docker supports more storage back ends, including these two union file systems, as well as Btrfs, ZFS, VFS, and the devicemapper framework.

The main difference between Docker and Warden is in the way container images are organized. Warden is designed to run applications that get all their dependencies from pieces of software called buildpacks. Warden containers usually have only two layers: a read-only layer with an OS root file system (for example, Ubuntu 14.04) and a nonpersistent, read/write layer for the application itself, all its dependencies, and temporary data.

Cloud Foundry Warden


Unlike Warden, Docker is built to run images, which are distributed through Docker Hub. An app must have an image created for it before it can work with Docker. Users can download publicly available images and make their own based on them.

Docker images consist of multiple layers, one for each RUN command. Layers are combined into a single file system, just as they are in Warden (see the diagram below). When a user creates an image based on someone else’s image, Docker reuses the layers. For example, if they have a Jenkins image based on JRE and want to make their own, with a Java application that also needs JRE, Docker will reuse the JRE image and all of its parents.

Docker Filesystem
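As a sketch of this layering, each instruction in a Dockerfile produces its own layer, and images built FROM it reuse those layers. The base image tag and file paths below are illustrative assumptions, and nothing is actually built here, so no Docker daemon is needed.

```shell
# Write an illustrative Dockerfile; each instruction maps to one layer
# that child images can reuse instead of rebuilding.
dir=$(mktemp -d)
cat > "$dir/Dockerfile" <<'EOF'
FROM openjdk:8-jre
RUN mkdir -p /opt/app
COPY app.jar /opt/app/
CMD ["java", "-jar", "/opt/app/app.jar"]
EOF
grep -c '^' "$dir/Dockerfile"   # 4 instructions, one prospective layer each
```

A second image starting with the same `FROM openjdk:8-jre` line would share the JRE layers on disk, which is exactly the reuse described above.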

There are additional differences, outlined in the table below, which shows which resources can be isolated and which features are currently available in Warden and Docker containers.


1. Resource isolation and control
   • Warden: CPU shares, memory + swap, network bandwidth, disk size quota
   • Docker: CPU shares, CPU sets, memory, memory + swap, block device bandwidth
2. Dynamic resource management
   • Warden: supported by Warden containers, but not used by Cloud Foundry
   • Docker: not supported
3. Image management
   • Warden: only whole images can be reused to create new containers
   • Docker: layered—allows for reusing separate layers
4. Linking containers
   • Warden: no
   • Docker: yes
5. Exposing ports
   • Warden: a single port per container (multiple ports will be available in Garden/Diego)
   • Docker: multiple ports per container

What is Garden?

Garden is the current Cloud Foundry container back end, which became available in Diego—the current Cloud Foundry runtime. Garden is built around the same idea as Warden, but it has been refactored and re-implemented in Go. (Warden was originally written in Ruby.)

So, what’s the difference between Warden and Garden? First of all, Garden is modular. It supports multiple pluggable “back ends”—pieces of software responsible for creating containers. At the moment, three back ends are available: Linux, runC (a container runtime built around the Open Container Initiative specification), and Windows. Yes, you can run Windows (.NET) applications with Garden!

Another killer feature of the new Garden Linux back end is the ability to run Docker images. Garden can fetch Docker images from Docker Hub, as well as from your own Docker registry. Docker support on Cloud Foundry is very good news for teams that already employ Docker images in their everyday activities and want to continue using them after moving to Cloud Foundry.

Some Cloud Foundry offerings already use Diego as the default app runtime, such as CenturyLink AppFog, HPE Helion Stackato, and Pivotal Cloud Foundry. According to the public tracker, Diego 1.0—which will finally replace DEA in open source Cloud Foundry—will be out in a matter of weeks.

Garden Internals

Let’s dig inside a Garden container to see how it works. First, I’ll create the simplest possible app—a static index.html file—and push it to Cloud Foundry.
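The app itself is nothing more than a directory with an index.html and an empty Staticfile marker; the HTML content below is an illustrative assumption, and the `cf push` step is left commented out because it requires a targeted Cloud Foundry org and space.

```shell
# Minimal static app for the staticfile buildpack (sketch).
appdir=$(mktemp -d)
cd "$appdir"
echo '<h1>Hello from Garden</h1>' > index.html
touch Staticfile     # an empty marker file; tells CF to use the staticfile buildpack
# cf push static     # then push it (requires a targeted org/space)
ls
```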

$ cf app static
Showing health and status for app static in org altoros / space dev as admin...


requested state: started
instances: 1/1
usage: 1G x 1 instances
last uploaded: Tue Aug 9 13:25:09 UTC 2016
stack: cflinuxfs2
buildpack: staticfile 1.3.0


     state     since                    cpu    memory       disk         details
#0   running   2016-08-09 04:25:48 PM   0.0%   3.5M of 1G   6.6M of 1G


$ cat index.html 


$ curl

Garden keeps its containers in a “depot” directory, located at /var/vcap/data/garden/depot in a typical Cloud Foundry deployment. Each subdirectory is a Garden container, and the directory name is the container ID.

root@afd496e3-339a-4ab6-87fd-87310fd8cc47:/var/vcap/data/garden/depot# ls -l
total 24
drwxr-xr-x 9 root root 4096 Jul 14 15:55 o9bkat0pmke
drwxr-xr-x 9 root root 4096 Jul 26 04:50 o9bkat0po8s
drwxr-xr-x 9 root root 4096 Jul 26 12:33 o9bkat0poai
drwxr-xr-x 9 root root 4096 Jul 27 09:24 o9bkat0poeg
drwxr-xr-x 9 root root 4096 Aug  9 11:58 o9bkat0pq98
drwxr-xr-x 9 root root 4096 Aug  9 13:25 o9bkat0pq9j

(It may be tricky to find “your” container without knowing its ID; I found my sample container by the directory’s creation time.)
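The “newest directory” trick generalizes: sorting depot entries by modification time usually surfaces the container that was just created. Here is a simulation with a temporary directory standing in for the depot (the container IDs are taken from the listing above):

```shell
# Stand-in for /var/vcap/data/garden/depot: the most recently created
# subdirectory is usually the container that was just pushed.
depot=$(mktemp -d)
mkdir "$depot/o9bkat0pmke"
sleep 1
mkdir "$depot/o9bkat0pq9j"   # created last, like a freshly pushed app
newest=$(ls -t "$depot" | head -n 1)
echo "$newest"               # -> o9bkat0pq9j
```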

Let’s see what’s inside:

root@afd496e3-339a-4ab6-87fd-87310fd8cc47:/var/vcap/data/garden/depot/o9bkat0pq9j# ls -l
total 64
drwxr-xr-x 2 root root 4096 Aug  9 13:25 bin
-rw-r--r-- 1 root root   15 Aug  9 13:25 bridge-name
-rwxr-xr-x 1 root root 1601 Aug  9 13:25
drwxr-xr-x 2 root root 4096 Aug  9 13:25 etc
drwxr-xr-x 2 root root 4096 Aug  9 13:25 jobs
drwxr-xr-x 2 root root 4096 Aug  9 13:25 lib
-rwxr-xr-x 1 root root 1185 Aug  9 13:25
-rwxr-xr-x 1 root root 1195 Aug  9 13:25
drwxr-xr-x 2 root root 4096 Aug  9 13:52 processes
-rw-r--r-- 1 root root   16 Aug  9 13:25 rootfs-provider
drwxr-xr-x 2 root root 4096 Aug  9 13:25 run
-rwxr-xr-x 1 root root 3820 Aug  9 13:25
-rwxr-xr-x 1 root root  484 Aug  9 13:25
-rwxr-xr-x 1 root root 1195 Aug  9 13:25
drwxr-xr-x 2 root root 4096 Aug  9 13:25 tmp
-rw-r--r-- 1 root root    5 Aug  9 13:25 version

A Shell Inside Garden Containers

One of the most interesting parts here is the bin directory with the wsh utility, which allows you to run a shell inside the container.

root@afd496e3-339a-4ab6-87fd-87310fd8cc47:/var/vcap/data/garden/depot/o9bkat0pq9j# ./bin/wsh

Let’s check out the processes running in this container:

# ps aux
root         1  0.0  0.0   8312  5672 ?        S<l  13:25   0:00 initd -dropCapabilities=false -title="wshd: o9bkat0pq9j"
vcap        11  0.0  0.0  22528  4340 ?        S<   13:25   0:00 nginx: master process /home/vcap/app/nginx/sbin/nginx -p /home/vcap/app/nginx -c /home/vcap/app/nginx/conf/nginx.conf
vcap        13  0.0  0.0  10360  7424 ?        S<l  13:25   0:00 /tmp/lifecycle/diego-sshd -address= -hostKey=-----BEGIN RSA PRIVATE KEY----- MIICXAIBAAKBgQCa2V4UlGUjE1x0pSr
vcap        24  0.0  0.0  48688   720 ?        S<   13:25   0:00 cat
vcap        25  0.0  0.0  48688   828 ?        S<   13:25   0:00 cat
vcap        30  0.0  0.0  22952  2336 ?        S<   13:25   0:00 nginx: worker process                                                                          
root       271  0.0  0.0   4448   688 pts/0    S<s  13:54   0:00 /bin/sh
root       292  0.0  0.0  15572  2056 pts/0    R<+  13:55   0:00 ps aux

Here, we can see an nginx process, which serves our index.html file.

# curl localhost:8080

Garden File System

Let’s view the container’s file system:

# ls -l /
total 84
lrwxrwxrwx   1 root root    14 Jan 21  2016 app -> /home/vcap/app
drwxr-xr-x   2 root root  4096 Jan 21  2016 bin
drwxr-xr-x   2 root root  4096 Apr 10  2014 boot
drwxr-xr-x   6 root root  4096 Aug  9 13:25 dev
drwxr-xr-x  78 root root  4096 Aug  9 13:25 etc
drwxr-xr-x   4 root root  4096 Aug  9 13:25 home
drwxr-xr-x  13 root root  4096 Jan 21  2016 lib
drwxr-xr-x   2 root root  4096 Jan 19  2016 lib64
drwx------   2 root root 16384 Aug  9 13:25 lost+found
drwxr-xr-x   2 root root  4096 Jan 19  2016 media
drwxr-xr-x   2 root root  4096 Apr 10  2014 mnt
drwxr-xr-x   2 root root  4096 Jan 19  2016 opt
dr-xr-xr-x 216 root root     0 Aug  9 13:25 proc
drwx------   2 root root  4096 Jan 21  2016 root
drwxr-xr-x   7 root root  4096 Jan 19  2016 run
drwxr-xr-x   2 root root  4096 Jan 21  2016 sbin
drwxr-xr-x   2 root root  4096 Jan 19  2016 srv
dr-xr-xr-x  13 root root     0 Aug  9 13:25 sys
drwxrwxrwt   4 root root  4096 Aug  9 13:25 tmp
drwxr-xr-x  10 root root  4096 Jan 21  2016 usr
drwxr-xr-x  12 root root  4096 Jan 21  2016 var

It looks just like a regular Linux file system, with an /app directory, which contains application bits:

# ls /app
Staticfile  index.html  nginx  public  sources.yml
# cat /app/index.html

Now, let’s leave the container shell:

# exit

Another interesting thing here is the container config, located at etc/config.

root@afd496e3-339a-4ab6-87fd-87310fd8cc47:/var/vcap/data/garden/depot/o9bkat0pq9j# cat etc/config 

You can see the rootfs_path parameter, which points to a path on the host file system. This is essentially the container’s root file system:

root@afd496e3-339a-4ab6-87fd-87310fd8cc47:/var/vcap/data/garden/depot/o9bkat0pq9j# ls /var/vcap/data/garden/aufs_graph/aufs/mnt/70d4c8fac017a4a3d7cdb4013f3662a379a7d56e21877206c8d2a77611853859
app  bin  boot  dev  etc  home  lib  lib64  lost+found  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var

To check this, let’s create a file from the host:

touch /var/vcap/data/garden/aufs_graph/aufs/mnt/70d4c8fac017a4a3d7cdb4013f3662a379a7d56e21877206c8d2a77611853859/created-from-host

and then open a container shell again to check whether the file is visible:

root@afd496e3-339a-4ab6-87fd-87310fd8cc47:/var/vcap/data/garden/depot/o9bkat0pq9j# ./bin/wsh
# ls /
app  bin  boot  created-from-host  dev  etc  home  lib  lib64  lost+found  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var

Networking in Garden

Container networks are isolated from the host with Linux network namespaces. However, Garden provides a way to exchange traffic between a container and the host, as well as to access container network services from outside the host.

To exchange packets between the host and container, Garden creates a dedicated network in the /30 range for each container (which can hold only two IP addresses: one for the host and another for the container) with a pair of network interfaces. Dig into the etc/config file inside the container directory to see the network settings:

root@afd496e3-339a-4ab6-87fd-87310fd8cc47:/var/vcap/data/garden/depot/o9bkat0pq9j# cat etc/config 
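The /30 choice is the smallest subnet that still leaves two usable addresses, which a quick calculation confirms:

```shell
# A /30 network holds 2^(32-30) = 4 addresses; subtracting the network
# and broadcast addresses leaves exactly 2 usable: one for the host-side
# interface and one for the container-side interface.
prefix=30
total=$(( 1 << (32 - prefix) ))
usable=$(( total - 2 ))
echo "total=$total usable=$usable"   # -> total=4 usable=2
```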

To make sure the network is working, let’s access nginx in the container from the host machine:

root@afd496e3-339a-4ab6-87fd-87310fd8cc47:/var/vcap/data/garden/depot/o9bkat0pq9j# curl

To provide network access to a container, Garden uses port-based network address translation (NAT). Garden randomly picks an unused port—60236 in my example—and then adds a NAT rule with iptables, which says, “forward everything that arrives on port 60236 to”
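Such a forwarding rule has roughly the shape of an iptables DNAT rule. The chain, container IP, and app port below are illustrative assumptions (only the host port 60236 comes from the example above), and the command is printed rather than executed, since installing it would require root:

```shell
# Sketch of the port-forwarding (DNAT) rule Garden sets up; values are illustrative.
host_port=60236          # the randomly picked host port from the example
container_ip=10.254.0.2  # assumed container end of the /30 pair
app_port=8080            # port nginx listens on inside the container
rule="iptables -t nat -A PREROUTING -p tcp --dport ${host_port} -j DNAT --to-destination ${container_ip}:${app_port}"
echo "$rule"
```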

The following diagram demonstrates the container network and the packet flow.

container network and packet flow

Garden appears very promising. Now, we’re just waiting for the Diego 1.0 release, which will bring even more power to container orchestration on Cloud Foundry.

About the Author

Maksim Zhylinski is a Cloud Foundry Engineer at Altoros. He is an expert in cloud computing, networking, and Cloud Foundry BOSH, having worked on multiple BOSH CPIs, releases, and service brokers. Maksim has 6+ years of experience in all things Ruby, JavaScript, and Go, as well as extensive skills in server- and client-side web app development. He is an active member of the Ruby and Go communities and a frequent contributor to various open-source projects.
