Openstack and Docker – Part 2

This is a continuation of my previous blog on Openstack and Docker. In this blog, I will cover the Openstack Heat Docker plugin and Magnum.

Following are some of the items that the Nova Docker driver cannot do currently:

  1. Passing environment variables
  2. Linking containers
  3. Specifying volumes
  4. Orchestrating and scheduling the containers

The Heat Docker plugin solves problems 1-3 and partially solves problem 4. Following is the architecture diagram I found in the Openstack Docker wiki for Heat.

[Figure: Heat Docker plugin architecture, from the Openstack Docker wiki]

  • Nova is not involved here. Openstack Heat uses the Docker plugin to talk to the Docker agent on the host.
  • The host here is a VM; it can either be spawned directly by Nova, or Heat can spawn it using the Nova driver.
  • Glance is not involved here since the container images are stored in the Docker registry.
  • The Heat approach allows us to specify environment variables, link containers, and specify volumes, as well as orchestrate the host on which Docker runs (see the template sketch below).
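
For illustration, here is a minimal sketch of a template exercising those properties. The property names (env, links, volumes) come from the plugin's resource schema, but the resource names, images, and values here are made up, and the exact value syntax may differ by plugin version:

heat_template_version: 2013-05-23
description: >
  Sketch of environment variables, links and volumes with the Docker plugin
resources:
  db-01:
    type: DockerInc::Docker::Container
    properties:
      image: mysql
      name: db01
      env: ["MYSQL_ROOT_PASSWORD=secret"]    # 1. environment variables
  web-01:
    type: DockerInc::Docker::Container
    depends_on: [ db-01 ]
    properties:
      image: wordpress
      env: ["WORDPRESS_DB_PASSWORD=secret"]
      links: { "db01": "mysql" }             # 2. linking containers
      volumes: { "/var/www/html": {} }       # 3. specifying volumes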

Using the Heat plugin:

I had some issues getting the Heat plugin to work with Openstack Kilo. It worked fine with Openstack Icehouse, so I continued with that. I used the approach mentioned in this wiki to integrate the Heat plugin with Openstack Icehouse on Devstack.

My environment:

Ubuntu 14.04 running on top of Virtualbox with the Devstack Icehouse release. My localrc file is here; it uses Nova network instead of Neutron.

Stacking and heat plugin install:

Do the stacking. After successful stacking, I followed the Heat Docker plugin installation as mentioned in the wiki. The heat engine service needs to be restarted after the plugin installation; I normally do this by attaching to the screen session, going to the heat engine window, killing the service (Ctrl-C), and restarting it (a rough sequence is sketched below). After restarting the heat engine, we can check that the plugin got installed and is running:
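
A rough sequence for the restart, assuming devstack's default screen session name, stack, and its heat engine window, h-eng:

$ screen -x stack
# Ctrl-a " lists the windows; switch to h-eng, stop the service with
# Ctrl-c, then re-run the previous command (up arrow + Enter)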

$ heat resource-type-list | grep Docker
| DockerInc::Docker::Container |

Using heat to start a local container:

Here, we will run the Docker agent on the host machine and then use the Openstack Heat plugin to create containers. The first step is to install Docker on the host machine and allow HTTP access so Docker clients can connect. For the Docker installation, use the procedure here. By default, Docker does not allow external HTTP access. To start Docker with external HTTP access on port 2376, execute:

$ sudo /usr/bin/docker -d --host=tcp://0.0.0.0:2376
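
A quick way to confirm the daemon is reachable over TCP:

$ docker -H tcp://127.0.0.1:2376 info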

Following is a simple Heat template to create an nginx container on the localhost.

heat_template_version: 2013-05-23
description: >
  Heat template to deploy Docker containers to an existing host
resources:
  nginx-01:
    type: DockerInc::Docker::Container
    properties:
      image: nginx
      docker_endpoint: 'tcp://192.168.56.102:2376'

Let's create the heat stack using the above template file.

$ heat stack-create -f ~/heat/docker_temp.yml nginxheat1

Now check if the stack is successfully created:

$ heat stack-list
+--------------------------------------+---------------+-----------------+----------------------+
| id                                   | stack_name    | stack_status    | creation_time        |
+--------------------------------------+---------------+-----------------+----------------------+
| d878d8c1-ce17-4f29-9203-febd37bd8b7d | nginxheat1    | CREATE_COMPLETE | 2015-06-14T13:27:54Z |
+--------------------------------------+---------------+-----------------+----------------------+

Check if the container is running on the localhost:

$ docker -H :2376 ps
CONTAINER ID        IMAGE               COMMAND                CREATED             STATUS              PORTS               NAMES
624ff5de9240        nginx:latest        "nginx -g 'daemon of   2 minutes ago       Up 2 minutes        80/tcp, 443/tcp     trusting_pasteur    

At this point, we can access the webserver running as a container on localhost.
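
Since the template does not publish any ports, one way to reach nginx is through the container's bridge IP (a quick sketch; the container ID comes from the ps output above):

$ CID=$(docker -H :2376 ps -q | head -1)
$ curl http://$(docker -H :2376 inspect -f '{{ .NetworkSettings.IPAddress }}' $CID)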

Using heat to start a remote container:

Here, we will create a VM using Nova, start the Docker agent on the VM, and then use the Openstack Heat plugin to create containers on the VM.

First, let's create a Fedora image and upload it to Glance. I used the procedure here to do that. Let's look at the Glance images after this.

$ glance image-list
+--------------------------------------+---------------------------------+-------------+------------------+-----------+--------+
| ID                                   | Name                            | Disk Format | Container Format | Size      | Status |
+--------------------------------------+---------------------------------+-------------+------------------+-----------+--------+
| 17239070-5aef-4bab-85df-1f9f72b6370b | cirros-0.3.1-x86_64-uec         | ami         | ami              | 25165824  | active |
| ea6eb351-7268-4b2e-91cd-806a67c4e9fe | fedora-software-config          | qcow2       | bare             | 610140160 | active |
+--------------------------------------+---------------------------------+-------------+------------------+-----------+--------+

Next, I used the following script to create the key needed for SSH and to set the appropriate security group rules.

# Create key and upload
ssh-keygen
nova keypair-add --pub-key ~/.ssh/id_rsa.pub key1
# Permit ICMP (ping):
nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
# Permit secure shell (SSH) access:
nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
# Permit 2375 port access (Docker endpoint):
nova secgroup-add-rule default tcp 2375 2375 0.0.0.0/0
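
We can verify the rules that were added:

$ nova secgroup-list-rules default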

Now, we can create the Fedora VM using Nova.

nova boot --flavor m1.medium --image fedora-software-config --security-groups default --key-name key1 --max-count 1 fedoratest

The instance needs external internet access to download Docker images. With Nova networking, we need to tweak iptables to fix the SNAT rule: the first command deletes the default rule, and the second re-creates it with the host's eth0 address. The addresses below are placeholders for the original SNAT source and the eth0 IP.

sudo iptables -t nat -D nova-network-snat -o br100 -s 10.0.0.0/24 -j SNAT --to-source <old-snat-ip>
sudo iptables -t nat -A nova-network-snat -o br100 -s 10.0.0.0/24 -j SNAT --to-source <eth0-ip>
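
To confirm the new SNAT rule is in place:

$ sudo iptables -t nat -S nova-network-snat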

Let's check the instance created:

$ nova list
+--------------------------------------+------------+--------+------------+-------------+------------------+
| ID                                   | Name       | Status | Task State | Power State | Networks         |
+--------------------------------------+------------+--------+------------+-------------+------------------+
| ab701808-98fb-4cba-907f-663fd762cf2a | fedoratest | ACTIVE | -          | Running     | private=10.0.0.2 |
+--------------------------------------+------------+--------+------------+-------------+------------------+

To log in to the Fedora VM, we need to use the key we created earlier (the VM's address, 10.0.0.2, is from the nova list output above):

ssh -i ~/.ssh/id_rsa fedora@10.0.0.2

The Fedora image already comes with the Docker agent installed. We need to start Docker listening on port 2375 for Openstack Heat to talk to it:

sudo cp /usr/lib/systemd/system/docker.service /etc/systemd/system/
# Edit /etc/sysconfig/docker and set:
#   OPTIONS=--host=tcp://0.0.0.0:2375
sudo systemctl start docker.service
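
To confirm the daemon is listening on the new port, run this on the VM:

$ docker -H :2375 version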

Now, we can create containers on the remote VM. Let's first specify the Heat template to create the nginx container on the remote VM. The docker_endpoint is the remote VM's IP address.

heat_template_version: 2013-05-23
description: >
  Heat template to deploy Docker containers to an existing host
resources:
  nginx-01:
    type: DockerInc::Docker::Container
    properties:
      image: nginx
      docker_endpoint: 'tcp://10.0.0.2:2375'

Let's start the heat stack using the above template.

$ heat stack-create -f docker_stack1.yml docker_stack2

The nginx Docker image needs to be pulled on the VM beforehand; this step could also be automated with Heat.
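
For example, on the VM:

[fedora@fedoratest ~]$ docker -H :2375 pull nginx
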
Now, we can check that the container is created on the VM.

[fedora@fedoratest ~]$ docker -H :2375 ps
CONTAINER ID        IMAGE               COMMAND                CREATED              STATUS              PORTS               NAMES
18a7ed8f5b00        nginx:latest        "nginx -g 'daemon of   About a minute ago   Up About a minute   80/tcp, 443/tcp     tender_morse      

The heat template file mentioned here creates the VM, pulls the apache and mysql containers, and links them. Following is another good wiki that goes into the details of defining Heat templates for Docker containers and gives a good example. I did not have luck getting these scripts working in my devstack environment: the instances got spawned, but container creation failed. I spent some time on it but could not figure out the issue.

Openstack Magnum:

The Openstack Heat plugin for Docker solves some of the problems with the Nova Docker driver, but it does not handle dynamic orchestration and scheduling. Magnum is a generic container management solution being developed in Openstack to manage Docker as well as other container technologies. Currently, Magnum supports Kubernetes and Docker. In both cases, a cluster of hosts is created and containers get scheduled onto the cluster using clustering algorithms and constraints. Magnum is still at an early stage; its first release was done with Openstack Kilo a few months back.

Following is the Magnum architecture diagram from the Magnum wiki.

[Figure: Magnum architecture, from the Magnum wiki]

  • Magnum uses other Openstack components like Nova, Heat, Neutron, Glance and Keystone wherever needed to manage containers.
  • Nova is used to create micro VM instances on top of which the containers run. A Docker agent runs in each micro VM instance.
  • Heat is used for overall orchestration. For Kubernetes, Heat creates the Kubernetes agents and the replication controller. For Docker, Heat creates the Docker Swarm cluster and the Swarm agents.
  • The steps involved are creation of a bay model (Kubernetes or Docker), bay creation, and then container creation, roughly as sketched below.
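
For reference, the CLI flow looks roughly like this (the flags match the Kilo-era Magnum client used in the comments below; the image, keypair, and network names are placeholders):

# 1. Create a bay model, choosing the container orchestration engine (COE)
magnum baymodel-create --name k8smodel --image-id fedora-21-atomic-3 \
  --keypair-id key1 --external-network-id public \
  --dns-nameserver 8.8.8.8 --flavor-id m1.small --coe kubernetes
# 2. Create a bay from the bay model
magnum bay-create --name k8sbay --baymodel k8smodel --node-count 2
# 3. Once the bay is CREATE_COMPLETE, containers can be scheduled into it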

I tried to follow the procedure mentioned here to create a Magnum cluster. I followed the steps till cluster creation, but cluster creation never completed successfully; it was stuck in “CREATE_IN_PROGRESS”. I tried with both the Kubernetes and the Docker baymodel, with no luck. If someone got this working, please let me know.

Container orchestration is still evolving and there are multiple technologies being developed to address the problem. Docker is developing its own orchestration solution. For folks who are already using Openstack, Magnum seems like a complete solution since it integrates well with the other components of Openstack. I feel that, over time, the best technology will survive.

References:

Note:

Pictures used in this blog are from references.

9 thoughts on “Openstack and Docker – Part 2”

  1. Hi, I met the same problem: when I created a magnum bay, it was stuck in “CREATE_IN_PROGRESS”.

    Have you solved the problem?

    I’m looking forward to your reply.

  2. Folks,

    Debug along these lines (I am running Ubuntu as a virtual machine on VMware Fusion):

    1. Are you able to launch an instance based on the fedora-21-atomic-3 image in horizon?
    2. Are you able to get into the console once the instance is launched?

    Then, do these commands:

    export OS_TENANT_NAME=admin
    export OS_USERNAME=admin
    export OS_PASSWORD=openstack1
    export OS_PROJECT_NAME=admin
    export OS_USER_DOMAIN_ID=default
    export OS_PROJECT_DOMAIN_ID=default
    export OS_AUTH_URL=http://172.16.97.135:35357/v3

    magnum baymodel-create --name k8sbaymodel2 --image-id fedora-21-atomic-3 --keypair-id testkey --external-network-id ${NIC_ID} --dns-nameserver 8.8.8.8 --flavor-id m1.small --docker-volume-size 5 --coe kubernetes
    magnum bay-create --name plswork7 --baymodel k8sbaymodel2 --node-count 1

    With this you should see 2 nova instances up and running (if not, check whether you have enough RAM and hard disk):

    nova list
    +--------------------------------------+-------------------------------------------------------+--------+------------+-------------+-------------------------------------------------------------------------+
    | ID                                   | Name                                                  | Status | Task State | Power State | Networks                                                                |
    +--------------------------------------+-------------------------------------------------------+--------+------------+-------------+-------------------------------------------------------------------------+
    | 117dee24-636c-4530-a10d-9e2503455e0b | pl-klrfxgipzh-0-rwps745t2rvy-kube_master-rxuncq73hg6j | ACTIVE | -          | Running     | plswork7-emvxg6jnblbp-fixed_network-xr3ykdvmrbhs=10.0.0.5, 172.16.97.9  |
    | b0f08623-525a-4222-b6cf-b6d1d4e312a6 | pl-vqzgdiqin6-0-4fmtvkxmikgp-kube_minion-fouri7wx4jvq | ACTIVE | -          | Running     | plswork7-emvxg6jnblbp-fixed_network-xr3ykdvmrbhs=10.0.0.6, 172.16.97.10 |
    +--------------------------------------+-------------------------------------------------------+--------+------------+-------------+-------------------------------------------------------------------------+

    Log into the console of these 2 instances from horizon and check if they have booted up fine and are at the login prompt (I have seen an issue where an instance got stuck at some point and I had to reload it).

    After this, cloud-init should start and complete

    Then check these outputs; each stack should be CREATE_COMPLETE:

    +--------------------------------------+----------------------------------------------------------------+-----------------+---------------------+--------------------------------------+
    | id                                   | stack_name                                                     | stack_status    | creation_time       | parent                               |
    +--------------------------------------+----------------------------------------------------------------+-----------------+---------------------+--------------------------------------+
    | 78ed4264-e5bd-439d-a2ae-5fcdc6b9b7e7 | plswork7-emvxg6jnblbp                                          | CREATE_COMPLETE | 2015-08-17T07:00:51 | None                                 |
    | d95da075-7e19-4627-a1b5-aece0ff3549a | plswork7-emvxg6jnblbp-kube_masters-vzklrfxgipzh                | CREATE_COMPLETE | 2015-08-17T07:01:07 | 78ed4264-e5bd-439d-a2ae-5fcdc6b9b7e7 |
    | 42160224-d7a0-4c2c-9d88-cc5a409fe1ef | plswork7-emvxg6jnblbp-kube_masters-vzklrfxgipzh-0-rwps745t2rvy | CREATE_COMPLETE | 2015-08-17T07:01:09 | d95da075-7e19-4627-a1b5-aece0ff3549a |
    | a4d01ac7-d6da-477c-a4df-694e5f0fa195 | plswork7-emvxg6jnblbp-kube_minions-wuvqzgdiqin6                | CREATE_COMPLETE | 2015-08-17T07:10:58 | 78ed4264-e5bd-439d-a2ae-5fcdc6b9b7e7 |
    | a54d7bda-1776-4301-90de-6f37ecfef9e8 | plswork7-emvxg6jnblbp-kube_minions-wuvqzgdiqin6-0-4fmtvkxmikgp | CREATE_COMPLETE | 2015-08-17T07:11:02 | a4d01ac7-d6da-477c-a4df-694e5f0fa195 |
    +--------------------------------------+----------------------------------------------------------------+-----------------+---------------------+--------------------------------------+

    And then check the below outputs:

    heat event-list plswork7-emvxg6jnblbp (repeat this for the other stacks)

    +-----------------------+--------------------------------------+-------------------------------------+--------------------+---------------------+
    | resource_name         | id                                   | resource_status_reason              | resource_status    | event_time          |
    +-----------------------+--------------------------------------+-------------------------------------+--------------------+---------------------+
    | plswork7-emvxg6jnblbp | a9f8c75c-bc40-44b3-b3b7-cbed48fa9aaa | Stack CREATE started                | CREATE_IN_PROGRESS | 2015-08-17T07:00:51 |
    | extrouter             | 9ea3564b-6fe7-4844-a240-f3285a89aad6 | state changed                       | CREATE_IN_PROGRESS | 2015-08-17T07:00:51 |
    | etcd_monitor          | 97ba7437-313c-4210-9168-c1229a8fb1f0 | state changed                       | CREATE_IN_PROGRESS | 2015-08-17T07:00:52 |
    | fixed_network         | 5b0e7581-7cb9-4452-9963-a05d78c3e5aa | state changed                       | CREATE_IN_PROGRESS | 2015-08-17T07:00:52 |
    | api_monitor           | 0e42e841-4813-4cbc-9d60-807634a0d886 | state changed                       | CREATE_IN_PROGRESS | 2015-08-17T07:00:52 |
    | etcd_monitor          | 7846053a-3bde-4a30-9fc6-5ddeefdd3357 | state changed                       | CREATE_COMPLETE    | 2015-08-17T07:00:52 |
    | api_monitor           | af7fac36-aeb6-4554-bde6-a46f60b59351 | state changed                       | CREATE_COMPLETE    | 2015-08-17T07:00:52 |
    | extrouter             | e5812a9b-350b-4b84-80d4-e552d97b50da | state changed                       | CREATE_COMPLETE    | 2015-08-17T07:00:52 |
    | fixed_network         | 870be452-600d-4740-bbae-da32f007f25e | state changed                       | CREATE_COMPLETE    | 2015-08-17T07:00:53 |
    | fixed_subnet          | 5a3d70b1-dc3d-4ae3-89cf-6a311f8aa452 | state changed                       | CREATE_IN_PROGRESS | 2015-08-17T07:00:53 |
    | fixed_subnet          | 363efba9-e2a2-4717-be2a-bc088041650d | state changed                       | CREATE_COMPLETE    | 2015-08-17T07:00:54 |
    | api_pool              | c40509f1-54e1-4b0f-82b4-39882ad23397 | state changed                       | CREATE_IN_PROGRESS | 2015-08-17T07:00:54 |
    | etcd_pool             | 58ceeb7d-c9a9-4650-998e-eeeee6a5c22a | state changed                       | CREATE_IN_PROGRESS | 2015-08-17T07:00:56 |
    | extrouter_inside      | 5d6beb10-148b-4aa9-bcc4-7c0f48402b35 | state changed                       | CREATE_IN_PROGRESS | 2015-08-17T07:00:58 |
    | api_pool              | 8eb16f8e-aba9-4191-ae4e-e531de7ce1eb | state changed                       | CREATE_COMPLETE    | 2015-08-17T07:01:01 |
    | extrouter_inside      | a648c0a2-7b70-4014-b1ca-275663bbae7d | state changed                       | CREATE_COMPLETE    | 2015-08-17T07:01:01 |
    | api_pool_floating     | 81421d2b-545f-48fd-9574-0cd9d631223b | state changed                       | CREATE_IN_PROGRESS | 2015-08-17T07:01:02 |
    | etcd_pool             | 15ac431f-7173-42bc-9db2-64cf92ede9bf | state changed                       | CREATE_COMPLETE    | 2015-08-17T07:01:05 |
    | api_pool_floating     | 60608c21-47b2-498b-957d-69e3f80b5000 | state changed                       | CREATE_COMPLETE    | 2015-08-17T07:01:05 |
    | kube_masters          | b8e960ba-0b48-408e-99e2-4c7ca4b982a1 | state changed                       | CREATE_IN_PROGRESS | 2015-08-17T07:01:06 |
    | kube_masters          | d88faa92-7a62-4da6-b0bf-ac3c50cf06e4 | state changed                       | CREATE_COMPLETE    | 2015-08-17T07:10:56 |
    | kube_minions          | c585a979-e2b3-4f99-b258-ce2688df6a32 | state changed                       | CREATE_IN_PROGRESS | 2015-08-17T07:10:57 |
    | kube_minions          | 11941565-559c-4fab-af6b-3b9352419370 | state changed                       | CREATE_COMPLETE    | 2015-08-17T07:43:15 |
    | plswork7-emvxg6jnblbp | cb200e98-0387-4fce-9e99-0c52fedbb2bc | Stack CREATE completed successfully | CREATE_COMPLETE    | 2015-08-17T07:43:15 |
    +-----------------------+--------------------------------------+-------------------------------------+--------------------+---------------------+
