Micro Cloud

Using Docker to Create a Micro Cloud - 12th October 2018

Introduction

In this article I'm going to document my Micro Cloud Stack.

Setup

The goal here is to set up an environment that simulates having N servers available. I should be able to ssh over to those machines, and they should be able to see each other.

1. Install Docker for Mac.

Originally I had this running with Docker Machine. Since then the Docker for Mac setup has improved quite a bit.
What was good about the old setup was that Docker Machine presented on the network with its own IP address. The bad was that Docker Machine required a virtual machine running in VirtualBox, probably provisioned with Vagrant.

The good thing about the new setup is that there is no virtual machine, so there is a lighter memory footprint. Docker for Mac uses HyperKit (which you can google), but the point is there is no virtual machine image. The bad is that the containers have no IP addresses that are routable from the Mac. I'm sure this has a more elegant solution, but my workaround is to map each Docker container to a separate ssh port, which you will see shortly.
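
In miniature, the idea looks like this (using the rainbow image built in the next step; the port, name and user are the same ones the script further down uses):

docker run --privileged -ti -d -p 2201:22 -h red --name red rainbow
ssh -p 2201 deploy@localhost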

2. Create Base Image

Here is my Dockerfile.

FROM centos:7
MAINTAINER robindevilliers@me.com
ENV container docker
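# Strip out the systemd units that don't apply inside a container, keeping only the
# tmpfiles setup service; this is the usual pattern for running systemd in a CentOS 7 container.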
RUN (cd /lib/systemd/system/sysinit.target.wants/; for i in *; do [ $i == \
systemd-tmpfiles-setup.service ] || rm -f $i; done); \
rm -f /lib/systemd/system/multi-user.target.wants/*;\
rm -f /etc/systemd/system/*.wants/*;\
rm -f /lib/systemd/system/local-fs.target.wants/*; \
rm -f /lib/systemd/system/sockets.target.wants/*udev*; \
rm -f /lib/systemd/system/sockets.target.wants/*initctl*; \
rm -f /lib/systemd/system/basic.target.wants/*;\
rm -f /lib/systemd/system/anaconda.target.wants/*;
VOLUME [ "/sys/fs/cgroup" ]
RUN yum -y install rsyslog
RUN echo "root:root" | chpasswd
RUN useradd deploy
RUN echo "deploy:deploy" | chpasswd
RUN useradd postgres
RUN echo "postgres:postgres" | chpasswd
RUN useradd webapp
RUN echo "webapp:webapp" | chpasswd
#RUN echo -e "[Artifactory]\nname=Artifactory\nbaseurl=http://admin:letmein@172.17.0.9:8081/artifactory/rpm-local/\nenabled=1\ngpgcheck=0" >> /etc/yum.repos.d/artifactory.repo
RUN yum -y install net-tools
RUN yum -y install sudo
RUN yum -y install wget
RUN yum -y install cronie
RUN yum -y groupinstall "Development Tools"
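# sshd for remote logins and passwordless sudo for the deploy user; turning off
# UsePAM is a common tweak to keep sshd happy inside a container.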
RUN yum -y install openssh-server openssh-clients
RUN sed -i 's/UsePAM yes/UsePAM no/g' /etc/ssh/sshd_config
RUN echo -e "%deploy    ALL=(ALL)       NOPASSWD: ALL\n" >> /etc/sudoers
RUN systemctl enable sshd.service
RUN mkdir /home/deploy/.ssh
RUN chown -R deploy /home/deploy/.ssh; chgrp -R deploy /home/deploy/.ssh
CMD ["/usr/sbin/init"]

And from that I can build a Docker image.

docker build --rm -t rainbow .

That should create a Docker image, which you can view.

robindevilliers:rainbow robindevilliers$ docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
rainbow             latest              0f905a2b1d69        8 months ago        1.18GB
centos              7                   ff426288ea90        9 months ago        207MB

3. Create Containers

From that image we create the containers, and I have a script for that.

#!/bin/bash

docker network create --subnet=172.19.0.0/16 rainbow-network

servers=("red" "orange" "yellow" "green" "blue" "indigo" "violet" )
ports=( "2201" "2202" "2203" "2204" "2205" "2206" "2207")

echo deleting existing servers
echo =========================
for i in "${servers[@]}"
do
   docker rm -f "$i"
done

echo creating new containers
echo =======================

for ((i = 0; i < ${#servers[@]}; ++i)); do
    HOST=${servers[$i]}
    echo Host: $HOST ${ports[$i]}
    docker run --privileged  -ti -d -p ${ports[$i]}:22 --net rainbow-network -h $HOST --name $HOST rainbow
done

rm -f ~/.ssh/config.rainbow

for ((i = 0; i < ${#servers[@]}; ++i)); do
    HOST=${servers[$i]}
    PORT=${ports[$i]}
    echo $HOST $PORT
    echo -e "Host $HOST\n HostName localhost\n User deploy\n Port $PORT\n" >> ~/.ssh/config.rainbow
    ssh-keygen -R "[localhost]:$PORT" > /dev/null
    sshpass -p deploy ssh-copy-id -i ~/.ssh/id_rsa -o StrictHostKeyChecking=no $HOST
done
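
The script assumes sshpass is installed and that an ~/.ssh/id_rsa key pair already exists. Running it is just a matter of (the filename here is whatever you saved it as):

chmod +x rainbow.sh
./rainbow.sh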

So why not just use docker-compose? Well, it's because I'm setting up all the ssh stuff as well, and by the time I am done with that, using docker-compose doesn't seem worth it.

4. How to Use

After running this script you need to include this line in your standard ~/.ssh/config file.

include config.rainbow
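
The script has already generated ~/.ssh/config.rainbow with one entry per container, in this shape (straight from the echo in the script):

Host red
 HostName localhost
 User deploy
 Port 2201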

After that we have a set of Docker containers, each corresponding to a color.

robindevilliers:rainbow robindevilliers$ docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS                  NAMES
78e8126faf04        rainbow             "/usr/sbin/init"    About an hour ago   Up About an hour    0.0.0.0:2207->22/tcp   violet
5e027c6b15f0        rainbow             "/usr/sbin/init"    About an hour ago   Up About an hour    0.0.0.0:2206->22/tcp   indigo
38c0caac078d        rainbow             "/usr/sbin/init"    About an hour ago   Up About an hour    0.0.0.0:2205->22/tcp   blue
b57a851cb51f        rainbow             "/usr/sbin/init"    About an hour ago   Up About an hour    0.0.0.0:2204->22/tcp   green
6a6f3f2f0a86        rainbow             "/usr/sbin/init"    About an hour ago   Up About an hour    0.0.0.0:2203->22/tcp   yellow
6ebcbecf16ef        rainbow             "/usr/sbin/init"    About an hour ago   Up About an hour    0.0.0.0:2202->22/tcp   orange
828f4cf8ecd3        rainbow             "/usr/sbin/init"    About an hour ago   Up About an hour    0.0.0.0:2201->22/tcp   red

And you should be able to ssh over to them like so.

robindevilliers:rainbow robindevilliers$ ssh red
Last login: Tue Oct 16 14:01:25 2018 from gateway
[deploy@red ~]$
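
The containers can also see each other by name, since they all sit on rainbow-network and Docker's embedded DNS resolves container names on a user-defined network. For example, from red (assuming ping is available in the image):

[deploy@red ~]$ ping -c 1 orange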

5. Summary

Okay, that's the setup. We now have 7 Docker containers running Linux. There are four users on each box: root, deploy, postgres and webapp.

What this environment is not going to give you is anything to do with SELinux, since these are containers and not virtual machines.

So you are probably wondering: why? What you have here is what appears to be 7 CentOS Linux machines available on the network. I've found this setup to be very useful for developing Ansible scripts and running them without being reliant on shared resources.
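
As a sketch of that last point: because Ansible goes through OpenSSH, and OpenSSH reads ~/.ssh/config, a minimal inventory can simply list the colors (assuming a stock Ansible setup that is left to pick up the User and Port from the ssh config; the group and file names here are mine):

[rainbow]
red
orange
yellow
green
blue
indigo
violet

ansible rainbow -i hosts -m ping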