{"id":393,"date":"2015-11-15T15:59:15","date_gmt":"2015-11-15T15:59:15","guid":{"rendered":"http:\/\/onlinelab.info\/?p=393"},"modified":"2015-11-15T15:59:15","modified_gmt":"2015-11-15T15:59:15","slug":"docker-basic","status":"publish","type":"post","link":"https:\/\/www.asianux.org.vn\/index.php\/2015\/11\/15\/docker-basic\/","title":{"rendered":"Docker basic"},"content":{"rendered":"<h2 id=\"what-is-docker\">What is Docker?<\/h2>\n<p>When people talk about <a href=\"https:\/\/www.docker.com\/\" target=\"_blank\" rel=\"noopener\">docker<\/a> they are most likely talking about the <code>docker engine<\/code>. Docker is in fact an organization which creates tools to assist with the creation and deployment of software in a portable manor. As well as creating the docker engine they also have projects which orchestrates and manages deployed applications, we will come across more of these later.<\/p>\n<h3 id=\"docker-engine\">Docker engine<\/h3>\n<p>The <code>docker engine<\/code> (which we will just refer to as <code>docker<\/code> from now on) is a piece of software which allows a sort of operating system level virtualization on linux based computers. This is called containerization and allows you to run applications within a container, which is a portable unit and can be moved between linux machines.<\/p>\n<h3 id=\"containers\">Containers<\/h3>\n<p>The biggest misconception I have found when it comes to containers is that people think they are just like virtual machines. They aim to achieve the same outcomes as a virtual machine including portability, speed and flexibility but they do it in a different way.<\/p>\n<p>All linux systems are fundamentally the same, they are all running the linux kernel. The differences are the tools which are layered on top of the kernel. 
If you think about it, these tools are just files in a filesystem which are executed in a certain order and interact with the kernel to achieve different goals.<\/p>\n<p>Docker works by allowing you to mount multiple filesystems on one machine and then run applications within the context of the different filesystems. For example, you could create a ubuntu-based container running on a Red Hat server. If you run the <code>bash<\/code> tool within the ubuntu container you can make use of all the standard ubuntu tools, for example <code>apt-get<\/code>. Those tools will interact with the kernel run by the Red Hat host and function as if you were on a ubuntu machine.<\/p>\n<p>Docker makes use of linux kernel functionality such as <code>cgroups<\/code>, <code>selinux<\/code> and <code>chroot jails<\/code> to keep the processes separate and contained within their containers.<\/p>\n<h3 id=\"the-differences-from-a-virtual-machine\">The differences from a virtual machine<\/h3>\n<p>You may be thinking \u201cthis sounds a lot like virtualization\u201d and you\u2019re right; however, the container will not do many of the things you would expect a virtual machine to do automatically.<\/p>\n<p>Containers run exactly one process, no more, no less. So if you execute a command within a ubuntu container that command will act as if it is running on a ubuntu machine. However, if that command relies on operating system services to be running, such as cron, it will not be able to find them. The host operating system may be running these services but the container is not.<\/p>\n<p>To work around this you simply need more containers. This is the beauty of containerization: you can run a database service on ubuntu, which is accessed by a Java application running on Red Hat, which is backed up by a cron job running on debian, all on a host server running SUSE. 
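<\/p>\n<p>To make this concrete, once docker is installed (we will do that in exercise 1) running a ubuntu container on any linux host looks something like the sketch below. The <code>ubuntu:14.04<\/code> image tag is just an assumption; any available ubuntu tag would do:<\/p>\n<pre><code># start an interactive ubuntu container on any linux host\ndocker run -it ubuntu:14.04 bash\n# inside the container the standard ubuntu tools are available\napt-get update\n<\/code><\/pre>\n<p>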
The important thing to realise is that in this example SUSE is the only \u201coperating system\u201d actually running and taking up resources; the others are just running single processes but giving those processes access to the tools of alternate operating systems.<\/p>\n<div class=\"application-notice info-notice\">\n<p>You can also specify particular versions of operating systems, so you could run a container based on ubuntu 13.10 on a ubuntu 14.04 server without worrying about compatibility issues.<\/p>\n<\/div>\n<p>Hopefully you can see now why this is exciting. It gives you a level playing field. You can develop an application on your desktop with 100% confidence it will behave exactly the same way on a production server.<\/p>\n<h2 id=\"exercises\">Exercises<\/h2>\n<p>You will now work through some exercises which will help you get to grips with docker and also experience how we are using them in the Lab.<\/p>\n<h3 id=\"requirements\">Requirements<\/h3>\n<p>The following exercises are designed to be run on an <a href=\"http:\/\/aws.amazon.com\/ec2\" target=\"_blank\" rel=\"noopener\">AWS EC2<\/a> instance. If you are taking part in the session live then please request an instance from one of the Lab core members; if you are taking part afterwards, please sign up for an <a href=\"http:\/\/aws.amazon.com\/free\/\" target=\"_blank\" rel=\"noopener\">AWS free account<\/a> and create an EC2 micro instance based on the \u201cAmazon Linux\u201d image. <a href=\"http:\/\/www.crmarsh.com\/aws\/\" target=\"_blank\" rel=\"noopener\">This guide<\/a> should get you started; just follow it up to the \u201cConfiguring your account\u201d section.<\/p>\n<h2 id=\"exercise-1-installing-docker\">Exercise 1: Installing docker<\/h2>\n<p>Before we do anything we need docker. To get started you\u2019ll need to ssh on to your instance. 
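<\/p>\n<p>Connecting will look something like this; the key path and hostname here are placeholders for your own values:<\/p>\n<pre><code>ssh -i ~\/.ssh\/your-key.pem ec2-user@your-instance-public-dns\n<\/code><\/pre>\n<p>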
It\u2019s always good practice to run an update when you create an instance.<\/p>\n<pre><code>sudo yum update -y\n<\/code><\/pre>\n<p>Once this has finished we\u2019re ready to install docker. Thankfully it is available in the Amazon Linux repositories.<\/p>\n<pre><code>sudo yum install docker -y\n<\/code><\/pre>\n<p>Hooray, you now have docker! All you need to do now is set it to start on boot (we might reboot our instance later) and then start it for the first time.<\/p>\n<pre><code>sudo chkconfig docker on\nsudo service docker start\n<\/code><\/pre>\n<p>To test docker you should be able to run:<\/p>\n<pre><code>sudo docker -v\n<\/code><\/pre>\n<p>which should print out the version and build.<\/p>\n<pre><code>Docker version 1.6.2, build 7c8fca2\/1.6.2\n<\/code><\/pre>\n<p>To avoid having to run docker commands using sudo we can add our <code>ec2-user<\/code> to the <code>docker<\/code> group. The <code>-a<\/code> flag appends the group, keeping the user\u2019s existing groups intact.<\/p>\n<pre><code>sudo usermod -aG docker ec2-user\n<\/code><\/pre>\n<div class=\"application-notice help-notice\">\n<p>You will need to log out and back in again from your ssh session for this to take effect.<\/p>\n<\/div>\n<h2 id=\"exercise-2-my-first-container\">Exercise 2: My first container<\/h2>\n<p>Now that you have docker we can go ahead and create our first container. Docker containers are just commands run within the context of an alternative file system. These alternative filesystems are called images. You can build your own images but for now we are going to use an off-the-shelf one. Docker provides a place to store images called the <a href=\"https:\/\/hub.docker.com\/\" target=\"_blank\" rel=\"noopener\">Docker Hub<\/a>. If you create a container from an image which doesn\u2019t exist locally but can be found on the hub, docker will download it for you.<\/p>\n<p>Let\u2019s get started by creating a simple installation of an apache web server. 
We are going to download the apache image from the hub and create a container to run it.<\/p>\n<pre><code>docker run -p 80:80 httpd\n<\/code><\/pre>\n<p>In this example we want docker to use the default <code>httpd<\/code> image.<\/p>\n<div class=\"application-notice info-notice\">\n<p>Images are usually named following the convention of <code>author\/image<\/code>; however, we are using an official docker image which means we can omit the author.<\/p>\n<\/div>\n<p>We are also telling docker to map port <code>80<\/code> on the host to port <code>80<\/code> on the container (the <code>-p<\/code> argument takes the form <code>host:container<\/code>). This means we can visit our instance in our web browser and we should see the \u201cIt works!\u201d apache default page.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/s3-eu-west-1.amazonaws.com\/informatics-webimages\/articles\/2015-06-24-lab-school-docker\/apache-test-page.png\" alt=\"Apache test page\" title=\"\"><\/p>\n<p>If you see the same page in your browser then you\u2019ve successfully created an apache container.<\/p>\n<div class=\"application-notice info-notice\">\n<p>You should also see some log output in your command line.<\/p>\n<\/div>\n<h2 id=\"exercise-3-extending-an-existing-image\">Exercise 3: Extending an existing image<\/h2>\n<p>Creating an apache instance is all well and good, but what if we actually want it to serve our own content? To achieve this we need to take the default apache image and add our own files to it.<\/p>\n<p>We are going to create a new directory to contain this project.<\/p>\n<pre><code>mkdir apache-app\ncd apache-app\n<\/code><\/pre>\n<p>Now we can create our <code>Dockerfile<\/code>. 
I like to use <code>vi<\/code> for text editing but you may be more comfortable with <code>nano<\/code>.<\/p>\n<pre><code>nano Dockerfile\n<\/code><\/pre>\n<p>Set the contents of your <code>Dockerfile<\/code> to look like this, but set yourself as the maintainer of course.<\/p>\n<pre><code>FROM httpd:2.4\nMAINTAINER Jacob Tomlinson &lt;jacob.tomlinson@informaticslab.co.uk&gt;\n\nCOPY .\/my-html\/ \/usr\/local\/apache2\/htdocs\/\n\nEXPOSE 80\n\nCMD [\"httpd-foreground\"]\n<\/code><\/pre>\n<p>Going through this line by line, we see that we are inheriting this image <code>FROM<\/code> <code>httpd<\/code>. This means we want docker to download the httpd image as before, but then apply our own changes on top of it. We are also explicitly stating the version of the image (which directly maps to the version of apache) we want rather than just going with the latest one.<\/p>\n<div class=\"application-notice info-notice\">\n<p>The <code>httpd<\/code> image is really just the result of another Dockerfile which you can <a href=\"https:\/\/github.com\/docker-library\/httpd\/blob\/63cd0ad57a12c76ff70d0f501f6c2f1580fa40f5\/2.4\/Dockerfile\" rel=\"external noopener\" target=\"_blank\">find on GitHub<\/a>.<\/p>\n<\/div>\n<p>We are setting ourselves as the <code>MAINTAINER<\/code> so when someone else comes along and wants to use our image they know who to bother with their problems.<\/p>\n<p>Next we are going to <code>COPY<\/code> the contents of a directory called <code>my-html<\/code> into the image and place them in apache\u2019s default content location, which for 2.4 is <code>\/usr\/local\/apache2\/htdocs\/<\/code>.<\/p>\n<p>We want to <code>EXPOSE<\/code> port <code>80<\/code> to allow it to be accessed outside this container; think of it as a software firewall within docker.<\/p>\n<p>Then we specify the command that we want to run when the container is started with <code>CMD<\/code>.<\/p>\n<div class=\"application-notice info-notice\">\n<p>The last two commands are actually already 
defined within the httpd image; I wanted to include them here to show what is happening, but also to show that you can redefine a <code>CMD<\/code>. Docker will run the last <code>CMD<\/code> to be defined, which is useful if you want to override the functionality of an image.<\/p>\n<\/div>\n<p>You may notice that the <code>my-html<\/code> directory doesn\u2019t exist yet. Let\u2019s create it.<\/p>\n<pre><code>mkdir my-html\n<\/code><\/pre>\n<p>Then let\u2019s create an <code>index.html<\/code> file within it.<\/p>\n<pre><code>nano my-html\/index.html\n<\/code><\/pre>\n<p>Set the content to something along the lines of:<\/p>\n<pre><code>&lt;html&gt;\n  &lt;head&gt;\n    &lt;title&gt;Hello world!&lt;\/title&gt;\n  &lt;\/head&gt;\n  &lt;body&gt;\n    &lt;h1&gt;Hello world!&lt;\/h1&gt;\n    &lt;hr&gt;\n    &lt;p&gt;Docker is the best!&lt;\/p&gt;\n  &lt;\/body&gt;\n&lt;\/html&gt;\n<\/code><\/pre>\n<p>Now we can build our image.<\/p>\n<pre><code>docker build -t my-httpd .\n<\/code><\/pre>\n<p>Note that we\u2019ve specified the image name with <code>-t<\/code>. This is the name we will use to run it.<\/p>\n<pre><code>docker run -p 80:80 my-httpd\n<\/code><\/pre>\n<p>Excellent, now when you navigate to your EC2 instance in your browser you should see your lovely new index page.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/s3-eu-west-1.amazonaws.com\/informatics-webimages\/articles\/2015-06-24-lab-school-docker\/custom-apache-test-page.png\" alt=\"Custom apache test page\" title=\"\"><\/p>\n<h2 id=\"exercise-4-docker-compose\">Exercise 4: Docker Compose<\/h2>\n<p>In the Lab we have progressed to the stage of needing to run multiple containers at once which make up a service. If we just used pure docker we would have to run each one individually and remember to specify the correct arguments (like <code>-p<\/code>) each time. 
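<\/p>\n<p>For example, starting two containers by hand might look something like this (a sketch using the image we built earlier; <code>-d<\/code> detaches a container so it runs in the background):<\/p>\n<pre><code># run our custom apache image detached, publishing port 80\ndocker run -d -p 80:80 my-httpd\n# run a database alongside it, giving the container a name\ndocker run -d --name db postgres\n<\/code><\/pre>\n<p>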
We could create a bash file which contains all of these lines but there is a better way, <a href=\"https:\/\/docs.docker.com\/compose\/\" target=\"_blank\" rel=\"noopener\">docker compose<\/a>.<\/p>\n<p>Compose is another tool provided by docker and it allows you to write down your structure of containers and all of their arguments in a <a href=\"http:\/\/yaml.org\/\" target=\"_blank\" rel=\"noopener\">yaml<\/a> file. You can then simply run <code>docker-compose up<\/code> and your containers will be created.<\/p>\n<p>To get started we need to install docker compose.<\/p>\n<pre><code>sudo pip install docker-compose\n<\/code><\/pre>\n<p>Again to test that it is installed correctly we run:<\/p>\n<pre><code>docker-compose version\n<\/code><\/pre>\n<p>which should print the versions of docker-compose, CPython and OpenSSL.<\/p>\n<pre><code>docker-compose version: 1.3.1\nCPython version: 2.7.9\nOpenSSL version: OpenSSL 1.0.1k-fips 8 Jan 2015\n<\/code><\/pre>\n<p>Let\u2019s start simply by creating a compose file for our apache application. To do this we just need to create a new file called <code>docker-compose.yml<\/code>.<\/p>\n<pre><code>nano docker-compose.yml\n<\/code><\/pre>\n<p>In this file we can put:<\/p>\n<pre><code>apache:\n  build: .\n  ports:\n   - \"80:80\"\n<\/code><\/pre>\n<p>This file defines a new service called <code>apache<\/code>; it tells docker that to build this service it will find the Dockerfile in the current directory <code>.<\/code>, and that we again want to map port 80 on the host to port 80 on the container.<\/p>\n<p>Now we can run our application.<\/p>\n<pre><code>docker-compose up\n<\/code><\/pre>\n<p>You should see similar output to when running docker manually but the log outputs will be prefixed with the container name. When docker compose creates a container it names it after the service followed by an underscore and the number one. 
This is because you can scale containers with compose easily and it will create additional containers with incrementing numbers.<\/p>\n<h2 id=\"exercise-5-multiple-containers-in-compose\">Exercise 5: Multiple containers in compose<\/h2>\n<p>Now that we are comfortable creating a container with compose let\u2019s create a second container and link them together. Instead of using our apache container we\u2019re going to create a <a href=\"https:\/\/www.djangoproject.com\/\" target=\"_blank\" rel=\"noopener\">python django<\/a> application running with a <code>postgres<\/code> database.<\/p>\n<p>Let\u2019s make a new project directory and switch to it.<\/p>\n<pre><code>mkdir ~\/django-app\ncd ~\/django-app\n<\/code><\/pre>\n<p>We\u2019ll need a Dockerfile for our django app.<\/p>\n<pre><code>nano Dockerfile\n<\/code><\/pre>\n<p>With the following contents:<\/p>\n<pre><code>FROM python:2.7\n\nENV PYTHONUNBUFFERED 1\n\nRUN mkdir \/code\n\nWORKDIR \/code\n\nCOPY src\/requirements.txt \/code\/\n\nRUN pip install -r requirements.txt\n\nCOPY src\/ \/code\/\n<\/code><\/pre>\n<p>Here we are using <code>python:2.7<\/code> as our base image. This is another official image which will ensure we have python installed and at version <code>2.7<\/code>.<\/p>\n<p>We have our first use of <code>ENV<\/code>; this sets an environment variable within the container. In this case we want our python to be unbuffered, which ensures output appears immediately in the docker console.<\/p>\n<p>Next we <code>RUN<\/code> a command to create a directory for our django app to live in on our container. We then use <code>WORKDIR<\/code> to switch the current working directory within our container.<\/p>\n<p>We <code>COPY<\/code> in a <code>requirements.txt<\/code> file which specifies our python dependencies. We\u2019ll create this file in a minute. 
Then we use <code>pip<\/code>, the python package manager, to install those dependencies.<\/p>\n<p>Finally we want to copy the contents of our <code>src<\/code> directory on the host to the <code>code<\/code> directory on the container.<\/p>\n<p>Now we want to create our <code>requirements.txt<\/code> file for docker to use.<\/p>\n<pre><code>mkdir src\nnano src\/requirements.txt\n<\/code><\/pre>\n<p>Add the following lines:<\/p>\n<pre><code>Django\npsycopg2\n<\/code><\/pre>\n<p>Next we are going to define our <code>docker-compose.yml<\/code> file.<\/p>\n<pre><code>nano docker-compose.yml\n<\/code><\/pre>\n<p>Add the contents:<\/p>\n<pre><code>db:\n  image: postgres\nweb:\n  build: .\n  command: python manage.py runserver 0.0.0.0:8000\n  volumes:\n    - .\/src\/:\/code\/\n  ports:\n    - \"80:8000\"\n  links:\n    - db\n<\/code><\/pre>\n<p>Firstly we are creating a database service called <code>db<\/code>. We just want a plain old <code>postgres<\/code> server so we don\u2019t need a Dockerfile for it; we can reference the image name from the Docker Hub directly. As we will be accessing this service from another container we don\u2019t need to expose any ports to the outside world.<\/p>\n<p>Then we create a <code>web<\/code> service for our django application. We are again going to be building the Dockerfile in the current directory, and we are also specifying the command for the container to run. 
This can be done in the Dockerfile but you can also do it here.<\/p>\n<p>As well as copying the contents of our <code>src<\/code> directory into the container on build we are going to mount it as a volume, which means when the container makes changes to those files they will be persisted on the host even if the container is destroyed and rebuilt. Note that relative host paths in a compose file should begin with <code>.\/<\/code>.<\/p>\n<div class=\"application-notice help-notice\">\n<p>This is actually a bad practice; see \u201cdata only containers\u201d in the further reading section below for more information.<\/p>\n<\/div>\n<p>We are linking our ports again but django runs on port <code>8000<\/code> by default, so this time we are going to link port <code>80<\/code> on the host to <code>8000<\/code> on the container.<\/p>\n<p>Finally we are going to tell our container to link with the db container. This means they will have network access between each other and it also adds a hosts entry (in <code>\/etc\/hosts<\/code>) on the web container so we can access the database at the hostname <code>db<\/code>.<\/p>\n<p>Before we can start our web service, django needs you to initialise its project and also tell it where the database is. We can do this with the docker compose <code>run<\/code> command; this runs the container but executes a different command to the one specified in the yaml file.<\/p>\n<pre><code>docker-compose run web django-admin.py startproject composeexample .\n<\/code><\/pre>\n<p>When running this, docker will discover your images are not built; it will automatically download and build them for you and then move on to execute the command.<\/p>\n<p>When this command finishes it should generate a new directory in your <code>src<\/code> directory called <code>composeexample<\/code> along with a <code>manage.py<\/code> file. Check that they are there.<\/p>\n<pre><code>ls src\n<\/code><\/pre>\n<p>The <code>composeexample<\/code> directory will contain a <code>settings.py<\/code> file. 
This is where you need to put the database configuration.<\/p>\n<p>Open the file for editing; docker will have created these files as root, so you\u2019ll need a <code>sudo<\/code> on this one.<\/p>\n<pre><code>sudo nano src\/composeexample\/settings.py\n<\/code><\/pre>\n<p>Then update the <code>DATABASES = ...<\/code> declaration to look like this:<\/p>\n<pre><code>DATABASES = {\n    'default': {\n        'ENGINE': 'django.db.backends.postgresql_psycopg2',\n        'NAME': 'postgres',\n        'USER': 'postgres',\n        'HOST': 'db',\n        'PORT': 5432,\n    }\n}\n<\/code><\/pre>\n<p>You can see we are just selecting <code>postgres<\/code> as our database engine and then pointing it at host <code>db<\/code> which, thanks to the hosts entry compose created, will point to the <code>db<\/code> container.<\/p>\n<p>Now we can try running the django command <code>syncdb<\/code>, which will take your django models and update the database to reflect them.<\/p>\n<pre><code>docker-compose run web python manage.py syncdb\n<\/code><\/pre>\n<div class=\"application-notice info-notice\">\n<p>If it asks you about a <code>superuser<\/code> just say no; we\u2019re not worried about what this command is doing as it will be lost next time we run docker-compose anyway. We\u2019re just testing the database connection.<\/p>\n<\/div>\n<p>Finally you can start the containers.<\/p>\n<pre><code>docker-compose up\n<\/code><\/pre>\n<p>Refresh your EC2 instance\u2019s page in your browser and you should now see the default django test page.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/s3-eu-west-1.amazonaws.com\/informatics-webimages\/articles\/2015-06-24-lab-school-docker\/django-test-page.png\" alt=\"Django test page\" title=\"\"><\/p>\n<h2 id=\"conclusion\">Conclusion<\/h2>\n<p>Congratulations, you are now working with docker. Hopefully you can see the power provided by adding this little bit of overhead to your applications. 
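<\/p>\n<p>As you explore, a few standard docker commands are worth keeping to hand:<\/p>\n<pre><code>docker ps             # list running containers\ndocker ps -a          # include stopped containers\ndocker images         # list downloaded images\ndocker-compose stop   # stop the services defined in docker-compose.yml\n<\/code><\/pre>\n<p>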
We\u2019ve only scratched the surface here so I suggest you read as much as you can about docker, starting with the links below.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>What is Docker? When people talk about docker they are most likely talking about the docker engine. Docker is in fact an organization which creates tools to assist with the creation and deployment of software&hellip;<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[12],"tags":[],"class_list":["post-393","post","type-post","status-publish","format-standard","hentry","category-he-thong"],"_links":{"self":[{"href":"https:\/\/www.asianux.org.vn\/index.php\/wp-json\/wp\/v2\/posts\/393","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.asianux.org.vn\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.asianux.org.vn\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.asianux.org.vn\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.asianux.org.vn\/index.php\/wp-json\/wp\/v2\/comments?post=393"}],"version-history":[{"count":0,"href":"https:\/\/www.asianux.org.vn\/index.php\/wp-json\/wp\/v2\/posts\/393\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.asianux.org.vn\/index.php\/wp-json\/wp\/v2\/media?parent=393"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.asianux.org.vn\/index.php\/wp-json\/wp\/v2\/categories?post=393"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.asianux.org.vn\/index.php\/wp-json\/wp\/v2\/tags?post=393"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}