{"id":251,"date":"2015-09-07T15:56:27","date_gmt":"2015-09-07T15:56:27","guid":{"rendered":"http:\/\/onlinelab.info\/?p=251"},"modified":"2015-09-07T15:56:27","modified_gmt":"2015-09-07T15:56:27","slug":"openstack-kilo-on-ubuntu-14-04-2-configure-cinder-2","status":"publish","type":"post","link":"https:\/\/www.asianux.org.vn\/index.php\/2015\/09\/07\/openstack-kilo-on-ubuntu-14-04-2-configure-cinder-2\/","title":{"rendered":"OpenStack Kilo on Ubuntu 14.04.2 \u2013 Configure Cinder #2"},"content":{"rendered":"<p><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-8807 aligncenter\" src=\"http:\/\/www.itzgeek.com\/wp-content\/uploads\/2015\/06\/OpenStack-Logo.png\" alt=\"OpenStack Logo\" width=\"200\" height=\"200\" title=\"\"><\/p>\n<p>This is the second part of <a href=\"http:\/\/www.itzgeek.com\/how-tos\/linux\/ubuntu-how-tos\/openstack-kilo-on-ubuntu-14-04-2-configure-cinder-1.html\" target=\"_blank\" rel=\"noopener\">OpenStack Kilo on Ubuntu 14.04.2 \u2013 Configure Cinder<\/a>. In this tutorial, we will install and configure the storage node for the Cinder service. For demo purposes, we will configure this storage node with a block storage device \/dev\/sdb that contains a partition \/dev\/sdb1 occupying the entire disk.<\/p>\n<h2>Prerequisites:<\/h2>\n<p>The following is the network configuration of the storage node; it will have one network interface on the management network.<\/p>\n<table width=\"743\">\n<tbody>\n<tr>\n<th>ROLE<\/th>\n<th>NW CARD 1<\/th>\n<th>NW CARD 2<\/th>\n<th>NW CARD 3<\/th>\n<\/tr>\n<tr>\n<th>STORAGE NODE<\/th>\n<th>192.168.12.24 \/ 24, GW=192.168.12.2<br \/>\n(MANAGEMENT NETWORK)<\/th>\n<th>NA<\/th>\n<th>NA<\/th>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>Set the hostname of the node to block.<\/p>\n<p>Copy the host entries from the controller node to the storage node and add the following to it. 
The final output will look like below.<\/p>\n<pre>192.168.12.21 controller\n192.168.12.22 network\n192.168.12.23 compute\n192.168.12.24 block<\/pre>\n<p>Install the NTP package on the storage node.<\/p>\n<pre># apt-get install ntp<\/pre>\n<p>Edit the configuration file below.<\/p>\n<pre># nano \/etc\/ntp.conf<\/pre>\n<p>Remove the other NTP servers from the file by commenting out the lines that start with the word \u201cserver\u201d. Add the entry below so this node syncs with the controller node.<\/p>\n<pre>server controller<\/pre>\n<p>Restart the NTP service.<\/p>\n<pre># service ntp restart<\/pre>\n<h2>OpenStack packages:<\/h2>\n<p>Install the Ubuntu Cloud archive keyring and repository.<\/p>\n<pre># apt-get install ubuntu-cloud-keyring\n\n# echo \"deb http:\/\/ubuntu-cloud.archive.canonical.com\/ubuntu\" \"trusty-updates\/kilo main\" &gt; \/etc\/apt\/sources.list.d\/cloudarchive-kilo.list<\/pre>\n<p>Update the repositories on your system.<\/p>\n<pre># apt-get update<\/pre>\n<p>Install the lvm2 package, if required.<\/p>\n<pre># apt-get install lvm2<\/pre>\n<p>Create the physical volume \/dev\/sdb1.<\/p>\n<pre># pvcreate \/dev\/sdb1\nPhysical volume \"\/dev\/sdb1\" successfully created<\/pre>\n<p>Create the volume group vg_cinder.<\/p>\n<pre># vgcreate vg_cinder \/dev\/sdb1\nVolume group \"vg_cinder\" successfully created<\/pre>\n<p>Edit the \/etc\/lvm\/lvm.conf file and add a filter that accepts the \/dev\/sdb device and rejects all other devices.<\/p>\n<pre># nano \/etc\/lvm\/lvm.conf<\/pre>\n<p>In the devices section, change<\/p>\n<p>From<\/p>\n<pre>filter = [ \"a\/.*\/\" ]<\/pre>\n<p>To<\/p>\n<pre>filter = [ \"a\/sdb\/\", \"r\/.*\/\" ]<\/pre>\n<h2>Install and configure Cinder components:<\/h2>\n<p>Install the packages on the storage node.<\/p>\n<pre># apt-get install cinder-volume python-mysqldb<\/pre>\n<p>Edit the \/etc\/cinder\/cinder.conf file.<\/p>\n<pre># nano \/etc\/cinder\/cinder.conf<\/pre>\n<p>Modify the settings below, making sure to place each entry in the proper 
sections. Sometimes you may need to add a section if it does not exist, and you may also need to add entries that are missing from the file.<\/p>\n<pre>[DEFAULT]\n...\nrpc_backend = rabbit\nauth_strategy = keystone\nmy_ip = 192.168.12.24\n<strong>## Management IP of Storage Node<\/strong>\nenabled_backends = lvm\nglance_host = controller\nverbose = True\n\n[database]\nconnection = mysql:\/\/cinder:password@controller\/cinder\n<strong>## Replace \"password\" with the password you chose for the cinder database<\/strong>\n\n[oslo_messaging_rabbit]\nrabbit_host = controller\nrabbit_userid = openstack\nrabbit_password = password\n<strong>## Replace \"password\" with the password you chose for the openstack account in RabbitMQ.<\/strong>\n\n[keystone_authtoken]\nauth_uri = http:\/\/controller:5000\nauth_url = http:\/\/controller:35357\nauth_plugin = password\nproject_domain_id = default\nuser_domain_id = default\nproject_name = service\nusername = cinder\npassword = password\n<strong>## Replace \"password\" with the password you chose for the cinder user in the identity service\n## Comment out or remove any other options in the [keystone_authtoken] section<\/strong>\n\n[lvm]\nvolume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver\nvolume_group = vg_cinder\niscsi_protocol = iscsi\niscsi_helper = tgtadm\n\n<strong>## Replace vg_cinder with your volume group.<\/strong>\n\n[oslo_concurrency]\nlock_path = \/var\/lock\/cinder\n\n<strong>## Comment out the lock_path entry in the [DEFAULT] section.<\/strong><\/pre>\n<p>Restart the block storage services.<\/p>\n<pre># service tgt restart\n# service cinder-volume restart<\/pre>\n<p>Remove the SQLite database file.<\/p>\n<pre># rm -f \/var\/lib\/cinder\/cinder.sqlite<\/pre>\n<h2>Troubleshooting:<\/h2>\n<p>Go through the log for any errors.<\/p>\n<pre># cat \/var\/log\/cinder\/cinder-volume.log<\/pre>\n<p>For errors like the ones below.<\/p>\n<pre><strong>\"Unknown column 'volumes.instance_uuid' in 'field list'\"<\/strong>\n\n<strong>\"Unknown 
column 'volumes.attach_time' in 'field list'\"<\/strong>\n\n<strong>\"Unknown column 'volumes.mountpoint' in 'field list'\"<\/strong>\n\n<strong>\"Unknown column 'volumes.attached_host' in 'field list'\"<\/strong><\/pre>\n<p>Visit: <a href=\"http:\/\/www.itzgeek.com\/QA\/questions\/question\/unknown-column-volumes-mountpoint-in-field-list\/\" target=\"_blank\" rel=\"noopener\">Unknown Column<\/a><\/p>\n<p>For errors like the one below.<\/p>\n<pre><strong>AMQP server on controller:5672 is unreachable: Too many heartbeats missed. Trying again in 1 seconds.<\/strong><\/pre>\n<p>Visit: <a href=\"http:\/\/www.itzgeek.com\/QA\/questions\/question\/amqp-server-on-controller5672-is-unreachable-too-many-heartbeats-missed-trying-again-in-1-seconds\/\" target=\"_blank\" rel=\"noopener\">Too many heartbeats missed.<\/a><\/p>\n<h2>Verification:<\/h2>\n<p>Run the following command to configure the Block Storage client to use API version 2.0.<\/p>\n<pre># echo \"export OS_VOLUME_API_VERSION=2\" | tee -a admin-openrc.sh demo-openrc.sh<\/pre>\n<p>Load the credentials.<\/p>\n<pre># source admin-openrc.sh<\/pre>\n<p>List the service components.<\/p>\n<pre># cinder service-list\n\n+------------------+------------+------+---------+-------+----------------------------+-----------------+\n|      Binary      |    Host    | Zone |  Status | State |         Updated_at         | Disabled Reason |\n+------------------+------------+------+---------+-------+----------------------------+-----------------+\n| cinder-scheduler | controller | nova | enabled |   up  | 2015-07-07T20:11:21.000000 |       None      |\n|  cinder-volume   | block@lvm  | nova | enabled |   up  | 2015-07-07T20:11:18.000000 |       None      |\n+------------------+------------+------+---------+-------+----------------------------+-----------------+<\/pre>\n<h2>Attach a volume to an instance:<\/h2>\n<p>Create a 5 GB virtual disk \u201cdisk01\u201d by running the following command on the controller node.<\/p>\n<pre># cinder create --name 
disk01 5\n+---------------------------------------+--------------------------------------+\n|                Property               |                Value                 |\n+---------------------------------------+--------------------------------------+\n|              attachments              |                  []                  |\n|           availability_zone           |                 nova                 |\n|                bootable               |                false                 |\n|          consistencygroup_id          |                 None                 |\n|               created_at              |      2015-07-07T20:18:34.000000      |\n|              description              |                 None                 |\n|               encrypted               |                False                 |\n|                   id                  | dbd9afb1-48fd-46d1-8f66-1ef5195b6a94 |\n|                metadata               |                  {}                  |\n|              multiattach              |                False                 |\n|                  name                 |                disk01                |\n|         os-vol-host-attr:host         |                 None                 |\n|     os-vol-mig-status-attr:migstat    |                 None                 |\n|     os-vol-mig-status-attr:name_id    |                 None                 |\n|      os-vol-tenant-attr:tenant_id     |   9b05e6bffdb94c8081d665561d05e31e   |\n|   os-volume-replication:driver_data   |                 None                 |\n| os-volume-replication:extended_status |                 None                 |\n|           replication_status          |               disabled               |\n|                  size                 |                  5                   |\n|              snapshot_id              |                 None                 |\n|              source_volid             |                 None                 |\n|                 
status                |               creating               |\n|                user_id                |   127a9a6b822a4e3eba69fa54128873cd   |\n|              volume_type              |                 None                 |\n+---------------------------------------+--------------------------------------+<\/pre>\n<p>List the available volumes; the status should be \u201cavailable\u201d.<\/p>\n<pre># cinder list\n+--------------------------------------+-----------+--------+------+-------------+----------+-------------+\n|                  ID                  |   Status  |  Name  | Size | Volume Type | Bootable | Attached to |\n+--------------------------------------+-----------+--------+------+-------------+----------+-------------+\n| dbd9afb1-48fd-46d1-8f66-1ef5195b6a94 | available | disk01 |  5   |     None    |  false   |             |\n+--------------------------------------+-----------+--------+------+-------------+----------+-------------+<\/pre>\n<p>Attach the disk01 volume to our running instance \u201cMY-Fedora\u201d.<\/p>\n<pre># nova volume-attach MY-Fedora dbd9afb1-48fd-46d1-8f66-1ef5195b6a94\n+----------+--------------------------------------+\n| Property | Value                                |\n+----------+--------------------------------------+\n| device   | \/dev\/vdb                             |\n| id       | dbd9afb1-48fd-46d1-8f66-1ef5195b6a94 |\n| serverId | 7432030a-3cbe-49c6-956a-3e725e22196d |\n| volumeId | dbd9afb1-48fd-46d1-8f66-1ef5195b6a94 |\n+----------+--------------------------------------+<\/pre>\n<p>List the volumes again; the status is now in-use, and the volume is attached to MY-Fedora\u2019s instance ID.<\/p>\n<pre># cinder list\n+--------------------------------------+--------+--------+------+-------------+----------+--------------------------------------+\n|                  ID                  | Status |  Name  | Size | Volume Type | Bootable |             Attached to              
|\n+--------------------------------------+--------+--------+------+-------------+----------+--------------------------------------+\n| dbd9afb1-48fd-46d1-8f66-1ef5195b6a94 | in-use | disk01 |  5   |     None    |  false   | 7432030a-3cbe-49c6-956a-3e725e22196d |\n+--------------------------------------+--------+--------+------+-------------+----------+--------------------------------------+<\/pre>\n<p>Log in to the MY-Fedora instance using SSH and run the fdisk -l command to list the disks.<\/p>\n<pre># ssh -i mykey fedora@192.168.0.201\n\nLast login: Mon Jul  6 17:59:46 2015 from 192.168.0.103\n[fedora@my-fedora ~]$ sudo su -\n[root@my-fedora ~]# fdisk -l\nDisk \/dev\/vda: 20 GiB, 21474836480 bytes, 41943040 sectors\nUnits: sectors of 1 * 512 = 512 bytes\nSector size (logical\/physical): 512 bytes \/ 512 bytes\nI\/O size (minimum\/optimal): 512 bytes \/ 512 bytes\nDisklabel type: dos\nDisk identifier: 0xf1cc8d9d\n\nDevice     Boot Start      End  Sectors Size Id Type\n\/dev\/vda1  *     2048 41943039 41940992  20G 83 Linux\n\n<strong>Disk \/dev\/vdb: 5 GiB<\/strong>, 5368709120 bytes, 10485760 sectors\nUnits: sectors of 1 * 512 = 512 bytes\nSector size (logical\/physical): 512 bytes \/ 512 bytes\nI\/O size (minimum\/optimal): 512 bytes \/ 512 bytes<\/pre>\n<p>From the output above, you can see the new 5 GB disk \/dev\/vdb. This is the volume we attached earlier, and it is now visible in the guest OS.<\/p>\n<p>That\u2019s all! You have successfully configured the block storage service (Cinder) on Ubuntu 14.04.2.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>This is the second part of OpenStack Kilo on Ubuntu 14.04.2 \u2013 Configure Cinder. In this tutorial, we will install and configure the storage node for the Cinder service. 
For demo purposes, we will configure this&hellip;<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[11],"tags":[],"class_list":["post-251","post","type-post","status-publish","format-standard","hentry","category-virtualization"],"_links":{"self":[{"href":"https:\/\/www.asianux.org.vn\/index.php\/wp-json\/wp\/v2\/posts\/251","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.asianux.org.vn\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.asianux.org.vn\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.asianux.org.vn\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.asianux.org.vn\/index.php\/wp-json\/wp\/v2\/comments?post=251"}],"version-history":[{"count":0,"href":"https:\/\/www.asianux.org.vn\/index.php\/wp-json\/wp\/v2\/posts\/251\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.asianux.org.vn\/index.php\/wp-json\/wp\/v2\/media?parent=251"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.asianux.org.vn\/index.php\/wp-json\/wp\/v2\/categories?post=251"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.asianux.org.vn\/index.php\/wp-json\/wp\/v2\/tags?post=251"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}