
This is the second part of OpenStack Kilo on Ubuntu 14.04.2 – Configure Cinder. In this tutorial we will install and configure the storage node for the Cinder service. For demo purposes, we will configure this storage node with a block storage device /dev/sdb that contains a partition /dev/sdb1 occupying the entire disk.

Prerequisites:

The following is the network configuration of the storage node; it will have one network interface, on the management network.

ROLE            NW CARD 1                            NW CARD 2   NW CARD 3
STORAGE NODE    192.168.12.24/24, GW=192.168.12.2    NA          NA
                (MANAGEMENT NETWORK)

Set the hostname of the node to block.
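On Ubuntu 14.04 you can set it persistently via /etc/hostname and apply it to the running session, for example:

# echo "block" > /etc/hostname
# hostname block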

Copy the host entries from the controller node to the storage node and add an entry for the block node. The final output will look like below.

192.168.12.21 controller
192.168.12.22 network
192.168.12.23 compute
192.168.12.24 block
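Once the entries are in place, it is worth a quick check that the names resolve from the storage node, for example:

# ping -c 3 controller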

Install the NTP package on the storage node.

# apt-get install ntp

Edit the below configuration file.

# nano /etc/ntp.conf

Remove the other NTP servers from the file; just hash out (comment out) the lines that start with the word “server”. Add the entry below so this node syncs with the controller node.

server controller
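If you prefer to make both edits non-interactively, a sed one-liner like the below should work against the stock Ubuntu ntp.conf (it comments out every line beginning with “server” and appends the controller entry):

# sed -i 's/^server/#server/' /etc/ntp.conf
# echo "server controller" >> /etc/ntp.conf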

Restart the NTP service.

# service ntp restart
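To confirm the node is syncing against the controller, you can query the NTP peer list; the controller should show up as a peer:

# ntpq -p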

OpenStack packages:

Install the Ubuntu Cloud archive keyring and repository.

# apt-get install ubuntu-cloud-keyring

# echo "deb http://ubuntu-cloud.archive.canonical.com/ubuntu" "trusty-updates/kilo main" > /etc/apt/sources.list.d/cloudarchive-kilo.list

Update the repositories on your system.

# apt-get update

Install lvm2 packages, if required.

# apt-get install lvm2
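If /dev/sdb is still a blank disk, you can create the single partition described earlier first, for example with parted (this assumes the disk holds no data you care about):

# parted -s /dev/sdb mklabel msdos
# parted -s /dev/sdb mkpart primary 1MiB 100%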

Create the physical volume /dev/sdb1.

# pvcreate /dev/sdb1
Physical volume "/dev/sdb1" successfully created

Create the volume group vg_cinder.

# vgcreate vg_cinder /dev/sdb1
Volume group "vg_cinder" successfully created
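You can verify both with the LVM reporting tools, for example:

# pvdisplay /dev/sdb1
# vgdisplay vg_cinder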

Edit the /etc/lvm/lvm.conf file and add a filter that accepts the /dev/sdb device and rejects all other devices.

# nano /etc/lvm/lvm.conf

In the devices section, change

From

filter = [ "a/.*/ " ]

To

filter = [ "a/sdb/", "r/.*/" ]

Install and configure Cinder components:

Install the packages on the storage node.

# apt-get install cinder-volume python-mysqldb

Edit the /etc/cinder/cinder.conf file.

# nano /etc/cinder/cinder.conf

Modify the settings below, making sure to place each entry in the proper section. You may need to add a section if it does not exist, and some of the entries will be missing from the file and must be added.

[DEFAULT]
...
rpc_backend = rabbit
auth_strategy = keystone
my_ip = 192.168.12.24
## Management IP of Storage Node
enabled_backends = lvm
glance_host = controller
verbose = True

[database]
connection = mysql://cinder:password@controller/cinder
## Replace "password" with the password you chose for cinder database

[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = password
## Replace "password" with the password you chose for the openstack account in RabbitMQ.
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = cinder
password = password
## Replace "password" with the password you chose for cinder user in the identity service
## Comment out or remove any other options in the [keystone_authtoken] section

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = vg_cinder
iscsi_protocol = iscsi
iscsi_helper = tgtadm

## Replace vg_cinder with your volume group.

[oslo_concurrency]
lock_path = /var/lock/cinder

## Comment out any lock_path option set in the [DEFAULT] section.
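If the lock directory does not exist yet, you may need to create it and hand it to the cinder service user (assuming the packages created the usual cinder system user):

# mkdir -p /var/lock/cinder
# chown cinder:cinder /var/lock/cinder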

Restart the block storage services.

# service tgt restart
# service cinder-volume restart

Remove the SQLite database file.

# rm -f /var/lib/cinder/cinder.sqlite

Troubleshooting:

Go through the log for any errors.

# cat /var/log/cinder/cinder-volume.log
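On a busy node the log can be long; filtering for errors narrows it down, for example:

# grep -i error /var/log/cinder/cinder-volume.log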

For errors like the below:

"Unknown column 'volumes.instance_uuid' in 'field list'"

"Unknown column 'volumes.attach_time' in 'field list'"

"Unknown column 'volumes.mountpoint' in 'field list'"

"Unknown column 'volumes.attached_host' in 'field list'"

Visit: Unknown Column

For errors like the below:

AMQP server on controller:5672 is unreachable: Too many heartbeats missed. Trying again in 1 seconds.

Visit: Too many heartbeats missed.

Verification:

Run the following command to configure the Block Storage client to use API version 2.0.

# echo "export OS_VOLUME_API_VERSION=2" | tee -a admin-openrc.sh demo-openrc.sh

Load the credentials.

# source admin-openrc.sh

List the service components.

# cinder service-list

+------------------+------------+------+---------+-------+----------------------------+-----------------+
|      Binary      |    Host    | Zone |  Status | State |         Updated_at         | Disabled Reason |
+------------------+------------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler | controller | nova | enabled |   up  | 2015-07-07T20:11:21.000000 |       None      |
|  cinder-volume   | block@lvm  | nova | enabled |   up  | 2015-07-07T20:11:18.000000 |       None      |
+------------------+------------+------+---------+-------+----------------------------+-----------------+

Attach a volume to an instance:

Create a 5 GB virtual disk “disk01”; run the following command on the controller node.

# cinder create --name disk01 5
+---------------------------------------+--------------------------------------+
|                Property               |                Value                 |
+---------------------------------------+--------------------------------------+
|              attachments              |                  []                  |
|           availability_zone           |                 nova                 |
|                bootable               |                false                 |
|          consistencygroup_id          |                 None                 |
|               created_at              |      2015-07-07T20:18:34.000000      |
|              description              |                 None                 |
|               encrypted               |                False                 |
|                   id                  | dbd9afb1-48fd-46d1-8f66-1ef5195b6a94 |
|                metadata               |                  {}                  |
|              multiattach              |                False                 |
|                  name                 |                disk01                |
|         os-vol-host-attr:host         |                 None                 |
|     os-vol-mig-status-attr:migstat    |                 None                 |
|     os-vol-mig-status-attr:name_id    |                 None                 |
|      os-vol-tenant-attr:tenant_id     |   9b05e6bffdb94c8081d665561d05e31e   |
|   os-volume-replication:driver_data   |                 None                 |
| os-volume-replication:extended_status |                 None                 |
|           replication_status          |               disabled               |
|                  size                 |                  5                   |
|              snapshot_id              |                 None                 |
|              source_volid             |                 None                 |
|                 status                |               creating               |
|                user_id                |   127a9a6b822a4e3eba69fa54128873cd   |
|              volume_type              |                 None                 |
+---------------------------------------+--------------------------------------+

List the available volumes; the status should be “available”.

# cinder list
+--------------------------------------+-----------+--------+------+-------------+----------+-------------+
|                  ID                  |   Status  |  Name  | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------+------+-------------+----------+-------------+
| dbd9afb1-48fd-46d1-8f66-1ef5195b6a94 | available | disk01 |  5   |     None    |  false   |             |
+--------------------------------------+-----------+--------+------+-------------+----------+-------------+

Attach the disk01 volume to our running instance “MY-Fedora”.

# nova volume-attach MY-Fedora dbd9afb1-48fd-46d1-8f66-1ef5195b6a94
+----------+--------------------------------------+
| Property | Value                                |
+----------+--------------------------------------+
| device   | /dev/vdb                             |
| id       | dbd9afb1-48fd-46d1-8f66-1ef5195b6a94 |
| serverId | 7432030a-3cbe-49c6-956a-3e725e22196d |
| volumeId | dbd9afb1-48fd-46d1-8f66-1ef5195b6a94 |
+----------+--------------------------------------+

List the volumes; you can see the status as “in-use”, and the volume is attached to MY-Fedora’s instance ID.

# cinder list
+--------------------------------------+--------+--------+------+-------------+----------+--------------------------------------+
|                  ID                  | Status |  Name  | Size | Volume Type | Bootable |             Attached to              |
+--------------------------------------+--------+--------+------+-------------+----------+--------------------------------------+
| dbd9afb1-48fd-46d1-8f66-1ef5195b6a94 | in-use | disk01 |  5   |     None    |  false   | 7432030a-3cbe-49c6-956a-3e725e22196d |
+--------------------------------------+--------+--------+------+-------------+----------+--------------------------------------+

Log in to the MY-Fedora instance using SSH and run the fdisk -l command to list the disks.

# ssh -i mykey fedora@<instance-floating-IP>

Last login: Mon Jul  6 17:59:46 2015 from 192.168.0.103
[fedora@my-fedora ~]$ sudo su -
[root@my-fedora ~]# fdisk -l
Disk /dev/vda: 20 GiB, 21474836480 bytes, 41943040 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xf1cc8d9d

Device     Boot Start      End  Sectors Size Id Type
/dev/vda1  *     2048 41943039 41940992  20G 83 Linux

Disk /dev/vdb: 5 GiB, 5368709120 bytes, 10485760 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

From the above you can see the new 5 GB disk /dev/vdb. This is the volume we attached earlier, now visible in the guest OS.
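To actually use the volume inside the guest, format and mount it. The example below puts an ext4 filesystem directly on the whole device and mounts it at /mnt/disk01; both choices are arbitrary for this demo:

[root@my-fedora ~]# mkfs.ext4 /dev/vdb
[root@my-fedora ~]# mkdir /mnt/disk01
[root@my-fedora ~]# mount /dev/vdb /mnt/disk01
[root@my-fedora ~]# df -h /mnt/disk01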

That’s all! You have successfully configured the block storage service (Cinder) on Ubuntu 14.04.2.