
Installing ProFTPD Server on RHEL/CentOS 7


ProFTPD is an open source FTP server and one of the most widely used, secure and reliable file transfer daemons in Unix environments, thanks to its simple configuration files, speed and easy setup.

Install Proftpd In RHEL/CentOS 7

Requirements

  1. CentOS 7 Minimal Installation
  2. Red Hat Enterprise Linux 7 Installation
  3. Configure Static IP Address on System

This tutorial will guide you through installing and using ProFTPD Server on CentOS/RHEL 7 Linux distributions for simple file transfers between your local system accounts and remote systems.

Step 1: Install Proftpd Server

1. The official RHEL/CentOS 7 repositories don’t provide a binary package for ProFTPD Server, so you need to add the extra package repository provided by EPEL 7 to your system, using the following command.

# rpm -Uvh http://ftp.astral.ro/mirrors/fedora/pub/epel/beta/7/x86_64/epel-release-7-0.2.noarch.rpm
Install EPEL in RHEL/CentOS 7
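
Alternatively, on CentOS 7 the epel-release package is usually also available from the default Extras repository, so the following command (assuming the Extras repository is enabled, which it is by default) is a simpler way to add EPEL:

# yum install epel-release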

2. Before you start installing ProFTPD Server, edit your machine’s hosts file so that it reflects your system FQDN, then test the configuration to make sure the domain naming is correct.

# nano /etc/hosts

Add your system FQDN to the 127.0.0.1 localhost line, as in the following example.

127.0.0.1 server.centos.lan localhost localhost.localdomain

Then edit the /etc/hostname file to match the same FQDN entry, as in the screenshots below.

# nano /etc/hostname
Open Hostname File

Add Hostname in Hosts
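
On RHEL/CentOS 7 you can also set the hostname with systemd’s hostnamectl instead of editing /etc/hostname by hand; a minimal sketch, reusing the FQDN from the example above:

# hostnamectl set-hostname server.centos.lan   ## writes /etc/hostname for you
# hostnamectl status                           ## confirm the static hostname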

3. After you have edited the hosts file, test your local name resolution using the following commands.

# hostname
# hostname -f     ## For FQDN
# hostname -s     ## For short name
Verify System Hostname

4. Now it’s time to install ProFTPD Server on your system, along with some FTP utilities that we will use later, by issuing the following command.

# yum install proftpd proftpd-utils
Install Proftpd Server

5. After the server is installed, start and manage the ProFTPD daemon using the following commands.

# systemctl start proftpd
# systemctl status proftpd
# systemctl stop proftpd
# systemctl restart proftpd
Start Proftpd Server

Step 2: Add Firewall Rules and Access Files

6. Your ProFTPD Server now runs and listens for connections, but it is not reachable from outside due to the firewall policy. To allow outside connections, add a rule that opens port 21, using the firewall-cmd utility.

# firewall-cmd --add-service=ftp                 ## Runtime (on-the-fly) rule
# firewall-cmd --add-service=ftp --permanent     ## Permanent rule
# systemctl restart firewalld.service
Open Proftpd Port in Firewall
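
To confirm that the rule is active, list the services allowed in the default zone; ftp should appear in the output:

# firewall-cmd --list-services
# firewall-cmd --list-services --permanent     ## verify the permanent rule as well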

7. The simplest way to access your FTP server from remote machines is through a browser, pointing it at your server IP address or domain name using the ftp:// protocol in the URL.

ftp://domain.tld

OR

ftp://ipaddress

8. The default ProFTPD Server configuration uses valid local system account credentials for login, and gives access to the account’s files under its $HOME path, as defined in the /etc/passwd file.
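
You can also test the login from a Linux client with the command-line ftp client (a quick sketch; the ftp package must be installed, and 192.168.0.10 is a hypothetical server address):

# ftp 192.168.0.10      ## log in with a valid local system account when prompted
ftp> ls                 ## lists the files under that account's $HOME
ftp> bye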

Access Proftpd from Browser

Index of Proftpd Files

9. To make ProFTPD Server start automatically after a system reboot, i.e. enable it at boot, issue the following command.

# systemctl enable proftpd
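
You can confirm that the unit is now enabled at boot:

# systemctl is-enabled proftpd
enabled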

That’s it! Now you can access and manage your account files and folders over the FTP protocol using either a browser or more advanced programs such as FileZilla, which is available on almost any platform, or WinSCP, an excellent file transfer program for Windows-based systems.

In the next tutorials in this series on ProFTPD Server on RHEL/CentOS 7, I shall discuss more advanced features like enabling the anonymous account, using TLS-encrypted file transfers and adding virtual users.

OpenStack Kilo on Ubuntu 14.04.2 – Configure Cinder #1



Cinder, the OpenStack Block Storage service, adds persistent storage to an instance; it also provides the infrastructure for managing volumes and interacts with the Compute service to attach volumes to instances. How storage is provisioned and consumed is determined by the block storage drivers; a variety of drivers are available: NAS/SAN, NFS, iSCSI, Ceph, and more.

The Block Storage API and scheduler services typically run on the controller nodes. Depending upon the drivers used, the volume service can run on controllers, compute nodes, or standalone storage nodes.

This guide helps you to install and configure cinder on the controller node. This service requires at least one additional storage node that provides volumes to instances.

Install and configure controller node:

Log in to the MySQL server as the root user.

# mysql -u root -p

Create the cinder database.

CREATE DATABASE cinder;

Grant proper permissions on the cinder database.

GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'password';

GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'password';

Replace “password” with a suitable password. Exit from MySQL.
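
Before exiting, you can optionally double-check the grants from the MySQL prompt (a quick sanity check, not required by the guide):

SHOW GRANTS FOR 'cinder'@'localhost';
exit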

Load your admin credential from the environment script.

# source admin-openrc.sh

Create the cinder user for creating service credentials.

# openstack user create --password-prompt cinder
User Password:
Repeat User Password:
+----------+----------------------------------+
| Field    | Value                            |
+----------+----------------------------------+
| email    | None                             |
| enabled  | True                             |
| id       | f02a9693b5dd4f328e8f1a292f372782 |
| name     | cinder                           |
| username | cinder                           |
+----------+----------------------------------+

Add the admin role to the cinder user.

# openstack role add --project service --user cinder admin
+-------+----------------------------------+
| Field | Value                            |
+-------+----------------------------------+
| id    | 33af4f957aa34cc79451c23bf014af6f |
| name  | admin                            |
+-------+----------------------------------+

Create the cinder service entities.

# openstack service create --name cinder --description "OpenStack Block Storage" volume
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Block Storage          |
| enabled     | True                             |
| id          | cc16bd02429842d694ccd4a425513cfc |
| name        | cinder                           |
| type        | volume                           |
+-------------+----------------------------------+
# openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Block Storage          |
| enabled     | True                             |
| id          | 926e5dcb46654d228987d61978903b27 |
| name        | cinderv2                         |
| type        | volumev2                         |
+-------------+----------------------------------+

Create the Block Storage service API endpoints.

# openstack endpoint create --publicurl http://controller:8776/v2/%\(tenant_id\)s --internalurl http://controller:8776/v2/%\(tenant_id\)s --adminurl http://controller:8776/v2/%\(tenant_id\)s --region RegionOne volume

+--------------+-----------------------------------------+
| Field        | Value                                   |
+--------------+-----------------------------------------+
| adminurl     | http://controller:8776/v2/%(tenant_id)s |
| id           | 4b38b10d227a48cfaf1d6356d23a6481        |
| internalurl  | http://controller:8776/v2/%(tenant_id)s |
| publicurl    | http://controller:8776/v2/%(tenant_id)s |
| region       | RegionOne                               |
| service_id   | cc16bd02429842d694ccd4a425513cfc        |
| service_name | cinder                                  |
| service_type | volume                                  |
+--------------+-----------------------------------------+
# openstack endpoint create --publicurl http://controller:8776/v2/%\(tenant_id\)s --internalurl http://controller:8776/v2/%\(tenant_id\)s --adminurl http://controller:8776/v2/%\(tenant_id\)s --region RegionOne volumev2

+--------------+-----------------------------------------+
| Field        | Value                                   |
+--------------+-----------------------------------------+
| adminurl     | http://controller:8776/v2/%(tenant_id)s |
| id           | dcf45538165b40f2a6736bcf5276b319        |
| internalurl  | http://controller:8776/v2/%(tenant_id)s |
| publicurl    | http://controller:8776/v2/%(tenant_id)s |
| region       | RegionOne                               |
| service_id   | 926e5dcb46654d228987d61978903b27        |
| service_name | cinderv2                                |
| service_type | volumev2                                |
+--------------+-----------------------------------------+

Install and configure Cinder (Block Storage) controller components:

Install the following packages on the controller node.

# apt-get install cinder-api cinder-scheduler python-cinderclient

Edit the /etc/cinder/cinder.conf file.

# nano /etc/cinder/cinder.conf

Modify the settings below and make sure to place the entries in the proper sections. Sometimes you may need to add a section if it does not exist, and some of the entries may be missing from the file and need to be added.

[database]
connection = mysql://cinder:password@controller/cinder

## Replace "password" with the password you chose for cinder database

[DEFAULT]
...
rpc_backend = rabbit
auth_strategy = keystone
verbose = True
my_ip = 192.168.12.21

## Management IP of Controller Node

[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = password

## Replace "password" with the password you chose for the openstack account in RabbitMQ.

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = cinder
password = password

## Replace "password" with the password you chose for cinder user in the identity service
## Comment out or remove any other options in the [keystone_authtoken] section

[oslo_concurrency]
lock_path = /var/lock/cinder

## Comment out the lock_path entry in the [DEFAULT] section.

Populate the cinder database.

# su -s /bin/sh -c "cinder-manage db sync" cinder

Restart the services.

# service cinder-scheduler restart
# service cinder-api restart

Remove the SQLite database file.

# rm -f /var/lib/cinder/cinder.sqlite

List the services; you can ignore any warnings.

# cinder-manage service list

Binary           Host                                 Zone             Status     State Updated At
cinder-scheduler controller                           nova             enabled    :-)   2015-07-06 18:35:55

That’s all! Next is to configure a Storage Node.

OpenStack Kilo on Ubuntu 14.04.2 – Configure Horizon



In the last tutorial, we created an instance using the CLI; the same can be done through a web interface called Horizon, which lets us manage various OpenStack resources and services. This guide helps you to configure Horizon on Ubuntu 14.04.

Horizon uses the OpenStack APIs to interact with the cloud controller, and the dashboard can also be customized for branding. Here we will use Apache as the web server to serve the Horizon dashboard.

System requirements:

Before proceeding, make sure your system meets the requirements below.

An OpenStack Compute installation, with the Identity service enabled for user and project management.

The dashboard should be run as an Identity service user with sudo privileges.

A Python version supported by Django (Python 2.7).

Install and configure the dashboard on a node that can contact the Identity service.

Install the Horizon components:

Install the following OpenStack dashboard package on the controller node.

# apt-get install openstack-dashboard

Configure the Horizon:

Edit the /etc/openstack-dashboard/local_settings.py file.

# nano /etc/openstack-dashboard/local_settings.py

Modify the settings below.

## Enter controller node details.

OPENSTACK_HOST = "controller"

## Allow hosts to access dashboard

ALLOWED_HOSTS = '*'

## Comment out any other storage configuration.

CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION': '127.0.0.1:11211',
}
}

## Default role that will be assigned when a user is created via the dashboard.

OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"

## Replace TIME_ZONE with your Time zone.

TIME_ZONE = "TIME_ZONE"

Restart the apache service.

# service apache2 restart

Access the dashboard using the following URL.

http://controller/horizon or http://ip-add-ress/horizon

Enter the admin credentials that we created during the Keystone configuration.

OpenStack – Configure Horizon

Once you have logged in, you will be taken to the Horizon summary page.

OpenStack – Configure Horizon (Usage Summary)

Click the Instances section on the left side to list the instances.

OpenStack – Configure Horizon (Instances)

Click on the Instance name to get further information.

OpenStack – Configure Horizon (Instance Overview)

You can click on the console menu to get the remote console of the selected instance.

OpenStack – Configure Horizon (Instance Console)

You may need to add a hosts entry for the controller on the client desktop from which you are accessing the dashboard, as shown below.
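
A minimal sketch of such an entry in the client’s hosts file, assuming the controller’s management IP is 192.168.12.21 as used earlier in this series:

192.168.12.21    controller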

That’s all! You have successfully configured Horizon. Next is to configure the Block Storage service (Cinder).

OpenStack Kilo on Ubuntu 14.04.2 – Launch an instance



This guide shows you how to launch an instance from the Fedora 22 image that we added in OpenStack Kilo on Ubuntu 14.04.2 – Glance. Here we will use the command-line interface on the controller node to create an instance; this tutorial launches the instance using OpenStack Networking (neutron).

Load the demo tenant credentials on the controller node.

# source demo-openrc.sh

Almost all cloud images use public key authentication instead of user/password authentication. Before launching an instance, we must create a public/private key pair.

Generate and add a key pair.

# nova keypair-add my-key

Copy the output of the above command and save it to a file; this private key will be used with the ssh command to log in to the instance.
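
If you have not created the key pair yet, you can redirect the output straight into a file and restrict its permissions, since SSH refuses private keys that are readable by others (a sketch; my-key.pem is just an example file name):

# nova keypair-add my-key > my-key.pem
# chmod 600 my-key.pem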

List the available key pairs.

# nova keypair-list
+--------+-------------------------------------------------+
| Name   | Fingerprint                                     |
+--------+-------------------------------------------------+
| my-key | 0a:b2:30:cb:54:fc:c4:69:29:00:19:ef:38:8d:2e:2d |
+--------+-------------------------------------------------+

Launch an instance:

To launch an instance, we need to know the available flavors, images, networks, and security groups.

List the available flavors; a flavor is simply a predefined allocation of CPU, memory and disk.

# nova flavor-list
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 1  | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
| 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
| 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
| 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
| 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+

List available images.

# nova image-list
+--------------------------------------+--------------------------------------+--------+--------+
| ID                                   | Name                                 | Status | Server |
+--------------------------------------+--------------------------------------+--------+--------+
| a1533d87-d6fa-4d9d-bf85-6b2ab8400712 | Fedora-Cloud-Base-22-20150521.x86_64 | ACTIVE |        |
+--------------------------------------+--------------------------------------+--------+--------+

List the available networks. Our instance will use int-net (the internal network); when creating the instance we must specify the network by its ID instead of its name.

# neutron net-list
+--------------------------------------+---------+-------------------------------------------------------+
| id                                   | name    | subnets                                               |
+--------------------------------------+---------+-------------------------------------------------------+
| 187a7b6c-7d14-4d8f-8673-57fa9bab1bba | int-net | 7f75b54f-7b87-42e4-a7e1-f452c8adcb3a 192.168.100.0/24 |
| db407537-7951-411c-ab8e-ef59d204f110 | ext-net | a517e200-38eb-4b4b-b82f-d486e07756ca 192.168.0.0/24   |
+--------------------------------------+---------+-------------------------------------------------------+

List available security groups.

# nova secgroup-list
+--------------------------------------+---------+------------------------+
| Id                                   | Name    | Description            |
+--------------------------------------+---------+------------------------+
| c88f4002-611e-41dd-af7c-2f7c348dea27 | default | Default security group |
+--------------------------------------+---------+------------------------+

The default security group implements a firewall that blocks remote access to instances; to reach the instance remotely, we need to add rules that allow it.

The following commands add rules to the default security group to allow ping and SSH access.

# nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
| icmp        | -1        | -1      | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+

# nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
| tcp         | 22        | 22      | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+

Launch the first instance using the command below. First, load the internal network ID into a shell variable.

# INT_NET_ID=`neutron net-list | grep int-net | awk '{ print $2 }'`

The boot command below uses $INT_NET_ID; alternatively, replace it with the ID of the internal network directly.
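
You can confirm that the variable holds the expected UUID before booting:

# echo $INT_NET_ID
187a7b6c-7d14-4d8f-8673-57fa9bab1bba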

# nova boot --flavor m1.small --image Fedora-Cloud-Base-22-20150521.x86_64 --nic net-id=$INT_NET_ID --security-group default --key-name my-key MY-Fedora
+--------------------------------------+-----------------------------------------------------------------------------+
| Property                             | Value                                                                       |
+--------------------------------------+-----------------------------------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                                                      |
| OS-EXT-AZ:availability_zone          | nova                                                                        |
| OS-EXT-SRV-ATTR:host                 | -                                                                           |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | -                                                                           |
| OS-EXT-SRV-ATTR:instance_name        | instance-0000000b                                                           |
| OS-EXT-STS:power_state               | 0                                                                           |
| OS-EXT-STS:task_state                | scheduling                                                                  |
| OS-EXT-STS:vm_state                  | building                                                                    |
| OS-SRV-USG:launched_at               | -                                                                           |
| OS-SRV-USG:terminated_at             | -                                                                           |
| accessIPv4                           |                                                                             |
| accessIPv6                           |                                                                             |
| adminPass                            | 7PGDvZaxnxR5                                                                |
| config_drive                         |                                                                             |
| created                              | 2015-07-02T17:45:15Z                                                        |
| flavor                               | m1.small (2)                                                                |
| hostId                               |                                                                             |
| id                                   | 7432030a-3cbe-49c6-956a-3e725e22196d                                        |
| image                                | Fedora-Cloud-Base-22-20150521.x86_64 (a1533d87-d6fa-4d9d-bf85-6b2ab8400712) |
| key_name                             | my-key                                                                      |
| metadata                             | {}                                                                          |
| name                                 | MY-Fedora                                                                   |
| os-extended-volumes:volumes_attached | []                                                                          |
| progress                             | 0                                                                           |
| security_groups                      | default                                                                     |
| status                               | BUILD                                                                       |
| tenant_id                            | 9b05e6bffdb94c8081d665561d05e31e                                            |
| updated                              | 2015-07-02T17:45:15Z                                                        |
| user_id                              | 127a9a6b822a4e3eba69fa54128873cd                                            |
+--------------------------------------+-----------------------------------------------------------------------------+

We will check the status of our instance.

# nova list
+--------------------------------------+-----------+--------+------------+-------------+-----------------------+
| ID                                   | Name      | Status | Task State | Power State | Networks              |
+--------------------------------------+-----------+--------+------------+-------------+-----------------------+
| 7432030a-3cbe-49c6-956a-3e725e22196d | MY-Fedora | ACTIVE | -          | Running     | int-net=192.168.100.8 |
+--------------------------------------+-----------+--------+------------+-------------+-----------------------+

Create a floating IP address on the external network (ext-net).

# neutron floatingip-create ext-net
Created a new floatingip:
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| fixed_ip_address    |                                      |
| floating_ip_address | 192.168.0.201                        |
| floating_network_id | db407537-7951-411c-ab8e-ef59d204f110 |
| id                  | 0be060c7-d84f-4691-8205-34ad9bb6a296 |
| port_id             |                                      |
| router_id           |                                      |
| status              | DOWN                                 |
| tenant_id           | 9b05e6bffdb94c8081d665561d05e31e     |
+---------------------+--------------------------------------+

We will associate the floating IP address to our instance (MY-Fedora).

# nova floating-ip-associate MY-Fedora 192.168.0.201

Check the status of the floating IP address.

# nova list
+--------------------------------------+-----------+--------+------------+-------------+--------------------------------------+
| ID                                   | Name      | Status | Task State | Power State | Networks                             |
+--------------------------------------+-----------+--------+------------+-------------+--------------------------------------+
| 7432030a-3cbe-49c6-956a-3e725e22196d | MY-Fedora | ACTIVE | -          | Running     | int-net=192.168.100.8, 192.168.0.201 |
+--------------------------------------+-----------+--------+------------+-------------+--------------------------------------+

Verify the network connectivity using ping from any host on the external physical network.

C:\>ping 192.168.0.201

Pinging 192.168.0.201 with 32 bytes of data:
Reply from 192.168.0.201: bytes=32 time=1ms TTL=63
Reply from 192.168.0.201: bytes=32 time=2ms TTL=63
Reply from 192.168.0.201: bytes=32 time=1ms TTL=63
Reply from 192.168.0.201: bytes=32 time=1ms TTL=63

Ping statistics for 192.168.0.201:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 1ms, Maximum = 2ms, Average = 1ms

Once you get a ping response, wait at least a minute to allow the instance to boot fully; then try to SSH from the controller or an external system, using the key pair for authentication.

# ssh -i mykey fedora@192.168.0.201

The authenticity of host '192.168.0.201 (192.168.0.201)' can't be established.
ECDSA key fingerprint is 0e:c2:58:9b:7f:28:10:a9:e1:cf:6d:00:51:6b:1f:f5.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.0.201' (ECDSA) to the list of known hosts.
[fedora@my-fedora ~]$

Now you have successfully logged in to the Fedora instance.

OpenStack Kilo on Ubuntu 14.04.2 – Create initial networks



This is the fourth part of configuring Neutron (Networking) on Ubuntu 14.04; you can go through the previous articles Configure Neutron #1, Configure Neutron #2, and Configure Neutron #3, in which we installed and configured the Networking components on the Controller, Network, and Compute nodes.

Here, we will create the initial networks, which must exist before launching a VM instance.

OpenStack Networking (Neutron) – With Subnets

Creating external Network:

The external network provides internet access to instances using NAT (Network Address Translation); internet access can be enabled for individual instances using a floating IP address and suitable security group rules.

Load credentials on controller node.

# source admin-openrc.sh

Create the external network.

# neutron net-create ext-net --router:external --provider:physical_network external --provider:network_type flat

Created a new network:
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | b4c8d5fc-a4b9-42dc-b705-48c0d4217137 |
| mtu                       | 0                                    |
| name                      | ext-net                              |
| provider:network_type     | flat                                 |
| provider:physical_network | external                             |
| provider:segmentation_id  |                                      |
| router:external           | True                                 |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tenant_id                 | 9b05e6bffdb94c8081d665561d05e31e     |
+---------------------------+--------------------------------------+

Create a subnet on the external network.

For example, we use 192.168.0.0/24 with the floating IP address range 192.168.0.200 to 192.168.0.250 and the physical gateway 192.168.0.1. This gateway should be the one used on the physical network.

# neutron subnet-create ext-net 192.168.0.0/24 --name ext-subnet --allocation-pool start=192.168.0.200,end=192.168.0.250 --disable-dhcp --gateway 192.168.0.1
Created a new subnet:
+-------------------+----------------------------------------------------+
| Field             | Value                                              |
+-------------------+----------------------------------------------------+
| allocation_pools  | {"start": "192.168.0.200", "end": "192.168.0.250"} |
| cidr              | 192.168.0.0/24                                     |
| dns_nameservers   |                                                    |
| enable_dhcp       | False                                              |
| gateway_ip        | 192.168.0.1                                        |
| host_routes       |                                                    |
| id                | b32eb748-9bc0-4e57-ae26-cd17033b635e               |
| ip_version        | 4                                                  |
| ipv6_address_mode |                                                    |
| ipv6_ra_mode      |                                                    |
| name              | ext-subnet                                         |
| network_id        | b4c8d5fc-a4b9-42dc-b705-48c0d4217137               |
| subnetpool_id     |                                                    |
| tenant_id         | 9b05e6bffdb94c8081d665561d05e31e                   |
+-------------------+----------------------------------------------------+

Creating internal network:

The internal network provides network access for instances; internal networks are isolated from each other. Only instances running on the same network can communicate with each other, not to or from other networks.

Create the internal network (int-net).

# neutron net-create int-net
Created a new network:
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | 187a7b6c-7d14-4d8f-8673-57fa9bab1bba |
| mtu                       | 0                                    |
| name                      | int-net                              |
| provider:network_type     | gre                                  |
| provider:physical_network |                                      |
| provider:segmentation_id  | 1                                    |
| router:external           | False                                |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tenant_id                 | 9b05e6bffdb94c8081d665561d05e31e     |
+---------------------------+--------------------------------------+

Create a subnet on the internal network. For example, we use the 192.168.100.0/24 network with the virtual gateway 192.168.100.1.

# neutron subnet-create int-net 192.168.100.0/24 --name int-subnet --gateway 192.168.100.1
Created a new subnet:
+-------------------+------------------------------------------------------+
| Field             | Value                                                |
+-------------------+------------------------------------------------------+
| allocation_pools  | {"start": "192.168.100.2", "end": "192.168.100.254"} |
| cidr              | 192.168.100.0/24                                     |
| dns_nameservers   |                                                      |
| enable_dhcp       | True                                                 |
| gateway_ip        | 192.168.100.1                                        |
| host_routes       |                                                      |
| id                | 7f75b54f-7b87-42e4-a7e1-f452c8adcb3a                 |
| ip_version        | 4                                                    |
| ipv6_address_mode |                                                      |
| ipv6_ra_mode      |                                                      |
| name              | int-subnet                                           |
| network_id        | 187a7b6c-7d14-4d8f-8673-57fa9bab1bba                 |
| subnetpool_id     |                                                      |
| tenant_id         | 9b05e6bffdb94c8081d665561d05e31e                     |
+-------------------+------------------------------------------------------+

Create the virtual router.

A virtual router passes network traffic between two or more virtual networks. In our case, we need to create a router and attach the internal and external networks to it.

# neutron router-create int-router
Created a new router:
+-----------------------+--------------------------------------+
| Field                 | Value                                |
+-----------------------+--------------------------------------+
| admin_state_up        | True                                 |
| distributed           | False                                |
| external_gateway_info |                                      |
| ha                    | False                                |
| id                    | a47b81d7-2ad8-4bdc-a17a-0026ad374dcf |
| name                  | int-router                           |
| routes                |                                      |
| status                | ACTIVE                               |
| tenant_id             | 9b05e6bffdb94c8081d665561d05e31e     |
+-----------------------+--------------------------------------+

Attach the router to the internal subnet.

# neutron router-interface-add int-router int-subnet
Added interface cb36eb61-5e3a-4c85-b747-8e230b5d1fec to router int-router.

Attach the router to the external network by setting it as the gateway.

# neutron router-gateway-set int-router ext-net
Set gateway for router int-router
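
You can list the router’s ports to confirm that both the internal interface and the external gateway port exist:

# neutron router-port-list int-router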

You can verify the connectivity by pinging 192.168.0.200 from the external physical network. Since we used the 192.168.0.0/24 subnet with a floating IP range of 192.168.0.200 – 250, the tenant router gateway occupies the lowest IP address in the floating IP range, i.e. 192.168.0.200.

C:\>ping 192.168.0.200

Pinging 192.168.0.200 with 32 bytes of data:
Reply from 192.168.0.200: bytes=32 time<1ms TTL=64
Reply from 192.168.0.200: bytes=32 time<1ms TTL=64
Reply from 192.168.0.200: bytes=32 time<1ms TTL=64
Reply from 192.168.0.200: bytes=32 time=1ms TTL=64

Ping statistics for 192.168.0.200:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 0ms, Maximum = 1ms, Average = 0ms

That’s all! You have successfully configured Networking (Neutron). You are now ready to launch an instance.

OpenStack Kilo on Ubuntu 14.04.2 – Configure Neutron #3



This is the third part of configuring Neutron (Networking) on Ubuntu 14.04; you can go through the previous articles Configure Neutron #1 and Configure Neutron #2, in which we installed and configured the Networking components on the Controller node and the Network node.

Here, we will configure the compute node to use Neutron.

Prerequisites:

Configure the kernel parameters on the compute node by editing the /etc/sysctl.conf file.

# nano /etc/sysctl.conf

Add the following parameters into the file.

net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1

Apply the changes.

# sysctl -p
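
You can verify that the kernel picked up the new values:

# sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.default.rp_filter
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.default.rp_filter = 0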

Install and configure Networking components:

Install the following packages on each and every compute node in your OpenStack environment.

# apt-get install neutron-plugin-ml2 neutron-plugin-openvswitch-agent

Edit the /etc/neutron/neutron.conf file.

# nano /etc/neutron/neutron.conf

Modify the settings below and make sure to place the entries in the proper sections. In the [database] section, comment out any connection options, as the compute node does not directly access the database.

[DEFAULT]
...
rpc_backend = rabbit
verbose = True
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True
auth_strategy = keystone

[database]
...
#connection = sqlite:////var/lib/neutron/neutron.sqlite

##Comment out the above line.

[oslo_messaging_rabbit]
...
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = password

## Replace "password" with the password you chose for the openstack account in RabbitMQ

[keystone_authtoken]
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = neutron
password = password

## Replace "password" with the password you chose for neutron user in the identity service

Configure Modular Layer 2 (ML2) plug-in:

Edit the /etc/neutron/plugins/ml2/ml2_conf.ini file.

# nano /etc/neutron/plugins/ml2/ml2_conf.ini

Modify the below sections.

[ml2]
...
type_drivers = flat,vlan,gre,vxlan
tenant_network_types = gre
mechanism_drivers = openvswitch

[ml2_type_gre]
...
tunnel_id_ranges = 1:1000

[securitygroup]
...
enable_security_group = True
enable_ipset = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

[ovs]
local_ip = 192.168.12.23

## Tunnel network interface on your Compute Node.

[agent]
tunnel_types = gre

## The [ovs] and [agent] stanzas usually need to be added at the bottom of the file if they do not already exist.

Restart the Open vSwitch service.

# service openvswitch-switch restart

Configure Compute node to use Networking:

By default, Compute node uses legacy networking. We must reconfigure Compute to manage networks through Neutron.

Edit the /etc/nova/nova.conf file.

# nano /etc/nova/nova.conf

Modify the settings below and make sure to place the entries in the proper sections. If a section does not exist, create it accordingly.

[DEFAULT]
...
network_api_class = nova.network.neutronv2.api.API
security_group_api = neutron
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver

[neutron]

url = http://controller:9696
auth_strategy = keystone
admin_auth_url = http://controller:35357/v2.0
admin_tenant_name = service
admin_username = neutron
admin_password = password

## Replace "password" with the password you chose for neutron user in the identity service

Restart the compute service and the Open vSwitch agent on the compute node.

# service nova-compute restart
# service neutron-plugin-openvswitch-agent restart

Verify operation:

Load admin credentials on the controller node.

# source admin-openrc.sh

List the agents.

# neutron agent-list
+--------------------------------------+--------------------+---------+-------+----------------+---------------------------+
| id                                   | agent_type         | host    | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+---------+-------+----------------+---------------------------+
| 23da3f95-b81b-4426-9d7a-d5cbfc5241c0 | Metadata agent     | network | :-)   | True           | neutron-metadata-agent    |
| 4217b0c0-fbd4-47d9-bc22-5187f09d958a | DHCP agent         | network | :-)   | True           | neutron-dhcp-agent        |
| a4eaabf8-8cf0-4d72-817d-d80921b4f915 | Open vSwitch agent | compute | :-)   | True           | neutron-openvswitch-agent |
| b4cf95cd-2eba-4c69-baa6-ae8832384e40 | Open vSwitch agent | network | :-)   | True           | neutron-openvswitch-agent |
| d9e174be-e719-4f05-ad05-bc444eb97df5 | L3 agent           | network | :-)   | True           | neutron-l3-agent          |
+--------------------------------------+--------------------+---------+-------+----------------+---------------------------+

The output should have four agents alive on the network node and one agent alive on the compute node.

OpenStack Kilo on Ubuntu 14.04.2 – Configure Neutron #2



This is the second part of configuring Neutron (Networking) on Ubuntu 14.04; you can go through the previous article, Configure Neutron #1, in which we installed and configured the Networking components on the Controller node.

In this tutorial, we will install and configure the Network node.

Prerequisite:

Make sure you have enabled the OpenStack Kilo repository on the Network node, or follow the steps below to enable it.

Install the Ubuntu Cloud archive keyring and repository.

# apt-get install ubuntu-cloud-keyring

# echo "deb http://ubuntu-cloud.archive.canonical.com/ubuntu" "trusty-updates/kilo main" > /etc/apt/sources.list.d/cloudarchive-kilo.list

Update the package lists.

# apt-get update

Configure the kernel parameters on the network node by editing the /etc/sysctl.conf file.

# nano /etc/sysctl.conf

Add the following parameters into the file.

net.ipv4.ip_forward=1
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0

Apply the changes.

# sysctl -p

Install and configure Networking components:

Install the following packages on Network node.

# apt-get install neutron-plugin-ml2 neutron-plugin-openvswitch-agent neutron-l3-agent neutron-dhcp-agent neutron-metadata-agent

Edit /etc/neutron/neutron.conf.

# nano /etc/neutron/neutron.conf

Modify the settings below and make sure to place the entries in the proper sections. In the [database] section, comment out any connection options, as the network node does not directly access the database.

[DEFAULT]
...
rpc_backend = rabbit
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True
auth_strategy = keystone
verbose = True

[oslo_messaging_rabbit]
...
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = password

## Replace "password" with the password you chose for the openstack account in RabbitMQ

[database]
...
#connection = sqlite:////var/lib/neutron/neutron.sqlite

##Comment out the above line.

[keystone_authtoken]
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = neutron
password = password

## Replace "password" with the password you chose for neutron user in the identity service

Configure Modular Layer 2 (ML2) plug-in:

Edit the /etc/neutron/plugins/ml2/ml2_conf.ini file.

# nano /etc/neutron/plugins/ml2/ml2_conf.ini

Modify the below sections.

[ml2]
...
type_drivers = flat,vlan,gre,vxlan
tenant_network_types = gre
mechanism_drivers = openvswitch

[ml2_type_flat]
...
flat_networks = external

[ml2_type_gre]
...
tunnel_id_ranges = 1:1000

[securitygroup]
...
enable_security_group = True
enable_ipset = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

[ovs]
local_ip = 192.168.11.22
## Tunnel network interface on your Network Node.
bridge_mappings = external:br-ex

[agent]
tunnel_types = gre

Note: the [ovs] and [agent] stanzas usually need to be added at the bottom of the file if they do not already exist.

Configure the Layer-3 (L3) agent:

The L3 agent provides routing services for virtual networks. Edit the /etc/neutron/l3_agent.ini file.

# nano /etc/neutron/l3_agent.ini

Modify the [DEFAULT] section.

[DEFAULT]
...
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
external_network_bridge =
router_delete_namespaces = True
verbose = True

Configure the DHCP agent:

Edit the /etc/neutron/dhcp_agent.ini file.

# nano  /etc/neutron/dhcp_agent.ini

Modify the following stanzas.

[DEFAULT]
...
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
dhcp_delete_namespaces = True
verbose = True

Configure the metadata agent:

Edit the /etc/neutron/metadata_agent.ini file

# nano /etc/neutron/metadata_agent.ini

Modify the following sections; you may have to comment out some existing entries.

[DEFAULT]
...
verbose = True
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_region = RegionOne
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = neutron
password = password

## Replace "password" with the password you chose for neutron user in the identity service.

nova_metadata_ip = controller
metadata_proxy_shared_secret = 26f008fb8c504b393df3
## Replace "26f008fb8c504b393df3" with a suitable secret for the metadata proxy

On the Controller node, edit the /etc/nova/nova.conf file.

# nano /etc/nova/nova.conf

Modify the [neutron] sections.

[neutron]
...
service_metadata_proxy = True
metadata_proxy_shared_secret = 26f008fb8c504b393df3

## Replace "26f008fb8c504b393df3" with the secret you chose for the metadata proxy.

Restart the compute API service on controller node.

# service nova-api restart

Configure the Open vSwitch (OVS) service:

Restart the OVS service on Network Node.

# service openvswitch-switch restart

Add the external bridge.

# ovs-vsctl add-br br-ex

Add a port to the external bridge that connects to the physical external network interface; in my case eth2 is the interface name.

# ovs-vsctl add-port br-ex eth2
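
You can verify the bridge configuration with ovs-vsctl; the br-ex bridge should now list eth2 (or your interface) as one of its ports:

# ovs-vsctl show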

Restart the networking services.

# service neutron-plugin-openvswitch-agent restart
# service neutron-l3-agent restart
# service neutron-dhcp-agent restart
# service neutron-metadata-agent restart

Verify operation:

Load admin credentials on the controller node.

# source admin-openrc.sh

List the agents.

# neutron agent-list
+--------------------------------------+--------------------+---------+-------+----------------+---------------------------+
| id                                   | agent_type         | host    | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+---------+-------+----------------+---------------------------+
| 23da3f95-b81b-4426-9d7a-d5cbfc5241c0 | Metadata agent     | network | :-)   | True           | neutron-metadata-agent    |
| 4217b0c0-fbd4-47d9-bc22-5187f09d958a | DHCP agent         | network | :-)   | True           | neutron-dhcp-agent        |
| b4cf95cd-2eba-4c69-baa6-ae8832384e40 | Open vSwitch agent | network | :-)   | True           | neutron-openvswitch-agent |
| d9e174be-e719-4f05-ad05-bc444eb97df5 | L3 agent           | network | :-)   | True           | neutron-l3-agent          |
+--------------------------------------+--------------------+---------+-------+----------------+---------------------------+

That’s all! You have successfully configured the Network node.

OpenStack Kilo on Ubuntu 14.04.2 – Configure Neutron #1



OpenStack Networking allows you to create interface devices and attach them to networks; this guide helps you to configure Neutron (Networking) in an OpenStack environment. Neutron manages everything networking-related that is required for the virtual networking infrastructure, providing the network, subnet, and router object abstractions.

Install and configure controller node:

Before we configure the Neutron service, we must create a database, a service entity, and API endpoints.

Log in to the MySQL server as the root user.

# mysql -u root -p

Create the neutron database.

CREATE DATABASE neutron;

Grant proper permissions on the neutron database.

GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'password';

GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'password';

Replace “password” with a suitable password. Exit from MySQL.

Load your admin credential from the environment script.

# source admin-openrc.sh

Create the neutron user for creating service credentials.

# openstack user create --password-prompt neutron
User Password:
Repeat User Password:
+----------+----------------------------------+
| Field    | Value                            |
+----------+----------------------------------+
| email    | None                             |
| enabled  | True                             |
| id       | ac5ee3286887450d911b82d4e263e1c9 |
| name     | neutron                          |
| username | neutron                          |
+----------+----------------------------------+

Add the admin role to the neutron user.

# openstack role add --project service --user neutron admin
+-------+----------------------------------+
| Field | Value                            |
+-------+----------------------------------+
| id    | 33af4f957aa34cc79451c23bf014af6f |
| name  | admin                            |
+-------+----------------------------------+

Create the neutron service entity.

# openstack service create --name neutron --description "OpenStack Networking" network
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Networking             |
| enabled     | True                             |
| id          | 95237876259e44d9a1a926577b786875 |
| name        | neutron                          |
| type        | network                          |
+-------------+----------------------------------+

Create the neutron service API endpoint.

# openstack endpoint create \
--publicurl http://controller:9696 \
--adminurl http://controller:9696 \
--internalurl http://controller:9696 \
--region RegionOne \
network
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| adminurl     | http://controller:9696           |
| id           | ed46eb46c27e4f2b9a58ff574f43d0cb |
| internalurl  | http://controller:9696           |
| publicurl    | http://controller:9696           |
| region       | RegionOne                        |
| service_id   | 95237876259e44d9a1a926577b786875 |
| service_name | neutron                          |
| service_type | network                          |
+--------------+----------------------------------+

Install and configure Networking components on the controller node:

# apt-get install neutron-server neutron-plugin-ml2 python-neutronclient

Edit the /etc/neutron/neutron.conf.

# nano /etc/neutron/neutron.conf

Modify the settings below and make sure to place the entries in the proper sections.

[DEFAULT]
...
verbose = True
rpc_backend = rabbit
auth_strategy = keystone
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
nova_url = http://controller:8774/v2


[oslo_messaging_rabbit]
...
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = password
## Replace "password" with the password you chose for the openstack account in RabbitMQ

[database]
...
connection = mysql://neutron:password@controller/neutron
## Replace "password" with the password you chose for neutron database

[keystone_authtoken]
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = neutron
password = password
## Replace "password" with the password you chose for neutron user in the identity service.

[nova]
...
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
region_name = RegionOne
project_name = service
username = nova
password = password

## Replace "password" with the password you chose for nova user in the identity service.

Configure Modular Layer 2 (ML2) plugin:

Edit the /etc/neutron/plugins/ml2/ml2_conf.ini file

# nano /etc/neutron/plugins/ml2/ml2_conf.ini

Modify the following stanzas.

[ml2]
...
type_drivers = flat,vlan,gre,vxlan
tenant_network_types = gre
mechanism_drivers = openvswitch

[ml2_type_gre]
...
tunnel_id_ranges = 1:1000

[securitygroup]
...
enable_security_group = True
enable_ipset = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

Configure Compute to use Networking by editing /etc/nova/nova.conf on the controller node.

# nano /etc/nova/nova.conf

Modify the settings below and make sure to place the entries in the proper sections.

[DEFAULT]
...
network_api_class = nova.network.neutronv2.api.API
security_group_api = neutron
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver

[neutron]
url = http://controller:9696
auth_strategy = keystone
admin_auth_url = http://controller:35357/v2.0
admin_tenant_name = service
admin_username = neutron
admin_password = password

## Replace "password" with the password you chose for neutron user in the identity service

Note: if a particular section does not exist, create it and place the stanzas in it.

Populate the neutron database.

# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

Restart the compute and networking services on the controller node.

# service nova-api restart

# service neutron-server restart

Verify it by listing loaded extensions.

# neutron ext-list
+-----------------------+-----------------------------------------------+
| alias                 | name                                          |
+-----------------------+-----------------------------------------------+
| security-group        | security-group                                |
| l3_agent_scheduler    | L3 Agent Scheduler                            |
| net-mtu               | Network MTU                                   |
| ext-gw-mode           | Neutron L3 Configurable external gateway mode |
| binding               | Port Binding                                  |
| provider              | Provider Network                              |
| agent                 | agent                                         |
| quotas                | Quota management support                      |
| subnet_allocation     | Subnet Allocation                             |
| dhcp_agent_scheduler  | DHCP Agent Scheduler                          |
| l3-ha                 | HA Router extension                           |
| multi-provider        | Multi Provider Network                        |
| external-net          | Neutron external network                      |
| router                | Neutron L3 Router                             |
| allowed-address-pairs | Allowed Address Pairs                         |
| extraroute            | Neutron Extra Route                           |
| extra_dhcp_opt        | Neutron Extra DHCP opts                       |
| dvr                   | Distributed Virtual Router                    |
+-----------------------+-----------------------------------------------+

Next, install and configure the Network Node.

OpenStack Kilo on Ubuntu 14.04.2 – Configure Nova


This guide helps you configure the Nova (Compute) service in an OpenStack environment. In OpenStack, the Compute service is used to host and manage cloud computing systems. OpenStack Compute is a major part of IaaS; it interacts with KeyStone for authentication, the Image service for disk and server images, and the Dashboard for the user and administrative interface.

OpenStack Compute can scale horizontally on standard hardware and downloads images to launch compute instances.

Install and configure controller node:

We will configure the Compute service on the Controller node. Log in to the MySQL server as the root user.

# mysql -u root -p

Create the nova database.

CREATE DATABASE nova;

Grant proper permissions on the nova database.

GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'password';

GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'password';

Replace “password” with a suitable password. Exit from MySQL.
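
If you want to double-check the privileges (before exiting, or after logging back in), SHOW GRANTS prints them back; this is a read-only check:

SHOW GRANTS FOR 'nova'@'localhost';

SHOW GRANTS FOR 'nova'@'%';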

Load your admin credential from the environment script.

# source admin-openrc.sh

Create the nova user for the service credentials.

# openstack user create --password-prompt nova
User Password:
Repeat User Password:
+----------+----------------------------------+
| Field    | Value                            |
+----------+----------------------------------+
| email    | None                             |
| enabled  | True                             |
| id       | 58677ccc7412413587d138f686574867 |
| name     | nova                             |
| username | nova                             |
+----------+----------------------------------+

Add the admin role to the nova user.

# openstack role add --project service --user nova admin
+-------+----------------------------------+
| Field | Value                            |
+-------+----------------------------------+
| id    | 33af4f957aa34cc79451c23bf014af6f |
| name  | admin                            |
+-------+----------------------------------+

Create the nova service entity.

# openstack service create --name nova --description "OpenStack Compute" compute

+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Compute                |
| enabled     | True                             |
| id          | 40bc66cafb164b18965528c0f4f5ab83 |
| name        | nova                             |
| type        | compute                          |
+-------------+----------------------------------+

Create the nova service API endpoint.

# openstack endpoint create \
--publicurl http://controller:8774/v2/%\(tenant_id\)s \
--internalurl http://controller:8774/v2/%\(tenant_id\)s \
--adminurl http://controller:8774/v2/%\(tenant_id\)s \
--region RegionOne \
compute

+--------------+-----------------------------------------+
| Field        | Value                                   |
+--------------+-----------------------------------------+
| adminurl     | http://controller:8774/v2/%(tenant_id)s |
| id           | 3a61334885334ccaa822701ac1091080        |
| internalurl  | http://controller:8774/v2/%(tenant_id)s |
| publicurl    | http://controller:8774/v2/%(tenant_id)s |
| region       | RegionOne                               |
| service_id   | 40bc66cafb164b18965528c0f4f5ab83        |
| service_name | nova                                    |
| service_type | compute                                 |
+--------------+-----------------------------------------+
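
Optionally, confirm that the service entity and endpoint were registered by listing them with the openstack client; the nova entry should appear in both outputs:

# openstack service list

# openstack endpoint list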

Install and configure Compute controller components:

Install the packages on Controller Node.

# apt-get install nova-api nova-cert nova-conductor nova-consoleauth nova-novncproxy nova-scheduler python-novaclient

Edit the /etc/nova/nova.conf.

# nano /etc/nova/nova.conf

Modify the settings below and make sure to place each entry in the proper section.

[DEFAULT]
...
rpc_backend = rabbit
auth_strategy = keystone
my_ip = 192.168.12.21 
## Management IP of Controller Node
vncserver_listen = 192.168.12.21 
## Management IP of Controller Node
vncserver_proxyclient_address = 192.168.12.21 
## Management IP of Controller Node

[database]
connection = mysql://nova:password@controller/nova

## Replace "password" with the password you chose for nova database

[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = password

## Replace "password" with the password you chose for the openstack account in RabbitMQ.

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = nova
password = password

## Replace "password" with the password you chose for nova user in the identity service

[glance]
host = controller

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

Populate the Compute database.

#  su -s /bin/sh -c "nova-manage db sync" nova
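
To confirm the sync created the schema, you can optionally list the tables in the nova database; a minimal check, assuming the nova database credentials created earlier:

# mysql -u nova -p -e "SHOW TABLES;" nova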

Restart the compute services.

# service nova-api restart
# service nova-cert restart
# service nova-consoleauth restart
# service nova-scheduler restart
# service nova-conductor restart
# service nova-novncproxy restart

Remove SQLite database file.

# rm -f /var/lib/nova/nova.sqlite

Install and configure Nova (a compute node):

Here we will install and configure the Compute service on a compute node; this service supports multiple hypervisors for deploying instances (VMs). Our compute node uses the QEMU hypervisor with the KVM extension to support hardware-accelerated virtualization.

Verify whether your compute node supports hardware virtualization.

# egrep -c '(vmx|svm)' /proc/cpuinfo
1

If the command returns a value of 1 or more, your compute node supports hardware virtualization.
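
As an optional second check, the cpu-checker package (not part of this guide's package list) ships a kvm-ok command that reports whether KVM acceleration can be used on this host:

# apt-get install cpu-checker
# kvm-ok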

Make sure you have enabled the OpenStack Kilo repository on the Compute Node, or follow the steps below to enable it.

Install the Ubuntu Cloud archive keyring and repository.

# apt-get install ubuntu-cloud-keyring

# echo "deb http://ubuntu-cloud.archive.canonical.com/ubuntu" "trusty-updates/kilo main" > /etc/apt/sources.list.d/cloudarchive-kilo.list

Update the package lists.

# apt-get update

Install the following packages on each Compute node.

# apt-get install nova-compute sysfsutils

Edit /etc/nova/nova-compute.conf to enable QEMU.

# nano /etc/nova/nova-compute.conf

Change virt_type from kvm to qemu in the [libvirt] section.

[libvirt]
...
virt_type = qemu
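
If you prefer a one-liner over nano, a sed edit achieves the same change; a minimal sketch that assumes the file already contains a virt_type line in the [libvirt] section:

# sed -i 's/^virt_type.*/virt_type = qemu/' /etc/nova/nova-compute.conf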

Edit the /etc/nova/nova.conf.

# nano /etc/nova/nova.conf

Modify the settings below and make sure to place each entry in the proper section.

[DEFAULT]
...
rpc_backend = rabbit
auth_strategy = keystone
my_ip = 192.168.12.23
## Management IP of Compute Node
vnc_enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = 192.168.12.23
## Management IP of Compute Node
novncproxy_base_url = http://controller:6080/vnc_auto.html

[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = password
## Replace "password" with the password you chose for the openstack account in RabbitMQ.

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = nova
password = password

## Replace "password" with the password you chose for nova user in the identity service

[glance]
host = controller

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

Restart the compute service.

# service nova-compute restart
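
If the service does not come up cleanly, the compute log is the first place to look; on Ubuntu the nova packages write to /var/log/nova by default:

# tail -n 20 /var/log/nova/nova-compute.log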

Remove SQLite database file.

# rm -f /var/lib/nova/nova.sqlite

Verify operation:

Load admin credentials on Controller Node.

# source admin-openrc.sh

To verify, list the Compute service components by running the following command on the Controller Node.

# nova service-list
+----+------------------+------------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary           | Host       | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+----+------------------+------------+----------+---------+-------+----------------------------+-----------------+
| 1  | nova-cert        | controller | internal | enabled | up    | 2015-06-29T20:38:48.000000 | -               |
| 2  | nova-conductor   | controller | internal | enabled | up    | 2015-06-29T20:38:46.000000 | -               |
| 3  | nova-consoleauth | controller | internal | enabled | up    | 2015-06-29T20:38:41.000000 | -               |
| 4  | nova-scheduler   | controller | internal | enabled | up    | 2015-06-29T20:38:50.000000 | -               |
| 5  | nova-compute     | compute    | nova     | enabled | up    | 2015-06-29T20:38:49.000000 | -               |
+----+------------------+------------+----------+---------+-------+----------------------------+-----------------+

You should get an output with four service components enabled on the controller node and one service component enabled on the compute node.

List images in the Image service catalog to verify the connectivity with the image service.

# nova image-list
+--------------------------------------+---------------------+--------+--------+
| ID                                   | Name                | Status | Server |
+--------------------------------------+---------------------+--------+--------+
| b19c4522-df31-4331-a2e1-5992abcd4ded | Ubuntu_14.04-x86_64 | ACTIVE |        |
+--------------------------------------+---------------------+--------+--------+

That’s all! You have successfully configured the Nova service. Next is to configure OpenStack Networking (Neutron).

OpenStack Kilo on Ubuntu 14.04.2 – Configure Glance


This post guides you through configuring the OpenStack Image service, code-named Glance, on the controller node. We will configure Glance to store images locally on the controller node. Before going ahead, make sure you have configured the KeyStone service.

If you have not configured KeyStone yet, you can go through the below two posts.

OpenStack Kilo on Ubuntu 14.04.2- Configure KeyStone #1

OpenStack Kilo on Ubuntu 14.04.2- Configure KeyStone #2

Create client environment scripts for the admin and demo users; these scripts will help us load the appropriate credentials for client operations.

Create the admin-openrc.sh file.

# nano admin-openrc.sh

Paste the following content onto the file.

export OS_PROJECT_DOMAIN_ID=default
export OS_USER_DOMAIN_ID=default
export OS_PROJECT_NAME=admin
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=password
export OS_AUTH_URL=http://controller:35357/v3

Replace password with the password that you created for the admin user in KeyStone #2.
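
To confirm the credentials in the script actually work, you can source it and request a token; openstack token issue is a quick, read-only test, assuming the python-openstackclient package is already installed on the controller:

# source admin-openrc.sh
# openstack token issue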

Create the demo-openrc.sh file.

# nano demo-openrc.sh

Paste the below content onto the file.

export OS_PROJECT_DOMAIN_ID=default
export OS_USER_DOMAIN_ID=default
export OS_PROJECT_NAME=demo
export OS_TENANT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=password
export OS_AUTH_URL=http://controller:5000/v3

Replace password with the password that you created for the demo user in KeyStone #2.

Prerequisites:

Log in to the MySQL database server as root.

# mysql -u root -p

Create the database for glance.

CREATE DATABASE glance;

Set proper access to the glance database.

GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'password';

GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'password';

Replace password with a suitable password.

Load admin credentials.

# source admin-openrc.sh

Create the glance user.

# openstack user create --password-prompt glance
User Password:
Repeat User Password:
+----------+----------------------------------+
| Field    | Value                            |
+----------+----------------------------------+
| email    | None                             |
| enabled  | True                             |
| id       | f4bed648d59f44bfa31d9bb670fa7bc2 |
| name     | glance                           |
| username | glance                           |
+----------+----------------------------------+

Add the admin role to the glance user and service project.

# openstack role add --project service --user glance admin
+-------+----------------------------------+
| Field | Value                            |
+-------+----------------------------------+
| id    | 33af4f957aa34cc79451c23bf014af6f |
| name  | admin                            |
+-------+----------------------------------+

Create the glance service entity.

# openstack service create --name glance --description "OpenStack Image service" image
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Image service          |
| enabled     | True                             |
| id          | f75a73447c504fceb4cdf898a9033d81 |
| name        | glance                           |
| type        | image                            |
+-------------+----------------------------------+

Create the API endpoint for glance.

# openstack endpoint create \
--publicurl http://controller:9292 \
--internalurl http://controller:9292 \
--adminurl http://controller:9292 \
--region RegionOne \
image

+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| adminurl     | http://controller:9292           |
| id           | e38a6ecf4f9347a29026706719ef2988 |
| internalurl  | http://controller:9292           |
| publicurl    | http://controller:9292           |
| region       | RegionOne                        |
| service_id   | f75a73447c504fceb4cdf898a9033d81 |
| service_name | glance                           |
| service_type | image                            |
+--------------+----------------------------------+
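
As a quick sanity check, you can display the registered Image service entity with the openstack client:

# openstack service show glance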

Install and Configure glance:

Install the packages.

# apt-get install glance python-glanceclient

Edit the /etc/glance/glance-api.conf file, modify the settings below, and make sure to place each entry in the proper section.

[DEFAULT]
...
notification_driver = noop
verbose = True

[database]
...
connection = mysql://glance:password@controller/glance
## Replace with the password you chose for the glance database

[keystone_authtoken]
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = glance
password = password
## Replace this with the password you chose for the glance user in the identity service.

[paste_deploy]
...
flavor = keystone

[glance_store]
...
default_store = file
filesystem_store_datadir = /var/lib/glance/images/

Edit the /etc/glance/glance-registry.conf file, modify the settings below, and make sure to place each entry in the proper section.

[DEFAULT]
...
notification_driver = noop
verbose = True

[database]
...
connection = mysql://glance:password@controller/glance
## Replace with the password you chose for the glance database

[keystone_authtoken]
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = glance
password = password
## Replace this with the password you chose for the glance user in the identity service

[paste_deploy]
...
flavor = keystone

Populate the glance database.

# su -s /bin/sh -c "glance-manage db_sync" glance

Restart the services.

# service glance-registry restart
# service glance-api restart

Delete the SQLite database file.

# rm -f /var/lib/glance/glance.sqlite

Verify operation:

In this step, we will verify the Image service by uploading the Fedora 22 cloud image to our OpenStack environment.

In our client environment scripts, we will configure the Image service client to use API version 2.0:

# echo "export OS_IMAGE_API_VERSION=2" | tee -a admin-openrc.sh demo-openrc.sh

Load admin credentials.

# source admin-openrc.sh
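
You can also confirm that the Image API version variable from the previous step was picked up; it should print 2:

# echo $OS_IMAGE_API_VERSION
2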

Download the Fedora 22 cloud image to the /tmp directory.

# cd /tmp

# wget https://download.fedoraproject.org/pub/fedora/linux/releases/22/Cloud/x86_64/Images/Fedora-Cloud-Base-22-20150521.x86_64.qcow2

Upload the image.

#  glance image-create --name "Fedora-Cloud-Base-22-20150521.x86_64" --file /tmp/Fedora-Cloud-Base-22-20150521.x86_64.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress

You will see output similar to the following.

[=============================>] 100%
+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| checksum         | 18abc933d17f69d55ecea0d19f8f5c71     |
| container_format | bare                                 |
| created_at       | 2015-06-28T17:42:59Z                 |
| disk_format      | qcow2                                |
| id               | a1533d87-d6fa-4d9d-bf85-6b2ab8400712 |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | Fedora-Cloud-Base-22-20150521.x86_64 |
| owner            | 9b05e6bffdb94c8081d665561d05e31e     |
| protected        | False                                |
| size             | 228599296                            |
| status           | active                               |
| tags             | []                                   |
| updated_at       | 2015-06-28T17:43:27Z                 |
| virtual_size     | None                                 |
| visibility       | public                               |
+------------------+--------------------------------------+

List the uploaded images.

# glance image-list
+--------------------------------------+--------------------------------------+
| ID                                   | Name                                 |
+--------------------------------------+--------------------------------------+
| a1533d87-d6fa-4d9d-bf85-6b2ab8400712 | Fedora-Cloud-Base-22-20150521.x86_64 |
+--------------------------------------+--------------------------------------+

That’s all! You have successfully configured Glance. Next is to configure Nova (Compute).