Manual basic installation (Simple OpenStack)

This is a simple how-to showing how to install OpenStack manually using the packages from the repository. We will install only two machines as OpenStack components and a third one for additional services: one will be the "simplecontroller" (where all services will be installed) and the other will be the compute node. We will also have only one network; the image below shows what the environment will look like.

First of all, configure the hostname and IP addresses on all machines with the commands below.

simpleservices
hostnamectl set-hostname simpleservices.xurupita.nl

cat <<'EOF'> /etc/sysconfig/network-scripts/ifcfg-eth0
IPADDR=192.168.88.6
TYPE=Ethernet
BOOTPROTO=static
DEFROUTE=yes
NAME=eth0
DEVICE=eth0
ONBOOT=yes
PREFIX=24
GATEWAY=192.168.88.1
DNS1=8.8.8.8
EOF

simplecontroller
hostnamectl set-hostname simplecontroller.xurupita.nl

cat <<'EOF'> /etc/sysconfig/network-scripts/ifcfg-eth0
IPADDR=192.168.88.7
TYPE=Ethernet
BOOTPROTO=static
DEFROUTE=yes
NAME=eth0
DEVICE=eth0
ONBOOT=yes
PREFIX=24
GATEWAY=192.168.88.1
DNS1=8.8.8.8
EOF

simplecompute
hostnamectl set-hostname simplecompute.xurupita.nl

cat <<'EOF'> /etc/sysconfig/network-scripts/ifcfg-eth0
IPADDR=192.168.88.11
TYPE=Ethernet
BOOTPROTO=static
DEFROUTE=yes
NAME=eth0
DEVICE=eth0
ONBOOT=yes
PREFIX=24
GATEWAY=192.168.88.1
DNS1=8.8.8.8
EOF

All machines!!!
Set SELinux to permissive mode:

sed -e 's/SELINUX=.*/SELINUX=permissive/' /etc/selinux/config -i
setenforce 0

Stop and disable FirewallD:

systemctl stop firewalld
systemctl disable firewalld

As well as NetworkManager:

systemctl stop NetworkManager
systemctl disable NetworkManager

Because we are not using a DNS server as authoritative for xurupita.nl, add these lines to '/etc/hosts':

cat << 'EOF' > /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.88.6 simpleservices.xurupita.nl simpleservices
192.168.88.7 simplecontroller.xurupita.nl simplecontroller
192.168.88.11 simplecompute.xurupita.nl simplecompute
EOF
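
A quick sanity check that is not part of the original steps: confirm that each short name resolves and answers before moving on.

for h in simpleservices simplecontroller simplecompute; do
    ping -c1 "$h" > /dev/null && echo "$h OK" || echo "$h FAILED"
done
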
Install the EPEL and OpenStack repositories:

yum install centos-release-openstack-stein epel-release -y

This package installs the 'openstack-config' command, which is just a wrapper around 'crudini':

yum install openstack-utils -y
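
For reference, the two --set lines below do the same thing; the file, section and option here are only an illustration:

touch /tmp/example.conf
openstack-config --set /tmp/example.conf DEFAULT verbose true
crudini --set /tmp/example.conf DEFAULT verbose true
crudini --get /tmp/example.conf DEFAULT verbose
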
NTP

simplecontroller
yum install chrony -y

Configure it to permit the other servers to sync time from it:

cat <<'EOF'> /etc/chrony.conf
server 1.centos.pool.ntp.org iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
logdir /var/log/chrony
allow 192.168.88.0/24
EOF

Enable and start the Chronyd service:

systemctl enable chronyd.service
systemctl start chronyd.service

Verify that it is working:

chronyc sources

simplecompute
yum install chrony -y

Configure it to sync time from simplecontroller:

cat <<'EOF'> /etc/chrony.conf
server simplecontroller iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
logdir /var/log/chrony
EOF

Enable and start the Chronyd service:

systemctl enable chronyd.service
systemctl start chronyd.service

Verify that it is working:

chronyc sources

SQL Database

simplecontroller
Install the MariaDB server and the Python connector:

yum install mariadb mariadb-server python2-PyMySQL -y

Configure '/etc/my.cnf.d/openstack.cnf' to ensure that MySQL uses InnoDB and UTF-8, and to open port 3306 on all the IPs we have:

cat <<'EOF'> /etc/my.cnf.d/openstack.cnf
[mysqld]
bind-address = 0.0.0.0
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
EOF

Enable and start the MariaDB service:

systemctl enable mariadb.service
systemctl start mariadb.service

Test that it is OK:

mysql -u root -e "show databases"

Set a password for the root user (only root@'%' on MariaDB will have this password) so it can connect remotely if necessary:

mysql -u root -e "grant all privileges on *.* to root@'%' identified by 'MDB_PASS' with grant option; flush privileges"
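
To confirm the grant was applied, you can list it (an extra check, not in the original steps):

mysql -u root -e "show grants for 'root'@'%'"
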
Message Queue

simplecontroller
Install the RabbitMQ server to provide the messaging system:

yum install rabbitmq-server -y

Enable the management portal; it exposes a lot of information on a web console. The default port is 15672, so in our case try http://192.168.88.7:15672 after creating the user and setting the permissions in the next steps:

rabbitmq-plugins enable rabbitmq_management

Enable and start the RabbitMQ Server service:

systemctl enable rabbitmq-server.service
systemctl start rabbitmq-server.service

Create the 'openstack' user and set its password and permissions; this user will be used by each OpenStack service to connect to RabbitMQ:

rabbitmqctl add_user openstack RABBIT_PASS
rabbitmqctl set_user_tags openstack administrator
rabbitmqctl set_permissions openstack ".*" ".*" ".*"

Check that everything is correctly configured:

curl -i -u openstack:RABBIT_PASS http://192.168.88.7:15672/api/whoami
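
You can also confirm the user and its permissions directly with rabbitmqctl (an extra check, not in the original steps):

rabbitmqctl list_users
rabbitmqctl list_permissions
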
Memcached

simplecontroller
Install Memcached and the Python library that lets OpenStack use it:

yum install memcached python-memcached -y

Make the Memcached service listen on all interfaces:

sed -e 's/OPTIONS=.*/OPTIONS="-l 0.0.0.0"/' /etc/sysconfig/memcached -i

Enable and start the Memcached service:

systemctl enable memcached.service
systemctl start memcached.service

Check that Memcached is correctly configured:

memcached-tool 192.168.88.7:11211 stats

Identity service (Keystone)

simplecontroller
First of all, create the Keystone database and set the password that allows the keystone user to connect to it:

mysql -u root -e "CREATE DATABASE keystone"
mysql -u root -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'KEYSTONE_DBPASS' ; flush privileges"

Install the Keystone packages, the Apache server and mod_wsgi, which acts as the gateway interface for the Keystone API service:

yum install openstack-keystone httpd mod_wsgi -y

Set the database connection created a few steps above:

openstack-config --set /etc/keystone/keystone.conf database connection mysql+pymysql://keystone:KEYSTONE_DBPASS@simplecontroller/keystone

Determine which token mechanism Keystone will use:

openstack-config --set /etc/keystone/keystone.conf token provider fernet

Set Keystone to work with Memcached and point it at the server to be used:

openstack-config --set /etc/keystone/keystone.conf cache backend dogpile.cache.memcached
openstack-config --set /etc/keystone/keystone.conf cache memcache_servers simplecontroller:11211

Sync the database, which means creating all the tables and populating them with the minimum information required:

su -s /bin/sh -c "keystone-manage db_sync" keystone

Confirm that the tables were created:

mysql -uroot keystone -e "show tables"

To use Fernet it is necessary to generate the keys that will be used to generate the tokens, so run the commands below:

keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
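
These two commands populate key repositories on disk; assuming the default paths, you can confirm that the keys were created with:

ls -l /etc/keystone/fernet-keys/ /etc/keystone/credential-keys/
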
This command creates the service identity and the endpoints of the Identity service (Keystone); it also creates the Default domain, the admin project and the admin user with the specified password, and, if you pass one, the region, as below.

keystone-manage --config-file /etc/keystone/keystone.conf bootstrap \
--bootstrap-password ADMIN_PASS \
--bootstrap-admin-url http://simplecontroller:5000/v3/ \
--bootstrap-internal-url http://simplecontroller:5000/v3/ \
--bootstrap-public-url http://simplecontroller:5000/v3/ \
--bootstrap-region-id EURegion

Because the keystone package doesn't create the Apache virtual host for the Keystone WSGI application, create this link:

ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/

Enable and start the Apache service:

systemctl enable httpd.service
systemctl start httpd.service
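
For a quick smoke test (not in the original steps), hitting the Keystone version endpoint should return a JSON document:

curl http://simplecontroller:5000/v3
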
Install the client, to manage OpenStack and its services from the CLI:

yum install python-openstackclient python-osc-placement -y

Check that everything is OK:

openstack --os-auth-url http://simplecontroller:5000/v3 \
--os-project-domain-name Default --os-user-domain-name Default \
--os-project-name admin --os-username admin --os-password ADMIN_PASS token issue

Instead of passing all the required information on the command line, it is possible to put it in a file like the one below:

cat <<'EOF'> admin_openrc
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://simplecontroller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_REGION_NAME=EURegion
export PS1='\[\e]0;\u@\h: \w\a\]${debian_chroot:+($debian_chroot)}\[\033[01;31m\][\W]($OS_PROJECT_NAME-$OS_REGION_NAME)\[\033[00m\]:\[\033[01;34m\]\w\[\033[00m\]\$ '
EOF

Load this file into the current shell:

source admin_openrc
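
With the variables exported, the client reads the credentials from the environment, so the earlier check can be shortened to:

openstack token issue
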
Now you are able to execute any command with these credentials, so create the service project that will be used by all the other OpenStack services:

openstack project create --domain default --description "Service Project" service

Just a tip: enable bash completion for the openstack command!

openstack complete --shell bash | tee /etc/bash_completion.d/os.bash
source /etc/bash_completion.d/os.bash

To exercise more resources, create a domain, a project and a user, and assign a role to this new user:

openstack domain create --description "An Example Domain" mydomain
openstack project create --domain mydomain --description "Demo Project" myproject
openstack user create --domain mydomain --password myuser_pass myuser
openstack role add --project myproject --project-domain mydomain --user myuser --user-domain mydomain member

Check that everything is OK:

openstack --os-auth-url http://simplecontroller:5000/v3 \
--os-project-domain-name mydomain --os-user-domain-name mydomain \
--os-project-name myproject --os-username myuser --os-password myuser_pass token issue

Image service (Glance)

simplecontroller
First of all, create the Glance database and set the password that allows the glance user to connect to it:

mysql -u root -e "CREATE DATABASE glance"
mysql -u root -e "GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'GLANCE_DBPASS' ; flush privileges"

Create the Glance user and give it the admin role on the service project:

openstack user create --domain default --password GLANCE_PASS glance
openstack role add --project service --user glance admin

Now create the Glance service and add its endpoints:

openstack service create --name glance --description "OpenStack Image" image
openstack endpoint create --region EURegion image public http://simplecontroller:9292
openstack endpoint create --region EURegion image internal http://simplecontroller:9292
openstack endpoint create --region EURegion image admin http://simplecontroller:9292

Install the Glance package and wget, which will be used to download the images:

yum install openstack-glance wget -y

Set the database connection created a few steps above:

openstack-config --set /etc/glance/glance-api.conf database connection mysql+pymysql://glance:GLANCE_DBPASS@simplecontroller/glance

Set the OpenStack credentials in the Glance configuration file:

openstack-config --set /etc/glance/glance-api.conf keystone_authtoken www_authenticate_uri http://simplecontroller:5000
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_url http://simplecontroller:5000
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken memcached_servers simplecontroller:11211
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_type password
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken project_domain_name Default
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken user_domain_name Default
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken project_name service
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken username glance
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken password GLANCE_PASS

Sync the database, which means creating all the tables and populating them with the minimum information required:

su -s /bin/sh -c "glance-manage db_sync" glance

Confirm that the tables were created:

mysql -uroot glance -e "show tables"

Enable and start the Glance API service:

systemctl enable openstack-glance-api.service
systemctl start openstack-glance-api.service

Download the CirrOS image file from the internet:

wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img

Create a new image on OpenStack called "cirros"; the disk format must be QCOW2 and the container format bare:

openstack image create "cirros" \
--file cirros-0.4.0-x86_64-disk.img \
--disk-format qcow2 --container-format bare

Check that the command worked properly:

openstack --os-auth-url http://simplecontroller:5000/v3 \
--os-project-domain-name mydomain --os-user-domain-name mydomain \
--os-project-name myproject --os-username myuser --os-password myuser_pass image list

Also check that a new file exists in the path below:

ls -l /var/lib/glance/images/

Volume service (Cinder)

simplecontroller
First of all, create the Cinder database and set the password that allows the cinder user to connect to it:

mysql -u root -e "CREATE DATABASE cinder"
mysql -u root -e "GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'CINDER_DBPASS' ; flush privileges"

Create the Cinder user and give it the admin role on the service project:

openstack user create --domain default --password CINDER_PASS cinder
openstack role add --project service --user cinder admin

Now create the Cinder V2 service and add its endpoints:

openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
openstack endpoint create --region EURegion volumev2 public http://simplecontroller:8776/v2/%\(project_id\)s
openstack endpoint create --region EURegion volumev2 internal http://simplecontroller:8776/v2/%\(project_id\)s
openstack endpoint create --region EURegion volumev2 admin http://simplecontroller:8776/v2/%\(project_id\)s

And create the Cinder V3 service and add its endpoints:

openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
openstack endpoint create --region EURegion volumev3 public http://simplecontroller:8776/v3/%\(project_id\)s
openstack endpoint create --region EURegion volumev3 internal http://simplecontroller:8776/v3/%\(project_id\)s
openstack endpoint create --region EURegion volumev3 admin http://simplecontroller:8776/v3/%\(project_id\)s

Install the Cinder package:

yum install openstack-cinder -y

Set the database connection created a few steps above:

openstack-config --set /etc/cinder/cinder.conf database connection mysql+pymysql://cinder:CINDER_DBPASS@simplecontroller/cinder

Set the RabbitMQ credentials and which server we will work with:

openstack-config --set /etc/cinder/cinder.conf DEFAULT transport_url rabbit://openstack:RABBIT_PASS@simplecontroller

Set the OpenStack credentials in the Cinder configuration file:

openstack-config --set /etc/cinder/cinder.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken www_authenticate_uri http://simplecontroller:5000
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_url http://simplecontroller:5000
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken memcached_servers simplecontroller:11211
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_type password
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken project_domain_name Default
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken user_domain_name Default
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken project_name service
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken username cinder
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken password CINDER_PASS

To manage external resources with multiple threads and/or processes, it is necessary to use a lock path that keeps track of who is doing what:

openstack-config --set /etc/cinder/cinder.conf oslo_concurrency lock_path /var/lib/cinder/tmp

Sync the database, which means creating all the tables and populating them with the minimum information required:

su -s /bin/sh -c "cinder-manage db sync" cinder

Confirm that the tables were created:

mysql -uroot cinder -e "show tables"

Enable and start the Cinder API and Cinder Scheduler services:

systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service

Check which Cinder services are running; at this point it should be only the scheduler:

openstack volume service list

simpleservices
The purpose of this section is to describe one way of providing volumes on OpenStack; for this environment it will be NFS, but this blog has (or will have) other ways to provide Block Storage.
First of all, install the nfs-utils package, which contains the nfs-server service:

yum install nfs-utils -y

Create a new directory that will be shared with the other machines through the NFS protocol:

mkdir /mnt/vol01

In the NFS configuration file, write the export entry for the directory just created:

echo '/mnt/vol01 *(rw,no_root_squash)' > /etc/exports

Enable and start the NFS Server service:

systemctl enable nfs-server
systemctl start nfs-server

Check that everything is OK:

showmount -e

simplecontroller
Install the same package, but this time we (or rather, Cinder Volume) will use the 'mount.nfs' command:

yum install nfs-utils -y

Determine that the enabled backend will be 'nfsbkend':

openstack-config --set /etc/cinder/cinder.conf DEFAULT enabled_backends nfsbkend

Configure Cinder Volume to work with the NFS protocol and point it at the file that lists our NFS shares:

openstack-config --set /etc/cinder/cinder.conf nfsbkend nfs_shares_config /etc/cinder/shares.conf
openstack-config --set /etc/cinder/cinder.conf nfsbkend volume_driver cinder.volume.drivers.nfs.NfsDriver
openstack-config --set /etc/cinder/cinder.conf nfsbkend volume_backend_name nfsbackendname
openstack-config --set /etc/cinder/cinder.conf nfsbkend nfs_mount_options
openstack-config --set /etc/cinder/cinder.conf nfsbkend nfs_sparsed_volumes True

Point at our NFS server as below:

echo "simpleservices.xurupita.nl:/mnt/vol01" > /etc/cinder/shares.conf

Enable the Cinder Volume service and restart the Cinder services:

systemctl enable openstack-cinder-volume
systemctl restart openstack-cinder-api openstack-cinder-scheduler openstack-cinder-volume

Now it is possible to see the NFS mount:

mount

Still as the admin user, create the volume type and correlate it with the NFS backend configuration:

openstack volume type create NFS
openstack volume type set NFS --property volume_backend_name=nfsbackendname
openstack volume type list --long

Create a new volume:

openstack --os-auth-url http://simplecontroller:5000/v3 \
--os-project-domain-name mydomain --os-user-domain-name mydomain \
--os-project-name myproject --os-username myuser --os-password myuser_pass \
volume create --type NFS --size 2 vol01

Create another volume, but this time from an image:

openstack --os-auth-url http://simplecontroller:5000/v3 \
--os-project-domain-name mydomain --os-user-domain-name mydomain \
--os-project-name myproject --os-username myuser --os-password myuser_pass \
volume create --type NFS --image cirros --size 2 volimg01

List all volumes and their details:

cinder list --all

To finish, verify the new files on the mount point:

ls -R /var/lib/cinder/mnt/

Placement service (Placement)

simplecontroller
First of all, create the Placement database and set the password that allows the placement user to connect to it:

mysql -u root -e "CREATE DATABASE placement"
mysql -u root -e "GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY 'PLACEMENT_DBPASS'; flush privileges"

Now create the Placement service and add its endpoints:

openstack service create --name placement --description "Placement API" placement
openstack endpoint create --region EURegion placement public http://simplecontroller:8778
openstack endpoint create --region EURegion placement internal http://simplecontroller:8778
openstack endpoint create --region EURegion placement admin http://simplecontroller:8778

Create the Placement user and give it the admin role on the service project:

openstack user create --domain default --password PLACEMENT_PASS placement
openstack role add --project service --user placement admin

Install the Placement package:

yum install openstack-placement-api -y

Set the database connection created a few steps above:

openstack-config --set /etc/placement/placement.conf placement_database connection mysql+pymysql://placement:PLACEMENT_DBPASS@simplecontroller/placement

Set the OpenStack credentials in the Placement configuration file:

openstack-config --set /etc/placement/placement.conf keystone_authtoken auth_url http://simplecontroller:5000/v3
openstack-config --set /etc/placement/placement.conf keystone_authtoken memcached_servers simplecontroller:11211
openstack-config --set /etc/placement/placement.conf keystone_authtoken auth_type password
openstack-config --set /etc/placement/placement.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/placement/placement.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/placement/placement.conf keystone_authtoken project_name service
openstack-config --set /etc/placement/placement.conf keystone_authtoken username placement
openstack-config --set /etc/placement/placement.conf keystone_authtoken password PLACEMENT_PASS

Sync the database, which means creating all the tables and populating them with the minimum information required:

su -s /bin/sh -c "placement-manage db sync" placement

Confirm that the tables were created:

mysql -uroot placement -e "show tables"

This is a little workaround: the Apache config shipped with the Placement package doesn't grant access to the directory holding the WSGI script, so append this to it:

cat <<'EOF'>> /etc/httpd/conf.d/00-placement-api.conf
<Directory /usr/bin>
<IfVersion >= 2.4>
Require all granted
</IfVersion>
<IfVersion < 2.4>
Order allow,deny
Allow from all
</IfVersion>
</Directory>
EOF

Restart Apache to apply the changes:

systemctl restart httpd

Check that it is possible to connect to the Placement endpoints; at this point the command will have no output:

openstack resource provider list

Compute service (Nova)

simplecontroller
First of all, create the Nova databases (nova_api, nova and nova_cell0) and set the password that allows the nova user to connect to them:

mysql -u root -e "CREATE DATABASE nova_api"
mysql -u root -e "CREATE DATABASE nova"
mysql -u root -e "CREATE DATABASE nova_cell0"
mysql -u root -e "GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS'; flush privileges"
mysql -u root -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS'; flush privileges"
mysql -u root -e "GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS'; flush privileges"

Now create the Nova service and add its endpoints:

openstack service create --name nova --description "OpenStack Compute" compute
openstack endpoint create --region EURegion compute public http://simplecontroller:8774/v2.1
openstack endpoint create --region EURegion compute internal http://simplecontroller:8774/v2.1
openstack endpoint create --region EURegion compute admin http://simplecontroller:8774/v2.1

Create the Nova user and give it the admin role on the service project:

openstack user create --domain default --password NOVA_PASS nova
openstack role add --project service --user nova admin

Install the Nova packages:

yum install openstack-nova-api openstack-nova-conductor openstack-nova-novncproxy openstack-nova-scheduler -y

Set the database connections created a few steps above:

openstack-config --set /etc/nova/nova.conf api_database connection mysql+pymysql://nova:NOVA_DBPASS@simplecontroller/nova_api
openstack-config --set /etc/nova/nova.conf database connection mysql+pymysql://nova:NOVA_DBPASS@simplecontroller/nova

Set the RabbitMQ credentials and which server we will work with:

openstack-config --set /etc/nova/nova.conf DEFAULT transport_url rabbit://openstack:RABBIT_PASS@simplecontroller

Set the OpenStack credentials in the Nova configuration file:

openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_url http://simplecontroller:5000/v3
openstack-config --set /etc/nova/nova.conf keystone_authtoken memcached_servers simplecontroller:11211
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_type password
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_domain_name Default
openstack-config --set /etc/nova/nova.conf keystone_authtoken user_domain_name Default
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken username nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken password NOVA_PASS

Set which Glance service this Nova will talk to:

openstack-config --set /etc/nova/nova.conf glance api_servers http://simplecontroller:9292

Also determine which Cinder region this Nova will talk to:

openstack-config --set /etc/nova/nova.conf cinder os_region_name EURegion

Set the OpenStack credentials for the Placement service in the Nova configuration file:

openstack-config --set /etc/nova/nova.conf placement region_name EURegion
openstack-config --set /etc/nova/nova.conf placement project_domain_name Default
openstack-config --set /etc/nova/nova.conf placement project_name service
openstack-config --set /etc/nova/nova.conf placement auth_type password
openstack-config --set /etc/nova/nova.conf placement user_domain_name Default
openstack-config --set /etc/nova/nova.conf placement auth_url http://simplecontroller:5000/v3
openstack-config --set /etc/nova/nova.conf placement username placement
openstack-config --set /etc/nova/nova.conf placement password PLACEMENT_PASS

This configuration makes Nova add new compute nodes to the cell automatically:

openstack-config --set /etc/nova/nova.conf scheduler discover_hosts_in_cells_interval 300
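
With that interval set, new compute nodes are discovered every 300 seconds; if you do not want to wait, the discovery can also be run manually on the controller at any time:

su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
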
Sync the databases, which means creating all the tables and cells and populating them with the minimum information required:

su -s /bin/sh -c "nova-manage api_db sync" nova
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
su -s /bin/sh -c "nova-manage db sync" nova

Confirm that the tables were created:

mysql -uroot nova_cell0 -e "show tables"
mysql -uroot nova_api -e "show tables"
mysql -uroot nova -e "show tables"

Check that the cells are correctly configured:

su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova

Enable and start the Nova API, Nova Scheduler, Nova Conductor and Nova NoVNCProxy services:

systemctl enable openstack-nova-api.service openstack-nova-scheduler.service \
openstack-nova-conductor.service openstack-nova-novncproxy.service
systemctl start openstack-nova-api.service openstack-nova-scheduler.service \
openstack-nova-conductor.service openstack-nova-novncproxy.service

List all services from Nova:

openstack compute service list

simplecompute
Install the Nova package:

yum install openstack-nova-compute -y

Set the RabbitMQ credentials and which server we will work with:

openstack-config --set /etc/nova/nova.conf DEFAULT transport_url rabbit://openstack:RABBIT_PASS@simplecontroller

Set the OpenStack credentials in the Nova configuration file:

openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_url http://simplecontroller:5000/v3
openstack-config --set /etc/nova/nova.conf keystone_authtoken memcached_servers simplecontroller:11211
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_type password
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_domain_name Default
openstack-config --set /etc/nova/nova.conf keystone_authtoken user_domain_name Default
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken username nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken password NOVA_PASS

Set the OpenStack credentials for the Placement service in the Nova configuration file:

openstack-config --set /etc/nova/nova.conf placement region_name EURegion
openstack-config --set /etc/nova/nova.conf placement project_domain_name Default
openstack-config --set /etc/nova/nova.conf placement project_name service
openstack-config --set /etc/nova/nova.conf placement auth_type password
openstack-config --set /etc/nova/nova.conf placement user_domain_name Default
openstack-config --set /etc/nova/nova.conf placement auth_url http://simplecontroller:5000/v3
openstack-config --set /etc/nova/nova.conf placement username placement
openstack-config --set /etc/nova/nova.conf placement password PLACEMENT_PASS

Configure VNC so that 'simplecontroller' can act as a proxy from the external environment to the internal machines:

openstack-config --set /etc/nova/nova.conf vnc enabled true
openstack-config --set /etc/nova/nova.conf vnc server_listen 0.0.0.0
openstack-config --set /etc/nova/nova.conf vnc server_proxyclient_address 192.168.88.11
openstack-config --set /etc/nova/nova.conf vnc novncproxy_base_url http://192.168.88.7:6080/vnc_auto.html

Determine which Glance service will be used by this Nova Compute:

openstack-config --set /etc/nova/nova.conf glance api_servers http://simplecontroller:9292

This little script checks whether the processor supports hardware virtualization and sets the virtualization type accordingly:

if egrep -q '(vmx|svm)' /proc/cpuinfo
then
    openstack-config --set /etc/nova/nova.conf libvirt virt_type kvm
else
    openstack-config --set /etc/nova/nova.conf libvirt virt_type qemu
fi
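
To confirm what was written, the value can be read back (assuming the wrapper's --get mode; plain 'crudini --get' works the same way):

openstack-config --get /etc/nova/nova.conf libvirt virt_type
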
Enable and start the libvirtd and Nova Compute services:

systemctl enable libvirtd.service openstack-nova-compute.service
systemctl start libvirtd.service openstack-nova-compute.service
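
A quick way to confirm that both units came up on the compute node (not part of the original steps):

systemctl is-active libvirtd openstack-nova-compute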

simplecontroller
Now it is time to check that everything works well:

openstack compute service list
openstack hypervisor list
openstack resource provider list

Dashboard (Horizon)

simplecontroller
Install the Horizon package:

yum install openstack-dashboard -y

Determine which OpenStack this Horizon will connect to:

openstack-config --set /etc/openstack-dashboard/local_settings '' OPENSTACK_HOST '"simplecontroller"'

Set which hosts connections will be accepted from:

sed -e 's/ALLOWED_HOSTS.*/ALLOWED_HOSTS = ["*"]/' /etc/openstack-dashboard/local_settings -i

Set Horizon to store the session and cache on Memcached:

openstack-config --set /etc/openstack-dashboard/local_settings '' SESSION_ENGINE "'django.contrib.sessions.backends.cache'"
openstack-config --set /etc/openstack-dashboard/local_settings '' CACHES "{'default': {'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache','LOCATION': 'simplecontroller:11211',}}"

Enable Horizon to work with multiple domains:

openstack-config --set /etc/openstack-dashboard/local_settings '' OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT True

Choose the default role to be used by Horizon:

openstack-config --set /etc/openstack-dashboard/local_settings '' OPENSTACK_KEYSTONE_DEFAULT_ROLE '"member"'

Restart Apache to apply the changes:

systemctl restart httpd.service
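
Assuming the default CentOS packaging serves the dashboard under /dashboard, a quick check from any machine would be:

curl -s -o /dev/null -w '%{http_code}\n' http://192.168.88.7/dashboard/
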
Now finish the installation by following one of the next sections: