Multi-Node Cinder with EMC ViPR
In a previous series of blogs I showed how to set up EMC ViPR for Cinder block storage in an OpenStack environment (the series starts here). That series covered creating storage on a single-node instance of OpenStack. In real-world scenarios there will be multiple nodes in your OpenStack environment, and you may want to make different storage pools available to different nodes. To access multiple pools of storage with EMC ViPR, we need to set up the Cinder volume service on multiple nodes. Each instance of the ViPR Cinder driver can manage only one virtual array and one virtual pool within ViPR. A ViPR virtual array can be accessed by multiple Cinder driver instances, but a virtual pool can only be accessed by one driver.
In my lab I’ve deployed OpenStack Havana in a 3-node configuration using packstack. I created an answer file and edited it so that the services (Glance, Neutron, Cinder, Nova, etc.) were split among the nodes; nova3 runs the Cinder service. When deploying the Cinder service on subsequent nodes you need to create a connection to the MySQL database, so I also edited the answer file to set the SQL connection passwords instead of using auto-generated ones.
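The handful of answer-file entries that matter here are sketched below. Treat this strictly as an illustration: the exact key names change between packstack releases (and the host and password values are specific to my lab), so check the comments in your own generated answer file rather than copying these verbatim.
CONFIG_MYSQL_HOST=nova1.cto.emc.local
CONFIG_MYSQL_PW=password
CONFIG_CINDER_HOST=nova3.cto.emc.local
CONFIG_CINDER_DB_PW=password
CONFIG_CINDER_KS_PW=password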



Above you can see my lab environment: 3 nodes with Nova installed, the 2 virtual arrays, and the 4 physical pools that have been abstracted into virtual pools. I have already set up the EMC ViPR driver on node3, which is using the VNX-Tier0 pool. The rest of this blog will show how to set up additional Cinder volume services and use the EMC ViPR driver to access other virtual pools.

On the nova1 node, install Cinder:
yum install openstack-cinder

Edit the /etc/cinder/cinder.conf file and add the following lines:
sql_connection=mysql://cinder:password@nova1.cto.emc.local/cinder
qpid_hostname=nova1.cto.emc.local
The SQL connection points to the server hosting the MySQL database, in this case nova1. The connection is made as the user “cinder” with a password of “password”.
The qpid messaging service is also located on nova1.
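If you want to sanity-check that database connection before starting any services, and the mysql client is installed on the node, a quick query along these lines should list the Cinder tables, using the same host and credentials as cinder.conf above:
mysql -h nova1.cto.emc.local -u cinder -ppassword cinder -e "SHOW TABLES;"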

Set the iSCSI target service (tgtd) to start at boot, start it, and then start the openstack-cinder-volume service:
chkconfig tgtd on
service tgtd start
service openstack-cinder-volume start
cinder-manage host list
Check that the connection is working with the cinder-manage host list command.
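The tgtd service was set to start at boot above; if you want openstack-cinder-volume to survive a reboot as well, enable it the same way:
chkconfig openstack-cinder-volume on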
Now we install the ViPR Cinder driver as normal. See this post for details (HERE). A summary of the commands is below:
cd /tmp
wget https://github.com/emcvipr/controller-openstack-cinder/archive/master.zip
unzip master.zip
cd controller-openstack-cinder-master/cinder/volume/drivers/emc
cp -avr vipr /usr/lib/python2.6/site-packages/cinder/volume/drivers/emc
mkdir /usr/cookie
chown cinder /usr/cookie
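Before touching cinder.conf, it doesn’t hurt to confirm the files landed where the volume_driver path we are about to configure expects them. Something along these lines should list the copied driver modules and import without errors; the import is just a smoke test, and the path assumes the python2.6 site-packages layout used above:
ls /usr/lib/python2.6/site-packages/cinder/volume/drivers/emc/vipr
python -c "from cinder.volume.drivers.emc.vipr import emc_vipr_iscsi"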

Edit the cinder.conf file again and add the ViPR driver information. The settings below show the values that are correct for my environment:
volume_driver = cinder.volume.drivers.emc.vipr.emc_vipr_iscsi.EMCViPRISCSIDriver
vipr_hostname=10.10.81.53
vipr_port=4443
vipr_username=root
vipr_password=password
vipr_tenant=Provider Tenant
vipr_project=Cinder
vipr_varray=VNX-Iscsi-Block
vipr_cookiedir=/usr/cookie
rpc_response_timeout=300
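If you would rather script these edits than open the file by hand, and the openstack-utils package is installed, openstack-config can set the same values, for example:
openstack-config --set /etc/cinder/cinder.conf DEFAULT volume_driver cinder.volume.drivers.emc.vipr.emc_vipr_iscsi.EMCViPRISCSIDriver
openstack-config --set /etc/cinder/cinder.conf DEFAULT vipr_hostname 10.10.81.53
openstack-config --set /etc/cinder/cinder.conf DEFAULT vipr_varray VNX-Iscsi-Block
openstack-config --set /etc/cinder/cinder.conf DEFAULT vipr_tenant "Provider Tenant"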

Create a volume type in OpenStack and back it with a ViPR virtual pool. In this example we create the volume type “Tier1” with the first command and back it with the virtual pool “VNX-Tier1” with the second. Copy the ID returned by the first command and use it as the type ID in the second (type-key) command:
cinder --os-username admin --os-password password --os-tenant-name admin --os-auth-url=http://nova1.cto.emc.local:35357/v2.0 type-create Tier1
cinder --os-username admin --os-password password --os-tenant-name admin --os-auth-url=http://nova1.cto.emc.local:35357/v2.0 type-key 162c6219-8cc7-43c8-86e3-db60f24641cb set ViPR:VPOOL=VNX-Tier1
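To confirm the new type and its extra spec were stored correctly, the cinder client can list them back using the same credentials:
cinder --os-username admin --os-password password --os-tenant-name admin --os-auth-url=http://nova1.cto.emc.local:35357/v2.0 type-list
cinder --os-username admin --os-password password --os-tenant-name admin --os-auth-url=http://nova1.cto.emc.local:35357/v2.0 extra-specs-list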

On the Horizon dashboard you will see the new volume type listed.

The next series of commands authenticates the driver against the ViPR vApp, adds the host as a physical asset, inserts the iSCSI initiator into the correct network, and verifies connectivity from the host to the physical array:
python viprcli.py authenticate -u root -d /usr/cookie
cat /etc/iscsi/initiatorname.iscsi
# use the initiator name from the file above as the -wwpn value in the next command
python viprcli.py openstack add_host -name nova1.cto.emc.local -wwpn iqn.1994-05.com.redhat:f4a0e539edc
iscsiadm -m discovery -t sendtargets -p 192.168.1.101    # 192.168.1.101 is the iSCSI portal address of the VNX
service openstack-cinder-volume restart
The pool can now be used by OpenStack.
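As an optional sanity check of the iSCSI plumbing, you can list the target records created by the sendtargets discovery; the VNX targets should appear against the portal address used above:
iscsiadm -m node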

Using the Horizon dashboard, create a volume.

Use the volume type we just created.
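The same volume can also be created from the command line if you prefer; the volume name here is just an example:
cinder --os-username admin --os-password password --os-tenant-name admin --os-auth-url=http://nova1.cto.emc.local:35357/v2.0 create --volume-type Tier1 --display-name tier1-test 1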

Verify the creation in ViPR by checking the resources for the Cinder project.

In Unisphere we can see that the LUN was created in the pool.
The storage environment is a VNX with 3 tiers of storage, deployed as 3 pools. A virtual array has been created and the physical pools have been abstracted into virtual pools. To add subsequent nodes, repeat the steps above.

I then go on to add the Cinder service to node2 and give it access to the VNX-Tier2 virtual pool. Above you can see the 3 volumes that have been created in Horizon, one per pool.
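For node2 the procedure is identical; only the pool-specific pieces change. A rough sketch of the differences, assuming node2 serves the VNX-Tier2 pool as described above (the type ID comes from your own type-create output):
# on nova2, after installing openstack-cinder and the ViPR driver as above
cinder --os-username admin --os-password password --os-tenant-name admin --os-auth-url=http://nova1.cto.emc.local:35357/v2.0 type-create Tier2
cinder --os-username admin --os-password password --os-tenant-name admin --os-auth-url=http://nova1.cto.emc.local:35357/v2.0 type-key <ID-from-type-create> set ViPR:VPOOL=VNX-Tier2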