Last week EMC released their newest storage plugin for Icehouse Cinder. This version, FOUND HERE, now supports Havana, Icehouse, and EMC ViPR 1.1. I have previously blogged on how to set up the ViPR driver for version 1.0 and Havana (HERE). The plugin also now supports multiple backends simultaneously. The installation of the newest version of the plugin has one major change: it uses the ViPR CLI client to create volumes on ViPR virtual storage pools. This blog will show you how to set up the new driver.
Environment
3 nodes installed with CentOS and RDO Icehouse
ViPR is backed by 2 storage systems: Isilon for NFS and VNX for iSCSI block.
The physical arrays have been abstracted into virtual arrays.
From the virtual array we have created virtual pools. In this blog we will be using the Tier0-SSD virtual pool to create Cinder volumes.
We have also created a project called ObjectProject.
Install Cinder Driver
To install the cinder driver we first have to install the ViPR CLI on the Cinder node.
mkdir cli
cd cli
Download the CLI from your ViPR VM using the wget command
wget http://10.10.81.161:9998/cli
Extract the installer with the tar command
tar -xf cli
Install the ViPR CLI
./Installer_viprcli.linux
The ViPR CLI install begins. Note the installation directory, port number, and the ViPR hostname, which must be an FQDN.
Set the path environment using the source command
source /opt/vipr/cli/viprcli.profile
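At this point you can sanity-check the CLI. Printing the help confirms it is on your PATH; the authenticate call is only a sketch, using the root account and the /usr/cookie directory referenced later in cinder.conf, and the exact flags may differ slightly between CLI versions:
viprcli -h
viprcli authenticate -u root -d /usr/cookie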
With the CLI installed we can begin the driver installation
cd /tmp
wget github.com/emcvipr/controller-openstack-cinder/archive/master.zip
Unzip the driver
unzip master
Copy the vipr subdirectory into the Cinder EMC drivers directory
cd controller-openstack-cinder-master/cinder/volume/drivers/emc/
cp -avr vipr /usr/lib/python2.6/site-packages/cinder/volume/drivers/emc/
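A quick listing confirms the driver files landed where Cinder expects them (the path assumes the Python 2.6 site-packages location used above):
ls /usr/lib/python2.6/site-packages/cinder/volume/drivers/emc/vipr/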
Edit the cinder.conf file
vim /etc/cinder/cinder.conf
At the top of the file, in the [DEFAULT] section, enter the following:
volume_driver = cinder.volume.drivers.emc.vipr.emc_vipr_iscsi.EMCViPRISCSIDriver
vipr_hostname=vipr.ebc.emc.local
vipr_port=4443
vipr_username=root
vipr_password=password
vipr_cli_path=/opt/vipr/cli/bin
vipr_tenant=Provider Tenant
vipr_project=ObjectProject
vipr_varray=CinderiSCSI
vipr_cookiedir=/usr/cookie
rpc_response_timeout=300
Enter the correct information for your environment. Note that the default tenant in ViPR is “Provider Tenant”
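A side note on the new multi-backend support: instead of putting the driver options in [DEFAULT], cinder.conf can list backends with enabled_backends and give each one its own section. The snippet below is only a sketch; the section names and backend names are invented for illustration, and whether every vipr_* option can be set per section depends on your driver version.
enabled_backends=vipr-tier0,vipr-tier1

[vipr-tier0]
volume_driver=cinder.volume.drivers.emc.vipr.emc_vipr_iscsi.EMCViPRISCSIDriver
volume_backend_name=ViPR_Tier0
vipr_varray=CinderiSCSI
# ...plus the same vipr_hostname/port/credential/project options shown above

[vipr-tier1]
volume_driver=cinder.volume.drivers.emc.vipr.emc_vipr_iscsi.EMCViPRISCSIDriver
volume_backend_name=ViPR_Tier1
vipr_varray=CinderiSCSI
# ...plus the same vipr_* options
With multiple backends, the scheduler routes a request based on the volume_backend_name extra spec on the volume type, so you would set that key alongside ViPR:VPOOL when creating your types.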
Restart the cinder service
service openstack-cinder-volume restart
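Before going further it is worth confirming the driver loaded cleanly. The commands below just check the service status and show the most recent entries in the volume log:
service openstack-cinder-volume status
tail -n 50 /var/log/cinder/volume.log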
Create a volume type
cinder --os-username admin --os-tenant-name admin type-create Tier0
Map the volume type to a virtual pool in ViPR
cinder --os-username admin --os-tenant-name admin type-key Tier0 set ViPR:VPOOL=Tier0-SSD
Note that the type-key argument can be either the volume type's name or its ID.
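You can verify the type and its mapping from the cinder CLI; extra-specs-list should show the ViPR:VPOOL key set on the Tier0 type:
cinder --os-username admin --os-tenant-name admin type-list
cinder --os-username admin --os-tenant-name admin extra-specs-list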
On the ViPR GUI we add the cinder host. Click Add
Enter the host information. Click Save
The host is added to ViPR
Under the network tab, select the network associated with the array. In this example it's the ISCSI-NET network
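Before adding the host port, it helps to note the Cinder node's initiator IQN so you can recognize it in the list. On CentOS it is kept in the standard open-iscsi initiator file:
cat /etc/iscsi/initiatorname.iscsi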
You will see the 2 IP ports from the array already assigned to this network. Click the Add button and select Add Host Ports
Scroll through the list until you find the IQN of the Cinder host. Select it and click Add
The Port is added to the network. Click Save
On the Openstack Horizon dashboard we can now create a volume. Give the volume a name and select the type created earlier. Click Create Volume
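The same thing can be done from the command line. This example assumes a 10 GB volume with a made-up name and uses the Icehouse-era display-name flag:
cinder --os-username admin --os-tenant-name admin create --volume-type Tier0 --display-name tier0-vol01 10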
On the ViPR GUI you can see the volume being created by viewing the resource tab under users.
We can also view the creation of the volume in Unisphere.
We now have a ViPR iSCSI block volume in Cinder for use.
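Once cinder list shows the volume as available, it can be attached to an instance in the usual way. The instance name below is hypothetical, and the nova command assumes you have admin credentials sourced (for example from keystonerc_admin on RDO):
cinder --os-username admin --os-tenant-name admin list
nova volume-attach demo-instance <volume-id> auto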
If you have any issues with LUN creation check the /var/log/cinder/volume.log file. Details of volume creation are logged there.
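A quick way to pull out just the failures is to grep the log:
grep -i error /var/log/cinder/volume.log
If you need more detail, setting debug=True in the [DEFAULT] section of cinder.conf and restarting openstack-cinder-volume produces much more verbose output.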