Upgrading NexentaStor4 to NexentaStor5

Volumegroup specifics in NexentaStor5

NexentaStor5 introduces significant changes in ZFS volume management. In NexentaStor4, ZFS volumes could be located at any path in the dataset hierarchy. NexentaStor5 introduces datasets of a specific type called volumegroups (effectively filesystems with specific custom properties). In NexentaStor5, ZFS volumes can only be located inside volumegroups; they cannot reside within regular filesystems or at the root of a pool. During the upgrade from NexentaStor4 to NexentaStor5, the upgrade scripts create NexentaStor5 volumegroups and move the existing ZFS volumes into them. The most significant consequence is that the dataset hierarchy of the storage pools changes during the upgrade to NexentaStor5. iSCSI mappings are modified accordingly during the upgrade.

Migration behaviour for different Cinder volume locations

  • Existing NexentaStor4 ZFS volume path: pool/volume-b4f6bd24-1974-4343-88e0-f9a82affd56b
    Migration logic: create a new volume group pool/pool-vg and move the existing Cinder volumes from the root dataset into the created volume group
    New NexentaStor5 ZFS volume path: pool/pool-vg/volume-b4f6bd24-1974-4343-88e0-f9a82affd56b
  • Existing NexentaStor4 ZFS volume path: pool/san/volume-b4f6bd24-1974-4343-88e0-f9a82affd56b
    Migration logic: change the properties of the existing pool/san filesystem
    New NexentaStor5 ZFS volume path: pool/san/volume-b4f6bd24-1974-4343-88e0-f9a82affd56b
  • Existing NexentaStor4 ZFS volume path: pool/openstack/san/volume-b4f6bd24-1974-4343-88e0-f9a82affd56b
    Migration logic: create a new volume group pool/openstack-san-vg and move the existing Cinder volumes from the pool/openstack/san dataset into the created volume group
    New NexentaStor5 ZFS volume path: pool/openstack-san-vg/volume-b4f6bd24-1974-4343-88e0-f9a82affd56b
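
Conceptually, the first row above corresponds to ZFS operations like the following. This is an illustration only; the actual upgrade scripts also set the custom properties that mark the new filesystem as a volumegroup and update the iSCSI mappings:

# Illustration only: the real upgrade scripts also tag pool/pool-vg with the
# custom properties that make it a volumegroup and rewrite iSCSI mappings.
zfs create pool/pool-vg
zfs rename pool/volume-b4f6bd24-1974-4343-88e0-f9a82affd56b \
    pool/pool-vg/volume-b4f6bd24-1974-4343-88e0-f9a82affd56b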

Replication specifics in NexentaStor5

The Nexenta OpenStack Cinder driver behaves in specific ways when used together with NexentaStor5 High Performance Replication (HPR). Please consider the following:

  • Cinder volume snapshots will not be replicated by HPR
  • Cinder volumes will be replicated
  • Cinder clones will be replicated as independent datasets

Upgrade steps

Create openstack user
CLI@host> user create -p password -g other openstack

Note: Repeat this step on all NexentaStor5 nodes.
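
To confirm the account exists before proceeding, you can list the users from the same CLI. The user list subcommand is an assumption here; if your NexentaStor5 release uses different syntax, consult the CLI help for the user command:

CLI@host> user list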

Back up and update the existing Nexenta Cinder driver
controller-node# mv /usr/lib/python2.7/dist-packages/cinder/volume/drivers/nexenta /usr/lib/python2.7/dist-packages/cinder/volume/drivers/nexenta.bak
controller-node# git clone -b stable/pike https://github.com/Nexenta/cinder /tmp/nexenta-cinder
controller-node# cp -rf /tmp/nexenta-cinder/cinder/volume/drivers/nexenta /usr/lib/python2.7/dist-packages/cinder/volume/drivers
controller-node# rm -rf /tmp/nexenta-cinder

Note: Repeat this step on all OpenStack controller/Cinder nodes, using the Cinder driver branch that matches your release: queens, pike, ocata, newton, mitaka, liberty, kilo, juno or icehouse.
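
Before restarting anything, you can verify that the new NexentaStor5 driver modules are importable by the installed Cinder. A quick sanity check, assuming the same Python interpreter path as above:

controller-node# python2.7 -c "from cinder.volume.drivers.nexenta.ns5 import iscsi, nfs; print('NexentaStor5 drivers import OK')"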

Create a backup copy and modify the Cinder configuration file /etc/cinder/cinder.conf
controller-node# cp /etc/cinder/cinder.conf /etc/cinder/cinder.conf.bak
controller-node# vi /etc/cinder/cinder.conf

For example, typical changes for the existing Cinder configuration file:

NexentaStor4 Cinder configuration file:

[nexenta_iscsi]
volume_driver = cinder.volume.drivers.nexenta.iscsi.NexentaISCSIDriver
volume_backend_name = nexenta_iscsi
nexenta_volume = pool
nexenta_user = openstack
nexenta_password = password
nexenta_rest_protocol = http
nexenta_rest_port = 8457
nexenta_host = 192.168.1.1

[nexenta_nfs]
volume_driver = cinder.volume.drivers.nexenta.nfs.NexentaNfsDriver
volume_backend_name = nexenta_nfs
nexenta_shares_config = /etc/cinder/shares.cfg
nfs_mount_options = vers=3

NexentaStor5 Cinder configuration file:

[nexenta_iscsi]
volume_driver = cinder.volume.drivers.nexenta.ns5.iscsi.NexentaISCSIDriver
volume_backend_name = nexenta_iscsi
nexenta_volume = pool
nexenta_volume_group = pool-vg
nexenta_user = openstack
nexenta_password = password
nexenta_use_https = True
nexenta_rest_port = 8443
nexenta_host = 192.168.1.1
nexenta_rest_address = 10.1.1.1,10.1.1.2

[nexenta_nfs]
volume_driver = cinder.volume.drivers.nexenta.ns5.nfs.NexentaNfsDriver
volume_backend_name = nexenta_nfs
nas_host = 192.168.1.1
nexenta_rest_address = 10.1.1.1,10.1.1.2
nexenta_rest_port = 8443
nexenta_use_https = True
nexenta_user = openstack
nexenta_password = password
nas_share_path = pool1/openstack
nfs_mount_options = vers=3

NexentaStor4 vs NexentaStor5 Cinder Options Conversion Table

  • NexentaStor4 parameter: nexenta_host (the same parameter is used for REST management and data)
    NexentaStor5 parameter: nexenta_rest_address
    Description: NexentaStor4 does not have a separate value for REST API management
  • NexentaStor4 parameter: nexenta_rest_protocol = <auto,http,https>
    NexentaStor5 parameter: nexenta_use_https = <True,False>
    Description: NexentaStor4 uses http by default, NexentaStor5 uses https
  • NexentaStor4 parameter: nexenta_folder
    NexentaStor5 parameter: nexenta_volume_group
    Description: applies to the iSCSI backend only
  • NexentaStor4 parameter: nexenta_shares_config
    NexentaStor5 parameter: nas_host and nas_share_path
    Description: NexentaStor5 does not use a separate shares configuration file
  • NexentaStor4 parameter: nexenta_iscsi_target_portal_groups
    NexentaStor5 parameter: nexenta_iscsi_target_portals and nexenta_iscsi_target_portal_port
    Description: NexentaStor4 exposes TPGs, while NexentaStor5 creates them from a list of portals (IPs)
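
For example, if the NexentaStor4 backend relied on nexenta_iscsi_target_portal_groups, the NexentaStor5 equivalent is a list of portal IPs plus a port, per the last row above (the addresses below are placeholders):

[nexenta_iscsi]
nexenta_iscsi_target_portals = 10.2.1.1,10.2.1.2
nexenta_iscsi_target_portal_port = 3260
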
Restart cinder services: api, backup, scheduler and volume

For SysV or Upstart based distributions:

controller-node# service cinder-api restart
controller-node# service cinder-scheduler restart
controller-node# service cinder-volume restart
controller-node# service cinder-backup restart

For systemd based distributions:

controller-node# systemctl restart openstack-cinder-api.service
controller-node# systemctl restart openstack-cinder-scheduler.service
controller-node# systemctl restart openstack-cinder-volume.service
controller-node# systemctl restart openstack-cinder-backup.service
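
After the restart, confirm that the services came back up and the new backend initialized cleanly. A quick check, assuming the python-cinderclient is installed and admin credentials are sourced (log locations vary by distribution):

controller-node# cinder service-list
controller-node# tail -n 50 /var/log/cinder/cinder-volume.log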

Rollback steps

Restore saved Cinder driver and Cinder configuration file
controller-node# rm -rf /usr/lib/python2.7/dist-packages/cinder/volume/drivers/nexenta
controller-node# mv /usr/lib/python2.7/dist-packages/cinder/volume/drivers/nexenta.bak /usr/lib/python2.7/dist-packages/cinder/volume/drivers/nexenta
controller-node# cat /etc/cinder/cinder.conf.bak > /etc/cinder/cinder.conf
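
To confirm the configuration was restored exactly, compare the active file with the backup; no output means they are identical:

controller-node# diff /etc/cinder/cinder.conf /etc/cinder/cinder.conf.bak
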
Change Cinder configuration file

The Cinder volume location must be changed because the upgrade procedure moved the volumes into a volume group (see the migration table above). Point the rolled-back NexentaStor4 driver at the new location using the nexenta_folder parameter:

[nexenta_iscsi]
nexenta_folder = pool-vg 
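
For the iSCSI example used earlier in this document, the rolled-back NexentaStor4 backend section would then look like this (all other values carried over from the NexentaStor4 example above):

[nexenta_iscsi]
volume_driver = cinder.volume.drivers.nexenta.iscsi.NexentaISCSIDriver
volume_backend_name = nexenta_iscsi
nexenta_volume = pool
nexenta_folder = pool-vg
nexenta_user = openstack
nexenta_password = password
nexenta_rest_protocol = http
nexenta_rest_port = 8457
nexenta_host = 192.168.1.1
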
Restart cinder services: api, backup, scheduler and volume

For SysV or Upstart based distributions:

controller-node# service cinder-api restart
controller-node# service cinder-scheduler restart
controller-node# service cinder-volume restart
controller-node# service cinder-backup restart

For systemd based distributions:

controller-node# systemctl restart openstack-cinder-api.service
controller-node# systemctl restart openstack-cinder-scheduler.service
controller-node# systemctl restart openstack-cinder-volume.service
controller-node# systemctl restart openstack-cinder-backup.service
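
Finally, a short smoke test confirms that the rolled-back driver can still create and delete volumes. This assumes the cinder client and admin credentials; on very old clients (e.g. icehouse/juno) use --display-name instead of --name:

controller-node# cinder create --name rollback-test 1
controller-node# cinder list
controller-node# cinder delete rollback-test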