NexentaEdge 2.x

Supported OpenStack Releases

  • NexentaEdge 1.2 - Newton+

Cinder Driver Prerequisites

  • System must be initialized and licensed
  • Cluster/Tenant/Bucket created
  • NFS or iSCSI Gateway configured
  • Storage network configured between the NexentaEdge Gateway and OpenStack hypervisors (recommended: 10 GbE, MTU 9000)

Where to get Cinder Drivers?

It’s recommended to get the latest driver from Nexenta’s repository: https://github.com/Nexenta/cinder

The branches in the repository correspond to OpenStack releases.
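
To see which release branches are available without cloning, you can list them with standard git (a minimal sketch; the repository URL is the one above):

$ git ls-remote --heads https://github.com/Nexenta/cinder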

The following command can be used to download the correct version without having to switch branches:

$ git clone -b stable/newton https://github.com/Nexenta/cinder

Nexenta Drivers are located under the following path: https://github.com/Nexenta/cinder/tree/stable/newton/cinder/volume/drivers/nexenta

The path includes drivers for NexentaStor 4.x, NexentaStor 5.x, and NexentaEdge 2.0. Make sure to copy the whole folder.

Installation Steps

  1. Determine the Cinder driver location path used in your environment (see the sketch after this list)
  2. Clone or download the correct version of the drivers, unzip if downloaded, and copy them to the Cinder location. For example, drivers for the Newton release:
    $ git clone -b stable/newton https://github.com/Nexenta/cinder nexenta-cinder
    $ cp -rf nexenta-cinder/cinder/volume/drivers/nexenta /usr/lib/python2.7/dist-packages/cinder/volume/drivers
    
  3. Configure cinder.conf
  4. Restart Cinder Service
    • Systemd based system: $ sudo systemctl restart openstack-cinder-volume.service
    • Upstart/SysV based system: $ sudo service cinder-volume restart
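
To determine the driver location path in step 1, you can ask Python where the cinder package is installed (a minimal sketch; assumes the cinder package is importable by the Python interpreter that runs Cinder, and the resulting path varies by distribution and packaging):

$ python -c "import os, cinder.volume.drivers; print(os.path.dirname(cinder.volume.drivers.__file__))"
/usr/lib/python2.7/dist-packages/cinder/volume/drivers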

NexentaEdge 1.2 iSCSI - List of all available options

Parameter name | Default | Choices | Description
nexenta_rest_address | | | IP address of NexentaEdge management REST API endpoint
nexenta_rest_port | 0 | | HTTP(S) port to connect to the Nexenta REST API server. If set to zero, 8443 is used for HTTPS and 8080 for HTTP
nexenta_rest_protocol | auto | http, https, auto | Use http or https for the REST connection
nexenta_blocksize | 4096 | | Block size for datasets (NStor4)
nexenta_nbd_symlinks_dir | /dev/disk/by-path | | NexentaEdge logical path of directory to store symbolic links to NBDs
nexenta_rest_user | admin | | User name to connect to NexentaEdge
nexenta_rest_password | nexenta | | Password to connect to NexentaEdge
nexenta_replication_count | 3 | | NexentaEdge iSCSI LUN object replication count
nexenta_encryption | False | | Defines whether the NexentaEdge iSCSI LUN object has encryption enabled
nexenta_lun_container | | | NexentaEdge logical path of bucket for LUNs
nexenta_iscsi_service | | | NexentaEdge iSCSI service name
nexenta_client_address | | | NexentaEdge iSCSI Gateway client address for non-VIP service
nexenta_chunksize | 32768 | | NexentaEdge iSCSI LUN object chunk size
nexenta_iops_limit | 0 | | NexentaEdge iSCSI LUN object IOPS limit

NexentaEdge 1.2 iSCSI cinder.conf minimal config

[nedge_iscsi]
volume_driver=cinder.volume.drivers.nexenta.nexentaedge.iscsi.NexentaEdgeISCSIDriver
volume_backend_name = nedge
nexenta_rest_address = 10.0.0.1
nexenta_rest_port = 8080
nexenta_rest_protocol = http
nexenta_iscsi_target_portal_port = 3260
nexenta_rest_user = admin
nexenta_rest_password = nexenta
nexenta_lun_container = cl/tn/bk
nexenta_iscsi_service = iscsi
nexenta_client_address = 10.0.1.1
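
The backend section above is only used if it is also listed under enabled_backends in the [DEFAULT] section of cinder.conf. A volume type can then be mapped to this backend through the volume_backend_name property (a hedged example; the type name nedge is an arbitrary choice):

[DEFAULT]
enabled_backends = nedge_iscsi

$ cinder type-create nedge
$ cinder type-key nedge set volume_backend_name=nedge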

After configuring cinder.conf, restart the cinder-volume service:

sudo service cinder-volume restart (may differ depending on OS)

iSCSI Multipath

OpenStack Nova provides the ability to use iSCSI multipath. To enable multipath, add the following line to nova.conf in the [libvirt] section:

[libvirt]
iscsi_use_multipath = True

For this change to take effect, restart the nova-compute service: $ sudo service nova-compute restart
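
To verify that multipath is actually in use, attach a volume to an instance and inspect the multipath devices on the compute node (assumes the multipath-tools package is installed and multipathd is running; device names will differ per environment):

$ sudo multipath -ll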

Backup

This section describes how to configure the cinder-backup service with the Cinder NFS backup driver on top of a NexentaStor NFS share. Official documentation link: NFS backup driver. Example section for cinder.conf:

[DEFAULT]
backup_driver = cinder.backup.drivers.nfs
backup_share = 10.1.1.1:/pool/nfs/backup
backup_mount_options = vers=4

Note: 10.1.1.1 is the IP address of the NexentaStor appliance and /pool/nfs/backup is the NFS share path.

Steps for NexentaStor 4.x:

nmc@host1:/$ create folder pool/nfs/backup
nmc@host1:/$ share folder pool/nfs/backup nfs
Auth Type            : sys
Anonymous            : false
Read-Write           :
Read-Only            :
Root                 :
Extra Options        : uidmap=*:root:@10.1.1.2
Recursive            : true
Modifed NFS share for folder 'pool/nfs/backup'

Note: 10.1.1.2 is the IP address of the OpenStack Cinder host.

Steps for NexentaStor 5.x:

CLI@host> filesystem create -p pool/nfs/backup
CLI@host> nfs share -o uidMap='*:root:@10.1.1.2' pool/nfs/backup

Note: 10.1.1.2 is the IP address of the OpenStack Cinder host.
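
Once the share is created and cinder.conf is updated, restart the cinder-backup service and run a test backup (a hedged sketch; service names vary by distribution, and <volume-id> is a placeholder for an existing volume):

$ sudo service cinder-backup restart
$ cinder backup-create --name test-backup <volume-id>
$ cinder backup-list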

Cinder and Replication

  • Replication works at the consistency group level
  • Replication of clones will result in full filesystems (not efficient from a capacity perspective)
  • Cinder snapshots are omitted from replication in 5.1.x (a fix is expected in 5.2 FP1)

Troubleshooting

grep for “Traceback” in your OpenStack log folder. The default location is /var/log/<project name>, for example /var/log/cinder/cinder-volume.log.

Most of the errors related to storage are in Cinder or Nova logs.
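
For example, to search all Cinder logs for tracebacks (adjust the path to match your environment):

$ grep -rn "Traceback" /var/log/cinder/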

If the error is not self-explanatory, enable debug logging, restart the service, and try to reproduce the error. Debug logging will trace all calls to Nexenta, which helps narrow down the possible cause of the error.

To enable debug in cinder, add the following line to cinder.conf:

[DEFAULT]
debug=True

Then restart cinder-volume: $ sudo service cinder-volume restart