OpenStack:Ceph-COI-Installation
Installing a ceph cluster and configuring rbd-backed cinder volumes.
Currently, rbd-backed cinder volumes are only available on compute nodes that run cinder-volume. Work on using rbd-backed volumes on standalone cinder nodes is underway.
First steps
- Install your build server
- Run puppet_modules.py to download the necessary puppet modules
- Edit site.pp to fit your configuration.
- Define one mon (mon0) and at least one OSD server (osd0). If you wish to test with multiple MONs, you must have an odd number of MON nodes.
Configuration
- Change the cinder driver option to 'rbd'
# The cinder storage driver to use. Options are iscsi or rbd (ceph). Default is 'iscsi'.
$cinder_storage_driver = 'rbd'
- Enable ceph installation on your compute nodes
$cinder_ceph_enabled = true
- Specify your MON and OSD cobbler nodes
### Repeat as needed ###
# Make a copy of your swift storage node block above for each additional
# node in your swift cluster and paste the copy in this section. Be sure
# to change the host name, mac, ip, and power settings for each node.

### this block defines the ceph monitor nodes
### you will need to add a node type for each additional mon node,
### e.g. ceph-mon02, etc. This is due to their unique id requirements.
cobbler_node { "ceph-mon01":
  node_type     => "ceph-mon01",
  mac           => "11:22:33:cc:bb:aa",
  ip            => "192.168.242.180",
  power_address => "192.168.242.13",
}

### this block defines the ceph osd nodes
### add a new entry for each node
cobbler_node { "ceph-osd01":
  node_type     => "ceph-osd01",
  mac           => "11:22:33:cc:bb:aa",
  ip            => "192.168.242.181",
  power_address => "192.168.242.14",
}
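- Later, once the build server's puppet run has picked up these definitions, you can optionally confirm that the new nodes were registered with cobbler. This is only a sanity check and assumes the cobbler CLI is available on the build server, as it is on a standard COI build node.
cobbler system list
# name must match the cobbler_node title in site.pp
cobbler system report --name=ceph-mon01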
- Enable all the ceph options.
- Note that the string 'REPLACEME' MUST be left as is.
- Generate a unique $ceph_monitor_fsid by running 'uuidgen -r' on the command line (see the example below the settings block). A unique $ceph_monitor_secret can only be generated on a running ceph cluster, so leave that string as is.
$ceph_auth_type         = 'cephx'
$ceph_monitor_fsid      = 'e80afa94-a64c-486c-9e34-d55e85f26406'
$ceph_monitor_secret    = 'AQAJzNxR+PNRIRAA7yUp9hJJdWZ3PVz242Xjiw=='
$ceph_monitor_port      = '6789'
$ceph_monitor_address   = $::ipaddress
$ceph_cluster_network   = '192.168.242.0/24'
$ceph_public_network    = '192.168.242.0/24'
$ceph_release           = 'cuttlefish'
$cinder_rbd_user        = 'volumes'
$cinder_rbd_pool        = 'volumes'
$cinder_rbd_secret_uuid = 'REPLACEME'
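- To illustrate the fsid step above: run uuidgen on any machine that has it installed (it is part of util-linux on Ubuntu) and paste the output into site.pp as the value of $ceph_monitor_fsid. The UUID shown here is only an example; yours will differ.
$ uuidgen -r
e80afa94-a64c-486c-9e34-d55e85f26406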
- You will also need to uncomment both the path Exec statement and the puppet node statements.
node 'ceph-mon01' inherits os_base {
  # only mon0 should export the admin keys.
  # This means the following if statement is not needed on the additional mon nodes.
  if !empty($::ceph_admin_key) {
    @@ceph::key { 'admin':
      secret       => $::ceph_admin_key,
      keyring_path => '/etc/ceph/keyring',
    }
  }

  # each MON needs a unique id; you can start at 0 and increment as needed.
  class { 'ceph_mon': id => 0 }

  class { 'ceph::apt::ceph': release => $::ceph_release }
}

# This is the OSD node definition example. You will need to specify the
# public and cluster IP for each unique node.
node 'ceph-osd01' inherits os_base {
  class { 'ceph::conf':
    fsid            => $::ceph_monitor_fsid,
    auth_type       => $::ceph_auth_type,
    cluster_network => $::ceph_cluster_network,
    public_network  => $::ceph_public_network,
  }

  class { 'ceph::osd':
    public_address  => '192.168.242.3',
    cluster_address => '192.168.242.3',
  }

  # Specify the disk devices to use for OSD here.
  # Add a new entry for each device on the node that ceph should consume.
  # puppet agent will need to run four times for the device to be formatted
  # and for the OSD to be added to the crushmap.
  ceph::osd::device { '/dev/sdd': }

  class { 'ceph::apt::ceph': release => $::ceph_release }
}
Installation process
- Let's first bring up the mon0 node
apt-get update
run 'puppet agent -t -v --no-daemonize' at least three times
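- Before moving on to the OSD nodes, it is worth confirming that the monitor actually came up. The commands below are standard ceph CLI calls run on mon0; exact output formatting varies between releases.
ceph mon stat
ceph quorum_status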
- Then bring up the OSD node(s)
apt-get update
run 'puppet agent -t -v --no-daemonize' at least four times
- The ceph cluster will now be up.
Log in to mon0 and run 'ceph status'. The monmap line should show X mons and the osdmap line should show X osds; ensure that each OSD is marked as up.
- You can verify the ceph status with the following command
$ ceph status
   health HEALTH_WARN 320 pgs degraded; 320 pgs stuck unclean; recovery 2/4 degraded (50.000%)
   monmap e1: 1 mons at {0=192.168.2.71:6789/0}, election epoch 2, quorum 0 0
   osdmap e7: 1 osds: 1 up, 1 in
   pgmap v17: 320 pgs: 320 active+degraded; 138 bytes data, 4131 MB used, 926 GB / 930 GB avail; 0B/s rd, 11B/s wr, 0op/s; 2/4 degraded (50.000%)
   mdsmap e1: 0/0/1 up
- If your OSD is not marked as up, you will NOT be able to create block storage until it is.
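- If the osdmap shows fewer OSDs up than expected, the following standard ceph commands (run from mon0) help pinpoint which OSD is down; on the affected OSD node, the ceph-osd log usually shows why.
ceph osd tree
ceph osd stat
# adjust the OSD id in the file name to match your node
tail -n 50 /var/log/ceph/ceph-osd.0.log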
- Note: If you are using a disk that was previously used as an OSD device, you must write zeros to the drive first; if you do not, your OSD installation will fail. Do this by running:
dd if=/dev/zero of=/dev/DISK bs=1M count=100
- Installing your compute nodes will run the necessary commands to create the volumes pool and the client.volumes account.
- Your compute nodes will be automatically configured to use ceph for block storage.
- Run puppet agent at least twice on each compute node
puppet agent -t -v --no-daemonize
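- To spot-check that a compute node was wired up for rbd, look for the rbd-related settings puppet writes into cinder.conf and the libvirt secret it registers. The option names and keyring location below reflect a typical cuttlefish/Grizzly-era setup and may differ in your release, so treat this as a sketch rather than an exact recipe.
# option names vary by release
grep -E 'rbd|volume_driver' /etc/cinder/cinder.conf
virsh secret-list
# assumes the client.volumes keyring is in /etc/ceph
ceph -s --id volumes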
- Once these steps are complete, you should be able to create an rbd-backed volume and attach it to an instance as normal.
nova volume-create 1
nova volume-list
- For a few moments, depending on the speed of your ceph cluster, nova volume-list will show the volume status as "creating".
- Once it has been created, the volume should be marked "available".
- Failure states are either "error" or an indefinite "creating" status. If this is the case, check /var/log/cinder/cinder-volume.log for errors.
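- As an end-to-end check, you can attach the new volume to a running instance and confirm the backing image exists in the volumes pool. The instance id, volume id, and device name below are placeholders, and the rbd command assumes the client.volumes keyring is in the default /etc/ceph location on the node where you run it.
# placeholders: substitute a real instance id and volume id
nova volume-attach <instance-id> <volume-id> /dev/vdc
# assumes the client.volumes keyring is in /etc/ceph on this node
rbd ls --pool volumes --id volumes
# if the volume is stuck in "creating" or "error", look for tracebacks
grep -i -E 'error|traceback' /var/log/cinder/cinder-volume.log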