OpenStack:Ceph-COI-Installation

This page describes how to install a ceph cluster and configure rbd-backed cinder volumes.

Currently rbd-backed cinder volumes are only available on compute nodes running cinder-volume. Work on using rbd-backed volumes on standalone cinder nodes is underway.

First steps

  • Install your build server
  • Run puppet_modules.py to download the necessary puppet modules (see the example after this list)
  • Edit site.pp to fit your configuration.
  • Define one mon (mon0) and at least one OSD server (osd0). If you wish to test with multiple MONs, you must have an odd number of MON nodes.
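
For reference, the first two steps above typically look like the following on the build server. This is only a sketch: it assumes the script sits in your current working directory and that site.pp lives in the usual location of /etc/puppet/manifests; adjust paths to your environment.

python puppet_modules.py
vi /etc/puppet/manifests/site.pp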

Configuration

  • Change the cinder driver option to 'rbd'
# The cinder storage driver to use. Options are 'iscsi' or 'rbd' (ceph). Default is 'iscsi'.
$cinder_storage_driver         = 'rbd'
  • Enable ceph installation on your compute nodes
$cinder_ceph_enabled = true
  • Specify your MON and OSD cobbler nodes
### Repeat as needed ###
# Make a copy of your swift storage node block above for each additional
# node in your swift cluster and paste the copy in this section. Be sure
# to change the host name, mac, ip, and power settings for each node.

### this block defines the ceph monitor nodes
### you will need to add a node type for each additional mon node
### eg ceph-mon02, etc. This is due to their unique id requirements
cobbler_node { "ceph-mon01":
    node_type     => "ceph-mon01",
    mac           => "11:22:33:cc:bb:aa",
    ip            => "192.168.242.180",
    power_address => "192.168.242.13",
  }

### this block defines the ceph osd nodes
### add a new entry for each node
cobbler_node { "ceph-osd01":
    node_type     => "ceph-osd01",
    mac           => "11:22:33:cc:bb:aa",
    ip            => "192.168.242.181",
    power_address => "192.168.242.14",
  }
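
If you are testing with multiple MONs, add a cobbler_node entry for each additional monitor with its own node_type (for example ceph-mon02). The mac, ip, and power_address values below are placeholders; replace them with your own.

### example second monitor node; adjust mac/ip/power values for your environment
cobbler_node { "ceph-mon02":
    node_type     => "ceph-mon02",
    mac           => "11:22:33:cc:bb:ab",
    ip            => "192.168.242.182",
    power_address => "192.168.242.15",
  }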
  • Enable all the ceph options.
    • Note that the string 'REPLACEME' MUST be left as is.
    • Generate a unique $ceph_monitor_fsid by running 'uuidgen -r' on the command line (see the example after the options block below). A unique $ceph_monitor_secret can only be generated on a running ceph cluster, so leave that string as is.
$ceph_auth_type         = 'cephx'
$ceph_monitor_fsid      = 'e80afa94-a64c-486c-9e34-d55e85f26406'
$ceph_monitor_secret    = 'AQAJzNxR+PNRIRAA7yUp9hJJdWZ3PVz242Xjiw=='
$ceph_monitor_port      = '6789'
$ceph_monitor_address   = $::ipaddress
$ceph_cluster_network   = '192.168.242.0/24'
$ceph_public_network    = '192.168.242.0/24'
$ceph_release           = 'cuttlefish'
$cinder_rbd_user        = 'volumes'
$cinder_rbd_pool        = 'volumes'
$cinder_rbd_secret_uuid = 'REPLACEME'
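
To generate a fresh value for $ceph_monitor_fsid, run uuidgen on the command line and paste the result into site.pp:

$ uuidgen -r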
  • You will also need to uncomment both the path Exec statement and the puppet node statements
node 'ceph-mon01' inherits os_base {
# only mon0 should export the admin keys.
# This means the following if statement is not needed on the additional mon nodes.
if !empty($::ceph_admin_key) {
  @@ceph::key { 'admin':
    secret       => $::ceph_admin_key,
    keyring_path => '/etc/ceph/keyring',
  }
}

# each MON needs a unique id, you can start at 0 and increment as needed.
class {'ceph_mon': id => 0 }
class { 'ceph::apt::ceph': release => $::ceph_release }
}
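
# If you are testing with more than one MON, give each additional monitor its own
# node definition with the next unique id. Additional MONs do not export the admin
# key (only mon0 does), so the if statement above is omitted. The name 'ceph-mon02'
# and id => 1 below are illustrative values only.
node 'ceph-mon02' inherits os_base {
class {'ceph_mon': id => 1 }
class { 'ceph::apt::ceph': release => $::ceph_release }
}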

# This is the OSD node definition example. You will need to specify the public and cluster IP for each unique node.

node 'ceph-osd01' inherits os_base {
class { 'ceph::conf':
    fsid            => $::ceph_monitor_fsid,
    auth_type       => $::ceph_auth_type,
    cluster_network => $::ceph_cluster_network,
    public_network  => $::ceph_public_network,
  }
class { 'ceph::osd':
    public_address  => '192.168.242.3',
    cluster_address => '192.168.242.3',
  }
# Specify the disk devices to use for OSD here.
# Add a new entry for each device on the node that ceph should consume.
# puppet agent will need to run four times for the device to be formatted,
# and for the OSD to be added to the crushmap.
ceph::osd::device { '/dev/sdd': }
class { 'ceph::apt::ceph': release => $::ceph_release }
}
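
If an OSD node has more than one data disk, add one ceph::osd::device line per disk inside the node definition. The device names below are placeholders:

ceph::osd::device { '/dev/sdd': }
ceph::osd::device { '/dev/sde': }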


Installation process

  • First, bring up the mon0 node
apt-get update
run 'puppet agent -t -v --no-daemonize' at least three times
  • Then bring up the OSD node(s)
apt-get update
run 'puppet agent -t -v --no-daemonize' at least four times
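
The repeated runs give puppet time to exchange the exported ceph keys and finish formatting the OSD devices. A simple way to repeat the agent run, shown here only as a convenience sketch, is a shell loop:

for i in 1 2 3 4; do puppet agent -t -v --no-daemonize; done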
  • The ceph cluster will now be up.
Log in to mon0 and run 'ceph status':
the monmap line should show X mons
the osdmap line should show X osds; ensure that each OSD is marked as up.
  • You can verify the ceph status with the following command
$ ceph status
health HEALTH_WARN 320 pgs degraded; 320 pgs stuck unclean; recovery 2/4 degraded (50.000%)
monmap e1: 1 mons at {0=192.168.2.71:6789/0}, election epoch 2, quorum 0 0
osdmap e7: 1 osds: 1 up, 1 in
pgmap v17: 320 pgs: 320 active+degraded; 138 bytes data, 4131 MB used, 926 GB / 930 GB avail; 0B/s rd, 11B/s wr, 0op/s; 2/4 degraded (50.000%)
mdsmap e1: 0/0/1 up
  • If your OSD is not marked as up, you will NOT be able to create block storage until it is (see the check after these notes).
  • NOTE THAT IF YOU ARE USING A DISK THAT WAS PREVIOUSLY USED AS AN OSD DEVICE YOU MUST WRITE ZEROS TO THE DRIVE; if you do not, your OSD installation will fail.
  • Do this by running:
dd if=/dev/zero of=/dev/DISK bs=1M count=100
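
A quick way to check whether each OSD is up is 'ceph osd tree' on mon0, a standard ceph command that lists every OSD along with its up/down status:

$ ceph osd tree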

Installing your compute nodes will run the necessary commands to create the volumes pool and the client.volumes account.

Your compute nodes will be automatically configured to use ceph for block storage.

Run 'puppet agent -t -v --no-daemonize' at least two times on each compute node.
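
To confirm that the volumes pool and the client.volumes account were created, you can check from mon0 with standard ceph commands (exact output depends on your cluster):

$ ceph osd lspools
$ ceph auth list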

Once these steps are complete, you should be able to create an rbd-backed volume and attach it to an instance as normal.

nova volume-create 1
nova volume-list
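
Attaching works the same as with any other volume type; for example, using placeholder IDs for the instance and the volume, and /dev/vdb as an example device name:

nova volume-attach <instance-id> <volume-id> /dev/vdb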

For a few moments, depending on the speed of your ceph cluster, nova volume-list will show the volume status as "creating".

Once the volume is created, its status should change to "available".

Failure states will show either an "error" status or an indefinite "creating" status. If this is the case, check /var/log/cinder/cinder-volume.log for errors.
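
On the ceph side, a successfully created volume appears as an image in the volumes pool; you can confirm this from mon0 (image names typically include the cinder volume UUID):

$ rbd -p volumes ls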
