OpenStack:Ceph-COI-Installation


Installing a ceph cluster and configuring rbd-backed cinder volumes.

First steps

  • Install your build server.
  • Run puppet_modules.py to download the necessary Puppet modules.
  • Edit site.pp to fit your configuration (a sketch of these steps follows this list).
  • You must define one MON and at least one OSD to use Ceph.
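A minimal sketch of those steps on the build server, assuming the COI defaults of puppet_modules.py in /root and the manifest at /etc/puppet/manifests/site.pp (both paths are assumptions; adjust them for your environment):

cd /root                          # assumption: directory holding puppet_modules.py
python puppet_modules.py          # download the required Puppet modules
vi /etc/puppet/manifests/site.pp  # define at least one MON and one OSD node here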

Choosing Your Configuration

The Cisco COI Grizzly g.1 release supports only standalone Ceph nodes; follow only those instructions. The Cisco COI Grizzly g.2 release supports both standalone and integrated configurations. The integrated options let you run MONs on control and compute servers, along with OSDs on compute servers. You can also have standalone cinder-volume nodes act as OSD servers.


For all Ceph configurations, uncomment the following in site.pp:

$ceph_auth_type         = 'cephx'
$ceph_monitor_fsid      = 'e80afa94-a64c-486c-9e34-d55e85f26406'
$ceph_monitor_secret    = 'AQAJzNxR+PNRIRAA7yUp9hJJdWZ3PVz242Xjiw=='
$ceph_monitor_port      = '6789'
$ceph_monitor_address   = $::ipaddress1
$ceph_cluster_network   = '10.0.0.0/24'
$ceph_public_network    = '10.0.0.0/24'
$ceph_release           = 'cuttlefish'
$cinder_rbd_user        = 'admin'
$cinder_rbd_pool        = 'volumes'
$cinder_rbd_secret_uuid = 'e80afa94-a64c-486c-9e34-d55e85f26406'
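The fsid, monitor secret, and secret UUID above are sample values. If you prefer to generate your own (optional, and assuming the ceph packages are installed on the host where you run this), standard tools will do it:

uuidgen                        # use the output for $ceph_monitor_fsid
ceph-authtool --gen-print-key  # use the output for $ceph_monitor_secret
uuidgen                        # use the output for $cinder_rbd_secret_uuid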


and uncomment the Exec block:
Exec {
  path        => '/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin',
#  environment => "https_proxy=$::proxy",
}

Ceph Standalone Node Deployment

Configure the cobbler node entries for your Ceph servers.

Uncomment or add the Puppet Ceph node entries:

On the first MON only:

  if !empty($::ceph_admin_key) {
    @@ceph::key { 'admin':
      secret       => $::ceph_admin_key,
      keyring_path => '/etc/ceph/keyring',
    }
  }

  class {'ceph_mon': id => 0 }

On any additional MON, you only need the following:
class {'ceph_mon': id => 0 }

YOU MUST INCREMENT THIS ID, AND IT MUST BE UNIQUE TO EACH MON
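For example, a deployment with three MONs would use a different id in each MON's node definition (the values here are only an illustration; any unique integers work):

  # first MON
  class {'ceph_mon': id => 0 }
  # second MON
  class {'ceph_mon': id => 1 }
  # third MON
  class {'ceph_mon': id => 2 }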


Ceph OSD nodes need the following:

  class { 'ceph::conf':
    fsid            => $::ceph_monitor_fsid,
    auth_type       => $::ceph_auth_type,
    cluster_network => $::ceph_cluster_network,
    public_network  => $::ceph_public_network,
  }
  class { 'ceph::osd':
    public_address  => '10.0.0.3',
    cluster_address => '10.0.0.3',
  }

  ceph::osd::device { '/dev/sdb': }
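Put together, a complete standalone OSD node definition might look like the sketch below. The addresses and device names are placeholders; use each node's own public and cluster addresses, and add one ceph::osd::device entry per data disk that Ceph should consume.

  class { 'ceph::conf':
    fsid            => $::ceph_monitor_fsid,
    auth_type       => $::ceph_auth_type,
    cluster_network => $::ceph_cluster_network,
    public_network  => $::ceph_public_network,
  }
  class { 'ceph::osd':
    public_address  => '10.0.0.4',
    cluster_address => '10.0.0.4',
  }
  # one entry per data disk on this node
  ceph::osd::device { '/dev/sdb': }
  ceph::osd::device { '/dev/sdc': }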

Ceph MON on the Controller Only and OSD on All Compute Nodes

Uncomment the following:

$controller_has_mon = true
$osd_on_compute     = true

Uncomment the following in your control server Puppet node definition:
  #  if !empty($::ceph_admin_key) {
  #  @@ceph::key { 'admin':
  #    secret       => $::ceph_admin_key,
  #    keyring_path => '/etc/ceph/keyring',
  #  }
  #  }

  # each MON needs a unique id, you can start at 0 and increment as needed.
  #  class {'ceph_mon': id => 0 }

  
Add the following to each compute server Puppet node definition:
  class { 'ceph::conf':
    fsid            => $::ceph_monitor_fsid,
    auth_type       => $::ceph_auth_type,
    cluster_network => $::ceph_cluster_network,
    public_network  => $::ceph_public_network,
  }
  class { 'ceph::osd':
    public_address  => '10.0.0.3',
    cluster_address => '10.0.0.3',
  }
  # Specify the disk devices to use for OSD here.
  # Add a new entry for each device on the node that ceph should consume.
  # puppet agent will need to run four times for the device to be formatted,
  #   and for the OSD to be added to the crushmap.
  ceph::osd::device { '/dev/sdb': }

Ceph Multiple MON On Specified Controller and Compute Nodes, with OSD on Separate Compute Nodes

You cannot co-locate MONs and OSDs on the same server in this use case.

Uncomment the following:

$controller_has_mon = true
$computes_have_mons = false

Uncomment the following in your control server Puppet node definition:
  #  if !empty($::ceph_admin_key) {
  #  @@ceph::key { 'admin':
  #    secret       => $::ceph_admin_key,
  #    keyring_path => '/etc/ceph/keyring',
  #  }
  #  }

  # each MON needs a unique id, you can start at 0 and increment as needed.
  #  class {'ceph_mon': id => 0 }
  
For each additional MON on a compute node, add the following:
  # each MON needs a unique id, you can start at 0 and increment as needed.
  #  class {'ceph_mon': id => 0 }
  
For each compute node that does NOT contain a MON, you can specify the OSD configuration:
  class { 'ceph::conf':
    fsid            => $::ceph_monitor_fsid,
    auth_type       => $::ceph_auth_type,
    cluster_network => $::ceph_cluster_network,
    public_network  => $::ceph_public_network,
  }
  class { 'ceph::osd':
    public_address  => '10.0.0.3',
    cluster_address => '10.0.0.3',
  }
  # Specify the disk devices to use for OSD here.
  # Add a new entry for each device on the node that ceph should consume.
  # puppet agent will need to run four times for the device to be formatted,
  #   and for the OSD to be added to the crushmap.
  ceph::osd::device { '/dev/sdb': }


Ceph MON and OSD on the Same Nodes

WARNING: YOU MUST HAVE AN ODD NUMBER OF MON NODES.

You can have as many OSD nodes as you like, but the number of MON nodes must be odd to reach quorum.

Uncomment $ceph_combo.

Do NOT uncomment these variables:

$osd_on_compute
$controller_has_mon
$computes_have_mon

You will need to specify the normal MON and OSD definitions for each puppet node as usual.
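As a sketch, a combined MON+OSD node definition simply includes both sets of declarations shown earlier. The MON id, addresses, and device below are placeholders to adapt per node, and the first MON also needs the ceph::key export block shown in the standalone section.

  class {'ceph_mon': id => 0 }

  class { 'ceph::conf':
    fsid            => $::ceph_monitor_fsid,
    auth_type       => $::ceph_auth_type,
    cluster_network => $::ceph_cluster_network,
    public_network  => $::ceph_public_network,
  }
  class { 'ceph::osd':
    public_address  => '10.0.0.2',
    cluster_address => '10.0.0.2',
  }
  ceph::osd::device { '/dev/sdb': }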

Deploying a Standalone Cinder Volume OSD node

Add the following to the Puppet node definition:

  # if you are using rbd, uncomment the following ceph classes
  #class { 'ceph::conf':
  #  fsid            => $::ceph_monitor_fsid,
  #  auth_type       => $::ceph_auth_type,
  #  cluster_network => $::ceph_cluster_network,
  #  public_network  => $::ceph_public_network,
  #}
  #class { 'ceph::osd':
  #  public_address  => '192.168.242.22',
  #  cluster_address => '192.168.242.22',
  #}

Configuring Glance to use Ceph

Uncomment the following:

# $glance_ceph_enabled = true
# $glance_ceph_user    = 'admin'
# $glance_ceph_pool    = 'images'

Then change $glance_backend to 'rbd'.
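After those edits, the relevant site.pp settings read as follows (the user and pool names are the defaults shown above):

$glance_backend      = 'rbd'
$glance_ceph_enabled = true
$glance_ceph_user    = 'admin'
$glance_ceph_pool    = 'images'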

Configuring Cinder to use Ceph

Uncomment the following:

# The cinder_ceph_enabled configures cinder to use rbd-backed volumes.
# $cinder_ceph_enabled           = true

Then change $cinder_storage_driver to 'rbd'.
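The resulting site.pp settings look like this:

$cinder_storage_driver = 'rbd'
$cinder_ceph_enabled   = true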

Ceph Node Installation and Testing

If you do not set puppet to autostart in the site.pp, you will have to run the agent manually as shown here. Regardless of the start method, the agent must run at least four times on each node running any Ceph services in order for Ceph to be properly configured.

  • First bring up the mon0 node, run 'apt-get update', and then run 'puppet agent -t -v --no-daemonize' at least four times.
  • Then bring up the OSD node(s), run 'apt-get update', and again run 'puppet agent -t -v --no-daemonize' at least four times.
  • The Ceph cluster will now be up. You can verify this by logging in to the mon0 node and running the 'ceph status' command. The "monmap" line should show 1 or more MONs (depending on the number you configured). The "osdmap" line should show 1 or more OSDs (depending on the number you configured), and each OSD should be marked as "up". There will be one OSD per configured disk; e.g., if you have a single OSD node with three disks available for Ceph, you will see 3 OSDs in your 'ceph status' output.
$ ceph status
health HEALTH_WARN 320 pgs degraded; 320 pgs stuck unclean; recovery 2/4 degraded (50.000%)
monmap e1: 1 mons at {0=192.168.2.71:6789/0}, election epoch 2, quorum 0 0
osdmap e7: 1 osds: 1 up, 1 in
pgmap v17: 320 pgs: 320 active+degraded; 138 bytes data, 4131 MB used, 926 GB / 930 GB avail; 0B/s rd, 11B/s wr, 0op/s; 2/4 degraded (50.000%)
mdsmap e1: 0/0/1 up
  • If your OSD is not marked as up, you will NOT be able to create block storage until it is.
  • Note: If you are using a disk that was previously used as an OSD device, you must write zeros to the drive. Do this by running:
dd if=/dev/zero of=/dev/DISK bs=1M count=100

If you do not zero the disk, your OSD installation will fail.

  • Installing your compute nodes will run the necessary commands to create the volumes pool and the client.volumes account.
  • Your compute nodes will be automatically configured to use ceph for block storage.
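You can confirm both from the mon0 node with standard ceph commands (not COI-specific):

ceph osd lspools    # the 'volumes' pool should be listed
ceph auth list      # should include a client.volumes entry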

Testing

Testing Cinder

  • Once these steps are complete, you should be able to create an rbd-backed volume and attach it to an instance as normal.
nova volume-create 1
nova volume-list (note the new volume's UUID)

Check Ceph to see that the new volume exists

rbd --pool volumes ls

This command returns a list of UUIDs; one of them will match the UUID shown by nova volume-list. That is your volume.

  • For a moment, depending on the speed of your ceph cluster, nova volume-list will show the volume status as "creating".
  • After it's created, it should mark the volume "available".
  • Failure states are either "error" or an indefinite "creating" status. If this is the case, check /var/log/cinder/cinder-volume.log for errors.
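Once the volume shows as "available", you can attach it to a running instance; the instance UUID, volume UUID, and device name below are placeholders:

nova volume-attach <instance-uuid> <volume-uuid> /dev/vdc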

Testing Glance

Download an image and add it to glance:

wget http://cloud-images.ubuntu.com/precise/current/precise-server-cloudimg-amd64-disk1.img
glance add name="precise-x86_64" is_public=true container_format=ovf disk_format=qcow2 < precise-server-cloudimg-amd64-disk1.img

Check that the image is stored in Ceph:

glance image-list
rbd --pool images ls

As with Cinder, you should see a matching UUID in the glance image-list output and the rbd command output. This is your image stored in Ceph.
