OpenStack:Ceph-COI-Installation


Installing a ceph cluster and configuring rbd-backed cinder volumes.

First steps

  • Install your build server
  • Run puppet_modules.py to download the necessary puppet modules
  • Edit site.pp to fit your configuration (a minimal sketch of these first steps follows this list).
  • Define one mon (mon0) and at least one OSD server (osd0). If you wish to test with multiple MONs, you must have an odd number of MON nodes.
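
A minimal sketch of those first steps on the build server, assuming the default COI locations for the module download script and the site manifest (adjust both paths to match your install):

  # Fetch the Puppet modules required by this release
  python puppet_modules.py
  # Define mon0 and at least one OSD node (osd0) in the site manifest;
  # /etc/puppet/manifests/site.pp is the usual location, but confirm it on your build server
  vi /etc/puppet/manifests/site.pp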

Choosing Your Configuration

The Cisco COI Grizzly g1 release supports only standalone Ceph nodes; if you are on g1, follow only those instructions. The Cisco COI Grizzly g2 release supports both standalone and integrated deployments. The integrated options allow you to run MONs on control and compute servers, along with OSDs on compute servers. You can also use standalone cinder-volume nodes as OSD servers.


For all Ceph configurations, uncomment the following:

$ceph_auth_type         = 'cephx'
$ceph_monitor_fsid      = 'e80afa94-a64c-486c-9e34-d55e85f26406'
$ceph_monitor_secret    = 'AQAJzNxR+PNRIRAA7yUp9hJJdWZ3PVz242Xjiw=='
$ceph_monitor_port      = '6789'
$ceph_monitor_address   = $::ipaddress1
$ceph_cluster_network   = '10.0.0.0/24'
$ceph_public_network    = '10.0.0.0/24'
$ceph_release           = 'cuttlefish'
$cinder_rbd_user        = 'admin'
$cinder_rbd_pool        = 'volumes'
$cinder_rbd_secret_uuid = 'e80afa94-a64c-486c-9e34-d55e85f26406'


Also uncomment the Exec block:
Exec {
  path        => '/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin',
#  environment => "https_proxy=$::proxy",
}
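
The $ceph_monitor_fsid, $ceph_monitor_secret, and $cinder_rbd_secret_uuid values above are sample values; each cluster should use its own. A quick way to generate replacements, assuming uuidgen and ceph-authtool (from the Ceph client packages) are available somewhere convenient:

  uuidgen                        # new fsid; the sample config reuses the same UUID for $cinder_rbd_secret_uuid
  ceph-authtool --gen-print-key  # new value for $ceph_monitor_secret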

Ceph Standalone Node Deployment

Configure the cobbler node entries for your Ceph servers.

Uncomment or add the Puppet Ceph node entries:

On the first MON only:

  if !empty($::ceph_admin_key) {
    @@ceph::key { 'admin':
      secret       => $::ceph_admin_key,
      keyring_path => '/etc/ceph/keyring',
    }
  }

  class {'ceph_mon': id => 0 }

On any additional MON, you only need the following:

  class {'ceph_mon': id => 0 }

YOU MUST INCREMENT THIS ID, AND IT MUST BE UNIQUE TO EACH MON (for example, the second MON gets id => 1, the third id => 2).


Ceph OSD nodes need the following:

  class { 'ceph::conf':
    fsid            => $::ceph_monitor_fsid,
    auth_type       => $::ceph_auth_type,
    cluster_network => $::ceph_cluster_network,
    public_network  => $::ceph_public_network,
  }
  # public_address and cluster_address are this OSD node's own IPs
  class { 'ceph::osd':
    public_address  => '10.0.0.3',
    cluster_address => '10.0.0.3',
  }

  ceph::osd::device { '/dev/sdb': }

Ceph MON on the Controller and OSD on All Compute Nodes

Uncomment the following:

$controller_has_mon = true
$osd_on_compute     = true

Uncomment the following in your control server Puppet node definition:
  #  if !empty($::ceph_admin_key) {
  #  @@ceph::key { 'admin':
  #    secret       => $::ceph_admin_key,
  #    keyring_path => '/etc/ceph/keyring',
  #  }
  #  }

  # each MON needs a unique id, you can start at 0 and increment as needed.
  #  class {'ceph_mon': id => 0 }

  
Add the following to each compute server Puppet node definition:
  class { 'ceph::conf':
    fsid            => $::ceph_monitor_fsid,
    auth_type       => $::ceph_auth_type,
    cluster_network => $::ceph_cluster_network,
    public_network  => $::ceph_public_network,
  }
  class { 'ceph::osd':
    public_address  => '10.0.0.3',
    cluster_address => '10.0.0.3',
  }
  # Specify the disk devices to use for OSD here.
  # Add a new entry for each device on the node that ceph should consume.
  # puppet agent will need to run four times for the device to be formatted,
  #   and for the OSD to be added to the crushmap.
  ceph::osd::device { '/dev/sdb': }

Ceph Multi-MON Across Controller(s) and Compute(s), with Some OSD on Compute(s)

You cannot co-locate MONs and OSDs on the same server.

Uncomment the following:

$controller_has_mon = true
$computes_have_mons = false

Uncomment the following in your control server Puppet node definition:
  #  if !empty($::ceph_admin_key) {
  #  @@ceph::key { 'admin':
  #    secret       => $::ceph_admin_key,
  #    keyring_path => '/etc/ceph/keyring',
  #  }
  #  }

  # each MON needs a unique id, you can start at 0 and increment as needed.
  #  class {'ceph_mon': id => 0 }
  
For each additional MON on a compute node, add the following:
  # each MON needs a unique id, you can start at 0 and increment as needed.
  #  class {'ceph_mon': id => 0 }
  
For each compute node that does NOT contain a MON, you can specify the OSD configuration:
  class { 'ceph::conf':
    fsid            => $::ceph_monitor_fsid,
    auth_type       => $::ceph_auth_type,
    cluster_network => $::ceph_cluster_network,
    public_network  => $::ceph_public_network,
  }
  class { 'ceph::osd':
    public_address  => '10.0.0.3',
    cluster_address => '10.0.0.3',
  }
  # Specify the disk devices to use for OSD here.
  # Add a new entry for each device on the node that ceph should consume.
  # puppet agent will need to run four times for the device to be formatted,
  #   and for the OSD to be added to the crushmap.
  ceph::osd::device { '/dev/sdb': }

Deploying a Standalone Cinder Volume OSD node

Add the following to the Puppet node definition:

  # if you are using rbd, uncomment the following ceph classes
  #class { 'ceph::conf':
  #  fsid            => $::ceph_monitor_fsid,
  #  auth_type       => $::ceph_auth_type,
  #  cluster_network => $::ceph_cluster_network,
  #  public_network  => $::ceph_public_network,
  #}
  #class { 'ceph::osd':
  #  public_address  => '192.168.242.22',
  #  cluster_address => '192.168.242.22',
  #}

Configuring Glance to use Ceph

Uncomment the following:

# $glance_ceph_enabled = true
# $glance_ceph_user    = 'admin'
# $glance_ceph_pool    = 'images'

Change $glance_backend to 'rbd'.
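
Once the puppet agent has converged with these settings, you can confirm that new images actually land in Ceph. A quick check, assuming the node has a keyring with access to the images pool and using a hypothetical CirrOS image file:

  glance image-create --name cirros-rbd --disk-format qcow2 \
    --container-format bare --file cirros-0.3.0-x86_64-disk.img
  rbd -p images ls    # the new image's UUID should be listed in the images pool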

Configuring Cinder to use Ceph

Uncomment the following:

# The cinder_ceph_enabled configures cinder to use rbd-backed volumes.
# $cinder_ceph_enabled           = true

Then change $cinder_storage_driver to 'rbd'.
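
After the puppet agent runs with these settings, cinder-volume should be pointed at the RBD driver. A rough check on the cinder-volume node (the option names below are the standard Grizzly RBD settings; the exact lines written by the puppet modules may differ slightly):

  grep -E 'volume_driver|rbd_' /etc/cinder/cinder.conf
  # Expect something along the lines of:
  #   volume_driver=cinder.volume.drivers.rbd.RBDDriver
  #   rbd_pool=volumes
  #   rbd_user=admin
  #   rbd_secret_uuid=e80afa94-a64c-486c-9e34-d55e85f26406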




Ceph Node Installation

  • First bring up the mon0 node and run:
apt-get update
run 'puppet agent -t -v --no-daemonize' at least three times
  • Then bring up the OSD node(s) and run:
 apt-get update
run 'puppet agent -t -v --no-daemonize' at least four times
  • The Ceph cluster will now be up. You can verify by logging in to the mon0 node and running the 'ceph status' command. The "monmap" line should show 1 or more MONs (depending on the number you configured). The "osdmap" should show 1 or more OSDs (depending on the number you configured), and each OSD should be marked as "up".
$ ceph status
health HEALTH_WARN 320 pgs degraded; 320 pgs stuck unclean; recovery 2/4 degraded (50.000%)
monmap e1: 1 mons at {0=192.168.2.71:6789/0}, election epoch 2, quorum 0 0
osdmap e7: 1 osds: 1 up, 1 in
pgmap v17: 320 pgs: 320 active+degraded; 138 bytes data, 4131 MB used, 926 GB / 930 GB avail; 0B/s rd, 11B/s wr, 0op/s; 2/4 degraded (50.000%)
mdsmap e1: 0/0/1 up
  • If your OSD is not marked as up, you will NOT be able to create block storage until it is.
  • Note: If you are using a disk that was previously used as an OSD device, you must write zeros to the drive. Do this by running:
dd if=/dev/zero of=/dev/DISK bs=1M count=100

If you do not zero the disk, your OSD installation will fail.

  • Installing your compute nodes will run the necessary commands to create the volumes pool and the client.volumes account.
  • Your compute nodes will be automatically configured to use ceph for block storage.
  • Run the puppet agent at least twice on each compute node:
puppet agent -t -v --no-daemonize
  • Once these steps are complete, you should be able to create a rbd-backed volume and attach it to an instance as normal.
nova volume-create 1
nova volume-list
  • For a few moments, depending on the speed of your ceph cluster, nova volume-list will show the volume status as "creating".
  • Once created, the volume should be marked "available".
  • Failure states are either "error" or an indefinite "creating" status. If this happens, check /var/log/cinder/cinder-volume.log for errors. Some additional verification commands are sketched after this list.
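
A few extra checks, run from a node that has the Ceph admin keyring (these are standard Ceph/RBD client commands; the exact volume name is whatever UUID cinder assigned):

  rados lspools       # the volumes pool (and images pool, if glance is rbd-backed) should exist
  ceph osd tree       # every OSD you defined should be listed and marked "up"
  rbd -p volumes ls   # each cinder volume appears as volume-<uuid> in the volumes pool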
