OpenStack:Ceph-COI-Installation


Installing a Ceph cluster with the Cisco OpenStack Installer (COI) and configuring RBD-backed Cinder volumes.

First steps

  • Install your build server
  • Run puppet_modules.py to download the necessary Puppet modules
  • Edit site.pp to fit your configuration.
  • Define one mon (mon0) and at least one OSD server (osd0). If you wish to test with multiple mons, you must have an odd number of mon nodes.
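
A minimal sketch of these first steps on the build server; the /etc/puppet/manifests path is an assumption based on a typical COI layout, so adjust it to wherever your manifests and puppet_modules.py actually live:

cd /etc/puppet/manifests
python puppet_modules.py
vi site.pp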

Choosing Your Configuration

The Cisco COI Grizzly g1 release supports only standalone Ceph nodes; if you are running g1, follow only the standalone instructions. The Cisco COI Grizzly g2 release supports both standalone and integrated deployments. The integrated options let you run MONs on control and compute servers, along with OSDs on compute servers. You can also use standalone cinder-volume nodes as OSD servers.


For all Ceph configurations, uncomment the following in site.pp:

$ceph_auth_type         = 'cephx'
$ceph_monitor_fsid      = 'e80afa94-a64c-486c-9e34-d55e85f26406'
$ceph_monitor_secret    = 'AQAJzNxR+PNRIRAA7yUp9hJJdWZ3PVz242Xjiw=='
$ceph_monitor_port      = '6789'
$ceph_monitor_address   = $::ipaddress1
$ceph_cluster_network   = '10.0.0.0/24'
$ceph_public_network    = '10.0.0.0/24'
$ceph_release           = 'cuttlefish'
$cinder_rbd_user        = 'admin'
$cinder_rbd_pool        = 'volumes'
$cinder_rbd_secret_uuid = 'e80afa94-a64c-486c-9e34-d55e85f26406'
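
The fsid, monitor secret, and secret UUID above are only sample values; you will normally generate your own. One way to do this (uuidgen is present in most base installs, and ceph-authtool comes with the ceph-common package):

uuidgen                          # use for $ceph_monitor_fsid and $cinder_rbd_secret_uuid
ceph-authtool --gen-print-key    # use for $ceph_monitor_secret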


Also uncomment the Exec block (and, if your build server must go through a proxy, the environment line inside it):
Exec {
  path        => '/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin',
#  environment => "https_proxy=$::proxy",
}

Ceph Standalone Node Deployment

Configure the cobbler node entries for your Ceph servers.
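
Each Ceph server needs a cobbler node entry like the ones already defined for your other servers in site.pp. The entry below is only an illustrative sketch: the name, MAC, IP, and power address are placeholders, and the parameter names should be checked against the existing cobbler_node entries in your site.pp:

  cobbler_node { 'ceph-mon01':
    node_type     => 'ceph-mon01',
    mac           => '00:11:22:33:44:55',
    ip            => '192.168.242.180',
    power_address => '192.168.242.113',
  }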

Uncomment or add the Puppet Ceph node entries:

On the first mon (mon0) only, add the following inside that node's definition in site.pp (the final unmatched brace below closes the enclosing node block):

  if !empty($::ceph_admin_key) {
    @@ceph::key { 'admin':
      secret       => $::ceph_admin_key,
      keyring_path => '/etc/ceph/keyring',
    }
  }

  class {'ceph_mon': id => 0 }
}

On any additional mon, you only need the following:
class {'ceph_mon': id => 0 }

YOU MUST INCREMENT THIS ID, AND IT MUST BE UNIQUE TO EACH MON
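
For example, if mon0 uses id => 0, the second mon node's entry would be:

class {'ceph_mon': id => 1 }

and a third mon would use id => 2.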


Ceph OSD nodes need the following (public_address and cluster_address should be the OSD node's own IPs on the public and cluster networks):

  class { 'ceph::conf':
    fsid            => $::ceph_monitor_fsid,
    auth_type       => $::ceph_auth_type,
    cluster_network => $::ceph_cluster_network,
    public_network  => $::ceph_public_network,
  }
  class { 'ceph::osd':
    public_address  => '10.0.0.3',
    cluster_address => '10.0.0.3',
  }

  ceph::osd::device { '/dev/sdb': }
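
If an OSD node has more than one data disk, you can declare a ceph::osd::device resource for each disk (the device names here are examples; use the disks actually present on the node):

  ceph::osd::device { '/dev/sdb': }
  ceph::osd::device { '/dev/sdc': }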

Installation process

  • First bring up the mon0 node and run:
apt-get update
run 'puppet agent -t -v --no-daemonize' at least three times
  • Then bring up the OSD node(s) and run:
apt-get update
run 'puppet agent -t -v --no-daemonize' at least four times
  • The Ceph cluster will now be up. You can verify this by logging in to the mon0 node and running the 'ceph status' command. The "monmap" line should show one or more mons (depending on the number you configured). The "osdmap" line should show one or more OSDs (depending on the number you configured), and each OSD should be marked as "up".
$ ceph status
health HEALTH_WARN 320 pgs degraded; 320 pgs stuck unclean; recovery 2/4 degraded (50.000%)
monmap e1: 1 mons at {0=192.168.2.71:6789/0}, election epoch 2, quorum 0 0
osdmap e7: 1 osds: 1 up, 1 in
pgmap v17: 320 pgs: 320 active+degraded; 138 bytes data, 4131 MB used, 926 GB / 930 GB avail; 0B/s rd, 11B/s wr, 0op/s; 2/4 degraded (50.000%)
mdsmap e1: 0/0/1 up
  • If your OSD is not marked as up, you will NOT be able to create block storage until it is.
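To see which OSDs are down, you can check with the standard Ceph CLI on the mon0 node, for example:
ceph osd tree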
  • Note: if you are using a disk that was previously used as an OSD device, you must write zeros to the drive before reusing it. Do this by running:
dd if=/dev/zero of=/dev/DISK bs=1M count=100

If you do not zero the disk, your OSD installation will fail.

  • Installing your compute nodes will run the necessary commands to create the volumes pool and the client.volumes account.
  • Your compute nodes will be automatically configured to use ceph for block storage.
  • Run puppet agent at least twice on each compute node:
puppet agent -t -v --no-daemonize
  • Once these steps are complete, you should be able to create an RBD-backed volume and attach it to an instance as normal (see the attach example at the end of this section).
nova volume-create 1
nova volume-list
  • For a few moments, depending on the speed of your Ceph cluster, nova volume-list will show the volume status as "creating".
  • Once it is created, the volume will be marked "available".
  • Failure states are either "error" or an indefinite "creating" status. If you see either, check /var/log/cinder/cinder-volume.log for errors.
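  • Once the volume is "available", you can attach it to a running instance. The instance and volume IDs below are placeholders; take them from nova list and nova volume-list.
nova volume-attach <instance-id> <volume-id> /dev/vdc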
