Detailed Ceph cluster setup tutorial (ceph-deploy)

Summary

ceph-deploy is a good fit for production clusters; this guide does not use cephadm. It is somewhat more involved, but not difficult: get the details right and it is simply more commands to run.

Lab environment

Host         Public IP (client-facing)   Cluster IP (internal)   Roles
ceph-deploy  192.168.2.120               -                       deploys and manages the cluster
ceph-node1   192.168.2.121               192.168.6.135           ceph-mon, ceph-mgr, ceph-osd
ceph-node2   192.168.2.122               192.168.6.136           ceph-mon, ceph-mgr, ceph-osd
ceph-node3   192.168.2.123               192.168.6.137           ceph-mon, ceph-osd
Sizing notes:
- ceph-osd nodes: bare metal is generally recommended; 10 or 12 cores with 32 GB RAM (64 GB is better).
- ceph-mgr: two nodes are enough for high availability; more can be used.
- ceph-mon: at least 3 nodes. The mons' performance requirements are modest, so running them in VMs is fine; 4C/8G is enough, 4C/16G is better.

Preparation

Disable the firewall and SELinux

systemctl disable firewalld
systemctl stop firewalld
setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
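A quick sanity check that both changes took effect (SELinux reports Permissive until the next reboot, Disabled afterwards):

# Verify on each node
systemctl is-active firewalld    # expect: inactive
systemctl is-enabled firewalld   # expect: disabled
getenforce                       # expect: Permissive (Disabled after a reboot)
grep '^SELINUX=' /etc/selinux/config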

Set the hostname on each server

hostnamectl set-hostname ceph-node1    # on ceph-node1
hostnamectl set-hostname ceph-node2    # on ceph-node2
hostnamectl set-hostname ceph-node3    # on ceph-node3
hostnamectl set-hostname ceph-deploy   # on ceph-deploy

Configure /etc/hosts so the nodes can resolve each other. Add the following entries to /etc/hosts on every node (one way to distribute them is sketched after the list):

192.168.2.120 ceph-deploy
192.168.2.121 ceph-node1
192.168.2.122 ceph-node2
192.168.2.123 ceph-node3
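Rather than editing four files by hand, the entries can be appended once and pushed out. A minimal sketch, assuming root SSH between the nodes is still available at this stage:

# Append the entries locally, copy them to the other nodes, then verify resolution
cat >> /etc/hosts <<'EOF'
192.168.2.120 ceph-deploy
192.168.2.121 ceph-node1
192.168.2.122 ceph-node2
192.168.2.123 ceph-node3
EOF

for ip in 192.168.2.121 192.168.2.122 192.168.2.123; do
    tail -n 4 /etc/hosts | ssh root@"$ip" "cat >> /etc/hosts"
done

for h in ceph-deploy ceph-node{1..3}; do ping -c 1 "$h" >/dev/null && echo "$h ok"; done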

Add the EPEL repository on every server (e.g. as /etc/yum.repos.d/epel.repo)

[epel]
name=Extra Packages for Enterprise Linux 7 - $basearch
baseurl=http://mirrors.tuna.tsinghua.edu.cn/epel/7/$basearch/
#mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-7&arch=$basearch
failovermethod=priority
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7

[epel-debuginfo]
name=Extra Packages for Enterprise Linux 7 - $basearch - Debug
baseurl=http://mirrors.tuna.tsinghua.edu.cn/epel/7/$basearch/debug
#mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-debug-7&arch=$basearch
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
gpgcheck=1

[epel-source]
name=Extra Packages for Enterprise Linux 7 - $basearch - Source
baseurl=http://mirrors.tuna.tsinghua.edu.cn/epel/7/SRPMS
#mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-source-7&arch=$basearch
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
gpgcheck=1

Add the Ceph repository on every server (e.g. as /etc/yum.repos.d/ceph.repo)

[Ceph]
name=Ceph packages for $basearch
baseurl=http://mirrors.tuna.tsinghua.edu.cn/ceph/rpm-mimic/el7/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.tuna.tsinghua.edu.cn/ceph/keys/release.asc

[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.tuna.tsinghua.edu.cn/ceph/rpm-mimic/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.tuna.tsinghua.edu.cn/ceph/keys/release.asc

[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.tuna.tsinghua.edu.cn/ceph/rpm-mimic/el7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.tuna.tsinghua.edu.cn/ceph/keys/release.asc
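After saving both files (the sketch assumes the filenames epel.repo and ceph.repo under /etc/yum.repos.d), refresh the cache and confirm the repos are active:

# Run on each node
yum clean all && yum makecache
yum repolist enabled | grep -Ei 'epel|ceph'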

Add a ceph user on every server

groupadd ceph -g 3333
useradd -u 3333 -g 3333 ceph
echo "cephadmin888" | passwd --stdin ceph

Configure sudoers on every server so the ceph user can run any command via sudo without a password

echo "ceph    ALL=(ALL)       NOPASSWD:ALL" >> /etc/sudoers 

Generate an SSH key on the ceph-deploy node

# Switch to the ceph user first -- if you skip this, the key (and the passwordless
# login you are about to set up) belongs to whatever user you are currently using,
# and all the later deployment steps run as ceph.
su - ceph
# Generate the SSH key pair
ssh-keygen

Copy the SSH key to ceph-node1, ceph-node2 and ceph-node3

# Run as the ceph user (no sudo -- the key belongs to ceph, not root)
ssh-copy-id ceph@192.168.2.121
ssh-copy-id ceph@192.168.2.122
ssh-copy-id ceph@192.168.2.123
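ceph-deploy depends on this passwordless login, so confirm it works before continuing:

# Still as the ceph user on ceph-deploy: each command should print the remote
# hostname without prompting for a password
for h in ceph-node{1..3}; do ssh -o BatchMode=yes ceph@"$h" hostname; done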

Deploy the cluster

Create a working directory on the ceph-deploy node

su - ceph
[ceph@ceph-deploy ~]$ mkdir ceph-cluster-deploy
[ceph@ceph-deploy ~]$ cd ceph-cluster-deploy/
[ceph@ceph-deploy ceph-cluster-deploy]$

Install the ceph-deploy package

[ceph@ceph-deploy ceph-cluster-deploy]$ sudo yum install ceph-deploy python-setuptools python2-subprocess3 

After installation, check that the ceph-deploy command is available:

[ceph@ceph-deploy ceph-cluster-deploy]$ ceph-deploy
usage: ceph-deploy [-h] [-v | -q] [--version] [--username USERNAME]
                   [--overwrite-conf] [--ceph-conf CEPH_CONF]
                   COMMAND ...

Check the ceph-deploy version

ceph-deploy 2.0.1 installs the mimic release of Ceph by default (i.e. 13.2.10). To install a different release, pass --release.

[ceph@ceph-deploy ceph-cluster-deploy]$ ceph-deploy --version
2.0.1

Run ceph-deploy install to set up the OSD nodes

From the ceph-deploy node, run the install command to install the Ceph packages on the cluster's OSD nodes.

[ceph@ceph-deploy ceph-cluster-deploy]$ ceph-deploy install --help
usage: ceph-deploy install [-h] [--stable [CODENAME] | --release [CODENAME] |
                           --testing | --dev [BRANCH_OR_TAG]]
                           [--dev-commit [COMMIT]] [--mon] [--mgr] [--mds]
                           [--rgw] [--osd] [--tests] [--cli] [--all]
                           [--adjust-repos | --no-adjust-repos | --repo]
                           [--local-mirror [LOCAL_MIRROR]]
                           [--repo-url [REPO_URL]] [--gpg-url [GPG_URL]]
                           [--nogpgcheck]
                           HOST [HOST ...]

Install Ceph packages on remote hosts.

positional arguments:
  HOST                  hosts to install on
... (remaining options omitted)

# Two options matter here:
# --no-adjust-repos   install packages without modifying source repos
#                     (keeps the Tsinghua mirrors configured earlier; otherwise
#                     ceph-deploy rewrites them back to the painfully slow
#                     upstream repos)
# --nogpgcheck        install packages without gpgcheck (skip GPG verification)

Run the command:

# Note: ceph-node{1..3} is shell brace expansion (the same construct used in for loops);
# the command it expands to is:
#   ceph-deploy install --no-adjust-repos --nogpgcheck ceph-node1 ceph-node2 ceph-node3

# Run this to perform the installation
[ceph@ceph-deploy ceph-cluster-deploy]$ ceph-deploy install --no-adjust-repos --nogpgcheck ceph-node{1..3}

The full output is omitted here; on success it ends with something like:

[ceph-node3][DEBUG ] Complete!
[ceph-node3][INFO  ] Running command: sudo ceph --version
[ceph-node3][DEBUG ] ceph version 13.2.10 (564bdc4ae87418a232fc901524470e1a0f76d641) mimic (stable)

Run ceph-deploy new to initialize the cluster

# Help for the ceph-deploy new subcommand
[ceph@ceph-deploy ceph-cluster-deploy]$ ceph-deploy new --help
usage: ceph-deploy new [-h] [--no-ssh-copykey] [--fsid FSID]
                       [--cluster-network CLUSTER_NETWORK]
                       [--public-network PUBLIC_NETWORK]
                       MON [MON ...]

Start deploying a new cluster, and write a CLUSTER.conf and keyring for it.

positional arguments:
  MON                   initial monitor hostname, fqdn, or hostname:fqdn pair

optional arguments:
  -h, --help            show this help message and exit
  --no-ssh-copykey      do not attempt to copy SSH keys
  --fsid FSID           provide an alternate FSID for ceph.conf generation
  --cluster-network CLUSTER_NETWORK
                        specify the (internal) cluster network
  --public-network PUBLIC_NETWORK
                        specify the public network for a cluster

Run the command:

# My mons share the OSD nodes, so the targets are ceph-node1, ceph-node2 and ceph-node3.
# In production, dedicated mon servers are recommended.
[ceph@ceph-deploy ceph-cluster-deploy]$ ceph-deploy new --cluster-network 192.168.6.0/24 --public-network 192.168.2.0/24 ceph-node1 ceph-node2 ceph-node3
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /bin/ceph-deploy new --cluster-network 192.168.6.0/24 --public-network 192.168.2.0/24 ceph-node1 ceph-node2
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  func                          : <function new at 0x7fa768c08de8>
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fa76837f8c0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  ssh_copykey                   : True
[ceph_deploy.cli][INFO  ]  mon                           : ['ceph-node1', 'ceph-node2']
[ceph_deploy.cli][INFO  ]  public_network                : 192.168.2.0/24
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  cluster_network               : 192.168.6.0/24
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  fsid                          : None
[ceph_deploy.new][DEBUG ] Creating new cluster named ceph
[ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds
[ceph-node1][DEBUG ] connected to host: ceph-deploy
[ceph-node1][INFO  ] Running command: ssh -CT -o BatchMode=yes ceph-node1
[ceph-node1][DEBUG ] connection detected need for sudo
[ceph-node1][DEBUG ] connected to host: ceph-node1
[ceph-node1][DEBUG ] detect platform information from remote host
[ceph-node1][DEBUG ] detect machine type
[ceph-node1][DEBUG ] find the location of an executable
[ceph-node1][INFO  ] Running command: sudo /usr/sbin/ip link show
[ceph-node1][INFO  ] Running command: sudo /usr/sbin/ip addr show
[ceph-node1][DEBUG ] IP addresses found: [u'192.168.2.121', u'192.168.6.135']
[ceph_deploy.new][DEBUG ] Resolving host ceph-node1
[ceph_deploy.new][DEBUG ] Monitor ceph-node1 at 192.168.2.121
[ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds
[ceph-node2][DEBUG ] connected to host: ceph-deploy
[ceph-node2][INFO  ] Running command: ssh -CT -o BatchMode=yes ceph-node2
[ceph_deploy.new][WARNIN] could not connect via SSH
[ceph_deploy.new][INFO  ] will connect again with password prompt
The authenticity of host 'ceph-node2 (192.168.2.122)' can't be established.
ECDSA key fingerprint is SHA256:bFB9FzJjKEKMP2W5kW+orMbo9mD+tr8fLOPRsYaXhj8.
ECDSA key fingerprint is MD5:b7:e5:bd:6a:56:10:42:3d:34:3a:54:ac:79:a2:3c:5b.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'ceph-node2' (ECDSA) to the list of known hosts.
[ceph-node2][DEBUG ] connected to host: ceph-node2
[ceph-node2][DEBUG ] detect platform information from remote host
[ceph-node2][DEBUG ] detect machine type
[ceph_deploy.new][INFO  ] adding public keys to authorized_keys
[ceph-node2][DEBUG ] append contents to file
[ceph-node2][DEBUG ] connection detected need for sudo
[ceph-node2][DEBUG ] connected to host: ceph-node2
[ceph-node2][DEBUG ] detect platform information from remote host
[ceph-node2][DEBUG ] detect machine type
[ceph-node2][DEBUG ] find the location of an executable
[ceph-node2][INFO  ] Running command: sudo /usr/sbin/ip link show
[ceph-node2][INFO  ] Running command: sudo /usr/sbin/ip addr show
[ceph-node2][DEBUG ] IP addresses found: [u'192.168.6.136', u'192.168.2.122']
[ceph_deploy.new][DEBUG ] Resolving host ceph-node2
[ceph_deploy.new][DEBUG ] Monitor ceph-node2 at 192.168.2.122
[ceph_deploy.new][DEBUG ] Monitor initial members are ['ceph-node1', 'ceph-node2']
[ceph_deploy.new][DEBUG ] Monitor addrs are [u'192.168.2.121', u'192.168.2.122']
[ceph_deploy.new][DEBUG ] Creating a random mon key...
[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...
[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...

# List the current directory: several files have been generated
[ceph@ceph-deploy ceph-cluster-deploy]$ ll
total 16
-rw-rw-r-- 1 ceph ceph  292 Dec 22 12:10 ceph.conf             # the cluster configuration file
-rw-rw-r-- 1 ceph ceph 5083 Dec 22 12:10 ceph-deploy-ceph.log  # deployment log
-rw------- 1 ceph ceph   73 Dec 22 12:10 ceph.mon.keyring      # the cluster's mon keyring

# Inspect ceph.conf
[ceph@ceph-deploy ceph-cluster-deploy]$ cat ceph.conf
[global]
fsid = f1da3a2e-b8df-46ba-9c6b-0030da25c73e
public_network = 192.168.2.0/24
cluster_network = 192.168.6.0/24
mon_initial_members = ceph-node1, ceph-node2
mon_host = 192.168.2.121,192.168.2.122
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

Configure the mon nodes

Install the ceph-mon package

If you use dedicated mon nodes, check that the ceph-mon package is installed on each of them:

yum install -y ceph-mon 

Initialize the mon nodes

Switch back to the ceph-deploy node:

[ceph@ceph-deploy ceph-cluster-deploy]$ ceph-deploy mon create-initial 

When this finishes you will find several new files in the directory. They are sensitive credentials, comparable to a kubeconfig in Kubernetes, so do not leak them.

[ceph@ceph-deploy ceph-cluster-deploy]$ ll
total 476
-rw------- 1 ceph ceph    113 Dec 22 13:11 ceph.bootstrap-mds.keyring
-rw------- 1 ceph ceph    113 Dec 22 13:11 ceph.bootstrap-mgr.keyring
-rw------- 1 ceph ceph    113 Dec 22 13:11 ceph.bootstrap-osd.keyring
-rw------- 1 ceph ceph    113 Dec 22 13:11 ceph.bootstrap-rgw.keyring
-rw------- 1 ceph ceph    151 Dec 22 13:11 ceph.client.admin.keyring
-rw-rw-r-- 1 ceph ceph    292 Dec 22 12:11 ceph.conf
-rw-rw-r-- 1 ceph ceph 207826 Dec 22 13:17 ceph-deploy-ceph.log
-rw------- 1 ceph ceph     73 Dec 22 12:11 ceph.mon.keyring

Each mon node is now running the mon service:

ceph-mon@.service, enabled via the link:
/etc/systemd/system/ceph-mon.target.wants/ceph-mon@<mon-hostname>.service

and the corresponding process is running:

[root@ceph-node3 ~]# ps aux | grep mon
ceph        2614  0.5  2.1 470596 39944 ?        Ssl  13:17   0:00 /usr/bin/ceph-mon -f --cluster ceph --id ceph-node3 --setuser ceph --setgroup ceph

Push the admin keyring to the nodes

Push the keyring to every OSD node, and to any other node from which you want to administer the cluster. Without it, every command has to be pointed at a keyring by hand, which gets tedious.

[ceph@ceph-deploy ceph-cluster-deploy]$ ceph-deploy admin ceph-node{1..3}

# Push it to this node as well, since it both deploys and manages the cluster
[ceph@ceph-deploy ceph-cluster-deploy]$ ceph-deploy admin ceph-deploy
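With the keyring pushed, it's worth confirming that all three mons formed quorum. A minimal check; note that the keyring is still root-owned at this point (fixed in the next section), so run it as root or defer it until after the ACL step:

# Run as root on a node that received the admin keyring
ceph quorum_status --format json-pretty | grep -A 5 '"quorum_names"'
ceph mon stat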

Set ACLs on the keyring

On each node, add a file ACL: the pushed keyring arrives owned by root:root, but we administer the cluster as the ceph user created earlier. A loop that applies this on all nodes follows the example below.

# Run as root, or via sudo

# ceph-node1
[root@ceph-node1 ~]# setfacl -m u:ceph:rw /etc/ceph/ceph.client.admin.keyring
[root@ceph-node1 ~]# getfacl /etc/ceph/ceph.client.admin.keyring
getfacl: Removing leading '/' from absolute path names
# file: etc/ceph/ceph.client.admin.keyring
# owner: root
# group: root
user::rw-
user:ceph:rw-
group::---
mask::rw-
other::---

# ceph-node2 and ceph-node3 are done the same way

# I manage Ceph from the deploy node as well (admin and deploy are the same
# machine here), so set the ACL there too
[root@ceph-deploy ~]# setfacl -m u:ceph:rw /etc/ceph/ceph.client.admin.keyring
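Since the same ACL has to be applied everywhere, a small loop from the deploy node saves typing. A sketch, relying on the ceph user's passwordless SSH and NOPASSWD sudo rule set up in the preparation steps:

# Run from ceph-deploy as the ceph user
for h in ceph-node{1..3}; do
    ssh ceph@"$h" 'sudo setfacl -m u:ceph:rw /etc/ceph/ceph.client.admin.keyring'
done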

Configure the mgr nodes

Only Ceph Luminous and later releases have the mgr daemon; older releases do not, so they skip this step.

Since we installed the mimic release (13.2.10), mgr must be deployed.

Install the ceph-mgr package

If you use dedicated mgr servers, check that the ceph-mgr package is installed on them:

yum install -y ceph-mgr 

Options of the ceph-deploy mgr subcommand:

[ceph@ceph-deploy ceph-cluster-deploy]$ ceph-deploy mgr --help
usage: ceph-deploy mgr [-h] {create} ...

Ceph MGR daemon management

positional arguments:
  {create}
    create    Deploy Ceph MGR on remote host(s)

optional arguments:
  -h, --help  show this help message and exit

Add the mgr nodes

Run the command to initialize the mgr nodes:

# My servers double as osd/mon/mgr nodes, so I use ceph-node1 and ceph-node2 here.
ceph-deploy mgr create ceph-node1 ceph-node2

Check the cluster status

[ceph@ceph-deploy ceph-cluster-deploy]$ ceph -s
  cluster:
    id:     f1da3a2e-b8df-46ba-9c6b-0030da25c73e
    health: HEALTH_WARN
            OSD count 0 < osd_pool_default_size 3

  services:
    mon: 3 daemons, quorum ceph-node1,ceph-node2,ceph-node3
    mgr: ceph-node1(active), standbys: ceph-node2
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0  objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:

Add OSDs

Add OSDs to the cluster. A loop form of the per-disk commands appears after the block below.

# Wipe the disks on the OSD nodes that will be added
ceph-deploy disk zap ceph-node1 /dev/sd{b,c,d}
ceph-deploy disk zap ceph-node2 /dev/sd{b,c,d}
ceph-deploy disk zap ceph-node3 /dev/sd{b,c,d}

# Create OSDs from the disks on ceph-node1
[ceph@ceph-deploy ceph-cluster-deploy]$ ceph-deploy osd create ceph-node1 --data /dev/sdb
[ceph@ceph-deploy ceph-cluster-deploy]$ ceph-deploy osd create ceph-node1 --data /dev/sdc
[ceph@ceph-deploy ceph-cluster-deploy]$ ceph-deploy osd create ceph-node1 --data /dev/sdd

# Create OSDs from the disks on ceph-node2
ceph-deploy osd create ceph-node2 --data /dev/sdb
ceph-deploy osd create ceph-node2 --data /dev/sdc
ceph-deploy osd create ceph-node2 --data /dev/sdd

# Create OSDs from the disks on ceph-node3
ceph-deploy osd create ceph-node3 --data /dev/sdb
ceph-deploy osd create ceph-node3 --data /dev/sdc
ceph-deploy osd create ceph-node3 --data /dev/sdd

# Each creation registers an OSD service unit on the node, but only as a runtime
# unit (it must be made permanent later), e.g.:
#   /run/systemd/system/ceph-osd.target.wants/ceph-osd@7.service
# where 7 is the OSD id; ids start at 0.
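The zap and create commands follow one pattern per host and disk, so they collapse naturally into a loop. A sketch equivalent to the commands above; run it from the deploy directory, and since ceph-deploy runs in the foreground, each create finishes before the next starts:

# Same hosts and disks as above
for host in ceph-node{1..3}; do
    for dev in /dev/sd{b,c,d}; do
        ceph-deploy disk zap "$host" "$dev"
        ceph-deploy osd create "$host" --data "$dev"
    done
done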

Check OSD status

# Check via ceph-deploy
[ceph@ceph-deploy ceph-cluster-deploy]$ ceph-deploy osd list ceph-node{1,2,3}

# Check with ceph osd stat
[ceph@ceph-deploy ceph-cluster-deploy]$ ceph osd stat
9 osds: 9 up, 9 in; epoch: e37

# Check with ceph osd status. Columns:
# - id:      unique identifier of the OSD
# - host:    host the OSD lives on
# - used:    storage used by the OSD
# - avail:   storage still available to the OSD
# - wr ops:  write operations per second
# - wr data: data written per second
# - rd ops:  read operations per second
# - rd data: data read per second
# - state:   OSD state; "exists" means registered, "up" means running
[ceph@ceph-deploy ceph-cluster-deploy]$ ceph osd status
+----+------------+-------+-------+--------+---------+--------+---------+-----------+
| id |    host    |  used | avail | wr ops | wr data | rd ops | rd data |   state   |
+----+------------+-------+-------+--------+---------+--------+---------+-----------+
| 0  | ceph-node1 | 1028M | 4087M |    0   |     0   |    0   |     0   | exists,up |
| 1  | ceph-node1 | 1028M | 4087M |    0   |     0   |    0   |     0   | exists,up |
| 2  | ceph-node1 | 1028M | 4087M |    0   |     0   |    0   |     0   | exists,up |
| 3  | ceph-node2 | 1028M | 4087M |    0   |     0   |    0   |     0   | exists,up |
| 4  | ceph-node2 | 1028M | 4087M |    0   |     0   |    0   |     0   | exists,up |
| 5  | ceph-node2 | 1028M | 4087M |    0   |     0   |    0   |     0   | exists,up |
| 6  | ceph-node3 | 1028M | 4087M |    0   |     0   |    0   |     0   | exists,up |
| 7  | ceph-node3 | 1028M | 4087M |    0   |     0   |    0   |     0   | exists,up |
| 8  | ceph-node3 | 1028M | 4087M |    0   |     0   |    0   |     0   | exists,up |
+----+------------+-------+-------+--------+---------+--------+---------+-----------+

# ceph osd tree works too
[root@ceph-node1 ~]# ceph osd tree
ID CLASS WEIGHT  TYPE NAME           STATUS REWEIGHT PRI-AFF
-1       0.04408 root default
-3       0.01469     host ceph-node1
 0   hdd 0.00490         osd.0           up  1.00000 1.00000
 1   hdd 0.00490         osd.1           up  1.00000 1.00000
 2   hdd 0.00490         osd.2           up  1.00000 1.00000
-5       0.01469     host ceph-node2
 3   hdd 0.00490         osd.3           up  1.00000 1.00000
 4   hdd 0.00490         osd.4           up  1.00000 1.00000
 5   hdd 0.00490         osd.5           up  1.00000 1.00000
-7       0.01469     host ceph-node3
 6   hdd 0.00490         osd.6           up  1.00000 1.00000
 7   hdd 0.00490         osd.7           up  1.00000 1.00000
 8   hdd 0.00490         osd.8           up  1.00000 1.00000

# ceph osd df shows per-OSD disk usage, like df on Linux. Columns:
# - ID:       unique identifier of the OSD
# - CLASS:    storage class
# - WEIGHT:   CRUSH weight
# - REWEIGHT: reweight factor
# - SIZE:     total capacity
# - USE:      raw capacity in use
# - DATA:     space used by object data
# - OMAP:     space used by OMAP (Object Map) data
# - META:     space used by metadata
# - AVAIL:    capacity still available
# - %USE:     utilization percentage
# - VAR:      utilization variance
# - PGS:      number of placement groups on the OSD
[root@ceph-node1 ~]# ceph osd df
ID CLASS WEIGHT  REWEIGHT SIZE    USE     DATA    OMAP META  AVAIL   %USE  VAR  PGS
 0   hdd 0.00490  1.00000 5.0 GiB 1.0 GiB 4.7 MiB  0 B 1 GiB 4.0 GiB 20.11 1.00   0
 1   hdd 0.00490  1.00000 5.0 GiB 1.0 GiB 4.7 MiB  0 B 1 GiB 4.0 GiB 20.11 1.00   0
 2   hdd 0.00490  1.00000 5.0 GiB 1.0 GiB 4.7 MiB  0 B 1 GiB 4.0 GiB 20.11 1.00   0
 3   hdd 0.00490  1.00000 5.0 GiB 1.0 GiB 4.7 MiB  0 B 1 GiB 4.0 GiB 20.11 1.00   0
 4   hdd 0.00490  1.00000 5.0 GiB 1.0 GiB 4.7 MiB  0 B 1 GiB 4.0 GiB 20.11 1.00   0
 5   hdd 0.00490  1.00000 5.0 GiB 1.0 GiB 4.7 MiB  0 B 1 GiB 4.0 GiB 20.11 1.00   0
 6   hdd 0.00490  1.00000 5.0 GiB 1.0 GiB 4.7 MiB  0 B 1 GiB 4.0 GiB 20.11 1.00   0
 7   hdd 0.00490  1.00000 5.0 GiB 1.0 GiB 4.7 MiB  0 B 1 GiB 4.0 GiB 20.11 1.00   0
 8   hdd 0.00490  1.00000 5.0 GiB 1.0 GiB 4.7 MiB  0 B 1 GiB 4.0 GiB 20.11 1.00   0
                    TOTAL  45 GiB 9.0 GiB  42 MiB  0 B 9 GiB  36 GiB 20.11
MIN/MAX VAR: 1.00/1.00  STDDEV: 0

Enable the ceph-osd services at boot

On each node, enable the service units for the OSD ids it hosts:

# ceph-node1
systemctl enable ceph-osd@{0,1,2}

# ceph-node2
systemctl enable ceph-osd@{3,4,5}

# ceph-node3
systemctl enable ceph-osd@{6,7,8}
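A quick way to confirm the units really are enabled and running (shown for ceph-node1; use ids 3-5 on ceph-node2 and 6-8 on ceph-node3):

# Expect three lines of "enabled" and three lines of "active"
systemctl is-enabled ceph-osd@{0,1,2}
systemctl is-active ceph-osd@{0,1,2}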

Administration

Removing an OSD from the cluster

Remove OSDs one at a time. When an OSD goes away, Ceph promotes replicas on other OSDs to primary and backfills the missing copies; removing too many at once can overwhelm the cluster and cause performance problems. A helper that waits for data to drain appears after the commands below.

# Mark the OSD out
ceph osd out <osd-id>

# Stop the OSD service
systemctl stop ceph-osd@<osd-id>

# Purge the OSD
ceph osd purge <osd-id> --yes-i-really-mean-it

# Check whether the ceph.conf cluster configuration file still contains any
# settings for this OSD; if so, remove them by hand.

###### Before Luminous, removal took these steps instead:
ceph osd crush remove <name>
ceph auth del osd.<osd-id>
ceph osd rm <osd-id>
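The steps above can be wrapped into a small helper that refuses to destroy anything until the data has drained. This is a sketch, not part of ceph-deploy: it assumes a Luminous-or-later cluster (the safe-to-destroy check was added in Luminous) and is run on the node hosting the OSD with admin privileges.

#!/usr/bin/env bash
# Hypothetical helper: drain one OSD, then remove it only once it is safe.
set -euo pipefail
osd_id="$1"

# Stop new data from landing on the OSD; Ceph starts backfilling elsewhere.
ceph osd out "$osd_id"

# "safe-to-destroy" exits 0 once no PG still depends on this OSD.
until ceph osd safe-to-destroy "osd.$osd_id"; do
    echo "waiting for PGs to drain from osd.$osd_id ..."
    sleep 30
done

systemctl stop "ceph-osd@$osd_id"
ceph osd purge "$osd_id" --yes-i-really-mean-it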

Manually test uploading and downloading data

# Create a pool with rados (usage from rados --help):
#   rados mkpool <pool-name> [123[ 4]]    create pool <pool-name>
#                                         [with auid 123 [and using crush rule 4]]

# Create a pool with the ceph command (usage):
#   ceph osd pool create <poolname> <int[0-]> {<int[0-]>} {replicated|erasure}
#                        {<erasure_code_profile>} {<rule>} {<int>}
# i.e.: ceph osd pool create <pool-name> <pg-num> <pgp-num>

# Upload a file into a pool
[ceph@ceph-deploy ceph-cluster-deploy]$ rados put myfile /etc/fstab -p swq-test

# List the objects in the pool
[ceph@ceph-deploy ceph-cluster-deploy]$ rados ls -p swq-test
myfile

# Download the object
[ceph@ceph-deploy ceph-cluster-deploy]$ rados get myfile -p swq-test /tmp/my.txt
[ceph@ceph-deploy ceph-cluster-deploy]$ cat /tmp/my.txt

#
# /etc/fstab
# Created by anaconda on Thu Dec 21 23:51:13 2023
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root /                       xfs     defaults        0 0
UUID=4b1bb372-7f34-48f6-8852-036ee6dfd125 /boot

# Show how the object maps to a PG and to OSDs
[ceph@ceph-deploy ceph-cluster-deploy]$ ceph osd map swq-test myfile
osdmap e43 pool 'swq-test' (2) object 'myfile' -> pg 2.423e92f7 (2.17) -> up ([5,6,2], p5) acting ([5,6,2], p5)

# Which PG is it in?
#   Hash 423e92f7 in pool 2, i.e. PG 2.17:
#   -> pg 2.423e92f7 (2.17)

# Which OSDs hold it?
#   OSDs 5, 6 and 2, with osd.5 as the primary ("p5");
#   "acting" is the set of OSDs currently serving the PG:
#   -> up ([5,6,2], p5) acting ([5,6,2], p5)
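Once testing is done, the pool can be removed. A hedged sketch: pool deletion is deliberately guarded in Ceph, so on Mimic the mon_allow_pool_delete flag must be enabled first and the pool name typed twice.

# Allow deletion temporarily, drop the test pool, then lock it down again
ceph tell 'mon.*' injectargs '--mon-allow-pool-delete=true'
ceph osd pool delete swq-test swq-test --yes-i-really-really-mean-it
ceph tell 'mon.*' injectargs '--mon-allow-pool-delete=false'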