Warning

This article is a work in progress.



Information

This article applies to:



Introduction

This post walks you through deploying a Ceph storage cluster on Debian 12. Ceph is a scalable distributed storage system designed for cloud infrastructure and web-scale object storage. It can also be used to provide Ceph Block Storage as well as Ceph File System storage.

Deploying Ceph Storage Cluster on Debian 12

The Ceph Storage Cluster Daemons

A Ceph Storage Cluster is made up of different daemons, each performing a specific role (a quick way to list the running daemons is shown after the list below).

  • Ceph Object Storage Daemon (OSD, ceph-osd)
    • It provides the Ceph object data store.
    • It also performs data replication, data recovery, and rebalancing, and provides storage information to the Ceph Monitors.
    • At least one OSD is required per storage device.
  • Ceph Monitor (ceph-mon)
    • It maintains maps of the entire Ceph cluster state including monitor map, manager map, the OSD map, and the CRUSH map.
    • It manages authentication between daemons and clients.
    • A Ceph cluster must contain a minimum of three running monitors in order to be both redundant and highly available. If there are at least five nodes in the cluster, it is recommended to run five monitors.
  • Ceph Manager (ceph-mgr)
    • It keeps track of runtime metrics and the current state of the Ceph cluster, including storage utilization, current performance metrics, and system load.
    • It manages and exposes the Ceph cluster web dashboard and API.
    • At least two managers are required for high availability (HA).
  • Ceph Metadata Server (MDS):
    • Manages metadata for the Ceph File System (CephFS). Coordinates metadata access and ensures consistency across clients.
    • One or more MDS daemons are deployed, depending on the requirements of the CephFS.
  • RADOS Gateway (RGW):
    • Also called “Ceph Object Gateway”
    • It is a component of the Ceph storage system that provides object storage services with a RESTful interface. RGW allows applications and users to interact with Ceph storage using industry-standard APIs, such as the S3 (Simple Storage Service) API (compatible with Amazon S3) and the Swift API (compatible with OpenStack Swift).
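Once a cluster is deployed, the daemons described above can be listed with the orchestrator CLI. This is only a minimal sketch; the hosts and daemon names in the output will differ per cluster:

# list all daemons managed by the orchestrator
ceph orch ps
# or only a given daemon type, e.g. monitors
ceph orch ps --daemon-type mon
# high-level summary of services and cluster health
ceph -s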

Ceph Storage Cluster Deployment Methods

There are different methods you can use to deploy Ceph storage cluster.

  • cephadm leverages container technology (Podman or Docker) and systemd to deploy and manage Ceph services on a cluster of machines.
  • Rook deploys and manages Ceph clusters running in Kubernetes, while also enabling management of storage resources and provisioning via Kubernetes APIs.
  • ceph-ansible deploys and manages Ceph clusters using Ansible.
  • ceph-salt installs Ceph using Salt and cephadm.
  • jaas.ai/ceph-mon installs Ceph using Juju.
  • puppet-ceph installs Ceph via Puppet.
  • Ceph can also be installed manually.

cephadm and Rook are the recommended methods for deploying a Ceph storage cluster.

Test Environment Description

The test environment consists of the following nodes:

  • An admin node.
  • Monitor nodes.
  • Storage (OSD) nodes.

For node hardware requirements, see https://docs.ceph.com/en/latest/start/hardware-recommendations/.

On all nodes:

  • the Astra Linux Special Edition x.8 operating system is installed;
  • time synchronization tools are installed and configured, and the time is synchronized across all nodes;
  • the SSH service is installed, and root login from the node acting as the administrator is allowed.

On the nodes

Package Installation

Command
sudo apt install cephadm
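To confirm the installation and see which Ceph release the packaged cephadm corresponds to, a quick check (the reported version will depend on the package in your repositories):

cephadm version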



Configuration

https://docs.ceph.com/en/reef/cephadm/install/

DEPLOYING A NEW CEPH CLUSTER

Cephadm creates a new Ceph cluster by bootstrapping a single host, expanding the cluster to encompass any additional hosts, and then deploying the needed services.

See the section Compatibility With Podman Versions for a table of Ceph versions that are compatible with Podman. Not every version of Podman is compatible with Ceph.

BOOTSTRAP A NEW CLUSTER

WHAT TO KNOW BEFORE YOU BOOTSTRAP

The first step in creating a new Ceph cluster is running the cephadm bootstrap command on the Ceph cluster’s first host. Running cephadm bootstrap on the Ceph cluster’s first host creates the Ceph cluster’s first Monitor daemon. You must pass the IP address of the Ceph cluster’s first host to the cephadm bootstrap command, so you’ll need to know the IP address of that host.

Important

ssh must be installed and running in order for the bootstrapping procedure to succeed.

Note

If there are multiple networks and interfaces, be sure to choose one that will be accessible by any host accessing the Ceph cluster.

RUNNING THE BOOTSTRAP COMMAND

Run the cephadm bootstrap command:

cephadm bootstrap --mon-ip *<mon-ip>*

This command will:

  • Create a Monitor and a Manager daemon for the new cluster on the local host.

  • Generate a new SSH key for the Ceph cluster and add it to the root user’s /root/.ssh/authorized_keys file.

  • Write a copy of the public key to /etc/ceph/ceph.pub.

  • Write a minimal configuration file to /etc/ceph/ceph.conf. This file is needed to communicate with Ceph daemons (an illustrative example of its contents follows this list).

  • Write a copy of the client.admin administrative (privileged!) secret key to /etc/ceph/ceph.client.admin.keyring.

  • Add the _admin label to the bootstrap host. By default, any host with this label will (also) get a copy of /etc/ceph/ceph.conf and /etc/ceph/ceph.client.admin.keyring.
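For illustration only, the minimal /etc/ceph/ceph.conf written by bootstrap typically contains little more than the cluster fsid and the monitor address; the values below are placeholders rather than output from a real cluster:

[global]
fsid = 00000000-0000-0000-0000-000000000000
mon_host = [v2:<mon-ip>:3300/0,v1:<mon-ip>:6789/0]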

FURTHER INFORMATION ABOUT CEPHADM BOOTSTRAP

The default bootstrap process will work for most users. But if you’d like to know more about cephadm bootstrap right away, read the list below; a combined invocation sketch follows it.

Also, you can run cephadm bootstrap -h to see all of cephadm’s available options.

  • By default, Ceph daemons send their log output to stdout/stderr, which is picked up by the container runtime (docker or podman) and (on most systems) sent to journald. If you want Ceph to write traditional log files to /var/log/ceph/$fsid, use the --log-to-file option during bootstrap.

  • Larger Ceph clusters perform best when (external to the Ceph cluster) public network traffic is separated from (internal to the Ceph cluster) cluster traffic. The internal cluster traffic handles replication, recovery, and heartbeats between OSD daemons. You can define the cluster network by supplying the --cluster-network option to the bootstrap subcommand. This parameter must be a subnet in CIDR notation (for example 10.90.90.0/24 or fe80::/64).

  • cephadm bootstrap writes to /etc/ceph files needed to access the new cluster. This central location makes it possible for Ceph packages installed on the host (e.g., packages that give access to the cephadm command line interface) to find these files.

    Daemon containers deployed with cephadm, however, do not need /etc/ceph at all. Use the --output-dir *<directory>* option to put them in a different directory (for example, .). This may help avoid conflicts with an existing Ceph configuration (cephadm or otherwise) on the same host.

  • You can pass any initial Ceph configuration options to the new cluster by putting them in a standard ini-style configuration file and using the --config *<config-file>* option. For example:

    $ cat <<EOF > initial-ceph.conf
    [global]
    osd crush chooseleaf type = 0
    EOF
    $ ./cephadm bootstrap --config initial-ceph.conf ...
    
  • The --ssh-user *<user>* option makes it possible to designate which SSH user cephadm will use to connect to hosts. The associated SSH key will be added to /home/*<user>*/.ssh/authorized_keys. The user that you designate with this option must have passwordless sudo access.

  • If you are using a container image from a registry that requires login, you may add the argument:

    • --registry-json <path to json file>

    example contents of JSON file with login info:

    {"url":"REGISTRY_URL", "username":"REGISTRY_USERNAME", "password":"REGISTRY_PASSWORD"}
    

    Cephadm will attempt to log in to this registry so it can pull your container and then store the login info in its config database. Other hosts added to the cluster will then also be able to make use of the authenticated container registry.

  • See Different deployment scenarios for additional examples for using cephadm bootstrap.
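Purely as a sketch of how the options above can combine into a single invocation; the subnet, user name, and file paths below are placeholders for this example, not required values:

cephadm bootstrap \
    --mon-ip <mon-ip> \
    --cluster-network 10.90.90.0/24 \
    --log-to-file \
    --ssh-user <user> \
    --registry-json /root/registry.json \
    --config initial-ceph.conf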

ENABLE CEPH CLI

Cephadm does not require any Ceph packages to be installed on the host. However, we recommend enabling easy access to the ceph command. There are several ways to do this:

  • The cephadm shell command launches a bash shell in a container with all of the Ceph packages installed. By default, if configuration and keyring files are found in /etc/ceph on the host, they are passed into the container environment so that the shell is fully functional. Note that when executed on a MON host, cephadm shell will infer the config from the MON container instead of using the default configuration. If --mount <path> is given, then the host <path> (file or directory) will appear under /mnt inside the container:

    cephadm shell
    
  • To execute ceph commands, you can also run commands like this:

    cephadm shell -- ceph -s
    
  • You can install the ceph-common package, which contains all of the ceph commands, including ceph, rbd, mount.ceph (for mounting CephFS file systems), etc.:

    cephadm add-repo --release reef
    cephadm install ceph-common
    

Confirm that the ceph command is accessible with:

ceph -v

Confirm that the ceph command can connect to the cluster and also its status with:

ceph status

ADDING HOSTS

Add all hosts to the cluster by following the instructions in Adding Hosts.

By default, a ceph.conf file and a copy of the client.admin keyring are maintained in /etc/ceph on all hosts that have the _admin label. This label is initially applied only to the bootstrap host. We recommend that one or more other hosts be given the _admin label so that the Ceph CLI (for example, via cephadm shell) is easily accessible on multiple hosts. To add the _admin label to additional host(s), run a command of the following form:

ceph orch host label add *<host>* _admin

ADDING ADDITIONAL MONS

A typical Ceph cluster has three or five Monitor daemons spread across different hosts. We recommend deploying five Monitors if there are five or more nodes in your cluster. Most clusters do not benefit from seven or more Monitors.

Please follow Deploying additional monitors to deploy additional MONs.
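As a hedged sketch of what that procedure typically boils down to with the orchestrator, you can either set a monitor count or pin monitors to specific hosts (the host names below are the ones used later in this article and may differ in your cluster):

# let the orchestrator schedule five monitors across the cluster
ceph orch apply mon 5

# or pin monitors to specific hosts
ceph orch apply mon --placement="ceph-mgr-mon01,ceph-mon02,ceph-mon03"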

ADDING STORAGE

To add storage to the cluster, you can tell Ceph to consume any available and unused device(s):

ceph orch apply osd --all-available-devices

See Deploy OSDs for more detailed instructions.

ENABLING OSD MEMORY AUTOTUNING

Warning

By default, cephadm enables osd_memory_target_autotune on bootstrap, with mgr/cephadm/autotune_memory_target_ratio set to .7 of total host memory.

See Automatically tuning OSD memory.

To deploy hyperconverged Ceph with TripleO, please refer to the TripleO documentation: Scenario: Deploy Hyperconverged Ceph

In other cases where the cluster hardware is not exclusively used by Ceph (converged infrastructure), reduce the memory consumption of Ceph like so:

# converged only:
ceph config set mgr mgr/cephadm/autotune_memory_target_ratio 0.2

Then enable memory autotuning:

ceph config set osd osd_memory_target_autotune true

USING CEPH

To use the Ceph Filesystem, follow Deploy CephFS.

To use the Ceph Object Gateway, follow Deploy RGWs.

To use NFS, follow NFS Service

To use iSCSI, follow Deploying iSCSI
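For orientation only, the first step of several of those procedures usually reduces to a single command; the service and volume names below (myfs, myrgw, mynfs) are placeholders, and the linked sections cover the full details:

# CephFS: create a volume (MDS daemons are deployed automatically)
ceph fs volume create myfs

# RGW: deploy an object gateway service
ceph orch apply rgw myrgw

# NFS: create an NFS Ganesha cluster
ceph nfs cluster create mynfs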

DIFFERENT DEPLOYMENT SCENARIOS

SINGLE HOST

To deploy a Ceph cluster running on a single host, use the --single-host-defaults flag when bootstrapping. For use cases, see One Node Cluster. Such clusters are generally not suitable for production.
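A bootstrap invocation for this scenario might look like the following (the monitor IP is a placeholder):

cephadm bootstrap --mon-ip *<mon-ip>* --single-host-defaults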

The --single-host-defaults flag sets the following configuration options:

global/osd_crush_chooseleaf_type = 0
global/osd_pool_default_size = 2
mgr/mgr_standby_modules = False

For more information on these options, see One Node Cluster and mgr_standby_modules in ceph-mgr administrator’s guide.

DEPLOYMENT IN AN ISOLATED ENVIRONMENT

You might need to install cephadm in an environment that is not connected directly to the Internet (an “isolated” or “airgapped” environment). This requires the use of a custom container registry. Either of two kinds of custom container registry can be used in this scenario: (1) a Podman-based or Docker-based insecure registry, or (2) a secure registry.

The practice of installing software on systems that are not connected directly to the internet is called “airgapping” and registries that are not connected directly to the internet are referred to as “airgapped”.

Make sure that your container image is inside the registry. Make sure that you have access to all hosts that you plan to add to the cluster.

  1. Run a local container registry:

    podman run --privileged -d --name registry -p 5000:5000 -v /var/lib/registry:/var/lib/registry --restart=always registry:2
    
  2. If you are using an insecure registry, configure Podman or Docker with the hostname and port where the registry is running (a configuration sketch follows this list).

    Note

    You must repeat this step for every host that accesses the local insecure registry.

  3. Push your container image to your local registry. Here are some acceptable kinds of container images:

    • Ceph container image. See Ceph Container Images.

    • Prometheus container image

    • Node exporter container image

    • Grafana container image

    • Alertmanager container image

  4. Create a temporary configuration file to store the names of the monitoring images. (See Using custom images):

    cat <<EOF > initial-ceph.conf

    [mgr]
    mgr/cephadm/container_image_prometheus = *<hostname>*:5000/prometheus
    mgr/cephadm/container_image_node_exporter = *<hostname>*:5000/node_exporter
    mgr/cephadm/container_image_grafana = *<hostname>*:5000/grafana
    mgr/cephadm/container_image_alertmanager = *<hostname>*:5000/alertmanager
    EOF
    
  5. Run bootstrap using the --image flag and pass the name of your container image as the argument of the image flag. For example:

    cephadm --image *<hostname>*:5000/ceph/ceph bootstrap --mon-ip *<mon-ip>*
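As a sketch of step 2 for Podman-based hosts, an insecure local registry can be declared in /etc/containers/registries.conf; the hostname and port are placeholders, and Docker-based hosts would instead add the registry to "insecure-registries" in /etc/docker/daemon.json and restart Docker:

cat <<EOF >> /etc/containers/registries.conf

[[registry]]
location = "<hostname>:5000"
insecure = true
EOF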
    

DEPLOYMENT WITH CUSTOM SSH KEYS

Bootstrap allows users to create their own private/public SSH key pair rather than having cephadm generate them automatically.

To use custom SSH keys, pass the --ssh-private-key and --ssh-public-key fields to bootstrap. Both parameters require a path to the file where the keys are stored:

cephadm bootstrap --mon-ip <ip-addr> --ssh-private-key <private-key-filepath> --ssh-public-key <public-key-filepath>

This setup allows users to use a key that has already been distributed to hosts the user wants in the cluster before bootstrap.

Note

In order for cephadm to connect to other hosts you’d like to add to the cluster, make sure the public key of the key pair provided is set up as an authorized key for the ssh user being used, typically root. If you’d like more info on using a non-root user as the ssh user, see Further information about cephadm bootstrap

DEPLOYMENT WITH CA SIGNED SSH KEYS

As an alternative to standard public key authentication, cephadm also supports deployment using CA signed keys. Before bootstrapping it’s recommended to set up the CA public key as a trusted CA key on hosts you’d like to eventually add to the cluster. For example:

# we will act as our own CA, therefore we'll need to make a CA key
[root@host1 ~]# ssh-keygen -t rsa -f ca-key -N ""

# make the ca key trusted on the host we've generated it on
# this requires adding in a line in our /etc/ssh/sshd_config
# to mark this key as trusted
[root@host1 ~]# cp ca-key.pub /etc/ssh
[root@host1 ~]# vi /etc/ssh/sshd_config
[root@host1 ~]# cat /etc/ssh/sshd_config | grep ca-key
TrustedUserCAKeys /etc/ssh/ca-key.pub
# now restart sshd so it picks up the config change
[root@host1 ~]# systemctl restart sshd

# now, on all other hosts we want in the cluster, also install the CA key
[root@host1 ~]# scp /etc/ssh/ca-key.pub host2:/etc/ssh/

# on other hosts, make the same changes to the sshd_config
[root@host2 ~]# vi /etc/ssh/sshd_config
[root@host2 ~]# cat /etc/ssh/sshd_config | grep ca-key
TrustedUserCAKeys /etc/ssh/ca-key.pub
# and restart sshd so it picks up the config change
[root@host2 ~]# systemctl restart sshd

Once the CA key has been installed and marked as a trusted key, you are ready to use a private key/CA signed cert combination for SSH. Continuing with our current example, we will create a new key pair for host access and then sign it with our CA key.

# make a new key pair
[root@host1 ~]# ssh-keygen -t rsa -f cephadm-ssh-key -N ""
# sign the private key. This will create a new cephadm-ssh-key-cert.pub
# note here we're using user "root". If you'd like to use a non-root
# user the arguments to the -I and -n params would need to be adjusted
# Additionally, note the -V param indicates how long until the cert
# this creates will expire
[root@host1 ~]# ssh-keygen -s ca-key -I user_root -n root -V +52w cephadm-ssh-key
[root@host1 ~]# ls
ca-key  ca-key.pub  cephadm-ssh-key  cephadm-ssh-key-cert.pub  cephadm-ssh-key.pub

# verify our signed key is working. To do this, make sure the generated private
# key ("cephadm-ssh-key" in our example) and the newly signed cert are stored
# in the same directory. Then try to ssh using the private key
[root@host1 ~]# ssh -i cephadm-ssh-key host2

Once you have your private key and corresponding CA signed cert, and have verified that SSH authentication using that key works, you can pass those keys to bootstrap in order to have cephadm use them for SSHing between cluster hosts:

[root@host1 ~]# cephadm bootstrap --mon-ip <ip-addr> --ssh-private-key cephadm-ssh-key --ssh-signed-cert cephadm-ssh-key-cert.pub

Note that this setup does not require installing the corresponding public key from the private key passed to bootstrap on other nodes. In fact, cephadm will reject the --ssh-public-key argument when passed along with --ssh-signed-cert. This is not because having the public key breaks anything, but rather because it is not at all needed and helps the bootstrap command differentiate if the user wants the CA signed keys setup or standard pubkey encryption. What this means is that SSH key rotation would simply be a matter of getting another key signed by the same CA and providing cephadm with the new private key and signed cert. No additional distribution of keys to cluster nodes is needed after the initial setup of the CA key as a trusted key, no matter how many new private key/signed cert pairs are rotated in.



© Copyright 2016, Ceph authors and contributors. Licensed under Creative Commons Attribution Share Alike 3.0 (CC-BY-SA-3.0). Revision b9b067bc.


https://kifarunix.com/how-to-deploy-ceph-storage-cluster-on-debian/

How to Deploy Ceph Storage Cluster on Debian 12

Prepare Ceph Nodes for Ceph Storage Cluster Deployment on Debian 12

Our Ceph Storage Cluster Deployment Architecture

Our Ceph storage cluster deployment architecture consists of one admin node (which also runs monitor and manager daemons), two additional monitor nodes, and three OSD nodes (the architecture diagram is omitted here). In a typical production environment, you would have at least 3 monitor nodes as well as at least 3 OSDs.

If your cluster nodes are in the same network subnet, cephadm will automatically add up to five monitors to the subnet, as new hosts are added to the cluster.

Ceph Storage Nodes Hardware Requirements

Check the hardware recommendations page for the Ceph storage cluster nodes hardware requirements.

Create Ceph Deployment User Account

We will be deploying Ceph on Debian 12 using the root user account. So, to follow along, ensure you have access to the root account on your Ceph cluster nodes. Bear in mind that the root user account is a superuser account; hence, “With great power comes great responsibility”.

whoami
root

If you would like to use a non-root user account to bootstrap the Ceph cluster, read more here.

Attach Storage Disks to Ceph OSD Nodes

Each Ceph OSD node in our architecture above has two unallocated raw disks, /dev/vd{a,b}, of 50 GB each.

lsblk

Sample output;

NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
sda      8:0    0  100G  0 disk 
├─sda1   8:1    0   98G  0 part /
├─sda2   8:2    0    1K  0 part 
└─sda5   8:5    0    2G  0 part 
vda    253:0    0   50G  0 disk 
vdb    253:16   0   50G  0 disk 

Set Hostnames and Update Hosts File

To begin with, set up your nodes' hostnames;

hostnamectl set-hostname ceph-mgr-mon01

Set the respective hostnames on other nodes.

If you are not using DNS for name resolution, then update the hosts file accordingly.

For example, in our setup, each node hosts file should contain the lines below;

less /etc/hosts
...
192.168.122.170 ceph-mgr-mon01
192.168.122.127 ceph-mon02
192.168.122.184 ceph-mon03
192.168.122.188 ceph-osd01
192.168.122.30 ceph-osd02
192.168.122.51 ceph-osd03

Install SSH Server on Each Node

Ceph deployment through the cephadm utility requires that an SSH server is installed on all the nodes.

Debian 12 comes with SSH server already installed. If not, install and start it as follows;

sudo apt install openssh-server
systemctl enable --now ssh

Enable Root Login on Other Nodes from Ceph Admin Node

In order to add other nodes to the Ceph cluster using the Ceph admin node, you will have to use the root user account.

Thus, on the Ceph Monitor and Ceph OSD nodes, enable root login from the Ceph admin node;

vim /etc/ssh/sshd_config

Add the config below, replacing the IP address of the Ceph admin node accordingly.

Match Address 192.168.122.170
        PermitRootLogin yes

Reload ssh;

systemctl reload sshd

Install Python3

Python is required to deploy Ceph. Python 3 is installed by default on Debian 12;

python3 -V
Python 3.11.2

Install Docker CE on Each Node

The cephadm utility is used to bootstrap a Ceph cluster and to manage ceph daemons deployed with systemd and Docker containers.

It can also use Podman, which will be installed along with other Ceph packages as can be seen in the later stages of this guide.

To install Docker CE on each Ceph cluster node, follow the guide below;

How to Install Docker CE on Debian 12
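If you prefer not to follow the separate guide, one commonly used shortcut is Docker's upstream convenience script; this is only a sketch, and you should review the downloaded script before running it:

curl -fsSL https://get.docker.com -o get-docker.sh
sh get-docker.sh
systemctl enable --now docker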

Install LVM Package on each Node

Ceph requires LVM2 for provisioning storage devices. Install the package on each node.

apt install lvm2 -y

Setup Ceph Storage Cluster on Debian 12

Install cephadm Utility on Ceph Admin Node

On the Ceph admin node, you need to install the cephadm utility.

Cephadm installs and manages a Ceph cluster using containers and systemd, with tight integration with the CLI and dashboard GUI.

  • cephadm only supports Octopus and newer releases.
  • cephadm is fully integrated with the new orchestration API and fully supports the new CLI and dashboard features to manage cluster deployment.
  • cephadm requires container support (podman or docker) and Python 3.

If you check the cephadm utility provided by the default repos, it is an older version (16.2.x). The current release version as of this writing is 18.2.x (Reef).

apt-cache policy cephadm
cephadm:
  Installed: (none)
  Candidate: 16.2.11+ds-2
  Version table:
     16.2.11+ds-2 500
        500 http://deb.debian.org/debian bookworm/main amd64 Packages

To install the current cephadm release version, you need the current Ceph release repos installed.

To install Ceph release repos on Debian 12, run the commands below

wget -q -O- 'https://download.ceph.com/keys/release.asc' | \
gpg --dearmor -o /etc/apt/trusted.gpg.d/cephadm.gpg
echo deb https://download.ceph.com/debian-reef/ $(lsb_release -sc) main \
> /etc/apt/sources.list.d/ceph.list
apt update

Then, check the available version of cephadm package now.

apt-cache policy cephadm
cephadm:
  Installed: (none)
  Candidate: 18.2.1-1~bpo12+1
  Version table:
     18.2.1-1~bpo12+1 500
        500 https://download.ceph.com/debian-reef bookworm/main amd64 Packages
     16.2.11+ds-2 500
        500 http://deb.debian.org/debian bookworm/main amd64 Packages

As you can see, the Ceph repo provides the current release version of the cephadm package. Thus, install it as follows;

apt install cephadm

During the installation, you may see some errors relating to the cephadm user account being created. Since we are using the root user account to bootstrap our Ceph cluster, these errors can be ignored.

Initialize Ceph Cluster Monitor On Ceph Admin Node

Your nodes are now ready to deploy a Ceph storage cluster.

It is now time to bootstrap the Ceph cluster in order to create the first Ceph monitor daemon on the Ceph admin node. Thus, run the command below, substituting the IP address with that of the Ceph admin node accordingly.

cephadm bootstrap --mon-ip 192.168.122.170
Ceph Dashboard is now available at:

	     URL: https://ceph-mgr-mon01:8443/
	    User: admin
	Password: 0lquv02zaw

Enabling client.admin keyring and conf on hosts with "admin" label
Saving cluster configuration to /var/lib/ceph/456f0baa-affa-11ee-be1c-525400575614/config directory
Enabling autotune for osd_memory_target
You can access the Ceph CLI as following in case of multi-cluster or non-default config:

	sudo /usr/sbin/cephadm shell --fsid 456f0baa-affa-11ee-be1c-525400575614 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring

Or, if you are only running a single cluster on this host:

	sudo /usr/sbin/cephadm shell 

Please consider enabling telemetry to help improve Ceph:

	ceph telemetry on

For more information see:

	https://docs.ceph.com/en/latest/mgr/telemetry/

Bootstrap complete.

According to the documentation, the bootstrap command will:

  • Create a monitor and manager daemon for the new cluster on the localhost.
  • Generate a new SSH key for the Ceph cluster and add it to the root user’s /root/.ssh/authorized_keys file.
  • Write a copy of the public key to /etc/ceph/ceph.pub.
  • Write a minimal configuration file to /etc/ceph/ceph.conf. This file is needed to communicate with the new cluster.
  • Write a copy of the client.admin administrative (privileged!) secret key to /etc/ceph/ceph.client.admin.keyring.
  • Add the _admin label to the bootstrap host. By default, any host with this label will (also) get a copy of /etc/ceph/ceph.conf and /etc/ceph/ceph.client.admin.keyring.

Enable Ceph CLI

When the bootstrap command completes, a command for accessing the Ceph CLI is provided. Execute that command to access the Ceph CLI in the case of a multi-cluster or non-default config:

sudo /usr/sbin/cephadm shell \
	--fsid 456f0baa-affa-11ee-be1c-525400575614 \
	-c /etc/ceph/ceph.conf \
	-k /etc/ceph/ceph.client.admin.keyring

Otherwise, for the default config, just execute;

sudo cephadm shell

This drops you into the Ceph CLI. You should see your shell prompt change!

root@ceph-mgr-mon01:/#

You can now run ceph commands, e.g. to check the Ceph status;

ceph -s
  cluster:
    id:     456f0baa-affa-11ee-be1c-525400575614
    health: HEALTH_WARN
            OSD count 0 < osd_pool_default_size 3
 
  services:
    mon: 1 daemons, quorum ceph-mgr-mon01 (age 8m)
    mgr: ceph-mgr-mon01.gioqld(active, since 7m)
    osd: 0 osds: 0 up, 0 in
 
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:

You can exit the Ceph CLI by pressing Ctrl+D, or by typing exit and pressing ENTER.

There are other ways in which you can access the Ceph CLI. For example, you can run Ceph CLI commands using cephadm command.

cephadm shell -- ceph -s

Or you could install the Ceph CLI tools on the host (ignore errors about the cephadm user account);

apt install ceph-common

With this method, you can run the Ceph commands directly from the host;

ceph -s

Copy SSH Keys to Other Ceph Nodes

Copy the SSH public key generated by the bootstrap command to the root user account on the other Ceph Monitor and OSD nodes. Ensure root login from the admin node is permitted on those nodes.

for i in ceph-mon02 ceph-mon03 ceph-osd01 ceph-osd02 ceph-osd03; do ssh-copy-id -f -i /etc/ceph/ceph.pub root@$i; done

Drop into Ceph CLI

You can drop into the Ceph CLI to execute the next commands.

cephadm shell

Or, if you installed the ceph-common package, there is no need to drop into the CLI, as you can execute the ceph commands directly from the terminal.

Add Ceph Monitor Node to Ceph Cluster

At this point, only the Ceph admin node has been provisioned. You can list all the hosts known to the Ceph orchestrator (ceph-mgr) using the command below;

ceph orch host ls

Sample output;

HOST            ADDR             LABELS  STATUS  
ceph-mgr-mon01  192.168.122.170  _admin          
1 hosts in cluster

So next, add the Ceph Monitor nodes to the cluster.

Assuming you have copied the Ceph SSH public key, execute the command below to add the Ceph Monitor nodes to the cluster;

for i in 02 03; do ceph orch host add ceph-mon$i; done

Sample command output;

Added host 'ceph-mon02' with addr '192.168.122.127'
Added host 'ceph-mon03' with addr '192.168.122.184'

Next, label the nodes as per their roles;

ceph orch host label add ceph-mgr-mon01 ceph-mgr-mon01
for i in 02 03; do ceph orch host label add ceph-mon$i mon$i; done

Kindly note that if you have 5 or more nodes in the cluster (including OSD and admin nodes) within the same network subnet, a maximum of 5 nodes will be automatically assigned monitor roles. If you have nodes in different networks, a minimum of three monitors is recommended!

Similarly, there will be at least two managers deployed automatically.

Add Ceph OSD Nodes to Ceph Cluster

Similarly, add the OSD Nodes to the cluster;

for i in 01 02 03; do ceph orch host add ceph-osd$i; done

Define their respective labels;

for i in 01 02 03; do ceph orch host label add ceph-osd$i osd$i; done

List Ceph Cluster Nodes;

You can list the Ceph cluster nodes;

ceph orch host ls

Sample output;

HOST            ADDR             LABELS                 STATUS  
ceph-mgr-mon01  192.168.122.170  _admin,ceph-mgr-mon01          
ceph-mon02      192.168.122.127  mon02                          
ceph-mon03      192.168.122.184  mon03                          
ceph-osd01      192.168.122.188  osd01                          
ceph-osd02      192.168.122.30   osd02                          
ceph-osd03      192.168.122.51   osd03                          
6 hosts in cluster

Create Ceph OSDs from OSD Node Drives

To create a Ceph OSD from an OSD node logical volume, run the command below, replacing the hostname and ceph-vg/ceph-lv with your host, volume group and logical volume names accordingly (the hostnames in this example are illustrative and do not match the node names used in our setup). Otherwise, use the raw device path.

sudo ceph orch daemon add osd ceph-mon:ceph-vg/ceph-lv

Command output;

Created osd(s) 0 on host 'ceph-mon'

Repeat the same for the other OSD nodes.

sudo ceph orch daemon add osd ceph-osd1:ceph-vg/ceph-lv
sudo ceph orch daemon add osd ceph-osd2:ceph-vg/ceph-lv

The Ceph OSDs are now ready for use.

In our setup, we have unallocated 50 GB raw disks on each OSD node to be used as BlueStore devices for the OSD daemons.

You can list the devices that are available on the OSD nodes for creating OSDs using the command below;

ceph orch device ls

A storage device is considered available if all of the following conditions are met:

  • The device must have no partitions.
  • The device must not have any LVM state.
  • The device must not be mounted.
  • The device must not contain a file system.
  • The device must not contain a Ceph BlueStore OSD.
  • The device must be larger than 5 GB.

Sample output;

HOST        PATH      TYPE  DEVICE ID   SIZE  AVAILABLE  REFRESHED  REJECT REASONS  
ceph-osd01  /dev/vda  hdd              50.0G  Yes        6m ago                     
ceph-osd01  /dev/vdb  hdd              50.0G  Yes        6m ago                     
ceph-osd02  /dev/vda  hdd              50.0G  Yes        5m ago                     
ceph-osd02  /dev/vdb  hdd              50.0G  Yes        5m ago                     
ceph-osd03  /dev/vda  hdd              50.0G  Yes        5m ago                     
ceph-osd03  /dev/vdb  hdd              50.0G  Yes        5m ago
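If a device is reported as not available because of leftover partitions, LVM metadata, or an old BlueStore label, it can be wiped so that the orchestrator can reuse it. Note that this destroys all data on the device; the host and device path below are just examples:

ceph orch device zap ceph-osd01 /dev/vdb --force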

You can add all the available devices to ceph OSDs at once or just add them one by one.

To attach them all at once;

ceph orch apply osd --all-available-devices --method {raw|lvm}

Use raw method if you are using raw disks (like in our case here).

ceph orch apply osd --all-available-devices --method raw

Otherwise, if you are using LVM volumes, use lvm method;

ceph orch apply osd --all-available-devices --method lvm

Command output;

Scheduled osd.all-available-devices update...

Note that when you add devices using this approach:

  • If you add new disks to the cluster, they will be automatically used to create new OSDs.
  • In the event that an OSD is removed, and the LVM physical volume is cleaned, a new OSD will be generated automatically.

If you wish to prevent this behavior (i.e., disable the automatic creation of OSDs on available devices), use the 'unmanaged' parameter:

ceph orch apply osd --all-available-devices --unmanaged=true

To manually create an OSD from a specific device on a specific host:

ceph orch daemon add osd <host>:<device-path>

If you check again, the disks have now been added to Ceph and are no longer available for other use;

ceph orch device ls
HOST        PATH      TYPE  DEVICE ID   SIZE  AVAILABLE  REFRESHED  REJECT REASONS              
ceph-osd01  /dev/vda  hdd              50.0G  No         10s ago    Has BlueStore device label  
ceph-osd01  /dev/vdb  hdd              50.0G  No         10s ago    Has BlueStore device label  
ceph-osd02  /dev/vda  hdd              50.0G  No         10s ago    Has BlueStore device label  
ceph-osd02  /dev/vdb  hdd              50.0G  No         10s ago    Has BlueStore device label  
ceph-osd03  /dev/vda  hdd              50.0G  No         10s ago    Has BlueStore device label  
ceph-osd03  /dev/vdb  hdd              50.0G  No         10s ago    Has BlueStore device label

Check Ceph Cluster Health

To verify the health status of the Ceph cluster, simply execute the command ceph -s on the admin node, or even on any OSD node (if you have installed the cephadm/ceph commands there).

To check Ceph cluster health status from the admin node;

ceph -s

Sample output;

  cluster:
    id:     456f0baa-affa-11ee-be1c-525400575614
    health: HEALTH_OK
 
  services:
    mon: 5 daemons, quorum ceph-mgr-mon01,ceph-mon02,ceph-mon03,ceph-osd01,ceph-osd03 (age 14m)
    mgr: ceph-mgr-mon01.htboob(active, since 45m), standbys: ceph-mon02.wgbbcc
    osd: 6 osds: 6 up (since 3m), 6 in (since 4m)
 
  data:
    pools:   1 pools, 1 pgs
    objects: 2 objects, 577 KiB
    usage:   160 MiB used, 300 GiB / 300 GiB avail
    pgs:     1 active+clean

Accessing Ceph Admin Web User Interface

The bootstrap command output gives a URL and credentials to use to access the Ceph admin web user interface;

Ceph Dashboard is now available at:

	     URL: https://ceph-mgr-mon01:8443/
	    User: admin
	Password: 0lquv02zaw

Thus, open a browser and navigate to the URL above, or use the cephadm node's resolvable hostname or IP address if you want. Sample URL: https://ceph-mgr-mon01:8443.

Open port 8443/TCP on the firewall, if one is running.
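For example, assuming UFW or firewalld is the firewall in use (adjust for whatever firewall you actually run):

# UFW
ufw allow 8443/tcp

# firewalld
firewall-cmd --permanent --add-port=8443/tcp
firewall-cmd --reload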

Enter the provided credentials, reset the admin password, and proceed to log in to the Ceph Admin UI.

If you want, you can activate the telemetry module by clicking the Activate button, or from the Ceph admin node CLI;

cephadm shell -- ceph telemetry on --license sharing-1-0

Go through the other Ceph menus to see more about Ceph.

Ceph Dashboard;

Under the Cluster menu, you can see other details: hosts, disks, OSDs, etc.

There you go. That marks the end of our tutorial on how to deploy a Ceph storage cluster.
