Ceph storage notes
See:
See also:
- SeaweedFS
- JuiceFS
ceph status
ceph-deploy admin serveur
30 TB + 30 TB + 30 TB = 90 TB raw
DFS: 100 TB, served to clients over SMB (Windows DFS)
Glossary
OSD (Object Storage Daemon): one disk
Steps:
- Set up a Ceph cluster
- Set up a Samba share on top of it (a share sketch follows this list)
- DFS: read access for all authenticated users
- Synchronize the data
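A hedged sketch of the Samba side: a plain smb.conf share exposing a CephFS directory already mounted on the Samba server. The mount point /mnt/cephfs, the share name inventaire and the group etudes are hypothetical and must be adapted.
/etc/samba/smb.conf (excerpt)
[inventaire]
    path = /mnt/cephfs/inventaire
    browseable = yes
    read only = yes
    write list = @etudes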
U:\Services\Direction des Etudes Gty\Modification\Inventaire DE-SdT\Inventaire 2020
http://people.redhat.com/bhubbard/nature/nature-new/glossary/#term-node
Prerequisites
Hardware: see https://docs.ceph.com/en/latest/start/hardware-recommendations/
Software: see http://people.redhat.com/bhubbard/nature/nature-new/start/quick-start-preflight/
Checklist (a minimal setup sketch follows the list):
- Network (Ceph recommends using 2 network interfaces)
- /etc/hosts
- NTP
- Ceph deploy user (with passwordless sudo privileges)
- SSH passwordless
- Sudo tty (if requiretty is set, add Defaults:ceph !requiretty)
- SELinux
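A minimal sketch of those prerequisites on each node, assuming the kub1/kub2/kub3 hosts and 192.168.56.0/24 addresses used in the configuration below; the deploy user name cephdeploy is hypothetical (adapt it, and mirror it in the sudoers and requiretty entries).
cat >> /etc/hosts << EOF
192.168.56.21 kub1
192.168.56.22 kub2
192.168.56.23 kub3
EOF
apt-get install -y ntp
useradd -m -s /bin/bash cephdeploy
echo "cephdeploy ALL = (root) NOPASSWD:ALL" > /etc/sudoers.d/cephdeploy
chmod 0440 /etc/sudoers.d/cephdeploy
# From the admin node, as the deploy user:
ssh-keygen -t ed25519 -N ""
for h in kub1 kub2 kub3; do ssh-copy-id cephdeploy@$h; done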
Components
MDS (Metadata Server)
Consumes CPU and RAM (1 GiB of memory per instance). Only useful if you plan to use CephFS.
Monitors
For small clusters, 1-2 GB of memory per monitor daemon is generally sufficient
OSD (Object Storage Daemon)
On the memory side, 512 MiB per instance is sufficient, except during recovery, where 1 GiB of memory per TiB of data per instance is recommended (on the order of 30 GiB for one of the 30 TB disks planned above).
Installation
See:
Ceph users must not be standard users but dedicated service users, themselves responsible for fine-grained rights management (see the example below). Ceph recommends using 2 network interfaces.
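As an illustration of such a service user, a cephx identity restricted to a single pool could be created along these lines; client.svc-inventaire and the pool name inventaire are hypothetical.
ceph auth get-or-create client.svc-inventaire mon 'allow r' osd 'allow rw pool=inventaire' -o /etc/ceph/ceph.client.svc-inventaire.keyring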
echo "deb http://ftp.debian.org/debian buster-backports main" >> /etc/apt/sources.list.d/backports.list apt-get update apt-get install -t buster-backports ceph
zcat /usr/share/doc/ceph/sample.ceph.conf.gz > /etc/ceph/ceph.conf
# uuidgen
67274814-239f-4a05-8415-ed04df45876c
/etc/ceph/ceph.conf
[global]
### http://docs.ceph.com/docs/master/rados/configuration/general-config-ref/
fsid = 67274814-239f-4a05-8415-ed04df45876c   # use `uuidgen` to generate your own UUID
public network = 192.168.56.0/24
cluster network = 192.168.56.0/24
# Replication level, number of data copies.
# Type: 32-bit Integer
# (Default: 3)
osd pool default size = 2
## Replication level in degraded state, less than 'osd pool default size' value.
# Sets the minimum number of written replicas for objects in the
# pool in order to acknowledge a write operation to the client. If
# minimum is not met, Ceph will not acknowledge the write to the
# client. This setting ensures a minimum number of replicas when
# operating in degraded mode.
# Type: 32-bit Integer
# (Default: 0), which means no particular minimum. If 0, minimum is size - (size / 2).
;osd pool default min size = 2
osd pool default min size = 1

[mon]
### http://docs.ceph.com/docs/master/rados/configuration/mon-config-ref/
### http://docs.ceph.com/docs/master/rados/configuration/mon-osd-interaction/
# The IDs of initial monitors in a cluster during startup.
# If specified, Ceph requires an odd number of monitors to form an
# initial quorum (e.g., 3).
# Type: String
# (Default: None)
mon initial members = kub1,kub2,kub3

[mon.kub1]
host = kub1
mon addr = 192.168.56.21:6789

[mon.kub2]
host = kub2
mon addr = 192.168.56.22:6789

[mon.kub3]
host = kub3
mon addr = 192.168.56.23:6789
ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow'
/etc/ceph/ceph.client.admin.keyring
[client.admin]
    key = AQBGuWRfchSlDRAA3/bTmiPTLLN0w4JdVOxpDQ==
    caps mds = "allow"
    caps mon = "allow *"
    caps osd = "allow *"
ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *'
ceph-authtool --create-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring --gen-key -n client.bootstrap-osd --cap mon 'profile bootstrap-osd' --cap mgr 'allow r'
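Between these keyrings and the systemctl commands below, the Ceph manual-deployment procedure also populates each monitor's data directory from an initial monmap; a sketch for kub1, reusing the fsid and monitor addresses from ceph.conf above:
ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring
ceph-authtool /tmp/ceph.mon.keyring --import-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring
monmaptool --create --add kub1 192.168.56.21 --add kub2 192.168.56.22 --add kub3 192.168.56.23 --fsid 67274814-239f-4a05-8415-ed04df45876c /tmp/monmap
mkdir -p /var/lib/ceph/mon/ceph-kub1
chown ceph:ceph /tmp/ceph.mon.keyring /tmp/monmap /var/lib/ceph/mon/ceph-kub1
sudo -u ceph ceph-mon --mkfs -i kub1 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring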
systemctl enable ceph.target
systemctl start ceph.target
systemctl enable ceph-mon@$(hostname -s)
systemctl start ceph-mon@$(hostname -s)
systemctl status ceph-mon@$(hostname -s).service
# ceph health detail
HEALTH_WARN 3 monitors have not enabled msgr2
MON_MSGR2_NOT_ENABLED 3 monitors have not enabled msgr2
mon.kub1 is not bound to a msgr2 port, only v1:192.168.56.21:6789/0
mon.kub2 is not bound to a msgr2 port, only v1:192.168.56.22:6789/0
mon.kub3 is not bound to a msgr2 port, only v1:192.168.56.23:6789/0
# ceph mon enable-msgr2
# ceph health detail
HEALTH_OK
OSD
Voir : https://wiki.nix-pro.com/view/CEPH_deployment_guide
ceph-volume replaces ceph-disk
ceph-volume inventory
ceph-volume inventory /dev/sdb
ceph-volume lvm batch --bluestore /dev/sda /dev/sdb /dev/sdc
CEPH_VOLUME_DEBUG=1 ceph-volume inventory /dev/sdb
ceph-volume lvm zap /dev/sdb --destroy
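For a single empty device, the prepare and activate steps can also be combined into one command (assuming /dev/sdb is unused):
ceph-volume lvm create --bluestore --data /dev/sdb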
ceph-osd -i 0 --mkfs --mkkey --osd-uuid 13b2da5a-033f-4d58-b106-2f0212df6438
chown -R ceph:ceph /var/lib/ceph
ceph auth list
ceph auth add osd.0 osd 'allow *' mon 'allow profile osd' -i /var/lib/ceph/osd/ceph-0/keyring
/var/lib/ceph/osd/ceph-0/keyring
[osd.0]
    key = AQBizGxGhJcwJxAAHhOGHXQuCUTktxNszj62aQ==
ceph --cluster ceph osd crush add-bucket kub1 host
ceph osd crush move kub1 root=default
chown -R ceph:ceph /var/lib/ceph
ceph --cluster ceph osd crush add osd.0 1.0 host=kub1
ceph-volume raw prepare --bluestore --data /dev/sdb1
systemctl start ceph-osd@1
CephFS
cd /etc/ceph
sudo mkcephfs -a -c /etc/ceph/ceph.conf -k ceph.keyring
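mkcephfs was removed from modern Ceph releases; on a current cluster the filesystem would instead be built from two pools plus at least one MDS. A sketch on the local host, with arbitrary pool names and PG counts:
ceph osd pool create cephfs_data 64
ceph osd pool create cephfs_metadata 64
ceph fs new cephfs cephfs_metadata cephfs_data
# Minimal manual MDS deployment:
mkdir -p /var/lib/ceph/mds/ceph-$(hostname -s)
ceph auth get-or-create mds.$(hostname -s) mon 'profile mds' mgr 'profile mds' mds 'allow *' osd 'allow *' -o /var/lib/ceph/mds/ceph-$(hostname -s)/keyring
chown -R ceph:ceph /var/lib/ceph/mds
systemctl enable --now ceph-mds@$(hostname -s)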
Administration
Ceph health
ceph -s
ceph mon_status -f json-pretty
ceph -w
ceph df
ceph health detail
ceph -n client.admin --keyring=/etc/ceph/ceph.client.admin.keyring health
ceph pg dump
ceph pg X.Y query
ceph pg dump_stuck inactive
OSD
ceph osd tree
watch ceph osd pool stats
ceph osd map
OSD removal
ceph osd crush reweight osd.XX 0.     # Set the OSD's weight to 0
ceph osd out XX                       # Mark the OSD as out of the cluster
# 1st data movement, ~10 TB rebalanced
#stop ceph-osd id=XX
systemctl stop ceph-osd@XX.service    # Stop the OSD daemon on the server
ceph osd crush remove osd.XX          # Logically remove the OSD from the CRUSH map
# 2nd data movement (unexpected), ~10 TB rebalanced
ceph auth del osd.{osd-num}           # Delete the OSD's authentication keys
ceph osd rm {osd-num}                 # Permanently remove the OSD from the cluster
#ceph-volume lvm zap /dev/sdb --destroy
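On Luminous and later, the crush remove / auth del / osd rm steps can be collapsed into a single purge:
ceph osd purge XX --yes-i-really-mean-it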
Autres
ceph mgr module ls
ceph mgr module enable <module>
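These mgr commands assume a manager daemon is running; if none was deployed yet, a minimal manual setup on one node could look like this (daemon name taken from the local hostname):
mkdir -p /var/lib/ceph/mgr/ceph-$(hostname -s)
ceph auth get-or-create mgr.$(hostname -s) mon 'allow profile mgr' osd 'allow *' mds 'allow *' -o /var/lib/ceph/mgr/ceph-$(hostname -s)/keyring
chown -R ceph:ceph /var/lib/ceph/mgr
systemctl enable --now ceph-mgr@$(hostname -s)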
Client
See:
mount -t ceph 128.114.86.4:6789:/ /mnt/pulpos -o name=admin,secretfile=/etc/ceph/admin.secret
/etc/fstab
128.114.86.4:6789,128.114.86.5:6789,128.114.86.2:6789:/ /mnt/pulpos ceph name=admin,secretfile=/etc/ceph/admin.secret,noatime,_netdev 0 2
ceph-fuse -m 128.114.86.4:6789 /mnt/pulpos
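The secretfile used in the mount options above contains only the bare key; it can be extracted from the admin keyring, for example:
ceph auth get-key client.admin | sudo tee /etc/ceph/admin.secret
sudo chmod 600 /etc/ceph/admin.secret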
