CEPH storage notes

Prerequisites

Hardware: see https://docs.ceph.com/en/latest/start/hardware-recommendations/

Software: see http://people.redhat.com/bhubbard/nature/nature-new/start/quick-start-preflight/

Checklist:

  • Network (Ceph recommends using 2 network interfaces)
  • /etc/hosts
  • NTP
  • Ceph deploy user (with passwordless sudo privileges)
  • SSH passwordless
  • Sudo tty (if requiretty is enabled: Defaults:ceph !requiretty)
  • SELinux
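For the deploy-user and tty items above, a sudoers drop-in along these lines is typical (the user name cephdeploy and the file path are placeholders):

```
# /etc/sudoers.d/cephdeploy  (hypothetical deploy user)
cephdeploy ALL = (root) NOPASSWD:ALL
Defaults:cephdeploy !requiretty
```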
Components
MDS (Meta Data Server)

Consumes CPU and RAM (1 GiB of memory per instance). Only useful if you plan to use CephFS.

Monitors

For small clusters, 1-2 GB is generally sufficient.

OSD (Object Storage Daemon)

Memory-wise, 512 MiB per instance is enough, except during recovery, where 1 GiB of memory per TiB of data per instance is recommended.
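As a sketch of the sizing rule above (the OSD count and per-OSD data volume are made-up example values):

```shell
# Recovery RAM budget: 1 GiB of RAM per TiB of data per OSD instance.
# Both values below are hypothetical examples.
osd_count=4
osd_data_tib=8   # TiB of data handled by each OSD
recovery_ram_gib=$((osd_count * osd_data_tib))
echo "Recovery RAM budget for ${osd_count} OSDs: ${recovery_ram_gib} GiB"
```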

Installation

See:

Ceph users should not be standard users but service accounts, themselves subject to fine-grained rights management. Ceph recommends using 2 network interfaces.

echo "deb http://ftp.debian.org/debian buster-backports main" >> /etc/apt/sources.list.d/backports.list
 
apt-get update
apt-get install -t buster-backports ceph
zcat /usr/share/doc/ceph/sample.ceph.conf.gz > /etc/ceph/ceph.conf
# uuidgen
67274814-239f-4a05-8415-ed04df45876c

/etc/ceph/ceph.conf

[global]
### http://docs.ceph.com/docs/master/rados/configuration/general-config-ref/
 
    fsid                       = 67274814-239f-4a05-8415-ed04df45876c    # use `uuidgen` to generate your own UUID
    public network             = 192.168.56.0/24
    cluster network            = 192.168.56.0/24
 
    # Replication level, number of data copies.
    # Type: 32-bit Integer
    # (Default: 3)
    osd pool default size      = 2
 
    ## Replication level in degraded state, less than 'osd pool default size' value.
    # Sets the minimum number of written replicas for objects in the
    # pool in order to acknowledge a write operation to the client. If
    # minimum is not met, Ceph will not acknowledge the write to the
    # client. This setting ensures a minimum number of replicas when
    # operating in degraded mode.
    # Type: 32-bit Integer
    # (Default: 0), which means no particular minimum. If 0, minimum is size - (size / 2).
    ;osd pool default min size  = 2
    osd pool default min size  = 1	
 
[mon]
### http://docs.ceph.com/docs/master/rados/configuration/mon-config-ref/
### http://docs.ceph.com/docs/master/rados/configuration/mon-osd-interaction/
 
    # The IDs of initial monitors in a cluster during startup.
    # If specified, Ceph requires an odd number of monitors to form an
    # initial quorum (e.g., 3).
    # Type: String
    # (Default: None)
    mon initial members        = kub1,kub2,kub3
 
[mon.kub1]
    host                       = kub1
    mon addr                   = 192.168.56.21:6789
 
[mon.kub2]
    host                       = kub2
    mon addr                   = 192.168.56.22:6789
 
[mon.kub3]
    host                       = kub3
    mon addr                   = 192.168.56.23:6789
ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow'

/etc/ceph/ceph.client.admin.keyring

[client.admin]
        key = AQBGuWRfchSlDRAA3/bTmiPTLLN0w4JdVOxpDQ==
        caps mds = "allow"
        caps mon = "allow *"
        caps osd = "allow *"
ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *'
ceph-authtool --create-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring --gen-key -n client.bootstrap-osd --cap mon 'profile bootstrap-osd' --cap mgr 'allow r'
systemctl enable ceph.target
systemctl start ceph.target
 
systemctl enable ceph-mon@$(hostname -s)
systemctl start  ceph-mon@$(hostname -s)
systemctl status ceph-mon@$(hostname -s).service
# ceph health detail
HEALTH_WARN 3 monitors have not enabled msgr2
MON_MSGR2_NOT_ENABLED 3 monitors have not enabled msgr2
    mon.kub1 is not bound to a msgr2 port, only v1:192.168.56.21:6789/0
    mon.kub2 is not bound to a msgr2 port, only v1:192.168.56.22:6789/0
    mon.kub3 is not bound to a msgr2 port, only v1:192.168.56.23:6789/0
# ceph mon enable-msgr2
# ceph health detail
HEALTH_OK
OSD

Voir : https://wiki.nix-pro.com/view/CEPH_deployment_guide

ceph-volume replaces ceph-disk

ceph-volume inventory
ceph-volume inventory /dev/sdb
ceph-volume lvm batch --bluestore /dev/sda /dev/sdb /dev/sdc
CEPH_VOLUME_DEBUG=1 ceph-volume inventory /dev/sdb
ceph-volume lvm zap /dev/sdb --destroy
ceph-osd -i 0 --mkfs --mkkey --osd-uuid 13b2da5a-033f-4d58-b106-2f0212df6438
chown -R ceph:ceph /var/lib/ceph
ceph auth list
ceph auth add osd.0 osd 'allow *' mon 'allow profile osd' -i /var/lib/ceph/osd/ceph-0/keyring

/var/lib/ceph/osd/ceph-0/keyring

[osd.0]
	key = AQBizGxGhJcwJxAAHhOGHXQuCUTktxNszj62aQ==
ceph --cluster ceph osd crush add-bucket kub1 host
ceph osd crush move kub1 root=default
chown -R ceph:ceph /var/lib/ceph
ceph --cluster ceph osd crush add osd.0 1.0 host=kub1
 
 
ceph-volume raw prepare --bluestore --data /dev/sdb1
 
systemctl start ceph-osd@1
CephFS
cd /etc/ceph
sudo mkcephfs -a -c /etc/ceph/ceph.conf -k ceph.keyring

Administration

Ceph health

ceph -s
ceph mon_status -f json-pretty
ceph -w
ceph df
ceph health detail
ceph -n client.admin --keyring=/etc/ceph/ceph.client.admin.keyring health
ceph pg dump
ceph pg X.Y query
ceph pg dump_stuck inactive
OSD
ceph osd tree
watch ceph osd pool stats
ceph osd map <pool> <object>

OSD removal

ceph osd crush reweight osd.XX 0.
    # Set the OSD's weight to 0
ceph osd out XX 
    # Mark the OSD as unavailable to the cluster
    # 1st data movement, ~10 TB rebalanced
#stop ceph-osd id=XX
systemctl stop ceph-osd@XX.service
    # stop the OSD daemon on the server
ceph osd crush remove osd.XX 
    # Logical removal of the OSD from the cluster
    # 2nd (unplanned) data movement, ~10 TB rebalanced
ceph auth del osd.{osd-num} 
    # remove the OSD's authentication keys from the cluster
ceph osd rm {osd-num} 
    # permanently remove the OSD from the cluster
 
#ceph-volume lvm zap /dev/sdb --destroy
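The removal sequence above can be sketched as a dry-run helper that only prints the commands for a given OSD id, so they can be reviewed before being run (the function name is made up):

```shell
# Dry-run sketch: print the OSD removal sequence instead of executing it.
remove_osd_dryrun() {
    id="$1"
    echo "ceph osd crush reweight osd.${id} 0"
    echo "ceph osd out ${id}"
    echo "systemctl stop ceph-osd@${id}.service"
    echo "ceph osd crush remove osd.${id}"
    echo "ceph auth del osd.${id}"
    echo "ceph osd rm ${id}"
}
remove_osd_dryrun 3
```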

Other

ceph mgr module ls
ceph mgr module enable plop

Client

See:

mount -t ceph 128.114.86.4:6789:/ /mnt/pulpos -o name=admin,secretfile=/etc/ceph/admin.secret

/etc/fstab

128.114.86.4:6789,128.114.86.5:6789,128.114.86.2:6789:/  /mnt/pulpos  ceph  name=admin,secretfile=/etc/ceph/admin.secret,noatime,_netdev  0  2
ceph-fuse -m 128.114.86.4:6789 /mnt/pulpos
2025/03/24 15:06

sssd notes

See:

See also:

  • Winbind

sssd vs winbind

Prerequisites for AD to Support SSSD ID Mapping

No extra configuration should be necessary if the following are properly configured.

  • A DNS SRV record exists for “_ldap._tcp.ad.example.com”.
  • A DNS SRV record exists for “_ldap._tcp.dc._msdcs.ad.example.com”.

Open the following ports:

  • 53 (DNS) TCP and UDP
  • 389 (LDAP) TCP and UDP
  • 88 (Kerberos) TCP and UDP
  • 464 (Kerberos password changes) TCP and UDP
  • 3268 (LDAP global catalog) TCP
  • 123 (NTP) UDP

Source : https://paulgorman.org/technical/linux-active-directory-auth.txt.html
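As a quick sketch, the TCP ports above can be probed with generated nc commands (dc.ad.example.com is a placeholder domain controller name; the UDP checks, e.g. NTP on 123, would need nc -u):

```shell
# Generate TCP connectivity checks for the AD ports listed above.
# The DC host name is a placeholder; review the commands before running them.
dc=dc.ad.example.com
for p in 53 88 389 464 3268; do
    echo "nc -z -w 2 ${dc} ${p}"
done
```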

Disable ID Mapping

/etc/sssd/sssd.conf

ldap_id_mapping = false

Conf

# Important: impacts performance
enumerate = false

cache_credentials = True
# How long to allow cached logins (in days since the last successful online login). 0 = no limit
# offline_credentials_expiration = 0

default_shell=/bin/bash

# ad_gpo_access_control = enforcing # RHEL8 default
# ad_gpo_access_control = permissive
# Do not block authentication if the GPOs are not accessible (when permissive or disabled)
ad_gpo_access_control = disabled

# dyndns_update = false

ldap_referrals = false
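Putting the options above together, a minimal /etc/sssd/sssd.conf might look like this (the domain name ad.example.com is a placeholder; only options discussed in this note are shown):

```ini
[sssd]
services = nss, pam
domains = ad.example.com

[domain/ad.example.com]
id_provider = ad
access_provider = ad
enumerate = false
cache_credentials = True
default_shell = /bin/bash
ad_gpo_access_control = disabled
ldap_referrals = false
# only if POSIX attributes (uidNumber/gidNumber) are maintained in AD:
# ldap_id_mapping = false
```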

sssd connection problems

systemctl restart sssd
tail /var/log/secure
sssctl config-check
systemctl stop sssd
 
ps -ef |grep sssd
killall sssd
 
rm /var/lib/sss/db/*
systemctl start sssd
getent passwd plop

Flush the cache

sss_cache -E

Other

rm -rf /etc/authselect/custom/activedirectory-ACME.LOCAL/
authselect create-profile activedirectory-ACME.LOCAL -b sssd
authselect select custom/activedirectory-ACME.LOCAL with-pamaccess with-mkhomedir --force

The active configuration is in /etc/authselect/user-nsswitch.conf.

grep passwd /etc/authselect/custom/activedirectory-ACME.LOCAL/nsswitch.conf |grep -q with-files-domain && echo "profil OK" || echo "profil KO"
 
egrep "^passwd:" /etc/nsswitch.conf|grep -q "files sss" && echo "conf OK" || echo "conf KO"

cgroup notes

See:

Check that everything is OK

apt-get install lxc
lxc-checkconfig

or

#apt-get install docker.io
#/usr/share/docker.io/contrib/check-config.sh

https://github.com/opencontainers/runc/blob/main/script/check-config.sh

On Debian, cgroups are normally mounted automatically (by the mountkernfs init script).

$ mount |grep cgroup
none on /sys/fs/cgroup type tmpfs (rw,relatime,size=4k,mode=755)
systemd on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,name=systemd)

If that is not the case, one workaround is to add this to /etc/fstab:

/etc/fstab

cgroup          /cgroup         cgroup  defaults        0       0

or switch to systemd:

apt-get install systemd systemd-sysv

See:

Since we choose the service-based method (RedHat style):

To list the cgroup subsystems (controllers) supported by the kernel:

# lssubsys -a
cpuset
cpu
cpuacct
memory
devices
freezer
net_cls
blkio
perf_event

Install the package

apt-get update && apt-get install -y cgroup-tools

Then

dpkg -L cgroup-tools

So:

mkdir /etc/sysconfig/
cp -p /usr/share/doc/cgroup-tools/examples/cgconfig.sysconfig /etc/sysconfig/cgconfig
cp -p /usr/share/doc/cgroup-tools/examples/cgred.conf /etc/sysconfig/cgred
cp -p /usr/share/doc/cgroup-tools/examples/cgred /etc/init.d/
cp -p /usr/share/doc/cgroup-tools/examples/cgconfig /etc/init.d/
cp -p /usr/share/doc/cgroup-tools/examples/cgconfig.conf /etc/
cp -p /usr/share/doc/cgroup-tools/examples/cgrules.conf /etc/
chmod a+x /etc/init.d/cgconfig /etc/init.d/cgred
ln -s /etc/sysconfig/cgconfig /etc/default/
ln -s /etc/sysconfig/cgred /etc/default/
sed -i -e 's|/var/lock/subsys/|/var/lock/|g' /etc/init.d/cgred 
sed -i -e 's|/var/lock/subsys/|/var/lock/|g' /etc/init.d/cgconfig
getent group cgred >/dev/null || groupadd -r cgred

Then grab the /etc/rc.d/init.d/functions file from a CentOS system.

mkdir -p /etc/rc.d/init.d/
cp -p functions /etc/rc.d/init.d/

Comment out the line [ -z "${CONSOLETYPE:-}" ] && CONSOLETYPE="$(/sbin/consoletype)"

vi /etc/rc.d/init.d/functions

Then

mkdir /cgroup
cd /cgroup
mkdir $(lssubsys -a)
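As an illustration, a minimal /etc/cgconfig.conf group plus a matching /etc/cgrules.conf rule could look like this (the group name plop and the user jean are reused from the examples later in this note):

```
# /etc/cgconfig.conf
group plop {
    memory {
        memory.limit_in_bytes = 100M;
    }
}

# /etc/cgrules.conf : user  subsystems  group
jean    memory    plop
```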

Error on Debian:

# /etc/init.d/cgconfig start
Starting cgconfig service: Error: cannot mount memory to /cgroup/memory: No such file or directory
/usr/sbin/cgconfigparser; error loading /etc/cgconfig.conf: Cgroup mounting failed
[FAIL] Failed to parse /etc/cgconfig.conf ... failed!

Solution: add "cgroup_enable=memory swapaccount=1" to your GRUB command line:

/etc/default/grub

GRUB_CMDLINE_LINUX="vga=795 cgroup_enable=memory swapaccount=1"
update-grub

For debugging, if needed:

export CGROUP_LOGLEVEL=debug

Other

allocated 133693440 bytes of page_cgroup
please try 'cgroup_disable=memory' option if you don't want memory cgroups

https://wiki.debian.org/LXC

/etc/fstab

cgroup  /sys/fs/cgroup  cgroup  defaults  0   0

/etc/default/grub

GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1"

sudo apt-get install cgroup-tools
 
sudo cgcreate -a jean -g memory:plop
echo 10000000 > /sys/fs/cgroup/memory/plop/memory.kmem.limit_in_bytes
sudo cgexec -g memory:plop bash

cgroup v1 or v2?

podman info
docker info
 
mount | grep cgroup2
 
systemctl --user status
 
grep cgroup /proc/filesystems
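A quick way to tell them apart, assuming /sys/fs/cgroup is mounted: on cgroup v2 it is a cgroup2fs filesystem, while on v1 it is a tmpfs holding per-controller mounts.

```shell
# Detect the cgroup hierarchy version from the filesystem type of /sys/fs/cgroup.
fstype=$(stat -fc %T /sys/fs/cgroup/ 2>/dev/null || echo unknown)
if [ "$fstype" = "cgroup2fs" ]; then
    echo "cgroup v2 (unified hierarchy)"
else
    echo "cgroup v1 or hybrid (${fstype})"
fi
```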

To switch to version 2:

grubby --update-kernel=ALL --args="systemd.unified_cgroup_hierarchy=1"

Other

cat /sys/fs/cgroup/user.slice/user-1003.slice/cgroup.controllers
cpuset cpu io memory pids

SSL/TLS HTTPS client OpenSSL notes

See:

See also:

Certificate check

openssl s_client -showcerts -CAfile ca.crt -connect 192.168.56.101:7000 -servername acme.fr

Getting information about a certificate

openssl x509 -inform PEM -in mycertfile.pem -text -out certdata
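Self-contained example of inspecting a certificate; it first generates a throwaway self-signed certificate so there is something to inspect (the CN demo.acme.fr and the /tmp file names are made up):

```shell
# Generate a throwaway self-signed cert, then print its subject, issuer and expiry.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=demo.acme.fr" \
    -keyout /tmp/demo.key -out /tmp/demo.pem 2>/dev/null
openssl x509 -noout -subject -issuer -enddate -in /tmp/demo.pem
```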

Debug

curl -v --insecure --show-error --verbose --cacert mycertfile.pem https://acme.fr

Installing a CA certificate - Debian

mv cert.pem acme.fr.crt
cp acme.fr.crt /usr/local/share/ca-certificates/
#vim /etc/ca-certificates.conf
#dpkg-reconfigure ca-certificates
 
# RedHat
# update-ca-trust
 
# Debian
update-ca-certificates

Removing a CA certificate - Debian

rm /usr/local/share/ca-certificates/plop.crt
 
 
# RedHat
# update-ca-trust
 
# Debian
#update-ca-certificates
 
update-ca-certificates -f

-f, --fresh: fresh updates; removes symlinks in the /etc/ssl/certs directory.

Installing a CA certificate - RedHat

See:

  • trust (RedHat package p11-kit-trust; Debian package p11-kit)
  • update-ca-trust (ca-certificates package)
cp ca.crt /etc/pki/ca-trust/source/anchors/
 
# Debian
# update-ca-certificates
 
# RedHat
update-ca-trust

Source: cat /etc/pki/ca-trust/source/README

HTTP request over SSL/TLS

(echo -ne "GET / HTTP/1.1\r\nHost: acme.fr\r\n\r\n" ; cat ) |openssl s_client -showcerts -CAfile ca.crt -connect acme.fr:443 -servername acme.fr

Online TLS/HTTPS testing

Offline TLS/HTTPS testing

Python

trustflag.py

"""Check AddTrust External CA Root
 
https://bugzilla.redhat.com/show_bug.cgi?id=1842174
"""
from __future__ import print_function
 
import socket
import ssl
import sys
 
try:
    from urllib2 import urlopen
except ImportError:
    from urllib.request import urlopen
 
X509_V_FLAG_TRUSTED_FIRST = 0x8000
URL = "https://addtrust-chain.demo.sslmate.com"
 
print(sys.version)
print(ssl.OPENSSL_VERSION)
print()
 
ctx = ssl.create_default_context()
assert ctx.verify_mode == ssl.CERT_REQUIRED
assert ctx.check_hostname == True
 
print("Try with default verify flags")
print("verify_flags", hex(ctx.verify_flags))
try:
    urlopen(URL, context=ctx)
except Exception as e:
    print("FAILED")
    print(e)
else:
    print("success")
print()
 
print("Try again with X509_V_FLAG_TRUSTED_FIRST")
ctx.verify_flags |= X509_V_FLAG_TRUSTED_FIRST
print("verify_flags", hex(ctx.verify_flags))
try:
    urlopen(URL, context=ctx)
except Exception as e:
    print("FAILED")
    print(e)
else:
    print("success")
print()

Problem

The downloaded certificate does not work (with curl and wget, on Debian and RedHat).

source : https://superuser.com/questions/97201/how-to-save-a-remote-server-ssl-certificate-locally-as-a-file

openssl s_client -showcerts -connect acme.fr:443 -servername acme.fr </dev/null 2>/dev/null|openssl x509 -outform PEM >mycertfile.pem

Works on Debian, fails on RedHat:

wget --ca-certificate=mycertfile.pem https://acme.fr:443/somepage

Fails on both Debian and RedHat:

curl --show-error --verbose --cacert mycertfile.pem https://acme.fr:443/somepage
Solution

Use -verify to retrieve the full chain, i.e. to download not only acme.fr's public key but also the CA's public key.

openssl s_client -showcerts -verify 5 -connect 192.168.56.101:7000 -servername acme.fr </dev/null > mycertfile.pem

Then keep only the CA. Note: in the case of a self-signed certificate (no separate CA), this will not work. On Debian, it is possible to install the certificate as if it were a CA's.

vim mycertfile.pem
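Instead of editing the chain by hand, the bundle can be split into one file per certificate; this sketch builds a demo bundle from two throwaway self-signed certs first (all file names and CNs are made up):

```shell
# Build a demo PEM bundle from two throwaway self-signed certs...
for n in 1 2; do
    openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demo${n}" \
        -keyout /tmp/k${n}.pem -out /tmp/c${n}.pem 2>/dev/null
done
cat /tmp/c1.pem /tmp/c2.pem > /tmp/bundle.pem
# ...then split it: each BEGIN CERTIFICATE line starts a new output file.
awk '/-----BEGIN CERTIFICATE-----/{n++} {print > ("/tmp/chain-" n ".pem")}' /tmp/bundle.pem
ls /tmp/chain-*.pem
```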

See https://unix.stackexchange.com/questions/368123/how-to-extract-the-root-ca-and-subordinate-ca-from-a-certificate-chain-in-linux

blog.txt · Last modified: by 127.0.0.1
