{{tag>
# High availability failover cluster on Red Hat
See also:
* OpenSVC
* resource-agents package

Resources:
* https://
## Installation

See:
* http://
### Prerequisites

Prerequisites:
* Time synchronization (NTP)
* SELinux disabled
* NetworkManager service stopped
* Firewall rules
* Name resolution / ''/etc/hosts'' configuration
#### Time synchronization (NTP)

The nodes must have the same date and time.

Check:
~~~bash
date
~~~
Example with Clush [[cluster_shell_parallele]]
~~~bash
echo date |clush -B -w node-[1-2]
~~~
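If the clocks drift, chrony is the usual NTP client on RHEL/CentOS 7; a minimal sketch (package and service names assume a stock EL7 install):
~~~bash
# Install and enable the chrony NTP client, then check the time sources
yum install -y chrony
systemctl enable chronyd
systemctl start chronyd
chronyc sources -v
~~~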
#### SELinux disabled

~~~bash
setenforce 0
sed -i.bak "
~~~
Check:
~~~bash
sestatus
~~~
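A commonly used way to make the change persistent across reboots (an assumption here; verify ''/etc/selinux/config'' afterwards):
~~~bash
# Switch SELinux to disabled at boot time
sed -i.bak 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
grep '^SELINUX=' /etc/selinux/config
~~~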
#### NetworkManager service stopped and disabled
~~~bash
systemctl stop NetworkManager
systemctl disable NetworkManager
~~~
#### Firewall
If the firewall is enabled:
~~~bash
firewall-cmd --permanent --add-service=high-availability
firewall-cmd --add-service=high-availability
~~~
Or disable the firewall:
~~~bash
systemctl stop firewalld
systemctl disable firewalld
#rpm -e firewalld
~~~
Check:
~~~bash
iptables -L -n -v
~~~
#### Name resolution
Each node must be able to ping the others by name.
It is recommended to declare all the nodes in ''/etc/hosts''.

''/etc/hosts''
~~~
127.0.0.1
::1
192.168.97.221
192.168.97.222
~~~
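A quick reachability check by name from any node, reusing the node names above (a sketch):
~~~bash
# Ping each node once by name; every line should end with OK
for h in node-1 node-2; do
    ping -c1 -W1 "$h" >/dev/null && echo "$h: OK" || echo "$h: FAIL"
done
# Same verification on all nodes in parallel with Clush
echo "getent hosts node-1 node-2" | clush -B -w node-[1-2]
~~~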
### Install
Install the packages:
~~~bash
yum install -y pacemaker pcs psmisc policycoreutils-python
~~~
~~~bash
echo "
#unset http_proxy
#
pcs cluster auth node-1 node-2 #-u hacluster -p passwd
pcs cluster start --all
pcs cluster enable --all
~~~
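For reference, a minimal two-node bootstrap with pcs 0.9 (RHEL/CentOS 7) typically looks like the sketch below; the cluster name and the hacluster password are placeholders:
~~~bash
# Set the hacluster password on both nodes, authenticate, create and start the cluster
echo "passwd" | passwd --stdin hacluster
pcs cluster auth node-1 node-2 -u hacluster -p passwd
pcs cluster setup --name mycluster node-1 node-2
pcs cluster start --all
pcs cluster enable --all
~~~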
The ''corosync.conf'' file is created automatically:
~~~c
totem {
    version: 2
    ...
    to_syslog: yes
}
~~~
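For a two-node cluster, the generated file roughly contains ''totem'', ''nodelist'', ''quorum'' and ''logging'' sections; a sketch of its typical shape (cluster name, node names and log path are placeholders):
~~~c
totem {
    version: 2
    secauth: off
    cluster_name: mycluster
    transport: udpu
}
nodelist {
    node {
        ring0_addr: node-1
        nodeid: 1
    }
    node {
        ring0_addr: node-2
        nodeid: 2
    }
}
quorum {
    provider: corosync_votequorum
    two_node: 1
}
logging {
    to_logfile: yes
    logfile: /var/log/cluster/corosync.log
    to_syslog: yes
}
~~~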
Check the corosync configuration (1):
~~~bash
corosync-cfgtool -s
~~~
Must return **no faults** \\
Must not show any 127.0.0.1 address

Check the corosync configuration (2):
~~~bash
corosync-cmapctl
pcs status corosync
~~~
### Configuration
Prevent resources from moving after recovery:
~~~bash
pcs resource defaults resource-stickiness=100
~~~
No-quorum policy:
~~~bash
#pcs property set no-quorum-policy=ignore
pcs property set no-quorum-policy=freeze
~~~
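To see how quorum is currently computed on a node (a quick check of the two-node behaviour):
~~~bash
corosync-quorumtool -s
pcs property list | grep quorum
~~~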
## Fencing / stonith configuration
### Tests in preparation for fencing via iDRAC
See https://
Test fencing:
~~~bash
/
~~~
Test with OpenManage ''/
~~~bash
racadm -r 192.168.96.221 -u root -p calvin get iDRAC.Info
~~~
Test via SSH on the iDRAC
To power-cycle the server by connecting to the iDRAC over SSH:
~~~
ssh root@192.168.96.221
racadm serveraction powercycle
~~~
**If there is no stonith / fence device**, disable stonith, otherwise the VIP will refuse to start:
~~~bash
# If no stonith / fence device
pcs property set stonith-enabled=false
~~~
### Check
~~~bash
crm_verify -LVVV
~~~
### Configuration
~~~bash
# pcs stonith create fence_node-1 fence_drac5 ipaddr=192.168.96.221 login=root passwd=calvin secure=1 cmd_prompt="/
pcs stonith create fence_node-1 fence_drac5 ipaddr=192.168.96.221 login=root passwd=calvin secure=1 cmd_prompt="/
pcs stonith level add 1 node-1 fence_node-1
pcs stonith level add 1 node-2 fence_node-2
~~~
Prevent suicide (a node fencing itself):
~~~bash
pcs constraint location fence_node-1 avoids node-1
pcs constraint location fence_node-2 avoids node-2
~~~
Test fencing:
~~~bash
#
pcs stonith fence node-1
~~~
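Before running the live test, the configured fence devices and their levels can be reviewed (sketch):
~~~bash
pcs stonith show --full
pcs stonith level
~~~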
### Adding resources
Add the VIP resource (virtual IP address):
~~~bash
pcs resource create myvip IPaddr2 ip=192.168.97.230 cidr_netmask=24 nic=bond0 op monitor interval=30s on-fail=fence
#pcs constraint location myvip prefers node-1=INFINITY
pcs constraint location myvip prefers node-2=50
#pcs resource meta myvip resource-stickiness=100
~~~
Add a ping resource:
~~~bash
pcs resource create ping ocf:
pcs constraint location myvip rule score=-INFINITY pingd lt 1 or not_defined pingd
~~~
Add the Apache resource

Check that Apache responds, then stop and disable the httpd service:
~~~bash
curl http://
systemctl stop httpd.service
systemctl disable httpd.service
~~~
~~~bash
pcs resource create srvweb apache configfile="/
# The web server always runs with the VIP
# First the VIP, then the web server
pcs constraint order myvip then srvweb
~~~
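A quick check that the resources are running and that Apache answers on the VIP defined above (sketch reusing the VIP address from this page):
~~~bash
pcs status resources
curl -4 -m 1 --connect-timeout 1 http://192.168.97.230/
~~~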
## Operations
Move the VIP:
~~~bash
pcs resource move myvip node-1
pcs resource move myvip node-2
~~~
Roll back the move (remove the ''cli-prefer'' constraint):
~~~bash
#pcs constraint --full |grep prefer
pcs constraint remove cli-prefer-myvip
pcs resource relocate run
~~~
Reset the error counters:
~~~bash
#pcs resource failcount reset res1
#
pcs resource cleanup
~~~
Move all resources back to the primary node (ignoring resource stickiness):
~~~bash
#pcs resource relocate show
pcs resource relocate run
~~~
Maintenance on a single resource:
~~~bash
#pcs resource update fence_node-1 meta target-role=stopped
#pcs resource update fence_node-1 meta is-managed=false
#pcs resource disable fence_node-1
pcs resource unmanage fence_node-1
~~~
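Once the maintenance is over, hand the resource back to the cluster (sketch):
~~~bash
pcs resource manage fence_node-1
~~~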
Cluster-wide maintenance:
~~~bash
pcs property set maintenance-mode=true
~~~
End of maintenance:
~~~bash
pcs property set maintenance-mode=false
~~~
Stop the cluster:
~~~bash
pcs cluster stop --all
pcs cluster disable --all
~~~
## Diagnostics / Monitoring
### Passive diagnostics
[[http://
~~~bash
# Check the configuration syntax
corosync -t
# Check the node's network
corosync-cmapctl
~~~
Check:
~~~bash
pcs cluster pcsd-status
pcs cluster verify
journalctl --since yesterday -p err
journalctl -u pacemaker.service --since "
~~~
Monitoring script (these commands must not return any line):
~~~bash
LANG=C pcs status |egrep "
crm_verify -LVVV
LANG=C pcs resource relocate show |sed -ne '/
crm_mon -1f | grep -q fail-count
~~~
See further up if (script ''/
~~~bash
tailf /
~~~
Monitoring script
Which node is active:
~~~bash
LANG=C crm_resource --resource myvip --locate |cut -d':'
~~~
Does the web server answer correctly on the VIP address? (The return code must be **0**)
~~~bash
#curl -4 -m 1 --connect-timeout 1 http://
curl -4 -m 1 --connect-timeout 1 http://
#echo $?
~~~
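The checks above can be wrapped into a single script for a cron job or a Nagios-style probe; a minimal sketch (the grep patterns are assumptions to adapt):
~~~bash
#!/bin/bash
# Exit 0 when the cluster looks healthy, 1 otherwise
status=$(LANG=C pcs status 2>&1)
if echo "$status" | egrep -qi 'stopped|failed|unclean|offline'; then
    echo "CRITICAL: cluster reports a problem"
    exit 1
fi
if crm_mon -1f | grep -q fail-count; then
    echo "WARNING: a fail-count is present"
    exit 1
fi
echo "OK"
exit 0
~~~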
### ACL
Read-only account with the rights to view crm_mon \\
Warning: this account can retrieve the iDRAC/iLO password:
~~~bash
pcs stonith --full |grep passwd
~~~
Setup:
~~~bash
#adduser rouser
#usermod -a -G haclient rouser
#pcs acl user create rouser read-only
pcs acl user create process read-only
~~~
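ACLs only take effect once they are enabled cluster-wide; a sketch (assuming the ''read-only'' role referenced above already exists):
~~~bash
pcs property set enable-acl=true
pcs acl
~~~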
~~~bash
#crm_mon --daemonize --as-html /
~~~
''/
~~~bash
#!/bin/sh
# https://
${CRM_notify_target_rc:
exit
~~~
~~~bash
chmod 755 /
chown root.root /
~~~
~~~bash
pcs resource create ClusterMon-External ClusterMon update=10000 user=process extra_options="
~~~
Colocation - keep the monitoring page active on the VIP \\
**Only needed if the resource is not cloned**
~~~bash
pcs constraint colocation add ClusterMon-External with myvip
~~~
Test:
~~~bash
curl 192.168.97.230/
~~~
See https://
### Active diagnostics
In case of problems:
~~~bash
pcs resource debug-start resource_id
~~~
## Adding a second interface for the heartbeat

Before changing the configuration, put the cluster in maintenance mode:
~~~bash
pcs property set maintenance-mode=true
~~~
''/etc/hosts''
~~~
192.168.21.10
192.168.22.10
192.168.21.11
192.168.22.11
~~~
Add **rrp_mode** and **ring1_addr** to ''corosync.conf'':
~~~c
totem {
    rrp_mode: active
    ...
    }
}
~~~
~~~bash
pcs cluster reload corosync
pcs cluster status corosync
corosync-cfgtool -s
pcs property unset maintenance-mode
~~~
## Incident recovery
~~~bash
#
pcs resource cleanup
pcs resource relocate run
#pcs cluster start --all
~~~
## Crash tests
Test 1: hard crash
~~~bash
echo 1 > /
echo c > /
~~~
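For reference, a hard kernel crash is usually triggered through the magic SysRq interface; a sketch using the standard paths (assumption):
~~~bash
# Enable SysRq, then crash the kernel of the node under test immediately
echo 1 > /proc/sys/kernel/sysrq
echo c > /proc/sysrq-trigger
~~~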
Test 2: power failure: unplug the power cable

Test 3: network failure
~~~bash
ifdown bond0
~~~
Test 4: loss of the gateway ping on one of the nodes
~~~bash
iptables -A OUTPUT -d 192.168.97.250/
~~~
Test 5: fork bomb, the node stops responding to everything except ping

Fork bomb:
~~~bash
:(){ :|:& };:
~~~
Test 6: loss of the iDRAC connection: unplug the cable
## Cleanup - erasing the cluster
~~~bash
pcs cluster stop --force #--all
pcs cluster destroy
rm -rf /
rm -f /
~~~
## Errors
### 1 Dell hardware error
~~~
UEFI0081: Memory size has changed from the last time the system was started. No action is required if memory was added or removed.
~~~
http://
### 2 Fork-bomb test
~~~
error: Integration Timer (I_INTEGRATED) just popped in state S_INTEGRATION! (180000ms)
~~~
## Other
### View / check the "properties"
~~~bash
#pcs property set symmetric-cluster=true
pcs property
~~~
### Resources
List the resource standards:
~~~bash
pcs resource standards
~~~
~~~
ocf
lsb
systemd
stonith
~~~
~~~bash
pcs resource providers
~~~
~~~
heartbeat
openstack
pacemaker
~~~
List the agents, for example:
~~~bash
pcs resource agents systemd
pcs resource agents ocf:
~~~
Default timeout for resource operations:
~~~bash
pcs resource op defaults timeout=240s
~~~
Stop all resources:
~~~bash
pcs property set stop-all-resources=true
~~~
~~~bash
pcs property unset stop-all-resources
~~~
ocf:
''/
~~~bash
egrep '
export OCF_ROOT=/
/
~~~
Other
List all resources:
~~~bash
crm_resource --list
~~~
Dump the CIB (Cluster Information Base):
~~~bash
pcs cluster cib
pcs cluster cib cib-dump.xml
~~~
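The dumped CIB can be edited offline and pushed back in one go, which is convenient for batching several changes (sketch):
~~~bash
pcs cluster cib-push cib-dump.xml
~~~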
Add a service resource:
~~~bash
pcs resource create CRON systemd:
#pcs resource op add CRON start interval=0s timeout=1800s
~~~
UPDATE
~~~bash
pcs resource update ClusterMon-External
~~~
UNSET
~~~bash
pcs resource update ClusterMon-External
~~~
### Stonith
~~~bash
pcs property list --all |grep stonith
~~~
Confirm that the node really is powered off. \\
Warning: if it is not, you risk problems.
~~~bash
pcs stonith confirm node2
~~~
### Failcount
~~~bash
crm_mon --failcounts
pcs resource failcount show resource_id
pcs resource failcount reset resource_id
~~~
Refresh the state and reset the "failcount":
~~~bash
pcs resource cleanup resource_id
~~~
### Install from scratch
~~~bash
echo "
systemctl start pcsd.service
pcs resource create appmgr systemd:
pcs constraint colocation add appmgr with myvip
~~~