Table of contents
1 post(s) for February 2026
| Example of a service and socket with systemd | 2026/02/05 21:50 | Jean-Baptiste |
Example of a service and socket with systemd
mkdir -p ~/.config/systemd/user && cd ~/.config/systemd/user
awx-proxy.socket
[Socket]
ListenStream=127.0.0.1:8080

[Install]
WantedBy=sockets.target
awx-proxy.service
[Unit]
#Requires=container-httpd.service
#After=container-httpd.service
Requires=awx-proxy.socket
After=awx-proxy.socket

[Service]
ExecStart=/home/k8s/.arkade/bin/kubectl port-forward svc/awx-service 3000:80
systemctl --user daemon-reload
systemctl --user enable --now awx-proxy.socket
Test
curl -v -I 127.0.0.1:3000
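If the proxy does not answer, check whether the socket unit actually started the service on the first connection and look at its log (unit names as defined above):

systemctl --user status awx-proxy.socket awx-proxy.service
journalctl --user -u awx-proxy.service -n 20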
AWX on K8S Kind - file sharing for blobs - Execution pods
Goal:
- Do not put blobs under Git
- Access binary files (BLOBs) directly from Ansible without rewriting every playbook with get_url or the like
- Keep access to these files simple
Kind config
config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  ipFamily: ipv4
  kubeProxyMode: "nftables"
nodes:
  - role: control-plane
    extraPortMappings:
      - containerPort: 30000
        hostPort: 30000
        protocol: TCP
    extraMounts:
      - containerPath: /data/files
        hostPath: /data/files
        readOnly: true
      - containerPath: /data/postgres-13
        hostPath: /data/postgres-13
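The host directories referenced in extraMounts must exist before the cluster is created, and extraMounts only take effect at cluster creation time. A minimal sketch (the cluster name is an arbitrary assumption):

# The extraMounts host paths must exist on the machine running Kind
sudo mkdir -p /data/files /data/postgres-13
# (Re)create the cluster with the config above
kind create cluster --name awx --config config.yaml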
AWX config
“Instance Groups” - “default” - “Edit”
Tick "Customize pod specification"
We start with:
apiVersion: v1
kind: Pod
metadata:
  namespace: awx
spec:
  containers:
    - image: 'quay.io/ansible/awx-ee:latest'
      name: worker
      args:
        - ansible-runner
        - worker
        - '--private-data-dir=/runner'
Add the volume mount as below:
apiVersion: v1
kind: Pod
metadata:
  namespace: awx
spec:
  containers:
    - image: 'quay.io/ansible/awx-ee:latest'
      name: worker
      args:
        - ansible-runner
        - worker
        - '--private-data-dir=/runner'
      volumeMounts:
        - name: ansfiles-volume
          mountPath: /data/files
  volumes:
    - name: ansfiles-volume
      hostPath:
        path: /data/files
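Once the volume is mounted in the execution pod, playbooks can read the blobs as ordinary controller-side files instead of fetching them with get_url. A minimal sketch of a playbook, with a hypothetical blob name under /data/files:

# Hypothetical example: /data/files/myblob.bin only illustrates the shared mount
- name: Deploy a binary blob from the shared mount
  hosts: all
  tasks:
    - name: Copy the blob to the target host
      ansible.builtin.copy:
        src: /data/files/myblob.bin   # read inside the execution pod, where the mount exists
        dest: /opt/myapp/myblob.bin   # hypothetical destination
        mode: '0644'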
Notes on rsh / rcp
Do not use this thing.
Use of rsh is discouraged due to the inherent insecurity of host-based authentication.
Source: man rsh
Also note that the design of the .rhosts system is COMPLETELY INSECURE except on a carefully firewalled private network. Under all other circumstances, rshd should be disabled entirely.
Source: man in.rshd
rsh: this program comes from the rlogin package.
The account used must be recognized by the remote machine: the user must have an account with the same name on the remote machine and, in addition, must have correctly configured their .rhosts file.
Protocol
Ports
rsh hostname           (port 513)
rsh hostname command   (port 514)
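On most Linux systems these assignments can be cross-checked in /etc/services:

grep -E '^(login|shell)[[:space:]]' /etc/services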
Server side
As root:
apt-get install rsh-client rsh-server
/etc/init.d/openbsd-inetd status
/etc/init.d/openbsd-inetd start
/etc/init.d/openbsd-inetd status
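On Debian-like systems, the rsh-server package registers the r-services in /etc/inetd.conf. If the daemons do not answer, it is worth checking that the entries are there (the exact layout varies between distributions):

grep -E '^(shell|login)' /etc/inetd.conf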
echo "localhost" >> ~/.rhosts
The hosts.equiv / .rhosts files allow or deny computers and users the use of the r-commands (such as rlogin, rsh, or rcp) without giving a password.
/etc/hosts.equiv global trusted host-user pairs list
~/.rhosts per-user trusted host-user pairs list
rsh, rlogin, and ssh use these files.
Syntax of .rhosts / hosts.equiv
# hostname [username]
somehost
somehost username
For root login to succeed here with pam_securetty, “rsh” must be listed in /etc/securetty.
echo "rsh" >> /etc/securetty
Client side
As a regular user:
apt-get install rsh-client
echo plop > plop.txt
rcp plop.txt root@localhost:/tmp/
rcp plop.txt localhost:/tmp/
rsh -l user localhost            # NOTE: rsh without a command switches to rlogin.
rlogin -l user localhost
rsh -l user localhost command
Escaping shell meta-characters
Shell meta-characters which are not quoted are interpreted on the local machine, while quoted meta-characters are interpreted on the remote machine.
Appends the remote file remotefile to the local file localfile
rsh otherhost cat remotefile >> localfile
Appends remotefile to other_remotefile
rsh otherhost cat remotefile ">>" other_remotefile
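The same rule applies to globbing: an unquoted pattern is expanded by the local shell, a quoted one by the remote shell. A small sketch (the file names are only examples):

# Expanded by the local shell (if it matches local files) before rsh runs
rsh otherhost ls *.log
# Expanded remotely: the remote shell sees and expands the pattern
rsh otherhost ls "*.log"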
Case 1
On the client - fails (NOK)
test@rsh-cli:~$ rcp TEST4 user1@rsh-srv:/home/user1/
Permission denied.
On the server
echo "rsh-cli test" >> /home/user1/.rhosts
On the client - OK
test@rsh-cli:~$ rcp TEST4 user1@rsh-srv:/home/user1/
Miscellaneous
Inside a container
# ps -ef
UID          PID    PPID  C STIME TTY          TIME CMD
root           1       0  0 14:51 pts/0    00:00:00 /bin/sh
root           7       1  0 14:51 pts/0    00:00:00 bash
root         942       1  0 15:08 ?        00:00:00 /usr/sbin/inetd
root        1071       7  0 15:39 pts/0    00:00:00 ps -ef
# rsh localhost
rlogind[1078]: pam_rhosts(rlogin:auth): allowed access to root@localhost as root
#
root       25225       1  0 Jan20 ?        00:00:00 xinetd -stayalive -pidfile /var/run/xinetd.pid
root     3072532   25225  0 11:02 ?        00:00:00 in.rlogind
root     3072597 3072532  0 11:02 ?        00:00:01 login -- user1
user1    3072724 3072597  0 11:02 pts/4    00:00:00 -ksh
Strange: I cannot see the 'sleep' process that I launched and which is still running, even after several tries. The last process, '-ksh', has no child.
# ss -tlnp |grep xinetd
0 64 *:513 *:* users:(("xinetd",25225,5))
0 64 *:514 *:* users:(("xinetd",25225,6))
/etc/xinetd.d/rsh
# default: on
# description: The rshd server is the server for the rcmd(3) routine and, \
# consequently, for the rsh(1) program. The server provides \
# remote execution facilities with authentication based on \
# privileged port numbers from trusted hosts.
service shell
{
disable = no
socket_type = stream
wait = no
user = root
log_on_success += USERID
log_on_failure += USERID
server = /usr/sbin/in.rshd
}
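After changing this snippet, xinetd has to re-read its configuration; on a systemd-based host something like the following should do (adapt to the init system actually in use):

systemctl restart xinetd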
Git - Duplicating a repository
git clone --bare https://github.com/EXAMPLE-USER/OLD-REPOSITORY.git
cd OLD-REPOSITORY
git push --mirror https://github.com/EXAMPLE-USER/NEW-REPOSITORY.git
cd ..
rm -rf OLD-REPOSITORY
Source: https://docs.github.com/en/repositories/creating-and-managing-repositories/duplicating-a-repository
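To make sure the mirror push copied everything, the branches and tags of both repositories can be compared (same placeholder URLs as above):

git ls-remote --heads --tags https://github.com/EXAMPLE-USER/OLD-REPOSITORY.git | sort > /tmp/old-refs
git ls-remote --heads --tags https://github.com/EXAMPLE-USER/NEW-REPOSITORY.git | sort > /tmp/new-refs
diff /tmp/old-refs /tmp/new-refs && echo "branches and tags are identical"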
Other
git clone https://git.plop.org/depot.git
cd depot
#git pull --all
git remote remove origin
git remote add origin https://git.acme.fr/plop/depot.git
git push -u origin --all
# git push -u origin --mirror
A simple Nagios configuration example
Proposed layout
The main configuration file, which includes all the others:
- etc/nagios.cfg
File containing the commands to execute:
- etc/objects/commands.cfg
This file must not contain any passwords.
The file containing the passwords used by commands.cfg:
- etc/resource.cfg
This file should be readable only by the "nagios" user.
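resource.cfg typically holds the $USERn$ macros: $USER1$ for the plugin path and, in this example, $USER6$ / $USER7$ for the SNMPv3 credentials used in commands.cfg. Restricting its permissions could look like this (paths as in the layout above):

chown nagios:nagios /usr/local/nagios/etc/resource.cfg
chmod 600 /usr/local/nagios/etc/resource.cfg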
Basic configuration files, rarely modified:
- etc/objects/templates.cfg
- etc/objects/timeperiods.cfg
- etc/objects/contacts.cfg
- etc/objects/localhost.cfg
Configuration file containing the list of hosts to monitor:
- etc/objects/servers.cfg
Several files of this kind are possible, e.g. switch.cfg, printer.cfg, windows.cfg…
The names of the cfg files are arbitrary, but they must be referenced from nagios.cfg.
Example configuration
Main configuration file nagios.cfg
Excerpt
nagios.cfg
#cfg_file=/usr/local/nagios/etc/objects/commands.cfg
#cfg_file=/usr/local/nagios/etc/objects/contacts.cfg
#cfg_file=/usr/local/nagios/etc/objects/timeperiods.cfg
#cfg_file=/usr/local/nagios/etc/objects/templates.cfg
#cfg_file=/usr/local/nagios/etc/objects/localhost.cfg
#cfg_file=/usr/local/nagios/etc/objects/servers.cfg
#cfg_file=/usr/local/nagios/etc/objects/windows.cfg
#cfg_file=/usr/local/nagios/etc/objects/switch.cfg
cfg_dir=/usr/local/nagios/etc/objects/

resource_file=/usr/local/nagios/etc/resource.cfg

nagios_user=nagios
nagios_group=nagios

check_external_commands=1
max_concurrent_checks=0
retain_state_information=0
enable_flap_detection=1
date_format=iso8601

use_syslog=1
log_file=/usr/local/nagios/var/nagios.log
debug_level=0
debug_file=/usr/local/nagios/var/nagios.debug
commands.cfg file
commands.cfg
define command {
    command_name    check-host-alive
    command_line    $USER1$/check_ping -H $HOSTADDRESS$ -w 3000.0,80% -c 5000.0,100% -p 5
}

define command {
    command_name    notify-service-by-email
    command_line    /usr/bin/printf "%b" "***** Nagios *****\n\nNotification Type: $NOTIFICATIONTYPE$\n\nService: $SERVICEDESC$\nHost: $HOSTALIAS$\nAddress: $HOSTADDRESS$\nState: $SERVICESTATE$\n\nDate/Time: $LONGDATETIME$\n\nAdditional Info:\n\n$SERVICEOUTPUT$" | /bin/mail -s "** $NOTIFICATIONTYPE$ Service Alert: $HOSTALIAS$/$SERVICEDESC$ is $SERVICESTATE$ **" $CONTACTEMAIL$
}

define command {
    command_name    notify-host-by-email
    command_line    /usr/bin/printf "%b" "***** Nagios *****\n\nNotification Type: $NOTIFICATIONTYPE$\nHost: $HOSTNAME$\nState: $HOSTSTATE$\nAddress: $HOSTADDRESS$\nInfo: $HOSTOUTPUT$\n\nDate/Time: $LONGDATETIME$\n" | /bin/mail -s "** $NOTIFICATIONTYPE$ Host Alert: $HOSTNAME$ is $HOSTSTATE$ **" $CONTACTEMAIL$
}

# NOTE: The following 'check_local_...' functions are designed to monitor
# various metrics on the host that Nagios is running on (i.e. this one).
define command {
    command_name    check_local_users
    command_line    $USER1$/check_users -w $ARG1$ -c $ARG2$
}

define command {
    command_name    check_centreon_snmp_linux_mem
    command_line    $USER1$/centreon_plugins --plugin=os::linux::snmp::plugin --hostname=$HOSTADDRESS$ --snmp-version=3 --snmp-username $USER6$ --authprotocol MD5 --authpassphrase "$USER7$" --mode=memory --warning-usage-prct $ARG1$ --critical-usage-prct $ARG2$
}

#define command {
#    command_name    check_active_router
#    command_line    $USER1$/check_snmp_active_router.sh -v 3 -a MD5 -A "$USER4$" -l authNoPriv -u $USER3$
#}

#define command {
#    command_name    trigger_memory
#    command_line    /usr/bin/tclsh $USER1$/eventhandlers/app_snmp_pxy.tcl MEM $HOSTNAME$ $SERVICEDESC$ $SERVICESTATE$
#}
This file contains the commands that will be executed.
Every script that only returns local data (from localhost), and therefore does not use $HOSTADDRESS$, should have a command_name prefixed with "check_local_".
Only the localhost.cfg file should call the "check_local_*" commands.
File containing the templates
templates.cfg
define contact {
    name                            generic-contact         ; The name of this contact template
    service_notification_period    24x7                    ; service notifications can be sent anytime
    host_notification_period       24x7                    ; host notifications can be sent anytime
    service_notification_options   w,u,c,r,f,s             ; send notifications for all service states, flapping events, and scheduled downtime events
    host_notification_options      d,u,r,f,s               ; send notifications for all host states, flapping events, and scheduled downtime events
    service_notification_commands  notify-service-by-email ; send service notifications via email
    host_notification_commands     notify-host-by-email    ; send host notifications via email
    register                       0                       ; DONT REGISTER THIS DEFINITION - ITS NOT A REAL CONTACT, JUST A TEMPLATE!
}

# Generic host definition template - This is NOT a real host, just a template!
define host {
    name                            generic-host    ; The name of this host template
    notifications_enabled           1               ; Host notifications are enabled
    event_handler_enabled           1               ; Host event handler is enabled
    flap_detection_enabled          1               ; Flap detection is enabled
    failure_prediction_enabled      1               ; Failure prediction is enabled
    process_perf_data               1               ; Process performance data
    retain_status_information       1               ; Retain status information across program restarts
    retain_nonstatus_information    1               ; Retain non-status information across program restarts
    notification_period             24x7            ; Send host notifications at any time
    register                        0               ; DONT REGISTER THIS DEFINITION - ITS NOT A REAL HOST, JUST A TEMPLATE!
}

define host {
    name                            linux-server    ; The name of this host template
    use                             generic-host    ; This template inherits other values from the generic-host template
    check_period                    24x7            ; By default, Linux hosts are checked round the clock
    check_interval                  5               ; Actively check the host every 5 minutes
    retry_interval                  1               ; Schedule host check retries at 1 minute intervals
    max_check_attempts              10              ; Check each Linux host 10 times (max)
    check_command                   check-host-alive ; Default command to check Linux hosts
    notification_period             workhours       ; Linux admins hate to be woken up, so we only notify during the day
                                                    ; Note that the notification_period variable is being overridden from
                                                    ; the value that is inherited from the generic-host template!
    notification_interval           120             ; Resend notifications every 2 hours
    notification_options            d,u,r           ; Only send notifications for specific host states
    contact_groups                  admins          ; Notifications get sent to the admins by default
    register                        0               ; DONT REGISTER THIS DEFINITION - ITS NOT A REAL HOST, JUST A TEMPLATE!
}

# Custom
define host {
    name            tpl-host-linux
    use             linux-server
    hostgroups      linux-hosts
    register        0
}

define service {
    name                            generic-service     ; The 'name' of this service template
    active_checks_enabled           1                   ; Active service checks are enabled
    passive_checks_enabled          1                   ; Passive service checks are enabled/accepted
    parallelize_check               1                   ; Active service checks should be parallelized (disabling this can lead to major performance problems)
    obsess_over_service             1                   ; We should obsess over this service (if necessary)
    check_freshness                 0                   ; Default is to NOT check service 'freshness'
    notifications_enabled           1                   ; Service notifications are enabled
    event_handler_enabled           1                   ; Service event handler is enabled
    flap_detection_enabled          1                   ; Flap detection is enabled
    failure_prediction_enabled      1                   ; Failure prediction is enabled
    process_perf_data               1                   ; Process performance data
    retain_status_information       1                   ; Retain status information across program restarts
    retain_nonstatus_information    1                   ; Retain non-status information across program restarts
    is_volatile                     0                   ; The service is not volatile
    check_period                    24x7                ; The service can be checked at any time of the day
    max_check_attempts              3                   ; Re-check the service up to 3 times in order to determine its final (hard) state
    normal_check_interval           10                  ; Check the service every 10 minutes under normal conditions
    retry_check_interval            2                   ; Re-check the service every two minutes until a hard state can be determined
    contact_groups                  admins              ; Notifications get sent out to everyone in the 'admins' group
    notification_options            w,u,c,r             ; Send notifications about warning, unknown, critical, and recovery events
    notification_interval           60                  ; Re-notify about service problems every hour
    notification_period             24x7                ; Notifications can be sent out at any time
    register                        0                   ; DONT REGISTER THIS DEFINITION - ITS NOT A REAL SERVICE, JUST A TEMPLATE!
}

# Local service definition template - This is NOT a real service, just a template!
define service {
    name                            local-service       ; The name of this service template
    use                             generic-service     ; Inherit default values from the generic-service definition
    max_check_attempts              4                   ; Re-check the service up to 4 times in order to determine its final (hard) state
    normal_check_interval           5                   ; Check the service every 5 minutes under normal conditions
    retry_check_interval            1                   ; Re-check the service every minute until a hard state can be determined
    register                        0                   ; DONT REGISTER THIS DEFINITION - ITS NOT A REAL SERVICE, JUST A TEMPLATE!
}

# Custom
define service {
    name                    tpl-service-mca1
    max_check_attempts      1
    normal_check_interval   1
    retry_check_interval    1
    check_period            24x7
    notification_interval   2000
    notification_period     24x7
    notification_options    w,c,r
    contact_groups          admins
    register                0
}
This file should be dedicated to templates. Every block in it should contain register 0.
To use a template, use the "use" directive. We will see examples below.
For better readability, we recommend applying a specific naming convention to all new template objects, for example prefixing them with "tpl-".
Let's go quickly over timeperiods.cfg & contacts.cfg
timeperiods.cfg
# This defines a timeperiod where all times are valid for checks,
# notifications, etc. The classic "24x7" support nightmare. :-)
define timeperiod {
    timeperiod_name 24x7
    alias           24 Hours A Day, 7 Days A Week
    sunday          00:00-24:00
    monday          00:00-24:00
    tuesday         00:00-24:00
    wednesday       00:00-24:00
    thursday        00:00-24:00
    friday          00:00-24:00
    saturday        00:00-24:00
}

# 'workhours' timeperiod definition
define timeperiod {
    timeperiod_name workhours
    alias           Normal Work Hours
    monday          09:00-17:00
    tuesday         09:00-17:00
    wednesday       09:00-17:00
    thursday        09:00-17:00
    friday          09:00-17:00
}
contacts.cfg
define contact {
    contact_name    nagiosadmin             ; Short name of user
    use             generic-contact         ; Inherit default values from generic-contact template (defined above)
    alias           Nagios Admin            ; Full name of user
    email           nagios@localhost        ; <<***** CHANGE THIS TO YOUR EMAIL ADDRESS ******
}

define contactgroup {
    contactgroup_name   admins
    alias               Nagios Administrators
    members             nagiosadmin
}
Note here the "use" directive, which allows inheriting from the template.
localhost.cfg file
localhost.cfg
define host {
    use         linux-server    ; Name of host template to use
                                ; This host definition will inherit all variables that are defined
                                ; in (or inherited by) the linux-server host template definition.
    host_name   localhost
    alias       localhost
    address     127.0.0.1
}

define hostgroup {
    hostgroup_name  linux-servers   ; The name of the hostgroup
    alias           Linux Servers   ; Long name of the group
    members         localhost       ; Comma separated list of hosts that belong to this group
}

define service {
    use                 local-service   ; Name of service template to use
    host_name           localhost
    service_description Current Users
    check_command       check_local_users!20!50
}
Every check_command present in this file should use a "check_local_*" command. See commands.cfg.
servers.cfg file
servers.cfg
define host {
    host_name   srv01
    use         tpl-host-linux
    alias       srv01
}

define host {
    host_name   srv02
    use         tpl-host-linux
    alias       srv02
}

define hostgroup {
    hostgroup_name  linux-hosts
    alias           Linux Servers
}

define hostgroup {
    hostgroup_name  RED
    alias           RedondanceEnvLogique
    members         srv01
}

define hostgroup {
    hostgroup_name  App
    alias           ReseauGlobalApp
#   hostgroup_members   linux-hosts, Switchs, Routeurs, printer-hosts
    hostgroup_members   linux-hosts
}

define service {
    service_description Memory
    use                 tpl-service-mca1
    hostgroup_name      linux-hosts
    check_command       check_centreon_snmp_linux_mem!80!90
#   event_handler       trigger_memory
}
Note that a hostgroup can include hosts with members, or other hostgroups with hostgroup_members.
In practice, though, using templates through the "use" directive is often preferred. See our example with tpl-host-linux.
Checking the configuration
Check
/usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg
Commands
Finding unused commands
for CMD in $(grep command_name etc/objects/commands.cfg | grep -v "^#" | awk '{ print $2}' | sort -u) ; do grep -q "$CMD" $(find etc/objects/ -type f -not -name commands.cfg) || echo $CMD; done
Finding duplicated commands
diff <(grep -v ^# etc/objects/commands.cfg | awk '/command_name/ { print $NF }' | sort) <(grep -v ^# etc/objects/commands.cfg | awk '/command_name/ { print $NF }' | sort -u)
Local checks
All local checks (which report information about the host on which the script runs) should have a command_name starting with "check_local_".
With few exceptions, only the following files should contain this pattern:
$ rgrep -l check_local_ *
objects/commands.cfg
objects/localhost.cfg
Same thing with local-service:
$ rgrep -l local-service *
objects/templates.cfg
objects/localhost.cfg
templates.cfg
- Every block containing register 0 should be in the templates.cfg file
- Every block present in the templates.cfg file should have register 0
Other conditions
- For every "use" directive there must be a corresponding template (templates.cfg)
- For every "check_command" there must be a corresponding "command_name" entry (commands.cfg); see the sketch below the list
- Every name set by the "host_name" directive in "host" objects must be resolvable (no "address" directive is used in servers.cfg)
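The last two conditions lend themselves to the same kind of quick loops as above; a hedged sketch, run from the Nagios base directory and assuming the file layout of this page:

# Every check_command should match a command_name in commands.cfg
for CMD in $(grep -hv '^#' etc/objects/*.cfg | awk '/check_command/ { print $2 }' | cut -d'!' -f1 | sort -u); do
  grep -q "command_name[[:space:]]*$CMD" etc/objects/commands.cfg || echo "no command_name for: $CMD"
done

# Every host_name in servers.cfg should be resolvable
for H in $(awk '/^[[:space:]]*host_name/ { print $2 }' etc/objects/servers.cfg); do
  getent hosts "$H" > /dev/null || echo "unresolvable host: $H"
done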
