Table of contents
3 post(s) for January 2026
| Notes rsh rcp | 2026/01/21 18:08 | Jean-Baptiste |
| Git - Duplicating a repository | 2026/01/19 10:22 | Jean-Baptiste |
| Simple Nagios configuration example | 2026/01/14 10:07 | Jean-Baptiste |
Reverse SSH tunnel - SSH as a VPN
See:
- sshuttle
- OpenVPN
- PageKite
- ssh_tunnel
- autossh
ssh -R 3128:192.168.56.1:3128 user@192.168.1.20

PuTTY equivalent (plink):
plink -R 3128:192.168.56.1:3128 -pw P@ssw0rd -batch user@192.168.1.20
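Once the tunnel is up, processes on 192.168.1.20 can reach the proxy through the forwarded port; a minimal sketch, assuming an HTTP proxy (e.g. Squid) listens on 192.168.56.1:3128:

# on 192.168.1.20: the reverse tunnel exposes the remote proxy locally
curl -x http://127.0.0.1:3128 http://example.com/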
STONITH fencing test script for a cluster
See:
See also:
Source: https://github.com/wklxd/misc/blob/master/fence_ssh
This script emulates fencing over SSH, for testing purposes. It is compatible with cman, pacemaker and pcs.
/usr/sbin/fence_ssh
#!/bin/sh
# A fence agent for cman and pacemaker, using ssh.
# The only required argument is nodename.
# Author:
#   klwang (http://klwang.info)
# Note:
#   an authorized_keys configuration is required
#   just for test, enjoy it!
# Source : https://github.com/wklxd/misc/blob/master/fence_ssh

SSH_COMMAND="/usr/bin/ssh -q -x -o PasswordAuthentication=no -o StrictHostKeyChecking=no -n -l root"
#REBOOT_COMMAND="echo '/sbin/reboot -nf' | SHELL=/bin/sh at now >/dev/null 2>&1"
REBOOT_COMMAND="shutdown -r now >/dev/null 2>&1"

nodename=
action=reboot

usage () {
    /bin/echo "Usage: $0 -n NAME [-o ACTION]"
    /bin/echo
    /bin/echo "  -n NODENAME"
    /bin/echo "     The name of the node to be fenced."
    /bin/echo "     In case it contains spaces, use double quotes."
    /bin/echo "  -o ACTION"
    /bin/echo "     What to do; on|off|list|monitor|reboot(default)."
    /bin/echo
    exit 0
}

# parse command-line arguments
arg_cmd() {
    while getopts ":n:p:o:h" opt; do
        case "$opt" in
            n|p) nodename=$OPTARG ;;
            o) action=$OPTARG ;;
            h) action="usage" ;;
            *) usage ;;
        esac
    done
}

# parse key=value pairs passed on stdin (how pacemaker invokes fence agents)
arg_stdin() {
    eval $(cat -)
    if [ "x$nodename" = "x" -a "x$port" != "x" ]; then
        nodename=$port  # pacemaker only uses port
    fi
}

metadata() {
    cat <<EOF
<?xml version="1.0" ?>
<resource-agent name="fence_ssh" shortdesc="ssh fence agent, work both for cman and pacemaker">
<longdesc>
The style come from fence_pcmk, http://www.clusterlabs.org
Some functions references external/ssh agent
</longdesc>
<vendor-url> http://klwang.info </vendor-url>
<parameters>
  <parameter name="action" unique="1">
    <getopt mixed="-o" />
    <content type="string" default="reboot" />
    <shortdesc lang="en">Fencing Action</shortdesc>
  </parameter>
  <parameter name="nodename" unique="1">
    <getopt mixed="-n" />
    <content type="string" />
    <shortdesc lang="en">Name of machine</shortdesc>
  </parameter>
  <parameter name="port" unique="1">
    <getopt mixed="-p" />
    <content type="string" />
    <shortdesc lang="en">Name of machine, equal to nodename</shortdesc>
  </parameter>
  <parameter name="help" unique="1">
    <getopt mixed="-h" />
    <content type="string" />
    <shortdesc lang="en">Display help and exit</shortdesc>
  </parameter>
</parameters>
<actions>
  <action name="reboot" />
  <action name="on" />
  <action name="off" />
  <action name="list" />
  <action name="status" />
  <action name="metadata" />
</actions>
</resource-agent>
EOF
    exit 0
}

# resolve the node name and return the first IP that answers a ping
get_usable_ip() {
    for ip in `/usr/bin/getent hosts $1 | cut -d" " -f1`; do
        if ping -w1 -c1 $ip > /dev/null 2>&1; then
            echo $ip
            return 0
        fi
    done
    return 1
}

# consider the host up if it keeps answering pings for 15 seconds
is_host_up() {
    for j in 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15; do
        if ping -w1 -c1 "$1" >/dev/null 2>&1; then
            sleep 1
        else
            return 1
        fi
    done
    return 0
}

reboot() {
    local node=$1
    ip=`get_usable_ip $node`
    if [ $? -ne 0 ]; then
        /bin/echo "Error: can not get a usable ip, is nodename($node) alive?"
        exit 0  # in case of power loss
    fi
    if ! ping -c1 -w1 $ip >/dev/null 2>&1; then
        exit 0  # in case the node has already been fenced
    fi
    $SSH_COMMAND $ip "echo $(date +'%Y-%m-%d %H:%M') FENC_FROM_SSH \$SSH_CLIENT >> /var/log/fenc.log"
    $SSH_COMMAND $ip "$REBOOT_COMMAND"
    if is_host_up $ip; then
        exit 1
    else
        exit 0
    fi
}

# main
if [ $# -gt 0 ]; then
    arg_cmd $*
else
    arg_stdin
fi

case "$action" in
    metadata) metadata ;;
    usage) usage ;;
    on|off)
        exit 0  # ssh can not turn a node on, so avoid turning it off
        ;;
    reset|reboot) reboot $nodename ;;
    monitor)
        exit 0  # just for pacemaker
        ;;
    help) usage ;;
    *)
        /bin/echo "Unknown option"
        exit 1
        ;;
esac
Configuration
pcs stonith create fencessh_node1 fence_ssh nodename=node1 pcmk_host_list=node1
pcs stonith create fencessh_node2 fence_ssh nodename=node2 pcmk_host_list=node2
pcs stonith level add 1 node1 fencessh_node1
pcs stonith level add 1 node2 fencessh_node2
# Prevent a node from fencing itself
pcs constraint location fencessh_node1 avoids node1
pcs constraint location fencessh_node2 avoids node2
Test
pcs stonith fence node2
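To confirm the fence from a surviving node (sketch):

pcs status
stonith_admin --history node2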
Other
List
pcs stonith list
stonith_admin -I
Reverse proxy HTTP Headers - Disable compressed response - Nginx
See:
Disable the compressed response
proxy_set_header Accept-Encoding "";
Context: http, server, location
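To verify the effect, request a compressed response through the proxy; with Accept-Encoding cleared on the proxied request, the reply should come back without a Content-Encoding header (sketch; the hostname is a placeholder):

# no output from grep means the response was not compressed
curl -s -o /dev/null -D - -H 'Accept-Encoding: gzip' http://proxy.example.com/ | grep -i '^content-encoding'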
Linux networking: tc (Traffic Control)
Introduction to Network Emulation with tc (Traffic Control)
The tc command is part of the iproute package.
Source: https://bencane.com/simulating-network-latency-for-testing-in-linux-environments-29daad98efcc
tc (Traffic Control) is a powerful Linux command used to control the kernel's network scheduler. It interfaces with a component known as netem (Network Emulator), which provides functionalities for emulating network conditions like latency, packet loss, and more. This tool is crucial for replicating real-world network scenarios, such as a WAN, within a controlled test environment.
Determine Current Latency: Use the ping command to measure the current latency to a remote server.
ping google.com
Calculate Additional Latency: Subtract the average current latency from your desired latency.
Desired Latency - Current Latency = Additional Latency
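For example, with a desired latency of 100 ms and a measured average of about 3 ms, the additional delay is 100 - 3 = 97 ms, the value applied below.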
Apply the Latency using tc: Use the tc command to add the calculated delay to the network interface.
tc qdisc add dev eth0 root netem delay 97ms
Verify the Rule: Use the tc -s command to ensure the delay has been correctly added.
tc -s qdisc
Removing the Latency Rule
tc qdisc del dev eth0 root netem
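netem can emulate more than a fixed delay; a sketch adjusting the rule above to add jitter and random packet loss (values are illustrative):

# change the existing netem rule: 97 ms +/- 10 ms jitter, 0.5% packet loss
tc qdisc change dev eth0 root netem delay 97ms 10ms loss 0.5%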
Linux networking: the TCP/IP stack
See also:
- MPTCP, SCTP, DCCP
See:
- hping2
man 7 tcp
Conntrack
See:
- /proc/net/nf_conntrack
- /proc/sys/net/nf_conntrack_max
apt-get install conntrack
Flush
conntrack -F
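To compare the number of tracked connections against the limit (sketch):

conntrack -C                           # current number of conntrack entries
cat /proc/sys/net/nf_conntrack_max     # configured maximum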
/proc/sys/net/ipv4/tcp_syn_retries
$ sysctl net.ipv4.tcp_syn_retries
net.ipv4.tcp_syn_retries = 6
Effectively, this takes 1+2+4+8+16+32+64=127s before the connection finally aborts.
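One way to observe this without waiting the full 127 s: on a test machine, lower the setting and time a connection attempt to an address that silently drops SYNs (sketch; 203.0.113.1 is a TEST-NET-3 address assumed to be unreachable):

sysctl -w net.ipv4.tcp_syn_retries=3   # 1+2+4+8 = 15 s before abort
time bash -c 'cat < /dev/tcp/203.0.113.1/80'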
/proc/sys/net/ipv4/tcp_synack_retries
/proc/sys/net/ipv4/tcp_retries2
See:
See also:
- /proc/sys/net/ipv4/tcp_retries1
- /proc/sys/net/ipv4/tcp_syn_retries
- /proc/sys/net/ipv4/tcp_synack_retries
Cluster
In a High Availability (HA) situation, consider decreasing the setting to 3.
RFC 1122 recommends at least 100 seconds for the timeout, which corresponds to a value of at least 8. Oracle suggests a value of 3 for a RAC configuration.
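For example, to apply the HA-oriented value at runtime (sketch; persist it via /etc/sysctl.d/ if it proves suitable):

sysctl -w net.ipv4.tcp_retries2=3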
Number of retransmissions vs. time
An experiment confirms that (on a recent Linux at least) the timeout is more like 13 s with the suggested net.ipv4.tcp_retries2=5.
"Windows defaults to just 5 retransmissions, which corresponds to a timeout of around 6 seconds." tcp_retries2=5 means timing out after the first transmission plus 5 retransmissions: 12.6 s = (2^6 - 1) * 0.2. tcp_retries2=15: 924.6 s = (2^10 - 1) * 0.2 + (16 - 10) * 120.
Source: https://github.com/elastic/elasticsearch/issues/102788
See also: https://www.elastic.co/guide/en/elasticsearch/reference/current/system-config-tcpretries.html#_related_configuration
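The formulas above generalize; a small sketch computing the approximate give-up time for any tcp_retries2, assuming the default RtoMin of 0.2 s and RtoMax of 120 s (visible in /proc/net/snmp, see below):

# first transmission + n retransmissions, RTO doubling from 0.2 s, capped at 120 s
awk -v n=15 'BEGIN { rto = 0.2; t = 0; for (i = 0; i <= n; i++) { t += rto; rto = (rto * 2 > 120) ? 120 : rto * 2 }; printf "%.1f s\n", t }'
# n=5 prints 12.6 s, n=15 prints 924.6 s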
F-RTO (Forward RTO-Recovery, net.ipv4.tcp_frto)
TCP keepalive
Configuring TCP/IP keepalive parameters for high availability clients (JDBC)
- tcp_keepalive_probes - the number of probes that are sent and unacknowledged before the client considers the connection broken and notifies the application layer
- tcp_keepalive_time - the interval between the last data packet sent and the first keepalive probe
- tcp_keepalive_intvl - the interval between subsequent keepalive probes
- tcp_retries2 - the maximum number of times a packet is retransmitted before giving up
echo "6" > /proc/sys/net/ipv4/tcp_keepalive_time echo "1" > /proc/sys/net/ipv4/tcp_keepalive_intvl echo "10" > /proc/sys/net/ipv4/tcp_keepalive_probes echo "3" > /proc/sys/net/ipv4/tcp_retries2
ss -o   # show sockets with timer information (keepalive and retransmission timers)
Process / diag tools
Tools
TCP retransmissions
See:
- net.ipv4.tcp_early_retrans
Tools:
- tcpretrans.bt (bpftrace)
- tcpretrans (perf-tools)
- tcpretrans.py (bpfcc-tools, iovisor/bcc)
Finding rto_min and rto_max
# grep ^Tcp /proc/net/snmp | column -t | cut -c1-99
Tcp:  RtoAlgorithm  RtoMin  RtoMax  MaxConn  ActiveOpens  PassiveOpens  AttemptFails  EstabResets
Tcp:  1             200     120000  -1       6834         964           161           4614
yum install bpftrace
/usr/share/bcc/tools/tcpretrans
timeout 60 ./tcpretrans | nl
sar -n ETCP
sar -n TCP
# netstat -s |egrep 'segments retransmited|segments send out'
107428604792 segments send out
47511527 segments retransmited
# echo "$(( 47511527 * 10000 / 107428604792 ))"
4
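That is, about 4 retransmissions per 10,000 segments sent, roughly 0.04%.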
https://www.ibm.com/support/pages/tracking-tcp-retransmissions-linux
tcpretransmits.sh
#!/usr/bin/bash
test -x /usr/sbin/tcpretrans.bt && TCPRETRANS=/usr/sbin/tcpretrans.bt
test -x /usr/share/bpftrace/tools/tcpretrans.bt && TCPRETRANS=/usr/share/bpftrace/tools/tcpretrans.bt
# https://github.com/brendangregg/perf-tools/blob/master/net/tcpretrans
test -x ./tcpretrans.pl && TCPRETRANS=./tcpretrans.pl

OUT=/tmp/tcpretransmits.log

if [ -z "$TCPRETRANS" ]; then
    echo "It looks like 'bpftrace' is not installed"
else
    date > $OUT
    # retransmission ratio (percent) before tracing
    netstat -s | awk '/segments sen. out$/ { R=$1; } /segments retransmit+ed$/ { printf("%.4f\n", ($1/R)*100); }' >> $OUT
    $TCPRETRANS | tee -a $OUT
    # retransmission ratio (percent) after tracing
    netstat -s | awk '/segments sen. out$/ { R=$1; } /segments retransmit+ed$/ { printf("%.4f\n", ($1/R)*100); }' >> $OUT
fi
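Usage sketch: run it, let the tracer capture events, then inspect the log:

chmod +x tcpretransmits.sh
./tcpretransmits.sh
cat /tmp/tcpretransmits.log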
Resolving The Problem
TCP retransmissions are almost exclusively caused by failing network hardware, not applications or middleware. Report the failing IP pairs to a network administrator.
Other
tcp_low_latency (Boolean; default: disabled; since Linux 2.4.21/2.6; obsolete since Linux 4.14)
net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_sack = 1
net.ipv4.tcp_moderate_rcvbuf = 1
# ip route get 192.168.100.11
192.168.100.11 dev virbr1 src 192.168.100.1 uid 1000
cache
# ip route show dev virbr1
192.168.100.0/24 proto kernel scope link src 192.168.100.1
# ip route change dev virbr1 192.168.100.0/24 proto kernel scope link src 192.168.100.1 rto_min 8ms
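To check that the per-route rto_min took effect (sketch):

# the rto_min attribute should now appear on the route entry
ip route show dev virbr1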
