NFS notes
Server
See:
rclone serve
If SNMP is used
- /etc/snmp/snmpd.conf
skipNFSInHostResources 1
Disable NFS4 delegations
# On the NFS server
echo 0 > /proc/sys/fs/leases-enable
sysctl -w fs.leases-enable=0
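To keep the setting across reboots, a sysctl drop-in file can be used (the file name below is an assumption; any name under /etc/sysctl.d/ works):

```
# /etc/sysctl.d/90-nfs-delegations.conf
# Disabling leases also disables NFSv4 delegations handed out by this server
fs.leases-enable = 0
```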
NFS client
Map a user - mount the FS as a specific user
all_squash,anonuid=1010,anongid=1010
The no_root_squash option specifies that root on the machine where the directory is mounted keeps root privileges on the directory. root_squash is the default option.
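A minimal /etc/exports sketch combining these squash options (the path and network below are placeholders):

```
# All clients, including root, are mapped ("squashed") to uid/gid 1010
/srv/share  192.168.1.0/24(rw,sync,all_squash,anonuid=1010,anongid=1010)
```

After editing /etc/exports, the export table is reloaded with exportfs -ra.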
NFS3
In the Oracle logs
WARNING:NFS file system /import/oracle/plop mounted with incorrect options(rw,sync,vers=3,rsize=32768,wsize=32768,acregmin=0,acregmax=0,acdirmin=0,acdirmax=0,hard,noac,proto=tcp,timeo=600,retrans=2,sec=sys,addr=sicile)
WARNING:Expected NFS mount options for ADR directory: rsize>=4096,wsize>=4096,hard
WARNING:Expected NFS mount options for ADR directory: The 'noac' option should not be set
WARNING: The directory specified for the diagnostic_dest location has
WARNING: incorrect mount options. [/app/oracle/oradata/plop/dump]
NFSv3 mount options: hard,bg,intr,vers=3,proto=tcp,rsize=32768,wsize=32768,…

rsize/wsize determines the NFS request size for reads/writes. The values of these parameters should match the value of nfs.tcp.xfersize on the NetApp system. A value of 32,768 (32 kB) has been shown to maximize database performance in a NetApp/Solaris environment. In all circumstances, the NFS read/write size should be the same as or greater than the Oracle block size. For example, a DB_FILE_MULTIBLOCK_READ_COUNT of 4 multiplied by a database block size of 8 kB results in a read buffer size (rsize) of 32 kB.

NetApp recommended mount options for an Oracle single-instance database on Solaris:
rw,bg,vers=3,proto=tcp,hard,intr,rsize=32768,wsize=32768,forcedirectio

NetApp recommended mount options for Oracle9i RAC on Solaris:
rw,bg,vers=3,proto=tcp,hard,intr,rsize=32768,wsize=32768,forcedirectio,noac

nas1:/shared_config /u01/shared_config nfs rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0 0 0
nas1:/shared_grid /u01/app/11.2.0/grid nfs rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0 0 0
nas1:/shared_home /u01/app/oracle/product/11.2.0/db_1 nfs rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0 0 0
nas1:/shared_data /u01/oradata nfs rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0 0 0
NFS4
el01sn01:/export/common/patches /u01/common/patches nfs4 rw,bg,hard,nointr,rsize=131072,wsize=131072,proto=tcp
NFS performance can come close to FC (Fibre Channel).
Requires:
- a clean network topology: no routers, fast switches
- correct mount options: rsize/wsize at maximum
- avoid actimeo=0 and noac
- TCP configuration: MTU 9000 (tricky)
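A quick way to check whether jumbo frames (MTU 9000) are actually in effect on the NFS-facing interface. The interface name is an assumption; `lo` is used as the default so the snippet runs anywhere:

```shell
#!/bin/sh
# Print the MTU of an interface; 9000 would indicate jumbo frames.
iface=${1:-lo}            # replace with the NFS-facing interface, e.g. eth0
mtu=$(cat /sys/class/net/"$iface"/mtu)
echo "MTU on $iface: $mtu"
```

The same value must be configured end to end (client, switches, filer), otherwise fragmentation or silent drops can hurt more than the larger frames help.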
AWS EFS (NFS) example
yum install -y nfs-utils
sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 MOUNT_TARGET_IP:/ efs
The Amazon EFS client uses the following mount options that are optimized for Amazon EFS:
- _netdev
- nfsvers=4.1 – used when mounting on EC2 Linux instances
- rsize=1048576
- wsize=1048576
- hard
- timeo=600
- retrans=2
- noresvport
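The same options in /etc/fstab form, for mounting at boot (the file system DNS name and mount point below are placeholders):

```
file-system-id.efs.aws-region.amazonaws.com:/ /mnt/efs nfs4 nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport,_netdev 0 0
```

_netdev delays the mount until the network is up, which matters at boot for any NFS entry.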
# rpm -ql nfs-utils
# rpm -q --filesbypkg nfs-utils | grep bin
nfs-utils /sbin/mount.nfs
nfs-utils /sbin/mount.nfs4
nfs-utils /sbin/nfs_cache_getent
nfs-utils /sbin/rpc.statd
nfs-utils /sbin/umount.nfs
nfs-utils /sbin/umount.nfs4
nfs-utils /usr/sbin/exportfs
nfs-utils /usr/sbin/mountstats
nfs-utils /usr/sbin/nfsidmap
nfs-utils /usr/sbin/nfsiostat
nfs-utils /usr/sbin/nfsstat
nfs-utils /usr/sbin/rpc.gssd
nfs-utils /usr/sbin/rpc.idmapd
nfs-utils /usr/sbin/rpc.mountd
nfs-utils /usr/sbin/rpc.nfsd
nfs-utils /usr/sbin/rpc.svcgssd
nfs-utils /usr/sbin/rpcdebug
nfs-utils /usr/sbin/showmount
nfs-utils /usr/sbin/sm-notify
nfs-utils /usr/sbin/start-statd
Selinux
# mount -t nfs -o vers=3,context="system_u:object_r:container_file_t:s0" <server>:/shared_folder /opt/ufm/files
# mount -t nfs4 -o context="system_u:object_r:container_file_t:s0" <server>:/shared_folder /opt/ufm/files
Diag
tshark -Y 'tcp.port == 2049' -r tcpdump.pcap > tcpdump.txt
Problems
Problem with the ''df'' command (it hangs when an NFS server is unreachable)
Instead, the ''df -l'' command can be used to list local filesystems only.
sudo umount -a -t nfs -l
sudo umount -a -t nfs4 -l
sudo umount -a -t autofs -l
Err access denied by server while mounting
# mount -t nfs 127.0.0.1:/exports/plop1 /mnt/nfs
mount.nfs: access denied by server while mounting 127.0.0.1:/exports/plop1
# chmod 1777 /exports/plop1
# chmod 1777 /exports
# mount -t nfs 127.0.0.1:/exports/plop1 /mnt/nfs
Err Read-only file system
- /etc/exports
# Specific IP addresses need to appear first, IP ranges after.
# Wrong order (the /24 ro entry matches 192.168.56.101 first):
#/exports/plop1 192.168.56.0/24(ro,sync,no_subtree_check)
#/exports/plop1 192.168.56.101(rw,sync,no_subtree_check)
# Correct order:
/exports/plop1 192.168.56.101(rw,sync,no_subtree_check)
/exports/plop1 192.168.56.0/24(ro,sync,no_subtree_check)
Problem: tail -f latency
How do I use the noac option
Source https://partners-intl.aliyun.com/help/en/nas/latest/67dca4
Problem
A user mounts the same network file system on two ECS servers (ECS-A and ECS-B). The user writes data in append mode on ECS-A and monitors file content changes with the tail -f command on ECS-B.
After data is written on ECS-A, the file content changes on ECS-B may experience latency of up to 30 seconds.
However, if a file is directly opened (such as using vi) on ECS-B under the same conditions, the updated content is visible immediately.
Analysis
This is related to the mount options and to how tail -f is implemented.
The user uses the following mount command: mount -t nfs4 /mnt/
For file systems mounted on ECS-B using the NFS protocol, the kernel maintains a metadata cache for file and directory attributes. The cached attributes (including permissions, size, and timestamps) are used to reduce NFSPROC_GETATTR RPC requests.
The tail -f command uses sleep+fstat to monitor changes to the file attributes (primarily the file size), read the file, and then output the results. The content that tail -f outputs therefore depends on the fstat result. Because of the metadata cache, fstat may not see the real-time file attributes: even if the file has been updated on the NFS server, tail -f cannot detect the change immediately, which causes the latency.
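The sleep+fstat loop described above can be sketched as follows (the file path is a placeholder; against a local file the growth is seen immediately, while over NFS the stat result may be served from the stale attribute cache):

```shell
#!/bin/sh
# Minimal sketch of tail -f's polling: compare the size reported by
# stat between iterations. On NFS, this size can come from the client's
# attribute cache and lag behind the server by up to the cache timeout.
f=/tmp/demo.log
: > "$f"                       # start from an empty file
last=$(stat -c %s "$f")
echo "new line" >> "$f"        # writer side (ECS-A in the scenario above)
sleep 1                        # tail -f sleeps between polls
now=$(stat -c %s "$f")         # reader side: fstat-equivalent check
if [ "$now" -gt "$last" ]; then
  echo "file grew: $last -> $now bytes"
fi
```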
Solution
Use the noac option of the mount command to disable the caching of file and directory attributes. The command is as follows:
mount -t nfs4 -o noac /mnt/
Miscellaneous
Multi-DC cluster (NFSv4, RHEL 8)
Mount options: rw,sync,noac,actimeo=0,_netdev
Export options (exports): rw,sync
