I'm beating my head against a wall with this one. The environment is two CentOS 6 64-bit installs. Both the NFS client and server were fully updated as of an hour ago.
I've set up an NFS export on the server:
/opt/nfs 10.1.1.0/24(rw,sync,no_root_squash,no_all_squash)
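For reference, after adding that line (assuming the standard /etc/exports location), something like the following re-exports everything and lists the active exports so the options can be confirmed:
# re-export everything in /etc/exports, then list active exports with their options
exportfs -ra
exportfs -v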
AFAICT, all relevant NFS services on the server are running:
(2) (0 Jobs) [root@lb01-cbr01-au ~]$ service rpcbind status
rpcbind (pid 20079) is running...
(2) (0 Jobs) [root@lb01-cbr01-au ~]$ service nfslock status
rpc.statd (pid 19986) is running...
(2) (0 Jobs) [root@lb01-cbr01-au ~]$ service nfs status
rpc.svcgssd is stopped
rpc.mountd (pid 20034) is running...
nfsd (pid 20031 20030 20029 20028 20027 20026 20025 20024) is running...
(2) (0 Jobs) [root@lb01-cbr01-au ~]$
On the client, both rpcbind and nfslock report as running.
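For completeness, the client-side checks are the same stock CentOS 6 init scripts as on the server:
service rpcbind status
service nfslock status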
On the server, the output of rpcinfo for localhost looks good:
[root@lb01-cbr01-au ~]# rpcinfo -p localhost
program vers proto port service
100000 4 tcp 111 portmapper
100000 3 tcp 111 portmapper
100000 2 tcp 111 portmapper
100000 4 udp 111 portmapper
100000 3 udp 111 portmapper
100000 2 udp 111 portmapper
100024 1 udp 39893 status
100024 1 tcp 59014 status
100003 2 tcp 2049 nfs
100003 3 tcp 2049 nfs
100003 4 tcp 2049 nfs
100227 2 tcp 2049 nfs_acl
100227 3 tcp 2049 nfs_acl
100003 2 udp 2049 nfs
100003 3 udp 2049 nfs
100003 4 udp 2049 nfs
100227 2 udp 2049 nfs_acl
100227 3 udp 2049 nfs_acl
100021 1 udp 44725 nlockmgr
100021 3 udp 44725 nlockmgr
100021 4 udp 44725 nlockmgr
100021 1 tcp 40736 nlockmgr
100021 3 tcp 40736 nlockmgr
100021 4 tcp 40736 nlockmgr
100005 1 udp 55385 mountd
100005 1 tcp 55481 mountd
100005 2 udp 46027 mountd
100005 2 tcp 59968 mountd
100005 3 udp 45069 mountd
100005 3 tcp 33231 mountd
[root@lb01-cbr01-au ~]#
Similarly, the output of rpcinfo -p localhost on the client looks healthy:
program vers proto port service
100000 4 tcp 111 portmapper
100000 3 tcp 111 portmapper
100000 2 tcp 111 portmapper
100000 4 udp 111 portmapper
100000 3 udp 111 portmapper
100000 2 udp 111 portmapper
100024 1 udp 59519 status
100024 1 tcp 39715 status
The firewall is open between the client and server: an allow rule exists for the interface on both the INPUT and OUTPUT chains on each host.
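For illustration, the rules on each host look roughly like this (eth1 is just a placeholder for the actual interface name):
iptables -A INPUT  -i eth1 -s 10.1.1.0/24 -j ACCEPT
iptables -A OUTPUT -o eth1 -d 10.1.1.0/24 -j ACCEPT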
From the client, showmount -e <server_ip> hangs for about 20 seconds before eventually producing the export list. rpcinfo -p <server_ip> also hangs for about 20 seconds before eventually returning "rpcinfo: can't contact portmapper: RPC: Remote system error - Connection timed out".
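For reference, a quick way to test raw TCP reachability of the portmapper and nfsd ports from the client (assuming nc is installed) is:
# -z: just scan, -v: verbose, -w 5: 5 second timeout
nc -zv -w 5 10.1.1.33 111
nc -zv -w 5 10.1.1.33 2049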
When I attempt to actually mount the export from the client, using:
mount -t nfs 10.1.1.33:/opt/nfs /opt/test/nfs
It hangs for 3 minutes 30 seconds, then returns "mount.nfs: Connection timed out".
However, if I try to mount over UDP:
mount -o udp -t nfs 10.1.1.33:/opt/nfs /opt/test/nfs
It instantly succeeds and the mount is accessible.
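To double-check which transport the working mount is actually using, the negotiated options can be inspected on the client:
# show mounted NFS filesystems and their options (look for proto=udp)
nfsstat -m
grep /opt/test/nfs /proc/mounts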
I haven't touched hosts.allow or hosts.deny (both are empty, which, from my reading of man 5 hosts_access, means access will be allowed).
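If I did want to lock this down with TCP wrappers, an explicit allow would look something like this (illustrative only; as noted, my files are empty):
# /etc/hosts.allow -- hosts_access uses net/mask rather than CIDR for IPv4
rpcbind: 10.1.1.0/255.255.255.0
mountd: 10.1.1.0/255.255.255.0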
What am I missing here?
Edit: SELinux is permissive on both hosts.
It turns out there was a "security" feature enabled on our PowerConnect switch that took offense to NFS SYN packets with source ports < 1024 (dos-control tcpflag). Suffice it to say, disabling the feature solved the issue.
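For anyone hitting something similar: the Linux NFS client uses a reserved (< 1024) source port by default, which is what the switch objected to. Watching the SYNs from the client makes it obvious (eth1 again a placeholder for the actual interface); as a possible client-side workaround, the noresvport mount option forces a non-privileged source port, though the server export would also need the insecure option to accept it:
# note the source port on the SYNs heading to the NFS server
tcpdump -nn -i eth1 'tcp[tcpflags] & tcp-syn != 0 and host 10.1.1.33 and port 2049'
# possible workaround: use a non-privileged source port (needs 'insecure' on the export)
mount -o noresvport -t nfs 10.1.1.33:/opt/nfs /opt/test/nfs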
Although SELinux is permissive, try:
setsebool -P nfs_export_all_rw 1
Then restart rpcbind, nfs and nfslock, and run exportfs -a.
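As a concrete sequence on CentOS 6, that amounts to something like:
setsebool -P nfs_export_all_rw 1
service rpcbind restart
service nfs restart
service nfslock restart
exportfs -a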