Some NFS Notes

A few notes on configuring NFS on RedHat or Ubuntu

The server packages to install are:
Ubuntu:
apt-get install nfs-kernel-server nfs-common quota
RedHat:
yum install nfs-utils nfs-utils-lib quota

The client packages are:
Ubuntu:
apt-get install nfs-common
RedHat:
yum install nfs-utils nfs-utils-lib

A few things to configure on the server:

/etc/hosts.allow
Any hosts listed here will be granted access to everything, e.g.:
ALL: 2.11.1.2[4-5]
ALL: 192.168.1.*


or just NFS access:
portmap: 192.168.0.1, 192.168.0.2
lockd: 192.168.0.1, 192.168.0.2
rquotad: 192.168.0.1, 192.168.0.2
mountd: 192.168.0.1, 192.168.0.2
statd: 192.168.0.1, 192.168.0.2

/etc/hosts.deny
Any hosts listed here will be denied access, e.g.:
portmap: ALL
lockd: ALL
mountd: ALL
rquotad: ALL
statd: ALL

Doing an ALL: ALL here would block other services such as SSH. It's a good idea to deny these services to all hosts by default and explicitly list permitted hosts in hosts.allow, as any host not matched in either file is automatically allowed.

/etc/exports
Defines what will be shared and to whom. Targets can be a single IP, a netgroup, a CIDR range, or a wildcard, e.g.:
/home 192.168.0.1(rw) 192.168.0.2(rw)
/files *(rw,all_squash,subtree_check)
/home 192.168.0.0/255.255.255.0(rw)
/files *(rw,sync,no_subtree_check,anonuid=222,anongid=1001)

* = share to ALL allowed hosts
ro = read only
rw = read write
root_squash = (default) maps root requests to the anonymous user
no_root_squash = remote root user is root!
all_squash = all remote users become the anonymous user
subtree_check / no_subtree_check = when a subdirectory of a filesystem is exported, the server must check that each requested file is within the exported tree. Disabling this has mild security implications but can improve performance.

If you make changes to exports later, run exportfs -ra to make the NFS server re-read it.
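exportfs can also show you what is actually being exported; a quick sketch (run as root on the server, with the NFS tools installed):

```shell
exportfs -ra              # re-read /etc/exports and apply any changes
exportfs -v               # list current exports with their options
showmount -e localhost    # what a connecting client would see
```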

Make sure the right services are running:
RedHat:
/etc/init.d/nfs start
/etc/init.d/portmap start

Ubuntu:
/etc/init.d/nfs-kernel-server start
/etc/init.d/portmap start
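As a sanity check on the server, you can confirm the RPC services have registered with the portmapper:

```shell
rpcinfo -p             # list RPC programs registered with portmap
rpcinfo -p | grep nfs  # nfs entries should appear on port 2049
```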

To connect from the client:

Make sure portmap is running:
/etc/init.d/portmap start

Add something like this to your /etc/fstab, depending on your options:
192.168.100.85:/home/myself /mnt/test nfs users,auto,rw 0 0
NAS:/files /files nfs users,auto,rw,sync,rsize=8192,timeo=14,wsize=8192,intr 0 0

mkdir /files
mount /files

Check that the share has mounted correctly.
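To check, something like:

```shell
mount | grep nfs   # show active NFS mounts and their options
df -h /files       # the remote filesystem should show under the mount point
```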

*Notes*
NFS isn't the most secure of protocols; run it only over your local network, a VPN, or somewhere similarly trusted.
Firewalls:
portmap: 111 TCP/UDP
nfsd: 2049 TCP/UDP

statd, mountd, lockd and rquotad will generally float around on whatever ports the portmapper assigns, which makes firewalling awkward. You can bind each to a specific port:

REDHAT
1. Find some free ports
2. Edit /etc/sysconfig/nfs
# NFS port numbers
STATD_PORT=10002
STATD_OUTGOING_PORT=10003
MOUNTD_PORT=10004
RQUOTAD_PORT=10005
LOCKD_UDPPORT=30001
LOCKD_TCPPORT=30001
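With the ports pinned down, firewall rules can be written against them. A sketch with iptables, assuming the port numbers above and a 192.168.0.0/24 client subnet (both are assumptions; adjust for your network):

```shell
# Allow NFS traffic from the local subnet only (example subnet)
SUBNET=192.168.0.0/24
iptables -A INPUT -s $SUBNET -p tcp -m multiport --dports 111,2049,10002:10005 -j ACCEPT
iptables -A INPUT -s $SUBNET -p udp -m multiport --dports 111,2049,10002:10005 -j ACCEPT
iptables -A INPUT -s $SUBNET -p tcp --dport 30001 -j ACCEPT   # lockd
iptables -A INPUT -s $SUBNET -p udp --dport 30001 -j ACCEPT   # lockd
```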

UBUNTU
Edit /etc/default/nfs-common and add

STATDOPTS="--port 32765 --outgoing-port 32766"

Edit /etc/default/nfs-kernel-server and add

RPCMOUNTDOPTS="-p 32767"

Edit /etc/default/quota and add

RPCRQUOTADOPTS="-p 32769"

Create /etc/modprobe.d/local.conf with the contents

options lockd nlm_udpport=32768 nlm_tcpport=32768

Update /etc/services

# NFS ports as per the NFS-HOWTO
# http://www.tldp.org/HOWTO/NFS-HOWTO/security.html#FIREWALLS
# Listing here does not mean they will bind to these ports.
rpc.nfsd 2049/tcp # RPC nfsd
rpc.nfsd 2049/udp # RPC nfsd
rpc.statd-bc 32765/tcp # RPC statd broadcast
rpc.statd-bc 32765/udp # RPC statd broadcast
rpc.statd 32766/tcp # RPC statd listen
rpc.statd 32766/udp # RPC statd listen
rpc.mountd 32767/tcp # RPC mountd
rpc.mountd 32767/udp # RPC mountd
rpc.lockd 32768/tcp # RPC lockd/nlockmgr
rpc.lockd 32768/udp # RPC lockd/nlockmgr
rpc.quotad 32769/tcp # RPC quotad
rpc.quotad 32769/udp # RPC quotad

Update your firewall accordingly and restart NFS. You may find this doesn't work on your distro, as some modules may be compiled into the kernel. rpcinfo -p will list the port numbers actually in use; do a Google if it's not going to plan!

You will probably want to make sure that the owner of the files on the fileserver and the user(s) accessing them exist on all systems with the same UIDs.
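A quick way to compare is with id. A minimal sketch; the check_uid function name is made up for illustration, and you would run it on each machine against the UIDs recorded on the fileserver:

```shell
# Check that a user exists locally and has the expected UID.
# "check_uid" is a hypothetical helper name for this sketch.
check_uid() {
  user="$1"; expected="$2"
  actual=$(id -u "$user" 2>/dev/null) || { echo "no such user: $user"; return 1; }
  if [ "$actual" -eq "$expected" ]; then
    echo "$user: UID $actual matches"
  else
    echo "$user: UID $actual != expected $expected"
    return 1
  fi
}

check_uid root 0   # prints "root: UID 0 matches"
```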

A random error I had with NFSv4 was files mounting with a UID and GID of 4294967294 (i.e. -2, the anonymous/nobody user) even though the system UIDs matched up correctly. If you can live with v3 then just add -o vers=3 to your mount command. The better fix is:

Edit /etc/idmapd.conf on both server and client and set Domain to the same value, e.g. localdomain:

[General]
Domain = localdomain
[Translation]
Method = nsswitch

Change /etc/default/nfs-common (on both your server and client) to set NEED_IDMAPD=yes

Start and enable the idmapd service on both machines.
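On the init-script systems these notes target, that looks something like the following (script names vary by distro and release, so treat these as a sketch):

```shell
/etc/init.d/nfs-common restart          # Ubuntu: starts rpc.idmapd when NEED_IDMAPD=yes
/etc/init.d/nfs-kernel-server restart   # server side only
```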
