Re: how to spec a server that will accept NFS connections from 300 hosts.



On Fri, 2009-09-25 at 01:51 +0000, Rahul wrote:
Chris Cox <chrisncoxn@xxxxxxxxxxxxxx> wrote in
news:1252002208.6805.128.camel@geeko:

Thanks Chris! Sorry, I never noticed your very useful reply.

If you have any really old NFS clients out there, don't do this.

None. All new machines. So then I ought to do NFS over UDP? Where exactly
is this specified, UDP versus TCP?

NO. No you do NOT want to use UDP. It's just really old systems that
had this restriction.

NFS UDP over "high speed" networks (gigabit) will result in corruption.
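On modern Linux clients TCP is the default, but the transport can be pinned explicitly with the `proto` mount option. A minimal sketch; the server name `nfsserver`, export path `/export`, and mount point `/mnt/data` are placeholders, not anything from this thread:

```shell
# Mount an NFS export over TCP explicitly (proto=tcp).
# "nfsserver:/export" and "/mnt/data" are placeholder names.
mount -t nfs -o proto=tcp,rsize=32768,wsize=32768 nfsserver:/export /mnt/data

# Or the equivalent /etc/fstab line:
# nfsserver:/export  /mnt/data  nfs  proto=tcp,rsize=32768,wsize=32768  0 0

# Verify which transport is actually in use for existing mounts:
grep nfs /proc/mounts
```

The `/proc/mounts` check is the quick way to confirm an existing mount didn't silently fall back to UDP.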


That's ok. Reasonable (on the extreme side). I use a LOT less and
serve well over 100 clients without issue... just a 1Gbit network.

These are HPC nodes though. Notorious for doing lots of I/O, and doing it
24/7.


They usually get 30+MB/sec or so... so, not terribly shabby. A single-
client benchmark (the last one I did before going live on the 1Gbit
network) showed 92MB/sec on seq. read and 60MB/sec on seq. write
(random I/O was good, in the 40-50MB/sec range)...

I just posted bonnie++ output from my similar (but much smaller) NFS
setup.

http://dl.getdropbox.com/u/118481/io_benchmarks/bonnie_op_node25.html

Not great. This was an NFS test across gigabit?? Reads look bad.

With that said, there are good versions of bonnie++ and bad versions.
What version did you use?

But still, I'm not aware of a version of bonnie++ that had a problem
with block reads.
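For the numbers to be comparable at all, both sides need the same bonnie++ version and a test file well beyond the client's RAM, so reads aren't served from page cache. A sketch of a typical invocation; `/mnt/nfs` is a placeholder mount point and 16384 MB assumes an 8G-RAM client:

```shell
# Run bonnie++ against an NFS mount. File size (-s, in MB) should be
# at least twice the client's RAM so reads hit the network, not cache.
# "/mnt/nfs" is a placeholder mount point.
bonnie++ -d /mnt/nfs -s 16384 -n 0 -u nobody
# -d  directory to test in (on the NFS mount)
# -s  total size of the test data, in MB
# -n  0 skips the small-file create/stat/delete phase
# -u  user to run as when invoked as root
```

The version string is printed in bonnie++'s own output header, which is the easiest way to answer the "which version" question above.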


I'm still trying to figure out which of your numbers to compare with
which of my corresponding numbers! :)

which might not be
picture perfect, but good enough (same network as normal traffic, no
jumbo frames).

Should I use jumbo frames? I mean no compatibility issues for me. All
this is my private network end-to-end.

Probably NOT. You can convert to jumbo frames IF ALL NICS are running
Jumbo frames (whole network NO EXCEPTIONS). If you don't, you'll get
frame errors all over the place.
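For the record, the MTU is set per interface, and there's a cheap way to prove the whole path really carries jumbo frames. A sketch; `eth0` and `otherhost` are placeholder names:

```shell
# Check the current MTU (eth0 is a placeholder interface name):
ip link show eth0

# Set a 9000-byte MTU -- only when EVERY host and switch port on the
# segment is configured for jumbo frames, per the warning above:
ip link set dev eth0 mtu 9000

# Sanity test: ping with don't-fragment set and a payload that only
# fits in a jumbo frame (8972 = 9000 - 20 IP header - 8 ICMP header).
# If anything in the path is still at MTU 1500, this fails.
ping -M do -s 8972 -c 3 otherhost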



Our NAS is split into two servers each serving up about 2TB max. Each
is a DL380G5 2x5130 with 8G ram with 8 nfsd's each. Backend storage
comes off a SAN. Both are running SLES10SP1 currently. Just checked,
one is serving to about 150 client hosts and the other about 110.
TONS of free memory. No evidence of them EVER swapping. So I still
think 16G is overkill.
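The "8 nfsd's each" figure is tunable at runtime. A sketch of checking and changing the thread count on a Linux NFS server; the SLES sysconfig variable name is from memory and may differ by release:

```shell
# Show how many kernel nfsd threads are currently running:
cat /proc/fs/nfsd/threads

# Change the server to 8 threads on the fly:
rpc.nfsd 8

# On SLES the boot-time value lives in /etc/sysconfig/nfs, e.g.:
#   USE_KERNEL_NFSD_NUMBER="8"
```

If clients stall while the server's disks are idle, raising the thread count is usually the first knob to try.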


Any way to check what's the RAM utilization of my current NFS server
setup? I tried nfsstat but it won't show me anything useful.

free?

I've got a pretty heavily hit setup... we just have 8G of ram... and I
doubt we ever use it all.
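To expand on the `free` suggestion: the number that matters is memory used excluding buffers/cache, since page cache is reclaimable and an NFS server is *supposed* to fill RAM with it. A sketch:

```shell
# Overall memory picture; look at the used figure net of
# buffers/cache -- page cache is reclaimable, not "used up":
free -m

# The same numbers straight from the kernel:
awk '/^(MemTotal|MemFree|Buffers|Cached):/ {print}' /proc/meminfo

# Check whether the box has ever dipped into swap:
awk '/^Swap/ {print}' /proc/meminfo
```

If SwapFree stays equal to SwapTotal over time, the server has never been memory-pressured, which is the "no evidence of them EVER swapping" observation above.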




