Re: how to spec a server that will accept NFS connections from 300 hosts.



On Fri, 2009-09-25 at 01:51 +0000, Rahul wrote:
Chris Cox <chrisncoxn@xxxxxxxxxxxxxx> wrote in
news:1252002208.6805.128.camel@geeko:

Thanks Chris! Sorry, I never noticed your very useful reply.

If you have any really old NFS clients out there, don't do this.

None. All new machines. So then I ought to do NFS over UDP? Where exactly
is this specified, UDP versus TCP?

NO. No you do NOT want to use UDP. It's just really old systems that
had this restriction.

NFS UDP over "high speed" networks (gigabit) will result in corruption.


That's ok. Reasonable (on the extreme side). I use a LOT less and
serve up well over 100 clients without issue... just a 1Gbit network.

These are HPC nodes though. Notorious for doing a lot of I/O, and doing it
24/7.


They usually get 30+MB/sec or so.... so, not terribly shabby. The last
single-client benchmark I did before going live on the 1Gbit network
showed 92MB/sec on seq. read and 60MB/sec on seq. write (random I/O was
good, in the 40-50MB/sec range)...

I just posted bonnie++ output from my similar (but much smaller) NFS
setup.

http://dl.getdropbox.com/u/118481/io_benchmarks/bonnie_op_node25.html

Not great. This was an NFS test across gigabit?? Reads look bad.

With that said, there are good versions of bonnie++ and bad versions.
What version did you use?

But still, I'm not aware of a version of bonnie++ that had a problem
with block reads.
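
If you want to re-run it so we're comparing apples to apples, something
like this is roughly what I do (the mount point, size and user are only
examples; the important part is making the file size at least twice the
client's RAM so the page cache doesn't flatter the read numbers):

   # which bonnie++ build is installed (RPM-based box like SLES)
   rpm -q bonnie++

   # run from a client against the NFS mount
   # -d test directory, -s file size in MB (>= 2x client RAM), -u user to run as
   bonnie++ -d /mnt/nfs/bonnietest -s 16384 -u nobody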


I'm still trying to figure out which of your numbers to compare with
which of my corresponding numbers! :)

which might not be
picture perfect, but good enough (same network as normal traffic, no
jumbo frames).

Should I use jumbo frames? I mean, there are no compatibility issues for
me; all of this is my private network end-to-end.

Probably NOT. You can only convert to jumbo frames IF ALL NICs are running
jumbo frames (whole network, NO EXCEPTIONS). If you don't, you'll get
frame errors all over the place.
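
If you ever do go that route, the change itself is just the MTU on every
interface (and every switch in the path has to support it too). Roughly,
on SLES (interface name and config file are only examples):

   # check and change the MTU on the fly (reverts at reboot)
   ip link show eth0
   ip link set dev eth0 mtu 9000

   # to make it persistent, add this to the interface's
   # /etc/sysconfig/network/ifcfg-* file
   MTU='9000'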



Our NAS is split into two servers, each serving up about 2TB max. Each
is a DL380 G5 (2x 5130) with 8G of RAM, running 8 nfsd's. Backend storage
comes off a SAN. Both are currently running SLES10 SP1. Just checked:
one is serving about 150 client hosts and the other about 110.
TONS of free memory. No evidence of them EVER swapping. So I still
think 16G is overkill.
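
(If you want to see how many nfsd threads you're running on your own box,
something like this works; the /proc path assumes the nfsd filesystem is
mounted, which the nfsserver init script normally takes care of:)

   # count the nfsd kernel threads
   ps ax | grep '[n]fsd' | wc -l

   # or read it directly if /proc/fs/nfsd is mounted
   cat /proc/fs/nfsd/threads

   # bump the thread count at runtime (example value)
   rpc.nfsd 16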


Any way to check what's the RAM utilization of my current NFS server
setup? I tried nfsstat but it won't show me anything useful.

free?

I've got a pretty heavily hit setup... we just have 8G of ram... and I
doubt we ever use it all.
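
If it helps, this is roughly what I look at (nfsd itself runs as kernel
threads, so most of the "usage" you'll see is really just page cache):

   # -m = megabytes; watch the free column on the "-/+ buffers/cache" line
   free -m

   # and confirm it never swaps: the si/so columns should stay at 0
   vmstat 5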

