1. Parts of this material are based upon work supported under a National Science Foundation Graduate Fellowship. Any opinions, findings, conclusions, or recommendations expressed in this publication are those of the author, and do not necessarily reflect the views of the National Science Foundation.
2. This speed was observed on an overloaded network where the client and the server were on separate subnets. For comparison, both rcp and ftp wrote at over 100KB/sec.
3. The Linux 2.1.32 NFS implementation is by Olaf Kirch (okir@monad.swb.de).
4. In fact, this is common for distributed file systems [WPE+83].
5. Relative to 10Mbit ethernet, modern SCSI disks (with an efficient file system such as Linux ext2fs [CTT96] and bus-mastering PCI controllers) are an order of magnitude faster.
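A rough back-of-the-envelope check of that claim (illustrative figures, not measurements from this work):

    10 Mbit/s / 8 bits per byte  ~=  1.25 MB/s   (theoretical peak of the ethernet)
    1.25 MB/s x 10               ~=  12.5 MB/s   (an order of magnitude faster)

A sustained rate on the order of 10 MB/s is plausible for a fast SCSI disk under an efficient file system, while NFS over a shared 10Mbit segment typically achieves only a fraction of the 1.25 MB/s peak.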
6. In addition, improved read caching (see section , on page ) will strictly increase the fraction of all RPCs that are lookups or getattrs.
7. Lazowska et al. noted that the server CPU is the primary bottleneck for scaling distributed file systems [LZCZ86].
8. In fact, Sun's initial VFS interface (and the corresponding Vnode, or virtual node, interface) was introduced to support their NFS implementation [SGK+85, p. 124].
9. Like most other implementations, we permit the use of cached lookup RPC results, so that we ask the server whether the file has changed at most once every 3-5 seconds.
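As an illustration of the kind of check this implies (a minimal user-space sketch with hypothetical names, not the actual client code): cached lookup/attribute results are trusted until a per-entry timeout expires, so the server is consulted at most once per interval.

    #include <stdbool.h>
    #include <time.h>

    /* Hypothetical cache entry: the last state the server reported and
       the time at which it was last revalidated. */
    struct cache_entry {
        time_t last_revalidated;   /* when we last asked the server */
        long   server_mtime;       /* modification time the server reported */
    };

    #define REVALIDATE_INTERVAL 3  /* seconds; the footnote cites 3-5 seconds */

    /* True if the cached entry may be used without contacting the server. */
    bool cache_entry_fresh(const struct cache_entry *e, time_t now)
    {
        return (now - e->last_revalidated) < REVALIDATE_INTERVAL;
    }

    /* Consult path: issue at most one GETATTR-style RPC per interval. */
    long cached_mtime(struct cache_entry *e, long (*getattr_rpc)(void))
    {
        time_t now = time(NULL);
        if (!cache_entry_fresh(e, now)) {
            e->server_mtime = getattr_rpc();
            e->last_revalidated = now;
        }
        return e->server_mtime;
    }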
10. Surprisingly, neither the Linux nor the Solaris NFS client updates the access time when a page is read from the VFS memory cache.
11. This deficiency is removed in NFS V3; see section , on page for details.
12. This is the principal reason why our implementation does not cache files that have changed recently.
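A minimal sketch of such a policy (user-space C with an invented threshold; the implementation's actual criterion and constants may differ): a file is considered for local caching only once its server modification time is sufficiently old.

    #include <stdbool.h>
    #include <time.h>

    /* Invented tunable: how long a file must have been unmodified
       before we are willing to cache it locally. */
    #define RECENT_CHANGE_WINDOW 60  /* seconds, illustrative only */

    /* server_mtime is the modification time reported by the NFS server. */
    bool worth_caching(time_t server_mtime, time_t now)
    {
        /* A file modified very recently may still be changing (possibly
           from another host), so caching it risks serving stale data. */
        return (now - server_mtime) >= RECENT_CHANGE_WINDOW;
    }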
13. In our architecture, it does not make sense to stop the fill-in process because of space concerns, because the cleaning process can only operate on completely cached files. See section , on page .
14. Fill-in using the filehandle after the NFS inode has left memory is not yet implemented.
15. This same interface is used at boot time (actually, just after the NFS client module is inserted) to inform the kernel of space consumed by cache files that persisted across a reboot.
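One way such a boot-time report could work (a user-space sketch under invented names; the actual cache directory and kernel interface are whatever the implementation defines): walk the cache directory, sum the space its files occupy, and hand the total to the kernel.

    #include <dirent.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/stat.h>

    #define CACHE_DIR   "/var/cache/nfs"             /* hypothetical */
    #define REPORT_FILE "/proc/fs/nfs-cache/space"   /* hypothetical */

    int main(void)
    {
        DIR *d = opendir(CACHE_DIR);
        if (!d) { perror(CACHE_DIR); return 1; }

        long long total = 0;
        struct dirent *ent;
        while ((ent = readdir(d)) != NULL) {
            if (!strcmp(ent->d_name, ".") || !strcmp(ent->d_name, ".."))
                continue;
            char path[4096];
            snprintf(path, sizeof path, "%s/%s", CACHE_DIR, ent->d_name);
            struct stat st;
            if (stat(path, &st) == 0)
                total += (long long)st.st_blocks * 512;  /* space in use */
        }
        closedir(d);

        /* Tell the kernel how much cache space survived the reboot. */
        FILE *f = fopen(REPORT_FILE, "w");
        if (!f) { perror(REPORT_FILE); return 1; }
        fprintf(f, "%lld\n", total);
        fclose(f);
        return 0;
    }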
16. Worst-case round-trip times on the University of Washington computer science department network are often around 300ms when communicating between subnets, with an average of more than 15ms during normal workday traffic.
17. Since the cache is not cleared between the single-run benchmarks and the parallel runs, both the parallel ``read a big file x 4'' and ``read a big file again x 4'' tests are serviced from the local disk cache; only the single ``read a big file'' test accesses the file from the server.
18. It is dangerous because closed files' changes would not be immediately visible to other hosts that open the recently closed file (and thus NFS disallows such behavior). One can imagine writing a large file, then rsh-ing to another machine to continue working on that file. If the close completes and returns to the client before the data exists on the server, the second machine will not see the entire file immediately.
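The scenario can be made concrete with a small user-space sketch (paths and host names are invented): the rsh step is safe only because close() does not return until the written data is stable on the server; with a fully asynchronous close, the second host could observe a short or partially written file.

    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void)
    {
        /* Hypothetical NFS-mounted path. */
        int fd = open("/net/shared/bigfile", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0) { perror("open"); return 1; }

        char buf[8192] = {0};
        for (int i = 0; i < 1024; i++) {       /* about 8MB of data */
            if (write(fd, buf, sizeof buf) != (ssize_t)sizeof buf) {
                perror("write"); return 1;
            }
        }

        /* NFS flushes dirty data here and only then returns from close. */
        if (close(fd) < 0) { perror("close"); return 1; }

        /* Safe only because the data already exists on the server. */
        return system("rsh host2 process-file /net/shared/bigfile");
    }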