Distributed Systems - Distributed File Systems - Thoai Nam

What is a file system? 1
• Persistent stored data sets
• Hierarchic name space visible to all processes
• API with the following characteristics:
  – access and update operations on persistently stored data sets
  – sequential access model (with additional random facilities)
• Sharing of data between users, with access control
• Concurrent access:
  – certainly for read-only access
  – what about updates?
• Other features:
  – mountable file stores
  – more?

  1. Distributed Systems Course: Distributed File Systems
     Teaching material based on Distributed Systems: Concepts and Design, Edition 3, Addison-Wesley 2001.
     Copyright © George Coulouris, Jean Dollimore, Tim Kindberg 2001 (email: authors@cdk2.net). This material is made available for private study and for direct use by individual teachers. It may not be included in any product or employed in any service without the written permission of the authors. Viewing: these slides must be viewed in slide show mode.
     Outline:
     – Chapter 2 Revision: Failure model
     – Chapter 8: 8.1 Introduction; 8.2 File service architecture; 8.3 Sun Network File System (NFS); [8.4 Andrew File System (personal study)]; 8.5 Recent advances; 8.6 Summary
  2. Chapter 2 Revision: Failure model (Figure 2.11)
     Class of failure – Affects – Description:
     – Fail-stop (process): Process halts and remains halted. Other processes may detect this state.
     – Crash (process): Process halts and remains halted. Other processes may not be able to detect this state.
     – Omission (channel): A message inserted in an outgoing message buffer never arrives at the other end's incoming message buffer.
     – Send-omission (process): A process completes a send, but the message is not put in its outgoing message buffer.
     – Receive-omission (process): A message is put in a process's incoming message buffer, but that process does not receive it.
     – Arbitrary/Byzantine (process or channel): Process/channel exhibits arbitrary behaviour: it may send/transmit arbitrary messages at arbitrary times, commit omissions; a process may stop or take an incorrect step.
  3. Storage systems and their properties (Figure 8.1)
     Types of consistency between copies: 1 = strict one-copy consistency; √ = approximate consistency; X = no automatic consistency.
     The figure compares each system by Sharing, Persistence, Distributed cache/replicas, Consistency maintenance and Example:
     – Main memory: consistency 1; example: RAM
     – File system: consistency 1; example: UNIX file system
     – Distributed file system: example: Sun NFS
     – Web: example: Web server
     – Distributed shared memory: example: Ivy (Ch. 16)
     – Remote objects (RMI/ORB): consistency 1; example: CORBA
     – Persistent object store: consistency 1; example: CORBA Persistent Object Service
     – Persistent distributed object store: example: PerDiS, Khazana
  4. What is a file system? 2 (Figure 8.4: UNIX file system operations)
     – filedes = open(name, mode): Opens an existing file with the given name.
     – filedes = creat(name, mode): Creates a new file with the given name. Both operations deliver a file descriptor referencing the open file. The mode is read, write or both.
     – status = close(filedes): Closes the open file filedes.
     – count = read(filedes, buffer, n): Transfers n bytes from the file referenced by filedes to buffer.
     – count = write(filedes, buffer, n): Transfers n bytes to the file referenced by filedes from buffer. Both operations deliver the number of bytes actually transferred and advance the read-write pointer.
     – pos = lseek(filedes, offset, whence): Moves the read-write pointer to offset (relative or absolute, depending on whence).
     – status = unlink(name): Removes the file name from the directory structure. If the file has no other names, it is deleted.
     – status = link(name1, name2): Adds a new name (name2) for a file (name1).
     – status = stat(name, buffer): Gets the file attributes for file name into buffer.
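To make Figure 8.4 concrete, the following minimal C sketch (not part of the original slides) exercises the calls in sequence; the path /tmp/example.txt, the mode 0644 and the buffer size are arbitrary choices for illustration.

```c
/* Sketch of the UNIX file API from Figure 8.4 (path and sizes are arbitrary). */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    char buffer[32];
    const char *text = "hello, world\n";

    /* creat() delivers a file descriptor referencing the new file */
    int fd = creat("/tmp/example.txt", 0644);
    if (fd < 0) { perror("creat"); return 1; }

    /* write() advances the read-write pointer by the bytes transferred */
    write(fd, text, strlen(text));
    close(fd);

    /* reopen read-only, move the read-write pointer, then read */
    fd = open("/tmp/example.txt", O_RDONLY);
    lseek(fd, 7, SEEK_SET);                   /* absolute offset (whence = SEEK_SET) */
    ssize_t n = read(fd, buffer, sizeof buffer - 1);
    if (n < 0) n = 0;
    buffer[n] = '\0';
    printf("read back: %s", buffer);
    close(fd);

    /* stat() gets the file attributes into a struct stat */
    struct stat st;
    stat("/tmp/example.txt", &st);
    printf("size = %lld bytes\n", (long long)st.st_size);

    /* link() adds a second name; unlink() removes names */
    link("/tmp/example.txt", "/tmp/example2.txt");
    unlink("/tmp/example2.txt");
    unlink("/tmp/example.txt");
    return 0;
}
```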
  5. File service requirements
     • Transparency
       – Access: same operations for local and remote files
       – Location: same name space after relocation of files or processes
       – Mobility: automatic relocation of files is possible
       – Performance: satisfactory performance across a specified range of system loads
       – Scaling: service can be expanded to meet additional loads
     • Concurrency properties
       – isolation
       – file-level or record-level locking
       – other forms of concurrency control to minimise contention
     • Replication properties
       – file service maintains multiple identical copies of files: load-sharing between servers makes the service more scalable; local access has better response (lower latency); fault tolerance
       – full replication is difficult to implement; caching (of all or part of a file) gives most of the benefits (except fault tolerance)
     • Heterogeneity properties
       – service can be accessed by clients running on (almost) any OS or hardware platform
       – design must be compatible with the file systems of different OSes
       – service interfaces must be open: precise specifications of APIs are published
     • Fault tolerance
       – service must continue to operate even when clients make errors or crash: at-most-once semantics, or at-least-once semantics, which requires idempotent operations (see the sketch after this list)
       – service must resume after a server machine crashes
       – if the service is replicated, it can continue to operate even during a server crash
     • Consistency
       – Unix offers one-copy update semantics for operations on local files; caching is completely transparent
       – difficult to achieve the same for distributed file systems while maintaining good performance and scalability
     • Security
       – must maintain access control and privacy as for local files, based on the identity of the user making the request; identities of remote users must be authenticated; privacy requires secure communication
       – service interfaces are open to all processes not excluded by a firewall: vulnerable to impersonation and other attacks
     • Efficiency
       – goal for distributed file systems is usually performance comparable to a local file system
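The fault-tolerance point that at-least-once semantics requires idempotent operations can be illustrated with a short C sketch (illustrative types and names, not from the slides): a read that names the byte position explicitly, as in Read(FileId, i, n), is safe to retry, whereas a stateful "read next" is not.

```c
/* Sketch: idempotent vs non-idempotent reads under retried (at-least-once) RPCs. */
#include <stdio.h>
#include <string.h>

typedef struct { char bytes[64]; size_t length; } File;

/* Idempotent: the position i is an explicit parameter (as in Read(FileId, i, n)),
   so a duplicated or retried request returns exactly the same data. */
size_t read_at(const File *f, size_t i, size_t n, char *out)
{
    if (i >= f->length) return 0;
    if (n > f->length - i) n = f->length - i;
    memcpy(out, f->bytes + i, n);
    return n;
}

/* Not idempotent: the server keeps a read-write pointer, so a retried request
   skips data. Safe only under at-most-once semantics. */
size_t read_next(const File *f, size_t *server_pointer, size_t n, char *out)
{
    size_t got = read_at(f, *server_pointer, n, out);
    *server_pointer += got;          /* server-side state changes on every call */
    return got;
}

int main(void)
{
    File f = { "abcdefgh", 8 };
    char out[9] = {0};
    size_t ptr = 0;

    read_at(&f, 2, 3, out);      printf("read_at(2,3):          %.3s\n", out);
    read_at(&f, 2, 3, out);      printf("read_at(2,3) retried:  %.3s\n", out);

    read_next(&f, &ptr, 3, out); printf("read_next first call:  %.3s\n", out);
    read_next(&f, &ptr, 3, out); printf("read_next 'retry':     %.3s\n", out);
    return 0;
}
```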
  6. Server operations for the model file service (Figures 8.6 and 8.7)
     Flat file service (i is the position of the first byte; a FileId is a unique identifier for files anywhere in the network):
     – Read(FileId, i, n) -> Data
     – Write(FileId, i, Data)
     – Create() -> FileId
     – Delete(FileId)
     – GetAttributes(FileId) -> Attr
     – SetAttributes(FileId, Attr)
     Directory service:
     – Lookup(Dir, Name) -> FileId
     – AddName(Dir, Name, File)
     – UnName(Dir, Name)
     – GetNames(Dir, Pattern) -> NameSeq
     Pathname lookup: pathnames such as '/usr/bin/tar' are resolved by iterative calls to Lookup(), one call for each component of the path, starting with the ID of the root directory '/', which is known in every client (a sketch of this resolution follows below).
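As a sketch of that iterative resolution in C: the in-memory directory table, the FileId typedef and the entries for /usr/bin/tar are illustrative assumptions standing in for the real directory service RPCs of Figure 8.7.

```c
/* Sketch: resolve a pathname with one Lookup(Dir, Name) call per component. */
#include <stdio.h>
#include <string.h>

typedef long FileId;

/* Toy directory service: (directory, name) -> FileId entries. */
struct entry { FileId dir; const char *name; FileId file; };
static const struct entry table[] = {
    {0, "usr", 1}, {1, "bin", 2}, {2, "tar", 3},
};
static const FileId root_dir = 0;      /* ID of '/', known to every client */

static FileId lookup(FileId dir, const char *name)   /* Lookup(Dir, Name) -> FileId */
{
    for (size_t i = 0; i < sizeof table / sizeof table[0]; i++)
        if (table[i].dir == dir && strcmp(table[i].name, name) == 0)
            return table[i].file;
    return -1;                         /* not found */
}

/* One Lookup() call per path component, starting at the root directory. */
static FileId resolve(const char *pathname)
{
    char copy[256];
    strncpy(copy, pathname, sizeof copy - 1);
    copy[sizeof copy - 1] = '\0';

    FileId current = root_dir;
    for (char *c = strtok(copy, "/"); c != NULL && current >= 0; c = strtok(NULL, "/"))
        current = lookup(current, c);  /* iterative call, one per component */
    return current;
}

int main(void)
{
    printf("FileId of /usr/bin/tar: %ld\n", resolve("/usr/bin/tar"));
    return 0;
}
```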
  7. Case Study: Sun NFS
     • An industry standard for file sharing on local networks since the 1980s
     • An open standard with clear and simple interfaces
     • Closely follows the abstract file service model defined above
     • Supports many of the design requirements already mentioned:
       – transparency
       – heterogeneity
       – efficiency
       – fault tolerance
     • Limited achievement of:
       – concurrency
       – replication
       – consistency
       – security
  8. NFS architecture: does the implementation have to be in the system kernel?
     No:
     – there are examples of NFS clients and servers that run at application level, as libraries or processes (e.g. early Windows and MacOS implementations, current PocketPC, etc.)
     But, for a Unix implementation, there are advantages:
     – binary code compatibility: no need to recompile applications, because standard system calls that access remote files can be routed through the NFS client module by the kernel
     – a shared cache of recently used blocks at the client
     – a kernel-level server can access i-nodes and file blocks directly (although a privileged (root) application program could do almost the same)
     – security of the encryption key used for authentication
  9. NFS access control and authentication
     • Stateless server, so the user's identity and access rights must be checked by the server on each request
       – in the local file system they are checked only on open()
     • Every client request is accompanied by the userID and groupID
       – not shown in Figure 8.9 because they are inserted by the RPC system
     • Server is exposed to imposter attacks unless the userID and groupID are protected by encryption
     • Kerberos has been integrated with NFS to provide a stronger and more comprehensive security solution
       – Kerberos is described in Chapter 7; integration of NFS with Kerberos is covered later in this chapter
     (A simplified per-request permission check is sketched below.)
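A hedged C sketch of the per-request check described above, using simplified Unix-style permission bits; the struct layout, helper names and sample values are illustrative assumptions, not NFS source code.

```c
/* Sketch: a stateless server re-checks the caller's (uid, gid) on every request,
   rather than once at open() as a local kernel would. Unless these credentials
   are protected (e.g. by encryption or Kerberos), they can be forged. */
#include <stdbool.h>
#include <stdio.h>

typedef struct { unsigned uid, gid; } Credentials;                /* accompany every request */
typedef struct { unsigned owner_uid, owner_gid, mode; } FileAttr; /* rwxrwxrwx permission bits */

/* Unix-style permission check: owner, then group, then other read bits. */
bool may_read(Credentials c, FileAttr a)
{
    if (c.uid == a.owner_uid) return a.mode & 0400;   /* owner read bit */
    if (c.gid == a.owner_gid) return a.mode & 0040;   /* group read bit */
    return a.mode & 0004;                             /* other read bit */
}

int main(void)
{
    FileAttr f = { .owner_uid = 501, .owner_gid = 20, .mode = 0640 };
    Credentials owner = { 501, 20 }, other = { 777, 777 };

    /* The stateless server repeats this check on every read or write request. */
    printf("owner may read: %d\n", may_read(owner, f));
    printf("other may read: %d\n", may_read(other, f));
    return 0;
}
```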
  10. Local and remote file systems accessible on an NFS client (Figure 8.10)
      [The figure shows the directory trees of the client, Server 1 (containing /export/people) and Server 2 (containing /nfs/users), with two remote mounts into the client's name space.]
      Note: the file system mounted at /usr/students in the client is actually the sub-tree located at /export/people in Server 1; the file system mounted at /usr/staff in the client is actually the sub-tree located at /nfs/users in Server 2.
  11. NFS optimization: client caching
      • Server caching does nothing to reduce RPC traffic between client and server
        – further optimization is essential to reduce server load in large networks
        – the NFS client module caches the results of read, write, getattr, lookup and readdir operations
        – synchronization of file contents (one-copy semantics) is not guaranteed when two or more clients are sharing the same file
      • Timestamp-based validity check
        – reduces inconsistency, but doesn't eliminate it
        – validity condition for a cache entry at the client: (T - Tc < t) or (Tm_client = Tm_server), where t is the freshness guarantee, Tc the time when the cache entry was last validated, Tm the time when the block was last updated at the server, and T the current time (a sketch of this check follows below)
        – t is configurable (per file) but is typically set to 3 seconds for files and 30 seconds for directories
        – it remains difficult to write distributed applications that share files with NFS
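A C sketch of that validity condition; the CacheEntry layout and the get_server_mtime() stub are assumptions for illustration, standing in for the client module's actual state and its getattr RPC.

```c
/* Sketch of the timestamp-based validity check from the slide above:
   valid if (T - Tc < t) OR (Tm_client == Tm_server). */
#include <stdbool.h>
#include <time.h>

typedef struct {
    time_t Tc;         /* when this cache entry was last validated        */
    time_t Tm_client;  /* last-modification time at the server, as cached */
    double t;          /* freshness interval: ~3 s for files, ~30 s dirs  */
} CacheEntry;

/* Placeholder: the real client issues a getattr RPC to fetch Tm_server. */
static time_t get_server_mtime(void) { return 0; }

bool cache_entry_valid(CacheEntry *e)
{
    time_t T = time(NULL);

    /* Recently validated: trust the entry without contacting the server. */
    if (difftime(T, e->Tc) < e->t)
        return true;

    /* Otherwise ask the server; if the file is unchanged there, revalidate. */
    time_t Tm_server = get_server_mtime();
    if (e->Tm_client == Tm_server) {
        e->Tc = T;
        return true;
    }
    return false;      /* stale: refetch the data and update Tm_client/Tc */
}
```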
  12. NFS summary 1
      • An excellent example of a simple, robust, high-performance distributed service.
      • Achievement of transparencies (see Section 1.4.7):
        – Access: excellent; the API is the UNIX system call interface for both local and remote files.
        – Location: not guaranteed but normally achieved; naming of filesystems is controlled by client mount operations, but transparency can be ensured by an appropriate system configuration.
        – Concurrency: limited but adequate for most purposes; when read-write files are shared concurrently between clients, consistency is not perfect.
        – Replication: limited to read-only file systems; for writable files, the SUN Network Information Service (NIS) runs over NFS and is used to replicate essential system files (see Chapter 14).
      (cont'd)
  13. Recent advances in file services
      NFS enhancements:
      – WebNFS: the NFS server implements a web-like service on a well-known port. Requests use a 'public file handle' and a pathname-capable variant of lookup(). This enables applications to access NFS servers directly, e.g. to read a portion of a large file.
      – One-copy update semantics (Spritely NFS, NQNFS): include an open() operation and maintain tables of open files at servers, which are used to prevent multiple writers and to generate callbacks to clients notifying them of updates. Performance was improved by a reduction in getattr() traffic.
      Improvements in disk storage organisation:
      – RAID: improves performance and reliability by striping data redundantly across several disk drives.
      – Log-structured file storage: updated pages are stored contiguously in memory and committed to disk in large contiguous blocks (~1 Mbyte). File maps are modified whenever an update occurs. Garbage collection recovers disk space.
  14. New design approaches 2
      • Replicated read-write files
        – high availability
        – disconnected working: re-integration after disconnection is a major problem if conflicting updates have occurred
        – examples: the Bayou system (Section 14.4.2) and the Coda system (Section 14.4.3)