Post by Admin on Mar 14, 2014 2:15:06 GMT
Hadoop distributed file system:
HDFS is Hadoop's distributed file system, used to store very large amounts of data, from terabytes up to petabytes. HDFS supports high-throughput access to this large amount of information. In HDFS, files are split into blocks that are stored redundantly across multiple machines, and this guarantees the following: 1. Durability in the face of failures 2. High availability to highly parallel applications.
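As a rough sketch of how you interact with HDFS in practice, the `hdfs dfs` command-line client covers the basic operations. These commands assume a running Hadoop cluster; the path `/user/demo` and the file name are hypothetical examples, not anything from the original post:

```shell
# Create a directory in HDFS and copy a local file into it
# (paths are placeholders for illustration)
hdfs dfs -mkdir -p /user/demo
hdfs dfs -put localdata.txt /user/demo/

# List the directory and read the file back out of HDFS
hdfs dfs -ls /user/demo
hdfs dfs -cat /user/demo/localdata.txt
```

Behind these commands, HDFS transparently splits the file into blocks and replicates each block across several machines (three copies by default), which is what provides the durability and availability described above.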
Example: NFS (Network File System). NFS gives remote access to a single logical volume stored on a single machine. An NFS server can expose a portion of its local file system to external clients, and a client can mount this remote file system directly into its own Linux file system and interact with it as though it were part of the local drive.
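For comparison, mounting an NFS export on a Linux client typically looks like the following. This is a sketch assuming a reachable NFS server; the host name `nfs-server` and the export path `/export/data` are placeholder names:

```shell
# Mount the export /export/data from the host nfs-server
# (host and paths are hypothetical examples)
sudo mkdir -p /mnt/nfs
sudo mount -t nfs nfs-server:/export/data /mnt/nfs

# The remote files now appear as part of the local file system
ls /mnt/nfs
```

Note the contrast with HDFS: here the whole volume lives on one server, so there is a single point of failure and no built-in replication, which is exactly the limitation HDFS addresses.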