Post by Admin on Mar 13, 2014 8:55:10 GMT
HDFS is designed to support very large data sets and provides write-once/read-many semantics on files. In HDFS, data is split into blocks that are distributed across multiple data nodes in the cluster. Each block is typically 64 MB or 128 MB in size, and each block is replicated multiple times; the default replication factor is 3, with the replicas stored on different data nodes. HDFS uses the local file system of each data node to store every HDFS block as a separate file, so the HDFS block size cannot be compared directly with a traditional file system's block size: a file smaller than one block only occupies its actual size on disk, not a full block.
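For a concrete picture, here is a minimal sketch using Hadoop's Java FileSystem API; the file path, buffer size, and sizes are illustrative assumptions, and the create() overload shown simply lets you pass an explicit replication factor and block size for one file instead of relying on the cluster defaults.

// Sketch only: assumes hadoop-common/hadoop-hdfs on the classpath and an
// HDFS cluster reachable through fs.defaultFS in the client configuration.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsBlockExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Cluster-wide defaults normally live in hdfs-site.xml, e.g.
        // dfs.blocksize = 134217728 (128 MB) and dfs.replication = 3.
        FileSystem fs = FileSystem.get(conf);

        Path file = new Path("/tmp/block-demo.txt");  // hypothetical path
        short replication = 3;                        // default replication factor
        long blockSize = 128L * 1024 * 1024;          // 128 MB block size
        int bufferSize = 4096;

        // create(path, overwrite, bufferSize, replication, blockSize)
        try (FSDataOutputStream out =
                 fs.create(file, true, bufferSize, replication, blockSize)) {
            out.writeUTF("hello HDFS");               // write once, read many
        }

        // The block size is a per-file attribute recorded by the NameNode;
        // a file smaller than one block still uses only its actual size
        // on the data nodes' local file systems.
        System.out.println("block size: " + fs.getFileStatus(file).getBlockSize());
    }
}

Running this against a cluster and then checking the file with hdfs fsck would show its blocks and the data nodes holding each of the three replicas.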