How to contribute specific amount of storage as slave in Hadoop cluster | Task-4.1 | ARTH

So to contribute limited storage as a slave (DataNode) in a Hadoop cluster, we take the help of Linux partitioning.

With the help of Linux partitioning we create a partition of the required size & then mount it on the folder which we share with the NameNode.

So first open VirtualBox, then click on Settings of the slave node & then click on Storage.

Now follow the instructions given in the screenshots:-

To create a new hard disk:-

After this we only have to click on the “Next” button.

Here we can see that a new hard disk is created.

We can also confirm this through the Linux terminal (for example with fdisk -l or lsblk).

So we have created a hard disk; now we have to follow three steps:-

(A) Create a partition

(B) Format the created partition

(C) Mount the partition on the folder we share with the NameNode

Commands I use for these three steps:-

#Step (A): create a partition (interactive; inside fdisk press
#n, p, 1, accept the default first sector, enter +2G as the size, then w to write)
fdisk /dev/sdb
#Step (B): format the new partition (note: the partition /dev/sdb1, not the whole disk /dev/sdb)
mkfs.ext4 /dev/sdb1
#Step (C): create the mount point if it does not exist, then mount the partition
mkdir -p /data1
mount /dev/sdb1 /data1


Here we can see a partition of 2 GiB is created & mounted on the folder “/data1”.
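One caveat: the mount made with the mount command is temporary and disappears after a reboot, which would leave the DataNode without its storage directory. To make it persistent, an /etc/fstab entry is needed; a minimal sketch, assuming the partition is /dev/sdb1 and the mount point is /data1:

```shell
# fstab entry fields: device, mount point, filesystem, options, dump flag, fsck order
FSTAB_LINE="/dev/sdb1 /data1 ext4 defaults 0 0"

# Append the entry to /etc/fstab (requires root)
echo "$FSTAB_LINE" | sudo tee -a /etc/fstab

# Mount everything listed in fstab; an error here means the entry is wrong,
# so it is safer to catch it now than at the next boot
sudo mount -a
```

After this, the partition is remounted on /data1 automatically at every boot.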

Now it’s time to update the hdfs-site.xml file & start the DataNode service. After doing this we can see that 2 GB of storage is contributed by our DataNode to the NameNode.
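For reference, a minimal hdfs-site.xml sketch on the slave node, pointing the DataNode’s storage directory at the mounted folder (the dfs.datanode.data.dir property is the standard one; /data1 is the mount point created above):

```xml
<configuration>
  <!-- Directory where this DataNode stores HDFS blocks; pointing it at the
       2 GiB mount limits the storage this slave contributes to the cluster -->
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/data1</value>
  </property>
</configuration>
```

The contributed capacity can then be checked from the NameNode with the dfsadmin report command (hdfs dfsadmin -report, or hadoop dfsadmin -report on older versions).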


So we have learned how to contribute a limited amount of storage, but in Hadoop a need for dynamic storage arises, as we may have to increase or decrease the slave’s storage according to the requirement. For that we use LVM with Hadoop; I have explained this integration of LVM & Hadoop in the given blog.

!!Thanks For Reading!!


