How to contribute a specific amount of storage as a slave in a Hadoop cluster | Task-4.1 | ARTH

Gaurav Sharma
Nov 15, 2020

So, to contribute limited storage as a slave in a Hadoop cluster, we take the help of Linux partitioning.

Using Linux partitioning, we create a partition of the required size and then mount it on the folder that we share with the NameNode.

But before getting to this concept, let us learn how to add a new hard disk to the Linux slave node.

So first open VirtualBox, then click on Settings of the slave node, and then click on Storage.

Now follow the instructions given in the screenshots:-

To create a new hard disk:-

After this we only have to click on the “Next” button.

Here we can see that a new hard disk has been created.
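If you prefer the command line, the same disk can be created and attached with VBoxManage. A minimal sketch; the VM name “slave1”, the controller name “SATA”, and the 2048 MB size are assumptions, so adjust them to your setup:

#Create a new 2 GiB virtual disk in VDI format
VBoxManage createmedium disk --filename slave-disk.vdi --size 2048
#Attach it to the slave VM ("slave1" and "SATA" are assumed names;
#check yours with: VBoxManage showvminfo slave1)
VBoxManage storageattach slave1 --storagectl "SATA" --port 1 --device 0 --type hdd --medium slave-disk.vdi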

We can also confirm this through the Linux terminal.
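For example, listing the block devices should now show the new, still unpartitioned disk. Here it is assumed to appear as sdb; the device name depends on your setup:

#List all block devices; the new disk shows up as sdb with no partitions under it
lsblk
#Or print the partition table of every disk, including the new one
fdisk -l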

So we have created a hard disk; now we have to follow three steps:-

(A) Create a partition

(B) Format the created partition

(C) Mount the partition on the folder we share with the NameNode

Commands I use for these three steps:-

#To create a partition on the new disk
fdisk /dev/sdb
#To format the created partition (fdisk creates it as /dev/sdb1, so we format that, not the whole disk)
mkfs.ext4 /dev/sdb1
#To mount the partition (create the folder first if it does not exist)
mkdir -p /data1
mount /dev/sdb1 /data1
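Note that fdisk is interactive: after running it, we create the partition by answering its prompts. Roughly, the key sequence for a 2 GiB primary partition looks like this (the exact prompts vary with your fdisk version):

n        #new partition
p        #primary partition
1        #partition number
<Enter>  #accept the default first sector
+2G      #size of the partition
w        #write the partition table and exit

We can then verify the result with df -h /data1, which should show the mounted 2 GiB filesystem.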

Screenshots:-

Here we can see a partition of 2 GiB is created and mounted on the folder “/data1”.

Now it’s time to update the hdfs-site.xml file and start the DataNode service; after doing this we can see that 2 GB of storage is shared by our DataNode with the NameNode.
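For reference, the relevant property in hdfs-site.xml simply points the DataNode at the mounted folder. A minimal sketch, assuming Hadoop 1.x as used in the ARTH tasks (on Hadoop 2.x and later the property is named dfs.datanode.data.dir instead):

<property>
    <name>dfs.data.dir</name>
    <value>/data1</value>
</property>

After saving the file, the DataNode can be started from the slave, and the contributed capacity checked:

#Start the DataNode daemon on the slave
hadoop-daemon.sh start datanode
#Check the configured capacity reported to the NameNode
hadoop dfsadmin -report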

Screenshot:-

So we have learned how to contribute a limited amount of storage. But in Hadoop a need for dynamic storage also arises, as we may have to increase or decrease the storage contributed by a slave according to the requirement. For that we use LVM with Hadoop; I have explained this integration of LVM and Hadoop in a separate blog.
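As a quick preview of that idea, the basic LVM flow looks roughly like this. This is a sketch only; the volume group name hadoop_vg, the logical volume name datanode_lv, and the sizes are made-up examples:

#Turn the disk into a physical volume and pool it into a volume group
pvcreate /dev/sdb
vgcreate hadoop_vg /dev/sdb
#Carve out a logical volume, then format and mount it like a normal partition
lvcreate --size 2G --name datanode_lv hadoop_vg
mkfs.ext4 /dev/hadoop_vg/datanode_lv
mount /dev/hadoop_vg/datanode_lv /data1
#Later, grow it without unmounting the DataNode folder
lvextend --size +1G /dev/hadoop_vg/datanode_lv
resize2fs /dev/hadoop_vg/datanode_lv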

!!Thanks For Reading!!
