Providing Elasticity to DataNode Storage | LVM | Task 7.1 | ARTH
We already know how to create a Hadoop cluster in which a DataNode contributes a limited amount of storage to the NameNode. If not, read my article on this topic:
How to contribute specific amount of storage as slave in Hadoop cluster | Task-4.1 | ARTH
So, to contribute a limited amount of storage as a slave in a Hadoop cluster, we take the help of Linux partitions.
But in the real world, the need to increase or decrease the size of a slave node arises many times. So what we really need is elastic storage, which we can grow or shrink according to our requirements.
For this type of use case we use LVM (Logical Volume Management), which lets us increase the size of a volume dynamically by adding capacity from another drive. With LVM we create a Volume Group to which drives, or physical volumes, contribute their storage. This group is not a physical device, which is why the volumes carved out of it are called logical volumes.
Coming back to the task: first we have to attach a hard disk. I have already performed this step in the blog shared above.
After this we have to convert the hard disk into a physical volume (PV).
#To convert the hard disk into a PV
pvcreate /dev/sdb

#To display the PV
pvdisplay /dev/sdb
Now, after creating the PVs, we have to create a single Volume Group (VG) out of them, from which our LVs will get their space.
#To create a Volume Group
vgcreate gsgroup /dev/sdb

#To display the Volume Group
vgdisplay gsgroup
After creating the VG, we create a Logical Volume from it.
#To create a Logical Volume
lvcreate --size 10G --name gglv1 gsgroup

#To display the Logical Volume
lvdisplay gsgroup/gglv1
Here we can see that an LV of 10 GB has been created. Now we have to format this LV and then mount it.
#To format the LV
mkfs.ext4 /dev/gsgroup/gglv1

#To create the mount point and mount the LV on it
mkdir /nnlvm
mount /dev/gsgroup/gglv1 /nnlvm
After mounting the LV on the /nnlvm directory, we update the hdfs-site.xml file and start the slave (DataNode) services.
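The DataNode directory property might look like this (a minimal sketch; /nnlvm is the mount point from this setup, and note that in Hadoop 1.x the property is named dfs.data.dir, while in later versions it is dfs.datanode.data.dir):

```xml
<!-- hdfs-site.xml on the slave/DataNode -->
<configuration>
  <property>
    <!-- use dfs.data.dir instead on Hadoop 1.x -->
    <name>dfs.datanode.data.dir</name>
    <value>/nnlvm</value>
  </property>
</configuration>
```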
Here we can see a DataNode contributing around 10 GB of storage.
Now let us try to increase the size of the DataNode — in other words, let's extend the size of the LV. This takes two simple steps: first we add storage to the LV, and second we resize the filesystem so it covers the added part.
#To increase the size of the LV
lvextend --size +4G /dev/gsgroup/gglv1

#To resize the filesystem over the added part of the LV
resize2fs /dev/gsgroup/gglv1
Here we have increased the LV by 4 GB; we can also confirm this from the Hadoop WebUI.
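As a side note, the extend-then-resize pair can be combined into a single command with lvextend's --resizefs (-r) flag, which runs the filesystem resize for us automatically. A sketch of this alternative, assuming the same LV as above:

```shell
# Extend the LV by 4G and grow the ext4 filesystem in the same step (needs root)
lvextend --resizefs --size +4G /dev/gsgroup/gglv1
```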
Now let’s try to reduce the size of this LV.
Reducing an LV is not as simple as extending it; we have to follow five steps:
First, we have to unmount the partition, and before unmounting it we have to stop the Hadoop slave (DataNode) services.
Second, we clean/scan the filesystem; for this step we use the “e2fsck” command.
Third, we resize the filesystem to the target size; here we use the “resize2fs” command. On an ext filesystem, “resize2fs” works for both growing and shrinking.
Fourth is the main step, the “lvreduce” command. Here we can give the size to be reduced with a “-” sign, or give the final size we want without any sign.
The last step is to mount the partition again.
#Unmount the LV (after stopping the DataNode services)
umount /nnlvm

#Scan/clean the filesystem
e2fsck -f /dev/gsgroup/gglv1

#Shrink the filesystem to 6G
resize2fs /dev/gsgroup/gglv1 6G

#Reduce the LV to 6G
lvreduce --size 6G /dev/gsgroup/gglv1

#Mount it again
mount /dev/gsgroup/gglv1 /nnlvm
Here we can see the DataNode now contributing 6 GiB of storage.
Finally, we have learnt about LVM and its integration with Hadoop. LVM and Hadoop are separate technologies, but integrating them gives us a very useful result: elastic DataNode storage.
Thanks For Reading The Article
Hope you have Learned Something From this Article