I installed Hadoop on an Azure VM, and it works fine using the OS disk. However, I have attached an additional hard disk to the VM, and I want to configure Hadoop to use only this new disk as its default storage. Can anyone tell me how to change the configuration?
Any help would be appreciated. Thank you.
Specifically, the Datanode needs the dfs.datanode.data.dir configuration property set to the path of each mount point or directory it is allowed to write data to. This property takes a comma-separated list of values.
For example:
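A minimal sketch of the property as it would appear in hdfs-site.xml, assuming the disks are mounted at the hypothetical paths /mnt/disk1, /mnt/disk2, and /mnt/disk3:

<property>
  <name>dfs.datanode.data.dir</name>
  <!-- hypothetical mount points; replace with your actual disk paths -->
  <value>/mnt/disk1/hdfs/data,/mnt/disk2/hdfs/data,/mnt/disk3/hdfs/data</value>
</property>

To use only the newly attached disk, the value would be a single path on that disk. After changing this property, restart the Datanode so it picks up the new directories.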
This defines three different directories that the Datanode can write blocks into.
The problem is solved; I had to edit hdfs-site.xml. Thanks, everyone.