Problem
Starting Hadoop fails with the error "Failed to add storage directory":
```
2023-11-26 12:02:06,840 WARN common.Storage: Failed to add storage directory [DISK]file:xxx
java.io.IOException: Incompatible clusterIDs in xxx/dfs/data: namenode clusterID = CID-xxxxxx; datanode clusterID = CID-xxxxxxx
at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:722)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.loadStorageDirectory(DataStorage.java:286)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.loadDataStorage(DataStorage.java:399)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.addStorageLocations(DataStorage.java:379)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:544)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1690)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1650)
at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:376)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:280)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:816)
at java.lang.Thread.run(Thread.java:748)
2023-11-26 12:02:06,851 ERROR datanode.DataNode: Initialization failed for Block pool <registering> (Datanode Uuid xxxxxxx-xxxxx-x-xxxxxxx) service to /0.0.0.0:19000. Exiting.
java.io.IOException: All specified directories have failed to load.
```
Cause
The clusterID recorded in the NameNode's VERSION file differs from the one in the DataNode's VERSION file.
This happened because I reformatted the NameNode a second time with the command below, without first deleting the files under the DataNode directory:
```bash
hadoop namenode -format
```
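You can confirm the mismatch by comparing the `clusterID=` line in the two VERSION files. A minimal sketch, assuming the usual `current/VERSION` layout; `check_clusterid` is a hypothetical helper, not part of Hadoop, and the paths in the comment are examples to replace with your own `dfs.namenode.name.dir` / `dfs.data.dir` values:

```shell
# Hypothetical helper: compare the clusterID field of two VERSION files.
check_clusterid() {
    nn_id=$(grep '^clusterID=' "$1" | cut -d= -f2)   # NameNode's clusterID
    dn_id=$(grep '^clusterID=' "$2" | cut -d= -f2)   # DataNode's clusterID
    if [ "$nn_id" = "$dn_id" ]; then
        echo "match: $nn_id"
    else
        echo "mismatch: namenode=$nn_id datanode=$dn_id"
    fi
}

# Example invocation (substitute your own storage directories):
# check_clusterid /path/to/dfs/name/current/VERSION /path/to/dfs/data/current/VERSION
```

On a cluster hitting this error, the function would print the `mismatch:` line with the two different CID values from the log above.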
Solution
Before formatting the NameNode, delete all files under the directory specified by `dfs.data.dir`. So here, simply delete those files and re-run the format command above.
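The cleanup step can be sketched as below. `clean_datanode_dir` is a hypothetical helper; the directory you pass it is whatever `dfs.data.dir` points to in your `hdfs-site.xml`. Note that this destroys all block replicas stored on that node:

```shell
# Hypothetical helper: wipe the DataNode storage directory so the next
# NameNode format produces a single, consistent clusterID.
clean_datanode_dir() {
    data_dir=$1
    [ -d "$data_dir" ] || { echo "no such directory: $data_dir" >&2; return 1; }
    rm -rf "$data_dir"/*          # removes current/VERSION and all block data
    echo "cleared $data_dir"
}

# After clearing, reformat and restart HDFS:
#   clean_datanode_dir /path/to/dfs/data   # example path
#   hadoop namenode -format
#   start-dfs.sh
```

Stop HDFS before running the cleanup so no daemon is holding the files open.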