2024-01-09 16:53:43,241 INFO org.apache.hadoop.util.GSet: capacity = 2^17 = 131072 entries
2024-01-09 16:53:43,260 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /tmp/hadoop-root/dfs/name/in_use.lock acquired by nodename 1879@saas-dev04
2024-01-09 16:53:43,262 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Encountered exception loading fsimage
java.io.IOException: NameNode is not formatted.
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:252)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1139)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:750)
at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:658)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:734)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:977)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:950)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1723)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1790)
-------AI Answer-------
The log snippet indicates an issue with the Hadoop NameNode: it is reporting that the NameNode is not formatted. The key error message is:
---------Code (java)---------
java.io.IOException: NameNode is not formatted.
This typically occurs when the Hadoop NameNode metadata is not properly initialized or formatted.
To resolve this issue, you need to format the Hadoop NameNode. Formatting the NameNode involves initializing the directory that holds the file system metadata. However, be cautious as formatting the NameNode will result in the loss of all data stored in HDFS. Here are the general steps to format the NameNode:
Stop the Hadoop services: Make sure Hadoop services are stopped before formatting the NameNode. You can use the following command:
---------Code (bash)---------
sbin/stop-dfs.sh
Format the NameNode: Use the following command to format the NameNode:
---------Code (bash)---------
bin/hdfs namenode -format
Start the Hadoop services: After formatting the NameNode, start the Hadoop services again:
---------Code (bash)---------
sbin/start-dfs.sh
Make sure to check the Hadoop documentation or any specific instructions provided for your Hadoop version to ensure compatibility and accuracy. Additionally, take appropriate backups before formatting the NameNode to avoid data loss.
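As a hedged illustration of that backup advice: the path below is the default /tmp/hadoop-root layout seen in the log above; if dfs.namenode.name.dir points elsewhere in your hdfs-site.xml, archive that directory instead.
---------Code (bash)---------
# Archive the NameNode metadata directory before running the format command
# (path taken from the log above; adjust if dfs.namenode.name.dir differs).
tar -czf ~/namenode-meta-$(date +%F).tar.gz -C /tmp/hadoop-root/dfs name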
2024-01-09 17:00:25,356 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /tmp/hadoop-root/dfs/data/in_use.lock acquired by nodename 6890@saas-dev04
2024-01-09 17:00:25,357 WARN org.apache.hadoop.hdfs.server.common.Storage: Failed to add storage directory [DISK]file:/tmp/hadoop-root/dfs/data
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /tmp/hadoop-root/dfs/data is in an inconsistent state: Can't format the storage directory because the current directory is not empty.
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.checkEmptyCurrent(Storage.java:622)
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:735)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.loadStorageDirectory(DataStorage.java:272)
-------AI Answer-------
The log snippet indicates an issue with a DataNode's storage directory. The key error message is:
---------Code (java)---------
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /tmp/hadoop-root/dfs/data is in an inconsistent state: Can't format the storage directory because the current directory is not empty.
This error suggests that the specified storage directory is not empty, and the DataNode is unable to format it because of the inconsistency.
To resolve this issue, you can follow these steps:
Stop the Hadoop services: Make sure Hadoop services are stopped before attempting to fix the issue. You can use the following command:
---------Code (bash)---------
sbin/stop-dfs.sh
Clean up the inconsistent storage directory: Manually clean up the contents of the problematic storage directory, in this case /tmp/hadoop-root/dfs/data. You may want to move the existing contents to a backup location, or delete them if you can afford to lose the data (a sketch follows these steps).
Start the Hadoop services: After cleaning up the storage directory, start the Hadoop services again:
---------Code (bash)---------
sbin/start-dfs.sh
By following these steps, you should be able to resolve the inconsistency in the storage directory. Keep in mind that manually cleaning up directories involves potential data loss, so make sure to assess the situation and back up any important data if needed. Additionally, it's a good practice to investigate why the directory became inconsistent to prevent similar issues in the future.
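A minimal sketch of the cleanup in step 2, using the /tmp/hadoop-root/dfs/data path from the log; moving the directory aside preserves the old blocks in case they are needed later:
---------Code (bash)---------
# Move the inconsistent DataNode directory aside instead of deleting it;
# HDFS recreates the directory on the next start.
mv /tmp/hadoop-root/dfs/data /tmp/hadoop-root/dfs/data.bak-$(date +%F)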
Running an HBase create command reports an error: create 'zbwdemo', 'info', 'data'
--------------------------------
org.apache.hadoop.hbase.PleaseHoldException: Master is initializing
-------AI Answer-------
The error message "Master is initializing" indicates that the HBase master server is currently in the process of initializing. During this initialization phase, certain operations, such as creating a new table, might be temporarily restricted.
To resolve this issue, you can wait for the HBase master to finish initializing before attempting to create the table. You can check the status of the master initialization by looking at the HBase master logs or using the HBase shell. Once the master initialization is complete, you should be able to create the table without encountering the "Master is initializing" error.
Here are some steps you can take:
Check HBase Master Logs:
Examine the HBase master logs for messages indicating initialization progress (see the log-tail sketch after these steps). Once initialization completes, the logs should show that the master is ready to accept requests.
Wait and Retry:
Wait for a few moments and then retry the table creation command in the HBase shell. The initialization process should complete relatively quickly, and you should be able to perform table operations once it's finished.
---------Code (hbase shell)---------
create 'zbwdemo', 'info', 'data'
Check HBase Status:
You can also use the HBase shell to check the status of the master:
---------Code (hbase shell)---------
status
This command will provide information about the current status of the HBase cluster, including the master's state.
If the issue persists or if you encounter any specific error messages in the logs indicating a different problem, further investigation may be needed. Ensure that your HBase cluster is properly configured and that all services are running as expected.
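For the log check in step 1 above, a hedged sketch: the master log file name and location vary by install, and the paths below assume a tarball install with default logging.
---------Code (bash)---------
# Follow the master log and watch initialization progress; the exact file
# name depends on the user and hostname running the master.
tail -f logs/hbase-*-master-*.log
# Or pull the most recent initialization / meta-related lines:
grep -iE "initializ|hbase:meta" logs/hbase-*-master-*.log | tail -n 20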
2024-01-09 18:28:20,585 WARN [master/saas-dev04:16000:becomeActiveMaster] master.HMaster: hbase:meta,,1.1588230740 is NOT online; state={1588230740 state=OPEN, ts=1704796064667, server=saas-dev04,16020,1702630918165}; ServerCrashProcedures=true. Master startup cannot progress, in holding-pattern until region onlined.
-------AI Answer-------
The warning message you provided indicates that the HBase Master is unable to make progress because the region hbase:meta,,1.1588230740 is not online. This can happen during the HBase startup process when the master is trying to bring regions online.
Here are some steps you can take to address this issue:
Check Region Server Status:
Ensure that the Region Server named in the warning message (saas-dev04,16020,1702630918165) is running and healthy; check its logs for errors (a quick sanity-check sketch appears after these steps).
Check hbase:meta Region:
The hbase:meta table is critical for HBase's operation, as it stores information about where other regions are located. Ensure that the hbase:meta region is assigned to a Region Server and is online.
Check Region Assignment:
Use the HBase shell or the HBase web UI to check the status of region assignments. You can use the following command in the HBase shell:
---------Code (hbase shell)---------
status 'detailed'
Look for the hbase:meta region and verify its assignment. If it is not assigned or is in transition, there may be an issue with the region assignment.
Check Region Server Logs:
Examine the logs of the Region Server (saas-dev04,16020,1702630918165) for any errors or issues related to the hbase:meta region or the region in question.
Manually Assign Region (if necessary):
If the hbase:meta region is stuck in transition, you may need to manually assign it. Use the HBase shell to unassign and then assign the region:
---------Code (hbase shell)---------
unassign 'hbase:meta,,1.1588230740'
Wait for the unassignment to complete, and then assign the region:
---------Code (hbase shell)---------
assign 'hbase:meta,,1.1588230740'
Monitor the status to ensure that the region becomes online.
Check HBase Configuration:
Ensure that your HBase configuration is correct, including settings related to ZooKeeper, HDFS, and other relevant parameters.
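For that configuration check, one quick way to eyeball the settings in question; a sketch assuming the stock property names and a standard conf/ layout:
---------Code (bash)---------
# Print the ZooKeeper and root-directory settings HBase is configured with.
grep -A1 -E "hbase.zookeeper.quorum|hbase.rootdir|zookeeper.znode.parent" conf/hbase-site.xml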
If the issue persists, you may need to provide more context or check additional logs to identify the root cause of the problem. Additionally, consulting the HBase documentation or community forums for specific versions and configurations can provide more insights into common issues and their resolutions.
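For the region-server check in step 1 above, a quick sanity check; a hedged sketch assuming a tarball install with default log locations:
---------Code (bash)---------
# On saas-dev04: confirm the HRegionServer process is up (jps ships with
# the JDK)...
jps | grep HRegionServer
# ...and scan its recent log lines for errors; the exact file name varies
# by user and hostname.
tail -n 100 logs/hbase-*-regionserver-*.log | grep -iE "error|exception"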
Root cause: the HBase state stored in ZooKeeper had become corrupted. Running deleteall /hbase in the ZooKeeper CLI resolved it.
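A minimal sketch of that fix, assuming an external ZooKeeper ensemble that stays up while HBase is stopped and the default znode parent of /hbase (zookeeper.znode.parent). This wipes HBase's coordination state in ZooKeeper (e.g., replication queues), so treat it as a last resort; HBase rebuilds the znode tree on the next start.
---------Code (bash)---------
# Stop HBase before touching its ZooKeeper state.
bin/stop-hbase.sh
# Connect to ZooKeeper (adjust host:port to your ensemble):
zkCli.sh -server localhost:2181
#   deleteall /hbase     # inside zkCli; ZooKeeper 3.5+ (older releases: rmr /hbase)
#   quit
# Restart HBase; it recreates the /hbase znode tree.
bin/start-hbase.sh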