HDP cluster + journal nodes getting out of sync

We are running an HDP cluster, version 2.6.5.

When we look at the name-node logs, we see the following warnings:

2023-02-20 15:56:37,731 INFO  namenode.FileJournalManager (FileJournalManager.java:finalizeLogSegment(142)) - Finalizing edits file /hadoop/hdfs/journal/hdfsha/current/edits_inprogress_0000000193594484455 -> /hadoop/hdfs/journal/hdfsha/current/edits_0000000193594484455-0000000193594600017
2023-02-20 15:58:31,377 WARN  server.Journal (Journal.java:journal(398)) - Sync of transaction range 193594757835-193594757835 took 1498ms
2023-02-20 15:58:40,617 INFO  namenode.FileJournalManager (FileJournalManager.java:finalizeLogSegment(142)) - Finalizing edits file /hadoop/hdfs/journal/hdfsha/current/edits_inprogress_0000000193594600018 -> /hadoop/hdfs/journal/hdfsha/current/edits_0000000193594600018-0000000193594769398
2023-02-20 16:00:39,037 WARN  server.Journal (Journal.java:journal(398)) - Sync of transaction range 193594895192-193594895192 took 1371ms
2023-02-20 16:00:42,839 INFO  namenode.FileJournalManager (FileJournalManager.java:finalizeLogSegment(142)) - Finalizing edits file /hadoop/hdfs/journal/hdfsha/current/edits_inprogress_0000000193594769399 -> /hadoop/hdfs/journal/hdfsha/current/edits_0000000193594769399-0000000193594899457
2023-02-20 16:01:43,962 WARN  server.Journal (Journal.java:journal(398)) - Sync of transaction range 193594954980-193594954980 took 1329ms
2023-02-20 16:02:44,799 INFO  namenode.FileJournalManager (FileJournalManager.java:finalizeLogSegment(142)) - Finalizing edits file /hadoop/hdfs/journal/hdfsha/current/edits_inprogress_0000000193594899458 -> /hadoop/hdfs/journal/hdfsha/current/edits_0000000193594899458-0000000193595017147
2023-02-20 16:02:47,129 WARN  server.Journal (Journal.java:journal(398)) - Sync of transaction range 193595018764-193595018764 took 1321ms
2023-02-20 16:03:52,763 WARN  server.Journal (Journal.java:journal(398)) - Sync of transaction range 193595106645-193595106646 took 1344ms
2023-02-20 16:04:46,965 INFO  namenode.FileJournalManager (FileJournalManager.java:finalizeLogSegment(142)) - Finalizing edits file /hadoop/hdfs/journal/hdfsha/current/edits_inprogress_0000000193595017148 -> /hadoop/hdfs/journal/hdfsha/current/edits_0000000193595017148-0000000193595169050
2023-02-20 16:04:56,276 WARN  server.Journal (Journal.java:journal(398)) - Sync of transaction range 193595175233-193595175233 took 1678ms
2023-02-20 16:06:01,067 WARN  server.Journal (Journal.java:journal(398)) - Sync of transaction range 193595252052-193595252052 took 1265ms
2023-02-20 16:07:06,447 WARN  server.Journal (Journal.java:journal(398)) - Sync of transaction range 193595320796-193595320796 took 1273ms

In our HDP cluster, the HDFS service consists of 2 name-node services and 3 journal-nodes. The cluster comprises 736 data-node machines, and the HDFS service manages all of the data-nodes.

We would like to understand what causes the following warning:

 server.Journal (Journal.java:journal(398)) - Sync of transaction range 193595018764-193595018764 took 1321ms

and how we can avoid these messages through proactive measures.

From what we have found so far, the following resolution may apply:

http://www.hadoopadmin.co.in/hdfs/standby-namenode-is-faling-and-only-one-is-running/

RESOLUTION:
Increase the values of following JournalNode timeout properties:
dfs.qjournal.select-input-streams.timeout.ms = 60000 
dfs.qjournal.start-segment.timeout.ms = 60000 
dfs.qjournal.write-txns.timeout.ms = 60000
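
For reference, a minimal sketch of how those properties could be set in hdfs-site.xml, using the 60000 ms values from the linked resolution (the Hadoop defaults are lower); on an Ambari-managed HDP 2.6.5 cluster they would normally be added as custom hdfs-site properties rather than edited by hand:

<property>
  <name>dfs.qjournal.select-input-streams.timeout.ms</name>
  <value>60000</value>
</property>
<property>
  <name>dfs.qjournal.start-segment.timeout.ms</name>
  <value>60000</value>
</property>
<property>
  <name>dfs.qjournal.write-txns.timeout.ms</name>
  <value>60000</value>
</property>

These timeouts are read by the NameNode's quorum journal client, so the NameNodes (and, to be safe, the JournalNodes) have to be restarted for the change to take effect.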
