Inconsistent ClickHouse database query performance

Hi all,

ClickHouse server version: 22.8.21 (I have tried various 22.x and 23.x releases, using clickhouse-backup to restore my data for testing; none of them seemed to improve query performance).

I am seeing some very strange behaviour from our clickhouse-server here (both on a dedicated machine and in a Docker image loaded with the same data).

Sometimes queries take ~50 ms to complete, while others take in excess of 250 s (and if you run one of those slow queries often enough, it will by all appearances populate some cache and get down to ~50 ms).
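
For reference, the fast/slow split shows up in system.query_log (enabled by default); a minimal sketch along these lines, where the LIKE filter is purely illustrative, lists the duration and read volume of recent runs:

SELECT
    event_time,
    query_duration_ms,
    read_rows,
    formatReadableSize(read_bytes) AS read
FROM system.query_log
WHERE type = 'QueryFinish'
  AND query LIKE '%cdrs_processed%CorrelationID%'
ORDER BY event_time DESC
LIMIT 20;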

The dataset sits in a table with a simple index (the queries do not run against the primary index, so they scan the entire 3.5 GB dataset).


CREATE TABLE `cdrs_processed` (
    `Id` UInt64,
    `Direction` String,
    `Time` DateTime('UTC'),
    `CallID` String,
    `CorrelationID` String,
... <snip> ...
)  
ENGINE = ReplacingMergeTree(Id)  
PARTITION BY toYYYYMM(Time)  
ORDER BY (Id,Time)  
PRIMARY KEY Id  
SETTINGS index_granularity = 8192  
COMMENT 'Processed and enriched CDR records';  
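
For scale, the on-disk size and active part count of that table can be checked with something like the following (system.parts is a standard system table; the nextvoice database name is taken from the trace logs further down):

SELECT
    count() AS active_parts,
    sum(rows) AS total_rows,
    formatReadableSize(sum(data_compressed_bytes)) AS compressed,
    formatReadableSize(sum(data_uncompressed_bytes)) AS uncompressed
FROM system.parts
WHERE database = 'nextvoice' AND table = 'cdrs_processed' AND active;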

The query being run looks like this:

SELECT * FROM cdrs_processed WHERE CorrelationID='<some guid>' ORDER BY Id;

The interesting bits here are:

  1. Running a PCAP across the whole of the traffic: when a performant query executes (i.e. the records come back in ~50 ms), the server responds with the column headers first and then the data. When a CorrelationID value is selected that results in a slow read, the PCAP traffic shows the connection and the query, then a whole bunch of TCP KeepAlive packets, and only then (if it has not timed out) does it send the column headers and the data.

  2. The deployed clickhouse-server software runs on an AWS EC2 r5a.xlarge node, receives minimal queries (1-2 per second), and for some bizarre reason it spins all 4 vCPUs of the EC2 instance at 100%. This server has the same query performance problems as when we take the data from a backup and restore it to a local Docker image running clickhouse-server on a developer's system.

  3. When debugging the queries, we use clickhouse-client and run set send_logs_level='trace'; before executing the query. If it is a performant query, we get the interesting logs:

[2f7418aeaf9c] 2023.09.06 05:28:27.307306 [ 287 ] {5c35e3ba-15d3-494e-991b-9a93674ae9af} <Debug> executeQuery: (from 172.28.0.1:47140) SELECT * FROM nextvoice.cdrs_processed WHERE CorrelationID='E3065F87-3CB9-4C58-A12D-66A686B8C53D'; (stage: Complete)
[2f7418aeaf9c] 2023.09.06 05:28:27.322028 [ 287 ] {5c35e3ba-15d3-494e-991b-9a93674ae9af} <Debug> InterpreterSelectQuery: MergeTreeWhereOptimizer: condition "CorrelationID = 'E3065F87-3CB9-4C58-A12D-66A686B8C53D'" moved to PREWHERE
[2f7418aeaf9c] 2023.09.06 05:28:27.328348 [ 287 ] {5c35e3ba-15d3-494e-991b-9a93674ae9af} <Debug> InterpreterSelectQuery: MergeTreeWhereOptimizer: condition "CorrelationID = 'E3065F87-3CB9-4C58-A12D-66A686B8C53D'" moved to PREWHERE
[2f7418aeaf9c] 2023.09.06 05:28:27.331671 [ 287 ] {5c35e3ba-15d3-494e-991b-9a93674ae9af} <Trace> ContextAccess (default): Access granted: SELECT(Id, Direction, Time, CallID, .... <snip>

All well and good. If it is a slow query, though, it gets interesting. The first of those log lines gets printed with its timestamp as expected; the second and subsequent lines are not printed until the query completes, yet they carry timestamps as if they had been issued at the correct time. I.e.

This line gets printed:

[2f7418aeaf9c] 2023.09.06 05:30:06.838300 [ 287 ] {f9d77d15-8b7b-42d7-a344-eb0ad09ee690} <Debug> executeQuery: (from 172.28.0.1:47140) SELECT * FROM nextvoice.cdrs_processed WHERE CorrelationID='03FBC351-48A1-4E3A-8257-AA97ED75F7A5'; (stage: Complete)

Then we get nothing until 2023.09.06 05:31:35.309735, at which point all the remaining trace lines get printed out, as shown below. (I took a video of this happening.)

[2f7418aeaf9c] 2023.09.06 05:30:06.860733 [ 287 ] {f9d77d15-8b7b-42d7-a344-eb0ad09ee690} <Trace> InterpreterSelectQuery: FetchColumns -> Complete
[2f7418aeaf9c] 2023.09.06 05:30:06.864526 [ 287 ] {f9d77d15-8b7b-42d7-a344-eb0ad09ee690} <Debug> nextvoice.cdrs_processed (c8be24d5-22dd-4344-a9bf-10db9eabc46d) (SelectExecutor): Key condition: unknown
[2f7418aeaf9c] 2023.09.06 05:30:06.865117 [ 287 ] {f9d77d15-8b7b-42d7-a344-eb0ad09ee690} <Debug> nextvoice.cdrs_processed (c8be24d5-22dd-4344-a9bf-10db9eabc46d) (SelectExecutor): MinMax index condition: unknown
[2f7418aeaf9c] 2023.09.06 05:30:06.880895 [ 287 ] {f9d77d15-8b7b-42d7-a344-eb0ad09ee690} <Debug> nextvoice.cdrs_processed (c8be24d5-22dd-4344-a9bf-10db9eabc46d) (SelectExecutor): Selected 62/62 parts by partition key, 62 parts by primary key, 1873/1873 marks by primary key, 1873 marks to read from 62 ranges
[2f7418aeaf9c] 2023.09.06 05:31:35.309735 [ 287 ] {f9d77d15-8b7b-42d7-a344-eb0ad09ee690} <Debug> nextvoice.cdrs_processed (c8be24d5-22dd-4344-a9bf-10db9eabc46d) (SelectExecutor): Reading approx. 3852688 rows with 4 streams
  4. Next, server0 has been quiet for the past 6+ hours with no queries of any kind; the only thing sent to it is Prometheus metrics scrapes. Yet the trace logs keep scrolling past:
2023.09.06 10:51:06.000598 [ 317 ] {b09755a5-5004-4d4e-adef-82f4b86b4962::202309_75514_76921_1106} <Debug> MergeTask::PrepareStage: Merging 2 parts: from 202309_75514_76920_1105 to 202309_76921_76921_0 into Wide
2023.09.06 10:51:06.000762 [ 317 ] {b09755a5-5004-4d4e-adef-82f4b86b4962::202309_75514_76921_1106} <Debug> MergeTask::PrepareStage: Selected MergeAlgorithm: Horizontal
2023.09.06 10:51:06.000789 [ 317 ] {b09755a5-5004-4d4e-adef-82f4b86b4962::202309_75514_76921_1106} <Debug> MergeTreeSequentialSource: Reading 10 marks from part 202309_75514_76920_1105, total 70966 rows starting from the beginning of the part
2023.09.06 10:51:06.000924 [ 317 ] {b09755a5-5004-4d4e-adef-82f4b86b4962::202309_75514_76921_1106} <Debug> MergeTreeSequentialSource: Reading 2 marks from part 202309_76921_76921_0, total 34 rows starting from the beginning of the part
2023.09.06 10:51:06.029025 [ 317 ] {b09755a5-5004-4d4e-adef-82f4b86b4962::202309_75514_76921_1106} <Debug> MergeTask::MergeProjectionsStage: Merge sorted 71000 rows, containing 10 columns (10 merged, 0 gathered) in 0.028416545 sec., 2498544.4219203987 rows/sec., 805.83 MiB/sec.
2023.09.06 10:51:06.030461 [ 317 ] {b09755a5-5004-4d4e-adef-82f4b86b4962::202309_75514_76921_1106} <Trace> MergedBlockOutputStream: filled checksums 202309_75514_76921_1106 (state Temporary)
2023.09.06 10:51:06.030766 [ 317 ] {b09755a5-5004-4d4e-adef-82f4b86b4962::202309_75514_76921_1106} <Trace> system.trace_log (b09755a5-5004-4d4e-adef-82f4b86b4962): Renaming temporary part tmp_merge_202309_75514_76921_1106 to 202309_75514_76921_1106.
2023.09.06 10:51:06.030821 [ 317 ] {b09755a5-5004-4d4e-adef-82f4b86b4962::202309_75514_76921_1106} <Trace> system.trace_log (b09755a5-5004-4d4e-adef-82f4b86b4962) (MergerMutator): Merged 2 parts: from 202309_75514_76920_1105 to 202309_76921_76921_0
2023.09.06 10:51:06.030869 [ 317 ] {} <Debug> MemoryTracker: Peak memory usage to apply mutate/merge in b09755a5-5004-4d4e-adef-82f4b86b4962::202309_75514_76921_1106: 43.34 MiB.
2023.09.06 10:51:07.000655 [ 335 ] {} <Trace> AsynchronousMetrics: MemoryTracking: was 804.53 MiB, peak 3.11 GiB, will set to 807.54 MiB (RSS), difference: 3.01 MiB
2023.09.06 10:51:07.593604 [ 349 ] {} <Trace> SystemLog (system.asynchronous_metric_log): Flushing system log, 2030 entries to flush up to offset 5643012
2023.09.06 10:51:07.594682 [ 349 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 41.29 GiB.
2023.09.06 10:51:07.595125 [ 349 ] {} <Trace> MergedBlockOutputStream: filled checksums 202309_2778_2778_0 (state Temporary)
2023.09.06 10:51:07.595304 [ 349 ] {} <Trace> system.asynchronous_metric_log (65e48ced-63b1-49f3-b373-fe52139c8fd6): Renaming temporary part tmp_insert_202309_2778_2778_0 to 202309_64623_64623_0.
2023.09.06 10:51:07.595444 [ 349 ] {} <Trace> SystemLog (system.asynchronous_metric_log): Flushed system log up to offset 5643012
2023.09.06 10:51:07.769264 [ 327 ] {} <Trace> system.trace_log (b09755a5-5004-4d4e-adef-82f4b86b4962): Found 2 old parts to remove.
2023.09.06 10:51:07.769291 [ 327 ] {} <Debug> system.trace_log (b09755a5-5004-4d4e-adef-82f4b86b4962): Removing part from filesystem 202309_75514_76856_1041
2023.09.06 10:51:07.769959 [ 327 ] {} <Debug> system.trace_log (b09755a5-5004-4d4e-adef-82f4b86b4962): Removing part from filesystem 202309_76857_76857_0
2023.09.06 10:51:10.302387 [ 475 ] {} <Debug> DNSResolver: Updating DNS cache
2023.09.06 10:51:10.302543 [ 475 ] {} <Debug> DNSResolver: Updated DNS cache
2023.09.06 10:51:10.924813 [ 350 ] {} <Trace> SystemLog (system.metric_log): Flushing system log, 7 entries to flush up to offset 19458
2023.09.06 10:51:10.932845 [ 350 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 41.29 GiB.
2023.09.06 10:51:10.935552 [ 350 ] {} <Trace> MergedBlockOutputStream: filled checksums 202309_2590_2590_0 (state Temporary)
2023.09.06 10:51:10.935969 [ 350 ] {} <Trace> system.metric_log (8c45d18c-4f03-43de-9848-28490ac69588): Renaming temporary part tmp_insert_202309_2590_2590_0 to 202309_60226_60226_0.
2023.09.06 10:51:10.936121 [ 471 ] {} <Debug> system.metric_log (8c45d18c-4f03-43de-9848-28490ac69588) (MergerMutator): Selected 6 parts from 202309_59866_60221_71 to 202309_60226_60226_0
2023.09.06 10:51:10.936149 [ 471 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 41.29 GiB.
2023.09.06 10:51:10.936234 [ 302 ] {8c45d18c-4f03-43de-9848-28490ac69588::202309_59866_60226_72} <Debug> MergeTask::PrepareStage: Merging 6 parts: from 202309_59866_60221_71 to 202309_60226_60226_0 into Compact

Which suggests to me that it is trying to process some form of query/merge/insert.
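
To see whether that background activity is real work rather than just logging, something like the following can be run against system.merges and system.mutations (both standard system tables); on a genuinely idle server, both result sets should be close to empty:

SELECT database, table, elapsed, progress, num_parts, memory_usage
FROM system.merges;

SELECT database, table, mutation_id, is_done
FROM system.mutations
WHERE NOT is_done;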

One client application (which has been shut down for 6 hours) processes at most 1-2 requests per second. There may be update requests in that mix, but I can replace those with inserts and rely on the ReplacingMergeTree behaviour.
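
To illustrate what I mean by that (the column values below are hypothetical, since most of the schema is snipped above): an "update" becomes a plain insert of a newer row with the same (Id, Time) sorting key, and reads either use FINAL or tolerate duplicates until the background merge collapses them:

-- Hypothetical "update": re-insert the row for the same (Id, Time) sorting key.
INSERT INTO cdrs_processed (Id, Direction, Time, CallID, CorrelationID)
VALUES (42, 'outbound', toDateTime('2023-09-06 05:28:00', 'UTC'), 'call-1', 'E3065F87-3CB9-4C58-A12D-66A686B8C53D');

-- FINAL applies the ReplacingMergeTree de-duplication at query time.
SELECT * FROM cdrs_processed FINAL WHERE Id = 42;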

Can anyone provide any ideas as to where to look, or what might need changing or tuning? The lack of indexing on the data does not seem to be the issue for some of the queries, since a sequential scan is performant enough at this stage. We are trying to get reliable query performance so that we can identify the correct indexes/materialised views/projections we need in the data and see how they affect things, but so far we cannot get reliable performance out of ClickHouse.
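
For the record, the kind of thing we want to be able to evaluate once the timing is stable is a data-skipping index on CorrelationID, along these lines (the index name, bloom-filter false-positive rate and GRANULARITY below are arbitrary choices, not something we have settled on), with EXPLAIN indexes = 1 used to confirm whether granules are actually skipped:

ALTER TABLE cdrs_processed
    ADD INDEX idx_correlation_id CorrelationID TYPE bloom_filter(0.01) GRANULARITY 4;

-- Build the new index for parts that already exist.
ALTER TABLE cdrs_processed MATERIALIZE INDEX idx_correlation_id;

-- Check whether the skipping index prunes granules for the problem query.
EXPLAIN indexes = 1
SELECT * FROM cdrs_processed WHERE CorrelationID = '<some guid>' ORDER BY Id;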
