
I'm sure there's a simple solution to this. I'm running a MongoDB service:
[Unit]
Description=Mongo server for Open Quartermaster. Version ${version}, using MongoDB tagged to "4".
Documentation=https://github.com/Epic-Breakfast-Productions/OpenQuarterMaster/tree/main/software/Infrastructure
After=docker.service
Wants=network-online.target docker.socket
Requires=docker.socket
[Service]
Type=simple
Restart=always
TimeoutSec=5m
#ExecStartPre=/bin/bash -c "/usr/bin/docker container inspect oqm_mongo 2> /dev/null || "
ExecStartPre=/bin/bash -c "/usr/bin/docker stop -t 10 oqm_infra_mongo || echo 'Could not stop mongo container'"
ExecStartPre=/bin/bash -c "/usr/bin/docker rm oqm_infra_mongo || echo 'Could not remove mongo container'"
ExecStartPre=/bin/bash -c "/usr/bin/docker pull mongo:4"
ExecStart=/bin/bash -c "/usr/bin/docker run \
--name oqm_infra_mongo \
-p=27017:27017 \
-v /data/oqm/db/mongo/:/data/db \
mongo:4 mongod --replSet rs0"
ExecStartPost=/bin/bash -c "running=\"false\"; \
while [ \"$running\" != \"true\" ]; do \
sleep 1s; \
/usr/bin/docker exec oqm_infra_mongo mongo --eval \"\"; \
if [ \"$?\" = \"0\" ]; then \
echo \"Mongo container running and available!\"; \
running=\"true\"; \
fi \
done \
"
ExecStartPost=/bin/bash -c "/usr/bin/docker exec oqm_infra_mongo mongo --eval \"rs.initiate({'_id':'rs0', 'members':[{'_id':0,'host':'localhost:27017'}]})\" || echo 'Probably already initialized.'"
ExecStop=/bin/bash -c "/usr/bin/docker stop -t 10 oqm_infra_mongo || echo 'Could not stop mongo container'"
ExecStopPost=/bin/bash -c "/usr/bin/docker rm oqm_infra_mongo || echo 'Could not remove mongo container'"
[Install]
WantedBy=multi-user.target
I'm having Docker map in the host's /data/oqm/db/mongo/ directory, and that seems to work well; from the host I can see the directory being populated at runtime. The setup also appears to work and persist just fine across restarts.
However, when I try to back up and restore the data by copying the files out of that directory and later copying them back, Mongo barfs when the service is brought back, claiming the data is corrupted. Any ideas?
To be clear, both my backup and my restore happen while the service is stopped:
Backup (sketched just below):
- Stop the service (via systemd)
- Copy the files out of the data directory
- Start the service back up (via systemd)
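As a concrete sketch of those steps (the unit name oqm-mongo.service and the backup destination /data/oqm/backup/mongo/ are my assumptions; only the data path comes from the unit file above):
systemctl stop oqm-mongo.service                    # stop Mongo so the data files are quiescent
cp -a /data/oqm/db/mongo/. /data/oqm/backup/mongo/  # -a preserves permissions and ownership
systemctl start oqm-mongo.service                   # bring the service back up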
Restore (sketched just below):
- Stop the service (via systemd)
- Delete the existing files in the data directory
- Copy back the files that were previously copied out
- Start the service (via systemd) (fails)
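And the corresponding restore, under the same assumed names; the exact form of the delete command turns out to matter, as the answer below explains:
systemctl stop oqm-mongo.service
rm -rf /data/oqm/db/mongo/*                         # clear out the existing data files
cp -a /data/oqm/backup/mongo/. /data/oqm/db/mongo/  # copy the backed-up files back into place
systemctl start oqm-mongo.service                   # this is the step that fails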
I'm trying this approach because it's essentially what the documentation describes: https://www.mongodb.com/docs/manual/core/backups/#back-up-with-cp-or-rsync
Error log snippet:
May 27 09:06:59 oqm-dev bash[41304]: {"t":{"$date":"2023-05-27T13:06:59.275+00:00"},"s":"I", "c":"STORAGE", "id":22315, "ctx":"initandlisten","msg":"Opening WiredTiger","attr":{"config":"create,cache_size=11224M,session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress,compact_progress],"}}
May 27 09:06:59 oqm-dev bash[41304]: {"t":{"$date":"2023-05-27T13:06:59.686+00:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1685192819:686027][1:0x7fefd1197cc0], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 1 through 3"}}
May 27 09:06:59 oqm-dev bash[41304]: {"t":{"$date":"2023-05-27T13:06:59.719+00:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1685192819:719823][1:0x7fefd1197cc0], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 2 through 3"}}
May 27 09:06:59 oqm-dev bash[41304]: {"t":{"$date":"2023-05-27T13:06:59.749+00:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1685192819:749492][1:0x7fefd1197cc0], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 3 through 3"}}
May 27 09:06:59 oqm-dev bash[41304]: {"t":{"$date":"2023-05-27T13:06:59.801+00:00"},"s":"E", "c":"STORAGE", "id":22435, "ctx":"initandlisten","msg":"WiredTiger error","attr":{"error":-31804,"message":"[1685192819:801911][1:0x7fefd1197cc0], txn-recover: __recovery_setup_file, 643: metadata corruption: files file:collection-0-6243490866083295563.wt and file:collection-0--9007965794334803376.wt have the same file ID 4: WT_PANIC: WiredTiger library panic"}}
May 27 09:06:59 oqm-dev bash[41304]: {"t":{"$date":"2023-05-27T13:06:59.801+00:00"},"s":"F", "c":"-", "id":23089, "ctx":"initandlisten","msg":"Fatal assertion","attr":{"msgid":50853,"file":"src/mongo/db/storage/wiredtiger/wiredtiger_util.cpp","line":481}}
May 27 09:06:59 oqm-dev bash[41304]: {"t":{"$date":"2023-05-27T13:06:59.802+00:00"},"s":"F", "c":"-", "id":23090, "ctx":"initandlisten","msg":"\n\n***aborting after fassert() failure\n\n"}
May 27 09:06:59 oqm-dev bash[41304]: {"t":{"$date":"2023-05-27T13:06:59.802+00:00"},"s":"F", "c":"CONTROL", "id":4757800, "ctx":"initandlisten","msg":"Writing fatal message","attr":{"message":"Got signal: 6 (Aborted).\n"}}
Update: following the suggestions, I double-checked that permissions are preserved, and they definitely are now. Unfortunately, I'm still hitting the same issue.
Answer 1
Figured it out.
Permissions may well have played a part, but what actually did it was realizing that:
rm -rf "/some/dir/*"
!= rm -rf /some/dir/*
With the glob quoted, the shell passes the * through literally, so nothing is actually deleted; the old data files were still sitting in the directory next to the restored copies, which is exactly what produced the duplicate file ID panic in the log above. Changing the command to rm -rf "$root/some/dir"/* did the trick.
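A quick illustration of the difference, using a throwaway path made up for this example:
mkdir -p /tmp/globdemo
touch /tmp/globdemo/a /tmp/globdemo/b
rm -rf "/tmp/globdemo/*"   # glob is quoted: rm receives the literal string, which matches nothing
ls /tmp/globdemo           # a and b are still there
rm -rf /tmp/globdemo/*     # glob is unquoted: the shell expands it before rm runs
ls /tmp/globdemo           # now empty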