I inherited a FreeNAS installation from a colleague who left the company. It holds internal rsync backups of our many web servers spread around the world.
The FreeNAS box is running low on storage, and I found that we still have backups of servers that were decommissioned as far back as 2013.
The initial output of df -h was:
...
raid-1 283G 261G 22G 92% /mnt/raid-1
raid-1/clone-auto-20140925.0800-2m 283G 261G 22G 92% /mnt/raid-1/clone-auto-20140925.0800-2m
...
So I naively deleted the old rsync snapshots:
rm -rf /mnt/raid-1/backups/old.server.1 /mnt/raid-1/backups/old.server.2
Afterwards, df -h now looks like this:
...
raid-1 266G 244G 22G 92% /mnt/raid-1
raid-1/clone-auto-20140925.0800-2m 283G 261G 22G 92% /mnt/raid-1/clone-auto-20140925.0800-2m
...
Uh oh.
I have narrowed the problem down to my own lack of understanding of ZFS. Apparently df does not report disk usage in the traditional sense I expected, and a naive delete does not solve my problem.
I would be grateful if someone could:
- point me in the right direction on how FreeNAS uses ZFS so that I can understand it, and
- offer some guidance on how to free up space by getting rid of the copies of these old backups.
Edit 1
I have been reading, and I now understand that the space is not released because of ZFS's copy-on-write (CoW) behaviour.
Output of zfs list:
NAME USED AVAIL REFER MOUNTPOINT
raid-1 1.76T 21.5G 245G /mnt/raid-1
raid-1/clone-auto-20140925.0800-2m 34.8G 21.5G 261G /mnt/raid-1/clone-auto-20140925.0800-2m
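For reference, a quick way to see where that 1.76T is actually attributed (the dataset itself vs. its snapshots vs. child datasets). This is only a sketch, assuming the -o space shorthand is available on this FreeNAS/ZFS version:

# break down usage into dataset contents, snapshots and children
zfs list -o space -r raid-1

# the same information as individual properties
zfs get -r usedbysnapshots,usedbydataset,usedbychildren raid-1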
Edit 2
Output of zfs list -t snapshot:
NAME USED AVAIL REFER MOUNTPOINT
raid-1@<snapshot> 91.0G - 261G -
raid-1@<snapshot> 9.13G - 301G -
raid-1@<snapshot> 4.68G - 301G -
raid-1@<snapshot> 4.70G - 302G -
raid-1@<snapshot> 4.63G - 302G -
raid-1@<snapshot> 15.5G - 297G -
raid-1@<snapshot> 15.6G - 297G -
raid-1@<snapshot> 15.7G - 297G -
raid-1@<snapshot> 16.0G - 297G -
raid-1@<snapshot> 15.9G - 297G -
raid-1@<snapshot> 16.2G - 298G -
raid-1@<snapshot> 15.2G - 297G -
raid-1@<snapshot> 13.8G - 297G -
raid-1@<snapshot> 14.1G - 298G -
raid-1@<snapshot> 19.1G - 298G -
raid-1@<snapshot> 19.3G - 299G -
raid-1@<snapshot> 16.6G - 299G -
raid-1@<snapshot> 16.7G - 300G -
raid-1@<snapshot> 15.7G - 299G -
raid-1@<snapshot> 16.3G - 300G -
raid-1@<snapshot> 16.6G - 300G -
raid-1@<snapshot> 19.5G - 300G -
raid-1@<snapshot> 19.8G - 299G -
raid-1@<snapshot> 17.4G - 299G -
raid-1@<snapshot> 17.6G - 300G -
raid-1@<snapshot> 16.4G - 299G -
raid-1@<snapshot> 16.9G - 300G -
raid-1@<snapshot> 17.5G - 297G -
raid-1@<snapshot> 20.0G - 297G -
raid-1@<snapshot> 20.2G - 297G -
raid-1@<snapshot> 5.43G - 297G -
raid-1@<snapshot> 5.46G - 302G -
raid-1@<snapshot> 16.7G - 307G -
raid-1@<snapshot> 16.8G - 308G -
raid-1@<snapshot> 17.2G - 309G -
raid-1@<snapshot> 20.5G - 309G -
raid-1@<snapshot> 17.4G - 309G -
raid-1@<snapshot> 17.7G - 310G -
raid-1@<snapshot> 17.8G - 311G -
raid-1@<snapshot> 575M - 310G -
raid-1@<snapshot> 575M - 310G -
raid-1@<snapshot> 20.9G - 309G -
raid-1@<snapshot> 21.0G - 309G -
raid-1@<snapshot> 20.6G - 306G -
raid-1@<snapshot> 17.8G - 306G -
raid-1@<snapshot> 18.1G - 308G -
raid-1@<snapshot> 561M - 307G -
raid-1@<snapshot> 561M - 307G -
raid-1@<snapshot> 20.7G - 308G -
raid-1@<snapshot> 21.3G - 308G -
raid-1@<snapshot> 21.6G - 308G -
raid-1@<snapshot> 18.9G - 309G -
raid-1@<snapshot> 19.1G - 310G -
raid-1@<snapshot> 18.0G - 309G -
raid-1@<snapshot> 18.2G - 309G -
raid-1@<snapshot> 18.6G - 310G -
raid-1@<snapshot> 19.1G - 310G -
raid-1@<snapshot> 22.1G - 238G -
raid-1@<snapshot> 19.4G - 238G -
raid-1@<snapshot> 12.2G - 239G -
raid-1@<snapshot> 314M - 245G -
raid-1/clone-auto-20140925.0800-2m@<snapshot> 585M - 261G -
raid-1/clone-auto-20140925.0800-2m@<snapshot> 585M - 261G -
raid-1/clone-auto-20140925.0800-2m@<snapshot> 585M - 261G -
raid-1/clone-auto-20140925.0800-2m@<snapshot> 585M - 261G -
raid-1/clone-auto-20140925.0800-2m@<snapshot> 585M - 261G -
raid-1/clone-auto-20140925.0800-2m@<snapshot> 585M - 261G -
raid-1/clone-auto-20140925.0800-2m@<snapshot> 585M - 261G -
raid-1/clone-auto-20140925.0800-2m@<snapshot> 585M - 261G -
raid-1/clone-auto-20140925.0800-2m@<snapshot> 585M - 261G -
raid-1/clone-auto-20140925.0800-2m@<snapshot> 585M - 261G -
raid-1/clone-auto-20140925.0800-2m@<snapshot> 585M - 261G -
raid-1/clone-auto-20140925.0800-2m@<snapshot> 585M - 261G -
raid-1/clone-auto-20140925.0800-2m@<snapshot> 585M - 261G -
raid-1/clone-auto-20140925.0800-2m@<snapshot> 585M - 261G -
raid-1/clone-auto-20140925.0800-2m@<snapshot> 585M - 261G -
raid-1/clone-auto-20140925.0800-2m@<snapshot> 585M - 261G -
raid-1/clone-auto-20140925.0800-2m@<snapshot> 585M - 261G -
raid-1/clone-auto-20140925.0800-2m@<snapshot> 584M - 261G -
raid-1/clone-auto-20140925.0800-2m@<snapshot> 585M - 261G -
raid-1/clone-auto-20140925.0800-2m@<snapshot> 585M - 261G -
raid-1/clone-auto-20140925.0800-2m@<snapshot> 585M - 261G -
raid-1/clone-auto-20140925.0800-2m@<snapshot> 585M - 261G -
raid-1/clone-auto-20140925.0800-2m@<snapshot> 585M - 261G -
raid-1/clone-auto-20140925.0800-2m@<snapshot> 585M - 261G -
raid-1/clone-auto-20140925.0800-2m@<snapshot> 585M - 261G -
raid-1/clone-auto-20140925.0800-2m@<snapshot> 585M - 261G -
raid-1/clone-auto-20140925.0800-2m@<snapshot> 585M - 261G -
raid-1/clone-auto-20140925.0800-2m@<snapshot> 585M - 261G -
raid-1/clone-auto-20140925.0800-2m@<snapshot> 585M - 261G -
raid-1/clone-auto-20140925.0800-2m@<snapshot> 584M - 261G -
raid-1/clone-auto-20140925.0800-2m@<snapshot> 584M - 261G -
raid-1/clone-auto-20140925.0800-2m@<snapshot> 585M - 261G -
raid-1/clone-auto-20140925.0800-2m@<snapshot> 585M - 261G -
raid-1/clone-auto-20140925.0800-2m@<snapshot> 585M - 261G -
raid-1/clone-auto-20140925.0800-2m@<snapshot> 585M - 261G -
raid-1/clone-auto-20140925.0800-2m@<snapshot> 585M - 261G -
raid-1/clone-auto-20140925.0800-2m@<snapshot> 585M - 261G -
raid-1/clone-auto-20140925.0800-2m@<snapshot> 585M - 261G -
raid-1/clone-auto-20140925.0800-2m@<snapshot> 585M - 261G -
raid-1/clone-auto-20140925.0800-2m@<snapshot> 585M - 261G -
raid-1/clone-auto-20140925.0800-2m@<snapshot> 585M - 261G -
raid-1/clone-auto-20140925.0800-2m@<snapshot> 585M - 261G -
raid-1/clone-auto-20140925.0800-2m@<snapshot> 585M - 261G -
raid-1/clone-auto-20140925.0800-2m@<snapshot> 585M - 261G -
raid-1/clone-auto-20140925.0800-2m@<snapshot> 585M - 261G -
raid-1/clone-auto-20140925.0800-2m@<snapshot> 585M - 261G -
raid-1/clone-auto-20140925.0800-2m@<snapshot> 585M - 261G -
raid-1/clone-auto-20140925.0800-2m@<snapshot> 585M - 261G -
raid-1/clone-auto-20140925.0800-2m@<snapshot> 585M - 261G -
raid-1/clone-auto-20140925.0800-2m@<snapshot> 585M - 261G -
raid-1/clone-auto-20140925.0800-2m@<snapshot> 585M - 261G -
raid-1/clone-auto-20140925.0800-2m@<snapshot> 585M - 261G -
raid-1/clone-auto-20140925.0800-2m@<snapshot> 585M - 261G -
raid-1/clone-auto-20140925.0800-2m@<snapshot> 585M - 261G -
raid-1/clone-auto-20140925.0800-2m@<snapshot> 584M - 261G -
raid-1/clone-auto-20140925.0800-2m@<snapshot> 585M - 261G -
raid-1/clone-auto-20140925.0800-2m@<snapshot> 585M - 261G -
raid-1/clone-auto-20140925.0800-2m@<snapshot> 585M - 261G -
raid-1/clone-auto-20140925.0800-2m@<snapshot> 584M - 261G -
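If raid-1/clone-auto-20140925.0800-2m really is a ZFS clone (its name suggests it was created from an auto-20140925.0800-2m snapshot of raid-1), its origin property should show which snapshot it still depends on; that origin snapshot cannot be destroyed while the clone exists unless the clone is destroyed or promoted first. A hedged sketch:

# show the snapshot (if any) this dataset was cloned from
zfs get origin raid-1/clone-auto-20140925.0800-2m

# if the clone itself is no longer needed, destroying it releases its hold on the origin snapshot
# zfs destroy raid-1/clone-auto-20140925.0800-2m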
Answer
This is what I have used so far:

zfs list
- identifies the volume/dataset to work with.

zfs list -H -t snapshot -o name -S creation -r volume/dataset | tail -10
- replace volume/dataset with your own; -S creation sorts newest first, so tail -10 lists the 10 oldest snapshots (tail -XX means list the XX oldest).

zfs list -H -t snapshot -o name -S creation -r volume/dataset | tail -10 | xargs -n 1 zfs destroy
- again, replace volume/dataset with your own; this destroys those oldest snapshots one at a time.
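Before running the destroy for real, it is worth doing a dry run; on reasonably recent ZFS the -n and -v flags make zfs destroy report what it would reclaim without deleting anything. A sketch using raid-1 from the listings above as the dataset:

# dry run: show which snapshots would be destroyed and how much space each would free
zfs list -H -t snapshot -o name -S creation -r raid-1 | tail -10 | xargs -n 1 zfs destroy -nv

# once the output looks right, drop the -n to actually destroy them
# zfs list -H -t snapshot -o name -S creation -r raid-1 | tail -10 | xargs -n 1 zfs destroy -v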