FSlint can find duplicate files. But suppose you have 10,000 songs or images, and you only want to find those files that are identical but have different names? Right now I get a list with hundreds of duplicates (scattered across different folders). I want the names to be consistent, so I would like to see only identical files with different names, not identical files with the same name.
Can FSlint accomplish this with advanced parameters (or can some other program)?
Answer 1
If it's OK for you that the script prints all duplicate files, whether their file names are equal or different, you can use this one-liner:
find . -type f -exec sha256sum {} \; | sort | uniq -w64 --all-repeated=separate | cut -b 67-
For the example run I use the directory structure below. Files with similar names (and different numbers) have identical content:
.
├── dir1
│   ├── uname1
│   └── uname3
├── grps
├── lsbrelease
├── lsbrelease2
├── uname1
└── uname2
Now let's watch our command perform some magic:
$ find . -type f -exec sha256sum {} \; | sort | uniq -w64 --all-repeated=separate | cut -b 67-
./lsbrelease
./lsbrelease2

./dir1/uname1
./dir1/uname3
./uname1
./uname2
Each group, separated by a blank line, contains files with identical content; files without any duplicate are not listed at all. (uniq -w64 compares only the first 64 characters of each line, i.e. the SHA-256 hash, and cut -b 67- strips the hash plus the two separator spaces, leaving only the paths.)
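Answer 1's pipeline lists every duplicate group, regardless of names. If you want only the groups whose members do not all share one name, the same hash-grouping idea fits in a few lines of Python. The following is a minimal standalone sketch (not part of the answer above; hashlib and os.walk stand in for the external sha256sum call):

#!/usr/bin/env python3
# Sketch: group files under '.' by SHA-256 and print only those groups
# whose members carry more than one distinct file name.
import hashlib
import os
from collections import defaultdict

groups = defaultdict(list)  # digest -> list of (name, path) tuples
for root, _dirs, files in os.walk("."):
    for name in files:
        path = os.path.join(root, name)
        with open(path, "rb") as f:  # reads whole files; fine for a sketch
            groups[hashlib.sha256(f.read()).hexdigest()].append((name, path))

for entries in groups.values():
    names = {name for name, _ in entries}
    if len(entries) > 1 and len(names) > 1:  # same content, differing names
        for _name, path in sorted(entries):
            print(path)
        print()  # blank line between groups, like uniq --all-repeated=separate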
Answer 2
Here is another solution for you, more flexible and easier to use!
Copy the script below and paste it into /usr/local/bin/dupe-check (or any other location and file name; you need root permissions for this one).
Make it executable by running:
sudo chmod +x /usr/local/bin/dupe-check
As /usr/local/bin is in every user's PATH, everybody can now run it directly without specifying its location.
First, you should have a look at my script's help page:
$ dupe-check --help
usage: dupe-check [-h] [-s COMMAND] [-r MAXDEPTH] [-e | -d] [-0]
[-v | -q | -Q] [-g] [-p] [-V]
[directory]
Check for duplicate files
positional arguments:
directory the directory to examine recursively (default '.')
optional arguments:
-h, --help show this help message and exit
-s COMMAND, --hashsum COMMAND
external system command to generate hashes (default
'sha256sum')
-r MAXDEPTH, --recursion-depth MAXDEPTH
the number of subdirectory levels to process: 0=only
current directory, 1=max. 1st subdirectory level, ...
(default: infinite)
-e, --equal-names only list duplicates with equal file names
-d, --different-names
only list duplicates with different file names
-0, --no-zero do not list 0-byte files
-v, --verbose print hash and name of each examined file
-q, --quiet suppress status output on stderr
-Q, --list-only only list the duplicate files, no summary etc.
-g, --no-groups do not group equal duplicates
-p, --path-only only print the full path in the results list,
otherwise format output like this: `'FILENAME'
(FULL_PATH)´
-V, --version show program's version number and exit
As you can see, to get a list of all files with different file names in the current directory (and all of its subdirectories), you need the -d flag plus any valid combination of the formatting options.
We still assume the same test environment. Files with similar names (and different numbers) have identical content:
.
├── dir1
│   ├── uname1
│   └── uname3
├── grps
├── lsbrelease
├── lsbrelease2
├── uname1
└── uname2
So we simply run:
$ dupe-check
Checked 7 files in total, 6 of them are duplicates by content.
Here's a list of all duplicate files:

'lsbrelease' (./lsbrelease)
'lsbrelease2' (./lsbrelease2)

'uname1' (./dir1/uname1)
'uname1' (./uname1)
'uname2' (./uname2)
'uname3' (./dir1/uname3)
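Since the question asks specifically for identical files with different names, the interesting run is the one with the -d flag. On the same test tree it should produce output roughly like this (group order may vary):

$ dupe-check -d
Checked 7 files in total, 6 of them are duplicates by content.
Here's a list of all duplicate files with different names:

'lsbrelease' (./lsbrelease)
'lsbrelease2' (./lsbrelease2)

'uname2' (./uname2)
'uname3' (./dir1/uname3)

The two copies named uname1 drop out: they are duplicates, but their names are equal.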
And here is the script:
#! /usr/bin/env python3
VERSION_MAJOR, VERSION_MINOR, VERSION_MICRO = 0, 4, 1
RELEASE_DATE, AUTHOR = "2016-02-11", "ByteCommander"
import sys
import os
import shutil
import subprocess
import argparse
# Printer prints results to stdout while maintaining a single
# self-overwriting status line on stderr (used for progress reporting).
class Printer:
def __init__(self, normal=sys.stdout, stat=sys.stderr):
self.__normal = normal
self.__stat = stat
self.__prev_msg = ""
self.__first = True
self.__max_width = shutil.get_terminal_size().columns
def __call__(self, msg, stat=False):
if not stat:
if not self.__first:
print("\r" + " " * len(self.__prev_msg) + "\r",
end="", file=self.__stat)
print(msg, file=self.__normal)
print(self.__prev_msg, end="", flush=True, file=self.__stat)
else:
if len(msg) > self.__max_width:
msg = msg[:self.__max_width-3] + "..."
if not msg:
print("\r" + " " * len(self.__prev_msg) + "\r",
end="", flush=True, file=self.__stat)
elif self.__first:
print(msg, end="", flush=True, file=self.__stat)
self.__first = False
else:
print("\r" + " " * len(self.__prev_msg) + "\r",
end="", file=self.__stat)
print("\r" + msg, end="", flush=True, file=self.__stat)
self.__prev_msg = msg
# Recursively yield (directory, list of file names) tuples, descending at
# most maxdepth subdirectory levels (a negative maxdepth means no limit).
def file_walker(top, maxdepth=None):
dirs, files = [], []
for name in os.listdir(top):
(dirs if os.path.isdir(os.path.join(top, name)) else files).append(name)
yield top, files
if maxdepth != 0:
for name in dirs:
for x in file_walker(os.path.join(top, name), maxdepth-1):
yield x
printx = Printer()
argparser = argparse.ArgumentParser(description="Check for duplicate files")
argparser.add_argument("directory", action="store", default=".", nargs="?",
help="the directory to examine recursively "
"(default '%(default)s')")
argparser.add_argument("-s", "--hashsum", action="store", default="sha256sum",
metavar="COMMAND", help="external system command to "
"generate hashes (default '%(default)s')")
argparser.add_argument("-r", "--recursion-depth", action="store", type=int,
default=-1, metavar="MAXDEPTH",
help="the number of subdirectory levels to process: "
"0=only current directory, 1=max. 1st subdirectory "
"level, ... (default: infinite)")
arggroupn = argparser.add_mutually_exclusive_group()
arggroupn.add_argument("-e", "--equal-names", action="store_const",
const="e", dest="name_filter",
help="only list duplicates with equal file names")
arggroupn.add_argument("-d", "--different-names", action="store_const",
const="d", dest="name_filter",
help="only list duplicates with different file names")
argparser.add_argument("-0", "--no-zero", action="store_true", default=False,
help="do not list 0-byte files")
arggroupo = argparser.add_mutually_exclusive_group()
arggroupo.add_argument("-v", "--verbose", action="store_const", const=0,
dest="output_level",
help="print hash and name of each examined file")
arggroupo.add_argument("-q", "--quiet", action="store_const", const=2,
dest="output_level",
help="suppress status output on stderr")
arggroupo.add_argument("-Q", "--list-only", action="store_const", const=3,
dest="output_level",
help="only list the duplicate files, no summary etc.")
argparser.add_argument("-g", "--no-groups", action="store_true", default=False,
help="do not group equal duplicates")
argparser.add_argument("-p", "--path-only", action="store_true", default=False,
help="only print the full path in the results list, "
"otherwise format output like this: "
"`'FILENAME' (FULL_PATH)´")
argparser.add_argument("-V", "--version", action="version",
version="%(prog)s {}.{}.{} ({} by {})".format(
VERSION_MAJOR, VERSION_MINOR, VERSION_MICRO,
RELEASE_DATE, AUTHOR))
argparser.set_defaults(name_filter="a", output_level=1)
args = argparser.parse_args()
hashes = {}
dupe_counter = 0
file_counter = 0
try:
for root, filenames in file_walker(args.directory, args.recursion_depth):
if args.output_level <= 1:
printx("--> {} files ({} duplicates) processed - '{}'".format(
file_counter, dupe_counter, root), stat=True)
for filename in filenames:
path = os.path.join(root, filename)
file_counter += 1
filehash = subprocess.check_output(
[args.hashsum, path], universal_newlines=True).split()[0]
if args.output_level == 0:
printx(" ".join((filehash, path)))
if filehash in hashes:
dupe_counter += 1 if len(hashes[filehash]) > 1 else 2
hashes[filehash].append((filename, path))
if args.output_level <= 1:
printx("--> {} files ({} duplicates) processed - '{}'"
.format(file_counter, dupe_counter, root), stat=True)
else:
hashes[filehash] = [(filename, path)]
except FileNotFoundError:
printx("ERROR: Directory not found!")
exit(1)
except KeyboardInterrupt:
printx("USER ABORTED SEARCH!")
printx("Results so far:")
if args.output_level <= 1:
printx("", stat=True)
if args.output_level == 0:
printx("")
if args.output_level <= 2:
printx("Checked {} files in total, {} of them are duplicates by content."
.format(file_counter, dupe_counter))
if dupe_counter == 0:
exit(0)
elif args.output_level <= 2:
printx("Here's a list of all duplicate{} files{}:".format(
" non-zero-byte" if args.no_zero else "",
" with different names" if args.name_filter == "d" else
" with equal names" if args.name_filter == "e" else ""))
first_group = True
for filehash in hashes:
if len(hashes[filehash]) > 1:
        # getsize needs the stored full path (index 1), not the bare name
        if args.no_zero and os.path.getsize(hashes[filehash][0][1]) == 0:
continue
first_group = False
if args.name_filter == "a":
filtered = hashes[filehash]
else:
filenames = {}
for filename, path in hashes[filehash]:
if filename in filenames:
filenames[filename].append(path)
else:
filenames[filename] = [path]
filtered = [(filename, path)
for filename in filenames if (
args.name_filter == "e" and len(filenames[filename]) > 1 or
args.name_filter == "d" and len(filenames[filename]) == 1)
for path in filenames[filename]]
if len(filtered) == 0:
continue
if (not args.no_groups) and (args.output_level <= 2 or not first_group):
printx("")
for filename, path in sorted(filtered):
if args.path_only:
printx(path)
else:
printx("'{}' ({})".format(filename, path))
Answer 3
Byte Commander's excellent script works, but it didn't give me the behaviour I needed (listing all duplicate files of which at least one has a different name). I made the following change, and now it is perfect for my purpose (and saves me an enormous amount of time)! I changed line 160 (the args.name_filter == "d" clause inside the filtered list comprehension) to:
args.name_filter == "d" and len(filenames[filename]) >= 1 and len(filenames[filename]) != len(hashes[filehash]))
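To see the effect of the change, here is a small illustrative sketch (the group data is taken from the example tree above) comparing the original and the modified -d condition on the uname group:

# Sketch: original vs. modified -d filter, applied by hand to the
# 'uname' duplicate group from the example tree.
group = [("uname1", "./dir1/uname1"), ("uname1", "./uname1"),
         ("uname2", "./uname2"), ("uname3", "./dir1/uname3")]

filenames = {}
for filename, path in group:
    filenames.setdefault(filename, []).append(path)

# Original: keep a name only if it occurs exactly once within the group.
original = [p for ps in filenames.values() if len(ps) == 1 for p in ps]
# Modified: keep every member unless ALL group members share its name.
modified = [p for ps in filenames.values() if len(ps) != len(group) for p in ps]

print(original)  # ['./uname2', './dir1/uname3']
print(modified)  # all four paths: at least one member is named differently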