I have lines like the following in my log:
2015/11/02-07:55:39.735 INFO failed with ERR_AUTHORIZATION_REQUIRED. (10.10.10.11:61618) is not a trusted source.
2015/11/02-07:55:40.515 INFO failed with ERR_AUTHORIZATION_REQUIRED. (10.10.10.11:51836) is not a trusted source.
2015/11/02-07:55:39.735 INFO failed with ERR_AUTHORIZATION_REQUIRED. (10.10.10.10:61615) is not a trusted source.
2015/11/02-07:55:40.515 INFO failed with ERR_AUTHORIZATION_REQUIRED. (10.10.10.10:51876) is not a trusted source.
2015/11/02-07:55:39.735 INFO failed with ERR_AUTHORIZATION_REQUIRED. (10.10.10.10:61614) is not a trusted source.
2015/11/02-07:55:39.735 INFO failed with ERR_AUTHORIZATION_REQUIRED. (10.10.10.15:61614) is not a trusted source.
2015/11/02-07:55:39.735 INFO failed with ERR_AUTHORIZATION_REQUIRED. (10.10.10.15:61618) is not a trusted source.
2015/11/02-07:55:39.735 INFO failed with ERR_AUTHORIZATION_REQUIRED. (10.10.10.15:61613) is not a trusted source.
So I tried the following command to get a (sorted) count per unique IP:
grep ERR_AUTHORIZATION_REQUIRED file.log | awk '{print $6}' | cut -s -d ':' -f1 | tr -d '(' | sort | uniq -c
The output I get looks like this:
3 10.10.10.10
2 10.10.10.11
3 10.10.10.15
So it seems the IPs are being sorted before uniq -c is applied (which makes sense given the command), but if I swap the uniq and sort commands, every IP is printed with a count of 1.
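The behavior described above can be reproduced with a minimal sketch (using a few of the interleaved IPs from the log in place of the real file, so file.log is not needed):

```shell
# IPs appear interleaved in the log, so with uniq -c *before* sort,
# no two adjacent lines are identical and every count comes out as 1:
printf '%s\n' 10.10.10.11 10.10.10.10 10.10.10.11 10.10.10.10 | uniq -c | sort
#   1 10.10.10.10
#   1 10.10.10.10
#   1 10.10.10.11
#   1 10.10.10.11
```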
Answer 1
From the uniq man page:
DESCRIPTION
Discard all but one of successive identical lines from INPUT (or standard input), writing to OUTPUT (or standard output).
The key word here is "successive". uniq does not search for duplicates anywhere in the stream, only on immediately adjacent lines. Sorting first forces all duplicates next to each other, so they can be collapsed (and counted).
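A tiny illustration of the "successive" rule, using toy input rather than the log data:

```shell
# Without sorting, uniq -c only merges *adjacent* duplicates,
# so the second 'a' is counted separately:
printf 'a\nb\na\n' | uniq -c
#   1 a
#   1 b
#   1 a

# Sorting first groups all duplicates together, so they are counted as one run:
printf 'a\nb\na\n' | sort | uniq -c
#   2 a
#   1 b
```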