With TC you should be able to search for files both by identical content (= identical MD5) and by identical name. And it's the former you want, not identical names.
By the way, I was mistaken earlier. The syno file tools contain Midnight Commander, not TC, and it has nothing for finding identical files. That package does include "jdupes" for handling duplicate files, though. You can even convert duplicate files into "hard links", so that you effectively keep a single file while the file references remain in their original locations (see the example invocation after the help output below).
$ jdupes --help
Usage: jdupes [options] FILES and/or DIRECTORIES...
Duplicate file sets will be printed by default unless a different action
option is specified (delete, summarize, link, dedupe, etc.)
-@ --loud output annoying low-level debug info while running
-0 --print-null output nulls instead of CR/LF (like 'find -print0')
-1 --one-file-system do not match files on different filesystems/devices
-A --no-hidden exclude hidden files from consideration
-B --dedupe do a copy-on-write (reflink/clone) deduplication
-C --chunk-size=# override I/O chunk size in KiB (min 4, max 262144)
-d --delete prompt user for files to preserve and delete all
others; important: under particular circumstances,
data may be lost when using this option together
with -s or --symlinks, or when specifying a
particular directory more than once; refer to the
documentation for additional information
-e --error-on-dupe exit on any duplicate found with status code 255
-f --omit-first omit the first file in each set of matches
-h --help display this help message
-H --hard-links treat any linked files as duplicate files. Normally
linked files are treated as non-duplicates for safety
-i --reverse reverse (invert) the match sort order
-I --isolate files in the same specified directory won't match
-j --json produce JSON (machine-readable) output
-l --link-soft make relative symlinks for duplicates w/o prompting
-L --link-hard hard link all duplicate files without prompting
-m --summarize summarize dupe information
-M --print-summarize print match sets and --summarize at the end
-N --no-prompt together with --delete, preserve the first file in
each set of duplicates and delete the rest without
prompting the user
-o --order=BY select sort order for output, linking and deleting; by
mtime (BY=time) or filename (BY=name, the default)
-O --param-order Parameter order is more important than selected -o sort
-p --permissions don't consider files with different owner/group or
permission bits as duplicates
-P --print=type print extra info (partial, early, fullhash)
-q --quiet hide progress indicator
-Q --quick skip byte-for-byte confirmation for quick matching
WARNING: -Q can result in data loss! Be very careful!
-r --recurse for every directory, process its subdirectories too
-R --recurse: for each directory given after this option follow
subdirectories encountered within (note the ':' at
the end of the option, manpage for more details)
-s --symlinks follow symlinks
-S --size show size of duplicate files
-t --no-change-check disable security check for file changes (aka TOCTTOU)
-T --partial-only match based on partial hashes only. WARNING:
EXTREMELY DANGEROUS paired with destructive actions!
-u --print-unique print only a list of unique (non-matched) files
-U --no-trav-check disable double-traversal safety check (BE VERY CAREFUL)
This fixes a Google Drive File Stream recursion issue
-v --version display jdupes version and license information
-X --ext-filter=x:y filter files based on specified criteria
Use '-X help' for detailed extfilter help
-y --hash-db=file use a hash database text file to speed up repeat runs
Passing '-y .' will expand to '-y jdupes_hashdb.txt'
-z --zero-match consider zero-length files to be duplicates
-Z --soft-abort If the user aborts (i.e. CTRL-C) act on matches so far
You can send SIGUSR1 to the program to toggle this
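
As a minimal sketch of how that could look on the command line (the path /volume1/photos is just a placeholder; the flags are the ones listed in the help output above):

$ jdupes -r -S /volume1/photos    # list duplicate sets by content, with sizes
$ jdupes -r -L /volume1/photos    # then hard-link the duplicates, no prompting

Note that after -L all links point at the same data, so editing the file through one path changes it for every path; that is usually fine for static data such as photos, but worth keeping in mind.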