Filtering/Locating slightly similar duplicates using wildcards

Hey, guys!

I have a ton of .pdf files I'm currently organizing, and many of them are duplicates. I'm trying to filter/locate duplicates inside a single folder using wildcards (I'll look into searching across multiple directories later, but that's another issue). No matter what I do or how I configure it, the filters just don't work the way I want them to (don't get me wrong, I'm sure the problem is between the keyboard and the chair): the search shows no duplicates as results, even though I know for a fact there are duplicates (examples below).

So, because some of those PDFs were renamed by a Python script, there's actually more than one naming pattern among the duplicates, but no more than three or four, like this:

FileName (1).pdf
FileName_1.pdf
FileName.pdf
FileName .pdf
FileName (1) .pdf
FileName_1 .pdf
FileName_2.pdf
FileName (2).pdf
etc. (the extra spaces are intentional).
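For context, this is roughly the normalization I'm hoping the filters would give me. A minimal Python sketch of it (the regex and the folder path are placeholders I made up for illustration, not what my renaming script actually did):

```python
import re
from collections import defaultdict
from pathlib import Path

# Strip a trailing " (1)", "_1", and any stray spaces from the stem
# to recover the base name shared by the variants listed above.
SUFFIX = re.compile(r"[ _]*(?:\(\d+\)|_\d+)?\s*$")

def base_name(path: Path) -> str:
    return SUFFIX.sub("", path.stem).strip()

groups = defaultdict(list)
for pdf in Path("some_folder").glob("*.pdf"):  # placeholder folder
    groups[base_name(pdf)].append(pdf.name)

for name, files in groups.items():
    if len(files) > 1:
        print(name, "->", files)
```

(Note this would also lump together a file whose real name legitimately ends in "_2", so it only gives candidate duplicates.)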

I've already gone through all the duplicates found by file size (using the duplicates tool, Ctrl+U), so that's not an option. I then tried locating duplicates using the name (without extension) as the comparison parameter, with filters like "(1)", "*(1)", "*_1", "_1", etc., to no avail.
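In case it matters, I was assuming the filters behave like shell-style wildcards. Here's a quick Python fnmatch check of what I expected those filter strings to match (this is just my assumption about the matching rules, which may be exactly where I'm going wrong):

```python
from fnmatch import fnmatch

name = "FileName (1)"  # name compared without the .pdf extension

# A bare "(1)" has to match the whole name, so it matches nothing here;
# wrapping it in * turns it into a substring match.
print(fnmatch(name, "(1)"))          # False
print(fnmatch(name, "*(1)*"))        # True
print(fnmatch(name, "*_1"))          # False for this name
print(fnmatch("FileName_1", "*_1"))  # True
```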

What might I be doing wrong?

If the files have only been renamed, what I would do is a duplicate search using an MD5 checksum as the comparison method.
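Something along these lines in Python would give you the same result outside the tool, grouping files with identical content by MD5 hash (the folder path is a placeholder):

```python
import hashlib
from collections import defaultdict
from pathlib import Path

def md5sum(path: Path) -> str:
    """Hash the file contents in chunks so large PDFs aren't loaded into memory at once."""
    h = hashlib.md5()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

by_hash = defaultdict(list)
for pdf in Path("some_folder").glob("*.pdf"):  # placeholder folder
    by_hash[md5sum(pdf)].append(pdf.name)

for digest, files in by_hash.items():
    if len(files) > 1:
        print(digest, "->", files)
```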