MD5 > Everything > TeraCopy > GPT > Nom nom nom

I'll just paste this here. I bet this could be a huge time saver.
If anyone can help via script or specific recommendation - thanks kindly!

On with the request... I hereby invoke PasteGPT

"The process you're looking to achieve, using TeraCopy's MD5 hash file in Directory Opus 13 for quickly determining duplicates through its integration with "Everything," appears to be complex and potentially requires some manual steps. Here's a breakdown based on the information available:

1. Integration of Directory Opus 13 with "Everything":

    • Directory Opus 13 offers integrated support for "Everything," which enables instant searches of local drives.
    • Within Directory Opus, "Everything" can be used in different ways, such as through the search field, Find-As-You-Type (FAYT), and the Tools > Find Files option.

2. MD5 Hashes in Directory Opus:

    • Directory Opus can calculate and display MD5 checksums (hash values) for files in the current folder, but this feature is disabled by default for files over a certain size because of the time it takes to calculate the hashes.
    • Directory Opus also provides options to manually calculate MD5 checksums for selected files, or for all files in a directory regardless of size; with the size limit removed, MD5 calculation can be time-consuming.

3. Duplicate Finder in Directory Opus:

    • Directory Opus's duplicate finder has an MD5 mode, which could theoretically be used to find duplicates based on MD5 checksums.

4. Potential Limitations and Requirements:

    • A direct way to feed cached MD5 sums into Directory Opus's Find Duplicates panel does not appear to be documented.
    • Using a third-party database or hash file (such as TeraCopy's MD5 hash file) to speed up the duplicate-finding process is not explicitly mentioned in the available resources.

While Directory Opus 13 supports integration with "Everything" and has capabilities related to MD5 checksums, it's not clear whether it can directly utilize an external MD5 hash file generated by TeraCopy for its duplicate finding process. The standard approach in Directory Opus seems to involve generating its own MD5 checksums for duplicate searches.

Given the complexity and specificity of your requirement, and the potential lack of direct support for this exact process in Directory Opus 13, I recommend exploring the following steps:

1. Consult Directory Opus Support or Community Forums: Reach out to Directory Opus support or engage with their community forums to inquire about this specific use case. There might be scripts or advanced configurations available that are not widely documented.

2. Custom Scripting or Automation: If Directory Opus doesn't natively support this functionality, consider whether a custom script or automation tool could bridge the gap, using the TeraCopy hash file and Directory Opus's file management capabilities.

3. Alternative Tools: If the exact process you're looking for is not feasible with Directory Opus and TeraCopy, you might need to explore other file management tools that allow more direct integration with external hash files for duplicate searching."
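
To make suggestion #2 concrete, here's the sort of glue script I have in mind. It's only a rough sketch under a couple of assumptions: that TeraCopy saved a standard md5sum-style checksum file (one `<hash> *<path>` pair per line), and the file name `checksums.md5` is just a placeholder.

```python
# Rough sketch (not DOpus-specific): read a TeraCopy-style .md5 file and
# group the recorded paths by hash, so duplicate sets fall out without
# re-hashing anything. Assumes the standard md5sum line format:
#   <32-hex-digest> *<path>
from collections import defaultdict
from pathlib import Path

def load_md5_groups(md5_path):
    """Map each MD5 digest to the list of paths recorded for it."""
    groups = defaultdict(list)
    for line in Path(md5_path).read_text(encoding="utf-8-sig").splitlines():
        line = line.strip()
        if not line or line.startswith(";"):     # skip blanks and comments
            continue
        digest, _, name = line.partition(" ")
        groups[digest.lower()].append(name.lstrip("* ").strip())
    return groups

if __name__ == "__main__":
    # "checksums.md5" is a placeholder for whatever file TeraCopy wrote.
    for digest, paths in load_md5_groups("checksums.md5").items():
        if len(paths) > 1:                       # 2+ paths = a duplicate set
            print(digest)
            for p in paths:
                print(f"  {p}")
```

From there, the duplicate sets could presumably be handed to Opus (e.g. loaded into a file collection) instead of asking the Find Duplicates panel to re-hash everything; whether that's practical is exactly what I'm hoping someone here can confirm.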


P.S. I'd like to echo other requests for more detail about how to leverage, improve, and clear the MD5 cache in Dopus. For a program that is made to give control freaks a utopia, this ongoing omission is a bummer. For example: knowing the cache is clear, via simple feedback, is a great way to establish confidence that Dopus really will capture the current state of my files. Being able to manually nuke it (or do so with a hotkey) would improve my personal use of (and confidence in) each run. Perhaps others would benefit too?

P.P.S. Once we solve this, I plan to beg for xxHash3-64 support. :point_right::point_left:

P.P.P.S. Please make the Everything integration documentation a bit clearer for a new reader. Illustrating a "before and after" in a pop-up tip or similar could help readers understand what the integration does and how it works.

Keep up the great work! Edited for clarity.


If I'm understanding correctly, you want to take a file with several hashes and pass that info to Everything through DOpus, to find duplicates in Everything's database?
Even if it can be done, it wouldn't be fast by any means, since Everything would have to calculate the hash for each file in its database (I'm not sure whether the MD5 checksum can be indexed for fast lookups, but I don't think so).
In fact, I've been taking advantage of my forced vacation these days to work on a script that does exactly that: find duplicates with Everything based on their properties, using DOpus to automate the process. Hence I would recommend:

  1. It can be done with Everything alone, but it takes a bunch of clicks and is tedious if you plan to search through many files, which is why DOpus is a great help.
  2. Don't try to search for dupes in Everything directly by their hash; instead, first reduce the results by filtering on name, extension, size, etc., and only at the end use the MD5 checksum for comparison (see the sketch below).
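
To illustrate point 2 outside of DOpus and Everything: the idea is simply to pay the hashing cost last. A minimal Python sketch of the principle (my own illustration, not the script I mentioned):

```python
# "Filter first, hash last": group candidates by size, then MD5-hash only
# the groups that still contain more than one file.
import hashlib
from collections import defaultdict
from pathlib import Path

def md5_of(path, chunk_size=1 << 20):
    """Stream a file through MD5 in 1 MiB chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        while block := f.read(chunk_size):
            h.update(block)
    return h.hexdigest()

def find_dupes(root):
    by_size = defaultdict(list)
    for p in Path(root).rglob("*"):          # cheap pass: size only
        if p.is_file():
            by_size[p.stat().st_size].append(p)
    by_hash = defaultdict(list)
    for same_size in by_size.values():
        if len(same_size) < 2:               # a unique size can't be a dupe
            continue
        for p in same_size:                  # expensive pass: hash survivors
            by_hash[md5_of(p)].append(p)
    return {h: ps for h, ps in by_hash.items() if len(ps) > 1}

if __name__ == "__main__":
    for digest, paths in find_dupes(".").items():
        print(digest)
        for p in paths:
            print(f"  {p}")
```

The same narrowing works with any property Everything already indexes: every cheap filter you apply first shrinks the set of files that ever need to be read from disk.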

you want to use a file with several million hashes, pass that info to DOpus' dupe finder via Everything to find duplicates in DOpus

Exactly. :wink:

Thanks for the reply. Sorry to hear about the forced vacation, but I'm glad you're doing well and staying productive during it.

Unrelated: this forum is fantastic. What back-end is running it?


You rock. Thanks.