Starting synchronize 'byte comparison' at a certain file

I have a 5TB Seagate SMR USB drive where I back up hundreds of 30GB ISO files.

As SMR is certainly not an ideal technology, I need to be able to verify the files - at least once, and again as I add new ones.

The 'byte comparison' does this well, but ... it would take over 24 hours to run a full compare.

Is there a way, or would it be possible to add one, to start the comparison (sorted by date) at a certain file number?

This would mean having a Pause button that might show the 'current' file number - or having it store a pointer on the drive? And then a Resume button that would allow input of this file number, or automatically pull it from a pointer stored on the drive?

thanks!

There isn't a way to do that. I'm not sure how the UI for it would work really.

Have you considered making .SFV files or similar for the files/folders? You could then verify one or more files on demand whenever you wanted to. Comparing the SFV files for the original and backup folder would tell you if they both contained the same data, and you would also then be able to verify either copy of the data even if the other copy was lost, since it would just mean checking the hashes were still the same.
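To make the SFV idea concrete, here's a minimal sketch in Python (not a DOpus script, and not what QuickSFV/TurboSFV actually do internally - just an illustration of the format): an SFV file is one `filename CRC32` line per file, so you can regenerate the CRCs later and compare them against the stored values. The function names here are my own.

```python
import zlib
from pathlib import Path

def crc32_of(path: Path) -> str:
    """CRC32 of a file, streamed in 1 MB chunks so large ISOs fit in memory."""
    crc = 0
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            crc = zlib.crc32(chunk, crc)
    return f"{crc & 0xFFFFFFFF:08X}"

def write_sfv(folder: Path, sfv_path: Path) -> None:
    """Write the classic SFV layout: one 'name CRC' line per file."""
    with open(sfv_path, "w") as out:
        for p in sorted(folder.iterdir()):
            if p.is_file() and p != sfv_path:  # don't hash the SFV itself
                out.write(f"{p.name} {crc32_of(p)}\n")

def verify_sfv(folder: Path, sfv_path: Path) -> dict:
    """Return {filename: matched?} for every entry in the SFV file."""
    results = {}
    with open(sfv_path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith(";"):  # ';' lines are comments
                continue
            name, expected = line.rsplit(" ", 1)
            results[name] = crc32_of(folder / name) == expected
    return results
```

The point is that verification only reads one disk, so it's much cheaper than a byte-for-byte compare across two drives, and either copy can be checked on its own.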

Opus doesn't have any SFV support built-in but there are tools like QuickSFV and TurboSFV, and I think QuickPar also does SFV, all of which can integrate into Opus to some degree.

For ad-hoc comparisons, you could also set up buttons in Opus to compare two files on demand easily enough, but for what you're doing the SFV route (or something similar) seems like it'd have more advantages.

If you're using NTFS on both your source & target, this might help, if you're ok with using SHA1 (*).

A while ago I posted a proof-of-concept for multi-threaded hashing with DOpus and have since converted it to a full-fledged script. It's been working very stably since March or so, and I've been using it myself to sync 10-15 TB between multiple disks: all my movies, music, pictures, business files...

The script attaches SHA1 checksums to the files, i.e. creates an ADS stream for each file (hence the NTFS requirement), so you can process each file individually. The checksum keeps track of the file's size and modification timestamp but not its name; that means you can rename the file as you wish, but as soon as the size/moddate changes and you try to verify it, the script warns you that the checksum is outdated or "dirty". Unlike all the checksum programs I've evaluated or know of, it works file-based, so you don't need to compare full folders, which most likely contain deleted/renamed files etc. that unnecessarily cause false alarms.
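The size/moddate "dirty" check described above can be sketched like this (a portable Python illustration, not the actual DOpus script; `make_record` and `check` are invented names). The real script keeps the record in an NTFS ADS - on Windows an ADS can be written simply by opening `path + ":streamname"` - but here it's a plain dict so the logic is visible:

```python
import hashlib
import os

def sha1_of(path: str) -> str:
    """SHA1 of a file, streamed in 1 MB chunks."""
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def make_record(path: str) -> dict:
    """Snapshot hash + size + modification time at hashing time."""
    st = os.stat(path)
    return {"sha1": sha1_of(path), "size": st.st_size, "mtime_ns": st.st_mtime_ns}

def check(path: str, record: dict) -> str:
    """'dirty' if size/moddate changed since hashing (the stored checksum
    is stale), otherwise 'ok' or 'failed' depending on the actual hash."""
    st = os.stat(path)
    if st.st_size != record["size"] or st.st_mtime_ns != record["mtime_ns"]:
        return "dirty"
    return "ok" if sha1_of(path) == record["sha1"] else "failed"
```

Since only size and timestamp are recorded - not the name - a rename leaves the record valid, which matches the behavior described above.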

If the checksums are missing, you can re-calculate them on the fly, or export them from the existing ADS, copy the exported file to another machine/disk, import it there and then verify the files. The flexibility and possibilities are far beyond traditional checksum programs. The checksum calculation is completely done by DOpus; I put the rest on top of it: the multi-threading, export, import, verify, find dirty, find missing, long-filename handling, etc. I've also done extensive checks and compared the hashes to external programs; it has never failed so far. In multi-threading mode, it's faster than all but one checksum program I know of, faster than binary compare (since only one disk is involved), and faster than verifying RAR/ZIP files via WinRAR/7-Zip. Single-threaded it's as fast as DOpus & your CPU/disk allow; the script overhead is minimal.
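The multi-threading idea is roughly this (again a hedged Python sketch with invented names, not the DOpus script itself): hand each file to a worker thread. File reads release the GIL, and hashlib releases it for large buffers too, so I/O and hashing overlap on a fast SSD - while on a spinning disk a single thread avoids seek thrashing:

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def sha1_of(path: Path) -> str:
    """SHA1 of one file, streamed in 1 MB chunks."""
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def hash_all(paths, threads: int = 4) -> dict:
    """Hash many files concurrently and return {path: hex digest}.
    On a spinning disk, use threads=1 to avoid seek thrashing."""
    with ThreadPoolExecutor(max_workers=threads) as pool:
        return dict(zip(paths, pool.map(sha1_of, paths)))
```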

Regarding your question, I do almost the same thing you want daily: I perform a simple sync without binary comparison, then go to flat view in the target, sort by modified descending and verify only those files (incl. gigantic 4K movies or VeraCrypt containers). Of course, every now and then you can verify your external disks completely. The script uses 4 different collections to show you 1. verified, 2. failed, 3. dirty, 4. missing hashes for any folders/files you select. Since it's file-based, it works with flat view or collections as well. It has a progress bar with pause/abort, a thread-count setting, and there's even a disk type detection for spinning disks to set the thread count to 1 to avoid disk thrashing - but no configuration screen (yet)!

You might ask why I haven't announced the script yet: I planned many more features, but got extremely frustrated with the huge and ever-increasing script file size, so I put it on ice for a while. Its source looks very chaotic to my developer pride, but if you are interested, message me - I can help you set it up and support you with its usage. The GitHub version is stable, but there's a more recent, equally stable version I can upload if you're interested.


That sounds pretty cool and I'd like the script. Is that something you could attach to this thread or put into a beta version on github?

I'm not totally happy with the Seagate SMR drives (no issues so far, but the storage method is less desirable), but WD seems to provide almost no support for their Blacks, and I can't even find the Seagate Barracuda Pro drives that seem to be the current 'best choice'. Can't wait until I can just buy an 8TB Samsung SSD at a 'reasonable' price!!!

Thanks!!

I've checked in 2 versions (beta2 & beta3). Try beta3, because it has user-configurable parameters. Since I wasn't actually planning to publish it so quickly, the switch from the no-user-config version to the user-config version might be a little bit rushed, i.e. a few non-essential functions might not respect the user settings yet, but the essential functions, e.g. hash calculation, verification, etc., should work very reliably.

After installing the .OSP file, please check out the .DCF file as well; it contains all the pre-configured actions with (some) descriptions and some samples. Also please read the config parameter descriptions at the bottom of the config screen. There are also "script columns" which you can either use directly or use to set e.g. DOpus labels, like I've been doing for months now (add the status column to the default lister folder format if necessary).

And last but not least: although I have 100% confidence your files will not be harmed at all - no change to the original files whatsoever - please test it extensively using copies of your files, and use it at your own risk.

My tips:

- Use some copies of your files to become familiar with the essential buttons/actions and to plan your workflow. "Smart update" & "Verify" are essential; you might want to put them on a toolbar you use a lot. "Find missing" & "Find dirty" might also come in handy.
- All the import/export buttons etc. depend heavily on your needs; I occasionally use "Export from ADS", copy the exported file elsewhere, and "Import into ADS" using that file.
- "Clear cache" is normally not necessary on your main toolbar, but sometimes verify or export might refuse to proceed, saying some selected files - at the top level or somewhere under one of the selected folders - are dirty (I can explain it in detail via private message if you're interested). Even a false alarm does not affect any hash calculation or anything else.
- The faster your CPU & disk, the more you will like the multi-threaded version, but on HDDs stick to 1 thread. You can skip the drive type detection if you just copy the default "Verify" button to a new one and set the thread count to 1 (this saves 5-7 seconds at the beginning when you click "Verify").

After a while I'm pretty sure you'll get the hang of it.

Hope you like it.