I'm not sure yet what is wrong here. I replaced an old 256GB Samsung SSD with a 4TB SATA SSD, also from Samsung. After installing it I had to copy a lot of files, and that's when I first noticed the problem; the last two times were within the last hour, which is why I opened this post. I've had to kill and restart DOpus a lot over the last few weeks.
It happens when I copy a lot of smaller files; the more files, the worse DOpus reacts. If I copy around 5000 files (I can't say exactly how many it takes before it becomes noticeable), DOpus is very slow afterwards: tooltips take many seconds to fade in and out, and switching tabs gets very slow as well. I often kill DOpus with Task Manager and restart it, and then it works normally again. Later I copied 15000 files; afterwards DOpus reacted slowly at first and then stopped responding entirely, although no message ever came up saying DOpus was not responding. I guess it was just so slow that it looked like a freeze.
It only happens when DOpus itself copies the files, and the copy itself completes without a problem, but as soon as it is finished, the slowdown described above starts. I also have a script that uses robocopy, which I launch from DOpus, and with that there is no problem at all. The rest of my system also reacts fine; so far it is only DOpus.
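(For anyone curious, the script is nothing special; it boils down to a plain robocopy call. The sketch below is only an illustration with placeholder paths and options, not the actual script:)

```python
import subprocess

# Placeholder paths, not the real ones used in the actual script.
src = r"D:\Source"
dst = r"E:\Destination"

# /E copies subdirectories (including empty ones); /R and /W limit retries/waits.
result = subprocess.run(["robocopy", src, dst, "/E", "/R:2", "/W:5"])

# robocopy exit codes 0-7 mean success or partial success; 8 and above mean errors.
if result.returncode >= 8:
    print("robocopy reported errors, exit code", result.returncode)
```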
My system: an i7-8700K, a 2TB Samsung M.2 SSD, the 4TB Samsung SATA SSD, 2TB and 4TB WD SATA HDDs, and 32GB of DDR4 RAM.
Have you tested similar copies with other software to see if they're similarly affected?
What does Task Manager show for the CPU and Storage utilisation after the copy completes?
Could you generate some process snapshots while the problem is happening? We can use those to see which code is active at that time. (Assuming there's high CPU or Storage usage, and it's attributed to dopus.exe.)
Possibilities include:
If Opus has folder tabs open displaying the files (not just the parent directories), it may be inspecting them to get metadata. Close any unnecessary folder tabs to see if it helps.
Antivirus and/or search indexing scanning all the new files.
Hardware/drivers being slow when dealing with a backlog. (Make sure you have installed the motherboard's SATA drivers, for example. They can be very important.)
Shell extensions etc. could be reacting to the changes as well (although usually only if they can see the files in an open folder, so this is closely related to the first possibility; they may just exacerbate the issue).
FWIW, over the last 3 days I've been moving a huge amount of data from HDD to SSD, including file copies using Opus where over 2 million files and 3TB of data were moved in a single batch, and I haven't seen similar issues.
A word of praise: I recently had to back up about 10TB of data (spread over 9 (very) old disks, one of which was damaged by carelessness) to 2 large disks. I did it all with Opus. Some files were large, of course, while others were tiny and numbered in the thousands.
It was really quick: the whole job took me about 2 working days, with an average speed of >= 100MB/s (the USB 2 drives managed ~40MB/s).
There was not a single error.
Even the damaged disk (which I had once disconnected without safely removing it) complained loudly at first, but everything on it was recovered completely.
Of course, expectations of speed vary, but the expectation of error-free recovery is probably the same everywhere.
So I am completely satisfied with Opus' performance.
I know, it is an EVO. And like I said, the copying itself is not the problem; that runs normally. The slowdown also happens when I copy to my M.2 drive or to an HDD, AFTER the files have finished copying. So it shouldn't have anything to do with that SSD.
To make it clear again: DOpus slows down as soon as it has copied all files, not during copying, only afterwards.
RoboCopy copies the files and then exits, so it may not hit whatever is bottlenecking things after the copy finishes. A/V tools can also treat different processes differently.
It's also possible Opus is working through the change notification backlog from the copy (which in turn could be slowed down by things like antivirus), although then I'd expect Opus to be slow after doing the copy in another tool as well, so that may not fit what you're seeing.
The process snapshots I mentioned are probably the best way to see what's going on without lots of guesswork.
I think I found the issue, even if it is a bit strange despite being logical. So far it looks like I simply had way too many tabs open, but the strange thing is that it seems like only a few more tabs (maybe 5-10) were enough to trigger this "bug".
With a fresh lister I couldn't reproduce it, and since I closed a lot of tabs the problem seems to be gone in my default lister as well; I can't reproduce it anymore.
As I tested further I also noticed that parts of the UI were lagging or had stopped working. For example, hovering over a directory to have its size calculated no longer worked, and neither did the right-click context menu. At first I also couldn't select files, switch to the other lister (dual-lister setup), or use drag & drop, but those last two came back a few minutes later. I could still open directories with a tiny lag and scroll through the lister, and even opening files with a double-click worked, but opening them via the menu had a lag of 20+ seconds. In general the menus had a huge lag or stopped working entirely; for example, when I tried to load another lister layout from the menu, nothing happened after I clicked on it.
So it is clear that some parts of DOpus were still working as usual and some were not. I let it run for 15 minutes to see if it would go away, but it didn't (the process was still at 8% CPU usage); I had to close the lister and open a new one, and then everything was fine again. It is strange that the problem appeared so suddenly: maybe 5-10 more tabs and the problem was there, while before it was not. But I also had way too many tabs open; I didn't count them, but it could have been over 200... So maybe I hit a limit.
I don't know if the dump files are still useful. I created five while it was happening, but I read that they can contain private data, so first I would need to know what kind of data that is: tab names? Directory entries? Since I had so many tabs open, there could be some sensitive data in there.