Thinking about this some more, what appears to be a simple problem is a bit more complex.
Using my suggestion as an example would solve the problem of disk thrashing, but it leads to inefficiencies, since some queued items could run immediately once the original job (i.e. the one the queue is named after) has finished. For example:
1st job is a copy to a different physical disk: C: > D: (queue named "C:D:1A2B:3C4D")
2nd job (started while the 1st is still running) is a copy from a different disk: E: > D: (added to queue "C:D:1A2B:3C4D" since the dest matches the queue name)
3rd job (started after the 1st ended, while the 2nd is still running) is a copy to the same disk: C: > C: (the paths could be different; the end result is the same) (added to queue "C:D:1A2B:3C4D" since the source and dest match the queue name)
Given the above scenario, the 3rd job will wait until the 2nd job has finished, but it could actually run at the same time, since the physical disks are different.
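To make the clash concrete, here's a minimal sketch in Python (the IDs for C: and D: come from the queue name above; E:'s ID, 5E6F, is invented for the example): two jobs only conflict if they share a physical disk ID.

```python
# Hypothetical disk IDs: C: = 1A2B, D: = 3C4D, E: = 5E6F (E:'s ID is invented).
job2 = {"src": "5E6F", "dst": "3C4D"}  # E: > D:, currently running
job3 = {"src": "1A2B", "dst": "1A2B"}  # C: > C:, queued behind job2

def disks_clash(a, b):
    """True if two jobs touch any of the same physical disks."""
    return bool({a["src"], a["dst"]} & {b["src"], b["dst"]})

print(disks_clash(job2, job3))  # prints False: job3 could run now without thrashing
```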
The initial aim is simply to avoid disk thrashing, so I believe my suggestion is preferable to the current solution.
An ideal solution, which avoids disk thrashing while also avoiding these inefficiencies, would be to allow multiple jobs in a single queue to run concurrently if their source/dest IDs do not clash. Opus could do the following when a new copy operation is initiated:
- Check the existing queue names to see if the job should be added to a queue (i.e. if the source or dest disk ID matches either or both IDs in the queue name, it's a match).
- If a matching queue is found, the job is added to it as before, but Opus also checks whether the new job's source/dest IDs match any of the jobs currently running in that queue. If a match is found, the job is queued (i.e. waits to run); if not, the job runs immediately (so multiple jobs in the same queue can be running concurrently).
- To keep maximum efficiency, one more thing needs to be done: when a job finishes, Opus should check all non-running jobs in the same queue against the jobs still running. Any which don't match can be run immediately.
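The three steps above could be sketched roughly like this (a Python sketch of the idea, not Opus's actual internals; all names here are mine):

```python
from dataclasses import dataclass, field

@dataclass
class Job:
    src: str   # physical disk ID of the source
    dst: str   # physical disk ID of the destination

@dataclass
class Queue:
    name: str                                   # e.g. "C:D:1A2B:3C4D"
    running: list = field(default_factory=list)  # jobs currently copying
    waiting: list = field(default_factory=list)  # jobs queued behind a clash

def clashes(job, running):
    """True if the job shares a physical disk with any running job."""
    return any({job.src, job.dst} & {r.src, r.dst} for r in running)

def submit(queue, job):
    """Step 2: only queue the job if it clashes with a running job."""
    if clashes(job, queue.running):
        queue.waiting.append(job)
    else:
        queue.running.append(job)   # start copying immediately

def on_finish(queue, job):
    """Step 3: when a job ends, promote any waiting jobs that no longer clash."""
    queue.running.remove(job)
    for w in list(queue.waiting):
        if not clashes(w, queue.running):
            queue.waiting.remove(w)
            queue.running.append(w)
```

Running the three-job scenario from earlier through this: the 2nd job (E: > D:) waits behind the 1st (C: > D:) because they share D:, but the 3rd (C: > C:) starts immediately alongside the 2nd because their disks differ.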
The ability to manually create named queues complicates things further, since a job added to one of these user-created queues could potentially clash with one of the auto-queued jobs. The solution is to also check the running jobs in manual queues before starting a new job.
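That cross-queue check could be a single guard run over every queue, manual or auto-named alike, before a job starts (again a hedged sketch; the names and data shapes are assumptions of mine, not anything Opus exposes):

```python
def safe_to_start(job_disks, all_queues):
    """job_disks: set of physical-disk IDs the new job touches.
    all_queues: every queue, auto-named and user-created alike, mapping the
    queue name to the disk-ID sets of its currently running jobs."""
    for running_jobs in all_queues.values():
        for disks in running_jobs:
            if job_disks & disks:       # shared physical disk -> would thrash
                return False
    return True

# e.g. a manual queue "overnight" with a job running on disk 3C4D (D:)
queues = {"overnight": [{"3C4D"}]}
print(safe_to_start({"1A2B"}, queues))           # prints True
print(safe_to_start({"1A2B", "3C4D"}, queues))   # prints False
```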
While all this may seem a little complicated, it's not overly so IMO. The main issue would be the UI needed for running multiple jobs concurrently in a single queue.
However, as Opus's main job is file management, it would be time well spent, as it would maximise copy/move efficiency when performing multiple copies/moves.