10Gb network transfers

I've used Opus for YEARS and it is my favorite Windows Explorer replacement.
BUT, recently I upgraded my network to 10Gb because of the growing file sizes and the total amount of data that I need to push from one machine to another, or to a NAS on the network.

Opus does not work well with 10Gb networks, at least for me.
A file that I can transfer at 689MB/s using Windows Explorer only gets 278-341MB/s in Opus.
And if I turn that transfer around and send it the other way between the two machines (PC and NAS):
with Explorer I get 497MB/s (physical drive limitation, no RAID)
with Opus I get 65MB/s

This appears to be because Opus either does not use disk caching or is bypassing it during the transfers.
Is there a setting I'm missing?

In the future we'll be moving (where possible) to the same method File Explorer uses to copy files, since it seems some network drivers/hardware (or possibly the OS itself) just can't buffer file transfers properly unless the data is given in a very specific/arcane way.

(A consequence of this is that almost everything in every program, other than file copies done via CopyFileEx, will be similarly slow if the hardware/drivers/OS are affected in this way. It's just that you don't notice, because programs rarely report performance numbers for operations like opening and saving files.)

In the meantime, this thread has some things you can try which may speed things up:

(Edit: Changed link to go to the start of the thread.)

The copy buffer sizes/thresholds in Preferences / Miscellaneous / Advanced are the main things you can tweak that affect this at the moment.

That helped for sure, but it's still not quite right.
I get a lot of stop-and-go, but it is better.

Is this going to get fixed in the next version? It's the biggest headache for me and costs a lot of time.

I've been waiting to see this fixed before I pay to upgrade again.
Right now, when I start doing large file transfers on the network, I go back to Windows Explorer to do them.

The only issue I've really ever had with Opus is this 10Gb one, and since the average person does not have 10Gb networking in their house, it's probably not a top priority.

Is this progressing for the next version, maybe? It's my single biggest pain point and it hurts me daily. This is Windows Explorer; Opus maybe does 150-200 MB/s.


It's progressing but it will not be in the next update.

We've shown Opus is able to copy at several GB/s using a RAM drive, so the speed difference you're seeing will still come down to some aspect of the network/drivers/firewall/etc. bottlenecking somewhere, which is going to slow down everything that sends data over the network unless it uses one particular API to copy files. It's still worth fixing that, even after we release a change to use that API in Opus (which is still some time away). It's just that you can't measure how slow it's making most things, as most things don't report transfer speeds.

Does the FileSystemObject.CopyFile method use this API?

I swear to you, it's not the network/drivers/firewall/etc., because Windows Explorer and Total Commander copy at maximum speed. It's easy to measure with a stopwatch.

I haven't checked using a debugger, but I expect it does.

There's also a NirSoft command-line tool which can be used to copy files using the API, which may be a bit easier to integrate into an Opus button, if that's the plan. (Although I am not sure if it can move files as well as copy them, from memory.)

[Edit] My post about using it in Opus here: Copy files via the shell (Windows Explorer)

I don't know what Total Commander uses, but Explorer uses the high level API that I'm talking about (which Opus will be able to use in the future). That is the API that has been optimized for certain hardware/network issues, and which in turn a lot of hardware/drivers/etc. have been optimized around, while leaving the normal (and much more general) way to read and write data much slower, since almost nobody measures that and thus nobody notices it is slow and complains about it.

Measure how fast something like Photoshop can read or write a large file over the network and you'll probably find it's as bottlenecked as Opus on your setup. (I may be wrong, of course, or maybe Photoshop is a bad example. But I'd be amazed if you measured a lot of software and found that most wasn't affected; you just hadn't realised it because most software doesn't report transfer speeds when reading/writing files.)

All Opus does is call CreateFile to open the files and then ReadFile and WriteFile in a loop to read and write the data, with a configurable buffer size (and an option to disable filesystem buffering and do the buffering itself, but that is off by default). If that is slow, it's going to be slow for almost everything.

You can play with the buffer-size and non-buffered-IO options to see if they help with your particular setup. They sometimes do.

In time, we'll add the ability to use the high-level API to Opus, but it won't come any faster by bumping threads about it. That just means time spent replying instead of coding. :slight_smile:

[Edit 2]: A quick search of the Total Commander forums shows it is also using the same high-level API. Now that I think of it, I saw a post from the TC devs saying there's no way for anyone else to replicate the speed it gets on some hardware, as it's completely arcane and Windows is just broken in this regard; the options are to either use the API or be slower on some setups. They added an option to use the API, which is what we will do as well, but it still means anything not using the API is going to be very slow on the same setups.

The API is only good for copying simple files from A to B, not for reading or writing data into memory, or for archive creation or extraction, so it's limited. It's sad that Windows only optimized around that instead of the more general case and fixing the lower-level APIs like they should have.


[quote="Leo, post:10, topic:32570"]
All Opus does is call CreateFile to open the files and then ReadFile and WriteFile in a loop to read and write the data, with a configurable buffer size (and an option to disable filesystem buffering and do the buffering itself, but that is off by default).[/quote]

DOpus should read the next buffer while attempting to write the previous one. (some exceptions might exist)

You think we haven't thought of that already?

If filesystem buffering is in use, the OS itself does read-ahead and write-behind buffering. If it's turned off, Opus reads and writes the data in parallel via two separate threads and a large double buffer.

@b-s-ger Out of curiosity... does this script copy faster? It uses the Windows FileSystemObject and simply copies the selection in the source to the destination.

function OnClick(clickData) {
    var cmd = clickData.func.command;
    var tab = clickData.func.sourcetab;
    var dtab = clickData.func.desttab;
    var currLister = DOpus.listers.lastactive;
    var fso = new ActiveXObject('Scripting.FileSystemObject');

    cmd.deselect = false;

    if (currLister.dual == 0) return;
    if (dtab.path.drive == 0) return;
    if (String(tab.path) == String(dtab.path)) return;

    cmd.RunCommand('Set UTILITY=otherlog');
    DOpus.ClearOutput();
    DOpus.Output('Enumerating...\n');

    var dest = String(dtab.path) + '\\';

    for (var e = new Enumerator(tab.selected); !e.atEnd(); e.moveNext()) {
        var item = e.item();
        var source = String(item.realpath);
        DOpus.Output(source);
        if (item.is_dir) {
            fso.CopyFolder(source, dest);
        } else {
            fso.CopyFile(source, dest);
        }
    }

    DOpus.Output('\n... done.');
}

Button as XML
<?xml version="1.0"?>
<button backcol="none" display="both" textcol="none">
	<label>fso Copy</label>
	<function type="script">
		<instruction>@script JScript</instruction>
		<instruction>function OnClick(clickData) {</instruction>
		<instruction>    var cmd = clickData.func.command;</instruction>
		<instruction>    var tab = clickData.func.sourcetab;</instruction>
		<instruction>    var dtab = clickData.func.desttab;</instruction>
		<instruction>    var currLister = DOpus.listers.lastactive;</instruction>
		<instruction>    var fso = new ActiveXObject(&apos;Scripting.FileSystemObject&apos;);</instruction>
		<instruction />
		<instruction>    cmd.deselect = false;</instruction>
		<instruction />
		<instruction>    if (currLister.dual == 0) return;</instruction>
		<instruction>    if (dtab.path.drive == 0) return;</instruction>
		<instruction>    if (String(tab.path) == String(dtab.path)) return;</instruction>
		<instruction />
		<instruction>    cmd.RunCommand(&apos;Set UTILITY=otherlog&apos;);</instruction>
		<instruction>    DOpus.ClearOutput();</instruction>
		<instruction>    DOpus.Output(&apos;Enumerating...\n&apos;);</instruction>
		<instruction>    </instruction>
		<instruction>    var dest = String(dtab.path) + &apos;\\&apos;;</instruction>
		<instruction>    </instruction>
		<instruction>    for (var e = new Enumerator(tab.selected); !e.atEnd(); e.moveNext()) {</instruction>
		<instruction>        var item = e.item();</instruction>
		<instruction>        var source = String(item.realpath);</instruction>
		<instruction>        DOpus.Output(source);</instruction>
		<instruction>        if (item.is_dir) {</instruction>
		<instruction>            fso.CopyFolder(source, dest);</instruction>
		<instruction>        } else {</instruction>
		<instruction>            fso.CopyFile(source, dest);</instruction>
		<instruction>        }</instruction>
		<instruction>    }</instruction>
		<instruction />
		<instruction>    DOpus.Output(&apos;\n... done.&apos;);</instruction>
		<instruction>}</instruction>
	</function>
</button>

Well, since DOpus numbers are bad, yes. :smiley:

BTW, why two threads? Async IO is quite simple to set up.

One more thing - unbuffered IO is recommended by MSDN for copying large files.

This is a nice read: https://techcommunity.microsoft.com/t5/windows-blog-archive/inside-vista-sp1-file-copy-improvements/ba-p/723622

It doesn't give me a progress bar, so I checked via Task Manager locally and on the server. Yes, it's as fast as the Windows Explorer copy method.