High CPU when another process is doing lots of file changes

I work as a software developer on a large Java project. When I have Opus open and I run a command-line build of our product ("gradle clean build"), Opus jumps to ~10% CPU for a couple of minutes, then hums along at about 2-3% CPU for the remainder of the build. That doesn't sound like much, but our build is CPU-bound, so I'm trying to get everything I can out of my processor. Opus is not currently displaying any of the folders involved in the build. The "clean" phase of the build deletes about 1,000 folders and 100,000 files, and the build phase recreates the same number. Why does Directory Opus even care, if it is not displaying anything involved in the command-line build?

I can see in Process Monitor that Directory Opus appears to be reacting to file/folder changes on both my C: and D: drives.

The blurred sections are all paths to subfolders of my project folder, but again, Opus is not displaying any of them. I've left the C: and D: paths unblurred because I'm wondering about this "NotifyChangeDirectory" event. It sounds like there is a hook on any change on these drives, but I'm only guessing.

I also used Process Explorer to capture some thread stacks. The first stack is from the highest-CPU thread:

This stack trace appears on some of the threads that use less CPU (1-2%), and is also the stack that appears for the remainder of the build, when Opus is running at 2-3% CPU.

You can reduce the overhead of change processing with these settings:

  • Preferences / Folder Tabs / Options / Process file changes in background tabs -- Turn that off to stop inactive tabs from processing events. Instead, they'll flag themselves as 'dirty' and re-read the directory, if needed, the next time they become active.

  • Preferences / Miscellaneous / Advanced [Troubleshooting]: collection_change_delay -- Controls how often changes are fed to collections. (Deleting any old, unwanted collections can also reduce overheads in some situations.)

  • no_external_change_notify, in the same place, turns off processing of external events entirely. We don't recommend leaving it like that, but it can be useful to verify that this is really where the overhead is coming from, and that there isn't another issue hiding somewhere.

  • Still in the same place, make sure notify_debug and shellchange_debug are both off, unless you're actively debugging an issue with them. The extra debug output can slow things down.

  • Still in the same place, notify_max_time and notify_max_items can be used to control how often changes are dispatched to listers/tabs, and how big the batches are.

  • If the folder tree is open, it has to listen to change events for just about everything, which increases overheads. Closing the folder tree (in all open windows) can reduce overheads.

  • Aside from that, only drives or folders which are open should be monitored. We generally monitor the whole drive if it's a local drive and something is pointing at a folder below it, but in some cases it's done on a folder-by-folder basis. If any monitoring is happening for a drive, or for a folder and its children, it will cause some overhead even if the changed files turn out not to be in a folder that is currently displayed, since there's a cost to examining the paths and working out whether anything is displaying them. Most of the overhead comes from the file displays processing things, however, which the settings above can help with.
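To make that filtering-and-batching model concrete, here is a minimal Python sketch of the general idea. This is not Opus's actual code; the class names, thresholds, and exact dirty-flag behaviour are illustrative assumptions based on the settings described above:

```python
import time


class Tab:
    """A hypothetical folder tab: either active (processes events)
    or in the background (just gets flagged as dirty)."""

    def __init__(self, path, active=True):
        self.path = path.rstrip("\\").lower()
        self.active = active
        self.dirty = False      # background tabs re-read lazily later
        self.received = []      # events an active tab has processed

    def displays(self, file_path):
        # A tab cares about a change if the file's parent folder is
        # the folder the tab is showing.
        parent = file_path.lower().rsplit("\\", 1)[0]
        return parent == self.path


class ChangeDispatcher:
    """Receives raw per-file change events for a whole drive, drops the
    ones no tab displays, and hands the rest to tabs in batches --
    loosely analogous to the notify_max_items / notify_max_time idea.
    Thresholds here are made up for illustration."""

    def __init__(self, tabs, max_items=3, max_time=10.0):
        self.tabs = tabs
        self.max_items = max_items
        self.max_time = max_time
        self.batch = []
        self.last_flush = time.monotonic()

    def on_change(self, path):
        # Cheap filter first: is any tab showing this folder at all?
        # Note that even filtered-out events cost this check -- the
        # per-event overhead of drive-wide monitoring.
        if any(t.displays(path) for t in self.tabs):
            self.batch.append(path)
        if (len(self.batch) >= self.max_items
                or time.monotonic() - self.last_flush >= self.max_time):
            self.flush()

    def flush(self):
        for tab in self.tabs:
            relevant = [p for p in self.batch if tab.displays(p)]
            if not relevant:
                continue
            if tab.active:
                tab.received.extend(relevant)
            else:
                tab.dirty = True    # re-read on next activation
        self.batch.clear()
        self.last_flush = time.monotonic()


# Example: three change events for a displayed folder trigger one
# batched flush; the background tab is only flagged dirty.
shown = Tab("D:\\work\\project", active=True)
background = Tab("D:\\work\\project", active=False)
other = Tab("C:\\Users\\me", active=True)
disp = ChangeDispatcher([shown, background, other])
for name in ("a.class", "b.class", "c.class"):
    disp.on_change("D:\\work\\project\\" + name)
```

The point of the sketch is the cost structure: every event on the drive pays for the `displays()` checks, even when nothing is showing that folder, while the batching and dirty-flag settings only reduce the cost of the events that survive the filter.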

Having said that, SYSFER.DLL appears to be part of Symantec's firewall or antivirus software. Seeing that many nested function calls within that DLL in the high-CPU thread's callstack suggests it may be causing the issue, perhaps by scanning files unnecessarily, which would mean the CPU usage you're seeing isn't really caused by Opus. But it could also be a red herring.

Thanks, let me try some of these settings.

And yes, we have Symantec on our systems, and I know it is playing a part in the performance saga I'm looking at. I was just looking up SYSFER.DLL to see what it is about. I'll work with my IT department to try to reduce what Symantec is doing.
