File Log Behavior when Undoing Deletions

The file activity log is nearly perfect for recording every file action taken in the program, so that I can use the log to keep a separate database of file paths in sync with changes to the actual file system.

Undoing a file delete does not get recorded, however.
This means that when I update my filepath DB from the log, it inaccurately records that the file no longer exists.
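For context, here is roughly how I replay exported log entries into the database. This is only a sketch: the (action, path, new_path) fields and the table layout are placeholders for whatever the exported log actually contains, not Opus's real format.

```python
import sqlite3

# Simplified sketch of replaying exported log entries into a path DB.
# The (action, path, new_path) fields are placeholders for whatever the
# exported file activity log provides -- not the real export format.
db = sqlite3.connect("filepaths.db")
db.execute("CREATE TABLE IF NOT EXISTS files (path TEXT PRIMARY KEY, deleted INTEGER DEFAULT 0)")

def apply_log_entry(action, path, new_path=None):
    if action == "create":
        db.execute("INSERT OR REPLACE INTO files (path, deleted) VALUES (?, 0)", (path,))
    elif action == "delete":
        db.execute("UPDATE files SET deleted = 1 WHERE path = ?", (path,))
    elif action in ("move", "rename"):
        db.execute("UPDATE files SET path = ? WHERE path = ?", (new_path, path))
    db.commit()

# The problem in miniature: the delete is logged, the undo is not,
# so the DB keeps saying the file is gone even though it is back on disk.
apply_log_entry("create", r"C:\Books\textbook_a.pdf")
apply_log_entry("delete", r"C:\Books\textbook_a.pdf")
# (user presses Undo -- nothing appears in the log, so the DB stays wrong)
```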

This seems like an important omission, almost to the point of being a bug: every other file action is recorded successfully, and a user might not expect undo actions to be missing from the log.

What is the suggested way to solve this, so that any time I undo a file delete the operation gets recorded?
Any pointers or insights on how to go about this would be very much appreciated.

If I have to create a custom undo button, I would also need there to be no way to undo in the default manner, to avoid the possibility of accidentally undoing without logging the operation. Is that possible?
Thanks!

I'm not sure the log is the best choice for that at all, since it can only record a limited number of items before it starts removing old log entries, and you might copy a directory with more items than that.

What's the aim here? To keep a list of where every file is, for what purpose? The filesystem itself does that, so presumably this is for removable media or so you can search for files more quickly or something? There may be better ways to achieve those things.

Thanks for quickly getting back to me!

So to be clear, what I need is not simply a list of where every file is, since it is straightforward to get a directory listing of all file paths.

What is difficult to achieve is an accurate history for files whose paths have changed, through renaming or moving (of the file itself or of its parent folders).

So the file path database exists to hold references to metadata about each file.

Additionally, the directory structure itself is meaningful information to me, which I would like to be able to access later if necessary.

The 3 main uses for this are:

My own metadata scripts -- in other words, just as Opus internally must know what file operations occurred in order to update descript.ion files, I have my own references to information about files which need to be updated accurately.

Ad-hoc references to files by full path:

Before using Directory Opus and other better tools, I occasionally needed to reference a file and did so by its full file path (obviously a flawed system, but to maintain the working references from back then, I must have some way to match the original path of a file with its current path).

The file path is itself important metadata useful to me:

To highlight why this is useful to me, a real-world hypothetical example: suppose I have a PDF, “technical textbook A”, in ProjectFolder1, the folder for a project I’m using it for. Later, I decide to move the file to a folder for all books. The fact that the PDF was moved from ProjectFolder1 to Books, and ideally the date I moved it, is something I might like to know later.

This particular case is the only one where I definitely need to keep not just a reference to the original file path, but every subsequent path as well; a rough sketch of the kind of record I mean follows.
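Something like this, purely illustrative (the paths, field names, and dates are made up):

```python
from dataclasses import dataclass, field
from datetime import datetime

# Illustrative only: the per-file path history I want to be able to
# reconstruct from the log. Paths, field names, and dates are made up.
@dataclass
class PathHistory:
    original_path: str
    moves: list = field(default_factory=list)  # (timestamp, new_path) pairs

    def current_path(self):
        return self.moves[-1][1] if self.moves else self.original_path

history = PathHistory(r"C:\ProjectFolder1\technical_textbook_A.pdf")
history.moves.append((datetime(2024, 5, 14), r"C:\Books\technical_textbook_A.pdf"))
print(history.current_path())  # an old reference by original path can still be resolved
```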

I have spent considerable time and research finding tools for building these. Before ultimately deciding to rely heavily on Directory Opus, I built my own tools, based around a file manager written in Python with PyQt…

So I’m quite familiar with filesystem listening methods and ways to obtain file system information, including reading the USN journal.

I do still partially use this approach, by having the voidtools Everything program export its index log.

This is much more complicated, though, because that log is cluttered with irrelevant file activity, and an undo event shows up there as a file creation.

I’m not at all worried about the file activity log getting too big: I have the setting configured not to record sub-files (since I can already get a listing of all the files involved without needing Opus to do it for me), and I will export the log, either manually or via a script, frequently enough that running out of space isn’t really a concern.

I suppose one way around the problem would be, when reading the index log from Everything, to check the USN journal to see whether a file creation event was actually a file being restored from the trash. In fact there are many workarounds, but they are all quite complex and involve properly handling edge cases.
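As a very rough illustration of the kind of workaround I mean (the event tuples here are invented, and a real version would have to read the USN journal and deal with recycle-bin renames, duplicate paths, and so on):

```python
from datetime import timedelta

# Rough sketch of one workaround: reclassify a "create" event as a restore
# if the same path was deleted shortly before. Event tuples are invented;
# a real version would read the USN journal and handle many edge cases.
def classify_events(events, window=timedelta(hours=24)):
    recent_deletes = {}  # path -> time it was deleted
    results = []
    for time, action, path in events:
        if action == "delete":
            recent_deletes[path] = time
            results.append((time, "delete", path))
        elif action == "create" and path in recent_deletes and time - recent_deletes[path] <= window:
            results.append((time, "restore", path))  # almost certainly an undone delete
            del recent_deletes[path]
        else:
            results.append((time, action, path))
    return results
```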

So it would simplify things tremendously for me if I could simply get this information correctly recorded by Opus.

Anyway, sorry for the very long response; hopefully it at least puts my situation in context in a useful and understandable way.

I’ll happily share any additional info if you’d like. Thanks