One of the most useful tools in Opus is the Synchronization tool. However, I found it's not too smart, or I can't find the right option.
I'm talking about one-way copying with included "Delete files from destination that don't exist in the source" and "Synchronize sub-folder contents".
As far as I understand, it works like this:
1. Copy the differences to the destination.
2. Delete unmatched-to-source files from the destination.
This logic has an apparent flaw: it tends to run out of space on the destination, leaving Opus unable to complete the operation, even though there would be enough space if it deleted the unmatched files and folders first. So, is there a way to turn this around and do the deleting first (without the recycle bin), thus freeing some space, and then do the copying?
Please take this seriously: if you copy 50,000 files, you probably leave the operation running overnight, expecting it to be done when you wake up - not to be greeted by a "Low Space" error asking you to abort the operation and do some deleting yourself.
DO's sync tool is good for "basic" operations. If you sync a lot and with different profiles, including the option you want and much more, I can recommend SyncBack Pro. You can also integrate it into DO (running profiles from a button and including it in DO backups/scripts, since a portable version is available).
But note: the usual way to back up/sync is to copy first, not to delete first!
I believe you are happy with that USD 55 solution, but I still think that DO should be smarter than that. I don't sync that often, but when I do, I want DO to get the job done. That's not too much to ask, is it?
There are pros & cons to deleting at the start vs end.
If the copy stage fails and you've deleted the old files already, you've potentially lost that data (or only got one copy of it, on the drive you are trying to back up).
Of course, if the target drive fills up then that's also a problem, so sometimes it is better to delete first if you can't ensure that the target drive has space to temporarily contain both the old & new data.
It's something we may change or add an option for at some point, but I just wanted to say that the way it works now is intentional and not a mistake.
Of course it isn't; I just recommended a tool that works the way you want, because I know the options you have:
It will take some time for GP to implement this, if it gets implemented at all (as Leo said, the current behaviour is intentional, and there are other important sync requests on the list, like profiles).
There are only a few tools that offer deleting first (most of them only delete the oldest backup, e.g. if you have two or more full backups), so you either need one of those tools or have to spend some money on bigger backup storage.
This is understandable on the one hand. However, if I do the synchronization this way (deleting unmatched files at the destination), I presume that the originals are already in the source, right? So why would I need the destination's files and folders that are going to be deleted anyway? The only problem there is that the unattended operation fails to complete. As I stated earlier, synchronizing a large number of files takes hours to complete, and I want it done. Currently, I am never sure whether it will complete or hit a dead end.
This really should be added at least as an option, so if somebody prefers to do it the "secure" way, that's their choice. I am pretty sure it's not that big a deal to program, since everything else is already there.
I may have overlooked something, but DOpus could delete only files that do not exist in the source location, i.e. files that are meant to be completely deleted anyway.
This way, the low-disk-space problem is reduced to the situation where files that are overwritten earlier are [much] bigger than files that are overwritten later.
E.g. you have a 100 GB backup disk containing:
1.zip 30 GB
2.zip 60 GB
and overwrite those files with newer versions:
1.zip 60 GB
2.zip 10 GB
If 1.zip gets overwritten first, we run out of space.
But in some situations, like:
a.zip 30 GB
b.zip 30 GB
c.zip 30 GB
and we synchronize it with:
c.zip 40 GB
d.zip 50 GB
deleting a.zip and b.zip first does make sense: they are meant to be deleted and are useless there. Copying d.zip before deleting a.zip and b.zip causes a low-disk-space problem.
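Those two examples can be checked with a small simulation (a hypothetical sketch; sizes in GB, assuming a 100 GB disk in both cases, and assuming an overwritten file's new copy briefly coexists with the old one):

```python
def fits(capacity, dest, source, delete_first):
    """Return True if a one-way sync fits on the destination.
    dest/source map filename -> size; copies happen in name order.
    Assumes the new copy of an overwritten file is written before the
    old one is removed (worst case for space)."""
    dest = dict(dest)                      # don't mutate the caller's dict
    used = sum(dest.values())
    to_delete = [n for n in dest if n not in source]
    if delete_first:
        for n in to_delete:
            used -= dest.pop(n)
    for name, size in sorted(source.items()):
        if dest.get(name) == size:
            continue                       # unchanged, nothing to copy
        if used + size > capacity:
            return False                   # low-space error
        used += size - dest.pop(name, 0)   # old version removed after copy
        dest[name] = size
    if not delete_first:
        for n in to_delete:
            used -= dest.pop(n)
    return True
```

With the numbers above, copy-first fails for the a/b/c example while delete-first succeeds; in the 1.zip/2.zip example, even delete-first fails when 1.zip is copied first, which is exactly the overwrite-order problem described earlier.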
Hmm... I think Opus could and should be a bit smarter here. Since we're talking about a 'sync' operation, Opus has already cataloged which items (at a file level) will need to be copied to the destination as part of the initial compare phase, right? So Opus should be able to obtain the total size of the data to be copied, as well as the total size of the data to be deleted, fairly easily and quickly. Why shouldn't Opus then make an assessment before any copying takes place that looks at:
a. The size of the data needing to be copied to the destination
b. The size of the data needing to be deleted from the destination
c. The available capacity on the destination
If, however, the available capacity (c) is not enough on its own, but would be once the deletions (b) are taken into account, then perhaps Opus could go at it in a staggered approach... Meaning, rather than delete the ENTIRE set of "unmatched-to-source files from the destination" in one go, perhaps Opus could assess whether deleting a little bit as you go could free up enough space to allow the 'next' file from the source to be copied. I don't know what kind of overhead that might add to the total operation run-time, but it could be a compromise between implementing a method that would just "get the job done" by deleting everything up front vs honoring the concerns behind the way things work now.
And I do tend to lean more towards daroc's last point: in many cases, a potential failure in copying files from the source doesn't necessarily increase the value of the files you've already accepted will be deleted from the destination in this kind of sync operation. The conspicuous scenario where that wouldn't necessarily be the case is if you're dealing with datasets where, instead of just 'updating' a file on the source, a new 'version' of the file gets created with an incremented filename or something like that. In that case, yeah... it might be a lot more valuable to have a slightly outdated 'version' of the file on the destination than no version at all.
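The staggered idea could be sketched roughly like this (hypothetical Python; assumes the compare phase already produced a copy list with new/old sizes and a delete list, and that the actual I/O is done by injected callbacks):

```python
def staggered_sync(capacity, used, copies, deletions, copy_fn, delete_fn):
    """Interleave deletions with copies: before each copy, delete just
    enough unmatched destination files to make room, keeping the rest
    around as long as possible.
    copies:    list of (name, new_size, old_size_on_dest) tuples
    deletions: list of (name, size) tuples for unmatched destination files
    copy_fn/delete_fn perform the real I/O (injected so this is testable)."""
    pending = list(deletions)
    for name, new_size, old_size in copies:
        # the new copy needs new_size free while the old copy still exists
        while used + new_size > capacity and pending:
            victim, vsize = pending.pop(0)   # free space only when needed
            delete_fn(victim)
            used -= vsize
        if used + new_size > capacity:
            raise OSError("not enough space on destination")
        copy_fn(name)
        used += new_size - old_size          # old version removed after copy
    for victim, _ in pending:                # delete whatever is left at the end
        delete_fn(victim)
    return used
```

Running the a/b/c example through this gives the order: delete a.zip, copy c.zip, delete b.zip, copy d.zip - so b.zip survives until its space is actually needed.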
I'm against conditions like "if the size of the files to be copied is greater than the size of the files to be deleted in the destination, then DOpus tries to delete some of those files to free up enough space". That is messy, hard for users to understand (and probably very hard to test/debug), and can lead to data loss (situations similar to the one Leo described).
Either all files are deleted before copying starts, or all files are copied first and then the deletions happen.
The solution has to be simple and understandable for users.
I think a simple solution is to add an option to delete files before copying starts. I don't really need it myself; I'm just trying to find a quick way to achieve what is really needed.
I strongly agree that "delete first" should be an option. I used to use a TrueCrypt container on my USB stick to sync data between work and home, and because I didn't want to encrypt the entire USB stick - just the work data - I made the TC container as small as possible. However, I often ended up with insufficient space when trying to sync.
What about first adding some more useful sync features like profiles, accessing external drives by label (not drive letter), using different copy methods, auto-starting a sync on USB connect, and so on?! That would make DO smarter!
And I agree with daroc: as long as you cannot save options for different sync operations, too many options could be dangerous (DO has no test mode like some sync tools have - for a good reason!!!).
Don't misunderstand me: I would really like to use DO's sync instead of an external solution, but deleting first is not the "killer" feature (and no argument for buying DO either)!
We may add an option to make sync delete first. It needs to be looked at more in detail before we're sure what it involves, but we've been talking about it and agree it could be useful.
...However, if you want automated/unattended syncing/backups of huge amounts of data, the Opus sync tool is not the best fit for the job, and never will be. It's designed to be used interactively, to do relatively small ad-hoc syncs where you look down the list of differences and inspect things, maybe turn some things on and off manually, before clicking "go" and watching things happen, interacting if there are any issues along the way.
An automated backup tool seems more like what you want, IMO, and that's not something we're trying to turn Opus into, since it's a full-time job in itself and we'd be wasting our time trying to compete with tools which are dedicated to that job alone.
@Leo: I absolutely agree, but as the main sync options still exist, please think about the idea of creating "sync" buttons or profiles, where source/destination and options could be set. With that option, DO's sync would be a perfect small sync tool for basic, daily usage.
If you often sync/backup two-way and one-way, from different locations and with different options, you can easily run into data loss by setting the wrong options. That's the main reason I don't use DO's sync for basic backups: I have to "configure" it each time for different operations and verify the options. And that's also the reason why adding many new options doesn't make sense as long as you can't save them.
@Rated RR: You're right, it should just be an on/off button in Preferences or the sync panel - "Delete first" - without any further conditions.
@Sasa: this topic is about low disk space (which, in my opinion, is a relatively simple feature). Leo already said they will look into the matter. This is not the right place to push your own ideas and feature requests that are only loosely related to this topic. Please create a new topic if you want to talk more about other features you want implemented in DOpus.
@daroc: I disagree. If we're talking about new features for sync, we should first find a way to organize them, because it makes no sense to configure x conditions for each operation. BTW, I already requested this a long time ago.
And to be honest: the way backup works is copy first, not delete first. If you run out of disk space, buy a bigger one (which is what Jon already said above).