I'll expand on the previous post as to why sub-collections are so great.
One of the biggest reasons is that they let you set the entry point of commonly used, deeply nested folders higher up in the tree, so you don't have to keep drilling down to them. On top of that, sub-collections let you put those folders into a logical group and give them more easily identifiable names, without actually renaming the underlying "real" folders, which you might not have permission to rename, and which would break a website's functionality if you did.
Say you are in charge of administering several different web applications, all on different machines:
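For instance, the document roots might be scattered across shares like these (the server names and paths here are invented just for illustration), and a single collection could pull them together under friendlier labels:

[code]
Web Servers            (collection)
  Shop site       ->   \\web01\htdocs\shop
  Company blog    ->   \\web02\htdocs\blog
  Intranet wiki   ->   \\web03\www\wiki
[/code]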
Is the only reason you want to do that with collections, rather than junctions, that you want to avoid accidentally deleting a load of real data when you only meant to remove something from the work area?
[quote="ryanagler"]
Not at all. The reason is that all of the above are Linux ext3 filesystems mounted via Samba. Junctions are only applicable to NTFS.[/quote]
Although, nudel, your question got me thinking, and... whoa! I just read that Vista supports symbolic links. And, unlike an NTFS junction point, a Vista symbolic link can also point to a remote SMB network path.
So, as long as you're using Vista, you can achieve the same effect as dynamic collections by making symlinks!
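For example (the link name and UNC path are just made up for illustration), from an elevated Vista command prompt something like this should do it:

[code]
rem /D makes a directory symbolic link; the target can be a remote SMB share
mklink /D C:\Servers\ShopSite \\web01\htdocs
[/code]

After that, C:\Servers\ShopSite shows up in the folder tree like any other local folder.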
Now, all that's needed is for the folder tree "Favorites/Alias" bug to be fixed, so an arbitrarily nested symlink can be made a favorite and behave like the top level of a custom tree.
I thought collections were going to be very useful to me, but then I ran into a problem similar to the above. I have several programs that rename the original file (adding .bak, for example) and create a new file, with the original name, containing the changes.
So if I have a file named "fred" in a collection and I edit it to make changes, I end up with the old version, now named "fred.bak", in the collection, while the new version, still named "fred", is not in the collection. The missing "fred" is probably the same issue mentioned above, but I'm puzzled as to why the old, renamed version is in the collection.
Hopefully the collections feature will be addressed/improved in future versions...
In another thread I was talking about how I could make a Perl script to find similar music videos and songs based on a loose comparison of different parts of the title and artist name in the filename.
Instead of trying to create a crude interface of my own to view and delete these duplicates, I was wondering if I could create a collection for each set of duplicates, or one collection with all of them grouped by similarity. Is there a way to create a DOpus collection containing a specific set of files from the command line? If so, I should be able to run DOpus with the relevant arguments from my Perl script, plug the similar files into a collection, look at them all there, and pick the ones I want to delete. I'd be very appreciative if someone could tell me how this could be done (if it can be done at all).
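From what I've been able to piece together so far, something like this might work via dopusrt.exe, which can pass internal commands to a running copy of Opus (the collection name and file paths below are just placeholders, and I haven't verified the exact syntax):

[code]
rem create the collection, then add each suspected duplicate to it
dopusrt.exe /cmd CreateFolder "coll://Possible Dupes"
dopusrt.exe /cmd Copy FILE="D:\Music\Artist - Song.mp3" TO="coll://Possible Dupes"
dopusrt.exe /cmd Copy FILE="D:\Video\Artist - Song (live).avi" TO="coll://Possible Dupes"
[/code]

If that's right, the Perl script could just shell out to dopusrt.exe with system() once per group of matches. Can anyone confirm whether that's the correct approach?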