I had a good week back after the holiday. I figured out 'system:number of urls', and you can now import rtf files!

You will find the new search under a stub, 'system:urls', at the bottom of the normal system predicate list (where 'system:known urls' has also moved). This first version is simple: it counts all URLs, regardless of how important they are, but I can see a future version having the ability to scan by 'post URLs' or by specific URL classes. There are also some bug fixes and improvements to system:hash parsing. Remember that 'this file has this hash' is core to the whole database, so if you start messing around with ids, I can't promise that some maintenance routine won't notice the problem later and throw a bunch of warnings and errors. You cannot swap a file's hash in the convenient way you might be thinking, unfortunately.

You can now access all the 'set these files as same quality duplicates' commands outside of the duplicate filter, through the normal thumbnail menu, when you have multiple files selected. You will want to set up your 'default duplicate metadata merge options' in a duplicates processing page first, so your client knows what to merge where in these situations. Here's the general help, if you haven't seen it. Most of this tech is in the 'duplicates' system, where you can copy tags, ratings, and URLs from one file to another, but it is clunky to work with and doesn't copy every possible thing. I plan to keep working on it, because we are running into more and more situations where we want clean, easy, and automatable sharing/duplicating of metadata across similar files.

Give it a go and let me know if you run into any trouble! For anyone who drives the client programmatically, there are a couple of rough Client API sketches below.
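Here is a minimal sketch of searching with the new predicate over the Client API. It assumes a local client with the API enabled on its default port and an access key with search permission; the access key is a placeholder, and whether the API's system predicate parser accepts the new 'system:number of urls' text (and this exact string form) is an assumption to verify against your client version.

```python
# Minimal sketch: search by the new 'system:number of urls' predicate via
# the hydrus Client API. Assumes a local client on the default port and an
# access key with search permission. The predicate string form is assumed;
# check your client version's system predicate parser.
import json
import requests

API_URL = "http://127.0.0.1:45869"           # default Client API address
ACCESS_KEY = "replace-with-your-access-key"  # hypothetical placeholder

def search(predicate: str) -> list[int]:
    """Return the file ids matching a single system predicate string."""
    response = requests.get(
        f"{API_URL}/get_files/search_files",
        params={"tags": json.dumps([predicate])},  # tags is a JSON list
        headers={"Hydrus-Client-API-Access-Key": ACCESS_KEY},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["file_ids"]

# e.g. files with more than five URLs of any kind
file_ids = search("system:number of urls > 5")
print(f"{len(file_ids)} files matched")
```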
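And a similarly hedged sketch of the thumbnail menu's 'set these files as same quality duplicates' action done programmatically. The /manage_file_relationships/set_file_relationships endpoint is in the Client API docs, but the exact 'relationship' enum value for 'same quality' is my reading of them, so double-check it before relying on this; setting do_default_content_merge to true is what applies your 'default duplicate metadata merge options'.

```python
# Hedged sketch: mark two files as same quality duplicates and apply the
# client's default duplicate metadata merge options. Verify the endpoint
# and the 'relationship' enum value against your client's API docs.
import requests

API_URL = "http://127.0.0.1:45869"
ACCESS_KEY = "replace-with-your-access-key"  # hypothetical placeholder

def set_same_quality(hash_a: str, hash_b: str) -> None:
    """Set two files (by sha256 hex hash) as same quality duplicates."""
    response = requests.post(
        f"{API_URL}/manage_file_relationships/set_file_relationships",
        json={
            "relationships": [
                {
                    "hash_a": hash_a,
                    "hash_b": hash_b,
                    "relationship": 2,  # assumed: 2 = 'same quality'
                    "do_default_content_merge": True,
                }
            ]
        },
        headers={"Hydrus-Client-API-Access-Key": ACCESS_KEY},
    )
    response.raise_for_status()
```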