Closed: ProtagNeptune closed this issue 6 months ago
There are situations where a user may want to load the details, even if the md5 is already in the database and the file is already on the hard drive at that path.
For instance, I have database hooks that run for each image that insert that image's metadata into a custom DB. If I download the same file from different sources, I don't want that process to be skipped: I want my custom DB to have a separate entry for each source that contains the file.
I'm still learning how to use Grabber, but I'm worried that skipping loading details when an image is already in the md5 list might skip these DB commands, and I want to make sure that won't happen.
@solipsis-project in those cases, you can simply set your download settings to always download the file. But as things stand today, the file will be skipped anyway after loading the details, and the SQL commands won't run. This is just an optimization.
@Bionus Nope, c6b22e2 didn't fix it, so I'm guessing that my metadata setting is still somehow forcing it to load the details even though the file is already in the MD5 list and on disk. My XMP:TagsList metadata setting is:
javascript:
'artist/'+artist.replace(/([,;])+/g,'_').replace(/([_]{2,})+/g,'_').replace(/(@#&)+/g,';artist/').toLowerCase()+';'+
'copyright/'+copyright.replace(/([,;])+/g,'_').replace(/([_]{2,})+/g,'_').replace(/(@#&)+/g,';copyright/').toLowerCase()+';'+
'character/'+character.replace(/([,;])+/g,'_').replace(/([_]{2,})+/g,'_').replace(/(@#&)+/g,';character/').toLowerCase()+';'+
'meta/'+meta.replace(/([,;])+/g,'_').replace(/([_]{2,})+/g,'_').replace(/(@#&)+/g,';meta/').toLowerCase()+';'+
'model/'+model.replace(/([,;])+/g,'_').replace(/([_]{2,})+/g,'_').replace(/(@#&)+/g,';model/').toLowerCase()+';'+
'general/'+general.replace(/([,;])+/g,'_').replace(/([_]{2,})+/g,'_').replace(/(@#&)+/g,';general/').toLowerCase()+';'+
'rating/'+rating+';'+
'website/'+website
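The replace chain above follows the same pattern for each tag category. A minimal sketch of what it does to a single category, assuming the tag variable holds tags joined by the custom `@#&` separator (the sample value below is hypothetical; Grabber fills these variables in):

```javascript
// Hypothetical sample value for the "artist" variable: two tags joined
// by the "@#&" separator, one of them containing a comma.
const artist = "john_doe@#&jane,roe";

const result = "artist/" + artist
  .replace(/([,;])+/g, "_")        // commas/semicolons inside a tag become "_"
  .replace(/([_]{2,})+/g, "_")     // collapse runs of underscores into one
  .replace(/(@#&)+/g, ";artist/")  // each separator starts a new "artist/" entry
  .toLowerCase() + ";";

console.log(result); // "artist/john_doe;artist/jane_roe;"
```

The net effect is one semicolon-terminated `category/tag` entry per tag, which is what XMP:TagsList expects.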
Seems to be because of the "same dir" MD5 setting. Pushed a second fix, sorry about that 👍
Yep, 70073d6 fixed this bug for me. 👌
When you batch download and Grabber gets a URL with the md5 already in it, why should it need to load the details if the file already exists in the md5 database and on the hard drive at the Path/Folder/Filename? It keeps throwing
[Info] Details limit reached (HTTP 503). New try.
and slows the batch download by pointlessly loading details when they're just not needed. The file already exists, so it should just start the next download in the list.