Documenting potential models to evaluate as a 'baseline'
What pre-trained/existing models do we want to use for evaluation? We might only do one, but as part of building the dataset it would be nice to show at least one evaluation of biases, recall, etc. in pre-trained models on GLAM images. My suggested criteria for choosing a model to evaluate would be:
- evidence of 'use in the wild' in a GLAM setting
- 'out of the box' solutions - these might be more likely to be picked up and used
- well-known models (e.g. YOLO)
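Whichever model we pick, the recall evaluation mentioned above could follow the usual box-matching approach. A minimal sketch (the box format and threshold here are illustrative assumptions, not tied to any particular model's output):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0


def recall(ground_truth, predictions, iou_threshold=0.5):
    """Fraction of annotated faces matched by at least one predicted box."""
    if not ground_truth:
        return 1.0
    matched = sum(
        1 for gt in ground_truth
        if any(iou(gt, p) >= iou_threshold for p in predictions)
    )
    return matched / len(ground_truth)
```

Computing this per image, and then grouping images by the metadata facets we care about (period, image type, etc.), would give a simple way to surface recall disparities across the dataset.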
Likely candidates
[ ] YOLO (there are many versions of this by now. I would try to find out which version is the most user-friendly, on the basis that this is going to make it more likely to be used by a GLAM)
[ ] Google Cloud Vision API. I have run out of Google credits now, but we might be able to get enough to evaluate for free
[ ] Azure?
[ ] AWS? My sense (with no empirical evidence) is that in terms of popularity in GLAM settings the cloud providers rank AWS > Google > Azure. It probably doesn't matter too much which we pick, but it might be worth documenting the differences (if any) in how much control a user has when using these APIs.
More hands-on/research code approaches to face detection