The current implementation uses the `getSizes` call to fetch each photo's URL, and every `getSizes` call is a separate Flickr API query. The number of API hits therefore grows with the number of images being fetched.

For example, take the task: "Search 400 words on Flickr and fetch 4000 images per word." With the current library this can cost up to 400 * 4000 = 1,600,000 hits to the server, well past Flickr's hourly limits (see the limits section).

However, the same task can be completed in 400 * (4000 / 500) = 3200 hits, which is below Flickr's hourly limit. This works because the search API returns up to 500 results per page and can include photo URLs directly in the search results via the `extras` parameter (see the extras field in the search API documentation). Under the current implementation each fetched photo is already parsed and saved with the `Photo` class, with the max-size URL cached, so that part could stay the same.
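A minimal sketch of the batched approach described above. The `search` callable, the response shape, and `iter_photo_urls` are assumptions standing in for the real client (no network calls here); the only facts taken from the Flickr API are the 500-results-per-page cap and the `extras="url_o"` parameter that returns URLs with the search results:

```python
from typing import Callable, Dict, Iterator, List

FLICKR_MAX_PER_PAGE = 500  # the Flickr search API returns at most 500 results per page


def iter_photo_urls(search: Callable[..., Dict], text: str, total: int,
                    per_page: int = FLICKR_MAX_PER_PAGE) -> Iterator[str]:
    """Yield up to `total` photo URLs for `text`, costing one API hit per page.

    Passing extras="url_o" asks the search API to include each photo's
    original-size URL in the results, so no per-photo getSizes call is needed.
    `search` is a hypothetical stand-in for the library's search method.
    """
    fetched = 0
    page = 1
    while fetched < total:
        resp = search(text=text, page=page, per_page=per_page, extras="url_o")
        photos: List[Dict] = resp["photos"]["photo"]
        if not photos:
            break  # no more results for this search term
        for photo in photos:
            if fetched >= total:
                return
            yield photo.get("url_o", "")
            fetched += 1
        page += 1
```

With this shape, fetching 4000 photos for one search word costs ceil(4000 / 500) = 8 hits instead of 4000, giving the 400 * 8 = 3200 total above.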
Thanks again for the great repo!
Cheers!