[Open] rubenrivera opened this issue 4 years ago
Interesting - if I recall properly, data is compressed in order to squeeze more into cache, and potentially spread over multiple items. It's never happened to me, but of course the CacheService is not guaranteed to keep an entry for the amount of time you ask it to, so it's possible it expired. If this is the case, then playing with exponential backoff probably won't help. The problem is here, where it's detected that an entry has been spread over multiple entries:
// the master entry will return either some data, or a list of chunk keys
if (chunky && chunky.chunks) {
  var p = chunky.chunks.reduce(function (p, c) {
    var r = getObject_(self.getStore(), c);
    // should always be available
    if (!r) {
      throw 'missing chunked property ' + c + ' for key ' + propKey;
    }
    // rebuild the crushed string
    return p + r.chunk;
  }, "");
  // now uncrush the result
  var package = JSON.parse(chunky.skipZip ? p : self.unzip(p));
}
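For context, the write side of this chunking scheme might look like the sketch below. This is a hypothetical illustration, not the actual cUseful implementation; putChunked, getChunked, and CHUNK_SIZE are invented names, and a plain object stands in for CacheService.

```javascript
// Hypothetical sketch of the chunking scheme: a value too large for one
// cache slot is split across several generated keys, and the "master"
// entry stores the list of chunk keys. Not the cUseful implementation.
var CHUNK_SIZE = 8; // tiny for demonstration; real caches allow ~100KB per key

// a plain object stands in for CacheService in this sketch
function makeStore() { return {}; }

function putChunked(store, key, value) {
  if (value.length <= CHUNK_SIZE) {
    // small enough: store the data directly under the master key
    store[key] = JSON.stringify({ data: value });
    return;
  }
  var chunkKeys = [];
  for (var i = 0; i < value.length; i += CHUNK_SIZE) {
    var chunkKey = key + '_' + chunkKeys.length;
    store[chunkKey] = JSON.stringify({ chunk: value.slice(i, i + CHUNK_SIZE) });
    chunkKeys.push(chunkKey);
  }
  // the master entry only records where the chunks live
  store[key] = JSON.stringify({ chunks: chunkKeys });
}

function getChunked(store, key) {
  var chunky = store[key] ? JSON.parse(store[key]) : null;
  if (!chunky) return null;
  if (!chunky.chunks) return chunky.data;
  // reassemble the original value from its chunks, in order
  return chunky.chunks.reduce(function (p, c) {
    var r = store[c] ? JSON.parse(store[c]) : null;
    if (!r) throw new Error('missing chunked property ' + c + ' for key ' + key);
    return p + r.chunk;
  }, '');
}
```

The failure mode under discussion is exactly the throw in getChunked: the master entry survives, but one of the chunk keys it points to has expired.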
Now, the CacheService isn't guaranteed to store things for the amount of time you ask it to, so it's possible that one or more chunks have disappeared before the 'master' entry has, even though the chunks (I think) are given a longer lifetime than the master.
What expiry time are you setting?
One workaround would simply be not to fail when a chunk is missing, and to treat the entry as if it's not in the cache at all. On reflection, that's probably a better approach anyway.
So that would pretty much mean setting package to null, rather than throwing an error, if any chunk is missing. You could try taking a copy and seeing how that works out.
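A minimal sketch of that workaround, assuming a getObject_ helper that returns null for absent keys (the names here are illustrative, not the library's actual internals): if any chunk is gone, the rebuild reports a cache miss instead of throwing.

```javascript
// getObject_ stands in for the library's store-reading helper in this sketch
function getObject_(store, key) {
  var raw = store[key];
  return raw ? JSON.parse(raw) : null;
}

// Reassemble a chunked entry; a missing chunk now means "not in cache"
// (return null) rather than an error.
function rebuildChunks(store, chunky) {
  var missing = false;
  var p = chunky.chunks.reduce(function (acc, c) {
    var r = getObject_(store, c);
    if (!r) { missing = true; return acc; }
    return acc + r.chunk;
  }, '');
  return missing ? null : p;
}
```

The caller then treats a null result exactly like any other cache miss and regenerates the data.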
Thanks Bruce.
We are using the default expiry time (according to the docs it is 600 seconds / 10 minutes), as in the different tests we ran that was good enough (I think the longest write/read to/from the cache takes no more than 6 minutes). Anyway, I will check the logs again.
Also, I'm thinking of running some experiments to see whether there is an expiry time that reproduces the problem. I'll keep you posted.
Hi @brucemcpherson
My partner and I published an add-on that uses code from your site; the most relevant parts for this post are the cUseful library (version 117) and the code from Parallel implementation and getting started.
Note: to be able to use the add-on, the user must be a G Suite administrator and the domain must have Classroom enabled, with some classes and students.
We are struggling to find out how to reproduce the following error:
where SOMETHING is the name we give to a Cache Service property used to store the G Suite domain users that we retrieved using the Admin Directory service.
The error appeared for the first time for an add-on user from Spain. In our test environment we created as many users (3,000) as that user's domain has. My partner, who resides in Argentina, randomly got this error when running the add-on code in a loop (we added a task at the end of the parallel processing profile to reopen the sidebar), but doing the same thing I haven't gotten the error after several attempts. By the way, I'm in Mexico.
I'm wondering whether this error is related to network issues and, most importantly, how we should set the options of
cUseful.Utils.expbackoff
to handle it properly. (Is it safe to retry only the read of the related Cache Service property? Should we read the users directory again, or "gracefully" tell the user to try again later?)
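For reference, the general shape of the exponential-backoff wrapper we are asking about is something like the sketch below. This is generic illustration code, not the cUseful.Utils.expbackoff API; maxAttempts and baseDelayMs are invented option names, and a crude synchronous wait stands in for Apps Script's Utilities.sleep.

```javascript
// Generic exponential-backoff sketch (illustrative, not the cUseful API):
// retry fn, doubling the wait after each failed attempt.
function expBackoff(fn, options) {
  options = options || {};
  var maxAttempts = options.maxAttempts || 5;   // invented option name
  var baseDelayMs = options.baseDelayMs || 500; // invented option name
  var lastError;
  for (var attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return fn();
    } catch (err) {
      lastError = err;
      // wait roughly baseDelayMs * 2^attempt before retrying; a busy-wait
      // stands in for Utilities.sleep in this sketch
      var until = Date.now() + baseDelayMs * Math.pow(2, attempt);
      while (Date.now() < until) {}
    }
  }
  // all attempts failed; surface the last error to the caller
  throw lastError;
}
```

One caveat worth noting: if a chunk has genuinely expired from the cache, no amount of retrying will bring it back, so a retry wrapper only helps with transient read failures, not with expiry.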