TheLogFather closed this issue 8 years ago.
Hmm... I'll try it when I'm not on mobile.
OK - works really nicely for me... it means it reacts to cloudvar changes almost instantly, rather than having to wait for the next poll of varserver.
Note that it returns as soon as it has received and decoded the data [almost... it does try for an extra 1/100th sec after receiving data, just in case there's another cloudvar change straight afterwards, until it runs out of data to receive. I guess it's vaguely possible that means it could get 'stuck' in that loop if changes keep coming more often than every 1/100th sec. It's rather unlikely, but perhaps it should include some check to ensure it never stays in that loop longer than a certain amount of time, or for more than some upper limit on the number of received updates, which could be an optional param to it, I guess...]
EDIT: added the upper limit for number of changes into code snippet above - default is 10, which seems reasonable since there's [meant to be] a limit of 10 cloudvars for a project. Means it can only be stuck for max of extra 1/10th sec after receiving first update.
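The capped-drain idea described above can be sketched on its own, independent of the socket code. `drain_updates` and its `recv_once` callback are hypothetical names for illustration, not part of ScratchAPI.py:

```python
def drain_updates(recv_once, max_count=10):
    """Keep taking updates until the source runs dry or the cap is hit.

    recv_once() returns one decoded update, or None when nothing more
    arrived within the short follow-up timeout (hypothetical interface).
    """
    updates = []
    while len(updates) < max_count:   # cap bounds the worst-case extra wait
        item = recv_once()
        if item is None:              # timed out: no more changes queued
            break
        updates.append(item)
    return updates

# A source delivering endless updates: the cap stops the loop at 10.
stream = iter(range(100))
result = drain_updates(lambda: next(stream, None))  # → [0, 1, ..., 9]
```

With the default cap of 10 and the 1/100th-sec follow-up timeout, the loop can add at most an extra 1/10th sec after the first update, matching the reasoning above.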
Used the port 531 check_updates receiver for https://scratch.mit.edu/projects/96491582/ So much nicer (and quicker) than polling the varserver URL every 2 or 3 secs. :)
I am testing, but busy so I should have it pushed in about 10 hours (going on a trip yay)
Woah, has something changed after the maintenance...? The above receiver is no longer working!
It looks like updates to long cloudvars can be split across receives, and multiple updates can arrive in a single receive.
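That split/merge behaviour is the usual TCP stream property, and it can be handled with a small buffering helper. `split_lines` below is a made-up name sketching the rollover idea used in the rewrite further down:

```python
def split_lines(buffer, chunk):
    """Append a received chunk to the carried-over buffer and split out
    complete '\n'-terminated messages; the unterminated tail rolls over."""
    buffer += chunk
    *complete, tail = buffer.split('\n')
    return complete, tail   # tail is '' when the chunk ended on '\n'

# One receive can hold two updates, and one update can span two receives:
buf = ''
msgs, buf = split_lines(buf, '{"a":1}\n{"b"')   # first receive
# msgs == ['{"a":1}'], buf == '{"b"'
msgs2, buf = split_lines(buf, ':2}\n')          # second receive completes it
# msgs2 == ['{"b":2}'], buf == ''
```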
Recoding it...
Well, this was a bit tedious (and it's starting to look somewhat messy, so could maybe do with some rearranging), but it seems to be working now with my cloud speedtest project (linked above):
...
self._rollover = [] # add into CloudSession.__init__
...
def check_updates(self, timeout, maxCount=10):
    count = 0
    updates = {}                             # keep a dict of all name+value pairs received
    self._connection.settimeout(timeout)     # recv will wait for given time
    while count < maxCount:
        data = ''.encode('utf-8')            # start off blank
        while True:
            try:                             # keep concatenating receives (until ended with \n)
                data = data + self._connection.recv(4096)  # raises exception if no data by timeout
                if data[-1] == 10: break     # get out if we found terminating \n
                self._connection.settimeout(0.1)   # allow time for more data
            except:                          # or until recv throws exception 'cos there's no data
                break
        if not data: break                   # get out if nothing received
        self._connection.settimeout(0.01)    # allow quick check for more data
        if data[0] == 123:                   # starts with left brace, so don't prepend rollover
            self._rollover = []              # this rollover thing does seem to happen occasionally...
        data = self._rollover + data.decode('utf-8').split('\n')  # split up multiple updates
        if data[-1]:                         # last line was incomplete, so roll it over...
            print('Warning: last line of data incomplete?!', data[-1].encode('utf-8'))  # FYI for now...
            self._rollover = [data[-1]]      # put it into rollover for next receive
        else:
            self._rollover = []
        for line in data[:-1]:               # never need last line - it's either blank or it's rolled over
            if line:                         # ignore blank lines (shouldn't get any?)
                try:
                    line = json.loads(line)  # try to parse this entry
                    name = line['name']      # try to extract var name
                    value = str(line['value'])        # should be string anyway?
                    if name.startswith('☁' + chr(32)):
                        updates[name[2:]] = value     # avoid leading cloud+space chars
                    else:
                        updates[name] = value         # probably never happens?
                    count = count + 1        # count how many updates we've successfully parsed
                except:                      # just ignore data if we can't get 'name'+'value' from it
                    continue                 # get next entry, or go back to receive more
    self._connection.settimeout(None)        # reset timeout to default
    return updates
EDIT: added the encode('utf-8') to the FYI print when it sees incomplete data...
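For anyone wanting to sanity-check the parsing without a live cloud session, here's a hedged stand-alone simulation. `FakeConnection` and `parse_updates` are made-up names, the parser is a simplified version of the snippet above (single batch, no rollover between calls), and the '☁ ' name prefix is the same assumption the snippet makes:

```python
import json

class FakeConnection:
    """Stands in for the socket: recv() hands back queued chunks, then
    raises (like a timeout) when nothing is left."""
    def __init__(self, chunks):
        self._chunks = list(chunks)
    def settimeout(self, t):
        pass                              # no-op in the fake
    def recv(self, n):
        if not self._chunks:
            raise TimeoutError('no more data')
        return self._chunks.pop(0)

def parse_updates(conn):
    """Simplified version of the receive/split/parse loop above."""
    updates = {}
    data = b''
    while True:
        try:
            data += conn.recv(4096)
            if data.endswith(b'\n'):      # terminating \n: batch complete
                break
        except Exception:                 # timeout: no more data for now
            break
    for line in data.decode('utf-8').split('\n'):
        if not line:
            continue
        entry = json.loads(line)
        name = entry['name']
        if name.startswith('☁ '):
            name = name[2:]               # strip cloud char + space
        updates[name] = str(entry['value'])
    return updates

# Two updates in one receive, the second split across two chunks:
conn = FakeConnection([
    b'{"name": "\xe2\x98\x81 score", "value": "42"}\n{"name": "\xe2\x98\x81 hi',
    b'ghscore", "value": "99"}\n',
])
print(parse_updates(conn))    # {'score': '42', 'highscore': '99'}
```

The two raw chunks reproduce both failure modes from the maintenance change: multiple updates in one receive, and one update split across receives.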
I just saw a rollover case in my custom client for https://scratch.mit.edu/projects/96491582/
Unfortunately, I had a stupid mistake in my version of the FYI print statement, meaning it crashed and I therefore don't yet have any idea if it would've properly rolled over to the next call of check_updates.
But it does at least suggest it ought to have something to deal with the case where the last received thing isn't the whole of the update data...
Since this is added, I can close now, right?
I figured out how to do this - you just need to listen for data arriving on port 531. (Dunno where port 843 comes in - can't make any sense of that...)
Just needs something like this in CloudSession (sorry, still using an older version of ScratchAPI.py that doesn't have the new underscores and things you've changed in your recent commit. EDIT: Now prefixed "connection" in the snippet below with the underscore - I think that's all it needs, right?)
With the above, you can call c.check_updates(secs), where c is a CloudSession, to wait for 'secs' seconds for updates to the project's cloudvars.
Much kinder on cloud servers than polling varserver URL every couple of seconds! :)