datahandler.py line 408:
if skip:
return self.Get(batchsize, get_last_piece=get_last_piece)
Fetching batches of data is done recursively, which is inefficient. With batchsize=100 on a larger dataset, I get the error:
"maximum recursion depth exceeded while calling a Python object"
This could easily be implemented with a for/while loop instead, since the dataset size is known up front.
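A minimal sketch of the iterative alternative. Note this is a hypothetical reconstruction, not the real datahandler.py API: the class shape, the skip condition (dropping the short final piece unless get_last_piece is set), and the wrap-around logic are all assumptions used only to illustrate replacing the tail-recursive call with a loop.

```python
class DataHandler:
    """Hypothetical sketch; names and skip semantics are assumptions."""

    def __init__(self, data):
        self.data = data
        self.pos = 0  # cursor into the dataset

    def Get(self, batchsize, get_last_piece=False):
        # The original code recursed here: `if skip: return self.Get(...)`.
        # A loop does the same work without growing the call stack, so no
        # "maximum recursion depth exceeded" regardless of dataset size.
        while True:
            batch = self.data[self.pos:self.pos + batchsize]
            self.pos += batchsize
            if self.pos >= len(self.data):
                self.pos = 0  # wrap around for the next epoch
            if len(batch) < batchsize and not get_last_piece:
                continue  # skip the short final piece, fetch the next batch
            return batch
```

Each recursive "skip" call simply becomes another iteration of the `while` loop, so the stack depth stays constant no matter how many batches are skipped.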