Pagey opened this issue 4 months ago
Hi @Pagey -

The KGen class inherits from tf.keras.utils.Sequence. In tf.keras.utils.Sequence (the PyDataset class), the __getitem__() method should return a complete batch, and the __len__ method should return the number of batches in the dataset. You can find more details here.

So in the KGen class, __getitem__() should return elements from the underlying data: self.alist[idx] returns the element of self.alist at that index, whereas returning idx returns only the index itself. Attached a gist for reference.
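For illustration, here is a minimal sketch (not the exact code from the gist, and assuming KGen wraps a plain Python list with a batch size of 1) of how a Sequence/PyDataset subclass is expected to behave, with __len__ giving the batch count and __getitem__ returning the batch for that index:

```python
import numpy as np
import tensorflow as tf

class KGen(tf.keras.utils.Sequence):
    """Minimal finite dataset: __len__ is the number of batches and
    __getitem__ returns one complete batch per index."""

    def __init__(self, alist, batch_size=1, **kwargs):
        super().__init__(**kwargs)
        self.alist = alist
        self.batch_size = batch_size

    def __len__(self):
        # Number of batches in the dataset.
        return int(np.ceil(len(self.alist) / self.batch_size))

    def __getitem__(self, idx):
        # Return the slice of data for batch `idx`; returning just `idx`
        # would hand back the index instead of any element of self.alist.
        return np.array(
            self.alist[idx * self.batch_size:(idx + 1) * self.batch_size]
        )
```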
Thanks @mehtamansi29 - it looks like you changed the code in the gist between the tensorflow 2.15 and 2.16 versions? The

```python
def __getitem__(self, idx):
    return idx  # self.alist[idx]
```

is supposed to represent an infinite data generator and thus is not limited to the length of self.alist. It could just as well have been written as return np.random.random().

In any case this represents a difference in behavior between the two versions, i.e. one that terminates after len()/__len__ batches (in tensorflow 2.15) and one that does not (in tensorflow 2.16).
I saw that in the new version the __len__ method is replaced by num_batches, but it doesn't seem to have the same effect that __len__ had in 2.15 either.

How should one terminate after __len__/num_batches batches in tensorflow 2.16 in the case of an infinitely generated set?
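To make the question concrete, here is a rough sketch of the kind of dataset I mean (hypothetical names; assuming the 2.16 / Keras 3 PyDataset API, where num_batches returning None seems to mark the dataset as infinite). Is capping the run with steps_per_epoch in fit() the intended replacement for relying on __len__?

```python
import numpy as np
import tensorflow as tf

class InfiniteGen(tf.keras.utils.Sequence):  # alias of keras.utils.PyDataset in 2.16
    """Batches are generated on the fly; there is no natural __len__."""

    @property
    def num_batches(self):
        # In 2.16 this appears to signal an infinite dataset.
        return None

    def __getitem__(self, idx):
        # Generate a fresh batch each call; idx is only an index.
        x = np.random.random((8, 4))
        y = np.random.random((8, 1))
        return x, y

model = tf.keras.Sequential([tf.keras.Input(shape=(4,)), tf.keras.layers.Dense(1)])
model.compile(optimizer="adam", loss="mse")

# Is this now the only way to stop after a fixed number of batches?
model.fit(InfiniteGen(), steps_per_epoch=10, epochs=2)
```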
Hello,
I am seeing the same problem. I was looking at the source for the PyDataset class and I think I may have found an issue. I first noticed that more calls are made to __getitem__ than __len__ returns. I tried setting up a num_batches property, since it is part of the PyDataset class on lines 157 and 158 of the source code for py_dataset_adapter.py. I might be wrong, but when using that property and supplying a calculation for the number of batches, I get an error:
File "/home/user/miniconda3/envs/
The code for that section is:

```python
num_batches = self.py_dataset.num_batches
if self._output_signature is None:
    num_samples = data_adapter_utils.NUM_BATCHES_FOR_TENSOR_SPEC
    if num_batches is not None:
        num_samples = min(num_samples, num_batches)
```
This seems off to me, but I may be wrong: the line

`num_batches = self.py_dataset.num_batches`

is returning the num_batches method, which cannot be evaluated by the min statement, and should be

`num_batches = self.py_dataset.num_batches()`

Also, above at line 165, the def num_batches should return

`if hasattr(self, "__len__"): return self.__len__()`

where it instead has `return len(self)`, which should just give an infinite recursion error.

The combination of these bugs might be leading to __len__ being None and causing the generator to be called an infinite number of times.
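For context, the definition I am referring to looks roughly like this (paraphrased from how I read the source, so the exact wording and line numbers may differ):

```python
# Approximate shape of the num_batches definition being discussed:
class PyDataset:
    @property
    def num_batches(self):
        # Falls back to __len__ when the subclass defines it; otherwise
        # returns None, which is treated as "infinite".
        if hasattr(self, "__len__"):
            return len(self)
        return None
```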
I've also noticed, when counting the calls to __getitem__ of the PyDataset class, that some indexes are called more than once, as if some calls are made, something fails, and then __getitem__ is called again. When I print the number of times __getitem__ has been called, the number of batches delivered is always larger than what the progress bar reports when training a model in tensorflow.

I generate my data on the fly, so perhaps I need to design my dataset better. My particular issue is that I process the data coming off the disk before sending it to training to save system memory. Doing this with multiple workers on the I/O side causes issues where the data is not necessarily associated with a particular index. I get that this is an issue on my part, but it would be nice if the description of PyDataset made it clear that not all batches of data delivered are used in training.
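My current workaround is to make each batch a pure function of its index, so repeated or out-of-order __getitem__ calls from multiple workers always produce the same data. A rough sketch, with made-up shapes and a hypothetical load_sample helper:

```python
import numpy as np
import tensorflow as tf

class OnTheFlyGen(tf.keras.utils.Sequence):
    """Each batch is derived deterministically from its index, so it does
    not matter if __getitem__ is called again or out of order."""

    def __init__(self, file_paths, batch_size=32, **kwargs):
        super().__init__(**kwargs)
        self.file_paths = file_paths
        self.batch_size = batch_size

    def __len__(self):
        return int(np.ceil(len(self.file_paths) / self.batch_size))

    def __getitem__(self, idx):
        # Select files by index rather than consuming a shared stream, so
        # every worker reconstructs the same batch for the same idx.
        paths = self.file_paths[idx * self.batch_size:(idx + 1) * self.batch_size]
        # Per-index RNG so any augmentation is repeatable across calls.
        rng = np.random.default_rng(seed=idx)
        x = np.stack([load_sample(p, rng) for p in paths])  # hypothetical helper
        y = np.zeros(len(paths))  # placeholder labels
        return x, y
```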
Hi there - paraphrasing an issue from 2018:

The change is `return idx  # self.alist[idx]` in __getitem__. This is relevant in the case of generated datasets - i.e. it looks as though the __len__ value is ignored, and it used not to be?

The above code on tensorflow 2.15 (Python 3.10.13, Ubuntu 20.04) produces this output:

and on tensorflow 2.16 (Python 3.10.13, Ubuntu 20.04) produces this output: