Here are several commits, but one is the biggest and most important. Note that MBSplitter was modified in the first of these commits but later removed entirely, so that modification no longer matters.
Main changes are:
commit 51a85790b7d5aa92752c7cdbae8f11456e8471cd
Added docs to wrappers, attention_nets
Moved train_val_split() to utils. Previously it had distinct implementations for wrappers and BaseNet because of the need to add padding, but that padding has been moved into BaseNet.fit().
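A shared split utility can look like the sketch below. This is a hypothetical illustration of the idea (one implementation for both wrappers and BaseNet, with no padding logic); the name `val_ratio` and the signature are assumptions, not the project's actual API.

```python
import numpy as np

def train_val_split(X, y, val_ratio=0.2, seed=0):
    """Shuffle-split aligned arrays into train and validation parts.

    Hypothetical sketch of the shared utility; padding is deliberately
    absent here, since it is handled later in BaseNet.fit().
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))          # shuffled row indices
    n_val = int(len(X) * val_ratio)        # size of validation part
    val_idx, train_idx = idx[:n_val], idx[n_val:]
    return X[train_idx], X[val_idx], y[train_idx], y[val_idx]
```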
Removed array_to_tensor(). The y reshape and CUDA transfer are now done inline in the MLP.fit() and BaseNet.fit() methods.
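The reshape step can be inlined roughly as follows. This is a minimal NumPy sketch of the pattern, not the actual fit() code; in the real methods the same spot is where the tensor conversion and `.to(device)`/CUDA transfer would also happen.

```python
import numpy as np

def fit(X, y):
    # Targets are reshaped to a column vector inline, replacing the
    # old array_to_tensor() helper. In the real fit() this is also
    # where the array would become a tensor and move to the GPU.
    y = np.asarray(y, dtype=np.float32).reshape(-1, 1)
    return X, y
```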
Removed MBSplitter(). Everything we needed is already covered by TensorDataset().
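What TensorDataset() provides is small enough to show in full: it stores aligned arrays and returns per-row tuples by index, which is all a mini-batch splitter needs as a backend. The class below is a pure-Python stand-in mirroring `torch.utils.data.TensorDataset` semantics, written only to illustrate why MBSplitter became redundant.

```python
class MiniTensorDataset:
    """Pure-Python stand-in for torch.utils.data.TensorDataset:
    stores equally-long sequences and yields aligned row tuples."""

    def __init__(self, *arrays):
        # All inputs must have the same number of rows.
        assert all(len(a) == len(arrays[0]) for a in arrays)
        self.arrays = arrays

    def __len__(self):
        return len(self.arrays[0])

    def __getitem__(self, i):
        # One aligned sample across every stored array.
        return tuple(a[i] for a in self.arrays)
```

In the actual code, a DataLoader wrapping a TensorDataset then handles the shuffling and batching that MBSplitter used to do.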
Moved get_mini_batches() to utils. Previously it had distinct implementations for wrappers and BaseNet because of the need to handle variable-size input, but we now use *args.
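The *args trick can be sketched like this: one generator accepts any number of aligned arrays, so single-input and multi-input networks share the same code path. The signature and shuffling details here are assumptions for illustration, not the project's exact implementation.

```python
import numpy as np

def get_mini_batches(batch_size, *arrays):
    """Yield aligned mini-batches from any number of arrays.

    Hypothetical sketch: accepting *arrays lets one implementation
    serve both wrappers (X, y) and nets with extra inputs.
    """
    n = len(arrays[0])
    idx = np.random.permutation(n)         # shuffle once per epoch
    for start in range(0, n, batch_size):
        batch = idx[start:start + batch_size]
        yield tuple(a[batch] for a in arrays)
```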
ndim=None was removed from all wrappers and AttentionNet: a network initialized with None could not train.