I believe there is a problem in how the large JSON file is divided. For example, if the original JSON contains 2418 files, we end up with two JSONs of 2000 and 417 files (instead of 2000 and 418), because the code subtracts 1 from the end index. Python already excludes the last index when using the `[start:end]` slice operator, so the extra subtraction drops one element.
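A minimal sketch of the correct slicing, assuming the splitter works on an in-memory list of entries (the function name `chunk_list` and the chunk size of 2000 are illustrative, not taken from the actual code):

```python
import json


def chunk_list(items, chunk_size):
    """Split a list into consecutive chunks of at most chunk_size items.

    Python's slice items[start:end] already excludes the element at
    index `end`, so the end bound must NOT be reduced by 1 -- doing
    so would silently drop one item per chunk.
    """
    return [items[i:i + chunk_size] for i in range(0, len(items), chunk_size)]


# With 2418 entries and a chunk size of 2000, this yields chunks of
# 2000 and 418 items, with no entry lost.
entries = list(range(2418))
chunks = chunk_list(entries, 2000)
```

With the off-by-one version (`items[i:i + chunk_size - 1]`), the second chunk would hold only 417 entries and one entry per chunk boundary would be lost.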