Specifying an aggregation value on a historic data request takes the limit into account in the wrong way...
1 HistoMinute call with a limit of 2000 returns 2000 OHLC values (which is correct)
1 HistoMinute call with an aggregate of 5 and a limit of 2000 returns 400 values (which is incorrect in my view)
It only gets worse with bigger aggregation values...
In my opinion, the limit should be applied after the aggregation is done. I understand this might require more processing on your end, but I expect the cost is minimal since the results would be cached.
The problem is that if you need a certain amount of data for some calculations, you have to make a lot of requests, which is suboptimal and sometimes not feasible for certain setups.
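To illustrate the impact, here is a minimal sketch (plain Python, no actual API calls, and the helper names are mine, not from the API) of how many requests the current behavior forces, assuming each call returns roughly `limit // aggregate` candles as observed above:

```python
import math

def candles_per_call(limit: int, aggregate: int) -> int:
    # Observed behavior: the limit is applied to the raw minute bars
    # *before* aggregation, so one call yields limit // aggregate candles.
    return limit // aggregate

def calls_needed(target_candles: int, limit: int, aggregate: int) -> int:
    # Number of requests needed to collect `target_candles` aggregated
    # OHLC values under the current (pre-aggregation) limit handling.
    return math.ceil(target_candles / candles_per_call(limit, aggregate))

# With aggregate=5 and limit=2000, one call returns only 400 candles,
# so getting 2000 five-minute candles takes 5 requests instead of 1.
print(candles_per_call(2000, 5))    # 400
print(calls_needed(2000, 2000, 5))  # 5
# Bigger aggregation values make the multiplier even worse:
print(calls_needed(2000, 2000, 30))
```

If the limit were applied after aggregation instead, `calls_needed` would be 1 in all of these cases.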
I hope this was an oversight and not by design... 😄