## Issue

Get `/stats` and `/timeseries` endpoints to use `long-polling.ts` and `server-sent-events.ts` from issue #28

## Solution

### general fixes

for `getPrice.ts` and `getTotalVolume.ts`:

- enforce that `partitionTimeFormat` is appropriate for the given `pagination.before` and `pagination.after`

### optional fixes
- consider making "reset" updates `null` in the SSE data frames. It would explicitly tell the application to reset its data and is parsable by `JSON.parse`: `JSON.parse('null') === null`, while `JSON.parse('')` is an error and `JSON.parse('[]')` would be a "no update" operation because the array is empty (see the first sketch after this list)
- for a known endpoint: when data is empty, long-polling requests are expected to hang (they wait for data to exist). However, this also happens for requests that should be 404 Not Found: a request to `/liquidity/token/stake/stake` will hang, because it too returns an empty update and waits (a possible guard is sketched after this list)
- I think the use of the type `PaginatedRequestQuery` as an argument to `getPaginatedResponse` is not great. It causes the main loop of the SSE mechanism to set the query parameters as strings on each iteration, which may be error prone (though I'm not certain), e.g. `(1.1234567e20).toFixed(0) === '112345670000000008192'`, but fortunately `Number((1.1234567e20).toFixed(0)) === 112345670000000000000`, which is expected. Regardless, if the `getPaginatedResponse` logic is abstracted out so that all data payloads are handled by the same function for each mechanism (not one per route as it is currently), then we can easily use an intermediary data type such as `PaginationInput` in the function args instead of the `PaginatedRequestQuery` object, and the code would be quite understandable (see the last sketch after this list)
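
As a minimal sketch of the `null`-as-reset idea (the frame-writing and frame-reading helpers here are illustrative, not existing code):

```ts
// Proposed SSE frame semantics:
// `null` = reset, `[]` = no update, a non-empty array = incremental update.
type Update = number[][] | null;

function writeSSEFrame(write: (chunk: string) => void, update: Update): void {
  // SSE data frames are text lines prefixed with `data: ` and terminated by a blank line
  write(`data: ${JSON.stringify(update)}\n\n`);
}

function readSSEFrame(data: string): Update {
  // JSON.parse('null') === null, JSON.parse('[]') is an empty array, JSON.parse('') throws
  const parsed = JSON.parse(data) as Update;
  if (parsed === null) return null; // explicit "reset" signal
  return parsed; // empty array means "no update", otherwise new rows
}
```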
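
A hedged sketch of the 404 fix for unknown paths; the pair-lookup helper and handler shape below are assumptions, not the indexer's actual code:

```ts
// Stand-in for whatever lookup the indexer already has for valid token pairs.
const knownTokenPairs = new Set(['token0<>token1']);

function isKnownTokenPair(tokenA: string, tokenB: string): boolean {
  return knownTokenPairs.has([tokenA, tokenB].sort().join('<>'));
}

interface NotFoundResult { statusCode: 404; body: string }

function guardLiquidityRequest(tokenA: string, tokenB: string): NotFoundResult | 'wait-for-data' {
  // Guard first: an unknown pair such as stake/stake should be a 404 Not Found,
  // not an empty update that the long-poll loop then waits on indefinitely.
  if (!isKnownTokenPair(tokenA, tokenB)) {
    return { statusCode: 404, body: 'Not Found' };
  }
  // only known pairs should reach the long-polling / SSE wait loop
  return 'wait-for-data';
}
```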
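
A rough sketch of the intermediary-type idea; `PaginationInput` is the name suggested above, but its exact fields and the shared function signature are assumptions:

```ts
// Query parameters arrive as strings (the PaginatedRequestQuery-like shape).
interface PaginatedRequestQuery {
  'pagination.before'?: string;
  'pagination.after'?: string;
  'pagination.limit'?: string;
}

// Intermediary numeric type used inside the SSE / long-polling loop,
// so values are not round-tripped through strings on every iteration.
interface PaginationInput {
  before?: number; // unix timestamp in seconds
  after?: number;  // unix timestamp in seconds
  limit: number;
}

function toPaginationInput(query: PaginatedRequestQuery): PaginationInput {
  const before = query['pagination.before'];
  const after = query['pagination.after'];
  return {
    before: before !== undefined ? Number(before) : undefined,
    after: after !== undefined ? Number(after) : undefined,
    limit: Number(query['pagination.limit'] ?? 100),
  };
}

// A single payload handler shared by both mechanisms (instead of one per route),
// taking the numeric PaginationInput rather than the raw string query object.
async function getPaginatedResponse<T>(
  getData: (input: PaginationInput) => Promise<T[]>,
  input: PaginationInput
): Promise<{ data: T[]; pagination: PaginationInput }> {
  // the cursor stays numeric here; conversion to strings happens only at the
  // HTTP boundary (when parsing the request / serializing the response)
  return { data: await getData(input), pagination: input };
}
```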
### specific fixes required
- ensure that `startTime` and `endTime` are resolved to partition time edges: round `startTime` down and `endTime` up so that the requested range is fully covered (see the first sketch after this list)
  - this is so we can split the request into block ranges that can be combined
- The inner part of the methods can be cached (keyed by the `block_range.from_height` and `block_range.to_height` vars). This could be done with caches and cache IDs of (see the second sketch after this list):
  - `tokenA,tokenB` => `token0,token1`, then `partitionTimeFormat,offsetSeconds,fromHeight,toHeight,token0,token1`
  - `partitionTimeFormat,offsetSeconds,fromHeight,toHeight,token0,token1`
- `priceDataCache` should have no `generateFunc` callback
  - the cache should be set similarly to the `compressed-responses` cache, where the cache value is generated in the current context if the value does not exist
- A different, maybe cleaner solution would be to put caches inside another cache, which seems bad? like this (see the last sketch after this list):
  - `partitionTimeFormat,offsetSeconds,fromHeight,toHeight,tokenA,tokenB`
  - `tokenA,tokenB` => `token0,token1`, then `partitionTimeFormat,offsetSeconds,fromHeight,toHeight,token0,token1`
  - this seems a little cleaner but maybe bad practice; the inner caches would have to be global caches.
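
A minimal sketch of the edge-resolution step, assuming Unix-second timestamps and a partition resolution expressed in seconds (the function names are illustrative):

```ts
// Round a requested [startTime, endTime] range outward to partition edges so the
// resolved range fully covers what was asked for and can be split into
// block ranges that are later combined.
function roundDownToPartitionEdge(unixSeconds: number, partitionSeconds: number): number {
  return Math.floor(unixSeconds / partitionSeconds) * partitionSeconds;
}

function roundUpToPartitionEdge(unixSeconds: number, partitionSeconds: number): number {
  return Math.ceil(unixSeconds / partitionSeconds) * partitionSeconds;
}

// e.g. with hourly partitions (3600s): xx:25 rounds down to xx:00, xx:10 rounds up to the next hour
const startTime = roundDownToPartitionEdge(1700000700, 3600); // 1699999200
const endTime = roundUpToPartitionEdge(1700003400, 3600);     // 1700006400
```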
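
Below is a hedged sketch of composing the cache IDs and populating the cache without a `generateFunc` callback, i.e. generating the value in the current request context and then setting it, as described for the `compressed-responses` cache. The `CachePolicy` interface is an assumed stand-in for the actual cache wiring:

```ts
// Assumed get/set cache policy shape (no generateFunc configured).
interface CachePolicy<T> {
  get(id: string): Promise<T | null>;
  set(id: string, value: T): Promise<void>;
}

// Cache ID built from the partition parameters and the resolved block range,
// after normalizing tokenA,tokenB to token0,token1.
function getCacheID(opts: {
  partitionTimeFormat: string;
  offsetSeconds: number;
  fromHeight: number; // block_range.from_height
  toHeight: number;   // block_range.to_height
  token0: string;
  token1: string;
}): string {
  const { partitionTimeFormat, offsetSeconds, fromHeight, toHeight, token0, token1 } = opts;
  return [partitionTimeFormat, offsetSeconds, fromHeight, toHeight, token0, token1].join(',');
}

// No generateFunc: when the value is missing it is generated here, in the
// current context, and written back, similar to the compressed-responses cache.
async function getCachedPriceData(
  priceDataCache: CachePolicy<number[][]>,
  cacheID: string,
  generateInContext: () => Promise<number[][]>
): Promise<number[][]> {
  const cached = await priceDataCache.get(cacheID);
  if (cached !== null) {
    return cached;
  }
  const value = await generateInContext();
  await priceDataCache.set(cacheID, value);
  return value;
}
```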
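
And a sketch of the "caches inside another cache" alternative, using plain Maps to stand in for the global inner caches (only to illustrate the shape being discussed, not a recommendation):

```ts
// Outer cache keyed by the un-normalized pair; each entry holds an inner cache
// keyed by the remaining parameters. The inner caches need to be global
// (module-level) so they survive across requests.
type InnerKey = string; // `${partitionTimeFormat},${offsetSeconds},${fromHeight},${toHeight},${token0},${token1}`
type OuterKey = string; // `${tokenA},${tokenB}`

const outerCache = new Map<OuterKey, Map<InnerKey, number[][]>>();

function getInnerCache(tokenA: string, tokenB: string): Map<InnerKey, number[][]> {
  const outerKey: OuterKey = `${tokenA},${tokenB}`;
  let inner = outerCache.get(outerKey);
  if (!inner) {
    inner = new Map<InnerKey, number[][]>();
    outerCache.set(outerKey, inner);
  }
  return inner;
}
```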