Closed hardillb closed 1 week ago
Attention: Patch coverage is 59.09091% with 18 lines in your changes missing coverage. Please review.
Project coverage is 78.73%. Comparing base (9c1babb) to head (3e5816d). Report is 38 commits behind head on main.
| Files with missing lines | Patch % | Lines |
|---|---|---|
| forge/ee/lib/teamBroker/index.js | 55.88% | 15 Missing :warning: |
| forge/housekeeper/tasks/teamBroker.js | 25.00% | 3 Missing :warning: |
I did it this way to make it easier to include the project nodes changes later (they will publish without the mount point, so they will need to include the full topic prefix).
Might need to discuss/think about it some more.
Not sure how to write a test for those last 5 lines, as that would mean adding a 30 second pause to ensure the interval runs (but even then it won't remove any entries for 30 minutes).
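One possible way to test this without a real 30 second wait would be sinon fake timers (a suggestion only; the inline cache and sweep below are a stand-in for the real code, not the PR's implementation):

```js
const sinon = require('sinon')
const assert = require('node:assert')

it('expires stale topics without waiting for the real interval', function () {
    const clock = sinon.useFakeTimers()
    try {
        const cache = new Map()
        cache.set('sensors/temp', Date.now())
        // stand-in for the 30s sweep that removes topics older than 1 hour
        const sweep = setInterval(() => {
            const cutoff = Date.now() - 60 * 60 * 1000
            for (const [topic, seen] of cache) {
                if (seen < cutoff) cache.delete(topic)
            }
        }, 30 * 1000)
        clock.tick(61 * 60 * 1000) // advance past the TTL; each faked 30s tick fires synchronously
        clearInterval(sweep)
        assert.strictEqual(cache.size, 0)
    } finally {
        clock.restore()
    }
})
```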
Currently caches topics for 1 hour and checks for topics to remove every 30 seconds; both should probably be configurable.
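Roughly the shape being described, as a sketch (the names and the per-team nesting are illustrative, not the actual code in forge/ee/lib/teamBroker/index.js):

```js
const TOPIC_TTL_MS = 60 * 60 * 1000   // topics unseen for 1 hour are dropped
const SWEEP_INTERVAL_MS = 30 * 1000   // how often expired topics are removed

const topicCache = new Map()          // teamId -> Map(topic -> last-seen timestamp)

function recordTopic (teamId, topic) {
    if (!topicCache.has(teamId)) {
        topicCache.set(teamId, new Map())
    }
    topicCache.get(teamId).set(topic, Date.now())
}

function getTopics (teamId) {
    return [...(topicCache.get(teamId)?.keys() || [])]
}

// periodic sweep removing any topic not seen within the TTL
setInterval(() => {
    const cutoff = Date.now() - TOPIC_TTL_MS
    for (const topics of topicCache.values()) {
        for (const [topic, lastSeen] of topics) {
            if (lastSeen < cutoff) {
                topics.delete(topic)
            }
        }
    }
}, SWEEP_INTERVAL_MS)
```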
Is there an opportunity for a longer term solution here? May be valid that some data only gets published every 24 hours for example.
These are just starting values, but a longer caching period is unlikely to be useful in its current form, as the cache will get cleared every time the forge app restarts (e.g. every time we do a CI deployment...).
There is a comment in the code about using Redis as a better cache, but that is a MUCH bigger solution.
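For reference, a rough sketch of what a Redis-backed version could look like (using ioredis purely as an illustration; the key layout and TTL are made up), with expiry delegated to Redis so the data survives forge app restarts and can be shared between instances:

```js
const Redis = require('ioredis')
const redis = new Redis(process.env.REDIS_URL)

// Each topic gets its own key with a TTL, so Redis handles expiry.
async function recordTopic (teamId, topic, ttlSeconds = 25 * 60 * 60) {
    await redis.set(`teamBroker:${teamId}:topic:${topic}`, Date.now(), 'EX', ttlSeconds)
}

async function getTopics (teamId) {
    const prefix = `teamBroker:${teamId}:topic:`
    // KEYS is fine for a sketch; a real implementation would likely use SCAN or a per-team set
    const keys = await redis.keys(`${prefix}*`)
    return keys.map(key => key.slice(prefix.length))
}
```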
Why is the URL /api/v1/teams/:teamId/broker/topicList
and not /api/v1/teams/:teamId/broker/topics?
The latter is more RESTful.
Think about longevity of the cache - and dealing with CI reloading the app.
Some options that come to mind:
In terms of the existing caching TTL values, I'd suggest we default to longer intervals, assuming we (eventually) solve the persistence of state. I'd suggest a 25 hr TTL and once-a-day purging; I don't think the level of usage of the in-memory cache will be a problem in the short term, so we can afford to be quite generous.
Leaving the technical review here to @knolleary - conceptually though, love the work, thanks for finding a solution Ben!
Increasing the cache to 25 hours is an easy change, but moving to clearing the cache once a day will mean moving that code to the housekeeping tasks to get round the restarts.
Will look at that shortly
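For illustration, a housekeeping task for a daily purge might look something like the following, assuming the task shape of a name, cron schedule and run(app) function; the task name and the purgeExpiredTopics helper are placeholders, not the actual contents of forge/housekeeper/tasks/teamBroker.js:

```js
module.exports = {
    name: 'teamBrokerTopicPurge',   // hypothetical task name
    startup: false,
    schedule: '15 3 * * *',         // once a day, rather than a 30s interval
    run: async function (app) {
        // delegate to the team broker library so the purge logic lives in one place
        await app.teamBroker.purgeExpiredTopics()
    }
}
```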
Stashed the topic cache in the database across restarts
Need to check that this DB cache will actually work on K8s, as I think it starts the new instance before telling the old one to shut down. This means the cache will load from the DB before it's populated. Might need to add a delay (and find a way to force app.settings.get to re-read from the database).
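A rough sketch of the stash/restore idea being described, using app.settings; the settings key and the serialisation format are illustrative only:

```js
// Stash the in-memory topic cache so it can be restored after a restart.
// 'teamBrokerTopicCache' is an illustrative settings key, not necessarily the one the PR uses.
async function stashTopicCache (app, topicCache) {
    const snapshot = {}
    for (const [teamId, topics] of topicCache) {
        snapshot[teamId] = Object.fromEntries(topics)
    }
    await app.settings.set('teamBrokerTopicCache', snapshot)
}

async function restoreTopicCache (app, topicCache) {
    // On K8s the new pod may start before the old one has stashed its cache,
    // which is the ordering concern described above.
    const snapshot = await app.settings.get('teamBrokerTopicCache') || {}
    for (const [teamId, topics] of Object.entries(snapshot)) {
        topicCache.set(teamId, new Map(Object.entries(topics)))
    }
}
```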
@hardillb lets discuss before you throw commits at the PR
Is this good to merge? I've not got too many opinions here as it's more server-side work, so if Nick says go, I vote we go.
part of #4726
Description
Tracks all topics published to on the Team Broker.
Adds 1 new HTTP endpoint:
- /api/v1/teams/:teamId/broker/topics
  - returns an array of strings representing the topics the team has published to
  - requires team Member or Owner to access the list of topics
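As a rough illustration of what such a route could look like in the Fastify-based forge app (not the PR's actual code; the permission name and the app.teamBroker.getTopics helper are placeholders):

```js
app.get('/:teamId/broker/topics', {
    // hypothetical permission; a new one needs adding per the Needs list below
    preHandler: app.needsPermission('team:broker:list-topics')
}, async (request, reply) => {
    // request.team is assumed to be resolved from :teamId by the surrounding team routes
    const topics = await app.teamBroker.getTopics(request.team.id)
    reply.send(topics)   // e.g. ["sensors/temperature", "sensors/humidity"]
})
```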
Needs:
- testing
- a new permission adding to access the endpoint

Currently caches topics for 1 hour and checks for topics to remove every 30 seconds; both should probably be configurable.
Will benefit from using Redis for this, so the cache is preserved over restarts and when multiple instances of the forge app are running.
Related Issue(s)
#4726
Checklist
- Changes flowforge.yml?
  - FlowFuse/helm to update ConfigMap Template
  - FlowFuse/CloudProject to update values for Staging/Production

Labels
- area:migration label