Open · BobGneu opened this issue 1 year ago
hello, thanks for the suggestions
I will try to connect https://github.com/Prozi/detect-collisions#benchmark, with some changes, with what you linked
I updated the benchmark to be more deterministic and
used it inside the CircleCI build process: https://app.circleci.com/pipelines/github/Prozi/detect-collisions
example:
I tried reading about what you pasted but had no success after the first try; it seems quite complicated
have you maybe tried using this tool, and could you provide some help if needed?
I have, and it only works well with GitHub Pages.
The trick is that the build agent is only there to
Looking at the stress script, I think the better route would be to define some boundaries/cases and then build out the suites to support them:
Repeat the above for each shape type, then mixed
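A rough sketch of how those suites could be enumerated, one per shape type plus a mixed one (the shape names and body counts here are illustrative placeholders, not the library's actual API):

```javascript
// Sketch: one benchmark suite per shape type at several body counts,
// plus a "mixed" suite. Names and counts are illustrative assumptions.
const SHAPE_TYPES = ["circle", "box", "polygon", "line", "point"];
const BODY_COUNTS = [100, 500, 1000, 1500];

function buildSuites() {
  const suites = [];
  for (const type of SHAPE_TYPES) {
    for (const count of BODY_COUNTS) {
      suites.push({ name: `${type} x ${count}`, type, count });
    }
  }
  // mixed suite: an even split across all shape types
  for (const count of BODY_COUNTS) {
    suites.push({ name: `mixed x ${count}`, type: "mixed", count });
  }
  return suites;
}

console.log(buildSuites().length); // 5 types * 4 counts + 4 mixed = 24
```

Each entry would then be wired to a setup function that populates the collision system with that shape mix before timing the update/check loop.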
I think I have some time tomorrow and may be able to get you a PR to help set the direction for this, assuming my comments above align with the vision for the library. I have sync'd up my fork and will try to get you a PR later in the afternoon, assuming time permits.
What are the expectations for the ray casts?
TBH the goal was that they should just work on all types of bodies, which can be seen in the tank demo: https://prozi.github.io/detect-collisions/demo/
What is the upper bound for the system, and how many entities should it support collisions with?
I would say, based on the benchmark, roughly 1500 constantly moving bodies while keeping 60 FPS updates
Are you open to leveraging something like tinybench to help offload the processing/statistical side of this?
yes
I would love a merge request with such changes
Alrighty!
I am going to take a stab at it tomorrow. I started nosing around already, and I think my schedule this week is open enough to get this moving.
Now that we have the baseline in here, is there a segment you would like to focus our efforts on?
hello
I think the speed of testing for
those are, off the top of my head, the things that could use a proper benchmark
also thinking of:
what are your opinions?
maybe divide the benchmarks into long-running and fast-running, put them on different workflows/pipelines
and add the fast-running ones to a pre-commit hook
??
those insertion benchmarks and the collision ones are the fast ones I guess, and the stress benchmark (which can be improved) is long-running because it has 10x 1000 ms scenarios (we can make them shorter, but even then it will be 10 x N ms, and FPS results measured over windows shorter than 1000 ms are not precise)
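The precision point can be made concrete: when FPS is derived by counting whole frames inside a window of N ms, each counted frame shifts the result by 1000 / N FPS, so short windows give coarse readings. A tiny sketch:

```javascript
// FPS measured by counting whole frames in a window of `windowMs`
// milliseconds changes by 1000 / windowMs for every frame counted,
// which is the measurement's resolution.
function fpsResolution(windowMs) {
  return 1000 / windowMs;
}

console.log(fpsResolution(100)); // 10 -> each frame shifts the FPS reading by 10
console.log(fpsResolution(1000)); // 1 -> each frame shifts it by only 1
```

So a 1000 ms window resolves FPS to within 1, while a 100 ms window can only resolve it to within 10, which is why shortening the stress scenarios trades away precision.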
This library looks awesome. Appreciate the streamlined focus on doing one thing and doing it well.
Have you done any benchmark comparisons between this library and more general physics libraries like Rapier, Jolt, matter.js, Planck.js?
no, but basically if they don't use WebAssembly, I doubt that something implementing THE SAME THING + physics would be any faster than my library
also I have more features than some because of:
@bfelbo thanks
Does this project have specific goals around performance that are maintained or monitored over time?
Something like what Deno does, where performance would be tracked and maintained over time.
Recently I found this GitHub Actions based tool that seems to be in the right vein, though it's tied to gh-pages.
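If the tool in question is benchmark-action/github-action-benchmark (an assumption, since the link is not visible here), a minimal workflow sketch might look like the following; the script names and file paths are illustrative:

```yaml
# Hypothetical workflow sketch, assuming benchmark-action/github-action-benchmark.
# The benchmark script, output path, and branch names are assumptions.
name: Benchmark
on:
  push:
    branches: [master]
jobs:
  benchmark:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - run: npm run benchmark
      - uses: benchmark-action/github-action-benchmark@v1
        with:
          tool: 'customSmallerIsBetter'
          output-file-path: benchmark-results.json
          gh-pages-branch: gh-pages # results chart is published via GitHub Pages
          auto-push: true
          github-token: ${{ secrets.GITHUB_TOKEN }}
```

The gh-pages coupling mentioned above comes from the action pushing its results chart to the `gh-pages` branch for publishing.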
I am definitely down to put together a PR for it if there is interest. I will need some input on what would be most beneficial to benchmark, though.