moltar / typescript-runtime-type-benchmarks

📊 Benchmark Comparison of Packages with Runtime Validation and TypeScript Support
https://moltar.github.io/typescript-runtime-type-benchmarks/

chore: Use the correct API for Valibot (it should not affect perf) #1127

Closed. naruaway closed this 1 year ago.

naruaway commented 1 year ago

cc: @fabian-hiller

.parse() on the Valibot schema was never intended to be used directly.

I did not notice this while making the change in https://github.com/moltar/typescript-runtime-type-benchmarks/pull/1112, since I was relying on autocompletion based on TypeScript types and .parse appeared there.

Note that a future version of Valibot renames it to _parse to avoid this confusion.

Anyway, let's use the intended, documented API. Otherwise, updating the Valibot version in the future will break this test case. Note that this should not affect performance, since the documented function just calls .parse on the schema internally.
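
For illustration, here is a minimal sketch of the change, assuming a Valibot v0.x-style API where parse is exported as a standalone function (the schema itself is a hypothetical stand-in for the one in the benchmark suite):

```ts
import { object, string, number, parse } from 'valibot';

// Hypothetical schema for illustration; the real one lives in the benchmark suite.
const schema = object({
  name: string(),
  age: number(),
});

const data = { name: 'Ada', age: 36 };

// Before: calling the method attached to the schema object. This is an
// internal detail, renamed to _parse in later Valibot versions.
// schema.parse(data);

// After: the documented entry point, which delegates to the schema internally.
parse(schema, data);
```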

fabian-hiller commented 1 year ago

The benchmarks should also take into account that we have deprecated .error in safeParse, which will make safeParse faster than parse starting with one of the next versions. Also, it has to be considered whether a validation library collects all issues or aborts at the first one. If I see correctly, the Typia implementation uses .createIs instead of .createValidate. Valibot should also use the is function or the abortEarly property for a direct comparison.
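
As a sketch of the two options mentioned here, assuming the Valibot v0.x signatures (the exact shape of the abortEarly option is an assumption and worth double-checking against the docs):

```ts
import { object, string, is, safeParse } from 'valibot';

const schema = object({ name: string() });
const input: unknown = { name: 'Ada' };

// Option 1: a plain boolean type guard, comparable to Typia's createIs output.
if (is(schema, input)) {
  // input is narrowed to { name: string } here
}

// Option 2: keep safeParse but stop at the first issue instead of
// collecting all of them.
const result = safeParse(schema, input, { abortEarly: true });
if (result.success) {
  // input passed validation
}
```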

naruaway commented 1 year ago

@fabian-hiller

> The benchmarks should also take into account that we have deprecated .error in safeParse, which will make safeParse faster than parse starting with one of the next versions.

Sure, but all the test cases in typescript-runtime-type-benchmarks are for "valid" data, so no throw happens even with .parse, and I therefore think the performance of parse and safeParse should not differ. This is also consistent with the test cases for Zod, which can likewise throw when the data is invalid.
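
To make that concrete: on valid data neither call path ever constructs an error, so the difference described above cannot show up in these benchmarks (sketch with a hypothetical schema):

```ts
import { object, string, parse, safeParse } from 'valibot';

const schema = object({ name: string() });
const valid = { name: 'Ada' };

// Both calls succeed; no issues are collected and nothing is thrown,
// so the cost of building error objects never enters the measurement.
parse(schema, valid);                    // would throw only on invalid data
const result = safeParse(schema, valid); // result.success === true
```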

> Also, it has to be considered whether a validation library collects all issues or aborts at the first one. If I see correctly,

I agree with this, but since typescript-runtime-type-benchmarks includes a lot of libraries, a "completely fair" comparison is almost impossible anyway. Typia's compiled output consists of simple checks, so it cannot have comparable "validation" logic at all. I think the best we can and should do here is align with the Zod case, and Zod collects all errors. However, none of this should matter for the benchmark results in typescript-runtime-type-benchmarks, since it does not run benchmarks on "invalid" data. I guess that might be an interesting addition to typescript-runtime-type-benchmarks (see the sketch below).
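
A hypothetical sketch of what such an invalid-data measurement could look like, using plain performance.now() timing rather than the repo's actual benchmark harness:

```ts
import { object, string, number, safeParse } from 'valibot';

const schema = object({ name: string(), age: number() });

// Invalid on two fields, so a library that collects all issues does
// strictly more work here than one that aborts at the first issue.
const invalid = { name: 123, age: 'not a number' };

const start = performance.now();
for (let i = 0; i < 100_000; i++) {
  safeParse(schema, invalid);
}
console.log(`invalid-data safeParse: ${(performance.now() - start).toFixed(1)} ms`);
```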

And this is one of the reasons why I am experimenting with https://github.com/naruaway/valibot-benchmarks, which includes more diverse schemas and invalid data. People use typescript-runtime-type-benchmarks to evaluate PRs like https://github.com/fabian-hiller/valibot/pull/104, but it cannot reveal performance issues in the invalid-data case. For that specific PR, I'll try to leave a comment there later.