-
-
### Title of the talk
Speed up your pandas data analysis using FireDucks
### Description
The Pandas library is the top choice among Data Scientists, and many legacy applications were developed using this…
-
In the last few days I've been playing around trying to see how fast I can train a 19M-parameter model on a single 4090. My somewhat arbitrary goal is 1 hour, down from about 24 hours (just on `humanoid-…
-
I am trying to convert FSDv2 to ONNX (and then to TensorRT), but I hit an error:
RuntimeError: ONNX export failed on an operator with unrecognized namespace torch_scatter::scatter_max. If you ar…
-
Hello,
I found that TensorRT (fp32, fp16) inference gives a speedup; is that right?
I also found that batch inference for the torch model gives no speedup either. I do not know if there is something w…
-
There's a lot to gain from speeding up pip's startup time.
For one, pip takes around 600 ms just to print the completion text, which feels laggy (as mentioned in #4755). Further, faster startup time m…
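To quantify complaints like the 600 ms figure above, a startup benchmark helps. Here is a minimal sketch of one: the helper `cli_startup_ms` is a hypothetical name (not part of pip), and it times a bare Python interpreter launch as a stand-in, since pip may not be installed in every environment.

```python
import subprocess
import sys
import time

def cli_startup_ms(cmd, runs=5):
    """Average wall-clock time to launch a command, in milliseconds."""
    times = []
    for _ in range(runs):
        start = time.perf_counter()
        # Launch the process and wait for it to exit; discard its output.
        subprocess.run(cmd, check=True, capture_output=True)
        times.append((time.perf_counter() - start) * 1000)
    return sum(times) / len(times)

if __name__ == "__main__":
    # A bare interpreter launch as a stand-in target; substitute
    # e.g. [sys.executable, "-m", "pip", "--version"] to measure pip itself.
    print(f"{cli_startup_ms([sys.executable, '-c', 'pass']):.1f} ms")
```

Averaging over several runs smooths out filesystem-cache effects, which dominate the first cold launch.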
-
Hi, I tested your code and found that the runtime of bin_conv_layer is nearly the same as conv_layer. Have you ever measured the speedup at run time?
-
This is not really a bug (sorry about that); it's more of an SEO issue.
I tried looking up this feature 20 times on Google, Stack Overflow, and elsewhere,
and I didn't find any answer other than 'c…
-
Many of the rustdoc JSON files we parse are large, from a few MB to ~500 MB. In the largest cases, we spend ~5s parsing JSON per `cargo-semver-checks` run.
Speeding up JSON parsing by switc…
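Before switching parsers, it is worth having a baseline measurement of how parse time scales with document size. A minimal sketch of that, using Python's stdlib `json` on synthetic data as a stand-in for a rustdoc index (the `parse_seconds` helper is a hypothetical name, not part of cargo-semver-checks):

```python
import json
import time

def parse_seconds(doc: str) -> float:
    """Time a single json.loads call on the given document."""
    start = time.perf_counter()
    json.loads(doc)
    return time.perf_counter() - start

if __name__ == "__main__":
    # Synthetic stand-in for a rustdoc JSON index: a flat list of records.
    for n in (10_000, 100_000):
        doc = json.dumps(
            [{"id": i, "name": f"item{i}", "docs": "x" * 40} for i in range(n)]
        )
        print(f"{len(doc) / 1e6:.1f} MB -> {parse_seconds(doc) * 1e3:.0f} ms")
```

Parse time should grow roughly linearly with document size, which is why a faster parser pays off most on the ~500 MB end of the range.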
-
The speedup from the statue is too high at level 8; maybe cap the max level at 5?