-
Just a couple of suggestions for this superb, blazingly fast package! In summary, I propose following `BenchmarkTools` more closely in terms of layout.
Thanks for your work!
1. Adding "0 allocation…
-
Feature request: Enable or Disable MKL-DNN in MXNet via environment variable at module load time
It would be helpful to have the capability to enable or disable MKL-DNN in MXNet via an environm…
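A load-time environment switch could look something like the sketch below. The variable name `MXNET_MKLDNN_ENABLED` and the helper are illustrative assumptions, not MXNet's actual API; the point is only that the flag is read once, at import time.

```python
import os

def mkldnn_enabled(default=True):
    """Read an on/off switch from the environment at module load time.

    MXNET_MKLDNN_ENABLED is a hypothetical variable name used here
    for illustration; unset means fall back to the default.
    """
    val = os.environ.get("MXNET_MKLDNN_ENABLED")
    if val is None:
        return default
    return val.strip().lower() in ("1", "true", "yes", "on")

# Evaluated once when the module is imported, so the choice is fixed
# for the lifetime of the process.
USE_MKLDNN = mkldnn_enabled()
```

Reading the flag at import time (rather than per call) matches how such backends are typically selected, since the choice affects operator registration.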
-
In FAQ, you mention that hyperparameters can be tuned using all the data:
"How do I select the hyperparameters of the first stage models?
Alternatively, you can pick the best first stage models …
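One concrete way to pick first-stage hyperparameters on all the data, as the FAQ suggests, is plain cross-validated search over the candidate first-stage model. The model class and parameter grid below are illustrative choices, not the library's prescribed ones.

```python
# Minimal sketch: tune a first-stage (nuisance) model by cross-validation
# on the full dataset. RandomForestRegressor and the grid are examples.
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV

X, y = make_regression(n_samples=200, n_features=5, random_state=0)

search = GridSearchCV(
    RandomForestRegressor(random_state=0),
    param_grid={"max_depth": [3, 5], "n_estimators": [50, 100]},
    cv=3,
)
search.fit(X, y)  # tuned on all the data
best_first_stage = search.best_estimator_
```

The tuned `best_first_stage` would then be passed in as the first-stage model of the two-stage estimator.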
-
The existing auto-tuner in Boda can only tune one operation per call, i.e. we have to pass the desired operation and some additional information.
What if we want to tune a whole CNN? Basicall…
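A straightforward extension would be a driver that walks the network's operations and invokes the existing per-op tuner on each one. `tune_op` below is a stand-in for Boda's real single-operation entry point (hypothetical name and signature); only the looping pattern is the point.

```python
# Sketch: extend a per-operation tuner to a whole CNN by calling the
# existing single-op tuner once per operation in the network.
def tune_op(op_name, op_info):
    # Placeholder for Boda's real per-op tuner, which would search
    # implementation variants and return the best one for this op.
    return {"op": op_name, "best_variant": "ref", "info": op_info}

def tune_network(cnn_ops):
    """Tune every operation in a network, one tuner call per op."""
    return [tune_op(name, info) for name, info in cnn_ops]

# Example network description: (op name, extra info) pairs.
cnn = [("conv1", {"kernel": 3}),
       ("pool1", {"size": 2}),
       ("conv2", {"kernel": 3})]
results = tune_network(cnn)
```

A real version would also have to decide whether per-op optima compose well, or whether cross-op effects (e.g. layout conversions between ops) call for tuning groups of operations jointly.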
dinhv updated 7 years ago
-
Evaluate the performance of zio actors.
Ideally this should cover a range of cases similar to the ones covered by akka performance benchmarks https://github.com/akka/akka/tree/master/akka-bench-jmh/s…
-
I posted a similar issue in outlines, but here goes: we're building something complex and I think it would be helpful to have a marvin-like library that supports normal programming patterns with LLM'…
-
R learner + their data
-
## Week 1
### Information Pages (FAQ)
* Complete the FAQ page with frontend and backend elements
* [x] Have website programmer names
* [x] Have website programmer photos and bios
* [x] Fit the t…
-
Hi all,
I'm attempting to follow the SmoothQuant tutorial for the LLAMA2-7b model: [https://github.com/intel/neural-compressor/tree/master/examples/onnxrt/nlp/huggingface_model/text_generation/llam…
-
Hey there, is this still active?
This project was recently brought to my attention because it is being used by the TechEmpower benchmarks for Ruby to tune Puma in that portion of the benchmarks. Th…