-
In llama.cpp, there is a parameter that sets the number of tokens to output. Is there a command-line parameter to set the output to 512 tokens? Thank you.
-
-
## 🐛 Bug
We observed a ~2x slowdown in the training performance of a detection network when broadcasting a tensor.
This broadcasting operation launches the following kernel in the backward pass…
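For context, here is a minimal sketch (not the reported detection network; the shapes are made up) of the kind of broadcast that forces autograd to run an extra reduction in the backward pass:

```python
import torch

# Use CUDA if available so the extra kernel launch is actually exercised.
device = "cuda" if torch.cuda.is_available() else "cpu"

# `a` has a singleton dim that gets broadcast against `b` in the forward pass.
a = torch.randn(32, 1, 256, device=device, requires_grad=True)
b = torch.randn(32, 128, 256, device=device, requires_grad=True)

loss = (a + b).sum()   # `a` is broadcast along dim 1
loss.backward()        # a.grad must be summed back over the broadcast dim,
                       # which costs an additional reduction kernel
print(a.grad.shape)    # torch.Size([32, 1, 256])
```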
-
## Keyword: detection
### Neural Motion Fields: Encoding Grasp Trajectories as Implicit Value Functions
- **Authors:** Yun-Chun Chen, Adithyavairavan Murali, Balakumar Sundaralingam, Wei Ya…
-
@Sharjeeliv
Hello Sharjeel, this issue page was created for the task of time-series forecasting on user similarity matrices. Here is a detailed explanation of this task:
When UML (user modelin…
-
In this issue you can either:
- Add papers that you think are interesting to read and discuss (please stick to the format).
- Vote on papers using :+1: on the comments.
Example: https://githu…
-
Comment below with one speaker (and/or a paper by the speaker) whom you wish to see at our workshop.
Please make your comments by Wednesday 11:59 PM, and upvote at least five of your peers' comment…
-
"We can construct a so-called artificial neural network inside a computer, and then try to teach it to solve problems by giving it examples. This process is similar to how a newborn child learns about…
-
We are working to increase support for sparse tensors. Currently we have [summarized the current state of sparse tensors](https://github.com/pytorch/pytorch/issues/9674) and listed out [sparse ops to su…
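For readers unfamiliar with the existing API, here is a purely illustrative example of a sparse COO tensor and a sparse-dense matmul (not one of the specific ops tracked in the linked issues):

```python
import torch

# Build a 2x3 sparse COO tensor from a coordinate list of non-zero entries.
indices = torch.tensor([[0, 1, 1],    # row indices of the non-zeros
                        [2, 0, 2]])   # column indices of the non-zeros
values = torch.tensor([3.0, 4.0, 5.0])
sparse = torch.sparse_coo_tensor(indices, values, size=(2, 3))

# Sparse-dense matrix multiplication via the existing torch.sparse.mm API.
dense = torch.randn(3, 4)
out = torch.sparse.mm(sparse, dense)
print(out.shape)  # torch.Size([2, 4])
```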
-
Here is an exchange where I'm trying to get the mass of the planets in the Solar System:
`./chat -m ggml-alpaca-13b-q4.bin --temp 0.8 -n 512 -c 4096`
As you can see it's mostly garbage... I tried as…