-
Could support be added for these models? Their context size is far larger than the currently supported models': 65536 tokens instead of 2048, so they retain memory much better.
Here is more info about them: https:…
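For reference, on the Hugging Face side the larger window is just a config value. The sketch below is a hedged illustration rather than this project's code: the checkpoint name `mosaicml/mpt-7b-storywriter` and the `max_seq_len` attribute are assumptions about an MPT-style model, since the link above is truncated.

```python
# Hedged sketch: inspecting the context window of an MPT-style checkpoint.
# The model name and the max_seq_len config attribute are assumptions;
# the model the truncated link points to is not recoverable here.
from transformers import AutoConfig, AutoModelForCausalLM

name = "mosaicml/mpt-7b-storywriter"  # assumed long-context MPT variant
config = AutoConfig.from_pretrained(name, trust_remote_code=True)
print(config.max_seq_len)  # 65536 on the long-context variants, vs 2048 on the base models

model = AutoModelForCausalLM.from_pretrained(
    name, config=config, trust_remote_code=True
)
```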
-
![image](https://github.com/AdvisorySG/mentorship-page/assets/2070423/0bd4a574-036b-445e-853d-d487ba3d86e6)
When collapsed, there is a mismatch in vertical margin between top and bottom. The questi…
-
On Windows 11, `cmake ..` fails to produce a working Makefile;
one has to explicitly specify `cmake -G 'Unix Makefiles' ..`.
The w64devkit gcc version 12.2.0 (GCC) also has no `aligned_alloc`, therefore
``…
-
Does fine-tuning support multi-GPU training?
When trying to fine-tune with multiple GPUs, I got the following error.
> RuntimeError: Expected all tensors to be on the same device, but found a…
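Not the project's actual training loop, but a minimal sketch of the usual cause and fix for this error: the inputs end up on a different device than the model's parameters, so they have to be moved explicitly (or the model wrapped so each replica keeps its batch local).

```python
# Minimal sketch (assumed setup, not the repo's fine-tuning code): the error
# appears when inputs and parameters live on different devices.
import torch
import torch.nn as nn

model = nn.Linear(16, 4).to("cuda:0")     # parameters on GPU 0
x = torch.randn(8, 16)                    # inputs still on CPU -> device mismatch

device = next(model.parameters()).device  # find where the model actually lives
x = x.to(device)                          # move the batch before the forward pass
out = model(x)

# For real multi-GPU fine-tuning, torch.nn.parallel.DistributedDataParallel
# (launched with torchrun) keeps each replica and its batch on one device.
```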
-
Sorry if this has been covered before, but how big of a mesh should I be able to stream in with this build setting?

kjansen@pfe26:~/SCOREC-core/buildMGEN_write3D> more doConfigure14_18
#!/bin/bash -ex
…
-
I am unable to load an .mpt file that was previously imported into EC-Lab from .txt and therefore has no cycle_number column:
> ---------------------------------------------------------------------------
> At…
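The traceback is cut off, so the loader involved isn't clear. The sketch below is only a hedged workaround for the missing column, assuming a tab-separated BioLogic-style export whose header declares its own length ("Nb header lines") and which still carries a "half cycle" column; all of those names are assumptions.

```python
# Hedged workaround sketch: parse the .mpt as tab-separated text and derive a
# cycle number from the "half cycle" counter when "cycle number" is absent.
# The header convention and column names are assumptions about this export.
import pandas as pd

path = "data.mpt"  # hypothetical file name
with open(path, encoding="latin-1") as f:
    f.readline()                                # first line, e.g. "EC-Lab ASCII FILE"
    n_header = int(f.readline().split(":")[1])  # e.g. "Nb header lines : 55"

df = pd.read_csv(path, sep="\t", skiprows=n_header - 1, encoding="latin-1")

if "cycle number" not in df.columns and "half cycle" in df.columns:
    # two half cycles (charge + discharge) make up one full cycle
    df["cycle number"] = df["half cycle"] // 2
```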
-
Setup is not working. I'm getting this error when I install the apk, both when it is already installed and when it is not.
This is what I'm doing:
```
source mpt_venv/bin/activate
cd [dir containing the apk]
mpt…
-
Thanks for your great work. I'm running an MPT model on an NVIDIA V100 GPU. I think the compilation process went well, but the GPU cannot be utilized during inference. Here is what I got:
```bash
cmake -D C…
-
[MPT-1](https://linear.app/advisorysg/issue/MPT-1/disable-zoom-on-mobile)
-
I tried to use mpt-7b-ggml-q5_1 (https://huggingface.co/TheBloke/MPT-7B-GGML) with koboldcpp (commit hash: e6ddb15c3a8) on Ubuntu 22.04. It was fine at generating the English alphabet, but when it comes to…