-
There is an issue in chapter 18, in the following paragraph:
The only problem is that prediction is not particularly useful here. Ultimately, we want to know when we can increase prices and when we ca…
-
### What happened?
I'm using the `openai` library to interact with the `llama-server` docker image on an A6000:
`docker run -p 8080:8080 --name llama-server -v ~/gguf_models:/models --gpus all ghcr.io…
-
Input: ADMG
Output: Probability expression is independent of a set of nodes.
Note: this algorithm has already been implemented in the R package `causaleffect`. In #31, the `causaleffect` impleme…
-
Hello. Thank you for sharing such great work. I am trying to run the samples in inference.py. The instruction-tuned model worked perfectly. However, the in-context working example for the pretrained model did not…
-
It would be great to have a nice notebook explaining TransformerLM and maybe even the full Transformer in models/ -- both to explain the code and, if possible, with illustrations clarifying the concepts.
-
Description: In DeepSpeed-Chat step3, a runtime error (`The size of tensor a (4) must match the size of tensor b (8) at non-singleton dimension 0`) will be thrown when `inference_tp_size > 1` and hybrid engin…
-
I have set up the environment successfully, but when I run `lm_eval --model mamba_ssm --model_args pretrained=state-spaces/mamba-130m --tasks lambada_openai,hellaswag,piqa,arc_easy,arc_challenge,winogra…
-
Running:

```python
print('\n[Create 2 models for treatment and untreatment and estimate CATE (Conditional Average Treatment Effects)]')
train_df, test_df = cl.estimate_cate_by_2_models()
```

gives below er…
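For reference, the "2 models" approach the call above refers to (often called a T-learner) can be sketched with plain numpy on synthetic data: fit one outcome model on the treated rows and one on the control rows, then take the difference of their predictions as the per-row CATE. This is only an illustrative sketch of the technique, not the library's actual implementation; the data, coefficients, and constant effect `tau` below are made up:

```python
# Hedged sketch of a T-learner on synthetic data with a known constant
# treatment effect tau = 2.0; the CATE estimate should recover it.
import numpy as np

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 3))
t = rng.integers(0, 2, size=n)                 # binary treatment indicator
tau = 2.0                                       # true (constant) treatment effect
y = X @ np.array([1.0, -0.5, 0.3]) + tau * t + rng.normal(scale=0.1, size=n)

def fit_linear(X, y):
    # Ordinary least squares with an intercept column
    Xb = np.c_[X, np.ones(len(X))]
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return w

w_treated = fit_linear(X[t == 1], y[t == 1])    # model for treated rows
w_control = fit_linear(X[t == 0], y[t == 0])    # model for control rows

Xb = np.c_[X, np.ones(n)]
cate = Xb @ w_treated - Xb @ w_control          # per-row effect estimate
print(f"mean CATE estimate: {cate.mean():.2f}")
```

With low noise and a constant true effect, the mean estimated CATE lands close to `tau`; real libraries wrap the same idea around arbitrary regressors.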
-
### What is the issue?
Ollama kept crashing my GPU and causing screen flickers on my PC while working with VS Code and the Continue extension.
It works fine for about an hour, then throws a message about not connecti…
-
Hi, using regmedint, is it possible to compute the effects comparing two levels of a categorical exposure variable that has more than two categories? Thanks.