MichelNivard / gptstudio

GPT RStudio addins that enable GPT assisted coding, writing & analysis
https://michelnivard.github.io/gptstudio/

[Bug]: Chat in source gives "ChatGPT responded" message when I'm not using the OpenAI service #213

Closed jakeberv closed 5 months ago

jakeberv commented 5 months ago

Confirm setup

What happened?

I was trying to run gptstudio using codestral through Ollama. I am able to get the Shiny app interface to work with chat, but all the other addins seem to query ChatGPT directly (even though I have not specified an API key). I assume this works because it is now possible to access ChatGPT for free?

I am running Ollama outside Docker (i.e., not following the example on your website) so I can use the GPU; I'm not sure whether that makes a difference.

Attached is an image showing 'chat' working in the Shiny window, and another image with the console output showing the query to ChatGPT after asking it to comment the code:


Relevant log output

No response

Session info

``` r
gptstudio::gptstudio_sitrep()
#> 
#> ── Configuration for gptstudio ─────────────────────────────────────────────────
#> Using user configuration file at
#> '/Users/cotinga/Library/Preferences/org.R-project.R/R/gptstudio/config.yml'
#> 
#> ── Current Settings ──
#> - Model: codestral:latest
#> - Task: coding
#> - Language: en
#> - Service: ollama
#> - Custom prompt:
#> - Stream: TRUE
#> - Code style: base
#> - Skill: beginner
#> 
#> ── Checking API connections ──
#> 
#> ── Checking OpenAI API connection 
#> ✖ API key is not set or invalid for OpenAI service.
#> 
#> ── Checking HuggingFace API connection 
#> ✖ API key is not set or invalid for HuggingFace service.
#> 
#> ── Checking Anthropic API connection 
#> ✖ API key is not set or invalid for Anthropic service.
#> 
#> ── Checking Google AI Studio API connection 
#> ✖ API key is not set or invalid for Google AI Studio service.
#> 
#> ── Checking Azure OpenAI API connection 
#> ✖ API key is not set or invalid for Azure OpenAI service.
#> 
#> ── Checking Perplexity API connection 
#> ✖ API key is not set or invalid for Perplexity service.
#> 
#> ── Checking Cohere API connection 
#> ✖ API key is not set or invalid for Cohere service.
#> 
#> ── Check Ollama for Local API connection 
#> ✔ Ollama is running
#> 
#> ── Getting help ──
#> See the gptstudio homepage (<https://michelnivard.github.io/gptstudio/>) for
#> getting started guides and package documentation. File an issue or contribute
#> to the package at the GitHub repo
#> (<https://github.com/MichelNivard/gptstudio>).
#> ── End of gptstudio configuration ──────────────────────────────────────────────
```

Created on 2024-05-31 with reprex v2.1.0

Session info

``` r
sessioninfo::session_info()
#> ─ Session info ───────────────────────────────────────────────────────────────
#>  setting  value
#>  version  R version 4.2.2 (2022-10-31)
#>  os       macOS Ventura 13.6.6
#>  system   aarch64, darwin22.1.0
#>  ui       unknown
#>  language (EN)
#>  collate  en_US.UTF-8
#>  ctype    en_US.UTF-8
#>  tz       America/Detroit
#>  date     2024-05-31
#>  pandoc   3.2 @ /opt/homebrew/bin/ (via rmarkdown)
#> 
#> ─ Packages ───────────────────────────────────────────────────────────────────
#>  package     * version date (UTC) lib source
#>  assertthat    0.2.1   2019-03-21 [1] CRAN (R 4.2.2)
#>  cli           3.6.2   2023-12-11 [1] CRAN (R 4.2.2)
#>  curl          5.2.1   2024-03-01 [1] CRAN (R 4.2.2)
#>  digest        0.6.35  2024-03-11 [1] CRAN (R 4.2.2)
#>  ellipsis      0.3.2   2021-04-29 [1] CRAN (R 4.2.2)
#>  evaluate      0.22    2023-09-29 [1] CRAN (R 4.2.2)
#>  fastmap       1.1.1   2023-02-24 [1] CRAN (R 4.2.2)
#>  fs            1.6.3   2023-07-20 [1] CRAN (R 4.2.2)
#>  glue          1.7.0   2024-01-09 [1] CRAN (R 4.2.2)
#>  gptstudio     0.4.0   2024-05-21 [1] CRAN (R 4.2.2)
#>  htmltools     0.5.8.1 2024-04-04 [1] CRAN (R 4.2.2)
#>  htmlwidgets   1.6.2   2023-03-17 [1] CRAN (R 4.2.2)
#>  httpuv        1.6.8   2023-01-12 [1] CRAN (R 4.2.2)
#>  httr2         1.0.1   2024-04-01 [1] CRAN (R 4.2.2)
#>  jsonlite      1.8.8   2023-12-04 [1] CRAN (R 4.2.2)
#>  knitr         1.44    2023-09-11 [1] CRAN (R 4.2.2)
#>  later         1.3.0   2021-08-18 [1] CRAN (R 4.2.2)
#>  lifecycle     1.0.4   2023-11-07 [1] CRAN (R 4.2.2)
#>  magrittr      2.0.3   2022-03-30 [1] CRAN (R 4.2.2)
#>  mime          0.12    2021-09-28 [1] CRAN (R 4.2.2)
#>  promises      1.2.0.1 2021-02-11 [1] CRAN (R 4.2.2)
#>  R6            2.5.1   2021-08-19 [1] CRAN (R 4.2.2)
#>  rappdirs      0.3.3   2021-01-31 [1] CRAN (R 4.2.2)
#>  Rcpp          1.0.12  2024-01-09 [1] CRAN (R 4.2.2)
#>  reprex        2.1.0   2024-01-11 [1] CRAN (R 4.2.2)
#>  rlang         1.1.3   2024-01-10 [1] CRAN (R 4.2.2)
#>  rmarkdown     2.25    2023-09-18 [1] CRAN (R 4.2.2)
#>  rstudioapi    0.14    2022-08-22 [1] CRAN (R 4.2.2)
#>  sessioninfo   1.2.2   2021-12-06 [1] CRAN (R 4.2.2)
#>  shiny         1.7.4   2022-12-15 [1] CRAN (R 4.2.2)
#>  withr         2.5.1   2023-09-26 [1] CRAN (R 4.2.2)
#>  xfun          0.40    2023-08-09 [1] CRAN (R 4.2.2)
#>  xtable        1.8-4   2019-04-21 [1] CRAN (R 4.2.2)
#>  yaml          2.3.7   2023-01-23 [1] CRAN (R 4.2.2)
#> 
#>  [1] /opt/homebrew/lib/R/4.2/site-library
#>  [2] /opt/homebrew/Cellar/r/4.2.2/lib/R/library
#> 
#> ──────────────────────────────────────────────────────────────────────────────
```


### Code of Conduct

- [X] I agree to follow this project's Code of Conduct
calderonsamuel commented 5 months ago

Uhmm, I'm not sure what makes you think that the other addins use ChatGPT 🤔. It seems to me like you are actually using Ollama.

I have ollama outside docker

If by this you mean that you have used the Ollama installer for macOS instead of Docker, don't worry. Both the macOS and the Docker versions expose the local Ollama API on the same port. This is why you are getting responses. This is actually the expected behavior!
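The point above can be checked directly from a terminal. This is a hedged sketch: `/api/tags` is Ollama's model-listing route, 11434 is the documented default port for both installs, and the `curl` line is left commented so the snippet is safe to paste even when Ollama isn't running:

```shell
# Both the macOS installer and the Docker image serve the Ollama API on
# port 11434, so the same request reaches either kind of install.
OLLAMA_HOST="${OLLAMA_HOST:-http://localhost:11434}"
echo "Querying ${OLLAMA_HOST}/api/tags"
# Uncomment when Ollama is running to list the locally available models:
# curl -s "${OLLAMA_HOST}/api/tags"
```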

If you want more proof that you are not getting responses from ChatGPT, look at this section of your `gptstudio_sitrep()` output:

#> ── Checking OpenAI API connection 
#> 
#> ✖ API key is not set or invalid for OpenAI service.

AFAIK, while you can use the ChatGPT web interface for free, you can't use the OpenAI API for free, and the failed check above reflects that.

Let me know if that resolves your doubt.

jakeberv commented 5 months ago

The output in the console says:

```
Add comments to explain this code. Your output will go directly into a source
(.R) file. Comment the code line by line:
hello<-function(){
  print('hello')
}
✔ ChatGPT responded [5.1s]
```

So, this is just a reporting issue?

calderonsamuel commented 5 months ago

You are 100% right. I hadn't noticed that we show the message "ChatGPT responded". You are in fact using Ollama, but the message is misleading. I'll rename this issue to focus on that.

jakeberv commented 5 months ago

Cool - thanks!

jakeberv commented 5 months ago

Related question:

If by this you mean that you have used the Ollama installer for MacOS instead of docker, don't worry. Both the MacOS and the docker version set the local Ollama API at the same port.

Is it possible to use the LM Studio server instead of Ollama? The default server port for LM Studio is 1234. I tried setting the OLLAMA_HOST environment variable to http://localhost:1234, but no luck.

calderonsamuel commented 5 months ago

We use OLLAMA_HOST to find the Ollama API, which by default is at http://localhost:11434/api. I'm not familiar with LM Studio, so I don't have a definitive answer. If you link the documentation and it is a reasonable and stable provider, we might add it to our support list (especially if it's open source).
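A likely reason repointing OLLAMA_HOST didn't work: the two servers speak different route shapes. This is a sketch, not gptstudio internals; it assumes Ollama's native chat route is `/api/chat` and that LM Studio exposes OpenAI-compatible routes under `/v1` (as its local-server docs describe), so changing the host alone still produces an Ollama-style request path:

```shell
# Repointing OLLAMA_HOST changes the host but not the route shape:
# the client still builds an Ollama-native /api/... path.
OLLAMA_HOST="http://localhost:1234"
echo "Ollama-style request:   ${OLLAMA_HOST}/api/chat"
echo "LM Studio route shape:  http://localhost:1234/v1/chat/completions"
```

Supporting LM Studio would therefore mean adding an OpenAI-compatible backend with a configurable base URL, not reusing the Ollama one.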

jakeberv commented 5 months ago

docs here https://lmstudio.ai/docs/local-server

calderonsamuel commented 5 months ago

Hey @jakeberv, this has been fixed and will soon be merged into the dev version (this issue will autoclose). Thanks for your report! For LM Studio support, please open a new issue following the "New feature" route.