bramses / chatgpt-md

A (nearly) seamless integration of ChatGPT into Obsidian.
MIT License
824 stars 61 forks

gpt-4 model is not accessible #66

Closed alexfg closed 1 year ago

alexfg commented 1 year ago

Hello!

I tried changing the model in the plugin options, but even after setting it to gpt-4 it seems that the request still goes through the 3.5 model. Checking the OpenAI usage page confirms this. (I do have access to gpt-4.)

This is my frontmatter:

---
system_commands: ['I am a helpful assistant.']
temperature: 0.7
top_p: 1
max_tokens: 1500
presence_penalty: 1
frequency_penalty: 1
stream: true
stop: null
n: 1
model: gpt-4
---
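For reference, a frontmatter block like the one above maps more or less directly onto a Chat Completions request body. Below is a minimal sketch of that mapping; the parser is hypothetical and stdlib-only (the actual plugin uses its own YAML handling), and `NOTE` is a made-up sample note:

```python
import json

# a made-up sample note with the same frontmatter keys used in this thread
NOTE = """\
---
temperature: 0.7
max_tokens: 1500
stream: true
model: gpt-4
---
Hello!
"""

def parse_frontmatter(note: str) -> dict:
    """Naive 'key: value' parser for the block between the first two '---' lines."""
    block = note.split("---")[1]
    opts = {}
    for line in block.strip().splitlines():
        key, _, raw = line.partition(":")
        raw = raw.strip()
        # coerce the few scalar types that appear in these examples
        if raw in ("true", "false"):
            val = raw == "true"
        else:
            try:
                val = float(raw) if "." in raw else int(raw)
            except ValueError:
                val = raw
        opts[key.strip()] = val
    return opts

payload = parse_frontmatter(NOTE)
payload["messages"] = [{"role": "user", "content": "Hello!"}]
print(json.dumps(payload, indent=2))
```

If `model` does not survive this kind of mapping (e.g. the frontmatter is malformed or ignored), the request silently falls back to the plugin's default model, which matches the behavior described above.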

adfrisealach commented 1 year ago

I'm also unable to use gpt-4 through the plugin, though in my case I get an error message: "The model gpt-4 does not exist".

lukemt commented 1 year ago

@alexfg Strange, it should work. When I set gpt-4 in the frontmatter the responses are noticeably slower (and cost more as well).

Here is the template I use:


---
system_commands: ['You are a helpful assistant.']
temperature: 0.5
top_p: 1
max_tokens: 512
presence_penalty: 1
frequency_penalty: 1
stream: true
stop: null
n: 1
model: gpt-4
---

## role::user

Hi

@adfrisealach Do you have access to gpt-4? Otherwise please put yourself on the waitlist

alexfg commented 1 year ago

@lukemt I ran test after test while watching the request list in the OpenAI platform. The requests will not go through to gpt-4. I also tested with Text Generator, where it works, but I prefer the ChatGPT MD options, i.e. context history, response layout, etc.

I am still looking for a solution...

lukemt commented 1 year ago

Ok, thanks.

A few things to try:

  1. Check whether the frontmatter is ignored altogether. For example, see what happens when you change the system message. Like so 😎:
    
    ---
    system_commands: ['Your objective is to incorporate the word “blurb” into every message.']
    temperature: 0.5
    top_p: 1
    max_tokens: 512
    presence_penalty: 1
    frequency_penalty: 1
    stream: true
    stop: null
    n: 1
    model: gpt-3.5-turbo
    ---

role::user

Hi there!


role::assistant

Hey, have you read any good blurbs lately?


role::user



2. Configure chatgpt-md in a new vault and check whether you can reproduce the issue. If you can't, look very closely and try to spot any differences.

3. Reinstall chatgpt-md

That's what I would do; hopefully this brings us closer to a solution.
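A further check, independent of Obsidian, is to ask the API which models the key can actually see: gpt-4 requests can only succeed if `gpt-4` appears in the `GET /v1/models` listing. A sketch (assumes `OPENAI_API_KEY` is set in the environment; `has_model` is a hypothetical helper that just scans the JSON response):

```python
import json
import os
import urllib.request

def has_model(models_response: dict, model_id: str) -> bool:
    """Check whether a /v1/models response lists the given model id."""
    return any(m.get("id") == model_id for m in models_response.get("data", []))

# only hit the network when a key is actually configured
if os.environ.get("OPENAI_API_KEY"):
    req = urllib.request.Request(
        "https://api.openai.com/v1/models",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        listing = json.load(resp)
    print("gpt-4 available:", has_model(listing, "gpt-4"))
```

If this prints `False`, the problem is account access (the waitlist point above), not the plugin.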
alexfg commented 1 year ago

@lukemt So I tried your suggestions. The results are both embarrassing and illuminating about how the plugin works on my side. 😅

My understanding was that when no frontmatter is included in the file, the Chat command would fall back, behind the scenes, to the default frontmatter. That does not happen. <- Maybe make this a feature?

So, with the frontmatter included in the file, everything works as expected in the fresh vault I created. However, it is still broken in my initial vault. I believe the error is on my side: I tinkered so much with my initial vault and the GPT plugins that I might have broken something altogether. 😂

I will close this, as it seems there is no error on the plugin side. I will open a new issue if I track down the error blocking my initial vault and it turns out it could have been avoided.

Thank you for your help!

AlexEscalante commented 4 months ago

I do have access to gpt-4, since I tested it in NeoVim.

I did set `model: gpt-4` in the frontmatter in Obsidian; however, I do not get any answer, nor do I get an error. It just hangs.
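One way to narrow down a hang like this is to send the same request outside Obsidian with streaming off and a hard timeout, so a silent hang becomes a visible error. A minimal sketch (`build_request` is a hypothetical helper; assumes `OPENAI_API_KEY` is set in the environment):

```python
import json
import os
import urllib.error
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"

def build_request(model: str = "gpt-4") -> urllib.request.Request:
    """Build a minimal, non-streaming chat request to isolate the hang."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": "Hi"}],
        "stream": False,  # no streaming: a hang here is not a streaming bug
    }).encode()
    return urllib.request.Request(
        API_URL,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ.get('OPENAI_API_KEY', '')}",
        },
    )

# only hit the network when a key is actually configured
if os.environ.get("OPENAI_API_KEY"):
    try:
        # the timeout turns a silent hang into a visible error
        with urllib.request.urlopen(build_request(), timeout=30) as resp:
            print(json.load(resp)["choices"][0]["message"]["content"])
    except urllib.error.HTTPError as err:
        print("HTTP error:", err.code, err.read().decode()[:200])
```

If this call succeeds while the plugin still hangs, the problem is on the plugin side (possibly its streaming handling); if it also hangs, the problem is network- or account-related.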