ubiquity-os-marketplace / command-ask


`@ubiquityos` gpt command #1

Closed · Keyrxng closed this 3 days ago

Keyrxng commented 3 months ago

Resolves https://github.com/ubiquibot/plugins-wishlist/issues/29

I followed your prompt template and kept the system message short and sweet.

It seems the model can lose track of the question being asked, so I think it might be better to prioritize the question.

I think padding out the chat history slightly would do the trick (a rough sketch follows the list below):

  1. system
  2. user - long prompt
  3. assistant - manually inserted short acknowledgement of the context received
  4. user - directly ask the question
  5. assistant - the real API response
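
A minimal sketch of that message ordering using the `openai` SDK; `formattedContext` and `question` are hypothetical placeholders, not the plugin's actual prompt:

```typescript
import OpenAI from "openai";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// Hypothetical placeholders: the real plugin builds these from the issue thread.
const formattedContext = "<linked issues, comments, and other metadata>";
const question = "What is the status of this task?";

const messages: OpenAI.Chat.ChatCompletionMessageParam[] = [
  { role: "system", content: "You are UbiquityOS, a helpful GitHub assistant." }, // 1. short system message
  { role: "user", content: formattedContext }, // 2. long context prompt
  { role: "assistant", content: "Context received." }, // 3. manually inserted acknowledgement
  { role: "user", content: question }, // 4. the question, asked last so it stays in focus
];

const completion = await openai.chat.completions.create({ model: "gpt-4o", messages });
console.log(completion.choices[0].message.content); // 5. the real API response
```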

github-actions[bot] commented 3 months ago

Unused dependencies (1)

| Filename | Dependencies |
| --- | --- |
| package.json | dotenv |

Unused types (2)

| Filename | Types |
| --- | --- |
| src/types/github.ts | IssueComment, ReviewComment |

Keyrxng commented 2 months ago

I'll gather fresh QA tomorrow

0x4007 commented 2 months ago

> I'll gather fresh QA tomorrow

Curious to see examples.

Keyrxng commented 2 months ago

@0x4007 fresh QA

https://github.com/ubq-testing/bot-ai/issues/49#issuecomment-2247731420

It takes all of the context no problem, but it still gets a little lost in it and mixes together context where it shouldn't.

It's also trying to use the `#` reference format, so GitHub is auto-linking my first issue instead of this issue, but it is receiving the correct context, as seen in the conversation and other metadata.

0x4007 commented 1 month ago

Maybe let's test in production, but let's make it reply to @UbiquityOS instead of /gpt (a rough sketch of the new trigger follows this list).

  1. Make the change
  2. Merge
  3. Install config

And let's go from there
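
A minimal sketch of the new trigger check, assuming the bot replies when a comment starts by mentioning it; the slug value and function name here are hypothetical:

```typescript
// Hypothetical mention check: respond to "@UbiquityOS <question>" instead of "/gpt".
const APP_SLUG = "UbiquityOS"; // assumed value; see the .env discussion further down

function isBotMention(commentBody: string): boolean {
  // Case-insensitive match for the mention at the start of the comment.
  return new RegExp(`^@${APP_SLUG}\\b`, "i").test(commentBody.trim());
}

isBotMention("@UbiquityOS what is the status here?"); // true
isBotMention("/gpt what is the status here?"); // false under the new trigger
```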

0x4007 commented 3 weeks ago

Merge or delete this repo? @Keyrxng

Keyrxng commented 3 weeks ago

Shouldn't we rebrand this repo, since I think it'll be doing more than the previously intended /ask feature?

Or should the review-under-conditions feature be its own plugin? I assumed this one would do both, to save replicating the context fetching etc.

0x4007 commented 3 weeks ago

Sure we can rename later

Keyrxng commented 3 weeks ago

What are we calling this so I can update the references in package.json and the readme?

QA:

0x4007 commented 3 weeks ago

command-ask is fine for now. Your QA makes it look stable. Can we start using it? Also I want to mention that I have access to o1 from the API now.

https://platform.openai.com/docs/guides/reasoning

I'm not sure which model is best. I'm assuming o1-mini is pretty solid for our use case though.

The maximum output token limits are:

- o1-preview: up to 32,768 tokens
- o1-mini: up to 65,536 tokens



> Hi there,
>
> I’m Nikunj, PM for the OpenAI API. We’ve been working on expanding access to the OpenAI o1 beta and we’re excited to provide API access to you today. We’ve developed these models to spend more time thinking before they respond. They can reason through complex tasks and solve harder problems than previous models in science, coding, and math.
>
> As a trusted developer on usage tier 4, you’re invited to get started with the o1 beta today. Read the docs. You have access to two models:
>
> - Our larger model, o1-preview, which has strong reasoning capabilities and broad world knowledge.
> - Our smaller model, o1-mini, which is 80% cheaper than o1-preview.
>
> Try both models! You may find one better than the other for your specific use case. But keep in mind o1-mini is faster, cheaper, and competitive with o1-preview at coding tasks (you can see how it performs here). We’ve also written up more about these models in our blog post.
>
> These models currently have a rate limit of 100 requests per minute for developers on usage tier 4, but we’ll be increasing rate limits soon. To get immediately notified of updates, follow @OpenAIDevs. I can’t wait to see what you build with o1—please don’t hesitate to reply with any questions.

Keyrxng commented 3 weeks ago

> command-ask is fine for now. Your QA makes it look stable. Can we start using it? Also I want to mention that I have access to o1 from the API now.
>
> https://platform.openai.com/docs/guides/reasoning
>
> I'm not sure which model is best. I'm assuming o1-mini is pretty solid for our use case though.

In my opinion, o1 is too slow compared to 4o, so I'd prefer to keep using 4o. Honestly, the reasoning models on the OpenAI website have not impressed me so far; I don't know about you guys.

> But keep in mind o1-mini is faster, cheaper, and competitive with o1-preview at coding tasks

i.e. it's faster and cheaper than o1-preview, but it drags compared to 4o.

> Your QA makes it look stable. Can we start using it?

I hope so, as soon as it gets merged. I will apply the finishing touches and it should be mergeable once any other review comments are addressed.

Keyrxng commented 3 weeks ago

Typically slash-command plugins have a `commands` entry in the manifest, but with this one I'm unsure what to do: if the command is configurable, then an entry doesn't make sense; however, if it's going to be a constant, then I could add one.
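
For illustration, a hypothetical sketch of what a constant `commands` entry could look like, written as a TypeScript object standing in for `manifest.json` (the exact manifest schema is assumed here, not confirmed):

```typescript
// Hypothetical manifest shape if the command name is a constant rather than configurable.
const manifest = {
  name: "command-ask",
  commands: {
    ask: {
      description: "Ask a question about the current issue or pull request",
      "ubiquity:example": "/ask what is the status of this task?", // assumed key, modeled on other plugins
    },
  },
};

export default manifest;
```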

0x4007 commented 3 weeks ago

> o1 in my opinion is too slow compared to 4o

I think it's fine. A comment responding ten seconds later isn't a problem

Keyrxng commented 3 weeks ago

I moved UBIQUITY_OS_APP_SLUG into .env so that we set it when we deploy the worker. I did this to make it impossible for a partner to white-label it and alter the command, as I got the feeling that's what's intended with this plugin.
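
A minimal sketch of how the worker could read that value, assuming the standard Cloudflare Workers module format (the plugin's actual entry point may differ):

```typescript
// Hypothetical worker entry point: UBIQUITY_OS_APP_SLUG comes from the
// deploy-time environment (.env / deployment secrets), not the partner-editable config.
interface Env {
  UBIQUITY_OS_APP_SLUG: string;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const slug = env.UBIQUITY_OS_APP_SLUG;
    // ... dispatch the webhook payload and match comments mentioning `@${slug}` ...
    return new Response(`OK: responding as @${slug}`);
  },
};
```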

Keyrxng commented 2 weeks ago

Some recent additional QA that was built on top of this plugin:

https://github.com/ubq-testing/ask-plugin/issues/2

> I think it's fine. A comment responding ten seconds later isn't a problem

I noticed that I don't have o1 access, so I had to specify a model in the config or it would error for me. I know that as an org we'll use o1, but should we use a stable GPT-4 model as the default to avoid this error for others?
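
A minimal sketch of that kind of fallback, assuming a hypothetical optional `model` setting in the plugin config:

```typescript
// Hypothetical settings shape: only use o1 when explicitly configured.
interface PluginSettings {
  model?: string; // e.g. "o1-mini"; optional override in the plugin config
}

function resolveModel(settings: PluginSettings): string {
  // gpt-4o is broadly available, so users without o1 access don't hit an API error.
  return settings.model ?? "gpt-4o";
}

resolveModel({}); // "gpt-4o"
resolveModel({ model: "o1-mini" }); // "o1-mini"
```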

sshivaditya2019 commented 2 weeks ago

@Keyrxng I think it would be a good idea for me to continue with this PR, as my #2 PR builds on it.

@0x4007 rfc

Keyrxng commented 2 weeks ago

This PR should be merged separately from your feature. If required, branch off from this PR; do not add your logic to it.

This PR is held back by review only

Keyrxng commented 2 weeks ago

I realized I never pushed the branch to my repo which facilitated the onboarding bot built on top of this PR:

https://github.com/ubiquity-os-marketplace/text-vector-embeddings/pull/18
https://github.com/ubq-testing/ask-plugin/tree/onboarding-bot

Is this getting merged or closed in favour of #2 @0x4007?