Open · 0x4007 opened this issue 1 year ago
/start
Deadline: Wed, 23 Aug 2023 17:36:41 UTC
Registered Wallet: 0xAe5D1F192013db889b1e2115A370aB133f359765
Use /wallet 0x0000...0000 if you want to update your registered payment wallet address @user.
https://api.github.com/repos/ubiquity/ubiquibot/issues/291/comments
Scrubbed:
[
{
"id": 1690282995,
"user": {
"login": "Keyrxng",
"id": 106303466,
"type": "User"
},
"created_at": "2023-08-23T16:36:38Z",
"updated_at": "2023-08-23T16:36:38Z",
"body": "/start"
},
{
"id": 1690283105,
"user": {
"login": "ubiquibot[bot]",
"id": 113181824,
"type": "Bot"
},
"created_at": "2023-08-23T16:36:43Z",
"updated_at": "2023-08-23T16:36:43Z",
"body": "\n<code>\n\n <table>\n <tr>\n <td></td>\n <td></td>\n </tr>\n <tr>\n <td>Deadline</td>\n <td>Wed, 23 Aug 2023 17:36:41 UTC</td>\n </tr>\n <tr>\n <td>Registered Wallet</td>\n <td>0xAe5D1F192013db889b1e2115A370aB133f359765</td>\n </tr>\n \n \n \n </table>\n</code><h6>Tips:</h6>\n <ul>\n <li>Use <code>/wallet 0x0000...0000</code> if you want to update your registered payment wallet address @user.</li>\n <li>Be sure to open a draft pull request as soon as possible to communicate updates on your progress.</li>\n <li>Be sure to provide timely updates to us when requested, or you will be automatically unassigned from the bounty.</li>\n <ul>"
},
{
"id": 1690302671,
"user": {
"login": "Keyrxng",
"id": 106303466,
"type": "User"
},
"created_at": "2023-08-23T16:50:15Z",
"updated_at": "2023-08-23T16:50:15Z",
"body": "Take a look [here](https://github.com/Keyrxng/didactic-octo-train/issues/8)"
}
]
The spec was adjusted according to https://github.com/ubiquity/ubiquibot/pull/663#issuecomment-1705464635 so I added some more to the bounty
@pavlovcik that's wild dude, very much appreciated thank you!
I think it would be awesome to get each repo vectorized and stored, and see how well it performs the spec checking knowing the entire codebase and not just the PR diff. I'm interested to see what AI stuff you have in mind following your time away just now.
Do you have any updates @Keyrxng? If you would like to release the bounty back to the DevPool, please comment /stop
Last activity time: Thu Sep 14 2023 09:23:19 GMT+0000 (Coordinated Universal Time)
PR completed; just awaiting behind-the-scenes action, I'm sure.
@pavlovcik I'm not meant to be claiming any sort of convo rewards if I'm the assignee right? I have two permits for ~$20 sitting
Why conversation Rewards includes @Keyrxng? he is an assignee. seems a bug.
You are always free to claim every reward, unless there is some EXCEPTIONAL bug (like ~500 USD+). However, I think they have the same permit nonce, so you should definitely try to claim the largest amount available to you per issue first. The second one will probably fail.
It seems that we implemented the "backend" securely and now the "frontend" has some catching up to do with consolidating all of the rewards into a single payment permit per contributor.
Oh I vaguely recall that we encode the issue id, the user id, and I believe that we also added a custom extra string like "comments" or "assignee" to generate unique nonces per type, per contributor, per issue!
Looks like it's not working https://github.com/ubiquity/ubiquibot/issues/811#issuecomment-1732348869
I changed the token limit in our config from like 16385 or whatever to 8000 and now it just silently fails
@Keyrxng please be sure to review this conversation and implement any necessary fixes. Unless this is closed as completed, its payment of 300.0 WXDAI will be deducted from your next bounty.
Opened this as I think there is something amiss across all commands
I believe it. The payouts are also all unreliable
I don't think there's a default config anymore; I had a hard time using my repo after some recent update. It kept telling me to remove some config data that's not needed and add some new entries, and it also renamed some before it started working again.
It did the same for me, but looking at your repo config you only have two entries here.
Where is it reading the rest from for you?
^^^^^
Scratch that, I just noticed your dev branch is 90 commits behind, and I'm not sure which of your branches is most recent to check myself.
Ahhh okay, maybe I'll have to do the same then if that's the only way things are working at the moment, but it's not ideal.
I'm going to test with my repo now. Besides, I think #796 needs to be fixed; it's not easily noticeable, but new updates will not be reflected with this bug.
I spotted whilefoo raise this on TG at the time and tried to reproduce it, but could not. To this day I still haven't, and I've done a fresh install multiple times (again like 10 mins ago), so #796 doesn't affect me somehow.
Do you have any updates @Keyrxng? If you would like to release the bounty back to the DevPool, please comment /stop
Last activity time: Sat Sep 23 2023 18:19:11 GMT+0000 (Coordinated Universal Time)
/start
Deadline: Mon, 09 Oct 2023 18:16:41 UTC
Registered Wallet: 0xAe5D1F192013db889b1e2115A370aB133f359765
Use /wallet 0x0000...0000 if you want to update your registered payment wallet address @user.
/start
Skipping /start since the issue is already assigned
It's working on my org repo now that I've worked around the default config still not working.
All I've done is pass in 4000 via the private settings repo at the path configRepo/.github/config.yml.
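For reference, that override in the private settings repo presumably looks something like this. The exact key name is a guess; check the bot's config schema:

```yaml
# configRepo/.github/config.yml
# Hypothetical key name for the OpenAI token-limit override described above.
openai-tokens: 4000
```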
Is this still relevant? It's been a busy week, but I didn't think it was resolved.
Using LangChain we can pass -1 for the token count, which tells it to use however many tokens remain in the context window for the response. Just a little FYI:
```typescript
import { OpenAI } from "langchain/llms/openai";

this.llm = new OpenAI({
  openAIApiKey: this.apiKey,
  modelName: "gpt-3.5-turbo-16k",
  maxTokens: -1, // -1 = use the remaining context-window tokens for the response
});
```
This would be an easy and simple fix for our token problems
I'm still getting the no-such-file error for \lib\assets\images\pmg.png. I've created /lib/assets/pmg.png and still no joy; it's exiting the process after responding for me. Any fix for this?
I'm assuming that you've probably used an API key that is used for other things, but @pavlovcik, by chance have you created a key whose usage we can check, to see whether the /ask call is actually being made when invoked from this issue?
I cannot reproduce the non-response; I either get errors or it responds to me as it should.
Introduced `tokenLimit: openAITokenLimit || 0,` which will result in it failing if the token limit is undefined, as 0 is an invalid value for max_tokens. So we'll have to define a reasonable value, as rndquu requested before; my recommendation is probably about 60/40 for the size of the issues being parsed and linked in this org.
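A safer fallback than `|| 0` might look like the sketch below. The helper and constant names are hypothetical (not from the bot's codebase), and the default value is an assumption:

```javascript
// Hypothetical fix for the `openAITokenLimit || 0` issue above: fall back to a
// sane default instead of 0, since 0 is an invalid value for max_tokens.
const DEFAULT_TOKEN_LIMIT = 8000; // assumption; pick per model

function resolveTokenLimit(openAITokenLimit) {
  // Treat undefined, null, 0, and negative values as "use the default",
  // so a missing config entry can never silently produce max_tokens: 0.
  return openAITokenLimit && openAITokenLimit > 0
    ? openAITokenLimit
    : DEFAULT_TOKEN_LIMIT;
}
```

This keeps a missing config entry from turning into a silent failure at the OpenAI call site.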
It's probably not the case, but I'm sure I said it before: parallel /ask calls on different repos would fail for me on at least one of them. That would imply every failure happened at the same time as someone else calling it, which is absurd, but still something to consider down the road, I think.
@pavlovcik @rndquu bump
Bumping, as I feel I've dragged this out far longer than acceptable by being MIA last week; hoping to sort the three out this week ASAP.
Do you have any updates @Keyrxng? If you would like to release the bounty back to the DevPool, please comment /stop
Last activity time: Wed Oct 04 2023 09:36:18 GMT+0000 (Coordinated Universal Time)
The /ask command is working fine in the latest development branch.
The thing is that right now the bot's production build is set to the 1st of September, which doesn't have the ask command codebase, so we can't use it in production until we switch to the bot's latest build.
Anyway this issue can be closed as completed because all we need to do is:
Yes @rndquu, love to hear it! So everything is all good with this, and I can finish the rest of the PRs that rely on and/or use the same functionality. I knew there wasn't anything on my end; I did everything I could think of to debug and resolve this, lmao. I was rather shitting it, truth be told, so I'm glad you have put this to bed.
That "tokenLimit || 0" needs to be updated as that'll cause headaches but all good otherwise
Do you have any updates @Keyrxng? If you would like to release the bounty back to the DevPool, please comment /stop
Last activity time: Tue Oct 10 2023 14:07:18 GMT+0000 (Coordinated Universal Time)
Everything is working as it should be
removed my assignment to avoid any more bot updates
There have been several instances (including with myself) where I would answer a question presented in a pull request review, or in an issue conversation, by asking ChatGPT and pasting in the results.
It could be very nice to see what exactly the original prompt was inside of the conversation for full context. Imagine if we could simply handle this by using an /ask command? Any of the words following the command would be passed into GPT-4.
On one hand, it feels a bit extraneous as a feature. On the other hand, we do plan to lean pretty heavily into the AI-powered features for version one of the bot, so I feel that this idea is not totally off course.
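The command shape described above could be parsed with something like this minimal sketch. The function name is hypothetical; the real bot's parsing may differ:

```javascript
// Hypothetical sketch of the /ask command described above: strip the command
// prefix and treat everything after it as the prompt to send to the model.
function parseAskCommand(commentBody) {
  const match = commentBody.trim().match(/^\/ask\s+([\s\S]+)/);
  // Returns null when the comment is not an /ask command or has no prompt text.
  return match ? match[1].trim() : null;
}
```

Anything the parser returns would then be forwarded to the model, with the original comment preserved in the thread for full context.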
Originally posted by @FibrinLab in https://github.com/ubiquity/ubiquity-dollar/issues/629#issuecomment-1532618125