Closed by 0x4007 2 weeks ago
/start
| Deadline | Wed, Oct 9, 4:31 AM UTC |
| --- | --- |
| Beneficiary | 0x830E2a4D7714989361BA76d3eA9C49e093f0A04C |

> [!TIP]
> - Use `/wallet 0x0000...0000` if you want to update your registered payment wallet address.
> - Be sure to open a draft pull request as soon as possible to communicate updates on your progress.
> - Be sure to provide timely updates to us when requested, or you will be automatically unassigned from the task.

@0x4007 Does this cover the changes mentioned in https://github.com/ubiquity-os-marketplace/text-conversation-rewards/issues/143 for the prompt?

> @0x4007 Does this cover the changes mentioned in #143 for the prompt?

Looks like I made a mistake. Yes.

> @0x4007 Does this cover the changes mentioned in #143 for the prompt?

Please, you will have to clarify this for me because I don't see the changes mentioned.
> [!CAUTION]
> Failed to run comment evaluation. `SyntaxError: Expected double-quoted property name in JSON at position 24`

> [!CAUTION]
> Failed to run comment evaluation. `SyntaxError: Unterminated string in JSON at position 27`

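Both failures are plain `JSON.parse` errors, which typically occur when the model reply is not strict JSON (unquoted property names, a truncated string, or a markdown code fence wrapped around the payload). Below is a minimal defensive-parsing sketch; the helper name and shape are hypothetical, not the plugin's actual code.

```typescript
// Hypothetical helper, not the plugin's implementation: parse an LLM reply
// that may not be strict JSON before reading relevance scores from it.
function parseEvaluationReply(raw: string): Record<string, number> {
  // Models sometimes wrap their answer in ```json fences; strip them first.
  const cleaned = raw
    .replace(/^\s*```(?:json)?\s*/i, "")
    .replace(/\s*```\s*$/, "")
    .trim();
  try {
    return JSON.parse(cleaned) as Record<string, number>;
  } catch (err) {
    // Re-throws with context so failures such as "Expected double-quoted
    // property name" or "Unterminated string in JSON" point back to this step.
    throw new Error(`Comment evaluation returned invalid JSON: ${(err as Error).message}`);
  }
}
```
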
Evaluating results. Please wait...

View | Contribution | Count | Reward |
---|---|---|---|
Issue | Task | 1 | 50 |
Issue | Comment | 1 | 0 |
Review | Comment | 9 | 349.978 |

| Comment | Formatting (elements + words) | Relevance | Reward |
| --- | --- | --- | --- |
| please, you will have to clarify this for me because i don't see… | 0 (elements 0, 17 words → 0) | 0.5 | 0 |
| Resolves https://github.com/ubiquity-os-marketplace/text-convers… | 0 (elements 0, 12 words → 0) | 0.4 | 0 |
| The line break basically has no effect in separating the comment… | 7.94 (elements 0, 76 words → 7.94) | 0.5 | 7.94 |
| @gentlementlegen here is the new fix. | 0.92 (elements 0, 6 words → 0.92) | 0.3 | 0.552 |
| ### Prompt Evaluate the relevance of GitHub comments to an … | 90.88 (elements 30.5, 827 words → 60.38) | 0.6 | 133.456 |
| @0x4007 suggested this new format. | 0.79 (elements 0, 5 words → 0.79) | 0.5 | 0.79 |
| @sshivaditya2019 Here is the result from the issue you provided… | 139.03 (elements 51.5, 1280 words → 87.53) | 0.5 | 190.53 |
| @gentlementlegen @0x4007 does this PR meet the requirements to … | 2.55 (elements 0, 20 words → 2.55) | 0.7 | 3.57 |
| @gentlementlegen I have resolved the conflict and from the previ… | 7.55 (elements 5, 20 words → 2.55) | 0.6 | 13.06 |
| Done | 0.2 (elements 0, 1 word → 0.2) | 0.2 | 0.08 |

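The Formatting column reads as `total (elements E, N words → W)`: the total is the HTML element score plus the word score, e.g. 30.5 + 60.38 = 90.88 for the prompt comment above. A minimal sketch of that decomposition, using hypothetical types rather than the plugin's actual data model:

```typescript
// Illustrative only: how the Formatting totals in these tables break down.
// The per-word value curve is defined by the plugin and taken as given here.
interface FormattingBreakdown {
  elementResult: number; // summed HTML element scores, e.g. 30.5
  wordResult: number;    // word score, e.g. 60.38 for 827 words at value 0.2
}

function formattingTotal({ elementResult, wordResult }: FormattingBreakdown): number {
  return elementResult + wordResult;
}

// Example row from above: elements 30.5 + words 60.38 = 90.88
const promptCommentTotal = formattingTotal({ elementResult: 30.5, wordResult: 60.38 });
```
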
View | Contribution | Count | Reward |
---|---|---|---|
Issue | Comment | 1 | 1.242 |
Review | Comment | 5 | 5.742 |

| Comment | Formatting (elements + words) | Relevance | Reward |
| --- | --- | --- | --- |
| @0x4007 Does this cover the changes mentioned in https://github.… | 1.38 (elements 0, 22 words → 1.38) | 0.9 | 1.242 |
| My concern about comments to evaluate is here. A line break seem… | 2.1 (elements 0, 36 words → 2.1) | 0.7 | 1.47 |
| @0x4007 I do not want to merge this without testing, it broke al… | 2.69 (elements 0, 48 words → 2.69) | 0.8 | 2.152 |
| @0x4007 Why removing the `json` format? It seems to make… | 1.59 (elements 0, 26 words → 1.59) | 0.6 | 0.954 |
| @jaykayudo please resolve the conflict. Also would like the reas… | 1.06 (elements 0, 16 words → 1.06) | 0.5 | 0.53 |
| Then I suppose this can be merged and tested, and reverted like … | 1.59 (elements 0, 26 words → 1.59) | 0.4 | 0.636 |

View | Contribution | Count | Reward |
---|---|---|---|
Issue | Specification | 1 | 28.14 |
Issue | Comment | 2 | 6.433 |
Review | Comment | 2 | 2.295 |

| Comment | Formatting (elements + words) | Relevance | Reward |
| --- | --- | --- | --- |
| Generally when I write prompts I run them through LLMs to improv… | 9.38 (elements 6, 63 words → 3.38) | 1 | 28.14 |
| Looks like I made a mistake. Yes. | 0.52 (elements 0, 7 words → 0.52) | 0.3 | 0.156 |
| I just removed - [Prompt Improvement](https://github.com/ubiqui… | 6.61 (elements 5.5, 17 words → 1.11) | 0.7 | 6.277 |
| @sshivaditya2019 @gentlementlegen please decide. I think your t… | 2.25 (elements 0, 39 words → 2.25) | 0.7 | 1.575 |
| JSON is extremely redundant compared to plaintext/markdown. For … | 1.44 (elements 0, 23 words → 1.44) | 0.5 | 0.72 |

View | Contribution | Count | Reward |
---|---|---|---|
Review | Comment | 1 | 5.203875 |

| Comment | Formatting (elements + words) | Relevance | Reward |
| --- | --- | --- | --- |
| In my opinion the system message should contain only the instruc… | 21.83 (elements 15, 144 words → 6.83) | 0.85 | 5.203875 |

View | Contribution | Count | Reward |
---|---|---|---|
Review | Comment | 1 | 1.3585 |

| Comment | Formatting (elements + words) | Relevance | Reward |
| --- | --- | --- | --- |
| @jaykayudo, could you share some results for less obvious exampl… | 6.38 (elements 5, 22 words → 1.38) | 0.3 | 1.3585 |

@0x4007 We count review comments for the pull-request author now?
No, definitely not.
It seems you've enabled rewards for pull contributors https://github.com/ubiquity-os-marketplace/.ubiquity-os/blob/main/.github/.ubiquity-os.config.dev.yml#L227
and disabled them for the issue assignee https://github.com/ubiquity-os-marketplace/.ubiquity-os/blob/main/.github/.ubiquity-os.config.dev.yml#L295
Can you fix the config across the orgs? The config syntax isn't really clear to me. I think I asked Claude to consolidate them to be less verbose, and after looking it over, it seemed fine so I implicitly approved it.
Generally when I write prompts I run them through LLMs to improve the clarity and conciseness (and of course test the results). This prompt looks raw and can benefit from similar treatment.
Enhance the prompt and test.
https://github.com/ubiquity-os-marketplace/text-conversation-rewards/blob/7fca5d02fd19993406bc9558c556c59b6762b0c9/src/parser/content-evaluator-module.ts#L244-L265
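As a rough illustration only: the structure an LLM clarity pass tends to converge on for a relevance prompt is a short instruction, the issue specification, the comments to score, and a strict output contract. The sketch below is hypothetical; its function name, parameters, and wording are not taken from `content-evaluator-module.ts`.

```typescript
// Hypothetical sketch, not the plugin's actual prompt or API: a short system
// instruction, the issue specification, the comments to score, and a strict
// output contract so the reply parses as JSON.
function buildRelevancePrompt(specification: string, comments: { id: number; body: string }[]): string {
  return [
    "Evaluate how relevant each GitHub comment is to the issue specification.",
    "Reply with only a JSON object mapping each comment id to a score between 0 and 1.",
    "",
    "Issue specification:",
    specification,
    "",
    "Comments:",
    ...comments.map((c) => `${c.id}: ${c.body}`),
  ].join("\n");
}
```

Keeping the output contract explicit also reduces the chance of the `JSON.parse` failures seen earlier in the thread.
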
Similar [^01^]
[^01^]: Pull 142 Adjustments 78%