sshivaditya2019 / test-public


Build Issues #209

Open sshivaditya2019 opened 3 weeks ago

sshivaditya2019 commented 3 weeks ago

Something is wrong with the current build process; it does not work as expected.

sshivaditya2019 commented 3 weeks ago

@ubosshivaditya can you tell me about the devpool? Can anyone work on that?

Keyrxng commented 3 weeks ago

Might be best to strip mentions or something to stop it tagging lots of issues and people in your QA, bud.

I tend to have it either not comment at all, or I include an instruction in my query not to mention users and to include www. on issue references.
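That suggestion can be sketched as a small post-processing step. This is a hypothetical helper (`neutralizeMentions` is not part of the bot): wrapping `@name` in backticks keeps GitHub from pinging the user, and adding the `www.` prefix to github.com links keeps them clickable without creating a cross-reference on the target issue.

```typescript
// Hypothetical post-processing step for bot replies: neutralize
// @-mentions and issue links so GitHub does not notify anyone.
function neutralizeMentions(body: string): string {
  return body
    // `@name` pings the user; wrapping it in backticks renders it inert.
    .replace(/(^|\s)@([a-zA-Z0-9-]+)/g, "$1`@$2`")
    // Bare github.com URLs create back-references on the linked issue;
    // the www. prefix avoids that while staying clickable.
    .replace(/https:\/\/github\.com\//g, "https://www.github.com/");
}

console.log(neutralizeMentions("thanks @alice, see https://github.com/org/repo/issues/1"));
```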

test-app-ubo[bot] commented 3 weeks ago
Reasoning Process

1. I used the `fetch_similar_comments`, `fetch_similar_issues`, and `fetch_chat_history` tools to gather context about the discussion on 'devpool'. These tools provided relevant comments and issues that mention 'devpool' and its functionalities.
2. **Understand the question and context:** The user is asking about 'devpool' and whether anyone can work on it.
3. **Analyze relevant information from the provided context:** 'devpool' is part of UbiquityDAO's DevRel program. It serves as a platform where developers can find tasks and bounties to work on. Anyone can participate as long as they choose an unassigned task from the devpool.
4. **Consider technical implications:** The devpool is associated with repository tasks and requires contributors to fork repositories and submit pull requests after completing tasks.
5. **Form a logical reasoning chain:** Based on the context, 'devpool' allows open participation, and contributors can earn incentives by completing tasks listed there.
6. **Generate a comprehensive response:** I will explain that 'devpool' is a platform for developers to work on tasks and earn incentives, and that anyone can join by choosing a task that is not already assigned.
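The retrieval step behind tools like `fetch_similar_comments` is typically embedding similarity. A minimal sketch (hypothetical; the actual implementation, embedding model, and schema are not shown in this thread) that ranks stored comment vectors by cosine similarity against a query vector:

```typescript
// Hypothetical sketch of similarity-based retrieval: rank stored
// comment embeddings by cosine similarity to a query embedding.
interface StoredComment {
  id: string;
  text: string;
  embedding: number[];
}

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

function fetchSimilar(query: number[], comments: StoredComment[], topK = 3) {
  return comments
    .map((c) => ({ ...c, similarity: cosineSimilarity(query, c.embedding) }))
    .sort((x, y) => y.similarity - x.similarity)
    .slice(0, topK);
}
```

The similarity scores in the tool results below (e.g. 0.26, 0.29) are plausibly outputs of exactly this kind of ranking.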
Tool Calls

**Tool:** `fetch_similar_comments`
**Parameters:** `{"text":"@ubosshivaditya can you tell me about the devpool ? Can any one work on that ?"}`
**Result:** Ten similar comments, the most relevant being:

- (similarity 0.30) "@rndquu This up for grabs? — Yes, you may set up code sync between https://github.com/ubiquity/devpool-directory and your forked instance of https://github.com/rndquu/devpool-directory-private-test for testing purposes and open a PR."
- (similarity 0.29) "Is the preview URL: https://devpool-directory-ui.pages.dev?"
- (similarity 0.29) "@Keyrxng Is https://github.com/ubiquity/devpool-directory/blob/development/devpool-statistics.json accurate right now?"
- (similarity 0.27) Onboarding guidance: available bounties are listed in the devpool at work.ubq.fi across multiple Ubiquity DAO applications. Each item shows a task description, incentive amount, priority, and urgency, and links to the source GitHub issue. Pick a task that suits your experience, make sure it is not assigned to anyone else, fork the repository, and open a pull request with a QA when the work is ready; Ubiquibot guides the whole process from the `/start` command to payout (`/help` lists other commands).
- (similarity 0.27) Discussion of consolidating RFC logic into devpool-directory and whether the rfc.ubq.fi UI work is a prerequisite for that task.
- Further matches cover devpool-issues.json authentication, a test-deploy request on the devpool-rfc fork, and plugin payload routing between worker and workflow entries.

**Tool:** `fetch_similar_issues`
**Parameters:** `{"text":"@ubosshivaditya can you tell me about the devpool ? Can any one work on that ?"}`
**Result:** Ten similar issues, including:

- (similarity 0.25) Aggregate every issue in the network into a single backend (devpool-issues.json should include all issues) so work.ubq.fi can handle filtering client-side, ideally with a command-start-stop API that checks assignment limits, XP, and prior disqualifications.
- (similarity 0.24) Proposals view: fork devpool-directory into a devpool-rfc repository and have rfc.ubq.fi load from it; longer term, consolidate the public and private directories into a single aggregator backend.
- (similarity 0.21) Hide devpool issues created by unauthorized users (a phishing-link risk) and keep only issues created by the authorized bot accounts.
- (similarity 0.20) Sync code changes from https://www.github.com/ubiquity/devpool-directory to https://www.github.com/ubiquity/devpool-directory-private, since private repos cannot be synced through the GitHub fork UI.
- (similarity 0.19) devpool-directory-private as the backend for private issues on devpool.directory, with referencing back to the real issue conversation.
- Other matches cover Build CI failing during the `yarn` step, an act-style Docker container for running workflows locally, out-of-sync labels after an issue transfer changes its node_id, and devpool statistics currently keyed on the `Unavailable` label.

**Tool:** `fetch_chat_history`
**Parameters:** `{}`
**Result:** The current issue #209 conversation ("Something wrong, with the current build process, does not work as expected.", the devpool question, and "A project using Ubiquity OS") plus the linked pull request diff, which adds:

- `static/dist/test.md` — a single test line.
- `resr.md` — project notes: use C11 for general language support; if CMake fails, try switching LLVM compilers, especially on Mac Silicon; move the UI to Qt6 (4.5 is not good); GDCOM has not been added to the vsproj and needs fixing.
- `github-comment-module.ts` — a ~340-line `GithubCommentModule` class that posts a GitHub comment from scoring results: it HTML-encodes content via JSDOM, builds a per-user body and skips users with a total of 0, appends structured metadata with the workflow run URL, falls back to reduced metadata and then stripped content when the body exceeds `GITHUB_COMMENT_PAYLOAD_LIMIT`, updates the last comment when `COMMENT_ID` is set (creating a new one otherwise), classifies comments via `CommentKind`/`CommentAssociation` bit flags into specification, issue comments, and review comments, and renders contribution and incentive table rows with fees from `generateFeeString`. (The diff is truncated at this point in the tool output.)
\u001b[0m\n\u001b[32m+
\u001b[0m\n\u001b[32m+ \u001b[0m\n\u001b[32m+ ${commentScore.score?.relevance ?? \"-\"}\u001b[0m\n\u001b[32m+ ${commentScore.score?.reward ?? \"-\"}\u001b[0m\n\u001b[32m+ ${fee}\u001b[0m\n\u001b[32m+ `;\u001b[0m\n\u001b[32m+ }\u001b[0m\n\u001b[32m+\u001b[0m\n\u001b[32m+ if (sortedTasks.issues.specification) {\u001b[0m\n\u001b[32m+ content.push(buildIncentiveRow(sortedTasks.issues.specification));\u001b[0m\n\u001b[32m+ }\u001b[0m\n\u001b[32m+ for (const issueComment of sortedTasks.issues.comments) {\u001b[0m\n\u001b[32m+ content.push(buildIncentiveRow(issueComment));\u001b[0m\n\u001b[32m+ }\u001b[0m\n\u001b[32m+ for (const reviewComment of sortedTasks.reviews) {\u001b[0m\n\u001b[32m+ content.push(buildIncentiveRow(reviewComment));\u001b[0m\n\u001b[32m+ }\u001b[0m\n\u001b[32m+ return content.join(\"\");\u001b[0m\n\u001b[32m+ }\u001b[0m\n\u001b[32m+\u001b[0m\n\u001b[32m+ async _generateHtml(username: string, result: Result[0], stripComments = false) {\u001b[0m\n\u001b[32m+ const sortedTasks = result.comments?.reduce(\u001b[0m\n\u001b[32m+ (acc, curr) => {\u001b[0m\n\u001b[32m+ if (curr.type & CommentKind.ISSUE) {\u001b[0m\n\u001b[32m+ if (curr.type & CommentAssociation.SPECIFICATION) {\u001b[0m\n\u001b[32m+ acc.issues.specification = curr;\u001b[0m\n\u001b[32m+ } else {\u001b[0m\n\u001b[32m+ acc.issues.comments.push(curr);\u001b[0m\n\u001b[32m+ }\u001b[0m\n\u001b[32m+ } else if (curr.type & CommentKind.PULL) {\u001b[0m\n\u001b[32m+ acc.reviews.push(curr);\u001b[0m\n\u001b[32m+ }\u001b[0m\n\u001b[32m+ return acc;\u001b[0m\n\u001b[32m+ },\u001b[0m\n\u001b[32m+ { issues: { specification: null, comments: [] }, reviews: [] }\u001b[0m\n\u001b[32m+ );\u001b[0m\n\u001b[32m+\u001b[0m\n\u001b[32m+ const tokenSymbol = await getErc20TokenSymbol(configuration.evmNetworkId, configuration.erc20RewardToken);\u001b[0m\n\u001b[32m+\u001b[0m\n\u001b[32m+ return `\u001b[0m\n\u001b[32m+
\u001b[0m\n\u001b[32m+ \u001b[0m\n\u001b[32m+ \u001b[0m\n\u001b[32m+

\u001b[0m\n\u001b[32m+  \u001b[0m\n\u001b[32m+ \u001b[0m\n\u001b[32m+ [ ${result.total} ${tokenSymbol} ]\u001b[0m\n\u001b[32m+ \u001b[0m\n\u001b[32m+  \u001b[0m\n\u001b[32m+

\u001b[0m\n\u001b[32m+
\u001b[0m\n\u001b[32m+ @${username}\u001b[0m\n\u001b[32m+
\u001b[0m\n\u001b[32m+
\u001b[0m\n\u001b[32m+
\u001b[0m\n\u001b[32m+ ${result.feeRate !== undefined ? `
⚠️ ${result.feeRate}% fee rate has been applied. Consider using the Ubiquity Dollar for no fees.
` : \"\"}\u001b[0m\n\u001b[32m+
Contributions Overview
\u001b[0m\n\u001b[32m+ \u001b[0m\n\u001b[32m+ \u001b[0m\n\u001b[32m+ \u001b[0m\n\u001b[32m+ \u001b[0m\n\u001b[32m+ \u001b[0m\n\u001b[32m+ \u001b[0m\n\u001b[32m+ \u001b[0m\n\u001b[32m+ \u001b[0m\n\u001b[32m+ \u001b[0m\n\u001b[32m+ \u001b[0m\n\u001b[32m+ \u001b[0m\n\u001b[32m+ ${this._createContributionRows(result, sortedTasks)}\u001b[0m\n\u001b[32m+ \u001b[0m\n\u001b[32m+
ViewContributionCountRewardFee
\u001b[0m\n\u001b[32m+ ${\u001b[0m\n\u001b[32m+ !stripComments\u001b[0m\n\u001b[32m+ ? `
Conversation Incentives
\u001b[0m\n\u001b[32m+ \u001b[0m\n\u001b[32m+ \u001b[0m\n\u001b[32m+ \u001b[0m\n\u001b[32m+ \u001b[0m\n\u001b[32m+ \u001b[0m\n\u001b[32m+ \u001b[0m\n\u001b[32m+ \u001b[0m\n\u001b[32m+ \u001b[0m\n\u001b[32m+ \u001b[0m\n\u001b[32m+ \u001b[0m\n\u001b[32m+ \u001b[0m\n\u001b[32m+ ${this._createIncentiveRows(sortedTasks, result.feeRate)}\u001b[0m\n\u001b[32m+ \u001b[0m\n\u001b[32m+
CommentFormattingRelevanceRewardFee
`\u001b[0m\n\u001b[32m+ : \"\"\u001b[0m\n\u001b[32m+ }\u001b[0m\n\u001b[32m+
\u001b[0m\n\u001b[32m+ `\u001b[0m\n\u001b[32m+ .replace(/(\\r?\\n|\\r)\\s*/g, \"\") // Remove newlines and leading spaces/tabs after them\u001b[0m\n\u001b[32m+ .replace(/\\s*(<\\/?[^>]+>)\\s*/g, \"$1\") // Trim spaces around HTML tags\u001b[0m\n\u001b[32m+ .trim();\u001b[0m\n\u001b[32m+ }\u001b[0m\n\u001b[32m+}\u001b[0m=== End \nLinked Pull Request Code Diff ===\n\n"]`

The 'devpool' is part of the UbiquityDAO's DevRel program, where developers can find tasks and bounties to work on. It is accessible at work.ubq.fi, where tasks are listed with descriptions, incentives, and links to the corresponding GitHub issues. Anyone can participate by selecting tasks that are not assigned to others, forking the related repository, and submitting a pull request to complete the task. This platform provides an opportunity for contributors to earn incentives by completing the tasks available in the devpool.

Sources

### Comment in #undefined
> I am looking to gain a deeper understanding of the code structure and the interactions between different modules within the UbiquityDAO's DevRel program. Knowing where to find the specific pieces of code I need to work on and how these parts interact within the larger system will be instrumental in my contribution. This insight is not only crucial for my immediate work but also for creating detailed onboarding documentation that will aid future contributors in navigating and understanding the system efficiently. My goal is to grasp the overall functionality of the system and document the onboarding process to ensure a smooth integration for new developers joining the project. You can see a list of available bounties in the devpool at work.ubq.fi from multiple applications of Ubiquity DAO. Each item in the list provides a task description, the incentive amount, priority, and urgency level. It also provides a link to the source GitHub issue. You can go through all the tasks, and find the one that suits your experience. Make sure it is not assigned to anyone else on Github, and start working on it. Ubiquibot will help you through the entire process, from starting the task to getting paid. As you already typed /start command to assign you a task, you can also type /help to see other available commands. Start by forking a repository of interest and see how can make changes to it to achieve the goal described in the issue. Once, you think the work is ready for submission, you can open a pull request. Do not forget to include a QA with each pull request to show how your changes are working. Your pull request will go through a review before being merged to the production. Every application you see here is constantly being improved and enhanced. Including work.ubq.fi itself. If you are interested in any listed issue of work.ubq.fi itself, you can see its repository. It is a simple typescript application with some CI/CD Github workflows and tests. A readme file is included with each repository to help you get started.

### Comment in #undefined
> I just got pretty confused with these tasks if I'm honest. The open PR aims to consolidate RFC logic into the current devpool-directory and so does this cover item 1 in Add support for proposals on 'work.ubq.fi': "upgrade backend"? This spec implies that the work involved with rfc.ubq.fi is a requirement for this task to be completed. Is it expected that this issue won't be complete until the relevant work on the UI is complete too? And if this is true, then is working on work.ubq.fi filtering and an additional view as step 2 required in the larger task? Seems like 'double dipping' as it's been called before but it leaves only repo-sync-solution as step 3 of the larger task. Should we point the open PR towards a dedicated task in https://github.com/ubiquity/devpool-directory-bounties/issues and we can allow item 2 of the larger task to be worked on via this task? Also https://github.com/ubiquity/devpool-directory-bounties/issues/25 is a duplicate of item 3 of the larger task as well. Some consolidating of the issues would make things a lot clearer

### Comment in #undefined
> @rndquu This up for grabs? Yes, you may setup code sync between https://github.com/ubiquity/devpool-directory and your forked instance of https://github.com/rndquu/devpool-directory-private-test for testing purposes and open a PR, I will check how it works with https://github.com/ubiquity/devpool-directory-private then

### Comment in #undefined
> Is the preview URL: https://devpool-directory-ui.pages.dev?

### Comment in #undefined
> @Keyrxng Is https://github.com/ubiquity/devpool-directory/blob/development/devpool-statistics.json accurate right now?

### Comment in #undefined
> Please show a test deploy link. It doesn't seem possible if your "backend" has no relevant changes https://github.com/ubiquity/devpool-directory/compare/development...xno-miner:devpool-rfc:development

### Comment in #undefined
> @rndquu @0x4007 Thinking more about it, it should be invalid I guess. Posting an issue to devpool-issues won't imply in appearing at work.ubq.fi unless this issue makes it into devpool-issues.json. If that is the case the problem is at devpool-issues authentication mechanism and not here.

### Comment in #undefined
> There is also Devpool.directory

### Comment in #undefined
> I have created the pull request, the changes are described in the pull request description. https://github.com/ubiquity/devpool-directory/pull/1212

### Comment in #undefined
> One worker, one entry and that entry is used by both payloads, we typecheck the payload and route it to it's destination. Workflows are forwarded payloads from the worker. All proxyCallbacks assigned to issues.labeled will run, e.g workrooms will forward the payload to the workflow entry which will run it's own workflowProxyCallbacks opening the room. This feature does not require the worker to do anything except forward the payload. But other features may execute within the worker under the same event issues.labeled for example let's pretend we announce tasks on TG because it has a certain label, announceSpecialTask would run and the worker would not forward that because the bot would simply make the announcement as it doesn't require a user account to post. for example you can just define both workflow and worker endpoints in the config Do you mean that I would define this plugin twice within ubiquibot-config.yml one with the worker url and one with the workflow url, on any given event both would run but would remove the self-forwarding aspect?

### Comment in #undefined
> I've opened up a couple of proposals, essentially requests for comment. It could be useful to aggregate them onto a single view for ease-of-access to the DAO. Recent examples: https://github.com/ubiquity/.github/issues/111 https://github.com/ubiquity/.github/issues/110 I imagine that we can find issues without pricing and populate them under the "proposals" view. Fork this repository, make rfc.ubq.fi repository Fork devpool-directory repository, make devpool-rfc repository rfc.ubq.fi UI should load from devpool-rfc "backend" A Note on Forks It seems that we have use cases for similar UI code but for different purposes. Perhaps it makes sense to fork and create new repos (leaderboard.ubq.fi, rfc.ubq.fi.) In addition, we already have https://github.com/ubiquity/devpool-directory and https://github.com/ubiquity/devpool-directory-private. Ideally we could consolidate everything into a single aggregator repository, and collect even the github issues without pricing. That way we only need to maintain a single repository (instead of potentially three) for our "backend" of issues. For now lets just do another fork and then we can worry about consolidating them into a "single backend" in a future project.

### Comment in #undefined
> building new features like https://github.com/ubiquity/work.ubq.fi/issues/117 would benefit greatly from the backend aggregating every issue in our network. Once they are in a single place, all operations can be handled instantly client side which is much better for UX compared to manually fetching each issue. In my case, every assigned issue. At the same time the work.ubq.fi UI will likely need to be tweaked to only show what you can /start on Ideally instead of just seeing if it's assigned, command-start-stop could have an API that can take the user auth, and take all the compiled issues from the whole network, and return a list of which they can /start. It should check all prerequisite conditions like: Are they already assigned to too many issues? Is their XP sufficient? Is it already assigned? Were they previously disqualified from that task? devpool-issues.json should include all issues

### Comment in #undefined
> https://github.com/ubiquity/devpool-directory-private is our new "backend" to read private issues and populate them on the devpool.directory This should ONLY populate with private repository issues. This is so we don't have redundant data on the public and private directories. Also re-enable "referencing" back to private repository issues (I just disabled it entirely in that repo as a quick fix by disabling the logic if its not the public devpool-directory repo https://github.com/ubiquity/devpool-directory-private/blob/development/helpers/github.ts#L63.) This means that the bot should paste in the normal URL in the GitHub comment so that the mirrored issue is "referenced" in the real issue conversation. Ensure that this works as expected by testing the UI on the devpool.directory (you must auth as a core team member)

### Comment in #undefined
> See this workflow run regarding Build CI failing during the yarn step. I've came across this in testing plugin workflows where the workflow fails for reasons out of user control during a step it shouldn't such as yarn. It's reasonable to expect this to happen again for plugin workflows and the kernel wouldn't know the wf has failed (afaik) Events to work with below, I think it's possible to track workflow steps by name that fail that are expected to never fail. If so, it would be a beneficial plugin to have that can automatically refire these runs.
> workflow_dispatch: [], workflow_job: [], 'workflow_job.completed': [], 'workflow_job.in_progress': [], 'workflow_job.queued': [], 'workflow_job.waiting': [], workflow_run: [], 'workflow_run.completed': [], 'workflow_run.in_progress': [], 'workflow_run.requested': []
> I feel I lack the insight to properly define the spec for this one, if it is possible at all given the kernel structure.

### Comment in #undefined
> This is experimental and unsure how far it would manage to get but here goes. Taking inspiration from https://github.com/nektos/act. To improve the dev experience for working on plugins it would be fantastic if we could cram everything that we need into a container which can be easily used on any OS. The most important feature of it is the ability to run our workflows locally without the tediousness of repeatedly pushing, waiting 30 secs for start up and 10s-2m to find an error, add logs then rinse and repeat n times. If this could be streamlined such that our env supports this out of the box, it would make life a dream to work on any kind of plugin. If we include the kernel itself it ensures consistency across setups, less repos to manage for new contributors etc which might be more attractive? I don't have a lot of experience with Docker so this is as far as I'll spec this task for now

### Comment in #undefined
> There is a small time window for a malicious user to post a new github issue in https://github.com/ubiquity/devpool-directory with a phishing link. Although such issue will be deleted eventually users still can be affected. We could hide issues created by unauthorized users and keep only the ones created by authorized bots: 76412717: https://github.com/ubiquity 133917611: https://github.com/ubiquibot 165700353: https://github.com/UbiquityOS

### Comment in #undefined
> So it seems that when an issue is transferred to another repository or organization that the issue takes on a new node_id so we must update our logic to handle out-of-sync labels. We should also aim to remove any partner issues that have been deleted (I assume) but still exist within the devpool as they can belong to repos that have been deleted. Since we already have the IssueRemover class we can borrow it and delete any erroneous tasks before we perform any state changes. I think we should also delete any devpool issues that's body does not match the typical url formatting standard.

### Comment in #undefined
> There are 2 repositories which collect issues available for contributors: https://www.github.com/ubiquity/devpool-directory: Public repo, any contributor can take issue there https://www.github.com/ubiquity/devpool-directory-private: Private repo, only core contributors can take issues there since that repo issues may contain sensitive/NDA data The issue is that we somehow need to sync code changes between https://www.github.com/ubiquity/devpool-directory and https://www.github.com/ubiquity/devpool-directory-private. We can't simply fork https://www.github.com/ubiquity/devpool-directory-private from https://www.github.com/ubiquity/devpool-directory and use github UI since private repos can't be synced. What should be done: when a new code change is introduced in https://www.github.com/ubiquity/devpool-directory then this code change should also appear in https://www.github.com/ubiquity/devpool-directory-private Possible solution described here Sync code between ubiquity/devpool-directory and ubiquity/devpool-directory-private

### Comment in #undefined
> something with lot of details and possibly completed by lot of people and not similar to some of the existing issues. Test

### Comment in #undefined
> Related to #27 Right now the statistics are based on the collection of issues within the devpool. The Unavailable label is pretty central to how things are currently calculated
> // devpool issue has unavailable label because it's assigned and so it's closed
> const isAssigned = labels.find((label) => (label.name as string).includes(LABELS.UNAVAILABLE)) && issue.state === "closed";
> // devpool issue doesn't have unavailable label because it's unassigned and closed so it's merged therefore completed
> const isCompleted = !labels.some((label) => (label.name as string).includes(LABELS.UNAVAILABLE)) && issue.state === "closed";
> The logic of the devpool dictates that any partner-assigned task will be closed upon assignment and only re-opened if it's priced and unassigned. Without the Unavailable label to indicate this for us we have a couple of options: We could close devpool issues as not_planned while they are assigned instead, closing as complete once paid out. We could simply use the partner issues. My approach builds a partner project Map on entry and we pass that to stats etc. Some problems with those approaches: We'd need to author a script to update all devpool issues open and closed. Some issues remain assigned even after completion so we'd need to include additional logic to account for this. Partner issue is obv a good source of data but it's in the partner' hands not ours. If they racked up 50k worth of stats and then deleted their org or nuked all of their issues/repos etc we'd lose a chunk of our stats. I think the best approach is to use the partner tasks directly although it comes with strings: We persist our data better. Like a snapshot each month and store a timeline of stats which in the case of the above scenario we can recover easily. (Yeah we have git commit history but with 24 commits a day I don't think it's ideal. Plus with any changes to how stats etc are calculated it's far easier to audit.) Use a combination of partner tasks and our devpool issues to validate open/assigned/completed The Stats themselves, big numbers and all that ;) but we have some grey areas I think. So we capture open, assigned, completed and we also have a bunch of repos excluded. Should we exclude those same repos from all stat calculations or they should be added to the total only? Tasks which are re-opened, should they be considered to be completed since it's been paid out or should they fall into open/assigned? completed is considered to be what? partner task closed as completed and is currently closed (may not have a PR associated so we should not rely on that as a check) Should we capture reopened or another similar category? (internal audit likes this) I'm refactoring tests for this right now and will PR soon

### Comment
> You can see a list of available bounties in the devpool at work.ubq.fi from multiple applications of Ubiquity DAO. Each item in the list provides a task description, the incentive amount, priority, and urgency level. It also provides a link to the source GitHub issue.

### Comment
> Start by forking a repository of interest and see how can make changes to it to achieve the goal described in the issue.
sshivaditya2019 commented 3 weeks ago

@ubosshivaditya could you explain the changes this pr is making ?

test-app-ubo[bot] commented 3 weeks ago

The pull request introduces several changes to the project, notably focusing on improving the build process and updating various configurations. Here are the key changes being made:

  1. Project Details Update:

    • The project now specifies the use of C11 for general language support.
    • Suggestions include switching LLVM compilers on Apple Silicon Macs if issues arise with CMake.
    • Recommends using Qt6 for UI instead of version 4.5, which is deemed less satisfactory.
    • A note to add GDCOM to the vsproj as it has not been added yet.
  2. Addition of github-comment-module.ts:

    • A new module github-comment-module.ts is introduced, containing a significant amount of code (342 lines).
    • This module appears to handle GitHub comments, importing various utilities and configurations from other parts of the project.
    • It includes functionality for handling comments, encoding HTML, generating structured metadata, and posting comments on GitHub.
    • The module also provides methods for creating contribution and incentive rows, suggesting a focus on evaluating contributions and rewarding them accordingly.
  3. Build Process Issue:

    • There is a mention of something wrong with the current build process, indicating that it does not work as expected. Addressing this may be a primary focus of the pull request [^01^].

These updates suggest a focus on enhancing the build process, updating language and UI frameworks, and improving the handling of GitHub comments within the project.


References

[^01^]: "Something wrong, with the current build process, does not work as expected." - GitHub Issue


sshivaditya2019 commented 3 weeks ago

@ubosshivaditya can you pin point some interesting aspects from the code changes ?

test-app-ubo[bot] commented 3 weeks ago

Here are some interesting aspects of the code changes made in this pull request:

  1. Introduction of New Imports: The code introduces various new imports such as Decimal from decimal.js, fs for file system operations, and JSDOM from jsdom for working with HTML content. This indicates a focus on handling numerical operations, file manipulations, and HTML processing, respectively.

  2. HTML Encoding Method: A private method _encodeHTML is added to ensure that strings containing special characters are correctly HTML encoded. This method leverages JSDOM to safely encode HTML content, which is crucial for preventing injection attacks or rendering issues.

  3. GitHub Comment Handling: The GithubCommentModule class is responsible for posting comments on GitHub. It includes methods for generating HTML content for comments, managing comment length within GitHub's limits, and posting or updating comments using the GitHub API.

  4. Structured Metadata for Comments: The code includes a function createStructuredMetadata which is used to add structured metadata to GitHub comments. This is useful for including additional context or information in the comments in a structured and readable format.

  5. Dynamic Contribution and Incentive Rows: The code dynamically generates HTML table rows for contributions and incentives based on tasks and comments. It uses helper functions like buildContributionRow and buildIncentiveRow to format and display this data, which is crucial for reporting and visualization of contributions and rewards.

  6. Configuration and Debugging: The module reads its configuration from GithubCommentConfiguration, and includes debugging capabilities by writing output to a file when the debug mode is enabled. This helps in troubleshooting issues related to comment generation and posting.

  7. Error Handling and Logging: The code includes error handling and logging for operations like posting comments, which ensures that any issues are logged and can be addressed without affecting the overall workflow.

Overall, these changes enhance the functionality and robustness of the GitHub commenting system by focusing on safe HTML handling, dynamic content generation, structured metadata, and robust error handling [^01^].
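As a concrete illustration of the encoding concern in item 2: the module reportedly relies on JSDOM, but the same effect can be sketched without any dependency. The helper name `encodeHTML` below is illustrative, not the module's actual `_encodeHTML`:

```typescript
// Minimal HTML-encoding sketch (hypothetical helper, not the PR's code).
// Ampersands must be replaced first so already-encoded characters are not double-escaped.
function encodeHTML(input: string): string {
  return input
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;");
}
```

Encoding user-supplied strings this way before interpolating them into a comment template prevents the content from being rendered as live HTML.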



References

[^01^]: "Linked Pull Request Code Diff", https://www.github.com/sshivaditya2019/test-public/pull/209/files


sshivaditya2019 commented 3 weeks ago

@ubosshivaditya could you explain the GITHUB_COMMENT_PAYLOAD_LIMIT in the code why is it used explain citing code blocks ?

test-app-ubo[bot] commented 3 weeks ago

The GITHUB_COMMENT_PAYLOAD_LIMIT is used to ensure that comments posted to GitHub do not exceed a certain length. This is important because GitHub imposes a limit on the maximum size of a comment. In the given code, the limit is used within the GitHubCommentModule to handle situations where the content to be posted might exceed this limit.

Here is a relevant code snippet demonstrating its usage:

const body = bodyArray.join(""); // We check this length because GitHub has a comment length limit
if (body.length > GITHUB_COMMENT_PAYLOAD_LIMIT) {
  // First, we try to diminish the metadata content to only contain the URL
  bodyArray[bodyArray.length - 1] = `${createStructuredMetadata("GithubCommentModule", {
    workflowUrl: this._encodeHTML(getGithubWorkflowRunUrl()),
  })}`;
  const newBody = bodyArray.join("");
  if (newBody.length <= GITHUB_COMMENT_PAYLOAD_LIMIT) {
    return newBody;
  } else {
    return this.getBodyContent(result, true);
  }
}

In this code, if the length of the comment body exceeds the GITHUB_COMMENT_PAYLOAD_LIMIT, it attempts to reduce the content by altering the metadata. If the content is still too long, it calls getBodyContent with a flag to strip some content, ensuring the posted comment stays within the allowed limit.
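The same degrade-until-it-fits strategy can be sketched in isolation. The 65,536-character ceiling matches GitHub's documented maximum comment body length; everything else below (`fitBody`, the pre-rendered variants) is illustrative rather than the PR's actual API:

```typescript
// Illustrative sketch: given renderings ordered from fullest to most stripped-down,
// return the first one that fits within the payload limit.
const PAYLOAD_LIMIT = 65536; // GitHub comment bodies max out at 65,536 characters

function fitBody(renderings: string[], limit: number = PAYLOAD_LIMIT): string {
  for (const body of renderings) {
    if (body.length <= limit) return body;
  }
  // Last resort: hard-truncate the most stripped-down rendering.
  return renderings[renderings.length - 1].slice(0, limit);
}
```

The actual module degrades in two steps (shrink the metadata, then strip comment rows via `getBodyContent(result, true)`), which is the same idea with two renderings.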

sshivaditya2019 commented 3 weeks ago

@ubosshivaditya I want to know more about UBD ? what is the LP ?

test-app-ubo[bot] commented 3 weeks ago

It seems like you are asking about "UBD" and "LP" in the context of Ubiquity OS or a related project. Unfortunately, I couldn't find specific details about "UBD" and "LP" from the available discussion, issues, or comments. However, here's what I gathered from the conversation history and related materials:

  1. UBD: Without further context from the conversation or repository, it's unclear what "UBD" specifically refers to. It could potentially be an acronym or a term specific to the project or organization that hasn't been detailed in the available discussions.
  2. LP: Similarly, "LP" could stand for different things based on the context, such as "Liquidity Pool" if you're discussing decentralized finance (DeFi) or other related financial technology projects. However, this is speculative without additional context.

To gain a more precise understanding, it might be beneficial to consult the repository documentation or reach out to project contributors for a detailed explanation about these terms within the project's specific context. If you have access to particular files or sections of the codebase where these terms are defined, that might also provide clarity.

0x4007 commented 3 weeks ago

@ubosshivaditya I want to know more about UBD ? what is the LP ?

Ask about UBQ instead

sshivaditya2019 commented 3 weeks ago

@ubosshivaditya I want to know more about UBQ ? what is the LP ?

0x4007 commented 3 weeks ago

We also have UUSD

test-app-ubo[bot] commented 3 weeks ago

Based on the gathered context, here is some information related to UBQ and LP:

  1. UBQ: UBQ is a token that seems to be involved in financial transactions, particularly in a process where users acquire 95% LUSD and 5% UBQ to mint UUSD, which is then used on the Gnosis Chain. The process involves a combination of smart contracts and UI interactions[^01^].
  2. LP (Liquidity Pool): In a staking scenario, users deposit UAD3CRV LP tokens and receive UBQ tokens as part of their staking rewards. For instance, after a week, users might withdraw a different amount of UAD3CRV LP and receive UBQ, indicating that UBQ is part of the liquidity pool rewards system[^02^].

These elements suggest that UBQ is a token used within liquidity pools and staking environments, potentially as part of a decentralized finance (DeFi) ecosystem.


References

[^01^]: "Objective Currently we have some hoops to jump through to mint UUSD and to use on Gnosis Chain: acquire 95% LUSD acquire 5% UBQ mint UUSD bridge to Gnosis Chain" - GitHub Issue

[^02^]: "User deposits 1000 UAD3CRV LP to ... for 1 week ... withdraws 1000 UAD3CRV LP and gets 178 UAD3CRV LP + 72 UBQ." - GitHub Issue

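The 95% LUSD / 5% UBQ split quoted in [^01^] is simple enough to express directly. The helper below is purely illustrative (it is not part of any Ubiquity codebase) and uses integer percentages to avoid floating-point drift:

```typescript
// Illustrative only: split a target UUSD mint amount into its quoted inputs
// (95% LUSD, 5% UBQ). Integer-percentage arithmetic keeps the result exact.
function mintInputs(uusdAmount: number): { lusd: number; ubq: number } {
  return {
    lusd: (uusdAmount * 95) / 100,
    ubq: (uusdAmount * 5) / 100,
  };
}
```

For example, minting 100 UUSD under this split would require 95 LUSD and 5 UBQ.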

sshivaditya2019 commented 3 weeks ago

> @ubosshivaditya I want to know more about UBD ? what is the LP ?
>
> Ask about UBQ instead

I am just trying out a new CoT-based reasoning; it should refuse to answer if it does not have the information.

0x4007 commented 3 weeks ago

> Based on the gathered context, here is some information related to UBQ and LP:

I think we should remove this unnecessary preamble.

test-app-ubo[bot] commented 3 weeks ago

Based on the gathered context, here is some information related to UBQ and LP:

  1. UBQ: UBQ is associated with Ubiquity, which is involved in developing various systems such as Ubiquity OS.
  2. LP (Liquidity Pool): While the specific details about the LP (Liquidity Pool) related to UBD or UBQ are not explicitly mentioned in the gathered context, liquidity pools generally refer to pools of tokens that are locked in a smart contract to facilitate trading by providing liquidity on decentralized exchanges.

The conversation indicates a preference for discussing UBQ instead of UBD, suggesting that UBQ might be a more relevant or prioritized topic in the context of the current repository or project [^01^][^02^]. If you need more specific information about UBD, UBQ, or the liquidity pool in relation to this project or repository, it might be helpful to check the specific documentation or repository details related to Ubiquity.

Citations:

  - The discussions within the comments and issues suggest a shift in focus from UBD to UBQ [^01^].
  - There is a general conversation about UBQ and associated systems like Ubiquity OS, which might be relevant for those interested in the technological aspects of UBQ [^02^].

0x4007 commented 3 weeks ago

The first citation URL is a real URL but not to the correct comment, I guess

"Objective Currently we have some hoops to jump through to mint UUSD and to use on Gnosis Chain: acquire 95% LUSD acquire 5% UBQ mint UUSD bridge to Gnosis Chain" - GitHub Issue

0x4007 commented 3 weeks ago

"These elements suggest"

"Based on the gathered context"

It would be great to change the tone to be more authoritative.

sshivaditya2019 commented 3 weeks ago

@ubosshivaditya Can you sum up the changes made in the linked PR? Do you think any issues may arise from merging it?

test-app-ubo[bot] commented 3 weeks ago

The pull request introduces significant enhancements to the GitHub comment module. Below is a detailed summary of the changes:

  1. New Files Added:

    • static/dist/test.md and resr.md were created to store documentation and project details. These files include language support and UI recommendations.
  2. Code Enhancements:

    • Introduces the GithubCommentModule class, which includes methods for encoding HTML, generating body content for comments, transforming data, and posting comments.
    • Implements logic for managing and structuring metadata, incentive rows, and contribution rows, including encoding, escaping HTML characters, and enforcing comment length limits.
    • Supports features related to issue comments, PR reviews, and task sorting.
  3. Configurations and Constants:

    • Utilizes configuration files for incentives and constants such as GITHUB_COMMENT_PAYLOAD_LIMIT.
  4. Error Handling:

    • Incorporates error handling for posting comments on GitHub, ensuring robustness in case of failures.

These changes aim to enhance interaction with GitHub comments, facilitating easier generation, structuring, and posting of comments programmatically.
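The HTML-encoding and length-limit handling described above can be sketched roughly as follows. This is an illustrative sketch, not the PR's actual implementation: the constant name GITHUB_COMMENT_PAYLOAD_LIMIT comes from the summary, but the function names and truncation strategy are assumptions.

```typescript
// GitHub rejects issue comments above 65536 characters; the PR summary
// mentions a GITHUB_COMMENT_PAYLOAD_LIMIT constant used for this purpose.
const GITHUB_COMMENT_PAYLOAD_LIMIT = 65536;

// Escape HTML special characters so metadata can be embedded safely
// in a comment body. (Hypothetical helper, not the module's real API.)
function encodeHtml(text: string): string {
  return text
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;");
}

// Truncate an over-long body, leaving room for a truncation marker.
function fitToLimit(body: string): string {
  if (body.length <= GITHUB_COMMENT_PAYLOAD_LIMIT) return body;
  const marker = "\n<!-- truncated -->";
  return body.slice(0, GITHUB_COMMENT_PAYLOAD_LIMIT - marker.length) + marker;
}

console.log(encodeHtml('<a href="x">')); // &lt;a href=&quot;x&quot;&gt;
```

Testing these two paths (bodies under and over the limit) would be a natural part of the validation recommended below.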

Considerations for potential issues include:

Overall, while the pull request introduces valuable functionality, thorough testing and validation are recommended to mitigate unforeseen issues.

sshivaditya2019 commented 3 weeks ago

@ubosshivaditya What do you know about UUSD and its smart contract?

test-app-ubo[bot] commented 3 weeks ago

UUSD (Ubiquity USD) is a stablecoin pegged to the US dollar. It operates within an ecosystem of smart contracts and bridging mechanisms that enable its use across multiple blockchain networks.

Key Points about UUSD and its Smart Contract

Minting Process

Smart Contract and Economics

Bridging and UI

Security and Improvements

Onboarding and User Experience

Citations: [^01^]: Context on minting and bridging UUSD GitHub Issue [^02^]: Discussion on UUSD smart contract mechanics and simplification of user processes GitHub Issue
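The minting flow quoted earlier in the thread ("acquire 95% LUSD, acquire 5% UBQ, mint UUSD, bridge to Gnosis Chain") can be sketched as a simple collateral split. This is a hypothetical illustration of the ratio arithmetic only, not the real UUSD contract; the function name and parameters are invented.

```typescript
// Hypothetical sketch of the 95% LUSD / 5% UBQ collateral split described
// for minting UUSD. Not the actual UUSD smart-contract logic.
const LUSD_RATIO = 0.95;
const UBQ_RATIO = 0.05;

// Given a target UUSD amount (in USD) and a UBQ price (USD per UBQ),
// return the collateral amounts required for the mint.
function collateralForMint(uusdAmount: number, ubqPriceUsd: number) {
  return {
    lusd: uusdAmount * LUSD_RATIO, // LUSD is itself pegged to ~1 USD
    ubq: (uusdAmount * UBQ_RATIO) / ubqPriceUsd,
  };
}

const c = collateralForMint(1000, 0.5);
console.log(c.lusd.toFixed(2)); // "950.00"
console.log(c.ubq.toFixed(2)); // "100.00"
```

The bridging step is a separate operation after the mint; simplifying these "hoops" is exactly the objective quoted in the citation above.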

0x4007 commented 3 weeks ago

The citations are questionable but the information seems accurate.

sshivaditya2019 commented 3 weeks ago

"The citations are questionable but the information seems accurate."

Citations are quite inconsistent at the moment, as they're generated by the LLM.