ubiquity / recruiting


Replit Bounty #11

Open Keyrxng opened 3 weeks ago

Keyrxng commented 3 weeks ago

We are going to create another third-party bounty posting, but this one will be different from all of the others we have posted or intend to post, for the reasons below.

  1. We need to design a task: either pluck an existing one and remove the price label, or create something new and spin it like a mini-hackathon.
  2. The reward for this task will not be paid via label as most are; instead it will be paid out via Replit. I believe our task would still reward conversation etc., but the task reward itself should be minimal if anything, as contributors will be paid primarily via Replit.
  3. In order to earn the bounty, a contributor must receive approval of their PR and have it merged.
  4. In order to take part, contributors just open a PR directly against the repo and link it to the issue. No using /start or self-assigning, as the bot might complain.
  5. We do not perform any actual reviews until the deadline, but we should be ready to help out by answering questions, etc.

How is this task atypical?

This task will be a "free-for-all" and all contributors will work on completing it, with the best submission chosen as the winner.

Suggested task:

Design a new Ubiquibot plugin, specifically a new slash command /remindMe.

Or we could open it up to just design any sort of new plugin, whatever you can think of?
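
For the /remindMe suggestion, a hypothetical invocation just to make the idea concrete (the actual syntax would be defined in the spec):

/remindMe 3d "follow up on the review feedback"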


Idk if we should post it with a start date set a few days/weeks out and then release all of the info on the same day to make it fairer. This assumes that any attention we catch is sufficient for people to set a reminder or make a point of taking part. Do we have enough dev incentive/momentum for that to have more success than just releasing everything immediately and setting a deadline for submission?

It's very hard to estimate how much traffic and how many actual PRs will have to be reviewed when the time comes. I assume that most responsibilities will fall onto the assignee, but if it's a crazy amount of PRs idk how we'd handle that lmao. Assign specific team members or have everyone pitch in, whatever, but help if flooded would be awesome 😂

Finally, whether we use my suggestion or some other task, what should the bounty reward actually be? Considering that it's not priced like a typical task, we want to draw as many folks to it as possible. $1,000, same as our DoraHacks bounty?

The highest priced open bounty on Replit right now is $2,700.

0x4007 commented 3 weeks ago

We should price it at 2701 to stay on top I think.

I think your plan is good but my concern is that for that pricing it should be a pretty sizable task. And we need a super super detailed spec to get our money's worth.

Keyrxng commented 3 weeks ago

I like that lmao 2701 🤣

I think your plan is good but my concern is that for that pricing it should be a pretty sizable task. And we need a super super detailed spec to get our money's worth.

Okay okay. I have opened a task for /remindMe but it does feel a little "small" for the price.

I can keep that task and change the spec, and we can use something else; or maybe it is sufficient so long as the storage layer aspect is spec'd right?

Other options are to collect big tasks from around the org and remove their labels, or to create something unique, and to ensure the spec of either reads like a roadmap.


Whatever we do, it should be a TS task related to the bot. We have plugins and AI tasks which fit these criteria. I think they'll attract two different demographics of devs, with the AI side likely being of lesser quality, but that's a heavy assumption.

Keyrxng commented 3 weeks ago

We don't have a lot of big tasks available, priced or not, but these seem like good candidates for large tasks:


So we are limited, for these reasons, in pulling existing tasks. I will try to think of more feature-rich plugins which we'd actually use.


We could expand on /remind-me and it could be something like advanced-notifications.

So it could be configurable to alert org members/roles for pretty much any reason, for example:

All of the above could be done programmatically. I'm trying to think how we could embed an AI feature into it other than the obvious like workflow failure evaluation.


With some help from GPT4:

Task Dependency Manager

  • Functionality: This plugin manages dependencies between tasks, ensuring that a task cannot be started unless all its dependencies are completed. It would allow project managers to define task trees or dependency graphs where tasks are linked to prerequisite tasks.
  • Use Cases: Useful for complex projects where certain tasks depend on the completion of others. It could also notify users when dependencies are resolved and their task can be started (a rough type sketch follows below).
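
A rough sketch of what the dependency data could look like (all names and the example issue IDs are hypothetical; nothing here is an agreed spec):

// Hypothetical shape: each issue lists the prerequisite issues that must close first.
type TaskDependencies = {
  [issueId: string]: {
    dependsOn: string[];      // prerequisite issue IDs
    notifyOnUnblock: boolean; // ping the assignee when all prerequisites are resolved
  }
};

const example: TaskDependencies = {
  "2480425410": { dependsOn: ["2480425411", "2480425412"], notifyOnUnblock: true },
};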

Task Performance Analyzer

  • Functionality: Analyzes the time taken to complete tasks, quality of the submissions, and historical data to provide insights on task performance. This plugin could also suggest improvements or flag inefficiencies in workflows.
  • Use Cases: Helps project managers optimize workflows, allocate tasks more effectively, and identify bottlenecks or common issues. It could integrate with task-rewards to adjust rewards based on performance metrics.

0x4007 commented 3 weeks ago

I need to have a think on this but the XP system comes to mind. It's something we put off for a very long time, requires storage, and provides value to our strategic milestones.

0x4007 commented 2 weeks ago

Why don't we just add XP modifiers in a JSON file? User ID would be the most robust but least readable. We can just make something like this:

type XPModifier = number;
type XP = {
  [userId: string]: {
    [issueId: string]: XPModifier
  }
};
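
// Example data: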
{
    "4975670": {
        "2480425417": -50
    }
}

Then the XP plugin ideally can sum up the total rewards generated for every user (GraphQL query?) and then apply the XP modifiers. It can recalculate every time XP is queried, assuming getting all the data is fast. If not, then we will need to cache to this file.

type XPModifier = number;
type Reward = number;

type XP = {
  [userId: string]: {
    totalReward: Reward;
    tasks: {
      [issueId: string]: {
        xpModifier: XPModifier;
        reward: Reward;
      }
    }
  }
};
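
// Example data: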
{
  "4975670": {
    "totalReward": 150.50,
    "tasks": {
      "2480425417": {
        "xpModifier": -50,
        "reward": 100.00
      },
      "2480425418": {
        "xpModifier": 25,
        "reward": 50.50
      }
    }
  },
  "4975671": {
    "totalReward": 75.25,
    "tasks": {
      "2480425419": {
        "xpModifier": 10,
        "reward": 75.25
      }
    }
  }
}
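
A rough sketch of that summation step, using the cached shape above (treating the modifier as a flat addition to the reward is my assumption; the actual conversion is undecided):

// Sketch: sum a user's task rewards and apply the flat XP modifiers on top.
// Assumes 1 XP per reward unit; the real ratio is still to be decided.
function computeXp(user: XP[string]): number {
  return Object.values(user.tasks).reduce(
    (total, task) => total + task.reward + task.xpModifier,
    0
  );
}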

Another wild idea for the future: what if we stored the database within the XP plugin repository? Then it could easily support global XP across all organizations. For now, though, let's store it within each organization's config repo.

Keyrxng commented 2 weeks ago

I initially assumed the XP system would be designed to be global from the start. Wasn't the primary goal to allow a user, deemed "worthy" through contributions to one partner (e.g., Ubiquity), to retain their status across all partners?

We have two large concepts at play here; maybe it's better to use a dedicated XP-system task to hash out the specifics, and the plugin could be built around that.

Where and how are the XP levels used within the ecosystem? Is it to give distinction to users within a partner's own userbase, or to give distinction to contributors across partners?

The thought I had for task-xp-guard V2 was that it would also be able to pull the user's level, and task labels could be created for those too. If XP levels are localized to the partner, then when we do make a global XP system, isn't that going to create issues for us? I'm thinking of collisions between local and global "level: 2" labels, and workarounds like using Prestige for the global level causing issues with task labeling.

If centralized, it's easy to work with from an SDK point of view (only official installs can update, everyone can read, sort of thing), and we modify your type slightly:

type XP = {
  [userId: string]: {
    totalReward: Reward;
    tasks: {
      [orgId: string]: {
        [issueId: string]: {
          xpModifier: XPModifier;
          reward: Reward;
        }
      }
    }
  }
};

Modifiers in a JSON File

Why don't we just add XP modifiers in a JSON file?

Could you clarify your suggestion? Are you proposing that a partner should manually update modifiers.json with entries like userId.issueId = -50, so that when the plugin next runs, it updates the reward (or xpEarned in this context) for all tasks, including the manually updated one?

What events trigger this plugin? The issues.closed event seems appropriate for updating the "database".

XP Modification Mechanics

It's not entirely clear to me how the XP system would modify experience points on a per-task basis based purely on reward amounts, other than checking if an issue was reopened. Here are my questions and thoughts:

  1. Is the XP system solely based on rewards earned?
  2. How does one acquire a negative modifier, and can this be automated by the plugin?

If the XP calculations are solely based on reward amounts, then plugin configuration options like cashToXp: 1:1, cashToNegativeXp: 2:1 seem straightforward enough if the plugin handles all calculations.
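
As a loose illustration, such configuration might look something like this (the key names and ratio representation are my assumptions, not an agreed format):

// Hypothetical plugin settings expressing the ratios mentioned above.
type XpRatesConfig = {
  cashToXp: number;         // e.g. 1 => 1 XP gained per 1 unit of reward
  cashToNegativeXp: number; // e.g. 2 => 1 XP lost per 2 units of reward deducted
};

const defaultRates: XpRatesConfig = { cashToXp: 1, cashToNegativeXp: 2 };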

Potential Causes for Negative Modifiers

These should be programmatically verifiable.

  1. Paid-out issue was re-opened. (e.g. -50 until the assignee resolves it)
  2. Paid-out issue was re-opened and then completed by someone else.

Other Possible Modifiers Unrelated to Rewards Earned

0x4007 commented 2 weeks ago

We can worry about making it global later. I think local implementation is straightforward and is our only need for the near future. If we can write to the plugin's repo then global implementation should be viable in another task.

XP Apps

Pulls

Starting New Tasks

Chrome Extension

Remarks

The most useful application in the near future is the pull disqualification. It's simple and will save our reviewers a ton of headache from new contributors who are not skilled. If it is too tedious to review an assignee's work (i.e. they submit work without testing, and of course it doesn't function as expected...), then we reviewers can keep requesting changes until they are disqualified. Because the bot kicks their assignment and closes their pull, the assignee is unlikely to retaliate towards the reviewers, making operations streamlined and simple.

0x4007 commented 2 weeks ago

@Keyrxng what do you think about making it for the XP system? Given the steep cost of 2701, that's equivalent to a six-week-long, urgent project (2800 USD). A massive spec will be necessary.

I assume we would essentially need to write six specs, one for each one-week sprint.



I wonder if it's appropriate to make a parent task with six child tasks?

Keyrxng commented 2 weeks ago

I wonder if it's appropriate to make a parent task with six child tasks?

It's beneficial to consolidate each section into one place for centralized questions and assistance, rather than having a single task overwhelmed by numerous inquiries spanning all steps. None of the tasks should be labeled for permit generation, but we could visualize the reward by assigning a value to each task on the parent issue.


These steps sound ideal. The parent issue would provide an overview and the 'rules' for submissions, with each child containing a detailed specification.

For tracking, contributors should tag the completed task when pushing a commit. This would simplify progress tracking for reviewers and is more efficient than commenting on the PR or issue.
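
For example (purely a suggested convention, not a decided format), a commit message like feat: reminder parser [task 3/6] would let reviewers map commits to child tasks at a glance.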

Reviewing submissions as they come in might be more equitable and lead to higher-quality final submissions, but it doesn't seem as "fair" and it's atypical of hackathons.

Regarding the submission process, one PR on submission day might be easier than managing six different PRs for each child task per contributor. What are your thoughts?


Should we define some metrics for submissions?

  1. Achievement of spec and degree of precision.
  2. Code cleanliness, readability, and maintainability.
  3. Friction in the review process: e.g., A+ for no changes required, D- for 20+ comments needed.
  4. ...

Keyrxng commented 2 weeks ago

V1 basic prototype:

  1. It reads and writes to a GitHub JSON "database" (see the sketch below).
  2. It collects reward amounts from completed issues as positive XP.
  3. It collects actions/events from in-review, completed, and re-opened issues as negative XP modifiers.
  4. It produces a final XP score for any given task, for the assignee only.

V2 could include reviewer XP etc but V1 should focus on just the assignee.
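
For item 1, a minimal sketch of what reading and writing that JSON "database" could look like via the GitHub contents API (the owner/repo/path values are placeholders and error handling is omitted):

import { Octokit } from "@octokit/rest";

const octokit = new Octokit({ auth: process.env.GITHUB_TOKEN });
// Placeholder location for the XP "database" file.
const location = { owner: "ubiquity", repo: "ubiquibot-config", path: "xp.json" };

// Read the current JSON file; returns the parsed data plus the blob SHA needed for updates.
async function readXp() {
  const { data } = await octokit.rest.repos.getContent(location);
  if (Array.isArray(data) || !("content" in data)) throw new Error("xp.json not found");
  return { xp: JSON.parse(Buffer.from(data.content, "base64").toString()), sha: data.sha };
}

// Write the updated JSON back, passing the previous SHA so GitHub rejects stale writes.
async function writeXp(xp: unknown, sha: string) {
  await octokit.rest.repos.createOrUpdateFileContents({
    ...location,
    message: "chore: update xp database",
    content: Buffer.from(JSON.stringify(xp, null, 2)).toString("base64"),
    sha,
  });
}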

Keyrxng commented 2 weeks ago

Stress testing:

Maybe with input from @EresDev on this one, in regards to using something like https://k6.io/?

Cache system:

I'm a little confused by this, actually. So issues.closed would only update that issue? Then should it run on another event and re-process issues? Or should it just re-process all of that user's issues?

Maybe if we also store the last activity on the issue it will make re-processing issues quicker?

// The `tasks` entry from the XP type above, extended with a lastActivity timestamp:
  [issueId: string]: {
    xpModifier: XPModifier;
    reward: Reward;
    lastActivity: Date;
  }

0x4007 commented 2 weeks ago

Somebody on the team made a good point about "coupling", or not making plugins rely too much on each other. So I have mixed feelings about the approach on XP calculation.

Ideal situation: we standardize a comment metadata interface for "scoring" or "rewards" output from every plugin. Then, the permit generation plugin can run at the end of every chain[^1]. It will parse the scoring metadata posted by every plugin, sum it up, and generate a payment permit[^2].

Now, the coupling is minimized and we are able to sum the rewards and save the XP.

@gentlementlegen @whilefoo rfc

Implementation

Ideally our kernel or permit generation plugin will handle compiling a coherent JSON object to append at the end of the comment. The goal would be to make this foolproof for plugin developers. Perhaps we have a method in our SDK where they pass a specific object shape that represents the rewards totals from their plugin?

Unclear if we can parse outputs directly from plugins or if we have to write the output to the comment, and then parse the comment and recompile the JSON. Either should be fine, but the comment approach could be interesting: we'd see the results stream in as the manager continues evaluations and edits the comment.

Example

Example generalized rewards metadata format:

type Rewards = {
    source: string;
    results: {
        user: number;
        reward: number;
    }[];
};

const rewards: Rewards[] = [
    {
        source: "@ubiquity-os/conversation-rewards",
        results: [
            {
                user: 0,
                reward: 100,
            },
            {
                user: 1,
                reward: 100,
            },
        ],
    },
];
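
A quick sketch of the summation step the permit-generation plugin (or kernel) would run over this metadata (purely illustrative):

// Sums every plugin's results into per-user totals before generating permits.
// Negative rewards pass straight through, as noted in footnote 2.
function sumRewards(all: Rewards[]): Map<number, number> {
  const totals = new Map<number, number>();
  for (const { results } of all) {
    for (const { user, reward } of results) {
      totals.set(user, (totals.get(user) ?? 0) + reward);
    }
  }
  return totals;
}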

[^1]: Not sure how we can do this intelligently and automatically without manual configuration every time.
[^2]: This approach would also support negative rewards, which is great.

Keyrxng commented 2 weeks ago

So a typical scenario would be:

Right now conversation-rewards imports permit-generation like a package, but with the new approach it would be placed at the end of the chain and would either parse the various reward amounts from a comment or be passed all of those amounts via the kernel. That would require every installed plugin to return the updated object to the kernel, and only the kernel would update the dedicated comment, not the plugins.

We can't append it to the task/PR body or the contributor could edit it. So we'd need a dedicated comment (maybe the bot posts an immediate placeholder to reserve the highest safe comment slot for it).

We would also have to create safeguards so that only authorized org members can actually edit the metadata comment; otherwise any member could mess with it. So: a plugin running on comment.edited that reverts the edit if it's unauthorized.

0x4007 commented 1 week ago

We can probably check who last edited and crash if it's anybody but the bot.
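
A rough sketch of that check, assuming the plugin receives a standard issue_comment.edited webhook payload (the bot login is a placeholder):

// Crash (or revert) if the metadata comment was edited by anyone other than our bot.
// On an issue_comment.edited payload, `sender` is the account that made the edit.
function assertBotEdited(payload: { sender: { login: string; type: string } }) {
  const isOurBot = payload.sender.type === "Bot" && payload.sender.login === "ubiquity-os[bot]"; // placeholder login
  if (!isOurBot) {
    throw new Error(`Unauthorized edit of the rewards metadata comment by ${payload.sender.login}`);
  }
}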

It could also be very interesting to render a minimal chart that represents the status of every plugin in the chain.

whilefoo commented 1 week ago

Ideal situation: we standardize a comment metadata interface for "scoring" or "rewards" output from every plugin. Then, the permit generation plugin can run at the end of every chain[^1]. It will parse the scoring metadata posted by every plugin, sum it up, and generate a payment permit[^2].

Since we're moving to generating the permit on pay.ubq.fi, it would make more sense for plugins to return the reward amount to the kernel, which then creates an entry in the DB.

Unclear if we can parse outputs directly from plugins or if we have to write the output to the comment, and then parse the comment and recompile the JSON. Either should be fine but the comment approach could be interesting to see the results stream in as the manager continues evaluations and editing the comment.

The kernel can retrieve plugin outputs directly; there's no need to write output to a comment.

0x4007 commented 1 week ago

I heavily dislike the database dependency. It centralizes things and makes development more burdensome to set up.

We should decentralize the data. Consider storing the data on GitHub

gentlementlegen commented 1 week ago

The problem with that approach is that the results are totally specific to reward calculation, which is only relevant for a few plugins. I think we can standardize results, but they should be very generic and just enforce some result key, which would be an object of any shape. Any plugin should be able to output any sort of data and pass it to the next plugin in the chain, if any. Maybe we should even enforce something like result: { 'assistive-pricing': { ... } } to avoid collisions in the chain. This is similar to what GitHub does with the output of every Action through the result object.
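
Something along those lines could look like this (purely a sketch of the idea, not an agreed interface):

// Each plugin namespaces its output under its own key so chained plugins don't collide,
// similar to how GitHub Actions exposes per-step outputs.
type PluginResult = {
  result: {
    [pluginName: string]: Record<string, unknown>;
  };
};

const example: PluginResult = {
  result: {
    "assistive-pricing": { /* any shape the plugin chooses */ },
  },
};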

0x4007 commented 1 week ago

type DeveloperFriendlyRewards = {
  rewards: {
    "@ubiquity-os/conversation-rewards": {
      "0x4007": 50
    }
  }
}
type AnalyticsFriendlyRewards = {
  rewards: {
    "759346183": {
      "4975670": 50
    }
  }
}

I feel like the least developer friendly to debug, but most friendly for analytics, is to use repository and user IDs, since those never change. Otherwise we can consider using the repository name and user name. If it is hosted off of GitHub though, the plugin will need to use the URL as its ID.

whilefoo commented 1 week ago

We should decentralize the data. Consider storing the data on GitHub

Storing JSON on GitHub has a number of problems, which is exactly why databases were created. If two events both want to write a reward for the user at the same time, one will fail. Schema changes are a real pain in the ass. Also imagine thousands of users with hundreds of rewards, which will create a big JSON (GitHub max file size is 50MB). If we solve those problems I'm fine with this approach.

0x4007 commented 1 week ago

We should decentralize the data. Consider storing the data on GitHub

Storing JSON on GitHub has a number of problems, which is exactly why databases were created. If two events both want to write a reward for the user at the same time, one will fail. Schema changes are a real pain in the ass. Also imagine thousands of users with hundreds of rewards, which will create a big JSON (GitHub max file size is 50MB). If we solve those problems I'm fine with this approach.

Easy to solve. Can break apart JSONs per plugin, and per partner org, and per partner repo if necessary. But if we store in each partner's ubiquibot-config repository, then it probably makes sense to just store per plugin.

i.e. ubiquity-os-conversation-rewards.json

whilefoo commented 1 week ago

We should decentralize the data. Consider storing the data on GitHub

Storing JSON on GitHub has a number of problems, which is exactly why databases were created. If two events both want to write a reward for the user at the same time, one will fail. Schema changes are a real pain in the ass. Also imagine thousands of users with hundreds of rewards, which will create a big JSON (GitHub max file size is 50MB). If we solve those problems I'm fine with this approach.

Easy to solve. Can break apart JSONs per plugin, and per partner org, and per partner repo if necessary. But if we store in each partner's ubiquibot-config repository, then it probably makes sense to just store per plugin.

i.e. ubiquity-os-conversation-rewards.json

That would partially solve the problem, but it creates a problem when a plugin wants to query globally.

For example, if rewards are stored in each partner's repo, 'pay.ubq.fi' needs to fetch all repos and calculate how much the user can claim, then generate a permit. And then where will it store the permit? In our global repo for permits? Then you have the original problem.

0x4007 commented 1 week ago

We could do it per org by passing in a query param. None of these are complicated problems to solve.

llvee commented 1 week ago

Hello, is there any kind of role available for someone like myself who is willing to complete all of the bounty work as an independent contractor, with some kind of initial payment (a percentage of the total bounty) for bounties they are willing to undertake?

I am in search of new work opportunities and have lots of coding experience, including Docker experience and experience building development environments for new products/stacks/use cases.

I realized the below after spending lots of time studying relevant business and technology laws while building technology products and participating in bounty programmes.

I am wondering because I realised the bounty system(s), as they are, violate labor laws in many different countries by not guaranteeing any payment to bounty takers in many cases: even if they put in many hours, they are working for free if they do not receive the bounty or if the company simply chooses not to reward them for their work.

I am sharing the previous statement to raise awareness, since this is an area that is slowing down technology in some cases by creating bad experiences for tech workers, which often leads to them giving up on the technology industry.

Additionally, it is likely that code quality is suffering because people are under more stress than usual due to concerns about being compensated for their work.

What I proposed in my first question is a better alternative to the current bounty systems, as some payment should be given to those who undertake the bounties regardless of whether they receive the full bounty or not. This would help avoid labor law violations and lead to better code quality.

By reducing labor law violations, lots of people's time can also be saved: imagine the number of hours and the number of people involved in many of the technology class actions, where the end result is simply a financial transfer that required thousands of hours of people's time to occur. These changes also make sense because they can save companies from the millions of dollars in losses, or in some cases bankruptcy, that result from lawsuits.

Hope this helps build awareness, and that some work opportunities are available along those lines. I do like the flexibility of being able to work in the bounty systems without lots of bureaucracy.

I am curious, in the current system, what the timeframe is for rewarding bounties and how there is a guarantee that I will receive compensation for completing them?

Keyrxng commented 1 week ago

@githubbin765 This is a form of recruitment and is atypical of how Ubiquity offers "work" via our task system.

There are two roles, contributor and team member, and there are no upfront payments.

If you are interested in completing tasks and/or finding "work opportunities", visit https://work.ubq.fi/ to see an overview of the tasks that are currently open and ready to be worked on.

If you are a talented developer, you can make good money simply by identifying tasks you are comfortable with and delivering on them as quickly and at the highest quality you can; this is how you ensure quick "compensation" for completing tasks.

Code quality should not suffer if the contributor accepts one task and focuses just on it, rather than trying to take on lots of tasks. This approach will also improve merge velocity and therefore time until payout.

Payment is instant after the task is closed as complete, leveraging Permit2 to facilitate this instant, trustless payout.

If you show that you are a talented developer through effective, consistent, and good-quality PRs, then you may be offered a more central role within the org, but that is offered, not asked for.

All in all, if you want to get paid, go find a task and see it through. That's all there is to it. All core team members started off through this exact method I have detailed here, so it's an effective system for those that can deliver.

Good luck and see you in review!

Keyrxng commented 1 week ago

I heavily dislike the database dependency. It centralizes things and makes development more burdensome to set up.

We should decentralize the data. Consider storing the data on GitHub

Why don't we create a dedicated plugin which handles the GitHub storage layer? It can respond to custom events, like how the kernel has return_data_to_kernel. Idk if this was the unspoken plan for it or not, but GPT puts it better than I usually do.

To improve concurrency and response times when interacting with GitHub as a storage layer:

Cloudflare provides everything we'd need for it (idk about the costs involved though): Cache, Durable Objects, KV, etc.

We make it a plugin so that it has direct access to our kernel, which I'd expect it to need. So only we would need to install it into the official bot, not partners. It could respond to a custom event schema that we create and dispatch from the SDK to the repo with the data to store. Or we could just hit the worker endpoint directly from the SDK.

Cache, Queue, etc. introduce their own problems, I'm aware, but I'd think it would be an improvement over hundreds of instances all hitting the GitHub API directly? This plugin would be substantial to implement fully and would be a good contender for the Replit bounty itself imo.
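
As a very rough illustration of the worker-endpoint variant (the KV binding, routing, and the deferred write-back to GitHub are all hypothetical):

// Hypothetical Cloudflare Worker fronting the GitHub storage layer with a KV cache.
// Types come from @cloudflare/workers-types.
export interface Env {
  STORAGE_KV: KVNamespace; // hypothetical KV binding
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const key = new URL(request.url).pathname.slice(1);
    if (request.method === "GET") {
      const cached = await env.STORAGE_KV.get(key);
      return new Response(cached ?? "null", { headers: { "content-type": "application/json" } });
    }
    if (request.method === "PUT") {
      await env.STORAGE_KV.put(key, await request.text());
      // A real implementation would also queue a write-back to the GitHub repo here.
      return new Response("ok");
    }
    return new Response("method not allowed", { status: 405 });
  },
};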

0x4007 commented 1 week ago

I think we can worry about those scaling problems later. We have like less than 100 events per day per org of ours. It's fine.

llvee commented 1 week ago

@keyrxng Is it expected that we resolve all of the code problems on the link you shared? I managed to quickly get some of the features loading, and am wondering if the errors in the file there are intentional as a developer test or not? This required rebuilding the page and changing the JS code.

Keyrxng commented 1 week ago

@Keyrxng Is it expected that we resolve all of the code problems on the link you shared? I managed to quickly get some of the features loading, and am wondering if the errors in the file there are intentional as a developer test or not? This required rebuilding the page and changing the JS code.

@githubbin765 just take it one task at a time, buddy; start small and learn the projects. Do not overcommit or spread yourself too thin. Pick one and complete it, then move on to the next.

There are no secret developer tests; the tests are the tasks. But if you experienced problems while using any part of the ecosystem, please open an issue in the relevant repo and include a description and an image of the error so that we can price it to be resolved, thanks.

Pop into the Telegram if you have any more questions, as these comments are not pertinent to the task at hand, cheers.

https://t.me/UbiquityDevPool