Open moul opened 8 months ago
In this vein, Large Language Models (LLMs) could help us review and summarize proposals, issues, pull requests, and more. We may want to run some targeted experiments to explore this potential.
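A rough sketch of what such an experiment could look like: batch a contributor's recent activity into a single prompt for an LLM reviewer. Everything here is illustrative, not an existing API; `ContributionItem` and `build_review_prompt` are hypothetical names, and the actual LLM call is deliberately left out.

```python
# Hypothetical sketch for an LLM-assisted review experiment.
# The names below are illustrative; the LLM backend call is stubbed out.
from dataclasses import dataclass

@dataclass
class ContributionItem:
    kind: str   # e.g. "proposal", "issue", "pull_request"
    title: str
    body: str

def build_review_prompt(items: list[ContributionItem]) -> str:
    """Assemble one summarization prompt covering a contributor's activity."""
    header = (
        "You are reviewing a contributor's recent activity. "
        "Summarize each item and note signs of expertise or bias.\n\n"
    )
    sections = [f"[{i.kind}] {i.title}\n{i.body.strip()}" for i in items]
    return header + "\n\n---\n\n".join(sections)

# Example usage: the resulting prompt would be sent to whichever LLM
# backend the experiment settles on (intentionally omitted here).
prompt = build_review_prompt([
    ContributionItem("pull_request", "Fix vote tally rounding", "Corrects ..."),
    ContributionItem("issue", "Quorum edge case", "When turnout is ..."),
])
```

The point of keeping the prompt assembly separate from the model call is that we can compare backends (or humans) on identical inputs when checking for bias.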
The Proof of Contribution and Evaluation DAO is working on a governance system that uses a network of DAOs with tiered memberships. Our goal is to have skilled decision-makers in the top tiers who can build a secure and sustainable ecosystem.
Currently, we assess contributors by collaborating with them over time. We're looking for new ways to do this better.
One idea is to review past governance votes, but we need to watch out for bias and conformity.
To balance diversity and alignment, we aim to keep the DAO network diverse and to treat "forkability" as a built-in safeguard.
If you have ideas or know of similar approaches that focus on shared affinity rather than a single score, please share.
cc @MichaelFrazzy