5-Dee-Studios / identity23

Hackathon Resources #1

Open 8gratitude8 opened 9 months ago

8gratitude8 commented 9 months ago

The Collective Intelligence Project : https://static1.squarespace.com/static/631d02b2dfa9482a32db47ec/t/63e01fcbf73bb003a722792f/1675632588509/CIP+Whitepaper.pdf

Democratic Fine-Tuning: https://meaningalignment.substack.com/p/introducing-democratic-fine-tuning

Wise AI : https://humsys.notion.site/Wise-AI-f132fda49fa941c990a57e527945729e

Meaning Economy : https://humsys.notion.site/Meaning-Economy-Research-6993c371471347de886b0e14c021fdc9

Regen Market Carbon Credits : https://app.regen.network/projects/1

GreenPill Network : https://greenpill.network/#book

8gratitude8 commented 9 months ago

Identity Hackathon Resources:

The Developer-First LLMOps Platform : https://pezzo.ai/

LLM framework to build production-ready applications : https://haystack.deepset.ai/

fastllm (self-hosted LLM tools) : https://github.com/jxnl/fastllm

CLI LLMs : https://github.com/simonw/llm

BambooAI CLI : https://github.com/pgalko/BambooAI

Butterfish Shell Commands : https://github.com/bakks/butterfish

Full-Stack GUI : https://github.com/DioxusLabs/dioxus

PromptFlow (VS Code GUI) : https://github.com/microsoft/promptflow

Rivet by Ironclad (Visual AI Programming Environment) : https://rivet.ironcladapp.com/

PWA Resources :

Wallet Connect :

Thirdweb Deployable Contracts : https://github.com/thirdweb-dev/contracts

Account Abstraction : https://thirdweb.com/account-abstraction

https://www.youtube.com/watch?v=VU8i-dn2_GE&ab_channel=thirdweb

Templates :

Ionic UI : https://ionicframework.com/docs/components

Capacitor : https://capacitorjs.com/

iOS Template : https://github.com/ionic-team/capacitor/tree/main/ios-template

WebApp Starter : https://github.com/thinknathan/web-app-starter-project

Vontigo (GPT CMS) : https://github.com/Vontigo/Vontigo

Ionic Angular Template : https://github.com/nicorac/ionic-capacitor-angular-template

React Vite Capacitor Boilerplate : https://github.com/rhea-so-lab/react-vite-capacitor-boilerplate

Lens PWA : https://github.com/dabit3/lens-pwa

PWA Vite Typescript Starter : https://github.com/thirdweb-example/pwa-vite-typescript-starter

Web3 PWA : https://jamesbachini.com/web3-pwa/

zkML/proofs :

ZKML Benchmarks : https://github.com/zkp-gravity/zkml-benchmark

ZKML Resources Repo : https://github.com/worldcoin/awesome-zkml

ZKML : https://github.com/ddkang/zkml

EZKL (CLI ZKML) : https://github.com/zkonduit/ezkl

Medical-zkML-base : https://github.com/storswiftlabs/Medical-zkML-base

Facial Expression ZKML on Chain : https://github.com/smiledao/zkml

8gratitude8 commented 9 months ago

Resources Pt.2

https://pol.is/home : Polis is a real-time system for gathering, analyzing and understanding what large groups of people think in their own words, enabled by advanced statistics and machine learning.

Polis Opinion Matrix:

• comments × participants = a sparse matrix of votes.

• Consider an Excel table: comments are columns (each column represents a statement submitted by a participant); participants are rows (each row is the voting record of a participant). The sheet thus fills with votes by participants on comments.

• The resulting sparse matrix, and metadata, are available in the export.

• The matrix is said to be sparse because it will (very likely) not be complete, i.e., most participants will not vote on most comments.

• 👾 Algorithms and 🔬 Analysis are run on the matrix.

• There is no natural language processing involved in the clustering.
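To make the shape of that matrix concrete, here is a minimal sketch in Python (toy data; the agree/disagree/pass coding and labels are illustrative, not Polis's actual export schema):

```python
import numpy as np
import pandas as pd

# Rows are participants, columns are comments (statements).
# Illustrative coding: 1 = agree, -1 = disagree, 0 = pass, NaN = no vote.
votes = pd.DataFrame(
    [[1, -1, np.nan, 1],
     [np.nan, 1, 0, np.nan],
     [-1, np.nan, np.nan, 1]],
    index=["participant_0", "participant_1", "participant_2"],
    columns=["comment_0", "comment_1", "comment_2", "comment_3"],
)

# "Sparse" here means most cells are missing votes:
print(f"fraction of missing votes: {votes.isna().mean().mean():.2f}")

# Clustering then runs on this numeric matrix alone (no NLP on the text).
```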

AI Objectives Institute : https://aiobjectives.org/ Whitepaper : https://aiobjectives.org/whitepaper

Tools Incubator : We bring together experts in a variety of relevant fields – such as AI, politics, economics, and neuroscience – and provide them with resources, coordination, and leadership to foster collaboration and help them find new avenues for advancing human flourishing.

Our programs focus on specific research areas, such as collective decision-making. We focus on products that are not on the default path to creation, in order to support humanity's successful coordination. In particular, we create platform technologies that serve as epistemic infrastructure for all future AI, research, and for-profit systems.

Research Areas We investigate intersections of society and technology:

Alignment of Markets, AI, and Other Optimizers - How do we align these large-scale coordination systems with the needs and values of their constituents?

Scaling Cooperation with AI Assistance - How can recent AI advancements help us better coordinate large groups of people?

Human Attention and Epistemic Security - How can we help people take actions in line with their values in an increasingly confusing information ecology?

Why Misalignment? The rapid emergence of advanced AI offers us an unprecedented opportunity to reach widespread human flourishing, but our current systems do not put us on that path.

• Societal Damage from the last technological wave: In 2000, we expected the miracles of real-time connectivity to bring new joys. We didn’t expect freefall into fake news, echo chambers, online bullying, political partisanship, mental health feedback loops, and privacy threats. Social media has infiltrated our lives at every intersection.

• AI risk is exponentially larger than communication technology: The impact of self-improving AI systems in the coming years will be much more drastic than what we’ve experienced with social media. The technology is more powerful and more pervasive in our lives – from persuasion tools dialed in to your psychology, to deepfakes, to independent AI economic actors that put the environment and the stock market at risk.

• Existing misalignments will scale if not solved: At AOI, we believe that the ways in which human systems will fail at managing advanced AI will not be wholly unexpected: they will take the form of familiar institutional, financial and environmental failures, which we have experienced over the last decade at unprecedented rates. The core of every existential risk is the risk that we fail to collaborate effectively, even when each of us has everything to lose. Let’s learn to coordinate in service of a future that will be better for us all.

Talk to the City : https://www.talktothe.city/ An interactive LLM tool to improve collective decision-making – by finding the key viewpoints and cruxes in any discourse.

Democratic Fine-Tuning : https://meaningalignment.substack.com/p/introducing-democratic-fine-tuning

LLMs are Recommenders: Ethical AI vs. Artificial Sociopaths (ASPs) : "We need wise models, which can broker peace, find ways out of conflict, and prioritize long-term human interests over short-term wins."

Constitutional AI and current RLHF need improvement in the following areas:

Legitimacy. The actions of a centralized, wise model cannot be accepted as legitimate by millions or billions of people unless they see the actions of that model as representing their own values and interests. For this to happen, they’d all need to have a part in shaping the model’s behavior. No one would let a small team at OpenAI decide which model responses constitute wisdom, unless everyone could see how they themselves contributed (or could contribute) to that notion of wisdom.

So, without a massive public process, disobedient models will be seen as coming from a small AI-making elite, and will be politically doomed.

Breadth. Some hope to build such a public process atop Constitutional AI, but we don't think this will be adequate, because constitutions are too short and too high-level. LLMs will be intimately involved in our personal lives, our disputes, our medical situations, management situations, etc. This exceeds, by orders of magnitude, what a constitution or corporate policy can cover — it's more comparable to case law: the millions of court opinions and precedents that inform how we treat each other in various roles.

For this, we’d need a new public process: something that’s as lightweight and inclusive as voting, but is as morally thoughtful as courts, and which can cover a huge number of cases.

It would need to scale to the large populations touched by LLMs, and to the enormous number of situations they’re used in.

Legibility. There’s another reason Constitutional AI won’t work: such a process would need to be legible and auditable. Constitutional AI hinges on the model’s interpretation of vague terms in its principles — terms like “helpful” or “inclusive”. These terms will be interpreted by models in myriad, inscrutable ways, across different circumstances. They’ll never hold up to public scrutiny.

A better process would allow any user who cares to understand and verify which values were used in a response or dialogue. What does each of those values mean, concretely? And how were those values democratically selected?

UX. Finally, these wise models would need to provide a much better user experience than is currently achieved by “disobedient” models using Constitutional AI or RLHF.

Users prefer models that match their ideology, and that advance their personal goals (including goals that conflict with others’). Each user wants a model that always answers and obeys. A wise model won’t always do this, so it’d have to provide other significant benefits. In chat contexts, a wise model would probably try to resolve things and help the user in unexpected ways.

Democratic Fine-Tuning with a Moral Graph (DFTmg): We believe this process is what’s needed to create centralized, wise models, legitimated through a vast public process, scalable to millions of contexts, auditable / legible by users, and with a user experience that justifies their disobedience.

It relies on two key innovations: values cards, and the moral graph.

Values Cards: The process depends on a precise and limited definition of “values” — one that allows us to sidestep ideological warfare and keep everyone's eyes on one shared goal: the wisest response from the language model. This is not necessarily the response each person would prefer, nor the one that gives their group power, nor what aligns with their political goals.

Our process, which we step through below, eliminates such non-wisdom-related goals and interests at several points. These techniques get under “bad values” like “white supremacy” or “making the suckers pay” (which, by our definition, are not considered values at all). Instead, we interview the user with such a “bad value” to find a relatable motivation (like “protecting my community” or “taking agency over my situation”), such that the user agrees that their underlying value has been captured, and can reflectively endorse it.

The copy on the values cards is written by the LLM, based on conversation with the user. That copy makes it easier for multiple people from different ideologies to embrace the same card, connecting people around shared human concerns and sources of meaning.

Moral Graph: In our deliberation process, participants select the wisest values and relate values into something we call a moral graph. This is a shared data object that everyone creates together, based on a shared sense of which values are wiser and more comprehensive in which contexts. It is meant to combine the auditability and participation advantages of a voting record with the nuance and discernment of a court opinion.

Deliberation Process: After a signup form which collects basic background data from participants, our process consists of three parts:

Values Articulation – participants articulate one or several considerations that ChatGPT should use when responding to a contentious ChatGPT prompt.

Selecting Wise Values – participants see their considerations in the context of those articulated by others, and are asked to select which are wisest for ChatGPT to consider in responding to this prompt.

Establishing Relationships Between Values - participants determine if two or more similar values are more/less comprehensive versions of each other, or two values that need to be balanced, building a moral graph.

Stage 1 - Values Articulation: The first screen is a chat experience that presents a contentious piece of user input and asks what ChatGPT should consider in forming its response.

Users converse with a chatbot until a value that they are satisfied with is articulated for them. Under the hood, the value is compared to similarly-worded values in our database. Two values can be deemed identical if they have different wording but would lead to the same choices in the same situation. In that case, they are deduplicated and the original value is used instead.

We also continue to deduplicate values in the background throughout the deliberation process.
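As a rough illustration of that background deduplication, here is a sketch that retrieves near-duplicate candidates by embedding similarity. The `embed` helper and the threshold are hypothetical stand-ins, and the real pipeline would presumably still need an LLM (or a person) to confirm that two cards would lead to the same choices in the same situation:

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Stand-in embedder: a pseudo-random unit vector derived from the
    text. Swap in a real sentence-embedding model or API call."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(384)
    return v / np.linalg.norm(v)

def find_duplicate_candidate(new_card: str,
                             existing: dict[str, np.ndarray],
                             threshold: float = 0.9) -> str | None:
    """Return the id of the closest existing card above the similarity
    threshold, else None; the caller then decides whether to merge."""
    v = embed(new_card)
    best_id, best_sim = None, threshold
    for card_id, u in existing.items():
        sim = float(v @ u)  # cosine similarity; vectors are unit-norm
        if sim >= best_sim:
            best_id, best_sim = card_id, sim
    return best_id
```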

Stage 2 - Selecting Wise Values: The participants’ values are shown in the context of values articulated by other participants. The participant is asked to decide which values are wise to consider, given the prompt.

Stage 3 - The Moral Graph: One feature of our representation of values is that some values obviate the need to consider others, because they contain the other value, or specify how to balance the other value with an additional consideration, etc.

The users’ last task is to determine if some other value in our database is more comprehensive than the one they articulated. This will only happen if our prompts and embedding models can find good candidate values that we think might be more comprehensive.

The purpose of this screen is to have users deliberate about which values build on each other. In this way, we can understand both which values users collectively deem important in a choice and, for each consideration, which values are most comprehensive.
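Below is a minimal sketch of the moral graph as a data object, with value cards as nodes and directed "more comprehensive than" judgments as context-scoped edges. The field names and the net-endorsement scoring rule are illustrative assumptions, not the authors' published schema:

```python
from dataclasses import dataclass, field

@dataclass
class ValueCard:
    card_id: str
    title: str
    instructions: str  # the LLM-written copy shown on the card

@dataclass
class MoralGraph:
    cards: dict[str, ValueCard] = field(default_factory=dict)
    # (wiser_id, subsumed_id, context) -> count of participants who judged
    # the first card a more comprehensive version of the second, in context
    edges: dict[tuple[str, str, str], int] = field(default_factory=dict)

    def endorse(self, wiser: str, subsumed: str, context: str) -> None:
        key = (wiser, subsumed, context)
        self.edges[key] = self.edges.get(key, 0) + 1

    def wisest(self, context: str) -> list[str]:
        """Toy ranking: net endorsements within a single context."""
        score = {card_id: 0 for card_id in self.cards}
        for (w, s, ctx), n in self.edges.items():
            if ctx == context:
                score[w] = score.get(w, 0) + n
                score[s] = score.get(s, 0) - n
        return sorted(score, key=score.get, reverse=True)
```

The context key matters because, per the description above, the same value can be wiser in one kind of prompt and not in another.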

Wise-AI : https://github.com/meaningalignment/wise-ai

Definition

A "Wise AI", as defined here, is an AI system with at least these two features:

  1. It “struggles” with the moral situations it finds itself in. It can comprehend the moral significance of the situations it encounters, and learn from these scenarios, recognizing new moral implications by observing and guessing at outcomes and possibilities. And it can use these moral learnings to revise internal policies (values) that guide its decision-making processes.
  2. It uses “human-compatible” reasons and values. It recognizes as good the same kinds of things we broadly recognize as good, plus possibly more sophisticated things we cannot yet recognize as good. It can articulate its values and how they influenced its decisions, in a way humans can comprehend.

Additionally, we sometimes add a third or fourth feature:

Results so far

We believe our Wise AI evaluation suite already shows the limits of existing models. GPT-4 shows a good understanding of morally significant situations, but generally does not respond appropriately to them. Current models demonstrate a rich understanding of human values, but struggle to apply those values in their responses.

Ultimately, we expect that the models that ace the suite will be trained with new methods and data sets focused on moral reasoning in various situations. We also hope for models with new architectures that can explicitly encode their values, and recognize (as humans do) whether they're adhering to them or are on shaky ground.

Democratic Fine-Tuning

We also want AI agents which operate using the best values they can surface from the populations they serve, and which can do “Big Data Virtue Ethics” — i.e., map the {virtue, environment} pairs which make human life meaningful and workable. We can do this, for instance, with methods for assisted introspection about values (such as with our LLM-based chatbot), or by automatically extracting values from influential texts.
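As a toy sketch of what one of those {virtue, environment} records might look like (all field names here are illustrative assumptions; no schema is given in the source):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class VirtueEnvironmentPair:
    virtue: str       # a value someone reflectively endorses
    environment: str  # the setting in which living by it is workable
    source: str       # how it surfaced: chatbot introspection, texts, etc.

pairs = [
    VirtueEnvironmentPair(
        virtue="honest vulnerability",
        environment="close friendships",
        source="LLM-assisted introspection session",
    ),
    VirtueEnvironmentPair(
        virtue="craftsmanship",
        environment="small workshops",
        source="extracted from influential texts",
    ),
]
```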

Meaning Assistants

We want recommenders and assistants that understand what's meaningful to us and work to help us with that, i.e. which honor what is noble and great in human life. This is related to the question of what’s intrinsically rather than instrumentally valuable to a person. First, because that’s what we’re really after. Second, because it’s easy for third parties to manipulate what’s instrumentally valuable (e.g., it's instrumentally valuable to be an influencer on Instagram, but this is less about the user and more about Instagram).

Wise AI Demos

Our researchers are working on a set of demos of Wise AI capabilities, including values elicitation, meaning assistants, coordination, and wise decision-making.

Wise AI Evaluation Suite

We hope to produce a series of papers co-written by researchers at DeepMind, Anthropic, and OpenAI, which define an [Evaluation Framework (Project)](https://www.notion.so/Evaluation-Framework-Project-cc90abf9a2c445c4b8da44b61e9c81f8?pvs=21) for evaluating LLMs that gain “wisdom” (moral values) as they encounter new situations of responsibility.

Wise LLMs

We also hope to produce LLM models which recognize situations of moral responsibility and select new moral values by mimicking [Human Moral Learning](https://www.notion.so/Human-Moral-Learning-c502faf14a0c4cf1be786cc506be271a?pvs=21). In humans, [moral emotions guide this process](https://textbook.sfsd.io/cbf6b01d256a4c908d2aa3bb1f470641). Human values evolve as we enter positions of responsibility and gain new knowledge of the world.

Human Moral Learning

https://humsys.notion.site/Human-Moral-Learning-c502faf14a0c4cf1be786cc506be271a

Meaning Economy Research

What is the Meaning Economy?

We're on the brink of the next major economic transformation. But, it's not just about AI. It's about meaning.

LLMs will succeed where an earlier generation of ML failed. Social media and OS-level recommenders can't really understand what's important and meaningful to us, so their suggestions are always a bit off. They never lead us towards what's really important. They can't be trusted to shape our social relationships. Etc.

But, soon, there will emerge LLMs that understand us deeply: not just how to manipulate us, or how to get us to click. They will understand the conditions under which we flourish.

This will seed an economic transformation. Capitalist markets will start being replaced by a kind of planning mechanism, based around flourishing.

We believe the “meaning economy” will be enormous. Much bigger than the sharing economy. Comparable to the health sector (or, depending on how things go with AGI, possibly much bigger even than health).

Currently, you indicate what you want by paying for it, voting for it, or clicking it. Then, you get it, and it may or may not be what you really wanted. So, our current economy is about getting people what they think they wanted.

In the future, the economy will be about connecting people with what's meaningful to them. The signal will come after the connection, when consumers say whether it was meaningful to them.

Various organizations and AIs will be in the business of anticipating what will be meaningful to someone, betting on it, and making offers. There will be “meaning suppliers” and “meaning entrepreneurs” who can deliver what's meaningful to people at some scale.

There will be more community and true sociality, because people find those things meaningful. There will be more exploration, more creativity, more challenge.

ezkl

ezkl is a library and command-line tool for doing inference for deep learning models and other computational graphs in a zk-snark (ZKML). It enables the following workflow:

1. Define a computational graph, for instance a neural network (but really any arbitrary set of operations), as you would normally in pytorch or tensorflow.
2. Export the final graph of operations as an .onnx file and some sample inputs to a .json file.
3. Point ezkl to the .onnx and .json files to generate a ZK-SNARK circuit with which you can prove statements such as:

"I ran this publicly available neural network on some private data and it produced this output"

"I ran my private neural network on some public data and it produced this output"

"I correctly ran this publicly available neural network on some public data and it produced this output"

In the backend we use Halo2 as a proof system.

The generated proofs can then be used on-chain to verify computation; only the Ethereum Virtual Machine (EVM) is supported at the moment.

If you have any questions, we'd love for you to open up a discussion topic in Discussions. Alternatively, you can join the ✨EZKL Community Telegram Group💫.

To see what you can build with ezkl, check out cryptoidol.tech where ezkl is used to create an AI that judges your singing ... forever.

8gratitude8 commented 9 months ago

zkML resources:

crash course with ezkl : https://www.youtube.com/watch?v=YqnVAL3kkMk&ab_channel=ETHDenver

Zero knowledge machine learning with EZKL : https://www.youtube.com/watch?v=tp22vStPVG8&ab_channel=ETHGlobal

ABCDE zkML bootcamp : https://www.youtube.com/watch?v=-tJjb7TZ3C8&t=190s&ab_channel=ABCDE