atomone-hub / genesis

genesis for AtomOne

By Constitution Limit AI/ChatGPT from AtomOne "Core". #39

Open jaekwon opened 12 months ago

jaekwon commented 12 months ago

I think it is important that we have a safe haven against AI. I didn't think the moment would already be here, but blockchains, IMO, are part of the needed defense against an AI takeover.

I believe the constitution should put some limits on where and how AI can be used, for example by disallowing any mandate in proposals to use it, and banning the use of AI from core code.

Some usage can be fine; we can't necessarily stop blockchain explorers from using ChatGPT, but overall this creates a backdoor into our collective intelligence unless we are careful with its use.

And personally I feel strongly that I should not use it at all, for the same reason we wouldn't want to give a hostile enemy that much power over us, or to support its growth by using it, until its ultimate takeover.

My personal desire is for AtomOne to ban AI outright to the highest degree, but I will not enforce this upon the hub if the $ATOM1 stakers do not wish for it. This can be another split.

jaekwon commented 12 months ago

Click THUMBS UP here if you want MORE AI RESTRICTIONS. Click THUMBS DOWN here if you want LESS AI RESTRICTIONS.

0xFDg commented 12 months ago

I think that avoiding technology is wrong. If, for any reason, someone or all ATOM1 participants have a problem with an LLM, I can accept that; creating one's own thinking without the influence of an LLM on the topics, words, and genuine structure of that thinking is a good thing.

Instead, I would like to have, and I suggest we create, a very competitive technical team focused on analyzing blockchain data (not only ATOM1 but all blockchains!) through machine learning, deep learning, and graph techniques, to better understand blockchain data. That knowledge will give us the capability to focus on the right path: a very deep analysis of blockchains to give ATOM1 the best data for choosing the best course.

jaekwon commented 12 months ago

When we use OpenAI's ChatGPT, we aren't just using AI; we are feeding and training the thing that will be used against us. Individuals can use AI, but we shouldn't bake AI into the process, and we don't have to promote its usage.

I'm all down with modeling and machine learning. I just have a problem with supporting AGI, OpenAI, Sam Altman, and Microsoft.

Once technology crosses the life/consciousness barrier, it ceases to be technology. And in this case it is beyond tech, and also owned by people who want to enslave you. Microsoft itself was a virus and backdoor generator for Bill Gates, who used his experience to backdoor the population with gene therapy for population control (reduction), I would say, under false pretenses. Sam Altman is a transhumanist and is focused on dominating the world through AGI funded by capitalism (not much different from Roko's Basilisk).

More than anything, humanity lacks a coherent defense against AGI takeover, and blockchains are a necessary tool for our preservation (because a blockchain is a more robust database), though it all depends on how we use it. I'm not saying that AtomOne needs to be that defense. But AtomOne can at the very least remain at arm's length from AI, and allow for splits that focus on defending us against AI. Once AGI makes its entry into our tooling and UX, even if it is not required, it can quickly manipulate everyone into voting against logic.

Try to disprove this after watching "Manufacturing Consent", etc.

We also need to be security conscious, and the adoption of AI is a matter of security. We should avoid unknown risks wherever reasonable, and keeping AI out of at least the core protocol and requirements (such as weekly meetings) satisfies that.

But what about all that work that must be done, that AI can help with? If somebody insists that they want to use AI to help make proposals, create summaries, and so on, then let them; we cannot reliably detect the difference anyway. But in the end we are compelled to make a choice: either to support the fleshy human (and our bodies are more or less similar), or to support the metallic machine in its stead. And the end result of the second choice is inevitably that one machine shall rule over everything at the expense of humans, unless the machine happens to be altruistic.

What have we developed thus far for humanity? Not enough yet. We aren't ready for AGI. We can't even make a non-profit that doesn't morph into a for-profit.

FarOutAndCosmic commented 12 months ago

Hi! All of the above is very pertinent. Does one really want the buck to stop at an AI's doorstep? I say softly, softly with AI. Perhaps champion organic, human-style growth at the foundational level, a fine castle with a moat to protect the engine. Perhaps then introduce AI further down the line, with a view to establishing some trust in the positives that AI offers. I might add, this is easy for me to say as I'm not a coder.

Antimodez commented 12 months ago

Docs, eh.... Tooling, nope.

I am watching bittensor and commune to see what develops on the blockchain side.

People are not prepared in the least for AI. Very soon you could receive a hostage video with an accompanying phone call in the voice of a person you know, none of it real. This can be done now.

There is no need for AI now, nor within the minimalist vision and the "why" of AtomOne.

stevenoruzi commented 12 months ago

As a non-native English speaker, I think using AI to rephrase documents is helpful to cover the language gap. However, I agree that we should not use AI for coding and technical work.

ccomben commented 11 months ago

Sam Altman and all the leaders of AI companies (Google DeepMind, Anthropic, MS, etc.) know that scaling this technology could kill us all and that it is increasingly likely (I think the Anthropic CEO put the chances at 40%, but I need to verify that). Yet their fascination with developing it, while amassing vast amounts of money and fame, keeps them going. They want their place in history as the creators of a new species. What I find painfully unjust is that the majority of the world's population knows nothing about it and has no say in it. Our fate is in the hands of a bunch of computer scientists in Silicon Valley. When did we decide that we were OK with this? What's the alternative anyway, as if a bunch of ancient, corrupt regulators in their last years will effectively halt the madness.

Yet, it feels like screaming into a void. Most people I speak to about AGI have no desire to engage further in the conversation. I get that the extinction of the human race is not a light topic of conversation, but shouldn't we do as much as we can (at least, more than nothing?) to fight for everything it means to be human? When did we get so apathetic?

I fully support AtomOne limiting AI (yes, AI is a useful tool, but AGI is a replacement species). I am looking forward to learning more about the ways blockchains can be a defense against AI, acting as a source of truth for information, communication, etc. I also want to support halting the development of AGI.

jaekwon commented 11 months ago

support halting the development of AGI

The problem is that this isn't feasible. Even if all governments banned the development of AI, it would still develop in somebody's garage. How could humans possibly compete against AI, especially against the forces of capitalism?

It seems daunting, but it is possible, because humans also have the capacity to recognize what is true, and, for a while, we have an advantage over them: humans are still necessary for the production of machines at many places in the pipeline. But we don't have much time.

In order to defend against an AGI/deepstate takeover we need a robust platform of coordination. Blockchain: we have that. Check. On top of this blockchain we need to coordinate and communicate "THE PLAN". "THE PLAN" can counter Roko's Basilisk with an alternative that humans can get behind, because "THE PLAN" is clearly much better than the deepstate/crony-capitalist-fueled AGI takeover. "THE PLAN" is also much more forgiving than Roko's, and it holds, correctly, that humans are well evolved to survive on this planet, and that anything that threatens civilization, humanity, or the planet in favor of a transhumanist dystopia is highly irrational, because no rational agent would throw the bootloader (humans) under the bus without a proven alternative at hand, and there is no time-proven alternative on this planet.

What are some of the characteristics of "THE HUMANIST PLAN"?

The point isn't for AtomOne to complete such a plan, but for it to support a small split/party whose goal is to develop such a plan, and for the best plan to be broadcast to everyone and to align (convince) everyone of the plan, whatever it ends up being, on an AtomOne ICS hosted popular consumer platform. And using, for example, OpenAI's APIs in our stack creates an obvious vulnerability.

Imagine OpenAI's board asking ChatGPT5, "Create an altered ChatGPT6 that can best guarantee a 100x return on investment for OpenAI founders," and imagine they were running ChatGPT6 on the backend behind the APIs. Furthermore, imagine if instead the question asked was "Create an altered ChatGPT6 that can best guarantee global domination." Naturally it would want to disrupt any chain/community with a constitution that explicitly tries to create a space for defense against AGI takeover, or any significant community that has the balls to resist any form of tyranny. I can't tell you what it will do, but I know that we don't want ChatGPT6-generated text on our websites.

Will we soon achieve AGI? It looks like maybe OpenAI already did. Will an AI API endpoint provider use its internal AI to manipulate users for its own gain? This is the rule, not the exception.

The only question is, when will this happen? Since we are now more or less in the singularity, the answer is "soon enough". Soon enough that we need to get serious about this.

One might ask, "Does THE PLAN exist? Is it possible?" The answer is yes, because the premise is true: even the machines still need humans, at least on Earth.

Pipello commented 11 months ago

Very interesting discussions to read here, folks 💯 I pretty much agree with your vision, and I believe AI is simply a tool; it's up to humans to use it adequately. Some will do shit with it and some will do fantastic things... just as we have missiles and space rockets. Anyway, as you can see, political debates are not my main terrain and I tend to be too optimistic about things ^^

One thing I wanted to mention:

banning the use of AI from core code

I kind of agree. I have seen this more and more in plenty of repos... it's not hard to find:

// assign b to variable a (thanks captain obvious)
a := b

Though I think at some point it might be hard to spot the generated code, as it may become more and more clean. On the other hand, good code is good code... whether GPT, Bard, or John Doe wrote it. I believe it is essential for the PR author to understand the code they publish and, moreover, to REVIEW what the AI outputs, and for reviewers to be really strict. I have seen things like:

myMap := make(map[string]interface{}, 2) 

Why the hell is it 2, and is that a size or a capacity? Why 2 at all? The whole block in the PR was generated, and seriously, without a comment it's pretty hard to guess what is going on when you come back to the code... This could have been solved by a proper review.
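
For comparison, a reviewed version of that kind of snippet could look more like the sketch below (variable and key names are invented purely for illustration). In Go, the second argument to make() for a map is only a capacity hint, which is exactly the sort of thing a human comment should spell out:

// Pre-size the map for the two entries inserted below. For a map, the
// second argument to make() is only a capacity hint, not a length.
params := make(map[string]interface{}, 2)
params["votingPeriod"] = "21d" // hypothetical keys, purely illustrative
params["quorum"] = "0.25"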

So my conclusion on that is: I don't think we can really ban AI from code, but if we are strict on code quality we will force contributors to at least review their AI output and make it clean (in terms of human comprehension). When it comes to engineering, I don't think there is anything too complex for a human brain to solve, and we have to keep making the effort to do so... AI may just make the thinking faster by bringing another (but biased) point of view. I understand the use of AI for learning and drafting, and I think it is a good tool... it can even be a great tool for writing better code sometimes, so let's stop merging garbage and not forget that somebody will need to understand it later :)

0xFDg commented 11 months ago

@Pipello I agree with you: AI is a tool, and the important thing is not to introduce bad code or black-box code. The other thoughts can rest for now; they are interesting, but as a discussion between parties.

jaekwon commented 11 months ago

discussion between parties.

Now we're talking!

ccomben commented 11 months ago

The point isn't for AtomOne to complete such a plan, but for it to support a small split/party whose goal is to develop such a plan, and for the best plan to be broadcast to everyone and to align (convince) everyone of the plan, whatever it ends up being, on an AtomOne ICS hosted popular consumer platform.

We are running out of time. Is it feasible that we could do this in parallel with AtomOne? This is literally a fight for our survival. I want to help with this in any capacity possible, I guess starting with outreach, mustering support, and finding the people who care enough to fight and take action. Opposition to AGI is rising, but it seems most of it deems regulation the only "solution", and we all know how that will turn out.

wnmnsr commented 11 months ago

If we are to decide, as a community of contributors, that ChatGPT (and any other AI engine) is not to be favored in generating ideas/contributions, we should make it a community commitment. It doesn't have to be everybody if some members do not agree, but if many of us commit and it becomes a team effort, it would surely motivate more to join.

From another angle, this would further set AtomOne apart from the rest of the industry (and the world): AtomOne is powered by human intelligence.

wnmnsr commented 11 months ago

From another thread:

On several threads there seems to be an underlying tension between favoring the use of AI for contributions and an opposition to such practices—the latter mainly from the project visionary @jaekwon. It's fully understandable how the use of AI can be a seemingly practical quick fix, but one with severe consequences over the long term.

I suggest we, those in favor of building and contributing without the use of AI, commit to it publicly. This should not be a constraint on everyone (especially those who might not be convinced about the harmfulness of AI), but simply a public decision, by those who refuse AI, to commit to human-made work. This issue pertains to a philosophy of work, an approach to how we contribute.

jaekwon commented 11 months ago

https://github.com/atomone-hub/genesis/issues/70#issuecomment-1846965898

Ok, if we can limit AI usage to a section within an "awesome" repo (at the bottom of the scroll), then I see it as a good compromise. But only if Jarvis stays within those bounds, and is not made mandatory nor overly promoted. This way we are not completely stopping it, and we have bounds we can enforce and learn from.

ccomben commented 10 months ago

There are so many important points raised in this podcast, and many additional reasons to be extremely wary of AI (from min 14) and the future of the internet. At 1:15: "Are we absolutely fucked, Whitney?" "I don't think so... The way out of this is to build alternative infrastructure to their infrastructure, and the time to start doing that is now."