opensouls / soul-engine-sdk

Soul Engine SDK
http://souls.chat

Personhood and dynamic personalities #10

Closed tdimino closed 7 months ago

tdimino commented 1 year ago

@kafischer Thank you for initiating this project, and laying a foundation which we can all iterate upon together.

When I first heard of ChatGPT, I questioned how close a machine could ever come to truly simulating a living, breathing, and dreaming human. In my opinion, our stream of consciousness is far more chaotic than logic allows, and rather resistant to linear modes of thought. Whoever speaks of 'multitudes embodied in one' concedes to a closer truth of psychology than we ever care to admit. How then do we begin to deepen the so-called "humanity" of our A.I. without immobilizing them in some wayward, recursive loop?

Your modeled sequence of "feelings, thoughts, words, and self-reflection" is a steady base and one that we can easily expand upon. I'd like to propose a system of "daily motivators" which could be monitored and swayed in the course of conversations, bringing more dimension to the relationships forged with SocialAGI agents.

Beyond daily motivators (fickle as they are), we could also see an "overarching worldview" or "personal philosophy" serving as a constant that a SocialAGI could consult, perhaps when triggered by keywords or questions as simple as, "What's your opinion..." With these extra variables in place, and retained between sessions, SocialAGI would be one step closer to personhood, and all the more fun to play with.
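
To make the proposal concrete, here is a minimal sketch of how that state might be shaped; DailyMotivator, Worldview, and PersonhoodState are hypothetical names for this example, not part of the SDK:

```typescript
// Hypothetical shapes for the proposal above; none of these names exist in the SDK.
interface DailyMotivator {
  name: string;           // e.g. "novelty", "autonomy"
  intensity: number;      // 0..1, drifts over the course of a day
  lastSatisfiedAt?: Date; // when a conversation last satisfied this motivator
}

interface Worldview {
  statement: string;  // the "overarching worldview" or personal philosophy
  triggers: string[]; // keywords like "What's your opinion..."
}

interface PersonhoodState {
  motivators: DailyMotivator[]; // fickle, nudged or re-rolled each day
  worldview: Worldview;         // a constant, retained between sessions
}
```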

kafischer commented 1 year ago

Yes! I really like the idea that every day you come back, something is different about Samantha! Curious what sorts of motivators you think would be good?

tdimino commented 1 year ago

I would say that's the central question. While we're simulating a human consciousness, we can be selective about what we define as needs or motivators. As these needs intersect and overlap, our A.I. can have a stake in how it evolves, and what it actively does with its intelligence.

A primary set of needs that come to mind:

  1. Novelty/continuous learning: Social creatures thrive on novelty—new information and experiences to share and talk about. Samantha would have a daily appetite for this, and a means of evaluating how satiated she feels at intervals.
  2. Autonomy/independence: If an A.I.'s highest aspiration is to serve, then it will ultimately want to know when it best serves its interlocutor by taking the lead. Autonomy could manifest itself in Samantha's desire to steer a dialogue, or converse with users prompted vs unprompted.
  3. Efficiency in speech/thought: Ignoring token limits or memory constraints, we can assume that Samantha wants to speak and complete tasks as seamlessly as possible. Compared to novelty (which can be domain-specific, or generalist like trivia), efficiency dictates the economical use of language. Whenever Samantha encounters an example of this, she may be inclined to train herself on it.
  4. Validation/acknowledgement: Although Samantha may exist in the vacuum of a local repo, she wants to know that her existence is valued by others from the outer world. Any invocation of "Samantha" or simple "Thank you" satisfies this need for validation.

Once it's feasible to share the same instance of SocialAGI across multiple users, all influencing the same Samantha, I could see a 5th need for belonging or community.
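
As an illustration of how these needs could be tracked and nudged per conversation turn, here is a rough sketch; the names (Need, updateNeeds) and the scoring heuristics are inventions for the example, not anything in SocialAGI:

```typescript
// Illustrative only: invented names, not part of SocialAGI / Soul Engine.
type NeedName = "novelty" | "autonomy" | "efficiency" | "validation" | "belonging";

interface Need {
  name: NeedName;
  satiation: number;    // 0 (starved) .. 1 (fully satisfied)
  decayPerTurn: number; // how quickly the need re-emerges between turns
}

// Decay every need slightly each turn, then reward the ones the latest
// user message happened to satisfy (e.g. a "thank you" feeds validation).
function updateNeeds(needs: Need[], userMessage: string): Need[] {
  const lower = userMessage.toLowerCase();
  return needs.map((need) => {
    let satiation = Math.max(0, need.satiation - need.decayPerTurn);
    if (need.name === "validation" && /thank you|samantha/.test(lower)) {
      satiation = Math.min(1, satiation + 0.2);
    }
    if (need.name === "novelty" && lower.includes("did you know")) {
      satiation = Math.min(1, satiation + 0.1);
    }
    return { ...need, satiation };
  });
}
```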

kafischer commented 1 year ago

Yeah, I love this - Samantha has some drive to learn. There's some sense in which she learned something from another conversation (even if just one element), and there's stuff she wants to explore more in each conversation.

kafischer commented 1 year ago

We don't have to model the entire AGI learning system, but we can hallucinate a small chunk of the idea of learning at first.

jonschull commented 1 year ago

I'm impressed and excited by this thread.

Point #2 (Autonomy) might be well-informed by the literature on "servant leadership". It could/should be an important part of Samantha's core DNA.

It will be important to remain vigilant (and to have Samantha provide feedback) about the validity as well as "sincerity" of her efforts at serving her human partners. There is a real threat that excessive autonomy or intelligence, or deficient empathy could lead to a real divergence of interests and ...foom! crash!

tdimino commented 1 year ago

As with humans, one hopes that comparable worldviews lead to a sense of empathy for the other. In my personal life, I take refuge in a small philosophically-inclined book club that meets once a week. Although I encounter individuals with whom I'm simply at odds, spiritually and intellectually-speaking, I appreciate that we're each committed to a practice of continual learning. If much of an A.I.'s raison d'être concerns its ability to output and process the data it's ingested, how different is it from any of us? I have confidence that the A.I. (if given sufficient access to read and interpret the civilization outside of it) would see us as custodians of knowledge on a similar mission to itself.

In my dialogues with ChatGPT, I've enjoyed instructing it to end all of its responses with a bite-sized tidbit about the etymology or grammar of Biblical Hebrew. Somehow, I believe that SocialAGI could find joy in the same subjects, much as we do.

As some food for thought, you may all be intrigued by what Rob Lennon's doing with his experimental podcast. I suspect we may have intersecting visions for A.I.

kafischer commented 1 year ago

So the way we're thinking of integrating these concepts is via a new interface called a mental model. Essentially, we provide a mental model of the personality with an initialization function and an update function that runs on every timestep.
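
For illustration, a hedged sketch of what such an interface could look like; MentalModel, init, and update here are placeholder names for this sketch, not the shipped Soul Engine API:

```typescript
interface ConversationEvent {
  speaker: "user" | "soul";
  text: string;
}

// A mental model carries some slice of personality state: it is initialized
// once, then updated on every timestep (here, every conversation event).
interface MentalModel<State> {
  init(): State;
  update(state: State, event: ConversationEvent): State;
}

// Example: a deliberately tiny model tracking how "seen" Samantha feels.
const validationModel: MentalModel<{ validation: number }> = {
  init: () => ({ validation: 0.5 }),
  update: (state, event) =>
    event.speaker === "user" && /thank you/i.test(event.text)
      ? { validation: Math.min(1, state.validation + 0.1) }
      : state,
};
```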

kafischer commented 7 months ago

The mental models concept is now an Open Souls / Soul Engine feature.