w3c / cogai

for work by the Cognitive AI community group

webinars: Link to Alice / Open Architecture call for contribution #47

Open johnandersen777 opened 1 year ago

johnandersen777 commented 1 year ago

Nice to meet you all! Looking forward to collaborating.

We think of an entity (Alice is our reference entity) as being in a set of parallel conscious states with context-aware activation. Each context ideally forms a chain of system contexts, or train of thought, by always maintaining provenance information (SCITT, GUAC). In the existing implementation she thinks concurrently and is defined mostly using the Open Architecture, which is language agnostic and focused on defining parallel/concurrent flows, trust boundaries, and policy. Orchestration is currently executed via Python, but is intended to be implemented in whatever language is desired.

Alice doesn't use any machine learning yet, but we can later add models to assist with automation of flows as needed.

Alice's architecture, the Open Architecture, is based around thought. She communicates thoughts to us in whatever level of detail or viewed through whatever lens one wishes. She explores trains of thought and responds based on triggers and deadlines. She thinks in graphs, aka trains of thought, aka chains of system contexts. She operates in parallel, allowing her to represent N different entities.
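As a rough illustration of "chains of system contexts with provenance," here is a minimal Python sketch. All of the names (`SystemContext`, `next`, `provenance`) are hypothetical, invented for this example; they are not DFFML or Open Architecture APIs, just one way the idea of a train of thought that always carries its history might look in code.

```python
from dataclasses import dataclass, field
from typing import Any, Optional

@dataclass
class SystemContext:
    """One link in a chain of system contexts (a 'thought').

    Hypothetical illustration only, not a real DFFML class.
    """
    inputs: dict
    parent: Optional["SystemContext"] = None
    provenance: list = field(default_factory=list)

    def next(self, inputs: dict, note: str) -> "SystemContext":
        # Each new context records where it came from, so the whole
        # train of thought can be audited later (SCITT-style provenance).
        return SystemContext(
            inputs=inputs,
            parent=self,
            provenance=self.provenance + [note],
        )

# A short train of thought: each step links back to its parent.
root = SystemContext(inputs={"goal": "triage issue"})
step = root.next({"action": "collect context"}, "derived from root goal")
step = step.next({"action": "propose fix"}, "derived from collected context")

print(step.provenance)
# ['derived from root goal', 'derived from collected context']
```

Because every context keeps a pointer to its parent plus an append-only provenance list, any conclusion Alice reaches can be traced back through the exact chain of contexts that produced it.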

Rolling Alice: A Current Table Of Contents

Elevator Pitch

We are writing a tutorial for an open source project on how we build an AI to work on the open source project as if she were a remote developer. Bit of a self-fulfilling prophecy, but who doesn't love an infinite loop now and again. These are the draft plans: https://github.com/intel/dffml/blob/alice/docs/tutorials/rolling_alice/ first draft: https://github.com/intel/dffml/discussions/1369#discussioncomment-2603280

Essentially we are going to be using web3 (DID, DWN), KCP (kubernetes API server), provenance and attestation, and automl with feature engineering for a distributed data, analysis, control loop. We'll grow contributors into mentors, and mentors into maintainers, and Alice will grow along with us.

Alice is Here and Ready for Contribution! Initial Announcement

We're [DFFML community] building a tutorial series where we as a community collaboratively build an AI software architect (named Alice). These docs https://github.com/intel/dffml/tree/alice/docs/tutorials/rolling_alice/ are us trying to get some initial thoughts down so we can rework from there, maybe even re-write everything. We want to make sure we all start looking at the same picture of the future, consolidate all our efforts thus far and thoughts across efforts and individuals.

One of our goals is to have Alice bring us up to the speed of the fully connected development model: to plug into the matrix. By working side by side with us, Alice will hopefully be able to detect when we've done things others on the team have already done and determined to be unsuccessful strategies. If Alice can do this, we can all work independently while communicating asynchronously via Alice (our hopefully soon-to-be new contributor). We will work to make her detect in-flight workstreams within a developer's train of thought through that developer's reporting on what they are working on. Developer: "Alice, I'm working on getting an HTTP request from Bob's service." Alice: "Developer, you may not want to do that. Bob's service is down; he told me he is working on fixing it. I can tell you when it's back online if you want."
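The exchange above can be sketched in a few lines of Python. This is a toy, not a proposed implementation: the `known_blocked` table and `report` function are hypothetical names, standing in for whatever asynchronous channel Alice would actually use to track in-flight, blocked, or abandoned workstreams.

```python
# Hypothetical sketch of Alice deduplicating in-flight work: developers
# report what they are working on, and Alice warns when a task depends
# on something she already knows is blocked.
known_blocked = {"bobs-service": "Bob is fixing an outage"}

def report(developer: str, task: str, depends_on: str) -> str:
    if depends_on in known_blocked:
        return (f"{developer}: you may not want to do that, "
                f"{depends_on} is down ({known_blocked[depends_on]}).")
    return f"{developer}: noted, working on '{task}'."

print(report("dev", "HTTP request to Bob's service", "bobs-service"))
```

The real value is in the shared state: because every developer reports through the same entity, Alice can correlate strategies across people and time rather than each developer rediscovering dead ends independently.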

Alice: Artificial Life Is Coming Eventually

Talk Abstract SKU 1

Given the inevitability of distributed AI we ask the question: How do we build it with intention? What would we find if we looked at its threat model? What if it was its own threat model? How does it defend against itself and other instances of itself? Most importantly, what would a system look like that we could cohabitate with?

Alice will join us for this talk as we flesh out these questions conceptually, via our collective thoughts on what a system would look like that we could cohabitate with. We follow trains of thought to their logical conclusions when making predictions about the future. We implement and validate in order to ensure that a transparent entity with the collective's best interests at heart is ubiquitously adopted.

This talk will build on the talk Living Threat Models are Better Than Dead Threat Models, presented at AppSecPWN. We'll dive into Alice, aka the Open Architecture, a methodology for communicating intent and translating risk mitigations into different operational contexts.

Security Folks

Ready to bring security to the mind? https://github.com/intel/dffml/tree/alice/docs/tutorials/rolling_alice

Securing the software supply chain is becoming about securing the supply chain of the mind, the developer mind. We need to teach developers, and we'll be teaching developers in a language they understand, code. We'll teach them by teaching Alice how to teach them, along the way we'll build Alice, who will be a developer herself one day.

Why might security folks want to be involved in the Open Architecture's definition and implementation?

Anything accessible via the Open Architecture methodology, as a proxy, can be used to combine external/internal work with programmatic application of context- and organizationally-aware modifications to those components as they are sourced from an SBOM. This allows us to apply policy universally across static and dynamic analysis, and to apply techniques such as RBAC based on programming-language-agnostic descriptions of policy at any level of granularity, at analysis time or at runtime.
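To make "apply policy universally across static and dynamic analysis" concrete, here is a minimal sketch. The policy record, SBOM component shape, and `evaluate` predicate are all invented for illustration; real SBOM formats (SPDX, CycloneDX) and policy languages are far richer. The point is only that one language-agnostic policy description can back both a build-time scan and a runtime admission check.

```python
# Hedged sketch: one declarative policy applied to components sourced
# from an SBOM. Field names are illustrative, not a real schema.
policy = {"allowed_licenses": {"MIT", "Apache-2.0"}}

sbom_components = [
    {"name": "left-pad", "license": "MIT"},
    {"name": "mystery-lib", "license": "Proprietary"},
]

def evaluate(component: dict) -> bool:
    # The same predicate can run in a static scan of the SBOM
    # or as a dynamic check before loading a component at runtime.
    return component["license"] in policy["allowed_licenses"]

violations = [c["name"] for c in sbom_components if not evaluate(c)]
print(violations)  # ['mystery-lib']
```

Because the policy is data rather than code baked into one analyzer, any tool in the pipeline can enforce it, which is the property the paragraph above is after.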

Supply Chain Security

CI/CD that goes really fast is effectively distributed compute.

@lorenc_dan

This is the same as banks trading credit default swaps in the early 2000s without understanding the underlying credit risk. Software is tight knit and most orgs are using the same OSS, magnifying the risks, which are now existential to the industry and to national security.

Holistic context aware risk analysis requires an understanding of a system's architecture, behavior, and deployment relevant policy.

The Open Architecture effort is looking at software description via manifests and data flows (DAGs) with additional metadata added for deployment threat modeling. Dynamic context aware overlays are then used to enable deployment specific analysis, synthesis, and runtime evaluation.
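A tiny sketch of the overlay idea described above, with all names invented for illustration: a dataflow is described as a DAG of nodes with metadata, and a deployment-specific overlay is merged in before analysis or execution. This is not the DFFML dataflow format, just the shape of the mechanism.

```python
# Hedged sketch: a dataflow as a DAG with per-node metadata, plus a
# deployment-specific overlay merged in before analysis. Illustrative only.
base_flow = {
    "fetch": {"needs": [], "network": True},
    "build": {"needs": ["fetch"], "network": False},
    "test":  {"needs": ["build"], "network": False},
}

# Overlay for an air-gapped deployment: the fetch step must not
# touch the network there, whatever the base flow says.
airgap_overlay = {"fetch": {"network": False}}

def apply_overlay(flow: dict, overlay: dict) -> dict:
    # Overlay values win over base values, node by node.
    return {
        name: {**node, **overlay.get(name, {})}
        for name, node in flow.items()
    }

deployed = apply_overlay(base_flow, airgap_overlay)
print(deployed["fetch"]["network"])  # False
```

The same base description can then be evaluated against many deployment contexts simply by swapping the overlay, which is what makes the analysis "context aware" without forking the architecture description itself.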

Leveraging the Open Architecture methodology we decouple the description of the system from the underlying execution environment. In the context of discussion around distributed compute we leverage holistic risk analysis during compute contract proposal and negotiation.

RFCv1 Announcement

Here is the first version of Alice, aka the Open Architecture, and this pull request is a Request For Comments: https://github.com/intel/dffml/tree/alice/docs/tutorials/rolling_alice Please review and provide any and all technical or conceptual feedback! This is also a call for participation: if anyone would like to get involved and contribute, please comment in the linked pull request or reach out directly. Looking forward to working with you all!

Alignment

"If we use, to achieve our purposes, a mechanical agency with whose operation we cannot interfere effectively … we had better be quite sure that the purpose put into the machine is the purpose which we really desire." [Norbert Wiener]

Convey

Definition of "convey": "To communicate; to make known; to portray." [Wiktionary] Synonyms of "convey": transport

We are working on the Thought Communication Protocol and associated analysis methodologies (Alice, Open Architecture) so as to enable iterative alignment of your AI instances to your strategic principles. Enabling your AI to convey your way.

One of the considerations in our new shared threat model is the way AI conveys information to us. In the future, automating communication channels (notes -> phone call) will be the task of AI messengers. If the messenger paints a picture worth a thousand words, we must ensure our target audience is seeing the words that best communicate the message we want them to get, aka, what's the point? We also want to make sure that if we aren't able to describe the point, if we have a miscommunication, our AI has facilities baked in to prevent that from becoming a really bad miscommunication.

From our shared threat model perspective, we must ensure we have methodologies and tooling baked into AI deployment infra. This way we ensure the AI does not become misaligned with human concepts once it outgrows them. We must ensure we can detect, prevent, and course-correct from manipulation over any duration of time from any number of agents.