OTRLabs / clandestine-platform

Collaboration Platform for orgs who care about privacy
Apache License 2.0

Develop Internal Messaging System #1

Open cammclain opened 4 months ago

cammclain commented 4 months ago

Overview

We're planning to develop an internal messaging system within the Conti System.

This system will prioritize privacy, client-side encryption, and seamless integration with our existing infrastructure.

Goals

• Privacy and Security: Implement robust encryption methods ensuring both in-transit and at-rest data protection. Aim for client-side decryption to enhance privacy.
• Integration: Seamlessly embed the messaging system within our current application, making it an integral part of our user experience.
• Customization: Allow full control over the messaging system, enabling tailored features and functionalities.
• Scalability: Ensure the system can scale with our user base while maintaining performance and reliability.

Requirements

Privacy and Security

Encryption:

Utilize advanced encryption methods such as Elliptic Curve Cryptography (ECC) or post-quantum cryptography (e.g., NTRU, McEliece).
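
For illustration, client-side encryption built on an ECC construction could look something like the PyNaCl sketch below (Curve25519 sealed boxes). The library choice and key handling here are assumptions for the example, not decisions; the point is that only the recipient's public key ever leaves the client.

from nacl.public import PrivateKey, SealedBox

# Recipient generates a Curve25519 key pair; the private key never leaves their device
recipient_key = PrivateKey.generate()

# Sender encrypts on the client before anything is handed to the server
sealed = SealedBox(recipient_key.public_key).encrypt(b"draft of the internal report")

# Only the recipient's private key can open the sealed box
plaintext = SealedBox(recipient_key).decrypt(sealed)
print(plaintext.decode("utf-8"))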

Potential Features

cammclain commented 4 months ago

Internal Messaging

With this project, there is a general expectation that things within the application will move slower than the rapid response times we are used to from clear-web pages accessed without a VPN or Tor.

The layers of networking configuration lead to some abnormal behaviors, and we generally make deliberate, focused efforts to account for them.

My examples are falling apart but my point is:

Generally speaking, rather than accept these problems as they are, we make an effort to change the situation somehow.

A different approach for the internal messaging system UX

My proposal for the internal messaging system is this: rather than spending so much effort trying to change the situation we have been given, I think we would actually benefit from playing into the situation created by these seemingly negative circumstances.

Basically, my proposal is a little bit insane. It genuinely sounds unappealing on paper, but:

I don't think we should attempt to replicate the user experience of something like, say, Telegram, which, if you ignore all the sketchy privacy stuff, is probably the best modern chat app by far in terms of user experience.

On these instant-messaging-focused platforms, the best interactions and experiences usually happen when both parties are online and in the chat at the same time.

We need to hurry up and take the L by admitting we cannot match the accessibility of Telegram, which is available on basically every major platform.

Therefore, those fast-paced, real-time interactions will be fewer and farther between for us.

This gets even worse when you realize that part of what makes Telegram so great is its speed, which is diminished when your client needs to make six hops to get new messages from the server. It totally kills the "real-time chat" feel.

So I think I have sufficiently outlined why an instant-messaging-based UI/UX is not desirable for us.

That is where the email style communication platform comes in.

Oh yes. I'm talking subject lines and huge sections of beautiful Markdown content. I'm talking files attached to messages. I'm talking asynchronous communication.

With email, it is basically expected that you will not receive a reply immediately. That expectation is baked into (at least my) understanding of how email works.

The goal is to drive more detailed & effective communication between members while minimizing unnecessary updates and notifications.

Now, you will likely take more time composing your reply to your coworker, since it may be the last message you get to send them before they go to bed, leaving you to wait 24 hours or so for an answer.

AI integration with messages

Rather than a chatbot like the ones Mattermost or Discord may have, I intend to create a RAG-style agent framework that you interact with as if it were another member of the team you talk to on this email-style system.

Basically, this RAG agent would run on local LLMs, so since it's already slow, let's own it and focus on extremely high-quality results.
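
As a rough sketch of what "RAG over local models" could mean here (the keyword-overlap scoring and function names below are illustrative assumptions, standing in for a real embedding-based retriever):

def retrieve_context(query, documents, k=3):
    # Toy retrieval step: score internal documents by keyword overlap with the query.
    # A real implementation would use embeddings from a locally hosted model instead.
    query_terms = set(query.lower().split())
    scored = [(len(query_terms & set(doc.lower().split())), doc) for doc in documents]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for score, doc in scored[:k] if score > 0]

def build_prompt(query, documents):
    # Stuff the retrieved internal knowledge into the prompt that goes to the local LLM
    context = "\n---\n".join(retrieve_context(query, documents))
    return f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {query}"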

High-quality output is achieved through iterative drafting of any reply email the agent sends to a user.

Ideally the agents would have some kind of mini version control for the reply (which is produced in Markdown format).

The agents would keep a "current working version" of the reply, as well as the previous iterations of the reply to this prompt, in active memory.

The agent would aggressively compare the current working version of the reply against the prompt the user provided, a set of the most recent iterations, and a string value representing the user's assumed goal.

This assumed goal is what the agent believes the user's desired output to be, based on the prompt provided.

The agents extract this goal value from the initial prompt because prompting is hard, and sometimes we don't even know what it is we want.

By performing aggressive quality checks against the current working version of the reply, the agents can recognize whether they are making editorial progress or have gone off on an unnecessary tangent; if so, they can roll back to a previous version of the reply that is more in line with the original goals and intent of the prompt and keep moving forward.
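
A minimal sketch of that mini version control idea, assuming a hypothetical quality score in [0, 1] produced by comparing a draft against the prompt, the recent iterations, and the assumed goal (the class and method names here are mine, not existing code):

from dataclasses import dataclass, field

@dataclass
class ReplyHistory:
    # Keeps every iteration of a Markdown reply plus the quality score it received
    goal: str                                      # the user's assumed goal, extracted from the prompt
    versions: list = field(default_factory=list)   # list of (markdown_text, score) tuples

    def commit(self, markdown_text, score):
        self.versions.append((markdown_text, score))

    def current(self):
        return self.versions[-1][0]

    def rollback_if_regressing(self):
        # If the latest draft scores worse than an earlier one, fall back to the best so far
        best_text, best_score = max(self.versions, key=lambda v: v[1])
        if self.versions[-1][1] < best_score:
            self.versions.append((best_text, best_score))
        return self.current()

Each redraft would be committed with its score; a regression triggers a rollback instead of yet another forward edit down a tangent.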

In addition to refining the output via iterative generation, we are also making use of high-quality prompts, like the patterns in the fabric repository, which are gold.

Generally speaking, Daniel, the creator of fabric, gets it when it comes to AI. I think we agree on what it should be, where it should go, and how it should be used.

These agents are trained on:

It is critical that these agents have the ability to cite their sources and reference where they got information, using links to both internet sources and internal knowledge from within the platform as a whole.
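
One possible shape for carrying those citations alongside a reply (the Citation structure below is an assumption for illustration only, covering both external URLs and internal platform references):

from dataclasses import dataclass

@dataclass
class Citation:
    label: str      # short human-readable name of the source
    location: str   # external URL or an internal platform reference

def append_citations(reply_markdown, citations):
    # Append footnote-style links so every claim in the reply stays traceable
    if not citations:
        return reply_markdown
    lines = ["", "---", "Sources:"]
    lines += [f"{i}. [{c.label}]({c.location})" for i, c in enumerate(citations, start=1)]
    return reply_markdown + "\n".join(lines)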

However, the big kicker comes when you realize the user interface intends to emulate the UI/UX of a standard web-based email client, not an instant-messaging chat application like most other AI UIs.

What's the difference?

But I believe that

cammclain commented 4 months ago

🚀 Develop an Internal Messaging System

Overview

We are planning to develop an internal messaging system within the Conti System. This system will prioritize privacy, client-side encryption, and seamless integration with our existing infrastructure.

Goals

Requirements

Privacy and Security

from nacl.public import PrivateKey, Box

# Each party generates its own Curve25519 key pair
sender_key = PrivateKey.generate()
recipient_key = PrivateKey.generate()

# The sender encrypts with their private key and the recipient's public key
message = b"Secret message"
sender_box = Box(sender_key, recipient_key.public_key)
encrypted = sender_box.encrypt(message)

# The recipient decrypts with their private key and the sender's public key
recipient_box = Box(recipient_key, sender_key.public_key)
decrypted = recipient_box.decrypt(encrypted)
print(decrypted.decode('utf-8'))

Potential Features

Internal Messaging

Expected Problems and Solutions

  1. Problem: Long page refresh time for UI over Tor.

    • Solution: Use Svelte instead of React and send smaller responses to the client for faster overall performance.
    <script>
        import { onMount } from 'svelte';
    
        let message = '';
    
        onMount(async () => {
            const res = await fetch('/api/message');
            message = await res.text();
        });
    </script>
    
    <main>
        <p>{message}</p>
    </main>
  2. Problem: Locally hosted LLMs require good hardware or are slow.

    • Solution: Wait for the model to finish its reply to a prompt before passing the response to the user in the UI, instead of streaming the text in real time (see the sketch after this list).
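
As a minimal sketch of the non-streaming approach, assuming the local model is served by something like an Ollama-style /api/generate endpoint (that endpoint and the model name are assumptions, not part of the current stack):

import requests

def generate_full_reply(prompt):
    # Block until the local model has produced the entire reply, then return it in one piece
    resp = requests.post(
        "http://localhost:11434/api/generate",   # assumed local inference server
        json={"model": "llama3", "prompt": prompt, "stream": False},  # stream=False: no token streaming
        timeout=300,
    )
    resp.raise_for_status()
    return resp.json()["response"]

# The UI only shows the reply once it is complete
print(generate_full_reply("Summarize the latest internal report."))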

Proposed UX Approach

Rather than focusing on replicating the user experience of instant messaging platforms like Telegram, we propose an email-style communication platform. This approach leverages asynchronous communication, allowing for more detailed and effective communication between team members.

Key Features:

AI Integration

We intend to create a Retrieval-Augmented Generation (RAG) style agent framework that users interact with as if it were another team member.

Key Elements:

def refine_reply(prompt, iterations=3):
    # Iteratively redraft the reply a fixed number of times before it is shown to the user
    current_version = generate_initial_reply(prompt)
    for _ in range(iterations):
        current_version = iterate_reply(current_version, prompt)
    return current_version

def generate_initial_reply(prompt):
    # Placeholder for the local LLM's first draft of the reply
    return f"Initial reply to: {prompt}"

def iterate_reply(current_version, prompt):
    # Placeholder for a refinement pass that compares the current draft against the prompt
    return f"Refined reply to: {prompt} based on: {current_version}"

Conclusion

This proposal outlines the development of a privacy-focused, seamlessly integrated, customizable, and scalable internal messaging system. By adopting an email-style communication platform and leveraging AI integration, we aim to enhance detailed and effective communication within Conti System while maintaining high standards of privacy and security.