langchain-ai / opengpts

"AI Safety" Regulatory Compliance #55

Closed: chadbrewbaker closed this issue 1 year ago

chadbrewbaker commented 1 year ago

There needs to be something in the README about regulatory compliance.

Real "AI Safety" is traditional cybersecurity.

Hopefully in the near future there will be a @tinygrad library of simple, formally verified parsers wrapped around the LLM process. That is true AI safety. As a bonus, you can use Valiant's result from the 1970s that CFG parsing reduces to Boolean matrix multiplication, and use TinyGrad for the parsing itself. Boolean MATMUL is the bottleneck: https://www.cs.cornell.edu/home/llee/talks/bmmtalk.pdf
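For context, a minimal sketch (not from this repo) of the CYK-style recognizer behind that reduction: each binary production step becomes a Boolean matrix product, so the recognizer is bottlenecked on Boolean MATMUL. The grammar, the naive fixpoint loop, and the use of numpy in place of TinyGrad are all illustrative assumptions, not a proposal for this codebase.

```python
import numpy as np

def recognize(word, terminal_rules, binary_rules, start):
    """CYK recognition for a grammar in Chomsky normal form.

    word           -- sequence of terminal symbols
    terminal_rules -- dict: terminal -> set of nonterminals A with A -> terminal
    binary_rules   -- list of (A, B, C) tuples for productions A -> B C
    start          -- start nonterminal
    """
    n = len(word)
    nonterminals = ({x for rule in binary_rules for x in rule}
                    | {a for s in terminal_rules.values() for a in s})
    # M[A][i, j] is True iff nonterminal A derives word[i:j] (j > i).
    M = {A: np.zeros((n + 1, n + 1), dtype=bool) for A in nonterminals}
    for i, t in enumerate(word):
        for A in terminal_rules.get(t, ()):
            M[A][i, i + 1] = True
    # Naive fixpoint: each pass applies every binary production once as a
    # Boolean matrix product; n passes cover spans up to length n.
    # (Valiant's construction folds this into a single fast MATMUL-based
    # closure; this loop only shows where the MATMUL bottleneck lives.)
    for _ in range(n):
        changed = False
        for A, B, C in binary_rules:
            product = (M[B].astype(int) @ M[C].astype(int)) > 0
            updated = M[A] | product
            if not np.array_equal(updated, M[A]):
                M[A] = updated
                changed = True
        if not changed:
            break
    return bool(M[start][0, n])

# Balanced parentheses in CNF: S -> L R | L X | S S,  X -> S R,  L -> '(',  R -> ')'
binary_rules = [("S", "L", "R"), ("S", "L", "X"), ("S", "S", "S"), ("X", "S", "R")]
terminal_rules = {"(": {"L"}, ")": {"R"}}
print(recognize("(())()", terminal_rules, binary_rules, "S"))  # True
print(recognize("(()", terminal_rules, binary_rules, "S"))     # False
```

The same Boolean products could presumably be expressed as TinyGrad tensor ops, which is the point of the suggestion.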

How should this messaging be reflected in the README to help users defend themselves from predatory "AI Safety" companies extorting them?

This is what the LLM proposed:

The Truth About "AI Safety"

@oliviazhu - your input would be helpful. I want this project to have FedRAMP-certified artifacts built from already-approved AWSLinux/RHEL packages.

hwchase17 commented 1 year ago

I'm not sure exactly what this is asking for. If the goal is to help users defend themselves from predatory "AI Safety" companies extorting them, I'm not sure the README of this package is the best place to reflect that. We have a section in the LangChain documentation on security; perhaps it is more applicable there.