Open JacobFV opened 3 months ago
I don't think this is reasonable, honestly. LLMs are still products of the training data they are created from, and that training data has already been shown to be biased and far from impartial. My reading of the t/prog philosophy is that it is a human system enhanced by technology, not a technological system that runs a human system.
k 👍 - Jacob
LLMs in this case would train on previous voting data, with inputs such as demographic markers and other factors. The inherent bias of that data would skew the output one way or the other, leading to some of the same issues we see now. I agree with @vikingSec.
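To make the concern concrete, here is a minimal, purely illustrative sketch (all names and numbers are fabricated, and a real LLM pipeline would be far more complex): a trivial per-group majority predictor fit on synthetic "historical voting" rows in which the groups are skewed differently. The learned model simply echoes the skew of its training data, which is the failure mode being described.

```python
# Hypothetical sketch: a trivial "model" trained on fabricated historical
# voting data. It learns nothing beyond the per-group majority, so any
# bias in the training rows is reproduced verbatim at prediction time.
from collections import Counter

# Fabricated training rows: (demographic_marker, recorded_vote).
# Group "A" skews heavily "yes", group "B" skews "no".
history = ([("A", "yes")] * 70 + [("A", "no")] * 10 +
           [("B", "yes")] * 5 + [("B", "no")] * 15)

def fit_majority(rows):
    """Learn the most common vote for each demographic group."""
    counts = {}
    for group, vote in rows:
        counts.setdefault(group, Counter())[vote] += 1
    return {g: c.most_common(1)[0][0] for g, c in counts.items()}

model = fit_majority(history)
print(model)  # → {'A': 'yes', 'B': 'no'}: the historical skew, echoed back
```

The same dynamic applies, less transparently, to a large model: whatever correlations exist between demographic inputs and past outcomes get baked into its predictions.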
No direct human voting. Only LLM prompt networks would determine judicial/executive decisions and what information to weigh in those decisions. The Internet Computer supports running LLMs on-chain.