llSourcell / Doctor-Dignity

Doctor Dignity is an LLM that can pass the US Medical Licensing Exam. It works offline, it's cross-platform, & your health data stays private.
Apache License 2.0
3.84k stars, 403 forks

Missing ethical disclosures #13

Open 0xabad1dea opened 1 year ago

0xabad1dea commented 1 year ago

I totally get that your goal is to help people who can't afford a doctor, can't physically get to a doctor right now, etc. But like.

You can't just make a robot that dispenses medical advice with absolutely no disclosures of risk. I do not see anything in the readme, and nothing skimming through the notebook: nothing about the fact that LLMs are a highly experimental technology that is highly prone to confabulation.

You need an ethics disclosure in both the readme and the interface and it needs to be EXTREMELY obvious.

It should be physically impossible to engage with this LLM without being clear on the fact it may just tell you to poison yourself. Right now the only safety net is that you need to be computer-literate enough to run a pip command.
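Even a hard acknowledgement gate in front of the chat loop would be better than nothing. A minimal sketch of what I mean (the `start_chat_loop` call at the end is a placeholder, not this repo's actual entry point, and the disclaimer wording is just an example):

```python
# Hypothetical sketch, not this repo's actual code: force the user to
# acknowledge a risk disclosure before any prompt can reach the model.

DISCLAIMER = (
    "Doctor Dignity is an experimental language model, NOT a medical\n"
    "professional. It can produce confident-sounding but wrong or dangerous\n"
    "advice. Do not act on its output without consulting a licensed clinician."
)

def require_acknowledgement() -> None:
    """Block until the user explicitly types the acknowledgement phrase."""
    print(DISCLAIMER)
    while input('Type "I understand" to continue: ').strip().lower() != "i understand":
        print("You must acknowledge the disclaimer to use this software.")

if __name__ == "__main__":
    require_acknowledgement()
    # start_chat_loop()  # placeholder for the repo's actual chat entry point
```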

Also it wouldn't surprise me if this thing is just plain illegal in some jurisdictions but I'll let y'all figure that one out.

xyzeva commented 1 year ago

Related to #12. I absolutely agree. The video makes this even more unjustifiable.

> This can replace doctors and work way better than humans

xyzeva commented 1 year ago

Even better, the video says the following while jailbreaking LLaMA 2 to actually allow it to lie more absurdly:

> It's for security and for the LLM to not output bad things. We need to remove that for DoctorGPT.

0xabad1dea commented 1 year ago

what

Sqaaakoi commented 1 year ago

> Even better, the video says the following while jailbreaking LLaMA 2 to actually allow it to lie more absurdly:
>
> It's for security and for the LLM to not output bad things. We need to remove that for DoctorGPT.

this is a certified AI techbro moment... these safeguards are in place for a good reason

jjhaggar commented 1 year ago

Upon reviewing Siraj Raval's content on YouTube, including titles such as "I Built a Crypto Trading Bot with ChatGPT", "I Built a Hedge Fund Run by AI Agents", "I Built a Sports Betting Bot with OddsJam and ChatGPT", and "I Built a Sports Betting Bot with ChatGPT", one might wonder if this DoctorGPT is more about fame and views than actual medical help.

chunhualiao commented 1 year ago

The Internet is the same. People search the Internet for medical advice all the time. I am not sure if all search engines have ethics disclosures. If they already do, AI models can just copy them.

jjhaggar commented 1 year ago

> The Internet is the same. People search the Internet for medical advice all the time. I am not sure if all search engines have ethics disclosures. If they already do, AI models can just copy them.

I see your point, but DoctorGPT's specialized nature demands more caution and responsibility.

Here's a list of things that I think should be taken into account:

  1. Purpose: Search engines don't claim expertise; DoctorGPT does, implying more responsibility.
  2. Ethical Responsibility: As technology evolves, our commitment to its ethical use should too. Just because some platforms lack proper guidance doesn't mean we should follow suit.
  3. Trust: DoctorGPT, being specialized (and given what the author claims about its performance on the US Medical Licensing Exam), might be perceived as more trustworthy than generic search results. This could mislead users without clear disclaimers.
  4. Interactivity: AI models like DoctorGPT interact in real-time, tailoring answers based on inputs. This raises risks of giving specific but incorrect advice.
  5. Fabrication: LLMs can produce information that sounds right but is incorrect or harmful. Unlike static web articles, their outputs are dynamic and can't be cross-verified easily.
  6. Legal Implications: There's a difference between providing search results and offering tailored medical advice. Some jurisdictions might have legal restrictions on the latter.

Sqaaakoi commented 1 year ago

> The Internet is the same. People search the Internet for medical advice all the time. I am not sure if all search engines have ethics disclosures. If they already do, AI models can just copy them.
>
> I see your point, but DoctorGPT's specialized nature demands more caution and responsibility.
>
> Here's a list of things that I think should be taken into account:
>
>   1. Purpose: Search engines don't claim expertise; DoctorGPT does, implying more responsibility.
>   2. Ethical Responsibility: As technology evolves, our commitment to its ethical use should too. Just because some platforms lack proper guidance doesn't mean we should follow suit.
>   3. Trust: DoctorGPT, being specialized (and given what the author claims about its performance on the US Medical Licensing Exam), might be perceived as more trustworthy than generic search results. This could mislead users without clear disclaimers.
>   4. Interactivity: AI models like DoctorGPT interact in real-time, tailoring answers based on inputs. This raises risks of giving specific but incorrect advice.
>   5. Fabrication: LLMs can produce information that sounds right but is incorrect or harmful. Unlike static web articles, their outputs are dynamic and can't be cross-verified easily.
>   6. Legal Implications: There's a difference between providing search results and offering tailored medical advice. Some jurisdictions might have legal restrictions on the latter.

this reads like it was written by an LLM

jjhaggar commented 1 year ago

> this reads like it was written by an LLM

Haha, yeah, good eye! :) Since English is not my native language, I asked ChatGPT to help me translate and organize my ideas about the issue. Sometimes it's kinda hard for me to write correctly in English (and this is an important matter!).