fastenhealth / fasten-onprem

Fasten is an open-source, self-hosted, personal/family electronic medical record aggregator, designed to integrate with hundreds of thousands of insurance providers, hospitals, and clinics.
GNU General Public License v3.0

**ChatGPT-Like Offline Interface** for Querying Your Health Record #337

AnalogJ opened this issue 7 months ago

AnalogJ commented 7 months ago

Regarding the ChatGPT-like features, that's pretty far out on my roadmap currently; I want to add support for smart devices, wearables, and home medical devices first. The other thing that makes the ChatGPT feature complicated is that there are three "versions" of Fasten: self-hosted, desktop, and (eventually) cloud.

While a Docker version could easily integrate with a privateGPT service written in another language (like Python), it would be pretty complicated to make that work in the desktop version, and I haven't really found any privateGPT alternatives written in Go.

Investigate:

Notes from @cfu288's analysis of the Med-PaLM 2 LLM paper:

  • Developed Med-PaLM 2, a new medical LLM trained using a new base model (PaLM 2 [4]) and targeted medical domain-specific finetuning
  • Med-PaLM 2 achieved state-of-the-art results on several MultiMedQA benchmarks, including MedQA USMLE-style questions (86.5%)
  • Introduced ensemble refinement as a new prompting strategy to improve LLM reasoning (a rough sketch of the idea follows this list)
  • Human evaluation of long-form answers to consumer medical questions showed that Med-PaLM 2's answers were preferred over physician and Med-PaLM answers on eight of nine axes relevant to clinical utility, such as factuality, medical reasoning capability, and low likelihood of harm.
  • Med-PaLM 2 answers were judged to better reflect medical consensus 72.9% of the time compared to physician answers
  • Introduced two adversarial question datasets to probe the safety and limitations of these models. Med-PaLM 2 performed significantly better than Med-PaLM across every axis.
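
For context on the ensemble refinement bullet above: as the paper roughly describes it, the model first samples several reasoning paths for a question and is then prompted again, conditioned on those samples, to produce a refined final answer. Below is a rough sketch of that loop against a local Ollama model; the model name, prompt wording, and temperature values are illustrative assumptions, not anything from the paper or from Fasten.

```typescript
// Rough sketch of the ensemble-refinement idea, adapted to a local Ollama
// model: sample several step-by-step answers at a non-zero temperature, then
// ask the model for one refined answer conditioned on those drafts.
// Model name, prompts, and temperatures are illustrative.

async function generate(
  prompt: string,
  temperature: number,
  model = "meditron",
): Promise<string> {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model, prompt, stream: false, options: { temperature } }),
  });
  if (!res.ok) throw new Error(`Ollama request failed: ${res.status}`);
  const data = (await res.json()) as { response: string };
  return data.response;
}

async function ensembleRefine(question: string, samples = 5): Promise<string> {
  // Stage 1: several independent reasoning paths.
  const drafts = await Promise.all(
    Array.from({ length: samples }, () =>
      generate(`Answer the question, reasoning step by step.\n\nQuestion: ${question}`, 0.7),
    ),
  );

  // Stage 2: refine, conditioned on the question plus all drafts.
  const refinePrompt =
    `Question: ${question}\n\nCandidate answers:\n` +
    drafts.map((d, i) => `(${i + 1}) ${d}`).join("\n\n") +
    `\n\nUsing the candidate answers above, give a single refined final answer.`;
  return generate(refinePrompt, 0);
}
```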
nicholasburka commented 7 months ago

https://ollama.ai/library/meditron

Looking into using Meditron with LangChain for locally hosted embedding (semantic record search) and Q&A.
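
For reference, here's a minimal sketch of what the local embedding step could look like, calling Ollama's HTTP API directly (the LangChain Ollama integration wraps the same endpoint). The model name, port, and chunking approach are assumptions; a dedicated embedding model may be a better fit than meditron for this step.

```typescript
// Minimal sketch: embedding record text with a local Ollama server
// (default port 11434). Model choice and chunking are assumptions.

interface OllamaEmbeddingResponse {
  embedding: number[];
}

async function embed(text: string, model = "meditron"): Promise<number[]> {
  const res = await fetch("http://localhost:11434/api/embeddings", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model, prompt: text }),
  });
  if (!res.ok) throw new Error(`Ollama embeddings request failed: ${res.status}`);
  const data = (await res.json()) as OllamaEmbeddingResponse;
  return data.embedding;
}

// Usage idea: embed each record's narrative text once and store the vectors,
// then embed the user's question at query time for similarity search.
```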

Other medical LLMs include GatorTron, cui2vec, and some others.

Some private GPTs:

cfu288 commented 5 months ago

https://ollama.ai/library/meditron

  • Outperforms Med-PaLM and GPT-3.5 "on many medical reasoning tasks."

Just want to point out that the model I talked about above is Med-PaLM 2, not Med-PaLM. I'd be curious to see how well meditron performs against them, but since meditron is self-hostable, that's a huge advantage.


Been experimenting with Q&A and RAG in a PWA, and just wanted to point out a handy resource I've found. I've modified and adapted the Vector Storage npm package to act as a simple vector index that does similarity search across stored vectors. It uses cosine similarity and essentially iterates over vectors stored in memory. I've found that at the scale of data usually stored in a PHR, this is really fast and manageable.
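
To make that concrete, here is a stripped-down version of the idea (illustrative names, not the Vector Storage package's actual API): an in-memory index that brute-force scores every stored vector against the query by cosine similarity and returns the top-k matches.

```typescript
// Illustrative in-memory vector index: brute-force cosine similarity over all
// stored vectors, which stays fast at the document counts typical of a
// personal health record.

interface IndexedDoc {
  id: string;
  text: string;
  vector: number[];
}

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

class SimpleVectorIndex {
  private docs: IndexedDoc[] = [];

  add(doc: IndexedDoc): void {
    this.docs.push(doc);
  }

  // Linear scan; no approximate-nearest-neighbor structure needed at this scale.
  query(queryVector: number[], k = 5): IndexedDoc[] {
    return this.docs
      .map((doc) => ({ doc, score: cosineSimilarity(doc.vector, queryVector) }))
      .sort((a, b) => b.score - a.score)
      .slice(0, k)
      .map((entry) => entry.doc);
  }
}
```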

nicholasburka commented 5 months ago

Meditron "is within 5% of GPT-4 and 10% of Med-PaLM-2" (worse in performance across benchmarks). Paper here

Nice, thanks for sharing. Curious about any accuracy measures/observations you've found with your approach. I previously experimented with something similar using Synthea data, and performance was OK but also lacking; I don't think I've tried this on my own data yet.
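
For discussion's sake, here is roughly how the pieces could be glued into the RAG loop being described, reusing the `embed` helper and `SimpleVectorIndex` from the sketches above; the prompt wording and model name are illustrative assumptions, not a worked-out implementation.

```typescript
// Sketch of the RAG loop under discussion: embed the question, retrieve the
// most similar record chunks, and ask the local model to answer from that
// context only. Reuses embed() and SimpleVectorIndex from the sketches above.

async function answerQuestion(
  question: string,
  index: SimpleVectorIndex,
  model = "meditron",
): Promise<string> {
  const queryVector = await embed(question, model);
  const context = index
    .query(queryVector, 5)
    .map((doc) => doc.text)
    .join("\n---\n");

  const prompt =
    `Answer the question using only the patient records below.\n\n` +
    `Records:\n${context}\n\nQuestion: ${question}\nAnswer:`;

  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model, prompt, stream: false }),
  });
  if (!res.ok) throw new Error(`Ollama generate request failed: ${res.status}`);
  const data = (await res.json()) as { response: string };
  return data.response;
}
```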