This repository contains the implementation of our full paper accepted for the IEEE EMBC 2024 Conference.
You can access the article on arXiv.
A Streamlit-Powered Chat Tool for Interpreting Multiple PDF Documents Using Four Large Language Models.
Download and install Miniconda: https://docs.conda.io/en/latest/miniconda.html
conda create -n MedDoc-Bot python=3.11
conda activate MedDoc-Bot
| System | GPU | Command |
|---|---|---|
| Windows | NVIDIA | pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121 |
| Windows | CPU only | pip3 install torch torchvision torchaudio |
The up-to-date commands can be found here: https://pytorch.org/get-started/locally/.
For NVIDIA, you also need to install the CUDA runtime libraries:
conda install -y -c "nvidia/label/cuda-12.1.1" cuda-runtime
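Once PyTorch is installed, you can run a quick sanity check that the CUDA device is visible (plain PyTorch calls, nothing specific to this repository):

```python
# check_gpu.py - verify that PyTorch was installed with working CUDA support
import torch

print("PyTorch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    # Name of the first visible GPU, e.g. "NVIDIA GeForce RTX 3090"
    print("Device:", torch.cuda.get_device_name(0))
```

If `CUDA available` prints `False` on an NVIDIA system, re-check the cu121 index URL and the CUDA runtime step above.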
git clone https://github.com/yaseen28/MedDoc-Bot
cd MedDoc-Bot
pip install -r requirements.txt
Download the four quantized models (GGUF format):
(i) Llama-2 {Version: llama-2-13b.Q5_K_S.gguf} Link
(ii) MedAlpaca {Version: medalpaca-13b.Q5_K_S.gguf} Link
(iii) Meditron {Version: meditron-7b.Q5_K_S.gguf} Link
(iv) Mistral {Version: mistral-7b-instruct-v0.2.Q5_K_M.gguf} Link
NOTE: Please ensure that you rename each model file to match the name listed in the 'Select Model' dropdown in the browser.
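The app loads these GGUF files itself, but if you want to sanity-check a downloaded model outside the tool, a minimal sketch assuming llama-cpp-python (not necessarily the backend this repository uses; the models/ path and prompt are illustrative only):

```python
# gguf_check.py - minimal sketch; assumes llama-cpp-python is installed
# (pip install llama-cpp-python). The "models/" path is illustrative,
# not the repository's actual layout.
from llama_cpp import Llama

llm = Llama(model_path="models/llama-2-13b.Q5_K_S.gguf", n_ctx=2048)
out = llm("Q: What is pediatric hypertension? A:", max_tokens=64)
print(out["choices"][0]["text"])
```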
conda activate MedDoc-Bot
cd MedDoc-Bot
streamlit run Main_MedDoc-Bot.py
You can now view your Streamlit app in your browser.
Local URL: http://localhost:8501
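If port 8501 is already in use, Streamlit's standard --server.port flag lets you choose another one, for example:
streamlit run Main_MedDoc-Bot.py --server.port 8502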
Default login credentials:
Username: User
Password: User@123
If you find our work useful, please consider citing it.
@misc{jabarulla2024meddocbot,
  title={MedDoc-Bot: A Chat Tool for Comparative Analysis of Large Language Models in the Context of the Pediatric Hypertension Guideline},
  author={Mohamed Yaseen Jabarulla and Steffen Oeltze-Jafra and Philipp Beerbaum and Theodor Uden},
  year={2024},
  eprint={2405.03359},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}