Closed — Shreyanand closed this pull request 1 year ago
@codificat @suppathak I added the demo notebook with the suggested changes. The notebook is split into two sections: FAQ included and FAQ not included. I could split it into two notebooks if this division makes the notebook hard to understand... I'm gonna work on the qualitative evaluation of the output next.
@Shreyanand @codificat, I added the evaluation metrics table in the demo nb. PTAL! Thanks!
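For context, the Pipfile below pulls in metric packages like `rouge`, `jiwer`, and `bert-score`, so the table presumably compares generated answers against reference answers. A minimal, stdlib-only sketch of one such metric (SQuAD-style token-overlap F1 — the notebook's actual metrics may differ):

```python
from collections import Counter

def token_f1(prediction: str, reference: str) -> float:
    """SQuAD-style token-overlap F1 between a generated and a reference answer."""
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    # Multiset intersection counts each shared token at most min(count) times
    common = Counter(pred_tokens) & Counter(ref_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

# e.g. token_f1("the cat sat", "the cat") -> 0.8
```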
I added a few comments/suggestions to one of the notebooks, but I believe they apply to both.
One quick comment about dependencies
@codificat Any idea why there's a merge conflict here?
Yep, some changes to the Pipfile happened in #16 - I believe some of them were an oversight (like removing haystack and python-dotenv). The resulting conflict looks like this:
```toml
[packages]
awscli = "*"
pypdf2 = "*"
<<<<<<< HEAD
ipywidgets = "*"
farm-haystack = {extras = ["colab", "faiss", "preprocessing"], version = "*"}
ipynb = "*"
boto3 = "*"
python-dotenv = "*"
=======
>>>>>>> e8b840d (Added a notebook for Qa evaluation metrics)
nltk = "*"
rouge = "*"
jiwer = "*"
evaluate = "*"
langchain = "*"
openai = "*"
chromadb = "*"
unstructured = "*"
bert-score = "*"
<<<<<<< HEAD
seaborn = "*"
=======
>>>>>>> e8b840d (Added a notebook for Qa evaluation metrics)
```
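For reference, resolving in favor of this PR's side (HEAD) would reduce the conflicted region to something like this (a sketch of the resolution, not the authoritative file):

```toml
[packages]
awscli = "*"
pypdf2 = "*"
ipywidgets = "*"
farm-haystack = {extras = ["colab", "faiss", "preprocessing"], version = "*"}
ipynb = "*"
boto3 = "*"
python-dotenv = "*"
nltk = "*"
rouge = "*"
jiwer = "*"
evaluate = "*"
langchain = "*"
openai = "*"
chromadb = "*"
unstructured = "*"
bert-score = "*"
seaborn = "*"
```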
You will have to fix that manually: rebase, fix, commit, push. I believe the version you have in this PR is correct; we just need to let git know that, because the file has been changed both in master and here.
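In shell terms, that manual fix is roughly the following (remote and branch names are illustrative; adjust to your fork setup):

```shell
# Rebase this PR's branch onto the latest master and resolve the conflict.
git fetch origin                  # get the latest master
git rebase origin/master          # replay the PR commits on top of it
# git stops at the conflicted Pipfile; edit it to keep this PR's
# version of the [packages] entries, then tell git it is resolved:
git add Pipfile
git rebase --continue
# Force-push is needed because the rebase rewrote the branch history.
git push --force-with-lease
```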
Fixes #24. This PR adds a notebook that creates a solution for the ROSA use case using state-of-the-art OpenAI models. For clarity, only qualitative evaluation is present in this notebook; quantitative metrics are in another PR.