Open yuqie opened 1 year ago
Hm, I think it should be port 8935, not 8359. Are you able to visit the link http://127.0.0.1:8935/traces/01H9PYEQQ6J0HVA4QVMCAG32BT in your regular browser, outside of the Jupyter notebook?
Thanks for your answer!
It's a typo; I mapped container port 8935 to host port 1233. I cannot visit http://127.0.0.1:8935/traces/01H9PYEQQ6J0HVA4QVMCAG32BT or http://host:8935/traces/01H9PYEQQ6J0HVA4QVMCAG32BT.
I have tried X11 forwarding for a GUI in the Docker container:
docker run --gpus all -it --net=host --privileged -v $HOME/.Xauthority:/root/.Xauthority -v /tmp/.X11-unix:/tmp/.X11-unix -v /tmp/.docker.xauth:/tmp/.docker.xauth -e DISPLAY=$DISPLAY -e GDK_SCALE -e GDK_DPI_SCALE --name docker_name docker_image:v /bin/bash
xeyes runs successfully in this container, but I get a kernel error when I try the visualizer Jupyter demo.
This time I can visit the link http://hostname:8935/traces/01H9Q7MRDTARQDDTVPBQJ6HJQ7
I also tried running the demo from the command line with import langchain_visualizer and langchain_visualizer.visualize(demo), and got a Segmentation fault (core dumped) error.
The script is below; could you give me some suggestions? Thanks.
import langchain_visualizer
import os

HUGGINGFACEHUB_API_TOKEN = "**"
os.environ["HUGGINGFACEHUB_API_TOKEN"] = HUGGINGFACEHUB_API_TOKEN

from langchain.vectorstores import FAISS
from langchain.embeddings.huggingface import HuggingFaceEmbeddings
from langchain.chains import RetrievalQA
from langchain.llms import OpenLLM

async def QA_demo():
    embedding = HuggingFaceEmbeddings(model_name='/home/huggingface/sentence-transformers_all-mpnet-base-v2')
    docsearch = FAISS.load_local("./", embedding)
    llms = OpenLLM(model_name='flan-t5', model_id='./huggingface/flan-t5-small')
    qa = RetrievalQA.from_chain_type(llm=llms, chain_type="refine", retriever=docsearch.as_retriever())
    query = "What did the president say about Ketanji Brown Jackson"
    return qa.run(query)

langchain_visualizer.visualize(QA_demo)
Interesting. Is the Docker container long-lived, or does it exit after the langchain script exits? The UI is just a webpage served by the ICE server, so there should be no need for X11 forwarding as long as you can forward the ICE server's ports. However, that will be a problem if the container exits and kills the ICE server after the langchain script finishes. If that is the case, you may want to start the Docker container with python -m ice.server start so that the server keeps running, and then use docker exec to execute any Python scripts you want.
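That setup might be sketched roughly like this (the container name, image name, and script path are placeholders, not from the thread):

```shell
# Keep the container alive by making the ICE server its main process.
# "ice_container", "my_image:v", and "my_script.py" are hypothetical names.
docker run -d --name ice_container -p 1233:8935 my_image:v \
    python -m ice.server start

# Run langchain scripts against the long-lived server in that container.
docker exec -it ice_container python my_script.py
```

Because the server is PID 1, the container stays up between script runs instead of exiting with the script.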
If you open a shell inside the Docker container, are you able to get any response from port 8935?
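One quick way to check that from inside the container is a plain TCP connect; the helper name below is my own, not part of ICE or langchain-visualizer:

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Inside the container, the ICE server should answer on 8935;
# from the host, use the mapped host port instead.
print(port_open("127.0.0.1", 8935))
```

If this prints False inside the container, the ICE server isn't running (or isn't listening on 8935), and no amount of port forwarding on the host will help.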
Something was wrong with the machine; now I can run the code and get the result. The trace still cannot be rendered in the Jupyter notebook page, but I can visit http://host:8935/traces/01H9PYEQQ6J0HVA4QVMCAG32BT.
Thanks for your answer!
Ok, no problem! If you want, you can also try setting
from ice import settings as ice_settings
ice_settings.settings.OUGHT_ICE_HOST = "host"
to see if Jupyter then displays the visualization from host instead of 127.0.0.1.
Hi, I use langchain in a Docker container, with container port 8359 mapped to host port 1233, and 8888 mapped to host port 1231. But when I run the Jupyter notebook demo, the result isn't rendered.
langchain 0.0.239 and langchain-visualizer 0.0.29 are used.
Could anyone tell me how I can get the rendered result?
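For reference, the mapping described above could be written as follows (the image name is a placeholder; note that, as pointed out later in the thread, the ICE server's container-side port should be 8935, and the trace links rendered in the notebook still point at the container-side port):

```shell
# Hypothetical invocation: publish Jupyter (8888) and the ICE server (8935)
# on the host ports mentioned in the report.
docker run -it -p 1231:8888 -p 1233:8935 my_image:v /bin/bash
```

With this mapping, the trace UI is reachable from the host at port 1233 rather than 8935, which is why links of the form http://127.0.0.1:8935/... fail from outside the container.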