Closed: peiwenhuang27 closed this issue 3 years ago.
First, let me explain why I am using the xhost
command. Simply put, it is an additional command needed to access the GUI of the host PC from inside Docker. I was just envisioning an integrated environment for running and testing the converted model inside a container, so there is no need to force its use. By the way, if you need a GUI for your integrated verification environment, the following article may be helpful. I own a Mac, but I haven't tested it working yet. I'm sorry.
If you do not need a GUI, you can use the following command. The current folder will be mounted in the container at /home/user/workdir.
docker run -it --rm \
-v `pwd`:/home/user/workdir \
pinto0309/tflite2tensorflow:latest
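If you do want the GUI from a Mac, an untested sketch (my assumption, not something the author confirmed: it relies on Docker Desktop's host.docker.internal name and a running XQuartz with "Allow connections from network clients" enabled) might look like:

```shell
# Sketch only: assumes XQuartz is installed and running on the Mac host,
# with network client connections allowed in its preferences.
xhost +localhost                       # let X clients from localhost connect
docker run -it --rm \
  -e DISPLAY=host.docker.internal:0 \
  -v "$(pwd)":/home/user/workdir \
  pinto0309/tflite2tensorflow:latest
```

Quoting "$(pwd)" instead of bare backticks keeps the mount working when the current path contains spaces.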
For example, after running docker run, you will see the files from your host PC under /home/user/workdir inside the container.
Added to README: 06eec0e43badd25e96bd25aa1c47fc4a39ff6b4c
Thank you so much for the quick reply and detailed explanation (please do pardon my previous misunderstanding 😅). As I don't need a GUI, I successfully launched the container with my current folder mounted using
docker run -it --rm \
-v `pwd`:/home/user/workdir \
pinto0309/tflite2tensorflow:latest
as you suggested.
I did run into another error though (The UNIDIRECTIONAL_SEQUENCE_LSTM layer is not yet implemented.)
As it is not related to this issue, I will close this and report another one. Thanks again 🙏🏼
You can solve this by installing the xhost software (on macOS it ships with XQuartz), then adding /opt/X11/bin
to your terminal PATH.
I was able to solve this on my macOS M1.
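As a quick sanity check that the PATH addition took effect, a small shell sketch (assuming XQuartz's default install prefix /opt/X11) could be:

```shell
# Assumes XQuartz's default install location /opt/X11 on macOS.
export PATH="/opt/X11/bin:$PATH"

# Verify the directory is actually on PATH before relying on xhost.
case ":$PATH:" in
  *":/opt/X11/bin:"*) echo "PATH ok" ;;
  *)                  echo "PATH missing /opt/X11/bin" ;;
esac
```

To make the change permanent, the export line would go in your shell profile (e.g. ~/.zshrc).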
OS you are using: macOS 11.4
Hi, first of all, thank you so much for this toolkit! It is exactly what I've been looking for. However, I couldn't successfully use tflite2tensorflow for conversion in my Docker environment. I was able to run
and downloaded the image, but I encountered an error upon the command
The error reads:
I suppose this is a command specific to the Linux environment, and I was wondering if there is an equivalent command that is executable on macOS?
Side note: after the error, I tried directly running
and it did open a TensorRT/OpenVINO virtual environment, but I do not need my model to run on TensorRT for further inference. In addition, I couldn't access my model file stored on my host machine. My understanding is that the command I was unable to run should somehow mount my host machine's file system into the virtual environment, so without the
xhost
command, perhaps I can use some other way, such as ssh or ftp, to upload my file onto the virtual environment?