Open nasserdr opened 10 months ago
Hi @nasserdr 👋 did you find a solution?
There shouldn't be any FiftyOne-specific configuration required; it just loads the model and runs inference. If you can successfully run inference with any Torch/TF model directly in the same environment where you're trying to use `fob.compute_visualization()` or `dataset.compute_embeddings()` (or whatever you're attempting), then the latter should work too.
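One way to test this is a quick sanity check that PyTorch itself can see the CUDA driver in the environment where you run FiftyOne. This is a generic Torch check, not anything FiftyOne-specific (the helper name here is just for illustration); if it reports that CUDA is unavailable, the problem is in the Torch/CUDA install rather than in FiftyOne:

```python
import importlib.util


def cuda_sanity_report():
    """Report whether PyTorch can see a CUDA device in this environment."""
    # Guard the import so the check is useful even in a broken environment
    if importlib.util.find_spec("torch") is None:
        return "torch is not installed in this environment"
    import torch

    if torch.cuda.is_available():
        # torch sees the driver; FiftyOne should be able to use the GPU too
        return f"CUDA ok: {torch.cuda.get_device_name(0)}"
    # torch imports fine but cannot reach the CUDA driver
    return "torch installed, but CUDA is not available to it"


print(cuda_sanity_report())
```

If this prints that CUDA is unavailable, the usual culprits are a CPU-only Torch build, or a Torch wheel built against a different CUDA version than the installed driver supports.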
I am getting a "CUDA driver not found" error on my machine when running embeddings calculations.
Checking in my terminal, I can see that the CUDA driver is installed:
Is there a specific command I should use to link FiftyOne to NVIDIA, or should it work out of the box?
Thanks