AnanthaNatarajan opened 4 years ago
I have the same issue. I can't run inference on Colab either.
I managed to run inference on Google Colab. Here's how I did it:
after the compilation step, download the attached Python file (run_LR.py, a modified version of run.py) and copy it into the project directory /CAR.
Make sure to create a folder containing all the low-resolution images you want to upscale, in PNG format.
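As a quick sanity check before running inference, a short stdlib-only snippet can confirm the folder holds only PNG files. This helper is purely illustrative (the function name is mine, not part of the CAR repository):

```python
from pathlib import Path

def non_png_files(img_dir):
    """Return names of files in img_dir that are not PNGs.

    run_LR.py (per the steps above) expects PNG inputs; this is a
    hypothetical pre-flight check, not part of the repo itself.
    """
    return sorted(p.name for p in Path(img_dir).iterdir()
                  if p.is_file() and p.suffix.lower() != ".png")
```

If this returns a non-empty list, convert or remove those files before running the command below.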
Then run this command:
python3 run_LR.py --scale 4 --img_dir path_to_images_folder --model_dir path_to_pretrained_models \
--output_dir path_to_output
I'm trying to use the trained model in Colab, but I get the following error during evaluation:
CUDA error at adaptive_gridsampler_kernel.cu:76 code=209(cudaErrorNoKernelImageForDevice) "cudaGetLastError()"
Colab specs:
nvcc: Cuda compilation tools, release 10.1, V10.1.243 (built Sun_Jul_28_19:07:16_PDT_2019)
GPU: "device: 0, name: Tesla K80, pci bus id: 0000:00:04.0, compute capability: 3.7"
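For what it's worth, CUDA error code 209 (cudaErrorNoKernelImageForDevice) usually means the compiled extension contains no kernel image for the running GPU's architecture, i.e. the adaptive_gridsampler extension was likely built without sm_37 support for the Tesla K80 above. A hedged sketch of the nvcc flags that would need to be present at build time (the helper function is mine, not from the repo):

```python
def gencode_flags(compute_caps):
    """Build nvcc -gencode arguments for the given compute capabilities,
    e.g. ["3.7"] for the Tesla K80 reported above.

    Illustrative only; with PyTorch C++/CUDA extensions the usual
    shortcut is to set TORCH_CUDA_ARCH_LIST="3.7" before rebuilding,
    which generates equivalent flags.
    """
    flags = []
    for cap in compute_caps:
        sm = cap.replace(".", "")  # "3.7" -> "37"
        flags.append(f"-gencode=arch=compute_{sm},code=sm_{sm}")
    return flags

print(gencode_flags(["3.7"]))
# -> ['-gencode=arch=compute_37,code=sm_37']
```

Rebuilding the extension with the K80's architecture included may resolve the error, assuming the rest of the toolchain matches.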
Please let me know the exact specs required to run the trained model.