Couldn't open CUDA library libcupti.so.8.0


#1

2018-07-02 12:11:35.582536: F tensorflow/compiler/xla/statusor.cc:33] Attempting to fetch value instead of handling error Failed precondition: could not dlopen DSO: libcupti.so.8.0; dlerror: libcupti.so.8.0: cannot open shared object file: No such file or directory

I have already seen and solved this error before, both on a physical local machine and on Google Cloud Compute Engine, by adding this command to my init script:
export LD_LIBRARY_PATH=/usr/local/cuda/extras/CUPTI/lib64:$LD_LIBRARY_PATH
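
In case it helps anyone else, a slightly more defensive version of that init-script line (just a sketch, assuming the standard CUDA install location; CUDA_HOME here is my own variable name) is:

    # Add the CUPTI directory to the loader path only if it actually exists
    # (assumes the usual /usr/local/cuda layout; adjust CUDA_HOME otherwise)
    CUDA_HOME=/usr/local/cuda
    if [ -d "$CUDA_HOME/extras/CUPTI/lib64" ]; then
      export LD_LIBRARY_PATH="$CUDA_HOME/extras/CUPTI/lib64:$LD_LIBRARY_PATH"
    fi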

Now the error occurs again when running a project inside Google Colab, but the same command doesn't work because that path doesn't exist: there is no cuda folder inside /usr/local/. I thought that by using a Deepo image the created environment would always be the same, with the same directories and files, so it's strange to see these differences.
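
These are the kinds of checks I would run on the Colab machine to see what is actually there (purely diagnostic commands, not a fix; the exact paths on Colab may differ):

    # List whatever CUDA installations exist under /usr/local
    ls -d /usr/local/cuda* 2>/dev/null

    # Ask the dynamic linker whether any libcupti is registered at all
    ldconfig -p | grep -i cupti

    # Fall back to a filesystem search for the library
    find / -name 'libcupti.so*' 2>/dev/null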

Is this a bug, or is it intentional?


#2

Google Colab itself runs inside a Docker container, so it's not possible for Clouderizer to create Docker inside Docker (theoretically it is possible, but it requires some pass-through from the host machine, which Google obviously didn't set up :slight_smile:).
Hence the setup on Colab is not an exact copy of what we do on other hardware using the Docker image. Moreover, the default Colab CUDA setup from Google is somewhat non-standard, as folders like /usr/local/cuda/ are missing.

Have you tried enabling the CUDA/CUDNN option in your project SETUP before running on Colab? This attempts to install CUDA in the standard way.
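
If that option doesn't help, one possible manual fallback inside the Colab session (just a sketch, assuming an Ubuntu-based image and that the distribution's libcupti-dev package is compatible with your TensorFlow build) is:

    # Install the CUPTI development package from the distribution repositories
    # (may ship a different libcupti version than the one TensorFlow expects)
    apt-get install -y libcupti-dev

    # Refresh the linker cache and confirm the library is now visible
    ldconfig
    ldconfig -p | grep -i cupti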


#3

Sorry for the late answer. I tried enabling CUDA/CUDNN and it works now. Thanks!