Running multiple local versions of CUDA on Ubuntu without sudo privileges
I've been playing around with Tensorflow.js for my PhD (see my PhD Update blog post series), and I had some ideas that I wanted to test out on my own that aren't really related to my PhD. In particular, I've found this blog post to be rather inspiring - where the author sets up a character-based recurrent neural network to generate text.
The idea of transcoding all those characters to numerical values and back seems like too much work and too complicated just for a quick personal project though, so my plan is to try and develop a byte-based network instead, in the hopes that I can not only teach it to generate text as in the blog post, but valid Unicode as well.
Obviously, I can't really use the University's resources ethically for this (as it's got nothing to do with my University work) - so since I got a new laptop recently with an Nvidia GeForce RTX 2060, I thought I'd try and use it for some machine learning instead.
The problem here is that Tensorflow.js requires CUDA 10.0 specifically, but since I'm running Ubuntu 20.10 with all the latest patches installed, I have CUDA 11.1 instead. A quick search of the apt repositories on my system reveals nothing to suggest that I can install older versions of CUDA alongside the newer one, so I had to devise another plan.
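As an aside, if you're not sure which CUDA version you currently have installed system-wide, a few quick checks will tell you (assuming the usual Ubuntu locations - adjust if yours differ):
nvidia-smi                  # the header shows the driver version and the CUDA version it supports
nvcc --version              # reports the CUDA toolkit version currently on your PATH, if any
ls /usr/local/ | grep cuda  # system-wide toolkit installs usually end up here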
I discovered some months ago (while working with Viper - my University's HPC - for my PhD) that you can actually extract the contents of the CUDA .run installers without sudo privileges. By then fiddling with your PATH and LD_LIBRARY_PATH environment variables, you can get any program you run to look for the CUDA libraries elsewhere, instead of loading the default system libraries.
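To illustrate the idea before we get to the details, you can point a single program at a different set of CUDA libraries for just that one invocation like this (the path and program name here are hypothetical stand-ins):
LD_LIBRARY_PATH="/absolute/path/to/cuda-10.0/lib64" some_program
# the dynamic linker now searches that directory before the system-wide libraries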
Since this is the second time I've done this, I thought I'd document the process for future reference.
First, you need to download the appropriate .run installer for the CUDA libraries. In my case I need CUDA 10.0, so I downloaded mine from Nvidia's CUDA toolkit archive (https://developer.nvidia.com/cuda-toolkit-archive).
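If you'd rather download it from a terminal (handy on a headless box), something like this does the job - the URL below is a placeholder, so copy the real one for your chosen version from the download page:
wget "PASTE_INSTALLER_URL_HERE" -O cuda_10.0.130_410.48_linux.run
chmod +x cuda_10.0.130_410.48_linux.run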
Next, we need to create a new subdirectory and extract the .run file into it. Do that like so:
cd path/to/runfile_directory;
mkdir cuda-10.0
./cuda_10.0.130_410.48_linux.run --extract=${PWD}/cuda-10.0/
Make sure that the current working directory contains no spaces, and preferably no other special characters either. Also, adjust the file and directory names to suit your situation.
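Once that's run, a quick listing of the new directory shows what was extracted:
ls cuda-10.0/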
Once done, this will have extracted 3 sub-files, which also have the suffix .run. We're only interested in CUDA itself, so we only need to extract the one that starts with cuda-linux. Do that like so (adjusting the file and directory names as before):
cd cuda-10.0;
./cuda-linux.10.0.130-24817639.run -noprompt -prefix=$PWD/cuda;
rm *.run;
mv cuda/* .;
rmdir cuda;
If you run ./cuda-linux.10.0.130-24817639.run --help, the help text is actually somewhat deceptive, since it contains a typo! I've corrected it in the command above though. Once done, this should leave the CUDA libraries in the current working directory - that is, in a subdirectory next to the original .run file:
+ /path/to/some_directory/
    + ./cuda_10.0.130_410.48_linux.run
    + cuda-10.0/
        + version.txt
        + bin/
        + doc/
        + extras/
        + ......
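At this point you can sanity-check the extracted toolkit by asking its bundled copy of nvcc what version it is (paths assume the layout above):
cat cuda-10.0/version.txt
cuda-10.0/bin/nvcc --version    # should report release 10.0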
Now, it's just a case of fiddling with some environment variables and launching your program of choice. You can set the appropriate environment variables like this:
export PATH="/absolute/path/to/cuda-10.0/bin:${PATH}";
if [[ ! -z "${LD_LIBRARY_PATH}" ]]; then
export LD_LIBRARY_PATH="/absolute/path/to/cuda-10.0/lib64:${LD_LIBRARY_PATH}";
else
export LD_LIBRARY_PATH="/absolute/path/to/cuda-10.0/lib64";
fi
You could save this to a shell script (putting #!/usr/bin/env bash as the first line, and then running chmod +x path/to/script.sh), and then execute it in the context of the current shell, for example like so:
source path/to/activate-cuda-10.0.sh
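Before launching anything heavyweight, it's worth checking that the right toolkit is now being picked up - something like this, where index.mjs is just a stand-in for whatever your Tensorflow.js entry point happens to be:
which nvcc        # should now point at /absolute/path/to/cuda-10.0/bin/nvcc
nvcc --version    # should report release 10.0
node index.mjs    # your @tensorflow/tfjs-node-gpu program, now running against CUDA 10.0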
Many deep learning applications that use CUDA also use CuDNN, a library provided by Nvidia for accelerating deep learning applications. The archived versions of CuDNN can be found here: https://developer.nvidia.com/rdp/cudnn-archive
When downloading (you need an Nvidia developer account, but thankfully this is free), pay attention to the CuDNN version being requested in the error messages generated by your application. Also take care to pick the CuDNN build that matches the version of CUDA you're using.
When you download, select the "cuDNN Library for Linux" option. This will give you a tarball, which contains a single directory called cuda. Extract the contents of this directory over the top of the CUDA directory you created by following my instructions above, and it should work as intended. I used my graphical archive manager for this purpose.
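If you'd rather do this from a terminal than a graphical archive manager, something along these lines works too (the tarball filename is just an example - match it to the one you actually downloaded):
tar -xzf cudnn-10.0-linux-x64-v7.6.5.32.tgz
cp -rv cuda/include/* /absolute/path/to/cuda-10.0/include/
cp -rv cuda/lib64/* /absolute/path/to/cuda-10.0/lib64/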