How do I reduce the memory footprint due to a GPU array?

Hello,
I am finding that creating a GPU array causes a huge spike in MATLAB's memory usage:
Opening matlab:
20309 mcoughli 20 0 4620m 172m 66m S 0.0 0.0 0:02.85 MATLAB
So approximately 4.6 GB. When I create a gpuArray from the command line:
>> gpuArray(1);
It spikes dramatically:
20309 mcoughli 20 0 537g 605m 255m S 0.0 0.1 2:07.06 MATLAB
So approximately 537 GB.
Does anyone understand why this happens and whether it can be prevented? It creates problems when I attempt to run on smaller computing nodes. Running ulimit -v beforehand works to some extent, but it is more difficult to set when running parallel processes.
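In case it helps, the numbers above should be reproducible from a single session with something like this (feature('getpid') is undocumented but returns MATLAB's process id, and the system calls just print the same top columns quoted above):

% Check MATLAB's memory before and after the first gpuArray call.
% feature('getpid') is undocumented but returns MATLAB's process id.
pid = feature('getpid');
system(sprintf('top -b -n 1 -p %d | tail -n 1', pid));   % before any GPU use
gpuArray(1);                                             % first GPU use - the spike happens here
system(sprintf('top -b -n 1 -p %d | tail -n 1', pid));   % after - VIRT jumps to hundreds of GB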
Thank you,
Michael

Accepted Answer

Edric Ellis on 3 Dec 2013
This is unfortunately likely to be due to loading all the GPU support libraries. These are quite large, and all get loaded when you first create a gpuArray. I'm afraid there's no workaround for this.
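One thing that may help with scheduling the hit, though not with shrinking it: any first use of the GPU should trigger the same library load, so a sketch like the one below pays the cost up front at a point of your choosing rather than at the first gpuArray call.

% Pay the library-loading cost at startup rather than at the first
% gpuArray call. This does not reduce the footprint; it only controls
% when the spike happens.
d = gpuDevice();                                  % selects and initialises the default GPU
fprintf('Using %s, %.1f GB free on the device\n', d.Name, d.FreeMemory/2^30);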
  2 Comments
Michael on 3 Dec 2013
Edric,
Thank you for your quick reply (even if it is disappointing news). My colleagues and I went searching for the GPU libraries, but came to the conclusion that they weren't all in one place. It would be convenient for compilation purposes to know the paths to the code we need in order to use GPUs. Is this even possible? Or does it basically constitute all of MATLAB / Parallel Computing Toolbox / etc.?
Michael
Joss Knight on 9 Dec 2013
Have a look at installdir/bin/arch and list the contents by size - you'll see some obvious GPU libraries near the top, e.g. npp, cublas, and cufft. To get good performance, GPU runtime code has to be very specialised, which means there are multiple implementations for every use case; add to that the overhead of supporting multiple compute architectures.
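If you'd rather do that listing from within MATLAB, something like the following should work (this assumes a Linux install, so the libraries are .so files; on Windows the pattern would be *.dll):

% List the shared libraries shipped with this MATLAB install, largest
% first; the CUDA libraries (libnpp*, libcublas*, libcufft*, ...) tend
% to sit near the top.
libdir = fullfile(matlabroot, 'bin', computer('arch'));
files  = dir(fullfile(libdir, '*.so*'));
[~, idx] = sort([files.bytes], 'descend');
files = files(idx);
for k = 1:min(10, numel(files))
    fprintf('%8.1f MB  %s\n', files(k).bytes/2^20, files(k).name);
end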
