Can you bring out support for SCV theory on here?

Hi,
I just wanted to ask two things:
  1. Can you bring out support for SCV theory (Several Complex Variables theory, i.e. contour integration in complex space across multiple variables, and in Fourier analysis)?
  2. Can you bring out support for AMD ROCm-supported GPUs (e.g. AMD Vega) as well as Nvidia CUDA? That would enable me and other users to use GPU Coder on more than one type of hardware.
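For context on question 1: iterated contour integration over a torus in C^2, the basic operation in SCV, can already be evaluated numerically with general-purpose array tools, even without dedicated toolbox support. Below is a minimal sketch (in Python with NumPy, purely as an illustration of the technique, not a MathWorks feature) that checks the two-variable Cauchy integral formula f(a,b) = (1/(2*pi*i)^2) * ∮∮ f(z1,z2) / ((z1-a)(z2-b)) dz1 dz2; the function `cauchy_2d` and the test point are my own choices for the example.

```python
import numpy as np

def cauchy_2d(f, a, b, n=400):
    """Numerically evaluate the iterated Cauchy integral of f over the
    torus |z1| = |z2| = 1, which reproduces f(a, b) for |a|, |b| < 1.
    Each circle is parametrized as z = exp(i*t), dz = i*exp(i*t) dt,
    and the periodic integrand is summed with the trapezoidal rule."""
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    z1 = np.exp(1j * t)              # sample points on the first unit circle
    z2 = np.exp(1j * t)              # sample points on the second unit circle
    Z1, Z2 = np.meshgrid(z1, z2)     # grid covering the torus
    # Integrand, including dz1 dz2 = (i*Z1 dt1)(i*Z2 dt2)
    g = f(Z1, Z2) / ((Z1 - a) * (Z2 - b)) * (1j * Z1) * (1j * Z2)
    dt = 2.0 * np.pi / n
    integral = g.sum() * dt * dt
    return integral / (2j * np.pi) ** 2

# Check against f(z1, z2) = z1 * z2 at an interior point of the bidisc.
f = lambda z1, z2: z1 * z2
a, b = 0.3 + 0.1j, -0.2 + 0.4j
approx = cauchy_2d(f, a, b)
exact = a * b
```

Because the integrand is smooth and periodic in both angles, the trapezoidal rule converges very quickly here, so `approx` agrees with `exact` to near machine precision at modest grid sizes.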

Accepted Answer

Walter Roberson on 5 Sep 2019
"Can you bring out support for AMD ROCm-supported GPUs?"
No, that is not going to happen any time soon. Although OpenCL exists as a hypothetically unifying framework, so many parts of it are optional or incompletely defined that, in practice, it would be necessary to build a number of different OpenCL models to get reasonable efficiency. NVIDIA's CUDA keeps changing, but NVIDIA has pretty good backward compatibility.
  5 Comments
Muhammad on 22 Dec 2022
Just checking in since this thread is a little dated.
I saw there are open standards in ML libraries for GPUs on the AMD side of things that are not tied to that brand.
Any idea whether we could see support outside of Nvidia GPUs?
Walter Roberson on 23 Dec 2022
There might eventually come a time when non-Nvidia GPUs are supported, but it would require a combination of circumstances:
  • the AI community must be doing a fair bit of work with the devices
  • the devices must not be too expensive
  • the infrastructure for compiling for the devices must be solid, with available math libraries and documentation
The first two points address the potential market. The last point addresses the feasibility of implementation.
AMD devices succeed on affordability. However, it turns out that only a relatively small portion of the AI market is there.
It also turns out that the major programming paradigm for them, OpenCL, is a bit too open: too many optional computation paths are inconsistently implemented in practice, which would require MathWorks to implement too many alternate paths to be comfortable with the implementation costs.
It is thus unlikely that, if MathWorks adds support for additional GPUs, it would be AMD.
According to a paper I read a few years ago, the second-largest AI market, by a fair margin, is in IBM GPUs.
There have been calls for MathWorks to support the Apple "Silicon" GPUs. Those do meet the affordability test, but not the AI-market test, and not the robust-infrastructure test. I have read posts from companies that tried to enter the "Silicon" GPU market but found that important aspects of using the GPUs efficiently were undocumented, and that Apple refused to answer their questions. The situation is apparently quite different with Nvidia: I gather that they are pretty open about efficient implementation matters, and are willing to work with companies such as MathWorks when oddities come up.


More Answers (0)
