AI for Simulink Users
Overview
Deep learning and machine learning techniques have demonstrated the ability to solve complex problems that traditional methods can’t adequately model, such as detecting objects in an image or accurately predicting battery state of charge from current and voltage measurements. While these capabilities are remarkable by themselves, the AI model typically represents only a small piece of a larger system. For example, the embedded software for a self-driving car may include adaptive cruise control, lane-keeping control, sensor fusion, LIDAR processing, and many other components in addition to deep learning-based computer vision. How do you integrate, implement, and test all these different components together while minimizing expensive testing with the actual hardware and vehicle?
In this session you will learn how to use AI with Model-Based Design to make the complexity of such systems more manageable, use simulation for thorough testing, and deploy to target hardware (ECUs, CPUs, and GPUs) using code generation. We will illustrate this approach using a few industry examples.
About the Presenters
Emmanouil Tzorakoleftherakis is a product manager at MathWorks, with a focus on reinforcement learning and control systems. Emmanouil has a M.S. and a Ph.D. in Mechanical Engineering from Northwestern University, specializing in control systems and robotics, and a B.S. in Electrical and Computer Engineering from University of Patras in Greece.
Bill Chou is a product manager for code generation at MathWorks and has been working with code generation technologies for the past 15 years. Bill holds an M.S. degree in Electrical Engineering from the University of Southern California and a B.A.Sc degree in Electrical Engineering from the University of British Columbia.
Bernhard Suhm is the product manager for Machine Learning at MathWorks. He works closely with customer-facing and development teams to address customer needs and market trends in our machine learning related products, primarily Statistics and Machine Learning Toolbox. Prior to joining MathWorks, Bernhard applied analytics to optimizing the delivery of customer service in call centers, after specializing in speech user interfaces during his PhD at Carnegie Mellon and Karlsruhe University (Germany).
Recorded: 9 Dec 2020
Hello, everyone. Welcome to today's webinar on AI for Simulink users.
My name is Emmanouil and I'm a product manager at MathWorks focusing on control systems and deep learning. I'm joined today by my colleagues Bill and Bernhard. And together, we'll be talking about integrating AI models into Simulink.
A few logistics before we begin. If you have any problems hearing the audio or seeing the presentation, please contact the webinar host by typing in the chat panel. If you have any questions for the presenters, you can type them in the Q&A panel and we'll make sure to save some time at the end of the presentation to answer these questions. Thank you.
All right. So let's dive into today's topic. As I mentioned, our goal for today is to show you how to build AI functionality into your Simulink models. What does that mean?
If you think of all the different components you can have in a Simulink model, at the end of the day, you have a component that models the physical system or environment that we're working with and a collection of algorithms that are under development. Now eventually, we want to take those algorithms and deploy them. The environment here could be, for example, a vehicle dynamics model, a battery model, a communication channel, et cetera. Examples of algorithms would be a PID controller or a vehicle detection component.
From what we've been hearing from our customers, we are seeing two main use cases for integrating trained AI models into Simulink. The first one is to use AI models as part of the algorithms that will eventually be deployed. And the second one is to use AI for data driven environment modeling.
Now, the features and capabilities that we will be showing today can be applied to both use cases. However, the main focus of the demos and examples that we will be showing will be on algorithm development.
For example, Bill will later show you how to integrate deep learning models that perform lane and vehicle detection into Simulink. The neural networks in this example take frames from a traffic video as input and detect vehicles, as well as lane boundaries corresponding to the lanes on the right and the left of the vehicle.
After Bill goes over his section, Bernhard will talk about how you can integrate traditional machine learning models in Simulink. And he will also go over a human activity recognition example.
All right. So let me now take a step back and try to motivate the topic a little bit. First, let's scope what we mean by AI.
Artificial intelligence is a broad area with many fields all aiming to make machines smarter, one way or another. And a key field that is driving this AI megatrend involves techniques like traditional machine learning, as well as more specialized variants of the same, such as deep learning and reinforcement learning. These techniques allow computer programs to improve through a data driven training process. And these techniques have already been used by many of our customers in a variety of scenarios.
MathWorks has three solutions in this space: Statistics and Machine Learning Toolbox, Deep Learning Toolbox, and Reinforcement Learning Toolbox. In this presentation, we will focus on the first two. If you're interested in reinforcement learning, we have a recording of a webinar that we ran a few months back that is available to watch online.
OK. What are some examples where AI can prove useful? Imagine a scenario where you would like an automated way to detect and identify failures in a production line. You could use traditional machine learning to do this. And the workflow would be as follows.
First, you would need to collect data from the field that includes cases where the system fails. Then you would need to pre-process the data and extract relevant features. For example, you could extract various signal statistics, such as the min and max values, the variance, and so on.
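The kind of window-level feature extraction described above can be sketched in a few lines. The snippet below is a minimal Python illustration of the idea (the webinar's actual workflow uses MATLAB, and the specific statistics are just examples):

```python
import numpy as np

def extract_features(window):
    """Summary statistics over one window of sensor samples.

    These mirror the kinds of hand-crafted features mentioned above
    (min, max, variance); real applications typically add many more.
    """
    w = np.asarray(window, dtype=float)
    return {
        "min": w.min(),
        "max": w.max(),
        "mean": w.mean(),
        "variance": w.var(),
    }

# One short synthetic signal window
feats = extract_features([0.1, 0.3, -0.2, 0.5, 0.0])
```

Each windowed feature vector then becomes one row of the training table fed to the classifier in the next step.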
The next step here would be training. One option would be, for instance, to train a classification model to recognize whether there is an electrical or mechanical failure in the system. You would then typically simulate the trained AI model as part of a larger system to make sure it works as expected. For example, you can use a Simulink model as a digital twin to test the trained machine learning model in different failure scenarios before actually putting it into production. The main advantage of using machine learning in scenarios like these is that you don't really need to worry about coming up with mathematical models of the process, which sometimes may be challenging or even impossible to get.
One of the most common applications of deep learning is object detection. So another AI example, in this case from the automated driving space, would be using deep learning to detect, for instance, lanes and vehicles on the road. The workflow here would be similar to the traditional machine learning example on the previous slide. So first, you need to prepare your data, which in this case would be images captured from the vehicle. After you have trained your deep learning model, you can test it together with other modules in a larger system designed, for instance, for highway lane following.
As you can see here, this system has many different components used, for instance, for vehicle modeling, for visual feedback, for controls, for sensor fusion. And so in addition to testing the performance of each component individually, it's also important to make sure that all of these pieces play nicely together.
Now, if you're satisfied with the results of the simulation, you can take this AI model along with the other algorithmic components of your design and deploy them to the real system. In this case, you could also rely on more traditional computer vision methods. But with deep learning, there is no need to do feature engineering anymore. And it has also been shown to have higher accuracy.
Some more examples. Bernhard will cover a human activity recognition example, where he will show you how to integrate into Simulink a support vector machine that classifies human activity based on sensor data. For example, you could detect whether a person is standing, walking, running, and so on. Machine learning makes it easy to create this data-driven classification model that would otherwise be hard to come up with.
Another interesting use of AI is to come up with surrogate models that let you estimate quantities that are not measurable otherwise. Estimating the internal temperature of a motor is one such case, where sensor-based methods are not feasible for commercial use, and methods like finite element analysis and thermal modeling are either very slow or require domain expertise to set up. With traditional machine learning or deep learning, you can create a data-driven model to estimate motor temperature, which you can then verify in simulation and eventually deploy.
Estimating the state of charge of a battery is a similar case. One way to solve this problem is using recursive estimation based, for instance, on a Kalman filter. However, the Kalman filter requires a dynamic model of the battery, which may or may not be accurate. And also, as a recursive algorithm, it has some additional computational overhead.
Now, instead of a Kalman filter, you could actually train a neural network using battery measurements of voltage, current, and temperature to predict state of charge. You may then integrate the neural network with a simulated battery management system to verify its performance before eventually deploying.
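As a rough illustration of the idea, here is a small Python sketch that fits a neural network regressor mapping (voltage, current, temperature) measurements to state of charge. The data and the "true" relationship below are entirely synthetic and the architecture is invented for illustration; the webinar's actual model would be built with Deep Learning Toolbox in MATLAB:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic measurements: voltage [V], current [A], temperature [C]
X = rng.uniform([3.0, -2.0, 10.0], [4.2, 2.0, 45.0], size=(500, 3))
# Made-up ground truth: SoC rising linearly with voltage (demo only)
y = (X[:, 0] - 3.0) / 1.2

# Small fully connected network as a stand-in for the battery model;
# inputs are standardized since their physical ranges differ widely
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000,
                 random_state=0),
)
model.fit(X, y)

soc_estimates = model.predict(X[:5])
```

Unlike a Kalman filter, the trained network is a pure feed-forward map from the current measurements to a state-of-charge estimate, with no recursive update step.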
There are many more applications than we have time to cover, but the common theme here is that today's systems are becoming more and more complex. And this is where Model-Based Design and AI can help, by making it easier to create these complex systems. Over the past years, we've built strong support for the AI development workflow. And we have introduced various ways to integrate AI components with Model-Based Design in Simulink to better understand overall system behavior.
So to summarize what we've talked about so far, once you have a trained AI component, you will likely want to test it together with other components of your system. That system may include a controller, a computer vision algorithm, a sensor fusion component, and really, anything that's relevant to your application. By integrating the AI component into Simulink, you can perform system-level simulations, you can make sure system requirements are satisfied, and eventually, you can deploy these components to the desired targets.
Alternatively, you could use AI for data driven environment modeling. For example, you can replace a high fidelity component that is slow to simulate with a faster machine learning or deep learning model. Some cases where this is relevant are finite element models, thermodynamic models, fluid dynamics, and so on.
Additionally, data driven systems may actually be the only option to model processes where a mathematical model is hard to find. And they're also easier to share compared to sharing a high fidelity model with hundreds or thousands of blocks, for instance.
That concludes the first part of the presentation. I will now hand it over to Bill, who is going to talk about deep learning in Simulink.
Thank you, Emmanouil. So in MATLAB and Simulink, Deep Learning Toolbox provides the ability to work with deep learning networks. It helps you leverage predefined and pretrained networks, and you can see a few of these in the image on the right. It has apps like the Deep Network Designer app to help you visually create networks and modify them.
You can also use the Experiment Manager app to find optimal deep learning networks by running a series of experiments where you change and sweep through the hyperparameters. There are also tools to help explain and visualize how a deep learning network works. And finally, it helps you interoperate with other frameworks, for example, TensorFlow and PyTorch.
So what are the different types of deep learning networks that you're likely to work with? At a high level, there are essentially three different types.
The first type are image classification and semantic segmentation networks. And these help you to identify the types of objects that you're seeing in the input, or at a pixel level, figure out what types of objects that particular pixel is. And popular networks that you'll see are, for example, ResNet, MobileNet, GoogLeNet, SegNet, and others.
A second type are object detectors. And these are your YOLO v2s or SSDs. And as the name suggests, it helps you to identify objects within the input image or video frame.
And finally, you might be working with sequence networks with audio, text, and signal data. And these are your LSTM networks and bidirectional LSTM networks.
So how do you bring these types of networks into Simulink? For the first type, image classification networks, you can use the Image Classifier or Predict blocks. And for the other ones, semantic segmentation, object detector, and sequence networks, you can make use of the MATLAB Function block to bring them into Simulink for simulation and code generation.
So going back to the diagram Emmanouil was talking about a little bit earlier, if we focus on designs where the AI models are in the algorithmic portion of the Simulink model, what might that look like inside of Simulink? At this point, I'd like to give you an example of that.
So the example we're going to use is a highway lane following model. And as you can see in the Simulink model here, there are various subsystems that do a variety of different things. For example, vehicle dynamics, sensor fusion, and others.
And if we run the simulation in Simulink, you can see we're getting good results. In the bottom left corner, we're identifying vehicles that are coming in. And in the top left, we're identifying the left and right lanes.
So for our purposes, we're interested in the vision detector subsystem here. That's where our AI model resides.
So if we look inside that, at a high level, this is what we're going to see. We're going to see two deep learning networks, the lane detector as well as the vehicle detector. And there's going to be some pre-processing and post-processing in order to make everything work correctly.
So the two deep learning networks. The lane detector is based on AlexNet. We made some modifications in order to identify the left and the right lanes. And for the vehicle detector, we're going to use YOLO v2.
Now, the way we're going to bring in these networks is to use the Image Classifier block that you see there for the lane detector. And we're going to use the MATLAB Function block to bring in our YOLO v2 network.
So we're going to feed in an input image on the left from a video frame. And we're going to identify the left and right lanes of the lane that the vehicle is traveling on and we're going to identify any other vehicles that we see in the input frame.
So the workflow, we're going to split into three parts. In the first part, we're going to run the simulation on our desktop CPU, desktop Intel CPU. From there, we'll switch to running the simulation on our desktop GPU, and we'll see a nice boost in performance there. And at that point, we can go ahead, generate CUDA code. And instead of targeting our desktop GPU, we're going to switch to running it on an embedded Jetson AGX Xavier board that we have connected to our host machine. So let's go ahead and take a look at our first part, and we'll run the entire Simulink model on our Intel CPU.
So here it is: our two deep learning networks, the lane detector and the vehicle detector. Our input traffic video is going to be fed in. We'll do some pre-processing to resize the image for our lane detector. And here you can see the lane detector is defined using the MAT-file that you see there. From there, we'll do some post-processing to extract the lane coordinates before we send them to the annotation step to highlight the left and right lanes, and then show the result on the output viewer.
In parallel, the input video is also fed to the vehicle detector based on YOLO v2. If we double-click on that, you'll see this is a MATLAB Function block, and the network is defined inside the MAT-file there.
So the output of that is sent to the annotation where we draw bounding boxes around vehicles that we detect. And if we run this simulation right now in our CPU, you can see the input videos on the left and the output videos on the right. Frame rate's a little bit low, mostly because we're working with some pretty large networks, especially the YOLO-v2 network. This is taxing our CPU a bit.
But in the output video, you can see the left and right lanes being marked. And we also draw bounding boxes around vehicles that we detect. The numbers there represent the percentage confidence of detecting a vehicle.
So that's the first part where we're able to run the simulation on our desktop CPU. Now, for performance reasons, we can go ahead and we can switch to using our desktop GPU, in this case, an NVIDIA GPU, in order to help boost performance. And let's take a look at how that works.
And before we get to that, I'll just mention that when we switch over to the GPU, GPU Coder will try to make use of optimization libraries, for example, cuDNN or TensorRT, for our deep learning networks. These are libraries provided by NVIDIA. And for the non-deep learning parts, we will be generating optimized CUDA code so that it will run much faster on our NVIDIA GPU.
So here's the same Simulink model again. And if we go into the model settings and take a look at our configuration parameters, inside the Simulation Target pane, we're using C++ as the language, and we'll check the GPU acceleration box. Down below, we can choose either the cuDNN or TensorRT optimization libraries. We'll stick with cuDNN.
And from there, we can go ahead and click on Run. And here you see the exact same results: input video on the left and output video on the right. We're speeding the simulation up quite a bit through the use of the NVIDIA GPU, and the left and right lanes are being identified and we're drawing bounding boxes around vehicles that we detect.
So now we're ready to generate code. We can use either the Simulink Coder or Embedded Coder apps. I'll choose Embedded Coder in this particular case. And inside of our configuration parameters, under the Code Generation pane, you'll see the appropriate system target file selected, ERT in this case.
The language is set to C++, and I've checked the Generate GPU Code box. For the toolchain, we'll select NVIDIA CUDA.
And going down here to Interfaces, at the bottom, you'll see Target Library. Again, you have access to both cuDNN and TensorRT. We'll keep cuDNN in this particular case. And under GPU code, you can see we've checked all the boxes for using the other NVIDIA optimization libraries.
So we're all set. We can go ahead and click on Build to generate code. And in the code generation report, let's take a look at the step function.
Here in the step function, if we jump below, you'll see cudaMalloc calls, which allocate memory on the GPU at the right places, and cudaMemcpy calls, which move data between CPU memory and GPU memory at the appropriate locations.
Here, you also see a couple of CUDA kernels being launched. These CUDA kernels will run on the GPU to help speed things up.
And if we take a look at the C++ class for the deep learning networks, you can see the familiar methods for setup, predict, and cleanup. Here's our second deep learning network, the YOLO v2 network. Again, you'll see setup, predict, activations, and cleanup.
So the setup, if we take a look here, is run once at the beginning of the program. In here, what we're doing is loading the deep learning network into memory. And if you look at the code, you'll see that we do this one layer at a time. For each layer, we load in the weights and biases of the deep learning network as needed.
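To make that lifecycle concrete, here is a hypothetical Python sketch of the same setup/predict/cleanup pattern: setup() loads the weights and biases one layer at a time, predict() runs inference, and cleanup() releases them. The class name, layer values, and ReLU activation are all invented for illustration; the real generated code is C++/CUDA produced by GPU Coder:

```python
import numpy as np

class TinyNetwork:
    """Toy stand-in for the generated C++ network class."""

    def __init__(self, layer_params):
        # (weights, biases) pairs, one per layer, not yet loaded
        self.layer_params = layer_params
        self.layers = None

    def setup(self):
        # Load weights and biases one layer at a time, as the
        # generated setup() does once at program start
        self.layers = [(np.asarray(W, dtype=float),
                        np.asarray(b, dtype=float))
                       for W, b in self.layer_params]

    def predict(self, x):
        x = np.asarray(x, dtype=float)
        for W, b in self.layers:
            x = np.maximum(W @ x + b, 0.0)  # linear layer + ReLU
        return x

    def cleanup(self):
        # Release the loaded parameters
        self.layers = None

net = TinyNetwork([([[1.0, 0.0], [0.0, 1.0]], [0.0, 0.0]),
                   ([[1.0, 1.0]], [0.5])])
net.setup()
out = net.predict([1.0, 2.0])
net.cleanup()
```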
So that was a quick look at running on the desktop GPU. Now in the third part, what we're going to do is again generate CUDA code, but in this case, we're going to re-target. And instead of running on our desktop GPU, we'll run the entire program on our embedded Jetson AGX Xavier.
So here we're in the same Simulink model. Again, we'll go back into the configuration parameters, to Hardware Implementation. And we'll make sure that we've selected the NVIDIA Jetson as our hardware board. Under Target Hardware Resources, here is where we specify our device address. Our embedded board is connected through Ethernet. And under Build Actions, we'll select Build and then Execute and Run the program.
So we're all set there. Now we're ready to build, deploy, and start our application on our embedded Jetson board. And after code generation here, you can see the output image. This is being sent back to our host machine, so we can see the output from our embedded Jetson board. And you can see it identified the left and right lanes, as well as drawing bounding boxes around our vehicles.
So that's the third part. And just to quickly wrap up what you saw here: first we were running the simulation of the entire application, including the deep learning networks, as well as the pre- and post-processing. And we were running that first on our desktop CPU before switching over to running on our desktop GPU, which helped speed things up quite a bit. Basically, we were seeing that performance was about seven times faster running on our desktop GPU versus the CPU. And the last part of what we saw was generating CUDA code again and switching to target our embedded Jetson AGX Xavier board.
So that was a quick look at running deep learning inside of Simulink. And what we saw was running simulation as well as code generation. So now let's switch over and take a look at running traditional machine learning in Simulink. And I'll hand it off to Bernhard.
You just heard Bill talk about how deep learning opens new possibilities for complex system modeling in Simulink. Aside from the convolutional neural networks that are so popular for image and video applications, engineering and industrial applications frequently use long short-term memory networks for signal and text data. However, to train performant deep neural networks, you need lots of labeled data and access to powerful compute resources, ideally with GPUs.
By contrast, my section focuses on traditional machine learning, like linear models, decision trees, support vector machines, and Gaussian processes. Machine learning is most widely used these days for applications with sensor and numeric data. And unlike deep learning, you only need moderate amounts of data and no GPUs.
So if you are working on an application where machine learning achieves good performance, what are your options to integrate such models in Simulink? With R2020b, we deliver built-in machine learning blocks, so far specifically support vector machines for classification and regression.
The Simulink blocks need to refer to the actual machine learning model, which you trained yourself or obtained from a colleague. Aside from prediction, you also need feature extraction that transforms your signal into input suitable for machine learning. And your raw signal may have to be pre-processed. Those types of functions can be integrated in Simulink using a MATLAB Function block.
Next, let me show you a demonstration of how you can deploy machine learning to mobile devices, leaning on the embedded deployment capabilities in Simulink. We chose the widely known human activity recognition problem as the application. It takes sensor data from the accelerometer of your mobile device and determines what activity you're performing. We simplified the classification to distinguish just two classes: walking versus idle.
You start with a Simulink model of your system, which for this simplistic application shows access to the sensor data with the Android block on the left and the display of the classifier result on the right. We represented the machine learning as a subsystem in the middle. To really leverage Simulink capabilities, you might integrate logic to connect to a smart home system and control lighting or temperature based on activity.
Next, let's drill into the machine learning subsystem. It needs to extract features from the sensor data, which then become input to the classifier. To get features, this application calibrates the raw accelerometer data for different types of phones and then computes the features. Drilling into the feature extraction, we see various signal processing functions being applied in the MATLAB Function block.
Next, we need to integrate the classifier. For that, you can use the corresponding prediction block in the machine learning library. The block prompts you for the name of the machine learning model. We'll look next at how you can get that.
Building machine learning models begins with labeled data. We have loaded some labeled human activity data and are showing in the scatter plot how the two different conditions, walking and idle, appear in the two-dimensional feature space.
You can use the Classification Learner to train models interactively. Let's start with a decision tree. Then compare it to logistic regression, a popular basic model for two-class problems, which works a little better.
And then finally, compare to a linear support vector machine. To use the support vector machine in the Simulink model, we'll export it to the workspace and rename it there, and provide that variable name to the Simulink model for reference.
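The model comparison done interactively in Classification Learner can be sketched in a few lines of Python with scikit-learn: train a decision tree, a logistic regression, and a linear SVM on labeled two-class feature data, then compare their accuracy. The data below is a synthetic two-cluster stand-in for the walking-versus-idle features:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)
n = 200
# Two well-separated clusters in a 2-D feature space, one per class
X = np.vstack([rng.normal(0.0, 0.5, (n, 2)),   # idle
               rng.normal(2.0, 0.5, (n, 2))])  # walking
y = np.array([0] * n + [1] * n)

scores = {}
for name, clf in [("decision tree", DecisionTreeClassifier(random_state=0)),
                  ("logistic regression", LogisticRegression()),
                  ("linear SVM", LinearSVC())]:
    scores[name] = clf.fit(X, y).score(X, y)
```

In practice you would compare models on held-out validation data rather than training accuracy, which the Classification Learner app handles for you via cross-validation.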
With everything connected, we are ready to proceed with deploying. And once it's deployed, we can try out the application like I show here by shaking the device to show that some activity is happening.
This demo covered only the training step of what is a multi-step workflow for developing machine learning models, shown here. Additional interactive tools and AutoML, however, can empower engineers and Simulink users to build AI models themselves. Without going into details, key steps include cleaning messy raw data, which you can perform interactively in MATLAB using live tasks.
Next comes the all-important step of feature extraction, which requires signal processing knowledge. Or you can turn to AutoML and apply wavelet scattering to automatically extract performant features from signals and images. For embedded deployment, you need a smaller model so it fits the hardware, and thus select a subset of the best-performing features. Many feature selection methods are available.
Then you proceed to the model building phase, either with the Classification Learner, as shown in the video, or by applying the automated model selection available with AutoML. If you stay in the interactive flow, there is also automated model optimization and hyperparameter tuning available in the learner apps. Finally, code generation is available so you can lean on the Simulink hardware deployment capabilities.
Reaching the conclusion of this webinar: we demonstrated how integrating AI models in Simulink can deliver better performance and new functionality not possible with traditional model-based design. We have introduced new blocks that bring machine and deep learning models into Simulink. And you can still test the overall design in simulation and implement your system using the code generation available in Simulink. To build the required AI models, interactive apps and examples in Deep Learning Toolbox and Statistics and Machine Learning Toolbox, as well as AutoML, empower Simulink users without much AI expertise to build performant models.
To learn more, here are a number of resources you may find helpful. For deep learning, we have a Getting Started video, as well as a free 2-hour training in the Deep Learning Onramp. If you want to check out the demo that Bill showed with lane detection, here's a link to it, as well as additional links to examples demonstrating how to use the new deep learning blocks for integration in Simulink.
Similarly, for machine learning, we have videos showing the Classification Learner and how to use it, as well as the similar regression app. Just like for deep learning, we have a Machine Learning Onramp available. You can also check out the Human Activity Recognition demo by following this link, as well as learn more about how to use the different types of blocks to integrate machine learning in Simulink models.
We hope you found the information from this webinar useful. Do engage with us. Let us know what applications you're working on or request a free trial to try out these technologies yourself.