Development of multi-object Tracking & sensor fusion solutions - MATLAB

    Development of multi-object Tracking & sensor fusion solutions

    Overview

    Join us for an in-depth webinar where we explore the simulation capabilities for multi-object tracking and sensor fusion. In the first part, we briefly introduce the main concepts in multi-object tracking and show how to use the tool. In the second part, we delve into the complexities of tracking moving emitters with spatially distributed sensor networks, where simultaneous illumination is often inconsistent. Learn about the integration of algorithms such as Global Nearest Neighbor (GNN), Joint Probabilistic Data Association (JPDA), and Probability Hypothesis Density (PHD) trackers, and how these are poised to enhance tracking performance in real-world applications.

    Highlights

    Development of multi-object Tracking & sensor fusion solutions

    • Introduction to multi-object trackers, scenario simulation
    • Simulation of simple tracking scenarios and tracking
    • Centralized and decentralized algorithm approaches
    • Challenges and solutions for heterogeneous sensor use-cases

    Track Data fusion for Target Tracking using Distributed Passive Sensors

    • Overview of the challenges in tracking airborne RF emitters
    • Exploration of various algorithms for angle-only measurements
    • Scenario generation & implementation of track fusion algorithms
    • Performance analysis & future directions

    About the Presenter

    Sumit Garg | Principal Application Engineer | MathWorks

    Sumit Garg is a principal application engineer at MathWorks India, specializing in the design, analysis, and implementation of signal processing and data processing applications. He works closely with customers in the areas of radar, phased array, and sensor fusion. He has more than 12 years of industrial experience in the development of hardware and software applications in the radar domain.

    Nidhi Verma | Scientist | Defense Research & Development Organization

    Nidhi Verma is a Scientist 'E' at the Defence Electronics Research Laboratory (DLRL), DRDO, Hyderabad. She received her B.Tech. degree in Electronics and Communication Engineering from Madan Mohan Malaviya Engineering College, Gorakhpur, in 2009. She focuses on the design and development of emitter processing algorithms for ESM systems and passive seekers. Her current research interests include AI, ML, and DL techniques for defense applications, as well as Tiny ML and distributed sensor data processing techniques.

    Recorded: 24 Oct 2024

    Hello, everyone. Thank you for joining the third session on development of multi-target tracking and sensor fusion solutions as part of the Aerospace and Defense Radar and Satellite Communication Webinar Series. My name is Sumit Garg. I'm an application engineer at MathWorks. My expertise includes radar, sensor fusion, and antenna design. We'll spend the next 30 minutes going over the design aspects of multi-target tracking and sensor fusion solutions. So thank you for attending. I hope you will leave this presentation with valuable information that you can apply in your job.

    In this presentation, I will cover four main topics, starting with an introduction to the scenario simulation and multi-object tracking features in Sensor Fusion and Tracking Toolbox. Second, I will demonstrate how to generate tracking scenarios using the Tracking Scenario Designer app and how to select trackers. Third, I will discuss how to integrate heterogeneous sensors using Sensor Fusion and Tracking Toolbox. Finally, I will talk about choosing between centralized and decentralized algorithm approaches, which is a critical consideration in sensor fusion.

    So let's get started. Sensor Fusion and Tracking Toolbox includes algorithms and tools for designing, simulating, and testing systems that fuse data from multiple sensors to maintain situational awareness. In the toolbox, you will find a virtual testbed, algorithms, and analysis tools: a testbed that allows you to define scenarios and simulate sensor outputs; a rich set of localization and tracking algorithms to use as is, to build your own solutions from, or to compare your own solutions against; and a set of visualizations and metrics to analyze your results.

    In the following slides, we will focus on these three sections in detail, starting with the virtual testbed. To set up a virtual testbed for tracking simulation, you can utilize the trackingScenario object. By default, this object initializes an empty scenario, providing a flexible foundation for your simulation needs. You can then populate the scenario with platforms by calling the platform method as many times as needed. A platform can represent a variety of entities, whether moving or stationary, such as sensors, targets, or other objects of interest.
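    As a minimal sketch (assuming Sensor Fusion and Tracking Toolbox is installed; the update rate below is an illustrative value, not from the webinar), the scenario and platforms can be created like this:

        scenario = trackingScenario('UpdateRate', 10);  % empty scenario with a 10 Hz update rate
        tower  = platform(scenario);                    % a stationary platform, e.g., a sensor tower
        target = platform(scenario);                    % another platform to act as a target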

    The trackingScenarioRecording feature allows you to capture and replay dynamic scenarios. By recording scenarios, you can revisit specific events, ensuring a thorough understanding of system performance. The trackingScenarioDesigner offers an intuitive interface for creating and customizing tracking scenarios. The app empowers you to experiment with different setups. We will look into the demo of the app in upcoming slides.

    For the platforms that we discussed in the virtual testbed earlier, you can define the motion of platforms or targets by defining their trajectories. You can choose geoTrajectory, which generates trajectories based on waypoints in a geodetic coordinate system, or waypointTrajectory, which generates trajectories using specified waypoints. When you create the System object, you can optionally specify the time of arrival, velocity, and orientation at each waypoint. This is also available in the Tracking Scenario Designer app. I will briefly demo this during the app demonstration.

    Kinematic trajectories allow the use of acceleration and angular velocity. Lastly, polynomialTrajectory offers trajectory generation using a specified piecewise polynomial, giving you the flexibility to model complex smooth paths with ease.
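    For example, a waypoint trajectory can be created and assigned to a platform roughly as follows (a hedged sketch; the waypoints and times are illustrative, and target is a platform created as above):

        wps = [0 0 -1000; 5000 2000 -1000; 10000 8000 -1200];   % waypoints in meters
        toa = [0; 60; 120];                                      % time of arrival at each waypoint, seconds
        target.Trajectory = waypointTrajectory(wps, 'TimeOfArrival', toa);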

    In virtual testbed, you can model various sensors, including IMUs with accelerometers, gyroscopes, and magnetometers, as well as GPS receivers, altimeters, radar, LiDAR, sonar, and IR sensors. Each of these sensors can be tailored to mimic real-world conditions by adjusting sensor-specific parameters.

    If you type fusionRadarSensor in the command window, a sensor with default properties will be created. You can specify the detection mode of the sensor as monostatic, bistatic, or electronic support measure (ESM) through the DetectionMode property. You can use fusionRadarSensor to simulate clustered or unclustered detections with added random noise and also to generate false alarm detections.
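    A hedged example of creating a rotating monostatic sensor (the property values here are illustrative; check the fusionRadarSensor documentation for defaults and valid ranges):

        radar = fusionRadarSensor(1, 'Rotator', ...   % sensor index 1, rotating scan configuration
            'DetectionMode', 'Monostatic', ...        % monostatic detection mode
            'FieldOfView', [10; 10], ...              % azimuth/elevation field of view in degrees
            'HasFalseAlarms', true);                  % also generate false alarm detections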

    The virtual testbed also has inbuilt visualization capabilities via trackingGlobeViewer and theaterPlot functionalities. The visualization capability enhances your ability to analyze and interpret tracking data. The trackingGlobeViewer allows you to create a dynamic virtual globe, providing a platform for visualizing complex tracking scenarios. With this tool, you can seamlessly plot platforms, trajectories, sensor coverages, detections, and tracks.

    The theaterPlot API displays plots of a tracking scenario, providing a detailed and interactive view of your simulation environment. That covers the APIs of the virtual testbed. Now let's see which algorithms are available.

    For single-object tracking with a Kalman filter, we predict, and then we measure and correct. The Kalman filter works by predicting the object's future state, updating its estimates with incoming measurements, and minimizing error through a series of iterations. This process ensures you maintain precise and reliable tracking performance over time. Here is the range of filters available to enhance your tracking systems. There are Kalman-based filters, which include the linear Kalman, extended Kalman, cubature Kalman, and unscented Kalman filters.

    The IMM filter excels in tracking maneuvering objects by switching between multiple models. The Gaussian-sum filter is ideal for scenarios with partial observability, using a weighted sum of distributions. The particle filter is used for highly nonlinear systems, and the table can guide you through the selection of the right filter based on system nonlinearity, state distribution, and computational complexity.
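    As a small, hedged illustration of the predict-and-correct cycle, here is a constant-velocity extended Kalman filter initialized from a single detection (the measurement values are made up):

        det = objectDetection(0, [1000; 2000; 0]);  % time 0, position measurement [x; y; z] in meters
        ekf = initcvekf(det);                       % constant-velocity EKF initialized from the detection
        predict(ekf, 0.1);                          % predict the state 0.1 s ahead
        correct(ekf, [1005; 2003; 0]);              % correct with the next position measurement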

    Modern tracking systems involve multiple target tracking. Here, one or more sensors generate multiple detections from various targets, requiring algorithms to manage data association and track management. Data association ensures each detection is correctly linked to the corresponding target. A multi-object tracker also handles track management, maintaining and updating the status of each target as new data becomes available, whether that means creating new tracks, updating existing tracks, or removing stale tracks.

    As with the filters, here is the range of trackers available for multi-object tracking. trackerGNN uses the global nearest neighbor algorithm for assignment. trackerJPDA assigns multiple probable detections to tracked objects, improving accuracy by considering various potential matches. trackerTOMHT maintains multiple hypotheses about tracked objects. trackerPHD utilizes a probability hypothesis density function. And a grid-based occupancy evidence approach is possible via trackerGridRFS.

    Each tracker returns a list of tracks that are updated from a list of detections. The objectDetection interface is responsible for capturing and organizing data from various sensors. It provides the necessary kinematic parameters and attributes of detected objects. The objectTrack interface manages the life cycle of tracked objects, maintaining and updating their states as new detections are processed. It ensures that the tracker efficiently correlates detections with existing tracks, providing a coherent view of the tracking environment.
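    Putting the two interfaces together, a minimal hedged sketch of one tracker update step looks like this (the detection and time are illustrative):

        tracker = trackerGNN('FilterInitializationFcn', @initcvekf);  % GNN tracker with a CV EKF
        dets    = {objectDetection(0, [1000; 2000; 0])};              % objectDetection input list
        tracks  = tracker(dets, 0);                                   % returns an array of objectTrack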

    For analysis, we use tracking quality metrics. The toolbox has three types of metrics: detailed metrics, score-based metrics, and vision-based metrics. The first type, detailed metrics, provides detailed information per track and per ground truth. For example, the assignment metrics tell you whether a ground truth is tracked and, if yes, how much time it took to establish a track on it and which track is tracking it.

    The second category, the score-based metrics, provides a single cost value as a summary of the entire system at each time step. A higher value typically denotes poorer tracking performance. These are attractive metrics when you are looking at the tracking system as a whole.

    The toolbox offers two metrics in this category, the OSPA and GOSPA metrics, and finally, vision-based metrics as well.
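    For example, a GOSPA summary score could be computed roughly like this (a hedged sketch; the cutoff distance is an illustrative value, and tracks and scenario are assumed to come from a tracker and a simulated scenario as above):

        gospa  = trackGOSPAMetric('CutoffDistance', 30);  % score-based summary metric
        truths = platformPoses(scenario);                 % ground truth poses from the simulation
        score  = gospa(tracks, truths);                   % lower score indicates better tracking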

    So how do you analyze tracking quality? You take the output of the tracking system, that is, the tracks, and compare it to truth. Truth can be obtained from simulation, from instrumented flights where the targets are equipped with highly accurate positioning systems, or from labeled data.

    So you essentially begin with data, that is, measurements that come from a real sensor or from a scenario with multi-sensor simulation. We define an API called objectDetection as the entry point to all the trackers in the toolbox. There are five different trackers. They output their estimates in another API called objectTrack. You can use visualization tools and metrics to observe the results.

    Within the tracking library, we provide a rich set of tracking filters. All the filters adhere to the same interface, which allows them to work with various trackers. Now let's get into the simulation of simple tracking scenarios and some trackers.

    Let's look at a simple scenario using a MATLAB program. Let's walk through the script step by step. First, we initialize the trackingScenario with an update rate of 60 hertz. Then we add a platform to the scene. We then create a monostatic radar sensor using the fusionRadarSensor function with rotating capability. This radar will be responsible for detecting objects in the environment. The radar sensor is mounted on the platform, enabling it to move and operate as part of the platform.

    In the next step, we add a target platform to the scene. This represents the object we aim to track. We define the target trajectory using specific waypoints in 3D space. This loop runs continuously, advancing the simulation by moving both platforms and targets within the scene. Each iteration represents a new time step in the simulation.

    We capture the current simulation time, then we generate detections using the sensor mounted on the platform. The detect function provides us with the current detections and number of detections at this time step. We then buffer these detections, adding them to scanBuffer for further processing or analysis. And you can also visualize the same using the visualization functions.
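    A condensed, hedged version of the script just described might look like this (the update rate, waypoints, and times are illustrative, not the exact values from the webinar):

        scenario = trackingScenario('UpdateRate', 60);               % 60 Hz simulation
        tower = platform(scenario);                                  % sensor platform
        tower.Sensors = {fusionRadarSensor(1, 'Rotator')};           % mount a rotating monostatic radar
        target = platform(scenario);                                 % target platform
        target.Trajectory = waypointTrajectory([0 5e3 -1e3; 2e4 5e3 -1e3], [0; 60]);

        scanBuffer = {};
        while advance(scenario)                                      % step platforms and targets forward
            time = scenario.SimulationTime;                          % current simulation time
            dets = detect(tower, time);                              % detections from the mounted sensor
            scanBuffer = [scanBuffer; dets];                         % buffer detections for later processing
        end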

    Now let's look into the app. The Tracking Scenario Designer app enables you to design and visualize synthetic tracking scenarios for testing your estimation and tracking systems. You can create platforms, including planes, cars, towers, and boats, using an interactive interface and configure the properties of each platform in the tracking scenario. You can configure 2D or 3D trajectories, including the position, orientation, and velocities of platforms, using waypoint trajectories in the app.

    Then you can create radar sensors mounted on the platform and configure the sensor properties. Finally, using the Export tab, you can generate MATLAB code of the scenario and sensors, then programmatically modify the scenario for application purposes. Let's look at a small demo.

    In this demo, first we add a tower platform. Then we add a plane target and start to add waypoints. Once we add all the waypoints, we can configure them via the app itself, changing the altitude or manually entering the altitude of the trajectory waypoints.

    Once we configure the trajectory parameters, we can add the sensor on top of the platform. Here we mount a sensor on the platform and start to configure the sensor properties. We can control the orientation properties and then tune the field of view and mechanical scan limits. You can play with all these properties and configure your sensor parameters. Once you define all your sensor parameters, you can play your scenario and export the script as well.

    Let's now look at a scenario of closely spaced targets falling within one resolution cell. We model this, too, using a scenario where a radar tower is equipped with a monostatic radar sensor that scans the sky. At a distance far from the sensor, two airplanes fly in close proximity to each other for a short duration of time. The close spacing of the aircraft trajectories and their distance from the radar tower challenge the radar's capability to properly resolve the two objects.

    You can also visualize the scenario and replay it in the app. If you look closely at the detections generated from the scenario, they look something like this; these are what we need to pass to the tracker. Now we can start with the least computationally intensive tracker in the library, the GNN tracker. We can switch between a constant velocity model and an IMM filter using the filter initialization function and pass these detections to the tracker.

    Finally, we can visualize at each time step how the tracking performance comes out with respect to the detections we have fed in. We can analyze how many false tracks are generated, how many crossed trajectories occur, and whether the track returns to the true direction. Here we can see that with the IMM filter there is a false track drop, but the track returns to the original direction, which matches the ground truth.

    Similarly, we can switch to another tracker, the joint probabilistic data association tracker, which uses a likelihood-weighted probabilistic assignment of measurements to a track. When we switch to this tracker, we see that there are no false tracks and no dropped targets. In this way, we can do what-if analysis by switching between various trackers, quickly changing the properties by switching to a new motion model, a new filter, and a new tracker, as in the sketch below.
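    The what-if switch between trackers and filters boils down to changing one line that constructs the tracker object (a hedged sketch; initcvekf and initekfimm are standard filter initialization functions in the toolbox):

        trackerCV  = trackerGNN('FilterInitializationFcn', @initcvekf);    % GNN with a constant-velocity EKF
        trackerIMM = trackerGNN('FilterInitializationFcn', @initekfimm);   % GNN with an IMM filter
        trackerJ   = trackerJPDA('FilterInitializationFcn', @initekfimm);  % JPDA data association instead of GNN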

    With these results, we can quickly prototype and evaluate tracking solutions. This shows a what-if analysis comparing the multiple trackers that we just saw.

    Also, if you want to look at a globe scenario, here is a visualized map display of an air traffic control scenario; you can follow the same steps and, finally, visualize the result on a map display using trackingGlobeViewer.

    You can scale the same process to multiple systems: land-based, airborne, maritime, or space-based systems, and also to extended objects, whether for outdoor or indoor tracking.

    Now let's jump into the heterogeneous sensor use cases and the solutions for those. In today's rapidly evolving technological landscape, the integration of heterogeneous sensors presents both challenges and opportunities. One key challenge is the interoperability of diverse sensor types, which often have varying data formats and operational requirements. To address this, implementing standardized interfaces for data fusion can streamline integration.

    Here are some examples. The example on the left shows how to track objects using angle-only measurements from spatially distributed and asynchronous passive sensors. For cases where you have full and partial measurements, the example in the center shows how to fuse position measurements from radar with angle-only measurements. And the example on the right shows how to track heterogeneous aircraft using data collected by heterogeneous sensors. Let's look into one of these examples in detail now.

    Using the new task-oriented trackers in the R2024b release, you can follow these steps for specifying a tracker. The first step is defining what the tracker is required to track, specifying the types of objects of interest. In this example, these objects are passenger aircraft, general aviation aircraft, and helicopters. Using the trackerTargetSpec function, I can create a target specification for a passenger aircraft and observe the properties of the specification.

    The passenger aircraft target specification defines typical speeds and accelerations in the horizontal and vertical planes for a passenger aircraft. Similarly, you can create other specs for the general aviation and helicopter targets. After specifying what we want to track, we need to specify the sensors we use for tracking. In this example, we use an airport surveillance primary radar and a secondary radar. The airport surveillance radar is a monostatic rotating radar, which we specify using trackerSensorSpec. Similarly, you can add a specification for the second sensor.

    The next step in specifying the tracker is to configure it to use the heterogeneous sensor and target specifications that we have defined. Currently, with this new interface, only a JIPDA tracker is available.

    The sensor specifications also allow you to pass sensor data into the tracker and act as an interpreter between the sensor data and the tracker. To understand this point better, display the data format for activeRadarSpec. The active radar data format includes all the information needed to understand the sensor scanning and measurements.
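    A hedged sketch of this task-oriented workflow is shown below. trackerTargetSpec, trackerSensorSpec, and the data format display are named in the talk; the specific category strings, the tracker constructor name, and the dataFormat call are assumptions based on the R2024b release and may differ in your version:

        passengerSpec = trackerTargetSpec("aerospace", "aircraft", "passenger");    % assumed spec strings
        asrSpec       = trackerSensorSpec("aerospace", "radar", "monostatic");      % assumed spec strings
        tracker       = multiSensorTargetTracker(passengerSpec, asrSpec, "jipda");  % JIPDA, per the talk; name assumed
        fmt           = dataFormat(asrSpec);                                        % expected sensor data format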

    Finally, you can run the tracker and see your simulation in action, where you can analyze and explore. To modify the scenario, ensure that you use the correct target and sensor specifications to match the new scenario and update the specifications accordingly.

    Finally, let's see the centralized and decentralized approaches. When building surveillance systems from multiple sensors, the question is how to fuse the sensor data. Should the sensors' detections be fused by a centralized tracker? Or should each sensor or sensor modality be processed by a separate tracker, with all sensor tracks then fused by a track-to-track fusion algorithm? The toolbox provides a tool called the tracking architecture to explore these kinds of questions.

    When designing a multi-sensor tracking system of systems, there is a spectrum of architectures that can be explored. On one end of the spectrum, the data from all the sensors is fed to a centralized tracker. The centralized tracker maintains the list of tracks and fuses data from all sensors with the central or global tracks directly. The other end of the spectrum is a fully decentralized approach, where each sensor's data is first processed by a sensor-level tracker. Each sensor-level tracker produces a local estimate of objects in its limited field of view.

    A centralized track-level fusion scheme fuses the sensor-level tracks to produce a global estimate for the system. In the proposed tracking architecture framework, users can define multiple tracking architectures as black boxes. The same sensor inputs are provided to each architecture to produce tracks. Ground truth can come from simulation or from labeled data. Then the output of each architecture is compared with the ground truth, and analysis tools like track metrics provide quantitative tracking quality results.

    Here you can see the main components of the tracking architecture: the trackers, the track-to-track fusers, and the tracking architecture itself. We also enable bringing your own tracking algorithm through these interfaces. The central piece of the tracking architecture framework is the architecture tool, which serves as a container for detection-level trackers and track-to-track fusion algorithms. The tool also serves as a router to pass data to and from each tracking node.

    Shown here is a tracking architecture to which we add two trackers and a track-to-track fuser. The first tracker is a single-hypothesis GNN tracker that fuses data from two sensors. The second is a sensor-level JPDA tracker. The outputs from both trackers, in addition to the tracks arriving from a third source, are fused by a track fuser, as sketched below. If you have your own custom algorithms, you can integrate them with the tracking architecture tool as well.
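    A hedged sketch of assembling such an architecture follows; the indices and the simplified trackFuser configuration are assumptions for illustration, not the exact setup shown in the talk:

        arch = trackingArchitecture;
        addTracker(arch, trackerGNN('TrackerIndex', 1), 'SensorIndices', [1 2]);  % central GNN over sensors 1 and 2
        addTracker(arch, trackerJPDA('TrackerIndex', 2), 'SensorIndices', 3);     % sensor-level JPDA on sensor 3
        addTrackFuser(arch, trackFuser('FuserIndex', 3, 'MaxNumSources', 3));     % fuse tracker outputs and external tracks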

    To integrate a custom tracking algorithm, compose a wrapper object around the custom algorithm. The wrapper is responsible for translating from the standardized interfaces presented in the earlier slides. So this is what Sensor Fusion and Tracking Toolbox does: design, simulate, test, and deploy multi-sensor tracking and positioning systems. I'll be happy to help you with all your queries.

    But before getting into Q&A, let me introduce Mrs. Nidhi Verma. Nidhi Verma is a scientist at the Defence Electronics Research Laboratory, DRDO, Hyderabad. She focuses on the design and development of emitter processing algorithms for ESM systems and passive seekers. Her current research areas include AI, ML, and deep learning techniques for defense applications, as well as Tiny ML and distributed sensor data processing techniques. She'll be sharing her experience on spatially distributed sensor networks. Over to you, ma'am.

    A very good afternoon to everyone. Thank you for joining in. I am Nidhi Verma, currently working as a scientist at DLRL, Hyderabad. DLRL, Hyderabad, is a premier laboratory of DRDO, working on the design and development of state-of-the-art electronic warfare systems for the Indian tri-services. My specialization is in the design and development of ESM processor subsystems for distributed sensor networks and passive seekers for missile applications.

    Today, I am here to share my experience of developing track data fusion algorithms for tracking of airborne emitters using distributed passive sensors. The activity has been taken up as a technology demonstration exercise to evaluate the performance enhancements by incorporation of data fusion techniques in respect of ESM applications.

    Electronic warfare, as we all know, is a game of cat and mouse, with cutting-edge technological innovations always driving the capabilities of EW. Modern military capabilities rely increasingly on the electromagnetic spectrum. And hence, EW plays a significant role in helping military forces maintain a strategic edge in the battle space by protecting our access and use of the spectrum while simultaneously denying and degrading any adversaries' use of this spectrum.

    Radio frequency surveillance is an integral part of electronic warfare operations, providing critical applications for intelligence gathering, situational awareness, and tactical decision-making. By monitoring the electromagnetic spectrum, military forces can gain insight into enemy activities, protect their own communications, and enhance overall mission effectiveness. This leads us to ask what the most desirable characteristics of a surveillance system are: stealth and wide coverage.

    Hence, passive RF sensors, with the ability to operate without emitting signals, are ideal for covert operations and intelligence gathering, providing low detectability by adversaries. Compared to radars, ESM sensors also have a detection range advantage, as the signal is attenuated only by one-way path loss. Hence, the radar can always be detected before it can detect the surveillance platform.

    To achieve wider spatial coverage along with stealth, distributed passive sensor networks are deployed. This approach enables collective monitoring and analysis of RF emissions covering larger geographical areas compared to single-point sensors, allowing for comprehensive surveillance and data collection. Sensor level redundancy and scalability are also added advantages, which ensure consistent wide area monitoring.

    A distributed sensor network can work together to pinpoint the sources of emissions more accurately through techniques like triangulation. This is one of the primary motivations for using distributed sensors, as location is the most crucial parameter of interest in surveillance operations. The distributed nature of the sensors also facilitates advanced data fusion techniques, enabling the integration of information from multiple sources for deeper insights, leading to collaborative intelligence.

    So one of the applications of distributed passive RF sensors is the tracking of airborne RF emitters, which are primarily the transmitters on adversary aircraft and unmanned aerial systems. Distributed passive RF sensors offer target detection and identification, 3D tracking, high survivability, and extended interception ranges. Advanced technologies like TDOA enable the passive system to intercept, identify, and locate airborne radars and passively generate the 3D air situation picture while covering a wide instantaneous frequency range with high sensitivity.

    The interception occurs once the radiating emitter enters the three-dimensional interception bubble. The electromagnetic radiation impinges on each sensor sequentially, enabling computation of TDOA-based DF; alternatively, ADF- and BLA-based DF can also be used. For producing a full air situation picture, the system consists of two to six sensors in a single center.

    The system's sensors are linked to create a large interception bubble. In case of primary radar being jammed, the passive RF sensors offer complementary reliable tracking of emitters, generating 3D situational scenario.

    The electromagnetic signals of interest, which can be intercepted leading to tracking of these airborne emitters, are first and foremost the airborne surveillance radars and weather radars onboard the aircraft, followed by communication signals. Next are the transponder signals like Mode S, ADS-B, and IFF broadcasts. Telemetry and data links can also be intercepted and analyzed.

    To obtain efficient tracking of an airborne RF emission, challenges range from signal propagation issues to data overload in the algorithms, leading to limitations that have to be carefully handled. The primary concern is signal attenuation and interference, which is pronounced in environments with significant obstructions or interference. Multipath propagation, due to reflection of signals off surfaces, buildings, terrain, et cetera, leads to multiple intercepts of the same signal arriving at the receiver, which causes confusion in signal interpretation and reduces tracking accuracy.

    The operational scenarios tend to get more complex with multiple emitters, fast-moving targets, or evasive maneuvers. Hence, developing robust algorithms that can handle these complexities is a significant challenge. Distributed tracking systems can generate vast amounts of data that must be processed in real time. Efficient handling of data to achieve real-time processing is desired as latencies in processing can lead to missed opportunities or inaccurate tracking.

    Effectiveness of tracking and wider spatial coverage also depend on the capabilities of the sensors used. Limited sensitivity and resolution can result in missed detections, which render the system ineffective against low-power and covert transmissions, limiting the interception range of the system.

    Having understood the advantages and challenges of tracking airborne emitters using distributed passive sensors, let us now take a look at the architecture of a passive emitter tracking system. The system comprises multiple sensor nodes which are networked together for reception of data. All the sensor nodes are connected to a main master station, where the data intercepted by each sensor is accumulated and processed for generating the three-dimensional air situation picture of the area under surveillance.

    The number of sensor nodes deployed directly impacts the area of coverage of the sensor network and the quality of location fix that is achievable by the system. The block diagram here represents a network containing one to n nodes. Each sensor has an antenna head unit, which primarily comprises wideband antenna arrays achieving 360-degree spatial coverage. The signals are intercepted and processed for parameters at the sensor nodes.

    The processed data is then forwarded to the master station, where in-depth analysis is carried out and the data is presented to the user. Typically, the sensor station comprises an antenna head unit and a receiver unit. Here, the incoming RF signal is intercepted, and signal conditioning is done. RF-to-IF down-conversion is done to bring the signal within the sampling range of the ADC. The down-converted signal is digitized, and signal processing is performed to estimate the pulse parameters like frequency, pulse width, amplitude, direction of arrival, and time of arrival.

    FFT-based frequency measurement is done, while for DOA estimation, amplitude-based, phase-based, or TDOA-based direction finding techniques can be utilized. The generated PDW data is then timestamped and transmitted to the master station over datalinks by the sensor data processor. Synchronization across all sensor nodes and sensor configuration based on mission planning are also carried out here.

    At the master station, data from all the sensor nodes is received. The data is interleaved and analyzed for location fixes of potential aerial targets. Emitter parameter extraction, track generation, and activity monitoring are performed. Post-processing, the output track data is presented to the user in the form of a 3D air situation picture.

    Provision for storing the raw sensor data and processed track data is also desirable. Now, in this sequence of operations, the obvious question that pops up is where to introduce the data fusion modules so that an enhancement in tracking performance is achieved. Is it at the master station, or at the sensor station?

    This brings us to the two most relevant approaches prevailing today, also covered by the previous speaker, for data fusion in distributed architectures: central-level fusion and track-to-track fusion. The central fusion architecture involves aggregating data from all the sensors at a single processing point, that is, the fusion center. This center processes the raw data, applying algorithms to produce a single unified output or a set of outputs.

    High-level data integration is achieved, as it combines data from various sources, offering enhanced accuracy and reliability. The limitations, however, relate to bandwidth constraints, as the scheme requires a significant amount of data, all the raw PDW data, to be transmitted to the fusion center. It also suffers from a single point of failure: if the fusion center fails, the entire system may become inoperative.

    Also, the system tends to suffer from latency issues, as all data must be received and processed before a decision is made. On the contrary, track-to-track fusion involves merging data from different tracking systems or sensors, each maintaining its own track estimates. Fusion occurs at the level of objects rather than raw sensor data, allowing for quicker integration. The scheme offers advantages with respect to bandwidth requirements, as only track data needs to be transmitted.

    Robustness to failure is also achieved due to the tracking functionality at the sensor nodes. This architecture can achieve lower latencies and faster decision-making. The limitation of this scheme is potentially lower accuracy, as each sensor may have its own noise characteristics and inaccuracies. So the final fused track may not be as accurate or as fully integrated as in the central fusion approach.

    So the choice between central level fusion and track-to-track fusion largely depends on the specific requirements of the application, including considerations such as system architecture, operational environment, bandwidth availability, and desired accuracy. In practice, a hybrid approach that combines elements of both methods often yields the best results, balancing the strengths and weaknesses of each technique.

    So coming to our architecture of distributed sensors for target tracking, we have incorporated static detection fusion at the sensor level for fusing the emissions within a given time frame before sending them to the master station, because in a dense scenario the data generated by the system would choke the data links. This sensor-level fusion block performs a preliminary level of PDW fusion and hence aids effective utilization of the transmission bandwidth.

    At the master station, the detections received from all the sensor stations are fused together again using a static detection fusion block. The fused detections are then fed to the tracker, which performs the tracking of multiple airborne targets.

    So now, in order to build a data fusion architecture for target tracking, the basic building blocks are the data fuser block and the tracker block. A fuser block is a system or algorithm designed to integrate data from multiple sources or sensors to produce a consolidated and more accurate representation of the environment and objects of interest.

    Fusers can operate on diverse data types from multiple sensors, like cameras, lidars, radars, GPS, et cetera, or data streams, each providing a different perspective or modality of information. The key performance metrics for fusers are accuracy, robustness, computational efficiency, and response time. On the other hand, the tracker is a system or algorithm designed to estimate the state, that is the position, velocity, et cetera, of an object over time based on a sequence of measurements obtained from various sensors or data sources.

    Key components of a tracker are the state representation and the measurement model, which defines how the measurements from the sensors relate to the actual state of the object and incorporates the measurement noise and uncertainties. Trackers are also evaluated based on metrics such as accuracy, robustness, latency, and computational efficiency. In a typical use case, detections from sensors are first fused, and the fused detections are fed to the tracker for track generation.

    To get started with the simulation of the use case in MATLAB, the following algorithmic blocks are available. These have been covered by my co-speaker, so I will not delve much deeper into them. The trackers available are the GNN tracker, TOMHT tracker, JPDA tracker, and PHD tracker. On the fuser side, we have the static detection fuser and the track fuser.

    One thing to note is that a static detection fuser uses angle-only sensor detections. It assumes that all the sensor-generated detections are available simultaneously and that the sensors have a common surveillance region; it then associates the detections across the sensors, accounting for missed detections and false alarms.

    So the first step in assessing the tracking performance for airborne radars requires modeling of a tracking scenario. The modeling workflow is as follows. First, we generate the RF emissions. Then we propagate the emissions. And thirdly, we receive the emissions at the sensor stations and generate detections.

    To model an RF emission, the radarEmitter object has been used, while to simulate ESM detections, the fusionRadarSensor or radarDataGenerator object is used with the detection mode set to ESM, so that one-way propagation is modeled and the detection is synonymous with an ESM detection.

    Different platforms are defined in the scenario with kinematic trajectories to model the airborne platforms and the sensor stations. The sensor objects are associated with the sensor stations, while the transmitter object is associated with the airborne platform. Various scenarios have been created by modeling different trajectories. The scenario simulations are used to generate the detections, which are then fed to the fuser and tracker blocks.
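    A hedged sketch of such a scenario setup, with one airborne emitter and one ground sensor station (the trajectory numbers are illustrative, not from the talk):

        scenario = trackingScenario;
        aircraft = platform(scenario, 'Trajectory', ...
            waypointTrajectory([0 0 -8000; 5e4 1e4 -8000], [0; 300]));    % airborne platform, 300 s leg
        aircraft.Emitters = {radarEmitter(1, 'ScanMode', 'No scanning')}; % RF emitter on the aircraft

        station = platform(scenario);                                     % stationary sensor station
        station.Sensors = {fusionRadarSensor(1, 'DetectionMode', 'ESM')}; % one-way (ESM) detections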

    Depending on the tracking use case, the fuser and tracker blocks are instantiated in the tracking scenario. In a typical configuration of a static detection fuser, the measurement fusion function is set to triangulation of the LOS, and the number of sensors deployed in the scenario is configured. For the tracker, based on the desired tracking algorithm, the tracker block is configured.

    Here we show a typical configuration of a GNN tracker block. The inputs to the fuser block are the sensor detections, and it generates the fused detections as output. Subsequently, the inputs to the tracker block are the fused detections generated in the first step, and it generates the tracks as output.
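    A hedged reconstruction of that fuser-then-tracker pipeline follows; sensorDets and simTime are placeholder variables standing in for the buffered detections and the current simulation time:

        fuser   = staticDetectionFuser('MeasurementFusionFcn', @triangulateLOS, ...  % fuse angle-only detections
                                       'MaxNumSensors', 3);                          % sensors deployed in the scenario
        tracker = trackerGNN('FilterInitializationFcn', @initcvekf);                 % GNN tracker block

        fusedDets = fuser(sensorDets);            % angle-only detections in, fused position detections out
        tracks    = tracker(fusedDets, simTime);  % fused detections in, tracks out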

    To arrive at the sensor fusion architecture, let us study the tracking scenario requirements first. As we discussed in the previous sections, an airborne platform has several RF transmitters on board. They can be broadly classified into two categories with respect to the detections that are triggered at the sensor nodes: transmitters which generate synchronous detections and transmitters which generate asynchronous detections at the receiver stations.

    Transmitters like Mode S and IFF interrogators are broadcast signals and hence, based on the placement geometry of the sensor stations and their inter-sensor separation, in optimal cases are likely to illuminate all the sensors at the same time. This generates simultaneous detections at all the sensors. On the contrary, in the case of a surveillance radar, as the instantaneous beamwidth is quite narrow, the radar employs a scanning strategy to maximize spatial coverage.

    Hence, in this case, the radar beam's footprint illuminates the sensor stations sequentially. The tracking strategy has to be planned considering the signal of interest and the timing of the detections.

    This brings us to the first implementation of the target tracking configuration, which is applicable to broadcast transmissions or wider-beamwidth transmissions that enable simultaneous illumination of all sensors: tracking using synchronous passive measurements. The primary condition here is that the sensors must report measurements synchronously; hence, simultaneous illumination is mandatory.

    Static fusion of the detections, which carry azimuth angle-of-arrival information, is utilized for fusing the detections. The output fused detections contain the estimated positions of the targets obtained by triangulation of the LOS. trackerGNN is utilized for tracking the target activity. As only azimuth angle information is being used for processing, a minimum of three sensors is needed for geolocating the target.

    For this case, multiple scenarios with different target trajectories, numbers of targets, and numbers of sensors with different placement geometries have been simulated, and the tracking performance has been benchmarked. Results are shown for different azimuth angle measurement accuracies of the sensor, varying from 2 degrees RMS to 10 degrees RMS. It is observed that all the tracks were assigned to the correct truths and no false tracks were confirmed by the tracker when the measurement noise is low. These results indicate good static association accuracy.

    Case 1, case 2, and case 3 highlight the tracking performance in such cases. It is observed that as the measurement noise increases, the distinction between ghost associations and true associations becomes less prominent, resulting in a significant drop in the accuracy of static association. With closely spaced targets, incorrect association of fused detections to tracks also occurs.

    To mitigate that, instantiating more sensors seems to alleviate the problem. It is observed that with more sensors, the track association accuracy improves. However, with the increase in the number of sensors, the computational requirements also increase: the static fusion algorithm spends most of its time computing the feasibility of each triangulation.

    Coming to the second case, the second implementation of the target tracking configuration focuses on the tracking of scanning emitters. Airborne surveillance radars have narrow beamwidths and employ scan strategies like sector scan, cluster scan, et cetera, for achieving spatial coverage. Hence, depending on the separation between sensor stations, sequential illumination of the sensors occurs.

    As the time of detection is not the same at each sensor, it is not feasible to perform static fusion. Since the detections from all sensors are not available at the same time step, this leads us to the asynchronous multi-sensor angle-only tracking problem. In this case, the prior distribution of object states must be modeled by using detections from a single sensor. Range parameterization has been used to initialize prior distribution using angle-only detections.

    A PHD tracker object is instantiated by providing the sensor configurations from the tracking sensor objects and defining a range-parameterized birth intensity by setting the filter initialization function, as in the sketch below. The ground truth and the estimated tracks are observed for performance estimation.
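    A hedged sketch of that setup is below; initRPGMPHD stands for a user-written, range-parameterized Gaussian-mixture PHD initialization function (it is hypothetical here, not a toolbox function):

        config  = trackingSensorConfiguration(1, ...              % configuration for sensor index 1
            'FilterInitializationFcn', @initRPGMPHD, ...          % range-parameterized birth intensity (user-defined)
            'IsValidTime', true);                                  % this sensor reports at the current step
        tracker = trackerPHD('SensorConfigurations', {config});   % PHD tracker over the sensor configurations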

    For this case also, multiple scenarios with different target trajectories, numbers of targets, and numbers of sensors with different placement geometries have been simulated, and the tracking performance has been benchmarked. Results are shown for different cases with azimuth angle measurement accuracies varying, again, from 2 degrees RMS to 10 degrees RMS.

    It is observed that the PHD filter is initialized at the first step, when detections are received from a single sensor, and is updated with new information from the other sensors. The peaks of the PHD intensity follow the range-parameterized components based on detections received from the first sensor. After updating with the detections from the second sensor, and subsequently from the other sensors, the PHD estimate peaks lie close to the intersection between the detections from the first and second sensors, and similarly for the other sensors.

    It is observed that the track converges to the true peak values around four seconds, when the number of targets is two and the number of sensors is three and azimuth angle measurement noise is 2 degrees rms.

    In order to incorporate the fuser and tracker blocks in the ESM processor application, C code generation using MATLAB Coder has been done. C code for the static detection fuser block, the GNN tracker block, and the PHD tracker has been generated. A C wrapper function has been developed to instantiate the generated C modules in the main application code.
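    As a hedged illustration, code generation for such blocks typically goes through a MATLAB entry-point function; fuseAndTrack below is a hypothetical wrapper around the fuser and tracker calls, and exampleDets is a representative detection input used only to define the argument types:

        cfg = coder.config('lib');                               % generate a static C library
        codegen fuseAndTrack -args {exampleDets, 0} -config cfg  % entry point with example arguments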

    This activity has been planned in four major phases. In phase one, scenario simulations with different target trajectories, numbers of targets, and numbers of sensors with different placement geometries have been carried out, and the detections have been generated. In phase two, the trackers and the fusers have been simulated using the generated detection data, and tracking performance has been benchmarked.

    In phase three, C code generation of the modules using MATLAB Coder has been carried out. The generated modules are in the process of being integrated with the processor application code. Phase four comprises performance evaluation of the scheme with real field data in simulation, followed by in-system testing and performance benchmarking with respect to timing.

    Based on the real radar data, enhancements in the application code and the customization of the tracker parameters and fuser parameters shall be carried out to achieve the desired results. Currently, we are working towards phase three and awaiting phase four.

    To summarize the talk, we have seen that distributed sensor network surveillance is very advantageous and crucial for military surveillance. By incorporating fusion and tracking algorithms, improved accuracy and tracking performance can be achieved. To establish the best tracking methodology, a hybrid approach combining both central-level fusion and track-to-track methods is needed to yield the best results.

    MATLAB as a platform provides a holistic framework for the development and testing of data fusion techniques, with a variety of trackers and fusion algorithms available as part of Sensor Fusion and Tracking Toolbox. With powerful data visualization capabilities and C/C++ code generation, the design cycle is much easier.

    The future scope involves deployment of the generated C code on embedded targets, functional verification of the modules with processor-in-the-loop testing, and real-time performance benchmarking and improvements.

    Data fusion is an evolving field, offering significant advantages for intelligence extraction. Towards this, the major exploratory tracks are incorporation of deep learning techniques for data fusion and target tracking, adaptive and dynamic fusion techniques, and explainable AI in data fusion.

    As I draw towards the conclusion of my talk, I express sincere gratitude to Shri. N. Srinivas Rao, distinguished scientist and director of DLRL, Shri. K. Murali, outstanding scientist, additional director, and Dr. S SudhaRani, Scientist G, project director PET from DLRL for great support and motivation.

    I extend sincere thanks to MathWorks for providing this platform and especially to Mr. Sumit Garg for his excellent technical support. Special thanks to Ms. M. Nischala, team member, for her sincere efforts. Looking forward to a very long and fruitful association. Thank you.
