The Model Testing Dashboard collects metric data from the model design and testing artifacts in a project, such as requirements, models, and test results. Use the metric data to assess the status and quality of your model testing. Each metric in the dashboard measures a different aspect of the quality of the testing of your model and reflects guidelines in industry-recognized software development standards, such as ISO 26262 and DO-178. Use the widgets in the Model Testing Dashboard to see high-level metric results and testing gaps, as described in Explore Status and Quality of Testing Activities Using the Model Testing Dashboard. Alternatively, you can use the metric API to collect metric results programmatically.
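As a sketch of that programmatic workflow (assuming the current folder contains a project set up for the dashboard; the engine creation, execute, and getMetrics calls follow the metric.Engine API):

```matlab
% Create a metric engine for the project in the current folder,
% collect one metric, then read the cached results.
metric_engine = metric.Engine();
execute(metric_engine, 'RequirementWithTestCase');           % collect the metric
results = getMetrics(metric_engine, 'RequirementWithTestCase');
disp([results.Value])    % one logical value per analyzed requirement
```

Each metric description below lists the identifier to pass to execute and getMetrics.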
Metric ID: RequirementWithTestCase
Determine whether a requirement is linked to test cases.
Use this metric to determine whether a requirement is linked to a test case with a
link where the Type is set to Verifies. The metric
analyzes only requirements where the Type is set to
Functional and that are linked to the model with a link where the
Type is set to Implements.
To collect data for this metric:
In the Model Testing Dashboard, click a metric in the Requirements Linked to Tests section and, in the table, see the Test Link Status column.
Use getMetrics with the metric identifier,
RequirementWithTestCase.
Collecting data for this metric loads the model file and requires a Simulink® Requirements™ license.
For this metric, instances of metric.Result return
Value as one of these logical outputs:
0 — The requirement is not linked to test cases in the
project.
1 — The requirement is linked to at least one test case with
a link where the Type is set to
Verifies.
The metric:
Analyzes only requirements where the Type is set to
Functional and that are linked to the model with a link where the
Type is set to Implements.
Counts links to test cases in the project where the link type is set to
Verifies, including links to test cases that test other models.
For each requirement that is linked to test cases, check that the links are to test
cases that run on the model that implements the requirement.
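For instance, the logical results can be used to flag unlinked requirements. This is a minimal sketch; it assumes the Artifacts property of each result identifies the requirement that the result applies to:

```matlab
% Sketch: list requirements that have no Verifies link to a test case.
metric_engine = metric.Engine();
execute(metric_engine, 'RequirementWithTestCase');
results = getMetrics(metric_engine, 'RequirementWithTestCase');
for n = 1:numel(results)
    if results(n).Value == 0   % 0: requirement not linked to a test case
        fprintf('No test link: %s\n', results(n).Artifacts(1).Name);
    end
end
```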
For an example of collecting metrics programmatically, see Collect Metrics on Model Testing Artifacts Programmatically.
Metric ID: RequirementWithTestCasePercentage
Calculate the percentage of requirements that are linked to test cases.
This metric counts the fraction of requirements that are linked to at least one test
case with a link where the Type is set to
Verifies. The metric analyzes only requirements where the Type is set to Functional and that are linked
to the model with a link where the Type is set to
Implements.
This metric calculates the results by using the results of the Requirement linked to test cases metric.
To collect data for this metric:
In the Model Testing Dashboard, view the Requirements with Tests widget.
Use getMetrics with the metric identifier,
RequirementWithTestCasePercentage.
Collecting data for this metric loads the model file and requires a Simulink Requirements license.
For this metric, instances of metric.Result return
Value as a fraction structure that contains these fields:
Numerator — The number of implemented requirements that are
linked to at least one test case.
Denominator — The total number of functional requirements
implemented in the model with a link where the Type is set to
Implements.
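Turning the fraction structure into a displayed percentage might look like this sketch (engine setup as in the metric API workflow):

```matlab
% Sketch: report the requirements-with-tests percentage from the
% Numerator and Denominator fields of the fraction structure.
metric_engine = metric.Engine();
execute(metric_engine, 'RequirementWithTestCasePercentage');
res = getMetrics(metric_engine, 'RequirementWithTestCasePercentage');
frac = res(1).Value;   % structure with Numerator and Denominator fields
fprintf('%d of %d requirements linked (%.1f%%)\n', ...
    frac.Numerator, frac.Denominator, 100*frac.Numerator/frac.Denominator);
```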
The metric:
Analyzes only requirements where the Type is set to
Functional and that are linked to the model with a link where the
Type is set to Implements.
Counts links to test cases in the project where the link type is set to
Verifies, including links to test cases that test other models.
For each requirement that is linked to test cases, check that the links are to test
cases that run on the model that implements the requirement.
For an example of collecting metrics programmatically, see Collect Metrics on Model Testing Artifacts Programmatically.
Metric ID: RequirementWithTestCaseDistribution
Distribution of the number of requirements linked to test cases compared to the number of requirements that are missing links to test cases.
Use this metric to count the number of requirements that are linked to test cases and
the number of requirements that are missing links to test cases. The metric analyzes only
requirements where the Type is set to
Functional and that are linked to the model with a link where the
Type is set to Implements. A
requirement is linked to a test case if it has a link where the Type
is set to Verifies.
This metric returns the result as a distribution of the results of the Requirement linked to test cases metric.
To collect data for this metric:
In the Model Testing Dashboard, place your cursor over the Requirements with Tests widget.
Use getMetrics with the metric identifier,
RequirementWithTestCaseDistribution.
Collecting data for this metric loads the model file and requires a Simulink Requirements license.
For this metric, instances of metric.Result return
Value as a distribution structure that contains these fields:
BinCounts — The number of requirements in each bin, returned
as an integer vector.
BinEdges — The logical output results of the Requirement
linked to test cases metric, returned as a vector with entries 0
(false) and 1
(true).
The first bin includes requirements that are not linked to test cases. The second bin includes requirements that are linked to at least one test case.
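Reading the distribution structure might look like this sketch:

```matlab
% Sketch: report the two bins of the linked/unlinked distribution.
metric_engine = metric.Engine();
execute(metric_engine, 'RequirementWithTestCaseDistribution');
res = getMetrics(metric_engine, 'RequirementWithTestCaseDistribution');
dist = res(1).Value;
% First bin: requirements without test links; second bin: with links
fprintf('Unlinked: %d, Linked: %d\n', dist.BinCounts(1), dist.BinCounts(2));
```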
The metric:
Analyzes only requirements where the Type is set to
Functional and that are linked to the model with a link where the
Type is set to Implements.
Counts links to test cases in the project where the link type is set to
Verifies, including links to test cases that test other models.
For each requirement that is linked to test cases, check that the links are to test
cases that run on the model that implements the requirement.
For an example of collecting metrics programmatically, see Collect Metrics on Model Testing Artifacts Programmatically.
Metric ID: TestCasesPerRequirement
Count the number of test cases linked to each requirement.
Use this metric to count the number of test cases linked to each requirement. The
metric analyzes only requirements where the Type is set
to Functional and that are linked to the model with a link where the
Type is set to Implements. A test
case is linked to a requirement if it has a link where the Type is
set to Verifies.
To collect data for this metric:
In the Model Testing Dashboard, click a metric in the section Tests per Requirement to display the results in a table.
Use getMetrics with the metric identifier,
TestCasesPerRequirement.
Collecting data for this metric loads the model file and requires a Simulink Requirements license.
For this metric, instances of metric.Result return
Value as an integer.
The metric:
Analyzes only requirements where the Type is set to
Functional and that are linked to the model with a link where the
Type is set to Implements.
Counts links to test cases in the project where the link type is set to
Verifies, including links to test cases that test other models.
For each requirement that is linked to test cases, check that the links are to test
cases that run on the model that implements the requirement.
For an example of collecting metrics programmatically, see Collect Metrics on Model Testing Artifacts Programmatically.
Metric ID: TestCasesPerRequirementDistribution
Distribution of the number of test cases linked to each requirement.
This metric returns a distribution of the number of test cases linked to each
requirement. Use this metric to determine if requirements are linked to a disproportionate
number of test cases. The metric analyzes only requirements where the Type is set to Functional and that are linked
to the model with a link where the Type is set to
Implements. A test case is linked to a requirement if it has a link
where the Type is set to Verifies.
This metric returns the result as a distribution of the results of the Test cases per requirement metric.
To collect data for this metric:
In the Model Testing Dashboard, view the Tests per Requirement widget.
Use getMetrics with the metric identifier,
TestCasesPerRequirementDistribution.
Collecting data for this metric loads the model file and requires a Simulink Requirements license.
For this metric, instances of metric.Result return
Value as a distribution structure that contains these fields:
BinCounts — The number of requirements in each bin, returned
as an integer vector.
BinEdges — Bin edges for the number of test cases linked to
each requirement, returned as an integer vector. BinEdges(1) is
the left edge of the first bin, and BinEdges(end) is the right
edge of the last bin. The length of BinEdges is one more than the
length of BinCounts.
The bins in the result of this metric correspond to the bins 0, 1, 2, 3, and >3 in the Tests per Requirement widget.
The metric:
Analyzes only requirements where the Type is set to
Functional and that are linked to the model with a link where the
Type is set to Implements.
Counts links to test cases in the project where the link type is set to
Verifies, including links to test cases that test other models.
For each requirement that is linked to test cases, check that the links are to test
cases that run on the model that implements the requirement.
For an example of collecting metrics programmatically, see Collect Metrics on Model Testing Artifacts Programmatically.
Metric ID: TestCaseWithRequirement
Determine whether a test case is linked to requirements.
Use this metric to determine whether a test case is linked to a requirement with a
link where the Type is set to Verifies. The metric
analyzes only test cases that run on the model for which you collect metric data.
To collect data for this metric:
In the Model Testing Dashboard, click a metric in the Tests Linked to Requirements section and, in the table, see the Requirement Link Status column.
Use getMetrics with the metric identifier,
TestCaseWithRequirement.
Collecting data for this metric loads the model file and requires a Simulink Test™ license.
For this metric, instances of metric.Result return
Value as one of these logical outputs:
0 — The test case is not linked to requirements that are
implemented in the model.
1 — The test case is linked to at least one requirement with
a link where the Type is set to
Verifies.
The metric:
Analyzes only test cases in the project that test the model for which you collect the metric results. The metric does not analyze test cases that run on subsystems.
Counts only links where the Type is set to
Verifies that link to requirements where the
Type is set to Functional. This includes
links to requirements that are not linked to the model or are linked to other models.
For each test case that is linked to requirements, check that the links are to
requirements that are implemented by the model that the test case runs on.
For an example of collecting metrics programmatically, see Collect Metrics on Model Testing Artifacts Programmatically.
Metric ID: TestCaseWithRequirementPercentage
Calculate the percentage of test cases that are linked to requirements.
This metric counts the fraction of test cases that are linked to at least one
requirement with a link where the Type is set to
Verifies. The metric analyzes only test cases that run on the model
for which you collect metric data.
This metric calculates the results by using the results of the Test linked to requirements metric.
To collect data for this metric:
In the Model Testing Dashboard, view the Tests with Requirements widget.
Use getMetrics with the metric identifier,
TestCaseWithRequirementPercentage.
Collecting data for this metric loads the model file and requires a Simulink Test license.
For this metric, instances of metric.Result return
Value as a fraction structure that contains these fields:
Numerator — The number of test cases that are linked to at
least one requirement with a link where the Type is set to
Verifies.
Denominator — The total number of test cases that test the
model.
The metric:
Analyzes only test cases in the project that test the model for which you collect the metric results. The metric does not analyze test cases that run on subsystems.
Counts only links where the Type is set to
Verifies that link to requirements where the
Type is set to Functional. This includes
links to requirements that are not linked to the model or are linked to other models.
For each test case that is linked to requirements, check that the links are to
requirements that are implemented by the model that the test case runs on.
For an example of collecting metrics programmatically, see Collect Metrics on Model Testing Artifacts Programmatically.
Metric ID: TestCaseWithRequirementDistribution
Distribution of the number of test cases linked to requirements compared to the number of test cases that are missing links to requirements.
Use this metric to count the number of test cases that are linked to requirements and
the number of test cases that are missing links to requirements. The metric analyzes only
test cases that run on the model for which you collect metric results. A test case is
linked to a requirement if it has a link where the Type is set to
Verifies.
This metric returns the result as a distribution of the results of the Test linked to requirements metric.
To collect data for this metric:
In the Model Testing Dashboard, place your cursor over the Tests with Requirements widget.
Use getMetrics with the metric identifier,
TestCaseWithRequirementDistribution.
Collecting data for this metric loads the model file and requires a Simulink Test license.
For this metric, instances of metric.Result return the
Value as a distribution structure that contains these fields:
BinCounts — The number of test cases in each bin, returned as
an integer vector.
BinEdges — The logical output results of the Test linked to
requirements metric, returned as a vector with entries 0
(false) and 1
(true).
The first bin includes test cases that are not linked to requirements. The second bin includes test cases that are linked to at least one requirement.
The metric:
Analyzes only test cases in the project that test the model for which you collect the metric results. The metric does not analyze test cases that run on subsystems.
Counts only links where the Type is set to
Verifies that link to requirements where the
Type is set to Functional. This includes
links to requirements that are not linked to the model or are linked to other models.
For each test case that is linked to requirements, check that the links are to
requirements that are implemented by the model that the test case runs on.
For an example of collecting metrics programmatically, see Collect Metrics on Model Testing Artifacts Programmatically.
Metric ID: RequirementsPerTestCase
Count the number of requirements linked to each test case.
Use this metric to count the number of requirements linked to each test case. The
metric analyzes only test cases that run on the model for which you collect metric
results. A test case is linked to a requirement if it has a link where the
Type is set to Verifies.
To collect data for this metric:
In the Model Testing Dashboard, click a metric in the section Requirements per Test to display the results in a table.
Use getMetrics with the metric identifier,
RequirementsPerTestCase.
Collecting data for this metric loads the model file and requires a Simulink Test license.
For this metric, instances of metric.Result return
Value as an integer.
The metric:
Analyzes only test cases in the project that test the model for which you collect the metric results. The metric does not analyze test cases that run on subsystems.
Counts only links where the Type is set to
Verifies that link to requirements where the
Type is set to Functional. This includes
links to requirements that are not linked to the model or are linked to other models.
For each test case that is linked to requirements, check that the links are to
requirements that are implemented by the model that the test case runs on.
For an example of collecting metrics programmatically, see Collect Metrics on Model Testing Artifacts Programmatically.
Metric ID: RequirementsPerTestCaseDistribution
Distribution of the number of requirements linked to each test case.
This metric returns a distribution of the number of requirements linked to each test
case. Use this metric to determine if test cases are linked to a disproportionate number
of requirements. The metric analyzes only test cases that run on the model for which you
collect metric results. A test case is linked to a requirement if it has a link where the
Type is set to Verifies.
This metric returns the result as a distribution of the results of the Requirements per test case metric.
To collect data for this metric:
In the Model Testing Dashboard, view the Requirements per Test widget.
Use getMetrics with the metric identifier,
RequirementsPerTestCaseDistribution.
Collecting data for this metric loads the model file and requires a Simulink Test license.
For this metric, instances of metric.Result return
Value as a distribution structure that contains these fields:
BinCounts — The number of test cases in each bin, returned as
an integer vector.
BinEdges — Bin edges for the number of requirements linked to
each test case, returned as an integer vector. BinEdges(1) is the
left edge of the first bin, and BinEdges(end) is the right edge
of the last bin. The length of BinEdges is one more than the
length of BinCounts.
The bins in the result of this metric correspond to the bins 0, 1, 2, 3, and >3 in the Requirements per Test widget.
The metric:
Analyzes only test cases in the project that test the model for which you collect the metric results. The metric does not analyze test cases that run on subsystems.
Counts only links where the Type is set to
Verifies that link to requirements where the
Type is set to Functional. This includes
links to requirements that are not linked to the model or are linked to other models.
For each test case that is linked to requirements, check that the links are to
requirements that are implemented by the model that the test case runs on.
For an example of collecting metrics programmatically, see Collect Metrics on Model Testing Artifacts Programmatically.
Metric ID: TestCaseType
Return the type of the test case.
This metric returns the type of the test case. A test case is a baseline, equivalence, or simulation test.
Baseline tests compare outputs from a simulation to expected results stored as baseline data.
Equivalence tests compare the outputs from two different simulations. Simulations can run in different modes, such as normal simulation and software-in-the-loop.
Simulation tests run the system under test and capture simulation data. If the system under test contains blocks that verify simulation, such as Test Sequence and Test Assessment blocks, the pass/fail results are reflected in the simulation test results.
To collect data for this metric:
In the Model Testing Dashboard, click a widget in the section Tests by Type to display the results in a table.
Use getMetrics with the metric identifier,
TestCaseType.
Collecting data for this metric loads the model file and test files and requires a Simulink Test license.
For this metric, instances of metric.Result return
Value as one of these integer outputs:
0 — Simulation test
1 — Baseline test
2 — Equivalence test
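A sketch that maps the integer outputs to readable type names:

```matlab
% Sketch: print a readable type name for each test case result.
metric_engine = metric.Engine();
execute(metric_engine, 'TestCaseType');
results = getMetrics(metric_engine, 'TestCaseType');
typeNames = ["Simulation", "Baseline", "Equivalence"];  % values 0, 1, 2
for n = 1:numel(results)
    fprintf('%s test\n', typeNames(results(n).Value + 1));
end
```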
The metric includes only test cases in the project that test the model for which you collect the metric results. The metric does not analyze test cases that run on subsystems.
For an example of collecting metrics programmatically, see Collect Metrics on Model Testing Artifacts Programmatically.
Metric ID: TestCaseTypeDistribution
Distribution of the types of the test cases for the model.
This metric returns a distribution of the types of test cases that run on the model. A test case is a baseline, equivalence, or simulation test. Use this metric to determine whether there is a disproportionate number of test cases of one type.
Baseline tests compare outputs from a simulation to expected results stored as baseline data.
Equivalence tests compare the outputs from two different simulations. Simulations can run in different modes, such as normal simulation and software-in-the-loop.
Simulation tests run the system under test and capture simulation data. If the system under test contains blocks that verify simulation, such as Test Sequence and Test Assessment blocks, the pass/fail results are reflected in the simulation test results.
This metric returns the result as a distribution of the results of the Test case type metric.
To collect data for this metric:
In the Model Testing Dashboard, view the Tests by Type widget.
Use getMetrics with the metric identifier,
TestCaseTypeDistribution.
Collecting data for this metric loads the model file and requires a Simulink Test license.
For this metric, instances of metric.Result return
Value as a distribution structure that contains these fields:
BinCounts — The number of test cases in each bin, returned as
an integer vector.
BinEdges — The outputs of the Test case type metric, returned
as an integer vector. The integer outputs represent the three test case types:
0 — Simulation test
1 — Baseline test
2 — Equivalence test
The metric includes only test cases in the project that test the model for which you collect the metric results. The metric does not analyze test cases that run on subsystems.
For an example of collecting metrics programmatically, see Collect Metrics on Model Testing Artifacts Programmatically.
Metric ID: TestCaseTag
Return the tags for a test case.
This metric returns the tags for a test case. You can add custom tags to a test case by using the Test Manager.
To collect data for this metric:
In the Model Testing Dashboard, click a widget in the Tests with Tag section to display the results in a table.
Use getMetrics with the metric identifier,
TestCaseTag.
Collecting data for this metric loads the model file and test files and requires a Simulink Test license.
For this metric, instances of metric.Result return
Value as a string.
The metric includes only test cases in the project that test the model for which you collect the metric results. The metric does not analyze test cases that run on subsystems.
For an example of collecting metrics programmatically, see Collect Metrics on Model Testing Artifacts Programmatically.
Metric ID: TestCaseTagDistribution
Distribution of the tags of the test cases for the model.
This metric returns a distribution of the tags on the test cases that run on the model. For a test case, you can specify custom tags in a comma-separated list in the Test Manager. Use this metric to determine if there is a disproportionate number of test cases that have a particular tag.
This metric returns the result as a distribution of the results of the Test case tag metric.
To collect data for this metric:
In the Model Testing Dashboard, view the Tests with Tag widget.
Use getMetrics with the metric identifier,
TestCaseTagDistribution.
Collecting data for this metric loads the model file and requires a Simulink Test license.
For this metric, instances of metric.Result return
Value as a distribution structure that contains these fields:
BinCounts — The number of test cases in each bin, returned as
an integer vector.
BinEdges — The bin edges for the tags that are specified for
the test cases, returned as a string array.
The metric includes only test cases in the project that test the model for which you collect the metric results. The metric does not analyze test cases that run on subsystems.
For an example of collecting metrics programmatically, see Collect Metrics on Model Testing Artifacts Programmatically.
Metric ID: TestCaseStatus
Return the status of the test case result.
This metric returns the status of the test case result. A test status is passed, failed, disabled, or untested.
To collect data for this metric:
In the Model Testing Dashboard, click a widget in the Model Test Status section to display the results in a table.
Use getMetrics with the metric identifier,
TestCaseStatus.
Collecting data for this metric loads the model file and test result files and requires a Simulink Test license.
For this metric, instances of metric.Result return
Value as one of these integer outputs:
0 — The test case failed.
1 — The test case passed.
2 — The test case was disabled.
3 — The test case was not run (untested).
The metric:
Includes only test cases in the project that test the model for which you collect the metric results. The metric does not analyze test cases that run on subsystem test harnesses.
Does not count the status of test cases that run in software-in-the-loop (SIL) or processor-in-the-loop (PIL) mode. The metric shows these test cases as untested.
Reflects the status of the whole test case if the test case includes multiple iterations.
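Tallying statuses across all test cases could be sketched as:

```matlab
% Sketch: count test cases in each result status.
metric_engine = metric.Engine();
execute(metric_engine, 'TestCaseStatus');
results = getMetrics(metric_engine, 'TestCaseStatus');
s = [results.Value];   % 0 failed, 1 passed, 2 disabled, 3 untested
fprintf('Passed %d, Failed %d, Disabled %d, Untested %d\n', ...
    sum(s==1), sum(s==0), sum(s==2), sum(s==3));
```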
For an example of collecting metrics programmatically, see Collect Metrics on Model Testing Artifacts Programmatically.
Metric ID: TestCaseStatusPercentage
Calculate the percentage of test cases that passed.
This metric counts the fraction of test cases that passed in the test results. The metric analyzes only test cases that run on the model for which you collect metric data.
This metric calculates the results by using the results of the Test case status metric.
To collect data for this metric:
In the Model Testing Dashboard, in the Model Test Status section, place your cursor over the Passed widget.
Use getMetrics with the metric identifier,
TestCaseStatusPercentage.
Collecting data for this metric loads the model file and requires a Simulink Test license.
For this metric, instances of metric.Result return
Value as a fraction structure that contains these fields:
Numerator — The number of test cases that passed.
Denominator — The total number of test cases that test the
model.
The metric:
Includes only test cases in the project that test the model for which you collect the metric results. The metric does not analyze test cases that run on subsystem test harnesses.
Does not count the status of test cases that run in software-in-the-loop (SIL) or processor-in-the-loop (PIL) mode. The metric shows these test cases as untested.
Reflects the status of the whole test case if the test case includes multiple iterations.
For an example of collecting metrics programmatically, see Collect Metrics on Model Testing Artifacts Programmatically.
Metric ID: TestCaseStatusDistribution
Distribution of the statuses of the test case results for the model.
This metric returns a distribution of the status of the results of test cases that run on the model. A test status is passed, failed, disabled, or untested.
This metric returns the result as a distribution of the results of the Test case status metric.
To collect data for this metric:
In the Model Testing Dashboard, use the widgets in the Model Test Status section to see the results.
Use getMetrics with the metric identifier,
TestCaseStatusDistribution.
Collecting data for this metric loads the model file and requires a Simulink Test license.
For this metric, instances of metric.Result return
Value as a distribution structure that contains these fields:
BinCounts — The number of test cases in each bin, returned as
an integer vector.
BinEdges — The outputs of the Test case status metric,
returned as an integer vector. The integer outputs represent the test result statuses:
0 — The test case failed.
1 — The test case passed.
2 — The test case was disabled.
3 — The test case was not run (untested).
The metric:
Includes only test cases in the project that test the model for which you collect the metric results. The metric does not analyze test cases that run on subsystem test harnesses.
Does not count the status of test cases that run in software-in-the-loop (SIL) or processor-in-the-loop (PIL) mode. The metric shows these test cases as untested.
Reflects the status of the whole test case if the test case includes multiple iterations.
For an example of collecting metrics programmatically, see Collect Metrics on Model Testing Artifacts Programmatically.
Metric ID: ExecutionCoverageBreakdown
Model execution coverage achieved by test cases and justifications.
This metric returns the model execution coverage measured in the test results. The metric result includes the percentage of execution coverage achieved by the test cases and the percentage of coverage justified in coverage filters.
To collect data for this metric:
In the Model Testing Dashboard, in the Model Coverage section, place your cursor over the bars in the Execution widget.
Use getMetrics with the metric identifier,
ExecutionCoverageBreakdown.
Collecting data for this metric loads the model file and test results files and requires a Simulink Coverage™ license.
For this metric, instances of metric.Result return the
Value as a double vector that contains these elements:
Value(1) — The percentage of execution coverage achieved by
the tests.
Value(2) — The percentage of execution coverage justified by
coverage filters.
The metric:
Returns aggregated coverage results.
Does not include coverage from test cases that run in software-in-the-loop (SIL) or processor-in-the-loop (PIL) mode.
Returns 100% coverage for models that do not have execution points.
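Reading the two-element coverage vector, as a sketch:

```matlab
% Sketch: read achieved and justified execution coverage percentages.
metric_engine = metric.Engine();
execute(metric_engine, 'ExecutionCoverageBreakdown');
res = getMetrics(metric_engine, 'ExecutionCoverageBreakdown');
achieved  = res(1).Value(1);   % percent achieved by the tests
justified = res(1).Value(2);   % percent justified by coverage filters
fprintf('Execution coverage: %.1f%% achieved, %.1f%% justified\n', ...
    achieved, justified);
```

The condition, decision, and MCDC breakdown metrics that follow return Value in this same two-element shape.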
For an example of collecting metrics programmatically, see Collect Metrics on Model Testing Artifacts Programmatically.
Metric ID: ConditionCoverageBreakdown
Model condition coverage achieved by test cases and justifications.
This metric returns the model condition coverage measured in the test results. The metric result includes the percentage of condition coverage achieved by the test cases and the percentage of coverage justified in coverage filters.
To collect data for this metric:
In the Model Testing Dashboard, in the Model Coverage section, place your cursor over the bars in the Condition widget.
Use getMetrics with the metric identifier,
ConditionCoverageBreakdown.
Collecting data for this metric loads the model file and test results files and requires a Simulink Coverage license.
For this metric, instances of metric.Result return the
Value as a double vector that contains these elements:
Value(1) — The percentage of condition coverage achieved by
the tests.
Value(2) — The percentage of condition coverage justified by
coverage filters.
The metric:
Returns aggregated coverage results.
Does not include coverage from test cases that run in software-in-the-loop (SIL) or processor-in-the-loop (PIL) mode.
Returns 100% coverage for models that do not have condition points.
For an example of collecting metrics programmatically, see Collect Metrics on Model Testing Artifacts Programmatically.
Metric ID: DecisionCoverageBreakdown
Model decision coverage achieved by test cases and justifications.
This metric returns the model decision coverage measured in the test results. The metric result includes the percentage of decision coverage achieved by the test cases and the percentage of coverage justified in coverage filters.
To collect data for this metric:
In the Model Testing Dashboard, in the Model Coverage section, place your cursor over the bars in the Decision widget.
Use getMetrics with the metric identifier,
DecisionCoverageBreakdown.
Collecting data for this metric loads the model file and test results files and requires a Simulink Coverage license.
For this metric, instances of metric.Result return the
Value as a double vector that contains these elements:
Value(1) — The percentage of decision coverage achieved by
the tests.
Value(2) — The percentage of decision coverage justified by
coverage filters.
The metric:
Returns aggregated coverage results.
Does not include coverage from test cases that run in software-in-the-loop (SIL) or processor-in-the-loop (PIL) mode.
Returns 100% coverage for models that do not have decision points.
For an example of collecting metrics programmatically, see Collect Metrics on Model Testing Artifacts Programmatically.
Metric ID: MCDCCoverageBreakdown
Model modified condition and decision (MCDC) coverage achieved by test cases and justifications.
This metric returns the modified condition and decision coverage (MCDC) measured in the test results. The metric result includes the percentage of MCDC coverage achieved by the test cases and the percentage of coverage justified in coverage filters.
To collect data for this metric:
In the Model Testing Dashboard, in the Model Coverage section, place your cursor over the bars in the MC/DC widget.
Use getMetrics with the metric identifier,
MCDCCoverageBreakdown.
Collecting data for this metric loads the model file and test results files and requires a Simulink Coverage license.
For this metric, instances of metric.Result return the
Value as a double vector that contains these elements:
Value(1) — The percentage of MCDC coverage achieved by the
tests.
Value(2) — The percentage of MCDC coverage justified by
coverage filters.
The metric:
Returns aggregated coverage results.
Does not include coverage from test cases that run in software-in-the-loop (SIL) or processor-in-the-loop (PIL) mode.
Returns 100% coverage for models that do not have condition/decision points.
For an example of collecting metrics programmatically, see Collect Metrics on Model Testing Artifacts Programmatically.