When it comes to unit testing in the automotive industry, the first use case most people have in mind is functional testing. Basically, this means taking all requirements that belong to a certain unit and writing test cases to verify that the unit works as expected.
Functional test cases
As already mentioned, these test cases are derived from the requirements, and a single test case represents one “good run” through the system-under-test that covers a certain aspect of the unit. They are a translation of a textual requirement into a data stream that can be used for simulation. To verify the expected behavior, the tester stimulates the system-under-test with a defined input behavior and then observes the expected results at the affected outputs during a simulation of the model and, possibly, the code.
Due to this characteristic, the test cases can become quite long in order to cover a specific behavior, while the requirement usually only describes the expected behavior of the affected output signals and, if available, internal measurement points. Furthermore, even if the requirement describes the expected behavior of an output signal, it might only give information about a temporal section of the whole test case. The unaffected signals are usually treated as “Don’t care”, as are temporal sections of a signal that are not defined.
Once all test cases that belong to one requirement have been executed successfully, the requirement counts as tested. Even though there are a lot of new testing techniques on the market today, functional testing is still the standard method to verify the functional behavior of a system-under-test.
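The stimulate-and-observe pattern described above can be sketched in a few lines of Python. Everything here is illustrative: the `sut` function, the signal names, and the tiny test harness are hypothetical stand-ins, not a specific tool’s API. The sketch also shows how “Don’t care” steps are simply skipped during checking:

```python
DONT_CARE = None  # signal value not described by the requirement

def run_functional_test(sut, stimulus, expected):
    """Drive the system-under-test step by step; check only defined expectations."""
    for step, inputs in enumerate(stimulus):
        outputs = sut(inputs)
        for signal, value in expected[step].items():
            if value is DONT_CARE:
                continue  # undefined temporal section: not checked
            assert outputs[signal] == value, (
                f"step {step}: {signal} = {outputs[signal]}, expected {value}")

# Toy requirement: "if enable is set, out follows in; otherwise out is 0"
def sut(inputs):
    return {"out": inputs["in"] if inputs["enable"] else 0}

stimulus = [{"enable": 0, "in": 5}, {"enable": 1, "in": 5}, {"enable": 1, "in": 9}]
expected = [{"out": 0}, {"out": 5}, {"out": DONT_CARE}]  # last step: "Don't care"
run_functional_test(sut, stimulus, expected)
```

A real test environment would of course drive a simulated model or compiled code instead of a Python function, but the structure of a functional test case is the same: defined inputs, expected outputs, and don’t-cares everywhere else.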
Structural test cases
Today, several tools are available that can generate test cases automatically. They fall into two categories.
- Random test case generation
The test cases are generated by selecting input data randomly and checking the reached coverage afterward. This approach is never complete and may lead to very long structural test cases that are hard to debug. On the other hand, it generates test cases quickly. Basically all tools provide this approach.
- Model Checking based test case generation
This smart approach is complete, always provides the shortest possible way to cover a certain coverage goal, and can even prove that a certain coverage goal can never be reached (unreachability proof). In return, it is slower than random test case generation. Have a look at our Model Checking page for more details. Only very few tools provide this advanced approach.
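The random approach from the first category can be sketched like this. The tiny `sut` and the manual branch bookkeeping are hypothetical stand-ins for what a real tool instruments automatically:

```python
import random

covered = set()  # branches reached so far

def sut(x):
    """Toy system-under-test with two branches to cover."""
    if x > 10:
        covered.add("x > 10")
        return x - 10
    covered.add("x <= 10")
    return x

random.seed(0)  # fixed seed so the run is reproducible
for _ in range(50):           # generate random input values...
    sut(random.randint(0, 20))

print(f"branch coverage: {len(covered)}/2")  # ...then check coverage afterward
```

Note the order of events: inputs are drawn first, coverage is measured second. If a branch is guarded by a narrow condition (say, `x == 4711`), random selection may need very many steps to hit it, or never hit it at all — which is exactly where the model-checking approach shines.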
Might this replace the admittedly time-consuming manual test case creation?
The simple answer is: No!
The full answer is: It depends!
If a machine-readable representation of a requirement is provided (this is called a formalized requirement), the Model Checking technology is able to generate functional test cases. To get more information about this approach, please have a look at the BTC EmbeddedSpecifier.
But let’s assume for now that we do not have a formal representation of the requirements. What is the purpose of these automatically created structural test cases, if they do not replace the functional testing?
What are structural test cases?
Let’s quickly remember that for functional testing, the requirements are the source of information from which the test cases are derived; this holds even for formalized requirements. It is similar for automatically generated structural test cases, but the source of information is different.
The tools do not take the requirements as the source of the test case generation, but structural properties of the system-under-test. There is a wide range of possible structural properties that can be covered by automatically generated test cases; basically, they are all about model or code coverage. The most common coverage goals are statement, branch/decision, condition, and modified condition/decision coverage (MC/DC, which is also requested by ISO 26262), and there are many more, such as function coverage, equivalence classes, or individually specified coverage goals. Please have a look at this blog article about code coverage.
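As a small illustration of what MC/DC demands, consider the decision `a and (b or c)`: every condition must be shown to independently affect the outcome. The following sketch shows one minimal vector set that achieves this (it is one valid choice, not the only one):

```python
def decision(a, b, c):
    """A single decision with three conditions."""
    return a and (b or c)

# For MC/DC, each condition is toggled while the others are held fixed,
# and the toggle must flip the decision's outcome:
assert decision(True,  True,  False) is True   # baseline
assert decision(False, True,  False) is False  # only `a` changed -> outcome flips
assert decision(True,  False, False) is False  # only `b` changed -> outcome flips
assert decision(True,  False, True)  is True   # only `c` changed vs. previous -> flips
```

Four vectors suffice here, whereas exhaustive testing of three conditions would need eight. In general, MC/DC for n conditions can be achieved with roughly n+1 vectors, which is why it is the sweet spot required by ISO 26262 for high-integrity software.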
To avoid misunderstandings, this kind of test case is called a structural test case, and its characteristics are also different. Instead of verifying requirements, structural test cases rather “stress” the system-under-test by varying inputs and calibration parameters. Usually, these test cases are short, and they take all signals in all steps into account: there are no “Don’t cares”.
But isn’t this a self-fulfilling prophecy if these test cases are derived from the system-under-test and then executed on the system-under-test again? This is a valid question!
Stimuli vector generation
Deriving structural test cases is a two-step process. First, the engines (as we call them) generate stimuli vectors. Stimuli vectors contain only input data, i.e. data for each input signal and each calibration parameter. They do not contain any information about outputs or internal measurement points. So, these stimuli vectors only describe how to reach a certain coverage goal.
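A stimuli vector might look like the following sketch. The signal and parameter names are made up for illustration; the point is that only input data is present:

```python
stimuli_vector = {
    "coverage_goal": "decision 'limiter active' evaluates to true",
    "steps": [
        # one entry per step: input signals plus calibration parameters
        {"inputs": {"speed": 0,   "enable": 1}, "params": {"speed_limit": 80}},
        {"inputs": {"speed": 120, "enable": 1}, "params": {"speed_limit": 80}},
    ],
    # deliberately no "expected" section: outputs and internal measurement
    # points are unknown until the vector is simulated (the second step)
}
```

Compare this with a functional test case, which pairs every stimulus with expected output values; here, the expected behavior is simply absent by design.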
Therefore, from a general point of view, it does not matter whether the stimuli vectors are derived from the model or from the code. If we dig a bit deeper into this topic, there are several good reasons to take the code as the source, but for understanding the general approach this makes no difference.
The first benefit of these generated stimuli vectors is that they may already point out structural problems in the model or code related to the robustness of the system-under-test, such as overflows, hidden division-by-zero operations, out-of-range values, or values in invalid value ranges.
In the second step, deriving structural test cases from the stimuli vectors, the tester decides which implementation should serve as the source from which the output behavior is derived. In model-based development, this is usually the model. Therefore, all generated stimuli vectors are executed on the model, and the outputs produced by the simulation are recorded. The stimuli part and the recorded output behavior from the simulation together form a structural test case.
These structural test cases can now be executed on the code to verify whether the code also behaves structurally the same as the model. In other words, this verifies whether the model has been translated correctly into code. The most common issues that can be observed are scaling differences, overflows, or even compiler differences, all of which might lead to different behavior between model and code. Therefore, this back-to-back test is highly recommended by ISO 26262.
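The back-to-back comparison can be sketched as follows. `model_step` and `code_step` are hypothetical stand-ins for simulating the model and executing the generated code; in reality both would be driven by the same test environment:

```python
def model_step(x):
    """Reference behavior, recorded during model simulation."""
    return max(0, min(x, 255))

def code_step(x):
    """Generated code under test; should match the model."""
    return max(0, min(x, 255))

def back_to_back(stimuli):
    """Replay the stimuli on the code and compare against the model outputs."""
    failures = []
    for step, x in enumerate(stimuli):
        reference, actual = model_step(x), code_step(x)
        if actual != reference:  # e.g. scaling, overflow, compiler differences
            failures.append((step, x, reference, actual))
    return failures

print(back_to_back([-5, 0, 100, 300]))  # empty list: model and code agree
```

If, say, the generated code used a 16-bit saturation where the model saturates at 255, the vector `300` would immediately surface the mismatch as a failure entry.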
Since this approach relies only on the model or code and the selected coverage goals, it does not need additional input or interaction from the tester. Therefore, this test approach can be fully automated and applied in a continuous integration environment as part of the development and testing process.
What are structural test cases not designed for?
Since the automatic generation of structural test cases is such a convenient approach, this also raises the question of whether these generated test cases might, by chance, fit a requirement. Even though this might be theoretically possible, and even though there are white papers discussing this approach in theory, it will almost certainly not happen in practice.
But let’s assume for a moment that some of the generated structural test cases do fit a requirement. This means that the tester has to manually review all generated test cases and check whether one of them fits one of the requirements. Since the model or code could still contain faults, these might remain undiscovered by this approach.
In addition, the effort and complexity of this task obviously grow rapidly with an increasing number of requirements and generated structural test cases. Once this is done, the tester still has to write additional functional test cases for the requirements that are not yet covered. From the effort and complexity point of view, this approach is therefore not applicable in a real test workflow.
Even though there is a huge difference between automatically generated structural test cases and manually written functional test cases, both have their place. Structural test cases are an effective complement to functional testing: thanks to their systematic approach, they discover structural issues in and between model and code. We also learned that trying to map automatically generated structural test cases to requirements is not a suitable approach.
Transferred to a book: the functional test cases check whether the story is consistent and correct, whereas the structural test cases look at the grammar and check whether the book is correctly translated into a different language. Finally, applying automatic structural test case generation to the test workflow will increase the test depth and the robustness of the system-under-test.