When I started working at BTC Embedded Systems in 2014, floating-point was a rare topic in customer meetings, and many control units supported fixed-point only. Since then, however, floating-point has become more and more important and is firmly established in today's projects.
Back-to-back testing can be used for various use cases, but the most common one is to compare a Simulink model against auto-generated production code. In the past, this made a lot of sense, since the model usually used floating-point while the code was implemented with fixed-point. The Back-to-back test verified whether the translation from model to code still led to the same behavior.
This raises the question of whether a floating-point Back-to-back test is still needed, now that both model and code use floating-point data types.
How does floating-point work?
Fixed-point is not a real data type; it is better described as a way to implement a user-defined precision on top of integer data types. For more information, please read the article What you should know about fixed-point by my colleague Markus Gros.
This is different for floating-point variables. Here, the available bits are split into three sections representing the sign, the exponent, and the mantissa (see below).
For the value 1 with a 32-bit floating-point variable (single precision), it looks like this:
- Value: 1
- Actually stored in floating-point: 1
- Binary: 0 01111111 00000000000000000000000
- IEEE 754 (Value): + | 2^0 | 1.0
- IEEE 754 (Encoded): 0 | 127 | 0
- Po2: 1 * 2^0
For the value 0.1 we get the following result:
- Value: 0.1
- Actually stored in floating-point: 0.100000001490116119384765625
- Binary: 0 01111011 10011001100110011001101
- IEEE 754 (Value): + | 2^-4 | 1.600000023841858
- IEEE 754 (Encoded): 0 | 123 | 5033165
- Po2: 3602879701896397 * 2^-55
Based on the concept of mantissa and exponent, the above example shows that many values, such as 0.1, cannot be represented exactly. If you want to know more about floating-point variables, please check out my article What you should know about floating-point.
If you want to check out more values, have a look at the following website: https://www.h-schmidt.net/FloatConverter/IEEE754.html and try 0.2 or 16,777,217.
What influences a floating-point value?
In addition to the fact that specific values might not be representable with floating-point data types, there are further influences on how a value is handled in a calculation, such as:
- Rounding methods
- Precision (depending on the CPU)
Each compiler with floating-point support provides settings that define how floating-point operations are handled, and the available options may differ from compiler to compiler.
In addition, floating-point variables are not suitable for all operations. An exact comparison like x == 0.1 will practically never evaluate to true. At least for such situations, you will still need to go with fixed-point variables.
How to ensure that no deviant behavior occurs between model and code?
This brings us back to the initial question: is a floating-point Back-to-back test still needed? As we have seen, floating-point arithmetic has many influencing factors that might lead to different results on model and code level (and even to differences when the same code is compiled with different compilers). To ensure equal behavior between model and code, a Back-to-back test is the best option. Since the compiler has, in contrast to fixed-point data types, a huge influence on floating-point results, I even suggest running a Back-to-back test between model and processor (MIL vs. PIL) to ensure that the target compiler also handles the code in the same way as the model and the host compiler. The good thing about Back-to-back testing is that it can be fully automated and only requires user interaction if a deviation is detected.
Floating-point definitely brings advantages for some use cases, and you get rid of the annoying task of finding the right scaling for fixed-point data types. However, you have to keep in mind that compiler settings play a big role in floating-point behavior, and the deviations they cause can be detected with a floating-point Back-to-back test.