Do you mean my explanation wasn't good?? Puzzled... I will be honest: it is not a good explanation. A single float value is not all that useful if we consider that it will be used for further math operations, which will most likely reduce precision if the operands are too far apart in magnitude. There is a limit to how small one float can be relative to another before their sum is quantized and the result loses precision. The exponent of a float allows a shift of range that is useful for many purposes, but I now consider the exponent to be more of a convenience factor than a boost to its actual precision. Truncation is unavoidable even with integers, though.

There is one strategy that keeps the maximal amount of precision with floating point: add numbers that are close together in magnitude, and multiply by factors on power-of-two boundaries. I have written about these things before, but I am trying to respond to the idea that more possible values can somehow yield more precise results. I have no examples, and I think any useful example would be so niche that it wouldn't change the fact that the mantissa bit range is literally the limiting factor for floats.

The name "float" is apt because of how the exponent works: as long as values in a dataset are within some close distance of each other, it works well, but operations on extremely distant values will always fail. It is possible to encode a 32-bit float as a 24-bit int if we are only interested in linear step values, but quantization will impart some error that may make the point moot.
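To make the quantization point concrete, here is a minimal sketch (the magnitudes are arbitrary choices for illustration): at 1.0e8, the spacing between adjacent float32 values is 8.0, so adding 1.0 is lost entirely.

```cpp
#include <cstdio>

int main() {
    float big = 1.0e8f;       // the ULP (step between adjacent floats) here is 8.0
    float small = 1.0f;       // smaller than that ULP
    float sum = big + small;  // the addition rounds straight back to 'big'
    std::printf("small value lost: %s\n", sum == big ? "yes" : "no");  // prints "yes"
}
```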
You're making the mistake, first, of pointing out the limitations of float32 in calculations. But we aren't talking about calculations; we're talking about holding a control value. You can use it in a float64 (or other) calculation just fine. It's like arguing that float32 (or int24) isn't good enough for sample files or buffers because it limits further calculations. It doesn't.
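If you want that concrete: a minimal sketch, with a hypothetical normalized parameter `cutoffNorm` and a one-pole smoother standing in for whatever math you'd actually do. The float32 only holds the value; everything downstream runs in float64.

```cpp
#include <cstdio>

int main() {
    float cutoffNorm = 0.4273918f;  // the stored control value, float32
    double smoothed = 0.0;
    for (int n = 0; n < 100000; ++n) {
        // all arithmetic happens in float64; the float32 is only the source value
        smoothed += (static_cast<double>(cutoffNorm) - smoothed) * 0.001;
    }
    std::printf("target %.9f, smoothed %.9f\n", (double)cutoffNorm, smoothed);
}
```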
And much of what you say is no less true of integer-based fixed point; the only difference is that it's more obvious with integers, so people screw up less often. If you have to shift an int-based value right 32 bits to align it for an addition, it's apparent that all your bits went away.
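A minimal sketch of that alignment problem, assuming a signed 32.32 fixed-point format:

```cpp
#include <cstdint>
#include <cstdio>

int main() {
    // 3.5 in 32.32 fixed point: integer part 3, fraction bits for 0.5
    int64_t a = ((int64_t)3 << 32) | 0x80000000u;
    // aligning it to a plain integer for addition shifts off every fraction bit
    int64_t aligned = a >> 32;
    std::printf("3.5 aligned to integer: %lld\n", (long long)aligned);  // prints 3
}
```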
Realize that for a standard potentiometer with 270 degrees of rotation, 24-bit gives 62137.8 steps per degree. In contrast, unsigned 32-bit gives 15907286 steps per degree. What do we need that for? Not rhetorical; I'm asking you to show me the black swan. There is no knob on any gear you own, analog or digital, that has anywhere close to 24-bit resolution. Johnson-Nyquist noise alone ensures that's not possible. Sure, you can type arbitrary precision into a control, but it will be indiscernible from the nearest 24-bit step.
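The arithmetic above, spelled out as a throwaway snippet:

```cpp
#include <cstdio>

int main() {
    const double degrees = 270.0;       // standard pot rotation
    const double steps24 = 16777216.0;  // 2^24
    const double steps32 = 4294967296.0;  // 2^32
    std::printf("24-bit: %.1f steps/degree\n", steps24 / degrees);  // ~62137.8
    std::printf("32-bit: %.0f steps/degree\n", steps32 / degrees);  // ~15907286
}
```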
Anyway, if you really think so, you need to lobby Steinberg and others to change, not me (I'm just explaining why they're not wrong). Be prepared with a good argument.
Posted by earlevel — Fri Apr 05, 2024 10:24 pm