gatane a day ago

You can also use a library for it:

https://github.com/PetteriAimonen/libfixmath

I saw this getting forked for a custom SDK for the PS1 (which doesn't have an FPU).
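
A minimal usage sketch (Q16.16 math; function names are from libfixmath's fix16.h, but worth double-checking against the repo):

    #include <stdio.h>
    #include "fix16.h"   /* from libfixmath */

    int main(void) {
        fix16_t a = fix16_from_float(1.5f);  /* Q16.16: 1.5 -> 0x00018000 */
        fix16_t b = fix16_from_int(2);       /* 2.0 -> 0x00020000 */
        fix16_t c = fix16_mul(a, b);         /* overflow-checked multiply */
        printf("%f\n", fix16_to_float(c));   /* ~3.0 */
        return 0;
    }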

  • turtledragonfly a day ago

    That's a good library (I use it myself), but just to be clear for anyone reading: that's a standalone fixed-point implementation, not a compiler-provided one like the article discusses (via -ffixed-point).
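
    For contrast, a minimal sketch of the compiler-provided flavor (assuming a toolchain that implements the TR 18037 fixed-point types behind -ffixed-point; the type names and the "k" literal suffix come from stdfix.h):

      #include <stdfix.h>
      #include <stdio.h>

      int main(void) {
          accum a = 1.5k;              /* _Accum literal; width is target-dependent */
          accum b = a * 2.25k;         /* fixed-point multiply emitted by the compiler */
          printf("%f\n", (double)b);   /* ~3.375 */
          return 0;
      }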

phkahler a day ago

I just converted a big fixed-point algorithm to float on a Cortex-M4F. It runs at very close to the same speed, but is significantly more readable.

In the fixed-point code I used straight C, but wrote my own functions for sin, cos, and reciprocal square root.

I can't see just a 2x improvement over soft float being worth it, while using LLVM-specific features and modifying libraries just eliminates portability.

  • jonathrg a day ago

    Cortex-M4F has an FPU, i.e., hardware floating point support (albeit single precision only). You likely would see a much greater performance drop without it.

    • phkahler a day ago

      >> Cortex-M4F has an FPU, i.e., hardware floating point support

      Maybe I wasn't clear. I know it has an FPU. I originally wrote the code in fixed point because I knew it would be fast and didn't trust the FPU performance on such a small chip. Now I converted it to float and get the same performance.

      I also used an "f" suffix on all constants to avoid accidental software double precision, and used the GCC flags to force it all to single precision and link the hard-float libs.
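
      For reference, a sketch of that setup (these are real GCC flags, but the exact -mfpu value depends on the part, and the helper below is just illustrative):

        #include <math.h>

        /* build with, e.g.:
           arm-none-eabi-gcc -mcpu=cortex-m4 -mfloat-abi=hard \
               -mfpu=fpv4-sp-d16 -fsingle-precision-constant -O2 ... */
        static inline float recip_sqrt(float x) {
            /* "f" suffixes and sqrtf() keep this single precision, so it
               compiles to FPU instructions rather than soft-double calls */
            return 1.0f / sqrtf(x);
        }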

      On smaller chips I used to expect fixed point to be faster than a hardware FPU, but that's no longer the case. They are about equal for me.

      • Dylan16807 a day ago

        > On smaller chips I used to expect fixed point to be faster than a hardware FPU, but that's no longer the case.

        What kind of chip do you have in mind where that would be true?

        Notably, M4 is the first of its line to have an FPU option at all.

      • jonathrg a day ago

        I think I misread the comment (I thought you meant that a 2x improvement seemed like too much, but perhaps you meant the opposite).

mikequinlan a day ago

What is the advantage of fixed point arithmetic over rational arithmetic? Rational arithmetic seems to include fixed point arithmetic as a subset (in fixed point arithmetic the denominator is always a power of the base).

  • jcranmer a day ago

    In fixed-point arithmetic, your basic operations come out to a multiply and a divide (which is a shift, if it's binary fixed-point) in the worst case. For rational arithmetic, you need to do two multiplies and a gcd-like algorithm to normalize the result (which is probably at least as expensive as a divider, even if implemented with hardware acceleration) in the best case, and three multiplies and an addition (and the gcd-like algorithm for normalization) for a simple addition.

    Rational arithmetic operations are just plain slow, and there isn't a lot of magic that can fix that.
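
    To make the asymmetry concrete, a rough sketch (Q16.16 binary fixed point vs. a normalized int32 rational; illustrative, not production code):

      #include <stdint.h>

      typedef int32_t q16_16;                       /* value = raw / 2^16 */

      static q16_16 fix_mul(q16_16 a, q16_16 b) {
          return (q16_16)(((int64_t)a * b) >> 16);  /* one multiply + one shift */
      }

      typedef struct { int32_t n, d; } rat;

      static int32_t gcd32(int32_t a, int32_t b) {  /* the gcd-like algorithm */
          while (b) { int32_t t = a % b; a = b; b = t; }
          return a < 0 ? -a : a;
      }

      static rat rat_add(rat a, rat b) {            /* 3 multiplies + add + gcd */
          rat r = { a.n * b.d + b.n * a.d, a.d * b.d };
          int32_t g = gcd32(r.n, r.d);
          if (g) { r.n /= g; r.d /= g; }
          return r;
      }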

    • dzaima a day ago

      There is some magic that may somewhat help - delaying normalization until it's actually necessary or the numbers get too large, and skipping steps if denominators are equal. But even then it'll be quite bad relative to floats, especially in software implementations, due to likely having unpredictable branches.
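
      A sketch of that kind of magic, reusing rat/gcd32 from the sketch above (the renormalization threshold is arbitrary, and the int32 overflow caveats still apply):

        static rat rat_add_lazy(rat a, rat b) {
            rat r;
            if (a.d == b.d) {                 /* equal denominators: no multiplies */
                r.n = a.n + b.n; r.d = a.d;
            } else {
                r.n = a.n * b.d + b.n * a.d; r.d = a.d * b.d;
            }
            if (r.d > (1 << 24) || r.n > (1 << 24) || r.n < -(1 << 24)) {
                int32_t g = gcd32(r.n, r.d);  /* normalize only when components grow */
                if (g) { r.n /= g; r.d /= g; }
            }
            return r;
        }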

      • adrian_b 21 hours ago

        Delaying normalization is much more likely to work well when you use big numbers, e.g. 64-bit or bigger integers.

        With 32-bit integers, overflows caused by omitting the normalization step may be so frequent as to remove any advantage from delaying the normalization.

        It is probably best to have a rational-arithmetic library that implements both normalization options, so you can test an application with both delayed and immediate normalization and choose whichever is better.

  • tliltocatl a day ago

    Unless you allow the denominator to grow unbounded, rationals have no real advantage over FP and are much slower. If you allow an unbounded denominator, you will run out of memory after perhaps a few thousand operations (and having to dynamically allocate memory will kill performance even further). And then there are transcendental functions, which cannot be represented by rationals anyway.

    • crabbone a day ago

      > And then there are transcendental functions, which cannot be represented by rationals anyway.

      They cannot be represented by floats either. So this doesn't matter.

      • dzaima a day ago

        The meaningful difference between irrational computations on rationals vs. floats is that with floats you have some inherent, expected, and well-understood imprecision that applies evenly over the results of add/mul/div/sqrt/sin/etc. With unbounded rationals, add/mul/div are all infinitely precise, always (and presumably that property is why one would want rationals over floats), but there is no possible equivalent for sqrt/sin, at which point your options are either just giving up or capping the required precision, and at that point you could just use floats.

        • crabbone 15 hours ago

          > well-understood imprecision that applies evenly over results

          I'm sorry, I have some bad news for you... especially about the "evenly" part. I had a friend who worked at IBM Research on the problem of "normalizing" some matrix multiplication irregularities that occur due to numerical underflow or overflow. And it's anything but a solved or easy problem.

          • dzaima 15 hours ago

            There's a maximum error of 0.5 ulp for all of add/sub/mul/div/sqrt on all inputs (OK, sin/cos and co. typically have a couple ulp of error, but close enough).

            And it is indeed well understood that composing many such operations can get problematic, especially add/sub with a large magnitude difference, and matrix multiplication in particular.

        • wakawaka28 a day ago

          Floating point operations also run in constant-bounded time. You can't say that about any arbitrary-precision format.

  • Isognoviastoma a day ago

    - after a few operations the denominator exceeds the 64-bit range, so you either need bigints or have to make the fractions approximate, which is in effect fixed point (see the sketch after this list)

    - you often use decimal fractions on input and output anyway

    - it's slower, and even slower with bigints

    - no square root, sin, and so on with rationals
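
    A quick sketch of the first point (the product of a handful of odd denominators blows through 64 bits almost immediately):

      #include <stdint.h>
      #include <stdio.h>

      int main(void) {
          uint64_t den = 1;
          int ops = 0;
          /* multiply fractions like 1/3 * 1/5 * 1/7 * ... without reducing */
          for (uint64_t p = 3; den <= UINT64_MAX / p; p += 2) {
              den *= p;
              ops++;
          }
          printf("a 64-bit denominator survives only %d multiplies\n", ops);
          return 0;
      }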

    • lifthrasiir 19 hours ago

      > no square root, sin, and so on with rationals

      A better statement would be "no standard definition possible for square root, sin, and so on with rationals", as rationals with unbounded denominators don't have an inherent precision limit---something like ulps in floating-point systems---and the max denominator wouldn't be a good choice for precision even when the denominator is limited. Every such function now has to be given an explicit denominator value or a similar strategy, which complicates both the interface and the implementation.
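
      A sketch of what such an explicit strategy looks like (all names hypothetical; `prec` is the caller-chosen denominator that the representation itself can't suggest):

        #include <stdint.h>

        typedef struct { int64_t n, d; } rat;

        static uint64_t isqrt64(uint64_t x) {  /* integer square root */
            if (x < 2) return x;
            uint64_t r = x / 2, y = (r + x / r) / 2;
            while (y < r) { r = y; y = (r + x / r) / 2; }
            return r;
        }

        /* sqrt(n/d) ~= isqrt(n * prec^2 / d) / prec: the precision is an extra
           argument, not a property of the type (assumes x.n >= 0 and x.d > 0,
           and the intermediate product can overflow) */
        static rat rat_sqrt(rat x, uint64_t prec) {
            uint64_t num = (uint64_t)x.n * prec * prec / (uint64_t)x.d;
            rat r = { (int64_t)isqrt64(num), (int64_t)prec };
            return r;
        }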

    • crabbone a day ago

      > no square root, sin, and so on with rationals

      Are you thinking of a specific library? You aren't the only person who commented this way. But the truth is that root, sin, and so on don't "work" with floats either. In fact, the common ways to implement these functions are either tables (which are approximate) or algebraic approximations (which give you... drum roll: rationals!)

      But really, there isn't any way (except symbolically) to represent transcendental functions in computers. It doesn't matter what kind of number you choose to do it with.

      • Isognoviastoma a day ago

        With floating point, √2 is obviously the closest representable number. With fixed point, it is obviously the closest representable number as well. With rationals, you need to arbitrarily limit the precision, and the whole point of using rationals was to have exact values.

        • NL807 a day ago

          I don't think that's a big deal though. You deliberately choose to use a rational system, because you understand the problem domain, which could greatly benefit from such a representation. If you throw a rational system at every math problem as a catch-all representation, then you are doing it wrong.

        • User23 a day ago

          Rational approximation is a thing. 22/7 and all that.

  • magicalhippo a day ago

    Floats are essentially just rationals, but with the denominators limited to powers of two. The advantage of fixed point is that you avoid all the complications of a changing denominator by having a fixed denominator.
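
    A sketch of that view (frexp/ldexp are standard C; this recovers the exact rational behind a double):

      #include <math.h>
      #include <stdio.h>

      int main(void) {
          double x = 0.1;
          int e;
          double m = frexp(x, &e);                  /* x = m * 2^e, 0.5 <= m < 1 */
          long long num = (long long)ldexp(m, 53);  /* 53-bit significand as integer */
          printf("%.17g = %lld / 2^%d\n", x, num, 53 - e);
          /* prints: 0.10000000000000001 = 7205759403792794 / 2^56 */
          return 0;
      }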

    • jonathrg a day ago

      For simple applications, yes. In practical applications you often need to shift the "fixed" point around manually to maintain the appropriate dynamic range.
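
      For instance (an illustrative Q-format shuffle; the type names are made up):

        #include <stdint.h>

        typedef int32_t q16_16;  /* 16 integer bits, 16 fraction bits */
        typedef int32_t q8_24;   /*  8 integer bits, 24 fraction bits */

        /* values known to be small: trade headroom for fraction bits */
        static inline q8_24 to_q8_24(q16_16 x)  { return (int32_t)((uint32_t)x << 8); }

        /* values about to grow: trade fraction bits back for headroom */
        static inline q16_16 to_q16_16(q8_24 x) { return x >> 8; }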

      • magicalhippo a day ago

        Right, but still fixed at compile-time.

  • 15155 a day ago

    On a small microcontroller without a hardware FPU and with very little memory to spare: performance.

  • codr7 a day ago

    As stated: rationals are more general purpose, hence more complicated.

  • the_gorilla a day ago

    It has to run on a computer. How would you simulate rational arithmetic?

    • mikequinlan a day ago

      Rational arithmetic represents numbers as a numerator and denominator (rational numbers). There are hundreds (at least) of libraries that implement rational arithmetic on a computer.

      https://www.google.com/search?q=rational+arithmetic+library

      • the_gorilla a day ago

        I meant run on the CPU itself. Most processors have floating-point support, and fixed point just uses integer operations. All those heavy libraries require an arbitrary amount of memory for their calculations.

        • dzaima a day ago

          The article here is about hardware without native FP already; that said, rationals (even before getting to the need for bigints) are quite bad regardless, likely requiring division and gcd around basic arithmetic, which are slow both in software and in hardware.

librasteve a day ago

the Raku language https://raku.org defaults to bigint (Int) and rational (Rat) numerics, since that gives fewer unexpected results with e.g. decimals (0.1+0.2==0.3) … it's easy enough to reach for floating point (Num), but this is deliberately the non-default, so that you the coder need to know that you want to trade precision for performance

all irrational operations (sqrt, sin, cos and so on) return Num in a consistent way

  • librasteve 7 hours ago

    as a wider reply to the many comments here - thanks! this is indeed a nuanced topic with trade-offs everywhere, and truly it is possible to argue for either Rat or Num as the default depending on your preference

    just to mention that the Raku numeric model also improves on the Lisp/Scheme number tower - why? because a Num is truly _not_ a subset / child class of Rat; they are sets that intersect

  • martinky24 a day ago

    In most other languages, aren't rationals "deliberately the non-default so that you the coder need to know that is what you want to trade precision vs performance"?

    My point being, your point doesn't actually make Raku sound like a good design decision; it's just different for the sake of being different when put that way.

    • librasteve 7 hours ago

      Apologies to the Raku folks if my description made it sound like a bad design decision. It is true that defaulting to the Rat approach is distinctive (although not unique).

      Let me try again. My case is that Num (i.e. IEEE 754 floating point) is not well aligned with the decimals that humans use in most everyday circumstances. Thus the famous (0.1+0.2==0.3) example, which works with Rat and breaks with Num.
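
      The same example in C doubles, for concreteness:

        #include <stdio.h>

        int main(void) {
            double sum = 0.1 + 0.2;
            printf("%.17g\n", sum);      /* 0.30000000000000004 */
            printf("%d\n", sum == 0.3);  /* 0 -- the comparison Rat gets right */
            return 0;
        }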

      The only benefit of Num over Rat is performance, and the drawback is that Num has limited precision and range. Unless you are talking about irrational numbers, where you have to accept imprecision anyway.

      Therefore having Rat as the default means that Num is an option for the coder to increase execution speed, but only if they accept the implications for precision and range.

    • dfox a day ago

      The same design decision is in most languages that implement a Common Lisp/Scheme-style numeric tower (e.g. have rationals as a core type). You do basic arithmetic on rationals or integers, you get a rational or an integer. You do a transcendental function on pretty much anything, you get a float. You also usually get stuff like complex rationals, which makes the rules somewhat convoluted, but that is not that relevant for this discussion.

    • amelius a day ago

      I'd say the best choice is correctness over performance. So bignums win.

      • AlotOfReading a day ago

        Bignum rationals aren't "more correct" or more expressive than things like algebraic or symbolic representations; you're just picking a different set of tradeoffs than floats. That's fine: different numerical systems are fundamentally about making different sets of tradeoffs. We should just be clear about what we're picking.

        Personally, I'd argue that for most people writing most applications, IEEE floats are a fantastic tradeoff between performance, memory, error, and stability. That's why they've stuck around as the dominant numerical representation for decades. Most of the situations where they suffer (e.g. rounding transcendental results, operands with widely varying orders of magnitude, results very near 0) don't have general solutions in other systems, so the middle ground between "safe for a naive programmer" and "requires expertise in numerical analysis" becomes very small overall.

        • amelius a day ago

          Bignums are arbitrarily more accurate than IEEE floats, which are often implemented only in single or double precision.

          IEEE floats are fine for most cases, but sometimes you want to tune up the accuracy, and with IEEE floats that's typically not possible; it's like you're in a dead-end street.

          Therefore I still think bignums are a better default choice for most programming languages.

          • AlotOfReading a day ago

            > Bignums are arbitrarily more accurate than IEEE floats...

            Yes, and that's true of all arbitrary-sized types vs fixed-size types. There's a lot of costs that come with that choice, like inherent allocations, losing constant time operations, lack of hardware support, etc.

            You can do exactly the same thing with floating point, for example with GNU MPFR and ARPREC. Doing so is broadly faster than rational representations for a lot of common types of computation, and much less prone to size explosions. It still has the issue of representations being cut off at a given size or precision, but it can also represent things like irrationals far more efficiently.
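
            For reference, a minimal sketch with MPFR (the mpfr_* calls and MPFR_RNDN are the real API as I recall it, but double-check; link with -lmpfr -lgmp):

              #include <mpfr.h>
              #include <stdio.h>

              int main(void) {
                  mpfr_t x;
                  mpfr_init2(x, 256);          /* 256-bit significand */
                  mpfr_set_ui(x, 2, MPFR_RNDN);
                  mpfr_sqrt(x, x, MPFR_RNDN);  /* sqrt(2), correctly rounded */
                  mpfr_printf("%.50Rf\n", x);  /* 50 decimal digits */
                  mpfr_clear(x);
                  return 0;
              }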

            That said, I've very rarely run into situations that couldn't be made to fit in a double or quad where rationals would be better. These types can represent the distances between planets down to the nanometer or micrometer scale without losing precision. If you're dealing with atomic manufacturing on an astronomical scale, it might be worth investing in a new type, but anything short of that is very likely to be a problem requiring domain expertise to evaluate. Floats are perfectly fine as a default.

            The other advantage of floats is that they allow/require you to think about precision and rounding because they're a core part of the representation. Some people don't like that, but going to rational bignums doesn't free you of that burden.

            • amelius a day ago

              You don't need to work with atomic particles to get into problems with floating point accuracy. A badly conditioned matrix can be enough.

              Yes, you can use floats to build floats with more precision, but that's not the point. The point is to have correctness at your fingertips, rather than a datatype that comes with lots of caveats.

              I agree with you that people who work with numerical algorithms might want to use floats, but programming languages with a general audience probably should have an arbitrary-precision datatype as a default.

              • AlotOfReading a day ago

                My point is that rationals aren't correct in the way you're describing. For example, try rotating that rational ill-conditioned matrix: you'll probably have to pick a precision to approximate the result, and at that point you would have been better off with arbitrary-precision floating point, or with algebraic or symbolic representations, instead. None of these representations are wrong; they're simply tradeoffs for different usages.

                That's why IEEE floats are such a good default. They balance those tradeoffs extremely well, with fixed semantics and near-universal hardware support.

          • lifthrasiir 19 hours ago

            What do you mean by bignums? That may refer to bigints (e.g. JS BigInt, Python int), bigdecimals (e.g. Python decimal.Decimal), or rationals with bigint numerators and denominators (e.g. Python fractions.Fraction), and they are all very different from one another.

            I believe that only bigints among them are a must-have for practical programming languages, while rationals would be good to have but can be rather easily constructed from bigints anyway. Bigdecimals are harder than people seem to believe and can't really be a general solution.

      • tliltocatl 14 hours ago

        It's not the best choice; it's a choice that is appropriate in some (quite restricted) settings where you don't really care if the program runs out of memory or pauses for a minute on unexpected input. In many other settings, an inexact (or even obviously incorrect) result is better than stalling the whole system.

User23 a day ago

My first exposure to fixed-point arithmetic was early CPU-based 3D rendering. It's a really neat technique.