Bobinas P4G
Conversation

Notices

  1. Bernie (codewiz@mstdn.io)'s status on Saturday, 28-Oct-2023 10:11:45 UTC

    I came across a clever method for approximate float comparison which exploits their binary representation:

    bool almost_equal(float a, float b) {
        uint32_t abits = to_bits(a);
        uint32_t bbits = to_bits(b);

        // Absolute difference: plain abits - bbits would wrap around when bbits > abits.
        return (abits > bbits ? abits - bbits : bbits - abits) < 4;
    }

    Where 4 is the tolerance in "units in the last place".

    to_bits() isn't a simple bit_cast. It also does some magic with the sign bit. But for now let's only consider positive floats...

    #programming #c #cpp
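    The thread never shows to_bits() itself, so here is one plausible sketch of the sign-bit "magic" it alludes to (an assumption, not the author's actual code): remap the raw bits so that all negative floats sort below all positive ones, making plain integer subtraction yield the ULP distance.

```cpp
#include <cstdint>
#include <cstring>

// Hypothetical to_bits(): the name matches the post, the body is a
// common implementation of the sign-bit trick it mentions.
uint32_t to_bits(float f) {
    uint32_t bits;
    std::memcpy(&bits, &f, sizeof bits);  // the plain bit_cast part
    // Negative floats: flip all bits; positive floats: set the sign bit.
    // This maps floats onto a monotonically increasing unsigned range.
    return (bits & 0x80000000u) ? ~bits : (bits | 0x80000000u);
}
```

    With this mapping, to_bits(-0.0f) and to_bits(0.0f) land on adjacent integers, so the two zeros compare as almost equal.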

    In conversation Saturday, 28-Oct-2023 10:11:45 UTC from mstdn.io permalink
    • Bernie (codewiz@mstdn.io)'s status on Saturday, 28-Oct-2023 10:21:54 UTC
      in reply to

      The question on my mind was: does this give the correct distance even around the boundary of an exponent bump?

      And yes: mantissa and exponent are arranged in such a way that adding 1 to the binary form always yields the next float:

      >>> import struct
      >>> to_float = lambda i: struct.unpack(">f", i.to_bytes(4, "big"))[0]
      >>> to_float(0x3FFFFFFF)
      1.9999998807907104
      >>> to_float(0x40000000)
      2.0
      >>> to_float(0x40000001)
      2.000000238418579

      #programming #c #cpp #python

      In conversation Saturday, 28-Oct-2023 10:21:54 UTC permalink
    • Bernie (codewiz@mstdn.io)'s status on Saturday, 28-Oct-2023 10:44:43 UTC
      in reply to

      Of course this also works with double and long double.

      My production version of almost_equals() is templated using C++20 concepts and the nicer "auto" syntax:

      bool almost_equals(std::floating_point auto a, std::floating_point auto b) {
          auto abits = to_bits(a);
          auto bbits = to_bits(b);
          return distance(abits, bbits) < 4;
      }

      In conversation Saturday, 28-Oct-2023 10:44:43 UTC permalink
    • Bernie (codewiz@mstdn.io)'s status on Saturday, 28-Oct-2023 10:53:23 UTC
      in reply to

      For more information, consult your local encyclopedia:
      https://en.wikipedia.org/wiki/Single-precision_floating-point_format

      #programming #c #cpp

      In conversation Saturday, 28-Oct-2023 10:53:23 UTC permalink

      Attachments

      1. Single-precision floating-point format
        Single-precision floating-point format (sometimes called FP32 or float32) is a computer number format, usually occupying 32 bits in computer memory; it represents a wide dynamic range of numeric values by using a floating radix point. A floating-point variable can represent a wider range of numbers than a fixed-point variable of the same bit width at the cost of precision. A signed 32-bit integer variable has a maximum value of 2^31 − 1 = 2,147,483,647, whereas an IEEE 754 32-bit base-2 floating-point variable has a maximum value of (2 − 2^−23) × 2^127 ≈ 3.4028235 × 10^38. All integers with 7 or fewer decimal digits, and any 2^n for a whole number −149 ≤ n ≤ 127, can be converted exactly into an IEEE 754 single-precision floating-point value. In the IEEE 754-2008 standard, the 32-bit base-2 format is officially referred to as binary32; it was called single in IEEE 754-1985. IEEE 754 specifies additional floating-point types, such as 64-bit base-2 double precision and, more recently, base-10 representations. One of the first programming languages to provide...

      2. https://media.mstdn.io/mstdn-media/media_attachments/files/111/312/266/651/919/039/original/e846e452d67040b9.png
    • maryam@mstdn.io's status on Saturday, 28-Oct-2023 13:36:45 UTC
      in reply to

      @codewiz
      Does the distance function only look at the last bits?
      If it compared the first bits, wouldn't we be measuring the exponent rather than the mantissa?

      In conversation Saturday, 28-Oct-2023 13:36:45 UTC permalink
    • Bernie (codewiz@mstdn.io)'s status on Saturday, 28-Oct-2023 16:19:26 UTC
      in reply to
      • Maryam

      @Maryam It compares all 32 bits of the floats (there's no masking), but it tolerates a distance below 4 ULPs, i.e. a difference confined to the 2 least significant bits.

      In conversation Saturday, 28-Oct-2023 16:19:26 UTC permalink
    • Bernie (codewiz@mstdn.io)'s status on Sunday, 29-Oct-2023 03:57:31 UTC
      in reply to
      • Zelphir Kaltstahl

      @zelphirkaltstahl It works with standard IEEE 754 floats, yes.

      The problem with doing (a - b) is that you might get 1000.0 or you might get 0.001. Depending on the magnitude of a and b, both might represent an expected rounding error in the least significant bits of a float.

      A codebase I'm working on had arbitrary epsilon constants: 1e-6 for floats and 5e-15 for doubles. Such absolute thresholds are only reasonable for values in a narrow range (for example 0.1..10.0).

      In conversation Sunday, 29-Oct-2023 03:57:31 UTC permalink
    • Zelphir Kaltstahl (zelphirkaltstahl@mastodon.social)'s status on Sunday, 29-Oct-2023 03:57:32 UTC
      in reply to

      @codewiz Does this consider the IEEE 754 floating point format? How is it better than subtracting the floats from each other? Or in which cases?

      In conversation Sunday, 29-Oct-2023 03:57:32 UTC permalink

Bobinas P4G is a social network. It runs on GNU social, version 2.0.1-beta0, available under the GNU Affero General Public License.

All Bobinas P4G content and data are available under the Creative Commons Attribution 3.0 license.