ieee754
Read/write IEEE754 floating point numbers from/to a Buffer or array-like object
Found 233 results for ieee754
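Several packages above read and write IEEE 754 values against raw bytes. As a minimal sketch of what such a helper does, the same float64 round-trip can be performed with only the built-in `DataView` (no package required):

```javascript
// Round-trip a float64 through its raw IEEE 754 bytes using built-ins only.
const ab = new ArrayBuffer(8);
const dv = new DataView(ab);

dv.setFloat64(0, Math.PI, true);             // write little-endian bytes
const rawBytes = Array.from(new Uint8Array(ab));
const roundTripped = dv.getFloat64(0, true); // read them back

console.log(rawBytes.length, roundTripped === Math.PI); // 8 true
```

Buffer- and ieee754-style packages add offset handling and custom mantissa/exponent widths on top of this same byte-level idea.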
IEEE 754 half-precision floating-point for JavaScript
Test if a value is a Float64Array.
Float32Array.
The bias of a double-precision floating-point number's exponent.
Return a double-precision floating-point number with the magnitude of x and the sign of y.
The maximum biased base 2 exponent for a subnormal double-precision floating-point number.
Smallest positive double-precision floating-point normal number.
Double-precision floating-point negative infinity.
Return a normal number `y` and exponent `exp` satisfying `x = y * 2^exp`.
Test if a double-precision floating-point numeric value is infinite.
Double-precision floating-point positive infinity.
Test if a double-precision floating-point numeric value is NaN.
High word mask for the exponent of a double-precision floating-point number.
Create a double-precision floating-point number from a higher order word and a lower order word.
Return an unsigned 32-bit integer corresponding to the more significant 32 bits of a double-precision floating-point number.
The minimum biased base 2 exponent for a subnormal double-precision floating-point number.
Multiply a double-precision floating-point number by an integer power of two.
The maximum biased base 2 exponent for a double-precision floating-point number.
Return an integer corresponding to the unbiased exponent of a double-precision floating-point number.
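Several of the entries above (frexp-style decomposition, ldexp-style scaling, unbiased-exponent extraction) revolve around the identity `x = y * 2^exp`. A hedged sketch for normal finite doubles, with illustrative names rather than any package's actual API:

```javascript
// Illustrative frexp/ldexp sketch for normal finite doubles only
// (subnormals, zero, infinities and NaN are not handled).
function frexp(x) {
  const dv = new DataView(new ArrayBuffer(8));
  dv.setFloat64(0, x);
  const hi = dv.getUint16(0);                   // sign bit + 11-bit biased exponent + 4 significand bits
  const biased = (hi & 0x7ff0) >> 4;
  const exp = biased - 1022;                    // chosen so |y| lands in [0.5, 1)
  dv.setUint16(0, (hi & 0x800f) | (1022 << 4)); // force the biased exponent to 1022
  return [dv.getFloat64(0), exp];               // [y, exp] with x = y * 2^exp
}

function ldexp(y, exp) {
  return y * 2 ** exp; // fine for moderate exponents; extremes need care
}

const [y, exp] = frexp(Math.PI);
console.log(y, exp, ldexp(y, exp) === Math.PI); // 0.7853981633974483 2 true
```

Patching the exponent field directly (rather than dividing by `2 ** exp`) avoids intermediate overflow for values near the top of the double range.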
High word mask for excluding the sign bit of a double-precision floating-point number.
Float64Array.
High word mask for the sign bit of a double-precision floating-point number.
128-bit complex number.
64-bit complex number.
Test if a double-precision floating-point numeric value is negative zero.
Square root of 2π.
Set the more significant 32 bits of a double-precision floating-point number.
Return an unsigned 32-bit integer corresponding to the less significant 32 bits of a double-precision floating-point number.
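The high-word/low-word helpers listed above split a double into its two 32-bit halves. That can be sketched with a `DataView`; the helper names here are illustrative, not the stdlib API:

```javascript
// Split a float64 into its two 32-bit words and reassemble it.
function toWords(x) {
  const dv = new DataView(new ArrayBuffer(8));
  dv.setFloat64(0, x);                       // big-endian: high word first
  return [dv.getUint32(0), dv.getUint32(4)]; // [high, low]
}

function fromWords(high, low) {
  const dv = new DataView(new ArrayBuffer(8));
  dv.setUint32(0, high);
  dv.setUint32(4, low);
  return dv.getFloat64(0);
}

const [high, low] = toWords(1);
console.log(high.toString(16), low); // 3ff00000 0
```

The high word carries the sign bit, the 11-bit exponent, and the top 20 significand bits, which is why masks like the sign/exponent/significand constants above are defined against it.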
2π.
Test if a double-precision floating-point numeric value is positive zero.
Square root of double-precision floating-point epsilon.
Set the less significant 32 bits of a double-precision floating-point number.
Natural logarithm of 2.
Maximum safe double-precision floating-point integer.
The Euler-Mascheroni constant.
Difference between one and the smallest value greater than one that can be represented as a double-precision floating-point number.
Euler's number.
Natural logarithm of the maximum double-precision floating-point number.
π.
Smallest positive normalized single-precision floating-point number.
One half times the natural logarithm of 2.
Natural logarithm of the smallest normalized double-precision floating-point number.
Find the floating point number immediately after any given number
Maximum double-precision floating-point number.
High word mask for the significand of a double-precision floating-point number.
Natural logarithm of the square root of 2π.
Complex128Array.
1/4 times π.
Arbitrary constant `g` to be used in Lanczos approximation functions.
Test if a value is a complex number-like object.
Maximum single-precision floating-point number.
Complex64Array.
1/2 times π.
Return an unsigned 32-bit integer corresponding to the IEEE 754 binary representation of a single-precision floating-point number.
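For single precision, the bit-pattern conversion described above fits in one 32-bit word. A self-contained sketch (illustrative names, not a specific package's API):

```javascript
// Float32 <-> uint32 bit-pattern conversion via DataView.
function float32ToBits(x) {
  const dv = new DataView(new ArrayBuffer(4));
  dv.setFloat32(0, x); // value is rounded to single precision on store
  return dv.getUint32(0);
}

function bitsToFloat32(bits) {
  const dv = new DataView(new ArrayBuffer(4));
  dv.setUint32(0, bits);
  return dv.getFloat32(0);
}

console.log(float32ToBits(1).toString(16)); // 3f800000
console.log(bitsToFloat32(0xc0000000));     // -2
```

Note that `setFloat32` rounds the double argument to single precision, so a round-trip is only exact for values already representable as float32 (compare `Math.fround`).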
The maximum base 10 exponent for a double-precision floating-point number.
The minimum base 10 exponent for a subnormal double-precision floating-point number.
Smallest positive double-precision floating-point number.
The minimum base 10 exponent for a normal double-precision floating-point number.
Square root of 2.
The minimum biased base 2 exponent for a normal double-precision floating-point number.
Returns an integer corresponding to the unbiased exponent of a double-precision floating-point number.
Returns a 32-bit integer corresponding to the more significant 32 bits of a double-precision floating-point number.
Maximum safe nth factorial when stored in double-precision floating-point format.
Enforce finance-safe calculations using BigNumber instead of native JavaScript arithmetic and Math functions.
A module to encode and decode IEEE 754 floating point numbers.
Sets the less significant 32 bits of a double-precision floating-point number.
Multiplies a double-precision floating-point number by an integer power of two.
Creates a double-precision floating-point number from a higher order word and a lower order word.
Returns a normal number `y` and exponent `exp` satisfying `x = y * 2^exp`.
Natural logarithm of 1/2.
Test if a double-precision floating-point numeric value is finite.
Create a filled array according to a provided callback function.
Natural logarithm of 2π.
Create a zero-filled array having a specified length.
Single-precision floating-point positive infinity.
Test if a single-precision floating-point numeric value is NaN.
Single-precision floating-point negative infinity.
Return a boolean indicating if the sign bit for a double-precision floating-point number is on (true) or off (false).
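A sign-bit test like the one described above has to inspect the raw bits, because `Math.sign(-0)` returns `-0` and cannot cleanly distinguish signed zeros. A hedged sketch, with a copysign built on top (illustrative names):

```javascript
// Sign-bit test that distinguishes -0 from +0, plus copysign.
function signbit(x) {
  const dv = new DataView(new ArrayBuffer(8));
  dv.setFloat64(0, x);
  return (dv.getUint8(0) & 0x80) !== 0; // top byte holds the sign bit
}

function copysign(x, y) {
  return signbit(y) ? -Math.abs(x) : Math.abs(x);
}

console.log(signbit(-0), signbit(0)); // true false
console.log(copysign(3, -0));         // -3
```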
Smallest positive single-precision floating-point subnormal number.
Maximum safe single-precision floating-point integer.
The bias of a single-precision floating-point number's exponent.
Minimum safe single-precision floating-point integer.
Difference between one and the smallest value greater than one that can be represented as a single-precision floating-point number.
Create a typed array.
Create an uninitialized array having a specified length.
Cube root of single-precision floating-point epsilon.
Mask for excluding the sign bit of a single-precision floating-point number.
Size (in bytes) of a single-precision floating-point number.
Square root of single-precision floating-point epsilon.
Single-precision floating-point mathematical constants.
Test if a single-precision floating-point numeric value is positive zero.
Effective number of bits in the significand of a single-precision floating-point number.
Test if a single-precision floating-point numeric value is negative zero.
Natural logarithm of π.
Generate a linearly spaced numeric array whose elements increment by 1 starting from one.
Create a single-precision floating-point number from an unsigned integer corresponding to an IEEE 754 binary representation.
Square root of π.
Mask for the sign bit of a single-precision floating-point number.
Create a filled array.
Create a filled array having a specified length.
Mask for the exponent of a single-precision floating-point number.
Split a double-precision floating-point number into a normalized fraction and an integer power of two.
Create an array filled with ones and having a specified length.
Mask for the significand of a single-precision floating-point number.
π².
Test if a single-precision floating-point numeric value is infinite.
Test if a value is a Complex128Array.
Test if a value is a Complex64Array.
Returns the next representable double-precision floating-point number after x toward y.
Square root of 3.
Return a boolean indicating if the sign bit for a single-precision floating-point number is on (true) or off (false).
Golden ratio.
Maximum safe nth Fibonacci number when stored in double-precision floating-point format.
Maximum safe nth Lucas number when stored in double-precision floating-point format.
Generate a linearly spaced numeric array whose elements increment by 1 starting from zero.
Square root of 0.5π.
Return a single-precision floating-point number with the magnitude of x and the sign of y.
Minimum safe double-precision floating-point integer.
Test if a value is a 64-bit complex number.
Create an array filled with NaNs and having a specified length.
Half-precision floating-point negative infinity.
Maximum safe Lucas number when stored in double-precision floating-point format.
Square root of the golden ratio.
Square root of half-precision floating-point epsilon.
Test if a value is a 128-bit complex number.
Maximum safe half-precision floating-point integer.
Return an integer corresponding to the unbiased exponent of a single-precision floating-point number.
Return an integer corresponding to the significand of a single-precision floating-point number.
Create a single-precision floating-point number from a literal bit representation.
Single-precision floating-point NaN.
Half-precision floating-point positive infinity.
Apéry's constant.
Double-precision floating-point NaN.
Cube root of half-precision floating-point epsilon.
Catalan's constant.
Size (in bytes) of a half-precision floating-point number.
Maximum safe Fibonacci number when stored in double-precision floating-point format.
Smallest positive normalized half-precision floating-point number.
Difference between one and the smallest value greater than one that can be represented as a half-precision floating-point number.
Cube root of double-precision floating-point epsilon.
Maximum half-precision floating-point number.
Glaisher-Kinkelin constant.
Size (in bytes) of a double-precision floating-point number.
Base 10 logarithm of Euler's number.
Square root of 1/2.
Base 2 logarithm of Euler's number.
Fourth root of double-precision floating-point epsilon.
Effective number of bits in the significand of a double-precision floating-point number.
Smallest positive half-precision floating-point subnormal number.
Minimum safe half-precision floating-point integer.
Half-precision floating-point mathematical constants.
The bias of a half-precision floating-point number's exponent.
Double-precision floating-point mathematical constants.
Test if a single-precision floating-point numeric value is finite.
Effective number of bits in the significand of a half-precision floating-point number.
The maximum base 10 exponent for a subnormal double-precision floating-point number.
Natural logarithm of 10.
Type-based calculation done right.
Creates a single-precision floating-point number from a literal bit representation.
Return the minimum safe integer capable of being represented by a numeric real type.
Return the smallest positive normal value capable of being represented by a numeric real type.
Test if two floating point numbers overlap
Base utilities for single-precision floating-point numbers.
Decompose a double-precision floating-point number into integral and fractional parts.
Create an uninitialized array having the same length and data type as a provided array.
Returns the next representable single-precision floating-point number after x toward y.
Returns an integer corresponding to the significand of a single-precision floating-point number.
Implementation of the BigBit standard for numeric data types and character encoding.
Base utilities for double-precision floating-point numbers.
Converts a raw Modbus integer to IEEE 754 binary16 (half-precision) floating-point format.
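A binary16 word like the one in the Modbus entry above can be decoded by hand from its fields: 1 sign bit, a 5-bit exponent (bias 15), and a 10-bit significand. A hedged sketch (illustrative name, not that package's API):

```javascript
// Decode an IEEE 754 binary16 (half-precision) word to a JS number.
function halfToNumber(h) {
  const sign = (h & 0x8000) ? -1 : 1;
  const exp = (h >> 10) & 0x1f;
  const frac = h & 0x3ff;
  if (exp === 0) return sign * frac * 2 ** -24;          // zero / subnormal
  if (exp === 0x1f) return frac ? NaN : sign * Infinity; // NaN / infinity
  return sign * (1 + frac / 1024) * 2 ** (exp - 15);     // normal
}

console.log(halfToNumber(0x3c00), halfToNumber(0xc000)); // 1 -2
```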
Get the “exact value” of the number
Returns a 32-bit integer corresponding to the less significant 32 bits of a double-precision floating-point number.
Test if a value is a 64-bit or 128-bit complex number.
Create an array filled with ones and having the same length and data type as a provided array.
Return a double-precision floating-point number with the magnitude of x and the sign of x*y.
Create a zero-filled array having the same length and data type as a provided array.
Return the maximum safe integer capable of being represented by a numeric real type.
Utilities for double-precision floating-point numbers.
Returns an unsigned 32-bit integer corresponding to the IEEE 754 binary representation of a single-precision floating-point number.
Return the maximum finite value capable of being represented by a numeric real type.
Utilities for single-precision floating-point numbers.
Return a single-precision floating-point number with the magnitude of x and the sign of x*y.
Returns a double-precision floating-point number with the magnitude of x and the sign of x*y.
Generate a linearly spaced numeric array whose elements increment by 1 starting from zero and having the same length and data type as a provided input array.
Sets the more significant 32 bits of a double-precision floating-point number.
Returns a boolean indicating if the sign bit for a double-precision floating-point number is on (true) or off (false).
Create a complex number typed array.
num7 - Supreme precision general purpose arithmetic-logic decimal library package for JavaScript.
Returns a boolean indicating if the sign bit for a single-precision floating-point number is on (true) or off (false).
Returns an integer corresponding to the unbiased exponent of a single-precision floating-point number.
Creates a single-precision floating-point number from an unsigned integer corresponding to an IEEE 754 binary representation.
Use numeral automatically.
Computes the relative difference of two real numbers in units of double-precision floating-point epsilon.
Hex/number conversion for IEEE 754 single precision: `import { hexIeee754_32ToNum, numToIeee754_32Hex } from "ieee754-hex";`
Encode/decode float values in two's complement and single-precision floating-point format (IEEE 754).
Create a filled array having the same length and data type as a provided array.
Compute the relative difference of two real numbers in units of double-precision floating-point epsilon.
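Measuring a relative difference in units of epsilon, as the two entries above describe, can be sketched as follows. This uses one illustrative convention (scale by `|y|`); the listed packages may support other scale choices:

```javascript
// Relative difference of two doubles in units of double-precision EPSILON.
function relDiffInEps(x, y) {
  return Math.abs(x - y) / (Math.abs(y) * Number.EPSILON);
}

console.log(relDiffInEps(1 + Number.EPSILON, 1)); // 1
```

A result near 1 means the values differ by about one unit in the last place of `y`, a common yardstick for floating-point comparison tolerances.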
Create an array filled with NaNs and having the same length and data type as a provided array.
Typed array pool.
Generate a linearly spaced numeric array whose elements increment by 1 starting from one and having the same length and data type as a provided input array.