math-float64-signbit
Returns a boolean indicating whether the sign bit of a double-precision floating-point number is on (true) or off (false).
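The sign bit of a double is the most significant bit of its IEEE 754 binary representation. A minimal sketch of such a check, written against no particular package API (the `signbit` function below is illustrative, not the module's actual export), reads the float's raw bytes through a `DataView` and tests the top bit:

```javascript
// Check the IEEE 754 sign bit of a double-precision number.
function signbit( x ) {
    var view = new DataView( new ArrayBuffer( 8 ) );
    view.setFloat64( 0, x ); // big-endian by default
    // The top bit of the high byte is the sign bit:
    return ( view.getUint8( 0 ) & 0x80 ) !== 0;
}

console.log( signbit( -4.37 ) ); // => true
console.log( signbit( 4.37 ) );  // => false
console.log( signbit( -0.0 ) );  // => true (negative zero has its sign bit set)
```

Note that this distinguishes `-0.0` from `0.0`, which a simple `x < 0` comparison cannot do.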
Minimum signed 8-bit integer.
Smallest positive single-precision floating-point number.
Returns a string giving the literal bit representation of an unsigned 16-bit integer.
Interchanges the elements of x and y.
Effective number of bits in the significand of a half-precision floating-point number.
Returns a string giving the literal bit representation of an unsigned 32-bit integer.
Square root of 1/2.
Minimum signed 16-bit integer.
Segments an array into chunks.
Effective number of bits in the significand of a single-precision floating-point number.
Creates an infinity-filled matrix or array.
Creates a single-precision floating-point number from an unsigned integer corresponding to an IEEE 754 binary representation.
Maximum signed 16-bit integer.
Returns an integer corresponding to the unbiased exponent of a single-precision floating-point number.
Returns a boolean indicating if the sign bit for a single-precision floating-point number is on (true) or off (false).
Computes a factorial.
Revives a JSON-serialized Matrix.
Difference between one and the smallest value greater than one that can be represented as a single-precision floating-point number.
Base 10 logarithm of Euler's number.
Evaluates the digamma function.
Computes the binomial coefficient.
Gamma function.
Returns a normal number `y` and exponent `exp` satisfying `x = y * 2^exp`.
Beta function.
Inverse incomplete gamma function.
Computes the relative difference of two real numbers in units of double-precision floating-point epsilon.
Computes the relative difference of two real numbers.
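The last two entries describe relative-difference measures. The exact scale convention those packages use is not specified here; as an assumption for illustration, the sketch below scales by max(|x|, |y|), and expresses the epsilon-units variant by dividing through by double-precision epsilon (2^-52):

```javascript
// ASSUMPTION: relative difference scaled by max(|x|, |y|);
// the packages listed above may support other scale conventions.
var FLOAT64_EPSILON = Math.pow( 2, -52 ); // ~2.22e-16

// Relative difference of two real numbers:
function reldiff( x, y ) {
    return Math.abs( x - y ) / Math.max( Math.abs( x ), Math.abs( y ) );
}

// Relative difference in units of double-precision epsilon:
function epsdiff( x, y ) {
    return reldiff( x, y ) / FLOAT64_EPSILON;
}

// Two adjacent doubles near 1 differ by roughly one epsilon:
console.log( epsdiff( 1.0, 1.0 + FLOAT64_EPSILON ) );
```

Measuring differences in units of epsilon is a common way to express floating-point error as "units in the last place" style counts rather than raw magnitudes.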