const-max-float32
Maximum single-precision floating-point number.
Square root of the golden ratio.
π².
Returns a double-precision floating-point number with the magnitude of x and the sign of x*y.
Computes the transpose of a matrix.
Constructs an array of arrays from a matrix.
Creates a zero-filled matrix or array.
Golden ratio.
Returns a string giving the literal bit representation of an unsigned 16-bit integer.
Minimum safe double-precision floating-point integer.
Anagram hash table.
Returns an unsigned 32-bit integer corresponding to the IEEE 754 binary representation of a single-precision floating-point number.
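The bit-extraction idea can be sketched in plain JavaScript with two typed-array views over the same buffer (a minimal sketch under the IEEE 754 binary32 layout, not any package's actual implementation):

```javascript
// View the same 4 bytes as both a float32 and a uint32.
// Both views use native byte order, so the reinterpretation
// is consistent on any platform.
const f32 = new Float32Array(1);
const u32 = new Uint32Array(f32.buffer);

// Returns the unsigned 32-bit integer whose bits are the
// IEEE 754 binary32 representation of x (x is first rounded
// to single precision by the Float32Array store).
function toWord(x) {
  f32[0] = x;
  return u32[0];
}

toWord(1.0).toString(16); // '3f800000'
```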
Computes the natural logarithm of 1+x.
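A quick illustration of why a dedicated log1p routine matters, using the built-in Math.log1p rather than any particular package:

```javascript
const x = 1e-15;

// Naive form loses precision: the sum 1 + 1e-15 rounds to a
// double with only a few significant bits left of the 1e-15.
const naive = Math.log(1 + x);

// log1p evaluates ln(1 + x) accurately for tiny x, where
// ln(1 + x) ≈ x - x^2/2 + ...
const accurate = Math.log1p(x);

// `accurate` agrees with x to machine precision; `naive` is
// off by roughly ten percent here.
```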
Cube root of double-precision floating-point epsilon.
Difference between one and the smallest value greater than one that can be represented as a half-precision floating-point number.
Natural logarithm of the beta function.
Splits a single-precision floating-point number into a normalized fraction and an integer power of two.
Computes cos(πx).
Difference between one and the smallest value greater than one that can be represented as a double-precision floating-point number.
Sets the more significant 32 bits of a double-precision floating-point number.
Returns a boolean indicating if the sign bit for a double-precision floating-point number is on (true) or off (false).
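The double-precision sign-bit test can be sketched with a DataView: the sign occupies bit 31 of the more significant 32-bit word (a sketch under the IEEE 754 binary64 layout, not the package's code):

```javascript
const view = new DataView(new ArrayBuffer(8));

// True when the sign bit of the double x is set. Unlike a
// plain `x < 0` test, this also detects -0 and NaN values
// whose sign bit is on.
function signbit(x) {
  view.setFloat64(0, x); // big-endian by default
  return (view.getUint32(0) >>> 31) === 1;
}

signbit(-0);  // true, even though (-0 < 0) is false
signbit(3.1); // false
```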
Minimum signed 8-bit integer.
Smallest positive single-precision floating-point number.
Interchanges the elements of x and y.
Effective number of bits in the significand of a half-precision floating-point number.
Returns a string giving the literal bit representation of an unsigned 32-bit integer.
Square root of 1/2.
Minimum signed 16-bit integer.
Segments an array into chunks.
Effective number of bits in the significand of a single-precision floating-point number.
Creates an infinity-filled matrix or array.
Creates a single-precision floating-point number from an unsigned integer corresponding to an IEEE 754 binary representation.
Maximum signed 16-bit integer.
Returns an integer corresponding to the unbiased exponent of a single-precision floating-point number.
Returns a boolean indicating if the sign bit for a single-precision floating-point number is on (true) or off (false).
Computes a factorial.
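An iterative factorial for non-negative integers might look like this (a minimal sketch; real packages typically also handle non-integer input via the gamma function, which is not done here):

```javascript
// n! for non-negative integers. Doubles represent integers
// exactly only up to 2^53, and n! overflows to Infinity for
// n > 170 in any case.
function factorial(n) {
  if (!Number.isInteger(n) || n < 0) return NaN;
  let out = 1;
  for (let i = 2; i <= n; i++) {
    out *= i;
  }
  return out;
}

factorial(5); // 120
```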
Revives a JSON-serialized Matrix.
Difference between one and the smallest value greater than one that can be represented as a single-precision floating-point number.
Base 10 logarithm of Euler's number.
Evaluates the digamma function.
Computes the binomial coefficient.
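The binomial coefficient can be computed without large intermediate factorials via the multiplicative formula C(n, k) = Π_{i=1..k} (n − k + i)/i (a sketch for non-negative integer arguments only; function name and argument handling are illustrative):

```javascript
// C(n, k) via the multiplicative formula, multiplying and
// dividing alternately to keep intermediates small. Each
// partial product is itself a binomial coefficient, so every
// division is exact in integer arithmetic.
function binomcoef(n, k) {
  if (k < 0 || k > n) return 0;
  k = Math.min(k, n - k); // symmetry: C(n, k) = C(n, n-k)
  let out = 1;
  for (let i = 1; i <= k; i++) {
    out = (out * (n - k + i)) / i;
  }
  return out;
}

binomcoef(5, 2); // 10
```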
Gamma function.
Returns a normal number `y` and exponent `exp` satisfying `x = y * 2^exp`.
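This decomposition can be sketched by repeated halving or doubling, which is exact in binary floating point (a sketch that returns y with 0.5 ≤ |y| < 1; real implementations read the exponent bits directly instead of looping):

```javascript
// Decompose x as y * 2^exp with 0.5 <= |y| < 1.
// Zero, NaN, and infinities are returned unchanged with exp 0.
function frexp(x) {
  if (x === 0 || !Number.isFinite(x)) return [x, 0];
  let y = Math.abs(x);
  let exp = 0;
  while (y >= 1) { y /= 2; exp += 1; }
  while (y < 0.5) { y *= 2; exp -= 1; }
  return [x < 0 ? -y : y, exp];
}

frexp(8); // [0.5, 4], since 0.5 * 2^4 === 8
```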
Beta function.
Inverse incomplete gamma function.
Computes the relative difference of two real numbers in units of double-precision floating-point epsilon.
Computes the relative difference of two real numbers.
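The last two entries can be illustrated together. One common convention scales the absolute difference by the larger magnitude (libraries often let the caller choose the scale function; this sketch hard-codes max):

```javascript
// Relative difference |x - y| / max(|x|, |y|).
function reldiff(x, y) {
  if (x === y) return 0; // also covers x === y === 0
  return Math.abs(x - y) / Math.max(Math.abs(x), Math.abs(y));
}

// The same difference measured in units of double-precision
// epsilon (Number.EPSILON === 2^-52).
function epsdiff(x, y) {
  return reldiff(x, y) / Number.EPSILON;
}

reldiff(2, 1); // 0.5
```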