math-float64-get-low-word
Returns a 32-bit integer corresponding to the less significant 32 bits of a double-precision floating-point number.
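A minimal sketch of the idea (not necessarily the module's actual implementation): write the double into an 8-byte buffer and read back the low-order 32 bits with a `DataView`.

```javascript
// Read the less significant 32 bits of a float64.
// DataView defaults to big-endian, so bytes 4-7 hold the low word.
function getLowWord( x ) {
    var view = new DataView( new ArrayBuffer( 8 ) );
    view.setFloat64( 0, x );
    return view.getUint32( 4 );
}

// 1.0 is 0x3FF0000000000000, so its low word is 0;
// adding 2^-52 sets the last significand bit:
console.log( getLowWord( 1 ) );                      // => 0
console.log( getLowWord( 1 + Math.pow( 2, -52 ) ) ); // => 1
```

Using a `DataView` sidesteps platform endianness, since the byte order of the view is explicit rather than inherited from the host.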
Arrays.
Construct an array of arrays from a matrix.
Multiplies x and a constant and adds the result to y.
Computes exp(x) - 1.
Finds the first element equal to the maximum absolute value of x and returns the element index.
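A sketch of this operation (the name `iamax` follows BLAS convention and is an assumption here): scan once, keeping the index of the largest absolute value seen, and a strict `>` comparison preserves the *first* such element on ties.

```javascript
// Index of the first element with maximum absolute value.
function iamax( x ) {
    var idx = 0;
    for ( var i = 1; i < x.length; i++ ) {
        if ( Math.abs( x[ i ] ) > Math.abs( x[ idx ] ) ) {
            idx = i;
        }
    }
    return idx;
}

console.log( iamax( [ 1, -5, 3 ] ) ); // => 1
```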
Natural logarithm of 10.
Flips a matrix vertically.
Returns a string giving the literal bit representation of an unsigned 8-bit integer.
Minimum signed 16-bit integer.
Computes the sum of absolute values (L1 norm).
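The L1 norm is a single pass accumulating absolute values; a minimal sketch (the name `asum` follows BLAS convention and is an assumption):

```javascript
// Sum of absolute values (L1 norm) of a numeric array.
function asum( x ) {
    var sum = 0;
    for ( var i = 0; i < x.length; i++ ) {
        sum += Math.abs( x[ i ] );
    }
    return sum;
}

console.log( asum( [ -1, 2, -3 ] ) ); // => 6
```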
Maximum double-precision floating-point number.
Creates a ones-filled matrix or array.
Maximum single-precision floating-point number.
Creates a filled matrix or array.
Returns the minimum value of a specified integer type.
Square root of the golden ratio.
A Yeoman generator for Compute.io modules.
Computes the dot product of x and y.
Maximum safe double-precision floating-point integer.
Maximum signed 8-bit integer.
Square root of double-precision floating-point epsilon.
Standard Math library.
Construct a matrix from an array of arrays.
Data structures.
Returns a double-precision floating-point number with the magnitude of x and the sign of x*y.
Computes the transpose of a matrix.
π².
Creates a zero-filled matrix or array.
Golden ratio.
Returns a string giving the literal bit representation of an unsigned 16-bit integer.
Anagram hash table.
Returns an unsigned 32-bit integer corresponding to the IEEE 754 binary representation of a single-precision floating-point number.
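A sketch of this conversion (not necessarily the module's implementation): a single-precision value occupies exactly 4 bytes, so writing it via `setFloat32` and reading the same bytes back as an unsigned integer yields the IEEE 754 bit pattern.

```javascript
// IEEE 754 binary representation of a float32 as a uint32.
function toWord( x ) {
    var view = new DataView( new ArrayBuffer( 4 ) );
    view.setFloat32( 0, x );
    return view.getUint32( 0 );
}

// 1.0 in binary32 is 0x3F800000:
console.log( toWord( 1 ) ); // => 1065353216
```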
Minimum safe double-precision floating-point integer.
Returns a string giving the literal bit representation of an unsigned 32-bit integer.
Computes the natural logarithm of 1+x.
Splits a single-precision floating-point number into a normalized fraction and an integer power of two.
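The split follows the C `frexp` convention, returning `y` with `0.5 <= |y| < 1` and an integer `exp` such that `x = y * 2^exp`. A hedged sketch in plain double precision (ignoring the single-precision rounding details, and not necessarily matching the module's internals):

```javascript
// frexp-style split: x = y * 2^exp, with 0.5 <= |y| < 1.
function frexp( x ) {
    if ( x === 0 || !isFinite( x ) ) {
        return [ x, 0 ];
    }
    var exp = Math.ceil( Math.log2( Math.abs( x ) ) );
    var y = x / Math.pow( 2, exp );
    // Guard against log2 rounding at exact powers of two:
    if ( Math.abs( y ) >= 1 ) { y /= 2; exp += 1; }
    if ( Math.abs( y ) < 0.5 ) { y *= 2; exp -= 1; }
    return [ y, exp ];
}

console.log( frexp( 8 ) );  // => [ 0.5, 4 ]
console.log( frexp( -3 ) ); // => [ -0.75, 2 ]
```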
Computes cos(πx).
Difference between one and the smallest value greater than one that can be represented as a half-precision floating-point number.
Natural logarithm of the beta function.
Difference between one and the smallest value greater than one that can be represented as a double-precision floating-point number.
Sets the more significant 32 bits of a double-precision floating-point number.
Returns a boolean indicating if the sign bit for a double-precision floating-point number is on (true) or off (false).
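A minimal sketch (not necessarily the module's implementation): in IEEE 754 the sign bit is the most significant bit, so inspecting the first byte of the big-endian representation suffices. Note this correctly distinguishes `-0` from `0`, which an ordinary `x < 0` comparison cannot.

```javascript
// True if the sign bit of a float64 is set.
function signbit( x ) {
    var view = new DataView( new ArrayBuffer( 8 ) );
    view.setFloat64( 0, x );
    // Sign bit is the top bit of byte 0 (big-endian).
    return view.getUint8( 0 ) >= 128;
}

console.log( signbit( 5 ) );  // => false
console.log( signbit( -0 ) ); // => true
```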
Minimum signed 8-bit integer.
Smallest positive single-precision floating-point number.
Square root of 1/2.
Interchanges the elements of x and y.
Segments an array into chunks.
Creates a single-precision floating-point number from an unsigned integer corresponding to an IEEE 754 binary representation.
Maximum signed 16-bit integer.
Effective number of bits in the significand of a single-precision floating-point number.
Creates an infinity-filled matrix or array.
Returns an integer corresponding to the unbiased exponent of a single-precision floating-point number.
Returns a boolean indicating if the sign bit for a single-precision floating-point number is on (true) or off (false).
Computes a factorial.
Difference between one and the smallest value greater than one that can be represented as a single-precision floating-point number.
Computes the binomial coefficient.
Evaluates the digamma function.
Gamma function.
Inverse incomplete gamma function.
Computes the relative difference of two real numbers in units of double-precision floating-point epsilon.
Computes the relative difference of two real numbers.
Beta function.
Returns a normal number `y` and exponent `exp` satisfying `x = y * 2^exp`.
Revives a JSON-serialized Matrix.