compute-isfinite
Computes, for each array element, whether the element is a finite number.
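A per-element finiteness check like this can be sketched in plain JavaScript; the function name and the 0/1 output convention here are illustrative, not the module's actual API.

```javascript
// Map each element to 1 if it is a finite number, 0 otherwise.
// Number.isFinite returns false for NaN, ±Infinity, and non-numbers.
function isfinite(arr) {
  return arr.map(function (v) {
    return Number.isFinite(v) ? 1 : 0;
  });
}
```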
Found 309 results for compute.io
Computes the sum of an array ignoring non-numeric values.
Returns a boolean indicating if an input array is sorted.
Provides a method to compute a sample standard deviation incrementally.
Provides a method to compute a sample variance incrementally.
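The two incremental accumulators above (sample standard deviation and sample variance) are commonly implemented with Welford's online algorithm; this is a minimal sketch with illustrative names, not the modules' actual API.

```javascript
// Online mean/variance accumulator (Welford's algorithm).
function Accumulator() {
  this.n = 0;    // number of samples seen
  this.mean = 0; // running mean
  this.M2 = 0;   // running sum of squared deviations from the mean
}
Accumulator.prototype.update = function (x) {
  this.n += 1;
  var delta = x - this.mean;
  this.mean += delta / this.n;
  this.M2 += delta * (x - this.mean);
};
Accumulator.prototype.variance = function () {
  // Sample (unbiased) variance; undefined for fewer than two samples.
  return this.n > 1 ? this.M2 / (this.n - 1) : NaN;
};
Accumulator.prototype.stdev = function () {
  return Math.sqrt(this.variance());
};
```

Unlike the naive two-pass formula, this updates in a single pass and avoids catastrophic cancellation for large means.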
Generates an array of linearly spaced dates.
Inverse complementary error function.
Computes sin(πx).
Computes the sine of a number.
Computes the cosine of a number.
Inverse error function.
Natural logarithm of the gamma function.
A linear congruential pseudorandom number generator (lcg).
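An LCG is one of the simplest deterministic pseudorandom generators. Below is a sketch using the Park–Miller "minstd" constants; the particular constants and interface are illustrative and not necessarily what the module uses.

```javascript
// Linear congruential generator: state' = (a * state) mod m,
// with a = 16807 and m = 2^31 - 1 (Park–Miller).
function lcg(seed) {
  var state = seed % 2147483647;
  if (state <= 0) state += 2147483646;
  return function () {
    // state * 16807 stays below 2^53, so this product is exact.
    state = (state * 16807) % 2147483647;
    return (state - 1) / 2147483646; // scale into [0, 1)
  };
}
```

The same seed always reproduces the same sequence, which is useful for repeatable simulations and tests.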
Digamma function.
Computes the tangent of a number.
Returns the next representable double-precision floating-point number after x toward y.
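A nextafter for doubles can be sketched by stepping the raw 64-bit pattern through shared-buffer typed arrays; this is an illustrative bit-level approach, not the module's implementation.

```javascript
// Shared buffer lets us reinterpret a double's bits as a 64-bit integer.
var F64 = new Float64Array(1);
var U64 = new BigUint64Array(F64.buffer);

function nextafter(x, y) {
  if (Number.isNaN(x) || Number.isNaN(y)) return NaN;
  if (x === y) return y;
  if (x === 0) {
    // Smallest subnormal with the sign of y.
    return y > 0 ? Number.MIN_VALUE : -Number.MIN_VALUE;
  }
  F64[0] = x;
  // Incrementing the bit pattern grows the magnitude; pick the
  // direction that moves toward y.
  if ((x < y) === (x > 0)) {
    U64[0] += 1n;
  } else {
    U64[0] -= 1n;
  }
  return F64[0];
}
```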
Minimum safe double-precision floating-point integer.
Maximum safe double-precision floating-point integer.
Validates if a value is a matrix.
Rounds a numeric value to the nearest integer.
Computes the Euclidean distance between two arrays.
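The Euclidean distance is the square root of the sum of squared element-wise differences; a minimal sketch (function name illustrative):

```javascript
// d(x, y) = sqrt(Σ (x[i] - y[i])²)
function euclidean(x, y) {
  if (x.length !== y.length) {
    throw new RangeError('arrays must have equal length');
  }
  var s = 0;
  for (var i = 0; i < x.length; i++) {
    var d = x[i] - y[i];
    s += d * d;
  }
  return Math.sqrt(s);
}
```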
Converts a double-precision floating-point number to the nearest single-precision floating-point number.
Maximum unsigned 16-bit integer.
Creates a single-precision floating-point number from a literal bit representation.
Computes the L2 norm (Euclidean norm).
Returns a string giving the literal bit representation of a double-precision floating-point number.
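Rendering a double's 64-bit IEEE 754 pattern as a binary string can be sketched with a DataView (illustrative, not the module's code):

```javascript
// Serialize the 8 bytes of a double (big-endian) as 64 binary digits.
function bits(x) {
  var view = new DataView(new ArrayBuffer(8));
  view.setFloat64(0, x); // big-endian by default
  var out = '';
  for (var i = 0; i < 8; i++) {
    out += view.getUint8(i).toString(2).padStart(8, '0');
  }
  return out;
}
```

For 1.0 the result is the sign bit 0, the biased exponent 01111111111 (1023), and 52 zero significand bits.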
Evaluates the natural logarithm of the beta function.
Returns a string giving the literal bit representation of a double-precision floating-point number.
The Euler-Mascheroni constant.
Flips a matrix horizontally.
Glaisher-Kinkelin constant.
Computes the Manhattan (city block) distance between two arrays.
Computes bˣ - 1.
Returns an integer corresponding to the significand of a single-precision floating-point number.
Creates a double-precision floating-point number from a literal bit representation.
Euler's number.
Signum function.
Returns the next representable single-precision floating-point number after x toward y.
Computes the Minkowski distance between two arrays.
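The Minkowski distance generalizes the distances above: p = 1 gives the Manhattan distance and p = 2 the Euclidean distance. A sketch with illustrative naming:

```javascript
// d_p(x, y) = (Σ |x[i] - y[i]|^p)^(1/p)
function minkowski(x, y, p) {
  var s = 0;
  for (var i = 0; i < x.length; i++) {
    s += Math.pow(Math.abs(x[i] - y[i]), p);
  }
  return Math.pow(s, 1 / p);
}
```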
Golden ratio.
Creates a NaN-filled matrix or array.
Maximum signed 8-bit integer.
Apéry's constant.
The Euler-Mascheroni constant.
Catalan's constant.
Catalan's constant.
Dirichlet eta function.
Square root of 2.
Applies a function to each matrix element.
Rounds a numeric value to the nearest multiple of 10^n.
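Rounding to the nearest multiple of 10^n can be sketched as scale, round, unscale; negative n rounds to decimal places. Naming is illustrative, not the module's exact API.

```javascript
// Round x to the nearest multiple of 10^n (n = -2 → two decimals,
// n = 2 → nearest hundred).
function roundn(x, n) {
  var p = Math.pow(10, n);
  return Math.round(x / p) * p;
}
```

Note that for negative n the result is still subject to binary floating-point representation, so comparisons should use a tolerance.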
Maximum unsigned 16-bit integer.
Riemann Zeta function.
Base 2 logarithm of Euler's number.
Rotates a matrix by 90 degrees.
Maximum unsigned 8-bit integer.
Applies a function to each typed array element.
Maximum signed 32-bit integer.
Copies values from x into y.
Natural logarithm of 2.
Incomplete gamma function.
Maximum signed 16-bit integer.
Scales elements of `x` by a constant `alpha`.
Maximum unsigned 32-bit integer.
Returns a JSON representation of a typed array.
Natural logarithm of the square root of 2π.
Flips a matrix vertically.
Computes the Chebyshev distance between two arrays.
2π.
Casts an array to an array of a different data type.
Creates a ones-filled matrix or array.
Constructs an array of arrays from a matrix.
Returns a string giving the literal bit representation of a single-precision floating-point number.
Smallest positive single-precision floating-point number.
Returns the maximum value of a specified integer type.
Returns a 32-bit integer corresponding to the less significant 32 bits of a double-precision floating-point number.
Arrays.
Square root of the golden ratio.
Multiplies x and a constant and adds the result to y.
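This is the BLAS-style axpy operation, y ← αx + y, updating y in place; the sketch below is illustrative and may not match the module's exact signature.

```javascript
// y[i] += alpha * x[i] for all i; returns the mutated y.
function axpy(alpha, x, y) {
  for (var i = 0; i < x.length; i++) {
    y[i] += alpha * x[i];
  }
  return y;
}
```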
Finds the first element equal to the maximum absolute value of x and returns the element index.
Natural logarithm of 10.
Computes exp(x) - 1.
Minimum signed 16-bit integer.
Computes the sum of absolute values (L1 norm).
Maximum double-precision floating-point number.
Creates a filled matrix or array.
A Yeoman generator for Compute.io modules.
Maximum unsigned 8-bit integer.
Returns a string giving the literal bit representation of an unsigned 8-bit integer.
Returns the minimum value of a specified integer type.
Maximum single-precision floating-point number.
Computes the dot product of x and y.
Maximum safe double-precision floating-point integer.
Maximum signed 8-bit integer.
Square root of double-precision floating-point epsilon.
Standard Math library.
Constructs a matrix from an array of arrays.
Effective number of bits in the significand of a double-precision floating-point number.
Computes the absolute difference of two real numbers.
Minimum signed 32-bit integer.
Data structures.
Maximum single-precision floating-point number.
π².
Square root of the golden ratio.
Creates a zero-filled matrix or array.
Golden ratio.
Returns a string giving the literal bit representation of an unsigned 16-bit integer.
Returns a double-precision floating-point number with the magnitude of x and the sign of x*y.
Computes the transpose of a matrix.
Constructs an array of arrays from a matrix.
Minimum safe double-precision floating-point integer.
Anagram hash table.
Returns an unsigned 32-bit integer corresponding to the IEEE 754 binary representation of a single-precision floating-point number.
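Converting between a single-precision value and its unsigned 32-bit IEEE 754 word can be sketched with shared-buffer typed arrays (illustrative helper names):

```javascript
// A Float32Array and Uint32Array over the same buffer reinterpret
// the same 4 bytes as either a float or its bit pattern.
var f32 = new Float32Array(1);
var u32 = new Uint32Array(f32.buffer);

function toWord(x) {
  f32[0] = x; // stores the nearest single-precision value
  return u32[0];
}

function fromWord(w) {
  u32[0] = w >>> 0;
  return f32[0];
}
```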
Computes the natural logarithm of 1+x.
Difference between one and the smallest value greater than one that can be represented as a half-precision floating-point number.
Natural logarithm of the beta function.
Cube root of double-precision floating-point epsilon.
Difference between one and the smallest value greater than one that can be represented as a double-precision floating-point number.
Splits a single-precision floating-point number into a normalized fraction and an integer power of two.
Computes cos(πx).
Sets the more significant 32 bits of a double-precision floating-point number.
Returns a boolean indicating if the sign bit for a double-precision floating-point number is on (true) or off (false).
Minimum signed 8-bit integer.
Smallest positive single-precision floating-point number.
Returns a string giving the literal bit representation of an unsigned 16-bit integer.
Interchanges the elements of x and y.
Effective number of bits in the significand of a half-precision floating-point number.
Returns a string giving the literal bit representation of an unsigned 32-bit integer.
Square root of 1/2.
Minimum signed 16-bit integer.
Minimum signed 8-bit integer.
Segments an array into chunks.
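Segmenting an array into fixed-size chunks is a short slice loop; in this sketch (names illustrative) the final chunk may be shorter than the requested size.

```javascript
// chunk([1,2,3,4,5], 2) → [[1,2],[3,4],[5]]
function chunk(arr, size) {
  var out = [];
  for (var i = 0; i < arr.length; i += size) {
    out.push(arr.slice(i, i + size));
  }
  return out;
}
```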
Effective number of bits in the significand of a single-precision floating-point number.
Creates an infinity-filled matrix or array.
Returns an integer corresponding to the unbiased exponent of a single-precision floating-point number.
Creates a single-precision floating-point number from an unsigned integer corresponding to an IEEE 754 binary representation.
Maximum signed 16-bit integer.
Returns a boolean indicating if the sign bit for a single-precision floating-point number is on (true) or off (false).
Computes a factorial.
Revives a JSON-serialized Matrix.
Difference between one and the smallest value greater than one that can be represented as a single-precision floating-point number.
Base 10 logarithm of Euler's number.
Evaluates the digamma function.
Gamma function.
Computes the binomial coefficient.
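The binomial coefficient C(n, k) is usually computed with the multiplicative formula rather than factorials, to avoid huge intermediates; a sketch with an illustrative name:

```javascript
// C(n, k) = Π_{i=1..k} (n - k + i) / i, using symmetry C(n,k) = C(n,n-k).
function binomcoef(n, k) {
  if (k < 0 || k > n) return 0;
  k = Math.min(k, n - k);
  var res = 1;
  for (var i = 1; i <= k; i++) {
    // Each partial product is itself a binomial coefficient,
    // so the division is exact while values stay below 2^53.
    res = res * (n - k + i) / i;
  }
  return res;
}
```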
Returns a normal number `y` and exponent `exp` satisfying `x = y * 2^exp`.
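This is the classic frexp decomposition: y lies in [0.5, 1) and x = y · 2^exp. A sketch via Math.log2 (illustrative; a bit-level implementation would be needed to handle subnormals exactly):

```javascript
// Split x into fraction y in [0.5, 1) and integer exponent exp
// with x = y * 2^exp. Zero and non-finite inputs pass through.
function frexp(x) {
  if (x === 0 || !Number.isFinite(x)) return [x, 0];
  var exp = Math.ceil(Math.log2(Math.abs(x)));
  var y = x / Math.pow(2, exp); // division by a power of two is exact
  // Guard against log2 rounding at exact powers of two.
  if (Math.abs(y) >= 1) { y /= 2; exp += 1; }
  if (Math.abs(y) < 0.5) { y *= 2; exp -= 1; }
  return [y, exp];
}
```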
Beta function.
Inverse incomplete gamma function.
Computes the relative difference of two real numbers in units of double-precision floating-point epsilon.
Computes the relative difference of two real numbers.
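The two entries above are related: the relative difference scales the absolute difference by the larger magnitude, and dividing that by double-precision epsilon expresses it in "epsilon units". A minimal sketch (the actual modules may support configurable scale functions):

```javascript
// |x - y| / max(|x|, |y|); zero when the values are identical.
function reldiff(x, y) {
  if (x === y) return 0;
  return Math.abs(x - y) / Math.max(Math.abs(x), Math.abs(y));
}

// Relative difference measured in units of double-precision epsilon.
function epsdiff(x, y) {
  return reldiff(x, y) / Number.EPSILON;
}
```

Two adjacent doubles near 1 therefore differ by roughly one epsilon unit.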