JSPM

  • ESM via JSPM
  • ES Module Entrypoint
  • Export Map
  • Keywords
  • License
  • Repository URL
  • TypeScript Types
  • README

A tiny Autograd engine based on Andrej Karpathy's micrograd in Python

Package Exports

  • microgradts
  • microgradts/lib/index.js

This package does not declare an exports field, so the exports above have been automatically detected and optimized by JSPM instead. If any package subpath is missing, it is recommended to post an issue to the original package (microgradts) to support the "exports" field. If that is not possible, create a JSPM override to customize the exports field for this package.
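
For reference, a minimal "exports" field covering the two detected entry points could look like the following in package.json (the target path is assumed from the detected exports listed above):

{
  "exports": {
    ".": "./lib/index.js",
    "./lib/index.js": "./lib/index.js"
  }
}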

Readme

microgradts


A tiny Autograd engine based on Andrej Karpathy's micrograd in Python. It implements backpropagation over a dynamically built DAG that operates on scalar values.

Installation

npm install microgradts
yarn add microgradts

Example usage

Below is a slightly contrived example showing a number of the supported operations:

import { Value, add, mul, div, pow } from 'microgradts'

const a = new Value(-4.0)
const b = new Value(2.0)
const c = add(a, b)                             // a + b
const d = add(mul(a, b), pow(b, new Value(3)))  // a * b + b**3
const e = new Value(3.0)
const f = div(d, e)                             // (a * b + b**3) / e
f.backward()
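
After backward() runs, each Value holds its forward result in data and its gradient in grad (the same fields the training loop below reads and updates), so the computed derivatives can be inspected directly:

console.log(f.data) // value of (a * b + b**3) / e
console.log(a.grad) // df/da
console.log(b.grad) // df/db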

And an example usage of the Neural Net API:

// MLP, toValues, and getLoss are used here as in the package README;
// they are assumed to be exported from the package root alongside Value.
import { MLP, Value, toValues, getLoss } from 'microgradts'

// A small multi-layer perceptron: 3 inputs, two hidden layers of 4, one output
const n = new MLP(3, [4, 4, 1]);

const xs = [
  [2.0, 3.0, -1.0],
  [3.0, -1.0, 0.5],
  [0.5, 1.0, 1.0],
  [1.0, 1.0, -1.0],
].map((x) => toValues(x));
const ys = toValues([1.0, -1.0, -1.0, 1.0]);

for (let i = 0; i < 200; i++) {
  // Forward pass and loss
  const ypred = xs.map((x) => n.run(x));
  const loss = getLoss(ys, ypred as Value[]);

  // Zero the gradients before backpropagation
  for (const p of n.parameters()) {
    p.grad = 0;
  }
  loss.backward();

  // Gradient descent step on every parameter
  for (const p of n.parameters()) {
    p.data -= 0.01 * p.grad;
  }

  console.log(i, loss.data);
}
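
Once the loop finishes, you can run the trained network on the inputs one more time to check that the predictions have moved toward the targets (the as Value cast mirrors the one used with getLoss above):

const final = xs.map((x) => (n.run(x) as Value).data);
console.log(final); // should be close to [1.0, -1.0, -1.0, 1.0]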

Todo

  • Implement visualization with Graphviz

License

MIT