The high-level interface for writing ML code, checked at compile time.
First, we create a new `Graph`, which tracks all computation and actually does the execution. We also define two new tensors, both of shape (3,). At this point these "tensors" are actually `GraphTensor`s, which don't hold any data. Also notice we pass the shape in as a type generic: shapes are known at compile time, similar to dfdx!
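A minimal sketch of that setup might look like this (the constructor names `Graph::new` and `new_tensor` are assumptions based on this walkthrough and may differ between versions):

```rust
use luminal::prelude::*;

// The graph records every operation and later performs the execution.
let mut cx = Graph::new();

// Two rank-1 tensors of 3 elements each. The shape is a type generic,
// so it's known at compile time. No data is allocated here.
let a = cx.new_tensor::<R1<3>>("A");
let b = cx.new_tensor::<R1<3>>("B");
```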
Next, we add the two `GraphTensor`s together and get a new `GraphTensor`. Notice this doesn't consume anything, and we're free to use `a` or `b` later on. This is because `GraphTensor` is a super lightweight tracking struct that implements `Copy`. "But wait, we never set the values of `a` and `b`, how can we add them?" We aren't actually adding them here. Instead, we're recording this addition in the graph and getting back `c`, which points to the result once the graph actually runs.
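Recording the addition is a single line:

```rust
// Record the addition in the graph; no arithmetic happens yet.
// `a` and `b` implement Copy, so both remain usable afterwards.
let c = a + b;
```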
Then we set the data for these tensors. But if `GraphTensor` doesn't hold data, how can we set it? Well, we aren't actually setting it in the tensor, just passing it through to the graph to say "once you run, set this tensor to this value." We also need to mark the output we want to retrieve later, so that when the graph runs, it doesn't delete the data for `c` part-way through execution (a common optimization for unused tensors). Notice we're setting the sources after we define the computation. This is backward from a lot of other libraries, but it means we can redefine the data and rerun everything without redefining the computation.
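Setting the sources and marking the output could look like the following (the method names `set` and `mark` are assumptions based on the description above):

```rust
// Hand the source data to the graph: "when you run, load these values".
a.set(vec![1.0, 2.0, 3.0]);
b.set(vec![4.0, 5.0, 6.0]);

// Mark `c` so its data isn't freed part-way through execution.
c.mark();
```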
Now when we call `cx.execute()`, all our sources are already set, so the addition actually gets run and the result is stored in `c`!
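Execution itself is a single call:

```rust
// All sources are set, so this actually computes the addition
// and stores the result in `c`.
cx.execute();
```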
Since we marked `c`, we can now fetch its data and see the result.
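Fetching the result might look like this (the `data` accessor is an assumption):

```rust
// Read back the marked output.
println!("Result: {:?}", c.data()); // e.g. [5.0, 7.0, 9.0]
```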
Remember that `a` has the type `GraphTensor<R1<3>>`. So what's that generic all about? It's the shape! We make tensor shapes part of the type, so they're tracked at compile time! In this case, the shape is rank 1 with 3 elements, or in other words, a vector of 3 dimensions. (Side note: `R1<N>` is a typedef of `(Const<N>,)`.) It should be impossible to accidentally get a runtime shape mismatch.
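To see what compile-time shapes buy us, here's a hypothetical mismatch that fails to compile instead of crashing at runtime (`R2` and `matmul` are assumed counterparts of `R1` here):

```rust
let x = cx.new_tensor::<R2<2, 3>>("X");
let y = cx.new_tensor::<R2<4, 5>>("Y");

// This would not type-check: the inner dimensions (3 vs 4) disagree,
// so the mismatch is caught by the compiler, not at runtime.
// let z = x.matmul(y);
```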
Now we can use `a` as we would in a library like PyTorch, performing linear algebra:
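For example, something along these lines (the method names `matmul` and `sum_reduce` are assumptions; check the docs for the exact API):

```rust
let w = cx.new_tensor::<R2<3, 4>>("W");

// Vector-matrix product: R1<3> x R2<3, 4> -> R1<4>
let h = a.matmul(w);

// Elementwise ops and reductions compose the same way.
let norm_sq = (a * a).sum_reduce();
```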