JAX is Autograd and XLA, brought together for high-performance machine learning research.

JAX can automatically differentiate native Python and NumPy functions. It can differentiate through loops, branches, recursion, and closures, and it can take derivatives of derivatives of derivatives. It supports reverse-mode differentiation (a.k.a. backpropagation) via grad as well as forward-mode differentiation, and the two can be composed arbitrarily to any order.

JAX uses XLA to compile and run your NumPy programs on GPUs and TPUs. Compilation happens under the hood by default, with library calls getting just-in-time compiled and executed. But JAX also lets you just-in-time compile your own Python functions into XLA-optimized kernels using a one-function API, jit. Compilation and automatic differentiation can be composed arbitrarily, so you can express sophisticated algorithms and get maximal performance without leaving Python. You can even program multiple GPUs or TPUs at once.

Dig a little deeper, and you'll see that JAX is really an extensible system for composable function transformations. Both grad and jit are instances of such transformations; others include vmap for automatic vectorization and pmap for single-program multiple-data (SPMD) parallel programming of multiple accelerators, with more to come.

This is a research project, not an official Google product.

A minimal example of what JAX code looks like:

```python
import jax.numpy as jnp
from jax import grad, jit, vmap

def predict(params, inputs):
    for W, b in params:
        outputs = jnp.dot(inputs, W) + b
        inputs = jnp.tanh(outputs)  # inputs to the next layer
    return outputs                  # no activation on last layer

def loss(params, inputs, targets):
    preds = predict(params, inputs)
    return jnp.sum((preds - targets) ** 2)  # squared-error loss
```
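To make the transformations above concrete, here is a small sketch (not from the original post; the function `f` is just an illustrative stand-in) showing grad tracing through an ordinary Python loop, derivatives composed to higher order, forward-mode via jax.jvp, and jit/vmap applied to the resulting derivative function:

```python
import jax
import jax.numpy as jnp
from jax import grad, jit, vmap

def f(x):
    # Ordinary Python loop: grad traces straight through it.
    for _ in range(3):
        x = jnp.tanh(x)
    return x

df  = grad(f)               # reverse-mode derivative of f
d3f = grad(grad(grad(f)))   # derivatives of derivatives of derivatives

# Forward-mode (jvp) composes with reverse-mode, e.g. forward-over-reverse
# to get a second derivative at x = 2.0.
_, d2f_at_2 = jax.jvp(grad(f), (2.0,), (1.0,))

# jit compiles the derivative via XLA; vmap maps it over a batch of inputs.
batched_df = jit(vmap(df))

print(df(2.0), d2f_at_2, d3f(2.0))
print(batched_df(jnp.linspace(-1.0, 1.0, 5)))
```

Because these are all just function transformations, the same pattern extends to pmap for SPMD parallel programming across multiple accelerators.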