Commit de3fa3a2 authored by Stephan Seitz

Tune README.rst, index.rst

parent 0bcb36b6
@@ -36,7 +36,7 @@ Usage
 Create a :class:`pystencils.AssignmentCollection` with pystencils:
-.. testcode::
+.. code-block:: python
     import sympy
     import pystencils
@@ -50,8 +50,7 @@ Create a :class:`pystencils.AssignmentCollection` with pystencils:
     print(forward_assignments)
-.. testoutput::
-   :options: -ELLIPSIS, +NORMALIZE_WHITESPACE
+.. code-block:: python
 Subexpressions:
 Main Assignments:
@@ -59,7 +58,7 @@ Create a :class:`pystencils.AssignmentCollection` with pystencils:
 You can then obtain the corresponding backward assignments:
-.. testcode::
+.. code-block:: python
     from pystencils.autodiff import AutoDiffOp, create_backward_assignments
     backward_assignments = create_backward_assignments(forward_assignments)
@@ -68,7 +67,7 @@ You can then obtain the corresponding backward assignments:
 You can see the derivatives with respect to the two inputs multiplied by the gradient ``diffz_C`` of the output ``z_C``.
-.. testoutput::
+.. code-block:: python
    :options: -ELLIPSIS, +NORMALIZE_WHITESPACE
 Subexpressions:
@@ -78,7 +77,7 @@ You can see the derivatives with respect to the
 You can also use the class :class:`.AutoDiffOp` to obtain both the assignments (if you are curious) and auto-differentiable operations for TensorFlow...
-.. testcode::
+.. code-block:: python
     op = AutoDiffOp(forward_assignments)
     backward_assignments = op.backward_assignments
@@ -89,7 +88,7 @@ You can also use the class :class:`.AutoDiffOp` to obtain both the assignments (
 ... or Torch:
-.. testcode::
+.. code-block:: python
     x_tensor = pystencils.autodiff.torch_tensor_from_field(x, cuda=False, requires_grad=True)
     y_tensor = pystencils.autodiff.torch_tensor_from_field(y, cuda=False, requires_grad=True)
@@ -4,7 +4,89 @@ pystencils-autodiff
This is the documentation of **pystencils-autodiff**.
.. include:: ../README.rst
Installation
------------

Install via pip:

.. code-block:: bash

   pip install pystencils-autodiff

or, if you downloaded this `repository <https://github.com/theHamsta/pystencils_autodiff>`_, using:

.. code-block:: bash

   pip install -e .
Usage
-----

Create a :class:`pystencils.AssignmentCollection` with pystencils:

.. testcode::

   import sympy
   import pystencils

   z, x, y = pystencils.fields("z, y, x: [20,30]")
   forward_assignments = pystencils.AssignmentCollection({
       z[0, 0]: x[0, 0] * sympy.log(x[0, 0] * y[0, 0])
   })
   print(forward_assignments)
.. testoutput::
   :options: -ELLIPSIS, +NORMALIZE_WHITESPACE

   Subexpressions:
   Main Assignments:
        z[0,0] ← y_C*log(x_C*y_C)

(The field description string ``"z, y, x: [20,30]"`` lists the fields in the order ``z, y, x``, so the Python variable ``x`` holds the field named ``y`` and vice versa; this is why the printed assignment reads ``y_C*log(x_C*y_C)``.)
You can then obtain the corresponding backward assignments:

.. testcode::

   from pystencils.autodiff import AutoDiffOp, create_backward_assignments

   backward_assignments = create_backward_assignments(forward_assignments)

   # Sorting for reproducible outputs
   backward_assignments.main_assignments = sorted(backward_assignments.main_assignments, key=lambda a: str(a))
   print(backward_assignments)
You can see the derivatives with respect to the two inputs, multiplied by the gradient ``diffz_C`` of the output ``z_C``.

.. testoutput::
   :options: -ELLIPSIS, +NORMALIZE_WHITESPACE

   Subexpressions:
   Main Assignments:
       \hat{x}[0,0] ← diffz_C*y_C/x_C
       \hat{y}[0,0] ← diffz_C*(log(x_C*y_C) + 1)
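These backward assignments are exactly what the chain rule gives for the printed forward expression ``z = y_C*log(x_C*y_C)``. As a sanity check independent of pystencils, the same derivatives can be reproduced with plain sympy; the symbols below are hypothetical stand-ins chosen only to mirror the names in the output above:

```python
import sympy

# Plain sympy symbols mirroring the printed field accesses
# (stand-ins for illustration, not pystencils objects)
x_C, y_C, diffz_C = sympy.symbols("x_C y_C diffz_C")

# Forward expression as printed above
z = y_C * sympy.log(x_C * y_C)

# Reverse mode: derivative w.r.t. each input, scaled by the
# incoming gradient diffz_C of the output
grad_x = sympy.simplify(diffz_C * sympy.diff(z, x_C))  # analytically diffz_C*y_C/x_C
grad_y = sympy.simplify(diffz_C * sympy.diff(z, y_C))  # analytically diffz_C*(log(x_C*y_C) + 1)

print(grad_x)
print(grad_y)
```

Both results match the main assignments for ``\hat{x}`` and ``\hat{y}`` printed above.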
You can also use the class :class:`.AutoDiffOp` to obtain both the assignments (if you are curious) and auto-differentiable operations for TensorFlow...

.. testcode::

   op = AutoDiffOp(forward_assignments)
   backward_assignments = op.backward_assignments

   x_tensor = pystencils.autodiff.tf_variable_from_field(x)
   y_tensor = pystencils.autodiff.tf_variable_from_field(y)
   tensorflow_op = op.create_tensorflow_op({x: x_tensor, y: y_tensor}, backend='tensorflow')
... or Torch:

.. testcode::

   x_tensor = pystencils.autodiff.torch_tensor_from_field(x, cuda=False, requires_grad=True)
   y_tensor = pystencils.autodiff.torch_tensor_from_field(y, cuda=False, requires_grad=True)
   z_tensor = op.create_tensorflow_op({x: x_tensor, y: y_tensor}, backend='torch')
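Whichever backend executes the generated kernels, the backward pass should agree with the analytic gradients shown earlier. A minimal finite-difference sanity check of the same math in plain Python (a sketch requiring neither pystencils, TensorFlow, nor Torch; the scalar function mirrors the printed forward expression):

```python
import math

def forward(x_C, y_C):
    # Scalar version of the printed forward assignment: z = y_C * log(x_C * y_C)
    return y_C * math.log(x_C * y_C)

def finite_difference_grad(f, x_C, y_C, eps=1e-6):
    # Central differences with respect to both inputs
    dx = (f(x_C + eps, y_C) - f(x_C - eps, y_C)) / (2 * eps)
    dy = (f(x_C, y_C + eps) - f(x_C, y_C - eps)) / (2 * eps)
    return dx, dy

dx, dy = finite_difference_grad(forward, 2.0, 3.0)

# Analytic gradients from the backward assignments (with diffz_C = 1):
#   dz/dx_C = y_C / x_C          -> 3.0 / 2.0
#   dz/dy_C = log(x_C*y_C) + 1   -> log(6.0) + 1
assert abs(dx - 3.0 / 2.0) < 1e-4
assert abs(dy - (math.log(6.0) + 1.0)) < 1e-4
```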
Contents