Commit dbf84f44 authored by Michael Kuron

correct importorskip in notebook tests

parent 87aa9914
1 merge request: !220 Improve ARM64 support
Pipeline #30475 canceled
%% Cell type:markdown id: tags:
# pystencils - LLVM generation
The generation of LLVM code is similar to, but not identical with, the C++ version. The generation itself uses the Python module ``llvmlite``, which provides the necessary support and bindings for LLVM (a small ``llvmlite`` example follows the import check below). In order to generate LLVM from the AST, the AST first has to be transformed so that it contains explicit type conversions. This is the biggest difference to the C++ version: C++ does not need this step because the language itself handles the casts.
In this example a simple weighted Jacobi kernel is generated, so that the focus remains on the LLVM generation itself.
%% Cell type:code id: tags:
``` python
import pytest
pytest.importorskip('llvmlite')
```
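%% Cell type:markdown id: tags:
As a quick illustration of what ``llvmlite`` provides, the ``llvmlite.ir`` package can be used to build a small LLVM module by hand. The following cell is an added sketch and not part of the original notebook; the module name ``example`` and the function ``add`` are purely illustrative.
%% Cell type:code id: tags:
``` python
import llvmlite.ir as ir

# build a module containing a single function: double add(double a, double b)
module = ir.Module(name="example")
dbl = ir.DoubleType()
fn = ir.Function(module, ir.FunctionType(dbl, (dbl, dbl)), name="add")
builder = ir.IRBuilder(fn.append_basic_block(name="entry"))
a, b = fn.args
builder.ret(builder.fadd(a, b, name="sum"))
print(module)  # textual LLVM IR of the hand-built module
```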
%% Cell type:code id: tags:
``` python
import sympy as sp
import numpy as np
import ctypes
from pystencils import Field, Assignment
from pystencils import create_kernel
from pystencils.display_utils import to_dot
sp.init_printing()
```
%% Cell type:markdown id: tags:
The numpy arrays (with initial values) are used to create *Field* objects for the update rule. Later the same arrays are used for the computation itself.
%% Cell type:code id: tags:
``` python
src_arr = np.zeros((30, 20))
src_arr[0,:] = 1.0
src_arr[:,0] = 1.0
dst_arr = src_arr.copy()
src_field = Field.create_from_numpy_array('src', src_arr)
dst_field = Field.create_from_numpy_array('dst', dst_arr)
```
%% Cell type:markdown id: tags:
Using the *Field* objects and the additional *Symbol* $\omega$ for the weight, the update rule is specified as a *sympy* equation.
%% Cell type:code id: tags:
``` python
omega = sp.symbols("omega")
update_rule = Assignment(dst_field[0,0], omega * (src_field[0,1] + src_field[0,-1] + src_field[1,0] + src_field[-1,0]) / 4
                         + (1.-omega)*src_field[0,0])
update_rule
```
%% Output
$\displaystyle {{dst}_{(0,0)}} \leftarrow {{src}_{(0,0)}} \left(1.0 - \omega\right) + \frac{\omega \left({{src}_{(1,0)}} + {{src}_{(0,1)}} + {{src}_{(0,-1)}} + {{src}_{(-1,0)}}\right)}{4}$
dst_C := src_C⋅(1.0 - ω) + ω⋅(src_E + src_N + src_S + src_W) / 4
%% Cell type:markdown id: tags:
With this update rule an abstract syntax tree (AST) can be created, which can then be used to print the LLVM code. The creation follows the same routines as the C++ version. However, at the end there are two more steps: in order to generate LLVM, type casts and pointer arithmetic have to be introduced explicitly (which C++ does for you).
%% Cell type:code id: tags:
``` python
ast = create_kernel([update_rule], target='llvm')
print(str(ast))
```
%% Output
KernelFunction kernel([_data_dst, _data_src, omega])
   Block for(ctr_0=1; ctr_0<29; ctr_0+=1)
      Block _data_dst_00 ← pointer_arithmetic_func(_data_dst, 20*ctr_0)
      _data_src_00 ← pointer_arithmetic_func(_data_src, 20*ctr_0)
      _data_src_01 ← pointer_arithmetic_func(_data_src, 20*ctr_0 + 20)
      _data_src_0m1 ← pointer_arithmetic_func(_data_src, 20*ctr_0 - 20)
      for(ctr_1=1; ctr_1<19; ctr_1+=1)
         Block _data_dst_00[ctr_1] ← omega*(_data_src_00[ctr_1 + 1] + _data_src_00[ctr_1 - 1] + _data_src_01[ctr_1] + _data_src_0m1[ctr_1])*cast_func(1/4, double) + (omega*cast_func(-1, double) + cast_func(1.0, double))*_data_src_00[ctr_1]
%% Cell type:markdown id: tags:
It is possible to examine the AST further, for example by rendering it with graphviz; a plain-Python traversal is sketched after the rendered graph below.
%% Cell type:code id: tags:
``` python
to_dot(ast)
```
%% Output
<graphviz.files.Source at 0x7f23dbdc5c10>
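%% Cell type:markdown id: tags:
The rendered graph shows the node hierarchy; the same structure can also be walked in plain Python. The traversal below is an added sketch and assumes that AST nodes expose their children via an ``args`` attribute (as the nodes in ``pystencils.astnodes`` do); children without such an attribute are simply treated as leaves.
%% Cell type:code id: tags:
``` python
def print_tree(node, indent=0):
    """Print the class name of every node in the tree, indented by depth."""
    print("  " * indent + type(node).__name__)
    for child in getattr(node, 'args', []):
        print_tree(child, indent + 1)

print_tree(ast)
```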
%% Cell type:markdown id: tags:
With the transformed AST it is now possible to generate and compile LLVM code. Notice that, unlike in the C++ version, no files are written to the hard drive (although that is possible).
There are multiple ways to generate and compile the AST. The simplest one is similar to the C++ version: use the ``compile()`` function of the generated AST.
You can also manually create a Python function with ``make_python_function``.
Another option is obtaining the jit itself with ``generate_and_jit``, which first generates and then compiles the AST.
If even more control is needed, the functions ``generateLLVM`` and ``compileLLVM`` achieve the same. For the finest-grained control, instead of calling ``compileLLVM`` the jit object itself can be created and its compilation steps run manually (``parse``, (``optimize``,) ``compile``); a sketch of what these steps look like at the llvmlite level follows the next code cell.
%% Cell type:code id: tags:
``` python
kernel = ast.compile()
#kernel = make_python_function(ast)
# Or alternatively
#jit = generate_and_jit(ast)
# Call: jit('kernel', src_arr, dst_arr)
```
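%% Cell type:markdown id: tags:
For reference, the ``parse``/``optimize``/``compile`` steps mentioned above can also be carried out directly with ``llvmlite.binding``. The following cell is an added sketch of that low-level pipeline on a tiny hand-written IR module; it is not the pystencils jit itself, and the function name ``scale`` is only illustrative.
%% Cell type:code id: tags:
``` python
import ctypes
import llvmlite.binding as llvm

llvm.initialize()
llvm.initialize_native_target()
llvm.initialize_native_asmprinter()

ir_src = r"""
define double @scale(double %x) {
entry:
  %r = fmul double %x, 2.0
  ret double %r
}
"""

mod = llvm.parse_assembly(ir_src)   # parse: textual IR -> module reference
mod.verify()

pmb = llvm.PassManagerBuilder()     # optimize: populate and run the default O2 pipeline
pmb.opt_level = 2
pm = llvm.ModulePassManager()
pmb.populate(pm)
pm.run(mod)

target_machine = llvm.Target.from_default_triple().create_target_machine()
engine = llvm.create_mcjit_compiler(mod, target_machine)   # compile: JIT the module
engine.finalize_object()
scale = ctypes.CFUNCTYPE(ctypes.c_double, ctypes.c_double)(engine.get_function_address("scale"))
scale(21.0)
```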
%% Cell type:markdown id: tags:
The compiled function(s) can be used now. Either call the function directly (passing the arguments, if they were not bound before) or call the jit object with the function's name and its arguments. Numpy arrays are automatically converted to the matching ctypes arguments.
The registered functions and their arguments can be inspected as well.
**All of the information the jit object has comes from the module that was parsed. If you parse a second module and do not run the compilation step, the module and the compiled code are no longer the same, which leads to stale information.**
%% Cell type:code id: tags:
``` python
#jit.print_functions()
```
%% Cell type:code id: tags:
``` python
for i in range(100):
    kernel(src=src_arr, dst=dst_arr, omega=2/3)
    src_arr, dst_arr = dst_arr, src_arr
```
%% Cell type:markdown id: tags:
The output is drawn with matplotlib.
%% Cell type:code id: tags:
``` python
import matplotlib.pyplot as plt
from matplotlib import cm
fig = plt.figure()
ax = fig.add_subplot(111)
ax.imshow(dst_arr, cmap=cm.jet)
plt.show()
```
%% Output
%% Cell type:code id: tags:
``` python
```
......
%% Cell type:code id: tags:
``` python
import pytest
pytest.importorskip('graphviz')
```
%% Cell type:code id: tags:
``` python
from pystencils.session import *
from pystencils.astnodes import Block, Conditional
```
%% Cell type:code id: tags:
``` python
src, dst = ps.fields("src, dst: double[2D]", layout='c')
true_block = Block([ps.Assignment(dst[0, 0], src[-1, 0])])
false_block = Block([ps.Assignment(dst[0, 0], src[1, 0])])
ur = [true_block, Conditional(dst.center() > 0.0, true_block, false_block)]
ast = ps.create_kernel(ur)
```
%% Cell type:code id: tags:
``` python
ps.to_dot(ast, graph_style={'size': "9.5,12.5"})
```
%% Output
<graphviz.files.Source at 0x7f62452c4110>
......
%% Cell type:code id: tags:
``` python
import pytest
pytest.importorskip('waLBerla')
```
%% Cell type:code id: tags:
``` python
from pystencils.session import *
from time import perf_counter
from statistics import median
from functools import partial
```
%% Cell type:markdown id: tags:
## Benchmark for Python call overhead
%% Cell type:code id: tags:
``` python
inner_repeats = 100
outer_repeats = 5
sizes = [2**i for i in range(1, 8)]
sizes
```
%% Output
$\displaystyle \left[ 2, \ 4, \ 8, \ 16, \ 32, \ 64, \ 128\right]$
[2, 4, 8, 16, 32, 64, 128]
%% Cell type:code id: tags:
``` python
def benchmark_pure(domain_size, extract_first=False):
    src = np.zeros(domain_size)
    dst = np.zeros_like(src)
    f_src, f_dst = ps.fields("src, dst", src=src, dst=dst)
    kernel = ps.create_kernel(ps.Assignment(f_dst.center, f_src.center)).compile()
    if extract_first:
        kernel = kernel.kernel
        start = perf_counter()
        for i in range(inner_repeats):
            kernel(src=src, dst=dst)
            src, dst = dst, src
        end = perf_counter()
    else:
        start = perf_counter()
        for i in range(inner_repeats):
            kernel(src=src, dst=dst)
            src, dst = dst, src
        end = perf_counter()
    return (end - start) / inner_repeats


def benchmark_datahandling(domain_size, parallel=False):
    dh = ps.create_data_handling(domain_size, parallel=parallel)
    f_src = dh.add_array('src')
    f_dst = dh.add_array('dst')
    kernel = ps.create_kernel(ps.Assignment(f_dst.center, f_src.center)).compile()
    start = perf_counter()
    for i in range(inner_repeats):
        dh.run_kernel(kernel)
        dh.swap('src', 'dst')
    end = perf_counter()
    return (end - start) / inner_repeats


name_to_func = {
    'pure_extract': partial(benchmark_pure, extract_first=True),
    'pure_no_extract': partial(benchmark_pure, extract_first=False),
    'dh_serial': partial(benchmark_datahandling, parallel=False),
    'dh_parallel': partial(benchmark_datahandling, parallel=True),
}
```
%% Cell type:code id: tags:
``` python
result = {'block_size': [],
          'name': [],
          'time': []}

for bs in sizes:
    print("Computing size ", bs)
    for name, func in name_to_func.items():
        for i in range(outer_repeats):
            time = func((bs, bs))
            result['block_size'].append(bs)
            result['name'].append(name)
            result['time'].append(time)
```
%% Output
Computing size 2
Computing size 4
Computing size 8
Computing size 16
Computing size 32
Computing size 64
Computing size 128
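%% Cell type:markdown id: tags:
The raw timings collected above can be condensed to a median per variant and block size using the ``median`` import from the setup cell. This aggregation cell is an added sketch and not part of the original benchmark.
%% Cell type:code id: tags:
``` python
from collections import defaultdict

# group the individual timings by (variant name, block size) and take the median
grouped = defaultdict(list)
for name, bs, t in zip(result['name'], result['block_size'], result['time']):
    grouped[(name, bs)].append(t)

median_times = {key: median(times) for key, times in grouped.items()}
median_times
```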
%% Cell type:code id: tags:
``` python
if 'is_test_run' not in globals():
    import pandas as pd
    import seaborn as sns

    data = pd.DataFrame.from_dict(result)
    plt.subplot(1,2,1)
    sns.barplot(x='block_size', y='time', hue='name', data=data, alpha=0.6)
    plt.yscale('log')

    plt.subplot(1,2,2)
    data = pd.DataFrame.from_dict(result)
    sns.barplot(x='block_size', y='time', hue='name', data=data, alpha=0.6)
```
%% Output
......