Commit 5ad36438 authored by Sebastian Bindgen

print the direct output of the array.

parent c26ddcac
1 merge request: !49 Lees Edwards boundary conditions
%% Cell type:markdown id: tags:
# Lees Edwards boundary conditions with lbmpy
This example shows how to implement Lees Edwards boundary conditions following the principles discussed in Wagner, A.J., Pagonabarraga, I. Lees–Edwards Boundary Conditions for Lattice Boltzmann. Journal of Statistical Physics 107, 521–537 (2002). https://doi.org/10.1023/A:1014595628808
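In the notation used below, the construction amounts to evaluating the equilibrium of every population that is about to cross a Lees–Edwards plane with a shifted velocity, $f_i^{\mathrm{eq}}(\rho, \mathbf{u} + U_p\,\mathbf{e}_x)$, where $U_p = +U_x$ in the rows next to the lower boundary (for populations pointing down), $U_p = -U_x$ in the rows next to the upper boundary (for populations pointing up), and $U_p = 0$ everywhere else.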
%% Cell type:code id: tags:
``` python
from lbmpy.session import *
from lbmpy.updatekernels import create_stream_pull_with_output_kernel
from lbmpy.macroscopic_value_kernels import macroscopic_values_getter
from lbmpy.macroscopic_value_kernels import macroscopic_values_setter
from lbmpy.maxwellian_equilibrium import get_weights
from lbmpy.methods.creationfunctions import create_from_equilibrium
from lbmpy.relaxationrates import lattice_viscosity_from_relaxation_rate
from pystencils.astnodes import LoopOverCoordinate
from pystencils.slicing import get_slice_before_ghost_layer
from pystencils.slicing import get_ghost_region_slice
from pystencils.slicing import get_periodic_boundary_functor
from functools import partial
```
%% Cell type:markdown id: tags:
## Parameters
%% Cell type:code id: tags:
``` python
N = 64                # domain size
omega = 1.            # relaxation rate of first component
U_x = 0.1             # shear velocity
shear_dir = 0         # direction of shear flow
shear_dir_normal = 1  # direction normal to shear plane, for interpolation
stencil = get_stencil("D2Q9")
weights = get_weights(stencil, c_s_sq=sp.Rational(1, 3))
```
%% Cell type:markdown id: tags:
## Data structures
We allocate a set of PDFs `src` and `dst`, the density field `ρ` and the velocity field `u`.
For later testing, we also allocate a force field `F`.
To run the simulation on a GPU, change `default_target` to `'gpu'`.
%% Cell type:code id: tags:
``` python
dim = len(stencil[0])
dh = ps.create_data_handling((N, ) * dim,
                             periodicity=True,
                             default_target='cpu')
src = dh.add_array('src', values_per_cell=len(stencil))
dst = dh.add_array_like('dst', 'src')
F = dh.add_array('F', values_per_cell=dh.dim)
ρ = dh.add_array('rho')
u = dh.add_array('u', values_per_cell=dh.dim)
```
%% Cell type:markdown id: tags:
## Kernels
Following Wagner et al., we need to find all populations that will cross the boundary in the direction normal to the shear direction and alter their equilibrium distribution.
Hence, we construct a piecewise velocity shift that is non-zero only for these populations.
%% Cell type:code id: tags:
``` python
counters = [LoopOverCoordinate.get_loop_counter_symbol(i) for i in range(dim)]
points_up = sp.Symbol('points_up')
points_down = sp.Symbol('points_down')
U_p = sp.Piecewise((1, sp.And(ps.data_types.type_all_numbers(counters[1] <= 1, 'int'), points_down)),
                   (-1, sp.And(ps.data_types.type_all_numbers(counters[1] >= src.shape[1] - 2, 'int'),
                               points_up)), (0, True)) * U_x
```
%% Cell type:markdown id: tags:
For the LB update, we add this piecewise shift (of magnitude `U_x`, defined above) to the velocity input in the shear direction.
%% Cell type:code id: tags:
``` python
collision = create_lb_update_rule(stencil=stencil,
                                  relaxation_rate=omega,
                                  compressible=True,
                                  velocity_input=u.center_vector + sp.Matrix([U_p, 0]),
                                  density_input=ρ,
                                  force_model='luo',
                                  force=F.center_vector,
                                  kernel_type='collide_only',
                                  optimization={'symbolic_field': src})
```
%% Cell type:markdown id: tags:
We need to get the populations that cross the upper and lower boundary, respectively.
%% Cell type:code id: tags:
``` python
to_insert = [s.lhs for s in collision.subexpressions
             if collision.method.first_order_equilibrium_moment_symbols[shear_dir]
             in s.free_symbols]
for s in to_insert:
    collision = collision.new_with_inserted_subexpression(s)
ma = []
for a, c in zip(collision.main_assignments, collision.method.stencil):
    if c[shear_dir_normal] == -1:
        b = (True, False)
    elif c[shear_dir_normal] == 1:
        b = (False, True)
    else:
        b = (False, False)
    a = ps.Assignment(a.lhs, a.rhs.replace(points_down, b[0]))
    a = ps.Assignment(a.lhs, a.rhs.replace(points_up, b[1]))
    ma.append(a)
collision.main_assignments = ma
```
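%% Cell type:markdown id: tags:
As an optional sanity check, the following snippet only inspects the stencil and lists which lattice directions were just flagged as crossing the lower or upper Lees–Edwards plane; the printed labels are purely illustrative.
%% Cell type:code id: tags:
``` python
# Directions with c_y == -1 received the lower-boundary shift (points_down -> True),
# directions with c_y == +1 the upper-boundary shift (points_up -> True).
for c in collision.method.stencil:
    if c[shear_dir_normal] == -1:
        print(c, '-> crosses the lower boundary')
    elif c[shear_dir_normal] == 1:
        print(c, '-> crosses the upper boundary')
```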
%% Cell type:code id: tags:
``` python
stream = create_stream_pull_with_output_kernel(collision.method, src, dst,
                                               {'density': ρ, 'velocity': u})
stream_kernel = ps.create_kernel(stream, target=dh.default_target).compile()
collision_kernel = ps.create_kernel(collision, target=dh.default_target).compile()
```
%% Cell type:markdown id: tags:
## Initialization
%% Cell type:code id: tags:
``` python
init = macroscopic_values_setter(collision.method, velocity=(0, 0),
                                 pdfs=src.center_vector, density=ρ.center)
init_kernel = ps.create_kernel(init, ghost_layers=0).compile()
```
%% Cell type:code id: tags:
``` python
def init():
    dh.fill(ρ.name, 1.)
    dh.run_kernel(init_kernel)
    dh.fill(u.name, 0.0)
    dh.fill(F.name, 0.0)
```
%% Cell type:markdown id: tags:
## Interpolation
%% Cell type:markdown id: tags:
After applying the usual periodic boundary conditions, we interpolate back onto the original cells with a linear interpolation scheme. In this step, the corners no longer need special treatment, so we can use the entire upper and lower slab.
%% Cell type:code id: tags:
``` python
def get_le_boundary_functor(stencil, offset, ghost_layers=1, thickness=None):
    functor_2 = get_periodic_boundary_functor(stencil, ghost_layers, thickness)
    def functor(pdfs, **_):
        functor_2(pdfs)
        weight = np.fmod(offset[0] + N, 1.)
        # First, we interpolate the upper pdfs
        for i in range(len(pdfs[:, ghost_layers, :])):
            ind1 = int(np.floor(i - offset[0]) % N)
            ind2 = int(np.ceil(i - offset[0]) % N)
            res = (1-weight) * pdfs[ind1, ghost_layers, :] + \
                  weight * pdfs[ind2, ghost_layers, :]
            pdfs[i, -ghost_layers, :] = res
        # Second, we interpolate the lower pdfs
        for i in range(len(pdfs[:, -ghost_layers, :])):
            ind1 = int(np.floor(i + offset[0]) % N)
            ind2 = int(np.ceil(i + offset[0]) % N)
            res = (1-weight) * pdfs[ind1, -ghost_layers-1, :] + \
                  weight * pdfs[ind2, -ghost_layers-1, :]
            pdfs[i, ghost_layers-1, :] = res
    return functor
```
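%% Cell type:markdown id: tags:
To illustrate the index arithmetic of the lower-boundary branch, consider a hypothetical fractional offset of 10.3 (chosen only for this example): the target column `i = 5` then reads from the source columns 15 and 16, weighted with roughly 0.7 and 0.3, respectively.
%% Cell type:code id: tags:
``` python
# Hypothetical offset, only for illustration; N = 64 as defined above.
example_offset = 10.3
i = 5
weight = np.fmod(example_offset + N, 1.)      # fractional part of the offset
ind1 = int(np.floor(i + example_offset) % N)  # left source column
ind2 = int(np.ceil(i + example_offset) % N)   # right source column
print(ind1, ind2, weight)                     # approximately: 15 16 0.3
```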
%% Cell type:markdown id: tags:
## Timeloop
%% Cell type:code id: tags:
``` python
offset = [0.0]
sync_pdfs = dh.synchronization_function([src.name],
                                        functor=partial(get_le_boundary_functor, offset=offset))
def time_loop(steps):
    dh.all_to_gpu()
    for i in range(steps):
        dh.run_kernel(collision_kernel)
        sync_pdfs()
        dh.run_kernel(stream_kernel)
        dh.swap(src.name, dst.name)
        offset[0] += U_x
    dh.all_to_cpu()
```
%% Cell type:code id: tags:
``` python
def plot_v():
    plt.subplot(121)
    plt.title("$v_A$")
    plt.vector_field(dh.gather_array(u.name), step=2)
    plt.subplot(122)
    plt.vector_field_magnitude(dh.gather_array(u.name))
    plt.colorbar()
```
%% Cell type:markdown id: tags:
## Run the simulation
### Initialize all velocities to zero
%% Cell type:code id: tags:
``` python
init()
```
%% Cell type:markdown id: tags:
### Run the simulation to show the flow profile
%% Cell type:code id: tags:
``` python
time = 500
time_loop(time)
plot_v()
```
%% Output
%% Cell type:markdown id: tags:
## Checking the results
This transient shear flow (planar Couette flow) can be modeled with the Navier-Stokes equations, which we use as a test case here.
Their solution in this particular case is
$ w(x,t) = v \left[ \frac{x}{h} - \frac{1}{2} + \sum_{k=1} ^\infty \frac{1}{\pi k} e^{-\frac{4 \pi^2 \nu k^2}{h^2} t} \sin \left( \frac{2 \pi}{h} kx \right) \right] $
with $w$ the resulting velocity, $v$ the shear velocity, $h$ the height of the domain, $\nu$ the kinematic viscosity and $t$ the time. For $t \to \infty$ the sum vanishes and the profile approaches the linear Couette profile $w = v \left( \frac{x}{h} - \frac{1}{2} \right)$.
%% Cell type:code id: tags:
``` python
def w(x, t, nu, v=1.0, h=1.0, k_max=10):
    w = x / h - 0.5
    for k in np.arange(1, k_max + 1):
        w += 1.0 / (np.pi * k) * \
            np.exp(-4 * np.pi**2 * nu * k**2 / h**2 * t) * \
            np.sin(2 * np.pi / h * k * x)
    return v * w
```
%% Cell type:code id: tags:
``` python
nu = lattice_viscosity_from_relaxation_rate(omega)
h = N
k_max = 100
X = np.linspace(0.5, h-0.5, h)
U = w(X, time, nu, U_x, h, k_max)
plt.plot(U, X, label='analytical')
X_lb = np.linspace(0.5, h-0.5, h)
U_lb = dh.gather_array(u.name)[0, :, 0]
print(U_lb)
plt.plot(U_lb, X_lb, 'o-', label='lbmpy')
plt.legend()
np.testing.assert_array_almost_equal(U,
                                     U_lb,
                                     decimal=5)
```
%% Output
[-0.04845489 -0.04537392 -0.04232055 -0.03931262 -0.03636719 -0.03350019
 -0.03072622 -0.02805829 -0.02550768 -0.02308377 -0.02079401 -0.01864384
 -0.01663673 -0.01477424 -0.01305613 -0.01148044 -0.0100437  -0.00874108
 -0.00756657 -0.00651319 -0.00557316 -0.00473809 -0.00399912 -0.00334711
 -0.00277274 -0.00226661 -0.00181933 -0.00142159 -0.00106417 -0.00073794
 -0.00043391 -0.00014316  0.00014316  0.00043391  0.00073794  0.00106417
  0.00142159  0.00181933  0.00226661  0.00277274  0.00334711  0.00399912
  0.00473809  0.00557316  0.00651319  0.00756657  0.00874108  0.0100437
  0.01148044  0.01305613  0.01477424  0.01663673  0.01864384  0.02079401
  0.02308377  0.02550768  0.02805829  0.03072622  0.03350019  0.03636719
  0.03931262  0.04232055  0.04537392  0.04845489]
%% Cell type:markdown id: tags:
## Testing of the interpolation scheme
To test the interpolation scheme, we redefine the init function and send an impulse across the boundary in the shear gradient direction. If the interpolation is correct, the impulse reappears shifted to the right by the preselected offset. We therefore also fix the offset and redefine the time loop.
%% Cell type:code id: tags:
``` python
def time_loop_2(steps):
    dh.all_to_gpu()
    for i in range(steps):
        dh.run_kernel(collision_kernel)
        sync_pdfs()
        dh.run_kernel(stream_kernel)
        dh.swap(src.name, dst.name)
    dh.all_to_cpu()
def init_2():
    dh.fill(ρ.name, 1.)
    dh.run_kernel(init_kernel)
    dh.fill(u.name, 0.0)
    dh.fill(F.name, 0.0)
    dh.cpu_arrays[F.name][N//3, 1, :] = [1e-2, -1e-1]
```
%% Cell type:markdown id: tags:
## Run the simulation without any offset and with a constant offset
%% Cell type:code id: tags:
``` python
offset[0] = 0
init_2()
time = 20
time_loop_2(time)
plot_v()
vel_unshifted = np.array(dh.gather_array(u.name)[:, -3:-1, :])
```
%% Output
%% Cell type:code id: tags:
``` python
offset[0] = 10
init_2()
time = 20
time_loop_2(time)
plot_v()
vel_shifted = np.array(dh.gather_array(u.name)[:, -3:-1, :])
```
%% Output
%% Cell type:markdown id: tags:
### We roll the array back by the offset and compare it to the results of the unshifted measurement
%% Cell type:code id: tags:
``` python
vel_rolled = np.roll(vel_shifted, -offset[0], axis=0)
np.testing.assert_array_almost_equal(vel_unshifted, vel_rolled)
```
%% Cell type:markdown id: tags:
### Finally, let's run the simulation with a fixed offset for a while longer
%% Cell type:code id: tags:
``` python
init_2()
time = 500
time_loop_2(time)
plot_v()
```
%% Output