Compare revisions

Changes are shown as if the source revision was being merged into the target revision.

Showing 2588 additions and 2165 deletions
%% Cell type:code id: tags:

``` python
from lbmpy.session import *
from lbmpy.relaxationrates import *
```

%% Cell type:markdown id: tags:

# Tutorial 06: Modifying a LBM method: Smagorinsky model

In this demo, we show how to modify a lattice Boltzmann method. As an example, we are going to add a simple turbulence model by introducing a rule that locally computes the relaxation parameter depending on the local strain rate tensor. The Smagorinsky model is also implemented directly in *lbmpy*; here, however, we take the manual approach to demonstrate how an LB method can be changed in *lbmpy*.
   
## 1) Theoretical background

Since we have *sympy* available, we want to start out with the basic model equations and derive the concrete equations ourselves. This approach is less error-prone, since the calculations are done by the computer algebra system, and oftentimes it is also more general and easier to understand.

### a) Smagorinsky model

The basic idea of the Smagorinsky turbulence model is to save compute time by not resolving the smallest eddies of the flow on the grid, but modeling them with an artificial dissipation term.
The energy dissipation of small-scale vortices is taken into account by introducing a "turbulent viscosity". This additional viscosity depends on local flow properties, namely the local shear rates: the larger the local shear rates, the higher the turbulent viscosity and the more artificial dissipation is added.

The total viscosity is

$$\nu_{total} = \nu_0 + \underbrace{(C_S \Delta)^2 |S|}_{\nu_{t}}$$

where $\nu_0$ is the normal viscosity and $C_S$ is the Smagorinsky constant, not to be confused with the speed of sound! Typical values of the Smagorinsky constant are between 0.1 and 0.2. The filter length $\Delta$ is chosen as 1 in lattice coordinates.

The quantity $|S|$ is computed from the local strain rate tensor $S$ that is given by

$$S_{ij} = \frac{1}{2} \left( \partial_i u_j + \partial_j u_i \right)$$

and

$$|S| = \sqrt{2 S_{ij} S_{ij}}$$
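
To get a feeling for the magnitudes involved, we can evaluate $\nu_t$ for some illustrative numbers. The value of $|S|$ below is an arbitrary assumption and is not used later in this tutorial:

%% Cell type:code id: tags:

``` python
# Illustrative numbers only: |S| = 0.01 is an assumed value; C_S and Delta follow the text above.
C_S_example, Delta_example, S_abs_example = 0.12, 1.0, 0.01
nu_t_example = (C_S_example * Delta_example) ** 2 * S_abs_example
nu_t_example   # -> 1.44e-4
```

%% Cell type:markdown id: tags: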
   
   
### b) LBM implementation of Smagorinsky model

To add the Smagorinsky model to an LB scheme, one first has to compute the strain rate tensor $S_{ij}$ in each cell and determine the turbulent viscosity $\nu_t$ from it. Then the local relaxation rate has to be adapted to match the total viscosity $\nu_{total}$ instead of the standard viscosity $\nu_0$.

A fortunate property of LB methods is that the strain rate tensor can be computed locally from the non-equilibrium part of the distribution function. This is somewhat surprising, since the strain rate tensor contains first-order derivatives. The strain rate tensor can be obtained by

$$S_{ij} = - \frac{3 \omega_s}{2 \rho_{(0)}} \Pi_{ij}^{(neq)}$$

where $\omega_s$ is the relaxation rate that determines the viscosity, and $\rho_{(0)}$ is $\rho$ for compressible models and $1$ for incompressible schemes.
$\Pi_{ij}^{(neq)}$ is the second-order moment tensor of the non-equilibrium part of the distribution functions $f^{(neq)} = f - f^{(eq)}$ and can be computed as

$$\Pi_{ij}^{(neq)} = \sum_q c_{qi} c_{qj} \; f_q^{(neq)}$$
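
For a D2Q9 stencil this summation can be written out directly. The following sketch is just an illustration of the formula above (it assumes `LBStencil` and the `Stencil` enum can be imported from `lbmpy` as in the rest of this notebook) and anticipates the helper `second_order_moment_tensor` defined later in this tutorial:

%% Cell type:code id: tags:

``` python
# Sketch: second-order moment of the non-equilibrium part for a D2Q9 stencil,
# obtained by direct summation over the stencil directions c_q.
import sympy as sp
from lbmpy import LBStencil
from lbmpy.enums import Stencil

stencil = LBStencil(Stencil.D2Q9)            # the 9 discrete velocities c_q
f_neq = sp.symbols(f"f_neq_:{stencil.Q}")    # placeholder symbols for f_q^(neq)
sp.Matrix(2, 2, lambda i, j: sum(c[i] * c[j] * f for f, c in zip(f_neq, stencil)))
```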
   
%% Cell type:markdown id: tags:

We first have to find a closed form for $S_{ij}$, since in the formula above it depends on $\omega$, which in turn should be adapted according to $S_{ij}$.
So we compute $\omega$ and insert it into the formula for $S$:

%% Cell type:code id: tags:

``` python
τ_0, ρ, ω, ω_total, ω_0 = sp.symbols("tau_0 rho omega omega_total omega_0", positive=True, real=True)
ν_0, C_S, S, Π = sp.symbols("nu_0, C_S, |S|, Pi", positive=True, real=True)

Seq = sp.Eq(S, 3 * ω / 2 * Π)
Seq
```
   
%% Output

$\displaystyle |S| = \frac{3 \Pi \omega}{2}$

%% Cell type:markdown id: tags:

Note that we left out the minus sign, since we took the absolute value of both tensors. The absolute value is defined as above, with the factor of two inside the square root. The $\rho_{(0)}$ has been left out, remembering that $\Pi^{(neq)}$ has to be divided by $\rho$ in the case of compressible models.

Next, we compute $\omega$ from the total viscosity as given by the Smagorinsky equation:
   
%% Cell type:code id: tags:

``` python
relaxation_rate_from_lattice_viscosity(ν_0 + C_S ** 2 * S)
```

%% Output

$\displaystyle \frac{2}{6 C_{S}^{2} |S| + 6 \nu_{0} + 1}$

%% Cell type:markdown id: tags:

and insert it into the equation for $|S|$:
   
%% Cell type:code id: tags:

``` python
Seq2 = Seq.subs(ω, relaxation_rate_from_lattice_viscosity(ν_0 + C_S ** 2 * S))
Seq2
```

%% Output

$\displaystyle |S| = \frac{3 \Pi}{6 C_{S}^{2} |S| + 6 \nu_{0} + 1}$

%% Cell type:markdown id: tags:

This equation contains only known quantities, so we can solve it for $|S|$.
Additionally, we substitute the lattice viscosity $\nu_0$ by the original relaxation time $\tau_0$. The resulting equations get simpler using relaxation times instead of rates.
   
%% Cell type:code id: tags:

``` python
solveRes = sp.solve(Seq2, S)
assert len(solveRes) == 1
SVal = solveRes[0]
SVal = SVal.subs(ν_0, lattice_viscosity_from_relaxation_rate(1 / τ_0)).expand()
SVal
```

%% Output

$\displaystyle - \frac{\tau_{0}}{6 C_{S}^{2}} + \frac{\sqrt{72 C_{S}^{2} \Pi + 4 \tau_{0}^{2}}}{12 C_{S}^{2}}$

%% Cell type:markdown id: tags:

Knowing $|S|$, we can compute the total relaxation time using

$$\nu_{total} = \nu_0 + C_S^2 |S|$$
   
%% Cell type:code id: tags:

``` python
τ_val = 1 / (relaxation_rate_from_lattice_viscosity(lattice_viscosity_from_relaxation_rate(1 / τ_0) + C_S**2 * SVal)).cancel()
τ_val
```

%% Output

$\displaystyle \frac{\tau_{0}}{2} + \frac{\sqrt{18 C_{S}^{2} \Pi + \tau_{0}^{2}}}{2}$

%% Cell type:markdown id: tags:

To compute $\Pi^{(neq)}$ we use the following functions:
   
%% Cell type:code id: tags:

``` python
def second_order_moment_tensor(function_values, stencil):
    assert len(function_values) == len(stencil)
    dim = len(stencil[0])
    return sp.Matrix(dim, dim, lambda i, j: sum(c[i] * c[j] * f for f, c in zip(function_values, stencil)))


def frobenius_norm(matrix, factor=1):
    return sp.sqrt(sum(i * i for i in matrix) * factor)
```

%% Cell type:markdown id: tags:

In the next cell we construct equations that take a standard relaxation rate $\omega_0$ and compute a new relaxation rate $\omega_{total}$ according to the Smagorinsky model, using `τ_val` computed above.
   
%% Cell type:code id: tags:

``` python
def smagorinsky_equations(ω_0, ω_total, method):
    f_neq = sp.Matrix(method.pre_collision_pdf_symbols) - method.get_equilibrium_terms()
    return [sp.Eq(τ_0, 1 / ω_0),
            sp.Eq(Π, frobenius_norm(second_order_moment_tensor(f_neq, method.stencil), factor=2)),
            sp.Eq(ω_total, 1 / τ_val)]


smagorinsky_equations(ω_0, ω_total, create_lb_method())
```

%% Output

$\displaystyle \left[ \tau_{0} = \frac{1}{\omega_{0}}, \ \Pi = \sqrt{4 \left(- f_{5} + f_{6} + f_{7} - f_{8} - u_{0} u_{1}\right)^{2} + 2 \left(- \frac{\delta_{\rho}}{3} + f_{1} + f_{2} + f_{5} + f_{6} + f_{7} + f_{8} - u_{1}^{2}\right)^{2} + 2 \left(- \frac{\delta_{\rho}}{3} + f_{3} + f_{4} + f_{5} + f_{6} + f_{7} + f_{8} - u_{0}^{2}\right)^{2}}, \ \omega_{total} = \frac{1}{\frac{\tau_{0}}{2} + \frac{\sqrt{18 C_{S}^{2} \Pi + \tau_{0}^{2}}}{2}}\right]$

%% Cell type:markdown id: tags:

## 2) Application: Channel flow

Next, we modify a *lbmpy* scenario to use the Smagorinsky model.
We create an MRT method, where we fix all relaxation rates except the relaxation rate that controls the viscosity.
   
%% Cell type:code id: tags:

``` python
lbm_config = LBMConfig(stencil=Stencil.D2Q9, method=Method.MRT, force=(1e-6, 0),
                       force_model=ForceModel.LUO, relaxation_rates=[ω, 1.9, 1.9, 1.9])

method = create_lb_method(lbm_config=lbm_config)
method
```

%% Output

<lbmpy.methods.momentbased.momentbasedmethod.MomentBasedLbMethod at 0x7f745d5c3b20>

%% Cell type:markdown id: tags:

Only the collision rule has to be changed. Thus we first construct the collision rule, add the Smagorinsky equations, and create a normal scenario from the modified collision rule. To prevent the macroscopic quantity symbols in the Smagorinsky equations from being optimized away, we have to disable simplification:
   
%% Cell type:code id: tags:

``` python
optimization = {'simplification': False}
collision_rule = create_lb_collision_rule(lb_method=method, optimization=optimization)
collision_rule = collision_rule.new_with_substitutions({ω: ω_total})

collision_rule.subexpressions += smagorinsky_equations(ω, ω_total, method)
collision_rule.topological_sort(sort_subexpressions=True, sort_main_assignments=False)
collision_rule
```

%% Output

AssignmentCollection: d_6, d_5, d_2, d_3, d_0, d_1, d_7, d_4, d_8 <- f(f_2, f_4, f_7, f_5, omega_total, f_6, f_3, f_1, f_0, f_8)

%% Cell type:markdown id: tags:

In the next cell the collision rule can be simplified by extracting common subexpressions (the call is left commented out here):
   
%% Cell type:code id: tags:

``` python
from pystencils.simp import sympy_cse
# collision_rule = sympy_cse(collision_rule)
```

%% Cell type:markdown id: tags:

A channel scenario can be created from a modified collision rule:

%% Cell type:code id: tags:

``` python
ch = create_channel((300, 100), force=1e-6, collision_rule=collision_rule,
                    kernel_params={"C_S": 0.12, "omega": 1.999})
```
   
%% Cell type:code id: tags:

``` python
# show_code(ch.ast)
```

%% Cell type:code id: tags:

``` python
ch.run(5000)
```

%% Cell type:code id: tags:

``` python
plt.figure(dpi=200)
plt.vector_field(ch.velocity[:, :])
np.max(ch.velocity[:, :])
```

%% Output

$\displaystyle 0.00504266401703371$
   
   
%% Cell type:markdown id: tags:

## Appendix: Strain rate tensor formula from Chapman-Enskog

The connection between $S_{ij}$ and $\Pi_{ij}^{(neq)}$ can be seen using a Chapman-Enskog expansion. Since *lbmpy* has a module that automatically does this expansion, we can have a look at it:

%% Cell type:code id: tags:

``` python
from lbmpy.chapman_enskog import ChapmanEnskogAnalysis, CeMoment
from lbmpy.chapman_enskog.chapman_enskog import remove_higher_order_u
compressible_model = create_lb_method(stencil=Stencil.D2Q9, compressible=True, zero_centered=False)
incompressible_model = create_lb_method(stencil=Stencil.D2Q9, compressible=False, zero_centered=False)

ce_compressible = ChapmanEnskogAnalysis(compressible_model)
ce_incompressible = ChapmanEnskogAnalysis(incompressible_model)
```

%% Cell type:markdown id: tags:

The Chapman-Enskog analysis yields expressions for the moment

$\Pi = \Pi^{(eq)} + \epsilon \Pi^{(1)} + \epsilon^2 \Pi^{(2)} + \cdots$

and the strain rate tensor is related to $\Pi^{(1)}$. However, the best approximation we have for $\Pi^{(1)}$ is $\Pi^{(neq)}$. For details, see the paper "Shear stress in lattice Boltzmann simulations" by Krüger, Varnik and Raabe from 2009.

Let's look at the values of $\Pi^{(1)}$ obtained from the Chapman-Enskog expansion:
   
%% Cell type:code id: tags:

``` python
Π_1_xy = CeMoment("\\Pi", moment_tuple=(1, 1), superscript=1)
Π_1_xx = CeMoment("\\Pi", moment_tuple=(2, 0), superscript=1)
Π_1_yy = CeMoment("\\Pi", moment_tuple=(0, 2), superscript=1)
components = (Π_1_xx, Π_1_yy, Π_1_xy)

Π_1_xy_val = ce_compressible.higher_order_moments[Π_1_xy]
Π_1_xy_val
```

%% Output

$\displaystyle \frac{\rho u_{0}^{2} {\partial^{(1)}_{0} u_{1}} + 2 \rho u_{0} u_{1} {\partial^{(1)}_{0} u_{0}} + 2 \rho u_{0} u_{1} {\partial^{(1)}_{1} u_{1}} + \rho u_{1}^{2} {\partial^{(1)}_{1} u_{0}} - \frac{\rho {\partial^{(1)}_{1} u_{0}}}{3} - \frac{\rho {\partial^{(1)}_{0} u_{1}}}{3} + u_{0}^{2} u_{1} {\partial^{(1)}_{0} \rho} + u_{0} u_{1}^{2} {\partial^{(1)}_{1} \rho}}{\omega}$

%% Cell type:markdown id: tags:

This term contains many higher-order error terms. We assume that $u$ is small in lattice coordinates, so if we neglect all terms in $u$ that are quadratic or higher we get:
   
%% Cell type:code id: tags:

``` python
remove_higher_order_u(Π_1_xy_val.expand())
```

%% Output

$\displaystyle - \frac{\rho {\partial^{(1)}_{1} u_{0}}}{3 \omega} - \frac{\rho {\partial^{(1)}_{0} u_{1}}}{3 \omega}$

%% Cell type:markdown id: tags:

Putting these steps together into a function, we can quickly display them for the different cases:
   
%% Cell type:code id: tags:

``` python
def get_Π_1(ce_analysis, component):
    val = ce_analysis.higher_order_moments[component]
    return remove_higher_order_u(val.expand())
```

%% Cell type:markdown id: tags:

Compressible case:

%% Cell type:code id: tags:

``` python
tuple(get_Π_1(ce_compressible, Pi) for Pi in components)
```

%% Output

$\displaystyle \left( - \frac{2 \rho {\partial^{(1)}_{0} u_{0}}}{3 \omega}, \ - \frac{2 \rho {\partial^{(1)}_{1} u_{1}}}{3 \omega}, \ - \frac{\rho {\partial^{(1)}_{1} u_{0}}}{3 \omega} - \frac{\rho {\partial^{(1)}_{0} u_{1}}}{3 \omega}\right)$
   
%% Cell type:markdown id: tags:

Incompressible case:

%% Cell type:code id: tags:

``` python
tuple(get_Π_1(ce_incompressible, Pi) for Pi in components)
```

%% Output

$\displaystyle \left( \frac{2 u_{0} {\partial^{(1)}_{0} \rho}}{3 \omega} - \frac{2 {\partial^{(1)}_{0} u_{0}}}{3 \omega}, \ \frac{2 u_{1} {\partial^{(1)}_{1} \rho}}{3 \omega} - \frac{2 {\partial^{(1)}_{1} u_{1}}}{3 \omega}, \ \frac{u_{0} {\partial^{(1)}_{1} \rho}}{3 \omega} + \frac{u_{1} {\partial^{(1)}_{0} \rho}}{3 \omega} - \frac{{\partial^{(1)}_{1} u_{0}}}{3 \omega} - \frac{{\partial^{(1)}_{0} u_{1}}}{3 \omega}\right)$

%% Cell type:markdown id: tags:

The incompressible case has some terms $\partial \rho$ which are zero, since $\rho$ is assumed constant.

Leaving out the error terms, we finally obtain:
   
   
$$\Pi_{ij}^{(neq)} \approx \Pi_{ij}^{(1)} = -\frac{\rho_{(0)}}{3 \omega_s} \left( \partial_i u_j + \partial_j u_i \right) = -\frac{2 \rho_{(0)}}{3 \omega_s} S_{ij}$$
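
Solving this relation for $S_{ij}$ recovers the formula used at the beginning of this tutorial, $S_{ij} = -\frac{3 \omega_s}{2 \rho_{(0)}} \Pi_{ij}^{(neq)}$. As a small sanity check of the algebra, we can verify this symbolically with generic placeholder symbols (not tied to any particular stencil):

%% Cell type:code id: tags:

``` python
# Consistency check: with Pi^(1) = -rho/(3*omega) * (d_i u_j + d_j u_i) and
# S_ij = (d_i u_j + d_j u_i) / 2, we must recover S_ij = -3*omega/(2*rho) * Pi^(1).
import sympy as sp

du = sp.Symbol("du")                          # placeholder for (d_i u_j + d_j u_i)
ρ_0, ω_s = sp.symbols("rho_0 omega_s", positive=True)
S_ij = du / 2
Π_1 = -ρ_0 / (3 * ω_s) * du
assert sp.simplify(-3 * ω_s / (2 * ρ_0) * Π_1 - S_ij) == 0
```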
@@ -541,7 +541,7 @@
 ],
 "metadata": {
  "kernelspec": {
-  "display_name": "Python 3",
+  "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
@@ -555,7 +555,7 @@
  "name": "python",
  "nbconvert_exporter": "python",
  "pygments_lexer": "ipython3",
- "version": "3.8.2"
+ "version": "3.11.4"
 }
},
"nbformat": 4,
@@ -8,13 +8,10 @@ API Reference
   enums.rst
   stencils.rst
   kernelcreation.rst
-  methods.rst
   equilibrium.rst
   moment_transforms.rst
   maxwellian_equilibrium.rst
   continuous_distribution_measures.rst
   moments.rst
   cumulants.rst
-  boundary_conditions.rst
-  forcemodels.rst
   zbibliography.rst
@@ -110,6 +110,24 @@ issn = {0898-1221},
doi = {10.1016/j.camwa.2015.05.001},
}
@article{geier2017,
author = {Geier, Martin and Pasquali, Andrea and Sch{\"{o}}nherr, Martin},
title = {Parametrization of the Cumulant Lattice Boltzmann Method for Fourth Order Accurate Diffusion Part I},
year = {2017},
issue_date = {November 2017},
publisher = {Academic Press Professional, Inc.},
address = {USA},
volume = {348},
number = {C},
issn = {0021-9991},
url = {https://doi.org/10.1016/j.jcp.2017.05.040},
doi = {10.1016/j.jcp.2017.05.040},
journal = {J. Comput. Phys.},
month = {nov},
pages = {862–888},
numpages = {27}
}
@Article{Coreixas2019,
title = {Comprehensive comparison of collision models in the lattice Boltzmann framework: Theoretical investigations},
author = {Coreixas, Christophe and Chopard, Bastien and Latt, Jonas},
@@ -248,4 +266,52 @@ journal = {Communications in Computational Physics}
publisher = {Springer Link},
}
@article{BouzidiBC,
author = {Bouzidi, M’hamed and Firdaouss, Mouaouia and Lallemand, Pierre},
title = "{Momentum transfer of a Boltzmann-lattice fluid with boundaries}",
journal = {Physics of Fluids},
year = {2001},
month = {11},
doi = {10.1063/1.1399290},
url = {https://doi.org/10.1063/1.1399290},
}
@Article{rozema15,
doi = {10.1063/1.4928700},
year = {2015},
month = {aug},
publisher = {AIP Publishing},
volume = {27},
number = {8},
author = {Wybe Rozema and Hyun J. Bae and Parviz Moin and Roel Verstappen},
title = {Minimum-dissipation models for large-eddy simulation},
journal = {Physics of Fluids}
}
@article{Han2021,
doi = {10.1088/1873-7005/ac1782},
url = {https://dx.doi.org/10.1088/1873-7005/ac1782} ,
year = {2021},
month = {aug},
publisher = {IOP Publishing},
volume = {53},
number = {4},
pages = {045506},
author = {Mengtao Han and Ryozo Ooka and Hideki Kikumoto},
title = {A wall function approach in lattice Boltzmann method: algorithm and validation using turbulent channel flow},
journal = {Fluid Dynamics Research}
}
@article{Maronga2020,
author = {Maronga, Bj{\"o}rn and Knigge, Christoph and Raasch, Siegfried},
year = {2020},
title = {{An Improved Surface Boundary Condition for Large-Eddy Simulations Based on Monin--Obukhov Similarity Theory: Evaluation and Consequences for Grid Convergence in Neutral and Stable Conditions}},
pages = {297--325},
volume = {174},
number = {2},
issn = {0006-8314},
journal = {{Boundary-layer meteorology}},
doi = {10.1007/s10546-019-00485-w}
}
@Comment{jabref-meta: databaseType:bibtex;}
-*******************************************
-Methods and Method Creation (lbmpy.methods)
-*******************************************
+********************************
+Collision models (lbmpy.methods)
+********************************
This module defines the classes defining all types of lattice Boltzmann methods available in *lbmpy*,
together with a set of factory functions used to create their instances. The factory functions are
@@ -22,21 +22,6 @@ Abstract LB Method Base Class
.. autoclass:: lbmpy.methods.AbstractLbMethod
   :members:
Conserved Quantity Computation
==============================
The classes of the conserved quantity computation (CQC) submodule define an LB Method's conserved quantities and
the equations to compute them. For hydrodynamic methods, :class:`lbmpy.methods.DensityVelocityComputation` is
the typical choice. For custom methods, however, a custom CQC class might have to be created.
.. autoclass:: lbmpy.methods.AbstractConservedQuantityComputation
:members:
.. autoclass:: lbmpy.methods.DensityVelocityComputation
:members:
.. _methods_rawmomentbased:
Raw Moment-based methods
@@ -163,3 +148,17 @@ Low-Level Creation Functions
.. autofunction:: lbmpy.methods.creationfunctions.create_with_continuous_maxwellian_equilibrium
.. autofunction:: lbmpy.methods.creationfunctions.create_from_equilibrium
Conserved Quantity Computation
==============================
The classes of the conserved quantity computation (CQC) submodule define an LB Method's conserved quantities and
the equations to compute them. For hydrodynamic methods, :class:`lbmpy.methods.DensityVelocityComputation` is
the typical choice. For custom methods, however, a custom CQC class might have to be created.
.. autoclass:: lbmpy.methods.AbstractConservedQuantityComputation
:members:
.. autoclass:: lbmpy.methods.DensityVelocityComputation
:members:
\ No newline at end of file
@@ -4,6 +4,10 @@ Tutorials
All tutorials are automatically created by Jupyter Notebooks.
You can open the notebooks directly to play around with the code examples.
=================
Basics
=================
.. toctree::
   :maxdepth: 1
@@ -13,17 +17,68 @@ You can open the notebooks directly to play around with the code examples.
   /notebooks/03_tutorial_lbm_formulation.ipynb
   /notebooks/04_tutorial_cumulant_LBM.ipynb
   /notebooks/05_tutorial_nondimensionalization_and_scaling.ipynb
===================
Turbulence modeling
===================
.. toctree::
:maxdepth: 1
   /notebooks/06_tutorial_modifying_method_smagorinsky.ipynb
=================
Thermal flows
=================
.. toctree::
:maxdepth: 1
   /notebooks/07_tutorial_thermal_lbm.ipynb
/notebooks/demo_thermalized_lbm.ipynb
=================
Multiphase flows
=================
.. toctree::
:maxdepth: 1
   /notebooks/08_tutorial_shanchen_twophase.ipynb
   /notebooks/09_tutorial_shanchen_twocomponent.ipynb
   /notebooks/10_tutorial_conservative_allen_cahn_two_phase.ipynb
========================
Thermocapillary flows
========================
.. toctree::
:maxdepth: 1
/notebooks/12_Thermocapillary_flows_heated_channel.ipynb
/notebooks/13_Thermocapillary_flows_droplet_motion.ipynb
===================
Non Newtonian flow
===================
.. toctree::
:maxdepth: 1
   /notebooks/11_tutorial_Non_Newtonian_Flow.ipynb
=================
Diverse
=================
.. toctree::
:maxdepth: 1
   /notebooks/demo_stencils.ipynb
   /notebooks/demo_streaming_patterns.ipynb
   /notebooks/demo_create_method_from_scratch.ipynb
   /notebooks/demo_moments_cumulants_and_maxwellian_equilibrium.ipynb
   /notebooks/demo_automatic_chapman_enskog_analysis.ipynb
-  /notebooks/demo_thermalized_lbm.ipynb
+  /notebooks/demo_interpolation_boundary_conditions.ipynb
   /notebooks/demo_shallow_water_lbm.ipynb
   /notebooks/demo_theoretical_background_generic_equilibrium_construction.ipynb
from .creationfunctions import create_lb_ast, create_lb_collision_rule, create_lb_function,\
create_lb_method, create_lb_update_rule, LBMConfig, LBMOptimisation
from .enums import Stencil, Method, ForceModel, CollisionSpace
from .lbstep import LatticeBoltzmannStep
from .macroscopic_value_kernels import pdf_initialization_assignments, macroscopic_values_getter,\
compile_macroscopic_values_getter, compile_macroscopic_values_setter, create_advanced_velocity_setter_collision_rule
from .maxwellian_equilibrium import get_weights
from .relaxationrates import relaxation_rate_from_lattice_viscosity, lattice_viscosity_from_relaxation_rate,\
relaxation_rate_from_magic_number
from .scenarios import create_lid_driven_cavity, create_fully_periodic_flow
from .stencils import LBStencil
__all__ = ['create_lb_ast', 'create_lb_collision_rule', 'create_lb_function', 'create_lb_method',
'create_lb_method_from_existing', 'create_lb_update_rule', 'LBMConfig', 'LBMOptimisation',
'Stencil', 'Method', 'ForceModel', 'CollisionSpace',
'LatticeBoltzmannStep',
'pdf_initialization_assignments', 'macroscopic_values_getter', 'compile_macroscopic_values_getter',
'compile_macroscopic_values_setter', 'create_advanced_velocity_setter_collision_rule',
'get_weights',
'relaxation_rate_from_lattice_viscosity', 'lattice_viscosity_from_relaxation_rate',
'relaxation_rate_from_magic_number',
'create_lid_driven_cavity', 'create_fully_periodic_flow',
'LBStencil']
from ._version import get_versions
__version__ = get_versions()['version']
del get_versions
from lbmpy.boundaries.boundaryconditions import (
UBB, FixedDensity, DiffusionDirichlet, SimpleExtrapolationOutflow,
ExtrapolationOutflow, NeumannByCopy, NoSlip, StreamInConstant, FreeSlip)
from lbmpy.boundaries.boundaryhandling import LatticeBoltzmannBoundaryHandling
__all__ = ['NoSlip', 'FreeSlip', 'UBB', 'SimpleExtrapolationOutflow', 'ExtrapolationOutflow',
'FixedDensity', 'DiffusionDirichlet', 'NeumannByCopy',
'LatticeBoltzmannBoundaryHandling', 'StreamInConstant']
def analytic_rising_speed(gravitational_acceleration, bubble_diameter, viscosity_gas):
    r"""
    Calculates the analytical rising speed of a bubble. This is the expected end rising speed.
    Args:
        gravitational_acceleration: the gravitational acceleration acting in the simulation scenario. Usually it gets
                                    calculated based on dimensionless parameters which describe the scenario
        bubble_diameter: the diameter of the bubble at the beginning of the simulation
        viscosity_gas: the viscosity of the fluid inside the bubble
    """
    result = -(gravitational_acceleration * bubble_diameter * bubble_diameter) / (12.0 * viscosity_gas)
    return result
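
# Example usage of the helper above; the parameter values are hypothetical lattice units,
# chosen purely to illustrate the call (not taken from any lbmpy example):
# analytic_rising_speed(1e-6, 30, 0.05)  # == -(1e-6 * 30 * 30) / (12.0 * 0.05) == -1.5e-3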
import sympy as sp

from pystencils.simp import (
    SimplificationStrategy, apply_to_all_assignments,
    insert_aliases, insert_zeros, insert_constants)


def create_phasefield_simplification_strategy(lb_method):
    s = SimplificationStrategy()
    expand = apply_to_all_assignments(sp.expand)

    s.add(expand)
    s.add(insert_zeros)
    s.add(insert_aliases)
    s.add(insert_constants)
    s.add(lambda ac: ac.new_without_unused_subexpressions())

    return s
import pytest

import pystencils as ps

from lbmpy.creationfunctions import create_lb_function, LBMConfig
from lbmpy.enums import Method
from lbmpy.scenarios import create_lid_driven_cavity


@pytest.mark.parametrize('double_precision', [False, True])
@pytest.mark.parametrize('method_enum', [Method.SRT, Method.CENTRAL_MOMENT, Method.CUMULANT])
def test_creation(double_precision, method_enum):
    """Simple test that makes sure that only float variables are created"""
    lbm_config = LBMConfig(method=method_enum, relaxation_rate=1.5, compressible=True)
    config = ps.CreateKernelConfig(data_type="float64" if double_precision else "float32",
                                   default_number_float="float64" if double_precision else "float32")
    func = create_lb_function(lbm_config=lbm_config, config=config)
    code = ps.get_code_str(func)

    if double_precision:
        assert 'float' not in code
        assert 'double' in code
    else:
        assert 'double' not in code
        assert 'float' in code


@pytest.mark.parametrize('double_precision', [False, True])
@pytest.mark.parametrize('method_enum', [Method.SRT, Method.CENTRAL_MOMENT, Method.CUMULANT])
def test_scenario(method_enum, double_precision):
    lbm_config = LBMConfig(method=method_enum, relaxation_rate=1.45, compressible=True)
    config = ps.CreateKernelConfig(data_type="float64" if double_precision else "float32",
                                   default_number_float="float64" if double_precision else "float32")
    sc = create_lid_driven_cavity((16, 16, 8), lbm_config=lbm_config, config=config)
    sc.run(1)

    code = ps.get_code_str(sc.ast)
    if double_precision:
        assert 'float' not in code
        assert 'double' in code
    else:
        assert 'double' not in code
        assert 'float' in code