Compare revisions

Changes are shown as if the source revision was being merged into the target revision. Learn more about comparing revisions.

Source

Select target project
No results found
Select Git revision
  • Sparse
  • WallLaw
  • fhennig/pystencils2.0-compat
  • improved_comm
  • master
  • suffa/psm_optimization
  • release/0.2.1
  • release/0.2.10
  • release/0.2.11
  • release/0.2.12
  • release/0.2.13
  • release/0.2.14
  • release/0.2.15
  • release/0.2.2
  • release/0.2.3
  • release/0.2.4
  • release/0.2.5
  • release/0.2.6
  • release/0.2.7
  • release/0.2.8
  • release/0.2.9
  • release/0.3.0
  • release/0.3.1
  • release/0.3.2
  • release/0.3.3
  • release/0.3.4
  • release/0.4.0
  • release/0.4.1
  • release/0.4.2
  • release/0.4.3
  • release/0.4.4
  • release/1.0
  • release/1.0.1
  • release/1.1
  • release/1.1.1
  • release/1.2
  • release/1.3
  • release/1.3.1
  • release/1.3.2
  • release/1.3.3
  • release/1.3.4
  • release/1.3.5
  • release/1.3.6
  • release/1.3.7
44 results

Target

Select target project
  • ravi.k.ayyala/lbmpy
  • brendan-waters/lbmpy
  • anirudh.jonnalagadda/lbmpy
  • jbadwaik/lbmpy
  • alexander.reinauer/lbmpy
  • itischler/lbmpy
  • he66coqe/lbmpy
  • ev81oxyl/lbmpy
  • Bindgen/lbmpy
  • da15siwa/lbmpy
  • holzer/lbmpy
  • RudolfWeeber/lbmpy
  • pycodegen/lbmpy
13 results
Select Git revision
  • GetterSetterAPI
  • HRR
  • HydroPressure
  • InplaceConfig
  • Outflow
  • PhaseField
  • Sparse
  • UBBVelocity
  • UpdateAPISparse
  • WallLaw
  • WetNodeBoundaries
  • csebug
  • feature/sparse
  • feature/try
  • improved_comm
  • install_requires
  • master
  • phaseField
  • relaxationrates
  • test_martin
  • release/0.2.1
  • release/0.2.10
  • release/0.2.11
  • release/0.2.12
  • release/0.2.13
  • release/0.2.14
  • release/0.2.15
  • release/0.2.2
  • release/0.2.3
  • release/0.2.4
  • release/0.2.5
  • release/0.2.6
  • release/0.2.7
  • release/0.2.8
  • release/0.2.9
  • release/0.3.0
  • release/0.3.1
  • release/0.3.2
  • release/0.3.3
  • release/0.3.4
  • release/0.4.0
  • release/0.4.1
  • release/0.4.2
  • release/0.4.3
  • release/0.4.4
  • release/1.0
  • release/1.0.1
  • release/1.1
  • release/1.1.1
  • release/1.2
  • release/1.3
  • release/1.3.1
  • release/1.3.2
  • release/1.3.3
  • release/1.3.4
  • release/1.3.5
  • release/1.3.6
57 results
Show changes
Showing with 565 additions and 79 deletions
...@@ -20,21 +20,21 @@ and belongs to pystencils, not lbmpy. This can be found in the pystencils module
of the generated code is specified.
1. *Method*:
the method defines the collision process. Currently, there are two big categories:
moment and cumulant based methods. A method defines how each moment or cumulant is relaxed by
storing the equilibrium value and the relaxation rate for each moment/cumulant.
2. *Collision/Update Rule*:
Methods can generate a "collision rule", which is an equation collection that defines the
post-collision values as a function of the pre-collision values. On these equation collections,
simplifications are applied to reduce the number of floating point operations.
At this stage an entropic optimisation step can also be added to determine one relaxation rate by an
entropy condition.
Then a streaming rule is added which transforms the collision rule into an update rule.
The streaming step depends on the pdf storage (source/destination, AABB pattern, EsoTwist).
Currently, only the simple source/destination pattern is supported.
3. *AST*:
The abstract syntax tree describes the structure of the kernel, including loops and conditionals.
The AST can be modified, e.g., to add OpenMP pragmas, reorder loops or apply other optimisations.
4. *Function*:
This step compiles the AST into an executable function, either for CPUs or GPUs. This function
behaves like a normal Python function and runs one LBM time step.
...@@ -52,6 +52,8 @@ For example, to modify the AST one can run::
func = create_lb_function(ast=ast, ...)
"""
import copy
from dataclasses import dataclass, field, replace
from typing import Union, List, Tuple, Any, Type, Iterable
from warnings import warn, filterwarnings
...@@ -61,33 +63,36 @@ import pystencils.astnodes
import sympy as sp
import sympy.core.numbers
from lbmpy.enums import Stencil, Method, ForceModel, CollisionSpace, SubgridScaleModel
import lbmpy.forcemodels as forcemodels
from lbmpy.fieldaccess import CollideOnlyInplaceAccessor, PdfFieldAccessor, PeriodicTwoFieldsAccessor
from lbmpy.fluctuatinglb import add_fluctuations_to_collision_rule
from lbmpy.partially_saturated_cells import add_psm_to_collision_rule, PSMConfig
from lbmpy.non_newtonian_models import add_cassons_model, CassonsParameters
from lbmpy.methods import (create_mrt_orthogonal, create_mrt_raw, create_central_moment,
create_srt, create_trt, create_trt_kbc)
from lbmpy.methods.creationfunctions import CollisionSpaceInfo
from lbmpy.methods.creationfunctions import (
create_with_monomial_cumulants, create_cumulant, create_with_default_polynomial_cumulants)
from lbmpy.methods.momentbased.entropic import add_entropy_condition, add_iterative_entropy_condition
from lbmpy.relaxationrates import relaxation_rate_from_magic_number
from lbmpy.simplificationfactory import create_simplification_strategy
from lbmpy.stencils import LBStencil
from lbmpy.turbulence_models import add_sgs_model
from lbmpy.updatekernels import create_lbm_kernel, create_stream_pull_with_output_kernel
from lbmpy.advanced_streaming.utility import Timestep, get_accessor
from pystencils import CreateKernelConfig, create_kernel
from pystencils.astnodes import Conditional, Block
from pystencils.cache import disk_cache_no_fallback
from pystencils.node_collection import NodeCollection
from pystencils.typing import collate_types
from pystencils.field import Field
from pystencils.simp import sympy_cse, SimplificationStrategy
# needed for the docstring
from lbmpy.methods.abstractlbmethod import LbmCollisionRule, AbstractLbMethod
from lbmpy.methods.cumulantbased import CumulantBasedLbMethod
# Filter out JobLib warnings. They are not useful for us:
# https://github.com/joblib/joblib/issues/683
filterwarnings("ignore", message="Persisting input arguments took")
...@@ -99,7 +104,7 @@ class LBMConfig:
"""
stencil: lbmpy.stencils.LBStencil = LBStencil(Stencil.D2Q9)
"""
All stencils are defined in :class:`lbmpy.enums.Stencil`. From that, an :class:`lbmpy.stencils.LBStencil`
instance will be created.
"""
method: Method = Method.SRT
...@@ -112,13 +117,19 @@ class LBMConfig:
"""
Sequence of relaxation rates; the number depends on the selected method. If you specify more rates than the
method needs, the additional rates are ignored.
If no relaxation rates are specified, the parameter `relaxation_rate` will be consulted.
""" """
relaxation_rate: Union[int, float, Type[sp.Symbol]] = None relaxation_rate: Union[int, float, Type[sp.Symbol]] = None
""" """
For SRT, TRT and polynomial cumulant models it is possible to define The method's primary relaxation rate. In most cases, this is the relaxation rate governing shear viscosity.
a single ``relaxation_rate`` instead of a list (Internally this is converted to a list with a single entry). For SRT, this is the only relaxation rate.
The second rate for TRT is then determined via magic number. For the moment, central moment based and the For TRT, the second relaxation rate is then determined via magic number.
cumulant model, it sets only the relaxation rate corresponding to shear viscosity, setting all others to unity. In the case of raw moment, central moment, and cumulant-based MRT methods, all other relaxation rates will be
set to unity.
If neither `relaxation_rate` nor `relaxation_rates` is specified, the behaviour is as if
`relaxation_rate=sp.Symbol('omega')` was set.
""" """
compressible: bool = False compressible: bool = False
""" """
...@@ -134,7 +145,7 @@ class LBMConfig:
"""
delta_equilibrium: bool = None
"""
Determines whether or not the (continuous or discrete, see `continuous_equilibrium`) Maxwellian equilibrium is
expressed in its absolute form, or only by its deviation from the rest state (typically given by the reference
density and zero velocity). This parameter is only effective if `zero_centered` is set to `True`. Then, if
`delta_equilibrium` is `False`, the rest state must be reintroduced to the populations during collision. Otherwise,
...@@ -164,7 +175,7 @@ class LBMConfig:
"""
A list of lists of modes, grouped by common relaxation times. This is usually used in
conjunction with `lbmpy.methods.default_moment_sets.mrt_orthogonal_modes_literature`.
If this argument is not provided, Gram-Schmidt orthogonalisation of the default modes is performed.
"""
force_model: Union[lbmpy.forcemodels.AbstractForceModel, ForceModel] = None
...@@ -195,7 +206,14 @@ class LBMConfig:
galilean_correction: bool = False
"""
Special correction for D3Q27 cumulant LBMs. For Details see
:mod:`lbmpy.methods.cumulantbased.galilean_correction`
"""
fourth_order_correction: Union[float, bool] = False
"""
Special correction for rendering D3Q27 cumulant LBMs fourth-order accurate in diffusion. For Details see
:mod:`lbmpy.methods.cumulantbased.fourth_order_correction`. If set to `True`, the fourth-order correction is
employed without limiters (or more precisely with a very high limiter, practically disabling the limiters). If this
variable is set to a number, the latter is used for the limiters (uniformly for omega_3, omega_4 and omega_5).
""" """
collision_space_info: CollisionSpaceInfo = None collision_space_info: CollisionSpaceInfo = None
""" """
...@@ -221,12 +239,17 @@ class LBMConfig:
omega_output_field: Field = None
"""
A pystencils Field can be passed here, where the calculated free relaxation rate of
an entropic or subgrid-scale method is written to
"""
eddy_viscosity_field: Field = None
"""
A pystencils Field can be passed here, where the eddy-viscosity of a subgrid-scale model is written.
"""
subgrid_scale_model: Union[SubgridScaleModel, tuple[SubgridScaleModel, float], tuple[SubgridScaleModel, int]] = None
"""
Choose a subgrid-scale model (SGS) for large-eddy simulations. ``omega_output_field`` can be set to
write out adapted relaxation rates. Either provide just the SGS and use the default model constants or provide a
tuple of the SGS and its corresponding model constant.
""" """
cassons: CassonsParameters = False cassons: CassonsParameters = False
""" """
...@@ -244,6 +267,12 @@ class LBMConfig:
Temperature for fluctuating lattice Boltzmann methods.
"""
psm_config: PSMConfig = None
"""
If a PSM config is specified, (1 - fractionField) is added to the relaxation rates of the collision
and to the potential force term, and a solid collision is built and added to the main assignments.
"""
output: dict = field(default_factory=dict)
"""
A dictionary mapping macroscopic quantities, e.g. the strings 'density' and 'velocity', to pystencils
...@@ -260,6 +289,11 @@ class LBMConfig:
Symbolic field where the density is read from. If `None` is given, the density is calculated in place from
the zeroth-order moment.
"""
conserved_moments: bool = True
"""
Whether or not the lower-order moments are conserved. If velocity or density input is set, the lower-order
moments are not conserved anymore.
"""
kernel_type: Union[str, Type[PdfFieldAccessor]] = 'default_stream_collide'
"""
...@@ -325,9 +359,11 @@ class LBMConfig: ...@@ -325,9 +359,11 @@ class LBMConfig:
self.stencil = LBStencil(self.stencil) self.stencil = LBStencil(self.stencil)
if self.relaxation_rates is None: if self.relaxation_rates is None:
self.relaxation_rates = [sp.Symbol("omega")] * self.stencil.Q # Fall back to regularized method
if self.relaxation_rate is None:
self.relaxation_rate = sp.Symbol("omega")
# if only a single relaxation rate is defined (which makes sense for SRT or TRT methods) # if only a single relaxation rate is defined,
# it is internally treated as a list with one element and just sets the relaxation_rates parameter # it is internally treated as a list with one element and just sets the relaxation_rates parameter
if self.relaxation_rate is not None: if self.relaxation_rate is not None:
if self.method in [Method.TRT, Method.TRT_KBC_N1, Method.TRT_KBC_N2, Method.TRT_KBC_N3, Method.TRT_KBC_N4]: if self.method in [Method.TRT, Method.TRT_KBC_N1, Method.TRT_KBC_N2, Method.TRT_KBC_N3, Method.TRT_KBC_N4]:
...@@ -340,8 +376,8 @@ class LBMConfig: ...@@ -340,8 +376,8 @@ class LBMConfig:
if not self.compressible and self.method in (Method.MONOMIAL_CUMULANT, Method.CUMULANT): if not self.compressible and self.method in (Method.MONOMIAL_CUMULANT, Method.CUMULANT):
raise ValueError("Incompressible cumulant-based methods are not supported (yet).") raise ValueError("Incompressible cumulant-based methods are not supported (yet).")
if self.zero_centered and (self.entropic or self.fluctuating): if self.zero_centered and self.entropic:
raise ValueError("Entropic and fluctuating methods can only be created with `zero_centered=False`.") raise ValueError("Entropic methods can only be created with `zero_centered=False`.")
# Check or infer delta-equilibrium # Check or infer delta-equilibrium
if self.delta_equilibrium is not None: if self.delta_equilibrium is not None:
...@@ -412,8 +448,10 @@ class LBMConfig: ...@@ -412,8 +448,10 @@ class LBMConfig:
force_not_zero = True force_not_zero = True
if self.force_model is None and force_not_zero: if self.force_model is None and force_not_zero:
self.force_model = cumulant_force_model.CenteredCumulantForceModel(self.force[:self.stencil.D]) \ if self.method == Method.CUMULANT:
if self.method == Method.CUMULANT else forcemodels.Guo(self.force[:self.stencil.D]) self.force_model = forcemodels.CentralMoment(self.force[:self.stencil.D])
else:
self.force_model = forcemodels.Guo(self.force[:self.stencil.D])
force_model_dict = { force_model_dict = {
'simple': forcemodels.Simple, 'simple': forcemodels.Simple,
...@@ -424,11 +462,14 @@ class LBMConfig: ...@@ -424,11 +462,14 @@ class LBMConfig:
'silva': forcemodels.Buick, 'silva': forcemodels.Buick,
'edm': forcemodels.EDM, 'edm': forcemodels.EDM,
'kupershtokh': forcemodels.EDM, 'kupershtokh': forcemodels.EDM,
'cumulant': cumulant_force_model.CenteredCumulantForceModel,
'he': forcemodels.He, 'he': forcemodels.He,
'shanchen': forcemodels.ShanChen 'shanchen': forcemodels.ShanChen,
'centralmoment': forcemodels.CentralMoment
} }
if self.psm_config is not None and self.psm_config.fraction_field is not None:
self.force = [(1.0 - self.psm_config.fraction_field.center) * f for f in self.force]
if isinstance(self.force_model, str): if isinstance(self.force_model, str):
new_force_model = ForceModel[self.force_model.upper()] new_force_model = ForceModel[self.force_model.upper()]
warn(f'ForceModel "{self.force_model}" as str is deprecated. Use {new_force_model} instead or ' warn(f'ForceModel "{self.force_model}" as str is deprecated. Use {new_force_model} instead or '
...@@ -439,6 +480,9 @@ class LBMConfig: ...@@ -439,6 +480,9 @@ class LBMConfig:
force_model_class = force_model_dict[self.force_model.name.lower()] force_model_class = force_model_dict[self.force_model.name.lower()]
self.force_model = force_model_class(force=self.force[:self.stencil.D]) self.force_model = force_model_class(force=self.force[:self.stencil.D])
if self.density_input or self.velocity_input:
self.conserved_moments = False
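
The force-model selection above can be exercised as follows (the force magnitude is illustrative): if no ``force_model`` is given and the force is non-zero, ``Guo`` is chosen automatically, or ``CentralMoment`` for cumulant methods; alternatively a model is selected explicitly through the ``ForceModel`` enum::

    import sympy as sp
    from lbmpy.creationfunctions import LBMConfig
    from lbmpy.enums import Stencil, Method, ForceModel
    from lbmpy.stencils import LBStencil

    # Automatic choice (Guo for moment-based methods, CentralMoment for cumulants)
    cfg_auto = LBMConfig(stencil=LBStencil(Stencil.D2Q9), method=Method.SRT,
                         relaxation_rate=sp.Symbol("omega"), force=(1e-6, 0))

    # Explicit choice via the enum
    cfg_he = LBMConfig(stencil=LBStencil(Stencil.D2Q9), method=Method.SRT,
                       relaxation_rate=sp.Symbol("omega"), force=(1e-6, 0),
                       force_model=ForceModel.HE)
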
@dataclass
class LBMOptimisation:
...@@ -453,10 +497,13 @@ class LBMOptimisation:
"""
Run common subexpression elimination after all other simplifications have been executed.
"""
simplification: Union[str, bool, SimplificationStrategy] = 'auto'
"""
Simplifications applied during the derivation of the collision rule. If ``True`` or ``'auto'``,
a default simplification strategy is selected according to the type of the method;
see :func:`lbmpy.simplificationfactory.create_simplification_strategy`.
If ``False``, no simplification is applied.
Otherwise, the given simplification strategy will be applied.
"""
pre_simplification: bool = True
"""
...@@ -637,9 +684,31 @@ def create_lb_collision_rule(lb_method=None, lbm_config=None, lbm_optimisation=N
else:
collision_rule = lb_method.get_collision_rule(pre_simplification=pre_simplification)
if lbm_config.psm_config is not None:
if lbm_config.psm_config.fraction_field is None or lbm_config.psm_config.object_velocity_field is None:
raise ValueError("Specify a fraction and object velocity field in the PSM Config")
collision_rule = add_psm_to_collision_rule(collision_rule, lbm_config.psm_config)
if lbm_config.galilean_correction:
from lbmpy.methods.cumulantbased import add_galilean_correction
collision_rule = add_galilean_correction(collision_rule)
if lbm_config.fourth_order_correction:
from lbmpy.methods.cumulantbased import add_fourth_order_correction
# must provide a second relaxation rate in implementation; defaults to 1
if len(lbm_config.relaxation_rates) == 1:
lbm_config.relaxation_rates.append(1)
cumulant_limiter = 1e6 if lbm_config.fourth_order_correction is True else lbm_config.fourth_order_correction
collision_rule = add_fourth_order_correction(collision_rule=collision_rule,
shear_relaxation_rate=lbm_config.relaxation_rates[0],
bulk_relaxation_rate=lbm_config.relaxation_rates[1],
limiter=cumulant_limiter)
if lbm_config.entropic:
if lbm_config.subgrid_scale_model or lbm_config.cassons:
raise ValueError("Choose either entropic, subgrid-scale or cassons")
if lbm_config.entropic_newton_iterations:
if isinstance(lbm_config.entropic_newton_iterations, bool):
iterations = 3
...@@ -649,14 +718,24 @@ def create_lb_collision_rule(lb_method=None, lbm_config=None, lbm_optimisation=N
omega_output_field=lbm_config.omega_output_field)
else:
collision_rule = add_entropy_condition(collision_rule, omega_output_field=lbm_config.omega_output_field)
elif lbm_config.subgrid_scale_model:
if lbm_config.cassons:
raise ValueError("Cassons model can not be combined with a subgrid-scale model")
model_constant = None
sgs_model = lbm_config.subgrid_scale_model
if isinstance(lbm_config.subgrid_scale_model, tuple):
sgs_model = lbm_config.subgrid_scale_model[0]
model_constant = lbm_config.subgrid_scale_model[1]
collision_rule = add_sgs_model(collision_rule=collision_rule, subgrid_scale_model=sgs_model,
model_constant=model_constant, omega_output_field=lbm_config.omega_output_field,
eddy_viscosity_field=lbm_config.eddy_viscosity_field)
if 'split_groups' in collision_rule.simplification_hints:
collision_rule.simplification_hints['split_groups'][0].append(sp.Symbol("sgs_omega"))
elif lbm_config.cassons:
collision_rule = add_cassons_model(collision_rule, parameter=lbm_config.cassons,
...@@ -666,7 +745,7 @@ def create_lb_collision_rule(lb_method=None, lbm_config=None, lbm_optimisation=N
output_eqs = cqc.output_equations_from_pdfs(lb_method.pre_collision_pdf_symbols, lbm_config.output)
collision_rule = collision_rule.new_merged(output_eqs)
if lbm_optimisation.simplification is True or lbm_optimisation.simplification == 'auto':
simplification = create_simplification_strategy(lb_method, split_inner_loop=lbm_optimisation.split)
elif callable(lbm_optimisation.simplification):
simplification = lbm_optimisation.simplification
...@@ -674,6 +753,10 @@ def create_lb_collision_rule(lb_method=None, lbm_config=None, lbm_optimisation=N
simplification = SimplificationStrategy()
collision_rule = simplification(collision_rule)
if isinstance(collision_rule.method, CumulantBasedLbMethod):
from lbmpy.methods.cumulantbased.cumulant_simplifications import check_for_logarithms
check_for_logarithms(collision_rule)
if lbm_config.fluctuating:
add_fluctuations_to_collision_rule(collision_rule, **lbm_config.fluctuating)
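
Since the fluctuating branch simply forwards the ``fluctuating`` dictionary to ``add_fluctuations_to_collision_rule``, a thermalised collision rule can be requested as sketched here (a minimal example; the temperature symbol is arbitrary and all random-number options are left at their defaults)::

    import sympy as sp
    from lbmpy.creationfunctions import LBMConfig, create_lb_collision_rule
    from lbmpy.enums import Stencil, Method
    from lbmpy.stencils import LBStencil

    fluct_cfg = LBMConfig(stencil=LBStencil(Stencil.D3Q19), method=Method.MRT,
                          compressible=True, relaxation_rate=sp.Symbol("omega"),
                          fluctuating={'temperature': sp.Symbol("k_B_T")})

    collision_rule = create_lb_collision_rule(lbm_config=fluct_cfg)
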
...@@ -697,6 +780,11 @@ def create_lb_method(lbm_config=None, **params):
if isinstance(lbm_config.force, Field):
lbm_config.force = tuple(lbm_config.force(i) for i in range(dim))
if lbm_config.psm_config is None:
fraction_field = None
else:
fraction_field = lbm_config.psm_config.fraction_field
common_params = {
'compressible': lbm_config.compressible,
'zero_centered': lbm_config.zero_centered,
...@@ -706,14 +794,15 @@ def create_lb_method(lbm_config=None, **params):
'continuous_equilibrium': lbm_config.continuous_equilibrium,
'c_s_sq': lbm_config.c_s_sq,
'collision_space_info': lbm_config.collision_space_info,
'fraction_field': fraction_field,
}
cumulant_params = {
'zero_centered': lbm_config.zero_centered,
'force_model': lbm_config.force_model,
'c_s_sq': lbm_config.c_s_sq,
'collision_space_info': lbm_config.collision_space_info,
'fraction_field': fraction_field,
}
if lbm_config.method == Method.SRT:
...@@ -724,12 +813,15 @@ def create_lb_method(lbm_config=None, **params):
method = create_trt(lbm_config.stencil, relaxation_rates[0], relaxation_rates[1], **common_params)
elif lbm_config.method == Method.MRT:
method = create_mrt_orthogonal(lbm_config.stencil, relaxation_rates, weighted=lbm_config.weighted,
nested_moments=lbm_config.nested_moments,
conserved_moments=lbm_config.conserved_moments, **common_params)
elif lbm_config.method == Method.CENTRAL_MOMENT:
method = create_central_moment(lbm_config.stencil, relaxation_rates,
nested_moments=lbm_config.nested_moments,
conserved_moments=lbm_config.conserved_moments, **common_params)
elif lbm_config.method == Method.MRT_RAW:
method = create_mrt_raw(lbm_config.stencil, relaxation_rates,
conserved_moments=lbm_config.conserved_moments, **common_params)
elif lbm_config.method in (Method.TRT_KBC_N1, Method.TRT_KBC_N2, Method.TRT_KBC_N3, Method.TRT_KBC_N4):
if lbm_config.stencil.D == 2 and lbm_config.stencil.Q == 9:
dim = 2
...@@ -740,13 +832,28 @@ def create_lb_method(lbm_config=None, **params):
method_nr = lbm_config.method.name[-1]
method = create_trt_kbc(dim, relaxation_rates[0], relaxation_rates[1], 'KBC-N' + method_nr, **common_params)
elif lbm_config.method == Method.CUMULANT:
if lbm_config.fourth_order_correction:
if lbm_config.stencil.D != 3 and lbm_config.stencil.Q != 27:
raise ValueError("Fourth-order correction can only be applied to D3Q27 cumulant methods.")
assert len(relaxation_rates) <= 2, "Optimal parametrisation for fourth-order cumulants needs either one " \
"or two relaxation rates, associated with the shear (and bulk) " \
"viscosity. All other relaxation rates are automatically chosen " \
"optimally"
# define method in terms of symbolic relaxation rates and assign optimal values later
from lbmpy.methods.cumulantbased.fourth_order_correction import FOURTH_ORDER_RELAXATION_RATE_SYMBOLS
relaxation_rates = FOURTH_ORDER_RELAXATION_RATE_SYMBOLS
if lbm_config.nested_moments is not None:
method = create_cumulant(lbm_config.stencil, relaxation_rates, lbm_config.nested_moments,
conserved_moments=lbm_config.conserved_moments, **cumulant_params)
else:
method = create_with_default_polynomial_cumulants(lbm_config.stencil, relaxation_rates, **cumulant_params)
elif lbm_config.method == Method.MONOMIAL_CUMULANT:
method = create_with_monomial_cumulants(lbm_config.stencil, relaxation_rates,
conserved_moments=lbm_config.conserved_moments, **cumulant_params)
else:
raise ValueError("Failed to create LB method. Please use lbmpy.enums.Method for the creation")
...@@ -757,10 +864,58 @@ def create_lb_method(lbm_config=None, **params):
if lbm_config.entropic:
method.set_conserved_moments_relaxation_rate(relaxation_rates[0])
lbm_config.lb_method = method
return method
def create_psm_update_rule(lbm_config, lbm_optimisation):
node_collection = []
# Use regular lb update rule for no overlapping particles
config_without_psm = copy.deepcopy(lbm_config)
config_without_psm.psm_config = None
# TODO: the force is still multiplied by (1.0 - self.psm_config.fraction_field.center)
# (should not harm if memory bound since self.psm_config.fraction_field.center should always be 0.0)
lb_update_rule = create_lb_update_rule(
lbm_config=config_without_psm, lbm_optimisation=lbm_optimisation
)
node_collection.append(
Conditional(
lbm_config.psm_config.fraction_field.center(0) <= 0.0,
Block(lb_update_rule.all_assignments),
)
)
# Only one particle, i.e., no individual_fraction_field is provided
if lbm_config.psm_config.individual_fraction_field is None:
assert lbm_config.psm_config.MaxParticlesPerCell == 1
psm_update_rule = create_lb_update_rule(
lbm_config=lbm_config, lbm_optimisation=lbm_optimisation
)
node_collection.append(
Conditional(
lbm_config.psm_config.fraction_field.center(0) > 0.0,
Block(psm_update_rule.all_assignments),
)
)
else:
for p in range(lbm_config.psm_config.MaxParticlesPerCell):
# Add psm update rule for p overlapping particles
config_with_p_particles = copy.deepcopy(lbm_config)
config_with_p_particles.psm_config.MaxParticlesPerCell = p + 1
psm_update_rule = create_lb_update_rule(
lbm_config=config_with_p_particles, lbm_optimisation=lbm_optimisation
)
node_collection.append(
Conditional(
lbm_config.psm_config.individual_fraction_field.center(p) > 0.0,
Block(psm_update_rule.all_assignments),
)
)
return NodeCollection(node_collection)
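
A hedged usage sketch for the PSM entry points added here; the ``PSMConfig`` constructor arguments are inferred from the attribute names used in this diff and may not be complete::

    import sympy as sp
    import pystencils as ps
    from lbmpy.creationfunctions import LBMConfig, LBMOptimisation, create_psm_update_rule
    from lbmpy.enums import Stencil, Method
    from lbmpy.stencils import LBStencil
    from lbmpy.partially_saturated_cells import PSMConfig

    fraction, obj_vel = ps.fields("fraction, obj_vel(2): double[2D]")

    psm = PSMConfig(fraction_field=fraction, object_velocity_field=obj_vel)
    cfg = LBMConfig(stencil=LBStencil(Stencil.D2Q9), method=Method.SRT,
                    relaxation_rate=sp.Symbol("omega"), psm_config=psm)

    # Branching node collection: plain LBM where no particle overlaps, PSM collision otherwise
    nodes = create_psm_update_rule(lbm_config=cfg, lbm_optimisation=LBMOptimisation())
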
# ----------------------------------------------------------------------------------------------------------------------
def update_with_default_parameters(params, opt_params=None, lbm_config=None, lbm_optimisation=None, config=None):
# Fix CreateKernelConfig params
...
File moved
import numpy as np
import sympy as sp
from pystencils.typing import TypedSymbol, create_type
from pystencils.backends.cbackend import CustomCodeNode
class NeighbourOffsetArrays(CustomCodeNode):
@staticmethod
def neighbour_offset(dir_idx, stencil):
if isinstance(sp.sympify(dir_idx), sp.Integer):
return stencil[dir_idx]
else:
return tuple([sp.IndexedBase(symbol, shape=(1,))[dir_idx]
for symbol in NeighbourOffsetArrays._offset_symbols(len(stencil[0]))])
@staticmethod
def _offset_symbols(dim):
return [TypedSymbol(f"neighbour_offset_{d}", create_type('int32')) for d in ['x', 'y', 'z'][:dim]]
def __init__(self, stencil, offsets_dtype=np.int32):
offsets_dtype = create_type(offsets_dtype)
dim = len(stencil[0])
array_symbols = NeighbourOffsetArrays._offset_symbols(dim)
code = "\n"
for i, arrsymb in enumerate(array_symbols):
code += _array_pattern(offsets_dtype, arrsymb.name, (d[i] for d in stencil))
offset_symbols = NeighbourOffsetArrays._offset_symbols(dim)
super(NeighbourOffsetArrays, self).__init__(code, symbols_read=set(),
symbols_defined=set(offset_symbols))
class MirroredStencilDirections(CustomCodeNode):
@staticmethod
def mirror_stencil(direction, mirror_axis):
assert mirror_axis <= len(direction), f"only {len(direction)} axis available for mirage"
direction = list(direction)
direction[mirror_axis] = -direction[mirror_axis]
return tuple(direction)
@staticmethod
def _mirrored_symbol(mirror_axis):
axis = ['x', 'y', 'z']
return TypedSymbol(f"{axis[mirror_axis]}_axis_mirrored_stencil_dir", create_type('int32'))
def __init__(self, stencil, mirror_axis, dtype=np.int32):
offsets_dtype = create_type(dtype)
mirrored_stencil_symbol = MirroredStencilDirections._mirrored_symbol(mirror_axis)
mirrored_directions = [stencil.index(MirroredStencilDirections.mirror_stencil(direction, mirror_axis))
for direction in stencil]
code = "\n"
code += _array_pattern(offsets_dtype, mirrored_stencil_symbol.name, mirrored_directions)
super(MirroredStencilDirections, self).__init__(code, symbols_read=set(),
symbols_defined={mirrored_stencil_symbol})
class LbmWeightInfo(CustomCodeNode):
def __init__(self, lb_method, data_type='double'):
self.weights_symbol = TypedSymbol("weights", data_type)
weights = [f"(({self.weights_symbol.dtype.c_name})({str(w.evalf(17))}))" for w in lb_method.weights]
weights = ", ".join(weights)
w_sym = self.weights_symbol
code = f"const {self.weights_symbol.dtype.c_name} {w_sym.name} [] = {{{ weights }}};\n"
super(LbmWeightInfo, self).__init__(code, symbols_read=set(), symbols_defined={w_sym})
def weight_of_direction(self, dir_idx, lb_method=None):
if isinstance(sp.sympify(dir_idx), sp.Integer):
return lb_method.weights[dir_idx].evalf(17)
else:
return sp.IndexedBase(self.weights_symbol, shape=(1,))[dir_idx]
class TranslationArraysNode(CustomCodeNode):
def __init__(self, array_content, symbols_defined):
code = ''
for content in array_content:
code += _array_pattern(*content)
super(TranslationArraysNode, self).__init__(code, symbols_read=set(), symbols_defined=symbols_defined)
def __str__(self):
return "Variable PDF Access Translation Arrays"
def __repr__(self):
return "Variable PDF Access Translation Arrays"
def _array_pattern(dtype, name, content):
return f"const {str(dtype)} {name} [] = {{ {','.join(str(c) for c in content)} }}; \n"
import json
import six
import inspect
from pystencils.runhelper.db import PystencilsJsonEncoder
from pystencils.simp import SimplificationStrategy
from lbmpy import LBStencil, Method, CollisionSpace, SubgridScaleModel
from lbmpy.creationfunctions import LBMConfig, LBMOptimisation
from lbmpy.methods import CollisionSpaceInfo
from lbmpy.forcemodels import AbstractForceModel
from lbmpy.non_newtonian_models import CassonsParameters
class LbmpyJsonEncoder(PystencilsJsonEncoder):
def default(self, obj):
if isinstance(obj, (LBMConfig, LBMOptimisation, CollisionSpaceInfo, CassonsParameters)):
return obj.__dict__
if isinstance(obj, (LBStencil, Method, CollisionSpace, SubgridScaleModel)):
return obj.name
if isinstance(obj, AbstractForceModel):
return obj.__class__.__name__
if isinstance(obj, SimplificationStrategy):
return obj.__str__()
if inspect.isclass(obj):
return obj.__name__
return PystencilsJsonEncoder.default(self, obj)
class LbmpyJsonSerializer(object):
@classmethod
def serialize(cls, data):
if six.PY3:
if isinstance(data, bytes):
return json.dumps(data.decode('utf-8'), cls=LbmpyJsonEncoder, ensure_ascii=False).encode('utf-8')
else:
return json.dumps(data, cls=LbmpyJsonEncoder, ensure_ascii=False).encode('utf-8')
else:
return json.dumps(data, cls=LbmpyJsonEncoder, ensure_ascii=False).encode('utf-8')
@classmethod
def deserialize(cls, data):
if six.PY3:
return json.loads(data.decode('utf-8'))
else:
return json.loads(data.decode('utf-8'))
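
A serialisation round trip with the encoder defined above might look as follows (a sketch: the classes are used directly because their module path is not shown here, and it is assumed that the pystencils base encoder handles the remaining sympy objects)::

    import sympy as sp
    from lbmpy.creationfunctions import LBMConfig
    from lbmpy.enums import Stencil, Method
    from lbmpy.stencils import LBStencil

    lbm_config = LBMConfig(stencil=LBStencil(Stencil.D2Q9), method=Method.SRT,
                           relaxation_rate=sp.Symbol("omega"))

    serialized = LbmpyJsonSerializer.serialize({'lbm_config': lbm_config})  # bytes containing JSON
    restored = LbmpyJsonSerializer.deserialize(serialized)                  # plain dict of builtins
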
...@@ -159,7 +159,7 @@ class CollisionSpace(Enum):
"""
Cumulant space, meaning relaxation is applied to a set of linearly independent, polynomial cumulants of the
discrete population vector. Default for `lbmpy.enums.Method.CUMULANT` and `lbmpy.enums.Method.MONOMIAL_CUMULANT`.
Results in the creation of an instance of :class:`lbmpy.methods.cumulantbased.CumulantBasedLbMethod`.
"""
def compatible(self, method: Method):
...@@ -209,10 +209,6 @@ class ForceModel(Enum):
"""
See :class:`lbmpy.forcemodels.EDM`
"""
HE = auto()
"""
See :class:`lbmpy.forcemodels.He`
...@@ -221,3 +217,22 @@
"""
See :class:`lbmpy.forcemodels.ShanChen`
"""
CENTRALMOMENT = auto()
"""
See :class:`lbmpy.forcemodels.CentralMoment`
"""
class SubgridScaleModel(Enum):
"""
The SubgridScaleModel enumeration defines which subgrid-scale model (SGS) is used to perform
Large-Eddy-Simulations (LES).
"""
SMAGORINSKY = auto()
"""
See :func:`lbmpy.turbulence_models.add_smagorinsky_model`
"""
QR = auto()
"""
See :func:`lbmpy.turbulence_models.add_qr_model`
"""
File moved
import sympy as sp
import pystencils as ps
from pystencils.field import Field
def welford_assignments(field, mean_field, sum_of_squares_field=None, sum_of_cubes_field=None):
r"""
Creates the assignments for the kernel creation of Welford's algorithm
(https://en.wikipedia.org/wiki/Algorithms_for_calculating_variance#Welford's_online_algorithm).
This algorithm is an online algorithm for the mean and variance calculation of sample data, here implemented for
the execution on scalar or vector fields, e.g., velocity.
The calculation of the variance / the third-order central moments is optional and only performed if a field for
the sum of squares / sum of cubes is given.
The mean value is directly updated in the mean vector field.
The variance / covariance must be retrieved in a post-processing step. Let :math:`M_{2,n}` denote the value of the
sum of squares after the first :math:`n` samples. According to Welford, the biased sample variance
:math:`\sigma_n^2` and the unbiased sample variance :math:`s_n^2` are given by
.. math::
\sigma_n^2 = \frac{M_{2,n}}{n}
s_n^2 = \frac{M_{2,n}}{n-1},
respectively.
Likewise, to obtain the 3rd-order central moment(s) of the (vector) field, the sum of cubes field must be divided
by :math:`n` in a post-processing step.
"""
if isinstance(field, Field):
dim = field.values_per_cell()
welford_field = field.center
elif isinstance(field, Field.Access):
dim = field.field.values_per_cell()
welford_field = field
else:
raise ValueError("Vector field has to be a pystencils Field or Field.Access")
if isinstance(mean_field, Field):
welford_mean_field = mean_field.center
elif isinstance(mean_field, Field.Access):
welford_mean_field = mean_field
else:
raise ValueError("Mean vector field has to be a pystencils Field or Field.Access")
if sum_of_squares_field is None:
# sum of products will not be calculated, i.e., the covariance matrix is not retrievable
welford_sum_of_squares_field = None
else:
if isinstance(sum_of_squares_field, Field):
welford_sum_of_squares_field = sum_of_squares_field.center
elif isinstance(sum_of_squares_field, Field.Access):
welford_sum_of_squares_field = sum_of_squares_field
else:
raise ValueError("Sum of squares field has to be a pystencils Field or Field.Access")
assert welford_sum_of_squares_field.field.values_per_cell() == dim ** 2, \
(f"Sum of squares field does not have the right layout. "
f"Index dimension: {welford_sum_of_squares_field.field.values_per_cell()}, expected: {dim ** 2}")
if sum_of_cubes_field is None:
# sum of cubes will not be calculated, i.e., the 3rd-order central moments are not retrievable
welford_sum_of_cubes_field = None
else:
if isinstance(sum_of_cubes_field, Field):
welford_sum_of_cubes_field = sum_of_cubes_field.center
elif isinstance(sum_of_cubes_field, Field.Access):
welford_sum_of_cubes_field = sum_of_cubes_field
else:
raise ValueError("Sum of cubes field has to be a pystencils Field or Field.Access")
assert welford_sum_of_cubes_field.field.values_per_cell() == dim ** 3, \
(f"Sum of cubes field does not have the right layout. "
f"Index dimension: {welford_sum_of_cubes_field.field.values_per_cell()}, expected: {dim ** 3}")
# for the calculation of the third-order moments, the variance must also be calculated
if welford_sum_of_cubes_field is not None:
assert welford_sum_of_squares_field is not None
# actual assignments
counter = sp.Symbol('counter')
delta = sp.symbols(f"delta_:{dim}")
main_assignments = list()
for i in range(dim):
main_assignments.append(ps.Assignment(delta[i], welford_field.at_index(i) - welford_mean_field.at_index(i)))
main_assignments.append(ps.Assignment(welford_mean_field.at_index(i),
welford_mean_field.at_index(i) + delta[i] / counter))
if sum_of_squares_field is not None:
delta2 = sp.symbols(f"delta2_:{dim}")
for i in range(dim):
main_assignments.append(
ps.Assignment(delta2[i], welford_field.at_index(i) - welford_mean_field.at_index(i)))
for i in range(dim):
for j in range(dim):
idx = i * dim + j
main_assignments.append(ps.Assignment(
welford_sum_of_squares_field.at_index(idx),
welford_sum_of_squares_field.at_index(idx) + delta[i] * delta2[j]))
if sum_of_cubes_field is not None:
for i in range(dim):
for j in range(dim):
for k in range(dim):
idx = (i * dim + j) * dim + k
main_assignments.append(ps.Assignment(
welford_sum_of_cubes_field.at_index(idx),
welford_sum_of_cubes_field.at_index(idx)
- delta[k] / counter * welford_sum_of_squares_field(i * dim + j)
- delta[j] / counter * welford_sum_of_squares_field(k * dim + i)
- delta[i] / counter * welford_sum_of_squares_field(j * dim + k)
+ delta2[i] * (2 * delta[j] - delta2[j]) * delta[k]
))
return main_assignments
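
A usage sketch for ``welford_assignments`` (the import path is hypothetical, since the file path is not shown in this excerpt): build a kernel that updates the running mean and the sum of squares of a 2D velocity field; the ``counter`` symbol becomes a kernel parameter holding the 1-based sample index::

    import pystencils as ps
    from lbmpy.flow_statistics import welford_assignments  # hypothetical module path

    velocity, mean, sum_sq = ps.fields("velocity(2), mean(2), sum_sq(4): double[2D]")

    assignments = welford_assignments(field=velocity, mean_field=mean,
                                      sum_of_squares_field=sum_sq)
    kernel = ps.create_kernel(assignments).compile()

    # Call the kernel once per sample with counter=n; afterwards the biased variance is
    # sum_sq / n and the unbiased variance is sum_sq / (n - 1), as stated in the docstring.
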
...@@ -19,9 +19,7 @@ def add_fluctuations_to_collision_rule(collision_rule, temperature=None, amplitu
""""""
if not (temperature and not amplitudes) or (temperature and amplitudes):
raise ValueError("Fluctuating LBM: Pass either 'temperature' or 'amplitudes'.")
method = collision_rule.method
if not amplitudes:
amplitudes = fluctuation_amplitude_from_temperature(method, temperature, c_s_sq)
...@@ -44,9 +42,7 @@ def fluctuation_amplitude_from_temperature(method, temperature, c_s_sq=sp.Symbol
"""Produces amplitude equations according to (2.60) and (3.54) in Schiller08"""
normalization_factors = sp.matrix_multiply_elementwise(method.moment_matrix, method.moment_matrix) * \
sp.Matrix(method.weights)
density = method._cqc.density_symbol
mu = temperature * density / c_s_sq
return [sp.sqrt(mu * norm * (1 - (1 - rr) ** 2))
for norm, rr in zip(normalization_factors, method.relaxation_rates)]
...
...@@ -91,10 +91,12 @@ import sympy as sp
from pystencils.sympyextensions import scalar_product
from lbmpy.maxwellian_equilibrium import (
discrete_maxwellian_equilibrium, get_equilibrium_values_of_maxwell_boltzmann_function
)
from lbmpy.moments import (
MOMENT_SYMBOLS, exponent_tuple_sort_key, exponents_to_polynomial_representations,
extract_monomials, moment_sort_key, moment_matrix,
monomial_to_polynomial_transformation_matrix, set_up_shift_matrix)
FORCE_SYMBOLS = sp.symbols("F_x, F_y, F_z")
...@@ -129,6 +131,7 @@ class AbstractForceModel(abc.ABC):
# central moment space
self.has_moment_space_forcing = hasattr(self, "moment_space_forcing")
self.has_central_moment_space_forcing = hasattr(self, "central_moment_space_forcing")
self.has_symmetric_central_moment_forcing = hasattr(self, "symmetric_central_moment_forcing")
def __call__(self, lb_method):
r"""
...@@ -199,6 +202,35 @@ class Simple(AbstractForceModel):
moments = (lb_method.moment_matrix * sp.Matrix(self(lb_method))).expand()
return lb_method.shift_matrix * moments
def symmetric_central_moment_forcing(self, lb_method, central_moments):
u = lb_method.first_order_equilibrium_moment_symbols
cm_matrix = moment_matrix(central_moments, lb_method.stencil, shift_velocity=u)
before = sp.Matrix([0] * lb_method.stencil.Q)
after = cm_matrix @ sp.Matrix(self(lb_method))
return before, after
class CentralMoment(AbstractForceModel):
r"""
A force model that only shifts the macroscopic and equilibrium velocities and does not introduce a forcing term to
the collision process. Forcing is then applied through relaxation of the first central moments in the shifted
frame of reference (cf. https://doi.org/10.1016/j.camwa.2015.05.001).
"""
def __call__(self, lb_method):
raise ValueError("This force model can only be combined with the Cumulant collision model")
def symmetric_central_moment_forcing(self, lb_method, *_):
before = sp.Matrix([0, ] * lb_method.stencil.Q)
after = sp.Matrix([0, ] * lb_method.stencil.Q)
for i, sf in enumerate(self.symbolic_force_vector):
before[i + 1] = sp.Rational(1, 2) * sf
after[i + 1] = sp.Rational(1, 2) * sf
return before, after
def equilibrium_velocity_shift(self, density):
return default_velocity_shift(density, self.symbolic_force_vector)
class Luo(AbstractForceModel):
r"""Force model by Luo :cite:`luo1993lattice`.
...@@ -230,6 +262,13 @@ class Luo(AbstractForceModel):
moments = lb_method.moment_matrix * self(lb_method)
return (lb_method.shift_matrix * moments).expand()
def symmetric_central_moment_forcing(self, lb_method, central_moments):
u = lb_method.first_order_equilibrium_moment_symbols
cm_matrix = moment_matrix(central_moments, lb_method.stencil, shift_velocity=u)
before = sp.Matrix([0] * lb_method.stencil.Q)
after = (cm_matrix @ sp.Matrix(self(lb_method))).expand()
return before, after
class Guo(AbstractForceModel):
r"""
...@@ -267,6 +306,12 @@ class Guo(AbstractForceModel):
return central_moments
def symmetric_central_moment_forcing(self, lb_method, central_moments):
luo = Luo(self.symbolic_force_vector)
_, force_cms = luo.symmetric_central_moment_forcing(lb_method, central_moments)
force_cms = sp.Rational(1, 2) * force_cms
return force_cms, force_cms
def equilibrium_velocity_shift(self, density):
return default_velocity_shift(density, self.symbolic_force_vector)
...@@ -282,20 +327,20 @@ class He(AbstractForceModel):
.. math::
F (\mathbf{c})
= \frac{1}{\rho c_s^2}
\mathbf{F} \cdot ( \mathbf{c} - \mathbf{u} )
f^{\mathrm{eq}} (\mathbf{c})
the following analytical expression for the monomial raw moments of the force is found:
.. math::
m_{\alpha\beta\gamma}^{F, \mathrm{He}}
= \frac{1}{\rho c_s^2} \left(
F_x m^{\mathrm{eq}}_{\alpha+1,\beta,\gamma}
+ F_y m^{\mathrm{eq}}_{\alpha,\beta+1,\gamma}
+ F_z m^{\mathrm{eq}}_{\alpha,\beta,\gamma+1}
- m^{eq}_{\alpha\beta\gamma} ( \mathbf{F} \cdot \mathbf{u} )
\right)
"""
...@@ -304,14 +349,15 @@ class He(AbstractForceModel):
super(He, self).__init__(force)
def forcing_terms(self, lb_method):
u = sp.Matrix(lb_method.first_order_equilibrium_moment_symbols)
force = sp.Matrix(self.symbolic_force_vector)
cs_sq = sp.Rational(1, 3)  # squared speed of sound
# eq. 6.31 in the LB book by Krüger et al. shows that the equilibrium terms are divided by rho.
# This is done here with rho set to one to be consistent with compressible and incompressible force models
cqc = lb_method.conserved_quantity_computation
eq_terms = discrete_maxwellian_equilibrium(lb_method.stencil, rho=sp.Integer(1),
u=cqc.velocity_symbols, c_s_sq=sp.Rational(1, 3))
result = []
for direction, eq in zip(lb_method.stencil, eq_terms):
...@@ -321,14 +367,14 @@ class He(AbstractForceModel):
return sp.Matrix(result)
def continuous_force_raw_moments(self, lb_method, moments=None):
rho = lb_method.zeroth_order_equilibrium_moment_symbol
u = lb_method.first_order_equilibrium_moment_symbols
dim = lb_method.dim
c_s_sq = sp.Rational(1, 3)
force = sp.Matrix(self.symbolic_force_vector)
moment_polynomials = lb_method.moments if moments is None else moments
moment_exponents = sorted(extract_monomials(moment_polynomials), key=exponent_tuple_sort_key)
moment_monomials = exponents_to_polynomial_representations(moment_exponents)
extended_monomials = set()
...@@ -351,10 +397,12 @@ class He(AbstractForceModel):
polynomial_force_moments = mono_to_poly_matrix * sp.Matrix(monomial_force_moments)
return polynomial_force_moments
def continuous_force_central_moments(self, lb_method, moments=None):
if moments is None:
moments = lb_method.moments
raw_moments = self.continuous_force_raw_moments(lb_method, moments=moments)
u = lb_method.first_order_equilibrium_moment_symbols
shift_matrix = set_up_shift_matrix(moments, lb_method.stencil, velocity_symbols=u)
return (shift_matrix * raw_moments).expand()
def __call__(self, lb_method):
...@@ -381,6 +429,11 @@ class He(AbstractForceModel):
central_moments = (correction_factor * central_moments).expand()
return central_moments
def symmetric_central_moment_forcing(self, lb_method, central_moments):
central_moments = exponents_to_polynomial_representations(central_moments)
force_cms = sp.Rational(1, 2) * self.continuous_force_central_moments(lb_method, moments=central_moments)
return force_cms, force_cms
def equilibrium_velocity_shift(self, density):
return default_velocity_shift(density, self.symbolic_force_vector)
...@@ -442,12 +495,16 @@ class EDM(AbstractForceModel):
def __call__(self, lb_method):
cqc = lb_method.conserved_quantity_computation
reference_density = cqc.density_symbol if cqc.compressible else 1
rho = cqc.density_symbol
delta_rho = cqc.density_deviation_symbol
rho_0 = cqc.background_density
u = cqc.velocity_symbols
equilibrium_terms = lb_method.get_equilibrium_terms()
equilibrium_terms = equilibrium_terms.subs({delta_rho: rho - rho_0})
shifted_u = (u_i + f_i / reference_density for u_i, f_i in zip(u, self._force))
shifted_eq = equilibrium_terms.subs({u_i: su_i for u_i, su_i in zip(u, shifted_u)})
return shifted_eq - equilibrium_terms
...
File moved
File moved