Opened merge request !27 "Fix clang-format off directive for older LLVM versions" at pycodegen / pystencils-sfg
Accepted merge request !26 "Fixes to postprocessing: Remove unused code, test vector extraction, unify treatment of scalar fields" at pycodegen / pystencils-sfg
- b1c47558 · Merge branch 'fhennig/postprocessing-fixes' into 'master'
- ... and 1 more commit. Compare 020347e7...b1c47558
- b23716bd · make SupportsFieldExtraction and SupportsVectorExtraction runtime-c...
Opened merge request !26 "Fixes to postprocessing: Remove unused code, test vector extraction, unify treatment of scalar fields" at pycodegen / pystencils-sfg
Accepted merge request !24 "Extend Support for CUDA and HIP kernel invocations" at pycodegen / pystencils-sfg
- 020347e7 · Merge branch 'fhennig/cuda-invoke' into 'master'
- ... and 1 more commit. Compare 8949bedb...020347e7
- dc1a3935 · added missing default value
Commented on merge request !24 "Extend Support for CUDA and HIP kernel invocations" at pycodegen / pystencils-sfg:
"As far as testing is concerned, the CudaKernels and HipKernels test cases already run all available launch configurations at least once."
Commented on merge request !24 "Extend Support for CUDA and HIP kernel invocations" at pycodegen / pystencils-sfg:
"Very good point. I pushed a patch wrapping the entire logic into a Builder class."
- 8b597b98 · clean up implementation of gpu_invoke using a builder
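The refactor described in that comment replaces one long function with several local helpers by a builder object that accumulates the pieces of a GPU kernel launch step by step. A minimal sketch of that pattern, with entirely hypothetical names (`GpuInvocationBuilder`, `with_grid_size`, the default block size) that are not the actual pystencils-sfg API:

```python
# Illustrative builder-pattern sketch, NOT the pystencils-sfg implementation.
# Each setter configures one aspect of the launch; build() renders the
# final CUDA/HIP-style invocation string.

class GpuInvocationBuilder:
    """Collects the pieces of a GPU kernel launch and renders the call."""

    def __init__(self, kernel_name: str):
        self._kernel_name = kernel_name
        self._grid_size = None
        self._block_size = (256, 1, 1)  # assumed default block size
        self._stream = None

    def with_grid_size(self, grid_size):
        self._grid_size = grid_size
        return self

    def with_block_size(self, block_size):
        self._block_size = block_size
        return self

    def with_stream(self, stream):
        self._stream = stream
        return self

    def build(self) -> str:
        # Validation lives in one place instead of scattered local functions.
        if self._grid_size is None:
            raise ValueError("grid size must be set before building")
        launch_args = f"{self._grid_size}, {self._block_size}"
        if self._stream is not None:
            launch_args += f", 0, {self._stream}"
        return f"{self._kernel_name}<<<{launch_args}>>>(...);"


invocation = (
    GpuInvocationBuilder("myKernel")
    .with_grid_size((16, 16, 1))
    .with_stream("stream0")
    .build()
)
print(invocation)
```

Compared to one 130-line function, each configuration step becomes a small, individually testable method, and the chained calls read as a description of the launch being assembled.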
Commented on merge request !24 "Extend Support for CUDA and HIP kernel invocations" at pycodegen / pystencils-sfg:
"This gpu_invoke function is currently around 130 lines long and defines 3 local functions. From my perspective this is a bit complex and it took me..."
- cefe0bdd · fix outdated deprecation notice
- a8403669 · fix default block size for dynamic launch grids
- 9fda3a06 · fix default block size for dynamic launch grids