Remove pystencils.GPU_DEVICE
SerialDataHandling now performs the device selection upon construction. It can also be constructed with an explicit device number to deviate from the default selection.

For ParallelDataHandling, the assignment of devices to MPI ranks should be handled by waLBerla by calling cudaSetDevice(). It has selectDeviceBasedOnMpiRank for this purpose. I am not sure it actually calls it -- I think it should be called from MPIManager::initializeMPI. Right now everything probably just ends up on the first GPU.

gpu_indexing_params needs an explicit device number; I don't think any kind of default is reasonable.

lbmpy's test_gpu_block_size_limiting.py::test_gpu_block_size_limiting fails since !335 (merged), but that is due to an error in the test, which lbmpy!146 (merged) fixes.
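To illustrate the construction-time selection described above, here is a minimal plain-Python sketch. The class name and parameters (device_number, available_devices) are hypothetical stand-ins for illustration, not the actual pystencils API:

```python
# Hypothetical sketch of construction-time GPU device selection; class and
# parameter names are illustrative, not the real pystencils interface.
class DataHandlingSketch:
    def __init__(self, device_number=None, available_devices=(0,)):
        if device_number is not None:
            # An explicit device number overrides the default selection.
            if device_number not in available_devices:
                raise ValueError(f"device {device_number} is not available")
            self.device_number = device_number
        else:
            # Default: pick the first available device at construction time.
            self.device_number = available_devices[0]

dh_default = DataHandlingSketch(available_devices=(0, 1))
dh_explicit = DataHandlingSketch(device_number=1, available_devices=(0, 1))
```

The point of doing this in the constructor is that the chosen device is fixed for the lifetime of the data handling object instead of being read from a mutable module-level global like the removed pystencils.GPU_DEVICE.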
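The per-rank assignment that selectDeviceBasedOnMpiRank is meant to perform can be sketched in plain Python as a round-robin mapping; the function below is a hypothetical illustration of that scheme (in waLBerla the actual effect comes from calling cudaSetDevice on the C++ side):

```python
def device_for_rank(mpi_rank, num_gpus_per_node):
    # Round-robin assignment of MPI ranks to the GPUs of a node, the usual
    # scheme behind helpers like selectDeviceBasedOnMpiRank. Without such a
    # call, every rank ends up on device 0 (the first GPU).
    if num_gpus_per_node <= 0:
        raise ValueError("no GPUs available on this node")
    return mpi_rank % num_gpus_per_node

# Four ranks on a node with two GPUs:
assignments = [device_for_rank(r, 2) for r in range(4)]
# -> [0, 1, 0, 1]
```

This only spreads ranks evenly when ranks are numbered contiguously per node, which is why the call belongs in a central place such as MPIManager::initializeMPI rather than being left to each application.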