This "feature" of Python can lead to bugs in PyHEADTAIL on the GPU when developing new modules, need to keep this in mind for the future (all current modules in PyHEADTAIL work fine normally!):
Basis: `a * b` calls the operator `__mul__` of `a`, while `b * a` calls the operator `__mul__` of `b`.
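Since both operands here are instances of the *same* type, the left operand's `__mul__` always handles the product and the right operand's `__rmul__` is never consulted. A minimal toy class (no PyCUDA required, purely illustrative) makes the dispatch visible:

```python
class GA:
    """Toy stand-in for pycuda.GPUArray, recording who handles the product."""
    def __init__(self, n):
        self.shape = (n,)

    def __mul__(self, other):
        # Both operands are the same type, so Python always lands here for
        # the LEFT operand; the right operand's __rmul__ is never consulted
        # (__rmul__ only runs if __mul__ returns NotImplemented).
        return f"handled by the length-{self.shape[0]} operand"


a, b = GA(1000), GA(1)

print(a * b)  # handled by the length-1000 operand
print(b * a)  # handled by the length-1 operand
```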
Take `a` to be a `pycuda.GPUArray` of some length (typically e.g. the `beam.x` array on the GPU) and `b` to be a scalar, i.e. a single number stored in a `pycuda.GPUArray` of length 1.
In PyHEADTAIL on the GPU, `a * b` now yields an array of shape `a.shape`, while `b * a` yields a length-1 output, i.e. all contents of `a` have been swallowed...
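The asymmetry arises because the elementwise `__mul__` sizes its output like `self`, the left operand. A toy model of that behaviour (illustrative only, not PyCUDA's actual kernel code):

```python
class ToyGPUArray:
    """Toy model: output shape follows the LEFT operand, as in GPUArray."""
    def __init__(self, data):
        self.data = list(data)
        self.shape = (len(self.data),)

    def __mul__(self, other):
        # Output is allocated with self's shape; a length-1 operand is
        # broadcast by reading its single element repeatedly.
        out = []
        for i in range(self.shape[0]):
            o = other.data[i] if len(other.data) > 1 else other.data[0]
            out.append(self.data[i] * o)
        return ToyGPUArray(out)


a = ToyGPUArray([1.0, 2.0, 3.0])   # like beam.x on the GPU
b = ToyGPUArray([2.0])             # scalar stored as a length-1 array

print((a * b).shape)  # (3,) -- full length, as in patched PyHEADTAIL
print((b * a).shape)  # (1,) -- only one element survives
```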
PyHEADTAIL monkeypatches PyCUDA to deal with `a * b` (kudos to @Stefannn), but `b * a` will still lead to mostly unwanted behaviour. The monkeypatching lives in `PyHEADTAIL/PyHEADTAIL/general/contextmanager.py` (line 99 in cbd0976):

```python
# patch the GPUArray to be able to cope with gpuarrays of size 1 as ops
```
==> we should cleanly fix this at some point! It's been there since the very beginning...
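One possible clean fix (a sketch under assumed semantics, not PyHEADTAIL's actual patch) is to broadcast symmetrically: the result takes the larger of the two shapes, so operand order no longer matters. In toy form:

```python
class ToyGPUArray:
    """Toy array whose __mul__ broadcasts length-1 operands symmetrically."""
    def __init__(self, data):
        self.data = list(data)
        self.shape = (len(self.data),)

    def __mul__(self, other):
        # Result takes the LARGER shape, so a * b and b * a agree even
        # when one operand has length 1 (NumPy-style broadcasting rule).
        n = max(self.shape[0], other.shape[0])
        if self.shape[0] not in (1, n) or other.shape[0] not in (1, n):
            raise ValueError("shapes are not broadcastable")
        get = lambda arr, i: arr.data[i if arr.shape[0] > 1 else 0]
        return ToyGPUArray(get(self, i) * get(other, i) for i in range(n))


a = ToyGPUArray([1.0, 2.0, 3.0])
b = ToyGPUArray([2.0])

print((a * b).shape, (b * a).shape)  # both (3,)
```

The same rule would have to be applied in the patched `__mul__` (and the other binary operators) so that a length-1 `GPUArray` on *either* side behaves like a scalar.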
This "feature" of Python can lead to bugs in PyHEADTAIL on the GPU when developing new modules, need to keep this in mind for the future (all current modules in PyHEADTAIL work fine normally!):
Basis: e.g.
a * bcalls the operator__mul__ofa. Vice versa,b * acalls the operator__mul__ofb.Take
ato be apycuda.GPUArrayof some length (typically e.g. thebeam.xarray on the GPU) andbto be a scalar / single number stored in apycuda.GPUArrayof length 1.In PyHEADTAIL on the GPU,
a * bnow provides an array of lengtha.shapewhileb * aleads to a length 1 output. I.e. all contents ofahave been swallowed...PyHEADTAIL monkeypatches
PyCUDAto deal witha * b(kudos to @Stefannn ) butb * awill lead to mostly unwanted behaviour. (The monkey patching:PyHEADTAIL/PyHEADTAIL/general/contextmanager.py
Line 99 in cbd0976
==> we should cleanly fix this at some point! It's there since the very beginning...