On Tue, 11 Sep 2018 15:28:35 +0200
Dirk Eibach wrote:
> I have a grabber device on the PCIe bus that is able to transfer image
> data to other PCIe devices.
>
> I want to set up a Wayland client that reserves a buffer in GPU
> memory. Then the grabber could DMA to the buffer address. After
> finishing the transfer, the client could flip the buffer.
>
> Is there already a concept for this in Weston? What might be a good
> starting point?

Hi Dirk,
That would not involve Weston in any special way at all. Buffer
allocation is usually done in the client, any way the client wants. To
ensure the buffer can be used by the compositor before you fill it with
data, you would export your buffer as a dmabuf and use the
zwp_linux_dmabuf_v1 extension to send the buffer details to the Wayland
compositor. If that succeeds, all is good and you can fill the buffer.
After that, you have a wl_buffer you can attach to a wl_surface, and
the compositor will just process it, even putting it on a DRM plane,
bypassing compositing, if possible.
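
For illustration, the client side of that could look roughly like this
(a minimal sketch; 'dmabuf' is the bound zwp_linux_dmabuf_v1 global,
and fd, width, height, stride, the DRM fourcc 'format' and the 64-bit
'modifier' come from wherever you allocated the buffer):

  static void created(void *data,
                      struct zwp_linux_buffer_params_v1 *params,
                      struct wl_buffer *buffer)
  {
          /* The compositor accepted the buffer: safe to fill it, then
           * wl_surface_attach() + wl_surface_commit() it. */
  }

  static void failed(void *data,
                     struct zwp_linux_buffer_params_v1 *params)
  {
          /* The compositor cannot use this buffer; try another
           * format/modifier combination or allocation path. */
  }

  static const struct zwp_linux_buffer_params_v1_listener params_listener = {
          .created = created,
          .failed = failed,
  };

  struct zwp_linux_buffer_params_v1 *params =
          zwp_linux_dmabuf_v1_create_params(dmabuf);

  zwp_linux_buffer_params_v1_add(params, fd, 0 /* plane */, 0 /* offset */,
                                 stride, modifier >> 32,
                                 modifier & 0xffffffff);
  zwp_linux_buffer_params_v1_add_listener(params, &params_listener, NULL);
  zwp_linux_buffer_params_v1_create(params, width, height, format, 0);

The asynchronous create() is what gives you the success/failure answer
before you commit to filling the buffer; there is also create_immed()
if you are willing to skip that check.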
If you want to process the buffer contents with the GPU inside your
client instead of showing it directly on screen, then you would not do
anything at all with Wayland. Once you have the dmabuf, you can try to
import it as an EGLImage and turn that into a GL texture.
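
A sketch of that import path, assuming the EGL_EXT_image_dma_buf_import
and EGL_EXT_image_dma_buf_import_modifiers extensions are available
(the extension entry points need to be fetched with eglGetProcAddress()
first):

  EGLint attribs[] = {
          EGL_WIDTH, width,
          EGL_HEIGHT, height,
          EGL_LINUX_DRM_FOURCC_EXT, format,
          EGL_DMA_BUF_PLANE0_FD_EXT, fd,
          EGL_DMA_BUF_PLANE0_OFFSET_EXT, 0,
          EGL_DMA_BUF_PLANE0_PITCH_EXT, stride,
          EGL_DMA_BUF_PLANE0_MODIFIER_LO_EXT, (EGLint)(modifier & 0xffffffff),
          EGL_DMA_BUF_PLANE0_MODIFIER_HI_EXT, (EGLint)(modifier >> 32),
          EGL_NONE
  };

  /* No EGLContext is needed; the dmabuf fully describes the image. */
  EGLImageKHR image = eglCreateImageKHR(egl_display, EGL_NO_CONTEXT,
                                        EGL_LINUX_DMA_BUF_EXT, NULL,
                                        attribs);

  GLuint tex;
  glGenTextures(1, &tex);
  glBindTexture(GL_TEXTURE_2D, tex);
  /* Binds the EGLImage as the texture's storage, zero-copy. */
  glEGLImageTargetTexture2DOES(GL_TEXTURE_2D, image);

Depending on the format and driver, you may need GL_TEXTURE_EXTERNAL_OES
instead of GL_TEXTURE_2D.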
How to do the non-Wayland things in the client is a good question.
Presumably your grabber card has a Linux kernel driver. You could have
the grabber device/driver allocate the buffer and export it as a dmabuf
(which requires implementation in the driver), but then there is a risk
that the buffer is suboptimal or even unusable for the GPU and/or the
display.
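
For reference, the exporting side in the grabber driver would be
roughly this shape (a sketch only; 'my_dmabuf_ops' and 'my_buf' are
placeholders, and the real work is in implementing the dma_buf_ops
callbacks):

  /* In the grabber driver, e.g. in an ioctl handler: */
  DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
  struct dma_buf *dbuf;
  int fd;

  exp_info.ops = &my_dmabuf_ops; /* map_dma_buf, unmap_dma_buf, release, mmap */
  exp_info.size = buffer_size;
  exp_info.flags = O_RDWR;
  exp_info.priv = my_buf;        /* the driver's own buffer object */

  dbuf = dma_buf_export(&exp_info);
  if (IS_ERR(dbuf))
          return PTR_ERR(dbuf);

  fd = dma_buf_fd(dbuf, O_CLOEXEC);  /* hand this fd back to userspace */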
To allocate on a GPU device, you would need to go through EGL or GBM,
export the buffer as a dmabuf, import the dmabuf into your grabber
driver (again needing implementation) and hope the grabber
device/driver is able to write to that buffer.
gbm_bo_create_with_modifiers() might be the best bet.
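
Something along these lines, as a sketch (the render node path and the
single LINEAR modifier are placeholders; ideally the modifier list
would come from the compositor's zwp_linux_dmabuf_v1 advertisements):

  #include <fcntl.h>
  #include <gbm.h>
  #include <drm_fourcc.h>

  int drm_fd = open("/dev/dri/renderD128", O_RDWR | O_CLOEXEC);
  struct gbm_device *gbm = gbm_create_device(drm_fd);

  uint64_t modifiers[] = { DRM_FORMAT_MOD_LINEAR };  /* placeholder list */
  struct gbm_bo *bo =
          gbm_bo_create_with_modifiers(gbm, width, height,
                                       DRM_FORMAT_XRGB8888, modifiers, 1);

  /* These are what the grabber driver, EGL and the Wayland compositor
   * all need to agree on: */
  int dmabuf_fd = gbm_bo_get_fd(bo);
  uint32_t stride = gbm_bo_get_stride(bo);
  uint64_t modifier = gbm_bo_get_modifier(bo);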
Anyway, the gist is that the buffer handle in userspace is always a
dmabuf file descriptor, and the grabber card driver needs to be
prepared to use those. Physical addresses in userspace are a no-go.
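
To make the "prepared to use those" part concrete, the import side in
the grabber driver is roughly this (a sketch with error handling
omitted; 'grabber_dev' is the driver's struct device):

  struct dma_buf *dbuf = dma_buf_get(fd);        /* fd from userspace */
  struct dma_buf_attachment *att = dma_buf_attach(dbuf, grabber_dev);
  struct sg_table *sgt = dma_buf_map_attachment(att, DMA_FROM_DEVICE);

  /* Program the grabber's DMA engine with the scatterlist in 'sgt';
   * the device writes the image data into the buffer. */

  dma_buf_unmap_attachment(att, sgt, DMA_FROM_DEVICE);
  dma_buf_detach(dbuf, att);
  dma_buf_put(dbuf);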
Thanks,
pq