This is part 2 and covers the actual server side (and a little bit about the generic EGL implementation) of the solution. The first part can be read here. The third and last blog post will revolve around the client side solution and how you can use it today, as well as future work. There are a -lot- of links in this blog; please take a look at them to fully understand what is being explained.
This work was and is done as part of my job as Chief Research Engineer at Jolla, which develops Sailfish OS, a mobile-optimized operating system that has the flexibility, ubiquity and stability of the Linux core with a cutting edge user experience built with the renowned Qt platform.
The views and opinions expressed in this blog series are my own and not that of my employer.
The aim is to have the proof-of-concept code documented and published under an "LGPLv2.1 only" license, for the benefit of many different communities and projects (Sailfish, OpenWebOS, Qt Project, KDE, GNOME, Hawaii, Nemo Mobile, Mer Core based projects, EFL, etc).
This work is done in the hope that it will attract more contribution and collaboration, bring this solution and Wayland in general into wider use across the open source ecosystem, and let these projects use a large selection of reference device designs for their OSes.
Rendering with OpenGL ES 2.0 to a screen with Android APIs
In Android, when SurfaceFlinger wants to render to the screen, it utilizes a class named FramebufferNativeWindow, which it passes to eglCreateWindowSurface. As I mentioned in my previous post, on Android, when you use eglCreateWindowSurface you utilize a type/'class' named ANativeWindow, and FramebufferNativeWindow implements this type. This means SurfaceFlinger gets buffers from FramebufferNativeWindow, renders into them through the OpenGL ES 2.0 implementation, and queues them back through the same FramebufferNativeWindow to be shown on the screen.
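To make the pattern concrete, here is a minimal sketch (not actual SurfaceFlinger or libhybris code; the function name is just illustrative) of handing an ANativeWindow implementation to EGL:

    #include <EGL/egl.h>
    #include <system/window.h>   // ANativeWindow

    // Any ANativeWindow implementation works here: Android's own
    // FramebufferNativeWindow, or libhybris' "fbdev" window.
    EGLSurface create_screen_surface(EGLDisplay dpy, EGLConfig cfg,
                                     ANativeWindow *win)
    {
        // From here on the OpenGL ES stack drives the window's
        // dequeueBuffer/queueBuffer hooks to obtain and return buffers.
        return eglCreateWindowSurface(dpy, cfg,
                                      (EGLNativeWindowType)win, NULL);
    }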
But what happens under the hood? I'll try to explain with libhybris' "fbdev" windowing system as an example.
We're back to ANativeWindow - what libhybris' "fbdev" windowing system does is, in practice, provide an implementation of ANativeWindow.
When an OpenGL ES 2.0 implementation wants a buffer to render into, it will call the dequeueBuffer method of an ANativeWindow. This usually happens upon surface creation, or when you have done eglSwapBuffers and it would like a fresh buffer to render into.
You may have heard of fancy things like 'vsync', and you know that you have to follow vsync signaling to avoid things like tearing. When you do not have any buffers available (some might be waiting to be posted to the framebuffer), you will need to block inside your dequeueBuffer implementation and wait for a non-busy buffer to become available - don't just return NULL. Use pthread conditions and be CPU-friendly. This also makes sure you will block in eglSwapBuffers().
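A minimal sketch of such a blocking dequeue, assuming a simple fixed-size ring of buffer slots guarded by a mutex and condition variable (the names and structure here are illustrative, not libhybris' actual code):

    #include <pthread.h>

    struct BufferSlot {
        void *buffer;   // would be an ANativeWindowBuffer* in a real window
        bool  busy;     // owned by the GL stack or queued for post()
    };

    struct FbWindowState {
        pthread_mutex_t mutex;
        pthread_cond_t  cond;
        BufferSlot      slots[4];   // count is dictated by the GL driver
    };

    static void *dequeue_blocking(FbWindowState *w)
    {
        pthread_mutex_lock(&w->mutex);
        for (;;) {
            for (BufferSlot &s : w->slots) {
                if (!s.busy) {
                    s.busy = true;                  // hand it to the GL stack
                    pthread_mutex_unlock(&w->mutex);
                    return s.buffer;
                }
            }
            // All buffers are in flight; sleep until queueBuffer/post()
            // signals the condition, instead of spinning or returning NULL.
            pthread_cond_wait(&w->cond, &w->mutex);
        }
    }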
A quick note for implementors of ANativeWindow: many OpenGL ES drivers are very temperamental. When a driver tells you that it wants to set your buffer count to 4, it means it wants 4 buffers - and only those 4 buffers - for the lifetime of the surface, until usage or format changes. Mess this up and it will happily crash on you, and these drivers do not come with debug symbols.
When you want to allocate graphical buffers, you naturally need gralloc to do so. gralloc is a module accessible through Android's libhardware API - in practice, a shared object that libhardware dlopen()s; see /system/lib/hw/ for examples of such modules (gps, lights, sensors, etc).
When loading gralloc you naturally get the interface of the gralloc module itself, but when you initialize gralloc, you get an allocation device interface with which you can allocate and free buffers, specifying parameters such as width, height, usage and format. Usage is important since we'd like to allocate buffers for use with the framebuffer - so when we allocate a buffer, we allocate with usage 'usage | GRALLOC_USAGE_HW_FB'.
The alloc() call returns an integer value indicating success or failure, and gives back a native handle in the provided memory location (read my previous blog post for an explanation of what this is) as well as the stride of the buffer.
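Roughly, the allocation path looks like this (a hedged sketch using the stock libhardware/gralloc API, with error handling trimmed):

    #include <hardware/hardware.h>
    #include <hardware/gralloc.h>

    static buffer_handle_t allocate_fb_buffer(int width, int height,
                                              int format, int *stride_out)
    {
        const hw_module_t *module = NULL;
        alloc_device_t *alloc_dev = NULL;

        // Loads the gralloc shared object from /system/lib/hw/ behind the scenes.
        hw_get_module(GRALLOC_HARDWARE_MODULE_ID, &module);
        gralloc_open(module, &alloc_dev);

        buffer_handle_t handle = NULL;
        // GRALLOC_USAGE_HW_FB is what lets us later post() this buffer.
        alloc_dev->alloc(alloc_dev, width, height, format,
                         GRALLOC_USAGE_HW_RENDER | GRALLOC_USAGE_HW_FB,
                         &handle, stride_out);
        return handle;
    }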
We then wrap the handle and related information in an ANativeWindowBuffer structure and pass it back to the caller. Please note two members of this structure: incRef and decRef - they are very important. You will need to implement reference counting, and increase/decrease the count to match your own references to the buffer. When the reference count reaches 0, the buffer should destruct.
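An illustrative refcounting wrapper, assuming the stock <system/window.h> ANativeWindowBuffer layout (libhybris has its own BaseNativeWindowBuffer helper; this only shows what incRef/decRef are expected to do):

    #include <system/window.h>   // ANativeWindowBuffer, android_native_base_t
    #include <atomic>

    // Derives from the stock struct so it can be handed to EGL/GL as-is;
    // incRef/decRef are plain C function pointers inside 'common'.
    struct RefcountedBuffer : public ANativeWindowBuffer {
        std::atomic<int> refcount{0};

        RefcountedBuffer(buffer_handle_t h, int w, int ht,
                         int s, int fmt, int use)
        {
            width = w; height = ht; stride = s;
            format = fmt; usage = use; handle = h;
            common.incRef = inc_ref;
            common.decRef = dec_ref;
        }

        static void inc_ref(android_native_base_t *base) {
            from_base(base)->refcount++;
        }
        static void dec_ref(android_native_base_t *base) {
            RefcountedBuffer *self = from_base(base);
            if (--self->refcount == 0)
                delete self;   // destruct when the last reference is dropped
        }

        static RefcountedBuffer *from_base(android_native_base_t *base) {
            // 'common' is the first member, so the base pointer is also the
            // ANativeWindowBuffer pointer.
            return static_cast<RefcountedBuffer *>(
                reinterpret_cast<ANativeWindowBuffer *>(base));
        }
    };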
Eventually we will then get the buffer back from the caller in queueBuffer -- but how do we now send it to the framebuffer to be displayed?
In the initialization of our framebuffer window, we should also have opened the framebuffer with the libhardware API. It lives in the same hw_module_t as gralloc. The framebuffer interface includes handy information such as the width, height, format and dpi, plus a few methods to actually utilize the framebuffer. The most important one for us is post(). This allows us to flip an actual buffer to the screen using its buffer handle, provided it has the same width, height and format as the framebuffer and was allocated with appropriate usage (framebuffer usage). This call will on occasion block.
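In sketch form (again using the stock libhardware API, error handling omitted):

    #include <hardware/hardware.h>
    #include <hardware/gralloc.h>
    #include <hardware/fb.h>

    static framebuffer_device_t *open_fb(void)
    {
        const hw_module_t *module = NULL;
        framebuffer_device_t *fb = NULL;

        // Same module as gralloc; libhardware dlopen()s it from /system/lib/hw/.
        hw_get_module(GRALLOC_HARDWARE_MODULE_ID, &module);
        framebuffer_open(module, &fb);

        // fb->width, fb->height, fb->format, fb->xdpi/ydpi are now available.
        return fb;
    }

    static void show_buffer(framebuffer_device_t *fb, buffer_handle_t handle)
    {
        // Flip this buffer to the screen; the call may block for a while.
        fb->post(fb, handle);
    }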
We have to be careful not to hand the current front buffer back to the caller in dequeueBuffer until another buffer has replaced it on the screen, or we may see flickering.
A note to users of libhybris: some Android adaptations implement a custom framebuffer interface that requires extra implementation work to get sane, blocking posting of frames. Check your FramebufferNativeWindow.cpp for this. This does not seem to be pervasive, but I've encountered it on the HP Touchpad with CyanogenMod/ICS.
Server-side Wayland enablement
The Wayland protocol has two sides, server and client. But unlike X, there is no "Wayland server". The protocol communication for each side is implemented in libwayland-server and libwayland-client respectively. When implementing a compositor, you utilize the libwayland-server API to create server sockets, do the communication, etc.
But how does the EGL stack get connected to a Wayland server instance, when the EGLDisplay the stack is associated with probably isn't a Wayland display? (Note: it may be, in nested compositors - i.e. a Wayland compositor running as a client of another Wayland compositor.) That's where the next topic comes in:
EGL extensions - EGL_WL_bind_wayland_display
In order to connect your EGL stack to a Wayland display, you need to bind to one - you do this with eglBindWaylandDisplayWL(EGLDisplay, struct wl_display *) from the EGL_WL_bind_wayland_display extension. In libhybris, we provide this extension when libhybris has been configured with --enable-wayland, and it is available in most windowing systems (we provide an environment variable, EGL_PLATFORM, to select between windowing systems). Since the extension is not tied to just the Wayland windowing system, it is possible to do nested Wayland compositors.
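From a compositor's point of view the binding looks roughly like this (a minimal sketch; the function pointer typedef is local here, since the entry point comes from an extension and has to be fetched with eglGetProcAddress):

    #include <EGL/egl.h>
    #include <wayland-server.h>

    typedef EGLBoolean (*BindWaylandDisplayFn)(EGLDisplay, struct wl_display *);

    static void bind_egl_to_wayland(EGLDisplay egl_dpy, struct wl_display *wl_dpy)
    {
        BindWaylandDisplayFn bind = (BindWaylandDisplayFn)
            eglGetProcAddress("eglBindWaylandDisplayWL");
        if (bind)
            bind(egl_dpy, wl_dpy);   // now clients' wl_buffers can reach EGL
    }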
But what happens in libhybris when you bind to a Wayland display? We call the server_wlegl_create method in server_wlegl.cpp. What this does is add a global object to the Wayland display, with a certain interface - but where is this interface defined? As it has to be shared between both client and server, it is specified in an XML file that is converted by a tool called 'wayland-scanner' into .c/.h files, which are then linked into your client or server. We then implement the actual server-side interface in our code.
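In sketch form, the registration looks something like the following. I'm using the current libwayland-server API (wl_global_create/wl_resource_create) and a simplified signature; the interface symbol comes from code generated by wayland-scanner, and the real libhybris code of course hooks up actual request handlers:

    // Generation step (run at build time):
    //   wayland-scanner server-header < android-wlegl.xml > android-wlegl-server.h
    //   wayland-scanner code          < android-wlegl.xml > android-wlegl.c
    #include <wayland-server.h>

    extern const struct wl_interface android_wlegl_interface;  // generated symbol

    static void bind_android_wlegl(struct wl_client *client, void *data,
                                   uint32_t version, uint32_t id)
    {
        struct wl_resource *res =
            wl_resource_create(client, &android_wlegl_interface, version, id);
        // wl_resource_set_implementation(res, &impl, data, NULL) would hook up
        // the server-side request handlers (create_handle, create_buffer, ...).
        (void)res;
    }

    static void server_wlegl_create(struct wl_display *display)  // signature simplified
    {
        // Advertise the global so clients can bind to it through the registry.
        wl_global_create(display, &android_wlegl_interface, 1,
                         NULL, bind_android_wlegl);
    }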
Creation of buffers
When a client wants to create a buffer, it first shares its native handle with the compositor through the Wayland protocol (utilizing its support for fd passing), which creates an Android native handle object on the server side, and then it actually asks for the buffer to be created (note: the buffer itself was already allocated on the client side; we're just sharing it with the compositor and letting it know the details).
What happens on the libhybris side when such a handle is created is that we construct an Android native_handle_t on the server side, with the correct fd and integer information. We then map it into the compositor's address space with registerBuffer, so the buffer becomes available to EGL and related stacks.
Once we have the handle in place, we can create an Android native buffer - just like we made an ANativeWindowBuffer on the client side in the framebuffer/generic window scenario, we make one representing the remote buffer, using the reconstructed handle. Finally we construct a Wayland object referencing this Android buffer, increase the reference count of the Android buffer, and pass the buffer back to the Wayland client.
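The server-side reconstruction step looks roughly like this (an illustrative sketch; the incoming fd/int arrays and the function name are hypothetical, and in libhybris this lives in the server_wlegl code). The resulting handle is then wrapped in a refcounted ANativeWindowBuffer like the one sketched earlier:

    #include <cutils/native_handle.h>
    #include <hardware/gralloc.h>

    static native_handle_t *rebuild_handle(const gralloc_module_t *gralloc,
                                           const int *fds, int num_fds,
                                           const int *ints, int num_ints)
    {
        native_handle_t *handle = native_handle_create(num_fds, num_ints);
        for (int i = 0; i < num_fds; i++)
            handle->data[i] = fds[i];              // fds arrived via Wayland fd passing
        for (int i = 0; i < num_ints; i++)
            handle->data[num_fds + i] = ints[i];   // opaque driver metadata

        // Map the client's buffer into the compositor's address space.
        gralloc->registerBuffer(gralloc, handle);
        return handle;
    }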
When we want to destroy the buffer again (well, when the reference count reaches 0), we unregister the buffer and close the native handle.
Utilizing a Wayland-Android buffer as part of your scenegraph
When a compositor wants to utilize a Wayland buffer in general, it uses eglCreateImageKHR with the EGL_WAYLAND_BUFFER_WL target, passing the (server-side) wl_buffer. This means that the compositor does not have to worry about the actual implementation of the wl_buffer behind the scenes.
In our case, our wl_buffer is the one we described above - so we know it actually encapsulates an ANativeWindowBuffer with a handle/width/height etc. The knowledgeable reader might realize that eglCreateImageKHR in the Android EGL implementation does not support EGL_WAYLAND_BUFFER_WL. It does, however, support EGL_NATIVE_BUFFER_ANDROID.
The way this is handled is that we wrap eglCreateImageKHR, and when we see EGL_WAYLAND_BUFFER_WL, we call the real eglCreateImageKHR with EGL_NATIVE_BUFFER_ANDROID and the ANativeWindowBuffer. That way the Wayland client's buffer becomes part of our OpenGL scenegraph.
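Conceptually the wrapper looks like this (a sketch, not libhybris' actual code: server_wlegl_buffer is a simplified stand-in for the real type, the lookup from the wl_buffer to our wrapper is glossed over, and real_eglCreateImageKHR is assumed to have been resolved from the Android EGL library at initialization):

    #include <EGL/egl.h>
    #include <EGL/eglext.h>

    #ifndef EGL_WAYLAND_BUFFER_WL
    #define EGL_WAYLAND_BUFFER_WL     0x31D5   // from EGL_WL_bind_wayland_display
    #endif
    #ifndef EGL_NATIVE_BUFFER_ANDROID
    #define EGL_NATIVE_BUFFER_ANDROID 0x3140   // from EGL_ANDROID_image_native_buffer
    #endif

    struct ANativeWindowBuffer;
    struct server_wlegl_buffer {                 // simplified stand-in
        ANativeWindowBuffer *native;             // the Android buffer behind the wl_buffer
    };

    static PFNEGLCREATEIMAGEKHRPROC real_eglCreateImageKHR;  // resolved at init time

    EGLImageKHR my_eglCreateImageKHR(EGLDisplay dpy, EGLContext ctx, EGLenum target,
                                     EGLClientBuffer buffer, const EGLint *attribs)
    {
        if (target == EGL_WAYLAND_BUFFER_WL) {
            // 'buffer' is the compositor's wl_buffer; behind it sits our
            // Android-backed buffer, which the vendor EGL does understand.
            server_wlegl_buffer *b = reinterpret_cast<server_wlegl_buffer *>(buffer);
            return real_eglCreateImageKHR(dpy, EGL_NO_CONTEXT,
                                          EGL_NATIVE_BUFFER_ANDROID,
                                          reinterpret_cast<EGLClientBuffer>(b->native),
                                          attribs);
        }
        return real_eglCreateImageKHR(dpy, ctx, target, buffer, attribs);
    }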
This way we can also easily implement methods such as eglQueryWaylandBufferWL, as we know the attributes of the Android buffer.
An implementor's note: the destruction of a buffer is first triggered by a wl_buffer_destroy request coming from the client side. Remember the reference counting here, and don't just delete the buffer.
Conclusion
Thanks for reading this (rather technical) second blog post, the third one should follow quite soon. The code is already published and continually developed in http://github.com/libhybris/libhybris but it's not easy to approach or use for general users or developers right now.
The final post will describe how you can use this solution together with QtCompositor on top of Mer Core, as well as describe how the Wayland client side works to tie all this together. You can already study, comment on, or flame the client side implementation. But for the explanation and a description of what is missing, you'll have to wait for the next one :)
Feel free to join us in #libhybris on irc.freenode.net to discuss and contribute to this work.