Sunday, November 3, 2013

A SailfishOS Co-creators Community in 2014?

co-created by Filip Kłębczyk and Carsten Munk

Introduction

There is a challenge standing before Jolla: how to create a well-functioning community that supports the company's efforts and helps build an ecosystem around Sailfish OS and Jolla's products. Until now Jolla has been very successful in creating a large community of fans, especially in Finland - in other words, the potential future customers of Jolla devices.

Creating demand for the products is necessary, but nowadays successful mobile products also require developers willing to write native software for them and to make other kinds of community contributions.

That is needed especially because Jolla is a young company with limited resources. In order to compete with big mobile-device market players like Samsung or Apple, Jolla has to utilise the community's potential as much as possible.

To make that happen, certain steps must be taken to attract people - not only those previously connected with the Maemo and MeeGo platforms, but also completely new people and those coming from other platforms and backgrounds.

Building a community of co-creators in which everyone feels needed and respected is a challenging task, and this document addresses possible ways Jolla could approach it.

Analysis of community programs/interaction in other companies/projects

Maemo/MeeGo (Nokia, Nokia and Intel)

Nokia's method of building the Maemo community was very effective. Providing a whole portal - maemo.org, with mailing lists for discussion and tools to host application projects - was a good bet. It was among the first to provide this kind of infrastructure for mobile device application developers.

A large focus was placed on open source application developers and less on participation in OS development, though that later improved through support for community projects such as community-supported updates to the device. Commercial software developers were handled separately through Forum Nokia, an arrangement that was later partially unified.

Nokia also organized events like the Maemo Summits and actively participated in existing open source conferences, which were important for bringing the community together outside the Internet as well. Additionally, developer programs offering early access and developer devices boosted interest in the platform.

Of course, Nokia didn't avoid mistakes, such as making bad decisions in areas where the community was already working well by itself. Old habits of corporate behaviour were hard to kill, and so many initiatives were not properly balanced against the needs of a growing community.

One of the bad things that happened, in the community's opinion, was that the Maemo platform was abandoned in favour of the dramatically different MeeGo platform, and Nokia's support ended too early. On the positive side, the community is still supporting the Maemo platform, and its content and software were moved to community server infrastructure.

The MeeGo(.com) project put a lot of emphasis on working in the open, upstreaming, inclusiveness and meritocracy. The main problem was realising all of the above in practice and changing old patterns of working, with the result that MeeGo was perceived very negatively by the community, since in practice it was a regression from the previous state at maemo.org.

The introduction of the new project also came with new infrastructure, so there were in effect two places for communication - the new forums.meego.com and the old talk.maemo.org - which divided the community. A full migration of the maemo.org community to the MeeGo project never happened.

BlackBerry 10 (BlackBerry, previously RIM)

BlackBerry was very successful in attracting developers from different communities in 2012. The company organized several big events but, most importantly, a lot of small one-day events around the world. It hired many evangelists, often connected with former communities (like MeeGo or WebOS). A lot of BlackBerry's attention went to attracting people who had previously developed Qt applications for Nokia platforms (Harmattan, Symbian) to develop for BB10.

The main form of persuasion was giving developer devices to those who had already written some mobile apps in the past and were willing to write or port an app to BB10. On top of that, to further motivate those who actually wrote or ported an app, BlackBerry gave them a limited developer edition of the final device in exchange for the developer device.

Additionally, developers - including Android developers - were encouraged to port their applications and games during virtual events called portathons, with money offered as a reward for each ported app. One of the problems BlackBerry had at some point was trouble handling all the people it had attracted with different promises given by different evangelists, which caused some bad PR on social media like Twitter. Some people also pointed out that the rules of the Built for BlackBerry program weren't very clear.

BlackBerry was also successful in cooperating with universities through its BlackBerry Academy program. It provided devices for use in laboratories and student projects, as well as materials like ready-to-use slides and lab scenarios. Besides that, BlackBerry used this channel to publicize events like the coding camps it was organizing.

Tizen (Samsung and Intel)

The Tizen project website went public on September 27th, 2011, with an announcement the same day at meego.com that the community should now move to MeeGo's successor, Tizen.

Even though the website had a similar layout and offered the community similar communication channels to meego.com, the big move didn't happen. The main problem was the lack of communication and announcements - the Tizen website was mostly dead until the release of the source code preview in January 2012, so for around four months.

Another problem was that the project was divided between Samsung and Intel, with little cooperation between the two companies visible in public. There was also not enough focus on the open source aspects. What is more, at the beginning Samsung and Intel weren't much interested in attending third-party conferences and promoting the platform; instead they held a major event in San Francisco where they distributed developer devices to attendees. Later, after the conference, they also provided some developer devices to those who applied through their website.

The situation improved when Tizen 2.0 was released - the website was renewed and some more successful actions to attract interest were taken. Tizen is now more visible at events, and while the San Francisco conference is still a major event, it is no longer the only one. Importantly, the Tizen project aims to build a community of both app and platform developers, splitting the website into sections for each group.

It is also worth mentioning that Samsung and Intel didn't offer any program for cooperation with universities, although some small, not centrally coordinated steps are being taken in that direction.

Ubuntu Touch (Canonical)

Canonical started its efforts by announcing the plan to bring Ubuntu to different mobile devices. Although the first mobile Ubuntu "version" was mainly a set of mockups, an important part of that move was providing an image for popular Android devices like the Nexus 7, so a lot of people could actually try it and get a feel for what Ubuntu on mobile devices would be like.

This was all backed by a professionally prepared website, where developers were encouraged to sign up for more news and even volunteer to help make the first apps. A form was provided where interested developers could leave their e-mail address and information about their skills in C++, JavaScript and QML.

As a result, Canonical formed community teams working on particular Ubuntu Touch apps. Those teams met publicly on IRC, and at least in the beginning the proceedings of those meetings were available on the Ubuntu Touch wiki. All of this was made easier by the fact that Ubuntu is a strong community brand.

One thing that is certainly on the good side is the regular blogging by Mark Shuttleworth. Canonical also ran the Ubuntu Edge crowdfunding campaign, which was successful from a marketing point of view: although it failed to gather the required sum, it spread the word about the platform effectively both on the Internet and beyond, reaching popular media and attracting new people to the project.

The main advantage of the Ubuntu Touch community is that it is an extension of the already well-known Ubuntu community, as Ubuntu Touch is a version of Ubuntu for new mobile devices.

Until last year, Canonical organized Ubuntu Developer Summits, events held in different places around the world (mainly the U.S. and Europe) and targeted at the community of developers working on Ubuntu. These have now been replaced by more frequent virtual developer summits happening every quarter. That makes participating more accessible for people who couldn't afford to travel to physical summits, but the downside is that the community misses the events where real face-to-face contact was possible. What is more, not everyone likes or feels comfortable in video chats.

Firefox OS

The community around Firefox OS is relatively new, as it is a young platform pushed by Mozilla. The project was initially called Boot to Gecko and was announced on the mozilla.dev.platform mailing list back in 2011.

Since the beginning, Mozilla has been quite active in the public space, organizing its own events and taking part in third-party events, so the project has good visibility.

Mozilla and its partners also take advantage of a presence in places like hackerspaces. Furthermore, Mozilla seems to delegate a lot of activities to already established local groups, or actively seeks out new ones. For example, it has aided people creating community websites by providing training and including them in national Firefox OS launch teams.

On the platform developer side, there is a gap in terms of openness: the project's tight relations with the ODMs making Firefox OS devices mean that hardware adaptation licensing and availability make entry difficult for platform developers.

How could Jolla improve its relations with the community and expand it?

Understanding the needs and expectations the community has of the company is the key to maintaining good relations with community members and, above all, to expanding the community by attracting new people. A healthy community is one that grows, actively participates and takes on new challenges.

Communication and openness

More contact with the community wherever possible (not only open code but also open relations with the community).

For now, the news sources are mostly third-party pages, and it is hard to follow what is new and what is officially confirmed rather than just rumor or gossip. In other words, social networks like Twitter are clearly not enough for communicating news.

A good solution would be a dedicated page or blog for interaction (with RSS for those who only want to read what's new), where at least announcements and new features would be communicated.

Announcements like new versions of the SailfishOS SDK with changelogs already happen on a mailing list, which is of course good practice, but finding those posts can be time-consuming for anyone who is not a regular subscriber.

A public SailfishOS bug tracker, with bugs reported and fixed collaboratively in the open, is a must.

For now, a devel mailing list is used for that purpose, which is only a workaround and not a solution to the problem - especially since some of the reported bugs don't get fixed or even noticed.

The current community around SailfishOS and the projects on which it is based (Mer, Nemo) consists mostly of people previously involved in the Maemo and MeeGo projects.

This situation is quite natural and understandable, as SailfishOS is a successor of those projects. On the other hand, it should also be an alarm signal, as it shows that the community is not growing and attracting new people. The solution would be putting more focus on diversity and attracting contributors from outside the former Nokia circles and communities - for example, people from the XDA Developers forum.

Furthermore, actions focused on getting women involved are needed, as a male-only community is not only badly perceived but also a less creative place.

Stimulating other companies to get involved/cooperate is also a must

One idea would be to create an ecosystem of specialized companies focused on different areas, instead of one main company doing everything from start to finish. To attract such cooperation in the areas that matter to companies and individuals, some steps must be taken: spreading the word, showing the benefits of cooperating on Mer/Nemo, and perhaps some financial encouragement such as bounty programs.

Gamifying community efforts

Gamification ideas on community portal

* A point-based system for measuring certain activities, where the contributor is a player
* Point decay over time - points gained by contributors disappear when a contributor has no activity (this motivates people to stay active and prevents a glass wall forming between veterans and newcomers)
* A contributor profile similar to a character profile in RPG games: depending on the type of contributions, a person could earn certain character classes and gain levels in areas like code, design, documentation and community. Character classes could include, for example, code wizard and design master, and to make the system more flexible, many mixed classes would be available
* Community challenges/sprints every month or two - application-porting marathons, bug-hunting marathons, etc.
* Big penalty points for offensive, arrogant or discouraging actions against newbies/newcomers (with a report-abuse option)
* Rewarding people with opportunities such as in-person meetings with the creators of Sailfish OS or the Jolla device. Or maybe some special training for such people?
* Community metrics to show how the community is developing

WARNING:
All the gamification ideas should be well balanced and implemented with caution, so as not to divide the community, and should reward all types of contribution (code, documentation, community building etc.). In other words, every person active in the community should feel that he/she is an important part of it and that the things he/she is doing matter. Evaluating how the gamification elements work and tuning them carefully will certainly be needed here.

Developer documentation

* In the form of a wiki where non-Jolla employees can make moderated edits (moderators should include not only Jolla staff but also trusted community contributors)
* Two versions of the documentation - one with user comments/feedback at the bottom (or perhaps also at the side) of each page and one without. A good example of this approach is the PostgreSQL documentation. This could also be implemented as a layer you can enable, showing possible user corrections/patches. The bottom part could also contain links to real code examples (apps) and use cases of certain functions/components.
* Cheat sheets as an additional supplement to the documentation (especially for beginners, but also for advanced developers)
* Quick migration guides for developers coming from other platforms such as Android, iOS, Windows Phone etc.
* For Android developers: a page about the UX and other benefits an application gains if they create a native Sailfish app instead of just pushing an Android app to the Jolla store
* Images, diagrams and other graphical elements (maybe even comics or funny stories) to make the documentation more attractive and help readers understand its important or complex parts more deeply
* The ability to submit bugs against the documentation - currently this is not possible, and many of the bugs mentioned on the sailfish-devel@ mailing list seem not to get fixed; even when they are, there is usually no information about it
* Openness to suggestions from developers and the community on what the documentation should look like - e.g. a short survey about its form
* Screencasts and tutorials would also be a plus, as well as one place with links to all videos and other materials from conference talks about SailfishOS application development and related topics
* Information on the best way to do certain things on the platform - storing data, storing settings, accessing hardware features etc.
* Besides official documentation and other materials on the website, regular monitoring of services like Stack Overflow and answering questions there

Financing/sponsoring challenges:
* Aid projects and groups of people collaborating, rather than single-person projects. In other words, motivate people to collaborate with others on interesting/needed projects.
* Discounts on devices/accessories for the most active people, but no free devices
* Giving or lending devices to places like open device labs and hackerspaces, where they would be accessible on-site to the many people who would like to test their apps on Sailfish (or test self-made Other Halves)

Collaboration with groups/organizations/universities:
* Collaboration with local Linux user groups in order to spread the word, involve people and get help organizing events (finding venues) etc.
* Collaboration with hackerspaces, places where makers and co-creators usually gather
* Summer of Code-like programs, but tied to projects during the semester (with a mentor from Jolla or the Nemo community for each project)
* Devices can be provided or lent to partnering universities, but there should be a report every semester on how the devices are actually used, what kinds of projects students are creating, etc.
* A group of trusted advisors should be created in each country to help Jolla get a picture of which groups/communities exist and which are active - in other words, local coordinators/ambassadors.

Events for community/partners:
* Hold one big annual event that moves between countries (a different place each year), involving local communities in organizing it
* Support local conferences and do mini-events at them (Apple and Google mainly support and run only their own events, which makes it difficult for other conference organizers to invite representatives of those companies - an opening Jolla can use). Supporting local events is much cheaper than running your own and is a great occasion to reach new people and communities.

These are the ideas we'd like to present. We encourage you to give us feedback and your improvement proposals in comments.

Filip (fk_lx) & Carsten (Stskeeps)

Wednesday, May 8, 2013

Wayland utilizing Android GPU drivers on glibc based systems, Part 2


In this blog series, I am presenting a solution that I've developed that enables the use of Wayland on top of Android hardware adaptations, specifically the GPU drivers, but without actually requiring the OS to be Bionic based.

This is part 2 and will cover the actual server side of the solution (and a little bit about the generic EGL implementation). The first part can be read here. The third and last blog post will cover the client-side solution and how you can use it today, as well as future work. There are a -lot- of links in this post; please take a look at them to fully understand what is being explained.

This work was and is done as part of my job as Chief Research Engineer at Jolla, which develops Sailfish OS, a mobile-optimized operating system that has the flexibility, ubiquity and stability of the Linux core with a cutting-edge user experience built with the renowned Qt platform.

The views and opinions expressed in this blog series are my own and not that of my employer.

The aim is to document the proof-of-concept code and publish it under a "LGPLv2.1 only" license, for the benefit of many different communities and projects (Sailfish, OpenWebOS, Qt Project, KDE, GNOME, Hawaii, Nemo Mobile, Mer Core based projects, EFL, etc).

This work is done in the hope that it will attract more contribution and collaboration, bringing this solution and Wayland in general into wider use across the open source ecosystem and letting OSes make use of a large selection of reference device designs.

Rendering with OpenGL ES 2.0 to a screen with Android APIs

In Android, when SurfaceFlinger wants to render to the screen, it uses a class named FramebufferNativeWindow, which it passes to eglCreateWindowSurface. As I mentioned in my previous post, on Android, eglCreateWindowSurface takes a type/'class' named ANativeWindow, and FramebufferNativeWindow implements this type. This means SurfaceFlinger gets buffers through FramebufferNativeWindow, renders into them within the OpenGL ES 2.0 implementation, and queues them to be shown on the screen through the same FramebufferNativeWindow.

But what happens under the hood? I'll try to explain with libhybris' "fbdev" windowing system as an example.

We're back to ANativeWindow - what libhybris' "fbdev" windowing system does is, in practice, provide an implementation of ANativeWindow.

When an OpenGL ES 2.0 implementation wants a buffer to render into, it will call the dequeueBuffer method of an ANativeWindow. This usually happens upon surface creation, or after eglSwapBuffers when a fresh buffer is needed to render into.

You may have heard of fancy things like 'vsync', and you know that you have to follow vsync signaling to avoid problems like tearing. When you do not have any buffers available (some might be waiting to be posted to the framebuffer), you will need to block inside your dequeueBuffer implementation and wait for a non-busy buffer - don't just return NULL. Use pthread conditions and be CPU-friendly. This also makes sure you will block in eglSwapBuffers().
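The blocking dequeue described above can be sketched with a toy buffer pool. This is a minimal illustration in plain C with pthreads; names like pool_dequeue and POOL_SIZE are my own, not the real ANativeWindow method signatures:

```c
#include <pthread.h>
#include <stdbool.h>

#define POOL_SIZE 4

typedef struct {
    bool busy[POOL_SIZE];
    pthread_mutex_t lock;
    pthread_cond_t  buffer_freed;
} buffer_pool;

void pool_init(buffer_pool *p) {
    for (int i = 0; i < POOL_SIZE; i++) p->busy[i] = false;
    pthread_mutex_init(&p->lock, NULL);
    pthread_cond_init(&p->buffer_freed, NULL);
}

/* dequeue: block until a non-busy buffer exists, then hand it out.
 * Returning NULL here would upset many GPU drivers; wait instead. */
int pool_dequeue(buffer_pool *p) {
    pthread_mutex_lock(&p->lock);
    for (;;) {
        for (int i = 0; i < POOL_SIZE; i++) {
            if (!p->busy[i]) {
                p->busy[i] = true;
                pthread_mutex_unlock(&p->lock);
                return i;
            }
        }
        /* All buffers queued/posted: sleep until one is released. */
        pthread_cond_wait(&p->buffer_freed, &p->lock);
    }
}

/* Called when the display is done with a buffer (e.g. after post()). */
void pool_release(buffer_pool *p, int idx) {
    pthread_mutex_lock(&p->lock);
    p->busy[idx] = false;
    pthread_cond_signal(&p->buffer_freed);
    pthread_mutex_unlock(&p->lock);
}
```

In a real windowing system, pool_release would fire once post() has flipped a different buffer to the front; blocking on the condition variable instead of spinning is what keeps dequeueBuffer CPU-friendly.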

A quick note for implementors of ANativeWindow: many OpenGL ES drivers are very temperamental. When a driver relays that it wants to set your buffer count to 4 buffers, it means it wants 4 buffers - and only those 4 buffers - for the lifetime of the surface, until usage or format changes. Mess this up and it will happily crash on you, and these drivers do not come with debug symbols.

When you want to allocate graphical buffers, you naturally need gralloc to do so. Gralloc is a module accessible through Android's libhardware API - in practice, a shared object that libhardware dlopen()s; see /system/lib/hw/ for examples of these modules (gps, lights, sensors, etc).

Loading gralloc gives you the interface of the gralloc module itself, but initializing it gives you an allocation device interface, with which you can allocate and free buffers by specifying parameters such as width, height, usage and format. Usage is important, since we'd like to allocate buffers for use with the framebuffer - so when we allocate a buffer, we allocate with usage 'usage | GRALLOC_USAGE_HW_FB'.
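The shape of that allocation call can be sketched like this. The types here are simplified stand-ins for the real ones in Android's <hardware/gralloc.h> (which carries more fields and a matching free()), and toy_alloc stands in for the vendor driver; only the usage-flag OR and the alloc() calling convention are the point:

```c
#include <stdlib.h>

/* Values as in Android's gralloc.h; structs below are trimmed stand-ins. */
#define GRALLOC_USAGE_HW_RENDER 0x0200
#define GRALLOC_USAGE_HW_FB     0x1000

typedef struct { int numFds; int numInts; } native_handle_t;

typedef struct alloc_device {
    /* Returns 0 on success; fills in *handle and *stride. */
    int (*alloc)(struct alloc_device *dev, int w, int h, int format,
                 int usage, native_handle_t **handle, int *stride);
} alloc_device_t;

/* Toy implementation standing in for the vendor driver. */
static int toy_alloc(struct alloc_device *dev, int w, int h, int format,
                     int usage, native_handle_t **handle, int *stride) {
    (void)dev; (void)h; (void)format;
    if (!(usage & GRALLOC_USAGE_HW_FB))
        return -1;                  /* only hand out FB-capable buffers */
    *handle = calloc(1, sizeof(native_handle_t));
    *stride = (w + 31) & ~31;       /* drivers often align the stride */
    return 0;
}

/* Allocate a buffer the framebuffer can actually scan out. */
int alloc_fb_buffer(alloc_device_t *dev, int w, int h, int format,
                    int usage, native_handle_t **handle, int *stride) {
    return dev->alloc(dev, w, h, format, usage | GRALLOC_USAGE_HW_FB,
                      handle, stride);
}
```

Note that the returned stride is a driver decision, not simply the width - which is exactly why alloc() reports it back to you.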

The alloc() call returns an integer value indicating success, a native handle in the provided memory location (read my previous blog post for an explanation of what this is), and the stride of the buffer.

We then wrap the handle and related information in an ANativeWindowBuffer structure and pass it back to the caller. Please note two things in this structure: incRef and decRef. They are very important - you will need to implement reference counting and increase/decrease the count to match your own references to the buffer. When the reference count reaches 0, the buffer should destruct.
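The incRef/decRef contract boils down to the following sketch. The struct is a toy (the real ANativeWindowBuffer carries width/height/stride/handle, and its callbacks take an android_native_base_t pointer); a destroyed flag stands in for freeing the buffer so the behaviour is observable:

```c
/* Minimal sketch of ANativeWindowBuffer-style reference counting. */
typedef struct refcounted_buffer {
    int refcount;
    int destroyed;               /* stand-in: real code frees resources */
    void (*incRef)(struct refcounted_buffer *);
    void (*decRef)(struct refcounted_buffer *);
} refcounted_buffer;

static void buffer_incRef(refcounted_buffer *b) { b->refcount++; }

static void buffer_decRef(refcounted_buffer *b) {
    if (--b->refcount == 0)
        b->destroyed = 1;        /* destruct once nobody references it */
}

void buffer_init(refcounted_buffer *b) {
    b->refcount = 0;
    b->destroyed = 0;
    b->incRef = buffer_incRef;
    b->decRef = buffer_decRef;
    b->incRef(b);                /* the creator holds the first reference */
}
```

The key point is that the driver will call incRef/decRef on its own schedule, so your side must hold its own reference for as long as it keeps the pointer around.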

Eventually we get the buffer back from the caller in queueBuffer - but how do we now send it to the framebuffer to be displayed?

In the initialization of our framebuffer window, we should also have opened the framebuffer through the libhardware API; it lives in the same hw_module_t as gralloc. The framebuffer interface includes handy information such as width, height, format and dpi, plus a few methods to actually drive the framebuffer. The most important one for us is post(). It flips an actual buffer to the screen - via the buffer handle - provided the buffer has the same width, height and format as the framebuffer and was allocated with the appropriate (framebuffer) usage. This call will on occasion block.
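The queueBuffer-to-post() hand-off can be sketched as follows. Again these are stand-in types (the real framebuffer_device_t from libhardware also exposes width/height/format/fps and more), and toy_post merely records what a real device would scan out:

```c
/* Trimmed stand-ins for the libhardware framebuffer device. */
typedef struct { int id; } buffer_handle;

typedef struct framebuffer_device {
    int last_posted;                    /* toy state: real hw scans out */
    int (*post)(struct framebuffer_device *dev, buffer_handle *buf);
} framebuffer_device_t;

static int toy_post(struct framebuffer_device *dev, buffer_handle *buf) {
    dev->last_posted = buf->id;         /* flip this buffer to the screen */
    return 0;                           /* the real call may block here */
}

/* queueBuffer ends up here: hand the rendered buffer to the display. */
int present_buffer(framebuffer_device_t *dev, buffer_handle *buf) {
    return dev->post(dev, buf);
}
```

Because post() may block, and because the previously posted buffer is still on screen until this call replaces it, the dequeue side must not recycle the front buffer too early - which is exactly the flickering caveat below.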

We have to be careful not to hand the current front buffer back to the caller in dequeueBuffer until we have replaced it with another one at the front of the screen, or we may see flickering.

A note to users of libhybris: there may be some Android adaptations that implement a custom framebuffer interface requiring extra implementation to achieve sane posting of frames that blocks. Check your FramebufferNativeWindow.cpp for this. This does not seem to be pervasive but I've encountered it on HP Touchpad with CyanogenMod/ICS.

Server-side Wayland enablement

The Wayland protocol has two sides: server and client. But unlike X, there is no "Wayland server" binary. The protocol communication for each side is implemented in libwayland-server and libwayland-client respectively. When implementing a compositor, you use the libwayland-server API to create server sockets, handle communication, and so on.

But how does the EGL stack get connected to a Wayland server instance when the EGLDisplay the stack is connected to probably isn't a Wayland display? (It may be, in nested compositors - i.e. a Wayland compositor running as a client of another Wayland compositor.) That's where the next topic comes in:

EGL extensions - EGL_WL_bind_wayland_display

In order to connect your EGL stack to a Wayland display, you need to bind to one. You do this with eglBindWaylandDisplay(EGLDisplay, struct wl_display *) from the EGL_WL_bind_wayland_display extension. In libhybris, we provide this extension when libhybris has been configured with --enable-wayland, and it is available in most windowing systems (we provide an environment variable, EGL_PLATFORM, to select between windowing systems). Since the extension is not tied only to the Wayland windowing system, nested Wayland compositors are possible.

But what happens in libhybris when you bind to a Wayland display? We call the server_wlegl_create method in server_wlegl.cpp. This adds a global object with a certain interface to the Wayland protocol. But where is this interface defined? Since it has to be shared between client and server, it is specified in an XML file that a tool called 'wayland-scanner' converts into .c files, which are then linked into your client or server. We then implement the actual server-side interfaces in our code.
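As an illustration of the format, such a protocol extension is declared roughly like this. This is a hypothetical, trimmed-down example of my own, not the actual libhybris protocol file, and the interface and argument names are invented:

```xml
<protocol name="example_wlegl">
  <interface name="example_wlegl" version="1">
    <!-- client asks the compositor-side EGL glue to create a buffer -->
    <request name="create_buffer">
      <arg name="id" type="new_id" interface="wl_buffer"/>
      <arg name="width" type="int"/>
      <arg name="height" type="int"/>
      <arg name="stride" type="int"/>
      <arg name="format" type="int"/>
    </request>
  </interface>
</protocol>
```

Running wayland-scanner over such a file in its client-header, server-header and code modes produces the headers and the marshalling .c file that the two sides link in.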

Creation of buffers

When a client requests to create a buffer, it first has an Android native handle object created on the server side, sharing the client's native handle through the Wayland protocol (utilizing its support for fd passing), and then actually asks for the buffer to be created. (Note: the buffer was actually created on the client side; we're just sharing it with the compositor and letting it know the details.)

On the libhybris side, when a handle arrives, we reconstruct an Android native_handle_t on the server side with the correct fd and integer information. We then map it with registerBuffer into the compositor's address space, so the buffer is available to EGL and related stacks.

Once we have the handle in place, we can create an Android native buffer: just as we made an ANativeWindowBuffer on the client side in the framebuffer/generic window scenario, we make one representing the remote buffer with the reconstructed handle. Finally, we construct a Wayland object referencing this Android buffer, increase the Android buffer's reference count, and pass the buffer back to the Wayland client.

When we want to destroy the buffer again (that is, when the reference count reaches 0), we unregister the buffer and close the native handle.

Utilizing a Wayland-Android buffer as part of your scenegraph

When a compositor wants to use a Wayland buffer in general, it calls eglCreateImageKHR with the EGL_WAYLAND_BUFFER_WL target, passing the (server-side) wl_buffer. This means the compositor does not have to worry about the actual implementation of the wl_buffer behind the scenes.

In our case, the wl_buffer is the one we described above - so we know it actually encapsulates an ANativeWindowBuffer with a handle, width, height and so on. The knowledgeable reader might realize that eglCreateImageKHR in the Android EGL implementation does not support EGL_WAYLAND_BUFFER_WL. It does, however, support EGL_NATIVE_BUFFER_ANDROID.

The way this is handled is that we wrap eglCreateImageKHR, and when we see EGL_WAYLAND_BUFFER_WL, we call the real eglCreateImageKHR with EGL_NATIVE_BUFFER_ANDROID and the ANativeWindowBuffer. And so we get the Wayland client's buffer as part of our OpenGL scenegraph.
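The wrapping can be sketched like this. The structs and the driver stub are stand-ins of my own (the real call takes an EGLDisplay, EGLContext and attribute list, and the real server_wlegl_buffer lives in libhybris); only the target translation is the point:

```c
/* Enum values as in the EGL extension headers. */
#define EGL_WAYLAND_BUFFER_WL     0x31D5
#define EGL_NATIVE_BUFFER_ANDROID 0x3140

typedef struct { int width, height; } ANativeWindowBuffer;

/* Server-side wl_buffer wrapper holding the Android buffer. */
typedef struct { ANativeWindowBuffer *native; } server_wlegl_buffer;

/* Stub standing in for the Android EGL driver's entry point: it only
 * understands EGL_NATIVE_BUFFER_ANDROID. */
static void *driver_eglCreateImageKHR(int target, void *buffer) {
    if (target != EGL_NATIVE_BUFFER_ANDROID)
        return 0;
    return buffer;   /* pretend the EGLImage is the buffer itself */
}

/* The wrapper: translate a Wayland buffer into what the driver expects. */
void *hybris_eglCreateImageKHR(int target, void *buffer) {
    if (target == EGL_WAYLAND_BUFFER_WL) {
        server_wlegl_buffer *wl = buffer;
        return driver_eglCreateImageKHR(EGL_NATIVE_BUFFER_ANDROID,
                                        wl->native);
    }
    return driver_eglCreateImageKHR(target, buffer);
}
```

Since the wrapper knows the ANativeWindowBuffer behind each wl_buffer, answering attribute queries (as in eglQueryWaylandBufferWL below) is just a struct field read.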

This way we can also easily implement methods such as eglQueryWaylandBufferWL, as we know the attributes of the Android buffer.

An implementor's note: the destructor of a buffer is first triggered by a wl_buffer_destroy coming from the client side. Remember the reference counting - don't just delete the buffer.

Conclusion

Thanks for reading this (rather technical) second blog post; the third one should follow quite soon. The code is already published and continually developed at http://github.com/libhybris/libhybris, but it's not yet easy to approach or use for general users or developers.

The final post will describe how you can use this solution together with QtCompositor on top of Mer Core, as well as how the Wayland client side works to tie all this together. You can already study, comment on, or flame the client-side implementation, but for the explanation and a description of what is missing, you'll have to wait for the next one :)

Feel free to join us in #libhybris on irc.freenode.net to discuss and contribute to this work.

Thursday, April 11, 2013

Wayland utilizing Android GPU drivers on glibc based systems, Part 1


In this blog series, I will be presenting a solution that I've developed that enables the use of Wayland on top of Android hardware adaptations, specifically the GPU drivers, but without actually requiring the OS to be Bionic based.  This is part 1.

This work was and is done as part of my job as Chief Research Engineer at Jolla, which develops Sailfish OS, a mobile-optimized operating system that has the flexibility, ubiquity and stability of the Linux core with a cutting-edge user experience built with the renowned Qt platform.

The views and opinions expressed in this blog series are my own and not that of my employer.

At the end of the series, the aim is to have finished cleaning up the proof of concept code and published it under a "LGPLv2.1 only" license, for the benefit of many different communities and projects (Sailfish, OpenWebOS, Qt Project, KDE, GNOME, Hawaii, Nemo Mobile, Mer Core based projects, EFL, etc).

QML compositor, libhybris, Wayland on top of Qualcomm GPU Android drivers

The blog series seeks to explain and document the solution and many aspects about non-Android systems, Wayland and Android GPU drivers that are not widely known.



(Ignore the tearing, old demo video)

This work is done in the hope that it will attract more contribution and collaboration, bringing this solution and Wayland in general into wider use across the open source ecosystem and letting OSes make use of a large selection of reference device designs.

Why am I not releasing code today? Because that code alone doesn't foster collaboration. There's more to involving contributors into development - such as explaining reasons why things are the way they are. It's also my own way to make sure I document the code and clean it up, to make it easier for people to get involved.

Now, let's get to it..

The grim situation in mobile hardware adaptation availability

One of the first things somebody with a traditional Linux background realizes when trying to make a mobile device today and meeting with an ODM is that 99% of chipset vendors - and hence ODMs - will only offer Android hardware adaptations to go along with their device designs.

When you ask about X11 support within the GPU drivers or even Wayland they'll often look blankly at you and wonder why anybody would want to do anything else than Android based systems. And when you go into details they'll either tell you it can't be done - or charge you a massive cost to have it done.

This means that non-Android OSes and devices are unable to take advantage of the huge selection of (often low cost) device designs out there, increasing time to market and R&D costs massively.

Libhybris

In August 2012, I published my initial prototype for 'libhybris'. What is libhybris? Libhybris is a solution that allows non-Android systems such as glibc-based systems (like most non-Android systems are) to utilize shared objects (libraries) built for Android. In practice this means that you can leverage things like OpenGL ES 2.0 and other hardware interfacing provided within Android hardware adaptations.

I had initially developed libhybris in my idle hours at home, and the big question you might have is: why did I open source it instead of keeping it to myself and profiting from it, as it obviously was the holy grail for non-Android systems?

The simple answer is this: by working together on open source code, it would help accelerate the development of libhybris and testing of the software for everybody's mutual benefit.

I didn't feel good about libhybris initially; it's not the most elegant solution to the problem. Many around me in the open source community were and are fighting to have chipset vendors provide Wayland or X11 adaptations for mobile chipsets, or even GPU drivers for non-Android systems in the first place.

But I felt that this was the road that had to be taken before non-Android systems turned completely irrelevant in the bigger picture. When we again have volume in non-Android devices, we can have our own dedicated HW adaptations again.

Open sourcing worked quite well - a small group of people got together, tested it, improved it and got it running on multiple chipsets - thanks to OpenWebOS, Florian Haenel (heeen), Thomas Perl (thp), Simon Busch (morphis) and others. It turned the project from a late night hacking project into a viable solution for building device OSes on top of - or even for running Android NDK applications.

Earlier this year, however, I discovered that a well-known company had taken the code and disappeared underground with it for several months, improved upon it, showed off the capability in their advertisements and demos, and in the end posted the code in their own source control system, detached from the state of the upstream project - to the extent that some posters around the web thought libhybris was made by that company itself.

That kind of behavior ruined the reason I open sourced libhybris in the first place, and I was shocked to the point that I contemplated no longer open sourcing my hobby projects by default. It's not cool for companies to do things like this, no matter your commercial reasons. It ruins it for all of us who want to strengthen the open source ecosystem. We could have really used your improvements and patches earlier on, instead of struggling with some of these issues.

But, I will say that their behavior has improved - they are now participating in the project, discussing, upstreaming patches that are useful. And I forgive them because they've changed their ways and are participating sanely now.

Now for a few words on my story with Wayland..

Wayland

My journey with Wayland started in late 2011. At the time I thought Wayland was a bit dry and boring - it was just a protocol. I did not fully appreciate the power and simplicity it provided for embedded UIs, or that it was a building block for much more exciting things - like libhybris turned out to be.

Being in embedded Linux and exploring Qt Lighthouse, I had learnt of Qt Scenegraph and a curious new thing called QtCompositor. QtCompositor was what sold me on Wayland - it enabled amazing capabilities on embedded and eased the development of effects, window management and homescreens. Things that previously would take several man-years to develop for embedded devices were made easy. And it allowed stack builders to have similar graphical stacks on SDKs/virtual machines for development as on target devices.

If you don't know QML, it's a declarative UI language for designing beautiful and functional UIs. What QtCompositor did was let you get events about windows appearing, changing size, and so on - but also made each window's graphical buffer just another item in your UI scenegraph, like an image or a label would be. It could even make a graphical buffer a widget inside your traditional UI.

Screenshot from http://blog.qt.digia.com/blog/2011/03/18/multi-process-lighthouse/

This could naturally be expanded to much more curious things, such as 3D Wayland compositors. If you'd like to hear more about QtCompositor, you can also watch the following talk from Qt Developer Days by Andy Nichols. Capable QtCompositor technology is something that is here today. Not something that has to be developed from scratch or roadmapped.

I was the maintainer of the Nokia N900 hardware adaptation for MeeGo at the time, and I wanted to see if I could get Wayland working on it - it had a PowerVR SGX GPU. I reached out to #wayland on irc.freenode.net and was met with open arms, guidance and a lot of help from krh (Wayland founder), jlind, capisce (QtWayland/QtCompositor), Andrew Baldwin, Mika Bostrom and many other talented people, and was able to get started very quickly with Wayland.

To get things working with Wayland, I needed to figure out how to:

  • Render an image with OpenGL ES 2.0 into GPU buffers under my control
  • Share that GPU buffer with another process (the compositor)
  • Include that GPU buffer as part of an OpenGL scenegraph, as a texture - and display this to the screen (in the compositor)
  • And, for performance, flip a full screen GPU buffer straight to the screen, bypassing the need to do OpenGL rendering

To render into a specific GPU buffer under your own control, you usually need to get inside the EGL/OpenGL ES implementation. On some chipsets, such as the Raspberry Pi's, it's possible to use specific EGL operations that allow shared (across two processes) images to be rendered into.

In the EGL implementation, you should be able to follow the path of the buffer, its attributes (size, stride, bpp/format) and when the client has requested eglSwapBuffers.

On the PowerVR SGX, there was an API called WSEGL for making plugins that act as windowing systems (X11, Framebuffer, QWS, Wayland..), which allowed me to do just that.

Sharing that buffer is sometimes a bit more difficult - you effectively need to make the same GPU buffer appear in two processes at once. On SGX, this was simple - you could request a simple integer handle to the buffer and share that value using whatever protocol you wanted. In the target process you then just map in the GPU buffer through a mapping method.

Wanting to stand on the shoulders of giants, I looked at how Mesa had implemented its Wayland protocol for DRM - it too had simple handles, and shared these buffers through a custom Wayland protocol.

Even if it was a custom protocol for buffer handling (creation, modification, etc), the same operations for handling buffers in Wayland still applied to it. I didn't need to do anything extra for compositor or client for the buffers in particular - I could piggyback on existing infrastructure available in Wayland protocol.

Wayland made it easy for me to take existing methods for the techniques/needs listed above into use and made it possible to quickly and easily implement Wayland support for the chipset.

Now, to something a little different, but quite related:

Android and its ANativeWindow

When you use eglCreateWindowSurface - that is, create an EGL window surface for later GL rendering with Android drivers - you have to provide a reference to the native window you want to render within. In Android, the native window type is ANativeWindow.

As you know, Android's graphics stack is roughly application -> libEGL that sends GPU buffers to SurfaceFlinger that either flings the buffer to the screen or composites it with OpenGL again with libEGL.

Why not just include all the functionality and code in the EGL stack that communicates with SurfaceFlinger? The answer is that you sometimes need to target multiple types of output - be it video/camera streaming to another process, framebuffer rendering, output to a HW composer or communication with SurfaceFlinger.

One of the good things about the Android graphics architecture is that, through the use of ANativeWindow, the code that does this work is kept outside the EGL drivers - that is, open source and available for customization for each purpose. That means the EGL/OpenGL drivers are less tied to the Android version itself (API versions of ANativeWindow sometimes change) and can easily be reused in binary form across upgrades.

ANativeWindow provides handy hooks for a windowing system to manage GPU buffers (queueBuffer - send a buffer, I'm done rendering; dequeueBuffer - I need a buffer for rendering; cancelBuffer - whoops, I didn't need it anyway; etc.) - and it gives you the methods you need to accomplish things, as I did on PowerVR SGX.

This is the entry point used to implement Wayland on top of Android GPU drivers on glibc based systems. Some fantastic work in this area has already been done by Pekka Paalanen (pq) as part of his work for Collabora Ltd. (Telepathy, GStreamer, WebKit, X11 experts) which proved that this is possible. Parts of the solution I will publish is based on their work - their work was groundbreaking in this field and made all this possible.

A note on gralloc and native handles
The graphical buffer allocation in Android is handled by a libhardware module named 'gralloc'. This is pretty straightforward - allocate buffer, get a buffer handle; dealloc; register (if you got the buffer from a separate process and want to map it in) - but most documentation doesn't talk about buffer_handle_t and what it actually is.
If you do a little bit of detective work, you'll find out that buffer_handle_t is actually defined as a native_handle_t* .. and what are native handles?

The structure is practically this: a count of file descriptors and a count of integers, followed by the actual file descriptors and integers. How do you then share a buffer across two processes?

You have to employ something as obscure as "file descriptor passing". This page describes it as "socket magic", which it truly is. It takes a file descriptor from one process and makes it available in another.

Android GPU buffers typically consist of GPU buffer metadata (handle, size, bpp, usage hints, GPU buffer handle) plus file descriptors mapping GPU memory or other shared memory. To make the buffer appear in two processes, you pass the handle information along with the related file descriptors.

The good news, however, is that Wayland already supports file descriptor passing, so you don't have to write that obscure handling code yourself for your custom Wayland compositor.

Conclusion

This concludes the first blog post in this series, giving a bit of background on how Wayland, libhybris and Android GPU drivers can work together. The next blog post will talk more about the actual server side implementation of this work. The last blog post will talk about the direction of future work - what you can do with it today and how - as well as describing the Wayland client side.

If you'd like to use, discuss and participate in the development of this solution, #libhybris on irc.freenode.net is the best place to be. A neutral place for development across different OS efforts.

Sunday, May 13, 2012

Tizen conference, wrapping up



The Linux Foundation sponsored me to travel to and attend the Tizen Conference 2012 in San Francisco and as part of this sponsorship, I'll be blogging about the conference and my insights and thoughts of the talks and keynotes at it. This is my last post in a two-parter about the conference.

When attending a conference, or a music festival, or any other event with multiple tracks, there will always be sessions that you for some reason do not end up attending - be it because you suddenly meet somebody at the coffee table just as a session you'd like to see is about to start, because there's a session you'd rather see, or simply because you decided to take a break. The solution to this is for the conference to record the sessions on video, which is often not done very well.

I've encountered the usual screw-ups at conferences: session recordings that don't include the slides at all, recordings where the A/V people forgot to wire the microphone into the recording equipment(!), or audio so bad that you couldn't even hear the speaker. It's also harder, when viewing a session afterwards, to have to focus on three things at once - the speaker's movements, his/her voice and the slide content.
That's why I welcome a better format, which the Tizen conference will apparently be using: recording the speaker's audio along with the slide content, but not the speaker him/herself, and exporting it in podcast format - letting you catch up on the newest technology while on the move, without having to dedicate your full attention to it.

Which brings me to the fact that I didn't attend that many sessions during the last day. But I got to attend the ones that mattered most to me: Open Build Service—Facts, Features and Future by adrianS and mls from openSUSE, and Next Gen OS Initialization Done Right by Auke Kok. And I missed out on Tizen IVI architecture with Mikko Ylinen.
Meeting the OBS guys is always a pleasure - you get to sync on their ideas and plans, and they listen to your expectations and sometimes crazy ideas. Compared to many other distribution build systems, OBS does not only serve its own community (openSUSE) - it is entirely usable, deployable and fantastic for building other distributions. OBS has lowered the amount of infrastructure you previously needed in order to roll your own distribution, and that's why we love it in Mer. It enables anyone with a minimum of OBS knowledge to maintain their own customized distribution, and ISVs to build against your distribution.

As with most technical discussions, the hallway track is the most interesting. The questions and concerns the audience comes up with during a session seed the ground for the continued discussion in the hallway afterwards. One concern, raised by Dominique Le Foll of Intel OTC, was that OBS-to-OBS links are simply too fragile and too often cause build stalls and problems - a matter we ran into with Mer as well, given the very unstable nature of the meego.com API: we needed a way to synchronize and access the OBS projects MeeGo consisted of, offline. Their need was a reliable way to export MeeGo (well, now Tizen) releases to customers while allowing them to modify it too.

What we invented there was a piece of software called FakeOBS (now the Mer Delivery System) which, to make a long story short, serves up an HTTP/REST interface similar enough to the OBS-to-OBS protocol that another OBS can connect to it, thinking it's a remote OBS.

While in fact it is a cache of sorts - we extracted, through the OBS API, the entire OBS project history of sources and built binaries and put it into an on-disk format that FakeOBS could then serve over the OBS-to-OBS protocol, giving us effectively offline access and leaving us free of external dependencies. You can view the latest iteration of it here. There's also a file called 'gitmer.py', which is how we deal with the git-based approach that Mer uses for sources.

When we generate Mer releases, we do not only export the built binaries - we also export this on-disk format for FakeOBS, allowing anyone to re-create Mer and re-build it in their own OBS, in addition to the OBS package repositories we have built for ISVs to build against. Meaning that even if merproject.org shuts down, anybody can resurrect the project - and vendors do not have to rely on merproject.org being up.

The next session was Auke Kok's systemd session. Auke's one of my personal open source heroes, always working on quite interesting things. As some of you might know, the traditional way most UIs launch applications and daemons on boot is through D-Bus and /etc/xdg/autostart .desktop files. In Mer and MeeGo, this was accomplished with uxlaunch (another of Auke's inventions).

But what if you could use the same flexibility that systemd offers you, in order to create a proper dependency tree for proper optimized booting within the user session? Well, guess what Auke invented :) 

Instead of starting uxlaunch, you'd start systemd --user as the user, which would properly start up X, start services that do not need X before X starts, and give you the ability to indicate session-internal dependencies. Which leads to amazing results. You can check out the systemd discussion in this mailing list thread.
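As a hypothetical sketch of the idea (unit names, paths and binaries invented for illustration - not from Auke's talk), session-internal dependencies could be expressed with user units along these lines:

```ini
# ~/.config/systemd/user/xorg.service (illustrative)
[Unit]
Description=X server for the user session

[Service]
ExecStart=/usr/bin/Xorg :0 -nolisten tcp

# ~/.config/systemd/user/homescreen.service (illustrative)
[Unit]
Description=Homescreen UI
Requires=xorg.service
After=xorg.service

[Service]
ExecStart=/usr/bin/homescreen
```

systemd can then compute the startup ordering from the dependency graph, instead of a hand-rolled launcher like uxlaunch doing it imperatively.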

Another thing that happened was that all conference attendees (except for Intel & Samsung, it seems) received a Tizen reference device, the so-called Lunchbox. It's an amazing piece of hardware: 4.3" 1280x720 AMOLED display, dual-core Samsung Exynos 4210 (Cortex-A9), 1GB RAM, 16GB eMMC, microSD slot, SIM slot (though modem capability is unknown), Mali GFX chipset, u-boot boot loader, 8MP back camera, 1MP front camera, PN544 NFC chip (unsure how to use), GPS chip, WLAN (WiFi Direct possible).. quite a nice kit. And a possible replacement for the N800 as well.

And if you're wondering whether we've tried to put Mer on it - of course we have. We've found the most interesting pieces, including a "Boot to SD card" mode (no success just yet - press the power key, volume up and volume down at the same time) and the kernel source code (2.6.36), and investigated the system, which currently uses Xorg 1.9.3 with an Xorg driver we can't find source for yet. But it'd surprise me if it wasn't somehow similar to the Mali Xorg driver. Once we've figured out SD card boot, it should be a breeze to run Mer on there - even with X11-GLESv2 acceleration.

That's all I have to say about the conference; I'll look forward to the next one.