planet.freedesktop.org
July 18, 2019

The average user has approximately one thumb per hand. That thumb comes in handy for a number of touchpad interactions, for example moving the cursor with the index finger while clicking a button with the thumb. On so-called Clickpads we don't have separate buttons though: the touchpad itself acts as a button, and software decides whether it's a left, right, or middle click by counting fingers and/or finger locations. Hence the need for thumb detection, because you may have two fingers on the touchpad (usually a right click), but if those are the index finger and the thumb, then really, it's just a single-finger click.

libinput has had some thumb detection since the early days when we were still hand-carving bits with stone tools. But it was quite simplistic, as the old documentation illustrates: two zones on the touchpad; a touch started in the lower zone was always a thumb, and where a touch started in the upper thumb area, a timeout and movement thresholds would decide whether it was a thumb. Internally, the thumb states were, Schrödinger-esque, "NO", "YES", and "MAYBE". On top of that, we also had speed-based thumb detection: where a finger was moving fast enough, a new touch would always default to being a thumb, on the grounds that you have no business dropping fingers in the middle of a fast interaction. Such a simplistic approach worked well enough for a bunch of use cases but failed gloriously in others.
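To make the old heuristic concrete, here is a minimal C sketch of it; the zone boundaries, the speed threshold and all names are illustrative, not libinput's actual code:

enum thumb_state { THUMB_NO, THUMB_MAYBE, THUMB_YES };

/* Zone rule: the bottom strip is always a thumb; the strip above it
 * is "maybe a thumb" until a timeout/movement threshold decides. */
static enum thumb_state
initial_thumb_state(double y, double touchpad_height)
{
    if (y > touchpad_height * 0.95)   /* lower exclusion zone */
        return THUMB_YES;
    if (y > touchpad_height * 0.80)   /* upper thumb area */
        return THUMB_MAYBE;
    return THUMB_NO;
}

/* Speed rule: a touch that appears while another finger is moving
 * fast is assumed to be a thumb. */
static int
is_speed_thumb(double other_finger_speed_mm_s)
{
    return other_finger_speed_mm_s > 25.0;   /* illustrative threshold */
}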

Thanks to Matt Mayfield's work, we now have a much more sophisticated thumb detection algorithm. The speed detection is still there but it better accounts for pinch gestures and two-finger scrolling. The exclusion zones are still there but are less final about the state of the touch: a thumb can escape that "jail" and contribute to pointer motion where necessary. The new documentation has a bit of a general overview. A requirement for well-working thumb detection, however, is that your device has the required (device-specific) thresholds set up. So go over to the debugging thumb thresholds documentation and start figuring out your device's thresholds.

As usual, if you notice any issues with the new code please let us know, ideally before the 1.14 release.

July 14, 2019

The All Systems Go! 2019 Call for Participation Re-Opened for ONE DAY!

Due to popular request we have re-opened the Call for Participation (CFP) for All Systems Go! 2019 for one day. It will close again TODAY, on the 15th of July 2019, at midnight Central European Summer Time! If you missed the deadline so far, we'd like to invite you to submit your proposals for consideration to the CFP submission site quickly! (And yes, this is the last extension, there's not going to be any more extensions.)


All Systems Go! is everybody's favourite low-level Userspace Linux conference, taking place in Berlin, Germany, on September 20-22, 2019.

For more information please visit our conference website!

July 11, 2019
July 09, 2019

For a long time I have been cultivating the desire to make a habit of writing monthly status updates. In some way, Drew DeVault's blog posts and Martin Peres's advice nudged me in this direction. So, here I am! I have decided to embrace the challenge of composing a report per month. I hope this new habit helps me to improve my writing and communication skills but, most importantly, helps me to keep track of my work. I want to start this update by describing my work conditions and then focus on the technical stuff.

In the last two months, I have been facing infrastructure problems that affect my work. I am dealing with obstacles such as restricted Internet access and long hours of public transportation from my home to my workplace. Unfortunately, I can't work from my house due to the lack of space, and the best place to work is a public library at the University of Brasilia (UnB). Going to UnB every day makes me waste around 3h per day on a bus. The library has a great environment, but it also has many Internet restrictions; for example, I can't access websites under the '.me' domain or connect to my IRC bouncer. In summary: it's been hard to work these days. So let's stop talking about non-technical stuff and get to the heart of the matter.

I really like working on VKMS. I know this is not news to anybody, and in June most of my efforts were dedicated to VKMS. One of my main endeavors was finding and fixing a bug in VKMS that made kms_cursor_crc and kms_pipe_crc_basic fail. I had been chasing this bug for a long time, as can be seen here [1]. After many hours of debugging I sent a patch for handling this issue [2]; however, after Daniel's review, I realized that my patch didn't correctly fix the problem. So Daniel decided to dig into this issue to find the root of the problem and later sent a final fix. If you want to see the solution, take a look at [3]. One day, I want to write a post about this fix since it is an interesting subject to discuss.

Daniel also noticed some concurrency problems in the CRC code and sent a patchset composed of 10 patches that tackle the issue. These patches focused on better framebuffer manipulation and on avoiding race conditions. It took me around 4 days to review and test this series. During my review, I asked many things related to concurrency and other clarifications about DRM. Daniel always replied with a very nice and detailed explanation. If you want to learn a little bit more about locks, I recommend taking a look at [4]. Seriously, it is really nice!

I also worked on adding writeback support to VKMS; ever since XDC2018 I could not stop thinking about adding a writeback connector to VKMS due to the benefits it could bring, such as new tests and assisting developers with visual output. As a result, I started some clumsy attempts to implement it in January, but I only really dove into this issue in the middle of April, and in June I was focused on making it work. It was tough for me to implement this feature for the following reasons:

  1. There is no IGT test for writeback in the main repository, so I had to use a WIP patchset made by Brian and Liviu.
  2. I was not familiar with framebuffers, connectors, and how to manipulate them.

As a result of the above limitations, I had to invest many hours reading the documentation and the DRM/IGT code. In the end, I think that adding writeback connectors paid off well for me, since I feel much more comfortable with many things related to DRM these days. The writeback support has not landed yet; at this moment the patch is under review (v3) and has changed a lot since the first version. For details about this series take a look at [5]. I'll write a post about this feature after it gets merged.

After getting the writeback connectors working in VKMS, I felt very grateful to Brian, Liviu, and Daniel for all the assistance they provided me. In particular, I was thrilled that Brian and Liviu made the kms_writeback test, which worked as an implementation guideline for me. As a result, I updated their patchset to make it work with the latest version of IGT and made some tiny fixes. My goal was to help them upstream kms_writeback, and I submitted the series with the hope of seeing it land in IGT [9].

In parallel to my work on writeback, I was trying to figure out how I could expose VKMS configuration to userspace via configfs. After many attempts, I submitted the first version of configfs support; in this patchset I exposed the virtual and writeback connectors. Take a look at [6] for more information about this feature; I'll definitely write a post about it after it lands.

Finally, I'm still trying to land a patch that makes drm_wait_vblank_ioctl return EOPNOTSUPP instead of EINVAL when the driver does not support vblank. Since this change is in the DRM core and also changes what userspace sees, it is not easy to get it landed. For the details about this patch, you can take a look here [7]. I also implemented some changes in kms_flip to validate the changes that I made to drm_wait_vblank_ioctl, and those got landed [8].
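The essence of that change is tiny; here is a sketch of the idea (not the exact upstream diff, which you can find in [7]):

/* Sketch: report "operation not supported" instead of "invalid
 * argument" when the driver has no vblank support. */
int drm_wait_vblank_ioctl(struct drm_device *dev, void *data,
                          struct drm_file *file_priv)
{
        if (!dev->irq_enabled)
                return -EOPNOTSUPP;     /* previously -EINVAL */

        /* ... the usual vblank wait handling follows ... */
        return 0;                       /* body elided for the sketch */
}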

July Aims

In June I was totally dedicated to VKMS; now I want to slow down a little bit and study more about userspace. I want to take a step back and write some tiny programs using libdrm, with the goal of understanding the interaction between userspace and kernel space. I also want to take a look at the theoretical side of computer graphics.
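As an illustration of the kind of tiny libdrm program I have in mind, the sketch below just opens a DRM device and lists its connectors; the device node name is an assumption (pick the right card for your box), and it must be built against libdrm:

#include <fcntl.h>
#include <stdio.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

int main(void)
{
    int fd = open("/dev/dri/card0", O_RDWR);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    drmModeRes *res = drmModeGetResources(fd);
    if (!res) {
        fprintf(stderr, "not a modesetting-capable device?\n");
        return 1;
    }

    printf("%d connectors, %d CRTCs\n",
           res->count_connectors, res->count_crtcs);

    for (int i = 0; i < res->count_connectors; i++) {
        drmModeConnector *conn = drmModeGetConnector(fd, res->connectors[i]);
        if (!conn)
            continue;
        printf("connector %u: %s\n", conn->connector_id,
               conn->connection == DRM_MODE_CONNECTED ? "connected"
                                                      : "disconnected");
        drmModeFreeConnector(conn);
    }

    drmModeFreeResources(res);
    return 0;
}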

I want to put some effort into improving a tool named kw that helps me in my work with the Linux kernel. I also want to look at real overlay plane support in VKMS. I noticed that I have to find a "contribution protocol" (review/write code) that works for me in my current work conditions; otherwise, work will become painful for my relatives and me. Finally, and most importantly, I want to take some days off to enjoy my family.

Info: If you find any problem with this text, please let me know. I will be glad to fix it.

References

[1] “First discussion in the Shayenne’s patch about the CRC problem”. URL: https://lkml.org/lkml/2019/3/10/197

[2] “Patch fix for the CRC issue”. URL: https://patchwork.freedesktop.org/patch/308617/

[3] “Daniel final fix for CRC”. URL: https://patchwork.freedesktop.org/patch/308881/?series=61703&rev=1

[4] “Rework crc worker”. URL: https://patchwork.freedesktop.org/series/61737/

[5] “Introduces writeback support”. URL: https://patchwork.freedesktop.org/series/61738/

[6] “Introduce basic support for configfs”. URL: https://patchwork.freedesktop.org/series/63010/

[7] “Change EINVAL by EOPNOTSUPP when vblank is not supported”. URL: https://patchwork.freedesktop.org/patch/314399/?series=50697&rev=7

[8] “Skip VBlank tests in modules without VBlank”. URL: https://gitlab.freedesktop.org/drm/igt-gpu-tools/commit/2d244aed69165753f3adbbd6468db073dc1acf9a

[9] “Add support for testing writeback connectors”. URL: https://patchwork.freedesktop.org/series/39229/

July 05, 2019
July 03, 2019

The Dell Wireless 5821e module is a Qualcomm SDX20 based LTE Cat16 device. This modem can work in either MBIM mode or QMI mode, and provides a different USB layout for each mode. On Linux kernel based and Windows based systems, the MBIM mode is the default one, because it provides easy integration with the OS (e.g. no additional drivers or connection managers required in Windows) and also provides all the features that QMI provides, through QMI-over-MBIM operations.

The firmware update process of this DW5821e module is integrated in your GNU/Linux distribution, since ModemManager 1.10.0 and fwupd 1.2.6. There is no official firmware released in the LVFS (yet) but the setup is completely ready to be used, just waiting for Dell to publish an initial official firmware release.

The firmware update integration between ModemManager and fwupd involves different steps, which I’ll try to describe here so that it’s clear how to add support for more devices in the future.

1) ModemManager reports expected update methods, firmware version and device IDs

The Firmware interface in the modem object exposed in DBus contains, since MM 1.10, a new UpdateSettings property that provides a bitmask specifying the expected firmware update method (or methods) required for a given module, plus a dictionary of key-value entries specifying settings applicable to each of the update methods.

In the case of the DW5821e, two update methods are reported in the bitmask: "fastboot" and "qmi-pdc", because both are required for a complete firmware upgrade procedure. "fastboot" would be used to perform the system upgrade by using an OTA update file, and "qmi-pdc" would be used to install the per-carrier configuration files after the system upgrade has been done.

The list of settings provided in the dictionary contains the two mandatory fields required for all devices that support at least one firmware update method: "device-ids" and "version". These two fields are designed so that fwupd can fully rely on them during its operation:

  • The "device-ids" field will include a list of strings providing the device IDs associated with the device, sorted from the most specific to the least specific. These device IDs are the ones that fwupd will use to build the GUIDs required to match a given device to a given firmware package. The DW5821e will expose four different device IDs:
    • "USB\VID_413C": specifying this is a Dell-branded device.
    • "USB\VID_413C&PID_81D7": specifying this is a DW5821e module.
    • "USB\VID_413C&PID_81D7&REV_0318": specifying this is hardware revision 0x318 of the DW5821e module.
    • "USB\VID_413C&PID_81D7&REV_0318&CARRIER_VODAFONE": specifying this is hardware revision 0x318 of the DW5821e module running with a Vodafone-specific carrier configuration.
  • The "version" field will include the firmware version string of the module, using the same format as used in the firmware package files used by fwupd. This requirement is obviously very important, because if the format used is different, the simple version string comparison used by fwupd (literally an ASCII string comparison) would not work correctly. It is also worth noting that if the carrier configuration is also versioned, the version string should contain not only the version of the system but also the version of the carrier configuration. The DW5821e will expose a firmware version including both, e.g. "T77W968.F1.1.1.1.1.VF.001" (system version being "F1.1.1.1.1" and carrier config version being "VF.001").
  • In addition to the mandatory fields, the dictionary exposed by the DW5821e will also contain a "fastboot-at" field specifying which AT command can be used to switch the module into fastboot download mode.
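For illustration, a minimal GLib/GDBus client that reads this property could look like the sketch below; the modem object path is hypothetical, since a real client would enumerate modems first:

#include <gio/gio.h>

int main(void)
{
    GError *error = NULL;
    GDBusConnection *bus = g_bus_get_sync(G_BUS_TYPE_SYSTEM, NULL, &error);
    if (!bus) {
        g_printerr("%s\n", error->message);
        return 1;
    }

    /* Object path /Modem/0 is a placeholder for this sketch. */
    GVariant *ret = g_dbus_connection_call_sync(bus,
        "org.freedesktop.ModemManager1",
        "/org/freedesktop/ModemManager1/Modem/0",
        "org.freedesktop.DBus.Properties", "Get",
        g_variant_new("(ss)",
                      "org.freedesktop.ModemManager1.Modem.Firmware",
                      "UpdateSettings"),
        G_VARIANT_TYPE("(v)"), G_DBUS_CALL_FLAGS_NONE, -1, NULL, &error);
    if (!ret) {
        g_printerr("%s\n", error->message);
        return 1;
    }

    GVariant *value;
    g_variant_get(ret, "(v)", &value);

    gchar *str = g_variant_print(value, TRUE);
    g_print("UpdateSettings: %s\n", str);  /* bitmask + settings dict */
    g_free(str);
    return 0;
}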

2) fwupd matches GUIDs and checks available firmware versions

Once fwupd detects a modem in ModemManager that exposes the correct UpdateSettings property in the Firmware interface, it will add the device to its own records as a known device that may be updated. The device exposed by fwupd will contain the GUIDs built from the "device-ids" list of strings exposed by ModemManager. E.g. for the "USB\VID_413C&PID_81D7&REV_0318&CARRIER_VODAFONE" device ID, fwupd will use the GUID "b595e24b-bebb-531b-abeb-620fa2b44045".

fwupd will then be able to look for firmware packages (CAB files) available in the LVFS that are associated to any of the GUIDs exposed for the DW5821e.

The CAB files packaged for the LVFS will contain one single firmware OTA file plus one carrier MCFG file for each supported carrier in the given firmware version. The CAB files will also contain one "metainfo.xml" file for each of the supported carriers in the released package, so that per-carrier firmware upgrade paths are available: only firmware updates for the currently used carrier should be considered. E.g. we don't want users running with the Vodafone carrier config to get notified of upgrades to newer firmware versions that aren't certified for the Vodafone carrier.

Each of the CAB files with multiple “metainfo.xml” files will therefore be associated to multiple GUID/version pairs. E.g. the same CAB file will be valid for the following GUIDs (using Device ID instead of GUID for a clearer explanation, but really the match is per GUID not per Device ID):

  • Device ID “USB\VID_413C&PID_81D7&REV_0318&CARRIER_VODAFONE” providing version “T77W968.F1.2.2.2.2.VF.002”
  • Device ID “USB\VID_413C&PID_81D7&REV_0318&CARRIER_TELEFONICA” providing version “T77W968.F1.2.2.2.2.TF.003”
  • Device ID “USB\VID_413C&PID_81D7&REV_0318&CARRIER_VERIZON” providing version “T77W968.F1.2.2.2.2.VZ.004”
  • … and so on.

Following our example, fwupd will detect our device exposing device ID “USB\VID_413C&PID_81D7&REV_0318&CARRIER_VODAFONE” and version “T77W968.F1.1.1.1.1.VF.001” in ModemManager and will be able to find a CAB file for the same device ID providing a newer version “T77W968.F1.2.2.2.2.VF.002” in the LVFS. The firmware update is possible!

3) fwupd requests device inhibition from ModemManager

In order to perform the firmware upgrade, fwupd requires full control of the modem. Therefore, when the firmware upgrade process starts, fwupd will use the new InhibitDevice(TRUE) method in the Manager DBus interface of ModemManager to request that a specific modem with a specific uid should be inhibited. Once the device is inhibited in ModemManager, it will be disabled and removed from the list of modems in DBus, and no longer used until the inhibition is removed.

The inhibition may be removed by calling InhibitDevice(FALSE) explicitly once the firmware upgrade is finished, and will also be automatically removed if the program that requested the inhibition disappears from the bus.
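As an illustrative sketch (the uid string is a placeholder; a real client reads the modem's actual uid first, and error handling is omitted), such an inhibition request through GDBus would look roughly like this:

#include <gio/gio.h>

int main(void)
{
    GDBusConnection *bus = g_bus_get_sync(G_BUS_TYPE_SYSTEM, NULL, NULL);

    /* TRUE inhibits the modem; call again with FALSE to uninhibit. */
    g_dbus_connection_call_sync(bus,
        "org.freedesktop.ModemManager1",    /* bus name */
        "/org/freedesktop/ModemManager1",   /* Manager object */
        "org.freedesktop.ModemManager1",    /* Manager interface */
        "InhibitDevice",
        g_variant_new("(sb)", "placeholder-modem-uid", TRUE),
        NULL, G_DBUS_CALL_FLAGS_NONE, -1, NULL, NULL);

    /* ... perform the firmware upgrade here, then uninhibit ... */
    return 0;
}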

4) fwupd downloads CAB file from LVFS and performs firmware update

Once the modem is inhibited in ModemManager, fwupd can start the firmware update process right away. In the case of the DW5821e, the firmware update requires two different methods and two different upgrade cycles.

The first step would be to reboot the module into fastboot download mode using the AT command specified by ModemManager in the "fastboot-at" entry of the UpdateSettings property dictionary. After running the AT command, the module will reset itself and reboot with a completely different USB layout (and a different vid:pid) that fwupd can detect as being the same device as before but in a different working mode. Once the device is in fastboot mode, fwupd will download and install the OTA file using the fastboot protocol, as defined in the "flashfile.xml" file provided in the CAB file:

<parts interface="AP">
  <part operation="flash" partition="ota" filename="T77W968.F1.2.2.2.2.AP.123_ota.bin" MD5="f1adb38b5b0f489c327d71bfb9fdcd12"/>
</parts>

Once the OTA file is completely downloaded and installed, fwupd will trigger a reset of the module also using the fastboot protocol, and the device will boot from scratch on the newly installed firmware version. During this initial boot, the module will report itself running in a “default” configuration not associated to any carrier, because the OTA file update process involves fully removing all installed carrier-specific MCFG files.

The second upgrade cycle performed by fwupd, once the modem is detected again, involves downloading all carrier-specific MCFG files one by one into the module using the QMI PDC protocol. Once all are downloaded, fwupd will activate the specific carrier configuration that was active before the download started. E.g. if the module was running with the Vodafone-specific carrier configuration before the upgrade, fwupd will select the Vodafone-specific carrier configuration after the upgrade. The module is then reset one last time using the QMI DMS protocol as the last step of the upgrade procedure.

5) fwupd removes device inhibition from ModemManager

The upgrade logic will finish by removing the device inhibition from ModemManager using InhibitDevice(FALSE) explicitly. At that point, ModemManager would re-detect and re-probe the modem from scratch, which should already be running in the newly installed firmware and with the newly selected carrier configuration.

June 29, 2019
June 26, 2019

In my last Panfrost blog, I announced my internship goal: improve Panfrost to run GNOME3. GNOME is a popular Linux desktop making heavy use of OpenGL; to use GNOME with only free and open-source software on a machine with Mali graphics, Panfrost is necessary.

Two months ahead-of-schedule, here I am, drafting this blog post from GNOME on my laptop running Panfrost!

A tiled architecture

Bring-up of GNOME required improving the driver's robustness and performance, with a focus on Mali's tiled architecture. Typically found in mobile devices, tiling GPU architectures divide the screen into many small tiles, like a kitchen floor, rendering each tile separately. This allows for unique optimizations but also poses unique challenges.

One natural question is: how big should tiles be? If the tiles are too big, there’s no point to tiling, but if the tiles are too small, the GPU will repeat unnecessary work. Mali offers a hybrid answer: allow lots of different sizes! Mali’s technique of “hierarchical tiling” allows the GPU to use tiles as small as 16x16 pixels all the way up to 2048x2048 pixels. This “sliding scale” allows different types of content to be optimized in different ways. The tiling needs of a 3D game like SuperTuxKart are different from those of a user interface like GNOME Shell, so this technique gets us the best of both worlds!

Although primarily handled in hardware, hierarchical tiling is configured by the driver; I researched this configuration mechanism in order to understand it and improve our configuration with respect to performance and memory usage.

Tiled architectures additionally present an optimization opportunity: if the driver can figure out a priori which 16x16 tiles will definitely not change, those tiles can be culled from rendering entirely, saving both read and write bandwidth. As a conceptual example, if the GPU composites your entire desktop while you’re writing an email, there’s no need to re-render your web browser in the other window, since that hasn’t changed. I implemented an initial version of this optimization in Panfrost, accumulating the scissor state across draws within a frame, rendering only to the largest bounding box of the scissors. This optimization is particularly helpful for desktop composition, ideally improving performance on workloads like GNOME, Sway, and Weston.
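Conceptually, the accumulation is just a running union of scissor rectangles; here is a simplified sketch with illustrative names, not Panfrost's actual code:

/* Track the union of all scissor rectangles seen in a frame, then
 * render only within that bounding box. */
struct damage_bbox {
    unsigned minx, miny, maxx, maxy;
};

static void
bbox_reset(struct damage_bbox *b)
{
    b->minx = b->miny = ~0u;
    b->maxx = b->maxy = 0;
}

/* Called once per draw with the draw's scissor rectangle. */
static void
bbox_union_scissor(struct damage_bbox *b,
                   unsigned x0, unsigned y0, unsigned x1, unsigned y1)
{
    if (x0 < b->minx) b->minx = x0;
    if (y0 < b->miny) b->miny = y0;
    if (x1 > b->maxx) b->maxx = x1;
    if (y1 > b->maxy) b->maxy = y1;
}

/* At frame end, 16x16 tiles outside (minx,miny)-(maxx,maxy) need not
 * be rendered at all. */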

…Of course, theory aside, mostly what GNOME needed was a good, old-fashioned bugfixing spree, because the answer is always under your nose. Turns out what really broke the desktop was a trivial bug in the viewport specification code. Alas.

Scoreboarding

Looking forward to sophisticated workloads as this open driver matures, I researched job "scoreboarding". For some background, the Mali hardware divides a frame into many small "jobs". For instance, a "vertex job" executes a vertex shader; a "tiler job" executes tiling (sorting geometry into tiles at varying hierarchy levels). Many of these jobs have to execute in a specific order; for instance, geometry has to be output by a vertex job before a tiler job can read that geometry. Previously, these relationships were hard-coded into the driver, which was okay for simple workloads but does not scale well.

I have since replaced this code with an elegant dependency management system, based on the hardware's scoreboarding. Instead of hard-coding relationships, the driver can now specify high-level dependencies, and a generic algorithm (based on topological sorting) works out the order of submission and the scoreboard flags necessary to actualize the given requirements. The new scoreboarding implementation has enabled new features, like rasterizer discard, to be implemented with ease.
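The generic part is ordinary topological sorting; a toy Kahn's-algorithm sketch of the idea (illustrative, not the driver's actual code):

#include <stdio.h>

#define MAX_JOBS 16

int main(void)
{
    /* dep[i][j] != 0 means job j depends on job i (i must run first). */
    int n = 3;
    int dep[MAX_JOBS][MAX_JOBS] = {0};
    dep[0][1] = 1;  /* vertex job 0 feeds tiler job 1 */
    dep[1][2] = 1;  /* tiler job 1 feeds fragment job 2 */

    int indeg[MAX_JOBS] = {0};
    int queue[MAX_JOBS], head = 0, tail = 0;

    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            indeg[j] += dep[i][j];

    for (int i = 0; i < n; i++)
        if (indeg[i] == 0)
            queue[tail++] = i;

    while (head < tail) {
        int i = queue[head++];
        printf("submit job %d\n", i);
        for (int j = 0; j < n; j++)
            if (dep[i][j] && --indeg[j] == 0)
                queue[tail++] = j;
    }

    return 0;
}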

With these improvements and more, several new features have landed in the driver, fixing hundreds of failing dEQP tests since my last blog post, bringing us nearer to conformance on OpenGL ES 2.0 and beyond.

Originally posted on Collabora’s blog

June 24, 2019

So I hope everyone is enjoying Fedora Workstation 30, but we don't rest on our laurels here, so I thought I'd share some of the things we are working on for Fedora Workstation 31. This is not an exhaustive list, but it covers some of the more major items.

Wayland – Our primary focus is still on finishing the Wayland transition, and we feel we are getting close now; thank you to the community for the help in testing and verifying Wayland over the last few years. The single biggest goal currently is fully removing our X Windowing System dependency, meaning that GNOME Shell should be able to run without needing XWayland. For those wondering why that has taken so much time, well, it is simple: for 20 years developers could safely assume we were running on top of X. So refactoring everything to remove any code that assumes it is running on top of X.org has been a major effort. The work is mostly done now for the shell itself, but there are a few items left in regard to the GNOME Settings daemon where we need to remove the X dependency. Olivier Fourdan is working on removing those settings daemon bits as part of his work to improve the Wayland accessibility support. We are optimistic that we can declare this work done within a GNOME release or two, so GNOME 3.34 or maybe 3.36. Once that work is complete, an X server (XWayland) will only be started if you actually run an X application, and when you shut that application down the X server will be shut down too.

Wayland logo

Wayland Graphics


Another change that Hans de Goede is working on at the moment is allowing X applications to be run as root under XWayland. In general, running desktop apps as root isn't considered advisable from a security point of view, but since it always worked under X we feel it should continue to be possible under XWayland too. This should fix a few applications out there which only work when run as root. One last item Hans de Goede is looking at is improving SDL's Wayland support in regards to how it deals with the scaling of lower-resolution games. Thanks to the great effort by Valve and others we now have a huge catalog of games available under Linux, and we want to ensure that those keep running and run well. So we will work with the SDL devs to come up with a solution here; we just don't know the exact shape and form the solution will take yet, so stay tuned.

Finally there is the NVidia binary driver support question. You can run a native Wayland session on top of the binary driver, and you have had that ability for a very long time. Unfortunately there has been no support for the binary driver in XWayland, and thus X applications (of which there are a lot) would not get any HW-accelerated 3D graphics support. Adam Jackson has worked on letting XWayland load the binary NVidia x.org driver, and we are now waiting on NVidia to review that work and hopefully update their driver to support it.

Once we are done with this we expect X.org to go into hard maintenance mode fairly quickly. The reality is that X.org is basically maintained by us, and thus once we stop paying attention to it there are unlikely to be any major new releases coming out, and there might even be some bitrot setting in over time. We will keep an eye on it, as we will want to ensure X.org stays supportable until the end of the RHEL 8 lifecycle at a minimum, but let this be a friendly notice for everyone who relies on the work we do maintaining the Linux graphics stack: get onto Wayland, that is where the future is.

PipeWire – Wim Taymans keeps improving the core features of PipeWire, as we work step by step to be ready to replace Jack and PulseAudio. He has recently been focusing on improving existing features like the desktop sharing portal together with Jonas Ådahl, and we are planning a hackfest for Wayland in the fall; the current plan is to do it around the All Systems Go! conference in Berlin, but due to some scheduling conflicts for some of our core stakeholders we might need to reschedule it to a little later in the fall.
A new user of the desktop sharing portal is the new Miracast support that Benjamin Berg has been steadily working on. The Miracast support is shaping up, and you can grab the Network Displays test client from his COPR repository while he is working to get the package into Fedora proper. We would appreciate more users testing it and giving feedback, as we know there are definitely devices out there where things do not work properly, and identifying them is the first step to figuring out how to make our support in the desktop more robust. Eventually we want to make the GNOME integration even more seamless than the standalone app, but for early testing and polish it does the trick. If you are interested in contributing, the code is hosted here on GitHub.

Network Display

Network Display application using Miracast

Btw, you still need to set the enable PipeWire flag in Chrome to get the PipeWire support (chrome://flags). So I included a screenshot here to show you where to go in the browser and what the key is called:

Chrome PipeWire flag

Flatpak – Work on Flatpak in Fedora is continuing. The current focus is on improving the infrastructure for building Flatpaks from RPMs and automating what we can. This is prerequisite work for eventually starting to ship some applications as Flatpaks by default, and eventually shipping all applications as Flatpaks by default. We are also working on setting things up so that we can offer applications from flathub.io and quay.io out of the box and in accordance with Fedora rules for 3rd-party software. We are also making progress on making a Red Hat UBI based runtime available. This means that as a 3rd-party developer you can build your applications on top of it and be certain that it will stay around and be supported by Red Hat for the lifetime of a given RHEL release, which means around 10 years. This frees you up as a developer to update your application at your own pace, as opposed to having to chase more short-lived runtimes. It will also ensure that your application can be certified for RHEL, which gives you access to all our workstation customers in addition to Fedora and all other distros.

Fedora Toolbox – Work is progressing on the Fedora Toolbox, our tool for making working with pet containers feel simple and straightforward. Debarshi Ray is currently looking into improvements to GNOME Terminal that will ensure that you get a more natural behaviour inside the terminal when interacting with pet containers, for instance ensuring that if you have a terminal open to a pet container and create a new tab, that tab will also be inside the container instead of pointing at the host. We are also working on finding good ways to make the selection of containers more discoverable, so that you can more easily get access to a Red Hat UBI container or a Red Hat TensorFlow container, for instance. There will probably be a bit of a slowdown in terms of new Toolbox features soon, though, as we are going to rewrite it to make it more maintainable. The current implementation is a giant shell script, but the new version will most likely be written in Go (so that we can more easily integrate with the other container libraries and tools out there, mostly written in Go).

Fedora Toolbox

Fedora Toolbox in action

GNOME Classic – We have had Classic mode available in GNOME and Fedora for a long time, but we recently decided to give it a second look and try to improve the experience. So Allan Day reviewed the experience, and we decided to make it a purer GNOME 2 style experience by dropping the overview completely when you run Classic mode.
We have also invested time and effort in improving the Classic mode workspace switcher to make life better for people who use a very workspace-centric workflow. The goal of the improvements is to make the Classic mode workspace switcher more feature-complete and also to ensure that it can work with standard GNOME 3 in addition to Classic mode. We know this will greatly improve the experience for many of our users, and at the same time hopefully let new people switch to Fedora and GNOME to get the advantage of all the other great improvements we are bringing to Linux and the Linux desktop.

Sysprof & performance – There has been a lot of focus in the community on improving GNOME Shell performance. Our particular focus has been on major re-architecting of some core subsystems, which was needed to make some of the performance improvements you have seen possible at all. And lately Christian Hergert has been working on improving our tooling for profiling the desktop, to let our developers more easily see exactly where in the stack bottlenecks are and what is causing them. Be sure to read Christian's blog for further details about Sysprof and friends.

Fleet Commander – Our tool for configuring large deployments of Fedora and RHEL desktops should have a release out very soon that can work with Active Directory as your LDAP server. We know a lot of RHEL and Fedora desktop users are part of bigger organizations where Linux users are a minority and thus Active Directory is deployed in the organization. With this new release, Fleet Commander can be run using Active Directory or FreeIPA as the directory server, and thus a lot of organizations who previously could not easily deploy Fleet Commander can now take advantage of this powerful tool. The next step for Fleet Commander after that is finishing off some loose ends in terms of our Firefox support, and also ensuring that you can easily configure GNOME Shell extensions with Fleet Commander. We know a lot of our customers and users are deploying one or more GNOME Shell extensions for their desktops, so we want to ensure Fleet Commander can help you do that efficiently across your whole fleet of systems.

Fingerprint support – We have been working closely with our hardware partners to bring proper fingerprint reader support to Linux. Bastien Nocera worked on cleaning up the fprint documentation and making sure there is good sample code, and our hardware partners then worked with their suppliers to ensure they provide drivers conforming to the spec for hardware supplied to them. So there are new drivers for Synaptics fingerprint readers coming out soon thanks to this effort. We are not stopping there though: Benjamin Berg is continuing the effort to improve the infrastructure for Linux fingerprint reader support, making sure we can support in-device storage of fingerprints, for instance.

Fingerprint image

Fingerprint readers now better supported

Gamemode – Christian Kellner has been contributing a bit to gamemode recently, working to make it more secure and also to ensure that it can work well with games packaged as Flatpaks. So if you play Linux games, especially those from Feral Interactive, and want to squeeze some extra performance from your system, make sure to install gamemode on your Fedora system.

Dell Totem support – Red Hat has a lot of customers in the fields of animation and CAD/CAM systems. Because of this, Benjamin Tissoires and Peter Hutterer have been working with Dell on enabling their Totem input device for a while now. That work is now coming to a close, with the Totem support shipping in the latest libinput version and the kernel side of things having been merged some time ago. You can get the full details from Peter's blog about the Dell Totem.

Dell Totem

The Dell Totem input device

Media codec support – So the OpenH264 2.0 release is out from Cisco now, and Kalev Lember has been working to get the Fedora packages updated. This is a crucial release as it includes the support for the Main and High profiles that I mentioned in an earlier blog post. That work happened due to a collaboration between Cisco, Endless, Red Hat and Centricular, with Jan Schmidt at Centricular doing the work of implementing support for these two profiles. This makes OpenH264 a lot more useful as it now supports playing back most files found in the wild, and we have been working to ensure it can be used for general playback in Firefox. At the same time, Wim Taymans is working to fix some audio quality issues in the AAC implementation we ship, so we should soon have both a fully working H264 decoder/encoder and a fully functional AAC decoder/encoder in Fedora. We are still trying to figure out what to do with MPEG2 video, as we are ready to ship support for that too but are still working out the details of the implementation. Beyond that we don't have any further plans around codecs at the moment, as we feel that with H264, MPEG2 video, AAC, mp3 and AC3 we have the most critical ones covered, alongside the growing family of great free codecs such as VP9, Opus and AV1. We might take a look at the status of things like Windows Media and DivX at some point, but it is not anywhere close to the top of our priority list currently.

June 21, 2019
The obvious change to announce is the new website design. But there is much more to talk about.

### Website overhaul

The old website, reachable primarily on the domain [subdiff.de][subdiff.de], was a pure blog built with Jekyll, and the design was some random theme I picked up on GitHub. It was a quick thing to do back in the days when I needed a blog up fast for community interaction as a KWin and Plasma developer. But on the back burner my goal had been for quite some time to rebuild the website with a more custom and professional design. Additionally I wanted this website to be not only a blog but also a landing page with some general information about my work.

The opportunity arose now, and after several months of research and coding I finished the website rebuild. This all took longer because it seemed to me like an ideal occasion to learn about modern web development techniques, so I didn't settle for the first plain solution I came across but invested some more time into selecting and learning a suitable technology stack.

In the end I decided to use [Gridsome][gridsome], a static site generator leveraging [Vue.js][vue] for the frontend and [GraphQL][graphql] as the data backend when generating the site. Gridsome is thereby a prime example of the [JAMstack][jamstack], a very modern and sensible way of building small to medium-sized websites with only a few selected dynamic elements through JavaScript APIs while keeping everything else static. After all that learning, decision taking and finally coding, I'm now really happy with this solution and I definitely want to write about it in greater detail in the future.

Feature-wise the current website provides what I think are the necessary basics, and it could still be extended in several ways, but for now I will stick to these basics and only look into new features when I get an urge to do it.

### Freelancer business

Since January I have been working as a freelancer. In Germany this means that I basically had to start a company, so I did that. I called it *subdiff : software system*, and the brand is still the domain name you are currently browsing. I used it before as this website's domain name and as an online nickname. It is derived from a mathematical concept and on the other hand stands for a slogan I find sensible on a practical level in work and life:

> Subtract the nonsense, differentiate what's left.

### Part of Valve's Open Source Group

As a freelancer I am contracted by Valve to work on certain gaming-related XServer projects and to improve KWin in this regard and for general desktop usage.

In the XServer there are two main projects at the moment. The technical details of one of them are currently being discussed on a work-in-progress patch series [on Gitlab][xserver-composite-accel-patch], but I want to write accessible articles about both projects here on the blog in the near future as well. In KWin I have several large projects I will look into, which would benefit KWin on X11 and Wayland alike. The most relevant one is [reworking the compositing pipeline][phab-comp-rework]. You can expect more info about this project and the other ones in KWin in future blog posts too.

### New code

While there are some big projects in the pipeline, I was also able to commit some major changes to KWin and Plasma in the last few months. The largest one was for sure [XWayland drag-and-drop support][xwl-dnd] in KWin. In the best case the user won't even notice this feature, because drag-and-drop between any relevant windows will from now on just work in our Wayland session. Inside KWin though, the technical solution enabling this was built up from the ground, and in a way such that we should later be able to easily support something like middle-click-paste between XWayland and Wayland native windows.

There were two other major initiatives by me that I was able to merge: the finalization of basing every display representation in KWin on the generic `AbstractOutput` class, and teaching Plasma's display management library, daemon and settings panel to [save display-individual values][kscreen-patch] in a consistent way by introducing a new communication channel between these components. The results of both enhancements are again supposed to be unnoticeable by the user, but they should improve the code structure and increase overall stability. There is more work lined up for display management which will then directly affect the interface; take a look at [this task][display-further-work-task] to see what I have planned.

So there is interesting work ahead. Luckily this week I am with my fellow KWin and Plasma developers at the Plasma and Usability sprint in Valencia to discuss and plan work on such projects. The sprint officially started yesterday and the first day was already very productive. We strive to keep up that momentum till the end of the sprint next week, and I plan on writing an article about the sprint results afterwards. In the meantime you can follow [@kdecommunity][twitter-kdecommunity] on Twitter if you want timely updates on our sprint while it's happening.

### Final remarks and prospect

I try to keep the articles in this blog rather prosaic and technical, but there are so many things moving forward and evolving right now that I want to spend a few paragraphs at the end on the opposite.

In every aspect there is just immense *potential* when looking at our open source graphics stack, consisting of KDE Plasma with KWin, at the moment still good old X but in the future Wayland, and the Linux graphics drivers below. While the advantages of free and open source software for the people were always obvious, how rapidly this type of software became the backbone of our global economy shows that it is immensely valuable for companies alike. In this context the opportunities for making use of our software offerings and improving them are endless, and the technical challenges we face when doing that are interesting. By this we can do our part so that the open source community will grow and flourish.

As a reader of these sentences you are already in a prime position to take part in this great journey by becoming an active member of the community through contributing. Maybe you already do this, for example by coding, designing, researching, donating or just by giving us feedback on how our technology can become better. But if you are not yet, this is a great time to get involved and bring in your individual talents and motivation to build up something great together, for ourselves and everybody. You can find out more about how to do that by visiting KDE's [Get Involved page][kde-involved] or join in on the ongoing discussion about KDE's [future goals][goals-blog].

[subdiff.de]: https://subdiff.de
[gridsome]: https://gridsome.org
[vue]: https://vuejs.org
[graphql]: https://graphql.org
[jamstack]: https://jamstack.org
[xserver-composite-accel-patch]: https://gitlab.freedesktop.org/xorg/xserver/merge_requests/211
[phab-comp-rework]: https://phabricator.kde.org/T11071
[xwl-dnd]: https://phabricator.kde.org/R108:548978bfe1f714e51af6082933a512d28504f7e3
[kscreen-patch]: https://phabricator.kde.org/T10028
[display-further-work-task]: https://phabricator.kde.org/T11095
[twitter-kdecommunity]: https://twitter.com/kdecommunity
[kde-involved]: https://community.kde.org/Get_Involved
[goals-blog]: http://blog.lydiapintscher.de/2019/06/09/evolving-kde-lets-set-some-new-goals-for-kde/
June 20, 2019
June 19, 2019

This is merely an update on the current status quo; if you read this post in a year's time, some of the details may have changed.

libinput provides an API to handle graphics tablets, i.e. the tablets that are used by artists. The interface is based around tools, each of which can be in proximity at any time. "Proximity" simply means "in detectable range". libinput promises that any interaction is framed by a proximity in and proximity out event pair, but getting to this turned out to be complicated. libinput has seen a few changes recently here, so let's dig into those. Remember that proverb about seeing what goes into a sausage? Yeah, that.

In the kernel API, the proximity events for pens are the BTN_TOOL_PEN bit. If it's 1, we're in proximity, if it's 0, we're out of proximity. That's the theory.

Wacom tablets (or rather the kernel driver) always reset all axes on proximity out. So libinput needs to take care not to send a 0 value to the caller, lest you want a jump to the top left corner every time you move the pen away from the tablet. Some Wacom pens have serial numbers and we use those to uniquely identify a tool. But some devices start sending proximity and axis events before we get the serial numbers which means we can't identify the tool until several ms later. In that case we simply discard the serial. This means we cannot uniquely identify those pens but so far no-one has complained.

A bunch of tablets (HUION) don't have proximity at all. For those, we start getting events and then stop getting events, without any other information. So libinput has a timer: if we don't get events for a given time, we force a proximity out. Of course, this means we also need to force a proximity in when the next event comes in. These tablets are common enough that recently we just enabled the proximity timeout for all tablets. Easier than playing whack-a-mole, doubly so because HUION re-uses USB ids so you can't easily identify them anyway.
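The core of that timer logic is simple enough to sketch; the timeout value and all names here are illustrative, not libinput's actual code:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PROX_TIMEOUT_MS 50  /* illustrative value */

struct tablet {
    bool in_prox;
    uint64_t last_event_ms;
};

static void emit_prox_in(void)  { printf("forced proximity in\n"); }
static void emit_prox_out(void) { printf("forced proximity out\n"); }

/* Called for every event from the device. */
static void handle_event(struct tablet *t, uint64_t now_ms)
{
    if (!t->in_prox) {
        emit_prox_in();             /* forced prox-in before first event */
        t->in_prox = true;
    }
    t->last_event_ms = now_ms;
    /* ... process the actual event ... */
}

/* Called from a periodic timer. */
static void check_timeout(struct tablet *t, uint64_t now_ms)
{
    if (t->in_prox && now_ms - t->last_event_ms > PROX_TIMEOUT_MS) {
        emit_prox_out();            /* forced prox-out after silence */
        t->in_prox = false;
    }
}

int main(void)
{
    struct tablet t = { false, 0 };
    handle_event(&t, 0);      /* first event forces a prox-in */
    check_timeout(&t, 100);   /* silence past the timeout forces a prox-out */
    return 0;
}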

Some tablets (HP Spectre 13) have proximity but never send it. So they advertise the capability, just don't generate events for it. Same handling as the ones that don't have proximity at all.

Some tablets (HUION) have proximity, but only send it once per plug-in, after that it's always in proximity. Since libinput may start after the first pen interaction, this means we have to a) query the initial state of the device and b) force proximity in/out based on the timer, just like above.

Some tablets (Lenovo Flex 5) sometimes send proximity out events, but sometimes do not. So for those we have a timer and forced proximity events, but only when our last interaction didn't trigger a proximity event.

The Dell Active Pen always sends a proximity out event, but with a delay of ~200ms. That timeout is longer than the libinput timeout so we'll get a proximity out event, but only after we've already forced proximity out. We can just discard that event.

The Dell Canvas pen (which identifies as "Wacom HID 4831 Pen") can have random delays of up to ~800ms in its event reporting, which would trigger forced proximity out events in libinput. Luckily it always sends proximity out events, so we could add a quirk to specifically disable the timer for it.

The HP Envy x360 sends a proximity in for the pen, followed by a proximity in from the eraser in the next event. This is still an unresolved issue at the time of writing.

That's the current state of things, I'm sure it'll change in a few months time again as more devices decide to be creative. They are artist's tools after all.

The lesson to take away here: all of the above are special cases that need to be implemented but this can only be done on demand. There's no way any one person can test every single device out there and testing by vendors is often nonexistent. So if you want your device to work, don't complain on some random forum, file a bug and help with debugging and testing instead.

June 18, 2019

We're on the road to he^libinput 1.14 and last week I merged the Dell Canvas Totem support. "Wait, what?" I hear you ask, and "What is that?". Good question - but do pay attention to random press releases more. The Totem (Dell.com) is a round knob that can be placed on the Dell Canvas. Which itself is a pen and touch device, not unlike the Wacom Cintiq range if you're familiar with those (if not, there's always lmgtfy).

The totem's intended use is as a secondary device: you place it on the screen while you're using the pen, and up pops a radial menu. You can rotate the totem to select items, click it to select something, and bang, you're smiling like a stock photo model eating lettuce. The radial menu is just an example UI, there are plenty of others. I remember reading papers about bimanual interaction with similar interfaces that dated back to the 80s, so there's a plethora to choose from. I'm sure someone at Dell has written Totem-Pong and if they have not, I really question their job priorities. The technical side is quite simple: the totem triggers a set of touches in a specific configuration, and when the firmware detects that arrangement it knows this isn't a finger but the totem.

Pen and touch we already handle well, but the totem required kernel changes and a few new interfaces in libinput. And that was the easy part, the actual UI bits will be nasty.

The kernel changes went into 4.19, and as usual you can throw noises of gratitude at Benjamin Tissoires. The new kernel API basically boils down to the ABS_MT_TOOL_TYPE axis sending MT_TOOL_DIAL whenever the totem is detected. That axis is (like others of the ABS_MT range) an odd one out: it doesn't work as an axis but rather as an enum that specifies the tool within the current slot. We already had finger, pen and palm; adding another enum value means, well, now we have a "dial". And that's largely it in terms of API - handle MT_TOOL_DIAL and you're good to go.
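If you want to see this at the evdev level, a minimal reader like the sketch below (requires kernel 4.19+ headers for MT_TOOL_DIAL) will report when the firmware flags a contact as a dial:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <linux/input.h>

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s /dev/input/eventN\n", argv[0]);
        return 1;
    }

    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    struct input_event ev;
    while (read(fd, &ev, sizeof(ev)) == sizeof(ev)) {
        if (ev.type == EV_ABS && ev.code == ABS_MT_TOOL_TYPE &&
            ev.value == MT_TOOL_DIAL)
            printf("totem detected in the current slot\n");
    }

    return 0;
}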

libinput's API is only slightly more complicated. The tablet interface has a new tool type called the LIBINPUT_TABLET_TOOL_TYPE_TOTEM and a new pair of axes for the tool, the size of the touch ellipse. With that you can get the position of the totem and the size (so you know how big the radial menu needs to be). And that's basically it in regards to the API. The actual implementation was a bit more involved, especially because we needed to implement location-based touch arbitration first.
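On the client side, handling the totem then looks roughly like the following sketch against the new API (assuming libinput 1.14; the size getters are the new pair of axes mentioned above):

#include <libinput.h>

/* Called for LIBINPUT_EVENT_TABLET_TOOL_* events. */
static void handle_tool_event(struct libinput_event *event)
{
    struct libinput_event_tablet_tool *tev =
        libinput_event_get_tablet_tool_event(event);
    struct libinput_tablet_tool *tool =
        libinput_event_tablet_tool_get_tool(tev);

    if (libinput_tablet_tool_get_type(tool) !=
        LIBINPUT_TABLET_TOOL_TYPE_TOTEM)
        return;

    double x = libinput_event_tablet_tool_get_x(tev);
    double y = libinput_event_tablet_tool_get_y(tev);
    double major = libinput_event_tablet_tool_get_size_major(tev);

    /* ... place a radial menu at (x, y), sized from 'major' ... */
    (void)x; (void)y; (void)major;
}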

I haven't started on the Wayland protocol additions yet but I suspect they'll look the same as the libinput API (the Wayland tablet protocol is itself virtually identical to the libinput API). The really big changes will of course be in the toolkits and the applications themselves. The totem is not a device that slots into existing UI paradigms, it requires dedicated support. Whether this will be available in your favourite application is likely going to be up to you. Anyway, christmas in July [1] is coming up so now you know what to put on your wishlist.

[1] yes, that's a thing. Apparently christmas with summery temperature, nice weather, sandy beaches is so unbearable that you have to re-create it in the misery of winter. Explains everything you need to know about humans, really.

Even if you are not a gamer, odds are that you have already heard about Vulkan, the graphics and compute API that provides high-efficiency, cross-platform access to modern GPUs. This API is designed by the Khronos Group, and it is supported by a new set of drivers specifically designed to implement the different functions and features defined by the spec (at the time of writing this post, version 1.1).

Vulkan

In order to guarantee that the drivers work according to the spec, they need to pass a conformance test suite that ensures they do what is expected of them. VK-GL-CTS is the name of the conformance test suite used to certify conformance for both the Vulkan and OpenGL APIs and… it is open-source!

VK-GL-CTS

As part of my daily job at Igalia, I contribute to VK-GL-CTS: fixing bugs, improving existing tests, and even writing new tests for a variety of extensions. In this post I am going to describe some of the work I have been doing in the last few months.

VK_EXT_host_query_reset

This extension gives you the opportunity to reset queries outside a command buffer, which is a fast way of doing it once your application has finished reading the queries' data. All you need is to call the vkResetQueryPoolEXT() function. There are several Vulkan drivers already supporting this extension on GNU/Linux (NVIDIA, and the open-source drivers AMDVLK, RADV and ANV), and probably more on other platforms.
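For example, assuming the extension was enabled at device creation and that device, queryPool and queryCount are already set up, a host-side reset is a one-liner (sketch):

/* Fetch the entry point once; the extension does not require a
 * command buffer, so this runs on the host. */
PFN_vkResetQueryPoolEXT pfnResetQueryPool =
    (PFN_vkResetQueryPoolEXT)
        vkGetDeviceProcAddr(device, "vkResetQueryPoolEXT");

/* After the application has read back the results for the queries: */
pfnResetQueryPool(device, queryPool,
                  0 /* firstQuery */, queryCount);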

I have implemented tests for all the different query types: occlusion queries, pipeline timestamp queries and statistics queries. Tests for transform feedback stream queries landed a bit later.

VK_EXT_discard_rectangles

VK_EXT_discard_rectangles provides a way to define rectangles in framebuffer-space coordinates that discard rasterization of all points, lines and triangles that fall inside (exclusive mode) or outside (inclusive mode) of their area. You can regard this feature as something similar to scissor testing but it operates orthogonally to the existing scissor test functionality.

It is easier to understand with an example. Imagine that you want to do the following in your application: clear the color attachment to red, then draw a green quad covering the whole attachment, but define a discard rectangle so that rasterization of the quad is restricted to the rectangle's area.

For that, you define the discard rectangles at pipeline creation time, for example (it is possible to define them dynamically too); as we want to restrict the rasterization of the quad to the area defined by the discard rectangle, we set its mode to VK_DISCARD_RECTANGLE_MODE_INCLUSIVE_EXT.
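A sketch of that pipeline setup (the rectangle's coordinates are illustrative):

/* One discard rectangle in inclusive mode: only fragments inside the
 * rectangle survive rasterization. */
VkRect2D discard_rect = {
    .offset = { 100, 100 },
    .extent = { 200, 200 },
};

VkPipelineDiscardRectangleStateCreateInfoEXT discard_info = {
    .sType = VK_STRUCTURE_TYPE_PIPELINE_DISCARD_RECTANGLE_STATE_CREATE_INFO_EXT,
    .discardRectangleMode  = VK_DISCARD_RECTANGLE_MODE_INCLUSIVE_EXT,
    .discardRectangleCount = 1,
    .pDiscardRectangles    = &discard_rect,
};

VkGraphicsPipelineCreateInfo pipeline_info = {
    .sType = VK_STRUCTURE_TYPE_GRAPHICS_PIPELINE_CREATE_INFO,
    .pNext = &discard_info,   /* chain the extension struct here */
    /* ... the rest of the usual pipeline state ... */
};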

VK_EXT_discard_rectangles inclusive mode example

If we want to discard the rasterization of the green quad inside the area defined by the discard rectangle, then we set VK_DISCARD_RECTANGLE_MODE_EXCLUSIVE_EXT mode at pipeline creation time and that’s all. Here you have the output for this case:

VK_EXT_discard_rectangles exclusive mode example

You are not limited to defining just one discard rectangle: drivers supporting this extension must support a minimum of 4 discard rectangles, and some drivers may support more. As this feature works orthogonally to other tests like the scissor test, you can do fancy things in your app :-)

The tests I developed for VK_EXT_discard_rectangles extension are already available in VK-GL-CTS repo. If you want to test them on an open-source driver, right now only RADV has implemented this extension.

VK_EXT_pipeline_creation_feedback

VK_EXT_pipeline_creation_feedback is another example of a useful feature for application developers, especially game developers. This extension gives you a way to know, at pipeline creation, whether the pipeline hit the provided pipeline cache, the time consumed to create it, or even which shader stages hit the cache. This feedback about pipeline creation can help to improve the pipeline caches that are shipped to users, with the final goal of reducing load times.
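Requesting the feedback is again a matter of chaining a struct into pipeline creation; here is a sketch (assuming a pipeline with two shader stages):

/* One feedback slot for the whole pipeline, one per shader stage. */
VkPipelineCreationFeedbackEXT pipeline_feedback = {0};
VkPipelineCreationFeedbackEXT stage_feedbacks[2] = {{0}, {0}};

VkPipelineCreationFeedbackCreateInfoEXT feedback_info = {
    .sType = VK_STRUCTURE_TYPE_PIPELINE_CREATION_FEEDBACK_CREATE_INFO_EXT,
    .pPipelineCreationFeedback          = &pipeline_feedback,
    .pipelineStageCreationFeedbackCount = 2,
    .pPipelineStageCreationFeedbacks    = stage_feedbacks,
};

/* Chain feedback_info into VkGraphicsPipelineCreateInfo.pNext, create
 * the pipeline, then inspect the results: */
if ((pipeline_feedback.flags & VK_PIPELINE_CREATION_FEEDBACK_VALID_BIT_EXT) &&
    (pipeline_feedback.flags &
     VK_PIPELINE_CREATION_FEEDBACK_APPLICATION_PIPELINE_CACHE_HIT_BIT_EXT)) {
    /* cache hit; pipeline_feedback.duration holds the creation time in ns */
}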

Tests for VK_EXT_pipeline_creation_feedback extension have made their way into VK-GL-CTS repo. Good news for the ones using open-source drivers: both RADV and ANV have implemented the support for this extension!

Conclusions

Since I started working in the Graphics team at Igalia, I have been contributing code to Mesa drivers for both OpenGL and Vulkan, adding new tests to Piglit, and improving VkRunner, among other contributions.

Now I am contributing to increasing VK-GL-CTS coverage by developing new tests for extensions and fixing existing tests, among other things. This work also involves developing patches for the Vulkan Validation Layers, fixes for glslang, and more things to come. In summary, I am really enjoying contributing to the open-source ecosystem created by the Khronos Group as part of my daily work!

Note: if you are a student and you want to start contributing to open-source projects, don't miss our Igalia Coding Experience program (more info on our website).

Igalia

Even if you are not a gamer, odds are that you already heard about Vulkan graphics and compute API that provides high-efficency, cross-platform access to modern GPUs. This API is designed by the Khronos Group and it is supported by a new set of drivers specifically designed to implement the different functions and features defined by the spec (at the time of writing this post, it is version 1.1).

Vulkan

In order to guarantee that the drivers work according to the spec, drivers need to pass a conformance test suite that ensures they do what it is expected from them. VK-GL-CTS is the name of the conformance test suite used for certify the conformance on both Vulkan and OpenGL APIs and… it is open-source!

VK-GL-CTS

As part of my daily job at Igalia, I contribute to VK-GL-CTS from fixing some bugs, improving existing tests or even writing new tests for a variety of extensions. In this post I am going to describe some of the work I have been doing in the last few months.

VK_EXT_host_query_reset

This extension gives you the opportunity to reset queries outside a command buffer, which is a fast way of doing it once your application has finished reading query’s data. All that you need is to call vkResetQueryPoolEXT() function. There are several Vulkan drivers supporting already this extension on GNU/Linux (NVIDIA, open-source drivers AMDVLK, RADV and ANV) and probably more in other platforms.

I have implemented tests for all the different query types: occlusion queries, pipeline timestamp queries and statistics queries. Tests for transform feedback stream queries landed a bit later.

VK_EXT_discard_rectangles

VK_EXT_discard_rectangles provides a way to define rectangles in framebuffer-space coordinates that discard rasterization of all points, lines and triangles that fall inside (exclusive mode) or outside (inclusive mode) of their area. You can regard this feature as something similar to scissor testing but it operates orthogonally to the existing scissor test functionality.

It is easier to understand with an example. Imagine that you want to do the following in your application: clear the color attachment to red, then draw a green quad covering the whole attachment, but define a discard rectangle in order to restrict the rasterization of the quad to the area defined by that rectangle.

For that, you define the discard rectangles at pipeline creation time, for example (it is possible to define them dynamically too); as we want to restrict the rasterization of the quad to the area defined by the discard rectangle, we set its mode to VK_DISCARD_RECTANGLE_MODE_INCLUSIVE_EXT.
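
In code, this boils down to chaining one extra structure into the pipeline create info. A minimal sketch (variable names are mine and the rest of the pipeline state is omitted):

VkRect2D discardRect = {
    .offset = { 128, 128 },
    .extent = { 256, 256 },   /* framebuffer-space coordinates */
};

VkPipelineDiscardRectangleStateCreateInfoEXT discardInfo = {
    .sType = VK_STRUCTURE_TYPE_PIPELINE_DISCARD_RECTANGLE_STATE_CREATE_INFO_EXT,
    .discardRectangleMode = VK_DISCARD_RECTANGLE_MODE_INCLUSIVE_EXT,
    .discardRectangleCount = 1,
    .pDiscardRectangles = &discardRect,
};

VkGraphicsPipelineCreateInfo pipelineInfo = {
    .sType = VK_STRUCTURE_TYPE_GRAPHICS_PIPELINE_CREATE_INFO,
    .pNext = &discardInfo,
    /* ...shader stages, render pass, etc. as usual... */
};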

VK_EXT_discard_rectangles inclusive mode example

If we want to discard the rasterization of the green quad inside the area defined by the discard rectangle, then we set VK_DISCARD_RECTANGLE_MODE_EXCLUSIVE_EXT mode at pipeline creation time and that’s all. Here you have the output for this case:

VK_EXT_discard_rectangles exclusive mode example

You are not limited to defining just one discard rectangle: drivers supporting this extension must support a minimum of 4 discard rectangles, but some drivers may support more. As this feature works orthogonally to other tests like the scissor test, you can do fancy things in your app :-)

The tests I developed for the VK_EXT_discard_rectangles extension are already available in the VK-GL-CTS repo. If you want to test them on an open-source driver, right now only RADV has implemented this extension.

VK_EXT_pipeline_creation_feedback

VK_EXT_pipeline_creation_feedback is another example of a useful feature for application developers, especially game developers. This extension provides a way to know, at pipeline creation time, whether the pipeline hit the provided pipeline cache, the time consumed to create it, or even which shader stages hit the cache. This feedback about pipeline creation can help to improve the pipeline caches that are shipped to users, with the final goal of reducing load times.
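
As a rough sketch of how an application would request and inspect this feedback (my own example, not code from the CTS tests; the surrounding pipeline creation code is assumed):

VkPipelineCreationFeedbackEXT pipelineFeedback = { 0 };
VkPipelineCreationFeedbackEXT stageFeedbacks[2] = { 0 };   /* e.g. vertex + fragment */

VkPipelineCreationFeedbackCreateInfoEXT feedbackInfo = {
    .sType = VK_STRUCTURE_TYPE_PIPELINE_CREATION_FEEDBACK_CREATE_INFO_EXT,
    .pPipelineCreationFeedback = &pipelineFeedback,
    .pipelineStageCreationFeedbackCount = 2,
    .pPipelineStageCreationFeedbacks = stageFeedbacks,
};

/* Chain feedbackInfo into VkGraphicsPipelineCreateInfo::pNext, create the
   pipeline, then check the flags and the duration (in nanoseconds): */
if ((pipelineFeedback.flags & VK_PIPELINE_CREATION_FEEDBACK_VALID_BIT_EXT) &&
    (pipelineFeedback.flags &
     VK_PIPELINE_CREATION_FEEDBACK_APPLICATION_PIPELINE_CACHE_HIT_BIT_EXT))
    printf("pipeline cache hit, creation took %llu ns\n",
           (unsigned long long)pipelineFeedback.duration);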

Tests for the VK_EXT_pipeline_creation_feedback extension have made their way into the VK-GL-CTS repo. Good news for the ones using open-source drivers: both RADV and ANV have implemented support for this extension!

Conclusions

Since I started working in the Graphics team at Igalia, I have been contributing code to Mesa drivers for both OpenGL and Vulkan, adding new tests to Piglit, and improving VkRunner, among other contributions.

Now I am contributing to increasing VK-GL-CTS coverage by developing new tests for extensions and fixing existing tests, among other things. This work also involves developing patches for the Vulkan Validation Layers, fixes for glslang, and more things to come. In summary, I am really enjoying contributing to the open-source ecosystem created by the Khronos Group as part of my daily work!

Note: if you are a student and you want to start contributing to open-source projects, don't miss our Igalia Coding Experience program (more info on our website).

Igalia

June 13, 2019
June 12, 2019
There have been questions about the Fedora 30 Flicker Free Boot Change in various places, here is a FAQ which hopefully answers most questions:

1) I get a black screen for a couple of seconds during boot?

1a) If you have an AMD or Nvidia GPU driving your screen, then this is normal. The graphics drivers for AMD and Nvidia GPUs reset the hardware when loading, which causes the display to temporarily go black. There is nothing that can be done about this.

1b) If you have a somewhat older Intel GPU (your CPU is pre-Skylake), then the i915 driver's support for skipping the mode-reset is disabled by default (for now). To fix this, add "i915.fastboot=1" to your kernel commandline. For more info on modifying the kernel commandline, see question 6.

1c) Do "ls /sys/firmware/efi/efivars" if you get a "No such file or directory" error then your system is booting in classic BIOS mode instead of UEFI mode, to fix this you need to re-install and boot the livecd/installer in UEFI mode when installing. Alternatively you can try to convert your existing install, note this is quite tricky, make backups first!

1d) Your system may be using the classic VGA BIOS during boot despite running in UEFI mode. Often you can select BIOS compatibility mode, aka the CSM setting, in your BIOS settings. If you can select this at a per-component level, set the VIDEO/VGA option to "UEFI only" or "UEFI first"; alternatively you can try completely disabling the CSM mode.

2) I get a black background instead of the firmware splash while Fedora is booting?

Do "ls /sys/firmware/acpi/bgrt" if you get a "No such file or directory" error then try answers 1c and 1d . If you do have a /sys/firmware/acpi/bgrt directory, but you are still getting the Fedora logo + spinner on a black background instead of on top of the firmware-splash, please file a bug about this and drop me a mail with a link to the bug.

3) Getting rid of the vendor-logo/firmware-splash shown while Fedora is booting?

If you don't want the firmware-splash to be used as the background during boot, you can switch plymouth to the spinner theme, which is identical to the new bgrt theme except that it does not use the firmware-splash as background. To do this, execute the following command from a terminal: "sudo plymouth-set-default-theme -R spinner"

Note that the kernel will restore the vendor-logo early on at boot in case it got damaged by e.g. option ROM messages. If you are switching to the spinner theme you may also want to add "video=efifb:nobgrt" to your kernel commandline. See 6) below for how to edit the kernel commandline.

4) Keeping the firmware-splash as background while unlocking the disk?

If you prefer this, it is possible to keep the firmware-splash as the background while the disk-encryption password prompt is shown. To do this, do the following:

  1. "sudo mkdir /usr/share/plymouth/themes/mybgrt"

  2. "sudo cp /usr/share/plymouth/themes/bgrt/bgrt.plymouth /usr/share/plymouth/themes/mybgrt/mybgrt.plymouth"

  3. edit /usr/share/plymouth/themes/mybgrt/mybgrt.plymouth, change DialogClearsFirmwareBackground=true to DialogClearsFirmwareBackground=false, change DialogVerticalAlignment=.382 to DialogVerticalAlignment=.6

  4. "sudo plymouth-set-default-theme -R mybgrt"

Note: if you do this, the disk-passphrase entry dialog may be partially drawn over the vendor-logo part of the firmware-splash; if this happens, try increasing DialogVerticalAlignment to e.g. 0.7.

5) Get detailed boot progress instead of the boot-splash?

To get detailed boot progress info press ESC during boot.

6) Always get detailed boot progress instead of the boot-splash?

To always get detailed boot progress instead of the boot-splash, remove "rhgb" from your kernel commandline:

Edit /etc/default/grub and remove rhgb from GRUB_CMDLINE_LINUX. Then, if you are booting using UEFI (see 1c), run:
"grub2-mkconfig -o /etc/grub2-efi.cfg"
Otherwise (if you are booting using classic BIOS boot), run:
"grub2-mkconfig -o /etc/grub2.cfg".
June 06, 2019
June 05, 2019

Hello, I’m Alyssa Rosenzweig, a student, the lead developer of the open-source Panfrost graphics driver… and now a Collaboran!

Years ago, I joined the open-source community with a passion and a mission: to enable equal access to high-quality computing via open-source software. With this mission, I co-founded Panfrost, aiming to create an open-source driver for the Mali GPU. Before Panfrost, users of Mali GPUs required a proprietary blob, restricting their ability to use their machines as they saw fit. Some users were unable to run Linux, their operating system of choice, with the display system of their choosing, simply because there were no blobs available for their particular configuration. Others wished to use an upstream kernel; yet others held a deep philosophical belief in free and open-source software. To each user's driver problem, Panfrost seeks to provide a solution.

Days ago, I joined Collabora with the same passion and the same mission. Collabora was founded on an “open first” model, sharing my personal open source conviction. Collabora’s long-term vision is to let open-source software blossom throughout computing, fulfilling my own dream of an open-source utopia.

With respect to graphics, Collabora has shared my concerns. After all, we’re all on “Team Open Source” together! Collabora’s partners make awesome technology, often containing a Mali GPU, and they need equally awesome graphics drivers to power their products and empower their users. Our partners and our users asked, and Panfrost answered.

At Collabora, I am now a full-time Software Engineering Intern, continuing throughout the summer to work on Panfrost. I’m working alongside other veteran Panfrost contributors like Collaboran Tomeu Vizoso, united with open-source community members like Ryan Houdek. My focus will be improving Panfrost’s OpenGL ES 2.0 userspace, to deliver a better experience to Panfrost users. By the end of the summer, we aim to bring the driver to near conformance, to close any performance gaps, and through this work, to get GNOME Shell working fluidly on supported Mali hardware with only upstream, open-source software!

Supporting GNOME in Panfrost is a task of epic proportions, a project dream since day #1, yet ever distant on the horizon. But at Collabora, we're always up for the challenge.

Originally posted on Collabora’s blog

May 29, 2019
May 23, 2019
May 22, 2019
Thank you all for the large amount of feedback I have received after my previous Wayland Itches blog post. I've received over 40 mails; below is an attempt at summarizing them.

Highlights

1. Middle click on the title / header bar to lower the window does not work for native apps. Multiple people have reported this issue to me. A similar issue, not being able to raise windows this way, was fixed earlier, and it should be easy to apply a similar fix for the lowering problem. There are bugs open for this here, here and here.

2. Running graphical apps via sudo or pkexec does not work. There are numerous examples of apps breaking because of this, such as lshw-gui and usbview. At least for X11 apps this is not that hard to fix, but so far this has deliberately not been fixed. The reasoning behind this is described in this bug. I agree with that reasoning, but I think it is not pragmatic to immediately disallow all GUI apps from connecting when run as root, starting today.

We need some sort of transition period. So when I find some time for this, I plan to submit a merge-request which optionally makes gnome-shell/mutter start Xwayland with an xauth file, like how it is done when running in GNOME on Xorg mode. This will be controlled by a gsettings option, which will probably default to off upstream; distros can then choose to override this for now, giving us a transition period.

Requests for features implemented as external programs on X11

There are various features which can be implemented as external programs
on X11, but because of the tighter security need to be integrated into the
compositor with Wayland:

  • Hiding of the mouse-cursor when not used à la unclutter-xfixes, xbanish.

  • Rotating screen 90 / 270 degrees à la "xrandr -o [left|right]" mostly used through custom hotkeys, possible fix is defining bindable actions for this in gsd-media-keys.

  • Mapping actions to mouse buttons à la easystroke

  • Some touchscreens, e.g. so-called smart-screens for education, need manual calibration. Under X11 there are some tools to get the calibration matrix for the touchscreen, after which this can be manually applied through xinput. Even under X11 this currently is far from ideal, but at least it is possible there.

  • Keys Indicator gnome-shell extension. This still works when using Wayland, but only for apps using Xwayland; it does not work for native apps.

  • Some sort of xkill and xdotool utility equivalents would be nice

  • The GNOME on-screen keyboard is not really suitable for use with apps which are not touch-enabled, as it lacks a way to send ctrl + key, etc. Because of this, some users have reported that it is impossible to use alternative on-screen keyboards with Wayland. Not being able to use alternative on-screen keyboards is by design, and IMHO the proper fix here is to improve GNOME's on-screen keyboard.

App specific problems


  • Citrix ICA Client does not work well with Xwayland

  • Eclipse does not work well with Xwayland

  • Teamviewer does not work with Wayland. It needs to be updated to use pipewire for screencapturing and the RemoteDesktop portal to inject keyboard and mouse events.

  • Various apps lack screenrecording / capture support due to the app not having support for pipewire: gImageReader, green-recorder, OBS studio, peek, screenrecorder, slack

  • For apps which do support pipewire, there is no option to share the contents of a window other than the window making the request. On Xorg it is possible to share a random window, and since pipewire allows sharing the whole desktop I see no security reason why we would not allow sharing another window.

  • guake window has incorrect size when using HiDPI scaling, see this issue

Miscellaneous problems


  • Mouse cursor is slow / lags

  • Drag and drop sometimes does not work, e.g. dragging files into file roller to compress or out of file roller to extract.

  • Per-keyboard layouts. On X11, after plugging in a keyboard, the layout/keymap for just that one keyboard can be updated manually using xinput, allowing different keyboard layouts for different keyboards when multiple keyboards are connected.

  • No-title-bar shell extension, X button can be hit unintentionally, see this issue

  • Various issues with keyboard layout switching

Hard to fix issues


  • Alt-F2, r equivalent (restart the gnome-shell)

  • X11 apps running on top of Xwayland do not work well on HiDPI screens

  • Push-to-talk (passive key grab on space) does not work in Mumble when using native Wayland apps, see this issue

Problems with compositors other than GNOME3 / mutter

I've also received several reports about issues when using a Wayland compositor other than GNOME / mutter (Weston, KDE, Sway). I'm sorry, but I have not looked very closely into these reports. I believe it is great that Linux users have multiple Desktop Environments to choose from and I wish for the other DEs to thrive. But there are only so many hours in a day, so I've chosen to mainly focus on GNOME.
First of all, I do not want people to get their hopes up about the $subject of this blogpost. Improving gaming support is a subject which holds my personal interest, and it is an issue I plan to spend time on trying to improve. But this will take a lot of time (think months for simple things, years for more complex things).

As I see it there are currently 2 big issues when running games under Wayland:

1. Many games show as a small centered image with a black border (letterbox) around the image when running fullscreen.

For 2D games this is fixed by switching to SDL2, which will transparently scale the pixmap the game renders to the desktop resolution. This assumes that 2D games in general do not demand a lot of performance and thus will not run into performance issues when introducing an extra scaling step. A problem here is that many games still use SDL1.2 (and some games do not use SDL at all).

I plan to look into the recently announced SDL1.2 compatibility wrapper around SDL2. If this works well, it should fix this issue for all SDL1.2 2D games by making them use SDL2 under the hood.

For 3D games this can be fixed by rendering at the desktop resolution, but that might be slow, while rendering at a lower resolution leads to the letterbox issue.

Recently mutter has grown support for the WPviewport extension, which allows Wayland apps to tell the compositor to scale the pixmap the app gives to the compositor before presenting it. If we add support for this to SDL2's Wayland backend, it can be used to render 3D apps at a lower resolution and still have them fill the entire screen.

Unfortunately there are 2 problems with this plan:

  1. SDL2 by default uses its x11 backend, not its wayland backend. I'm not sure what fixes need to be done to change this; at a minimum we need a fix on either the SDL or the mutter side for this issue, which is going to be tricky.

  2. This only helps for SDL2 apps; again, hopefully the SDL1.2 compatibility wrapper for SDL2 can help here, at least for games using SDL.

2. Fullscreen performance is bad with many games.

Since under Wayland games cannot change the monitor resolution, they need to either render at the full desktop resolution, which can be very slow, or render at a lower resolution and then do an extra scaling step each frame.

If we manage to make SDL2's Wayland backend the default and then add WPviewport support to it, this should help by removing an extra memcpy/blit of a desktop-sized pixmap. Currently, apps which use scaling do the following:

  1. render lower-res-pixmap;

  2. scale lower-res-pixmap to desktop-res-pixmap

  3. give desktop-res-pixmap to the compositor;

  4. compositor does a hardware blit of the desktop-res-pixmap to the framebuffer.

With viewport support this becomes:

  1. render lower-res-pixmap;

  2. give low-res-pixmap to the compositor;

  3. compositor uses hardware to do a scaling blit from the low-res-pixmap to the desktop-res framebuffer
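
To make this concrete, here is a minimal sketch of what step 2 could look like from the client side, in C, using the viewporter protocol (the header is assumed to be generated from the protocol XML with wayland-scanner; the function and variable names besides the protocol calls are mine):

#include <wayland-client.h>
#include "viewporter-client-protocol.h"

static void scale_to_desktop(struct wp_viewporter *viewporter,
                             struct wl_surface *surface,
                             int32_t desktop_w, int32_t desktop_h)
{
    /* The attached buffer stays at the game's (lower) render resolution;
       the compositor scales it to desktop_w x desktop_h on presentation. */
    struct wp_viewport *viewport =
        wp_viewporter_get_viewport(viewporter, surface);
    wp_viewport_set_destination(viewport, desktop_w, desktop_h);
    wl_surface_commit(surface);
}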

Also with viewport support, in the case where there is only the one fullscreen app, the compositor could even keep the framebuffer in lowres and use a hardware scaling drm-plane to send the low-res framebuffer, scaled to desktop-res, to the output, while only reading the low-res framebuffer from memory, saving a ton of memory bandwidth. But this optimization is going to be a challenge to pull off.
May 21, 2019
The just released 5.2-rc1 kernel includes improved support for Logitech wireless keyboards and mice. Until now we were relying on the generic HID keyboard and mouse emulation for 27 MHz and non-unifying 2.4 GHz wireless receivers.

Starting with the 5.2 kernel we instead actually look at the devices behind the receiver. This allows us to provide battery monitoring support and to have per-device quirks, like device-specific HID-code to evdev-code mappings where necessary. Until now, device-specific quirks were not possible because the receivers have a generic product-id which is the same independent of the device behind the receiver.

The per-device key-mapping is especially important for 27 MHz wireless devices: these use the same HID-codes for Fn + F1 to Fn + F12 on all devices, but the markings on the keys differ per model. So far it was impossible for Linux to get this mapping right, but now that we have per-device product-ids for the devices behind the receiver, we can finally fix it. As is the case with other devices with vendor-specific mappings, the actual mapping is done in userspace through hwdb.

If you have a 27 MHz device (often using this receiver, keyboard marked as canada 210 or canada 310 at the bottom), please give 5.2 a try. Download the latest 60-keyboard.hwdb file and place it in /lib/udev/hwdb.d (replacing the existing file), then run "sudo udevadm hwdb --update" before booting into the 5.2 kernel. Then run "sudo evemu-record", select your keyboard, and try Fn + F1 to Fn + F12 and any other special keys. If any keys do not work, edit 60-keyboard.hwdb, search for Logitech, and add an entry for your keyboard (see the existing Logitech entries). After editing you need to re-run "sudo udevadm hwdb --update", followed by "sudo udevadm trigger", for the changes to take effect. Once you have a working entry, submit a pull-req to systemd to get the changes upstream. If you need any help, drop me an email.

We still have some old code doing generic HID emulation for 27 MHz receivers with a product-id of c50c; these should work fine with the new code, but we've been unable to test this. I would really like to move the c50c id over to the new code and remove all the old code. If you have a 27 MHz Logitech device, please run lsusb; if your device has a product-id of c50c and you are willing to test, please drop me an email.

Likewise, I suspect that 2.4 GHz receivers with a product-id of c531 should work fine with the new support for non-unifying 2.4 GHz receivers; if you have one of those, please also drop me an email.
May 20, 2019
May 18, 2019
May 14, 2019
We have a job opening in our team. It's a pretty senior role; we definitely want people with lots of experience. Great place to work, ignore any possible future mergers :-)

https://global-redhat.icims.com/jobs/68911/principal-software-engineer/job?mobile=false&width=1526&height=500&bga=true&needsRedirect=false&jan1offset=600&jun1offset=600
Now that GNOME3 on Wayland is the default in Fedora I've been trying to use this as my default desktop, but until recently I've kept falling back to GNOME3 on Xorg because of various small issues.

To fix this I've switched to using GNOME3 on Wayland as my day-to-day desktop, and I'm working on fixing any issues this causes as I hit them, aka "The Wayland Itches project". So far I've hit and fixed the following issues:

1. TopIcons

The TopIcons extension, which I depend on for some of my workflow, was not working well under Wayland with GNOME 3.30: only the top row of icons was clickable. This was fixed in GNOME 3.32, but with GNOME 3.32 using TopIcons was causing gnome-shell to go into a loop, leading to a very high CPU load. The day I wanted to start looking into fixing this, I was chatting with Carlos Garnacho and he pointed out to me that this had been fixed in gnome-shell a couple of days earlier. The fix for this is in gnome-shell 3.32.2.

2. Hotkeys/desktop shortcuts not working in VirtualBox Virtual Machines

When running a VirtualBox VM under GNOME3 on Wayland, hotkeys such as alt+tab go to the GNOME3 desktop rather than being forwarded to the VM, as happens under Xorg. This can be fixed by changing 2 settings:

  gsettings set org.gnome.mutter.wayland xwayland-allow-grabs true
  gsettings set org.gnome.mutter.wayland xwayland-grab-access-rules "['VirtualBox Machine']"

This is a decent workaround, but we want things to "just work" of course, so we have been working on some changes to make this just work in the next GNOME version.

3. firefox-wayland

I've also been trying to use firefox-wayland as my day-to-day browser. This has led to me filing three firefox bugs, and I've switched back to regular firefox (x11) for now.


If you have any Wayland Itches yourself, please drop me an email at hdegoede@redhat.com explaining them in as much detail as you can and I will see what I can do. Note that I typically get a lot of emails when asking for feedback like this, so I cannot promise that I will reply to every email; but I will be reading them all.
May 07, 2019

Once upon a time a driver was written for the Lenovo Ideapad firmware interface for handling special keys and rfkill functionality. This driver was written on an Ideapad laptop with a slider on the side to turn wifi on/off, a so-called hardware rfkill switch. Sometime later a Yoga model using the same firmware interface showed up, without a hardware rfkill switch. It turns out that in this case the firmware interface reports the non-present switch as always being in the off position, causing NetworkManager to not even try to use the wifi, effectively breaking wifi.

So I added a dmi blacklist for models without a hardware rfkill switch. The same firmware interface is still used on new Ideapad and Yoga models, and since most modern laptops typically do not have such a switch, this dmi blacklist has been growing and growing. In the 5.1 kernel alone, 5 new models were added. Worse, as mentioned, not being on the list leads to non-working wifi for a model without the hardware switch, pretty much meaning any new Ideapad model does not work with Linux until it is added to the list.

To fix this I've submitted a patch upstream turning the huge blacklist into a whitelist. This whitelist is empty for now, meaning that we treat all models as not having an rfkill switch. This does lead to a small regression on models which actually do have a hardware rfkill switch: before this commit they would correctly report the wifi being disabled by the hw switch, and e.g. the GNOME3 UI would report "wifi disabled in hardware", whereas now users will just get an empty list of available wifi networks when the switch is in the off position. But this is a small price to pay to make sure that as-of-yet unknown and new Ideapad models do not have non-working wifi because of this issue.
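
For illustration, this is roughly what an entry in such a DMI match table looks like in the kernel (a hypothetical example modeled on the existing ideapad-laptop tables, not the actual patch):

static const struct dmi_system_id hw_rfkill_list[] = {
    {
        /* hypothetical entry; real entries are built from dmidecode data */
        .ident = "Lenovo G40-30",
        .matches = {
            DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
            DMI_MATCH(DMI_PRODUCT_VERSION, "Lenovo G40-30"),
        },
    },
    {}
};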

As said, the whitelist for models which do actually have a hardware rfkill switch is currently empty, so I need your help to fill it. If you have an Ideapad or Yoga laptop with a wifi on/off slider switch on it, please run "rfkill list". If the output contains "ideapad_wlan", then you are using the ideapad-laptop driver. In this case please check that the "Hard blocked" setting for the "ideapad_wlan" rfkill device properly shows no / yes based on the switch position. If this works, your model should be added to the new whitelist. For this please run "sudo dmidecode &> dmidecode.log" and send me an email at hdegoede@redhat.com with the dmidecode.log attached.

Note: the patch changing the list to a whitelist has been included in the Fedora kernels starting with kernel 5.0.10-300. So if you have an Ideapad or Yoga running Fedora and you do see "ideapad_wlan" in the "rfkill list" output, but the "Hard blocked" setting does not respond to the switch, try with a kernel older than 5.0.10-300; let me know if you need help with this.

May 03, 2019

A previous post introduced the SPURV Android compatibility layer for Wayland based Linux environment.
In this post we're going to dig into how you can run an Android application on the very common i.MX6 based Nitrogen6_MAX board from Boundary Devices.

Install dependencies

sudo apt install \
    apt-transport-https \
    bmap-tools \
    ca-certificates \
    curl \
    git \
    gnupg2 \
    repo \
    software-properties-common \
    u-boot-tools \
    qemu-kvm

Set up Docker container for building

# Install Docker
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable"
sudo apt update
sudo apt install docker-ce

# Set up privileges for Docker
sudo usermod -aG docker ${USER}
su - ${USER}

# Fetch Docker image
docker pull godebos/debos:latest

Build

Build Android

mkdir android; cd android
repo …
May 02, 2019

lwn.net just featured an article on the sustainability of open source, which seems to have been a bit of a topic in various places for a while. Last year I gave a keynote at the Siemens Linux Community Event 2018 which lends itself to a different take on all this:

The slides for those who don’t like videos.

This talk was mostly aimed at managers of engineering teams and projects with fairly little experience in shipping open source, and much less experience in shipping open source through upstream cross-vendor projects like the kernel. It goes through all the usual failings and missteps and explains why an upstream-first strategy is the right one, but with a twist: instead of technical reasons, it's all based on economic considerations of why open source is succeeding. Fundamentally it's not about the better software, or the cheaper price, or that the software freedoms are a good thing worth supporting.

Instead, open source is eating the world because it enables a much more competitive software market. And all the best practices around open development are just there to enable that highly competitive market. Instead of arguing that open source has open development and strongly favours public discussions because that results in better collaboration and better software, we put on the economic lens, and private discussions become insider trading and collusion. And that's just not considered cool in a competitive market. Similar arguments can be made for everything else going on in open source projects.

Circling back to the list of articles at the top, I think it's worth looking at the sustainability of open source as an economic issue of an extremely competitive market, in other words, as a market failure: occasionally the result is that no one gets paid, and the customers only receive a sub-par product with all costs externalized - costs like keeping up with security issues. And like with other market failures, a solution needs to be externally imposed through regulations, taxation and transfers to internalize all the costs again into the product's price. Frankly, I have no idea what that would look like in practice though.

Anyway, just a thought, but good enough a reason to finally publish the recording and slides of my talk, which covers this just in passing in an offhand remark.

Update: Fix slides link.

April 26, 2019

The Igalia Coding Experience is a grant program which provides students with their first exposure to the professional world, working hand in hand with Igalia programmers and learning with them. The program is aimed at students with a background in Computer Science, Information Technology, or Free Software development.

This program is a great opportunity for students willing to improve their technical skills by working in the field, learn how to contribute to open-source projects, and work together with the engineers of Igalia, a worker-owned company that has been rocking in the Free Software world for more than 18 years!

Igalia

We are looking for candidates that are passionate about Free Software philosophy and willing to work on Free Software projects. If you have already contributed to any Free Software project related to our areas of specialization… that’s great! But don’t worry if you have not yet, we encourage you to apply as well!

The conditions of the program are the following:

  • You will be mentored by one Igalian that is an expert in the respective field, so you are not going to be alone.
  • You will need to spend 450h working on the tasks agreed with your mentor, but you are free to distribute them over the year as fits you best. Usually students prefer timetables of 3 months working full-time, 6 months part-time, or even 1 year working 10 hours per week!
  • You are not going to do it for free. We will compensate you with 6500€ for all your work :)

This year we are offering Coding Experience positions in 6 different areas:

  • Implementation of web standards. The student will become familiar, and contribute to the implementation of W3C standards in open source web engines.

  • WebKit, one of the most important open source web rendering engines. The student will have the opportunity to help maintain and/or contribute to the development of new features.

  • Chromium, a well-known browser rendering engine. The student will work on specific features development and/or bug-fixing. Additional tasks may include maintenance of our internal buildbots, and creation of Chromium/Wayland packages for distribution.

  • Compilers, with a focus on WebAssembly and JavaScript implementations. The student will contribute to JS engines like V8 or JSC, working on new language features, optimizations or ports.

  • Multimedia and GStreamer, the leading open source multimedia framework. The student will help develop the Video Editing stack in GStreamer (namely GES and NLE). This work will include adding new features in any part of GStreamer, GStreamer Editing Services or in the Pitivi video editor, as well as fixing bugs in any of those components.

  • Open-source graphics stack. The student will work in the development of specific features in Mesa or in improving any of the open-source testing suites (VkRunner, piglit) used in the Mesa community. Candidates who would like to propose topics of interest to work on, please include them in the cover letter.

The last area is the one that I have been working on for more than 5 years inside the Graphics team at Igalia, and I am thrilled we can offer such a position this year :-)

You can find more information about the Igalia Coding Experience program on the website… don't forget to apply! Happy hacking!

April 15, 2019

A previous post introduced the SPURV Android compatibility layer for Wayland based Linux environment.
In this post we're going to dig into how you can run an Android application on the very common i.MX6 based Nitrogen6_MAX board from Boundary Devices.

Build SPURV for Nitrogen6_MAX

Build Android

mkdir android; cd android
repo init -u https://android.googlesource.com/platform/manifest -b android-9.0.0_r10
git clone https://gitlab.collabora.com/zodiac/android_manifest.git .repo/local_manifests/
repo sync -j15
. build/envsetup.sh
lunch spurv-eng
make -j12
cd ..

Build Linux Kernel

git clone https://gitlab.collabora.com/zodiac/linux.git -b android-container
cd linux
sh ../android/device/freedesktop/spurv/build-kernel.sh
cd ..

Create root filesystem

Just a kernel does not make an OS, so we're using Debian …

April 10, 2019

After upgrading my main workstation to F30 a while ago (soon after it branched) dbus-broker failed to start, making my machine pretty-much unusable. I tried putting selinux in permissive mode and that fixed it, so I made a note to revisit this later.

Fast-forward to today: I applied all updates, did a full relabel for good measure, and things were still broken. Spinning up a fresh F30 vm does not exhibit this problem, so the problem had to be something specific to my machine. After lots of debugging I found bug 1663040, which is about the same thing happening on the live media and only on the live media. The problem turns out to be the selinux attributes on the mount-points (/dev, /proc, /sys) in /, which cannot be updated by a relabel because at that time they already have a filesystem mounted on them.

I created the problem of the wrong labels myself when I moved from a hdd to a ssd: I did a cp -pr of the non-mount dirs and a straightforward mkdir to create the mount-points on the ssd. Zbigniew gives a neat trick to detect this problem on a running system in bug 1663040:

mkdir /tmp/foo
sudo mount --bind / /tmp/foo
ls -lZd /tmp/foo/* | grep unlabeled

If the output of the last command shows any files/dirs with unlabeled_t as the type, then your system has the same issue as mine had. To fix this, boot from a livecd, mount your / on /mnt, cd into /mnt and then run:

chcon -t device_t dev
chcon -t home_root_t home
chcon -t root_t proc sys
chcon -t var_run_t run

Then umount /mnt and reboot. After this your system should be able to run in enforcing mode again without problems.

April 06, 2019
While browsing for some internationalisation/localisation features, I found an interesting piece of functionality in Android's developer documentation. I'll quote it here:
A pseudolocale is a locale that is designed to simulate characteristics of languages that cause UI, layout, and other translation-related problems when an app is translated.
I've now implemented this for applications and libraries that use gettext, as an LD_PRELOAD library, available from this repository.
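
To give an idea of how such an interposer can work, here is a minimal sketch of the LD_PRELOAD approach (this is not the actual implementation from the repository linked above, which also accents characters; this version only wraps every translated message in brackets, so concatenated strings show up as multiple bracket pairs):

/* pseudo.c - build: gcc -shared -fPIC pseudo.c -o pseudo.so -ldl
   use:   LD_PRELOAD=./pseudo.so some-application */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <stdlib.h>
#include <string.h>

char *gettext(const char *msgid)
{
    static char *(*real_gettext)(const char *);
    if (!real_gettext)
        real_gettext = (char *(*)(const char *))dlsym(RTLD_NEXT, "gettext");

    const char *s = real_gettext(msgid);
    char *out = malloc(strlen(s) + 3);   /* leaked on purpose in this sketch */
    out[0] = '[';
    strcpy(out + 1, s);
    strcat(out, "]");
    return out;
}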


The current implementation can highlight a number of potential problems (paraphrasing the Android documentation again):
- String concatenation, which displays as one message split across 2 or more brackets.
- Hard-coded strings, which cannot be sent to translation, display as unaccented text in the pseudolocale to make them easy to notice.
- Right-to-left (RTL) problems such as elements not being mirrored.

Our old friend, Office Runner 


Testing brought some unexpected results :)
April 03, 2019

I just installed the Fedora Workstation 30 Beta yesterday and so far things are looking great. As many others have reported too, with the GNOME 3.32 update things definitely feel faster and smoother. So I thought it was a good time to talk about what is coming in Fedora Workstation 30 and what we are currently working on.

Fractional Scaling: One of the big features that landed, although still considered experimental, is fractional scaling, which has been a collaboration between Jonas Ådahl here at Red Hat and Marco Trevisan at Canonical. It has taken quite some time since the initial hackfest as it is a complex task, but we are getting close. Fractional scaling is a critical feature for many HiDPI screen laptops to get a desktop size that perfectly fits their screen, being neither too small nor too large.

Screen sharing support for Chrome and Firefox under Wayland. The Wayland security model doesn't allow any application to freely grab images or streams of the whole desktop like you could under X. This is of course a huge improvement in security, but it did cause some disruption for valid usecases like screen sharing with things like BlueJeans and Google Hangouts. We have been working on resolving that with the help of PipeWire. We have been at it for some time and things are now coming together. Chrome 73 ships with everything needed to make this work with Chrome, although you have to turn it on manually (go to this URL to turn it on: chrome://flags/#enable-webrtc-pipewire-capturer). The reason it needs to be manually enabled is not that it is unreliable; it is because the UI is still a little fugly due to a combination of feature overlap between the browser and the desktop and also how the security feature of the desktop is done. We are trying to come up with ways for the UI to be smoother without sacrificing your privacy/security. For Firefox we will keep shipping with our downstream patch until we manage to get it landed upstream.

Firefox for Wayland: Martin Stransky has been hard at work making Firefox able to run Wayland-native. That work is tantalizingly near, but we decided to postpone it to Fedora Workstation 31 in the end, to make sure it is really well polished before releasing it upon the world. The advantage of Wayland-native Firefox is that, in addition to bringing us one step closer to not needing to run an X server (XWayland) all the time, it also enables things like the fractional scaling mentioned above to work for Firefox.

OpenH264 improved: As many of you know, Firefox relies on a library called OpenH264, provided by Cisco, for its H264 video codec support for WebRTC. This library is also provided to Fedora users from Cisco free of charge (you can install it through GNOME Software). However, its usefulness has been somewhat limited due to only supporting the baseline profile used for video calling, and not the Main and High profiles used by most online video content. Well, what I can tell you is that Red Hat, Endless and Cisco partnered with Centricular some time ago to add support for decoding those profiles to OpenH264, and that work is now almost complete. The basic code enabling them is already merged, but Jan Schmidt at Centricular is working on fixing a few files that are still giving us problems. As soon as that is generally shipping, we hope to get Firefox to be able to use OpenH264 also for things like Youtube playback and of course also use OpenH264 to play back H264 in any GStreamer application like Totem. So a big thank you to Endless, Cisco and Centricular for working with us on this and thus enabling us to have a legal way to offer H264 support to our users.

NVidia binary driver support under Wayland: We have been putting quite a bit of effort into tying off the loose ends for using the NVidia binary driver with Wayland. We did manage to fix a long list of bugs, like dealing with various colorspace issues, multimonitor setups and so on. For Intel and AMD graphics users things should actually be pretty good to go at this point. The last major item holding us back on the NVidia side is full support for using the binary driver with XWayland applications (native Wayland applications should work fine already). Adam Jackson worked diligently to get all the pieces in place and we do think we have a model now that will allow NVidia to provide an updated driver that should enable XWayland. As it stands though, that driver update is likely to only come out towards the fall, so we will keep defaulting to X for NVidia binary driver users for some time more.

Gaming under Wayland. Olivier Fourdan and Jonas Ådahl have been trying to crush any major Wayland bug reported for quite some time now, and one area where we seem to have rounded the corner is games. Valve has been kind enough to give us the ability to install and run any Steam game for testing purposes, so whenever we found a game giving us trouble we have been able to let Olivier and Jonas reproduce it easily. So on my own gaming box I am now able to run all the Steam games I have under Wayland, including those using Proton, without a hitch. We haven't tested the full Steam catalog of course, there are thousands, so if your favourite game is still giving you trouble under Wayland, please let us know. Talking about gaming, one area we will try to free up some cycles for going forward is Flatpaks and gaming. We have already done quite a bit of work in this area, with things like the NVidia binary driver extension and the Steam package on Flathub. But we know from leading Linux game devs that there are still some challenges to be resolved, like making host device access for gamepads simpler from within the Flatpak sandbox.

Flatpak Creation in Fedora. Owen Taylor has been in charge of getting Flatpaks building in Fedora, ensuring we can produce Flatpaks from Fedora packages. Owen set up a system to track the Fedora Flatpak status; we have about 10 applications so far, but hope to greatly grow that number over time as we polish up the system. This enables us to start planning for shipping some applications in Fedora Workstation as Flatpaks by default in a future release. This repository will be available by default in Fedora Workstation 30 and you can choose the flatpak version of a package through the new drop-down box in the top right corner of GNOME Software. For now the RPM version of the package is still the default, but we expect to change that in later releases of Fedora Workstation.

Gedit in GNOME Software with Source drop down box

Fedora Toolbox – Debarshi Ray is leading the effort we call Fedora Toolbox, which is our starting point for our goal to revitalise and revolutionize development on Linux. Fedora Toolbox is trying to take the model of a pet container for development and make it seamless and natural. Our goal is to make it dead simple to create pet containers for your projects, so you can for instance have a Fedora pet container where you develop against the leading-edge libraries and tools in Fedora, a RHEL-based container where you develop against the library versions and tools shipping in RHEL (which makes updating and fixing applications in production a lot easier), and maybe a SteamOS container to work on your little game project. Currently the model is that you have one pet container per OS you are targeting, but we are pondering whether having one pet container per project would be even better, if we can find good ways to avoid it adding a lot of extra overhead (for example having to re-install all your favourite command line tools in each container) or being outright confusing (which container has which tools and libraries again?). Our goal here though is to ensure Fedora becomes the premier container-native OS out there and thus a natural home for developers doing container development.
We are also working with the team inside Red Hat focusing on AI/ML and trying to ensure that we have a super smooth way for you to get a pet container with things like TensorFlow and CUDA up and running quickly.

Being an excellent platform for OpenShift and Kubernetes development: Together with the Red Hat developer tools organization, we are putting effort into bringing the OpenShift, CodeReady Studio and CodeReady Workspaces tools to Fedora. These tools have so far been very focused on RHEL support, but thanks to a Flatpak for CodeReady Studio and web integration for CodeReady Workspaces, we now have a path for making them easily available in Fedora too. In the world of Kubernetes, OpenShift is where you want to be, and we want Fedora Workstation to be the ultimate portal for OpenShift development.

Fleet Commander with Active Directory support – We are about to hit a very major milestone with Fleet Commander, our large-scale desktop management tool for Fedora and RHEL. Oliver Gutierrez has been hard at work making it work with Active Directory in addition to the existing FreeIPA support. We know that a majority of people interested in Fleet Commander are currently using Active Directory only, so being able to use Active Directory with Fleet Commander should make this great tool available to a huge number of new users. So if you are managing a university computer lab or a large number of Fedora or RHEL clients in your company, we should soon have a Fleet Commander release out that you can use. And if you are not using Fedora or RHEL today, well, Fleet Commander is a very big reason for switching over!
We will do a proper announcement with further details once the release with Active Directory support is out.

PipeWire – I don't have a major development to report, just a lot of steady work being done to stabilize and improve PipeWire. As mentioned earlier, we now have Wayland screen sharing and recording working smoothly in the major browsers, which is the user-facing feature I think most of you will notice. Wim is still working on pushing the audio side forward, but that is also a huge task. We have started talking about organizing a new hackfest soon to see if we can accelerate the effort further. The likely scenario at this point in time is that we start enabling the JACK side of PipeWire first, maybe as early as Fedora Workstation 31, and then come back and do the PulseAudio replacement as a last stage.

Improved input handling: Another area we keep focusing on is improving input in Fedora. Peter Hutterer and Benjamin Tissoires are working hard on improving the stack. Peter just sent out an extensive RFC on how to deal with high-resolution mice under Linux, and Benjamin has been trying to get support for the Dell Totem landed. Unfortunately neither will be there for Fedora Workstation 30, but we expect to land both before Fedora Workstation 31.

Flicker-free boot
Hans de Goede has continued working on his effort to create a flicker-free boot experience with Fedora. The results of this work are on display in Fedora Workstation 30 and will for most of you now provide a seamless bootup experience. This effort is not so much about functionality as it is about ensuring you have an end-to-end polished experience with your Linux desktop. Things like the constant mode changes we have seen in the past contribute to giving Linux an image of being unpolished, and we want Fedora to be the vehicle that breaks down that image.

Ramping up Silverblue

For those of you following Fedora, you are probably aware of Silverblue, which is our effort to re-think the Linux desktop distribution from the ground up and help us take the Linux desktop to a new level. The distribution model hasn't really changed much over the last 20 years, and we have probably polished up the offering as far as we can within the scope of that model. For instance, I upgraded my system to the Fedora 30 beta yesterday and it was a long and tedious process of watching about 6000 individual packages get updated from the Fedora 29 version to the Fedora 30 version one by one. I didn't hit a lot of major snags despite this being a beta, but it is screamingly obvious that updating your operating system in this way is both slow and inherently fragile, as any one of those 6000 packages might hit a problem during the upgrade and leave the system in an unknown state, especially since it is common for packages to run scripts and similar as part of their upgrade.

Silverblue provides a revolutionary replacement for that process. First of all since it ships as a unified image we make life a lot easier for our QE team who can then test and verify against a single image which is in a known state. This in turn ensures that you as a user can feel confident that the new OS version will not break something on your system. And since the new version is just an image stored on your system next to the old one, upgrading is just about rebooting your system. There is no waiting for individual packages to get upgraded, as everything is already there and ready. Compare it to booting into a different kernel version on Fedora, it is quick and trivial.
And this also means that in the unlikely case that there is a problem with the new OS version you can just as easily go back to the previous version, by rebooting again and choosing to boot into that version. So you basically have instant upgrades with instant rollback if needed.
We believe this will radically change the way you look at OS upgrades forever, in fact you might almost forget they are happening.

And since Silverblue will basically be a Flatpak (and other containers) only OS you will have a clean delimitation between OS and applications. This means that even if we do major updates to the host, your applications should remain unaffected by the host OS update.
In fact we have some very interesting developments underway for Flatpak, with some major new efforts in the works, efforts that I would love to talk about, but they are tied to some major Red Hat announcements that will happen at this year's Red Hat Summit, which takes place May 7th – May 9th. So I will leave it as a teaser and let you all know once the Summit is underway and Red Hat's related major announcements are done.

There is a lot of work happening around Silverblue, and as it happens Matthias Clasen wrote a long blog entry about it today. That post goes into a lot more detail on some of the Silverblue work items we have been doing.

Anyway, I feel really excited about Silverblue, and as we continue to refine the experience and figure out how everything will look in this brave new world, I am sure everyone else will get excited too. Silverblue represents the single biggest evolution of the Linux desktop since the original GNOME and KDE releases back in the late nineties. It is not just about trying to tweak the existing experience, but an attempt at taking a big leap forward and providing an operating system that embodies all that we learned over these last 20 years, and a natural home for developers and creators of all kinds in our container-centric computing future. Be sure to grab the Silverblue image of the Fedora 30 beta and give it a test run. I recommend activating the flathub.org repo to get started, in order to get a decent range of applications. As we move forward we are working hard to ensure that you have the world of applications available out of the box, so there will be no need to enable any 3rd-party repositories, but some more work needs to happen before we can do that.

Summary
So Fedora Workstation 30 is going to be another exciting release, both of the traditional RPM-based Workstation version and of Silverblue, and I hope they will encourage even more people to join our rapidly growing Fedora community. Be sure to join us in #fedora-workstation on freenode IRC to talk!

April 02, 2019

Portable Services Walkthrough (Go Edition)

A few months ago I posted a blog story with a walkthrough of systemd Portable Services. The example service given was written in C, and the image was built with mkosi. In this blog story I'd like to revisit the exercise, but this time focus on a different aspect: modern programming languages like Go and Rust push users a lot more towards static linking of libraries than the usual dynamic linking preferred by C (at least in the way C is used by traditional Linux distributions).

Static linking means we can greatly simplify image building: if we don't have to link against shared libraries during runtime we don't have to include them in the portable service image. And that means pretty much all need for building an image from a Linux distribution of some kind goes away as we'll have next to no dependencies that would require us to rely on a distribution package manager or distribution packages. In fact, as it turns out, we only need as few as three files in the portable service image to be fully functional.

So, let's have a closer look how such an image can be put together. All of the following is available in this git repository.

A Simple Go Service

Let's start with a simple Go service, an HTTP service that simply counts how often a page from it is requested. Here are the sources: main.go — note that I am not a seasoned Go programmer, hence please be gracious.

The service implements systemd's socket activation protocol, and thus can receive bound TCP listener sockets from systemd, using the $LISTEN_PID and $LISTEN_FDS environment variables.
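
The protocol itself is tiny and language-agnostic; for illustration, here is roughly the same check written in C (the walkthrough's real implementation is in the linked main.go):

#include <stdlib.h>
#include <unistd.h>

#define SD_LISTEN_FDS_START 3   /* fds passed by systemd start at fd 3 */

static int get_activated_socket(void)
{
    const char *pid = getenv("LISTEN_PID");
    const char *fds = getenv("LISTEN_FDS");

    if (!pid || !fds || (pid_t)atoi(pid) != getpid())
        return -1;               /* not socket activated, or not for us */
    if (atoi(fds) != 1)
        return -1;               /* we expect exactly one listener socket */
    return SD_LISTEN_FDS_START;  /* the TCP socket from the .socket unit */
}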

The service will store the counter data in the directory indicated in the $STATE_DIRECTORY environment variable, which happens to be an environment variable current systemd versions set based on the StateDirectory= setting in service files.

Two Simple Unit Files

When a service shall be managed by systemd a unit file is required. Since the service we are putting together shall be socket activatable, we even have two: portable-walkthrough-go.service (the description of the service binary itself) and portable-walkthrough-go.socket (the description of the sockets to listen on for the service).

These units are not particularly remarkable: the .service file primarily contains the command line to invoke and a StateDirectory= setting to make sure the service when invoked gets its own private state directory under /var/lib/ (and the $STATE_DIRECTORY environment variable is set to the resulting path). The .socket file simply lists 8088 as TCP/IP port to listen on.

An OS Description File

OS images (and that includes portable service images) generally should include an os-release file. Usually, that is provided by the distribution. Since we are building an image without any distribution let's write our own version of such a file. Later on we can use the portablectl inspect command to have a look at this metadata of our image.

Putting it All Together

The four files described above are already every file we need to build our image. Let's now put the portable service image together. For that I've written a Makefile. It contains two relevant rules: the first one builds the static binary from the Go program sources. The second one then puts together a squashfs file system combining the following:

  1. The compiled, statically linked service binary
  2. The two systemd unit files
  3. The os-release file
  4. A couple of empty directories such as /proc/, /sys/, /dev/ and so on that need to be over-mounted with the respective kernel API file system. We need to create them as empty directories here since Linux insists on directories to exist in order to over-mount them, and since the image we are building is going to be an immutable read-only image (squashfs) these directories cannot be created dynamically when the portable image is mounted.
  5. Two empty files /etc/resolv.conf and /etc/machine-id that can be over-mounted with the same files from the host.

And that's already it. After a quick make we'll have our portable service image portable-walkthrough-go.raw and are ready to go.

Trying it out

Let's now attach the portable service image to our host system:

# portablectl attach ./portable-walkthrough-go.raw
(Matching unit files with prefix 'portable-walkthrough-go'.)
Created directory /etc/systemd/system.attached.
Created directory /etc/systemd/system.attached/portable-walkthrough-go.socket.d.
Written /etc/systemd/system.attached/portable-walkthrough-go.socket.d/20-portable.conf.
Copied /etc/systemd/system.attached/portable-walkthrough-go.socket.
Created directory /etc/systemd/system.attached/portable-walkthrough-go.service.d.
Written /etc/systemd/system.attached/portable-walkthrough-go.service.d/20-portable.conf.
Created symlink /etc/systemd/system.attached/portable-walkthrough-go.service.d/10-profile.conf → /usr/lib/systemd/portable/profile/default/service.conf.
Copied /etc/systemd/system.attached/portable-walkthrough-go.service.
Created symlink /etc/portables/portable-walkthrough-go.raw → /home/lennart/projects/portable-walkthrough-go/portable-walkthrough-go.raw.

The portable service image is now attached to the host, which means we can now go and start it (or even enable it):

# systemctl start portable-walkthrough-go.socket

Let's see if our little web service works, by doing an HTTP request on port 8088:

# curl localhost:8088
Hello! You are visitor #1!

Let's try this again, to check if it counts correctly:

# curl localhost:8088
Hello! You are visitor #2!

Nice! It worked. Let's now stop the service and detach the image again:

# systemctl stop portable-walkthrough-go.service portable-walkthrough-go.socket
# portablectl detach portable-walkthrough-go
Removed /etc/systemd/system.attached/portable-walkthrough-go.service.
Removed /etc/systemd/system.attached/portable-walkthrough-go.service.d/10-profile.conf.
Removed /etc/systemd/system.attached/portable-walkthrough-go.service.d/20-portable.conf.
Removed /etc/systemd/system.attached/portable-walkthrough-go.service.d.
Removed /etc/systemd/system.attached/portable-walkthrough-go.socket.
Removed /etc/systemd/system.attached/portable-walkthrough-go.socket.d/20-portable.conf.
Removed /etc/systemd/system.attached/portable-walkthrough-go.socket.d.
Removed /etc/portables/portable-walkthrough-go.raw.
Removed /etc/systemd/system.attached.

And there we go, the portable image file is detached from the host again.

A Couple of Notes

  1. Of course, this is a simplistic example: in real life, services will consist of more than one file, even when statically linked. But you get the idea, and it's very easy to extend the example above to include any additional, auxiliary files in the portable service image.

  2. The service is very nicely sandboxed during runtime: while it runs as a regular service on the host (and you thus can watch its logs or do resource management on it like you would for all other systemd services), it runs in a very restricted environment under a dynamically assigned UID that ceases to exist when the service is stopped again.

  3. Originally I wanted to make the service not only socket activatable but also implement exit-on-idle, i.e. add logic so that the service terminates on its own when there's no ongoing HTTP connection for a while. I couldn't figure out how to do this race-freely in Go though, but I am sure an interested reader might want to add that (a naive sketch follows this list)? By combining socket activation with exit-on-idle we can turn this project into an exercise in putting together an extremely resource-friendly and robust service architecture: the service is started only when needed and terminates when no longer needed. This would allow packing services at a much higher density even on systems with few resources.

  4. While the basic concepts of portable services have been around since systemd 239, it's best to try the above with systemd 241 or newer, as the portable service logic has received a number of fixes since then.
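For note 3 above, a naive sketch of exit-on-idle in Go might look like the following. The names and the timeout are mine, and the race mentioned above remains: a connection that systemd already queued on the listener can be lost if we exit at the wrong moment.

package main

import (
	"os"
	"sync/atomic"
	"time"
)

// inflight is incremented when a request starts and decremented when
// it finishes (e.g. in the HTTP handler).
var inflight int64

// exitWhenIdle terminates the process once a tick finds no request in
// flight; systemd will simply re-activate us on the next connection.
func exitWhenIdle(idle time.Duration) {
	for range time.Tick(idle) {
		if atomic.LoadInt64(&inflight) == 0 {
			os.Exit(0)
		}
	}
}

func main() {
	go exitWhenIdle(30 * time.Second)
	select {} // stand-in for the real HTTP serving loop
}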

Further Reading

A low-level document introducing Portable Services is shipped along with systemd.

Please have a look at the blog story from a few months ago that did something very similar with a service written in C.

There are also relevant manual pages: portablectl(1) and systemd-portabled(8).

April 01, 2019

Running Android applications has some advantages compared to running native Linux applications, for example with regard to the availability of applications and application developers.

For current non-Android systems, this work enables a path forward to running Android applications in the same graphical environment as traditional non-Android applications.

What is SPURV?

SPURV is our experimental containerized Android environment, and this is a quick overview of what it is.

It's aptly named after the first robotic fish, since a common Android naming scheme is fish-themed names, much like its spiritual ancestor Goldfish, the Android emulator.

Other Android Compatibility Layers

This means that Anbox, which is LXC based, is different from SPURV in terms of how hardware is accessed. The hardware access that Anbox provides is indirect, and …

Back in October, Panfrost ran some simple benchmarks, like glmark. Five months later, Panfrost has grown from running benchmarks to real-world apps, like Kodi, and 3D games like SuperTuxKart and Neverball.

Since the previous post, there have been major improvements across every part of the driver, culminating in this milestone. On the kernel side, my co-contributors Tomeu Vizoso and Rob Herring have created a modern kernel driver, suitable for mainline inclusion. Panfrost now uses this upstream-friendly driver, rather than relying on a modified legacy kernel as in the past. The new kernel module is currently under review for mainline inclusion. You can read more about this progress on Tomeu’s blog.

Outside the kernel, however, the changes have been no less significant. Early development was constrained to our own project repositories, as the code was not yet ready for general users. In early February, thanks in part to the progress on the kernel-space, we flew out of our nest, and Panfrost was merged into upstream Mesa, the central repository for free software graphics drivers. Development now occurs in-tree in Mesa.

We have continued decoding new aspects of the hardware and implementing support in the driver. A few miscellaneous additions include cube maps, gl_PointSize and gl_PointCoord, linear depth rendering, performance counters, and new shader instructions.

One area of particular improvement has been our understanding of the hardware formats (like “4-element vector of 32-bit floats” or “single 16-bit unsigned normalized integer”). In Panfrost’s early days, we knew magic numbers to distinguish a few of the most common formats, but the underlying meanings of the voodoo patterns were elusive. Further, the format bits for textures and attributes were not unified, further hindering the diversity of supported formats available. However, Connor Abbott and I have since identified the underlying meaning of the format codes for textures, attributes, and framebuffers. This new understanding allows for the magic numbers to be replaced by a streamlined format selection routine, mapping Gallium’s formats to the hardware’s and supporting the full spectrum of formats required for a conformant driver. Panfrost is now passing texture format tests for OpenGL ES 2.0.

From a performance standpoint, various optimizations have been added. In particular, a fast path likely relating to the “tiler” in the hardware was discovered. When this fast path is used, performance on geometry heavy scenes skyrockets. In one extreme demo (shading the Stanford bunny), performance more than tripled, and these gains trickle down to real-world games.

Features aside, one of the key issues with an early driver is brittleness and instability. Accordingly, to guarantee robustness, I now test with the drawElements Quality Program (dEQP), which includes comprehensive code correctness tests. Although we’re still a while away from conformance, I now systematically step through identified issues and resolve the bugs, translating to fixes across every aspect of the driver.

One real-world beneficiary of these fixes is the Kodi media center, which today works well using Panfrost to achieve a fluid interface on Midgard devices. For standalone installations of Kodi, today there are experimental images featuring Kodi and Panfrost. To further improve fluidity, Kodi and Panfrost can even interoperate with video decoding acceleration, contingent on cooperative kernel drivers.

For users more inclined to gaming, some 3D games are beginning to show signs of life with Panfrost. For instance, the classic (OpenGL ES 2.0) backend of the ever-popular kart racing game, SuperTuxKart, now renders with some minor glitches with Panfrost. Performance is playable on simple tracks, though we have many opportunities for optimization. To bring up this racing game, I added support for complex control flow in the compiler. Traditionally, control flow is discouraged in graphics, due to the architecture of desktop GPUs (thread “warps”). However, Midgard does not feature these traditional optimizations, negating the performance penalty for branching from control flow. The implementation required new bookkeeping in the compiler, as well as an investigation into long jumps due to the size of the game’s “uber-shader”. In total, this compiler improvement – paired with assorted bug fixes – allows SuperTuxKart to run.

Likewise, Neverball is playable (and fun!) with Panfrost, although there are rendering anomalies relating to the currently unimplemented legacy feature “point sprites”. In contrast to Kodi and SuperTuxKart, which make liberal use of custom shaders, Neverball is implemented with purely fixed-function desktop OpenGL. This poses an interesting challenge, as Midgard is designed specifically for embedded graphics; the blob does not support this desktop superset. But that’s no reason we can’t!

Like most modern free software OpenGL drivers, Panfrost is built atop the modular “Gallium” architecture. This architecture abstracts away interface details, like desktop versus embedded OpenGL, normalizing differences to allow drivers to focus on the hardware itself. This abstraction means that by implementing Panfrost as an embedded driver atop Gallium, we get a partial desktop OpenGL implementation “free”.

Of course, there is functionality in the desktop superset that does not exist in the embedded profile. While Gallium tries to paper over these differences, the driver is required to implement features like point sprites and alpha testing to expose the corresponding desktop functions. So, the bring-up of desktop OpenGL applications like Neverball has led me to implement some of this additional functionality. Translating the “alpha test” to a conditional discard instruction in the fragment shader works. Similarly, translating “point sprites” to the modern equivalent, gl_PointCoord, is planned.

Interestingly, the hardware does support some functionality only available through the full desktop profile. It is unknown how many “hidden features” of this type are supported; as the blob does not appear to use them, these features were discovered purely by accident on our part. For instance, in addition to the familiar set of “points, lines, and triangles”, Midgard can natively render quadrilaterals and polygons. The existence of this feature was suggested by the corresponding performance counters, and the driver-side mechanics were determined by manual bruteforce of the primitive selection bits. Nevertheless, now that these bonus features are understood, quads can be drawn from desktop applications without first translating to indexed triangles in software. Similarly, it appears that, in addition to the embedded standard of boolean occlusion queries, setting a chicken bit enables the hardware’s hidden support for precise occlusion counters, a useful desktop feature.

Going forward, although the implementation of OpenGL ES 2.0 is approaching feature-completeness, we will continue to polish the driver, guided by dEQP. Orthogonal to conformance, further optimization to improve performance and lower memory usage is on the roadmap.

It’s incredible to reflect back and realise just one year ago, work had not even begun on writing a real OpenGL driver. Yet here we are today with an increasingly usable, exclusive free software, hardware-accelerated desktop with Mali Midgard graphics.

Frost on.

March 29, 2019

Aside from the regular board elections we also have some bylaw changes to vote on. As usual with bylaw changes, we need a supermajority of all members to agree - if you don’t vote you essentially reject it, but the board has no way of knowing.

Please see the detailed changes of the bylaws, make up your mind, and go voting on the shiny new members page.

March 28, 2019

Today the announcement went out that the Linux Vendor Firmware Service has become an official Linux Foundation service. For those that don’t know it yet, LVFS is a service that provides firmware for your Linux-running hardware, and it was one of our initial efforts as part of the Fedora Workstation effort to drain the swamp in terms of making Linux a first-class desktop operating system.

The effort came about due to Peter Jones, who is Red Hat's representative to the UEFI standards body, approaching me to talk about how Microsoft was pushing for a standardized way to ship UEFI firmware for Windows, and how UEFI being a standard opened a path for us to actually get full support for this without each vendor having to ship and maintain their own proprietary firmware tools. So we did a meeting with Peter Jones and also brought in Richard Hughes, who had already been looking at the problem of firmware updates in Linux, partly due to his ColorHug hardware, and the effort got started with Peter working on the low-level OS tooling and Richard taking on building the service to drive distribution and the work to integrate it all into GNOME Software. One concern we had of course was whether we could reach critical mass for this and get vendors interested, but luckily Dell was just as keen on improving firmware handling under Linux as us and signed on from the start. Having Dell onboard helped give the effort a lot of credibility, and as the service matured we ended up having more and more vendors sign up. We also reached out through Red Hat's partnerships to push vendors to adopt supporting it. As Richard also mentions in his interview about it, we had made the solution as similar to Microsoft's as possible to decrease the threshold for hardware vendors to join, the goal being that if they did the basic work to support Windows they could more or less just ship the same firmware file to LVFS.

One issue that we had gone back and forth about inside Red Hat was the formal setup of the service. While we all agreed the service was hugely beneficial, it felt like something that should be a shared service for all of Linux, and we felt that if the service was Red Hat provided it might dissuade other vendors from joining. So we started looking around for a neutral place to land the service while, in the meantime, LVFS had a sort of autonomous status, being run as a community effort by Richard Hughes. We ended up talking to Chris Wright, the Red Hat CTO, about the project and he offered to facilitate contact with the Linux Foundation. The initial meetings were very positive and the Linux Foundation seemed interested in running the service right from the start. It did end up taking us quite some time to clear all formal and technical hurdles to get there, but I for one am very happy to see the LVFS now being a vendor-neutral service provided by the Linux Foundation.

So a big thank you to Richard Hughes, Peter Jones, Chris Wright, Mario Limonciello and Dell, and the Linux Foundation for their help in getting us here. And also a big thank you to Fedora and the Fedora community for providing us a place to develop and polish up this service to the benefit of all. To me this is one of many examples of how Fedora keeps innovating and leading the way on desktop Linux.

March 21, 2019

I had to work on an image yesterday where I couldn't install anything and the amount of pre-installed tools was quite limited. And I needed to debug an input device, usually done with libinput record. So eventually I found that hexdump supports formatting of the input bytes but it took me a while to figure out the right combination. The various resources online only got me partway there. So here's an explanation which should get you to your results quickly.

By default, hexdump prints identical input lines as a single line with an asterisk ('*'). To avoid this, use the -v flag as in the examples below.
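For example, 32 NUL bytes collapse to a single line plus the asterisk by default, while -v prints every line:

$ head -c 32 /dev/zero | hexdump
0000000 0000 0000 0000 0000 0000 0000 0000 0000
*
0000020
$ head -c 32 /dev/zero | hexdump -v
0000000 0000 0000 0000 0000 0000 0000 0000 0000
0000010 0000 0000 0000 0000 0000 0000 0000 0000
0000020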

hexdump's format string is a single-quote-enclosed string that contains the count, the element size and a double-quote-enclosed printf-like format string. So a simple example is this:


$ hexdump -v -e '1/2 "%d\n"'
-11643
23698
0
0
-5013
6
0
0
This prints 1 element ('iteration') of 2 bytes as integer, followed by a linebreak. Or in other words: it takes two bytes, converts it to int and prints it. If you want to print the same input value in multiple formats, use multiple -e invocations.

$ hexdump -v -e '1/2 "%d "' -e '1/2 "%x\n"'
-11568 d2d0
23698 5c92
0 0
0 0
6355 18d3
1 1
0 0
This prints the same 2-byte input value, once as decimal signed integer, once as lowercase hex. If we have multiple identical things to print, we can do this:

$ hexdump -v -e '2/2 "%6d "' -e '" hex:"' -e '4/1 " %x"' -e '"\n"'
-10922 23698 hex: 56 d5 92 5c
0 0 hex: 0 0 0 0
14879 1 hex: 1f 3a 1 0
0 0 hex: 0 0 0 0
0 0 hex: 0 0 0 0
0 0 hex: 0 0 0 0
Which prints two elements, each size 2 as integers, then the same elements as four 1-byte hex values, followed by a linebreak. %6d is a standard printf instruction and documented in the manual.

Let's go and print our protocol. The struct representing the protocol is this one:


struct input_event {
#if (__BITS_PER_LONG != 32 || !defined(__USE_TIME_BITS64)) && !defined(__KERNEL__)
struct timeval time;
#define input_event_sec time.tv_sec
#define input_event_usec time.tv_usec
#else
__kernel_ulong_t __sec;
#if defined(__sparc__) && defined(__arch64__)
unsigned int __usec;
#else
__kernel_ulong_t __usec;
#endif
#define input_event_sec __sec
#define input_event_usec __usec
#endif
__u16 type;
__u16 code;
__s32 value;
};
So we have two longs for sec and usec, two shorts for type and code and one signed 32-bit int. Let's print it:

$ hexdump -v -e '"E: " 1/8 "%u." 1/8 "%06u" 2/2 " %04x" 1/4 "%5d\n"' /dev/input/event22
E: 1553127085.097503 0002 0000 1
E: 1553127085.097503 0002 0001 -1
E: 1553127085.097503 0000 0000 0
E: 1553127085.097542 0002 0001 -2
E: 1553127085.097542 0000 0000 0
E: 1553127085.108741 0002 0001 -4
E: 1553127085.108741 0000 0000 0
E: 1553127085.118211 0002 0000 2
E: 1553127085.118211 0002 0001 -10
E: 1553127085.118211 0000 0000 0
E: 1553127085.128245 0002 0000 1
And voila, we have our structs printed in the same format evemu-record prints out. So with nothing but hexdump, I can generate output I can then parse with my existing scripts on another box.

March 15, 2019

Ho ho ho, let's write libinput. No, of course I'm not serious, because no-one in their right mind would utter "ho ho ho" without a sufficient backdrop of reindeers to keep them sane. So what this post is instead is me writing a nonworking fake libinput in Python, for the sole purpose of explaining roughly what libinput's architecture looks like. It'll be to libinput what a Duplo car is to a Maserati. Four wheels and something to entertain the kids with, but the queue outside the nightclub won't be impressed.

The target audience is those that need to hack on libinput and for whom the balance of understanding vs total confusion is still shifted towards the latter. So in order to make it easier to associate the various bits, here's a description of the main building blocks.

libinput uses something resembling OOP except that in C you can't have nice things unless what you want is a buffer overflow\n\80xb1001af81a2b1101. Instead, we use opaque structs, each with accessor methods and an unhealthy amount of verbosity. Because Python does have classes, those structs are represented as classes below. This all won't be actual working Python code; I'm just using the syntax.

Let's get started. First of all, let's create our library interface.


class Libinput:
    @classmethod
    def path_create_context(cls):
        return _LibinputPathContext()

    @classmethod
    def udev_create_context(cls):
        return _LibinputUdevContext()

    # dispatch() means: read from all our internal fds and
    # call the dispatch method on anything that has changed
    def dispatch(self):
        for fd in self.epoll_fd.get_changed_fds():
            self.handlers[fd].dispatch()

    # return whatever the next event is
    def get_event(self):
        return self._events.pop(0)

    # the various _notify functions are internal API
    # to pass things up to the context
    def _notify_device_added(self, device):
        self._events.append(LibinputEventDevice(device))
        self._devices.append(device)

    def _notify_device_removed(self, device):
        self._events.append(LibinputEventDevice(device))
        self._devices.remove(device)

    def _notify_pointer_motion(self, x, y):
        self._events.append(LibinputEventPointer(x, y))


class _LibinputPathContext(Libinput):
    def add_device(self, device_node):
        device = LibinputDevice(device_node)
        self._notify_device_added(device)

    def remove_device(self, device_node):
        device = self._find_device(device_node)  # lookup elided here
        self._notify_device_removed(device)


class _LibinputUdevContext(Libinput):
    def __init__(self):
        self.udev = udev.context()

    def udev_assign_seat(self, seat_id):
        self.seat_id = seat_id

        for udev_device in self.udev.devices():
            device = LibinputDevice(udev_device.device_node)
            self._notify_device_added(device)


We have two different modes of initialisation, udev and path. The udev interface is used by Wayland compositors and adds all devices on the given udev seat. The path interface is used by the X.Org driver and adds only one specific device at a time. Both interfaces have the dispatch() and get_event() methods, which are how every caller gets events out of libinput.

In both cases we create a libinput device from the data and create an event about the new device that bubbles up into the event queue.

But what really are events? Are they real or just a fidget spinner of our imagination? Well, they're just another object in libinput.


class LibinputEvent:
    @property
    def type(self):
        return self._type

    @property
    def context(self):
        return self._libinput

    @property
    def device(self):
        return self._device

    def get_pointer_event(self):
        if isinstance(self, LibinputEventPointer):
            return self  # This makes more sense in C where it's a typecast
        return None

    def get_keyboard_event(self):
        if isinstance(self, LibinputEventKeyboard):
            return self  # This makes more sense in C where it's a typecast
        return None


class LibinputEventPointer(LibinputEvent):
    @property
    def time(self):
        return self._time / 1000

    @property
    def time_usec(self):
        return self._time

    @property
    def dx(self):
        return self._dx

    @property
    def absolute_x(self):
        return self._x * self._x_units_per_mm

    # not a property, since it takes the caller's screen width
    def absolute_x_transformed(self, width):
        return self._x * width / self._x_max_value
You get the gist. Each event is actually an event of a subtype with a few common shared fields and a bunch of type-specific ones. The events often contain some internal value that is calculated on request. For example, the API for the absolute x/y values returns mm, but we store the value in device units instead and convert to mm on request.

So, what's a device then? Well, just another I-cant-believe-this-is-not-a-class with relatively few surprises:


class LibinputDevice:
    class Capability(Enum):
        CAP_KEYBOARD = 0
        CAP_POINTER = 1
        CAP_TOUCH = 2
        ...

    def __init__(self, device_node):
        pass  # no-one instantiates this directly

    @property
    def name(self):
        return self._name

    @property
    def context(self):
        return self._libinput_context

    @property
    def udev_device(self):
        return self._udev_device

    # not a property, since it takes the capability to check for
    def has_capability(self, cap):
        return cap in self._capabilities

    ...
Now we have most of the frontend API in place and you start to see a pattern. This is how all of libinput's API works, you get some opaque read-only objects with a few getters and accessor functions.

Now let's figure out how to work on the backend. For that, we need something that handles events:


class EvdevDevice(LibinputDevice):
    def __init__(self, device_node):
        self.fd = open(device_node)
        super().context.add_fd_to_epoll(self.fd, self.dispatch)
        self.initialize_quirks()

    def has_quirk(self, quirk):
        return quirk in self.quirks

    def dispatch(self):
        while True:
            data = self.fd.read(input_event_byte_count)
            if not data:
                break

            self.interface.dispatch_one_event(data)

    def _configure(self):
        # some devices are adjusted for quirks before we
        # do anything with them
        if self.has_quirk(SOME_QUIRK_NAME):
            self.libevdev.disable(libevdev.EV_KEY.BTN_TOUCH)

        if 'ID_INPUT_TOUCHPAD' in self.udev_device.properties:
            self.interface = EvdevTouchpad()
        elif 'ID_INPUT_SWITCH' in self.udev_device.properties:
            self.interface = EvdevSwitch()
        # ... further udev types elided
        else:
            self.interface = EvdevFallback()


class EvdevInterface:
    def dispatch_one_event(self, event):
        pass


class EvdevTouchpad(EvdevInterface):
    def dispatch_one_event(self, event):
        ...


class EvdevTablet(EvdevInterface):
    def dispatch_one_event(self, event):
        ...


class EvdevSwitch(EvdevInterface):
    def dispatch_one_event(self, event):
        ...


class EvdevFallback(EvdevInterface):
    def dispatch_one_event(self, event):
        ...
Our evdev device is actually a subclass (well, C, *handwave*) of the public device and its main function is "read things off the device node". And it passes that on to a magical interface. Other than that, it's a collection of generic functions that apply to all devices. The interfaces are where most of the real work is done.

The interface is decided on by the udev type and is where the device-specifics happen. The touchpad interface deals with touchpads, the tablet and switch interface with those devices and the fallback interface is that for mice, keyboards and touch devices (i.e. the simple devices).

Each interface has very device-specific event processing and can be compared to the Xorg synaptics vs wacom vs evdev drivers. If you are fixing a touchpad bug, chances are you only need to care about the touchpad interface.

The device quirks used above are another simple block:


class Quirks:
    def __init__(self):
        self.read_all_ini_files_from_directory('$PREFIX/share/libinput')

    def has_quirk(self, device, quirk):
        for file in self.quirks:
            if (quirk.has_match(device.name) or
                    quirk.has_match(device.usbid) or
                    quirk.has_match(device.dmi)):
                return True
        return False

    def get_quirk_value(self, device, quirk):
        if not self.has_quirk(device, quirk):
            return None

        quirk = self.lookup_quirk(device, quirk)
        if quirk.type == "boolean":
            return bool(quirk.value)
        if quirk.type == "string":
            return str(quirk.value)
        ...
A system that reads a bunch of .ini files, caches them and returns their value on demand. Those quirks are then used to adjust device behaviour at runtime.
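The files themselves are plain ini syntax. A made-up entry could look like this (see libinput's quirks documentation for the authoritative list of match and attribute keys):

[Example Touchpad Pressure Override]
MatchUdevType=touchpad
MatchName=*Example TouchPad*
AttrPressureRange=150:130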

The next building block is the "filter" code, which is the word we use for pointer acceleration. Here too we have a two-layer abstraction with an interface.


class Filter:
    def dispatch(self, x, y):
        # converts device-unit x/y into normalized units
        return self.interface.dispatch(x, y)

    # the 'accel speed' configuration value
    def set_speed(self, speed):
        return self.interface.set_speed(speed)

    # the 'accel speed' configuration value
    def get_speed(self):
        return self.speed

    ...


class FilterInterface:
    def dispatch(self, x, y):
        pass


class FilterInterfaceTouchpad(FilterInterface):
    def dispatch(self, x, y):
        ...


class FilterInterfaceTrackpoint(FilterInterface):
    def dispatch(self, x, y):
        ...


class FilterInterfaceMouse(FilterInterface):
    def dispatch(self, x, y):
        self.history.push((x, y))
        v = self.calculate_velocity()
        f = self.calculate_factor(v)
        return (x * f, y * f)

    def calculate_velocity(self):
        total = 0
        for delta in self.history:
            total += delta
        return total / timestamp  # as illustration only

    def calculate_factor(self, v):
        # this is where the interesting bit happens,
        # let's assume we have some magic function
        f = v * 1234 / 5678
        return f
So libinput calls filter_dispatch on whatever filter is configured and passes the result on to the caller. The setup of those filters is handled in the respective evdev interface, similar to this:

class EvdevFallback:
    ...
    def init_accel(self):
        if self.udev_type == 'ID_INPUT_TRACKPOINT':
            self.filter = FilterInterfaceTrackpoint()
        elif self.udev_type == 'ID_INPUT_TOUCHPAD':
            self.filter = FilterInterfaceTouchpad()
        ...
The advantage of this system is twofold. First, the main libinput code only needs one place where we really care about which acceleration method we have. And second, the acceleration code can be compiled separately for analysis and to generate pretty graphs. See the pointer acceleration docs. Oh, and it also allows us to easily have per-device pointer acceleration methods.

Finally, we have one more building block - configuration options. They're a bit different in that they're all similar-ish but only to make switching from one to the next a bit easier.


class DeviceConfigTap:
    def set_enabled(self, enabled):
        self._enabled = enabled

    def get_enabled(self):
        return self._enabled

    def get_default(self):
        return False


class DeviceConfigCalibration:
    def set_matrix(self, matrix):
        self._matrix = matrix

    def get_matrix(self):
        return self._matrix

    def get_default(self):
        return [1, 0, 0, 0, 1, 0, 0, 0, 1]
And then the devices that need one of those slot them into the right pointer in their structs:

class EvdevFallback:
    ...
    def init_calibration(self):
        self.config_calibration = DeviceConfigCalibration()
        ...

    def handle_touch(self, x, y):
        if self.config_calibration is not None:
            matrix = self.config_calibration.get_matrix()
            x, y = matrix.multiply(x, y)

        self.context._notify_pointer_abs(x, y)

And that's basically it, those are the building blocks libinput has. The rest is detail. Lots of it, but if you understand the architecture outline above, you're most of the way there in diving into the details.

One of the features in the soon-to-be-released libinput 1.13 is location-based touch arbitration. Touch arbitration is the process of discarding touch input on a tablet device while a pen is in proximity. Historically, this was provided by the kernel wacom driver but libinput has had userspace touch arbitration for quite a while now, allowing for touch arbitration where the tablet and the touchscreen part are handled by different kernel drivers.

Basic touch arbitration is relatively simple: when a pen goes into proximity, all touches are ignored. When the pen goes out of proximity, new touches are handled again. There are some extra details (esp. where the kernel handles arbitration too) but let's ignore those for now.

With libinput 1.13, and in preparation for the Dell Canvas Dial Totem, touch arbitration can now be limited to a portion of the screen only. On the totem (future patches, not yet merged) that portion is a square slightly larger than the tool itself. On normal tablets, that portion is a rectangle, sized so that it should encompass the user's hand and the area around the pen, but not much more. This enables users to use both pen and touch input at the same time, providing for bimanual interaction (where the GUI itself supports it, of course). We use the tilt information of the pen (where available) to guess where the user's hand will be, to adjust the rectangle position.

There are some heuristics involved and I'm not sure we got all of them right so I encourage you to give it a try and file an issue where it doesn't behave as expected.

March 13, 2019

Arm driver timeline

The process of reverse engineering Arm GPUs has been going on for a long time, starting with Luc Verhaegen's work on the low-end Mali 2/3/400 series of GPUs, based on the Arm Utgard family of GPUs.
This driver has recently seen a lot of new attention and is itself progressing quickly, which means it will likely be accepted into the kernel soon.
A piece of trivia is that this GPU architecture was what Arm received when they purchased the Norwegian GPU IP vendor Falanx Microsystems.

The Mali T and G-series of GPUs are based on the Midgard and Bifrost architectures respectively, both of which are quite different from the 2/3/400 series. However the T and G-series are somewhat similar at least when …

Imagine a finite resource that you want to distribute amongst peers in a fair manner. If you know the number of peers to be n, the problem becomes trivial and you can assign every peer 1/n-th of the total. This way every peer gets the same amount, while no part of the resource stays unused. But what if the number of peers is only known retrospectively? That is, how many resources do you grant a peer if you do not know whether there are more peers or not? How do you define “fairness”? And how do you make sure as little of the resource as possible stays unused?

The fairdist algorithm provides one possible solution to this problem. It defines how many resources a new peer is assigned, considering the following properties:

  1. The total amount of resources already distributed to other peers. This is also referred to by the term consumption.
  2. The number of peers that already got resources assigned.
  3. The amount of resources remaining. That is, the resources that are remaining to be distributed. This is also referred to by the term reserve.

The following is a mathematical proof of the properties of the fairdist algorithm. For the reference implementation of the algorithm and information on the different applications of it, see the r-fairdist project.
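To get a feeling for the problem, consider a toy allocator (my illustration, not part of the original analysis): hand every new peer half of whatever is still in reserve. Starting from a total of 1, the k-th peer then receives

    a_k = \frac{1}{2^k}, \qquad r_n = 1 - \sum_{k=1}^{n} a_k = \frac{1}{2^n}

so no sequence of peers can ever exhaust the resource, but the share a late peer is guaranteed shrinks exponentially with n. The interesting question, addressed below, is how much weaker than 1/n a guarantee has to become in exchange for never running out.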

Prerequisites

We define a set of symbols up front, to keep the proofs shorter. Whenever these symbols are mentioned, the following definition applies:

  • Let c be the total amount of consumed resources.
  • Let r be the total amount of reserved resources.
  • Let n be the number of peers that consumed resources.
  • Let A be a function that computes the proportion of the reserve a peer can consume, based on the number of peers that currently have resources consumed.
  • Let G be a function that computes the proportion of the total a peer is guaranteed, based on the number of peers that currently have resources consumed.

The algorithm considers a total amount of resources, but splits it into two separate parts, the remaining reserve r and the consumed part c. Their sum c + r represents the total amount that was initially available. It then declares a function A, which is the resource allocator. It will later on be used to calculate how many resources of the reserve a peer can allocate: r / A(n). That is, A defines the proportion of the reserve a new peer gets access to. The smaller A(n) is, the more a peer gets.

Similarly, the guarantee G is used to declare a lower bound of the total resources the allocator grants a new peer. That is, while A is a function applied to allocations, G is a property the allocator will guarantee you. Unlike an allocation, the guarantee will later on be calculated based on the total amount of resources: (c + r) / G(n). Again, the function defines the proportion that is guaranteed. So the smaller G(n) is, the stronger the guarantee becomes.

Definition

The allocator A is said to guarantee the limit G, if there exists a function R so that for all c, r, and n:

    r >= R(n) * c

implies both:

    r / A(n) >= (c + r) / G(n)

    r - r / A(n) >= R(n + 1) * (c + r / A(n))

The idea here is to define a function A which calculates how many resources a new peer can allocate. That is, considering a new peer requests resources, it will get r / A(n) of the reserve. The first property of this implication guarantees that this allocation is bigger than, or equal to, the guaranteed total for each peer. The guaranteed total is calculated through G based on the total amount of resources (which is the consumption plus the reserve).

If you now pick an allocator A and a guarantee G that fulfil this definition, the idea is that this ensures you that the allocator can be used to serve resource requests from new peers, and it ensures that regardless of how many peers will request resources, each one will be guaranteed an amount equal to, or bigger than, the guarantee (c + r) / G(n).

This definition requires the existence of a reserve watermark R(n). It uses this watermark as a selector for an inductive step. That is, if the requirements of this reserve selector are true, the second implication guarantees that they remain true for an infinite number of following allocations. That is, the right-hand side of the second implication matches exactly the requirement of the implication, once a single allocation was performed (i.e., a resource chunk was subtracted from the reserve and added to the total consumption, while the number of peers increased by one).

Note that if c is 0, then the requirement of the implication is true for all r and n. This guarantees that there is always a situation where the allocator can actually be applied.

Lemma 1

To prove an allocator A guarantees G, it is sufficient to show that R fulfils:

    R(n) >= A(n) / (G(n) - A(n))

and

    R(n + 1) <= R(n) * (A(n) - 1) / (R(n) + A(n))

This lemma is used to make it easier to prove a specific allocator guarantees a specific limit. Without it, each proof of the different allocators would have to replicate it.

However, this lemma also gives a better feeling of what the different functions actually mean. For instance, it clearly shows A(n) must always be smaller than G(n), and that by a considerable amount. If G(n) = A(n), then no R(n) would ever fulfil this requirement (remember: R(n) must be finite and positive). At the same time, you can see that the closer A(n) and G(n) are together, the smaller G(n) - A(n) gets, and as such the requirements on the reserve get harder to fulfil.

The second requirement gives you a recursive equation to find an R for any allocator A you pick. Hence, in combination, both these requirements show you an iterative process to find A and R for any guarantee G you pick. However, the closer A and G get, the harder it becomes to solve the recursive equation.

Proof

To show this lemma is true, we must show both implications of the definition are true. As a first step, we show the first implication is true, which is:

We show this by starting with the left-hand side and showing it implies the right-hand side, using the requisite of this lemma.

As a second step, we need to show the second implication of the definition is true, which is:

To prove this, we start with the second requisite of this lemma and then show it implies the right-hand side of the implication, using the requisite of the implication.

Hint: The following introduction is correct, since A(n) is by definition greater than 1, so neither side can be 0.

Theorem

The following allocators each guarantee the specified limit:

This theorem defines three different allocators for different guarantees. The last one provides the strongest guarantee. Both the allocation and the guarantee are quasilinear. It is thus a good fit for fair allocation schemes, while still being reasonably fast to compute.

The other two provide quadratic and exponential guarantees and are mostly listed for documentation purposes. With the quasilinear guarantee at hand, there is little reason to use the other two.

As you might notice, this theorem does not provide a solution where A and G become infinitesimally close. It remains open what this solution would look like. However, the listed quasilinear solution is good enough that it is unlikely better options exist which can still be calculated in reasonable amounts of time.

Proof

We provide a function R for each pair of allocator and guarantee. We then substitute them in Lemma 1 and show through equivalence transformations that the assertions are true.

Proof 1: Exponential Guarantee

  • Allocator:
  • Guarantee:

Let .

Part 1:

Part 2:

Proof 2: Polynomial Guarantee

  • Allocator:
  • Guarantee:

Let .

Part 1:

Part 2:

Proof 3: Quasilinear Guarantee

  • Allocator:
  • Guarantee:

Let .

Part 1:

Part 2:

For this part, we rely on the following property:

This is true for all logarithms for all .

We now show the second requirement of the Lemma is true. However, we cannot use equivalence transformations as in the other proofs. Hence, we show it by implication.

March 08, 2019
GNOME 3.32 will very soon be released, so I thought I'd go back on a few of the things that happened with some of our content applications.

Videos
First, many thanks to Marta Bogdanowicz, Baptiste Mille-Mathias, Ekaterina Gerasimova and Andre Klapper who toiled away at updating Videos' user documentation since 2012, when it was still called “Totem”, and then again in 2014 when “Videos” appeared.

The other major change is that Videos is available, fully featured, from Flathub. It should play your Windows Movie Maker films, your circular wafers of polycarbonate plastic and aluminium, and your Devolver indie films. No more hunting codecs or libraries!

In the process, we also fixed a large number of outstanding issues, such as accommodating the app menu's planned disappearance, moving the audio/video properties tab to nautilus proper, making the thumbnailer available as an independent module, making the MPRIS plugin work better and loads, loads more.


Download on Flathub

Books

As Documents was removed from the core release, we felt it was time for Books to become independent. And rather than creating a new package inside a distribution, the Flathub version was updated. We also fixed a bunch of bugs, so that's cool :)
Download on Flathub

Weather

I didn't work directly on Weather, but I made some changes to libgweather which means it should be easier to contribute to its location database.

Adding new cities doesn't require adding a weather station by hand any more; the closest one is picked automatically. Weather stations also don't need to be attached to cities either; they were usually attached to villages, sometimes hamlets!

The automatic tests are also more stringent, and test for more things, which should hopefully mean fewer bugs.

And even more Flatpaks

On Flathub, you'll also find some applications I packaged up in the last 6 months. First is Teo, a Thomson emulator, then GBE+, a Game Boy emulator focused on accessories emulation, and a way to run your old Flash games offline.
March 04, 2019

The video

Below you can see the same scene that I recorded in January, which was rendered by Panfrost in Mesa but using Arm's kernel driver. This time, Panfrost is using a new kernel driver that is in a form close to being acceptable in the mainline kernel:

The history behind it

During the past two months Rob Herring and I have been working on a new driver for Midgard and Bifrost GPUs that could be accepted mainline.

Arm already maintains a driver out of tree with an acceptable open source license, but it doesn't implement the DRM ABI and several design considerations make it unsuitable for inclusion in mainline Linux.

The absence of a driver in mainline prevents users from keeping their kernels up-to-date and hurts integration with other parts of the free software stack. It also discourages SoC and BSP vendors from submitting their code to mainline, and hurts their ability to track mainline closely.

Besides the code of the driver itself, there's one more condition for mainline inclusion: an open source implementation of the userspace library needs to exist, so other kernel contributors can help verifying, debugging and maintaining the kernel driver. It's an enormous pile of difficult work to reverse engineer the inner workings of a GPU and then implement a compiler and command submission infrastructure, so big thanks to Alyssa Rosenzweig for leading that effort.

Upstream status

Most of the Panfrost code is already part of mainline Mesa, with the code that directly interacts with the new DRM driver being in the review stage. Currently targeted GPUs are T760 and T860, with the RK3399 being the SoC more often used for testing.

The kernel driver is being developed in the open and though we are trying to follow the best practices as displayed by other DRM drivers, there's a number of tasks that need to be done before we consider it ready for submission.

The work ahead

In the kernel:
- Make MMU code more complete for correctness and better performance
- Handle errors and hangs and correctly reset the GPU
- Improve fence handling
- Test with compute shaders (to check completeness of the ABI)
- Lots of cleanups and bug fixing!

In Mesa:
- Get GNOME Shell working
- Get Chromium working with accelerated WebGL
- Get all of glmark2 working
- Get a decent subset of dEQP passing and use it in CI
- Keep refactoring the code
- Support more hardware

Get the code

The exact bits used for the demo recorded above are in various stages of getting upstreamed, but here they are in branches for easier reproduction:


February 28, 2019
For those of you who want to give the new Flicker Free Boot enhancements for Fedora 30 a try on Fedora 29, this is possible now since the latest F29 bugfix update for plymouth also includes the new theme used in Fedora 30.

If you want to give this a try, add "plymouth.splash_delay=0 i915.fastboot=1" to your kernel commandline:

  1. Edit /etc/default/grub, add "plymouth.splash_delay=0 i915.fastboot=1" to GRUB_CMDLINE_LINUX

  2. Run "sudo grub2-mkconfig -o /etc/grub2-efi.cfg"

Note that i915.fastboot=1 causes the backlight to not work on Haswell CPUs (e.g. i5-42xx CPUs); this is fixed in the 5.0 kernels which are currently available in rawhide/F30.

Run the following commands to get the updated plymouth and the new theme and to select the new theme:

  1. "sudo dnf update plymouth*"

  2. "sudo dnf install plymouth-theme-spinner"

  3. "sudo cp /usr/share/pixmaps/fedora-gdm-logo.png /usr/share/plymouth/themes/spinner/watermark.png"

  4. "sudo plymouth-set-default-theme -R bgrt"

Now on the next boot / installing of offline-updates you should get the new theme.
February 26, 2019

Intro slide

Downloads

If you're curious about the slides, you can download the PDF or the ODP.

Thanks

This post has been a part of work undertaken by my employer Collabora.

I would like to thank the wonderful organizers of Embedded World for hosting a great event.

February 25, 2019

This is the first report about Igalia’s activities around Computer Graphics, specifically 3D graphics and, in particular, the Mesa3D Graphics Library (Mesa), focusing on the year 2018.

GL_ARB_gl_spirv and GL_ARB_spirv_extensions

GL_ARB_gl_spirv is an OpenGL extension whose purpose is to enable an OpenGL program to consume SPIR-V shaders. In the case of GL_ARB_spirv_extensions, it provides a mechanism by which an OpenGL implementation would be able to announce which particular SPIR-V extensions it supports, which is a nice complement to GL_ARB_gl_spirv.

As both extensions, GL_ARB_gl_spirv and GL_ARB_spirv_extensions, are core functionality in OpenGL 4.6, the drivers need to provide them in order to be compliant with that version.

Although Igalia picked up the already-started implementation of these extensions in Mesa back in 2017, 2018 is the year in which we put a great deal of work into providing the needed push to have all the remaining bits in place. Much of this effort provides general support to all the drivers under the Mesa umbrella but, in particular, Igalia implemented the backend code for Intel‘s i965 driver (gen7+). Assuming that the review process for the remaining patches goes without important bumps, it is expected that the whole implementation will land in Mesa during the beginning of 2019.

Throughout the year, Alejandro Piñeiro gave status updates of the ongoing work through his talks at FOSDEM and XDC 2018. This is a video of the latter:

ETC2/EAC

The ETC and EAC formats are lossy compressed texture formats used mostly in embedded devices. OpenGL implementations of the versions 4.3 and upwards, and OpenGL/ES implementations of the versions 3.0 and upwards must support them in order to be conformant with the standard.

Most modern GPUs are able to work directly with the ETC2/EAC formats. Implementations for older GPUs that don’t have that support but want to be conformant with the latest versions of the specs need to provide that functionality through the software parts of the driver.

During 2018, Igalia implemented the missing bits to support GL_OES_copy_image in Intel’s i965 for gen7+, while gen8+ already complied through its HW support. As we were writing this entry, the work finally landed.

VK_KHR_16bit_storage

Igalia finished the work to provide support for the Vulkan extension VK_KHR_16bit_storage into Intel’s Anvil driver.

This extension allows the use of 16-bit types (half floats, 16-bit ints, and 16-bit uints) in push constant blocks, and buffers (shader storage buffer objects). This feature can help to reduce the memory bandwidth for Uniform and Storage Buffer data accessed from the shaders and/or optimize Push Constant space, of which there are only a few bytes available, making it a precious shader resource.

shaderInt16

Igalia added Vulkan’s optional feature shaderInt16 to Intel’s Anvil driver. This new functionality provides the means to operate with 16-bit integers inside a shader which, ideally, would lead to better performance when you don’t need a full 32-bit range. However, not all HW platforms may have native support, still needing to run in 32-bit and, hence, not benefiting from this feature. Such is the case for operations associated with integer division in the case of Intel platforms.

shaderInt16 complements the functionality provided by the VK_KHR_16bit_storage extension.

SPV_KHR_8bit_storage and VK_KHR_8bit_storage

SPV_KHR_8bit_storage is a SPIR-V extension that complements the VK_KHR_8bit_storage Vulkan extension to allow the use of 8-bit types in uniform and storage buffers, and push constant blocks. Similarly to the VK_KHR_16bit_storage extension, this feature can help to reduce the needed memory bandwidth.

Igalia implemented its support into Intel’s Anvil driver.

VK_KHR_shader_float16_int8

Igalia implemented the support for VK_KHR_shader_float16_int8 into Intel’s Anvil driver. This is an extension that enables Vulkan to consume SPIR-V shaders that use Float16 and Int8 types in arithmetic operations. It extends the functionality included with VK_KHR_16bit_storage and VK_KHR_8bit_storage.

In theory, applications that do not need the range and precision of regular 32-bit floating point and integers, can use these new types to improve performance. Additionally, its implementation is mostly API agnostic, so most of the work we did should also help to have a proper mediump implementation for GLSL ES shaders in the future.

The review process for the implementation is still ongoing and is on its way to land in Mesa.

VK_KHR_shader_float_controls

VK_KHR_shader_float_controls is a Vulkan extension which allows applications to query and override the implementation’s default floating point behavior for rounding modes, denormals, signed zero and infinity.

Igalia has coded its support into Intel’s Anvil driver and it is currently under review before being merged into Mesa.

VkRunner

VkRunner is a Vulkan shader tester based on shader_runner in Piglit. Its goal is to make it feasible to test scripts as similar as possible to Piglit’s shader_test format.

Igalia initially created VkRunner as a tool to get more test coverage during the implementation of GL_ARB_gl_spirv. Soon, it was clear that it was useful way beyond the implementation of this specific extension, as a generic way of testing SPIR-V shaders.

Since then, VkRunner has been enabled as an external dependency to run new tests added to the Piglit and VK-GL-CTS suites.

Neil Roberts introduced VkRunner at XDC 2018. This is his talk:

freedreno

During 2018, Igalia has also started contributing to the freedreno Mesa driver for Qualcomm GPUs. Among the work done, we have tackled multiple bugs identified through the usual testing suites used in the graphic drivers development: Piglit and VK-GL-CTS.

Khronos Conformance

The Khronos conformance program is intended to ensure that products that implement Khronos standards (such as OpenGL or Vulkan drivers) do what they are supposed to do and they do it consistently across implementations from the same or different vendors.

This is achieved by producing an extensive test suite, the Conformance Test Suite (VK-GL-CTS or CTS for short), which aims to verify that the semantics of the standard are properly implemented by as many vendors as possible.

In 2018, Igalia has continued its work ensuring that the Intel Mesa drivers for both Vulkan and OpenGL are conformant. This work included reviewing and testing patches submitted for inclusion in VK-GL-CTS and continuously checking that the drivers passed the tests. When failures were encountered we provided patches to correct the problem either in the tests or in the drivers, depending on the outcome of our analysis or, even, brought a discussion forward when the source of the problem was incomplete, ambiguous or incorrect spec language.

The most important result out of this significant dedication has been successfully passing conformance applications.

OpenGL 4.6

Igalia helped make Intel’s i965 driver conformant with OpenGL 4.6 since day zero. This was a significant achievement since, besides Intel Mesa, only nVIDIA managed to do this too.

Igalia specifically contributed to achieve the OpenGL 4.6 milestone providing the GL_ARB_gl_spirv implementation.

Vulkan 1.1

Igalia also helped to make Intel’s Anvil driver conformant with Vulkan 1.1 since day zero, too.

Igalia specifically contributed to achieve the Vulkan 1.1 milestone providing the VK_KHR_16bit_storage implementation.

Mesa Releases

Igalia continued the work it had already been carrying out in Mesa’s Release Team throughout 2018. This effort involved a continuous dedication to track the general status of Mesa against the usual test suites and benchmarks, but also to react quickly upon detected regressions, especially coordinating with the Mesa developers and the distribution packagers.

The work was obviously visible by releasing multiple bugfix releases as well as doing the branching and creating a feature release.

CI

Continuous Integration is a must in any serious SW project. In the case of API implementations it is even critical since there are many important variables that need to be controlled to avoid regressions and track the progress when including new features: agnostic tests that can be used by different implementations, different OS platforms, CPU architectures and, of course, different GPU architectures and generations.

Igalia has kept up a sustained effort to keep Mesa (and Piglit) CI integrations in good health, with an eye on the reported regressions to act immediately upon them. This has been a key tool for our work around Mesa releases, and the experience allowed us to push the initial proposal for a new CI integration when the FreeDesktop projects decided to start their migration to GitLab.

This work, along with the one done with the Mesa releases, lead to a shared presentation, given by Juan Antonio Suárez during XDC 2018. This is the video of the talk:

XDC 2018

2018 was the year that saw A Coruña hosting the X.Org Developer’s Conference (XDC) and Igalia as Platinum Sponsor.

The conference was organized by GPUL (Galician Linux User and Developer Group) together with University of A Coruña, Igalia and, of course, the X.Org Foundation.

Since A Coruña is the town in which the company originated and where we have our headquarters, Igalia had a key role in the organization, which was greatly benefited by our vast experience running events. Moreover, several Igalians joined the conference crew and, as mentioned above, we delivered talks around GL_ARB_gl_spirv, VkRunner, and Mesa releases and CI testing.

The feedback from the attendees was very rewarding and we believe the conference was a great event. Here you can see the Closing Session speech given by Samuel Iglesias:

Other activities

Conferences

As usual, Igalia was present in many graphics related conferences during the year:

New Igalians in the team

Igalia’s graphics team kept growing. Two new developers joined us in 2018:

  • Hyunjun Ko is an experienced Igalian with a strong background in multimedia, specifically GStreamer and Intel’s VAAPI. He is now contributing his impressive expertise to our Graphics team.
  • Arcady Goldmints-Orlov is the latest addition to the team. His previous experience as a graphics developer working with NVIDIA GPUs fits perfectly with the kind of work we are currently pushing at Igalia.

Conclusion

Thank you for reading this blog post and we look forward to more work on graphics in 2019!

Igalia

February 21, 2019
Fedora 30 now contains all the changes needed for a fully Flicker Free Boot. Last week a new version of plymouth landed which implements the new theme for this and also includes a much improved offline-updates experience, following this design.

At boot the display will seamlessly transition from the firmware boot-splash into the new plymouth theme, which uses the firmware boot-splash as background:



If you are using full-disk encryption then the unlock dialog will look like this:



Last, but not least the new plymouth theme looks like this in offline-updates mode:



At the moment the texts in the offline-updates theme are not translated. They are rendered using pango + cairo, so we have the capability to make this fully translatable into all languages; I just need to add gettext support to plymouth for this. I plan to do that next week.

This all assumes that you are booting your machine with UEFI and your firmware supports the BGRT extension (which almost all firmware does). Otherwise you will get a dark-grey background instead of the firmware boot-splash.
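If you want to double-check whether your firmware provides a BGRT image, you can look for the table in sysfs (assuming your kernel exposes the ACPI BGRT there, which recent kernels do):

ls /sys/firmware/acpi/bgrt/

If that directory exists and contains an image file, plymouth has a firmware boot-splash to work with.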

If you are running rawhide and are seeing a totally different boot theme, then you likely changed your plymouth theme away from the "charge" default at some point; in that case you will not be automatically switched to the new plymouth theme. To switch to the new theme run:

sudo plymouth-set-default-theme -R bgrt

If you do not like the firmware-splash being used as background, you can use the new theme on a dark-grey background instead by running:

sudo plymouth-set-default-theme -R spinner
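If you are unsure which theme is currently selected, plymouth-set-default-theme prints the current default theme when run without any arguments:

plymouth-set-default-theme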
February 18, 2019

I attended FOSDEM again this year thanks to funding from Igalia. This time I gave a talk about VkRunner in the graphics dev room. It’s now available on Igalia’s YouTube channel below:

I thought this might be a good opportunity to give a small status update of what has happened since my last blog post nearly a year ago.

Test suite integration

The biggest news is that VkRunner is now integrated into Khronos’ Vulkan CTS test suite and Mesa’s Piglit test suite. This means that if you work on a feature or a bugfix in your Vulkan driver and you want to make sure it doesn’t get regressed, it’s now really easy to add a VkRunner test for it and have it collected in one of these test suites. For Piglit all that is needed is to give the test script a .vk_shader_test extension and drop it anywhere under the tests/vulkan folder and it will automatically be picked up by the Piglit framework. As an added bonus, these tests are also run automatically on Intel’s CI system, so if your test is related to i965 in Mesa you can be sure it will not be regressed.

On the Khronos CTS side the integration is currently a little less simple. With help from Samuel Iglesias, we have merged a branch into master that lays the groundwork for adding VkRunner tests. Currently there are only proof-of-concept tests to show how the tests could work. Adding more tests still requires tweaking the C++ code, so it’s not quite as simple as we might hope.

API

When VkRunner is built, it now also builds a static library containing a public API. This can be used to integrate VkRunner into a larger test suite. Indeed, the Khronos CTS integration takes advantage of this to execute the tests using the VkDevice created by the test suite itself. This also means it can execute multiple tests quickly without having to fork an external process.

The API is intended to be very high-level and is as close as possible to simply having functions that ask VkRunner to execute a test script and return an enum reporting whether the test succeeded or not. There is an example of its usage in the README.
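To give a rough idea of the shape of the API, here is a minimal sketch of how a test suite might drive it. The identifiers below are hypothetical stand-ins, not VkRunner’s actual entry points; see the README for the real names and signatures:

/* Hypothetical stand-ins for the real VkRunner API; the actual
 * names and signatures may differ, check the README. */
#include <vkrunner/vkrunner.h>

int
main(int argc, char **argv)
{
        /* Create an executor (which owns, or is handed, a VkDevice) */
        struct vr_executor *executor = vr_executor_new();

        /* Run a shader test script and get a result enum back */
        enum vr_result result =
                vr_executor_execute(executor, "example.shader_test");

        vr_executor_free(executor);

        return result == VR_RESULT_PASS ? 0 : 1;
}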

Precompiled shader scripts

One of the concerns raised when integrating VkRunner into CTS is that it’s not ideal to have to run glslang as an external process in order to compile the shaders in the scripts to SPIR-V. To work around this, I added the ability to have scripts with binary shaders. In this case the 32-bit integer numbers of the compiled SPIR-V are just listed in ASCII in the shader test instead of the GLSL source. Of course writing this by hand would be a pain, so the VkRunner repo includes a Python script to precompile a bunch of shaders in a batch. This can be really useful to run the tests on an embedded device where installing glslang isn’t practical.
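For illustration, a shader section with a precompiled binary looks roughly like this, with the GLSL replaced by the SPIR-V words in ASCII. Treat the section name and layout here as an approximation and check the README for the exact syntax; the first word, 07230203, is the SPIR-V magic number:

[compute shader binary]
07230203 00010000 00080007 0000002e
00000000 00020011 00000001 ...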

However, in the end for the CTS integration we took a different approach. The CTS suite already has a mechanism to precompile all of the shaders for all tests. We wanted to take advantage of this also when compiling the shaders from VkRunner tests. To make this work, Samuel added some functions to the VkRunner API to query the GLSL shaders in a VkRunner script and then replace them with their binary equivalents. That way the CTS suite can use these functions to replace the shaders with its cached compiled versions.

UBOs, SSBOs and compute shaders

One of the biggest missing features mentioned in my last post was UBO and SSBO support. This has now been fixed with full support for setting values in UBOs and SSBOs and also probing the results of writing to SSBOs. Probing SSBOs is particularly useful alongside another added feature: compute shaders. Thanks to this we can run our shaders as compute shaders to calculate some results into an SSBO and probe the buffer to see whether it worked correctly. Here is an example script to show how that might look:

[compute shader]
#version 450

/* UBO input containing an array of vec3s */
layout(binding = 0) uniform inputs {
        vec3 input_values[4];
};

/* A matrix to apply to these values. This is stored in a push
 * constant. */
layout(push_constant) uniform transforms {
        mat3 transform;
};

/* An SSBO to store the results */
layout(binding = 1) buffer outputs {
        vec3 output_values[];
};

void
main()
{
        uint i = gl_WorkGroupID.x;

        /* Transform one of the inputs */
        output_values[i] = transform * input_values[i];
}

[test]
# Set some input values in the UBO
ubo 0 subdata vec3 0 \
  3 4 5 \
  1 2 3 \
  1.2 3.4 5.6 \
  42 11 9

# Create the SSBO
ssbo 1 1024

# Store a matrix uniform to swap the x and y
# components of the inputs
push mat3 0 \
  0 1 0 \
  1 0 0 \
  0 0 1

# Run the compute shader with one instance
# for each input
compute 4 1 1

# Check that we got the expected results in the SSBO
probe ssbo vec3 1 0 ~= \
  4 3 5 \
  2 1 3 \
  3.4 1.2 5.6 \
  11 42 9

Extensions in the requirements section

The requirements section can now contain the name of any extension. If this is done then VkRunner will check for the availability of the extension when creating the device and enable it. Otherwise it will report that the test was skipped. A lot of the Vulkan extensions also add an extended features struct to be used when creating the device. These features can also be queried and enabled for extensions that VkRunner knows about simply by listing the name of the feature in that struct. For example, if shaderFloat16 is listed in the requirements section, VkRunner will check for the VK_KHR_shader_float16_int8 extension and the shaderFloat16 feature within its extended feature struct. This makes it really easy to test optional features.
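For example, based on the description above, a test that needs 16-bit floats in shaders only has to list the feature name in its requirements section:

[require]
shaderFloat16

VkRunner should then check for VK_KHR_shader_float16_int8, enable the shaderFloat16 feature when creating the device, and skip the test if either is unavailable.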

Cross-platform support

I spent a fair bit of time making sure VkRunner works on Windows, including compiling with Visual Studio. The build files have been converted to CMake, which makes building on Windows even easier. It also compiles for Android thanks to patches from Jaebaek Seo. The repo contains Android build files to build the library and the vkrunner executable. This can be run directly on a device using adb.

User interface

There is a branch containing the beginnings of a user interface for editing VkRunner scripts. It presents an editor widget via GTK and continuously runs the test script in the background as you are editing it. It then displays the results in an image and reports any errors in a text field. The test is run in a separate process so that if it crashes it doesn’t bring down the user interface. I’m not sure whether it makes sense to merge this branch into master, but in the meantime it can be a convenient way to fiddle with a test when it fails and it’s not obvious why.

And more…

Lots of other work has been going on in the background. The best way to get more details on what VkRunner can do is to take a look at the README. It has been kept up-to-date as the source of documentation for writing scripts.

February 14, 2019

In this blog post, I'll explain how to update systemd's hwdb for a new device-specific entry. I'll focus on input devices, as usual.

What is the hwdb and why do I need to update it?

The hwdb is a binary database sitting at /etc/udev/hwdb.bin (or /usr/lib/udev/hwdb.bin). It is usually used to apply udev properties to specific devices; those properties are then picked up by other processes (udev builtins, libinput, ...) to apply device-specific behaviours. So you'll need to update the hwdb if you need a specific behaviour from the device.

One of the use-cases I commonly deal with is that some touchpad announces wrong axis ranges or resolutions. With the correct hwdb entry (see the example later) udev can correct these at device initialisation time and every process sees the right axis ranges.

The database is compiled from the various .hwdb files you have sitting on your system, usually in /etc/udev/hwdb.d and /usr/lib/udev/hwdb.d. The terser format of the hwdb files makes them easier to update than, say, writing a udev rule to set those properties.

The full process goes something like this:

  • The various .hwdb files are installed or modified
  • The hwdb.bin file is generated from the .hwdb files
  • A udev rule triggers the udev hwdb builtin. If a match occurs, the builtin prints the to-be properties, and udev captures the output and applies it as udev properties to the device
  • Some other process (often a different udev builtin) reads the udev property value and does something.
On its own, though, the hwdb is merely a lookup tool; it does not modify devices. Think of it as a virtual filing cabinet: something will need to look at it, otherwise it's just dead weight.

An example of such a udev rule, from 60-evdev.rules:


IMPORT{builtin}="hwdb --subsystem=input --lookup-prefix=evdev:", \
RUN{builtin}+="keyboard", GOTO="evdev_end"
The IMPORT statement translates as "look up the hwdb, import the properties". The RUN statement runs the "keyboard" builtin which may change the device based on the various udev properties now set. The GOTO statement skips the rest of the file.

So again, on its own the hwdb doesn't do anything, it merely prints to-be udev properties to stdout, udev captures those and applies them to the device. And then those properties need to be processed by some other process to actually apply changes.

hwdb file format

The basic format of each hwdb file consists of two types of entries: match lines and property assignments (indented by one space). The match line defines which devices it applies to.

For example, take this entry from 60-evdev.hwdb:


# Lenovo X230 series
evdev:name:SynPS/2 Synaptics TouchPad:dmi:*svnLENOVO*:pn*ThinkPad*X230*
EVDEV_ABS_01=::100
EVDEV_ABS_36=::100
The match line is the one starting with "evdev", the other two lines are property assignments. Property values are strings; any interpretation into numeric or other values is done by the process that requires those properties. Noteworthy here: the hwdb can overwrite previously set properties, but it cannot unset them.
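A quick note on the property values themselves, since they look cryptic: for udev's evdev builtin the EVDEV_ABS assignments follow, as far as I remember, the format

EVDEV_ABS_<axis code in hex>=<min>:<max>:<resolution>:<fuzz>:<flat>

where empty fields are left untouched. So the ::100 in the example above only overrides the resolution of that axis.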

The match line is not specified by the hwdb beyond "it's a glob". The format to use is defined by the udev rule that invokes the hwdb builtin. Usually the format is:


someprefix:search criteria:
For example, the udev rule that applies for the match above is this one in 60-evdev.rules:

KERNELS=="input*", \
IMPORT{builtin}="hwdb 'evdev:name:$attr{name}:$attr{[dmi/id]modalias}'", \
RUN{builtin}+="keyboard", GOTO="evdev_end"
What does this rule do? $attr entries get filled in by udev with the sysfs attributes. So on your local device, the actual lookup key will end up looking roughly like this:

evdev:name:Some Device Name:dmi:bvnWhatever:bvR112355:bd02/01/2018:...
If that string matches the glob from the hwdb, you have a match.

Attentive readers will have noticed that the two entries from 60-evdev.rules I posted here differ. You can have multiple match formats in the same hwdb file. The hwdb doesn't care, it's just a matching system.

We keep the hwdb file names matching the udev rules names for ease of maintenance, so 60-evdev.rules keeps its hwdb entries in 60-evdev.hwdb and so on. But this is just for us puny humans; the hwdb will parse all files it finds into one database. If you have a hwdb entry in my-local-overrides.hwdb it will be matched. The file-specific prefixes are just there to not accidentally match against an unrelated entry.

Applying hwdb updates

The hwdb is a compiled format, so the first thing to do after any changes is to run


$ systemd-hwdb update
This command compiles the files down to the binary hwdb that is actually used by udev. Without that update, none of your changes will take effect.

The second thing is: you need to trigger the udev rules for the device you want to modify. Either you do this by physically unplugging and re-plugging the device or by running


$ udevadm trigger
or, better, trigger only the device you care about to avoid accidental side-effects:

$ udevadm trigger /sys/class/input/eventXYZ
In case you also modified the udev rules you should re-load those too. So the full quartet of commands after a hwdb update is:

$ systemd-hwdb update
$ udevadm control --reload-rules
$ udevadm trigger
$ udevadm info /sys/class/input/eventXYZ
That udevadm info command lists all assigned properties; these should now include the modified entries.

Adding new entries

Now let's get down to what you actually want to do: adding a new entry to the hwdb. And this is where it also gets tricky to have a generic guide, because every hwdb file has its own custom match rules.

The best approach is to open the .hwdb files and the matching .rules file and figure out what the match formats are and which one is best. For USB devices there's usually a match format that uses the vendor and product ID. For built-in devices like touchpads and keyboards there's usually a dmi-based match format (see /sys/class/dmi/id/modalias). In most cases, you can just take an existing entry and copy and modify it.
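For example, before writing a dmi-based match for a built-in device, it helps to print the string you will be matching against:

$ cat /sys/class/dmi/id/modalias

For USB devices, running udevadm info on the device shows the vendor and product ID properties you can match on.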

My recommendation is: add an extra property that makes it easy to verify the new entry is applied. For example do this:


# Lenovo X230 series
evdev:name:SynPS/2 Synaptics TouchPad:dmi:*svnLENOVO*:pn*ThinkPad*X230*
EVDEV_ABS_01=::100
EVDEV_ABS_36=::100
FOO=1
Now run the update commands from above. If FOO=1 doesn't show up, then you know it's the hwdb entry that's not yet correct. If FOO=1 does show up in the udevadm info output, then you know the hwdb matches correctly and any issues will be in the next layer.

Increase the value with every change so you can tell whether the most recent change is applied. And before you submit a pull request, remove the FOO entry.

Oh, and once it applies correctly, I recommend restarting the system to make sure everything is in order on a freshly booted system.

Troubleshooting

The reason for adding hwdb entries is always that we want the system to handle a device in a custom way. But it's hard to figure out what's wrong when something doesn't work (though 90% of the time it's a typo in the hwdb match).

In almost all cases, the debugging sequence is the following:

  • does the FOO property show up?
  • did you run systemd-hwdb update?
  • did you run udevadm trigger?
  • did you restart the process that requires the new udev property?
  • is that process new enough to have support for that property?
If the answer to all these is "yes" and it still doesn't work, you may have found a bug. But 99% of the time, at least one of those is a sound "no. oops.".

Your hwdb match may run into issues with some 'special' characters. If your device has e.g. an ® in its device name (some Microsoft devices have this), a bug in systemd caused the match to fail. That bug is fixed now, but until the fix is available in your distribution, replace the special character with an asterisk ('*') in your match line.
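For example, rather than a match line containing the literal character (the device name here is made up for illustration):

evdev:name:Frobnicator® Mouse:*

use a glob in its place:

evdev:name:Frobnicator* Mouse:*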

Greybeards who have been around since before 2014 (systemd v219) may remember a different tool to update the hwdb: udevadm hwdb --update. This tool still exists, but it does not have the exact same behaviour as systemd-hwdb update. I won't go into details but the hwdb generated by the udevadm tool can provide unexpected matches if you have multiple matches with globs for the same device. A more generic glob can take precedence over a specific glob and so on. It's a rare and niche case and fixed since systemd v233 but the udevadm behaviour remained the same for backwards-compatibility.

Happy updating and don't forget to add Database Administrator to your CV when your PR gets merged.

February 05, 2019
Video and slides for my talk in the LLVM devroom on TableGen are now available here.

Now I only need the time and energy to continue my blog series on the topic...
February 01, 2019
I've just landed a big milestone for the Flicker Free Boot work I'm doing for Fedora 30. Starting with today's rawhide kernel build, version 5.0.0-0.rc4.git3.1, the fastboot option for the i915 Intel display driver is enabled by default on systems with a Skylake CPU/iGPU and newer, as well as on Valleyview and Cherryview (Bay- and Cherry-Trail) systems.

This means that the last modeset / flicker during boot of UEFI systems using the integrated Intel GPU for display output is now gone.
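If you want to try fastboot on an older Intel system not covered by the new default, you can still opt in manually (assuming your kernel is recent enough to have the option) by adding the i915 module parameter to your kernel commandline:

i915.fastboot=1

As usual with non-default options, your mileage may vary.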