planet.freedesktop.org
December 13, 2019
If you remember, back in 2016, I did the work to get a “Launch on Discrete GPU” menu item added to applications in gnome-shell.

This cycle I worked on adding support for the NVIDIA proprietary driver, so that the menu item shows up, and the right environment variables are used to launch applications on that device.

Tested with another unsupported device...


Behind the scenes

There were a number of problems with the old detection code in switcheroo-control:
- it required the graphics card to use vga_switcheroo in the kernel, which the NVIDIA driver didn't do
- it couldn't support more than 2 GPUs
- and it didn't actually know which GPU was going to be the “main” one

And, on top of all that, gnome-shell expected the Mesa OpenGL stack to be used, so it only knew the right environment variables to do that, and only for one secondary GPU.

So we've extended switcheroo-control and its API to do all this.

(As a side note, commenters asked me about KDE support and how it would integrate, and it turns out that KDE's code just checks for the presence of a file in /sys, which is only present when vga_switcheroo is used. So I would encourage KDE to adopt the switcheroo-control D-Bus API for this.)
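
For the curious, here is a rough sketch of how a compositor or script can talk to that D-Bus API from the shell (names per the switcheroo-control 2.x interface; verify with introspection on your own system, as the exact properties depend on the installed version):

# Show everything switcheroo-control exposes on the system bus
busctl --system introspect net.hadess.SwitcherooControl \
    /net/hadess/SwitcherooControl

# The GPUs property lists each GPU together with the environment
# variables needed to launch an application on it
busctl --system get-property net.hadess.SwitcherooControl \
    /net/hadess/SwitcherooControl net.hadess.SwitcherooControl GPUs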

Closing

All this will be available in Fedora 32, using GNOME 3.36 and switcheroo-control 2.0. We might backport this to Fedora 31 after it's been tested, and if there is enough interest.
December 10, 2019

Unlike the tradition of my past few talks at Linux Plumbers or Kernel conferences, this time around in Lisboa I did not start out with a rant proposing to change everything. Instead I celebrated roughly 10 years of upstream graphics progress and finally achieving paradise. But that was all just prelude to a few bait-and-switches to later fulfill expectations on what’s broken this time around in upstream, totally, and what needs to be fixed and changed, maybe.

The LPC video recording is now released and the slides are uploaded. If neither of those is to your taste, read below the break for the written summary.

Mission Accomplished

10 or so years ago upstream graphics was essentially a proof of concept for what was promised to come. Kernel display modesetting had just landed, finally bringing a somewhat modern display driver userspace API to Linux. And GEM, the graphics execution manager, had landed, bringing proper GPU memory management and multi-client rendering. Realistically a lot still needed to be done, from rendering drivers for all the various SoCs, to an atomic display API that can expose all the features, not just what was needed to light up a Linux desktop back in the day. And lots of work to improve the codebase and make it much easier and quicker to write drivers.

There’s obviously still a lot to do, but I think we’ve achieved that - for full details, check out my ELCE talk about everything great for upstream graphics.

Now despite all this justified celebrating, there is one sticking point still:

NVIDIA

The trouble with team green from an open source perspective - for them it’s a great boon - is that they own the GPU software stack in two crucial ways:

  • NVIDIA defines how desktop GL works. Not so relevant anymore, and at least the core profile is a solid spec and by now has a fully open source test suite from Khronos. But the compatibility profile, which didn’t throw out all the legacy features from the GL 1.x days in the 90s, does not have any of the interactions with all the new features specced out and covered with tests - NVIDIA’s binary driver is that standard, and has been for roughly 20 years.

  • More relevant today is CUDA, which hasn’t been around quite as long as desktop GL, but serves a market that’s growing at a rather brisk pace. CUDA is the undisputed king of the general purpose GPU compute hill. Anything and everything that matters runs on top of it, often exclusively.

Together these create a huge software moat around the high margin hardware business. All an open stack would achieve is filling in that moat and inviting competition to eat the nice surplus. In other words, stupid to even attempt, vendor lock-in just pays too well.

Now of course the reverse engineered nouveau driver still exists. But if you have to pay for reverse engineering already, then you might as well go with someone else’s hardware, since you’re not going to get any of the CUDA/GL goodies.

And the business case for open source drivers indeed exists, so much so that even paying for reverse engineering a full stack is no problem. The result is a vibrant community of hardware vendors, customers, distros and consulting shops who pay the bills for all the open driver work that’s being done. And in userspace even “upstream first” works - releases happen quickly and often enough, with a sufficiently smooth merge process, that having a vendor tree is simply not needed. Plus customers are willing to upgrade if necessary, because it’s usually a well-contained component to enable new hardware support.

In short, without a solid business case behind open graphics drivers, they’re just not going to happen, viz. NVIDIA.

Not Shipping Upstream

Unfortunately the business case for “upstream first” on the kernel side is completely broken. Not for open source, and not for any fundamental reason, but simply because the kernel moves too slowly, is too big, drivers aren’t well-contained enough, and therefore customers will not or even cannot upgrade. For some hardware, upstreaming early enough is possible, but graphics simply moves too fast: by the time the upstreamed driver is actually in shipping distros, it’s already one hardware generation behind. And missing almost a year of tuning and performance improvements. Worse, it’s not just new hardware, but also GL and Vulkan versions that won’t work on older kernels due to missing features, fragmenting the ecosystem further.

This is entirely unlike the userspace side, where refactoring and code sharing in a cross-vendor shared upstream project actually pays off. Even in the short term.

There’s a lot of approaches trying to paper over this rift with the linux kernel:

  • Stable kernel ABI for driver modules, so that you can upgrade the core kernel and drivers independently. Google Android is very much aiming this solution at their huge vendor tree problem. Traditionally enterprise distros do the same. This works, save that a stable kernel-internal ABI is not a notion that’s very popular with kernel maintainers …

  • If you go with an “upstream first” approach to shipping graphics drivers, you first need to polish your driver, refactor out common components, and push it all upstream. Only to then pay a second team to re-add all the crap so you can ship your driver on all the old kernels, where all the helpers and new common code don’t exist.

  • Pay your distro or OS vendor to just backport the new helpers before they even have landed in an upstream release. Which means instead of a backporting team for the driver on your payroll you now pay for backporting the entire subsystem - which in many cases is cheaper, but an even harder sell to beancounters. And sometimes not possible because other driver teams from competitors might not be on board and insist on not breaking the stable driver ABI for a given distro release kernel.

Also, there just isn’t a single LTS kernel. Even upstream has multiple, plus every distro has their own flavour, plus customers love to grow their own variety trees too. Often they’re not even coordinated on the same upstream release. Cheapest way to support this entire madness is to completely ignore upstream and just write your own subsystem. Or at least not use any of the helper libraries provided by kernel subsystems, completely defeating the supposed benefit of upstreaming code.

No matter the strategy, they all boil down to paying twice - if you want to upstream your code. And there’s no added return for the doubled bill. In conclusion, upstream first needs a business case, like the open source graphics stack in general. And that business case is very much real - except that for upstreaming, it’s only real in userspace.

In the kernel, “upstream first” is a sham, at least for graphics drivers.

Thanks to Alex Deucher for reading and commenting on drafts of this text.

December 03, 2019

At ELC Europe in Lyon I held a nice little presentation about the state of upstream graphics drivers, and how absolutely awesome it all is. Of course with a big focus on SoC and embedded drivers. Slides and the video recording are available.

Key takeaways for the busy:

  • The upstream DRM graphics subsystem really scales down to tiny drivers now, with the smallest driver coming in at just around 250 lines (including comments and whitespace), 10’000x less than the biggest!

  • Batteries all included - there’s modular helpers for everything. As a rule of thumb even minimal legacy fbdev drivers ported to DRM shrink by a factor of 2-4 thanks to these helpers taking care of anything that’s remotely standardized in displays and GPUs.

  • For shipping userspace drivers go with a dual-stack: Open source GL and Vulkan drivers for those who want that, and for getting the kernel driver merged into upstream. Closed source for everyone else, running on the same userspace API and kernel driver.

  • And for customer support, backport the entire subsystem; try to avoid backporting an individual driver.

In other words, world domination is assured and progressing according to plan.

November 27, 2019
I got contacted by a user with a HP X2 10 p018wm 2-in-1 about the device waking up 10-60 seconds after suspend. I have access to a HP X2 10 p002nd myself which in essence is the same HW and I managed to reproduce the problem there. This is when the fun started:

1. There were a whole bunch of ACPI related errors in dmesg. It turns out that these affect almost all HP laptop models and we have multiple bugs open for this. Debugging these pointed to the hp-wmi driver. I wrote 2 patches fixing 2 different kinds of errors and submitted these upstream. Unfortunately this does not help with the suspend/resume issue, but it does fix all those errors people have been complaining about :)

2. I noticed some weird messages in dmesg which look like a PCI bus re-enumeration being started during suspend when suspending by closing the lid, with the re-enumeration continuing after resume. This turns out to be triggered by the following piece of buggy AML code, which is used for monitor hotplug notification on gfx state changes (the i915 driver ACPI opregion also tracks the lid state for some reason):

                Method (GNOT, 2, NotSerialized)
                {
                    ...
                    CEVT = Arg0
                    CSTS = 0x03
                    If (((CHPD == Zero) && (Arg1 == Zero)))
                    {
                        If (((OSYS > 0x07D0) || (OSYS < 0x07D6)))
                        {
                            Notify (PCI0, Arg1)
                        }
                        Else
                        {
                            Notify (GFX0, Arg1)
                        }
                    }
                    ...
                }

Notice how "If (((OSYS > 0x07D0) || (OSYS < 0x07D6)))" is always true, the condition is broken the "||" clearly should have been a "&&" this is causing the code to send a hotplug notify to the PCI root instead of to the gfx card, triggering a re-enumeration. Doing a grep for this on my personal DSDT collection shows that 55 of the 93 DSDTs in my collection have this issue!

Luckily this can be easily fixed by setting CHPD to 1 in the i915 driver, which is something we should do anyway according to the opregion documentation. So I wrote a patch doing this and submitted it upstream. Unfortunately this also does not help with the suspend/resume issue.

3. So the actual spurious wakeups are caused by HP using an external embedded controller (EC) on the "legacy-free" platform which they use for these laptops. Since these platforms are not designed to use an external EC they lack the standard interface for this, so HP has hooked the EC up over I2C, using an ACPI GPIO event handler as the EC interrupt.

These devices use suspend2idle (s2idle) instead of good old firmware-handled S3, so the EC stays active during suspend. It does some housekeeping work which involves a round-trip through the AML code every minute. Normally EC wakeups are ignored during s2idle by some special handling in the kernel, but this is only done for ECs using the standardized ACPI EC interface, not for this bolted-on-the-side model. I've started a discussion on maybe extending our ACPI event handling to deal with this special case.

For now as a workaround I ended up writing 2 more patches to allow blacklisting wakeup by ACPI GPIO event handlers on select models. This breaks wakeup by opening the lid; the user needs to wake the laptop with the power button. But at least the laptop will stay suspended now.

There have been questions about the Fedora 30 Flicker Free Boot Change in various places; here is a FAQ which hopefully answers most of them:

1) I get a black screen for a couple of seconds during boot?

1a) If you have an AMD or Nvidia GPU driving your screen, then this is normal. The graphics drivers for AMD and Nvidia GPUs reset the hardware when loading, which causes the display to temporarily go black. There is nothing which can be done about this.

1b) If you have a somewhat older Intel GPU (your CPU is pre-Skylake), then the i915 driver's support for skipping the mode-reset is disabled by default (for now). To fix this, add "i915.fastboot=1" to your kernel commandline. For more info on modifying the kernel cmdline, see question 7.

1c) Do "ls /sys/firmware/efi/efivars" if you get a "No such file or directory" error then your system is booting in classic BIOS mode instead of UEFI mode, to fix this you need to re-install and boot the livecd/installer in UEFI mode when installing. Alternatively you can try to convert your existing install, note this is quite tricky, make backups first!

1d) Your system may be using the classic VGA BIOS during boot despite running in UEFI mode. Often you can select BIOS compatibility mode, aka the CSM setting, in your BIOS settings. If you can select this on a per-component level, set the VIDEO/VGA option to "UEFI only" or "UEFI first"; alternatively you can try completely disabling the CSM mode. On some systems you can disable the classic VGA BIOS by disabling / unselecting the "Legacy Option ROMs" option.

2) The vendor-logo/firmware-splash looks squashed or has the wrong size?

Your system may be using the classic VGA BIOS during boot despite running in UEFI mode, see answer 1d.

3) I get a black background instead of the firmware splash while Fedora is booting?

Do "ls /sys/firmware/acpi/bgrt" if you get a "No such file or directory" error then try answers 1c and 1d . If you do have a /sys/firmware/acpi/bgrt directory, but you are still getting the Fedora logo + spinner on a black background instead of on top of the firmware-splash, please file a bug about this and drop me a mail with a link to the bug.

4) Getting rid of the vendor-logo/firmware-splash being shown while Fedora is booting?

If you don't want the firmware-splash to be used as background during boot, you can switch plymouth to the spinner theme, which is identical to the new bgrt theme except that it does not use the firmware-splash as background. To do this execute the following command from a terminal: "sudo plymouth-set-default-theme -R spinner"

Note that the kernel will restore the vendor-logo early on at boot in case it got damaged by e.g. option ROM messages. If you are switching to the spinner theme you may also want to add "video=efifb:nobgrt" to your kernel commandline. See 7 below for how to edit the kernel commandline.

5) Keeping the firmware-splash as background while unlocking the disk?

If you prefer this, it is possible to keep the firmware-splash as background while the diskcrypt password is shown. To do this do the following:

  1. "sudo mkdir /usr/share/plymouth/themes/mybgrt"

  2. "sudo cp /usr/share/plymouth/themes/bgrt/bgrt.plymouth /usr/share/plymouth/themes/mybgrt/mybgrt.plymouth"

  3. edit /usr/share/plymouth/themes/mybgrt/mybgrt.plymouth, change DialogClearsFirmwareBackground=true to DialogClearsFirmwareBackground=false, change DialogVerticalAlignment=.382 to DialogVerticalAlignment=.6

  4. "sudo plymouth-set-default-theme -R mybgrt"

Note that if you do this, the disk-passphrase entry dialog may be partially drawn over the vendor-logo part of the firmware-splash; if this happens, try increasing DialogVerticalAlignment to e.g. 0.7.
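
For convenience, here are steps 1-4 above as a single shell sketch (same paths and values as the steps above; adjust if your plymouth version lays out its themes differently):

sudo mkdir /usr/share/plymouth/themes/mybgrt
sudo cp /usr/share/plymouth/themes/bgrt/bgrt.plymouth \
        /usr/share/plymouth/themes/mybgrt/mybgrt.plymouth
# Keep the firmware background and move the dialog down a bit
sudo sed -i -e 's/DialogClearsFirmwareBackground=true/DialogClearsFirmwareBackground=false/' \
            -e 's/DialogVerticalAlignment=\.382/DialogVerticalAlignment=.6/' \
            /usr/share/plymouth/themes/mybgrt/mybgrt.plymouth
sudo plymouth-set-default-theme -R mybgrt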

6) Get detailed boot progress instead of the boot-splash?

To get detailed boot progress info press ESC during boot.

7) Always get detailed boot progress instead of the boot-splash?

To always get detailed boot progress instead of the boot-splash, remove "rhgb" from your kernel commandline:

Edit /etc/default/grub and remove rhgb from GRUB_CMDLINE_LINUX, then if you are booting using UEFI (see 1c) run:
"grub2-mkconfig -o /etc/grub2-efi.cfg"
else (if you are booting using classic BIOS boot) run:
"grub2-mkconfig -o /etc/grub2.cfg".
November 15, 2019
The Lenovo Thinkpad 8 and the Asus T100CHI both have a USB3 micro-B connector, but using a standard USB3 OTG (USB3 micro-B to USB3-A receptacle) cable results in only USB2 devices working. USB3 devices are not recognized.

Searching the internet reveals that many people have this problem and that the solution is to find a USB3 micro-A to USB3-A receptacle cable. This sounds like nonsense to me, as micro-B really is micro-AB and is supposed to use the ID pin to automatically switch between modes depending on the cable used; and this does work for the USB2 parts of the micro-B connector on the Thinkpad. Yet people do claim success with such cables (with a more square micro-A connector, instead of the trapezoid micro-B connector). The only problem is such cables are not for sale anywhere.

So my guess was that this means they have swapped the Rx and Tx superspeed pairs on the USB3-only part of the micro-B connector, and I decided to cut open one of my USB3 micro-B to USB3-A receptacle cables and swap the superspeed pairs. Here is what the cable looks like when it is cut open:



If you are going to make such a cable yourself: to get this picture, I first removed the outer plastic isolation (part of it is still there on the right side in this picture). Then I folded away the shield wires till they were all on one side (wires at the top of the picture). After this I removed the metal foil underneath the shield wires.

Having removed the first 2 layers of shielding reveals 4 wires in the standard USB2 colors: red, black, green and white, plus 2 separately shielded cable pairs. In the picture above the separately shielded pairs have been cut, giving us 4 pairs, 2 on each end of the cable; and the shielding has been removed from 3 of the 4 pairs, you can still see the shielding on the 4th pair.

A standard USB3 cable uses the following color codes:

  • Red: Vbus / 5 volt

  • White:  USB 2.0 Data -

  • Green: USB 2.0 Data +

  • Black: Ground

  • Purple: Superspeed RX -

  • Orange: Superspeed RX +

  • Blue: Superspeed TX -

  • Yellow: Superspeed TX +

So to swap RX and TX we need to connect purple to blue / blue to purple and orange to yellow / yellow to orange, resulting in:



Note the wires are just braided together here, not soldered yet. This is a good moment to carefully test the cable. Also note that the superspeed wire pairs must be length-matched, so you need to cut and strip all 8 wires to the same length! If everything works you can put some solder on those braided-together wires, re-test after soldering, and then cover them with some heat-shrink tube:



And then cover the entire junction with a bigger heat-shrink-tube:



And you have a superspeed capable cable even though no one will sell you one.

Note that the Thinkpad 8 supports ACA mode, so if you get an ACA-capable "Y" cable or an ACA charging hub then you can charge and use the Thinkpad 8 USB port at the same time. Typically ACA "Y" cables or hubs are USB2 only, so the superspeed mod from this blogpost will not help with those. The Asus T100CHI has a separate USB2 micro-B just for charging, so you do not need anything special there to charge while connecting a USB device.
November 10, 2019

It has been a while since my last post, but there is a simple reason for that: on August 5th, I had to move from Brazil to Canada. Why did I move? Thanks to Harry Wentland's recommendation, I got an interview for a software engineering position at AMD (Markham), and I got hired to work on the display team. From now on, I suppose that I'll be around the DRM subsystem for a long time :). Even though I'm now employed by AMD, this post reflects my personal thoughts only and should not be construed to represent AMD in any way.

I have only a few updates about my work with the community, since I have been busy with my relocation and adaptation. My main updates come from XDC 2019 [1], and I want to share them here.

XDC 2019 - Montréal (Concordia University Conference)

This year I had the great luck of joining XDC again. This time, I was there with Harry Wentland, Nicholas Kazlauskas, and Leo Li (we work together at AMD). We put effort into learning from other people’s experiences, and we tried to find out what compositor developers want to see in our driver. We also used this opportunity to explain a little bit more about our hardware features. In particular, we had conversations about Freesync, MST, DKMS, and so forth. With that in mind, I’ll share my view of the most exciting moments that we had.

VKMS

As usual, I tried my best to understand what people want to see in VKMS sooner or later. For example, after XDC 2018 I focused on fixing some bugs, but mainly on adding writeback support, because it can provide visual output (this work is almost done, see [2]). This year I collected feedback from multiple people (special thanks to Marten, Lyude, Hiler, and Harry), and from these conversations I intend to work on the following tasks:

  1. Finish the writeback feature and enable visual output;
  2. Add support for adaptive refresh rate;
  3. Add support for “dynamic connectors”, which can enable the MST test.

Additionally, Martin Peres gave a talk in which he shared his views on CI and testing. In his presentation, he suggested using VKMS to validate the API, and I have to admit that I’m really excited about this idea. I hope that I can help with this.

Freesync

The amdgpu driver supports a technology named Freesync [3]. In a few words, this feature allows dynamic changes of the refresh rate, which can bring benefits for games and for power saving. Harry Wentland gave a talk about this feature, and you can see it here:

Video 1: Freesync, Adaptive Sync & VRR

After Harry’s presentation, many people asked interesting questions related to this subject. This caught my attention, and for this reason I added VRR to my VKMS roadmap. Roman Gilg, from KDE, was one of the developers who asked for a protocol extension in Wayland to support Freesync; additionally, compositor developers asked for mechanisms that enable them to know in advance whether the experience on a specific panel will be good or not. Finally, there were some discussions about the use of Freesync for power saving and in time-sensitive applications.

IGT and CI

This year I kept my tradition of asking Hiler thousands of questions with the goal of learning more about IGT, and as usual, he was extremely kind and gentle with my questions (thanks, Hiler). One of the things he explained to me is the use of podman (https://podman.io/) with the prebuilt IGT image; for example, after a few minutes of pairing with him I could run IGT on my machine by executing the following commands:

# Become root; the containers need privileged access to the GPU
sudo su
# Pull and start the prebuilt IGT container image
podman run --privileged registry.freedesktop.org/drm/igt-gpu-tools/igt:master
# Run a single test (core_auth) inside the container
podman run --privileged registry.freedesktop.org/drm/igt-gpu-tools/igt:master \
                        igt_runner -t core_auth
# Same, but write the results to /tmp inside the container
podman run --privileged registry.freedesktop.org/drm/igt-gpu-tools/igt:master \
                        igt_runner -t core_auth /tmp
# Bind-mount a host directory so the results survive the container
podman run --privileged -v /tmp/results:/results \
  registry.freedesktop.org/drm/igt-gpu-tools/igt:master igt_runner -t core_auth /results

We also had a chance to discuss CI with Martin Peres, and he explained his work on improving the way the CI keeps track of bugs. In particular, he introduced a fantastic tool named cibuglog, which keeps track of test failures and uses this data to build a database. Cibuglog has many helpful filters that enable us to see test problems associated with a specific machine and with bugs in Bugzilla. The huge insight from cibuglog is the idea of using data to help with bug tracking. Thanks, Martin, for showing us this amazing tool.

Updates

I just want to finish this post with brief updates on my work on free software, starting with kw and finishing with VKMS.

Kernel Workflow (kw)

When I started to work on VKMS, I wrote a tool named kworkflow, or simply kw, to help me with basic tasks related to kernel development. Recently kw was reborn for me, since I was looking for a way to automate my work with amdgpu; as a result, I implemented the following features:

  • Kernel deploy in a target machine (any machine reachable via IP);
  • Module deploy;
  • Capture .config file from a target machine;

Unfortunately, the code is not ready to be merged into the main branch yet; I’m working on it, and I think that in a couple of weeks I can release a new version with these features. If you want to know a little bit more about kw, take a look at https://siqueira.tech/doc/kw/

VKMS

I have not been working on VKMS due to my change of country; however, I am now reworking the part of the IGT tests related to writeback, and as soon as I finish it, I will try to upstream it again. I hope to have the VKMS writeback support merged into drm-misc-next by the end of this month. Finally, I merged the prime support implemented by Oleg Vasilev (huge thanks!).

References

[1] “First discussion in Shayenne’s patch about the CRC problem”. URL: https://xdc2019.x.org

[2] “Introduces writeback support”. URL: https://patchwork.freedesktop.org/series/61738/

[3] “FreeSync”. URL: https://en.wikipedia.org/wiki/FreeSync

November 05, 2019
As mentioned in an earlier blogpost, I have been working on fixing many games showing a small image centered on a black background when they are run fullscreen under Wayland. In that blogpost I was mostly looking at how to solve this for native Wayland games. But for various reasons almost all games still use X11, so instead I've ended up focussing on fixing this for games using Xwayland.

Xwayland now has support for emulating resolution changes requested by an app through the randr or vidmode extensions. If a client makes a resolution change request, this is remembered, and if the client then creates a window located at the monitor's origin and sized to exactly that resolution, Xwayland will ask the compositor to scale it to fill the entire monitor.
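
If you want to poke at this emulation by hand, something like the following should exercise it (a sketch; the RandR output name under Xwayland is machine-specific, so list the outputs first):

# List the RandR outputs Xwayland exposes (typically XWAYLAND0, ...)
xrandr --listmonitors
# Request a lower resolution; Xwayland remembers it and will scale a
# matching fullscreen window instead of changing the real mode
xrandr --output XWAYLAND0 --mode 800x600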

For apps which use _NET_WM_FULLSCREEN (e.g. SDL2, SFML or OGRE based apps) to go fullscreen, some help from the compositor is necessary. This is currently implemented in mutter. If you are a developer of another compositor and have questions about this, please drop me an email.

I failed to get this all upstream in time for Fedora 31 final. But now it is all upstream, so I've backported the changes and created an update with them. This update is currently in updates-testing; to install it, run the following command:

sudo dnf upgrade --enablerepo=updates-testing --advisory=FEDORA-2019-103a594d07
October 24, 2019

I recently went to XDC 2019, where I gave yet another talk about Zink. I kinda forgot to write a blog-post about it, so here’s me trying to make up for it… or something like that. I’ll also go into some more recent developments as well.

My presentation was somewhat similar to the talk I did at SIGGRAPH this year, but with a bit more emphasis on the technical aspect, as the XDC audience is more familiar with Mesa internals.

If you’re interested, you can find the slides for the talk here. The talk goes through the motivation and basic approach. I don’t think I need to go through this again, as I’ve already covered that before.

As for the current status: Zink supports OpenGL 2.1 and OpenGL ES 2.0. And there are no immediate plans to work on OpenGL 3.0 and OpenGL ES 3.0 until Zink is upstream.

Which gets us to the more interesting bit; that I started working on upstreaming Zink. So let’s talk about that for a bit.

Upstreaming Zink

So, my current goal is to get Zink upstream in Mesa. The plan outlined in my XDC talk is slightly outdated by now, so here I’ll instead say what’s actually happened so far, and what I hope will follow.

Before I could add the driver itself, I decided to send all changes outside of the driver as a set of merge-requests:

The last one is probably the most interesting one, as it moves a lot of fixed-function operations into the state-tracker, so individual drivers won’t have to deal with them. Unless they choose to, of course.

But all of these have already been merged, and there’s just one final merge-request left:

This merge-request adds the driver in its current state. It consists of 163 commits at the time of writing, so it’s not a thing of beauty. But new drivers usually aren’t, so I’m not too worried.

When this is merged, Zink will finally be a “normal” part of Mesa… well, sort of, anyway. I don’t think we’ll enable Zink to be built by default for a while. But that’ll just be a matter of adding zink to the -Dgallium-drivers meson-option.
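
For the curious, building Mesa with Zink enabled would then look roughly like this (a sketch; other build options omitted, and you may want to list additional gallium drivers in the same option):

# Configure Mesa with the Zink gallium driver enabled, then build
meson setup build/ -Dgallium-drivers=zink
ninja -C build/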

Testing on CI

The current branch only adds building of Zink to the CI. There’s no testing being done yet. The reasons for this are two-fold:

  1. We need to get a running environment on CI. Rather than bringing up some hardware-enabled test-runner, I intend to try to set up SwiftShader as a software rasterizer instead, as that supports Vulkan 1.1 these days.
  2. We need some tests to run. Zink currently only supports OpenGL 2.1, and sadly the conformance test suite doesn’t have any tests for OpenGL versions older than 3.0. Piglit has some, but a full piglit-run takes significantly more time, which makes it tricky for CI. So right now, it seems the OpenGL ES 2.0 conformance tests are our best bet. We’ll of course add more test-suites as we add more features.

So, there’s some work to be done here, but it seems like we should be able to get something working without too much hassle.

Next steps

After Zink is upstream, it will be maintained similarly to other Mesa drivers. Practically speaking, this means that patches are sent to the upstream repo rather than my fork. It shouldn’t make a huge difference for most users.

The good thing is that if I go away on vacation, or am for some other reason unavailable, other people can still merge patches, so we’d slightly reduce the technical bus-factor.

I’m not stopping developing Zink at all, but I have other things going on in my life that means I might be busy with other things at times. As is the case for everyone! :wink:

In fact, I’m very excited to start working on OpenGL 3.x and 4.x level features; we still have a few patches for some 3.x features in some older branches.

The future is bright! :sunny:

October 23, 2019
As some of you running Fedora 31 may already have noticed, I have some good news to share. As part of my recent work on plymouth I've implemented a feature request which was first filed in 2012: support for an indicator that capslock is active while entering the disk unlock password on machines using full disk encryption. Besides the capslock indicator I've also added an indicator of the configured keyboard layout, since this sometimes also causes confusion:



And here is what it looks like when capslock is pressed:



If you're running Fedora 31 with full disk encryption then you may notice that the above screenshots are slightly different than what you have now. I've pushed an update to Fedora 31 updates-testing today which implements a few small tweaks to the theme after feedback from the design-team on the initial version. For those of you still on Fedora 30: this is coming to Fedora 30 too, it should show up in updates-testing with the next updates push.
October 22, 2019
First of all, as always my opinions are my own, not those of my employer.

Since I have 2 children I was happy to learn that the Netherlands would be one of the first countries to get Disney+ streaming.

So I subscribed for the testing period; the problem: all devices in my home run Fedora. I started up Firefox and was greeted with an "Error Code 83"; next I tried Chrome, same thing.

So I mailed the Disney helpdesk about this, explaining how Linux works fine with Netflix, AmazonPrime video and even the web-app from my local cable provider. They promised to get back to me in 24 hours; they eventually got back to me in about a week. They wrote: "We are familiar with Error 83. This often happens if you want to play Disney + via the web browser or certain devices. Our IT department working hard to solve this. In the meantime, I want to advise you to watch Disney + via the app on a phone or tablet. If this error code still occurs in a few days, you can check the help center ..." this was on September 23rd.

So I thought, ok, they are working on this, let's give them a few days. It is almost a month later now and nothing has changed. Their so-called help-center does not even know about "Error Code 83", even though the internet is full of people experiencing this. Note that this error also happens a lot on other platforms; it is not just Linux.

Someone on tweakers.net has done some digging and this is a Widevine error: "the response is: {"errors":[{"code":"platform-verification-failed","description":"Platform verification status incompatible with security level"}]}". Widevine has 3 security levels, and many devices, including desktop Linux and many Android devices, only support the lowest security level (software encryption only). In that case e.g. Netflix will not offer full HD or 4k resolutions, but otherwise everything works fine, which is a balance between DRM and usability that I can accept. Disney+ OTOH seems to have the DRM features cranked up to maximum draconian settings and simply will not work on a lot of Android devices, nor on Chromebooks, nor on desktop Linux.

So if you care about Linux in any way, please do not subscribe to Disney+, instead send them a message letting them know that you are boycotting them until they get their Linux support in order.
October 17, 2019

Upcoming in libinput 1.15 is a small feature to support Wacom tablets a tiny bit better. If you look at the higher-end devices in Wacom's range, e.g. the Cintiq 27QHD, you'll notice that at the top right of the device are three hardware buttons with icons. Those buttons are intended to open the config panel, the on-screen display or the virtual keyboard. They've been around for a few years and supported in the kernel for a few releases. But in userspace, the events from those keys were ignored, cast out into the wild before eventually running out of electrons and succumbing to misery. Well, that's all changing now with a new interface being added to libinput to forward those events.

Step back a second and let's look at the tablet interfaces. We have one for tablet tools (styli) and one for tablet pads. In the latter, we have events for rings, strips and buttons. The buttons are simply numerically ordered, so button 1 is simply button 1 with no special meaning. Anything more specific needs to be handled by the compositor/client side, which is responsible for assigning e.g. keyboard shortcuts to those buttons.

The special keys however are different: they have a specific function indicated by the icon on the key itself. So libinput 1.15 adds a new event type for tablet pad keys. The events look quite similar to the button events, but they carry a button code from linux/input-event-codes.h that indicates what they are. So the compositor can start the OSD, or the control panel, or whatever, directly without any further configuration required.

This interface hasn't been merged yet, it's waiting for the linux kernel 5.4 release which has a few kernel-level fixes for those keys.
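
Once it does land in a release near you, the quickest way to see the new events should be libinput's debug tool; a sketch (the exact label printed for the new event type is my guess, based on the existing pad event names):

# Watch raw libinput events for the pad device; the new pad key events
# should show up alongside the existing TABLET_PAD_BUTTON events
sudo libinput debug-events --device /dev/input/eventN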

For a few years now, libinput has provided button scrolling. Holding a designated button down and moving the device up/down or left/right creates the matching scroll events. We enable this behaviour by default on some devices (e.g. trackpoints) but it's available on mice and some other devices. Users can change the button that triggers it, e.g. assign it to the right button. There are of course a couple of special corner cases to make sure you can still click that button normally but as I said, all this has been available for quite some time now.

New in libinput 1.15 is the button lock feature. The button lock removes the need to hold the button down while scrolling. When the button lock is enabled, a single button click (i.e. press and release) of that button holds that button logically down for scrolling and any subsequent movement by the device is translated to scroll events. A second button click releases that button lock and the device goes back to normal movement. That's basically it, though there are some extra checks to make sure the button can still be used for normal clicking (you will need to double-click for a single logical click now though).

This is primarily an accessibility feature and is likely to find its way into the GUI tools under the accessibility headings.

October 16, 2019

A few weeks back, I was at XDC and gave a talk about various current and past input stack developments (well, a subset thereof anyway). One of the slides pointed out libinput's bus factor and I'll use this blog to make this a bit more widely known.

If you don't know what the bus factor is, Wikipedia defines it as:

The "bus factor" is the minimum number of team members that have to suddenly disappear from a project before the project stalls due to lack of knowledgeable or competent personnel.
libinput has a bus factor of 1.

Let's arbitrarily pick the 1.9.0 release (roughly 2 years ago) and look at the numbers: of the ~1200 commits since 1.9.0, just under 990 were done by me. In those 2 years we had 76 contributors in total, but only 24 of those have more than one commit and only 6 contributors have more than 5 commits. The numbers don't really change much even if we go all the way back to 1.0.0 in 2015. These numbers do not include the non-development work: release maintenance for new releases and point releases, reviewing CI failures [1], writing documentation (including the stuff on this blog), testing and bug triage. Right now, this is effectively all done by one person.
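
If you want to reproduce or update these numbers, git can do it in a couple of lines (a sketch; tag names follow libinput's usual release scheme):

# Commits per author since the 1.9.0 tag, sorted by commit count
git shortlog -sn 1.9.0..HEAD
# Total number of distinct contributors in that range
git shortlog -sn 1.9.0..HEAD | wc -l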

This is... less than ideal. At this point libinput is more-or-less the only input stack we have [2] and all major distributions rely on it. It drives mice, touchpads, tablets, keyboards, touchscreens, trackballs, etc. so basically everything except joysticks.

Anyway, I'm largely writing this blog post in the hope that someone gets motivated enough to dive into this. Right now, if you get 50 patches into libinput you get the coveted second-from-the-top spot, with all the fame and fortune that entails (i.e. little to none, but hey, underdogs are big in popular culture). Short of that, any help with building an actual community would be appreciated too.

Either way, lest it be said that no-one saw it coming, let's ring the alarm bells now before it's too late. Ding ding!

[1] Only as of a few days ago can we run the test suite as part of the CI infrastructure, thanks to Benjamin Tissoires. Previously it was run on my laptop and virtually nowhere else.
[2] fyi, xf86-input-evdev: 5 patches in the same timeframe, xf86-input-synaptics: 6 patches (but only 3 actual changes) so let's not pretend those drivers are well-maintained.

October 07, 2019

GStreamer Conference 2019 banner

GStreamer Conference 2019 in Lyon France


So the GStreamer Conference 2019 is approaching, being held in Lyon, France between 31st October and 1st November 2019. This year is special, as it marks the GStreamer project’s 20th year of existence. I still remember seeing the announcement of GStreamer 0.0.9 which Erik Walthinsen sent to the GNOME announce mailing list. Back then I felt that multimedia support was one of the big gaps around the Linux operating system that needed filling (no, XAnim was nice for its time, but it was not a long term solution :) and GStreamer seemed like the perfect project to fill it. So I joined the GStreamer IRC channel determined to try to help the project succeed however I could. A little over a year later we all met for the first time at GUADEC in Copenhagen, even posing for this exciting team photo.

GStreamer Team at GUADEC Copenhagen in 2001 (we all looked slightly younger and fresher back then.)


Anyway, 20 years later there will be a talk and presentation by GStreamer co-founder Wim Taymans (wearing a blue shirt and black pants in the picture above) at the GStreamer Conference commemorating 20 years of GStreamer, detailing the project’s journey from idealistic spare time effort to the multimedia industry juggernaut it is today.

Of course the conference is not going to be focused on the past, as there is a long line up of great talks talking about modern streaming with DASH, HDR support in GStreamer, latest developments around GStreamer and Rust, Virtual reality, Vulkan and more. Actually on the ‘and more’ topic, Wim Taymans will also do a presentation on PipeWire, the next generation audio and video server, at the GStreamer Conference this year, hopefully demoing some of the great improvements in things like our pro-audio Jack emulation support.
So if you haven’t already, make your way to the GStreamer Conference 2019 website and register for the 10th annual GStreamer Conference!

For those going, be aware that there will also be a joint GStreamer fall hackfest and PipeWire hackfest in the two days following the GStreamer Conference. So be sure to sign up for those if interested. They will be co-located, with participants flowing freely between the two events.

September 23, 2019

We are laboring on getting Fedora Workstation 31 out the door next month, with the beta release having been made available last week. So here are some of the highlights of this upcoming release which I and the team hope you will enjoy. Many of these items I already covered in my June blogpost about Fedora Workstation 31, so if you read that one, consider this one a status update as there will be some repeats.

Wayland improvements
Fedora has been leading the migration to Wayland since day one and we are not planning to stop. XWayland on demand has been an effort a lot of people contributed to this cycle. The goal is to only need XWayland for legacy X applications, not have it started and running all the time, as that is a waste of system resources, and having core functionality still depend on X under Wayland also makes the system more fragile. XWayland-on-demand has been a big effort with contributions from a lot of people and companies. One piece of this was the systemd user session patches that were originally written by Iain Lane from Canonical. They had been lingering for a bit, so Benjamin Berg took those patches on for this cycle and helped shepherd them over the finish line and get them merged upstream. This work wasn’t a hard requirement for XWayland-on-demand, but it makes it a lot easier to do different things under X and Wayland, which in turn makes moving towards XWayland-on-demand a little simpler to implement. That work will also allow us (in future releases) to do things like only start services under GNOME that are actually needed for your hardware; for instance, if you don’t have a bluetooth adapter in your computer there is no reason to run the bits of GNOME dealing with bluetooth. So expect further resource savings coming from this work over time.

Carlos Garnacho then spent time going through GNOME Shell removing any lingering X dependencies, while Olivier Fourdan worked on cleaning up the control center. This work has mostly landed, but it is hidden behind an experimental flag (gsettings set org.gnome.mutter experimental-features "[...,'autostart-xwayland']") in Fedora 31, as we need to mature it a bit more before it is ready for primetime. But we hope and expect to have it running by default in Fedora Workstation 32.
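
If you want to try it, here is a sketch of enabling the flag and reverting it (note that setting the key replaces any experimental features you already have enabled, so check the current value first):

# Check what is currently set, then enable XWayland-on-demand
gsettings get org.gnome.mutter experimental-features
gsettings set org.gnome.mutter experimental-features "['autostart-xwayland']"
# Revert to the default set of experimental features
gsettings reset org.gnome.mutter experimental-features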

One example of something that still required X and that is now gone is the keyboard and mouse accessibility features in GNOME 3, which Olivier Fourdan re-implemented and improved for this release. So if anyone out there reading this relies on the hover click accessibility feature, that is actually a lot nicer in Fedora Workstation 31. As seen in the screenshot below, you now have a nice little pie animation filling up as it prepares to click, which is a huge improvement over how it used to work.

Click on hover

Click on hover in action

Another item we feel is an important part of reducing the need for XWayland is having Firefox running natively on Wayland. Martin Stransky and Jan Horak have been working tirelessly on ensuring Firefox works well on Wayland, and in the Fedora 31 Beta it is running on Wayland by default. However, a few bugs have been discovered that Martin and Jan are trying hard to fix at the moment so we can keep this default for the GA release; if they miss the deadline we will ship the X backend version in F31 and then move to the Wayland version later on.

In Fedora Workstation 31, Wayland is still disabled by default if you use the Nvidia binary driver. The reason for this is the lack of acceleration under XWayland, meaning that any application depending on GLX, like a lot of games, will just get software GL rendering with the binary NVidia driver. This isn’t something we can resolve on our own; Nvidia has to do the work, since it is their closed source driver, but we have been discussing it regularly with them and we have been told that they are looking at the work Adam Jackson did some time ago which was specifically aimed at helping them bring their X.org driver to XWayland. We don’t have a timeline yet, but it is being actively looked at and hopefully a proper date can be provided soon. I am actually running Fedora Workstation 31 using the NVidia driver myself at the moment on this laptop, and for those interested in helping dogfood this setup, in preparation for hopefully being able to enable Wayland on NVidia in Fedora Workstation 32, it is a fairly simple thing to do. Under /usr/lib/udev/rules.d/ you find a file called 61-gdm.rules; just edit that file and comment out (#) the line that reads ‘DRIVER=="nvidia", RUN+="/usr/libexec/gdm-disable-wayland"‘ and you will revert to a standard setup where your standard session is a Wayland session, but with an X.org session available as a fallback. The more people that run this and report issues the better, as it helps us make this rock solid before releasing it upon the world.
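
For convenience, the same edit as a one-liner (a sketch; double-check that the rule text matches your installed 61-gdm.rules first, and the -i.bak keeps a backup of the original file):

# Comment out the rule that disables Wayland for the Nvidia driver
sudo sed -i.bak '/gdm-disable-wayland/s/^DRIVER/#DRIVER/' \
    /usr/lib/udev/rules.d/61-gdm.rules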

Atomic kernel modesetting
Jonas Ådahl has been hard at work this cycle on adding support for atomic mode setting. This work is not done, but the first parts of it have landed, and it has major long term advantages for us. I asked Jonas to provide a short description of the work and what it will eventually achieve, as I don’t think we have articulated that anywhere else yet:

There are two ways for a display server to control the configuration and content of monitors – the old classic Kernel Mode Setting (classic KMS), and the newer atomic Kernel Mode Setting (atomic KMS). The main difference between these two modes of operation is that with atomic KMS, the display server posts transactions containing KMS configuration that are then processed atomically by the kernel, while with classic KMS, the display server posts configurations command by command, where each monitor is configured by posting multiple commands. The benefits of atomic KMS are, for example, that the display server will know up front whether a configuration is valid (e.g. enough memory bandwidth), and that the display server can configure multiple aspects of the hardware atomically.

During the cycle leading up to Fedora Workstation 31, the foundations for how mutter (the window manager powering GNOME Shell) can make use of the new atomic KMS API were put in place. What was done was to introduce an internal transactional API for configuring monitors. This will eventually allow us to have much more control over how more advanced monitor features are utilized. For example, it will be possible to place client windows directly in hardware overlay planes, meaning we can more often completely bypass full frame compositing when only the content of a single window changes. Another example of what this enables us to do is with color management; we will be able to do seamless switching between managing window color profiles using OpenGL and, for instance, gamma ramps. Yet another example of what this work opens the door for is framebuffer modifiers, which will among other things potentially result in higher performance with very high resolution monitors.
Finally, an important aspect of the work done on the new internal KMS API is that it aims to be thread safe, meaning eventually it will be possible to put KMS processing completely in a separate thread. This means that, together with e.g. moving input device processing to its own thread, it will be possible to get very short latency between mouse movement and the cursor being moved on screen.

QtGNOME improvements
Jan Grulich has continued improving the QtGNOME module to make sure Qt apps integrate as well as possible into Fedora Workstation. His latest updates ensure that the theming keeps up to date with the latest upstream changes in Adwaita, that we have a fully working dark theme, that accessibility theming works and that it works with Flatpaks. Below is a screenshot showing Okular running, allowing you to see how the QtGNOME module affects the look and feel of Qt applications.

Firmware improvements
The LVFS firmware service keeps going from strength to strength. Richard Hughes presented on it during the Open Firmware Conference recently and was approached by a lot of vendors afterwards, both thanking him and Red Hat for the effort, but also asking about getting more of their hardware supported. New vendors are coming onboard at a rapid pace; for instance, Acer joined recently and is planning to support more of their hardware on the LVFS going forward. It is also worth mentioning the GNOME Firmware tool, which can now be downloaded from flathub and which works great on Fedora Workstation 31.

OpenH264 Greatly Improved
The much improved version of OpenH264 will be available soon for Fedora users. This new version adds support for the High and Advanced profiles of H264, which is what most videos found online or produced by your camera would be using. This means you can add H264 playback support to your Fedora Workstation without having to search online for 3rd party repositories like you have had to do up to now. We are also trying to ensure this will eventually be usable by Firefox for video playback. This was work where we partnered with Endless and Cisco to hire the multimedia experts at Centricular, so it is another great example of cross-company collaboration bringing improved functionality to the community.

Fedora Toolbox
Debarshi Ray has been working on many small improvements and better robustness for Fedora Toolbox going into Fedora Workstation 31. Fedora Toolbox, for those not aware of it yet, is our tool to make doing development using pet containers simple and convenient, providing ease of use features on top of traditional container tools and integration with GNOME terminal and the GNOME Shell. The version shipping in F31 will be the last shell script based one, as once Fedora Workstation 31 is out we will be going all in on rewriting Fedora Toolbox in Go, in preparation for future development and expansion. I strongly recommend trying it out, as it will help open your eyes to the possibilities that using pet containers for development gives you. For instance, you can easily set up a RHEL based pet container on your Fedora system to do development work that is meant to be deployed on a RHEL system, or grab our special AI/ML development container for easy access to TensorFlow and similar tools.

Improved Classic mode
Another notable change in this release is the updates to GNOME Classic mode. GNOME Classic mode is a set of extensions to GNOME 3 that makes it look and behave a lot more like GNOME 2, which still has many fans out there. With this release we collected feedback from a group of Classic mode users and tried to improve the experience further, mostly by removing some remaining GNOME 3’isms that didn’t really fit the GNOME Classic user experience, like the overview and the hot corner. The session manager is now also easily accessible in the bottom corner. The theming also got cleaned up a little to remove the last bits of the ‘black’ GNOME 3 theming. That said, I think it is important to remember that this is still GNOME 3 in the end; we are really just showcasing the power of extensions to tweak the user experience in quite fundamental ways here.

GNOME Classic improved

Improved GNOME Classic mode


Better support for non-English users
Fedora Workstation is used all over the globe, but we have not been happy about how our support for picking languages other than English has worked so far. The thing is that if you choose one or more languages at install time, things tend to just work fine, but if you want to add a new language afterwards it has required jumping onto the command line and figuring out how to install the needed langpacks. In Fedora Workstation 31 Sundeep Anand has worked hard to improve this, so if you choose a new language in the GNOME Control Center in Fedora Workstation 31, the required langpacks should be installed automatically for you.

Fleet Commander
Fleet Commander 0.14.1 is out just in time for Fedora Workstation 31. Fleet Commander is a tool for doing large scale deployments of Fedora and RHEL workstations, allowing you to set system wide profiles. So for instance if there is a GNOME Shell extension everyone in your organization, or a specific team inside it, should have enabled, you can deploy a profile with Fleet Commander ensuring that extension is enabled for those users. Basically any setting within GNOME can be set using this, including network configuration options. There is also support for Firefox and LibreOffice settings in Fleet Commander. The big feature addition of 0.14.1 is that Fleet Commander can now be used with Active Directory, which means that even if your company or university uses Active Directory for their user management, you can now deploy Fedora and RHEL profiles without needing FreeIPA. Fleet Commander is pretty much finished at this point, at least as far as any piece of software can ever be finished. Oliver Gutierrez Suarez is currently finishing up some last bits of Firefox support, but we don’t have any major Fleet Commander items on his todo list after that, so if you have been waiting to test it out, there are no new major features you need to wait for anymore; it is all there. If you are doing large scale Linux desktop deployments I definitely recommend checking out Fleet Commander. You will find that it makes Fedora a great choice for such deployments.

Pipewire
We are not doing a lot of changes to Pipewire for Fedora Workstation 31; mostly some bugfixes and minor improvements to the video infrastructure it already provides in Fedora 30 for Flatpaks and web browsers. We are planning major changes for Fedora Workstation 32 though, where we in fact plan to ship Pipewire as a tech preview for both Jack and PulseAudio users. The way it will work is that the system will still default to PulseAudio, but we will provide either a script or a UI option to switch over to Pipewire (and back again). There is also a plan to have a core set of ProAudio applications available as Flatpaks for Fedora Workstation 32, tested and verified to work perfectly with Pipewire; the current apps planned to be included are Ardour, Carla, a2jmidid, Hydrogen, Qtractor and Patroneo, but if interested contributors join the effort we could have even more. Then for Fedora Workstation 33 the idea is to ship with Pipewire as the default audio handler, but with some way for users to switch back to PulseAudio if they have a need, not unlike how the setup currently is with Wayland and X.org in Fedora. Wim Taymans will also be attending the Sonoj conference in Cologne, Germany at the end of October to discuss Pipewire with many members of the Linux ProAudio community and hopefully help prepare them for a future where Fedora Workstation is the perfect home for ProAudio users and developers.

Sysprof
Christian Hergert spent some cycles this round on improving the Sysprof tool, as it was becoming clear that to keep improving GNOME Shell and general desktop performance going forward we needed better data and a better ability to find the bottlenecks. Tools like Sysprof often end up being the unsung heroes of the system, but as we continue improving overall GNOME performance and resource usage over the next few years, the revamped Sysprof tool will be a big part of that story.

Much improved Sysprof tool

Silverblue
A lot of the items we work on are part of our vision around Silverblue, a Linux desktop OS built on the idea of an immutable core image. We often mention the theoretical advantages that such a setup with an immutable OS brings, but as I upgraded from F30 to F31 beta on my RPM based laptop (I have a separate machine where I run Silverblue) I hit the exact kind of issue that Silverblue can help us and our users avoid. What happened was that after my upgrade I suddenly had no Wayland session anymore, just the fallback X.org session. After quite a bit of fault searching I discovered that the reason for this was that I had been testing Valve's ACO shader compiler on F30. These packages had a newer version number than the F31 packages and thus were not overwritten as part of the upgrade. Unfortunately the EGL package that came as part of that repository did not work well on F31 and thus the Wayland session failed. Once I did a distro sync and forced all packages to the actual F31 versions things worked correctly, but it did illustrate the challenges of systems where different parts of the core can and will get updated at different times. With a single well tested core OS image these kinds of problems will not happen. That said, being able to test things such as ACO is valuable and useful, and luckily OSTree and Silverblue do offer functionality for installing such things in a clean and non-damaging way through what is known as package layering. When you install new packages like that using package layering they will only last until your next reboot; after you reboot you are back to a clean, original-state system. Of course if you really want to keep some experimental packages around there are other things you can do too, like overriding, but for simple testing like I did with ACO, package layering provides a simple and safe way to do it.

We realize that Silverblue is a major change in how a Linux distro is 'supposed' to work, so we are taking our time with it to ensure we do it right and have made sure applications and tools work in a way that functions well on an immutable OS. So if you are interested I do recommend that you grab the Fedora 31 Silverblue image and give it a spin, but we are still polishing the experience, so don't expect it to be seamless at this point in time. Of course as things like Flatpaks, Fedora Toolbox and a host of smaller issues get improved upon, we do believe this will be such an overall improvement over an 'old-fashioned' Linux distro that you will be asking yourself why the Linux world didn't do this years ago.

Improved performance
A lot of work has gone into improving the general performance of GNOME 3.34. The GNOME Shell team has been very active and is a great example of a large number of developers from different backgrounds working together. So this release features a lot of great performance work by Daniel van Vugt from Canonical and by Georges Stavracas from Endless, for instance. The Red Hat team has focused on providing patch review and feedback and on bigger long term changes and enablers, like Christian Hergert's work on Sysprof, Jonas Ådahl's work on atomic mode setting and Benjamin Berg's work on systemd user session support. All in all I think you will find that Fedora Workstation 31 with GNOME 3.34 provides a faster and smoother experience, an experience we will continue to build upon going forward as some of these long term efforts start paying off.

Performance is better than ever

Summary
So this has been a roundup of some of the core items you should look forward to in Fedora Workstation 31. There are other items coming in this release too, like the Miracast GNOME Network Display application that Benjamin Berg has written, more Fedora Flatpaks available than ever before, and more. We also have a lot of interesting items coming up in Fedora Workstation 32, like Bastien Nocera's work on improving low memory handling. So stay tuned.

September 22, 2019

At the very beginning of this blog I mentioned that I might not only talk about technical topics but also about political or philosophical ones. Until now I successfully managed to avoid that, but over three years later this is the first article about such a topic.

It is, though, directly related to me being part of the KDE community and my work on KDE software. In the last few weeks I noticed a rise in political activism in KDE, which I view critically. The climax came two days ago with an official endorsement of the Global Climate Strike on KDE's social media accounts. Why this straw broke the camel's back, and how, if at all, I think KDE can be political, I will expand upon in the following.

Worse activism

Let us first dive into some current political activism in KDE and why I see it as damaging to the community.

Grasping at plastic straws

Talking about straws, a discussion has been ongoing since July on the member mailing list of the KDE e.V. (the legal body representing the KDE community in official matters) about an environmental responsibility policy for KDE.

I would not feel comfortable bringing this discussion in detail to the public without prior consent of the KDE e.V. board or the people involved, so please excuse me for not expanding on it much more. But what I can say is that the current draft still reads like the law book of a cult, covering every little aspect of its members' lives.

For example, people are supposed to have QR codes on their business cards so that other people can scan these instead of having to hand over – and thereby waste – a paper card. I am not sure if the battery power used for scanning a QR code with a smartphone offsets the energy involved in producing a single business card, but it would be an uninteresting thing to calculate for sure.

In all fairness, the draft would likely go through several further iterations smoothing out many of the weird and extreme demands before having a chance of becoming policy, but that we even have to spend time discussing such nonsensical ideas is difficult to comprehend.

Especially if one considers how little impact this will make with the 5 to 100 nerds going to a KDE event each time.

This reminds one of the discussion about plastic in the ocean, often exemplified in the media by the plastic straws we use in drinks and so on. But even without knowing any numbers it should be obvious to anybody that plastic straws cannot be a huge polluter of our environment in comparison to all the other packaging private individuals and industry use up on a daily basis. Still, the general public and even politicians like to concentrate on straws instead of real issues.

Hashtag ClimateStrike

Why do we talk about plastic straws and not other packaging or fishing gear? Because the former is easy to understand. We all know what it is and how it looks. We have an image of it in our heads. The straw is a symbol.

One could think it does not matter if all the details are right; symbols are more important. There needs to be something easy to grasp so people can rally behind it. By this logic the Global Climate Strike (or any other strike) is also a symbol to generate alertness, and all the internal conflict that the loss of detail induces is justified by that. So all good?

Not really. KDE's primary objective is and must stay free software. This is what unites us in KDE independent of our other beliefs, opinions or socioeconomic and cultural backgrounds. Using KDE as a platform to grow interest in other, maybe also important but unrelated, topics through political symbolism will erode the base on which the community stands.

And that this will be the outcome is easy to see. Just look at the responses the KDE climate strike endorsement received on Twitter or Facebook.

On Twitter, besides drawing out people who call themselves Don Trumpeone and Anarcho-Taoist, we have the marketing lead of KDE telling another user and self-proclaimed "KDE fan" that he should not use our software anymore because the user has the wrong opinion about climate change.

Personally I also think that the user's opinion about climate change is wrong, but I would never tell him that he should not use free software anymore because of it. These reactions are divisive to the core, and that was already the outcome only a few hours after the original endorsement was published.

The conflicts this activism creates are one thing. Beyond that, I just do not want to believe that you have to use simplified symbols to gain people's interest in political topics; I do not want to believe that even today we cannot make the world better without shouting at each other and hating people for their different opinions.

I think the modern tools we have nowadays could allow us to be better in this regard. But more about such tools and what role KDE could play here at the end of this article.

Hey guys!

Before that, another topic I want to touch on quickly because it is getting on my nerves: the constant reminders by a handful of people at KDE events, in particular Akademy, to use inclusive language, like addressing a group of women and men as "people" or "humans" instead of "guys".

I get it! Tech is not diverse, there are a lot more men than women and you want to change that for whatever reason you see fit. Personally I don't see an issue per se. I think most women just like to do different things, that the sexes are different in this respect. But on the other side modern gender science also has some interesting arguments worth considering. And putting some resources into outreach programs might still be a good idea. After all, especially in young years, men and women are influenced by their peers, and such programs could counterbalance that.

But whatever your opinion is, stop trying to shove it in everyone else's face by telling them they should not use the word "guys", which is totally common in the above context. Especially don't do it when the person you think about lecturing is at that moment holding a talk about free software in front of hundreds of people. It is annoying for the people in the audience and quite frankly rude to the presenter to disrupt his talk.

My own theory is that this speech regulation is just another form of elitism to differentiate the monetarily and culturally rich from the poor. At least it reminds me of French being the language of the nobles some hundred years ago, and I have yet to meet the child of a working class family that does not find this inclusive language utter nonsense.

Stallman

While we are talking about speech regulation, I also want to quickly address the resignations of Richard Stallman, since they happened recently and some people in KDE showed their support of them rather openly.

From what I read, the media twisted his words on the matter he ultimately had to resign over. On the other side it was reported by multiple credible sources that he also showed some unprofessional behavior towards women in the past. Then again, does this already warrant his expulsion? Or that he published some questionable statements on his blog years ago? On the other side, do his overall behavior and antics not hinder the spread of free and open source software overall?

So we can see that it is complicated. Also, I neither knew him personally nor did I ever read his blog. When judging one way or the other it would be good if more people admitted this to themselves and would just say "I do not know". That said, there are certain things to keep in mind, regardless of how well one knows the circumstances, when we talk about accusations of bad thoughts or behavior.

People should be given the opportunity to express their opinions freely and without fear of repercussions, and they should be allowed to change their opinions again if they are convinced they were wrong. This holds just as true, or even more so, for people in power. Otherwise they might hide their real opinions and form policies with hidden agendas. Or we just get opportunists without opinions. Also not ideal.

If a person in an official position is hindering the progress of open source for certain reasons, then this person should be replaced. But for those reasons, and not because of something unrelated he said on an internal mailing list or on a personal blog years ago that was probably read by only a handful of people. And of course one should give this person the chance to better himself before demanding his resignation.

Lastly, ask yourself who benefits from a call for resignation. Maybe the people's motives are pure, but self-interest might be involved as well, especially when a company asks for it. Most companies are still there first of all for their own profits. On the other side, do not drift into conspiracy theories; just be wary and give the criticized person the benefit of the doubt.

Better politics

After criticizing some of the current damaging activism in KDE, I want to give some ideas for political goals our community actually could unite around besides the promotion of free and open source software. I will keep this part short, but if there is interest in some of these ideas, future articles could go deeper.

Code literacy

I believe the number one skill to learn in the future, besides reading and writing, will be to code. And by that I don't only mean learning the formal rules of some programming language, but thinking in logical steps, testing one's own hypotheses and finding the right level of abstraction to understand and solve a problem.

This is not only a program for the western world. It can raise the living standards and social conditions of people in any culture.

KDE with contributors from all around the world and with different levels of coding knowledge is in a prime position to promote code literacy everywhere and to help governments and schools in achieving it.

Open government

How modern technology and in particular the web has changed our understanding of how politicians should interact with the public and how the public should influence the political discourse and its decision-making is a wide and interesting field.

And it does not stop at public institutions. Also private entities, in particular companies in quasi-monopolistic positions, must open up to the public discourse in reasonable ways.

In any case, free and open source software is for me the foundation on which other ideas can grow in this area. If the public does not have access to the code the government or the platform holder uses for its open government program, there will be only minimal and one-sided innovation.

KDE is in a unique position, providing free software, having this software deployed in municipal governments already, and being independent of any commercial or governmental influence. How about we reach out to one of the NGOs already doing work in this important field and see what opportunities there are?

Effective public discourse

This is somewhat related to open government but also to the overall topic on how we want to communicate with each other in the community and with the outside.

Again, how and where we share thoughts and discuss topics has changed dramatically with modern information technology. But sometimes, when looking through discussions on Facebook or Twitter, one could think it changed for the worse.

I believe, though, that there is no real change in us, just a rise in the supply of opportunities to express ourselves, covering the earlier unfulfilled demand. But this does not mean that all possibilities have already been exhausted and we cannot improve any more on the current state.

I think the technology for effective ways of sharing thoughts, discussing ideas and finding consensus is not yet fully developed. There are likely better ways, and KDE, with a legal body, an online community of individuals and many other official and private stakeholders, has all the right reasons to look for and promote new technical solutions for fostering civil discourse and finding common ground.

Progressive solutions instead of divisive activism

The quintessence is neither that the KDE community should keep out of all political topics besides free software, nor that it may only be active in one of the three areas I outlined above. For example, I would very much support work on environmentally friendly technology in KDE, which would then also justify some political engagement.

But there must be work on such technology first and then political engagement grounded in that, not the other way around. I am sure that if people come forward with specific solutions instead of general opinions there will be no divisiveness afterwards.

And that is the overarching theme when I think of KDE. I joined the community also because it always felt like a collective of diverse people from all around the world interested in creating pragmatic yet proper free software solutions for everyone, no matter their race, gender, social background or political opinions, to improve their own daily life, their education or their income. I hope we find ways to apply this down-to-earth technical mentality to whatever related political goals we strive for in the future.

September 21, 2019

Intro slide

Downloads

If you're curious about the slides, you can download the PDF or the ODP.

Thanks

This post has been a part of work undertaken by my employer Collabora.

I would like to thank the wonderful organizers of ELC NA for hosting the event.

September 20, 2019


September 12, 2019

An annoying thing about C code is that there are plenty of functions that cannot be unit-tested by an external framework - specifically anything declared as static. Any larger code-base will end up with hundreds of those functions, many of which are short and reasonably self-contained but complex enough not to be trusted by looks alone. But since they're static I can't access them from the outside (and "outside" is defined as "not in the same file" here).

The approach I've chosen in the past is to move the more hairy ones into separate files or at least declare them normally. That works but is annoying for some cases, especially those that really only get called once. In case you're wondering whether you have at least one such function in your source tree: yes, the bit that parses your commandline arguments is almost certainly complicated and not tested.

Anyway, this week I've finally found the right combination of hacks to make testing static functions easy, and it's:

  • #include the source file in your test code.
  • Mock any helper functions you'd need to trick the called functions
  • Instruct the linker to ignore unresolved symbols
And boom, you can write test cases to only test a single file within your source tree. And without any modifications to the source code itself.

A more detailed writeup is available in this github repo.
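To make this concrete, here's a minimal sketch of what such a test file can look like. Everything in it is made up for illustration - parse_flags(), FLAG_VERBOSE and ext_library_log() are hypothetical names, not from any real project:

/* test-example.c */
#include <assert.h>

#include "example.c"   /* pulls the static functions into this translation unit */

/* mock: this definition shadows the real ext_library_log() from the
   external library, which -Wl,-zmuldefs lets us get away with */
void ext_library_log(const char *msg) { (void)msg; }

int main(void) {
    /* parse_flags() is a static function inside example.c */
    assert(parse_flags("--verbose") == FLAG_VERBOSE);
    return 0;
}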

For the impatient, the meson snippet for a fictional source file example.c would look like this:


test('test-example',
     executable('test-example',
                'example.c', 'test-example.c',
                dependencies: [dep_ext_library],
                link_args: ['-Wl,--unresolved-symbols=ignore-all',
                            '-Wl,-zmuldefs',
                            '-no-pie'],
                install: false),
)

There is no restriction on which test suite you can use. I've started adding a few test cases based on this approach to libinput and so far it's working well. If you have a better approach or improvements, I'm all ears.

August 26, 2019

Sounds like déjà vu? Right, I wrote a post with an almost identical title 18 months ago or so. This one is about Tuhi 0.2, new and remodeled and completely different to that. Sort of.

Tuhi is an application that supports the Wacom SmartPad devices - Bamboo Spark, Bamboo Slate, Bamboo Folio and Intuos Pro. The Bamboo range are digital notepads. They come with a real pen, you draw normally on the pad and use Bluetooth LE and Wacom's Inkspace application later to sync the files to disk. The Intuos Pro is the same but it's designed as a "normal" tablet with the paper mode available as well.

18 months ago, Benjamin Tissoires and I wrote Tuhi as a DBus session daemon. Tuhi would download the drawings from the device and make them available as JSON files over DBus, to be converted to SVG or some other format by ... "clients". We wrote a simple commandline tool to debug Tuhi but no GUI, largely in the hope that maybe someone would be interested in doing that. Fast forward to now and that hasn't happened, but I had some spare cycles over the last weeks, so I present to you: Tuhi 0.2, now with a GTK GUI:

It's basic, but that's also because it shouldn't do much more than download the drawings and allow you to save them. This is not an editing UI; it's effectively a file manager for the drawings on the tablet. And since by design those drawings get deleted as you download them, there isn't even much to that (don't worry, Tuhi doesn't really delete files, you can recover almost everything).

Under the hood there were some internal changes too but I suspect they'll be boring to most. The more interesting bits are reworks so we can test the conversions a lot better now and - worst case - recover files if Tuhi crashes. It is largely reverse-engineered after all.

On that note I would also like to extend my thanks to Wacom, who have provided us with some of the specs for the protocol (under NDA, we cannot share these with the community, sorry). These specs helped tremendously in understanding the protocol bits that were confusing at best and unknown at worst. There are still some corners of the protocol that we don't know, but for the most recent generation (i.e. Intuos Pro) we should have correct parsing of the protocol.

And many thanks to Jakub Steiner for the fancy logo.

And, as of a few minutes ago, Tuhi is available as a flatpak from flathub.org. For the foreseeable future it is the best way to install Tuhi.

August 21, 2019
I'll soon be flying to Greece for GUADEC but wanted to mention one of the things I worked on the past couple of weeks: the low-memory-monitor project is off the ground, though not production-ready.

low-memory-monitor, as its name implies, monitors the amount of free physical memory on the system and will shoot off signals to interested user-space applications, usually session managers or sandboxing helpers, when that memory runs low, making it possible for applications to shrink their memory footprints before it's too late, either to recover a usable system or to avoid taking a performance hit.

It's similar to Android's lowmemorykiller daemon, Facebook's oomd, and Endless' psi-monitor, amongst others.

Finally, a GLib helper and a Flatpak portal are planned to make it easier for applications to use, with an API similar to iOS' or Android's.

Combined with work in Fedora to use zswap and remove the use of disk-backed swap, this should make most workstation uses more responsive and enjoyable.
August 18, 2019

End of June I attended the annual Plasma sprint, which this year was held in Valencia in conjunction with the Usability sprint. And in July we organised, on short notice, a KWin sprint in Nuremberg directly following the KDE Connect sprint. Let me talk you through some of the highlights and what I concentrated on at these sprints.

Plasma sprint in Valencia

It was great to see many new faces at the Plasma sprint. Most of these new contributors were working on the Plasma and KDE Apps UI and UX, and we definitely need some new blood in these areas. KDE's Visual Design Group, the VDG, thinned out over the last two years because some leading figures left. But now, seeing new talented and motivated people joining as designers and UX experts, I am optimistic that there will be a revival of the golden era of the VDG that brought us Breeze and Plasma 5.

With regard to technical topics, there is always a wide field of different challenges and technologies to combine at a Plasma sprint. From my side I wanted to discuss current topics in KWin, but of course not everyone at the sprint works directly on KWin, and some topics require deeper technical knowledge of it. Still there were some fruitful discussions, in particular with David, who was the second KWin core contributor present besides me.

My work on dma-buf support in KWin and KWayland can be counted as a direct product of the sprint. I started work on it there, mostly because it was a feature requested for quite a long time by Plasma Mobile developers, who need it on some of their devices to get them to work. But it should in general improve performance and energy consumption in our Wayland session on many devices. As always, such larger features need time, so I was not able to finish them at the sprint. But last week I landed them.

Megasprint in Nuremberg

At the Plasma sprint we talked about the current state of KWin and what our future goals should be. I wanted to talk about this some more, but the KWin core team was sadly not complete at the Plasma sprint. It was Eike's idea to organize a separate sprint just for KWin, and I took the next best opportunity to do so: as part of the KDE Connect and Onboarding sprints in the SUSE offices in Nuremberg just a few weeks later. Because of the size of three combined sprints, we jokingly called the whole endeavor the Megasprint.

KDE Connect sprint

I was there one or two days early to also attend the KDE Connect sprint. This was a good idea because the KDE Connect team needs us to provide some additional functionality in our Wayland session.

The first feature they rely on is a clipboard management protocol to read out and manipulate the clipboard via connected devices. This is something we want to have in our Wayland session in general, because without it we cannot provide a clipboard history in the Plasma applet, and a clipboard selection would be lost as soon as the client providing it is closed. This can be intentional, but in most cases you expect at least simple text fragments to still be available after the source client quits.

The second feature is fake input of keyboard and mouse via other KDE Connect linked devices. In particular, fake input via keyboard is tricky. My approach would be to implement the protocols developed by Purism for virtual keyboards and input methods. Implementing these looks straightforward at first; the tricky part comes in when we look at the current internal keyboard input code in KWayland and KWin: there is not yet support for multiple seats, or for multiple keyboards on one seat at the same time. But this is a prerequisite for virtual keyboards if we want to do it right, including support for different layouts on different keyboards.

KWin sprint

After the official start of the KWin sprint we went through a long list of topics. As this was the first KWin sprint in years, or maybe ever, there was a lot to talk about, from small code style issues we needed to agree on to large long-term goals for where our development efforts should concentrate in the future. We also discussed current topics, and one of the bigger ones is for sure my compositing rework.

But in the overall picture this again is only one of several areas we need to put work into. In general it can be said that KWin is a great piece of software with many great features and good performance, but its foundations have become old and in some cases rotten over time. Fix after fix has been piled on, increasing the complexity and decreasing the overall cohesion. This is normal for actively used software and nothing to criticize, but I think we are now at a point in the product life cycle of KWin where we either phase it out or put in the hours to rework many areas from the ground up.

I want to put in the hours, but on the other side, in light of possible regressions from such large changes, the question arises whether this should be done dissociated from normal KWin releases. No decision has been taken on that yet.

Upcoming conferences

While the season of sprints is over for this year, there are some important conferences I will attend, and if you can manage I invite you to join these as well. No entry fee! In the second week of September the KDE Akademy is held in Milan, Italy. And in the first week of October the X.Org Developer's Conference (XDC) is held in Montreal, Canada. At XDC I have two talks lined up myself: a full length talk about KWin and a lightning talk about a work-in-progress solution of mine for multi DPI scaling in XWayland. And if there is time I would like to hold a third one about my ongoing work on auto-list compositing.

In the beginning I planned only to travel to Canada for XDC, but just one week later WineConf 2019 is held close to Montreal, in Toronto, so I might prolong the stay a bit to see how, if at all, I as a compositor developer could help the Wine community achieve their goals. To my knowledge this would be the first time a KWin developer attends WineConf.

August 17, 2019
August 12, 2019

Over the past 2 years Flathub has evolved from a wild idea at a hackfest to a community of app developers and publishers making over 600 apps available to end-users on dozens of Linux-based OSes. We couldn’t have gotten anything off the ground without the support of the 20 or so generous souls who backed our initial fundraising, and to make the service a reality since then we’ve relied on the contributions of dozens of individuals and organisations such as Codethink, Endless, GNOME, KDE and Red Hat. But for our day to day operations, we depend on the continuous support and generosity of a few companies who provide the services and resources that Flathub uses 24/7 to build and deliver all of these apps. This post is about saying thank you to those companies!

Running the infrastructure

Mythic Beasts Logo

Mythic Beasts is a UK-based “no-nonsense” hosting provider who provide managed and un-managed co-location, dedicated servers, VPS and shared hosting. They are also conveniently based in Cambridge where I live, and very nice people to have a coffee or beer with, particularly if you enjoy talking about IPv6 and how many web services you can run on a rack full of Raspberry Pis. The “heart” of Flathub is a physical machine donated by them which originally ran everything in separate VMs – buildbot, frontend, repo master – and they have subsequently increased their donation with several VMs hosted elsewhere within their network. We also benefit from huge amounts of free bandwidth, backup/storage, monitoring, management and their expertise and advice at scaling up the service.

Starting with everything running on one box in 2017 we quickly ran into scaling bottlenecks as traffic started to pick up. With Mythic’s advice and a healthy donation of 100s of GB / month more of bandwidth, we set up two caching frontend servers running in virtual machines in two different London data centres to cache the commonly-accessed objects, shift the load away from the master server, and take advantage of the physical redundancy offered by the Mythic network.

As load increased and we brought a CDN online to bring the content closer to the user, we also moved the Buildbot (and its associated Postgres database) to a VM hosted at Mythic in order to offload as much IO bandwidth as possible from the repo server and keep up sustained HTTP throughput during update operations. This helped significantly, but we are in discussions with them about a yet larger box with a mixture of disks and SSDs to handle the concurrent read and write load that we need.

Even after all of these changes, we keep the repo master on one, big, physical machine with directly attached storage because repo update and delta computations are hugely IO intensive operations, and our OSTree repos contain over 9 million inodes which get accessed randomly during this process. We also have a physical HSM (a YubiKey) which stores the GPG repo signing key for Flathub, and it’s really hard to plug a USB key into a cloud instance, and know where it is and that it’s physically secure.

Building the apps

Our first build workers were under Alex’s desk, in Christian’s garage, and a VM donated by Scaleway for our first year. We still have several ARM workers donated by Codethink, but at the start of 2018 it became pretty clear that within a few months we were not going to keep up with the growing pace of builds without some more serious iron behind the Buildbot. We also wanted to be able to offer PR and test builds, beta builds and so on, all of which multiplies the workload significantly.

Packet Logo

Thanks to an introduction by the most excellent Jorge Castro and the approval and support of the Linux Foundation’s CNCF Infrastructure Lab, we were able to get access to an “all expenses paid” account at Packet. Packet is a “bare metal” cloud provider — like AWS except you get entire boxes and dedicated switch ports etc to yourself – at a handful of main datacenters around the world with a full range of server, storage and networking equipment, and a larger number of edge facilities for distribution/processing closer to the users. They have an API and a magical provisioning system which means that at the click of a button or one method call you can bring up all manner of machines, configure networking and storage, etc. Packet is clearly a service built by engineers for engineers – they are smart, easy to get hold of on e-mail and chat, share their roadmap publicly and set priorities based on user feedback.

We currently have 4 Huge Boxes (2 Intel, 2 ARM) from Packet which do the majority of the heavy lifting when it comes to building everything that is uploaded, and also use a few other machines there for auxiliary tasks such as caching source downloads and receiving our streamed logs from the CDN. We also used their flexibility to temporarily set up a whole separate test infrastructure (a repo, buildbot, worker and frontend on one box) while we were prototyping recent changes to the Buildbot.

A special thanks to Ed Vielmetti at Packet who has patiently supported our requests for lots of 32-bit compatible ARM machines, and for his support of other Linux desktop projects such as GNOME and the Freedesktop SDK who also benefit hugely from Packet’s resources for build and CI.

Delivering the data

Even with two redundant / load-balancing front end servers and huge amounts of bandwidth, OSTree repos have so many files that if those servers are too far away from the end users, the latency and round trips cause a serious problem with throughput. In the end you can’t distribute something like Flathub from a single physical location – you need to get closer to the users. Fortunately the OSTree repo format is very efficient to distribute via a CDN, as almost all files in the repository are immutable.

Fastly Logo

After a very speedy response to a plea for help on Twitter, Fastly – one of the world’s leading CDNs – generously agreed to donate free use of their CDN service to support Flathub. All traffic to the dl.flathub.org domain is served through the CDN, and automatically gets cached at dozens of points of presence around the world. Their service is frankly really cool – the configuration and stats are really powerful, unlike any other CDN service I’ve used. Our configuration allows us to collect custom logs which we use to generate our Flathub stats, and to define edge logic in Varnish’s VCL which we use to allow larger files to stream to the end user while they are still being downloaded by the edge node, improving throughput. We also use their API to purge the summary file from their caches worldwide each time the repository updates, so that it can stay cached for longer between updates.

To get a feeling for how well this works, here are some statistics: the Flathub main repo is 929 GB, of which 73 GB are static deltas and 1.9 GB are screenshots. It contains 7280 refs for 640 apps (plus runtimes and extensions) over 4 architectures. Fastly is serving the dl.flathub.org domain fully cached, with a cache hit rate of ~98.7%. Averaging 9.8 million hits and 464 GB downloaded per hour, Flathub uses between 1-2 Gbps sustained bandwidth depending on the time of day. Here are some nice graphs produced by the Fastly management UI (the numbers are per-hour over the last month):

Graph showing the requests per hour over the past month, split by hits and misses.
Graph showing the data transferred per hour over the past month.

To buy the scale of services and support that Flathub receives from our commercial sponsors would cost tens if not hundreds of thousands of dollars a month. Flathub could not exist without Mythic Beasts, Packet and Fastly‘s support of the free and open source Linux desktop. Thank you!

August 10, 2019
August 08, 2019
After more than a year of work libfprint 1.0 has just been released!

It contains a lot of bug fixes for a number of different drivers, which make it a good update for any stable or unstable release of your OS.

There was a small ABI break between versions 0.8.1 and 0.8.2, which means that any dependency (really just fprintd) will need to be recompiled. That's good timing, seeing as we also have a new fprintd release which fixes a number of bugs.

Benjamin Berg will take over maintenance and development of libfprint with the goal of having a version 2 in the coming months that supports more types of fingerprint readers that cannot be supported with the current API.

From my side, the next step will be some much needed modernisation for fprintd, both in terms of code as well as in the way it interacts with users.
August 03, 2019
August 02, 2019

Recently the Raspberry Pi Foundation released the Raspberry Pi 4, which shipped with the V3D driver I wrote as its GLES driver.

I’m pretty proud of the work I did on the project. I was a solo developer building a GLES3 graphics driver based on Mesa, splitting my time between the new V3D and maintaining VC4, while also fixing issues in the X server and building a kernel driver. I didn’t finish everything (the hardware should be able to do GLES 3.2, and I almost made it to CTS-complete on 3.1 before shipping), but I feel like this is clear proof of how productive graphics driver developers can be working on the Mesa stack.

Now, I’m working at Google on the freedreno graphics driver and Mesa in general, as part of the Chrome OS graphics team. The folks at Igalia are taking over from me on V3D (I’ve already done a bunch of review of perf improvement and bugfix patches they’ve made), and Bootlin is going to be continuing to work on getting open source display to parity with the closed source stack. The future of open source Raspberry Pi graphics is still looking good, even if I’m not driving it any more.

July 27, 2019
July 25, 2019

Here’s an overview of the recent changes in Zink since the previous post, as well as an exciting announcement!

What’s new in the world of Zink?

OK, so I haven’t been as good at making frequent updates on as I was hoping, but let’s try to make up for it:

Since last time, there’s quite a lot of things that has been resolved:

  • We now do proper control-flow. This means things like if-statements, for-loops etc. There might be some control-flow primitives missing still, but that’s because I haven’t encountered any use yet.
  • Alpha testing has been implemented.
  • Client-defined clip-planes have been implemented.
  • Support for gl_FrontFacing has been implemented.
  • Lowering of glPointSize() to gl_PointSize has been implemented. This means you can use glPointSize() to specify sizes instead of having to write the gl_PointSize output from the vertex shader (see the short sketch after this list).
  • Support for gl_FragDepth has been implemented.
  • Two-sided lighting has been implemented.
  • Shadow-samplers have been implemented.
  • Support for 8-bit primitive indices has been implemented.
  • Occlusion queries have been implemented correctly across command buffers. This includes the ability to pause / restore queries.
  • The compiler has been ported to C.
  • The compiler no longer lowers IO, but instead processes derefs directly.
  • The compiler now handles booleans properly. It’s no longer lowering them to floats.
  • David Airlie has contributed lowering of glShadeModel(GL_FLAT) to flat interpolation qualifiers. This still doesn’t give us the right result, because Vulkan only supports the first vertex as the provoking vertex, and OpenGL defaults to the last one. To resolve this in a reasonable way, we need to inject a geometry shader that reorders the vertices, but this hasn’t been implemented yet. You can read more about this in this ticket
  • … and a boat-load of smaller fixes. There’s just too many to mention, really.
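As promised above, here is the short glPointSize() sketch. With the lowering in place, fixed-function point rendering like this works even though no shader writes gl_PointSize (n stands in for some vertex count):

/* point size comes from API state; the lowering turns it into gl_PointSize */
glPointSize(4.0f);
glDrawArrays(GL_POINTS, 0, n);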

In addition to this, there’s been a pretty significant rewrite, changing the overall design of Zink. The reason for this, was that I made some early design-mistakes, and after having piled a bit too many features on top of this, I decided that it would be better to get the fundamentals right first.

Sadly, not all features have been brought forward since the rewrite, so we're currently back to OpenGL 2.1 support. Fixing this is on my list of things I want to do, but I suspect that cleaning things up and upstreaming will take precedence over OpenGL 3.0 support.

And just to give you something neat to look at, here’s Blender running using Zink:

Blender on Zink

Khronos Vulkan BoF at SIGGRAPH 2019

Khronos has been nice enough to give me a slot in their Vulkan Sessions at the Khronos BoFs during SIGGRAPH 2019!

The talk will be a slightly less tech-heavy introduction to Zink, covering what it does and what the future holds. Compared to my previous talks, it will focus more on the motivation and use cases than on the underlying technical difficulties.

So, if you’re in the area please drop by! A conference pass is not required to attend the BoF, as it’s not part of the official SIGGRAPH program.

July 18, 2019

The average user has approximately one thumb per hand. That thumb comes in handy for a number of touchpad interactions. For example, moving the cursor with the index finger and clicking a button with the thumb. On so-called Clickpads we don't have separate buttons though. The touchpad itself acts as a button and software decides whether it's a left, right, or middle click by counting fingers and/or finger locations. Hence the need for thumb detection, because you may have two fingers on the touchpad (usually right click) but if those are the index and thumb, then really, it's just a single finger click.

libinput has had some thumb detection since the early days, when we were still hand-carving bits with stone tools. But it was quite simplistic, as the old documentation illustrates: two zones on the touchpad; a touch started in the lower zone was always a thumb. Where a touch started in the upper thumb area, a timeout and movement thresholds would decide whether it was a thumb. Internally, the thumb states were, Schrödinger-esque, "NO", "YES", and "MAYBE". On top of that, we also had speed-based thumb detection - where a finger was moving fast enough, a new touch would always default to being a thumb, on the grounds that you have no business dropping fingers in the middle of a fast interaction. Such a simplistic approach worked well enough for a bunch of use-cases but failed gloriously in other cases.

Thanks to Matt Mayfield's work, we now have a much more sophisticated thumb detection algorithm. The speed detection is still there but it better accounts for pinch gestures and two-finger scrolling. The exclusion zones are still there but are less final about the state of the touch; a thumb can escape that "jail" and contribute to pointer motion where necessary. The new documentation has a bit of a general overview. A requirement for well-working thumb detection, however, is that your device has the required (device-specific) thresholds set up. So go over to the debugging thumb thresholds documentation and start figuring out your device's thresholds.
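For a rough idea of what such thresholds look like: they live in libinput's quirks files. The device name and the value below are made up, and the exact attribute names may differ between libinput versions, so treat this purely as an illustrative sketch:

# hypothetical local override, e.g. /etc/libinput/local-overrides.quirks
[Example Touchpad Thumb Detection]
MatchUdevType=touchpad
MatchName=*Example TouchPad*
AttrThumbPressureThreshold=100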

As usual, if you notice any issues with the new code please let us know, ideally before the 1.14 release.

July 14, 2019

The All Systems Go! 2019 Call for Participation Re-Opened for ONE DAY!

Due to popular request we have re-opened the Call for Participation (CFP) for All Systems Go! 2019 for one day. It will close again TODAY, on 15 July 2019, midnight Central European Summer Time! If you missed the deadline so far, we’d like to invite you to quickly submit your proposals for consideration to the CFP submission site! (And yes, this is the last extension, there's not going to be any more extensions.)

ASG image

All Systems Go! is everybody's favourite low-level userspace Linux conference, taking place in Berlin, Germany on September 20-22, 2019.

For more information please visit our conference website!

July 11, 2019
July 09, 2019

For a long time, I have been cultivating the desire to make a habit of writing monthly status updates. Somehow, Drew DeVault's blog posts and Martin Peres's advice pushed me in this direction. So, here I am! I have decided to embrace the challenge of composing a report per month. I hope this new habit helps me to improve my writing and communication skills but, most importantly, helps me keep track of my work. I want to start this update by describing my work conditions and then focus on the technical stuff.

In the last two months, I've been facing infrastructure problems for my work. I'm dealing with obstacles such as restricted Internet access and long hours on public transportation from my home to my workplace. Unfortunately, I can't work at my house due to the lack of space, and the best place to work is a public library at the University of Brasilia (UnB). Going to UnB every day makes me waste around 3h per day on a bus. The library has a great environment, but it also has thousands of internet restrictions; the fact that I can't access websites with a '.me' domain or connect to my IRC bouncer is an example of that. In summary: it's been hard to work these days. So let's stop talking about non-technical stuff and get to the heart of the matter.

I really like working on VKMS. I know this is not news to anybody, and in June most of my efforts were dedicated to VKMS. One of my main endeavors was finding and fixing a bug in vkms that makes kms_cursor_crc and kms_pipe_crc_basic fail. I had been chasing this bug for a long time, as can be seen here [1]. After many hours of debugging I sent a patch to handle this issue [2]; however, after Daniel's review, I realized that my patch didn't fix the problem correctly. So Daniel decided to dig into the issue to find the root of the problem and later sent a final fix. If you want to see the solution, take a look at [3]. One day, I want to write a post about this fix since it is an interesting subject to discuss.

Daniel also noticed some concurrency problems in the CRC code and sent a patchset composed of 10 patches that tackle the issue. These patches focused on better framebuffer manipulation and on avoiding race conditions. It took me around 4 days to review and test this series. During my review, I asked many questions related to concurrency and requested other clarifications about DRM. Daniel always replied with very nice and detailed explanations. If you want to learn a little bit more about locks, I recommend taking a look at [4]. Seriously, it is really nice!

I also worked on adding writeback support to vkms; since XDC2018 I could not stop thinking about the idea of adding a writeback connector to vkms due to the benefits it could bring, such as new tests and assisting developers with visual output. As a result, I started some clumsy attempts to implement it in January, but I really dove into this issue in the middle of April, and in June I was focused on making it work. It was tough for me to implement these features for the following reasons:

  1. There is no i-g-t test for writeback in the main repository, so I had to use a WIP patchset made by Brian and Liviu.
  2. I was not familiar with framebuffers, connectors, and their fancy manipulation.

As a result of the above limitations, I had to invest many hours reading the documentation and the DRM/IGT code. In the end, I think that adding writeback connectors paid off well for me, since I feel much more comfortable with many things related to DRM these days. The writeback support has not landed yet; at the moment the patch is under review (V3) and has changed a lot since the first version. For details about this series take a look at [5]. I'll write a post about this feature after it gets merged.

After getting the writeback connectors working in vkms, I felt grateful to Brian, Liviu, and Daniel for all the assistance they provided me. In particular, I was thrilled that Brian and Liviu made the kms_writeback test, which worked as an implementation guideline for me. As a result, I updated their patchset to make it work with the latest version of IGT and made some tiny fixes. My goal was to help them upstream kms_writeback, so I submitted the series with the hope of seeing it land in IGT [9].

In parallel to my work on ‘writeback’ I was trying to figure out how I could expose vkms configurations to userspace via configfs. After much effort, I submitted the first version of configfs support; in this patchset I exposed the virtual and writeback connectors. Take a look at [6] for more information about this feature; I'll definitely write a post about it after it lands.

Finally, I'm still trying to upstream a patch that makes drm_wait_vblank_ioctl return EOPNOTSUPP instead of EINVAL if the driver does not support vblank. Since this change is in the DRM core and also changes what userspace sees, it is not easy to get this patch landed. For the details about this patch, you can take a look here [7]. I also implemented some changes in kms_flip to validate the changes I made to drm_wait_vblank_ioctl, and those got landed [8].

July Aims

In June I was totally dedicated to vkms; now I want to slow down a little bit and study more about userspace. I want to take a step back and make some tiny programs using libdrm with the goal of understanding the interaction between userspace and kernel space. I also want to take a look at the theoretical side of computer graphics.
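To give a flavor, a first tiny program of that kind could be as simple as opening a DRM device node and asking the kernel driver for its name; a minimal sketch (the device path may vary between systems):

/* build with: gcc probe.c -o probe $(pkg-config --cflags --libs libdrm) */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <xf86drm.h>

int main(void)
{
    int fd = open("/dev/dri/card0", O_RDWR);   /* first DRM device node */
    if (fd < 0) {
        perror("open");
        return 1;
    }
    drmVersionPtr v = drmGetVersion(fd);       /* query the kernel driver */
    if (v != NULL) {
        printf("driver: %s\n", v->name);
        drmFreeVersion(v);
    }
    close(fd);
    return 0;
}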

I want to put some effort into improving a tool named kw that helps me during my work with the Linux kernel. I also want to take a look at real overlay plane support in vkms. I have noted that I have to find a 'contribution protocol' (review/write code) that works for me in my current work conditions; otherwise, work will become painful for my relatives and me. Finally, and most importantly, I want to take some days off to enjoy my family.

Info: If you find any problem with this text, please let me know. I will be glad to fix it.

References

[1] “First discussion in the Shayenne’s patch about the CRC problem”. URL: https://lkml.org/lkml/2019/3/10/197

[2] “Patch fix for the CRC issue”. URL: https://patchwork.freedesktop.org/patch/308617/

[3] “Daniel final fix for CRC”. URL: https://patchwork.freedesktop.org/patch/308881/?series=61703&rev=1

[4] “Rework crc worker”. URL: https://patchwork.freedesktop.org/series/61737/

[5] “Introduces writeback support”. URL: https://patchwork.freedesktop.org/series/61738/

[6] “Introduce basic support for configfs”. URL: https://patchwork.freedesktop.org/series/63010/

[7] “Change EINVAL by EOPNOTSUPP when vblank is not supported”. URL: https://patchwork.freedesktop.org/patch/314399/?series=50697&rev=7

[8] “Skip VBlank tests in modules without VBlank”. URL: https://gitlab.freedesktop.org/drm/igt-gpu-tools/commit/2d244aed69165753f3adbbd6468db073dc1acf9a

[9] “Add support for testing writeback connectors”. URL: https://patchwork.freedesktop.org/series/39229/

July 05, 2019
July 03, 2019

The Dell Wireless 5821e module is a Qualcomm SDX20 based LTE Cat16 device. This modem can work in either MBIM mode or QMI mode, and provides different USB layouts for each of the modes. In Linux kernel based and Windows based systems, the MBIM mode is the default one, because it provides easy integration with the OS (e.g. no additional drivers or connection managers required in Windows) and also provides all the features that QMI provides through QMI over MBIM operations.

The firmware update process for this DW5821e module has been integrated into your GNU/Linux distribution since ModemManager 1.10.0 and fwupd 1.2.6. There is no official firmware released in the LVFS (yet), but the setup is completely ready to be used, just waiting for Dell to publish an initial official firmware release.

The firmware update integration between ModemManager and fwupd involves different steps, which I’ll try to describe here so that it’s clear how to add support for more devices in the future.

1) ModemManager reports expected update methods, firmware version and device IDs

The Firmware interface in the modem object exposed in DBus contains, since MM 1.10, a new UpdateSettings property that provides a bitmask specifying which is the expected firmware update method (or methods) required for a given module, plus a dictionary of key-value entries specifying settings applicable to each of the update methods.

In the case of the DW5821e, two update methods are reported in the bitmask: “fastboot” and “qmi-pdc“, because both are required to have a complete firmware upgrade procedure. “fastboot” would be used to perform the system upgrade by using an OTA update file, and “qmi-pdc” would be used to install the per-carrier configuration files after the system upgrade has been done.

The list of settings provided in the dictionary contain the two mandatory fields required for all devices that support at least one firmware update method: “device-ids” and “version”. These two fields are designed so that fwupd can fully rely on them during its operation:

  • The “device-ids” field will include a list of strings providing the device IDs associated to the device, sorted from the most specific to the least specific. These device IDs are the ones that fwupd will use to build the GUIDs required to match a given device to a given firmware package. The DW5821e will expose four different device IDs:
    • “USB\VID_413C“: specifying this is a Dell-branded device.
    • “USB\VID_413C&PID_81D7“: specifying this is a DW5821e module.
    • “USB\VID_413C&PID_81D7&REV_0318“: specifying this is hardware revision 0x318 of the DW5821e module.
    • “USB\VID_413C&PID_81D7&REV_0318&CARRIER_VODAFONE“: specifying this is hardware revision 0x318 of the DW5821e module running with a Vodafone-specific carrier configuration.
  • The “version” field will include the firmware version string of the module, using the same format as the firmware package files used by fwupd. This requirement is obviously very important, because if the formats differed, the simple version string comparison used by fwupd (literally an ASCII string comparison) would not work correctly. It is also worth noting that if the carrier configuration is also versioned, the version string should contain not only the version of the system but also the version of the carrier configuration. The DW5821e will expose a firmware version including both, e.g. “T77W968.F1.1.1.1.1.VF.001” (system version being F1.1.1.1.1 and carrier config version being “VF.001”).
  • In addition to the mandatory fields, the dictionary exposed by the DW5821e will also contain a “fastboot-at” field specifying which AT command can be used to switch the module into fastboot download mode.

2) fwupd matches GUIDs and checks available firmware versions

Once fwupd detects a modem in ModemManager that is able to expose the correct UpdateSettings property in the Firmware interface, it will add the device as a known device that may be updated in its own records. The device exposed by fwupd will contain the GUIDs built from the “device-ids” list of strings exposed by ModemManager. E.g. for the “USB\VID_413C&PID_81D7&REV_0318&CARRIER_VODAFONE” device ID, fwupd will use GUID “b595e24b-bebb-531b-abeb-620fa2b44045”.
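As a side note, libfwupd ships a small helper for this hashing, so the mapping can be verified by hand. A minimal sketch, assuming the fwupd development headers are installed and that fwupd_guid_hash_string() is available (I believe it is since around fwupd 1.2.5):

/* build with: gcc guid.c -o guid $(pkg-config --cflags --libs fwupd) */
#include <fwupd.h>
#include <stdio.h>

int main(void)
{
    /* hash a device ID string into the GUID fwupd will match on */
    g_autofree gchar *guid =
        fwupd_guid_hash_string("USB\\VID_413C&PID_81D7&REV_0318&CARRIER_VODAFONE");
    printf("%s\n", guid);  /* expected: b595e24b-bebb-531b-abeb-620fa2b44045 */
    return 0;
}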

fwupd will then be able to look for firmware packages (CAB files) available in the LVFS that are associated to any of the GUIDs exposed for the DW5821e.

The CAB files packaged for the LVFS will contain one single firmware OTA file plus one carrier MCFG file for each supported carrier in the given firmware version. The CAB files will also contain one “metainfo.xml” file for each of the supported carriers in the released package, so that per-carrier firmware upgrade paths are available: only firmware updates for the currently used carrier should be considered. E.g. we don’t want users running with the Vodafone carrier config to get notified of upgrades to newer firmware versions that aren’t certified for the Vodafone carrier.

Each of the CAB files with multiple “metainfo.xml” files will therefore be associated to multiple GUID/version pairs. E.g. the same CAB file will be valid for the following GUIDs (using Device ID instead of GUID for a clearer explanation, but really the match is per GUID not per Device ID):

  • Device ID “USB\VID_413C&PID_81D7&REV_0318&CARRIER_VODAFONE” providing version “T77W968.F1.2.2.2.2.VF.002”
  • Device ID “USB\VID_413C&PID_81D7&REV_0318&CARRIER_TELEFONICA” providing version “T77W968.F1.2.2.2.2.TF.003”
  • Device ID “USB\VID_413C&PID_81D7&REV_0318&CARRIER_VERIZON” providing version “T77W968.F1.2.2.2.2.VZ.004”
  • … and so on.

Following our example, fwupd will detect our device exposing device ID “USB\VID_413C&PID_81D7&REV_0318&CARRIER_VODAFONE” and version “T77W968.F1.1.1.1.1.VF.001” in ModemManager and will be able to find a CAB file for the same device ID providing a newer version “T77W968.F1.2.2.2.2.VF.002” in the LVFS. The firmware update is possible!

3) fwupd requests device inhibition from ModemManager

In order to perform the firmware upgrade, fwupd requires full control of the modem. Therefore, when the firmware upgrade process starts, fwupd will use the new InhibitDevice(TRUE) method in the Manager DBus interface of ModemManager to request that a specific modem with a specific uid should be inhibited. Once the device is inhibited in ModemManager, it will be disabled and removed from the list of modems in DBus, and no longer used until the inhibition is removed.

The inhibition may be removed by calling InhibitDevice(FALSE) explicitly once the firmware upgrade is finished, and will also be automatically removed if the program that requested the inhibition disappears from the bus.
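
On the wire, the inhibition request is a single D-Bus method call; a minimal GDBus sketch in C, assuming the (sb) signature of InhibitDevice, where the uid is the value of the modem's Device property:

#include <gio/gio.h>

/* Minimal sketch: inhibit (or uninhibit) a modem via the ModemManager
 * Manager interface, as fwupd does around a firmware upgrade. */
static gboolean
set_modem_inhibited (GDBusConnection *bus,
                     const gchar     *uid,
                     gboolean         inhibit,
                     GError         **error)
{
  g_autoptr(GVariant) ret =
      g_dbus_connection_call_sync (bus,
                                   "org.freedesktop.ModemManager1",
                                   "/org/freedesktop/ModemManager1",
                                   "org.freedesktop.ModemManager1",
                                   "InhibitDevice",
                                   g_variant_new ("(sb)", uid, inhibit),
                                   NULL, G_DBUS_CALL_FLAGS_NONE,
                                   -1, NULL, error);
  return ret != NULL;
}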

4) fwupd downloads CAB file from LVFS and performs firmware update

Once the modem is inhibited in ModemManager, fwupd can start the firmware update process right away. In the case of the DW5821e, the firmware update requires two different methods and two different upgrade cycles.

The first step would be to reboot the module into fastboot download mode using the AT command specified by ModemManager in the “fastboot-at” entry of the “UpdateSettings” property dictionary. After running the AT command, the module will reset itself and reboot with a completely different USB layout (and a different vid:pid) that fwupd can detect as being the same device as before but in a different working mode. Once the device is in fastboot mode, fwupd will download and install the OTA file using the fastboot protocol, as defined in the “flashfile.xml” file provided in the CAB file:

<parts interface="AP">
  <part operation="flash" partition="ota" filename="T77W968.F1.2.2.2.2.AP.123_ota.bin" MD5="f1adb38b5b0f489c327d71bfb9fdcd12"/>
</parts>

Once the OTA file is completely downloaded and installed, fwupd will trigger a reset of the module also using the fastboot protocol, and the device will boot from scratch on the newly installed firmware version. During this initial boot, the module will report itself running in a “default” configuration not associated to any carrier, because the OTA file update process involves fully removing all installed carrier-specific MCFG files.

The second upgrade cycle performed by fwupd once the modem is detected again involves downloading all carrier-specific MCFG files one by one into the module using the QMI PDC protocol. Once all are downloaded, fwupd will activate the specific carrier configuration that was previously active before the download was started. E.g. if the module was running with the Vodafone-specific carrier configuration before the upgrade, fwupd will select the Vodafone-specific carrier configuration after the upgrade. The module is then reset one last time using the QMI DMS protocol as the last step of the upgrade procedure.

5) fwupd removes device inhibition from ModemManager

The upgrade logic will finish by removing the device inhibition from ModemManager, calling InhibitDevice(FALSE) explicitly. At that point, ModemManager will re-detect and re-probe the modem from scratch, and it should already be running the newly installed firmware with the newly selected carrier configuration.

June 29, 2019
June 26, 2019

In my last Panfrost blog, I announced my internship goal: improve Panfrost to run GNOME3. GNOME is a popular Linux desktop making heavy use of OpenGL; to use GNOME with only free and open-source software on a machine with Mali graphics, Panfrost is necessary.

Two months ahead-of-schedule, here I am, drafting this blog post from GNOME on my laptop running Panfrost!

A tiled architecture

Bring-up of GNOME required improving the driver’s robustness and performance, focused on Mali’s tiled architecture. Typically found in mobile devices, tiling GPU architectures divide the screen into many small tiles, like a kitchen floor, rendering each tile separately. This allows for unique optimizations but also poses unique challenges.

One natural question is: how big should tiles be? If the tiles are too big, there’s no point to tiling, but if the tiles are too small, the GPU will repeat unnecessary work. Mali offers a hybrid answer: allow lots of different sizes! Mali’s technique of “hierarchical tiling” allows the GPU to use tiles as small as 16x16 pixels all the way up to 2048x2048 pixels. This “sliding scale” allows different types of content to be optimized in different ways. The tiling needs of a 3D game like SuperTuxKart are different from those of a user interface like GNOME Shell, so this technique gets us the best of both worlds!

Although primarily handled in hardware, hierarchical tiling is configured by the driver; I researched this configuration mechanism in order to understand it and improve our configuration with respect to performance and memory usage.

Tiled architectures additionally present an optimization opportunity: if the driver can figure out a priori which 16x16 tiles will definitely not change, those tiles can be culled from rendering entirely, saving both read and write bandwidth. As a conceptual example, if the GPU composites your entire desktop while you’re writing an email, there’s no need to re-render your web browser in the other window, since that hasn’t changed. I implemented an initial version of this optimization in Panfrost, accumulating the scissor state across draws within a frame, rendering only to the largest bounding box of the scissors. This optimization is particularly helpful for desktop composition, ideally improving performance on workloads like GNOME, Sway, and Weston.
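
Conceptually, the accumulation is just a running union of the scissor rectangles seen within a frame; a sketch in C with illustrative names (not Panfrost's actual code):

/* Conceptual sketch: grow one bounding box over every scissor seen this
 * frame; at submit time, tiles entirely outside the box can be skipped.
 * Names are illustrative, not taken from the Panfrost source. */
struct bbox { unsigned minx, miny, maxx, maxy; };

static void
bbox_reset (struct bbox *b, unsigned fb_width, unsigned fb_height)
{
    /* Start "empty": min past the right/bottom edge, max at zero */
    b->minx = fb_width;
    b->miny = fb_height;
    b->maxx = 0;
    b->maxy = 0;
}

static void
bbox_add_scissor (struct bbox *b,
                  unsigned x0, unsigned y0, unsigned x1, unsigned y1)
{
    if (x0 < b->minx) b->minx = x0;
    if (y0 < b->miny) b->miny = y0;
    if (x1 > b->maxx) b->maxx = x1;
    if (y1 > b->maxy) b->maxy = y1;
}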

…Of course, theory aside, mostly what GNOME needed was a good, old-fashioned bugfixing spree, because the answer is always under your nose. Turns out what really broke the desktop was a trivial bug in the viewport specification code. Alas.

Scoreboarding

Looking forward to sophisticated workloads as this open driver matures, I researched job “scoreboarding”. For some background, the Mali hardware divides a frame into many small “jobs”. For instance, a “vertex job” executes a vertex shader; a “tiler job” executes tiling (sorting geometry into tiles at the varying hierarchy levels). Many of these jobs have to execute in a specific order; for instance, geometry has to be output by a vertex job before a tiler job can read that geometry. Previously, these relationships were hard-coded into the driver, which was okay for simple workloads but does not scale well.

I have since replaced this code with an elegant dependency management system, based on the hardware’s scoreboarding. Instead of hard-coding relationships, the driver can now specify high-level dependencies, and a generic algorithm (based on topological sorting) works out the order of submission and the scoreboard flags necessary to satisfy the given requirements. The new scoreboarding implementation has enabled new features, like rasterizer discard, to be implemented with ease.
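
The generic algorithm here is ordinary topological sorting (Kahn's algorithm); a conceptual C sketch, with data structures that are illustrative rather than the driver's own:

/* Conceptual sketch: emit jobs so that every job comes after all of its
 * dependencies (Kahn's topological sort). Returns 0 on success, -1 if the
 * dependency graph contains a cycle. Illustrative, not Panfrost's code. */
#define MAX_JOBS 64

struct job {
    int deps[MAX_JOBS]; /* indices of jobs this job depends on */
    int dep_count;
};

static int
submit_in_order (const struct job *jobs, int n, int *order)
{
    int indegree[MAX_JOBS] = { 0 };
    int queue[MAX_JOBS];
    int head = 0, tail = 0, emitted = 0;

    for (int i = 0; i < n; i++)
        indegree[i] = jobs[i].dep_count;
    for (int i = 0; i < n; i++)
        if (indegree[i] == 0)
            queue[tail++] = i; /* no unmet dependencies: ready to submit */

    while (head < tail) {
        int j = queue[head++];
        order[emitted++] = j;
        /* "Release" every job that was waiting on j */
        for (int k = 0; k < n; k++)
            for (int d = 0; d < jobs[k].dep_count; d++)
                if (jobs[k].deps[d] == j && --indegree[k] == 0)
                    queue[tail++] = k;
    }
    return emitted == n ? 0 : -1;
}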

With these improvements and more, several new features have landed in the driver, fixing hundreds of failing dEQP tests since my last blog post, bringing us nearer to conformance on OpenGL ES 2.0 and beyond.

Originally posted on Collabora’s blog

June 24, 2019

So I hope everyone is enjoying Fedora Workstation 30, but we don’t rest on our laurels here, so I thought I would share some of the things we are working on for Fedora Workstation 31. This is not an exhaustive list, but it covers some of the more major items.

Wayland – Our primary focus is still on finishing the Wayland transition and we feel we are getting close now; thank you to the community for their help in testing and verifying Wayland over the last few years. The single biggest goal currently is fully removing our X Windowing System dependency, meaning that GNOME Shell should be able to run without needing XWayland. For those wondering why that has taken so much time, it is simple: for 20 years developers could safely assume we were running atop X. So refactoring everything to remove any code that assumes it is running on top of X.org has been a major effort. The work is mostly done now for the shell itself, but there are a few items left in the GNOME Settings daemon where we need to remove the X dependency. Olivier Fourdan is working on removing those settings daemon bits as part of his work to improve the Wayland accessibility support. We are optimistic that we can declare this work done within a GNOME release or two, so GNOME 3.34 or maybe 3.36. Once that work is complete, an X server (XWayland) would only be started if you actually run an X application, and when you shut that application down the X server will be shut down too.

Wayland logo

Wayland Graphics


Another change that Hans de Goede is working on at the moment is allowing X applications to be run as root under XWayland. In general running desktop apps as root isn’t considered advisable from a security point of view, but since it always worked under X we feel it should continue to be possible under XWayland too. This should fix a few applications out there which currently only work when run as root. One last item Hans de Goede is looking at is improving SDL’s Wayland support in regards to how it deals with scaling of lower resolution games. Thanks to the great effort by Valve and others we now have a huge catalog of games available under Linux and we want to ensure that those keep running and run well. So we will work with the SDL devs to come up with a solution here; we just don’t know the exact shape and form the solution will take yet, so stay tuned.

Finally there is the NVidia binary driver support question. You have been able to run a native Wayland session on top of the binary driver for a very long time. Unfortunately there has been no support for the binary driver in XWayland, and thus X applications (of which there are a lot) would not get any HW accelerated 3D graphics support. Adam Jackson has worked on letting XWayland load the binary NVidia X.org driver, and we are now waiting on NVidia to review that work and hopefully update their driver to support it.

Once we are done with this we expect X.org to go into hard maintenance mode fairly quickly. The reality is that X.org is basically maintained by us, and thus once we stop paying attention to it there are unlikely to be any major new releases coming out and there might even be some bitrot setting in over time. We will keep an eye on it as we will want to ensure X.org stays supportable until the end of the RHEL8 lifecycle at a minimum, but let this be a friendly notice for everyone who relies on the work we do maintaining the Linux graphics stack: get onto Wayland, that is where the future is.

PipeWire – Wim Taymans keeps improving the core features of Pipewire, as we work step by step to be ready to replace Jack and PulseAudio. He has recently been focusing on improving existing features like the desktop sharing portal together with Jonas Adahl, and we are planning a hackfest for Wayland in the fall. The current plan is to do it around the All Systems Go conference in Berlin, but due to scheduling conflicts for some of our core stakeholders we might need to reschedule it to a little later in the fall.
A new user for the desktop sharing portal is the new Miracast support that Benjamin Berg has been steadily working on. The Miracast support is shaping up and you can grab the Network Displays test client from his COPR repository while he is working to get the package into Fedora proper. We would appreciate further users testing and giving feedback, as we know there are definitely devices out there where things do not work properly, and identifying them is the first step to figuring out how to make our support in the desktop more robust. Eventually we want to make the GNOME integration even more seamless than the standalone app, but for early testing and polish it does the trick. If you are interested in contributing, the code is hosted here on GitHub.

Network Display

Network Display application using Miracast

Btw, you still need to enable the Pipewire flag in Chrome to get the Pipewire support (chrome://flags). So I included a screenshot here to show you where to go in the browser and what the key is called:

Chrome Pipewire Flag

Chrome Pipewire Flag

Flatpak – Work on Flatpak in Fedora is continuing. Current focus is on improving the infrastructure for building Flatpaks from RPMs and automating what we can. This is prerequisite work for eventually starting to ship some applications as Flatpaks by default, and eventually shipping all applications as Flatpaks by default. We are also working on setting things up so that we can offer applications from flathub.io and quay.io out of the box and in accordance with Fedora rules for 3rd party software. We are also making progress on making a Red Hat UBI based runtime available. This means that as a 3rd party developer you can use that to build your applications on top of and be certain that it will stay around and be supported by Red Hat for the lifetime of a given RHEL release, which means around 10 years. This frees you up as a developer to update your application at your own pace, as opposed to having to chase more short-lived runtimes. It will also ensure that your application can be certified for RHEL, which gives you access to all our workstation customers in addition to Fedora and all other distros.

Fedora Toolbox – Work is progressing on the Fedora Toolbox, our tool for making working with pet containers feel simple and straightforward. Debarshi Ray is currently looking into improvements to GNOME Terminal that will ensure that you get a more natural behaviour inside the terminal when interacting with pet containers, for instance ensuring that if you have a terminal open to a pet container and create a new tab, that tab will also be inside the container instead of pointing at the host. We are also working on finding good ways to make the selection of containers more discoverable, so that you can more easily get access to a Red Hat UBI container or a Red Hat TensorFlow container for instance. There will probably be a bit of a slowdown in terms of new toolbox features soon though, as we are going to rewrite it to make it more maintainable. The current implementation is a giant shell script, but the new version will most likely be written in Go (so that we can more easily integrate with the other container libraries and tools out there, mostly written in Go).

Fedora Toolbox

Fedora Toolbox in action

GNOME Classic – We have had Classic mode available in GNOME and Fedora for a long time, but we recently decided to give it a second look and try to improve the experience. So Allan Day reviewed the experience and we decided to make it a more pure GNOME 2 style experience by dropping the overview completely when you run classic mode.
We have also invested time and effort on improving the Classic mode workspace switcher to make life better for people who use a very workspace centric workflow. The goal of the improvements is to make the Classic mode workspace switcher more feature complete and also ensure that it can work with standard GNOME 3 in addition to Classic mode. We know this will greatly improve the experience for many of our users and at the same time hopefully let new people switch to Fedora and GNOME to get the advantage of all the other great improvements we are bringing to Linux and the Linux desktop.

Sysprof & performance – We have had a lot of focus in the community on improving GNOME Shell performance. Our particular focus has been on doing some major re-architecting of core subsystems that were needed to make some of the performance improvements you have seen even possible. And lately Christian Hergert has been working on improving our tooling for profiling the desktop, to let our developers more easily see exactly where in the stack bottlenecks are and what is causing them. Be sure to read Christian’s blog for further details about sysprof and friends.

Fleet Commander – our tool for configuring large deployments of Fedora and RHEL desktops should have a release out very soon that can work with Active Directory as your LDAP server. We know a lot of RHEL and Fedora desktop users are part of bigger organizations where Linux users are a minority and thus Active Directory is being deployed in the organization. With this new release Fleet Commander can be run using Active Directory or FreeIPA as the directory server, and thus a lot of organizations who previously could not easily deploy Fleet Commander can now take advantage of this powerful tool. The next step for Fleet Commander after that is finishing off some loose ends in terms of our Firefox support and also ensuring that you can easily configure GNOME Shell extensions with Fleet Commander. We know a lot of our customers and users are deploying one or more GNOME Shell extensions for their desktop so we want to ensure Fleet Commander can help you do that efficiently across your whole fleet of systems.

Fingerprint support – We have been working closely with our hardware partners to bring proper fingerprint reader support to Linux. Bastien Nocera worked on cleaning up the documentation of fprint and making sure there is good sample code, and our hardware partners then worked with their suppliers to ensure they provided drivers conforming to the spec for hardware supplied to them. So there are new drivers for Synaptics fingerprint readers coming out soon thanks to this effort. We are not stopping there though: Benjamin Berg is continuing the effort to improve the infrastructure for Linux fingerprint reader support, making sure we can support in-device storage of fingerprints for instance.

Fingerprint image

Fingerprint readers now better supported

Gamemode – Christian Kellner has been contributing a bit to gamemode recently, working to make it more secure and also ensure that it can work well with games packaged as Flatpaks. So if you play Linux games, especially those from Feral Interactive, and want to squeeze some extra performance from your system, make sure to install gamemode on your Fedora system.

Dell Totem support – Red Hat has a lot of customers in the fields of animation and CAD/CAM systems. Due to this, Benjamin Tissoires and Peter Hutterer have been working with Dell on enabling their Totem input device for a while now. That work is now coming to a close, with the Totem support shipping in the latest libinput version and the kernel side of things having been merged some time ago. You can get the full details from Peter’s blog about the Dell Totem.

Dell Totem

The Dell Totem input device

Media codec support – So the OpenH264 2.0 release is out from Cisco now, and Kalev Lember has been working to get the Fedora packages updated. This is a crucial release as it includes the support for the Main and High profiles that I mentioned in an earlier blog post. That work happened due to a collaboration between Cisco, Endless, Red Hat and Centricular, with Jan Schmidt at Centricular doing the work implementing support for these two profiles. This work makes OpenH264 a lot more useful as it now supports playing back most files found in the wild, and we have been working to ensure it can be used for general playback in Firefox. At the same time Wim Taymans is working to fix some audio quality issues in the AAC implementation we ship, so we should soon have both a fully working H264 decoder/encoder in Fedora and a fully functional AAC decoder/encoder. We are still trying to figure out what to do with MPEG2 video, as we are ready to ship support for that too but are still working out the details of the implementation. Beyond that we don’t have any further plans around codecs atm, as we feel that with H264, MPEG2 video, AAC, mp3 and AC3 we have the most critical ones covered, alongside the growing family of great free codecs such as VP9, Opus and AV1. We might take a look at the status of things like Windows Media and DivX at some point, but it is not anywhere close to the top of our priority list currently.

June 21, 2019

The obvious change to announce is the new website design. But there is much more to talk about.

Website overhaul

The old website, reachable primarily on the domain subdiff.de, was a pure blog built with Jekyll and the design was some random theme I picked up on GitHub. It was a quick thing to do back in the days when I needed a blog up fast for community interaction as a KWin and Plasma developer.

But on the back burner my goal was already for quite some time to rebuild the website with a more custom and professional design. Additionally I wanted this website to not only be a blog but also a landing page with some general information about my work. The opportunity arose now and after several months of research and coding I finished the website rebuild.

This all took longer because it seemed to me like an ideal occasion to learn about modern web development techniques, so I didn't settle for the first plain solution I came across but invested some more time into selecting and learning a suitable technology stack.

In the end I decided to use Gridsome, a static site generator leveraging Vue.js for the frontend and GraphQL as the data backend when generating the site. This makes Gridsome a prime example of the JAMstack, a very modern and sensible way of building small to medium sized websites with only a few selected dynamic elements through JavaScript APIs while keeping everything else static.

After all that learning, decision making and finally coding, I'm now really happy with this solution and I definitely want to write in greater detail about it in the future.

Feature-wise the current website provides what I think are the necessary basics and it could still be extended in several ways, but as for now I will stick to these basics and only look into new features when I get an urge to do it.

Freelancer business

Since January I have been working as a freelancer. In Germany this means I basically had to start a company, so I did that.

I called it subdiff : software system, and the brand is still the domain name you are currently browsing. I used it already before as this website's domain name and as an online nickname. It is derived from a mathematical concept and on the other side stands for a slogan I find sensible on a practical level in work and life:

Subtract the nonsense, differentiate what's left.

Part of Valve's Open Source Group

As a freelancer I am contracted by Valve to work on certain gaming-related XServer projects and improve KWin in this regard and for general desktop usage.

In the XServer there are two main projects at the moment. The technical details of one of them are currently discussed on a work-in-progress patch series on Gitlab but I want to write accessible articles about both projects here on the blog as well in the near future.

In KWin I have several large projects I will look into, which would benefit KWin on X11 and Wayland alike. The most relevant one is reworking the compositing pipeline. You can expect more info about this project and the other ones in KWin in future blog posts too.

New code

While there are some big projects in the pipeline I was also able to commit some major changes in the last few months to KWin and Plasma.

The largest one was for sure XWayland drag-and-drop support in KWin. But in the best-case scenario the user won't even notice this feature, because drag-and-drop between any relevant windows will just work from now on in our Wayland session. Inside KWin though, the technical solution enabling this was built up from the ground, and in a way such that we should later be able to easily support something like middle-click-paste between XWayland and Wayland native windows.

There were two other major initiatives by me that I was able to merge: the finalization of basing every display representation in KWin on the generic AbstractOutput class, and, in Plasma's display management library, daemon and settings panel, saving display-individual values in a consistent way by introducing a new communication channel between these components.

While the results of both enhancements are again supposed to be unnoticeable by the user, merely improving the code structure and increasing the overall stability, there is more work lined up for display management which will then directly affect the interface. Take a look at this task to see what I have planned.

So there is interesting work ahead. Luckily this week I am with my fellow KWin and Plasma developers at the Plasma and Usability sprint in Valencia to discuss and plan work on such projects.

The sprint officially started yesterday and the first day already was very productive. We strive to keep up that momentum till the end of the sprint next week and I plan on writing an article about the sprint results afterwards. In the meantime you can follow @kdecommunity on Twitter if you want to receive timely updates on our sprint while it's happening.

Final remarks and prospect

I try to keep the articles in this blog rather prosaic and technical but there are so many things moving forward and evolving right now that I want to spend a few paragraphs in the end on the opposite.

In every aspect there is just immense potential when looking at our open source graphics stack consisting of KDE Plasma with KWin, at the moment still good old X but in the future Wayland, and the Linux graphics drivers below.

While the advantages of free and open source software for the people were always obvious, how rapidly this type of software became the backbone of our global economy signifies that it is immensely valuable for companies alike.

In this context the opportunities for making use of our software offerings and improving them are endless, while the technical challenges we face when doing so are interesting. By this we can do our part so that the open source community will grow and flourish.

As a reader of these sentences you are already in a prime position to take part in this great journey as well by becoming an active member of the community through contributing.

Maybe you already do this for example by coding, designing, researching, donating or just by giving us feedback on how our technology can become better. But if you are not yet, this is a great time to get involved and bring in your individual talents and motivation to build up something great together for ourselves and everybody.

You can find out more on how to do that by visiting KDE's Get Involved page or join in on the ongoing discussion about KDE's future goals.

June 20, 2019
June 19, 2019

This is merely an update on the current status quo, if you read this post in a year's time some of the details may have changed

libinput provides an API to handle graphics tablets, i.e. the tablets that are used by artists. The interface is based around tools, each of which can be in proximity at any time. "Proximity" simply means "in detectable range". libinput promises that any interaction is framed by a proximity in and proximity out event pair, but getting to this turned out to be complicated. libinput has seen a few changes recently here, so let's dig into those. Remember that proverb about seeing what goes into a sausage? Yeah, that.

In the kernel API, the proximity events for pens are the BTN_TOOL_PEN bit. If it's 1, we're in proximity, if it's 0, we're out of proximity. That's the theory.
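
In terms of raw evdev events, that theory looks like this; a minimal C sketch (device setup and the read loop omitted):

#include <stdio.h>
#include <linux/input.h>

/* Minimal sketch: pen proximity is just the BTN_TOOL_PEN key toggling
 * between 1 (in proximity) and 0 (out of proximity). */
static void
handle_pen_event (const struct input_event *ev)
{
    if (ev->type != EV_KEY || ev->code != BTN_TOOL_PEN)
        return;
    printf ("pen %s proximity\n", ev->value ? "entered" : "left");
}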

Wacom tablets (or rather the kernel driver) always reset all axes on proximity out. So libinput needs to take care not to send a 0 value to the caller, lest you want a jump to the top left corner every time you move the pen away from the tablet. Some Wacom pens have serial numbers and we use those to uniquely identify a tool. But some devices start sending proximity and axis events before we get the serial numbers which means we can't identify the tool until several ms later. In that case we simply discard the serial. This means we cannot uniquely identify those pens but so far no-one has complained.

A bunch of tablets (HUION) don't have proximity at all. For those, we start getting events and then stop getting events, without any other information. So libinput has a timer - if we don't get events for a given time, we force a proximity out. Of course, this means we also need to force a proximity in when the next event comes in. These tablets are common enough that recently we just enabled the proximity timeout for all tablets. Easier than playing whack-a-mole, doubly so because HUION re-uses USB ids so you can't easily identify them anyway.

Some tablets (HP Spectre 13) have proximity but never send it. So they advertise the capability, just don't generate events for it. Same handling as the ones that don't have proximity at all.

Some tablets (HUION) have proximity, but only send it once per plug-in, after that it's always in proximity. Since libinput may start after the first pen interaction, this means we have to a) query the initial state of the device and b) force proximity in/out based on the timer, just like above.

Some tablets (Lenovo Flex 5) sometimes send proximity out events, but sometimes do not. So for those we have a timer and forced proximity events, but only when our last interaction didn't trigger a proximity event.

The Dell Active Pen always sends a proximity out event, but with a delay of ~200ms. That timeout is longer than the libinput timeout so we'll get a proximity out event, but only after we've already forced proximity out. We can just discard that event.

The Dell Canvas pen (identifies as "Wacom HID 4831 Pen") can have random delays of up to ~800ms in its event reporting, which would trigger forced proximity out events in libinput. Luckily it always sends proximity out events, so we could add a quirk to specifically disable the timer.

The HP Envy x360 sends a proximity in for the pen, followed by a proximity in from the eraser in the next event. This is still an unresolved issue at the time of writing.

That's the current state of things; I'm sure it'll change in a few months' time again as more devices decide to be creative. They are artists' tools, after all.

The lesson to take away here: all of the above are special cases that need to be implemented but this can only be done on demand. There's no way any one person can test every single device out there and testing by vendors is often nonexistent. So if you want your device to work, don't complain on some random forum, file a bug and help with debugging and testing instead.

June 18, 2019

We're on the road to he^libinput 1.14 and last week I merged the Dell Canvas Totem support. "Wait, what?" I hear you ask, and "What is that?". Good question - but do pay attention to random press releases more. The Totem (Dell.com) is a round knob that can be placed on the Dell Canvas. Which itself is a pen and touch device, not unlike the Wacom Cintiq range if you're familiar with those (if not, there's always lmgtfy).

The totem's intended use is as a secondary device - you place it on the screen while you're using the pen and up pops a radial menu. You can rotate the totem to select items, click it to select something and bang, you're smiling like a stock photo model eating lettuce. The radial menu is just an example UI, there are plenty of others. I remember reading papers about bimanual interaction with similar interfaces that dated back to the 80s, so there's a plethora to choose from. I'm sure someone at Dell has written Totem-Pong and if they have not, I really question their job priorities. The technical side is quite simple: the totem triggers a set of touches in a specific configuration, and when the firmware detects that arrangement it knows this isn't a finger but the totem.

Pen and touch we already handle well, but the totem required kernel changes and a few new interfaces in libinput. And that was the easy part, the actual UI bits will be nasty.

The kernel changes went into 4.19 and as usual you can throw noises of gratitude at Benjamin Tissoires. The new kernel API basically boils down to the ABS_MT_TOOL_TYPE axis sending MT_TOOL_DIAL whenever the totem is detected. That axis is (like others of the ABS_MT range) an odd one out. It doesn't work as an axis but rather an enum that specifies the tool within the current slot. We already had finger, pen and palm, adding another enum value means, well, now we have a "dial". And that's largely it in terms of API - handle the MT_TOOL_DIAL and you're good to go.
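
For the kernel side, a minimal C sketch of what that looks like from an evdev consumer's point of view (slot tracking elided; MT_TOOL_DIAL needs kernel 4.19+ headers):

#include <linux/input.h>

/* Minimal sketch: the totem shows up as a touch slot whose
 * ABS_MT_TOOL_TYPE is MT_TOOL_DIAL. Slot bookkeeping is elided. */
static void
handle_tool_type (const struct input_event *ev)
{
    if (ev->type != EV_ABS || ev->code != ABS_MT_TOOL_TYPE)
        return;

    switch (ev->value) {
    case MT_TOOL_FINGER: /* a regular touch */
        break;
    case MT_TOOL_PALM:   /* palm, usually ignored */
        break;
    case MT_TOOL_DIAL:   /* the totem: treat this slot as the dial */
        break;
    }
}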

libinput's API is only slightly more complicated. The tablet interface has a new tool type called the LIBINPUT_TABLET_TOOL_TYPE_TOTEM and a new pair of axes for the tool, the size of the touch ellipse. With that you can get the position of the totem and the size (so you know how big the radial menu needs to be). And that's basically it in regards to the API. The actual implementation was a bit more involved, especially because we needed to implement location-based touch arbitration first.
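
From a consumer's point of view, using the new API could look roughly like this; a minimal C sketch, where the size_major/size_minor accessor names for the touch-ellipse axes are an assumption based on the description above (context setup and the event loop omitted):

#include <stdio.h>
#include <libinput.h>

/* Minimal sketch: pick the totem out of the tablet-tool event stream. */
static void
handle_tablet_event (struct libinput_event *ev)
{
    struct libinput_event_tablet_tool *tev;
    struct libinput_tablet_tool *tool;

    if (libinput_event_get_type (ev) != LIBINPUT_EVENT_TABLET_TOOL_AXIS)
        return;

    tev = libinput_event_get_tablet_tool_event (ev);
    tool = libinput_event_tablet_tool_get_tool (tev);
    if (libinput_tablet_tool_get_type (tool) != LIBINPUT_TABLET_TOOL_TYPE_TOTEM)
        return;

    /* Position of the totem, plus the size of the touch ellipse -
     * e.g. to know how big the radial menu needs to be */
    printf ("totem at %.2f/%.2f, ellipse %.2fx%.2f\n",
            libinput_event_tablet_tool_get_x (tev),
            libinput_event_tablet_tool_get_y (tev),
            libinput_event_tablet_tool_get_size_major (tev),
            libinput_event_tablet_tool_get_size_minor (tev));
}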

I haven't started on the Wayland protocol additions yet but I suspect they'll look the same as the libinput API (the Wayland tablet protocol is itself virtually identical to the libinput API). The really big changes will of course be in the toolkits and the applications themselves. The totem is not a device that slots into existing UI paradigms, it requires dedicated support. Whether this will be available in your favourite application is likely going to be up to you. Anyway, christmas in July [1] is coming up so now you know what to put on your wishlist.

[1] yes, that's a thing. Apparently christmas with summery temperature, nice weather, sandy beaches is so unbearable that you have to re-create it in the misery of winter. Explains everything you need to know about humans, really.

Even if you are not a gamer, odds are that you have already heard about Vulkan, a graphics and compute API that provides high-efficiency, cross-platform access to modern GPUs. This API is designed by the Khronos Group and it is supported by a new set of drivers specifically designed to implement the different functions and features defined by the spec (at the time of writing this post, it is version 1.1).

Vulkan

In order to guarantee that the drivers work according to the spec, drivers need to pass a conformance test suite that ensures they do what is expected of them. VK-GL-CTS is the name of the conformance test suite used to certify conformance for both the Vulkan and OpenGL APIs and… it is open-source!

VK-GL-CTS

As part of my daily job at Igalia, I contribute to VK-GL-CTS by fixing bugs, improving existing tests and even writing new tests for a variety of extensions. In this post I am going to describe some of the work I have been doing in the last few months.

VK_EXT_host_query_reset

This extension gives you the opportunity to reset queries outside a command buffer, which is a fast way of doing it once your application has finished reading a query’s data. All you need to do is call the vkResetQueryPoolEXT() function. There are several Vulkan drivers already supporting this extension on GNU/Linux (NVIDIA, and the open-source drivers AMDVLK, RADV and ANV), and probably more on other platforms.
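
A minimal C sketch of the host-side reset; the only assumptions are that the device was created with the extension enabled and that the entry point is fetched via vkGetDeviceProcAddr():

#include <vulkan/vulkan.h>

/* Minimal sketch: reset a range of queries from the host, outside any
 * command buffer, once their results have been read back. */
static void
reset_queries_on_host (VkDevice device, VkQueryPool pool,
                       uint32_t first, uint32_t count)
{
    PFN_vkResetQueryPoolEXT reset_query_pool =
        (PFN_vkResetQueryPoolEXT)
            vkGetDeviceProcAddr (device, "vkResetQueryPoolEXT");

    /* No command buffer or queue submission involved */
    reset_query_pool (device, pool, first, count);
}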

I have implemented tests for all the different queries: occlusion queries, pipeline timestamp queries and statistics queries. Transform feedback stream queries tests landed a bit later.

VK_EXT_discard_rectangles

VK_EXT_discard_rectangles provides a way to define rectangles in framebuffer-space coordinates that discard rasterization of all points, lines and triangles that fall inside (exclusive mode) or outside (inclusive mode) of their area. You can regard this feature as something similar to scissor testing but it operates orthogonally to the existing scissor test functionality.

It is easier to understand with an example. Imagine that you want to do the following in your application: clear the color attachment to red, then draw a green quad covering the whole attachment, but define a discard rectangle in order to restrict the rasterization of the quad to the area defined by that rectangle.

For that, you define the discard rectangles at pipeline creation time, for example (it is possible to define them dynamically too); as we want to restrict the rasterization of the quad to the area defined by the discard rectangle, we set its mode to VK_DISCARD_RECTANGLE_MODE_INCLUSIVE_EXT.

VK_EXT_discard_rectangles inclusive mode example

If we want to discard the rasterization of the green quad inside the area defined by the discard rectangle, then we set VK_DISCARD_RECTANGLE_MODE_EXCLUSIVE_EXT mode at pipeline creation time and that’s all. Here you have the output for this case:

VK_EXT_discard_rectangles exclusive mode example

You are not limited to defining just one discard rectangle: drivers supporting this extension must support a minimum of 4 discard rectangles, and some drivers may support more. As this feature works orthogonally to other tests like the scissor test, you can do fancy things in your app :-)
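
For reference, defining the rectangles statically at pipeline creation time is just a matter of chaining one extra struct; a minimal C sketch for the inclusive-mode example above:

#include <vulkan/vulkan.h>

/* Minimal sketch: chain a single inclusive-mode discard rectangle into a
 * graphics pipeline at creation time. */
static void
add_discard_rectangle (VkGraphicsPipelineCreateInfo                 *pipeline_info,
                       VkPipelineDiscardRectangleStateCreateInfoEXT *drs,
                       const VkRect2D                               *rect)
{
    drs->sType = VK_STRUCTURE_TYPE_PIPELINE_DISCARD_RECTANGLE_STATE_CREATE_INFO_EXT;
    drs->pNext = pipeline_info->pNext; /* preserve any existing chain */
    drs->flags = 0;
    /* INCLUSIVE restricts rasterization to the rectangle;
     * EXCLUSIVE discards rasterization inside it instead. */
    drs->discardRectangleMode  = VK_DISCARD_RECTANGLE_MODE_INCLUSIVE_EXT;
    drs->discardRectangleCount = 1;
    drs->pDiscardRectangles    = rect;

    pipeline_info->pNext = drs;
}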

The tests I developed for VK_EXT_discard_rectangles extension are already available in VK-GL-CTS repo. If you want to test them on an open-source driver, right now only RADV has implemented this extension.

VK_EXT_pipeline_creation_feedback

VK_EXT_pipeline_creation_feedback is another example of a useful feature for application developers, especially game developers. This extension gives a way to know, at pipeline creation time, whether the pipeline hit the provided pipeline cache, the time consumed to create it, or even which shader stages hit the cache. This feedback about pipeline creation can help to improve the pipeline caches that are shipped to users, with the final goal of reducing load times.

Tests for VK_EXT_pipeline_creation_feedback extension have made their way into VK-GL-CTS repo. Good news for the ones using open-source drivers: both RADV and ANV have implemented the support for this extension!

Conclusions

Since I started working in the Graphics team at Igalia, I have been contributing code to Mesa drivers for both OpenGL and Vulkan, adding new tests to Piglit, and improving VkRunner, among other contributions.

Now I am contributing to increasing VK-GL-CTS coverage by developing new tests for extensions and fixing existing tests, among other things. This work also involves developing patches for the Vulkan Validation Layers, fixes for glslang, and more things to come. In summary, I am enjoying a lot doing contributions to the open-source ecosystem created by the Khronos Group as part of my daily work!

Note: if you are a student and you want to start contributing to open-source projects, don’t miss our Igalia Coding Experience program (more info on our website).

Igalia

June 13, 2019
June 06, 2019
June 05, 2019

Hello, I’m Alyssa Rosenzweig, a student, the lead developer of the open-source Panfrost graphics driver… and now a Collaboran!

Years ago, I joined the open-source community with a passion and a mission: to enable equal access to high-quality computing via open-source software. With this mission, I co-founded Panfrost, aiming to create an open-source driver for the Mali GPU. Before Panfrost, users of Mali GPUs required a proprietary blob, restricting their ability to use their machines as they saw fit. Some users were unable to run Linux, their operating system of choice, with the display system of their choosing, simply because there were no blobs available for their particular configuration. Others wished to use an upstream kernel; yet others held a deep philosophical belief in free and open-source software. To each user’s driver problem, Panfrost seeks to provide a solution.

Days ago, I joined Collabora with the same passion and the same mission. Collabora was founded on an “open first” model, sharing my personal open source conviction. Collabora’s long-term vision is to let open-source software blossom throughout computing, fulfilling my own dream of an open-source utopia.

With respect to graphics, Collabora has shared my concerns. After all, we’re all on “Team Open Source” together! Collabora’s partners make awesome technology, often containing a Mali GPU, and they need equally awesome graphics drivers to power their products and empower their users. Our partners and our users asked, and Panfrost answered.

At Collabora, I am now a full-time Software Engineering Intern, continuing throughout the summer to work on Panfrost. I’m working alongside other veteran Panfrost contributors like Collaboran Tomeu Vizoso, united with open-source community members like Ryan Houdek. My focus will be improving Panfrost’s OpenGL ES 2.0 userspace, to deliver a better experience to Panfrost users. By the end of the summer, we aim to bring the driver to near conformance, to close any performance gaps, and through this work, to get GNOME Shell working fluidly on supported Mali hardware with only upstream, open-source software!

Supporting GNOME in Panfrost is a task of epic proportions, a project dream since day #1, yet ever distant on the horizon. But at Collabora, we’re always up for the challenge.

Originally posted on Collabora’s blog

May 29, 2019
May 23, 2019
May 22, 2019
Thank you all for the large amount of feedback I have received after my previous Wayland Itches blog post. I've received over 40 mails, below is an attempt at summarizing all the mails.

Highlights

1. Middle click on the title / header bar to lower the window does not work for native apps. Multiple people have reported this issue to me. A similar issue, not being able to raise windows, was fixed earlier. It should be easy to apply a similar fix for the lowering problem. There are bugs open for this here, here and here.

2. Running graphical apps via sudo or pkexec does not work. There are numerous examples of apps breaking because of this, such as lshw-gui and usbview. At least for X11 apps this is not that hard to fix, but so far this has deliberately not been fixed. The reasoning behind this is described in this bug. I agree with the reasoning, but I think it is not pragmatic to immediately disallow all GUI apps from connecting when run as root starting today.

We need some sort of transition period. So when I find some time for this, I plan to submit a merge request which optionally makes gnome-shell/mutter start Xwayland with an xauth file, like how it is done when running in GNOME on Xorg mode. This will be controlled by a gsettings option, which will probably default to off upstream; distros can then choose to override this for now, giving us a transition period.

Requests for features implemented as external programs on X11

There are various features which can be implemented as external programs on X11, but which, because of the tighter security, need to be integrated into the compositor with Wayland:

  • Hiding of the mouse-cursor when not used à la unclutter-xfixes, xbanish.

  • Rotating screen 90 / 270 degrees à la "xrandr -o [left|right]" mostly used through custom hotkeys, possible fix is defining bindable actions for this in gsd-media-keys.

  • Mapping actions to mouse buttons à la easystroke

  • Some touchscreens, e.g. so-called smart screens for education, need manual calibration. Under X11 there are some tools to get the calibration matrix for the touchscreen, after which this can be manually applied through xinput. Even under X11 this is currently far from ideal, but at least it is possible there.

  • Keys Indicator gnome-shell extension. This still works when using Wayland, but only works for apps using Xwayland, it does not work for native apps.

  • Some sort of xkill and xdotool utility equivalents would be nice

  • The GNOME on screen keyboard is not really suitable for use with apps which are not touch-enabled, as it lacks a way to send ctrl + key, etc. Because of this some users have reported that it is impossible to use alternative on screen keyboards with Wayland. Not being able to use alternative on screen keyboards is by design and IMHO the proper fix here is to improve GNOME's on screen keyboard.

App specific problems


  • Citrix ICA Client does not work well with Xwayland

  • Eclipse does not work well with Xwayland

  • Teamviewer does not work with Wayland. It needs to be updated to use pipewire for screencapturing and the RemoteDesktop portal to inject keyboard and mouse events.

  • Various apps lack screenrecording / capture support due to the app not having support for pipewire: gImageReader, green-recorder, OBS studio, peek, screenrecorder, slack

  • For apps which do support pipewire, there is no option to share the contents of a window other than the window making the request. On Xorg it is possible to share a random window, and since pipewire allows sharing the whole desktop I see no security reason why we would not allow sharing another window.

  • guake window has incorrect size when using HiDPI scaling, see this issue

Miscellaneous problems


  • Mouse cursor is slow / lags

  • Drag and drop sometimes does not work, e.g. dragging files into file roller to compress or out of file roller to extract.

  • Per-keyboard layouts. On X11, after plugging in a keyboard, the layout/keymap for just that one keyboard can be updated manually using xinput, allowing different keyboard layouts for different keyboards when multiple keyboards are connected

  • No-title-bar shell extension, X button can be hit unintentionally, see this issue

  • Various issues with keyboard layout switching

Hard to fix issues


  • Alt-F2, r equivalent (restart the gnome-shell)

  • X11 apps running on top of Xwayland do not work well on HiDPI screens

  • Push-to-talk (passive key grab on space) does not work in Mumble when using native Wayland apps, see this issue

Problems with compositors other than GNOME3 / mutter

I've also received several reports about issues when using a Wayland compositor other than GNOME / mutter (Weston, KDE, Sway). I'm sorry, but I have not looked very closely into these reports. I believe that it is great that Linux users have multiple Desktop Environments to choose from and I wish for the other DEs to thrive. But there are only so many hours in a day, so I've chosen to mainly focus on GNOME.
First of all I do not want people to get their hopes up about $subject of this blogpost. Improving gaming support is a subject which holds my personal interest and it is an issue I plan to spend time on trying to improve. But this will take a lot of time (think months for simple things, years for more complex things).

As I see it there are currently 2 big issues when running games under Wayland:

1. Many games show as a small centered image with a black border (letterbox) around the image when running fullscreen.

For 2D games this is fixed by switching to SDL2, which will transparently scale the pixmap the game renders to the desktop resolution. This assumes that 2D games in general do not demand a lot of performance and thus will not run into performance issues when introducing an extra scaling step. A problem here is that many games still use SDL1.2 (and some games do not use SDL at all).

I plan to look into the recently announced SDL1.2 compatibility wrapper around SDL2. If this works well this should fix this issue for all SDL1.2 2D games, by making them use SDL2 under the hood.

For 3D games this can be fixed by rendering at the desktop resolution, but this might be slow and rendering at a lower resolution leads to the letterbox issue.

Recently mutter has grown support for the WPviewport extension, which allows Wayland apps to tell the compositor to scale the pixmap the app gives to the compositor before presenting it. If we add support for this to SDL2's Wayland backend, it can be used to render 3D apps at a lower resolution and still have them fill the entire screen.

Unfortunately there are 2 problems with this plan:

  1. SDL2 by default uses its x11 backend, not its wayland backend (see the sketch after this list for how an app can opt in today). I'm not sure what fixes need to be done to change this; at a minimum we need a fix at either the SDL or mutter side for this issue, which is going to be tricky.

  2. This only helps for SDL2 apps, again hopefully the SDL1.2 compatibility wrapper for SDL2 can help here, at least for games using SDL.
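
As mentioned in problem 1, until the default changes an app (or a wrapper) can already opt in to SDL2's Wayland backend explicitly; a minimal C sketch using the SDL_VIDEODRIVER environment variable:

#include <stdio.h>
#include <stdlib.h>
#include <SDL2/SDL.h>

/* Minimal sketch: force SDL2's Wayland video backend before SDL_Init(). */
int
main (void)
{
    setenv ("SDL_VIDEODRIVER", "wayland", 1); /* must precede SDL_Init */
    if (SDL_Init (SDL_INIT_VIDEO) != 0) {
        fprintf (stderr, "SDL_Init failed: %s\n", SDL_GetError ());
        return 1;
    }
    printf ("video driver: %s\n", SDL_GetCurrentVideoDriver ());
    SDL_Quit ();
    return 0;
}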

2. Fullscreen performance is bad with many games.

Since under Wayland games cannot change the monitor resolution, they need to either render at the full desktop resolution, which can be very slow; or they render at a lower resolution and then need to do an extra scaling step each frame.

If we manage to make SDL2's Wayland backend the default and then add WPviewport support to it, then this should help by removing an extra memcpy/blit of a desktop-sized pixmap. Currently, apps which use scaling do the following:

  1. render lower-res-pixmap;

  2. scale lower-res-pixmap to desktop-res-pixmap

  3. give desktop-res-pixmap to the compositor;

  4. compositor does a hardware blit of the desktop-res-pixmap to the framebuffer.

With viewport support this becomes:

  1. render lower-res-pixmap;

  2. give low-res-pixmap to the compositor;

  3. compositor uses hardware to do a scaling blit from the low-res-pixmap to the desktop-res framebuffer

Also with viewport support, in the case of there being only the one fullscreen app, the compositor could even keep the framebuffer in lowres and use a hardware scaling drm-plane to send the low-res framebuffer scaled to desktop-res to the output, reading only the low-res framebuffer from memory and saving a ton of memory bandwidth. But this optimization is going to be a challenge to pull off.
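
For the curious, the client side of the viewport path is small; a minimal C sketch against the viewporter protocol (the header is generated by wayland-scanner, and binding the wp_viewporter global via the registry is omitted):

#include <wayland-client.h>
#include "viewporter-client-protocol.h"

/* Minimal sketch: attach a low-res buffer and ask the compositor to
 * scale it to the full output size, as in the second list above. */
static void
present_scaled (struct wp_viewporter *viewporter,
                struct wl_surface    *surface,
                struct wl_buffer     *lowres_buffer,
                int32_t               out_w,
                int32_t               out_h)
{
    struct wp_viewport *viewport =
        wp_viewporter_get_viewport (viewporter, surface);

    /* The compositor scales the attached buffer to out_w x out_h,
     * ideally with a hardware blit or a scaling drm-plane. */
    wp_viewport_set_destination (viewport, out_w, out_h);

    wl_surface_attach (surface, lowres_buffer, 0, 0);
    wl_surface_commit (surface);
}
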
May 21, 2019
The just released 5.2-rc1 kernel includes improved support for Logitech wireless keyboards and mice. Until now we were relying on the generic HID keyboard and mouse emulation for 27 MHz and non-unifying 2.4 GHz wireless receivers.

Starting with the 5.2 kernel we instead actually look at the devices behind the receiver. This allows us to provide battery monitoring support and to have per-device quirks, like device-specific HID-code to evdev-code mappings where necessary. Until now device-specific quirks were not possible because the receivers have a generic product-id which is the same independent of the device behind the receiver.

The per-device key-mapping is especially important for 27 MHz wireless devices: these use the same HID-code for Fn + F1 through Fn + F12 on all devices, but the markings on the keys differ per model. So far it was impossible for Linux to get this mapping right, but now that we have per-device product-ids for the devices behind the receiver we can finally fix this. As is the case with other devices with vendor-specific mappings, the actual mapping is done in userspace through hwdb.

If you have a 27 MHz device (often using this receiver; the keyboard is marked "canada 210" or "canada 310" on the bottom), please give 5.2 a try. Download the latest 60-keyboard.hwdb file and place it in /lib/udev/hwdb.d (replacing the existing file), then run "sudo udevadm hwdb --update" before booting into the 5.2 kernel. Then run "sudo evemu-record", select your keyboard, and try Fn + F1 through Fn + F12 and any other special keys. If any keys do not work, edit 60-keyboard.hwdb, search for Logitech, and add an entry for your keyboard modelled on the existing Logitech entries (a made-up example of the format follows below). After editing you need to re-run "sudo udevadm hwdb --update", followed by "sudo udevadm trigger", for the changes to take effect. Once you have a working entry, submit a pull-req to systemd to get the changes upstream. If you need any help, drop me an email.
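To give an idea of the hwdb format: a match line (USB bus 0003, Logitech vendor-id 046d, then the product-id) followed by one-space-indented key mappings. The product-id C123 and the two mappings below are completely made up for illustration; copy one of the real Logitech entries instead:

  # Logitech example keyboard (hypothetical entry)
  evdev:input:b0003v046DpC123*
   KEYBOARD_KEY_c1010=media
   KEYBOARD_KEY_c1011=mail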

We still have some old code doing generic HID emulation for 27 MHz receivers with a product-id of c50c; these should work fine with the new code, but we've been unable to test this. I would really like to move the c50c id over to the new code and remove all the old code. If you have a 27 MHz Logitech device, please run lsusb; if your device has a product-id of c50c and you are willing to test, please drop me an email.

Likewise, I suspect that 2.4 GHz receivers with a product-id of c531 should work fine with the new support for non-unifying 2.4 GHz receivers; if you have one of those, please also drop me an email.
May 14, 2019
We have a job opening in our team; it's a pretty senior role, and we definitely want people with lots of experience. Great place to work, ignore any possible future mergers :-)

https://global-redhat.icims.com/jobs/68911/principal-software-engineer/job?mobile=false&width=1526&height=500&bga=true&needsRedirect=false&jan1offset=600&jun1offset=600
Now that GNOME3 on Wayland is the default in Fedora I've been trying to use it as my default desktop, but until recently I kept falling back to GNOME3 on Xorg because of various small issues.

To fix this I've switched to using GNOME3 on Wayland as my day to day desktop, and I'm working on fixing any issues I hit as they come up, aka "The Wayland Itches project". So far I've hit and fixed the following issues:

1. TopIcons

The TopIcons extension, which I depend on for some of my workflow, was not working well under Wayland with GNOME-3.30: only the top row of icons was clickable. This was fixed in GNOME-3.32, but with GNOME-3.32 using TopIcons caused gnome-shell to go into a loop, leading to very high CPU load. The day I wanted to start looking into fixing this I was chatting with Carlos Garnacho, and he pointed out that this had been fixed in gnome-shell a couple of days earlier. The fix is in gnome-shell 3.32.2.

2. Hotkeys/desktop shortcuts not working in VirtualBox Virtual Machines

When running a VirtualBox VM under GNOME3 on Wayland, hotkeys such as alt+tab go to the GNOME3 desktop, rather than being forwarded to the VM as happens under Xorg. This can be fixed by changing two settings:

  gsettings set org.gnome.mutter.wayland xwayland-allow-grabs true
  gsettings set org.gnome.mutter.wayland xwayland-grab-access-rules "['VirtualBox Machine']"
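As far as I know, the first setting allows keyboard grabs issued by Xwayland applications to be honored at all, and the second is a list of X11 clients (matched by their name/class) which are then allowed to issue such grabs.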

This is a decent workaround, but we want things to "just work" of course, so we have been working on some changes to make this just work in the next GNOME version.

3. firefox-wayland

I've also been trying to use firefox-wayland as my day to day browser. This has led to me filing three firefox bugs, and for now I've switched back to regular firefox (x11).


If you have any Wayland Itches yourself, please drop me an email at hdegoede@redhat.com explaining them in as much detail as you can and I will see what I can do. Note that I typically get a lot of emails when asking for feedback like this, so I cannot promise that I will reply to every email; but I will be reading them all.