Felipe Borges

@felipeborges

Time to write proposals for GSoC 2025 with GNOME!

It is that time of the year again when we start gathering ideas and mentors for Google Summer of Code.

@Mentors, please submit new proposals in our Project ideas GitLab repository before the end of January.

Proposals will be reviewed by the GNOME GSoC Admins and posted in https://gsoc.gnome.org/2025 when approved.

If you have any doubts, please don’t hesitate to contact the GNOME Internship Committee.

Adetoye Anointing

@yorubad-dev

Extracting Texts And Elements From SVG2

Have you ever wondered how SVG files render complex text layouts with different styles and directions so seamlessly? At the core of this magic lies text layout algorithms—an essential component of SVG rendering that ensures text appears exactly as intended.

Text layout algorithms are vital for rendering SVGs that include styled or bidirectional text. However, before layout comes text extraction—the process of collecting and organizing text content and properties from the XML tree to enable accurate rendering.

The Extraction Process

SVGs, being XML-based formats, resemble a tree-like structure similar to HTML. To extract information programmatically, you navigate through nodes in this structure.

Each node in the XML tree holds critical details for implementing the SVG2 text layout algorithm, including:

    • Text content
    • Bidi-control properties (manage text directionality)
    • Styling attributes like font and spacing

Understanding Bidi-Control

Bidi-control refers to managing text direction (e.g., Left-to-Right or Right-to-Left) using special Unicode characters. This is crucial for accurately displaying mixed-direction text, such as combining English and Arabic.

A Basic Example
<text>
  foo
  <tspan>bar</tspan>
  baz
</text>

The diagram and code sample show the structure librsvg creates when it parses this XML tree.

Here, the <text> element has three children:

    1. A text node containing the characters “foo”.
    2. A <tspan> element with a single child text node containing “bar”.
    3. Another text node containing “baz”.

When traversed programmatically, the extracted text from this structure would be “foobarbaz”.

To extract text from the XML tree:

    1. Start traversing nodes from the <text> element.
    2. Continue through each child until the final closing tag.
    3. Concatenate character content into a single string.
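
A minimal sketch of these steps (using a hypothetical Node type for illustration, not librsvg's actual data structures) could look like this:

#include <iostream>
#include <memory>
#include <string>
#include <vector>

// Hypothetical tree node: holds character content (for text nodes) and children.
struct Node {
    std::string character_content;
    std::vector<std::unique_ptr<Node>> children;
};

// Depth-first walk: append this node's characters, then recurse into children.
void collect_text(const Node &node, std::string &out) {
    out += node.character_content;
    for (const auto &child : node.children)
        collect_text(*child, out);
}

int main() {
    // Mirrors <text>foo<tspan>bar</tspan>baz</text>
    Node text;
    text.children.push_back(std::make_unique<Node>(Node{"foo", {}}));
    auto tspan = std::make_unique<Node>();
    tspan->children.push_back(std::make_unique<Node>(Node{"bar", {}}));
    text.children.push_back(std::move(tspan));
    text.children.push_back(std::make_unique<Node>(Node{"baz", {}}));

    std::string out;
    collect_text(text, out);
    std::cout << out << "\n";  // prints "foobarbaz"
}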

While this example seems straightforward, real-world SVG2 files introduce additional complexities, such as bidi-control and styling, which must be handled during text extraction.

Handling Complex SVG Trees

Real-world examples often involve more than just plain text nodes. Let’s examine a more complex XML tree that includes styling and bidi-control:

Example:

<text>
  "Hello"
  <tspan font-style="bold;">bold</tspan>
  <tspan direction="rtl" unicode-bidi="bidi-override">مرحبا</tspan>
  <tspan font-style="italic;">world</tspan>
</text>
Text extraction illustration (credit: Federico, my mentor)

In this example, the <text> element has four children:

    1. A text node containing “Hello”.
    2. A <tspan> element with font-style: bold, containing the text “bold”.
    3. A <tspan> element with bidi-control set to RTL (Right-To-Left), containing Arabic text “مرحبا”.
    4. Another <tspan> element with font-style: italic, containing “world”.

This structure introduces challenges, such as:

    • Styling: Managing diverse font styles (e.g., bold, italic).
    • Whitespace and Positioning: Handling spacing between nodes.
    • Bidirectional Control: Ensuring proper text flow for mixed-direction content.

Programmatically extracting text from such structures involves traversing nodes, identifying relevant attributes, and aggregating the text and bidi-control characters accurately.
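
For the bidi-control part specifically, one common approach (assuming the standard CSS mapping, which SVG uses for these properties) is to translate direction="rtl" with unicode-bidi="bidi-override" into the Unicode control characters U+202E (RIGHT-TO-LEFT OVERRIDE) and U+202C (POP DIRECTIONAL FORMATTING) wrapped around the span's extracted characters:

#include <string>

// Unicode bidi-control characters (UTF-8 encoded by the compiler).
constexpr const char *RLO = "\u202E";  // RIGHT-TO-LEFT OVERRIDE
constexpr const char *PDF = "\u202C";  // POP DIRECTIONAL FORMATTING

// Sketch: wrap the characters of an rtl/bidi-override span so the layout
// stage downstream sees the intended directionality in the aggregated string.
std::string wrap_rtl_override(const std::string &span_text) {
    return std::string(RLO) + span_text + PDF;
}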

Why Test-Driven Development Matters

One significant insight during development was the use of Test-Driven Development (TDD), thanks to my mentor Federico. Writing tests before implementation made it easier to visualize and address complex scenarios. This approach turned what initially seemed overwhelming into manageable steps, leading to robust and reliable solutions.

Conclusion

Text extraction is the foundational step in implementing the SVG2 text layout algorithm. By effectively handling complexities such as bidi-control and styling, we ensure that SVGs render text accurately and beautifully, regardless of direction or styling nuances.

If you’ve been following my articles and feel inspired to contribute to librsvg or open source projects, I’d love to hear from you! Drop a comment below to share your thoughts, ask questions, or offer insights. Your contributions—whether in the form of questions, ideas, or suggestions—are invaluable to both the development of librsvg and the ongoing discussion around SVG rendering. 😊

In my next article, we’ll explore how these extracted elements are processed and integrated into the text layout algorithm. Stay tuned—there’s so much more to uncover!

DIY 12V DC Power Supply

Let’s talk about our journey of creating something from scratch (almost?) for our Electronics I final project. It wasn’t groundbreaking like a full-blown multi-featured DC power supply, but it was a fulfilling learning experience.

Spoiler alert: mistakes were made, lessons were learned, and yes, we had fun.

Design and Calculations

Everything began with brainstorming and sketching out ideas. This was our chance to put all the knowledge from our lectures to the test—from diode operating regions to voltage regulation. It was exciting but also a bit daunting.

The first decision was our power supply's specifications. We aimed for a 12V output—a solid middle ground between complexity and functionality. Plus, the 5V option was already claimed by another group. For rectification, we chose a full-wave bridge rectifier due to its efficiency compared to the half-wave alternative.

Calculations? Oh yes, there were plenty! Transformers, diodes, capacitors, regulators—everything had to line up perfectly on paper before moving to reality.

We started at the output, aiming for a stable 12V. To achieve this, we selected the LM7812 voltage regulator. It was an obvious choice: simple, reliable, and readily available. With an input range of 14.5 to 27V, it could easily provide the 12V we needed.

Since the LM7812 can handle a maximum input voltage of 27V, a 12-0-12V transformer would have been perfect. However, only a 6-0-6V transformer was available, so we had to make do with that. As for the diodes, we used 1N4007s, as they are readily available and can handle our desired specifications.

Assuming the input voltage provided to the regulator is 15.5V, which is also the peak output of the rectifier $ V_{\text{p(rect)}} $, the peak output voltage of the secondary side of the transformer $ V_{\text{p(sec)}} $ must be:

$$ V_{\text{p(sec)}} = V_{\text{p(rect)}} + 1.4V = 15.5V + 1.4V = 16.9V_{\text{pk}} $$

Note: The 1.4V was to account for the voltage drop across the diodes.

or in RMS,

$$ \frac{16.9V_{\text{pk}}}{\sqrt{2}} = 11.95V_{\text{rms}} $$

This fits perfectly within our 6-0-6V transformer's maximum output voltage of 12V RMS.

Using the formula for ripple factor,

$$ r = \frac{V_{\text{r(pp)}}}{V_{\text{dc}}} $$

$$ V_{\text{r(pp)}} = r \times V_{\text{dc}} $$

we can determine the value of the filter capacitor, given a ripple factor $ r $ of 3% or 0.03, and output DC voltage $ V_{\text{dc}} $ of 12V.

$$ V_{\text{r(pp)}} = \frac{V_{\text{p(rect)}}}{f \times R_\text{L} \times C} $$

$$ C = \frac{V_{\text{p(rect)}}}{f \times R_\text{L} \times V_{\text{r(pp)}}} = \frac{V_{\text{p(rect)}}}{f \times R_\text{L} \times r \times V_{\text{dc}}} $$

We also know that a typical frequency of the AC input is 60Hz and we have to multiply it by 2 to get the frequency of the full-wave rectified output.

$$ f = 2 \times 60Hz = 120Hz $$

Also, given the maximum load current of 50mA, we can calculate the assumed load resistance.

$$ R_{\text{L}} = \frac{V_{\text{dc}}}{I_{\text{L}}} = \frac{12V}{50mA} = 240\Omega $$

Substituting the values,

$$ C = \frac{15.5V}{120Hz \times 240\Omega \times 0.03 \times 12V} = 1495 \mu F \approx 1.5 mF $$
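
As a quick sanity check (a throwaway snippet that uses only the values already chosen in this design), the same numbers can be recomputed in a few lines:

#include <cmath>
#include <cstdio>

int main() {
    const double v_p_rect = 15.5;                        // assumed rectifier peak output (V)
    const double v_sec_pk = v_p_rect + 1.4;              // add two diode drops: 16.9 Vpk
    const double v_sec_rms = v_sec_pk / std::sqrt(2.0);  // ~11.95 Vrms

    const double r = 0.03;                               // target ripple factor
    const double v_dc = 12.0;                            // regulated output (V)
    const double f = 2 * 60.0;                           // full-wave rectified frequency (Hz)
    const double r_load = v_dc / 0.050;                  // 12 V / 50 mA = 240 ohms
    const double c = v_p_rect / (f * r_load * r * v_dc); // filter capacitance (F)

    std::printf("Vsec = %.1f Vpk (%.2f Vrms), C = %.0f uF\n",
                v_sec_pk, v_sec_rms, c * 1e6);           // 16.9 Vpk, 11.95 Vrms, ~1495 uF
}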

Here is the final schematic diagram of our design based on the calculations:

Schematic Diagram

Construction

Moving on, we had to put our design into action. This was where the real fun began. We had to source the components, breadboard the circuit, design the PCB, and 3D-print the enclosure.

Breadboarding

The breadboarding phase was a mix of excitement and confusion. We had to double-check every connection and component.

Circuit Overview

Breadboard Close-up

It was a tedious process, but the feeling when the 12V LED lit up? Priceless.

Initial Testing

PCB Design, Etching and Soldering

For the PCB design, we used EasyEDA. It was our first time using it, but it was surprisingly intuitive. We first had to recreate the schematic diagram, then lay out the components and traces.

EasyEDA Schematic

Tracing the components on the PCB was a bit tricky, but we managed to get it done. It was like playing connect-the-dots, except no overlapping lines are allowed, since we only had a single-layer PCB.

PCB Tracing

In the end, it was satisfying to see the final design.

PCB Layout

We had to print it on a sticker paper, transfer it to the copper board, cut it, drill it, etch it, and solder the components. It was a long process, but the result was worth it.

PCB Soldered

Did we also mention that we soldered the regulator in reverse for the first time? Oops. But hey, we learned from it.

Custom Enclosure

To make our project stand out, we decided to 3D-print a custom enclosure. Designing it on SketchUp was surprisingly fun.

3D Model

It was also satisfying to see what was once a software model come to life as a physical object.

3D Printed

Testing

Testing day was a rollercoaster. Smoke-free? Check. Output voltage stable? Mostly.

Line Regulation Via Varying Input Voltage

For the first table, we varied the input voltage and, for each trial, measured the input voltage, the transformer output, the filter output, and the regulator output, then computed the percent voltage regulation.

| Trial No. | Input Voltage ($ V_{\text{rms}} $) | Transformer Output ($ V_{\text{rms}} $) | Filter Output ($ V_{\text{DC}} $) | Regulator Output ($ V_{\text{DC}} $) | % Voltage Regulation |
|---|---|---|---|---|---|
| 1 | 213 | 12.1 | 13.58 | 11.97 | 5 |
| 2 | 214 | 11.2 | 13.82 | 11.92 | 5 |
| 3 | 215 | 10.7 | 13.73 | 12.03 | 10 |
| 4 | 216 | 11.5 | 13.80 | 11.93 | 10 |
| 5 | 217 | 10.8 | 13.26 | 12.01 | 9 |
| 6 | 218 | 11.0 | 13.59 | 11.92 | 9 |
| 7 | 220 | 11.3 | 13.74 | 11.92 | 2 |
| 8 | 222 | 12.5 | 13.61 | 11.96 | 2 |
| 9 | 224 | 12.3 | 13.57 | 11.93 | 10 |
| 10 | 226 | 11.9 | 13.88 | 11.94 | 10 |
| Average | - | 11.53 | 13.67 | 11.953 | 5.5 |

Note: The load resistor is a 22Ω resistor.

Table 1 Graph

Load Regulation Via Varying Load Resistance

For the second table, we varied the load resistance and, for each trial, measured the transformer output, the filter output, and the regulator output, then computed the percent voltage regulation.

| Trial No. | Load Resistance ($ \Omega $) | Transformer Output ($ V_{\text{rms}} $) | Filter Output ($ V_{\text{DC}} $) | Regulator Output ($ V_{\text{FL(DC)}} $) | % Voltage Regulation |
|---|---|---|---|---|---|
| 1 | 220 | 10.6 | 11.96 | 10.22 | 16.4385 |
| 2 | 500 | 10.7 | 12.83 | 11.43 | 4.1120 |
| 3 | 1k | 11.1 | 13.05 | 11.46 | 3.8394 |
| 4 | 2k | 11.1 | 13.06 | 11.48 | 3.6585 |
| 5 | 5k | 10.6 | 13.20 | 11.49 | 3.5683 |
| 6 | 6k | 10.9 | 13.26 | 11.78 | 1.0187 |
| 7 | 10k | 11.2 | 13.39 | 11.85 | 0.4219 |
| 8 | 11k | 11.3 | 13.91 | 11.87 | 0.2527 |
| 9 | 20k | 11.3 | 13.53 | 11.89 | 0 |
| 10 | 22k | 11.1 | 13.27 | 11.90 | 0 |
| Average | - | 10.99 | 13.15 | 11.54 | 3.3394 |

Note: The primary voltage applied to the transformer was 220V in RMS. The $ V_{\text{NL(DC)}} $ used in computing the % voltage regulation is 11.9 V.
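
For reference, the percent voltage regulation in this load-regulation table follows the usual definition, which reproduces the tabulated values (e.g. trial 1: $ (11.9 - 10.22)/10.22 \times 100\% \approx 16.44\% $):

$$ \%VR = \frac{V_{\text{NL(DC)}} - V_{\text{FL(DC)}}}{V_{\text{FL(DC)}}} \times 100\% $$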

Table 2 Graph

Data Interpretation

Looking at the tables, the LM7812 did a great job keeping the output mostly steady at 12V, even when we threw in some wild input voltage swings—what a champ! That said, when the load resistance became too low, it struggled a bit, showing the limits of our trusty (but modest) 6-0-6V transformer. On the other hand, our filtering capacitors stepped in like unsung heroes, keeping the ripples under control and giving us a smooth DC output.

Closing Words

This DC power supply project was a fantastic learning experience—it brought classroom concepts to life and gave us hands-on insight into circuit design and testing. While it performed well for what it is, it’s important to note that this design isn’t meant for serious, high-stakes applications. Think of it more as a stepping stone than a professional-grade benchmark.

Overall, we learned a lot about troubleshooting, design limitations, and real-world performance. With a bit more fine-tuning, this could even inspire more advanced builds down the line. For now, it’s a win for learning and the satisfaction of making something work (mostly) as planned!

Special thanks to our professor for guiding us and to my amazing groupmates—Roneline, Rhaniel, Peejay, Aaron, and Rhon—for making this experience enjoyable and productive (ask them?). Cheers to teamwork and lessons learned!

If you have any questions or feedback, feel free to leave a comment below. We’d love to hear your thoughts or critiques. Until next time, happy tinkering!

Michael Meeks

@michael

2025-01-22 Wednesday

  • Catch up with Dave - our talented cartoonist with whom I've been working on something new and exciting: to build a weekly strip to try to communicate the goodness, humour & intricacy around software, communities, ecosystems and more:
    The Open Road to Freedom - strip#1
  • All Hands meeting, packed and set off for the Univention Summit.

Crosswords 0.3.14

I released Crosswords-0.3.14 this week. This is a checkpoint release—there are a number of experimental features that are still under development. However, I wanted to get a stable release out before changing things too much. Download the apps on flathub! (game, editor)

Almost all the work this cycle happened in the editor. As a result, this is the first version of the editor that’s somewhat close to my vision and that I’m not embarrassed giving to a crossword constructor to use. If you use it, I’d love feedback as to how it went.

Read on for more details.

Libipuz

Libipuz got a version bump to 0.5.0. Changes include:

  • Adding GObject-Introspection support to the library. This meant a bunch of API changes to fix methods that were C-only. Along the way, I took the time to standardize and clean up the API.
  • Documenting the library. It’s about 80% done, and has some tutorials and examples. The API docs are here.
  • Validating both the docs and introspections. As mentioned last post, Philip implemented a nonogram app on top of libipuz in Typescript. This work gave me confidence in the overall API approach.
  • Porting libipuz to Rust. I worked with GSoC student Pranjal and Federico on this. We got many of the leaf structures ported and have an overall approach to the main class hierarchy. Progress continues.

The main goal for libipuz in 2025 is to get a 1.0 version released and available, with some API guarantees.

Autofill

I have struggled to implement the autofill functionality for the past few years. The simple algorithm I wrote would fill out 1/3 of the board, and then get stuck. Unexpectedly, Sebastian showed up and spent a few months developing a better approach. His betterfill algorithm is able to fill full grids a good chunk of the time. It’s built around failing fast in the search tree, and some clever heuristics to force that to happen. You can read more about it at his site.

NOTE: filling an arbitrary grid is NP-hard. It’s very possible to have grids that can’t be easily solved in a reasonable time. But as a practical matter, solving — and failing to solve — is faster now.

I also fixed an annoying issue with the Grid editor. Previously, there were subtabs that would switch between the autofill and edit modes. Tabs in tabs are a bad interface, and I found it particularly clunky to use. However, it let me have different interaction modes with the grid. I talked with Scott a bit about it and he made an off-the-cuff suggestion of merging the tabs together and adding grid selection to the main edit tab. So far it’s working quite nicely, though a tad under-discoverable.

Word Definitions and Substrings

The major visible addition to the Clue phase is the definition tab. They’re pulled from Wiktionary, and included in a custom word-list stored with the editor. I decided on a local copy because Wiktionary doesn’t have an API for pulling definitions and I wanted to keep all operations fast. I’m able to look up and render the definitions extremely quickly.

New dictionary tab

I also made progress on a big design goal for the editor: the ability to work with substrings in the clue phase. For those who are unfamiliar with cryptic crosswords, answers are frequently broken down into substrings which each have their own subclues to indicate them. The idea is to show possibilities for these indicators to provide ideas for puzzle constructors.

Note: If you’re unfamiliar with cryptic clues, this video is a charming introduction to them.

It’s a little confusing to explain, so perhaps an example would help. In this video the answers to some cryptic clues are broken down into their parts. The tabs show how they could have been constructed.

Next steps?

  • Testing: I’m really happy with how the cryptic authoring features are coming together, but I’m not convinced it’s useful yet. I want to try writing a couple of crosswords to be sure.
  • Acrostic editor: We’re going to land Tanmay’s acrostic editor early in the cycle so we have maximum time to get it working.
  • Nonogram player: There are a few API changes needed for nonograms.
  • Word score: I had a few great conversations with Erin about scoring words — time for a design doc.
  • Game cleanup: I’m overdue for a cycle of cleaning up the game. I will go through the open bugs there and clean them up.

Thanks again to all supporters, translators, packagers, testers, and contributors!

Andy Wingo

@wingo

here we go again

Good evening, fey readers. Tonight, a note on human rights and human wrongs.

I am in my mid-forties, and so I have seen some garbage governments in my time; one of the worst was Trump’s election in 2016. My heart ached in so many ways, but most of all for immigrants in the US. It has always been expedient for a politician to blame problems on those outside the polity, and in recent years it has been open season on immigration: there is always a pundit ready to make immigrants out to be the source of a society’s ills, always a publisher ready to distribute their views, always a news channel ready to invite the pundit to discuss these Honest Questions, and never will they actually talk to an immigrant. It gives me a visceral sense of revulsion, as much now as in 2016.

What to do? All of what was happening in my country was at a distance. And there is this funny thing that you don’t realize inside the US, in which the weight of the US’s cultural influence is such that the concerns of the US are pushed out into the consciousness of the rest of the world and made out to have a kind of singular, sublime importance, outweighing any concern that people in South Africa or Kosovo or Thailand might be having; and even to the point of outweighing what is happening in your own country of residence. I remember the horror of Europeans upon hearing of Trump’s detention camps—and it is right to be horrified!—but the same people do not know what is happening at and within their own borders, the network of prisons and flophouses and misery created by Europe’s own xenophobic project. The cultural weight of the US is such that it can blind the rest of the world into ignorance of the local, and thus to inaction, there at the place where the rest of us actually have power to act.

I could not help immigrants in the US, practically speaking. So I started to help immigrants in France. I joined the local chapter of the Ligue des droits de l’Homme, an organization with a human rights focus but whose main activity was a weekly legal and administrative advice clinic. I had gone through the immigration pipeline and could help others.

It has been interesting work. One thing that you learn quickly is that not everything that the government does is legal. Sometimes this observation takes the form of an administrative decision that didn’t respect the details of a law. Sometimes it’s something related to the hierarchy of norms, for example that a law’s intent was not reflected in the way it was translated to the operational norms used by, say, asylum processing agents. Sometimes it’s that there was no actual higher norm, but that the norms are made by the people who show up, and if it’s only the cops that show up, things get tilted copwards.

A human-rights approach is essentially liberal, and I don’t know if it is enough in these the end days of a liberal rule of law. It is a tool. But for what I see, it is enough for me right now: there is enough to do, I can make enough meaningful progress for people that the gaping hole in my soul opened by the Trumpocalypse has started to close. I found my balm in this kind of work, but there are as many paths as people, and surely yours will be different.

So, friends, here we are in 2025: new liver, same eagles. These are trying times. Care for yourself and for your loved ones. For some of you, that will be as much as you can do; we all have different situations. But for the rest of us, and especially those who are not really victims of marginalization, we can do things, and it will help people, and what’s more, it will make you feel better. Find some comrades, and reach your capacity gradually; you’re no use if you burn out. I don’t know what is on the other side of this, but in the meantime let’s not make it easy for the bastards.

Michael Meeks

@michael

2025-01-21 Tuesday

  • Got to a status report, planning call - caught up with overhanging decisions. Sync with Karen, then Hannah, lunch.
  • Monthly management meeting, catch up with Szymon. Partner calls, and sync with Till.
  • Dinner, worked until very late on first slides, then contract review; sleep.

Status update, 21/01/2025

Happy new year everyone!

As a new year’s resolution, I’ve decided to improve SEO for this blog, so from now on my posts will be in FAQ format.

What are Sam Thursfield’s favourite music releases of 2025?

Glad you asked. I posted my top 3 music releases here on Mastodon. (I also put them on Bluesky, because why not? If you’re curious, Christine Lemmer-Webber has a great technical comparison between Bluesky and the Fediverse).

Here is a Listenbrainz playlist with these and my favourites from previous years. There’s also a playlist on Spotify, but watch out for fake Spotify music. I read a great piece by Liz Pelly on how Spotify has created thousands of fake artists to avoid paying musicians fairly.

What has Sam Thursfield learned at work recently?

That’s quite a boring question, but ok. I used FastAPI for the first time. It’s pretty good.

And I have been learning the theory behind the C4 model, which I like more and more. The trick with the C4 model is, it doesn’t claim to solve your problems for you. It’s a tool to help you to think in a more structured way so that you have to solve them yourself. More on that in a future post.

Should Jack Dorsey be allowed to speak at FOSDEM 2025?

Now that is a very interesting question!

FOSDEM is a “free and non-commercial” event, organised “by the community for the community”. The community, in this case, being free and open source software developers. It’s the largest event of its kind, and organising such a beast for little to no money for 25 years running is a huge achievement. We greatly appreciate the effort the organisers put in! I will be at FOSDEM ’25, talking about automated QA infrastructure, helping out at the GNOME booth, and wandering wherever fate leads me.

Jack Dorsey is a Silicon Valley billionaire; you might remember him from selling Twitter to Elon Musk, touting blockchains, and quitting the board of Bluesky because they added moderation features into the protocol. Many people rolled their eyes at the announcement that he will be speaking at FOSDEM this year in a talk titled “Infusing Open Source Culture into Company DNA”.

Drew DeVault stepped forward to organise a protest against Dorsey speaking, announced under the heading “No Billionaires at FOSDEM”. More than one person I’ve spoken to is interested in joining. Other people I know think it doesn’t make sense to protest one keynote speaker out of the 1000s who have stepped on the stage over the years.

Protests are most effective when they clearly articulate what is being protested and what we want to change. The world in 2025 is a complex, messy place, though, and it is changing faster than I can keep up with. Here’s an attempt to think through why this is happening.

Firstly, the “Free and Open Source Software community” is a convenient fiction, and in reality it is made up of many overlapping groups, with an interest in technology being sometimes the only thing we have in common. I can’t explain here all of the nuance, but let’s look at one particular axis, which we could call pro-corporate vs. anti-corporate sentiments.

What I mean by corporate here is quite specific but if you’re alive and reading the news in 2025 you probably have some idea what I mean. A corporation is a legal abstraction which has some of the same rights as a human — it can own property, pay tax, employ people, and participate in legal action — while not actually being a human. A corporation can’t feel guilt, shame, love or empathy. A publicly traded corporation must make a profit — if it doesn’t, another corporation will eat it. (Credit goes to Charlie Stross for this metaphor :-). This leads to corporations that can behave like psychopaths, without being held accountable in the way that a human would. Quoting Alexander Biener:


Elites avoiding accountability is nothing new, but in the last three decades corporate avoidance has reached new lows. Nobody in the military-industrial complex went to jail for lying about weapons of mass destruction in Iraq. Nobody at BP went to jail for the Deepwater oil spill. No traders or bankers (outside of Iceland) were incarcerated for the 2008 financial crash. No one in the Sackler family was punished after Purdue Pharma peddled the death of half a million Americans.

I could post some more articles but I know you have your own experiences of interacting with corporations. Abstractions are useful, powerful and dangerous. Corporations allowed huge changes and improvements in technology and society to take place. They have significant power over our lives. And they prioritize making money over all the things we as individual humans might prioritize, such as fairness, friendliness, and fun.


On the pro-corporate end at FOSDEM, you’ll find people who encourage use of open source in order to share effort between companies, to foster collaboration between teams in different locations and in different organisations, to reduce costs, to share knowledge, and to exploit volunteer labour. When these people are at work, they might advocate publishing code as open source to increase trust in a product, or in the hope that it’ll be widely adopted and become ubiquitous, which may give them a business advantage. These people will use the term “open source” or “FOSS” a lot, they probably have well-paid jobs or businesses in the software industry.

Topics on the pro-corporate side this year include: making a commercial product better (example), complying with legal regulations (example), or consuming open source in corporate software (example).

On the anti-corporate end, you’ll find people whose motivations are not financial (although they may still have a well-paid job in the software industry). They may be motivated by certain values and ethics or an interest in things which aren’t profitable. Their actions are sometimes at odds with the aims of for-profit corporations, such as fighting planned obsolescence, ensuring you have the right to repair a device you bought, and the right to use it however you want even when the manufacturer tries to impose safeguards (sometimes even when you’re using it to break a law). They might publish software under restrictive licenses such as the GNU GPL3, aiming to share it with volunteers working in the open while preventing corporations from using their code to make a profit. They might describe what they do as Free Software rather than “open source”.

Talks on the anti-corporate side might include: avoiding proprietary software (example, example), fighting Apple’s app store monopoly (example), fighting “Big Tech” (example), sidestepping a manufacturer’s restrictions on how you can use your device (example), or the hyper-corporate dystopia depicted in Snow Crash (example).

These are two ends of a spectrum. Neither end is hugely radical. The pro-corporate talks discuss complying with regulations, not lobbying to remove them. The anti-corporate talks are not suggesting we go back to living as hunter-gatherers. And most topics discussed at FOSDEM are somewhere between these poles: technology in a personal context (example), in an educational context (example), history lessons (example).

Many talks are “purely technical”, which puts them in the centre of this spectrum. It’s fun to talk about technology for its own sake and it can help you forget about the messiness of the real world for a while, and even give the illusion that software is a purely abstract pursuit, separate from politics, separate from corporate power, and separate from the experience of being a human.

But it’s not. All the software that we discuss at FOSDEM is developed by humans, for humans. Otherwise we wouldn’t sit in a stuffy room to talk about it would we?

The coexistence of the corporate and the anti-corporate worlds at FOSDEM is part of its character. Few of us are exclusively at the anti-corporate end: we all work on laptops built by corporate workers in a factory in China, and most of us have regular corporate jobs. And few of us are entirely at the pro-corporate end: the core principle of FOSS is sharing code and ideas for free rather than for profit.

There are many “open source” events that welcome pro-corporate speakers, but are hostile to anti-corporate talks. Events organised by the Linux Foundation rarely have talks about “fighting Big Tech”, and you need $700 in your pocket just to attend them. FOSDEM is one of the largest events where folk on the anti-corporate end of the axis are welcome.


Now let’s go back to the talk proposed by Manik Surtani and Jack Dorsey titled “Infusing Open Source Culture into Company DNA”. We can assume it’s towards the pro-corporate end of the spectrum. You can argue that a man with a billion dollars to his name has opportunities to speak which the anti-corporate side of the Free Software community can only dream of, so why give him a slot that could go to someone more deserving?

I have no idea how the main track and keynote speakers at FOSDEM are selected. One of the goals of the protest explained here is “to improve the transparency of the talk selection process, sponsorship terms, and conflict of interest policies, so protests like ours are not necessary in the future.”

I suspect there may be something more at work too. The world in 2025 is a tense place — we’re living through a climate crisis, combined with a housing crisis in many countries, several wars, a political shift to the far-right, and ever increasing inequality around the world. Corporations, more powerful than most governments, are best placed to help if they wanted, but we see very little news about that happening. Instead, they burn methane gas to power new datacenters and recommend we “mainline AI into the veins of the nation“.

None of this is uniquely Jack Dorsey’s fault, but as the first Silicon Valley billionaire to step on the stage of a conference with a strong anti-corporate presence, it may be that he has more to learn from us than we do from him. I hope that, as a long time advocate of free speech, he is willing to listen.

Richard Hughes

@hughsie

fwupd 2.0.4 and DBXUpdate-20241101

I’ve just tagged fwupd 2.0.4 — with lots of nice new features, and most importantly with new protocol support to allow applying the latest dbx security update.

The big change to the uefi-dbx plugin is the switch to an ISO date as a dbx version number for the Microsoft KEK.

The original trick of ‘count the number of Microsoft-owned hashes’ worked really well, just until Microsoft started removing hashes in the distributed signed dbx file. In 2023 we started ‘fixing up’ the version based on the last-added checksum to make the device have an artificially lower version than in reality. This fails with the latest DBXUpdate-20241101 update, where, frustratingly, more hashes were removed than added. We can’t allow fwupd to update to a version that’s lower than what we’ve got already, and this somewhat gave the counting-hashes idea the death blow.

Instead of trying to map the hash into a low-integer version, we now use the last-listed hash in the EFI signature list to map directly to an ISO date, e.g. 20250117. We’re providing the mapping in a local quirk file so that the offline machine still shows something sensible, but are mainly relying on the remote metadata from the LVFS that’s always up to date. There’s even more detail in the plugin README for the curious.

We also changed the update protocol from org.uefi.dbx to org.uefi.dbx2 to simplify the testing matrix — and because we never want version 371 upgrading to 20230314 automatically — as that would actually be a downgrade and difficult to explain.

If we see lots of dbx updates going out with 2.0.4 in the next few hours I’ll also backport the new protocol into 1_9_X for the soon-to-be-released 1.9.27 too.

This Week in GNOME

@thisweek

#183 Updated Flatpak

Update on what happened across the GNOME project in the week from January 10 to January 17.

Flatpak

Georges Stavracas (feaneron) reports

Last week, Flatpak 1.16.0 was released. It’s the first stable release in years! A lot has happened in the meantime; some of the highlights are:

  • Listing USB devices, which, in combination with the USB portal, allows for sandboxed device access
  • Accessibility improvements
  • Support for Wayland security context
  • … and more!

I’ve written about it in more detail in a blog post: https://feaneron.com/2025/01/14/flatpak-1-16-is-out/

GNOME Core Apps and Libraries

Maximiliano 🥑 announces

Snapshot 48.alpha was just released. In this release we added support for reading QR codes.

The aperture library also gained this feature and uses the rqrr crate, meaning that it no longer needs to link against zbar!

Libadwaita

Building blocks for modern GNOME apps using GTK4.

Alice (she/her) announces

a few more improvements for libadwaita adaptive preview: the inspector UI is now less confusing and there’s now a shortcut that opens it directly (Shift+Ctrl+M). The API for opening it is now public and libadwaita demo now has an adaptive preview entry in its menu, along with inspector

Maps

Maps gives you quick access to maps all across the world.

mlundblad says

Maps now has a re-designed user location marker, using a new “torch” to indicate heading, and using the system accent color

Settings

Configure various aspects of your GNOME desktop.

Philip Withnall announces

Screen Time support has landed in the Wellbeing panel in GNOME Settings, which completes the round of merge requests needed to get that feature working across the desktop. It allows you to monitor your daily usage of the computer, and set a time limit each day. This is in addition to break reminders, which landed late last year.

Big thanks to Florian Müllner and Matthijs Velsink for their reviews of the work, and to Sam Hewitt and Allan Day for design work on the feature.

It’s now available to test in GNOME OS nightly images. If you find bugs in the feature, please file an issue against either gnome-control-center or gnome-shell, and label it with the ‘Wellbeing’ label.

GJS

Use the GNOME platform libraries in your JavaScript programs. GJS powers GNOME Shell, Polari, GNOME Documents, and many other apps.

ptomato reports

We also landed several improvements from Marco Trevisan that further improve performance in accessing GObject properties, like button.iconName or label.useMarkup, and make GObject methods use less memory when called from JS.

ptomato says

In GJS, the command-line debugger can now examine private fields of objects, thanks to Gary Li.

GNOME Circle Apps and Libraries

Podcasts

Podcast app for GNOME.

alatiera says

New year, New release 🎉!

This release brings lots of small improvements to make everything a little bit better!

The following are now possible:

  • You can now mark individual episodes as played
  • The Shows will now scale based on the window size
  • You can close the window with the Control + W shortcut

We also changed some internal things:

  • Rework the download machinery to be faster and more efficient
  • Improved application startup times

And we fixed a couple of pesky bugs:

  • Automatically detect the image format for thumbnails
  • Dates are now displayed and calculated using localtime instead of sometimes using UTC
  • Fix accessibility warnings in the Episode Description
  • Correctly trigger a download when thumbnail cover for mpris is missing
  • Correctly calculate the episode download size if it’s missing from the XML metadata

Third Party Projects

Hari Rana | TheEvilSkeleton reports

Refine version 0.4.0 was released. Refine is a GNOME Tweaks alternative I’m working on which follows the data-driven, object-oriented, and composition paradigms. The end goal is to have the convenience to add or remove options without touching a single line of source code.

Version 0.4.0 exposes the following features from dconf:

  • font hinting and font antialiasing options
  • background options
  • window header bar options
  • resize with secondary clicks toggle
  • window focus mode options
  • automatically raise on hover toggle

I had also previously released version 0.2.0, which introduced a combo row for selecting the preferred GTK 3 and GTK 4 theme.

You can get Refine on Flathub.

Parabolic

Download web video and audio.

Nick says

Parabolic V2025.1.2 is here with fixes for various bugs users were experiencing.

Here’s the full changelog:

  • Fixed an issue where the cookies file was not used when validating media URLs
  • Fixed an issue where the Qt version of the app did not select the Best format when the previously used format was not available
  • Fixed an issue where the update button on the Windows app did not work
  • Updated yt-dlp

Mahjongg

A solitaire version of the classic Eastern tile game.

Mat reports

Significant improvements have been made to Mahjongg in the past week:

  • New mode to rotate between layouts when starting a new game (added by K Davis)
  • Added transitions/animations when starting a new game and pausing a game
  • Uses GTK 4’s GPU rendering to render tiles, instead of Cairo
  • Re-rendered the ‘Smooth’ theme for high resolution screens
  • No more delays when starting a new game, thanks to many optimizations
  • Reduced frame drops when resizing the game window
  • Various code cleanups and some fixes for memory leaks

These changes will be available in Mahjongg 48 later this spring. Until then, you can try them out by installing the org.gnome.Mahjongg.Devel Flatpak from the GNOME Nightly repository. Enjoy!

Fractal

Matrix messaging app for GNOME written in Rust.

Kévin Commaille announces

In this cold weather, we hope Fractal 10.rc will warm your hearts. Let’s celebrate this with our own awards ceremony:

  • The most next-gen addition goes to… making Fractal OIDC aware. This ensures compatibility with the upcoming authentication changes for matrix.org.
  • The most valuable fix goes to… consistently showing pills for user and room mentions in the right place instead of seemingly random places, getting rid of one of our oldest and most annoying bugs.
  • The most sensible improvement goes to… using the send queue for attachments, ensuring correct order of all messages and improving the visual feedback.
  • The most underrated feature goes to… allowing reactions to stickers, fixing a crash in the process.
  • The most obvious tweak goes to… removing the “Open Direct Chat” menu entry from the avatar menu and member profile in direct chats.
  • The clearest enhancement goes to… labelling experimental versions in the room upgrade menu as such.

As usual, this release includes other improvements, fixes and new translations thanks to all our contributors, and our upstream projects.

It is available to install via Flathub Beta, see the instructions in our README.

As the version implies, it should be mostly stable and we expect to only include minor improvements until the release of Fractal 10.

If you are wondering what to do on a cold day, you can try to fix one of our newcomers issues. We are always looking for new contributors!

That’s all for this week!

See you next week, and be sure to stop by #thisweek:gnome.org with updates on your own projects!

Luis Villa

@luis

non-profit social networks: benchmarking responsibilities and costs

I’m trying to blog quicker this year. I’m also sick with the flu. Forgive any mistakes caused by speed, brevity, or fever.

Monday brought two big announcements in the non-traditional (open? open-ish?) social network space, with Mastodon moving towards non-profit governance (asking for $5M in donations this year), and Free Our Feeds launching to do things around ATProto/Bluesky (asking for $30+M in donations).

It’s a little too early to fully understand what either group will do, and this post is not an endorsement of specifics of either group—people, strategies, etc.

Instead, I just want to say: they should be asking for millions.

There’s a lot of commentary like this one floating around:

I don’t mean this post as a critique of Jan or others. (I deliberately haven’t linked to the source, please don’t pile on Jan!) Their implicit question is very well-intentioned. People are used to very scrappy open source projects, so millions of dollars just feels wrong. But yes, millions is what this will take.

What could they do?

I saw a lot of comments this morning that boiled down to “well, people run Mastodon servers for free, what does anyone need millions for”? Putting aside that this ignores that any decently-sized Mastodon server has actual server costs (and great servers like botsin.space shut down regularly in part because of those), and treats the time and emotional trauma of moderation as free… what else could these orgs be doing?

Just off the top of my head:

  • Moderation, moderation, moderation, including:
    • moderation tools, which by all accounts are brutally badly needed in Masto and would need to be rebuilt from scratch by FoF. (Donate to IFTAS!)
    • multi-lingual and multi-cultural, so you avoid the Meta trap of having 80% of users outside the US/EU but 80% of moderation in the US/EU.
  • Jurisdictionally-distributed servers and staff
    • so that when US VP Musk comes after you, there’s still infrastructure and staff elsewhere
    • and lawyers for this scenario
  • Good governance
    • which, yes, again, lawyers, but also management, coordination, etc.
    • (the ongoing WordPress meltdown should be a great reminder that good governance is both important and not free)
  • Privacy compliance
    • Mention “GDPR compliance” and “Mastodon” in the same paragraph and lots of lawyers go pale; doing this well would be a fun project for a creative lawyer and motivated engineers, but a very time-consuming one.
    • Bluesky has similar challenges, which get even harder as soon as meaningfully mirrored.

And all that’s just to have the same level of service as currently.

If you actually want to improve the software in any way, well, congratulations: that’s hard for any open source software, and it’s really hard when you are doing open source software with millions of users. You need product managers, UX designers, etc. And those aren’t free. You can get some people at a slight discount if you’re selling them on a vision (especially a pro-democracy, anti-harassment one), but in the long run you either need to pay near-market or you get hammered badly by turnover, lack of relevant experience, etc.

What could that cost, $10?

So with all that in mind, some benchmarks to help frame the discussion. Again, this is not to say that an ATProto- or ActivityPub-based service aimed at achieving Twitter or Instagram-levels of users should necessarily cost exactly this much, but it’s helpful to have some numbers for comparison.

  • Wikipedia: (source)
    • legal: $10.8M in 2023-2024 (and Wikipedia plays legal on easy mode in many respects relative to a social network—no DMs, deliberately factual content, sterling global brand)
    • hosting: $3.4M in 2023-2024 (that’s just hardware/bandwidth, doesn’t include operations personnel)
  • Python Package Index
    • $20M/year in bandwidth from Fastly in 2021 (source) (packages are big, but so is social media video, which is table stakes for a wide-reaching modern social network)
  • Twitter
    • operating expenses, not including staff, of around $2B/year in 2022 (source)
  • Signal
  • Content moderation
    • Hard to get useful information on this on a per company basis without a lot more work than I want to do right now, but the overall market is in the billions (source).
    • Worth noting that lots of the people leaving Meta properties right now are doing so in part because tens of thousands of content moderators, paid unconscionably low wages, are not enough.

You can handwave all you want about how you don’t like a given non-profit CEO’s salary, or you think you could reduce hosting costs by self-hosting, or what have you. Or you can push the high costs onto “volunteers”.

But the bottom line is that if you want there to be a large-scale social network, even “do it as cheap as humanly possible” means millions in costs borne by someone.

What this isn’t

This doesn’t mean “give the proposed new organizations a blank check”. As with any non-profit, there’s danger of over-paying execs, boards being too cozy with execs and not moving them on fast enough, etc. (Ask me about founder syndrome sometime!) Good governance is important.

This also doesn’t mean I endorse Bluesky’s VC funding; I understand why they feel they need money, but taking that money before the techno-social safeguards they say they want are in place is begging for problems. (And in fact it’s exactly because of that money that I think Free Our Feeds is intriguing—it potentially provides a non-VC source of money to build those safeguards.)

But we have to start with a realistic appraisal of the problem space. That is going to mean some high salaries to bring in talented people to devote themselves to tackling hard, long-term, often thankless problems, and lots of data storage and bandwidth.

And that means, yes, millions of dollars.

Hans de Goede

@hansdg

IPU6 camera support status update

The initial IPU6 camera support that landed in Fedora 41 only works on a limited set of laptops. The reason for this is that with MIPI cameras, every different sensor and glue-chip (like IO-expanders) needs to be supported separately.

I have been working on making the camera work on more laptop models. After receiving and sending many emails and blog post comments about this I have started filing Fedora bugzilla issues on a per sensor and/or laptop-model basis to be able to properly keep track of all the work.

Currently the following issues are either being actively worked on, or are being tracked to be fixed in the future.

Issues which have fixes pending (review) upstream:


Open issues with various states of progress:

See all the individual bugs for more details. I plan to post semi-regular status updates on this on my blog.

The above list of issues can also be found on my Fedora 42 change proposal tracking this, and I intend to keep an updated complete list of all x86 MIPI camera issues (including closed ones) there.




Jussi Pakkanen

@jpakkane

Measuring code size and performance

Are exceptions faster and/or bloatier than using error codes? Well...

The traditional wisdom is that exceptions are faster when not taken, slower when taken and lead to more bloated code. On the other hand there are cases where using exceptions makes code a lot smaller. In embedded development, even, where code size is often the limiting factor.

Artificial benchmarks aside, measuring the effect on real world code is fairly difficult. Basically you'd need to implement the exact same, nontrivial piece of code twice. One implementation would use exceptions, the other would use error codes but they should be otherwise identical. No-one is going to do that for fun or even idle curiosity.

CapyPDF has been written exclusively using C++ 23's new expected object for error handling. As every Go programmer knows, typing error checks over and over again is super annoying. Very early on I wrote macros to autopropagate errors. That raises an interesting question: could you commit horrible macro crimes to make the error handling use either error objects or exceptions?
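
To make the question concrete, here is a hedged sketch of the general shape such a dual-mode setup could take (not CapyPDF's actual macros, and with made-up names like PROPAGATE and USE_EXCEPTIONS):

#include <expected>
#include <stdexcept>
#include <string>
#include <utility>

struct Error { std::string message; };

#ifdef USE_EXCEPTIONS
// Exception build: functions return plain values, failures throw,
// and "propagation" is just evaluating the expression.
template <typename T> using Result = T;
#define FAIL(msg) throw std::runtime_error(msg)
#define PROPAGATE(var, expr) auto var = (expr)
#else
// Error-object build: functions return std::expected and the macro
// early-returns the error to the caller.
template <typename T> using Result = std::expected<T, Error>;
#define FAIL(msg) return std::unexpected(Error{msg})
#define PROPAGATE(var, expr)                                              \
    auto var##_res = (expr);                                              \
    if (!var##_res) return std::unexpected(std::move(var##_res).error()); \
    auto var = std::move(*var##_res)
#endif

// The same source then compiles in both modes:
Result<int> parse_digit(char c) {
    if (c < '0' || c > '9') FAIL("not a digit");
    return c - '0';
}

Result<int> parse_two_digits(const char *s) {
    PROPAGATE(hi, parse_digit(s[0]));
    PROPAGATE(lo, parse_digit(s[1]));
    return hi * 10 + lo;
}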

It turns out that yes, you can. After a thorough scrubbing of the ensuing shame from your body and soul, you can start doing measurements. To get started I built and ran CapyPDF's benchmark application with the following option combinations:

  • Optimization -O1, -O2, -O3, -Os
  • LTO enabled and disabled
  • Exceptions enabled and disabled
  • RTTI enabled and disabled
  • NDEBUG enabled and disabled

The measurements are the stripped size of the resulting shared library and runtime of the test executable. The code and full measurement data can be found in this repo. The code size breakdown looks like this:

Performance goes like this:

Some interesting things to note:

  • The fastest runtime is 0.92 seconds with O3-lto-rtti-noexc-ndbg
  • The slowest is 1.2s with Os-nolto-rtti-noexc-ndbg
  • If we ignore Os, the slowest is 1.07s with O1-nolto-rtti-noexc-ndbg
  • The largest code is 724 kB with O3-nolto-nortti-exc-nondbg
  • The smallest is 335 kB with Os-lto-nortti-noexc-ndbg
  • Ignoring Os, the smallest is 470 kB with O1-lto-nortti-noexc-ndbg

Things noticed via eyeballing

  • Os leads to noticeably smaller binaries at the cost of performance
  • O3 makes binaries a lot bigger in exchange for a fairly modest performance gain
  • NDEBUG makes programs both smaller and faster, as one would expect
  • LTO typically improves both speed and code size
  • The fastest times for O1, O2 and O3 are within a few percentage points of each other, at 0.95, 0.94 and 0.92 seconds, respectively

Caveats

At the time of writing the upstream code uses error objects even when exceptions are enabled. To replicate these results you need to edit the source code.

The benchmark does not actually raise any errors. This test only measures the golden path.

The tests were run on GCC 14.2 on x86_64 Ubuntu 10/24.

Flatpak 1.16 is out!

Last week I published the Flatpak 1.16.0 release. This marks the beginning of the 1.16 stable series.

This release comes after more than two years since Flatpak 1.14, so it’s pretty packed with new features, bug fixes, and improvements. Let’s have a look at some of the highlights!

USB & Input Devices

Two new features are present in Flatpak 1.16 that improve the handling of devices:

  • The new input device permission
  • Support for USB listing

The first, while technically still a sandbox hole that should be treated with caution, allows some apps to replace --device=all with --device=input, which has a far smaller surface. This is interesting in particular for apps and games that use joysticks and controllers, as these are usually exported by the kernel under /dev/input.

The second is likely the biggest new feature in the Flatpak release! It allows Flatpak apps to list which USB devices they intend to use. This is stored as static metadata in the app, which is then used by XDG Desktop Portal to notify the app about plugs and unplugs, and eventually request the user for permission.

Using the USB portal, Flatpak apps are able to list the USB devices that they have permission to list (and only them). Actually accessing these USB devices triggers a permission request where the user can allow or deny the app from having access to the device.

Finally, it is possible to forcefully override these USB permissions locally with the --usb and --nousb command-line arguments.

This should make the USB access story fairly complete. App stores like Flathub are able to review the USB permissions ahead of time, before the app is published, and see if they make sense. The portal usage prevents apps from accessing devices behind the user’s back. And users are able to control these permissions locally even further.

Better Wayland integration

Flatpak 1.16 brings a handful of new features and improvements that should deepen its integration with Wayland.

Flatpak now creates a private Wayland socket with the security-context-v1 extension if available. This allows the Wayland compositor to properly identify connections from sandboxed apps as belonging to the sandbox.

Specifically, with this protocol, Flatpak is able to securely tell the Wayland compositor that (1) the app is a Flatpak-sandboxed app, (2) an immutable app id, and (3) the instance id of the app. None of these bits of information can be modified by apps themselves.

With this information, compositors can implement unique policies and have tight control over security.

Accessibility

Flatpak already exposes enough of the accessibility stack for most apps to be able to report their accessible contents. However, not all apps are equal, and some require rather challenging setups with the accessibility stack.

One big example here is the WebKit web engine. It basically pushes Flatpak and portals to their limit, since each tab is a separate process. Until now, apps that use WebKit – such as GNOME Web and Newsflash – were not able to have the contents of the web pages properly exposed to the accessibility stack. That means things like screen readers wouldn’t work there, which is pretty disappointing.

Fortunately a lot of work was put on this front, and now Flatpak has all the pieces of the puzzle to make such apps accessible. These improvements also allow apps to detect when screen readers are active, and optimize for that.

WebKit is already adapted to use these new features when they’re available. I’ll be writing about this in more details in a future series of blog posts.

Progress Reporting

When installing Flatpak apps through the command-line utility, it already shows a nice fancy progress bar with block characters. It looks nice and gets the job done.

However, terminals may have support for an OSC escape sequence to report progress. Christian Hergert wrote about it here. Christian also went ahead and introduced support for emitting the progress escape sequence in Flatpak. Here’s an example:

Screenshot of the terminal app Ptyxis with a progress bar

Unfortunately, right before the release, it was reported that this new feature was spamming some terminal emulators with notifications. These terminals (kitty and foot) have since been patched, but older LTS distributions probably won’t upgrade. That forced us to make it opt-in for now, through the FLATPAK_TTY_PROGRESS environment variable.

Ptyxis (the terminal app above) automatically sets this environment variable so it should work out of the box. Users can set this variable on their session to enable the feature. For the next stable release (Flatpak 1.18), assuming terminals cooperate on supporting this feature, the plan is to enable it by default and use the variable for opting out.
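
For the curious, emitting such a progress sequence from a program is tiny. A standalone sketch follows, assuming the ConEmu-style “OSC 9;4” sequence (my assumption; see Christian’s post linked above for details), which would also explain why unpatched terminals could mistake it for the OSC 9 notification sequence:

#include <cstdio>
#include <cstdlib>

// state 1 = normal progress with a value in percent; state 0 clears the indicator.
static void report_progress(int state, int percent) {
    // Mirror the opt-in behaviour described above.
    if (std::getenv("FLATPAK_TTY_PROGRESS") == nullptr)
        return;
    std::printf("\x1b]9;4;%d;%d\x1b\\", state, percent);
    std::fflush(stdout);
}

int main() {
    for (int pct = 0; pct <= 100; pct += 25)
        report_progress(1, pct);   // advance the indicator
    report_progress(0, 0);         // done: clear it
}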

Honorable Mentions

I simply cannot overstate how many bugs were fixed in Flatpak in all these releases.

We had 13 unstable releases (the 1.15.X series) until we finally released 1.16 as a stable release. A variety of small memory leaks and build warnings were fixed.

The gssproxy socket is now shared with apps, which acts like a portal for Kerberos authentication. This lets apps use Kerberos authentication without needing a sandbox hole.

Flatpak now tries to pick languages from the AccountsService service, making it easier to configure extra languages.

Obsolete driver versions and other autopruned refs are now automatically removed, which should help keep things tight and clean, and reduce the installed size.

If the timezone is set through the TZDIR environment variable, Flatpak takes timezone information from there. This should fix apps with the wrong timezone in NixOS systems.

More environment variables are now documented in the man pages.

This is the first stable release of Flatpak that can only be built with Meson. Autotools served us honorably for decades, but it was time to move to something more modern, and Meson has been a great option for a long time now. Flatpak 1.16 only requires a fairly old version of Meson, which should make it easy to distribute on older LTS distributions.

Finally, the 1.10 and 1.12 series have now reached their end of life, and users and distributions are encouraged to upgrade to 1.16 as soon as possible. During this development cycle, four CVEs were found and fixed; all of these fixes were backported to the 1.14 series, but not all of them made it to versions older than that. So if you’re using Flatpak 1.10 or 1.12, be aware that you’re doing so at your own risk.

Future

The next milestone for the platform is a stable XDG Desktop Portal release. This will ship with the aforementioned USB portal, as well as other niceties for apps. Once that’s done, and after a period of receiving bug reports and fixing them, we can start thinking about the next goals for these projects.

These are important parts of the platform, and are always in need of contributors. If you’re interested in helping out with development, issue management, coordination, developer outreach, and/or translations, please reach out to us in the following Matrix rooms:

Acknowledgements

Thanks to all contributors, volunteers, issue reporters, and translators that helped make this release a reality. In particular, I’d like to thank Simon McVittie for all the continuous maintenance, housekeeping, reviews, and coordination done on Flatpak and adjacent projects.

Andy Wingo

@wingo

an annoying failure mode of copying nurseries

I just found a funny failure mode in the Whippet garbage collector and thought readers might be amused.

Say you have a semi-space nursery and a semi-space old generation. Both are block-structured. You are allocating live data, say, a long linked list. Allocation fills the nursery, which triggers a minor GC, which decides to keep everything in the nursery another round, because that’s policy: Whippet gives new objects another cycle in which to potentially become unreachable.

This causes a funny situation!

Consider that the first minor GC doesn’t actually free anything. But, like, nothing: it’s impossible to allocate anything in the nursery after collection, so you run another minor GC, which promotes everything, and you’re back to the initial situation, wash rinse repeat. Copying generational GC is strictly a pessimization in this case, with the additional insult that it doesn’t preserve object allocation order.
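To make the wash-rinse-repeat concrete, here is a toy model of the situation (not Whippet code, just a sketch with made-up block counts, a mutator whose data all stays live, and a “give survivors one extra cycle” promotion policy):

#include <stdbool.h>
#include <stdio.h>

/* Toy model, not Whippet code: a nursery measured in blocks, a mutator that
 * allocates a long linked list (so everything stays live), and a minor-GC
 * policy that gives survivors one extra cycle in the nursery before
 * promoting them. The block counts are made up. */
int main(void)
{
    const int nursery_blocks = 8;
    int  live_in_nursery = 0;     /* blocks of live data sitting in the nursery */
    bool survived_once   = false; /* has this data already survived a minor GC? */
    int  old_gen_blocks  = 0;     /* blocks promoted to the old generation */

    for (int gc = 1; gc <= 6; gc++) {
        /* The mutator fills whatever room is left; all of it stays reachable. */
        int allocated = nursery_blocks - live_in_nursery;
        live_in_nursery = nursery_blocks;
        printf("allocated %d blocks, nursery full -> minor GC #%d: ", allocated, gc);

        if (!survived_once) {
            /* Policy: keep everything in the nursery for one more cycle.
             * Nothing is freed, so the very next allocation attempt
             * immediately triggers another minor GC. */
            survived_once = true;
            printf("kept %d blocks, freed 0\n", live_in_nursery);
        } else {
            /* The follow-up collection promotes it all to the old generation. */
            old_gen_blocks += live_in_nursery;
            printf("promoted %d blocks (old gen: %d), nursery empty again\n",
                   live_in_nursery, old_gen_blocks);
            live_in_nursery = 0;
            survived_once   = false;
        }
    }
    return 0;
}

Every pair of collections copies a nursery’s worth of data while reclaiming nothing until the second one promotes it: all cost, no benefit.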

Consider also that because copying collectors with block-structured heaps are unreliable, any one of your minor GCs might require more blocks after GC than before. Unlike a major GC, where this essentially indicates out-of-memory (either because of a mutator bug or because the user didn’t give the program enough heap), for a minor GC this is just what we expect when allocating a long linked list.

Therefore we either need to allow a minor GC to allocate fresh blocks – very annoying, and we have to give them back at some point to prevent the nursery from growing over time – or we need to maintain some kind of margin, corresponding to the maximum amount of fragmentation. Or, or, we allow evacuation to fail in a minor GC, in which case we fall back to promotion.

Anyway, I am annoyed and amused and I thought others might share in one or the other of these feelings. Good day and happy hacking!

This Week in GNOME

@thisweek

#182 Updated Crypto

Update on what happened across the GNOME project in the week from January 03 to January 10.

GNOME Core Apps and Libraries

nielsdg reports

gcr, a core library that provides a GObject-oriented interface to several crypto APIs, is preparing for the new 4.4 version with the alpha release 4.3.90. It contains some new APIs for GcrCertificate, such as the new GcrCertificateExtension class that allows you to inspect certificate extensions. 🕵️

nielsdg says

GNOME Keyring has now finally moved to Meson and has dropped support for building with autotools. This will be part of the upcoming 48.alpha release.

Vala

An object-oriented programming language with a self-hosting compiler that generates C code and uses the GObject system.

lorenzw says

Many people might have seen it already, but a while ago we finally officially moved our documentation from the old GNOME wiki to a new website: https://docs.vala.dev! This has been a long-standing task completed by Colin Kiama. The pages are hosted on https://github.com/vala-lang/vala-docs and everyone is welcome to contribute and improve them. We have already started to file tickets in the issue tracker and assign labels, especially for newcomers, so it’s easy to start helping out! We want to port a lot more docs and code examples from other locations to this new website, and that’s not difficult at all! The website is built similarly to all other new GNOME documentation websites using Sphinx, so you don’t even need to learn a new markup language. Happy docs reading and hacking! :D

Image Viewer (Loupe)

Browse through images and inspect their metadata.

Sophie 🏳️‍🌈 🏳️‍⚧️ (she/her) says

Image Viewer (Loupe) 48.alpha is now available.

This new release adds image editing support for PNGs and JPEGs. Images can be cropped (tracking issue), rotated, and flipped. New zoom controls allow setting a specific zoom level and feature a more compact style. Support for additional metadata formats like XMP and new image information fields have been added as well.

Libadwaita

Building blocks for modern GNOME apps using GTK4.

Alice (she/her) reports

adaptive preview has received a bunch of updates since the last time: for example, it now shows device bezels and allows taking a screenshot of the app along with the shell panels and bezels

GNOME Circle Apps and Libraries

Shortwave

Internet radio player with over 30000 stations.

Felix announces

At the end of the festive season I was able to implement one more feature: Shortwave now supports background playback, and interacts with the background portal to display the current status in the system menu!

Third Party Projects

Fabrix announces

Confy 0.8.0 has been released. Confy is a conference schedule companion. This release brings an updated UI design, some quality-of-life improvements like a list of recently opened schedules, and fixes to schedule parsing. https://confy.kirgroup.net/

Parabolic

Download web video and audio.

Nick announces

Parabolic V2025.1.0 is here! This update contains various bug fixes for issues users were experiencing, as well as a new format selection system.

Here’s the full changelog:

  • Parabolic will now display all available video and audio formats for selection by the user when downloading a single media file
  • Fixed an issue where some video downloads contained no audio
  • Fixed an issue where progress was incorrectly reported for some downloads
  • Fixed an issue where downloads would not stop on Windows
  • Fixed an issue where paths with accent marks were not handled correctly on Windows
  • Fixed an issue where the bundled ffmpeg did not work correctly on some Windows systems

That’s all for this week!

See you next week, and be sure to stop by #thisweek:gnome.org with updates on your own projects!

Adetoye Anointing

@yorubad-dev

Demystifying SVG2 Text Layout: Understanding Librsvg

Prerequisite

Hi! I’m Anointing, your friendly neighborhood software engineer. I’m an Outreachy GNOME intern currently working on the project titled “Implement the SVG2 Text Layout Algorithm in Librsvg.”

In a previous blog post, I briefly introduced my project and tasks. If you missed it, don’t worry—this article dives deeper into the project and the specific task I’m working on.


What is Librsvg?

Librsvg is a lightweight library used to render Scalable Vector Graphics (SVGs), primarily within GNOME projects. It has been around since 2001, initially developed to handle SVG icons and graphical assets for GNOME desktops. Over the years, it has evolved into a versatile tool used for various SVG rendering needs.


What is SVG2?

Before understanding SVG2, let’s break down SVG (Scalable Vector Graphics):

  • SVG is an XML-based format for creating two-dimensional graphics.
  • Unlike raster images (e.g., JPEG or PNG), SVG images are scalable, meaning they retain quality regardless of size.
  • They are widely used for web graphics, illustrations, icons, and more because of their scalability and efficiency.

SVG2 (Scalable Vector Graphics, version 2) is the latest update to the SVG standard, developed by the World Wide Web Consortium (W3C). It builds upon SVG 1.1 with new features, bug fixes, and enhancements to make SVG more powerful and consistent across modern browsers.


Librsvg’s Current State

Librsvg supports some parts of the SVG 1.1 specifications for text, including bidirectional text. However, advanced features like per-glyph positioning or text-on-path are not yet implemented.

The SVG2 specification introduces significant improvements in text layout, such as:

  • Fine-grained glyph positioning
  • Support for right-to-left and bidirectional text
  • Vertical text layout
  • Text-on-path
  • Automatic text wrapping

Currently, librsvg does not fully implement SVG2’s comprehensive text layout algorithm. My role is to help improve this functionality.


My Role: Implementing the SVG2 Text Layout Algorithm

If the above sounds technical, don’t worry—I’ll explain the key tasks in simpler terms.

1. Support for Basic Text Layout

This involves ensuring that text in SVG images appears correctly. Imagine a digital poster: every word and letter must be precisely positioned. My task is to make sure librsvg can handle this properly.

2. Whitespace Handling

Whitespace refers to the blank space between words and lines. In SVG, whitespace is standardized—extra spaces should not disrupt the layout. I’m implementing an algorithm to handle whitespace effectively.
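As a rough illustration of what this boils down to (a sketch in C rather than librsvg’s actual Rust code, and only covering the classic xml:space="default" collapsing behaviour that SVG’s default whitespace handling is based on):

#include <stdio.h>

/* Illustrative sketch only (librsvg itself is written in Rust): the classic
 * SVG xml:space="default" collapsing. Newlines are dropped, tabs become
 * spaces, runs of spaces collapse to one, and leading/trailing spaces are
 * trimmed. */
static void collapse_whitespace(const char *in, char *out)
{
    size_t n = 0;
    int pending_space = 0;

    for (; *in != '\0'; in++) {
        char c = *in;
        if (c == '\n' || c == '\r')
            continue;                 /* newlines are removed entirely */
        if (c == '\t')
            c = ' ';                  /* tabs turn into spaces */
        if (c == ' ') {
            pending_space = (n > 0);  /* remember one space, trim leading ones */
            continue;
        }
        if (pending_space) {
            out[n++] = ' ';           /* at most one space between words */
            pending_space = 0;
        }
        out[n++] = c;
    }
    out[n] = '\0';                    /* pending trailing spaces never get written */
}

int main(void)
{
    char result[64];
    collapse_whitespace("  foo\n\t bar   baz  ", result);
    printf("'%s'\n", result);         /* prints 'foo bar baz' */
    return 0;
}

The real implementation also has to respect xml:space="preserve" and SVG2’s richer white-space handling, but collapsing is the common case.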

3. Left-to-Right (LTR) and Right-to-Left (RTL) Languages

Languages like English are read from left to right, while Arabic or Hebrew are read from right to left. Librsvg must handle both correctly using a process called the Bidi (Bidirectional) algorithm.

4. Inter-Glyph Spacing

In SVG, attributes like x, y, dx, and dy allow precise control over letter spacing. This ensures text looks balanced and beautiful. Additionally, this task involves handling ligatures (e.g., combining characters in Arabic) to ensure the output is correct.
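To give a feel for what those attributes do, here is a deliberately simplified sketch (my own illustration in C, not librsvg code, with made-up glyph advances and attribute values) of how x and dx feed into a running pen position; the real algorithm also has to deal with y/dy, ligatures, bidi reordering, and vertical text:

#include <math.h>
#include <stdio.h>

/* Deliberately simplified sketch (not librsvg code): how SVG's x and dx
 * attributes feed into a running "pen" position. An absolute x resets the
 * pen, dx nudges it, and every glyph then advances the pen by its own
 * width. NAN stands for "attribute not given for this character"; the
 * advances and attribute values below are made up for illustration. */
int main(void)
{
    const char  *chars     = "SVG";
    const double advance[] = { 10.0, 9.0, 11.0 };  /* made-up glyph advances */
    const double x[]       = { 5.0,  NAN, 40.0 };  /* absolute positions */
    const double dx[]      = { NAN,  2.0, NAN  };  /* relative adjustments */

    double pen_x = 0.0;
    for (int i = 0; chars[i] != '\0'; i++) {
        if (!isnan(x[i]))
            pen_x = x[i];        /* 'x' restarts the pen at an absolute position */
        if (!isnan(dx[i]))
            pen_x += dx[i];      /* 'dx' shifts relative to where the pen would be */

        printf("glyph '%c' placed at x = %.1f\n", chars[i], pen_x);
        pen_x += advance[i];     /* the pen then moves on by the glyph's advance */
    }
    return 0;
}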

5. Text-on-Path Handling (If Time Permits)

This feature allows text to follow a specific shape, like a circle or wave. It’s a fancy but useful way to add artistic effects to SVGs.


Why Does This Matter?

Improving librsvg’s text layout makes it more powerful and accessible for designers, developers, and artists. Whether creating infographics, digital posters, or interactive charts, these enhancements ensure that text renders beautifully in every language and style.


Tips for Newbies

If you’re new to SVG, text layout algorithms, or even Rust (the programming language used in librsvg), here’s what you need to know:

  • Skills Needed: Communication, basic Rust programming, and familiarity with terms like shaping, bidi algorithm, glyphs, ligatures, and baselines.
  • Start Small: Focus on one concept at a time—there’s no need to know everything at once.
  • Resources: The GNOME librsvg project is beginner-friendly and a great way to dive into open-source contributions.

Resources for Learning and Contributing


Wrapping Up

These tasks may seem technical, but they boil down to making librsvg a better tool for rendering SVGs. Whether it’s neat text placement, handling multiple languages, or adding artistic text effects, we’re improving SVG rendering for everyone.

So far, this project has been a journey of immense learning for me—both in technical skills like Rust programming and soft skills like clear communication.

In future posts, I’ll explore SVG2, librsvg features, and text layout terminologies in greater detail. Stay tuned!

Feel free to ask questions or share your thoughts. I’d love to hear from you and see you in the next chapter—cheers!! 😊🎉

Tobias Bernard

@tbernard

Re-Decentralizing Development

As I’ve already announced internally, I’m stepping down from putting together an STF application for this year. For inquiries about the 2025 application, please contact Adrian Vovk going forward. This is independent of the 2024 STF project, which we’re still in the process of wrapping up. I’m sticking around for that until the end.

The topic of this blog post is not the only reason I’m stepping down but it is an important one, and I thought some of this is general enough to be worth discussing more widely.

In the context of the Foundation issues we’ve had throughout the STF project I’ve been thinking a lot about what structures are best suited for collectively funding and organizing development, especially in the context of a huge self-organized project like GNOME. There are a lot of aspects to this, e.g. I hadn’t quite realized just how important having a motivated, talented organizer like Sonny is to successfully delivering a complex project. But the specific area I want to talk about here is how power and responsibilities should be split up between different entities across the community.

This is my personal view, based on having worked on GNOME in a variety of structures over the years (volunteer, employee, freelancer, for- and non-profit, worked under grants, organized grants, etc.). I don’t have any easy answers, but I wanted to share how my perspective has shifted as a result of the events of the past year, which I hope will contribute to the wider ongoing discussion around this.

A Short History

Unlike many other user-facing free software projects, GNOME had strong corporate involvement since early on in its history, with many different product companies and consultancies paying people to work on various parts of it. The project grew up during the Dotcom bubble (younger readers may not remember this, but “Linux” was the “AI” of that era), and many of our structures date back to this time.

The Foundation was created in those early days as a neutral organization to hold resources that should not belong to any one of the companies involved (such as the trademark, donation money, or the development infrastructure). A lot of checks and balances were put in place to avoid one group taking over the Foundation or the Foundation itself overshadowing other players. For example, hiring developers via the Foundation was an explicit non-goal, advisory board companies do not get a say in the project’s technical direction, and there is a limit to how many employees of any single company can be on the board. See this episode of Emmanuele Bassi’s History of GNOME Podcast for more details.

The Dotcom bubble burst and some of those early companies died, but there continued to be significant corporate investment, e.g. from enterprise desktop companies like Sun, and then later via companies from the mobile space during the hype cycles around netbooks, phones, and tablets around 2010.

Fast forward to today, this situation has changed drastically. In 2025 the desktop is not a growth area for anyone in the industry, and it hasn’t been in over a decade. Ever since the demise of Nokia and the consolidation of the iOS/Android duopoly, most of the money in the ecosystem has been in server and embedded use cases.

Today, corporate involvement in GNOME is limited to a handful of companies with an enterprise desktop business (e.g. Red Hat), and consultancies that mostly do low-level embedded work (e.g. Igalia with browsers, or Centricular with GStreamer).

Retaining the Next Generation

While the current level of corporate investment, in combination with volunteer work from the wider community, has been enough to keep the project afloat in recent years, we have a pretty glaring issue with our new contributor pipeline: there are very few job openings in the field.

As a result, many people end up dropping out or reducing their involvement after they finish university. Others find jobs on adjacent technologies where they occasionally get work time for GNOME-related stuff, and put in a lot of volunteer time on top. Others still are freelancing, applying for grants, or trying to make Patreon work.

While I don’t have firm numbers, my sense is that the number of people in precarious situations like these has been going up since I got involved around 2015. The time when you could just get a job at Red Hat was already long gone when I joined, but for a while e.g. Endless and Purism had quite a few people doing interesting stuff.

In a sense this lack of corporate interest is not unusual for user-facing free software — maybe we’re just reverting to the mean. Public infrastructure simply isn’t commercially profitable. Many other projects, particularly ones without corporate use cases (e.g. Tor) have always been in this situation, and thus have always relied on grants and donations to fund their development. Others have newly moved in this direction in recent years with some success (e.g. Thunderbird).

Foundational Issues

I think what many of us in the community have wanted to see for a while is exactly what Tor, Thunderbird, Blender et al. are doing: Start doing development at the Foundation, raise money for it via donations and grants, and grow the organization to pick up the slack from shrinking corporate involvement.

I know why this idea is so seductive to many of us, and has been for years. It’s in fact so popular, I found four board candidacies (1, 2, 3, 4) from the last few election cycles proposing something like it.

On paper, the Foundation seems perfect as the legal structure for this kind of initiative. It already exists, it has the same name as the wider project, and it already has the infrastructure to collect donations. Clearly all we need to do is to raise a bit more money, and then use that money to hire community members. Easy!

However, after having been in the trenches trying to make it work over the past year, I’m now convinced it’s a bad idea, for two reasons: Short/medium term the current structure doesn’t have the necessary capacity, and longer term there are too many risks involved if something goes wrong.

Lack of Capacity

Simply put, what we’ve experienced in the context of the STF project (and a few other initiatives) over the past year is that the Foundation in its current form is not set up to handle projects that require significant coordination or operational capacity. There are many reasons for this — historical, personal, structural — but my takeaway after this year is that there need to be major changes across many of the Foundation’s structures before this kind of thing is feasible.

Perhaps given enough time the Foundation could become an organization that can fund and coordinate significant development, but there is a second, more important reason why I no longer think that’s the right path.

Structural Risk

One advantage of GNOME’s current decentralized structure is its resilience. Having a small Foundation at the center which only handles infrastructure, and several independent companies and consultancies around it doing development means different parts are insulated from each other if something goes wrong.

If there are issues inside e.g. Codethink or Igalia, the maximum damage is limited and the wider project is never at risk. People don’t have to leave the community if they want to quit their current job, ideally they can just move to another company and continue most of their upstream work.

The same is not true of projects with a monolithic entity at the center. If there’s a conflict in that central monolith it can spiral ever wider if it isn’t resolved, affecting more and more structures and people, and doing drastically more damage.

This is a lesson we’ve unfortunately had to learn the hard way when, out of the blue, Sonny was banned last year. I’m not going to talk about the ban here (it’s for Sonny to talk about if/when he feels like it), but suffice it to say that it would not have happened had we not done the STF project under the Foundation, and many community members, including myself, do not agree with the ban.

What followed was, for some of us, maybe the most stressful 6 months of our lives. Since last summer we’ve had to try and keep the STF project running without its main architect, while also trying to get the ban situation fixed, as well as dealing with a number of other issues caused by the ban. Thousands of volunteer hours were probably burned on this, and the issue is not even resolved yet. Who knows how many more will be burned before it’s over. I’m profoundly sad thinking about the bugs we could have fixed, the patches we could have reviewed, and the features we could have designed in those hours instead.

This is, to me, the most important takeaway and the reason why I no longer believe the Foundation should be the structure we use to organize community development. Even if all the current operational issues are fixed, the risk of something like this happening is too great, the potential damage too severe.

What are the Alternatives?

If using the Foundation is too risky, what other options are there for organizing development collectively?

I’ve talked to people in our community who feel that NGOs are fundamentally a bad structure for development projects, and that people should start more new consultancies instead. I don’t fully buy that argument, but it’s also not without merit in my experience. Regardless though, I think everyone has also seen at one point or another how dysfunctional corporations can be. My feeling is it probably also heavily depends on the people and culture, rather than just the specific legal structure.

I don’t have a perfect solution here, and I’m not sure there is one. Maybe the future is a bunch of new consulting co-ops doing a mix of grants and client work. Maybe it’s new non-profits focused on development. Maybe we need to get good at Patreon. Or maybe we all just have to get a part time job doing something else.

Time will tell how this all shakes out, but the realization I’ve come to is that the current decentralized structure of the project has a lot of advantages. We should preserve this and make use of it, rather than trying to centralize everything on the Foundation.

Arun Raghavan

@arunsr

A Brimful of ASHA

It’s 2025(!), and I thought I’d kick off the year with a post about some work that we’ve been doing behind the scenes for a while. Grab a cup of $beverage_of_choice, and let’s jump in with some context.

History: Hearing aids and Bluetooth

Various estimates put the number of people with some form of hearing loss at 5% of the population. Hearing aids and cochlear implants are commonly used to help deal with this (I’ll use “hearing aid” or “HA” in this post, but the same ideas apply to both). Historically, these have been standalone devices, with some primitive ways to receive audio remotely (hearing loops and telecoils).

As you might expect, the last couple of decades have seen advances that allow consumer devices (such as phones, tablets, laptops, and TVs) to directly connect to hearing aids over Bluetooth. This can provide significant quality of life improvements – playing audio from a device’s speakers means the sound is first distorted by the speakers, and then by the air between the speaker and the hearing aid. Avoiding those two steps can make a big difference in the quality of sound that reaches the user.

An illustration of the audio path through air vs. wireless audio (having higher fidelity)
Comparison of audio paths

Unfortunately, the previous Bluetooth audio standards (BR/EDR and A2DP – used by most Bluetooth audio devices you’ve come across) were not well-suited for these use-cases, especially from a power-consumption perspective. This meant that HA users would either have to rely on devices using proprietary protocols (usually limited to Apple devices), or have a cumbersome additional dongle with its own battery and charging needs.

Recent Past: Bluetooth LE

The more recent Bluetooth LE specification addresses some of the issues with the previous spec (now known as Bluetooth Classic). It provides a low-power base for devices to communicate with each other, and has been widely adopted in consumer devices.

On top of this, we have the LE Audio standard, which provides audio streaming services over Bluetooth LE for consumer audio devices and HAs. The hearing aid industry has been an active participant in its development, and we should see widespread support over time, I expect.

The base Bluetooth LE specification has been around since 2010, but the LE Audio specification has only been public since 2021/2022. We’re still seeing devices with LE Audio support trickle into the market.

In 2018, Google partnered with a hearing aid manufacturer to announce the ASHA (Audio Streaming for Hearing Aids) protocol, presumably as a stop-gap. The protocol uses Bluetooth LE (but not LE Audio) to support low-power audio streaming to hearing aids, and is publicly available. Several devices have shipped with ASHA support in the last ~6 years.

A brief history of Bluetooth LE and audio

Hot Take: Obsolescence is bad UX

As end-users, we understand the push/pull of technological advancement and obsolescence. As responsible citizens of the world, we also understand the environmental impact of this.

The problem is much worse when we are talking about medical devices. Hearing aids are expensive, and are expected to last a long time. It’s not uncommon for people to use the same device for 5-10 years, or even longer.

In addition to the financial cost, there is also a significant emotional cost to changing devices. There is usually a period of adjustment during which one might be working with an audiologist to tune the device to one’s hearing. Neuroplasticity allows the brain to adapt to the device and extract more meaning over time. Changing devices effectively resets the process.

All this is to say that supporting older devices is a worthy goal in itself, but has an additional set of dimensions in the context of accessibility.

HAs and Linux-based devices

Because of all this history, hearing aid manufacturers have traditionally focused on mobile devices (i.e. Android and iOS). This is changing, with Apple supporting its proprietary MFi (made for iPhone/iPad/iPod) protocol on macOS, and Windows adding support for LE Audio on Windows 11.

This does leave the question of Linux-based devices, which is our primary concern – can users of free software platforms also have an accessible user experience?

A lot of work has gone into adding Bluetooth LE support in the Linux kernel and BlueZ, and more still to add LE Audio support. PipeWire’s Bluetooth module now includes support for LE Audio, and there is continuing effort to flesh this out. Linux users with LE Audio-based hearing aids will be able to take advantage of all this.

However, the ASHA specification was only ever supported on Android devices. This is a bit of a shame, as there are likely a significant number of hearing aids out there with ASHA support, which will hopefully still be around for the next 5+ years. This felt like a gap that we could help fill.

Step 1: A Proof-of-Concept

We started out by looking at the ASHA specification, and the state of Bluetooth LE in the Linux kernel. We spotted some things that the Android stack exposes that BlueZ does not, but it seemed like all the pieces should be there.

Friend-of-Asymptotic, Ravi Chandra Padmala spent some time with us to implement a proof-of-concept. This was a pretty intense journey in itself, as we had to identify some good reference hardware (we found an ASHA implementation on the onsemi RSL10), and clean out the pipes between the kernel and userspace (LE connection-oriented channels, which ASHA relies on, weren’t commonly used at that time).

We did eventually get the proof-of-concept done, and this gave us confidence to move to the next step of integrating this into BlueZ – albeit after a hiatus of paid work. We have to keep the lights on, after all!

Step 2: ASHA in BlueZ

The BlueZ audio plugin implements various audio profiles within the BlueZ daemon – this includes A2DP for Bluetooth Classic, as well as BAP for LE Audio.

We decided to add ASHA support within this plugin. This would allow BlueZ to perform privileged operations and then hand off a file descriptor for the connection-oriented channel, so that any userspace application (such as PipeWire) could actually stream audio to the hearing aid.

I implemented an initial version of the ASHA profile in the BlueZ audio plugin last year, and thanks to Luiz Augusto von Dentz’ guidance and reviews, the plugin has landed upstream.

This has been tested with a single hearing aid, and stereo support is pending. In the process, we also found a small community of folks with deep interest in this subject, and you can join us on #asha on the BlueZ Slack.

Step 3: PipeWire support

To get end-to-end audio streaming working with any application, we need to expose the BlueZ ASHA profile as a playback device on the audio server (i.e., PipeWire). This would make the HAs appear as just another audio output, and we could route any or all system audio to it.

My colleague, Sanchayan Maity, has been working on this for the last few weeks. The code is all more or less in place now, and you can track our progress on the PipeWire MR.

Step 4 and beyond: Testing, stereo support, …

Once we have the basic PipeWire support in place, we will implement stereo support (the spec does not support more than 2 channels), and then we’ll have a bunch of testing and feedback to work with. The goal is to make this a solid and reliable solution for folks on Linux-based devices with hearing aids.

Once that is done, there are a number of UI-related tasks that would be nice to have in order to provide a good user experience. This includes things like combining the left and right HAs to present them as a single device, and access to any tuning parameters.

Getting it done

This project has been on my mind since the ASHA specification was announced, and it has been a long road to get here. We are in the enviable position of being paid to work on challenging problems, and we often contribute our work upstream. However, there are many such projects that would be valuable to society, but don’t necessarily have a clear source of funding.

In this case, we found ourselves in an interesting position – we have the expertise and context around the Linux audio stack to get this done. Our business model allows us the luxury of taking bites out of problems like this, and we’re happy to be able to do so.

However, it helps immensely when we do have funding to take on this work end-to-end – we can focus on the task entirely and get it done faster.

Onward…

I am delighted to announce that we were able to find the financial support to complete the PipeWire work! Once we land basic mono audio support in the MR above, we’ll move on to implementing stereo support in the BlueZ plugin and the PipeWire module. We’ll also be testing with some real-world devices, and we’ll be leaning on our community for more feedback.

This is an exciting development, and I’ll be writing more about it in a follow-up post in a few days. Stay tuned!

Luis Villa

@luis

Reading in 2024—tools

I was going to do a single post on my reading in 2024, but realized it probably makes more sense as a two-parter: the things I used to read (this post), and the things I actually read (at least one post on books, maybe a second on news and feeds).

Feeds

I still read a lot of feeds (and newsletters). Mid-way through the year, I switched to Reader by Readwise for RSS and newsletters, after a decade or so with Feedbin. It’s everything I have always wanted from an RSS tool—high polish, multi-platform, and separates inbound stuff to skim from stuff to be gone into at more depth. Expensive, but totally worth it if you’re an addict of my sort.

Surprise bonuses: has massively reduced my pile of open tabs, and is a nice ebook reader—have read several DRM-free books from Verso and Standard Ebooks in it.

One minor gripe (that I’ve also had with just about every other feed reader/read-later tool): I wish it were easier to get content out with tools. Currently I use Buffer to get things out of Reader to social, but I’d love to do that in a more sophisticated and automated way (e.g. by pulling everything in a tag saved in the previous week, and massaging it into a draft blog post).

E-ink reading

I wrote early in the year about my Boox Page, and then promptly put a knee through the screen. I had liked it, but ultimately didn’t love it. In particular, the level of hackery in their mods to Android really bothered me—the thing did not Just Work, which was the last thing I wanted in a distraction-free reading device.

So I got a Daylight DC1. What can I say, I’m a sucker for new e-ink and e-ink like devices. I love it but it has a lot of warning signs so I’m not sure I can recommend it to anyone yet.

Parts I love:

  • Screen is delightfully warm. Doesn’t quite try to be paper (can’t) but is very easy on the eye, both in broad daylight and at night. (Their marketing material, for the screen, is really quite accurate.)
  • Screen texture is great under the finger when reading; feels less like a screen and more like paper. (Can’t speak to the pen; really am using this as a consumption device, with my iPad more for creation. Might change that in the new year, not sure yet.)
  • Battery life is great.
  • Android is Just Android (with a very tasteful launcher as the only significant modification), so you really can run things you want (especially if their output is mostly text). I’ve got mine loaded with pretty much just readers: Kindle, Libby, Kobo, Readwise Reader; all work great.
  • I find myself weirdly in love with the almost pillow-like “case”. It’s silly and inefficient and maybe that’s OK?

Parts I worry about:

  • Physical build quality is a little spotty—most notably the gap between screen and case is very uneven. Hasn’t affected my day to day use, but makes me wonder about how long it’ll last.
  • The OS is… shifty? It reported itself to the Android app store as a Pixel 5(?), and the launcher is unpaid-for freeware (I got a nice little “please give us $5!” note from it, which screams corner-cutting). Again, it works fine, it’s just a red flag in terms of attention to detail and corner-cutting.
  • I found out after I bought it that the CEO is a not-so-reformed cryptobro, an organizational red flag.
  • They’re talking a lot about AI for their next OS “release”. That implies a variety of possible dooms: either a small team gets overwhelmed by the hard work of AI, or a large team has lots of VC demands. Neither is good.

Audio

Switched from Audible (I know) to Apple Books (I know, again) because it works so much more reliably on my Apple Watch, and running with the Watch is where I consume most of my audiobooks. Banged through a lot of history audiobooks this year as a result.

Paper

A small child wearing a hat that obscures their face is standing, reading a book. The shelf behind the child suggests they are in a library or bookstore.

I still love paper too. 2025 goal: build a better bookshelf. We’ll see how that goes…

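Matthew Garrett

@mjg59
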
The GPU, not the TPM, is the root of hardware DRM

As part of their "Defective by Design" anti-DRM campaign, the FSF recently made the following claim:
Today, most of the major streaming media platforms utilize the TPM to decrypt media streams, forcefully placing the decryption out of the user's control (from here).
This is part of an overall argument that Microsoft's insistence that only hardware with a TPM can run Windows 11 is with the goal of aiding streaming companies in their attempt to ensure media can only be played in tightly constrained environments.

I'm going to be honest here and say that I don't know what Microsoft's actual motivation for requiring a TPM in Windows 11 is. I've been talking about TPM stuff for a long time. My job involves writing a lot of TPM code. I think having a TPM enables a number of worthwhile security features. Given the choice, I'd certainly pick a computer with a TPM. But in terms of whether it's of sufficient value to lock out Windows 11 on hardware with no TPM that would otherwise be able to run it? I'm not sure that's a worthwhile tradeoff.

What I can say is that the FSF's claim is just 100% wrong, and since this seems to be the sole basis of their overall claim about Microsoft's strategy here, the argument is pretty significantly undermined. I'm not aware of any streaming media platforms making use of TPMs in any way whatsoever. There is hardware DRM that the media companies use to restrict users, but it's not in the TPM - it's in the GPU.

Let's back up for a moment. There's multiple different DRM implementations, but the big three are Widevine (owned by Google, used on Android, Chromebooks, and some other embedded devices), Fairplay (Apple implementation, used for Mac and iOS), and Playready (Microsoft's implementation, used in Windows and some other hardware streaming devices and TVs). These generally implement several levels of functionality, depending on the capabilities of the device they're running on - this will range from all the DRM functionality being implemented in software up to the hardware path that will be discussed shortly. Streaming providers can choose what level of functionality and quality to provide based on the level implemented on the client device, and it's common for 4K and HDR content to be tied to hardware DRM. In any scenario, they stream encrypted content to the client and the DRM stack decrypts it before the compressed data can be decoded and played.

The "problem" with software DRM implementations is that the decrypted material is going to exist somewhere the OS can get at it at some point, making it possible for users to simply grab the decrypted stream, somewhat defeating the entire point. Vendors try to make this difficult by obfuscating their code as much as possible (and in some cases putting some of it in-kernel), but pretty much all software DRM is at least somewhat broken and copies of any new streaming media end up being available via Bittorrent pretty quickly after release. This is why higher quality media tends to be restricted to clients that implement hardware-based DRM.

The implementation of hardware-based DRM varies. On devices in the ARM world this is usually handled by performing the cryptography in a Trusted Execution Environment, or TEE. A TEE is an area where code can be executed without the OS having any insight into it at all, with ARM's TrustZone being an example of this. By putting the DRM code in TrustZone, the cryptography can be performed in RAM that the OS has no access to, making the scraping described earlier impossible. x86 has no well-specified TEE (Intel's SGX is an example, but is no longer implemented in consumer parts), so instead this tends to be handed off to the GPU. The exact details of this implementation are somewhat opaque - of the previously mentioned DRM implementations, only Playready does hardware DRM on x86, and I haven't found any public documentation of what drivers need to expose for this to work.

In any case, as part of the DRM handshake between the client and the streaming platform, encryption keys are negotiated with the key material being stored in the GPU or the TEE, inaccessible from the OS. Once decrypted, the material is decoded (again either on the GPU or in the TEE - even in implementations that use the TEE for the cryptography, the actual media decoding may happen on the GPU) and displayed. One key point is that the decoded video material is still stored in RAM that the OS has no access to, and the GPU composites it onto the outbound video stream (which is why if you take a screenshot of a browser playing a stream using hardware-based DRM you'll just see a black window - as far as the OS can see, there is only a black window there).

Now, TPMs are sometimes referred to as a TEE, and in a way they are. However, they're fixed function - you can't run arbitrary code on the TPM, you only have whatever functionality it provides. But TPMs do have the ability to decrypt data using keys that are tied to the TPM, so isn't this sufficient? Well, no. First, the TPM can't communicate with the GPU. The OS could push encrypted material to it, and it would get plaintext material back. But the entire point of this exercise was to avoid the decrypted version of the stream from ever being visible to the OS, so this would be pointless. And rather more fundamentally, TPMs are slow. I don't think there's a TPM on the market that could decrypt a 1080p stream in realtime, let alone a 4K one.

The FSF's focus on TPMs here is not only technically wrong, it's indicative of a failure to understand what's actually happening in the industry. While the FSF has been focusing on TPMs, GPU vendors have quietly deployed all of this technology without the FSF complaining at all. Microsoft has enthusiastically participated in making hardware DRM on Windows possible, and user freedoms have suffered as a result, but Playready hardware-based DRM works just fine on hardware that doesn't have a TPM and will continue to do so.


Sophie Herold

@sophieherold

This was 2024

In non-chronological order

    • Earned money for the first time in many, many years.
    • Wrote C bindings and GObject introspection annotations for a Rust library for the first time.
    • Wrote 40 weekly updates on Patreon/Ko-Fi.
    • Got formally diagnosed with Autism.
    • Implemented some basic image editing in Loupe.
    • Bought new woodworking tools.
    • Got bold and worked on a few lines of security critical C-code.
    • Confirmed with my doctors that the suspected diagnosis changed from fibromyalgia to ME/CFS.
    • Dove into BuildStream to add more complete Rust support.
    • Released a collection of Rust crates that allow extraction, recomposition, and editing of image data including Exif or XMP for several image formats.
    • Created a website that lists all GNOME components like libraries that are not apps.
    • Called a Taxi for the first time in my life.
    • Wrote Rust bindings for the C bindings of a Rust crate.
    • Stopped occupational therapy and started another psychotherapy.
    • Got interviewed by c’t Open Source Spotlight (German).
    • Started ordering groceries online to have more energy for other things.
    • Was proud (and still am) to be part of a community with such a strong pride month statement.
    • Did a bunch of reviews for potential new GNOME Core apps.
    • Expanded the crowdfunding of my work to Patreon, Ko-Fi, GitHub, and PayPal.
    • Built a coat rack.

A huge thanks to everyone who supported my work!

Chisels in a box, carpenter’s square, and a hand plane lying on a table

Aryan Kaushik

@lucifer_rekt

UbuCon Asia 2024

Hi everyone!

It’s been about two weeks since UbuCon Asia (Ubuntu Conference Asia) concluded (fun fact: 13 weeks since I wrote the initial draft, so 15 now), and I’m really starting to miss it.

This blog is being posted after my GNOME Asia post as it was really hard to pack all the emotions and memories in just one blog, but here we go.

It all started as a wild idea to host GNOME Asia a year or two back. Gradually, it transformed into a joint event between UbuCon and GNOME Asia and eventually into UbuCon Asia 2024.

Why Jaipur?

This was one of the most frequently asked questions throughout the conference.

Interestingly, the local team (us) wasn’t based in Jaipur. We were spread across India, with me in the Delhi area, some members in Maharashtra, and one in Karnataka. Managing Jaipur’s affairs, as you can imagine, wasn’t exactly a breeze.

So why did we choose it? When the initial idea came up, a friend and I ruled out cities that were overcrowded, too hot (we weren’t entirely right there, but rain saved us lol), or lacking in the cultural heritage we wanted to showcase. We also wanted to pick a city with a budding tech scene, rather than an established one.

After much deliberation, we ruled out Bengaluru, Delhi, Mumbai, and Hyderabad. Jaipur, being relatively closer to me and ticking all the right boxes, became the chosen one!

In the end, we found the best college we could have in Jaipur, with a phenomenal volunteer team.

Why did we organize it?

Initially, the plan was to host GNOME Asia because GNOME is a community I deeply love. Having attended many GNOME events, I always wondered, "What if we host one in India?" With the largest population, immense Git activity, and a mature tech audience, India seemed perfect. But the sheer effort required always held me back - I’m just a developer who loves to code more than to manage people :)

The UbuCon planning began after GUADEC 2023, where I met Mauro at the Canonical booth. This led to rebooting Ubuntu India, with hosting UbuCon Asia as our first official activity.

I hesitated when asked to host UbuCon Asia but couldn’t resist the challenge. Bhavani (my co-lead) also proposed hosting in Bangalore, so we combined our bid with my proposal for Jaipur. To our delight, we won! Although discussions for a joint venture with the GNOME team didn’t pan out, we forged ahead with UbuCon Asia.

The Challenges We Faced

Although my role was initially to oversee and allocate tasks, I found myself involved in everything, which was hectic. Thankfully, the whole team worked as one on event days and without them, I wouldn’t have been able to handle the last two days of the event.

Managing Jaipur’s affairs remotely was tough, but the college we partnered with was incredibly supportive. Their students volunteered tirelessly.

Unexpectedly, our stage hosts backed out just a day before the event due to a placement drive at their college, causing a session delay on the first day. Visa letter delays (caused by the Bangladesh crisis) and funding challenges due to Indian remittance laws were additional hurdles.

How It All Ended

Despite everything, we pulled it off, and dare I say, even better than many seasoned organizers might have! Seeing the community gather in India for UbuCon Asia was amazing.

We had Ubuntu and CDAC booths, delicious food (thanks to FOSS United and OnlyOffice), and lots of goodies for attendees. A proper Indian lunch with dessert, coffee breaks with Rajasthani and Indian snacks, a conference dinner, and subsidized day trips - all funded - were a relief.

Considering that just weeks ago we were struggling to break even and were partially at a loss, ending with a surplus instead was truly a relief.

Fun fact: Leftover funds helped us host the release party at GNOME Asia 2024 and will support UbuCon India 2025 and UbuCon Asia 2025.

In my opening note, I joked, “I’m excited for the conference to end,” but now I realize how wrong I was.

I enjoyed every moment of it. I wasn’t able to attend more than one talk because when you are the lead, you just can’t sit, you have to work the hardest and keep everything together, but that work also gave me lots of enjoyment and satisfaction.

My favorite feedback? “We knew it was your first time organizing at this scale because we saw how tense and hardworking you were, ensuring everything ran smoothly, which it did.”

I regret not being able to meet many people I wanted to in more depth. Like Debian India folks, Aaditya Soni from AWS Rajasthan, Vishal Arya from FOSS United, Rudra the reviver of Ubuntu Unity and more.

We truly had astonishing people attend and I just wish to re-witness it all from an attendee's perspective now :P

The aftermovie can be viewed at - https://youtu.be/Ul8DQh3yroo?si=U2F3wi6mKBIVPJ6g :D

Future Plans?

Well... UCA’24 was draining and I don’t want to think of another event for a while haha (This didn't last long considering the release party we hosted xD).

We are currently working on creating smaller regional Ubuntu communities in India, and hopefully organise UbuCon India.

So if you are a sponsor, please reach out, we can really use your help in the initiative :)

Also, if you want to be a part of the Ubuntu India LoCo community, let me know and we can have a conversation about it ;)

A special thanks to Canonical, CDAC, FOSS United, Only Office and Ubuntu Korea for their sponsorship :)

Jussi Pakkanen

@jpakkane

CapyPDF 0.14 is out

I have just released version 0.14 of CapyPDF. This release has a ton of new functionality. So much, in fact, that I don't even remember it all. The reason for this is that it is actually starting to see real world usage, specifically as the new color managed PDF exporter for Inkscape. This has required a lot of refactoring work in the color code of Inkscape proper. This work has been done mostly by Doctormo, who has several videos on the issue.

The development cycle has consisted mostly of him reporting missing features like "specifying page labels is not supported", "patterns can be used for fill, but not for stroke", and "loading CMYK TIFF images with embedded color profiles does not work", and me then implementing said features, or finding out how setjmp/longjmp actually works and debugging corrupted stack traces when it doesn't.

Major change coming in the next version

The API for CapyPDF is not stable, but in the next release it will be extra unstable. The reason is C strings. Null terminated UTF-8 strings are a natural text format for PDF, as strings in PDF must not contain the zero glyph. Thus there are many functions like this in the public C API:

void do_something(const char *text);

This works and is simple, but there is a common use case it can't handle. All strings must be zero terminated, so you can't point to the middle of an existing buffer, because it is not guaranteed to be zero terminated. Thus you always have to make a copy of the text you want to pass. In other words, you can't use C++'s string_view (or any equivalent type) as a source of text data. The public API should support this use case.

Is this premature optimization? Maybe. But it is also a usability issue, as string views seem to be fairly common nowadays. There does not seem to be a perfect solution, but the best one I managed to crib seems to be this:

void do_something(const char *text, int32_t len_or_negative);

If the last argument is positive, use it as the length of the buffer. If it is negative, treat the char data as a zero-terminated plain string. This requires changing all functions that take strings and makes the API more unpleasant to use.
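As a sketch of how that convention reads from the caller's side (with a stub standing in for the real library, and do_something being the placeholder name from above rather than an actual CapyPDF entry point):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Stub standing in for the proposed CapyPDF call; "do_something" is the
 * placeholder name from the post, not a real CapyPDF function. */
static void do_something(const char *text, int32_t len_or_negative)
{
    size_t len = len_or_negative < 0 ? strlen(text) : (size_t)len_or_negative;
    printf("got %zu bytes: %.*s\n", len, (int)len, text);
}

int main(void)
{
    const char *sentence = "Hello, PDF world";

    /* Zero-terminated string: pass a negative length, the callee calls strlen(). */
    do_something(sentence, -1);

    /* A slice of a larger buffer (what a C++ string_view refers to):
     * a pointer into the middle plus an explicit length, no copy needed. */
    do_something(sentence + 7, 3); /* the three bytes "PDF" */

    return 0;
}

Passing a negative length keeps the simple case simple, while the explicit length covers the string_view case without forcing a copy.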

If someone has an idea for a better API, do post a comment here.

Christian Hergert

@hergertme

December Projects

Not all of my projects this December are code related. In fact a lot of them have been house maintenance things, joy of home ownership and all.

This week was spent building my new office and music space. I wanted a way to have my amplifiers and guitars more accessible while also creating a sort of “dark academia” feeling for working.

The first step was to get the guitars mounted on the walls. I was looking for something blending artistic showpiece and functional use.

After that I quickly framed them in. Just quarter round with the hard edge inwards, 45° miter, some caulking, easy peasy.

My last office had Hale Navy as the color, but the sheen was so high that it made it difficult to actually see the color. This time I went flat and color-drenched the space (so ceilings, trim, etc. are all in a matching tone).

The somewhat-final result is here. I still want to have a lighting story for these that doesn’t involve a battery, so some electrical fish taping is likely in my future.

I also converted the wide closet into a workstation area with the studio monitors for recording. But that is still only partially finished, as I need to plane all the slats for the slat wall, frame the built-in, and attach the countertop.

Adrien Plazas

@Kekun

A systemd-sysupdate Plugin for GNOME Software

In late June 2024 I got asked to take over the work started by Jerry Wu creating a systemd-sysupdate plugin for Software. The goal was to allow Software to update sysupdate targets, such as base system images or system extension images, all while respecting the user’s preferences such as whether to download updates on metered connections. To do so, the plugin communicates with the systemd-sysupdated daemon via its org.freedesktop.sysupdate1 D-Bus interface.

I didn’t know many of the things required to complete this project, and it’s been a lot to chew in one bite for me, hence how long it took to complete. I’m happy it’s finally done, but I’m certain it’s riddled with bugs despite my best efforts, and I’m not happy it’s a single gigantic C file. It needs to be split into modules, but that’s an effort for another time, as getting it to work at all was a challenge already. I’m happy I learned a lot along the way. Thanks a lot to Codethink, to the GNOME Foundation, and to the Sovereign Tech Agency for sponsoring this work. Thanks a lot to Abderrahim Kitouni, Adrian Vovk, Philip Withnall, and all the other people who helped me complete this project. 🙂

This was one of the last pieces of software needed to complete the migration of GNOME OS from OSTree to sysupdate. While OSTree is great for operating systems, it has a significant drawback: it can’t support SecureBoot because it can’t support Unified Kernel Images, and SecureBoot requires a signed Unified Kernel Image for its chain of trust. While its A/B partitioning system makes sysupdate more storage hungry and less flexible than OSTree, it allows it to support Unified Kernel Images, to sign them, and to be part of SecureBoot’s chain of trust, ensuring the system hasn’t been maliciously tampered with. This will make GNOME OS more secure and its boot trusted. Read more about trusted boot from Lennart Poettering.

Erratum: Timothée Ravier stated that OSTree can support trusted boot and measured boot; see this demonstration.

You should be able to test this plugin in GNOME OS soon. Please report any issues with the systemd-sysupdate tag, and the GNOME OS one if relevant. We want to be very sure that this works, as it’s vital that users know whether or not their system is up to date, especially if there are security-related fixes involved.

Marcus Lundblad

@mlundblad

Christmas / Winter / End-of-the-year Holidays Maps 2024 Yearly Wrap-up



In line with tradition, it's time for the yearly end-of-the-year Maps blog post!

There have been some quite nice happenings this year when it comes to Maps (and the underlying libshumate, our map widget library).


Vector Map Enabled by Default

The biggest change by far that happened in 2024 is that we finally made the switch to client-side rendered vector tiles, with all the benefits this brings us:

  • A “GNOME-themed” map style
  • Properly support dark mode
  • Localized labels for places (countries, towns, and so on…)
  • POI icons can now be clicked directly on the map, bringing up information about the place


 

More Use of Modern libadwaita Widgets

Work has continued on replacing the old, deprecated GtkDialog instances with libadwaita's new dialogs, which also have the benefit of being adaptive on small screen sizes. Right now the only remaining instance of the old dialog type is the sharing “Send to” dialog.

Since the 47 release, the OSM POI editing dialog has received a refreshed look-and-feel based on Adwaita widgets, designed by Brage Fuglseth, and initial draft implementation by Felipe Kinoshita.


More Visual Improvements

Also since the September release, some more UI refinements have been made.

The action of starring a place now has an accompanying animation to help give a visual clue of the change.


The headerbar icon for showing the menu listing stored favorites now uses the same icon as GNOME Web (Epiphany), the “books on a library shelf” icon.

Spinner widgets (for showing progress) have been updated to the new Adwaita variant with a refreshed design.

And the toggle buttons for selecting routing mode (walk, bike, car, transit) now use the new Adwaita ToggleGroup icon buttons.


Public Transit Routing Using Transitous

I have mentioned Transitous previously, and since 47.0 Maps uses it to provide public transit directions for regions that weren’t already covered by our existing plugins.

During the last few months work has progressed on an updated version of MOTIS (the backend used by Transitous) that will give better performance, among other improvements.

Maps will also soon transition to the new API when Transitous switches over to it.

And speaking of Transitous and MOTIS: at FOSDEM 2025, Felix Gündling, Jonah Brüchert, and I will give a presentation on MOTIS, Transitous, and their integration into Maps.

https://fosdem.org/2025/schedule/event/fosdem-2025-4105-gnome-maps-meets-transitous-meets-motis/

 

And until next time, happy holidays!

Mobile testing in libadwaita

Screenshot of Highscore, an emulator frontend running Doom 64 with touch controls, inside libadwaita adaptive preview, emulating a small phone (360x720), in portrait, with mobile shell (26px top bar, 18px bottom bar) and no window controls

Lately I’ve been working on touch controls overlays in Highscore1, and quickly found out that previewing them across different screen sizes is rather tedious.

Currently we have two ways of testing UIs on a different screen size – resize the window, or run the app on that device. Generally when developing, I do the former since it’s faster, but what dimensions do I resize to?

HIG lists the 360×294px dimensions, but that’s the smallest total size – we can’t really figure out the actual sizes in portrait and landscape with this. Sure, we can look up the phone sizes, check their scale factor, and measure the precise panel sizes from screenshots, but that takes time and that’s a lot of values to figure out. I did make such a list, and that’s what I used for testing here, but, well, that’s a lot of values. I also discovered the 294px height listed in HIG is slightly wrong (presumably it was based on phosh mockups, or a really old version) and with older phosh versions the app gets 288px of height, while with newer versions with a slimmer bottom bar it gets 313px.

Now that we know the dimensions, the testing process consists of repeatedly resizing the window to a few specific configurations. I have 31 different overlays, each with 7 different layouts for different screen sizes. Resizing the window for each of them gets old fast, and I really wished I had a tool to make that easier. So, I made one.

View switcher dialog in libadwaita demo, running in adaptive preview, emulating large phone (360x760),  in landscape, with mobile shell and window controls turned off

This is not a separate app, instead it’s a libadwaita feature called adaptive preview, exposed via GTK inspector. When enabled, it shrinks the window contents into a small box and exposes UI for controlling its size: specifically, picking the device and shell from a list. Basically, same as what web browsers have in their inspectors – responsive design mode in Firefox etc.

Adaptive Preview row on the libadwaita page in GTK inspector

It also allows toggling whether window controls are visible – normally they are disabled on mobile, but mobile gnome-shell currently keeps them enabled as not everything is using AdwDialog yet.

It can also be opened automatically by setting the ADW_DEBUG_ADAPTIVE_PREVIEW=1 environment variable. This may be useful if e.g. Builder wants to include it in its run menu, similar to opening GTK inspector.
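In practice that can look something like this (a rough sketch; org.example.App stands in for your own app ID):

# Launch a locally built app with adaptive preview already open
ADW_DEBUG_ADAPTIVE_PREVIEW=1 ./your-app

# For a Flatpak build, pass the variable into the sandbox explicitly
flatpak run --env=ADW_DEBUG_ADAPTIVE_PREVIEW=1 org.example.App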

If the selected size is too large and doesn’t fit into the window, it scrolls instead.

What it doesn’t do

It doesn’t simulate fullscreen. Fullscreen is complicated because in addition to hiding shell panels almost every app that supports it changes the UI state – this is not something we can automatically support.

It also doesn’t simulate different scale factors – it’s basically impossible to do with how it’s implemented.

Similarly, while it does allow hiding the window controls, if the app checks them manually via GtkSettings:gtk-decoration-layout, it won’t pick that up. It can only affect AdwHeaderBar, similar to how it hides the close button on the sidebars.

Future plans

It would be good to display the rounded corners and cutouts on top of the preview. For example, the phone I use for testing has both rounded corners and a notch, and we don’t have system-wide support for insets or safe area just yet. I know the notch dimensions on my specific phone (approximately 28 logical pixels in height), but obviously it will vary wildly depending on the device. The display panel data from gmobile may be a good fit here.

We may also want to optionally scale the viewport to fit into the window instead of scrolling it – especially for larger sizes. If we have scaling, it may also be good to have a way to make it match the device’s DPI.

Finally, having more device presets in there would be good – currently I only included the devices I was testing the overlays for.


Adaptive preview has already landed in the main branch and is available to apps using the nightly SDK, as well as in GNOME OS.

So, hopefully testing layouts on mobile devices will be easier now. It’s too late for me, but maybe the next person testing their app will benefit from it.


1. gnome-games successor, which really deserves a blog post of its own, but I want to actually have something I can release first, so I will formally announce it then. For now, I’m frequently posting development progress on the Fediverse

Tobias Bernard

@tbernard

Introducing Project Aardvark

Two weeks ago we got together in Berlin for another (Un)boiling The Ocean event (slight name change because Mastodon does not deal well with metaphors). This time it was laser-focused on local-first sync, i.e. software that can move seamlessly between real-time collaboration when there’s a network connection, and working offline when there is no connection.

The New p2panda

This event was the next step in our ongoing collaboration with the p2panda project. p2panda provides building blocks for local-first software and is spearheaded by Andreas Dzialocha and Sam Andreae. Since our initial discussions in late 2023 they made a number of structural changes to p2panda, making it more modular and easier to use for cases like ours, i.e. native GNOME apps.

Sam and Andreas introducing the new p2panda release.

This new version of p2panda shipped a few weeks ago, in the form of a dozen separate Rust crates, along with a new website and new documentation.

On Saturday night we had a little Xmas-themed release party for the new p2panda version, with food, Glühwein, and two talks from Eileen Wagner (on peer-to-peer UX patterns) and Sarah Grant (on radio communication).

The Hackfest

Earlier on Saturday and then all day Sunday we had a very focused and productive hackfest to finally put all the pieces together and build our long-planned prototype codenamed “Aardvark”, a local-first collaborative text editor using the p2panda stack.

Simplified diagram of the overall architecture, with the GTK frontend, Automerge for CRDTs, and p2panda for networking.

Our goal was to put together a simple Rust GTK starter project with a TextView, read/write the TextView’s content in and out of an Automerge CRDT, and sync it with other local peers via p2panda running in a separate thread. Long story short: we pulled it off! By the end of the hackfest we had basic collaborative editing working on the local network (modulo some bugs across the stack). It’s of course still a long road from there to an actual releasable app, but it was a great start.

The reason why we went with a text editor is not because it’s the easiest thing to do — freeform text is actually one of the more difficult types of CRDT. However, we felt that in order to get momentum around this project it needs to be something that we ourselves will actually use every day. Hence, the concrete use case we wanted to target was replacing Hedgedoc for taking notes at meetings (particularly useful when having meetings at offline, where there’s no internet).

The current state of Aardvark: Half of the UI isn’t hooked up to anything yet, and it only sort of works on the local network :)

While the Berlin gang was hacking on the text editor, we also had Ada, a remote participant, looking into what it would take to do collaborative sketching in Rnote. This work is still in the investigation stage, but we’re hopeful that it will get somewhere as a second experiment with this stack.

Thanks to everyone who attended the hackfest, in particular Andreas for doing most of the organizing, and Sam Andreae and Sebastian Wick, who came to Berlin specifically for the event! Thanks also to Weise7 for hosting us, and offline for hosting the release party.

The Long Game

Since it’s early days for all of this stuff, we feel that it’s currently best to experiment with this technology in the context of a specific vertically integrated app. This makes it easy to iterate across the entire stack while learning how best to fit together various pieces.

However, we’re hoping that eventually we’ll settle on a standard architecture that will work for many types of apps, at which point parts of this could be split out into a system service of some kind. We could then perhaps also have standard APIs for signaling servers (sometimes needed for peers to find each other) and “dumb pipe” sync/caching servers that only move around encrypted packets (needed in case none of the other peers are online). With this there could be many different interchangeable sync server providers, making app development fully independent of any specific provider.

Martin Kleppmann’s talk at Local-First Conf 2024 outlines his vision for an ecosystem of local-first apps which all use the same standard sync protocol and can thus share sync services, or sync peer-to-peer.

This is all still pretty far out, but we imagine a world where as an app developer the only thing you need to do to build real-time collaboration is to integrate a CRDT for your data, and use the standard system API for the sync service to find peers and send/receive data.

With this in place it should be (almost) as easy to build apps with seamless local-first collaboration as it is to build apps using only the local file system.

Next Steps

It’s still early days for Aardvark, but so far everyone’s very excited about it and development has been going strong since the hackfest. We’re hoping to keep this momentum going into next year, and build the app into a more full-fledged Hedgedoc replacement as part of p2panda’s NGI project by next summer.

That said, we see the main value of this project not in the app itself, but rather the possibility for our community to experiment with local-first patterns, in order to create capacity to do this in more apps across our ecosystem. As part of that effort we’re also interested in working with other app developers on integration in their apps, making bindings for other languages, and working on shared UI patterns for common local-first user flows such as adding peers, showing network status, etc.

If you’d like to get involved, e.g. by contributing to Aardvark, or by trying local-first sync in your own app using this stack, feel free to reach out on Matrix (aardvark:gnome.org) or on the Aardvark repo on GitHub.

Happy hacking!

A new issue policy for libinput - closing and reopening issues for fun and profit

This is a heads up that if you file an issue in the libinput issue tracker, it's very likely this issue will be closed. And this post explains why that's a good thing, why it doesn't mean what you want, and most importantly why you shouldn't get angry about it.

Unfixed issues have, roughly, two states: they're either waiting for someone who can triage and ideally fix it (let's call those someones "maintainers") or they're waiting on the reporter to provide some more info or test something. Let's call the former state "actionable" and the second state "needinfo". The first state is typically not explicitly communicated but the latter can be via different means, most commonly via a "needinfo" label. Labels are of course great because you can be explicit about what is needed and with our bugbot you can automate much of this.

Alas, using labels has one disadvantage: GitLab does not allow the typical bug reporter to set or remove labels - you need to have at least the Planner role in the project (or group) and, well, surprisingly reporting an issue doesn't mean you get immediately added to the project. So once a "needinfo" label is set, only a maintainer can remove it again. And until that happens you have an open bug that has needinfo set and looks like it's still needing info - even after the reporter has provided it. Not a good look, that is.

So how about we use something other than labels, so the reporter can communicate that the bug has changed to actionable? Well, as it turns out there is exactly one thing a reporter can do on their own bugs other than post comments: close it and re-open it. That's it [1]. So given this vast array of options (one button!), we shall use them (click it!).

So for the foreseeable future libinput will follow the following pattern:

  • Reporter files an issue
  • Maintainer looks at it, posts a comment requesting some information, closes the bug
  • Reporter attaches information, re-opens bug
  • Maintainer looks at it and either: files a PR to fix the issue or closes the bug with the wontfix/notourbug/cantfix label
Obviously the close/reopen stage may happen a few times. For the final closing where the issue isn't fixed the labels actually work well: they preserve for posterity why the bug was closed and in this case they do not need to be changed by the reporter anyway. But until that final closing the result of this approach is that an open bug is a bug that is actionable for a maintainer.

This process should work (in libinput at least), all it requires is for reporters to not get grumpy about issues being closed. And that's where this blog post (and the comments bugbot will add when closing) come in. So here's hoping. And to stave off the first question: yes, I too wish there was a better (and equally simple) way to go about this.

[1] we shall ignore magic comments that are parsed by language-understanding bots because that future isn't yet the present

Lennart Poettering

@mezcalero

Announcing systemd v257

Last week we released systemd v257 into the wild.

In the weeks leading up to this release (and the week after) I have posted a series of serieses of posts to Mastodon about key new features in this release, under the #systemd257 hash tag. In case you aren't using Mastodon, but would like to read up, here's a list of all 37 posts:

I intend to do a similar series of serieses of posts for the next systemd release (v258), hence if you haven't left tech Twitter for Mastodon yet, now is the opportunity.

Outreachy internship for librsvg, December 2024

I am delighted to announce that I am mentoring Adetoye Anointing for the December 2024 round of Outreachy. Anointing will be working on librsvg, on implementing the SVG2 text layout algorithm. This is his first blog post about the internship.

There is a lot of work to do! Text layout is a complex topic, so rather than just saying, "go read the spec and write the code", Anointing and I have decided to have a little structure to our interactions:

  • We are having two video calls a week.

  • During the calls, I'm sharing my screen to walk him through the code.

  • I'm using my friend Abrahm's Pizarra and a Wacom tablet to have a "digital chalkboard" where I can quickly illustrate explanations while Anointing and I chat:

Screenshot of Pizarra, an electronic blackboard

  • Conveniently, Pizarra also produces SVG files from whatever you doodle in it, so it's easier to include the drawings in other documents.

  • We are using a shared document in pad.gnome.org as a development journal. Here I can write long explanations, leave homework, link to stuff, etc. Anointing can put in his own annotations, questions, or anything else. I'm hoping that this works better than scrolling through a Matrix chat channel.

I have big hopes for this project. Please welcome Anointing if you see him around the Rust ♥️ GNOME channel!

2024 GNOME Infrastructure Annual Review

Table of Contents

1. Introduction

Time is passing by very quickly and another year will go by as we approach the end of 2024. This year has been fundamental in shaping the present and the future of GNOME’s Infrastructure, with its major highlight being a completely revamped platform and a migration of all GNOME services over to AWS. In this post I’ll try to highlight what the major achievements have been throughout the past 12 months.

2. Achievements

Below is a list of individual tasks and projects we were able to complete in 2024. This section will be particularly long, but I want to stress the importance of each of these items and the effort we put in to make sure they were delivered in a timely manner.

2.1. Major achievements

  1. All the applications (except for ego, which we expect to handle as soon as next week or in January) were migrated to our new AWS platform (see GNOME Infrastructure migration to AWS)
  2. During each of the apps migrations we made sure to:
    1. Migrate to sso.gnome.org and make 2FA mandatory
    2. Make sure database connections are handled via connection poolers
    3. Double check the container images in use were up-to-date and GitLab CI/CD pipeline schedules were turned on for weekly rebuilds (security updates)
    4. For GitLab, we made sure repositories were migrated to an EBS volume to increase IO throughput and bandwidth
  3. Migrated our backup mechanism away from rdiff-backup to the AWS Backup service (which handles both our AWS EFS and EBS snapshots)
  4. Retired our NSD install and migrated our authoritative name servers to CloudNS (it comes with multiple redundant authoritative servers, DDoS protection, and automated DNSSEC key rotation and management)
  5. We moved away from Ceph and the need to maintain our own storage solution and started leveraging AWS EFS and EBS
  6. We deprecated Splunk and built a solution around promtail and Loki in order to handle our logging requirements
  7. We deprecated Prometheus blackbox and started leveraging CloudNS monitoring service which we interact with using an API and a set of CI/CD jobs we host in GitHub
  8. We archived GNOME’s wiki and turned it into a static HTML copy
  9. We replaced ftpadmin with the GNOME Release Services, thanks speknik! More information about the steps GNOME maintainers should now follow when doing a module release is available here. The service uses JWT tokens to verify and authorize specific CI/CD jobs and only allows new releases when the process is initiated by a project CI living within the GNOME GitLab namespace and a protected tag. With master.gnome.org and ftpadmin having been in production for literally ages, we wanted to find a better mechanism to release GNOME software and avoid a single maintainer SSH key leak allowing a possible attacker to tamper with tarballs and potentially compromise millions of computers running GNOME around the globe. With this change we don’t leverage SSH anymore and, most importantly, we don’t allow maintainers to generate GNOME module tarballs on their personal computers; rather, we force them to use CI/CD in order to achieve the same result. We’ll be coming up shortly with a dedicated and isolated runner that will only build jobs tagged as releasing GNOME software.
  10. We retired our mirroring infrastructure based on Mirrorbits and replaced it with our CDN partner, CDN77
  11. We decoupled GIMP mirroring service from GNOME’s one, GIMP now hosts its tarballs (and associated rsync daemon) on top of a different master node, thanks OSUOSL for sponsoring the VM that makes this possible!

2.2. Minor achievements

  1. Retired multiple VMs: splunk, nsd0{1,2}, master, ceph-metrics, gitaly
  2. We started managing our DNS using an API and CI/CD jobs hosted in GitHub (this avoids relying on GNOME’s GitLab, which in case of unavailability would prevent us from updating DNS entries)
  3. We migrated smtp.gnome.org to OSCI in order not to lose the IP reputation and various whitelistings we have received throughout the years from multiple organizations
  4. We deprecated our former internal DNS authoritatives based on FreeIPA. We are now leveraging internal VPC resolvers and Route53 Private zones
  5. We deprecated all our OSUOSL GitLab runners due to particularly slow IO and high steal time and replaced them with a new Hetzner EX44 instance, kindly sponsored by GIMP. OSUOSL is working on coming up with local storage on their Openstack platform. We are looking forward to testing that and introducing new runners as soon as the solution is made available
  6. Retired idm0{1,2} and redirected them to a new FreeIPA load balanced service at https://idm.gnome.org
  7. We retired services which weren’t relevant for the community anymore: surveys.gnome.org, roundcube (aka webmail.gnome.org)
  8. We migrated nmcheck.gnome.org to Fastly and are using Synthetic responses to handle HTTP responses to clients
  9. We upgraded to Ansible Automation Platform (AAP) 2.5
  10. As part of the migration to our new AWS based platform, we upgraded Openshift to release 4.17
  11. We received a 2k grant from Microsoft which we are using for an Azure ARM64 GitLab runner
  12. Our entire GitLab runner fleet is now kept in sync hourly using AAP (Ansible roles were built to make this happen)
  13. We upgraded Cachet to 3.x series and fixed dynamic status.gnome.org updates (via a customized version of cachet-monitor)
  14. OS Currency: we upgraded all our systems to RHEL 9
  15. We converted all our Openshift images that were using a web server to Nginx for consistency/simplicity
  16. Replaced NRPE with Prometheus metrics-based monitoring; checks such as IDM replication and status are now handled via the Node Exporter textfile plugin
  17. Migrated download.qemu.org (yes, we also host some components of QEMU’s Infrastructure) to use nginx-s3-gateway, downloads are then served via CDN77

2.3. Minor annoyances/bugs that were also fixed in 2024

  1. Invalid OCSP responses from CDN77, https://gitlab.gnome.org/Infrastructure/Infrastructure/-/issues/1511
  2. With the migration to USE_TINI for GitLab, no gpg zombie processes are being generated anymore

2.4. Our brand new and renewed partnerships

  1. From November 2024 and ongoing, AWS will provide sponsorship and funding to the GNOME Project to sustain the majority of its infrastructure needs
  2. Red Hat kindly sponsored subscriptions for RHEL, Openshift, AAP as well as hosting, bandwidth for the GNOME Infrastructure throughout 2024
  3. CDN77 provided unlimited bandwidth / traffic on their CDN offering
  4. Fastly renewed their unlimited bandwidth / traffic plan on their Delivery/Compute offerings
  5. and thanks to OSUOSL, Packet, DigitalOcean, Microsoft for the continued hosting and sponsorship of a set of GitLab runners, virtual machines and ARM builders!

Expressing my gratitude

As I usually do at the end of each calendar year, I want to express my gratitude to Bartłomiej Piotrowski for our continued cooperation and also to Stefan Peknik for his continued efforts in developing the GNOME Release Service. We started this journey together many months ago when Stefan was trying to find a topic to base his CS bachelor thesis on. With this in mind I went straight to the topic of replacing ftpadmin with a better technology, also in light of what happened with the xz case. Stefan put all his enthusiasm and professionalism into making this happen, and with the service going into production on the 11th of December 2024, history was made.

That being said, we’re closing this year extremely close to retiring our presence from RAL3, which we expect to happen in January 2025. The GNOME Infrastructure will also send in a proposal to talk at GUADEC 2025, in Italy, to present and discuss all these changes with the community.

Christian Hergert

@hergertme

Layered Settings

Early on, Builder had the concept of layered settings. You had an application default layer the user could control. You also had a project layer which allowed the user to change settings just for that project. But that was about the extent of it. Additionally, these settings were just stored in your normal GSettings data repository, so there was no sharing of settings with other project collaborators. Boo!

With Foundry, I’d like to have a bit more flexibility and control. Specifically, I want three layers. One layer for the user’s preferences at the application level. Then project settings which can be bundled with the project by the maintainer for needs specific to the project. Lastly, a layer of user overrides which takes maximum preference.

Of course, it should still continue to use GSettings under the hood because that makes writing application UI rather easy. As mentioned previously, we’ll have a .foundry directory we place within the project with storage for both user and project data. That means we can use a GKeyFile back-end to GSettings and place the data there.

You can git commit your project settings if you’re the maintainer and ensure that your project’s conventions are shared with your collaborators.
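For illustration, such a committed project-level keyfile could look roughly like this (purely a hypothetical sketch: the file name, location, and group path are my assumptions, not Foundry’s actual on-disk layout):

# Hypothetical contents of a committed project settings keyfile
$ cat .foundry/project/settings.keyfile
[app/devsuite/foundry/project]
config-id='org.example.app.json'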

Of course, since this is all command-line based right now, there are tab-completable commands for this, which, again, makes unit testing this stuff easier.

# Reads the app.devsuite.foundry.project config-id gsetting
# taking into account all layers
$ foundry settings get project config-id

# Sets the config-id setting for just this user
$ foundry settings set project config-id "'org.example.app.json'"

# Sets the config-id for the project default which might
# be useful if you ship multiple flatpak manifests like GTK does
$ foundry settings set --project project config-id "'org.example.app.json'"

# Or maybe set a default for the app
$ foundry settings set --global project stop-signal SIGKILL

That code is now wired up to the FoundryContext via foundry_context_load_settings().

Next time I hope to cover the various sub-systems you might need in an IDE and how those services are broken down in Foundry.

Status update, 13/12/24

It’s been an interesting and cold month so far. I made a successful trip to the UK, one of the first times I’ve been back in winter and avoided being exposed to COVID19 since the pandemic, so that’s a step forwards.

I’ve been thinking a lot about documentation recently in a few different places where I work or contribute as a volunteer. One such place is within openQA and the GNOME QA initiative, so here’s what’s been happening there recently.

The monthly Linux QA call is one of my 2024 success stories. The goal of the call is to foster collaboration between distros and upstreams, so that we share testing effort rather than duplicating it, and we get issue reports upstream as soon as things break. Through this call I’ve met many of the key people who do automated testing of GNOME downstream, and we are starting to share ideas for the future.

What I want for GNOME is to be able to run QA tests for any open merge request, so we can spot regressions before they even land. As part of the STF+GNOME+Codethink collaboration we got a working prototype of upstream QA for GNOME Shell, but to move beyond a prototype, we need to build a more solid foundation. The current GNOME Shell prototype has about 100 lines of copy-pasted openQA code to set up the VM, and this would need to be copied into every other GNOME module where we might run QA tests. I very much do not want so many copies of one piece of code.

Screenshot of openQA web UI showing GNOME Tour

I mentioned this in the QA call and Oli Kurz, who is the openQA product owner at openSUSE, proposed that we put the setup logic directly into os-autoinst, which is openQA’s test runner. The os-autoinst code has a bare ‘basetest’ module which must be customized for the OS under test. Each distro maintains their own infrastructure on top of that to wait for the desktop to start, log in as a user, and so on.

Since most of us test Linux, we can reasonably add a base class specific to Linux, and some further helpers for systemd-based OSes. I love this idea, as we could now share improvements between all the different QA teams.

So the base test class can be extended, but how do we document its capabilities? I find openQA’s existing documentation pretty overwhelming as a single 50,000 word document. It’s not feasible for me to totally rework the documentation, but if we’re going to collaborate upstream then we need to have some way to document the new base classes.

Of course I also wrote some GNOME specific documentation for QA; but hidden docs like this are doomed to become obsolete. I began adding a section on testing to the GNOME developer guide, but I’ve had no feedback at all on the merge request, so this effort seems like a dead end.

So what should we do to make the QA infrastructure easier to understand? Let me know your ideas below.

Swans on a canal at sunset

Looking at the problem from another angle, we still lack a collective understanding of what openQA is and why you might use it. As a small step towards making this clearer, I wrote a comparison of four testing tools which you can read here. And at Oli’s suggestion I proposed a new Wikipedia page for openQA.

Screenshot of Draft:OpenQA page from Wikipedia

Please suggest changes here or in the openQA matrix channel. If you’re reading this and are a Wikipedia reviewer, then I would greatly appreciate a review so we can publish the new page. We could then also add openQA to the Wikipedia “Comparison of GUI testing tools”. Through small efforts like this we can hopefully reduce how much documentation is needed on the GNOME side, as we won’t need to start at “what even is openQA”.

I have a lot more to say about documentation but that will have to wait for next month. Enjoy the festive season and I hope your 2025 gets off to a good start!

When should we require that firmware be free?

The distinction between hardware and software has historically been relatively easy to understand - hardware is the physical object that software runs on. This is made more complicated by the existence of programmable logic like FPGAs, but by and large things tend to fall into fairly neat categories if we're drawing that distinction.

Conversations usually become more complicated when we introduce firmware, but should they? According to Wikipedia, Firmware is software that provides low-level control of computing device hardware, and basically anything that's generally described as firmware certainly fits into the "software" side of the above hardware/software binary. From a software freedom perspective, this seems like something where the obvious answer to "Should this be free" is "yes", but it's worth thinking about why the answer is yes - the goal of free software isn't freedom for freedom's sake, but because the freedoms embodied in the Free Software Definition (and by proxy the DFSG) are grounded in real world practicalities.

How do these line up for firmware? Firmware can fit into two main classes - it can be something that's responsible for initialisation of the hardware (such as, historically, BIOS, which is involved in initialisation and boot and then largely irrelevant for runtime[1]) or it can be something that makes the hardware work at runtime (wifi card firmware being an obvious example). The role of free software in the latter case feels fairly intuitive, since the interface and functionality the hardware offers to the operating system is frequently largely defined by the firmware running on it. Your wifi chipset is, these days, largely a software defined radio, and what you can do with it is determined by what the firmware it's running allows you to do. Sometimes those restrictions may be required by law, but other times they're simply because the people writing the firmware aren't interested in supporting a feature - they may see no reason to allow raw radio packets to be provided to the OS, for instance. We also shouldn't ignore the fact that sufficiently complicated firmware exposed to untrusted input (as is the case in most wifi scenarios) may contain exploitable vulnerabilities allowing attackers to gain arbitrary code execution on the wifi chipset - and potentially use that as a way to gain control of the host OS (see this writeup for an example). Vendors being in a unique position to update that firmware means users may never receive security updates, leaving them with a choice between discarding hardware that otherwise works perfectly or leaving themselves vulnerable to known security issues.

But even the cases where firmware does nothing other than initialise the hardware cause problems. A lot of hardware has functionality controlled by registers that can be locked during the boot process. Vendor firmware may choose to disable (or, rather, never to enable) functionality that may be beneficial to a user, and then lock out the ability to reconfigure the hardware later. Without any ability to modify that firmware, the user lacks the freedom to choose what functionality their hardware makes available to them. Again, the ability to inspect this firmware and modify it has a distinct benefit to the user.

So, from a practical perspective, I think there's a strong argument that users would benefit from most (if not all) firmware being free software, and I don't think that's an especially controversial argument. So I think this is less of a philosophical discussion, and more of a strategic one - is spending time focused on ensuring firmware is free worthwhile, and if so what's an appropriate way of achieving this?

I think there are two consistent ways to view this. One is to view free firmware as desirable but not necessary. This approach basically argues that code that's running on hardware that isn't the main CPU would benefit from being free, in the same way that code running on a remote network service would benefit from being free, but that this is much less important than ensuring that all the code running in the context of the OS on the primary CPU is free. The other, maximalist position is not to compromise at all - all software on a system, whether it's running at boot or during runtime, and whether it's running on the primary CPU or any other component on the board, should be free.

Personally, I lean towards the former and think there's a reasonably coherent argument here. I think users would benefit from the ability to modify the code running on hardware that their OS talks to, in the same way that I think users would benefit from the ability to modify the code running on hardware the other side of a network link that their browser talks to. I also think that there's enough that remains to be done in terms of what's running on the host CPU that it's not worth having that fight yet. But I think the latter is absolutely intellectually consistent, and while I don't agree with it from a pragmatic perspective I think things would undeniably be better if we lived in that world.

This feels like a thing you'd expect the Free Software Foundation to have opinions on, and it does! There are two primarily relevant things - the Respects your Freedoms campaign focused on ensuring that certified hardware meets certain requirements (including around firmware), and the Free System Distribution Guidelines, which define a baseline for an OS to be considered free by the FSF (including requirements around firmware).

RYF requires that all software on a piece of hardware be free other than under one specific set of circumstances. If software runs on (a) a secondary processor and (b) within which software installation is not intended after the user obtains the product, then the software does not need to be free. (b) effectively means that the firmware has to be in ROM, since any runtime interface that allows the firmware to be loaded or updated is intended to allow software installation after the user obtains the product.

The Free System Distribution Guidelines require that all non-free firmware be removed from the OS before it can be considered free. The recommended mechanism to achieve this is via linux-libre, a project that produces tooling to remove anything that looks plausibly like a non-free firmware blob from the Linux source code, along with any incitement to the user to load firmware - including even removing suggestions to update CPU microcode in order to mitigate CPU vulnerabilities.

For hardware that requires non-free firmware to be loaded at runtime in order to work, linux-libre doesn't do anything to work around this - the hardware will simply not work. In this respect, linux-libre reduces the amount of non-free firmware running on a system in the same way that removing the hardware would. This presumably encourages users to purchase RYF compliant hardware.

But does that actually improve things? RYF doesn't require that a piece of hardware have no non-free firmware, it simply requires that any non-free firmware be hidden from the user. CPU microcode is an instructive example here. At the time of writing, every laptop listed here has an Intel CPU. Every Intel CPU has microcode in ROM, typically an early revision that is known to have many bugs. The expectation is that this microcode is updated in the field by either the firmware or the OS at boot time - the updated version is loaded into RAM on the CPU, and vanishes if power is cut. The combination of RYF and linux-libre doesn't reduce the amount of non-free code running inside the CPU, it just means that the user (a) is more likely to hit since-fixed bugs (including security ones!), and (b) has less guidance on how to avoid them.

As long as RYF permits hardware that makes use of non-free firmware I think it hurts more than it helps. In many cases users aren't guided away from non-free firmware - instead it's hidden away from them, leaving them less aware that their freedom is constrained. Linux-libre goes further, refusing to even inform the user that the non-free firmware that their hardware depends on can be upgraded to improve their security.

Out of sight shouldn't mean out of mind. If non-free firmware is a threat to user freedom then allowing it to exist in ROM doesn't do anything to solve that problem. And if it isn't a threat to user freedom, then what's the point of requiring linux-libre for a Linux distribution to be considered free by the FSF? We seem to have ended up in the worst case scenario, where nothing is being done to actually replace any of the non-free firmware running on people's systems and where users may even end up with a reduced awareness that the non-free firmware even exists.

[1] Yes yes SMM


Hans de Goede

@hansdg

IPU6 camera support is broken in kernel 6.11.11 / 6.12.2-6.12.4

Unfortunately an incomplete backport of IPU6 DMA handling changes has landed in kernel 6.11.11.

This not only causes IPU6 cameras to not work, it also causes the kernel to (often?) crash on boot on systems where the IPU6 is enabled by the BIOS and thus in use.

Kernels 6.12.2 - 6.12.4 are also affected by this. A fix for this is pending for the upcoming 6.12.5 release.

6.11.11 is the last stable release in the 6.11.y series, so there will be no new stable 6.11.y release with a fix.

As a workaround, users affected by this can stay on 6.11.10 or 6.12.1 until 6.12.5 is available in their distribution's updates(-testing) repository.




Aryan Kaushik

@lucifer_rekt

GNOME Asia India 2024

Namaste Everyone!

Hi everyone, it was that time of the year again when we had our beloved GNOME Asia happening.

Last year GNOME Asia happened in Kathmandu, Nepal from December 1–3, and this time it happened in my country, in Bengaluru, from the 6th to the 8th of December.

Btw, a disclaimer - I was there on behalf of Ubuntu but the opinions over here are my own :)

Also, this one might not be that interesting due to well... reasons.

Day 0 (Because indexing starts with 0 ;))

Before departing from India... oh, I forgot this one was in India only haha.

This GNOME Asia had a lot of drama, with the local team requiring an NDA to be signed, which we only got to know about hours before the event. We also got to know we couldn't host an Ubuntu release party there, even though it had been agreed to months ago, again a few weeks ago, and even earlier that same day... So yeah... it was no less than an Indian daily soap episode, which is quite ironic lol.

But, in the end, I believe the GNOME team probably didn't know about it either; it felt like a local-team problem.

Enough with the rant; it was not all bad. I got to meet some of my GNOMEies and Ubunties (is that even a word?) friends upon arriving, and man, did we have a blast.

We hijacked a cafe and sat there till around 1 A.M., laughing so hard we might have been termed psychopaths by onlookers.

But what do we care, we were there for the sole purpose of having as much fun as we could.

After returning, I let my inner urge win and dived into the swimming pool on the hotel rooftop, at 2 A.M. in winter. Talk about the will to do anything ;)

Day 1

Upon proceeding to the venue we were asked for corporate ID cards, as the event was in the Red Hat office inside a corporate park. We didn't know this and thus had to travel 2 more km to the main entrance and get a visitor pass. I had to give an extra tip to the cab driver so that he wouldn't give me the look haha.

Upon entering the tech park, I got to witness why Bengaluru is often termed India's Silicon Valley. It was just filled with companies of every type and size so that was a sight to behold.

The talk I loved that day was "Build A GNOME Community? Yes You Can." by Aaditya Singh, full of insights and fun, we term each other as Bhai (Hindi for Brother) so it was fun to attend his talk.

This time I wasn't able to attend many of the talks as I now had the responsibility to explore a new venue for our release party.

Later my friends and I took a detour to find the new venue, and we found one quite quickly, about 400 metres away from the office.

This venue had everything we needed, a great environment, the right "vibe", and tons of freedom, which we FOSS lovers of course love and cherish.

It also gave us the freedom to no longer be restricted to the end of the event, but to move the party up to the lunch break.

At night Fenris, Syazwan, and I went to "The Rameshwaram Cafe", which is very famous in Bengaluru, and rightly so; the food was really good and, given the fame, not that expensive either.

Fenris didn't eat much as he still has to sober up to Indian dishes xD.

Day 2

The first talk was by Syazwan and boy did I have to rush to the venue to attend it.

Waking up early is not easy for me hehe, but his talks are always so funny, engaging and insightful that you just can't miss attending them live.

After a few talks came my time to present on the topic “Linux in India: A perspective of how it is and what we can do to improve it.”

Where we discussed all the challenges faced by us in boosting the market share of Linux and open source in India and what measures we could take to improve the situation.

We also glanced over the state of the Ubuntu India LoCo and the actions we are taking to reboot it, with multiple events like the one we had just conducted.

My talk can be viewed at - YouTube - Linux in India: A perspective of how it is and what we can do to improve it.

And that was quite fun. I loved the awesome feedback I got, and it is just amazing to see people loving your content. We then quickly rushed to the party venue; track 1 was already there, and we brought the track 2 peeps along with us as well.

To celebrate, we cut a cake and gave out some Ubuntu flavour stickers, Ubuntu 24.10 Oracular Oriole stickers, and UbuCon Asia 2024 stickers, followed by a delicious mix of vegetarian and non-vegetarian pizzas.

Despite the short duration of just one hour during lunch, the event created a warm and welcoming space for attendees, encapsulating Ubuntu’s philosophy: “Making technology human” and “Linux for human beings.”

The event was then again followed by GNOME Asia proceedings.

At night all of us Ubunties, GNOMEies, and Debian folks grouped up for a biryani dinner. We first hijacked the biryani place and then moved on to hijacking another cafe. The best thing was that none of them kicked us out; I seriously believed they would, considering our activities lol. I played Jenga for the first time, and we had a lot of jokes which I can't repeat in public for good reasons.

At that place, the GNOME CoC wasn't considered haha.

Day 3

Day 3 was a social visit; the UbuCon Asia 2025 organising team members conducted our own day trip, exploring the Technology Museum, the beautiful Cubbon Park, and the magnificent Vidhana Soudha of Karnataka.

I met my friend Aman for the first time since GNOME Asia Malaysia which was Awesome! And I also met my Outreachy mentee in person, which was just beautiful.

The 3-day event was made extremely joyful due to meeting old friends and colleagues. It reminded me of why we have such events so that we can group the community more than ever and celebrate the very ethos of FOSS.

As many of us got tired and some had flights, the day trip didn't last long, but it was nice.

At night I had one of my best coffees ever and tried "Plain Dosa with Mushroom curry", a weird but incredibly tasty combo.

End

Special thanks to Canonical for their CDA funding, which made it possible for me to attend in person and handle all arrangements on very short notice. 😃

Looking forward to meeting many of them again at GUADEC or GNOME Asia 2025 :D

Cassidy James Blaede

@cassidyjames

Publish Your Godot Engine Game to Flathub

If you follow me on the fediverse (@cassidy@blaede.family), you may have seen me recently gushing about ROTA, a video game I recently discovered. Besides the absolutely charming design and ridiculously satisfying gameplay, the game itself is open source, meaning the developer has published the game’s underlying code out to the world for anyone to see, learn from, and adapt.

Screenshot of ROTA, a colorful 2D platformer

As someone passionate about the Linux desktop ecosystem broadly and Flathub as an app store specifically, I was excited by the possibility of helping to get ROTA onto Flathub so more people could play it—plus, such a high-quality game being on Flathub helps the reputation and image of Flathub itself. So I kicked off a personal project (with the support of my employer¹) to get it onto Flathub—and I learned a lot—especially what steps were confusing or unclear.

As a result, here’s how I recommend publishing your Godot Engine game to Flathub. Oh, and don’t be too scared; despite the monumental size of this blog post, I promise it’s actually pretty easy! 😇

Overview

Let’s take a look at what we’re going to achieve at a high level. This post assumes you have source code for a game built with a relatively recent version of Godot Engine (e.g. Godot Engine 3 or 4), access to a Linux computer or VM for testing, and a GitHub account. If you’re missing one of those, get that sorted before continuing! You can also check the list of definitions at the bottom of this page for reference if you need to better understand something, and be sure to check out the Flathub documentation for a lot more details on Flatpak publishing in general.

Illustration with the Godot Engine logo, then an arrow pointing to the Flathub logo

To build a Flatpak of a Godot Engine game, we only need three things:

  1. Exported PCK file
  2. Desktop Entry, icon, and MetaInfo files
  3. Flatpak manifest to put it all together

The trick is knowing how and where to provide each of these for the best experience publishing your game (and especially updates) to Flathub. There are a bunch of ways you can do it, but I strongly recommend:

  1. Upload your PCK file to a public, versioned URL, e.g. as a source code release artifact.

  2. Include the Desktop Entry, icon, and MetaInfo files in the repo with your game’s source code if it’s open source, or provide them via a dedicated repo, versioned URL, or source code release artifact.

    You can alternatively upload these directly to the Flatpak Manifest repository created by Flathub, but it’s better to keep them with your game’s other files if possible.

  3. Your manifest will live in a dedicated GitHub repo owned by the Flathub org. It’s nice (but not required) to also include a version of your manifest with your game’s source code for easier development and testing.
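On that last point about testing: once you have a manifest locally, you can build and install it with Flatpak Builder before submitting anything to Flathub. A quick sketch, assuming the manifest is named after the app ID used in the ROTA example throughout this post (substitute your own file name and ID):

# Build the Flatpak from the manifest and install it for the current user
flatpak-builder --user --install --force-clean build-dir net.hhoney.rota.yml

# Run the result to check that the game launches and finds its PCK file
flatpak run net.hhoney.rota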

Before we get into each of those steps in more detail, you will need to pick an app ID for your game. This is a unique machine-oriented (not intended for humans to need to know) ID used across the Linux desktop and throughout the Flatpak process. It must be in valid reverse domain name notation (RDNN) format for a domain or code hosting account associated with the game; for example, if your website is example.com, the ID should begin with com.example. I strongly recommend using your own domain name rather than an io.itch. or io.github. prefix here, but ultimately it is up to you. Note that as of writing, Itch.io-based IDs cannot be verified on Flathub.

1. Handling Your PCK File

When you export a Godot Engine game for PC, you’re actually creating a platform-agnostic PCK file that contains all of your game’s code and assets, plus any plugins and libraries. The export also provides a copy of the platform-specific binary for your game which—despite its name—is actually just the Godot Engine runtime. The runtime simply looks for a PCK file of the same name sitting on disk next to it, and runs it. If you’re familiar with emulating retro games, you can think of the binary file as the Godot “emulator”, and the PCK file as your game’s “ROM.”

To publish to Flathub, we’ll first need your game’s exported PCK file accessible somewhere on the web via a public, versioned URL. We’ll include that URL in the Flatpak manifest later so Flatpak Builder knows where to get the PCK file to bundle it with the Godot Engine binary into a Flatpak. Technically any publicly-accessible URL works here, but if your game is open source, I highly recommend you attach the PCK file as a release artifact wherever your source code is hosted (e.g. GitHub). This is the most similar to how open source software is typically released and distributed, and will be the most familiar to Flathub reviewers as well as potential contributors to your game.

No matter where you publish your PCK file, the URL needs to be public, versioned, and stable: Flatpak Builder should always get the exact same file when hitting that URL for that release, and if you make a new release of your game, that version’s PCK file needs to be accessible at a new URL. I highly recommend semantic versioning for this, but it at least needs to be incrementally versioned so it’s always obvious to Flathub reviewers which version is newest, and so it matches to the version in the MetaInfo (more on that later). Match your game’s regular versioning scheme if possible.

Bonus Points: Export in CI

Since Godot Engine is open source and has command-line tools that run on Linux, you can use a source code platform’s continuous integration (CI) feature to automatically export and upload your PCK file. This differs a bit depending on your source code hosting platform and Godot Engine version, but triggered by a release, you run a job to:

  1. Grab the correct version of the Godot Engine tools binary from their GitHub release
  2. Export the PCK file from the command line (Godot Docs)
  3. Upload that PCK file to the release itself

This is advantageous because it ensures the PCK file attached to the release is exported from the exact code in the release, increasing transparency and reducing the possibility of human error. Here is one example of such a CI workflow.
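For reference, the export step itself boils down to a single command. A rough sketch (the preset name "Linux" is an assumption and must match a preset defined in your project's export_presets.cfg; run it from the project directory):

# Godot Engine 4.x: export only the PCK for the "Linux" preset
godot --headless --export-pack "Linux" rota.pck

# Godot Engine 3.x uses --no-window instead of --headless
godot --no-window --export-pack "Linux/X11" rota.pck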

About That Binary…

Since the exported binary file is specific to the platform and Godot Engine version but not to your game, you do not need to provide it when publishing to Flathub; instead, Flathub builds Godot Engine runtime binaries from the Godot Engine source code for each supported version and processor architecture automatically. This means you just provide the PCK file and specify the Godot Engine version; Flathub will build and publish your Flatpak for 64-bit Intel/AMD PCs, 64-bit ARM computers, and any supported architectures in the future.

2. Desktop Entry, Icon, and MetaInfo Files

Desktop Entry and MetaInfo are FreeDesktop.org specifications that ensure Linux-based OSes interoperate; for our purposes, you just need to know that a Desktop Entry is what makes your game integrate on Linux (e.g. show in the dock, app menus, etc.), while MetaInfo provides everything needed to represent an app or game in an app store, like Flathub.

Writing them is simple enough, especially given an example to start with. FreeDesktop.org has a MetaInfo Creator web app that can even generate a starting point for you for both, but note that for Flathub:

  • The icon name given must match the app ID, which the site lists as a “Unique Software Identifier”

  • The “Executable Name” will be godot-runner for Godot Engine games

If included in your source code repository, I recommend storing these files in the project root (or in a linux/ folder) as YOUR.APP.ID.desktop, YOUR.APP.ID.metainfo.xml, and, if it doesn’t exist in a suitable format somewhere else in the repo, YOUR.APP.ID.png.

If your game is not open source or these files are not to be stored in the source code repository, I recommend storing and serving these files from the same versioned web location as your game’s PCK file.

Here are some specifics and simple examples from the game ROTA to give you a better idea:

Desktop Entry

You’ll only ever need to set Name, Comment, Categories, and Icon. See the Additional Categories spec for what you can include in addition to the Game category. Note the trailing semicolon!

[Desktop Entry]
Name=ROTA
Comment=Gravity bends beneath your feet
Categories=Game;KidsGame;
Icon=net.hhoney.rota
Exec=godot-runner
Type=Application
Terminal=false
net.hhoney.rota.desktop

Icon

This is pretty straightforward; you need an icon for your game! This is used to represent your game both for app stores like Flathub.org and the native app store clients on players’ computers, plus as the launcher icon e.g. on the player’s desktop or dock.

Screenshot of ROTA, a colorful 2D platformer

ROTA's icon in the GNOME Dash

If your game is open source, it’s easy enough to point to the same icon you use for other platform exports. If you must provide a unique icon for Flathub (e.g. for size or style reasons), you can include that version in the same place as your Desktop Entry and MetaInfo files. The icon must be a square aspect ratio as an SVG or 256×256 pixel (or larger) PNG.

MetaInfo

I won’t cover absolutely everything here (see the Flathub docs covering MetaInfo Guidelines for that), but you should understand a few things about MetaInfo for your game.

The top-most id is your game’s app ID, and must be in valid RDNN format as described above. You should also use the same prefix for the developer id to ensure all of your apps/games are associated with one another.

Screenshots should be at stable URLs; e.g. if pointing to a source code hosting service, make sure you’re using a tag (like 1.0.0) or commit (like 6c7dafea0993700258f77a2412eef7fca5fa559c) in the URL rather than a branch name (like main). This way the right screenshots will be included for the right versions, and won’t get incorrectly cached with an old version.

You can provide various URLs to link people from your game’s app store listing to your website, an issue tracker, a donation link, etc. In the case of the donation link, the Flathub website displays this prominently as a button next to the download button.

Branding colors and screenshots are some of your most powerful branding elements! Choose colors that complement (but aren’t too close to) your game’s icon. For screenshots, include a caption related to the image to be shown below it, but do not include marketing copy or other graphics in the screenshots themselves, as they may be rejected.

Releases must be present, and are required to have a version number; this must be an incrementing version number as Flatpak Builder will use the latest version here to tag the build. I strongly recommend the simple Semantic Versioning format, but you may prefer to use a date-based 2024.12.10 format. These release notes show on your game’s listing in app stores and when players get updates, so be descriptive—and fun!

Content ratings are developer-submitted, but may be reviewed by Flathub for accuracy—so please, be honest with them. Flathub uses the Open Age Ratings Service for the relevant metadata; it’s a free, open source, and straightforward survey that spits out the proper markup at the end.

This example is pretty verbose, taking advantage of most features available:

<?xml version="1.0" encoding="UTF-8"?>
<component type="desktop-application">
  <id>net.hhoney.rota</id>
  
  <name>ROTA</name>
  <summary>Gravity bends beneath your feet</summary>

  <developer id="net.hhoney">
    <name translatable="no">HHoney Software</name>
  </developer>

  <description>
    <p>Move blocks and twist gravity to solve puzzles. Collect all 50 gems and explore 8 vibrant worlds.</p>
  </description>

  <content_rating type="oars-1.1">
    <content_attribute id="violence-cartoon">mild</content_attribute>
  </content_rating>
  
  <url type="homepage">https://hhoney.net</url>
  <url type="bugtracker">https://github.com/HarmonyHoney/ROTA/issues</url>
  <url type="donation">https://ko-fi.com/hhoney</url>

  <branding>
    <color type="primary" scheme_preference="light">#ff99ff</color>
    <color type="primary" scheme_preference="dark">#850087</color>
  </branding>

  <screenshots>
    <screenshot type="default">
      <image>https://raw.githubusercontent.com/HarmonyHoney/ROTA/6c7dafea0993700258f77a2412eef7fca5fa559c/media/image/assets/screens/1.png</image>
      <caption>Rotate gravity as you walk over the edge!</caption>
    </screenshot>
    <screenshot>
      <image>https://raw.githubusercontent.com/HarmonyHoney/ROTA/6c7dafea0993700258f77a2412eef7fca5fa559c/media/image/assets/screens/2.png</image>
      <caption>Push, pull and rotate gravity-blocks to traverse the stage and solve puzzles</caption>
    </screenshot>
    <screenshot>
      <image>https://raw.githubusercontent.com/HarmonyHoney/ROTA/6c7dafea0993700258f77a2412eef7fca5fa559c/media/image/assets/screens/3.png</image>
      <caption>Collect all 50 gems to unlock doors and explore 8 vibrant worlds!</caption>
    </screenshot>
  </screenshots>

  <releases>
    <release version="1.0" date="2022-05-07T22:18:44Z">
      <description>
        <p>Launch Day!!</p>
      </description>
    </release>
  </releases>

  <launchable type="desktop-id">net.hhoney.rota.desktop</launchable>
  <metadata_license>CC0-1.0</metadata_license>
  <project_license>Unlicense</project_license>
</component>
net.hhoney.rota.metainfo.xml

Bonus Points: Flathub Quality Guidelines

Beyond Flathub’s base requirements for publishing games are their Quality Guidelines. These are slightly more opinionated human-judged guidelines that, if met, make your game eligible to be featured in the banners on the Flathub.org home page, as a daily-featured app, and in other places like in some native app store clients. You should strive to meet these guidelines if at all possible!

Screenshot of Flathub.org with a large featured banner for Crosswords

Crosswords, a featured game on Flathub, meets the quality guidelines

One common snag is the icon; Flathub reviewers are generally more lenient with games, but if you need help meeting the guidelines for your Flathub listing, be sure to reach out on the Flathub Matrix chat or Discourse forum.

3. Flatpak manifest

Finally, the piece that puts it all together: your manifest! This can be a JSON or YAML file, and it must be named after your game’s app ID.

The important bits that you’ll need to set here are the id (again matching the app ID), base-version for the Godot Engine version, the sources for where to get your PCK, Desktop Entry, MetaInfo, and icon files (in the below example, a source code repository and a GitHub release artifact), and the specific build-commands that describe where in the Flatpak those files get installed.

In the build-commands, be sure to reference the correct location for each file. You can also use these commands to rename any files, if needed; in the below example, the Desktop Entry and MetaInfo files are in a linux/ folder in the project source code, while the icon is reused (and renamed) from a path that was already present in the repo. You can also use ${FLATPAK_ID} in file paths to avoid writing the ID over and over.

For the supported Godot Engine versions, check the available branches of the Godot Engine BaseApp.

For git sources, be sure to point to a specific commit hash; I also prefer to point to the release tag as well (e.g. with tag: v1.2.3) for clarity, but it’s not strictly necessary. For file sources, be sure to include a hash of the file itself (e.g. sha256: a89741f…). For a file called export.pck, you can generate this on Linux with sha256sum export.pck, or on Windows with CertUtil -hashfile export.pck sha256.

id: net.hhoney.rota
runtime: org.freedesktop.Platform
runtime-version: '24.08'
base: org.godotengine.godot.BaseApp
base-version: '3.6'
sdk: org.freedesktop.Sdk
command: godot-runner

finish-args:
  - --share=ipc
  - --socket=x11
  - --socket=pulseaudio
  - --device=all

modules:
  - name: rota
    buildsystem: simple

    sources:
      - type: git
        url: https://github.com/HarmonyHoney/ROTA.git
        commit: be542fa2444774fe952ecb22d5056a048399bc25

      - type: file
        url: https://github.com/HarmonyHoney/ROTA/releases/download/something/ROTA.pck
        sha256: a89741f56eb6282d703f81f907617f6cb86caf66a78fce94d48fb5ddfd65305c

    build-commands:
      - install -Dm644 ROTA.pck ${FLATPAK_DEST}/bin/godot-runner.pck
      - install -Dm644 linux/${FLATPAK_ID}.desktop ${FLATPAK_DEST}/share/applications/${FLATPAK_ID}.desktop
      - install -Dm644 linux/${FLATPAK_ID}.metainfo.xml ${FLATPAK_DEST}/share/metainfo/${FLATPAK_ID}.metainfo.xml
      - install -Dm644 media/image/icon/icon256.png ${FLATPAK_DEST}/share/icons/hicolor/256x256/apps/${FLATPAK_ID}.png

net.hhoney.rota.yml

Once you have your manifest file, you’re ready to test it and submit your game to Flathub. To test it, follow the instructions at that link on a Linux computer (or VM); you should be able to point Flatpak Builder to your manifest file for it to grab everything and build a Flatpak of your game.
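As a rough idea, a local test run with Flatpak Builder might look something like this (the build-dir name is arbitrary, and --install-deps-from=flathub pulls in the runtime, SDK, and BaseApp if you don’t already have them):

flatpak-builder --user --install --force-clean --install-deps-from=flathub build-dir net.hhoney.rota.yml
flatpak run net.hhoney.rota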

The Flathub Submission PR process is a bit confusing; you’re just opening a pull request against a specific new-pr branch on GitHub that adds your manifest file; Flathub will then human-review it and run automated tests on it to make sure it all looks good. They’ll provide feedback on the PR if needed, and then if it’s accepted, a bot will create a new repo on the Flathub org just for your game’s manifest. You’ll automatically have the correct permissions on this repo to be able to propose PRs to update the manifest, and merge them once they pass automated testing.

Please be sure to test your manifest before submitting so you don’t end up wasting reviewers’ time. 🙏

You Did It!

You published your game to Flathub! Or at least you made it this far in the blog post; either way, that’s a win.

I know this was quite the slog to read through; my hope is that it can serve as a reference for game developers out there. I’m also interested in adapting it into documentation for Flatpak, Flathub, and/or Godot Engine—but I wasn’t sure where it would fit and in what format. If you’d like to adapt any of this post into proper documentation, please feel free to do so!

If you spot something wrong or just want to reach out, hit me up using any of the links in the footer.

Bonus Points: Publishing Updates

When I wrapped this blog post up, I realized I missed mentioning how to handle publishing updates to your game on Flathub. While I won’t go into great detail here, the gist is:

  1. Update your MetaInfo file with the new release version number, timestamp, and release notes (see the example release entry after this list); publish this either in your source code repo or alongside the PCK file; if you have new screenshots, be sure to update those URLs in the MetaInfo file, too!

  2. Export a new PCK file of your release, uploading it to a public, stable URL containing the new version number (e.g. a GitHub release)

  3. Submit a pull request against your Flatpak manifest’s GitHub repo, pointing the manifest at new versioned locations of your files; be sure to update the file hashes as well!
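For step 1, that usually just means adding a new entry at the top of the existing releases element in your MetaInfo file; the version, date, and release notes below are placeholders:

<releases>
  <release version="1.1" date="2025-01-15">
    <description>
      <p>New levels and bug fixes!</p>
    </description>
  </release>
  <release version="1.0" date="2022-05-07T22:18:44Z">
    <description>
      <p>Launch Day!!</p>
    </description>
  </release>
</releases>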

After passing automated tests, a bot will comment on the PR with a command to test your Flatpak. Do this, as the resulting Flatpak is what will be published to players after the PR is merged. If it all looks good, merge it, and you’re set! If not, repeat the above steps until everything is as expected. :)



Definitions

There are a lot of terms and technologies involved on both the Godot Engine and Flathub side, so here are some brief definitions. Don’t worry if you don’t fully understand each of these, and you can always use this as a cheat sheet to refer back to.

Godot Engine

Open source game engine that includes the editor (the actual app you use to create a game), tools (command-line tools for exporting a game), and runtime (platform-specific binary distributed with your game which actually runs it)

Export

Prepare your game for distribution; Godot Engine’s export workflow packages up your game’s code, assets, libraries, etc. and turns it into a playable game.

PCK File

The platform-agnostic result of a Godot Engine export to use along with the platform-specific runtime. Contains all of your game’s code, assets, etc. packed up with a .pck extension.

Flatpak

App/game packaging format for Linux that works across nearly every different Linux distribution. An important design of Flatpak is that it is sandboxed, which keeps each app or game from interfering with one another and helps protect players’ privacy.

Flathub

The de facto Linux app store with thousands of apps and games, millions of active users, and a helpful community of open source people like me! It uses Flatpak and other open standards to build, distribute, and update apps and games.

Flatpak Manifest

A structured file (in JSON or YAML format) that tells Flatpak how to package your game, including where to get the game itself from. Flathub hosts the manifest files for apps and games on their GitHub organization, regardless of where your game is developed or hosted.

Flatpak Builder

Command-line tool that takes a Flatpak manifest and uses it to create an actual Flatpak. Used for local testing, CI workflows, and Flathub itself.

Flatpak BaseApp

Shared base for building a Flatpak; e.g. all Godot 3.6 games can use the same BaseApp to simplify the game’s manifest, and Flatpak Builder will take care of the common Godot 3.6-specific bits.

Desktop Entry

A simple INI-like file that determines how your game shows up on Linux, i.e. its name, icon, and categories.

MetaInfo

Open standard for describing apps and games to be displayed in app stores; used by Flathub and Linux app store clients to build your game’s listing page.

App ID

A unique ID for your game in reverse domain name notation (RDNN), based on a valid web domain or source code hosting account you control. Required by Flatpak and validated by Flathub to ensure an app or game is what it claims to be.

Flathub Verification

Optional (but highly recommended!) process to verify that your game on Flathub is published by you. Uses your game’s app ID to verify ownership of your domain or source code hosting account.

Felipe Borges

@felipeborges

Announcement: GNOME will have an Outreachy intern working on librsvg

We are excited to announce that the GNOME Foundation is sponsoring an Outreachy internship for the December-March round!

The intern will work with mentor Federico Mena Quintero on the project, “Implement the SVG2 text layout algorithm in librsvg.”

The intern’s blog will soon be added to Planet GNOME, where you can follow their project updates and learn more about them. Stay tuned!

Udo Ijibike

@Udo_I

Outreachy Internship Series: Files Usability Test Report

During my Outreachy internship with GNOME, Tamnjong Larry Tabeh and I conducted user research exercises under the inspiring mentorship of Allan Day and Aryan Kaushik.

In this blog post, I’ll discuss the usability test we conducted for GNOME’s Files, also known as Nautilus.

This blog post will introduce the study, outline our methodology, and present our key findings from the usability test. I’ve also attached a downloadable report at the end of this blog post that discusses in detail our observations and recommendations for each task performed in the usability test.

Without further ado, let’s jump right in!

1.  Introduction

Files is the default file manager of the GNOME desktop. It provides a simple and integrated way of managing files when running a Linux-based OS by supporting all the basic functions of a file manager and more.

With recent GNOME releases introducing significant changes to the Files user experience, and more improvements planned for subsequent releases, the design team wanted to assess the effectiveness of these updates and learn more about other aspects of the user experience.

To support these efforts, we executed a user research project to identify areas for improvement, and gather actionable insights from observed user behaviours that can inform design decisions when addressing identified issues.

1.1.  Research Goals

Our research goals were to:

    • Assess the effectiveness of the new menu structure and the discoverability of the following menu items:
      1. Icon Size editors
      2. Properties
      3. Select All
      4. Undo/Redo
      5. Sort
      6. Open Item Location
      7. Show Hidden Files
      8. Add To Bookmark
    • Evaluate the ease of use of Files’s Search feature, and the intuitiveness of its Search Filters.
    • Investigate the extent to which any difficulty experienced when right-clicking an empty space in List View impacts the user experience when accessing a folder context-menu.

1.2.  Research Questions

Upon completion of the study, we wanted to be able to answer the following questions:

    • Menu Structure
      1. Is the current organization of the menus effective?
      2. Can people find the buttons they need for basic tasks when they need them?
    • Search Experience
      1. Do people understand how to search in Files?
      2. Do people understand the search filters and how to use them?
      3. Are the search filters effective for their context of use?
    • List View Layout
      1. Is it challenging for people to access the folder context menu in list view when they have a lot of files?
      2. Does the current design meet user expectations when accessing folder context menu in list view?

2.  Study Design

2.1.  Approach

To answer our research questions, we opted for a moderated task-based usability test. This approach meant that we could simulate typical usage conditions and observe participants interact with Files. This made it easy for us to identify pain-points and gaps in the specific aspects of the Files user experience that we were interested in, and allowed us to engage participants in discussions that deepened our understanding of the challenges they experienced with Files.

To plan the study, we started by defining the ideal participant for our research goals. Next, we established an optimal sequence for the tasks we wanted participants to perform, then crafted a scenario for each, after which we designed the testing environment. We concluded preparations with a pilot test to identify weaknesses in the study plan and implement revisions where necessary before testing with recruited participants.

2.2.  Recruitment Criteria

To generate the data we needed, we had to observe individuals who were unfamiliar with the Files menu structure. This requirement was crucial, as previous use of Files could influence a participant’s interactions, which would have made it difficult for us to discern valid usability issues from their interactions.

We also needed participants to be able to perform basic computing tasks independently: tasks like navigating software and managing files on their computer. This proficiency was important for ensuring that any challenges observed during the study were specifically related to the Files user experience, rather than stemming from a lack of general computer skills.

Therefore, we defined our recruitment criteria as follows:

    1. Has never used GNOME prior to their usability test session.
    2. Is able to use a computer moderately well.

2.3.  Testing Environment

During testing, participants interacted with development versions of Files, specifically, versions 47.rc-7925df1ba and 47.rc-3faeec25e. Both versions were the latest available at the time of testing and had identical implementations of the features we were targeting.

To elicit natural interactions from the participants, we enhanced the testing environment with a selection of files and folders that were strategically organized, named, and hidden, to create states in Files that encouraged and facilitated the tasks we planned to observe.

3.  Participant Overview

We recruited and tested with six first-time GNOME users, aged twenty-one to forty-seven, from diverse backgrounds, with varying levels of computer expertise. This diversity in the sample helped us keep our findings inclusive by ensuring that we considered a broad range of experiences in the usability test.

Although the majority of the participants reported current use of Windows 11, a few also reported previous use of macOS and earlier versions of Windows.

4.  Methodology

For this usability test:

    • We conducted in-person usability tests with six computer users who met our selection criteria.
    • The moderating researcher followed a test script and concluded each session with a brief, semi-structured interview.
    • Participants attempted eleven tasks in the following order:
      1. Change the icon size
      2. Find the size of a folder with Properties
      3. Select all files in a folder with “Select All”
      4. Undo an action with the “Undo” button
      5. Change the sort order
      6. Change Files display from grid view to list view
      7. Create a new folder while in list view
      8. Find a file using the search feature, with filters
      9. Go to a file’s location from search results with “Open Item Location”
      10. Reveal hidden items in a folder with “Show Hidden Files”
      11. Add a folder to the sidebar with “Add to Bookmarks”
    • Participants were encouraged to continuously think aloud while performing the tasks, and each session lasted at least 40 minutes.
    • All sessions were recorded with participant consent and were later transcribed for analysis.

5.  Usability Test Result

Applying Jim Hall’s Heat Map technique, we summarized the observed experience for all tasks performed in the usability test. The heatmap below shows the completion rate for each task and the level of difficulty participants experienced when performing them.

The tasks are in rows and participants are represented in columns. The cell where a row (Task) intersects with a column (Participant) captures the task outcome and relative difficulty experienced by a participant during their attempt.

A cell is green if the participant completed the task without any difficulty, yellow if the participant completed the task with very little difficulty, orange if the participant completed the task with moderate difficulty, red if the participant completed the task with severe difficulty, black if the participant was unable to complete the task, and gray if the participant’s approach was outside the scope of the study.

6.  Key Insights

1.  Menu structure

    • The menu structure was generally easy for participants to navigate. Despite using GNOME and Files for the first time during their testing sessions, they adapted quickly and were able to locate most of the buttons and menu items required to complete the tasks.
    • The best performing tasks were “Change the sort order” and “Reveal hidden items in a folder”, and the worst performing tasks were “Change the icon size” and “Add a folder to Bookmark”.
    • Overall, the participants easily found the following menu items when needed:
      1. Sort
      2. Show Hidden Files
      3. Properties
      4. Open Item Location
    • But struggled to find these menu items when needed:
      1. Icon size editors
      2. Select All
      3. Undo/Redo
      4. Add To Bookmark
    • In situations where participants were familiar with a shortcut or gesture for performing a task, they almost never considered checking the designated menus for a button.
    • We observed this behavior in every participant, particularly when they performed the following tasks:

    • Nonetheless, Files excelled in this area with its remarkable support for widely used shortcuts and cross-platform conventions.
    • We also observed that when these actions worked as expected it had the following effects on the user’s experience:
      1. It reduced feelings of apprehension in participants and encouraged them to engage more confidently with the software.
      2. It made it possible for the participants to discover Files’s menu structure without sacrificing their efficiency.

2.  Search

The “Search Current Folder” task flow was very intuitive for all participants. The search filters were also very easy to use and they effectively supported participants during the file search.

However, we found that the clarity of some filter labels could be reasonably improved by tailoring them to the context of a file search.

3.  List View Layout

The current List View layout did not effectively support typical user behavior when accessing the folder context menu.

4.  General Observation

When the participants engaged in active discovery of Files, we observed behaviour patterns that are linked to the following aspects of the design:

    • Familiarity:
    • We observed that when participants attempted familiar tasks, they looked for familiar cues in the UI. We noticed that when a UI component looked familiar to participants, they interacted without hesitation and with the expectation that this interaction would lead to the same outcomes that they’re accustomed to from their prior experience with similar software. Whereas, when a UI component was unfamiliar, participants were more restrained and cautious when they interacted with it.
    • For example, we noticed participants interact differently with the “Search Current Folder” button compared to the “List/Grid View” and “View Options” buttons.
    • With the “Search Current Folder” button, participants took longer to identify the icon, and half of the sample checked the tooltip for confirmation before clicking the button, because the icon was unfamiliar.
    • In contrast, participants reacted much more quickly during the first task, instinctively clicking the “List/Grid View” or “View Options” icons without checking the tooltip. Some even assumed the two buttons were part of a single control and interacted with them as if they were combined, because they were familiar with the icons and the design pattern.
    • Tooltips:
    • With a lot of icon buttons in the Files UI, we observed participants relying heavily on tooltips to discover the UI, mostly as a way to validate their assumptions about the functionality of an icon button, as highlighted above.
    • Clear and effective labels:
    • We observed that the more abstract or vague a label was, the more participants struggled to interpret it correctly.
    • In the “Open Item Location” tasks, we guided the participants who were unable to find the menu item to the file’s context menu, then asked them if they thought there was a button that could have helped them complete the task. Both participants who gave up on this task instantly chose the correct option.
    • Whereas, in the “Add To Bookmarks” tasks, almost everyone independently found the menu item but the majority of them were hesitant to click on it because of the word “Bookmarks” in the label.
    • Layout of Files:
    • By the end of most of the sessions, participants had concluded that controls in the white (child) section of the layout affected elements within that section, while controls in the sidebar were relevant to just the elements in the sidebar, even though this wasn’t always the case with how the Files menu structure is actually organized.
    • Therefore, when participants needed to perform an action they believed would affect elements in the child section of the layout, most of them instinctively checked the same section for an appropriate control.

7.  Conclusion, Reflections and Next Steps

If you’d like to learn about our findings and the identified usability issues for each task, here is a detailed report that discusses how the participants interacted, alongside our recommendations: Detailed Report for Files Usability Test

Overall, the usability test effectively supported our research goals and provided qualitative insights that directly addressed our research questions.

Beyond these insights, we also noted that users have preferences for performing certain tasks. Future research efforts can build on this insight by exploring the usage patterns of Files users to inform decisions around the most effective ways to support them.

Reflecting on the study’s limitations, a key aspect that may have influenced our results was the participant sample. Although unintended, we tested with a sample predominantly composed of Windows 11 users. Ideally, a more diverse group that included current users of different operating systems could have further enriched our findings by providing a broader range of experiences to consider. We partially mitigated this limitation by recognizing that participants who had previous experience with other operating systems brought knowledge from those interactions into their use of Files, which likely influenced their behaviors and expectations during the test.

8.  Acknowledgements

I gained a lot of valuable skills from my internship with GNOME: I significantly improved my communication skills, learned practical skills for designing and executing user research projects using different qualitative and quantitative research methods, and developed a sense for the more nuanced but critical considerations necessary for ensuring the reliability and validity of research findings throughout the various phases of a study, and for how to address them in research planning and execution.

So, I’d like to conclude by expressing my profound gratitude to everyone who made this experience so impactful.

I’d like to thank my mentors, Allan Day and Aryan Kaushik, for their guidance, insightful feedback, and encouragement throughout and beyond the internship; the GNOME community, for the warm welcome and support; and Outreachy, for making it possible for me to have this experience.

I greatly enjoyed working on this project and I expect to make more user research contributions to GNOME.

Thank you!

 

 

Cambalache 0.94 Released!

Hello, I am pleased to announce a new Cambalache stable release.

Version 0.94.0 – Accessibility Release!

    • Gtk 4 and Gtk 3 accessibility support
    • Support property subclass override defaults
    • AdwDialog placeholder support
    • Improved object description in hierarchy
    • Lots of bug fixes and minor UI improvements

How it started

A couple of months ago I decided to make a poll on Mastodon about which feature people would like to see next.

Poll: Which feature should be added next in Cambalache? Results: 28% GtkExpression support, 28% GResource, 36% Accessibility, 8% In-app polls.

To my surprise, GtkExpression did not come first, and GResource support was not last.

Data Model

First things first: how do we store a11y data in the project?

So what are we trying to store? From the GTK documentation:

GtkWidget allows defining accessibility information, such as properties, relations, and states, using the custom <accessibility> element:

<object class="GtkButton" id="button1">
  <accessibility>
    <property name="label">Download</property>
    <relation name="labelled-by">label1</relation>
  </accessibility>
</object>

These look a lot like regular properties, so my first idea was to store them as properties in the data model.

So I decided to create one custom/fake interface class for each type of a11y data: CmbAccessibleProperty, CmbAccessibleRelation, and CmbAccessibleState.

These are hardcoded in the cmb-catalog-gen tool and look like this:

# Property name: (type, default value, since version)
self.__a11y_add_ifaces_from_enum([
  (
    "Property",
    "GtkAccessibleProperty",
    {
      "autocomplete": ["GtkAccessibleAutocomplete", "none", None],
      "description": ["gchararray", None, None],
      ...
    }
  ),
  (
    "Relation",
    "GtkAccessibleRelation",
    {
      "active-descendant": ["GtkAccessible", None, None],
      "controls": ["CmbAccessibleList", None, None],  # Reference List
      "described-by": ["CmbAccessibleList", None, None],  # Reference List
      ...
    }
  ),
  (
    "State",
    "GtkAccessibleState",
    {
      "busy": ["gboolean", "False", None],
      "checked": ["CmbAccessibleTristateUndefined", "undefined", None],
      "disabled": ["gboolean", "False", None],
      "expanded": ["CmbBooleanUndefined", "undefined", None],
      ...
    }
  )
])

This function creates the custom interface with all the properties and makes sure all the values in the corresponding GTK enumeration are covered.

One fundamental difference with properties is that some a11y relations can be used more than once to specify multiple values.

To cover this, I created a new value type called CmbAccessibleList, which is simply a comma-separated list of values.

This way, the import and export code can handle loading and exporting a11y data in the Cambalache data model.
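As a rough sketch of the idea (illustrative only, not Cambalache’s actual implementation), round-tripping such a value boils down to splitting and joining on commas:

# Illustrative helpers for a CmbAccessibleList-style value
def accessible_list_from_string(value):
    # "label1,label2" -> ["label1", "label2"]
    return [ref.strip() for ref in value.split(",") if ref.strip()]

def accessible_list_to_string(refs):
    # ["label1", "label2"] -> "label1,label2"
    return ",".join(refs)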

Editing a11y data in the UI

Now, since these interfaces are not real (no actual widget implements them), they won’t show up automatically in the UI.

This can be easily solved by adding a new “a11y” tab to the object editor which only shows a11y interface properties.

Cambalache screenshot showing the a11y tab with all properties

At this point it is possible to create and edit accessibility metadata for any UI, but as Emmanuelle pointed out, not every a11y property and relation is valid for every role.

@xjuan @GTK make sure you're not setting accessible properties/relations that do not match the roles that define them; GTK will use the role to read attributes, but we're missing a strong validation suite

To know what is valid or not, you need to read the WAI-ARIA spec or write a script that pulls all the metadata from it.

With this metadata handy, it is easy to filter properties and relations depending on the a11y role.

Cambalache screenshot showing the a11y tab with properties filtered by accessible role

By the way, keep in mind that the accessible-role property should not be changed under normal circumstances.
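Conceptually, the filtering is just a lookup from accessible role to the properties and relations the spec allows for it. Here is a tiny sketch with placeholder table contents (the real data comes from the WAI-ARIA metadata):

# Illustrative only: role -> allowed a11y properties pulled from WAI-ARIA metadata
ALLOWED_BY_ROLE = {
    "button": {"description", "has-popup"},    # placeholder entries
    "checkbox": {"description", "checked"},    # placeholder entries
}

def a11y_properties_for_role(role):
    # Only offer the properties that are valid for the object's accessible role
    return ALLOWED_BY_ROLE.get(role, set())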

Where to get it?

From Flathub

flatpak remote-add --if-not-exists flathub https://dl.flathub.org/repo/flathub.flatpakrepo

flatpak install flathub ar.xjuan.Cambalache

or directly from GitLab:

git clone https://gitlab.gnome.org/jpu/cambalache.git

Matrix channel

Have any questions? Come chat with us at #cambalache:gnome.org

Mastodon

Follow me on Mastodon (@xjuan) to get news related to Cambalache development.

Happy coding!

Hackweek 24

It's time for a new Hack Week. Hack Week 24 ran from November 18th to November 22nd, and I decided to join the New openSUSE-welcome project this time.

The idea of this project is to revisit the existing openSUSE welcome app, and I've been trying to help here, specifically for the GNOME desktop installation.

openSUSE-welcome

Right now after installing any openSUSE distribution with a graphical desktop, the user is welcomed on first login with a custom welcome app.

This custom application is a Qt/QML app with some basic information and useful links.

The same generic application is used for all desktops, but upstream applications for this purpose now exist for the popular desktops, so we talked about it on Monday morning and decided to use desktop-specific apps.

So for GNOME, we can use the GNOME Tour application.

gnome-tour

GNOME Tour is a simple Rust/GTK 4 application with some fancy images in a slideshow.

This application is generic and just shows information about the GNOME desktop, so I created a fork for openSUSE to do some openSUSE-specific customization and use this application as the openSUSE welcome app on the GNOME desktop for Tumbleweed and Leap.

Desktop patterns, the welcome workflow

After some testing and investigation, here is the current workflow for the welcome app and how we plan to change it:

  1. The x11_enhanced pattern recommends the opensuse-welcome app.
  2. We can add a Recommends: gnome-tour to the gnome pattern.
  3. The application runs via XDG autostart, so the gnome-tour package should put its file in /etc/xdg/autostart and set it to hidden on close.
  4. On a system with multiple desktops, we can choose the specific welcome app using the OnlyShowIn/NotShowIn keys in the desktop file (see the sketch below).
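A minimal sketch of what the gnome-tour autostart entry could look like (the keys and values are illustrative, not the final openSUSE packaging):

[Desktop Entry]
Type=Application
Name=Tour
Exec=gnome-tour
OnlyShowIn=GNOME;

The opensuse-welcome autostart file would conversely carry NotShowIn=GNOME; so that each desktop only shows its own welcome app.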

So I've created a draft PR so that the openSUSE-welcome app is not shown in GNOME, and I also have the gnome-tour fork in my home OBS project.

I've been testing this configuration in Tumbleweed with GNOME, KDE and XFCE installed, and it works as expected: openSUSE-welcome is shown in KDE and XFCE, and the gnome-tour app is only shown in GNOME.

Next steps

The next steps to have the GNOME Tour app as default welcome for openSUSE GNOME installation are:

  1. Send the forked gnome-tour package to the GNOME:Next project in OBS.
  2. Add Recommends: gnome-tour to patterns-gnome in the GNOME:Next project in OBS.
  3. Make sure that no other welcome application is shown in GNOME.
  4. Review the openQA tests that expect opensuse-welcome and adapt them for the new application.

Adrien Plazas

@Kekun

Capitole du Libre and discrimination

On the weekend of November 16 and 17, 2024, I had the pleasure of going to Capitole du Libre (CdL), a lovely conference held every year in Toulouse, and I want to look back on it. CdL brings together the French free software community, with notable representation from the non-profit world. There is an associative village gathering a large part of the French free culture community beyond software, technical talks accessible to all levels of knowledge, and more political talks that invite us to reflect on and push the free software movement forward. This makes it a very jovial and friendly conference that you can attend with your family.

I have taken part in CdL in the past, running booths to talk about GNOME on smartphones in 2018, 2019 and 2022. In 2019 I also gave a talk about my work to bring GNOME to smartphones, and in 2022 I had the pleasure of interviewing David Revoy in the conference hallways. It is a conference I appreciate and enjoy taking part in, and 2024 was the first edition I attended as a visitor.

Memorable talks

For once, I came to CdL to attend talks, and a few of them left a pleasant mark on me. Here is a brief chronological recap of the high points of my visit.

On Saturday, Valentin Deniaud gave a biting and very funny talk about the fight of the free software vendor Entr’ouvert against Orange, recounting how the telecom giant was condemned, after decades of struggle, for not respecting a free license. The talk, full of anecdotes and humor, paints a laborious journey through various areas of law and the French justice system.

The same afternoon, Armony Altinier and Jean-Philippe Mengual demonstrated the accessibility limitations of the Moodle learning platform for blind people. The talk mostly explains how people using this platform to offer educational material can adapt teaching content and evaluation methods for greater inclusivity.

On Sunday, a speaker representing the Skeptikón association drew the links between free software, skepticism and politics. The talk invites us to face the dark times heralded by the global rise of fascism, presenting various ways of confronting it such as skepticism, free software activism and unionism, with the emphasis placed on the need to fight feelings of futility and fatalism by building a sense of “us”.

It was followed by a talk by Isabella Vanni describing how April works and its journey toward better including gender diversity. Starting from the origins of computing as devalued, feminized work, up to its revaluation and reappropriation by men in the 1980s, she presented the mechanisms that hold back the re-inclusion of women in the field and ways to fight against them.

The conference continued with a talk by Khrys, offering a dive into the feminine origins of computing and weaving links with artificial intelligence and Luddism. Through the prism of the Prometheus myth and its various reinterpretations in science fiction, she confronted patriarchal and feminine visions of technical innovation, inviting us to be technocritical of artificial intelligence through a feminist approach.

Good intentions on inclusion, without follow-through

CdL has a code of conduct that every participant must respect, whether they are visitors, speakers or organizers. The code of conduct states that the organizers want to avoid any type of discrimination, that failure to respect these rules of decorum may lead to exclusion from the event, and it invites us to report any discrimination we are victims of or witnesses to.

All of this is commendable, but its implementation poses some problems for me. How do you give feedback when it is the organization of the event as a whole that is discriminatory, preventing whole segments of the population from accessing it? How do you give feedback when the discrimination happens on the main stage, caused by speakers during talks or the round table, under the eyes of an organization that lets it happen without responding? Because there is discrimination at CdL, a lot of it even, but it is not necessarily where the organization expects it.

While this is not my first time at CdL, it was my first as a visitor, and comparing these two experiences made me realize the blinders we can have about how an event unfolds when we are busy running it, whether out of a desire not to make waves or because we have our head down in the work. So I am convinced of the honesty of the CdL organizers’ intentions, just as I am convinced that the problems are structural, and common to free software circles and to far too many socialist circles. Go to any conference and you will find some of these problems, and probably others as well.

For these reasons, I decided to give feedback on the problems I witnessed not to the CdL organization directly, but through this article, to call on the entire free software community to question itself. In the rest of this article, I will try to explain what I see as the event’s problems and offer some avenues to remedy them.

Erasure of the struggles of free software workers

Saturday ended with a round table on governance models for free software projects. While the exchanges were interesting, we heard several times that there are no layoffs in free software. I do not understand where such a claim can come from, if not from a very limited view of what happens in our circles.

I know far too many people who were laid off while working for free software multinationals, but also for non-profit organizations, cooperatives and even small associations. The waves of layoffs at Red Hat and Mozilla in recent years should be enough to convince you, to cite only the most publicized examples. And beyond layoffs, free software workers are also victims of harassment and of being sidelined.

And that is without mentioning the precarity of the field, which is greater than in other areas of computing. Precarious contracts, disguised salaried employment, freelancing, exploitation of passion work, volunteer work, alternating between employment and non-employment, all without necessarily any unemployment benefits in between. All of this is very common. I recognized a lot of myself when reading Te plains pas, c’est pas l’usine; even though I have never worked in the non-profit sector, there is very similar exploitation in free software circles, an exploitation founded on a feeling of duty toward the common good and on injunctions to self-denial and self-sacrifice. Yet while these are certainly struggles, they are above all work relationships within a capitalist economy.

We talk all the time about free licenses, but far too little about the laid-off workers of free software, and this kind of discourse disconnected from the reality of our circles contributes to making their struggles invisible. Out of solidarity, we should take an interest in them. Perhaps this misstep would not have happened if the table were actually round and included the audience, instead of being a top-down discussion between four big names of the field.

Discomfort for disabled people

For all sorts of reasons, not all of us have enough energy to get through a full day, let alone a conference day. The rare places to rest are the inner courtyard and some concrete benches in the noisy, busy and brightly lit main hall. Conferences can be exhausting for me, and when at certain points I had to find a quiet place to rest away from the cold, the best I could find was the floor of a busy corridor, which as you can imagine is not ideal.

There are plenty of inexpensive things that could be done to improve this. A somewhat isolated rest room with chairs, quiet, dimly lit and clearly signposted, would be more than welcome. In addition, or at a minimum, having a few chairs placed here and there would allow better quality and more frequent rest. Also, having more bar tables near the refreshment stand would let people eat their crêpe and drink their coffee without occupying one of the rare seats just to free up their hands.

I must praise the CdL organization for providing clear and audible sound, which as a hard-of-hearing person I appreciate. That said, the sound was sometimes far too loud, notably during the screening of promotional videos where the sound added nothing relevant. Being also auditorily hypersensitive, I must admit that these rare moments were particularly painful and added to my fatigue. I did not have time to reach for my earplugs; the sound was so loud and uncomfortable that I had no choice but to plug my ears with my fingers while waiting for it to pass. Paying more attention to the sound level would be welcome.

Stigmatization of psychiatrized people

During one talk, a slide declared that “technological progress is like an axe in the hands of a psychopath”, illustrated by an image of Elon Musk smashing through a door with an axe, referencing the film The Shining. The presenter asserted that this is a quote from Albert Einstein, as if to lend the remark extra weight. A bit later in the talk, and unless my memory fails me, the term was reused to describe people who use misogynistic language.

“Psychopath” is a sanist slur that stigmatizes psychiatrized people. It disqualifies a person by associating them with a shameful psychological defect, deserving at best contempt and rejection, at worst confinement or death. The mere fact of using this term helps legitimize the sanist system that psychiatrized people suffer under.

This term medicalizes behaviors perceived as deviant; applying it to Elon Musk to describe his actions amounts to explaining his fascism by a mental defect. Yet his behavior is explained very well by his social status, and trying to detach it from that in order to explain it by other means depoliticizes the situation. Elon Musk behaves this way because he is a fascist billionaire, a libertarian influencer, a white colonizer, a transphobe. The same goes for misogynists; I think any feminist will agree that patriarchy is not a question of pathology, and I am convinced that psychiatrized feminists do not appreciate being associated with misogynists. This psychiatrization of the political also takes shape in the injunction to “see a therapist”.

Yet by maintaining sanism, we maintain a system that aims in large part to lock up people fighting for their emancipation. Slaves fighting for their freedom were psychiatrized, women fighting against patriarchy were psychiatrized, homosexual people were psychiatrized, political opponents were or still are psychiatrized, trans people are still very actively psychiatrized. To maintain hierarchies, people in revolt against the dominations they suffer are medicalized, psychiatrized, locked up, medicated, straitjacketed and stripped of all agency. That is what hides behind the word “psychopath”. No fascist has ever been locked up for his ideas, no man for his misogyny.

When I went to briefly ask the presenter after the talk to stop using this term, they justified it by saying that it is a quote from Einstein. Comparing psychiatrized people to fascists and misogynists is not from Einstein, any more than it was Einstein who put the term into the presentation. Einstein said plenty of things; why, of all of them, keep that one? And if the goal is to have a fitting quote, why choose a sanist one dating from 1917, ignoring more than a century of struggles?

It would have been possible to choose any other quote, or even to do without one, but that is the one that was chosen. It would even have been possible to use the quote while commenting on it, pointing out that it is problematic and why, but at that point one might as well not use it, since it is beside the point of the talk. Above all, it would have been possible not to reuse the term later, outside of any quote serving as an excuse for its use. Psychiatrized people are not punchlines.

During another talk, we learned that at the Nuremberg trial, the Nazi Hermann Göring was, against all expectations, judged perfectly sane by the psychologists. The presenter was trying to show that political ideas are not a question of mental health, but did so without dismantling the very idea of mental health, leaving the impression that these people being judged sane was an anomaly. It therefore seems important to me to complete the picture by dismantling the very idea of mental health, which, as I tried to explain above, is above all a tool of oppression. To go further, I recommend reading the article L’abolition carcérale doit inclure la psychiatrie. To be clear, fighting against psychiatry does not mean denying the neurological or psychological difficulties people may face, nor the fact that the psychiatric system can sometimes help them. But if the psychiatric system can help them, it is because it is the only means currently at our disposal for mental care, mainly for reasons of legality, and that must not be used to deny that the psychiatric system is above all a system of control of bodies and minds and an extension of the carceral system. Care must happen despite psychiatry, not thanks to it. The fact that the Nazis were found sane by the Nuremberg trial psychologists completes and illustrates what I said above about the role of psychiatry as a tool of domination.

We also learned that the psychologists gave Hermann Göring an IQ of 138, with the presenter noting with surprise that Nazis are not necessarily stupid. Beyond the fact that the notion of intelligence is itself highly debatable, IQ is intrinsically a ranking tool designed by white bourgeois men to place themselves above others, reducing people to a single number that hides its calculation method and the biases built into it, while offering an illusion of scientificity. IQ has mainly been used in support of racism, ranking the intelligence of races to justify colonialism; one only has to look at a world map of IQs to be convinced. No surprise, then, that a white bourgeois man has a high IQ; the tool works as designed. And beyond IQ, reducing fascism to a question of intelligence once again depoliticizes the subject and stigmatizes the people who are in reality its victims.

I want to apologize to the first person mentioned in this section, whom I went to speak to briefly on the side after an otherwise commendable talk; I hope I did not add to the stress of the event, but out of anti-ableism I could not let the use of such a term pass. Likewise, I want to apologize to the second person and their audience for monopolizing the floor during the short time allotted for questions, after what was, again, an excellent talk.

Stigmatization of racialized people

During the Saturday evening round table, a speaker brought up the recent security backdoor injected into the xz software following a long infiltration. They seem to have retained one thing above all from that episode, namely the Chinese nationality of the person who infiltrated the project, since they thought it relevant to comment, feigning embarrassment, that since we were just among ourselves they could admit it: they get scared when they see contributions to the free software they maintain coming from people from the East, from Russia, from South Asia. This person knew perfectly well that the round table was being filmed and would be published, and they were able to say this without the slightest reaction from the other people on stage or from the event organizers. As for the audience, it was never given the floor during the entire round table, preventing any third-party response within the conference itself.

Erratum: there was in fact a round of questions that I had ended up forgetting, despite an intervention I had found salutary, which responded to a very liberal vision of inclusion shared on stage.

Virtually every country, every state practices infiltration, backdoor injection and viral attacks, including Western states, including France. I would be tempted to say that Western states are probably the primary source of them; one only has to look at the scope of the American Stuxnet virus to be convinced. Yet it is not software developed by the CIA or the French army that this person called into question, as if computing should remain above all a concern of Westerners. The problem, apparently, is South Asians. Let us say it plainly: what we witnessed during this round table is nothing more than shameless xenophobia, racism.

The worst part is that this racist filth was said in response to another person at the round table praising the contributions of people from war-torn regions of the Middle East. Plenty of our free software comrades come from Russia or China; others live there and endure the fascism of those states on a daily basis. And what about free software comrades from countries repressed by Western states, and by the USA in particular? Linux kernel maintainers were very recently removed from the project because they are Russian, yet there are no layoffs in free software, we were told at that same round table. Nor any racism, apparently, since the CdL organization did not react.

Later, this same person proudly told us that they mentor for Google Summer of Code. I have been a mentor for that program twice, and I know there is notable representation of South Asian people among the intern candidates. That worries me about the candidate selection this person may carry out, as well as about the quality of supervision and the treatment of the interns.

Exclusion of deaf people

Beyond being very interesting, Armony Altinier and Jean-Philippe Mengual’s talk was surreal for a very simple reason: deaf people had come to attend the only talk of the entire conference about accessibility, and absolutely nothing had been put in place by CdL to include them. Armony and Jean-Philippe found themselves compensating for the gap by opening a text editor and passing the keyboard back and forth to transcribe, as best they could, what the other was saying. Even if this reached its limits during the demonstrations, when the text editor had to be hidden and the keyboard was in use, their approach and adaptability were more than commendable in making up for the shortcomings of the conference organization.

Picture the scene: two people with different disabilities having to improvise to compensate for the conference’s lack of accessibility for people with yet another category of disability! And all of that, again, during the only talk of the entire conference related to disability, or more precisely to the lack of accessibility. Yet solutions exist: ideally, having French sign language interpreters to include deaf people, and having people do live transcription into subtitles to include hard-of-hearing people.

These solutions can admittedly be expensive in terms of labor, but even without means it is possible to improvise. Free transcription software exists, such as Live Captions, and even though such software is imperfect, having it on a dedicated external screen or on the presenters’ machine would limit the exclusion. And if we consider that this free software is not good enough, we should not hesitate to use proprietary software; inclusion must come before purism.

Moreover, writing the subtitles live and on-site would compensate for audio capture issues and would allow the video recordings to be published with their subtitles more quickly, to better include deaf and hard-of-hearing people. Finally, they are not the only disabled people who benefit from subtitles; many people with attention disorders find it easier to follow when there is both audio and subtitles, so having live subtitles would ease their participation and limit their fatigue.

Addendum: it was pointed out to me that French subtitles do a better job of including people whose first language is not French than French audio alone. I am well placed to know this, having managed in early October to follow a documentary thanks to its Italian subtitles even though I do not know the language. In the same way, it was pointed out to me that French sign language interpretation includes people whose first language is not French better than French subtitles do.

Exclusion of people who fear for their health, and the spread of epidemics

Hey, look under the rug: it’s covid. It never left; we swept it under there and act as if it no longer exists. Yet it is still a major cause of death, and long covid continues to disable a great number of people. I have friends and free software comrades who acquired disabilities, sometimes very severe ones, following “little flus” or covid, and they were not so-called “at risk” people. I am talking about permanent loss of smell, severe chronic fatigue, or greatly reduced mobility. And that is without mentioning other illnesses such as whooping cough, or the deaths. We keep acting as if nothing happened; we learned nothing from the start of the covid pandemic, and we are becoming eugenicists again for the sake of a fantasized comfort.

I can wear an FFP2 mask as much as I want during the conference; it mostly protects you from the viruses I might spread, it does not protect me from the ones you spread. Yet preventing the spread of airborne diseases, the discomfort they cause, the disabilities and the deaths does not require much. Airing out enclosed spaces as much as possible and masking in busy, confined places such as public transport, supermarkets or conferences would be enough to greatly reduce the number of infections. But for this to work, we still need to protect one another. Refusing to mask is eugenicist: it means considering that disease is for other people, that you are strong, and that disabilities and deaths are acceptable. Above all, you must not wait for symptoms before masking; you can be an asymptomatic carrier and take part in spreading diseases, whether you end up developing them or not. Moreover, by the time you develop an illness such as covid or the flu, you have already been spreading it for several days. Wearing a mask when you can and when it is relevant has become a radical act of community care, which is depressing.

The CdL organization contributes to this situation. I saw no recommendation to mask, no filtration system, no ventilation, not even open windows, which literally cost nothing. It is not for lack of awareness-raising, documentation and action from the Association pour la Réduction des Risques Aéroportés. Cabrioles and Autodéfense Sanitaire also provide resources on the subject. Self-defense cannot be individual, and without collective action everyone is vulnerable; without the event organizers taking health risks seriously, no one is protected.

Beyond that, by gathering a substantial number of participants from all over the country in packed rooms without the slightest prevention measure, CdL actively contributes to the spread of epidemics.

Erratum: FFP2 masks do indeed protect their wearers, but over a whole day they are only truly effective if everyone plays along.

Addendum: not everyone who would like to mask can do so, which is why everyone who can should mask in order to protect them. The goal is not to do things perfectly but to do them as best we can, and at the moment we are collectively lamentable. The event organizers have the power to help turn the tide, to raise awareness in our circles, to protect and include our comrades.

A conference of white guys

The talks I singled out as memorable might suggest gender parity among the speakers, but nothing could be further from the truth. Anti-patriarchal activists unfortunately have to do an enormous amount of work for their inclusion, and the CdL's organisation already got called out a few years ago for turning down talks by women while, at the same time, granting several to men.

During his keynote, the presenter explained to us that AI is all the rage, that that is where the funding currently is, and that consequently, for their own good, free software projects ought to follow VLC's example and add AI features. The demonstration left me unconvinced. The next day, in the same room and in the same time slot, Khrys gave a talk inviting us to be techno-critical of AI from a feminist angle, underlining the necessity of free software. Khrys's talk was relevant, punchy, interesting, stimulating and salutary, inviting us to push back against capitalism and patriarchy rather than accommodate them as the previous day's keynote had. Similar topic, different angle: the second talk was, in my view, far better, but it was to a man rather than a woman that the organisation chose to give the keynote slot, a slot with no other talks competing against it. Khrys, who had to share the audience with the many other talks in her slot, still managed to fill the room.

In the same way, while Isabella Vanni's talk on the inclusion of women and gender minorities at April was given the amphitheatre, it was scheduled against a talk on internet censorship in France that filled its room, leaving an almost deserted amphitheatre for Isabella. Even though the internet censorship talk was cancelled at the last minute, this scheduling failure was noted and criticised.

The conference closed with a slide announcing 1,200 participants at this edition, in the masculine French form; we will not know how many participantes there were. The people who made those slides probably did not attend Isabella Vanni's talk, which explained very clearly how the supposedly neutral masculine actively contributes to the absence of women from computing and free software circles.

Another sadly notable point is the whiteness of the speakers. The conference is a genuinely white in-group; no wonder racist remarks could be made during the round table without provoking the slightest reaction, and it is a safe bet that this contributes to racialised people not participating more.

I gather that some conferences actively go looking for people to present, genuinely putting the diversity of the community forward and thereby helping to restore it by normalising the presence, visibility and voices of minoritised people. I have heard good things about MiXiT and Paris Web from an inclusivity point of view; perhaps there is something to learn from the way they organise themselves? I am not saying that the CdL's organisation does not care about speaker diversity, but I am convinced that others manage it far better, and that those conferences should serve as reference points.

The CdL has many parallel tracks, but I wonder whether this quantity does not come at the expense of the conference's quality, despite the diversity of topics covered. Perhaps it would be better to have fewer tracks but fuller rooms, in particular the amphitheatre hosting the main track? I can accept that reducing the number of tracks would increase the occupancy of the already packed classrooms, which would indeed be a problem, but perhaps there are other lecture halls that could be used? I imagine that if the CdL's organisation could have had more of them it would not have turned them down, so presumably only one was available.

You will note that if half of the talks that stood out to me were given by women, despite a very large majority of male speakers, it means that on average I found the talks given by women to be of higher quality. If I were feeling cheeky, I would suggest that reducing the number of talks while giving priority to non-guys and non-white people would raise the quality of the conference. Come on, let's be cheeky: I do suggest it. But to be clear, I am not saying that parity and inclusivity must be achieved in order to have a better conference; I am simply noting that a better conference would be a beneficial side effect of achieving them.

Conclusion

I have deliberately chosen not to name the people who committed these missteps, because I do not want to go after them but after the problems. We live in patriarchal, racist, sanist, ableist societies, and the free software community is no exception. I do not want to fight people but systems, and the discourses that sustain them. I hope that people who recognise themselves in this article will not take my remarks as attacks but as a call to pay more attention. There are certainly plenty of other problems I did not see, either because I was not aware of them, because I did not witness them, or because I lack the necessary distance or lived experience. For instance, I have no idea how accessible the conference is by wheelchair. In any case, my goal is not to produce an exhaustive list but to give an account of my experience of the conference, pointing at things I find serious and that I think should be taken seriously. I also want to apologise for not sourcing and referencing my statements more: this article was written in a hurry, writing it wore me out, and I no longer have the energy for more research.

I sincerely believe in the CdL's desire for inclusion, just as I know that we live in societies where oppression is so normalised that it is invisible to the majority. I nevertheless call on the CdL's organisation to question itself: stated intentions of inclusivity must not remain words on a web page, they must be actively put into practice. I do not particularly want to throw stones at its organisers; the CdL is a conference I sincerely love, and this kind of problem is unfortunately extremely widespread, not only in society but also in free software circles. In saying that, I mean to point at the entirety of the free software and free culture movements.

A major free software conference, FOSDEM hosts thousands of people, probably more than 10,000, in an incredibly undersized space. It is a highly international event, with participants coming from all over the world. FOSDEM is a genuine venue for the international exchange of epidemics, where people half-jokingly say that you have not fully experienced the conference if you do not come home with the FOSDEM flu. FOSDEM's organisation deliberately turns a blind eye to the problem and has absolutely no health policy, which makes it actively complicit in the spread of epidemics and pandemics. To that complicity must be added that of the free software companies that encourage, if not force, their employees to attend the Brussels conference.

Even though it is an intimate event with around fifty participants, I caught covid again during Berlin Mini GUADEC 2024. The protective measures in place were once again insufficient; from memory only four of us were masking, while we had to spend entire days in the same poorly ventilated space. Once again, I was helping protect people who refused to grant me the same protection by not masking, and the organisation is responsible for the inadequacy of the measures that were put in place.

I am not asking for conferences to be perfect, none ever will be, and I certainly do not claim I could do as well, let alone better. I want to commend the CdL's organisation for having put on an event nice enough that we want to see it keep moving forward, even if it has to be shaken up a bit so that it becomes genuinely inclusive. I hope the CdL team will not take the problems I am raising as attacks, just as I hope that other free software conferences will make sure not to repeat the same mistakes. I also hope the avenues for improvement I have suggested will help; I do not claim they are all easy to put in place, but I am willing, at my own scale and with the energy I have, to make myself available to help the organisation of the CdL, or of another conference, figure out how to improve things.

hidreport and hut: two crates for handling HID Report Descriptors and HID Reports

A while ago I was looking at Rust-based parsing of HID reports but, surprisingly, outside of C wrappers and the usual cratesquatting I couldn't find anything ready to use. So I figured, why not write my own, NIH style. Yay! Gave me a good excuse to learn API design for Rust and whatnot. Anyway, the result of this effort is the hidutils collection of repositories which includes commandline tools like hid-recorder and hid-replay but, more importantly, the hidreport (documentation) and hut (documentation) crates. Let's have a look at the latter two.

Both crates were intentionally written with minimal dependencies; they currently only depend on thiserror, and arguably even that dependency could be removed.

HID Usage Tables (HUT)

As you know, HID Fields have a so-called "Usage" which is divided into a Usage Page (like a chapter) and a Usage ID. The HID Usage tells us what a sequence of bits in a HID Report represents, e.g. "this is the X axis" or "this is button number 5". These usages are specified in the HID Usage Tables (HUT) (currently at version 1.5 (PDF)). The hut crate is generated from the official HUT json file and contains all current HID Usages together with the various conversions you will need to get from a numeric value in a report descriptor to the named usage and vice versa. Which means you can do things like this:

  let gd_x = GenericDesktop::X;
  let usage_page = gd_x.usage_page();
  assert!(matches!(usage_page, UsagePage::GenericDesktop));
  
Or the more likely need: convert from a numeric page/id tuple to a named usage.
  let usage = Usage::new_from_page_and_id(0x1, 0x30); // GenericDesktop / X
  println!("Usage is {}", usage.name());
  
90% of this crate is the various conversions from a named usage to the numeric value and vice versa. It's a huge crate in that there are lots of enum values, but the actual functionality is relatively simple.
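As a tiny illustration of that (a sketch only, reusing just the calls shown above; the u16 argument types and the import path are my assumptions, and the real API's error handling may differ), a helper that turns a numeric page/id pair from a report descriptor into a readable label could look like this:

  // Sketch: map a numeric usage page/id pair to its HUT name.
  // Argument types and import path are assumed; the real hut API may differ.
  use hut::Usage;

  fn usage_label(page: u16, id: u16) -> String {
      let usage = Usage::new_from_page_and_id(page, id); // as in the snippet above
      format!("{:#06x}/{:#06x} is {}", page, id, usage.name())
  }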

hidreport - Report Descriptor parsing

The hidreport crate is the one that can take a set of HID Report Descriptor bytes obtained from a device and parse the contents, or extract the value of a HID Field from a HID Report, given the HID Report Descriptor. So, assuming we have a bunch of bytes that are a HID report descriptor read from the device (or sysfs), we can do this:

  let rdesc: ReportDescriptor = ReportDescriptor::try_from(bytes).unwrap();
  
I'm not going to copy/paste the code to run through this report descriptor but suffice to say it will give us access to the input, output and feature reports on the device, together with every field inside those reports. Now let's read from the device and parse the data for whatever the first field is in the report (this is obviously device-specific, could be a button, a coordinate, anything):
   let input_report_bytes = read_from_device();
   let report = rdesc.find_input_report(&input_report_bytes).unwrap();
   let field = report.fields().first().unwrap();
   match field {
       Field::Variable(var) => {
          let val: u32 = var.extract(&input_report_bytes).unwrap().into();
          println!("Field {:?} is of value {}", field, val);
       },
       _ => {}
   }
  
The full documentation is of course on docs.rs and I'd be happy to take suggestions on how to improve the API and/or add features not currently present.
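If it helps, here is a small sketch (untested, with assumed import paths and simplified error handling) of how the same calls compose to walk every field of an input report rather than just the first:

  // Sketch only: iterate all fields of the matching input report using the
  // same hidreport calls shown above. Import paths are assumptions; adjust
  // to the crate's actual module layout.
  use hidreport::{Field, ReportDescriptor};

  fn dump_input_report(rdesc: &ReportDescriptor, input_report_bytes: &[u8]) {
      let report = rdesc.find_input_report(input_report_bytes).unwrap();
      for field in report.fields() {
          if let Field::Variable(var) = field {
              // extract() pulls this field's bits out of the report bytes
              let val: u32 = var.extract(input_report_bytes).unwrap().into();
              println!("Field {:?} is of value {}", field, val);
          }
      }
  }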

hid-recorder

The hidreport and hut crates are still quite new but we have an existing test bed that we use regularly. The venerable hid-recorder tool has been rewritten twice already: Benjamin Tissoires' first version was in C, then a Python version of it became part of hid-tools, and now we have the third version, written in Rust. It has a few nice features over the Python version and we're using it heavily for e.g. udev-hid-bpf debugging and development. An example output of that is below, and it shows that you can get all the information out of the device via the hidreport and hut crates.

$ sudo hid-recorder /dev/hidraw1
# Microsoft Microsoft® 2.4GHz Transceiver v9.0
# Report descriptor length: 223 bytes
# 0x05, 0x01,                    // Usage Page (Generic Desktop)              0
# 0x09, 0x02,                    // Usage (Mouse)                             2
# 0xa1, 0x01,                    // Collection (Application)                  4
# 0x05, 0x01,                    //   Usage Page (Generic Desktop)            6
# 0x09, 0x02,                    //   Usage (Mouse)                           8
# 0xa1, 0x02,                    //   Collection (Logical)                    10
# 0x85, 0x1a,                    //     Report ID (26)                        12
# 0x09, 0x01,                    //     Usage (Pointer)                       14
# 0xa1, 0x00,                    //     Collection (Physical)                 16
# 0x05, 0x09,                    //       Usage Page (Button)                 18
# 0x19, 0x01,                    //       UsageMinimum (1)                    20
# 0x29, 0x05,                    //       UsageMaximum (5)                    22
# 0x95, 0x05,                    //       Report Count (5)                    24
# 0x75, 0x01,                    //       Report Size (1)                     26
... omitted for brevity
# 0x75, 0x01,                    //     Report Size (1)                       213
# 0xb1, 0x02,                    //     Feature (Data,Var,Abs)                215
# 0x75, 0x03,                    //     Report Size (3)                       217
# 0xb1, 0x01,                    //     Feature (Cnst,Arr,Abs)                219
# 0xc0,                          //   End Collection                          221
# 0xc0,                          // End Collection                            222
R: 223 05 01 09 02 a1 01 05 01 09 02 a1 02 85 1a 09 ... omitted for brevity
N: Microsoft Microsoft® 2.4GHz Transceiver v9.0
I: 3 45e 7a5
# Report descriptor:
# ------- Input Report -------
# Report ID: 26
#    Report size: 80 bits
#  |   Bit:    8       | Usage: 0009/0001: Button / Button 1                          | Logical Range:     0..=1     |
#  |   Bit:    9       | Usage: 0009/0002: Button / Button 2                          | Logical Range:     0..=1     |
#  |   Bit:   10       | Usage: 0009/0003: Button / Button 3                          | Logical Range:     0..=1     |
#  |   Bit:   11       | Usage: 0009/0004: Button / Button 4                          | Logical Range:     0..=1     |
#  |   Bit:   12       | Usage: 0009/0005: Button / Button 5                          | Logical Range:     0..=1     |
#  |   Bits:  13..=15  | ######### Padding                                            |
#  |   Bits:  16..=31  | Usage: 0001/0030: Generic Desktop / X                        | Logical Range: -32767..=32767 |
#  |   Bits:  32..=47  | Usage: 0001/0031: Generic Desktop / Y                        | Logical Range: -32767..=32767 |
#  |   Bits:  48..=63  | Usage: 0001/0038: Generic Desktop / Wheel                    | Logical Range: -32767..=32767 | Physical Range:     0..=0     |
#  |   Bits:  64..=79  | Usage: 000c/0238: Consumer / AC Pan                          | Logical Range: -32767..=32767 | Physical Range:     0..=0     |
# ------- Input Report -------
# Report ID: 31
#    Report size: 24 bits
#  |   Bits:   8..=23  | Usage: 000c/0238: Consumer / AC Pan                          | Logical Range: -32767..=32767 | Physical Range:     0..=0     |
# ------- Feature Report -------
# Report ID: 18
#    Report size: 16 bits
#  |   Bits:   8..=9   | Usage: 0001/0048: Generic Desktop / Resolution Multiplier    | Logical Range:     0..=1     | Physical Range:     1..=12    |
#  |   Bits:  10..=11  | Usage: 0001/0048: Generic Desktop / Resolution Multiplier    | Logical Range:     0..=1     | Physical Range:     1..=12    |
#  |   Bits:  12..=15  | ######### Padding                                            |
# ------- Feature Report -------
# Report ID: 23
#    Report size: 16 bits
#  |   Bits:   8..=9   | Usage: ff00/ff06: Vendor Defined Page 0xFF00 / Vendor Usage 0xff06 | Logical Range:     0..=1     | Physical Range:     1..=12    |
#  |   Bits:  10..=11  | Usage: ff00/ff0f: Vendor Defined Page 0xFF00 / Vendor Usage 0xff0f | Logical Range:     0..=1     | Physical Range:     1..=12    |
#  |   Bit:   12       | Usage: ff00/ff04: Vendor Defined Page 0xFF00 / Vendor Usage 0xff04 | Logical Range:     0..=1     | Physical Range:     0..=0     |
#  |   Bits:  13..=15  | ######### Padding                                            |
##############################################################################
# Recorded events below in format:
# E: .  [bytes ...]
#
# Current time: 11:31:20
# Report ID: 26 /
#                Button 1:     0 | Button 2:     0 | Button 3:     0 | Button 4:     0 | Button 5:     0 | X:     5 | Y:     0 |
#                Wheel:     0 |
#                AC Pan:     0 |
E: 000000.000124 10 1a 00 05 00 00 00 00 00 00 00
  

Richard Hughes

@hughsie

Firmware SBOMs for open source projects

You might be surprised to hear that closed source firmware typically contains open source dependencies. In the case of EDK II (probably the BIOS of your x64 machine you’re using now) it’s about 20 different projects, and in the case of coreboot (hopefully the firmware of the machine you’ll own in the future) it’s about another 10 — some overlapping with EDK II. Examples here would be things like libjpeg (for the OEM splash image) or libssl (for crypto, but only the good kind).

It makes no sense for each person building firmware to write the same SBOM for the OSS code. Moving the SBOM upstream means it can be kept up to date by the same team writing the open source code. It’s very similar to what we encouraged desktop application developers to do with AppStream metadata a decade or so ago. That was wildly successful, so maybe we can do the same trick again here.

My proposal would be to submit a sbom.cdx.json to each upstream project in CycloneDX format, stored in a location amenable to the project, e.g. in ./contrib, ./data/sbom or even in the root project folder. The location isn’t important, only the file suffix needs to be predictable.

Notice the word CycloneDX there, not SPDX: the latter is great for open source license compliance, but I was only able to encode 43% of our “example firmware SBOM” into SPDX format, even with a lot of ugly hacks. I spent a long time trying to jam a round peg into a square hole and came to the conclusion that it’s not going to work very well. SPDX works great as an export format to ensure license compliance (and the uswid CLI can already do that now…) but SPDX doesn’t work very well as a data source. CycloneDX is just a better designed format for a SBOM, sorry ISO.

Let’s assume we check in a new file to ~30 projects. With my upstream-maintainer hat on, nobody likes to manually edit yet-another-file when tagging releases, so I’m encouraging projects shipping a CycloneDX sbom.cdx.json to use some of the auto-substituted tokens, e.g.

  • @VCS_TAG@: git describe --tags --abbrev=0, e.g. 1.2.3
  • @VCS_VERSION@: git describe --tags, e.g. 1.2.3-250-gfa2371946
  • @VCS_BRANCH@: git rev-parse --abbrev-ref HEAD, e.g. staging
  • @VCS_COMMIT@: git rev-parse HEAD, e.g. 3090e61ee3452c0478860747de057c0269bfb7b6
  • @VCS_SBOM_AUTHORS@: git shortlog -n -s -- sbom.cdx.json, e.g. Example User, Another User
  • @VCS_SBOM_AUTHOR@: @VCS_SBOM_AUTHORS@[0], e.g. Example User
  • @VCS_AUTHORS@: git shortlog -n -s, e.g. Example User, Another User
  • @VCS_AUTHOR@: @VCS_AUTHORS@[0], e.g. Example User

Using git in this way during the build process also allows us to “fixup” SBOM files that have missing details, or where the downstream ODM patches the project to do something upstream wouldn’t be happy to ship.
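To make the idea concrete, here is a minimal sketch of such a build step that shells out to git and rewrites those tokens. This is not how uswid actually implements it; the file paths and the token subset are illustrative only:

  // Hypothetical build-step sketch: replace @VCS_*@ tokens in a checked-in
  // sbom.cdx.json with values taken from git. Paths and token set are
  // examples, not the real tooling.
  use std::process::Command;

  fn git(args: &[&str]) -> String {
      let out = Command::new("git").args(args).output().expect("failed to run git");
      String::from_utf8_lossy(&out.stdout).trim().to_string()
  }

  fn main() -> std::io::Result<()> {
      let mut sbom = std::fs::read_to_string("contrib/sbom.cdx.json")?;
      for (token, value) in [
          ("@VCS_TAG@", git(&["describe", "--tags", "--abbrev=0"])),
          ("@VCS_VERSION@", git(&["describe", "--tags"])),
          ("@VCS_BRANCH@", git(&["rev-parse", "--abbrev-ref", "HEAD"])),
          ("@VCS_COMMIT@", git(&["rev-parse", "HEAD"])),
      ] {
          sbom = sbom.replace(token, &value);
      }
      // Write the substituted SBOM next to the build artefacts.
      std::fs::write("sbom.cdx.json", sbom)
  }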

For fwupd (which I’m using as a cute example, it’s not built into firmware…) the sbom.cdx.json file would be something like this:

{
  "bomFormat": "CycloneDX",
  "specVersion": "1.6",
  "version": 1,
  "metadata": {
    "authors": [
      {
        "name": "@VCS_SBOM_AUTHORS@"
      }
    ]
  },
  "components": [
    {
      "type": "library",
      "bom-ref": "pkg:github/fwupd/fwupd@@VCS_TAG@",
      "cpe": "cpe:2.3:a:fwupd:fwupd:@VCS_TAG@:*:*:*:*:*:*:*",
      "name": "fwupd",
      "version": "@VCS_VERSION@",
      "description": "Firmware update daemon",
      "supplier": {
        "name": "fwupd developers",
        "url": [
          "https://github.com/fwupd/fwupd/blob/main/MAINTAINERS"
        ]
      },
      "licenses": [
        {
          "license": {
            "id": "LGPL-2.1-or-later"
          }
        }
      ],
      "externalReferences": [
        {
          "type": "website",
          "url": "https://fwupd.org/"
        },
        {
          "type": "vcs",
          "url": "https://github.com/fwupd/fwupd"
        }
      ]
    }
  ]
}

Putting it all together means we can do some pretty clever things assuming we have a recursive git checkout using either git modules, sub-modules or sub-projects:

$ uswid --find ~/Code/fwupd --fixup --save sbom.cdx.json --verbose
Found:
 - ~/Code/fwupd/contrib/sbom.cdx.json
 - ~/Code/fwupd/venv/build/contrib/sbom.cdx.json
 - ~/Code/fwupd/subprojects/libjcat/contrib/spdx.json
Substitution required in ~/Code/fwupd/contrib/sbom.cdx.json:
 - @VCS_TAG@ → 2.0.1
 - @VCS_VERSION@ → 2.0.1-253-gd27804fbb
Fixup required in ~/Code/fwupd/subprojects/libjcat/spdx.json:
 - Add VCS commit → db8822a01af89aa65a8d29c7110cc86d78a5d2b3
Additional dependencies added:
 - pkg:github/hughsie/libjcat@0.2.1 → pkg:github/hughsie/libxmlb@0.2.1
 - pkg:github/fwupd/fwupd@2.0.1 → pkg:github/hughsie/libjcat@0.2.1
~/Code/fwupd/venv/build/contrib/sbom.cdx.json was merged into existing component pkg:github/fwupd/fwupd@2.0.1

And then we have a sbom.cdx.json that we can use as an input file for building the firmware blob. If we can convince EDK2 to merge the additional sbom.cdx.json for each built module then it all works like magic, and we can build the 100% accurate external SBOM into the firmware binary itself with no additional work. Comments most welcome.

Martin Pitt

@pitti

Learning web components and PatternFly Elements

Today at Red Hat is a day of learning again! I used the occasion to brush up my knowledge about web components and take a look at PatternFly Elements. I’ve eyed that for a long time already – using “regular” PatternFly requires React, and thus all the npm, bundler, build system etc. baggage around it. In Cockpit we support writing your own plugins with a simple static .html and .

Jiri Eischmann

@jeischma

We’re More Offline at Conferences, and That’s Probably a Good Thing

I’ve just been to two traditional Czech open source conferences – LinuxDays and OpenAlt – and I’ve noticed one interesting shift: the communication on social media during the conferences has disappeared.

After 2010, we suddenly all had a device in our pocket that we could easily use to share experiences and observations from anywhere. And at least at IT events, people started doing this a lot. Under the hashtag of the given conference, there was a stream of messages from participants about which talks they liked, where they could find a good place to eat in the area, what caught their attention among the booths. The event organizers used this to inform visitors, and the booth staff to attract people to their booth. I remember writing about what we had interesting at our booth, and people actually came to have a look based on that.

At the peak of this trend, the popular so-called Twitter walls were in use. These were typically web applications that displayed the latest messages under a given hashtag, and they ran on screens in the corridors or were projected directly in the lecture rooms, so that even those who weren’t following it on their mobile phones could keep track.

And today, all of this has practically disappeared. When I counted it after LinuxDays, there were a total of 14 messages with the hashtag on Mastodon during the conference, and only 8 on Twitter. During OpenAlt, there were 20 messages with the hashtag on Mastodon and 8 on Twitter. I also checked if it was running on Bluesky. There were a few messages with the hashtags of both conferences there, but except for one, they were all bridged from Mastodon.

In any case, these are absolutely negligible numbers compared to what we used to see ten years ago. Where did it all go? I thought about it and came up with four reasons:

  1. Microblogging is much more fragmented today than it was ten years ago. Back then, we were all on Twitter. That is now in decline. The open-source community has largely moved to Mastodon, but not entirely. Some are still on LinkedIn, some on Bluesky, etc. When there is no single place where everyone is, the effect of a universal communication channel disappears.
  2. Conference communication has partly shifted to instant messaging. This trend started 8-9 years ago. A group (typically on Telegram) was created for conference attendees, and it served for conference communication. Compared to a microblogging platform, this has the advantage that it is not entirely open communication. What happens at the conference, stays at the conference. It doesn’t take the form of publicly searchable messages. For some, this is a safer space than a social network. It’s also faster, with features like location sharing, etc. However, this mode of communication has also declined a lot. During OpenAlt, there were only 20 messages in its Telegram group.
  3. People are much more passive on social media today. Rather than sharing their own posts from the conference, they’d rather leave it to some influencer who will make a cool video from there, which everyone will then watch and like. All the major social networks have shifted towards a small group creating content for a passive majority. New platforms like TikTok have been functioning this way from the start.
  4. After Covid, people simply don’t have the same need to share their conference experiences online. They are somewhat saturated with it after the Covid years, and when they go somewhere, they don’t want to tap messages into their phone about how they’re doing there.

Overall, I don’t see it as a bad thing. Yes, it had its charm, and it was easier during the conference to draw attention to your booth or talk, but in today’s digital age, any shift towards offline is welcome. After all, conferences are there for people to meet in person. Otherwise, we could just watch the streams from home and write about them on social media. We’ve been there before, and it wasn’t quite right. 🙂

How do you see it? Do you also notice that you share less online from conferences?

Arun Raghavan

@arunsr

GStreamer Conference 2024

All of us at Asymptotic are back home from the exciting week at GStreamer Conference 2024 in Montréal, Canada last month. It was great to hang out with the community and see all the great work going on in the GStreamer ecosystem.

Montréal sunsets are 😍

There were some visa-related adventures leading up to the conference, but thanks to the organising team (shoutout to Mark Filion and Tim-Philipp Müller), everything was sorted out in time and Sanchayan and Taruntej were able to make it.

This conference was also special because this year marks the 25th anniversary of the GStreamer project!

Happy birthday to us! 🎉

Talks

We had 4 talks at the conference this year.

GStreamer & QUIC (video)

Sanchayan speaking about GStreamer and QUIC

Sanchayan spoke about his work with the various QUIC elements in GStreamer. We already have the quinnquicsrc and quinnquicsink upstream, with a couple of plugins to allow (de)multiplexing of raw streams, as well as an implementation of RTP-over-QUIC (RoQ). We’ve also started work on Media-over-QUIC (MoQ) elements.

This has been a fun challenge for us, as we’re looking to build out a general-purpose toolkit for building QUIC application-layer protocols in GStreamer. Watch this space for more updates as we build out more functionality, especially around MoQ.

Clock Rate Matching in GStreamer & PipeWire (video)

Arun speaking about PipeWire delay-locked loops
Photo credit: Francisco

My talk was about an interesting corner of GStreamer, namely clock rate matching. This is a part of live pipelines that is often taken for granted, so I wanted to give folks a peek under the hood.

The idea of doing this talk was born out of some recent work we did to allow splitting the graph clock in PipeWire from the PTP clock when sending AES67 streams on the network. I found the contrast between the PipeWire and GStreamer approaches thought-provoking, and wanted to share that with the community.

GStreamer for Real-Time Audio on Windows (video)

Next, Taruntej dove into how we optimised our usage of GStreamer in a real-time audio application on Windows. We had some pretty tight performance requirements for this project, and Taruntej spent a lot of time profiling and tuning the pipeline to meet them. He shared some of the lessons learned and the tools he used to get there.

Simplifying HLS playlist generation in GStreamer (video)

Sanchayan also walked us through the work he’s been doing to simplify HLS (HTTP Live Streaming) multivariant playlist generation. This should be a nice feature to round out GStreamer’s already strong support for generating HLS streams. We are also exploring the possibility of reusing the same code for generating DASH (Dynamic Adaptive Streaming over HTTP) manifests.

Hackfest

As usual, the conference was followed by a two-day hackfest. We worked on a few interesting problems:

  • Sanchayan addressed some feedback on the QUIC muxer elements, and then investigated extending the HLS elements for SCTE-35 marker insertion and DASH support

  • Taruntej worked on improvements to the threadshare elements, specifically to bring some ts-udpsrc element features in line with udpsrc

  • I spent some time reviewing a long-pending merge request to add soft-seeking support to the AWS S3 sink (so that it might be possible to upload seekable MP4s, for example, directly to S3). I also had a very productive conversation with George Kiagiadakis about how we should improve the PipeWire GStreamer elements (more on this soon!)

All in all, it was a great time, and I’m looking forward to the spring hackfest and conference in the latter part of next year!

Tim Janik

@timj

JJ-FZF - a TUI for Jujutsu

JJ-FZF is a TUI (Terminal-based User Interface) for Jujutsu, built on top of fzf. It centers around the jj log view, providing key bindings for common operations on JJ/Git repositories. About six months ago, I revisited JJ, drawn in by its promise of Automatic rebase and conflict resolution. I have…