Interview with Matthieu Herrb: Testing the X.Org Server

Original author: Sergey Bronnikov (translation)

This year Xorg, the free implementation of the X Window System, turns 30 years old. Despite the existence and development of alternatives, Xorg remains alive.

On the occasion of the anniversary, I asked a few questions of a person who has been working on the development of this project for 23 (!) years. His name is Matthieu Herrb. In addition to his participation in the X.Org project, he is also one of the originators of Xenocara, the OpenBSD project's own distribution of Xorg.



X.Org is a large and complex project. What does the development process look like?

The project is not that large compared to projects like Firefox, Chrome, or even GNOME or KDE.

The development process differs between the most actively developed components (the X server itself, a few libraries, and drivers) and legacy components like libXt and all the applications based on that library.

There is also constant interaction with two other groups: the developers of Mesa and of the DRM modules in the Linux kernel.

Ideas are discussed during developer meetings (once a year, in Europe or North America, the next will be in Bordeaux, France this September) or on the xorg-devel mailing list.

Over the past few years, we have adopted a development model very similar to the Linux kernel's: patches (generated using git format-patch) are sent to the mailing list for review, a discussion is held, and if agreement is reached in the discussion, the maintainer commits the patch.
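To make the workflow concrete, here is a minimal sketch of generating a patch the way described above. The repository, file names, and commit messages are invented for illustration; only the `git format-patch` step reflects the actual process mentioned in the interview.

```shell
# Toy repository standing in for an X.Org component checkout.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q demo && cd demo
git config user.email "you@example.com"
git config user.name "You"

echo 'base' > file.c
git add file.c && git commit -qm "initial import"
echo 'fix' >> file.c
git commit -qam "fix: handle the edge case"

# Turn the last commit into a mailable patch for list review:
git format-patch -1 -o patches/
ls patches/
```

The resulting `patches/0001-*.patch` file is what would then be sent to the mailing list (typically with `git send-email`) for review.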

There is one maintainer for the X server (currently Keith Packard). For the other components, getting commits accepted is simpler and, as a rule, it is enough for the author of a patch to propose it once and pass review.

To complete the picture: at present, the development of the popular drivers is almost entirely in the hands of companies (Intel, AMD, VMware), so engineers from those companies make most of the changes.

How many developers are involved in the development process?

Counting the Mesa developers, the developers of the graphics stack in the Linux kernel, and the X server developers, there are about 50-60 people who regularly commit to one of the repositories.

What does the testing process look like? Do you use regular testing (running tests on every commit) or is it intermittent testing?

We have several tools for continuous, automated testing, but they are not as effective as we would like.

What tools, tests, and test frameworks do you use? I found many test suites (e.g. XTS, rendercheck, glean, piglit, etc.) in the repositories (http://cgit.freedesktop.org/), but many of them look outdated. Do developers create tests on a regular basis for new functionality and for bug fixes?

In addition to all those existing test suites, which are usually too cumbersome to use on a regular basis, Peter Hutterer has developed a relatively new integration test suite for the X server, which is meant to be launched automatically from the X server build system (via 'make test') and on our Tinderbox server. The build.sh script used by many developers also runs these tests by default.
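For readers unfamiliar with the pattern, the idea of a 'make test' hook can be sketched with a toy Makefile. This is a generic illustration only; the X server's real build system is autotools-based and far more involved.

```shell
# Minimal stand-in for a build system with a 'make test' target:
# building first, then running a test script, as the X server's
# build hook does in spirit.
set -e
tmp=$(mktemp -d)
cd "$tmp"
printf 'all:\n\t@echo build done\n\ntest: all\n\t@sh run-tests.sh\n' > Makefile
printf '#!/bin/sh\necho "1 test passed"\n' > run-tests.sh
make test
```

The point is that `make test` depends on the build target, so a CI machine (like the Tinderbox server mentioned above) can drive build and tests with a single command.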

But given the huge range of supported systems (although their number has been steadily decreasing since the switch from XFree86 to X.Org), only a small subset of them receives actual regular testing.

Most tests are done by people who integrate X.Org into other systems and distributions.

That is my case, among others. I maintain X.Org in OpenBSD (and helped with NetBSD before that), so I test configurations that are not covered by the core X server developers, and I often find bugs that slip through testing, either because they are platform-specific (OpenBSD is one of the few systems that still run on exotic architectures such as VAX, m88k, or even sparc32), or simply because our malloc() implementation is able to catch errors that elude the tools used on Linux.

What types of testing are used (performance testing, functional testing, compatibility testing, stability testing, unit testing, etc.)?

The new test framework for the X server mainly uses unit testing and functional testing to ensure that the components of the X server work as expected, independently of the drivers.

Do you measure the code coverage of your tests?

No. Since the same person most often writes both the code and the tests, they have some idea of the coverage of that code, but there is no formal tool for measuring it.

How often do you test: from time to time or on a regular basis?

The Tinderbox platform was intended to run tests as often as possible, but most other tests are run manually from time to time.

How are new features tested?

New features in X? You're kidding, right? But seriously, new features are mostly added in the Mesa (OpenGL) code and the input drivers. Either new tests are added to the test suite at the same time as the feature code itself or, in the case of OpenGL, external conformance test suites are used.

Are you using Continuous Integration during the development process?

Yes, I have already mentioned Tinderbox several times, although it is far from perfect.

What tool do you use to deal with defects? Who is responsible for working with bugs?

We have a Bugzilla, supplemented by a Patchwork tracking system that makes sure no submitted patch gets forgotten or left unhandled.

X.Org sometimes finds security issues (http://www.x.org/wiki/Development/Security/). Do you use regular code audits?

Yes and no. :) As far as I know, X.Org does not have anyone dedicated to auditing the code on a regular basis. But some distributions (for example, Oracle/Solaris, represented by Alan Coopersmith) regularly run tools aimed at identifying security problems and contribute fixes back to the project. Sometimes, when a new class of vulnerability appears (such as format strings or integer overflows about 10 years ago), we do a big cleanup of the existing code to try to fix every instance of it.

We also get external help from independent security researchers who hunt for interesting vulnerabilities, and since the X server still runs with superuser privileges on many systems, this is still worthwhile.

Last year, Ilja van Sprundel reported a very large number of vulnerabilities in the X libraries and in the X server itself, mainly related to the lack of proper validation of X protocol messages.

Do you apply static code analysis?

The answer is similar to my previous one. Tinderbox does not run any static analyzers other than gcc with -Wall and some additional warning options. But some developers (including Alan from Oracle) have access to powerful static code analyzers and run them from time to time.

Coverity runs a program offering static analysis to free software projects. X.Org takes part in this program, and it has helped us find a number of problems.

X.Org supports many operating systems, some of them increasingly niche: Linux, FreeBSD, NetBSD, OpenBSD, Solaris, Microsoft Windows. How do you ensure it works reliably on all of them?


As I explained above, this is done by volunteers (or, in some cases, paid employees) from the various projects. Most developers focus on Linux, which has become the main development platform over the last 10 years. Personally, I am a little sorry that developers do not get more involved in supporting the other systems. In my experience there is much to be learned from developing for more than one platform, and from a code security standpoint, diversity matters a great deal (even if it increases development costs).

Who is responsible for the release of new versions? What are the criteria for the release?

There is a maintainer for the X server who is responsible for making releases. We currently work on a 6-month development cycle, so a new release comes out every 6 months. The previous release gets a -stable maintainer and is supported for roughly 12 more months.

In addition to the X server releases, we also make "katamari" releases: a complete, consistent set of libraries and utilities together with the X server. This happens once or twice a year. (The current katamari release is 7.7, based on X server 1.14.) But the need for katamari releases is often questioned, as distribution vendors tend to maintain their own equivalents (with a lot of upstream merges), regardless of the official ones released by X.Org.

The days when the XFree86 project provided binary builds for most supported systems (from SVR4 to Linux, including NetBSD, OS/2, and several others) are definitely over.

Tell us about the most interesting bug in your practice. :)

Working with code that was designed and implemented when security was not a big concern is not that interesting. The X server was originally very permissive (remember "xhost +"?). People did not worry about buffer overflows or other ways of exploiting coding errors. Features like the MIT-SHM extension were broken by design. (SHM has since been fixed with a new API based on file-descriptor passing.)

But the most interesting problem, from my point of view, is described in Loïc Duflot's talk at CanSecWest 2006, where he showed that even with the privilege separation I had added in OpenBSD, it remains possible to inject "simple" code to take control of the OS kernel, because the X server has direct access to the hardware.

This had always been known (I even mentioned it in my talk at RMLL in 2003), but the lack of a proof-of-concept (PoC) allowed many developers to ignore the problem.

Thank you for the answers, and I wish you fewer bugs in the code!

Thanks.

In conclusion, I want to add that, yes, X.Org is far from ideal in terms of testing. We are trying to make it better, but this is not the most attractive area for contributors, so things move slowly: most developers prefer to work on shinier things.
