HDR support on Linux has advanced more slowly over the years than many desktop users, creators, and gamers would have liked. The bottleneck has not really been the hardware. In many cases, modern GPUs and displays have been ready for quite some time. The harder part has been coordinating the entire graphics stack, from the kernel to the driver, compositor, and advanced color management. Against that backdrop, AMD has now taken an important technical step to improve color handling on Linux, especially in the KDE ecosystem, and one detail has attracted particular attention: part of the work was developed with help from Claude, Anthropic’s AI model.
The engineer behind this effort is Harry Wentland, a long-time AMD developer deeply involved in Linux graphics. His latest work focuses on improving the advanced color path needed for modern HDR workflows, which means touching several layers at once: the Linux kernel, the AMDGPU driver, and KDE’s KWin compositor. The result should not be read as “HDR on Linux is now solved,” because that would go too far. What it does represent is a meaningful step toward a more modern and capable color pipeline, one that still needs review, refinement, and upstream integration before it becomes part of the standard Linux experience.
A meaningful advance for Linux color management
To understand why this matters, it helps to look at the technical foundation. The Linux kernel’s DRM Color Pipeline API was designed to allow complex color transformations to be performed efficiently, using dedicated hardware blocks when possible instead of pushing more work onto shaders or the CPU. That matters a great deal for HDR, where correct output depends on more than just brightness. A proper HDR pipeline also needs accurate tone mapping, color-space conversion, gamma handling, and a clear way to describe how image data moves from one stage to the next.
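To make the "more than just brightness" point concrete: HDR content typically encodes pixel values with the SMPTE ST 2084 "PQ" transfer function rather than a classic gamma curve, and decoding it correctly is one of the stages such a pipeline has to get right. Here is a minimal Python sketch of the PQ EOTF (an illustration of the math, not AMD's implementation), mapping a nonlinear signal value to absolute luminance:

```python
def pq_eotf(signal: float) -> float:
    """SMPTE ST 2084 (PQ) EOTF: map a nonlinear signal in [0, 1]
    to absolute display luminance in cd/m^2 (nits)."""
    # Constants defined by ST 2084
    m1 = 2610 / 16384          # 0.1593017578125
    m2 = 2523 / 4096 * 128     # 78.84375
    c1 = 3424 / 4096           # 0.8359375
    c2 = 2413 / 4096 * 32     # 18.8515625
    c3 = 2392 / 4096 * 32     # 18.6875

    e = max(0.0, min(1.0, signal)) ** (1 / m2)
    y = (max(e - c1, 0.0) / (c2 - c3 * e)) ** (1 / m1)
    return 10000.0 * y         # PQ is defined up to 10,000 nits

print(pq_eotf(0.0))  # 0.0 nits (black)
print(pq_eotf(1.0))  # 10000.0 nits (PQ reference peak)
```

The curve spends most of its precision on dark and mid tones, which is exactly why a naive gamma assumption anywhere in the stack produces visibly wrong HDR output.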
Linux has historically had a harder time with this than more tightly controlled platforms. On Windows or macOS, the relationship between the graphics stack and the display pipeline is more centralized. On Linux, there are multiple compositors, multiple drivers, and many different combinations of hardware and desktop environments. That flexibility is one of Linux’s strengths, but it also makes consistent color management far more difficult. This is why progress in this area has often felt slow, even when the hardware itself was capable.
Wentland’s latest contribution focuses on adding support for color-space conversion, or CSC, through a new drm_colorop implementation. That may sound like a narrow technical detail, but it addresses a real limitation in the current stack. KWin can already receive video in formats such as NV12 or P010 through Wayland, but without proper support for this conversion step in the kernel color pipeline, the compositor has had to do more of the work itself. In practice, that means handling color conversion, tone mapping, scaling, and composition in OpenGL instead of offloading more of it to display hardware.
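To illustrate what that conversion step actually does: NV12 and P010 carry video as YCbCr, while the compositor and display work in RGB, so somewhere in the pipeline a per-pixel matrix conversion must happen. A rough Python sketch of a BT.709 limited-range YCbCr-to-RGB conversion follows; the exact coefficients depend on the color space and range the content signals, and this is only an illustration of the math a CSC block performs, not the kernel's code:

```python
def ycbcr709_to_rgb(y: int, cb: int, cr: int) -> tuple[int, int, int]:
    """Convert one 8-bit limited-range BT.709 YCbCr pixel to full-range RGB.
    Limited range: Y in [16, 235], Cb/Cr in [16, 240] centered on 128."""
    yf = 1.164 * (y - 16)          # expand limited-range luma
    cbf, crf = cb - 128, cr - 128  # center chroma on zero

    r = yf + 1.793 * crf
    g = yf - 0.213 * cbf - 0.533 * crf
    b = yf + 2.112 * cbf

    def clamp(v: float) -> int:
        return max(0, min(255, round(v)))

    return clamp(r), clamp(g), clamp(b)

print(ycbcr709_to_rgb(16, 128, 128))   # (0, 0, 0)      black
print(ycbcr709_to_rgb(235, 128, 128))  # (255, 255, 255) white
```

Doing this in a dedicated display hardware block is essentially free; doing it in OpenGL for every video frame costs GPU shader time and power, which is the trade-off the new drm_colorop support is meant to improve.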
By adding CSC support to the pipeline, Linux can move closer to doing that work where it belongs: in the hardware blocks designed for exactly this purpose. Wentland’s own examples go further by showing how a 3D LUT can be used to represent KWin’s internal color pipeline and then push that work into AMD display hardware. That is one of the more significant parts of the story, because it suggests a path toward more advanced hardware composition for HDR and better efficiency overall.
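A 3D LUT of the kind mentioned above is conceptually simple: a small cube of precomputed output colors, sampled with trilinear interpolation between the nearest lattice points. The sketch below is a hypothetical software illustration of that sampling step, not AMD's hardware path; with an identity LUT, sampling should reproduce the input color:

```python
def identity_lut(n: int):
    """Build an n*n*n identity 3D LUT mapping (r, g, b) to itself."""
    step = 1.0 / (n - 1)
    return [[[(i * step, j * step, k * step)
              for k in range(n)] for j in range(n)] for i in range(n)]

def sample_lut(lut, r: float, g: float, b: float):
    """Trilinearly interpolate a 3D LUT at an RGB point in [0, 1]^3."""
    n = len(lut)

    def split(x: float):
        x = max(0.0, min(1.0, x)) * (n - 1)
        i = min(int(x), n - 2)   # lower lattice index
        return i, x - i          # index and fractional part

    i, fr = split(r)
    j, fg = split(g)
    k, fb = split(b)
    lerp = lambda a, b, t: a + (b - a) * t

    out = []
    for c in range(3):  # interpolate each channel: along r, then g, then b
        c00 = lerp(lut[i][j][k][c],     lut[i+1][j][k][c],     fr)
        c10 = lerp(lut[i][j+1][k][c],   lut[i+1][j+1][k][c],   fr)
        c01 = lerp(lut[i][j][k+1][c],   lut[i+1][j][k+1][c],   fr)
        c11 = lerp(lut[i][j+1][k+1][c], lut[i+1][j+1][k+1][c], fr)
        out.append(lerp(lerp(c00, c10, fg), lerp(c01, c11, fg), fb))
    return tuple(out)

lut = identity_lut(17)  # 17x17x17 is a common LUT size
print(sample_lut(lut, 0.25, 0.6, 0.9))  # ~ (0.25, 0.6, 0.9)
```

The appeal for a compositor is that an arbitrary chain of color operations can be baked into one such cube once, and the display hardware then applies it per pixel without any shader involvement.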
Claude helped write code, but the engineer still owns it
The most unusual part of the story is the role played by Claude. Wentland openly stated that he used Claude Sonnet extensively during the work and even said that it “basically wrote all the code.” That line has understandably drawn attention, especially because kernel and graphics work is not usually seen as an easy place for AI-assisted development. These are complex codebases, heavily reviewed, with long histories and strict expectations around quality and maintainability.
What makes Wentland’s comments especially valuable is that he did not frame the AI as a replacement for engineering judgment. Quite the opposite. He stressed that large language models are still language models, not real intelligence, and that developers must continue to take ownership of everything they submit. In other words, using a model to understand a complex codebase or generate code that fits within an existing structure can be useful, but that does not reduce the responsibility to review it, guide it carefully, and ensure it is good enough for upstream maintainers.
That distinction matters. Open source graphics development is not a space where low-quality patches survive for long. Linux kernel and KDE maintainers expect contributors to understand what they are proposing, defend it technically, and refine it through review. Wentland’s warning not to “throw trash at maintainers” gets to the heart of the issue. AI may speed up exploration and implementation, but it does not remove accountability. The code still carries the name and reputation of the developer sending it.
Claude’s role also appears to have gone beyond just producing small code fragments. Wentland discussed additional tooling used during the process, including KWin-side debugging work to make it easier to inspect whether surfaces were being offloaded properly to the GPU pipeline. That suggests AI was part of a broader engineering workflow involving navigation of the codebase, prototyping, refinement, and experimentation, not just autocomplete with a better marketing label.
Linux HDR is improving, but the work is not finished
The biggest takeaway is not that AI wrote some kernel-related patches. The more important point is that Linux is finally getting closer to a modern color pipeline that can better support HDR displays, wide color gamuts, and advanced desktop color workflows. That matters not just for enthusiasts, but also for creators, media playback, OLED monitors, gaming, and any workflow where correct color reproduction is increasingly expected rather than optional.
Still, it would be misleading to present this as a finished job. Wentland himself has been careful about that. He noted that the code still needs more testing, that the KWin side will likely need feedback and changes before it is accepted, and that some issues remain. He mentioned occasional trouble with 3D LUT application, uncertainty about some offload candidates, and mixed results depending on the workload. Some local video playback scenarios appear to work well, while other cases, such as YouTube playback in Firefox or certain game-related paths, still need more investigation.
That is actually one of the most encouraging aspects of the story: it sounds like real engineering rather than a polished marketing claim. The progress is tangible, but so are the limitations. This is what graphics infrastructure work on Linux often looks like in practice. Big changes do not arrive all at once. They come in layers, with kernel groundwork first, then driver work, compositor integration, debugging, testing, and finally upstream review.
Phoronix placed this latest work in the wider context of the DRM Color Pipeline API introduced in Linux 6.19 after a long development cycle. That context matters. AMD’s current HDR-related work is not an isolated patch series or an anecdote about AI-generated code. It is part of a broader modernization effort across the Linux desktop graphics stack. And in that sense, the Claude angle may end up being less important than what it signals: AI tools are starting to become part of real low-level development workflows, even in parts of the software world once thought too specialized, too fragile, or too demanding for this kind of assistance.
The headline may be that AMD improved HDR on Linux with help from Claude. The deeper story is that Linux desktop graphics is slowly becoming more capable, more modern, and more competitive in areas where it has lagged behind for years. And if AI tools can help experienced engineers move faster without lowering standards, this may be one of the first clear examples of that shift becoming visible in core graphics infrastructure.
FAQ
What exactly did AMD improve for HDR on Linux?
AMD worked on adding color-space conversion support to the DRM color pipeline, along with AMDGPU and KWin-side integration. This helps move more color processing work into hardware instead of leaving it to the compositor.
Did Claude build the AMD Linux driver by itself?
No. Harry Wentland said he used Claude Sonnet extensively and that it basically wrote all the code, but he also made clear that the developer remains fully responsible for reviewing, guiding, and validating everything before it is submitted.
Does this mean HDR on Linux is now fully solved?
Not yet. The work is still under review and still needs more testing and refinement. Some parts are promising, but there are still issues and edge cases to work through before this becomes a polished end-user feature.
Why is color-space conversion so important for Linux HDR?
Because HDR is not just about brightness. A proper HDR pipeline needs accurate color transformations, tone mapping, gamma handling, and calibration. Without those steps working correctly, the final image can look wrong even on capable hardware.
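As a small illustration of the tone-mapping part of that answer: when content is mastered brighter than the display can show, luminance has to be compressed rather than clipped. The following is a generic extended Reinhard curve in Python, a textbook technique used here purely for illustration (it is not AMD's or KWin's tone mapper, and the peak values are made-up examples):

```python
def tonemap_reinhard(l_in: float, peak_in: float = 1000.0,
                     peak_out: float = 400.0) -> float:
    """Extended Reinhard tone mapping: compress luminance (in nits) so the
    content peak (peak_in) lands exactly on the display peak (peak_out),
    while leaving dark tones nearly untouched."""
    l = l_in / peak_out      # luminance relative to the display peak
    lw = peak_in / peak_out  # content peak relative to the display peak
    return peak_out * l * (1 + l / lw**2) / (1 + l)

print(tonemap_reinhard(0.0))     # 0.0 -- black stays black
print(tonemap_reinhard(1000.0))  # ~400.0 -- content peak fits the display
```

Without a step like this, a 1000-nit highlight on a 400-nit monitor would simply clip to flat white; with it, highlight detail is compressed into the range the panel can actually reproduce.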
