a display may not have a single native colorspace. it may be able to switch.
embedded devices can do this as the display panel may have extra control lines
for switching to a different display gamut/profile. it may be done at the gfx
card output level too... so it can change on the fly.
That's not a typical situation, but nothing special would be
happening - a new profile may be installed by the user as well,
in which case an application should re-render to accommodate it.
yes. compositors right now work in display colorspace. they do no conversions.
eventually they SHOULD to display correctly. to do so they need a color profile
for the display.
For enhanced color management yes. But core comes first, and is necessary
for many color critical applications, because the compositor will never
have the color transformations they require.
it may be that a window spans 8 different screens all with different profiles.
As I've explained several times, what happens is that the application
is aware of this, and transforms each region appropriately - just
as they currently do on X11/OS X/MSWin systems.
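As a rough illustration of that per-region approach, a client could split its surface by the output regions it overlaps and re-render each piece against that output's profile. This is only a sketch; the rectangle representation, the profile names, and the helper functions are invented for illustration and are not any real Wayland API:

```python
# Hypothetical sketch: a color-aware client splitting its surface into
# per-output regions, each to be re-rendered with that output's profile.

def intersect(a, b):
    """Intersection of two (x, y, w, h) rectangles, or None if disjoint."""
    x = max(a[0], b[0]); y = max(a[1], b[1])
    x2 = min(a[0] + a[2], b[0] + b[2]); y2 = min(a[1] + a[3], b[1] + b[3])
    if x2 <= x or y2 <= y:
        return None
    return (x, y, x2 - x, y2 - y)

def render_regions(surface_rect, outputs):
    """outputs: list of (rect, profile_name) pairs. Returns the per-output
    sub-rectangles the client would render with each display's profile."""
    plan = []
    for rect, profile in outputs:
        r = intersect(surface_rect, rect)
        if r is not None:
            plan.append((r, profile))
    return plan
```

A 200x100 surface straddling two 100-pixel-wide outputs would yield two regions, each paired with its own display profile.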
currently the image looks a bit different on each display.
That would be because you haven't implemented color management support
yet, making it possible for applications to implement color management.
a proper color correcting compositor can make them all look the same.
As will a color aware application given appropriate color management support.
want apps to be able to provide "raw in screen colorspace pixels". this is going
to be horrible, especially as windows span multiple screens.
The code is already there to do all that in color critical applications.
if i move the
window around, the client has drawn different parts of its buffer with different
colorspaces/profiles in mind and then has to keep redrawing to adjust as it
moves. you'll be able to see "trails" of incorrect coloring around the
boundaries of the screens until the client catches up.
It's damage, just like any other, and color critical users using
color critical applications will take "trails" over wrong color
anytime. No "trails" and wrong color = a system they can't use.
the compositor SHOULD do any color correction needed at this point.
Not at all. That's a way to do it under some circumstances yes, but
it's not satisfactory for all.
if you want
PROPER color correction the compositor at a MINIMUM needs to be able to report
the color profile of a screen even if it does no correcting.
Yes - exactly what I'm suggesting as core color management support.
yes you may have
multiple screens. i really dislike the above scenario of incorrect pixel trails
because this goes against the whole philosophy of "every frame is perfect".
"Every pixel being perfect" except they are the wrong color, isn't perfect.
There are multiple ways of doing the best thing possible - you can't re-render
a frame in the compositor if it doesn't have the pixels needed to render it,
so you can 1) not re-render until the application provides the pixels
needed 2) Render the wrong color pixels until the application catches up
or 3) (if the compositor has some color management capability and
the application sets it up) get it to do an approximate correction to
the pixels until the application catches up with the correct color.
option 3 cannot be done given your proposal. it can only be done if the compositor
handles the color correction and the clients just provide the colorspace being
used for their pixel data.
And a compositor can't know how to transform color in the way some
applications require. This trumps such goals.
i'm totally ignoring the case of having alpha. yes. blending in gamma space is
"wrong". but it's fast. :)
I'm not sure what you mean by that. Traditionally applications render
to the display colorspace. Changing the display setup (i.e. switching
display colorspace emulation) is a user action, complicated only by the
need to make the corresponding change to the display profile, and re-rendering
anything that depends on the display profile.
being able to modify what the screen colorspace is in any way is what i have a
problem with.
That's the reality of how displays work. The user presses a button on the
front that says "emulate sRGB" or "native" or "Preset 1" or something else.
only the compositor should affect this based on its own decisions.
And color critical users will scream bloody murder at anything related
to color that isn't under their control, if it affects the accuracy or scope
of the color workflow.
No, not supported = native device response = not color managed.
and for most displays that is sRGB.
Not in the slightest. Having (ahem!) profiled a few displays, none of them
are exactly sRGB. Some may aspire to be sRGB, they may approach sRGB,
but (because they are real devices, not an idealized norm) none are sRGB.
[ Black point alone is miles out for most LCD based displays. ]
either way monitors tend to have slightly different color reproduction and most
are "not that good" so basically sRGB.
All slightly different is certainly not the same as sRGB. That's why anyone
critically interested in color, profiles their display.
the compositor then is effectively
saying "unmanaged == sRGB, but it may really be anything so don't be fussy".
No display profile = can't know what to transform to = don't do anything.
No compositor is involved. If the application doesn't know
the output display profile, then it can't do color management.
it can assume sRGB.
That's up to the user. The user may have something else they can
assign if they are unable to profile the display (EDID derived
profile, model generic profile, etc.)
Please read my earlier posts. No (sane) compositor can implement CMM
capabilities to a color critical application's requirements,
so color management without any participation of a compositor
is a core requirement.
of course it can. client provides 30bit (10bit per rgb) buffers for example and
compositor can remap from the provided colorspace for that buffer to the real
screen colorspace.
It's not about bit depth, it's about algorithms. No compositor can
do a transformation that it doesn't have an algorithm for.
Relying on an artificial side effect (the so called "null color transform")
to implement the ability to directly control what is displayed, is a poor
approach, as I've explained at length previously.
but that is EXACTLY what you have detailed to rely on for color managed
applications. for core color management you say that the client knows the
colorspace/profile/mappings of the monitor and renders appropriately and
expects its pixel values to be presented 1:1 without remapping on the screen
because it knows the colorspace...
Yes, a switch (Don't do color management) is far cleaner than trying
to trick a constant color management compositor into not doing color
management by feeding it a source profile that is (hopefully)
the same as the destination profile (and how do you do that
if the surface spans more than one Monitor ?)
No compositor should be involved for core support. The application
should be able to render appropriately to each portion of the span.
then no need for any extension. :) compositor HAS to be involved to at least
tell you the colorspace of the monitor... as the screen is its resource.
As I've explained a few times, an extension is needed to provide
the Output region information for each surface, as well as each
output's color profile, as well as to be able to set each Output's
per channel VideoLUT tables for calibration.
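To illustrate the calibration part: per channel VideoLUT tables are usually just arrays of 16-bit entries, one table per channel. A sketch of building such ramps; the correction exponents here are made-up example values, where real ones would come from measuring the display:

```python
# Sketch: per-channel VideoLUT calibration ramps of the sort a
# "set CRTC gamma" extension would load. Exponents are illustrative only.

def make_ramp(exponent, size=256, depth=16):
    """Map an identity ramp through a power-law correction into a
    LUT with 16-bit entries, as most CRTC gamma hardware expects."""
    maxv = (1 << depth) - 1
    return [round(((i / (size - 1)) ** exponent) * maxv) for i in range(size)]

# slightly different (hypothetical) corrections per channel:
red_lut   = make_ramp(1.00)   # identity
green_lut = make_ramp(1.05)
blue_lut  = make_ramp(0.95)
```

The compositor itself never needs to interpret these curves; it only has to pass them through to the hardware, which is exactly why they belong in a core extension.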
Post by Carsten Haitzler (The Rasterman)
this way client doesn't need to know about outputs, which outputs it spans
etc. and compositor will pick up the pieces. let me give some more complex
examples.
That only works if the client doesn't care about color management very much -
i.e. it's not a color critical application. I'd hope that the intended use of
Wayland is wider in scope than that.
how does it NOT work?
It doesn't work when the compositor doesn't have the color transform
capability that the application requires.
let me give a really simple version of this.
you have a YUV buffer. some screens can display yuv, some cannot. you want to
know which screens support yuv and know where your surface is mapped to which
screens so you can render some of your buffer (some regions) in yuv and some
in rgb (i'm assuming packed YUVxYUVxYUVx and RGBxRGBxRGBx layout here for
example)... you wish to move all color correct rendering, clipping that correct
(yuv vs rgb) rendering client-side and have the compositor just not care.
Let me give you an example. The application has a ProPhotoRGB buffer,
and wants to render it with image specific gamut mapping into the display
space. It has code and algorithms to 1) Gather the image gamut, 2) Compute
a gamut mapping from the image gamut to the Output Display gamut, and 3) Invert
the A2B cLUT tables of the Output display profile to floating point
precision, with gamut clipping performed in a specially weighted CIECAM02 space.
I'm not quite sure how the Wayland compositor is going to manage all that,
especially given that the application could tweak or change this in
the point of wayland is to be "every frame is perfect". this breaks that.
A pixel is not perfect if it is the wrong color.
If you don't care so much about color, yes. i.e. this is
what I call "Enhanced" color management, rather than core.
It doesn't have to be as flexible or as accurate, but it has
the benefit of being easy to use for applications that don't care
as much, or currently aren't color managed at all.
how not? a colorspace/profile can be a full transform with r/g/b points in
space... not just a simple enum with only fixed values (well that's how i'm
A color profile can be quite complex, including being scripted in the case of
something like an OCIO or ACES profile.
But the device profile is only half the story - it does nothing on its
own, it needs to be linked with another device profile. And the flexibility
at that point is unlimited.
in this case the APIs tell the client the available colorspaces
and it chooses the best. it would have NO CHOICE in your core management anyway.
it'd be stuck with that colorspace and have to render accordingly, which is the
exact same thing you are proposing for core.
Not at all. How it transforms from the source colorspace to
the display is then completely under its control, something
needed for color critical applications as well as calibration
and profiling software.
provide a list of 1 colorspace -
the monitor native one. application renders accordingly. if colorspace of
rendered buffer == colorspace of target screen, compositor doesn't touch the
pixels.
Bad way of doing it, for reasons I've pointed out multiple times.
Be explicit rather than rely on a trick - use a switch.
if it's RGB or YUV (YCbCr) it's the same thing. just vastly different color
mechanisms. color correction in RGB space is actually the same as in YUV. it's
different spectrum points in space that the primaries point to.
I'm aware of what YCbCr is - I've implemented code to convert many
such color formats.
color management requires introducing such things: BT.601, BT.709, BT.2020.
the compositor MUST KNOW which colorspace the YUV data uses to get it correct.
Sure, but that's not an aspect I've mentioned. Ultimately the display
is RGB, irrespective of the encoding used to carry that information.
i'm literally staring at datasheets of some hardware and you have to tell it
to use BT.601 or 709 equation when dealing with YUV. otherwise the video data
will look wrong. colors will be off. in fact BT.709 == sRGB.
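The effect of picking the wrong equation is easy to show: decoding the same YCbCr sample with BT.601 vs BT.709 coefficients gives noticeably different RGB. A sketch using the standard full-range conversion constants (values normalized to [0..1], Cb/Cr centered at 0.5):

```python
# Sketch: the same YCbCr bytes decode to different RGB depending on
# which standard's matrix is used -- so the compositor must be told.

def ycbcr_to_rgb(y, cb, cr, matrix):
    cb -= 0.5
    cr -= 0.5
    if matrix == "bt601":
        return (y + 1.402 * cr,
                y - 0.344136 * cb - 0.714136 * cr,
                y + 1.772 * cb)
    if matrix == "bt709":
        return (y + 1.5748 * cr,
                y - 0.187324 * cb - 0.468124 * cr,
                y + 1.8556 * cb)
    raise ValueError(matrix)

# a saturated sample decodes differently under each standard:
r601, g601, b601 = ycbcr_to_rgb(0.5, 0.5, 0.9, "bt601")
r709, g709, b709 = ycbcr_to_rgb(0.5, 0.5, 0.9, "bt709")
```

With this sample the red channel alone differs by about 7% of full range between the two decodes, which is exactly the kind of "colors will be off" error being described.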
Sure - complexity in managing encodings. But that has nothing
directly to do with color management, which is about colorspaces.
now here comes the problem... each hardware plane yuv may be assigned to MAY
have a different colorspace. they also then get affected by the color
reproduction of the screen at the other end.
To be fair, I'm not that aware of how the hardware presents itself
in regard to such things (data sheets seem hard to come by, and I
have gone looking for them in vain on a few occasions), but for many color
critical uses, it's not an immediate concern because such applications
are not going to be using yuv buffers. (Exception might be a video
editing/color grading application sending previews to a TV or
studio monitor - but all that is about encodings rather than colorspaces.)
any list of colorspaces IMHO should also include the yuv colorspaces where/if
applicable.
I don't think so. If you look at the video standards, the color spaces are
all specified as RGB. YCbCr is a different encoding of the same color
space with a precise definition of the transformation to/from.
if a colorspace is not supported by the compositor then the app just
needs to take a "best effort". the default colorspace today could be considered
BT.709/sRGB. also you could say it's the "null transform" colorspace, i.e. you
know nothing so don't try to color correct.
There is a distinction between color encoding and color space.
my point was i don't think it's needed to split this up.
compositor lists available colorspaces. a list of 1 sRGB or null-transform or
adobe-rgb (with transform matrix), wide-gamut, etc. means that that is the one
and only output supported.
I'm not quite sure of the context here - the display system only
knows about color spaces it has been told about. Someone has
to tell it what the color profile of its displays are, and
the application is the thing that knows what the color spaces
of the input spaces it deals with are.
not as i see it. given a choice of output colorspaces the client can choose to
do its own conversion, OR if its colorspace of preference is supported by the
compositor then choose to pass the data in that colorspace to the compositor
and have the compositor do it.
Yes. But one is not the equivalent of the other, if the compositor
doesn't have the same color transformation capability.
*sigh* and THAT IS WHY i keep saying that the client can choose to do its own!
I'm in furious agreement with this bit. I just want to make sure that
it is a core capability.
BUT this is not going to be perfect across multiple screens unless all screens
are identical.
This is an already solved problem in other systems, including X11.
1 screen is a professional grade monitor with wide gamut rgb output.
1 screen is a $50 thing i picked up from the bargain basement bin at walmart.
null transform RGB
Why is it reporting an encoding rather than a colorspace,
and why isn't it providing the two display profiles ?
null transform RGB
wide gamut RGB
I don't see how extra encodings are useful without their
corresponding color spaces.
in the dumb case your app can't do much.
In the dumb or core case, it has two display profiles, one
for the professional grade monitor with wide gamut rgb output,
and the other for the bargain basement bin display from walmart.
It can then transform source images in whatever colorspace they
are tagged with, into the appropriate display colorspace,
in the way the application and user needs it to be transformed.
the smart case means that pixels
displaying on the pro monitor either with null transform OR with wide gamut
colorspace get no transform done. pixels in sRGB, BT.709 and BT.601 have to be
transformed to the wide gamut rgb colorspace by the compositor. of course the
user would place the window on the best quality screen. within the color
spectrum the screens share, colors SHOULD look identical. the client KNOWS the
colorspace being used and can transform/render data accordingly.
In the smart or enhanced case, the application would provide a source colorspace
profile, and the compositor would transform to the appropriate display
colorspace and encoding in the limited fashion it is capable of. This
is probably quite acceptable for many applications with a limited range
of input formats or color conversion requirements.
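As an illustration of what such a limited compositor capability might look like, here is a matrix-only link from a source space to a display space through XYZ. The matrices are the published sRGB and Adobe RGB (1998) primaries (the latter standing in for a generic wide-gamut display); a real compositor would derive them from the installed profiles, and per-channel transfer curves are ignored entirely:

```python
# Sketch of the "enhanced" path: a matrix-only color engine linking a
# source space to the display space through XYZ (linear light assumed).

SRGB_TO_XYZ = [[0.4124, 0.3576, 0.1805],
               [0.2126, 0.7152, 0.0722],
               [0.0193, 0.1192, 0.9505]]
ADOBE_TO_XYZ = [[0.5767, 0.1856, 0.1882],
                [0.2974, 0.6273, 0.0753],
                [0.0270, 0.0707, 0.9911]]

def mat_mul_vec(m, v):
    return [sum(m[r][c] * v[c] for c in range(3)) for r in range(3)]

def invert3(m):
    """Invert a 3x3 matrix via the adjugate."""
    a, b, c = m[0]; d, e, f = m[1]; g, h, i = m[2]
    det = a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)
    return [[(e*i - f*h)/det, (c*h - b*i)/det, (b*f - c*e)/det],
            [(f*g - d*i)/det, (a*i - c*g)/det, (c*d - a*f)/det],
            [(d*h - e*g)/det, (b*g - a*h)/det, (a*e - b*d)/det]]

def srgb_to_display(rgb):
    """Linear-light sRGB -> XYZ -> linear-light display RGB."""
    return mat_mul_vec(invert3(ADOBE_TO_XYZ), mat_mul_vec(SRGB_TO_XYZ, rgb))
```

This is exactly the "limited fashion" in point: a 3x3 matrix cannot express cLUT profiles, gamut mapping, or the per-image choices a color critical application needs.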
the point of wayland is "every frame is perfect". you want clients to
render their content differently based on what screen their window is on
then a compositor can NEVER get this right no matter how hard they try because
clients are fighting them and making assumptions they absolutely should not. i
already told you of more realistic cases of windows in miniature in pagers that
are not on the same screen as the full sized window (as opposed to the silly
bunny rabbit example above, but it's meant to make a point).
If this is really the case, then the conclusion is that Wayland is
not suitable for serious applications, and certainly is not a replacement
for X11. I don't actually think that that is true.
you HAVE to abstract/hide this kind of information to ALLOW the compositor to
get things right.
I doubt that. You just have to make some allowance for the
application being able to determine the RGB values sent to the
display, if it wishes to. Given that this is basically
the case without compositor color management, and that
in the compositor there is a definition of how surfaces
get mapped to displays, I don't see at all why this is
now impossible, when it is supported in other serious graphics systems.
A color critical user won't put up with such things - they expect to
be in control over what's happening, and if a system has proper
color management (core + enhanced), there is absolutely no
reason for them to run the display in anything other than its native gamut.
a user actually should not have to deal with most of these issues at all. even
a color critical one. they likely shouldn't have to remember which one of their
16 screens has the best colorspace support for that image.
Ideally, yes - but few have the money to get a full set of 16 EIZO displays.
No, it's a list of N output display colorspaces, one for each display.
see above. it should not be per display.
How can there be any color management unless the
colorspaces of each display are recorded somewhere ???
As explained, yes, core color management needs support -
control over VideoLUT state, plus registration of the output
display colorspaces + knowledge of which output the different
parts of a surface map to.
as you describe "core color management" - it's not control. that's simple
passive reading of the state and providing to the client. control is when you
start determining the state of these.
That part is just information needed for the client application to
perform color management, but calibration needs control over CRTC
per channel VideoLUTs.
sRGB is the colorspace of every HD display (or should be). how does it not come
close?
Because that's not actually true. Each real display has its own
response. That's why I write tools to profile displays, and why
people use those tools.
you don't need anything special for color calibration beyond a null transform
and a compositor that won't go ignoring that null transform anyway for the
purpose of color calibration (when used by a calibration app).
Agreed, + control over calibration curves.
It's the simplest possible support, (hence calling it "core").
It's needed internally anyway for a compositor to implement CMM
operations for "enhanced" color management.
it's also broken when you attach the color profile to a specific output. see
above.
No it's not - it already works on X11/OS X/MSWin.
that's out of scope for wayland.
Exactly, which is why you can't hope to cover all possible
client application requirements with color management
done in the compositor.
HOW it is transformed is either done
client-side to present whatever source data in a given output colorspace to the
compositor OR it's done by the compositor to fix colorspaces provided by
clients to display as correctly as possible on a given screen + hardware.
Right - so the client-side needs proper support for doing this, which
is what a "core" color management extension provides.
Hmm. Not really. Mostly a lot of other stuff has to go on top of that
to make things turn out how people expect (source colorspace definition,
white point mapping, gamut clipping or mapping, black point mapping etc.)
source definition is out of scope.
It can't be out of scope if the compositor is to do color management.
that's up to the app (e.g. photoshop). the
colorspace definition indeed covers what you say. and it is about adjusting. i
was saying the exact same thing. i am not unfamiliar with colorspaces, color
correction and mapping. it's necessary for YUV->RGB and is fundamentally the
same as RGB->RGB
I'm now wondering if we are talking about different things.
The color management protocol I'm commenting on, is about
transforming between different device color spaces,
defined by ICC profiles etc. You seem to be referring
mainly to color encoding transforms, although you are then
throwing in references to sRGB, which is a colorspace definition.
1 colorspace which is the screen's output space is NOT the same? is that not
the same as a single screen system with the display colorspace on that 1
screen? how is it not the same? it's 1 colorspace exposed by compositor to
client in both cases. the SAME colorspace. how is this not the same?
the difference is that i don't think it should be per monitor.
The whole point is that each display has a different color response,
and incoming color should be transformed to compensate for these
differences. So each display (ideally) should have an associated
display profile.
and that is why when a compositor DOES know the display colorspace it would
list that likely in addition to a null transform (there is basically no
downside to listing a null transform. it's the compositor just doing nothing
which is about as efficient as it gets).
This isn't typically true. A->B + B->A is not actually a null transform
for (say) a cLUT based ICC profile, since the B2A is not an exact inverse of
A2B. So you have to add a hack, that declares it a null transform.
if the colorspace of a provided buffer == colorspace of output then it IS
effectively a null transform for the compositor and it does (or should do)
just nothing.
This depends on technical details of the profiles. Some
sorts of profiles will be very close to null transforms, and some
will not. (See above).
It depends on whether a profile is "exactly" invertible (i.e. to floating point
precision). Use a LUT for the per channel curves (such as the
original sRGB profile), and it's not quite perfectly invertible
(although it may be to low precision). Use cLUT based profiles,
and it certainly isn't. So it has to be declared to be
a special case and assumed to be a null transform.
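The non-invertibility is easy to demonstrate with a coarse per-channel LUT: pushing a value forward through a quantized curve and back through its tabulated inverse does not return the original value. A sketch (the table size and the 2.2 curve are arbitrary illustrative choices):

```python
# Sketch: a "null transform" built from forward + inverse table lookups
# is not an exact identity -- a coarse per-channel LUT round-trips with error.

def lut_from_curve(fn, entries=17):
    """Sample a transfer curve into a small table."""
    return [fn(i / (entries - 1)) for i in range(entries)]

def apply_lut(lut, x):
    """Piecewise-linear interpolation through the table."""
    pos = x * (len(lut) - 1)
    i = min(int(pos), len(lut) - 2)
    frac = pos - i
    return lut[i] * (1 - frac) + lut[i + 1] * frac

fwd = lut_from_curve(lambda v: v ** 2.2)        # device -> PCS-ish
inv = lut_from_curve(lambda v: v ** (1 / 2.2))  # PCS-ish -> device

x = 0.1
roundtrip = apply_lut(inv, apply_lut(fwd, x))
error = abs(roundtrip - x)   # nonzero: the "null" transform moved the pixel
```

Real profiles use far larger tables than 17 entries, so the error is smaller, but the principle stands: only declaring the matching case a special-cased null transform guarantees pixels pass through untouched.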
no one is asking anyone to transform anything (thus invert or anything else)
with a null transform.
That's what a null transform is though - a forward conversion
(Device space to PCS) followed by an inverse transform
(PCS to Device space).
and if colorspaces match no one is converting anything
That's the hack - declaring matching profiles to be a null
transform, even though if you actually performed the transform,
pixel values might be altered.
"colorspaces match". to me that means either a strictly standards defined
colorspace with fixed constants and both sides agree to use it, or its
something where the constants have adjustments based on doing a color profile
of the screen. in BOTH cases i argue that if you flatten the data into some
memory blob memcmp() == 0 if they match. the best way i see is the compositor
provides a list, client chooses and just says "i used the colorspace #6 you
told me". then it does match when on display/hardware that really exactly
physically matches. if it doesn't match compositor will have to "choose what to
do". see above.
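The "flatten the data and memcmp()" idea might look like this sketch: serialize the colorspace parameters in a canonical order and compare bytes, so "match" is plain equality. The field layout here is invented for illustration:

```python
# Sketch of "flatten and compare": canonical serialization of a
# colorspace description, so matching reduces to byte equality.
import struct

def flatten_colorspace(primaries, white, gamma):
    """primaries: three (x, y) chromaticity pairs; white: (x, y); gamma: float."""
    vals = [c for p in primaries for c in p] + list(white) + [gamma]
    return struct.pack("<%dd" % len(vals), *vals)

srgb_a = flatten_colorspace([(0.64, 0.33), (0.30, 0.60), (0.15, 0.06)],
                            (0.3127, 0.3290), 2.2)
srgb_b = flatten_colorspace([(0.64, 0.33), (0.30, 0.60), (0.15, 0.06)],
                            (0.3127, 0.3290), 2.2)
# a profiled real display measures slightly off the sRGB ideal:
profiled = flatten_colorspace([(0.6395, 0.3303), (0.2998, 0.6001), (0.15, 0.06)],
                              (0.3127, 0.3290), 2.2)

match = (srgb_a == srgb_b)        # identical descriptors compare equal
mismatch = (srgb_a == profiled)   # a profiled display differs, so no match
```

Note this only captures matrix-style descriptions; a full cLUT-based ICC profile would have to be flattened wholesale, which is where the "list of compositor-provided colorspaces the client picks from" scheme avoids the comparison problem entirely.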
The only way to make it almost certain, is for the client application
to download the display profile from the compositor, and then set it as the
surface source profile. But this breaks down if the surface covers more than
one display - you would need a means of setting a source profile for the
different regions that correspond to each display. This isn't a requirement
for normal compositor implemented color management. But if you can simply
mark the surface as "do not color manage" instead, then there are no such
problems, each pixel of the surface is in the display colorspace it maps to.
* If a surface straddles two displays, then labeling all the pixels
with one of the two display profiles is not the same as not
touching the pixels.
either way if the client is colorcorrecting itself based on the display output
it thinks it might be on (and it may be on many display outputs or wrapped
around bunnies)... then it WILL look incorrect on at LEAST one of those
displays at some point. and the point is to not look incorrect.
I don't really see why that should be the case, any more than the situation
with any other client display content change.
* What happens at startup, before the output display profiles are
loaded into the compositor, or if there is no display profile ?
How do you create a null transform to do an initial calibration
or profile ?
at startup a compositor would load the color profiles that were
configured/stored from any previous execution that it knows match the displays
it has.
That's not possible if there was no previous execution.
you mean at setup time - like when someone buys a new monitor...
There may be no profile initially, - one is only available after
the system is running, and the user is able to profile it.
i'd have the compositor use a null transform (do nothing to rgb values) UNTIL
it has calibration data for that screen. you don't have to "create" a null
transform. it's just listed in the colorspaces supported. it is the "do
nothing" option.
There's no colorspace to "list" without a profile though.
why specialize it to a flag when it actually is just an "identity transform",
which math-wise == do nothing as a fast path, which is already what compositors
do.
Because they aren't the same thing. A flag is a "null" transform irrespective
of what the output colorspace is - it's the equivalent of a wild card profile.
A specific color profile will only match a specific display profile to be
a null transform. The surface spanning more than a single display is an
example of this distinction.
let me roll back in time. long ago in a land far far away i was working with
x terminals.
Hey - so did I. I was hacking on Labtam X terminal cfb code in the late 80's/early 90's,
making sure we had the fastest X terminals in the world :-) :-)
and you then found some x11 apps that refused to work on your
xserver... because they NEEDED an 8bpp visual, but your display was just a 1
bit mono one? no emulation. apps were specifically bound to a specific depth
because thats how x11 worked. it strictly defined the output pixel value of
operations so emulation was disallowed. result - you cant run the app at all.
Sure - it took effort to write portable applications, and not everyone
was aware, or could be bothered. X terminals worked really well with
some applications, but quite poorly with those that had been written
on the assumption that the X server was running on the same box as
the client. Not much has changed - a lot of web applications seem to do
the same - they run really badly in real life, but I'm sure they
run perfectly on the developers machine!
then not long after i had 8bpp x11 apps that refused to run on 16bpp. they also
didn't work on 1bpp. hooray! i ended up actually porting quake to 16bpp myself
(i had some ... let's say dubiously obtained source to have a linux and even
solaris/sparc (8bpp), osf1/alpha (8bpp) and linux/ix86 (16bpp) port of quake to
play with).
Dealing with 8bpp displays was what actually led me to take an interest in
color science. I started investigating perceptually uniform colorspaces
in developing 24->8 bpp color quantization code for my xli fork of xloadimage.
the problem was that you ended up with apps that just refused to work and if
i didn't have source and the time and desire to fix them, they would have
continued to not work and if i was a regular user i would likely have just
sworn and gotten unhappy and eventually moved to a platform where this doesn't
happen.
i do not want to see this kind of thing happen again in wayland land. that's
why it matters to me. it leads to a frustrating user experience.
I'm not sure of the relevance. There are many color managed applications
written for other graphics systems, and while there are things that can trigger
color management issues (OS X is somewhat notorious for issues caused
by Apples API changes), I can't see how the situation could be analogous
to the 1bpp/8bpp/24bpp X11 situation you illustrate above.