NSImage/Bitmap and colorspaces

Jonathan Taylor

Hi all,

I have realised I’m having some subtle problems with images not being saved quite how I need them to be, and the problem seems to be caused by colorspace issues. Up until now I’ve kind of ignored the issue of colorspaces and presumed that things would behave “reasonably” as long as I didn’t go out of my way to specify something unusual - it turns out this optimism was misplaced! I am hoping somebody can help me understand colorspaces just enough that I don’t inadvertently cause anything unexpected to happen. I am not concerned about pedantic photorealism on screen or on printed paper, but it is essential that I have precise control over the exact 3×8-bit channel values written out to the TIFF files I am saving to disk.

At the moment I create a bitmap (specifying NSCalibratedRGBColorSpace, just because that seemed like the most obvious one to use), populate it with the pixel values I want, add it to a new NSImage, call lockFocus on that NSImage, draw some annotations, and save the image as a TIFF (by passing the bitmap data to libtiff). I have realised that this pipeline interferes with the original pixel values I set - merely calling lockFocus is enough to cause the red channel to bleed slightly into the green.
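
For reference, here is a stripped-down sketch of the pipeline (kWidth/kHeight, the wrapper function and the annotation drawing are placeholders, and memory management is elided):

    #import <Cocoa/Cocoa.h>

    static const NSInteger kWidth = 512, kHeight = 512;    // placeholder dimensions

    void makeAnnotatedImage(void)    // hypothetical wrapper, just so the sketch hangs together
    {
        NSBitmapImageRep *theBitmap = [[NSBitmapImageRep alloc]
            initWithBitmapDataPlanes:NULL
                          pixelsWide:kWidth
                          pixelsHigh:kHeight
                       bitsPerSample:8
                     samplesPerPixel:3
                            hasAlpha:NO
                            isPlanar:NO
                      colorSpaceName:NSCalibratedRGBColorSpace
                         bytesPerRow:0
                        bitsPerPixel:0];

        // Write the exact 3×8-bit channel values straight into the buffer.
        unsigned char *pixels = [theBitmap bitmapData];
        pixels[0] = 255; pixels[1] = 0; pixels[2] = 0;    // e.g. first pixel pure red
        /* ... fill in the rest ... */

        NSImage *image = [[NSImage alloc] initWithSize:NSMakeSize(kWidth, kHeight)];
        [image addRepresentation:theBitmap];

        [image lockFocus];    // this call alone is enough to perturb the channel values
        /* ... draw annotations ... */
        [image unlockFocus];

        // [theBitmap bitmapData] is then handed to libtiff for saving.
    }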

I noticed that the colorspace of the bitmap has been changed to NSDeviceRGBColorSpace by the call to lockFocus, and the pixel values have been altered (red bleeding into green). If I instead create the bitmap using NSDeviceRGBColorSpace rather than NSCalibratedRGBColorSpace, then a subsequent call to lockFocus does not seem to interfere with the pixel/channel values.
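
A minimal way to see what I mean, assuming the bitmap and image are set up as in the sketch above:

    NSLog(@"before lockFocus: %@", [theBitmap colorSpaceName]);
    [image lockFocus];
    [image unlockFocus];
    NSLog(@"after lockFocus:  %@", [theBitmap colorSpaceName]);
    // For me this logs NSCalibratedRGBColorSpace and then NSDeviceRGBColorSpace,
    // and the values in [theBitmap bitmapData] change at the same time.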

I also notice that if, instead of adding to an NSImage and drawing, I use:
    [NSGraphicsContext graphicsContextWithBitmapImageRep:theBitmap]
then there does not seem to be any problem with channel bleed-through (whichever colorspace I set for the bitmap).
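
Concretely, that alternative looks something like this (annotation drawing again elided):

    NSGraphicsContext *ctx =
        [NSGraphicsContext graphicsContextWithBitmapImageRep:theBitmap];
    [NSGraphicsContext saveGraphicsState];
    [NSGraphicsContext setCurrentContext:ctx];
    /* ... draw annotations directly into theBitmap ... */
    [NSGraphicsContext restoreGraphicsState];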

So, I think my questions are:
- Can anyone point me to a basic explanation of Cocoa and colorspaces that gives me the minimal understanding I need? I *don’t* want to do anything clever with colorspaces; I just want to “do the right things” so that nothing weird happens.
- Do both the solutions I’ve described (NSDeviceRGBColorSpace + lockFocus, and graphicsContextWithBitmapImageRep) seem like they should be robust for what I want (i.e. not messing with the exact pixel values I initially set manually in the bitmap)?
- Is one solution better than the other for any reason? I’m wondering, for example, if specifying NSDeviceRGBColorSpace from the outset would give better performance when it comes to drawing these images to a window in my program (which I also do, so optimizing performance there would be a nice secondary consideration).
- A beyond-Cocoa question: in the TIFF files I output (via libtiff), I write out the colorspace information (because I figure I might as well). Does anybody have any advice on whether scientific image tools like ImageJ, or commercial scientific renderers, pay any attention to the colorspace specified in a TIFF file? How important is it that I get exactly the right colorspace for downstream software reading the files (and if it matters, what is the “right” one…)?
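
For what it’s worth, the colorspace-related tags I’m writing are along these lines (the tag constants are real libtiff ones, but which ones to write - and where the ICC profile bytes come from - is my own guesswork, so treat profileData/profileLength as placeholders):

    #include <tiffio.h>

    TIFF *tif = TIFFOpen("output.tif", "w");
    TIFFSetField(tif, TIFFTAG_IMAGEWIDTH, (uint32)kWidth);
    TIFFSetField(tif, TIFFTAG_IMAGELENGTH, (uint32)kHeight);
    TIFFSetField(tif, TIFFTAG_BITSPERSAMPLE, 8);
    TIFFSetField(tif, TIFFTAG_SAMPLESPERPIXEL, 3);
    TIFFSetField(tif, TIFFTAG_PHOTOMETRIC, PHOTOMETRIC_RGB);
    TIFFSetField(tif, TIFFTAG_PLANARCONFIG, PLANARCONFIG_CONTIG);
    // Embed an ICC profile describing the colorspace - this is the tag I am
    // unsure downstream tools (ImageJ etc.) actually honour.
    TIFFSetField(tif, TIFFTAG_ICCPROFILE, (uint32)profileLength, profileData);
    /* ... TIFFWriteScanline() for each row of [theBitmap bitmapData] ... */
    TIFFClose(tif);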

Cheers
Jonny