By Glen Charles

Here we discuss the technology of making pictures.
As usual, the story begins at the bottom of the posts.

Replies

Reply from Glen Charles on 04-3-13 3:39 AM
Now let's talk about dodging and burning the image through local contrast changes.

Later.
Reply from Glen Charles on 04-3-13 3:34 AM
Here is a warning:
If you make drastic changes to your image, you will see color changes occurring where you didn't intend them. You can see this in many pictures here on LI, where it is obvious that the photographer really could not have wanted purple edges to his clouds, or blue highlights on his red roses.

In this situation, you are seeing the limitation of the sRGB color definition. Unfortunately, in sRGB, in order to change the lightness you have to change the RGB values. There is no independence between color and lightness in RGB as there is in, say, L*a*b*, HSV, or LCH, which all allow you to change the lightness without changing the color.
As lightness or saturation changes are made in RGB, you will sometimes hit the edge of the RGB envelope with one or two channels while another can still advance; the channels go out of balance and you see blue petals on a red rose, for instance.

There are examples here on LI of over-saturated parts of the image in which all detail and texture have been lost. At least that is how it appears on my monitor, but perhaps the photographer's monitor was very "flat," uncalibrated, and looked okay to him.

To prevent this sort of color change you can take one of several approaches.

First you could set your working color space in the editor to something that will hold a lot more colors than sRGB. You won't be able to see any colors outside sRGB, because the monitor color definition is about equal to sRGB (this is deliberate), but you can see the numbers of the colors and can tell whether or not the colors are being mangled.
Some people choose Adobe RGB 1998 for a working space. I sometimes use J. Holmes Ektaspace PS5 as my working space.

The other approach is to do your color changes using another color definition, like HSV or LCH. (I don't suggest L*a*b* unless you are willing to devote many hours to understanding it - it is not intuitive. If you want to understand it, get Dan Margulis's book on LAB.)

I often use HSV in my work.
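If you want to see the HSV idea in code, here is a minimal Python sketch using only the standard colorsys module. It lightens one pixel value by pushing V up while leaving hue and saturation alone; the sample numbers are just an example.

```python
import colorsys

def lighten_hsv(r, g, b, amount=0.2):
    """Lighten an 8-bit RGB color by raising V in HSV, leaving hue untouched."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    v = min(1.0, v + amount)               # push the value up and clamp at 1.0
    r2, g2, b2 = colorsys.hsv_to_rgb(h, s, v)
    return round(r2 * 255), round(g2 * 255), round(b2 * 255)

# A deep red rose petal: a naive per-channel RGB boost would clip red first
# and let green and blue catch up, shifting the hue. This keeps the hue put.
print(lighten_hsv(200, 30, 40))
```

The same round trip works for saturation changes: adjust S instead of V.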

And thirdly, you could work with an editor that handles 16-bit images, which allow far finer gradations per channel than 8-bit files and give you much more headroom for edits. If you start with RAW, convert to PSD, and set your working color space to Adobe RGB 1998, you will have plenty of room to make color changes.

I use Live Picture, which is a 48-bit (3 x 16) editor, and although my original images are all 8-bit, every edit I make is a 16-bit edit (color changes, sharpness, blur, etc.). My working color space is Ektaspace, so I have room to make a lot of changes.

Eventually the image will have to be converted to sRGB to be shown in a browser window. In the conversion, the colors you have created are re-mapped to fit the sRGB color definition using the rendering intent that makes the most sense to you - perceptual, relative colorimetric, or absolute colorimetric - so the picture survives the move with as little damage as possible.
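Here is a hedged sketch of that final conversion using Pillow's ImageCms (LittleCMS) bindings, assuming an 8-bit RGB TIFF that carries an embedded wide-gamut profile; the file names are made up.

```python
import io
from PIL import Image, ImageCms

im = Image.open("edited_in_wide_gamut.tif")      # hypothetical 8-bit RGB TIFF

# Use the profile embedded in the file as the source; fall back to sRGB.
icc_bytes = im.info.get("icc_profile")
src = (ImageCms.ImageCmsProfile(io.BytesIO(icc_bytes))
       if icc_bytes else ImageCms.createProfile("sRGB"))
dst = ImageCms.createProfile("sRGB")

# Remap the wide-gamut colors into sRGB with the perceptual intent; swap in
# INTENT_RELATIVE_COLORIMETRIC or INTENT_ABSOLUTE_COLORIMETRIC to compare.
srgb_im = ImageCms.profileToProfile(
    im, src, dst,
    renderingIntent=ImageCms.INTENT_PERCEPTUAL,
    outputMode="RGB",
)
srgb_im.save("for_the_web.jpg", quality=92)
```

Each intent trades off the out-of-gamut colors differently, which is exactly the choice described above.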
Reply from Glen Charles on 04-3-13 3:05 AM
Okay, now you have your TIFF image.
What we have to do to it first is adjust the overall color balance of the picture. You may remember the original scene as being warmer in color, so adjust the warmth setting. Everything else will be adjusted selectively, i.e. we are going to make changes to various parts of the picture - a little lighter here, a little sharper there, more blur for this part, an adjustment to a color in that part, and so on.
We are going to replicate the changes that the eye made as it scanned the scene. The eye adjusted its aperture (the iris) as it scanned from light to dark; we used a fixed aperture to make the shot. Now we are going to effectively adjust the aperture for every location that came out too dark or too light in the image. We do this using several methods.

The most obvious is to locally change the lightness of the image itself directly. We do this by brushing in a color adjustment. This is the technique you can use with Nikon NX2 and its U Point technology.
The next approach is to create multiple versions - a darker image with high contrast, a lighter image, an image with low contrast - all made from the original and layered over one another, and then blend parts of each variation into the picture.
A third approach is to create luminance masks which select part of the image, to which we can apply changes (sketched at the end of this post).
These latter methods require multi-layer capability, as in Photoshop, Painter, or Live Picture (a Mac Classic app).

So you see that a simple photo editor with no ability to create layers is not going to manage this local image adjustment. NX2 is an exception - through its U Point technology local modifications can be made, and for some pictures that will be enough.
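Here is the promised sketch of the luminance-mask approach, as a minimal numpy/Pillow example: build a mask from the image's own brightness and blend a lightened copy into the shadows only. The file name and the 1.4x dodge factor are just examples.

```python
import numpy as np
from PIL import Image

im = np.asarray(Image.open("scene.tif").convert("RGB"), dtype=np.float32) / 255.0

# Luminance of each pixel (Rec. 709 weights) - this is the mask source.
lum = 0.2126 * im[..., 0] + 0.7152 * im[..., 1] + 0.0722 * im[..., 2]

# A mask that is strong in the shadows and fades away in the highlights.
shadow_mask = (1.0 - lum) ** 2

# A brighter "dodged" version of the whole frame.
lighter = np.clip(im * 1.4, 0.0, 1.0)

# Blend the lighter version in only where the mask allows it.
result = im * (1.0 - shadow_mask[..., None]) + lighter * shadow_mask[..., None]

Image.fromarray((result * 255).astype(np.uint8)).save("scene_dodged.tif")
```

The same pattern, with the mask inverted, burns down the highlights instead.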
Reply from Glen Charles on 04-3-13 2:42 AM
To begin with, there is the task of recreating with our software the tones in the scene. Usually the camera will miss the shadow detail which was very clear to the eye, or it will not be able to distinguish the details in the bright clouds. This is because the contrast range of the scene (from the brightest part to the darkest) is wider than the sensor can cope with. So the first thing we must do is adjust the camera to reduce the contrast range.

There is often a histogram display that shows the contrast range of the scene. Reduce the contrast response of the camera (the "low" setting) until the histogram shows that all of the scene can be captured. The camera may not have enough adjustment. On my camera I swap the lens for a low-contrast lens (usually a lens I use for portraits). I have one camera with a zoom lens that seems to be able to capture the widest contrast range with no problem.

So this is step one. If you want to end up with a high-contrast picture, add that contrast back later in post-processing.
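Once a test shot is on the computer you can make the same clipping check numerically. A small Pillow sketch - the 1% thresholds are arbitrary and the file name is made up:

```python
from PIL import Image

im = Image.open("test_shot.jpg").convert("L")    # luminance only
hist = im.histogram()                            # 256 bins, darkest to brightest
total = sum(hist)

# Fraction of pixels piled up at the extreme ends of the range.
crushed_shadows = sum(hist[:4]) / total
blown_highlights = sum(hist[-4:]) / total

if crushed_shadows > 0.01 or blown_highlights > 0.01:
    print("Scene contrast exceeds the capture - lower the camera's contrast "
          "setting or change the exposure and reshoot.")
```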

Now save the image in RAW or TIFF for later editing. RAW is a bit of a pain to use. For one thing you have to spend a lot of time doing the conversion from RAW to TIFF. A camera that can save in TIFF (like the professional Nikons, and the Digilux 1 amongst others) saves all sorts of time because the image is ready to edit straight from the camera. There is no conversion needed.

Most photographers don't have cameras that save as TIFF, and many of them save as JPEGs for convenience. No matter: you can convert the JPEGs to TIFF in a second or two and edit the TIFF files. Some software can convert a JPEG into a RAW-style file for further editing, though it cannot really recover the data that JPEG compression has already thrown away. (JPEG is a compressed format that discards a lot of image data to make the file smaller, so that more will fit on the SD card - in the early days of digital photography this was a real issue; one TIFF file could fill an entire card.)
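The JPEG-to-TIFF step really is trivial; with Pillow, for example, it is a short loop (the folder name is hypothetical):

```python
from pathlib import Path
from PIL import Image

# Convert every JPEG in a folder to a TIFF ready for editing.
for jpg in Path("card_dump").glob("*.jpg"):
    Image.open(jpg).save(jpg.with_suffix(".tif"))
```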

Now, in the process of conversion from RAW to TIFF, some file data will be thrown away. This extra data is of no use to us on the web, but it is valuable for commercial print production.

The original file in the camera is a high-bit-depth file (typically 12 to 16 bits per channel), and we need an 8-bit file for the web browser and for the monitor. This change is made during the conversion from RAW. You can do it automatically, or you can make adjustments to the RAW data before conversion (usually white balance and lens correction). Any other changes can be made in your editing software, which is much more capable and can handle multiple images and multiple layers - essential to making a good picture.
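If you do the RAW conversion in code rather than in a converter, a hedged sketch with the rawpy library (assuming it is installed; the file name is made up) looks like this - white balance comes from the camera setting and everything else is left for the editor:

```python
import rawpy
from PIL import Image

# Develop a RAW file into an 8-bit image using the camera's own white balance.
with rawpy.imread("DSC_0001.NEF") as raw:        # hypothetical RAW file
    rgb = raw.postprocess(use_camera_wb=True,    # white balance set in-camera
                          output_bps=8)          # 8 bits per channel for the editor

Image.fromarray(rgb).save("DSC_0001.tif")
```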

After the conversion you will have an 8-bit TIFF file. After converting from JPEG to TIFF, you will also have an 8-bit TIFF file.

All the files that are going to be edited will be TIFF files. We never edit the JPEGs; we just store them away in a safe location.

BTW, though there are many ways of storing and filing images, the safest way is to name the file for its content, or for the camera and lens used plus a unique identifier. Filing by date is hopeless. Filing with sequential numbers is hopeless. Relying on a program like Aperture or Lightroom to file your pictures can be a disaster when you are looking for an image ten years down the road.
This means you have to rename every file.
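A renaming pass can be scripted. Here is a hedged Pillow sketch that prefixes each file with the camera model read from EXIF tag 272 (the standard "Model" tag); the folder name and naming scheme are only examples, and lens information would need the EXIF sub-IFD:

```python
from pathlib import Path
from PIL import Image

# Rename files to "<camera model>_<original name>.tif" so the file name
# itself says something about the shot.
for path in Path("to_file").glob("*.tif"):
    with Image.open(path) as im:
        model = im.getexif().get(272, "unknown-camera")   # EXIF "Model" tag
    model = str(model).strip().replace(" ", "-")
    path.rename(path.with_name(f"{model}_{path.stem}.tif"))
```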

I confess, I have not renamed all 20,000+ images on my computer. But I use a very reliable database program called Cumulus to catalog my pictures. Each picture is tagged with several of my roughly 300 categories, and I can find a specific picture in a couple of minutes. If the database is corrupted by a power cut or some other accident, it will rebuild itself; in ten years I have never lost a database. In any case the images are always safe in their folder - I have one 100 GB folder, which is duplicated on two other external drives (named the same, so the catalog index still works).
I'm not suggesting you use Cumulus, because it is rather expensive and is designed for large corporations. But you should have your own system for finding images that doesn't rely on proprietary software - which one day won't be available, because of platform changes.
Reply from Glen Charles on 04-3-13 2:04 AM
You are beginning to see, I hope, that there is quite a bit of work to be done just to get the colors you saw in the scene onto the monitor being used by the viewer.

I will tell you how to do this, further on.

Right now let's focus on our own monitor and software, and try to reproduce what we saw at the scene.

Reply from Glen Charles on 04-3-13 1:55 AM
On the camera you can sometimes select between two or three color descriptions. Usually these include sRGB and Adobe RGB 1998. Some cameras give you no choice and describe their colors in "camera RGB." By the time your image is in your editing software, this color description has been translated to the sRGB of your display. Now, if you don't like the colors, you can tell the software to assign a different color space to the image. On most programs you can choose sRGB as your default source, or you can select from dozens of color spaces in a pull-down menu. This is a good experiment, because it shows how different color interpretations (color spaces, or profiles) create different colors from exactly the same data in the file.

In this case the CMS converts the color space you have selected into sRGB so that you can see it on the monitor.

There are many photographers who think that RAW images have no color space, that it is just RAW data. But until that data is assigned a color profile, nothing can be seen on the monitor. So, a RAW file contains a bit of information on what colors the numbers in the file should relate to.

The advantage of CMS (which is also its trouble) is that images can be made to look the same on any system that has a properly calibrated color profile available to the display software. Unfortunately many monitors have no proper profile, a generic profile is assumed by the software, and anything can result. Uncalibrated monitors often show a strong color cast - frequently blue - on every image.
Another issue is that some software is unaware of the profile that comes with the image, and of the profile of the monitor (all versions of Photoshop prior to PS 6, much browser software today, and many simple editing programs and filters, like Snapseed).

Images that contain a color profile will display different colors depending on whether the browser is Safari, Internet Explorer, Chrome, or Firefox. At the time of writing, only some browsers (notably Firefox and Safari) honor the embedded color profile; the rest ignore it and assume their own default, usually sRGB.
So, if the values of the colors in your image depend on the embedded color profile, you must expect that no one but you will ever know what the real colors are.

There are ways to fix this.
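One quick check before uploading is whether your file even carries a profile for a browser to honor. A hedged Pillow sketch (the file name is made up):

```python
import io
from PIL import Image, ImageCms

im = Image.open("upload_candidate.jpg")
icc = im.info.get("icc_profile")

if icc:
    profile = ImageCms.ImageCmsProfile(io.BytesIO(icc))
    # The description is the human-readable name stored inside the profile.
    print("Embedded profile:", ImageCms.getProfileDescription(profile).strip())
else:
    print("No embedded profile - browsers will guess, usually assuming sRGB.")
```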
Reply from Glen Charles on 04-3-13 1:33 AM
What are the difficulties that we face?

The light coming through the lens strikes the sensor photosites, and its energy is stored during the exposure. There are usually four parts to a photosite, each collecting energy from a different band of the spectrum: one part collects short-wavelength light, one long-wavelength light, and two medium-wavelength light. The industry refers to this light as red, blue, and green, and the sensor is called an RGB sensor. In actual fact there are no colors in the light at all, just as there are no colors in the scene.

The human brain creates colors from the energy (or equivalently the wavelength - the two are directly linked) of the light reaching the eye. There are millions of detectors in the eye that respond to different wavelengths of light, and these produce electrochemical signals that are sent to the brain, and there the brain creates color.

The camera creates color from the electrical signals that accumulate in the parts of its millions of photosites. For example, a 10-megapixel camera like the M8 has roughly 2.5 million sites that collect long-wavelength energy, 2.5 million that collect short-wavelength energy, and 5 million that collect medium-wavelength energy.

The physical device that selects light of a particular wavelength (actually a group of wavelengths) is called an optical filter. In an RGB camera, the filter is constructed of two layers of colored "glass" which together allow only red-ish, blue-ish, and green-ish wavelengths of light to reach the photosites. The filters also absorb many wavelengths of light, which therefore never reach the photosites - including yellow-ish, cyan-ish, and magenta-ish light. Roughly two-thirds of the light coming through the lens never makes it to the photosites. Wavelengths beyond the visible spectrum also come through the lens (ultraviolet and infrared) and would create an electrical signal if they were not stopped by an IR filter and a UV filter (also "glass").
Now, the electrical signals stored in each of the four parts of the photosite are transferred to the camera electronics as numbers.
All these numbers are stored on the camera card in a file. The numbers range from 0 to 1023 for a 10-bit sensor (many cameras use 12 or 14 bits) and represent the total energy captured by each part of the photosite. The software in the camera has to interpret these numbers and apply electrical signals to a display device that you can look at. This device is made of thousands of picture elements, with optical filters over its parts (red, blue, and green). Up to this point there is still no color anywhere in this image. The light coming through the display consists of bands of light of short wavelength (in the visible range only), long wavelength (similarly), and medium wavelength (similarly). The eye filters this light and channels it to the brain, and there, for the first time, colors are constructed.

Now, one of the purposes of the camera is to reconstruct the visible light from the scene and send a similar light to your eye, such that the reconstructed light matches the scene luminance, and color.

As I said, two-thirds of the energy in the original scene gets absorbed by the camera's filters. (When I say energy, read wavelength of light as well.) The camera software has to guess at which energies are missing and recreate them. It evaluates the RGB signals for each photosite and creates yellow, or cyan, or magenta, or whatever, based on how much signal came from the red, blue, and green parts of the photosite. Of course, it is not making colors; it is only making numerical data.
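That guessing step is called demosaicing. Here is a toy numpy sketch of the crudest possible version, assuming an RGGB layout: each 2 x 2 block of photosite parts is collapsed into one RGB pixel, with the two green samples averaged. Real converters interpolate far more cleverly, so treat this only as an illustration of how four monochrome numbers become one RGB triple.

```python
import numpy as np

def demosaic_rggb_blocks(raw):
    """Crude demosaic: collapse each RGGB 2x2 block into one RGB pixel."""
    r  = raw[0::2, 0::2].astype(np.float32)        # red-filtered parts
    g1 = raw[0::2, 1::2].astype(np.float32)        # first green
    g2 = raw[1::2, 0::2].astype(np.float32)        # second green
    b  = raw[1::2, 1::2].astype(np.float32)        # blue-filtered parts
    g = (g1 + g2) / 2.0                            # the "guess" for green
    return np.stack([r, g, b], axis=-1)

# 10-bit raw values (0-1023) for a tiny 4x4 patch of sensor.
raw = np.random.randint(0, 1024, size=(4, 4))
rgb = demosaic_rggb_blocks(raw)                    # shape (2, 2, 3)
print((rgb / 1023.0 * 255).round().astype(np.uint8))   # scaled to 8-bit numbers
```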

Now, the camera software has to translate the light energies from the original scene into light energies for your eye, as we said, but the filters in the camera are not the same as the filters in the display, neither in number nor in "color." In fact, the display's filters cannot pass all the wavelengths that the camera sensor was able to detect, because the displays we use have to be limited to keep down their cost. If you wanted to see all the wavelengths you would need a $35,000 monitor. It is possible to send the data from the camera to a monitor like this and view your images in their full glory (this is what NASA and medical imaging people do).

You are not going to be seeing the full captured image on your home monitor. The industry has established what you will see according to how much money you are willing to spend.
An ordinary photographer working at home is going to see a range of created colors described in the industry as sRGB. This color description provides for most of the colors in nature, but few of the man-made colors that go beyond it. For instance: art. The colors used by artists go way beyond what nature offers; they are made by chemistry that doesn't exist in nature. You cannot photograph an oil painting and expect to capture the colors, because the sRGB color space doesn't include the deep blues and saturated yellows that are painted onto the canvas.
And again: advertising uses colors that cannot be reproduced within sRGB - these colors can be printed, but not displayed on an inexpensive (under $25,000) monitor. In advertising, and in fact in any industry that prints or paints color, the color space used to describe these colors is L*a*b*. If you want to specify the color of the car you are ordering, the color swatches you are looking at are described by L*a*b* numbers. And those numbers describe the color exactly.
You certainly couldn't describe the color in sRGB, because not only is its gamut too limited to hold the deep red of your new car, but sRGB colors vary from device to device - they are "device dependent." So a file of numbers which creates a red in sRGB on your monitor will create a different red (from the exact same numbers) with different software on another monitor. Imagine if the car manufacturer were looking at the red on his monitor and when you got the car it wasn't the color you expected.

So, in industry colors are described by numbers that are universally understood - not sRGB, or Adobe RGB, or Wide RGB, or any other RGB.

But we are stuck with some description of RGB to display our pictures. Just realize that no one else can create exactly the same colors on different hardware using different software.

Although L*a*b* can describe every color that exists (and some beyond), we do not have any device that can display L*a*b* directly. Everything has to be converted into sRGB for ordinary monitors, or into CMYK for ordinary printers (which can reproduce some colors that we cannot see on a monitor).
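One direction of that conversion can be written out directly. Here is a sketch of the standard sRGB-to-L*a*b* formulas (D65 white point) for a single 8-bit pixel value; a real workflow would use a color management library, but the arithmetic shows how an sRGB triple maps to the device-independent L*a*b* numbers discussed above.

```python
def srgb_to_lab(r8, g8, b8):
    """Convert an 8-bit sRGB triplet to CIE L*a*b* (D65 white point)."""
    def to_linear(c):                      # undo the sRGB gamma curve
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

    r, g, b = (to_linear(v) for v in (r8, g8, b8))

    # Linear sRGB -> CIE XYZ (sRGB primaries, D65 white)
    x = 0.4124 * r + 0.3576 * g + 0.1805 * b
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    z = 0.0193 * r + 0.1192 * g + 0.9505 * b

    # XYZ -> L*a*b*, normalized to the D65 reference white
    def f(t):
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29

    fx, fy, fz = f(x / 0.95047), f(y / 1.0), f(z / 1.08883)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

# A saturated sRGB red and the L*a*b* numbers that pin it down unambiguously.
print(srgb_to_lab(200, 30, 40))
```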

I have several monitors and no two of them create the same color from the same image file.

The computer industry has tried to address this problem, and since about 1998 or so computers have included color management software (CMS) that can be called on by programs to display images in the proper color. On Apple computers it is ColorSync, and there are usually choices of "Heidelberg CMS," "Kodak CMS," "Nikon CMS," etc. Each device is given a file that describes how that device responds to the numbers in the image file, and the color management system interprets the data accordingly. These files are called "profiles," and you need one for every device in the image chain from camera to display.

Reply from Glen Charles on 04-2-13 11:47 PM
Any good picture editor will provide a means for adjusting the lightness of any object in the picture, without affecting the lightness of those around it. Also the editor will allow the color of any object to be adjusted without affecting the color of other objects close by.
Any good picture editor will provide a means for adjusting the sharpness and softness of objects without affecting the sharpness and softness of objects close by.

In addition, you must be able to make global adjustments to color to eliminate a color cast, and to adjust the darkest and brightest parts of the picture to match the range of whatever will be used to display the picture on a monitor.
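As a concrete example of a global cast adjustment, here is a minimal numpy/Pillow sketch of the classic gray-world correction: assume the scene averages out to neutral gray and scale each channel so the three channel means match. The file names are hypothetical, and the assumption fails on scenes dominated by one color.

```python
import numpy as np
from PIL import Image

im = np.asarray(Image.open("casted.tif").convert("RGB"), dtype=np.float32)

# Scale each channel so its mean equals the overall mean (gray-world assumption).
means = im.reshape(-1, 3).mean(axis=0)
gains = means.mean() / means
balanced = np.clip(im * gains, 0, 255).astype(np.uint8)

Image.fromarray(balanced).save("casted_neutral.tif")
```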

If you want to show the picture on another monitor, then you must make sure that the software and monitor understand the color data in the file and present the picture just as you intended it to be seen.
Reply from Glen Charles on 04-2-13 11:28 PM
To begin, we will study how the eye gathers information from the scene about the objects in it and the spatial relationships between them, because we want to know how to duplicate this vision with our equipment. Our experience with the camera has been that the camera has a completely different understanding of the scene: what we see on the monitor is a two-dimensional representation of a three-dimensional experience.
The human experience is that some objects are in front and some behind; the camera shows some objects sharp and some blurred. Human vision scans the scene in roughly two-degree sections, adjusting the iris of the eye to brighten the dark areas and tone down the bright areas. The camera cannot do this - the aperture is fixed for the entire scene at the moment of exposure.

After we have made the picture, in order to match the vision of the eye, we must adjust the picture in our picture editor. We must brighten up the dark areas, tone down the bright areas, and create clues that will indicate the spatial arrangement of the objects in the picture.

Also we must adjust the colors in the picture to better match what we saw in the scene. It is not often that we can apply a global adjustment for color, and get something that looks true to our vision.
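A first, global pass at that brightening and toning down can be a simple tone curve. A minimal numpy/Pillow sketch - the control points are arbitrary, and the local, brushed-in adjustments discussed elsewhere in this thread still do the real work:

```python
import numpy as np
from PIL import Image

im = np.asarray(Image.open("flat_capture.tif").convert("RGB"), dtype=np.float32)

# Lift the shadows and pull back the highlights, leaving midtones in place.
control_in  = np.array([0.0,  64.0, 128.0, 192.0, 255.0])
control_out = np.array([0.0,  80.0, 128.0, 180.0, 245.0])
curved = np.interp(im, control_in, control_out)

Image.fromarray(curved.astype(np.uint8)).save("flat_capture_toned.tif")
```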
