The Sensor - Pixels
It's time to talk about the little pixels, all the pixels on our camera, how important they are and what they're doing. So, pixel of course comes from the words picture element, so it's kind of a contraction of that, and it is the light-sensitive cell in our camera that is recording the light; it's what has replaced film. Now a very common sensor by today's standards will have about 4000 rows and about 6000 columns across, and if you run 4000 times 6000 through your calculator, you're gonna get 24 million, which is what we call 24 megapixels, so 24 million pixels on the sensor, each recording its own unique bit of information. Now the pixels themselves are kinda like this, and there's these little micro lenses in front to make sure all the light is getting into the sensor properly. And I bet you've noticed that newer cameras have more pixels than older cameras, right, we've all seen that. Now what's happening is the sensors are not necessarily getting bigger on these cameras. There's
a variety of sensors, but they'll have more pixels than they did last time around. Well, what's happening is they use smaller pixels so they can fit more pixels in the same area. And is that good or bad? More is good, right? But smaller, no, that's not as good. Wait, is it better or is it worse? There are these counteracting factors pushing it better and worse, but we also have technology working for us, so there's a number of issues controlling the quality of the pixels and the information they provide us. Now as you might guess, I like to test out different equipment to see how good this is versus that. And I wanted to compare some very modern cameras, so we're gonna be looking at a 42 megapixel full frame camera, a 30 megapixel full frame, a 24 megapixel crop sensor, which is probably the most common sensor on the market these days, right there in the middle in blue, a 20 megapixel crop frame and then a 20 megapixel Four Thirds sensor. There should be an image quality difference here, 'cause we've got more pixels generally as we go to the left and we have bigger sensors as we go to the left, so let's go ahead, zoom in on this image and take a look at the image quality from crops of these different sensors. Now I can generally see that it is sharper on the left and less sharp on the right-hand side. Now there are a couple of things that are a little bit funny; for instance, the right two photos are both 20 megapixels, and one is a much smaller sensor but the image quality is about the same, and that's just because the 20 megapixel on the 1.6 crop sensor is a little bit older technology. And so the age of the technology also matters. And so the more pixels you have, and the bigger the sensor in general, the better picture you're gonna get when you really zoom in. Now, normally this is what is known as pixel peeping. You don't do this for enlarging; you're not gonna say this is a great image here.
You're not gonna enlarge to that size; you're just enlarging it to see how sharp it is, and some people can get a little obsessive about checking for sharpness rather than looking at the content and whether it's a good photograph. But there is a time and place to dive in and really see if you are pixel perfect when it comes to sharpness and image resolution and so forth, so for a moment, we're diving into this world. You do have to be realistic about how it's gonna play out in your final image. And so when it comes to the number of pixels that you need and what is better, I think a good analogy to work with is donuts, all right. I'm sure we all like donuts, right? So imagine you are tasked with going to the donut store and picking up donuts for the office, and you've got 12 workers, so you're gonna get a box of 12 donuts, right? Makes perfectly good sense. Now you get up to the register and they say, "Hey, tell you what, we got a two for one offer. You can get 24 donuts for the same price as 12. Do you wanna do that?" Yeah, I think I'd take 24 for the same price as 12, makes perfectly good sense. They take away your box of donuts, bring out another box and put in smaller donuts. Wait a minute. You're giving me more, but it's not really more, it's just more in numbers, because they're smaller donuts, so every person's gonna have to eat two donuts to get the same fill as they would get from one donut. So is more better? Maybe, but not necessarily if they're smaller. Size versus quantity: there's a balance of what's appropriate and necessary for any particular situation, and that's true whether you're talking about pixels or donuts. Do you want more or do you want less? What it comes down to is: more is good if you need it. If you don't need it, you're gonna be wasting stuff.
If you have a 50 megapixel camera and you shoot pictures that go on Facebook and Instagram at 1000 pixels on the long side, you're gonna be throwing away thousands and thousands and thousands of pixels all the time, 'cause you don't need them for resolution. It really depends on what you're doing with your images and how much resolution you need. So what is the difference between 12 and 24? And I know this sounds like a really dumb, simple question. 24 is twice as many as 12, would we all agree on that? But it's not twice the resolution. Think about it: lay 12 megapixels out as a grid three rows tall and four columns wide, and doubling it to 24 gives you roughly four rows by six columns. This double-the-megapixel sensor has double the pixels, but each dimension only grows by around 40 percent, and that's because we're working in height and width. And so what this really means in the real world is that if you have a few more megapixels, it makes virtually no difference at all. You need a lot more megapixels to make a big difference from what you currently have, so if you have something that's inadequate, you're gonna need to double it or quadruple it to make a significant difference in resolution. So if somebody is at 24 and somebody is at 28, (scoffs) it's all in the wash. You're not even gonna notice a difference. One of the differences that can be important is the size of the sensor. Now 24 megapixels seems to be the most common resolution on the market today, but we have cameras that are crop frame and cameras that are full frame that are both using 24 megapixels. How do they stick more pixels on the smaller sensor? Well, they just make smaller pixels and put them closer together.
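That relationship between pixel count and linear resolution is easy to check with a little arithmetic: resolution per axis scales with the square root of the total pixel count. A quick sketch (the megapixel figures are the ones from above):

```python
import math

def linear_gain_percent(mp_old, mp_new):
    """Linear (per-axis) resolution gain when changing megapixel counts.
    Per-axis pixel count scales with the square root of total pixels."""
    return (math.sqrt(mp_new / mp_old) - 1) * 100

# Doubling 12 MP to 24 MP only buys ~41% more pixels on each axis...
print(round(linear_gain_percent(12, 24)))   # 41
# ...while a bump from 24 MP to 28 MP is barely measurable.
print(round(linear_gain_percent(24, 28)))   # 8
```

This is why the 24-versus-28 comparison washes out in practice: you have to double or quadruple the pixel count before the per-axis gain is something you'd actually notice in a print.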
Now if you could dive into a really small section on the sensor and compare these two, what you would see is two different areas. Let's look at one millimeter and how many pixels we're gonna see, and to really see a clear difference, let's go in to a tenth of a millimeter, so one-tenth of a millimeter square. This is what the pixel layout would look like between the two cameras. And so we have more of them on the left in that given area, but we have bigger pixels on the right, and if you wanna record good, clean image quality under low light conditions, you want big pixels, 'cause they can record more light easily. And what happens is that when you shoot both of these cameras at ISO 100, which is kind of the optimum setting for these sensors, you're gonna see little to no difference. The whole full frame advantage here is not really playing out when it comes to image quality; they both have 24 megapixels. Where it comes into play is when you crank up the ISO to something pretty high: there's gonna be less noise, and we're gonna talk more about this noise, but image quality is gonna be better with a full frame sensor, all things being equal, because the pixel size is larger. Now for the way the pixels work, a pretty good analogy is a bucket collecting rain water. If we imagine light coming into the pixel, it's kind of like rain water filling up the bucket, and the idea is to fill up that bucket and collect as much information as possible. Let's say there's not as much light coming in in this next case, so we have a little bit of light coming in, and that's gonna fill up the bucket halfway. If the bucket's halfway, we know it's somewhere between light that fills the bucket completely and no light at all, which would be very dark, so this is a gray pixel here.
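The size difference between those two pixel layouts can be sketched with rough numbers. Assuming a 36 mm wide full frame sensor and a 23.5 mm wide crop sensor (typical published widths, used here purely as illustrative assumptions), both laid out 6000 pixels across for 24 megapixels:

```python
# Same 6000-pixel-wide 24 MP layout on two assumed sensor widths.
FULL_FRAME_WIDTH_MM = 36.0   # assumed full frame width
CROP_WIDTH_MM = 23.5         # assumed 1.5x crop (APS-C) width
PIXELS_ACROSS = 6000

pitch_ff_um = FULL_FRAME_WIDTH_MM / PIXELS_ACROSS * 1000    # pixel pitch in microns
pitch_crop_um = CROP_WIDTH_MM / PIXELS_ACROSS * 1000

# Light-gathering area goes with the square of the pitch.
area_ratio = (pitch_ff_um / pitch_crop_um) ** 2

print(round(pitch_ff_um, 1), round(pitch_crop_um, 1))   # 6.0 vs 3.9 microns
print(round(area_ratio, 1))                             # ~2.3x the area per pixel
```

So with the same megapixel count, each full frame pixel has roughly twice the collecting area, which is exactly the low-light advantage described above.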
If we have just a little bit of light coming into the pixel being recorded, we know it's gonna be dark, or charcoal gray, or something really, really on the dark side. One of the problems is if we record too much light and our cup spilleth over. We get too much light in there, and if you fill up a rain bucket and it goes over the top, you just start losing that information. In photography we call this clipping. We have lost that information, it's too bright, or in some cases you can go too dark on the other side. Now one of the interesting things that we can do here, 'cause this is not a real bucket, this is a virtual bucket, is we can fill it up halfway and say, okay, this is supposed to be a gray pixel, right? Well, we know what it's supposed to be. What if we were to just tell it to be brighter? Just turn up the brightness level. Remember the old TV sets that had the brightness knob on them? Let's just crank up the brightness and make it brighter. If you want, you can do that on your camera: take something that's dim and make it brighter just by turning up the signal from the sensor. But there is a problem in doing that, and so we're gonna fill up our remaining buckets with just a little bit of light, not very much. The problem is that in our sensors there's all sorts of electronics, and these electronics sit right around the individual photosites, and they heat up and cause a little bit of noise, a disturbance, you might say. Going back to the analogy of the rain bucket, I think of it as a layer of scum at the bottom of the bucket, okay, not very pleasant, but it's down there. And what do you think is gonna have cleaner water: a big bucket with just a little scum layer down there, or a bucket that's got barely any water sitting on that scum layer?
It's contaminating a lot more of it, and so when you don't record much light, the noise becomes more significant. What happens is when you start comparing these pixels, which should be identical, they're a little bit different, because they've been contaminated and there's not enough real information. And what it's gonna look like in a final photograph is this rough, unsmooth area of noise, and this is the problem with shooting your cameras at high ISOs, where we're taking this little bit of information and cranking it up, trying to make something more out of very little information. So one of the areas where you're gonna see a great difference is if you compare a real camera against a smart phone that has a camera. This is something the smart phone makers have been working hard on; Apple has something like 600 employees that work solely on the camera system, that's all they do. But they're at a serious disadvantage because of the size of the sensor in the camera. So I decided to take a relatively modern full frame camera up against the latest, greatest, classified right now as technically the best camera in a smart device, the Apple iPhone X, and compare: what's the difference between these two cameras? Well, one's got a full frame sensor, one has a one-third-inch type sensor, which means it's not really one-third inch, but it's really small in size. All right, if we were to compare the pixel sizes relative to one another, this is the difference, and it doesn't take a rocket scientist to figure out which one of these pixels is gonna record low light levels better. It's like a solar array: it's able to collect more light. And in most general lighting situations, the phone is gonna do a really good job.
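The bucket-and-scum analogy maps onto a toy signal-to-noise model: treat the noise floor as a fixed number of stray electrons and the collected light as the signal. The numbers here are made up purely for illustration, not real sensor specs:

```python
def snr(signal_electrons, noise_floor=5.0):
    """Toy signal-to-noise ratio: collected light vs a fixed 'scum layer'
    noise floor. Real sensors also have shot noise and other terms;
    this is just the cartoon version of the bucket analogy."""
    return signal_electrons / noise_floor

bright = snr(50_000)   # nearly full bucket: the scum layer is negligible
dim = snr(200)         # nearly empty bucket, pushed up at high ISO
print(bright, dim)     # 10000.0 40.0
```

Brightening the dim exposure afterwards multiplies the signal and the scum alike, so the ratio, and therefore the visible noise, doesn't improve. That's why the bigger bucket, that is, the bigger pixel, wins in low light.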
All you have to do (I'm a track and field guy, so I like using track and field analogies) is give it the smallest little hurdle and see if it can get over it, and that's where these devices start tripping up. The small hurdle that I'm giving it is slightly low light: I wanna photograph at sunset. So let's photograph Seattle at sunset and take a look at a cropped-in area and see if you can see a detail difference between a phone and a full frame camera. Look at the detail in the streets, in the buildings and so forth. And you know what, when you're sending out a small picture of what you're doing this evening, the iPhone photo is perfectly acceptable. There is nothing wrong with it. But if you really wanna get into the detail and work with it, enlarge it and work with it in many different ways, almost any camera is gonna be able to beat that. And so that's the inherent benefit of the larger size sensor. So let's take a look at some image quality differences between these different cameras at ISO 200, and you'll notice that the Canon 7D Mark II, which right now is getting a little bit long in the tooth, as they say, it's getting a little bit older, while the Micro Four Thirds system is a little bit more advanced technology and is kinda catching up to it. And we'll see some other slight differences from one to the next, but you gotta enlarge your images pretty large to see this difference. Now where this is gonna become more apparent is when we start cranking up the ISO even higher, and up here at a very high setting, the smaller size sensors start falling apart and you just don't get the image quality that you do with a larger size sensor. And so this is always going to be the case, but one of the things that I think is really important when it comes down to real world assessment is: how high of an ISO do I normally need to use?
You know what, it's fine that I'm telling you that full frame is better at ISO 12800, but do you know how often I shoot at 12800? Practically never. Most people never shoot there. I shoot there most frequently when I'm testing cameras, and so when it comes to testing cameras, this is the winner, but for real shooting it's not nearly as important, 'cause I'm not shooting at that level. But it depends on where you shoot. There are a number of different pixel patterns, and we're not gonna dive too deep into the weeds on how these are manufactured in all the different systems out there, but you should know that most cameras use a Bayer pattern system, which is a row of red, green, red, green, and then a row of green, blue, green, blue. There's twice as many greens 'cause humans are more sensitive to green light. I would never have guessed this in a million years, but this is the way it works in order to get proper image quality. And so your sensor is filled with all these different pixels, and when it actually records information at one site, it's only recording in that one color. It then shares information with its neighbors, and the camera extrapolates what the image actually looks like. And when you do this, lines that run in certain directions render very, very easily, but when you start turning the camera slightly, there are some lines that don't work as well, and we get a moiré problem. Now I love illusions, and I have a very disturbing illusion for you here; keep your eyes on the screen, this is good for you, folks. When you align certain patterns, one grid against another, you get these kinds of visual interference problems that don't look good. Camera manufacturers have known this, and what happens is if you're photographing a textile, for instance, that has a very fine weave to it, and you have a camera with just the right resolution, it's just gonna look weird.
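The red-green-green-blue layout described above can be sketched as a repeating two-by-two tile; counting sites over any even-sized patch shows the two-to-one green bias:

```python
# The Bayer mosaic repeats this 2x2 tile across the whole sensor:
#   R G
#   G B
BAYER_TILE = [["R", "G"],
              ["G", "B"]]

def color_at(row, col):
    """Color recorded by the single photosite at a given sensor coordinate."""
    return BAYER_TILE[row % 2][col % 2]

# Count the colors over a small 4x4 patch of the sensor.
counts = {"R": 0, "G": 0, "B": 0}
for r in range(4):
    for c in range(4):
        counts[color_at(r, c)] += 1
print(counts)  # {'R': 4, 'G': 8, 'B': 4} -- twice as many green sites
```

Each site records only its own color; demosaicing interpolates the other two channels from the neighbors, which is the sharing-and-extrapolating step mentioned above, and also why regular fine patterns can interfere with the grid.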
I remember back in the 70s when TV news anchors wore suits and ties that did not cooperate well with the lenses and sensors they were using on their cameras, and it looked like their clothes were on fire, 'cause you were getting all this moiré. We typically don't have that problem these days, because we have such high resolution; the higher the resolution, the fewer moiré problems you have. And so we don't worry about it on really, really high end stuff, but what a lot of cameras use, most cameras, is an anti-aliasing filter. Now pretty much all cameras will have an infrared filter, but they'll also have one or two different styles of AA filter in front of the sensor, and what this does, with the light coming into the sensor, is split the beam horizontally and then split it vertically, so that one ray of light hits in four different places. Now you would think this would cause the photo to become out of focus. Yes, but only technically, very, very slightly so: they defocus the image just a smidgen so that we don't have a moiré problem. And so if you have a camera with an AA filter, you can shoot textiles and fine grains and architecture with repeating patterns and things like that and not be worried about it. But what they've decided to do on a few of the higher resolution cameras out there these days, typically higher than 24 megapixels, is take off this AA filter so that you can get greater resolution but potentially a moiré problem, and so it depends on what type of work you do. Both Nikon and Canon have offered otherwise identical cameras with and without the AA filter, depending on what your needs are, and moiré is not a huge problem for most people, but for some people in certain industries it can be a big issue. And so there's a number of cameras, and this is by no means a comprehensive list, but these are a number of the cameras without an AA filter out there.
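That four-way beam split works out to a tiny local average. Here's a one-dimensional toy of the idea; a real AA filter is optical, and this average-with-the-right-neighbor kernel is just a stand-in for illustration:

```python
def aa_blur(row):
    """Crude stand-in for an AA filter: average each sample with its right
    neighbor (edge clamped). A hard edge becomes a one-pixel ramp."""
    return [(row[i] + row[min(i + 1, len(row) - 1)]) / 2
            for i in range(len(row))]

sharp_edge = [0, 0, 0, 100, 100, 100]
print(aa_blur(sharp_edge))  # [0.0, 0.0, 50.0, 100.0, 100.0, 100.0]
```

That 50 in the middle is the "smidgen" of defocus: the edge is slightly softened, but in exchange the interference between fine repeating patterns and the pixel grid, the cause of moiré, is suppressed.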
Nikon's been taking it off on a lot of their cameras to get greater sharpness. Fuji has taken it off for a completely different set of reasons that I'll explain on the next slide. Canon has only done it on, I believe, one of their cameras, which is the 50 megapixel super high resolution camera. So it's something that kind of is and kind of isn't being done, depending on the model and make that you go with. Now Fuji is not using it at this time because they have a different system: rather than the Bayer pattern system, they've gone with their own X-Trans CMOS sensor which, trying to mimic the film they used to make, uses a more random pixel pattern that doesn't have as much of a moiré problem when they take off that AA filter. And so it's been doing very well for them, and what a lot of people have found is that the Fuji cameras punch above their weight when it comes to sharpness: for a camera with that size sensor and that many megapixels, it looks really good, and so it tends to do very well in comparison tests, because they are using their own unique sensor system. Most of the sensors are made by just a handful of sensor manufacturers; when it comes to making camera sensors, there's like three or four companies, that's it. A lot of the manufacturers buy sensors from a competitor and then alter them to fit their cameras and so forth. Another unusual one is Sigma, which is using a Foveon chip, which records color in layers, in a similar way to how film did. Now this held a lot of promise in the early days, because each pixel could be sensitive to red, blue, or green light, and it would hopefully yield greater resolution and greater color accuracy. It's pretty good, but it's only in a few specialized systems, and it's something I would not hold my breath for, okay; it's not necessarily gonna make its way out there. But there are different systems out there, and that's the main thing I just wanted to impress at this point.
How does the sensor differ in some of those astrophotography cameras?
Okay, yeah, so there's a few cameras, Nikon notably has made some that are specially designed for astrophotography, and I believe in those cameras they have pulled off the infrared filter that we saw in that one slide. They've taken off this one extra filter so the sensor can see a slightly greater range of wavelengths of light, so they're altered in just a little way. And there's a number of cameras you could have altered to do that after the fact. There's also an infrared conversion that can be done: some people, when they buy a new camera, will take their old camera and send it in to have it modified to record infrared light. So there are some different models out there, and then if you wanna get really extreme, there are some scientific and forensic cameras that record light levels that are different than our normal cameras.
It's a drastic price jump and you just said they actually take stuff out.
Right, and it's the cost of customization: we have a standard product, and now you wanna have something removed, so we're gonna have to send somebody in to disassemble it, take that part out, and make sure everything still works right. Yeah, so it's less stuff, but it takes more work and it's more specialized.
Hi, my question was on the section where we were punching in and seeing them close up.
Were they all jpeg and is there a certain amount of processing that might actually account for some of the variability?
Right, that's very observant of you to notice that. I typically try to shoot all my tests in raw, not do any work on them, and export them as JPEGs, and so when I make them into JPEGs on the computer, there is gonna be some compression, but because I'm enlarging them so much, you shouldn't really see any difference there. But of course, for anything important that you're doing, run your own tests and see how things work for you. I think my tests will prove to be pretty fair in that regard.
Question from Maria who says are pixels different shapes or always square or always round?
Generally they're square. There was a period of time when Nikon cameras had pixels that were slightly elongated in one direction; it was kind of odd, 'cause they were a little bit longer. Fuji at one time was working with a completely different system where they had kind of an octagonal pixel next to a smaller one, so one would record bright light and one would record dim light, but they've kind of moved on from that. So in general they're square, 'cause that's how they fit them as close together as they can.