# Are they really sure this isn't an Airy disk? How was that ruled out?

How do they know that this is a spherical shell of gas, and not just something like an Airy pattern-like artifact produced by the VLT's large interferometric aperture?

Image from: https://www.eso.org/public/images/eso0906b/

I measure 180 pixels from the center to the first minimum, and the scale bar gives 72 pixels = 4 mas. That puts the first minimum at 10 mas, or 4.8E-08 radians. The wavelengths are around 1.6 microns, so, using 1.22 λ/d, that would correspond to a circular aperture of about 40 metres.
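
As a sanity check, those numbers can be run straight through the first-minimum formula. A short sketch (the pixel measurements are the ones quoted above; the wavelength is an assumed H-band value):

```python
import math

# Measurements read off the ESO image (as quoted above)
first_min_px = 180.0        # pixels from center to first dark ring
px_per_mas = 72.0 / 4.0     # scale bar: 72 px = 4 mas

theta_mas = first_min_px / px_per_mas                   # 10 mas
theta_rad = theta_mas * 1e-3 / 3600.0 * math.pi / 180.0  # mas -> radians

wavelength = 1.6e-6  # metres, roughly H band
# First Airy minimum for a circular aperture: theta = 1.22 * lambda / d
d = 1.22 * wavelength / theta_rad
print(f"theta = {theta_rad:.2e} rad, implied aperture d = {d:.1f} m")
```

This reproduces the roughly 40-metre figure in the question, which is comparable to the VLTI baselines used, so the question is well posed.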

Are they really sure this is a real shell, and not an Airy disk? How was that proven?

From https://www.eso.org/public/news/eso0906/

“We were able to construct an amazing image, and reveal the onion-like structure of the atmosphere of a giant star at a late stage of its life for the first time,” says Antoine Mérand, member of the team. “Numerical models and indirect data have allowed us to imagine the appearance of the star before, but it is quite astounding that we can now see it, and in colour.”

Although it is only 15 by 15 pixels across, the reconstructed image shows an extreme close-up of a star 100 times larger than the Sun, a diameter corresponding roughly to the distance between the Earth and the Sun. This star is, in turn, surrounded by a sphere of molecular gas about three times as large again.

T Leporis, in the constellation of Lepus (the Hare), is located 500 light-years away. It belongs to the family of Mira stars, well known to amateur astronomers. These are giant variable stars that have almost extinguished their nuclear fuel and are losing mass. They are nearing the end of their lives as stars, and will soon die, becoming white dwarfs. The Sun will become a Mira star in a few billion years, engulfing the Earth in the dust and gas expelled in its final throes.

Mira stars are among the biggest factories of molecules and dust in the Universe, and T Leporis is no exception. It pulsates with a period of 380 days and loses the equivalent of the Earth's mass every year. Since the molecules and dust are formed in the layers of atmosphere surrounding the central star, astronomers would like to be able to see these layers. But this is no easy task, given that the stars themselves are so far away - despite their huge intrinsic size, their apparent radius on the sky can be just half a millionth that of the Sun.

“T Leporis looks so small from the Earth that only an interferometric facility, such as the VLTI at Paranal, can take an image of it. VLTI can resolve stars 15 times smaller than those resolved by the Hubble Space Telescope,” says Le Bouquin.

To create this image with the VLTI, astronomers had to observe the star for several consecutive nights, using all four movable 1.8-metre VLT Auxiliary Telescopes (ATs). The ATs were combined in different groups of three, and were also moved to different positions, creating new interferometric configurations, so that astronomers could emulate a virtual telescope approximately 100 metres across and build up an image.

“Obtaining images like these was one of the main motivations for building the Very Large Telescope Interferometer. We have now truly entered the era of stellar imaging,” says Mérand.

A perfect illustration of this is another VLTI image showing the double star system Theta1 Orionis C in the Orion Nebula Trapezium. This image, which was the first ever constructed from VLTI data, separates clearly the two young, massive stars from this system. The observations themselves have a spatial resolution of about 2 milli-arcseconds. From these, and several other observations, the team of astronomers, led by Stefan Kraus and Gerd Weigelt from the Max-Planck Institute in Bonn, could derive the properties of the orbit of this binary system, including the total mass of the two stars (47 solar masses) and their distance from us (1350 light-years).

## Are they really sure this isn't an Airy disk? How was that ruled out? - Astronomy

Finally got my ETX back, and like many of the others, the scope was again out of collimation. I have an idea why this keeps happening, as I will explain later. The optics, drive and cleanliness tested out very good, so I decided to collimate the unit myself. It is not difficult. (Please note: this may void your warranty.)

If you notice when you get your ETX back that the secondary obstruction shadow is at 5:00, try this. In daylight, flip the mirror down and look straight through the back; you should see the secondary mirror offset at around seven o'clock. Remove the focus knob and the three screws that hold the OTA to the plastic housing, carefully pull the OTA out of the housing, and place it on a table with the focuser facing the same way it came out of the housing (right side).

There are six screws on the back, all of which take an allen wrench to adjust. Three of those screws have a large flat head; these hold the mirror assembly in the OTA, so do not touch them. You will notice three other set screws with Loctite painted on them. Mark their current position with a pencil, making sure to keep the orientation the same. With an allen wrench, loosen the two set screws on the left (you have to break the seal of the Loctite, so it takes a little force) and just back off all pressure on these two screws; it's about an eighth of a turn.

Now look through the OTA: you should notice that the secondary now seems centered, or closer to centered than it was. Tip the scope back up, put a little pressure on these screws, and check the OTA again. Finally, put the unit back into the housing and check the collimation on a Christmas ornament. Fine-adjust those two screws until the collimation is perfect. (It should be nearly perfect just from backing off the pressure on the two set screws.)

THE RESULT.
I tested the unit against a 6 inch Quantum, and felt the image was just as good, even better on some of the darker planetary detail. Please DO NOT attempt to do this unless you are sure your optics are good and you have had some experience collimating telescopes.

Why so many units out of collimation? I've had three. Either their collimating device is not calibrated correctly, or the Loctite is causing the problem.

## Are they really sure this isn't an Airy disk? How was that ruled out? - Astronomy

We will cover tester theory, design, and construction exhaustively in another treatise. Our purpose here is to learn the test procedure, so we will limit our discussion of tester features to essentials. Our test apparatus comprises two basic functional components: (1) a mounting platform stage providing linear, translational motion in X and Y axes, and (2) a very minute light source and knife-edge carried on this stage in a plane perpendicular to our mirror's optical axis.

The knife-edge and the light source are both mounted congruently in a plane (mounted in the same plane) through which the mirror's OA passes perpendicularly. This assembly is in turn mounted on the moveable platform stage so that it can be moved at right angles to and also along (parallel to) the mirror's OA. A dial or screw micrometer is provided for reading the amount of travel of the Y movement stage (motion along or parallel to the OA). Inasmuch as our light source and knife-edge are both mounted on the same plate carried on the platform stage (moving source tester), they move together as a unit in both X and Y axes. Special note: most experienced workers are more familiar with testers having a stationary light source, with only the KE moveable. In our treatise on testers, we will show why carrying both KE and light source together is more advantageous.

Surveying and Measuring the Paraboloid

Figures 8a through 8f show the appearance of a fully parabolized short focal length mirror for six different positions along its OA as viewed with the KE. By convention we will always begin by pre-setting the micrometer for our tester's Y-axis movement at zero after locating the tester to null the central region of the mirror. From there we will work the KE backwards along the OA, away from the mirror, to find the null point, successively, for several different designated zones on the mirror. Below each depiction of the mirror's appearance ("apparition") for each setting of the KE, we show a drawing depicting the mirror's apparent cross-section. Remember, we said we would always think of a concave spherical mirror as flat when viewed as nulled from its center of curvature. Similarly, we will think of the shape of the paraboloid, when viewed with the KE, as a variation from the flatness of our "flat" reference sphere.

After nulling the very central region of the mirror, we advance the KE away from the mirror and stop at the location shown at fig. 8b. Note that the mirror appears to have an annular, circular "crest" surmounting an apparent, gentle bulge all around its center just a little ways out. Our KE is exactly at the C of C of a very narrow zone surmounting this crest. More accurately (as no zone on a paraboloid can truly have a center of curvature) we are at that point on the OA where that zone's rays are exactly crossing it. Our micrometer will show us, when we inspect it, how far the KE moved backwards to provide this particular apparition of the mirror. The micrometer indicator will show us the KE's location along the OA where this zone's light rays cross over it, relative to its previous location.

We may continue backing the KE away from the mirror, noting, in succession, the other apparitions at c,d,e, and f. The micrometer will always show us the relative location along the OA for the C of C of the narrow zone represented by the crest of the bulge. In addition to being able to locate the C of C for any zone being nulled by our tester fairly precisely along the OA, we can also measure the location of the zone itself on the mirror, its radius from the center of the mirror. And these are the only two quantities we need to determine accurately during the figuring of our mirror in order to shape it into a section of the true paraboloid!

We will pre-determine which zones' centers of curvature we want to monitor before we begin figuring. Conventions or rules about the number and locations of zones for testing vary with workers. The popular convention of dividing the mirror into zones of equal area probably is most advantageous. Zones of equal area will provide for increasingly narrower and more closely bunched zones, successively, outwards towards the mirror's edge. This seems reasonable in that we must figure the outer zones to tighter tolerances than the inner zones. It is very much true, as an older master once told me, that: "The edge zone sets the mirror's performance."

Let us take as an example a project to figure a ten-inch mirror of sixty inches' focal length. To find the location of the middle of each zone (as a radius from the center of the mirror) for any diameter mirror, multiply the mirror's radius (in this case, 5 inches) successively by: 0.316, 0.548, 0.707, 0.837 and 0.945. For our ten-inch mirror the middle of each zone computed in this way will be, successively: 1.58", 2.74", 3.53", 4.185" and 4.725", as measured from the mirror's center (i.e., as radii).
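
Those multipliers are, to within rounding, the equal-area values √(1/10), √(3/10), √(5/10), √(7/10), √(9/10): the radii that bisect each of five annuli of equal area. A quick check (Python; note the last value computes as 0.949, slightly different from the traditional rounded 0.945 used in the text):

```python
import math

mirror_radius = 5.0  # inches (10-inch mirror)
n = 5                # number of equal-area zones

# Area-bisecting radius of zone k of n: r_k / R = sqrt((2k - 1) / (2n))
multipliers = [math.sqrt((2 * k - 1) / (2 * n)) for k in range(1, n + 1)]
zone_radii = [mirror_radius * m for m in multipliers]
print([round(m, 3) for m in multipliers])
print([round(r, 2) for r in zone_radii])
```

The small differences from the text's table (e.g. 3.54" vs 3.53") are just rounding at different stages.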

For the mirror's curve to be the correct section of a true paraboloid for its given diameter and focal length, the C of C of each zone is fixed by formula. The location of each zone's C of C is farther away from the C of C of the very central region of the mirror by the following distances:

Zone 1 (1.58" r): 0.010"
Zone 2 (2.74" r): 0.031"
Zone 3 (3.53" r): 0.052"
Zone 4 (4.185" r): 0.073"
Zone 5 (4.725" r): 0.093"

These values are determined by formula (fig. A) where "r" represents the radius of a zone on the mirror and "R" represents the radius of curvature of the mirror (as imagined, of course, as spherical, before figuring). This is not quite the formula most experienced workers are familiar with, as more commonly their testers have their light source fixed and only move the knife edge along the mirror's OA. As we explained previously, we will carry both the KE and the light source on a small plate together in order that we may move them simultaneously along the OA.
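
Fig. A is not reproduced in this text, but the tabulated offsets are consistent with the moving-source form d = r²/(2R), which is half the more familiar fixed-source value r²/R. This reconstruction is an inference from the table, not the author's own figure:

```python
R = 120.0  # radius of curvature in inches: twice the 60" focal length
zone_radii = [1.58, 2.74, 3.53, 4.185, 4.725]  # inches, from the text

# Moving-source Foucault offset of each zone's C of C from the
# central zone's C of C: d = r^2 / (2R)
offsets = [r * r / (2.0 * R) for r in zone_radii]
print([round(d, 3) for d in offsets])  # matches the table: 0.01" ... 0.093"
```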

It is singular and curious how some obsolete practices continue to be popular very long after much improved ones have been demonstrated. In our article on tester design and construction, we will show several enormous advantages for carrying both KE and light source together, mounted in a specific way.

Locating the Mirror's Zones

We need a practical method for accurately locating any given zone on the mirror for nulling with the tester's knife-edge. Let's look at figure 8d again. This illustration depicts the number three zone (3.53"r) being nulled by our tester's knife edge. This zone divides the mirror into two equal areas, and is by convention referred to as the ".707 zone". How can we be sure that the area being nulled (equally gray all the way around- the gray "crest" of the bulge) is actually centered on the 3.53"r zone? We will put a specially prepared marker in front of the mirror for locating its zones.

Zone locating masks for Foucault testing are of two basic types. The traditional type has two equal sized apertures cut into the mask for the left and right side of each zone. Over the years I evolved some major improvements in their design and application that improved their accuracy and convenience in use. Finally, though, I discovered the advantages of the "Everest" style zone locating mask, and began to make and use this type exclusively, rapidly incorporating improvements in Everest's basic concept just as I had with the traditional zone locating masks.

Everest's basic approach was to hang a section of yardstick in front of the mirror being tested with pairs of straight pins protruding from one edge to mark the radii of zones for testing. The straight pins would be seen in sharp silhouette against the zone being nulled- one could see their outlines, each on either side of the mirror, against the crest of the "doughnut" behind them. As embodied by Everest, the test is somewhat hampered by a perceptual defect. I have noted this defect and have improved the design by making each pair of straight, vertically standing markers (Everest's "pins") into markers curved to the same radii as the zones they represent. The improvement in certainty when locating a zone with this kind of mask is dramatic.

An example of this kind of mask is shown in figure 9. In this particular example (an early form of my improved design) the little marker "horns" protrude up from the crosspiece that supports them. This earlier example has the pointed tips of the indicator horns lying along a meridian across the mirror's horizontal diameter. Horns about twice as long as these, extending equally above and below the mirror's meridian of horizontal diameter, are even better. These curved indicator horns can be made quite long, since they accurately locate a zone lying everywhere underneath each horn's entire length. A curious perceptual effect is at work here. The longer the horns, the more certain the impression of the crest's location underneath them is. When you make your first zone locating masks, make these horns as long as you please, but each of them must be curved along its entire length to the radius of the zone it is intended to mark.

The mask is easily prepared with a beam compass on poster or illustration board, and then cut out with a sharp hobby knife. The configuration shown in figure 9 is just about perfect - but extend the narrow, curved horns upwards through the mirror's middle, horizontal diameter farther than I show them. I have gotten best results with masks that have the horns extending an equal distance above and below the mirror's horizontal diameter. They should be kept quite narrow, especially for smaller mirrors.

Fig. 9 shows the .707r zone being nulled. The middle pair of indicator horns (third pair, outwards from mirror's center) appears to be lying directly atop the crest of the torus-like or doughnut-like bulge. We can have confidence with this indication that the KE is very close to the C of C of this zone. The appearance will be the same for the other zones represented by the other indicator horns when the KE is at their respective centers of curvature. In each case, that zone's particular indicator horns will appear to be lying directly atop the crest of the bulge.

As it turns out, figuring the mirror so accurately that the readings for the KE's positions along the optical axis fall precisely as predetermined is neither possible nor necessary. There are two reasons for this. Firstly, there will always be at least a very small domain of ambiguity for the position of the KE when we try to null a zone with the KE on the optical axis. This is because the C of C of any zone being considered, no matter how narrow we define the zone as, does not truly lie on the OA. Secondly, the physical properties of light also decree a range of ambiguity in the location of the plane of focus for any given bundle of rays of light being focused into a point in the focal plane. In fact, no lens or mirror can actually focus light into an infinitesimally small point of light in its focal plane. Rather, when examined up close, we find the tip of the cone of a focused bundle of light not to be a tiny sharp point, but rather a very small disk with a measurable diameter. This little disk of light is the so-called Airy disk (sometimes also referred to as the "diffraction disk").

An image in the focal plane of any mirror or lens is an accumulation of tiny Airy disks all over its surface, representing the tips of many cones of focused light from many different points of origin in the object or field of view being imaged. Each of these myriad cones of focused light is a reflected bundle of parallel light from a single point source in the field of view of the telescope. Each entire bundle of parallel light represents each point source in the field of view and approaches the mirror or lens at a slightly different angle. Each of these bundles of light is then reflected (or transmitted through a lens) at an angle that corresponds to the angle it approached the lens or mirror. Consequently, each bundle of focused light places its Airy disk in a place in the focal plane that corresponds to its point of origin in the field of view. We may think of these little disks as image "pixels", somewhat analogous to the image pixels on the screen of one's computer, although these "pixels" (Airy disks) are circular in shape, unlike the square pixels in a computer screen's image. Or, alternatively, we might think of these Airy disks as analogous to the halftone engraving dots in a newspaper photograph: an accumulation of them all over a plane of focus builds up an image. In our telescope, this plane of Airy disks (the focal plane) might lie on the surface of a piece of ground glass, or on the surface of a photographic plate or piece of photographic film, or on a modern CCD image sensing array, depending on what we are doing with the telescope. Usually, this field of Airy disks is just floating in space in the plane of the field stop of an eyepiece, when we observe visually.

Now, the size of the Airy disks at the tips of each of these bundles of focused rays can be measured, and is different for different-sized lenses or mirrors. The size of the Airy disk is a function of the focal ratio of the mirror or lens. If we move slightly inwards along the OA (towards the mirror) from one of these disks in the focal plane for a cone of focused light, we will finally come to a place along the cone where a cross section of it will be a disk having the same diameter as the Airy disk at its tip. Conversely, if we move outwards along the OA (farther away from the mirror or lens) from the Airy disk in the focal plane, we will again come to a point where the re-expanding cone of light has a circular cross section that again equals the diameter of the Airy disk. If we inserted a small square of finely ground glass in the focal plane and moved it back and forth between these two locations, we would not see the little focused dot of light on the glass change diameter. In short, it is quite impossible for us to find a precisely defined focal plane for any mirror or lens. Rather, we will have this very short region in which the focus will be found to be acceptable. Thus, we need only figure our mirror accurately enough that the tip of the cone of light focused by any given zone on the mirror will fall somewhere between these two locations along its optical axis. This range of locations for the C of C of any zone constitutes our allowed (tolerance) error for its location.
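
With numbers, for the ten-inch f/6 mirror used as the running example: the marginal ray converges at slope D/2F, so the blur spot grows back to Airy-disk size at a defocus of about 2pF/D on either side of the focal plane. This is my arithmetic, using the standard Airy-radius form consistent with fig. D later in the text:

```python
F, D = 60.0, 10.0   # focal length and diameter, inches (10" f/6)
w = 0.0000216       # wavelength of yellow-green light, inches

p = 1.22 * w * F / D        # Airy disk radius at focus
# Marginal-ray slope at focus is (D/2)/F, so a defocus x gives a blur
# of radius x * D / (2F); that equals p when x = 2 * p * F / D.
half_depth = 2.0 * p * F / D
print(f"p = {p:.6f} in, acceptable focus range = +/- {half_depth:.4f} in")
```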

The amount of error that is allowed for the location of the focal plane to deviate from its ideal location for any given zone on a mirror has been worked out for us with the science of geometry. For our purposes it is not necessary to elucidate the entire method of determining the allowed error. Rather, we want to know how these allowed amounts of error translate into allowed ranges of location for centers of curvature of any given zone for our mirror. In other words, how large a range of position is allowed for the location of the KE for any given zone under test?

This range of allowed locations for the KE is determined by the simple formula in fig. B. We will call this amount of allowed range of variation in the location of the C of C for any zone "X". This quantity, X, represents the amount of distance the KE may be closer to the mirror by, or farther away from the mirror by, than the computed ideal location of each zone's C of C. We show a summary of the meaning of X in illustration in fig. B(a).

In this diagram we see the cone of light returning from our tester's light source, focusing down to a near point in its focal plane located at its center of curvature. We have inserted a small square of ground glass in this focal plane and note the tiny spot of light representing the Airy disk projected onto it. We may move the ground glass closer to the mirror by the amount "-X", before the cross section of this cone of light represented by the spot projected onto its surface is larger than the Airy disk (position marked "1st"). Also we may move it farther away from the mirror, passing through the focal plane at C of C and advancing beyond it again by a distance equal to "+X", (position marked "2nd") before the cross section of the re-expanding cone of light is again as large as the Airy disk.

For the other terms of the formula, "p" is the radius of the Airy disk at the mirror's focus for infinity, "R" is again the radius of curvature of the mirror and "r" is again the radius of the zone on the mirror under test. To find "p", the radius of the Airy disk for any mirror at its focus, we will use the expression in fig. D, where "F" is the focal length, and "D" is the diameter of the mirror, and "w" is the wavelength of yellow-green light (.0000216") that has by convention been adopted as the standard for these purposes.

After determining the radius of the Airy disk for our mirror, we can plug it into the formula as in fig. B and determine X, the allowed variation in the location of the C of C for any zone. Now, we've already computed "d" for the five zones whose centers of curvature we wish to command into their predetermined locations on the OA through figuring. For each zone's value of "d", we add "X" to it and subtract "X" from it. Any reading for the location of the C of C for any zone that falls between these computed values is acceptable, with a certain caveat that we shall shortly stipulate.
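
Putting the pieces together for the five zones: since figs. B and D are not reproduced in this text, the forms below are reconstructions — the standard Airy radius p = 1.22wF/D, the offset d = r²/(2R) consistent with the table given earlier, and X = pR/r, which follows from the blur-circle argument because a zone's rays cross the axis at slope r/R:

```python
F, D, R = 60.0, 10.0, 120.0   # focal length, diameter, radius of curvature (in)
w = 0.0000216                 # yellow-green light, inches

p = 1.22 * w * F / D          # Airy disk radius at focus
zone_radii = [1.58, 2.74, 3.53, 4.185, 4.725]

for r in zone_radii:
    d = r * r / (2.0 * R)     # ideal C of C offset (moving-source tester)
    x = p * R / r             # allowed +/- range about d
    print(f"zone r={r:5.3f} in: d={d:.3f}  tolerance {d - x:.3f} .. {d + x:.3f}")
```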

Interpreting Test Results

Figuring our mirror so that the centers of curvature of each zone, as measured with the KE, fall within tolerances will give us an acceptable mirror. However, plotting each KE setting on a graph, where its relationship to every other setting can be seen at a glance, will help us visualize and plan the best approach to refining and idealizing the mirror's figure.

A graph of the values of "d", and "X", and the actual locations of C of C for each zone as measured with the KE is easy to construct. We show such a graph to help manage testing and figuring in figure C. The vertical line on the left side of the graph has index marks in hundredths of an inch.

The horizontal line at the bottom of the graph represents the mirror from its center outwards, radius-wise. The vertical lines extending up from this line represent the locations, radius-wise from the center of the mirror, of the five zones we will test for. The horn-shaped figure sweeping upwards to the right and away from the center of the mirror represents the envelope, or domain, of allowed readings of the KE for the centers of curvature of any zone on the mirror under test. The middle curved line (inside the "horn") is for the pre-computed plots of "d" for any zone on the mirror (location of C of C relative to the C of C for the center of the mirror). The upper curved line of the tolerance horn represents the allowed range of positions for "d" that are farther away from the mirror than the ideal positions. The bottom curved line of the tolerance horn represents the allowed range of positions for "d" that are closer to the mirror than the ideal positions. In order to plot relatively smooth and accurate lines for the values of "d" and "X", it is helpful to compute them for zones with radii in half-inch increments across the mirror, even though we will be testing for only the five zones previously computed.

Metric-ruled graph paper is convenient for making these test result graphs, as the centimeter markings are a convenient size to represent hundredths of an inch, and they are subdivided into ten smaller units (millimeters) to help one represent thousandths of an inch. Use these for the vertical axis, for plotting the relative locations of the KE settings. For the horizontal, radius-wise axis extending to the right, use a ruler. An inexpensive machinist's ruler divided in tenths and hundredths of an inch is handy for this purpose.

Quick Summary:
Procedure and Analysis

You now have everything you need to know to accurately test and plot your test results for the mirror you are figuring. To get everything concisely and compactly in mind, we will now summarize the test procedure and management of test data.

Set the "Y" axis stage of your tester to its zero setting, and carefully locate it along the mirror's optical axis to null its very central region. With your pre-cut zone testing mask in front of the mirror, back the Y-axis stage carrying the KE away from the mirror to find the C of C of the first zone and note its location as indicated by the micrometer (write it down). Then back the KE up again until the next zone, as indicated by the horns on the mask, is nulled, and note again your micrometer's reading. Next, repeat the procedure for the third zone out from the center, the fourth, and finally the fifth, recording the location of each one's C of C as indicated by the micrometer. You will find during testing that unless the Y-axis stage runs truly along the mirror's OA, you will have to manipulate the lateral X-axis movement to make a good null each time. This is perfectly okay, as all we are interested in here is that the zones are evenly grayed out at each reading of the Y-axis stage. The difference between what you measure and a reading taken with a perfectly aligned Y-axis goes as the cosine of the angle between the mirror's OA and the stage's direction of travel, which is very small for errors of a few degrees, although it is always nicest when you don't have to move the KE laterally very far, if at all, since it is bothersome to do so.
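
The cosine error mentioned above really is negligible for small tilts, as a quick computation shows (the angles are illustrative):

```python
import math

# A tilt between the stage's Y travel and the mirror's OA shortens every
# micrometer reading by a factor of cos(tilt).
for tilt_deg in (1.0, 2.0, 5.0):
    frac_error = 1.0 - math.cos(math.radians(tilt_deg))
    print(f"{tilt_deg:3.0f} deg tilt -> {frac_error * 100:.3f}% reading error")
```

Even a 2-degree misalignment shortens the readings by only about 0.06 percent.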

Plot your recorded locations of each zone's C of C on your previously prepared graph for this purpose in their correct locations, and then connect these plots with lines as shown in our example in Fig. E.

Of course, at the beginning of figuring, the line of the KE settings will probably be "all over the place", not even approximately fitting inside the tolerance horn of the graph. But you might get a pleasant surprise: you might be "in the ballpark" from the start. I knew a gentleman who "accidentally" figured his mirror into a good paraboloid just by polishing it out! (Don't expect this.) Let's consider fig. E as a representative example of typical KE settings for one test run somewhere near the end of the figuring process. Note that the first reading of the KE, for the first zone, is actually closer to the mirror than the C of C of its central region. That is, we had to advance the KE towards the mirror to find it, rather than finding it pleasantly located in its proper place a tiny way beyond the C of C of the central region. The plots for the second, third, and fourth zones fall outside the tolerance horn. But note that the overall shape of the connected plots approximates the shape of the tolerance envelope.

In fig. F we have relocated all of the plots farther up on the graph by an equal amount, each of them, until they all fit inside the tolerance envelope. This is allowed: it is merely the equivalent of starting with the tester located closer to the mirror by that amount of distance, or of the central zone's reading containing a little error of its own.
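
This re-zeroing step is easy to sketch in code. The readings below are hypothetical, purely to show the bookkeeping: shift every plot by one constant chosen to center the measured curve on the ideal one, then test against the tolerance band:

```python
# Hypothetical KE readings (inches) and the ideal offsets "d" for five zones
measured = [-0.004, 0.021, 0.044, 0.066, 0.088]
ideal = [0.010, 0.031, 0.052, 0.073, 0.093]
tol = 0.012  # illustrative uniform tolerance "X", inches

# Shifting every plot by one constant = re-zeroing the tester; choose the
# shift that zeroes the mean deviation from the ideal curve.
shift = sum(i - m for i, m in zip(ideal, measured)) / len(measured)
inside = all(abs(m + shift - i) <= tol for m, i in zip(measured, ideal))
print(f"shift = {shift:+.4f} in, all plots inside tolerance: {inside}")
```

In practice the tolerance X varies with zone radius, as computed earlier; a uniform value is used here only to keep the sketch short.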

Now the plots of all centers of curvature are lying everywhere inside of the tolerance envelope. Our ten-inch mirror of sixty inches' focal length (focal ratio of six to one, or "f/6") is now well enough figured that it will show no spherical aberration in use. Even if the tips of the cones of focused bundles of light from any zone on the mirror come to a focus into a plane farther away from their ideally computed ones, the blur circles representing the cross sections of these cones of light where they intersect their planes of ideal focus and pass through them will be no larger than their Airy disks at true focus would be in their ideal focal planes.

The tolerances as computed by the formulae given are considered "loose" by most authorities; that is to say, they are considered the least demanding for acceptable performance of a telescope's objective mirror, and many authorities recommend figuring a mirror's curve to tolerances at least twice as demanding. By all means, one may continue figuring until his or her mirror's plots of KE settings fall very close to the middle curve of the graph (the values of "d").

I have tested many mirrors on the stars whose plots were spread over the full tolerance envelope allowed by the graph. On nights of extremely steady air, and at magnifications approaching 50X per inch (for larger mirrors), none of them ever showed any detectable halo of spherical aberration. However, we should now specify the caveat we promised to convey in this regard (about using up all the available space inside the tolerance horn). The connected line of plots for KE positions should not be wildly irregular, but rather deviate in a rather smooth, consistent fashion, as with the example in figures E and F.

Please note that I have never defined the allowable tolerances for the disparity of focal planes for different zones of a mirror in terms of fractions of wavelengths of light. Instead, I have defined them in far more unequivocal terms, terms that lead one to an intuitive understanding of what tolerances mean. Simple geometry, algebra, and extensive practical verification have unequivocally validated these procedures for me.

Before leaving our introduction to Foucault testing, it will be interesting to survey with the knife-edge a mirror whose figure is very irregular. Fig. 10(a) and 10(b) show the same mirror (an actual mirror from my extensive files) from two different vantage points along its OA for the KE. Note how radically different this mirror looks with the KE located in widely disparate positions along the OA.

I have kept this treatment of Foucault testing to the barest essentials. A much more exhaustive treatment is possible; however, a well-explained introduction to the subject for beginners is what has been most wanted.

Other titles to help the beginning amateur telescope maker are in preparation. The very next help article will be a well-illustrated description, with complete instructions, for building a very capable "over and under" type Foucault tester. In it I will show how to build a high-precision measuring engine (the tester) with little or no machining.

I purchased my 16" f/4-f/12 classical Cassegrain mirror set from Scope City, Parks Optical's largest distributor. After I received them, I designed and built a special, secure storage/carrying case for them.


## Diffraction with a subpixel sensor

I'd like to raise an issue after comments made in the news story about Lytro's new Illum camera.

I am going to write a blog article about this. I figured out that Lytro's light field and Canon's dual pixel are just special cases of a more generic class of subpixel sensor cameras. Neither Canon nor Lytro has hit the sweet spot yet, though.

A side effect may be that Canon's patent on dual pixel AF is void.

Moreover, it probably means that Canon's dual pixel AF causes diffraction problems at very high f-stops. Something worth studying.

Yes, exactly. I realized this back when Canon introduced DPAF. What really excited me, amongst other things, were the implications it might have for phase-detection AF. That is, you could in principle build in sensitivity to horizontal, vertical, diagonal, etc. lines as you increase the number of photodiodes underneath each microlens. I haven't heard anyone explicitly talk about this, but I'm sure Lytro/Canon are thinking along these lines.

A 9-pixel (3x3) grid underneath each microlens could give you horizontal, vertical, and diagonal AF sensors. For a 20MP final image, that'd mean a 180 MP sensor!

Now, as the photodiodes underneath a microlens shrink (to maintain reasonable resolution), pixel-level SNR suffers and that'll impact AF performance. But I do wonder if you could build in appropriate pixel-binning technology to add signals from corresponding photodiodes across microlenses to increase SNR (the resulting resolution cost would probably be OK for the purposes of AF).

"A 9-pixel (3x3) grid underneath each microlens could give you horizontal, vertical, and diagonal AF sensors."

So could a 4 pixel (2x2) grid.

Canon's DPAF shouldn't cause extra problems, as it's combining the pixels and so covering the same real estate as a 20MP sensor wrt Airy disks. Also, even taking it as a 40MP camera, the pixels aren't that much smaller in their linear dimensions than those of the many 24MP cameras. Plus, all cameras get diffraction issues at a high enough f-stop.

The Lytro is a 1" 40MP sensor, so potentially quite a bit more exciting for diffraction. (The Lytro CEO specifically gave those details.) It will be interesting to see how that affects the detail, as the output images may be 4-5-ish MP but you'll need detail from all 40MP to sort out the final pixels.

I expect Canon's patents will be fine, as they will be carefully worded and cover PDAF which I'm sure no-one else had mentioned before.

First, let me repeat my understanding of a plenoptic camera (1908 knowledge), using the Lytro Illum as an example:

I assume the Illum uses a square 7.84 mm sensor (leading to the 3.19 crop factor specified by Lytro) with 6320^2 subpixels (40 megarays) and 10^2 subpixels per microlens (unspecified, but this is what the predecessor used, so it will be 10^2 or more).

A subpixel is 1.24 µm, a microlens is 12.4 µm.

As a traditional camera, the Airy disk radius would be 1.34 µm (F/2), therefore, it is at the edge of what diffraction allows to resolve.

However, the subpixels receive light from a smaller region of the lens' exit pupil. Let's idealize and assume each subpixel receives light only from a 1/10 x 1/10 square region of the exit pupil (the microlens projects the exit pupil onto the sensor surface). This would be similar to what a traditional phase-detect AF sensor does to an image behind the focus plane.

Therefore, each subpixel sees an Airy disk 10 times the size (13.4 µm). However, the 632^2 pixels (microlenses) would still resolve the images coming from their portions of the exit pupil.
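A minimal sketch of the arithmetic behind these two figures, using the Rayleigh first-minimum radius r = 1.22 λN and assuming λ ≈ 0.55 µm (implied by the 1.34 µm number above):

```python
# First-minimum Airy radius r = 1.22 * lambda * N.
# Assumption: lambda = 0.55 um (implied by the 1.34 um figure for F/2 above).
def airy_radius_um(f_number, wavelength_um=0.55):
    return 1.22 * wavelength_um * f_number

full_pupil = airy_radius_um(2.0)    # whole lens at F/2: ~1.34 um, ~1 subpixel
clipped = airy_radius_um(20.0)      # 1/10 of the pupil, effective F/20: ~13.4 um

print(f"F/2:  {full_pupil:.2f} um")
print(f"F/20: {clipped:.2f} um")
```

Clipping the pupil to a tenth of its width scales the diffraction blur from roughly one subpixel to roughly one microlens.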

Don't think in terms of pixels now. This is the wrong concept. Think in terms of 10^2 images we have captured, each coming from a different portion of the exit pupil, each having a slightly different parallax error.

What can we say about the images?

Well, each image is low resolution (632^2) and diffraction limited (F/20, or F/64 in 35mm-equivalent terms) and with very near hyperfocal distance.

Had we captured the complex-valued light field (a hologram), we could reconstruct a high-resolution image in the focus plane (by constructive addition).

However, we only captured the real-valued amplitude field (an image) because the subpixels already converted photons into electrons thereby destroying the photon phase information.

Therefore, a higher-resolution image cannot be reconstructed anymore.

This is the big problem of any light field camera: diffraction destroys resolution. It is unavoidable because you cannot capture position and momentum exactly at the same time (the Heisenberg uncertainty principle). It is not a technical deficit, it is a physical restriction. Just like photon shot noise is.

The Canon dual pixel AF is just the same, except it uses 2x1 subpixels per microlens rather than 10^2. Nothing new, nothing for Canon to patent. In particular, it should suffer from the same resolution-limiting effect (which is certainly in effect to some degree for Lytro).

I assume that two subpixels are binned electronically for the final image. If two subpixels are binned in a common charge well (an electronically removable well boundary or whatever), then everything is fine because the two half images would be added constructively. However, I assume binning during read out.

Does the Canon 70D have twice the (or significantly higher) diffraction blur in one of the two orientations? Did anybody test, e.g., at F/22? I already contacted the physicists at DxO about their findings.

Are there any patents about electronically controlled well boundaries between subpixels?

Is this post the first addressing the issue?

The above posts address the issues of subpixel SNR etc.

However, this may be a problem with how Canon implemented dual pixel phase AF; it is no problem if done as it should be. Here is why:

As I wrote above, drop the pixel notion and think of separate images captured by each set of subpixels.

In Canon's case, it is two full images, taken with the left and right half subpixels respectively. Both images would have a non-circular out-of-focus circle of confusion in their image centers, and different parallax in the plane of focus.

An adequate AF algorithm would now determine a ROI (region of interest, aka AF area) which should be in focus. Then, the left and right images would be cross-correlated (possibly applying a spatial frequency weight filter) with one image shifted by amount p wrt the other image. The value of p where the cross-correlation maxes is the AF-phase. Individual pixels never get compared directly.
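The cross-correlation step described above can be sketched in a few lines. This is a hypothetical illustration, not Canon's actual algorithm; the 1-D scene profile and the parallax value are made up:

```python
import numpy as np

# Hypothetical sketch (not Canon's actual code) of the phase-detect step:
# slide one sub-image against the other and take the shift p that maximizes
# the cross-correlation inside the ROI.
rng = np.random.default_rng(0)
scene = rng.standard_normal(200)      # made-up 1-D intensity profile in the ROI

true_p = 3                            # defocus-induced parallax, in samples
left = scene
right = np.roll(scene, true_p)        # right-subpixel image: a shifted copy

shifts = list(range(-10, 11))
scores = [np.dot(left, np.roll(right, -s)) for s in shifts]
best_p = shifts[int(np.argmax(scores))]
print(best_p)                         # recovers the parallax: 3
```

The recovered `best_p` is the AF phase; note that, as the post says, individual pixels never get compared directly, only whole ROIs.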

As a side effect, you can have different subpixel pairs (left/right, top/bottom, small/big per microlens). You still get separate images (more, different ones although now at a lower resolution). This would make the max p algorithm more robust.

In a follow-up patent to Canon's original dual pixel AF patent (filed just days after Fuji's original phase sensel patent), Canon already patented the small/big kind of subpixel pairs. Which is interesting as it reduces the diffraction problem mentioned earlier.

But nobody ever talked about a diffraction problem with Canon's dual pixel AF method.

Dr_Jon wrote:

Canon's DPAF shouldn't cause extra problems as it's combining the pixels so covering the same real estate as a 20MP sensor wrt Airy disks.

As I wrote above, combining the pixels to cover the full microlens real estate shouldn't work.

It works as long as you don't destroy the photon phase information. Radio astronomy antenna arrays, and optical fibers transmitting photons from an array of telescopes to a single receptor, can therefore work, provided they manage to add the full (complex-valued) photon signals, i.e., both amplitude and phase.

But Canon most probably just bins two subpixels together in the final read-out. If the two subpixels maintain two separate charge wells, each containing converted electrons, then only the amplitude was captured and the phase is lost (converted electrons don't retain the originating photons' phase, or even the exact amplitude).

Ultimately, we are touching the quantum observer paradox here (Schrödinger's cat): when does a measurement actually take place?

However, I think in this case, it is pretty clear: a dual subpixel sensor should suffer from worse diffraction.

So, a thought experiment: I take my Sony RX100 (20MP 1" sensor, so exactly twice the pixel area / 1.4x the pixel size of a Lytro, I guess) and shoot at f/16 (which will have diffraction softening), then scale the image to 5MP externally (!). Are you saying I'll still see a lot of diffraction softening in my 5MP image, or not?

Although of course this is complicated by whether you take a conventional CoC for a 1" sensor of 0.015, or a pixel-level one, ho hum. I think we need pixel-level for this, so divide that by 3 for 20MP (0.005) or by 1.5 for 5MP (0.01). Do you get my drift here? It lets me stop typing and get a beer.

Actually that's very curious: a conventional CoC corresponds to a pretty low pixel count. Hadn't noticed that before. 0.03 on a 36x24mm sensor is... wow. *

(* about 2400x1600 = 4MP of sharp resolution, assuming some values for the AA filter and resolution loss on de-bayering, of course that will be 4MP on any size sensor to get the same result)

Jon, let me try to clarify a bit. We should get the classic case straight before we get to the more advanced topic of subpixel diffraction.

The RX100's 20MP isn't exactly twice the pixel area, because the aspect ratio differs.

From Lytro spec (3.19 crop), a pixel is 1.24 micron.

A Sony RX100 pixel is 2.41 micron, a factor of 1.9 rather than 1.4 larger.

The RX100 shot at f/16 has an Airy disk radius of 10.7 micron. A diffraction-limited image with pixel pitch equal to the Airy disk radius (aka the Rayleigh criterion) has 9% contrast. Common wisdom uses this as the diffraction limit (details still recoverable via sharpening), whereas at twice the pixel pitch the image is said to be free from diffraction blur. A simplification of course, but a useful one. The two resolutions are also often referred to as MTF10 and MTF50.

10.7 micron is 4.4x the pixel size, so resolution is limited to 20MP/4.4^2, or about 1.0 MP. So yes, your 5MP RX100 f/16 image is heavily blurred by diffraction.
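The arithmetic above can be checked directly; a sketch assuming λ = 0.55 µm and the 2.41 µm pixel pitch quoted:

```python
# Checking the figures quoted above.
# Assumptions: lambda = 0.55 um, RX100 pixel pitch 2.41 um, 20 MP sensor.
wavelength_um = 0.55
pixel_um = 2.41
sensor_mp = 20.0
f_number = 16

airy_radius_um = 1.22 * wavelength_um * f_number   # ~10.7 um
pixels_per_radius = airy_radius_um / pixel_um      # ~4.4 pixels
effective_mp = sensor_mp / pixels_per_radius**2    # ~1.0 MP

print(f"Airy radius {airy_radius_um:.1f} um -> about {effective_mp:.1f} MP")
```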

Wrt circle of confusion (CoC) or depth of field (DoF).

DoF has nothing whatsoever to do with pixels. It is based on the human eye's ability to resolve detail from a normal viewing distance. It is typically normalized to 1/1500 or 1/1730 of the final image diagonal, whatever that may be (final means after any cropping or stitching). Personally, I use 1/2200 of the image diagonal, as this happens to be the full-HD-resolution pixel size and is a bit more conservative. More recent results suggest that 4k images appear sharper from normal viewing distances, so even 1/3000 may be a good guess (leading to a 0.014 rather than 0.03 CoC on a 135-format frame). But pixels are almost always smaller than the CoC.
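A quick sketch of the CoC values mentioned, computed as fractions of the 36x24 mm frame diagonal:

```python
# CoC values quoted above, as fractions of the 36x24 mm frame diagonal.
diagonal_mm = (36**2 + 24**2) ** 0.5          # ~43.3 mm
for divisor in (1500, 1730, 2200, 3000):
    # 1/1500 gives ~0.029 mm (the classic 0.03); 1/3000 gives ~0.014 mm
    print(f"1/{divisor}: {diagonal_mm / divisor:.3f} mm")
```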

The discussion in this thread is about diffraction-limited resolution in the focus plane with subpixel microlenses. Therefore, CoC plays no role here.

Dr_Jon wrote:

So a thought experiment where I take my Sony RX100 (20MP 1" sensor, so exactly twice the pixel area/1.4x the size of a lytro I guess) and shoot at f16 (which will have diffraction softening) then scale the image to 5MP externally (!) are you saying I'll still see a lot of diffraction softening in my 5MP image or not?

Yes, diffraction softening in that situation should affect several of the downsampled pixels, roughly the area covered by 4x4 pixels (visibly probably more like 3x3 or even 2x2).

The radius of the first zero of the Airy disc is given by 1.22*lambda*N. With lambda=0.55 microns and N=16 that works out to about 10.736 microns, meaning that the area of the Airy disc would encompass about 63 of the RX100's 2.4 micron pixels - or about 16 of the downsampled ones.
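For illustration, counting how many pixels the first Airy ring covers, under the same assumptions as the figures quoted above (λ = 0.55 µm, N = 16, 2.4 µm native pitch and 4.8 µm downsampled pitch):

```python
import math

# How many sensor pixels the first Airy ring covers (values quoted above).
lam_um, N = 0.55, 16
radius_um = 1.22 * lam_um * N                 # ~10.74 um first-zero radius
disc_area = math.pi * radius_um**2

pixels_native = disc_area / 2.4**2            # 20 MP pitch (2.4 um): ~63 pixels
pixels_downsampled = disc_area / 4.8**2       # 5 MP pitch (4.8 um):  ~16 pixels
print(round(pixels_native), round(pixels_downsampled))
```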

Dr_Jon wrote:

Although of course this is complicated by whether you are taking a conventional CoC for a 1" sensor of 0.015 or a pixel-level one, ho hum. I think we need pixel-level for this so divide that by 3 for 20MP (0.005) or 1.5 for 5MP (0.01). Do you get my drift here. as allows me to stop typing and get a beer

Actually that's very curious, that a conventional CoC is for a pretty low pixel count. Hadn't noticed that before. 0.03 on a 36x24mm sensor is. wow. *

(* about 2400x1600 = 4MP of sharp resolution, assuming some values for the AA filter and resolution loss on de-bayering, of course that will be 4MP on any size sensor to get the same result)

If one is not sure what size the finished image will be displayed at, it is better to stick to pitch, imho. There is lots of literature around suggesting that 0.03 is woefully outdated.

I think we have now reached a consensus that at f/20, the image from a 1" sensor has low resolution, like 1MP only.

Q: The question now is: do the Lytro subpixels divide the effective f-stop by 10, and do the Canon subpixels divide the effective f-stop (in one direction) by 2?

Effective as far as diffraction is concerned, not light gathering or DoF.

Light gathering and DoF can be understood in the particle model (or geometrical-ray model) of light and shouldn't be affected by the existence of subpixels. But diffraction can be understood in the wave model (or quantum model) of light and should be affected by subpixels.

So, my personal guess at the question above is:

A: yes.

For those that missed it, I picked a suggested CoC of 2 pixels (rounded up from 4.8 to 5 for 20MP, then doubled to 10 for 5MP), which is what you seem to agree is a reasonable number for the diffraction-limited resolution of an RX100. Sorry if it wasn't obvious?

Not sure where DoF comes in, I don't recall mentioning it?

(Sorry, I ran out of the editing window on my response.) I thought part of the question, and what I was discussing, perhaps too obliquely, was whether you could just add the sub-pixel values together to avoid diffraction effects due to the 40MP "resolution" in a Canon DPAF sensor (as you seemed to suggest wasn't possible without phase information). I don't see that, and I think adding should work fine; that was what I was trying to start a discussion on, in case you could point out stuff I'd missed, or vice versa.

Dr_Jon wrote:

I thought part of the question, and what I was discussing, perhaps too obliquely, was whether you could just add the sub-pixel values together to avoid diffraction effects due to the 40MP "resolution" in a Canon DPAF sensor

Jon, I think I must not have expressed myself clearly enough.

The problem is NOT the number of subpixels. Diffraction isn't influenced by the pixel resolution at all. Therefore, there is no point in adding up 2 times 20 MP to get to 40 MP. Your computation may be okay, but it misses the point.

The point is: with several subpixels PER microlens, each subpixel sees only a PART of the lens aperture (a clipped exit pupil). And it is this clipping of the exit pupil which would cause extra diffraction blur.

I.e., the Airy disk shapes seen by the different sets of subpixels will differ from those of a sensor with no subpixels. The Airy disks increase in size! Remember the double-slit experiment? Right! The resulting interference pattern is NOT that of the two single slits added. The same must happen here: the Airy pattern of the larger pixel must differ from the two subpixel Airy patterns added together. It isn't intuitive, but it is still undergraduate physics.

Think in these simple terms:

A normal sensor pixel is like a double-slit interference pattern. A combined dual pixel AF sensor pixel is like two single-slit interference patterns added. The two are NOT the same!
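This claim can be illustrated numerically. A minimal 1-D sketch of my own (slit widths arbitrary), using an FFT as a stand-in for far-field diffraction: the intensity pattern of a full slit differs from the sum of the intensity patterns of its two halves, because the incoherent sum discards the interference cross term.

```python
import numpy as np

# 1-D toy model: far-field diffraction as the Fourier transform of the aperture.
# Compare the intensity from the FULL slit with the SUM OF INTENSITIES of its
# two halves (the "add the subpixel images" readout).
n = 4096
aperture = np.zeros(n)
aperture[n//2 - 64 : n//2 + 64] = 1.0         # full slit, 128 samples wide

left = aperture.copy();  left[n//2:] = 0.0    # left half only
right = aperture.copy(); right[:n//2] = 0.0   # right half only

def intensity(a):
    return np.abs(np.fft.fftshift(np.fft.fft(a))) ** 2

full = intensity(aperture)
incoherent = intensity(left) + intensity(right)   # cross term is lost

# The incoherent sum halves the central peak and spreads the energy outward.
print(full[n//2], incoherent[n//2])    # ~16384 vs ~8192
```

Adding the complex amplitudes first would reproduce the full-slit pattern exactly; adding intensities cannot, which is the point about phase information above.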

As for DoF: I was mentioning it because you mentioned CoC. The CoC is used in the computation of DoF and nowhere else in optics. This is why I referred to DoF.

I may have found a reason why dual pixel AF sensors aren't suffering from additional diffraction blur at large f-stop values.

They would still suffer from extra diffraction blur at lower f-stop values (larger apertures) such as f/5.6 where AF is actually working. But at large apertures, the blur from diffraction is small anyway. I assume the effect is measurable but not visible.

At small apertures (large f-stop values), the exit pupil is tiny, more like a point than a disk. I believe it is then very unlikely that each subpixel in a pair of two receives a clipped image of the exit pupil. Both should then see the entire (tiny) pupil, and if they do, no extra diffraction will occur.

But a light field camera like the Lytro still doesn't escape: it needs to use the full aperture, and as most of its subpixels will be far from the center of the microlens, clipping of the exit pupil images is unavoidable. Actually, most would black out if stopping down the lens were permitted.

Therefore, I think a dual pixel or quad pixel AF sensor just gets away with the issue, as all subpixels share area with the microlens center. But starting with a 3x3 subpixel AF, extra diffraction should indeed become a problem. As would the smaller subpixels in Canon's more recent patent.

My degree is in Electronic Engineering, so the Physics was more Materials and Quantum rather than Optics. I did spend some time in a Research Lab doing Image Processing, but that was a while back and a lot of it was to do with noise. I also did some work on sensor selection for cameras, but just one micro-lens per pixel in that case, plus the selection criteria have a bit less to do with what Photographers care about than they'd like to hear. My optics knowledge comes more from Astronomy than Physics. Thanks for explaining your point though, I now get what you are saying, I just can't evaluate it without some software/expert help. I suspect that's one for others though. (Strangely I usually find Photographers take a simpler view of Airy disks than Astronomers, it's weird to see it the other way around.) So basically "not a physicist".

I will try one question, well, it might be in two+ parts. I assume the microlens is normally there just to aim the photons down a light-pipe to the pixel, dodging around the wiring, and has no purpose other than that. (I don't see why I care about diffraction after the top of the microlens, as there is no resolution to be affected?) In DPAF they give the microlens a purpose, so presumably there are two light pipes, as one would be complicated and inefficient? (I haven't read the patent and it's kinda late, plus they probably didn't have to say anyway.)

Presumably the aim when taking a photograph is usually to have the image in focus where it hits the microlenses, as anywhere else would be less good. So the Airy disk exists there, and may be significantly bigger than the pixel (or not). In DPAF, is the focus point different? I'm just not sure why I care what happens below that if we aren't using the pixels separately to gain resolution. What am I missing?

Oh, and I was using CoCs in the sense of comparing them with Airy disk sizes; plus, you do care about pixel sizes wrt the Airy disks if we are looking at pixel-level sharpness. While diffraction is independent of pixel size, it's the pixel size that determines how much effect it has at the pixel level (now that's a sentence). I usually use CoCs with diffraction as well as DoF, as pixel size doesn't say what the system can resolve (without an LPF and with a conventional CFA you presumably do better than about 2 pixels, I'm not sure by how much).

It's late and I'm tired, so sorry if some of that isn't clear or was edited to make less sense than it started out as.

falconeyes wrote:

I think we have now reached a consensus that at f/20, the image from a 1" sensor has low resolution, like 1MP only.

Q: The question now is: Do the Lytro subpixels divide the effective f-stop by 10, do the Canon subpixels divide the effective f-stop (in one direction) by 2?

Effective as far as diffraction is concerned, not light gathering or DoF.

Light gathering and DoF can be understood in the particle model (or geometrical ray model) of light and shouldn't be affected by the existance of subpixels. But diffraction can be understood in the wave model (or quantum model) of light and should be affected by subpixels.

So, my personal guess at the question above is:

A: yes.

Any other takes?

No. The angular target-resolution diffraction limit on the projection area stays the same, no matter at what resolution, or with what subdivision of angles on that area [pixels or subpixels], you measure it.

In both the particle and the wave model, the parameter affecting the diffraction restriction of angular target resolution is the angular restriction (numerical aperture) between the two focal end-points and the intermediate aperture. That angle is constant no matter what resolution you try to resolve the projection [of either focal point] with. It does not matter to focal point A' on the sensor surface (or ML assembly surface) whether it's situated on a small pixel or a large pixel, or even on a dead-band area: it's still the same point in space. All the interactions between the front focal point, the aperture, and the rear focal point in this system have already happened (they're "dead" from a quantum view, and "dissipated" from a wave view) when the energy reaches point A'.

So, target referred angular diffraction happens BEFORE the subpixel division. The subpixel division in itself is really just a divisor for the integral sum of energy already present at the surface containing point A'. That this divisor works by subdividing a larger cone into angular cutouts doesn't matter.

Each individual subdivision does of course then have its own diffraction effect, but that effect is no longer dependent on the interaction between the two main focal points and the main aperture. This "new" system brings its own aperture, so to say. So the diffraction effect here is that of two new focal points and a new aperture. The front focal point for this system is the point on the aperture surface where the light was "collected from", the aperture is the ML surface area that propagates light to each subpixel, and the rear focal point is in the PD. Since the distance between the ML surface and the PD is (inherently) quite small compared to the new aperture area, that angle is very large, and hence this added diffraction effect is almost negligible. It's a very large numerical-aperture system working on micron-scale focal lengths.

The only numerical difference between the models here is the resolution of the simulation result: for the wave model it's infinite (continuous), for the quantum model it's limited (quantized) to the Planck resolution, assuming an infinite exposure time (lower resolution otherwise). I would strongly advise against trying to apply the quantum model to this system, since it's a complex multiplication of two already quite extensive path integrals.

Should my understanding (or explanation) of the underlying effects that cause this be wrong, I'd be happy to be corrected. Sorry for the rather convoluted post, but I really need to get some sleep now; no time for editing.

Hi, you made me think it over again, and I will post another comment. While I now agree with most things you wrote, I think I can still contribute a few points to your posting. Thanks for it anyway.

The_Suede wrote:

No. The angular target resolution diffraction limit on the projection area stays the same. The diffraction limit stays the same on the projection area, no matter at what resolution or subdivision of angles on that area [with pixels or subpixels] you measure it.

Actually, you can't measure on a subdivision of angles in the projection plane. I agree with your resolution remark. But "no matter at what subdivision of angles on that area" is not correct; that was my whole point. Imagine you construct a mask, a kind of tube in front of every pixel (of an ordinary sensor), only letting through a fraction of the rays coming from the rear lens. If you did that, you'd effectively lower your aperture and increase diffraction, because now the rays coming from the lens can no longer all interfere with each other on the projection plane.

And even if you then took a manifold of images with varying tube angles to cover all rays from the rear lens, and added them up into a single image, it would still show the increased diffraction.

The adding must be done in terms of the wave function, not the image data.

In both the particle and the wave model, the parameter affecting diffraction restriction of angular target resolution is the angular restriction (numerical aperture) between two focal end-points and the intermediate aperture. That angle is constant no matter what resolution you try to resolve the projection [of either focal point] with. It does not matter to focal point A' on the sensor surface (or ML assembly surface) if it's situated on a small pixel or a large pixel - or even if it's situated on a dead band area - it's still the same point in space.

All the interactions between the front focal point, the aperture, and the rear focal point in this system has already happened - they're "dead" from a quantum view, and "dissipated" from a wave view - when the energy reaches point A'.

Refer to my tube thought experiment above to see why. What you call "dead" is when the photon is converted to an electron, and at no time earlier.

At any point in the projection plane, the Airy disc is not an image; it is a complex-valued wave function. If you mask out parts of the rays (in the path-integral sense), then you alter the wave function. In practice, the Airy disc will smear out (rather than merely getting dimmer) if you block parts of the interference pattern.

There is another, more elegant way to see all this:

A lens measures both a photon's position (determined by the aperture area it passes through) and its momentum (determined by the angle of incidence onto the lens, where a given angle corresponds to a single point on the projection plane, assuming an infinite focus distance for the sake of simplicity).

The Heisenberg uncertainty principle, which can't be escaped, then translates into: you can't measure both position and momentum exactly at the same time. The smaller the position uncertainty (the smaller the aperture), the higher the momentum uncertainty (the more the projected point on the focus plane smears out).

Therefore, if you did try to measure the Airy pattern made from only parts of the lens aperture, the projected point on the focus plane must inevitably smear out more.

So, target referred angular diffraction happens BEFORE the subpixel division. The subpixel division in itself is really just a divisor for the integral sum of energy already present at the surface containing point A'. That this divisor works by subdividing a larger cone into angular cutouts doesn't matter.

As I said, there is no before and after. If you did subdivide a larger cone into angular cutouts, you would alter the wave function on the projection plane itself (because you would block rays which would otherwise interfere with each other). Thinking in terms of energy is not a good idea: energy is the magnitude squared of the complex wave function, while in reality it is the wave function itself which must be analyzed. Energy is okay after the measurement, i.e., after the photoelectric effect has created an electron.

However, I now think I am wrong in my idea that the dual pixel sensor has worse diffraction. But for a different reason. I'll explain what I mean in a different comment below.

## Diameter of hole - circular aperture

Light from a helium-neon laser (λ=633nm) passes through a circular aperture and is observed on a screen 4.0 m behind the aperture. The width of the central maximum is 2.5 cm . (Note the width of the central maximum is the distance between the dark fringes on either side of the maximum)

What is the diameter (in mm) of the hole?

So I've done angle = 0.025/4 = 0.0060 rads

then wavelength / angle = 6.33*10^-7 / 6*10^-3 = 1.055*10^-4 m => 0.11mm. But the quiz says I've got the answer wrong. What have I done wrong?

The light diffracted by a circular aperture forms an Airy pattern. The angle from the center to the first minimum is given by sin(theta) = 1.22 lambda/diameter. Once you have the angle, it is a basic geometry problem: the radius R of the Airy disk satisfies tan(theta) = R/distance. The small-angle approximation (tan(theta) = sin(theta) = theta) should be fine. Note two things about your attempt: the 2.5 cm figure is the full width of the central maximum, so the radius is half that, and you dropped the 1.22 factor for a circular aperture.
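Putting the numbers in (the original attempt used the full 2.5 cm width and dropped the 1.22 factor), a sketch:

```python
import math

# Corrected solution: the 2.5 cm figure is the FULL width between the first
# minima, so use half of it, and keep the 1.22 factor for a circular aperture.
lam = 633e-9        # He-Ne wavelength, m
L = 4.0             # aperture-to-screen distance, m
width = 0.025       # width of the central maximum, m

theta = math.atan((width / 2) / L)      # ~3.1e-3 rad
d = 1.22 * lam / math.sin(theta)        # from sin(theta) = 1.22 * lam / d
print(f"diameter = {d * 1000:.2f} mm")  # ~0.25 mm
```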

In optics, the Airy disk (or Airy disc) and Airy pattern are descriptions of the best-focused spot of light that a perfect lens with a circular aperture can make, limited by the diffraction of light. The Airy disk is of importance in physics, optics, and astronomy.

The diffraction pattern resulting from a uniformly illuminated, circular aperture has a bright central region, known as the Airy disk, which together with the series of concentric rings around is called the Airy pattern. Both are named after George Biddell Airy.

All optical systems produce a blurred image as a result of diffraction. On a fundamental level, we require a ruler to measure how much blur has occurred in a system. MTF, MTF50, and other measures are all "resolved" quantities mathematically. They are produced by taking an intensity profile and performing some mathematics on it. These methods cannot tell the "source" of blur, only that blur has occurred.

When you consider things like chromatic aberration, however, it becomes clear that these effects carry some dependency on wavelength, or color. As it turns out, this is also true for the wave behavior of light. Blue light doesn't travel faster than red light, but its shorter wavelength means it diffracts through a smaller angle when its path bends around an aperture, and so it blurs less.

In this respect, we use the wavelength of light as a ruler. Shorter-wavelength light doesn't divert as much, so it stays tightly packed and produces a small, high-intensity spot.

However, this is largely irrelevant to photography, as consumer lenses are simply too aberrated.

Here I present the spots from several samples of a lens, as examined on an MTF bench. The lens is f/2.4 and covers a 120-degree field of view. As designed, it is diffraction limited (corrected to less than lambda/6 waves of aberration; lambda/4 is generally considered diffraction limited).

First we have an excellent spot:

We may also see a disturbed optical system, i.e. one with some amount of misalignment. This particular sample has about a wave or so of coma on-axis. This is more typical of a consumer lens, since they simply do not cost enough to be designed and aligned to this specification (nor is it truly necessary).

As another example, here's about a half wave of coma, but also about a wave of astigmatism. Not very pretty.

Here's the MTF of the three spots in the same order:

Now let's look at a consumer lens universally regarded as being super duper sharp and a lens that is a favorite of mine, the Zeiss 100mm f/2 Makro Planar.

I apologize for the change of format. The big kicker here is that nowhere in the field of view does the MTF at 50 lp/mm surpass that of the highly disturbed sample. It's at about 0.6 across the entire field, where the highly disturbed but perfectly designed lens achieves about 0.7 even in its worst plane.

Maybe in 10-25 years, when consumer interchangeable lenses are designed as well as this $25,000 wide-angle fixed lens, the Airy disk will matter in photography, but today it does not.

We all know that light beams travel in a straight line (the word "beam" is related to the German Baum, a tree trunk). We also know that sound waves bend around an obstacle: we can hear someone yelling from behind a tree. You might know that water waves also bend around obstacles. But did you know that light grazing a sharp opaque edge bleeds into the path of rays that just clear the edge of an iris diaphragm? We are talking about the blades of the lens aperture. We are talking about diffraction (from the Latin diffringere, to break apart).

The light that bleeds into the geometrical shadow of the iris illuminates the line of demarcation, making the separation of light and shadow indistinct. However, this diffraction does more. The diffracted beams interfere with the direct beams and form a series of interference bands that surround each point of light projected by the lens. What we see are concentric bands surrounding this point of light, indistinct circles that decrease in intensity and spacing away from the central point. We can only see these bands if we examine the image cast by the lens with a magnifying glass. We are actually looking at two phenomena, interference and diffraction, which intertwine to cause what we want to be a dimensionless point of light to appear as a circle of light with a scalloped edge. This pattern is named the Airy disc, after Sir George Biddell Airy (1801-1892), the English Astronomer Royal who first described it mathematically. The limit it places on resolution was studied by John William Strutt, 3rd Baron Rayleigh (1842-1919), Nobel laureate, and the Rayleigh criterion for lenses remains valid despite our best efforts. The resolving power of a lens is given as lines resolved per millimeter, meaning we are able to distinguish a space between closely ruled lines.
Resolving Power (RP) = 1392 / f-number (the constant differs for each wavelength, but photographically we use 1392).

## Review of the Meade 152ED APO Refractor

The Meade 152ED seems to have never garnered a particularly large following, and yet here is a telescope that most assuredly is capable of delivering 100% of at least one of the main big-refractor promises: ultra-sharp, high-contrast visual observing.

The Meade 152ED is a physically big refractor. It is an f/9 (1370mm) ED lens design, so with the dew shield in place it is quite long. It also uses a 7” outer-diameter OTA, so the baffling is some of the most effective I have ever seen on a refractor. The Vixen 102 Achromat I owned also used an oversize tube (114mm O.D.), and it too had superb baffling.

The focuser is a 2.7” unit, though it comes with a 2” adapter in place. While I have never heard a great deal of praise for this focuser in reviews, I would rate it as being among the better focusers in the "mass-market" scopes I have used, aside from those with Crayford focusers. For example, I am a big fan of Vixen products, and the Meade 152ED's focuser is a bit better than the Vixen refractor focusers I have used. The movement is heavy (though it is lightening up with use), but smooth. I added a couple of drops of Teflon lubricant where I think the rails are, and this also helped. There is also a tiny bit of gear-lash, which asserts itself when attempting to focus at higher powers.

The Meade 152ED is a nicely executed package. Workmanship and finish overall are actually quite good, again appearing equal to or slightly better than Vixen quality. I see that I am using Vixen as the benchmark here, and frankly that seems fair. I regard Vixen as being among the better of the mass-market telescopes, clearly better than most other mass-produced brands, so saying the Meade is similar or slightly better in quality is, to me, a compliment considering the price point.
Now I will be honest and say that mine had an optical problem when I purchased it, but some phone calls to Meade, some shipping effort (paid for by Meade), and some patience, and the problem was corrected. When it was received, stars had a prism-like appearance, being tinged with red on one side of the Airy disk, white in the center, and blue on the other side. I tried collimation and centering, but these did not correct the problem. Only by returning it to Meade was I able to get the problem rectified.

This scope requires a BIG mount. I have mine on an EQ6 SkyScan mount. This combination is acceptable for visual use, but for imaging, I suspect that a Ci-700 or G-11 would be the minimum that would be considered acceptable. I am compelled to say this… This scope is more demanding to set up than any scope I have owned before. The mount is heavy, and I either have to drop the weights and grunt it out of the house, or further disassemble it to move it. The OTA is LONG. It will not easily fit into most cars unless they are hatchbacks or have a trunk pass-through. This is something of a specialty instrument that requires a lot of compromise from the owner. If you haven't owned a really big refractor before and are considering it, I would say that you should consider these issues carefully. I still struggle with them, and I really do like this telescope. From time to time, though, I wonder if I will continue to use it as much as I have been lately, just because of the set-up and transport issues.

Now, I got here the long way around… I tried smaller refractors. The 102mm APO I owned didn't wow me at all. Oh sure, it was compact and offered crisp views, but in a nutshell, it was just too small to gather enough light for use in my central Austin, TX location. Even the glorious M42 was positively underwhelming.
I didn't find it all that much better than the Orion 127 Mak I owned on many targets (the 4” WAS better though, just not what 5 times the price would suggest it should be). I sold the 4” APO for a Vixen 102 Achromat, which I considered to be almost as good, and at a small fraction of the cost. I sold that one too, though, because once again, 4 inches of aperture just didn't cut it for me. I bought a Vixen 140NA refractor for my low-power, wide-field work, and I have been happier with it than with the 4” APO or Achromat.

I also owned a couple of Celestron CR150 refractors, both of which I found to be very good. Only at higher powers or when viewing solar system targets did chromatic aberration become a problem, and it was a serious problem for me, clearly lowering contrast at high powers on Jupiter and the Moon. The Vixen 102 was almost color-free by comparison. A 6” f/8 achromat lens is going to produce violet at the eyepiece. Lots of it….

But the allure of a big refractor haunted me. I mean, I do indeed "get" the notion that there is a fine quality to the views delivered by a good refractor, but the smaller refractors I owned lacked the punch I longed for, and the big f/8 achromatic refractors lacked the image fidelity of the smaller APOs. I had my name on a waiting list for a large high-end APO for several years, but grew frustrated by the wait and finally decided to purchase a new Meade 152ED. Providence shined on me, as the decision was a good one.

This particular Meade 152ED has some of the better optics of the scopes I have owned. The polish is very smooth, and there are no obvious defects visible in the star test. Spherical aberration may be present to some tiny degree, because the inside- and outside-of-focus Airy disk patterns look a bit brighter on one side than the other. Still, whatever SA might be present must be of such negligible amount as to be meaningless.
IN focus, the Airy disk is virtually perfect, as depicted in star-testing charts for unobstructed telescopes. The Airy disk is a small bright dot, surrounded by an oh-so-very-faint first diffraction ring. In fact, on dimmer stars, the first diffraction ring is barely visible. Bad seeing conditions will make it a bit more apparent, but for the most part, the in-focus Airy disk is almost perfect. This telescope clearly presents some of the most pleasing stellar views I have ever had. While my 4” APO presented as good an in-focus Airy disk, it just didn't focus the stars down to such small points. At moderate powers, the 4” scope started to show stars more like textbook drawings of Airy disks rather than like stars suspended in space. The 6-inch telescope allows them to keep looking like stars at considerably higher powers.

If I remember my book learnin correctly, a refractor with perfect optics will put 84% of the light into the Airy disk, with 7% going into the first diffraction ring, and the remainder going into outer rings that will be too faint to detect on all but the very brightest stars. Now, on this point of having 84% of the available light going into the Airy disk… if you think about it, you will understand why these big refractors seem so special. The main reason is related to the arch-nemesis of obstructed (reflecting) systems: seeing conditions. When seeing is less than perfect, a system with a 25 percent obstruction, which is already putting about 20% to 25% (or more) of its light into the first diffraction ring, will see more and more of the light being spilled into the successive rings and scattered among them. The result is that in anything less than just about PERFECT seeing, the reflector image will start to break down quicker. My 11” SCT shows this to an extreme degree.
It is large (which makes it more affected by seeing because of the nature of the turbulence cells in the atmosphere), and it has a big central obstruction (the first diffraction ring around the Airy disk of a perfect SCT will be about as bright as the ring around the Airy disk in a refractor with ¼ wave SA). The net effect is that even though the Airy disk itself is theoretically smaller in the large reflector, typical seeing will spill enough light into the diffraction rings to bloat the image of the star. As a result, on the vast majority of my viewing nights, the Meade 152ED presents much more pleasing views than anything else I own. On EXCEPTIONAL nights (1 in 25 most of the year, 1 in 10 in the summer), the SCT will present noticeably smaller Airy disks, but even then, they still look somewhat dull when compared to the Meade 152ED. In fact, one book (Telescope Optics: Complete Manual for Amateur Astronomers, by Rutten and Van Venrooij) provides a chart that suggests that a 6” refractor is perhaps one of the best choices for an all-around telescope when trying to balance the desire for large aperture against the negative effects of poor seeing. My own experience is now confirming this for me.

On many nights, I enjoy the view through the big refractor more than with my NexStar 11 when viewing a large percentage of stellar and solar system subjects. Only when viewing objects such as globular clusters and faint open clusters or galaxies does the bigger reflector show a decisive advantage, and only because it reaches deeper, turning up stars maybe a magnitude fainter, which in a globular cluster translates into a LOT more stars. Regarding secondary color, well, there simply isn't any visible on 99.9% of the subjects I have viewed (including Jupiter and Saturn). Some people will say that the Meade 152ED is not an APO, or prefer to call it a "Semi-APO", and I am not at all inclined to argue either point.
I will only say that color is so well corrected as to be unnoticeable in normal visual observing. For me, that is enough. I can see a very faint, very narrow yellow tinge just inside the limb when viewing the Moon off axis in some of my eyepieces, but I think that some of this may come from the eyepieces themselves. It is more pronounced in some eyepiece designs (complex ones) than in others. On axis, I see practically none at all on the limb, and on the disk itself, I see absolutely no false color. None. Nada. So, I will call it a "practically color-free, larger aperture, high quality refractor that is a true pleasure to view with." And frankly, if you asked me for just a lay-person's opinion, I would say that I think Meade was justified in using APO in its marketing material. You must remember that this telescope design (and Meade's accompanying marketing material) was introduced 10 years ago, and when Meade introduced it, by the standards of the day, I believe that it would have easily been accepted as a true APO. Some of the other "APO" scopes from that time also showed a tiny bit of color, so when I put it into perspective, I don't think Meade was misleading with their labeling. I have looked through a couple of very expensive 4” APOs from 10 years ago, and I would say the big Meade is as good, and compared to even the BEST achromats from then and now, it still would be considered color-free.

Ok, so, how is it to view with a practically color-free, larger aperture, high quality refractor? It is, in a word, sublime. Yes, there is indeed something special about the view through a practically color-free, larger aperture, high quality refractor. The incredible sharpness of stars in the field is hard to describe. People say "pin-point," but in an attempt to build artificial stars, I have made many pin-points in foil, and they don't come close to matching the nature of an in-focus star in the Meade 152ED.
On a big cluster like M37, the stars are such finely focused little points of light that they seem impossibly small. On nights of average seeing, the Meade 152ED presents perhaps the most pleasing open-cluster viewing that I have ever enjoyed. My 4” APO didn't gather but half the light, and the Airy disks, being much larger, didn't give the same pin-point-like impact. Yes, they were excellent, but at even moderate powers the Airy disk would show, while at similar powers in the 152ED, stars still look like tiny points. Light throughput is excellent. Theory says that light throughput should be a bit less than an 8” SCT, but in side-by-side comparisons on several open clusters, I could not see any stars in my 8” SCT that were not visible in the 6-inch refractor, and everything is sharper in the refractor to boot. Now oddly, it will be a bit harder to find a very dim star in the 152ED when doing these comparisons with the C8, again because the star is such an ultra-fine point in the refractor. Once you find it, though, it is conspicuous. In the 8”, the star is visible because it presents itself as a larger blob. In the refractor, it presents itself as an oh-so-tiny point that you actually have to look harder to find, but once the position is located, suddenly you can't HELP but see it. Odd sensation, really, but this is what I see.

With deep sky objects, the 6” Meade 152ED is probably closer to the Celestron 9.25-inch SCT that I used to own. Now, I didn't really notice that often-mentioned difference in sky blackness. When using eyepieces that produce similar magnifications and similar fields of view, I just don't see the sky as being "blacker" in the refractor on deep sky objects. And my bet is that the sheer power of aperture in a 9” scope will start to show slightly fainter stars… ah, but not as sharp. There IS a difference when very bright objects are in the field of view, especially the Moon.
On the Moon, there is indeed a darker sky just off the limb in the 6” refractor. In fact, the sky looks ultra-black in this case, while in the SCT it starts to show a very faint glow. Similarly, planets will show a tiny faint glow around the planet in the SCTs. Some of this comes from reflections in the eyepiece, and while I couldn't match powers and eyepiece designs to make an exact comparison, the sky immediately around Jupiter is quite black in the refractor, while there is a faint glow immediately around the planet in the SCT. Now, don't take this as a disparagement of the 8” and 9.25” SCTs, because an 8” SCT on a computerized mount costs half the price of the Meade 152ED OTA alone (new), and a C9.25 OTA (new) costs only about half of what the 6” refractor OTA would cost. I have incredible respect for these two OTAs, and if either budget or overall manageability were my highest priority, either one of these scopes would be VERY difficult to pass by. The 8” SCT on a computerized mount is still my first recommendation for people wanting to enter amateur astronomy with a mid-sized GEM-mounted scope and a reasonable budget. I own a Celestron 8” SCT that I have on a Meade LXD55 mount, and to this day it still gives me great satisfaction to use it.

When used with a 35mm Panoptic, the Meade 152ED can provide a 1.7-degree field, and it is a spectacular field indeed. Now, this field is 1.2 degrees shy of what I can achieve with my Vixen 140 refractor, and of course I can't reach the same low powers, but the quality of the view in this size field is unsurpassed by anything I have ever owned. The Vixen 140NA that I own presents an in-focus Airy disk that appears to have about ¼ wavelength of spherical correction error, and it is not quite as bright as the 12mm deficit in aperture would indicate (it is a 4-element design), so that while it does present a very lovely field, the view of similar-sized fields in the 6” 152ED is clearly superior.
Stars just appear as much smaller, more intense light sources. Likewise, nothing I have owned can do double stars better on typical nights. On maybe a couple of dozen nights in the year, the NX11 can eke out some incredibly close or high-contrast-difference doubles, but the Meade 152ED reaches its theoretical performance on MOST nights. On very bright stars, poor seeing will start to affect even this scope. Seeing is MUCH less a factor than with my 11” SCT, though.

Deep sky performance, when compared to my NexStar 11, is, as you would expect, not as good. Aperture clearly dominates. Still, on a recent trip to dark skies in north Texas, the 6” refractor easily showed the inclusion in M82, and spiral structure in M51, along with the companion. Deep sky views between this and the 11” SCT were more similar than I would have thought they would be. But the larger aperture did prevail on most targets. M13 was more resolved in the larger scope, as was M92. The dust lane in the Sombrero Galaxy was a bit more prominent in the 11” SCT. M37 was much richer in the 11” SCT, though better framed at low power in the 6”. The Ring Nebula (M57) was similar in both, but the wider field of the Meade 152ED could offer a much nicer framing here as well. For some reason, the Ring Nebula is as enjoyable to me at low power as it is at high power. At the same time, though, the crispness of the view in the 6” refractor always seemed better. This is a major point. I SEE more with the larger scope, but the views seem more pleasing in the 6” refractor. I never got this with smaller refractors, again because they just didn't go deep enough, and also because of the issue of the Airy disk sizes. The Meade comes in second to my NexStar 11 on lunar performance.
Seeing again is an important factor, and on nights of average seeing, the 6” refractor usually presents an image that is very close (closer than anything I have owned previously), but the resolution of the much larger instrument just shows more detail. Last night (18 May 05), the seeing was actually quite good here in central Texas, and in a side-by-side comparison, the NexStar 11 just eked out a bit more detail everywhere I looked. In past comparisons, when seeing conditions weren't as good, the Meade 152ED held its own. No other telescope I have ever owned has done so well on the Moon when compared to the NexStar 11.

On planets the situation is clouded… Saturn usually looks a tiny bit better in the NexStar 11. With the 11” SCT, I see more evidence of the polar shading, and the Cassini division seems a bit more pronounced, as does the shading of the rings. The main factor, I think, is that image brightness of the Meade 152ED starts to become the limiting factor at about 40x per inch of aperture. Now, this is quite good. I have never owned a telescope that could achieve improved performance much above 30x to 35x per inch of aperture on solar system objects. I hear of people using 50x per inch of aperture and more, but frankly, I have never owned a scope that showed me any MORE detail at 50x per inch than at 35x per inch on planets. But the Meade 152ED does work extremely well at 40x per inch of aperture (38x actually, which is 228x with a .66mm exit pupil, using a 6mm Radian). Using a 5mm Radian for 274x (.55mm exit pupil) produces a bigger, more pleasing, but dimmer image, so that while the image is at a more comfortable scale, I can't say that I see additional details. Also, floaters in my eye become a much bigger bother at this exit pupil. But the image is still quite sharp at this magnification. I would say that no telescope I have owned previously has been able to achieve this kind of magnification ratio.
Also, it is a VERY rare night that I can use over about 190x with the big SCT. With the Meade, I can often use 228x, which results in a better exit pupil for detecting low-contrast objects, and I do have the ability to go to 274x if I want a bit bigger image scale. In the past, I have owned a few scopes that seemed to present results on Jupiter nearly on par with my 11” SCT (the MN61 was closest), but on most nights, if I was patient, I could see more in the larger SCT. Extremely low-level contrast detail in the SCT, as mentioned before, is seriously affected by seeing. To get good performance on Jupiter with a large SCT, the seeing has to be almost perfect. In the last year, I have only had perhaps a couple of dozen nights of seeing this good. Still, against most telescopes, on most nights, patience would allow the 11” to glimpse detail that was not visible in the smaller scopes. This simply isn't true when viewing Jupiter and comparing the NX11 with the Meade 152ED. On MOST nights, the Meade 152ED simply provides a consistently better viewing experience on Jupiter than anything I have owned previously. How good? Very good indeed… Detail in the GRS, detail in the turbulent streams following it, ovals, festoons, barges, subtle belts, it's all there on nights of even reasonably good seeing. The moons of Jupiter ALWAYS look sharper in the refractor. Now, to be fair, when seeing is good for the NexStar 11 (a minimum of a half-formed, in-motion first diffraction ring), the big SCT can provide more detail than the Meade 152ED. Most notably, some of the small white ovals in the southern hemisphere don't resolve as well in the Meade 152ED. Once again, the superior resolution of the big SCT does a great deal to offset the contrast loss imposed by its large central obstruction. While contrast IS an important factor in planetary observing, you would do well not to dismiss resolving power.
While I have not seen Mars in the 152ED, I have compared the NexStar 11 to several APO refractors during the Mars close approach a couple of years ago, and no refractor I viewed through showed as much detail as the 11” SCT did. The resolution of the larger SCT just totally overpowered the 4” and 6” refractors I viewed through on this moderately high-contrast object. I just didn't feel as strongly about the appeal of a 4”, $2000 (or more) APO, but at 6 inches and $2500, the argument for a practically color-free, larger aperture, high quality refractor looks far more compelling to me.

I find myself actually picking a "smaller" scope over a larger one for a fair percentage of my viewing. On many nights, I take out both the NexStar 11 AND the Meade 152ED. Prior to getting the Meade 152ED, if I was only going to take out ONE scope, it would almost always have been the NexStar 11. Now, I find myself taking out the 6” refractor on a single-scope night almost as frequently as the 11” SCT. My Vixen 140NA is sitting idle far more now (though summer is coming and the 140NA is still my favorite wide-field scope). The 152ED doesn't collect the same amount of light and can't match the resolution of the larger scope, but the views are just too beautiful to ignore. So yes, there truly IS strong merit to the argument that large refractors are some of the most enjoyable scopes to view with. And at $2500, the Meade 152ED does indeed provide a practically color-free, larger aperture, high quality refractor viewing experience.

I really, really like this telescope. Can you tell? Reports of issues with quality control and reports of some with maybe less than really good spherical correction (which in my mind might negate the entire value proposition) make it difficult to offer an unrestrained endorsement, but a good example like mine could turn out to be a prized possession. Mine certainly is to me.

An ENTIRE Marine Corps squad was tragically lost in Iraq in the past week. Several members of that squad were killed by insurgents hiding and shooting from under the floor of a house. When one of the squad members went down, others were either killed or injured when they attempted to rescue the first, which is what we Marines do. A couple of days later, the surviving members of the same squad were killed by a roadside bomb.

My anguish knows no boundary.

My wish is that this war should not have happened, and that you were all safely in the arms of your family or loved ones. I can’t have that wish, so in its place, I offer you my compassion, and my condolences for the buddies and families of the fallen.

And to all of our forces in the middle-east, may the stars in the skies over you guide you home, and to peace. Please come back to the world safe.

Whenever light passes a boundary, it diffracts, or bends, due to the wavelike property of light interacting with that boundary. An aperture in an optical system, typically circular or circle-like, is one such boundary.

How light interacts with the aperture is described by the point spread function (PSF): how much, and to what degree, a point source of light spreads as a result of passing through the optical system. The PSF is determined by the geometry of the system (including the shape and size of the aperture, the shape(s) of the lenses, etc.) and the wavelength of light passing through the optical system. The PSF is essentially the impulse response of the optical system to an impulse function: a point of light of some unit amount of energy that is infinitesimally narrow, or tightly bounded, in 2D space.

The convolution of light from the subject with the point spread function results in an image that appears more spread out than the original object. By Wikipedia user Default007, from Wikimedia Commons. Public Domain.
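That convolution can be demonstrated numerically. A toy sketch in Python (the Gaussian kernel here is a stand-in for a real PSF, and the grid sizes are arbitrary):

```python
import numpy as np
from scipy.signal import fftconvolve

# A "scene" containing a single bright point (an idealized star on a dark field)
scene = np.zeros((64, 64))
scene[32, 32] = 1.0

# A toy PSF: a normalized Gaussian kernel standing in for the system's response
y, x = np.mgrid[-8:9, -8:9]
psf = np.exp(-(x**2 + y**2) / (2 * 2.0**2))
psf /= psf.sum()                # normalize: blurring should not create energy

image = fftconvolve(scene, psf, mode="same")

# The point has spread out, but the total light is (nearly) unchanged:
assert image.max() < scene.max()
assert abs(image.sum() - scene.sum()) < 1e-6
```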

For a perfectly round aperture in a theoretical, optically perfect imaging system, the PSF is described by an Airy disk, which is a bullseye-target-like pattern of concentric rings of alternating regions of constructive interference (where the light's waves interact constructively to "add up") and destructive interference (where the light's waves interact so as to cancel themselves out).
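For reference, the Airy pattern's normalized intensity is I(x)/I0 = (2*J1(x)/x)^2, where J1 is the first-order Bessel function and x = pi*D*sin(theta)/lambda. A short sketch using SciPy:

```python
import numpy as np
from scipy.special import j1

def airy_intensity(x):
    """Normalized Airy intensity I/I0 = (2*J1(x)/x)^2; x = pi*D*sin(theta)/lambda."""
    x = np.asarray(x, dtype=float)
    out = np.ones_like(x)          # the x -> 0 limit of (2*J1(x)/x)^2 is 1
    nz = x != 0
    out[nz] = (2 * j1(x[nz]) / x[nz]) ** 2
    return out

# The first dark ring sits at x ~= 3.8317, which is where 1.22*lambda/D comes from:
assert airy_intensity(np.array([3.8317]))[0] < 1e-6
assert airy_intensity(np.array([0.0]))[0] == 1.0
```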

It's important to note that the Airy disk pattern is not a result of imperfect lens qualities, errors in manufacturing tolerances, etc. It is strictly a function of the shape and size of the aperture and the wavelength of light passing through it. Thus, the Airy disk is a sort of upper bound on the quality of a single image that can be produced by the optical system.

A point source of light passing through a round aperture will spread to produce an Airy disk pattern. By Sakurambo, from Wikimedia Commons. Public Domain.

When the aperture is sufficiently large, such that most of the light passing through the lens does not interact with the aperture edge, we say the image is no longer diffraction limited. Any imperfections in images produced at that point are not due to the diffraction of the light by the aperture edge. In real (non-ideal) imaging systems, these imperfections include (but are not limited to):

- noise (thermal, pattern, read, shot, etc.)
- quantization errors (which can be considered another form of noise)
- optical aberrations of the lens
- calibration and alignment errors

There are techniques to improve the images produced, such that the apparent optical quality of the imaging system is better than the Airy disk limit. Image stacking techniques, such as lucky imaging, increase the apparent quality by stacking multiple (often hundreds of) different images of the same subject together. While the Airy disk looks like a fuzzy set of concentric circles, it really represents a probability distribution of where a point source of light entering the camera system will land on the imager. The resulting increase in quality produced by image stacking is due to increasing the statistical knowledge of the locations of the photons. That is, image stacking reduces the probabilistic uncertainty produced by diffraction of the light through the aperture, as described by the PSF, by throwing a surplus of redundant information at the problem.
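The statistical core of stacking is that averaging N independent frames shrinks the noise by roughly sqrt(N). This sketch shows plain averaging only; actual lucky imaging additionally selects and registers the sharpest frames, which is not modeled here:

```python
import numpy as np

rng = np.random.default_rng(0)
true_signal = 10.0

# 400 "frames" of the same flat scene, each with independent Gaussian noise
frames = true_signal + rng.normal(0.0, 2.0, size=(400, 32, 32))

single = frames[0]
stacked = frames.mean(axis=0)

# Averaging N frames cuts the noise by about sqrt(N); 400 frames -> ~20x less
assert stacked.std() < single.std() / 10
```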

Regarding the relation of apparent size to the brightness of the star or point source: a brighter source of light increases the intensity ("height") of the PSF, but does not increase its diameter. However, increased light intensity coming into an imaging system means that more photons illuminate the boundary pixels of the region covered by the PSF. This is a form of "light blooming", an apparent "spilling" of light into neighboring pixels, and it increases the apparent size of the star.
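A toy illustration of that last point: scaling the PSF's amplitude leaves its width unchanged, but more pixels clear a fixed detection threshold, so the star looks bigger. (The Gaussian profile and all the numbers here are stand-ins, not real sensor values.)

```python
import numpy as np

# A Gaussian stand-in for a PSF profile, sampled on a pixel grid
y, x = np.mgrid[-16:17, -16:17]
psf = np.exp(-(x**2 + y**2) / (2 * 3.0**2))

faint = 10 * psf       # dim star
bright = 1000 * psf    # bright star: same profile width, higher amplitude

threshold = 5.0        # an arbitrary detection floor for the sensor

# More pixels clear the threshold for the bright star, so it *appears* larger,
# even though the underlying PSF diameter is identical:
assert (bright > threshold).sum() > (faint > threshold).sum()
```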

## Binoculars and light amplification

you really need to go do some reading on telescopes and other optical instrument systems.
Again, despite your earlier comment about knowing how they work, it's very obvious you don't understand the basics, else you wouldn't be asking these same questions over and over

do some google searching on optical ray paths for lenses and telescopes

here's a starting point with images showing ray paths etc I will let you
start doing some further research yourself

no, it doesn't. A laser is a different ball game

a laser is a coherent light source, light from stars, the sun and other objects is non-coherent

again, that has already been answered (a single lens). A magnifying glass will do that

again, no it can't. Read the last section of my previous post

All optical systems that magnify the object being viewed are inherently narrow-field-of-view systems:
telescope, binoculars, microscope, camera.
The higher the magnification, the smaller the field of view; the two are inversely proportional.

A wide-angle system, say a fisheye lens for a camera, will have a very short focal length, less than 20mm,
and can have a field of view of up to 180 degrees. The front element of such a lens will be highly curved.
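The inverse relationship can be sketched with the usual eyepiece rule of thumb (true field is roughly the eyepiece's apparent field divided by magnification; the 50-degree eyepiece here is hypothetical):

```python
def true_field_deg(eyepiece_afov_deg, magnification):
    """Rule of thumb: true field of view ~= eyepiece apparent field / magnification."""
    return eyepiece_afov_deg / magnification

# Doubling the magnification halves the field of view (inverse relationship):
low = true_field_deg(50.0, 40)     # 1.25 degrees
high = true_field_deg(50.0, 80)    # 0.625 degrees
assert high == low / 2
```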

did you read my post #18?
the same answer applies to this latest Q from you.
do you understand why the screen looks black (or whatever colour you make it)?
consider again why a green leaf looks green: ALL colours except green are going to be absorbed.
can you then understand why you are going to see somewhere between a tiny amount and no blue light at all?

Ok. Let's replace the green leaves with the blue of the sky, or better yet, let's make the target a compact fluorescent lamp.

"Nowadays, there's an increase in the use of digital devices and modern lighting—such as LED lights and compact fluorescent lamps (CFLs)—most of which emit a high level of blue light. CFLs contain about 25% of harmful blue light and LEDs contain about 35% of harmful blue light. Interestingly, the cooler the white LED, the higher the blue proportion. And by 2020, 90% of all of our light sources are estimated to be LED lighting. So, our exposure to blue light is everywhere and only increasing."

Suppose there was a compact fluorescent lamp 20 meters away. Using 8x binoculars to view it, would your retina receive more blue light than with just the naked eye?

not sure why you would want to do that, but yes, the lenses would have the effect of concentrating the gathered light into a smaller area.
This would increase the apparent intensity that your eye sees through the optics compared to the naked eye

do you intend making a habit of looking at white LED and CFL lighting through binoculars? Not really sure what the point is that you are trying to make

it was described way back earlier in the thread that lenses can concentrate light

I'm asking because I want to know how to avoid that blue light when using my binoculars. I use them mostly in the daytime, looking at mountains, skies, birds and buildings. I want to avoid, or lessen, the blue light scattered from the sky reaching my eyes.

A binocular is said to magnify light. When you look at photos on a touchscreen cellphone and press zoom, there is no additional light that wasn't already there. In the case of a telescope, there is real focusing of light, so the term magnification may not be enough; perhaps we must use another term, like "focal amplification via collected light density". I need other examples where, when you focus something, you not only magnify it but also amplify it. In the case of the objective lens, the extra energy to amplify it comes from the additional light rays entering the objective, so "magnification" is not accurate.

don't make up terms. You have already been told several times that there is NO amplification.
again: amplification requires the input of additional power.

what additional energy or light rays? There isn't any.
sorry, the rest of what you wrote doesn't make any sense in the physics world

I think you may be right above. But I think some of us went wrong somewhere earlier.

It's like this: with a 7x35 binocular, 49x as much light is focused toward a point (comparing the 35 mm objective to our pupil). But the image is subtending 7 times the angle on the retina, so its area is 49 times larger, and the light density is reduced by 49x!

So we are not really looking at 49x the light intensity at the focal point; the gain is cancelled because the image subtends 7 times the angle!

Therefore, when looking at a compact fluorescent lamp 20 meters away, you won't have more intensity of blue light harming your eyes. It's the same intensity as the original. Is that correct, or not?
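That arithmetic can be checked with a quick back-of-envelope sketch in Python. The ~5 mm eye pupil is an assumed illustrative figure, not a measured one:

```python
# Surface-brightness bookkeeping for a 7x35 binocular (illustrative numbers).
magnification = 7
objective_mm = 35
eye_pupil_mm = 5          # assumed eye pupil diameter

light_gain = (objective_mm / eye_pupil_mm) ** 2   # 49x more light gathered
area_spread = magnification ** 2                  # retinal image area is 49x larger

surface_brightness_ratio = light_gain / area_spread
print(surface_brightness_ratio)   # the 49x gain and 49x spread cancel

exit_pupil_mm = objective_mm / magnification
print(exit_pupil_mm)              # 5.0 mm: exactly fills the assumed eye pupil
```

The 1:1 result only holds for extended objects, and only while the exit pupil fits inside the eye's pupil; for a point source the 49x spreading term doesn't apply, which is why stars do look brighter.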

You may well have a point as far as a single point is concerned. That was what I just bumped into in my last paragraph.
But we have to be careful here! If a point is really a point, something infinitely small, then how can we have any light from it at all?! Say the object emits so much light per square metre; how much light does it emit from a point of zero diameter and area 0 m²?
On the other hand, if the point does have a finite size, however small, then the light it emits is focused to another point of finite size, determined by the magnification of the lens system. So the light may be less or more bright per unit area.

The point I was thinking of in my final comment, was that for a very tiny source like a star light years away, the light is focused to a point which should be very small, but is limited by diffraction to a size much bigger than it should be. When this image is magnified the image point should still be very small and is still limited by diffraction to a similar size. In that case the greater amount of light collected by the telescope is indeed focussed to the same area that the smaller amount collected by our eye would be. Then the image is brighter.

When we abandon very tiny sources like distant stars and start looking at a leaf on a tree, for example, we now get images whose sizes are determined by the magnification of the lenses, not just diffraction. There will be a total amount of light from a given area of the object, and this will be concentrated into or spread over the corresponding area of the image.
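The star case can be put into numbers with the usual Rayleigh criterion, θ ≈ 1.22 λ/D. The 7x35 binocular figures and 550 nm wavelength here are illustrative assumptions, not values from the thread:

```python
import math

wavelength_m = 550e-9      # green light, assumed for illustration
eye_pupil_m = 0.005        # assumed eye pupil diameter
objective_m = 0.035        # 7x35 binocular objective
magnification = 7

def airy_radius_rad(aperture_m):
    """Angular radius of the Airy disc's first minimum (Rayleigh criterion)."""
    return 1.22 * wavelength_m / aperture_m

# Star seen with naked eye vs through binoculars: the objective's diffraction
# spot is 7x smaller in angle, but gets magnified 7x on the way to the eye,
# so it lands on roughly the same retinal area as the naked-eye spot,
# while 49x more light was collected -> the star genuinely looks brighter.
naked_eye_spot = airy_radius_rad(eye_pupil_m)
binocular_spot = airy_radius_rad(objective_m) * magnification
print(round(naked_eye_spot / binocular_spot, 6))
```

The ratio comes out as 1 because the objective diameter divided by the magnification (the exit pupil) equals the assumed eye pupil, so the magnified diffraction spot matches the eye's own diffraction limit exactly.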

The light from any point on the leaf will be spread out into an Airy disk, the size of which depends on the optical properties of the whole system, including the eye.

For an extended object, yes, the image through the telescope or binoculars is the same brightness as it is through the naked eye. For a point-like source, like a faraway star, the brightness is generally increased, since even after you magnify the image the object still cannot be resolved as anything but point-like. (Just like Merlin explained in post #23)

The leaf doesn't have a single Airy disc. Every point on the leaf has its own Airy disc, and since there are an infinite number of points, there are an infinite number of Airy discs. It is this overlapping pattern of Airy discs that forms the image on your retina. When you magnify the image of the leaf, you magnify the pattern of Airy discs, which spreads out the light.

Not necessarily. If the lamp is small enough, then magnifying its image won't spread the light out very much, so the intensity increases drastically, similar to how a point-like source acts.

I can't help thinking that introducing the Airy disc into this discussion is not helping at all. We are at a more basic level than that - partly to do with the actual definition of 'Brightness'. Wiki (convenient but not 100%, I know) refers to it as (R+G+B)/3, which implies we are talking in terms of energy from a sub division (pixel) of an image or object and not the total energy being emitted by or received from it. Stars, being point sources, will have a brightness that's independent of the telescope magnification.

A good example of this is a well designed telephoto lens for a camera, in which the sensor is well positioned and the limiting pupil is large enough to produce uniform illumination of the sensor. But even some expensive lenses exhibit Vignetting (darkening of the corners of the picture), which is where all the off-axis light is not getting through the limiting pupil. It's a common problem with eyepieces that you can't see a thing if you move your eye slightly from side to side. But, of course, the pupil is much smaller.
I have to ask just how the direction(s) the thread is taking are helping to further answer the actual question in the OP. We have dealt with the magnification / amplification question satisfactorily, I think. The link in the OP actually deals with all of this pretty well. Perhaps reading it again (plus the Brightness link) would sort out the problem.

Ok. I'm just confused by the difference between extended objects and point-like light sources. This is because in my daytime use of the binoculars, tracking birds in flight, I can see many glints of sunshine, sometimes sunlight reflecting off shiny poles and windows, so I wonder how they are magnified at the focal point and affect my eyes.

Anyway, when you use a magnifying glass on paper, it burns. So does it burn because the Airy disc has so much light intensity, or because the sunlight is spread over a large area of the paper? I guess it is the former, isn't it? Is there a test of this where the paper acts as a detector? Anyway, I learnt in this thread that there is a difference between extended objects (like leaves) and point-like light sources (like sunlight reflecting off poles, etc.); that is not in the original web link.