After writing about telescopes in space in my last article, I was reminded of this “illustrated interview” with Howard Grubb, published in the Strand Magazine in 1896. It starts perhaps not very promisingly: “The poverty of Ireland is such that the superficial observers are apt to wonder whether any good thing can really come out of that distressful country”. It does improve from there! It was sent to me by a descendant of Grubb. It is very interesting, especially the part at the end about future large telescopes which, of course, will be floating in water. The image below is supposed to be “casting the mirror for the great Melbourne telescope”, but it doesn’t look like any kind of “astronomical” ceremony to me!
The Euclid space mission (which I have already written about here) plans to stringently test our cosmological model by precisely measuring the shapes and positions of a billion faint galaxies. But you are not going to take a picture of each galaxy individually; obviously, that would be far too slow! You need to do a survey, which means that with each image you want to capture the largest possible area of the sky. So you need a survey telescope, and because you have a limited budget you need to be able to launch it on the smallest possible rocket. How do you do this? That is what I am going to write about here.
Designing survey telescopes has always been challenging. Photographic plates made it possible, for the first time, to measure the positions and brightnesses of large numbers of objects. The field of view (the amount of sky you see) of these telescopes, however, was comparatively small, and the search soon started in earnest for a telescope which would allow astronomers to survey the sky even more rapidly. The Schmidt telescope design comprises a spherical mirror paired with an aspherical correcting lens. One of the most famous such telescopes, the 48-inch Palomar Schmidt, covered the entire northern sky using thousands of 14-inch photographic plates and provided an invaluable discovery tool for astronomers.
In the 1970s, the advent of automated plate-scanning machines meant that the first digital surveys of the sky were in fact made with photographic plates, by scanning them all! Similar southern-sky surveys were made by the UK Schmidt Telescope in Australia, and such surveys were only surpassed by the arrival of true digital sky surveys like the SDSS. The SDSS, incidentally, bypasses the need for extremely wide-field optics by cleverly reading out its camera at exactly the rate of the Earth’s rotation, but that’s another story.
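As a brief aside, the arithmetic behind that “drift scan” trick is simple enough to sketch. The numbers below are only illustrative (a pixel scale of roughly 0.4 arcseconds per pixel, with the sky drifting past at the sidereal rate), not the exact SDSS figures:

```python
import math

# Toy estimate of the row-transfer rate for a drift-scan (TDI) camera:
# charge must be clocked down the CCD at the same rate the sky drifts by.
SIDEREAL_RATE_ARCSEC_PER_S = 15.04   # sky drift at the celestial equator
PIXEL_SCALE_ARCSEC = 0.4             # assumed pixel scale, roughly SDSS-like

def rows_per_second(declination_deg: float) -> float:
    """Rate at which rows must be read out so stars stay on their pixels."""
    drift = SIDEREAL_RATE_ARCSEC_PER_S * math.cos(math.radians(declination_deg))
    return drift / PIXEL_SCALE_ARCSEC

print(f"{rows_per_second(0.0):.0f} rows per second on the celestial equator")
print(f"{rows_per_second(30.0):.0f} rows per second at a declination of 30 degrees")
```

A few tens of rows per second, continuously, while the telescope sits still: no wide-field corrector required.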
However, a significant disadvantage of Schmidt telescopes is that the focal plane — where the image is recorded — is curved, making them impractical for use with flat electronic detectors unless heavy corrective optics are installed. (Yes, I grudgingly admit that photographic plates are not practical in a space observatory, although amazingly there were once spy satellites which used film.)
In addition, don’t forget that one key requirement for Euclid is not simply to measure the positions, brightnesses and distances of all objects, but also their shapes. For faint galaxies seen through ground-based telescopes, object shapes are dominated by atmospheric effects: for exposures longer than a few seconds, object light profiles are smeared out, severely limiting our ability to extract useful information.
For these reasons, Euclid is in space: above the atmosphere, the telescope’s shape-measuring capabilities are limited only by the satellite’s optics and detectors. This is also why telescope designs which might have been fine for an instrument lying at the bottom of the murky soup of Earth’s atmosphere are simply not good enough for space. Essentially, we need a design which preserves as much information as possible about the intrinsic light profile of objects. We also need to be able to calculate how much this light profile has been distorted by the telescope and detector optics – usually this is done by observing perfect point-like sources, and in astronomical terms stars fit this bill very well. But which telescope design fits the bill?
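To get a feel for why being above the atmosphere matters so much, here is a toy calculation. It rests on the crude assumption that both the galaxy profile and the blur are Gaussian, so that their sizes add in quadrature; the numbers are purely illustrative, not Euclid specifications:

```python
import math

# Toy model: a Gaussian galaxy blurred by a Gaussian PSF, so sizes (FWHM)
# add in quadrature. All numbers below are illustrative.
def observed_fwhm(intrinsic_fwhm: float, psf_fwhm: float) -> float:
    """Apparent size of a Gaussian source seen through a Gaussian blur."""
    return math.hypot(intrinsic_fwhm, psf_fwhm)

galaxy = 0.5  # arcseconds: a small, faint galaxy
print(f'ground (0.9" seeing): observed size {observed_fwhm(galaxy, 0.9):.2f}"')
print(f'space  (0.2" optics): observed size {observed_fwhm(galaxy, 0.2):.2f}"')
```

Through typical ground-based seeing the observed shape is dominated by the blur; with a sharp, stable point spread function measured from stars, almost all of the intrinsic shape information survives.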
Automated ray-tracing revolutionized telescope design in the second half of the 20th century. Without having to cut any glass, computer programs could calculate the optical performance of a telescope before it was even built. In a series of papers, and at length in a classic book, the optical engineer Dietrich Korsch used these new techniques to perfect a compact three-mirror design with the great advantages of a wide field of view, few optical surfaces, and almost no aberrations over the entire field of view. It’s worth mentioning that knowing the optical performance of Euclid requires an intimate knowledge of the optical properties of all its surfaces; for this reason, ground testing and qualification of every component are an important part of verifying the Euclid optical design.
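To give a flavour of what those programs compute, here is a minimal paraxial ray trace. To be clear, this is not the Korsch prescription and nothing like Euclid’s actual optics (those need three aspheric mirrors and full ray tracing); the focal lengths and spacing below are invented numbers for a generic two-mirror layout:

```python
# Toy paraxial ("thin mirror") ray trace of a generic two-mirror layout.
# Illustrative numbers only; real design work traces thousands of rays.
def reflect(y, u, f):
    """Thin mirror of focal length f: height unchanged, angle bent by -y/f."""
    return y, u - y / f

def propagate(y, u, d):
    """Free propagation over a distance d along the unfolded optical axis."""
    return y + d * u, u

y, u = 1.0, 0.0                  # a ray entering parallel to the axis, at unit height
y, u = reflect(y, u, f=2.0)      # primary mirror: 2 m focal length (assumed)
y, u = propagate(y, u, d=1.6)    # travel 1.6 m to the secondary (assumed)
y, u = reflect(y, u, f=-0.5)     # convex secondary: -0.5 m focal length (assumed)

print(f"effective focal length ~ {-1.0 / u:.1f} m")      # 10.0 m
print(f"focus lies {-y / u:.1f} m behind the secondary")  # 2.0 m
```

A few lines of arithmetic already tell you where the focus lands and how long the effective focal length is; the hard part, and what the Korsch design solves so elegantly, is keeping the image sharp across a wide field, which demands full ray tracing over every field angle and wavelength.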
But an excellent optical design is not enough if it is not stable: image quality must be maintained during normal operations, so thermal expansion and contraction must be minimised. Euclid features a silicon-carbide baseplate on which the telescope and all the instruments are mounted. Silicon carbide has the very useful property that it expands and contracts very little with changes in ambient temperature, meaning that the path length of the whole telescope can be rigidly controlled. The baseplate is actually created in a mould from particles of silicon carbide which are fused together under pressure.
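How much does this buy you? A rough comparison of the path-length change for a one-metre structure that drifts by one kelvin, using approximate room-temperature expansion coefficients (the real values depend on the material grade and the operating temperature):

```python
# Rough comparison of delta_L = alpha * L * delta_T for a 1 m optical bench
# drifting by 1 K. Coefficients are approximate room-temperature values.
COEFFS_PER_K = {
    "aluminium":       23e-6,
    "silicon carbide":  3e-6,
}
L = 1.0        # metres of optical path on the baseplate
DELTA_T = 1.0  # kelvin of thermal drift

for material, alpha in COEFFS_PER_K.items():
    delta_L_nm = alpha * L * DELTA_T * 1e9
    print(f"{material:>15}: {delta_L_nm:6.0f} nm of path-length change")
```

Several times less movement for the same temperature drift than a conventional metal structure, before even counting silicon carbide’s stiffness and high thermal conductivity.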
What’s the best place to put Euclid? At first one might think: well, in orbit around the Earth, right? It turns out that a low-Earth orbit is a surprisingly hostile environment. In addition to constant sunrises and sunsets, there are also bands of nasty charged particles, in particular in the region called the South Atlantic Anomaly. The Hubble Space Telescope is in such a low orbit, and the unfortunate consequence is that there are “blackout” periods during which no observations can be made. In addition, there is a not inconsiderable amount of background light.
There is a much better place to put a space observatory — the second Sun-Earth Lagrangian point, or L2, shown below. Here, a satellite’s location can be maintained with only a minimal expenditure of propellant: at L2, the combined gravitational pull of the Sun and the Earth provides just the force needed to keep a satellite orbiting the Sun in step with the Earth. Moreover, an object there maintains an approximately constant distance from the Earth, making it ideal for high-bandwidth satellite communications: Euclid will need to send a lot of data back to Earth. In this orbit it is also possible to rigorously control the angle at which the Sun’s rays fall on the telescope’s sunshield, which is essential for maintaining the optical stability of the telescope. These factors have made L2 one of the best locations in the solar system to place an observatory, and many future telescopes will be placed there. The Planck and Herschel satellites clearly demonstrated all these benefits.
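How far away is L2? A back-of-the-envelope estimate follows from balancing gravity against the orbital motion; the standard approximation puts it at about one hundredth of the Sun-Earth distance:

```python
# Back-of-the-envelope distance from the Earth to L2, using the
# Hill-sphere-style approximation r ~ a * (M_earth / (3 * M_sun))**(1/3).
A_SUN_EARTH_KM = 1.496e8   # Sun-Earth distance in km
MASS_RATIO = 3.0e-6        # M_earth / M_sun, approximately

r_l2_km = A_SUN_EARTH_KM * (MASS_RATIO / 3.0) ** (1.0 / 3.0)
print(f"L2 lies roughly {r_l2_km / 1e6:.1f} million km from Earth")  # ~1.5
```

That is about 1.5 million kilometres, roughly four times further away than the Moon, which is also why Hubble-style servicing missions are not an option out there.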
With this unique telescope design, capable of taking a high-resolution one-degree image of the sky with each exposure, Euclid’s primary mission will be completed in around six years of observations. No satellite with such a combination of telescope and instruments has ever flown before, and Euclid’s images of our Universe will be one of the most lasting legacies of the mission.
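Some very rough survey arithmetic shows what this implies. The numbers below are illustrative assumptions on my part (a wide survey of around 15,000 square degrees, with roughly half a square degree of sky recorded per pointing), not the official observing plan:

```python
# Illustrative survey arithmetic; assumed numbers, not the official plan.
SURVEY_AREA_DEG2 = 15_000     # assumed wide-survey area
FIELD_OF_VIEW_DEG2 = 0.5      # assumed sky area per pointing
MISSION_YEARS = 6

pointings = SURVEY_AREA_DEG2 / FIELD_OF_VIEW_DEG2
per_day = pointings / (MISSION_YEARS * 365)
print(f"{pointings:.0f} pointings, i.e. about {per_day:.0f} fields per day")
```

Of order ten fields per day, every day, for six years: this is why the stability of the telescope and the efficiency of the survey strategy matter so much.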
This article is one of a series of articles which will be appearing on the official blog of the Euclid Consortium before the end of the year!
Making discoveries: planning the Euclid space mission
Let’s start with some philosophy. Where does new knowledge come from? Well, from doing experiments, and comparing the results of those experiments with ideas — hypotheses — concerning physical laws. This works: technology created from knowledge gained this way has transformed the world.
However, as our knowledge of the Universe increases, each new experiment becomes more complicated, harder to do and more expensive. It has to, because each new hypothesis must also explain all the previous experiments. In astronomy, technology enables new voyages into some unknown part of “parameter space”, which in turn lead to ever more stringent tests of our hypotheses concerning how the world works. These experiments allow us to take a good long look at something fainter, faster, further away: something which was undetectable before but is detectable now.
Space missions are really different from traditional science experiments. For one, the margin for error is minuscule, and errors are generally catastrophic, although there have been a few happy counter-examples. What this means is that a careful web of “requirements” must be written before launch: the influence of every aspect of the mission on the final scientific goal is estimated, together with the likely margin of error.
So here’s the paradox: how do you build a vastly complicated experiment which is supposed to find out something new and be certain that it will work? How do you make sure that you have covered all the bases, that you have thought of everything, and still leave open the possibility of discovery? Even harder, how do you persuade someone to give you a big chunk of change to do it? The answer is a kind of weird mixture of psychology and book-keeping. So first the (conceptually) straightforward bit: the book-keeping, which comes from trying to carefully chart all the tiny effects which will perturb your final measurement. This is actually notoriously difficult.
There are many celebrated examples of this kind of thing which didn’t quite work out from the annals of astronomy missions. After the launch of the Gaia satellite, astronomers were dismayed to find that there was an unknown source of background light. It turned out that this came from sunlight scattering off fibres sticking out from the sun-shield, which nobody had thought about before. Some changes to the instrument configuration and processing have helped mitigate this problem.
An even more epic example is the Gravity Probe B experiment, designed to make a stringent test of General Relativity. This experiment featured the smoothest metal balls ever produced. Planning, launching and analysing data from this satellite took almost half a century (work started in 1963, and the satellite survived multiple threatened cancellations). The objective was to measure how relativistic effects changed the rotation of these metal balls. After an enormous amount of work analysing data from the satellite, featuring some very smart people indeed, a result was announced, confirming General Relativity — but with errors around an order of magnitude larger than expected (Cliff Will has an excellent write-up here). Despite decades of work, three sources of error had been missed, the most important of which was stray patches of static electricity on the balls’ surfaces, which exploded the final error budget. In the case of both Gaia and Gravity Probe B the missions were successful overall, but unknown sources of error were not entirely accounted for in mission planning.
Part of the Euclid challenge is to make the most accurate measurement of galaxy shapes of all time. Light rays passing through the Universe are very slightly perturbed by the presence of dark matter, and if two rays pass close to the same clump of dark matter, they are perturbed in the same way. Euclid aims to measure this signal on the “cosmic wallpaper” of very faint, distant background galaxies. These galaxies are effectively a big sheet of cosmic graph paper: by measuring how this correlated alignment depends on the distance between galaxies, you can find out about the underlying cosmology of the Universe.
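In statistical terms, the measurement boils down to correlating the shapes of pairs of galaxies as a function of their separation. Here is a deliberately crude cartoon of that idea; a real lensing analysis uses tangential and cross components, weights and far more careful estimators, and none of the names or numbers below come from Euclid:

```python
import numpy as np

def toy_shape_correlation(x, y, e1, e2, bins):
    """Very crude correlation of galaxy ellipticities versus pair separation.
    Only meant to show the idea of binning products of shapes by distance."""
    n = len(x)
    seps, prods = [], []
    for i in range(n):
        for j in range(i + 1, n):
            seps.append(np.hypot(x[i] - x[j], y[i] - y[j]))
            prods.append(e1[i] * e1[j] + e2[i] * e2[j])
    seps, prods = np.array(seps), np.array(prods)
    idx = np.digitize(seps, bins)
    return np.array([prods[idx == k].mean() if np.any(idx == k) else np.nan
                     for k in range(1, len(bins))])

# Pure noise in, zero signal out (on average): with no lensing, the shapes
# of different galaxies are uncorrelated.
rng = np.random.default_rng(1)
n = 300
xi = toy_shape_correlation(rng.uniform(0, 1, n), rng.uniform(0, 1, n),
                           rng.normal(0, 0.3, n), rng.normal(0, 0.3, n),
                           bins=np.linspace(0, 1.0, 6))
print(xi)  # values scattered around zero
```

With no lensing, as in this random example, the binned values scatter around zero; the cosmological signal shows up as a tiny, separation-dependent excess sitting on top of that noise.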
So how do you do this? The problem is that for an individual galaxy the effect is undetectable: millions of sources must be measured and combined, and instrumental effects can completely submerge the very weak cosmological signal. We need to know what these effects are, and correct for them. Some are conceptually straightforward: the telescope optics themselves slightly deform every galaxy image. Or maybe, as the camera is read out, electric charge falls into holes in the detector silicon (drilled by passing charged particles) and gets trailed out; this, annoyingly, also changes galaxy shapes. Even worse: imagine that galaxies are not really randomly orientated on the sky, but line up because that’s how they were made back in the mists of cosmic time. You need to find some way to measure that signal and subtract it from the one coming from dark matter. In general, the smaller the effect you want to measure, the more care you need to take. This is all the more important today, when in general the limiting factor is not how many things you have to measure (as it was before) but how well you can measure them. In the end, your only hope is to try to list every effect and leave enough margin so that if anything goes wrong, if you miss anything, the mission is not compromised.
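What does “measuring a shape” actually mean in practice? The textbook starting point is the flux-weighted second moments of the image, from which two ellipticity components are formed. The sketch below shows only that bare idea; it is nothing like a production pipeline, which has to deal with noise, neighbouring objects, pixelation and, above all, correction for the point spread function:

```python
import numpy as np

def ellipticity(image):
    """Ellipticity from flux-weighted second moments (the textbook definition;
    real pipelines add weighting, noise handling and PSF correction)."""
    ny, nx = image.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    flux = image.sum()
    xc, yc = (image * xx).sum() / flux, (image * yy).sum() / flux
    qxx = (image * (xx - xc) ** 2).sum() / flux
    qyy = (image * (yy - yc) ** 2).sum() / flux
    qxy = (image * (xx - xc) * (yy - yc)).sum() / flux
    return (qxx - qyy) / (qxx + qyy), 2.0 * qxy / (qxx + qyy)

# An elliptical Gaussian stretched along x: e1 should be positive, e2 near zero.
yy, xx = np.mgrid[0:64, 0:64]
galaxy = np.exp(-(((xx - 32) / 6.0) ** 2 + ((yy - 32) / 4.0) ** 2) / 2.0)
print(ellipticity(galaxy))  # roughly (0.38, 0.0)
```

For this stretched toy galaxy the first component comes out around 0.4; the lensing distortions Euclid is after change such numbers only at the percent level or below, which is exactly why every instrumental effect listed above matters so much.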
After the accountancy, the psychology, or more particularly cognitive biases. For example, “strong gravitational lensing” – where background galaxies are visibly deformed by dark matter in massive objects like galaxy clusters – had already been recorded on photographic plates well before it was “discovered” in electronic images in the 1980s. People were simply not expecting it, and in any case those distorted galaxies on photographic plates looked too much like defects and were ignored.
So how do you plan an experiment to derive cosmological parameters without including some cosmological parameters in your analysis? After dodging all the bullets of unknown systematic errors, how do you make sure you haven’t introduced an unknown bias which comes simply from people having preconceived ideas about how the universe should be? The answer must come from designing an analysis with as few unwarranted assumptions as possible and, if any have to be made, hiding them from the researchers doing the sums.
The recent story of the discovery of gravitational waves provides a fine example. Most scientists didn’t know until very late on whether the signal they were dealing with was real or a simulation: simulated signals had been routinely injected into the data by colleagues wanting to make sure everything was working (this was how the analysis had been tested all along). For Euclid, that would be the “Matrix” solution: most astronomers wouldn’t know whether the data under analysis were real or simulated after some secret sleight-of-hand early on. But making a realistic simulation of the whole Universe as seen by the satellite would be, to say the least, very challenging. More realistically, this test might happen later on, with catalogues of objects being shuffled around so that only a few people would know which one corresponded to the real Universe. Like drug trials, but with galaxies.
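In code, the catalogue-shuffling version of that idea could look something like the sketch below. Every name and number here is an invented placeholder, just to show the bookkeeping: analysts see only anonymous labels, while the mapping back to the real sky is locked away until the analysis is frozen.

```python
import random

# A cartoon of the "drug-trial" blinding described above. All names and
# numbers are invented placeholders, purely to show the bookkeeping.
catalogues = {
    "real_sky": [0.21, -0.05, 0.13],  # shapes measured from the actual survey
    "mock_a":   [0.18,  0.02, 0.09],  # shapes drawn from one simulation
    "mock_b":   [0.25, -0.11, 0.07],  # shapes drawn from another simulation
}

rng = random.Random(20231201)   # the seed acts as the secret key
names = list(catalogues)
rng.shuffle(names)

blinded = {f"catalogue_{i}": catalogues[name] for i, name in enumerate(names)}
key = {f"catalogue_{i}": name for i, name in enumerate(names)}  # locked away

# Analysts work only on `blinded`; `key` is revealed once the analysis is frozen.
print(list(blinded))  # ['catalogue_0', 'catalogue_1', 'catalogue_2']
```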
In the end, you can’t plan for the unexpected because, well, it’s unexpected. But you can at least try to prepare for it. You have to, if you want your results to stand up to the scrutiny of peer-review and make that new discovery about how the universe really works.
21st-century Insoluble Pancakes: dark matter, dark energy and how we know what we know
These days, we know far more about the origin, nature and fate of the Universe than at any time in history. Justification? Well, any good description of the Universe has to provide both a framework for understanding what has happened in the past and predictions for what will happen in the future. It should, more than anything, be consistent with observations. During the past few centuries, the quantity and quality of the observations we have made have vastly increased. Applying our new-found knowledge of physics, we have constructed new instruments, and these have allowed us to probe the contents of the Universe right back to the “last scattering surface”, the brick wall beyond which no photon can reach us.
But there is a problem…
But there is a problem. Our current best cosmological model, the one which matches most observations, happens to contain two substances whose precise nature is still somewhat, shall we say, uncertain. This model is called “Lambda CDM”, which means that it has Lambda, “dark energy”, and CDM, which stands for “cold dark matter”. Perhaps that should be written with commas, as in cold, dark, matter? In any case, these two substances, dark matter and dark energy, according to this “standard model”, account for most of the energy content of the Universe; ordinary material is just the few percent left over. Needless to say, intellectually this is not a satisfactory state of affairs.
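To put rough numbers on “most”, here are rounded values of the kind reported by the Planck mission (the exact figures depend on the analysis):

```python
# Approximate present-day energy budget of the Universe (rounded values of
# the kind reported by the Planck mission; exact numbers vary by analysis).
budget = {"dark energy (Lambda)": 0.69,
          "cold dark matter":     0.26,
          "ordinary matter":      0.05}

for component, fraction in budget.items():
    print(f"{component:>22}: {fraction:5.0%}")
print(f"{'total':>22}: {sum(budget.values()):5.0%}")  # ~100%
```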
Worse yet, this “standard model” has proved surprisingly robust. Data from the last big cosmology mission, Planck, analysed in part by my colleagues at the IAP, provided a final data set which seems to be in almost perfect agreement with the predictions of the standard model. There is just a hint of a discrepancy with a few measurements at lower redshifts from separate experiments, which could well be explained by imperfect astrophysical, rather than cosmological, knowledge. At the same time, many teams have spent the best part of the last few decades trying to directly detect dark matter particles. Other than the mysterious DAMA/LIBRA result, which shows an oscillating signal of who-knows-what, there has been no hint of a dark matter particle, and the range of particle masses not yet excluded by these experiments is getting smaller and smaller. In particle accelerators like the Large Hadron Collider, no evidence has been found for the kinds of particles that dark matter is supposed to be made of (although the allowed mass ranges have been narrowed).
This situation has certainly penetrated the popular consciousness: many people are aware that there is some “dark stuff” and that nobody knows what it is. But this is only half true. In fact, the characteristics of dark matter are known rather well, because it must have precisely those properties for the fabulously successful standard model to match most of the observations.
Now, notice that I said most of the observations. There is a selection of problematic data which may or may not be in agreement with our cosmological theory. Here’s the thing, though: any theory which purports to explain the discrepant observations has to explain not only those discrepant observations but everything else as well. That’s hard. Maybe there is no dark matter at all? Maybe it’s like Ptolemy’s “epicycles”, a complicated construct masking an underlying simpler truth? Or maybe gravity works differently on large scales?
The answer is…
So a few days ago on Facebook, buried amongst the cat videos, I came across this article by David Merritt, which promised to be a philosophical attack on Lambda-CDM. I had high hopes, but on reading the paper in more detail I don’t think it delivers a knockout blow to Lambda-CDM. Merritt characterises dark matter and dark energy as “conventionalist strategies”, a term borrowed from the great philosopher of science Karl Popper. This is bad: Popper explains that hypotheses added to a theory which do not increase its degree of falsifiability are conventionalist. I have to say, I adhere strongly to Popper’s ideas: if you cannot prove a theory wrong by observations, then it is not a real scientific theory. These “conventionalist strategies” are “sticking plasters” added to an existing theory when it should in fact be discarded.
Merritt also argues that a large number of the difficult and as yet unresolved problems in the standard model (many dynamical in nature) have been ignored by textbook writers. He provides an extensive hit-list of cosmology textbooks and notes whether or not each discusses his three named problems. Mostly, they do not. But is this a problem?
It seems to me that Lambda-CDM has been very successful given our ignorance of its constituents. The weight of observations consistent with the theory is large, and no other explanation has been proposed which agrees as well with all these data. I suspect many of the problems on Merritt’s list may simply be resolved by a better understanding of how normal matter interacts with dark matter: this is a very complicated process, and can probably only be tackled numerically using very large computer simulations.
I am not saying that the current situation is satisfactory. I think simply that the hypothesis of dark matter and dark energy is more palatable than, for instance, arbitrary modifications of general relativity. Should we really discard Lambda-CDM for such a theory? Merritt argues that dark matter and dark energy are unverifiable hypotheses, but surely a modification to general relativity without any theoretical motivation is worse? That said, there are theories of modified gravity with more robust origins. But as I said, we must not forget that theories must also match all the existing observations, including the discrepant ones!
This century’s insoluble pancake…
If Flann O’Brien were around today, I am sure he would have had a lot of fun with these ideas. After all, O’Brien’s philosopher-scientist De Selby claimed that night was an accumulation of black air … but did he mean dark matter?
O’Brien was writing at a time when the strange ideas of quantum mechanics were slowly becoming common currency; Schrödinger lived in Dublin at the same time as O’Brien. O’Brien was keen to show how our modern conception of the Universe could sometimes lead to troubling conclusions. “Spooky action at a distance”, Einstein’s description of quantum mechanics, led to O’Brien’s rural police station with a direct link to eternity. And today, what will dark matter and dark energy lead to?