
Surveying the Universe from space

The Euclid space mission (which I have already written about here) plans to stringently test our cosmological model by precisely measuring the shapes and positions of a billion faint galaxies. But you are not going to take pictures of each galaxy individually; obviously, that would be too slow! You need to do a survey, which means that with each exposure you want to capture the largest possible area of the sky. So you need a survey telescope, and because you have a limited budget you need to be able to launch it on the smallest possible rocket. How do you do this? That is what I am going to write about here.

Designing survey telescopes has always been challenging. Photographic plates made it possible, for the first time, to measure the positions and brightnesses of large numbers of objects, but the field of view (the amount of sky you see at once) of early telescopes was comparatively small. The search soon started in earnest for a telescope which would allow astronomers to survey the sky even more rapidly. The Schmidt telescope design comprises a spherical mirror paired with an aspherical correcting lens. One of the most famous such telescopes, the 48-inch Palomar Schmidt, covered the entire northern sky using thousands of 14-inch photographic plates and provided an invaluable discovery tool for astronomers.

In the 1970s, the advent of automated plate-scanning machines meant that the first digital surveys of the sky were in fact made from photographic plates, by scanning them all! Similar southern-sky surveys were made with the UK Schmidt Telescope in Australia, and such surveys were only surpassed by the arrival of true digital sky surveys like the SDSS. The SDSS, incidentally, bypasses the need for extremely wide-field optics by cleverly reading out its camera at exactly the rate of the Earth's rotation, but that's another story.
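For the curious, here is a back-of-the-envelope sketch of that drift-scan trick: the CCD rows are clocked at the same rate at which the sky drifts across the detector, so the collected charge "follows" each star. The pixel scale below is an assumed, roughly SDSS-like value, quoted only for illustration:

```python
# Back-of-the-envelope drift-scan calculation (all numbers illustrative).
import math

SIDEREAL_RATE = 15.04   # arcsec/s of apparent sky motion at the celestial equator
PIXEL_SCALE = 0.4       # arcsec per pixel; an assumed, roughly SDSS-like value

def row_clock_rate(dec_deg):
    """Rows per second the CCD must be clocked at declination dec_deg."""
    sky_rate = SIDEREAL_RATE * math.cos(math.radians(dec_deg))
    return sky_rate / PIXEL_SCALE

print(f"{row_clock_rate(0.0):.1f} rows/s on the celestial equator")
print(f"{row_clock_rate(40.0):.1f} rows/s at declination +40 degrees")
```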

The UK Schmidt telescope, built in 1973, is a wide-field telescope, but much too large to be sent into space! (c) AAO

However, a significant disadvantage of Schmidt telescopes is that the focal plane (where the image is recorded) is curved, making them impractical for use with flat electronic detectors unless heavy corrective optics are installed. (Yes, I grudgingly admit that photographic plates are not practical in a space observatory, although amazingly there were once spy satellites which used film.)

In addition, don't forget that one key requirement for Euclid is not simply to measure the positions, brightnesses and distances of all these objects but also their shapes. For faint galaxies seen through ground-based telescopes, object shapes are dominated by atmospheric effects: for exposures longer than a few seconds, the light profiles of objects are smeared out, severely limiting our ability to extract useful information.

For these reasons, Euclid is in space: above the atmosphere, the telescope's shape-measuring capabilities are limited only by the satellite's optics and detectors. This is also why telescope designs which might have been fine for an instrument sitting at the bottom of the murky soup of Earth's atmosphere are simply not good enough for space. Essentially, we need a design which preserves as much information as possible concerning the intrinsic light profile of objects. We also need to be able to calculate how much this light profile has been distorted by the telescope optics and detectors; usually this is done by observing essentially perfect point-like sources, and in astronomical terms stars fit this bill very well. But which design meets all these demands?
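Before answering, here is a toy sketch of how that point-source trick works in practice: stack images of isolated stars to build an empirical model of the telescope's point spread function. The function and array names are hypothetical, and the real Euclid pipeline models the PSF far more carefully (it varies across the field, with colour and with time):

```python
import numpy as np

def estimate_psf(star_cutouts):
    """Build a crude point-spread-function model by stacking star images.

    star_cutouts: a list of small 2-D arrays of identical size, each centred
    on an isolated, unsaturated star.
    """
    stack = []
    for cutout in star_cutouts:
        cutout = cutout - np.median(cutout)   # crude background subtraction
        flux = cutout.sum()
        if flux > 0:
            stack.append(cutout / flux)       # normalise each star to unit flux
    return np.mean(stack, axis=0)             # the average is the empirical PSF
```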

Automated ray-tracing revolutionized telescope design in the second half of the 20th century. Without having to cut any glass, computer programs could calculate the optical performance of a telescope before it was even built. In a series of papers, and at length in a classic book, the optical engineer Dietrich Korsch used these new techniques to perfect a compact three-mirror design with a wide field of view, few optical surfaces, and almost no aberrations across the whole field. It's worth mentioning that knowing the optical performance of Euclid requires an intimate knowledge of the optical properties of every surface; for this reason, ground testing and qualification of all the components are an important part of verifying the Euclid optical design.
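To give a flavour of what ray-tracing programs actually compute, here is a minimal paraxial sketch using "ABCD" transfer matrices for a two-mirror layout. The mirror radii and spacing are invented purely for illustration and have nothing to do with the real Korsch prescription for Euclid:

```python
import numpy as np

# Each mirror is treated as a thin element of focal length R/2 in an unfolded layout.

def mirror(R):
    """Reflection from a mirror with radius of curvature R (focal length R/2)."""
    return np.array([[1.0, 0.0], [-2.0 / R, 1.0]])

def gap(d):
    """Free propagation over a distance d."""
    return np.array([[1.0, d], [0.0, 1.0]])

# Light hits the primary first; matrices compose right-to-left along the path.
primary = mirror(2.0)      # 2 m radius of curvature -> 1 m focal length
secondary = mirror(0.5)    # 0.5 m radius of curvature
system = secondary @ gap(0.8) @ primary

# For a system matrix [[A, B], [C, D]], the effective focal length is -1/C.
print(f"Effective focal length: {-1.0 / system[1, 0]:.2f} m")
```

Real design work traces huge numbers of rays through every surface, at many field angles and wavelengths, which is exactly the kind of bookkeeping computers made painless.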

The optical path of the Euclid survey telescope. (c) ESA

But an excellent optical design is not enough if it is not stable and image quality cannot be maintained during normal operations. So thermal expansion and contraction must be minimised. Euclid features a silicon carbide baseplate on which the telescope and all the instruments are mounted. Silicon carbide expands and contracts very little with changes in ambient temperature, meaning that the optical path length of the whole telescope can be rigidly controlled. The baseplate is actually created from a mould of silicon carbide particles which are stuck together under pressure.
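To see why the choice of material matters so much, compare the expansion of a one-metre structure for a one-kelvin temperature change. The coefficients below are approximate room-temperature textbook values, quoted only for illustration:

```python
# Thermal expansion: delta_L = alpha * L * delta_T (approximate coefficients).

MATERIALS = {
    "aluminium":       23e-6,   # per kelvin
    "silicon carbide":  4e-6,   # per kelvin (lower still at Euclid's operating temperature)
}

L = 1.0    # metres
dT = 1.0   # kelvin

for name, alpha in MATERIALS.items():
    print(f"{name:16s}: {alpha * L * dT * 1e6:5.1f} micrometres")
```

A few micrometres may not sound like much, but it is exactly the kind of effect that has to be tracked when image quality is budgeted in fractions of a pixel.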

What's the best place to put Euclid? At first one might think: well, in orbit around the Earth, right? It turns out that a low-Earth orbit is a surprisingly hostile environment. In addition to constant sunrises and sunsets, there are also bands of nasty charged particles, in particular in the region called the South Atlantic Anomaly. The Hubble Space Telescope is in such a low orbit, and the unfortunate consequence is that there are “blackout” periods during which no observations can be made. In addition, there is a not inconsiderable amount of background light.

L2 orbit of Euclid

There is a much better place to put a space observatory: the second Sun-Earth Lagrangian point, or L2. Here, a satellite's position can be maintained with only a minimal expenditure of propellant, because the gravitational pull of the Sun and the Earth almost perfectly balances the inertial forces of the orbit. Moreover, an object at L2 stays at an approximately constant distance from the Earth, making it ideal for high-bandwidth communications: Euclid will need to send a lot of data to Earth. In this orbit it will also be possible to rigorously control the angle at which the sun's rays fall on the telescope's sunshield, which is essential for maintaining the optical stability of the telescope. These factors have made L2 one of the best locations in the solar system for an observatory, and many future telescopes will be placed there; the Planck and Herschel satellites clearly demonstrated all the benefits of this.
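For the curious, the distance to L2 can be estimated from a lowest-order balance between gravity and the centripetal acceleration needed to orbit the Sun in lockstep with the Earth; it reduces to the little calculation below, using standard values for the masses and the Earth-Sun distance:

```python
# Lowest-order estimate of the Earth-L2 distance:
# r ~ R * (M_earth / (3 * M_sun))**(1/3), with R the Earth-Sun distance.

M_SUN = 1.989e30       # kg
M_EARTH = 5.972e24     # kg
AU = 1.496e11          # metres

r_L2 = AU * (M_EARTH / (3.0 * M_SUN)) ** (1.0 / 3.0)
print(f"L2 lies roughly {r_L2 / 1e9:.1f} million km beyond the Earth")
```

That is about 1.5 million km, roughly four times further away than the Moon, yet still close enough for a comfortable data link back to Earth.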

With this unique telescope design, capable of taking a high-resolution one-degree image of the sky with each exposure, Euclid's primary mission will be completed in around six years of observations. No satellite has ever flown with such a combination of telescope and instruments, and Euclid's images of our Universe will be one of the most lasting legacies of the mission.
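For those who like to check such numbers, here is a very rough pointing budget. The survey area and time per field are assumptions made up for illustration, not official mission figures:

```python
# Very rough pointing budget for a wide imaging survey (all values assumed).

survey_area_deg2 = 15_000        # assumed wide-survey area, square degrees
field_of_view_deg2 = 0.5         # roughly "one degree across" per exposure
time_per_pointing_hr = 1.0       # assumed exposures + dithers + slew per field

pointings = survey_area_deg2 / field_of_view_deg2
years = pointings * time_per_pointing_hr / (24 * 365)
print(f"{pointings:.0f} pointings, of order {years:.1f} years of pure observing")
```

Add calibration observations, deep fields, repeat visits and margin for the unexpected, and a few years of pure observing stretches naturally towards the quoted six.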

This article is one of a series of articles which will be appearing on the official blog of the Euclid Consortium before the end of the year!

Making discoveries: planning the Euclid space mission

Let’s start with some philosophy. Where does new knowledge come from? Well, from doing experiments, and comparing the results of those experiments with ideas — hypotheses — concerning physical laws. This works: technology created from knowledge gained this way has transformed the world.

However, as our knowledge of the Universe increases, each new experiment becomes more complicated, harder to do and more expensive. It has to, because each new hypothesis must also explain all the previous experiments. In astronomy, technology enables new voyages into some unknown part of “parameter space”, which in turn lead to ever more stringent tests of our hypotheses concerning how the world works. These experiments allow us to take a good long look at something fainter, faster, further away. Something which was undetectable before but is detectable now.

This telescope (at the Observatoire de Haute-Provence) discovered the first planet outside our solar system

Space missions are really different from traditional science experiments. For one, the margin of error is minuscule and errors are generally catastrophic, although there are a few happy counter-examples. What this means is that a careful web of “requirements” must be written before launch: the influence of every aspect of the mission on the final scientific goal is estimated, together with the likely margin of error.

So here's the paradox: how do you build a vastly complicated experiment which is supposed to find out something new and be certain that it will work? How do you make sure that you have covered all the bases, that you have thought of everything, and still leave open the possibility of discovery? Even harder, how do you persuade someone to give you a big chunk of change to do it? The answer is a weird mixture of psychology and book-keeping. First, the (conceptually) straightforward bit: the book-keeping, which consists of trying to carefully chart all the tiny effects which will perturb your final measurement. This is notoriously difficult.

Did we think of everything? (iStock)

The annals of astronomy missions contain many celebrated examples of this kind of thing not quite working out. After the launch of the Gaia satellite, astronomers were dismayed to find an unknown source of background light. It turned out to come from sunlight scattering off fibres sticking out from the sunshield, which nobody had thought about before. Changes to the instrument configuration and to the processing have helped mitigate the problem.

An even more epic example is the Gravity Probe B experiment, designed to make a stringent test of General Relativity using the smoothest metal balls ever produced. Planning, launching and analysing data from this satellite took almost half a century (work started in 1963, and the project survived multiple threatened cancellations). The objective was to measure how relativistic effects changed the rotation of these balls. After an enormous amount of work analysing the data, by some very smart people indeed, a result was announced confirming General Relativity, but with errors around an order of magnitude larger than expected (Cliff Will has an excellent write-up here). Despite decades of work, three sources of error had been missed, the most important being stray patches of static electricity on the balls' surfaces, which exploded the final error budget. In the case of both Gaia and Gravity Probe B the missions were successful overall, but unknown sources of error had not been entirely accounted for in the mission planning.

A Gravity Probe B gyro rotor and housing (NASA)

Part of the Euclid challenge is to make the most accurate measurement of galaxy shapes ever. Light rays passing through the Universe are very slightly perturbed by the presence of dark matter, and if two rays pass next to the same bit of dark matter they are perturbed in the same way. Euclid aims to measure this signal on the “cosmic wallpaper” of very faint, distant background galaxies. These galaxies are effectively a big sheet of cosmic graph paper: by measuring how this correlated alignment depends on the distance between the galaxies, you can find out about the underlying cosmology of the Universe.
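In very crude terms, the measurement boils down to averaging products of galaxy ellipticities over pairs of galaxies as a function of their separation. Here is a deliberately naive sketch with invented variable names; real analyses use optimised pair-counting codes, per-galaxy weights and tomographic distance bins:

```python
import numpy as np

def ellipticity_correlation(x, y, e1, e2, bins):
    """Toy two-point correlation of galaxy ellipticities versus separation.

    x, y   : galaxy positions (degrees)
    e1, e2 : the two ellipticity components of each galaxy
    bins   : separation bin edges (degrees)
    The accumulated quantity is (roughly) the xi_plus statistic of weak lensing.
    """
    sums = np.zeros(len(bins) - 1)
    counts = np.zeros(len(bins) - 1)
    for i in range(len(x)):
        for j in range(i + 1, len(x)):
            sep = np.hypot(x[i] - x[j], y[i] - y[j])
            k = np.searchsorted(bins, sep) - 1
            if 0 <= k < len(sums):
                sums[k] += e1[i] * e1[j] + e2[i] * e2[j]
                counts[k] += 1
    return sums / np.maximum(counts, 1)   # mean correlation in each separation bin
```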

So how do you do this? The problem is that for an individual galaxy the effect is undetectable. Millions of sources must be measured and combined (the little calculation below shows why), and instrumental effects can completely submerge the very weak cosmological signal. We need to know what these effects are, and to correct for them. Some are conceptually straightforward: the path of light inside the telescope will also deform the galaxies. Or maybe, as the camera is read out, electric charge falls into holes in the detector silicon (drilled by passing charged particles) and gets trailed out; this, annoyingly, also changes galaxy shapes. Even worse: imagine that galaxies are not really randomly orientated on the sky, but line up because that's how they were made back in the mists of cosmic time. You need to find some way to measure that signal and subtract it from the one coming from dark matter. In general, the smaller the effect you want to measure, the more care you need to take. This is all the more important today, when in general the limiting factor is not how many things you have to measure (as it was before) but how well you can measure them. In the end, your only hope is to try to list every effect and leave enough margin so that if anything goes wrong, if you miss anything, the mission is not compromised.
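Why millions? The intrinsic "shape noise" of an individual galaxy is hundreds of times larger than the lensing signal, and it only averages down as one over the square root of the number of galaxies you combine. The numbers below are order-of-magnitude illustrations, not Euclid requirements:

```python
shape_noise = 0.3       # typical intrinsic ellipticity scatter of a single galaxy
target_signal = 1e-3    # order of magnitude of the correlated shear to be measured

# Averaging N galaxies reduces the noise to shape_noise / sqrt(N); getting the
# noise below the signal therefore needs N of order (shape_noise / signal)**2.
n_needed = (shape_noise / target_signal) ** 2
print(f"Need of order {n_needed:.0e} galaxies per measurement bin")
```

And that is per bin of separation and distance; multiply by the number of bins and the billion-galaxy survey starts to look less extravagant.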

A selection of cognitive biases. My favourite is number 8, because there is a cute dog, although in astronomy 7 and 3 are probably the most pernicious (from Business Insider)

After the accountancy, the psychology, or more particularly cognitive biases. For example, “strong gravitational lensing” – where background galaxies are visibly deformed by dark matter in massive objects like galaxy clusters – had already been seen on photographic plates well before it was “discovered” in electronic images in the 1980s. People were simply not expecting it, and in any case the distorted galaxies on the photographic plates looked too much like defects and were ignored.

So how do you plan an experiment to derive cosmological parameters without baking some cosmological parameters into your analysis? After dodging all the bullets of unknown systematic errors, how do you make sure you haven't introduced a bias which comes from people simply having preconceived ideas about how the Universe should be? The answer must come from designing an analysis with as few unwarranted assumptions as possible, and, if any must be made, hiding them from the researchers doing the sums.

Structural and thermal model of the VIS camera focal plane assembly. This is a realistic model of what the real Euclid visible camera will be like (Courtesy M. Sauvage, CEA).

The recent story of the discovery of gravitational waves provides a fine example. Most scientists didn't know until very late on whether the signal they were dealing with was real or a simulation: such simulations had been routinely injected into the computers by colleagues wanting to make sure everything was working (this was how they had been testing the whole analysis). For Euclid, that would be the “Matrix” solution: most astronomers wouldn't know whether the data under analysis were real or simulated after some secret sleight-of-hand early on. But making a realistic simulation of the whole Universe as seen by the satellite would be, to say the least, very challenging. More realistically, this test might happen later on, with catalogues of objects being shuffled around so that only a few people would know which one corresponded to the real Universe. Like drug trials, but with galaxies.
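A minimal sketch of that drug-trial idea: give real and simulated catalogues anonymous labels, and keep the key with a very small number of people until the analysis is frozen. The file names and key handling here are purely illustrative:

```python
import random

def blind_catalogues(catalogue_files, seed):
    """Hand out anonymous labels for a mix of real and simulated catalogues.

    Only whoever holds `seed` (the blinding key) can say which label points
    at the real sky; everyone else analyses A, B, C... in exactly the same way.
    """
    rng = random.Random(seed)
    shuffled = list(catalogue_files)
    rng.shuffle(shuffled)
    labels = [chr(ord("A") + i) for i in range(len(shuffled))]
    return dict(zip(labels, shuffled))

# Purely illustrative file names and key:
mapping = blind_catalogues(
    ["real_sky_catalogue.fits", "simulation_1.fits", "simulation_2.fits"],
    seed=20290701,
)
# The analysis teams see only the labels; the key stays sealed until the end.
```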

Left: Tiny part of a simulated raw VIS image, showing the tracks of thousands of cosmic rays; right: four combined and processed images. You have got to do this right, otherwise no measurement of dark energy! (Courtesy VIS PF team)

In the end, you can’t plan for the unexpected because, well, it’s unexpected. But you can at least try to prepare for it. You have to, if you want your results to stand up to the scrutiny of peer-review and make that new discovery about how the universe really works.