
The road to the space coast (looking back on the ideas that led to Euclid)

On Florida’s Space Coast, the Euclid satellite is undergoing the final preparations for launch on a Falcon 9 rocket next Saturday, July 1st at 11:12 EDT. Although the Euclid mission was selected by ESA in 2011, the origins of the project date back more than a decade before that, starting with the realisation that the expansion of the Universe is accelerating.

In cinema, great discoveries are usually accompanied by the leading lights throwing their hands in the air and exclaiming, “This changes everything!” But in real life, scientists are cautious, and the first reaction to any new discovery is usually: is there a mistake? Is the data right? Did we miss anything? You need to think carefully about finding the right balance between double-checking endlessly (and getting scooped by your competitors) and rushing into print with something that is wrong. At the end of the 1990s, measurements of distant supernovae suggesting the accelerated expansion of the Universe were initially greeted with scepticism.

Conceptually, what those measurements were saying was simple: the further away an object is, the faster it is receding from us. Edwin Hubble’s early observations of galaxies demonstrated that there was a straight-line relationship between the distance of an object and its speed of recession. The simplest explanation (although one that scientists took a while to accept) was that the Universe was expanding.
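
In symbols, that straight-line relationship is just the textbook form of Hubble’s law; the notation below is the standard one, not anything specific to this post:

```latex
% Hubble's law: recession velocity is proportional to distance.
%   v   : recession velocity of the galaxy
%   d   : its distance from us
%   H_0 : the Hubble constant (the present-day expansion rate)
v = H_0 \, d
```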

Over the next few decades, researchers embarked on a long quest to find different classes of objects for which they could estimate distances. Supernovae were one of the best: it turned out that if you could measure how the brightness of a supernova changed with time, you could estimate its distance. You could then compare how that distance depended on redshift, which you could measure with a spectrograph. Wide-field cameras on large telescopes allowed astronomers to find supernovae further and further away, and by the end of the 90s, samples were large enough to detect the first tiny deviations from Hubble’s simple straight-line law. The expansion was accelerating. The unknown physical cause of this acceleration was codified as “Lambda”. Or “dark energy”.
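
To give a rough flavour of what “deviations from the straight-line law” means in practice, the toy calculation below compares how faint a supernova at a given redshift would appear in a flat universe with and without Lambda. The cosmological parameters and the Hubble constant used here are illustrative assumptions, not the values used by the supernova teams.

```python
# Sketch: how much fainter a supernova at redshift z looks in a universe
# with Lambda compared to a matter-only universe. Numbers are illustrative.
import numpy as np
from scipy.integrate import quad

C_KM_S = 299792.458   # speed of light [km/s]
H0 = 70.0             # Hubble constant [km/s/Mpc] (assumed value)

def luminosity_distance(z, omega_m, omega_lambda):
    """Luminosity distance in Mpc for a flat universe."""
    integrand = lambda zp: 1.0 / np.sqrt(omega_m * (1 + zp)**3 + omega_lambda)
    comoving, _ = quad(integrand, 0.0, z)
    return (1 + z) * (C_KM_S / H0) * comoving

def distance_modulus(z, omega_m, omega_lambda):
    """Distance modulus mu = 5 log10(d_L / 10 pc)."""
    d_pc = luminosity_distance(z, omega_m, omega_lambda) * 1e6
    return 5.0 * np.log10(d_pc / 10.0)

for z in (0.1, 0.5, 1.0):
    mu_lambda = distance_modulus(z, 0.3, 0.7)   # accelerating (Lambda) universe
    mu_matter = distance_modulus(z, 1.0, 0.0)   # matter-only universe
    print(f"z={z}: supernova appears {mu_lambda - mu_matter:.2f} mag fainter with Lambda")
```

With these illustrative numbers, a supernova at redshift 0.5 comes out roughly 20 per cent further away (about 0.4 magnitudes fainter) than in the matter-only case, the same order as the deviation shown in the figure below.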

First measurements of distant supernovae from two teams. The most distant measurements lie above the straight-line relation by ~20%.

But those points on the right-hand side of the graphs which deviated from Hubble’s straight-line law had big error bars. Everyone knew that supernovae were fickle creatures in any case, subject to a variety of poorly understood physical mechanisms that could mimic the effect that the observers were reporting.

Initially, there was a lot of resistance to this idea of an accelerating Universe, and to dark energy. Nobody wanted Lambda. Not the theorists, because there were no good theoretical explanations for Lambda. And not the simulators, because Lambda unnecessarily complicated their simulations. And not even the observers, because it meant that every piece of code used to estimate the physical properties of distant galaxies had to be re-written (a lot of boring work). Meanwhile, the supernovae measurements became more robust and the reality of Lambda became harder and harder to avoid. But what was it? Since it was hard to get large samples of supernovae, what other techniques could be used to discover what Lambda really was? Soon, measurements of the cosmic microwave background indicated that Lambda was indeed the preferred model, but because the acceleration only happens at relatively recent epochs in the Universe, microwave background observations only have limited utility here.

Meanwhile, several key instrumental developments were taking place. At the Canada France Hawaii Telescope and other observatories, wide-field cameras with electronic detectors — charge coupled devices, or CCDs — were being perfected. These instruments enabled astronomers for the first time to survey wide swathes of the sky and measure the positions, shapes and colours of tens of thousands of objects. At the same time, at least two groups were testing the first wide-field spectrographs for the world’s largest telescopes. Fed by targets selected from the new wide-field survey cameras, these instruments allowed the determination of the precise distances and physical properties of tens of thousands of galaxies. This quickly led to many new discoveries of how galaxies form and evolve. But these new instruments would also allow us to return to the still-unsolved nature of the cosmic acceleration, using a variety of new techniques which were first tested with these deep, wide-field surveys.

In the 1980s, observations of galaxy clusters with CCD cameras led to the discovery of the first gravitational arcs. These are images of distant galaxies which are, incredibly, magnified and distorted as their light passes near the cluster. The deflection of light by mass is one of the key predictions of Einstein’s theory of general relativity. The grossly distorted images can only be explained if a large part of the mass of the cluster is concealed in invisible or ‘dark’ matter. However, in current models of galaxy formation, the observed growth of structures in the Universe can only be explained if this dark matter is distributed throughout the Universe and not only in the centres of galaxy clusters. This also means that even the shapes of the galaxies of the ‘cosmic wallpaper’ across the night sky should be very slightly correlated, as light rays from these distant objects pass close to dark matter everywhere in the Universe. The effect would be tiny, but it should be detectable.

Simulation of the passage of light rays through the Universe, passing close to dark matter (S. Colombi, IAP).

Around the world, several teams raced to measure this effect in new wide-field survey camera data. The challenges were significant: the tiny effect required a rigorous control of every source of instrumental error and detailed knowledge of telescope optics. But by the early 2000s, a few groups had measured the “correlated shapes” of faint galaxies. They also showed that this measurement could be used to constrain how rapidly structures grow in the Universe. At the same time, other groups, using the first wide-field spectroscopic surveys, found that measurements of galaxy clustering could be used to independently constrain the parameters of the cosmological model.
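
To give a flavour of what measuring “correlated shapes” involves, here is a deliberately naive sketch: average the product of galaxy ellipticities in bins of angular separation. Everything here (the catalogue, the single ellipticity component, the binning) is invented for illustration; real cosmic-shear analyses decompose the shapes into tangential and cross components and correct painstakingly for the instrument.

```python
# Toy shape-correlation estimator: average the product of galaxy
# ellipticities in bins of angular separation. Purely illustrative.
import numpy as np

rng = np.random.default_rng(42)
n_gal = 1000
ra = rng.uniform(0.0, 1.0, n_gal)       # toy 1x1 degree field
dec = rng.uniform(0.0, 1.0, n_gal)
e1 = rng.normal(0.0, 0.3, n_gal)        # one ellipticity component (pure shape noise here)

# Pairwise separations and ellipticity products
dra = ra[:, None] - ra[None, :]
ddec = dec[:, None] - dec[None, :]
sep = np.sqrt(dra**2 + ddec**2)
prod = e1[:, None] * e1[None, :]

# Keep each pair once and bin by separation
iu = np.triu_indices(n_gal, k=1)
bins = np.linspace(0.01, 1.0, 11)
which = np.digitize(sep[iu], bins)
for b in range(1, len(bins)):
    mask = which == b
    if mask.any():
        print(f"{bins[b-1]:.2f}-{bins[b]:.2f} deg: <e e> = {prod[iu][mask].mean():+.5f}")
```

With pure shape noise, as here, the binned averages come out consistent with zero, which is exactly why millions of real galaxies and exquisite control of systematics are needed to pull out a percent-level signal.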

Halfway through the first decade of the 21st century, it was beginning to become clear that galaxy clustering combined with gravitational lensing could be an excellent technique to probe the nature of the acceleration. Neither method was easy: one required very precise measurements of galaxy shapes, which was very hard to do with ground-based surveys which suffered from atmospheric blurring; the other required spectroscopic measurements of hundreds of thousands of galaxies. And both techniques seemed highly complementary to supernovae measurements.

In 2006, the report from a group of scientists from Europe’s large observatories and space agencies charted a way forward to understand the origin of the acceleration. Clearly, what was needed was a space mission to provide wide-field, high-resolution imaging over the whole sky to measure the shapes, coupled with an extensive spectroscopic survey. These two ideas were submitted as separate satellite proposals: one would provide the spectroscopic survey (SPACE) and the other the high-resolution imaging (Dune). The committee, finding both concepts compelling, asked the two teams to work together to design a single mission, which would become Euclid. In 2012, the mission was formally approved.

Euclid in the clean room at Thales Alenia

Euclid aims to make the most precise measurement ever of the geometry of the Universe and to derive the most stringent possible constraints on the parameters of the cosmological model. Euclid uses two methods: galaxy clustering with the spectrograph and imager NISP (sensitive to dark energy) and gravitational lensing with the imager VIS (sensitive to dark matter). Euclid’s origins in ground-based surveys make it unique. Euclid aims to survey the whole extragalactic sky over six years. But unlike in ground-based surveys, no changes can be made to the instruments after launch. Once launched, Euclid will travel to the remote L2 orbit, one of the best places in the solar system for astronomy, to begin a detailed instrument checkout and prepare for the survey.

For more than a decade I have been involved in the team which will process the VIS images. The next few weeks will be exciting and stressful in equal measure. VIS is the “Leica Monochrom” of satellite cameras: there is only one broad filter, so the images will be in black-and-white. It will (mostly) not make images deeper than Hubble or James Webb: Euclid’s telescope mirror is relatively modest (there are some Euclid deep fields, but that is another story). But to measure shapes to the precision required to detect dark matter, every aspect of the processing must be rigorously controlled.

VIS images will cover tens of thousands of square degrees. Over the next few years, our view of the Universe will dramatically snap into high resolution. That, I am certain, will reveal wonders. Those images will be one of the great legacies of Euclid, together with a much deeper understanding of the cosmological model underpinning the Universe that will come from them and the data from NISP.

This Thursday, I’ll be travelling to Florida to see Euclid start its journey to its L2 orbit for myself. I’ll be waiting anxiously, along with many of my colleagues, for our first glimpse of the wide-field, high-resolution Universe, which will arrive a few weeks later.

Making discoveries: planning the Euclid space mission

Let’s start with some philosophy. Where does new knowledge come from? Well, from doing experiments, and comparing the results of those experiments with ideas — hypotheses — concerning physical laws. This works: technology created from knowledge gained this way has transformed the world.

However, as our knowledge of the Universe increases, each new experiment becomes more complicated, harder to do and more expensive. They have to be, because each new hypothesis must also explain all the previous experiments. In astronomy, technology enables new voyages to some unknown part of “parameter space”, which in turn lead to ever more stringent tests of our hypotheses concerning how the world works. These experiments allow us to take a good long look at something fainter, faster, further away. Something which was undetectable before but detectable now.

This telescope (at the Observatoire de Haute-Provence) discovered the first planet orbiting a Sun-like star outside our solar system

Space missions are really different from traditional science experiments. For one, the margin of error is minuscule, and errors are generally catastrophic, although there can be a few happy counter-examples. What this means is that a careful web of “requirements” must be written before launch. The influence of every aspect of the mission on the final scientific goal is estimated, together with the likely margin of error.

So here’s the paradox: how do you build a vastly complicated experiment which is supposed to find out something new and be certain that it will work? How do you make sure that you have covered all the bases, that you have thought of everything, and still leave open the possibility of discovery? Even harder, how do you persuade someone to give you a big chunk of change to do it? The answer is a weird kind of mixture of psychology and book-keeping. So first the (conceptually) straightforward bit: the book-keeping, which comes from trying to carefully chart all the tiny effects which will perturb your final measurement. This is actually notoriously difficult.
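
As a cartoon of what that book-keeping looks like, the sketch below combines a handful of independent error terms in quadrature and checks them against a top-level requirement, while holding some margin in reserve for the unknown. All of the names and numbers are invented for illustration.

```python
# Toy error budget: combine independent error terms in quadrature and
# check them against a top-level requirement, keeping some margin.
# All names and numbers are invented for illustration.
import math

requirement = 1.0e-3          # total error the science goal can tolerate (arbitrary units)
margin_fraction = 0.20        # fraction of the requirement held back for the unknown

contributions = {
    "detector effects":      4.0e-4,
    "optical distortions":   3.0e-4,
    "calibration residuals": 2.5e-4,
    "analysis software":     2.0e-4,
}

total = math.sqrt(sum(v**2 for v in contributions.values()))
allowed = requirement * (1.0 - margin_fraction)

print(f"combined error : {total:.2e}")
print(f"allowed budget : {allowed:.2e} (requirement {requirement:.1e} minus margin)")
print("PASS" if total <= allowed else "FAIL: redesign or re-allocate the budget")
```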

Did we think of everything? (iStock)

The annals of astronomy missions contain many celebrated examples of this kind of thing not quite working out. After the launch of the Gaia satellite, astronomers were dismayed to find that there was an unknown source of background light. It turned out that this came from sunlight scattering off fibres sticking out from the sun-shield, which nobody had thought about before. Some changes to the instrument configuration and processing have helped mitigate this problem.

An even more epic example is the Gravity Probe B experiment, designed to make a stringent test of General Relativity. This experiment featured the smoothest metal balls ever produced. Planning, launching and analysing data from this satellite took almost half a century (work started in 1963 and the satellite survived multiple threatened cancellations). The objective was to measure how relativistic effects changed the rotation of these metal balls. After an enormous amount of work analysing data from the satellite by some very smart people indeed, a result was announced, confirming General Relativity — but with errors around an order of magnitude larger than expected (Cliff Will has an excellent write-up here). Despite decades of work, three sources of error were missed, the most important of which was stray patches of static electricity on the balls’ surfaces, which exploded the final error budget. In the cases of both Gaia and Gravity Probe B the missions were successful overall, but unknown sources of error had not been entirely accounted for in mission planning.

A Gravity Probe B gyro rotor and housing (NASA)

Part of the Euclid challenge is to make the most accurate measurement of galaxy shapes of all time. Light rays passing through the Universe are very slightly perturbed by the presence of dark matter. If two rays pass next to the same bit of dark matter, they are perturbed in the same way. Euclid aims to measure this signal on the “cosmic wallpaper” of very faint distant background galaxies. These galaxies are effectively a big sheet of cosmic graph paper: by measuring how this correlated alignment depends on distance between the galaxies you can find out about the underlying cosmology of the Universe.

So how do you do this? The problem is that for an individual galaxy the effect is undetectable. Millions of sources must be measured and combined, and instrumental effects can completely submerge the very weak cosmological signal. We need to know what these effects are, and to correct for them. Some are conceptually straightforward: the path of light inside the telescope will also deform the galaxies. Or maybe, as the camera is read out, electric charge falls into holes in the detector silicon (drilled by passing charged particles) and gets trailed out. This, annoyingly, also changes galaxy shapes. Even worse: imagine that galaxies are not really randomly orientated on the sky, but line up because that’s how they were made back in the mists of cosmic time. You need to find some way to measure that signal and subtract it from the one coming from dark matter. In general, the smaller the effect you want to measure, the more care you need to take. This is all the more important today, when in general the limiting factor is not how many things you have to measure (as it was before) but how well you can measure them. In the end, your only hope is to try to list each effect and leave enough margin so that if anything goes wrong, if you miss anything, the mission is not compromised.
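
Written schematically (in the weak-lensing limit, and glossing over a great many subtleties), the shape measured for a single galaxy mixes the signal being sought with everything that has to be corrected:

```latex
% Schematic decomposition of a measured galaxy ellipticity (weak-lensing limit):
%   e_int : intrinsic ellipticity (random, typically ~0.3, plus any intrinsic alignments)
%   gamma : gravitational shear from dark matter along the line of sight (of order a percent)
%   e_sys : instrumental terms (telescope optics, detector charge trailing, ...)
e_{\mathrm{obs}} \simeq e_{\mathrm{int}} + \gamma + e_{\mathrm{sys}}

% Averaged over many galaxies the random intrinsic part tends to zero, so the
% mean shape traces the shear -- provided e_sys has been measured and removed:
\langle e_{\mathrm{obs}} \rangle \simeq \gamma + \langle e_{\mathrm{sys}} \rangle
```

The random intrinsic part is more than an order of magnitude larger than the shear, which is why single galaxies tell you nothing and millions must be averaged.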

A selection of cognitive biases. My favourite is number 8, because there is a cute dog, although in astronomy 7 and 3 are probably the most pernicious (from Business Insider)

After the accountancy, the psychology, or more particularly cognitive biases. For example, “strong gravitational lensing”, where background galaxies are visibly deformed by dark matter present in massive objects like galaxy clusters, had already been seen on photographic plates well before it was “discovered” in electronic images in the 1980s. At the time, people were not expecting it, and in any case those distorted galaxies on photographic plates looked too much like defects and were ignored.

So how do you plan an experiment to derive cosmological parameters without including some cosmological parameters in your analysis? After dodging all the bullets of unknown systematic errors, how do you make sure you haven’t included an unknown bias which comes from people just having some preconceived ideas about how the universe should be? The answer must come from trying to design an analysis with as few unwarranted assumptions as possible and, if there are any to be made, to hide them from the researchers doing the sums.

Structural and thermal model of the VIS camera focal plane assembly. This is a realistic model of what the real Euclid visible camera will be like (Courtesy M. Sauvage, CEA).

The recent story of the discovery of gravitational waves provides a fine example. Most scientists didn’t know until very late on that the signal they were dealing with was real and not a simulation. Such simulations had been routinely injected into the computers by colleagues wanting to make sure everything was working (this was how they had been testing everything). For Euclid, that would be the “Matrix” solution: most astronomers wouldn’t know whether the data under analysis were real or simulated after some secret sleight-of-hand early on. But making a realistic simulation of the whole Universe as seen by the satellite might be, to say the least, very challenging. More realistically, this test might happen later on, with catalogues of objects being shuffled around so that only a few people would know which one corresponded to the real Universe. Like drug trials, but with galaxies.
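
A minimal sketch of that catalogue-shuffling idea, assuming invented file names and a deliberately simplified setup:

```python
# Minimal sketch of blinding by catalogue shuffling: analysts see catalogues
# labelled A, B, C and only a few "keepers of the key" know which label is
# the real sky. File names are invented for illustration; in practice the
# data files themselves would be copied under the blinded names.
import json
import random
import string

catalogues = ["real_sky.fits", "simulation_01.fits", "simulation_02.fits"]

labels = list(string.ascii_uppercase[:len(catalogues)])
random.shuffle(catalogues)                 # scramble which file gets which label
key = dict(zip(labels, catalogues))        # label -> original file (kept secret)

# Analysts work only from the anonymous labels...
for label in labels:
    print(f"analyse catalogue {label}")

# ...while the key is sealed away until all analysis choices are frozen.
with open("blinding_key.json", "w") as handle:
    json.dump(key, handle, indent=2)
```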

Left: Tiny part of a simulated raw VIS image, showing the tracks of thousands of cosmic rays; right: four combined and processed images. You have got to do this right, otherwise no measurement of dark energy! (Courtesy VIS PF team)

In the end, you can’t plan for the unexpected because, well, it’s unexpected. But you can at least try to prepare for it. You have to, if you want your results to stand up to the scrutiny of peer-review and make that new discovery about how the universe really works.

21st-century Insoluble Pancakes: dark matter, dark energy and how we know what we know

These days, we know far more about the origin, nature and fate of the Universe than at any time in history. Justification? Well, any good description of the Universe has to be able both to provide a framework for understanding what has happened in the past and provide predictions for what will happen in the future. It should, more than anything, be consistent with observations. During the past few centuries, the quantity and quality of observations we have made have vastly increased. Applying our new-found knowledge of physics we’ve constructed new instruments and these have allowed us to probe the contents of the Universe right back to the “last scattering surface”, the brick wall beyond which no photon can penetrate.

The Javalambre Survey Telescope

But there is a problem…

But there is a problem. Our current best cosmological model, the one which matches most observations, happens to contain two substances whose precise nature is still somewhat, shall we say, uncertain. This model is called “Lambda CDM”, which means that it has Lambda, “dark energy”, and CDM, which stands for “cold dark matter”. Perhaps that should be with a comma, as in cold, dark, matter? In any case, these two substances, dark matter and dark energy, according to this “standard model”, account for most of the energy content of the Universe. Ordinary material is just the few percent left over. Needless to say, intellectually, this is not a satisfactory state of affairs.
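
For orientation, the rough split that comes out of fits of this standard model to recent data is (figures approximate):

```latex
% Approximate present-day energy budget in the Lambda-CDM model (Planck-era fits):
\Omega_\Lambda \approx 0.69, \qquad
\Omega_{\mathrm{cdm}} \approx 0.26, \qquad
\Omega_{\mathrm{b}} \approx 0.05
% i.e. dark energy plus cold dark matter make up roughly 95%,
% and ordinary (baryonic) matter the remaining ~5%.
```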

Worse yet, this “standard model” has proved surprisingly robust. Data from the last big cosmology mission, Planck, analysed in part by my colleagues at the IAP, provided a final data set which seems to be in almost perfect agreement with the predictions of the standard model. There is just a hint of a discrepancy with a few measurements at lower redshifts from separate experiments, which could very well be explained by imperfect astrophysical, rather than cosmological, knowledge. At the same time, many teams have spent the best part of the last few decades trying to directly detect dark matter particles. Other than the mysterious DAMA/LIBRA result, which shows an oscillating signal of who-knows-what, there has been no hint of a dark matter particle. The range of particle masses not yet excluded by other experiments is getting smaller and smaller. In particle accelerators like the Large Hadron Collider, no evidence has been found for the kinds of particles that dark matter is supposed to be made of (although the limits on their masses have been narrowed).

From Bernabei et al., 2010. The wiggles could be … interactions with dark matter particles?

Certainly this situation has penetrated the popular consciousness. Many people are aware that there is some “dark stuff”, although nobody knows what it is. But you see, this is only half true. In fact, the characteristics of dark matter are known very well, because it must have those properties for the fabulously successful standard model to match most of the observations.

Now, notice that I said most of the observations. There is a selection of problematic data which may or may not be in agreement with our cosmological theory. Here’s the thing though. Any theory which purports to explain the discrepant observations has to explain not only those observations, but everything else as well. That’s hard. Maybe there is no dark matter at all? Maybe it’s like Ptolemy’s “epicycles”, a complicated construct masking an underlying simpler truth? Or maybe gravity works differently on large scales?

XKCD and dark matter (R. Munroe).

The answer is…

So a few days ago on Facebook, buried amongst the cat videos, I came across this article by David Merritt, which promised to be a philosophical attack on Lambda-CDM. I had high hopes, but on reading the paper in more detail I found that it doesn’t deliver a knockout blow to Lambda-CDM. Merritt characterises dark matter and dark energy as “conventionalist strategies”, a term borrowed from the great philosopher of science Karl Popper. This is bad: Popper explains that hypotheses which are added to a theory but do not increase its degree of falsifiability are conventionalist. I have to say, I adhere strongly to Popper’s ideas: if you cannot prove a theory wrong by observations, then it is not a real scientific theory. These “conventionalist strategies” are “sticking plasters” added to an existing theory when it should in fact be discarded.

Merritt also argues that a large number of the difficult and as yet unresolved problems in the standard model (many dynamical in nature) have been ignored by textbook writers. He provides an extensive hit-list of cosmology textbooks, noting whether or not his three named problems are discussed. Mostly they are not. But is this a problem?

It seems to me that Lambda-CDM has been very successful given our ignorance of its constituents. The weight of observations consistent with theory is large. No other explanation has been proposed which agrees just as well with all this data, and I suspect many of the problems on Merritt’s list may simply be resolved by a better understanding of how normal matter interacts with dark matter. This is a very complicated process, and probably can only be solved numerically using very large computer simulations.

I am not saying that the current situation is satisfactory. I think simply that the hypothesis of dark matter and dark energy is more palatable, for instance, than arbitrary modifications of general relativity. Should we really discard Lambda-CDM for such a theory? Merritt argues that dark matter and dark energy are unverifiable hypotheses, but surely a modification to general relativity without any theoretical motivation is worse? That said, there are theories of modified gravity which have more robust origins. But as I said, we must not forget that theories must also match all the existing observations, including the discrepant ones!

This century’s insoluble pancake…

If Flann O’Brien were around today, I am sure he would have a lot of fun with these ideas. After all, O’Brien’s philosopher-scientist De Selby claimed that night was an accumulation of black air … but did he mean dark matter?

De Selby with some dark matter (John Farman)

O’Brien was writing at a time when the strange ideas of quantum mechanics were slowly becoming common currency. Schrödinger lived in Dublin at the same time as O’Brien. O’Brien was keen to show how our modern conception of the Universe could sometimes lead to troubling conclusions. The “spooky action at a distance”, Einstein’s description of quantum mechanics, led to O’Brien’s rural police station with a direct link to eternity. And today, with dark matter and dark energy?