Wednesday, November 26, 2014

Quasar structures: a postscript

A few days ago I discussed the purported 'spooky' alignment of quasar spins and the cosmological principle here. So as to focus better on the main point, I left a few technical comments out of that discussion which I want to mention here. These don't have any direct bearing on the main argument made in that post — rather they are interesting asides for a more expert audience.

Quasars can't prove the Universe is homogeneous


Readers of the original post might have noticed that I was quite careful to always state that the distribution of quasars was statistically homogeneous, but not that the quasars showed the Universe was homogeneous. The reason for this lies in the properties of the quasar sample itself.

There are two main ways of constructing a sample of galaxies or quasars to use for further analysis, such as testing homogeneity. The first is simply to include every object seen by the survey instruments within a certain patch of sky that lies between two redshifts of interest. But these objects vary in their intrinsic brightness, and the survey instruments have limited sensitivity, so they can only record dim objects when those objects are relatively close to us. Intrinsically bright objects are rarer, but they remain visible at much greater distances, so far away we see only the rare bright ones. This strategy therefore results in a sample with very many, but largely dim, galaxies or quasars relatively close to us, and fewer but brighter objects far away. This is known as a flux-limited sample.

The other strategy is to correct the measured brightness of each object for its distance from us, to determine its 'intrinsic' brightness (otherwise known as its absolute magnitude), and then to select only those objects whose absolute magnitudes lie in a similar range. The magnitude range is matched to the survey volume, so that we can be confident we have seen every object of that magnitude that exists within that volume. This is called a volume-limited sample.
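To make the distinction concrete, here is a minimal sketch of how one might cut a volume-limited subsample out of a flux-limited catalogue. This is a toy illustration in Python, not anyone's actual pipeline: the function names and inputs are my own, and note that the luminosity distances already presuppose a cosmological model (see the footnote below).

```python
import numpy as np

def absolute_magnitude(m_app, d_lum_mpc):
    # Distance modulus: M = m - 5 log10(d_L / 10 pc), with d_L in Mpc
    return m_app - 5.0 * np.log10(d_lum_mpc * 1e6 / 10.0)

def volume_limited_cut(m_app, z, d_lum_mpc, m_lim, z_max, d_lum_zmax_mpc):
    """Boolean mask selecting a volume-limited subsample.

    m_app, z, d_lum_mpc : apparent magnitudes, redshifts and luminosity
                          distances (in Mpc) of the catalogue objects
    m_lim               : the survey's limiting apparent magnitude
    z_max               : chosen upper redshift edge of the sample volume
    d_lum_zmax_mpc      : luminosity distance to z_max in the assumed cosmology
    """
    M = absolute_magnitude(m_app, d_lum_mpc)
    # The faintest absolute magnitude still detectable at the far edge of
    # the volume: anything fainter would drop below the flux limit at high
    # redshift, so it must be cut everywhere to keep the selection uniform
    # across the whole volume.
    M_cut = absolute_magnitude(m_lim, d_lum_zmax_mpc)
    return (M <= M_cut) & (z <= z_max)
```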

Testing the homogeneity of the Universe requires a volume-limited survey of objects. For a flux-limited sample the distribution in redshift (i.e., in the line-of-sight direction) would not be expected to be uniform in the first place: the number density of objects would ordinarily decrease sharply with redshift. But looking out away from Earth also means looking back in time, so if the redshift range of the survey is large, the farthest objects are seen as they were at an earlier time than the closest ones. If the objects in question evolved appreciably over that interval, near and far objects could represent significantly different populations even in a volume-limited sample, and once again we wouldn't expect to see homogeneity along the line of sight, even if the Universe were homogeneous.

So to really test the cosmological principle without having to assume homogeneity at the outset,1 we need a volume-limited sample of galaxies covering a very large volume of the Universe but spanning a relatively narrow range of redshifts. Such surveys are hard to come by. For example, the study confirming homogeneity in WiggleZ galaxies (see here and here) actually used a flux-limited sample, and so required additional assumptions. In that case one obtains not a proof but a check of the self-consistency of those assumptions, which people may regard as good enough, depending on taste.

Anyway, the key point is that the DR7QSO quasar sample everyone uses is most definitely flux-limited and not volume-limited (I was myself reminded of this point by Francesco Sylos Labini). Despite this, the redshift distribution of quasars is remarkably uniform (between redshifts 1 and 1.8). So what's going on? Well, unlike certain types of galaxies that live much closer to home, distant quasar populations are expected to evolve rather quickly with time. And the age difference between objects at redshifts 1 and 1.8 is more than 2 billion years!
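That number is easy to check. Here is a quick sketch, assuming a flat $\Lambda$CDM background with illustrative parameter values $H_0 = 70$ km/s/Mpc and $\Omega_m = 0.3$ (not those of any particular analysis):

```python
import numpy as np
from scipy.integrate import quad

H0_GYR = 70.0 / 978.0   # H0 = 70 km/s/Mpc converted to Gyr^-1
OM, OL = 0.3, 0.7       # flat Lambda-CDM, illustrative values

def E(z):
    # Dimensionless Hubble rate for flat Lambda-CDM
    return np.sqrt(OM * (1.0 + z)**3 + OL)

def lookback_time_gyr(z):
    # t_L(z) = (1/H0) * integral_0^z dz' / [(1+z') E(z')]
    integral, _ = quad(lambda zp: 1.0 / ((1.0 + zp) * E(zp)), 0.0, z)
    return integral / H0_GYR

print(lookback_time_gyr(1.8) - lookback_time_gyr(1.0))  # ~2.2 Gyr
```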

It would appear that, for this particular sample, these two effects coincidentally more or less cancel each other out. A volume-limited subset of these quasars would be (indeed, is) highly inhomogeneous. And because of the time evolution, the homogeneity or otherwise of any sample of quasars says nothing much about the homogeneity or otherwise of the Universe in general.

Luckily this is only incidental to the main argument. The fact that the distribution of these (flux-limited) quasars is statistically homogeneous on scales of 100-odd Megaparsecs despite claims for the existence of Gigaparsec-scale 'structures' simply demonstrates the point that the existence of single structures of any kind doesn't have any bearing on the question of overall homogeneity. Which is the main point.

Homogeneity is sample-dependent 


Of course the argument above cuts both ways.

Let's imagine that a study has shown that the distribution of a particular type of galaxy (call them luminous red galaxies) approaches homogeneity above a certain distance scale, say 100 Megaparsecs. Such a study was done by David Hogg and others in 2005. From this we may reasonably conclude (though not, strictly speaking, prove) that the matter distribution in the Universe approaches homogeneity at some scale no larger than 100 Mpc. But we are not allowed to conclude that the distribution of some other sample of objects (radio galaxies, quasars, blue galaxies, etc.) approaches homogeneity above the same scale, or indeed at all!

Even in a Universe with a homogeneous matter distribution, the scale above which a volume-limited sample of galaxies (whose properties are constant in time) approaches homogeneity depends on the galaxy bias. The bias depends on the type of galaxy in question, and so, to a lesser extent, does the expected homogeneity scale. Of course, if the sample is not volume-limited, or does evolve with time, all bets are off anyway.
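To spell this reasoning out in the simplest case: in the linear, deterministic bias model the galaxy overdensity traces the matter overdensity as $\delta_g(\mathbf{x}) = b\,\delta_m(\mathbf{x})$, so the two-point correlation functions are related by
$$\xi_g(r) = b^2\,\xi_m(r).$$
A more strongly biased sample ($b>1$) therefore has larger fluctuations on every scale, and crosses any fixed threshold defining 'close enough to homogeneous' at a correspondingly larger scale.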

More generally, for each sample of galaxies that we wish to use for higher-order statistical measurements, the statistical homogeneity of that particular sample must in general be demonstrated first. This is because higher-order statistical quantities, such as the correlation function, are conventionally normalized in units of the sample mean density, and in the absence of statistical homogeneity this normalization becomes meaningless.
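To be concrete: if $n(\mathbf{x})$ is the number density of objects and $\bar{n}$ the mean density of the sample, the correlation function is defined by
$$1 + \xi(r) = \frac{\langle n(\mathbf{x})\,n(\mathbf{x}+\mathbf{r})\rangle}{\bar{n}^2}.$$
If the sample is not statistically homogeneous, $\bar{n}$ never converges to a well-defined value as the survey volume grows, and both the normalization and $\xi$ itself lose their meaning.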

There was a time when the homogeneity of the Universe was less well accepted than it is today, and the possibility of a fractal distribution of matter was still an open question. At that time demonstrating the approach to homogeneity on large scales in a well-chosen sample of galaxies was worth a publication (even a well-cited publication) in itself. This is probably no longer the case, but it remains a necessary sanity check to perform for each galaxy survey.

1Properly speaking, even the creation of a volume-limited sample requires an assumption of homogeneity at the outset, since the determination of absolute magnitudes requires a cosmological model, and the cosmological model used will assume homogeneity. In this sense all "tests" of homogeneity are really consistency checks of our assumption thereof.

Sunday, November 23, 2014

A 'spooky alignment' of quasars, or just hype?

In the news this week we've had a story on the alignment of quasar spins with large-scale structure, based on this paper by Hutsemekers et al. The paper was accompanied by this press release from the European Southern Observatory, which was then reproduced in various forms in a number of blogs and news outlets, almost all of which stress the 'spooky' or 'mysterious' nature of the claimed alignment 'over billions of light years'.

At least one of these blogs (the one at The Daily Galaxy) explicitly claims that the alignment of these quasar spins is a challenge for the cosmological principle, which is the assumption of large-scale statistical homogeneity and isotropy of the Universe, on which all of modern cosmology is based. This claim is not contained in the press release, but originates from a statement in the paper itself, where the authors say
The existence of correlations in quasar axes over such extreme scales would constitute a serious anomaly for the cosmological principle.
I'm afraid that this claim is completely unsupported by any of the actual results contained within the paper, and is therefore one of those annoying examples of scientific hype. In this post I will try to explain why.

I have actually covered much of this ground before — in a blog post here, but more importantly in a paper published in Monthly Notices last year — and I must admit I am a little surprised at having to repeat these points (especially since my paper is cited by Hutsemekers et al.). Nevertheless, in what follows I shall try not to sound too grumpy.

The immediate story started with a paper by Roger Clowes and collaborators, who claimed to have detected the 'largest structure' in the Universe (dubbed the 'Huge-LQG') in the distribution of quasars, and also claimed that this structure violated the cosmological principle. My paper last year was a response to this, and made the following points:

  1. the detection of a single large structure has essentially no relevance to the question of whether the Universe is statistically homogeneous and isotropic;
  2. the quasar sample within which the Huge-LQG was identified is statistically homogeneous, and approaches homogeneity at the scale we expect theoretically, thus providing an explicit demonstration of point 1;
  3. the definition of 'structure' by which the Huge-LQG counts as a structure is so loose that by using it we would find equally vast 'structures' even in completely random distributions of points which (by construction!) contain no correlations and therefore no structure whatsoever; and 
  4. therefore the classification of the Huge-LQG set of quasars as a 'structure' is essentially empty of meaning.


Quasar structures don't violate homogeneity


Since I am already repeating myself, let me elaborate a little more on points 1 and 2. Our Universe is not exactly homogeneous. The fact that you exist (more generally, the fact that stars, galaxies and clusters of galaxies exist) is sufficient proof of this, so it would be a very poor advertisement for cosmology indeed if it were all founded on the assumption of exact homogeneity. Luckily it isn't. In fact our theories could be said to predict the existence of structure in the potential $\Phi$ on all scales (that's what a scale-invariant power spectrum from inflation means!), and even the galaxy-galaxy correlation function approaches zero only asymptotically at large scales.

Instead we have the assumption of statistical homogeneity and isotropy, which means that we assume that when looked at on large enough scales, different regions of the Universe are on average the same. Clearly, since this is a statement about averages, it can only be tested statistically by looking at large numbers of different regions, not by finding one particular example of a 'structure'. In fact there is a well-established procedure for checking the statistical homogeneity of the distribution of a set of points (the positions of galaxies or quasars, in this case), which involves measuring its fractal dimension and checking the scale above which this is equal to 3. I've described the procedure before, here and here, and Peter Coles describes a bit of the history of it here.
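For readers who prefer code to prose, here is a minimal sketch of the counts-in-spheres version of this test. This is the general idea only, not the specific pipeline of any published analysis; the random catalogue is assumed to encode the survey mask and selection function.

```python
import numpy as np
from scipy.spatial import cKDTree

def scaled_counts_in_spheres(data, randoms, radii):
    """Mean counts-in-spheres N(<r) around the data points, divided by
    the same quantity measured from an unclustered random catalogue
    with the same survey geometry and selection.

    data, randoms : (N, 3) arrays of comoving Cartesian positions
    radii         : 1D array of sphere radii to test

    The ratio flattens to ~1 above the homogeneity scale; equivalently,
    the correlation dimension D2 = dlnN/dlnr approaches 3 there.
    """
    dtree, rtree = cKDTree(data), cKDTree(randoms)
    ratio = np.empty(len(radii))
    for i, r in enumerate(radii):
        # Neighbour counts within r of each data point; subtract 1 to
        # exclude each point from its own count.
        n_dd = dtree.query_ball_point(data, r, return_length=True) - 1
        n_dr = rtree.query_ball_point(data, r, return_length=True)
        # Normalize each by the number of available neighbours, so that
        # an unclustered data set would give a ratio of 1 at all radii.
        ratio[i] = (n_dd.mean() / (len(data) - 1)) / (n_dr.mean() / len(randoms))
    return ratio
```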

The bottom line is that, as I showed last year, the quasar distribution in question is statistically homogeneous above scales of at most $\sim130h^{-1}$Mpc. There is therefore no 'structure' you can find in this data which could violate the cosmological principle. End of story.

Scaled number counts in spheres as a measure of the fractal dimension of the quasar distribution. On scales where this number approaches 1, the distribution is statistically homogeneous. From arXiv:1306.1700.


Structures and probability


Of course, there are many different ways of being statistically homogeneous. It is perfectly possible that within a statistically homogeneous distribution one could find a particular structure or feature whose existence in our specific cosmological model (which is one of many possible models satisfying the cosmological principle) is either very unlikely or impossible. This would then be a problem for that cosmological model despite not having any wider implications for the cosmological principle. But to prove this requires some serious analysis, which should include a proper treatment of probabilities — you can't just say "this structure is big, so it must be anomalous."

In particular, any serious analysis of probabilities must take into account how a 'structure' is defined. Given infinitely many possible choices of definition, and a very large Universe in which to search, the probability of finding some 'structure' that extends over billions of light years is practically unity. In fact the definition used for the Huge-LQG would be likely to throw up equally vast 'structures' even if quasar positions were not at all correlated with each other (and we know they must be at least somewhat correlated, because of gravity). So it really isn't a very useful definition at all.
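This is easy to demonstrate for yourself. The sketch below (a toy version with made-up numbers, not the density or linking length of the actual quasar analysis) applies a friends-of-friends style grouping, in which any two points closer than a chosen linking length are declared part of the same 'structure', to a set of completely uncorrelated Poisson points. With a linking length slightly larger than the mean inter-point separation, the largest 'structure' found typically spans most of the box:

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import connected_components

rng = np.random.default_rng(42)

# Uncorrelated Poisson points in a big box: by construction there is no
# structure here at all. All numbers are illustrative.
box, npts, link = 2000.0, 10000, 100.0   # Mpc, point count, Mpc
pts = rng.uniform(0.0, box, size=(npts, 3))

# Friends-of-friends: link every pair separated by less than `link`,
# then identify 'structures' with the connected components of the graph.
pairs = cKDTree(pts).query_pairs(link, output_type='ndarray')
adj = coo_matrix((np.ones(len(pairs)), (pairs[:, 0], pairs[:, 1])),
                 shape=(npts, npts))
_, labels = connected_components(adj, directed=False)

# Size and spatial extent of the largest group found.
biggest = np.bincount(labels).argmax()
members = pts[labels == biggest]
print(f"largest 'structure': {len(members)} points, "
      f"extent ~ {np.ptp(members, axis=0).max():.0f} Mpc")
```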


'Spooky' alignments


This brings us to the current paper by Hutsemekers et al. The starting assumption of this paper is that the Huge-LQG is a real structure which is somehow distinguished from its surroundings. This assumption is manifest in the decision that the authors make to try to measure the polarization of light from only those quasars that are classified as part of the Huge-LQG rather than a more general sample of quasars. This classic case of circular reasoning is the first flaw in the logic, but let's put it to one side for a minute.

The press release then tells us that the scientists
found that the rotation axes of the central supermassive black holes in a sample of quasars are parallel to each other over distances of billions of light years
and that the spins of the central black holes are aligned along the filaments of large-scale structure in which they reside.

I find this statement extremely problematic. Here is a figure from the paper in question, showing the sky positions of the 93 quasars in question, along with the polarization orientations for the 19 which are used in the actual analysis:

Quasar positions (black dots) and polarization alignments (red lines). From arXiv:1409.6098.

Do you see the alignment? No, me neither. In fact, looking at the distribution of angles in panel b, I would say that looks very much like a sample drawn from a perfectly uniform distribution.

So what is the claim actually based on? Well, for a start one has to split up the (arbitrarily defined) 'structure' into several (even more arbitrarily defined) 'sub-structures'. Each of these sub-structures then defines a different reference angle on the sky:

Chopping the data to suit the argument (Figure 4 of arXiv:1409.6098). On what basis are sub-structures 1 and 2 defined as separate from each other?

And now one has to measure two angles for each quasar: between its polarization direction and the reference direction of its particular sub-structure, and between its polarization direction and the perpendicular to that reference direction, and then choose the smaller of the two. In other words, rather than proving that quasars are aligned parallel to each other over distances extending over billions of light years (the claim in the press release), what Hutsemekers et al. are actually attempting to show is that, given arbitrary choices of some smaller sub-structures and reference directions, quasars in different sub-structures are typically aligned either parallel or perpendicular to their local reference direction. This is a much less exacting standard.
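To see how much the double folding weakens the claim, here is the statistic in code form (my own paraphrase of the procedure as described above, not the authors' pipeline). Polarization position angles are only defined modulo 180 degrees; after the two folds each quasar contributes an angle between 0 and 45 degrees, and 'alignment' amounts to this being on average smaller than the 22.5 degrees expected for completely random orientations:

```python
import numpy as np

def folded_alignment_angle(pol_deg, ref_deg):
    """Acute angle between a polarization axis and a reference axis,
    additionally folded about 45 degrees so that 'perpendicular'
    counts as aligned too. Returns values in [0, 45]."""
    # Angle between two axes (not vectors): reduce mod 180, then take
    # the acute value, giving a result in [0, 90].
    delta = np.abs(pol_deg - ref_deg) % 180.0
    delta = np.minimum(delta, 180.0 - delta)
    # Second fold: parallel and perpendicular both map to small angles.
    return np.minimum(delta, 90.0 - delta)

# Sanity check: completely random polarization angles average ~22.5 deg.
rng = np.random.default_rng(1)
random_pol = rng.uniform(0.0, 180.0, size=100_000)
print(folded_alignment_angle(random_pol, 30.0).mean())  # ~22.5
```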

Even this claim is not particularly well supported by the evidence. That is, looking at the distribution of angles, I am really not at all convinced that this shows evidence for a bimodal distribution with peaks at 0 and 90 degrees:

Distribution of angles purportedly showing two distinct peaks at 0 and 90. Figure 5 of arXiv:1409.6098.

So in summary I think the statistical evidence of alignment of quasar spins is already pretty weak. I don't see any analysis in the paper dealing with the effects of a different arbitrary choice of sub-structures, nor do I see any error analysis (the error in measuring the polarization direction of a quasar can be as large as 10 degrees!). And I haven't even dealt with the fact that the polarization data is used for only 19 quasars out of the full 93 — in other words, for the majority of quasars in the sample the central black hole spins are aligned along some other, undetermined, direction such that we can't measure the polarization.


Extraordinary claims require extraordinary evidence


Now, it's worth repeating that we've already seen that the spatial distribution of quasars is in fact statistically homogeneous, in accordance with the cosmological principle. That simple test has been done, and the cosmological principle survives. So if you've got some more nuanced claim of an anomaly, I think the onus is on you not only to describe the measurement you made, but also to say what exactly is anomalous about it. What is the theoretical prediction we should compare it to? Which model is being rejected (or otherwise) by the new data?

So, for instance, if quasar spins in sub-structures are indeed aligned either parallel or perpendicular to each other (and I still remain to be convinced that they are), is this really something 'spooky', or would we expect some degree of alignment in the standard $\Lambda$CDM model?

Such an analysis has not been presented, but even if it had been, it's worth bearing in mind the principle that extraordinary claims require extraordinary evidence. I'm afraid throwing out a p-value of about 1% simply doesn't cut it. Not only is that not an enormously impressive number in itself (especially given all the other issues I mentioned above), but such a frequentist statistic also fails to take account of all our prior knowledge.

Other people have banged this drum at length before, but the point is easily summarized: the p-value tells us the probability of obtaining data at least as extreme as these given the model, but not the probability of the model being correct given that the new data appear to contradict it. The latter is the question we really want to answer. Answering it requires a Bayesian analysis, in which one must account for the prior belief in the model, built up from all the other experimental results that agree with it. We have an incredible amount of observational evidence in favour of our current model, evidence which would probably not be consistent with a model in which gigantic structures could exist (I say 'probably' because no such alternative model actually exists at present).
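A toy calculation, with entirely made-up numbers chosen only to illustrate the logic, shows why. Read the 1% p-value (very generously) as the likelihood of the data under $\Lambda$CDM, invent a hypothetical rival model that predicts the alignment perfectly, and give that rival a prior of one in a thousand to stand in for the weight of all the other evidence:

```python
# Toy Bayesian update; every number here is illustrative.
p_data_given_lcdm = 0.01   # the quoted p-value, read generously as a likelihood
p_data_given_rival = 1.0   # hypothetical model predicting the alignment perfectly
prior_rival = 1e-3         # prior reflecting everything else LCDM gets right

posterior_rival = (p_data_given_rival * prior_rival) / (
    p_data_given_rival * prior_rival
    + p_data_given_lcdm * (1.0 - prior_rival))

print(f"posterior probability of the rival: {posterior_rival:.2f}")  # ~0.09
# Even on these maximally charitable assumptions, Lambda-CDM is still
# favoured at better than ten to one.
```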

So my prior in favour of $\Lambda$CDM is pretty high — 19 quasars and an analysis so full of holes are not going to change that so quickly.