Wednesday 8 August 2012

Shiny Dark Matter

A couple of months ago, I discussed a recent paper by Christoph Weniger that claimed strong evidence for dark matter in data from the Large Area Telescope (LAT) on the Fermi satellite.  In short, the Fermi LAT detects gamma rays: light of very short wavelength and very high energy.  Looking at gamma rays from the centre of the galaxy, Weniger claimed to see a feature in the spectrum:
[Figure: Weniger's main result.  Black points are the data with error bars; the green line is a featureless background; and the red line is the combination of that background and the proposed signal, which is shown in blue at the bottom.]
Such a feature is difficult to produce through conventional astrophysics (stars, pulsars, etc.), but easy to produce from dark matter.  For example, if two dark matter particles annihilate into two photons, then conservation of energy and momentum forces those two photons to have the same energy, equal to the mass-energy of the dark matter particle.  Throw in the effect of experimental resolution and we get something that looks like the above: a small peak in the spectrum.
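To make that concrete, here is a minimal numerical sketch of how a monochromatic line turns into a bump like the one above.  The 130 GeV mass, the power-law background, and the 10% energy resolution are all illustrative assumptions of mine, not values taken from Weniger's fit:

    import numpy as np

    # Toy gamma-ray spectrum: a featureless power-law background plus a
    # line from chi chi -> gamma gamma, smeared by the detector's energy
    # resolution.  All numbers are illustrative assumptions.
    m_chi = 130.0           # assumed dark matter mass in GeV; line sits at E = m_chi
    sigma_E = 0.10 * m_chi  # assumed ~10% fractional energy resolution

    energies = np.linspace(20.0, 300.0, 500)   # photon energies in GeV

    # Featureless power-law background (arbitrary normalisation and index)
    background = 1e3 * energies ** -2.6

    # A delta function at E = m_chi becomes a Gaussian after smearing
    line = 0.1 * np.exp(-0.5 * ((energies - m_chi) / sigma_E) ** 2) \
           / (np.sqrt(2.0 * np.pi) * sigma_E)

    total = background + line
    print(f"excess peaks near {energies[np.argmax(total - background)]:.0f} GeV")
    # Plotting total and background on log-log axes reproduces a small bump:
    # import matplotlib.pyplot as plt
    # plt.loglog(energies, total); plt.loglog(energies, background); plt.show()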

Since Weniger's original paper there has been a lot of work done.  In summary, this feature stands up to reanalysis but is not statistically strong enough to claim a true discovery.  Non-dark matter explanations have been offered, but are not compelling.  However, the dark matter explanation has problems of its own; the signal seems to be too big.


There are two reasons for this.  To understand the first, remember that dark matter is dark: it does not directly interact with photons.  We can, however, get an effective interaction through intermediate particles.  For example, assume that dark matter is a fermion that couples to W bosons.  Then we have the following Feynman diagram:
[Feynman diagram: two dark matter particles annihilating to two photons through a loop of W bosons.]
This leads to the desired process, two dark matter particles annihilating to two photons, thanks to the W being electrically charged.  However, the same dark matter coupling also leads to a different process:
[Feynman diagram: two dark matter particles annihilating directly to a $W^+ W^-$ pair.]
So dark matter can annihilate to a pair of W bosons as well as a pair of photons.

Still, we have the coupling to photons, so what's the problem?  Well, the annihilation to W bosons is much bigger:
$ \frac{\sigma(\chi\chi \to \gamma\gamma)}{\sigma(\chi\chi \to W^+ W^-)} \sim \alpha^2 \sim 10^{-4}$
where $\alpha$ is the fine structure constant, essentially the square of the electric charge in dimensionless units.  The Ws will then decay.  The decay chain can be long and complicated, but the short version is that further photons are frequently produced along the way (particularly from neutral pions).  These photons have much less energy, as the mass-energy of the dark matter particles is shared out among many final-state particles.  But we produce a lot more of them, thanks to producing so many Ws; and when we look for those low-energy photons, we don't see them.
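For the record, the arithmetic behind that estimate is just

$ \alpha \approx \frac{1}{137} \quad\Rightarrow\quad \alpha^2 \approx 5 \times 10^{-5} $

which is the origin of the $10^{-4}$ order-of-magnitude suppression above.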

Now, you might wonder if this argument is specific to the W, but it isn't.  With the exception of the electron and muon, every other electrically charged Standard Model state will decay and produce enough low-energy photons that we would have expected to see them.  Even the electron and muon final states are in some tension with the lack of any observed signal.

The other problem with the hypothesized dark matter-photon cross section relates to the origin of dark matter in the Universe, and in particular the origin of the amount of it.  The most popular explanation is that in the early Universe, much less than a second after the Big Bang, dark matter and the Standard Model were in thermal equilibrium.  The density of dark matter was roughly constant, but individual dark matter particles were continually being created and destroyed.  As the Universe expanded, it cooled, until there was not enough energy to produce new dark matter particles; they continued to annihilate, lowering their density.  A combination of this lowering and the expansion of the Universe in turn led to the dark matter particles being too far apart to annihilate, and they froze out.  From then on, the number of dark matter particles in the Universe was fixed.
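The standard back-of-envelope result of this freeze-out calculation (found in any cosmology textbook) connects today's density directly to the annihilation cross section:

$ \Omega_\chi h^2 \approx \frac{3 \times 10^{-27}\,\mathrm{cm}^3\,\mathrm{s}^{-1}}{\langle \sigma v \rangle} $

Setting $\Omega_\chi h^2$ to the observed value of roughly 0.1 gives the famous thermal cross section, $\langle \sigma v \rangle \approx 3 \times 10^{-26}\,\mathrm{cm}^3\,\mathrm{s}^{-1}$.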

The key point of this process is that the modern density of dark matter is not set by some initial fine-tuning, but only by the dark matter's own properties.  For a given dark matter mass and interaction, we can predict how much dark matter there should be today.  Stronger dark matter interactions let dark matter continue to annihilate for longer, and so lead to lower densities in the modern Universe.  For Weniger's candidate particle, the cross section for annihilation to two photons is roughly one-tenth of the cross section needed to reproduce the observed density.  If we add another annihilation channel that is ten thousand times larger, it leads to far too little dark matter to match observations.
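Putting numbers to this, a minimal sketch using the thermal cross section from above (the one-tenth and ten-thousand factors are the estimates quoted in the text):

    # Back-of-envelope check of the abundance tension.  Omega scales as
    # 1/<sigma v>: stronger annihilation means less dark matter today.
    sigma_thermal = 3e-26             # cm^3/s, gives the observed abundance
    sigma_gg = 0.1 * sigma_thermal    # Weniger-sized photon-line cross section
    sigma_WW = 1e4 * sigma_gg         # accompanying W+W- channel, ~1/alpha^2 bigger
    sigma_total = sigma_gg + sigma_WW

    # Predicted abundance relative to the observed one
    omega_ratio = sigma_thermal / sigma_total
    print(f"Omega_predicted / Omega_observed ~ {omega_ratio:.0e}")   # ~ 1e-03

A factor of a thousand too little dark matter is the problem in a nutshell.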

Of these two arguments, the first is stronger, as it makes fewer assumptions.  There are some loopholes in the latter, based on producing dark matter some other way, or on having multiple dark matter particles.  I'm not going to pursue those points further here.  Instead, I'm going to talk about a recent paper by Sean Tulin, Hai-Bo Yu and Kathryn Zurek.  In this paper, they outline three basic ways to get dark matter that enhances the production of the two-photon final state, while matching other observations and getting the correct total cross section for the observed abundance.  They also give toy models for each of the different options, to show that it can all work.

The first possibility is one that has actually been studied a lot, because it shows up in supersymmetry.  It is called coannihilation, and involves a second particle not much heavier than the dark matter particle.  Let us call the dark matter particle $\chi_1$, and this second particle $\chi_2$.  Instead of the dark matter density being set by the annihilation of $\chi_1$ with itself, it is the process $\chi_1 \chi_2 \to$ Standard Model that is important in the early Universe.  The two particles need to be close in mass so that the $\chi_2$ is still around when the $\chi_1$ is freezing out.  By today, however, all the $\chi_2$ has decayed, so the only processes that lead to photons come from $\chi_1 \chi_1 \to$ Standard Model.
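Schematically, the freeze-out calculation then uses an effective cross section in which each channel is weighted by how many of each particle are around.  As a rough sketch (the notation is mine, and I'm ignoring internal degrees of freedom), with $\Delta = (m_2 - m_1)/m_1$ and $x = m_1/T$:

$ \langle \sigma v \rangle_\mathrm{eff} \sim \sigma_{11} + 2\,\sigma_{12}\, e^{-\Delta x} + \sigma_{22}\, e^{-2 \Delta x} $

With freeze-out at $x \sim 20$ and $\Delta \sim 0.1$, the Boltzmann factor $e^{-\Delta x}$ is only mildly suppressed, so the $\chi_1 \chi_2$ channel can dominate freeze-out even though it is completely absent today.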

By itself, this does not get you anything, so Tulin et al give two explicit models to show why it can help.  In the first, they observe that while dark matter must be electrically neutral, it can have a magnetic moment, i.e. behave like a tiny magnet.  This interaction allows dark matter to annihilate dominantly to two photons, so you would not expect to see anything else.  But now you have a different problem: your prediction for the dark matter abundance today is too large.  This is where coannihilation saves you; the extra way to remove dark matter from the Universe lowers the amount to what we actually observe.
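In Lagrangian language, a magnetic moment for a Dirac fermion $\chi$ is the standard dipole operator, with the moment $\mu_\chi$ a free parameter (the notation here is mine, not the paper's):

$ \mathcal{L} \supset \frac{\mu_\chi}{2}\, \bar{\chi}\, \sigma^{\mu\nu} \chi\, F_{\mu\nu} $

Because this operator couples $\chi$ directly to the photon field strength $F_{\mu\nu}$, annihilation to photons needs no loop at all.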

The other approach is in a sense simpler: have the dark matter partner itself be charged, so that loops involving it contribute to the dark matter annihilation to photons.  For example, if the dark matter and its partner annihilate to the Standard Model through some intermediary $\phi$:
[Feynman diagram: $\chi_1 \chi_2$ annihilating to Standard Model particles through the intermediary $\phi$.]

then we automatically get the following diagram:
[Feynman diagram: $\chi_1 \chi_1 \to \gamma\gamma$ through a loop involving the charged partner $\chi_2$.]
This can boost the rate for dark matter annihilation to photons sufficiently.  This will again be the dominant annihilation channel for the dark matter, with coannihilation serving to get the right abundance.

One thing to note in the last example was that the dark matter partner contributed to the loop production of photons.  It cannot be produced in the modern Universe, because it is heavier than the dark matter, and the dark matter particles don't have enough energy to make it.  This idea offers another way to get the desired enhancement.  We return to the case with a single dark matter particle, but we add some charged particle F.  Dark matter can annihilate to photons through loops involving F, but cannot annihilate directly to F due to insufficient energy.  One possibility is given by the diagrams
[Feynman diagrams: left, the kinematically forbidden tree-level annihilation to F pairs; right, the allowed loop-level annihilation to two photons with F running in the loop.]
The process on the left is kinematically forbidden: the dark matter has insufficient energy.  The diagram on the right is allowed, and boosts the photon production.  This approach is honest, in that the dark matter annihilation continues to set its abundance; we're simply cranking up the fraction of the time that we produce photons.
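The kinematics behind "forbidden" is just a threshold condition; for dark matter annihilating essentially at rest,

$ 2 m_\chi < 2 m_F \quad\Rightarrow\quad \chi\chi \to F\bar{F} \text{ is forbidden} $

The loop process is unaffected, because the F running in the loop is virtual and never needs to be produced on-shell.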

The last possibility can also be considered a variation on the coannihilation idea.  This time, instead of a separate particle, we have the dark matter antiparticle playing the role of the dark matter partner.  This is asymmetric dark matter, where today most dark matter is found as particles rather than antiparticles.  The idea has been somewhat in vogue recently, though it's actually quite old.  It also mirrors ordinary matter: there are lots of protons in the Universe today, but very few antiprotons.

The point of the asymmetric dark matter model is that the abundance today is no longer set by the annihilation cross section, but rather by the initial generation of more dark matter particles than antiparticles.  How this initial asymmetry is created varies from model to model, and is irrelevant here.  All we care about is that, with the annihilation rate now unconstrained, we can make it much larger, explaining Weniger's claim straightforwardly.  You still have the problem of the low-energy photons, which Tulin et al avoid by having the annihilation to anything other than photons be velocity suppressed.  In the modern Universe, with dark matter having small velocities $v \sim 10^{-3} c$, this is sufficient.  However, I find this last idea unsatisfying.  Once you completely decouple the dark matter density from its annihilation rates, it is hard to still call it thermal dark matter, as the title of the paper does.
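To see why the suppression is so effective, a rough estimate: for a velocity-suppressed (p-wave) cross section, $\langle \sigma v \rangle \propto v^2$, so comparing galactic velocities today with the typical velocities at freeze-out ($v \sim 0.3 c$ is the standard estimate) gives

$ \left( \frac{10^{-3}\, c}{0.3\, c} \right)^2 \approx 10^{-5} $

The unwanted channels are suppressed by roughly five orders of magnitude today, relative to their strength in the early Universe.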

The interesting question is whether these three ideas are exhaustive.  Tulin, Yu and Zurek are wise enough to make no such claims.  I don't have any ideas myself, but if I have time I might ponder that question a little more.
