Monday 22 October 2012

Three is a Large Number

One thing I sometimes like to joke is that in physics, there are only three numbers: zero, one and infinity.  By that I mean that you can get a decent rough estimate in many cases by treating the relevant parameters as one of those three values.  The entire field of dimensional analysis involves setting numbers to be one in the appropriate units; for example, consider atomic physics.  We are in the quantum regime, so we need Planck's constant $h$; the dominant force is electromagnetism, so we'll need the vacuum permittivity $\epsilon_0$; and the electrons form the "outside" of an atom, so let's also consider the electron charge $e$ and mass $m_e$.  There's only one way to combine these quantities to get the dimensions of energy:
$\frac{m_e e^4}{\epsilon_0^2 h^2}$
Up to an overall constant, this is the Rydberg, which indeed characterises the energies of atomic physics, and which is normally derived after several weeks of quantum mechanics.
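As a quick numerical check of how well this works, here is a short sketch using scipy.constants (the factor of 8 is the overall constant that dimensional analysis cannot supply):

```python
from scipy.constants import m_e, e, epsilon_0, h, eV

# Dimensional-analysis estimate for the characteristic atomic energy scale
estimate = m_e * e**4 / (epsilon_0**2 * h**2)
print(estimate / eV)      # ~108.8 eV: the right ballpark for atomic physics

# The exact Rydberg energy carries an extra factor of 1/8
print(estimate / 8 / eV)  # ~13.6 eV: the hydrogen ionisation energy
```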

Setting things to be zero is fairly intuitive.  Small things normally have small effects, and can be ignored at first; correcting for them being non-zero is then precisely a perturbation series.  Interestingly, setting numbers to infinity is quite similar: there are plenty of situations where the mathematics can be solved exactly when a coupling $g$ goes to infinity, and the corrections then come as a series in inverse powers of $g$.  A somewhat different example is the strong interaction, which has three colours (analogous to the single electric charge).  Before I was born, the Dutch physicist Gerard 't Hooft successfully analysed the strong interaction by setting the number of colours to infinity.  Despite three not being a very big number, the approximation worked.

In a similar vein, we have the paper I want to discuss today.  Like 't Hooft, Bai and Torroba are approximating a number that equals three by infinity.  Instead of gauge interactions and colour, they have chosen to look at flavour and the number of generations.

Let us first clarify what is meant by a generation in particle physics.  The matter fields of the Standard Model exhibit an interesting multiplicity that at first seems unnecessary.  You, me and everything we can see are made up of electrons, up quarks and down quarks; the latter two in the form of the composite proton and neutron.  The weak interaction also needs the electron neutrino for consistency.

A theory that contains only those particles is perfectly fine, so it is at first surprising that nature copies it twice.  By that I mean in addition to, for example, the electron, there are two other particles—the muon and the tau—with exactly the same properties except for being heavier.  Similarly, the up is partnered by the charm and top; the down by the strange and bottom; and the electron neutrino by the muon and tau neutrinos.  Each set of particles is called a generation.

Why we have these generations is not fully known.  We know that for there to be a difference between particles and anti-particles, there must be at least three generations; and plenty of models have attempted to explain them.  But as yet there is no clear winner.  In a sense, the famous question asked when the muon was first discovered—"Who ordered that?!"—remains unanswered.

The paper I'm looking at today did not attempt to address this problem.  Rather, it looked at a different feature of this structure: the existence of mixing between generations.  The fact that the electron, muon and tau have identical properties makes it possible for other particles to couple not to the individual particles, but to a quantum superposition of them.  A hypothetical heavy Higgs-like particle could, for example, couple strongly to an equal superposition of the electron and muon, but weakly to an equal superposition of the muon and tau.  When this happens, we can choose to write the interaction in one of two natural ways: either in terms of the physical particles, but with complicated couplings; or with simple couplings, but between states that are superpositions of the physical particles.  The relation between the two bases is known as the mixing for that coupling.
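As a toy illustration (a hypothetical two-flavour version of that example, nothing in the Standard Model), the two bases are related by a rotation:
$\begin{pmatrix} e' \\ \mu' \end{pmatrix} = \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix} \begin{pmatrix} e \\ \mu \end{pmatrix}$
where $e$ and $\mu$ are the physical particles, the primed states are the superpositions with simple couplings, and the single mixing angle $\theta$ relates the two descriptions.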

In the Standard Model, the only type of such mixing that can occur is with the W boson, which couples the electron/muon/tau to the neutrinos.  It is more natural to describe the mixing as taking place among the neutrinos.  This mixing is mathematically described by a three-by-three unitary matrix, but only four of its parameters are physically meaningful: three mixing angles and one complex phase.  Of these, the three mixing angles have been measured, the last only this year at the Daya Bay experiment.
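For reference, the standard way to write such a matrix in terms of those parameters (with $c_{ij} = \cos\theta_{ij}$ and $s_{ij} = \sin\theta_{ij}$) is:
$U = \begin{pmatrix} c_{12}c_{13} & s_{12}c_{13} & s_{13}e^{-i\delta} \\ -s_{12}c_{23}-c_{12}s_{23}s_{13}e^{i\delta} & c_{12}c_{23}-s_{12}s_{23}s_{13}e^{i\delta} & s_{23}c_{13} \\ s_{12}s_{23}-c_{12}c_{23}s_{13}e^{i\delta} & -c_{12}s_{23}-s_{12}c_{23}s_{13}e^{i\delta} & c_{23}c_{13} \end{pmatrix}$
with $\theta_{12}$, $\theta_{23}$ and $\theta_{13}$ the three mixing angles and $\delta$ the complex phase.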

As an aside, the W also leads to mixing among the quarks, most conveniently taken as down-strange-bottom mixing.  All four parameters of that matrix have been measured, and it's interesting—and slightly confusing—that the two matrices have different structure.  The quark mixing angles are small, with the first and third generations mixing the least.  The neutrino mixing angles are large—one is nearly maximal—and all generations seem to mix equally.  The reason for this is another unanswered question.

Bai and Torroba decided to approach this problem by ignoring the quarks and simply thinking about the neutrinos.  What if whatever physics determines the mixing matrix does so randomly?  Can this tell us anything?  To this end, the relevant point is that random matrix theory simplifies in the limit that the matrices become infinite-dimensional; that is, when the number of generations is very large.  This is where the "three equals infinity" approximation comes in.

Now, I don't really know a great deal about random matrix theory.  As I understand it, though, the essential idea was to take existing results from the literature, together with a little numerical simulation, and see what they imply for the neutrinos.  And the results are fairly interesting.  It seems that a random origin for the neutrino mixing is quite plausible, but only for certain types of neutrino spectra and certain origins of neutrino masses.  This makes the approach falsifiable not only in principle, but in the near future too.
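To give a flavour of the kind of numerical experiment involved, here is a minimal sketch (my own illustration, not the authors' code): draw three-by-three unitary matrices uniformly at random, which is the Haar measure, and look at the mixing angles they imply.

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_unitary(n):
    # QR-decompose a complex Gaussian matrix; fixing the phases of R's
    # diagonal makes Q uniformly (Haar) distributed over unitary matrices
    z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def mixing_angles(u):
    # Angles of the standard parameterization, read off from moduli of entries
    t13 = np.arcsin(np.abs(u[0, 2]))
    t12 = np.arctan2(np.abs(u[0, 1]), np.abs(u[0, 0]))
    t23 = np.arctan2(np.abs(u[1, 2]), np.abs(u[2, 2]))
    return t12, t23, t13

angles = np.array([mixing_angles(haar_unitary(3)) for _ in range(10_000)])
print(np.degrees(np.median(angles, axis=0)))  # typically all large: tens of degrees
```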

To understand this further, we need to examine where the neutrino mixing matrices come from.  To this end, note that if the neutrinos were massless, the only way to tell them apart would be through their interactions with the W.  There would then be only one physically meaningful basis, the one defined by the W couplings, and hence no mixing, since mixing requires two physically meaningful descriptions.  It follows that the mixing matrices must be intrinsically related to the masses of the neutrinos (and, as it happens, also of the electron etc).

For this reason it is useful to note what is known about the neutrino spectrum.  We do not know the overall scale; that is, we do not know the mass of any of the three neutrinos.  What we do know, from oscillation experiments, are the differences of the squared masses between two pairs of neutrinos.  From this, we find three consistent spectra:

  • The normal hierarchy, with the two states closest in mass the lightest; called normal because it is the pattern seen for the quarks and electron/muon/tau.
  • The inverted hierarchy, with the two states closest in mass the heaviest.
  • The degenerate hierarchy, with the mass differences small compared to the overall masses.
Getting one of these spectra is the first test any model must face.
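To make the three options concrete, here is a small sketch that builds each spectrum from the two measured splittings given an assumed lightest mass.  The numerical values are approximate, illustrative numbers of the right size that I am supplying; they are not from the paper:

```python
import numpy as np

# Approximate mass-squared splittings, in eV^2 (illustrative values)
DM2_SOLAR = 7.5e-5   # the smaller, "solar" splitting
DM2_ATM = 2.4e-3     # the larger, "atmospheric" splitting

def spectrum(m_lightest, ordering):
    if ordering == "normal":    # the two closest states are the lightest
        m1 = m_lightest
        m2 = np.sqrt(m1**2 + DM2_SOLAR)
        m3 = np.sqrt(m1**2 + DM2_ATM)
    else:                       # inverted: the two closest states are the heaviest
        m3 = m_lightest
        m2 = np.sqrt(m3**2 + DM2_ATM)
        m1 = np.sqrt(m2**2 - DM2_SOLAR)
    return m1, m2, m3

print(spectrum(1e-3, "normal"))    # hierarchical, normal ordering
print(spectrum(1e-3, "inverted"))  # hierarchical, inverted ordering
print(spectrum(0.2, "normal"))     # large lightest mass: quasi-degenerate
```

With a large enough lightest mass, either ordering slides into the degenerate spectrum.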


The simplest way for the neutrinos to get mass is from the Higgs.  This is a little trickier than for the other particles of the Standard Model, as we have to use a non-renormalisable interaction:
$\frac{(H L)^2}{\Lambda}$
with H the Higgs, L the lepton doublet (which includes the neutrino) and Λ some high energy scale.  Without worrying about this too much, we can just replace the Higgs by its vacuum expectation value and obtain a Majorana mass for the neutrinos.  Treating that mass matrix as random, we find that the neutrinos should be roughly equally spaced in mass, with one very light state.  Since this does not describe the observed neutrino spectrum, it is not worth pursuing this case further.
[Figure: unsuccessful spectrum from a random neutrino Majorana mass]
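A quick way to probe this kind of claim numerically (my own sketch, not the paper's calculation): a Majorana mass matrix is complex symmetric, and the physical masses are its singular values (the Takagi decomposition), so we can draw random matrices and look at the typical mass ratios.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_majorana_masses(n):
    # Majorana mass matrices are complex symmetric; the physical masses
    # are their singular values
    a = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    m = (a + a.T) / 2
    return np.sort(np.linalg.svd(m, compute_uv=False))

masses = np.array([random_majorana_masses(3) for _ in range(10_000)])
print(np.median(masses / masses[:, -1:], axis=0))  # typical m1 : m2 : m3 pattern
```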


The most popular origin of neutrino masses is the seesaw mechanism.  There are several variants, but they all share the defining trait that gives them their name: the existence of a heavy partner of the neutrino, such that as the partner gets heavier, the neutrinos get lighter.  This is convenient, since the neutrinos are so much lighter than the other particles that it would be nice to have an explanation.  Bai and Torroba only considered the simplest seesaw model.  They found that even in this case it is possible to fit the observations quite easily, but only for the normal hierarchy.  Random matrix models generically make the lightest state very light, typically suppressed by $1/N^3$ relative to the most massive state; the intermediate state is also somewhat suppressed, down by $1/N$.  As for the mixing angles, these too seem to fit quite easily.  The smallest mixing angle usually comes out of order $1/N$, which is a bit large compared to observations but not disastrously so.
[Figure: normal hierarchy spectrum from a random seesaw]
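And the corresponding sketch for the seesaw case.  The type I seesaw formula $m_\nu \simeq -m_D M_R^{-1} m_D^T$ is standard; treating both the Dirac matrix $m_D$ and the heavy Majorana matrix $M_R$ as random Gaussian matrices is my reading of the setup, so take the details as illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

def random_complex(n):
    return rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

def seesaw_masses(n):
    # Type I seesaw: m_nu ~ -m_D M_R^{-1} m_D^T
    m_d = random_complex(n)
    a = random_complex(n)
    m_r = (a + a.T) / 2          # complex symmetric heavy Majorana matrix
    m_nu = -m_d @ np.linalg.inv(m_r) @ m_d.T
    return np.sort(np.linalg.svd(m_nu, compute_uv=False))

masses = np.array([seesaw_masses(3) for _ in range(10_000)])
# If the paper's scaling holds, the lightest state should come out
# strongly suppressed relative to the heaviest
print(np.median(masses / masses[:, -1:], axis=0))
```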

The interesting things here are the following.  First, this model postdicts a non-zero smallest mixing angle, whereas much of the model building of the last decade or so tried to get a zero for that parameter.  Second, it predicts a normal hierarchy, something that can potentially be ruled out in the next few years.  And if it is not ruled out, it serves as further evidence for the seesaw mechanism, which hints at physics at very high energy scales, well beyond anything we can hope to probe directly.  All in all, an interesting little paper.
