Particle Physics and Cosmology
Abstract: In the first Lecture, the Big Bang and the Standard Model of particle physics are introduced, as well as the structure of the latter and open issues beyond it. Neutrino physics is discussed in the second Lecture, with emphasis on models for neutrino masses and oscillations. The third Lecture is devoted to supersymmetry, including the prospects for discovering it at accelerators or as cold dark matter. Inflation is reviewed from the viewpoint of particle physics in the fourth Lecture, including simple models with a single scalar inflaton field: the possibility that this might be a sneutrino is proposed. Finally, the fifth Lecture is devoted to topics further beyond the Standard Model, such as grand unification, baryo- and leptogenesis (which might be due to sneutrino inflaton decays) and ultra-high-energy cosmic rays (which might be due to the decays of metastable superheavy dark matter particles).
Lectures presented at the Australian National University Summer School on the New Cosmology, Canberra, January 2003
CERN-TH/2003-098, astro-ph/0305038
1 Introduction to the Standard Models
1.1 The Big Bang and Particle Physics
The Universe is currently expanding almost homogeneously and isotropically, as discovered by Hubble, and the radiation it contains is cooling as it expands adiabatically:
(1) aT ≃ constant,
where a is the scale factor of the Universe and T is the temperature. There are two important pieces of evidence that the scale factor of the Universe was once much smaller than it is today, and correspondingly that its temperature was much higher. One is the Cosmic Microwave Background [1], which bathes us in photons with a density
(2) n_γ ≃ 400 cm⁻³,
with an effective temperature T ≃ 2.7 K. These photons were released when electrons and nuclei combined to form atoms, when the Universe was some 1100 times hotter and the scale factor correspondingly 1100 times smaller than it is today. The second is the agreement of the Abundances of Light Elements [2], in particular those of ⁴He, deuterium and ⁷Li, with calculations of cosmological nucleosynthesis. For these elements to have been produced by nuclear fusion, the Universe must once have been some 10⁹ times hotter and smaller than it is today.
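The photon number density follows from the blackbody formula n_γ = (2ζ(3)/π²)(kT/ħc)³. A quick numerical check (a sketch, not from the lectures) for T = 2.725 K:

```python
import math

def photon_density_per_cm3(T_kelvin):
    """Blackbody photon number density: n = (2*zeta(3)/pi^2) * (kT/(hbar*c))^3."""
    k_B = 1.380649e-23      # Boltzmann constant, J/K
    hbar_c = 3.16153e-26    # hbar * c, J*m
    zeta3 = 1.2020569       # Riemann zeta(3)
    n_per_m3 = (2 * zeta3 / math.pi**2) * (k_B * T_kelvin / hbar_c)**3
    return n_per_m3 * 1e-6  # convert m^-3 to cm^-3

print(photon_density_per_cm3(2.725))  # ~ 410 photons per cm^3
```

This reproduces the ≃ 400 cm⁻³ quoted in the text.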
During this epoch of the history of the Universe, its energy density would have been dominated by relativistic particles such as photons and neutrinos, in which case the age of the Universe is given approximately by
(3) t ∝ 1/T²
The constant of proportionality between time and temperature is such that t ≃ 1 second when the temperature T ≃ 1 MeV, near the start of cosmological nucleosynthesis. Since typical particle energies in a thermal plasma are E ∼ T, and the Boltzmann distribution guarantees large densities of particles weighing m ≲ T, the history of the earlier Universe when T ≳ 1 MeV was dominated by elementary particles weighing an MeV or more [3].
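This inverse-square relation can be sketched numerically (a rough radiation-era estimate, t ≃ (1 MeV/T)² s, ignoring changes in the number of relativistic species):

```python
def age_seconds(T_MeV):
    """Radiation-era age estimate: t ~ 1 s * (1 MeV / T)^2."""
    return (1.0 / T_MeV) ** 2

# Nucleosynthesis epoch, T ~ 1 MeV:
print(age_seconds(1.0))   # 1.0 s
# Electroweak epoch, T ~ 100 GeV = 1e5 MeV:
print(age_seconds(1e5))   # ~ 1e-10 s
```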
The landmarks in the history of the Universe during its first second presumably included the epoch when protons and neutrons were created out of quarks, when T ≃ 200 MeV and t ≃ 10⁻⁵ s. Prior to that, there was an epoch when the symmetry between weak and electromagnetic interactions was broken, when T ≃ 100 GeV and t ≃ 10⁻¹⁰ s. Laboratory experiments with accelerators have already explored physics at energies E ≲ 100 GeV, and the energy range up to E ∼ 1000 GeV, corresponding to the history of the Universe when t ≳ 10⁻¹² s, will be explored at CERN’s LHC accelerator that is scheduled to start operation in 2007 [4]. Our ideas about physics at earlier epochs are necessarily more speculative, but one possibility is that there was an inflationary epoch when the age of the Universe was somewhere between about 10⁻⁴⁰ and 10⁻¹⁰ s.
We return later to possible experimental probes of the physics of these early epochs, but first we review the Standard Model of particle physics, which underlies our description of the Universe since it was about 10⁻¹⁰ s old.
1.2 Summary of the Standard Model of Particle Physics
The Standard Model of particle physics has been established by a series of experiments and theoretical developments over the past century [5], including:

1897 - The discovery of the electron;

1910 - The discovery of the nucleus;

1930 - The nucleus found to be made of protons and neutrons; the neutrino postulated;

1936 - The muon discovered;

1947 - The pion and strange particles discovered;

1950’s - Many strongly-interacting particles discovered;

1964 - Quarks proposed;

1967 - The Standard Model proposed;

1973 - Neutral weak interactions discovered;

1974 - The charm quark discovered;

1975 - The τ lepton discovered;

1977 - The bottom quark discovered;

1979 - The gluon discovered;

1983 - The intermediate W± and Z⁰ bosons discovered;

1989 - Three neutrino species counted;

1994 - The top quark discovered;

1998 - Neutrino oscillations discovered.
All the above historical steps, apart from the last (which was made with neutrinos from astrophysical sources), fit within the Standard Model, and the Standard Model continues to survive all experimental tests at accelerators.
The Standard Model contains the following set of spin-1/2 matter particles:

Leptons: (ν_e, e), (ν_μ, μ), (ν_τ, τ)
Quarks: (u, d), (c, s), (t, b)

We know from experiments at CERN’s LEP accelerator in 1989 that there can only be three light neutrino species [6]:
(4) N_ν = 2.9841 ± 0.0083
which is a couple of standard deviations below 3, but that cannot be considered a significant discrepancy. I had always hoped that N_ν might turn out to be non-integer: N_ν = π would have been good, and N_ν = e would have been even better, but this was not to be! The constraint (4) is also important for possible physics beyond the Standard Model, such as supersymmetry, as we discuss later. The measurement (4) implies, by extension, that there can only be three charged leptons and hence only three generations of quarks, by analogy and in order to preserve the calculability of the Standard Model [7].
The forces between these matter particles are carried by spin-1 bosons: electromagnetism by the familiar massless photon γ, the weak interactions by the massive intermediate W± and Z⁰ bosons that weigh ≃ 80 and 91 GeV, respectively, and the strong interactions by the massless gluons. Among the key objectives of particle physics are attempts to unify these different interactions, and to explain the very different masses of the various matter particles and spin-1 bosons.
Since the Standard Model is the rock on which our quest for new physics must be built, we now review its basic features [5] and examine whether its successes offer any hint of the direction in which to search for new physics. Let us first recall the structure of the charged-current weak interactions, which have the current-current form:
(5) (G_F/√2) J^μ J_μ†
where the charged currents violate parity maximally:
(6) J_μ = ū γ_μ (1 − γ₅) d + ν̄_e γ_μ (1 − γ₅) e + …
The charged current (6) can be interpreted as a generator of a weak SU(2) isospin symmetry acting on the matter-particle doublets in (1.2). The matter fermions with left-handed helicities are doublets of this weak SU(2), whereas the right-handed matter fermions are singlets. It was suggested already in the 1930’s, and with more conviction in the 1960’s, that the structure (6) could most naturally be obtained by exchanging massive vector bosons W± with coupling g and mass m_W:
(7) G_F/√2 = g²/(8 m_W²)
In 1973, neutral weak interactions with an analogous current-current structure were discovered at CERN:
(8) (G_F/√2) J⁰^μ J⁰_μ†
and it was natural to suggest that these might also be carried by massive neutral vector bosons Z⁰.
The W± and Z⁰ bosons were discovered at CERN in 1983, so let us now review the theory of them, as well as the Higgs mechanism of spontaneous symmetry breaking by which we believe they acquire masses [8]. The vector bosons are described by the Lagrangian
(9) L = −(1/4) F^i_μν F^{i μν} − (1/4) B_μν B^μν
where F^i_μν ≡ ∂_μ W^i_ν − ∂_ν W^i_μ + g ε_{ijk} W^j_μ W^k_ν is the field strength for the SU(2) vector boson W^i_μ, and B_μν ≡ ∂_μ B_ν − ∂_ν B_μ is the field strength for a U(1) vector boson B_μ that is needed when we incorporate electromagnetism. The Lagrangian (9) contains bilinear terms that yield the boson propagators, and also trilinear and quartic vector-boson interactions.
The vector bosons couple to quarks and leptons via
(10) L_f = Σ_f f̄ i γ^μ D_μ f
where the D_μ are covariant derivatives:
(11) D_μ ≡ ∂_μ − i g (σ_i/2) W^i_μ − i g′ Y B_μ
The SU(2) piece appears only for the left-handed fermions f_L, whereas the U(1) vector boson B_μ couples to both left- and right-handed components, via their respective hypercharges Y.
The origin of all the masses in the Standard Model is postulated to be a weak doublet of scalar Higgs fields φ, whose kinetic term in the Lagrangian is
(12) L_φ = |D_μ φ|²
and which has the magic potential:
(13) V(φ) = −μ² φ†φ + λ (φ†φ)²
Because of the negative sign for the quadratic term in (13), the symmetric solution ⟨0|φ|0⟩ = 0 is unstable, and if λ > 0 the favoured solution has a nonzero vacuum expectation value which we may write in the form:
(14) ⟨0|φ|0⟩ = (1/√2) (0, v) : v = μ/√λ
corresponding to spontaneous breakdown of the electroweak symmetry.
Expanding around the vacuum: φ = ⟨0|φ|0⟩ + φ̂, the kinetic term (12) for the Higgs field yields mass terms for the vector bosons:
(15) (g² v²/4) W⁺_μ W^{−μ} + (v²/8) (g W³_μ − g′ B_μ) (g W^{3μ} − g′ B^μ)
corresponding to masses
(16) m_W± = g v / 2
for the charged vector bosons. The neutral vector bosons (W³_μ, B_μ) have a 2×2 mass-squared matrix:
(17) (v²/4) ( g²  −g g′ ; −g g′  g′² )
This is easily diagonalized to yield the mass eigenstates:
(18) Z_μ = (g W³_μ − g′ B_μ)/√(g² + g′²) : m_Z = √(g² + g′²) v/2 ; A_μ = (g′ W³_μ + g B_μ)/√(g² + g′²) : m_A = 0
that we identify with the massive Z⁰ and the massless photon γ, respectively. It is useful to introduce the electroweak mixing angle θ_W defined by
(19) sin θ_W = g′/√(g² + g′²)
in terms of the weak SU(2) coupling g and the weak U(1) coupling g′. Many other quantities can be expressed in terms of θ_W (19): for example, m_W = m_Z cos θ_W.
With these boson masses, one indeed obtains charged-current interactions of the current-current form (6) shown above, and the neutral currents take the form:
(20) J⁰_μ = Σ_f f̄ γ_μ [ I₃ (1 − γ₅) − 2 Q sin²θ_W ] f
The ratio of neutral- and charged-current interaction strengths is often expressed as
(21) ρ = m_W² / (m_Z² cos²θ_W)
which takes the value unity in the Standard Model, apart from quantum corrections (loop effects).
The previous field-theoretical discussion of the Higgs mechanism can be rephrased in more physical language. It is well known that a massless vector boson such as the photon or gluon has just two polarization states: λ = ±1. However, a massive vector boson such as the ρ has three polarization states: λ = ±1, 0. This third polarization state is provided by a spin-0 field. In order to make m_{W,Z} ≠ 0, this should have nonzero electroweak isospin I ≠ 0, and the simplest possibility is a complex isodoublet (φ⁺, φ⁰), as assumed above. This has four degrees of freedom, three of which are eaten by the W± and Z⁰ as their third polarization states, leaving us with one physical Higgs boson H. Once the vacuum expectation value |⟨0|φ|0⟩| = v/√2 is fixed, the mass of the remaining physical Higgs boson is given by
(22) m_H = √2 μ = √(2λ) v
which, since the quartic coupling λ is unknown, is a free parameter in the Standard Model.
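As an illustration (a numerical sketch, not from the lectures), one can check that the tree-level relations m_W = g v/2 and m_Z = m_W/cos θ_W reproduce the measured boson masses, taking v ≃ 246 GeV, α(m_Z) ≃ 1/128 and sin²θ_W ≃ 0.231 as inputs:

```python
import math

v = 246.22            # Higgs vacuum expectation value, GeV
alpha = 1.0 / 128.0   # electromagnetic coupling at the Z scale
sin2_thetaW = 0.2312  # electroweak mixing angle

e = math.sqrt(4 * math.pi * alpha)      # electric charge
g = e / math.sqrt(sin2_thetaW)          # SU(2) coupling, from e = g sin(theta_W)
m_W = g * v / 2                         # charged vector boson mass
m_Z = m_W / math.sqrt(1 - sin2_thetaW)  # m_Z = m_W / cos(theta_W)

print(f"m_W ~ {m_W:.1f} GeV, m_Z ~ {m_Z:.1f} GeV")  # close to 80 and 91 GeV
```

The small remaining discrepancies with the measured values are of the size of the radiative corrections discussed below.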
1.3 Precision Tests of the Standard Model
The quantity that was measured most accurately at LEP was the mass of the Z⁰ boson [6]:
(23) m_Z = 91.1875 ± 0.0021 GeV
as seen in Fig. 1. Strikingly, m_Z is now known more accurately than the muon decay constant! Attaining this precision required understanding astrophysical effects (those of terrestrial tides on the LEP beam energy, which were ∼ 10 MeV), meteorological effects (when it rained, the water expanded the rock in which LEP was buried, again changing the beam energy), seasonal effects (variations in the level of water in Lake Geneva also caused the rock around LEP to expand and contract), and electrical effects (stray currents from the nearby electric train line affected the LEP magnets) [9].
LEP experiments also made precision measurements of many properties of the Z⁰ boson [6], such as the total cross section at the peak:
(24) σ_peak = (12π/m_Z²) Γ_ee Γ_ff / Γ_Z²
where Γ_Z is the total decay rate (the rate for decays into all final states: neutrinos, charged leptons, hadrons), and Γ_ee, Γ_ff are the partial rates into e⁺e⁻ and a final state f f̄. Eq. (24) is the classical (tree-level) expression, which is reduced by about 30% by radiative corrections. The total decay rate is given by:
(25) Γ_Z = 3 Γ_ν + Γ_e + Γ_μ + Γ_τ + Γ_had
where we expect Γ_e = Γ_μ = Γ_τ because of lepton universality, which has been verified experimentally, as seen in Fig. 2 [6]. Other partial decay rates have been measured via the branching ratios
(26) R_f ≡ Γ_ff / Γ_had
as seen in Fig. 3.
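Plugging measured partial widths into the tree-level formula (24) reproduces the familiar hadronic peak cross section (a numerical sketch; the width values Γ_ee ≃ 84 MeV, Γ_had ≃ 1744 MeV, Γ_Z ≃ 2495 MeV are assumed inputs, not quoted from the lectures):

```python
import math

m_Z = 91.19          # GeV
Gamma_ee = 0.0840    # GeV, partial width for Z -> e+ e-
Gamma_had = 1.744    # GeV, partial width for Z -> hadrons
Gamma_Z = 2.495      # GeV, total width

# Tree-level peak cross section, eq. (24), in natural units (GeV^-2)
sigma = (12 * math.pi / m_Z**2) * Gamma_ee * Gamma_had / Gamma_Z**2

GEV2_TO_NB = 3.894e5  # conversion: 1 GeV^-2 = 3.894e5 nb
print(f"sigma_peak ~ {sigma * GEV2_TO_NB:.1f} nb")  # ~ 41 nb before radiative corrections
```

The ∼ 30% radiative reduction mentioned in the text brings this down to the ∼ 30 nb actually observed at the peak.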
Also measured have been various forward-backward asymmetries in the production of leptons and quarks, as well as the polarization of τ leptons produced in Z⁰ decay, as also seen in Fig. 3. Various other measurements are also shown there, including the mass and decay rate of the W±, the mass of the top quark, and low-energy neutral-current measurements in ν-nucleon scattering and parity violation in atomic Cesium. The Standard Model is quite compatible with all these measurements, although some of them may differ by a couple of standard deviations: if they did not, we should be suspicious! Overall, the electroweak measurements tell us that [6]:
(27) sin²θ_W = 0.23148 ± 0.00017
providing us with a strong hint for grand unification, as we see later.
1.4 The Search for the Higgs Boson
The precision electroweak measurements at LEP and elsewhere are sensitive to radiative corrections via quantum loop diagrams, in particular those involving particles such as the top quark and the Higgs boson that are too heavy to be observed directly at LEP [10, 11]. Many of the electroweak observables mentioned above exhibit quadratic sensitivity to the mass of the top quark:
(28) ∝ G_F m_t²
The measurements of these electroweak observables enabled the mass of the top quark to be predicted before it was discovered, and the measured value:
(29) m_t = 174.3 ± 5.1 GeV
agrees quite well with the prediction
(30) 
derived from precision electroweak data [6]. Electroweak observables are also sensitive logarithmically to the mass of the Higgs boson:
(31) ∝ ln m_H
so their measurements can also be used to predict the mass of the Higgs boson. This prediction can be made more definite by combining the precision electroweak data with the measurement (29) of the mass of the top quark. Making due allowance for theoretical uncertainties in the Standard Model calculations, as seen in Fig. 4, one may estimate that [6]:
(32) m_H = 91 +58/−37 GeV
whereas m_H is not known from first principles in the Standard Model.
The Higgs production and decay rates are completely fixed as functions of the unknown mass m_H, enabling the search for the Higgs boson to be planned as a function of m_H [12]. This search was one of the main objectives of experiments at LEP, which established the lower limit:
(33) m_H > 114.4 GeV
that is shown as the light yellow shaded region in Fig. 4. Combining this limit with the estimate (32), we see that there is good reason to expect that the Higgs boson may not be far away. Indeed, in the closing weeks of the LEP experimental programme, there was a hint for the discovery of the Higgs boson at LEP with a mass m_H ≃ 115 GeV, but this could not be confirmed [13]. In the future, experiments at the Fermilab Tevatron collider and then the LHC will continue the search for the Higgs boson. The latter, in particular, should be able to discover it whatever its mass may be, up to the theoretical upper limit m_H ≲ 1 TeV [4].
1.5 Roadmap to Physics Beyond the Standard Model
The Standard Model agrees with all confirmed experimental data from accelerators, but is theoretically very unsatisfactory [14, 15]. It does not explain the particle quantum numbers, such as the electric charge Q, weak isospin I, hypercharge Y and colour, and contains at least 19 arbitrary parameters. These include three independent vector-boson couplings and a possible CP-violating strong-interaction parameter, six quark and three charged-lepton masses, three generalized Cabibbo weak mixing angles and the CP-violating Kobayashi-Maskawa phase, as well as two independent masses for weak bosons.
The Big Issues in physics beyond the Standard Model are conveniently grouped into three categories [14, 15]. These include the problem of Mass: what is the origin of particle masses, are they due to a Higgs boson, and, if so, why are the masses so small; Unification: is there a simple group framework for unifying all the particle interactions, a so-called Grand Unified Theory (GUT); and Flavour: why are there so many different types of quarks and leptons, and why do their weak interactions mix in the peculiar way observed? Solutions to all these problems should eventually be incorporated in a Theory of Everything (TOE) that also includes gravity, reconciles it with quantum mechanics, explains the origin of space-time and why it has four dimensions, makes coffee, etc. String theory, perhaps in its current incarnation of M theory, is the best (only?) candidate we have for such a TOE [16], but we do not yet understand it well enough to make clear experimental predictions.
As if the above 19 parameters were insufficient to appall you, at least nine more parameters must be introduced to accommodate the neutrino oscillations discussed in the next Lecture: 3 neutrino masses, 3 real mixing angles, and 3 CP-violating phases, of which one is in principle observable in neutrino-oscillation experiments and the other two in neutrinoless double-beta decay experiments. In fact, even the simplest models for neutrino masses involve 9 further parameters, as discussed later.
Moreover, there are many other cosmological parameters that we should also seek to explain. Gravity is characterized by at least two parameters, the Newton constant and the cosmological vacuum energy. We may also want to construct a fieldtheoretical model for inflation, and we certainly need to explain the baryon asymmetry of the Universe. So there is plenty of scope for physics beyond the Standard Model.
The first clear evidence for physics beyond the Standard Model of particle physics has been provided by neutrino physics, which is also of great interest for cosmology, so this is the subject of Lecture 2. Since there are plenty of good reasons to study supersymmetry [15], including the possibility that it provides the cold dark matter, this is the subject of Lecture 3. Inflation is the subject of Lecture 4, and various further topics such as GUTs, baryo/leptogenesis and ultra-high-energy cosmic rays are discussed in Lecture 5. As we shall see later, neutrino physics may be the key to both inflation and baryogenesis.
2 Neutrino Physics
2.1 Neutrino Masses?
There is no good reason why either the total lepton number or the individual lepton flavours should be conserved. Theorists have learnt that the only conserved quantum numbers are those associated with exact local symmetries, just as the conservation of electromagnetic charge is associated with local U(1) invariance. On the other hand, there is no exact local symmetry associated with any of the lepton numbers, so we may expect nonzero neutrino masses.
However, so far we have only upper experimental limits on neutrino masses [17]. From measurements of the endpoint in Tritium β decay, we know that:
(34) m_νe ≲ 2.5 eV
which might be improved down to about 0.5 eV with the proposed KATRIN experiment [18]. From measurements of π → μν decay, we know that:
(35) m_νμ < 190 keV
and there are prospects to improve this limit significantly. Finally, from measurements of τ decay, we know that:
(36) m_ντ < 18.2 MeV
and there are prospects to improve this limit to a few MeV.
Astrophysical upper limits on neutrino masses are stronger than these laboratory limits. The 2dF data were used to infer an upper limit on the sum of the neutrino masses of 1.8 eV [19], which has recently been improved using WMAP data to [20]
(37) Σ_ν m_ν ≲ 0.7 eV
as seen in Fig. 5. This impressive upper limit is substantially better than even the most stringent direct laboratory upper limit on an individual neutrino mass.
Another interesting laboratory limit on neutrino masses comes from searches for neutrinoless double-β decay, which constrain the sum of the neutrinos’ Majorana masses weighted by their couplings to electrons [21]:
(38) ⟨m_ν⟩ ≡ |Σ_i m_νi U_ei²| ≲ 0.35 eV
which might be improved to ≃ 0.01 eV in a future round of experiments.
Neutrinos have been seen to oscillate between their different flavours [22, 23], showing that the separate lepton flavours are indeed not conserved, though the conservation of total lepton number is still an open question. The observation of such oscillations strongly suggests that the neutrinos have different masses.
2.2 Models of Neutrino Masses and Mixing
The conservation of lepton number is an accidental symmetry of the renormalizable terms in the Standard Model Lagrangian. However, one could easily add to the Standard Model non-renormalizable terms that would generate neutrino masses, even without introducing any new fields. For example, a non-renormalizable term of the form [24]
(39) (1/M) (ν H) (ν H)
where M is some large mass beyond the scale of the Standard Model, would generate a neutrino mass term:
(40) m_ν = ⟨0|H|0⟩² / M
However, a new interaction like (39) seems unlikely to be fundamental, and one should like to understand the origin of the large mass scale M.
The minimal renormalizable model of neutrino masses requires the introduction of weak-singlet ‘right-handed’ neutrinos N. These will in general couple to the conventional weak-doublet left-handed neutrinos via Yukawa couplings Y_ν that yield Dirac masses m_D = Y_ν ⟨0|H|0⟩. In addition, these ‘right-handed’ neutrinos N can couple to themselves via Majorana masses M that may be ≫ m_W, since they do not require electroweak symmetry breaking. Combining the two types of mass term, one obtains the seesaw mass matrix [25]:
(41) ( 0  m_D ; m_D^T  M )
where each of the entries should be understood as a 3 × 3 matrix in generation space.
In order to provide the two measured differences in neutrino masses-squared, there must be at least two nonzero light neutrino masses, and hence at least two heavy singlet neutrinos N [26, 27]. Presumably, all three light neutrino masses are nonzero, in which case there must be at least three N. This is indeed what happens in simple GUT models such as SO(10), but some models [28] have more singlet neutrinos [29]. In this Lecture, for simplicity we consider just three N.
The effective mass matrix for light neutrinos in the seesaw model may be written as:
(42) m_ν = m_D (1/M) m_D^T
where we have used the relation m_D = Y_ν v with v ≡ ⟨0|H|0⟩. Taking m_D ∼ m_q or m_ℓ and requiring light neutrino masses ∼ 10⁻¹ to 10⁻³ eV, we find that heavy singlet neutrinos weighing ∼ 10¹⁰ to 10¹⁵ GeV seem to be favoured.
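The order of magnitude can be checked directly (a one-generation sketch; the input values are illustrative assumptions, not taken from the lectures): a Dirac mass near the top-quark mass and a singlet mass near the GUT scale give a light eigenvalue m_D²/M in the right range:

```python
def light_neutrino_mass_eV(m_D_GeV, M_GeV):
    """One-generation seesaw estimate: m_nu ~ m_D^2 / M, converted from GeV to eV."""
    return (m_D_GeV**2 / M_GeV) * 1e9  # 1 GeV = 1e9 eV

# Dirac mass ~ top-quark mass, heavy singlet mass ~ 3e14 GeV (illustrative):
print(light_neutrino_mass_eV(175.0, 3e14))  # ~ 0.1 eV
```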
It is convenient to work in the field basis where the charged-lepton masses and the heavy singlet-neutrino masses M are real and diagonal. The seesaw neutrino mass matrix (42) may then be diagonalized by a unitary transformation U:
(43) U^T m_ν U = m_ν^diag
This diagonalization is reminiscent of that required for the quark mass matrices in the Standard Model. In that case, it is well known that one can redefine the phases of the quark fields [30] so that the mixing matrix has just one CP-violating phase [31]. However, in the neutrino case, there are fewer independent field phases, and one is left with 3 physical CP-violating parameters:
(44) U = P̃ V P₀ : P₀ ≡ diag(e^{iφ₁}, e^{iφ₂}, 1)
Here P̃ contains three phases that can be removed by phase rotations and are unobservable in light-neutrino physics, though they do play a rôle at high energies, as discussed in Lecture 5; V is the light-neutrino mixing matrix first considered by Maki, Nakagawa and Sakata (MNS) [32]; and P₀ contains 2 CP-violating phases φ₁,₂ that are observable at low energies. The MNS matrix describes neutrino oscillations and may be parametrized by three real mixing angles θ₁₂, θ₂₃, θ₁₃ and one phase δ:
(45) V = ( 1 0 0 ; 0 c₂₃ s₂₃ ; 0 −s₂₃ c₂₃ ) × ( c₁₃ 0 s₁₃ e^{−iδ} ; 0 1 0 ; −s₁₃ e^{iδ} 0 c₁₃ ) × ( c₁₂ s₁₂ 0 ; −s₁₂ c₁₂ 0 ; 0 0 1 )
where c_ij ≡ cos θ_ij and s_ij ≡ sin θ_ij. The three real mixing angles in (45) are analogous to the Euler angles that are familiar from the classic rotations of rigid mechanical bodies. The phase δ is a specific quantum effect that is also observable in neutrino oscillations, and violates CP, as we discuss below. The other CP-violating phases φ₁,₂ are in principle observable in neutrinoless double-β decay (38).
2.3 Neutrino Oscillations
In quantum physics, particles such as neutrinos propagate as complex waves. Different mass eigenstates m_i travelling with the same momenta p oscillate with different frequencies:
(46) e^{−i E_i t} : E_i = √(p² + m_i²) ≃ p + m_i²/2p for m_i ≪ p
Now consider what happens if one produces a neutrino beam of one given flavour, corresponding to some specific combination of mass eigenstates. After propagating some distance, the different mass eigenstates in the beam will acquire different phase weightings (46), so that the neutrinos in the beam will be detected as a mixture of different neutrino flavours. These oscillations will be proportional to the mixing between the different flavours, and also to the differences in massessquared between the different mass eigenstates.
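In the two-flavour limit this yields the standard vacuum oscillation probability P(ν_α → ν_β) = sin²2θ sin²(1.27 Δm²[eV²] L[km] / E[GeV]), which can be sketched as follows (the parameter values in the example are illustrative, roughly atmospheric):

```python
import math

def oscillation_probability(sin2_2theta, dm2_eV2, L_km, E_GeV):
    """Two-flavour vacuum oscillation: P = sin^2(2*theta) * sin^2(1.27 * dm^2 * L / E)."""
    return sin2_2theta * math.sin(1.27 * dm2_eV2 * L_km / E_GeV) ** 2

# Near-maximal mixing, dm^2 ~ 2.5e-3 eV^2, an upward-going neutrino
# crossing the Earth (L ~ 10000 km) at E ~ 1 GeV:
print(oscillation_probability(1.0, 2.5e-3, 1.0e4, 1.0))
```

The probability depends on both the mixing (sin²2θ) and the mass-squared difference, exactly as described in the text.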
The first of the mixing angles in (45) to be discovered was θ₂₃, in atmospheric neutrino experiments. Whereas the numbers of downward-going atmospheric ν_μ were found to agree with Standard Model predictions, a deficit of upward-going ν_μ was observed, as seen in Fig. 6. The data from the Super-Kamiokande experiment, in particular [22], favour near-maximal mixing of atmospheric neutrinos:
(47) sin²2θ₂₃ ≃ 1 : Δm²₂₃ ≃ 2.5 × 10⁻³ eV²
Recently, the K2K experiment using a beam of neutrinos produced by an accelerator has found results consistent with (47) [33]. It seems that the atmospheric ν_μ probably oscillate primarily into ν_τ, though this has yet to be established.
More recently, the oscillation interpretation of the long-standing solar-neutrino deficit has been established, in particular by the SNO experiment. Solar neutrino experiments are sensitive to the mixing angle θ₁₂ in (45). The recent data from SNO [23] and Super-Kamiokande [34] prefer quite strongly the large-mixing-angle (LMA) solution to the solar neutrino problem with
(48) Δm²₁₂ ≃ 6 × 10⁻⁵ eV² : tan²θ₁₂ ≃ 0.4
though they have been unable to exclude completely the LOW solution with lower Δm²₁₂. However, the KamLAND experiment on ν̄_e produced by nuclear power reactors has recently found a deficit of ν̄_e that is highly compatible with the LMA solution to the solar neutrino problem [35], as seen in Fig. 7, and excludes any other solution.
Using the range of θ₁₂ allowed by the solar and KamLAND data, one can establish a correlation between the relic neutrino density Ω_ν h² and the neutrinoless double-β decay observable ⟨m_ν⟩, as seen in Fig. 8 [37]. Pre-WMAP, the experimental limit on ⟨m_ν⟩ could be used to set the bound
(49) Ω_ν h² ≲ 0.1
Alternatively, now that WMAP has set a tighter upper bound (37) on the neutrino masses [20], one can use this correlation to set an upper bound:
(50) ⟨m_ν⟩ ≲ 0.1 eV
which is difficult to reconcile with the neutrinoless double-β decay signal reported in [21].
The third mixing angle θ₁₃ in (45) is basically unknown, with experiments such as Chooz [38] and Super-Kamiokande only establishing upper limits. A fortiori, we have no experimental information on the CP-violating phase δ.
The phase δ could in principle be measured by comparing the oscillation probabilities for neutrinos and antineutrinos and computing the CP-violating asymmetry [39]:
(51) A ≡ [P(ν_e → ν_μ) − P(ν̄_e → ν̄_μ)] / [P(ν_e → ν_μ) + P(ν̄_e → ν̄_μ)]
as seen in Fig. 9 [40]. This is possible only if Δm²₁₂ and sin²θ₁₂ are large enough (as now suggested by the success of the LMA solution to the solar neutrino problem), and if sin²θ₁₃ is large enough (which remains an open question).
A number of long-baseline neutrino experiments using beams from accelerators are now being prepared in the United States, Europe and Japan, with the objectives of measuring more accurately the atmospheric neutrino oscillation parameters Δm²₂₃ and sin²2θ₂₃, and demonstrating the production of ν_τ in a ν_μ beam. Beyond these, ideas are being proposed for intense ‘superbeams’ of low-energy neutrinos, produced by high-intensity, low-energy accelerators such as the SPL [41] proposed at CERN. A subsequent step could be a storage ring for unstable ions, whose decays would produce a ‘β beam’ of pure ν_e or ν̄_e neutrinos. These experiments might be able to measure δ via CP and/or T violation in neutrino oscillations [42]. A final step could be a full-fledged neutrino factory based on a muon storage ring, which would produce pure ν_μ and ν̄_e (or ν̄_μ and ν_e) beams and provide a greatly enhanced capability to search for or measure δ via CP violation in neutrino oscillations [43].
We have seen above that the effective low-energy mass matrix for the light neutrinos contains 9 parameters: 3 mass eigenvalues, 3 real mixing angles and 3 CP-violating phases. However, these are not all the parameters in the minimal seesaw model. As shown in Fig. 10, this model has a total of 18 parameters [44, 45]. The additional 9 parameters comprise the 3 masses of the heavy singlet ‘right-handed’ neutrinos M_i, 3 more real mixing angles and 3 more CP-violating phases. As illustrated in Fig. 10, many of these may be observable via renormalization in supersymmetric models [46, 45, 47, 48], which may generate observable rates for flavour-changing lepton decays such as μ → eγ and τ → μγ, and CP-violating observables such as electric dipole moments for the electron and muon. Some of these extra parameters may also have controlled the generation of matter in the Universe via leptogenesis [49], as discussed in Lecture 5.
3 Supersymmetry
3.1 Why?
The main theoretical reason to expect supersymmetry at an accessible energy scale is provided by the hierarchy problem [51]: why is m_W ≪ m_P, or equivalently why is G_F ∼ 1/m_W² ≫ G_N = 1/m_P²? Another equivalent question is why the Coulomb potential in an atom is so much greater than the Newton potential: e² ≫ G_N m² = m²/m_P², where m is a typical particle mass.
Your first thought might simply be to set m_P ≫ m_W by hand, and forget about the problem. Life is not so simple, because quantum corrections to m_H and hence m_W are quadratically divergent in the Standard Model:
(52) δm²_{H,W} ≃ O(α/π) Λ²
which is ≫ m_W² if the cutoff Λ, which represents the scale where new physics beyond the Standard Model appears, is comparable to the GUT or Planck scale. For example, if the Standard Model were to hold unscathed all the way up to the Planck mass m_P ≃ 10¹⁹ GeV, the radiative correction (52) would be 36 orders of magnitude greater than the physical values of m²_{H,W}!
In principle, this is not a problem from the mathematical point of view of renormalization theory. All one has to do is postulate a tree-level value of m²_{H,W} that is (very nearly) equal and opposite to the ‘correction’ (52), and the correct physical value may be obtained by a delicate cancellation. However, this fine tuning strikes many physicists as rather unnatural: they would prefer a mechanism that keeps the ‘correction’ (52) comparable at most to the physical value [51].
This is possible in a supersymmetric theory, in which there are equal numbers of bosons and fermions with identical couplings. Since bosonic and fermionic loops have opposite signs, the residual one-loop correction is of the form
(53) δm²_{H,W} ≃ O(α/π) |m_B² − m_F²|
which is ≲ m²_{H,W} and hence naturally small if the supersymmetric partner bosons and fermions have similar masses:
(54) |m_B² − m_F²| ≲ 1 TeV²
This is the best motivation we have for finding supersymmetry at relatively low energies [51]. In addition to this first supersymmetric miracle of removing the quadratic divergence (52) via the cancellation (53), many logarithmic divergences are also absent in a supersymmetric theory [52], a property that also plays a rôle in the construction of supersymmetric GUTs [14].
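How dramatically the cancellation (53) tames the correction (52) can be seen with a back-of-the-envelope evaluation (a numerical sketch with illustrative spartner masses):

```python
# Quadratically divergent correction: delta_m^2 ~ (alpha/pi) * Lambda^2,
# compared with the physical scale m_W^2 ~ (100 GeV)^2.
alpha_over_pi = 1.0 / (128.0 * 3.14159)
m_W2 = 100.0**2            # GeV^2

# Standard Model with the cutoff at the Planck scale:
Lambda_planck = 1.2e19     # GeV
dm2_SM = alpha_over_pi * Lambda_planck**2
print(f"SM correction / physical value: {dm2_SM / m_W2:.1e}")   # >> 1

# Supersymmetric residual ~ (alpha/pi) * |m_B^2 - m_F^2| with TeV-scale spartners:
dm2_SUSY = alpha_over_pi * abs(1000.0**2 - 800.0**2)
print(f"SUSY correction / physical value: {dm2_SUSY / m_W2:.2f}")  # O(1) or less
```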
Supersymmetry had been around for some time before its utility for stabilizing the hierarchy of mass scales was realized. Some theorists had liked it because it offered the possibility of unifying fermionic matter particles with bosonic forcecarrying particles. Some had liked it because it reduced the number of infinities found when calculating quantum corrections  indeed, theories with enough supersymmetry can even be completely finite [52]. Theorists also liked the possibility of unifying Higgs bosons with matter particles, though the first ideas for doing this did not work out very well [53]. Another aspect of supersymmetry, that made some theorists think that its appearance should be inevitable, was that it was the last possible symmetry of field theory not yet known to be exploited by Nature [54]. Yet another asset was the observation that making supersymmetry a local symmetry, like the Standard Model, necessarily introduced gravity, offering the prospect of unifying all the particle interactions. Moreover, supersymmetry seems to be an essential requirement for the consistency of string theory, which is the best candidate we have for a Theory of Everything, including gravity. However, none of these ‘beautiful’ arguments gave a clue about the scale of supersymmetric particle masses: this was first provided by the hierarchy argument outlined above.
Could any of the known particles in the Standard Model be paired up in supermultiplets? Unfortunately, none of the known fermions (quarks and leptons) can be paired with any of the ‘known’ bosons (γ, W±, Z⁰, gluons, Higgs), because their internal quantum numbers do not match [53]. For example, quarks sit in triplet representations of colour, whereas the known bosons are either singlets or octets of colour. Then again, leptons have nonzero lepton number L = 1, whereas the known bosons have L = 0. Thus, the only possibility seems to be to introduce new supersymmetric partners (spartners) for all the known particles, as seen in the Table below: quark → squark, lepton → slepton, photon → photino, Z → zino, W → wino, gluon → gluino, Higgs → higgsino. The best that one can say for supersymmetry is that it economizes on principle, not on particles!
Particle      Spin   Spartner       Spin

quark: q      1/2    squark: q̃      0
lepton: ℓ     1/2    slepton: ℓ̃     0
photon: γ     1      photino: γ̃     1/2
W             1      wino: W̃        1/2
Z             1      zino: Z̃        1/2
Higgs: H      0      higgsino: H̃    1/2
The minimal supersymmetric extension of the Standard Model (MSSM) [55] has the same vector interactions as the Standard Model, and the particle masses arise in much the same way. However, in addition to the Standard Model particles and their supersymmetric partners in the Table, the MSSM requires two Higgs doublets with opposite hypercharges in order to give masses to all the matter fermions, whereas a single Higgs doublet would have sufficed in the Standard Model. The two Higgs doublets couple via an extra coupling called μ, and it should also be noted that the ratio of the two Higgs vacuum expectation values
tan β ≡ v_2 / v_1   (55)
is undetermined and should be treated as a free parameter.
3.2 Hints of Supersymmetry
There are some phenomenological hints that supersymmetry may, indeed, appear at the TeV scale. One is provided by the strengths of the different Standard Model interactions, as measured at LEP [56]. These may be extrapolated to high energy scales, including calculable renormalization effects [57], to see whether they unify as predicted in a GUT. The answer is no, if supersymmetry is not included in the calculations. In that case, GUTs would require a ratio of the electromagnetic and weak coupling strengths, parametrized by sin²θ_W, different from what is observed (27), if they are to unify with the strong interactions. On the other hand, as seen in Fig. 11, minimal supersymmetric GUTs predict just the correct ratio of the weak and electromagnetic interaction strengths, i.e., the measured value of sin²θ_W (27).
A second hint is the fact that precision electroweak data prefer a relatively light Higgs boson weighing less than about 200 GeV [6]. This is perfectly consistent with calculations in the minimal supersymmetric extension of the Standard Model (MSSM), in which the lightest Higgs boson weighs less than about 130 GeV [58].
A third hint is provided by the astrophysical necessity of cold dark matter. This could be provided by a neutral, weakly-interacting particle weighing less than about 1 TeV, such as the lightest supersymmetric particle (LSP) [59]. This is expected to be stable in the MSSM, and hence should be present in the Universe today as a cosmological relic from the Big Bang [60, 59]. Its stability arises because there is a multiplicatively-conserved quantum number called R parity, that takes the value +1 for all conventional particles and −1 for all sparticles [53]. The conservation of R parity can be related to that of baryon number B and lepton number L, since
R = (−1)^{3B + L + 2S}   (56)
where S is the spin. There are three important consequences of R conservation:

sparticles are always produced in pairs, e.g., p̄p → q̃ g̃ X, e⁺e⁻ → μ̃⁺ μ̃⁻,

heavier sparticles decay to lighter ones, e.g., q̃ → q χ, μ̃ → μ χ, and

the lightest sparticle (LSP) is stable, because it has no legal decay mode.
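These selection rules follow from the multiplicative conservation of R = (−1)^{3B+L+2S}. As a toy illustration (the function and quantum-number bookkeeping below are mine, not from the text), one can check the R assignments of a few particles and their spartners:

```python
# Toy check of R parity, R = (-1)^(3B + L + 2S): conventional particles
# have R = +1, sparticles have R = -1. Quantum numbers are the standard ones.
def r_parity(B, L, S):
    return (-1) ** round(3 * B + L + 2 * S)

examples = {
    "quark":     (1/3, 0, 1/2),   # R = +1
    "squark":    (1/3, 0, 0),     # R = -1
    "electron":  (0, 1, 1/2),     # R = +1
    "selectron": (0, 1, 0),       # R = -1
    "photon":    (0, 0, 1),       # R = +1
    "photino":   (0, 0, 1/2),     # R = -1
}
for name, (B, L, S) in examples.items():
    print(f"{name:9s} R = {r_parity(B, L, S):+d}")
```

Since the product of R values is conserved in any allowed reaction, sparticles must appear in pairs and the LSP, carrying R = −1, has no allowed decay into R = +1 final states.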
This last feature constrains strongly the possible nature of the lightest sparticle [59]. If it had either electric charge or strong interactions, it would surely have dissipated its energy and condensed into galactic disks along with conventional matter. There it would surely have bound electromagnetically or via the strong interactions to conventional nuclei, forming anomalous heavy isotopes that should have been detected.
A priori, the LSP might have been a sneutrino partner of one of the 3 light neutrinos, but this possibility has been excluded by a combination of the LEP neutrino counting and direct searches for cold dark matter. Thus, the LSP is often thought to be the lightest neutralino χ of spin 1/2, which naturally has a relic density of interest to astrophysicists and cosmologists: Ω_χ h² = O(0.1) [59].
Finally, a fourth hint may be coming from the measured value of the muon’s anomalous magnetic moment, g_μ − 2, which seems to differ slightly from the Standard Model prediction [61, 62]. If there is indeed a significant discrepancy, this would require new physics at the TeV scale or below, which could easily be provided by supersymmetry, as we see later.
3.3 Constraints on Supersymmetric Models
Important experimental constraints on supersymmetric models have been provided by the unsuccessful direct searches at LEP and the Tevatron collider. When compiling these, the supersymmetry-breaking masses of the different unseen scalar particles are often assumed to have a universal value m_0 at some GUT input scale, and likewise the fermionic partners of the vector bosons are commonly assumed to have a universal gaugino mass m_{1/2} at the GUT scale: the so-called constrained MSSM or CMSSM.
The allowed domains in some of the (m_{1/2}, m_0) planes for different values of tan β and the sign of μ are shown in Fig. 12. The various panels of this figure feature the limit m_{χ±} ≳ 103.5 GeV provided by chargino searches at LEP [63]. The LEP neutrino counting and other measurements have also constrained the possibilities for light neutralinos, and LEP has also provided lower limits on slepton masses, of which the strongest is about 99 GeV [64], as illustrated in panel (a) of Fig. 12. The most important constraints on the squarks, the supersymmetric partners of the quarks, and on the gluinos are provided by the FNAL Tevatron collider: for equal masses, m_q̃ = m_g̃ ≳ 300 GeV. In the case of the lighter stop squark, LEP provides the most stringent limit when its mass difference from the neutralino is small, and the Tevatron for larger mass differences [63].
Another important constraint in Fig. 12 is provided by the LEP lower limit on the Higgs mass: m_h > 114 GeV [13]. Since m_h is sensitive to sparticle masses, particularly the stop mass m_t̃, via loop corrections:
δm_h² ∝ (m_t⁴ / m_W²) ln(m_t̃² / m_t²)   (57)
the Higgs limit also imposes important constraints on the soft supersymmetry-breaking CMSSM parameters, principally on m_{1/2} [67], as displayed in Fig. 12.
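To see why the Higgs limit constrains the stop sector, one can evaluate the leading one-loop top/stop correction to m_h numerically. The sketch below keeps only that term on top of an illustrative tree-level bound m_Z|cos 2β|, with assumed input values; it is a rough estimate, not a precision MSSM calculation.

```python
import math

M_Z, M_T, V_EW = 91.2, 175.0, 246.0   # GeV; illustrative input values

def mh_approx(m_stop, tan_beta):
    """Lightest-Higgs mass estimate: tree-level bound m_Z |cos 2 beta|
    plus the leading one-loop top/stop term ~ m_t^4 log(m_stop^2 / m_t^2)."""
    cos2b = math.cos(2 * math.atan(tan_beta))
    tree = (M_Z * cos2b) ** 2
    loop = (3 * M_T**4 / (2 * math.pi**2 * V_EW**2)) * math.log(m_stop**2 / M_T**2)
    return math.sqrt(tree + loop)

# Even for TeV-scale stops and large tan(beta), m_h stays below ~130 GeV:
print(f"m_h ~ {mh_approx(1000.0, 30.0):.0f} GeV")
```

This crude estimate reproduces the statement earlier in the text that the lightest MSSM Higgs boson weighs less than about 130 GeV.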
Also shown in Fig. 12 is the constraint imposed by measurements of the b → s γ decay rate [66]. These agree with the Standard Model, and therefore provide bounds on the masses of supersymmetric particles, in particular the chargino and the charged Higgs boson.
The final experimental constraint we consider is that due to the measurement of the anomalous magnetic moment of the muon, g_μ − 2. Following its first result last year [68], the BNL E821 experiment has recently reported a new measurement [61] of a_μ ≡ (g_μ − 2)/2, which deviates by about 2 standard deviations from the best available Standard Model predictions based on low-energy e⁺e⁻ → hadrons data [62]. On the other hand, the discrepancy is more like 0.9 standard deviations if one uses τ decay data to calculate the Standard Model prediction. Faced with this confusion, and remembering the chequered history of previous theoretical calculations [69], it is reasonable to defer judgement on whether there is a significant discrepancy with the Standard Model. However, either way, the measurement of g_μ − 2 is a significant constraint on the CMSSM, favouring μ > 0 in general, and a specific region of the (m_{1/2}, m_0) plane if one accepts the theoretical prediction based on e⁺e⁻ → hadrons data [70]. The regions preferred by the current experimental data and the e⁺e⁻ → hadrons data are shown in Fig. 12.
Fig. 12 also displays the regions where the supersymmetric relic density falls within the range preferred by WMAP [20]:
0.094 < Ω_CDM h² < 0.129   (58)
at the 2-σ level. The upper limit on the relic density is rigorous, but the lower limit in (58) is optional, since there could be other important contributions to the overall matter density. Smaller values of Ω_CDM h² correspond, in general, to smaller sparticle masses.
We see in Fig. 12 that there are significant regions of the CMSSM parameter space where the relic density falls within the preferred range (58). What goes into the calculation of the relic density? It is controlled by the annihilation cross section [59]:
Ω_χ h² ≃ (3 × 10⁻²⁷ cm³ s⁻¹) / ⟨σ_ann v⟩   (59)
where the typical annihilation cross section ⟨σ_ann v⟩ ∼ 1/m_χ². For this reason, the relic density typically increases with the relic mass, and this, combined with the upper bound in (58), then leads to the common expectation that the LSP should weigh no more than a few hundred GeV.
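This scaling can be made quantitative with the standard freeze-out estimate in (59). The coupling strength and unit-conversion factor below are order-of-magnitude assumptions for illustration only:

```python
ALPHA = 0.01                 # assumed electroweak-strength coupling
GEV_M2_TO_CM3_S = 1.17e-17   # (1 GeV^-2) * c, converting sigma*v to cm^3/s

def omega_h2(m_chi_gev):
    """Freeze-out estimate with <sigma_ann v> ~ alpha^2 / m_chi^2."""
    sigma_v = (ALPHA**2 / m_chi_gev**2) * GEV_M2_TO_CM3_S  # cm^3/s
    return 3e-27 / sigma_v

for m in (100.0, 200.0, 500.0):
    print(f"m_chi = {m:5.0f} GeV  ->  Omega h^2 ~ {omega_h2(m):.2f}")
```

Since Ω_χ h² grows like m_χ² in this crude picture, the upper bound in (58) translates into an upper bound of a few hundred GeV on m_χ.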
However, there are various ways in which the generic upper bound on m_χ can be increased along filaments in the (m_{1/2}, m_0) plane. For example, if the next-to-lightest sparticle (NLSP) is not much heavier than the LSP χ, the relic density may be suppressed by coannihilation: χ + NLSP → Standard Model particles [71]. In this way, the allowed CMSSM region may acquire a ‘tail’ extending to larger sparticle masses. An example of this possibility is the case where the NLSP is the lighter stau, with mass close to m_χ, as seen in Figs. 12(a) and (b) [72].
Another mechanism for extending the allowed CMSSM region to large m_χ is rapid annihilation via a direct-channel pole when m_χ is close to half the mass of the heavy neutral MSSM Higgs bosons [73, 74]. This may yield a ‘funnel’ extending to large m_{1/2} and m_0 at large tan β, as seen in panels (c) and (d) of Fig. 12 [74]. Yet another allowed region at large m_0 is the ‘focus-point’ region [75], which is adjacent to the boundary of the region where electroweak symmetry breaking is possible. The lightest supersymmetric particle is relatively light in this region.
3.4 Benchmark Supersymmetric Scenarios
As seen in Fig. 12, all the experimental, cosmological and theoretical constraints on the MSSM are mutually compatible. As an aid to understanding better the physics capabilities of the LHC and various other accelerators, as well as non-accelerator experiments, a set of benchmark supersymmetric scenarios has been proposed [76]. Their distribution in the (m_{1/2}, m_0) plane is sketched in Fig. 13. These benchmark scenarios are compatible with all the accelerator constraints mentioned above, including the LEP searches and b → s γ, and yield relic densities of LSPs in the range suggested by cosmology and astrophysics. The benchmarks are not intended to sample ‘fairly’ the allowed parameter space, but rather to illustrate the range of possibilities currently allowed.
In addition to a number of benchmark points falling in the ‘bulk’ region of parameter space at relatively low values of the supersymmetric particle masses, as seen in Fig. 13, we also proposed [76] some points along the ‘tails’ of parameter space extending out to larger masses. These clearly require some degree of fine-tuning to obtain the required relic density [77] and/or the correct Higgs mass [78], and some are also disfavoured by the supersymmetric interpretation of the g_μ − 2 anomaly, but all are logically consistent possibilities.
3.5 Prospects for Discovering Supersymmetry at Accelerators
In the CMSSM discussed here, there are just a few prospects for discovering supersymmetry at the FNAL Tevatron collider [76], but these could be increased in other supersymmetric models [79]. On the other hand, there are good prospects for discovering supersymmetry at the LHC, and Fig. 14 shows its physics reach for observing pairs of supersymmetric particles. The signature for supersymmetry, multiple jets (and/or leptons) with a large amount of missing energy, is quite distinctive, as seen in Fig. 15 [80, 81]. Therefore, the detection of the supersymmetric partners of quarks and gluons at the LHC is expected to be quite easy if they weigh less than about 2.5 TeV [82]. Moreover, in many scenarios one should be able to observe their cascade decays into lighter supersymmetric particles. As seen in Fig. 16, large fractions of the supersymmetric spectrum should be observable in most of the benchmark scenarios, although there are a couple where only the lightest supersymmetric Higgs boson would be seen [76].
Electron-positron colliders provide very clean experimental environments, with egalitarian production of all the new particles that are kinematically accessible, including those that have only weak interactions, and hence are potentially complementary to the LHC, as illustrated in Fig. 16. Moreover, polarized beams provide a useful analysis tool, and e⁻e⁻, eγ and γγ colliders are readily available at relatively low marginal costs. However, the direct production of supersymmetric particles at such a collider cannot be guaranteed [84]. We do not yet know what the supersymmetric threshold energy may be (or even if there is one!). We may well not know before the operation of the LHC, although g_μ − 2 might provide an indication [70], if the uncertainties in the Standard Model calculation can be reduced. However, if an e⁺e⁻ collider is operated above the supersymmetric threshold, it will be able to measure the sparticle masses very accurately. By combining its measurements with those made at the LHC, it may be possible to calculate accurately from first principles the supersymmetric relic density and compare it with the astrophysical value.
3.6 Searches for Dark Matter Particles
In the above discussion, we have paid particular attention to the region of parameter space where the lightest supersymmetric particle could constitute the cold dark matter in the Universe [59]. How easy would this be to detect?
One strategy is to look for relic annihilations in the galactic halo, which might produce detectable antiprotons or positrons in the cosmic rays [85]. Unfortunately, the rates for their production are not very promising in the benchmark scenarios we studied [86].
Alternatively, one might look for annihilations in the core of our galaxy, which might produce detectable gamma rays. As seen in the left panel of Fig. 17, this may be possible in certain benchmark scenarios [86], though the rate is rather uncertain because of the unknown enhancement of relic particles in our galactic core.
A third strategy is to look for annihilations inside the Sun or Earth, where the local density of relic particles is enhanced in a calculable way by scattering off matter, which causes them to lose energy and become gravitationally bound [87]. The signature would then be energetic neutrinos that might produce detectable muons. Several underwater and ice experiments are underway or planned to look for this signature, and this strategy looks promising for several benchmark scenarios, as seen in the right panel of Fig. 17 [86]. It will be interesting to have such neutrino telescopes in different hemispheres, which will be able to scan different regions of the sky for astrophysical high-energy neutrino sources.
The most satisfactory way to look for supersymmetric relic particles is directly, via their elastic scattering on nuclei in a low-background laboratory experiment [88]. There are two types of scattering matrix elements: spin-independent, which is normally dominant for heavier nuclei, and spin-dependent, which could be interesting for lighter elements such as fluorine. The best experimental sensitivities so far are for spin-independent scattering, and one experiment has claimed a positive signal [89]. However, this has not been confirmed by a number of other experiments [90]. In the benchmark scenarios the rates are considerably below the present experimental sensitivities [86], but there are prospects for improving the sensitivity into the interesting range, as seen in Fig. 18.
4 Inflation
4.1 Motivations
One of the main motivations for inflation [95] is the horizon or homogeneity problem: why are distant parts of the Universe so similar:
δT/T ≃ 10⁻⁵ ?   (60)
In conventional Big Bang cosmology, the largest patch of the CMB sky which could have been causally connected, i.e., across which a signal could have travelled at the speed of light since the initial singularity, is about 2 degrees. So how did opposite parts of the Universe, 180 degrees apart, ‘know’ how to coordinate their temperatures and densities?
Another problem of conventional Big Bang cosmology is the size or age problem. The Hubble expansion rate in conventional Big Bang cosmology is given by:
H² ≡ (ȧ/a)² = (8π G_N / 3) ρ − k/a²   (61)
where k = 0 or ±1 is the curvature. The only dimensionful coefficient in (61) is the Newton constant, G_N = 1/M_P², with M_P ≃ 1.2 × 10¹⁹ GeV. A generic solution of (61) would have a characteristic scale size of order the Planck length, ∼10⁻³³ cm, and live to the ripe old age of about one Planck time, ∼10⁻⁴³ s. Why is our Universe so long-lived and big? Clearly, we live in an atypical solution of (61)!
A related issue is the flatness problem. Defining, as usual
Ω ≡ ρ/ρ_crit ,  ρ_crit ≡ 3H²/(8π G_N)   (62)
we have
(Ω⁻¹ − 1) ρ a² = −3k/(8π G_N) = constant   (63)
Since ρ ∝ a⁻⁴ during the radiation-dominated era and ρ ∝ a⁻³ during the matter-dominated era, it is clear from (63) that Ω deviates from unity increasingly rapidly as the Universe expands: for Ω = O(1) as it is today, |Ω − 1| must have been at most O(10⁻⁶⁰) at the Planck epoch, when t ∼ 10⁻⁴³ s. The density of the very early Universe must have been very finely tuned in order for its geometry to be almost flat today.
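The size of this tuning can be illustrated with a two-line estimate. During radiation domination |Ω − 1| = |k|/(aH)² scales like 1/T², so extrapolating |Ω − 1| ≲ 1 today back to the Planck temperature (crudely ignoring the matter-dominated era) gives:

```python
T_PLANCK = 1.2e19   # GeV, Planck temperature
T_TODAY = 2.35e-13  # GeV, present CMB temperature (~2.7 K)

# |Omega - 1| at the Planck epoch, assuming pure radiation-era scaling
# |Omega - 1| ~ 1/T^2 and |Omega - 1| <~ 1 today; order of magnitude only.
tuning = (T_TODAY / T_PLANCK) ** 2
print(f"|Omega - 1| at the Planck epoch <~ {tuning:.0e}")
```

This crude scaling lands within a few orders of magnitude of the O(10⁻⁶⁰) quoted above; tracking the matter-dominated era properly shifts the exponent by a few units.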
Then there is the entropy problem: why are there so many particles in the visible Universe, S ∼ 10⁹⁰? A ‘typical’ Universe would have contained O(1) particles within its characteristic (Planck) size.
All these particles have diluted what might have been the primordial density of unwanted massive particles such as magnetic monopoles and gravitinos. Where did they go?
The basic idea of inflation [96] is that, at some early epoch in the history of the Universe, its energy density may have been dominated by an almost constant term:
(ȧ/a)² = (8π G_N / 3) V − k/a²   (64)
leading to a phase of almost de Sitter expansion. It is easy to see that the second (curvature) term in (64) rapidly becomes negligible, and that
a ∝ e^{Ht} ,  H = (8π G_N V / 3)^{1/2}   (65)
during this inflationary expansion.
It is then apparent that the horizon would have expanded (near) exponentially, so that the entire visible Universe might have been within our pre-inflationary horizon. This would have enabled initial homogeneity to have been established. The trick is not somehow to impose connections beyond the horizon, but rather to make the horizon much larger than naively expected in conventional Big Bang cosmology:
d_hor ∼ e^N H⁻¹ ≫ H⁻¹   (66)
where N is the number of e-foldings during inflation. It is also apparent that the k/a² term in (64) becomes negligible, so that the Universe is almost flat with Ω ≃ 1. However, as we see later, perturbations during inflation generate a small deviation from unity: |Ω − 1| = O(10⁻⁵). Following inflation, the conversion of the inflationary vacuum energy into particles reheats the Universe, filling it with the required entropy. Finally, the closest pre-inflationary monopole or gravitino is pushed away, further than the origin of the CMB, by the exponential expansion of the Universe.
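A quick way to see where the canonical requirement of roughly 60 e-foldings comes from: for the stretched horizon to cover today's Hubble volume one needs, very roughly, N ≳ ln(T_RH / T_0), where T_RH is the reheating temperature. The sketch below assumes a GUT-scale T_RH; both inputs are illustrative.

```python
import math

T_RH = 1e16      # GeV, assumed reheating temperature
T_0 = 2.35e-13   # GeV, CMB temperature today

# Minimum number of e-foldings, up to O(1) corrections from the details
# of reheating and the post-inflationary expansion history:
N_min = math.log(T_RH / T_0)
print(f"N >~ {N_min:.0f} e-foldings")
```

Lower reheating temperatures reduce the required N only logarithmically, which is why estimates in the literature cluster around 50 to 70.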
From the point of view of general relativity, the (near) constant inflationary vacuum energy is equivalent to a cosmological constant Λ:
R_μν − (1/2) g_μν R = 8π G_N T_μν + Λ g_μν   (67)
We may compare the right-hand side of (67) with the energy-momentum tensor of a standard fluid:
T_μν = p g_μν + (p + ρ) u_μ u_ν   (68)
where u_μ is the four-momentum vector for a comoving fluid. We can therefore write
T^vac_μν = p_Λ g_μν + (p_Λ + ρ_Λ) u_μ u_ν   (69)
where
p_Λ = −ρ_Λ = −Λ/(8π G_N)   (70)
Thus, we see that inflation has negative pressure. The value of the cosmological constant today, as suggested by recent observations [97, 98], is many orders of magnitude smaller than would have been required during inflation: the present-day vacuum energy scale is ∼10⁻¹² GeV, compared with the scale ∼10¹⁶ GeV required during inflation, as we see later.
Such a small value of the cosmological vacuum energy density is also much smaller than many identifiable contributions to it from known physics, for example those associated with the QCD vacuum condensates, electroweak symmetry breaking, and possible GUT- and Planck-scale dynamics. Particle physics offers no reason to expect the present-day vacuum energy to lie within the range suggested by cosmology, and raises the question why it is not many orders of magnitude larger.
4.2 Some Inflationary Models
The first inflationary potential to be proposed was one with a ‘double-dip’ structure à la Higgs [96]. The old inflation idea was that the Universe would have started in the false vacuum with V ≠ 0, where it would have undergone many e-foldings of de Sitter expansion. Then, the Universe was supposed to have tunnelled through the potential barrier to the true vacuum with V ≃ 0, and subsequently thermalized. The amount of inflation required before this tunnelling was
a → e^N a ,  N ≳ 60   (71)
The problem with this old inflationary scenario was that the phase transition to the new vacuum would never have been completed. The Universe would look like a ‘Swiss cheese’, in which the bubbles of true vacuum would be expanding as t or t^{1/2}, while the ‘cheese’ between them would still have been expanding exponentially as e^{Ht}. Thus, the fraction of space in the false vacuum would be
f ∼ exp(−ε H t) ,  ε ≡ Γ/H⁴   (72)
where Γ is the bubble nucleation rate per unit four-volume. The fraction f → 0 only if ε is large, but in this case there would not have been sufficient e-foldings for adequate inflation.
One of the fixes for this problem trades under the name of new inflation [99]. The idea is that the near-exponential expansion of the Universe took place in a flat region of the potential that is not separated from the true vacuum by any barrier. It might have been reached after a first-order transition of the type postulated in old inflation, in which case one can regard our Universe as part of a bubble that expanded near-exponentially inside the ‘cheese’ of old vacuum, and there could be regions beyond our bubble that are still expanding (near) exponentially. For the Universe to roll eventually downhill into the true vacuum, V could not quite be constant, and hence the Hubble expansion rate was also not quite constant during new inflation.
An example of such a scenario is chaotic inflation [100], according to which there is no ‘bump’ in the effective potential V(φ), and hence no phase transition between old and new vacua. Instead, any given region of the Universe is assumed to start with some random value of the inflaton field φ, and hence of the potential V(φ), which decreases monotonically to zero. If the initial value of φ is large enough, and the potential flat enough, (our part of) the Universe will undergo sufficient expansion.
Another fix for old inflation trades under the name of extended inflation [101]. Here the idea is that the bubble nucleation rate Γ depends on some other scalar field that varies while the inflaton is still stuck in the old vacuum. If Γ is initially small, but then changes so that it becomes large, the problem of completing the transition in the ‘Swiss cheese’ Universe is solved.
All these variants of inflation rely on some type of elementary scalar inflaton field. Therefore, the discovery of a Higgs boson would be a psychological boost for inflation, even though the electroweak Higgs boson cannot be responsible for it directly. Moreover, just as supersymmetry is well suited for stabilizing the mass scale of the electroweak Higgs boson, it may also be needed to keep the inflationary potential under control [102]. Later in this Lecture, I discuss a specific supersymmetric inflationary model.
4.3 Density Perturbations
The above description is quite classical. In fact, one should expect quantum fluctuations in the initial value of the inflaton field φ, which would cause the rollover into the true vacuum to take place inhomogeneously, and different parts of the Universe to expand differently. As we discuss below in more detail, these quantum fluctuations would give rise to a Gaussian random field of perturbations with similar magnitudes on different scale sizes, just as the astrophysicists have long wanted. The magnitudes of these perturbations would be linked to the value of the effective potential during inflation, and would be visible in the CMB as adiabatic temperature fluctuations:
δT/T ∼ O(μ² / M_P²)   (73)
where μ ≡ V^{1/4} is a typical vacuum energy scale during inflation. As we discuss later in more detail, consistency with the CMB data from COBE et al., which find δT/T ≃ 10⁻⁵, is obtained if
μ ∼ 10¹⁶ GeV   (74)
comparable with the GUT scale.
Each density perturbation can be regarded as an embryonic potential well, into which non-relativistic cold dark matter particles may fall, increasing the local contrast in the mass-energy density. On the other hand, relativistic hot dark matter particles will escape from small-scale density perturbations, modifying their rate of growth. This also depends on the expansion rate of the Universe, and hence on the cosmological constant. Present-day data are able to distinguish the effects of different categories of dark matter. In particular, as we already discussed, the WMAP and other data tell us that the density of hot dark matter neutrinos is relatively small [20]:
Ω_ν h² < 0.0076   (75)
whereas the density of cold dark matter is relatively large [20]:
Ω_CDM h² = 0.1126 (+0.0081 / −0.0091)   (76)
and the cosmological constant is even larger: Ω_Λ ≃ 0.73.
The cold dark matter amplifies primordial perturbations already while the conventional baryonic matter is coupled to radiation before (re)combination. Once this epoch is passed and the CMB decouples from the conventional baryonic matter, the baryons become free to fall into the ‘holes’ prepared for them by the cold dark matter that has fallen into the overdense primordial perturbations. In this way, structures in the Universe, such as galaxies and their clusters, may be formed earlier than they would have appeared in the absence of cold dark matter.
All this theory is predicated on the presence of primordial perturbations laid down by inflation [103], which we now explore in more detail.
There are in fact two types of perturbations, namely density fluctuations and gravity waves. To describe the first, we consider the density field ρ(x) and its perturbations δρ(x), which we can decompose into Fourier modes:
δ(x) ≡ δρ(x)/ρ = Σ_k δ_k e^{i k·x}   (77)
The density perturbation on a given scale k is then given by
δ_k ≡ ⟨|δ_k|²⟩^{1/2}   (78)
whose evolution depends on the ratio of its wavelength λ = 2π/k to the naive horizon size H⁻¹.
The evolution of small-scale perturbations with λ < H⁻¹ depends on the astrophysical dynamics, such as the equation of state, dissipation, the Jeans instability, etc.:
δ̈_k + 2(ȧ/a) δ̇_k + (v_s² k²/a² − 4π G_N ρ) δ_k = 0   (79)
where v_s is the sound speed: v_s² = ∂p/∂ρ. If the wave number k is larger than the characteristic Jeans value
k_J = a (4π G_N ρ)^{1/2} / v_s   (80)
the density perturbation oscillates like a sound wave, whereas it grows if k < k_J.
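The Jeans criterion can be checked numerically in physical (non-comoving) units, where the scale factor drops out and k_J = (4π G_N ρ)^{1/2} / v_s. The density and sound speed below are arbitrary illustrative values:

```python
import math

G_N = 6.674e-8   # Newton constant in cm^3 g^-1 s^-2

def jeans_k(rho, v_s):
    """Physical Jeans wave number: pressure support vs. self-gravity."""
    return math.sqrt(4 * math.pi * G_N * rho) / v_s

def mode(k, rho, v_s):
    """Fate of a perturbation of wave number k from the sign of the
    bracket in the evolution equation (79)."""
    return "oscillates" if k > jeans_k(rho, v_s) else "grows"

rho, v_s = 1e-24, 1e5   # g/cm^3 and cm/s, illustrative values
kJ = jeans_k(rho, v_s)
print(f"k_J ~ {kJ:.1e} cm^-1")
print(mode(10 * kJ, rho, v_s), "above k_J;", mode(0.1 * kJ, rho, v_s), "below k_J")
```

Short-wavelength modes (k > k_J) behave as sound waves, while long-wavelength modes collapse under their own gravity, which is what seeds structure formation.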