IOP SCHOOLS LECTURE.

Particles and the universe.

Peter Kalmus, Department of Physics, Queen Mary & Westfield College, London E1 4NS, UK.

The discovery that the great diversity of the universe stems from a limited number of elementary objects acting under the influence of a few fundamental forces is one of the most significant scientific achievements of the twentieth century. This is the basis of particle physics, which now has strong links with astronomy and cosmology.

We start this story with an immense explosion observed in 1987. A star 170,000 light years away in our neighbouring galaxy, the Large Magellanic Cloud, ran out of nuclear fuel and hence collapsed under the influence of its own powerful gravity. This gave rise to a tremendous explosion, giving out an almost unimaginable amount of energy: a billion, billion, billion times as much as a hydrogen bomb (1 billion = 10^9). It emitted in a few seconds a hundred times as much energy as our Sun has poured out in its entire lifetime.

It was Supernova 1987A. However, even before the light was noticed, ghostly messengers called neutrinos were detected in two huge underground particle detectors in the USA and Japan. These detectors, consisting of a few thousand tonnes of very pure water, instrumented with photomultipliers and electronics, had been built for a quite different purpose, not to detect neutrinos but to check whether protons, the nuclei of hydrogen atoms, were stable or whether they might undergo a very slow radioactive decay. No proton decays have yet been seen, but the detection of supernova neutrinos gave information both about these particles and about stellar collapse, and was a dramatic illustration of the interplay between astronomy and particle physics.

The early universe.

The biggest explosion of all was the Big Bang, the creation of the universe about 12 billion years ago. The early universe was incredible, a primordial soup of elementary particles colliding repeatedly at tremendous energies: a brilliant fireworks display. And indeed the present universe with all its beauty and complexity is merely the wisp of smoke remaining after the fireworks show.

Today's particle physics allows us, in a way, to recreate some of the conditions in the early universe, and to try to answer some of the most fundamental questions in science. Where were the nuclei and atoms of our bodies created? What are the ultimate building blocks out of which we and the universe are constructed? What are the forces through which they interact?

Those of you who spent many hours learning history, spanning perhaps a mere few thousand years, may be pleased to see the history of the universe displayed on a rather simple single graph. In figure 1 the temperature of the universe is plotted against time. Both axes are logarithmic and each unit is a thousand times the previous one. Also plotted on the vertical axis is energy per particle, which is proportional to temperature. Going backwards on the graph starting from the right, we note that our Sun was formed about five billion years ago. A bit further left, the whole universe was as hot as hell, 445 °C, the boiling point of brimstone (sulphur).
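The proportionality between temperature and energy per particle is just the Boltzmann relation, and a rough conversion (using the standard value of the Boltzmann constant, not a figure from the graph itself) shows how the two scales line up:

E \sim k_{B}T, \qquad k_{B} \approx 8.6 \times 10^{-5}\ \mathrm{eV\ K^{-1}}, \qquad\Rightarrow\qquad 1\ \mathrm{GeV} \;\leftrightarrow\; T \approx \frac{10^{9}\ \mathrm{eV}}{8.6\times10^{-5}\ \mathrm{eV\ K^{-1}}} \approx 10^{13}\ \mathrm{K}.

So the 1 GeV era mentioned below corresponds to a temperature of around ten million million degrees.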

Moving further back, atoms only formed when the universe was about 300,000 years old. Before then the universe was too hot: nuclei could not hold on to electrons. If you heat something to a few thousand degrees all the electrons are stripped away from the atoms: they become completely ionised. Before the existence of atoms, the universe was opaque: light was trapped by interaction with the electrons. Stars and galaxies have only existed in the period covered by the bottom right corner of the graph. Similarly, helium nuclei were only formed out of protons and neutrons when the universe was a few minutes old, and the protons and neutrons themselves formed at around one microsecond, when the average energy was 1 GeV, the rest energy of these particles. Before that we had the primordial soup. A tin showing the ingredients of this was passed around. The soup conditions correspond to today's largest accelerators, which have energies of around 100 GeV per elementary constituent. The soup label is shown in figure 2.

There is, however, a missing ingredient in this soup: dark matter. What is this? It may surprise you to know that we believe that most of the material in the universe, even in our neighbourhood, has not yet been detected. How do we know this? Consider our solar system. We know, from Kepler's laws, which laid the foundations for Newton's theory of gravity, that there is a simple relationship between the velocity of a planet, its distance from the Sun and the mass of the Sun. This is how we measure the mass of the Sun. If we now look at a galaxy, a collection of about 10^11 stars, we see rotation about the centre, and we note that there are many more stars in the centre than in the arms. We can measure the stars' velocities by the Doppler effect, the change in frequency of light waves when the source travels towards or away from us. We can estimate the total mass by measuring the total light. We then find that the stars are moving too fast. There must be more matter in the galaxy than we have been able to detect with light, radio waves, X-rays etc. This is the dark matter, and we don't know what it is, although there are several experiments searching for it.
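The relationship behind this argument is simply that gravity supplies the centripetal force for a circular orbit. As a rough worked sketch (the numbers are standard values rather than figures quoted in the lecture):

\frac{GMm}{r^{2}} = \frac{mv^{2}}{r} \quad\Rightarrow\quad M = \frac{v^{2}r}{G} \approx \frac{(3.0\times10^{4}\ \mathrm{m\ s^{-1}})^{2} \times 1.5\times10^{11}\ \mathrm{m}}{6.7\times10^{-11}\ \mathrm{N\ m^{2}\ kg^{-2}}} \approx 2\times10^{30}\ \mathrm{kg},

using the Earth's orbital speed and distance to recover the mass of the Sun. Applied to a galaxy, the same formula says that the orbital speed should fall off as one over the square root of the distance well outside the central bulge; the fact that the measured speeds stay roughly constant instead is the evidence for unseen matter.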

Now let us consider how nuclei are made. Hydrogen and helium, and a tiny amount of lithium, were made in the early universe. The rest came very much later. In fact nearly all other nuclei were manufactured in the centres of early stars by nuclear reactions. Some of these stars exploded, as supernovae, and polluted the local part of the cosmos with the newly created elements. Stellar systems which were born later, like our Sun and solar system, incorporated these nuclei. So every carbon, nitrogen and oxygen nucleus in your body started its existence in the centre of some, now exploded, star. You are all made of star material! By the way, it is interesting to note that the relative abundance of the various chemical elements is the same in the Sun and the Earth, if you forget about the hydrogen and helium. This indicates a common origin, but the Earth's gravity was too weak to hold on to these lightest gases.

Inside the atom.

Let us now consider the atom. The electron was discovered in 1897 and the nucleus in 1911. By the early 1930s we had a rather simple picture of the atom. It consisted of a nucleus orbited by electrons. Quantum physics tells us that we cannot locate these particles exactly, so we should think of the electrons as forming a kind of cloud. The cloud is like a time exposure: its variable density gives the relative probability of finding an electron in that region. The electrons are held in the atom by the electromagnetic attraction between the negatively charged electron and the positive nucleus. The nucleus itself is made of positive protons, whose charge we call +1, and rather similar but uncharged neutrons. The protons and neutrons, whose sizes are about 10^-15 m, are in contact like a bunch of grapes of two different sorts. They are held together by a different interaction: the strong nuclear force. The atom is much bigger, about 10^-10 m. Hence if protons and neutrons were scaled up to the size of actual grapes, the atom would be the size of a small town. The nucleus occupies only 10^-15 of the volume of an atom. A neutron star has the density of nuclear matter, and although its mass might be twice that of the Sun, it will only be about 10 km in size.
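The figure of 10^-15 for the volume follows directly from the two sizes just quoted, since volume scales as the cube of the radius:

\frac{V_{\mathrm{nucleus}}}{V_{\mathrm{atom}}} \approx \left( \frac{10^{-15}\ \mathrm{m}}{10^{-10}\ \mathrm{m}} \right)^{3} = 10^{-15}.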

The simple picture of three building blocks - proton, neutron, electron - did not last long. In the 1930s neutrinos were predicted to exist in order to account for the apparent non-conservation of energy in radioactive beta decay. To conserve energy, a new particle had to be emitted which did not interact with any of the apparatus, hence was electrically neutral and did not feel the strong nuclear force. Energy balance showed that this particle, the neutrino, had either zero or very small mass compared with the electron. It would feel a new 'weak' force, but this is so feeble that 10^14 neutrinos from the Sun pass through each of you every second. To reduce this flux by a significant amount would require a metal shield several light-years thick, and there is not enough space between you and the Sun for this! The neutrino was detected in the 1950s when extremely high fluxes became available from the decay of fission fragments in nuclear reactors.
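A rough estimate shows why the shielding claim is so extreme. Taking a cross-section of order 10^-44 cm^2 for a solar neutrino to interact with a nucleon (an assumed order-of-magnitude figure, not one given in the lecture), and the roughly 7 × 10^24 nucleons per cm^3 in a dense metal such as lead, the mean free path is

\lambda = \frac{1}{n\sigma} \sim \frac{1}{(7\times10^{24}\ \mathrm{cm^{-3}})(10^{-44}\ \mathrm{cm^{2}})} \sim 10^{19}\ \mathrm{cm},

which is of the order of ten light-years of solid metal.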

Antiparticles were also predicted in the 1930s. Every particle has an antiparticle which has some properties equal (such as mass) and others numerically equal but opposite in sign (such as electric charge) to the corresponding particle. Antielectrons, now called positrons, were discovered in cosmic ray interactions very soon after the prediction. It is quite easy to create an electron-positron pair in a collision: we merely need a collision energy greater than about 1 MeV, the sum of the rest masses. Antiprotons and antineutrons, with rest masses nearly 2000 times larger, were only discovered in the 1950s when an accelerator of sufficient energy started working. Conversely, when a particle and its antiparticle come into contact they can annihilate into lighter particles or photons.
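Putting numbers to these thresholds (standard rest energies, quoted here for illustration):

E_{\min}(e^{+}e^{-}) = 2m_{e}c^{2} \approx 2 \times 0.511\ \mathrm{MeV} \approx 1.02\ \mathrm{MeV}, \qquad E_{\min}(p\bar{p}) = 2m_{p}c^{2} \approx 2 \times 938\ \mathrm{MeV} \approx 1.9\ \mathrm{GeV},

which is why antiproton production had to wait for accelerators providing several GeV of collision energy.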

In the 1950s and 1960s, with the advent of the new high energy accelerators, we found that many new particles could be created out of the available energy in collisions. After some years more than 200 so-called elementary particles had been discovered, more than the number of chemical elements. Physics was getting as bad as chemistry! It was suggested that instead of awarding prizes, the discoverers should pay a fine. Since my colleagues and I were amongst the culprits, we and you will be relieved to know that now things are much simpler again. We only have two types of elementary building block: the quarks, which feel the strong force, and the leptons, which do not. Our everyday world is made up of just two of each. The electron (electric charge -1 unit) and the neutrino (zero charge) are leptons. There are two quarks, labelled up (u) (charge +2/3) and down (d) (charge -1/3). The proton and the neutron are no longer elementary but are each made up of three quarks, (uud) and (udd) respectively, which give the correct charges. There are then just two small complications, involving multiplication by 3 and by 2. At higher energies this pattern is repeated, so we have a second and a third generation each of two leptons and two quarks, as shown in figure 3. The various neutrinos have different properties and are labelled with suffixes denoting their associated lepton. In this nomenclature, the 'original' neutrino from beta decay is actually the electron-type antineutrino. We must also multiply by 2, because every quark and lepton has an antiparticle.

Antimatter.

The strong and electromagnetic interactions between antiparticles are precisely the same as between the corresponding particles. Hence it should be possible for antiprotons and antineutrons to bind together into antinuclei. If these are orbited by positrons we have antiatoms. So far just a few atoms of antihydrogen have been made in high energy laboratories. If antiatoms exist, there is the possibility of bulk antimatter, much loved in science fiction. This would be stable provided it did not come into contact with normal matter. Where might this exist? Not on Earth, nor on the Moon: the astronauts were not annihilated. Indeed there cannot be significant amounts in our solar system. The Sun continuously emits particles: the solar wind. This streams past the planets without any annihilation. However, it is conceivable that there might be stars of antimatter in our galaxy, or in some other galaxy, or indeed whole galaxies made of antistars. How could we tell?

It is not easy. The light emitted by antiatoms is precisely the same as from the corresponding atoms. So a star of antimatter would look the same and give the same spectrum as its normal counterpart. Telescopes and spectrometers could not tell the difference. Cosmic rays could provide evidence. These are high energy particles from outer space. Most originate somewhere in our galaxy, but those of the very highest energy might be extragalactic. Primary cosmic rays consist predominantly of high energy protons, but there are also nuclei of all stable elements in small amounts. Antiprotons can be formed in high energy cosmic ray interactions between normal particles, but if heavier antinuclei were found this would be strong evidence for the existence of antimatter. So far no such antinuclei have been found, but a more sensitive experiment - the Alpha Magnetic Spectrometer (AMS) - has flown on the Space Shuttle and will be onboard the International Space Station when this goes into orbit.

In the early universe, when particle creation was commonplace, we expect to have had equal numbers of particles and antiparticles. So far the evidence is against the existence of bulk antimatter in the present universe, and indeed there are theories which explain this total imbalance. However, symmetric matter-antimatter cosmologies have also been investigated, and were popular in the past. Suppose that whole galaxies of antimatter existed. We might occasionally expect a galaxy and an antigalaxy to be pulled together by gravity, and we might see spectacular annihilation, giving a rather clear signal: the 0.511 MeV gamma rays of electron-positron annihilation. If antigalaxies exist, why have we not seen this? Hannes Alfven, a Swedish physicist and Nobel laureate, proposed a mechanism. If such galaxies approach each other, the first parts to come into contact would be the edges. There would be some annihilation, but the pressure of the annihilation products would then drive the galaxies apart again. Alfven also proposed a nice analogy, which can easily be demonstrated. A metal plate is heated gradually from room temperature, and periodically drops of water are placed on the surface. Close to 100 °C, the water boils away vigorously: in our language the cold water is 'annihilated' by the hot metal. However, when the temperature of the plate is around 200 °C there is a change: drops around a centimetre in diameter can persist for many minutes. Again in our language the annihilation product, the steam, lifts the drop away from direct contact with the hot metal, and hence acts as a thermal insulator, greatly slowing down the 'annihilation'. This is shown in figure 4.

Detecting particles.

We have listed the constituents of matter, but how do we know about them? How can we tell the structure of any small object? Suppose we take something moderately small, such as a fly or a flea. If we want to know whether it has eyes or hairs on its legs we can determine more detail by looking through a single lens - a magnifying glass. If we want more detail, then a rather fancy arrangement of two compound lenses called a microscope will show this. We might expect that adding a third or even more stages would reveal even greater detail, but this is not so. We are limited by the wavelength of light, and cannot resolve two objects which are separated by much less than this wavelength. Visible light has a wavelength of around 5 × 10^-7 m, which is 5000 times the size of an atom and 500 million times the size of a nucleus, so we cannot see protons unless we can reduce the wavelength by more than a factor of a billion. Fortunately quantum physics comes to our help. Material particles - in fact all objects - have wave properties, and have an associated wavelength which is equal to the Planck constant divided by the momentum. Hence for very short wavelengths we need very high momenta, and therefore high energies where we have both relativistic effects and particle creation and annihilation.
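To put a number on this, the de Broglie relation with standard constants (a rough illustration rather than a calculation from the lecture) gives the momentum needed to resolve a distance of 10^-15 m, the size of a proton:

p = \frac{h}{\lambda} \approx \frac{6.6\times10^{-34}\ \mathrm{J\ s}}{10^{-15}\ \mathrm{m}} \approx 6.6\times10^{-19}\ \mathrm{kg\ m\ s^{-1}}, \qquad pc \approx 1.2\ \mathrm{GeV}.

So probing distances comparable to the proton's size already requires energies of the order of a GeV, and looking at structure inside the proton requires considerably more.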

Let us now consider some experimental techniques. There are two basic pieces of kit in high energy physics. For most experiments we need a particle accelerator: a device that increases the energy of the particle. For all experiments we need a detector: a device that records the interaction or the decay of particles. A single high energy collision between two particles normally results in the creation of several particles, sometimes a large number, many of which are short-lived and decay into more stable particles. At high energies, the speeds are very close to that of light, and acceleration results mainly in an increased particle mass. The humble television set is a particle accelerator: it accelerates electrons to some tens of thousands of electron volts (eV) by applying a single high voltage; but this technique cannot be applied to much higher energies. Most large accelerators are circular. A pipe, a few centimetres in diameter, is formed into a horizontal ring several metres to several kilometres in circumference. The air is continuously pumped out to maintain a high vacuum, so that the particles moving in the pipe do not collide with air molecules. Particles, normally protons or electrons, are injected in bunches, usually from a smaller machine. Electromagnets, each some metres in length, are placed all around the ring. Some have the property of bending the particles' trajectory into the circular orbit, and others are used to focus the beam and keep it from hitting the wall of the vacuum pipe. Radio-frequency cavities are also placed around the ring and give the particles an electric 'kick' every time they pass through.
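The remark about speed and mass can be made quantitative with the Lorentz factor (a sketch with round numbers, not figures from the lecture):

\gamma = \frac{E}{mc^{2}}, \qquad \frac{v}{c} = \sqrt{1-\frac{1}{\gamma^{2}}} \approx 1-\frac{1}{2\gamma^{2}}.

For a 100 GeV proton (rest energy 0.94 GeV), γ is about 107 and the speed falls short of c by only about four parts in 100,000. Doubling the energy barely changes the speed, but doubles γ, which is the sense in which the effective mass increases.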

As the particle energy is increased the field strength of the magnets is ramped up so as to keep the particles in the same orbit. This acceleration can be accomplished in a few seconds, or less, during which time the particles circulate the ring very many times, travelling up to a million kilometres. When the magnets reach their highest field no further acceleration can take place. There are then two possibilities. One is that the beam is extracted and hits a target which acts as a source of secondary beams, which are then steered to detectors by means of magnets and other devices. This classical technique is very versatile, but has the disadvantage that it cannot provide the most energetic collisions. When a high energy particle collides with a stationary one in a target, much of the energy is needed to propel the interaction products forwards in the laboratory in order to conserve momentum, leaving only a small fraction of the initial energy for the interesting physics such as the creation of new particles. This can be overcome if the collision is between two particles moving in opposite directions. When two such bunches of particles meet, the vast majority will not collide, so in order to have a good collision rate it is normal to store the colliding beams: to keep them circulating in the vacuum ring for hours in order that the non-colliding particles have repeated opportunities to collide.
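The advantage of colliding beams can be quantified by the centre-of-mass energy, the energy actually available for creating new particles. For a beam energy E much greater than the proton rest energy mc^2, a rough sketch (taking 270 GeV, the beam energy later used at the CERN proton-antiproton collider described below, rather than a figure quoted in this lecture) gives

\sqrt{s}_{\mathrm{fixed\ target}} \approx \sqrt{2E\,mc^{2}} \approx \sqrt{2\times270\times0.94}\ \mathrm{GeV} \approx 23\ \mathrm{GeV}, \qquad \sqrt{s}_{\mathrm{collider}} = 2E = 540\ \mathrm{GeV},

so head-on collisions make more than twenty times as much energy available from the same machine.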

Many techniques are used to detect particles' interactions and decays, and in most experiments several are combined to give the fullest possible information. Wire chambers are widely used as tracking devices: these give an electronically reconstructed 'picture' of the trajectories of charged particles. The principle is shown in figure 5. They show the number of such particles, and if the chambers are inside a magnetic field the curvature of a track is a measure of the particle's momentum, and shows whether the particle is negatively or positively charged. Basically such chambers consist of 'planes' of thin parallel wires, spaced one or several centimetres apart, inside boxes containing various gases at atmospheric pressure. Voltages are applied across adjacent planes. If a high energy charged particle passes through the gas, it will ionise the gas along its trajectory, in other words knock electrons out of their atoms. These electrons will be attracted to the positively charged plane. When they get close to a very thin wire the strong electric field surrounding it will accelerate the electrons so that they in turn cause further ionisation and initiate an electric avalanche. Wires hit by this avalanche receive a small electric charge which can then be amplified and registered. By a suitable arrangement of chambers, co-ordinates in three dimensions can be obtained, and using various electronic techniques this information can be processed rapidly. If it satisfies certain criteria it can be stored digitally and used to reconstruct the 'event'.
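The link between curvature and momentum comes from the magnetic force supplying the centripetal force. In units convenient for particle physics this becomes, for a particle carrying one unit of charge (a standard rule of thumb rather than a formula given in the lecture),

p\ [\mathrm{GeV}/c] \approx 0.3\, B\ [\mathrm{T}] \times r\ [\mathrm{m}],

so, for example, a track bending with a radius of 10 m in a 0.7 T field corresponds to a momentum of about 2 GeV/c; the straighter the track, the higher the momentum.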

Another device is used to measure the energy either of a single particle or of a group of related particles. This is called a calorimeter. In one design a multiple sandwich is constructed out of dense material, for example iron slabs a few cm thick, interleaved with material such as plastic scintillator which registers that particles have passed through it by emitting a small amount of light which can be picked up by a photomultiplier. If a high energy particle enters the calorimeter it will collide with nuclei in the calorimeter material and can create several particles, which give rise to scintillation in the next sheet. Each of these in turn can create new particles by interaction with nuclei, and so a shower of particles develops. The principle is shown in figure 6. Because the initial energy is now shared by many particles, each will have lower energies and will gradually be stopped, so the shower dies down. By measuring the total amount of light from all scintillator sheets the energy of the initial particle can be obtained. Different designs are used for particles which shower by electromagnetic interactions (electrons, positrons and photons) and for those that shower by means of the strong interactions (particles containing quarks, collectively called hadrons).
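A simple toy model (an illustrative sketch, not a design calculation) shows why this works as an energy measurement. Suppose each particle in the shower splits into two at every layer, each carrying half the energy, until the energy per particle falls to some critical value E_c below which the particles are simply absorbed. Then

N_{\mathrm{total}} \approx \frac{2E}{E_{c}} \propto E, \qquad n_{\mathrm{layers}} \approx \frac{\ln(E/E_{c})}{\ln 2},

so the total scintillation light is proportional to the initial energy, while the depth needed grows only logarithmically, which is why a calorimeter of modest thickness can contain even very energetic showers.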

Forces.

We now come on to the forces. At the most basic level there appear to be no more than four fundamental forces, and these are shown in figure 7. Gravity is the best known. All masses attract one another. The force between two masses is proportional to the product of the masses divided by the square of the distance between them. This inverse square law means that as we separate the masses the force gets weaker and weaker, but it never stops: it is infinite in range. Gravity is responsible for falling objects, for planetary orbits, for the spherical shape of large objects like stars. Electromagnetism also has an inverse square law, but there are two main differences from gravity. At the basic level electromagnetism is vastly stronger. The electric attraction between a proton and an electron is 10^40 times stronger than the gravitational force. Also there are positive and negative electric charges, with the property that like charges repel and unlike charges attract. A consequence of this is that neutral systems form. This is why the Earth pulls us gravitationally but not electrically. Of course, if atoms are close together the positive nuclei and negative electrons are not in the same position, and it is the slight imbalance of electric force that allows molecules to form. Hence all chemical and biological structures are cemented by electromagnetism.
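That enormous ratio can be checked directly: both forces obey an inverse square law, so the distance cancels and only standard constants remain (inserted here as a rough check, not figures from the lecture):

\frac{F_{\mathrm{electric}}}{F_{\mathrm{gravity}}} = \frac{e^{2}/4\pi\varepsilon_{0}}{G\,m_{p}m_{e}} \approx \frac{2.3\times10^{-28}}{(6.7\times10^{-11})(1.7\times10^{-27})(9.1\times10^{-31})} \approx 2\times10^{39},

a few times 10^39, in line with the round figure of 10^40 quoted above.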

Physics, however, is greedier. Two further forces exist. These are both short range and were only discovered during the twentieth century. The strong force holds the quarks together to form protons and other hadrons. When the quarks are in contact the force is very strong, but it soon falls off to zero when the quarks are separated. An analogy would be spheres covered with Velcro. Velcro can bind the spheres together quite strongly, but the force disappears when the separation is greater than the Velcro thickness. The residual effect of the interquark force binds protons and neutrons into nuclei. The weak force is also short range. It is too weak to bind real particles together, but is responsible for radioactive beta decay. It is also responsible for fusion reactions in stars and the Sun.

If, hypothetically, we could label a particular proton in the Sun's core, we could note that it collides many times per second with other protons. However, on average it would take five billion years to fuse, again illustrating the fantastic weakness of this interaction. Hence in the five billion years of its life our Sun has used up about half of its hydrogen, providing us with the environment and timescale required for biological evolution.

How do these forces work? How, for example, does an electron know that it is being repelled by another electron? Not by magic, but, we believe, by the exchange of 'carrier' objects. Electromagnetism is carried, or mediated, by the exchange of photons, the quantum packets of light. Gravity, we believe, is mediated by gravitons. These, however, have not yet been discovered. The carriers of the strong force, which glue the quarks together, are unimaginatively called gluons. These were discovered at the DESY laboratory in 1980. Even less imagination was used in naming the carriers of the remaining force. They were called W for weak.

One of the important activities in science is to see whether apparently different things have some connection. Science does not just consist of making observations and measurements and cataloguing the results. So attempts have been made to see whether there is some deep connection between the different forces. Since gravity and electromagnetism both have inverse square laws, there were attempts to unify them. Einstein spent some time trying to do this but was not successful, and no one has succeeded so far. However, perhaps surprisingly, about 30 years ago a theory was constructed which unified electromagnetism and the weak force into a single theoretical framework. In electroweak theory, three massive particles that mediate the weak force - the charged W+ and W- particles and the neutral Z0 particle - join the photon as the carriers of the electroweak force. The intrinsic strengths of these carriers are identical, but the massive nature of the W and Z particles limits their range to very short distances. Hence protons colliding in the Sun's core seldom come close enough for W exchange to occur. At energies of around 100 GeV, however, close encounters are common, showing electroweak unification.
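The way a heavy carrier shortens the range can be estimated from the uncertainty principle: a carrier of mass M can only be 'borrowed' for a time of order ħ/(Mc^2), during which it can travel at most a distance

R \sim \frac{\hbar}{Mc} = \frac{\hbar c}{Mc^{2}} \approx \frac{0.197\ \mathrm{GeV\ fm}}{80\ \mathrm{GeV}} \approx 2.5\times10^{-3}\ \mathrm{fm} \approx 2\times10^{-18}\ \mathrm{m}

(a rough order-of-magnitude sketch using the W mass quoted below, not a calculation from the lecture). This is about a thousandth of the size of a proton, whereas the massless photon gives electromagnetism its infinite range.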

By taking the values of some measured quantities from electromagnetism and weak interactions, it was possible to estimate that the values for the masses of the W and Z particles were likely to be around 80 and 90 GeV, about 100 times the mass of the proton. This presented a challenge to experimentalists, to see whether such particles existed or whether this theory was just a mathematical construction. The largest existing accelerators, at CERN in Geneva and Fermilab in the USA, did not have sufficient energy to create such massive particles if their beams were slammed into a stationary target. Carlo Rubbia, an Italian physicist at CERN, suggested that the CERN Super Proton Synchrotron (SPS) could be converted into a proton-antiproton collider. This was a very bold suggestion, since the antiprotons would have to be created first in a collision, then stored for many hours without touching any matter and then made to circulate in the SPS and collide with protons going the other way round. A brilliant technique for providing the carefully prepared beam of antiprotons was invented by Simon van der Meer, a Dutch physicist at CERN. It took three years of effort to convert the SPS into a collider, and during this time an international collaboration, with Rubbia as spokesman and including our group from Queen Mary, designed and built a huge apparatus, called UA1, which would explore the new energy domain and, we hoped, would discover the W and Z particles.

It was predicted that even at the new high energy of the collider only about one in a hundred million collisions would produce a W particle. Finding this would not be easy. It would decay immediately and its 'signature' would be an electron or a muon (which could be identified) plus a neutrino (which would leave no trace in the detector). In addition, these particles would generally be accompanied by dozens of unwanted particles arising in these very high energy collisions. It was like looking for a needle in a haystack. The detector, shown in figure 8, combined most of the available techniques: large wire tracking chambers in a magnetic field, calorimeters to measure the energies of all particles, and sophisticated electronics to decide which million 'events' out of billions of collisions should be collected on computer tape for subsequent analyses. However, we were fortunate. In the second year of running the experiment, the intensity of collisions had reached a value where there was a hope of finding these elusive particles. The W particle was discovered in January 1983 and the Z particle a few months later. We were also fortunate that another experiment at CERN found these particles almost simultaneously. We celebrated with champagne. The discovery was reported widely in the media. We continued running for some years and were able to check that the properties of the W and Z were as predicted. Electroweak unification was verified. Nature's four forces had been reduced to three, and we were all delighted when Rubbia and van der Meer, who had made the biggest contribution to our great adventure, were awarded the Nobel Prize for Physics.