Royal Academy of Sciences New Zealand Open Science

Comparison of Rutherford’s atomic model with the Standard Model of particle physics and other models

ABSTRACT

Ernest Rutherford is known almost universally as the discoverer of the structure of the atom. He is less well known for his discovery of the proton. Even less well known is a set of hypotheses on the structure of matter that was proposed by Newton 300 years ago. Here Newton’s hypotheses and Rutherford’s observations are described and compared with today’s Standard Model of particle physics, as well as with a NZ precursor to the Standard Model. A proposal to construct an electron–proton collider at CERN to examine the proton with resolution 1000 times finer than that achieved in the Rutherford era is also described. Three experiments that were conducted in New Zealand with connections to Rutherford’s experiments are also described. It is concluded that New Zealand’s small size and isolation can offer advantages for fundamental research.

Hypotheses of Newton on the structure of matter

Although this review focuses primarily on the work of Ernest Rutherford and others of the modern era, we commence the discussion with brief remarks on a set of remarkable hypotheses on the structure of matter that were published by Isaac Newton some 200 years prior to Rutherford’s time. We also remark here that the entire discussion which follows, from the days of Newton to the present, represents an unabashedly personal account of the development of particle physics as seen by the author.

Newton’s hypotheses appeared in the early 1700s in the closing pages of his treatise on ‘Opticks’. They appear to have received little attention, possibly because of their placement at the end of a publication on another topic. Here we compare the hypotheses with the observations of Rutherford and others of the modern era. Coincidentally, Newton’s hypotheses were reviewed earlier this year, prior to the announcement of the present celebration of Rutherford’s birth (Yock 2020). Our remarks here are therefore brief.

Newton’s most far-reaching hypothesis in Opticks was undoubtedly the following excerpt (Newton 1730, p. 394):

There are therefore Agents in Nature able to make the Particles of Bodies stick together by very strong Attractions. And it is the Business of experimental Philosophy to find them out. Now the smallest Particles of Matter may cohere by the strongest Attractions, and compose bigger Particles of weaker Virtue; and many of these may cohere and compose bigger Particles whose Virtue is still weaker, and so on for divers Successions, until the Progression end in the biggest Particles on which the Operations in Chymistry, and the Colours of natural Bodies depend, and which by cohering compose Bodies of a sensible Magnitude.

These words were quoted previously (Yock 1970, 2020; Weinberg 2015). Nowadays, with the benefit of hindsight, we may easily identify the members of Newton’s sequence of ‘particles’, from smallest to largest, as quarks and gluons, nucleons, nuclei, atoms, molecules and macromolecules according to current thinking. With the possible exception of the first members (quarks and gluons), the sequence appears to define an ordered array of comparable objects with increasing size and decreasing binding energies, as Newton assumed.

Newton further assumed in Opticks that the sequence would display properties of self-similarity, simplicity and purpose, and this may also be seen to apply, except to the quarks and gluons (Yock 2020). All members of the sequence are similar in the sense that all are described by quantum mechanics or quantum field theory, all can be dissociated (except the nucleon) to reveal members of the next layer consistent with the reductionist philosophy, and each appears necessary to form the ingredients of life except for most of the quarks. According to today’s Standard Model of particle physics, there are three generations of quarks, of which only the first is needed (Feynman 1985; Weinberg 2015; Yock 2020).

Newton also proposed in Opticks remarkable hypotheses that may now be recognised as precursors to fundamental elements of quantum mechanics and quantum field theory (Yock 2020). This occurred some 200 years before these theories were formulated. In what follows we consider further the hypotheses of Newton. Newton’s speculations on alchemy, however, are not discussed here, as they were not included by Newton in Opticks.

The Rutherford atom

Rutherford’s contributions to physics are legion. They deservedly earned him the title ‘father of nuclear physics’. His discoveries include the identification of chemical elements that undergo radioactive decay, the atomic nature of radioactivity, the determination of basic properties of α, β and γ radiation, the law of radioactive decay, the discoveries of the atomic nucleus and of the proton, and the determination of the age of the Earth. He is also well known for gathering productive scientists around him, including Niels Bohr, James Chadwick, Hans Geiger and Frederick Soddy.

Rutherford’s discovery of the structure of the atom was his most transformative work, and it is this and its aftermath that is focused on in this paper. The gold-foil experiment is well known, and needs no description here. It revealed the atom to be made up of a small, massive positive nucleus surrounded by a cloud of electrons of equal and opposite charge but low mass.

In subsequent research Rutherford discovered the proton as the long-range product of collisions of α particles with nitrogen (Rutherford 1919). The discovery of the artificial transmutation of elements is sometimes attributed to Rutherford for this work, as in Figure 1 (right panel), but Rutherford himself made no such claim. Although the nitrogen target in the proton experiment was converted to oxygen via the process α + ¹⁴N → p + ¹⁷O, Rutherford’s experiment provided insufficient information to draw this conclusion, and the discovery of artificial transmutation had to wait six years until Patrick Blackett observed the process in a cloud chamber (Blackett 1925; Galison 1997).
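
For readers who wish to check the energetics of this reaction, the following short calculation (a sketch using standard atomic mass values, not material from Rutherford’s original paper) evaluates its Q-value in Python:

# Q-value of alpha + N-14 -> p + O-17, using standard atomic masses in unified mass units
masses_u = {"He-4": 4.002602, "N-14": 14.003074, "H-1": 1.007825, "O-17": 16.999132}
u_to_MeV = 931.494
Q = (masses_u["He-4"] + masses_u["N-14"] - masses_u["H-1"] - masses_u["O-17"]) * u_to_MeV
print(f"Q = {Q:.2f} MeV")  # roughly -1.19 MeV

The negative Q-value shows that the transmutation is endothermic, the deficit being supplied by the kinetic energy of the incident α particle.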

Figure 1. Centenary stamps issued by NZ Post in 1971. The 1c stamp celebrates Rutherford’s discovery of the structure of the atom, and the 7c stamp his discovery of the proton. The 7c stamp also suggests that Rutherford was first to observe the artificial transmutation of elements, but in fact Rutherford did not demonstrate the presence of the oxygen atom in the reaction shown on the 7c stamp. Thus, he did not demonstrate the transmutation of nitrogen to oxygen.

For completeness we note here another twist in the discovery of the proton. In the late 1800s, Eugen Goldstein reported the presence of ‘anode rays’ in discharge tubes with perforated cathodes. Their e/m values were measured by Wilhelm Wien and J J Thomson in the early 1900s and found to include that of the proton (Moore et al. 1985). This was probably the first observation of the proton, although it did not establish that particle as a basic constituent of the atomic nucleus.

In the 1920s and 1930s Rutherford, Chadwick and others repeated the gold-foil experiment with lighter targets, including aluminium and magnesium (Rutherford and Chadwick 1925). The α particles were able to enter the nucleus in these experiments and probe the nuclear force, whereas previously Coulomb repulsion had kept them apart. These experiments were the first true analogue of today’s experiments conducted at the Large Hadron Collider at CERN, in which head-on collisions of protons and heavier nuclei are studied.
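
A back-of-envelope estimate indicates why the lighter targets made this possible. Assuming a head-on collision and a representative α energy of about 7.7 MeV (an illustrative value only; the sources used varied), the classical distance of closest approach is d = 2Ze²/(4πε₀E):

# Classical distance of closest approach for an alpha particle (charge 2e) on a nucleus of charge Ze
COULOMB_MeV_fm = 1.44          # e^2 / (4 pi eps0) in MeV fm
E_alpha_MeV = 7.7              # assumed alpha energy; for illustration only
for name, Z, A in [("gold", 79, 197), ("aluminium", 13, 27), ("magnesium", 12, 24)]:
    d_min = 2 * Z * COULOMB_MeV_fm / E_alpha_MeV     # fm
    R_nuc = 1.3 * A ** (1.0 / 3.0)                   # empirical nuclear radius (see Figure 2 and below)
    print(f"{name:9s}: d_min ~ {d_min:4.1f} fm, nuclear radius ~ {R_nuc:.1f} fm")

For gold the α particle turns back well outside the nucleus, whereas for aluminium and magnesium it essentially reaches the nuclear surface, where the short-range nuclear force can act.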

In 1932, Chadwick discovered the neutron by irradiating beryllium with α particles and showing kinematically that neutral particles were emitted with mass similar to that of the proton (Chadwick 1932). In 1935, analysis of the data obtained with the lighter foils by Ernest Pollard (Pollard 1935; Evans 1955) showed the nucleus to be an approximately spherical assemblage of closely packed protons and neutrons, which were themselves tiny spheres of radii approximately 1.3 × 10⁻¹⁵ m. Although Pollard authored this important result from Yale University, we note that he had beforehand been a student at the Cavendish Laboratory (Wilson 1983). Pollard’s result of 1935 is illustrated in Figure 2. In the following year Francis Ashton showed that the force holding nucleons within nuclei is very strong in comparison to the electromagnetic force that retains electrons in atoms (Ashton 1936). This measurement may be said to have completed the discovery phase of the Rutherford atom.

Figure 2. Familiar depiction from the Rutherford era of an atomic nucleus as a roughly spherical assemblage of closely packed spherical neutrons and protons with radii ∼ 1.3 × 10⁻¹⁵ m. The neutral atom includes orbital point-like electrons as well, but they are ∼ 10,000 × further out.

The discovery of the atom and the determination of its basic properties by Rutherford and his contemporaries can be seen as a truly remarkable achievement when one considers that the observations were conducted with table-top apparatus, by small teams of scientists typically numbering one or two, using equipment that cost a mere few hundred pounds (Wilson 1983), and that they achieved a resolution of the order of a fraction of a femtometre.

The Standard Model of particle physics and a precursor

Elementary particle physics evolved as a separate discipline from nuclear physics during the 1930s to the 1950s with the discoveries of new particles, and the development of ‘quantum field theory’ and the associated ‘renormalisation theory’. The latter yielded extremely accurate results, the most accurate known in science, but was based on dubious mathematics, at least in the opinions of Dirac and Feynman, two of its leading architects (Dirac 1958; Feynman 1985).

While the discoveries discussed here appeared as guiding lights in the then-new field of particle physics, they included puzzles. Some of the particles that were found appeared to serve no purpose, and, as noted earlier, the mathematics underlying the physics appeared dubious. Today, 80 years on, these puzzles remain. One can legitimately enquire if Newton and Rutherford would be content with the status quo.

In 1935, Hideki Yukawa proposed the ‘meson’ theory of the strong nuclear force that holds the atomic nucleus together (Yukawa 1935). Yukawa assumed that nucleons within the nucleus continually exchange particles (now dubbed π-mesons or pions) between them, and thereby bind themselves to each other. Yukawa showed that a pion mass of ∼ 200 electron masses would be required, and a strongly interacting particle with this mass was subsequently found in the cosmic radiation (Lattes et al. 1947). In the 1950s, electron scattering experiments were conducted at Stanford University that resulted in precision measurements of the effective radii of protons and neutrons of (1.07 ± 0.02) × 10⁻¹⁵ m (Hahn et al. 1956). These observations lent strong support to Yukawa’s theory.
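
The connection between the pion mass and the measured nucleon radius can be checked in a line or two. A force carried by a particle of mass m has a range of order the reduced Compton wavelength ħ/(mc); the sketch below uses Yukawa’s original estimate of ∼ 200 electron masses:

HBAR_C_MeV_fm = 197.3            # hbar c in MeV fm
m_e_MeV = 0.511                  # electron mass
m_pi_MeV = 200 * m_e_MeV         # Yukawa's estimate of the exchanged particle's mass
print(f"range ~ {HBAR_C_MeV_fm / m_pi_MeV:.1f} fm")
# With the measured charged-pion mass (~139.6 MeV) the range is ~1.4 fm,
# of the same order as the nucleon radii quoted above.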

But, as noted earlier, not all discoveries made at the time were anticipated. In particular, unexpected partners to nucleons and pions, dubbed ‘strange particles’, were found for which no obvious raison d’être existed. Their discovery eventually led to the proposal of Gell-Mann that all observed hadrons (strongly interacting particles) could be classified as bound states of three hypothetical particles with fractional charges (+⅔|e| or −⅓|e|), which he termed ‘quarks’ (Gell-Mann 1964).

The 1964 model was subsequently extended to include further quarks, and gluons, as well as colour charges to bind the quarks and gluons together (Fritzsch et al. 1973). This work is well known. It forms the foundation of today’s Standard Model of particle physics, but it also raises questions.

One question concerns the dubious mathematics of the renormalisation theory mentioned earlier. This motivated me to consider an alternative model of quark-like particles that I termed ‘subnucleons’ (Yock 1969). In what follows, the Standard Model and the alternative model of subnucleons are compared. The latter was based on eigenvalue equations for electric charge that Gell-Mann and Low (1954) and Johnson et al. (1967) had derived previously. These suggested that the constituents of hadrons could be highly electrically charged, and bound together by a very strong attraction between subnucleons and antisubnucleons. This scenario shared features with the colour theory of Fritzsch et al. that followed four years later (Yock 2016b), but there were also major differences.

One difference involved the nature of the forces assumed to bind quarks or subnucleons together. Searches for free quarks with fractional charges were conducted in several countries, including New Zealand, with negative results (apart from some initial false alarms). The negative results led to the assumption (without proof) that quarks and gluons might be permanently confined within colour-neutral particles such as nucleons and pions.

In the subnucleon model I followed a different route. I assumed very strong binding between subnucleons, but not permanent confinement. This maintained Newton’s hypothesis of self-similarity, with the strongest (but not infinite) binding occurring at the smallest distances. The model also followed the familiar gauge principle of electromagnetism. It was in fact the first gauge theory of strong interactions to be proposed. It also provided a possible rationale for the existence of the strange and similar particles. A comparison of the proton according to the Standard Model and the subnucleon model appears later.

The Standard Model was originally constructed to account for the particles that had been found by the 1960s, and it did a relatively good (but not perfect) job in this regard. It was furthermore shown, under various assumptions, to explain a large body of high-energy scattering data, but the underlying assumptions were not clearly explained. For example, in interactions of electrons with protons, it was assumed that the electron interacts with quarks contained in the proton as if the quarks were free particles. Final-state interactions between quarks were then assumed to occur in a process known as ‘hadronization’ that produced outgoing jets of integrally charged hadrons, but the interactions that caused this were not fully explained. This was the case originally (Feynman 1972), and it remains the case today (Agostini et al. 2020; Section 1.1.1). In addition, low-energy phenomena of nuclear physics have generally remained outside the model (Ishii et al. 2007; Doi et al. 2017). It has not been shown, for instance, how the geometrical arrangement from the Rutherford era shown in Figure 2 might arise in the Standard Model.

The author’s model of the proton as shown in the right panel of Figure 3 is also beset with problems, but of a different nature. The construction of a finite quantum field theory was its driving goal, and only a partial correspondence with observed particles has as yet been demonstrated. In electron–proton interactions at current energies it was assumed that the electron scatters off the tightly bound bare mesons and nucleons in the proton as if the latter were elementary particles, and that interactions subsequently occur in the final state in which the outgoing bare mesons and nucleons ‘dress’ themselves with the emission of jets of integrally charged hadrons (Yock 2002). Consistent with this, the low-energy interactions of nuclear physics were assumed to occur as in the older meson models (e.g. Machleidt 1989). The model given here is far from complete, but it makes assumptions that appear less drastic to the author than those of the Standard Model. Most importantly, all of the fundamental particles of the theory are assumed to be observable, and not merely those that do not undergo strong interactions as assumed in the Standard Model. In this sense, the alternative model attempts to follow Newton’s hypothesis of self-similarity.

Figure 3. Models of the proton in the Standard Model (left panel) and in the author’s model of subnucleons. The left panel shows a colour-neutral combination of u and d quarks with fractional electric charges +⅔|e| and −⅓|e| respectively. The wavy lines signify a colour-neutral combination of gluons holding the quarks together in some unspecified arrangement. The right panel shows the proton in the subnucleon model where the dots represent highly electrically charged subnucleons and antisubnucleons in tightly bound, neutral clumps (Yock 2002). The clump of three pictures a ‘bare’ nucleon; the clumps of two represent ‘bare’ mesons. The overall radius is assumed to be ∼ 1 × 10⁻¹⁵ m in the Standard Model. In the subnucleon model the radius is fixed at this value by the mass of the π-meson (Yukawa 1935).

Electron scattering, as used in electron microscopy, is the classic technique for resolving small structures. Such studies confirmed the geometry shown in Figure 2 for the atomic nucleus (Hahn et al. 1956). Subsequent studies of the proton were conducted at higher energies at the Stanford Linear Accelerator Centre (SLAC) and at the Hadron Electron Ring Accelerator (HERA) in Hamburg, but, as noted earlier, the analyses of these studies appear ambiguous.

One class of events was observed at HERA, however, whose interpretation was relatively straightforward. These were so-called ‘rapidity gap’ events in which a high-energy neutron was observed to emerge in the final state travelling in the direction of the incident proton (Derrick et al. 1996). These events occurred frequently, and they clearly required the incoming proton to fluctuate into a neutron and a positive π+-meson, with the π+ subsequently acting as the target for the incoming electron. This enabled the neutron to propagate forwards as a spectator particle, as observed (Lu et al. 2000; Yock 2002). The rapidity gap events thus followed Yukawa’s old model, although this was not recognised at SLAC at the time, because the energy of that machine was insufficient to distinguish the outgoing products from the π+ and the neutron (Boros and Zuo-tang 1995).

The superior energy of the HERA collider was also used to conduct a search for fine structure deep within the proton. Such structure could be produced in the subnucleon model by the bare particles shown in the right panel of Figure 3. The high electric charges of the subnucleons making up bare particles would be expected to produce an excess of electrons at high momentum transfers over smooth extrapolations of the data from lower momentum transfers.

Such an excess was initially reported by the H1 and ZEUS collaborations at HERA in 1997 at momentum transfers of order a few × 100 GeV/c (Adloff et al. 1997; Breitweg et al. 1997). Further running of the HERA collider, combined with a modified analysis procedure, reduced the excess to an almost insignificant level, but also produced systematic deviations in the data (Abramowicz et al. 2016). The latter are suggestive of fine structure in the proton at a few × 10⁻¹⁹ m (Yock 2020).

It would be of interest to conduct further measurements at higher momentum transfers, and also at higher luminosities to reduce statistical uncertainties. A proposal for a new collider, termed the Large Hadron Electron Collider, or LHeC, is presently under consideration at CERN that could achieve this goal (Agostini et al. 2020). The LHeC is planned to collide 50–60 GeV electrons from a purpose-built electron accelerator (using energy recovery linac technology) with 7 TeV protons from the LHC at very high luminosity. This would yield peak momentum transfers ∼ 1000 GeV/c, and probe substructure at scales ∼ 10⁻¹⁹ m using techniques similar to those already developed at HERA. The resolution of 10⁻¹⁹ m would improve on that achieved by Hahn et al. in 1956 by two orders of magnitude, and by Rutherford’s colleagues in the gold-foil era by three orders of magnitude.
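
The quoted resolutions follow directly from the uncertainty principle, Δx ∼ ħ/q. A minimal sketch, using representative momentum transfers for HERA and the proposed LHeC (the exact values are assumptions made here for illustration):

HBAR_C_GeV_fm = 0.1973
for machine, q_GeV in [("HERA (representative)", 300.0), ("LHeC (peak)", 1000.0)]:
    dx_fm = HBAR_C_GeV_fm / q_GeV         # spatial resolution in fm
    print(f"{machine}: q = {q_GeV:.0f} GeV/c -> resolution ~ {dx_fm * 1e-15:.1e} m")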

A simple picture of what could emerge with the LHeC, assuming the presence of highly electrically charged constituents, may be seen by modelling the bare proton of Figure 3 (right panel) as a charge of (say) 21|e| smeared uniformly over a region of radius 10⁻¹⁹ m, and two charges of −10|e| smeared over a larger region of radius 2 × 10⁻¹⁹ m. Model II of Hofstadter (1956) then predicts the results shown in Figure 4 and by Agostini et al. (2020). These are consistent with the upper limit for substructure of 4.7 × 10⁻¹⁹ m set previously at HERA (Abramowicz et al. 2016). We conclude that if the proposed substructure is present at the level assumed here, it would be detectable with the LHeC.
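
The calculation behind Figure 4 can be mimicked with a toy model in which each of the three charges above is smeared as a uniformly charged sphere (this is only a sketch of the general idea, not the Model II calculation of Hofstadter (1956) used for the figure):

import math

HBAR_C_GeV_fm = 0.1973

def uniform_sphere_form_factor(q_GeV, R_fm):
    """Form factor of a uniformly charged sphere of radius R (standard textbook result)."""
    x = q_GeV * R_fm / HBAR_C_GeV_fm
    if x < 1e-6:
        return 1.0
    return 3.0 * (math.sin(x) - x * math.cos(x)) / x ** 3

# Toy version of the charge arrangement described in the text:
# +21|e| over a radius of 1e-19 m and two charges of -10|e| over 2e-19 m (net charge +1|e|).
R1_fm, R2_fm = 1e-4, 2e-4          # 1e-19 m and 2e-19 m expressed in fm
for q in (250.0, 500.0, 1000.0):
    F = 21 * uniform_sphere_form_factor(q, R1_fm) - 20 * uniform_sphere_form_factor(q, R2_fm)
    print(f"q = {q:6.0f} GeV/c: cross-section ratio to a point proton ~ {F * F:.2f}")

Even this crude version captures the qualitative point: tightly bundled high charges push the cross-section above the point-proton expectation as the momentum transfer approaches 1000 GeV/c.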

Figure 4. Form factor effect in the electron–proton interaction produced by a bare proton of finite size according to Model II of Hofstadter (1956) with the model parameters given in the text. The vertical axis shows the effect of the form factor normalised to a point-like bare proton. The horizontal axis shows the magnitude of the momentum transferred in GeV/c.

Comparison of the particle physics models with the prior work of Newton and Rutherford

We may summarise the discussion thus far by noting that 300 years ago Newton envisaged a layered structure of matter, with the innermost layer being the most tightly bound, and successive layers being progressively less tightly bound and larger in size, in a manner that exhibited self-similarity, simplicity and purpose (Newton 1730).

Newton’s predictions are in impressive agreement with observations of nuclei, atoms, molecules and macromolecules, but appear to be in conflict with the Standard Model of particle physics. The fractional charges and confinement of quarks, and the lack of clear understanding of the interactions and generations of quarks, clash with the Newtonian concepts of simplicity, self-similarity and purpose (Yock 2020).

From the NZ perspective it is gratifying to see that the Rutherford model of the atom dovetails beautifully with Newton’s scheme. Rutherford’s atoms literally provide the bricks and mortar of chemistry and biology. It is therefore perplexing that the current Standard Model of particle physics appears to diverge from the Newtonian picture. It is also surprising that the Standard Model diverges from Einstein’s theories of special and general relativity in the sense that those theories were founded on overarching physical principles, whereas the Standard Model appears not to be (Yock 2020).

This author believes, on quite general grounds, that alternative models of particle physics that attempt to satisfy Newton’s hypotheses might usefully be examined. One such attempt was described earlier. It follows the beat of a different drum in comparison to the Standard Model, and is seriously incomplete, but it is testable.

In what follows, some exploratory research that was conducted in New Zealand is described. The aim here is to demonstrate that exploratory research can be conducted in small, isolated countries such as Aotearoa New Zealand despite contrary opinions sometimes being voiced. The research described here draws, albeit indirectly, on Rutherford’s legacy, in the sense that important goals were tackled using conceptually simple techniques. New Zealand’s location on the globe was also utilised, although this cannot be regarded as a legacy of Rutherford.

Fragile nuclei from violent interactions

Here some observations that were carried out in New Zealand on the emission of unexpectedly fragile nuclei from high-energy nuclear collisions are described. They have bearing on the range and strength of the nuclear force as determined in the Rutherford era.

As mentioned earlier, several searches were made for free quarks from the 1960s onwards. Searches were also made for magnetic monopoles, anomalously heavy particles and highly electrically charged particles. Accelerators were generally used for these searches, but the cosmic radiation was also used following the historic successes that yielded the positron in 1933, the muon in 1937, the pi-meson or pion in 1947, and the strange particles from 1947 onwards (the well-known Ω⁻ particle followed, at an accelerator, in 1964).

Here we describe a search that was conducted at the University of Auckland in the 1980s. No new particles were found, but surprising results were nevertheless recorded. The Auckland search was carried out with the range telescope shown in Figure 5 (Yock 1986). This recorded the charges and masses of cosmic ray particles that traversed the telescope with speeds from ∼ 0.4c to ∼ 0.6c. In approximately two years of running, some tens of deuterons, tritons and ³He nuclei were recorded where none were expected. The null expectation was based on the assumption that only stopping nuclei, α particles or elementary particles could be present in the cosmic radiation at sea level after penetrating some 10 mean free paths for nuclear interactions, and that fragile nuclides such as deuterons, tritons and ³He nuclei would in particular be absent because of their low binding energies and large radii. It was therefore assumed that if long-lived particles with masses greater than those of nucleons were found, they could be candidates for new particles.
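
An idealised sketch of the kinematics (not the analysis chain actually used in the experiment) shows how the measurements separate the particle species. At a fixed speed the kinetic energy, and hence the range in the steel absorber, scales with the mass, while the ionisation in the scintillators scales as z²/β² and separates singly from doubly charged nuclides:

import math

def kinetic_energy_MeV(mass_MeV, beta):
    """Relativistic kinetic energy T = (gamma - 1) m c^2."""
    gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
    return (gamma - 1.0) * mass_MeV

beta = 0.5   # a speed within the telescope's acceptance, as measured by time of flight
for name, mass in [("proton", 938.3), ("deuteron", 1875.6), ("triton", 2808.9), ("helium-3", 2808.4)]:
    print(f"{name:9s}: T = {kinetic_energy_MeV(mass, beta):.0f} MeV at beta = {beta}")

At β = 0.5 a deuteron carries roughly twice, and a triton or ³He nucleus roughly three times, the kinetic energy of a proton, so time of flight and range together fix the mass.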

Figure 5. Range telescope operated at the University of Auckland in the 1980s to record the masses and charges of slow (0.4c < v < 0.6c) particles in the cosmic radiation at sea level. Charged particles were detected by 6 scintillators viewed by 12 photomultipliers, and by tracking spark chambers. Speeds of particles were determined by time-of-flight measurements, charges by ionizations in the scintillators, and masses by losses of energy in the steel absorber shown in blue in the left panel.

Towards the completion of data taking for the experiment, it was learnt that surprising detections of deuterons and tritons had already been made in accelerator experiments with heavy targets conducted at CERN (Cocconi et al. 1960) and the Brookhaven National Laboratory (Schwarzschild and Zupančič 1963). This was in the days before the World Wide Web and Google searches, and the results from CERN and Brookhaven were unknown at Auckland at the time.

A possible explanation for the CERN/Brookhaven results had also been offered by Schwarzschild and Zupančič (1963). It posited that outgoing nucleons emitted from heavy targets with similar speeds and directions could coalesce in the final state to form the large and fragile nuclei that were observed. This clearly required the presence of strongly attractive interactions between nucleons at large separations, a possibility that seemed feasible for the Rutherford model (Figure 2).

With the passage of time, and the recognition of the occurrence of strongly attractive forces caused by π and f₀(500) exchange between nucleons (Yock 2020), this conclusion has remained viable. However, such an explanation appears problematic for the Standard Model, as that model appears unable to produce a strongly attractive nucleon–nucleon force of long range (Ishii et al. 2007; Doi et al. 2017).

We note that the data recorded at the University of Auckland appear to be sound. In 2002 a high abundance of ³He was observed with the Alpha Magnetic Spectrometer (AMS) on board the International Space Station (Aguilar et al. 2002; Figure 4.33). The AMS collaboration reported an almost pure ³He composition of secondary cosmic rays with Z = 2 at low rigidities and trajectories that intersected the atmosphere, consistent with the Auckland results. In addition, the ALICE collaboration at CERN recently reported the production of ²H, ³H and ³He nuclides, and more complex bound states, in high-statistics observations made at the LHC (Braun-Munzinger and Dönigus 2019). Both these observations appear consistent with the prior observations described earlier from the 1960s and the 1980s.

In summary, the experimental observations on fragile nuclei appear to be consistent with the Rutherford model, and with the subnucleon model, but in likely conflict with the Standard Model as it presently stands.

Supernova SN1987A

One of Rutherford’s strengths, as mentioned earlier, was his ability to attract groups of productive researchers around him. Here an effort made since 1987 to form and support a Japan/NZ collaboration in astrophysics within New Zealand is described. It is still active today after 33 years (Yock 2012), and prospects look promising for continuation with the United States for another decade.

On 24 February 1987 a supernova (exploding star) was independently discovered in the Large Magellanic Cloud by Ian Shelton, observing from Chile, and by the late Albert Jones of New Zealand. The supernova, known as SN1987A, was the brightest supernova in 400 years, and it offered unique opportunities for research.

The first aim of the Japan/NZ collaboration, which initially included Australia and was named JANZOS, was to seek evidence for cosmic ray emission by SN1987A. The source of the cosmic radiation was unknown at the time, although the remnants of supernova explosions had long been thought to be a likely contributor (Axford 1994).

The energy density of the cosmic radiation in the galaxy is similar to that of starlight, and the energies per particle extend to ∼ 10²⁰ eV, some seven orders of magnitude greater than the energies of protons accelerated by the LHC. The origin of the cosmic rays is therefore of interest. Their trajectories in the galaxy are scrambled by the galactic magnetic field, so their origins cannot be determined by projecting backwards from their arrival directions at Earth.
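
The scrambling is easily quantified. The gyroradius of a relativistic proton in the galactic magnetic field (taken here to be ∼ 3 μG, a representative value assumed for this sketch) is r = p/(eB):

E_eV_values = [1e12, 1e15, 1e20]
B_tesla = 3e-10                # ~3 microgauss, an assumed representative galactic field
e_coulomb = 1.602e-19
c_m_s = 2.998e8
parsec_m = 3.086e16
for E_eV in E_eV_values:
    p = E_eV * e_coulomb / c_m_s        # ultra-relativistic momentum, p ~ E/c
    r_m = p / (e_coulomb * B_tesla)     # gyroradius r = p / (eB)
    print(f"E = {E_eV:.0e} eV: gyroradius ~ {r_m / parsec_m:.2g} pc")

At 10¹²–10¹⁵ eV the gyroradius is a minute fraction of the kiloparsec distances to plausible sources, so arrival directions carry no memory of the source; only neutral messengers such as γ-rays point back to their origin.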

The proximity of SN1987A in the Large Magellanic Cloud offered unique opportunities for observation. Within days of the discovery, the writer received a cable from a Japanese colleague, Yasushi Muraki, then at the University of Tokyo, requesting assistance to install a cosmic ray detector at high altitude in New Zealand to monitor SN1987A for the emission of cosmic rays.

Urgency was requested because the ejecta of the supernova were expected to present a thickness of order one nuclear mean free path to cosmic rays emitted by the supernova for about one year. During this window, cosmic rays striking the ejecta would be converted to gamma rays that could propagate rectilinearly to Earth, undeviated by the galactic magnetic field, and then be detected.

The unique opportunity offered by the request could hardly be turned down, and the setup shown in Figure 6 was installed at an altitude of 1600 m in the Black Birch range in Marlborough during the winter of 1987. It covered five hectares and detected cosmic rays from the showers of particles they produced in the Earth’s atmosphere. The detectors were sensitive to cosmic rays with energies from 10¹² to 10¹⁵ eV. The scintillators shown in Figure 6 acted as an all-sky 24/7 camera with state-of-the-art directional sensitivity of 1° at 10¹⁴ to 10¹⁵ eV energies. This was confirmed by verifying that shadows of the Sun and the Moon were present in the data.
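
Directions were reconstructed from the relative arrival times of the shower front across the array. The following is a minimal illustration of the idea, with made-up positions and times (it is not the JANZOS reconstruction code); a plane shower front is fitted by least squares:

import numpy as np

C_M_PER_NS = 0.2998   # speed of light in m/ns

# Hypothetical detector positions (m) and shower-front arrival times (ns), for illustration only
xy = np.array([[0.0, 0.0], [50.0, 0.0], [0.0, 50.0], [50.0, 50.0], [25.0, 80.0]])
t_ns = np.array([0.0, 50.0, 16.7, 66.7, 51.7])

# Plane front: c * t_i = c * t0 + l * x_i + m * y_i, with (l, m) direction cosines of the shower axis
A = np.column_stack([xy[:, 0], xy[:, 1], np.ones(len(t_ns))])
(l, m, ct0), *_ = np.linalg.lstsq(A, C_M_PER_NS * t_ns, rcond=None)
n = np.sqrt(max(0.0, 1.0 - l ** 2 - m ** 2))
print(f"zenith ~ {np.degrees(np.arccos(n)):.1f} deg, azimuth ~ {np.degrees(np.arctan2(m, l)):.1f} deg")

Nanosecond-level timing over baselines of tens of metres is what yields directional sensitivity of order 1°.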

Figure 6. MSc students David Hirst, Peter Norris and Mark Conway of the University of Auckland installing cabling for the JANZOS cosmic ray detector at Black Birch in Marlborough during the winter of 1987. The detector consisted of an array of 76 plastic scintillators that were sensitive to cosmic rays with energies 10¹⁴ eV to 10¹⁵ eV, three Cerenkov telescopes that were sensitive from 10¹² eV to 10¹³ eV, and a central electronics hut. The entire detector covering 5 hectares is shown in Yock (2012).

The supernova was monitored from 1987 to 1994, the prime aim being a search for γ-rays arriving from the direction of SN1987A via its ejecta as described earlier. Excesses from SN1987A and from other possible sources of high-energy radiation in the southern sky were searched for throughout this period, but all results proved negative, and the origin of the cosmic radiation remained unknown. The results of the JANZOS collaboration are available in a series of publications from the 1980s and 1990s; a full bibliography is given in Yock (2012).

The publications noted earlier include a report of an unsuccessful search for 100 TeV γ-rays from southern supernova remnants (Allen et al. 1995). In 2013 a detection of 60 MeV to 2 GeV γ-rays from the remnants of supernovae was reported by NASA’s Fermi Gamma-ray Space Telescope (Ackermann et al. 2013). The Fermi Space Telescope was sensitive to γ-rays of considerably lower energy than those detectable by JANZOS from Black Birch. This enabled successful detections to be made, and confirmation of supernova remnants as a contributory source of the cosmic radiation.

Despite the physically challenging qualities of the Black Birch site, which often included snow or sleet with winds too strong to stand in, several excellent theses were written by students from both Japan and New Zealand on the JANZOS project, and a healthy working relationship was formed between the contributing countries. The inclement weather presented a challenge that neither country wished to be first to give in to.

In 1994 a joint decision was made to redirect our skills in a new direction, namely the hunt for dark matter and exoplanets (then known as extra-solar planets) using the then-new technique of ‘gravitational microlensing’. The collaboration was rebranded ‘Microlensing Observations in Astrophysics’, or MOA, and moved from windswept Black Birch to the University of Canterbury Mt John Observatory in Canterbury. Thus was born the MOA project, which is described in the following section. It utilises a technique comparable to that used by Rutherford in the gold-foil experiment.

Hopefully the JANZOS collaboration did not vacate Black Birch too soon. Recently, two groups reported possible evidence for a compact remnant of SN1987A, possibly a neutron star (Cigan et al. 2019; Page et al. 2020). As Page et al. state, this presents an unprecedented opportunity to follow the early evolution of the compact object.

Exoplanets by gravitational microlensing

In 1994, at the close of the JANZOS project described earlier, the search for exoplanets was not fashionable. However, the following year saw the first discovery of a planet orbiting a Sun-like star beyond the Solar System. The discovery was made by Swiss astronomers Michel Mayor and Didier Queloz using a ‘radial velocity’ technique in which the presence of the planet was sensed through the reflex motion of its host star. Their discovery was rewarded with a Nobel Prize in 2019.

The discovery attracted wide attention, and the search for, and study of, extra-solar planets has since become one of the more popular fields in astronomy. Before then, the field was noted only occasionally by forward thinkers. Newton, for example, speculated in the Principia that ‘if the fixed stars are the centres of similar (planetary) systems, they will all be constructed according to a similar design’ (Newton 1713), and Winston Churchill published prescient thoughts in 1931 (Livio 2017).

New Zealand has played a significant role in the hunt for exoplanets by conducting observations with Japan (and more recently the United States) in the MOA project. This utilises the exotic technique of ‘gravitational microlensing’, which is illustrated in Figure 7 and compared there with Rutherford’s gold-foil experiment.

Figure 7. Rutherford’s gold-foil experiment (left panel) compared to Einstein’s gravitational microlensing phenomenon (right panel). In the latter process light from a distant star (the ‘source’ star) is deviated by the gravitational field of an intermediate star (the ‘lens’) with a planetary system before reaching an observer on Earth. Planets orbiting the lens star can deflect the light significantly, thus revealing their presence. In the former process the satellites (i.e. the electrons) are much lighter than the projectile (the α particle), and they reveal their presence by decelerating the projectile significantly.

The process of gravitational microlensing is better illustrated head-on than side-on, as shown in Figure 8.

Figure 8. A ‘source’ star shown in yellow moves left to right behind a ‘lens’ star in its rest-frame. Two moving images are formed: one interior to the Einstein ring shown by the dashed circle and one exterior. The Einstein ring denotes the position of the single image formed if the lens and source stars are perfectly aligned. Figure 8 is from Yock (2017).

If either of the images for the non-aligned case passes close to a planet orbiting the lens star, that image may be measurably perturbed, thus revealing the presence of the planet. The process is resonant: the lens star magnifies the source star, and planets orbiting the lens star perturb the magnified image. Two planetary fits to the data are generally possible, one interior to the ring and one exterior, as shown in green in the example given in Figure 8. Heavier planets produce larger perturbations, as do planets closer to the ring. Planetary systems with multiple planets produce multiple perturbations, and non-linear (‘caustic-crossing’) effects can also occur. Limb darkening of the source star needs to be allowed for in accurate fitting, as does the non-rectilinear motion of the Earth about the Sun.
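
A rough scaling makes the quoted sensitivity plausible. A planet with planet-to-star mass ratio q has its own Einstein ring smaller than the star’s by a factor √q, so a planetary perturbation typically lasts of order √q times the event timescale. A sketch with illustrative numbers (loosely modelled on the event of Figure 10, though the values here are assumptions):

import math

q = (3 * 3.0e-6) / 0.2      # mass ratio: 3 Earth masses (~3e-6 solar masses each) around a 0.2 solar-mass star
t_E_days = 10.0             # assumed Einstein-radius crossing time of the event
anomaly_hours = math.sqrt(q) * t_E_days * 24.0
print(f"q ~ {q:.1e}; expected planetary anomaly duration ~ {anomaly_hours:.0f} h")

Perturbations lasting a few hours within a month-long event are exactly what dense, round-the-clock monitoring from sites such as Mt John is designed to catch.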

The multiple images shown in Figure 8 are not resolvable with Earth-based telescopes. The lensing effect is seen only as an apparent magnification and dimming of the source star as it passes behind the lens. The peak magnification may be greater than 100 × in well-aligned events, but the process is rare. About one star in a million in the galactic bulge is measurably lensed by a star in the galactic disc at any one time, and each lensing event persists for about a month.
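
The quoted magnifications follow from the standard point-lens formula A(u) = (u² + 2)/(u√(u² + 4)), where u is the lens–source separation in units of the Einstein radius:

import math

def magnification(u):
    """Standard point-source, point-lens magnification."""
    return (u * u + 2.0) / (u * math.sqrt(u * u + 4.0))

for u in (1.0, 0.1, 0.01):
    print(f"u = {u}: A = {magnification(u):.1f}")

For u ≲ 0.01, i.e. alignment to about one per cent of the Einstein radius, the magnification exceeds 100; such ‘high-magnification’ events are both rare and especially sensitive to planets.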

To find microlensing events, ‘survey’ telescopes with wide fields of view are required to scan the galactic bulge continuously. Two such telescopes are the OGLE 1.3 m telescope in Chile and the MOA 1.8 m telescope at Mt John. Both have been in operation for several years. The latter is shown in Figure 9. Recently, Korea commenced operating a network of three 1.6 m survey telescopes located in Australia, South Africa and Chile, and Japan will soon commence operating a near-infrared survey telescope from South Africa. All these telescopes are located in the Southern Hemisphere because the centre of the galaxy lies in the southern sky.

Figure 9. Telescopes used by the MOA collaboration at the University of Canterbury Mt John Observatory in New Zealand. The nearer one has an aperture of 1.8 m, the more distant one 0.6 m. NZ astronomers Nicholas Rattenbury, Grant Christie, Marilyn Head, Warwick Kissling and Jennie McCormick are also shown. A ‘typical’ planet detection by gravitational microlensing is shown in Figure 10.

To date approximately 80 exoplanets have been discovered or co-discovered by the MOA group. Most were detected in microlensing events of relatively high magnification, of order 100. Searches for planets in such events were originally advocated by a US group (Griest and Safizadeh 1998) and subsequently endorsed strenuously in New Zealand (Rattenbury et al. 2002; Abe et al. 2013; Yock 2017).

The measured masses of the detected planets range from a few Earth masses to a few Jupiter masses. Most orbit red dwarfs at orbital radii of a few au. They are therefore cool, uninhabitable, Neptune-like planets, although considerably closer to their host stars than Neptune is to the Sun. They are generally difficult or impossible to detect via other means. They typically lie at distances of some kpc in the direction of the galactic bulge, and they provide useful statistical information.

It has been found that Neptune-like planets are common members of planetary systems, and also that Jovian planets frequently orbit red dwarfs. Amateur NZ astronomers have distinguished themselves by supplying crucial data on several planet discoveries.

Figure 10. The light curve of gravitational microlensing event OGLE-2016-BLG-1195Lb over a 12 d period in 2016. The data obtained by the OGLE telescope are shown in black, and those by the MOA telescope in red. The somewhat higher quality of the OGLE data, due to the better seeing in Chile, is apparent. The arrows indicate the discoveries of the microlensing event as reported from Chile and New Zealand respectively. A planetary perturbation at HJD = 2457569.1 is clearly visible. The fitting procedure yielded a cold planet of 3 Earth masses in a 2 au wide orbit around a 0.2 solar mass star at a distance of 7.1 kpc (Bond et al. 2017).

NASA currently plans to launch a telescope named the Roman Space Telescope in the current decade, and to devote a large fraction of telescope time to observations of exoplanets by microlensing (Penny et al. 2019). The Roman Space Telescope will have the same aperture as the Hubble Space Telescope, but a field of view approximately 10 × larger. The image quality will be similar to that of the Hubble, and this will enable stars in the galactic bulge to be resolved, a major advantage over current ground-based observations. The wavelength coverage of the Roman telescope will extend significantly into the infrared, enabling observations to be made close to the densely populated galactic plane.

Summary and conclusions

Rutherford’s discoveries on the structure of the atom have been described and considered in light of hypotheses published 200 years beforehand by Newton. The observations and the hypotheses were found to be entirely consistent. Subsequent developments in chemistry and biology were noted as lending further support for Newton’s hypotheses and Rutherford’s observations.

Today’s Standard Model of particle physics was also considered. It was found not to follow the spirit of Newton’s hypotheses. The fractional charges of quarks, their confinement, the lack of understanding of the interactions of quarks, and the apparent occurrence of redundant quarks were all seen as problematical for the Standard Model. The lack of a cohesive plan underlying the model, comparable to the founding assumptions of Einstein’s theories of relativity, was also seen as surprising.

We conclude that Rutherford’s research on the structure of the atom is fully consistent with the hypotheses of Newton from 200 years beforehand, but that today’s Standard Model of particle physics is subject to uncertainty if Newton’s hypotheses are accepted.

Needless to say, there is no guarantee that Newton’s hypotheses are correct, or that the Standard Model is incorrect. The latter is referred to as a model, and to this extent it may not represent reality. Indeed, it does not include gravity. The author’s viewpoint is that while the use of models in science is very often useful, the study of the elementary particles of matter is an exceptional topic where, by definition, one seeks reality and not a model.

An NZ-based precursor to the Standard Model was also considered earlier in this article. This was found to be consistent with Newton’s hypotheses and, although incomplete, testable with the proposed LHeC electron–proton collider described earlier, as was noted at CERN (Brüning and Klein 2020).

Some NZ-based experimental projects with links to Rutherford’s research were also described. All have received confirmation from other groups, and all continue to be pursued using improved techniques. They lend support to the practicality of small countries pursuing fundamental or new science.

In this regard we note a study reported recently by Wu et al. (2019). In an analysis of citations received by 65 million papers and related documents over the years 1954–2014, the authors concluded that small teams tend to produce the most disruptive science and technology, while large teams tend to develop existing science and technology. They further concluded that small teams search more deeply into the past and, if they succeed at all, succeed further into the future. It was suggested that this effect of team size occurs because the scientists, inventors and software designers involved in large teams are qualitatively different from those in small teams. It was concluded that both small and large teams are required, and that both should be supported.

This author concurs with those conclusions. In 2016 it was argued that New Zealand’s small size and isolation could stimulate new science (Yock 2016a). It was noted, for example, that the high plateau in Antarctica awaits the NZ astronomical community as the best astronomical site on the planet (Freeman 2016). Confirmation of this was reported recently (Ma et al. 2020). In general, several characteristics of New Zealand’s physical environment appear to the author to be well suited to scientific research, and the country’s small size and isolation as conducive to independent thought.

Acknowledgement

The author thanks his co-authors in the work cited herein, Steven Krivit for correspondence on the artificial transmutation of elements, and the reviewers for constructive comments.

Disclosure statement

No potential conflict of interest was reported by the author(s).