As seen a couple of postings down, dark matter has been a topic of recent interest on this blog. A few weeks ago, Texas Tech astrophysicist Ron Wilhelm gave a talk on campus about dark matter and his research on visible matter from the stars, which I was able to attend (on Ron's website, you'll notice his self-description as a "stellar astronomer;" whether the pun is intentional or not, I don't know).
An exciting development for Ron is that an article from his research team has just appeared in the prestigious general-science journal, Nature (Texas Tech news release; abstract from Nature). The article is about our Milky Way galaxy's two stellar halos.
This University of Tennessee document provides a nice overview of the visible and invisible components of a galaxy, with the dark-matter halo included among the latter. The dark matter halo is said to be "of unknown composition but [it] makes itself felt by its gravitational influence on the visible matter." This page also notes that:
In our own galaxy, the observed rotation of the stars and gas clouds indicates that the visible matter is surrounded by a halo of this dark matter containing the major portion of the total galaxy mass and extending very far beyond the visible matter.
Wikipedia provides this page on dark-matter halos, while this news release from the University of California, Santa Cruz discusses how computer simulations have been used to characterize them.
Also, the January 2008 issue of Discover magazine has awarded dark-matter research the No. 7 slot in the "100 Top Science Stories of 2007." Quoting from the dark-matter article:
In the early universe, astronomers believe, dark matter provided the gravitational scaffolding on which ordinary matter coalesced and grew into galaxies.
In addition to dark-matter halos surrounding visible-matter galaxies, dark matter galaxies also appear to exist. The Discover article notes that "small, dark matter-dominated satellite galaxies," which had been hypothesized but gone undetected, have now been found in greater numbers (this finding is elaborated in this Caltech news release).
As for future approaches to detecting dark matter, the Discover article cites the following:
Some astrophysicists believe that dark matter particles may occasionally annihilate each other, producing bursts of high-energy gamma rays. If the Milky Way has dark matter satellites, and if they do emit gamma rays, the Gamma-Ray Large Area Space Telescope, scheduled for launch in February, might detect them.
The GLAST search for dark matter, and related topics, are summarized in this University of California, Irvine document.
In conclusion, the various lines of research described herein should help the scientific community become less in the "dark" about this unusual type of matter.
Sunday, December 16, 2007
Sunday, October 14, 2007
Explanation of Science Behind Recent Nobel Prize
Over at Daily Kos -- which is primarily a political website (left-leaning), but which also discusses cultural and societal issues -- one of the writers uses a square-dancing analogy to explain the research that won last week's Nobel Prize in physics. Here's a link to the official Nobel Prize announcement, for a more traditional description of the research.
Saturday, October 13, 2007
Book Review: "Dark Cosmos"
I read Dan Hooper's book Dark Cosmos: In Search of Our Universe's Missing Mass and Energy a while back. In fact, I cited it in my late-August posting on supersymmetry. For whatever reason, however, I've kept putting off writing a review of Dark Cosmos.
My delay is in no way a reflection of my opinion of Dark Cosmos, as I'm quite favorably disposed toward it. This past Thursday, I attended a physics colloquium at Texas Tech pertaining to dark energy (given by Gary Hill of the University of Texas, Austin) and that's motivated me to write again about the book. So, here's my review.
Dark Cosmos introduces dark matter and dark energy for a general, well-educated audience, with both clarity and relative brevity (a little over 200 pages). Even though I'd read about the book's two key concepts previously, I was still a little bit in the "dark" about them, so reading the book was a welcome learning experience. For those who might want to consider reading the book, an excerpt of it from Smithsonian magazine is available here.
Hooper writes considerably more about dark matter than about dark energy, given that the former has been studied longer and is better understood than the latter. As a way to compile the main ideas from the book in an easily digestible format, I made the following chart (which you can click on to enlarge).

The initial chapters of the book, which are on dark matter, are written almost in detective-novel style. Possible candidates for what might comprise dark matter are discussed -- and cast in doubt -- sequentially. One class of objects that had been considered candidates is:
...massive compact halo objects, or MACHOs. The group includes dead stars like white dwarfs, stars that never burned like brown dwarfs, strange exotic stellar entities like neutron stars and black holes, and large Jupiter-like planets... (p. 36).
On p. 41, however, Hooper reviews research suggesting that MACHOs are not plentiful enough to constitute the dark matter, rendering them "rather poor candidates..." As an aside, I found Hooper's tutorial on the life cycle of stars and related entities to be very helpful.
Other dark-matter candidates discussed include neutrinos and supersymmetric superpartners.
The book then shifts to dark energy, which in turn sparks a discussion of the Big Bang and inflationary cosmology. The latter concepts are fascinating in their own right, so I think I'll write a separate entry about them in the future.
As Hill alluded to in his Texas Tech talk, Science magazine, upon its 125th anniversary, identified 125 outstanding scientific questions. Heading the list is the question of "What Is the Universe Made Of?," which implicates dark matter and dark energy. For an excellent introduction to these concepts, I highly recommend Hooper's Dark Cosmos.
Saturday, September 15, 2007
Large Hadron Collider Part VIII
Today's posting will be the eighth and final entry in my series on the Large Hadron Collider (LHC). I wanted to end with a brief discussion of possible new things that may be found at the LHC, beyond the Higgs boson and supersymmetric particles, which I've already covered.
At this moment, however, Peter Woit's "Not Even Wrong" blog is reporting rumors that technical difficulties may delay the start of physics data collection at the LHC until 2009. Thus, we may end up waiting longer than expected to find out if the various proposed types of new particles are discovered.
One development that some researchers are looking toward is the possible appearance of evidence for extra spatial dimensions beyond the three we're familiar with. String theory, among other ideas, posits the existence of extra dimensions.
The December 2005-January 2006 issue of Symmetry magazine included an article entitled "The Search for Extra Dimensions," which describes the kinds of experimental signatures that could point to extra dimensions. The article focuses on research at Fermilab's Tevatron collider, but also mentions the LHC. A key step in the search would be finding evidence for the graviton, an as-yet-undiscovered particle that's posited to communicate the gravitational force. According to the article:
One way Fermilab experimentalists... hope to detect extra dimensions is to catch a graviton in the act of disappearing into another dimension. Collisions create a symmetrical ball of energy and, like fireworks, particles should spray in all directions. A tell-tale sign of extra dimensions would be a collision in which visible particles sprayed only in one direction, suggesting that an invisible particle traveled in the other direction. This particle could be the key to extra dimensions—a graviton, leaving our visible universe and disappearing into a fourth spatial dimension. Unfortunately, gravitons are not the only invisible particles. Lightweight particles called neutrinos, which very rarely interact with matter, can also travel right through a detector without a trace.
The ability to search for extra dimensions hinges on a researcher's ability to track neutrinos. "If you don't know your neutrinos, you don't know anything about extra dimensions," says [Joe] Lykken. By calculating the probability of creating a neutrino and comparing that to the number of asymmetrical events observed in the Tevatron, Fermilab researchers hope to discover an excess of events unaccounted for by neutrinos. Such a discrepancy could be the first experimental evidence of gravitons disappearing into extra dimensions.
The article concludes as follows:
But even if the Tevatron fails to find evidence of extra dimensions, CERN's LHC will continue the search in 2007. With significantly more energy, the LHC will be able to probe ever smaller radii.
"If we haven't found extra dimensions with the Tevatron by then, the LHC may still do it," says Lykken. "This is the type of discovery we should be able to make in the next five years."
I'll leave you with this Seed magazine piece from July 2006, in which several prominent scientists discuss their visions for discoveries at the LHC.
Wednesday, September 05, 2007
Physics of Baseball -- Flight Trajectories of Home Runs
University of Illinois physics professor Alan Nathan, whom I've gotten to know in recent years through the annual conferences of SABR (Society for American Baseball Research), has just posted a brief analysis of the flight trajectory of Barry Bonds's record-breaking 756th career home run. If you watch baseball games on television, from time to time you've probably heard announcers provide estimates, following a home run, of how far the ball has travelled. Professor Nathan's write-up demonstrates how such estimates can be obtained.
Sunday, August 26, 2007
Large Hadron Collider Part VII
In addition to the Higgs boson (discussed in the previous entry, below), physicists are also hoping to find so-called supersymmetric "superpartners" of the particles of the Standard Model.
To review the Standard Model, we have fermions (matter particles such as quarks, electrons, and neutrinos, which all have half-integer spin) and bosons (force-carrying particles such as photons, for the electromagnetic force, and gluons, for the strong nuclear force, which all have integer spin).
As depicted on this page by physics instructor Bram Boroson, supersymmetry involves each standard fermion having a supersymmetric boson partner, and each standard boson having a supersymmetric fermion partner.
For the boson superpartner of a fermion, the letter "s" is placed at the beginning of the fermion's name, thus creating terms such as squark and selectron.
For the fermion superpartner of a boson, the suffix "-ino" is added to the bosonic name, yielding terms such as photino and gluino.
Boroson also raises a very important issue: why physicists would embrace supersymmetry when it "doubl[es] the number of kinds of particles." As he puts it, "There's got to be some kind of payoff, where this makes the theory simpler in the long run!"
Various introductions to supersymmetry are available. These include online sources such as Boroson's and this web document by Michal Szleper. Dan Hooper's (2006) book Dark Cosmos (which I will review in an upcoming posting) and Gordon Kane's (2000) book Supersymmetry (particularly the section entitled "Some Mysteries Supersymmetry Would Solve", pp. 55-62) also are useful.
Based on these sources, four potential benefits of supersymmetry stand out to me:
1. According to Grand Unification Theories (GUTs), the strong nuclear, weak nuclear, and electromagnetic forces should converge to the same strength at a sufficiently high energy scale. As shown in Figure 7 of this report by Swagato Banerjee, such convergence does not occur within the Standard Model, but is projected to occur within supersymmetry.
2. Certain types of supersymmetric particles (or even combinations thereof) are possible candidates for something known as dark matter. This topic is discussed extensively in the aforementioned book by Hooper, in the chapter entitled "A Grand Symmetry."
3. According to Hooper, early string theories had a problem with tachyons (something that is "defined as a particle that travels faster than the speed of light," but also can be viewed "in some other frame of reference [as] moving backward in time") (p. 123).
Hooper goes on to note, however, that the first superstring revolution came up with a way to overcome the tachyon problem: "Namely, it was found that if supersymmetry was included in a string theory, the tachyons present in the theory would naturally disappear" (p. 124). The term superstring theory thus derives from this contribution of supersymmetry.
4. Supersymmetry also appears to keep the Higgs boson to a reasonable mass. A May 15, 2007 NY Times article on the Large Hadron Collider states: "These superpartners cancel out all the quantum effects that make the Higgs mass skyrocket. 'Supersymmetry is the only known way to manage this,' Dr. [Joe] Lykken said."
Book authors Kane and Hooper each discuss detection of supersymmetric superpartners at particle colliders. Kane writes that:
The superpartners that weigh the least are the ones most likely to be produced first, because it takes less energy to produce them... We don't know for sure which are the lightest ones, but most approaches suggest they might be the photino, Wino, Zino, and higgsinos (p. 157).
Adds Hooper:
Assuming that supersymmetry exists in nature, there should be a slew of new particles with masses between roughly a few tens and a few thousands of giga-electron volts -- perfectly suited for discovery at the LHC (p. 224).
Not all researchers find the idea of supersymmetry convincing. One such skeptic is Lee Smolin, author of the 2006 book The Trouble with Physics. Says Smolin, "My own guess, for what it's worth, is that (at least in the form so far studied) supersymmetry will not explain the observations at the LHC" (p. 78).
Further detail on Smolin's views is available in my review of his book.
Sunday, August 12, 2007
Large Hadron Collider Part VI
Having completed the first part of my series on the Large Hadron Collider (LHC) -- pertaining to the workings of particle colliders -- it's time to move on to the second part, dealing with new particles (or types of particles) physicists will be looking out for.
The first priority would appear to be discovery of something called a Higgs boson. A New York Times article on the LHC from May 15, 2007 notes that: “The new collider was specifically designed to hunt for the Higgs particle, which is key both to the Standard Model and to any greater theory that would supersede it.”
Tonight's entry will first detail what a field of Higgs particles is hypothesized to embody, drawing from Brian Greene's (2004) book The Fabric of the Cosmos. How a Higgs boson is likely to be detected -- which was already touched upon in my previous, July 14, 2007, entry on particle detection, in connection with an online interactive activity called "The Hunt for Higgs" -- will then be addressed briefly. Finally, I'll discuss the implications of various potential Higgs-related findings at the LHC.
Envisioning what a Higgs field is
What I grasped from Greene's book is that Higgs fields address two key theoretical questions: why different types of particles have different masses, and how different forces of nature can be unified.
Higgs fields create barriers to the movement of other kinds of particles, with different types of particles encountering different degrees of resistance. The more resistance a type of particle encounters, the more mass it is said to have. Higgs fields have been analogized to molasses or a bunch of paparazzi photographers. As Greene explains:
If we liken a particle's mass to a person's fame, then the Higgs ocean is like the paparazzi: those who are unknown pass through the swarming photographers with ease, but famous politicians and movie stars have to push much harder to reach their destination (p. 263).
Greene also notes that:
Photons pass completely unhindered through the Higgs ocean and so have no mass at all. If, to the contrary, a particle interacts significantly with the Higgs ocean, it will have a higher mass. The heaviest quark (it's called the top quark), with a mass that's about 350,000 times an electron's, interacts 350,000 times more strongly with the Higgs ocean than does an electron; it has greater difficulty accelerating through the Higgs ocean, and that's why it has a greater mass (p. 263).
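As a rough check on those numbers (my own back-of-the-envelope arithmetic, not Greene's): the electron's rest energy is about 0.511 MeV, so 350,000 × 0.511 MeV ≈ 180,000 MeV, or roughly 180 GeV -- indeed in the neighborhood of the measured top-quark mass of about 173 GeV.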
In terms of unifying the different forces, Greene discusses how photons ("messenger particles" of the electromagnetic force) and W and Z particles (particles of the weak nuclear force) were indistinguishable at one point (known as "electroweak unification"), but are now considered to be different, due to the influence of the Higgs field.
Glashow, Salam, and Weinberg:
...realized that before the Higgs ocean formed, not only did all the force particles have identical masses -- zero -- but the photons and W and Z particles were identical in essentially every other way as well... At high enough temperatures, therefore, temperatures that would vaporize today's Higgs-filled vacuum, there is no distinction between the weak nuclear force and the electromagnetic force... The symmetry between the electromagnetic and weak forces is not apparent today because as the universe cooled, the Higgs ocean formed, and -- this is vital -- photons and W and Z particles interact with the condensed Higgs field differently. Photons zip through the Higgs ocean... and therefore remain massless. W and Z particles... have to slog their way through, acquiring masses that are 86 and 97 times that of a proton, respectively (excerpts from pp. 264-265).
A common term used to describe this phenomenon is "symmetry breaking."
Detecting the Higgs
The Higgs boson is posited to decay into other particles, thus forcing scientists to look for a "signature" indicative of the Higgs. This Physics World article (especially the section entitled "Getting to know the Higgs") goes into further detail, such as in the following excerpt:
The Higgs signature depends on its mass. A relatively light Higgs with a mass of about 120 GeV will decay into pairs of B-mesons, tau leptons or photons, which will be easy to produce but hard to detect among the background of other processes and particles. If the Higgs is heavy, about 160 GeV, it will decay to pairs of W or Z bosons. These will be harder to produce, says [CERN's Jos] Engelen, but easier to spot. If there is no Higgs mechanism, the LHC will see scattering events between pairs of W bosons that would otherwise be "absorbed" by the standard Higgs mechanism.
Implications of possible Higgs-related findings
The March 23, 2007 issue of Science ran a spread of articles on the LHC, including one entitled "Physicists' Nightmare Scenario: The Higgs and Nothing Else." Probably the most succinct summary of the situation is revealed in the following quote:
Discovering the Higgs would complete the standard model. But finding only the Higgs would give physicists little to go on in their quest to answer deeper questions, such as whether the four forces of nature are somehow different aspects of the same thing... (pp. 1657-1658).
Were the LHC to find additional types of new particles, such as supersymmetric superpartners, or evidence of extra dimensions, for example, these would provide strong stimuli for further theorizing and discovery. Or, as the Science article notes:
If, on the other hand, the LHC sees no new particles at all, then the very rules of quantum mechanics and even Einstein's special theory of relativity must be wrong... (p. 1657).
For background on the Standard Model of particle physics, which the Higgs boson would round out, see this review I wrote of a book on the topic.
Saturday, July 14, 2007
Large Hadron Collider Part V
To conclude the series of postings on technical aspects of the Large Hadron Collider, tonight we will look at how physicists interpret the information they receive in their particle detectors.
The Science Museum (UK) has an online interactive activity called "The Hunt for Higgs," which uses the example of a Higgs boson, a particle many physicists expect to be found at the LHC. In this demonstration activity, viewers click on the animated image of a detector and are taken through the steps that would be involved in identifying a potential Higgs particle.
An important concept illustrated by the demonstration exercise is that massive particles -- such as the Higgs -- can be very short-lived, and can thus be detected only by the presence in the detector of combinations of other, lighter particles. As the demonstration indicates, however, the combinations are only ones physicists think might be the "signatures" of a Higgs.
The discovery of the top quark -- the heaviest of the quarks -- in 1995 serves as an excellent case study in how the existence of a highly transient particle was inferred. According to the linked Wikipedia page on the top quark, "The Standard Model predicts its lifetime to be roughly 1×10^−25 seconds...", which would be .0000000000000000000000001 of a second.
This Fermilab information page on the discovery of the top quark, and the process of confirming its existence, is very useful. In particular, the following two headings on the Fermilab page are worth clicking on for further detail: "Is it a top quark? The signature of a top event" and "How do we know when we've found the top quark?"
For all of the hype and expense of the LHC, it is important to note that it is not expected to offer the ultimate in detector clarity. Texas A&M University physicist Teruki Kamon, whom I once saw give a talk at Texas Tech, has a slide in some of his PowerPoint presentations that uses a photographic analogy to demonstrate the upgrades in information (quantity and quality) that will be available as researchers move from Fermilab's Tevatron to the LHC to the proposed International Linear Collider (ILC).
If you go to this slide show and then scroll down to page 3, you'll see a sequence of three depictions of the same picture (a girl jumping in the air, with her shadow showing below). At the Tevatron, according to this analogy, only perhaps the bottom one-fifth of the picture would be observable, and with a very fuzzy appearance, at that. At the LHC, the full picture would be available, but would still be fuzzy. Only at the ILC would the picture be both fully available and sharp.
The next set of LHC-related postings will discuss new types of particles and other phenomena that scientists hope to find.
Friday, June 29, 2007
Large Hadron Collider Part IV
Tonight, we return to my series on the Large Hadron Collider (LHC). I previously wrote about one very important property of a collider, its energy. Tonight, I write about another important property, luminosity. Dictionary definitions of luminosity typically refer to the presence of light or brightness; in other words, a luminous object is one that is easy to see.
Conceptually, the term luminosity as used in particle physics also refers to ease of detecting something. In his book, Not Even Wrong (associated with a blog of the same name), Peter Woit states the following:
Besides the energy of its [particle] beams, the most important characteristic of any collider is its "luminosity." The luminosity of a collider is a measure of how many particles are in the beam and how small the interaction region is in which the beams coming from opposite directions are brought together inside a detector (p. 19).
The most layperson-friendly description of the units of luminosity I've been able to find is "collisions per square centimeter per second...", from this Berkeley Lab document.
A greater density of particles -- more of them, packed into a smaller interaction region -- thus produces more collisions, making rare processes easier to detect. The LHC's design luminosity is 10^34 in these units (that's a lot of zeroes).
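To see what that number buys (a rough illustration of my own, not drawn from the linked documents): the expected rate for a given process is the luminosity multiplied by that process's cross section. For a fairly rare process with a cross section of one picobarn (10^-36 square centimeters), a luminosity of 10^34 per square centimeter per second gives 10^34 × 10^-36 = 10^-2 events per second -- about one event every couple of minutes, or on the order of a hundred thousand events over a year of running.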
Increasing colliders' luminosities, however, is very difficult to do, as discussed by Woit and also in this Stanford Linear Accelerator Center (SLAC) document. The SLAC document also includes a graph that shows the luminosity levels for a number of colliders, including the LHC.
Luminosity is serious business. Major international conferences known as LUMI '06 and LUMI '05 have been held to track research developments in this area.
Friday, June 15, 2007
Book Review: "The Physics of Basketball"
I thought I'd take a break from my series on the Large Hadron Collider (see the three previous postings below) to write a mini-review of a book I recently finished reading. The book is The Physics of Basketball, by U.S. Naval Academy professor John Fontanella, himself a former player.
It's actually more of a lengthy monograph (~130 pages) than a typical book. The book concentrates heavily on the trajectory of basketball shots, both of the "nothing but net" variety and those that bounce around off the backboard and/or rim. Fontanella's enumeration of all the forces acting on a released basketball shot (e.g., gravity, the Magnus force, drag, and buoyancy) may have some similarity to the talk University of Illinois physicist Alan Nathan presented at last year's Society for American Baseball Research (SABR) meeting on the aerodynamics of a flying baseball.
Fontanella also addresses a few other issues besides basketball shooting. One is the illusion of "hang time," or a player (former NBA star Gus Johnson being a favorite of the author) who's driving to the hoop "hanging or floating in air, apparently defying gravity and the laws of physics" (p. 121).
The author has a ready explanation for this phenomenon -- a player's initial upward movement is quick, but gravity slows the upward motion. According to a calculation by Fontanella, "a jumping player spends only 29% of the time in the bottom half of the jump. Correspondingly, a player must spend 71% of the time in the upper half of the jump" (p. 122). Thus, we see prolonged periods of a player high in the air.
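For the mathematically inclined, those percentages are easy to reproduce (my own back-of-the-envelope kinematics, not a quote from the book). A player leaving the ground with speed v0 reaches the apex at time T = v0/g and maximum height h = v0^2/(2g). Setting the height v0*t - (1/2)g*t^2 equal to h/2 and solving the quadratic gives t = T(1 - 1/sqrt(2)), or about 0.29T. So roughly 29% of the ascent is spent below the halfway point and 71% above it; by symmetry, the same split holds on the way back down.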
Another question Fontanella addresses -- for those of you dying of curiosity -- is why you hear so much squeaking of the players' basketball shoes when you're observing from near the court (p. 115).
There’s an online NPR interview (link) with Fontanella you can listen to, which might help you decide if you want to read the book.
Saturday, June 09, 2007
Large Hadron Collider Part III
Continuing on with our series on the Large Hadron Collider (LHC), today we'll address its energy aspects. Specifically, two primary questions are why high energies are needed, and how they are achieved. First, though, let's briefly go over the terminology for energy scales of magnitude.
The unit of energy referred to in particle physics is the electronvolt (eV). The most straightforward explanation of an electronvolt comes from this Fermilab document, which characterizes 1 eV as "very tiny." As illustrated on the page, if an electron passes through a 1.5 V battery (which many of you probably have in your home), that would represent 1.5 eV of energy.
Further, from dealing with computer memory sizes (bytes) or other measurement contexts, many of you are probably familiar with the terminology for thousandfold orders of magnitude, such as kilo (1,000 or 10^3, where ^ stands for raising to a power) and mega (1 million or 10^6). Continuing onward, we have giga (1 billion or 10^9) and tera (1 trillion or 10^12). This University College London document lists these milestones.
In fact, the Tevatron collider at Fermilab is so named because of its ability to reach 1 tera electronvolts ("TeV") of energy. As discussed below, the LHC will go even higher.
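As an aside on scale (my own comparison, not from the linked documents): 1 eV is about 1.6 × 10^-19 joules, so even 1 TeV is only about 1.6 × 10^-7 joules -- roughly the kinetic energy of a flying mosquito. What makes collider energies impressive is not the total amount of energy, but the fact that it is packed into a single subatomic particle.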
So why do we need ever-increasing energy performance from our particle colliders? It's because the new particles scientists are looking for tend to be very massive and, as a result of the direct relationship between energy and mass in Einstein's E = mc^2 (c standing for the speed of light, which gets squared), the creation of high-mass particles requires high energy. This Canadian document makes the basic point:
It is realized that the mass-energy relation (E = mc^2) provides a new way to get information about particles. If particles could be made very energetic and then used to collide with other particles, some of their energy could be converted into the creation of previously unknown particles. When particles are produced in a collision, they are not particles that were somehow inside the colliding ones. They are really produced by converting the collision energy into mass, the mass of other particles… Which particles will be produced is partly determined by their mass - the lighter they are, the easier it is to produced them [sic], other things being equal - and also by the probabilities calculated from the Feynman diagrams.
One type of particle that is eagerly being sought at the LHC is something known as the Higgs boson, about which I will have much to say later. The New Yorker magazine article that I linked to in the opening essay of this series talks about how the Higgs has not been found at the Tevatron, so the LHC, with its higher energy, is the next hope.
By now, the Higgs has been sought for so long that physicists have a pretty clear idea of how much it must weigh. The lower bound is around 120 times more than a proton... The upper bound is about 210 times as much as a proton. The most powerful collider currently in operation is Fermilab’s Tevatron, outside Chicago. The Tevatron, which smashes protons into antiprotons, can accelerate particles to an energy of just under a trillion electron volts, or one TeV... So far, the Tevatron has failed to reveal the Higgs, though physicists there are actively looking for it. The L.H.C. will accelerate particles to seven TeV, which means that it will be seven times as powerful as the Tevatron. This should be more than enough energy to produce the Higgs, if there is a Higgs to produce. It may also be enough to uncover much more than the Higgs... (page 3 of online New Yorker article).
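To put those bounds into collider units (my own conversion, assuming a proton rest energy of roughly 0.94 GeV): 120 × 0.94 GeV ≈ 113 GeV and 210 × 0.94 GeV ≈ 197 GeV, so the quoted search window corresponds to a Higgs mass of very roughly 110-200 GeV -- comfortably below the multi-TeV collision energies the LHC is designed to reach, assuming there is a Higgs to find.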
OK, so finally, how have accelerators and colliders over the years been able to keep raising the energy levels capable of being studied? This Stanford Linear Accelerator Center (SLAC) document provides an excellent summary of the historical evolution of accelerators and colliders, including their increasing energy levels. Referring to a graph of accelerators’ particle energy plotted against a timeline from 1930-1990, the report states the following:
One of the first things to notice is that the energy of man-made accelerators has been growing exponentially in time. Starting from the 1930s, the energy has increased – roughly speaking – by about a factor of 10 every six to eight years. A second conclusion is that this spectacular achievement has resulted from a succession of technologies rather than from construction of bigger and better machines of a given type (pp. 38-39).
Further:
The [e]nergy that really matters in doing elementary particle physics is the collision energy – that is, the energy available to induce a reaction, including the creation of new particles… If two particles of equal mass traveling in opposite directions collide head on, however, the total kinetic energy of the combined system after collision is zero, and therefore the entire energy of the two particles becomes available as collision energy. This is the basic energy advantage offered by colliding-beam machines, or colliders (p. 40).
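The standard textbook way to quantify that advantage (my summary, not a quote from the SLAC report) is this: if two beams, each of energy E, collide head on, essentially the full 2E is available to create new particles; if instead a beam of energy E strikes a stationary target particle of mass m, the available collision energy is only about sqrt(2Emc^2), which grows only as the square root of the beam energy. At TeV-scale beam energies, a fixed-target setup would therefore waste the overwhelming majority of the energy on the forward motion of the collision debris.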
The brief summaries and excerpts above are meant simply to convey rudimentary ideas on the topic. The linked documents are, of course, available for you to read in toto, should you wish further information. Additional web documents and books are also available, and shouldn't be too hard to find via some web searching.
The unit of energy referred to in particle physics is the electronvolt (eV). The most straightforward explanation of an electronvolt comes from this Fermilab document, which characterizes 1 eV as "very tiny." As illustrated on the page, if an electron passes through a 1.5 V battery (which many of you probably have in your home), that would represent 1.5 eV of energy.
Further, from dealing with computer memory sizes (bytes) or other measurment contexts, many of you are probably familiar with the terminology for thousandfold orders of magnitude, such as kilo (1,000 or 10^3, where ^ stands for raising to a power) and mega (1 million or 10^6). Continuing onward, we have giga (1 billion or 10^9) and tera (1 trillion or 10^12). This University College London document lists these milestones.
In fact, the Tevatron collider at Fermilab is so named because of its ability to reach 1 tera electronvolts ("TeV") of energy. As discussed below, the LHC will go even higher.
So why do we need ever-increasing energy performance from our particle colliders? It's because the new particles scientists are looking for tend to be very massive and, as a result of the direct relationship between energy and mass in Einstein's E = mc^2 (c standing for the speed of light, which gets squared), the creation of high-mass particles requires high energy. This Canadian document makes the basic point:
It is realized that the mass-energy relation (E = mc^2) provides a new way to get information about particles. If particles could be made very energetic and then used to collide with other particles, some of their energy could be converted into the creation of previously unknown particles. When particles are produced in a collision, they are not particles that were somehow inside the colliding ones. They are really produced by converting the collision energy into mass, the mass of other particles… Which particles will be produced is partly determined by their mass - the lighter they are, the easier it is to produced them [sic], other things being equal - and also by the probabilities calculated from the Feynman diagrams.
One type of particle that is eagerly being sought at the LHC is something known as the Higgs boson, about which I will have much to say later. The New Yorker magazine article that I linked to in the opening essay of this series talks about how the Higgs has not been found at the Tevatron, so the LHC, with its higher energy, is the next hope.
By now, the Higgs has been sought for so long that physicists have a pretty clear idea of how much it must weigh. The lower bound is around 120 times more than a proton... The upper bound is about 210 times as much as a proton. The most powerful collider currently in operation is Fermilab’s Tevatron, outside Chicago. The Tevatron, which smashes protons into antiprotons, can accelerate particles to an energy of just under a trillion electron volts, or one TeV... So far, the Tevatron has failed to reveal the Higgs, though physicists there are actively looking for it. The L.H.C. will accelerate particles to seven TeV, which means that it will be seven times as powerful as the Tevatron. This should be more than enough energy to produce the Higgs, if there is a Higgs to produce. It may also be enough to uncover much more than the Higgs... (page 3 of online New Yorker article).
OK, so finally, how have accelerators and colliders over the years been able to keep raising the energy levels capable of being studied? This Stanford Linear Accelerator Center (SLAC) document provides an excellent summary of the historical evolution of accelerators and colliders, including their increasing energy levels. Referring to a graph of accelerators’ particle energy plotted against a timeline from 1930-1990, the report states the following:
One of the first things to notice is that the energy of man-made accelerators has been growing exponentially in time. Starting from the 1930s, the energy has increased – roughly speaking – by about a factor of 10 every six to eight years. A second conclusion is that this spectacular achievement has resulted from a succession of technologies rather than from construction of bigger and better machines of a given type (pp. 38-39).
Further:
The [e]nergy that really matters in doing elementary particle physics is the collision energy – that is, the energy available to induce a reaction, including the creation of new particles… If two particles of equal mass traveling in opposite directions collide head on, however, the total kinetic energy of the combined system after collision is zero, and therefore the entire energy of the two particles becomes available as collision energy. This is the basic energy advantage offered by colliding-beam machines, or colliders (p. 40).
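To see how large the collider advantage is, here is a minimal sketch comparing the collision (center-of-mass) energy of a head-on proton-proton collider with that of the same beam striking a stationary proton, using standard relativistic kinematics (the 1 TeV beam energy is just an illustrative value):

# Collision energy: head-on proton-proton collider vs. a fixed proton target.
from math import sqrt

M_P_GEV = 0.938      # proton rest energy in GeV
E_BEAM_GEV = 1000.0  # beam energy in GeV (1 TeV, illustrative)

# Two equal beams colliding head on: essentially all the energy is available.
e_cm_collider = 2 * E_BEAM_GEV

# Beam on a stationary proton: s = 2*E*m + 2*m^2 in natural units (c = 1).
e_cm_fixed_target = sqrt(2 * E_BEAM_GEV * M_P_GEV + 2 * M_P_GEV ** 2)

print("collider:    ", e_cm_collider, "GeV")                 # 2000 GeV
print("fixed target:", round(e_cm_fixed_target, 1), "GeV")   # about 43 GeV

A 1 TeV beam on a fixed target yields only about 43 GeV of usable collision energy, whereas two 1 TeV beams meeting head on provide the full 2 TeV.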
The brief summaries and excerpts above are meant simply to convey rudimentary ideas on the topic. The linked documents are, of course, available for you to read in toto, should you wish further information. Additional web documents and books are also available, and shouldn't be too hard to find via some web searching.
Monday, May 28, 2007
Large Hadron Collider Part II
Following up on the introductory essay (below) to my series on the Large Hadron Collider (LHC), today I provide the first substantive posting, on the basics of particle-physics colliders.
A good place to start is with an analogy from George Mason University professor Robert Oerter, in his book The Theory of Almost Everything (which I reviewed here). The analogy is presented primarily on pp. 135-137, but the remainder of the chapter also provides important historical information.
To paraphrase Oerter's analogy, let's say we have a sponge-rubber Nerf football that may have objects embedded within it, such as a baseball or even a ball made of steel.
We are not allowed to squeeze or cut open the Nerf ball, nor can we see inside of it. Instead, we must fire pellets into the ball and make inferences about the inner content of the ball by what happens to the fired pellets. Depending upon whether pellets go completely through the ball and come out, get stuck in the ball, or ricochet back out at some angle, estimates of the size, position, and hardness of the inner objects can be made.
Similarly, it is through inferences from records of particle collisions that physicists learn about the micro world of matter. I will have a later write-up specifically on detection of particle collisions.
In looking at the LHC and earlier facilities such as Fermilab's Tevatron, CERN's Large Electron Positron collider (whose tunnel has been converted to use with the LHC), and Stanford Linear Accelerator Center, three major distinctions arise. These distinctions, and the pros and cons of different alternatives, are discussed below.
Fixed vs. Particle-Beam Targets
One distinction is between experiments that aim a beam of particles at a fixed target, and those that aim two particle beams into each other. Quoting from Peter Woit's book Not Even Wrong, "Accelerators that collide two beams together are now called colliders..." (p. 15).
This CERN document discusses the arguments in favor of fixed-target and beam-on-beam (collider) designs. In short, colliders are said to be more economical to run and can produce higher energies. On the other hand, "Firing a particle beam into a solid metal target or large tank of liquid ensures that almost every particle will collide with a nucleus but getting two particle beams to interact is much harder."
Lepton (e.g., electron) vs. Hadron (e.g., proton) Colliders
The same CERN document as discussed in the preceding section also addresses lepton vs. hadron colliders. Leptons are said to be advantageous for precision energy measurement, whereas hadrons are better suited to discovery of new particles. Electrons (one type of lepton) also have a problem with something called "synchrotron radiation," as discussed below.
For additional background on leptons and hadrons, you can look over a couple of my previous postings. One on leptons is available here, whereas one on hadrons can be found here.
Linear vs. Circular
This Wikipedia entry on particle accelerators lists some advantages of circular designs over linear ones:
In the circular accelerator, particles move in a circle until they reach sufficient energy. The particle track is typically bent into a circle using electromagnets. The advantage of circular accelerators over linear accelerators (linacs) is that the ring topology allows continuous acceleration, as the particle can transit indefinitely. Another advantage is that a circular accelerator is relatively smaller than a linear accelerator of comparable power (i.e. a linac would have to be extremely long to have the equivalent power of a circular accelerator).
Circular accelerators have evolved from the original cyclotrons to the more elaborate synchrotrons. As Woit explains, "Higher-energy accelerators could not be built using a single magnet, but instead used a doughnut-like ring of smaller magnets. This design was called a 'synchrotron'..." (p. 13).
The technical mathematical name for a doughnut-like structure is a torus. In fact, one of the experiments to be carried out at the LHC is known as ATLAS, for "A Toroidal LHC ApparatuS." See the Wikipedia page on the LHC for further information on the individual experiments.
According to the Wikipedia page on synchrotrons:
While a cyclotron uses a constant magnetic field and a constant-frequency applied electric field, and one of these is varied in the synchrocyclotron, both of these are varied in the synchrotron...
In a cyclotron the maximum radius is quite limited as the particles start at the center and spiral outward, thus this entire path must be a self-supporting disc-shaped evacuated chamber... The arrangement of the single pair of magnets the full width of the device also limits the economic size of the device.
Synchrotrons overcome these limitations, using a narrow beam pipe which can be surrounded by much smaller and more tightly focused magnets.
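The strength of those bending magnets is what ties a synchrotron's energy to its size: the radius a charged particle needs is r = p/(qB), with p the momentum, q the charge, and B the magnetic field. Here is a minimal sketch with illustrative numbers (a 7 TeV proton and a dipole field of about 8.3 tesla, the figure commonly quoted for the LHC's superconducting magnets):

# Approximate bending radius of a high-energy proton: r = p / (q * B).
E_PROTON_EV = 7e12   # 7 TeV proton; ultra-relativistic, so E is roughly p*c
Q = 1.602e-19        # proton charge in coulombs
C = 2.998e8          # speed of light in meters per second
B_TESLA = 8.3        # dipole field, roughly the LHC value

momentum = E_PROTON_EV * Q / C          # kg*m/s, using p ~ E/c
radius_m = momentum / (Q * B_TESLA)     # meters
print(round(radius_m), "m")             # on the order of 2,800 m

Keeping a 7 TeV proton on a circle only a few kilometers across thus demands fields of several tesla, which is why the LHC relies on superconducting magnets; with weaker magnets the ring would have to be proportionally larger.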
Under certain conditions, synchrotrons can lose energy through a process called "synchrotron radiation," as is described in this excellent educational graphic from the Nobel Foundation.
Synchrotron radiation appears to be a problem when two conditions coexist: (1) a circular accelerator is being used, and (2) electrons are being studied. A Fermilab document states the following:
Linear electron accelerators do not produce much synchrotron radiation. Circular electron accelerators and storage rings produce copious synchrotron radiation in all the bending magnets, especially when the magnetic fields are high or the beam energy is high.
Further, Woit notes that:
While the main constraint on the energy of an electron-positron ring is the problem of synchrotron radiation loss, for a proton ring this is not much of an issue, since protons are so much heavier than electrons (p. 23).
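Woit's point can be made quantitative: for a given beam energy and ring radius, the energy radiated per turn scales roughly as the inverse fourth power of the particle's mass (the standard textbook result for synchrotron radiation). A minimal sketch of the resulting suppression factor:

# Synchrotron radiation suppression for protons relative to electrons of the
# same energy in the same ring: loss per turn scales as 1 / mass^4.
M_ELECTRON_MEV = 0.511
M_PROTON_MEV = 938.3

suppression = (M_PROTON_MEV / M_ELECTRON_MEV) ** 4
print(f"protons radiate about {suppression:.1e} times less than electrons")
# roughly 1e13 times less

That factor of roughly 10^13 is why a proton ring can reach multi-TeV energies in the same tunnel where LEP's electrons and positrons were already limited by radiation losses at energies around 100 GeV.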
For further information, I would suggest this CERN LHC page of Frequently Asked Questions and this PowerPoint slide show by Florida State University professor Horst Wahl.
Tuesday, May 15, 2007
Large Hadron Collider Part I
As most people who would visit this blog are probably aware, the Large Hadron Collider, a facility straddling the Swiss-French border that will host the highest-energy physics experiments ever, is scheduled to begin data collection at some point in the relatively near future (official LHC website, Wikipedia page).
In anticipation of the LHC's opening for research, two major publications from the Big Apple, the New York Times and the New Yorker magazine, have each published major articles on the collider in recent days. The two articles are very similar in content -- each covers both the physical construction and the science of the LHC -- and in length (prepare to block out some time to read either).
New York Times articles tend to be taken down from the web fairly quickly, so the best online option would probably be to seek out the piece on Lexis/Nexis at a library that subscribes to this service ("A Giant Takes On Physics’ Biggest Questions," by Dennis Overbye, May 15, 2007). I'm not sure how long the New Yorker makes its articles available free, full-text, but here it is, for now at least.
Peter Woit, whose "Not Even Wrong" blog (associated with his book of the same name) I read frequently, questions the timing of this media barrage:
I do fear all this LHC coverage is peaking too early. With still probably at least a year to go before the machine even starts taking data, the coverage may already be generating an LHC overexposure problem...
For the last several months, I have been collecting articles to do my own series of postings on the LHC. On the positive side, I think the NYT and New Yorker articles will motivate me to get my stuff up here "sooner rather than later."
Over the next few months, you can expect a new posting here every couple of weeks. The topics I plan to address in my series include:
*What is a particle-physics collider, in the first place?
*Energy aspects of the LHC (why high energies are needed and how they are achieved).
*Another important aspect of a collider, its luminosity.
*How the data are detected, collected, and interpreted.
*How the various LHC studies might (or might not) inform several theoretical proposals of the Standard Model and beyond, including Higgs physics, supersymmetry, and extra dimensions.
Saturday, April 21, 2007
String Theory "Cribsheet"
As I gather, Seed magazine occasionally puts together one-page "cribsheets" on various scientific topics. It has recently put one out on string theory. Clicking here will take you to Seed's page about the string theory cribsheet, where, after scrolling down a bit on the new page, you can download your own copy of the sheet.
Naturally, I commend the idea of trying to bring physics concepts to a broad audience. Further, I would say that, given the space limitations of a single page, the cribsheet covers a lot of ground. Further reading is obviously necessary for a fuller understanding, however, such as some of the books I have reviewed over the last couple of years on this blog.
Tuesday, April 10, 2007
Book Review: "Not Even Wrong"
I recently finished reading Peter Woit's book Not Even Wrong, one of the two books that have received great attention in recent months for their critical examinations of string theory. The other such book is Lee Smolin's The Trouble with Physics, which I reviewed here.
I have been a longtime visitor to Woit's Not Even Wrong blog. That gave me a good idea of what to expect and, indeed, I found the book to be a systematic expansion of ideas discussed on the blog.
In the Preface (p. xii), Woit explains the expression from Wolfgang Pauli that became the title of his (Woit's) book. Woit notes that the phrase "carries two different connotations, both of which Pauli likely had in mind." As Woit continues:
A theory can be "not even wrong" because it is so incomplete and ill-defined that it can't be used to make firm predictions whose failure would show it to be wrong. This has been the situation of superstring theory from its beginnings to the present day...
...there is a second connotation of "not even wrong": something worse than a wrong idea... worse than being wrong is to refuse to admit it when one is wrong.
For all the hype over Woit's book in relation to string theory, it doesn't even take up the topic until about halfway through. The first half presents a history of particle physics, with a heavy emphasis on the interface between physics and mathematics. The standard model of particle physics receives a lot of attention, and there's also a nice early chapter on accelerators and colliders. Woit himself, in both his blog and book, warns about the technical references to mathematical concepts in the book, but I didn't find it to be any worse than other books in terms of mathematical complexity (which is not to say that I understood any more than a little of the math).
A substantial component of Woit's critique of string theory has to do with the potentially huge (i.e., hundreds of thousands, if not hundreds of millions, or more) number of possible scenarios that would satisfy string theory's requirements, but not produce a unique solution characterizing a single universe. Part of the problem stems from the need for extra dimensions and the applicability of something called Calabi-Yau manifolds (discussed in this earlier posting of mine). Writes Woit:
The predictions of heterotic string theory [one form of the theory] strongly depend on which Calabi-Yau spaces one chooses. In 1984, only a few Calabi-Yau spaces were known, but by now hundreds of thousands have been constructed (p. 153).
A few chapters later, he resumes the discussion, in terms of possible background spaces for the universe(s):
There are many, perhaps infinitely many, classes of background spaces that appear to be possible consistent choices, and each one of these classes comes with a large number of parameters that determine the size and shape of the background space-time... [Regarding possible parameters, one of the proposed mechanisms] picks out not a unique value for the values..., but a very large set of values, any one of which should be as good as any other. Estimates of the number of these possible values are absurdly large (e.g., 10^100, 10^500, or even 10^1000 [where ^ stands for raising to a power]) far beyond the number of particles in the universe or anything else one can imagine counting... The possible existence of, say, 10^500 consistent different vacuum states for superstring theory probably destroys the hope of using the theory to predict anything... (pp. 237-239).
Woit also discusses the landscape/anthropic principle, an attempt by some to salvage the string theory conception of innumerable universes. According to the Wikipedia entry linked in the previous sentence, "The idea of the string theory landscape has been used to propose a concrete implementation of the anthropic principle, the idea that fundamental constants may have the values they have not for fundamental physical reasons, but rather because such values are necessary for life."
Also thrown into the mix by Woit is a chapter entitled "The Bogdanov Affair," the larger point of which seems to be that certain areas of physics have gotten so complex that few people can even follow what is being written. In brief, two brothers named Bogdanov managed to publish several articles that appeared to contain little or no merit, but whose prose was so impenetrable that journal referees appeared to let the manuscripts get by. Writes Woit:
Leaving aside the issue of whether the Bogdanovs are hoaxers or really believe in their own work, this episode definitively showed that in the field of quantum gravity one can easily publish complete gibberish in many journals, some of them rather prominent... Why did the referees in this case accept for publication such obviously incoherent nonsense? One reason is undoubtedly that many physicists do not willingly admit that they don't understand things...(pp. 218-219).
Sean Carroll, who blogs at Cosmic Variance, had an entry about a week and a half ago entitled, "String Theory is Losing the Public Debate." He writes:
...there are very good reasons to think that something like string theory is going to be part of the ultimate understanding of quantum gravity, and it would be nice if more people knew what those reasons were.
Of course, it would be even nicer if those reasons were explained (to interested non-physicists as well as other physicists who are not specialists) by string theorists themselves. Unfortunately, they’re not. Most string theorists... seem to not deem it worth their time to make much of an effort to explain why this theory with no empirical support whatsoever is nevertheless so promising. (Which it is.) Meanwhile, people who think that string theory has hit a dead end and should admit defeat — who are a tiny minority of those who are well-informed about the subject — are getting their message out with devastating effectiveness.
As I've noted periodically on the present blog, at least one prominent physicist, Columbia University's Brian Greene (whom Carroll also mentions), has been assiduously arguing for the promise of string theory in various forums. I agree with Carroll, though. I too would like to see more advocates of string theory explain their perspective.
Sunday, March 11, 2007
Scientific American Special Edition on Astrophysics
Scientific American magazine has recently come out with a "Scientific American Reports: Special Edition on Astrophysics," the focus of which is on black holes. Scientific American's website has information on the special issue, although no free full-text access is available.
Many researchers have been exploring the links between astrophysics/cosmology and particle physics, as studied at accelerators such as the upcoming Large Hadron Collider (LHC). In fact, according to this MSNBC article, one scenario for the LHC is, "Upon collision, the beams are expected to create many new particles and possibly a reconstruction of the universe in its very first moments."
On a related note, one of the articles in the Scientific American issue, by Bernard Carr and Steven Giddings, talks about the possibility of creating microscopic black holes at the LHC. The article notes that, under conventional thinking, the LHC would not have sufficient energy to produce black holes. However, if there are extra, hidden dimensions, there could be sufficient energy. The appearance of black holes at the LHC would then offer "compelling evidence of the previously hidden dimensions of space..." (p. 27).
Conversely, there is some new research suggesting that, in some sense, the Milky Way galaxy's black hole may emulate what will be happening at the LHC. As reported in this news release from the University of Arizona:
"It's similar to the same kind of particle physics experiments that the Large Hadron Collider being built at CERN will perform," UA astrophysicist David Ballantyne said.
When complete, the Large Hadron Collider in Switzerland will be able to accelerate protons to seven trillion electronvolts. Our galaxy's black hole whips protons to energies as much as 100 trillion electronvolts, according to the team's new study. That's all the more impressive because "Our black hole is pretty inactive compared to massive black holes sitting in other galaxies," Ballantyne noted.
Getting back to the Scientific American special issue, other articles cover topics such as the history of black hole research (which involves, among others, Albert Einstein and J. Robert Oppenheimer), the possibility of time travel, and holographic theories of the universe.
Monday, January 15, 2007
Book Review: "The Trouble with Physics"
Over the holiday break, I read Lee Smolin's book, The Trouble with Physics. I've already alluded to the book a couple of times on this blog -- citing a New Yorker magazine review of it in the posting immediately below the present one and linking to the audio of a radio interview with Smolin and Brian Greene in my August 19, 2006 entry -- so I don't think a detailed review on my part is needed.
Rather, I just want to share, briefly, a few of my major reactions to Smolin's book. First, I feel Smolin provides very clear explanations of a number of concepts in string theory, supersymmetry, etc. For example, regarding Edward Witten's famous "M-Theory" Second Superstring Revolution, in which Witten integrated the five major string theories extant at the time, Smolin explains the use of dualities to bring the theories together, in a very readable manner.
Second, from my position of having no technical training or expertise in the subject matter, just an educated layperson's interest, the book portrays string theory and supersymmetry (together known as "superstring theory") as having a fairly tenuous foundation. Even having read a number of critical writings in the past, Smolin's book has enhanced my skepticism of superstring theory. Three areas of concern are as follows:
(a) String/M theory, as I interpreted Smolin's writing, pertained to "the five consistent superstring theories in ten-dimensional spacetime," but there were "millions of variants in the cases where some dimensions were wrapped up" (p. 129). Smolin had earlier defined a mathematically consistent theory as one that, "...never gives two results that contradict each other... [and] all physical qualities the theory describes involve finite numbers" (p. 112, footnote). Ten spacetime dimensions refer to time, plus nine dimensions of space (six beyond the three we see). Further, Witten "did not actually present a new unified superstring theory; he simply proposed that it existed and that it would have certain features" (p. 129).
(b) Another concern involves the apparent malleability of the equations in superstring theory. Smolin notes that, "...the original standard model has about 20 free constants we have to adjust by hand to get predictions that agree with experiment. The [Minimally Supersymmetric Standard Model] adds 105 more free constants. The theorist is at liberty to adjust them all to ensure that the theory agrees with experiments... There turn out to be many different ways to tune the dials to ensure that all the particles we don't see are so heavy that they're as yet unseeable" (pp. 75-76).
(c) In addition, the generality of many calculations is in question. Regarding work following the second revolution, "To get any results, we had to choose special examples and conditions. In many instances, we were left not knowing whether the calculations we could do gave results that were a true guide to the general situation or not" (p. 146).
Another direction taken in an attempt to advance string theory is its potential applicability to black holes. However, such attempts "all suffer from a general problem, which is that whenever they stray from the very special black holes where we can use supersymmetry to do calculations, they fail to lead to precise results... So we are left with the same dilemma that afflicts so much research in string theory: We get marvelous results for very special cases, and we are unable to decide whether the results extend to the whole theory or are true only of the special cases where we can do the calculations" (p. 190).
The first 250 or so pages of the book, focusing on substantive theoretical physics, would probably be more than sufficiently filling (using the metaphor of "food for thought") for most readers. However, another 100 pages of more epistemological discussion follow, which leads to my third major reaction.
Smolin's perceptions of how science in general, and university-based science in particular, do operate and should operate, are reasonably interesting, as is his call for the support of unconventional thinkers. What struck me the most, however, was a possibility raised by Smolin: perhaps what is hampering string theory's effort to integrate general relativity (gravity) and quantum mechanics is that the traditionally accepted conceptualizations of relativity and quantum mechanics are themselves flawed.
After discussing some lines of inquiry questioning relativity, Smolin writes:
So much for questioning relativity. What if quantum theory is wrong? This is the soft underbelly of the whole project of quantum gravity. If quantum theory is wrong, then trying to combine it with gravity will have been a huge waste of time... (pp. 316-317).
Whether the foundations of theoretical physics will ultimately be rocked to this extent is unknown. But Smolin asks his readers to remain open-minded to a variety of possibilities.