Thursday, August 10, 2017

Amidst the hype, an Eclipse book of value

kw: book reviews, nonfiction, science, history, eclipses

OK, let's get the ooh-and-aah stuff out of the way first. This image shows the eclipsed Sun in an intermediate state: a medium amount of corona and several prominences are visible. The solar prominences are the red bits around the rim of the Moon. The image was enhanced by unsharp masking to show more of the corona, which has a sharp drop-off of intensity with distance from the solar photosphere (the "surface").

When viewing an eclipse without magnification you are unlikely to see the prominences, so it helps to have a telescope set up ahead of time, its clock drive running, ready for action the instant second contact occurs. A magnification of 30x to 60x is sufficient. This is about how the Sun would look at 30x.

Perhaps you know that the Sun has an 11-year cycle of activity. During periods of low activity, it is more likely to look like this (and this photo was enhanced also). This is an older, black-and-white photo, but I suspect few prominences would have been visible in a color image.

Interestingly, even in quiet years the corona may be quite extended, though it tends to be smoother. 2009 was a very quiet year: according to records at spaceweather.com, the Sun's face was free of sunspots on 260 days, 71% of the year.

At the peak of a sunspot cycle, sunspots are typically visible every single day, or very nearly. Sunspots are evidence of the "wound-up" condition of the magnetic fields inside the Sun. Prominences and flares are triggered by magnetic reconnection events.

A large, active corona is seen here. Looking carefully (click on the image for a larger version), you can see prominences. The rather bright blob at right might be a coronal mass ejection. When one of these occurs in the center of the Sun's face, we can expect a magnetic storm on Earth in 2-3 days' time.

To see what the outer corona looks like any time, look at the LASCO images at the Solar and Heliospheric Observatory (SOHO) satellite's image and video gallery here. One cannot see the close-in corona because the occulting disk of the satellite's coronagraph blocks a region about two solar diameters across. Sometimes I've looked at a video of the past week or so and been able to watch a comet "auger in".

Now, to the book. John Dvorak is an exceptionally good writer, with much of value to say, and in a time of extraordinary hype about the solar eclipse that will occur across the entire U.S. in just 11 days, he has produced a valuable book of lore, history, and scientific explanations: Mask of the Sun: The Science, History, and Forgotten Lore of Eclipses.

While most people through history have viewed eclipses of both Sun and Moon as dramatic omens of misfortune, there have always been a few wiser folk who realized that, infrequent as they are, eclipses obey natural laws. While a total solar eclipse is visible over only a small area, a swath no more than 112 km across, a partial eclipse can be seen as far as about the diameter of the Moon (3,473 km) on either side of the central path…or a bit farther because of the curvature of the Earth's surface. Thus, if there is a solar eclipse going on, the majority of people on the sunlit side of Earth at the time will be able to witness at least a partial eclipse.

Since the sky doesn't darken much during a partial solar eclipse, how were they noticed in antiquity? Think pinholes. The crescents seen here were in shadows cast by leaves of a tree. If you are used to seeing the round dots on the ground or a wall in a tree's shadow, then you'll likely be drawn to the view when they change shape. Pinhole viewing of partial solar eclipses has been recorded over at least the past 2,400 years.
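The geometry behind those leaf-gap crescents is easy to check. Each small gap acts as a pinhole, so the image of the Sun it casts grows with distance at the Sun's angular diameter of about 0.53°. The following sketch is my own illustration, not from the book:

```python
import math

SUN_ANGULAR_DIAMETER_DEG = 0.53  # apparent size of the Sun as seen from Earth

def pinhole_image_size(distance_m):
    """Approximate diameter (in meters) of the Sun's pinhole image cast
    at a given distance from a small gap, such as one between leaves."""
    return distance_m * math.tan(math.radians(SUN_ANGULAR_DIAMETER_DEG))

# A gap in a canopy 5 m above the ground casts an image about 4.6 cm across
print(round(pinhole_image_size(5.0) * 100, 1), "cm")
```

An image nearly 5 cm across is easily large enough for its crescent shape to be noticed on the ground, which is why partial eclipses could be observed this way in antiquity.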

So, although an average location on Earth experiences a total solar eclipse only about once every 330 years, a partial eclipse is likely to be seen every 2-3 years from almost anywhere. Lunar eclipses are more frequent still: because they are visible from an entire hemisphere at once, you can see an eclipse of the Moon almost every year from nearly anywhere you live.

In classical times, one of the seven required subjects of education was Astronomy, which actually meant learning to gather naked-eye observations and make the calculations needed to determine the motion of the Moon and the naked-eye planets (Mercury, Venus, Mars, Jupiter and Saturn), primarily for astrological purposes and to (very roughly) predict eclipses. Much of Mask of the Sun discusses the ebb and flow of lore and superstition about eclipses, both lunar and solar. Kings and emperors employed skilled mathematicians to predict eclipses, because unfriendly (or hype-engrossed) persons were making the same predictions, and then predicting the likely demise of whomever was in power at the time. A leader with better advance knowledge could take advantage of public magical ceremonies intended to stave off the disaster and so survive the eclipse, which really meant staving off the likelihood of a revolt.

Eclipses earned great practical value during the "age of sail": they can be used to determine longitude. It isn't easy, but it was too valuable an aid to navigation not to perform. First, one must have a good (relatively speaking) time-measurement device. The water clocks and other mechanical timekeeping devices in use before the pendulum clock was invented in 1656 (by Huygens) were better than counting heartbeats, but not by much. You, the seafaring captain intent on determining the location of some distant port, would contract with an astronomer at home to determine the time at which certain critical events occurred, and their location in the sky, usually during a lunar eclipse. This requires a bit of explanation.

The shadow of a planet or satellite has two parts, the Umbra and the Penumbra. When you see a total solar eclipse, during the time of totality you are standing inside the Umbra. Before and after totality, and in any place where a partial eclipse is witnessed, you are in the Penumbra. There are thus four contacts that delimit a total solar eclipse:

  1. The Moon first impinges on the edge of the Sun.
  2. The Moon fully covers the whole Sun.
  3. The Sun first begins to exit from behind the Moon.
  4. The last bit of the Moon exits the edge of the Sun.

The same four contacts pertain to a total lunar eclipse, except they refer to the impingement of first the penumbra of Earth's shadow, then the umbra, shading the Moon, and then the Moon's exit from first the umbra and then the penumbra.

By taking readings with a sextant or octant of the Moon's position in the sky when each contact occurs, and noting the time of each as exactly as possible, both you and the astronomer back at home gather data that can be used to calculate the longitude difference between the place you were and your home port. Of course, latitude is much easier to measure in the Northern hemisphere by sighting the north star. Seeing the orientation of the Big Dipper lets you correct for the star's offset from the actual pole, which is presently about one degree (Because of Earth's precession, Thuban in the constellation Draco was the star nearest the pole 5,000 years ago, when the pyramids were a-building in Egypt). Prior to the late 1700's, when very accurate marine chronometers were invented, it took months to learn "where" you had been! And then you might still be off by a few degrees (each degree is 60 nautical miles, that is 69 mi or 111 km).
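The arithmetic of the method is simple once the timings are in hand: Earth turns 15° of longitude per hour, so the difference between the local time of a contact and the time predicted for the home observatory translates directly into a longitude difference. Here is a minimal sketch; the function name and the example times are my own, purely illustrative:

```python
def longitude_difference_deg(local_time_h, reference_time_h):
    """Longitude difference, in degrees, from the local-time difference of a
    simultaneous event such as a lunar-eclipse contact.
    Earth rotates 360 degrees in 24 hours, i.e., 15 degrees per hour."""
    return (local_time_h - reference_time_h) * 15.0

# A contact timed at 21.0 h local apparent time aboard ship, but predicted
# for 23.5 h at the home observatory, puts the ship 37.5 degrees west:
print(longitude_difference_deg(21.0, 23.5))  # -37.5 (negative = west)
```

With each degree worth 60 nautical miles at the equator, a clock error of just four minutes already shifts the answer by a full degree, which is why the crude timekeeping of the era made the results so approximate.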

During a total solar eclipse stars become visible. This photo was taken in 1919; the two stars marked with little dashes were among those used to verify Einstein's general theory of relativity.

Spectroscopy of the solar corona was first done in the 1860's, and led to a paradox that has not yet been resolved. The spectroscope had revealed that the Sun's photosphere is at a temperature of about 5800K (about 10,000°F), and later that the middle part of the chromosphere, a thin pinkish layer just above it, is at about 3800K (about 6,400°F). But the corona had a puzzling spectrum that wasn't figured out until the 1930's and 1940's: its temperature ranges from one to three million kelvins! That's two to more than five million °F.
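The Fahrenheit figures quoted above follow from the standard conversion °F = K × 9/5 − 459.67; this quick check (my own snippet) reproduces them:

```python
def kelvin_to_fahrenheit(k):
    """Convert a temperature in kelvins to degrees Fahrenheit."""
    return k * 9.0 / 5.0 - 459.67

# Photosphere ~5800 K, chromosphere minimum ~3800 K, corona 1-3 million K
for label, k in [("photosphere", 5800), ("chromosphere", 3800),
                 ("corona, low", 1.0e6), ("corona, high", 3.0e6)]:
    print(f"{label}: {k:,.0f} K = {kelvin_to_fahrenheit(k):,.0f} °F")
```

The results (about 9,980°F, 6,380°F, and 1.8 to 5.4 million °F) round to the figures quoted in the paragraph above.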

Before I close I must mention the two central solar eclipses I have seen. The first was July 20, 1963, when I was not quite 16. The Moon's shadow crossed from northwestern Canada to Maine. My family took a vacation starting nearly two weeks earlier, to Montreal and Quebec, and then on the 20th we crossed into Maine at a spot where the highway would be right at the center of the umbra. I had fitted a telescope with a projection screen, with which we watched from just prior to first contact until second contact. Then we looked at the sky to see the Sun and its corona. The hillside had a view to the northwest, and we saw the umbra racing toward us just before second contact. Seeing something, even a shadow, approach at 2,000 mph is amazing! Seeing the "hole in the sky" surrounded by a large corona was amazing! In just over a minute, it ended and third contact occurred. We saw the "diamond ring", the first bright ray of sunlight peeking through a mountain pass on the Moon.

The second was the annular eclipse that passed through Ponca City, Oklahoma, May 10, 1994, when I worked for Conoco. This picture shows the projection screen attached to my telescope, and the eyepiece is visible at the right edge. This is the same telescope I used in 1963, and I still use it. Annular eclipses occur when the Moon is in a more distant part of its orbit, near apogee, so its disk does not cover the entire Sun.

Conoco management gave everyone half the day off. School groups and others were invited on-site. A filtered video camera was used to broadcast the eclipse inside the buildings on TV monitors usually used for executive communications. At least twelve telescopes were brought onsite by Conocoans and a few others, and used, usually by projection, to show the Sun to groups of people. One friend of mine brought a large telescope fitted with a full-aperture solar filter, so you could look through his wide-angle eyepiece at a 100x view of the whole Sun. Now, that was an amazing view!

While the publication of Mask of the Sun was timed to take advantage of public interest in the solar eclipse that will be seen all across North America on August 21, 2017, it is not hyping the eclipse, but instead giving us a primer into the past and continuing importance of eclipses. For example, eclipses on Earth and elsewhere (notably, shadows of Jupiter's moons on that planet's cloud tops) are still one of the key ingredients in measuring planetary distances in the solar system. I have deliberately touched on only a few of the many delightful matters covered in the book. It is well worth reading by anyone with any level of scientific education.

Saturday, August 05, 2017

To survive, dig in

kw: book reviews, nonfiction, science, paleontology, zoology, burrowing, mass extinctions

Shortly after we moved to our house 22 years ago we bought some flat stepping stones for high-traffic areas in our yard, such as the path through a "gate" in a hedge. I dug these in to be an inch or so above ground level, a little lower than the mower blade at its lowest setting. Now, nearly all of them have sunk to ground level or below. Two examples are shown here. Is this just soil compaction from the stones being walked on? Not entirely. Wherever I dig in my yard, I encounter several earthworms in every shovelful.

Charles Darwin spent about 20 years studying earthworms, and using "worm stones" plus an ingenious measuring device attached to bedrock beneath, determined that bioturbation (the modern term) of the subsoil by earthworms caused the stones to sink by an average of 2.2 mm/year. Darwin's earthworms must have been very energetic. The "sink rate" for my stepping stones is closer to 1.0-1.5 mm/year.
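Assuming the stones began about an inch (25.4 mm) above grade 22 years ago and are now at ground level, the implied average rate is easy to compute. A back-of-envelope sketch of my own:

```python
def sink_rate_mm_per_year(initial_height_mm, years):
    """Average settling rate for a stone that sank from its initial
    height above grade down to ground level."""
    return initial_height_mm / years

# One inch = 25.4 mm of settling over 22 years:
print(round(sink_rate_mm_per_year(25.4, 22), 2))  # 1.15 mm/year
```

That is about half of Darwin's 2.2 mm/year, consistent with the 1.0-1.5 mm/year range estimated above.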

One of Darwin's worm stones is pictured in The Evolution Underground: Burrows, Bunkers, and the Marvelous Subterranean World Beneath Our Feet by Anthony J. Martin. Dr. Martin's thesis is simple: burrowing and other means of living below ground at least part of the time is so beneficial that many animals are burrowers. I don't know if you could say "most animals", but that might be true (he doesn't say). Also, burrowers provide homes for other species that share their spaces. The author makes a good case, with numerous examples, that living at least part time underground enabled many animal species to survive the various nastinesses we call "mass extinctions".

The "big five" mass extinctions had such profound effects on both biology and geology that they mark geological boundaries (the abbreviation "mya" means "million years ago"):

  • Ordovician-Silurian boundary, about 444 mya. About half of species vanished, and about 85% of all animals died.
  • Late Devonian, 364 mya. About 75% of species became extinct.
  • Permian-Triassic boundary, 251 mya. The baddest of the bad, this one drove 96% of species extinct. All living things today are descended from the remaining 4%.
  • Triassic-Jurassic series, between 214 and 199 mya. By the end of this 15-million-year period, more than half of species had been eliminated.
  • End-Cretaceous, 65 mya. This is the best known, because it centers on an asteroid impact and led to the demise of the dinosaurs…or, at least, the non-avian dinosaurs. It is now known that birds are dinosaurs, or, if you prefer, birds are descended from theropod dinosaurs. 76% of species went extinct.

Many cases show that animals that were underground during the big smash, or whatever happened, were the most likely to survive in numbers sufficient to restore their populations afterward and become the ancestors of modern life. But before the first of the mass extinctions, there were big changes as animal life arose and developed, including the development of the first burrowing creatures. An odd group of animal species called the Ediacara Fauna did just a little burrowing, but were followed by the "Small Shelly Fauna" that burrowed more and deeper, and then the proliferation of hard shells that marks the beginning of the Cambrian period also marks the beginning of rather thorough bioturbation of ocean floor sediments.

The author shows the history of animal life from the perspective of an ichnologist, a scientist who studies trace fossils. This picture, a 6"x8" section of a rock about 15" square, shows trace fossils on a piece I picked up from a sandstone bed near the base of the Morrison Formation in South Dakota; it is about 150 million years old. This is a bottom cast: we are "looking up" at sediment that settled into tracks and shallow burrows in the late Jurassic sea bed.

Somewhat visible are ripples crossing from top right towards bottom left, showing that this was in rather shallow water. At least three kinds of tracks are visible, though I don't know what animal made any of them. Other dug-in structures are seen, or rather, their casts. Dr. Martin and his colleagues are experts in discerning the meaning of such traces.

Before digging into his subject, however, the author discusses "A brief history of humans underground." If you've heard of Cappadocia, you may know of the underground homes dug into the soft volcanic rock. That has been going on for several thousand years! Long before that, humans utilized natural caves, not only for shelter and burials but even for their art (think of the amazing art in the caves at Altamira and Lascaux).

While we tend to denigrate "cave men", thinking only Neanderthals lived in caves, the "art gallery" caves were painted by our species. When there were only a few humans worldwide, it makes sense to consider that many or most of them used caves and sometimes stayed in them for extended periods, not just during bad weather or extreme seasons. A cave is easier to defend from predators. And just as the burrows of gopher tortoises permit them to thrive in areas with tough winters, so caves shield those who dwell in them from climatic extremes. Indian Echo Caverns, in Pennsylvania about two hours from where I live, was the home of William Wilson from 1802-1821. The "Pennsylvania Hermit" stayed pretty well wrapped up most of the time, because the cave stays a nice, chilly 54°F (12°C) all the time.

There just aren't enough caves to go around, so now we build artificial caves we call "houses". One of the professors at South Dakota Tech had an "underground house" when I was there in the 1980's. It was technically a house built into a tight place between two rock outcrops. An underground house is nearly free to heat or cool, if it is in the "temperate band" across the world where average temperatures are between about 60°F and 75°F (16°C-24°C). The below-ground temperature near Rapid City, SD is closer to 47°F (8°C), so my professor had to insulate the excavation, pour concrete for the dwelling, and insulate more. South of Oklahoma in the U.S.A. an underground house would not need heating or cooling (just moisture control, perhaps!); in Europe, think Spain, Italy, Greece and Turkey, including Cappadocia.

This may become more pertinent in another generation, if the climate continues to warm. It will be even more pertinent when the "Holocene warming" that began about 12,000 years ago comes to an end and another 100,000-year Ice Age begins! Today's "global warming" caused by "carbon pollution" (an oxymoron; we are made of carbon and its oxy- and hydro-derivatives!) may actually delay an ice age by a century or so.

The most ubiquitous burrowers and tunnelers, humans aside, are invertebrates. Earthworms don't leave open tunnels; their burrows fill in behind them with castings, the excreted soil from which they've digested the key organic materials. But ants and termites produce long-lasting tunnels. Some of these have been studied by pouring in plaster or even molten aluminum. This cast of an ant nest is from leaf-cutter ants of Central America.

There is a surprising array of vertebrate burrowers, however. We are familiar with gophers and voles, perhaps, but certain birds burrow, such as kiwis, bee-eaters, and some penguins. The gopher tortoise, as its name suggests, is quite a digger, and its burrows shelter at least 400 species that are enabled to live in otherwise inhospitable places because of a tortoise's "hospitality".

The author also discusses the most amazing tunneler of all prehistory, the giant ground sloth. You might not think of an animal the size of a 4-door sedan as a burrower, but in southernmost Brazil there are hundreds, perhaps thousands, of burrows you could literally drive a truck through! The tunnels are 4-4.5 m wide (13-15 ft) and 2-2.5 m high (6.5-8 ft).

The last Brazilian ground sloths died (probably eaten by early Brazilians) about 12,000 years ago. They had used their strong claws to dig through soft, semi-cemented sandstone. The various species of giant sloth lived through numerous ice ages, having evolved about 23 million years ago, or perhaps earlier. Great bulk is itself helpful for surviving great cold, but burrowing confers an added advantage.

Biologists and paleontologists in general pay most of their attention to animals that lived above ground. True, finding and recognizing the fossil of an animal that died underground is more difficult. But there is so much going on beneath our feet, and so much of prehistory that took place underground, that we must realize that the livability of our environment is largely a result of these hidden lives. Scientists of all stripes would do well to take note.

Are we the cause of a great extinction being called, by some, the Anthropocene? If we are, it is mainly affecting the critters above ground. If we should extinct ourselves at some point, the "rulers of the underworld" will remain, and may hardly notice much difference. They will continue their ecosystem services as before, keeping a significant percentage of the subsurface a nice place to make a home.

Tuesday, July 25, 2017

How tech is changing business

kw: book reviews, nonfiction, business, technology, artificial intelligence, trends

My, my, what a long time it took me to work my way through this book! It goes to show that I still have a poor mind for business. During the latter half of my career in IT, the managers and even some supervisors would speak of the "business reasons" for doing one thing or another. One day I asked a manager named Carol, "What is a 'business reason'?" She replied, "It's something people are willing to pay for." The thought had never entered my head. I have always done things for reasons such as "it is interesting", "it will make this or that task easier", "it does things in a more excellent way" and so forth. Getting paid was nice, but it wasn't my focus. When I heard a new company president speak of having a "passion for profits", I sent him an e-mail explaining how I had always had a passion for excellence, and that profits seemed always to follow. His response was so disturbing, revealing such abysmal blindness to everything I find meaningful, that I immediately sought work in a different company among the DuPont family of companies, and luckily found one within a few months.

I am not sure what I expected once I saw the cover of Machine Platform Crowd: Harnessing Our Digital Future by Andrew McAfee and Erik Brynjolfsson. Something more techie than what it delivered, certainly. But the authors' application of technological trends to present and future business was sufficiently appealing that I read it all.

The three words that begin the title emphasize the subjects of the book, which is a follow-on to their book The Second Machine Age. These words outline three dichotomous trends that are driving businesses:

  • Mind and Machine
  • Product and Platform
  • Core and Crowd

The trends are toward the right, and it is uncertain how far each will proceed. I debated with myself, whether to use "versus" rather than "and". But these pairs are not truly at odds; rather they are synergistic and supplementary to each other. For example, I built much of my career as a scientific programmer and systems analyst on discerning the appropriate tasks for the Machine to do, so as to free up people's Minds to do the things that we do better. From the beginning of the Computer Century (now about 70 years along), computational machinery has been called "mechanical brains", and the term "artificial intelligence" was coined barely a decade after ENIAC's tubes first lit up.

We now have pocket phones and nearly-affordable wristwatches that are millions of times as computationally powerful as ENIAC (this article includes notes on its speed of computation). But only within the past decade have "AI applications" begun to carry out tasks that are still – usually – done better by people and many animals. Many Sci-Fi stories bring us ideas of giant computers somehow becoming conscious more-or-less by accident (e.g., "Colossus" and "The Moon is a Harsh Mistress"). There is a reason for that. Nobody yet has the slightest idea how to define consciousness in any unambiguous way, and therefore, no idea how to write appropriate code to "do consciousness". To repeat myself, I define "genuine artificial intelligence" thus:
That a mechanism, electronic or electromechanical, carries out its own investigation, does its own research, and obtains a patent or at the very least has its patent application accepted by the U.S. Patent Office.
For the time being, the next generation or two at least, there will remain numerous "real world" tasks that minds will perform better than machines. The authors contend that nearly any repetitive task, including many now deemed "too creative" for a machine to carry out, will over time become the province of machine work, and that humans will be squeezed out. Will the day arrive when humans are no longer permitted to pilot an automobile? Cook their own meals?

The discussion of Product and Platform was harder for me to follow. Having a viable Product is the essence of a Business Reason for doing something. People pay for products, including those more squishy "products" we call "services." For example, technically, nursing care is a "service", but in the context of business, it is a product, delivered as a series of "service tasks" by a skilled person on behalf of another. Where does that fit into the notion of a "platform"? I think I understand that a platform packages products and services to make them easier for a producer to deliver and for a consumer to order and obtain. Will there one day be a platform like Uber for nursing care? I am almost afraid to look; it may already be out there. But there is still the need for the nurse-person (one day, a nurse-machine?) to physically do something to or for the person receiving nursing care.

Then, Core and Crowd. Hmm. I look on this as an expansion of Mind and Machine, where the "machine" has become a human-machine synergy we call the Crowd. I love the Citizen Science efforts out there, 73 of which (to date) are available under the Zooniverse umbrella. I have participated in about a dozen of them, and am most recently active in three that are of most current interest to me. A few years ago I classified more than 6,000 galaxies in one of the early Zooniverse projects. The machine part is the image delivery and questionnaire system. I and thousands of others (many minds) do the crowd part. The designers build in lots of redundancy, so as to spot errors and the occasional troll. The key to such projects is good planning and curation.

The authors focus on more business-oriented crowd projects. Their aim is to show that many untutored folks find innovative ways to solve problems that the "experts" would never think of. Very frequently the synergy of various "out of discipline" methods comes together to do something ten or 100 times as well as the best that the "experts" had produced.

This principle comes home for me. Although I long aspired to be a scientist, because I was someone who nearly always wrote software for other scientists I had little occasion to publish; I wrote stuff to support work that other scientists published about. But the key paper of mine that made it into a peer-reviewed journal (Computers and the Geosciences) applied some sideways thinking to the numerical analysis of stiff differential equations used to simulate complex chemical reaction networks. I mixed principles used by astronomers in orbital mechanics with methods devised originally by civil engineers. In my dissertation, I used, and described, another numerical method that applied descending reciprocals to Runge-Kutta methods so that linear equations (linear in the "Diff Eq" sense) could be solved to any order desired. It was just a little part of my research, but crucial for certain computations that were otherwise too lengthy to carry out on the mainframes of the late 1970's.

So, I have rambled a lot into technical areas, mainly to cover up my difficulties "getting" the business focus of the book. It is written as a self-help text, with summaries and guiding questions following each chapter. It is written for business managers and executives. It is well enough written to hold my interest, even where I was in over my head.

Not to end on a downer, but I must quibble: on page 271 it is stated that the "amino acids" are strings of the genetic bases A, C, G and T. Those who know how wrong this is, just take comfort in "the old college try" that McAfee and Brynjolfsson gave it, when they were even more out of their depth than I am in their realm of expertise. (Hint to others: ACGT make genes, which are translated into proteins, composed of amino acids that do NOT include ACGT. That is why it is called translation.)

Thursday, July 13, 2017

The most comprehensive course ever

kw: book reviews, nonfiction, science, astrophysics, cosmology, physical universe, galaxies

As a student of geophysics, I occasionally remarked that the subject's bailiwick was "from the center of the Earth to the end of the Universe." The same could be said for astrophysics. Geophysics and astrophysics are a kind of tag team, covering the same realm from different perspectives. Astrophysics deals in part with how stars forge the elements that wind up in planets, while geophysics deals in the main with what happens to those elements once they form a solid or semisolid body (e.g. a gas giant planet).

I have great interest in both subject areas, so it was a real treat to read Welcome to the Universe: An Astrophysical Tour by Neil deGrasse Tyson, Michael A. Strauss, and J. Richard Gott. The book is a distillation of material from a course taught by these three men at Princeton University, to non-astronomy students.
  • Part I: Stars, Planets and Life, was written (and I presume taught) primarily by Dr. Tyson with certain sections by Dr. Strauss.
  • Part II: Galaxies, was written (and presumably taught) entirely by Dr. Strauss.
  • Part III: Einstein and the Universe, was written (and presumably taught) entirely by Dr. Gott.
You could say that Tyson deals with stellar and condensed matter, Strauss with galaxies and their formation, and Gott with the gamut of cosmological theories. For me, given my lifelong love of reading astrophysical books, both popular treatments and texts and monographs, there was little I would call "new to me." But these scientists are writing at the top of their form, and present their subjects in a most enjoyable way. I had certain takeaways from each author:
  • Chapters 7 and 8 [Tyson], "The Lives and Deaths of Stars", parts I and II, are a good summary of the different types of stars based on their masses, certain features of their internal dynamics that are a result of their mass, and the fate of each type. I did not note a discussion of the first stars, those that were entirely metal-free (Astronomers call all elements heavier than helium "metals", which is understandable from a statistical viewpoint: of the 88 natural elements beginning with lithium, and also the two synthetic elements among the first 92, all but 18 are metals). Perhaps it would have been confusing, because such "zero-metallicity stars" could not have had "careers" that fit well into the Hertzsprung-Russell Diagram that does such a good job classifying all known stars in the present universe.
  • Chapter 16 [Strauss], "Quasars and Black Holes", provides a clear summary of the spectral evidence that led firstly to the discovery that quasars are receding at phenomenal rates and are thus very distant (up to more than 90% of the way to the Big Bang some 13.8 billion years ago) and thus extremely luminous; and secondly that they must be powered by matter streaming into enormous black holes at the centers of galaxies. Nearly all quasars are more distant than a few billion light years. The closest is 600 million l-y. Quasars are the highest energy "active galactic nuclei" (AGN's), and since it seems that every galaxy hosts a supermassive black hole (from millions to billions of solar masses), any galaxy could host an AGN whenever a clump of matter finds its way to the galactic center.
  • Chapter 24 [Gott], "Our Future in the Universe", discusses what has happened to the whole universe since the Big Bang, and what is expected to happen, according to current theories. It is on a sort of super-logarithmic scale, highlighting 15 events ranging from the first 10⁻⁴⁴ second to (very approximately) 10¹⁰⁰ years in the future. In the text other possible events are mentioned, and one is as far off as a number of years described by a number with 10³⁴ zeroes! That number of zeroes equals the number of hydrogen atoms in about 17 million kilograms of hydrogen. There will never be enough paper to "write" it down.
I was eager to see how Dr. Gott discussed Dark Energy and the (alleged) accelerating expansion of the universe. In the seven chapters he wrote, from time to time he discusses one or another mathematical principle that seems to require cosmic inflation (near the very beginning) or accelerating expansion (ongoing). I have yet to see an explanation of accelerating expansion that makes sense to me. The "evidence" for such acceleration is the anomalous brightness of some very distant supernovae. I have read recent articles that question both the data and the interpretation.

For my own part, I have yet to see an analysis of Type Ia supernovae that originate with a C-O white dwarf that accretes material of very low metallicity, as we would expect of very ancient objects at very great distances. Accretion, however, is not certain as the mechanism; WD-WD collisions are thought to produce a large share of these supernovae. The mass limit that must be crossed to yield a supernova is 1.44 solar masses (the Chandrasekhar limit). Thus the product of a collision will momentarily have a mass in the range 1.44 (plus a little) to 2.88 (minus a little) solar masses. So, how "standard" is the standard candle known as a Type Ia supernova?

Well, that question did not get addressed, but for now that is OK. Astrophysicists and cosmologists are not a single "voting bloc" in this regard, and I continue to read with interest the work being reported in this area.

Fascinating subjects, excellent writing: I expect this book to become a classic in its field.

Wednesday, July 05, 2017

A millennial in space

kw: book reviews, science fiction, near-future, space aliens

Caution: the book reviewed was written in the language of many millennials and late Gen-Xers, including the casual cussin' my generation calls "potty mouth." It's not suitable for youngsters you wish to shelter from such language.

I wonder why space aliens are so frequently imagined as having magical attributes. In Spaceman of Bohemia by Jaroslav Kalfař, a Czech astronaut on a solo flight of 8 months' duration, to a mysterious purple cloud between Earth and Venus, spends a lot of time with a spider-like being that apparently talks to him in his language, but soundlessly, in his mind. It also rifles through his memories.

The real thrust of the story is, what is real? What is imaginary? How does the ill-starred astronaut return to Earth after the destruction of his space capsule, from a distance of tens of millions of miles? I was reminded of Life of Pi (reviewed in 2015), and the long trip the young man Pi takes in a lifeboat with a tiger as his companion. The same ambiguity fills both stories.

In its wider sense the story is one of someone cycling back to the beginning to restart with a wiser outlook. Yet the protagonist is full of obsessions, and not all have been resolved at the end. Was his experience more delusion than fact, and is he still delusional? Probably.

About half the chapters are flashbacks to the astronaut's formative experiences, from the Velvet Revolution to the "Capitalist Invasion" of Prague. Assuming the history is accurate, there are a few things one can learn about the development of Czechoslovakia into the new nations that succeeded it after 1989, and a few things to learn about peasant life pretty much anywhere in Eastern Europe in those years.

I wonder how much astronomy and cosmology the author has been exposed to. The purple cloud is supposedly emitted by a "comet … from the Canis Major galaxy." There actually is a dwarf galaxy well behind the Canis Major constellation. It is about 25,000 light-years away. All known comets are members of our solar system, and perhaps a very few originate as far away as half a light year. So this is a book for the astronomically illiterate.

The book jacket blurbs treat the book as a great feat of humor. I found nothing funny in it. I wonder what joke I have been left out of. I'll chalk that up to a generational thing, and remark only that, if this is humor, I tremble for the generation now entering middle age.

Monday, July 03, 2017

Russian spiders at it again

kw: blogging, blogs, spider scanning

Late last evening I went in to add a post to the blog and noticed heavy traffic from Russia again. We'll see how long it lasts this time. The activity is not as regular as before (though the Russians are not as regular as the Americans), and began on June 30. That tall peak just over a day ago (as I write this) represents 96 hits in one hour. When the spiders aren't active, I seldom exceed 96 hits in two days.


Sunday, July 02, 2017

What might one learn from having cancer?

kw: book reviews, nonfiction, self help, cancer

When I saw The Cancer Whisperer: Finding Courage, Direction, and the Unlikely Gifts of Cancer by Sophie Sabbage, I wasn't sure what I would find, but I was hoping for a practical self help book. I think that is what this book is, but let me confess at the outset that I did not read the whole book: I read the Introduction and the first and last chapters in their entirety, and skipped here and there within the other 8 chapters.

I am certain this book is worth at least beginning to read, by anyone facing a new cancer diagnosis. You will know soon enough whether it suits your needs. I had cancer 17 years ago, died in the recovery room and had to be resuscitated, and fought a series of very different battles from those that Ms Sabbage describes. This was one reason I could not connect with the book's message.

The other primary reason I could not connect is that the writing, though in a self help style that is quite popular, simply puts me off. Sorry, Ma'am!

It is worthwhile to introduce the Compass concept, the subject of Chapter 1. In a diagram of an 8-point compass, the first item (the subject of Chapter 2) is at the top, and the subjects proceed clockwise around. They are, in order:

  1. Coming to Terms – a matter of balancing feelings and facts, and setting the boundaries you wish to preserve (such as those around work and relationships).
  2. Understanding Your Disease – learning all you can: more facts, the more the better. And here the author wisely tells (most of) us to avoid statistics, but I'll touch on this later.
  3. Knowing Your Purpose – to decide what you want and why, and establish a plan toward obtaining it.
  4. Stabilizing Your Body – prioritize actions such as changing eating habits.
  5. Clearing Your Mind – including building the support network you need when your own control slips, as it will from time to time.
  6. Directing Your Treatment – learn from your doctors, set your own priorities, and preserve your own integrity as a person not a disease. You may need help from your support network to lead your healing team, not just blindly following "what the doctors want". I'll have more on this below.
  7. Dancing With Grief – embrace grief; there are automatic losses, including the possible loss of your future. 
  8. Breaking the Shell – I am not totally sure, but this seems to entail "making friends" with your cancer to learn from it. Here we part ways. I am quite comfortable learning all I can from an enemy, all the while planning the most efficient way to totally eliminate it!

For many of us, the first in time will be 4…if we have time. In my case, I was working toward stabilizing a deteriorating situation for about two months before I had a cancer diagnosis. Once that occurred, I had no more than 8 days from diagnosis (Nov 22, 2000; the day before Thanksgiving!) to major surgery (Nov 30). I entered the hospital on Nov 27, and they took care of the stabilizing, because the doctor was not sure I could survive surgery. The bare facts:

  • Stage 3+ colon cancer, with a major mass visible in the colonoscope, about the size of my fist (I have big hands).
  • Nearly two months of enforced fasting due to intestinal blockage.
  • Loss of 25 pounds during 2 months.
  • Blood count of 8.5 and falling (15 is normal).

On Nov 27 I was placed in the hospice ward, and they began intravenous feeding. The normal "dose" is one 1-Liter bag of "lion milk" daily. I was given three bags daily. I was allowed a little walking around, steering my IV pole. I realized I was in the hospice when the message board outside all the other rooms said, "Comfort", while mine said "Comfort and Feed 3x". How many people do you know who spent 3-4 days in a hospice, and came out alive?

What led up to my diagnosis? I had a rather passive doctor. When I went to him with persistent pain that seemed to be near my stomach, he spent more than a month trying ulcer remedies and then an antibiotic. One day he said something like, "Maybe it would be a good idea to get a colonoscopy…at some point."—Appalling! At that point, I silently took charge (in the book's terms, I began directing my own treatment). I had been in the ER twice already with violent vomiting and bloody stools, and had overheard the ER doctor say, "There is a very high white blood count, but we can't find an organism." I was thinking, "Sounds more like cancer than an infection." Inside me I already had my diagnosis.

The next day, after the doctor had expressed puzzlement and made his immensely stupid statement, I went to the receptionist and innocently asked her, "He said something about seeing a gastroenterologist. Is there one he prefers?" She gave me a name. I had a fleeting thought that my inept doctor might have inept friends, but decided to give the man a try. In those days you needed a referral so I faked one. After a talk with that doctor's receptionist, she got me an appointment three weeks on. I'm not sure why I didn't immediately call some other GI doctors, but I didn't.

I made it through the 3 weeks (now it was 2 months since I had effective nourishment), and saw him on a Monday. He asked, "3 weeks? How'd you get in here so fast? My backlog is 3 months! Did you tell her you are bleeding?" I said, "Of course!" He said, "You're very pale" and took me right downstairs to a clinic that drew blood and determined my blood count was 8.5. He said, "Go to such-and-such a hospital at 7:00 AM on Wednesday and I'll meet you there." And on Wednesday the cancer was seen by my wife and me via the 'scope. But I was on Demerol and the memory didn't "take"; I had to be told about it after I came around.

Thanksgiving Weekend! What a time to suffer through telling my dear friends of my disease. They prayed for me. My wife and I had planned to go to a church conference for two days, so we went. It was just 2 hours away. There I told certain ones, who took the news to their churches so they could pray for me.

Early Monday I called my doctor. He called back saying he had a surgeon who would see me for "consultation" on Thursday. I hung up without a word, thought it over (chronic pain level had reached 8 and I had to think very slowly and thoroughly). I called him back and said, "I won't live that long." He said, "Go to the ER now. I'll call ahead that you are coming." Thus began 3 nights in a hospice, 9 days' worth of IV feeding packed into 3 days, and an operation on the same Thursday that was going to be a "consultation." I was in the OR 5 hours. In the recovery room they put in an epidural to administer morphine. It turns out I am over-sensitive to morphine and I stopped breathing. My heart slowed to about 30/minute (any slower and it'll simply stall and stop). A nurse stood by with defibrillator paddles as another gave me mouth-to-mouth and then oxygen. Once the morphine wore off, they tapered off the oxygen and let my wife see me. After that I suppose I recovered as normally as one can.

That's enough on such a subject in this much detail. I followed up with chemotherapy. The GI doctor was frank enough to give me accurate statistics. In my case, being a mathematician, I knew exactly what they were telling me and what they were not telling me. He said, after the operation, I had a 15% chance of living for one year. After the "gold standard" chemotherapy for six months, that chance would improve to 35%. "Gold standard" is leucovorin plus 5-FU. 5-FU was originally developed as a "weapon of mass destruction", but was found, rather accidentally, to cure many cases of colon cancer. Leucovorin helps it work better.

And what does 35% mean? Survival rates in such cases follow the same statistics as failure rates in a transistor factory. Technically, it is a type of Weibull distribution. At some time 65% of the devices will have failed. The doctor's prediction put that point at one year, when 35% are still alive. Such a distribution has a very long tail such that, for example, about 10% survive for five years. In the case of colon cancer, there is very little chance of recurrence after five years, and different statistics come into play. Most folks who live for five years after colon cancer surgery will die of something besides colon cancer, 10, or 20, or 30-40 years later, depending on their original life expectancy. In my case, I was 53 at the time of my operation (pretty young for this kind of cancer), and now I am just a couple of months shy of being age 70. My father is alive, so I have some chance of living into my 90's, at least medically speaking. The last time I saw the GI doctor (he does a follow-up colonoscopy every 3 years), he called me "a trophy".
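The two survival figures quoted here are enough to pin down a Weibull curve; a minimal sketch, treating the doctor's 35%-at-one-year and the rough 10%-at-five-years as exact (they are my recollections, not clinical data):

```python
import math

# Weibull survival curve: S(t) = exp(-(t/lam)**k), t in years
t1, s1 = 1.0, 0.35   # 35% alive at one year (the post-chemo figure above)
t2, s2 = 5.0, 0.10   # roughly 10% alive at five years

# Taking logs twice, ln(-ln S) = k*ln(t) - k*ln(lam), so two points fix k and lam
k = math.log(math.log(s2) / math.log(s1)) / math.log(t2 / t1)
lam = t1 / (-math.log(s1)) ** (1.0 / k)

def survival(t_years):
    """Fraction still alive after t_years, under the fitted Weibull curve."""
    return math.exp(-(t_years / lam) ** k)
```

The fitted shape parameter comes out near 0.5; a shape below 1 means the hazard rate falls with time, which is exactly the "very long tail" described above.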

Looking back at the list above, I think I covered most of the bases of the Compass. The one thing I'd have added, perhaps as a part of "Dancing With Grief", or perhaps as a ninth point: "Laugh as much as possible". For some reason, the six months of my chemotherapy were the longest sustained period of great happiness of my life. Perhaps 5-FU has a side effect of being a superb anti-depressant (too bad about losing your hair if you are young; I didn't lose any). I also stumbled on AFV (America's Funniest Videos) on ABC, and have watched it pretty regularly ever since. My kind of humor.

Considering that this is not a very popular blog, I conclude that few people think the way I do or like many of the things I like. So, while I was not so enamored by this book, I think it can help a great many people either to become cancer survivors, or to muddle their way through their cancer experience better than they might have done if left totally to their own devices.

Monday, June 26, 2017

When the math you used could mean life or death

kw: book reviews, nonfiction, mathematics, geometry, analysis, renaissance

Who would have thought that for a period of decades a student's adherence to certain mathematical methods could get him in trouble with the Inquisition, imprisoned, or even burnt at the stake? Galileo was placed under house arrest for roughly the last decade of his life, not only for advocating the motion of the Earth, but also for the kind of mathematical analyses he published!

Infinitesimal: How a Dangerous Mathematical Theory Shaped the Modern World, by Amir Alexander, chronicles the development of a "new" kind of mathematics, one that had actually existed alongside Euclidean geometry for centuries, but had been little used and was denigrated by Aristotle and others. It flowered along with the Italian Renaissance, but ran afoul of the reactionary politics of the Jesuits.

To most mathematicians of the early Renaissance, mathematics was geometry, and all proofs and analyses that proceeded by any method other than straightedge-and-compass derivation from first principles were suspect. It is rather amazing to read how the Society of Jesus, originally rather blind to mathematics because of the proclivities of its founder, Ignatius of Loyola, took up Euclidean geometry as a point of pride within a generation after his death.

In their to-the-death struggle to throw back the influence of the Protestant Reformation, the Jesuits, brought into being as the Reformation was blossoming throughout Europe, realized that geometrical proofs provided a perfect model for their rigid theology and social structure. The Reformers declared that all persons had a right to know and understand Scripture, and offshoots such as some Anabaptists, and free-land proponents such as the Diggers, began to question the "divine right of the King" and the "natural order" of aristocracy. Dogma was being replaced by opinion. Long-held traditions were in danger of being overthrown. Chaos was imminent. The execution of the English king Charles I emphasized the danger.

If one accepts the validity of the methods of Euclid, there is no room for opinion. A geometrical constructive proof, proceeding by pure deduction, leads step by step to a conclusion that cannot be denied. But it had become evident to the disciples of Pythagoras, nearly a full twenty centuries earlier, that some propositions one could state, could not be proved. They had begun by proclaiming that all problems were subject to "rational" proof; by "rational" they meant using only ratios of whole numbers. An early demonstration that the diagonal of a square could not be exactly expressed as a ratio to its side, that the two were "incommensurable", led to the breakdown of the Pythagorean system and eventually to the disbanding of the Pythagoreans.

By Aristotle's time, about 200 years later, inductive methods based on "indivisible" quantities had shown some promise, and had been used to demonstrate certain propositions that geometric methods could not solve. But Aristotle, at first intrigued, later decried such methods. Euclid he could understand; the new methods seemed to allow a certain leeway for error. In his way he was as rigid as any Jesuit of the Sixteenth Century.

I have often been astounded that the Medieval Roman Catholic Church based so much of its philosophy on Aristotle, whose only brush with Theism is some vague statements about an "unmoved mover." I was further amazed to read of the process that led to this, via Thomas Aquinas. The Jesuits believed that Aristotle had it right. Mathematical induction by "indivisibles" (also called "infinitesimals" after about 1730) was unreliable. The Church needed … NEEDED! … a rigidly reliable theology and rule of society that disallowed dissent as thoroughly as a Euclidean proof disallows "alternate opinion". Galileo was only the most prominent of a large number of Italian mathematicians to learn of inductive methods, and use them to great effect, so much so that these methods swept through Europe. But over about a century's time the Jesuits drove "indivisibles" out of Italy. Indivisibles and inductive methods flourished elsewhere, in all the countries of Europe.

Reasoning similar to that of the Jesuits led Thomas Hobbes to found his political philosophy on Euclidean geometry. He strongly felt that the chaos following the Reformation simply cried out for a more totalitarian form of government. His exceedingly famous book Leviathan proposes the most profoundly totalitarian political system ever devised. When he learned that three very significant construction problems could not be solved by Euclidean methods, he realized that this left a great loophole in his philosophy.

Three problems: Squaring the Circle (making a square with the same area as a given circle), Trisecting an Angle, and Doubling a Cube (constructing a length that can be used to construct a cube with twice the volume of a given cube). None of these can be done using Euclidean geometric methods. This has been proven, using mathematical methods developed centuries after the time of Hobbes. He spent the rest of his life trying to square the circle, and eventually lost his reputation as a mathematician. He ran afoul of a principle akin to Gödel's Incompleteness Theorem: that every sufficiently rich mathematical system can be used to formulate problems that cannot be solved within the confines of that system. This includes geometry. But Kurt Gödel was two and a half centuries in Hobbes's future.

In the opening chapters of the book, it seemed to me that "indivisibles" and "infinitesimals" were described as being in opposition. It took careful reading to understand that they were synonyms separated by a century or two of usage. They form the foundation of The Calculus, as developed by both Newton and Leibniz. The modern world would not exist without the analytical methods of calculus. From a modest number of "demonstrations" using induction—based on lines being composed of an infinite number of "indivisible" points, planes being composed of indivisible lines, and volumes being composed of indivisible planes—calculus and modern analysis in general have become supercharged, and now include both inductive and deductive methods.

I spent much of my adult life as a working mathematician, and I find it fascinating that such a life-and-death struggle had to be won, and won decisively, for the modern, technological world to appear. I have just touched on a few of the trends and a handful of the players in the saga of Infinitesimal. I have to mention John Wallis, whose 25-year battle with Hobbes "saved" inductive mathematics in England. How much longer would the modern era have been delayed otherwise? He originated the symbol for infinity: ∞. Infinitesimal is quite an amazing story, very well told.

Sunday, June 18, 2017

Wu Li: Circular reasoning to the max

kw: book reviews, nonfiction, physics, cosmology, buddhism, copenhagen interpretation, quantum mechanics

From time to time I have heard about The Dancing Wu Li Masters: An Overview of the New Physics, by Gary Zukav, since it was published in 1979. I had never read it until now. As a student of all the sciences, particularly the "hard" sciences (those amenable to experimental verification), since before 1960, I have at least a reading familiarity with physics, which is a hard science, and cosmology, which is not. Now having read the book, I find it contains no surprises, at least, none of a scientific nature. Of course, a lot has happened in physics and cosmology in the past nearly forty years.

The author, an admitted outsider to the field of physics, conceived of the book while on a retreat at Esalen along with a real mixed bag of folks including numerous scientists and science hangers-on (some would consider me more of a hanger-on, though I am a working scientist, even in "retirement" from a career in the sciences). Al Huang, who was teaching T'ai Chi at Esalen when Zukav was there, introduced him to the concepts of Wu Li. That is concepts, plural.

I have a great many Chinese friends. Chinese, primarily Mandarin (the principal spoken and written form), abounds in homophones, words that sound the same, at least to a Westerner. Most basic Chinese words consist of one syllable, and very few require more than two syllables. Spoken Chinese sounds to us like a long string of only a few syllables repeated various ways, with a "sing-song" quality that seems to mean nothing. What Westerners miss is that the "sing-song" variations in tone are meaningful and are part of the proper pronunciation of Chinese words. Thus, the syllable "MA", depending on the tone, and its context in a sentence, has at least these meanings:

  • Mother.
  • When doubled, an affectionate term for Mother, just as in English, at least when pronounced with two flat tones.
  • Horse, using a different tone.
  • The verb "ride", when the context demands a verb rather than a noun, and using still another tone.
  • The pronounced question mark that ends (nearly) all Chinese questions, spoken with a rising tone.

The familiar greeting "Ni Hao Ma" is a lot like the New Jersey, "How are ya?" The Chinese sentence, "Ma-ma ma ma ma", with the proper string of tones, means, "Is mother riding the horse?" (Chinese has no articles, so "the" is implied).

Depending on tone and context, "WU", pronounced "woo", has about 80 meanings, and "LI", pronounced "lee", has a great many, primarily focused on pattern. Different written Chinese characters (ideographs) are used for the various meanings of wu and li. In combination, the word wu li is the primary Chinese term for "physics". But when other combinations of ideographs with the same pronunciation (except for tones) are used, there are other meanings. In the context of this book, Al Huang gathered five. The literal meaning of the ideographs used for wu li meaning "physics" is "patterns of organic energy". The other four are "my way", "nonsense", "I clutch my ideas", and "enlightenment".

The book is structured around these five concepts, with each section containing two or three chapters. As I might have expected from a book inspired at Esalen, each chapter is numbered 1.

The "new physics" on which the book is centered is quantum mechanics and its relationship to Einstein's theories of relativity (special and general). The core message is the ambiguity of quantum phenomena—when any single "particle" is studied—coupled with the exactitude of the predictions the mathematical theories of quantum mechanics make regarding the statistics of interactions when many particles are subjected to the same set of conditions. The "scripture" of quantum mechanics is the Copenhagen Interpretation, that of Niels Bohr and his followers (I almost wrote "disciples").

Thus, for example, when light is shined through a pinhole, which spreads the beam by diffraction, and this beam is passed through a pair of narrow slits, an interference pattern emerges. This works best when monochromatic light is used, such as from a laser, but "near-mono" filtered light works well enough for visual purposes. The intensity in each part of the interference pattern can be exactly calculated by the Schrödinger wave equation, although the calculations are formidable; various simplifications of the wave equation yield very precise results with less arithmetical grinding.

I mentioned diffraction. This matter is first mentioned on pages 64-65 of the book. In the upper half of an illustration, a series of waves in a harbor are shown exiting a rather broad opening, and those that get through are shown going straight onward, with a sharp edge to their pattern. In the lower half, the opening of the harbor is smaller, and the waves exiting are shown as semicircular wave fronts spreading beyond the opening. There are two major errors here. Firstly, the upper pattern should show a little spreading at the edges of the "beam" of waves exiting the harbor (you can verify this using a wave tank, as I was shown decades ago in a Freshman physics class). In other words, diffraction occurs when waves pass through any opening of any width, not just very narrow ones. Secondly, for the lower wave pattern, the wavelength of the exiting waves is drawn as much shorter than that of the waves inside the harbor, which is wrong: passing through an opening changes a wave's direction and amplitude, not its wavelength.

In actuality, diffraction sends a nonzero portion of the wave energy to every angle. The waves seem to "go straight" through a larger opening only because the off-axis waves lose energy with angle very rapidly in such a case. When a wave front passes through an opening of a size similar to the wavelength, or smaller, significant amounts are found at nearly every angle, making a much more divergent beam. Zukav seems to have been ignorant of this.
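The textbook far-field result combines exactly these two effects: a cos² two-slit interference term modulated by a sinc² single-slit diffraction envelope. A sketch (the slit dimensions are hypothetical, chosen only to give a visible fringe pattern):

```python
import numpy as np

def two_slit_intensity(theta, wavelength, slit_width, slit_separation):
    """Fraunhofer intensity for two slits, normalized to 1.0 on the axis."""
    beta = np.pi * slit_width * np.sin(theta) / wavelength        # single-slit term
    delta = np.pi * slit_separation * np.sin(theta) / wavelength  # two-slit term
    # np.sinc(x) computes sin(pi*x)/(pi*x), hence the division by pi
    return np.sinc(beta / np.pi) ** 2 * np.cos(delta) ** 2

lam, a, d = 650e-9, 40e-6, 250e-6    # red laser; 40 um slits, 250 um apart
center = two_slit_intensity(0.0, lam, a, d)                           # brightest fringe
first_dark = two_slit_intensity(np.arcsin(lam / (2 * d)), lam, a, d)  # first minimum
```

Note that the envelope factor never reaches zero except at the single-slit nulls, so some energy lands at every angle, just as described above.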

Interestingly, if you equip a double-slit apparatus with extra-sensitive photographic film, you can get a surprising result. The best photo film can record the capture of each photon, as long as the light is blue enough, meaning the photons are energetic enough. One silver halide grain is exposed by the capture of a single photon. If the light is dimmed enough that only a few photons per second pass through the apparatus, and you let it run for less than a minute before extracting the film and developing it, the developed film will have one or two hundred tiny exposed grains that are seemingly scattered at random over the film. If instead, you leave the film in place for an entire day, there will of course be many more exposed grains, tens of thousands of them. They will show a very clear interference pattern, identical in form to the one you could see when the light was shining brightly and tens of trillions of photons per second were passing through the apparatus.
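That grain-by-grain build-up is easy to mimic numerically: sample photon arrival positions from the interference intensity, and the fringes appear only in the aggregate. A sketch with hypothetical apparatus dimensions (the single-slit envelope is ignored for simplicity):

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_photon_hits(n, wavelength=650e-9, slit_sep=250e-6,
                       screen_dist=1.0, half_width=0.01):
    """Draw n photon arrival positions (in meters) on the screen by
    rejection sampling from the two-slit interference intensity."""
    hits = []
    while len(hits) < n:
        x = rng.uniform(-half_width, half_width)  # candidate screen position
        theta = x / screen_dist                   # small-angle approximation
        p = np.cos(np.pi * slit_sep * np.sin(theta) / wavelength) ** 2
        if rng.uniform() < p:                     # accept in proportion to intensity
            hits.append(x)
    return np.array(hits)

few = sample_photon_hits(200)       # looks like random speckle
many = sample_photon_hits(20000)    # enough "grains" to reveal the fringes
counts, _ = np.histogram(many, bins=80, range=(-0.01, 0.01))
peak, trough = int(counts.max()), int(counts.min())   # fringe contrast
```

With 200 samples the histogram is ragged noise; with 20,000 the bright-fringe bins tower over the dark-fringe bins, just as the day-long exposure does.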

Interference is a wave phenomenon. Photons are particles; each carries a specific amount of energy and has a specific momentum (these are all the same for monochromatic light). It took me and all my fellow students a long time to become comfortable with the fact that light has both wave and particle characteristics. Eventually we thought of a photon as a "wavicle", a small wave bundle, that could somehow "sense" that both slits were open and "interfere with itself", when passing through a two-slit apparatus. It seems that light behaves as a wave when wave "behavior" is demanded of it (the two slits), and as a particle when particle "behavior" is required (exposing a silver grain in the film).

Where does Gary Zukav take this, and several other experimental results of quantum mechanics, special relativity, and general relativity? Straight to the door of a Buddhist sanctuary. The language he uses is usually as ambiguous as the language physicists typically use to describe concepts like the "collapse" of a wave function when an "observation" is made. He compares some conclusions and statements of physicists to similar statements of Buddhist doctrine, though I could seldom recognize the resemblance. The core of the Copenhagen Interpretation, at least as it is explained in this book, is that the Observer is central. But, to date, nobody has adequately defined "Observer". That doesn't stop Zukav from equating the one-is-all-all-is-one that he believes the new physics is trending toward to Buddhist teachings of the pre-Christian era. I have a question or two about observers, or Observers.

Must an Observer have a self-aware mind? Can the photographic film described above be an observer, or has no observation been made until the film has been developed and a human (or other self-aware entity) has looked at it to see the pattern? If I understand the Gary Zukav presentation of the Copenhagen Interpretation, there is no "collapse" of the wave function into an actual "event" without an observer. It is as though, outside your peripheral vision, nothing exists until you pay attention to it. Taken to an extreme, it means there was no Universe until humans evolved to be the Observers to bring it into existence. This is the reason for the title of this post. If this is actually what Niels Bohr believed, I have to say to him and his disciples, as Governor Festus long ago said to the Apostle Paul, "Much learning has driven you insane!" Paul was not insane, but I think Zukav might be. More on this anon…

At the time The Dancing Wu Li Masters was being written, some "newer" new physics concepts were arising, such as the Quark/Gluon resolution of the Particle Zoo, and the theory of the Multiverse. To take up the former: It appears that the quark is truly fundamental. All the hadrons seem to be made up of various combinations of quarks and anti-quarks. However, it takes such enormous energies to generate interactions that give evidence of the existence of quarks—and they apparently cannot be brought into independent existence—that we may need to await a particle accelerator wrapped around the equator of the Earth to achieve energies sufficient to determine whether quarks do or do not have any substructure. Apparently, electrons have no substructure, so maybe they and quarks are as fundamental as it gets. But our experiments have reached "only" into the range of 10 to 100 TeV. What might be achieved with an energy a thousand times as great, or a million? Fears have been expressed already that the current experiments at CERN could trigger destruction of the Universe. Maybe the Multiverse is real, and we inhabit a surviving Universe that didn't get destroyed.

The notion of the Multiverse is simple. Rather than the wave function for a particle "collapsing" into some actual event, an entirely random outcome within the statistical framework described by the wave function, perhaps every possible outcome actually occurs, and a new Universe is spawned to contain each of those outcomes. This is simple enough if the "outcome" is that a particular photon passes through either the left slit or the right slit of a two-slit apparatus. Two universes result. In one of them, the photon passes to the left, and in the other, it passes to the right. But there is detail in the interference pattern, and when I have done the experiment with a laser pointer and a home-made pair of slits cut in aluminum foil, I could see more than twenty interference fringes. Now what? Did each photon create twenty or more universes to accompany each outcome? When the light is bright enough to see, trillions of photons per second are "in use"; the beam of my laser pointer emits 200 trillion photons of deep red light per second. Did I inadvertently create a few quadrillion new universes, just by shining my laser pointer through a pair of slits? Were new universes being created at the same rate even when I wasn't looking?
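Converting beam power to photon rate is one line of physics, for anyone who wants to check such counts; a sketch with illustrative numbers (a 1 mW beam at 670 nm, not a measurement of any particular pointer):

```python
# Photon budget for a red laser beam
h = 6.626e-34          # Planck constant, J*s
c = 2.998e8            # speed of light, m/s
wavelength = 670e-9    # deep red light, meters

energy_per_photon = h * c / wavelength         # ~3.0e-19 J per photon
photons_per_second = 1e-3 / energy_per_photon  # for a 1 mW beam: ~3e15 per second
```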

So what are the chances that the search for the Higgs boson at CERN caused the creation of truly enormous numbers of universes, nearly all of which were immediately destroyed, and that we inhabit one of those that survived? I think you can see where such thinking can lead.

And some folks say that I am crazy to believe in God, a God who knows a level of physics (if it is even called that) that can resolve this stuff, without the insanity of Multiverse speculations. I think it is fair to say that "modern physics" has reached the point of adding more and more epicycles to a group of theories that produce very precise results but are really analogous to pre-Copernican cosmology. Actually, Copernicus used epicycles also, because he thought orbits were based on circles. It took Kepler and others to work that part out.

Another item or two that have arisen in physics since 1979:

  • On page 119 we read, "No one, not one person has ever seen an atom." If you are talking about direct visual sight without the use of a microscope, you could say the same thing about bacteria or viruses. But we have microscopes of several kinds that can show us what they look like in rather amazing detail. Since about 1981, highly refined transmission electron microscopes have been able to show atoms directly, and since the invention in 1982 of the scanning tunneling microscope and the atomic force microscope, we now have three methods for seeing where the atoms lie in a surface. Whatever point the author wished to make based on the above statement is now moot.
  • Beginning on page 292 we find an illustration using polarized light. Simply put, when light is passed through a polarizer (such as the special plastic in some sunglasses), the light that emerges is all vibrating in the same plane (for convenience, we use the electric vector as the "direction" of polarization, though the magnetic vector, which is at 90° to the electric vector, could be used equally well. Zukav does not mention this). When you place a second polarizer with its polarizing axis at 90° to the first, it blocks all the light. If you rotate it to various angles, some of the light gets through, in accordance with Malus's law: the transmitted fraction is the square of the cosine of the angle between the two polarization axes. Now, if you set the two polarizers so their polarization axes are at precisely 90° so that no light is getting through, then put a third polarizer between them, with its axis oriented at 45° to the other two, quite a lot of light gets through! This goes on for several pages and is presented as quite a mystery. Strangely, elsewhere in the book we find the tools to solve this mystery (I didn't look up page numbers):
    • In a discussion of Feynman Diagrams and the S-Matrix (Scattering Matrix) we read that physicists consider every interaction to entail the destruction of all the impinging particles and the creation of new ones that exit the interaction locus at the appropriate angles with appropriate velocities. Thus, when a photon reflects off a mirror or any shiny surface, it is actually absorbed and a new photon is released at the appropriate angle. So they say. Refraction works similarly. Thus, the polarizer absorbs the incoming photons and releases a somewhat smaller number of photons, all with the appropriate polarization.
    • As I recall, a polarizer made of stretched plastic film passes about 38% of the original light. A Nicol prism can actually split light into two beams with nearly no loss, so that 50% exits with horizontal polarization at one angle, and 50% with vertical polarization at a different angle. This would make no sense according to the "picket fence" analogy, because then very, very little of the original light could get through any polarizer: only that which is already polarized the "right" way. Thus, a Nicol prism, in particular, "tests" each photon, and either twists its polarization to match the nearer axis (shifting its exit angle to the one beam or the other accordingly), or annihilates the photon and emits one of appropriate polarization and exit angle.
    • Polarizing plastic is less efficient, passing only light of one polarization, but obviously changing whatever the polarization was of most photons to match its orientation. Thus, what is happening with the 45° polarizer is this: it absorbs some photons entirely, and twists the polarization of the rest of them by 45°. Then when they reach the last polarizer, they are now subject to a further absorption or twisting, so that the "twisted ones" get through, with perhaps 5% of the original beam intensity. That is a lot more than the fraction of a percent that "sneaks through" the original set of crossed polarizers because plastic film polarizers are not perfect.
    • So polarizing devices do not just passively allow certain photons to pass and block all others, but they change the polarization of the photons that they allow to pass.
  • I cannot pass by the chance to mention circular polarization. A thin piece of calcite or quartz (or, indeed, any colorless crystalline material that does not have cubic molecular symmetry) rotates the polarization of the incoming light. What is more, if it is just the right thickness, it will produce circularly polarized light. This is sometimes thought of as two streams of photons that are related to one another. Think of a vertically polarized photon coupled with a horizontally polarized photon, with their "waves" out of phase by a quarter of a wavelength. Then, in effect, their polarization will rotate as they go.
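The three-polarizer surprise described above can be sketched numerically with Malus's law, I = I0·cos²θ. This is an idealized model: lossless polarizers are assumed, so the numbers are upper bounds compared with real plastic film.

```python
import math

def transmit(intensity, relative_angle_deg):
    """Intensity passed by an ideal polarizer set at the given angle to
    the light's current polarization axis (Malus's law: I = I0*cos^2)."""
    return intensity * math.cos(math.radians(relative_angle_deg)) ** 2

I0 = 1.0
# Unpolarized light through the first polarizer: half gets through.
after_first = 0.5 * I0

# Crossed polarizers (90 degrees apart): essentially nothing emerges.
crossed = transmit(after_first, 90)

# Insert a third polarizer at 45 degrees between the crossed pair:
# each 45-degree step passes cos^2(45) = 1/2, so 1/8 of I0 emerges.
with_middle = transmit(transmit(after_first, 45), 45)

print(crossed)      # effectively zero
print(with_middle)  # 0.125 of the original intensity
```

The jump from zero to one-eighth of the original beam is the whole "mystery": the middle polarizer re-orients the light, exactly as the discussion above concludes.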

As interpreted by Gary Zukav, physics was becoming one with Buddhism. I wonder what he would make of today's situation, with the great popularity among physicists of cosmological string theories (at the moment, they can't decide which of the potential 10^500 possible string theories to favor!), the supposed detection of accelerating cosmological expansion that may lead to a "big rip" in which all things will be literally shredded to their component quarks, and the theory of cosmological inflation (developed in the early 1980's) that supposes that the initial expansion of the big bang took off at several trillion trillion trillion times the speed of light for just a tiny fraction of a second, during which the Universe grew to a size somewhere between that of a grapefruit and a galaxy (nobody can pin that down too precisely).

In my view, coupling physics theorizing with Buddhism is tantamount to solipsism. Let us accept as a first premise that what exists, does indeed exist, and go from there. Then the extreme versions of "New Physics" simply vanish, like an unobserved photon.

Saturday, June 10, 2017

The public versus science

kw: book reviews, nonfiction, science, sociology, anti-science sentiment

Someone once described scientific law as "What always happens." They were referring to things like "the law of gravity", which is a colloquial way of saying that what goes up must come down.

There is a commonplace view that flying things such as birds "defeat" the law of gravity because they have a flying life, a "different law". Only when they die do they succumb to gravity. In reality, birds take advantage of gravity to fly. The way they fly requires the gravitational force to keep "the wind under their wings". A bird in zero gee can't orient itself (see this video for an example). Given time, a bird might learn to compensate to some extent, but fast, directed flight requires gravity as one of the forces the bird is adapted to naturally balance.

This is one example of someone getting something partly right because of partial scientific knowledge. While we all take great advantage of technology—all the gadgets and appliances around us—most of us know little about what those things do. Decades ago Arthur C. Clarke wrote, "Any sufficiently advanced technology is indistinguishable from magic." For most folks, their phone or auto engine may as well work by magic.

A million years ago, advanced technology was the hand axe. Anyone could make one, though few could make them well. A century ago, advanced technology was the automobile. Few could make one, but many could repair them. I grew up learning to do my own oil changes and even did major engine work. My Dad and I rebuilt a VW engine once. I wonder how many backyard mechanics could rebuild the engine of a 2017 Honda Civic or Chevy Impala! The fuel injection system of a 2017 Impala has more moving parts than the entire assembly under the hood of that 1964 VW I had.

There are two fundamental barriers that impede the majority of people from learning science. First, science has proceeded in a stepwise manner, primarily over the past 500 years (with a few 2,500-year-old roots). To truly comprehend, let alone master, chemistry, geology, physics, botany, zoology, or microbiology (and let's not mention medicine!!) requires years of study, to build in one's mind the core structures that were discovered by hundreds of scholars and experimenters over the past half millennium.

The simplest example is mathematics. We are all able to use basic arithmetic. We learn that 2+2=4 by counting on our fingers, by lining up stones, and many other ways. Probably our first mental step is realizing that negative numbers and zero are useful. Then we learn about fractions, maybe decimals…but it is questionable whether most people ever grasp irrational numbers, even though the vast majority of actual quantities are irrational. And we haven't even got to algebra yet, which forms the foundation for all the disciplines of calculation needed for all engineering and science. Without a solid grasp of algebra, we cannot put useful amounts of geometry, trigonometry, and calculus into our mental toolbox. I thought I was pretty good at mathematics when I gained a solid (so I thought) facility with calculus. Then in graduate school I learned that calculus just opens the door to more dramatic realms of mathematics, which I would need to master to succeed as a geophysicist. I barely made it. I was still no more than halfway up the mathematical ladder, and did not climb farther. Most of us never need to proceed anywhere near that far, but if we can't even handle basic algebra (most of us can't), most physical science remains a mystery to us.

The second barrier is that science requires thinking. Sustained thinking. Not the kind of quick figuring we all use to perform most paying jobs. In that realm we all start out as sprinters; it is as though we are all born to be pretty good at the 50- or 100-meter dash. But to grasp science requires marathon-level mental performance. Fortunately, understanding the basic concepts of most fields of science is more like running a quarter or half mile; a bit of a stretch for a sprinter, but achievable. Of course, it is hard work. It makes you tired. Most folks aren't willing to put in the work. And so, lacking a tremendous level of effort by both teachers and parents, the vast majority of people grow up with only the haziest notion of the way things really work.

Take eyesight. What happens to make your eyes see? I understand from material presented in Scienceblind: Why Our Intuitive Theories About the World are So Often Wrong, by Andrew Shtulman, that most of us think that our eyes work by sending some kind of ray outward, and receiving it back. Kind of like the comic book illustrations of Superman's X-ray vision, where the X-rays went out from his eyes so he could see through things. But if such a belief were true, you would be able to see in the dark. It would not matter whether the sun was up or a light was turned on. Just by turning out the light and thinking hard, we can usually figure out that "light", whatever that is, scatters off of things and gets to our eyes, which receive it and are then able to "see".

Andrew Shtulman is concerned that the level of science ignorance, particularly in America, is so great that very few of us can make proper decisions about most technical issues. For example: Do vaccinations cause autism? Certain influential people loudly proclaim that they do, to the extent that many people, not wishing to leave anything to chance, ignore the protestations of every single scientist who has actually studied that matter. If you never get it anywhere else, get this true knowledge right here: Autism is not caused by any of the chemicals or deactivated organisms in vaccines. The proportion of autistic children among those vaccinated is exactly the same as the proportion among those not vaccinated. Period.

Dr. Shtulman presents twelve kinds of knowledge in which we form "natural concepts" or what he calls "intuitive theories". They are all based on everyday experience. For example, when you throw a ball, what path does it follow? Does it rise gradually to a maximum height and then descend just as gradually? Or does it rise up, hang a while, and then fall straight down? Because of perspective, as the thrower, we see it appear to rise, hang, and drop. But have you ever carefully watched a ball thrown by someone else who is some distance away? For example, at a ball game, if you are in the seats either behind home plate or out beyond second base, watch a "clothesline peg" from the third baseman to the first baseman. It is called a "clothesline peg" because a ball thrown hard seems intuitively to go "straight" from hand to glove over a distance of about 125 feet. But if you watch carefully, you'll see that the ball rises at least 16 feet, in a smooth, symmetrical curve (a parabola), and is highest when it passes over the pitcher's mound. It is actually thrown upward at an angle greater than 25°, and is descending at that same angle when it reaches the first baseman's glove.
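For the curious, the throw's geometry can be checked with the drag-free parabola relation tan(θ) = 4h/R, where h is the peak height and R the range. Air resistance is ignored here, so this is only a sketch, but the numbers quoted above are mutually consistent:

```python
import math

# Launch angle of a parabolic throw, from peak height and range.
# For a drag-free parabola, tan(theta) = 4h/R.
R = 125.0  # feet, roughly third base to first base
h = 16.0   # feet, the claimed peak height

theta = math.degrees(math.atan(4 * h / R))
print(f"launch angle ~ {theta:.1f} degrees")  # about 27 degrees
```

A 16-foot rise over 125 feet indeed implies a launch angle a bit above 25°, just as the text says.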

One of the hardest concepts for most people to grasp is "deep time." I was lucky to have preparation from a young age, when my parents told me that the Earth and the Universe are very old. We were Bible-believing Christians, but one of the first things I was taught, probably from about age seven, is that there is a "gap" between Genesis 1:1 and 1:2, between "God created" and "The earth became". Thus, when science teachers in middle school began talking about millions of years having passed, I was ready to receive it.

We naturally think of most things happening on time scales that are familiar to us. When I first knew my grandfathers, born in 1885 and 1887, they were nearly 70 years old, and in the early 1950's, that was old! Growing up on Bible stories, I was familiar with stories of Jesus and his apostles, who lived nearly 2,000 years before, and Abraham, about 2,000 years before that. I remember as a sophomore in High School learning that when Caesar and Cleopatra went sightseeing along the Nile, the Pyramids were already about 2,500 years old and were considered "ancient history". Particularly in America, few of us know of any building older than 300 years, though some kids in Illinois grow up in sight of the Cahokia Mounds, which are between 600 and 1,200 years old. Even residents of Damascus and Jerusalem seldom see a building older than 3,000 years. So to our natural way of thinking 10,000 years is "really long".

Now imagine one hundred times ten thousand: 100x10,000. That is one million. A human generation is about 25 years. A million years is about 40,000 generations! I love science, but that took me some time to grasp. I had to think about it, and think about it, and think some more. Then there was the concept of a billion years, a length of time 1,000 times as great! Now I am comfortable with such quantities, but it took work. Most people I know are not comfortable with deep time. In particular, the majority of religious Americans firmly believe that the Earth is no more than 6,000 to at most 10,000 years old, and that the first humans were created within a few days from the creation of the universe.
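The generation arithmetic above, spelled out:

```python
# A million years, counted in human generations.
generation_years = 25
million = 100 * 10_000                # one hundred times ten thousand
generations_per_million = million // generation_years

print(generations_per_million)  # 40000
```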

By the end of the book it became clear that the author's deepest concern is ignorance of, and opposition to, the theory of evolution, particularly the idea of human evolution or human origins as anything other than direct, instantaneous creation by God. Very few writers properly distinguish the fact of biological evolution—that it did happen, that life has changed through time—from the theory of natural selection, which describes the mechanism of biological evolution. When we say "theory of evolution" we mean the theory of natural selection. It is likely that most scientists, along with nearly all the public, conflate the fact and the mechanism. Fortunately, Dr. Shtulman distinguishes them, though not as clearly as I might have hoped.

To "get" evolution, one must know a great many things, including that many species are now extinct, that all life on Earth is based on DNA, and that there has been life on Earth not just for many millions of years but for close to four billion (3.8 to 4 billion) years. Without that foundation, all talk of evolution is a castle built on air, and is fruitless. Then, to "get" natural selection, one must know several things further, in addition to the facts of evolution: primarily that the offspring of one pair of creatures differ a little from one another in small, random ways; that not all those offspring will have offspring of their own; and that small differences in the DNA of that set of offspring lead to differences in how well they can grow, thrive and reproduce. These small differences arise naturally, and over many generations and great spans of time the resulting differences in health and reproductive ability add up to significant differences in the range of characteristics found among the individuals of that species.
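As an illustration only (this toy model is mine, not the book's), a few lines of simulation show how random heritable variation plus differential reproduction shifts a population over many generations. Each individual is reduced to a single number standing in for its fitness; fitter parents are more likely to be chosen, and offspring inherit the parent's value with a small random mutation.

```python
import random

random.seed(42)  # make the run repeatable

population = [1.0] * 100  # start with 100 identical individuals

for generation in range(200):
    new_population = []
    for _ in range(len(population)):
        # Differential reproduction: of two randomly drawn candidates,
        # the fitter one becomes the parent.
        parent = max(random.choice(population), random.choice(population))
        # Heritable variation: offspring differ slightly, at random.
        child = parent + random.gauss(0, 0.01)
        new_population.append(child)
    population = new_population

mean_fitness = sum(population) / len(population)
print(round(mean_fitness, 2))  # noticeably above the starting 1.0
```

Even though each mutation is tiny and symmetric (as likely to harm as to help), the selection step accumulates the helpful ones, which is the whole mechanism in miniature.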

Very, very few people willingly do the work to learn all those things. In a very real sense, then, most people have no right to an opinion about evolution! They don't have the mental tools to form a valid judgment. Sad to say, many of American society's decision makers are so ill-informed about every single branch of science that they have no proper basis to form valid judgments. But they write legislation that the American public must follow, upon pain of legal sanctions such as fines or imprisonments!

Well, I've chased that rabbit far enough. The subject of Scienceblind is fascinating. Unfortunately, all too frequently the writing is rather dull and I had to slog through it. I didn't skip any, but believe me, I was tempted to!

There was a time in both Europe and America during which a very popular form of entertainment was to attend science lectures and demonstrations by noted scientists. Scholars such as Michael Faraday and Humphry Davy made much of their income from such lectures. It would do most folks a great deal of good to attend such lectures today. Those who are willing to watch programs such as StarTalk with Neil deGrasse Tyson, or Nature and Nova, get a little bit of what they need to "get" the underpinnings of modern science. But those whose eyes glaze over at such material have little hope of making valid decisions about topics, such as vaccination, that can lead to great consequences for them, their children, and for those around them.

Tuesday, June 06, 2017

The yellow-tipped little agate snail

kw: species summaries, natural history, natural science, museums, research, photographs

Earlier this year I completed two major projects to prepare about 17,000 data records at the Delaware Museum of Natural History for all the freshwater species of bivalves (clams and mussels) and gastropods, and load them to a new database system from which they can be served up via the internet. The principal portal is iDigBio. A secondary portal, from which it is easier to dig into the records on a museum-by-museum basis, is InvertEBase. Each project took about a year.

That done, I have begun working through the museum's data for terrestrial gastropods (land and tree snails), which total about 38,000 records. We decided to take these a cabinet or two at a time, for the most part, so I am tackling between 1,500 and 2,000 records per mini-project. The first project took about a month, so I expect the full set of about 20 projects to take another two or three years.

I am in the midst of inventory for three related families, and the first is Achatinellidae. These snails were so named because they resemble the large land snails of the family Achatinidae. The prefix "achat-" means "agate" in Greek, and refers to the striped appearance of the most familiar species, the giant African snail, Achatina achatina (Linné, 1758), also called the tiger snail.

The one shown in this image may have a shell as long as 8" (20cm). The suffix "-ell" means "small"; the snails of family Achatinellidae are much smaller than the Achatinidae, but many have a similar striped look.

The type genus (the one the family's description is based upon) is Achatinella, and the type species is Achatinella apexfulva (Dixon, 1789). As I was taking inventory of the specimen lots of this species, I noticed that some had been collected by a major donor to the museum, Munroe L. Walton, when he was quite young, not more than eleven years old. In the three photos below, you can see they were collected in Hawaii in 1901; Walton was born in 1890. First, the photos, which mostly speak for themselves. Commentary continues following.




Around the year 1900 it was common to distinguish the many color variations of variable species by assigning subspecies names. The original labels for the first two lots reflect this. The third lot was originally attributed to a different species because many of the shells in certain parts of Oahu are left-handed, such as the one on the right in the third picture. These are now recognized as part of the species apexfulva. The suffix "-fulva" means "yellow", and shells of this species have a yellow tip. Specimens of this species grow to 1.5-1.9 cm (0.6-0.75 inch).

The second lot shown has an added label, written by Edward W. Thwing, who may have been the actual collector of that lot or part of it. He was 22 years older than Walton. The designations "New." and "Newc." on some of the labels refer to Wesley Newcomb, a physician who was a curator of mollusks at Cornell from the 1870's until 1888. He described the first specimens of many species in the family Achatinellidae.

Although Achatinella apexfulva does not have a common name, I call it the "yellow-tipped little agate snail" as a direct translation of its scientific name. The Achatinellidae in general are colorful and attractive. Sadly, most, including A. apexfulva, are now extinct.

Monday, May 29, 2017

Multiple utopias

kw: book reviews, science fiction, near-future, dystopias, utopias

Has Western society already become a plutocracy? A passel of disappointed Democrats, decrying the country's first billionaire President, and further decrying the number of billionaires and near-billionaires he has installed in his cabinet and other executive posts, seem to think so. Of course, they conveniently ignore that their own plutocrat, who has managed to avoid "personally" amassing too many millions, has instead created a pay-for-play foundation holding, to date, close to half a billion dollars that "everyone knows" is to be used for political purposes: a "charitable" foundation that spends three or four times as much on said plutocrat's travel and hotel expenses as on the purposes stated in its charter. At least the plutocrat who made it into office is honest about his great wealth and doesn't play poor-face.

In a true plutocracy, only the plutocrats own anything. How close is America to that?
  • The "one percent" of Americans own 38% of all wealth in the U.S.
  • The richest 10% own just over 75% (or, you could say, "the next 9%" own "the next 37%").
  • The poorest 50% own 1%.
  • The "middle class", the remaining 40%, own just under 24% of all wealth.
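The four shares listed above are round figures (real survey numbers vary by year and source), but they do account for essentially all household wealth:

```python
# Approximate U.S. household wealth shares, as quoted in the text.
shares = {
    "top 1%": 38.0,
    "next 9%": 37.0,
    "middle 40%": 24.0,
    "bottom 50%": 1.0,
}

total = sum(shares.values())
print(total)  # 100.0
```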

Though that is not quite full-on plutocracy, it is pretty dramatic inequality. This "wealth inequality" is greater than "income inequality", because below the median income (around $50,000 per household in recent times) it is hard to accumulate wealth, while for the "upper middle class and above" (about $200,000 and up) most income can be socked away to add to accumulated wealth, and a genuine plutocrat can enjoy tremendous luxury while spending only a few percent of income as great wealth continues to multiply.

Let's look at that $200k threshold. For someone working a 40-hour week, it would be nearly $100 per hour. Someone with a modicum of prudence can live quite well on less than half of that, and save the rest, which after taxes exceeds $60,000 yearly. About every 16.7 years, even if investment income is nil, another million dollars accumulates. If investment income is instead in the 4% range, then during the second 16.7-year period, another million will accumulate from the compounding alone. Now, there are numerous "professionals" out there who demand fees of several hundred dollars an hour, and probably earn half a million to a million yearly. Then there are CEO's of top corporations who are routinely paid a million per month. I consider that excessive.
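A sketch of that accumulation arithmetic, assuming a steady $60,000 saved yearly, first with no investment return and then with 4% compounded annually on year-end deposits:

```python
annual_savings = 60_000

# With no investment return, reaching the first million is pure division.
years_to_million = 1_000_000 / annual_savings  # ~16.7 years

def balance(years, rate=0.04, deposit=annual_savings):
    """Balance after the given number of years of year-end deposits,
    with returns compounded annually."""
    total = 0.0
    for _ in range(years):
        total = total * (1 + rate) + deposit
    return total

print(round(years_to_million, 1))             # 16.7
print(f"${balance(17):,.0f} after 17 years")
print(f"${balance(34):,.0f} after 34 years")
```

The run shows roughly $1.4 million after 17 years and about $4.2 million after 34, so during the second stretch compounding alone contributes well over a million dollars, broadly supporting the claim above.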

My own take on earnings that exceed one or two dollars (2017 dollars) per minute: The only guy to whom I will pay as much as $400 per hour (an average lawyer's fee in this area), without feeling resentful, is the dude who can go in with a screwdriver and side cutters and defuse a bomb. (Before you cry "sexism", I'd pay it to a similarly skilled gal with screwdriver and side cutters. Doesn't matter to me.)

I conclude that we are well on the way to plutocracy replacing democracy in America. Don't think the current President will make that go any faster; he won't. But had the Democrat won, she'd have pushed it in that direction much, much faster! America would have become "Godfather country" in pretty short order.

OK, so what will things be like in a full-blown plutocracy? Cory Doctorow thinks he knows, and it forms a society universally called "default" in Walkaway, a Sci-Fi novel of the sorta-near future. The hyper-rich who run everything are called "zottas" (I guess that is a combination of "zetta" and "yotta", the two largest prefixes in the metric number system. "Yotta" means a trillion-trillion, or 10^24, and "zetta" is 1/1,000 of that, or 10^21.) Either way, I suppose a zotta is rich enough to treat the odd billion dollars as pocket change.

In the face of zotta-controlled wage-slavery for those few who are ambitious enough to work, and a grinding welfare state for the rest, increasing numbers of people have been walking away, going to unoccupied areas and learning to live without "default society". They are not as badly off as things may seem. Technology has kept pace with the times, and nearly all human needs can be "fabbed" (an advanced form of 3D printing) from suitable feedstock. That goes not only for vehicles and houses and furniture but even more so for many foods and medicines, and also recreational drugs. Walkaway society is a society of abundance. No more zero-sum. If you take my sandwich, and I can throw leaves in a hopper and fab another in five minutes, why should I care? If I do feel a bit put out, I can make ten sandwiches and throw them at you…or ten darts, if I want to do something more than just shame you.

The political discourses that the author uses to point up the differences between default and walkaway philosophies make this a rather dialog-heavy book, sort of like the Foundation books by Asimov. Abundance philosophy has the potential to create genuine utopia, but human nature is not used to it, and there'll be tremendous growing pains. Part of the dramatic thrust of Walkaway is about such growing pains. Another big part is what we might call "World War W", as "default" tries to regain control of "walkaway".

This is intensified because the walkaways possess sufficient technology to be winning the race to produce effective scanning and simulation of a person, so that they can be reincarnated in software after dying. A lot gets glossed over about this, and that's OK, because there are significant questions to address, such as, "How will a person who wakes up in silico react to the knowledge of being dead?", and "Can the scan of a person become enslaved?". Two questions that I wondered about, that are barely touched upon: "How will the simulated person communicate; is there a need to emulate the signaling systems of the Occipital and Temporal lobes of the brain, and translate machine video and audio signals to and from appropriate optic and auditory nerve signals?", and "What will replace the endocrine signaling of the body with which the brain/mind was accustomed to relate?".

Such a book raises many questions and answers few. This one had the obligatory happy ending, but it didn't have to. The downfall of a plutocratic culture takes longer than a generation, and such cultures tend to leave little but scorched earth behind. The end of Walkaway has a continued coexistence, at arm's length, of the two cultures, with default becoming the secondary, left-behind one. I found that puzzling.

Those who know me may well wonder why I subjected myself to a book containing explicitly erotic scenes. There are but a handful, and I know how to skim past what I don't want to read. Whether you roll your eyes at this and say, "Yeah, sure," or not, you're entitled to believe what you wish.