UF mathematics professor Kevin Knudson reflects on how an instant means different things to a person, a redwood or a gnat, and how what’s infinitely small for one individual might be an entire lifetime for another, shaping our outlook on life.
How short is an “instant”? Is it a second? A tenth of a second? A microsecond? You might think all of these qualify. What about 100 years? That certainly doesn’t seem like an instant, and to a human being, it isn’t, since we’d be lucky to have a lifespan that long. But to a giant sequoia, say, 100 years is no big deal. And in geological terms it’s practically nothing.
How should we make sense of the idea of an instant? Does it cloud our judgment when we make decisions, both as individuals and as a society? Are we moving too slowly on solving big problems because we don’t see them happening “instantly”?
What does math say?
When Newton and Leibniz developed the calculus, they were forced to confront the infinitely small. The goal was to understand the idea of the “instantaneous velocity” of an object – that’s the speed at which something is moving at a particular instant in time (think of your car’s speedometer reading). They took the following approach.
We know how to compute average speed over some time interval: Simply take the total distance traveled and divide by the total time. For example, if the object travels 1 meter in 1 second, then the average speed is 1 m/s. But what if you have a better measuring device? Say you discover that the object actually traveled 20 cm in the first tenth of a second. Then the average speed over that interval is 2 m/s, and you’d probably agree that’s a better approximation to what we mean by the instantaneous velocity of the object at that point.
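To make the shrinking-interval idea concrete, here is a small Python sketch. The position function s(t) = t² is my own illustrative choice, not from the article: averaging over ever-smaller intervals homes in on the instantaneous velocity.

```python
# Hypothetical position function: s(t) = t**2 (meters at time t seconds).
# The true instantaneous velocity at t = 1 is 2 m/s.
def s(t):
    return t * t

def average_speed(t, dt):
    """Average speed over the interval [t, t + dt]."""
    return (s(t + dt) - s(t)) / dt

# Shrink the interval and watch the average approach the instantaneous value.
for dt in (1.0, 0.1, 0.001, 1e-6):
    print(f"dt = {dt:g}: average speed = {average_speed(1.0, dt):.6f} m/s")
```

For this particular s, the average over [1, 1 + dt] works out to exactly 2 + dt, so the printed values march toward 2 m/s as dt shrinks, which is what the limit formalism captures.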
But it’s still just an approximation. To get the real value, you would need to take smaller and smaller time intervals and have increasingly accurate measuring equipment. In the 17th century, the way mathematicians got around this was to talk about infinitesimals, quantities that were not zero yet were smaller than any positive number you can think of, including really tiny fractions like 1/1,000,000,000,000,000,000,000,000,000.
Some scientists of the day, as well as various institutions (the Jesuits, for example), rejected this idea as nonsense. Indeed, the idea that one could divide things forever flew counter to the Platonic ideal of indivisibles (also called atoms) and therefore did not sit well with the Renaissance embrace of ancient Greek philosophy. There’s a great book about this by Amir Alexander called “Infinitesimal: How a Dangerous Mathematical Theory Shaped the Modern World”; I recommend it heartily. Still, this is how calculus was done until Cauchy introduced the formalism of limits, thereby pushing infinitesimals out of the picture. Roughly speaking, a function f has limit L as x approaches a if the values of f(x) can be made arbitrarily close to L by taking x sufficiently close to a. The precise mathematical definition of this idea obviates the need for the old-fashioned use of infinitesimals.
Still, it’s a shame that infinitesimals fell out of favor, because they’re really useful for thinking about relative scale. An example I always give my students, for the reverse problem of dealing with the infinitely large, involves money. If you are a billionaire, meaning you have roughly 10⁹ dollars, you sure don’t care about 100 (or 10²) dollars. That’s a difference of seven orders of magnitude, and from your billionaire point of view it’s pointless to get upset over 100 dollars (indeed, you have 10 million hundred-dollar bills at your disposal).
In a similar way, infinitesimals help us deal with the infinitely small – a microsecond (1 millionth of a second) is a short amount of time, but it’s huge relative to a picosecond (1 trillionth, or 10⁻¹², of a second). In mathematical terms, if dx denotes a small amount (like a microsecond) then its square (dx)² (a picosecond) is negligible. So when you’re working on timescales measured in seconds you don’t really care about microseconds, and when you’re working on microsecond scales you don’t really care about picoseconds.
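A quick numeric check of the “squares are negligible” claim (the expansion and values below are my own illustration): if dx is on the microsecond scale, the (dx)² term in an expansion contributes only at the picosecond scale.

```python
dx = 1e-6  # one microsecond, measured in seconds

# Expand (1 + dx)**2 = 1 + 2*dx + dx**2, then drop the squared term.
exact = (1 + dx) ** 2
linearized = 1 + 2 * dx

error = exact - linearized  # the neglected dx**2 term
print(error)  # on the order of 1e-12, i.e. picosecond scale
```

The discrepancy is about one part in a trillion, which is exactly the sense in which 17th-century practitioners felt free to discard squares of infinitesimals.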
(By the way, our words for time are based on these relative notions of smallness. A minute is so named because it was considered small relative to an hour. Seconds were once called “second minutes” to indicate their relative insignificance.)
What’s your point of view?
I bring this up because a pair of articles I read recently made me wonder if our human-influenced idea of “instantaneous” is leading us to unfortunate decisions.
Question: Has the planet entered a new geological epoch, the so-called “Anthropocene”? Homo sapiens has undoubtedly influenced the Earth’s environment, and some geologists are arguing for a change to the International Chronostratigraphic Chart, the official timeline of periods, eons and other geological timescales. (We currently live in the Holocene epoch, already distinguished by the appearance of human beings on the scene.)
I’m not a geologist, so I cannot comment on whether this is something we should do, but the obvious first problem to be solved would be settling on a start date for this proposed epoch. Should it be the beginning of the Industrial Revolution in the late 18th century? What about the beginning of mining in ancient Egypt around 2500 BC? Or how about the mid-20th century, as others have argued?
The Earth is roughly 4.5 billion years old. Even if we decided this new epoch began 3,000 years ago, that is still effectively now in geological terms: there have been 1.5 million 3,000-year periods in the planet’s life. When things move on such timescales, perhaps we’re just splitting hairs in debating exactly when a new epoch began.
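The back-of-the-envelope division behind that claim is easy to verify:

```python
age_of_earth_years = 4.5e9   # roughly 4.5 billion years
epoch_length_years = 3_000   # the hypothetical epoch start discussed above

periods = age_of_earth_years / epoch_length_years
print(f"{periods:,.0f}")  # 1,500,000 such 3,000-year periods
```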
Climate change presents another example. Sea levels are rising, but the change is not immediately noticeable. Still, by the end of the 21st century, even the most conservative estimates suggest a three- or four-foot rise, with some scientists predicting it will be double that amount.
Why all the denialism and resistance to action, then? Aside from the obvious political disagreements, there is a more basic cause for the inertia: We don’t see it happening in real time. Sure, we notice there’s not as much snow in the winter as there was when we were kids or that the streets flood in Miami Beach on sunny days at high tide nowadays, but that could just be a fluke, right? Don’t we need more data?
In human terms, these changes are not instantaneous, but on the scale of the Earth’s climate cycle they effectively are. We are waiting for some catastrophic event to tell us clearly that the climate has officially changed, but the change unfolds too slowly for that. We’re looking for a sign on our human timescale, which is infinitesimal from a geological viewpoint. A few billion years from now, some future entity will be able to spot the turning point, though only to within a geological instant, not down to the year or century.
Fast or slow, it comes down to scale
In the absence of catastrophic planetary events, such as a large meteor collision, significant change to the Earth takes time. But it’s important to keep in mind that our relatively short lifespans distort our perception of “instantaneous” events.
As far as the planet is concerned, with its phases measured in the tens or hundreds of millions of years, things are moving pretty quickly. A 1℃ increase in global temperature in 100 years is very fast. If we extrapolate that rate into the future, we quickly see that the planet would be virtually uninhabitable within a few hundred years. The real dynamics are complicated, of course, but perhaps we should keep this simple calculus in mind as we attempt to craft sustainable solutions. Scale is everything, and our idea of small doesn’t necessarily align with reality.
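The “simple calculus” here is just a linear extrapolation. A minimal sketch, assuming the article’s figure of 1℃ per century holds steady (the real climate system is far more complex than a straight line):

```python
rate_c_per_century = 1.0  # the article's figure: 1 degree C per 100 years

def warming_after(years, rate=rate_c_per_century):
    """Naive linear extrapolation of total warming over a span of years."""
    return rate * (years / 100)

# A few hundred years at this rate already adds several degrees.
for years in (100, 300, 500):
    print(f"after {years} years: +{warming_after(years):.0f} C")
```

Even this crude model makes the scale point: a rate that feels imperceptible on a human timescale compounds into dramatic change on a merely historical one, let alone a geological one.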
This article originally appeared in The Conversation on Sept. 14, 2016.