Changing Worlds: Climate & Disaster in Antiquity

Although climate change has today become a much bigger and more globalized problem than in the past, ancient peoples did have to contend with local events that severely disrupted or even ended their way of life. A long series of droughts in parts of the Americas led to the abandonment of such cities as Cahuachi in Peru and may have contributed to the collapse of the Maya civilization in Mesoamerica, while similar climatic changes in southern Africa likely contributed to the demise of Mapungubwe and Great Zimbabwe.

Another notable catastrophe was the Bronze Age Collapse, which had devastating consequences: Climate change, combined with other stressors, brought down the Hittite Empire, the Mycenaean Civilization, Kassite Babylonia and many other states, ushering in a dark age around the Mediterranean.

There were, too, the more explosive events that brought total disaster in a matter of hours, such as the great floods told of in so many myths around the world, which archaeology has revealed often have a basis in fact. There were devastating earthquakes, such as the one that brought down the walls of Jericho or toppled the Colossus of Rhodes, and the eruptions of the volcanoes on Thera and of Vesuvius above Pompeii that killed thousands in a moment. All of these events, often exacerbated by overpopulation, overworked soil, and heavy deforestation of a specific area, meant that competition for power and resources became intense as agriculture was disrupted and leaders were challenged. Sometimes, even entire cities and states succumbed. In this collection, we examine these dramatic events and their lasting consequences.

Cahuachi was abandoned from the mid-6th century CE, perhaps due to climate change as the local environment became more arid. Earthquakes, too, may have been a contributing factor to the centre’s decline. It is interesting to note that the number of geoglyphs created at this time increased, perhaps indicating the urgent need for divine help to meet the crisis. The mounds were systematically covered with earth and so the abandonment of Cahuachi was both planned and deliberate.


8 Natural Disasters of Ancient Times

Natural disasters are something that humanity has had to deal with since its inception. They can wipe out significant portions of the human and wildlife populations where they strike. It is even possible that a natural disaster will one day bring about the end of the world. Their impact could be avoided, to some extent, by keeping people out of areas where natural disasters are known to strike. However, looking back at natural disasters of the past, we see that people were just as prone to exposing themselves to such risks as they are today.

The Damghan earthquake was a magnitude-7.9 event that struck a 200-mile (320 km) stretch of Iran on 22 December 856 AD. The earthquake's epicenter was said to be directly below the city of Damghan, which was then the capital of Iran. It caused approximately 200,000 deaths, making it the fifth deadliest earthquake in recorded history. The quake occurred along the Alpide belt, a mountain-building zone that is among the most seismically active areas on Earth.

In late May 526 AD, an earthquake struck Antioch and the surrounding region of Syria, then part of the Byzantine Empire. The death toll was a massive 250,000, making it the third deadliest earthquake of all time. The quake caused the port of Seleucia Pieria to rise by nearly one meter, resulting in the silting up of the harbor. It is estimated to have been over 7 on the Richter scale (VIII on the Mercalli scale). After the earthquake, a fire broke out which razed all buildings that had not already been destroyed.

The Antonine Plague is named after one of its possible victims, Marcus Aurelius Antoninus, the Emperor of Rome. It is otherwise known as the plague of Galen. Galen was a Greek physician who documented the plague. Judging by his description, historians believe that the Antonine Plague was caused by smallpox or measles. We can call this plague a natural disaster because it was caused by a naturally occurring disease and it killed a significant number of people.

The Antonine Plague is thought to have been brought back by Roman soldiers returning from battle in the east. Over time, it spread throughout the Roman Empire and to some of the tribes to the north. An estimated 5 million people were killed by the Antonine Plague. During a second outbreak, the Roman historian Dio Cassius wrote that 2,000 people were dying each day in Rome, roughly one quarter of those who were infected.

On July 21, 365 AD, an earthquake occurred under the Mediterranean Sea. It is thought that the earthquake was centered near the Greek island of Crete, and that it was a magnitude eight or greater. It destroyed nearly all of the towns on the island. It would have also caused damage in other areas of Greece, Libya, Cyprus and Sicily.

After the earthquake, a tsunami caused significant damage in Alexandria, Egypt and other areas. It was documented best in Alexandria. Writings from the time tell us that ships were carried as far as two miles inland by the wave. A description by Ammianus Marcellinus describes the effect of the earthquake and the resulting tsunami in detail. He wrote of how the earth shook and then the ocean receded in Alexandria and how a great wave inundated the city with seawater. It is estimated that thousands of people were killed.
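Magnitude scales like those cited for these ancient quakes are logarithmic, so a one-point difference in magnitude corresponds to a roughly thirty-fold difference in released energy. A minimal sketch of that arithmetic, using the standard Gutenberg-Richter energy relation (the magnitudes compared are illustrative):

```python
def energy_ratio(m1, m2):
    """Ratio of seismic energy released by two earthquakes,
    from the Gutenberg-Richter relation log10(E) = 1.5*M + 4.8."""
    return 10 ** (1.5 * (m1 - m2))

# A magnitude-8 event (like the estimated 365 AD Crete quake)
# releases roughly 32 times the energy of a magnitude-7 event.
ratio = energy_ratio(8.0, 7.0)
```

This is why a quake "over magnitude eight" can devastate coastlines across the eastern Mediterranean while a magnitude-7 event remains a regional disaster.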

The 79 AD eruption of Mount Vesuvius, and the subsequent destruction of Pompeii and Herculaneum, reminds us of the awesome power of this active volcano. In fact, Vesuvius may be the most dangerous volcano on Earth: more people live in its vicinity than near any other active volcano, and it is almost certain to erupt again.

When Mount Vesuvius erupted in 79 AD, it warned the people with an earthquake, which was ignored. The earthquake was later followed by the expulsion of volcanic debris and the appearance of an ominous cloud over the mountain. Pompeii was only 5 miles from the volcano; Herculaneum was even closer. The people of these towns died as one might expect victims of a volcano to die: they choked, burned and were subsequently covered in volcanic debris and runoff. What makes this ancient natural disaster so interesting is the evidence we have of it.

For more than 1,500 years, Pompeii lay buried in Italy. It was found when residents were cleaning up after another major eruption, in 1631 AD, and was not completely uncovered until the 20th century. Only then did people learn all too well the horrible fate that had befallen its ancient residents. The agony of their deaths has been immortalized in plaster: because the bodies rotted away long ago while entombed in volcanic rock, they left behind cavities, like those found in fossils. These were filled with plaster, and what emerged were near-perfect casts of the people of Pompeii as they had died. There were thousands of victims then; a similar eruption today could threaten millions.

Sometime around 1645 BC, a volcano erupted on the island of Santorini. The massive eruption caused widespread damage on both Santorini and the nearby island of Crete. At the time, the Minoans occupied both islands. The town on Santorini was not rediscovered until modern times.

Interestingly, there is reason to believe that this natural disaster inspired Plato's tale of Atlantis. However, this is, and will likely remain, pure speculation. It is assumed that the ancient inhabitants of these islands picked up warnings that the volcano was going to erupt, and heeded them. No victims of the eruption, if there were any, have been found. Furthermore, it appears as if all transportable valuables were removed prior to the eruption. Nonetheless, archaeologists have discovered that buildings and large belongings remained.

Helike was submerged in the Gulf of Corinth by an earthquake and a tsunami in 373 BC. It remains submerged to this day. Ancient writers commented on the destruction and some mentioned that you could see the ruins beneath the water for hundreds of years after the disaster. It is assumed that a number of people lost their lives, but how many is uncertain.

The search for Helike did not begin until the end of the 20th century. Since then, relics of Helike and, interestingly, of other towns have been found. Walls, walkways, coins and more have been viewed and photographed. According to some, this is yet another possible inspiration for Atlantis. However, the destruction of Helike happened in Plato's own lifetime, whereas he wrote that Atlantis sank 9,000 years before his time. It could still have served as inspiration for the fiction, though.

A number of other, smaller natural disasters occurred throughout ancient times. People were subject to them then as much as we are today. It makes you wonder how many civilizations were destroyed by natural disasters that we simply have no knowledge of yet.

The Plague of Justinian was a pandemic that afflicted the Eastern Roman (Byzantine) Empire, including its capital Constantinople, in the years 541–542 AD. The most commonly accepted cause of the pandemic is bubonic plague, which later became infamous for causing, or at least contributing to, the Black Death of the 14th century. The plague's social and cultural impact during this period is comparable to that of the Black Death. In the view of 6th-century Western historians, it was nearly worldwide in scope, striking central and south Asia, North Africa and Arabia, and Europe as far north as Denmark and as far west as Ireland. Until about 750, the plague would return with each generation throughout the Mediterranean basin. These waves of disease also had a major impact on the future course of European history. Modern historians named this plague after the Eastern Roman Emperor Justinian I, who was in power at the time; he contracted the disease himself but was among the minority who survived. The death toll from this series of plagues was a staggering 40 to 100 million.

Climate change led to the downfall of ancient civilisation - dire warning for modern world


The Indus Valley Civilisation was a Bronze Age civilisation which existed from around 3300 BC to 1300 BC in south Asia. There has been some mystery as to why the civilisation rapidly disappeared, with one theory suggesting it was due to an invasion by nomadic Indo-Aryans. Another theory is that earthquakes led to its destruction.


But new research throws those theories out the window, with mathematical evidence pointing to rapid climate change.

A scientist from the Rochester Institute of Technology (RIT) developed a method to study paleoclimate time series, such as the record of a certain isotope found in stalagmites from a cave in South Asia, and used it to analyse a monsoon record spanning the past 5,700 years.

Nishant Malik, assistant professor in RIT's School of Mathematical Sciences, found that at the beginning of the civilisation, there was a sudden drop in monsoon activity, but by the end it had picked up rapidly again.

Prof Malik said: "Usually the data we get when analyzing paleoclimate is a short time series with noise and uncertainty in it.


"As far as mathematics and climate is concerned, the tool we use very often in understanding climate and weather is dynamical systems.

"But dynamical systems theory is harder to apply to paleoclimate data.

"This new method can find transitions in the most challenging time series, including paleoclimate, which are short, have some amount of uncertainty and have noise in them."
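Malik's actual method is more sophisticated, but the basic task his quotes describe, finding a transition in a short, noisy time series, can be illustrated with a simple sliding-window comparison. The synthetic "monsoon proxy" data and the window size below are illustrative assumptions, not values from the study:

```python
import numpy as np

def detect_shift(series, window=20):
    """Scan a short, noisy time series for the index where the
    difference between the means of the preceding and following
    windows is largest: a crude change-point indicator."""
    n = len(series)
    best_i, best_gap = None, 0.0
    for i in range(window, n - window):
        before = series[i - window:i].mean()
        after = series[i:i + window].mean()
        gap = abs(after - before)
        if gap > best_gap:
            best_i, best_gap = i, gap
    return best_i, best_gap

# Synthetic proxy: a stable level, an abrupt drop at t = 150, plus noise.
rng = np.random.default_rng(0)
t = np.arange(300)
signal = np.where(t < 150, 1.0, 0.4) + rng.normal(0, 0.15, 300)
idx, gap = detect_shift(signal)   # idx lands near the true transition
```

Real paleoclimate series are short and uncertain, which is exactly why naive approaches like this struggle and methods such as Malik's are needed.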

This is not the first time scientists have discovered climate change has led to the downfall of an empire.


The Neo-Assyrian Empire was an empire from the Iron Age which ruled much of the Middle East from 911 BC to 609 BC.

Historians have been stunned by its meteoric rise and sudden fall from grace, which took just a matter of decades.

By analysing stalagmites from the Kuna Ba cave, located near Nineveh, northern Iraq, researchers were able to examine heavy and light isotopes of oxygen.

These isotopes offer a glimpse into the climatic past and researchers were able to see how during the beginning of the Neo-Assyrian Empire, the Middle East was going through an unusual wet patch.


The heavy rainfall allowed crops and agriculture to flourish, which in turn created a stable and strong urban environment on which the empire was built.

Study leaders Ashish Sinha, professor of Earth and climate sciences at California State University, and Gayatri Kathayat, associate professor of global environmental change at Xi'an Jiaotong University, detailed their discovery in a piece for The Conversation.

They wrote: “We argue that nearly two centuries of unusually wet conditions in this otherwise semi-arid region allowed for agriculture to flourish and energised the Assyrian economy.

“The climate acted as a catalyst for the creation of a dense network of urban and rural settlements in the unsettled zones that previously hadn’t been able to support farming.”

However, towards the end of the empire, the climate in the Middle East reverted to type, and became dry with drought conditions lasting for decades.



Ultimately, crops began to die out and people began to starve, leaving the empire weak and vulnerable.

And the team argue that the discovery should serve as a stark warning for the future of society as climate change becomes more evident in our modern world.

The pair added: “Droughts like this one offer a glimpse of what Assyrians endured during the mid-seventh century BC. And the collapse of the Neo-Assyrian Empire offers a warning to today’s societies.

“Climate change is here to stay. In the 21st century, people have what Neo-Assyrians did not: the benefit of hindsight and plenty of observational data.

“Unsustainable growth in politically volatile and water-stressed regions is a time-tested recipe for disaster.”

Abrupt climate changes in Earth history

An important new area of research, abrupt climate change, has developed since the 1980s. This research has been inspired by the discovery, in the ice core records of Greenland and Antarctica, of evidence for abrupt shifts in regional and global climates of the past. These events, which have also been documented in ocean and continental records, involve sudden shifts of Earth’s climate system from one equilibrium state to another. Such shifts are of considerable scientific concern because they can reveal something about the controls and sensitivity of the climate system. In particular, they point out nonlinearities, the so-called “tipping points,” where small, gradual changes in one component of the system can lead to a large change in the entire system. Such nonlinearities arise from the complex feedbacks between components of the Earth system. For example, during the Younger Dryas event (see below) a gradual increase in the release of fresh water to the North Atlantic Ocean led to an abrupt shutdown of the thermohaline circulation in the Atlantic basin. Abrupt climate shifts are of great societal concern, for any such shifts in the future might be so rapid and radical as to outstrip the capacity of agricultural, ecological, industrial, and economic systems to respond and adapt. Climate scientists are working with social scientists, ecologists, and economists to assess society’s vulnerability to such “climate surprises.”

The Younger Dryas event (12,900 to 11,600 years ago) is the most intensely studied and best-understood example of abrupt climate change. The event took place during the last deglaciation, a period of global warming when the Earth system was in transition from a glacial mode to an interglacial one. The Younger Dryas was marked by a sharp drop in temperatures in the North Atlantic region; cooling in northern Europe and eastern North America is estimated at 4 to 8 °C (7.2 to 14.4 °F). Terrestrial and marine records indicate that the Younger Dryas had detectable effects of lesser magnitude over most other regions of Earth. The termination of the Younger Dryas was very rapid, occurring within a decade. The Younger Dryas resulted from an abrupt shutdown of the thermohaline circulation in the North Atlantic, which is critical for the transport of heat from equatorial regions northward (today the Gulf Stream is a part of that circulation). The cause of the shutdown of the thermohaline circulation is under study; an influx of large volumes of freshwater from melting glaciers into the North Atlantic has been implicated, although other factors probably played a role.

Paleoclimatologists are devoting increasing attention to identifying and studying other abrupt changes. The Dansgaard-Oeschger cycles of the last glacial period are now recognized as representing alternation between two climate states, with rapid transitions from one state to the other. A 200-year-long cooling event in the Northern Hemisphere approximately 8,200 years ago resulted from the rapid draining of glacial Lake Agassiz into the North Atlantic via the Great Lakes and St. Lawrence drainage. This event, characterized as a miniature version of the Younger Dryas, had ecological impacts in Europe and North America that included a rapid decline of hemlock populations in New England forests. In addition, evidence of another such transition, marked by a rapid drop in the water levels of lakes and bogs in eastern North America, occurred 5,200 years ago. It is recorded in ice cores from glaciers at high altitudes in tropical regions as well as tree-ring, lake-level, and peatland samples from temperate regions.

Abrupt climatic changes occurring before the Pleistocene have also been documented. A transient thermal maximum has been documented near the Paleocene-Eocene boundary (56 million years ago), and evidence of rapid cooling events is observed near the Eocene-Oligocene boundary (33.9 million years ago) and the Oligocene-Miocene boundary (23 million years ago). All three of these events had global ecological, climatic, and biogeochemical consequences. Geochemical evidence indicates that the warm event at the Paleocene-Eocene boundary, known as the Paleocene-Eocene Thermal Maximum, was associated with a rapid increase in atmospheric carbon dioxide concentrations, possibly resulting from the massive outgassing and oxidation of methane hydrates (a compound whose chemical structure traps methane within a lattice of ice) from the ocean floor. The two cooling events appear to have resulted from a transient series of positive feedbacks among the atmosphere, oceans, ice sheets, and biosphere, similar to those observed in the Pleistocene. Other abrupt changes are recorded at various points in the Phanerozoic.

Abrupt climate changes can evidently be caused by a variety of processes. Rapid changes in an external factor can push the climate system into a new mode. Outgassing of methane hydrates and the sudden influx of glacial meltwater into the ocean are examples of such external forcing. Alternatively, gradual changes in external factors can lead to the crossing of a threshold; the climate system is unable to return to the former equilibrium and passes rapidly to a new one. Such nonlinear system behaviour is a potential concern as human activities, such as fossil-fuel combustion and land-use change, alter important components of Earth’s climate system.
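The threshold behaviour described above can be sketched with a toy model: a state variable in a double-well potential, slowly forced until one equilibrium vanishes and the system jumps abruptly to the other. Every parameter below is an illustrative assumption, not a calibrated climate model:

```python
def simulate(f_max=0.6, steps=60000, dt=0.01):
    """Integrate dx/dt = -x**3 + x + f while ramping the forcing f
    slowly from 0 to f_max.  The 'cold' equilibrium near x = -1
    disappears at f = 2/(3*sqrt(3)) ~ 0.385, after which the state
    jumps abruptly to the 'warm' branch near x = +1: a small,
    gradual change in forcing producing a large change in state."""
    x = -1.0                           # start in the cold equilibrium
    trajectory = []
    for k in range(steps):
        f = f_max * k / steps          # gradual change in forcing
        x += dt * (-x**3 + x + f)      # forward-Euler step
        trajectory.append((f, x))
    return trajectory

traj = simulate()
```

Plotting x against f would show the state tracking the lower branch almost unchanged, then flipping to the upper branch within a narrow range of f: the signature of a tipping point.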

Humans and other species have survived countless climatic changes in the past, and humans are a notably adaptable species. Adjustment to climatic changes, whether it is biological (as in the case of other species) or cultural (for humans), is easiest and least catastrophic when the changes are gradual and can be anticipated to a large extent. Rapid changes are more difficult to adapt to and incur more disruption and risk. Abrupt changes, especially unanticipated climate surprises, put human cultures and societies, as well as both the populations of other species and the ecosystems they inhabit, at considerable risk of severe disruption. Such changes may well be within humanity’s capacity to adapt, but not without paying severe penalties in the form of economic, ecological, agricultural, human health, and other disruptions. Knowledge of past climate variability provides guidelines on the natural variability and sensitivity of the Earth system. This knowledge also helps identify the risks associated with altering the Earth system with greenhouse gas emissions and regional to global-scale changes in land cover.


Thailand experienced its worst flooding in 2011 © Photo: Shermaine Ho/IRIN

JOHANNESBURG, 27 November 2012 (IRIN) - Many of the worst natural disasters of 2011 were also the most severe the affected countries had ever experienced, revealed the Global Climate Risk Index (CRI) 2013, which was released in Doha today.

Brazil, Cambodia, El Salvador, Laos and Thailand all appear in the CRI's 10 most-affected countries; each recorded its severest natural-hazard-related catastrophe in 2011.

Floods and landslides claimed the lives of more than 1,000 people and caused almost US$5 billion in direct losses in Brazil, said the index, which is produced by the NGO Germanwatch.

Thailand is listed as 2011's most disaster-affected country. It experienced its worst flooding ever that year, triggered by the landfall of Tropical Storm Nock-ten. The flooding led to losses worth $43 billion, making it one of the costliest natural disasters in the world.

El Salvador, the smallest country in Central America, appears frequently on the annual index. In 2011, extensive floods and landslides caused damages worth over $1 billion.

In Cambodia, severe rainfall resulted in the worst flooding in decades, killing about 250 people and destroying houses and rice crops. Its neighbour Laos also experienced heavy flooding in 10 of the country's 17 provinces; over 300,000 people were affected.

Connection to climate change

“We see that there are an increasing number of cases where science is saying, ‘Oh these big events have likely not happened without climate change’. It is getting more visible in the disasters,” said Sven Harmeling, the lead on climate change policy at Germanwatch. “We must expect that this will become more so in the future, that countries will experience extreme events of a strength they have never seen before.”

Because climate is the average of many weather events occurring over a span of years, one-off events cannot be directly linked to climate change. But studies do indicate that the increased occurrence of extreme climate events could likely result from climate change. Researchers from the University of Oxford and the Hadley Centre for Climate Prediction and Research showed that the rise of manmade greenhouse gases in the atmosphere has at least doubled the risk of a heatwave exceeding the record-shattering one that struck Europe in 2003.
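The "at least doubled the risk" finding is a statement about a risk ratio. A minimal sketch of the arithmetic behind such attribution claims follows; the probabilities used are invented placeholders, not figures from the Oxford/Hadley study:

```python
def risk_ratio(p_with_forcing, p_without_forcing):
    """How many times more likely the event is in the
    human-influenced climate than in a counterfactual one."""
    return p_with_forcing / p_without_forcing

def fraction_attributable(rr):
    """Fraction of attributable risk, FAR = 1 - 1/RR: the share of
    the event's likelihood attributable to the forcing."""
    return 1.0 - 1.0 / rr

# Placeholder probabilities of exceeding a 2003-style heat threshold:
rr = risk_ratio(0.10, 0.05)       # risk doubled -> RR = 2
far = fraction_attributable(rr)   # FAR = 0.5
```

A risk ratio of 2 means half of the event's likelihood (FAR = 0.5) can be attributed to the forcing, which is how a one-off heatwave can be partially, but never wholly, pinned on climate change.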

Changes in extreme natural events have been observed since 1950, noted the Intergovernmental Panel on Climate Change (IPCC) in its 2012 special report Managing the Risks of Extreme Events and Disasters to Advance Climate Change Adaptation (SREX). The frequency and intensity of rainfall, drought and warm spells have likely increased in some places, the report said.

The 2013 CRI lists a selection of the record-breaking natural disasters that have occurred since 2000, including Europe's 2003 heatwave, the hottest summer in 500 years. Other events include: the wettest autumn in England and Wales since 1766, recorded in 2000; the hottest summer in Greece since 1891, recorded in 2007; the hottest summer in western Russia since 1500, occurring in 2010; and the worst flooding in Pakistan's history, also occurring in 2010.


The UN climate change talks in Doha also follow the deadly impact of Hurricane Sandy, one of the greatest disasters in US history. The storm caused economic losses of about $50 billion, notes the CRI.

But the US saw an even more severe disaster in 2012, one that had far-reaching impacts on global food security. Popular climatologist Jeff Masters wrote on his blog that “shockingly, Sandy is probably not even the deadliest or most expensive weather disaster this year in the United States - Sandy's damages of perhaps $50 billion will likely be overshadowed by the huge costs of the great drought of 2012… While it will be several months before the costs of America's worst drought since 1954 are known, the 2012 drought is expected to cut America's GDP by 0.5 to 1 percentage points.”

With these massive economic impacts, the controversial matter of losses and damages caused by climate change will be granted “increasing weight” at the negotiations, says the CRI.

From 1970 to 2008, more than 95 percent of disaster-related deaths occurred in developing countries, the SREX report said. “Middle-income countries with rapidly expanding asset bases have borne the largest burden. During the period from 2001 to 2006, losses amounted to about one percent of the GDP for middle-income countries. In small, exposed countries, particularly small island developing states, losses expressed as a percentage of GDP have been particularly high, exceeding 1 percent in many cases and 8 percent in the most extreme cases, averaged over both disaster and non-disaster years for the period from 1970 to 2010.”

Harmeling surmised that countries participating in the Doha climate meeting “could decide to start building a strategic, comprehensive approach to dealing with loss and damage, which could be in the form of an international mechanism. Much of it will [focus on] adaptation, but there will also be the need to increase work on rehabilitation and multi-national insurance schemes.

“Doha is an important moment here to show the world that the most vulnerable are not left behind with the unavoidable consequences [of climate change].”


We live enshrouded by an atmospheric greenhouse of gases and water vapor that has maintained life-supporting conditions for hundreds of millions of years; CO2 is part of that mix. But over the past three million years our greenhouse system has been highly unstable. The record of CO2 trapped in polar ice reveals that over the last 800,000 years, during dramatic swings between ice ages and warm periods, CO2 has oscillated between 180 and 280 ppm. In the last rapid warm-up from the most recent glacial period, CO2 jumped to 260 ppm, and then oscillated around 275 ppm. Since then, for about 9,000 years, our climate has been relatively stable. Agriculture, civilizations and states emerged, and global population grew from several million at the end of the last Ice Age to 1.2 billion in 1850.

Since 1850, industrial emissions have driven atmospheric CO2 levels from about 280 ppm to about 410 ppm. Human populations are now surging toward eight billion. A doubling of CO2 from preindustrial levels, which is projected by 2075 due to the combination of industrial emissions and huge volumes of ancient greenhouse gases rising from melting permafrost, will put the Earth at CO2 levels not seen for 35 million years, the last time that Antarctica was ice-free. A quadrupling of CO2 would put us into the extreme hothouse conditions of the Jurassic.
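A quick back-of-envelope check of the doubling-by-2075 projection; the starting year and concentration below are rough assumptions (roughly 410 ppm around 2020), and doubling the 280 ppm preindustrial level means 560 ppm:

```python
start_ppm, target_ppm = 410.0, 560.0   # assumed ~2020 starting point
years = 2075 - 2020

linear_rate = (target_ppm - start_ppm) / years               # ppm per year
compound_rate = (target_ppm / start_ppm) ** (1 / years) - 1  # fraction per year
```

The linear rate works out to roughly 2.7 ppm per year, close to the rise currently being observed, so the 2075 doubling requires only that today's pace continue, not that it accelerate.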

Because CO2 traps heat, our industrial emissions of greenhouse gases have launched a sudden and rapid shift in global temperature and climate. For over a century we have unconsciously been playing with the delicate controls of our planet's unstable climate system. We did not understand what we were doing until recently, but we do now.

Fortunately, carbon in the atmosphere isn't the only thing that can grow exponentially. Technological rates of change provide a ray of hope. Fossil-fueled economies emerged in a geological instant, the past two hundred years, but over the last decade renewable energy systems have developed even more explosively. With strong governmental action supported by a broad popular consensus, we might yet find a way through.

We need to act just as “exponentially” to reset these global controls and to accelerate through a new global energy transition.

The means are at hand. Solar and wind technologies are surging in the marketplace. Driven by technological advances and economies of scale, renewable energy is already a prudent financial investment. Coal power generation persists thanks to national subsidies; across the United States, investment in coal is stalled and collapsing. Widespread gas leaks undermine the possible benefits of natural gas as a “transition fuel.” Indeed, power generation by natural gas may be at what some analysts see as a tipping point toward unprofitability.

Over just the last decade alternative energy markets have moved well into the exponential transition phase. The current transition to renewable energy systems creates massive employment opportunities just as past energy transitions have over the last two centuries. As in the past, and as described by researcher Carlota Perez, this transition faces a profound political struggle with entrenched, fossil fuel interests.

We cannot promise a panacea. There is no hope that we can suddenly and magically return to our era's old norms of climate and atmosphere. The effects of residual enhanced CO2 will take centuries to work out under the best of scenarios. But our planet's relationship to carbon dioxide has changed drastically before; it can change again. There is hope that we can avert a fundamental civilizational crisis, but only if we take immediate and “exponential” action.

Historians’ perspectives on how the past informs the present

John Brooke, Michael Bevis and Steve Rissing teach History, Geophysics and Biology at The Ohio State University, where they team-teach a general education course on climate change.



2006: The warming ocean could fuel more frequent and more intense Atlantic hurricanes.

2016: Hurricane frequency has dropped somewhat; hurricane intensities haven't changed much, at least not yet.

In August 2005, Hurricane Katrina slammed into the Gulf Coast. Floodwaters covered roughly 80 percent of New Orleans, 1,836 people died, hundreds of thousands became homeless and the most active Atlantic hurricane season on record was far from over. As the last storm fizzled, damages had reached $160 billion, meteorologists had run through the alphabet of preselected storm names and many people, including Gore, were indicting global warming as a probable culprit.

“Hurricanes were the poster child of global warming,” says Christopher Landsea, a meteorologist at the National Oceanic and Atmospheric Administration’s National Hurricane Center in Miami. “In reality, it’s a lot more subtle than that.”

Tropical cyclones, such as Atlantic hurricanes, are stirred up where seawater is warmer than the overlying air. Because climate change raises ocean temperatures, it made sense that such storms could strike more often and with more ferocity. A closer look at hurricanes past and future suggests, however, that the relationship between warming and hurricanes is less clear-cut.

STEADY STORMS The record-smashing 2005 hurricane season raised concerns that storms were becoming stronger and more frequent. Yet, a closer look at the long-term trends revealed that Atlantic hurricane frequency has not significantly changed since 1878. Source: C. Landsea/NHC/NOAA

Several studies in the mid-2000s examining the history of Atlantic hurricanes pointed to an overall rise in the number of 20th century storms in step with warming sea surface temperatures. Scrutinizing those numbers, Landsea uncovered a problem: Hurricane-spotting satellites date back only to 1961’s Hurricane Esther. Before then, storm watchers probably missed many weaker, shorter-lived storms. Taking this into account, Landsea and colleagues reported in 2010 that the number of annual storms has actually decreased somewhat over the last century.

That decrease could be explained by climate factors other than rising sea surface temperatures. Changes in atmospheric heating can increase the variation in wind speed at different elevations, known as wind shear. The shearing winds rip apart burgeoning storms and decrease the number of fully formed hurricanes, researchers reported in 2007 in Geophysical Research Letters.

The overall frequency of storms, however, is less important than the number of Katrina-scale events, says Gabriel Vecchi, an oceanographer at NOAA’s Geophysical Fluid Dynamics Laboratory in Princeton, N.J. Category 4 and 5 storms, the most violent, make up only 6 percent of U.S. hurricane landfalls, but they cause nearly half of all damage. Vecchi and colleagues used the latest understanding of how hurricanes form and intensify to forecast how the storms will behave under future climate conditions.

LANDFALL Hurricane Katrina slammed into Louisiana in August 2005. The storm devastated the state and flooded much of New Orleans. Radar data from NWS New Orleans, processed by the National Climatic Data Center

The work, published in 2010 in Science, predicted that the frequency of Category 4 and 5 storms could nearly double by 2100 due to ocean warming, even if the overall number of hurricanes doesn’t rise. At present, however, climate change’s influence on hurricanes is probably too small to detect, Vecchi says, adding that Katrina’s wrath can’t be blamed on global warming.

Future hurricanes will cause more damage, Landsea predicts, whether or not there’s any change in storm intensity. Rising sea levels mean floodwaters will climb higher and reach farther inland. Hurricane Sandy, which stormed over New Jersey and New York in October 2012, had weakened by the time it reached the coast. But it drove a catastrophic storm surge into the coastline that caused about $50 billion in damages. If sea levels were higher, Sandy’s surge would have reached even farther inland and damage could have been much worse.

Many vulnerable areas such as St. Petersburg, Fla., are woefully underprepared for threats posed by storms at current sea levels, Landsea warns. Higher sea levels won’t help. “We don’t need to invoke climate change decades down the line — we’ve got a big problem now,” he says.



How Climate Change and Plague Helped Bring Down the Roman Empire

This article was originally published at Aeon and has been republished under Creative Commons.

At some time or another, every historian of Rome has been asked to say where we are, today, on Rome’s cycle of decline. Historians might squirm at such attempts to use the past, but even if history does not repeat itself or come packaged into moral lessons, it can deepen our sense of what it means to be human and how fragile our societies are.

In the middle of the second century, the Romans controlled a huge, geographically diverse part of the globe, from northern Britain to the edges of the Sahara, from the Atlantic to Mesopotamia. The generally prosperous population peaked at 75 million. Eventually, all free inhabitants of the empire came to enjoy the rights of Roman citizenship. Little wonder that the 18th-century English historian Edward Gibbon judged this age the ‘most happy’ in the history of our species — yet today we are more likely to see the advance of Roman civilization as unwittingly planting the seeds of its own demise.

Five centuries later, the Roman empire was a small Byzantine rump-state controlled from Constantinople, its near-eastern provinces lost to Islamic invasions, its western lands covered by a patchwork of Germanic kingdoms. Trade receded, cities shrank and technological advance halted. Despite the cultural vitality and spiritual legacy of these centuries, this period was marked by a declining population, political fragmentation and lower levels of material complexity. When the historian Ian Morris at Stanford University created a universal social-development index, the fall of Rome emerged as the greatest setback in the history of human civilization. 

Explanations for a phenomenon of this magnitude abound: in 1984, the German classicist Alexander Demandt cataloged more than 200 hypotheses. Most scholars have looked to the internal political dynamics of the imperial system or the shifting geopolitical context of an empire whose neighbours gradually caught up in the sophistication of their military and political technologies. But new evidence has started to unveil the crucial role played by changes in the natural environment. The paradoxes of social development, and the inherent unpredictability of nature, worked in concert to bring about Rome’s demise.

Climate change did not begin with the exhaust fumes of industrialization, but has been a permanent feature of human existence. Orbital mechanics (small variations in the tilt, spin and eccentricity of the Earth’s orbit) and solar cycles alter the amount and distribution of energy received from the Sun. And volcanic eruptions spew reflective sulphates into the atmosphere, sometimes with long-reaching effects. Modern, anthropogenic climate change is so perilous because it is happening quickly and in conjunction with so many other irreversible changes in the Earth’s biosphere. But climate change per se is nothing new.

The need to understand the natural context of modern climate change has been an unmitigated boon for historians. Earth scientists have scoured the planet for paleoclimate proxies, natural archives of the past environment. The effort to put climate change in the foreground of Roman history is motivated both by troves of new data and a heightened sensitivity to the importance of the physical environment.

It turns out that climate had a major role in the rise and fall of Roman civilization. The empire-builders benefitted from impeccable timing: the characteristic warm, wet and stable weather was conducive to economic productivity in an agrarian society. The benefits of economic growth supported the political and social bargains by which the Roman empire controlled its vast territory. The favorable climate, in ways subtle and profound, was baked into the empire’s innermost structure.

The end of this lucky climate regime did not immediately, or in any simple deterministic sense, spell the doom of Rome. Rather, a less favorable climate undermined its power just when the empire was imperilled by more dangerous enemies — Germans, Persians — from without. Climate instability peaked in the sixth century, during the reign of Justinian. Work by dendrochronologists and ice-core experts points to an enormous spasm of volcanic activity in the 530s and 540s CE, unlike anything else in the past few thousand years. This violent sequence of eruptions triggered what is now called the ‘Late Antique Little Ice Age,’ when much colder temperatures endured for at least 150 years.

This phase of climate deterioration had decisive effects in Rome’s unravelling. It was also intimately linked to a catastrophe of even greater moment: the outbreak of the first pandemic of bubonic plague.

Disruptions in the biological environment were even more consequential to Rome’s destiny. For all the empire’s precocious advances, life expectancy hovered in the mid-20s, with infectious diseases the leading cause of death. But the array of diseases that preyed upon Romans was not static and, here too, new sensibilities and technologies are radically changing the way we understand the dynamics of evolutionary history — both for our own species, and for our microbial allies and adversaries.

The highly urbanized, highly interconnected Roman empire was a boon to its microbial inhabitants. Humble gastro-enteric diseases such as Shigellosis and paratyphoid fevers spread via contamination of food and water, and flourished in densely packed cities. Where swamps were drained and highways laid, the potential of malaria was unlocked in its worst form — Plasmodium falciparum, a deadly mosquito-borne protozoon. The Romans also connected societies by land and by sea as never before, with the unintended consequence that germs moved as never before, too. Slow killers such as tuberculosis and leprosy enjoyed a heyday in the web of interconnected cities fostered by Roman development.

However, the decisive factor in Rome’s biological history was the arrival of new germs capable of causing pandemic events. The empire was rocked by three such intercontinental disease events. The Antonine plague coincided with the end of the optimal climate regime, and was probably the global debut of the smallpox virus. The empire recovered, but never regained its previous commanding dominance. Then, in the mid-third century, a mysterious affliction of unknown origin called the Plague of Cyprian sent the empire into a tailspin.

Though it rebounded, the empire was profoundly altered — with a new kind of emperor, a new kind of money, a new kind of society, and soon a new religion known as Christianity. Most dramatically, in the sixth century a resurgent empire led by Justinian faced a pandemic of bubonic plague, a prelude to the medieval Black Death. The toll was unfathomable: maybe half the population was felled.

The plague of Justinian is a case study in the extraordinarily complex relationship between human and natural systems. The culprit, the Yersinia pestis bacterium, is not a particularly ancient nemesis. Evolving just 4,000 years ago, almost certainly in central Asia, it was an evolutionary newborn when it caused the first plague pandemic. The disease is permanently present in colonies of social, burrowing rodents such as marmots or gerbils. However, the historic plague pandemics were colossal accidents, spillover events involving at least five different species: the bacterium, the reservoir rodent, the amplification host (the black rat, which lives close to humans), the fleas that spread the germ and the people caught in the crossfire.

Genetic evidence suggests that the strain of Yersinia pestis that generated the plague of Justinian originated somewhere near western China. It first appeared on the southern shores of the Mediterranean and, in all likelihood, was smuggled in along the southern, seaborne trading networks that carried silk and spices to Roman consumers. It was an accident of early globalization. Once the germ reached the seething colonies of commensal rodents, fattened on the empire’s giant stores of grain, the mortality was unstoppable.

The plague pandemic was an event of astonishing ecological complexity. It required purely chance conjunctions, especially if the initial outbreak beyond the reservoir rodents in central Asia was triggered by those massive volcanic eruptions in the years preceding it. It also involved the unintended consequences of the built human environment — such as the global trade networks that shuttled the germ onto Roman shores, or the proliferation of rats inside the empire.

The pandemic baffles our distinctions between structure and chance, pattern and contingency. Therein lies one of the lessons of Rome. Humans shape nature — above all, the ecological conditions within which evolution plays out. But nature remains blind to our intentions, and other organisms and ecosystems do not obey our rules. Climate change and disease evolution have been the wild cards of human history.

Our world now is very different from ancient Rome. We have public health, germ theory and antibiotic pharmaceuticals. We will not be as helpless as the Romans, if we are wise enough to recognize the grave threats looming around us, and to use the tools at our disposal to mitigate them. But the centrality of nature in Rome’s fall gives us reason to reconsider the power of the physical and biological environment to tilt the fortunes of human societies.

Perhaps we could come to see the Romans not so much as an ancient civilization, standing across an impassable divide from our modern age, but rather as the makers of our world today. They built a civilization where global networks, emerging infectious diseases and ecological instability were decisive forces in the fate of human societies. The Romans, too, thought they had the upper hand over the fickle and furious power of the natural environment.

History warns us: they were wrong.

Kyle Harper is a professor of classics and letters and senior vice president and provost at the University of Oklahoma. His latest book is The Fate of Rome: Climate, Disease, and the End of an Empire (2017).

Natural Disasters and Climate Change

Students use maps and graphs to understand how the frequency of billion-dollar natural disaster events has changed over time. They analyze how climate change affected the 2017 California wildfires and the flooding from Hurricane Harvey.

Earth Science, Geography, Human Geography



1.     Engage students in the topic by inviting them to share their knowledge of natural disasters.

Ask students to give you examples of natural disasters, including floods, earthquakes, hurricanes, droughts, wildfires, tornadoes, landslides, volcanic eruptions, tsunamis, snowstorms, and severe thunderstorms. As a class, determine a working definition of the term natural disaster. Be sure the definition includes the key components of a natural disaster: a natural event or force that causes damage to property and/or loss of life. Have students look back at their list of examples. Ask: Which of these natural disasters are related to weather? (Answer: All in the list above are related in some way to weather except earthquakes, volcanoes, and tsunamis.)

2.     Project the U.S. 2017 Billion-Dollar Weather and Climate Disasters Map from NOAA’s Billion-Dollar Weather and Climate Disasters: Overview webpage.

Read or summarize the text under the heading “2017 in Context.” Make sure students understand that the number of billion-dollar events in 2017 was significant because it was higher than both the historic and recent five-year average and because of its high economic impact. Point out that the costs of these disasters are calculated by considering property and infrastructure damage and business interruption. Medical costs and loss of life are not considered in the final number. Ask students to make observations about the map. Ask: What types of natural disasters are shown on the map? (Answer: droughts, wildfires, flooding, tornadoes, hurricanes, hailstorms, a freeze, and severe weather.) Ask: Did you hear about any of these natural disasters in the news? What would make these events newsworthy? (Answer: Depending on where students live, they may be familiar with any of these events, but the California wildfires and the three hurricanes were covered extensively in the national news. These events are newsworthy primarily because they resulted in great damage to property and possible loss of life.) Ask: What patterns do you notice in the locations of these events? (Answer: Students may notice some types of events seem to be grouped in certain parts of the country.) Ask: Why might such damaging disaster events happen in these locations? (Answer: Students may note some events affected densely populated cities, which might increase the amount of property damage. Similarly, they may observe that some occurred in agricultural areas, which may have affected crops and damaged the economy. What is important for them to recognize is that there could be multiple factors contributing to the costliness of these events.)

3.     Have students interpret graphs to understand patterns in the frequency of major natural disasters in the United States over time.

Scroll down to the 1980–present Year-to-Date United States Billion-Dollar Disaster Event Frequency graph. Ask students what variables are shown on the x and y axes of the graph (x is months and y is the number of events). Ask: What do the colored and gray lines represent? (Answer: These lines represent specific years.) Ask: What does the black line represent? (Answer: The black line represents the average of all the years in the range represented on the graph.) Ask students to work with a partner to answer a few questions about the graph to ensure they are reading it correctly. They should navigate to the website on their own devices and write the answers to the following questions on a piece of scrap paper:

  • Why don’t any of the lines on the graph decrease from left to right? (Answer: They show the cumulative, or total, number of events over the course of the year, so there can’t be fewer events by December than there were in January.)
  • How many total billion-dollar disaster events were there in 1988? 1991? 2006? 2015? (Answer: 1, 4, 6, 10)

Walk around and check students’ answers and address any problems with understanding. Then ask students what they observe about the graph. Ask: What general trend do you see? (Answer: They should see that the frequency of billion-dollar events is generally increasing over time.) Once students have identified that trend, challenge them by asking how that could be true, since there were more events in 1989 than there were in 2014. The key is for students to understand that a trend over time does not mean that every year will have more billion-dollar disaster events than the last. Ask students: What are some factors that may explain this general trend? (Answer: There are many reasons students might give, such as population growth, development into areas more at risk for natural disasters, sea-level rise, or climate change.) If students do not mention climate change, introduce the idea to them. Explain that while many factors contribute to any weather event, scientists agree that climate change in general is leading, and will continue to lead, to more extreme weather events—from droughts to flooding to hurricanes. Now scientists are increasingly looking at the role climate change is playing in specific disaster events. Review the basic causes and consequences of climate change before moving to the next step.
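The cumulative logic behind the year-to-date graph can be sketched in a few lines of Python. The monthly counts below are invented for illustration, not NOAA data:

```python
from itertools import accumulate

# Hypothetical monthly counts of billion-dollar disaster events for one year
monthly_events = [0, 1, 0, 2, 1, 0, 3, 1, 2, 0, 1, 1]

# Year-to-date totals: each month adds its events to the running sum
year_to_date = list(accumulate(monthly_events))

# The running total can never decrease, which is why no line on the
# graph slopes downward from left to right
assert all(a <= b for a, b in zip(year_to_date, year_to_date[1:]))

print(year_to_date[-1])  # total events for the year: 12
```

This mirrors the answer students should give for the first partner question: December's value is the whole year's total, so the lines only ever rise or stay flat.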

4.     Watch a video about the 2017 California wildfires.

Tell students they are going to focus on two extreme weather-related disaster events and look for evidence that climate change played a role. Divide students into groups of two or three and distribute the Analyzing a Natural Disaster Event handout to each student. Go over the questions on the worksheet with students so they are familiar with them. Review the environmental conditions that make wildfires more likely. Show the first minute and 35 seconds of the PBS NewsHour Segment Climate change is part of California’s perfect recipe for intense wildfire. Pause the video and ask students to briefly explain the evidence Park Williams gives linking climate change to an increase in wildfires generally. Explain that they will now watch and listen for evidence that climate change contributed to the California wildfires specifically. Continue playing the video. Ask students to just watch the first time through with the questions on the worksheet in mind, but not to try to complete the worksheet at this point. Pause the video frequently to discuss and check for understanding. Then replay the video, and this time ask students to complete the worksheet as they watch. Provide support for students as they work by pausing the video, rewinding, and modeling how to answer the questions as needed.

5.     Have students research Hurricane Harvey and analyze evidence that climate change contributed to the severity of the flooding during the hurricane.

After students have completed the worksheet while watching Climate change is part of California’s perfect recipe for intense wildfire, distribute another copy of the worksheet to each group. As a class, review the environmental conditions that lead to a hurricane. In groups, have students research Hurricane Harvey, and use the worksheet to analyze the effect climate change had on the flooding from the storm. Some useful websites are listed in the Resources for Further Exploration section.

6.     Discuss students’ findings.

Ask students to share their findings and conclusions with the class. Is there a consensus about the role of climate change in the extreme flooding from Hurricane Harvey? If not, what are the arguments for and against? Discuss the differences in the role climate change played in the California wildfires and the role it played in the flooding in Hurricane Harvey. Ask:

  • Do you think most hurricanes are affected by climate change? Why or why not?
  • Do you think most wildfires are affected by climate change? Why or why not?
  • Would these types of disaster events continue to occur even without climate change?
  • How might they be different? 
  • What steps can we take to protect lives, property, and infrastructure as more extreme weather-related natural disaster events become more common?

Informal Assessment

Assess student understanding by reviewing their work on the Analyzing a Natural Disaster Event handout that they completed about Hurricane Harvey. Additionally, use the final discussion to identify and correct any misconceptions.

Extending the Learning

Monitor the news for weather-related disaster events around the world. Research to see if scientists are able to link the events to climate change. Keep track of any such linkages over the course of the year.

Have students predict how the frequency of billion-dollar natural disaster events will change in the next one hundred years and explain their reasoning.

Have students investigate how natural disaster events affect human migration. Do people leave or move out of the areas after major natural disasters? Use this map of climate change and human migration as a starting point.


From ancient times, people suspected that the climate of a region could change over the course of centuries. For example, Theophrastus, a pupil of Aristotle, told how the draining of marshes had made a particular locality more susceptible to freezing, and speculated that lands became warmer when the clearing of forests exposed them to sunlight. Renaissance and later scholars saw that deforestation, irrigation, and grazing had altered the lands around the Mediterranean since ancient times; they thought it plausible that these human interventions had affected the local weather. [1] [2] Vitruvius, in the first century BC, wrote about climate in relation to housing architecture and how to choose locations for cities. [3] [4]

The 18th and 19th-century conversion of Eastern North America from forest to croplands brought obvious change within a human lifetime. From the early 19th century, many believed the transformation was altering the region's climate—probably for the better. When farmers in America, dubbed "sodbusters", took over the Great Plains, they held that "rain follows the plow." [5] [6] Other experts disagreed, and some argued that deforestation caused rapid rainwater run-off and flooding, and could even result in reduced rainfall. European academics, convinced of the superiority of their own civilization, said that the Orientals of the Ancient Near East had heedlessly converted their once lush lands into impoverished deserts. [7]

Meanwhile, national weather agencies had begun to compile masses of reliable observations of temperature, rainfall, and the like. When these figures were analyzed, they showed many rises and dips, but no steady long-term change. By the end of the 19th century, scientific opinion had turned decisively against any belief in a human influence on climate. And whatever the regional effects, few imagined that humans could affect the climate of the planet as a whole. [7]

From the mid-17th century, naturalists attempted to reconcile mechanical philosophy with theology, initially within a Biblical timescale. By the late 18th century, there was increasing acceptance of prehistoric epochs. Geologists found evidence of a succession of geological ages with changes in climate. There were various competing theories about these changes: Buffon proposed that the Earth had begun as an incandescent globe and was very gradually cooling. James Hutton, whose ideas of cyclic change over huge periods of time were later dubbed uniformitarianism, was among those who found signs of past glacial activity in places too warm for glaciers in modern times. [8]

In 1815 Jean-Pierre Perraudin described for the first time how glaciers might be responsible for the giant boulders seen in alpine valleys. As he hiked in the Val de Bagnes, he noticed giant granite rocks that were scattered around the narrow valley. He knew that it would take an exceptional force to move such large rocks. He also noticed how glaciers left stripes on the land and concluded that it was the ice that had carried the boulders down into the valleys. [9]

His idea was initially met with disbelief. Jean de Charpentier wrote, "I found his hypothesis so extraordinary and even so extravagant that I considered it as not worth examining or even considering." [10] Despite Charpentier's initial rejection, Perraudin eventually convinced Ignaz Venetz that it might be worth studying. Venetz convinced Charpentier, who in turn convinced the influential scientist Louis Agassiz that the glacial theory had merit. [9]

Agassiz developed a theory of what he termed "Ice Age"—when glaciers covered Europe and much of North America. In 1837 Agassiz was the first to scientifically propose that the Earth had been subject to a past ice age. [11] William Buckland had been a leading proponent in Britain of flood geology, later dubbed catastrophism, which accounted for erratic boulders and other "diluvium" as relics of the Biblical flood. This was strongly opposed by Charles Lyell's version of Hutton's uniformitarianism and was gradually abandoned by Buckland and other catastrophist geologists. A field trip to the Alps with Agassiz in October 1838 convinced Buckland that features in Britain had been caused by glaciation, and both he and Lyell strongly supported the ice age theory which became widely accepted by the 1870s. [8]

Before the concept of ice ages was proposed, Joseph Fourier in 1824 reasoned on the basis of physics that Earth's atmosphere kept the planet warmer than would be the case in a vacuum. Fourier recognized that the atmosphere transmitted visible light waves efficiently to the earth's surface. The earth then absorbed visible light and emitted infrared radiation in response, but the atmosphere did not transmit infrared efficiently, which therefore increased surface temperatures. He also suspected that human activities could influence climate, although he focused primarily on land-use changes. In an 1827 paper, Fourier stated, "The establishment and progress of human societies, the action of natural forces, can notably change, and in vast regions, the state of the surface, the distribution of water and the great movements of the air. Such effects are able to make to vary, in the course of many centuries, the average degree of heat because the analytic expressions contain coefficients relating to the state of the surface and which greatly influence the temperature." [12] Fourier's work built on previous discoveries: in 1681 Edme Mariotte noted that glass, though transparent to sunlight, obstructs radiant heat. [13] [14] Around 1774 Horace Bénédict de Saussure showed that non-luminous warm objects emit infrared heat, and used a glass-topped insulated box to trap and measure heat from sunlight. [15] [16]
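Fourier's insight, that an atmosphere opaque to infrared raises the surface temperature, can be illustrated with a modern zero-dimensional energy-balance estimate. This is a sketch using standard modern values for the solar constant and albedo, not Fourier's own calculation:

```python
# Effective (no-greenhouse) temperature of Earth from radiative balance:
# absorbed sunlight = emitted infrared  =>  S(1 - a)/4 = sigma * T^4
S = 1361.0        # solar constant, W/m^2
albedo = 0.30     # fraction of sunlight reflected back to space
sigma = 5.670e-8  # Stefan-Boltzmann constant, W/m^2/K^4

T_effective = ((S * (1 - albedo)) / (4 * sigma)) ** 0.25
print(round(T_effective))  # about 255 K

# The observed mean surface temperature is roughly 288 K; the ~33 K gap
# is the greenhouse warming Fourier first reasoned must exist
print(round(288 - T_effective))  # about 33 K
```

The bare balance predicts a frozen planet; the difference between that prediction and the observed surface temperature is the effect of the infrared-absorbing atmosphere Fourier described.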

The physicist Claude Pouillet proposed in 1838 that water vapor and carbon dioxide might trap infrared and warm the atmosphere, but there was still no experimental evidence of these gases absorbing heat from thermal radiation. [17]

The warming effect of electromagnetic radiation on different gases was examined in 1856 by Eunice Newton Foote, who described her experiments using glass tubes exposed to sunlight. The warming effect of the sun was greater for compressed air than for an evacuated tube, and greater for moist air than dry air. "Thirdly, the highest effect of the sun's rays I have found to be in carbonic acid gas" (carbon dioxide). She continued: "An atmosphere of that gas would give to our earth a high temperature and if, as some suppose, at one period of its history, the air had mixed with it a larger proportion than at present, an increased temperature from its own action, as well as from an increased weight, must have necessarily resulted." Her work was presented by Prof. Joseph Henry at the American Association for the Advancement of Science meeting in August 1856 and described in a brief note written by the then-journalist David Ames Wells; her paper was published later that year in the American Journal of Science and Arts. [18] [19] [20] [21]

John Tyndall took Fourier's work one step further in 1859 when he investigated the absorption of infrared radiation in different gases. He found that water vapor, hydrocarbons like methane (CH4), and carbon dioxide (CO2) strongly block the radiation. [22] [23]

Some scientists suggested that ice ages and other great climate changes were due to changes in the amount of gases emitted in volcanism. But that was only one of many possible causes. Another obvious possibility was solar variation. Shifts in ocean currents also might explain many climate changes. For changes over millions of years, the raising and lowering of mountain ranges would change patterns of both winds and ocean currents. Or perhaps the climate of a continent had not changed at all, but it had grown warmer or cooler because of polar wander (the North Pole shifting to where the Equator had been or the like). There were dozens of theories.

For example, in the mid-19th century, James Croll published calculations of how the gravitational pulls of the Sun, Moon, and planets subtly affect the Earth's motion and orientation. The inclination of the Earth's axis and the shape of its orbit around the Sun oscillate gently in cycles lasting tens of thousands of years. During some periods the Northern Hemisphere would get slightly less sunlight during the winter than it would get during other centuries. Snow would accumulate, reflecting sunlight and leading to a self-sustaining ice age. [10] [24] Most scientists, however, found Croll's ideas—and every other theory of climate change—unconvincing.

By the late 1890s, Samuel Pierpont Langley along with Frank W. Very [25] had attempted to determine the surface temperature of the Moon by measuring infrared radiation leaving the Moon and reaching the Earth. [26] The angle of the Moon in the sky when a scientist took a measurement determined how much CO2 and water vapor the Moon's radiation had to pass through to reach the Earth's surface, resulting in weaker measurements when the Moon was low in the sky. This result was unsurprising, given that scientists had known about infrared radiation absorption for decades.

In 1896 Svante Arrhenius used Langley's observations of increased infrared absorption where Moon rays pass through the atmosphere at a low angle, encountering more carbon dioxide (CO2), to estimate an atmospheric cooling effect from a future decrease of CO2. He realized that the cooler atmosphere would hold less water vapor (another greenhouse gas) and calculated the additional cooling effect. He also realized the cooling would increase snow and ice cover at high latitudes, making the planet reflect more sunlight and thus cool down further, as James Croll had hypothesized. Overall, Arrhenius calculated that cutting CO2 in half would suffice to produce an ice age. He further calculated that a doubling of atmospheric CO2 would give a total warming of 5–6 degrees Celsius. [27]
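Arrhenius's two headline numbers are consistent with the logarithmic relation between CO2 concentration and temperature that he derived: the warming per doubling is a constant, so halving the concentration cools the planet by as much as a doubling warms it. A minimal sketch of that relation (the 5.5 °C sensitivity is simply the midpoint of his quoted 5–6 °C, chosen here for illustration; modern estimates are closer to 3 °C):

```python
import math

def arrhenius_warming(c_ratio, sensitivity_per_doubling=5.5):
    """Temperature change (deg C) for a CO2 concentration ratio
    c_ratio = C/C0 under the logarithmic CO2-temperature relation.
    The 5.5 deg C per doubling is the midpoint of Arrhenius's
    5-6 deg C estimate, used purely for illustration."""
    return sensitivity_per_doubling * math.log2(c_ratio)

print(arrhenius_warming(2.0))   # doubling CO2
print(arrhenius_warming(0.5))   # halving CO2, Arrhenius's ice-age case
```

The symmetry of the two printed values is the point: under a logarithmic response, his ice-age scenario and his warming scenario are the same calculation with the sign flipped.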

Further, Arrhenius' colleague Arvid Högbom, who was quoted at length in Arrhenius' 1896 study On the Influence of Carbonic Acid in the Air upon the Temperature of the Earth, [28] had been attempting to quantify natural sources of CO2 emissions in order to understand the global carbon cycle. Högbom found that estimated carbon production from industrial sources in the 1890s (mainly coal burning) was comparable with the natural sources. [29] Arrhenius saw that this human emission of carbon would eventually lead to warming. However, because of the relatively low rate of CO2 production in 1896, Arrhenius thought the warming would take thousands of years, and he expected it would be beneficial to humanity. [29] [30]

In 1899 Thomas Chrowder Chamberlin developed at length the idea that changes in climate could result from changes in the concentration of atmospheric carbon dioxide. [31] Chamberlin wrote in his 1899 book, An Attempt to Frame a Working Hypothesis of the Cause of Glacial Periods on an Atmospheric Basis:

  • a. An increase, by causing a larger absorption of the sun's radiant energy, raises the average temperature, while a reduction lowers it. The estimate of Dr. Arrhenius, based upon an elaborate mathematical discussion of the observations of Professor Langley, is that an increase of the carbon dioxide to the amount of two or three times the present content would elevate the average temperature 8° or 9° C. and would bring on a mild climate analogous to that which prevailed in the Middle Tertiary age. On the other hand, a reduction of the quantity of carbon dioxide in the atmosphere to an amount ranging from 55 to 62 per cent of the present content would reduce the average temperature 4° or 5° C., which would bring on a glaciation comparable to that of the Pleistocene period.
  • b. A second effect of increase and decrease in the amount of atmospheric carbon dioxide is the equalization, on the one hand, of surface temperatures, or their differentiation on the other. The temperature of the surface of the earth varies with latitude, altitude, the distribution of land and water, day and night, the seasons, and some other elements that may here be neglected. It is postulated that an increase in the thermal absorption of the atmosphere equalizes the temperature, and tends to eliminate the variations attendant on these contingencies. Conversely, a reduction of thermal atmospheric absorption tends to intensify all of these variations. A secondary effect of intensification of differences of temperature is an increase of atmospheric movements in the effort to restore equilibrium. Increased atmospheric movements, which are necessarily convectional, carry the warmer air to the surface of the atmosphere, and facilitate the discharge of the heat and thus intensify the primary effect. [..]

The term "greenhouse effect" for this warming was introduced by John Henry Poynting in 1909, in a commentary discussing the effect of the atmosphere on the temperature of the Earth and Mars. [33]

Arrhenius's calculations were disputed and subsumed into a larger debate over whether atmospheric changes had caused the ice ages. Experimental attempts to measure infrared absorption in the laboratory seemed to show that little difference resulted from increasing CO2 levels, and also found significant overlap between absorption by CO2 and absorption by water vapor, all of which suggested that increasing carbon dioxide emissions would have little climatic effect. These early experiments were later found to be insufficiently accurate, given the instrumentation of the time. Many scientists also thought that the oceans would quickly absorb any excess carbon dioxide. [29]

Other theories of the causes of climate change fared no better. The principal advances were in observational paleoclimatology, as scientists in various fields of geology worked out methods to reveal ancient climates. Wilmot H. Bradley found that annual varves of clay laid down in lake beds showed climate cycles. Andrew Ellicott Douglass saw strong indications of climate change in tree rings. Noting that the rings were thinner in dry years, he reported climate effects from solar variations, particularly in connection with the 17th-century dearth of sunspots (the Maunder Minimum) noticed previously by William Herschel and others. Other scientists, however, found good reason to doubt that tree rings could reveal anything beyond random regional variations. The value of tree rings for climate study was not solidly established until the 1960s. [34] [35]

Through the 1930s the most persistent advocate of a solar-climate connection was astrophysicist Charles Greeley Abbot. By the early 1920s, he had concluded that the solar "constant" was misnamed: his observations showed large variations, which he connected with sunspots passing across the face of the Sun. He and a few others pursued the topic into the 1960s, convinced that sunspot variations were a main cause of climate change. Other scientists were skeptical. [34] [35] Nevertheless, attempts to connect the solar cycle with climate cycles were popular in the 1920s and 1930s. Respected scientists announced correlations that they insisted were reliable enough to make predictions. Sooner or later, every prediction failed, and the subject fell into disrepute. [36]

Meanwhile, Milutin Milankovitch, building on James Croll's theory, improved the tedious calculations of the varying distances and angles of the Sun's radiation as the Sun and Moon gradually perturbed the Earth's orbit. Some observations of varves (layers seen in the mud covering the bottom of lakes) matched the prediction of a Milankovitch cycle lasting about 21,000 years. However, most geologists dismissed the astronomical theory, for they could not fit Milankovitch's timing to the accepted sequence, which had only four ice ages, all of them much longer than 21,000 years. [37]

In 1938 Guy Stewart Callendar attempted to revive Arrhenius's greenhouse-effect theory. Callendar presented evidence that both temperature and the CO2 level in the atmosphere had been rising over the past half-century, and he argued that newer spectroscopic measurements showed that the gas was effective in absorbing infrared in the atmosphere. Nevertheless, most scientific opinion continued to dispute or ignore the theory. [38]

Better spectrography in the 1950s showed that CO2 and water vapor absorption lines did not overlap completely. Climatologists also realized that little water vapor was present in the upper atmosphere. Both developments showed that the CO2 greenhouse effect would not be overwhelmed by water vapor. [29]

In 1955 Hans Suess's carbon-14 isotope analysis showed that CO2 released from fossil fuels was not immediately absorbed by the ocean. In 1957, a better understanding of ocean chemistry led Roger Revelle to realize that the ocean surface layer had only a limited ability to absorb carbon dioxide; he also predicted the rise in CO2 levels that was later demonstrated by Charles David Keeling. [39] By the late 1950s, more scientists were arguing that carbon dioxide emissions could be a problem, with some projecting in 1959 that CO2 would rise 25% by the year 2000, with potentially "radical" effects on climate. [29] At the 1959 centennial of the American oil industry, organized by the American Petroleum Institute and the Columbia Graduate School of Business, Edward Teller said: "It has been calculated that a temperature rise corresponding to a 10 per cent increase in carbon dioxide will be sufficient to melt the icecap and submerge New York. [...] At present the carbon dioxide in the atmosphere has risen by 2 per cent over normal. By 1970, it will be perhaps 4 per cent, by 1980, 8 per cent, by 1990, 16 per cent if we keep on with our exponential rise in the use of purely conventional fuels." [40] In 1960 Charles David Keeling demonstrated that the level of CO2 in the atmosphere was in fact rising. Concern mounted year by year along with the rise of the "Keeling Curve" of atmospheric CO2.
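Teller's quoted sequence (2%, 4%, 8%, 16%) amounts to the excess CO2 doubling each decade. The arithmetic behind his figures can be sketched as follows (a back-of-envelope reading of the quote only, not a fit to measured data):

```python
def teller_excess_co2(year):
    """Percent CO2 above "normal" implied by Teller's 1959 figures,
    which double each decade: about 2% around 1960, 4% by 1970,
    8% by 1980, 16% by 1990. Illustrative arithmetic only."""
    return 2.0 * 2 ** ((year - 1960) / 10)

for y in (1960, 1970, 1980, 1990):
    print(y, teller_excess_co2(y))
```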

Another clue to the nature of climate change came in the mid-1960s from analysis of deep-sea cores by Cesare Emiliani and analysis of ancient corals by Wallace Broecker and collaborators. Rather than four long ice ages, they found a large number of shorter ones in a regular sequence. It appeared that the timing of ice ages was set by the small orbital shifts of the Milankovitch cycles. While the matter remained controversial, some began to suggest that the climate system is sensitive to small changes and can readily be flipped from a stable state into a different one. [37]

Scientists meanwhile began using computers to develop more sophisticated versions of Arrhenius's calculations. In 1967, taking advantage of the ability of digital computers to integrate absorption curves numerically, Syukuro Manabe and Richard Wetherald made the first detailed calculation of the greenhouse effect incorporating convection (the "Manabe-Wetherald one-dimensional radiative-convective model"). [41] [42] They found that, in the absence of unknown feedbacks such as changes in clouds, a doubling of carbon dioxide from the current level would result in approximately 2 °C increase in global temperature.
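The Manabe-Wetherald radiative-convective model itself is far beyond a few lines of code, but the zero-dimensional energy balance that such models refine can be sketched: absorbed sunlight must equal outgoing long-wave radiation, and a stronger greenhouse effect behaves like a lower effective emissivity. The numbers below are standard textbook values, not taken from their paper:

```python
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0        # solar constant, W m^-2
ALBEDO = 0.30      # planetary albedo (textbook value)

def surface_temperature(emissivity):
    """Equilibrium temperature (K) where absorbed sunlight,
    S0*(1-albedo)/4, balances outgoing radiation eps*sigma*T^4.
    A stronger greenhouse effect appears as a lower emissivity."""
    absorbed = S0 * (1 - ALBEDO) / 4
    return (absorbed / (emissivity * SIGMA)) ** 0.25

print(surface_temperature(1.0))    # no greenhouse effect: about 255 K
print(surface_temperature(0.612))  # emissivity tuned to give about 288 K
```

The roughly 33 K gap between the two printed temperatures is the greenhouse effect; what Manabe and Wetherald added was a vertical atmosphere with convection, which turns this single balance into a temperature profile.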

By the 1960s, aerosol pollution ("smog") had become a serious local problem in many cities, and some scientists began to consider whether the cooling effect of particulate pollution could affect global temperatures. Scientists were unsure whether the cooling effect of particulate pollution or the warming effect of greenhouse gas emissions would predominate, but regardless, began to suspect that human emissions could be disruptive to climate in the 21st century if not sooner. In his 1968 book The Population Bomb, Paul R. Ehrlich wrote, "the greenhouse effect is being enhanced now by the greatly increased level of carbon dioxide... [this] is being countered by low-level clouds generated by contrails, dust, and other contaminants... At the moment we cannot predict what the overall climatic results will be of our using the atmosphere as a garbage dump." [43]

Efforts to establish a global temperature record, begun in 1938, culminated in 1963, when J. Murray Mitchell presented one of the first up-to-date temperature reconstructions. His study drew on data from over 200 weather stations, collected by the World Weather Records, to calculate latitudinal average temperatures. In his presentation, Mitchell showed that, beginning in 1880, global temperatures increased steadily until 1940. After that, a multi-decade cooling trend emerged. Mitchell's work contributed to the overall acceptance of a possible global cooling trend. [44] [45]

In 1965, the landmark report, "Restoring the Quality of Our Environment" by U.S. President Lyndon B. Johnson’s Science Advisory Committee warned of the harmful effects of fossil fuel emissions:

The part that remains in the atmosphere may have a significant effect on climate; carbon dioxide is nearly transparent to visible light, but it is a strong absorber and back radiator of infrared radiation, particularly in the wave lengths from 12 to 18 microns; consequently, an increase of atmospheric carbon dioxide could act, much like the glass in a greenhouse, to raise the temperature of the lower air. [31]

The committee used the recently available global temperature reconstructions and carbon dioxide data from Charles David Keeling and colleagues to reach their conclusions. They declared the rise of atmospheric carbon dioxide levels to be the direct result of fossil fuel burning. The committee concluded that human activities were sufficiently large to have a significant, global impact beyond the areas in which those activities take place. “Man is unwittingly conducting a vast geophysical experiment,” the committee wrote. [45]

Nobel Prize winner Glenn T. Seaborg, Chairperson of the United States Atomic Energy Commission warned of the climate crisis in 1966: "At the rate we are currently adding carbon dioxide to our atmosphere (six billion tons a year), within the next few decades the heat balance of the atmosphere could be altered enough to produce marked changes in the climate--changes which we might have no means of controlling even if by that time we have made great advances in our programs of weather modification." [46]

If the earth's temperature increases significantly, a number of events might be expected to occur, including the melting of the Antarctic ice cap, a rise in sea levels, warming of the oceans, and an increase in photosynthesis. [...] Revelle makes the point that man is now engaged in a vast geophysical experiment with his environment, the earth. Significant temperature changes are almost certain to occur by the year 2000 and these could bring about climatic changes.

In 1969, NATO became the first international organization proposed as a framework for dealing with climate change. The plan was to establish a hub for the organization's research and initiatives in the civil area, dealing with environmental topics [48] such as acid rain and the greenhouse effect. The suggestion by US President Richard Nixon was not very successful with the administration of German Chancellor Kurt Georg Kiesinger, but the topics and the preparatory work done on the NATO proposal by the German authorities gained international momentum (see e.g. the 1972 Stockholm United Nations Conference on the Human Environment), as the government of Willy Brandt began to apply them in the civil sphere instead. [48]

Also in 1969, Mikhail Budyko published a theory on the ice–albedo feedback, a foundational element of what is today known as Arctic amplification. [49] The same year a similar model was published by William D. Sellers. [50] Both studies attracted significant attention, since they hinted at the possibility for a runaway positive feedback within the global climate system. [51]
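The ice-albedo feedback that Budyko and Sellers modeled can be illustrated with a toy energy-balance calculation in which albedo depends on temperature: below one threshold the planet is ice-covered and reflective, above another it is ice-free and dark. The thresholds, albedos, and effective emissivity below are invented for illustration rather than taken from either paper; the point is that two different stable climate states coexist, which is why the feedback hinted at the possibility of a runaway change:

```python
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0        # solar constant, W m^-2
EMISSIVITY = 0.61  # effective emissivity standing in for the greenhouse effect

def albedo(temp_k):
    """Toy temperature-dependent albedo: ice-covered below 255 K,
    ice-free above 285 K, linear ramp in between (invented numbers)."""
    if temp_k <= 255.0:
        return 0.6
    if temp_k >= 285.0:
        return 0.3
    return 0.6 - 0.3 * (temp_k - 255.0) / 30.0

def equilibrium(temp_k, steps=500):
    """Relax toward radiative balance, re-evaluating the albedo at the
    current temperature on every step so the feedback can operate."""
    for _ in range(steps):
        absorbed = S0 * (1 - albedo(temp_k)) / 4
        target = (absorbed / (EMISSIVITY * SIGMA)) ** 0.25
        temp_k += 0.5 * (target - temp_k)
    return temp_k

print(equilibrium(230.0))  # cold start: settles in the icy state (~250 K)
print(equilibrium(290.0))  # warm start: settles in the warm state (~288 K)
```

Because the two starting temperatures land in different equilibria, a large enough perturbation can flip the system from one state to the other, which is the positive-feedback behavior the two 1969 studies drew attention to.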

In the early 1970s, evidence that aerosols were increasing worldwide and that the global temperature series showed cooling encouraged Reid Bryson and some others to warn of the possibility of severe cooling. The questions and concerns put forth by Bryson and others launched a new wave of research into the factors behind such global cooling. [45] Meanwhile, the new evidence that the timing of ice ages was set by predictable orbital cycles suggested that the climate would gradually cool, over thousands of years. Several scientific panels from this period concluded that more research was needed to determine whether warming or cooling was likely, indicating that the trend in the scientific literature had not yet become a consensus. [52] [53] [54] For the century ahead, however, a survey of the scientific literature from 1965 to 1979 found 7 articles predicting cooling and 44 predicting warming (many other articles on climate made no prediction); the warming articles were cited much more often in the subsequent scientific literature. [45] Research into warming and greenhouse gases thus held the greater emphasis: with nearly six times more studies predicting warming than cooling, scientists' concern lay largely with warming as they turned their attention toward the greenhouse effect. [45]

John Sawyer published the study Man-made Carbon Dioxide and the “Greenhouse” Effect in 1972. [55] He summarized the state of the science at the time: the anthropogenic attribution of the carbon dioxide greenhouse gas, its distribution, and its exponential rise, findings which still hold today. Additionally, he accurately predicted the rate of global warming for the period between 1972 and 2000. [56] [57]

The increase of 25% CO2 expected by the end of the century therefore corresponds to an increase of 0.6°C in the world temperature – an amount somewhat greater than the climatic variation of recent centuries. – John Sawyer, 1972
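Sawyer's figure can be checked against the logarithmic CO2-temperature relation: a 25% rise is log2(1.25) ≈ 0.32 of a doubling, so a climate sensitivity of about 2 °C per doubling (an assumed value, in line with the model results of the early 1970s) reproduces his number:

```python
import math

def warming_for_co2_rise(fraction, sensitivity_per_doubling=2.0):
    """Warming (deg C) for a fractional CO2 rise under the logarithmic
    CO2-temperature relation; the 2 deg C per doubling is an assumed
    sensitivity, not a value quoted by Sawyer."""
    return sensitivity_per_doubling * math.log2(1 + fraction)

print(round(warming_for_co2_rise(0.25), 2))  # 0.64, close to Sawyer's 0.6
```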

The first satellite records compiled in the early 1970s showed snow and ice cover over the Northern Hemisphere to be increasing, prompting further scrutiny into the possibility of global cooling. [45] J. Murray Mitchell updated his global temperature reconstruction in 1972, which continued to show cooling. [45] [58] However, scientists determined that the cooling observed by Mitchell was not a global phenomenon. Global averages were changing, in large part due to unusually severe winters experienced in Asia and some parts of North America in 1972 and 1973, but these changes were mostly confined to the Northern Hemisphere; in the Southern Hemisphere, the opposite trend was observed. The severe winters, however, pushed the issue of global cooling into the public eye. [45]

The mainstream news media at the time exaggerated the warnings of the minority who expected imminent cooling. For example, in 1975, Newsweek magazine published a story titled “The Cooling World” that warned of "ominous signs that the Earth's weather patterns have begun to change." [59] The article drew on studies documenting the increasing snow and ice in regions of the Northern Hemisphere and concerns and claims by Reid Bryson that global cooling by aerosols would dominate carbon dioxide warming. [45] The article continued by stating that evidence of global cooling was so strong that meteorologists were having "a hard time keeping up with it." [59] On 23 October 2006, Newsweek issued an update stating that it had been "spectacularly wrong about the near-term future". [60] Nevertheless, this article and others like it had long-lasting effects on public perception of climate science. [45]

Such media coverage heralding the coming of a new ice age led to the belief that this was the consensus among scientists, even though it was not reflected in the scientific literature. As it became apparent that scientific opinion in fact favored global warming, the public began to express doubt over how trustworthy the science was. [45] The argument that scientists were wrong about global cooling, and therefore may be wrong about global warming, has been called the “Ice Age Fallacy” by TIME author Bryan Walsh. [61]

In the first two "Reports for the Club of Rome", in 1972 [62] and 1974, [63] the anthropogenic climate changes caused by the CO2 increase as well as by waste heat were mentioned. About the latter, John Holdren wrote in a study [64] cited in the first report, “… that global thermal pollution is hardly our most immediate environmental threat. It could prove to be the most inexorable, however, if we are fortunate enough to evade all the rest.” Simple global-scale estimates [65] that have recently been updated [66] and confirmed by more refined model calculations [67] [68] show noticeable contributions from waste heat to global warming after the year 2100, if its growth rates are not strongly reduced (below the average 2% per annum that has occurred since 1973).
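The scale of the waste-heat argument can be sketched with rough numbers: global primary energy use of about 19 TW (an assumed present-day figure) spread over the Earth's surface is only about 0.04 W/m², tiny next to greenhouse forcing, but sustained growth near 2% per annum compounds it to a climatically noticeable level only well after 2100:

```python
import math

EARTH_AREA_M2 = 5.1e14   # Earth's surface area, m^2
POWER_TW = 19.0          # rough global primary energy use, TW (assumed)
GROWTH = 0.02            # the ~2% per annum growth rate cited in the text

# Present-day waste-heat flux and the years of compound growth needed
# to reach 1 W/m^2, a level comparable to a sizable climate forcing.
flux = POWER_TW * 1e12 / EARTH_AREA_M2
years_to_1wm2 = math.log(1.0 / flux) / math.log(1 + GROWTH)
print(flux, 2020 + years_to_1wm2)  # crosses 1 W/m^2 well after 2100
```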

Evidence for warming accumulated. By 1975, Manabe and Wetherald had developed a three-dimensional global climate model that gave a roughly accurate representation of the current climate. Doubling CO2 in the model's atmosphere gave a roughly 2 °C rise in global temperature. [69] Several other kinds of computer models gave similar results: it was impossible to make a model that gave something resembling the actual climate and not have the temperature rise when the CO2 concentration was increased.

In a separate development, an analysis of deep-sea cores published in 1976 by Nicholas Shackleton and colleagues showed that the dominating influence on ice age timing came from a 100,000-year Milankovitch orbital change. This was unexpected, since the change in sunlight in that cycle was slight. The result emphasized that the climate system is driven by feedbacks, and thus is strongly susceptible to small changes in conditions. [10]
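The kind of inference in Shackleton's study can be sketched with a synthetic record: build a series dominated by a 100,000-year cycle plus a weaker precession cycle and noise, and recover the pacing period from its spectrum (synthetic data only; the real records are oxygen-isotope measurements from deep-sea cores):

```python
import numpy as np

# Synthetic stand-in for a deep-sea isotope record: a strong 100 kyr
# cycle, a weaker 23 kyr precession cycle, and noise (illustrative).
rng = np.random.default_rng(0)
t = np.arange(0, 800_000, 2_000)   # 800 kyr sampled every 2 kyr
signal = (1.0 * np.sin(2 * np.pi * t / 100_000)
          + 0.3 * np.sin(2 * np.pi * t / 23_000)
          + 0.2 * rng.standard_normal(t.size))

# Periodogram: the dominant spectral peak marks the pacing cycle,
# the same reasoning applied to the real cores.
spectrum = np.abs(np.fft.rfft(signal - signal.mean())) ** 2
freqs = np.fft.rfftfreq(t.size, d=2_000)   # cycles per year
dominant_period = 1 / freqs[np.argmax(spectrum[1:]) + 1]
print(dominant_period)                     # ~100,000 years
```

The surprise in 1976 was not the method but the result: the 100,000-year peak dominates even though the corresponding change in sunlight is slight, which is what pointed to feedbacks amplifying small forcings.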

The 1979 World Climate Conference (12 to 23 February) of the World Meteorological Organization concluded "it appears plausible that an increased amount of carbon dioxide in the atmosphere can contribute to a gradual warming of the lower atmosphere, especially at higher latitudes. It is possible that some effects on a regional and global scale may be detectable before the end of this century and become significant before the middle of the next century." [70]

In July 1979 the United States National Research Council published a report, [71] concluding (in part):

When it is assumed that the CO2 content of the atmosphere is doubled and statistical thermal equilibrium is achieved, the more realistic of the modeling efforts predict a global surface warming of between 2°C and 3.5°C, with greater increases at high latitudes. [...] We have tried but have been unable to find any overlooked or underestimated physical effects that could reduce the currently estimated global warmings due to a doubling of atmospheric CO2 to negligible proportions or reverse them altogether.