# Crash Course Chapter 3: Exponential Growth

For the best viewing experience, watch the video above in high definition (HD) and in expanded-screen mode.

## Transcript

In the Crash Course we will learn a few foundational Key Concepts. None are more important than exponential growth.

Understanding exponential growth will greatly enhance our odds of creating a better future.

Here’s a classic chart displaying exponential growth – a chart pattern that is often called a “hockey stick”.

We are charting an amount of something over time.

The only requirement for a graph to end up looking like this is that the thing being measured grows by some percentage over each increment of time.

The slower the percentage rate of growth, the greater the length of time we’d need to chart in order to visually see this hockey stick shape.

Another thing I want you to take away from this chart is that once an exponential function “turns the corner”, even though the percentage rate of growth might remain constant, and possibly quite low, the actual amounts do not.

They pile up faster and faster.

In this particular case, you are looking at a chart of something that historically grew at less than 1% per year.

It is world population. And because it’s only growing at roughly 1% per year, we need to look at several thousands of years to detect this hockey stick shape.

The green is history and the red is the most recent UN projection of population growth for just the next 42 years.

By now, math-minded folks might be getting a little uncomfortable, because they might feel that I am not presenting this information correctly.

Where mathematicians have been trained to define exponential growth in terms of the rate of change, we are going to focus on the amount of change.

Both are valid; it’s just that one way is easier to express as a formula and the other is easier for most people to intuitively grasp.

Unlike the rate of change, the amount of change is NOT constant; it grows larger and larger with every passing unit of time, and that’s why it’s more important for us to appreciate than the rate.

This is such an important concept that I will dedicate the next chapter to illustrating it.

Also, mathematicians would say that there is no  “turn the corner” stage of an exponential chart because this curve is just an artifact of where we draw the left hand scale.

That is, an exponential chart always looks like a hockey stick at every moment in time as long as we adjust the left axis properly.

But if you know the limits or boundaries of what you are measuring, then you can fix the left axis and the “turn the corner” stage is absolutely real and vitally important.

This is a crucial distinction -- and our future depends on more of us appreciating this.

For example, the total carrying capacity of the earth for humans is thought to be somewhere in this zone, give or take a few billion.

Because of this, the “turn the corner” stage is very real, of immense importance to us, and not an artifact of graphical trickery.

The critical take-away for exponential functions, the one thing I want you to recall, relates to the concept of “speeding up”.

You can think of the key feature of exponential growth either as the AMOUNT that is added growing larger over each additional unit of time.

OR you can think of it as the TIME shrinking between each additional unit of amount added.

Either way, the theme is  “speeding up”.

To illustrate this using population, if we started with 1 million people and set the growth rate to a tiny 1% per year, we’d find that it would take 694 years before we achieved a billion people.

But we’d be at 2 billion people after only 70 more years, while the third billion would require just 41 more years. Then 29 years, then 22, then 18 years to add another, and finally just 15 years for the next billion, bringing us to 7 billion people.

That is, each additional billion people took a shorter and shorter amount of time to achieve.  Here we can clearly see the theme of ‘speeding up’.
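The arithmetic behind those numbers can be reproduced directly. Here is a quick sketch in Python; the 1%-per-year rate and the 1-million starting population are taken from the transcript, and everything else follows from the compound-growth formula:

```python
import math

# Starting from 1 million people growing at 1% per year, how many years
# does each additional billion take?  Solve start*(1+rate)**t = target
# for t:  t = ln(target/start) / ln(1 + rate)
def years_to_reach(start, target, rate=0.01):
    return math.log(target / start) / math.log(1 + rate)

prev = 1e6  # 1 million people
for billions in range(1, 8):
    t = years_to_reach(prev, billions * 1e9)
    print(f"billion #{billions}: about {t:.0f} more years")
    prev = billions * 1e9
# prints roughly 694, 70, 41, 29, 22, 18, 15 -- each billion arrives faster
```

The shrinking gaps between billions are exactly the "speeding up" theme: a constant 1% rate, but ever-larger amounts added per unit of time.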

This next chart is of global consumption of oil, perhaps the most important resource of them all, which has been growing at the much faster rate of nearly 3% per year.

We can easily detect the ‘hockey-stick’ shape over the course of the past one hundred and fifty years since we started powering our economy with petroleum.

And here too we can fix the left axis because we know with reasonable accuracy how much oil the world can maximally produce.  So, again, having  “turned the corner” is extremely relevant and important to us.

And here’s the US money supply, which has been compounding at incredible rates ranging between 5% and 18% per year.

So this chart only needs to be a few decades long to see the ‘hockey stick’ effect.

And here’s world-wide water use, species extinction, fisheries exploited and forest cover lost.

Each one of these is a finite resource as are many other critical resources and quite a few are approaching their limits.

This is the world we live in.

If it seems like the pace of change is speeding up, well, that’s because it is.

You happen to live at a time when humans will finally have to confront the fact that our exponential money system and resource consumption will encounter hard, physical limits.

And behind all of this, driving every bit of every graph, is the number of people on the surface of the planet.

Which continues to increase – to ‘speed up’ – exponentially.

Taken one at a time, each of these charts should command the full attention of every earnest person on the face of the planet.

But we need to understand that they are, in fact, all related and connected.  They are all compound graphs and they are being driven by compounding forces.

To try and solve one, you’d need to understand how it relates to the other ones -- as well as to many others not displayed here -- because they all intersect and overlap.

The fact that you live here, at this moment in history -- in the presence of multiple exponential graphs relating to everything from money to population to species extinction -- has powerful implications for your life, and the lives of those who will follow you.

It deserves your very highest attention.

Let’s move on to an example that will help you understand these graphs a little bit better.

Please join me for the next chapter: Compounding is the Problem.

## Join the discussion

HughK
Status: Platinum Member (Offline)
Joined: Mar 6 2012
Posts: 764
Historic World GDP

I just came across some interesting estimates on Historic World GDP per capita, based on data by economic historian Angus Maddison.  Here's a graph of the data from 1400 to the present:

The image source is here, and is based on this bar chart.  The data apparently comes from Maddison's Year 1 - Year 2003 time series, but I could only find his Year 1 - Year 2008 time series, which is probably pretty much the same, and can be found on this page.

Maddison passed away in 2010 and his colleagues at the Groningen Growth and Development Centre and beyond are continuing his work at this site.  The obvious question is how can such data possibly be reliable? While it's very far from perfect, this paper gives a preliminary look at how the data was collected and organized, and how other researchers have contributed to the work.

While I think we all were pretty certain that living standards increased exponentially after the Industrial Revolution, I was happy to see some data showing this in detail.  I'd love to hear what people think of the way the Y axis is laid out.

Also, while I still don't quite understand what Geary-Khamis dollars are, it seems that they are basically an attempt at inflation-adjusted, purchasing-power-parity currency units.

Based on the trend above, we're all definitely headed for WAY more prosperity by 2050.  :)

Cheers,

Hugh

DennisC
Status: Gold Member (Offline)
Joined: Mar 19 2011
Posts: 342
Not Looking Good

I was scanning through the report titled UN World Water Development Report 2015, Water for a Sustainable World.  It would seem a misnomer to have the word "sustainable" in the title, given the projected population increases and related agricultural requirements, among many other issues.  However, "sustainable" is certainly the goal.  The report itself is worth a quick look IMO (the Executive Summary section), as are some of the charts and tables if you are pressed for time.

http://www.unesco.org/new/en/natural-sciences/environment/water/wwap/wwd...

And, the Facts and Figures document is interesting.

Michael_Rudmin
Status: Platinum Member (Offline)
Joined: Jun 25 2014
Posts: 923
I think exponential growth is the wrong model.

I understand that exponential growth is the right model for the requirements of debt service. However, exponential growth is not the right model for economic growth.

Exponential growth only occurs in unbounded situations. As soon as you hit a limitation, growth stops being exponential and shifts to exponential decay, or to sinusoidal dynamic stasis.
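That transition can be seen in a minimal simulation. The sketch below (Python) compares unbounded exponential growth with the bounded logistic equation; the rate, carrying capacity, and starting population are illustrative values, not fitted to any real series:

```python
# Compare unbounded exponential growth with the bounded logistic equation
# dN/dt = r*N*(1 - N/K), integrated with a simple Euler step.
# r, K, and n0 below are illustrative, not fitted values.
def simulate(r=0.02, K=10e9, n0=1e9, years=400, dt=1.0):
    exp_n, log_n = n0, n0
    exp_path, log_path = [], []
    for _ in range(int(years / dt)):
        exp_n += r * exp_n * dt                    # unbounded: keeps compounding
        log_n += r * log_n * (1 - log_n / K) * dt  # bounded: slows near K
        exp_path.append(exp_n)
        log_path.append(log_n)
    return exp_path, log_path

exp_path, log_path = simulate()
print(f"exponential after 400 years: {exp_path[-1]:.2e}")  # far past any limit
print(f"logistic after 400 years:    {log_path[-1]:.2e}")  # levels off near K
```

The two curves are indistinguishable while the population is far below the carrying capacity; the difference only appears as the limit is approached.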

This is all very complicated. But there was an algorithm developed to make this easy, and it was developed specifically for the purpose of analyzing population dynamics.

The algorithm is called the Parker-Sochacki solution to the Picard iteration.

Michael_Rudmin
Status: Platinum Member (Offline)
Joined: Jun 25 2014
Posts: 923
Chris, your model is FAILING. Logistic, not Exponential

Chris,

I really think that Peak Prosperity had the right idea, but your model is failing.

What do I mean?  You posited that we'll have major problems due to exponential growth models.  You then acknowledge that it is impossible for them to continue.  Then you base your predictions off the exponential model, and assume that things will be close.

They aren't.  The correct model is the logistic equation, and a complicated version of that, to boot.

But when what happens doesn't match your expectations -- and you acknowledge this admirably (for example, your admission that deflation took out the support for gold) -- the temptation is huge to credit various conspiracies.  You might be right.  You might be wrong.

But you have no idea where you are right and wrong, so your model ends up being worse than useless.  It misleads you at the critical moments.

It also distracts you with more and more energy being poured into the conspiracy theory energy sink.  In a way, I might contend that you have fallen into your own version of groupthink.

CONSIDER:  YOU COULD BE RIGHT, AND IT COULD STILL CREATE A DEADLY DISTRACTION.

I think we have some really good programmers here:  Dave Fairtex, for example.  I could lead us in developing better mathematical models of the actual logistic equations:  I have enough understanding of the Parker Sochacki Solution to the Picard iteration.  I think we could start refining our models, and as we see that one model fails, we could look for what would correctly predict the failure, and then model that.

The PSP can really handle very complicated population dynamics:  that's what it was created for.

I think that if you want to overcome this major issue, you should look in that direction.  If you *don't* want to, I think the site is still useful for its organics section; but I'm likely to start ignoring most of it.

I think we need to start manufacturing models, and we need to start it as soon as possible.  If there is a logistic equation, the logistics are likely to be fatal for a large number of people.

Thank you,

- Michael Rudmin

Time2help
Status: Diamond Member (Offline)
Joined: Jun 9 2011
Posts: 2890
The Thing
Michael_Rudmin wrote:

You posited that we'll have major problems due to exponential growth models.  You then acknowledge that it is impossible for them to continue.  Then you base your predictions off the exponential model, and assume that things will be close.

They aren't.  The correct model is the logistic equation, and a complicated version of that, to boot.

Cogitate on this for a while, puny humaaan.

Michael_Rudmin
Status: Platinum Member (Offline)
Joined: Jun 25 2014
Posts: 923
Yes, T2H: that sketch is more the right idea, but only a guess.

That sketch was probably an early version of the logistic equation, and it's reference enough... but it isn't a model.  It's an idea.  We *might* have the capability of producing a model, and if our model predicts things well enough, we might be able to get more people involved in improving it.

Point being, I think it's absolutely imperative to try.  Because if we don't try, then -- yes, the resilience idea is good, but it doesn't take us out of the way of all kinds of juggernauts.

davefairtex
Status: Diamond Member (Offline)
Joined: Sep 3 2008
Posts: 5692
logistics, regressions, exponentials

MR-

I've been experimenting with TensorFlow - well, more than experimenting - and it's a pretty powerful tool for coming up with a solution for something like what you are talking about.  It is very configurable - probably too much so for most people - and you really need to be a programmer to make it do its thing.

A less configurable but still quite effective tool is Vowpal Wabbit.  VW has a built-in single-layer perceptron learning model which, combined with the logistic loss function and presented with enough relevant data, might come up with a solution.

Learning curve is a bit steep but if you tell me what the inputs are and what we're trying to solve for, I could cook something up pretty quickly.  And so that's my question.  What are the inputs, and what are we trying to predict?

Often that's the main problem with these sorts of things.  What is the question we are actually asking?  And we have to ask it in the form of a timeseries.  Are we projecting population?  Real earnings?

Phrasing the question so that VW can actually comprehend it is also an issue, but I can take care of that bit.
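For readers unfamiliar with the model Dave names: a single-layer perceptron trained with the logistic loss reduces to plain logistic regression. The from-scratch sketch below is NOT Vowpal Wabbit itself, just the learning rule it implements; the data is synthetic (label = 1 when the feature sum is positive), purely to show the mechanics:

```python
import math, random

# Logistic regression via stochastic gradient descent -- the learning
# rule behind a single-layer perceptron with logistic loss.
random.seed(0)
data = []
for _ in range(500):
    x = [random.uniform(-1, 1) for _ in range(3)]
    data.append((x, 1.0 if sum(x) > 0 else 0.0))  # synthetic, separable labels

w, b, lr = [0.0, 0.0, 0.0], 0.0, 0.5

def predict(x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid

for epoch in range(50):
    for x, y in data:
        g = predict(x) - y                              # logistic-loss gradient
        w = [wi - lr * g * xi for wi, xi in zip(w, x)]  # SGD weight update
        b -= lr * g

accuracy = sum((predict(x) > 0.5) == (y == 1.0) for x, y in data) / len(data)
print(f"training accuracy: {accuracy:.2%}")
```

On real data the hard part is exactly what Dave says: choosing the inputs and phrasing the target as a timeseries, not the learning rule itself.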

Michael_Rudmin
Status: Platinum Member (Offline)
Joined: Jun 25 2014
Posts: 923
What I'm looking at is more complete.

Dave, what I'm looking at is a lot more complete than that.  First, the prediction should really be something we understand, not a mystery prediction.  There are certain areas where each mathematical algorithm fails.  We should be able to recognize when our prediction is nonsense and invalidate its results.  You can only do that if you know your method.  More than that, it involves automating the tracking and prediction of everything, based on everything else.

That sounds very complicated, and it is, but it can be broken down into bite sized pieces:

1.  for any data product, we have to model what *is*.  Maybe we're talking automobile sales.  We find out where we can get data and download it, preferably at will.
•  Automation program #1:  be able to point at a website, download the data (including previous data), confirm that old data has not changed and thus new data is valid, interpret the data, and then encode it to the hard drive.  This will have to be tailored to the website, but a lot of times the solution for one product can be the solution for another.
•  Automation program #2:  To break the data down by DFT (Discrete Fourier Transform) into frequency data.  Then, using the Taylor series for sine and cosine, transform that into a Taylor series.  Finally, compare the results of the Taylor series to the initial data for range of validity.  Tag the Taylor series with its units and its validity range (including period of validity).  Store it in a standardized form.
2. for each item we want to predict,
• we view it in human terms of what it depends on.
• We then come up with a system of equations and differential equations that we believe *should* predict the behavior of this item from other items.  We get this information with inside expert opinion.  I, for example, can probably describe well for you what goes into the birth/death model of jobs in prestressed concrete.
•  We find and encode the input data and the output data (see section 1 above).
• We use the system of equations to create a Parker-Sochacki solution to the Picard iteration (PSP), which generates a Taylor series.  (This isn't quite correct:  the application of the equations to existing data is simpler than a PSP, but if you understand the PSP you will understand how to generate the output Taylor series from the input Taylor series.)   Ref:
arxiv.org/pdf/1007.1677?origin=publication_detail
• Automation program #3:  To generate a PSP set from a set of equations.
• We compare the results of the PSP to the data it is modelling.  We find variances, and then seek for expert opinion on why the variances occurred (in other words, what are we missing).  We then use that to upgrade our equation set.  If it explains the model better, we have done well.  If it doesn't, then we need to again look at why.
3. We start by attempting to predict the things we want to predict.  As we need to improve the model, we add more inputs and outputs to it.  The more valid our model, the better our future predictions will be.  This is why I say we eventually want to model everything:  eventually, everything does depend on everything else.  Moreover, initially we are predicting the price of PMs and oil; in the end we want to be predicting the growth of ISIS, all the items on the curve in Time2Help's graph, the rise of significant new problem groups, the destruction of economies, and the motions of war.  But you don't get the last without the first.  You don't get to predicting the complex without first having in hand the simple.
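The DFT step of "Automation program #2" can be sketched in a few lines. The Python below is an illustration only, not Michael's actual pipeline: the input series is synthetic, and the normalization convention and validity check are assumptions. It decomposes a sampled series into frequency components and then reconstructs it as a sum of complex exponentials (i.e. sines and cosines), the form one would then expand term by term into a Taylor series:

```python
import cmath, math

# Decompose a sampled series with a plain O(n^2) DFT, then reconstruct
# it from the frequency coefficients.  Conventions: forward transform
# carries the 1/n factor, inverse does not.
def dft(xs):
    n = len(xs)
    return [sum(xs[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n)) / n
            for k in range(n)]

def reconstruct(coeffs, t):
    n = len(coeffs)
    return sum(c * cmath.exp(2j * math.pi * k * t / n)
               for k, c in enumerate(coeffs)).real

# synthetic "data product": linear trend plus a seasonal wiggle
series = [0.5 * t + 3 * math.sin(2 * math.pi * t / 16) for t in range(64)]
coeffs = dft(series)

# the reconstruction matches at the sample points; validity *between and
# beyond* the samples is the part that must be checked and tagged
err = max(abs(reconstruct(coeffs, t) - series[t]) for t in range(64))
print(f"max reconstruction error at sample points: {err:.2e}")
```

The tagging-with-validity-range step matters because a DFT reconstruction is periodic: extrapolated beyond the sampled window it simply repeats, which is exactly the kind of nonsense prediction the plan says must be recognized and invalidated.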

Luke Moffat
Status: Gold Member (Offline)
Joined: Jan 25 2014
Posts: 384
Root Cause Analysis - Proof of Concept

Michael,

Would you not be better off modeling a known outcome first, to prove validity? For example, the recent decline in the oil price? How far into the current cycle do you go? Starting at the 'shale miracle', or at the mindless extension of credit leading to the inevitable pop as deflation sets in?

'What is it that you're hoping to predict?' is perhaps a loaded question; perhaps a better question to ask is 'what are the relevant inputs?' And if the initial dataset is incomplete, then what hope have you of a reliable outcome?

Will you start with some fundamental philosophy to help you define relevant inputs? Perhaps we take the view that all life follows the path of least resistance - hence the most accessible resources are the first to be depleted. But is that in itself too universal?

Also, oddball scenarios; how are they modeled? Artificial Intelligence, natural disasters. Do they get assigned a probability factor and perhaps a location weighting to determine their influence? Or is this purely an empirical data driven model?

I'm fascinated by the proposal by the way, so please don't take this as criticism.

Judging from item 1 you're going to end up with a lot of data - hence a lot of processing power will be required to sort/categorise.

I like the premise of point 3 - start small and scale up. It'll help limit the initial data. After that is when it goes exponential :)

All the best,

Luke

davefairtex
Status: Diamond Member (Offline)
Joined: Sep 3 2008
Posts: 5692
more complete

MR-

Ah, so you're just looking at modeling the entire world, a la Hari Seldon and psychohistory, so you can predict every element of the future.  Starting small, of course.

I'm really not sure what dataset we could use that would end up predicting the rise of ISIS.  It's not like we have all that many cases to draw from, nor are the cases something we can easily reduce to a timeseries.

So sure, let's start with oil by all means.  I've tried doing exactly this, minus all the applied math that I'm not at all good at (hence my reliance on artificial neural nets as my math club/proxy that eventually beats the problem to death through many, many passes).  I have found that the further you try to look into the future, the crazier the numbers tend to become.

The other problem is, you have to wait to see if you're right.  Write the code, run it, get the output - it is projecting a big drop in oil prices in six months.  Wait six months and see if you're right.  Ooops - no, you are wrong.  Refine the model, try once more.  And then wait.

The edit-compile-run loop ends up being almost ridiculously long.

I started out projecting forward six months.  Model predicted a 25% drop in SPX.  (Clearly, that didn't happen).  Then I drew it back to six weeks.  It saw a 15% drop in SPX.  That didn't happen either.  Market eventually dropped 6%.  Big success?  Not exactly.

Now I'm just trying to see if the current trend is up or down.

Perhaps I just didn't put enough data into my model.  Or maybe I need more Picard Iterations.   :-)
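One standard way to shorten the edit-compile-run loop Dave describes is to backtest: hold out the most recent stretch of history you already have, fit on the rest, "predict" the held-out stretch, and score the model immediately instead of waiting months. A sketch follows; the series is synthetic (an exponential trend with a seasonal wiggle) and the log-linear fit is just one plausible model, chosen to keep the example self-contained:

```python
import math

# Walk-forward backtest: fit on the first 80% of a recorded series,
# then score predictions against the held-out final 20% -- no waiting.
series = [100 * math.exp(0.01 * t) * (1 + 0.05 * math.sin(t / 6))
          for t in range(120)]  # synthetic "history"

split = int(len(series) * 0.8)
train, test = series[:split], series[split:]

# least-squares log-linear trend fit on the training head
n = len(train)
xs = list(range(n))
ys = [math.log(v) for v in train]
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
intercept = ybar - slope * xbar

# score on the held-out "future"
preds = [math.exp(intercept + slope * t) for t in range(split, len(series))]
mape = sum(abs(p - a) / a for p, a in zip(preds, test)) / len(test)
print(f"held-out MAPE: {mape:.1%}")
```

This is essentially Luke's "model a known outcome first" and Michael's "predict the present before the future": the model earns trust on history before anyone bets on its forward projections.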

LesPhelps
Status: Platinum Member (Offline)
Joined: Apr 30 2009
Posts: 811
The goal isn't to impress people with math or complicated models

The goal, as I understand it, is to present a clear picture of the direction our past and current actions are carrying us.

No person with any cognitive ability could believe that exponential growth of anything physical will blast through physical constraints.

However, not a lot of people have a good grasp of what exponential growth is, or how it behaves over time.

To be effective, the explanation needs to be presented in a way that most people can understand with a moderate amount of effort.

If it becomes a complex exercise to entertain a few math and modeling enthusiasts, the bulk of the audience will yawn and go on to some other source for their information.

Michael_Rudmin
Status: Platinum Member (Offline)
Joined: Jun 25 2014
Posts: 923

You are right, Luke. You do your modelling on a rationally based understanding, and then see how well it matches the reality that has been. The thing that I did was to show how to turn the data of previous reality into a Taylor series.

But also, Dave, that is why things need to be collaborative. I can program some, but I never understood programming interfaces with the Web. Yet the web is a huge source of data.

You might have trouble with the mathematical modelling; and maybe we won't ever predict ISIS. But then again, who knows what rational insights may occur to others?

Yes, I have seen you trying to predict the future behavior of oil; my approach would be to start by trying to predict the present behavior of oil.

Also, I am very agnostic about what we should predict. I would say that we should start by trying to predict well known things first; and then incorporate less-understood interactions later. In other words, train ourselves on successes, not failures. Then we try to improve our models.

Also -- Dave -- in a way, it is like Seldon. Yet I disagree with the Seldon premise. I don't think that with infinite information we could have infinite prediction. Rather, I think that with better analysis of information we can have better predictions, but there may well be practical limits from diminishing returns, such that some things will never be predictable (such as hurricanes). So we'll just have to model the responses to those things, and consider their possibility.

davefairtex
Status: Diamond Member (Offline)
Joined: Sep 3 2008
Posts: 5692
I think there is a pony in there

I do believe there is a pony in there somewhere.  With enough (accurate, relevant, complete) data, I do believe you can predict most everything big-picture - law of large numbers and all that - minus asteroids hitting the earth, nuclear war, and so on.

I was just describing the problems I ran into, the primary one being that the farther out you look, the longer you must wait around to see if you are right.  What we all really want is to see forward 5-10 years, but my goodness: we construct the model, and then we must wait 5-10 years to see if we're right.  And if not...boy, it's not like you have a whole lot of bites at the apple.

Les- of course the presentation of whatever the results are must be relevant.  Simple math-based navel-gazing is uninteresting to me, and I suspect that's true for MR as well, regardless of how "mathy" he sounded.  I'll see if I can dig up some of the sample results I came up with, maybe it will be fun for you to see what my vision might look like.

As MR said, "it's probably best to start small" - of course I didn't do that; I started big and then gradually pulled back as some of the more practical issues became more apparent.