November 26, 2008

Why the singularity may never arrive

Ray Kurzweil has argued for years that we are nearing a technological singularity. He has tracked technological progress from the slide rule up to the present day and found that technology moves forward at an ever faster pace, and that this acceleration is surprisingly constant. The slide rule could do very few calculations at a very high price. Vacuum tubes were far better and cheaper than slide rules, transistors better and cheaper than vacuum tubes, and so on. This fits well with Moore's law, which states that every two years you can get twice the amount of computing power for the same amount of money. When you plot it all on a graph you get a nice exponential rise.
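That doubling rule can be sketched in a few lines of Python; this is a minimal illustration, where the two-year doubling period is the only figure taken from the text and the function name is mine:

```python
# Compute-per-dollar under a Moore's-law doubling: twice the computing
# power for the same money every two years.
def compute_per_dollar(years_from_now, baseline=1.0, doubling_years=2.0):
    """Relative computing power per dollar after `years_from_now` years."""
    return baseline * 2 ** (years_from_now / doubling_years)

# After 10 years, that is five doublings: 2**5 = 32 times the power per dollar.
print(compute_per_dollar(10))  # 32.0
```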

This all seems reasonable, and very well documented. He then extrapolates the graph and postulates, backed by his extensive research, that eventually the graph becomes so steep that it is effectively vertical. This is the singularity: the point where technology moves at a pace so fast that progress is more or less instantaneous. According to Kurzweil this spells a new paradigm, and we have no way of knowing what will happen after the singularity has arrived. But it will be exciting times. Or scary, if you are so inclined.



If the current trend continues, according to Kurzweil, a $1000 computer will be able to match human intelligence within 15 years. Extrapolating further, he argues that within 41 years a $1000 computer will match the intelligence of the entire human race. While hard to grasp mentally, it seems to make sense when you look at the data: the graph pointing into the future looks believable, and plenty of historical data backs the theory.
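As a rough sanity check on those two milestones: if the whole human race is taken as roughly 10^10 times one brain (an illustrative assumption, not a figure from the post), then covering that factor in the 26 years between the milestones implies a doubling time well under Moore's two years, which matches Kurzweil's view that the doubling period itself keeps shrinking. A hedged sketch:

```python
import math

def implied_doubling_time(growth_factor, years):
    """Doubling time implied by steady exponential growth that multiplies
    capability by `growth_factor` over `years` years."""
    return years / math.log2(growth_factor)

# One brain -> all of humanity, taken here as a ~10**10 factor
# (illustrative assumption), over the 26 years between the milestones:
print(implied_doubling_time(1e10, 41 - 15))  # ~0.78 years per doubling
```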



But there's a problem.

Thomas Malthus put forth his theory of limits to human growth in 1826, and it bears some resemblance to Kurzweil's. He found that historically the human population had grown exponentially, while the global food supply had grown only linearly. When he extrapolated the two graphs into the future he saw disastrous consequences: since food supply would grow ever more slowly compared to the human population, he predicted widespread famine, wars over food, and other miseries. What actually happened is that the population growth rate declined, and it has halved since its peak in 1963.

If you look at many natural growth phenomena, such as human populations, rabbits in Australia, or bacteria in a petri dish, they initially follow the same trend. It starts with a few bacteria that multiply, those bacteria multiply again, and so on. This gives the initial exponential growth that is very common in nature, and that human progress has also followed since the invention of the slide rule. But the bacteria don't grow out of the petri dish to consume the lab, the country, and eventually the entire world. Why not? Because they need resources to keep growing. When the resources start to run out, the growth stops being exponential and the curve flattens out like an S.



The same thing will happen with technology: eventually we will run into insurmountable barriers to growth, and progress will stabilise at that level. It's just a question of what the barriers are - the food for progress, so to speak.

The real question is what these insurmountable barriers are, and when we will run into them.

If you made it all the way to the end, you must have found it interesting.

Maybe you would want to subscribe to this blog.

11 Comments:

Blogger Andrew Mayne said...

I'm a little skeptical of the overall Singularity hypothesis, but I think a better criticism (software complexity, biomechanical limitations, etc.) can be made than the fact that his graph kind of looks like Malthus's graph.

I think Kurzweil's concept is actually anti-Malthusian. The reason Malthus was wrong has more to do with the evidence that supports Kurzweil (the rate of the Industrial Revolution, The Green Revolution, Moore's Law, etc.).

Even assuming a stable population and a steady number of engineers and scientists (actual number and not percentage) the rate of progress that supports the idea of the Singularity still holds true.

1:22 AM  
Blogger Rob said...

As Andrew implied, Malthus' theory assumes linear growth in food production, when in fact, due to technological advances in agriculture, the world's food production has grown in a series of paradigm shifts that generalise to almost exponential growth.
This still won't last forever; there are absolute limits on food production, so we will hit a Malthusian catastrophe eventually, unless we hit the singularity first and it renders food irrelevant or some other inexplicable thing happens.
I think the singularity really starts when we build an AI that can design a more competent successor to itself. I expect it to happen in my lifetime.

4:28 AM  
Anonymous Anonymous said...

I think Kurzweil argues that the reason this will happen is that technology is being used to improve the way technology is made. A parallel to the birth-rate idea would be that every time someone had a child, that child would have a shorter gestation period than the parent.

There is also overlap between fields of technology development. A result in physics may help something in biology, which may help biomedical engineering, which may help computer science. A parallel here would be that if two parents were having children, every so often an extra child would pop up out of nowhere.

I'm most interested to see what the field of AI will do. Once we have computers doing very complex tasks for us (things like creativity) then we'll see just how far we can go and how fast. I'm interested to know what limits the universe has placed on how consciousness can exist, and if we can create something to fully take advantage of what is possible for consciousness to achieve.

5:19 AM  
Blogger Wouter said...

The "food" of progress is the fact that progress is limited by physical constraints. At some point, you can't make your chips any smaller because of quantum effects and the limited speed of light etc.

12:42 PM  
Anonymous Hetman said...

The sigmoid function appears again and again in physical phenomena. Population growth. Chemical equilibrium. The increase in the speed of personal travel. The carrying-capacity constraints are rarely obvious before you begin to approach them. Data on the singularity is much more esoteric and difficult to measure. And here's the real trick: how do you measure the computational power of the brain when we've never engineered anything like it?

By comparison, computers are dumb boxes that can do menial arithmetic really fast. If they're ever going to do anything like our brain, we should be well on our way to making them behave like a simpler version... say, the brain of a bee or an ant. Where are those?

The carrying capacity of the singularity seems obvious then. It's our brain; we can't think any faster. And it's not looking like we're anywhere near architecting something to surpass it.

2:01 PM  
Blogger rbanffy said...

The post compares a steady growth in the supply of food to a steady growth in the supply of engineers and new ideas.

One of the core propositions of the singularity ideas is that once we can build a USD 1000 machine as clever as an engineer, this limitation is gone and we can build as many engineers as we would like (and it even takes less time, as we can simply "cp -R" them).

2:03 PM  
Blogger Kyle Lahnakoski said...

I worry there are two limits to the growth of technology:

1) The fossil-fuel-based economy has brought humankind freedom from work. If the decline of these resources is too fast, humans may never achieve a level of technology that can sustain itself on renewables. In that case fewer machines can be run because of limited energy, more bodies (and minds) will be required to compensate for the missing machinery, less technical progress will result from the fewer available minds, and so on, in a positive feedback loop, until we are unable to maintain our current level of technology and fall back into the dark ages. Only this time we will never emerge from those dark ages again, because there are no more fossil fuels to be had. Maybe that is why the universe is so devoid of intelligent life.

2) Absolute human intelligence could be the growth limiter. If no human, or set of humans, is intelligent enough to create an artificial intelligence greater than our own, then we are limited in what software we can make. Software will continue to advance in the artistic sense, but will stagnate in the technical sense. I humbly suggest we may be witnessing this now; web browsing is over 15 years old, and Web 2.0 is simply recreating client-server apps from the 90s (or even 80s). The software is not technically more sophisticated; it is only doing things it could not do before on slower machines. Just like building architects: their trade is not getting technically better, only their choice of materials is.

3:40 PM  
Anonymous frevd said...

I am not an academic, but I see two fundamental flaws in all these theories.

Of course AI is an interesting topic, so interesting that it is inevitable it will be realized in almost no time compared to the age of the universe. I myself have invested a comparably small, but for me significant, amount of time in creating such networks, and IBM is currently aiming high at creating a new generation of neural chips, and also has the required hardware resources to do so (nevertheless, a bottom-up approach is always a dead end, i.e. it will produce anything but the desired effect).

Reducing the problem to graph studies (which is fascinating, but bears the potential that we lose track of reality, given the complexity of the underlying system), we have to admit that there is no such thing as a singularity. Sure, the amount of time X to achieve a higher Y shortens enormously. Still, the flow of time is linear (so don't forget to draw the curve tilted to the right), and it is mainly a problem of scaling that makes it appear vertical. The time interval does not change, but the Y value increases exponentially. There is nothing problematic about that, except that it soon reaches a state where you cannot plot (nor imagine) the graph anymore, the same problem as graphically visualizing the distances of the planets in the solar system (it is not possible to find a scale where a) the real distances are shown, b) the sun would fit on a reasonably large paper, and c) the smaller planets (including the earth) wouldn't shrink to a point of meaningless, invisible size). Nevertheless, the solar system has no problem with not being plottable, so progress hasn't either. Of course it is all about resources. But when trying to extrapolate progress (there is a book collecting all the prognoses about the future that went terribly wrong, most famously the expert estimate from just 50 years ago that there would hardly be a handful of calculating machines worldwide in the year 2000), one has to take into account that extraterrestrial resources might become allocatable too, so there is no logical limit but the resources in the universe, if you like that viral view of the progress of mankind. I won't argue about the quantity in the universe, though; there are too many holes in our present knowledge. So the problem of the singularity competes with solving the mystery of the universe. Maybe computers can help calculate that; the question is, would we understand the math behind it?

Second, there seem to be evolutionary principles at work that manage shrinking resources in a functional way. Life finds a way, so to speak. I think it was Douglas Adams who wrote in his book "Last Chance to See" about a bird on an isolated island that developed some insane procedure of reproduction to keep the population stable. It is said humans have a different way of dealing with that problem; well, we can only hope it is better in terms of humanity (and faster to achieve, in terms of exponential progress) than what natural evolution provides. At least we can think of leaving our "island". If founding extraterrestrial colonies is sci-fi for you, you had better get used to it.

How can that graph ever flatten out? And additionally, is there a possibility of a parabola-like development, so that Y is also able to reverse (by not measuring "technological progress" in flops)? Flattening out would mean an end of development, which can only be realized by a) a satisfaction of the motives of progress, b) an autonomous control of progress (e.g. for reasons only the leading computers understand), or c) the worst, the end of mankind. There have been many movies containing arbitrary theories which might or might not be taken seriously; however, progress will not stop either way.

The most serious and realistic flaw is this: having machines do not only the boring work that can be automated but also the creative arts, the management, and whatever else we can give them (or AI would take) - what are human beings supposed to do?
Undoubtedly, only by accident would we allow ourselves to become obsolete, since as long as our society depends on economics this would not make sense. Today there is still enough to do (program the automation, be creative in arts and science), but what about tomorrow, when all these tasks are done by cognitively better computers? Markets, as long as they are not regulated, will find a way out of that misery. I doubt anyone can give a prognosis of what the final solution could be; this extrapolation is too complex, as it involves too many unknown variables (additionally, we wouldn't take into account such things as evolution of the artificial, would we). If you have ever tried to forecast financial graphs you will know what I mean: there are so many degrees of freedom that it is impossible to find any prognosis better than chance (over time).

So this will surely become interesting to watch. It is funny that one can often see the problem, but never imagine the final result, since that involves applying many solutions apparently dealing with the problem, plus the reactions to these, which are not forecastable as a matter of fact (since knowledge of the forecast could change the premises, creating a self-referential system; also, the target point in time is unknown).
In the above case one can neither restrict the number of problems that can arise, nor find solutions to hypothetical problems, and of course one cannot imagine the time after. In financial analysis, good money management is the crucial thing for getting along.

4:04 PM  
Anonymous frevd said...

In response to Kyle Lahnakoski :

Very good and precise analysis!

There is a third point summing up my comment above: a stagnation of progress can also be the result of not being willing to hand over control (i.e. 'willing' in terms of behaviour that is impractical under the economic system). I mean, as long as we don't, the world is ours, and it makes no sense to keep pushing upwards. But history tells us that such stability is always temporary, as it makes no sense either not to progress.
Is that a logical problem? Not necessarily, if you admit that there are new fields of activity we cannot imagine or value from today's perspective.

4:29 PM  
Anonymous Francis said...

Even if we accept the idea that technological progress has a 'Malthusian' limit, it doesn't necessarily follow that the Singularity may not happen.

You've said yourself, Max, that we don't know where those barriers are, when the 'food for the mind' runs out. If we find that we reach the limit of technological advancement within 15 years, then you're right... the Singularity is effectively dead.

However, what if it's 50 years? 100? 1000? What if we can carry on merrily increasing our computational capacity for the next 5000 years, before we run into the hard limits of the universe?

That will undoubtedly carry us well past the Singularity.

Your argument is based on the assumption that the end of advancement of computers will happen before the Singularity. That's what you have to prove.

9:21 PM  
Anonymous Anonymous said...

While it is true that technology moves forward in irregular (sometimes rapid) spurts, there is no reason to believe that anything resembling Kurzweil's fantasies will ever happen.

The only thing "singular" in this story is Kurzweil's unbounded ego.

2:49 PM  
