I just finished reading the cover article in this week's Time Magazine--an extended look at the notion of the coming "Singularity," that is, the predicted revolution--espoused most famously by Raymond Kurzweil in his 2005 book, The Singularity is Near--that will fundamentally and permanently change humanity. This revolution will, supposedly, be brought on by the emergence of supercomputers that surpass the brainpower of all humans combined--a level of intelligence so profound that it will bring unthinkable changes to everything we know.
Let me say, first of all, that I love science fiction, and that the greatest science fiction writers offer speculations about the future that can sometimes hit the nail very close to the head. I love such speculation. I delight in it, and I'm grateful that creative minds engage in it. But these speculations are just that. What Kurzweil and other "Singularitarians" offer are not speculations but predictions. That is, they think they have reasons to believe that they are describing something that's likely to come true.
What are those reasons, and are they compelling? In thinking about this question, I cannot help but do so in the light of the talk I've been working on for an upcoming panel at this year's meeting of the American Association for the Advancement of Science. The panel topic is this: "If the culture of growth is unsustainable, what needs to change?" The topic is born out of the recognition that the growth of human society--a function of both population and per capita consumption--is exponential, and is crashing up against the limits that our ecosystems can sustain. Kurzweil also speaks about exponential growth, but in his case it's the growth of information technology that is at issue.
One piece I read as I was thinking about my panel topic was an essay by John Michael Greer, "The Onset of Catabolic Collapse"--which was also an exercise in prediction. In Greer's case, however, the prediction was not of a technologically induced revolution, but of the decline and fall of the American Empire. His view is that the current recession is the first step in an ongoing process of America (and the world) coming to grips with exceeding its resource limits. His vision of peasant farmers plowing their fields "in sight of crumbling ruins of our cities" comes at the close of a timetable of fitful collapse that directly maps onto the timetable of the supercomputer revolution posited by Singularitarians.
So which is it? Peasants tilling the soil in the shadow of ruined cities that have long gone dark? Or a world of supercomputers and ageless cyborgs spreading across the universe?
What drives the vision of the Singularitarians is the observed trajectory of technological development, especially information and computer technology, which has taken on an astonishingly consistent exponential growth curve across a range of different parameters--from the number of transistors that can fit on a microchip to the speed of microprocessors. If this trend continues, then given the nature of exponential curves we'll be confronting almost inconceivable rates of advancement over the next decades, changes to dwarf the amazing changes that we have seen over the last five hundred years.
But the ecological sciences that stimulate Greer's grimmer picture also think in terms of exponential growth curves. And ecologists know something about these curves, something that seems to be a pretty consistent truth about them in the natural world: they can't be sustained. At some point, exponential growth culminates in collapse--sometimes in catastrophic collapse--when the growth hits up against the reality of limits. And even when these limits don't produce collapse, they can and do stop the growth. Imagine a one-centimeter lily pad on a 100-meter-diameter pond that doubles in size every day. For a long time it won't seem like much. Then it will suddenly burst across the pond--covering an eighth of it three days before the fateful day, a quarter two days before, half the day before--and then the whole pond. And then? Well, if we assume that lily pads can't grow beyond the limits of the ponds they inhabit, then nothing. It's done. Astonishing growth that people have a hard time fathoming, followed by...stagnation.
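If you want to check that arithmetic for yourself, here is a minimal sketch in Python (my own illustration, using the one-square-centimeter starting patch and 100-meter pond from the example above); it doubles the patch once a day until the pond is covered and prints the last few days of the run:

import math

pond_area_cm2 = math.pi * (50 * 100) ** 2   # 100-meter-diameter pond, area in square centimeters
pad_area_cm2 = 1.0                          # starting patch: one square centimeter

day = 0
history = []
while pad_area_cm2 < pond_area_cm2:
    history.append((day, pad_area_cm2 / pond_area_cm2))
    pad_area_cm2 *= 2                        # unconstrained doubling, once per day
    day += 1

print(f"Pond covered on day {day}")
for d, frac in history[-4:]:
    print(f"  day {d}: {frac:.1%} of the pond covered")

Run it and you get nearly three weeks in which the pad covers less than one percent of the pond, then roughly 11%, 21%, 43%, and 85% on the last four days before the pond is covered on day 27--after which, of course, the curve simply stops.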
Does growth in information technology face inherent limits of this sort? Is there just a point at which we can't fit more transistors on a microchip? I don't know. But even if growth in information technology--unlike pretty much everything else--has no inherent limits, such growth is coming at a time when human civilization is hitting limits all over the place: water resource limits, arable soil limits, energy resource limits, etc., etc. And the collision of human civilization with all of these limits is guaranteed to have an effect on the funnelling of labor and natural resources towards the continuing growth of information technology.
Will we hit the so-called Singularity before catabolic collapse? Will the advent of a new technological epoch usher in miraculous solutions to all our troubles (or hasten our end, as computers decide we're expendable)? Or will the exponential growth in information systems slow, stall, and slide backwards as the rest of our society comes to grips with the impossibility of limitless growth?
I don't know. But given these uncertainties, it seems that the Singularity is more speculation than prediction. Great science fiction, but not much more than that.
Thanks for a fascinating topic. This (catabolism) is similar, if in slower motion, to Diamond's collapse hypothesis. I would have two critiques to offer:
1. "... every attempt to deploy other energy resources to replace a significant amount of fossil fuels has run headfirst into crippling problems of scale."
No, they have run into crippling problems of price, not scale. If coal and oil and natural gas are free for the taking--we only have to pull them out of the ground--they are going to beat other sources that take a little more ingenuity and effort to procure. Wind can scale, solar can scale, and conservation can scale. The problem is our fossil-friendly price structure, especially not pricing the vast harm being done ecologically.
We are also not pricing the prospect of peak oil/gas, which is taking place right now, by some analyses. We should have had a massive insurance policy in place, as we would for many other actuarial certainties like death and disease. But we have instead decided to lollygag along day-to-day.
2. "... a great many jobs will go away ..." Jobs are an orthogonal issue, not related to our energy status at all. We always have work that needs to be done, from picking up litter to taking care of the grandparents, to finding new energy sources. The issue is whether our economic and political system puts a priority on employment, filling the employment gap left by the private sector. If the private sector becomes more efficient and can do all the work it thinks needs doing with 10 or 20% unemployment (as now) that is great.. except for those without work and income. But in truth, there is far more work that needs to be done, and can be done if the state steps up to the Keynesian plate, as it were. In a severely energy-limited economy, prosperity and living standards might decline, but it is public policy that guides how the decline is shared.
Just look at China--they value work highly, since civil unrest would otherwise run them out of office. So they follow the export model of mercantilism, racing to the bottom on cost, while actively preventing their people from benefiting from their work through higher incomes and higher consumption (China sterilizes its dollars and piles them up instead), which would negate their export advantage.
As for the Singularity, I am, like you, divided. While the transistor count may keep going up for a while, I am not sure that the software quality index is going up in commensurate fashion. It has been remarkable to see language translation software come into the mainstream in the last few years--a signal advance in artificial intelligence. But on other fronts, we are still struggling to artificially attain the mental capacity of cockroaches. It is going to be a long slog.
Hi Eric,
As it happens this is close to my field of work and I know something about these matters. I found the Time article full of errors I'd like to comment on. But first let us remove from the table the possibility of humanity collapsing under the weight of its exponential (and indeed cancer-like) growth. This may or may not take place, but the interesting question to ponder is our technological future if we manage to keep our wits together and avoid disaster.
First of all, as Burk notices, what holds back artificial intelligence is not computer power but software smarts (and perhaps computer architecture smarts also). We need better algorithms. Time's idea that we will shortly be able to "reverse-engineer" the brain is as stupid as the idea that the way to build a flying machine is to reverse-engineer the bird. The example of how Deep Blue's massive computer power was instrumental in beating the chess world champion is a red herring because, as it turns out, chess is actually a simple game and is therefore amenable to brute-force attack. Other games, such as the Chinese "go", are not amenable to the same tactic. What's more, I don't think that Moore's law about the increase of the power of computer hardware will hold indefinitely, simply because of the limits of physics. We are already hitting these limits, and that's why modern CPUs rely on including several parallel "cores" (i.e. independent computing processors) instead of one much faster one (which would be much preferable if it were technologically possible). Perhaps we shall discover a fundamentally better technology for building computing machines than today's semiconductors, but then perhaps we won't. And if we don't, then Moore's curve will stop growing as fast. Which is not at all problematic, for, again, even today the challenge is not to create more brute computing power but rather how to use all that power more intelligently.
Having said that, I think there is little doubt that intelligence, being a natural phenomenon, can be mechanized. A philosophical thought experiment will suffice to demonstrate that: Suppose we do reverse-engineer a human brain and simulate it in a computer down to single atoms if need be. That simulation would produce behavior which would be indistinguishable from the actual brain’s actions. Therefore, unless one suspects that there is something non-physical (and therefore of a non-mechanical nature) moving our brain, or unless one suspects that it is not our brain which moves our physical behavior including our intelligent behavior, then the implication is clear: Human level intelligence (including creativity) can be simulated in a computer, and therefore can also be exceeded by a computer. (Incidentally, not everybody agrees on this point; some supersmart people, such as Roger Penrose, think that a digital computer cannot ever equal human intelligence, because of some limitations that any formal system is subject to.)
[continued in the next post]
[continued from above]
Let us assume that super-intelligent computers will be a reality someday. Their construction would have very interesting philosophical/religious but also economic repercussions. On the philosophical/religious front we would have to decide whether such intelligent machines are conscious (they would necessarily appear to be conscious), and whether they would be moral agents with moral rights and duties (they would necessarily appear to be capable of moral reasoning). If we answered these two questions positively then we would have to conclude that intelligent machines are persons (in the philosophical/religious sense). The economic repercussions would be momentous too. Products and services today are expensive because they all require intelligence. If intelligence becomes vanishingly cheap then, it seems to me, the cost of products and services (at least those which do not entail a human presence) would become negligible. The result would be an entirely different kind of economy.
On the other hand, Kurzweil's idea that we would be able to become practically immortal by "migrating" our mind into computer hardware does not, it seems to me, make any sense. One could build a computer which would behave indistinguishably from the way one behaves, but this does not mean that one's consciousness would have migrated into that computer, for one would still be outside of the computer observing it. Now one can visualize a state of affairs in which one's brain is turned bit by bit into more dependable hardware, and in which one would keep one's stream of consciousness throughout that process, but 1) one would have to try it for oneself to see if that's so, and 2) to try this, even if it turned out to be technologically feasible, would make no sense, at least from the point of view of a religious person. For from the point of view of religion such a future would be limiting rather than expanding one's potential.
In any case Kurzweil’s theory about ever-growing intelligence suffers from strong empirical counter-evidence. If his theory were right then the universe should by now be swarming with intelligence, for the probability that we are the only or the first intelligent race in the universe is very low. But the universe is clearly not swarming with intelligence, so there must be something wrong with Kurzweil’s theory. My hope is that the intelligent races out there that manage to survive their technological awakening always discover the wisdom and blessing of a simple and materially self-limiting existence. I like to think that if there are vastly more advanced intelligent races out there they live in pastoral societies and leave the rest of the cosmos alone. That view is at least compatible with the evidence we have of a quiet and natural universe.
Fascinating observations Dianelos. Particularly what you said about migrating one's consciousness into a machine. From the outside looking in, a super duper computer wired exactly like my brain would think the same way I would. But supposing I still existed, would I be aware of what the super computer was experiencing? Presumably not. And (on the assumption that one's consciousness ends upon death) if I died I would not be aware of anything, even though there'd be something that appeared to be thinking and feeling the same way I would be were I still alive. What this means is that consciousness is a very strange phenomenon, with all kinds of metaphysical baggage that makes it unexplainable by science. In fact, I cannot see how consciousness (as opposed to very complex brain behavior that produces external behavior of the kind we associate with consciousness) would even exist if materialism were the way things are. To me this is the clearest rebuttal to materialism, and part of what nudged me in the direction of theism.
Hello Anonymous
Sleep fascinates me in this regard. At least during the non-dreaming phase of sleep, my consciousness disappears. When I wake up it returns, and it/I assume I am the very same person that fell asleep. So, put the person to sleep, and then boot up the computer in the identical state, and it wakes up believing it is me, exactly as I do every morning when I wake up. If the person then dies during this sleeping phase, has the consciousness survived death? Just a thought.
Bernard
Derek Parfit, in his classic book Reasons and Persons, wrestles with some of the personal-identity-over-time issues raised in this discussion. His thought experiments are definitely worth pondering in this regard.
Hi Anonymous,
You write: “To me this is the clearest rebuttal to materialism, and part of what nudged me in the direction of theism.”
I agree that consciousness, the greatest and most certain fact of all, cannot be explained within a materialistic view of reality, and thus cannot be explained in any epistemology based on the physical sciences. Thus one can see not only that materialism is false (actually I understand few philosophers nowadays embrace materialism, including atheist philosophers), but also that any epistemology circumscribed by the scientific method is inadequate.
As for the theism versus naturalism question, I think a good first step is to realize that reality is ultimately either purposeful or not purposeful. If the former then reality is fundamentally of a personal nature, and hence “proto-theistic”. If the latter then reality is fundamentally of a mechanical nature, and hence naturalistic. But a mechanistic (or naturalistic) conception of reality suffers from conceptual problems galore. Therefore I find that naturalism is a conceptual failure, which leaves only the idea that reality is fundamentally of a personal nature as a viable option. While naturalists complain of insufficient evidence for full-fledged theism, they overlook the fact that there is simply too much evidence against naturalism.
Having said that, I think that part of the reason that many people are still under the impression that non-theistic ontologies make sense is that the modern discourse uses language that only makes sense if one assumes naturalism, and thus kind of introduces naturalism’s assumptions in a stealthy manner into people’s minds. For example you speak of the “phenomenon” of consciousness. This is a very common expression; googling “phenomenon of consciousness” will get you over 100,000 hits. And this expression comports with the naturalistic view that consciousness is one more “phenomenon” produced by material systems, albeit a mysterious one. But in fact consciousness is very clearly not a phenomenon. Rather, consciousness is a pre-condition for phenomena to exist. If there were no consciousness in the world, then there would be lots of events, but no phenomena whatsoever. To call consciousness a phenomenon is akin to calling sight a color.
Hi Bernard,
You write: “At least during the non-dreaming phase of sleep, my consciousness disappears.”
Actually we don’t know that. We only know that we never remember having experienced anything in the non-dreaming phase of our sleep.
“So, put the person to sleep, and then boot up the computer in the identical state, and it wakes up believing it is me, exactly as I do every morning when I wake up.”
Forget the computer, which may or may not be conscious in any case. How do you know that when you wake up you are the same subject that went to sleep last night?
I think it’s clear that we pass through life making a great many assumptions. Which may not be a bad thing as long as one is aware of that fact. As far as I am concerned I think there is much less to personal identity than we assume. Which is a lovely thought, indeed one which illuminates Christian ethics.
Hi Dianelos
ReplyDeleteYes, I agree. The notion of a continuous personal identity appears to me to be far less solid than we instinctively assume.
I suspect the experience of drifting in and out of a sleep state does give us some hints as to what happens to consciousness in the non-dreaming phases of sleep. A general anaesthetic offers a more solid example of non-conscious existence, perhaps.
Bernard