
Friday, August 12, 2011

Ultraintelligent Machines

"Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever.  Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make." - I.J. Good

Thursday, August 11, 2011

Speculations for the future...

So, the consensus seems to be that we are at a pivotal point in human history (what age hasn't said that about itself?). Even if the claim is hyperbole, it seems fairly clear that the explosive technological expansion the world has experienced (most of the world, that is; one need not dwell at length on the Third World, much of which still lacks running water) has grand implications for the future. The Singularity Institute (http://www.singinst.org) was established to address the issues that will inevitably arise when such a "future" comes about. Consider what sort of situation the Third World would be in once machines become deeply embedded in the global fabric. Will those societies prosper? Will they be consumed? Will all of us be consumed?

In my mind, the "becoming" of the newest world, one where the threshold between human and machine is increasingly blurred, will be gradual, even if fast by any normal standard. Accordingly, the attendant issues, ethical, political, socio-cultural, and otherwise, will also "become" gradually, even if with unprecedented haste. For example, in order to allocate the resources necessary to engage with intelligent machines, entire social and political systems will need to be uprooted, redesigned, and reoriented to account for the new overarching agency of potentially more intelligent and efficient "citizens." How will the existing moralities of the various affected cultures change? Leading up to any Singularity, we would already need a moral foundation upon which the new paradigm could stand. It is my contention that the human virtues and vices that exist now will still be alive, but so diminished by comparison that any Aristotelian human excellence would be a footnote to the machine intelligences. The alternative, of course, is that if we find a way to merge with the machines, or to enhance ourselves exponentially, we might find ourselves on relatively "equal" footing upon that new global foundation.

Then, of course, there's the prospect of "mind uploading" and brain emulation/simulation in real and virtual worlds. Consider the possibility of having more than one Self in the world. Imagine living amongst exact replicas of yourself. First of all, would they truly be exact replicas? The virtue of being human is that experience is ultimately a subjective enterprise, and the capacity for learning depends upon the cohesiveness of fluctuating experience. The "stuff" in each mind might start out the same, but the experiences the replicas encounter would, from the very outset, cause them to diverge significantly. "Clones," in that sense, can't actually exist apart from the physical substrates that make up their hardware. Consider computers as an analogy, or perhaps smartphones. Two phones, same manufacturer, same developer... indeed, the same phone. Yet the "personality" of each phone will differ depending on its owner, because each owner experiences the world differently and thus uses the device differently. My clone might very well be strikingly similar to me. But given experience, a posteriori reasoning, and the epistemic grounding of present phenomena, clones simply cannot exist removed from considerations of physicality.
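
To make the divergence point concrete, here is a deliberately simple sketch. The Copy class and its "memories" are my own toy construction, not anyone's actual model of uploading: two copies share an identical initial state, but the first differing input already makes them distinct, and every subsequent experience widens the gap.

```python
class Copy:
    """A crude stand-in for an uploaded mind: identical initial state,
    with identity defined only by the history of inputs it has absorbed."""

    def __init__(self, seed_memories):
        self.memories = list(seed_memories)   # identical at the moment of copying

    def experience(self, event):
        self.memories.append(event)           # each new input reshapes the state

    def identity(self):
        return tuple(self.memories)           # a toy proxy for "who this copy is"

# Two copies are indistinguishable at the moment of duplication...
original = Copy(["childhood", "university", "first job"])
replica = Copy(["childhood", "university", "first job"])
print(original.identity() == replica.identity())   # True

# ...but they cannot occupy the same body or place, so their input streams
# differ, and their states diverge immediately and permanently.
original.experience("rainy Tuesday at the office")
replica.experience("sunny Tuesday at the beach")
print(original.identity() == replica.identity())   # False
```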

Interesting.


Wednesday, August 10, 2011

Singularity

Recently I've rediscovered the movement now known as the "Singularity" (see http://www.singinst.org for more information). In short, due to the exponential growth of information and information technology, many now consider it quite plausible to imagine a future, within this century, of highly advanced, intelligent technologies interacting with human beings who survive well beyond normal life expectancies through nano-technological solutions to climate change, poverty, and ill health. Some conceptions even envisage virtual reality systems potentially replacing "reality" as we know it, and, most strikingly, the development of artificial intelligences smart enough to create smarter machines, which would in turn create even smarter machines, ad infinitum.

The implications of the "intelligence explosion" (http://intelligenceexplosion.com/primer.html), as it is being called, are vast. First, the moment we create a machine even slightly smarter than the smartest human, that machine will theoretically be able to create an even smarter machine, and so on until the limits of the laws of physics are reached. At that point the networks of intelligent machines would be so vast, by many orders of magnitude, that civilization as we know it would long since have been transformed. Some speculate that humans will merge with the machines; others think we could create "friendly AI" and coexist with it; still others are less optimistic. The machines may very well consume us, Matrix-style, and any conceivable human-biological paradigm would disappear from the universe. Whether the intelligence explosion comes about by emulating the brain, by emulating evolution, or by researchers figuring out how to apply bio-technological and nano-technological enhancements to our own intelligence, these ideas are becoming ever more fascinating to me, for the philosophical implications are more and more confounding.
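
The recursive step in that argument can be captured in a few lines. The sketch below is my own toy geometric-growth model, not anything from the primer linked above; the design_gain and physical_limit parameters are made-up assumptions, and the crux of the real debate is whether the gain per generation stays constant, compounds, or diminishes.

```python
# A back-of-the-envelope sketch of the recursion described above. The numbers
# (design_gain, physical_limit) are illustrative assumptions, not predictions:
# the point is only that compounding self-improvement runs away quickly.

def intelligence_explosion(start=1.0, design_gain=1.1, physical_limit=1e6):
    """Yield (generation, intelligence level) for each successive machine.

    start          -- the first machine, barely above the human baseline of 1.0
    design_gain    -- assumed multiplier each machine achieves over its designer
    physical_limit -- stand-in for whatever the laws of physics ultimately allow
    """
    level, generation = start * design_gain, 1
    while level < physical_limit:
        yield generation, level
        level *= design_gain        # the smarter machine designs a smarter one
        generation += 1

for gen, level in intelligence_explosion():
    if gen % 20 == 0:
        print(f"generation {gen:3d}: {level:10.1f}x the human baseline")
```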

David Chalmers writes: "The basic argument for an intelligence explosion is philosophically interesting in itself, and forces us to think hard about the nature of intelligence and about the mental capacities of artificial machines. The potential consequences of an intelligence explosion force us to think hard about values and morality and about consciousness and personal identity. In effect, the singularity brings up some of the hardest traditional questions in philosophy and raises some new philosophical questions as well."[1]

Philosophy, it should be noted, has a great deal to say about the notion of a future Singularity. That academics could ignore the potential for such a paradigm shift is pathetic, and it shows the shortsightedness of human beings in a culture where success is measured quarterly rather than by the decade. Obviously there are problems with always gazing too far toward the horizon, but the horizon is fixed, and the opportunity to see where the path might lead falls to speculative thinkers engaged in rigorous conceptual analysis, amongst other things. Perhaps most importantly, the philosopher must begin to think seriously about the ethical implications of artificial agency. Our conceptions of Self, of personal identity, of Other minds, of consciousness and subjectivity, and, most broadly, of Mind will need to be examined with rigor, under the kind of serious practical scrutiny philosophers thrive on, since the new agents we deal with will most likely not be as we assume each other to be. Consider the problems we might encounter when an artificial mind can genuinely decide, on moral grounds, to take actions with ethical consequences. The issues will only grow more complex as the phenomenon unfolds.

With this blog, I intend to write, as often as I can, my thoughts on the ideas I come across moving forward. I am newly inspired by minds like Ben Goertzel (http://www.goertzel.org), Ray Kurzweil, James Martin, and David Chalmers, among many others. If you are interested in learning more about the Singularity and its many offshoots, I highly recommend looking these names up.






---
[1] David Chalmers, "The Singularity: A Philosophical Analysis," Journal of Consciousness Studies 17(9–10), 2010.