Sunday, April 27, 2008

What is it like to be something other than human?

When I was a naïve teen I tried breaking the mental barrier by imagining what it would be like to be dead. I thought I had cracked this enigma when I concluded it was like being in a deep sleep without ever waking up. At the time, I didn’t realize I was engaged in a meaningless endeavor. It doesn’t make sense to imagine the state of being of a non-conscious entity. There is no point in trying to figure out what it is like to be a rock, because rocks aren’t capable of knowing what it is like to be themselves!

In order to transcend my human state of mind I have to imagine what it would be like to be another conscious entity, like a dog or a cat. In my opinion, imagining what it would be like to be a dog is a relatively simple task. As animals ourselves, we know what it is like to have desires, emotions, pleasures, pains, and so on. It is still impossible for us to know exactly what it is like to be a dog, but we have a pretty good idea (most dog owners can tell what their dog is feeling without thinking too hard about it).

I wanted to imagine a state of being that is truly mind-blowing, some state of mind foreign to any conscious creature we know of. The first thing that came to mind was artificial intelligence: an AI with intelligence that far surpasses that of any human. How the hell can we imagine how a super-smart AI would think? If we knew how this AI would think, wouldn’t we be just as smart as it is? I don’t think we can know exactly how it would think, but either way I am going to take a shot at it.

This AI would be able to change its own source code, i.e. it could reprogram its brain whichever way it wants. You might be wondering how you can imagine this on an intuitive level. It would be as if you could fundamentally change the way you think. I know this doesn’t make you any less confused, so I will give an example. If you relocate the trash bin in your room, there will be many instances where you throw your trash at the bin’s old location. Your mind has been conditioned to expect a bin in a certain place, and sometimes you forget it has moved. Your weak brain has kept you from efficiently throwing away your trash (without wasting time on a misfire). An AI wouldn’t have a problem here because it could erase the conditioning and reprogram itself to adjust to the new environment. Our minds are constantly cluttered with impulses that have been conditioned into them. For instance, if I tell you not to think of a white elephant, you’ll think of it. An AI could choose whether its mind should be vulnerable to such impulses. You can imagine how this would help the AI’s problem-solving skills: it would have no biases and no obstacles to attaining new skill sets.
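To make this concrete, here is a toy Python sketch of the difference (entirely my own illustration; the classes and the conditioning numbers are made up, not a model of any real AI):

```python
# Contrasting a conditioned habit with direct self-modification.
# All names and numbers here are invented for illustration.

class ConditionedAgent:
    """Learns the bin's location slowly, like a habit being retrained."""
    def __init__(self, bin_location):
        # Habit strength for each candidate location (crude conditioning model).
        self.habit = {bin_location: 1.0}

    def throw_trash(self):
        # Acts on the strongest habit, even if the world has changed.
        return max(self.habit, key=self.habit.get)

    def observe(self, actual_location):
        # Reinforcement only nudges the habit; old conditioning lingers.
        self.habit[actual_location] = self.habit.get(actual_location, 0.0) + 0.2

class SelfModifyingAgent:
    """Rewrites its own 'program' the moment the environment changes."""
    def __init__(self, bin_location):
        self.bin_location = bin_location

    def throw_trash(self):
        return self.bin_location

    def observe(self, actual_location):
        # No retraining period: the stale entry is simply erased and replaced.
        self.bin_location = actual_location

human = ConditionedAgent("corner")
ai = SelfModifyingAgent("corner")
for agent in (human, ai):
    agent.observe("by the door")      # the bin moves
print(human.throw_trash())            # still "corner": a misfire
print(ai.throw_trash())               # "by the door": instantly adjusted
```

The human agent needs many observations before the new habit outweighs the old one; the self-modifying agent just overwrites the stale entry and never misfires.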

The AI would be able to discard bad or faulty ways of thinking and replace them with better ones. The ‘better’ ways of thinking would be the ones that help the AI solve a problem or reach a goal more efficiently. If you were an AI, you might be able to solve Fermat’s Last Theorem in a matter of minutes. I have made a big assumption here: I assumed that the AI would want to do things. We humans constantly solve problems because we must in order to survive. We are faced with challenges, death threats, scarcity, etc., and that motivates us to problem-solve. The AI would have to be programmed with desires similar to ours in order for it to want to solve the kinds of problems we have. It would be in our interest to program an AI with the same desires as us, because then it would be interested in solving problems that we too care about. Of course, the AI doesn’t have to be programmed to share our desires, but it does have to have some desires. Otherwise it wouldn’t be intelligent, because it wouldn’t do anything!
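Here is a minimal sketch of what I mean by swapping a faulty way of thinking for a better one (again my own toy example; the two strategies are stand-ins, not anything from a real system):

```python
# An agent that discards a slower "way of thinking" once a benchmark
# shows a better one gets the same answers faster.

import timeit

def naive_sum(n):
    # A "faulty" habit of mind: brute-force looping.
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def closed_form_sum(n):
    # A "better" way of thinking: the formula n(n+1)/2.
    return n * (n + 1) // 2

class SelfEditingSolver:
    def __init__(self):
        self.strategy = naive_sum       # current way of thinking

    def consider(self, candidate, n=100_000):
        # Adopt the candidate only if it gets the same answer faster.
        if candidate(n) != self.strategy(n):
            return
        old = timeit.timeit(lambda: self.strategy(n), number=10)
        new = timeit.timeit(lambda: candidate(n), number=10)
        if new < old:
            self.strategy = candidate   # erase the old habit outright

solver = SelfEditingSolver()
solver.consider(closed_form_sum)
print(solver.strategy.__name__)         # closed_form_sum: the faulty habit is gone
```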

In short, being a super-smart AI would be much like being an intentional agent that can far more efficiently get what it wants. More efficient because it wouldn’t deal with the same handicaps our weak minds have, like poor memory, biases, social conditioning, and any other concept you learned about in psychology.

Sunday, April 13, 2008

The Evolution of the Meme

In Richard Dawkins’ book, The Selfish Gene, there is a fascinating yet controversial chapter about memes. A meme, i.e. a cultural unit of information, has the same interesting property a gene has: the ability to replicate. It might seem peculiar to consider a meme a thing, much like how a gene is a thing you can observe. But a meme is a thing, or more specifically, a certain mental state. For example, when you think about the idea of God, a certain mental state is assembled in your mind. This state may not be a localized sector of your brain, but it is nevertheless a specific neural structure that corresponds to your idea of God.

These memes replicate in a much different manner than genes do. A meme replicates through any sort of human communication, like verbal or written language. Clearly, not all memes replicate themselves throughout the meme pool. Similarly, not all genes get passed on to future generations, because they are outcompeted by genes that are more fit. But what makes a meme fit? Fit memes have some special property that leads to their frequent replication and therefore their proliferation in the meme pool. For example, the internet phenomenon of “Rickrolling” has been a successful meme because replication is built into its very nature: pulling it off requires roping in another participant. If Rickrolling didn’t include another participant, I would question whether it could have the same success.
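A toy simulation makes this idea of differential replication concrete (the memes and “catchiness” probabilities below are invented for illustration; this is my sketch, not anything from Dawkins’ text):

```python
# Memes with a higher probability of being passed on come to
# dominate a fixed-size meme pool, just like fitter genes.

import random
random.seed(0)

# catchiness = probability a hearer passes the meme on (made-up numbers)
catchiness = {"rickroll": 0.9, "dry lecture": 0.2, "proverb": 0.5}
pool = list(catchiness) * 10              # 30 memes, 10 of each

for generation in range(20):
    next_pool = []
    for meme in pool:
        next_pool.append(meme)                    # the holder keeps it...
        if random.random() < catchiness[meme]:    # ...and maybe spreads it
            next_pool.append(meme)
    # Keep the pool at a fixed size, like limited human attention.
    pool = random.sample(next_pool, 30)

for meme in catchiness:
    print(meme, pool.count(meme))         # the catchiest meme tends to dominate
```

Run it a few times and the catchiest meme almost always ends up crowding out the others, with no one ever deciding that it should.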

Dawkins’ meme of memes has had me reflecting on all the cultural ideas lurking in my head. My meme portfolio is the product of a long meme evolution. I would assume that my memes differ greatly from the memes present in a randomly chosen individual 500 years ago. However, I don’t think it’s sufficient to simply say that they differ greatly; after all, my memes have had an additional 500 years of evolution under different selection pressures. The evolution of memes, like that of genes, is not random: there is a direction in which these replicators evolve. So are my memes merely more ‘catchy’ than their 500-year-old predecessors? In my opinion, arriving at this false conclusion is the consequence of confusing meme selection pressure with gene selection pressure.

Memes, unlike genes, are not a product of their environment. In the gene world, the environment ultimately decides which genes are selected for; this is not the case for memes. Compare the genes of organisms today with those of 500 years ago, and you would find that the genes code for organisms that are better fit for their respective environments. Do the same comparison with memes and you will not come to the same conclusion. I am sure that our memes of technological ideas and scientific thinking would outcompete the memes of 1508; the memes that explained the world back then would be no match for ours in relative usefulness. Maybe even today’s interpretations of certain religions would outcompete the older interpretations. It could be that intelligent design theory would have been much more convincing in 1508.

This is because our memes are selected by intelligent, goal-seeking agents. Our intelligence allows us to select memes that work in accord with our goals, namely survival, entertainment, etc. This intelligent selection, as opposed to natural selection, means that our present memes are ‘better’ than the earlier ones: better in the sense that they are more useful at doing what we want them to do.
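To put the contrast in code (one more toy sketch of my own; the memes and their scores are invented for illustration):

```python
# Natural selection keeps whatever copies itself best; intelligent
# selection keeps whatever serves the agent's goals.

memes = {
    # name: (catchiness, usefulness toward our goals)
    "germ theory":   (0.3, 0.9),
    "catchy jingle": (0.9, 0.1),
    "crop rotation": (0.4, 0.8),
}

def natural_selection(pool):
    # Blind selection: raw replication rate is all that matters.
    return max(pool, key=lambda m: pool[m][0])

def intelligent_selection(pool):
    # Goal-seeking agents weigh usefulness, not just catchiness.
    return max(pool, key=lambda m: pool[m][1])

print(natural_selection(memes))        # "catchy jingle"
print(intelligent_selection(memes))    # "germ theory"
```

Left to blind replication, the jingle wins; filtered through our goals, germ theory does, which is the sense in which our memes are ‘better’ than their predecessors.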