Friday, February 23, 2007

I was reading John Stuart Mill up at the sledge lodge house / cafe this afternoon and I started to think over a question Tom asked a while back: if there were this hypothetical, perfect AI, would I let it govern me? And would it be good to let it govern the populace, with everything fair and perfect, etc.? Something along those lines. Basically, though, I just started forming some questions. I don't know much about AI programming, or the concept as a whole, so the questions and ideas I'm asking may seem silly or just badly formed. But I want to think it out, at least within my capabilities, and this is where I do that. In w-o-r-d-s.

So first, and maybe this goes along with some Descartes, since I've been reading some things Tom writes and remembering what we studied about him in the philosophy class I took with Belarmino.

As imperfect beings, we still have the ability to comprehend and create logical statements that don't collapse in on themselves. What I'm getting at is: is it possible, with our imperfections, but with our ability to understand those imperfections, to create an Artificial Intelligence that won't completely ruin mankind? I mean, is it inherently possible? At this point I'm not really doubting mankind's ability to progress. I'm asking whether it's possible given that we are imperfect and we are trying to make something more than us, something we would give all the power to rule us and make our decisions for us.

It came to mind because Mill states that individuality is the highest state a person can achieve. Further, he discusses how there are geniuses--people of high intelligence--who help our society move forward, and it is the power of their individuality that is important. With a machine ruling over us, that ability to move forward would be killed. We become more or less meaningless if we strip away our ability to make decisions, whether good or bad. Not that I'm pushing the idea that we all have free will; I totally agree with the idea that we don't. More that, by giving up that power of our 'selves', we lose a quality of humanity and therefore become more like animals, as Mill would probably put it.

And then: is it integral that we create an emotional response in this AI? Is emotion something that needs to exist in whatever rules the individual? Does it play an important role in the decision-making process? Or is cold logic the smarter move?

Tom was discussing the idea that we are sometimes faced with choices where both options are negative, and one must make a decision even though neither outcome is good--a 'dilemma' is presented, with no good way out. I hope I got the gist of that right. And if that's the case, is it then solved logically, or emotionally? How would this AI deal with such a situation? Will it simply do the greatest good for the greatest number (Utilitarianism), or will it make the decision on some other basis? I'm really curious how the AI would be programmed in that respect. Or would it simply give up, not being able to harm anything or anyone in the first place?
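To make that concrete for myself, here is a rough sketch in Python of what the strictly utilitarian answer might look like. Everything in it is invented--the scenario, the welfare numbers, the choose function are all hypothetical--but it shows the idea: score each bad option by total welfare and pick the least bad one.

# A purely hypothetical utilitarian dilemma-solver.
# Each option carries a welfare change for every person affected.
# "Greatest good for the greatest number" just sums the welfare and
# picks the option with the highest total, even when every total is
# negative.

dilemma = {
    "divert the flood into the farmland": [-10, -10, -10],
    "let the flood hit the village": [-50, -40, -30, -20],
}

def choose(options):
    # Return the option whose summed welfare change is least bad.
    return max(options, key=lambda name: sum(options[name]))

print(choose(dilemma))  # -> divert the flood into the farmland

What strikes me about writing it out is how flat it is: the machine never hesitates, and the 'dilemma' part of the dilemma disappears entirely.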

Then I started thinking: if I were this super powerful, smart AI, would I want to know what emotion is like? These people created me precisely because their emotions (greed, jealousy, hate, anger, love) were creating situations that weren't fair, or were dangerous to humankind. As a machine, would I endeavor to understand that? Would I want emotion? Or would that want even exist in the absence of emotion? Would it have a want for all knowledge? It seems that would be an important part of classifying the machine as intelligent. But maybe not. It needs something more than following a giant tree of information to a right-or-wrong decision, doesn't it? Can't we already do that now?
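For contrast, the 'giant tree of information' version is easy to write down too. Here's another invented Python sketch (the questions and verdicts are made up): every node is a question, every leaf is a verdict, and deciding is just mechanical branching--which is exactly why it doesn't feel like intelligence, and why it's something we can already build.

# A made-up "giant tree of information": nested questions ending in
# verdicts. No understanding, no emotion--just lookups.

tree = {
    "question": "Does the action harm anyone?",
    "yes": {
        "question": "Does it prevent a greater harm?",
        "yes": "permitted",
        "no": "forbidden",
    },
    "no": "permitted",
}

def decide(node, answers):
    # Walk the tree, answering each question, until we reach a leaf.
    while isinstance(node, dict):
        node = node[answers[node["question"]]]
    return node

print(decide(tree, {
    "Does the action harm anyone?": "yes",
    "Does it prevent a greater harm?": "no",
}))  # -> forbidden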

And then, Mill states that democracy is a problem because it ends up with people being ruled by a mob; this mob becomes stagnant, loses any sense of individuality, and then loses any sense of progress because it begins to follow a routine. The whole thing withers and dies, like something that never gets fresh air and slowly poisons itself.

Instead, he says, there are always, in every generation, a select few who are intelligent and wise enough to be leaders of the rest of the people, and these people should rule. But how do we know that these people are being just and fair, and good in everything they do? That they aren't simply seeking terrible things?

But mostly that got me thinking: are these select few that Mill talks about the ones who would be programming the AI? It seems that only people of high intelligence and individuality would be able to create something greater than themselves in the first place.

And then that got me to my final thought: if they are indeed the only ones capable of creating it, and they indeed do, they will have infused in this creature--this 'being'--their very essence, their thoughts and conclusions about what is necessary to properly govern and control society. And if that is the case, hasn't Mill's idea just come to fruition in some strange-ass way?

I thought it was kind of neat. But it feels rushed, and it's probably full of holes. I can't follow the logic too well, and the ideas are scattered. But those are just the questions I had regarding the issue--more like, the possibility of such a thing ever existing, and then what this 'AI' is going to be.

I really enjoy John Stuart Mill, though, and can see the lure in what he writes. Especially reading parts of his autobiography, where he becomes completely depressed and even suicidal when he starts to doubt Utilitarianism and wonders what he is going to believe now, after being so adamant about his father's teachings for so long.

---
more than i, i live on the inside.
and it's beautiful.
but every once in a while, i like to watch the dying autumn light.

1 comment:

Anonymous said...

JS Mill is some awesome writing. His ideas have pretty much come to light in one way or another.

Devo