Post by tkorrovi on Jul 9, 2008 14:15:52 GMT -5
Well, these are my general thoughts. I have no time for any thorough analysis, so I will just write what I think in general, in the hope that it is useful for someone.
The shortest answer is that nature doesn't work the way any conventional AI suggests. We have an ability called imagination: to recognize an image, the brain generates many possibilities and compares what we see against them. But most importantly, what conventional AI, including neural networks, never does is really model objects, let alone processes. For example, such systems cannot model a clock when seeing it, so that they could then model how it works. The neural network comes from a very primitive understanding of the working of the brain, as if neurons were some kind of logic elements. But the brain evidently works differently than the neural network model suggests, so real neurons also likely work differently than artificial neurons. Neurons may even be like computers connected into an efficient network, but they are not exactly that either; they are very adaptive and flexible systems which learn through interaction with their environment.
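As a rough illustration of that generate-and-compare idea, here is a toy sketch. The shapes, the similarity measure, and the 3x3 "images" are all invented for the example; this is not meant as a model of the brain, only of the generate-then-compare pattern.

```python
# Toy "imagination": generate candidate patterns internally, then
# compare each one against the observed input and keep the best match.

def similarity(a, b):
    """Fraction of positions where two equal-length patterns agree."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def recognize(observed, known_shapes):
    """Imagine each known shape and keep the one closest to what we see."""
    best_name, best_score = None, 0.0
    for name, shape in known_shapes.items():
        score = similarity(observed, shape)
        if score > best_score:
            best_name, best_score = name, score
    return best_name, best_score

# Hypothetical 3x3 shapes flattened to strings; X marks a filled cell.
known_shapes = {
    "cross":  "X.X" ".X." "X.X",
    "plus":   ".X." "XXX" ".X.",
    "square": "XXX" "X.X" "XXX",
}

observed = ".X." "XX." ".X."              # a noisy "plus"
print(recognize(observed, known_shapes))  # -> ('plus', 0.88...)
```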
Probabilistic systems, too, can only learn from statistics about how something usually behaves; they don't build any real models of external objects or processes either. For example, I looked at the logs of HAL on the AI forums. That HAL is based on Markov chains. It just learns how you usually answer and then answers the same way. It doesn't model your thinking or the things you talk about, so it has no idea what you are talking about and cannot draw any real conclusions. As a result, it often gives absurd answers, and there is no way to explain even simple things to it.
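To make that concrete, here is a minimal sketch of the Markov-chain technique itself (not HAL's actual code, which I haven't seen, and with an invented training sentence). It records only which word tends to follow which, and generates replies from those statistics; nothing in it represents what the words mean.

```python
import random
from collections import defaultdict

def train(text):
    """Record first-order statistics: which word follows which."""
    chain = defaultdict(list)
    words = text.split()
    for cur, nxt in zip(words, words[1:]):
        chain[cur].append(nxt)   # only "what usually follows what"
    return chain

def babble(chain, start, length=8):
    """Generate a reply by repeatedly sampling a likely next word."""
    word, out = start, [start]
    for _ in range(length):
        followers = chain.get(word)
        if not followers:
            break
        word = random.choice(followers)
        out.append(word)
    return " ".join(out)

chain = train("the clock ticks and the clock strikes and the bell rings")
print(babble(chain, "the"))
# e.g. "the clock strikes and the bell rings" -- statistically plausible,
# but the program has no model of clocks, bells, or the conversation.
```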
People tend to think that a system has unlimited power when it is Turing complete. But being Turing complete is only one requirement. It is not so difficult to make something that is Turing complete; what is difficult is to create a system where some equivalent of a Turing machine can emerge as a result of self-development. It is easy to compose a Turing machine from logic elements when we do it by hand, but that doesn't mean a heap of logic elements on our table has any amazing powers. The heap does nothing by itself until we connect the elements so that they form a Turing machine.
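As a small demonstration of how cheap Turing completeness is by hand construction, here is a tiny Turing machine simulator. The transition table is a made-up example (a unary incrementer), assembled by hand in two rules, which is exactly the point: writing the machine down is easy, while a system in which such a machine emerges by itself is the hard part.

```python
# A tiny hand-built Turing machine. This example machine appends a 1
# to a unary number, i.e. it increments it.

# transitions: (state, symbol) -> (symbol_to_write, move, new_state)
rules = {
    ("scan", "1"): ("1", +1, "scan"),   # walk right over the 1s
    ("scan", "_"): ("1", +1, "done"),   # hit the blank: write one more 1
}

def run(tape, state="scan", pos=0):
    tape = dict(enumerate(tape))        # sparse tape; blank cells read "_"
    while state != "done":
        symbol = tape.get(pos, "_")
        write, move, state = rules[(state, symbol)]
        tape[pos] = write
        pos += move
    return "".join(tape[i] for i in sorted(tape))

print(run("111"))   # "1111" -- three becomes four, in unary
```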
We can make powerful systems by programming, and they can do amazing things. So we think that, using them, we can do anything, and thus comes the belief that if we can write enough powerful programs, we can create an AI which can do anything. But when we use these powerful systems as parts of an AI, they mostly fail. Here again, nature doesn't work the way we code, and writing a huge number of powerful programs is not the solution. It may become obvious that we cannot make true AI by coding solutions; ready-made code doesn't help us at all. We may instead need a system which works by itself, without any coding. Sure, that feels strange: we can barely control it or make it do what we want. It is not the omnipotent AI we wanted; it looks like randomly changing rubbish which doesn't even know how to play tic-tac-toe. But if this is the solution in nature, then we cannot make true AI by going against the basic principles of nature, the principles by which the brain really works. The more we try to go against these principles, the more problems we cause ourselves; put simply, it is almost the same as banging your head against the wall.