|
Post by Abram on Oct 20, 2006 8:45:54 GMT -5
Such nets delete themselves because each new generation of nodes has fewer connections than the last, so there will eventually be a generation with only one connection apiece, and any such node will always delete its neighbor.
By the way, I'm having to split up my post because the forum seems to delete any text that's put in after the code. My original post had some commentary that was cut. Strange.
As for input and output, yes, I didn't do that yet.
I've got to get to class. I'll probably have more to say sometime after 5:00.
|
|
|
Post by Abram on Oct 20, 2006 10:52:42 GMT -5
Theorem: Any mind deprived of input will eventually destroy itself or cease all activity.
Proof:
At the first stage without input, the net may be in any state, with arbitrary activated nodes and arbitrary connections.
At the second stage, the only activated nodes will be those newly created (if any) by the old nodes (connected to "common nodes").
Call a particular activated (newly created) node X. X will have N neighbors (N being arbitrary, based on the particular starting net). X will either delete all of these neighbors, or it will create new active nodes from them. (Either way, it will deactivate itself.) These new active nodes will necessarily have fewer connections than X, by at least 1.
This is true for each new generation of active nodes: each will have fewer connections. Since an active node with only one connection will inevitably delete its neighbor, there will eventually be some last generation of active nodes, which will delete some nodes. The remaining nodes, if there are any, will be inactive.
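To make the key step concrete, here is a minimal sketch in Lisp, assuming (as my program does) that a child's links are the intersection of its parent's links and one neighbor's links, and that no node links to itself. Since the neighbor appears in the parent's links but never in its own, the child loses at least that one link:

(let* ((neighbor 'y)
       (parent-links (list neighbor 'a 'b 'c))  ; X's links include Y
       (neighbor-links '(a c d)))               ; Y's links never include Y
  (length (intersection parent-links neighbor-links)))  ; => 2, down from 4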
|
|
|
Post by tkorrovi on Oct 20, 2006 10:53:07 GMT -5
No, from a few networks which you created, you cannot conclude that such networks always delete themselves. A totally closed system will of course collapse more easily. The first thing I noticed is that there is a huge difference in how the system is trained: a system without training may die in a second, but one trained, for example, by the Nim game training program would work for several months (or longer; I just may not have had enough time to check whether they work longer). Even some very simple structures start to grow very rapidly; I don't remember exactly any more, but these were some pentagon-like structures with five nodes. As you see, the start structure for training Nim, for example, was also very small, consisting of only a few nodes. No, I don't really see that these systems always die quickly.

And of course, writing a program for such a mechanism is a bit more complicated. It's usually not just about thinking out the algorithm, writing the code, and then testing that it really works. Such a system may work properly only when it works completely correctly; even the smallest difference would cause it to die very soon. And the development of such a system is far too difficult to follow for us to see in any way whether it works correctly or not. For such things a debugging method should be used where the system is analyzed at every step, so that any errors are found the moment they occur. For that I wrote the analyzer, which analyzes the whole system, and then called it from different places in the code as a function. It was very difficult to debug all the code completely, as any mistake finally caused some errors in the structure, but it was very difficult to trace them back to the initial cause. Finally I got code with no known bugs, which really is the only guarantee that the system works correctly. So if you would like to create any such new code, I think the best thing at first would be to try to recreate the ADS-AC code, and compare whether the two codes do the same thing. Then there would be the necessary basis for doing something else as well.

Unfortunately I don't know Lisp very well, and I don't know whether it really is a programming language suited to practical work: for instance, whether it has the necessary preprocessor directives so that it is possible to debug properly. I used C because C is really intended for real development; there is everything one may ever need, and it is the most widespread language, with all the operating systems written in it. I certainly agree that Lisp would give shorter code, and it is more logically built for programming, but is shorter code really the most important thing? A good programmer would write a correct program structure in C as well. At least C is made very practical for advanced programming, can do everything which any other language can do, and produces code which is the fastest of them all. I really had no doubts about the programming language: it had to be C. But anyway, if you would like to use Lisp, I have really nothing against it; Lisp is a serious thing, and there is a lot of software for it in Linux. Only, perhaps, estimate how much slower such a program in Lisp would be than the equivalent program in C. It certainly would be slower; the only question is whether one and a half times, two times, or more. I, for example, implemented the garbage collection in the program myself as well, because I know very well that the standard garbage collection would be much slower, and in such systems every minor gain in speed is everything.
But if you really want to learn programming (if that really is a good thing to do), then working with the ADS-AC code, for example, would be very good for that. You would learn how to debug complex systems, all the POSIX and Linux tools, and the whole programming environment, like everything on the SourceForge Linux server. It is most important for a programmer to know all these Linux things; these are the basis and the classics of all programming. So I am quite sure that if you ever want to work as a programmer (which I personally, unfortunately, don't think is a good idea) or something similar, then working with ADS-AC would be the most useful thing for you, regardless of what results come from ADS-AC or anything similar. At least it's something really new, quite different from just slightly improving one of the many systems which already exist. And it's possible to study something cutting-edge, like system topology analysis, and maybe also some advanced statistics, which can then be used for many other, practical purposes. In a way, to think more deeply, ADS is about everything; it is where all the most fundamental things in the world come together, and what would be found or created by studying ADS-like systems can be applied to many other things in the world. If one thinks about science, then this is very much what science should be.
|
|
|
Post by tkorrovi on Oct 20, 2006 11:05:35 GMT -5
"Theorem: Any mind deprived of input will eventually destroy itself or cease all activity."
Yes, it's also true in general: any finite closed system would either die or else start to repeat itself (which is possible but probably somewhat less likely; after all, a deterministic system with only finitely many states must eventually revisit some state, and from there on it repeats). I guess it's also proved somewhere in science, though I haven't managed to find an exact paper yet. But what is important is not that it dies as a closed system, but that open systems are possible in which the system will not die.
|
|
|
Post by Abram on Oct 20, 2006 17:04:11 GMT -5
Sure, it's true in general that any closed system will either cease activity or repeat itself. But in particular, ADS will destroy most nodes, and the remaining ones will not go into repeated behavior but will instead cease all behavior. My argument is not based on my experiments, which do conform to my prediction but could be buggy (as you mention); rather, it relies on the properties of ADS itself.
To repeat: since no input is coming in, the only active nodes will be newly created ones. Each generation of newly created nodes will have fewer connections, because the connections of a new node are taken as a subset of the connections of the parent node (in particular, the intersection of the parent's connections and some neighbor's connections). The reduction is not smooth; a particular child may have one fewer connection, or three, or twenty fewer, whatever. But each child will have fewer. Also, each subsequent generation will possibly delete many of the older nodes in the network. Further, we know that there will be some last generation, because each generation has fewer connections per node, and a node cannot be created without at least one connection. It is also provable that this last generation of nodes will destroy any nodes it is connected to, because it does not create any new nodes, and it is known by the rules of ADS that every active node must, for every connection, either create a new node or destroy the node it connects to.
On the other hand, logic provides the case in which a net does not destroy itself: if the net gets some input, it can create nodes that are connected to the new input nodes, in which case we can get a new generation of nodes with a (potentially) vastly increased number of connections. This undoes the tide of decay.
|
|
|
Post by tkorrovi on Oct 20, 2006 18:03:05 GMT -5
I don't know, maybe you are right. The new node is created from links which differ between the two nodes, so there should not be an average tendency for the number of nodes to decrease. But I leave analyzing these things more thoroughly to you; it may indeed be that there is some minor detail which causes the number of nodes to decrease. If it really turns out to be so, then we may think that something is not exactly right in the mechanism, as such a mechanism should be universal and therefore should not have any general tendencies whatsoever. It may be, for example, that deleting nodes is not right after all, and there should only be some decay as the only way of deleting nodes. Or there may be other possibilities, of which there are not so few. The only thing in that regard which can now be switched on or off in the system is whether there can be nodes with only one link or not. After all, the aim is to create the right system, whatever it may be, and finally it would be found by research and experimenting. I can trust that research completely to you; considering some things which you perhaps don't know yet, I am quite sure you would come to know them. I made the things so far, but I feel I don't have many new ideas any more, so someone else is really necessary, who maybe looks at things a bit differently and can move them further. Then, if you, say, find that the mechanism needs some changes, it would not be only my mechanism, but mine and yours, a Demski-Korrovits mechanism, hehe...
|
|
|
Post by Abram on Oct 21, 2006 20:35:24 GMT -5
I'm not talking about an average tendency. I'm talking about every single new node having fewer connections.
The new nodes are created from links, which are different for each node. But they get their links as an intersection of the links of others, and the intersection of two sets is generally smaller than either of the two sets.
But, anyway. This is far from a disproof of ADS. Your method so far has been to systematically eliminate regular behavior from the system. But this is not my method, not by a long shot. Your method seems to have gotten results, which I wouldn't have expected. But I wouldn't continue your approach if I started trying to make improvements. Instead, I'd search for the logical regularity that creates the good results.
But the fact that you got results by trying to eliminate regularity is interesting. It's similar to creating the rules for a game: if children are playing some game and discover that some particular strategy assures success, they will often create some new rule to stop that strategy from working; they think of assuring success as cheating, and the new rule penalizes cheaters. A well-balanced game has unpredictable results.
Also, the approach is similar to Stephen Wolfram's. (My brother is reading "A New Kind of Science".) He claims that calculations that display irregular behavior, which cannot be predicted in any way except by running the calculation, possess "free will". He uses cellular automata, which you abandoned. But he claims that any system that displays the proper complexity is equivalent to all other such systems; your net could be simulated in his automata. (That doesn't stop the net from being better, of course.) This suggests that adding more rules to your system once it reaches the point of sufficient complexity won't add anything substantial. The "free will" does not come in smaller and larger amounts, according to him.
So the fact that an unstimulated mind deletes itself might be seen as a point of interest, instead of a downside, of your system. It seems quite "human", in a way: terrible things should happen when a mind is totally disconnected from reality.
I'll register for a SourceForge account eventually. Tuesday, perhaps. (I can get online most easily on Tuesdays and Fridays. Right now, I'm standing out in the cold to get internet.)
|
|
|
Post by tkorrovi on Oct 21, 2006 21:29:26 GMT -5
Just two things.
At least the word "intersection" is wrong: in set theory, that from which the links of a new node are formed in ADS is called the "symmetric difference".
That one system can be simulated in another system doesn't mean that the one is intrinsic to the other. It is the same as a computer being able to play Doom: that doesn't mean that Doom in any way comes from, or is related to, the computer hardware. So the fact that a cellular automaton can simulate a Turing machine, and a Turing machine can simulate ADS, doesn't mean that the basic properties of a cellular automaton must be in any way similar to the basic properties of ADS. And the biggest problem with the cellular automaton is that it evidently cannot develop into every possible system by itself; for that we would still need to implement another system like ADS within it. Also, it evidently cannot be trained in its basic mode.
|
|
|
Post by Abram on Oct 22, 2006 10:42:10 GMT -5
Ah, OK, so my understanding of the algorithm was wrong.
I see. So my program is wrong, along with my conclusion.
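In Lisp terms, the difference looks like this (a minimal sketch of my own, assuming links are kept as plain lists of node symbols):

(let ((x-links '(a b c d))
      (y-links '(c d e)))
  (list (intersection x-links y-links)              ; what I had used: (C D)
        (union (set-difference x-links y-links)     ; the symmetric
               (set-difference y-links x-links))))  ; difference: (A B E)

(The exact ordering of the result lists may vary between Lisps.)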
Wolfram argues that any complex system not only CAN simulate any other, but eventually WILL, if it isn't purposefully given a restricting program as a start-input. So if, instead of feeding a computer a computer program, we just feed it a minimal input to get it started, it WILL eventually start playing Doom (but it will of course take an extremely long time to do so, and will by no means settle on Doom in particular or prefer the exact version of Doom our society has created). I think the argument runs something like this:
Because a complex system is unpredictable, it can in principle produce any particular output if we wait long enough. Because the system is continually re-updating, it will treat this output as input. So in principle, it will eventually produce and start running any particular program we choose.
According to Wolfram, as I've just stated, the automaton CAN create any system all by itself. But this is of no use, because (as you say) it doesn't respond to training, and so it won't create systems "at the right time". Wolfram argues that the mind is just another complex system. But it seems like the mind has that difference of creating behavior not just unpredictably, but AT THE RIGHT TIME, according to input, as best it can.
That is, again, why I prefer the logical approach. An unpredictable system is certainly interesting, but a system must be able to do more than just generate every possible behavior: it must do it at the right time.
But, anyway, I'll fix my program and repost it.
|
|
|
Post by tkorrovi on Oct 22, 2006 14:01:04 GMT -5
Some of Wolfram's conclusions are likely correct; they can be taken as a basis, and they are important. But the problem is that the evidence doesn't support one of Wolfram's arguments. In Conway's Game of Life, there are only a few known infinitely developing structures, and all of them finally produce rather specific regular structures, by far not arbitrary systems. Furthermore, they would develop infinitely only with almost no outside influence; trying to influence them in any way would cause them to collapse. This can be shown by experiments, one of which is on the ADS-AC project site. Therefore we not only need a self-developing system like a cellular automaton, but also a system which is derived so that it can develop into whatever system, by itself or by training. And what is necessary for that is that the system be changing (dynamic) enough, because restrictions would cause the system to die too soon, before it can even develop into anything. There is absolutely no such derivation for cellular automata, and therefore also no theoretical reason why a cellular automaton should be able to develop into whatever system. The properties of the cellular automaton nodes are chosen rather arbitrarily, not based on any theoretical reasons. And therefore the cellular automaton is also evidently more restricted than ADS; we see it in that in every way it seems to be more "clumsy", fragile and easy to destroy. That it can simulate a Turing machine (and the Turing machine there is very clumsy and slow as well, due to the need for constructs like glider guns and shifting blocks, because a cellular automaton has no direct connections) is not everything yet. It doesn't mean at all that systems like a Turing machine can emerge there as a result of development (with or without training). ADS at least is derived so that any system, which means also a Turing machine, should theoretically be able to emerge as a result of development.
Theoretically, a system like ADS should not only be able to generate any possible behaviour, but should also be able to be trained to a desired behaviour. The principles of how such training could be done are described on the ADS-AC site. What theoretically enables such training is something similar to the multiple drafts principle already described by Dennett. It is that in a system where any part can be deleted or disappear one way or another, there would appear certain conditions where a certain process fits into its environment (the rest of the system, including training) and can therefore survive. This means there used to be "multiple drafts", i.e. several different processes, and of these only the one survives which can fit into its environment. What is necessary for such a system is a mechanism which can generate any kind of possibilities during development. As this is a basic property of the mechanism, later there would be conditions where improper possibilities would not emerge, or would be deleted or disappear very soon; so this is also not just generating a random system and choosing a proper one, this is development, and it goes faster the more developed the system is. For that we basically need a system like ADS, which is derived to generate everything possible from the conditions and the system which there is. And such generation is a kind of logic; I guess it can also be shown in other ways that from that symmetric difference all the other logical operations can be composed.
|
|
|
Post by Abram on Oct 22, 2006 20:26:37 GMT -5
The completeness of symmetric difference in set theory is like the completeness of "neither-nor" in logic. All logical operations can be composed from neither-nor. But just because a system uses neither-nor does not mean it is complete. A cellular automaton that has two colors and a neither-nor rule for constructing a square from the two above it is NOT Turing-complete. In fact, its behavior is quite uninteresting. So just because a system uses elements that can create anything doesn't mean the system can create anything.
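(To be concrete about neither-nor completeness, here is the standard construction, sketched in Lisp with my own function names:

(defun nor2 (p q) (not (or p q)))                       ; "neither-nor"
(defun not-from-nor (p) (nor2 p p))                     ; NOT p = p NOR p
(defun or-from-nor (p q) (nor2 (nor2 p q) (nor2 p q)))  ; OR = NOT of NOR
(defun and-from-nor (p q) (nor2 (nor2 p p) (nor2 q q))) ; AND = NOR of NOTs

Every truth function can be built this way from nor2 alone. Yet the two-color automaton built on that same rule still does nothing interesting, which is exactly the point.)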
Actually, I'm not too clear on the relation between logical completeness (the completeness of symmetric difference in set theory and of neither-nor in Boolean logic) and Turing completeness. I'm pretty clear on what "logical completeness" means, but "Turing complete" isn't quite as meaningful to me. I get the idea of one Turing machine simulating another, but how can we look at non-Turing-machines and call them Turing complete or not? (For example, a massive database could have stored in it all the outputs it should give for each possible input, and could therefore imitate any Turing machine; but is it Turing complete? It's not calculating, just memorizing.)
Anyway, the multiple drafts thing is important. It's clearly the logic behind the process that I was missing before. But how can you guarantee that? If so, then it seems like internal consistency and self-support become more necessary than agreement with the environment. A system that develops has to survive the bombardment of the senses, but that doesn't mean it has to act intelligently. In fact, it doesn't suggest any particular scheme of output to me at all. How does the system get rewarded and punished? Particular internal systems should be eliminated not by each other, or at least not mainly by each other; they should mainly be eliminated as punishment for bad behavior. So which types of sensory input destroy structures? And do they tend to destroy only the structures that created them, or do they destroy just any structures within the net?
But my fingers are getting cold. Time to go inside. Talk later.
|
|
|
Post by Abram on Oct 22, 2006 20:27:41 GMT -5
(I do like the drafts idea.)
|
|
|
Post by tkorrovi on Oct 22, 2006 22:47:32 GMT -5
"So just because a system uses elements that can create anything doesn't mean the system can create anything."
No, of course not. The system must be derived to be able to create anything. A cellular automaton node can calculate neither-nor only between its four neighbouring nodes, and that's a severe restriction.
It would be too long to write everything about training here; please read about it on the project's site. But to say it shortly, if I can: what is positive for a system is when it can expect certain input to come, because then the process behind it is supposed to fit best into its environment. So using that, and seeing what influences the system positively, we can influence its behaviour, and train it to a behaviour. And theoretically we should be able to train anything that way.
|
|
|
Post by tkorrovi on Oct 25, 2006 7:59:28 GMT -5
To put it another way: whenever we have a system where the multiple drafts principle really works, and which can be influenced from outside, then training is possible, because some processes would be connected to input and output, different processes appear there at different times, and they can be influenced to fit better or worse into their environment.
In fact, the multiple drafts principle likely cannot work for systems which cannot be influenced, like the cellular automaton, because there a process usually cannot survive any influence, neither from outside nor from another process.
|
|
|
Post by Abram on Oct 29, 2006 13:19:15 GMT -5
I think I fixed it now. I registered at sourceforge under abramdemski.
(defun update-ads (nodes)
  ;; Each node is a symbol whose value is (active-flag . connection-list).
  ;; Returns the updated node list after one step.
  (let ((nodes2 nodes))
    (dolist (node nodes nodes2)
      ;; Drop any node that has lost all of its connections.
      (when (null (cdr (symbol-value node)))
        (setq nodes2 (remove node nodes2)))
      ;; Only active nodes act.
      (when (car (symbol-value node))
        (dolist (neighbor (cdr (symbol-value node)))
          (let ((common-nodes (intersection (cdr (symbol-value node))
                                            (cdr (symbol-value neighbor)))))
            (if common-nodes
                ;; Shared neighbors exist: create a new active node whose
                ;; links are the SYMMETRIC DIFFERENCE of the two link lists.
                (let ((newnode (gensym))
                      (uncommon-nodes
                        (union (set-difference (cdr (symbol-value node))
                                               (cdr (symbol-value neighbor)))
                               (set-difference (cdr (symbol-value neighbor))
                                               (cdr (symbol-value node))))))
                  (push newnode nodes2)
                  (setf (symbol-value newnode) (cons t uncommon-nodes))
                  ;; Link each of those nodes back to the new node.
                  (dolist (uncommon-node uncommon-nodes)
                    (push newnode (cdr (symbol-value uncommon-node)))))
                ;; No shared neighbors: delete the neighbor, removing it
                ;; from every link list and from the node list.
                (progn
                  (dolist (other nodes2)
                    (setf (cdr (symbol-value other))
                          (remove neighbor (cdr (symbol-value other)))))
                  (setq nodes2 (remove neighbor nodes2))))))
        ;; An active node deactivates itself after acting.
        (setf (car (symbol-value node)) nil)))))
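A quick way to try it (my own smoke test, not part of ADS-AC): build a triangle of nodes, activate one, and run a single step:

(let ((a (gensym)) (b (gensym)) (c (gensym)))
  (setf (symbol-value a) (cons t (list b c))      ; active, linked to b and c
        (symbol-value b) (cons nil (list a c))
        (symbol-value c) (cons nil (list a b)))
  (length (update-ads (list a b c))))             ; => 5: two nodes created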
|
|