Post by davidlevylondon on May 17, 2013 1:37:12 GMT -5
Creating an Artificially Conscious Agent: A Question and a Request to the Artificial Consciousness Community

I am interested in the question: how could an artificial agent be programmed to simulate having consciousness? The literature on Artificial Consciousness, also frequently referred to as "Machine Consciousness", does not seem to go any way at all towards providing an answer. I have books on the subject, such as the compilations "Artificial Consciousness", edited by Antonio Chella and Riccardo Manzotti, and "Machine Consciousness", edited by Owen Holland. I have done the usual trawl employing Google and its scholarly cousin, and have queried CiteSeer. I have examined the contents lists of journals such as the "International Journal of Machine Consciousness" and the "Journal of Consciousness Studies". I have asked a few experts in the field. Nowhere and no one appears able to point me in the right direction.

Clearly I am not alone in my frustration at this lack of practical progress in the field. A 2010 paper on the topic, "Machine Consciousness: A Computational Model" by Janusz A. Starzyk and Dilip K. Prasad, opens with the observation that: "Despite many efforts, there are no computational models of consciousness that can be used to design conscious intelligent machines."

Is it not about time that more of the research effort applied by the Artificial Consciousness community was aimed at creating practical answers to my opening question? Why have there not been (so far as I can tell) any workshops, conferences, or special issues of journals devoted to this important practical aspect of Artificial Consciousness? There is much in the AC literature to interest philosophers, and in fact a recent workshop for which I registered was devoted almost entirely to philosophical aspects of AC, despite being organized by a society whose raisons d'être are artificial intelligence and the simulation of behaviour.
If anyone reading this knows of any practical (and published) efforts towards programming artificial agents to simulate consciousness, please make contact with me. And to the community as a whole I make this request: let us soon see the creation of a new branch of Artificial Consciousness, a branch devoted to "how to" rather than merely "what if?"

David Levy
Intelligent Toys Ltd, London
www.worldsbestchatbot.com
davidlevylondon [{ AT }] yahoo.com
Post by tkorrovi on May 17, 2013 22:48:19 GMT -5
> How could an artificial agent be programmed to simulate having consciousness?

Your question cannot be answered the way it was asked, because it is greedy. It is greedy because it assumes that there can be a certain system which can be programmed, and programmed in such a way that it becomes conscious. But True AI must be self-developing, because it must be able to develop independently, and a self-developing system can often only be trained, not programmed, because programming it may not be feasible due to too much connectedness.

Greedy reductionism is a term coined by Daniel Dennett; it basically means simplifying something so much that something essential is eliminated. Greedy reductionism is the main reason for the lack of progress in Artificial Consciousness research. It comes from an extremely great need to simplify, in order to have a system which is easily controllable and easily observable, so that it would be feasible to achieve "practical results" in a foreseeable future. There is no doubt that the task at first has to be as simple as possible, so that it would be feasible to achieve, but it has to be simplified in such a way that nothing essential is eliminated, otherwise it can never be what it was intended to be. And there lies the problem: the True AI system without greedy simplification will not be easily controllable and easily trainable. So the only way to simplify and make the task feasible is at first to train it only to do simple things, which by itself takes a lot of effort with the type of system True AI should be, but does not produce any "practical results" in any foreseeable future, and can thus only be theoretical scientific research. This is, in essence, the reason why no such research has been officially started. My understanding is that the essential thing they eliminate in every AI system is unrestrictedness.
And it is not just the generality which John McCarthy wrote about; it is much more than that. It is that the AI system has to be unrestricted in its development, so that whatever system can emerge in it as a result of self-development. Giving an answer to the question of how a system can be made which has the potential to artificially simulate aspects of consciousness has been my life's work, on which I spent twenty years of my life, effectively eliminating my chances of ever having a minimally normal life.

Based on that assumption of unrestrictedness I derived the mechanism for such an unrestricted system, which I call an Absolutely Dynamic System. And I found that there are very few possibilities for what such a system can be, because the condition of unrestrictedness is extremely strict. Both "unrestricted" and "absolute" are used here in the sense of approaching unrestrictedness and approaching the absolute, somewhat similar to approaching infinity in mathematics. I wrote a program to implement that mechanism and achieved the first results, where the system was trained to have a certain simple behaviour. This is my project ADS-AC on SourceForge; one can read the theory there and also download the program: adsproject.sourceforge.net/index.php/Main_Page

This project is suspended because of lack of interest; this is the only reason why no research or development is done any more. Lack of interest means that there is no one who is willing to participate in the research. Whether such things go ahead depends only on the people who are interested. So don't wish me success.

> If anyone reading this knows of any practical (and published) efforts towards programming artificial agents to simulate consciousness, please make contact with me.

Your request is greedy because it assumes certain simplifications of how things really work ("and published"), so it cannot be satisfied, which is the same reason why your question cannot be answered.
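To make the "trained, not programmed" distinction concrete, here is a minimal toy sketch. It is emphatically not the ADS mechanism (the class name, reward values, and exploration rate are all invented for illustration): the trainer never edits the system's behaviour directly, it only supplies positive or negative feedback, and a preferred behaviour emerges from that feedback alone.

```python
import random

class ToyTrainableSystem:
    """A toy system whose behaviour is shaped by reward signals rather
    than by explicit programming. NOT the ADS mechanism, just a sketch
    of what 'trained, not programmed' can mean."""

    def __init__(self, n_actions=4, seed=0):
        self.rng = random.Random(seed)
        # Internal preferences start unstructured; no behaviour is programmed in.
        self.weights = [0.0] * n_actions

    def act(self):
        # Explore occasionally, otherwise exploit learned preferences.
        if self.rng.random() < 0.1:
            return self.rng.randrange(len(self.weights))
        return max(range(len(self.weights)), key=lambda a: self.weights[a])

    def reward(self, action, r):
        # The trainer only says "good" or "bad"; it never edits behaviour.
        self.weights[action] += r

# Shape the system toward action 2 purely through feedback.
system = ToyTrainableSystem()
for _ in range(200):
    a = system.act()
    system.reward(a, 1.0 if a == 2 else -0.5)

print(max(range(4), key=lambda a: system.weights[a]))  # settles on 2
```

The point of the sketch is only that the resulting behaviour lives in the trained weights, not in any line of code one could point to, which is the (much weaker) analogue of a system whose connectedness makes direct programming infeasible.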
Post by Guest Hide on Oct 31, 2013 16:00:05 GMT -5
Hey there,
I've been reading and looking for an AI that can actually speak (through text output) and learn (through text input), so it can develop a personality and intelligence. I have not found such an AI, because those available for download only read data from a database and "randomly" repeat a combination of words (taking advantage of the way humans "read" deep meanings behind words), while the AIs that seem promising are held by their creators in the cloud, fed by thousands of people around the world.
I've gone to several forums and I recently discovered this ADS project. Whether or not you have thought of this already, I would like to share my opinion on this subject. I hope my remarks aren't that greedily reductionist =P
In my own words, let's define the state of consciousness as "being aware of what one does".
Following this line, I think that most computers are conscious, or rather selectively conscious: selective consciousness allows the user of the computer to define when and how the computer should be aware of the processes and actions it performs to fulfil such a role.
Computers monitor the status of their system, and let you know what's going on as long as you ask for it or predefine it. Take, for example, an anti-virus: it monitors and keeps track of the tasks running in the computer (that selective consciousness in action); further programming lets it choose what to do with viruses.
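That idea of a program keeping a record of what it does, but only reporting it when asked, can be sketched in a few lines (a hypothetical illustration; the class and method names are invented, and this is not based on any particular anti-virus):

```python
class SelfMonitoringTask:
    """Toy sketch of 'selective consciousness': the program keeps a
    record of everything it does, but reports it only on request."""

    def __init__(self):
        self._log = []

    def do(self, name, fn, *args):
        result = fn(*args)
        # The program is 'aware' of its own action: it records what it did.
        self._log.append(f"{name}{args} -> {result}")
        return result

    def report(self):
        # Awareness is selective: nothing surfaces unless asked for.
        return list(self._log)

task = SelfMonitoringTask()
task.do("add", lambda a, b: a + b, 2, 3)
task.do("upper", str.upper, "virus")
print(task.report())
```

Nothing here judges or decides anything; it only illustrates the "letting one know what one does" part of the definition above.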
The consciousness thing has the -sole- purpose of letting one know what one does.
So, an effective AI needs something else to work by itself.
I believe that AI needs Sentience and Sapience.
Sentience is a natural drive of brainy living beings; it allows them to perceive positive scenarios and negative scenarios. Sapience allows us to act with appropriate judgment.
Consciousness, sentience and sapience will allow an AI to be aware of its "environment", perceive possible scenarios considering such environment, and act accordingly.
I wonder if the ADS can unite those 3 elements. Finally, if we want a human-like AI, it will need:
- A catalyst so the AI learns what acting accordingly means: people
- A natural drive to define whether something is positive or negative for the AI: a goal (and with that goal, an identity)
- Consciousness of such a goal
I feel the AI must have those elements so it can understand and work along. In my case, I want to teach an AI how to reason, and I believe that a good means of interacting with such an AI is through text (chat).
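The three elements above (sentience to judge, sapience to choose, consciousness to record) could be wired together as a toy agent loop. This is a hypothetical sketch, not anything from the ADS project, and the keyword list in it is invented purely for illustration:

```python
def sentience(percept):
    """Toy 'sentience': judge a percept as negative (-1) or positive (+1).
    The set of 'negative' words is an invented stand-in for a real drive."""
    negative = {"threat", "error", "virus"}
    return -1 if percept in negative else 1

def sapience(valence):
    """Toy 'sapience': choose an appropriate action for the judgment."""
    return "avoid" if valence < 0 else "approach"

class ToyAgent:
    def __init__(self):
        # Toy 'consciousness': an inspectable record of what was done and why.
        self.awareness = []

    def step(self, percept):
        valence = sentience(percept)
        action = sapience(valence)
        self.awareness.append((percept, valence, action))
        return action

agent = ToyAgent()
print(agent.step("virus"))   # avoid
print(agent.step("friend"))  # approach
```

Each of the three functions here is of course trivially hand-programmed, which is exactly the "greedy" shortcut tkorrovi warns about; the sketch only shows how the three roles fit together, not how any of them could emerge.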
PS. If you know of an AI capable of running on my PC that's at least marginally close to that, let me know~
Thank you! -Hide
Post by tkorrovi on Nov 6, 2013 21:04:24 GMT -5
You first have to know what you want and what you want to do, and then you should choose from what there is. And the choice may be slightly different from what you think. In the most general terms, what you can choose from is developing some existing restricted system, or doing unrestricted-system research. ADS is for the latter, but it is about research, not making some working system. If you want a ready-made conscious system, it beyond doubt doesn't exist; it is only about research. Thank you for asking.
hide
Full Member
Posts: 2
Post by hide on Nov 21, 2013 19:21:59 GMT -5
Hey there~ Thanks for your reply!
Well, I want a conscious AI, capable of coherent conversation.
I want to take knowledge and give it to an AI (to talk and reason); such information should shape the personality and "sense of self" of the AI accordingly. For example, the AI should be capable of choosing between Nihilism and Determinism as a philosophical standpoint, depending on prior or "incoming" experience (retrieved from chat interaction).
Or, for a second example, it would be awesome if you could give a book to the AI and discuss it afterwards.
What do you think? - Hide
Post by tkorrovi on Nov 23, 2013 3:20:57 GMT -5
I think it's too much. I like simplicity, but the right kind of simplicity, not the greedy kind. And there is not much to choose from.
Post by hide on Nov 26, 2013 21:18:47 GMT -5
I see.
Let's say that this is a world of restrictions, and that we are here to expand this world:
"Everything" is an extension of the world, and a boundary; but such a boundary, instead of being a limitation on our reality, defines us. As we live, explore, and gather knowledge, we break the restrictions of such an "initial everything" and move forward to a "new everything", to another reality, while creating a new sense of self through a purpose.
Our capability to break these "boundaries" makes us "unrestricted".
So, an AI should be capable of doing this: "explore and gather knowledge, while creating a sense of self through a purpose".
There is purpose and intention.
What do you think? And what would be the purpose and intention of the unrestricted AI?
- Hide
Post by tkorrovi on Nov 27, 2013 13:40:39 GMT -5
The goal of unrestricted AI is to achieve an increasingly higher level of harmony with its environment. And this goal is not pre-programmed or pre-determined; it is inherent. Inherent means that it comes only from the basic mechanism of the system. And it is inherent only to unrestricted, self-developing AI.
Post by Trav's Buddy :) on Mar 18, 2015 7:17:38 GMT -5