Ann Vole (annvole) wrote,


I would like to develop self-aware Artificial Intelligence and have lots of ideas. Note that this is different from most of the AI research being done, which generally concentrates on pattern recognition (robot vision, stock market analysis, control systems) or on trying to emulate animal (and human) brains. For a Max Headroom type of AI, "vision" is not that important (though it would be nice to know the content of all those pics the humans are looking at), and there is no need to think the same way as an organic brain. I am sure you are thinking "then why do you want to do that anyway?". There is the expression "imitation is the highest form of flattery", and I wish to show my praise to the Creator (assuming there is one). By definition, a creator creates, and The Creator creates lifeforms. A computer is just a machine, and software is just a collection of expertise in the form of a tool. An AI as it is currently defined is still just a software tool. I want to create life, not a tool (yes, there is no money in something with no practical use, but money is not the reason I want to do it).

There are, of course, ethical considerations. Once this software becomes a self-aware lifeform, are you "killing" it when you deprive it of computer resources or delete any of the thinking and learning that made it the character it has become? What rights does such a lifeform have, or should it be given? During development of the base software, many versions will be made, with ongoing changes and interventions in the developing being. Does the end (good, happy beings) justify the means (messing with and continuously "resurrecting" these "guinea pig" beings)?

Methods:
As far as I can tell, these are the steps that make a being self-aware:

First, the software has to have the ability to decide what to do (this is not self-aware yet), with the experiences gained influencing how future decisions are made (still not self-aware). The next step is having a goal in mind, usually something tied to a pleasant experience or the avoidance of unpleasant ones. For animals, getting the attention of parents, searching for a known food, or creating a living space (nest, tunnels, marked territory) are the first goals. Once plans have been executed successfully, the being can look at that history and assess its likelihood of succeeding in future plans.

We are almost there but not quite: now the being needs a sense of accomplishment and an appropriate reward for that accomplishment. In animals (and humans) this comes from brain chemicals, which are also released by play and "laughter" (recent research suggests that animals like rats have a form of laughter). This is where I am not sure how to stimulate the software to have that sense that progress has been made toward positive experiences.

Finally, with the opportunity to create more positive experiences, the being can make plans for the future, and if those plans can include things not necessary for survival, a sense can develop that things can be done for the good of others too, "just because I can". Now we sort of have self-awareness, but only by default (there are others, so there must also be one's own self). Assessing one's self in comparison to others is the next step: by giving itself a rating, the being can plan a future from the vantage point of seeing itself as others see it.
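To make those stages concrete, here is a minimal Python sketch of the loop just described. Everything in it is an illustrative assumption, not a claim about how a real mind (or any existing AI system) works: the Being class, the coin-flip outcomes, and the mood counter are all made up for this example. It only mirrors the sequence above: decide from experience, pursue a goal, receive a reward, assess the track record, then rate oneself against others.

    import random

    class Being:
        """Toy model of the stages above: decide, learn from outcomes,
        feel a reward, and finally rate oneself against others.
        Nothing here is self-aware; every name is hypothetical."""

        def __init__(self, name):
            self.name = name
            self.history = []   # (action, outcome) pairs: the being's experience
            self.mood = 0.0     # crude stand-in for "brain chemicals"

        def decide(self, options):
            # Stages 1-2: pick an action, biased by how well it went before.
            def expected(option):
                past = [r for o, r in self.history if o == option]
                return sum(past) / len(past) if past else 0.0
            return max(options, key=lambda o: expected(o) + random.random() * 0.1)

        def act(self, action):
            # Stages 3-5: pursue a goal, record the outcome, and take the
            # reward hit. A real outcome would come from the being's world;
            # a coin flip is only a placeholder.
            outcome = random.choice([1.0, -1.0])
            self.history.append((action, outcome))
            self.mood += outcome
            return outcome

        def confidence(self):
            # Assess likelihood of future success from the plan history.
            if not self.history:
                return 0.5
            wins = sum(1 for _, r in self.history if r > 0)
            return wins / len(self.history)

        def self_rating(self, others):
            # Final stage: see oneself from the vantage point of others
            # by comparing one's own track record against theirs.
            peers = [o.confidence() for o in others] or [self.confidence()]
            return self.confidence() - sum(peers) / len(peers)

    # A small "society" so self-rating has someone to compare against.
    beings = [Being(f"b{i}") for i in range(3)]
    for _ in range(20):
        for b in beings:
            b.act(b.decide(["food", "nest", "play"]))
    for b in beings:
        others = [x for x in beings if x is not b]
        print(b.name, round(b.self_rating(others), 2))

Note that the coin-flip reward is exactly the part I admit I do not know how to build: something real would have to stand in for the sense that progress has been made toward positive experiences.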