Post by K. R. Thórisson on Nov 14, 2012 11:02:40 GMT -5
For Friday's class November 16 the papers to comment on are the two assigned readings on creativity (see readings on cadia.ru.is/wiki/public:t-720-atai:main for Nov 13). Please present three questions related to those.
Please be reminded that at least one of your three questions should demonstrate that you have read the papers -- it should be of the kind that is difficult to ask without having read them in reasonable detail.
Also, at this point you have read a substantial number of papers and are starting to understand more -- and, importantly, you are starting to understand much better what is missing in your understanding of how to achieve AGI. It is preferable that at least one of your three questions relate to other readings and topics in the course so far. It will be a bonus if one or more of the questions relates the two papers to each other in some way. (Be creative!)
The questions are due at 11 am this coming Friday Nov 16.
1. Are there any alternatives to using cellular automata for the creativity experiments? (A generic cellular-automaton sketch follows these three questions.)
2. David Deutsch states that developing an AGI is a philosophical problem. Do the Vélaldin experiments contradict that statement in any way?
3. So if the level of environmental complexity dictates the level of creativity, we need to make our AGI's internal environment complex or it won't be creative. Obviously, making an AGI is no simple task, but as its architecture, its mind, its planning capabilities, etc. become more advanced and more complex, how likely is it that some kind of creativity will emerge automatically, rather than the "creativity feature" having to be added?
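Regarding question 1, here is a minimal sketch of what a cellular-automaton environment can look like -- a generic 2-D automaton using Conway's Game of Life rules, written in Python purely for illustration; it is not the actual setup used in the paper.

```python
# Minimal 2-D cellular automaton (Conway's Game of Life rules), used here only
# as a stand-in for the kind of grid-world environment a simulated creature
# could live in. This is a generic sketch, not the setup from the paper.

def step(grid):
    """Compute the next generation of a 2-D grid of 0/1 cells (toroidal wrap)."""
    rows, cols = len(grid), len(grid[0])
    nxt = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # Count the eight neighbours, wrapping around the edges.
            alive = sum(
                grid[(r + dr) % rows][(c + dc) % cols]
                for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                if (dr, dc) != (0, 0)
            )
            # A live cell survives with 2-3 live neighbours; a dead cell is born with 3.
            nxt[r][c] = 1 if alive == 3 or (alive == 2 and grid[r][c]) else 0
    return nxt

world = [[0] * 5 for _ in range(5)]
world[2][1] = world[2][2] = world[2][3] = 1   # a horizontal "blinker"
world = step(world)                            # the bar now stands vertically
```

An alternative would be any environment whose complexity can be varied systematically; an automaton like this is just one convenient, fully specified choice.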
"What we count here as the basic creative artifact is a plan; more specically, a plan for survival." "According to our view, even basic logical behavior is creative, only to a minimal extent."
First, you say that creativity is finding solutions to problems that are non-obvious and useful, but to me that is not logical behavior. Logical behavior, in my view, is reasoning with obvious knowledge. So why do you think that even basic logical behavior is creative, if only to a minimal extent?
What we see in the paper is that more complex environments require the generation of new plans. So perhaps it is not strictly necessary for an AGI to be creative, but it is obviously advantageous to produce new (non-obvious), creative plans. I would say it is much better for an AGI in a complex environment to be creative, but creativity is not a requirement for building an AGI. What do you think?
Going back to the question "What is intelligence?", I think creativity is not a necessary part of it, and I would claim that most animals are not verifiably creative. I would say creativity is a further booster for intelligence. What do you think, and is creativity only used in the planning part?
Last Edit: Nov 15, 2012 10:12:42 GMT -5 by sabrina12
Referring to the paper: "Creative blocks: The very laws of physics imply that artificial intelligence must be possible. What's holding us up?" by David Deutsch
Citation 1: "And in any case, AGI cannot possibly be defined purely behaviourally. In the classic ‘brain in a vat’ thought experiment, the brain, when temporarily disconnected from its input and output channels, is thinking, feeling, creating explanations — it has all the cognitive attributes of an AGI. So the relevant attributes of an AGI program do not consist only of the relationships between its inputs and outputs."
Question 1: This part of the paper made me think. I agree that a brain without a body can still think and produce activity. But I think this depends on previous experiences, still in memory, which allow it to produce that activity. What about a brain that has never been in touch with any stimuli (without any sensors)? Could it think?
On this point I would like to share a TED talk by Daniel Wolpert, "The real reason for brains" (www.youtube.com/watch?v=7s0CpRfyYp8). He argues that the only reason for having a brain is "to perform movements".
Question 2: In Citation 2 (below) it seems to me that the author is criticizing the approach used in the NARS project. Where does the truth lie? Is the NARS approach wrong?
Citation 2: Currently one of the most influential versions of the ‘induction’ approach to AGI (and to the philosophy of science) is Bayesianism, unfairly named after the 18th-century mathematician Thomas Bayes, who was quite innocent of the mistake. The doctrine assumes that minds work by assigning probabilities to their ideas and modifying those probabilities in the light of experience as a way of choosing how to act. This is especially perverse when it comes to an AGI’s values — the moral and aesthetic ideas that inform its choices and intentions — for it allows only a behaviouristic model of them, in which values that are ‘rewarded’ by ‘experience’ are ‘reinforced’ and come to dominate behaviour while those that are ‘punished’ by ‘experience’ are extinguished. As I argued above, that behaviourist, input-output model is appropriate for most computer programming other than AGI, but hopeless for AGI. It is ironic that mainstream psychology has largely renounced behaviourism, which has been recognised as both inadequate and inhuman, while computer science, thanks to philosophical misconceptions such as inductivism, still intends to manufacture human-type cognition on essentially behaviourist lines.
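To make concrete the doctrine the citation describes -- "assigning probabilities to ideas and modifying those probabilities in the light of experience" -- here is a toy Bayesian update in Python; the hypotheses, priors and likelihoods below are invented purely for illustration and do not come from the article.

```python
# Toy Bayesian updating: an agent holds probabilities over competing "ideas"
# (hypotheses) and revises them after each observation via Bayes' rule.
# All hypotheses and numbers are made up for illustration.

def bayes_update(priors, likelihoods, observation):
    """Return posterior P(hypothesis | observation) for every hypothesis."""
    # Unnormalised posterior: prior times the likelihood of the observation.
    unnorm = {h: priors[h] * likelihoods[h][observation] for h in priors}
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}

# Two competing ideas about a coin, each given a prior probability.
priors = {"fair coin": 0.5, "biased coin": 0.5}
# How likely each idea says a given observation is.
likelihoods = {
    "fair coin":   {"heads": 0.5, "tails": 0.5},
    "biased coin": {"heads": 0.9, "tails": 0.1},
}

# Observing repeated heads shifts probability toward the "biased coin" idea.
beliefs = priors
for obs in ["heads", "heads", "heads"]:
    beliefs = bayes_update(beliefs, likelihoods, obs)
print(beliefs)  # the "biased coin" hypothesis now dominates
```

Whether minds (or AGIs) actually choose how to act this way is exactly what Deutsch disputes in the citation.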
Referring to the paper: "Creativity Evolution in Simulated Creatures: A Summary Report" by Hrafn Th. Thorisson & Kristinn R. Thorisson
For this paper I have some questions and considerations:
1) I really agree when the authors say: "Creativity can be exhibited by an agent without it explicitly understanding the environment". I think this is a characteristic of creativity: it acts when we don't have enough knowledge to produce a solution directly; in a certain sense we are forced to be creative.
2) I agree with the main concept that the authors want to express: a more complex environment requires more creativity. Based on the results obtained with the experiments, can we create a measure of creativity for human beings?
3) "A plan which achieves its intended results is a good plan; a plan which does this while being non-obvious (to produce and/or execute) is more "creative" than one that is more obvious." Considered two plans: one simple and another complex. Should the complex plan implies more creativity if it reach the same goal of the simple plan?
4) Referring to Daniel Wolpert's talk, can we translate the concept of a more complex environment into the perception of more stimuli?
Last Edit: Nov 15, 2012 11:16:47 GMT -5 by fabrizio
I honestly found David Deutsch's article a bit confusing and not focused on the theme of creativity.
1 - "Expecting to create an AGI without first understanding how it works is like expecting skyscrapers to fly if we build them tall enough." That's a good point. I totally agree. On the other hand, however, it is difficult to imagine to create something as complex as an AGI from scratch (without first doing some experiment). In this sense, even if Constructionist methodologies are not sufficient for building AGIs, they were a (natural) step that "has led" to the Contructivist ones. Maybe - hopefully not - neither the Constructivist methodologies will be sufficient, but they will lead to the next step. Thus, I think that David Deutsch is too negative in assessing the lack of progress in AGI (e.g. "Yet today in 2012 no one is any better at programming an AGI than Turing himself would have been."). What do you think about that?
2 - "But could the Analytical Engine feel the same boredom? Could it feel anything? Could it want to better the lot of humankind (or of Analytical Enginekind)? Could it disagree with its programmer about its programming?" [...] "That AGIs are people has been implicit in the very concept from the outset." When we speak about AGI, according to one of the first readings of the course, I always think about a system that can solve a (big enough) set of complex problems in a (big enough) set of complex environment. And I totally understand that, since it is the only yardstick we have, we need to compare our AGIs with the human-level intelligence (as both the papers on creativity did). However, the more I read the more seems to me that sometime we refer to AGIs as "human clones". Thus, should AGIs be "human clones" with rights and duties, freedom, etc?
3 - "[Bayesianism] The doctrine assumes that minds work by assigning probabilities to their ideas and modifying those probabilities in the light of experience as a way of choosing how to act." In his article David Deutsch criticizes the Bayesianism approach to AGI. However he states: "[...] genuine knowledge, though by definition it does contain truth, almost always contains error as well". Thus, since we never have the absolute truth, in a way, we constantly re-adapt our ideas. Maybe (who could know?), our mind does not do that throught the assignment of probability values. However, as Pei Wang pointed out in "Rigid Flexibility: The Logic of Intelligence", logic seems a convenient tool to implement certain kinds of reasoning. Isn't it?
4 - Is creativity needed for an AGI? And self-awareness (in a strict sense)?
"The strong view along this line of thought states that for a system to be intelligent it has to be creative". And also creative => intelligence? And is then creativity = intelligence?
"Creativity exists in the space between that which is (easily and obviously) logically deducable and that which is truly random. No creativity is without logic; randomness is not creative" What do you mean by that? My guess or feeling what creativity is: It can't be purely logical, a deterministic machine can't do logical things so the next sentance is "According to our view, even basic logical behavior is creative, only to a minimal extent." is strictly wrong. Of course you're right, fully random action is not creative, but you have to have some. I'm not taking Penrose's view that quantum physics is necessary. I guess a pseudo-random generator will do. How much of an influence should randomness have? Or will the inaccuracy in sensors do?
(1) Deutsch puts a lot of energy into criticizing the belief "that knowledge comes from extrapolating repeated observations", which makes sense to me. But this makes me think about Helgi Páll's talks on attention: isn't reactive, bottom-up attention specifically proposed to address the issues that he mentions with extrapolation? (For example the situation where he suddenly observes 20 instead of 19 on the calendar.)
(2) What I take from this article is that Deutsch thinks that AGI will not be achieved UNTIL we fully understand how the brain works, which won't happen until "one of the best ideas ever" comes along. Isn't it a bit naive for him to say that any efforts not geared towards this (as yet non-existent) "best idea ever" are futile? We have talked about the Wright brothers, and this seems like a typical example of where history could repeat itself.