Post by K. R. Thórisson on Oct 7, 2012 16:45:26 GMT -5
The reading for this Q&A session is my paper: A New Constructivist AI: From Manual Methods to Self-Constructive Systems. PDF: xenia.media.mit.edu/~kris/ftp/Thorisson_chapt9_TFofAGI_Wang_Goertzel_2012.pdf

This time you should add a challenge to your 2 questions, making it a total of 2 questions and one challenge. The challenge may be a controversial claim that runs counter to something you saw in the paper. Because the purpose of this additional comment is to stir controversy and spur arguments in class (not just from me but from your fellow students), you will be rewarded for being controversial. So remember, don't be boring! Be challenging!
Post by fabrizio on Oct 11, 2012 8:45:10 GMT -5
The paper is interesting and explains what the author thinks is missing on the way to the goal of an AGI.
1) Following the reasoning in this paper, I can only agree with the author about the necessity of an AGI that can generate its own code in order to evolve. It seems obvious that something created by a human is limited by the human way of thinking. A man can create a program that performs a specific task, or that can evolve within a limited space, but this depends on the hand that develops the program (the man) and on his capacity for abstraction. In the case of a machine that can create its own code to adapt and evolve, how can we regulate the generated code? Should we fix some specific rules to avoid the creation of useless code? (A toy sketch of what such a rule-gate could look like is at the end of this post.)
2) Should an AGI have "the will"? How can an AGI come to want to learn something by itself (without an input that tells the AGI: go and learn this!)?
"challenge":
The author says: "If we can create systems that can adapt their operating characteristics from one context to another, and propose what would have to be fairly new techniques with a better chance of enabling the system’s operation in the new envrionment, then we have created a system that can change its own architecture."
At the state of the art, I think that we can't create an AGI independent from the context. We cannot create an AGI able to adapts in a not limitated sets of context (that means without the human limitations). I think that the capacity of adaptation is something that a human being has while an AGI can't have.
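To make my first question concrete, here is a minimal toy sketch (entirely my own invention, not from the paper) of one way to "regulate" generated code: the system may only adopt candidate code that passes a fixed set of rules. The rules here are deliberately naive placeholders.

```python
import ast

def is_acceptable(source):
    """Toy rule-gate: reject candidate code that breaks fixed rules.

    The rules are naive placeholders: the code must parse, and it
    may not import modules or call exec/eval.
    """
    try:
        tree = ast.parse(source)
    except SyntaxError:
        return False  # "useless code": not even parseable
    for node in ast.walk(tree):
        if isinstance(node, (ast.Import, ast.ImportFrom)):
            return False
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id in ("exec", "eval")):
            return False
    return True

# The system proposes a new skill as source code...
candidate = "def double(x):\n    return x * 2\n"

# ...and adopts it only if the fixed rules accept it.
if is_acceptable(candidate):
    namespace = {}
    exec(candidate, namespace)
    print(namespace["double"](21))  # -> 42
```

Of course, the real difficulty is that any fixed rule set we write by hand brings back exactly the human limitations the paper wants to escape.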
Post by sabrina12 on Oct 11, 2012 9:57:30 GMT -5
I agree with the paper about how a system would have to be constructed to reach the goal of an AGI, but to me it feels like talking about travelling to Pluto: it seems so far away.
1. The AGI would have to start at a certain point ("birth"). What would you say is already implemented at that point?
2. Are there predefined goals, or does the AGI have to find its own goals? And if the latter, how do we know which goal the AGI is following and whether it is useful for human beings?
Challenge: The paper says that the new programming languages should be constructed to be very simple. But is it the right way to let the AGI control the whole programming of itself? How can we ensure that we can still understand what the AGI has implemented once it has grown into a really complex system? I think there are limitations we have to set in an AGI just to understand what it is doing and whether its goals are going in the right direction or in the direction of horrible crap. I think we will have situations in which we have to interrupt the AGI and implement something that helps it develop better and faster.
Post by paolo on Oct 11, 2012 11:29:47 GMT -5
I agree with both Fabrizio and Sabrina. The paper is really interesting, but the more I read about AGI, the further away in time the building of an AGI seems to me.
1- I have asked myself many times the second question posted by Sabrina: "Are there predefined goals, or does the AGI have to find its own goals? And if the latter, how do we know which goal the AGI is following and whether it is useful for human beings?". Human children do whatever their parents tell them to do (e.g. go to school). But if we leave a child in an unknown room or space, he or she will surely start moving around and exploring the environment, and, based on the feedback received, will learn some things rather than others. I suppose this is because human children are curious, and this curiosity lets them set new goals for themselves. Thus, in my humble opinion, an AGI should be able to reach predefined goals as well as to find its own goals. Don't you agree? (A toy sketch of such curiosity-driven goal setting follows these questions.)
2- And here arrives Fabrizio's second question: "How can an AGI come to want to learn something by itself?". I suppose most of our own goals are derived from our interests. We keep ourselves up to date mostly in the fields where we already have the most knowledge (e.g. we read more papers about AI than about medicine), so we can say that we want to learn more about these fields than about others. Still, not everything we want to learn is restricted to the fields we know best (e.g. we might want to know how to prepare a cake that we really like...). So, should an AGI be able to have interests and curiosities?
3- Regarding the second part of Sabrina's second question ("how do we know which goal the AGI is following and whether it is useful for human beings?"): if we are speaking about a "general intelligence", why should its goals (or some of them) have to be useful to human beings? I actually think that most of the things people do are not really useful to humankind. Don't you agree?
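Here is the tiny sketch I promised above (entirely invented by me, not from the paper): an agent that always picks as its next goal whatever it currently predicts worst, the way a child gravitates to whatever still surprises it.

```python
import random

# Toy curiosity loop: the agent tracks a prediction error per activity
# and always "wants" the activity it currently understands least.
errors = {"stack blocks": 0.9, "open door": 0.6, "name colors": 0.3}

for step in range(6):
    goal = max(errors, key=errors.get)        # self-generated goal
    errors[goal] *= random.uniform(0.6, 0.9)  # practice reduces surprise
    print(f"step {step}: explores '{goal}', error now {errors[goal]:.2f}")
```

Even in this toy, no one ever tells the agent "go and learn this!"; the goals emerge from what it does not yet understand.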
Challenge:
As I wrote before, I really like this paper; reading it, I could not find myself in disagreement with anything that was written. However, the paper seems to me more a criticism of current systems than an approach to building an AGI that is applicable in the real world with today's knowledge. For example [on ASIMO]: "The resulting system has all of the same crippling flaws as the vast majority of such systems built before it: It is brittle, and complex to the degree that it is becoming exponentially more expensive to add features to it - we are already eyeing the limit. Worst of all, it has still not addressed - with exception of a slight increase in breadth - a single one of the key features listed above as necessary for AGI systems." Surely, compared to what an AGI should do, ASIMO is extremely limited in what it can do. Nevertheless, isn't it probably the most advanced AI system we are able to build today?
Moreover, I see an AGI, like a human being, as a set of multiple, different and not necessarily related "modules" (systems), such as: a learning system, a reasoning system, a communication system (I/O), a visual system (I), a locomotor system (O) and, of course, a control system that coordinates all of them. Now, think for example about the communication system; for simplicity, a written one (e.g. an AI chatbot). Alone, it is a "narrow" AI. However, if it is designed properly (knowing that it will be a part of a complex system), I think it can be successfully integrated with other systems and perhaps arrive at something near an AGI.
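As a sketch of what I mean by modules plus a control system (all names and interfaces invented by me for illustration), the controller could simply route each percept to whichever narrow module claims it:

```python
# Each narrow module handles one kind of percept; a controller routes
# inputs to whichever module claims them.

class ChatModule:
    def can_handle(self, percept):
        return isinstance(percept, str)
    def process(self, percept):
        return f"reply to: {percept}"

class VisionModule:
    def can_handle(self, percept):
        return isinstance(percept, bytes)
    def process(self, percept):
        return f"saw {len(percept)} bytes of image data"

class Controller:
    def __init__(self, modules):
        self.modules = modules
    def handle(self, percept):
        for m in self.modules:
            if m.can_handle(percept):
                return m.process(percept)
        return "no module available"  # the gap an AGI would have to fill itself

controller = Controller([ChatModule(), VisionModule()])
print(controller.handle("hello"))        # routed to ChatModule
print(controller.handle(b"\x89PNG..."))  # routed to VisionModule
```

The paper's point, as I understand it, is that the "no module available" branch is exactly where a constructionist system breaks down and a constructivist one would have to build a new module on its own.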
Post by palli on Oct 11, 2012 17:10:45 GMT -5
If AGI needs attention, which I think it does, isn't the flip-side then attention deficit? And that seems to be a drawback (think of self-driving cars, or robots in general). Do we really want AGI? Is there a way around being flawed (without infinite resources)? I guess a hybrid of AGI and fool-proof weak AI (for self-driving, for instance) could be made, but would that solve the problem? We could compare that to conscious thinking and a lower-level sub-conscious. Could we connect those two systems more intimately than the way we control computers (weak AI) now? I guess we should concentrate on getting AGI alone to work first.
You say: “the cognitive architecture that it implements, need not be isomorphic [35]”.
Does that mean it does not have to be a neural network (or a connectionist structure)? For associative memory, a kind of mind map, isn't that the best structure?
You say: "The science of self-organization is a young discipline that has made relatively slow progress (cf. [30, 46]). As a result, concrete results are hard to come by (cf. [14])"
[14] Iizuka, H. and Paolo, E. A. D. (2007). Toward spinozist robotics: Exploring the minimal dynamics of behavioural preference, Adaptive Behavior 15(4), pp. 359–376.
Does [14] contain actual results? It seems to use neural networks...
Some nitpicking: "the largest known natural neural networks [24]" - isn't C. elegans's nervous system one of the smallest known neural networks? I know all 302 neurons are mapped, but is it understood? What do you mean by "known"?
This text is not clear to me: ".. may need to change addition to subtraction in particular places. In this case the operation being modified is addition. In a large system we are likely to find a vast number of such cases where, during its growth, a system needs to modify its operation at such low levels of functional detail."
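If I read it right, the point is that the operation at a given site must be data the system can rewrite, not frozen syntax. A toy illustration (my own, with invented names) of "changing addition to subtraction in particular places":

```python
import operator

# The operation at one site is a replaceable slot, not hard-wired syntax,
# so the system can rewrite it when feedback says the result is wrong.
slots = {"update_score": operator.add}   # the site currently uses addition

def update_score(score, penalty):
    return slots["update_score"](score, penalty)

print(update_score(10, 3))  # 13 -- but suppose feedback says penalties
                            # should lower the score, not raise it

slots["update_score"] = operator.sub  # the system modifies its own operation
print(update_score(10, 3))  # 7 -- behavior changed without touching the caller
```

Is this roughly the kind of low-level self-modification the text means, multiplied over a vast number of such sites?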
You say: This scaling problem cannot be addressed by the usual “we’ll wait for Moore’s law to catch up” [19] because the issue does not primarily revolve around speed of execution
What do you mean by that? The reference is from 1965 (and doesn't address AI in particular?). People usually think of (serial) speed, but Moore's "law" actually refers to the number of transistors on a chip, not speed directly. Of course, more transistors are not a sufficient condition for AGI, but aren't they a necessary one? We are now forced into parallel architectures and programming - in the direction of the brain. Doesn't that change things?
Post by kristjan on Oct 11, 2012 18:00:02 GMT -5
Nice paper; it was good to read.
1) First of all, what is a pan-architecture? And can you describe transversal functions further?
When humans learn to use their body, they don't start with an adult-size body the way a robot could, and children don't have the means to influence the world the way adults can with theirs. Having a smaller and weaker body can be thought of as making it less likely to harm oneself and the surrounding environment while experimenting with the physical world without yet having good control of one's own body. 2) Do you think it is better for a robot to start learning about the physical world in a smaller constructed physical form, come to understand it, and then be transferred to a larger body?
Challenge: My challenge is to what Fabrizio said at the end of his challenge, that humans have the capacity for adaptation while AGI does not. I believe that an AGI can adapt once it reaches the point where it can experiment with and learn about the world, and update itself to function properly within it to achieve its goals. I see no reason why it should be impossible for robots to have these capabilities in the future.
Post by helgil on Oct 11, 2012 19:22:34 GMT -5
1) Which has received the most attention so far: new development environments, new programming languages, or architectural metaconstruction principles? And which of these is the most important for achieving an AGI?
2) P. 23: "Perhaps fractal architectures – exhibiting self-similarity at multiple levels of granularity – based on simple operational semantics is just that principle." What are fractal architectures and self-similarity?
Challenge: Let's say we have an input that looks like this:
1 2 3 4
1 + 1 = 2
1 + 2 = 3
We feed it to an AGI that has just been booted and knows nothing, not even numbers, addition or the equals sign, but we want it to answer the question "1 + 3 =". For this to work, it needs to figure out that there is a sequence, that there is addition, and how addition works. Using a constructionist method we might do something like build a sequence module and an operator module that then figures out an addition module. Making the paradigm shift towards a constructivist approach, then what? Where would we begin?
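One very crude place to begin (a toy of my own, not something from the paper, and it cheats by hard-coding the candidate space): treat the worked examples as constraints, search for an operation consistent with them, then apply it to the unseen query.

```python
import operator

# The observed input: a sequence and two worked examples.
sequence = [1, 2, 3, 4]
examples = [((1, 1), 2), ((1, 2), 3)]  # "1 + 1 = 2" and "1 + 2 = 3"
candidates = [operator.add, operator.sub, operator.mul]

def consistent(op):
    """True if the candidate operation reproduces every worked example."""
    return all(op(a, b) == out for (a, b), out in examples)

# "Learn" addition by elimination, then answer the query "1 + 3 =".
learned = next(op for op in candidates if consistent(op))
print(learned(1, 3))  # -> 4
```

The hard constructivist question is of course where the candidate space itself would come from: here I smuggled it in by hand, which is exactly the constructionist move the paper argues against.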
Post by krummi on Oct 11, 2012 20:41:57 GMT -5
First of all: Fantastic read - well written, straight to the point, succinct and so on. Seriously.
I'm going to start mine off with the challenge:
The contents of the chapter make a lot of sense to me. As you discuss, constructionist approaches just seem to have theoretical upper bounds on what they can achieve. The constructivist approach you describe does not seem to suffer from the same shortcomings (at least not theoretically!), as it seems capable of much more generality. But I'm having such a hard time imagining how a constructivist architecture would be able to LEARN to solve a task like, say, speech recognition - something that is already a very hard problem to solve with super-specific AI architectures. You might get this a lot, but don't we need more than new programming languages and new methodologies for AGI to be even a remote possibility?
Questions:
(1) "I am aware of only one architecture that has actually implemented such an approach, the Loki system, [..]. The system implemented a live virtual performer in the play Roma Amor which ran for a number of months at Cite des Sciences et de L'Industrie in Paris in 2005, proving beyond question that this approach is tractable."
Could you elaborate in some way on this? Specifically, how did the implementation prove that the approach was tractable?
and
(2) You mention more than once that the practicality of the constructivist approach is yet to be evaluated. So my question is: to what extent has it been evaluated so far, especially when it comes to things like feedback loops, pan-architectural pattern matching and architecture metaconstruction?