|
Post by K. R. Thórisson on Oct 12, 2012 9:27:45 GMT -5
Please re-write your questions and challenges from the last exercise, in as concise a form as you can, with *one post per question*.
|
|
|
Post by fabrizio on Oct 12, 2012 11:10:44 GMT -5
1) In the case of a machine that can create its own code in order to adapt and evolve, how can we regulate the code it generates? Should we fix some specific rules to avoid the creation of useless code?
|
|
|
Post by fabrizio on Oct 12, 2012 11:11:09 GMT -5
2) Should an AGI have "the will"? How can an AGI come to want to learn something by itself (without an input that tells the AGI: go and learn this!)?
|
|
|
Post by fabrizio on Oct 12, 2012 11:13:05 GMT -5
3) "challenge": At the state of the art, I think that we can't create an AGI independent from the context. We cannot create an AGI able to adapts in a not limitated sets of context (that means without the human limitations). I think that the capacity of adaptation is something that a human being has while an AGI can't have.
|
|
|
Post by paolo on Oct 12, 2012 12:55:29 GMT -5
I have asked myself many times the second question posted by Sabrina: "Are there predefined goals, or does the AGI have to find its own goals?" Human children do whatever their parents tell them to do (e.g. go to school). But if we leave a child in an unknown room, he/she surely starts moving around and exploring the environment. I suppose this is because human children are curious, and this curiosity leads them to set new goals for themselves. Thus, in my humble opinion, an AGI should be able to pursue predefined goals as well as to find its own goals. Don't you agree?
|
|
|
Post by paolo on Oct 12, 2012 13:27:14 GMT -5
Should an AGI be able to have its own interests and curiosities? If yes, how? If not, how can it decide its own goals?
|
|
|
Post by paolo on Oct 12, 2012 13:34:06 GMT -5
Speaking about the goals of an AGI, Sabrina posted the question: "[...] how do we know which goal the AGI is following and whether it is useful for human beings?" But if we are speaking about a "general intelligence", why should its goals (or some of them) have to be useful to human beings? Most of the things that people do are not really useful to humankind. But if that is so, why are we creating an AGI?
|
|
|
Post by paolo on Oct 12, 2012 13:34:21 GMT -5
Reading this paper, I can only agree with what has been written. The constructionist approach is not suitable for building an AGI; Honda's ASIMO is a clear example. Although it is probably the most advanced AI system we have been able to build (isn't it?), it is extremely limited in what it can do. The constructivist approach, instead, seems more suitable for building an AGI, at least in theory. But how big is the step from theory to practice?
|
|
|
Post by sabrina12 on Oct 14, 2012 6:23:02 GMT -5
1. The AGI would have to start at a certain point ("birth"). What would you say is already implemented at that point?
|
|
|
Post by sabrina12 on Oct 14, 2012 6:23:31 GMT -5
2. Are there predefined goals, or does the AGI have to find its own goals? And if the latter, how do we know which goal the AGI is following and whether it is useful for human beings?
|
|
|
Post by sabrina12 on Oct 14, 2012 6:25:23 GMT -5
Challenge: The new programming languages should be constructed to be very simple. But is it the right way to let the AGI control the whole programming of itself? How can we ensure that we can understand what the AGI has implemented once it has grown into a really complex system? I think there are limitations we have to set in an AGI just to understand what it is doing, and whether its goals are going in the right direction or in the direction of horrible crap. I think there will be situations in which we have to interrupt the AGI and implement something that helps it develop better and faster.
|
|
|
Post by kristjan on Oct 14, 2012 21:04:48 GMT -5
First of all, what is a pan-architecture? Also, can you describe transversal functions further?
|
|
|
Post by kristjan on Oct 14, 2012 21:06:14 GMT -5
Do you think it is better for a robot to start learning about the physical world in a smaller constructed physical form, come to understand it, and then be transferred to a larger body?
|
|
|
Post by kristjan on Oct 14, 2012 21:08:44 GMT -5
Challenge: I believe that an AGI can adapt if it reaches the point where it can experiment with and learn about the world, and update itself to function properly within it to achieve its goals. I don't see any reason why it should be impossible for robots to have these capabilities in the future.
|
|
|
Post by helgil on Oct 15, 2012 16:59:17 GMT -5
1) Which has received the most attention so far: new development environments, programming languages, or architectural metaconstruction principles? And which of these is the most important for achieving an AGI?
|
|
|
Post by helgil on Oct 15, 2012 16:59:49 GMT -5
2) P. 23: "Perhaps fractal architectures – exhibiting selfsimilarity at multiple levels of granularity – based on simple operational semantics is just that principle." What are fractal architectures and self-similarity?
|
|
|
Post by helgil on Oct 15, 2012 17:00:52 GMT -5
Challenge: Let's say we have an input that looks like this:
1 2 3 4 1 + 1 = 2 1 + 2 = 3
We feed it to an AGI that has just been booted and knows nothing, not even numbers, addition, or the equals sign, but we want it to answer the question 1 + 3 =. For this to work, it needs to figure out that there is a sequence, that there is addition, and how addition works. Using a constructionist method we might do something like build a sequence module and an operators module that then work out an addition module; a toy sketch along those lines follows below. Making the paradigm shift to a constructivist approach, then what? Where would we begin?
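To make the constructionist half of this concrete, here is a toy Python sketch (the module names, the parsing scheme, and the generalization step are all my own illustrative assumptions, not from the chapter). Note that the designer, not the system, supplies every piece of structure, which is exactly the limitation the constructivist shift is meant to address:

def parse_examples(tokens):
    # Hand-coded "sequence/operators module": scan the flat token
    # stream for patterns of the form  a + b = r.
    facts = []
    for i in range(len(tokens) - 4):
        a, op, b, eq, r = tokens[i:i + 5]
        if op == "+" and eq == "=":
            facts.append((int(a), int(b), int(r)))
    return facts

def addition_module(facts, a, b):
    # Hand-coded generalization: hypothesize that the operator is
    # integer addition and check the hypothesis against every example.
    if all(x + y == r for x, y, r in facts):
        return a + b
    raise ValueError("examples are inconsistent with '+'")

tokens = "1 2 3 4 1 + 1 = 2 1 + 2 = 3".split()
print(addition_module(parse_examples(tokens), 1, 3))  # -> 4

The constructivist version would have to arrive at something functionally like parse_examples and addition_module on its own, which is precisely where it is hard to say where to begin.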
|
|
|
Post by krummi on Oct 15, 2012 19:15:19 GMT -5
(1) "I am aware of only one architecture that has actually implemented such an approach, the Loki system, [..]. The system implemented a live virtual performer in the play Roma Amor which ran for a number of months at Cite des Sciences et de L'Industrie in Paris in 2005, proving beyond question that this approach is tractable."
Could you elaborate in some way on this? Specifically, how did the implementation prove that the approach was tractable?
|
|
|
Post by krummi on Oct 15, 2012 19:15:41 GMT -5
(2) You mention more than once that the practicality of the constructivist approach is yet to be evaluated. So my question is: to what extent has it been evaluated so far, especially when it comes to things like feedback loops, pan-architectural pattern matching, and architecture metaconstruction?
|
|
|
Post by krummi on Oct 15, 2012 19:16:57 GMT -5
The challenge:
The contents of the chapter make a lot of sense to me. As you discuss, constructionist approaches seem to have theoretical upper bounds on what they can achieve. The constructivist approach you describe does not seem to suffer from the same shortcomings (at least not in theory), as it seems capable of much more generality. But I am having a hard time imagining how a constructivist architecture would be able to LEARN to solve a task like, say, speech recognition, i.e. something that is already a very hard problem to solve even with super-specific AI architectures. You might get this a lot, but: don't we need more than new programming languages and new methodologies for AGI to be even a remote possibility?
|
|
|
Post by K. R. Thórisson on Oct 16, 2012 6:24:10 GMT -5
First of all, what is a pan-architecture? Also, can you describe transversal functions further?

Pan-architectural functions are cognitive functions that are difficult or impossible to implement without affecting or involving a significant part of the architecture. An example is learning, which will of course affect not only that which is learned but also other related things, including symbolic/logical deductions and inductions, and the transfer of knowledge to other things. And what about learning to learn – getting better at learning – doesn't this improve over time (albeit more slowly than the task learning)? If a system is to be built in such a way as to improve its general learning skills – which potentially affects the learning of *any* task – then that learning must in some way operate on a pan-architectural basis. Another example of a pan-architectural function is cognitive development. Since cognitive development by definition affects the operation of the whole system – remember my example from Piaget about how a child learns the concept of a liquid taking up some volume – cognitive development as seen in nature is not possible without some pan-architectural functions, e.g. improvements in the performance of functions such as memory, learning, and motor control, enabled by modifications to the way they operate.
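A toy rendering of the learning-to-learn point (my own illustration, not anything from the chapter): a single meta-level parameter that every module's learning depends on, so that improving it touches the whole architecture at once rather than one task:

modules = {"vision": 0.4, "motor": 0.4, "language": 0.4}  # per-module skill level
meta_learning_rate = 0.1  # shared by every module: the pan-architectural knob

def learn(module, feedback):
    # Ordinary, local learning inside one module.
    modules[module] += meta_learning_rate * feedback

def learn_to_learn(progress):
    # Meta-learning: if recent progress was good, learn more boldly.
    # Changing this one parameter alters learning in *every* module.
    global meta_learning_rate
    meta_learning_rate *= 1.1 if progress > 0 else 0.9

learn("vision", 0.5)
learn_to_learn(progress=0.5)
learn("motor", 0.5)  # now learns slightly faster than vision did
print(modules, round(meta_learning_rate, 3))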
|
|
|
Post by palli on Oct 22, 2012 16:40:27 GMT -5
I challenge you to show us a system that pays attention without being single-minded. And since an AGI needs selective attention, isn't the flip side then attention deficit? Is there a spectrum from single-mindedness to attention to attention disorder? How do we avoid attention disorder, or achieve a good balance?
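One way to picture this spectrum, as a purely illustrative sketch of my own: treat attention as a softmax allocation over competing stimuli, with a single temperature parameter sliding the system between single-mindedness and scatter. The balance asked about then becomes the choice of that parameter:

import math

def attention_weights(salience, temperature):
    # Softmax allocation of attention over competing stimuli.
    # Low temperature: near winner-take-all (single-mindedness).
    # High temperature: near-uniform weights (scattered attention).
    exps = [math.exp(s / temperature) for s in salience]
    total = sum(exps)
    return [e / total for e in exps]

salience = [3.0, 1.0, 0.5]  # hypothetical saliences of three stimuli
print(attention_weights(salience, 0.1))   # ~[1.0, 0.0, 0.0]: single-minded
print(attention_weights(salience, 10.0))  # ~[0.38, 0.32, 0.30]: scattered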
|
|
|
Post by palli on Oct 22, 2012 16:40:53 GMT -5
If you can't have attention without attention disorder (to some degree), isn't that a big drawback (think of self-driving cars, or robots in general)? Do we really want AGI? Is there a way around being flawed (without infinite resources)? I guess a hybrid of AGI and fool-proof weak AI (for self-driving, for instance) could be made, but would that solve the problem? We could compare that to conscious thinking and a lower-level subconscious.
I guess we should concentrate on getting AGI alone to work first?
Could we connect AGI and weak AI more intimately than, say, how we control computers (weak AI)? And somehow without the drawbacks of AGI?
The rest is in the original thread.
|
|
|
Post by K. R. Thórisson on Nov 14, 2012 9:42:35 GMT -5
1) In the case of a machine that can create its own code in order to adapt and evolve, how can we regulate the code it generates? Should we fix some specific rules to avoid the creation of useless code?

Absolutely – there is no way around this. We must provide the system with a few top-level goals, and probably some additional high-level goals intended to steer the evolution of the agent in some sensible way. One obvious and sensible high-level goal is to have the system avoid self-destruction. How to program this as a high-level goal (or goals) is not obvious, however.
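As a rough illustration of what such regulation could look like (my own sketch; the chapter does not prescribe a mechanism, and all names here are hypothetical), each self-generated code change could be tested against designer-supplied top-level goal checks before being adopted:

def preserves_self(change):
    # Top-level goal: never remove safety-critical components.
    return "safety_monitor" not in change.get("removes", [])

def is_useful(change):
    # Steering goal: reject changes with no predicted benefit,
    # to avoid accumulating useless code.
    return change.get("predicted_gain", 0.0) > 0.0

TOP_LEVEL_GOALS = [preserves_self, is_useful]

def regulate(change):
    # A self-generated change is adopted only if every goal check passes.
    return all(goal(change) for goal in TOP_LEVEL_GOALS)

print(regulate({"removes": [], "predicted_gain": 0.2}))                  # True
print(regulate({"removes": ["safety_monitor"], "predicted_gain": 0.9}))  # False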
|
|
|
Post by K. R. Thórisson on Nov 14, 2012 9:44:47 GMT -5
2) Should an AGI have "the will"? How can an AGI come to want to learn something by itself (without an input that tells the AGI: go and learn this!)?

A system that is supposed to get smarter and/or more knowledgeable over time must have some sort of curiosity. This curiosity, however implemented and however it operates, has the role of steering the "idle time" of the system in such ways as to increase the system's knowledge in sensible ways. In other words, we are talking about a "drive" – or high-level goal – that ensures that the system predictably moves towards more useful knowledge over time.
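A toy sketch of one way such a curiosity drive could operate (my own illustration, not the chapter's mechanism): during idle time the system probes whatever part of the world its own predictions currently fit worst, so its knowledge predictably grows where it is weakest:

world = {"door": 0.9, "mirror": 0.2, "window": 0.6}  # true regularities
model = {"door": 0.5, "mirror": 0.5, "window": 0.5}  # the system's beliefs

def prediction_error(topic):
    return abs(world[topic] - model[topic])

def idle_step(learning_rate=0.5):
    # The curiosity "drive": spend idle time on whatever the system
    # currently understands worst, then update the model from what is observed.
    topic = max(model, key=prediction_error)
    model[topic] += learning_rate * (world[topic] - model[topic])
    return topic

for _ in range(6):
    print(idle_step(), {k: round(v, 2) for k, v in model.items()})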
|
|
|
Post by K. R. Thórisson on Nov 14, 2012 9:48:51 GMT -5
I have asked myself many times the second question posted by Sabrina: "Are there predefined goals, or does the AGI have to find its own goals?" Human children do whatever their parents tell them to do (e.g. go to school). But if we leave a child in an unknown room, he/she surely starts moving around and exploring the environment. I suppose this is because human children are curious, and this curiosity leads them to set new goals for themselves. Thus, in my humble opinion, an AGI should be able to pursue predefined goals as well as to find its own goals. Don't you agree?

You are absolutely right – take a look at my answer to one of Fabrizio's questions on this topic. A system that is intended to acquire its own knowledge and skills over time must have a-priori goals that lead it to do so. One way is that by being dumb it is more likely to be hurt, so it figures out on its own that it should increase its knowledge. Over time it may come to realize that this goal, i.e. to always try to increase its own knowledge, should be operative all the time. Another way is to simply give the system a goal of curiosity. This can be done in many, many ways, and the particular way will depend on the cognitive architecture to some extent. It may be difficult to do this for a large set of domains, so possibly this requires a set of domain-dependent goals, each targeted to particular circumstances.
|
|
|
Post by K. R. Thórisson on Nov 14, 2012 9:50:56 GMT -5
"Are there predefined goals, or does the AGI have to find its own goals?"

Actually, I must add to that – a system that is intended to operate in a variety of domains doing a variety of tasks, some or all of which are not known at design time, MUST be able to come up with its own (sub-)goals. There is no way around that. In our AERA system, for example, learning requires sub-goal generation.
|
|
|
Post by K. R. Thórisson on Nov 14, 2012 9:51:22 GMT -5
Should an AGI be able to have its own interests and curiosities? If yes, how? If not, how can it decide its own goals?

See my answers to Fabrizio and to your other question about autonomous goal generation.
|
|
|
Post by K. R. Thórisson on Nov 14, 2012 9:57:07 GMT -5
1. The AGI would have to start at a certain point ("birth"). What would you say is already implemented at that point?

What must be implemented is some way for the system to avoid self-destruction. In the case of humans this is taken care of by the environment: parents, families, tribes, and societies. Also, some bootstrapping mechanisms must be available for the knowledge acquisition to start – proto-models of how to learn (which could be replaced later), and possibly some "algorithms" for steering the learning in ways that avoid overwhelming the system with too much information, which would drown young acquisition mechanisms and prevent them from developing a foundation for further learning. You recognize this from school: there is no point in studying literature if you have not mastered the alphabet – one relies on the other. I suspect this is also to be found in the development of natural minds, especially humans, and it makes sense for any highly capable cognitive system to come with a-priori knowledge about how to get started. This is essentially what Piaget claimed to have found in children, and although he may not have been correct about the particular cognitive stages, there definitely are some things that it is pointless to teach children before they have reached certain "stages" of cognitive growth.
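The alphabet-before-literature point can be made concrete with a small sketch (mine, purely illustrative): learning material is gated on prerequisite mastery, so the young system is never flooded with input it cannot yet use:

CURRICULUM = {
    "alphabet": [],           # no prerequisites
    "reading": ["alphabet"],
    "literature": ["reading"],
}
mastered = set()

def study(topic):
    # Gate each topic on prerequisite mastery, so the system only
    # takes in material it can actually build on.
    if any(prereq not in mastered for prereq in CURRICULUM[topic]):
        return topic + ": postponed (prerequisites missing)"
    mastered.add(topic)
    return topic + ": learned"

print(study("literature"))  # postponed: "reading" is not yet mastered
print(study("alphabet"))
print(study("reading"))
print(study("literature"))  # now learnable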
|
|
|
Post by K. R. Thórisson on Nov 14, 2012 9:59:49 GMT -5
Challenge: The new programming languages should be constructed to be very simple. But is it the right way to let the AGI control the whole programming of itself? How can we ensure that we can understand what the AGI has implemented once it has grown into a really complex system? I think there are limitations we have to set in an AGI just to understand what it is doing, and whether its goals are going in the right direction or in the direction of horrible crap. I think there will be situations in which we have to interrupt the AGI and implement something that helps it develop better and faster.

Absolutely right. We have achieved some success with AERA in this direction. But we have yet to explore the topic of self-engineering, which is what we call it when systems can verify or even prove that they meet certain constraints and conditions that their designers want to achieve in the final system, even after significant self-development. It is not certain whether self-engineering is viable at all, but since theoretical computer science keeps coming up with mathematical methods for the analysis and verification of ever larger and more complex systems, I am pretty sure someone will at least *try* to design self-engineering into AGIs, whenever they become common.
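A rough sketch of the self-engineering idea (my own illustration; as noted above, whether this is viable at all is an open question): after each round of self-development the system re-checks designer-supplied invariants and rolls back any modification that violates them:

import copy

def responds_to_shutdown(system):
    return system.get("responds_to_shutdown", False)

def keeps_audit_log(system):
    return system.get("keeps_audit_log", False)

DESIGNER_INVARIANTS = [responds_to_shutdown, keeps_audit_log]

def self_develop(system, modification):
    # Apply a self-modification tentatively; commit it only if every
    # designer-supplied invariant still holds, otherwise roll back.
    candidate = copy.deepcopy(system)
    candidate.update(modification)
    if all(check(candidate) for check in DESIGNER_INVARIANTS):
        return candidate
    return system

system = {"responds_to_shutdown": True, "keeps_audit_log": True}
system = self_develop(system, {"keeps_audit_log": False})  # tries to drop the log
print(system["keeps_audit_log"])  # True: the rollback preserved the invariant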
|
|