|
Post by K. R. Thórisson on Sept 17, 2012 8:06:55 GMT -5
For Friday's class the paper to comment on is this one by yours truly and Helgi P. Helgason: versita.metapress.com/content/052t1h656614848h/fulltext.pdf
Please be reminded that at least one of your questions should demonstrate that you have read the full paper -- it should be of the kind that is difficult to ask without having read it in reasonable detail. Also, at this point you have read a substantial number of papers and are starting to understand more -- it is preferable that at least one of your two (or more) questions relate to other readings and topics in the course so far.
|
|
|
Post by helgil on Sept 19, 2012 7:34:21 GMT -5
1. P.14 Re. NARS: "Space is also addressed, with bag-based memories being suggested, as memory is finite and it can be expected that items will need to be added and removed frequently during operation." What is bag-based memory and how does it work?
2. Joscha Bach points out that children, when asked what a mother is, give answers like "someone who brings a blanket when you're cold" or "someone who brings food when you're hungry," while a computer might say something like "a female that has offspring." The difference between these answers is that the kids define the concept by its usefulness, while the computer's answer, though it may be correct, is essentially useless information in the context of survival. Just how useful is the information that an AI such as Watson has? Would the information an AGI has be something completely different from that kind of information? Are there any meta-learning algorithms available that deal with this kind of problem, or are they ad hoc?
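On the bag-based memory question above: a "bag" in NARS is usually described as a fixed-capacity store in which items carry a priority, retrieval is probabilistic in proportion to priority, and the lowest-priority item is forgotten when the store overflows. The following sketch is my own illustration of that general idea, not code from the paper; the class and method names (`Bag`, `put`, `take`) are invented for the example.

```python
import random

class Bag:
    """Illustrative sketch of a NARS-style 'bag' memory:
    fixed capacity, priority-weighted random retrieval, and
    eviction of the lowest-priority item when the bag is full."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = {}  # key -> priority in (0, 1]

    def put(self, key, priority):
        """Insert (or re-prioritize) an item; forget the least
        useful item if capacity is exceeded, keeping memory bounded."""
        self.items[key] = priority
        if len(self.items) > self.capacity:
            worst = min(self.items, key=self.items.get)
            del self.items[worst]

    def take(self):
        """Remove and return one item, chosen at random with
        probability proportional to its priority (roulette-wheel)."""
        keys = list(self.items)
        weights = [self.items[k] for k in keys]
        choice = random.choices(keys, weights=weights, k=1)[0]
        return choice, self.items.pop(choice)
```

This captures why bags suit real-time operation: insertion and forgetting are cheap constant-cost steps, so items can be added and removed frequently without the memory growing without bound.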
|
|
|
Post by sabrina12 on Sept 20, 2012 13:03:43 GMT -5
1. Is the differentiation between reflex behavior and long-term goals also part of the real-time approach? I mean that there are different layers, like in Ymir, but is that part of the real-time approach or part of the goal planning?
2. And suppose we have the following situation: the robot is programmed to react reflexively to save a person whenever a human being is in danger. But given only a short reaction time, how should the robot react to kids playing cowboys with a plastic gun? How is it possible to differentiate serious from playful danger within a very short time?
3. You say that Ikon Flux ranks relatively high on your autonomy scale. The paper is quite up-to-date, but are there new developments pushing Ikon Flux further in the direction of autonomy? What is the intended application of Ikon Flux, or should it be able to learn anything?
|
|
|
Post by palli on Sept 20, 2012 16:23:20 GMT -5
In "Self-Programming: Operationalizing Autonomy", you and Nivel say: "We introduce Ikon Flux, a proto-architecture".
1. What is the difference between Ikon Flux and AERA? What did you learn from Ikon Flux (what was wrong with it)? Why is AERA not in the comparison?
2. Ikon Flux gets the highest score. Are you maybe biased?
3. You admit that Ikon Flux gets a lower score on learning (but not meta-learning). What are the flaws (or known drawbacks compared to the three architectures with a higher learning score) in Ikon Flux in that regard, and in other categories? Can you combine Ikon Flux with some of the better learning architectures, or add their ideas, or are the architectures incompatible?
4. You say the four categories are important for autonomy. But are they sufficient for a thinking (child) AGI (at an architectural level)? What would your opponents in AI (but still within AGI) most likely (rightly or wrongly) criticize in the Ikon Flux architecture, or in general about the importance of these four criteria?
Update: What do you mean by "sub-symbolic" learning? On Ikon Flux: "Not being limited to reasoning for learning gives the architecture a considerable advantage in dealing with procedural learning compared to architectures like NARS and OSCAR" and "Ignoring practical matters clearly delays most useful applications of the technologies being developed." If you score Ikon Flux lower on learning, is that only on current hardware (a practical matter), or do you view learning as an emergent property -- that is, will it automatically get better (and better than the other architectures) with more hardware resources?
|
|
|
Post by paolo on Sept 20, 2012 18:24:08 GMT -5
[5. Autonomy Discussion]: "VARIAC is an architecture with seemingly similar goals of open-ended self-growth (Hall, 2008). However, insufficient information about this architecture has prevented us from making a sufficiently thorough comparison of it to the other architectures reviewed here." 1 - I know that the paper is up-to-date, but do you now have more information about this architecture? Could it be compared with the others named in the paper?
[3.1 Ymir]: "Recent Ymir-based systems have been outfitted with reinforcement learning modules (Jonsdottir & Thórisson, 2008)." 2 - Which (Ymir-based) systems? And, above all, can you tell us more about these reinforcement learning modules?
[3.4 NARS]: "Space is also addressed, with bag-based memories being suggested, as memory is finite and it can be expected that items will need to be added and removed frequently during operation." 3 - Since no references are provided... what is a bag-based memory?
|
|
|
Post by krummi on Sept 20, 2012 18:48:15 GMT -5
(1) "While autonomy can be envisioned without meta-learning, introspection and meta-learning provide the system with ways to change its own internal workings in a directed fashion, giving rise to self-growth and enabling significantly higher levels of autonomy. The following sections discuss each of these themes in turn." (page 5).
This hints that some of your colleagues in the AGI community do not think that meta-learning is essential for a highly autonomous system. Later in the paper it is said that the author of Soar does not think that control of attention is a central cognitive capability (page 12). This leads me to the question: is there much debate in the AGI community when it comes to identifying the essential themes that systems must possess to reach high autonomy?
(2) Most of the architectures that the paper reviews seem to rely on the constructionist AI approach (e.g. Soar with its components, Ymir with its modules, etc.), except Ikon Flux, which relies on the constructivist approach. Actually, the approach Ikon Flux takes seems radically different from the other architectures. The approach does make a lot of sense to me, but how have others responded? With skepticism? Interest? Do you think this peewee granularity you guys talk about will be a game-changer in the AGI field?
|
|
|
Post by kristjan on Sept 20, 2012 18:59:29 GMT -5
I noticed in table 10 that the total score for Ikon Flux is given as 17, but the pluses across "Realtime", "Resource Management", "Learning" and "Meta-learning" sum to 19, not 17.
1. Ikon Flux is ranked highest in the dimension of autonomy in table 10 according to the estimates. Do you think the other architectures, like CLARION, LIDA and NARS, can be altered to exceed Ikon Flux in autonomy in the future?
2. How well does the AERA architecture for the HUMANOBS project handle "Realtime", "Resource Management", "Learning" and "Meta-learning"?
|
|
|
Post by fabrizio on Sept 20, 2012 19:03:11 GMT -5
While surfing the Internet, I found a table that compares different cognitive architectures, started in 2009 and last updated June 18, 2012. Maybe it can be a useful instrument for getting an idea of more architectures than those described in the paper. This is the link to the table: bicasociety.org/cogarch/architectures.htm
1) In recent classes we talked about the fact that the programming languages we use are in a certain way "more for humans than for an AI", but we also need a language that lets a machine grow and adapt in the way the paper calls autonomy. Are there any ideas about which language we could use?
2) Is the capability that some architectures have to "understand" what is useful, as opposed to what is not, really efficient?
3) This question is not meant as a provocation, but I would like to know how the professor would reply to a person who wrote on a webpage: "Not surprisingly, the system that the authors worked on, Ikon Flux, scored the best. Other evaluation methods would probably give a different ranking." Obviously, in my personal opinion, I do not have sufficient knowledge of cognitive architectures, but following the reasoning in the paper I am led to believe that Ikon Flux really does seem better than the other architectures, so I want to hear how the professor would reply.
|
|