|
Post by K. R. Thórisson on Sept 10, 2012 11:44:48 GMT -5
Friday's questions should be based on this paper: A Mind Model for Multimodal Communicative Creatures and Humanoids (xenia.media.mit.edu/%7Ekris/ftp/IJAAI.pdf). At least one question should be on a particular issue addressed in the paper; the other may bring up tangentially related matters, but should nevertheless stay on the main theme(s) presented in the paper.
|
|
|
Post by kristjan on Sept 12, 2012 19:00:53 GMT -5
1. The Summary & Conclusions section says: "Current work on the architecture focuses on adding the missing features (described above) of the Gandalf prototype". Can you tell us anything about how work on the Ymir architecture has progressed since then?
2. How would you rate Asimo's communication skills?
3. Are there any other interesting projects that use communication to let humanoids interact with humans?
|
|
|
Post by fabrizio on Sept 13, 2012 9:23:29 GMT -5
I like the idea of a conversation between a machine and a human, and in particular the integration of multimodal events to let the machine understand what the human says. I find this a good way to close the gap left by relying on a single channel such as voice recognition. A human communicates not only with the voice but also with the body, with facial expressions, and with gestures; a machine capable of understanding all of these has more information with which to handle the human's requests. This is a good point that emerges from these studies.
These are some questions that came to my mind after reading the paper:
1) How did the idea for the Ymir architecture come about? 2) What improvements have been made to the architecture since then? 3) What is the current state of the art? 4) Are blackboards still used for communication between processes?
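To make question 4 concrete, here is a minimal sketch of the blackboard pattern the paper describes conceptually: perception modules post timestamped multimodal events to a shared store, and a decision module reads back recent events and fuses them. All names here (Percept, Blackboard, fuse) are hypothetical illustrations, not the Ymir implementation.

```python
import time
from dataclasses import dataclass, field

@dataclass
class Percept:
    modality: str      # e.g. "speech", "gesture", "gaze"
    content: str
    timestamp: float = field(default_factory=time.time)

class Blackboard:
    """Shared message store: perception modules write, deciders read."""
    def __init__(self):
        self._percepts = []

    def post(self, percept):
        self._percepts.append(percept)

    def query(self, modality=None, since=0.0):
        return [p for p in self._percepts
                if (modality is None or p.modality == modality)
                and p.timestamp >= since]

def fuse(board, window=1.0):
    """Toy fusion rule: speech plus a gesture in the same time window
    yields a combined interpretation; speech alone is used as-is."""
    now = time.time()
    speech = board.query("speech", since=now - window)
    gesture = board.query("gesture", since=now - window)
    if speech and gesture:
        return f"{speech[-1].content} + {gesture[-1].content}"
    return speech[-1].content if speech else None

board = Blackboard()
board.post(Percept("speech", "put that"))
board.post(Percept("gesture", "pointing at block A"))
print(fuse(board))  # → put that + pointing at block A
```

The point of the blackboard is decoupling: modules never call each other directly, so new modalities can be added by posting a new percept type without changing the decider.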
|
|
|
Post by paolo on Sept 13, 2012 13:06:53 GMT -5
1) Reading this paper, the first question that came to my mind is: what are the future steps and improvements planned for this architecture?
2) When we speak about AGI, we always compare it with human-level intelligence. Of course, human-level intelligence is the only baseline against which we can compare our AIs. But sometimes it seems to me that we conflate the concept of AGI with that of a "human clone". According to one of our first class discussions, the word "general" in AGI should not be over-interpreted: truly, totally general intelligence, the ability to solve all conceptual problems no matter how complex, is not possible in the real world. Thus, my question is: does an intelligence need a body? And does an AGI?
|
|
|
Post by sabrina12 on Sept 13, 2012 14:17:04 GMT -5
"The speed of the face/hand graphics presented one of the bottlenecks (Thórisson, 1997), limiting the frame-rate to 150 ms, about 50 ms slower than desired for precise human motor simulation." 1. Have you since been able to fix the face/hand graphics bottleneck?
2. Is Gandalf able to take part in group discussions? It was said that he can be either listener or speaker, but is it a problem when several people are talking at once? Can he distinguish between the speakers and interrupt them to say something?
|
|
|
Post by krummi on Sept 13, 2012 15:40:43 GMT -5
1) I want to begin by elaborating on Fabrizio's 3rd question ("What is the state of the art?"). That is, what is the state of the art when it comes to similar architectures, and how do they compare to the Ymir architecture? Has there been much progress since Ymir? Are some of the ideas presented in your paper still in use in state-of-the-art architectures? How do they differ? 2) How does the Ymir architecture relate to the HUMANOBS project (wiki.humanobs.org/public:about)? I'm guessing that the HUMANOBS project is much more involved, but is there any overlap?
|
|
|
Post by palli on Sept 13, 2012 17:02:51 GMT -5
How did the lack of computing power at the time influence the development of the architecture? The same question applies to current architectures in general, since some are quite old. What would you do differently now, with more computing power? I'm not so much thinking about the 3D graphics, which can always be improved, but about the core thinking part.
And how does the recent end of serial CPU performance improvements affect the above?
|
|
|
Post by helgil on Sept 13, 2012 19:25:24 GMT -5
1.
2. Has Ymir, or anything similar, been tested with modern hardware like the Kinect? What sort of difficulties might one run into while attempting such a thing?
|
|