|
Post by K. R. Thórisson on Nov 20, 2012 10:26:45 GMT -5
For Tuesday November 27 the paper to comment on is the assigned reading on AGI psychometrics (http://kryten.mm.rpi.edu/Bringsjord_Licato_PAGI_071512.pdf).
Please present three questions related to testing AGI systems – "proper tests for AGI systems", or "can we design a quick test for determining the 'level of intelligence' an artificial or natural system has reached?" If "yes", how? If "no", why not?
Please be reminded that at least one of your three questions should demonstrate that you have read the paper -- it should be of a kind that is difficult to ask without having read it in reasonable detail.
At this point you have read a substantial number of papers and are starting to understand more -- and, importantly, you are starting to understand much better what is missing in your understanding of how to achieve AGI. It is preferable that at least one of your three questions relate to other readings and topics in the course so far. It will be a bonus if one or more of the questions relates two of the papers to each other in some way. (Be creative!)
The questions are due at 10 am this coming Tuesday Nov 27.
|
|
|
Post by kristjan on Nov 26, 2012 0:03:16 GMT -5
An AGI may not have human-level intelligence if it fails tests such as the magnet test or the balancing challenge. But what about an AGI with animal-level intelligence? Is the Psychometric Artificial General Intelligence (PAGI) test only meant to test human-level AGI systems?
Do you know whether LISA is a constructionist or a constructivist AI system? If it is not constructionist, do you know whether a constructionist AGI has been tested with PAGI, for example using ASIMO?
For testing AGI, I think the context of what the system is supposed to handle and adapt to should be specified first, and tests should then be created to check whether the system can adapt to the unknown situations it is supposed to handle (see the sketch below). By doing this we would be making tests for AGIs that are not necessarily at human-level intelligence. What is your opinion of this method of making tests when we know the system's intended capabilities? Do you think we can find a systematic way of making tests for all types of AI that may be considered generally intelligent?
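To make the idea above concrete, here is a minimal, hypothetical sketch in Python of such a context-scoped test harness. Everything in it (DummySystem, adaptation_score, the parity tasks) is my own illustration under assumed interfaces, not anything from the PAGI paper: the system is trained only on tasks from its specified context and is then scored exclusively on held-out tasks it has never seen.

import random

class DummySystem:
    """Stand-in for an AI system under test: it just memorizes answers."""
    def __init__(self):
        self.memory = {}

    def train(self, situation, answer):
        self.memory[situation] = answer

    def answer(self, situation):
        # Guess randomly when the situation was never seen in training.
        return self.memory.get(situation, random.choice([0, 1]))

def adaptation_score(system, seen_tasks, unseen_tasks):
    """Train within the specified context, then score only on novel tasks."""
    for situation, answer in seen_tasks:
        system.train(situation, answer)
    correct = sum(system.answer(s) == a for s, a in unseen_tasks)
    return correct / len(unseen_tasks)

# Context: parity of small integers; the unseen tasks probe adaptation.
seen = [(n, n % 2) for n in range(10)]
unseen = [(n, n % 2) for n in range(10, 20)]
print(adaptation_score(DummySystem(), seen, unseen))  # ~0.5: memorizing fails

A pure memorizer scores around chance on the unseen tasks, which is exactly the kind of failure such a context-specific test would be meant to expose.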
|
|
|
Post by fabrizio on Nov 26, 2012 11:30:33 GMT -5
"can we design a quick test for determining the 'level of intelligence an artificial or natural system has reached?". If "yes", how? If "no", why not?"
1) As always, the answer depends on the definition of intelligence. Intelligence, as we discussed in previous classes, does not have a single agreed-upon definition. Although we can try to measure some aspects of intelligence, I think there is no absolute way to measure it. Referring to human beings, do you think that IQ is a complete way to measure human intelligence?
2) We are talking about measuring the level of intelligence that an artificial system has reached. What is the utility of that? We still don't have an artificial general intelligence...
3) "Piaget- MacGyver Room, which is such that, an i-p artifact can credibly be classified as general-intelligent if and only if it can succeed on any test constructed from the ingredients in this room."
Refering to the Piaget MacGyver Room, an artifact has general intelligence if and only if succeeds on any test constructed... What does it mean? If an artifac fails one time, is not intelligent? Do you think that the Piaget- MacGyver Room is a good test?
|
|
|
Post by paolo on Nov 26, 2012 12:20:15 GMT -5
1- "Inspired by PAGI, [...], we define a room, the Piaget-MacGyver Room (PMR), which is such that, an i-p artifact can credibly be classified as general-intelligent if and only if it can succeed on any test constructed from the ingredients in this room." "Some i-p artifact is intelligent if and only if it can excel at all established, validated tests of neurobiologically normal cognition, even when these tests are new for the artifact." According to these definitions, if we built an AGI system that solve let's say the 80-90% of the tasks it is NOT general intelligent. But, as I wrote the last time, in one of the first readings of the course, we define an AGI as a system that can solve a (big enough) set of complex problems in a (big enough) set of complex environment. Thus, we are speaking about a (big enough) set, not about all the possible problems and/or environments. Thus, which definition is correct?
2- Do you think psychometrics is a good way to define and measure human intelligence? I mean, I do not think that all the people I know can solve every problem I could give them, but they are nevertheless intelligent.
3- Do you know more about the architecture of LISA?
4- What do you think about the Cambridge Project for Existential Risk (http://cser.org)? "Many scientists are concerned that developments in human technology may soon pose new, extinction-level risks to our species as a whole. Such dangers have been suggested from progress in AI, from developments in biotechnology and artificial life, from nanotechnology, and from possible extreme effects of anthropogenic climate change. The seriousness of these risks is difficult to assess, but that in itself seems a cause for concern, given how much is at stake. The Cambridge Project for Existential Risk — a joint initiative between a philosopher, a scientist, and a software entrepreneur — begins with the conviction that these issues require a great deal more scientific investigation than they presently receive. Our aim is to establish within the University of Cambridge a multidisciplinary research centre dedicated to the study and mitigation of risks of this kind. We are convinced that there is nowhere on the planet better suited to house such a centre. Our goal is to steer a small fraction of Cambridge's great intellectual resources, and of the reputation built on its past and present scientific pre-eminence, to the task of ensuring that our own species has a long-term future."
|
|
|
Post by krummi on Nov 26, 2012 16:30:40 GMT -5
(1) "We introduce the Piaget-MacGyver Room (PMR), which is such that an -ip artifact can credibly be classified as general intelligent if and only if it can succeed on any test constructed from the ingredients in this room. [..], only the ingredients in the room are shared ahead of time." Why do the ingredients in the room need to be shared ahead of time. If an artifact can be considered generally intelligent shouldn't this be unncessary? (2) Piaget derived the sequence of four cognitive stages through which humans pass. Isn't this something that could be applied to AGI systems as well? (3) Is the LISA model in some way similar to the model that NARS is based on, if we exclude the probability;confidence which I'm guessing the LISA model does not take into account.
|
|
|
Post by sabrina12 on Nov 26, 2012 19:24:58 GMT -5
1.) Wouldn't a program like ANALOGY be a good basis for creating AGI-like systems?
2.) Could you briefly explain Piaget's four stages?
3.) If we create an AI that can solve the "quite" simple tasks of the Piaget-MacGyver Room (PMR), is this really an AGI? I think it is feasible to build an AI with the ability to solve small general problems if it has enough background knowledge, but I would not call that proof of an AGI. Could General Game Playing, for example, be a first step in this direction?
|
|
|
Post by helgil on Nov 26, 2012 20:01:24 GMT -5
1. They claim the PMR is the threshold for general intelligence. Just how likely is it that that's true? How much of a consensus is really behind that, seeing as much of what we've read so far suggests that intelligence, general or not, is more of a gradient?
2. If we use something like the Piaget-MacGyver Room, we need to implement things like vision and/or limbs. How good could the results be using just abstract symbols like numbers and operators instead of the things in the room?
3. What's your opinion of the LISA model?
|
|
|
Post by palli on Nov 27, 2012 7:17:40 GMT -5
1. Isn't language required for passing the PMR tests, as with the Turing test? That is, thinking in terms of language; this could even include dance, where you try to convey some meaning. Of course, we don't always understand the languages of others, including animals if they have one. But doesn't this probably exclude them, as well as young kids?
2. What about AERA then (and NARS and all the other systems so far): don't they fail? Or do they, similarly to kids, just have the capability to learn language? Do you assume it would emerge?
3. What can we then say of the ethics: is it OK to kill those who do not pass (yet)?
|
|