Reading this paper has been very interesting; much of what it describes rings true.
I want to focus on two aspects that impressed me:
1) This paper makes many references to the term "life". For example: "Another fundamental feature of life, self-reproduction", or "This requirement may be called the law of requisite knowledge. Since all living organisms are also control systems, life therefore implies knowledge, as in Maturana's often quoted statement that 'to live is to cognize'".
We are talking about objects and systems, but these are not living beings. So, in creating an AI, are we trying to create a sort of life, or a living being?
2) Chapter 5, section B, "The Modelling Relation", introduces the concept of predictions. Depending on these predictions, the control system can choose the best action to achieve its present goal.
In my opinion, the use of predictions must be carefully controlled. A system could start making predictions about every action without ever performing the action, or without performing it within a useful time. So predictions must be limited, and their use must be well regulated.
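To make the idea concrete, here is a minimal sketch of one way such a limit could be set: prediction is cut off both by a depth bound and by a wall-clock deadline, so the system always commits to an action in useful time. The simulate and utility functions are hypothetical stand-ins for whatever predictive model the system has learned; this is my own toy illustration, not a claim about how the paper's control systems work.

    import time

    def choose_action(state, actions, simulate, utility,
                      max_depth=3, time_budget=0.05):
        # simulate(state, action) -> predicted next state (hypothetical model)
        # utility(state) -> how good a state is for the current goal
        deadline = time.monotonic() + time_budget

        def lookahead(s, depth):
            # stop predicting when the depth or the time budget runs out
            if depth == 0 or time.monotonic() > deadline:
                return utility(s)
            return max(lookahead(simulate(s, a), depth - 1) for a in actions)

        # evaluate each immediate action under the bounded prediction
        return max(actions,
                   key=lambda a: lookahead(simulate(state, a), max_depth - 1))

With this kind of bound the system never predicts "forever": it acts on the best prediction it could afford within the budget.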
IV. "Goal-Directedness and Control" - A. "Goal-Directedness"
"Probably the most important innovation of cybernetics is its explanation of goal-directedness or purpose. An autonomous system, such as an organism, or a person, can be characterized by the fact that it pursues its own goals, resisting obstructions from the environment that would make it deviate from its preferred state of affairs. Thus, goal-directedness implies regulation of—or control over—perturbations."
1) But human beings sometimes change their goals while pursuing them (John Lennon: "Life is what happens to you while you're busy making other plans."). This is because we break problems into sub-goals, and while pursuing a sub-goal we have new experiences that become part of our knowledge. Isn't that so?
2) Moreover, will an AGI ever be able to decide its own goals alone (without human operators)?
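To make the quoted idea of "regulation of, or control over, perturbations" concrete, here is a minimal negative-feedback sketch (a thermostat-style proportional controller; my own toy illustration, not taken from the paper):

    def control_step(goal, perception, gain=0.5):
        # negative feedback: act against the current deviation from the goal
        error = goal - perception
        return gain * error

    temperature = 15.0
    for _ in range(20):
        temperature += control_step(goal=21.0, perception=temperature)
        temperature -= 0.3   # a perturbation pushed in by the environment
    print(round(temperature, 2))   # ~20.4: held near the goal despite the disturbance
    # a pure proportional controller leaves a small steady-state error,
    # which is why practical controllers also add an integral term

The system keeps returning toward its preferred state no matter what the environment does to it, which is exactly the goal-directedness the quote describes.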
II. "Relational Concepts" - C. "Entropy and Information"
"We also note that there are other methods of weighting the state of a system which do not adhere to probability theory's additivity condition that the sum of the probabilities must be 1. These methods, involving concepts from fuzzy systems theory and possibility theory, lead to alternative information theories. Together with probability theory these are called Generalized Information Theory (GIT). While GIT methods are under development, the probabilistic approach to information theory still dominates applications."
") GIT was introduced in 1991 by George J. Klir ("An Update on Generalized Information Theory"). Is it still under development?
I'm going to have to agree with Paolo and Fabrizio that this was an excellent reading; now I finally know what cybernetics is all about!
Question #1)
"Finally, a number of authors are seriously questioning the limits of mechanism and formalism for interdisciplinary modeling in particular, and science in general. The issues here thus become what the ultimate limits on knowledge might be, especially as expressed in mathematical and computer-based models. What's at stake is whether it is possible, in principle, to construct models, whether formal or not, which will help us understand the full complexity of the world around us." (from page 5, bottom paragraph).
I find this paragraph really interesting, and my question is simply: what is the opinion of the class on this? Do you guys think that there are some "ultimate limits" here?
Question #2)
I might be wrong here, but isn't cybernetics as a field slowly drifting away from its original roots? That is, away from a theoretical framework for modelling systems and towards something entirely different?
1. What kind of knowledge base would a cybernetic system use? Could it use anything from SQL, or some kind of inference engine, or are these technologies too static for this kind of processing? Is the reason a cybernetic intelligence doesn't exist that the needed technologies don't exist, or is the problem time, i.e. that current technologies are too slow? (See the toy contrast sketched after question 2 below.)
2. What is the connection between cybernetics and Ymir and Ikon Flux? Are these technologies examples of the cybernetics philosophy? If so, what is the communication-layer architecture for the distribution of the systems: message passing or shared memory? I'm assuming they adhere to Varela's work on operationally closed systems, that is, that they create and even destroy processes or threads as part of their self-programming process. (A generic sketch of the message-passing option also follows below.)
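On question 1, here is a toy contrast between a static lookup and an inference engine: a minimal forward-chaining loop that derives new facts until a fixed point is reached, which a plain SQL SELECT over stored rows cannot do. The rules and facts are invented purely for illustration:

    # toy knowledge base: each rule maps a set of premises to a conclusion
    rules = [
        ({"has_sensors", "has_goals"}, "is_control_system"),
        ({"is_control_system"}, "needs_world_model"),
    ]

    def forward_chain(facts, rules):
        # keep applying rules until no new facts appear (a fixed point);
        # a static table lookup would only ever return what was stored
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in rules:
                if premises <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        return facts

    print(forward_chain({"has_sensors", "has_goals"}, rules))
    # {'has_sensors', 'has_goals', 'is_control_system', 'needs_world_model'}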
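On question 2, I can't speak to how Ymir or Ikon Flux are actually built, but here is a generic sketch of the message-passing option, with a module that can be spawned and retired at run time, loosely in the spirit of a changing population of processes:

    import queue, threading

    def module(name, inbox, bus):
        # a module runs until told to stop; modules can be spawned and
        # retired at run time, so the population of processes can change
        while True:
            msg = inbox.get()
            if msg is None:
                return
            bus.put((name, "processed " + msg))

    bus = queue.Queue()     # message passing over queues; the alternative
    inbox = queue.Queue()   # would be shared memory guarded by locks
    t = threading.Thread(target=module, args=("perception", inbox, bus))
    t.start()                       # spawn a new module dynamically
    inbox.put("raw percept")
    print(bus.get())                # ('perception', 'processed raw percept')
    inbox.put(None)                 # retire the module again
    t.join()

Message passing keeps modules decoupled, which seems easier to reconcile with operational closure than shared memory, but that is my speculation, not a statement about those systems.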
Maybe I'm misunderstanding, but if cybernetics deals with information flows and feedback loops and not the physical implementation (of the brain), what are these flows and loops (besides the perception-action loop) in the brain we need to model?
If the physical implementation is not important, nor a strict imitation of it (i.e. an artificial neural network), then what comes in its place? State machines? Support vector machines? Hidden Markov models? Etc. More than one of those, and do they fit together as a heterogeneous whole?
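As one toy example of a model that captures an information flow without imitating neurons, here is a hidden Markov model's forward (filtering) update over a short perception sequence; the transition and observation matrices are invented for illustration:

    import numpy as np

    T = np.array([[0.9, 0.1],     # hidden-state transition probabilities
                  [0.2, 0.8]])
    E = np.array([[0.8, 0.2],     # P(observation | hidden state)
                  [0.3, 0.7]])

    def forward_step(belief, obs):
        # predict the belief through the dynamics, then reweight by evidence
        predicted = belief @ T
        updated = predicted * E[:, obs]
        return updated / updated.sum()

    belief = np.array([0.5, 0.5])
    for obs in [0, 0, 1]:         # a short stream of percepts
        belief = forward_step(belief, obs)
    print(belief)                 # posterior over the hidden states

The model tracks the flow of information from percepts to an internal state estimate, with no commitment at all to how the brain physically does it.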
I also agree that the reading was really interesting and explained a lot of familiar things in a more interesting way. Two paragraphs in particular raised questions for me:
1) "Note that the environment did not instruct the organism how to build the model: the organism had to find out for itself. This may still appear simple in our model with 27 possible architectures, but it suffices to observe that for more complex organisms there are typically millions of possible perceptions and thousands of possible actions to conclude that the space of possible models or control architectures is absolutely astronomical."
-> So if we look further into the future, we will create many more complex AIs, and there will be millions of possible perceptions and thousands of possible actions. Shouldn't we therefore try to create an AI with given goals but with a self-organizing ability to build its own model of its world, since we won't be able to build these extremely complex models ourselves anymore? (A rough count of the size of that model space is sketched after point 2 below.)
2) "There is moreover invariance over observers: if different observers agree about a percept or concept, then this phenomenon may be considered "real" by consensus. This process of reaching consensus over shared concepts has been called "the social construction of reality". Gordon Pask's Conversation Theory provides a sophisticated formal model of such a "conversational" interaction that ends in an agreement over shared meanings."
-> And would it be easier for different AIs to share their models of the world and their experiences with other AIs, in order to reach a "social construction of reality"?
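Coming back to point 1, a back-of-the-envelope count supports the "astronomical" claim. Assuming a purely reactive architecture in which each of P possible perceptions is mapped independently to one of A actions (which seems to be where the paper's 27 comes from, as 3^3), the number of possible models is:

    N = A^{P}
    % toy case from the quote: A = 3, P = 3 \Rightarrow N = 3^{3} = 27
    % scaled up: A = 10^{3}, P = 10^{6} \Rightarrow
    N = \left(10^{3}\right)^{10^{6}} = 10^{3\,000\,000}

A number with three million digits cannot be searched or hand-built, which is exactly why a self-organizing model-building ability seems necessary.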
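And on point 2, here is a crude sketch of consensus formation: each agent repeatedly moves a numeric "world-model parameter" toward the group average until all agree. Real negotiation of shared meanings (as in Pask's Conversation Theory) is far richer; this is only a toy of my own:

    def reach_consensus(beliefs, steps=30):
        # each agent moves its parameter halfway toward the group average;
        # agreement emerges without any central authority
        for _ in range(steps):
            avg = sum(beliefs) / len(beliefs)
            beliefs = [b + 0.5 * (avg - b) for b in beliefs]
        return beliefs

    print(reach_consensus([0.2, 0.9, 0.5]))   # all values converge to ~0.533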
1. The section on goal-directedness says that systems such as an organism or a person can be characterized by the fact that they pursue their own goals. Does anyone think we will ever make machines that set their own main goals?
2. The most important innovation of cybernetics is its explanation of goal-directedness; what do you think is the second most important?