Bridging the Gap between Symbol and Sub-symbol
Modern advances in artificial intelligence and robotics have made significant progress in mimicking high-level human behaviors and decision-making. In particular, Deep Reinforcement Learning (DRL) has succeeded in implementing complex behaviors that go beyond what traditional control methods can handle. Where Finite State Machines (FSM) were once used for robots playing games or performing simple tasks, Behavior Trees (BT) have recently become the more widely applied formalism. This shift, together with broader technological advances, opens new approaches to solving complex problems.
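To make the FSM-versus-BT contrast concrete, here is a minimal behavior-tree sketch in Python. The node types (Sequence, Selector, Leaf) follow the standard BT semantics, but the class names and the toy "pick up object" task are illustrative, not taken from any particular robotics library.

```python
# Minimal behavior-tree sketch: Sequence/Selector composites over leaf tasks.
# All names here are illustrative, not from any specific robotics framework.

SUCCESS, FAILURE = "success", "failure"

class Leaf:
    """A condition or action; succeeds when its function returns truthy."""
    def __init__(self, fn):
        self.fn = fn
    def tick(self, state):
        return SUCCESS if self.fn(state) else FAILURE

class Sequence:
    """Succeeds only if every child succeeds, evaluated left to right."""
    def __init__(self, *children):
        self.children = children
    def tick(self, state):
        for child in self.children:
            if child.tick(state) == FAILURE:
                return FAILURE
        return SUCCESS

class Selector:
    """Tries children in order; succeeds on the first child that succeeds."""
    def __init__(self, *children):
        self.children = children
    def tick(self, state):
        for child in self.children:
            if child.tick(state) == SUCCESS:
                return SUCCESS
        return FAILURE

# A toy "pick up object" tree: either the object is already held,
# or the robot must see it and then grasp it.
tree = Selector(
    Leaf(lambda s: s["holding"]),
    Sequence(Leaf(lambda s: s["object_visible"]),
             Leaf(lambda s: s.update(holding=True) or True)),
)

print(tree.tick({"holding": False, "object_visible": True}))  # success
```

Unlike an FSM, where every transition between states must be enumerated, the tree's modularity means a subtree can be swapped out or reused without touching the rest of the structure.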
Deep Reinforcement Learning indeed offers more possibilities than traditional control methods, and it is increasingly applied to real-world tasks. However, deep neural networks function as black boxes: their complexity and opaque learning processes make it difficult to inspect the inductive biases they acquire. We must therefore exercise caution when deploying sub-symbolic systems such as artificial neural networks. Recent incidents involving Tesla’s autonomous driving technology highlight these concerns and have led many countries to strengthen legislation aimed at AI safety and reliability. Modern machines thus need more reactive and modular world models to address such issues effectively.
Moreover, modern machines are neither as fast nor as energy-efficient as the reflexes of humans or animals. Most machines focus on a single task, whereas humans and animals command a wide range of skills for the far harder problem of surviving in the world. Constructing reactive and modular world models enables flexible responses to complex environments and allows individual components to be modified easily. Such models are necessary for implementing truly autonomous agents, and they are closely related to high-level cognitive functions and metacognition in humans.
Humans act on an understanding of language and the physical world built up by their own biological networks, which can be viewed as self-supervised world models. The resulting behaviors, whether implicit or explicit, can be regarded as finite state machines or behavior trees at a macro level. Jürgen Schmidhuber’s Gödel Machine, for example, takes the ability to modify one’s internal programs and develop better ones as a core capacity of consciousness. This suggests that humans continually distill useful programs, or symbolic systems, out of sub-symbolic systems akin to neural networks. Recent consciousness research likewise focuses on high-level cognitive functions and metacognition, showing how high-level perspectives continuously shape lower-level ones: in how the world is perceived (mind), in biological reorganization through learning (brain), and in interactions with the physical world (behavior).
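The self-modification idea can be caricatured in a few lines of Python: an agent replaces its own decision procedure only when a candidate rewrite demonstrably performs better. The real Gödel Machine requires a formal proof of improvement before any self-rewrite; this sketch substitutes a simple empirical check on hypothetical test cases, so it is an analogy rather than an implementation.

```python
# Toy sketch in the spirit of the Gödel Machine: the agent swaps in a new
# internal program only when the candidate outperforms the current one.
# (A real Gödel Machine demands a proof of improvement, not a benchmark.)

def evaluate(policy, cases):
    """Count how many (input, expected_output) cases the policy gets right."""
    return sum(1 for x, y in cases if policy(x) == y)

cases = [(0, 0), (1, 1), (2, 0), (3, 1)]  # hypothetical target: parity of x

current = lambda x: 0          # initial, naive internal program
candidate = lambda x: x % 2    # proposed self-rewrite

if evaluate(candidate, cases) > evaluate(current, cases):
    current = candidate        # the self-modification step

print(evaluate(current, cases))  # 4
```

The point of the analogy is that the symbolic layer (the explicit swap rule) governs changes to the behavioral layer, mirroring how reflection can rewrite habits.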
Phenomena such as hypnosis and subliminal messaging, which operate through language and sensory information, exemplify this self-modifying ability. Yet there is little discussion of techniques for displaying the sub-symbolic contents of deep neural networks in symbolic form. The situation resembles how we come to understand our own behaviors and habits: the beliefs and subconscious processes beneath them can be articulated only through reflection and self-evaluation. As the famous saying goes, “Beliefs become thoughts, thoughts become words, words become actions, actions become habits, habits determine one’s values, and values determine one’s destiny.” To change our destiny and values, we must change our actions and habits; to do that, we must change our beliefs and words. Several current research directions, notably in explainable AI, strive to address these technical challenges.
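One way such sub-symbolic content can be surfaced symbolically is to probe a trained policy and fit an interpretable surrogate to its decisions. The sketch below stands a hand-written function in for an opaque learned controller and extracts a single-threshold IF/THEN rule from its behavior; the policy, state variables, and thresholds are all illustrative assumptions, and in practice one would probe a real network and fit a richer surrogate such as a decision tree.

```python
# Sketch: extracting a symbolic rule from a sub-symbolic policy by probing it.
# `blackbox_policy` is a hypothetical stand-in for a trained neural controller.

def blackbox_policy(speed, obstacle_distance):
    # Opaque learned controller (here faked with a linear score).
    score = 0.8 * obstacle_distance - 0.5 * speed
    return "accelerate" if score > 1.0 else "brake"

# Probe the black box on a grid of states and record its decisions.
samples = [(s, d, blackbox_policy(s, d))
           for s in range(10) for d in range(10)]

# Extract a one-split symbolic rule: the obstacle_distance threshold that
# best reproduces the black box's accelerate/brake decisions.
def rule_accuracy(threshold):
    return sum(1 for s, d, a in samples
               if a == ("accelerate" if d >= threshold else "brake"))

best = max(range(11), key=rule_accuracy)
print(f"IF obstacle_distance >= {best} THEN accelerate ELSE brake")
```

The extracted rule is lossy, since a single threshold cannot capture the policy's dependence on speed, but that is precisely the trade-off: a compact symbolic description of behavior that a human can read, audit, and deliberately revise.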
In conclusion, it is crucial to explore how sub-symbolic systems can autonomously generate and modify high-level symbolic behaviors and explicit guidelines. Equally essential is extracting and evaluating the behavior patterns implicitly stored in already trained sub-symbolic systems. This is not only a technically challenging task but also a research field that will shape the future of artificial intelligence. Advancing modern AI therefore requires an in-depth study of the interactions between sub-symbolic and symbolic systems, so that we can build more flexible and efficient autonomous agents. We must continue our efforts to solve these complex problems.
Summary
Modern AI and robotics have made significant strides in mimicking high-level human behaviors and decision-making, particularly through Deep Reinforcement Learning (DRL). While robot control has shifted from Finite State Machines (FSM) toward Behavior Trees (BT), and DRL now handles problems beyond traditional control, the complexity and opacity of deep neural networks demand careful use of sub-symbolic systems; incidents involving Tesla’s autonomous driving technology highlight these concerns. Modern machines are neither as fast nor as energy-efficient as human or animal reflexes and typically focus on single tasks, whereas humans and animals rely on reactive and modular world models for flexible adaptation. Constructing such models is essential for truly autonomous agents and is closely related to high-level cognitive functions and metacognition. Phenomena like hypnosis and subliminal messaging demonstrate self-modifying abilities through language and sensory information, but techniques for displaying sub-symbolic contents in symbolic form are still lacking. As the saying goes, “Beliefs become thoughts, thoughts become words, words become actions, actions become habits, habits determine one’s values, and values determine one’s destiny”; to alter actions and habits, we must change our beliefs and words. In conclusion, exploring how sub-symbolic systems autonomously generate and modify high-level symbolic behaviors is crucial, as is extracting and evaluating implicitly stored behavior patterns. This research is vital for advancing AI and for developing flexible, efficient autonomous agents, and it demands continued effort.