As our daily lives fill with interactions with “Artificial Intelligence” devices and agents, society faces the acute challenge of helping new generations take ownership of this technology, at least as informed and critical users. This implies understanding how computer programs, while still relying on deterministic instructions, are now capable of learning new and sometimes unpredictable behaviors by crunching data. Yet even in recent initiatives to demystify AI, the learning process itself often remains a black box whose internal mechanisms seem out of reach, if not magical. To open this black box, we envisioned demonstrating a learning robot that acquires skills in front of the public, coupled with real-time graphical interfaces that display the details of its algorithms. Not only does machine learning become very concrete as one sees the robot’s performance improve over time, but evident parallels can be drawn with animals, whose brains process sensory input to issue motor commands and learn from experience. This naturally introduces the notion that the robot is itself driven by an artificial neural network, whose activity and transformation through learning are projected on screen. We will show how this educational product, even though it relies on the particularly difficult topic of reinforcement learning for robotics, enables children to understand and manipulate the core concepts of deep learning, and even to question some similarities with their own learning.
After training in Mathematics and a PhD in Computer Vision and Brain Imaging Analysis at ENS Paris, Thomas Deneux studied cortical population dynamics during his postdoctoral research in Neurobiology at the Weizmann Institute (Israel) and CNRS (France). He now pursues research in Neurorobotics at CNRS, with a fresh interest in education as he aims to create a startup developing a learning robot to teach artificial intelligence to the general public.