Learning to fly on their own
Teaching a robot new tricks by trial and error is less than ideal. ‘For flying robots, such a teaching method – reinforcement learning, in AI jargon – is entirely unsuitable, because these robots are especially fragile’, says Dr Guido de Croon, who heads the research project at the Micro Air Vehicle Laboratory.
Photo © Sam Rentmeester
Earlier this year, De Croon received a ‘TOP’ grant from the Dutch government. He and doctoral candidate Frederico Paredes Valles are trying to make flying robots – like the fluttering Delfly – capable of learning on their own. The robots have to learn to judge distances using deep neural networks. ‘This is a radically different approach from reinforcement learning’, De Croon says.
The robot has stereo vision and can, just like people, perceive depth. But the image received by each separate eye (camera) also contains depth information on its own, packed into the texture and colour of objects, for instance. During flight, the robot learns entirely by itself to use that information to avoid obstacles.
‘By combining the information from the stereo image with the extra information from each eye, our aim is to make the robot see even better. Eventually, after lots of practice, the robot should even be able to perceive depth accurately with only one eye. That's really necessary, as it will enable the robot to keep flying even if one of the cameras breaks down.’
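The idea described above – using the stereo pair's depth estimate as a free training signal for a single camera's depth cues – can be illustrated with a toy sketch. Everything here is an assumption for illustration, not the laboratory's actual system: depth is reduced to a single monocular cue (apparent object size), and the model is a one-parameter fit rather than a deep neural network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated ground-truth distances to obstacles, in metres (hypothetical).
true_depth = rng.uniform(0.5, 5.0, size=200)

# Stereo depth estimate: in practice baseline * focal / disparity;
# here simulated as the true depth plus mild measurement noise.
stereo_depth = true_depth + rng.normal(0.0, 0.05, size=true_depth.shape)

# Monocular cue from one eye: apparent size shrinks with distance
# (roughly proportional to 1 / depth), again with noise.
apparent_size = 1.0 / true_depth + rng.normal(0.0, 0.01, size=true_depth.shape)

# Self-supervised "training": fit depth ~ w / apparent_size, using the
# stereo estimate as the label. No human-provided ground truth is needed.
# Closed-form least-squares solution for the single scale parameter w.
w = np.sum(stereo_depth / apparent_size) / np.sum(1.0 / apparent_size**2)

# Monocular depth prediction from one eye only.
mono_depth = w / apparent_size
error = np.mean(np.abs(mono_depth - stereo_depth))
print(f"learned scale w = {w:.3f}, mean abs error = {error:.2f} m")
```

After the fit, the robot-analogue can estimate depth from the monocular cue alone, which is the point of the quote above: if one camera fails, the learned single-eye model can take over.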
De Croon worked on such a self-learning robot vision system for space flight for ESA and NASA some years ago. The system that he is working on now needs to be even more advanced.