Bees make this work through a sort of minimalist brute-force approach to the problem: They fly up to a small hole or gap, hover, wander back and forth a little bit to collect visual information about where the edges of the gap are, and then steer themselves through. It’s not fast, and it’s not particularly elegant, but it’s reliable and doesn’t take much to execute.

Reliable and not-taking-much to execute is one way to summarize the focus of the next generation of practical robotics—in other words, robotic platforms that offer affordable real-world autonomy. The University of Maryland’s Perception and Robotics Group has been working on a system that allows a drone to fly through very small and completely unknown gaps using just a single camera and onboard processing. And it’s based on a bee-inspired strategy that yields a success rate of 85 percent.

We’ve posted before about autonomous drones flying through small gaps, but the big difference here is that in this case, the drone has no information about the location or size of the gap in advance. It doesn’t need to build up any kind of 3D map of its environment or model of the gap, which is good because that would be annoying to do with a monocular camera. Instead, UMD’s strategy is to “recover a minimal amount of information that is sufficient to complete the task under consideration.”

To detect where the gap is, the drone uses an optical-flow technique. It takes a picture, moves a little bit, and then takes another picture. It identifies matching features in each picture, and thanks to parallax, the farther-away features visible through the gap will appear to have moved less than the closer features on the wall around it. The edges of the gap are the places with the biggest difference in how much features appear to have moved. And once you know where those edges are, you can just zip right through!
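If you want a feel for how that parallax trick can work in practice, here’s a minimal sketch using OpenCV’s dense optical flow. To be clear, this is our own illustration, not the UMD pipeline: the function name, frame inputs, and edge threshold are all placeholders.

```python
# Sketch of the parallax idea (illustrative, not the UMD code): compute dense
# optical flow between two frames taken a small translation apart, then look
# for sharp discontinuities in flow magnitude, which mark the gap's edges.
import cv2
import numpy as np

def find_gap_edges(frame_a, frame_b, edge_thresh=2.0):
    gray_a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)

    # Dense optical flow: per-pixel apparent motion between the two views.
    flow = cv2.calcOpticalFlowFarneback(gray_a, gray_b, None,
                                        pyr_scale=0.5, levels=3, winsize=15,
                                        iterations=3, poly_n=5, poly_sigma=1.2,
                                        flags=0)
    magnitude = np.linalg.norm(flow, axis=2).astype(np.float32)

    # Parallax: the near wall moves more than the far scene seen through the
    # gap, so the gap boundary shows up as a steep gradient in flow magnitude.
    grad_x = cv2.Sobel(magnitude, cv2.CV_32F, 1, 0, ksize=5)
    grad_y = cv2.Sobel(magnitude, cv2.CV_32F, 0, 1, ksize=5)
    edge_strength = np.hypot(grad_x, grad_y)

    return edge_strength > edge_thresh  # boolean mask of likely gap edges
```

Note that this only works because the drone actively moves between the two pictures; with no baseline between the frames, there’s no parallax to exploit.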

Or, almost. The other piece of this is using visual servoing to pass through the gap. Visual servoing is just using visual feedback to control motion: The drone takes a picture of the gap, moves forward, takes another picture, and then adjusts its movement to make sure that its position relative to the gap is still what it wants. This is different from a pre-planned approach, where the drone figures out in advance the entire path that it wants to take and then follows it—visual servoing is more on the fly. Or, you know, on the bee.
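In code, a bare-bones version of that feedback loop might look something like the proportional controller below. Again, this is our sketch rather than the paper’s actual control law: the gain, the centroid-based error, and the command outputs are all assumptions for illustration.

```python
# Hedged sketch of image-based visual servoing (not the paper's controller):
# keep the gap's centroid centered in the image with a simple proportional
# correction, re-measured on every new frame as the drone moves forward.
import numpy as np

def servo_command(edge_mask, image_shape, gain=0.005):
    """Return (lateral_rate, vertical_rate) nudging the gap toward image center."""
    ys, xs = np.nonzero(edge_mask)       # pixels flagged as gap edges
    if len(xs) == 0:
        return 0.0, 0.0                  # no gap in view: hold course
    cx, cy = xs.mean(), ys.mean()        # gap centroid in image coordinates
    err_x = cx - image_shape[1] / 2      # horizontal offset from image center
    err_y = cy - image_shape[0] / 2      # vertical offset from image center
    return -gain * err_x, -gain * err_y  # proportional correction commands
```

The design point is that nothing here requires a map or a metric model of the gap: each frame yields a fresh error signal, and the controller just keeps pushing that error toward zero until the drone is through.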

The UMD researchers tested this out with a Bebop 2 drone packing an NVIDIA Jetson TX2 GPU. Gaps of various sizes and shapes were cut in a foreground wall, which was covered in newspapers to give it some extra texture; this is where we’re obligated to point out that the technique probably won’t work if you’re trying to fly through a gap in one featureless white wall with another white wall behind it. Anyway, as long as you’ve got newspapered walls, the system works quite well, the researchers say: “We achieved a remarkable success rate of 85 percent over 150 trials for different arbitrary shaped windows under a wide range of conditions which includes a window with a minimum tolerance of just 5 cm.”

The maximum speed the drone achieved while passing through the gap was 2.5 m/s, primarily constrained by the rolling-shutter camera (which could mess up the optical flow at higher speeds), but again, this method isn’t really intended for high-performance drones. Having said that, the researchers do mention in the conclusion of their paper that “IMU data can be coupled with the monocular camera to get a scale of the window and plan for aggressive maneuvers.” So, hopefully we’ll be seeing some of that in the near future.

