Positive (AFP) as follows (AFN and AFP are computed only for
successful trials):
\[
\mathrm{DR} = \frac{\sum_{k=1}^{\text{Num. Trials}} \lambda_D^k}{\text{Num. Trials}}\,;\qquad
\lambda_D^k = \left.\frac{\lvert G \cap O \rvert}{\lvert G \rvert}\right|^{k} \geq 0.75
\]
\[
\mathrm{AFN} = \frac{\sum_{k=1}^{\text{Num. Succ. Trials}} \lambda_N^k}{\text{Num. Succ. Trials}}\,;\qquad
\lambda_N^k = \left.\frac{\lvert G \cap \overline{O} \rvert}{\lvert G \rvert}\right|^{k}
\]
\[
\mathrm{AFP} = \frac{\sum_{k=1}^{\text{Num. Succ. Trials}} \lambda_P^k}{\text{Num. Succ. Trials}}\,;\qquad
\lambda_P^k = \left.\frac{\lvert \overline{G} \cap O \rvert}{\lvert \overline{G} \rvert}\right|^{k}
\]
where $\overline{A}$ denotes the complement (negation) of the set $A$, and the superscript $k$ indexes the trial.
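For concreteness, the metrics above can be computed from binary masks as in the following Python sketch. This is illustrative and not part of the original letter: the function name `detection_stats` and the mask-pair interface are our own assumptions, while the 0.75 overlap threshold and the restriction of AFN/AFP to successful trials follow the definitions above, with $G$ taken as the ground-truth gap mask and $O$ as the detected output mask.

```python
import numpy as np

def detection_stats(trials, threshold=0.75):
    """Compute DR, AFN, AFP from an iterable of (G, O) boolean mask pairs.

    G: ground-truth gap mask; O: detected gap mask (same shape).
    A trial counts as a successful detection when |G & O| / |G| >= threshold.
    AFN and AFP are averaged over successful trials only.
    """
    detections, afn_terms, afp_terms = [], [], []
    for G, O in trials:
        G = G.astype(bool)
        O = O.astype(bool)
        overlap = np.logical_and(G, O).sum() / G.sum()  # |G ∩ O| / |G|
        detected = overlap >= threshold                 # λ_D^k (indicator)
        detections.append(detected)
        if detected:
            # λ_N^k = |G ∩ Ō| / |G|: fraction of gap pixels missed
            afn_terms.append(np.logical_and(G, ~O).sum() / G.sum())
            # λ_P^k = |Ḡ ∩ O| / |Ḡ|: fraction of non-gap pixels falsely detected
            afp_terms.append(np.logical_and(~G, O).sum() / (~G).sum())
    dr = np.mean(detections)
    afn = np.mean(afn_terms) if afn_terms else float('nan')
    afp = np.mean(afp_terms) if afp_terms else float('nan')
    return dr, afn, afp
```

A perfect detector yields DR = 1, AFN = 0, and AFP = 0; AFN and AFP are undefined (NaN here) when no trial succeeds.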
V. CONCLUSION
We present a minimalist philosophy that mimics insect behaviour:
complex problems are solved with minimal sensing, using active
movement to simplify the problem at hand. This philosophy was
used to develop a method for finding a gap of unknown shape and
location and flying through it using only a monocular camera and
onboard sensing, together with a comprehensive comparison and
analysis. To our knowledge, this is the first letter that addresses
the problem of detecting a gap of unknown shape and location with
a monocular camera and onboard sensing. As a parting thought, IMU
data could be coupled with the monocular camera to recover the
scale of the gap and to plan aggressive maneuvers.
ACKNOWLEDGMENT
The authors would like to thank K. Zampogiannis for helpful
discussions and feedback.