AKILLI SİSTEMLER VE UYGULAMALARI DERGİSİ
JOURNAL OF INTELLIGENT SYSTEMS WITH APPLICATIONS
J. Intell. Syst. Appl.
E-ISSN: 2667-6893
This work is licensed under a Creative Commons Attribution 4.0 International License.

SIFT-driven Vision-based Positioning for UAVs: Overcoming GPS Signal Challenges in Urban Environments

How to cite: Şatır E, Demirtaş E, Agdemir H, Yıldız F. SIFT-driven vision-based positioning for UAVs: overcoming GPS signal challenges in urban environments. Akıllı Sistemler ve Uygulamaları Dergisi (Journal of Intelligent Systems with Applications) 2024; 7(2): 1-7.



Abstract: Unmanned Aerial Vehicles (UAVs) are increasingly utilized across industries for tasks ranging from surveillance to logistics. However, GPS signal loss in dense urban environments poses significant challenges to the safe and accurate navigation of UAVs. This paper proposes a vision-based position estimation system that leverages the Scale-Invariant Feature Transform (SIFT) algorithm to provide robust UAV navigation in GPS-denied environments. By detecting keypoints in high-resolution images captured by the UAV's camera and matching these points across successive frames, the system calculates displacement and converts pixel-based movements into real-world distances. Experimental results show that the proposed method provides a reliable alternative to GPS-based navigation with an average error rate of 6.09%. The system's real-time processing and adaptability to complex urban environments make it a viable tool for enhancing UAV navigation in GPS-compromised scenarios.
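
The pipeline the abstract describes (per-frame SIFT keypoints, matching across consecutive frames, and scaling the resulting pixel displacement into metres) maps directly onto standard OpenCV calls. The sketch below is a minimal illustration of that pipeline, not the authors' implementation: the ratio-test threshold, the median aggregation, and the ground-sampling-distance (GSD) calibration are assumptions chosen for the sketch, since the abstract does not specify them.

```python
import cv2
import numpy as np

def gsd_m_per_px(altitude_m, focal_length_mm, pixel_pitch_um):
    """Ground sampling distance (m/px) for a nadir-pointing pinhole camera.

    All three inputs are assumed calibration values; the paper's actual
    pixel-to-metre conversion is not given in the abstract.
    """
    return altitude_m * (pixel_pitch_um * 1e-6) / (focal_length_mm * 1e-3)

def estimate_displacement_m(prev_gray, curr_gray, gsd):
    """Estimate UAV ground displacement (dx, dy) in metres between two frames."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(prev_gray, None)
    kp2, des2 = sift.detectAndCompute(curr_gray, None)
    if des1 is None or des2 is None:
        return None  # featureless frame; no estimate possible

    # Brute-force matching with Lowe's ratio test to reject ambiguous matches.
    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
    good = [pair[0] for pair in matches
            if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance]
    if len(good) < 8:
        return None  # too few reliable matches to trust the estimate

    # Pixel shift of each matched keypoint; the median resists outlier matches.
    shifts = np.array([np.subtract(kp2[m.trainIdx].pt, kp1[m.queryIdx].pt)
                       for m in good])
    dx_px, dy_px = np.median(shifts, axis=0)

    # The scene appears to move opposite to the camera, hence the sign flip.
    return -dx_px * gsd, -dy_px * gsd
```

Summing these per-frame displacements yields a dead-reckoned track. As with any visual odometry, small per-frame errors accumulate over distance, which is consistent with the roughly 6% average error the abstract reports.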

Keywords: UAV navigation, vision-based positioning, SIFT algorithm, keypoint detection, real-time position estimation

