
Detection of lanes, obstacles and drivable areas for self-driving cars using multifusion perception metrics

A. Kishore Kumar, Venkatesh Palanisamy

Abstract


Autonomous vehicles have been an active research area since the advent of machine learning and deep learning algorithms. Computer vision and deep learning techniques have simplified the continuous monitoring and decision-making capabilities of autonomous vehicles. The navigation system is supported by a visual system in which sensors capture input as images or video, and the navigation system then makes decisions that safeguard both drivers and passers-by. This research article presents a model for obstacle detection and lane detection and examines how the vehicle should act in autonomous driving situations; its behaviour should resemble human driving while ensuring maximum safety for all stakeholders. The architecture defines a unified neural network that detects lanes, objects, and obstacles and advises the driving speed, since these targets are the predominant areas of focus for autonomous driving vehicles. Because images or video must be captured in real-time scenarios and processed swiftly enough for decision making, context tensors are introduced in the decoders to discriminate between tasks by priority. Every task is coupled with the other tasks and with the decision-making process, so the architecture continues to learn over time. The obtained results show that the proposed method improves multitask networks in terms of accuracy, decision-making capability, and computational time. Performance is evaluated on the Berkeley DeepDrive dataset, which is considered a challenging benchmark.
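The unified architecture described above — one shared encoder feeding per-task decoders (lanes, objects, drivable area), with a context tensor injected into each decoder to couple the tasks — can be illustrated with a minimal sketch. This is a toy illustration under stated assumptions, not the authors' implementation: the layer sizes, the class `MultiTaskPerceptionSketch`, and the way the context tensor is concatenated into the decoder input are all hypothetical choices made for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

class MultiTaskPerceptionSketch:
    """Toy shared-encoder / multi-decoder network.

    A shared encoder maps an input feature vector to a latent code;
    each task decoder then receives that code concatenated with a
    'context tensor' that summarises the other tasks, so the task
    heads are not fully independent.
    """
    def __init__(self, in_dim=64, latent_dim=32, ctx_dim=8):
        # Shared encoder weights (one linear layer stands in for a CNN backbone).
        self.W_enc = rng.standard_normal((in_dim, latent_dim)) * 0.1
        # One linear decoder head per task; output widths are illustrative.
        self.heads = {
            "lane":     rng.standard_normal((latent_dim + ctx_dim, 2)) * 0.1,
            "object":   rng.standard_normal((latent_dim + ctx_dim, 4)) * 0.1,
            "drivable": rng.standard_normal((latent_dim + ctx_dim, 1)) * 0.1,
        }
        self.ctx_dim = ctx_dim

    def forward(self, x, context=None):
        z = relu(x @ self.W_enc)                    # shared encoding for all tasks
        if context is None:                         # no prior-frame context yet
            context = np.zeros((x.shape[0], self.ctx_dim))
        zc = np.concatenate([z, context], axis=1)   # inject the context tensor
        return {name: zc @ W for name, W in self.heads.items()}

model = MultiTaskPerceptionSketch()
x = rng.standard_normal((5, 64))                    # batch of 5 feature vectors
out = model.forward(x)
print({name: pred.shape for name, pred in out.items()})
```

In a real system the encoder would be a convolutional backbone and the context tensor would carry task priorities or previous-frame outputs; here it simply shows how one shared representation can serve several coupled decoder heads.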


Keywords


autonomous vehicle; deep learning; image processing; self-driving car; multi-task network





DOI: https://doi.org/10.32629/jai.v7i3.1059



Copyright (c) 2024 A. Kishore Kumar, Venkatesh Palanisamy

License URL: https://creativecommons.org/licenses/by-nc/4.0/