Street view images blur detection
Abstract
Blurred regions in images hinder visual analysis and noticeably affect applications such as navigation systems and virtual tours. Many existing approaches assume that blurred regions are present and process the entire image even when none exist, incurring unnecessary computational overhead and wasting resources. In this paper, we introduce the Street-view images Blur Detection Network (SBDNet), which consists of two interconnected subnetworks: a Classifier network and an Identifier network. The Classifier network categorizes street-view images as either blurred or not blurred. Only when the Classifier determines that an image is blurred is the Identifier network activated to estimate the blurred areas within the image. When needed, high-level semantic features from the Classifier network are reused to construct the blur map estimation in the Identifier network. The algorithm was trained and evaluated on the Street-View Blur Images (SVBI) dataset and three publicly available blur detection datasets: CUHK, DUT, and SZU-BD. Our quantitative and qualitative results demonstrate that SBDNet is competitive with state-of-the-art methods in blur map estimation.
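The gated two-stage inference described in the abstract (run the Identifier only when the Classifier flags an image as blurred) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the two scoring functions below are crude Laplacian-variance stand-ins for the actual CNN subnetworks, and all names (`classify_blur`, `identify_blur_regions`, `detect`) are hypothetical.

```python
import numpy as np

def classify_blur(image: np.ndarray) -> float:
    """Stand-in for the Classifier network: returns a pseudo-probability
    that the whole image is blurred. Low Laplacian variance suggests blur."""
    lap = (np.roll(image, 1, 0) + np.roll(image, -1, 0)
           + np.roll(image, 1, 1) + np.roll(image, -1, 1) - 4 * image)
    return float(1.0 / (1.0 + lap.var()))

def identify_blur_regions(image: np.ndarray, patch: int = 8) -> np.ndarray:
    """Stand-in for the Identifier network: a per-patch blur map in [0, 1],
    again scored by (inverse) local variance."""
    h, w = image.shape
    bmap = np.zeros((h // patch, w // patch))
    for i in range(bmap.shape[0]):
        for j in range(bmap.shape[1]):
            block = image[i * patch:(i + 1) * patch, j * patch:(j + 1) * patch]
            bmap[i, j] = 1.0 / (1.0 + block.var())
    return bmap

def detect(image: np.ndarray, threshold: float = 0.5):
    """Gated inference: the Identifier runs only for images the
    Classifier deems blurred, saving computation on sharp images."""
    p_blur = classify_blur(image)
    if p_blur < threshold:
        return p_blur, None  # sharp image: no blur map computed
    return p_blur, identify_blur_regions(image)
```

The design point is the early exit in `detect`: sharp images skip the (in the real network, far more expensive) blur map estimation entirely, which is the inefficiency the paper targets.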
References
1. Shi J, Xu L, Jia J. Discriminative blur detection features. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; 23–28 June 2014; Columbus, USA. pp. 2965–2972.
2. Tai YW, Brown MS. Single image defocus map estimation using local contrast prior. In: Proceedings of 2009 16th IEEE International Conference on Image Processing (ICIP); 7–10 November 2009; Cairo, Egypt. pp. 1797–1800.
3. Zhuo S, Sim T. Defocus map estimation from a single image. Pattern Recognition 2011; 44(9): 1852–1858. doi: 10.1016/j.patcog.2011.03.009
4. Su B, Lu S, Tan CL. Blurred image region detection and classification. In: Proceedings of the 19th ACM international conference on Multimedia; 28 November–1 December 2011; Scottsdale, Arizona, USA. pp. 1397–1400.
5. Couzinie-Devy F, Sun J, Alahari K, Ponce J. Learning to estimate and remove non-uniform image blur. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; 23–28 June 2013; Portland, USA. pp. 1075–1082.
6. Tang C, Hou C, Song Z. Defocus map estimation from a single image via spectrum contrast. Optics Letters 2013; 38(10): 1706–1708. doi: 10.1364/OL.38.001706
7. Tang C, Hou C, Hou Y, et al. An effective edge-preserving smoothing method for image manipulation. Digital Signal Processing 2017; 63: 10–24. doi: 10.1016/j.dsp.2016.10.009
8. Zhao W, Zhao F, Wang D, Lu H. Defocus blur detection via multistream bottom-top-bottom fully convolutional network. IEEE Transactions on Pattern Analysis and Machine Intelligence 2020; 42(8): 1884–1897. doi: 10.1109/TPAMI.2019.2906588
9. Zhao Z, Yang H, Luo H. Hierarchical Edge-aware Network for defocus blur detection. Complex & Intelligent Systems 2022; 8: 4265–4276. doi: 10.1007/s40747-022-00711-y
10. Zhang S, Shen X, Lin Z, et al. Learning to understand image blur. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; 18–23 June 2018; Salt Lake City, USA. pp. 6586–6595.
11. Available online: https://github.com/MasoudMoeini/Google-Street-View-Images-Blur-Detection (accessed on 31 July 2023).
12. Sun X, Zhang X, Xiao M, Xu C. Blur detection via deep pyramid network with recurrent distinction enhanced modules. Neurocomputing 2020; 414: 278–290. doi: 10.1016/j.neucom.2020.06.068
13. Jonna S, Medhi M, Sahay RR. Distill-DBDGAN: Knowledge distillation and adversarial learning framework for defocus blur detection. ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM) 2022; 19(2): 87. doi: 10.1145/3557897
14. Zhao W, Shang C, Lu H. Self-generated defocus blur detection via dual adversarial discriminators. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition; 20–25 June 2021; Nashville, USA. pp. 6933–6942.
15. Zhao Z, Yang H, Luo H. Defocus blur detection via transformer encoder and edge guidance. Applied Intelligence 2022; 52: 14426–14439. doi: 10.1007/s10489-022-03303-y
16. Lin X, Li H, Cai Q. Hierarchical complementary residual attention learning for defocus blur detection. Neurocomputing 2022; 501: 88–101. doi: 10.1016/j.neucom.2022.06.023
17. Guo W, Xiao X, Hui Y, et al. Heterogeneous attention nested u-shaped network for blur detection. IEEE Signal Processing Letters 2021; 29: 140–144. doi: 10.1109/LSP.2021.3128375
18. Tang C, Liu X, Zheng X, et al. DefusionNet: Defocus blur detection via recurrently fusing and refining discriminative multi-scale deep features. IEEE Transactions on Pattern Analysis and Machine Intelligence 2020; 44(2): 955–968. doi: 10.1109/TPAMI.2020.3014629
19. Tang C, Liu X, Zhu X, et al. R2MRF: Defocus blur detection via recurrently refining multi-scale residual features. Proceedings of the AAAI Conference on Artificial Intelligence 2020; 34(7): 12063–12070. doi: 10.1609/aaai.v34i07.6884
20. Zhao W, Zheng B, Lin Q, Lu H. Enhancing diversity of defocus blur detectors via cross-ensemble network. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition; 15–20 June 2019; Long Beach, USA. pp. 8905–8913.
21. Ma K, Fu H, Liu T, et al. Deep blur mapping: Exploiting high-level semantics by deep neural networks. IEEE Transactions on Image Processing 2018; 27(10): 5155–5166. doi: 10.1109/TIP.2018.2847421
22. Kim B, Son H, Park SJ, et al. Defocus and motion blur detection with deep contextual features. Computer Graphics Forum 2018; 37(7): 277–288. doi: 10.1111/cgf.13567
23. He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; 27–30 June 2016; Las Vegas, USA. pp. 770–778.
24. Hu J, Shen L, Sun G. Squeeze-and-excitation networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; 18–23 June 2018; Salt Lake City, USA. pp. 7132–7141.
25. Yang M, Yu K, Zhang C, et al. DenseASPP for semantic segmentation in street scenes. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; 18–23 June 2018; Salt Lake City, USA. pp. 3684–3692.
26. Tan M, Le Q. EfficientNet: Rethinking model scaling for convolutional neural networks. Proceedings of Machine Learning Research 2019; 97: 6105–6114.
27. Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition. arXiv 2014; arXiv:1409.1556. doi: 10.48550/arXiv.1409.1556
28. Xie S, Girshick R, Dollár P, et al. Aggregated residual transformations for deep neural networks. arXiv 2017; arXiv:1611.05431. doi: 10.48550/arXiv.1611.05431
29. Zamir AR, Shah M. Image geo-localization based on multiple nearest neighbor feature matching using generalized graphs. IEEE Transactions on Pattern Analysis and Machine Intelligence 2014; 36(8): 1546–1558. doi: 10.1109/TPAMI.2014.2299799
30. Jalled F, Voronkov I. Object detection using image processing. arXiv 2016; arXiv:1611.07791. doi: 10.48550/arXiv.1611.07791
31. Rastogi A, Ryuh BS. Teat detection algorithm: YOLO vs. Haar-cascade. Journal of Mechanical Science and Technology 2019; 33(4): 1869–1874. doi: 10.1007/s12206-019-0339-5
32. Szegedy C, Liu W, Jia Y, et al. Going deeper with convolutions. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; 7–12 June 2015; Boston, USA. pp. 1–9.
33. Gulli A, Pal S. Deep Learning with Keras. Packt; 2017.
34. Tang C, Zhu X, Liu X, et al. DefusionNet: Defocus blur detection via recurrently fusing and refining multi-scale deep features. IEEE Transactions on Pattern Analysis and Machine Intelligence 2019; 44(2): 955–968. doi: 10.1109/TPAMI.2020.3014629
35. Goodfellow I, Pouget-Abadie J, Mirza M, et al. Generative adversarial networks. Communications of the ACM 2020; 63(11): 139–144. doi: 10.1145/3422622
36. Khmag A, Ramlee R. Blur removal in natural digital images using self-reference generative networks. Journal of Telecommunication, Electronic and Computer Engineering (JTEC) 2021; 13(3): 61–65.
37. Khmag A, Ramli AR, Kamarudin N. Clustering-based natural image denoising using dictionary learning approach in wavelet domain. Soft Computing 2019; 23(17): 8013–8027. doi: 10.1007/s00500-018-3438-9
38. Golestaneh SA, Karam LJ. Spatially-varying blur detection based on multiscale fused and sorted transform coefficients of gradient magnitudes. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; 21–26 July 2017; Honolulu, USA. pp. 5800–5809.
39. Zhao W, Hou X, He Y, Lu H. Defocus blur detection via boosting diversity of deep ensemble networks. IEEE Transactions on Image Processing 2021; 30: 5426–5438. doi: 10.1109/TIP.2021.3084101
40. Cun X, Pun CM. Defocus blur detection via depth distillation. In: Computer Vision – ECCV 2020, Proceedings of the 16th European Conference; 23–28 August 2020; Glasgow, UK. Springer, Cham; pp. 747–763.
41. Yi X, Eramian M. Lbp-based segmentation of defocus blur. IEEE Transactions on Image Processing 2016; 25(4): 1626–1638. doi: 10.1109/TIP.2016.2528042
DOI: https://doi.org/10.32629/jai.v7i3.562
Copyright (c) 2024 Masoud Moeini, Ehsan Yaghoubi, Simone Frintrop
License URL: https://creativecommons.org/licenses/by-nc/4.0/