Please use this identifier to cite or link to this item: https://repositorio.cetys.mx/handle/60000/1999
Full metadata record
DC Field  Value  Language
dc.contributor.author  Lopez-Montiel, Miguel  -
dc.contributor.author  Orozco-Rosas, Ulises  -
dc.contributor.author  Sánchez-Adame, Moises  -
dc.contributor.author  Montiel, Oscar  -
dc.contributor.author  Picos, Kenia  -
dc.contributor.author  Tapia, Juan Jose  -
dc.date.accessioned  2025-12-19T18:20:51Z  -
dc.date.available  2025-12-19T18:20:51Z  -
dc.date.created  2024-12  -
dc.date.issued  2025-12  -
dc.identifier.uri  https://repositorio.cetys.mx/handle/60000/1999  -
dc.description.abstract  Traffic Sign Classification (TSC) is crucial for autonomous driving and intelligent transportation systems. Desktop implementations of deep learning have achieved state-of-the-art performance on TSC benchmarks; however, they are unsuitable for real-time embedded systems due to resource limitations. We propose an Efficient GPU-Embedded Network (EGENet) for embedded platforms, such as NVIDIA’s Jetson, to overcome these drawbacks. When implemented on a desktop system with an NVIDIA GeForce RTX 2080, EGENet can reduce the number of parameters by 24 million while speeding up inference by 2.59×. EGENet introduces a new concept called Asymmetric Depth Dilated Separable Convolution (ADDSC), which enables a reduction in parameters and inference time while maintaining the receptive window size. A novel evaluation metric is proposed, considering frames per second (FPS), accuracy, and deployment on embedded GPU devices with constrained resources, targeting at least 98.85% accuracy and a frame rate of more than 30 FPS. Thorough evaluations were performed on the NVIDIA Jetson Xavier AGX and Jetson Nano, utilizing limited resources, to validate EGENet’s real-time performance. Evaluation on the GTSRB and LISAC datasets demonstrates superior results, with accuracies of 99.58% and 98.18% and frame rates of 253 FPS and 90 FPS on the Jetson Xavier AGX and Jetson Nano devices, respectively. Our work contributes to efficient TSC systems based on embedded GPUs and offers a comprehensive performance evaluation methodology for autonomous driving. We present exhaustive statistical comparative tests against state-of-the-art systems.  es_ES
dc.description.sponsorship  Springer Nature Link  es_ES
dc.language.iso  en_US  es_ES
dc.relation.ispartofseries  vol. 7; no. 12  -
dc.rights  Atribución-NoComercial-CompartirIgual 2.5 México  *
dc.rights.uri  http://creativecommons.org/licenses/by-nc-sa/2.5/mx/  *
dc.subject  Autonomous vehicles  es_ES
dc.subject  Deep learning  es_ES
dc.subject  Traffic sign classification  es_ES
dc.subject  Real-time  es_ES
dc.subject  Embedded systems  es_ES
dc.subject  Convolution  es_ES
dc.title  Traffic Sign Classification Using Real-Time GPU-Embedded Systems  es_ES
dc.title.alternative  SN Computer Science  es_ES
dc.type  Article  es_ES
dc.description.url  https://link.springer.com/article/10.1007/s42979-025-04634-6  es_ES
dc.format.page  pp. 7-12  es_ES
dc.identifier.doi  https://doi.org/10.1007/s42979-025-04634-6  -
dc.identifier.indexacion  SCOPUS  es_ES
dc.identifier.indexacion  JCR  es_ES
dc.subject.sede  Campus Tijuana  es_ES
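
Note: the abstract above introduces Asymmetric Depth Dilated Separable Convolution (ADDSC) as the mechanism that cuts parameters and inference time while preserving the receptive window. The record contains no code, so the following is only a minimal PyTorch sketch of what an asymmetric depthwise dilated separable block could look like; the class name ADDSCSketch, the channel counts, the 1xk/kx1 factorization order, and the dilation rate are illustrative assumptions, not the authors' published EGENet implementation (see the DOI above for the actual architecture).

    # Hypothetical sketch of an asymmetric depthwise dilated separable
    # convolution block; names and hyperparameters are assumptions.
    import torch
    import torch.nn as nn

    class ADDSCSketch(nn.Module):
        """Asymmetric (1xk then kx1) depthwise dilated convs + pointwise conv."""

        def __init__(self, in_ch: int, out_ch: int, k: int = 3, dilation: int = 2):
            super().__init__()
            pad = dilation * (k - 1) // 2
            # Depthwise 1xk convolution, dilated along the width axis.
            self.dw_horizontal = nn.Conv2d(
                in_ch, in_ch, kernel_size=(1, k), padding=(0, pad),
                dilation=(1, dilation), groups=in_ch, bias=False)
            # Depthwise kx1 convolution, dilated along the height axis.
            self.dw_vertical = nn.Conv2d(
                in_ch, in_ch, kernel_size=(k, 1), padding=(pad, 0),
                dilation=(dilation, 1), groups=in_ch, bias=False)
            # Pointwise 1x1 convolution mixes channels (the "separable" part).
            self.pw = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
            self.bn = nn.BatchNorm2d(out_ch)
            self.act = nn.ReLU(inplace=True)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            x = self.dw_horizontal(x)
            x = self.dw_vertical(x)
            return self.act(self.bn(self.pw(x)))

    if __name__ == "__main__":
        # Example: a 48x48 traffic-sign crop with 32 feature channels.
        block = ADDSCSketch(in_ch=32, out_ch=64)
        y = block(torch.randn(1, 32, 48, 48))
        print(y.shape)  # torch.Size([1, 64, 48, 48])

The asymmetric pair of depthwise kernels covers the same dilated k x k receptive window as a single square depthwise convolution while using fewer multiply-accumulates per position, which is consistent with the parameter and latency reductions described in the abstract.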
Appears in collections: Artículos de Revistas

Files in this item:
File  Description  Size  Format
s42979-025-04634-6.pdf    3.23 MB  Adobe PDF


This item is protected by original copyright

This item is licensed under a Creative Commons License