Acta Informatica Malaysia (AIM)

A COMPARATIVE STUDY USING 2D CNN AND TRANSFER LEARNING TO DETECT AND CLASSIFY ARABIC-SCRIPT-BASED SIGN LANGUAGE

ABSTRACT

Authors: Karwan Mahdi Hama Rawf, Aree Ali Mohammed, Ayub Othman Abdulrahman, Peshraw Ahmed Abdalla, Karzan J. Ghafor

This is an open access article distributed under the Creative Commons Attribution License CC BY 4.0, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited

DOI: 10.26480/aim.01.2023.08.14

Nowadays, Sign Language Recognition (SLR) plays a significant role in the disabled community because it is utilized as a learning tool for everyday tasks such as interaction, education, training, and other human activities. Arabic, Persian, and Kurdish all share the same writing system, the Arabic script. To classify sign languages written in the Arabic alphabet, this article employs convolutional neural network (CNN) and transfer learning (MobileNet) methods. The study's primary goal is to develop a common standard for alphabetic sign language in Arabic, Persian, and Kurdish. Different activation functions were used throughout the model's extensive training on the ASSL2022 dataset. The dataset contains a total of 81,857 images, gathered from two sources and representing the 40 Arabic-script-based alphabet signs. As the results show, the proposed models perform well, with an average training accuracy of 99.7% for the CNN and 99.32% for transfer learning. Compared with other research on languages written in the Arabic script, this work achieves better detection and classification accuracy.
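The abstract does not include the models themselves, but the core operation of a 2D CNN such as the one described can be sketched in a few lines. The snippet below is a minimal, illustrative NumPy implementation of a single 2D convolution followed by a ReLU activation; the image size, kernel values, and function names are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def conv2d(image, kernel, stride=1):
    """Valid 2D convolution (cross-correlation) of a single-channel image."""
    kh, kw = kernel.shape
    oh = (image.shape[0] - kh) // stride + 1
    ow = (image.shape[1] - kw) // stride + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            patch = image[i * stride:i * stride + kh,
                          j * stride:j * stride + kw]
            out[i, j] = np.sum(patch * kernel)
    return out

def relu(x):
    """ReLU activation, one of the activation functions a CNN might use."""
    return np.maximum(0.0, x)

# Hypothetical 64x64 grayscale sign image and a 3x3 edge-detection kernel
image = np.random.rand(64, 64)
kernel = np.array([[-1, -1, -1],
                   [-1,  8, -1],
                   [-1, -1, -1]], dtype=float)
feature_map = relu(conv2d(image, kernel))
print(feature_map.shape)  # (62, 62): valid convolution shrinks each side by 2
```

In a full classifier, many such learned kernels would be stacked with pooling and dense layers, while the transfer-learning variant would instead reuse MobileNet's pretrained convolutional layers and retrain only the final classification head for the 40 sign classes.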

Pages 08-14
Year 2023
Issue 1
Volume 7
