One-Way Communication System using CNN for Interaction between Deaf and Blind People

https://doi.org/10.26594/register.v11i1.4556

Authors

  • Juli Sulaksono Universitas Udayana (Indonesia)
  • Ida Ayu Dwi Giriantari Universitas Udayana (Indonesia)
  • Made Sudarma Universitas Udayana (Indonesia)
  • Ida Bagus Alit Swarmardika Universitas Udayana (Indonesia)

Keywords:

Deaf, Blind People, SIBI, Braille, CNN

Abstract

Communication is essential for everyone, including individuals who are deaf or blind. People with disabilities have the same right to communicate as the general public, so a one-way communication system between deaf and blind people is needed. The system takes sign language as input and produces Braille as output. The input uses SIBI (the Indonesian Sign Language System), which is recorded with a camera and then processed by a Convolutional Neural Network (CNN). The CNN workflow is divided into three parts: the training process using Teachable Machine, the SIBI dataset model, and the detection process. The output of this process is text, which is then converted into Braille using an image index so that blind users can read it. System performance is analysed using a confusion matrix; the analysis shows an accuracy of 85%, a precision of 90%, and a recall of 82%.
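The confusion-matrix evaluation described above can be sketched as follows. This is a minimal illustration of how accuracy, precision, and recall are derived from a multi-class confusion matrix; the small matrix is made up for demonstration and is not the paper's data, and macro averaging is assumed since the abstract does not state an averaging scheme.

```python
import numpy as np

def metrics_from_confusion(cm):
    """Derive accuracy and macro-averaged precision/recall from a
    square confusion matrix (rows = true class, cols = predicted)."""
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)                          # correct predictions per class
    accuracy = tp.sum() / cm.sum()            # all correct / all samples
    precision = np.mean(tp / cm.sum(axis=0))  # per-class TP / predicted count
    recall = np.mean(tp / cm.sum(axis=1))     # per-class TP / actual count
    return accuracy, precision, recall

# Illustrative two-class matrix (not the paper's results)
acc, prec, rec = metrics_from_confusion([[5, 1], [1, 5]])
print(f"accuracy={acc:.2f} precision={prec:.2f} recall={rec:.2f}")
```

With micro averaging instead, precision and recall would both equal the accuracy, so the distinct 85%/90%/82% figures in the abstract suggest a per-class or macro-style computation.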



Published

2025-06-25

How to Cite

[1] J. Sulaksono, I. A. D. Giriantari, M. Sudarma, and I. B. A. Swarmardika, “One-Way Communication System using CNN for Interaction between Deaf and Blind People”, Register: Jurnal Ilmiah Teknologi Sistem Informasi, vol. 11, no. 1, Jun. 2025.

Section

Article