A Hand Gesture Recognition-Based Object Detection Neural Network Model for Prosthetic Hand Control
Abstract
A large share of people disabled by accidental injury are patients recovering from arm amputation, and the loss of a limb can cause psychological disorders and even severe trauma. Functional hand prostheses are therefore increasingly needed. Hand gesture recognition (HGR) can serve as a control scheme for a prosthetic hand, since objects of similar shape tend to be grasped with the same hand movement. This work uses three common gestures: pinch, pick, and grab. A neural network model capable of implementing this concept is required. The developed model uses the pretrained YOLOv7 and YOLOv7-tiny networks, with a dataset collected by scraping publicly available images. The resulting dataset contains 317 images and 2,278 object labels, split 80:20 into training and testing sets. Training was carried out in the PyTorch framework for 300 epochs. The per-epoch loss values show that the model is trainable on the given dataset. The trained models were then evaluated on the testing set using three metrics: parameter count, frames per second (FPS), and mean average precision (mAP). Overall, the model pretrained on YOLOv7 scored highest, with 36.9 million parameters, 161 FPS, and 98.11% mAP. These results suggest the model has the potential to be further developed and deployed to support functional prosthetic hand control.
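As a rough illustration of the data preparation step, the sketch below performs the 80:20 train/test split described above. It assumes the scraped images sit in a flat `dataset/` directory with YOLO-format `.txt` label files sharing each image's stem; the directory names, file extension, and random seed are illustrative assumptions, not details taken from the paper.

```python
import random
import shutil
from pathlib import Path

# Hypothetical layout: every image in dataset/ has a matching
# YOLO-format .txt label file with the same file stem.
SRC = Path("dataset")
DST = Path("split")
TRAIN_RATIO = 0.8  # 80:20 split, as stated in the abstract

images = sorted(SRC.glob("*.jpg"))
random.seed(42)  # illustrative seed, not from the paper
random.shuffle(images)

n_train = int(len(images) * TRAIN_RATIO)
subsets = {"train": images[:n_train], "test": images[n_train:]}

for name, files in subsets.items():
    img_dir = DST / "images" / name
    lbl_dir = DST / "labels" / name
    img_dir.mkdir(parents=True, exist_ok=True)
    lbl_dir.mkdir(parents=True, exist_ok=True)
    for img in files:
        shutil.copy(img, img_dir / img.name)
        label = img.with_suffix(".txt")
        if label.exists():  # skip images without annotations
            shutil.copy(label, lbl_dir / label.name)

print({name: len(files) for name, files in subsets.items()})
```

With a split like this in place, training would proceed with the YOLOv7 codebase's own scripts for the stated 300 epochs; the exact commands depend on the repository version and are not specified in the abstract.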
