Abstract— Indonesian Sign Language (BISINDO) is a sign language used by people with hearing disabilities in Indonesia to communicate. However, a communication gap remains between people with disabilities and non-disabled people, especially in the use of technology. This research aims to develop a video-based BISINDO classification system using the You Only Look Once version 11 (YOLOv11) model, which is expected to help bridge communication between people with disabilities and non-disabled people. The BISINDO+ dataset developed in this research consists of 26 classes with 6,389 images, aggregated from Kaggle and a personal dataset with diverse backgrounds created by the authors. Augmentation and hyperparameter tuning effectively increased model performance, evaluated with accuracy, precision, recall, and mAP metrics. The results of this study show that YOLOv11 achieved the best performance on the validation set, with a precision of 0.998, recall of 1.000, and mAP of 0.995, showing a precision diffe