The rise of social media platforms has provided vast amounts of textual data that can be used for personality prediction with natural language processing (NLP) techniques. This study presents a comparative analysis of the performance and computational efficiency of two transformer-based models, BERT and XLNet, for predicting the Big Five personality traits from Indonesian tweets collected from Twitter (now known as X). The dataset of 9,708 tweets labeled by psychologists was preprocessed, and the prediction task was framed as multi-label classification. Various configurations were tested, including different optimizers, learning rates, batch sizes, numbers of epochs, and dropout rates. The results show that BERT outperforms XLNet in overall F1-score (43.362% vs. 42.822%) and inference time (215.72 ms vs. 7,714.83 ms), demonstrating its greater efficiency and stability. Although XLNet achieved slightly better results on certain traits, such as Extraversion and Openness, its performance was more sensitive to hyperparameter choices and constrained by longer computation times. This research highlights BERT's suitability for real-time Indonesian text classification tasks and suggests further exploration of XLNet's potential using larger datasets and optimized configurations.
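To make the multi-label setup concrete, the sketch below shows one common way to attach a five-output classification head to a BERT encoder and obtain per-trait probabilities via a sigmoid. It is a minimal illustration only: the checkpoint name (`indobenchmark/indobert-base-p1`), the 0.5 decision threshold, and the `predict_traits` helper are assumptions for demonstration, not the exact configuration used in this study.

```python
# Minimal sketch of multi-label Big Five prediction with a BERT encoder.
# Checkpoint, threshold, and function names are illustrative assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

TRAITS = ["Openness", "Conscientiousness", "Extraversion", "Agreeableness", "Neuroticism"]
MODEL_NAME = "indobenchmark/indobert-base-p1"  # assumed Indonesian BERT checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME,
    num_labels=len(TRAITS),
    problem_type="multi_label_classification",  # fine-tuning then uses BCEWithLogitsLoss
)

def predict_traits(tweet: str, threshold: float = 0.5) -> dict:
    """Return per-trait probabilities and binary labels from sigmoid-activated logits."""
    inputs = tokenizer(tweet, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = torch.sigmoid(logits).squeeze(0)
    return {trait: (float(p), bool(p >= threshold)) for trait, p in zip(TRAITS, probs)}

# Example usage on an Indonesian tweet ("I enjoy meeting new people and trying new things").
print(predict_traits("Saya suka bertemu orang baru dan mencoba hal-hal baru."))
```

The same head-on-encoder pattern applies to XLNet by swapping in an XLNet checkpoint; differences in inference time such as those reported above would then come from the encoders themselves rather than the classification head.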