The rising incidence of DNS exfiltration attacks calls for intrusion detection systems that are not only accurate but also transparent, so that human analysts can understand the decisions a model makes. To address this challenge, the present study applies and compares several post-hoc Explainable Artificial Intelligence (XAI) methods to explain the decisions of a Deep Neural Network (DNN) in detecting DNS exfiltration attacks. The methodology consists of constructing a DNN with a pyramid architecture and applying six XAI methods to it: global SHAP, Permutation Feature Importance (PFI), and Accumulated Local Effects (ALE) for global explanations, and local SHAP, LIME, and Anchor for local explanations. The CIC-Bell-DNS-EXF2021 dataset was used for evaluation. The results show that SHAP provides the most comprehensive interpretation both globally and locally, although it demands substantial computational resources: 38.73 seconds and 36.8 bytes for global explanations, and 5.22 seconds and 31.01 bytes for local explanations. PFI and LIME are cheaper (4.45 bytes in 28.22 seconds and 27.69 bytes in 2.58 seconds, respectively) but yield less comprehensive information. ALE consumes 10.43 bytes in 51.02 seconds, while Anchor consumes the most memory, 784.59 bytes in 9.24 seconds. The principal contribution of this study is a systematic examination of the strengths, weaknesses, and capabilities of these XAI approaches. The findings underscore the need to integrate XAI into artificial-intelligence-based detection systems to improve their transparency.
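
To make the pyramid architecture concrete, the sketch below builds a DNN whose hidden layers shrink progressively toward the output. It is a minimal illustration assuming tabular input features; the layer widths, activations, and optimizer are assumptions for exposition, not the study's exact hyperparameters.

```python
# Minimal sketch of a pyramid-architecture DNN for binary DNS-exfiltration
# detection. The layer widths (64 -> 32 -> 16), activations, and optimizer
# are illustrative assumptions, not the study's reported configuration.
import tensorflow as tf


def build_pyramid_dnn(n_features: int) -> tf.keras.Model:
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(n_features,)),
        tf.keras.layers.Dense(64, activation="relu"),    # widest hidden layer
        tf.keras.layers.Dense(32, activation="relu"),    # layers narrow toward
        tf.keras.layers.Dense(16, activation="relu"),    # the output ("pyramid")
        tf.keras.layers.Dense(1, activation="sigmoid"),  # benign vs. exfiltration
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```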
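
The explainers themselves can be attached to such a model through the standard `shap` and `lime` libraries. The following sketch shows global and local SHAP via the model-agnostic KernelExplainer, a local LIME explanation for one instance, and a hand-rolled permutation feature importance loop; the variable names `model`, `X_train`, `X_test`, and `y_test` are placeholders, and the calls shown are the libraries' generic entry points rather than the study's experimental code.

```python
# Hedged sketch of global and local explainers applied to a trained binary
# classifier `model` (e.g. the pyramid DNN above). X_train, X_test, and
# y_test are assumed NumPy arrays of tabular features and 0/1 labels.
import numpy as np
import shap
from lime.lime_tabular import LimeTabularExplainer

# --- SHAP (global and local): model-agnostic KernelExplainer ---
background = shap.sample(X_train, 100)               # background distribution
shap_explainer = shap.KernelExplainer(lambda x: model.predict(x).ravel(),
                                      background)
shap_values = shap_explainer.shap_values(X_test[:50])
shap.summary_plot(shap_values, X_test[:50])          # global feature ranking
# Each row of shap_values is also a local, per-instance attribution.

# --- LIME (local): explain one prediction; LIME expects class probabilities ---
def predict_proba(x):
    p = model.predict(x).ravel()
    return np.column_stack([1.0 - p, p])

lime_explainer = LimeTabularExplainer(X_train, mode="classification")
lime_exp = lime_explainer.explain_instance(X_test[0], predict_proba,
                                           num_features=10)

# --- PFI (global): accuracy drop when one feature column is shuffled ---
def permutation_importance(X, y, n_repeats=5):
    rng = np.random.default_rng(0)
    base = np.mean((model.predict(X).ravel() > 0.5) == y)
    scores = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])     # break feature-label link
            acc = np.mean((model.predict(Xp).ravel() > 0.5) == y)
            scores[j] += (base - acc) / n_repeats    # larger drop = more important
    return scores
```

ALE and Anchor explanations can be produced analogously with libraries such as `alibi`, which provides implementations of both methods.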