Abstract— The use of Large Language Models (LLMs) such as ChatGPT in calculus education offers significant potential, yet their effectiveness depends heavily on the prompting technique employed. This study systematically evaluates and compares the performance of three prompting techniques: Zero-Shot, Few-Shot, and Chain-of-Thought (CoT), in solving calculus problems (integrals, derivatives, and limits) using the GPT-4o mini model. A total of 270 responses to 90 problems of varying difficulty were manually evaluated by experts on four criteria: clarity, correctness, strategy, and representation. The data were analyzed with a Logistic Mixed Effects Model (LMM) to test the influence of each variable. The results consistently show that the Chain-of-Thought (CoT) technique is significantly superior, particularly in improving the clarity and strategic quality of the solutions. The analysis also reveals that problem type is a critical factor, with limit problems posing the greatest challenge to the model. The study concludes that the choice of prompting technique is crucial for maximizing the potential of LLMs in education and recommends CoT as the most effective strategy for generating solutions that are not only accurate but also of high pedagogical value.