Do you often use words like “please,” “can you,” or “thank you” when talking to ChatGPT or other AI chatbots? While these phrases reflect human politeness, they may unintentionally limit the performance of advanced AI models. Reports suggest that such courteous language can confuse generative AI systems and reduce the accuracy of their responses. These models, including ChatGPT and Google’s Gemini, are built for natural language understanding and conversational context, so users can interact with them naturally without technical jargon. Even so, a few best practices can make prompts noticeably more effective.

In professional settings, politeness is a key part of communication, and it is natural to extend it to AI interactions. However, phrases like “thank you” and “please” add extra words that the model must process, lengthening response generation and increasing energy use, and reportedly costing companies like OpenAI millions of dollars. Experts therefore suggest that short, clear, direct prompts not only improve accuracy but also reduce energy consumption and the processing load on AI infrastructure.
Moreover, terms such as “can” and “could” have been singled out as particularly problematic. According to a report by BGR, opening a prompt with “can you” may cause the AI to misinterpret the request or deliver unexpected results, because such phrasing softens the instruction and blurs what is actually being asked. To get the most out of tools like ChatGPT, users are advised to avoid vague or overly polite phrasing and instead give direct, action-oriented instructions, which improves both performance and sustainability.
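
To make the contrast concrete, here is a minimal sketch, assuming the official openai Python package with an API key set in the environment; the model name and prompt wording are illustrative assumptions for demonstration, not details taken from the reports cited above.

```python
# Minimal sketch: contrasting a padded, polite prompt with a direct one.
# Assumes the official `openai` Python package and OPENAI_API_KEY in the
# environment; the model name and prompt text are illustrative only.
from openai import OpenAI

client = OpenAI()

# Padded phrasing: extra words ("Hi!", "can you please", "Thank you!") that the
# model still has to process before reaching the actual task.
polite_prompt = "Hi! Can you please summarize this paragraph for me? Thank you so much!"

# Direct, action-oriented phrasing: same task, fewer words, clearer instruction.
direct_prompt = "Summarize this paragraph in two sentences."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name for illustration
    messages=[{"role": "user", "content": direct_prompt}],
)
print(response.choices[0].message.content)
```

The same principle applies in the chat interface itself: stating the task directly, without conversational padding, gives the model less to parse and a clearer instruction to act on.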
