The Prialto Blog

How AI Can Improve Critical Thinking

Written by Kaylee-Anna Jayaweera | Nov 17, 2025 6:00:01 PM

Recent research supports the growing narrative that artificial intelligence is eroding our ability to think critically. A study from MIT's Media Lab found that ChatGPT users demonstrated the lowest brain engagement of the three groups studied, behind both Google search users and participants working without AI assistance. Over several months, the AI users became progressively lazier, often resorting to copy-and-paste behavior by the study's end.

But here's what these studies don't tell you: the problem isn't necessarily just AI itself. The issue is how we're using it. AI doesn't kill critical thinking—lazy use of AI does.  

When organizations and individuals employ AI strategically—with prompt engineering designed to challenge assumptions, iterate on ideas, and enhance their thinking process—it becomes a powerful catalyst for better outcomes. When used thoughtfully, AI can help teams build more effectively, help individual contributors challenge their own biases, and enable organizations to produce higher-quality outputs. 

Table of contents

  1. How to Use Generative AI To Improve Your Critical Thinking 
  2. The Critical Thinking Risk of AI
  3. AI + Human Expertise: The Only Sustainable Path Forward 
  4. Choose Strategic Over Convenient 

     

How to Use Generative AI To Improve Your Critical Thinking 

We’ve seen AI used effectively in our teams to preserve and even enhance critical thinking, primarily when team members push back and iterate. Rather than accepting AI's first output, your team should engage in a dialogue with the tool, questioning assumptions, challenging conclusions, and refining results. 

A lot of strategic generative AI use comes down to how you prompt the LLMs you work with and how you craft your queries to evolve your thinking.

Here are four practical strategies to implement this approach: 


1. Use AI as Your Sparring Partner

One option is to treat AI as an intellectual sparring partner—a tool that challenges your thinking by taking opposing viewpoints. Prompt your AI tool to argue against your position, identify weaknesses in your logic, or present alternative perspectives you haven't considered. 

This type of adversarial collaboration can strengthen your ideas in every phase of project development, from initial conception through final testing. 

Example: Operations Team Workflow Optimization 

Imagine your operations team is developing a new workflow for customer onboarding. Instead of asking AI to generate the workflow, you could: 

  1. Draft your initial workflow based on team expertise 
  2. Present it to AI and ask: "What are the potential bottlenecks in this workflow?"
  3. Request counterarguments: "Argue why this approach might fail during high-volume periods"
  4. Ask for alternative perspectives: "If you were a customer going through this process, what would frustrate you?"
  5. Use the AI's challenges to refine and strengthen your workflow

 

This approach keeps your team engaged in deep thinking while leveraging AI to surface blind spots and challenge your assumptions. As an expert in your area, you know how a process is supposed to work. Your conversations with AI tools can help you uncover the factors you aren't thinking about.
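
If your team works with an LLM through an API rather than a chat window, the same sparring-partner pattern can be scripted so that every challenge is posed against the team's own draft. The sketch below is illustrative only: it assumes the OpenAI Python SDK and a chat model, and the onboarding steps, prompts, and model name are placeholders you would swap for your own.

```python
from openai import OpenAI

client = OpenAI()

# The team drafts the workflow first; the model is only asked to critique it.
workflow_draft = """\
1. Lead signs the contract
2. Account manager schedules a kickoff call
3. Client completes the intake questionnaire
4. Onboarding specialist configures the account
"""

challenge_prompts = [
    "What are the potential bottlenecks in this workflow?",
    "Argue why this approach might fail during high-volume periods.",
    "If you were a customer going through this process, what would frustrate you?",
]

# Keep the whole conversation so each new challenge builds on the last exchange.
messages = [
    {"role": "system", "content": "You are a critical reviewer. Challenge assumptions; do not rewrite the workflow."},
    {"role": "user", "content": f"Here is our draft customer onboarding workflow:\n{workflow_draft}"},
]

for prompt in challenge_prompts:
    messages.append({"role": "user", "content": prompt})
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)
    critique = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": critique})
    print(f"\n--- {prompt}\n{critique}")
```

The point of the loop is that humans wrote the workflow and the model only pushes back on it; the team still decides which challenges are worth acting on.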

Learn more about building workflows with AI that scale

2. Ask Questions

For typical generative AI use cases—writing emails, creating blog posts, drafting proposals—the cardinal rule is: never use the first result. Instead, engage in a questioning process that forces you to think critically about the output. 

Essential questions to ask yourself include: 

  • Is this in my voice? Does it sound like our brand, or like generic corporate-speak? 
  • Does this represent my company well? Is the tone, language, and positioning aligned with our brand? 
  • Is this consultative? Am I providing value, or just pushing a product? 
  • Does this position me as an authoritative partner? Am I demonstrating expertise and building trust? 
  • Is this factually accurate? Have I verified any claims or statistics? 
  • Would we be proud to have our name on this? Is the quality what we want to be known for? 

Example: Sales Representative Drafting an Email

A sales rep is reaching out to a qualified lead after a discovery call. Instead of using AI's first draft, here's what they do: 

  • First attempt: Ask AI to draft an email based on the call notes 
  • Question the output: "Does this sound too generic? Rewrite it to be more consultative and reference specific pain points from our call." 
  • Refine: "This doesn't sound like me. Here's my typical writing style: [provide examples]. Adapt the tone accordingly." 
  • Personalize: "Add a specific detail about their industry challenge that we discussed." 
  • Final check: Read the final version and make manual adjustments to ensure authenticity 

This iterative questioning and prompting process keeps you engaged with the content and ensures the final output truly represents you and your brand.
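
For teams that automate parts of their outreach, the same never-use-the-first-result discipline can be built into a script. This is a minimal sketch, assuming the OpenAI Python SDK; the call notes, style samples, and model name are made up for illustration, and the rep still reviews and edits the result by hand.

```python
from openai import OpenAI

client = OpenAI()

# Illustrative inputs: notes from the discovery call and samples of the rep's own voice.
call_notes = "Prospect struggles with slow onboarding and wants a rollout before Q3."
style_samples = [
    "Thanks for walking me through your setup today. A few thoughts below.",
    "No pressure on timing. I'd rather get this right than get it fast.",
]

messages = [
    {"role": "user", "content": f"Draft a short follow-up email based on these call notes:\n{call_notes}"},
]
first_draft = client.chat.completions.create(model="gpt-4o", messages=messages)
messages.append({"role": "assistant", "content": first_draft.choices[0].message.content})

# Never send the first result: push back on tone and voice before accepting anything.
messages.append({
    "role": "user",
    "content": (
        "This sounds too generic. Rewrite it to be more consultative and to reference "
        "the specific pain points from the call. Match my writing style, shown in these "
        "samples:\n- " + "\n- ".join(style_samples)
    ),
})
revised = client.chat.completions.create(model="gpt-4o", messages=messages)
print(revised.choices[0].message.content)  # The rep reads and edits this by hand before sending.
```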

3. Use AI as Your Devil's Advocate

This approach flips the typical AI workflow: instead of prompting your AI tool to generate content, you create first, then ask AI to identify flaws, challenge assumptions, and point out gaps in your thinking.  

This method is particularly powerful for: 

  • Strategic planning and business proposals 
  • Content creation and thought leadership 
  • Process improvement initiatives 
  • Problem-solving and troubleshooting 

 
Example: Website Sign-Up Process Workflow Review

A website team is designing a new sign-up workflow for enterprise clients. They can use AI as a devil's advocate by following these steps:

  1. Present the complete workflow to AI with full context. 
  2. Ask targeted questions such as:
    1. "What are the three biggest risks in this sign-up process?"
    2. "Where might leads get confused or frustrated?"
    3. "What assumptions am I making that might not be true for all lead segments?"
    4. "If this workflow fails, what would be the most likely reasons?"
    5. "What questions haven't I asked that I should be considering?"
  3. Use AI's critiques to strengthen weak points and address gaps
  4. Test the revised workflow with a small group before full implementation

This approach ensures you maintain ownership of the solution while using AI to identify blind spots your team might miss. Critical thinking happens before and after AI involvement, not as a replacement for it. 
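
If the workflow lives in a document and your team already reaches an LLM via API, the devil's-advocate questions can be run as a batch. This is a minimal sketch, assuming the OpenAI Python SDK; the file name, questions, and model are illustrative, and each question is asked in a fresh request so the critiques stay independent of one another.

```python
from pathlib import Path
from openai import OpenAI

client = OpenAI()

# The team writes the sign-up workflow first; the file name here is a placeholder.
signup_workflow = Path("signup_workflow.md").read_text()

questions = [
    "What are the three biggest risks in this sign-up process?",
    "Where might leads get confused or frustrated?",
    "What assumptions am I making that might not be true for all lead segments?",
    "If this workflow fails, what would be the most likely reasons?",
    "What questions haven't I asked that I should be considering?",
]

# Ask each question in its own request so one critique doesn't color the next.
for question in questions:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "Act as a devil's advocate. Point out flaws and gaps; do not redesign the workflow."},
            {"role": "user", "content": f"{signup_workflow}\n\n{question}"},
        ],
    )
    print(f"\n{question}\n{response.choices[0].message.content}")
```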

4. Use AI as Your Peer Editor

Like the devil's advocate approach, generative AI can be an excellent peer editor. This keeps you, the human expert, in the driver's seat while benefiting from AI's ability to spot patterns, suggest improvements, and identify areas for clarification.

AI can be particularly effective at: 

  • Identifying unclear or ambiguous language 
  • Spotting logical gaps or inconsistencies 
  • Suggesting structural improvements 
  • Highlighting assumptions that need support 
  • Analyzing whether content matches the intended audience level 

 

Example: Technical Documentation Review

A product team has created technical documentation for a new feature. To use AI as a peer editor: 

  1. Complete your first draft independently, using your expertise and knowledge 
  2. Define your audience: "This documentation is for software engineers with 2-5 years of experience who are familiar with REST APIs but new to our platform"
  3. Ask AI to review for that specific audience:
    1. "Does this documentation assume knowledge the target audience might not have?"
    2. "Where might someone with this background get confused?"
    3. "Are there technical terms that need better explanation?"
    4. "Is the logical flow clear from section to section?"
  4. Review AI's suggestions critically—accept changes that genuinely improve clarity and reject those that don't align with your technical judgment
  5. Make final edits manually, ensuring the documentation maintains your voice and reflects your depth of expertise
 

This workflow allows you to benefit from an external perspective while keeping your technical expertise and judgment at the forefront. Over time, you'll also notice patterns in AI's feedback that can help you improve your own writing and communication style, or help you identify and work around your AI tool's weaknesses.
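
For documentation that lives in version control, the peer-editor pass can be scripted as part of the review routine. The sketch below assumes the OpenAI Python SDK; the file name, audience description, and model are placeholders, and the author still decides which suggestions to accept.

```python
from pathlib import Path
from openai import OpenAI

client = OpenAI()

# First draft written entirely by the product team; the file name is a placeholder.
draft = Path("feature_docs.md").read_text()

audience = (
    "software engineers with 2-5 years of experience who are familiar with "
    "REST APIs but new to our platform"
)

review = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "system",
            "content": f"You are a peer editor reviewing documentation written for {audience}. "
                       "Suggest improvements; do not rewrite the document.",
        },
        {
            "role": "user",
            "content": "Review this draft. Flag assumed knowledge the audience might not have, "
                       "sections where this reader could get confused, terms that need better "
                       "explanation, and places where the logical flow breaks down:\n\n" + draft,
        },
    ],
)
# The author reviews these suggestions critically and makes the final edits by hand.
print(review.choices[0].message.content)
```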

The Critical Thinking Risk of AI 

It's essential to understand the real risks of poor AI implementation or over-reliance on it. The consequences extend beyond individual cognitive decline—they can impact brand reputation, output quality, and your team's long-term capabilities.  

Loss of Authenticity 

When teams use out-of-the-box generative AI tools without iteration or customization, outputs become generic and indistinguishable from competitors. Research shows that 56-57% of employees hide their AI use or present AI-generated content as their own, while a similar percentage admit they do not check AI-generated content for accuracy. This creates a dangerous cycle where unverified, generic content comes to represent your brand.  

Growing Dependency 

Cognitive offloading means that the more we rely on AI for routine cognitive tasks, the less we practice these skills ourselves. Duke University's Learning Innovation research notes that students using AI for writing and research tasks experienced reduced cognitive load but demonstrated poorer reasoning and argumentation skills compared to those using traditional methods. 

Eroding Technical Skills 

When team members consistently defer to AI without engaging with the underlying work, their technical abilities atrophy. Research published in Smart Learning Environments found that over-reliance on AI dialogue systems potentially impairs critical cognitive skills, including critical thinking, decision-making, and analytical thinking.  

In contrast, a Microsoft Research study of knowledge workers found that those with higher self-confidence used critical thinking more when working with AI, a trend that suggests what we already know: pairing expertise with technology is the key to success.

AI + Human Expertise: The Only Sustainable Path Forward 

Here's the fundamental truth that often gets lost in discussions about AI and critical thinking: AI is just a tool. It's a powerful one, but it requires human judgment, expertise, and creativity to be truly effective. 

As we explore in depth in our article on the EQ-AI execution stack, artificial intelligence can be used to enhance human capabilities, not replace them. The most successful organizations are those that understand this distinction and build it into their workflows. 

The Federal Reserve Bank of St. Louis found that knowledge workers using generative AI saved an average of 5.4% of work hours—about 2.2 hours per week for someone working 40 hours. But here's what matters more than the time savings: what people do with that freed-up time. 

The most effective AI implementation follows this framework: 

1. Human expertise defines the problem and sets the strategy

Your team understands the context, nuances, and business objectives in ways AI cannot. They should drive problem definition and strategic direction. 

2. AI assists with research, analysis, and draft generation

Use AI to gather information, identify patterns, generate initial drafts, and explore possibilities more quickly than manual methods. 

3. Human judgment evaluates, refines, and decides

Your team applies critical thinking to assess AI outputs, identify gaps, challenge assumptions, and make final decisions.

4. Human creativity personalizes and innovates

Add the insights, experiences, and creative touches that make work distinctive and valuable. 


This framework focuses on improving outputs rather than just pursuing efficiency for efficiency's sake. After all, the goal isn't just to work faster—it's to work better. When you free your team from repetitive tasks, they should reinvest that time into deeper thinking, better problem-solving, and more innovative solutions. 

Choose Strategic Over Convenient 

AI can erode critical thinking—but only when we let it. The difference between organizations that thrive with AI and those that see declining quality and capability comes down to a single choice: Are you using AI strategically or just conveniently? 

The organizations that will succeed in the AI era aren't those that use AI the most—they're the ones that use it the smartest. By teaching your team to prompt strategically, push back, iterate, and think critically about AI outputs, you can harness its power while avoiding the cognitive pitfalls that concern researchers.