AI's Reasoning Revolution Explained

Transparent reasoning is making AI a better researcher, teacher, and collaborator.

Illustration featuring a graphic profile of a face looking to the right, with wavy colours in the background

A new paradigm has been slowly unfolding in the AI world, most recently highlighted by the overhyped DeepSeek R1 launch. Only a year ago, experts were saying the future of AI would be driven by scaling: bigger and bigger LLMs trained on ever-vaster sources of data and powered by virtually unlimited computing power, all of which would cost exponentially more money. And while we're still on that trajectory to some extent, there's been an intriguing new development that has changed the game: the emergence of language models that "think" before they answer and demonstrate their reasoning process to the user. This shift represents a fundamental change in how artificial intelligence approaches problem-solving, and the potential impact is huge.

From Black Box to Glass Box

Illustration of a clear glass cube on a minimal background

To appreciate where we are now, it helps to look at where we started. Earlier AI models functioned primarily as sophisticated pattern-matching systems that produced answers without explanation. Impressive as they were, these systems left us wondering how they made their decisions and where their answers came from. Today, we're seeing a transformation as companies like OpenAI, Google, Perplexity, and DeepSeek have released new models that explain their reasoning.

This transparency isn't just about showing intermediate steps, though that alone has helped these models produce better results with fewer hallucinations. It's also about making AI systems more comprehensible and trustworthy. When an AI model demonstrates its reasoning process, we gain insight into how it arrives at conclusions, helping us better understand both its capabilities and limitations. It also helps researchers verify that the models are doing what they're asked to do.

Understanding the "Thinking" Process

Illustration of a humanoid robot sitting down, resting its chin on its hand

To appreciate how these new AI models work, you need to know what happens behind the scenes during what's called "inference": the process by which a model generates its response to your input. Traditional AI models generate their response in a single streamed pass, predicting each next word based solely on the statistical patterns encoded in their neural networks. In this way they work like quick-thinking students who immediately blurt out answers based on pattern recognition. Reasoning models, on the other hand, are more like thoughtful problem-solvers who take time to work through questions step by step. They're still large neural networks at their core, but they've been trained (typically with reinforcement learning) to produce explicit intermediate reasoning before committing to an answer. With these models, inference plays a more dynamic role, enabling them to "think" through a problem by generating intermediate reasoning steps, evaluating multiple potential pathways, and refining the final answer.
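To make the contrast concrete, here's a minimal conceptual sketch in Python. The model() function is a hypothetical stand-in, not a real LLM call; the point is the shape of the two inference styles, not the implementation.

```python
import random

def model(prompt: str) -> str:
    """Hypothetical stand-in for a language model call."""
    return f"response to: {prompt!r}"

def single_pass(question: str) -> str:
    # Traditional inference: one shot, straight to the answer.
    return model(question)

def long_thinking(question: str, n_paths: int = 3) -> str:
    # Reasoning-style inference: spend extra compute generating
    # several candidate chains of thought, evaluate them, and
    # refine the most promising one into a final answer.
    candidates = []
    for i in range(n_paths):
        chain = model(f"Think step by step (path {i + 1}): {question}")
        score = random.random()  # stand-in for a self-evaluation pass
        candidates.append((score, chain))
    _, best_chain = max(candidates)
    return model(f"Refine into a final answer: {best_chain}")

print(single_pass("What is 17 x 24?"))
print(long_thinking("What is 17 x 24?"))
```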

This new approach, sometimes called "long-thinking" inference or "chain-of-thought" reasoning, gives models extra time to consider problems from multiple angles. By leveraging extra compute time, the model can simulate a human-like chain of thought and thereby produce much better responses. OpenAI's o1 model was one of the pioneers in this space, taking extra time to think through problems before providing answers (though it didn't reveal its thoughts to users, both as a safety measure and as a way to protect its competitive advantage). Now that the clear advantages of this approach are well understood, other AI companies have followed suit. Google's latest version of Gemini (2.0 Flash Thinking) and DeepSeek's R1 take a similar approach. Looking to stay ahead, OpenAI recently launched o3-mini, a smaller and more efficient version of their next-generation o3 model. These newer models show users their thought process in near real-time, much like a teacher demonstrating problem-solving on a whiteboard. Reading through their "thinking" can be almost as interesting as the final response.
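For developers, switching to a reasoning model is often just a different model name in an API call. Here's a minimal sketch using the OpenAI Python SDK; the o3-mini model and the reasoning_effort parameter are assumptions that depend on your account access and SDK version.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Ask a reasoning model to work through a problem. The
# reasoning_effort setting trades response time for more
# thorough "thinking" (supported for o3-mini at the time
# of writing; availability may vary).
response = client.chat.completions.create(
    model="o3-mini",
    reasoning_effort="medium",
    messages=[
        {"role": "user", "content": "Prove that the sum of two odd numbers is even."}
    ],
)

print(response.choices[0].message.content)
```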

There are caveats, however. This more thorough approach requires more computing power and takes longer to generate responses. The benefit is that it often leads to more reliable and trustworthy results. This trade-off between speed and accuracy is shifting how we use AI in practical applications, though it isn't always clear to users when and how to apply it.

Deep Research: Extending the Reasoning Paradigm

Abstract illustration of a tree with branches extending into colour fields filled with organic shapes

The evolution of reasoning models has opened up exciting new possibilities, perhaps most notably in the emergence of "Deep Research" tools. Major AI companies including Google, OpenAI, and Perplexity have each recently unveiled their own version of these sophisticated research assistants, all under the same name. These tools represent a significant leap forward, combining the step-by-step reasoning we've discussed with the ability to retrieve and synthesize information in real-time.

OpenAI's Deep Research tool, released in early 2025, showcases how far this technology has come. Building on their o3 model's reasoning capabilities, it uses extended inference compute to methodically work through complex research queries. Going beyond just thinking through problems, it actively searches for and incorporates relevant information from the web as it reasons. Google's integration of Deep Research into Gemini and Perplexity's similar offering demonstrate how this approach is becoming the new standard for AI-assisted research.

What makes these tools particularly powerful is their ability to merge reasoning with retrieval. Think of it as having a research assistant who not only thinks carefully about your question but also knows exactly when and where to look for additional information, and who keeps track of it all and organizes it for you. The result is more comprehensive and trustworthy research reports, with the AI showing its work every step of the way, from initial reasoning to information gathering to final synthesis, and providing links to all of its sources.
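The details differ between vendors, but the core loop is easy to sketch. Here's a simplified, hypothetical version in Python; model() and search() are stand-ins for a real reasoning model and search API, not any vendor's actual implementation.

```python
def model(prompt: str) -> str:
    """Hypothetical stand-in for a reasoning-model call."""
    return f"response to: {prompt!r}"

def search(query: str) -> list[dict]:
    """Hypothetical stand-in for a web search API."""
    return [{"url": "https://example.com", "snippet": "..."}]

def deep_research(question: str, max_rounds: int = 5) -> str:
    notes, sources = [], []
    for _ in range(max_rounds):
        # Reason about what information is still missing.
        query = model(f"Given notes {notes}, what should we search next for: {question}?")
        # Retrieve and fold the results back into the reasoning loop.
        for result in search(query):
            notes.append(result["snippet"])
            sources.append(result["url"])
        # Decide whether we know enough to stop early.
        if "DONE" in model(f"Are these notes sufficient? {notes}"):
            break
    # Synthesize a cited report from the accumulated notes.
    return model(f"Write a report on {question} using {notes}, citing {sources}")
```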

Where This Approach Works Best

Reasoning-enabled models particularly excel in situations that benefit from methodical, step-by-step thinking, or when there is a lot of data to make sense of:

  • Complex Problem Solving: They perform well on tasks like mathematical proofs, coding challenges, and scientific problem solving by breaking down questions into intermediate steps.
  • Technical Support and Programming: Developers benefit greatly here, as these models can generate and verify code, explain their logic, and provide step-by-step debugging help.
  • Data Analysis and Decision Making: In work settings, these models are great at detailed report generation and complex decision support, as the reasoning process helps justify their conclusions.

They're also great at providing general problem-solving help, tackling challenges that require thoughtful instructions, or answering any question that benefits from a clear explanation. And they're especially helpful when they can serve as a tutor. Whether the model is guiding you through a programming challenge or helping you analyze data, by breaking complex processes into understandable steps and exposing the rationale behind the result, it lets you learn far more than if you were simply handed the final answer.
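As a concrete illustration of the tutoring use case, here's a minimal sketch of a step-by-step debugging session using the same OpenAI SDK pattern as above; the model name and prompt are illustrative assumptions, and any reasoning model would do.

```python
from openai import OpenAI

client = OpenAI()

buggy_code = """
def average(numbers):
    total = 0
    for n in numbers:
        total += n
    return total / len(numbers)  # crashes on an empty list
"""

# Ask the model to walk through the bug step by step rather
# than just handing back a corrected version.
response = client.chat.completions.create(
    model="o3-mini",
    messages=[{
        "role": "user",
        "content": (
            "Walk me through what's wrong with this function, "
            "step by step, before suggesting a fix:\n" + buggy_code
        ),
    }],
)

print(response.choices[0].message.content)
```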

When to Avoid It

While reasoning models can be incredibly flexible and helpful, it's important to understand their limitations. Creative tasks don't always benefit from explicit reasoning; imagine trying to explain every word choice when writing a poem. Quick conversational exchanges can become cumbersome with too much explanation. And for straightforward questions, detailed reasoning might overcomplicate simple answers. Here are the use cases to avoid:

  • Creative and Subjective Tasks: For tasks such as creative writing, art, or nuanced emotional conversations, step-by-step reasoning may not add value and can sometimes hinder spontaneity.
  • Real-Time Interaction: The extra processing time needed for reasoning can lead to slower responses, which may not be suitable for applications that require instantaneous feedback.
  • General Knowledge Queries: For simple fact-based questions, the added reasoning step is unnecessary and can increase costs (and energy consumption) without meaningful improvements in accuracy.
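One practical pattern that falls out of these trade-offs is routing: send each request to the cheapest, fastest model that can handle it, and reserve reasoning models for tasks that genuinely need them. Here's a minimal conceptual sketch; the model names and the keyword heuristic are illustrative assumptions, not anyone's production logic.

```python
# Conceptual sketch: route requests to a fast model unless the
# task genuinely benefits from step-by-step reasoning.

REASONING_KEYWORDS = ("prove", "debug", "analyze", "step by step", "plan")

def needs_reasoning(prompt: str) -> bool:
    """Crude heuristic; a production router might use a classifier."""
    lowered = prompt.lower()
    return any(keyword in lowered for keyword in REASONING_KEYWORDS)

def pick_model(prompt: str) -> str:
    # Fast, cheap model for chat and simple lookups; the slower
    # reasoning model only when the extra compute pays off.
    return "o3-mini" if needs_reasoning(prompt) else "gpt-4o-mini"

print(pick_model("What's the capital of France?"))   # gpt-4o-mini
print(pick_model("Debug this sorting function..."))  # o3-mini
```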

Looking Ahead

Surreal landscape illustration of a person on a path at the top of a hill standing by a tree as the sun rises

The next year or so promises several meaningful developments that could reshape how we interact with AI:

More Efficient Reasoning

Researchers are working to streamline the reasoning process without sacrificing its educational value. As these improvements roll out, we could see reasoning-enabled AI becoming more suitable for real-time applications, from interactive tutoring to live problem-solving sessions. This could make sophisticated AI assistance accessible in situations where it's currently too slow to be practical.

Multimodal Understanding

Soon, these systems will likely expand beyond text to incorporate images, diagrams, and potentially other forms of data into their reasoning process. Imagine an AI that can not only solve a geometry problem but also sketch out the solution steps, or one that can analyze a scientific diagram while explaining its thinking. This integration of multiple forms of information could revolutionize fields like technical education, scientific research, and professional training, where visual and verbal understanding need to work hand-in-hand.

Informed Governance

As AI systems become more transparent about their decision-making processes, we're likely to see more productive discussions about AI governance. This visibility into AI reasoning could help policymakers, ethicists, and the public better understand both the capabilities and limitations of AI systems, as well as the risks. It may lead to more thoughtful regulation that balances innovation with safety, and help organizations develop more effective guidelines for AI deployment in sensitive areas like healthcare, finance, and education.

Enhanced Collaboration

A particularly exciting development is the potential for more sophisticated human-AI collaboration. As these systems get better at explaining their reasoning, we could see deeper partnerships in which humans and AI combine their strengths more effectively. With a more transparent and trusting relationship, AI can be left to handle the complex calculations and vast amounts of data needed for sophisticated pattern recognition and breakthrough insights, while humans provide direction, context, creativity, and ethical judgment.

Broader Implications

Expressionist painting of a woman in silhouette raising a pen towards the sky, against an orange background

Looking beyond the next-level improvements in data synthesis and sophisticated problem-solving, this evolution in AI represents more than a technical advancement: it could revolutionize our ability to leverage artificial intelligence as a learning tool. When AI systems show their work and reveal their process, they can become trusted partners in education and discovery rather than semi-mystical answer generators.

For organizations and individuals, this creates both opportunities for deeper learning and a broader democratization of knowledge. Instead of simply handing us solutions of potentially questionable accuracy, reasoning systems can help us understand the principles behind those solutions, enabling us to build our own problem-solving skills, should we make the effort.

Over the next few months, these reasoning models will go from being "advanced" AI tools that require explicit selection and a paid subscription to the new default. So even if you haven't encountered them yet, you likely will soon. And you'll know it when they tell you how they arrived at your answer.

Research assistance provided by ChatGPT o3-mini-high and Perplexity Deep Research. Editing assistance by Claude Sonnet 3.5. All images generated with Midjourney 6.1.