OpenAI’s new ‘deep research’ agent is still just a fallible tool – not a human-level expert

  • Written by Raffaele F Ciriello, Senior Lecturer in Business Information Systems, University of Sydney

OpenAI’s “deep research[1]” is the latest artificial intelligence (AI) tool making waves[2] and promising to do in minutes what would take hours for a human expert to complete.

Bundled as a feature in ChatGPT Pro and marketed[3] as a research assistant that can match a trained analyst, it autonomously searches the web, compiles sources and delivers structured reports. It even scored[4] 26.6% on Humanity’s Last Exam (HLE), a tough AI benchmark, outperforming[5] many models.

But deep research doesn’t quite live up to the hype. While it produces polished reports, it also has serious flaws. According to journalists[6] who’ve tried it[7], deep research can miss key details, struggle with recent information and sometimes invent facts.

OpenAI itself flags these limitations. The company says the tool[8] “can sometimes hallucinate facts in responses or make incorrect inferences, though at a notably lower rate than existing ChatGPT models, according to internal evaluations”.

It’s no surprise that unreliable data can slip in, since AI models don’t “know” things in the same way humans do.

The idea of an AI “research analyst” also raises a slew of questions. Can a machine – no matter how powerful – truly replace a trained expert? What would be the implications for knowledge work? And is AI really helping us think better, or just making it easier to stop thinking altogether?

What is ‘deep research’ and who is it for?

Marketed towards professionals in finance, science, policy, law and engineering, as well as academics, journalists and business strategists, deep research is the latest “agentic experience[9]” OpenAI has rolled out in ChatGPT. It promises to do the heavy lifting of research in minutes.

Currently, deep research is only available to ChatGPT Pro users in the United States, at a cost of US$200 per month. OpenAI says[10] it will roll out to Plus, Team and Enterprise users in the coming months, with a more cost-effective version planned for the future.

Unlike a standard chatbot that provides quick responses, deep research follows a multi-step process to produce a structured report:

  1. The user submits a request. This could be anything from a market analysis to a legal case summary.
  2. The AI clarifies the task. It may ask follow-up questions to refine the research scope.
  3. The agent searches the web. It autonomously browses hundreds of sources, including news articles, research papers and online databases.
  4. It synthesises its findings. The AI extracts key points, organises them into a structured report and cites its sources.
  5. The final report is delivered. Within five to 30 minutes, the user receives a multi-page document – potentially even a PhD-level thesis[11] – summarising the findings.
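
To make the shape of that workflow concrete, here is a minimal sketch of the clarify–search–synthesise loop in Python. It is an illustration only: the function names, data structures and stubbed behaviour are assumptions made for the sake of the example, and do not reflect OpenAI’s actual implementation or API.

    # A minimal, hypothetical sketch (Python 3.9+) of the multi-step loop
    # described above. Function names, data structures and stubbed behaviour
    # are illustrative assumptions, not OpenAI's implementation.
    from dataclasses import dataclass, field

    @dataclass
    class Source:
        url: str
        excerpt: str

    @dataclass
    class Report:
        topic: str
        findings: list[str] = field(default_factory=list)
        sources: list[Source] = field(default_factory=list)

    def clarify_task(request: str) -> str:
        # Step 2: a real agent would ask follow-up questions to refine scope;
        # this stub simply treats the request as the working topic.
        return request.strip()

    def search_web(topic: str) -> list[Source]:
        # Step 3: a real agent would browse hundreds of pages; these canned
        # placeholders keep the sketch runnable offline.
        return [
            Source(url="https://example.org/a", excerpt=f"Notes on {topic}"),
            Source(url="https://example.org/b", excerpt=f"More on {topic}"),
        ]

    def synthesise(topic: str, sources: list[Source]) -> Report:
        # Step 4: extract key points and attach a citation to each one.
        report = Report(topic=topic)
        for source in sources:
            report.findings.append(f"{source.excerpt} (cited: {source.url})")
            report.sources.append(source)
        return report

    def run_research(request: str) -> Report:
        # Steps 1 to 5 chained together: request -> topic -> sources -> report.
        topic = clarify_task(request)
        return synthesise(topic, search_web(topic))

    if __name__ == "__main__":
        report = run_research("Market analysis of rooftop solar in Australia")
        print(f"Report on: {report.topic}")
        for finding in report.findings:
            print("-", finding)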

At first glance, it sounds like a dream tool for knowledge workers. A closer look reveals significant limitations.

Many[12] early[13] tests[14] have exposed shortcomings:

  • It lacks context. AI can summarise, but it doesn’t fully understand what’s important.
  • It ignores new developments. It has missed major legal rulings and scientific updates.
  • It makes things up. Like other AI models, it can confidently generate false information.
  • It can’t tell fact from fiction. It doesn’t distinguish authoritative sources from unreliable ones.

While OpenAI claims its tool rivals human analysts, AI inevitably lacks the judgement, scrutiny and expertise that make good research valuable.

What AI can’t replace

ChatGPT isn’t the only AI tool that can scour the web and produce reports with just a few prompts. Notably, a mere 24 hours after OpenAI’s release[15], Hugging Face published a free, open-source version that nearly matches its performance.

The biggest risk of deep research and other AI tools marketed for “human-level” research is the illusion that AI can replace human thinking. AI can summarise information, but it can’t question its own assumptions, highlight knowledge gaps, think creatively or understand different perspectives.

And AI-generated summaries don’t match the depth[16] of analysis a skilled[17] human researcher can provide.

Any AI agent, no matter how fast, is still just a tool, not a replacement for human intelligence. For knowledge workers, it’s more important than ever to invest in skills that AI can’t replicate: critical thinking, fact-checking, deep expertise and creativity.

If you do want to use AI research tools, there are ways to do so responsibly. Thoughtful use of AI can enhance research without sacrificing accuracy or depth. You might use AI for efficiency, like summarising documents, but retain human judgement for making decisions.

Always verify sources, as AI-generated citations can be misleading. Don’t trust conclusions blindly, but apply critical thinking and cross-check information with reputable sources. For high-stakes topics — such as health[18], justice[19] and democracy[20] — supplement AI findings with expert input.
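
As one illustration of what “verify sources” can look like in practice, the short Python sketch below checks whether each cited URL actually responds before you spend time on it. The helper name, the placeholder URLs and the user-agent string are hypothetical, and a reachable page says nothing about whether it genuinely supports the claim attributed to it; that still takes human reading and judgement.

    # A hypothetical citation-triage helper: checks that AI-supplied URLs
    # resolve at all. Reachability is a weak test; it cannot confirm that a
    # source actually supports the claim attributed to it.
    import urllib.error
    import urllib.request

    def url_resolves(url: str, timeout: float = 10.0) -> bool:
        """Return True if the URL answers an HTTP request, False otherwise."""
        request = urllib.request.Request(
            url, method="HEAD", headers={"User-Agent": "citation-check-sketch"}
        )
        try:
            with urllib.request.urlopen(request, timeout=timeout) as response:
                return 200 <= response.status < 400
        except (urllib.error.URLError, ValueError, TimeoutError):
            return False

    if __name__ == "__main__":
        # Placeholder URLs; swap in the citations from an AI-generated report.
        citations = [
            "https://example.org/",
            "https://example.org/this-page-probably-does-not-exist",
        ]
        for url in citations:
            status = "reachable" if url_resolves(url) else "unreachable: check manually"
            print(f"{url}: {status}")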

Despite prolific marketing that tries to tell us otherwise, generative AI still has plenty of limitations. Humans who can creatively synthesise information, challenge assumptions and think critically will remain in demand – AI can’t replace them just yet.

References

  1. ^ deep research (openai.com)
  2. ^ making waves (www.forbes.com)
  3. ^ marketed (www.theguardian.com)
  4. ^ scored (www.zdnet.com)
  5. ^ outperforming (www.techradar.com)
  6. ^ According to journalists (www.theverge.com)
  7. ^ who’ve tried it (www.platformer.news)
  8. ^ The company also says it (openai.com)
  9. ^ agentic experience (openai.com)
  10. ^ says (openai.com)
  11. ^ potentially even a PhD-level thesis (futureofbeinghuman.com)
  12. ^ Many (www.theverge.com)
  13. ^ early (www.nature.com)
  14. ^ tests (www.datacamp.com)
  15. ^ 24 hours after OpenAI’s release (arstechnica.com)
  16. ^ depth (futureofbeinghuman.com)
  17. ^ skilled (www.tandfonline.com)
  18. ^ health (www.theguardian.com)
  19. ^ justice (www.theguardian.com)
  20. ^ democracy (www.theguardian.com)

Read more https://theconversation.com/openais-new-deep-research-agent-is-still-just-a-fallible-tool-not-a-human-level-expert-249496
