OpenAI’s new ‘deep research’ agent is still just a fallible tool – not a human-level expert

  • Written by Raffaele F Ciriello, Senior Lecturer in Business Information Systems, University of Sydney

OpenAI’s “deep research[1]” is the latest artificial intelligence (AI) tool making waves[2] and promising to do in minutes what would take hours for a human expert to complete.

Bundled as a feature in ChatGPT Pro and marketed[3] as a research assistant that can match a trained analyst, it autonomously searches the web, compiles sources and delivers structured reports. It even scored[4] 26.6% on Humanity’s Last Exam (HLE), a tough AI benchmark, outperforming[5] many other models.

But deep research doesn’t quite live up to the hype. While it produces polished reports, it also has serious flaws. According to journalists[6] who’ve tried it[7], deep research can miss key details, struggle with recent information and sometimes invent facts.

OpenAI flags this when listing the limitations of its tool. The company also says it[8] “can sometimes hallucinate facts in responses or make incorrect inferences, though at a notably lower rate than existing ChatGPT models, according to internal evaluations”.

It’s no surprise that unreliable data can slip in, since AI models don’t “know” things in the same way humans do.

The idea of an AI “research analyst” also raises a slew of questions. Can a machine – no matter how powerful – truly replace a trained expert? What would be the implications for knowledge work? And is AI really helping us think better, or just making it easier to stop thinking altogether?

What is ‘deep research’ and who is it for?

Marketed towards professionals in finance, science, policy, law and engineering, as well as academics, journalists and business strategists, deep research is the latest “agentic experience[9]” OpenAI has rolled out in ChatGPT. It promises to do the heavy lifting of research in minutes.

Currently, deep research is only available to ChatGPT Pro users in the United States, at a cost of US$200 per month. OpenAI says[10] it will roll out to Plus, Team and Enterprise users in the coming months, with a more cost-effective version planned for the future.

Unlike a standard chatbot that provides quick responses, deep research follows a multi-step process to produce a structured report:

  1. The user submits a request. This could be anything from a market analysis to a legal case summary.
  2. The AI clarifies the task. It may ask follow-up questions to refine the research scope.
  3. The agent searches the web. It autonomously browses hundreds of sources, including news articles, research papers and online databases.
  4. It synthesises its findings. The AI extracts key points, organises them into a structured report and cites its sources.
  5. The final report is delivered. Within five to 30 minutes, the user receives a multi-page document – potentially even a PhD-level thesis[11] – summarising the findings.
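
To make that workflow concrete, here is a minimal Python sketch of this kind of multi-step pipeline. It is not OpenAI’s code: the function names, the toy source list and the simple keyword matching are illustrative stand-ins. But it shows how the “clarify, search, synthesise, report” steps can be chained into a single agent-style call.

    # Illustrative skeleton of a multi-step "research agent", mirroring the
    # five stages above. This is not OpenAI's implementation: the corpus,
    # function names and matching logic are hypothetical stand-ins.
    from dataclasses import dataclass

    @dataclass
    class Source:
        title: str
        url: str
        text: str

    # A toy "web" to search; a real agent would browse live pages instead.
    CORPUS = [
        Source("Example market report", "https://example.com/report",
               "Demand for widgets grew last year, driven by lower prices."),
        Source("Example news item", "https://example.com/news",
               "A new regulation on widgets takes effect next quarter."),
    ]

    def clarify_task(request: str) -> str:
        # Step 2: a real agent may ask follow-up questions; here we just
        # normalise the request into a single research question.
        return request.strip().rstrip("?") + "?"

    def search_sources(question: str) -> list:
        # Step 3: naive keyword overlap against the toy corpus.
        terms = set(question.lower().rstrip("?").split())
        return [s for s in CORPUS if terms & set(s.text.lower().split())]

    def synthesise(question: str, sources: list) -> str:
        # Step 4: extract the key points and cite each source.
        lines = ["Question: " + question, "Key points:"]
        for i, s in enumerate(sources, 1):
            lines.append(f"  {i}. {s.text} ({s.title}, {s.url})")
        return "\n".join(lines)

    def deep_research_sketch(request: str) -> str:
        # Steps 1 to 5 end to end: request in, structured report out.
        question = clarify_task(request)
        return synthesise(question, search_sources(question))

    print(deep_research_sketch("What changed for widgets last year"))

The real product replaces each of these stubs with live web browsing and a large language model, which is exactly where the reliability problems described below creep in.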

At first glance, it sounds like a dream tool for knowledge workers. A closer look reveals significant limitations.

Many[12] early[13] tests[14] have exposed shortcomings:

  • It lacks context. AI can summarise, but it doesn’t fully understand what’s important.
  • It ignores new developments. It has missed major legal rulings and scientific updates.
  • It makes things up. Like other AI models, it can confidently generate false information.
  • It can’t tell fact from fiction. It doesn’t distinguish authoritative sources from unreliable ones.

While OpenAI claims its tool rivals human analysts, AI inevitably lacks the judgement, scrutiny and expertise that make good research valuable.

What AI can’t replace

ChatGPT isn’t the only AI tool that can scour the web and produce reports with just a few prompts. Notably, a mere 24 hours after OpenAI’s release[15], Hugging Face published a free, open-source version that comes close to matching deep research’s performance.

The biggest risk of deep research and other AI tools marketed for “human-level” research is the illusion that AI can replace human thinking. AI can summarise information, but it can’t question its own assumptions, highlight knowledge gaps, think creatively or understand different perspectives.

And AI-generated summaries don’t match the depth[16] of analysis a skilled[17] human researcher can offer.

Any AI agent, no matter how fast, is still just a tool, not a replacement for human intelligence. For knowledge workers, it’s more important than ever to invest in skills that AI can’t replicate: critical thinking, fact-checking, deep expertise and creativity.

If you do want to use AI research tools, there are ways to do so responsibly. Thoughtful use of AI can enhance research without sacrificing accuracy or depth. You might use AI for efficiency, like summarising documents, but retain human judgement for making decisions.

Always verify sources, as AI-generated citations can be misleading. Don’t trust conclusions blindly, but apply critical thinking and cross-check information with reputable sources. For high-stakes topics — such as health[18], justice[19] and democracy[20] — supplement AI findings with expert input.
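
On the source-verification point, even a small script can catch the most obvious failures. The sketch below is a generic example rather than a feature of deep research or any OpenAI product: it simply checks whether each cited URL still resolves. A link that loads is no guarantee the page supports the claim, so treat this as a first pass before human reading, not a substitute for it.

    # First-pass check on AI-generated citations: do the cited URLs resolve?
    # Illustrative only; it cannot verify that a source supports a claim.
    import urllib.error
    import urllib.request

    def url_resolves(url, timeout=10.0):
        """Return True if the URL answers with an HTTP status below 400."""
        req = urllib.request.Request(
            url, method="HEAD",
            headers={"User-Agent": "citation-check/0.1"},
        )
        try:
            with urllib.request.urlopen(req, timeout=timeout) as resp:
                return resp.status < 400
        except (urllib.error.URLError, ValueError):
            # Covers dead hosts, HTTP errors and malformed URLs alike.
            # Note: some servers reject HEAD requests, so treat a "dead"
            # result as a prompt to check the link manually.
            return False

    # Hypothetical citations pulled from an AI-generated report.
    for url in ["https://openai.com/", "https://example.com/missing-paper"]:
        print(("ok  " if url_resolves(url) else "dead") + "  " + url)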

Despite prolific marketing that tries to tell us otherwise, generative AI still has plenty of limitations. Humans who can creatively synthesise information, challenge assumptions and think critically will remain in demand – AI can’t replace them just yet.

References

  1. ^ deep research (openai.com)
  2. ^ making waves (www.forbes.com)
  3. ^ marketed (www.theguardian.com)
  4. ^ scored (www.zdnet.com)
  5. ^ outperforming (www.techradar.com)
  6. ^ According to journalists (www.theverge.com)
  7. ^ who’ve tried it (www.platformer.news)
  8. ^ The company also says it (openai.com)
  9. ^ agentic experience (openai.com)
  10. ^ says (openai.com)
  11. ^ potentially even a PhD-level thesis (futureofbeinghuman.com)
  12. ^ Many (www.theverge.com)
  13. ^ early (www.nature.com)
  14. ^ tests (www.datacamp.com)
  15. ^ 24 hours after OpenAI’s release (arstechnica.com)
  16. ^ depth (futureofbeinghuman.com)
  17. ^ skilled (www.tandfonline.com)
  18. ^ health (www.theguardian.com)
  19. ^ justice (www.theguardian.com)
  20. ^ democracy (www.theguardian.com)

Read more https://theconversation.com/openais-new-deep-research-agent-is-still-just-a-fallible-tool-not-a-human-level-expert-249496
