OpenAI has announced a new AI "agent" designed to help people conduct in-depth, complex research using ChatGPT, the company's AI-powered chatbot platform.
It's aptly called deep research.
In a blog post published Sunday, OpenAI said the new capability was "designed for people who do intensive knowledge work in areas such as finance, science, policy, and engineering, and need thorough, accurate, and reliable research." It can also be useful, the company added, for anyone making "purchases that typically require careful research, like cars, appliances, and furniture."
In short, ChatGPT deep research is intended for cases where you don't just want a quick answer or summary, but need to synthesize information from many websites and other sources.
OpenAI said that ChatGPT Pro users get the feature today, limited to 100 queries, with Plus and Team users next, followed by Enterprise. (OpenAI is targeting a Plus rollout in about a month, the company said, and query limits for paying users will soon be "significantly higher.") It's a geo-targeted launch of sorts; OpenAI didn't have a timeline to share for ChatGPT customers in the U.K., Switzerland, and the European Economic Area.
To use ChatGPT deep research, you simply select "deep research" in the composer and enter a query, optionally attaching files or spreadsheets. (It's a web-only experience for now, with mobile and desktop app integrations coming later this month.) Deep research can take anywhere from five to 30 minutes to answer a question, and you'll get a notification when the research is complete.
Currently, the outputs of ChatGPT deep research are text-only. However, OpenAI said it will soon add embedded images, data visualizations, and other "analytic" outputs. Also on the roadmap, the company said, is the ability to connect to "specialized data sources," including subscription-based and internal resources.
The big question, of course, is how accurate ChatGPT deep research really is. After all, AI is imperfect; it's prone to hallucinations and other kinds of errors that could be especially damaging in a "deep research" scenario. Perhaps that's why OpenAI said every ChatGPT deep research output will be "fully documented, with clear citations and a summary of its thinking, making it easy to reference and verify the information."
The jury's out on whether these mitigations will be enough to overcome the AI's mistakes. OpenAI's AI-powered web search feature in ChatGPT, ChatGPT Search, not infrequently makes gaffes and gives poor answers to questions; TechCrunch's testing found that ChatGPT Search produced less useful results than Google Search for certain queries.
Powering deep research is a special version of OpenAI's recently announced o3 "reasoning" AI model, trained with reinforcement learning on real-world tasks requiring browser and Python tool use. Reinforcement learning essentially "teaches" a model to achieve a particular goal through trial and error. As the model gets closer to the goal, it receives virtual "rewards" that ideally make it better at the task.
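To make that trial-and-error idea concrete, here is a minimal, hypothetical Python sketch of a reward-driven learning loop. It is not OpenAI's training code; the target value, reward rule, and learning rate are all illustrative assumptions.

```python
# Minimal illustration of trial-and-error learning with virtual "rewards".
# Hypothetical toy example, not OpenAI's actual reinforcement learning setup.
import random

GOAL = 7                                  # hypothetical target action
values = {a: 0.0 for a in range(10)}      # estimated value of each action

for step in range(1000):
    # Explore occasionally; otherwise exploit the best-known action.
    if random.random() < 0.1:
        action = random.choice(list(values))
    else:
        action = max(values, key=values.get)

    reward = 1.0 if action == GOAL else 0.0      # virtual "reward" signal
    # Nudge the estimate toward the observed reward (learning rate 0.1).
    values[action] += 0.1 * (reward - values[action])

print("Learned best action:", max(values, key=values.get))  # converges to 7
```

In deep research's case, per OpenAI's description, the "actions" would presumably be browsing and Python tool-use steps rather than picking numbers, but the basic feedback loop is the same idea.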
OpenAI claims this version of the o3 model is "optimized for web browsing and data analysis," adding that it "leverages reasoning to search, interpret, and analyze the massive amounts of information it encounters (…) the model can browse user-uploaded files, plot and iterate on graphs using the Python tool, embed graphs and images from websites in its responses, and cite specific sentences or passages from its sources."
The company said it tested ChatGPT deep research on Humanity's Last Exam, an evaluation containing more than 3,000 expert-level questions across a range of academic subjects. The o3 model powering deep research achieved 26.6% accuracy, which might sound like a failing grade, but Humanity's Last Exam is designed to be harder than other benchmarks so it stays ahead of model progress. According to OpenAI, the deep research o3 model outperformed Gemini Thinking (6.2%), Grok-2 (3.8%), and OpenAI's own GPT-4o (3.3%).
Still, OpenAI notes that ChatGPT deep research has limitations, sometimes hallucinating facts or drawing incorrect conclusions. Deep research can struggle to distinguish credible information from rumors, the company said, and it often fails to convey uncertainty accurately. It can also make formatting errors in reports and citations.
For anyone worried about generative AI's impact on students, or on anyone trying to find information online, this kind of lengthy, well-cited output probably looks more appealing than a deceptively simple chatbot summary without references. But it remains to be seen whether most users will actually subject the output to real analysis and double-checking, or simply copy and paste it as more professional-sounding text.
And if this all sounds familiar, it's because Google announced a similar AI feature, with exactly the same name, roughly two months ago.