Immediately after the end of the Paris AI Action Summit, Anthropic co-founder and CEO Dario Amodei called the event a "missed opportunity." He added that "greater focus and urgency is needed on several topics given the pace at which the technology is progressing" in a statement published on Tuesday.
The AI company held a developer-focused event in Paris in collaboration with French startup Dust, and TechCrunch had the opportunity to interview Amodei on stage. At the event, he explained his line of thinking and defended a third way that is neither pure optimism nor pure criticism on the topics of AI innovation and regulation.
"I used to be a neuroscientist, where I basically looked inside real brains for a living. And now we're looking inside artificial brains for a living. So we will, over the next few months, have some exciting advances in the field of interpretability, where we really start to understand how the models operate," Amodei told TechCrunch.
"But it's definitely a race. It's a race between making the models more powerful, which is incredibly fast for us and incredibly fast for others, you can't really slow down, right? … Our understanding has to keep pace with our ability to build things. I think that's the only way," he added.
The tone of the debate around AI governance has changed significantly since the first AI Safety Summit at Bletchley Park in the U.K. This is partly due to the current geopolitical landscape.
"I'm not here this morning to talk about AI safety, which was the title of the conference a couple of years ago," U.S. Vice President JD Vance said at the AI Action Summit on Tuesday. "I'm here to talk about AI opportunity."
Interestingly, Amodei is trying to avoid this framing of safety versus opportunity. In fact, he believes that an increased focus on safety is an opportunity.
"At the original summit, the U.K. Bletchley Summit, there were a lot of discussions on testing and measuring various risks. And I don't think these things slowed down the technology at all," Amodei said at the Anthropic event. "If anything, doing this kind of measurement has helped us better understand our models, which in the end helps us produce better models."
And every time Amodei puts some emphasis on safety, he also likes to remind everyone that Anthropic remains focused on building frontier AI models.
"I don't want to do anything to reduce the promise. We're providing models every day that people can build on and that are used to do amazing things. And we definitely shouldn't stop doing that," he said.
"When people focus a lot on the risks, I get kind of annoyed and say, 'oh, man, no one's really done a good job of laying out how great this technology could be,'" he added later in the conversation.
When the conversation shifted to the recent models from Chinese LLM maker DeepSeek, Amodei downplayed the technical achievements and said he felt the public reaction was "inorganic."
"Honestly, my reaction was very little. We had seen V3, which is the base model for DeepSeek R1, back in December. And that was an impressive model," he said. "The model that was released in December was on this kind of very normal cost reduction curve that we've seen in our models and other models."
What was notable, in his view, is that the model wasn't coming out of one of the "three or four frontier labs" based in the U.S. He listed Google, OpenAI, and Anthropic as some of the frontier labs that generally push the envelope with new model releases.
"And that was a matter of geopolitical concern to me. I never wanted authoritarian governments to dominate this technology," he said.
As for DeepSeek's alleged training costs, he dismissed the idea that training DeepSeek V3 was 100 times cheaper than training comparable models in the U.S.: "I think (that) is just not accurate and not based on facts," he said.
While Amodei didn't announce a new model at Wednesday's event, he teased some of the company's upcoming releases, and yes, they include some reasoning capabilities.
"We're generally focused on trying to make our own reasoning models that are better differentiated. We worry about making sure we have enough capacity to make the models smarter, and about safety things," Amodei said.
One of the problems Anthropic is trying to solve is the model selection conundrum. If you have a ChatGPT Plus account, for example, it can be difficult to know which model you should pick in the model selection pop-up for your next message.
The same is true for developers using large language model (LLM) APIs for their own applications. They want to balance accuracy, speed of answers, and costs.
"We've been a bit puzzled by the idea that there are normal models and there are reasoning models, and that they're sort of different from each other," Amodei said. "If I'm talking to you, you don't have two brains, one that answers right away and another that waits longer before answering."
According to him, depending on the input, there should be a smoother transition between pre-trained models such as Claude 3.5 Sonnet or GPT-4o and models trained with reinforcement learning that can produce chains of thought (CoT), such as OpenAI o1 or DeepSeek R1.
"We think these should exist as part of one single continuous entity. And we may not be there yet, but Anthropic really wants to move things in that direction," Amodei said. "We should have a smoother transition from pre-trained models to reasoning models, rather than 'here's thing A and here's thing B,'" he added.
As large AI companies such as Anthropic keep releasing better models, Amodei believes it will open up great opportunities to disrupt large companies around the world, in every industry.
"We're working with some pharma companies to use Claude to write clinical studies, and they've been able to reduce the time it takes to write a clinical study report from 12 weeks to three days," Amodei said.
"Beyond biomedical, there's legal, financial, insurance, productivity, software, energy. I think there's basically going to be a wave of disruptive innovation in the AI application space. And we want to help it, we want to support it all," he concluded.
Read our full coverage of the Artificial Intelligence Action Summit in Paris.