Google AI researchers share what's next for Gemini
Tulsee Doshi (Sr. Director & Product Lead for Gemini Model) and Madhavi Sewak (Distinguished Researcher at Google DeepMind) joined us to talk about the latest breakthroughs in the Gemini model family, and where AI development is heading next.

Here are some of the key takeaways from our conversation:

Context engineering can help you get better results. Google has found that the RASCEF method (Role, Action, Steps, Context, Examples, Format) often outperforms expensive model fine-tuning. As Sewak put it, “prompting these models and passing the right information and context… can really help you assemble a very good agent.”
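A structured prompt like this can be assembled programmatically. Below is a minimal sketch of the idea, assuming the six sections named above; the helper function, section headings, and sample content are illustrative, not an official Google template.

```python
def build_rascef_prompt(role, action, steps, context, examples, output_format):
    """Assemble the six RASCEF sections into a single prompt string.

    Each section gets a markdown-style heading; `steps` and `examples`
    are lists that get rendered as numbered/plain lines respectively.
    """
    sections = [
        ("Role", role),
        ("Action", action),
        ("Steps", "\n".join(f"{i}. {s}" for i, s in enumerate(steps, 1))),
        ("Context", context),
        ("Examples", "\n".join(examples)),
        ("Format", output_format),
    ]
    return "\n\n".join(f"## {name}\n{body}" for name, body in sections)


# Hypothetical usage: a code-review agent prompt.
prompt = build_rascef_prompt(
    role="You are a senior Python code reviewer.",
    action="Review the submitted function for bugs and style issues.",
    steps=["Read the function.", "List any bugs.", "Suggest concrete fixes."],
    context="The function is part of a data-ingestion pipeline.",
    examples=["Input: `def add(a, b): return a - b` -> Bug: subtracts instead of adds."],
    output_format="A markdown bullet list, one finding per bullet.",
)
print(prompt)
```

The resulting string can be sent as-is to any LLM API; the point is that the role, task decomposition, and expected output format all travel with the request, rather than relying on fine-tuned behavior.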

The memory problem is AI’s next frontier. One of the biggest unsolved challenges in AI development is contextual memory — how models decide what to retain, what to discard, and when to apply specific context. “Models are not great at this yet,” Sewak admitted, “but this is an active area of research for Gemini.”

Mathematical reasoning is a sign of broader intelligence. “Being really good at the IMO is also just a really strong signal of reasoning performance,” Doshi explained, referring to the International Mathematical Olympiad. The model’s ability to explore multiple solution paths echoes how humans tackle complex problems — a kind of reasoning that transfers to code, research, and general problem-solving.

Human value is shifting. As AI handles more of the technical grunt work, the premium on human creativity, effective communication, and strategic architecture is rising. The ability to articulate problems and solutions to AI systems — along with the creative thinking to identify which problems are worth solving in the first place — will go a long way.