
Image by Google
Google Labs announced a major update to Opal today, replacing rigid, static model calls with a powerful new agent step. This feature allows the platform to act as a proactive partner that understands high-level objectives rather than just following a predefined sequence of tasks. The agent step autonomously determines which tools and models are needed, such as calling on Web Search for research or Veo for high-quality video generation. This shift enables users to build interactive experiences that feel more collaborative than traditional automation. For example, a Room Styler Opal can now engage in a dialogue with a user, asking for feedback on specific design elements or researching niche sub-styles to refine a visual concept. Previously, these workflows were one-way processes that required users to manually configure every step of the output.
A core addition to this agentic framework is the introduction of persistent memory. Opals can now remember user preferences, brand identities, and historical data across different sessions. This means a user no longer needs to repeat baseline constraints every time they start a new project. A brainstormer tool, for instance, can instantly recall a specific brand voice to generate tailored ideas without fresh input. The platform now supports dynamic routing, allowing agents to follow different paths based on custom logic without requiring any code. In an executive briefing scenario, the agent can detect if a client is new or existing and adjust its research path accordingly. It might trigger a broad web search for a new contact while reviewing internal meeting notes for a long-standing partner.
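The combination of persistent memory and dynamic routing described above can be modeled as a branch over stored session data. This is a minimal sketch under assumed names (`memory`, `remember`, `briefing_path` are hypothetical), not how Opal stores state internally.

```python
# Hypothetical sketch of persistent memory plus dynamic routing.
# A store of past sessions lets the agent branch without user-written code:
# known clients get internal notes reviewed, new contacts get broad research.

memory: dict[str, dict] = {}  # persists across sessions in the real system

def remember(client: str, **facts) -> None:
    """Record preferences or history for a client across sessions."""
    memory.setdefault(client, {}).update(facts)

def briefing_path(client: str) -> str:
    """Route the research step based on whether the client is known."""
    if client in memory:
        return "review_internal_meeting_notes"
    return "broad_web_search"

remember("Acme Corp", brand_voice="playful", industry="retail")
print(briefing_path("Acme Corp"))     # → review_internal_meeting_notes
print(briefing_path("New Ventures"))  # → broad_web_search
```

Because `memory` outlives a single run, the brainstormer scenario falls out naturally: a stored `brand_voice` can seed generation without the user restating baseline constraints.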
To ensure precision, the agent step can now initiate interactive chats to gather missing information or offer choices. This human-in-the-loop capability allows the AI to pause execution when an objective is vague, ensuring the final output aligns with the user's vision. This marks a significant evolution from the fire-and-forget nature of earlier generative tools. The update caters both to casual users who want a system that "just works" and to power users who require high-precision prototyping. While the agent step provides autonomous orchestration, standard fixed steps remain available for those who need rigid logic for complex projects. Google Labs continues to expand its creative ecosystem, bridging the gap between automated intelligence and manual control.
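The pause-and-ask behavior can be sketched as an ambiguity check that runs before execution: if the objective is too vague, the agent returns a clarifying question instead of output. The threshold and marker words below are invented for illustration; Opal's actual heuristics are not documented.

```python
# Hypothetical sketch of a human-in-the-loop gate: vague objectives
# pause execution and surface a question; specific ones proceed.

VAGUE_MARKERS = ("something", "anything", "nice", "cool")

def next_action(objective: str) -> tuple[str, str]:
    """Return ("ask", question) when the objective is vague, else ("run", objective)."""
    words = objective.lower().split()
    if len(words) < 4 or any(m in words for m in VAGUE_MARKERS):
        return ("ask", "Could you describe the style or goal in more detail?")
    return ("run", objective)

print(next_action("Make something cool")[0])                                   # → ask
print(next_action("Design a minimalist Scandinavian living room concept")[0])  # → run
```

The design choice worth noting is that clarification is a first-class outcome of a step, not an error: the workflow suspends and resumes rather than failing.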
This update signals a pivotal shift in the AI industry from simple generative models to agentic orchestration platforms. By integrating persistent memory and dynamic routing, Google is moving beyond one-off prompts toward persistent digital assistants that understand long-term context. This architectural change likely leverages the Agentic Vision capabilities of the recently announced Gemini 3 Flash model, positioning Opal as a high-level operating system for creative projects. As other players like Pomelli and Flow evolve, the focus is clearly moving away from how well a model can generate content to how effectively it can manage complex, multi-stage projects autonomously. This democratization of agent building suggests that the competitive edge in 2026 will lie in how seamlessly an AI can bridge the gap between human intent and automated execution.