News
Feb 19, 2026
NewDecoded
Deepgram, a prominent provider of voice AI infrastructure, has reached a $1.3 billion valuation after securing $130 million in Series C funding. The investment, led by AVP, arrives as the company also announces its acquisition of OfOne, a startup specializing in automated voice solutions for the restaurant industry. The round includes participation from major enterprise players such as SAP and Twilio, underscoring a growing institutional bet on the future of conversational speech technology.
Although the company was already operating with positive cash flow, Deepgram CEO Scott Stephenson said the raise was opportunistic, timed to capitalize on the mainstream surge in voice AI adoption. The new funding will be used to expand the company's global reach and enhance its multi-language capabilities. By controlling its own models rather than relying on third-party APIs, the startup aims to deliver the low latency required for natural, human-like interactions in commercial settings.
The acquisition of OfOne marks a strategic shift toward vertical solutions, specifically targeting the high-volume quick-service restaurant sector. OfOne claims an accuracy rate of over 93 percent for drive-thru orders, addressing a historical pain point where voice assistants often failed amid background noise and complex menus. Stephenson describes food ordering as a critical litmus test for the industry, believing that if AI can handle a lunch rush, it can handle almost any consumer interaction.
This funding round places Deepgram in direct competition with other heavily backed voice specialists such as ElevenLabs and Sesame. With the market for voice AI agents projected to grow 30 percent annually to reach nearly $20 billion by 2030, the battle for dominance is shifting from general transcription to specialized execution. Deepgram's ability to handle interruptions and real-time processing remains its primary differentiator in an increasingly crowded landscape, TechCrunch reported.
Google DeepMind has officially released Veo 3.1, a significant upgrade to its generative video model that introduces the "Ingredients to Video" framework. This latest iteration is designed to provide storytellers with enhanced consistency, creativity, and control by allowing them to use reference images to anchor their visuals. The update addresses the long-standing challenge of character and setting stability, making it easier to produce coherent narratives across multiple clips.
The primary innovation in Veo 3.1 is the ability to maintain identity consistency for characters and objects. Creators can now ensure that a character remains visually identical even as the setting or action changes throughout a sequence. This feature minimizes the unpredictable morphing often seen in generative video, providing a level of reliability that is essential for professional filmmaking and detailed storytelling workflows.
Beyond character stability, the model excels at maintaining background and object integrity across different scenes. Users can reuse specific textures or environments, allowing for a sense of permanence in their digital worlds. This capability enables creators to blend disparate elements, such as stylized backgrounds and unique character designs, into a single, cohesive, and high-impact visual clip.
In a major move toward mobile-first content, Veo 3.1 now supports native vertical video generation in a 9:16 aspect ratio. This allows creators to produce high-quality, full-screen content for platforms like YouTube Shorts without the need for cropping or losing visual quality. This native support ensures that the framing and resolution are optimized for the smartphone experience from the moment of creation.
Resolution and fidelity have also received a substantial boost with the introduction of state-of-the-art upscaling capabilities. Users can now generate videos in 1080p for standard editing or 4K for high-end productions that require rich textures and stunning clarity. These high-resolution options provide the flexibility needed for both casual social media posts and professional projects destined for large screens.
Safety and transparency remain a priority with the integration of SynthID digital watermarking in every generated video. Google has also expanded its verification tools within the Gemini app, allowing users to upload videos and confirm whether they were produced with Google AI. This approach aims to foster a more transparent digital ecosystem as synthetic media becomes increasingly prevalent.
The updated features are currently rolling out across a wide range of Google products and developer platforms. Consumers can access Veo 3.1 directly within the Gemini app and YouTube Shorts, while professional and enterprise users can leverage the technology through Vertex AI and the Gemini API. This broad availability ensures that everyone from casual creators to professional developers can utilize these powerful new cinematic tools.
The release of Veo 3.1 represents a significant shift in the generative AI landscape, moving the technology from experimental visual snippets to reliable production tools. By solving for character and background consistency, Google is directly addressing the primary barriers that have prevented AI video from being used in professional narrative filmmaking. The strategic focus on native vertical video generation also signals an intent to dominate the creator economy, providing a specialized tool for the massive YouTube Shorts and mobile social media markets that competitors like OpenAI's Sora have yet to fully optimize for.