News
Apr 22, 2026
NewDecoded
3 min read

Image by memories.ai
Memories.ai officially emerged from stealth today, announcing an $8 million seed round led by Susa Ventures to build the memory layer for artificial intelligence. Founded by former Meta Reality Labs researchers, the company is launching the world’s first Large Visual Memory Model (LVMM). This technology enables AI systems to maintain long-term visual context, moving beyond the short-term limitations of current models.
The startup aims to overcome the statelessness of existing AI systems such as ChatGPT and Gemini, which typically lose context after 30 to 60 minutes of video. By compressing and indexing video data into searchable semantic representations, Memories.ai allows systems to recall specific events across years of footage. The architecture works much as a web search engine indexes the internet, but applied to visual experiences.
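To make the search-engine analogy concrete, the pipeline can be sketched in miniature: each clip is reduced to a compact semantic representation, stored in an index, and retrieved by similarity to a natural-language query. This is purely an illustrative sketch, not Memories.ai's actual LVMM; the `VideoMemoryIndex` class, the clip captions, and the bag-of-words "embedding" are all hypothetical stand-ins for learned visual embeddings.

```python
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    """Toy stand-in for a semantic embedding: a bag-of-words count vector.
    A real system would use a learned visual/text encoder instead."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class VideoMemoryIndex:
    """Stores (timestamp, caption) pairs and retrieves them by semantic query."""

    def __init__(self):
        self.entries = []  # list of (timestamp, caption, vector)

    def add_clip(self, timestamp: str, caption: str) -> None:
        # "Compress" the clip to its semantic representation and index it.
        self.entries.append((timestamp, caption, embed(caption)))

    def query(self, text: str, top_k: int = 1):
        # Rank all indexed clips by similarity to the query.
        qv = embed(text)
        ranked = sorted(self.entries, key=lambda e: cosine(qv, e[2]), reverse=True)
        return [(ts, cap) for ts, cap, _ in ranked[:top_k]]

# Hypothetical footage spanning months, searchable in one call.
index = VideoMemoryIndex()
index.add_clip("2025-01-03 14:02", "person in red jacket enters loading dock")
index.add_clip("2025-06-11 09:45", "delivery truck parks near the loading dock")
index.add_clip("2025-09-20 22:17", "person in red jacket leaves through side door")

hits = index.query("red jacket", top_k=2)  # both "red jacket" clips rank first
print(hits)
```

The key design point the analogy captures: the expensive work (embedding) happens once at ingest, so a query over years of footage is a cheap index lookup rather than a re-scan of the raw video.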
Current enterprise customers are already utilizing the LVMM for security, media archiving, and marketing intelligence. Security teams can query months of surveillance footage using natural language to find specific incidents in seconds. Meanwhile, brands are analyzing millions of social media clips to track trends and influencer impact with unprecedented speed and depth.
Looking ahead, Memories.ai is expanding into embodied AI and personal hardware through its partnership with NVIDIA. The company recently unveiled Project LUCI, an open platform designed to bring continuous visual memory to smart glasses, laptops, and humanoid robots. This integration will allow wearable devices to not just see, but truly remember and understand a user's daily life.
Co-founder Dr. Shawn Shen believes that memory is the foundation for reaching Artificial General Intelligence. He notes that while the industry has focused on making AI think faster, Memories.ai is focused on making it remember better. Interested users and partners can explore the technology further at https://memories.ai to see the future of personalized AI.
The emergence of Large Visual Memory Models represents a shift from processing-heavy AI to context-aware AI. By bridging the gap between perception and long-term recall, Memories.ai is providing the missing infrastructure needed for autonomous agents to function in the real world. This move directly challenges the current context window race, suggesting that efficient retrieval is more sustainable than massive active memory. For the robotics and wearable industries, this provides a blueprint for devices that learn from experience rather than just reacting to live sensors.