A 20-Year-Old College Dropout Just Raised $3 Million to Solve the AI Memory Problem

From open source side projects to enterprise clients processing billions of tokens weekly: a 20-year-old founder is tackling the AI memory problem with technology that adapts and forgets like the human brain.

What is happening: Dhravya Shah, who turned 20 last month, has raised $3 million to build Supermemory, a memory infrastructure layer for artificial intelligence applications. The Mumbai-born founder created his own vector database to solve what he calls one of the toughest challenges in AI: allowing models to retain context across multiple sessions.

Why this matters: As AI adoption accelerates globally, the inability of large language models to maintain long-term memory between sessions remains a critical limitation. Shah’s approach addresses a fundamental infrastructure gap that affects everything from chatbots to video editors, and hundreds of companies are already building on the platform.

A teenager who started building a bookmarking tool in his college dorm has landed $3 million in funding from some of Silicon Valley’s most influential tech executives to solve one of artificial intelligence’s most persistent problems: memory.

Dhravya Shah, who turned 20 last month, announced the seed funding round for Supermemory, describing it as infrastructure that enables AI applications to remember and adapt like the human brain. The round was led by Susa Ventures, Browder Capital and SF1.vc, with backing from Jeff Dean, chief scientist at Google DeepMind, Cloudflare CTO Dane Knecht, Sentry founder David Cramer, and executives from OpenAI, Meta and Google.

“Memory is one of the toughest challenges in AI right now,” Shah said in the funding announcement. “I realized this when I built the first version of supermemory, which was simply a bookmarking and note-taking tool that I was building as a side project in my college dorm two years ago, when I was 18.”

From bookmarks to billions

Originally from Mumbai, Shah began building the initial version of Supermemory, then called Any Context, as part of a personal challenge to create something new every week. He launched it as an open source project on GitHub that allowed users to chat with their Twitter bookmarks.

The consumer app quickly gained traction, reaching 50,000 users and amassing over 10,000 stars on GitHub, making it one of the fastest-growing open source projects in 2024. Users saved millions of items across the platform and the project won multiple grants, including the buildspace grant.

“At scale, the consumer app struggled a lot, and to our surprise, the ‘Memory’ infrastructure for LLMs like this simply didn’t exist,” Shah explained in his announcement. “I had some infrastructure experience and started sharing more details about how we were building the infrastructure behind the consumer app ourselves.”

Memory as infrastructure

While developing Supermemory, Shah was working on AI infrastructure at Cloudflare, where he contributed to patent-pending work on making agents faster. He also worked at startups focused on memory solutions and built multiple consumer applications.

The experience reinforced his understanding of the fundamental challenge of memory. “It’s not just a search problem, it’s about really understanding your users and making their experience magical by contextualizing the LLMs they talk to,” he wrote.

Interest from companies wanting to use Supermemory’s infrastructure for their own products prompted Shah to make a decisive change. Many were willing to pay immediately and some offered contractual work to help implement the open source project. Shah decided to drop out of college, move to San Francisco full time, and transform Supermemory into a commercial product.

“This is my life’s work,” Shah wrote. “I dropped out of college, moved to SF, and continued building the product as a solo founder.”

The commercial version of Supermemory works as a universal memory API for AI applications. It builds a knowledge graph from the data it ingests and personalizes context for each user, supporting queries across different types of applications, from writing tools to video editors.
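To make the pattern concrete, here is a minimal sketch of what a per-user memory layer looks like conceptually: store memories for a user, then recall the most relevant ones for a query so they can be injected into an LLM prompt. All names (`MemoryStore`, `recall`) are hypothetical illustrations, not Supermemory's actual API, and keyword overlap stands in for the semantic retrieval a real system would use.

```python
from dataclasses import dataclass, field

@dataclass
class Memory:
    user_id: str
    text: str
    tags: set = field(default_factory=set)

class MemoryStore:
    """Toy memory layer: add memories per user, then recall the ones
    most relevant to a query. Scoring here is naive keyword overlap;
    a production system would use embeddings and a knowledge graph."""

    def __init__(self):
        self._memories = []

    def add(self, user_id, text, tags=()):
        self._memories.append(Memory(user_id, text, set(tags)))

    def recall(self, user_id, query, k=3):
        query_words = set(query.lower().split())
        scored = [
            (len(query_words & set(m.text.lower().split())), m)
            for m in self._memories
            if m.user_id == user_id
        ]
        scored.sort(key=lambda pair: pair[0], reverse=True)
        return [m.text for score, m in scored[:k] if score > 0]
```

An application would call `recall()` before each model request and prepend the returned memories to the prompt, which is what keeps context alive across sessions.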

Building from scratch

Shah’s approach involved building basic infrastructure components from scratch. “I built my own vector database, content parsers, and an engine that works like the human brain,” he wrote in his announcement.
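The core of a vector database is small enough to sketch: store embedding vectors alongside payloads, and rank them by cosine similarity to a query vector. This is an illustrative brute-force version, not Shah's implementation; real vector databases layer approximate-nearest-neighbor indexes (such as HNSW) on top of this idea to stay fast at scale.

```python
import math

class TinyVectorDB:
    """Brute-force nearest-neighbor search over embedding vectors.
    Shows only the core idea behind a vector database: rank stored
    vectors by cosine similarity to a query vector."""

    def __init__(self):
        self._items = []  # list of (vector, payload) pairs

    def add(self, vector, payload):
        self._items.append((vector, payload))

    @staticmethod
    def _cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm_a = math.sqrt(sum(x * x for x in a))
        norm_b = math.sqrt(sum(y * y for y in b))
        return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

    def search(self, query, k=2):
        ranked = sorted(self._items,
                        key=lambda item: self._cosine(query, item[0]),
                        reverse=True)
        return [payload for _, payload in ranked[:k]]
```

Brute force is O(n) per query; the engineering work in a purpose-built vector database is almost entirely in the indexing that avoids scanning every stored vector.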

The platform can ingest many types of data, including files, documents, chats, projects, emails, PDFs, and application data streams, with multimodal input support that lets it work across different kinds of AI applications. There is also a chatbot and annotator feature that lets users add memories as text, attach files or links, and connect apps like Google Drive, OneDrive, or Notion, along with a Chrome extension for saving notes from websites.

The infrastructure now serves hundreds of companies and builders, with some clients processing billions of tokens weekly. The company is working with several AI applications, including desktop assistants, video editors, search platforms and real estate tools, as well as a robotics company to retain visual memories captured by robots.

“Today I am delighted that we have one of the best and fastest memory products in the world, with hundreds of companies and developers building applications on Supermemory,” Shah wrote. “And this is just the beginning.”

The vision ahead

Shah sees memory as a fundamental missing piece in the development of artificial general intelligence. He argues that while model vendors race to build superintelligence with PhD-level knowledge and the ability to use tools, memory and adaptation remain underdeveloped.

“It’s increasingly obvious that the last big hill to climb to make intelligence feel truly human, the next exciting tipping point in AI, is memory and personalization,” he wrote on the company’s website.

He stresses that memory infrastructure must remain independent of specific model vendors. “If Google releases the next best model this week, but you’re stuck on OpenAI because its API holds your memory, you’d be forced to stay where you are,” he explained. “Memory should be a universal right, not a moat.”

Shah says almost all early customers saw increases in app usage, customer satisfaction or revenue after making their experiences more personalized. “Users should not be locked into a chatbot because it knows everything about them. Because all chatbots can know everything about them. They all run on Supermemory.”

His long-term vision is ambitious. “Intelligence without memory is nothing more than sophisticated randomness,” he wrote. “One day, when AGI exists and robots walk everywhere, they will need a memory as sophisticated as their intelligence. And it will be Supermemory.”

The company is now hiring in engineering, research and product roles as it scales its infrastructure to meet growing demand from companies building AI applications that require persistent contextual memory.
