Interacting with technology carries more friction than interacting with a human: technology misses emotional cues, it's full of text, and it doesn't engage with you on a personal level.
At Anam, we're working to change that. We're building the next interface for technology: real-time AI personas that feel as natural to talk to as a human. Our personas are photorealistic, multilingual, respond in real time, and are available 24/7. Most importantly, they are emotive: they can convey the subtleties of human emotion based on the context of the conversation.
We've taken a unique approach to tackling this nascent area of research, building our own AI models and infrastructure in-house. We've created an interactive emotional interface, one that mimics how humans have communicated for millennia. We believe that crossing the "uncanny valley" and creating an experience people love to interact with takes more than video loops and mouth dubbing.
Our infrastructure has predominantly been built in-house. We've developed our Conversation Engine, in which several components work in unison to deliver our AI personas in real time.
For this to feel like a dynamic two-way conversation, the engine needs to respond within a second. So we've architected our infrastructure with speed top of mind, implementing our own low-latency, scalable streaming platform that brings our personas to the user in real time. This will soon evolve into controlled latency, allowing us to match the persona's speech to the user's pace, the same way a good conversationalist would.
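To make that real-time path concrete, here is a minimal sketch of what receiving a persona's audio and video in the browser could look like, using standard WebRTC APIs. The signaling endpoint and request body are placeholders invented for illustration, not Anam's actual API.

```typescript
// Minimal sketch: receive a real-time persona stream in the browser.
// The signaling URL and payload shape below are hypothetical placeholders.

async function connectToPersona(videoElement: HTMLVideoElement): Promise<RTCPeerConnection> {
  const pc = new RTCPeerConnection({
    iceServers: [{ urls: "stun:stun.l.google.com:19302" }],
  });

  // Play incoming audio/video as soon as the first frames arrive.
  pc.ontrack = (event) => {
    videoElement.srcObject = event.streams[0];
  };

  // Receive-only here; the user's microphone would be added with
  // getUserMedia() and pc.addTrack() to make the conversation two-way.
  pc.addTransceiver("video", { direction: "recvonly" });
  pc.addTransceiver("audio", { direction: "recvonly" });

  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);

  // Hypothetical signaling exchange: send our SDP offer, apply the answer.
  const response = await fetch("https://example.com/persona/signal", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ sdp: pc.localDescription }),
  });
  const { sdp } = await response.json();
  await pc.setRemoteDescription(sdp);

  return pc;
}
```

Keeping the media on a peer connection rather than chunked HTTP streaming is one common way to stay within the sub-second budget described above.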
The result? An API that puts a human face on your product. One that is emotive and scalable, responding to your users in less than a second. Available 24/7 and multilingual (we currently support 32 languages), helping businesses transform how they reach their users.
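As one illustration of how that API surface might be consumed, the sketch below requests a persona session with a chosen language before the browser connects to the stream. The endpoint, field names, and persona identifier are assumptions made for the example, not the real API.

```typescript
// Hypothetical sketch: create a persona session server-side and hand the
// browser a short-lived token. All endpoints and fields are invented for
// illustration; consult the actual API documentation for the real surface.

interface SessionRequest {
  personaId: string; // which persona to bring up
  language: string;  // e.g. "de-DE", one of the supported languages
}

interface SessionResponse {
  sessionToken: string; // short-lived credential the browser uses to connect
  expiresAt: string;
}

async function createPersonaSession(
  apiKey: string,
  request: SessionRequest,
): Promise<SessionResponse> {
  const response = await fetch("https://example.com/v1/sessions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify(request),
  });
  if (!response.ok) {
    throw new Error(`Session creation failed: ${response.status}`);
  }
  return (await response.json()) as SessionResponse;
}
```

Issuing short-lived session tokens server-side keeps the long-lived API key out of the browser.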
What's coming in the next few months?

To support our launch, we've raised £2m in funding led by Concept Ventures, with investment from Torch Capital and angel investors including Mati Staniszewski, Zeena Qureshi, Otto Söderlund, Warrick Shanly, Jeremy Yap, Olivia Mark, and others. We'll use this funding to build out our team of all-star AI researchers and software engineers and to ship the first few iterations of our product.
Technical roles make up almost the entire company at Anam today: we're four AI research engineers, two software engineers, and two on the commercial side. We're a small group, united by our curiosity and devotion to creating a new interface that has the potential to transform how we interact with technology.
In the last six months, we have been working with 30 design partners who are helping us refine our product. Our API is already in use everywhere from multi-billion-pound corporations to AI-native start-ups. For example, we're working with Henkel, Schwarzkopf, Sama Therapeutics, and Solid Road on use cases spanning 1-1 teaching assistants, simulation role play for training, sales chat agents, customer support agents, language tutors, interview preparation and recruiting, and medical agents for therapy and primary care.
From today, we’re launching our CARA face-generation model as well as General Access to the product. Through the Anam Lab, you can build, create and deploy AI personas for multiple use cases. This is just the first step in developing an entirely new way to interact with technology—one that feels truly human. If you’re as excited as we are, reach out to us at anam.ai.