Empowering people in digital and physical spaces

We are a team of designers, developers, and social entrepreneurs working at the intersection of Computer Vision and Augmented Reality.

CONTACT US

Experience and technology combined.

We are a research, innovation, and development lab working at the intersection of Computer Vision and Augmented Reality. Our expert team has previously worked at institutions like MIT, Harvard, and McKinsey. Our main office is in Boston, and our teams are spread across the globe.

Since 2017, we have been crafting mobile and intelligent ecosystems that increase productivity and joy in people’s lives.

We combine the power of AR platforms and computer vision to understand physical spaces in new and effective ways. To achieve this, we develop novel neural networks, in collaboration with MIT, that robustly parse 3D spaces. We optimize our technology to run in real time and locally on edge devices, keeping data private. We provide cross-platform services, APIs, and cloud integrations to make your solution and its user experience smooth and robust.

We are proudly supported by prestigious institutions like the NSF, US Veterans Affairs, MIT Sandbox, designx, and MET FUND.

Projects

Mind Studio

Mind Studio is a visionOS app that elevates your thinking and learning experience by utilizing the power of space. Transform your surroundings into an interactive canvas and seamlessly organize your thoughts to enhance efficiency.

LEARN MORE

Supersense

The market-leading mobile scanner application for blind and visually impaired users, equipped with novel computer vision and machine learning models.

With the power of AI, Supersense automatically figures out what you are trying to scan, guides you on how to point the camera, and reads the content in the right format. Its unique design minimizes the time and frustration of scanning and reading text for blind and visually impaired users.

LEARN MORE

ReadBit

ReadBit is a free text-to-speech reader that helps you listen to books, PDFs, news, web articles, and many other file formats. It saves a lot of time by instantly converting any text into an audiobook, reading all the information at your desired speech rate, and providing an excellent listening experience with natural, lifelike voices.

LEARN MORE

Museum of Science Navigation

We are piloting a new generation of indoor navigation and exploration technology at Boston’s Museum of Science. The system guides visitors around the museum, offers more information about exhibition pieces, and helps visitors with disabilities explore more independently.

MIT.nano Experience

As part of the MIT.nano opening event, we created virtual and augmented reality experiences to showcase the laboratory spaces and clean rooms within MIT.nano, which are closed to the public and were not accessible during the event. The VR/AR experiences provided an opportunity to explore these spaces and to better understand how nanoscience and nanotechnology laboratories operate.

MediateVR

Mediate VR is a platform for speech-driven user research in virtual reality. It captures the emotions, challenges, and pleasures of spatial experience through voice recordings. Users explore a virtual environment, respond verbally to prompts, and engage in tasks. Mediate contextualizes user voice recordings through data captured from the virtual environment, synthesizing insights and feedback in real time for our clients through an admin dashboard.