Mediate is a research and innovation lab working at the intersection of computer vision and augmented reality.


Our Story

We are a team of innovators, researchers, designers, and social entrepreneurs who have previously worked at institutions such as MIT, Harvard, and McKinsey. Our main office is in Boston, and our teams are spread across the US and Turkey.


Research and Innovation

Since 2017, we have been crafting mobile, intelligent ecosystems that bring productivity and joy to people's daily lives.

We combine the power of AR platforms and computer vision to understand physical spaces in new and effective ways. To achieve this, we develop novel neural networks, in collaboration with MIT, that robustly parse 3D spaces. We optimize our technology to run in real time and entirely on-device, keeping user data private.



Supersense

Supersense is a new kind of app for the visually impaired. We have developed a system that helps people with visual impairments find the objects they are looking for in their environment. The app is available on the Google Play Store and the iOS App Store, and is used by more than 5,000 people every month.
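To make the object-finding idea concrete, here is a minimal sketch of one way such guidance could work: translating a detected object's position in the camera frame into a coarse spoken direction cue. This is an illustrative assumption, not Supersense's actual implementation; the function name and thresholds are hypothetical.

```python
# Hypothetical sketch: map a detected object's horizontal position in a
# camera frame to a coarse spoken direction cue, as an object-finding app
# might do. The thresholds and names are illustrative assumptions only.

def direction_cue(box_center_x: float, frame_width: float) -> str:
    """Return a spoken cue based on where the object sits in the frame."""
    relative = box_center_x / frame_width  # 0.0 = far left, 1.0 = far right
    if relative < 1 / 3:
        return "on your left"
    if relative > 2 / 3:
        return "on your right"
    return "straight ahead"

print(direction_cue(100, 640))  # object near the left edge of the frame
```

In a real pipeline, a cue like this would be fed to a text-to-speech engine each time the detector updates.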


Museum of Science Navigation

We are piloting a new generation of indoor navigation and exploration technology at Boston’s Museum of Science. We are implementing an innovative system that guides visitors around the museum, provides more information about the exhibits, and helps visitors with disabilities explore the museum more independently.

MIT.nano Experience

As part of the MIT.nano opening event, we created virtual and augmented reality experiences to showcase the laboratory spaces and clean rooms within MIT.nano, which are closed to the public and were not accessible during the event. The VR/AR experiences provided an opportunity to explore these spaces and better understand how nanoscience and nanotechnology laboratories operate.


Mediate VR

Mediate VR is a platform for speech-driven user research in virtual reality. It captures the emotions, challenges, and pleasures of spatial experience through voice recordings. Users explore a virtual environment, respond verbally to prompts, and engage in tasks. Mediate contextualizes user voice recordings with data captured from the virtual environment, synthesizing insights and feedback in real time for our clients through an admin dashboard.
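As a rough illustration of what "contextualizing voice recordings with data from the virtual environment" could mean, the sketch below tags each timestamped utterance with the room the user occupied at that moment. The data shapes and names are assumptions for illustration, not the platform's actual schema.

```python
# Hypothetical sketch: tag each transcribed utterance with the virtual room
# the user was in at that time, by joining two timestamped logs.
# All data shapes and names here are illustrative assumptions.
import bisect

# (timestamp_seconds, room) samples logged by the virtual environment
position_log = [(0.0, "lobby"), (12.5, "gallery"), (40.0, "atrium")]

def room_at(t: float) -> str:
    """Return the room the user was in at time t (latest sample <= t)."""
    times = [ts for ts, _ in position_log]
    i = bisect.bisect_right(times, t) - 1
    return position_log[max(i, 0)][1]

# (timestamp_seconds, transcribed_text) from the voice recordings
utterances = [(5.0, "I like the lighting here"), (20.0, "This feels cramped")]
tagged = [(room_at(t), text) for t, text in utterances]
print(tagged)  # each comment is now paired with the space it refers to
```

A dashboard could then aggregate feedback per room rather than per session, which is the kind of spatial synthesis the paragraph above describes.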


NavigAid

NavigAid is an AI system that provides contextually relevant, task-driven solutions to problems such as finding objects, identifying paths of ingress and egress, and understanding the layout of an environment. It is enabled by a novel neural network architecture capable of extracting semantically and functionally relevant spatial features from images, which help create a human-like understanding of physical environments.