AI Mapper is an accessible navigation app designed to support people with disabilities when using public transport in London. It focuses on the needs of blind and partially sighted (BPS) individuals as well as wheelchair users, addressing the challenges of navigating complex transit environments. The app combines multimodal features—including vision-based assistance, intelligent journey planning, and real-time wayfinding—to guide users throughout their journey. By integrating both planning and live navigation in one platform, AI Mapper enables more independent, confident, and inclusive travel.
Navigating transport systems can be challenging for adults who experience the world differently—especially when environments change quickly. Traditional navigation aids struggle in dynamic real‑life settings, making even simple journeys more complicated and sometimes unsafe. When surroundings shift or become cluttered with visual information, confidence drops and travel becomes stressful. GPS inaccuracies—often several meters off in busy urban areas—add to the difficulty, making it hard to locate intersections, station entrances, or the correct bus stop. These real‑world barriers highlight the need for technology that can offer clearer, safer, and more adaptive guidance throughout the transport experience.
Users find it difficult to quickly understand timetables, service updates, and route information.
Complex layouts, unclear directions, and poor structure make routes hard to follow.
A lack of audio output, assistive support, or alternative formats makes information hard to access.
Users must switch between multiple apps for planning, navigation, and accessibility information.
Assistance, guidance, or emergency information is unclear or difficult to locate.
Existing digital tools are difficult to use, overwhelming, or not intuitive.
The device captures visual and sensor input (camera, LiDAR).
Computer vision processes this input to detect obstacles and landmarks.
Mapping APIs generate routing and real-time transit information.
LLM/VLM models convert complex navigation data into short, clear instructions.
The system outputs audio or haptic guidance directly to the user.
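The pipeline above can be sketched in code. This is a minimal illustration only: all class and function names (SensorFrame, detect_scene, fetch_route_step, summarise) are hypothetical stand-ins, since the document does not specify the app's actual components, vision models, or mapping APIs.

```python
from dataclasses import dataclass, field

@dataclass
class SensorFrame:
    image: bytes        # camera frame (placeholder)
    depth_hint: float   # e.g. LiDAR distance to nearest object, in metres

@dataclass
class SceneInfo:
    obstacles: list = field(default_factory=list)
    landmarks: list = field(default_factory=list)

def detect_scene(frame: SensorFrame) -> SceneInfo:
    """Stand-in for the computer-vision stage: obstacle and landmark detection."""
    obstacles = ["obstacle ahead"] if frame.depth_hint < 1.5 else []
    return SceneInfo(obstacles=obstacles, landmarks=["station entrance"])

def fetch_route_step(destination: str) -> str:
    """Stand-in for a mapping-API call returning the next routing instruction."""
    return f"Walk ahead toward {destination}"

def summarise(scene: SceneInfo, route_step: str) -> str:
    """Stand-in for the LLM/VLM stage: condense data into one short instruction."""
    warning = f"Caution: {scene.obstacles[0]}. " if scene.obstacles else ""
    return f"{warning}{route_step}, near the {scene.landmarks[0]}."

def guidance(frame: SensorFrame, destination: str) -> str:
    """Full pipeline: sensors -> vision -> routing -> language -> output.
    The returned string would be delivered as speech or haptic cues."""
    scene = detect_scene(frame)
    step = fetch_route_step(destination)
    return summarise(scene, step)
```

In a real app each stage would run asynchronously against live sensor data; the sketch only shows how the stages compose.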
Users can plan accessible routes using public transport, with consideration of preferences, accessibility needs, weather, and real-time data such as crowding.
The app provides step-by-step guidance during travel, helping users navigate stations, platforms, and interchanges.
Users can interact with the app through natural language (voice or text), reducing complexity and making the system easier to use.
Using the phone camera, the app interprets surroundings—such as signage, exits, obstacles, and accessibility features—and translates them into actionable guidance.
The app helps identify step-free routes, elevators, and less crowded paths, particularly benefiting wheelchair users and those who struggle in busy environments.
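Accessibility-aware route selection of the kind described above can be sketched as a simple filter-then-rank step. This is an assumption-laden illustration, not the app's actual algorithm: the RouteOption fields and pick_route logic are invented for the example, and real crowding data would come from live transit feeds.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class RouteOption:
    name: str
    duration_min: int
    step_free: bool      # no stairs or escalators on the route
    crowding: float      # 0.0 (empty) to 1.0 (at capacity)

def pick_route(options: list, needs_step_free: bool,
               max_crowding: float = 0.7) -> Optional[RouteOption]:
    """Drop routes that fail the user's accessibility needs or are too
    crowded, then return the fastest of what remains (None if nothing fits)."""
    viable = [r for r in options
              if (r.step_free or not needs_step_free)
              and r.crowding <= max_crowding]
    return min(viable, key=lambda r: r.duration_min) if viable else None
```

A wheelchair user would set needs_step_free=True, so a faster route involving stairs is excluded before ranking.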
AI Mapper combines mobile sensors, computer vision, and advanced language models to deliver real‑time, context‑aware navigation that adapts to complex transport environments. It uses smartphone hardware, mapping APIs, and multimodal AI to interpret surroundings, detect obstacles, understand spatial layouts, and convert this information into clear, concise travel guidance.
Copyright © 2026 AIMapper. All rights reserved.