TechNode 2026-03-07

MWC 2026: iFlytek unveils 40-gram AI glasses for live, on-lens translation

A 40-gram translator on your face

At Mobile World Congress (MWC) 2026 in Barcelona, iFlytek (科大讯飞) introduced a pair of AI-powered smart glasses aimed at frictionless cross-language communication. The 40-gram device displays real-time translated subtitles directly on the lens and plays the translated audio through an integrated speaker, according to TechNode. The glasses reportedly use lip-movement recognition to improve translation timing and accuracy, and bone-conduction audio so users can hear translations without blocking out ambient sound. Will AI glasses finally make cross-language chats effortless?

Why it matters

iFlytek is one of China’s leading speech-recognition and AI voice firms, whose technology is widely used in education and consumer tools across the country. The company was placed on the U.S. Commerce Department’s Entity List in 2019, a move that has complicated its access to certain U.S. technologies and shaped how Chinese AI companies source chips and software. Launching translation glasses on a global stage like MWC underscores how Chinese vendors continue to push AI wearables despite geopolitical headwinds and export controls that constrain supply chains and international expansion.

The bigger picture

Wearable translation has long been a proving ground for ambient computing, from earbuds to AR overlays. By putting captions on the lens and audio in the ear, iFlytek’s approach mirrors a broader industry bet that unobtrusive, on-the-go AI can dissolve language barriers in travel, business, and education. TechNode’s report highlights the weight and translation features; other specifications, pricing, and availability were not immediately clear. The ultimate test? Accuracy, comfort, battery life, and whether people will actually wear their translator.
