iOS 27 Reportedly Adds Siri Camera Mode and Embeds Visual Intelligence in the Camera App

Bloomberg's Mark Gurman reports that Apple is developing a new Siri Camera Mode for iOS 27 that embeds Visual Intelligence directly into the Camera app, according to 9to5Mac's coverage of his reporting. 9to5Mac quotes Gurman saying Apple will "move its Visual Intelligence feature - currently tied to the Camera Control button - into the camera app itself." 9to5Mac also reports a new shutter-style control styled after the Apple Intelligence logo and says the existing Camera Control shortcut will remain. MacRumors reports that backend code hints Visual Intelligence in iOS 27 will gain features such as nutrition-label scanning, adding printed contact details to Contacts, generating Wallet passes from scans, and automatically naming Tab Groups in Safari. Reports further note that Apple has a deal to use Google's Gemini models for some Apple Intelligence features. Apple is expected to reveal iOS 27 at WWDC on June 8, 2026, per coverage.
What happened
According to 9to5Mac's coverage of Bloomberg reporter Mark Gurman, Apple is developing a new Siri Camera Mode that will be included in iOS 27. 9to5Mac quotes Gurman: Apple will "move its Visual Intelligence feature - currently tied to the Camera Control button - into the camera app itself." 9to5Mac additionally reports that the Camera experience is being "redesign[ed]" with a new shutter-like control styled after the Apple Intelligence logo, and that the existing Camera Control shortcut will remain available. 9to5Mac reports Apple will unveil iOS 27 at WWDC on June 8, 2026.
Technical details
MacRumors reports that discovered backend code strings suggest Visual Intelligence will gain features including scanning a food nutrition label to surface dietary information, offering to add printed phone numbers and addresses to Contacts, generating Wallet passes from scanned tickets or cards, and automatically naming Tab Groups in Safari. 9to5Mac reports that current Visual Intelligence integrations can surface results from external services including ChatGPT and Google Image Search, and that Gurman says Apple has signed a deal to use Google's Gemini models for some Apple Intelligence functionality.
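None of these reported features has a public API yet, but the general shape of a contact-extraction flow is already expressible with shipping iOS frameworks. The Swift sketch below is an illustration under that assumption, not Apple's implementation: it runs Vision text recognition on a captured frame, uses NSDataDetector to find phone numbers in the recognized text, and stages them as a draft CNMutableContact.

```swift
import Vision
import Contacts
import CoreGraphics
import Foundation

// Illustrative sketch only: OCR a captured frame, detect phone numbers in the
// recognized text, and stage them as a draft contact. This approximates the
// reported "add printed contact details to Contacts" flow with public APIs;
// it is not Apple's Visual Intelligence implementation.
func draftContact(from image: CGImage, completion: @escaping (CNMutableContact?) -> Void) {
    let request = VNRecognizeTextRequest { request, error in
        guard error == nil,
              let observations = request.results as? [VNRecognizedTextObservation] else {
            completion(nil)
            return
        }
        // Join the top OCR candidate from each recognized text line.
        let text = observations
            .compactMap { $0.topCandidates(1).first?.string }
            .joined(separator: "\n")

        // NSDataDetector finds phone numbers; address detection works the same way.
        guard let detector = try? NSDataDetector(
            types: NSTextCheckingResult.CheckingType.phoneNumber.rawValue
        ) else {
            completion(nil)
            return
        }
        let matches = detector.matches(in: text, options: [], range: NSRange(text.startIndex..., in: text))
        let numbers = matches.compactMap { $0.phoneNumber }
        guard !numbers.isEmpty else {
            completion(nil)
            return
        }

        let contact = CNMutableContact()
        contact.phoneNumbers = numbers.map {
            CNLabeledValue(label: CNLabelPhoneNumberMain, value: CNPhoneNumber(stringValue: $0))
        }
        completion(contact)
    }
    request.recognitionLevel = .accurate

    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try? handler.perform([request])
}
```

Both Vision text recognition and NSDataDetector run entirely on-device, which matters for the privacy questions discussed below.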
Editorial analysis - technical context
For practitioners: integrating camera-based vision features into a system-level assistant tends to require a hybrid of on-device vision preprocessing and cloud-hosted large models for richer reasoning, given the compute and latency tradeoffs involved. Companies deploying comparable camera+LLM experiences commonly split tasks: visual feature extraction and privacy-sensitive filtering on-device, and contextual understanding or multimodal fusion in the cloud. Reliance on third-party models such as Gemini increases integration complexity around API latency, model updates, and data governance compared with pure in-house models.
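As a rough illustration of that split, the sketch below keeps OCR on-device and forwards only the extracted text, never the raw frame, to a cloud model. The endpoint URL, request body, and CloudAnswer type are assumptions for illustration; real integrations would depend on whatever API the hosted model actually exposes.

```swift
import Foundation
import Vision
import CoreGraphics

// Hypothetical response shape from a hosted multimodal/LLM endpoint.
struct CloudAnswer: Decodable {
    let summary: String
}

// On-device stage: OCR the frame so only derived text leaves the device.
func recognizeText(in image: CGImage) throws -> String {
    let request = VNRecognizeTextRequest()
    request.recognitionLevel = .accurate
    try VNImageRequestHandler(cgImage: image, options: [:]).perform([request])
    let observations = (request.results as? [VNRecognizedTextObservation]) ?? []
    return observations
        .compactMap { $0.topCandidates(1).first?.string }
        .joined(separator: "\n")
}

// Cloud stage: send the extracted text to a hosted model for richer reasoning.
// The endpoint URL and JSON body are placeholders, not a real Apple or Google API.
func askCloudModel(about text: String) async throws -> CloudAnswer {
    var request = URLRequest(url: URL(string: "https://example.com/v1/understand")!)
    request.httpMethod = "POST"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    request.httpBody = try JSONEncoder().encode(
        ["prompt": "Summarize what this label says", "context": text]
    )
    let (data, _) = try await URLSession.shared.data(for: request)
    return try JSONDecoder().decode(CloudAnswer.self, from: data)
}
```

The reporting does not specify where Apple's Gemini-backed features draw this boundary; the sketch only shows why the on-device/cloud seam drives the latency and data-governance tradeoffs described above.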
Context and significance
Industry context
Platform vendors have been moving AI capabilities into core system apps to improve discoverability and drive usage; MacRumors and 9to5Mac coverage places Apple's Camera-centric Siri move squarely in that pattern. For developers and ML engineers, tighter OS-level AI features typically create new extension points, updated APIs, and increased demand for multimodal data pipelines (image-to-text, OCR, structured extraction). Privacy and permissions controls become more prominent when camera input is coupled with assistant-style workflows, a theme repeatedly emphasized in public reporting on system AI features; a minimal example of today's capture-authorization gate is sketched below.
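The authorization gate itself is already well defined today. In the sketch, startVisualAssistant() is a hypothetical placeholder for whatever feature would consume the camera frames:

```swift
import AVFoundation
import Foundation

// Gate any camera-fed assistant feature behind explicit user authorization.
// startVisualAssistant() is a hypothetical placeholder for the downstream feature.
func startCameraAssistantIfAuthorized(startVisualAssistant: @escaping () -> Void) {
    switch AVCaptureDevice.authorizationStatus(for: .video) {
    case .authorized:
        startVisualAssistant()
    case .notDetermined:
        AVCaptureDevice.requestAccess(for: .video) { granted in
            // The completion handler may run off the main thread.
            if granted {
                DispatchQueue.main.async { startVisualAssistant() }
            }
        }
    case .denied, .restricted:
        // Respect the user's choice; point to Settings rather than capturing.
        break
    @unknown default:
        break
    }
}
```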
What to watch
Observers should track these indicators in the iOS 27 beta cycle: whether Apple publishes developer APIs for Visual Intelligence features, the published device-compatibility list that clarifies on-device capabilities, explicit documentation of which tasks run on-device versus via cloud Gemini endpoints, and any privacy or data-use details in Apple's developer release notes. Also watch WWDC demos and Apple's technical sessions for concrete SDK examples and performance metrics.
Scoring Rationale
The story reports a notable platform-level AI integration that affects mobile UX, developer APIs, and multimodal pipelines. It is not a frontier-model release but is material for practitioners building vision+LLM features and mobile integrations.
