Apple Plans Customizable Camera App in iOS 27

Bloomberg's Mark Gurman reports that Apple will make the Camera app "fully customizable" in iOS 27, letting users pick which controls appear and where they sit; The Verge and Mashable have published corroborating coverage. Gurman reports the changes include selectable widgets for flash, exposure, timer, night mode, live photos, and resolution, plus per-mode widget sets and a new Add Widgets tray. Bloomberg also reports a major Siri redesign that will surface visual-intelligence features in the Camera and, per the report, integrate third-party models such as OpenAI's ChatGPT alongside Apple models and Gemini for some features. Multiple outlets note broader UI tweaks across Safari, Weather, and Image Playground ahead of Apple's planned iOS 27 reveal at WWDC.
What happened
Per Bloomberg, with corroborating coverage in The Verge and Mashable, users will be able to choose which Camera controls appear on screen and where they sit. The reported controls include flash, exposure, timer, night mode, live photos, and resolution, and Gurman says widgets will be split into categories such as basic, manual, and settings. He also reports a new Camera-specific Siri mode that surfaces Apple's visual-intelligence features and places Siri access inside the Camera UI.
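To make the reported structure concrete, here is a minimal Swift sketch of how per-mode widget sets and the basic/manual/settings grouping could be modeled. None of these types (CameraWidget, WidgetCategory, CaptureMode, CameraLayout) are real Apple APIs, and the category assignments are illustrative guesses based on the reporting.

```swift
// Hypothetical model of the reported Camera widgets; not an Apple API.
enum CameraWidget: String, CaseIterable {
    case flash, exposure, timer, nightMode, livePhotos, resolution
}

enum WidgetCategory {
    case basic, manual, settings
}

extension CameraWidget {
    // Bloomberg reports a basic/manual/settings split; this mapping
    // is an illustrative guess, not confirmed detail.
    var category: WidgetCategory {
        switch self {
        case .flash, .timer, .livePhotos: return .basic
        case .exposure, .nightMode:       return .manual
        case .resolution:                 return .settings
        }
    }
}

enum CaptureMode: Hashable {
    case photo, video, portrait
}

// Each capture mode keeps its own ordered widget list, mirroring the
// reported "per-mode widget sets" behavior.
struct CameraLayout {
    private var widgetsByMode: [CaptureMode: [CameraWidget]] = [:]

    mutating func add(_ widget: CameraWidget, to mode: CaptureMode) {
        widgetsByMode[mode, default: []].append(widget)
    }

    func widgets(for mode: CaptureMode) -> [CameraWidget] {
        widgetsByMode[mode] ?? []
    }
}

var layout = CameraLayout()
layout.add(.flash, to: .photo)
layout.add(.nightMode, to: .photo)
layout.add(.timer, to: .video)
print(layout.widgets(for: .photo))  // prints the photo mode's widgets
```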
Technical details
Editorial analysis, technical context: OS-level camera customization via on-screen widgets reduces friction for power users and narrows the feature gap with third-party camera apps. Per-mode widget sets and an "Add Widgets" tray, as reported by Bloomberg and The Verge, mirror a trend of platform vendors exposing more granular camera controls without forcing users into full pro interfaces. On the AI side, Bloomberg's reporting that Gemini will power some Siri capabilities and that users may switch between Apple models and third-party tools such as OpenAI's ChatGPT implies more modular AI integration at the system level. That modularity raises questions about latency, on-device versus cloud inference, and where the privacy boundary sits for visual-intelligence tasks.
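The routing question can be illustrated with a small hedged sketch: a hypothetical policy that keeps requests on device unless the task needs a larger model and the image is not flagged as sensitive. The types, the provider name, and the policy itself are assumptions for illustration, not anything Apple has described.

```swift
import Foundation

// Hypothetical routing decision for a visual-intelligence request.
enum InferenceTarget {
    case onDevice                  // low latency, strong privacy guarantees
    case cloud(provider: String)   // higher capability, data leaves the device
}

struct VisualIntelligenceRequest {
    let containsSensitiveImagery: Bool
    let requiresLargeModel: Bool
}

// One plausible policy: prefer on-device inference, and fall back to a
// cloud provider only when the task needs a larger model and the image
// is not sensitive. "Gemini" here is just an example default.
func route(_ request: VisualIntelligenceRequest,
           cloudProvider: String = "Gemini") -> InferenceTarget {
    if request.containsSensitiveImagery || !request.requiresLargeModel {
        return .onDevice
    }
    return .cloud(provider: cloudProvider)
}

let target = route(VisualIntelligenceRequest(containsSensitiveImagery: false,
                                             requiresLargeModel: true))
print(target)  // cloud(provider: "Gemini") under this invented policy
```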
Context and significance
Multiple outlets including Mashable and CNET frame these changes as part of a broader iOS visual refresh; Gurman reports UI updates across Safari, Weather, and Image Playground and describes Liquid Glass refinements and new animations. For developers and ML practitioners, system-level hooks for visual intelligence inside the Camera create new extension points for third-party models and services. This fits a wider pattern where mobile OS vendors expose AI primitives in first-party apps, then open APIs or partnerships that let external models participate in user-facing features.
What to watch
- Implementation details and API surface: whether Apple announces developer APIs or sandboxed model integrations that allow third-party model swapping (a speculative shape for such an extension point is sketched after this list).
- On-device versus cloud inference: how Apple balances Gemini integration against on-device Apple Intelligence features, trading off latency and privacy.
- Controls and UX telemetry: whether per-mode widgets increase use of advanced camera features or fragment the experience for casual users, an outcome seen in prior camera-app customizations.
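As flagged in the first bullet above, a sandboxed model-swapping API is one possible shape such an announcement could take. The sketch below is speculative: a provider protocol plus a registry that lets the active model be switched at runtime. Apple has announced no such API, so every name here is invented.

```swift
import Foundation

// Hypothetical extension point for swappable visual-intelligence models.
protocol VisualIntelligenceProvider {
    var identifier: String { get }
    func describe(imageData: Data) async throws -> String
}

struct AppleOnDeviceProvider: VisualIntelligenceProvider {
    let identifier = "com.apple.on-device"
    func describe(imageData: Data) async throws -> String {
        "on-device description stub"
    }
}

struct ThirdPartyProvider: VisualIntelligenceProvider {
    let identifier: String  // e.g. a ChatGPT- or Gemini-backed extension
    func describe(imageData: Data) async throws -> String {
        "cloud description stub from \(identifier)"
    }
}

// A registry lets the user-selected provider be swapped at runtime,
// the kind of model choice Gurman's report hints at for Siri.
final class ProviderRegistry {
    private var providers: [String: VisualIntelligenceProvider] = [:]
    private(set) var activeIdentifier = "com.apple.on-device"

    func register(_ provider: VisualIntelligenceProvider) {
        providers[provider.identifier] = provider
    }

    func activate(_ identifier: String) {
        if providers[identifier] != nil { activeIdentifier = identifier }
    }

    var active: VisualIntelligenceProvider? { providers[activeIdentifier] }
}

let registry = ProviderRegistry()
registry.register(AppleOnDeviceProvider())
registry.register(ThirdPartyProvider(identifier: "com.example.gemini-extension"))
registry.activate("com.example.gemini-extension")
```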
Scoring Rationale
Notable product-level changes: a fully customizable Camera app and deeper AI integration affect mobile developers and ML deployment patterns. The update is significant but not frontier-breaking, and the underlying reporting is current.