OpenAI Codex Automates Adobe Lightroom Photo Denoising
OpenAI's Codex autonomously learned to operate Adobe Lightroom like a human and denoised 50 photos without an API or plugin. It discovered the necessary GUI interactions and executed a sequence of actions (select, apply denoise, export) at scale, completing the batch far faster than manual work. The experiment demonstrates emergent tool-use: a code-generating model can synthesize automation scripts and control closed-source desktop software through the user interface alone. For practitioners this changes the calculus for integration: you can prototype workflows and automation even when no official API exists, but it also raises new security and reliability questions around models that can operate general-purpose software.
What happened
Codex worked out how to drive Adobe Lightroom's interface the way a human operator would, denoising 50 photos with no API or plugin involved. The model generated automation that navigated the Lightroom UI, applied denoising adjustments, and exported results, executing the full end-to-end workflow more quickly than a manual operator could.
Technical details
Codex was used to synthesize procedural UI interactions rather than call an SDK. The workflow relied on programmatically driving elements of the desktop application, validating outcomes visually or via file diffs, and iterating when operations failed. Key practical behaviors observed:
- generating sequences of UI actions and keyboard shortcuts to manipulate controls
- verifying success by comparing input and output images or export artifacts
- looping over multi-image batches with retry logic for transient UI states
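The batch-with-verification pattern described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not Codex's actual output: `apply_denoise` stands in for whatever sequence of UI actions the model emits, and success is checked the same way the article describes, by confirming the exported file exists and its bytes differ from the input, with retries for transient failures.

```python
import hashlib
from pathlib import Path

def file_digest(path):
    """SHA-256 of a file's bytes, used as a cheap 'did anything change?' check."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def denoise_batch(images, apply_denoise, max_retries=3):
    """Loop over a batch of image paths, invoking a UI-driving step
    (apply_denoise is a hypothetical stand-in) and verifying each export.

    Returns a dict mapping each source path to its export path,
    or to None if every retry failed."""
    results = {}
    for src in images:
        for _attempt in range(max_retries):
            out = apply_denoise(src)  # stand-in for select / denoise / export UI actions
            # Verify: export exists and its contents differ from the input
            if out and Path(out).exists() and file_digest(out) != file_digest(src):
                results[src] = out
                break
        else:
            results[src] = None  # gave up after max_retries transient failures
    return results
```

The verification step matters more than it looks: UI automation can silently no-op (a dialog steals focus, a shortcut lands on the wrong panel), so comparing artifacts is the only ground truth the driver has.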
Context and significance
This is not just a clever script. It shows that an LLM-based code system can discover and compose GUI-level operations to use closed-source, GUI-only tools. That lowers the barrier to automation when no API exists and accelerates prototyping of domain workflows such as photo editing, data entry, and legacy-app integration. It also surfaces operational risks: brittle UI automation, permission and security concerns, and the potential for models to perform actions beyond their intended boundaries.
What to watch
Expect rapid interest in tooling that formalizes safe GUI tool-use, standards for intent and permission, and platform responses from vendors who may expose APIs or harden UIs against automated control.
Scoring Rationale
The demonstration is a notable instance of emergent model tool-use: LLMs composing GUI-level automation for closed-source apps changes how practitioners prototype integrations. It is not a paradigm shift like a new frontier model, but it has appreciable practical and security implications, warranting a mid-high 'notable' score.