Databricks Launches Agent Bricks Enterprise Agent Platform

Databricks released Agent Bricks, a unified platform to build, deploy, and govern production AI agents that operate on enterprise data. Agent Bricks combines model access, context from the Lakehouse, identity-first governance via Unity Catalog, and automated evaluation that generates synthetic task data and LLM judges to optimize agents for cost and quality. New GA capabilities include Document Intelligence and Custom Agents, plus integration with OpenAI and open-source models, serverless deployment through Databricks Apps, and distribution via Databricks One. The platform emphasizes multi-AI routing, persistent memory, fine-grained permissions, and built-in benchmarks to move agents from prototype to trusted production at scale.
What happened
Databricks announced general availability of Agent Bricks, its enterprise agent platform that unifies data, models, execution, and governance so agents can run on real business context with correct identity and permissions. The release makes Document Intelligence and Custom Agents broadly available and emphasizes automatic evaluation, synthetic data generation, and multi-AI model access to optimize cost and quality.
Technical details
Agent Bricks integrates tightly with the Lakehouse as the authoritative data source and leverages Unity Catalog to enforce access control and auditing. It supports access to models from OpenAI, Anthropic, Google, and open-source stacks through a single platform contract, with intelligent routing and automatic fallbacks. The platform automates task-aware benchmark creation using research from Mosaic AI Research, then generates synthetic training and evaluation data plus LLM judges to produce repeatable, domain-specific metrics. Developers can build agents using familiar frameworks such as LangChain, LangGraph, and LlamaIndex, validate behavior in an AI Playground, and deploy serverless interactive UIs via Databricks Apps with built-in SSO. Agents can be given persistent memory stored in Lakehouse components like Lakebase and connected to external systems like SharePoint or Google Drive while preserving enterprise access controls.
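The multi-provider routing with automatic fallbacks described above can be sketched in plain Python. This is an illustrative assumption of how such a router might behave, cheapest-provider-first with fallback on failure; the provider names, costs, and `call()` signature are hypothetical and are not the Agent Bricks API.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Provider:
    name: str
    cost_per_1k_tokens: float      # used to prefer cheaper providers first
    call: Callable[[str], str]     # prompt -> completion

def route(prompt: str, providers: list[Provider]) -> str:
    """Try providers cheapest-first; fall back to the next on any failure."""
    for p in sorted(providers, key=lambda p: p.cost_per_1k_tokens):
        try:
            return p.call(prompt)
        except Exception:
            continue  # provider down or rate-limited; try the next one
    raise RuntimeError("all providers failed")

def flaky_call(prompt: str) -> str:
    # Stand-in for a provider that is currently unavailable.
    raise TimeoutError("provider unavailable")

providers = [
    Provider("cheap-but-flaky", 0.10, flaky_call),
    Provider("stable", 0.50, lambda p: f"answer to: {p}"),
]
print(route("summarize Q3 pipeline", providers))  # falls back to "stable"
```

A real router would also weigh quality scores from the evaluation pipeline, not just cost, but the control flow is the same: ordered preference plus graceful degradation.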
Platform feature set:
- Unified governance across data, models, and agents with ownership, permissions, and auditing
- Automated evaluation: synthetic data, LLM judges, and continuous benchmarking
- Multi-AI model routing and vendor-agnostic switching to optimize cost and availability
- Integration with developer workflows and no-code managed builders for faster delivery
- Serverless deployment and distribution through Databricks Apps and Databricks One
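The automated-evaluation loop in the list above — synthesize task examples, score agent outputs with an LLM judge, aggregate a benchmark metric — can be sketched as follows. The judge here is a trivial keyword check standing in for a real LLM call, and every function name is illustrative, not a Databricks API.

```python
def synthesize_examples(task: str, n: int) -> list[dict]:
    """Stand-in for synthetic data generation from a task description."""
    return [{"question": f"{task} #{i}", "expected": f"fact-{i}"} for i in range(n)]

def llm_judge(answer: str, expected: str) -> float:
    """Stand-in for an LLM judge; returns a score in [0, 1]."""
    return 1.0 if expected in answer else 0.0

def benchmark(agent, examples) -> float:
    """Average judge score across the synthetic evaluation set."""
    scores = [llm_judge(agent(ex["question"]), ex["expected"]) for ex in examples]
    return sum(scores) / len(scores)

def toy_agent(question: str) -> str:
    # A toy agent that answers only the even-numbered questions correctly.
    i = int(question.rsplit("#", 1)[1])
    return f"fact-{i}" if i % 2 == 0 else "unsure"

examples = synthesize_examples("summarize contract clause", 10)
print(f"benchmark score: {benchmark(toy_agent, examples):.2f}")  # 0.50
```

The point of this structure is repeatability: the same synthetic set and judge can re-score an agent after every prompt, model, or routing change, which is what turns cost-quality tuning into a measurable loop rather than trial and error.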
Context and significance
Agent Bricks addresses the practical bottlenecks that stop agent prototypes from reaching production: evaluation, cost management, and governance. Enterprise agents derive value from grounding in proprietary data and institutional semantics; Databricks shifts focus from building agent loops to operating them with correct semantics, identity, and permissions. The combination of lakehouse-native context, automated domain-specific evaluation, and model-agnostic access reduces vendor lock-in and the manual trial-and-error that inflates costs. The product also builds on high-profile ecosystem alignment: executives framed this as an enterprise-era push for models plus data. Sam Altman remarked that enterprise adoption is expanding rapidly, and Ali Ghodsi emphasized customer demand for secure, auditable access to advanced models on private data.
Why practitioners should care
The platform standardizes several hard operational problems: keeping agents within permission boundaries, continuous evaluation against domain-specific metrics, and cost-quality trade-off tuning across model providers. For ML engineers and platform teams, Agent Bricks offers a path to ship agents faster with guardrails that satisfy security, privacy, and compliance teams. For data engineers, the tight Lakehouse integration reduces integration work and preserves governance. For model ops, built-in routing and benchmarking simplify switching or mixing models in production.
What to watch
Adoption hinges on real-world benchmarks from early enterprise deployments and on how closely the automated synthetic-evaluation pipeline tracks human judgment on nuanced tasks. Watch pricing and contractual terms for model access, the granularity of Unity Catalog enforcement in practice, and the platform's flexibility for hosting open-source models. The depth of Databricks' continued partnerships with major model providers will determine whether Agent Bricks becomes the standard enterprise agent fabric or just one vendor-managed option among several.
Scoring Rationale
Databricks is shipping a cohesive platform that addresses core operational blockers for enterprise agent production: grounding in private data, governance, and repeatable evaluation. This is significant for ML engineering and platform teams, but it is an incremental architecture and product expansion rather than a frontier model breakthrough.