
March 26, 2026

Agentic AI & Data Governance: Highlights of the Databricks Meetup Montevideo

On March 26, over 50 data professionals gathered at Qubika Buceo for the latest Databricks Meetup Montevideo: exploring agentic AI, legacy migration acceleration, and modern data governance on the Databricks Lakehouse.

Agentic AI: How to Modernize Your Data Models with Databricks

Once again, Qubika Buceo became the home of the Databricks community in Montevideo, bringing together over 50 data engineers, architects, scientists, and business leaders for an evening packed with technical depth and practical insight. This edition tackled one of the most pressing challenges in modern data teams: how to accelerate the modernization of data governance models and unlock the full potential of agentic AI — all natively within Databricks.

The night featured two talks that complemented each other perfectly: a deep dive into a governance migration accelerator built by the Qubika team, and a live walkthrough of Databricks’ agentic capabilities with Genie and AI agents.

Talk 1: The Databricks Governance Accelerator: From Legacy to Lakehouse at Speed

Speakers: Marco Luquer, Solutions Engineer | Databricks Champion, and Facundo Sentena, Senior AI Engineer

The Hidden Problem: Migration Is Not Just a Technical Challenge

Before diving into solutions, the session opened by addressing a challenge that resonates across virtually every organization.

Lift & Shift vs Real Modernization

A simple “lift & shift” approach moves tables and pipelines as-is. It’s fast — but dangerous.

It often creates:

  • legacy architectures in the cloud
  • duplicated logic
  • broken governance models

On the other hand, true modernization implies:

  • rethinking how data is organized
  • aligning with business domains
  • leveraging native cloud capabilities

As highlighted during the session, choosing the wrong approach leads to a critical outcome:

A “legacy system in the cloud” that limits scalability and blocks AI adoption

What Gets Lost in Migration

One of the most insightful parts of the talk focused on what actually breaks during migrations:

  • Loss of context → metadata and descriptions disappear
  • Business disconnection → lack of clear domains and tagging
  • Governance gaps → permissions and access control degrade

Even worse, data systems are not isolated; they are deeply interconnected.

The session described this as:

The “Labyrinth of Dependencies”

  • hidden dependencies between tables, views, and pipelines
  • fragmented role and permission models
  • no standard way to translate access logic across platforms

Why This Matters: The Real Business Risk

This is not just a technical inconvenience — it has real consequences:

  • Manual processes become error-prone
  • Compliance risks increase
  • AI initiatives fail before they start

As stated during the session:

Without a well-governed data foundation, building reliable AI systems is simply not possible

The Solution: An Agentic Approach to Governance Migration

To address this challenge, the Qubika team presented a Governance Migration Accelerator, an agentic system designed to automate one of the hardest parts of migration: governance.

What it does

  • extracts metadata from legacy systems (Snowflake, Redshift, etc.)
  • translates governance models into Unity Catalog
  • validates consistency before deployment
  • generates auditable outputs

The Architecture (simplified)

The system is structured in three layers:

  1. Extraction Layer
    • connectors to legacy systems
    • metadata normalization
  2. Translation & Validation Layer
    • specialized AI agents per artifact (tables, views, roles, etc.)
    • real-time syntactic validation
    • dependency-aware transformations
  3. Reporting & Output Layer
    • migration reports
    • Unity Catalog-ready assets
    • audit logs and metrics
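The three layers above can be sketched as a minimal pipeline, with placeholder functions standing in for the real connectors and AI agents (all names here are illustrative, not the accelerator's implementation):

```python
def extract(source_tables):
    """Extraction Layer: pull and normalize metadata from a legacy system."""
    return [{"type": "table", "name": name} for name in source_tables]

def translate_and_validate(artifacts):
    """Translation & Validation Layer: map artifacts to Unity Catalog
    targets and run a syntactic check before anything is deployed."""
    translated = []
    for artifact in artifacts:
        artifact = {**artifact, "target": f"catalog.schema.{artifact['name']}"}
        # Minimal syntactic validation: three-level namespace expected.
        assert artifact["target"].count(".") == 2
        translated.append(artifact)
    return translated

def report(artifacts):
    """Reporting & Output Layer: summarize what was migrated."""
    return {"migrated": len(artifacts),
            "assets": [a["target"] for a in artifacts]}

summary = report(translate_and_validate(extract(["orders", "customers"])))
```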

Why Agentic Architecture Changes the Game

Instead of a single general-purpose model, the accelerator uses:

Specialized agents per artifact

Each object type (tables, views, permissions) is handled independently → higher precision.
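One way to picture per-artifact specialization is a dispatch registry where each object type gets its own handler. This is a hypothetical sketch of the pattern, not the accelerator's code:

```python
AGENTS = {}

def agent(artifact_type):
    """Register a specialized agent for one artifact type."""
    def register(fn):
        AGENTS[artifact_type] = fn
        return fn
    return register

@agent("table")
def migrate_table(obj):
    # A table-specific agent only ever reasons about tables.
    return f"CREATE TABLE {obj['name']} (...)"

@agent("view")
def migrate_view(obj):
    # A view-specific agent can focus on definition rewriting.
    return f"CREATE VIEW {obj['name']} AS {obj['definition']}"

def dispatch(obj):
    handler = AGENTS.get(obj["type"])
    if handler is None:
        raise KeyError(f"No agent registered for {obj['type']!r}")
    return handler(obj)
```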

LLM-as-Judge evaluation

Quality is not assumed — it is measured:

  • offline evaluations
  • production monitoring with sampling
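A minimal sketch of that pattern, where `judge_fn` stands in for a call to a judge LLM: offline evaluation scores every output, while production monitoring judges only a sampled fraction to keep cost bounded:

```python
import random

def evaluate_offline(outputs, judge_fn):
    """Score every output in an offline evaluation set."""
    return sum(judge_fn(o) for o in outputs) / len(outputs)

def monitor_production(output, judge_fn, sample_rate=0.1, rng=random):
    """Judge only a sampled fraction of production outputs.

    Returns the judge score when sampled, or None when skipped.
    """
    if rng.random() < sample_rate:
        return judge_fn(output)
    return None
```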

Agent memory (Lakebase)

One of the most critical components:

  • stores previous migrations
  • keeps context across runs
  • enables continuous improvement

Stateless pipelines don’t scale. Agent memory is what makes automation reliable over time.
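To illustrate why memory matters, here is a toy persistent store using SQLite as a stand-in for Lakebase: decisions recorded in one run can be recalled in the next, so the system never re-solves a problem it has already solved. All names are hypothetical:

```python
import json
import sqlite3

class AgentMemory:
    """Toy persistent memory for migration decisions across runs."""

    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS migrations "
            "(artifact TEXT PRIMARY KEY, decision TEXT)"
        )

    def remember(self, artifact, decision):
        # Store (or update) the decision made for one artifact.
        self.db.execute(
            "INSERT OR REPLACE INTO migrations VALUES (?, ?)",
            (artifact, json.dumps(decision)),
        )
        self.db.commit()

    def recall(self, artifact):
        # Return the prior decision, or None if this artifact is new.
        row = self.db.execute(
            "SELECT decision FROM migrations WHERE artifact = ?", (artifact,)
        ).fetchone()
        return json.loads(row[0]) if row else None
```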

The Impact

According to the team, this approach enables:

  • Up to 90% faster governance migration
  • Reduced human error
  • Fully auditable processes

Talk 2: Agentic Databricks — Genie, AI Dev Kit, and the Future of the Data Platform

Speaker: Douglas Silva, Solutions Architect, Latam — Databricks

Joining virtually from Brazil, Douglas delivered an energetic and demo-heavy session showing how Databricks is positioning its platform not just as a data lakehouse, but as the foundation for agentic AI applications — both for technical users and business stakeholders.

Genie: An AI Assistant with Enterprise Context

Douglas introduced Genie, Databricks’ native AI assistant. The core differentiator Genie offers versus external AI tools like ChatGPT or Claude is enterprise context: while general-purpose models have no knowledge of your company’s data, Genie is connected to your Unity Catalog, your metrics layer, and your governance policies.

This means a business user can ask “What were our total revenues last quarter?” and get a governed, accurate, data-backed answer — not a hallucination.

Genie comes in two flavors:

  • Genie Code — aimed at technical users: it can write SQL, build notebooks, create pipelines, generate web applications, and even help with migrations. Douglas demonstrated it autonomously building a multi-panel dashboard from a simple natural language request, with zero manual coding.
  • Genie Chat / Genie Spaces — aimed at business users: a clean conversational interface that translates natural language into SQL under the hood, returns humanized responses, and can generate charts automatically. No data engineering skills required.

Live Demos That Impressed the Room

Douglas ran several live demonstrations throughout his session:

Automated dashboard generation — Starting from a single materialized view with 59 columns covering demographic, credit, and product data, Douglas asked Genie to suggest interesting analyses, then generate a full dashboard. Genie built it autonomously while Douglas literally kept his hands in the air.

Conversational ecommerce app — A pre-built application deployed natively within Databricks showcased the convergence of the analytical and transactional layers. The app served personalized product recommendations based on a user’s behavioral profile and purchase history — with each transaction instantly reflected back into the Lakehouse to feed future recommendations.

Root cause analysis with Genie — Perhaps the most impressive demo of the night: a business user querying which customers have the highest churn risk, then asking why a specific customer (Jessica) presents that risk. Genie autonomously built an investigation plan, validated the hypothesis against the data, compared the customer’s profile to average-risk peers, and returned a detailed written report — in minutes rather than days.

Integrating External AI Tools (Claude, Cursor, Windsurf)

Douglas addressed a question that came up repeatedly in the Q&A: how do external AI coding assistants like Claude Code, Cursor, or Windsurf integrate with Databricks securely?

The answer lies in the Databricks AI Dev Kit — an open-source toolkit that allows teams to register external AI assistants as MCP servers within their Databricks workspace. Developers authenticate using their personal access token, inherit their existing Unity Catalog permissions, and the external model never touches raw data directly — it interacts with code and scripts, not with underlying records.
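The access pattern described can be illustrated with a toy sketch (this is not the AI Dev Kit API): the assistant authenticates with a personal token, the developer's existing permissions gate the request, and what comes back is generated code rather than raw records:

```python
# Hypothetical permission table keyed by personal access token; in the
# real platform this would come from Unity Catalog, not a dict.
USER_PERMISSIONS = {
    "token-abc": {"main.sales.orders": {"SELECT"}},
}

def assistant_request(token, table, question):
    """Gate an external assistant's request by the user's own permissions.

    The assistant receives a generated script, never the underlying rows.
    """
    perms = USER_PERMISSIONS.get(token, {})
    if "SELECT" not in perms.get(table, set()):
        raise PermissionError(f"Token lacks SELECT on {table}")
    return f"-- generated for: {question}\nSELECT * FROM {table} LIMIT 100"
```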

For organizations that already have Cursor or Claude Code as standard developer tooling, this provides a governed path to bring those tools into the data platform without opening up data security risks.

Governance, Metrics, and the Single Source of Truth

One of the most practically valuable parts of Douglas’s session addressed a problem every data team knows well: four analysts, the same business question, four different answers.

Databricks addresses this through a centralized metrics layer within Unity Catalog — where definitions like “total active customers” or “annual revenue” are defined once, governed, and versioned. Genie queries this layer rather than letting users build ad hoc aggregations, ensuring consistent answers regardless of who’s asking or how they phrase the question.
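Conceptually, a metrics layer is a single governed registry that every query resolves through instead of each analyst writing their own aggregation. The sketch below uses hypothetical metric definitions to show the idea:

```python
# Each business metric is defined exactly once; the SQL bodies here are
# illustrative placeholders, not real governed definitions.
METRICS = {
    "total_active_customers": (
        "SELECT COUNT(DISTINCT customer_id) FROM main.crm.customers "
        "WHERE status = 'active'"
    ),
    "annual_revenue": (
        "SELECT SUM(amount) FROM main.finance.invoices WHERE year = 2025"
    ),
}

def resolve_metric(name):
    """Return the single governed definition for a metric name."""
    try:
        return METRICS[name]
    except KeyError:
        raise KeyError(
            f"Metric {name!r} is not governed; define it centrally first"
        )
```

Because every consumer (human or Genie) resolves through the same registry, four analysts asking the same question get the same answer by construction.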

Disaster Recovery and Migration Concerns

The session closed with a frank discussion about migration risk — a topic that resonated strongly with attendees. Douglas outlined two recommended architectures for resilience:

  • Multi-region disaster recovery — dual Databricks workspaces across cloud regions, with pipelines replicated in both, and a unified catalog layer on top.
  • Serverless-first adoption — leaning on Databricks-managed serverless compute, which automatically routes to available infrastructure within a region in case of AZ-level failures, reducing operational overhead while maintaining availability.
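At its simplest, the multi-region pattern reduces to health-checked routing between two replicated workspaces; here is a toy sketch with hypothetical region names:

```python
# Hypothetical workspace regions; pipelines are assumed replicated in both.
REGIONS = ["us-east-1", "us-west-2"]

def pick_workspace(healthy):
    """Route to the first healthy region; fail over to the next otherwise.

    `healthy` maps region name -> bool from some external health check.
    """
    for region in REGIONS:
        if healthy.get(region):
            return region
    raise RuntimeError("No healthy region available")
```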

For teams at companies that have already started their Databricks journey but are moving cautiously, Douglas’s recommendation was direct: disaster recovery architecture pays for itself in confidence, and serverless-first reduces the blast radius of most failure scenarios significantly.

Join the Next Meetup

The Databricks Meetup Montevideo continues to grow as the go-to gathering for data professionals in Uruguay. Qubika, a Databricks Gold Partner, hosts the series at its Buceo offices and curates sessions that bring the latest platform capabilities together with real-world implementation experience.

Whether you’re evaluating a migration, scaling an existing Lakehouse, or exploring how agentic AI fits into your data platform strategy, the next edition is the place to be.

Interested in accelerating your own Databricks governance migration or exploring agentic AI on your data platform?

From governance accelerators to agentic AI applications, Qubika's Databricks practice turns complex data challenges into production-ready solutions.


By Facundo Sentena and Marco Luquer

Senior AI Engineer at Qubika and Solutions Engineer & Senior Data Scientist at Qubika

Facundo Sentena is a Senior AI Engineer at Qubika focused on Generative AI, LLM orchestration, and production-grade RAG and GraphRAG systems. He designs and delivers end-to-end AI solutions, partnering with product and engineering teams to build scalable, observable AI services that integrate LLM reasoning with enterprise data across cloud environments.

Marco Luquer is a Solutions Engineer & Senior Data Scientist at Qubika focused on Generative AI, LLMs, and production-grade data pipelines on the Databricks Lakehouse. He partners with product and data teams to scope use cases, design scalable architectures, and deliver measurable outcomes.
