Databricks, declarative.

Declarative control for Databricks

Describe the Databricks workspace you want, not the steps to get there. Mortar handles the synchronization so you can ship without the grind.

Features

Everything you need to manage Databricks workspaces as Infrastructure as Code, with clear plan and apply steps.

⚡️

Declarative state

Describe pipelines, jobs, and tables as desired outcomes instead of step-by-step scripts.

🔒

Guardrails built in

Enforce access and tagging policies automatically so teams move fast without added risk.

📊

Plan then apply

See every change before it lands with readable plans and clean, predictable outcomes.

🧩

Reusable config

Package workspace patterns as modules so teams can launch new projects in minutes.
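As a sketch of what reuse could look like (Mortar's actual module syntax isn't shown here; the `module` block, the `modules/ingest_pipeline` path, and the input names below are hypothetical), a team might instantiate a shared ingestion pattern like this:

```yaml
version: 1
# Hypothetical module block: field names and the module path are
# illustrative, not confirmed Mortar syntax.
module:
  name: "payments_daily_ingest"
  source: "modules/ingest_pipeline"   # shared, version-controlled pattern
  inputs:
    raw_path: "abfss://raw/finance/payments/{env}/"
    partition_by:
      - "payment_date"
```

The idea is that the pattern's defaults (compute sizing, schema policy, guardrails) live in the module, and each new project only supplies what differs.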

How Mortar works

Configuration-driven control for enterprise Databricks environments.

You declare

version: 1
pipeline:
  name: "orders_daily_ingest"
  mode: "batch"
source:
  type: "file"
  path: "abfss://raw/sales/orders/{env}/"
target:
  bronze_table: "sales_orders_raw"
  silver_table: "sales_orders_clean"
  partition_by:
    - "order_date"
schedule:
  type: "daily"
  time: "02:00"
compute:
  job_cluster:
    spark_version: "14.3.x-scala2.12"
    autoscale:
      min_workers: 2
      max_workers: 8
schema_policy:
  mode: "evolve_additive"
  primary_key:
    - "order_id"

Mortar delivers

Mortar → Plan → Apply → Databricks

Job clusters

Autoscaling job cluster (2-8 workers) pinned to your Spark version.

Tables

Bronze sales_orders_raw and silver sales_orders_clean created with partitions on order_date.

Pipelines

Batch pipeline orders_daily_ingest wired with source, target, and partitioning from the spec.

Schedules enforced

Daily 02:00 UTC job with governance-friendly schema policy (evolve_additive).
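For orientation, the `compute` and `schedule` blocks of the spec correspond roughly to the shape of a Databricks Jobs API 2.1 job definition. The sketch below is illustrative only: the field names (`new_cluster`, `autoscale`, `quartz_cron_expression`) follow the public Jobs API, but the exact payload Mortar emits is not shown here, and a real job would also need fields such as `node_type_id`.

```yaml
# Illustrative Jobs API 2.1-style settings derived from the spec above;
# not the literal payload Mortar produces.
name: orders_daily_ingest
tasks:
  - task_key: ingest
    new_cluster:
      spark_version: "14.3.x-scala2.12"
      autoscale:
        min_workers: 2
        max_workers: 8
schedule:
  quartz_cron_expression: "0 0 2 * * ?"   # every day at 02:00
  timezone_id: "UTC"
```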

Get in Touch

Want clearer governance and lower Databricks costs? Start the conversation.