Declarative state
Describe pipelines, jobs, and tables as desired outcomes instead of step-by-step scripts.
Databricks, declarative.
Describe the Databricks workspace you want, not the steps to get there. Mortar handles the synchronization so you can ship without the grind.
Everything you need to manage Databricks workspaces as Infrastructure as Code, with clear plan and apply steps.
Enforce access, tagging, and guardrails automatically so teams move fast without risk.
See every change before it lands with readable plans and clean, predictable outcomes.
Package workspace patterns as modules so teams can launch new projects in minutes.
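The "see every change before it lands" promise follows the familiar plan/apply model: diff the declared state against what actually exists, and print a readable plan before touching anything. Here is a toy sketch of that model in Python. It is purely illustrative; the function and resource names are invented for this example and are not Mortar's actual API.

```python
# Toy sketch of the plan/apply model: diff desired vs. actual state
# and emit one readable action per resource before anything changes.
# All names are illustrative, not Mortar's real internals.

def plan(desired: dict, actual: dict) -> list[str]:
    """Return a human-readable plan line for every resource."""
    actions = []
    for name, spec in desired.items():
        if name not in actual:
            actions.append(f"+ create {name}")        # declared but missing
        elif actual[name] != spec:
            actions.append(f"~ update {name}")        # exists but drifted
        else:
            actions.append(f"  no-op  {name}")        # already in sync
    for name in actual:
        if name not in desired:
            actions.append(f"- delete {name}")        # no longer declared
    return actions

desired = {
    "job:orders_daily_ingest": {"schedule": "02:00"},
    "table:sales_orders_raw": {"partition_by": ["order_date"]},
}
actual = {
    "job:orders_daily_ingest": {"schedule": "03:00"},  # drifted schedule
    "cluster:legacy": {},                              # orphaned resource
}

for line in plan(desired, actual):
    print(line)
```

Running the sketch prints an update for the drifted job, a create for the missing table, and a delete for the orphaned cluster, which is exactly the kind of predictable, reviewable output a plan step is for.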
Configuration-driven control for enterprise Databricks environments.
You declare
version: 1
pipeline:
  name: "orders_daily_ingest"
  mode: "batch"
  source:
    type: "file"
    path: "abfss://raw/sales/orders/{env}/"
  target:
    bronze_table: "sales_orders_raw"
    silver_table: "sales_orders_clean"
    partition_by:
      - "order_date"
  schedule:
    type: "daily"
    time: "02:00"
  compute:
    job_cluster:
      spark_version: "14.3.x-scala2.12"
      autoscale:
        min_workers: 2
        max_workers: 8
  schema_policy:
    mode: "evolve_additive"
    primary_key:
      - "order_id"
Mortar delivers
Autoscaling job cluster (2-8 workers) pinned to your Spark version.
Bronze sales_orders_raw and silver sales_orders_clean created with partitions on order_date.
Batch pipeline orders_daily_ingest wired with source, target, and partitioning from the spec.
Daily 02:00 UTC job with governance-friendly schema policy (evolve_additive).
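The declare-to-deliver step above is, at its core, an expansion from one spec into a set of concrete resources. The sketch below shows that expansion for the demo spec, using plain Python dicts. It is an assumption-laden illustration of the idea, not Mortar's real internals, and the `expand` function and output shape are invented for this example.

```python
# Illustrative sketch: expanding a declarative pipeline spec into the
# concrete resources it implies (tables, a scheduled job, a cluster).
# The spec mirrors the "You declare" example above; the expansion
# logic and output shape are invented for illustration.

spec = {
    "pipeline": {
        "name": "orders_daily_ingest",
        "target": {
            "bronze_table": "sales_orders_raw",
            "silver_table": "sales_orders_clean",
            "partition_by": ["order_date"],
        },
        "schedule": {"type": "daily", "time": "02:00"},
        "compute": {
            "job_cluster": {
                "spark_version": "14.3.x-scala2.12",
                "autoscale": {"min_workers": 2, "max_workers": 8},
            }
        },
    }
}

def expand(spec: dict) -> dict:
    """Derive the resource set a spec like this implies."""
    p = spec["pipeline"]
    cluster = p["compute"]["job_cluster"]
    return {
        # One table per medallion layer, partitioned as declared.
        "tables": [
            {"name": p["target"][layer], "partition_by": p["target"]["partition_by"]}
            for layer in ("bronze_table", "silver_table")
        ],
        # One scheduled job on an autoscaling job cluster.
        "job": {
            "name": p["name"],
            "schedule": f"{p['schedule']['type']} @ {p['schedule']['time']}",
            "spark_version": cluster["spark_version"],
            "workers": (
                cluster["autoscale"]["min_workers"],
                cluster["autoscale"]["max_workers"],
            ),
        },
    }

print(expand(spec))
```

One spec in, one consistent set of tables, schedule, and compute out; that is the whole point of treating the workspace as declared state rather than a pile of setup scripts.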
Want clearer governance and lower Databricks costs? Start the conversation.