Performing Arts Fort Worth

Senior Data Engineer

Milton, Georgia, United States · Remote · Full Time · Senior · Posted today

About The Role
The data engineer role is changing. The traditional pattern—build a pipeline, hand it off, wait for someone else to figure out if the data is right—doesn’t work anymore. The engineers who create the most value now are the ones who go deep into the business domain, understand the problem firsthand, and use AI tools to move from question to working solution in hours instead of sprints.
MDVIP’s Analytics team is looking for a Senior Data Engineer who operates as a hybrid technical product owner: someone who builds and maintains the data platform on Azure Databricks, but who also sits with business stakeholders, interrogates the problem, and owns the outcome end-to-end. You won’t wait for requirements to be handed to you. You’ll go find them, validate them, and ship the solution—using Claude Code and agentic development patterns to collapse the distance between understanding a business problem and solving it in production.
This is what it means to shift the engineer left into the business domain. You’re not a pipeline builder waiting for a ticket. You’re the person who understands how physician network growth, member engagement, and operational performance actually work—and who builds the data infrastructure that makes the entire Analytics team faster, sharper, and more impactful.
What You’ll Do
Own the Data Platform

  • Design, build, and operate MDVIP’s data platform on Azure Databricks—ingestion, transformation, storage, and serving layers that power analytics, AI models, and operational reporting.
  • Build and maintain data pipelines across MDVIP’s ecosystem: Salesforce, SQL Server, Snowflake, third-party sources, and the new cloud-native payments platform.
  • Engineer for quality and trust—validation checks, anomaly detection, lineage tracking, and documentation that ensure every downstream consumer can rely on the data.
  • Write clean, version-controlled, production-grade code. Think like a software engineer building a product, not a script runner maintaining jobs.
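The "quality and trust" expectation above can be pictured with a minimal sketch of a row-level validation gate. The field names and rules here are hypothetical, and in a Databricks pipeline this logic would typically run as a PySpark transformation or Delta Live Tables expectation rather than plain Python:

```python
# Illustrative sketch only: a simple validation gate that splits incoming
# records into trusted rows and quarantined rows with reasons attached.
# Field names ("member_id", "monthly_fee") and rules are hypothetical.

def validate_members(rows):
    """Return (valid_rows, quarantined_rows_with_reasons)."""
    valid, quarantined = [], []
    for row in rows:
        reasons = []
        if not row.get("member_id"):
            reasons.append("missing member_id")
        if row.get("monthly_fee", 0) < 0:
            reasons.append("negative monthly_fee")
        if reasons:
            quarantined.append({**row, "reasons": reasons})
        else:
            valid.append(row)
    return valid, quarantined

valid, bad = validate_members([
    {"member_id": "M1", "monthly_fee": 150},
    {"member_id": "", "monthly_fee": -5},
])
```

Downstream consumers then read only the validated set, while the quarantined rows feed anomaly dashboards and lineage documentation.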

Go Deep Into the Business Domain

  • Partner directly with business stakeholders across physician growth, member services, finance, and operations to understand how data drives decisions—then build for those decisions, not for abstract requirements.
  • Act as a technical product owner for your domain areas: own the backlog, prioritize based on business impact, and ship iteratively without waiting for a PM to sequence your work.
  • Translate ambiguous business questions into data models, feature tables, and curated datasets that analysts and data scientists can build on immediately.
  • Close the loop—follow your data through to the dashboard, the model, or the operational workflow and validate that it’s actually driving the outcome.

Drive AI-First Engineering Practices

  • Use Claude Code and agentic development as your primary workflow—AI-driven pipeline generation, automated testing, rapid prototyping—to ship at a pace that would be impossible with traditional approaches.
  • Build data infrastructure that is AI-ready: well-documented, semantically clear, and structured so that AI tools and agents can reason over it effectively.
  • Scout, evaluate, and adopt emerging AI tools and platforms that make the data team faster—separating real value from hype with hands-on testing.
  • Share what you learn. Document patterns, run demos, and help the broader team adopt AI-first workflows with confidence.

Who You Are

  • A data engineer who refuses to stay in the technical silo. You go find the business problem rather than waiting for it to arrive as a Jira ticket.
  • Someone who thinks like a product owner—you prioritize by impact, ship incrementally, and own the outcome, not just the pipeline.
  • The kind of engineer who’s already using AI tools to write, test, and deploy code before anyone asked you to—and who knows when the output needs a human eye.
  • Equally comfortable writing a Spark transformation, debugging a Salesforce data sync, presenting findings to leadership, and pushing back on a vague requirement until it’s sharp enough to build against.
  • Pragmatic over perfectionist. You optimize for business impact and speed to value, not theoretical elegance.
  • A strong collaborator who elevates the people around them through clear communication, reusable patterns, and generous knowledge sharing.

Required Qualifications

  • BS in Computer Science, Data Science, or related field; 6+ years in data engineering or a hybrid data engineering/analytics role.
  • Deep hands-on experience with Azure Databricks—notebooks, Delta Lake, Unity Catalog, and production-scale pipelines.
  • Strong Python and SQL; experience with PySpark and distributed data processing.
  • Experience building and operating data pipelines that serve analytics, ML models, and operational systems, not just batch ETL jobs.
  • Experience working directly with business stakeholders to define requirements, shape data products, and deliver measurable outcomes.
  • Active, daily use of AI coding tools (Claude Code, Copilot, or similar) as a force multiplier.
  • Strong communication skills with a track record of presenting technical work to non-technical audiences.