CirrusLabs
IT Consulting, Digital Transformation, Cloud Services, Software Development

Data Engineer

Atlanta, Georgia, United States · Hybrid · Full Time · Posted 2 months ago · Visa sponsorship available


Job Role: Lead Databricks Data Engineer

Client: Synovus

Location: Hybrid, Atlanta/Columbus, GA (3 days onsite / 2 days remote)

Duration: Contract to Hire

Job Description:

The Lead Databricks Data Engineer designs, builds, and manages scalable data platforms, leading teams that deliver robust, cloud-based data pipelines (ETL/ELT) using Python, PySpark, SQL, Databricks (Lakehouse, Delta Lake, Unity Catalog), and cloud services (AWS/Azure/GCP). Key responsibilities include technical leadership, mentoring junior engineers, ensuring data quality and governance, optimizing performance (Spark tuning, Medallion Architecture), stakeholder communication, and driving DataOps practices to enable data-driven insights.

Key Responsibilities

- Technical Leadership:
Own architecture, set standards, guide technical direction, and lead complex projects.
- Pipeline Development:
Design, build, and optimize scalable data pipelines (ingestion, transformation, storage) on Databricks.
- Mentorship:
Coach and develop data engineers, conduct code reviews, and promote best practices.
- Architecture:
Implement Data Lakehouse (Bronze/Silver/Gold) and Medallion Architecture patterns.
- Performance Tuning:
Identify and resolve bottlenecks, optimize Spark configurations, and manage cluster optimization.
- Collaboration:
Work with architects, product managers, and business stakeholders to translate requirements into solutions.
- Data Governance:
Ensure data quality, integrity, security, and compliance.
- DataOps/CI/CD:
Implement modern engineering practices for automated, reliable data flows.

Required Skills & Experience

- Databricks Expertise:
Deep knowledge of the platform (Lakehouse, Delta Lake, Unity Catalog, SQL).
- Programming:
Python, PySpark, Scala, SQL.
- Cloud Platforms:
AWS, Azure, or GCP.
- Big Data Technologies:
Spark, Data Warehousing, ETL/ELT.
- Leadership:
Proven experience leading technical initiatives and teams.
- Data Modeling:
Strong understanding of data modeling concepts.

Desirable Skills

- Apache Airflow, Kafka, CI/CD tools (e.g., Jenkins), orchestration.
- Experience in specific domains (e.g., Healthcare).
- DataOps, MDM, data governance.

Certification preferred

- Databricks Certified Professional