Senior/Lead Data Engineer
Location - Alpharetta
Required Citizenship / Work Permit / Visa Status
US Citizens or Green Card holders
Must-Haves
- 8+ years of hands-on data engineering experience in enterprise environments.
- Strong expertise in Azure services, especially Azure Databricks, Functions, and Azure Data Factory (preferred).
- Advanced proficiency in Apache Spark with Python (PySpark).
- Strong command of SQL, query optimization, and performance tuning.
- Deep understanding of ETL/ELT methodologies, data pipelines, and scheduling/orchestration.
- Hands-on experience with Delta Lake (ACID transactions, optimization, schema evolution).
- Strong experience in data modelling (normalized, dimensional, lakehouse modelling).
- Experience with both batch processing and real-time/streaming data (Kafka, Event Hub, or similar).
- Solid understanding of data architecture principles, distributed systems, and cloud-native design patterns.
- Ability to design end-to-end solutions, evaluate trade-offs, and recommend best-fit architectures.
- Strong analytical, problem-solving, and communication skills.
- Ability to collaborate with cross-functional teams and lead technical discussions.
Skills: Java, Python, Spark
Notice period
- 0 to 15 days only
A stable employment history is required
Additional Guidelines
Interview process: 2 technical rounds + 1 client round. Hybrid model with 3 days in office.
Preferred Skills
- Experience with CI/CD tools such as Azure DevOps and Git.
- Familiarity with IaC tools (Terraform, ARM).
- Exposure to data governance and cataloging tools (Azure Purview).
- Experience supporting machine learning or BI workloads on Databricks.