Data Engineer Palantir
Job Description:
· 3+ years of hands-on experience in data engineering, with a focus on ETL workflows, data pipelines, and cloud computing.
· Strong experience with AWS services for data processing and storage (e.g., S3, Glue, Athena, Lambda, Redshift).
· Proficiency in Python (including PySpark) and TypeScript/JavaScript.
· Deep understanding of microservices architecture and distributed systems.
· Familiarity with AI/ML tools and frameworks (e.g., TensorFlow, PyTorch) and their integration into data pipelines.
· Experience with cloud data platforms such as Snowflake.
· Strong problem-solving and performance optimization skills.
· Exposure to modern DevOps practices, including CI/CD pipelines and container orchestration tools like Docker and Kubernetes.
· Experience working in agile environments delivering complex data engineering solutions.
· Proven expertise or certification in Palantir Foundry is highly preferred.
· Prior experience in the insurance domain is highly desirable.