Software

Senior Cloud Data Platform Engineer

San Francisco, California, United States · Onsite · Full Time · Senior · Posted 2 months ago · Visa sponsorship available


About Shoreline AI:

Shoreline AI is an industrial AI/IoT startup providing a cloud-native, subscription-based SaaS solution that optimizes asset performance and operational efficiency for the industrial powertrains driving critical operations in Oil & Gas, Manufacturing, and other industries. Shoreline’s product is already deployed at very large enterprise customers, who are seeing significant operational benefits by avoiding expensive unplanned downtime and improving the safety of their sites.

The Shoreline platform includes an industrial wireless sensor that installs in minutes and captures vibration data, along with additional environmental data, through its built-in sensors and external ports. The data is captured throughout the day and uploaded to the Shoreline cloud platform, which provides AI/ML-driven insights to customers along with powerful data visualization tools for human experts.

Shoreline AI has its headquarters in Campbell, California. For more information, please visit https://shorelineai.us/.

About the Role:

We are looking for a Senior Cloud Data Platform Engineer/Architect to play a critical role in building and maintaining the data infrastructure and platform that powers Shoreline AI’s products and services.

The role is based at our US headquarters in Campbell, in the San Jose/San Francisco Bay Area.

Responsibilities:

  • Design, implement and maintain a scalable and secure data lake to handle both structured and semi-structured data, implement flexible data governance, and provide secure access to Data Scientists and Software Developers.
  • Design and build the “Data API” on top of the data lake platform, giving developers easy programmatic access to the available data for processing, analytics, and visualization.
  • Create data pipelines to ingest, clean, and transform data from multiple sources.
  • Develop a strategy for easy creation and deployment of containerized applications.
  • Develop and maintain internal tools and frameworks for data ingestion using Python and SQL.
  • Monitor data pipelines and cloud infrastructure for availability, low latency, and data correctness.
  • Collaborate cross-functionally to define data models, contracts, schemas, access, and retention policies.
  • Embrace software development and deployment best practices, including continuous integration/continuous deployment (CI/CD), Infrastructure-as-Code (IaC), automated testing, etc.
  • Learn and adapt to new cloud technologies and development best practices.
  • Maintain a strong customer-first attitude, and ensure that all technical solutions focus on providing customer delight.
  • Participate in architecture, design and code reviews and maintain a high-standard of quality, testing, documentation, and compliance with security standards.
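To make the ingestion and transformation responsibilities above concrete, here is a minimal, self-contained Python sketch of an ingest → clean → transform step. All names (the `Reading` record, field names, the per-sensor aggregation) are illustrative assumptions, not Shoreline’s actual schema or pipeline:

```python
from dataclasses import dataclass
from typing import Iterable, Optional

# Hypothetical sensor record; field names are illustrative only.
@dataclass
class Reading:
    sensor_id: str
    vibration_mm_s: Optional[float]  # RMS vibration velocity, may be missing
    temperature_c: Optional[float]

def clean(readings: Iterable[Reading]) -> list[Reading]:
    """Validation step: drop records missing the vibration measurement."""
    return [r for r in readings if r.vibration_mm_s is not None]

def transform(readings: list[Reading]) -> dict[str, float]:
    """Aggregate mean vibration per sensor for downstream analytics."""
    grouped: dict[str, list[float]] = {}
    for r in readings:
        grouped.setdefault(r.sensor_id, []).append(r.vibration_mm_s)
    return {sid: sum(vals) / len(vals) for sid, vals in grouped.items()}

raw = [
    Reading("pump-01", 2.4, 41.0),
    Reading("pump-01", 2.8, 42.5),
    Reading("pump-02", None, 39.0),  # incomplete record, filtered out
]
print(transform(clean(raw)))
```

A production pipeline would of course layer in scheduling, fault tolerance, and cloud storage (e.g., landing raw data in object storage before transformation), but the ingest/clean/transform shape stays the same.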

Required Skills and Qualifications

  • Deep understanding of foundational cloud computing concepts, including the architecture and underlying technologies of the various services available for storage (e.g., object storage, SQL and NoSQL databases), computing (containers, virtual machines, FaaS), workflow orchestration, etc.
  • Demonstrated ability to work with AI-based coding tools (e.g., Claude Code, Gemini CLI) to accelerate learning, define architectures and project plans, implement code and automated tests, and improve software development workflows
  • Platform-builder mindset through experience defining and building APIs and tools to help software developers and data scientists be productive
  • 3+ years of experience architecting, designing, developing, and implementing cloud solutions on the AWS platform
  • Experience working with real-time or batch data ingestion at scale, and designing fault-tolerant ETL/ELT pipelines
  • Familiarity with event-driven architectures and messaging systems like Kafka or Kinesis
  • Hands-on experience with AWS services including but not limited to: S3, Lambda, API Gateway, Glue, Kinesis, Athena, and RDS
  • Excellent collaboration and communication skills and ability to work with remote teams

Desired Skills and Qualifications

We are excited if you have:

  • Demonstrated ability to orchestrate AI coding agents to accelerate software development
  • Previous experience building a data platform with technologies like Apache Iceberg, Trino, Apache Spark, Kafka, Parquet, etc.