Glydways Verified
Transportation, Automotive, Robotics, Infrastructure, Cleantech

Perception Software Engineering Internship

United States · Remote · Internship · Junior / Entry-level · Posted 1 month ago · Visa sponsorship available


### Who you are
- Team members may work remotely and must be self-motivated
- We are looking for a highly motivated research intern with a strong background in machine learning to join our ML Perception team, applying that expertise to real-world multi-agent autonomous driving and infrastructure challenges to help Glydcars see
- Master's or PhD student graduating within 12 months of internship completion
- Strong programming skills (Modern C++ and/or Python)
- Familiarity with deep learning frameworks such as PyTorch
- Excellent communication and interpersonal skills
- Strong problem-solving and research skills
- Candidates with experience in early fusion models and/or semantic segmentation models are preferred
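For context on the "early fusion" preference above: early fusion typically means combining features from multiple sensors into one input before a shared network processes them, as opposed to fusing per-sensor predictions later. A minimal sketch in NumPy (the function name, shapes, and channel counts are illustrative, not Glydways' actual pipeline):

```python
import numpy as np

def early_fuse(camera_feat, lidar_feat):
    """Channel-wise concatenation of per-sensor feature maps.

    camera_feat: (C1, H, W) array, e.g. image features
    lidar_feat:  (C2, H, W) array, e.g. a rasterized LIDAR bird's-eye grid
    Returns a (C1 + C2, H, W) array that a shared backbone would consume.
    """
    assert camera_feat.shape[1:] == lidar_feat.shape[1:], "spatial dims must match"
    return np.concatenate([camera_feat, lidar_feat], axis=0)

camera = np.random.rand(3, 64, 64)  # hypothetical camera feature map
lidar = np.random.rand(2, 64, 64)   # hypothetical LIDAR range/intensity channels
fused = early_fuse(camera, lidar)
print(fused.shape)  # (5, 64, 64)
```

The key design point is that both modalities must be brought into a common spatial frame (here, matching H × W grids) before concatenation; in a real stack that alignment step is where most of the engineering lives.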

### What the job involves
- The Glydways perception team designs and implements a perception system that fuses data from state-of-the-art sensors (e.g., LIDAR, RADAR, high-definition cameras, ultra-wideband radios) and provides robust detection, classification, and tracking of any obstacle that could present a hazard to the vehicles (Glydcars) in the system
- The perception system for Glydways will reason about information from each Glydcar and from regularly spaced sensor pods monitoring the road network
- The perception team plays a vital role within the Autonomy Software engineering team. The team’s deliverables include:
  - Documenting a detailed design for a safe system that can be certified by public transportation authorities
  - Implementing that design on mature prototype vehicles in demonstrations to customers, potential customers, and investors
  - Expanding the design to include fail-operational capabilities so the overall system can safely give customers a more comfortable experience
  - Managing data collection, autonomy testing, and milestone demonstration events that showcase the system’s maturing capabilities, leading to a production system
- In this role, you will focus on efficient multimodal deep learning for safety-critical systems with built-in redundancy and robustness
- Collaborating with a multidisciplinary team, you will develop innovative models and algorithms to enhance our perception stack
- Implement state-of-the-art onboard multimodal, multitask ML perception models, and fine-tune them to improve performance
- Develop custom adaptations (such as layers and loss functions) to improve performance on our dataset
- Conduct research and development in areas including multiview sensor fusion for redundancy, robustness, and safety-critical operation, and scene understanding with a focus on anomaly detection
- Design and perform experiments, and share and present findings, potentially as a peer-reviewed publication
- Work with multimodal (camera, LIDAR, RADAR) datasets from a multi-agent autonomous system
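One common example of the "custom loss functions" mentioned above is the focal loss (Lin et al., 2017), which down-weights easy examples so training concentrates on hard, rare ones — relevant when hazardous obstacles are a small fraction of a dataset. A hedged sketch of the binary case in plain NumPy (parameter values are the paper's defaults, not Glydways-specific):

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss: cross-entropy scaled by (1 - p_t)^gamma,
    so confidently correct predictions contribute little to the loss.

    p: predicted probabilities in (0, 1); y: 0/1 ground-truth labels.
    """
    p = np.clip(p, 1e-7, 1 - 1e-7)                 # numerical safety
    p_t = np.where(y == 1, p, 1 - p)               # prob assigned to the true class
    alpha_t = np.where(y == 1, alpha, 1 - alpha)   # class-balancing weight
    return np.mean(-alpha_t * (1 - p_t) ** gamma * np.log(p_t))
```

With gamma = 0 and alpha = 0.5 this reduces to (half of) standard cross-entropy; raising gamma shrinks the gradient from well-classified examples.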
