Nura Studios
Digital Marketing, Web Design, Branding, Software Development

Senior AI Infrastructure Engineer (MLOps / Model Serving)

Montreal, Quebec, Canada · Remote · Full Time · Senior · Posted 2 months ago


Location: Montreal or Remote

Nura Studios is building a new generation of AI-powered creative tools for storytelling.

We are developing a platform that enables creators to produce cinematic-quality episodic content — from concept and script all the way to animation and final edit — within a single integrated environment. Our platform combines Generative AI, real-time graphics, and scalable cloud infrastructure to remove the traditional barriers to high-end content production. At the center of this platform is Showcraft, Nura’s creation environment that guides creators through the entire storytelling process — from idea to final frame.

Our founding team has decades of experience building creative technologies used by millions of creators at companies such as Unity, Adobe, Roblox, and Disney.

We are a small, highly technical team of engineers, artists, and researchers building the future of AI-driven storytelling.

About the Role

We are looking for a Senior AI Infrastructure Engineer to help build and operate the platform that powers large-scale Generative AI workloads.
This role focuses on:

  • MLOps and model serving
  • Distributed GPU infrastructure
  • Large-scale AI inference systems

You will work on systems that deploy and scale AI models used for:

  • Image generation and editing
  • Video synthesis
  • Embeddings
  • 3D world synthesis and understanding
  • Multimodal creative AI workflows

You will help design and operate our state-of-the-art inference platform, running large-scale workloads across multiple GPU cloud providers while ensuring performance, reliability, observability, and cost efficiency.

You don’t need to match every requirement below. We value curiosity, pragmatism, and the ability to ship production systems.

What You’ll Work On

*AI Infrastructure & Model Serving*

- Build systems for serving large-scale AI models in production
- Develop high-performance inference pipelines across GPU providers
- Implement infrastructure for reliable deployment and scaling of AI workloads
- Build systems that dispatch workloads to GPU clusters and manage capacity
- Optimize performance using profiling, batching, caching, and scheduling

*MLOps & Reliability*

- Build tooling for deploying and versioning machine learning models
- Design CI/CD pipelines for ML systems
- Improve observability for AI workloads (metrics, tracing, monitoring)

You will help track and improve:

  • Latency
  • Reliability
  • GPU utilization
  • Infrastructure cost
  • Model quality signals

*Platform Engineering*

- Build backend services and APIs powering the AI platform
- Collaborate with researchers and engineers to bring new models into production
- Design systems enabling rapid iteration with production-grade reliability
- Contribute to architecture decisions around distributed inference and GPU orchestration

Qualifications

You may have some of the following:

- 8+ years building production infrastructure, backend systems, or DevOps platforms, or
- 2–3 years of hands-on MLOps / AI infrastructure experience

Additional experience:

- Strong Python backend engineering experience
- Experience with model serving or AI infrastructure
- Experience running systems on AWS or other cloud platforms
- Experience with distributed systems
- Experience building high-performance APIs
- Familiarity with LLMs and AI-assisted development workflows

Nice to have:

  • Golang
  • GPU infrastructure
  • Ray or distributed compute frameworks
  • Docker / containerized workloads
  • OpenTelemetry or observability platforms
  • Autoscaling infrastructure

Why Join Nura Studios

At Nura Studios you will:

- Work on cutting-edge Generative AI technology for filmmaking
- Build infrastructure for large-scale AI systems
- Help shape the future of AI-powered creative tools
- Join a small, highly experienced engineering team
- Solve challenging problems in GPU infrastructure, AI deployment, and distributed systems

We offer:

- Flexible vacation
- Remote-first culture
- Compensation package including salary and equity
- Comprehensive health benefits
- Option to work from our Old Montreal office

And most importantly: the opportunity to build technology that empowers the next generation of storytellers.
