Use case

Batch processing workloads

Run and manage short-lived batch jobs at scale.

Challenge

Efficient batch processing that can easily scale up or down

Running batch jobs on general-purpose schedulers like Kubernetes is often complex and limited in scheduling throughput and scalability. Common issues with the default Kubernetes autoscaler and job controller include duplicated batch jobs and pods failing and restarting mid-process. To mitigate the impact of these interruptions, many teams have had to build custom tooling or workarounds.

Solution

Use a batch scheduler optimized for high throughput at any scale

Nomad natively runs batch, system batch, and parameterized jobs. Its architecture scales easily, and its optimistically concurrent scheduling strategy can yield thousands of container deployments per second. With the Nomad Autoscaler, clients can be provisioned automatically when a batch job is enqueued and decommissioned once the work is complete. This saves time and money: no manual intervention is needed, and resources stay active only as long as there are jobs to run.
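For illustration, the sketch below registers a simple batch job through Nomad's official Go API client (github.com/hashicorp/nomad/api). The job name, Docker image, and task details are placeholders; a real job would typically also set resources, retry policies, and constraints.

```go
package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/nomad/api"
)

func main() {
	// Connect to the local Nomad agent (honors NOMAD_ADDR if set).
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatalf("creating Nomad client: %v", err)
	}

	// Define a short-lived batch job with a single Docker task.
	job := api.NewBatchJob("report-batch", "report-batch", "global", 50)

	task := api.NewTask("generate-report", "docker")
	task.SetConfig("image", "alpine:3.19")
	task.SetConfig("command", "/bin/sh")
	task.SetConfig("args", []string{"-c", "echo 'processing batch item'"})

	group := api.NewTaskGroup("reports", 1)
	group.AddTask(task)
	job.AddTaskGroup(group)

	// Register the job; the scheduler places the allocation and the
	// job completes once the task exits successfully.
	resp, _, err := client.Jobs().Register(job, nil)
	if err != nil {
		log.Fatalf("registering job: %v", err)
	}
	fmt.Println("evaluation ID:", resp.EvalID)
}
```

The same specification is more commonly written in HCL and submitted with `nomad job run`; the API client is useful when jobs are generated programmatically, for example one job per item pulled from a queue.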

Graymeta
Customer case study

Backend batch processing with Nomad

Learn how Graymeta moved from processing jobs out of a queue on multiple VMs to processing the same jobs from the same queue, but scheduling them as containerized jobs on Nomad.

Get started running batch jobs with Nomad

Nomad addresses the technical complexity of managing workloads in production at scale by providing a simple and flexible workload orchestrator across distributed infrastructure and clouds.
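As a concrete starting point, a parameterized batch job can be dispatched once per work item through Nomad's HTTP API. The sketch below assumes a parameterized job named transcode is already registered and that the agent listens on the default local address; the job name, metadata key, and input value are placeholders.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

func main() {
	// Each dispatch creates one child instance of the parameterized
	// job, carrying per-item metadata.
	body, err := json.Marshal(map[string]interface{}{
		"Meta": map[string]string{
			"input": "s3://example-bucket/video-001.mp4", // hypothetical work item
		},
	})
	if err != nil {
		log.Fatal(err)
	}

	// POST /v1/job/:job_id/dispatch on the default agent address;
	// "transcode" is a placeholder job name.
	resp, err := http.Post(
		"http://127.0.0.1:4646/v1/job/transcode/dispatch",
		"application/json",
		bytes.NewReader(body),
	)
	if err != nil {
		log.Fatalf("dispatching job: %v", err)
	}
	defer resp.Body.Close()

	var out struct {
		DispatchedJobID string
		EvalID          string
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		log.Fatal(err)
	}
	fmt.Println("dispatched:", out.DispatchedJobID)
}
```

The same dispatch can be done from the command line with `nomad job dispatch`.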