» constraint Stanza

Placement: job -> constraint
           job -> group -> constraint
           job -> group -> task -> constraint

The constraint stanza restricts the set of nodes eligible for placement. Constraints may filter on node attributes or client metadata. Additionally, constraints may be specified at the job, group, or task level for maximum flexibility.

job "docs" {
  # All tasks in this job must run on linux.
  constraint {
    attribute = "${attr.kernel.name}"
    value     = "linux"
  }

  group "example" {
    # All allocations of this group should be scheduled on different hosts.
    constraint {
      operator  = "distinct_hosts"
      value     = "true"
    }

    task "server" {
      # All tasks must run where "my_custom_value" is greater than 3.
      constraint {
        attribute = "${meta.my_custom_value}"
        operator  = ">"
        value     = "3"
      }
    }
  }
}

Placing constraints at both the job level and at the group level is redundant since constraints are applied hierarchically. The job constraints will affect all groups (and tasks) in the job.

» constraint Parameters

  • attribute (string: "") - Specifies the name or reference of the attribute to examine for the constraint. This can be any of the Nomad interpolated values.

  • operator (string: "=") - Specifies the comparison operator. When using the ordering operators (>, >=, <, <=), values are compared lexically, not numerically. Possible values include:

    =
    !=
    >
    >=
    <
    <=
    distinct_hosts
    distinct_property
    regexp
    set_contains
    version
    

    For a detailed explanation of these values and their behavior, please see the operator values section.

  • value (string: "") - Specifies the value to compare the attribute against using the specified operation. This can be a literal value, another attribute, or any Nomad interpolated values.
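Because ordering comparisons are lexical rather than numeric, multi-digit values can compare unexpectedly: the string "10" sorts before "3". The sketch below illustrates this with a hypothetical metadata key; a node whose meta.my_custom_value is "10" would not satisfy this constraint, even though 10 > 3 numerically.

```hcl
# Lexical comparison pitfall (hypothetical metadata key).
# Single-digit values such as "4" vs "3" compare as expected,
# but a node with meta.my_custom_value = "10" does NOT match,
# because "10" < "3" when compared as strings.
constraint {
  attribute = "${meta.my_custom_value}"
  operator  = ">"
  value     = "3"
}
```

When the attribute is a version number, the version operator described below avoids this problem entirely.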

» operator Values

This section details the specific values for the "operator" parameter in the Nomad job specification for constraints. The operator is always specified as a string, but the string can take on different values which change the behavior of the overall constraint evaluation.

constraint {
  operator = "..."
}

  • "distinct_hosts" - Instructs the scheduler to not co-locate any groups on the same machine. When specified as a job constraint, it applies to all groups in the job. When specified as a group constraint, the effect is constrained to that group. This constraint cannot be specified at the task level. Note that the attribute parameter should be omitted when using this constraint.

    constraint {
      operator  = "distinct_hosts"
      value     = "true"
    }
    

    The constraint may also be specified as follows for a more compact representation:

    constraint {
      distinct_hosts = true
    }
    
  • "distinct_property" - Instructs the scheduler to select nodes that have a distinct value of the specified property. The value parameter specifies how many allocations are allowed to share the value of a property. The value must be 1 or greater and, if omitted, defaults to 1. When specified as a job constraint, it applies to all groups in the job. When specified as a group constraint, the effect is constrained to that group. This constraint cannot be specified at the task level.

    constraint {
      operator  = "distinct_property"
      attribute = "${meta.rack}"
      value     = "3"
    }
    

    The constraint may also be specified as follows for a more compact representation:

    constraint {
      distinct_property = "${meta.rack}"
      value             = "3"
    }
    
  • "regexp" - Specifies a regular expression constraint against the attribute. The syntax of the regular expressions accepted is the same general syntax used by Perl, Python, and many other languages. More precisely, it is the syntax accepted by RE2, as described in the Google RE2 syntax documentation.

    constraint {
      attribute = "..."
      operator  = "regexp"
      value     = "[a-z0-9]"
    }
    
  • "set_contains" - Specifies a contains constraint against the attribute. The attribute and the list being checked are split using commas. This will check that the given attribute contains all of the specified elements.

    constraint {
      attribute = "..."
      operator  = "set_contains"
      value     = "a,b,c"
    }
    
  • "version" - Specifies a version constraint against the attribute. This supports a comma-separated list of constraints, including the pessimistic operator (~>). For more specific examples, please see the go-version repository.

    constraint {
      attribute = "..."
      operator  = "version"
      value     = ">= 0.1.0, < 0.2"
    }
    
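The pessimistic operator (~>) pins the leading version components while allowing later releases within them. A sketch, assuming the node fingerprints a Docker driver version attribute:

```hcl
# Match any 17.x version of the Docker driver (17.0, 17.05, ...),
# but not 18.0 or later. "~> 17.0" is equivalent to ">= 17.0, < 18.0".
# The attribute name assumes the Docker driver is fingerprinted on the node.
constraint {
  attribute = "${attr.driver.docker.version}"
  operator  = "version"
  value     = "~> 17.0"
}
```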

» constraint Examples

The following examples only show the constraint stanzas. Remember that the constraint stanza is only valid in the placements listed above.

» Kernel Data

This example restricts the task to running on nodes which have a kernel version higher than "3.19".

constraint {
  attribute = "${attr.kernel.version}"
  operator  = "version"
  value     = "> 3.19"
}

» Distinct Property

A potential use case of the distinct_property constraint is to spread a service with count > 1 across racks to minimize correlated failure. Nodes can be annotated with which rack they are on using client metadata with values such as "rack-12-1", "rack-12-2", etc. The following constraint ensures that no individual rack runs more than 2 instances of the task group.

constraint {
  distinct_property = "${meta.rack}"
  value = "2"
}

» Operating Systems

This example restricts the task to running on nodes that are running Ubuntu 14.04.

constraint {
  attribute = "${attr.os.name}"
  value     = "ubuntu"
}

constraint {
  attribute = "${attr.os.version}"
  value     = "14.04"
}

» Cloud Metadata

When possible, Nomad populates node attributes from the cloud environment. These values are accessible as filters in constraints. This example constrains the task to only run on AWS nodes of a specific instance type.

constraint {
  attribute = "${attr.platform.aws.instance-type}"
  value     = "m4.xlarge"
}
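To match an entire instance family rather than a single type, the same attribute can be combined with the regexp operator described above; a sketch:

```hcl
# Match any m4-family instance (m4.large, m4.xlarge, ...).
# The doubled backslash escapes the "." in the HCL string so the
# regular expression sees a literal dot.
constraint {
  attribute = "${attr.platform.aws.instance-type}"
  operator  = "regexp"
  value     = "^m4\\."
}
```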

» User-Specified Metadata

This example restricts the task to running on nodes where the binaries for redis, cypress, and nginx are all cached locally. This particular example utilizes node metadata.

constraint {
  attribute    = "${meta.cached_binaries}"
  set_contains = "redis,cypress,nginx"
}