Add-HpcTask
Syntax
Parameter Set: job
Add-HpcTask -Job <HpcJob> [-CommandLine <String> ] [-Depend <String[]> ] [-End <Int32> ] [-Env <String[]> ] [-Exclusive <Boolean> ] [-Increment <Int32> ] [-Name <String> ] [-NumCores <String> ] [-NumNodes <String> ] [-NumSockets <String> ] [-Parametric] [-RequiredNodes <String[]> ] [-Rerunnable <Boolean> ] [-RunTime <String> ] [-Scheduler <String> ] [-Start <Int32> ] [-Stderr <String> ] [-Stdin <String> ] [-Stdout <String> ] [-TaskFile <String> ] [-Type {<Basic> | <NodePrep> | <NodeRelease> | <ParametricSweep> | <Service>} ] [-WorkDir <String> ] [ <CommonParameters>]

Parameter Set: id
Add-HpcTask -JobId <Int32> [-CommandLine <String> ] [-Depend <String[]> ] [-End <Int32> ] [-Env <String[]> ] [-Exclusive <Boolean> ] [-Increment <Int32> ] [-Name <String> ] [-NumCores <String> ] [-NumNodes <String> ] [-NumSockets <String> ] [-Parametric] [-RequiredNodes <String[]> ] [-Rerunnable <Boolean> ] [-RunTime <String> ] [-Scheduler <String> ] [-Start <Int32> ] [-Stderr <String> ] [-Stdin <String> ] [-Stdout <String> ] [-TaskFile <String> ] [-Type {<Basic> | <NodePrep> | <NodeRelease> | <ParametricSweep> | <Service>} ] [-WorkDir <String> ] [ <CommonParameters>]
Detailed Description
Creates a new task and adds it to the specified job on an HPC cluster.
You can use the Add-HpcTask cmdlet on jobs that you have not yet submitted, jobs that you submitted that are currently waiting in the queue, jobs that are already running, jobs that have failed, or jobs that have been canceled. You cannot add tasks to a job that has finished. When the resources that the job scheduler allocated to the job are available, the task begins to run.
Parameters
-CommandLine<String>
Specifies the command line for the task, including the command or application name and any necessary arguments. You must specify the CommandLine parameter or specify a task XML file for the TaskFile parameter that includes a value for the CommandLine attribute.
In tasks that include subtasks, you can use the asterisk (*) character as a placeholder for the parametric sweep index in Parametric Sweep tasks, or for the subtask identifier in Service, Node Preparation, and Node Release tasks. Use several consecutive asterisks to set the minimum number of digits to use when the index or subtask number is substituted; values that need more digits are not truncated. This placeholder can be useful when defining the command or the input and output files for the task.
The HPC Job Scheduler Service interprets commands before it sends them to the compute nodes. To include a literal asterisk in a command, escape it with a caret (^).
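As a minimal sketch (the job, the share path, and the processor.exe application are placeholders, not part of this reference), the following adds a parametric task whose command and output file both use the asterisk placeholder padded to three digits, and a second task that escapes an asterisk with a caret so the command receives it literally:

```powershell
# Placeholder job for this sketch.
$job = New-HpcJob -Name "Placeholder Sweep"

# Each sweep step substitutes its index for * in the command and the output file.
# Three asterisks (***) pad the index to at least three digits: 001, 002, ...
$job | Add-HpcTask -Parametric -Start 1 -End 5 -CommandLine "processor.exe input***.dat" -Stdout "\\headnode\share\step***.log"

# The caret (^) escapes the asterisk, so the command receives a literal * character.
$job | Add-HpcTask -Name "Literal asterisk" -CommandLine "echo 2 ^* 3"
```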
Aliases | none
---|---
Required? | false
Position? | named
Default Value | no default
Accept Pipeline Input? | false
Accept Wildcard Characters? | false
-Depend<String[]>
Specifies a list of names for the tasks in the specified job on which the new task depends. The new task does not start until all the tasks in the list finish running.
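As a brief sketch (the job, the task names, and the prepA.exe, prepB.exe, and merge.exe commands are illustrative), the following adds a task that starts only after two earlier tasks in the same job finish:

```powershell
$job = New-HpcJob -Name "Dependency Example"
$job | Add-HpcTask -Name "Prep A" -CommandLine "prepA.exe"
$job | Add-HpcTask -Name "Prep B" -CommandLine "prepB.exe"

# "Merge" waits for both "Prep A" and "Prep B" to finish before it starts.
$job | Add-HpcTask -Name "Merge" -CommandLine "merge.exe" -Depend "Prep A","Prep B"
```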
Aliases | none
---|---
Required? | false
Position? | named
Default Value | no default
Accept Pipeline Input? | false
Accept Wildcard Characters? | false
-End<Int32>
Specifies the ending index for a parametric task. The ending index must be larger than the starting index. A parametric task runs the command multiple times, substituting the current index value for any asterisks (*) in the command line. The current index starts at the index that the Start parameter specifies, and increases by the value that the Increment parameter specifies each subsequent time the command runs. When the current index exceeds the ending index, the task stops running the command.
Aliases | none
---|---
Required? | false
Position? | named
Default Value | 100
Accept Pipeline Input? | false
Accept Wildcard Characters? | false
-Env<String[]>
Specifies a list of environment variables to set in the run-time environment of the task and the values to assign to those environment variables. The list should have a format of variable_name1=value1[,variable_name2=value2[,...]]. To unset an environment variable, do not specify a value. For example, "variable_to_unset_name=".
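For example, in the following sketch (the variable names and the worker.exe command are illustrative), a task gets two environment variables and has a third unset:

```powershell
$job = New-HpcJob -Name "Environment Example"

# THREADS and MODE are set for the task; TEMP_OVERRIDE is unset because no value follows the equal sign.
$job | Add-HpcTask -CommandLine "worker.exe" -Env "THREADS=4","MODE=batch","TEMP_OVERRIDE="
```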
Aliases | none
---|---
Required? | false
Position? | named
Default Value | no default
Accept Pipeline Input? | false
Accept Wildcard Characters? | false
-Exclusive<Boolean>
Specifies whether the job scheduler should ensure that no other task runs on the same node as this task while this task runs.
A non-zero value or $true indicates that the job scheduler should ensure that no other task runs on the same node as this task while this task runs. If you specify a non-zero value or $true value for the Exclusive parameter for the task, you must also specify a non-zero value or $true value for the Exclusive parameter for the job to which you are adding the task, or the task fails on submission.
A value of 0 or $false indicates that this task can share compute nodes with other tasks.
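As a sketch of the required pairing (the job name and the benchmark.exe command are illustrative, and this assumes the job is created with its own Exclusive setting through New-HpcJob), both the job and the task are marked exclusive:

```powershell
# The job must also be exclusive, or the exclusive task fails on submission.
$job = New-HpcJob -Name "Exclusive Example" -Exclusive $true
$job | Add-HpcTask -Name "Isolated Task" -CommandLine "benchmark.exe" -Exclusive $true
```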
Aliases | none
---|---
Required? | false
Position? | named
Default Value | $false
Accept Pipeline Input? | false
Accept Wildcard Characters? | false
-Increment<Int32>
Specifies the value to use when incrementing the index for a parametric task. This value must be a positive integer. A parametric task runs the command multiple times, substituting the current index value for any asterisks (*) in the command line. The current index starts at the index that the Start parameter specifies, and it increases by the value that the Increment parameter specifies each subsequent time the command runs. When the current index exceeds the index that the End parameter specifies, the task stops running the command.
Aliases | none
---|---
Required? | false
Position? | named
Default Value | 1
Accept Pipeline Input? | false
Accept Wildcard Characters? | false
-Job<HpcJob>
Specifies an HpcJob object that corresponds to the job to which you want to add the new task. Use the Get-HpcJob cmdlet to get an HpcJob object for a job. You cannot use the Job parameter together with the JobId parameter.
Aliases | none
---|---
Required? | true
Position? | named
Default Value | no default
Accept Pipeline Input? | true (ByValue)
Accept Wildcard Characters? | false
-JobId<Int32>
Specifies the job identifier of the job to which you want to add the new task. Use the Get-HpcJob cmdlet to get a list of jobs and their identifiers. You cannot use the JobId parameter together with the Job parameter.
Aliases | none
---|---
Required? | true
Position? | named
Default Value | no default
Accept Pipeline Input? | false
Accept Wildcard Characters? | false
-Name<String>
Specifies a name to use for this task in command output and in the user interface. The maximum length for the name is 80 characters.
Aliases | none
---|---
Required? | false
Position? | named
Default Value | no default
Accept Pipeline Input? | false
Accept Wildcard Characters? | false
-NumCores<String>
Specifies the overall number of cores that the task requires across the HPC cluster, in the format [minimum-]maximum. The task runs on at least the minimum number of cores and on no more than the maximum. If you specify only one value, this cmdlet sets the minimum and maximum number of cores to that value. If you specify a minimum value that exceeds the total number of cores available across the cluster, an error occurs when you submit the task or the job that contains the task.
The minimum and maximum values can be only positive integers.
You cannot specify the NumCores parameter if you also specify the NumNodes or NumSockets parameter. If you do not specify NumCores, NumNodes, or NumSockets, the task is allocated one core.
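For example, the following sketch (the application names are placeholders) requests a range of cores for one task and an exact count for another:

```powershell
$job = New-HpcJob -Name "Core Count Example"

# Runs on at least 4 and at most 8 cores.
$job | Add-HpcTask -CommandLine "flexible_app.exe" -NumCores "4-8"

# A single value sets both the minimum and the maximum to 16.
$job | Add-HpcTask -CommandLine "fixed_app.exe" -NumCores "16"
```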
Aliases | none
---|---
Required? | false
Position? | named
Default Value | 1-1 if the NumNodes or NumSockets parameter is not specified, not applicable otherwise
Accept Pipeline Input? | false
Accept Wildcard Characters? | false
-NumNodes<String>
Specifies the overall number of nodes that the task requires across the HPC cluster, in the format [minimum-]maximum. The task runs on at least the minimum number of nodes and on no more than the maximum. If you specify only one value, this cmdlet sets the minimum and maximum number of nodes to that value. If you specify a minimum value that exceeds the total number of nodes available across the cluster, an error occurs when you submit the task or the job that contains the task.
The minimum and maximum values can be only positive integers.
You cannot specify the NumNodes parameter if you also specify the NumCores or NumSockets parameter. If you do not specify NumCores, NumNodes, or NumSockets, the task is allocated one core.
Aliases | none
---|---
Required? | false
Position? | named
Default Value | not applicable
Accept Pipeline Input? | false
Accept Wildcard Characters? | false
-NumSockets<String>
Specifies the overall number of sockets that the task requires across the HPC cluster, in the format [minimum-]maximum. The task runs on at least the minimum number of sockets and on no more than the maximum. If you specify only one value, this cmdlet sets the minimum and maximum number of sockets to that value. If you specify a minimum value that exceeds the total number of sockets available across the cluster, an error occurs when you submit the task or the job that contains the task.
The minimum and maximum values can be only positive integers.
You cannot specify the NumSockets parameter if you also specify the NumCores or NumNodes parameter. If you do not specify NumCores, NumNodes, or NumSockets, the task is allocated one core.
Aliases | none
---|---
Required? | false
Position? | named
Default Value | not applicable
Accept Pipeline Input? | false
Accept Wildcard Characters? | false
-Parametric
Indicates that the new task is a parametric task. A parametric task runs the command multiple times, substituting the current index value for any asterisks (*) in the command line. If you specify this parameter, also specify values for the Start, End, and Increment parameters unless you want to use the default index values for the parametric task.
Aliases | none
---|---
Required? | false
Position? | named
Default Value | not applicable
Accept Pipeline Input? | false
Accept Wildcard Characters? | false
-RequiredNodes<String[]>
Specifies a list of nodes on which the task must run. The job scheduler allocates all of the nodes in this list to the task, and the task runs only on those nodes.
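For example, the following sketch (the node names and the diagnostics.exe command are illustrative) requires the task to run on two specific nodes:

```powershell
$job = New-HpcJob -Name "Required Nodes Example"

# The task does not start until both listed nodes are allocated to it.
$job | Add-HpcTask -CommandLine "diagnostics.exe" -RequiredNodes "COMPUTENODE01","COMPUTENODE02"
```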
Aliases | none
---|---
Required? | false
Position? | named
Default Value | no default
Accept Pipeline Input? | false
Accept Wildcard Characters? | false
-Rerunnable<Boolean>
Specifies whether the job scheduler attempts to rerun the task if the task runs and fails.
A non-zero value or $true indicates that the job scheduler should attempt to rerun the task if the task runs and fails.
A value of 0 or $false indicates that the job scheduler should not attempt to rerun the task if the task runs and fails, and it should move the task to the failed state immediately.
The cluster administrator can configure the number of times that the job scheduler tries to rerun a task before moving the task to the failed state.
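For example, the following sketch (the task name and the transform.exe command are illustrative) marks a task as safe to rerun, so the scheduler can retry it after a failure up to the administrator-configured limit:

```powershell
$job = New-HpcJob -Name "Rerun Example"
$job | Add-HpcTask -Name "Idempotent Step" -CommandLine "transform.exe" -Rerunnable $true
```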
Aliases | none
---|---
Required? | false
Position? | named
Default Value | $false
Accept Pipeline Input? | false
Accept Wildcard Characters? | false
-RunTime<String>
Specifies the maximum amount of time that the task should run. After the task runs for this amount of time, the job scheduler cancels the task.
You specify the amount of time in a format of [[days:]hours:]minutes. You can also specify "infinite" to indicate that the task can run for an unlimited amount of time.
If you specify only one part of the days:hours:minutes format, the cmdlet interprets the value as the number of minutes. For example, 12 indicates 12 minutes. If you specify two parts of the format, the cmdlet interprets the left part as hours and the right part as minutes. For example, 10:30 indicates 10 hours and 30 minutes.
You can use one or more digits for each part of the format. The maximum value for each part is 2,147,483,647.
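The following sketch (the commands are placeholders) shows the three forms of the value:

```powershell
$job = New-HpcJob -Name "Run Time Example"
$job | Add-HpcTask -CommandLine "quick_check.exe" -RunTime "12"      # 12 minutes
$job | Add-HpcTask -CommandLine "nightly_run.exe" -RunTime "10:30"   # 10 hours, 30 minutes
$job | Add-HpcTask -CommandLine "long_solver.exe" -RunTime "2:0:0"   # 2 days
```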
Aliases | none
---|---
Required? | false
Position? | named
Default Value | infinite
Accept Pipeline Input? | false
Accept Wildcard Characters? | false
-Scheduler<String>
Specifies the host name or IP address of the head node for the cluster that includes the job to which you want to add the task. The value must be a valid computer name or IP address. If you do not specify the Scheduler parameter, this cmdlet uses the scheduler on the head node that the CCP_SCHEDULER environment variable specifies. To set this environment variable, run the following cmdlet:
Set-Content Env:CCP_SCHEDULER <head_node_name>
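For example, the following sketch (the job identifier and head node name are placeholders) adds a task to a job on a specific cluster without relying on the CCP_SCHEDULER environment variable:

```powershell
Add-HpcTask -JobId 42 -CommandLine "hostname.exe" -Scheduler "MYHEADNODE"
```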
Aliases | none
---|---
Required? | false
Position? | named
Default Value | %CCP_SCHEDULER%
Accept Pipeline Input? | false
Accept Wildcard Characters? | false
-Start<Int32>
Specifies the starting index for a parametric task. The starting index must be less than the ending index. A parametric task runs the command multiple times, substituting the current index value for any asterisks (*) in the command line. The current index starts at the starting index, and it increases by the value that the Increment parameter specifies each subsequent time the command runs. When the current index exceeds the ending index that the End parameter specifies, the task stops running the command.
Aliases | none
---|---
Required? | false
Position? | named
Default Value | 1
Accept Pipeline Input? | false
Accept Wildcard Characters? | false
-Stderr<String>
Specifies the name of the file to which the task redirects the standard error stream. Include the full path, or a path relative to the working directory, if the file is not in the working directory. If you specify a path that does not exist, the task fails.
If you do not specify the Stderr parameter, the task stores up to 4 kilobytes (KB) of output in the Output property for the task in the job scheduler database. Any output beyond 4 KB is lost.
The maximum length of the value for this parameter is 160 characters.
Aliases | none
---|---
Required? | false
Position? | named
Default Value | no default
Accept Pipeline Input? | false
Accept Wildcard Characters? | false
-Stdin<String>
Specifies the name of the file from which the task receives standard input. Include the full path, or a path relative to the working directory, if the file is not in the working directory. If you specify a file or path that does not exist, the task fails.
The maximum length of the value for this parameter is 160 characters.
Aliases | none
---|---
Required? | false
Position? | named
Default Value | no default
Accept Pipeline Input? | false
Accept Wildcard Characters? | false
-Stdout<String>
Specifies the name of the file to which the task redirects standard output. Include the full path, or a path relative to the working directory, if the file is not in the working directory. If you specify a path that does not exist, the task fails.
If you do not specify the Stdout parameter, the task stores up to 4 kilobytes (KB) of output in the Output property for the task in the job scheduler database. Any output beyond 4 KB is lost.
The maximum length of the value for this parameter is 160 characters.
Aliases | none
---|---
Required? | false
Position? | named
Default Value | no default
Accept Pipeline Input? | false
Accept Wildcard Characters? | false
-TaskFile<String>
Specifies the name of a task XML file from which to read settings for the task, including the full or relative path to the file if the file is not in the current directory. You can create a task XML file that contains the settings of an existing task by running the Export-HpcTask cmdlet, and then specify that file for this parameter to apply those settings to the new task. You must either specify a task XML file that includes a value for the CommandLine attribute for the TaskFile parameter, or specify the CommandLine parameter.
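As a sketch (the job identifiers and file path are placeholders, and the Get-HpcTask and Export-HpcTask calls are shown only as one assumed way to obtain and export a task object), the following exports the settings of an existing task and reuses them for a new task in another job:

```powershell
# Export the settings of an existing task to a task XML file.
$task = Get-HpcTask -JobId 17 | Select-Object -First 1
Export-HpcTask -Task $task -Path "C:\Templates\task_template.xml"

# Reuse those settings, including the command line, for a new task in another job.
Add-HpcTask -JobId 42 -TaskFile "C:\Templates\task_template.xml"
```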
Aliases | none
---|---
Required? | false
Position? | named
Default Value | no default
Accept Pipeline Input? | false
Accept Wildcard Characters? | false
-Type<TaskType>
Specifies a type for the task, which defines how to run the command for the task.
This parameter is supported only for Windows HPC Server 2008 R2.
The acceptable values for this parameter are:
Value | Description
---|---
Basic | Runs a single instance of a serial application or a Message Passing Interface (MPI) application. An MPI application typically runs concurrently on multiple cores, and it can span multiple nodes.
NodePrep | Runs a command or script on each compute node as it is allocated to the job. The Node Preparation task runs on a node before any other task in the job. If the Node Preparation task fails to run on a node, that node is not added to the job.
NodeRelease | Runs a command or script on each compute node as it is released from the job. Node Release tasks run when the job is canceled by the user or by graceful preemption. Node Release tasks do not run when the job is canceled by immediate preemption.
ParametricSweep | Runs a command a specified number of times as indicated by the start, end, and increment values, generally across indexed input and output files. The steps of the sweep may or may not run in parallel, depending on the resources that are available on the HPC cluster when the task is running.
Service | Runs a command or service on all resources that are assigned to the job. New instances of the command start when new resources are added to the job, or if a previously running instance exits and the resource on which it was running is still allocated to the job. A service task continues to start new instances until the task is canceled, the maximum run time expires, or the maximum number of instances is reached. A service task can create up to 1,000,000 subtasks. Tasks that you submit through a service-oriented architecture (SOA) client run as service tasks. You cannot add a basic task or a parametric sweep task to a job that contains a service task.
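For example, the following sketch (the job name, the simulate.exe command, and the cleanup command are illustrative) pairs a basic task with a Node Release task that cleans up each node as it leaves the job:

```powershell
$job = New-HpcJob -Name "Cleanup Example"
$job | Add-HpcTask -Name "Main Work" -CommandLine "simulate.exe"

# Runs on each node as it is released from the job (but not on immediate preemption).
$job | Add-HpcTask -Name "Cleanup" -Type NodeRelease -CommandLine "cmd /c del /q %TEMP%\scratch_*"
```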
Aliases | none
---|---
Required? | false
Position? | named
Default Value | Basic
Accept Pipeline Input? | false
Accept Wildcard Characters? | false
-WorkDir<String>
Specifies the working directory under which the task should run.
The maximum length of the value for this parameter is 160 characters.
Aliases | none
---|---
Required? | false
Position? | named
Default Value | %USER_PROFILE%
Accept Pipeline Input? | false
Accept Wildcard Characters? | false
<CommonParameters>
This cmdlet supports the common parameters: -Verbose, -Debug, -ErrorAction, -ErrorVariable, -OutBuffer, and -OutVariable. For more information, see about_CommonParameters.
Inputs
The input type is the type of the objects that you can pipe to the cmdlet.
- The HpcJob object to which the cmdlet should add the task.
Outputs
The output type is the type of the objects that the cmdlet emits.
- The modified HpcJob object.
Examples
EXAMPLE 1
Creates a job named Sample that includes a single task named Basic Task. The task runs the hostname command on a single core and saves the output of the command to a file in a shared folder on the head node of the HPC cluster.
$j = New-HpcJob -Name "Sample"
$j | Add-HpcTask -Name "Basic Task" -CommandLine "hostname.exe" -WorkDir "\\headnode\output share" -Stdout "hostname.out"
EXAMPLE 2
Creates a job named Sample that contains a parametric sweep task named Sweep Task. This task runs 46 times with a set of index values from 10 to 100, where each value is greater than the previous value by two.
The task runs each of the following command lines independently:
Echo 10
Echo 12
Echo 14
...
Echo 98
Echo 100
This sweep creates the following files in the \\headnode\output share directory:
sweepstep10.out
sweepstep12.out
sweepstep14.out
...
sweepstep98.out
sweepstep100.out
The sweep creates 46 files in all, each of which contains its index number.
$j = New-HpcJob -Name "Sample"
$j | Add-HpcTask -Name "Sweep Task" -Parametric -Start 10 -End 100 -Increment 2 -CommandLine "Echo *" -Stdout "\\headnode\output share\sweepstep*.out"
EXAMPLE 3
Creates a job named Sample with Task Type that contains a node preparation task that displays the name of the node.
$job = New-HpcJob -Name "Sample with Task Type"
$job | Add-HpcTask -Name "Node Prep Task" -Type NodePrep -CommandLine "echo %COMPUTERNAME%"