gcloud dataflow flex-template run - runs a job from the specified path
gcloud dataflow flex-template run JOB_NAME --template-file-gcs-location=TEMPLATE_FILE_GCS_LOCATION [--additional-experiments=[ADDITIONAL_EXPERIMENTS,...]] [--additional-user-labels=[ADDITIONAL_USER_LABELS,...]] [--dataflow-kms-key=DATAFLOW_KMS_KEY] [--disable-public-ips] [--enable-streaming-engine] [--flexrs-goal=FLEXRS_GOAL] [--max-workers=MAX_WORKERS] [--network=NETWORK] [--num-workers=NUM_WORKERS] [--parameters=[PARAMETERS,...]] [--region=REGION_ID] [--service-account-email=SERVICE_ACCOUNT_EMAIL] [--staging-location=STAGING_LOCATION] [--subnetwork=SUBNETWORK] [--temp-location=TEMP_LOCATION] [--worker-machine-type=WORKER_MACHINE_TYPE] [[--[no-]update : --transform-name-mappings=[TRANSFORM_NAME_MAPPINGS,...]]] [--worker-region=WORKER_REGION | --worker-zone=WORKER_ZONE] [GCLOUD_WIDE_FLAG ...]
Runs a job from the specified Flex Template Cloud Storage path.
To run a job from the flex template, run:
$ gcloud dataflow flex-template run my-job \
    --template-file-gcs-location=gs://flex-template-path \
    --region=europe-west1 \
    --parameters=input="gs://input",output="gs://output-path" \
    --max-workers=5
- JOB_NAME
Unique name to assign to the job.
- --template-file-gcs-location=TEMPLATE_FILE_GCS_LOCATION
Google Cloud Storage location of the flex template to run. (Must be a URL beginning with 'gs://'.)
- --additional-experiments=[ADDITIONAL_EXPERIMENTS,...]
Additional experiments to pass to the job.
- --additional-user-labels=[ADDITIONAL_USER_LABELS,...]
Additional user labels to pass to the job.
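As an illustration of the KEY=VALUE list syntax these flags expect, the sketch below builds an --additional-user-labels value and prints the resulting command. The label keys, job name, and template path are placeholders, and the command is echoed rather than executed so the sketch needs no gcloud installation:

```shell
# Hypothetical sketch: user labels are comma-separated KEY=VALUE pairs.
LABELS='team=data-eng,env=staging'

# Echo the command instead of running it; all names here are placeholders.
echo gcloud dataflow flex-template run my-labeled-job \
  --template-file-gcs-location=gs://my-bucket/templates/spec.json \
  --additional-user-labels="$LABELS"
```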
- --dataflow-kms-key=DATAFLOW_KMS_KEY
Cloud KMS key to protect the job resources.
- --disable-public-ips
Specifies that Cloud Dataflow workers must not use public IP addresses. Overrides the default dataflow/disable_public_ips property value for this command invocation.
- --enable-streaming-engine
Enables Streaming Engine for the streaming job. Overrides the default dataflow/enable_streaming_engine property value for this command invocation.
- --flexrs-goal=FLEXRS_GOAL
FlexRS goal for the flex template job. FLEXRS_GOAL must be one of: COST_OPTIMIZED, SPEED_OPTIMIZED.
- --max-workers=MAX_WORKERS
Maximum number of workers to run.
- --network=NETWORK
Compute Engine network for launching instances to run your pipeline.
- --num-workers=NUM_WORKERS
Initial number of workers to use.
- --parameters=[PARAMETERS,...]
Parameters to pass to the job.
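Because --parameters splits its value on commas, a parameter value that itself contains a comma needs gcloud's alternate-delimiter escaping (documented under gcloud topic escaping): a leading ^DELIM^ changes the list separator. A sketch with hypothetical paths and job name, echoed rather than executed so it runs without gcloud:

```shell
# Hypothetical sketch: the leading ^;^ tells gcloud to split the list
# on ';' instead of ',', so the comma in "gs://bucket/a,b" survives.
PARAMS='^;^input=gs://bucket/a,b;output=gs://bucket/out'

# Print the command instead of invoking gcloud; names are placeholders.
echo gcloud dataflow flex-template run my-job \
  --template-file-gcs-location=gs://my-bucket/templates/spec.json \
  --parameters="$PARAMS"
```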
- --region=REGION_ID
Region ID of the job's regional endpoint. Defaults to 'us-central1'.
- --service-account-email=SERVICE_ACCOUNT_EMAIL
Service account to run the workers as.
- --staging-location=STAGING_LOCATION
Default Google Cloud Storage location to stage local files. (Must be a URL beginning with 'gs://'.)
- --subnetwork=SUBNETWORK
Compute Engine subnetwork for launching instances to run your pipeline.
- --temp-location=TEMP_LOCATION
Default Google Cloud Storage location to stage temporary files. If not set, defaults to the value for --staging-location. (Must be a URL beginning with 'gs://'.)
- --worker-machine-type=WORKER_MACHINE_TYPE
Type of machine to use for workers. Defaults to server-specified.
- --[no-]update
Set this to true for streaming update jobs. Use --update to enable and --no-update to disable.
- --transform-name-mappings=[TRANSFORM_NAME_MAPPINGS,...]
Transform name mappings for the streaming update job.
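The two flags above work together when replacing a running streaming job in place: --update triggers the update, and --transform-name-mappings supplies old=new pairs for transforms that were renamed between pipeline versions. A sketch with hypothetical transform and job names, echoed rather than executed so it is self-contained:

```shell
# Hypothetical sketch: map renamed transforms from the running job
# (left side) to their names in the new pipeline (right side).
MAPPINGS='oldReadTransform=newReadTransform,oldWrite=newWrite'

# Print the command instead of invoking gcloud; names are placeholders.
echo gcloud dataflow flex-template run my-streaming-job \
  --template-file-gcs-location=gs://my-bucket/templates/spec.json \
  --update \
  --transform-name-mappings="$MAPPINGS"
```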
- At most one of these can be specified:
- --worker-region=WORKER_REGION
Region to run the workers in.
- --worker-zone=WORKER_ZONE
Zone to run the workers in.
These flags are available to all commands: --access-token-file, --account, --billing-project, --configuration, --flags-file, --flatten, --format, --help, --impersonate-service-account, --log-http, --project, --quiet, --trace-token, --user-output-enabled, --verbosity.
Run $ gcloud help for details.
This variant is also available:
$ gcloud beta dataflow flex-template run