NAME

gcloud alpha ml-engine jobs submit prediction - start an AI Platform batch prediction job

SYNOPSIS

gcloud alpha ml-engine jobs submit prediction JOB --data-format=DATA_FORMAT --input-paths=INPUT_PATH,[INPUT_PATH,...] --output-path=OUTPUT_PATH --region=REGION (--model=MODEL | --model-dir=MODEL_DIR) [--batch-size=BATCH_SIZE] [--labels=[KEY=VALUE,...]] [--max-worker-count=MAX_WORKER_COUNT] [--runtime-version=RUNTIME_VERSION] [--signature-name=SIGNATURE_NAME] [--version=VERSION] [--accelerator-count=ACCELERATOR_COUNT --accelerator-type=ACCELERATOR_TYPE] [GCLOUD_WIDE_FLAG ...]

DESCRIPTION

(ALPHA) Start an AI Platform batch prediction job.
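
EXAMPLES

A minimal invocation might look like the following; the job name, model name, bucket, and region are illustrative:

$ gcloud alpha ml-engine jobs submit prediction my_prediction_job \
    --model=my_model --data-format=text \
    --input-paths=gs://my-bucket/instances* \
    --output-path=gs://my-bucket/output --region=us-central1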

POSITIONAL ARGUMENTS

JOB

Name of the batch prediction job.

REQUIRED FLAGS

--data-format=DATA_FORMAT

Data format of the input files. DATA_FORMAT must be one of:

text

Text and JSON files; for text files, see https://www.tensorflow.org/guide/datasets#consuming_text_data; for JSON files, see https://cloud.google.com/ai-platform/prediction/docs/overview#batch_prediction_input_data

tf-record

TFRecord files; see https://www.tensorflow.org/guide/datasets#consuming_tfrecord_data

tf-record-gzip

GZIP-compressed TFRecord files.

--input-paths=INPUT_PATH,[INPUT_PATH,...]

Cloud Storage paths to the instances to run prediction on.

Wildcards (*) are accepted at the end of a path. More than one path can be specified if multiple file patterns are needed. For example,

gs://my-bucket/instances*,gs://my-bucket/other-instances1

will match any objects whose names start with instances in my-bucket, as well as the object other-instances1 in my-bucket, while

gs://my-bucket/instance-dir/*

will match any objects in the instance-dir "directory" of my-bucket (directories aren't a first-class Cloud Storage concept, but the wildcard behaves as if they were).

--output-path=OUTPUT_PATH

Cloud Storage path to which to save the output. Example: gs://my-bucket/output.

--region=REGION

The Compute Engine region to run the job in.

Exactly one of these must be specified:
--model=MODEL

Name of the model to use for prediction.

--model-dir=MODEL_DIR

Cloud Storage location where the model files are located.

OPTIONAL FLAGS

--batch-size=BATCH_SIZE

The number of records per batch. The service buffers batch_size records in memory before invoking TensorFlow. Defaults to 64 if not specified.

--labels=[KEY=VALUE,...]

List of label KEY=VALUE pairs to add.

Keys must start with a lowercase character and contain only hyphens (-), underscores (_), lowercase characters, and numbers. Values must contain only hyphens (-), underscores (_), lowercase characters, and numbers.
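
For example, --labels=env=dev,team=ml attaches two labels to the job; the keys and values here are illustrative.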

--max-worker-count=MAX_WORKER_COUNT

The maximum number of workers to be used for parallel processing. Defaults to 10 if not specified.

--runtime-version=RUNTIME_VERSION

AI Platform runtime version for this job. Supported versions are listed in the documentation: https://cloud.google.com/ai-platform/prediction/docs/runtime-version-list
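
For example, --runtime-version=1.15 pins the job to runtime version 1.15; consult the linked list for the versions currently supported.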

--signature-name=SIGNATURE_NAME

Name of the signature defined in the SavedModel to use for this job. Defaults to DEFAULT_SERVING_SIGNATURE_DEF_KEY in https://www.tensorflow.org/api_docs/python/tf/compat/v1/saved_model/signature_constants, which is "serving_default". Only applies to TensorFlow models.
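
For example, if a SavedModel was exported with multiple signatures, a non-default one could be selected with --signature-name=my_signature, where my_signature is an illustrative signature key.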

--version=VERSION

Model version to be used.

This flag may only be given if --model is specified. If unspecified, the default version of the model will be used. To list versions for a model, run

$ gcloud ai-platform versions list --model=MODEL
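
For example, --model=my_model --version=v2 runs prediction against version v2 of my_model rather than the model's default version; both names are illustrative.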

Accelerator Configuration.
--accelerator-count=ACCELERATOR_COUNT

The number of accelerators to attach to the machines. Must be >= 1.

This flag argument must be specified if any of the other arguments in this group are specified.

--accelerator-type=ACCELERATOR_TYPE

The type of accelerator to attach to the machines. ACCELERATOR_TYPE must be one of:

nvidia-tesla-k80

NVIDIA Tesla K80 GPU.

nvidia-tesla-p100

NVIDIA Tesla P100 GPU.

This flag argument must be specified if any of the other arguments in this group are specified.
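
For example, --accelerator-type=nvidia-tesla-k80 --accelerator-count=2 attaches two K80 GPUs to each machine; the count is illustrative, and the two flags must be given together.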

GCLOUD WIDE FLAGS

These flags are available to all commands: --access-token-file, --account, --billing-project, --configuration, --flags-file, --flatten, --format, --help, --impersonate-service-account, --log-http, --project, --quiet, --trace-token, --user-output-enabled, --verbosity.

Run $ gcloud help for details.

NOTES

This command is currently in alpha and might change without notice. If this command fails with API permission errors despite specifying the correct project, you might be trying to access an API with an invitation-only early access allowlist.

These variants are also available:

$ gcloud ml-engine jobs submit prediction

$ gcloud beta ml-engine jobs submit prediction