NAME

gcloud alpha ai endpoints explain - request an online explanation from a Vertex AI endpoint

SYNOPSIS

gcloud alpha ai endpoints explain (ENDPOINT : --region=REGION) --json-request=JSON_REQUEST [--deployed-model-id=DEPLOYED_MODEL_ID] [GCLOUD_WIDE_FLAG ...]

DESCRIPTION

(ALPHA) gcloud alpha ai endpoints explain sends an explanation request to the Vertex AI endpoint for the given instances. This command reads up to 100 instances, though the service itself accepts requests up to the payload size limit (currently 1.5 MB).
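As a hypothetical pre-flight check (not part of gcloud), the two limits above can be verified locally before sending a request; the "instances" key follows the request format shown under --json-request, and the 1.5 MB figure is taken from the description:

```shell
# Write a small example request body (assumed format; see --json-request):
cat > request.json <<'EOF'
{"instances": [{"x": [1, 2], "y": [3, 4]}]}
EOF

# Count instances with a one-line python3 helper (hypothetical check):
python3 -c 'import json,sys; body=json.load(open(sys.argv[1])); \
print(len(body.get("instances", [])))' request.json

# Check the payload stays under the ~1.5 MB service limit (1572864 bytes):
size=$(wc -c < request.json)
if [ "$size" -le 1572864 ]; then echo "payload ok"; fi
```

A file passing both checks is still subject to the service's own server-side validation.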

EXAMPLES

To send an explanation request to the endpoint using the JSON file input.json, run:

$ gcloud alpha ai endpoints explain ENDPOINT_ID \
      --region=us-central1 --json-request=input.json

POSITIONAL ARGUMENTS

Endpoint resource - The endpoint to request an online explanation. The arguments in this group can be used to specify the attributes of this resource. (NOTE) Some attributes are not given arguments in this group but can be set in other ways. To set the project attribute:

  • provide the argument endpoint on the command line with a fully specified name;

  • provide the argument --project on the command line;

  • set the property core/project.

This must be specified.

ENDPOINT

ID of the endpoint or fully qualified identifier for the endpoint. To set the name attribute:

  • provide the argument endpoint on the command line.

This positional argument must be specified if any of the other arguments in this group are specified.

--region=REGION

Cloud region for the endpoint. To set the region attribute:

  • provide the argument endpoint on the command line with a fully specified name;

  • provide the argument --region on the command line;

  • set the property ai/region;

  • choose one from the prompted list of available regions.

REQUIRED FLAGS

--json-request=JSON_REQUEST

Path to a local file containing the body of a JSON request.

An example of a JSON request:

{
  "instances": [
    {"x": [1, 2], "y": [3, 4]},
    {"x": [-1, -2], "y": [-3, -4]}
  ]
}

This flag accepts "-" for stdin.
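A sketch of the stdin form, reusing the example request body above (the file is written with a heredoc and sanity-checked as JSON first):

```shell
# Write the example request body to input.json:
cat > input.json <<'EOF'
{
  "instances": [
    {"x": [1, 2], "y": [3, 4]},
    {"x": [-1, -2], "y": [-3, -4]}
  ]
}
EOF

# Optional sanity check that the file is well-formed JSON:
python3 -m json.tool input.json > /dev/null && echo "valid JSON"
```

The file can then be passed directly (--json-request=input.json) or piped on stdin, e.g. cat input.json | gcloud alpha ai endpoints explain ENDPOINT_ID --region=us-central1 --json-request=- (ENDPOINT_ID is a placeholder for your endpoint's ID).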

OPTIONAL FLAGS

--deployed-model-id=DEPLOYED_MODEL_ID

ID of the deployed model.

GCLOUD WIDE FLAGS

These flags are available to all commands: --access-token-file, --account, --billing-project, --configuration, --flags-file, --flatten, --format, --help, --impersonate-service-account, --log-http, --project, --quiet, --trace-token, --user-output-enabled, --verbosity.

Run $ gcloud help for details.

NOTES

This command is currently in alpha and might change without notice. If this command fails with API permission errors despite specifying the correct project, you might be trying to access an API with an invitation-only early access allowlist. These variants are also available:

$ gcloud ai endpoints explain

$ gcloud beta ai endpoints explain