NAME

gcloud alpha ml speech recognize-long-running - get transcripts of longer audio from an audio file

SYNOPSIS

gcloud alpha ml speech recognize-long-running AUDIO --language-code=LANGUAGE_CODE [--additional-language-codes=[LANGUAGE_CODE,...]] [--async] [--enable-automatic-punctuation] [--encoding=ENCODING; default="encoding-unspecified"] [--filter-profanity] [--hints=[HINT,...]] [--include-word-confidence] [--include-word-time-offsets] [--max-alternatives=MAX_ALTERNATIVES; default=1] [--model=MODEL] [--output-uri=OUTPUT_URI] [--sample-rate=SAMPLE_RATE] [--audio-channel-count=AUDIO_CHANNEL_COUNT --separate-channel-recognition] [--audio-topic=AUDIO_TOPIC --interaction-type=INTERACTION_TYPE --microphone-distance=MICROPHONE_DISTANCE --naics-code=NAICS_CODE --original-media-type=ORIGINAL_MEDIA_TYPE --original-mime-type=ORIGINAL_MIME_TYPE --recording-device-name=RECORDING_DEVICE_NAME --recording-device-type=RECORDING_DEVICE_TYPE] [--enable-speaker-diarization : --max-diarization-speaker-count=MAX_DIARIZATION_SPEAKER_COUNT --min-diarization-speaker-count=MIN_DIARIZATION_SPEAKER_COUNT] [GCLOUD_WIDE_FLAG ...]

DESCRIPTION

(ALPHA) Get a transcript of audio up to 80 minutes in length. If the audio is under 60 seconds, you may also use gcloud alpha ml speech recognize to analyze it.

EXAMPLES

To transcribe an audio file, blocking until the analysis is complete, run:

$ gcloud alpha ml speech recognize-long-running AUDIO_FILE \
    --language-code=LANGUAGE_CODE --sample-rate=SAMPLE_RATE

You can also receive an operation as the result of the command by running:

$ gcloud alpha ml speech recognize-long-running AUDIO_FILE \
    --language-code=LANGUAGE_CODE --sample-rate=SAMPLE_RATE --async

This will return the ID of an operation. To get information about the operation, run:

$ gcloud alpha ml speech operations describe OPERATION_ID

To poll the operation until it's complete, run:

$ gcloud alpha ml speech operations wait OPERATION_ID

POSITIONAL ARGUMENTS

AUDIO

The location of the audio file to transcribe. Must be a local path or a Google Cloud Storage URL (in the format gs://bucket/object).
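For example, to transcribe audio stored in Cloud Storage (the bucket and object names here are placeholders), run:

$ gcloud alpha ml speech recognize-long-running \
    gs://BUCKET_NAME/AUDIO_OBJECT --language-code=en-US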

REQUIRED FLAGS

--language-code=LANGUAGE_CODE

The language of the supplied audio as a BCP-47 (https://www.rfc-editor.org/rfc/bcp/bcp47.txt) language tag. Example: "en-US". See https://cloud.google.com/speech/docs/languages for a list of the currently supported language codes.

OPTIONAL FLAGS

--additional-language-codes=[LANGUAGE_CODE,...]

The BCP-47 language tags of other languages that the speech may be in. Up to 3 can be provided.

If alternative languages are listed, the recognition result will be in whichever of the listed languages (including the main --language-code) is detected as most likely.
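For example, to transcribe audio that may be in English, Spanish, or French (the language codes shown are illustrative), run:

$ gcloud alpha ml speech recognize-long-running AUDIO_FILE \
    --language-code=en-US --additional-language-codes=es-ES,fr-FR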

--async

Return immediately, without waiting for the operation in progress to complete.

--enable-automatic-punctuation

Adds punctuation to recognition result hypotheses.

--encoding=ENCODING; default="encoding-unspecified"

The type of encoding of the file. Required if the file format is not WAV or FLAC. ENCODING must be one of: amr, amr-wb, encoding-unspecified, flac, linear16, mp3, mulaw, ogg-opus, speex-with-header-byte, webm-opus.
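For example, to transcribe an MP3 file (the encoding and sample rate shown are illustrative; use the values that match your file), run:

$ gcloud alpha ml speech recognize-long-running AUDIO_FILE \
    --language-code=en-US --encoding=mp3 --sample-rate=16000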

--filter-profanity

If True, the server will attempt to filter out profanities, replacing all but the initial character in each filtered word with asterisks, e.g. f***.

--hints=[HINT,...]

A list of strings containing word and phrase "hints" so that the speech recognition is more likely to recognize them. This can be used to improve the accuracy for specific words and phrases, for example, if specific commands are typically spoken by the user. This can also be used to add additional words to the vocabulary of the recognizer. See https://cloud.google.com/speech/limits#content.
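For example, to bias recognition toward terms that might otherwise be misrecognized (the hint strings are illustrative), run:

$ gcloud alpha ml speech recognize-long-running AUDIO_FILE \
    --language-code=en-US --hints=Kubernetes,kubectl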

--include-word-confidence

Include a list of words and the confidence for those words in the top result.

--include-word-time-offsets

If True, the top result includes a list of words with the start and end time offsets (timestamps) for those words. If False, no word-level time offset information is returned.
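For example, to include per-word timestamps together with per-word confidence values in the top result, run:

$ gcloud alpha ml speech recognize-long-running AUDIO_FILE \
    --language-code=en-US --include-word-time-offsets \
    --include-word-confidence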

--max-alternatives=MAX_ALTERNATIVES; default=1

Maximum number of recognition hypotheses to be returned. The server may return fewer than max_alternatives. Valid values are 0-30. A value of 0 or 1 will return a maximum of one hypothesis.
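For example, to request up to three hypotheses per result, run:

$ gcloud alpha ml speech recognize-long-running AUDIO_FILE \
    --language-code=en-US --max-alternatives=3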

--model=MODEL

Select the model best suited to your domain to get best results. If you do not explicitly specify a model, Speech-to-Text will auto-select a model based on your other specified parameters. Some models are premium and cost more than standard models (although you can reduce the price by opting into https://cloud.google.com/speech-to-text/docs/data-logging). MODEL must be one of:

command_and_search

Best for short queries such as voice commands or voice search.

default

Best for audio that does not fit the other specific models, such as long-form audio. Ideally the audio is high-fidelity, recorded at a 16 kHz or greater sampling rate.

latest_long

Use this model for any kind of long form content such as media or spontaneous speech and conversations. Consider using this model in place of the video model, especially if the video model is not available in your target language. You can also use this in place of the default model.

latest_short

Use this model for short utterances that are a few seconds in length. It is useful for trying to capture commands or other single shot directed speech use cases. Consider using this model instead of the command and search model.

medical_conversation

Best for audio that originated from a conversation between a medical provider and patient.

medical_dictation

Best for audio that originated from dictation notes by a medical provider.

phone_call

Best for audio that originated from a phone call (typically recorded at an 8 kHz sampling rate).

phone_call_enhanced

Best for audio that originated from a phone call (typically recorded at an 8 kHz sampling rate). This is a premium model; it can produce better results but costs more than the standard rate.

video_enhanced

Best for audio that originated from video or that includes multiple speakers. Ideally the audio is recorded at a 16 kHz or greater sampling rate. This is a premium model that costs more than the standard rate.
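For example, to transcribe a recorded phone call with the model tuned for telephony audio, run:

$ gcloud alpha ml speech recognize-long-running AUDIO_FILE \
    --language-code=en-US --model=phone_call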

--output-uri=OUTPUT_URI

Location to which the results should be written. Must be a Google Cloud Storage URI.
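For example, to write the results to a Cloud Storage object (the URI is a placeholder), run:

$ gcloud alpha ml speech recognize-long-running AUDIO_FILE \
    --language-code=en-US --output-uri=gs://BUCKET_NAME/RESULT_OBJECT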

--sample-rate=SAMPLE_RATE

The sample rate in Hertz. For best results, set the sampling rate of the audio source to 16000 Hz. If that's not possible, use the native sample rate of the audio source (instead of re-sampling).

Audio channel settings.
--audio-channel-count=AUDIO_CHANNEL_COUNT

The number of channels in the input audio data. Set this for separate-channel-recognition. Valid values depend on the encoding: 1-8 for LINEAR16 and FLAC; 1-254 for OGG_OPUS; only 1 for MULAW, AMR, AMR_WB, and SPEEX_WITH_HEADER_BYTE.

This flag argument must be specified if any of the other arguments in this group are specified.

--separate-channel-recognition

The recognition result will contain a channel_tag field stating which channel each result belongs to. If this flag is not set, only the first channel will be recognized.

This flag argument must be specified if any of the other arguments in this group are specified.
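For example, to transcribe each channel of a stereo (two-channel) recording separately, run:

$ gcloud alpha ml speech recognize-long-running AUDIO_FILE \
    --language-code=en-US --audio-channel-count=2 \
    --separate-channel-recognition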

Description of audio data to be recognized. Note that the Google Cloud Speech-to-Text API does not use this information; it only passes it back through in the response.

--audio-topic=AUDIO_TOPIC

(DEPRECATED) Description of the content, e.g. "Recordings of federal supreme court hearings from 2012".

The audio-topic flag is deprecated and will be removed. The Google Cloud Speech-to-Text API does not use it; it only passes it back through in the response.

--interaction-type=INTERACTION_TYPE

(DEPRECATED) The type of interaction in the conversation.

The interaction-type flag is deprecated and will be removed. The Google Cloud Speech-to-Text API does not use it; it only passes it back through in the response. INTERACTION_TYPE must be one of:

dictation

Transcribe speech to text to create a written document, such as a text message, email, or report.

discussion

Multiple people in a conversation or discussion.

phone-call

A phone-call or video-conference in which two or more people, who are not in the same room, are actively participating.

presentation

One or more persons lecturing or presenting to others, mostly uninterrupted.

professionally-produced

Professionally produced audio (e.g. a TV show or podcast).

voicemail

A recorded message intended for another person to listen to.

voice-command

Transcribe voice commands, such as for controlling a device.

voice-search

Transcribe spoken questions and queries into text.

--microphone-distance=MICROPHONE_DISTANCE

(DEPRECATED) The distance at which the audio device is placed to record the conversation.

The microphone-distance flag is deprecated and will be removed. The Google Cloud Speech-to-Text API does not use it; it only passes it back through in the response. MICROPHONE_DISTANCE must be one of:

farfield

The speaker is more than 3 meters away from the microphone.

midfield

The speaker is within 3 meters of the microphone.

nearfield

The audio was captured from a microphone close to the speaker, generally within 1 meter. Examples include a phone, dictaphone, or handheld microphone.

--naics-code=NAICS_CODE

(DEPRECATED) The industry vertical to which this speech recognition request most closely applies.

The naics-code flag is deprecated and will be removed. The Google Cloud Speech-to-Text API does not use it; it only passes it back through in the response.

--original-media-type=ORIGINAL_MEDIA_TYPE

(DEPRECATED) The media type of the original audio conversation.

The original-media-type flag is deprecated and will be removed. The Google Cloud Speech-to-Text API does not use it; it only passes it back through in the response. ORIGINAL_MEDIA_TYPE must be one of:

audio

The speech data is an audio recording.

video

The speech data was originally recorded on a video.

--original-mime-type=ORIGINAL_MIME_TYPE

(DEPRECATED) MIME type of the original audio file. Examples: audio/m4a, audio/mp3.

The original-mime-type flag is deprecated and will be removed. The Google Cloud Speech-to-Text API does not use it; it only passes it back through in the response.

--recording-device-name=RECORDING_DEVICE_NAME

(DEPRECATED) The device used to make the recording. Examples: Nexus 5X, Polycom SoundStation IP 6000.

The recording-device-name flag is deprecated and will be removed. The Google Cloud Speech-to-Text API does not use it; it only passes it back through in the response.

--recording-device-type=RECORDING_DEVICE_TYPE

(DEPRECATED) The type of device on which the original audio was recorded.

The recording-device-type flag is deprecated and will be removed. The Google Cloud Speech-to-Text API does not use it; it only passes it back through in the response. RECORDING_DEVICE_TYPE must be one of:

indoor

Speech was recorded indoors.

outdoor

Speech was recorded outdoors.

pc

Speech was recorded using a personal computer or tablet.

phone-line

Speech was recorded over a phone line.

smartphone

Speech was recorded on a smartphone.

vehicle

Speech was recorded in a vehicle.

--enable-speaker-diarization

Enable speaker detection. Each recognized word in the top alternative of the recognition result is tagged with an integer speaker_tag in its WordInfo.

--max-diarization-speaker-count=MAX_DIARIZATION_SPEAKER_COUNT

Maximum estimated number of speakers in the conversation being recognized.

--min-diarization-speaker-count=MIN_DIARIZATION_SPEAKER_COUNT

Minimum estimated number of speakers in the conversation being recognized.
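For example, to tag each recognized word with a speaker in a conversation expected to have two to four participants (the speaker counts are illustrative), run:

$ gcloud alpha ml speech recognize-long-running AUDIO_FILE \
    --language-code=en-US --enable-speaker-diarization \
    --min-diarization-speaker-count=2 \
    --max-diarization-speaker-count=4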

GCLOUD WIDE FLAGS

These flags are available to all commands: --access-token-file, --account, --billing-project, --configuration, --flags-file, --flatten, --format, --help, --impersonate-service-account, --log-http, --project, --quiet, --trace-token, --user-output-enabled, --verbosity.

Run $ gcloud help for details.

API REFERENCE

This command uses the speech/v1p1beta1 API. The full documentation for this API can be found at: https://cloud.google.com/speech-to-text/docs/quickstart-protocol

NOTES

This command is currently in alpha and might change without notice. If this command fails with API permission errors despite specifying the correct project, you might be trying to access an API with an invitation-only early access allowlist.

These variants are also available:

$ gcloud ml speech recognize-long-running

$ gcloud beta ml speech recognize-long-running