Hub Python Library documentation

Inference types


This page lists the types (e.g., dataclasses) available for each task supported on the Hugging Face Hub. Each task is specified using a JSON schema, and the types are generated from these schemas, with some customization to meet Python requirements. Visit @huggingface.js/tasks to find the JSON schema for each task.

This part of the library is still under development and will be improved in future releases.

audio_classification

class huggingface_hub.AudioClassificationInput

(inputs: str, parameters: typing.Optional[huggingface_hub.inference._generated.types.audio_classification.AudioClassificationParameters] = None)

Inputs for Audio Classification inference

class huggingface_hub.AudioClassificationOutputElement

(label: str, score: float)

Outputs for Audio Classification inference

class huggingface_hub.AudioClassificationParameters

(function_to_apply: typing.Optional[ForwardRef('AudioClassificationOutputTransform')] = None, top_k: typing.Optional[int] = None)

Additional inference parameters for Audio Classification

audio_to_audio

class huggingface_hub.AudioToAudioInput

(inputs: typing.Any)

Inputs for Audio to Audio inference

class huggingface_hub.AudioToAudioOutputElement

(blob: typing.Any, content_type: str, label: str)

Outputs of inference for the Audio To Audio task: a generated audio file with its label.

automatic_speech_recognition

class huggingface_hub.AutomaticSpeechRecognitionGenerationParameters

(do_sample: typing.Optional[bool] = None, early_stopping: typing.Union[bool, ForwardRef('AutomaticSpeechRecognitionEarlyStoppingEnum'), NoneType] = None, epsilon_cutoff: typing.Optional[float] = None, eta_cutoff: typing.Optional[float] = None, max_length: typing.Optional[int] = None, max_new_tokens: typing.Optional[int] = None, min_length: typing.Optional[int] = None, min_new_tokens: typing.Optional[int] = None, num_beam_groups: typing.Optional[int] = None, num_beams: typing.Optional[int] = None, penalty_alpha: typing.Optional[float] = None, temperature: typing.Optional[float] = None, top_k: typing.Optional[int] = None, top_p: typing.Optional[float] = None, typical_p: typing.Optional[float] = None, use_cache: typing.Optional[bool] = None)

Parametrization of the text generation process

class huggingface_hub.AutomaticSpeechRecognitionInput

(inputs: str, parameters: typing.Optional[huggingface_hub.inference._generated.types.automatic_speech_recognition.AutomaticSpeechRecognitionParameters] = None)

Inputs for Automatic Speech Recognition inference

class huggingface_hub.AutomaticSpeechRecognitionOutput

(text: str, chunks: typing.Optional[typing.List[huggingface_hub.inference._generated.types.automatic_speech_recognition.AutomaticSpeechRecognitionOutputChunk]] = None)

Outputs of inference for the Automatic Speech Recognition task

class huggingface_hub.AutomaticSpeechRecognitionOutputChunk

(text: str, timestamp: typing.List[float])

class huggingface_hub.AutomaticSpeechRecognitionParameters

(return_timestamps: typing.Optional[bool] = None, generate_kwargs: typing.Optional[huggingface_hub.inference._generated.types.automatic_speech_recognition.AutomaticSpeechRecognitionGenerationParameters] = None)

Additional inference parameters for Automatic Speech Recognition
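Note how the text-generation knobs are nested under `generate_kwargs` rather than sitting at the top level. The sketch below illustrates that nesting with minimal local dataclass stand-ins (the field names are copied from the signatures above; the stand-ins themselves are not imported from huggingface_hub):

```python
from dataclasses import dataclass, asdict
from typing import Optional

# Local stand-ins mirroring the generated types documented above
# (the real ones live in huggingface_hub.inference._generated.types).
@dataclass
class AutomaticSpeechRecognitionGenerationParameters:
    do_sample: Optional[bool] = None
    max_new_tokens: Optional[int] = None
    temperature: Optional[float] = None

@dataclass
class AutomaticSpeechRecognitionParameters:
    return_timestamps: Optional[bool] = None
    generate_kwargs: Optional[AutomaticSpeechRecognitionGenerationParameters] = None

# Generation options are nested under `generate_kwargs`.
params = AutomaticSpeechRecognitionParameters(
    return_timestamps=True,
    generate_kwargs=AutomaticSpeechRecognitionGenerationParameters(
        do_sample=False, max_new_tokens=128
    ),
)
# asdict() recursively flattens nested dataclasses into plain dicts.
payload = {k: v for k, v in asdict(params).items() if v is not None}
print(payload)
```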

chat_completion

class huggingface_hub.ChatCompletionInput

(messages: typing.List[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionInputMessage], frequency_penalty: typing.Optional[float] = None, logit_bias: typing.Optional[typing.List[float]] = None, logprobs: typing.Optional[bool] = None, max_tokens: typing.Optional[int] = None, model: typing.Optional[str] = None, n: typing.Optional[int] = None, presence_penalty: typing.Optional[float] = None, response_format: typing.Optional[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionInputGrammarType] = None, seed: typing.Optional[int] = None, stop: typing.Optional[typing.List[str]] = None, stream: typing.Optional[bool] = None, stream_options: typing.Optional[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionInputStreamOptions] = None, temperature: typing.Optional[float] = None, tool_choice: typing.Union[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionInputToolChoiceClass, ForwardRef('ChatCompletionInputToolChoiceEnum'), NoneType] = None, tool_prompt: typing.Optional[str] = None, tools: typing.Optional[typing.List[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionInputTool]] = None, top_logprobs: typing.Optional[int] = None, top_p: typing.Optional[float] = None)

Chat Completion Input. Auto-generated from TGI specs. For more details, check out https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/scripts/inference-tgi-import.ts.

class huggingface_hub.ChatCompletionInputFunctionDefinition

(arguments: typing.Any, name: str, description: typing.Optional[str] = None)

class huggingface_hub.ChatCompletionInputFunctionName

(name: str)

class huggingface_hub.ChatCompletionInputGrammarType

(type: ChatCompletionInputGrammarTypeType, value: typing.Any)

class huggingface_hub.ChatCompletionInputMessage

(role: str, content: typing.Union[typing.List[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionInputMessageChunk], str, NoneType] = None, name: typing.Optional[str] = None, tool_calls: typing.Optional[typing.List[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionInputToolCall]] = None)

class huggingface_hub.ChatCompletionInputMessageChunk

(type: ChatCompletionInputMessageChunkType, image_url: typing.Optional[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionInputURL] = None, text: typing.Optional[str] = None)

class huggingface_hub.ChatCompletionInputStreamOptions

(include_usage: typing.Optional[bool] = None)

class huggingface_hub.ChatCompletionInputTool

(function: ChatCompletionInputFunctionDefinition, type: str)

class huggingface_hub.ChatCompletionInputToolCall

(function: ChatCompletionInputFunctionDefinition, id: str, type: str)

class huggingface_hub.ChatCompletionInputToolChoiceClass

(function: ChatCompletionInputFunctionName)

class huggingface_hub.ChatCompletionInputURL

(url: str)

class huggingface_hub.ChatCompletionOutput

(choices: typing.List[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionOutputComplete], created: int, id: str, model: str, system_fingerprint: str, usage: ChatCompletionOutputUsage)

Chat Completion Output. Auto-generated from TGI specs. For more details, check out https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/scripts/inference-tgi-import.ts.

class huggingface_hub.ChatCompletionOutputComplete

(finish_reason: str, index: int, message: ChatCompletionOutputMessage, logprobs: typing.Optional[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionOutputLogprobs] = None)

class huggingface_hub.ChatCompletionOutputFunctionDefinition

(arguments: typing.Any, name: str, description: typing.Optional[str] = None)

class huggingface_hub.ChatCompletionOutputLogprob

(logprob: float, token: str, top_logprobs: typing.List[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionOutputTopLogprob])

class huggingface_hub.ChatCompletionOutputLogprobs

(content: typing.List[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionOutputLogprob])

class huggingface_hub.ChatCompletionOutputMessage

(role: str, content: typing.Optional[str] = None, tool_call_id: typing.Optional[str] = None, tool_calls: typing.Optional[typing.List[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionOutputToolCall]] = None)

class huggingface_hub.ChatCompletionOutputToolCall

(function: ChatCompletionOutputFunctionDefinition, id: str, type: str)

class huggingface_hub.ChatCompletionOutputTopLogprob

(logprob: float, token: str)

class huggingface_hub.ChatCompletionOutputUsage

(completion_tokens: int, prompt_tokens: int, total_tokens: int)

class huggingface_hub.ChatCompletionStreamOutput

(choices: typing.List[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionStreamOutputChoice], created: int, id: str, model: str, system_fingerprint: str, usage: typing.Optional[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionStreamOutputUsage] = None)

Chat Completion Stream Output. Auto-generated from TGI specs. For more details, check out https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/scripts/inference-tgi-import.ts.

class huggingface_hub.ChatCompletionStreamOutputChoice

(delta: ChatCompletionStreamOutputDelta, index: int, finish_reason: typing.Optional[str] = None, logprobs: typing.Optional[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionStreamOutputLogprobs] = None)

class huggingface_hub.ChatCompletionStreamOutputDelta

(role: str, content: typing.Optional[str] = None, tool_call_id: typing.Optional[str] = None, tool_calls: typing.Optional[typing.List[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionStreamOutputDeltaToolCall]] = None)

class huggingface_hub.ChatCompletionStreamOutputDeltaToolCall

(function: ChatCompletionStreamOutputFunction, id: str, index: int, type: str)

class huggingface_hub.ChatCompletionStreamOutputFunction

(arguments: str, name: typing.Optional[str] = None)

class huggingface_hub.ChatCompletionStreamOutputLogprob

(logprob: float, token: str, top_logprobs: typing.List[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionStreamOutputTopLogprob])

class huggingface_hub.ChatCompletionStreamOutputLogprobs

(content: typing.List[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionStreamOutputLogprob])

class huggingface_hub.ChatCompletionStreamOutputTopLogprob

(logprob: float, token: str)

class huggingface_hub.ChatCompletionStreamOutputUsage

(completion_tokens: int, prompt_tokens: int, total_tokens: int)
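The chat-completion types nest: an output holds a list of choices, each choice holds a message, and the reply text sits at `choices[i].message.content`. A minimal sketch of that shape, using local dataclass stand-ins copied from the signatures above rather than the real generated classes:

```python
from dataclasses import dataclass
from typing import List, Optional

# Local stand-ins mirroring the generated chat-completion types
# (field names taken from the signatures documented above).
@dataclass
class ChatCompletionInputMessage:
    role: str
    content: Optional[str] = None

@dataclass
class ChatCompletionOutputMessage:
    role: str
    content: Optional[str] = None

@dataclass
class ChatCompletionOutputComplete:
    finish_reason: str
    index: int
    message: ChatCompletionOutputMessage

@dataclass
class ChatCompletionOutputUsage:
    completion_tokens: int
    prompt_tokens: int
    total_tokens: int

@dataclass
class ChatCompletionOutput:
    choices: List[ChatCompletionOutputComplete]
    created: int
    id: str
    model: str
    system_fingerprint: str
    usage: ChatCompletionOutputUsage

# Request side: a list of role/content messages.
messages = [
    ChatCompletionInputMessage(role="system", content="You are terse."),
    ChatCompletionInputMessage(role="user", content="What is 2 + 2?"),
]

# Response side: one entry per requested choice (`n`); the reply text
# lives at choices[i].message.content.
output = ChatCompletionOutput(
    choices=[ChatCompletionOutputComplete(
        finish_reason="stop", index=0,
        message=ChatCompletionOutputMessage(role="assistant", content="4"),
    )],
    created=1700000000, id="abc", model="demo-model", system_fingerprint="fp",
    usage=ChatCompletionOutputUsage(completion_tokens=1, prompt_tokens=20, total_tokens=21),
)
reply = output.choices[0].message.content
print(reply)  # "4"
```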

depth_estimation

class huggingface_hub.DepthEstimationInput

(inputs: typing.Any, parameters: typing.Optional[typing.Dict[str, typing.Any]] = None)

Inputs for Depth Estimation inference

class huggingface_hub.DepthEstimationOutput

(depth: typing.Any, predicted_depth: typing.Any)

Outputs of inference for the Depth Estimation task

document_question_answering

class huggingface_hub.DocumentQuestionAnsweringInput

(inputs: DocumentQuestionAnsweringInputData, parameters: typing.Optional[huggingface_hub.inference._generated.types.document_question_answering.DocumentQuestionAnsweringParameters] = None)

Inputs for Document Question Answering inference

class huggingface_hub.DocumentQuestionAnsweringInputData

(image: typing.Any, question: str)

One (document, question) pair to answer

class huggingface_hub.DocumentQuestionAnsweringOutputElement

(answer: str, end: int, score: float, start: int)

Outputs of inference for the Document Question Answering task

class huggingface_hub.DocumentQuestionAnsweringParameters

(doc_stride: typing.Optional[int] = None, handle_impossible_answer: typing.Optional[bool] = None, lang: typing.Optional[str] = None, max_answer_len: typing.Optional[int] = None, max_question_len: typing.Optional[int] = None, max_seq_len: typing.Optional[int] = None, top_k: typing.Optional[int] = None, word_boxes: typing.Optional[typing.List[typing.Union[typing.List[float], str]]] = None)

Additional inference parameters for Document Question Answering

feature_extraction

class huggingface_hub.FeatureExtractionInput

(inputs: typing.Union[typing.List[str], str], normalize: typing.Optional[bool] = None, prompt_name: typing.Optional[str] = None, truncate: typing.Optional[bool] = None, truncation_direction: typing.Optional[ForwardRef('FeatureExtractionInputTruncationDirection')] = None)

Feature Extraction Input. Auto-generated from TEI specs. For more details, check out https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/scripts/inference-tei-import.ts.

fill_mask

class huggingface_hub.FillMaskInput

(inputs: str, parameters: typing.Optional[huggingface_hub.inference._generated.types.fill_mask.FillMaskParameters] = None)

Inputs for Fill Mask inference

class huggingface_hub.FillMaskOutputElement

(score: float, sequence: str, token: int, token_str: typing.Any, fill_mask_output_token_str: typing.Optional[str] = None)

Outputs of inference for the Fill Mask task

class huggingface_hub.FillMaskParameters

(targets: typing.Optional[typing.List[str]] = None, top_k: typing.Optional[int] = None)

Additional inference parameters for Fill Mask

image_classification

class huggingface_hub.ImageClassificationInput

(inputs: str, parameters: typing.Optional[huggingface_hub.inference._generated.types.image_classification.ImageClassificationParameters] = None)

Inputs for Image Classification inference

class huggingface_hub.ImageClassificationOutputElement

(label: str, score: float)

Outputs of inference for the Image Classification task

class huggingface_hub.ImageClassificationParameters

(function_to_apply: typing.Optional[ForwardRef('ImageClassificationOutputTransform')] = None, top_k: typing.Optional[int] = None)

Additional inference parameters for Image Classification

image_segmentation

class huggingface_hub.ImageSegmentationInput

(inputs: str, parameters: typing.Optional[huggingface_hub.inference._generated.types.image_segmentation.ImageSegmentationParameters] = None)

Inputs for Image Segmentation inference

class huggingface_hub.ImageSegmentationOutputElement

(label: str, mask: str, score: typing.Optional[float] = None)

Outputs of inference for the Image Segmentation task: a predicted mask / segment.

class huggingface_hub.ImageSegmentationParameters

(mask_threshold: typing.Optional[float] = None, overlap_mask_area_threshold: typing.Optional[float] = None, subtask: typing.Optional[ForwardRef('ImageSegmentationSubtask')] = None, threshold: typing.Optional[float] = None)

Additional inference parameters for Image Segmentation

image_to_image

class huggingface_hub.ImageToImageInput

(inputs: str, parameters: typing.Optional[huggingface_hub.inference._generated.types.image_to_image.ImageToImageParameters] = None)

Inputs for Image To Image inference

class huggingface_hub.ImageToImageOutput

(image: typing.Any)

Outputs of inference for the Image To Image task

class huggingface_hub.ImageToImageParameters

(guidance_scale: typing.Optional[float] = None, negative_prompt: typing.Optional[str] = None, num_inference_steps: typing.Optional[int] = None, prompt: typing.Optional[str] = None, target_size: typing.Optional[huggingface_hub.inference._generated.types.image_to_image.ImageToImageTargetSize] = None)

Additional inference parameters for Image To Image

class huggingface_hub.ImageToImageTargetSize

(height: int, width: int)

The size in pixels of the output image.

image_to_text

class huggingface_hub.ImageToTextGenerationParameters

(do_sample: typing.Optional[bool] = None, early_stopping: typing.Union[bool, ForwardRef('ImageToTextEarlyStoppingEnum'), NoneType] = None, epsilon_cutoff: typing.Optional[float] = None, eta_cutoff: typing.Optional[float] = None, max_length: typing.Optional[int] = None, max_new_tokens: typing.Optional[int] = None, min_length: typing.Optional[int] = None, min_new_tokens: typing.Optional[int] = None, num_beam_groups: typing.Optional[int] = None, num_beams: typing.Optional[int] = None, penalty_alpha: typing.Optional[float] = None, temperature: typing.Optional[float] = None, top_k: typing.Optional[int] = None, top_p: typing.Optional[float] = None, typical_p: typing.Optional[float] = None, use_cache: typing.Optional[bool] = None)

Parametrization of the text generation process

class huggingface_hub.ImageToTextInput

(inputs: typing.Any, parameters: typing.Optional[huggingface_hub.inference._generated.types.image_to_text.ImageToTextParameters] = None)

Inputs for Image To Text inference

class huggingface_hub.ImageToTextOutput

(generated_text: typing.Any, image_to_text_output_generated_text: typing.Optional[str] = None)

Outputs of inference for the Image To Text task

class huggingface_hub.ImageToTextParameters

(max_new_tokens: typing.Optional[int] = None, generate_kwargs: typing.Optional[huggingface_hub.inference._generated.types.image_to_text.ImageToTextGenerationParameters] = None)

Additional inference parameters for Image To Text

object_detection

class huggingface_hub.ObjectDetectionBoundingBox

(xmax: int, xmin: int, ymax: int, ymin: int)

The predicted bounding box. Coordinates are relative to the top left corner of the input image.

class huggingface_hub.ObjectDetectionInput

(inputs: str, parameters: typing.Optional[huggingface_hub.inference._generated.types.object_detection.ObjectDetectionParameters] = None)

Inputs for Object Detection inference

class huggingface_hub.ObjectDetectionOutputElement

(box: ObjectDetectionBoundingBox, label: str, score: float)

Outputs of inference for the Object Detection task

class huggingface_hub.ObjectDetectionParameters

(threshold: typing.Optional[float] = None)

Additional inference parameters for Object Detection
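Since box coordinates are absolute pixel offsets from the image's top-left corner, downstream geometry (areas, cropping, overlap checks) falls out directly. A small sketch with a local dataclass stand-in for the bounding-box type (field names copied from the signature above):

```python
from dataclasses import dataclass

# Local stand-in for ObjectDetectionBoundingBox: absolute pixel
# coordinates measured from the top-left corner of the input image.
@dataclass
class ObjectDetectionBoundingBox:
    xmax: int
    xmin: int
    ymax: int
    ymin: int

def box_area(box: ObjectDetectionBoundingBox) -> int:
    """Area of the box in pixels."""
    return (box.xmax - box.xmin) * (box.ymax - box.ymin)

box = ObjectDetectionBoundingBox(xmax=150, xmin=50, ymax=80, ymin=30)
print(box_area(box))  # (150-50) * (80-30) = 5000
```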

question_answering

class huggingface_hub.QuestionAnsweringInput

(inputs: QuestionAnsweringInputData, parameters: typing.Optional[huggingface_hub.inference._generated.types.question_answering.QuestionAnsweringParameters] = None)

Inputs for Question Answering inference

class huggingface_hub.QuestionAnsweringInputData

(context: str, question: str)

One (context, question) pair to answer

class huggingface_hub.QuestionAnsweringOutputElement

(answer: str, end: int, score: float, start: int)

Outputs of inference for the Question Answering task

class huggingface_hub.QuestionAnsweringParameters

(align_to_words: typing.Optional[bool] = None, doc_stride: typing.Optional[int] = None, handle_impossible_answer: typing.Optional[bool] = None, max_answer_len: typing.Optional[int] = None, max_question_len: typing.Optional[int] = None, max_seq_len: typing.Optional[int] = None, top_k: typing.Optional[int] = None)

Additional inference parameters for Question Answering
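The `start` and `end` fields of an output element are character offsets into the original context, so the answer span can be recovered by slicing. A minimal sketch with a local dataclass stand-in (fields copied from the signature above):

```python
from dataclasses import dataclass

# Local stand-in for QuestionAnsweringOutputElement; `start`/`end`
# are character offsets into the context string.
@dataclass
class QuestionAnsweringOutputElement:
    answer: str
    end: int
    score: float
    start: int

context = "The Hub hosts models, datasets, and Spaces."
elem = QuestionAnsweringOutputElement(answer="models", end=20, score=0.97, start=14)

# Slicing the context with the offsets reproduces the answer text.
span = context[elem.start:elem.end]
print(span)  # "models"
```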

sentence_similarity

class huggingface_hub.SentenceSimilarityInput

(inputs: SentenceSimilarityInputData, parameters: typing.Optional[typing.Dict[str, typing.Any]] = None)

Inputs for Sentence similarity inference

class huggingface_hub.SentenceSimilarityInputData

(sentences: typing.List[str], source_sentence: str)
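The input pairs one source sentence with a list of candidates; the task returns one similarity score per candidate, in the same order. A sketch of the input shape, using a local dataclass stand-in rather than the real generated class:

```python
from dataclasses import dataclass, asdict
from typing import List

# Local stand-in for SentenceSimilarityInputData: one source sentence
# compared against each entry of `sentences`.
@dataclass
class SentenceSimilarityInputData:
    sentences: List[str]
    source_sentence: str

data = SentenceSimilarityInputData(
    source_sentence="A cat sits on the mat.",
    sentences=["A feline rests on a rug.", "Stock prices fell today."],
)
payload = asdict(data)
# One similarity score is expected per candidate sentence.
print(len(payload["sentences"]))  # 2
```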

summarization

class huggingface_hub.SummarizationInput

(inputs: str, parameters: typing.Optional[huggingface_hub.inference._generated.types.summarization.SummarizationParameters] = None)

Inputs for Summarization inference

class huggingface_hub.SummarizationOutput

(summary_text: str)

Outputs of inference for the Summarization task

class huggingface_hub.SummarizationParameters

(clean_up_tokenization_spaces: typing.Optional[bool] = None, generate_parameters: typing.Optional[typing.Dict[str, typing.Any]] = None, truncation: typing.Optional[ForwardRef('SummarizationTruncationStrategy')] = None)

Additional inference parameters for Summarization

table_question_answering

class huggingface_hub.TableQuestionAnsweringInput

(inputs: TableQuestionAnsweringInputData, parameters: typing.Optional[huggingface_hub.inference._generated.types.table_question_answering.TableQuestionAnsweringParameters] = None)

Inputs for Table Question Answering inference

class huggingface_hub.TableQuestionAnsweringInputData

(question: str, table: typing.Dict[str, typing.List[str]])

One (table, question) pair to answer

class huggingface_hub.TableQuestionAnsweringOutputElement

(answer: str, cells: typing.List[str], coordinates: typing.List[typing.List[int]], aggregator: typing.Optional[str] = None)

Outputs of inference for the Table Question Answering task

class huggingface_hub.TableQuestionAnsweringParameters

(padding: typing.Optional[ForwardRef('Padding')] = None, sequential: typing.Optional[bool] = None, truncation: typing.Optional[bool] = None)

Additional inference parameters for Table Question Answering
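The table is passed as a mapping from column name to that column's cells, and the output's `coordinates` point back into it. A sketch of resolving a coordinate to its cell, using local dataclass stand-ins (field names copied from the signatures above; the (row, column) interpretation of each coordinate pair is an assumption here):

```python
from dataclasses import dataclass
from typing import Dict, List, Optional

# Local stand-ins for the table-question-answering types.
@dataclass
class TableQuestionAnsweringInputData:
    question: str
    table: Dict[str, List[str]]  # column name -> column cells

@dataclass
class TableQuestionAnsweringOutputElement:
    answer: str
    cells: List[str]
    coordinates: List[List[int]]
    aggregator: Optional[str] = None

data = TableQuestionAnsweringInputData(
    question="Which repo has the most stars?",
    table={"repo": ["transformers", "datasets"], "stars": ["120000", "19000"]},
)
out = TableQuestionAnsweringOutputElement(
    answer="transformers", cells=["transformers"], coordinates=[[0, 0]]
)

# Resolve a (row, column-index) coordinate back to the cell it names
# (assumption: coordinates are [row, column] pairs).
row, col = out.coordinates[0]
columns = list(data.table)  # dict preserves insertion order
cell = data.table[columns[col]][row]
print(cell)  # "transformers"
```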

text2text_generation

class huggingface_hub.Text2TextGenerationInput

(inputs: str, parameters: typing.Optional[huggingface_hub.inference._generated.types.text2text_generation.Text2TextGenerationParameters] = None)

Inputs for Text2text Generation inference

class huggingface_hub.Text2TextGenerationOutput

(generated_text: typing.Any, text2_text_generation_output_generated_text: typing.Optional[str] = None)

Outputs of inference for the Text2text Generation task

class huggingface_hub.Text2TextGenerationParameters

(clean_up_tokenization_spaces: typing.Optional[bool] = None, generate_parameters: typing.Optional[typing.Dict[str, typing.Any]] = None, truncation: typing.Optional[ForwardRef('Text2TextGenerationTruncationStrategy')] = None)

Additional inference parameters for Text2text Generation

text_classification

class huggingface_hub.TextClassificationInput

(inputs: str, parameters: typing.Optional[huggingface_hub.inference._generated.types.text_classification.TextClassificationParameters] = None)

Inputs for Text Classification inference

class huggingface_hub.TextClassificationOutputElement

(label: str, score: float)

Outputs of inference for the Text Classification task

class huggingface_hub.TextClassificationParameters

(function_to_apply: typing.Optional[ForwardRef('TextClassificationOutputTransform')] = None, top_k: typing.Optional[int] = None)

Additional inference parameters for Text Classification

text_generation

class huggingface_hub.TextGenerationInput

(inputs: str, parameters: typing.Optional[huggingface_hub.inference._generated.types.text_generation.TextGenerationInputGenerateParameters] = None, stream: typing.Optional[bool] = None)

Text Generation Input. Auto-generated from TGI specs. For more details, check out https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/scripts/inference-tgi-import.ts.

class huggingface_hub.TextGenerationInputGenerateParameters

(adapter_id: typing.Optional[str] = None, best_of: typing.Optional[int] = None, decoder_input_details: typing.Optional[bool] = None, details: typing.Optional[bool] = None, do_sample: typing.Optional[bool] = None, frequency_penalty: typing.Optional[float] = None, grammar: typing.Optional[huggingface_hub.inference._generated.types.text_generation.TextGenerationInputGrammarType] = None, max_new_tokens: typing.Optional[int] = None, repetition_penalty: typing.Optional[float] = None, return_full_text: typing.Optional[bool] = None, seed: typing.Optional[int] = None, stop: typing.Optional[typing.List[str]] = None, temperature: typing.Optional[float] = None, top_k: typing.Optional[int] = None, top_n_tokens: typing.Optional[int] = None, top_p: typing.Optional[float] = None, truncate: typing.Optional[int] = None, typical_p: typing.Optional[float] = None, watermark: typing.Optional[bool] = None)

class huggingface_hub.TextGenerationInputGrammarType

(type: TypeEnum, value: typing.Any)

class huggingface_hub.TextGenerationOutput

(generated_text: str, details: typing.Optional[huggingface_hub.inference._generated.types.text_generation.TextGenerationOutputDetails] = None)

Text Generation Output. Auto-generated from TGI specs. For more details, check out https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/scripts/inference-tgi-import.ts.

class huggingface_hub.TextGenerationOutputBestOfSequence

(finish_reason: TextGenerationOutputFinishReason, generated_text: str, generated_tokens: int, prefill: typing.List[huggingface_hub.inference._generated.types.text_generation.TextGenerationOutputPrefillToken], tokens: typing.List[huggingface_hub.inference._generated.types.text_generation.TextGenerationOutputToken], seed: typing.Optional[int] = None, top_tokens: typing.Optional[typing.List[typing.List[huggingface_hub.inference._generated.types.text_generation.TextGenerationOutputToken]]] = None)

class huggingface_hub.TextGenerationOutputDetails

(finish_reason: TextGenerationOutputFinishReason, generated_tokens: int, prefill: typing.List[huggingface_hub.inference._generated.types.text_generation.TextGenerationOutputPrefillToken], tokens: typing.List[huggingface_hub.inference._generated.types.text_generation.TextGenerationOutputToken], best_of_sequences: typing.Optional[typing.List[huggingface_hub.inference._generated.types.text_generation.TextGenerationOutputBestOfSequence]] = None, seed: typing.Optional[int] = None, top_tokens: typing.Optional[typing.List[typing.List[huggingface_hub.inference._generated.types.text_generation.TextGenerationOutputToken]]] = None)

class huggingface_hub.TextGenerationOutputPrefillToken

(id: int, logprob: float, text: str)

class huggingface_hub.TextGenerationOutputToken

(id: int, logprob: float, special: bool, text: str)

class huggingface_hub.TextGenerationStreamOutput

(index: int, token: TextGenerationStreamOutputToken, details: typing.Optional[huggingface_hub.inference._generated.types.text_generation.TextGenerationStreamOutputStreamDetails] = None, generated_text: typing.Optional[str] = None, top_tokens: typing.Optional[typing.List[huggingface_hub.inference._generated.types.text_generation.TextGenerationStreamOutputToken]] = None)

Text Generation Stream Output. Auto-generated from TGI specs. For more details, check out https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/scripts/inference-tgi-import.ts.

class huggingface_hub.TextGenerationStreamOutputStreamDetails

(finish_reason: TextGenerationOutputFinishReason, generated_tokens: int, input_length: int, seed: typing.Optional[int] = None)

class huggingface_hub.TextGenerationStreamOutputToken

(id: int, logprob: float, special: bool, text: str)
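When `details` is requested, the output carries per-token records with log-probabilities, which allow sequence-level statistics such as the probability of the whole generation. A sketch with a local dataclass stand-in for the token type (fields copied from the signature above):

```python
import math
from dataclasses import dataclass

# Local stand-in for TextGenerationOutputToken; `logprob` is the
# model's log-probability for that generated token.
@dataclass
class TextGenerationOutputToken:
    id: int
    logprob: float
    special: bool
    text: str

tokens = [
    TextGenerationOutputToken(id=11, logprob=-0.1, special=False, text="Hello"),
    TextGenerationOutputToken(id=42, logprob=-0.4, special=False, text=" world"),
]

# Sequence probability = exp of the summed per-token logprobs.
seq_logprob = sum(t.logprob for t in tokens)
seq_prob = math.exp(seq_logprob)
generated_text = "".join(t.text for t in tokens)
print(generated_text, round(seq_prob, 4))
```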

text_to_audio

class huggingface_hub.TextToAudioGenerationParameters

(do_sample: typing.Optional[bool] = None, early_stopping: typing.Union[bool, ForwardRef('TextToAudioEarlyStoppingEnum'), NoneType] = None, epsilon_cutoff: typing.Optional[float] = None, eta_cutoff: typing.Optional[float] = None, max_length: typing.Optional[int] = None, max_new_tokens: typing.Optional[int] = None, min_length: typing.Optional[int] = None, min_new_tokens: typing.Optional[int] = None, num_beam_groups: typing.Optional[int] = None, num_beams: typing.Optional[int] = None, penalty_alpha: typing.Optional[float] = None, temperature: typing.Optional[float] = None, top_k: typing.Optional[int] = None, top_p: typing.Optional[float] = None, typical_p: typing.Optional[float] = None, use_cache: typing.Optional[bool] = None)

Parametrization of the text generation process

class huggingface_hub.TextToAudioInput

(inputs: str, parameters: typing.Optional[huggingface_hub.inference._generated.types.text_to_audio.TextToAudioParameters] = None)

Inputs for Text To Audio inference

class huggingface_hub.TextToAudioOutput

(audio: typing.Any, sampling_rate: float)

Outputs of inference for the Text To Audio task

class huggingface_hub.TextToAudioParameters

(generate_kwargs: typing.Optional[huggingface_hub.inference._generated.types.text_to_audio.TextToAudioGenerationParameters] = None)

Additional inference parameters for Text To Audio

text_to_image

class huggingface_hub.TextToImageInput

(inputs: str, parameters: typing.Optional[huggingface_hub.inference._generated.types.text_to_image.TextToImageParameters] = None)

Inputs for Text To Image inference

class huggingface_hub.TextToImageOutput

(image: typing.Any)

Outputs of inference for the Text To Image task

class huggingface_hub.TextToImageParameters

(guidance_scale: typing.Optional[float] = None, height: typing.Optional[int] = None, negative_prompt: typing.Optional[str] = None, num_inference_steps: typing.Optional[int] = None, scheduler: typing.Optional[str] = None, seed: typing.Optional[int] = None, width: typing.Optional[int] = None)

Additional inference parameters for Text To Image

text_to_speech

class huggingface_hub.TextToSpeechGenerationParameters

(do_sample: typing.Optional[bool] = None, early_stopping: typing.Union[bool, ForwardRef('TextToSpeechEarlyStoppingEnum'), NoneType] = None, epsilon_cutoff: typing.Optional[float] = None, eta_cutoff: typing.Optional[float] = None, max_length: typing.Optional[int] = None, max_new_tokens: typing.Optional[int] = None, min_length: typing.Optional[int] = None, min_new_tokens: typing.Optional[int] = None, num_beam_groups: typing.Optional[int] = None, num_beams: typing.Optional[int] = None, penalty_alpha: typing.Optional[float] = None, temperature: typing.Optional[float] = None, top_k: typing.Optional[int] = None, top_p: typing.Optional[float] = None, typical_p: typing.Optional[float] = None, use_cache: typing.Optional[bool] = None)

Parametrization of the text generation process

class huggingface_hub.TextToSpeechInput

(inputs: str, parameters: typing.Optional[huggingface_hub.inference._generated.types.text_to_speech.TextToSpeechParameters] = None)

Inputs for Text To Speech inference

class huggingface_hub.TextToSpeechOutput

(audio: typing.Any, sampling_rate: typing.Optional[float] = None)

Outputs of inference for the Text To Speech task

class huggingface_hub.TextToSpeechParameters

(generate_kwargs: typing.Optional[huggingface_hub.inference._generated.types.text_to_speech.TextToSpeechGenerationParameters] = None)

Additional inference parameters for Text To Speech

text_to_video

class huggingface_hub.TextToVideoInput

(inputs: str, parameters: typing.Optional[huggingface_hub.inference._generated.types.text_to_video.TextToVideoParameters] = None)

Inputs for Text To Video inference

class huggingface_hub.TextToVideoOutput

(video: typing.Any)

Outputs of inference for the Text To Video task

class huggingface_hub.TextToVideoParameters

(guidance_scale: typing.Optional[float] = None, negative_prompt: typing.Optional[typing.List[str]] = None, num_frames: typing.Optional[float] = None, num_inference_steps: typing.Optional[int] = None, seed: typing.Optional[int] = None)

Additional inference parameters for Text To Video

token_classification

class huggingface_hub.TokenClassificationInput

(inputs: str, parameters: typing.Optional[huggingface_hub.inference._generated.types.token_classification.TokenClassificationParameters] = None)

Inputs for Token Classification inference

class huggingface_hub.TokenClassificationOutputElement

(end: int, score: float, start: int, word: str, entity: typing.Optional[str] = None, entity_group: typing.Optional[str] = None)

Outputs of inference for the Token Classification task

class huggingface_hub.TokenClassificationParameters

(aggregation_strategy: typing.Optional[ForwardRef('TokenClassificationAggregationStrategy')] = None, ignore_labels: typing.Optional[typing.List[str]] = None, stride: typing.Optional[int] = None)

Additional inference parameters for Token Classification

translation

class huggingface_hub.TranslationInput

(inputs: str, parameters: typing.Optional[huggingface_hub.inference._generated.types.translation.TranslationParameters] = None)

Inputs for Translation inference

class huggingface_hub.TranslationOutput

(translation_text: str)

Outputs of inference for the Translation task

class huggingface_hub.TranslationParameters

(clean_up_tokenization_spaces: typing.Optional[bool] = None, generate_parameters: typing.Optional[typing.Dict[str, typing.Any]] = None, src_lang: typing.Optional[str] = None, tgt_lang: typing.Optional[str] = None, truncation: typing.Optional[ForwardRef('TranslationTruncationStrategy')] = None)

Additional inference parameters for Translation

video_classification

class huggingface_hub.VideoClassificationInput

(inputs: typing.Any, parameters: typing.Optional[huggingface_hub.inference._generated.types.video_classification.VideoClassificationParameters] = None)

Inputs for Video Classification inference

class huggingface_hub.VideoClassificationOutputElement

(label: str, score: float)

Outputs of inference for the Video Classification task

class huggingface_hub.VideoClassificationParameters

(frame_sampling_rate: typing.Optional[int] = None, function_to_apply: typing.Optional[ForwardRef('VideoClassificationOutputTransform')] = None, num_frames: typing.Optional[int] = None, top_k: typing.Optional[int] = None)

Additional inference parameters for Video Classification

visual_question_answering

class huggingface_hub.VisualQuestionAnsweringInput

(inputs: VisualQuestionAnsweringInputData, parameters: typing.Optional[huggingface_hub.inference._generated.types.visual_question_answering.VisualQuestionAnsweringParameters] = None)

Inputs for Visual Question Answering inference

class huggingface_hub.VisualQuestionAnsweringInputData

(image: typing.Any, question: str)

One (image, question) pair to answer

class huggingface_hub.VisualQuestionAnsweringOutputElement

(score: float, answer: typing.Optional[str] = None)

Outputs of inference for the Visual Question Answering task

class huggingface_hub.VisualQuestionAnsweringParameters

(top_k: typing.Optional[int] = None)

Additional inference parameters for Visual Question Answering

zero_shot_classification

class huggingface_hub.ZeroShotClassificationInput

(inputs: str, parameters: ZeroShotClassificationParameters)

Inputs for Zero Shot Classification inference

class huggingface_hub.ZeroShotClassificationOutputElement

(label: str, score: float)

Outputs of inference for the Zero Shot Classification task

class huggingface_hub.ZeroShotClassificationParameters

(candidate_labels: typing.List[str], hypothesis_template: typing.Optional[str] = None, multi_label: typing.Optional[bool] = None)

Additional inference parameters for Zero Shot Classification
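A common zero-shot pattern is that the `hypothesis_template` is instantiated once per candidate label before scoring (the `{}` placeholder convention mirrors the transformers pipeline; treat it as an assumption here). A sketch with a local dataclass stand-in for the parameters type:

```python
from dataclasses import dataclass
from typing import List, Optional

# Local stand-in for ZeroShotClassificationParameters (fields copied
# from the signature above).
@dataclass
class ZeroShotClassificationParameters:
    candidate_labels: List[str]
    hypothesis_template: Optional[str] = None
    multi_label: Optional[bool] = None

params = ZeroShotClassificationParameters(
    candidate_labels=["sports", "politics"],
    hypothesis_template="This text is about {}.",
)

# One hypothesis per candidate label, filled into the template.
hypotheses = [params.hypothesis_template.format(label)
              for label in params.candidate_labels]
print(hypotheses[0])  # "This text is about sports."
```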

zero_shot_image_classification

class huggingface_hub.ZeroShotImageClassificationInput

(inputs: str, parameters: ZeroShotImageClassificationParameters)

Inputs for Zero Shot Image Classification inference

class huggingface_hub.ZeroShotImageClassificationOutputElement

(label: str, score: float)

Outputs of inference for the Zero Shot Image Classification task

class huggingface_hub.ZeroShotImageClassificationParameters

(candidate_labels: typing.List[str], hypothesis_template: typing.Optional[str] = None)

Additional inference parameters for Zero Shot Image Classification

zero_shot_object_detection

class huggingface_hub.ZeroShotObjectDetectionBoundingBox

(xmax: int, xmin: int, ymax: int, ymin: int)

The predicted bounding box. Coordinates are relative to the top left corner of the input image.

class huggingface_hub.ZeroShotObjectDetectionInput

(inputs: str, parameters: ZeroShotObjectDetectionParameters)

Inputs for Zero Shot Object Detection inference

class huggingface_hub.ZeroShotObjectDetectionOutputElement

(box: ZeroShotObjectDetectionBoundingBox, label: str, score: float)

Outputs of inference for the Zero Shot Object Detection task

class huggingface_hub.ZeroShotObjectDetectionParameters

(candidate_labels: typing.List[str])

Additional inference parameters for Zero Shot Object Detection
