Hub Python Library documentation
Inference types
This page lists the types (e.g., dataclasses) available for each task supported on the Hugging Face Hub. Each task is specified with a JSON schema, and the types are generated from these schemas, with some customization to fit Python requirements. Visit @huggingface.js/tasks to find the JSON schema for each task.
This part of the library is still under development and will be improved in future releases.
audio_classification
class huggingface_hub.AudioClassificationInput
(inputs: str, parameters: typing.Optional[AudioClassificationParameters] = None)
Inputs for Audio Classification inference
class huggingface_hub.AudioClassificationOutputElement
(label: str, score: float)
Outputs for Audio Classification inference
class huggingface_hub.AudioClassificationParameters
(function_to_apply: typing.Optional['AudioClassificationOutputTransform'] = None, top_k: typing.Optional[int] = None)
Additional inference parameters for Audio Classification
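These types can be imported directly from `huggingface_hub` and instantiated like ordinary dataclasses. A minimal sketch (the label and score values below are made up for illustration):

```python
from huggingface_hub import (
    AudioClassificationOutputElement,
    AudioClassificationParameters,
)

# Request side: ask the server for the top 3 labels only.
params = AudioClassificationParameters(top_k=3)

# Response side: servers return a list of label/score elements;
# this one is constructed by hand for illustration.
element = AudioClassificationOutputElement(label="dog", score=0.98)

print(params.top_k, element.label)
```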
audio_to_audio
class huggingface_hub.AudioToAudioInput
(inputs: typing.Any)
Inputs for Audio to Audio inference
class huggingface_hub.AudioToAudioOutputElement
(blob: typing.Any, content_type: str, label: str)
Outputs of inference for the Audio To Audio task: a generated audio file with its label.
automatic_speech_recognition
class huggingface_hub.AutomaticSpeechRecognitionGenerationParameters
(do_sample: typing.Optional[bool] = None, early_stopping: typing.Union[bool, 'AutomaticSpeechRecognitionEarlyStoppingEnum', None] = None, epsilon_cutoff: typing.Optional[float] = None, eta_cutoff: typing.Optional[float] = None, max_length: typing.Optional[int] = None, max_new_tokens: typing.Optional[int] = None, min_length: typing.Optional[int] = None, min_new_tokens: typing.Optional[int] = None, num_beam_groups: typing.Optional[int] = None, num_beams: typing.Optional[int] = None, penalty_alpha: typing.Optional[float] = None, temperature: typing.Optional[float] = None, top_k: typing.Optional[int] = None, top_p: typing.Optional[float] = None, typical_p: typing.Optional[float] = None, use_cache: typing.Optional[bool] = None)
Parametrization of the text generation process
class huggingface_hub.AutomaticSpeechRecognitionInput
(inputs: str, parameters: typing.Optional[AutomaticSpeechRecognitionParameters] = None)
Inputs for Automatic Speech Recognition inference
class huggingface_hub.AutomaticSpeechRecognitionOutput
(text: str, chunks: typing.Optional[typing.List[AutomaticSpeechRecognitionOutputChunk]] = None)
Outputs of inference for the Automatic Speech Recognition task
class huggingface_hub.AutomaticSpeechRecognitionOutputChunk
(text: str, timestamp: typing.List[float])
class huggingface_hub.AutomaticSpeechRecognitionParameters
(return_timestamps: typing.Optional[bool] = None, generate_kwargs: typing.Optional[AutomaticSpeechRecognitionGenerationParameters] = None)
Additional inference parameters for Automatic Speech Recognition
chat_completion
class huggingface_hub.ChatCompletionInput
(messages: typing.List[ChatCompletionInputMessage], frequency_penalty: typing.Optional[float] = None, logit_bias: typing.Optional[typing.List[float]] = None, logprobs: typing.Optional[bool] = None, max_tokens: typing.Optional[int] = None, model: typing.Optional[str] = None, n: typing.Optional[int] = None, presence_penalty: typing.Optional[float] = None, response_format: typing.Optional[ChatCompletionInputGrammarType] = None, seed: typing.Optional[int] = None, stop: typing.Optional[typing.List[str]] = None, stream: typing.Optional[bool] = None, stream_options: typing.Optional[ChatCompletionInputStreamOptions] = None, temperature: typing.Optional[float] = None, tool_choice: typing.Union[ChatCompletionInputToolChoiceClass, 'ChatCompletionInputToolChoiceEnum', None] = None, tool_prompt: typing.Optional[str] = None, tools: typing.Optional[typing.List[ChatCompletionInputTool]] = None, top_logprobs: typing.Optional[int] = None, top_p: typing.Optional[float] = None)
Chat Completion Input. Auto-generated from TGI specs. For more details, check out https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/scripts/inference-tgi-import.ts.
class huggingface_hub.ChatCompletionInputFunctionDefinition
(arguments: typing.Any, name: str, description: typing.Optional[str] = None)
class huggingface_hub.ChatCompletionInputFunctionName
(name: str)
class huggingface_hub.ChatCompletionInputGrammarType
(type: ChatCompletionInputGrammarTypeType, value: typing.Any)
class huggingface_hub.ChatCompletionInputMessage
(role: str, content: typing.Union[typing.List[ChatCompletionInputMessageChunk], str, None] = None, name: typing.Optional[str] = None, tool_calls: typing.Optional[typing.List[ChatCompletionInputToolCall]] = None)
class huggingface_hub.ChatCompletionInputMessageChunk
(type: ChatCompletionInputMessageChunkType, image_url: typing.Optional[ChatCompletionInputURL] = None, text: typing.Optional[str] = None)
class huggingface_hub.ChatCompletionInputStreamOptions
(include_usage: typing.Optional[bool] = None)
class huggingface_hub.ChatCompletionInputTool
(function: ChatCompletionInputFunctionDefinition, type: str)
class huggingface_hub.ChatCompletionInputToolCall
(function: ChatCompletionInputFunctionDefinition, id: str, type: str)
class huggingface_hub.ChatCompletionInputToolChoiceClass
(function: ChatCompletionInputFunctionName)
class huggingface_hub.ChatCompletionInputURL
(url: str)
class huggingface_hub.ChatCompletionOutput
(choices: typing.List[ChatCompletionOutputComplete], created: int, id: str, model: str, system_fingerprint: str, usage: ChatCompletionOutputUsage)
Chat Completion Output. Auto-generated from TGI specs. For more details, check out https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/scripts/inference-tgi-import.ts.
class huggingface_hub.ChatCompletionOutputComplete
(finish_reason: str, index: int, message: ChatCompletionOutputMessage, logprobs: typing.Optional[ChatCompletionOutputLogprobs] = None)
class huggingface_hub.ChatCompletionOutputFunctionDefinition
(arguments: typing.Any, name: str, description: typing.Optional[str] = None)
class huggingface_hub.ChatCompletionOutputLogprob
(logprob: float, token: str, top_logprobs: typing.List[ChatCompletionOutputTopLogprob])
class huggingface_hub.ChatCompletionOutputLogprobs
(content: typing.List[ChatCompletionOutputLogprob])
class huggingface_hub.ChatCompletionOutputMessage
(role: str, content: typing.Optional[str] = None, tool_call_id: typing.Optional[str] = None, tool_calls: typing.Optional[typing.List[ChatCompletionOutputToolCall]] = None)
class huggingface_hub.ChatCompletionOutputToolCall
(function: ChatCompletionOutputFunctionDefinition, id: str, type: str)
class huggingface_hub.ChatCompletionOutputTopLogprob
(logprob: float, token: str)
class huggingface_hub.ChatCompletionOutputUsage
(completion_tokens: int, prompt_tokens: int, total_tokens: int)
class huggingface_hub.ChatCompletionStreamOutput
(choices: typing.List[ChatCompletionStreamOutputChoice], created: int, id: str, model: str, system_fingerprint: str, usage: typing.Optional[ChatCompletionStreamOutputUsage] = None)
Chat Completion Stream Output. Auto-generated from TGI specs. For more details, check out https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/scripts/inference-tgi-import.ts.
class huggingface_hub.ChatCompletionStreamOutputChoice
(delta: ChatCompletionStreamOutputDelta, index: int, finish_reason: typing.Optional[str] = None, logprobs: typing.Optional[ChatCompletionStreamOutputLogprobs] = None)
class huggingface_hub.ChatCompletionStreamOutputDelta
(role: str, content: typing.Optional[str] = None, tool_call_id: typing.Optional[str] = None, tool_calls: typing.Optional[typing.List[ChatCompletionStreamOutputDeltaToolCall]] = None)
class huggingface_hub.ChatCompletionStreamOutputDeltaToolCall
(function: ChatCompletionStreamOutputFunction, id: str, index: int, type: str)
class huggingface_hub.ChatCompletionStreamOutputFunction
(arguments: str, name: typing.Optional[str] = None)
class huggingface_hub.ChatCompletionStreamOutputLogprob
(logprob: float, token: str, top_logprobs: typing.List[ChatCompletionStreamOutputTopLogprob])
class huggingface_hub.ChatCompletionStreamOutputLogprobs
(content: typing.List[ChatCompletionStreamOutputLogprob])
class huggingface_hub.ChatCompletionStreamOutputTopLogprob
(logprob: float, token: str)
class huggingface_hub.ChatCompletionStreamOutputUsage
(completion_tokens: int, prompt_tokens: int, total_tokens: int)
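A minimal sketch of assembling a request payload from these input types (the message contents and parameter values are made up; in practice you would typically pass plain dicts to `InferenceClient.chat_completion`, and these types document the accepted shape):

```python
from huggingface_hub import ChatCompletionInput, ChatCompletionInputMessage

# Messages carry a role and a content string (or a list of chunks
# for multimodal content).
messages = [
    ChatCompletionInputMessage(role="system", content="You are terse."),
    ChatCompletionInputMessage(role="user", content="Hello!"),
]

request = ChatCompletionInput(
    messages=messages,
    max_tokens=64,
    temperature=0.7,
    stream=False,
)
```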
depth_estimation
class huggingface_hub.DepthEstimationInput
(inputs: typing.Any, parameters: typing.Optional[typing.Dict[str, typing.Any]] = None)
Inputs for Depth Estimation inference
class huggingface_hub.DepthEstimationOutput
(depth: typing.Any, predicted_depth: typing.Any)
Outputs of inference for the Depth Estimation task
document_question_answering
class huggingface_hub.DocumentQuestionAnsweringInput
(inputs: DocumentQuestionAnsweringInputData, parameters: typing.Optional[DocumentQuestionAnsweringParameters] = None)
Inputs for Document Question Answering inference
class huggingface_hub.DocumentQuestionAnsweringInputData
(image: typing.Any, question: str)
One (document, question) pair to answer
class huggingface_hub.DocumentQuestionAnsweringOutputElement
(answer: str, end: int, score: float, start: int)
Outputs of inference for the Document Question Answering task
class huggingface_hub.DocumentQuestionAnsweringParameters
(doc_stride: typing.Optional[int] = None, handle_impossible_answer: typing.Optional[bool] = None, lang: typing.Optional[str] = None, max_answer_len: typing.Optional[int] = None, max_question_len: typing.Optional[int] = None, max_seq_len: typing.Optional[int] = None, top_k: typing.Optional[int] = None, word_boxes: typing.Optional[typing.List[typing.Union[typing.List[float], str]]] = None)
Additional inference parameters for Document Question Answering
feature_extraction
class huggingface_hub.FeatureExtractionInput
(inputs: typing.Union[typing.List[str], str], normalize: typing.Optional[bool] = None, prompt_name: typing.Optional[str] = None, truncate: typing.Optional[bool] = None, truncation_direction: typing.Optional['FeatureExtractionInputTruncationDirection'] = None)
Feature Extraction Input. Auto-generated from TEI specs. For more details, check out https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/scripts/inference-tei-import.ts.
fill_mask
class huggingface_hub.FillMaskInput
(inputs: str, parameters: typing.Optional[FillMaskParameters] = None)
Inputs for Fill Mask inference
class huggingface_hub.FillMaskOutputElement
(score: float, sequence: str, token: int, token_str: typing.Any, fill_mask_output_token_str: typing.Optional[str] = None)
Outputs of inference for the Fill Mask task
class huggingface_hub.FillMaskParameters
(targets: typing.Optional[typing.List[str]] = None, top_k: typing.Optional[int] = None)
Additional inference parameters for Fill Mask
image_classification
class huggingface_hub.ImageClassificationInput
(inputs: str, parameters: typing.Optional[ImageClassificationParameters] = None)
Inputs for Image Classification inference
class huggingface_hub.ImageClassificationOutputElement
(label: str, score: float)
Outputs of inference for the Image Classification task
class huggingface_hub.ImageClassificationParameters
(function_to_apply: typing.Optional['ImageClassificationOutputTransform'] = None, top_k: typing.Optional[int] = None)
Additional inference parameters for Image Classification
image_segmentation
class huggingface_hub.ImageSegmentationInput
(inputs: str, parameters: typing.Optional[ImageSegmentationParameters] = None)
Inputs for Image Segmentation inference
class huggingface_hub.ImageSegmentationOutputElement
(label: str, mask: str, score: typing.Optional[float] = None)
Outputs of inference for the Image Segmentation task: a predicted mask / segment
class huggingface_hub.ImageSegmentationParameters
(mask_threshold: typing.Optional[float] = None, overlap_mask_area_threshold: typing.Optional[float] = None, subtask: typing.Optional['ImageSegmentationSubtask'] = None, threshold: typing.Optional[float] = None)
Additional inference parameters for Image Segmentation
image_to_image
class huggingface_hub.ImageToImageInput
(inputs: str, parameters: typing.Optional[ImageToImageParameters] = None)
Inputs for Image To Image inference
class huggingface_hub.ImageToImageOutput
(image: typing.Any)
Outputs of inference for the Image To Image task
class huggingface_hub.ImageToImageParameters
(guidance_scale: typing.Optional[float] = None, negative_prompt: typing.Optional[str] = None, num_inference_steps: typing.Optional[int] = None, prompt: typing.Optional[str] = None, target_size: typing.Optional[ImageToImageTargetSize] = None)
Additional inference parameters for Image To Image
class huggingface_hub.ImageToImageTargetSize
(height: int, width: int)
The size in pixels of the output image.
image_to_text
class huggingface_hub.ImageToTextGenerationParameters
(do_sample: typing.Optional[bool] = None, early_stopping: typing.Union[bool, 'ImageToTextEarlyStoppingEnum', None] = None, epsilon_cutoff: typing.Optional[float] = None, eta_cutoff: typing.Optional[float] = None, max_length: typing.Optional[int] = None, max_new_tokens: typing.Optional[int] = None, min_length: typing.Optional[int] = None, min_new_tokens: typing.Optional[int] = None, num_beam_groups: typing.Optional[int] = None, num_beams: typing.Optional[int] = None, penalty_alpha: typing.Optional[float] = None, temperature: typing.Optional[float] = None, top_k: typing.Optional[int] = None, top_p: typing.Optional[float] = None, typical_p: typing.Optional[float] = None, use_cache: typing.Optional[bool] = None)
Parametrization of the text generation process
class huggingface_hub.ImageToTextInput
(inputs: typing.Any, parameters: typing.Optional[ImageToTextParameters] = None)
Inputs for Image To Text inference
class huggingface_hub.ImageToTextOutput
(generated_text: typing.Any, image_to_text_output_generated_text: typing.Optional[str] = None)
Outputs of inference for the Image To Text task
class huggingface_hub.ImageToTextParameters
(max_new_tokens: typing.Optional[int] = None, generate_kwargs: typing.Optional[ImageToTextGenerationParameters] = None)
Additional inference parameters for Image To Text
object_detection
class huggingface_hub.ObjectDetectionBoundingBox
(xmax: int, xmin: int, ymax: int, ymin: int)
The predicted bounding box. Coordinates are relative to the top left corner of the input image.
class huggingface_hub.ObjectDetectionInput
(inputs: str, parameters: typing.Optional[ObjectDetectionParameters] = None)
Inputs for Object Detection inference
class huggingface_hub.ObjectDetectionOutputElement
(box: ObjectDetectionBoundingBox, label: str, score: float)
Outputs of inference for the Object Detection task
class huggingface_hub.ObjectDetectionParameters
(threshold: typing.Optional[float] = None)
Additional inference parameters for Object Detection
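Each detection pairs a label and score with a nested bounding box. A minimal sketch of working with a detection (the coordinates and label are made up):

```python
from huggingface_hub import (
    ObjectDetectionBoundingBox,
    ObjectDetectionOutputElement,
)

# Coordinates are relative to the top-left corner of the input image.
box = ObjectDetectionBoundingBox(xmin=12, ymin=30, xmax=148, ymax=201)
detection = ObjectDetectionOutputElement(box=box, label="cat", score=0.97)

# Box width in pixels, derived from the nested bounding box.
width = detection.box.xmax - detection.box.xmin  # 136
```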
question_answering
class huggingface_hub.QuestionAnsweringInput
(inputs: QuestionAnsweringInputData, parameters: typing.Optional[QuestionAnsweringParameters] = None)
Inputs for Question Answering inference
class huggingface_hub.QuestionAnsweringInputData
(context: str, question: str)
One (context, question) pair to answer
class huggingface_hub.QuestionAnsweringOutputElement
(answer: str, end: int, score: float, start: int)
Outputs of inference for the Question Answering task
class huggingface_hub.QuestionAnsweringParameters
(align_to_words: typing.Optional[bool] = None, doc_stride: typing.Optional[int] = None, handle_impossible_answer: typing.Optional[bool] = None, max_answer_len: typing.Optional[int] = None, max_question_len: typing.Optional[int] = None, max_seq_len: typing.Optional[int] = None, top_k: typing.Optional[int] = None)
Additional inference parameters for Question Answering
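The input here is not a bare string but a (context, question) pair wrapped in `QuestionAnsweringInputData`. A minimal sketch (context and question are made up):

```python
from huggingface_hub import (
    QuestionAnsweringInput,
    QuestionAnsweringInputData,
    QuestionAnsweringParameters,
)

# `inputs` is a nested dataclass holding the pair to answer.
payload = QuestionAnsweringInput(
    inputs=QuestionAnsweringInputData(
        context="The library was released in 2020.",
        question="When was the library released?",
    ),
    parameters=QuestionAnsweringParameters(top_k=1),
)
```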
sentence_similarity
class huggingface_hub.SentenceSimilarityInput
(inputs: SentenceSimilarityInputData, parameters: typing.Optional[typing.Dict[str, typing.Any]] = None)
Inputs for Sentence Similarity inference
class huggingface_hub.SentenceSimilarityInputData
(sentences: typing.List[str], source_sentence: str)
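The task compares one `source_sentence` against a list of candidate `sentences`. A minimal sketch of the payload (the sentences are made up):

```python
from huggingface_hub import (
    SentenceSimilarityInput,
    SentenceSimilarityInputData,
)

# One source sentence is scored against each candidate sentence.
data = SentenceSimilarityInputData(
    source_sentence="A cat sits on the mat.",
    sentences=["A feline rests on a rug.", "Stocks fell sharply today."],
)
payload = SentenceSimilarityInput(inputs=data)
```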
summarization
class huggingface_hub.SummarizationInput
(inputs: str, parameters: typing.Optional[SummarizationParameters] = None)
Inputs for Summarization inference
class huggingface_hub.SummarizationOutput
(summary_text: str)
Outputs of inference for the Summarization task
class huggingface_hub.SummarizationParameters
(clean_up_tokenization_spaces: typing.Optional[bool] = None, generate_parameters: typing.Optional[typing.Dict[str, typing.Any]] = None, truncation: typing.Optional['SummarizationTruncationStrategy'] = None)
Additional inference parameters for Summarization
table_question_answering
class huggingface_hub.TableQuestionAnsweringInput
(inputs: TableQuestionAnsweringInputData, parameters: typing.Optional[TableQuestionAnsweringParameters] = None)
Inputs for Table Question Answering inference
class huggingface_hub.TableQuestionAnsweringInputData
(question: str, table: typing.Dict[str, typing.List[str]])
One (table, question) pair to answer
class huggingface_hub.TableQuestionAnsweringOutputElement
(answer: str, cells: typing.List[str], coordinates: typing.List[typing.List[int]], aggregator: typing.Optional[str] = None)
Outputs of inference for the Table Question Answering task
class huggingface_hub.TableQuestionAnsweringParameters
(padding: typing.Optional['Padding'] = None, sequential: typing.Optional[bool] = None, truncation: typing.Optional[bool] = None)
Additional inference parameters for Table Question Answering
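As the `table` annotation shows, the table is a mapping from column name to a list of string cells (numbers included). A minimal sketch (the table contents are made up):

```python
from huggingface_hub import (
    TableQuestionAnsweringInput,
    TableQuestionAnsweringInputData,
)

# Column name -> list of string cells; all cells are strings,
# even numeric ones.
table = {
    "City": ["Paris", "Lyon"],
    "Population": ["2100000", "520000"],
}
payload = TableQuestionAnsweringInput(
    inputs=TableQuestionAnsweringInputData(
        question="Which city has the largest population?",
        table=table,
    )
)
```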
text2text_generation
class huggingface_hub.Text2TextGenerationInput
(inputs: str, parameters: typing.Optional[Text2TextGenerationParameters] = None)
Inputs for Text2text Generation inference
class huggingface_hub.Text2TextGenerationOutput
(generated_text: typing.Any, text2_text_generation_output_generated_text: typing.Optional[str] = None)
Outputs of inference for the Text2text Generation task
class huggingface_hub.Text2TextGenerationParameters
(clean_up_tokenization_spaces: typing.Optional[bool] = None, generate_parameters: typing.Optional[typing.Dict[str, typing.Any]] = None, truncation: typing.Optional['Text2TextGenerationTruncationStrategy'] = None)
Additional inference parameters for Text2text Generation
text_classification
class huggingface_hub.TextClassificationInput
(inputs: str, parameters: typing.Optional[TextClassificationParameters] = None)
Inputs for Text Classification inference
class huggingface_hub.TextClassificationOutputElement
(label: str, score: float)
Outputs of inference for the Text Classification task
class huggingface_hub.TextClassificationParameters
(function_to_apply: typing.Optional['TextClassificationOutputTransform'] = None, top_k: typing.Optional[int] = None)
Additional inference parameters for Text Classification
text_generation
class huggingface_hub.TextGenerationInput
(inputs: str, parameters: typing.Optional[TextGenerationInputGenerateParameters] = None, stream: typing.Optional[bool] = None)
Text Generation Input. Auto-generated from TGI specs. For more details, check out https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/scripts/inference-tgi-import.ts.
class huggingface_hub.TextGenerationInputGenerateParameters
(adapter_id: typing.Optional[str] = None, best_of: typing.Optional[int] = None, decoder_input_details: typing.Optional[bool] = None, details: typing.Optional[bool] = None, do_sample: typing.Optional[bool] = None, frequency_penalty: typing.Optional[float] = None, grammar: typing.Optional[TextGenerationInputGrammarType] = None, max_new_tokens: typing.Optional[int] = None, repetition_penalty: typing.Optional[float] = None, return_full_text: typing.Optional[bool] = None, seed: typing.Optional[int] = None, stop: typing.Optional[typing.List[str]] = None, temperature: typing.Optional[float] = None, top_k: typing.Optional[int] = None, top_n_tokens: typing.Optional[int] = None, top_p: typing.Optional[float] = None, truncate: typing.Optional[int] = None, typical_p: typing.Optional[float] = None, watermark: typing.Optional[bool] = None)
class huggingface_hub.TextGenerationInputGrammarType
(type: TypeEnum, value: typing.Any)
class huggingface_hub.TextGenerationOutput
(generated_text: str, details: typing.Optional[TextGenerationOutputDetails] = None)
Text Generation Output. Auto-generated from TGI specs. For more details, check out https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/scripts/inference-tgi-import.ts.
class huggingface_hub.TextGenerationOutputBestOfSequence
(finish_reason: TextGenerationOutputFinishReason, generated_text: str, generated_tokens: int, prefill: typing.List[TextGenerationOutputPrefillToken], tokens: typing.List[TextGenerationOutputToken], seed: typing.Optional[int] = None, top_tokens: typing.Optional[typing.List[typing.List[TextGenerationOutputToken]]] = None)
class huggingface_hub.TextGenerationOutputDetails
(finish_reason: TextGenerationOutputFinishReason, generated_tokens: int, prefill: typing.List[TextGenerationOutputPrefillToken], tokens: typing.List[TextGenerationOutputToken], best_of_sequences: typing.Optional[typing.List[TextGenerationOutputBestOfSequence]] = None, seed: typing.Optional[int] = None, top_tokens: typing.Optional[typing.List[typing.List[TextGenerationOutputToken]]] = None)
class huggingface_hub.TextGenerationOutputPrefillToken
(id: int, logprob: float, text: str)
class huggingface_hub.TextGenerationOutputToken
(id: int, logprob: float, special: bool, text: str)
class huggingface_hub.TextGenerationStreamOutput
(index: int, token: TextGenerationStreamOutputToken, details: typing.Optional[TextGenerationStreamOutputStreamDetails] = None, generated_text: typing.Optional[str] = None, top_tokens: typing.Optional[typing.List[TextGenerationStreamOutputToken]] = None)
Text Generation Stream Output. Auto-generated from TGI specs. For more details, check out https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/scripts/inference-tgi-import.ts.
class huggingface_hub.TextGenerationStreamOutputStreamDetails
(finish_reason: TextGenerationOutputFinishReason, generated_tokens: int, input_length: int, seed: typing.Optional[int] = None)
class huggingface_hub.TextGenerationStreamOutputToken
(id: int, logprob: float, special: bool, text: str)
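A minimal sketch of building a text-generation request from these types (prompt and parameter values are illustrative):

```python
from huggingface_hub import (
    TextGenerationInput,
    TextGenerationInputGenerateParameters,
)

request = TextGenerationInput(
    inputs="Once upon a time",
    parameters=TextGenerationInputGenerateParameters(
        max_new_tokens=40,
        temperature=0.8,
        stop=["\n\n"],
        details=True,  # ask the server to include TextGenerationOutputDetails
    ),
    stream=False,
)
```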
text_to_audio
class huggingface_hub.TextToAudioGenerationParameters
(do_sample: typing.Optional[bool] = None, early_stopping: typing.Union[bool, 'TextToAudioEarlyStoppingEnum', None] = None, epsilon_cutoff: typing.Optional[float] = None, eta_cutoff: typing.Optional[float] = None, max_length: typing.Optional[int] = None, max_new_tokens: typing.Optional[int] = None, min_length: typing.Optional[int] = None, min_new_tokens: typing.Optional[int] = None, num_beam_groups: typing.Optional[int] = None, num_beams: typing.Optional[int] = None, penalty_alpha: typing.Optional[float] = None, temperature: typing.Optional[float] = None, top_k: typing.Optional[int] = None, top_p: typing.Optional[float] = None, typical_p: typing.Optional[float] = None, use_cache: typing.Optional[bool] = None)
Parametrization of the text generation process
class huggingface_hub.TextToAudioInput
(inputs: str, parameters: typing.Optional[TextToAudioParameters] = None)
Inputs for Text To Audio inference
class huggingface_hub.TextToAudioOutput
(audio: typing.Any, sampling_rate: float)
Outputs of inference for the Text To Audio task
class huggingface_hub.TextToAudioParameters
(generate_kwargs: typing.Optional[TextToAudioGenerationParameters] = None)
Additional inference parameters for Text To Audio
text_to_image
class huggingface_hub.TextToImageInput
(inputs: str, parameters: typing.Optional[TextToImageParameters] = None)
Inputs for Text To Image inference
class huggingface_hub.TextToImageOutput
(image: typing.Any)
Outputs of inference for the Text To Image task
class huggingface_hub.TextToImageParameters
(guidance_scale: typing.Optional[float] = None, height: typing.Optional[int] = None, negative_prompt: typing.Optional[str] = None, num_inference_steps: typing.Optional[int] = None, scheduler: typing.Optional[str] = None, seed: typing.Optional[int] = None, width: typing.Optional[int] = None)
Additional inference parameters for Text To Image
text_to_speech
class huggingface_hub.TextToSpeechGenerationParameters
(do_sample: typing.Optional[bool] = None, early_stopping: typing.Union[bool, 'TextToSpeechEarlyStoppingEnum', None] = None, epsilon_cutoff: typing.Optional[float] = None, eta_cutoff: typing.Optional[float] = None, max_length: typing.Optional[int] = None, max_new_tokens: typing.Optional[int] = None, min_length: typing.Optional[int] = None, min_new_tokens: typing.Optional[int] = None, num_beam_groups: typing.Optional[int] = None, num_beams: typing.Optional[int] = None, penalty_alpha: typing.Optional[float] = None, temperature: typing.Optional[float] = None, top_k: typing.Optional[int] = None, top_p: typing.Optional[float] = None, typical_p: typing.Optional[float] = None, use_cache: typing.Optional[bool] = None)
Parametrization of the text generation process
class huggingface_hub.TextToSpeechInput
(inputs: str, parameters: typing.Optional[TextToSpeechParameters] = None)
Inputs for Text To Speech inference
class huggingface_hub.TextToSpeechOutput
(audio: typing.Any, sampling_rate: typing.Optional[float] = None)
Outputs of inference for the Text To Speech task
class huggingface_hub.TextToSpeechParameters
(generate_kwargs: typing.Optional[TextToSpeechGenerationParameters] = None)
Additional inference parameters for Text To Speech
text_to_video
class huggingface_hub.TextToVideoInput
(inputs: str, parameters: typing.Optional[TextToVideoParameters] = None)
Inputs for Text To Video inference
class huggingface_hub.TextToVideoOutput
(video: typing.Any)
Outputs of inference for the Text To Video task
class huggingface_hub.TextToVideoParameters
(guidance_scale: typing.Optional[float] = None, negative_prompt: typing.Optional[typing.List[str]] = None, num_frames: typing.Optional[float] = None, num_inference_steps: typing.Optional[int] = None, seed: typing.Optional[int] = None)
Additional inference parameters for Text To Video
token_classification
class huggingface_hub.TokenClassificationInput
(inputs: str, parameters: typing.Optional[TokenClassificationParameters] = None)
Inputs for Token Classification inference
class huggingface_hub.TokenClassificationOutputElement
(end: int, score: float, start: int, word: str, entity: typing.Optional[str] = None, entity_group: typing.Optional[str] = None)
Outputs of inference for the Token Classification task
class huggingface_hub.TokenClassificationParameters
(aggregation_strategy: typing.Optional['TokenClassificationAggregationStrategy'] = None, ignore_labels: typing.Optional[typing.List[str]] = None, stride: typing.Optional[int] = None)
Additional inference parameters for Token Classification
translation
class huggingface_hub.TranslationInput
(inputs: str, parameters: typing.Optional[TranslationParameters] = None)
Inputs for Translation inference
class huggingface_hub.TranslationOutput
(translation_text: str)
Outputs of inference for the Translation task
class huggingface_hub.TranslationParameters
(clean_up_tokenization_spaces: typing.Optional[bool] = None, generate_parameters: typing.Optional[typing.Dict[str, typing.Any]] = None, src_lang: typing.Optional[str] = None, tgt_lang: typing.Optional[str] = None, truncation: typing.Optional['TranslationTruncationStrategy'] = None)
Additional inference parameters for Translation
video_classification
class huggingface_hub.VideoClassificationInput
(inputs: typing.Any, parameters: typing.Optional[VideoClassificationParameters] = None)
Inputs for Video Classification inference
class huggingface_hub.VideoClassificationOutputElement
(label: str, score: float)
Outputs of inference for the Video Classification task
class huggingface_hub.VideoClassificationParameters
(frame_sampling_rate: typing.Optional[int] = None, function_to_apply: typing.Optional['VideoClassificationOutputTransform'] = None, num_frames: typing.Optional[int] = None, top_k: typing.Optional[int] = None)
Additional inference parameters for Video Classification
visual_question_answering
class huggingface_hub.VisualQuestionAnsweringInput
(inputs: VisualQuestionAnsweringInputData, parameters: typing.Optional[VisualQuestionAnsweringParameters] = None)
Inputs for Visual Question Answering inference
class huggingface_hub.VisualQuestionAnsweringInputData
(image: typing.Any, question: str)
One (image, question) pair to answer
class huggingface_hub.VisualQuestionAnsweringOutputElement
(score: float, answer: typing.Optional[str] = None)
Outputs of inference for the Visual Question Answering task
class huggingface_hub.VisualQuestionAnsweringParameters
(top_k: typing.Optional[int] = None)
Additional inference parameters for Visual Question Answering
zero_shot_classification
class huggingface_hub.ZeroShotClassificationInput
(inputs: str, parameters: ZeroShotClassificationParameters)
Inputs for Zero Shot Classification inference
class huggingface_hub.ZeroShotClassificationOutputElement
(label: str, score: float)
Outputs of inference for the Zero Shot Classification task
class huggingface_hub.ZeroShotClassificationParameters
(candidate_labels: typing.List[str], hypothesis_template: typing.Optional[str] = None, multi_label: typing.Optional[bool] = None)
Additional inference parameters for Zero Shot Classification
zero_shot_image_classification
class huggingface_hub.ZeroShotImageClassificationInput
(inputs: str, parameters: ZeroShotImageClassificationParameters)
Inputs for Zero Shot Image Classification inference
class huggingface_hub.ZeroShotImageClassificationOutputElement
(label: str, score: float)
Outputs of inference for the Zero Shot Image Classification task
class huggingface_hub.ZeroShotImageClassificationParameters
(candidate_labels: typing.List[str], hypothesis_template: typing.Optional[str] = None)
Additional inference parameters for Zero Shot Image Classification
zero_shot_object_detection
class huggingface_hub.ZeroShotObjectDetectionBoundingBox
(xmax: int, xmin: int, ymax: int, ymin: int)
The predicted bounding box. Coordinates are relative to the top left corner of the input image.
class huggingface_hub.ZeroShotObjectDetectionInput
(inputs: str, parameters: ZeroShotObjectDetectionParameters)
Inputs for Zero Shot Object Detection inference
class huggingface_hub.ZeroShotObjectDetectionOutputElement
(box: ZeroShotObjectDetectionBoundingBox, label: str, score: float)
Outputs of inference for the Zero Shot Object Detection task
class huggingface_hub.ZeroShotObjectDetectionParameters
(candidate_labels: typing.List[str])
Additional inference parameters for Zero Shot Object Detection