Inference

Constructor

::: new Inference(transport: Transport); :::
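In application code you rarely construct Inference with a Transport yourself; the namespace is exposed as client.inference on a Client instance. A minimal sketch, with a placeholder node URL and API key:

```ts
import { Client } from '@elastic/elasticsearch'

// Placeholder connection details -- substitute your own cluster URL and credentials.
const client = new Client({
  node: 'https://localhost:9200',
  auth: { apiKey: 'YOUR_API_KEY' }
})

// Every method documented below is then available on the `inference` namespace,
// for example client.inference.get() or client.inference.put().
```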

Properties

| Name | Type | Description |
| --- | --- | --- |
| acceptedParams | Record<string, { path: string[]; body: string[]; query: string[]; }> |  |
| transport | Transport |  |

Methods

Name Signature Description
chatCompletionUnified chatCompletionUnified(this: [That](./That.md), params: [InferenceChatCompletionUnifiedRequest](./InferenceChatCompletionUnifiedRequest.md), options?: [TransportRequestOptionsWithOutMeta](./TransportRequestOptionsWithOutMeta.md)): Promise<[InferenceChatCompletionUnifiedResponse](./InferenceChatCompletionUnifiedResponse.md)>; Perform chat completion inference The chat completion inference API enables real-time responses for chat completion tasks by delivering answers incrementally, reducing response times during computation. It only works with the chat_completion task type for openai and elastic inference services. IMPORTANT: The inference APIs enable you to use certain services, such as built-in machine learning models (ELSER, E5), models uploaded through Eland, Cohere, OpenAI, Azure, Google AI Studio, Google Vertex AI, Anthropic, Watsonx.ai, or Hugging Face. For built-in models and models uploaded through Eland, the inference APIs offer an alternative way to use and manage trained models. However, if you do not plan to use the inference APIs to use these models or if you want to use non-NLP models, use the machine learning trained model APIs. NOTE: The chat_completion task type is only available within the _stream API and only supports streaming. The Chat completion inference API and the Stream inference API differ in their response structure and capabilities. The Chat completion inference API provides more comprehensive customization options through more fields and function calling support. If you use the openai service or the elastic service, use the Chat completion inference API.
chatCompletionUnified chatCompletionUnified(this: [That](./That.md), params: [InferenceChatCompletionUnifiedRequest](./InferenceChatCompletionUnifiedRequest.md), options?: [TransportRequestOptionsWithMeta](./TransportRequestOptionsWithMeta.md)): Promise<[TransportResult](./TransportResult.md)<[InferenceChatCompletionUnifiedResponse](./InferenceChatCompletionUnifiedResponse.md), unknown>>;  
chatCompletionUnified chatCompletionUnified(this: [That](./That.md), params: [InferenceChatCompletionUnifiedRequest](./InferenceChatCompletionUnifiedRequest.md), options?: [TransportRequestOptions](./TransportRequestOptions.md)): Promise<[InferenceChatCompletionUnifiedResponse](./InferenceChatCompletionUnifiedResponse.md)>;  
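A hedged sketch of a unified chat completion call. The endpoint ID is a placeholder, and the nesting of the message payload under chat_completion_request is an assumption; check InferenceChatCompletionUnifiedRequest for the exact body shape. Because chat_completion only supports streaming, the asStream transport option is used to consume the incremental events.

```ts
// Assumes `client` is an @elastic/elasticsearch Client and that an endpoint with the
// chat_completion task type (e.g. on the openai or elastic service) already exists.
const events = await client.inference.chatCompletionUnified(
  {
    inference_id: 'my-chat-endpoint',   // placeholder endpoint ID
    chat_completion_request: {          // assumed body wrapper -- verify against the request type
      messages: [{ role: 'user', content: 'What is the capital of France?' }]
    }
  },
  { asStream: true }                    // stream the answer as it is generated
)

for await (const chunk of events as unknown as AsyncIterable<Buffer>) {
  process.stdout.write(chunk.toString())
}
```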
completion completion(this: [That](./That.md), params: [InferenceCompletionRequest](./InferenceCompletionRequest.md), options?: [TransportRequestOptionsWithOutMeta](./TransportRequestOptionsWithOutMeta.md)): Promise<[InferenceCompletionResponse](./InferenceCompletionResponse.md)>; Perform completion inference on the service
completion completion(this: [That](./That.md), params: [InferenceCompletionRequest](./InferenceCompletionRequest.md), options?: [TransportRequestOptionsWithMeta](./TransportRequestOptionsWithMeta.md)): Promise<[TransportResult](./TransportResult.md)<[InferenceCompletionResponse](./InferenceCompletionResponse.md), unknown>>;  
completion completion(this: [That](./That.md), params: [InferenceCompletionRequest](./InferenceCompletionRequest.md), options?: [TransportRequestOptions](./TransportRequestOptions.md)): Promise<[InferenceCompletionResponse](./InferenceCompletionResponse.md)>;  
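A minimal sketch of a non-streaming completion call against an existing endpoint; the endpoint ID is a placeholder and the request shape (inference_id plus input) is assumed from InferenceCompletionRequest.

```ts
const completion = await client.inference.completion({
  inference_id: 'my-completion-endpoint',   // placeholder: an endpoint with the completion task type
  input: 'Summarize the benefits of semantic search in one sentence.'
})
console.log(completion)
```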
delete delete(this: [That](./That.md), params: [InferenceDeleteRequest](./InferenceDeleteRequest.md), options?: [TransportRequestOptionsWithOutMeta](./TransportRequestOptionsWithOutMeta.md)): Promise<[InferenceDeleteResponse](./InferenceDeleteResponse.md)>; Delete an inference endpoint
delete delete(this: [That](./That.md), params: [InferenceDeleteRequest](./InferenceDeleteRequest.md), options?: [TransportRequestOptionsWithMeta](./TransportRequestOptionsWithMeta.md)): Promise<[TransportResult](./TransportResult.md)<[InferenceDeleteResponse](./InferenceDeleteResponse.md), unknown>>;  
delete delete(this: [That](./That.md), params: [InferenceDeleteRequest](./InferenceDeleteRequest.md), options?: [TransportRequestOptions](./TransportRequestOptions.md)): Promise<[InferenceDeleteResponse](./InferenceDeleteResponse.md)>;  
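A sketch of deleting an endpoint. The dry_run and force flags correspond to the delete inference API's query parameters; the endpoint ID is a placeholder.

```ts
await client.inference.delete({
  inference_id: 'my-completion-endpoint',
  // dry_run: true,  // list the fields and pipelines that reference the endpoint without deleting it
  // force: true     // delete even if the endpoint is still referenced
})
```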
get get(this: [That](./That.md), params?: [InferenceGetRequest](./InferenceGetRequest.md), options?: [TransportRequestOptionsWithOutMeta](./TransportRequestOptionsWithOutMeta.md)): Promise<[InferenceGetResponse](./InferenceGetResponse.md)>; Get an inference endpoint
get get(this: [That](./That.md), params?: [InferenceGetRequest](./InferenceGetRequest.md), options?: [TransportRequestOptionsWithMeta](./TransportRequestOptionsWithMeta.md)): Promise<[TransportResult](./TransportResult.md)<[InferenceGetResponse](./InferenceGetResponse.md), unknown>>;  
get get(this: [That](./That.md), params?: [InferenceGetRequest](./InferenceGetRequest.md), options?: [TransportRequestOptions](./TransportRequestOptions.md)): Promise<[InferenceGetResponse](./InferenceGetResponse.md)>;  
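Because params is optional, get can list every endpoint or fetch a single one; the endpoint ID below is a placeholder.

```ts
// List all inference endpoints in the cluster.
const all = await client.inference.get()

// Fetch one endpoint by ID (a task_type can also be supplied to narrow the lookup).
const one = await client.inference.get({ inference_id: 'my-elser-endpoint' })
```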
inference inference(this: [That](./That.md), params: [InferenceInferenceRequest](./InferenceInferenceRequest.md), options?: [TransportRequestOptionsWithOutMeta](./TransportRequestOptionsWithOutMeta.md)): Promise<[InferenceInferenceResponse](./InferenceInferenceResponse.md)>; Perform inference on the service. This API enables you to use machine learning models to perform specific tasks on data that you provide as an input. It returns a response with the results of the tasks. The inference endpoint you use can perform one specific task that has been defined when the endpoint was created with the create inference API. For details about using this API with a service, such as Amazon Bedrock, Anthropic, or Hugging Face, refer to the service-specific documentation. > info > The inference APIs enable you to use certain services, such as built-in machine learning models (ELSER, E5), models uploaded through Eland, Cohere, OpenAI, Azure, Google AI Studio, Google Vertex AI, Anthropic, Watsonx.ai, or Hugging Face. For built-in models and models uploaded through Eland, the inference APIs offer an alternative way to use and manage trained models. However, if you do not plan to use the inference APIs to use these models or if you want to use non-NLP models, use the machine learning trained model APIs.
inference inference(this: [That](./That.md), params: [InferenceInferenceRequest](./InferenceInferenceRequest.md), options?: [TransportRequestOptionsWithMeta](./TransportRequestOptionsWithMeta.md)): Promise<[TransportResult](./TransportResult.md)<[InferenceInferenceResponse](./InferenceInferenceResponse.md), unknown>>;  
inference inference(this: [That](./That.md), params: [InferenceInferenceRequest](./InferenceInferenceRequest.md), options?: [TransportRequestOptions](./TransportRequestOptions.md)): Promise<[InferenceInferenceResponse](./InferenceInferenceResponse.md)>;  
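A sketch of the generic inference call. task_type is optional when the endpoint already fixes it; the endpoint ID and input are placeholders.

```ts
const result = await client.inference.inference({
  task_type: 'text_embedding',             // optional if the endpoint's task type is unambiguous
  inference_id: 'my-embedding-endpoint',   // placeholder endpoint ID
  input: ['The quick brown fox', 'jumps over the lazy dog']
})
```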
put put(this: [That](./That.md), params: [InferencePutRequest](./InferencePutRequest.md), options?: [TransportRequestOptionsWithOutMeta](./TransportRequestOptionsWithOutMeta.md)): Promise<[InferencePutResponse](./InferencePutResponse.md)>; Create an inference endpoint. IMPORTANT: The inference APIs enable you to use certain services, such as built-in machine learning models (ELSER, E5), models uploaded through Eland, Cohere, OpenAI, Mistral, Azure OpenAI, Google AI Studio, Google Vertex AI, Anthropic, Watsonx.ai, or Hugging Face. For built-in models and models uploaded through Eland, the inference APIs offer an alternative way to use and manage trained models. However, if you do not plan to use the inference APIs to use these models or if you want to use non-NLP models, use the machine learning trained model APIs.
put put(this: [That](./That.md), params: [InferencePutRequest](./InferencePutRequest.md), options?: [TransportRequestOptionsWithMeta](./TransportRequestOptionsWithMeta.md)): Promise<[TransportResult](./TransportResult.md)<[InferencePutResponse](./InferencePutResponse.md), unknown>>;  
put put(this: [That](./That.md), params: [InferencePutRequest](./InferencePutRequest.md), options?: [TransportRequestOptions](./TransportRequestOptions.md)): Promise<[InferencePutResponse](./InferencePutResponse.md)>;  
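A sketch of creating an endpoint with the generic put method, here deploying ELSER through the elasticsearch service. The inference_config wrapper follows InferencePutRequest as assumed here; the endpoint ID and service settings are assumptions to adapt to your cluster.

```ts
await client.inference.put({
  task_type: 'sparse_embedding',
  inference_id: 'my-elser-endpoint',   // placeholder endpoint ID
  inference_config: {
    service: 'elasticsearch',
    service_settings: {
      model_id: '.elser_model_2',      // assumed built-in ELSER model ID
      num_allocations: 1,
      num_threads: 1
    }
  }
})
```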
putAlibabacloud putAlibabacloud(this: [That](./That.md), params: [InferencePutAlibabacloudRequest](./InferencePutAlibabacloudRequest.md), options?: [TransportRequestOptionsWithOutMeta](./TransportRequestOptionsWithOutMeta.md)): Promise<[InferencePutAlibabacloudResponse](./InferencePutAlibabacloudResponse.md)>; Create an AlibabaCloud AI Search inference endpoint. Create an inference endpoint to perform an inference task with the alibabacloud-ai-search service.
putAlibabacloud putAlibabacloud(this: [That](./That.md), params: [InferencePutAlibabacloudRequest](./InferencePutAlibabacloudRequest.md), options?: [TransportRequestOptionsWithMeta](./TransportRequestOptionsWithMeta.md)): Promise<[TransportResult](./TransportResult.md)<[InferencePutAlibabacloudResponse](./InferencePutAlibabacloudResponse.md), unknown>>;  
putAlibabacloud putAlibabacloud(this: [That](./That.md), params: [InferencePutAlibabacloudRequest](./InferencePutAlibabacloudRequest.md), options?: [TransportRequestOptions](./TransportRequestOptions.md)): Promise<[InferencePutAlibabacloudResponse](./InferencePutAlibabacloudResponse.md)>;  
putAmazonbedrock putAmazonbedrock(this: [That](./That.md), params: [InferencePutAmazonbedrockRequest](./InferencePutAmazonbedrockRequest.md), options?: [TransportRequestOptionsWithOutMeta](./TransportRequestOptionsWithOutMeta.md)): Promise<[InferencePutAmazonbedrockResponse](./InferencePutAmazonbedrockResponse.md)>; Create an Amazon Bedrock inference endpoint. Creates an inference endpoint to perform an inference task with the amazonbedrock service. > info > You need to provide the access and secret keys only once, during the inference model creation. The get inference API does not retrieve your access or secret keys. After creating the inference model, you cannot change the associated key pairs. If you want to use a different access and secret key pair, delete the inference model and recreate it with the same name and the updated keys.
putAmazonbedrock putAmazonbedrock(this: [That](./That.md), params: [InferencePutAmazonbedrockRequest](./InferencePutAmazonbedrockRequest.md), options?: [TransportRequestOptionsWithMeta](./TransportRequestOptionsWithMeta.md)): Promise<[TransportResult](./TransportResult.md)<[InferencePutAmazonbedrockResponse](./InferencePutAmazonbedrockResponse.md), unknown>>;  
putAmazonbedrock putAmazonbedrock(this: [That](./That.md), params: [InferencePutAmazonbedrockRequest](./InferencePutAmazonbedrockRequest.md), options?: [TransportRequestOptions](./TransportRequestOptions.md)): Promise<[InferencePutAmazonbedrockResponse](./InferencePutAmazonbedrockResponse.md)>;  
putAnthropic putAnthropic(this: [That](./That.md), params: [InferencePutAnthropicRequest](./InferencePutAnthropicRequest.md), options?: [TransportRequestOptionsWithOutMeta](./TransportRequestOptionsWithOutMeta.md)): Promise<[InferencePutAnthropicResponse](./InferencePutAnthropicResponse.md)>; Create an Anthropic inference endpoint. Create an inference endpoint to perform an inference task with the anthropic service.
putAnthropic putAnthropic(this: [That](./That.md), params: [InferencePutAnthropicRequest](./InferencePutAnthropicRequest.md), options?: [TransportRequestOptionsWithMeta](./TransportRequestOptionsWithMeta.md)): Promise<[TransportResult](./TransportResult.md)<[InferencePutAnthropicResponse](./InferencePutAnthropicResponse.md), unknown>>;  
putAnthropic putAnthropic(this: [That](./That.md), params: [InferencePutAnthropicRequest](./InferencePutAnthropicRequest.md), options?: [TransportRequestOptions](./TransportRequestOptions.md)): Promise<[InferencePutAnthropicResponse](./InferencePutAnthropicResponse.md)>;  
putAzureaistudio putAzureaistudio(this: [That](./That.md), params: [InferencePutAzureaistudioRequest](./InferencePutAzureaistudioRequest.md), options?: [TransportRequestOptionsWithOutMeta](./TransportRequestOptionsWithOutMeta.md)): Promise<[InferencePutAzureaistudioResponse](./InferencePutAzureaistudioResponse.md)>; Create an Azure AI Studio inference endpoint. Create an inference endpoint to perform an inference task with the azureaistudio service.
putAzureaistudio putAzureaistudio(this: [That](./That.md), params: [InferencePutAzureaistudioRequest](./InferencePutAzureaistudioRequest.md), options?: [TransportRequestOptionsWithMeta](./TransportRequestOptionsWithMeta.md)): Promise<[TransportResult](./TransportResult.md)<[InferencePutAzureaistudioResponse](./InferencePutAzureaistudioResponse.md), unknown>>;  
putAzureaistudio putAzureaistudio(this: [That](./That.md), params: [InferencePutAzureaistudioRequest](./InferencePutAzureaistudioRequest.md), options?: [TransportRequestOptions](./TransportRequestOptions.md)): Promise<[InferencePutAzureaistudioResponse](./InferencePutAzureaistudioResponse.md)>;  
putAzureopenai putAzureopenai(this: [That](./That.md), params: [InferencePutAzureopenaiRequest](./InferencePutAzureopenaiRequest.md), options?: [TransportRequestOptionsWithOutMeta](./TransportRequestOptionsWithOutMeta.md)): Promise<[InferencePutAzureopenaiResponse](./InferencePutAzureopenaiResponse.md)>; Create an Azure OpenAI inference endpoint. Create an inference endpoint to perform an inference task with the azureopenai service. The list of chat completion models that you can choose from in your Azure OpenAI deployment includes: * GPT-4 and GPT-4 Turbo models * GPT-3.5 The list of embeddings models that you can choose from in your deployment can be found in the Azure models documentation.
putAzureopenai putAzureopenai(this: [That](./That.md), params: [InferencePutAzureopenaiRequest](./InferencePutAzureopenaiRequest.md), options?: [TransportRequestOptionsWithMeta](./TransportRequestOptionsWithMeta.md)): Promise<[TransportResult](./TransportResult.md)<[InferencePutAzureopenaiResponse](./InferencePutAzureopenaiResponse.md), unknown>>;  
putAzureopenai putAzureopenai(this: [That](./That.md), params: [InferencePutAzureopenaiRequest](./InferencePutAzureopenaiRequest.md), options?: [TransportRequestOptions](./TransportRequestOptions.md)): Promise<[InferencePutAzureopenaiResponse](./InferencePutAzureopenaiResponse.md)>;  
putCohere putCohere(this: [That](./That.md), params: [InferencePutCohereRequest](./InferencePutCohereRequest.md), options?: [TransportRequestOptionsWithOutMeta](./TransportRequestOptionsWithOutMeta.md)): Promise<[InferencePutCohereResponse](./InferencePutCohereResponse.md)>; Create a Cohere inference endpoint. Create an inference endpoint to perform an inference task with the cohere service.
putCohere putCohere(this: [That](./That.md), params: [InferencePutCohereRequest](./InferencePutCohereRequest.md), options?: [TransportRequestOptionsWithMeta](./TransportRequestOptionsWithMeta.md)): Promise<[TransportResult](./TransportResult.md)<[InferencePutCohereResponse](./InferencePutCohereResponse.md), unknown>>;  
putCohere putCohere(this: [That](./That.md), params: [InferencePutCohereRequest](./InferencePutCohereRequest.md), options?: [TransportRequestOptions](./TransportRequestOptions.md)): Promise<[InferencePutCohereResponse](./InferencePutCohereResponse.md)>;  
putElasticsearch putElasticsearch(this: [That](./That.md), params: [InferencePutElasticsearchRequest](./InferencePutElasticsearchRequest.md), options?: [TransportRequestOptionsWithOutMeta](./TransportRequestOptionsWithOutMeta.md)): Promise<[InferencePutElasticsearchResponse](./InferencePutElasticsearchResponse.md)>; Create an Elasticsearch inference endpoint. Create an inference endpoint to perform an inference task with the elasticsearch service. > info > Your Elasticsearch deployment contains preconfigured ELSER and E5 inference endpoints; you only need to create endpoints using the API if you want to customize the settings. If you use the ELSER or the E5 model through the elasticsearch service, the API request will automatically download and deploy the model if it isn't downloaded yet. > info > You might see a 502 bad gateway error in the response when using the Kibana Console. This error usually just reflects a timeout while the model downloads in the background. You can check the download progress in the Machine Learning UI. If using the Python client, you can set the timeout parameter to a higher value. After creating the endpoint, wait for the model deployment to complete before using it. To verify the deployment status, use the get trained model statistics API. Look for "state": "fully_allocated" in the response and ensure that the "allocation_count" matches the "target_allocation_count". Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.
putElasticsearch putElasticsearch(this: [That](./That.md), params: [InferencePutElasticsearchRequest](./InferencePutElasticsearchRequest.md), options?: [TransportRequestOptionsWithMeta](./TransportRequestOptionsWithMeta.md)): Promise<[TransportResult](./TransportResult.md)<[InferencePutElasticsearchResponse](./InferencePutElasticsearchResponse.md), unknown>>;  
putElasticsearch putElasticsearch(this: [That](./That.md), params: [InferencePutElasticsearchRequest](./InferencePutElasticsearchRequest.md), options?: [TransportRequestOptions](./TransportRequestOptions.md)): Promise<[InferencePutElasticsearchResponse](./InferencePutElasticsearchResponse.md)>;  
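A sketch of deploying the built-in E5 model through the elasticsearch service. The elasticsearch_inference_id parameter name follows the request type's naming convention and, like the model ID and allocation settings, is an assumption to verify against InferencePutElasticsearchRequest.

```ts
await client.inference.putElasticsearch({
  task_type: 'text_embedding',
  elasticsearch_inference_id: 'my-e5-endpoint',   // assumed parameter name, placeholder ID
  service: 'elasticsearch',
  service_settings: {
    model_id: '.multilingual-e5-small',           // built-in E5 model; downloaded on first use
    num_allocations: 1,
    num_threads: 1
  }
})
```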
putElser putElser(this: [That](./That.md), params: [InferencePutElserRequest](./InferencePutElserRequest.md), options?: [TransportRequestOptionsWithOutMeta](./TransportRequestOptionsWithOutMeta.md)): Promise<[InferencePutElserResponse](./InferencePutElserResponse.md)>; Create an ELSER inference endpoint. Create an inference endpoint to perform an inference task with the elser service. You can also deploy ELSER by using the Elasticsearch inference integration. > info > Your Elasticsearch deployment contains a preconfigured ELSER inference endpoint; you only need to create an endpoint using the API if you want to customize the settings. The API request will automatically download and deploy the ELSER model if it isn't already downloaded. > info > You might see a 502 bad gateway error in the response when using the Kibana Console. This error usually just reflects a timeout while the model downloads in the background. You can check the download progress in the Machine Learning UI. If using the Python client, you can set the timeout parameter to a higher value. After creating the endpoint, wait for the model deployment to complete before using it. To verify the deployment status, use the get trained model statistics API. Look for "state": "fully_allocated" in the response and ensure that the "allocation_count" matches the "target_allocation_count". Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.
putElser putElser(this: [That](./That.md), params: [InferencePutElserRequest](./InferencePutElserRequest.md), options?: [TransportRequestOptionsWithMeta](./TransportRequestOptionsWithMeta.md)): Promise<[TransportResult](./TransportResult.md)<[InferencePutElserResponse](./InferencePutElserResponse.md), unknown>>;  
putElser putElser(this: [That](./That.md), params: [InferencePutElserRequest](./InferencePutElserRequest.md), options?: [TransportRequestOptions](./TransportRequestOptions.md)): Promise<[InferencePutElserResponse](./InferencePutElserResponse.md)>;  
putGoogleaistudio putGoogleaistudio(this: [That](./That.md), params: [InferencePutGoogleaistudioRequest](./InferencePutGoogleaistudioRequest.md), options?: [TransportRequestOptionsWithOutMeta](./TransportRequestOptionsWithOutMeta.md)): Promise<[InferencePutGoogleaistudioResponse](./InferencePutGoogleaistudioResponse.md)>; Create a Google AI Studio inference endpoint. Create an inference endpoint to perform an inference task with the googleaistudio service.
putGoogleaistudio putGoogleaistudio(this: [That](./That.md), params: [InferencePutGoogleaistudioRequest](./InferencePutGoogleaistudioRequest.md), options?: [TransportRequestOptionsWithMeta](./TransportRequestOptionsWithMeta.md)): Promise<[TransportResult](./TransportResult.md)<[InferencePutGoogleaistudioResponse](./InferencePutGoogleaistudioResponse.md), unknown>>;  
putGoogleaistudio putGoogleaistudio(this: [That](./That.md), params: [InferencePutGoogleaistudioRequest](./InferencePutGoogleaistudioRequest.md), options?: [TransportRequestOptions](./TransportRequestOptions.md)): Promise<[InferencePutGoogleaistudioResponse](./InferencePutGoogleaistudioResponse.md)>;  
putGooglevertexai putGooglevertexai(this: [That](./That.md), params: [InferencePutGooglevertexaiRequest](./InferencePutGooglevertexaiRequest.md), options?: [TransportRequestOptionsWithOutMeta](./TransportRequestOptionsWithOutMeta.md)): Promise<[InferencePutGooglevertexaiResponse](./InferencePutGooglevertexaiResponse.md)>; Create a Google Vertex AI inference endpoint. Create an inference endpoint to perform an inference task with the googlevertexai service.
putGooglevertexai putGooglevertexai(this: [That](./That.md), params: [InferencePutGooglevertexaiRequest](./InferencePutGooglevertexaiRequest.md), options?: [TransportRequestOptionsWithMeta](./TransportRequestOptionsWithMeta.md)): Promise<[TransportResult](./TransportResult.md)<[InferencePutGooglevertexaiResponse](./InferencePutGooglevertexaiResponse.md), unknown>>;  
putGooglevertexai putGooglevertexai(this: [That](./That.md), params: [InferencePutGooglevertexaiRequest](./InferencePutGooglevertexaiRequest.md), options?: [TransportRequestOptions](./TransportRequestOptions.md)): Promise<[InferencePutGooglevertexaiResponse](./InferencePutGooglevertexaiResponse.md)>;  
putHuggingFace putHuggingFace(this: [That](./That.md), params: [InferencePutHuggingFaceRequest](./InferencePutHuggingFaceRequest.md), options?: [TransportRequestOptionsWithOutMeta](./TransportRequestOptionsWithOutMeta.md)): Promise<[InferencePutHuggingFaceResponse](./InferencePutHuggingFaceResponse.md)>; Create a Hugging Face inference endpoint. Create an inference endpoint to perform an inference task with the hugging_face service. You must first create an inference endpoint on the Hugging Face endpoint page to get an endpoint URL. Select the model you want to use on the new endpoint creation page (for example, intfloat/e5-small-v2), then select the sentence embeddings task under the advanced configuration section. Create the endpoint and copy the URL after the endpoint initialization has finished. The following models are recommended for the Hugging Face service: * all-MiniLM-L6-v2 * all-MiniLM-L12-v2 * all-mpnet-base-v2 * e5-base-v2 * e5-small-v2 * multilingual-e5-base * multilingual-e5-small
putHuggingFace putHuggingFace(this: [That](./That.md), params: [InferencePutHuggingFaceRequest](./InferencePutHuggingFaceRequest.md), options?: [TransportRequestOptionsWithMeta](./TransportRequestOptionsWithMeta.md)): Promise<[TransportResult](./TransportResult.md)<[InferencePutHuggingFaceResponse](./InferencePutHuggingFaceResponse.md), unknown>>;  
putHuggingFace putHuggingFace(this: [That](./That.md), params: [InferencePutHuggingFaceRequest](./InferencePutHuggingFaceRequest.md), options?: [TransportRequestOptions](./TransportRequestOptions.md)): Promise<[InferencePutHuggingFaceResponse](./InferencePutHuggingFaceResponse.md)>;  
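A sketch of registering a Hugging Face endpoint using the URL copied from the Hugging Face endpoint page. The huggingface_inference_id parameter name, the URL, and the access token are placeholders/assumptions; verify the field names against InferencePutHuggingFaceRequest.

```ts
await client.inference.putHuggingFace({
  task_type: 'text_embedding',
  huggingface_inference_id: 'my-hugging-face-endpoint',       // assumed parameter name, placeholder ID
  service: 'hugging_face',
  service_settings: {
    api_key: 'HUGGING_FACE_ACCESS_TOKEN',                      // placeholder token
    url: 'https://your-endpoint.endpoints.huggingface.cloud'   // URL copied after endpoint initialization
  }
})
```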
putJinaai putJinaai(this: [That](./That.md), params: [InferencePutJinaaiRequest](./InferencePutJinaaiRequest.md), options?: [TransportRequestOptionsWithOutMeta](./TransportRequestOptionsWithOutMeta.md)): Promise<[InferencePutJinaaiResponse](./InferencePutJinaaiResponse.md)>; Create a JinaAI inference endpoint. Create an inference endpoint to perform an inference task with the jinaai service. To review the available rerank models, refer to https://jina.ai/reranker. To review the available text_embedding models, refer to https://jina.ai/embeddings/.
putJinaai putJinaai(this: [That](./That.md), params: [InferencePutJinaaiRequest](./InferencePutJinaaiRequest.md), options?: [TransportRequestOptionsWithMeta](./TransportRequestOptionsWithMeta.md)): Promise<[TransportResult](./TransportResult.md)<[InferencePutJinaaiResponse](./InferencePutJinaaiResponse.md), unknown>>;  
putJinaai putJinaai(this: [That](./That.md), params: [InferencePutJinaaiRequest](./InferencePutJinaaiRequest.md), options?: [TransportRequestOptions](./TransportRequestOptions.md)): Promise<[InferencePutJinaaiResponse](./InferencePutJinaaiResponse.md)>;  
putMistral putMistral(this: [That](./That.md), params: [InferencePutMistralRequest](./InferencePutMistralRequest.md), options?: [TransportRequestOptionsWithOutMeta](./TransportRequestOptionsWithOutMeta.md)): Promise<[InferencePutMistralResponse](./InferencePutMistralResponse.md)>; Create a Mistral inference endpoint. Creates an inference endpoint to perform an inference task with the mistral service.
putMistral putMistral(this: [That](./That.md), params: [InferencePutMistralRequest](./InferencePutMistralRequest.md), options?: [TransportRequestOptionsWithMeta](./TransportRequestOptionsWithMeta.md)): Promise<[TransportResult](./TransportResult.md)<[InferencePutMistralResponse](./InferencePutMistralResponse.md), unknown>>;  
putMistral putMistral(this: [That](./That.md), params: [InferencePutMistralRequest](./InferencePutMistralRequest.md), options?: [TransportRequestOptions](./TransportRequestOptions.md)): Promise<[InferencePutMistralResponse](./InferencePutMistralResponse.md)>;  
putOpenai putOpenai(this: [That](./That.md), params: [InferencePutOpenaiRequest](./InferencePutOpenaiRequest.md), options?: [TransportRequestOptionsWithOutMeta](./TransportRequestOptionsWithOutMeta.md)): Promise<[InferencePutOpenaiResponse](./InferencePutOpenaiResponse.md)>; Create an OpenAI inference endpoint. Create an inference endpoint to perform an inference task with the openai service or openai compatible APIs.
putOpenai putOpenai(this: [That](./That.md), params: [InferencePutOpenaiRequest](./InferencePutOpenaiRequest.md), options?: [TransportRequestOptionsWithMeta](./TransportRequestOptionsWithMeta.md)): Promise<[TransportResult](./TransportResult.md)<[InferencePutOpenaiResponse](./InferencePutOpenaiResponse.md), unknown>>;  
putOpenai putOpenai(this: [That](./That.md), params: [InferencePutOpenaiRequest](./InferencePutOpenaiRequest.md), options?: [TransportRequestOptions](./TransportRequestOptions.md)): Promise<[InferencePutOpenaiResponse](./InferencePutOpenaiResponse.md)>;  
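A sketch of creating an OpenAI completion endpoint. The openai_inference_id parameter name follows the request type's convention, and the model ID and key are placeholders; check InferencePutOpenaiRequest for the exact service settings.

```ts
await client.inference.putOpenai({
  task_type: 'completion',
  openai_inference_id: 'my-openai-endpoint',   // assumed parameter name, placeholder ID
  service: 'openai',
  service_settings: {
    api_key: 'OPENAI_API_KEY',                 // placeholder secret
    model_id: 'gpt-4o-mini'                    // any OpenAI model suitable for the task type
  }
})
```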
putVoyageai putVoyageai(this: [That](./That.md), params: [InferencePutVoyageaiRequest](./InferencePutVoyageaiRequest.md), options?: [TransportRequestOptionsWithOutMeta](./TransportRequestOptionsWithOutMeta.md)): Promise<[InferencePutVoyageaiResponse](./InferencePutVoyageaiResponse.md)>; Create a VoyageAI inference endpoint. Create an inference endpoint to perform an inference task with the voyageai service. Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.
putVoyageai putVoyageai(this: [That](./That.md), params: [InferencePutVoyageaiRequest](./InferencePutVoyageaiRequest.md), options?: [TransportRequestOptionsWithMeta](./TransportRequestOptionsWithMeta.md)): Promise<[TransportResult](./TransportResult.md)<[InferencePutVoyageaiResponse](./InferencePutVoyageaiResponse.md), unknown>>;  
putVoyageai putVoyageai(this: [That](./That.md), params: [InferencePutVoyageaiRequest](./InferencePutVoyageaiRequest.md), options?: [TransportRequestOptions](./TransportRequestOptions.md)): Promise<[InferencePutVoyageaiResponse](./InferencePutVoyageaiResponse.md)>;  
putWatsonx putWatsonx(this: [That](./That.md), params: [InferencePutWatsonxRequest](./InferencePutWatsonxRequest.md), options?: [TransportRequestOptionsWithOutMeta](./TransportRequestOptionsWithOutMeta.md)): Promise<[InferencePutWatsonxResponse](./InferencePutWatsonxResponse.md)>; Create a Watsonx inference endpoint. Create an inference endpoint to perform an inference task with the watsonxai service. You need an IBM Cloud Databases for Elasticsearch deployment to use the watsonxai inference service. You can provision one through the IBM catalog, the Cloud Databases CLI plug-in, the Cloud Databases API, or Terraform.
putWatsonx putWatsonx(this: [That](./That.md), params: [InferencePutWatsonxRequest](./InferencePutWatsonxRequest.md), options?: [TransportRequestOptionsWithMeta](./TransportRequestOptionsWithMeta.md)): Promise<[TransportResult](./TransportResult.md)<[InferencePutWatsonxResponse](./InferencePutWatsonxResponse.md), unknown>>;  
putWatsonx putWatsonx(this: [That](./That.md), params: [InferencePutWatsonxRequest](./InferencePutWatsonxRequest.md), options?: [TransportRequestOptions](./TransportRequestOptions.md)): Promise<[InferencePutWatsonxResponse](./InferencePutWatsonxResponse.md)>;  
rerank rerank(this: [That](./That.md), params: [InferenceRerankRequest](./InferenceRerankRequest.md), options?: [TransportRequestOptionsWithOutMeta](./TransportRequestOptionsWithOutMeta.md)): Promise<[InferenceRerankResponse](./InferenceRerankResponse.md)>; Perform reranking inference on the service
rerank rerank(this: [That](./That.md), params: [InferenceRerankRequest](./InferenceRerankRequest.md), options?: [TransportRequestOptionsWithMeta](./TransportRequestOptionsWithMeta.md)): Promise<[TransportResult](./TransportResult.md)<[InferenceRerankResponse](./InferenceRerankResponse.md), unknown>>;  
rerank rerank(this: [That](./That.md), params: [InferenceRerankRequest](./InferenceRerankRequest.md), options?: [TransportRequestOptions](./TransportRequestOptions.md)): Promise<[InferenceRerankResponse](./InferenceRerankResponse.md)>;  
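A sketch of reranking documents against a query with an existing rerank endpoint; the endpoint ID and documents are placeholders, and the query/input field pairing is assumed from InferenceRerankRequest.

```ts
const reranked = await client.inference.rerank({
  inference_id: 'my-rerank-endpoint',          // placeholder: an endpoint with the rerank task type
  query: 'best practices for index sharding',
  input: [
    'Sharding strategies for large Elasticsearch clusters',
    'A recipe for sourdough bread',
    'How to size primary shards and replicas'
  ]
})
```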
sparseEmbedding sparseEmbedding(this: [That](./That.md), params: [InferenceSparseEmbeddingRequest](./InferenceSparseEmbeddingRequest.md), options?: [TransportRequestOptionsWithOutMeta](./TransportRequestOptionsWithOutMeta.md)): Promise<[InferenceSparseEmbeddingResponse](./InferenceSparseEmbeddingResponse.md)>; Perform sparse embedding inference on the service
sparseEmbedding sparseEmbedding(this: [That](./That.md), params: [InferenceSparseEmbeddingRequest](./InferenceSparseEmbeddingRequest.md), options?: [TransportRequestOptionsWithMeta](./TransportRequestOptionsWithMeta.md)): Promise<[TransportResult](./TransportResult.md)<[InferenceSparseEmbeddingResponse](./InferenceSparseEmbeddingResponse.md), unknown>>;  
sparseEmbedding sparseEmbedding(this: [That](./That.md), params: [InferenceSparseEmbeddingRequest](./InferenceSparseEmbeddingRequest.md), options?: [TransportRequestOptions](./TransportRequestOptions.md)): Promise<[InferenceSparseEmbeddingResponse](./InferenceSparseEmbeddingResponse.md)>;  
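A sketch of generating sparse embeddings, for example from an ELSER-backed endpoint; the endpoint ID is a placeholder.

```ts
const sparse = await client.inference.sparseEmbedding({
  inference_id: 'my-elser-endpoint',
  input: 'The quick brown fox jumps over the lazy dog'
})
```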
streamCompletion streamCompletion(this: [That](./That.md), params: [InferenceStreamCompletionRequest](./InferenceStreamCompletionRequest.md), options?: [TransportRequestOptionsWithOutMeta](./TransportRequestOptionsWithOutMeta.md)): Promise<[InferenceStreamCompletionResponse](./InferenceStreamCompletionResponse.md)>; Perform streaming inference. Get real-time responses for completion tasks by delivering answers incrementally, reducing response times during computation. This API works only with the completion task type. IMPORTANT: The inference APIs enable you to use certain services, such as built-in machine learning models (ELSER, E5), models uploaded through Eland, Cohere, OpenAI, Azure, Google AI Studio, Google Vertex AI, Anthropic, Watsonx.ai, or Hugging Face. For built-in models and models uploaded through Eland, the inference APIs offer an alternative way to use and manage trained models. However, if you do not plan to use the inference APIs to use these models or if you want to use non-NLP models, use the machine learning trained model APIs. This API requires the monitor_inference cluster privilege (the built-in inference_admin and inference_user roles grant this privilege). You must use a client that supports streaming.
streamCompletion streamCompletion(this: [That](./That.md), params: [InferenceStreamCompletionRequest](./InferenceStreamCompletionRequest.md), options?: [TransportRequestOptionsWithMeta](./TransportRequestOptionsWithMeta.md)): Promise<[TransportResult](./TransportResult.md)<[InferenceStreamCompletionResponse](./InferenceStreamCompletionResponse.md), unknown>>;  
streamCompletion streamCompletion(this: [That](./That.md), params: [InferenceStreamCompletionRequest](./InferenceStreamCompletionRequest.md), options?: [TransportRequestOptions](./TransportRequestOptions.md)): Promise<[InferenceStreamCompletionResponse](./InferenceStreamCompletionResponse.md)>;  
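A sketch of streaming completion results. The asStream transport option is used so the response can be consumed incrementally as server-sent events; the endpoint ID is a placeholder and the cast reflects that the typed response does not model the raw stream.

```ts
const stream = await client.inference.streamCompletion(
  { inference_id: 'my-completion-endpoint', input: 'Write a haiku about search.' },
  { asStream: true }
)

for await (const chunk of stream as unknown as AsyncIterable<Buffer>) {
  process.stdout.write(chunk.toString())
}
```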
textEmbedding textEmbedding(this: [That](./That.md), params: [InferenceTextEmbeddingRequest](./InferenceTextEmbeddingRequest.md), options?: [TransportRequestOptionsWithOutMeta](./TransportRequestOptionsWithOutMeta.md)): Promise<[InferenceTextEmbeddingResponse](./InferenceTextEmbeddingResponse.md)>; Perform text embedding inference on the service
textEmbedding textEmbedding(this: [That](./That.md), params: [InferenceTextEmbeddingRequest](./InferenceTextEmbeddingRequest.md), options?: [TransportRequestOptionsWithMeta](./TransportRequestOptionsWithMeta.md)): Promise<[TransportResult](./TransportResult.md)<[InferenceTextEmbeddingResponse](./InferenceTextEmbeddingResponse.md), unknown>>;  
textEmbedding textEmbedding(this: [That](./That.md), params: [InferenceTextEmbeddingRequest](./InferenceTextEmbeddingRequest.md), options?: [TransportRequestOptions](./TransportRequestOptions.md)): Promise<[InferenceTextEmbeddingResponse](./InferenceTextEmbeddingResponse.md)>;  
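A sketch of generating dense text embeddings from an existing text_embedding endpoint; the endpoint ID is a placeholder.

```ts
const embeddings = await client.inference.textEmbedding({
  inference_id: 'my-e5-endpoint',
  input: ['first passage to embed', 'second passage to embed']
})
```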
update update(this: [That](./That.md), params: [InferenceUpdateRequest](./InferenceUpdateRequest.md), options?: [TransportRequestOptionsWithOutMeta](./TransportRequestOptionsWithOutMeta.md)): Promise<[InferenceUpdateResponse](./InferenceUpdateResponse.md)>; Update an inference endpoint. Modify task_settings, secrets (within service_settings), or num_allocations for an inference endpoint, depending on the specific endpoint service and task_type. IMPORTANT: The inference APIs enable you to use certain services, such as built-in machine learning models (ELSER, E5), models uploaded through Eland, Cohere, OpenAI, Azure, Google AI Studio, Google Vertex AI, Anthropic, Watsonx.ai, or Hugging Face. For built-in models and models uploaded through Eland, the inference APIs offer an alternative way to use and manage trained models. However, if you do not plan to use the inference APIs to use these models or if you want to use non-NLP models, use the machine learning trained model APIs.
update update(this: [That](./That.md), params: [InferenceUpdateRequest](./InferenceUpdateRequest.md), options?: [TransportRequestOptionsWithMeta](./TransportRequestOptionsWithMeta.md)): Promise<[TransportResult](./TransportResult.md)<[InferenceUpdateResponse](./InferenceUpdateResponse.md), unknown>>;  
update update(this: [That](./That.md), params: [InferenceUpdateRequest](./InferenceUpdateRequest.md), options?: [TransportRequestOptions](./TransportRequestOptions.md)): Promise<[InferenceUpdateResponse](./InferenceUpdateResponse.md)>;
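A sketch of rotating a secret on an existing endpoint with update. The inference_config wrapper mirrors the put request as assumed here; the endpoint ID and key are placeholders, and which fields may be updated depends on the service and task type.

```ts
await client.inference.update({
  inference_id: 'my-openai-endpoint',   // placeholder endpoint ID
  // task_type: 'completion',           // optional, narrows the endpoint lookup
  inference_config: {
    service_settings: {
      api_key: 'NEW_OPENAI_API_KEY'     // placeholder: secrets can be rotated via update
    }
  }
})
```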