Auto-generated API code (#2678)
@@ -7287,7 +7287,7 @@ To unset a version, replace the template without specifying one.
** *`create` (Optional, boolean)*: If true, this request cannot replace or update existing index templates.
** *`master_timeout` (Optional, string | -1 | 0)*: Period to wait for a connection to the master node. If no response is
received before the timeout expires, the request fails and returns an error.
** *`cause` (Optional, string)*
** *`cause` (Optional, string)*: User defined reason for creating/updating the index template
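
Below is a minimal sketch (not part of the generated reference) showing how the `create`, `master_timeout`, and `cause` options might be combined, assuming they belong to the composable index template call; the client setup, template name, pattern, and settings are placeholders.

[source,ts]
----
import { Client } from '@elastic/elasticsearch'

const client = new Client({ node: 'http://localhost:9200' }) // placeholder node

// `create: true` makes the request fail instead of replacing an existing template,
// `master_timeout` bounds the wait for the master node, and `cause` records a
// user defined reason for the change.
await client.indices.putIndexTemplate({
  name: 'my-logs-template',                        // placeholder template name
  index_patterns: ['logs-*'],                      // placeholder pattern
  template: { settings: { number_of_shards: 1 } },
  create: true,
  master_timeout: '30s',
  cause: 'initial rollout of the logs template'
})
----
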
[discrete]
==== recovery
@@ -8097,6 +8097,17 @@ NOTE: The `chat_completion` task type only supports streaming and only through t
** *`service` (Enum("elastic"))*: The type of service supported for the specified task type. In this case, `elastic`.
** *`service_settings` ({ model_id, rate_limit })*: Settings used to install the inference model. These settings are specific to the `elastic` service.
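
As a hedged illustration (not from the generated reference), the same `service` and `service_settings` shape could be sent through the generic `client.inference.put` call, reusing a configured `client` instance; the endpoint id, model id, and rate limit below are placeholders.

[source,ts]
----
// Sketch only: creates a `chat_completion` endpoint backed by the `elastic` service.
// The endpoint id and model id are placeholders, and the body mirrors the
// `service` / `service_settings` fields documented above.
await client.inference.put({
  task_type: 'chat_completion',
  inference_id: 'my-elastic-chat',            // placeholder endpoint id
  inference_config: {
    service: 'elastic',
    service_settings: {
      model_id: 'my-eis-model',               // placeholder model id
      rate_limit: { requests_per_minute: 240 }
    }
  }
})
----
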
[discrete]
==== put_mistral
Configure a Mistral inference endpoint.

{ref}/infer-service-mistral.html[Endpoint documentation]
[source,ts]
----
client.inference.putMistral()
----
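
The generated stub above omits the request body. The following is a hedged sketch of one possible call, reusing a configured `client`; the field names are drawn from the linked endpoint documentation, the exact request shape is an assumption not confirmed by this section, and the endpoint id, API key, and model name are placeholders.

[source,ts]
----
// Sketch only: the request fields are assumptions based on the Mistral service
// documentation linked above, not on this generated stub.
await client.inference.putMistral({
  task_type: 'text_embedding',
  mistral_inference_id: 'my-mistral-embeddings', // placeholder endpoint id
  service: 'mistral',
  service_settings: {
    api_key: '<mistral-api-key>',                // placeholder credential
    model: 'mistral-embed'                       // placeholder model name
  }
})
----
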
[discrete]
==== put_openai
Create an OpenAI inference endpoint.