Auto-generated code for 8.x (#2529)
@@ -96,7 +96,8 @@ client.closePointInTime({ id })
[discrete]
=== count
Returns number of documents matching a query.
Count search results.
Get the number of documents matching a query.
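For orientation, here is a minimal usage sketch (not part of the generated reference; the index name and query are illustrative placeholders):

[source,ts]
----
// Hedged sketch: count documents matching a term query.
const { count } = await client.count({
  index: 'my-index',
  query: { term: { 'user.id': 'kimchy' } }
})
console.log(count)
----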
{ref}/search-count.html[Endpoint documentation]
[source,ts]

@@ -530,7 +531,24 @@ client.getSource({ id, index })

[discrete]
=== health_report
Returns the health of the cluster.
Get the cluster health.
Get a report with the health status of an Elasticsearch cluster.
The report contains a list of indicators that compose Elasticsearch functionality.

Each indicator has a health status of: green, unknown, yellow or red.
The indicator will provide an explanation and metadata describing the reason for its current health status.

The cluster’s status is controlled by the worst indicator status.

In the event that an indicator’s status is non-green, a list of impacts may be present in the indicator result which detail the functionalities that are negatively affected by the health issue.
Each impact carries with it a severity level, an area of the system that is affected, and a simple description of the impact on the system.

Some health indicators can determine the root cause of a health problem and prescribe a set of steps that can be performed in order to improve the health of the system.
The root cause and remediation steps are encapsulated in a diagnosis.
A diagnosis contains a cause detailing a root cause analysis, an action containing a brief description of the steps to take to fix the problem, the list of affected resources (if applicable), and a detailed step-by-step troubleshooting guide to fix the diagnosed problem.

NOTE: The health indicators perform root cause analysis of non-green health statuses. This can be computationally expensive when called frequently.
When setting up automated polling of the API for health status, set verbose to false to disable the more expensive analysis logic.
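A hedged polling sketch following that advice (the handling shown is illustrative, not part of the generated reference):

[source,ts]
----
// Request the cheaper, non-verbose report, as recommended for automated polling.
const report = await client.healthReport({ verbose: false })
for (const [name, indicator] of Object.entries(report.indicators)) {
  console.log(name, indicator?.status)
}
----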
{ref}/health-api.html[Endpoint documentation]
[source,ts]

@@ -824,7 +842,7 @@ If `true`, the point in time will contain all the shards that are available at t

[discrete]
=== ping
Ping the cluster.
Returns whether the cluster is running.
Get information about whether the cluster is running.
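A minimal sketch of the call (the error handling is illustrative):

[source,ts]
----
// `ping` resolves to a boolean indicating whether the cluster responded.
const isRunning = await client.ping()
if (!isRunning) console.error('Cluster is not reachable')
----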
{ref}/index.html[Endpoint documentation]
[source,ts]

@@ -1118,6 +1136,8 @@ However, using computationally expensive named queries on a large number of hits

This parameter can only be used when the `q` query string parameter is specified.
** *`max_concurrent_shard_requests` (Optional, number)*: Defines the number of concurrent shard requests per node this search executes concurrently.
This value should be used to limit the impact of the search on the cluster in order to limit the number of concurrent shard requests.
** *`min_compatible_shard_node` (Optional, string)*: The minimum version of the node that can handle the request
Any handling node with a lower version will fail the request.
** *`preference` (Optional, string)*: Nodes and shards used for the search.
By default, Elasticsearch selects from eligible nodes and shards using adaptive replica selection, accounting for allocation awareness. Valid values are:
`_only_local` to run the search only on shards on the local node;
@@ -1643,8 +1663,6 @@ the indices stats API.
** *`wait_for_completion_timeout` (Optional, string | -1 | 0)*: Blocks and waits until the search is completed up to a certain timeout.
When the async search completes within the timeout, the response won’t include the ID as the results are not stored in the cluster.
** *`keep_on_completion` (Optional, boolean)*: If `true`, results are stored for later retrieval when the search completes within the `wait_for_completion_timeout`.
** *`keep_alive` (Optional, string | -1 | 0)*: Specifies how long the async search needs to be available.
Ongoing async searches and any saved search results are deleted after this period.
** *`allow_no_indices` (Optional, boolean)*: Whether to ignore if a wildcard indices expression resolves into no concrete indices. (This includes `_all` string or when no indices have been specified)
** *`allow_partial_search_results` (Optional, boolean)*: Indicate if an error should be returned if there is a partial search failure or timeout
** *`analyzer` (Optional, string)*: The analyzer to use for the query string
@@ -1659,8 +1677,8 @@ A partial reduction is performed every time the coordinating node has received a
** *`ignore_unavailable` (Optional, boolean)*: Whether specified concrete indices should be ignored when unavailable (missing or closed)
** *`lenient` (Optional, boolean)*: Specify whether format-based query failures (such as providing text to a numeric field) should be ignored
** *`max_concurrent_shard_requests` (Optional, number)*: The number of concurrent shard requests per node this search executes concurrently. This value should be used to limit the impact of the search on the cluster in order to limit the number of concurrent shard requests
** *`min_compatible_shard_node` (Optional, string)*
** *`preference` (Optional, string)*: Specify the node or shard the operation should be performed on (default: random)
** *`pre_filter_shard_size` (Optional, number)*: The default value cannot be changed, which enforces the execution of a pre-filter roundtrip to retrieve statistics from each shard so that the ones that surely don’t hold any document matching the query get skipped.
** *`request_cache` (Optional, boolean)*: Specify if request cache should be used for this request or not, defaults to true
** *`routing` (Optional, string)*: A list of specific routing values
** *`search_type` (Optional, Enum("query_then_fetch" | "dfs_query_then_fetch"))*: Search operation type
@@ -1809,10 +1827,6 @@ client.cat.allocation({ ... })
* *Request (object):*
** *`node_id` (Optional, string | string[])*: List of node identifiers or names used to limit the returned information.
** *`bytes` (Optional, Enum("b" | "kb" | "mb" | "gb" | "tb" | "pb"))*: The unit used to display byte values.
** *`local` (Optional, boolean)*: If `true`, the request computes the list of selected nodes from the
local cluster state. If `false` the list of selected nodes are computed
from the cluster state of the master node. In both cases the coordinating
node will send requests for further information to each selected node.

[discrete]
==== component_templates
@@ -1834,10 +1848,6 @@ client.cat.componentTemplates({ ... })

* *Request (object):*
** *`name` (Optional, string)*: The name of the component template. Accepts wildcard expressions. If omitted, all component templates are returned.
** *`local` (Optional, boolean)*: If `true`, the request computes the list of selected nodes from the
local cluster state. If `false` the list of selected nodes are computed
from the cluster state of the master node. In both cases the coordinating
node will send requests for further information to each selected node.

[discrete]
==== count
@@ -1964,17 +1974,9 @@ IMPORTANT: cat APIs are only intended for human consumption using the command li
{ref}/cat-master.html[Endpoint documentation]
[source,ts]
----
client.cat.master({ ... })
client.cat.master()
----

[discrete]
==== Arguments

* *Request (object):*
** *`local` (Optional, boolean)*: If `true`, the request computes the list of selected nodes from the
local cluster state. If `false` the list of selected nodes are computed
from the cluster state of the master node. In both cases the coordinating
node will send requests for further information to each selected node.

[discrete]
==== ml_data_frame_analytics
@@ -2113,17 +2115,9 @@ IMPORTANT: cat APIs are only intended for human consumption using the command li
{ref}/cat-nodeattrs.html[Endpoint documentation]
[source,ts]
----
client.cat.nodeattrs({ ... })
client.cat.nodeattrs()
----

[discrete]
==== Arguments

* *Request (object):*
** *`local` (Optional, boolean)*: If `true`, the request computes the list of selected nodes from the
local cluster state. If `false` the list of selected nodes are computed
from the cluster state of the master node. In both cases the coordinating
node will send requests for further information to each selected node.

[discrete]
==== nodes
@@ -2152,17 +2146,9 @@ IMPORTANT: cat APIs are only intended for human consumption using the command li
{ref}/cat-pending-tasks.html[Endpoint documentation]
[source,ts]
----
client.cat.pendingTasks({ ... })
client.cat.pendingTasks()
----

[discrete]
==== Arguments

* *Request (object):*
** *`local` (Optional, boolean)*: If `true`, the request computes the list of selected nodes from the
local cluster state. If `false` the list of selected nodes are computed
from the cluster state of the master node. In both cases the coordinating
node will send requests for further information to each selected node.

[discrete]
==== plugins
@@ -2172,17 +2158,9 @@ IMPORTANT: cat APIs are only intended for human consumption using the command li
{ref}/cat-plugins.html[Endpoint documentation]
[source,ts]
----
client.cat.plugins({ ... })
client.cat.plugins()
----

[discrete]
==== Arguments

* *Request (object):*
** *`local` (Optional, boolean)*: If `true`, the request computes the list of selected nodes from the
local cluster state. If `false` the list of selected nodes are computed
from the cluster state of the master node. In both cases the coordinating
node will send requests for further information to each selected node.

[discrete]
==== recovery
@@ -2239,10 +2217,6 @@ client.cat.segments({ ... })
Supports wildcards (`*`).
To target all data streams and indices, omit this parameter or use `*` or `_all`.
** *`bytes` (Optional, Enum("b" | "kb" | "mb" | "gb" | "tb" | "pb"))*: The unit used to display byte values.
** *`local` (Optional, boolean)*: If `true`, the request computes the list of selected nodes from the
local cluster state. If `false` the list of selected nodes are computed
from the cluster state of the master node. In both cases the coordinating
node will send requests for further information to each selected node.

[discrete]
==== shards
@@ -2325,10 +2299,6 @@ client.cat.templates({ ... })
* *Request (object):*
** *`name` (Optional, string)*: The name of the template to return.
Accepts wildcard expressions. If omitted, all templates are returned.
** *`local` (Optional, boolean)*: If `true`, the request computes the list of selected nodes from the
local cluster state. If `false` the list of selected nodes are computed
from the cluster state of the master node. In both cases the coordinating
node will send requests for further information to each selected node.

[discrete]
==== thread_pool
@@ -2349,10 +2319,6 @@ client.cat.threadPool({ ... })
** *`thread_pool_patterns` (Optional, string | string[])*: A list of thread pool names used to limit the request.
Accepts wildcard expressions.
** *`time` (Optional, Enum("nanos" | "micros" | "ms" | "s" | "m" | "h" | "d"))*: The unit used to display time values.
** *`local` (Optional, boolean)*: If `true`, the request computes the list of selected nodes from the
local cluster state. If `false` the list of selected nodes are computed
from the cluster state of the master node. In both cases the coordinating
node will send requests for further information to each selected node.

[discrete]
==== transforms
@@ -2409,37 +2375,27 @@ Creates a new follower index configured to follow the referenced leader index.
{ref}/ccr-put-follow.html[Endpoint documentation]
[source,ts]
----
client.ccr.follow({ index, leader_index, remote_cluster })
client.ccr.follow({ index })
----

[discrete]
==== Arguments

* *Request (object):*
** *`index` (string)*: The name of the follower index.
** *`leader_index` (string)*: The name of the index in the leader cluster to follow.
** *`remote_cluster` (string)*: The remote cluster containing the leader index.
** *`data_stream_name` (Optional, string)*: If the leader index is part of a data stream, the name to which the local data stream for the followed index should be renamed.
** *`max_outstanding_read_requests` (Optional, number)*: The maximum number of outstanding read requests from the remote cluster.
** *`max_outstanding_write_requests` (Optional, number)*: The maximum number of outstanding write requests on the follower.
** *`max_read_request_operation_count` (Optional, number)*: The maximum number of operations to pull per read from the remote cluster.
** *`max_read_request_size` (Optional, number | string)*: The maximum size in bytes per read of a batch of operations pulled from the remote cluster.
** *`max_retry_delay` (Optional, string | -1 | 0)*: The maximum time to wait before retrying an operation that failed exceptionally. An exponential backoff strategy is employed when
retrying.
** *`max_write_buffer_count` (Optional, number)*: The maximum number of operations that can be queued for writing. When this limit is reached, reads from the remote cluster will be
deferred until the number of queued operations goes below the limit.
** *`max_write_buffer_size` (Optional, number | string)*: The maximum total bytes of operations that can be queued for writing. When this limit is reached, reads from the remote cluster will
be deferred until the total bytes of queued operations goes below the limit.
** *`max_write_request_operation_count` (Optional, number)*: The maximum number of operations per bulk write request executed on the follower.
** *`max_write_request_size` (Optional, number | string)*: The maximum total bytes of operations per bulk write request executed on the follower.
** *`read_poll_timeout` (Optional, string | -1 | 0)*: The maximum time to wait for new operations on the remote cluster when the follower index is synchronized with the leader index.
When the timeout has elapsed, the poll for operations will return to the follower so that it can update some statistics.
Then the follower will immediately attempt to read from the leader again.
** *`settings` (Optional, { index, mode, routing_path, soft_deletes, sort, number_of_shards, number_of_replicas, number_of_routing_shards, check_on_startup, codec, routing_partition_size, load_fixed_bitset_filters_eagerly, hidden, auto_expand_replicas, merge, search, refresh_interval, max_result_window, max_inner_result_window, max_rescore_window, max_docvalue_fields_search, max_script_fields, max_ngram_diff, max_shingle_diff, blocks, max_refresh_listeners, analyze, highlight, max_terms_count, max_regex_length, routing, gc_deletes, default_pipeline, final_pipeline, lifecycle, provided_name, creation_date, creation_date_string, uuid, version, verified_before_close, format, max_slices_per_scroll, translog, query_string, priority, top_metrics_max_size, analysis, settings, time_series, queries, similarity, mapping, indexing.slowlog, indexing_pressure, store })*: Settings to override from the leader index.
** *`wait_for_active_shards` (Optional, number | Enum("all" | "index-setting"))*: Specifies the number of shards to wait on being active before responding. This defaults to waiting on none of the shards to be
active.
A shard must be restored from the leader index before being active. Restoring a follower shard requires transferring all the
remote Lucene segment files to the follower index.
** *`index` (string)*: The name of the follower index
** *`leader_index` (Optional, string)*
** *`max_outstanding_read_requests` (Optional, number)*
** *`max_outstanding_write_requests` (Optional, number)*
** *`max_read_request_operation_count` (Optional, number)*
** *`max_read_request_size` (Optional, string)*
** *`max_retry_delay` (Optional, string | -1 | 0)*
** *`max_write_buffer_count` (Optional, number)*
** *`max_write_buffer_size` (Optional, string)*
** *`max_write_request_operation_count` (Optional, number)*
** *`max_write_request_size` (Optional, string)*
** *`read_poll_timeout` (Optional, string | -1 | 0)*
** *`remote_cluster` (Optional, string)*
** *`wait_for_active_shards` (Optional, number | Enum("all" | "index-setting"))*: Sets the number of shard copies that must be active before returning. Defaults to 0. Set to `all` for all shard copies, otherwise set to any non-negative value less than or equal to the total number of copies for the shard (number of replicas + 1)

[discrete]
==== follow_info

@@ -2645,7 +2601,11 @@ client.ccr.unfollow({ index })

=== cluster
[discrete]
==== allocation_explain
Provides explanations for shard allocations in the cluster.
Explain the shard allocations.
Get explanations for shard allocations in the cluster.
For unassigned shards, it provides an explanation for why the shard is unassigned.
For assigned shards, it provides an explanation for why the shard is remaining on its current node and has not moved or rebalanced to another node.
This API can be very useful when attempting to diagnose why a shard is unassigned or why a shard continues to remain on its current node when you might expect otherwise.
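A hedged sketch of a typical diagnostic call (the index name is a placeholder):

[source,ts]
----
// Ask why a replica of shard 0 is unassigned or where it currently lives.
const explanation = await client.cluster.allocationExplain({
  index: 'my-index',
  shard: 0,
  primary: false
})
console.log(explanation.allocate_explanation)
----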
{ref}/cluster-allocation-explain.html[Endpoint documentation]
[source,ts]

@@ -2688,7 +2648,8 @@ If no response is received before the timeout expires, the request fails and ret

[discrete]
==== delete_voting_config_exclusions
Clears cluster voting config exclusions.
Clear cluster voting config exclusions.
Remove master-eligible nodes from the voting configuration exclusion list.

{ref}/voting-config-exclusions.html[Endpoint documentation]
[source,ts]

@@ -2756,7 +2717,7 @@ If no response is received before the timeout expires, the request fails and ret

[discrete]
==== get_settings
Returns cluster-wide settings.
Get cluster-wide settings.
By default, it returns only settings that have been explicitly defined.

{ref}/cluster-get-settings.html[Endpoint documentation]
@@ -2778,8 +2739,16 @@ If no response is received before the timeout expires, the request fails and ret

[discrete]
==== health
The cluster health API returns a simple status on the health of the cluster. You can also use the API to get the health status of only specified data streams and indices. For data streams, the API retrieves the health status of the stream’s backing indices.
The cluster health status is: green, yellow or red. On the shard level, a red status indicates that the specific shard is not allocated in the cluster, yellow means that the primary shard is allocated but replicas are not, and green means that all shards are allocated. The index level status is controlled by the worst shard status. The cluster status is controlled by the worst index status.
Get the cluster health status.
You can also use the API to get the health status of only specified data streams and indices.
For data streams, the API retrieves the health status of the stream’s backing indices.

The cluster health status is: green, yellow or red.
On the shard level, a red status indicates that the specific shard is not allocated in the cluster. Yellow means that the primary shard is allocated but replicas are not. Green means that all shards are allocated.
The index level status is controlled by the worst shard status.

One of the main benefits of the API is the ability to wait until the cluster reaches a certain high watermark health level.
The cluster status is controlled by the worst index status.
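A hedged sketch of that wait-for-watermark pattern (the status and timeout values are illustrative):

[source,ts]
----
// Block until the cluster reaches at least yellow, or give up after 30 seconds.
const health = await client.cluster.health({
  wait_for_status: 'yellow',
  timeout: '30s'
})
console.log(health.status, health.timed_out)
----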
{ref}/cluster-health.html[Endpoint documentation]
[source,ts]

@@ -2823,9 +2792,11 @@ client.cluster.info({ target })

[discrete]
==== pending_tasks
Returns cluster-level changes (such as create index, update mapping, allocate or fail shard) that have not yet been executed.
Get the pending cluster tasks.
Get information about cluster-level changes (such as create index, update mapping, allocate or fail shard) that have not yet taken effect.

NOTE: This API returns a list of any pending updates to the cluster state.
These are distinct from the tasks reported by the Task Management API which include periodic tasks and tasks initiated by the user, such as node stats, search queries, or create index requests.
These are distinct from the tasks reported by the task management API which include periodic tasks and tasks initiated by the user, such as node stats, search queries, or create index requests.
However, if a user-initiated task such as a create index command causes a cluster state update, the activity of this task might be reported by both the task API and the pending cluster tasks API.

{ref}/cluster-pending.html[Endpoint documentation]
@@ -2845,7 +2816,24 @@ If no response is received before the timeout expires, the request fails and ret

[discrete]
==== post_voting_config_exclusions
Updates the cluster voting config exclusions by node ids or node names.
Update voting configuration exclusions.
Update the cluster voting config exclusions by node IDs or node names.
By default, if there are more than three master-eligible nodes in the cluster and you remove fewer than half of the master-eligible nodes in the cluster at once, the voting configuration automatically shrinks.
If you want to shrink the voting configuration to contain fewer than three nodes or to remove half or more of the master-eligible nodes in the cluster at once, use this API to remove departing nodes from the voting configuration manually.
The API adds an entry for each specified node to the cluster’s voting configuration exclusions list.
It then waits until the cluster has reconfigured its voting configuration to exclude the specified nodes.

Clusters should have no voting configuration exclusions in normal operation.
Once the excluded nodes have stopped, clear the voting configuration exclusions with `DELETE /_cluster/voting_config_exclusions`.
This API waits for the nodes to be fully removed from the cluster before it returns.
If your cluster has voting configuration exclusions for nodes that you no longer intend to remove, use `DELETE /_cluster/voting_config_exclusions?wait_for_removal=false` to clear the voting configuration exclusions without waiting for the nodes to leave the cluster.

A response to `POST /_cluster/voting_config_exclusions` with an HTTP status code of 200 OK guarantees that the node has been removed from the voting configuration and will not be reinstated until the voting configuration exclusions are cleared by calling `DELETE /_cluster/voting_config_exclusions`.
If the call to `POST /_cluster/voting_config_exclusions` fails or returns a response with an HTTP status code other than 200 OK then the node may not have been removed from the voting configuration.
In that case, you may safely retry the call.

NOTE: Voting exclusions are required only when you remove at least half of the master-eligible nodes from a cluster in a short time period.
They are not required when removing master-ineligible nodes or when removing fewer than half of the master-eligible nodes.
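A hedged sketch of the exclude-then-clear workflow described above (`node-1` is a placeholder node name):

[source,ts]
----
// Exclude a departing master-eligible node from the voting configuration.
await client.cluster.postVotingConfigExclusions({ node_names: 'node-1' })
// ...shut the node down, then clear the exclusion list:
await client.cluster.deleteVotingConfigExclusions()
----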
{ref}/voting-config-exclusions.html[Endpoint documentation]
[source,ts]

@@ -2916,7 +2904,24 @@ If no response is received before the timeout expires, the request fails and ret

[discrete]
==== put_settings
Updates the cluster settings.
Update the cluster settings.
Configure and update dynamic settings on a running cluster.
You can also configure dynamic settings locally on an unstarted or shut down node in `elasticsearch.yml`.

Updates made with this API can be persistent, which apply across cluster restarts, or transient, which reset after a cluster restart.
You can also reset transient or persistent settings by assigning them a null value.

If you configure the same setting using multiple methods, Elasticsearch applies the settings in the following order of precedence: 1) Transient setting; 2) Persistent setting; 3) `elasticsearch.yml` setting; 4) Default setting value.
For example, you can apply a transient setting to override a persistent setting or `elasticsearch.yml` setting.
However, a change to an `elasticsearch.yml` setting will not override a defined transient or persistent setting.

TIP: In Elastic Cloud, use the user settings feature to configure all cluster settings. This method automatically rejects unsafe settings that could break your cluster.
If you run Elasticsearch on your own hardware, use this API to configure dynamic cluster settings.
Only use `elasticsearch.yml` for static cluster settings and node settings.
The API doesn’t require a restart and ensures a setting’s value is the same on all nodes.

WARNING: Transient cluster settings are no longer recommended. Use persistent cluster settings instead.
If a cluster becomes unstable, transient settings can clear unexpectedly, resulting in a potentially undesired cluster configuration.
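A hedged sketch of a persistent update and a null reset (the setting and value are illustrative):

[source,ts]
----
// Apply a persistent dynamic setting, as the WARNING above recommends over transient.
await client.cluster.putSettings({
  persistent: { 'indices.recovery.max_bytes_per_sec': '50mb' }
})
// Reset it later by assigning null.
await client.cluster.putSettings({
  persistent: { 'indices.recovery.max_bytes_per_sec': null }
})
----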
{ref}/cluster-update-settings.html[Endpoint documentation]
[source,ts]

@@ -2936,9 +2941,9 @@ client.cluster.putSettings({ ... })

[discrete]
==== remote_info
The cluster remote info API allows you to retrieve all of the configured
remote cluster information. It returns connection and endpoint information
keyed by the configured remote cluster alias.
Get remote cluster information.
Get all of the configured remote cluster information.
This API returns connection and endpoint information keyed by the configured remote cluster alias.

{ref}/cluster-remote-info.html[Endpoint documentation]
[source,ts]
@@ -2949,7 +2954,20 @@ client.cluster.remoteInfo()

[discrete]
==== reroute
Allows to manually change the allocation of individual shards in the cluster.
Reroute the cluster.
Manually change the allocation of individual shards in the cluster.
For example, a shard can be moved from one node to another explicitly, an allocation can be canceled, and an unassigned shard can be explicitly allocated to a specific node.

It is important to note that after processing any reroute commands Elasticsearch will perform rebalancing as normal (respecting the values of settings such as `cluster.routing.rebalance.enable`) in order to remain in a balanced state.
For example, if the requested allocation includes moving a shard from node1 to node2 then this may cause a shard to be moved from node2 back to node1 to even things out.

The cluster can be set to disable allocations using the `cluster.routing.allocation.enable` setting.
If allocations are disabled then the only allocations that will be performed are explicit ones given using the reroute command, and consequent allocations due to rebalancing.

The cluster will attempt to allocate a shard a maximum of `index.allocation.max_retries` times in a row (defaults to `5`), before giving up and leaving the shard unallocated.
This scenario can be caused by structural problems such as having an analyzer which refers to a stopwords file which doesn’t exist on all nodes.
Once the problem has been corrected, allocation can be manually retried by calling the reroute API with the `?retry_failed` URI query parameter, which will attempt a single retry round for these shards.
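A hedged sketch of an explicit move followed by a `retry_failed` round (index and node names are placeholders):

[source,ts]
----
// Move one shard explicitly; rebalancing may still follow, as noted above.
await client.cluster.reroute({
  commands: [
    { move: { index: 'my-index', shard: 0, from_node: 'node-1', to_node: 'node-2' } }
  ]
})
// Retry shards whose allocation failed `index.allocation.max_retries` times.
await client.cluster.reroute({ retry_failed: true })
----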
{ref}/cluster-reroute.html[Endpoint documentation]
[source,ts]

@@ -2962,8 +2980,9 @@ client.cluster.reroute({ ... })

* *Request (object):*
** *`commands` (Optional, { cancel, move, allocate_replica, allocate_stale_primary, allocate_empty_primary }[])*: Defines the commands to perform.
** *`dry_run` (Optional, boolean)*: If true, then the request simulates the operation only and returns the resulting state.
** *`explain` (Optional, boolean)*: If true, then the response contains an explanation of why the commands can or cannot be executed.
** *`dry_run` (Optional, boolean)*: If true, then the request simulates the operation.
It will calculate the result of applying the commands to the current cluster state and return the resulting cluster state after the commands (and rebalancing) have been applied; it will not actually perform the requested changes.
** *`explain` (Optional, boolean)*: If true, then the response contains an explanation of why the commands can or cannot run.
** *`metric` (Optional, string | string[])*: Limits the information returned to the specified metrics.
** *`retry_failed` (Optional, boolean)*: If true, then retries allocation of shards that are blocked due to too many subsequent allocation failures.
** *`master_timeout` (Optional, string | -1 | 0)*: Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.
@@ -2971,7 +2990,25 @@ client.cluster.reroute({ ... })

[discrete]
==== state
Returns a comprehensive information about the state of the cluster.
Get the cluster state.
Get comprehensive information about the state of the cluster.

The cluster state is an internal data structure which keeps track of a variety of information needed by every node, including the identity and attributes of the other nodes in the cluster; cluster-wide settings; index metadata, including the mapping and settings for each index; the location and status of every shard copy in the cluster.

The elected master node ensures that every node in the cluster has a copy of the same cluster state.
This API lets you retrieve a representation of this internal state for debugging or diagnostic purposes.
You may need to consult the Elasticsearch source code to determine the precise meaning of the response.

By default the API will route requests to the elected master node since this node is the authoritative source of cluster states.
You can also retrieve the cluster state held on the node handling the API request by adding the `?local=true` query parameter.

Elasticsearch may need to expend significant effort to compute a response to this API in larger clusters, and the response may comprise a very large quantity of data.
If you use this API repeatedly, your cluster may become unstable.

WARNING: The response is a representation of an internal data structure.
Its format is not subject to the same compatibility guarantees as other more stable APIs and may change from version to version.
Do not query this API using external monitoring tools.
Instead, obtain the information you require using other more stable cluster APIs.
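A hedged sketch that keeps the response small and local, per the guidance above (the metrics chosen are illustrative):

[source,ts]
----
// Fetch only a narrow slice of the state, served by the handling node.
const state = await client.cluster.state({
  metric: ['version', 'master_node'],
  local: true
})
console.log(state.master_node)
----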
{ref}/cluster-state.html[Endpoint documentation]
[source,ts]

@@ -2996,8 +3033,8 @@ client.cluster.state({ ... })

[discrete]
==== stats
Returns cluster statistics.
It returns basic index metrics (shard numbers, store size, memory usage) and information about the current nodes that form the cluster (number, roles, os, jvm versions, memory usage, cpu and installed plugins).
Get cluster statistics.
Get basic index metrics (shard numbers, store size, memory usage) and information about the current nodes that form the cluster (number, roles, os, jvm versions, memory usage, cpu and installed plugins).

{ref}/cluster-stats.html[Endpoint documentation]
[source,ts]

@@ -3622,7 +3659,8 @@ client.enrich.deletePolicy({ name })

[discrete]
==== execute_policy
Creates the enrich index for an existing enrich policy.
Run an enrich policy.
Create the enrich index for an existing enrich policy.

{ref}/execute-enrich-policy-api.html[Endpoint documentation]
[source,ts]
@@ -3691,7 +3729,8 @@ client.enrich.stats()
=== eql
[discrete]
==== delete
Deletes an async EQL search or a stored synchronous EQL search.
Delete an async EQL search.
Delete an async EQL search or a stored synchronous EQL search.
The API also deletes results for the search.

{ref}/eql-search-api.html[Endpoint documentation]
@@ -3710,7 +3749,8 @@ A search ID is also provided if the request’s `keep_on_completion` parameter i

[discrete]
==== get
Returns the current status and available results for an async EQL search or a stored synchronous EQL search.
Get async EQL search results.
Get the current status and available results for an async EQL search or a stored synchronous EQL search.

{ref}/get-async-eql-search-api.html[Endpoint documentation]
[source,ts]

@@ -3730,7 +3770,8 @@ Defaults to no timeout, meaning the request waits for complete search results.

[discrete]
==== get_status
Returns the current status for an async EQL search or a stored synchronous EQL search without returning results.
Get the async EQL status.
Get the current status for an async EQL search or a stored synchronous EQL search without returning results.

{ref}/get-async-eql-status-api.html[Endpoint documentation]
[source,ts]
@@ -3746,7 +3787,9 @@ client.eql.getStatus({ id })

[discrete]
==== search
Returns results matching a query expressed in Event Query Language (EQL)
Get EQL search results.
Returns search results for an Event Query Language (EQL) query.
EQL assumes each document in a data stream or index corresponds to an event.
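A hedged sketch of an event query (the data stream name and query are placeholders):

[source,ts]
----
// Each matching document is treated as an event.
const result = await client.eql.search({
  index: 'my-data-stream',
  query: 'process where process.name == "regsvr32.exe"'
})
console.log(result.hits.events?.length)
----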
{ref}/eql-search-api.html[Endpoint documentation]
[source,ts]

@@ -3773,9 +3816,6 @@ client.eql.search({ index, query })
** *`fields` (Optional, { field, format, include_unmapped } | { field, format, include_unmapped }[])*: Array of wildcard (*) patterns. The response returns values for field names matching these patterns in the fields property of each hit.
** *`result_position` (Optional, Enum("tail" | "head"))*
** *`runtime_mappings` (Optional, Record<string, { fields, fetch_fields, format, input_field, target_field, target_index, script, type }>)*
** *`max_samples_per_key` (Optional, number)*: By default, the response of a sample query contains up to `10` samples, with one sample per unique set of join keys. Use the `size`
parameter to get a smaller or larger set of samples. To retrieve more than one sample per set of join keys, use the
`max_samples_per_key` parameter. Pipes are not supported for sample queries.
** *`allow_no_indices` (Optional, boolean)*
** *`expand_wildcards` (Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[])*
** *`ignore_unavailable` (Optional, boolean)*: If true, missing or closed indices are not included in the response.

@@ -3806,7 +3846,8 @@ client.esql.asyncQueryGet()

[discrete]
==== query
Executes an ES|QL request
Run an ES|QL query.
Get search results for an ES|QL (Elasticsearch query language) query.
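A hedged sketch of a small ES|QL query (the index and fields are placeholders):

[source,ts]
----
// By default the JSON response carries tabular `columns` and `values`.
const response = await client.esql.query({
  query: 'FROM my-index | STATS count = COUNT(*) BY user.id | SORT count DESC | LIMIT 5'
})
console.log(response)
----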
{ref}/esql-rest.html[Endpoint documentation]
[source,ts]

@@ -3998,6 +4039,7 @@ the indices stats API.
** *`ignore_unavailable` (Optional, boolean)*
** *`lenient` (Optional, boolean)*
** *`max_concurrent_shard_requests` (Optional, number)*
** *`min_compatible_shard_node` (Optional, string)*
** *`preference` (Optional, string)*
** *`pre_filter_shard_size` (Optional, number)*
** *`request_cache` (Optional, boolean)*
@@ -4024,7 +4066,12 @@ which is true by default.
=== graph
[discrete]
==== explore
Extracts and summarizes information about the documents and terms in an Elasticsearch data stream or index.
Explore graph analytics.
Extract and summarize information about the documents and terms in an Elasticsearch data stream or index.
The easiest way to understand the behavior of this API is to use the Graph UI to explore connections.
An initial request to the `_explore` API contains a seed query that identifies the documents of interest and specifies the fields that define the vertices and connections you want to include in the graph.
Subsequent requests enable you to spider out from one or more vertices of interest.
You can exclude vertices that have already been returned.
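A hedged sketch of a seed request (index, fields, and query values are illustrative placeholders):

[source,ts]
----
// Seed the graph with a query, then ask for terms connected to the matches.
const graph = await client.graph.explore({
  index: 'my-index',
  query: { match: { 'product.category': 'phone' } },
  vertices: [{ field: 'customer.id' }],
  connections: { vertices: [{ field: 'product.id' }] }
})
console.log(graph.vertices.length, graph.connections.length)
----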
{ref}/graph-explore-api.html[Endpoint documentation]
[source,ts]

@@ -4690,10 +4737,12 @@ If the request can target data streams, this argument determines whether wildcar
Supports a list of values, such as `open,hidden`.
Valid values are: `all`, `open`, `closed`, `hidden`, `none`.
** *`ignore_unavailable` (Optional, boolean)*: If `false`, requests that include a missing data stream or index in the target indices or data streams return an error.
** *`local` (Optional, boolean)*: If `true`, the request retrieves information from the local node only.

[discrete]
==== exists_index_template
Returns information about whether a particular index template exists.
Check index templates.
Check whether index templates exist.

{ref}/index-templates.html[Endpoint documentation]
[source,ts]
@@ -4887,6 +4936,7 @@ If the request can target data streams, this argument determines whether wildcar
Supports a list of values, such as `open,hidden`.
Valid values are: `all`, `open`, `closed`, `hidden`, `none`.
** *`ignore_unavailable` (Optional, boolean)*: If `false`, the request returns an error if it targets a missing or closed index.
** *`local` (Optional, boolean)*: If `true`, the request retrieves information from the local node only.

[discrete]
==== get_data_lifecycle

@@ -5229,7 +5279,11 @@ client.indices.putDataLifecycle({ name })
** *`name` (string | string[])*: List of data streams used to limit the request.
Supports wildcards (`*`).
To target all data streams use `*` or `_all`.
** *`lifecycle` (Optional, { data_retention, downsampling, enabled })*
** *`data_retention` (Optional, string | -1 | 0)*: If defined, every document added to this data stream will be stored at least for this time frame.
Any time after this duration the document could be deleted.
When empty, every document in this data stream will be stored indefinitely.
** *`downsampling` (Optional, { rounds })*: If defined, every backing index will execute the configured downsampling configuration after the backing
index is not the data stream write index anymore.
** *`expand_wildcards` (Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[])*: Type of data stream that wildcard patterns can match.
Supports a list of values, such as `open,hidden`.
Valid values are: `all`, `hidden`, `open`, `closed`, `none`.
@@ -5313,7 +5367,7 @@ a new date field is added instead of string.
not used at all by Elasticsearch, but can be used to store
application-specific metadata.
** *`numeric_detection` (Optional, boolean)*: Automatically map strings into numeric data types for all fields.
** *`properties` (Optional, Record<string, { type } | { boost, fielddata, index, null_value, type } | { type, enabled, null_value, boost, coerce, script, on_script_error, ignore_malformed, time_series_metric, analyzer, eager_global_ordinals, index, index_options, index_phrases, index_prefixes, norms, position_increment_gap, search_analyzer, search_quote_analyzer, term_vector, format, precision_step, locale } | { relations, eager_global_ordinals, type } | { boost, eager_global_ordinals, index, index_options, script, on_script_error, normalizer, norms, null_value, similarity, split_queries_on_whitespace, time_series_dimension, type } | { type, fields, meta, copy_to } | { type } | { positive_score_impact, type } | { positive_score_impact, type } | { analyzer, index, index_options, max_shingle_size, norms, search_analyzer, search_quote_analyzer, similarity, term_vector, type } | { analyzer, boost, eager_global_ordinals, fielddata, fielddata_frequency_filter, index, index_options, index_phrases, index_prefixes, norms, position_increment_gap, search_analyzer, search_quote_analyzer, similarity, term_vector, type } | { type } | { type, null_value } | { boost, format, ignore_malformed, index, null_value, precision_step, type } | { boost, fielddata, format, ignore_malformed, index, null_value, precision_step, locale, type } | { type, default_metric, metrics, time_series_metric } | { type, dims, element_type, index, index_options, similarity } | { boost, depth_limit, doc_values, eager_global_ordinals, index, index_options, null_value, similarity, split_queries_on_whitespace, type } | { enabled, include_in_parent, include_in_root, type } | { enabled, subobjects, type } | { type, meta, inference_id } | { type } | { analyzer, contexts, max_input_length, preserve_position_increments, preserve_separators, search_analyzer, type } | { value, type } | { path, type } | { ignore_malformed, type } | { boost, index, ignore_malformed, null_value, on_script_error, script, time_series_dimension, type } | { type } | { analyzer, boost, index, null_value, enable_position_increments, type } | { ignore_malformed, ignore_z_value, null_value, index, on_script_error, script, type } | { coerce, ignore_malformed, ignore_z_value, orientation, strategy, type } | { ignore_malformed, ignore_z_value, null_value, type } | { coerce, ignore_malformed, ignore_z_value, orientation, type } | { type, null_value } | { type, null_value } | { type, null_value } | { type, null_value } | { type, null_value } | { type, null_value } | { type, null_value, scaling_factor } | { type, null_value } | { type, null_value } | { format, type } | { type } | { type } | { type } | { type } | { type } | { type, norms, index_options, index, null_value, rules, language, country, variant, strength, decomposition, alternate, case_level, case_first, numeric, variable_top, hiragana_quaternary_mode }>)*: Mapping for a field. For new fields, this mapping can include:
** *`properties` (Optional, Record<string, { type } | { boost, fielddata, index, null_value, type } | { type, enabled, null_value, boost, coerce, script, on_script_error, ignore_malformed, time_series_metric, analyzer, eager_global_ordinals, index, index_options, index_phrases, index_prefixes, norms, position_increment_gap, search_analyzer, search_quote_analyzer, term_vector, format, precision_step, locale } | { relations, eager_global_ordinals, type } | { boost, eager_global_ordinals, index, index_options, script, on_script_error, normalizer, norms, null_value, similarity, split_queries_on_whitespace, time_series_dimension, type } | { type, fields, meta, copy_to } | { type } | { positive_score_impact, type } | { positive_score_impact, type } | { analyzer, index, index_options, max_shingle_size, norms, search_analyzer, search_quote_analyzer, similarity, term_vector, type } | { analyzer, boost, eager_global_ordinals, fielddata, fielddata_frequency_filter, index, index_options, index_phrases, index_prefixes, norms, position_increment_gap, search_analyzer, search_quote_analyzer, similarity, term_vector, type } | { type } | { type, null_value } | { boost, format, ignore_malformed, index, null_value, precision_step, type } | { boost, fielddata, format, ignore_malformed, index, null_value, precision_step, locale, type } | { type, default_metric, metrics, time_series_metric } | { type, element_type, dims, similarity, index, index_options } | { boost, depth_limit, doc_values, eager_global_ordinals, index, index_options, null_value, similarity, split_queries_on_whitespace, type } | { enabled, include_in_parent, include_in_root, type } | { enabled, subobjects, type } | { type, meta, inference_id } | { type } | { analyzer, contexts, max_input_length, preserve_position_increments, preserve_separators, search_analyzer, type } | { value, type } | { path, type } | { ignore_malformed, type } | { boost, index, ignore_malformed, null_value, on_script_error, script, time_series_dimension, type } | { type } | { analyzer, boost, index, null_value, enable_position_increments, type } | { ignore_malformed, ignore_z_value, null_value, index, on_script_error, script, type } | { coerce, ignore_malformed, ignore_z_value, orientation, strategy, type } | { ignore_malformed, ignore_z_value, null_value, type } | { coerce, ignore_malformed, ignore_z_value, orientation, type } | { type, null_value } | { type, null_value } | { type, null_value } | { type, null_value } | { type, null_value } | { type, null_value } | { type, null_value, scaling_factor } | { type, null_value } | { type, null_value } | { format, type } | { type } | { type } | { type } | { type } | { type } | { type, norms, index_options, index, null_value, rules, language, country, variant, strength, decomposition, alternate, case_level, case_first, numeric, variable_top, hiragana_quaternary_mode }>)*: Mapping for a field. For new fields, this mapping can include:
- Field name
- Field data type
@@ -5502,7 +5556,8 @@ Valid values are: `all`, `open`, `closed`, `hidden`, `none`.

[discrete]
==== resolve_index
Resolves the specified name(s) and/or index patterns for indices, aliases, and data streams.
Resolve indices.
Resolve the names and/or index patterns for indices, aliases, and data streams.
Multiple patterns and remote clusters are supported.

{ref}/indices-resolve-index-api.html[Endpoint documentation]
@@ -5589,6 +5644,7 @@ If the request can target data streams, this argument determines whether wildcar
Supports a list of values, such as `open,hidden`.
Valid values are: `all`, `open`, `closed`, `hidden`, `none`.
** *`ignore_unavailable` (Optional, boolean)*: If `false`, the request returns an error if it targets a missing or closed index.
** *`verbose` (Optional, boolean)*: If `true`, the request returns a verbose response.

[discrete]
==== shard_stores

@@ -5938,7 +5994,8 @@ client.inference.streamInference()
=== ingest
[discrete]
==== delete_geoip_database
Deletes a geoip database configuration.
Delete GeoIP database configurations.
Delete one or more IP geolocation database configurations.

{ref}/delete-geoip-database-api.html[Endpoint documentation]
[source,ts]

@@ -5968,7 +6025,8 @@ client.ingest.deleteIpLocationDatabase()

[discrete]
==== delete_pipeline
Deletes one or more existing ingest pipeline.
Delete pipelines.
Delete one or more ingest pipelines.

{ref}/delete-pipeline-api.html[Endpoint documentation]
[source,ts]

@@ -5989,7 +6047,8 @@ If no response is received before the timeout expires, the request fails and ret

[discrete]
==== geo_ip_stats
Gets download statistics for GeoIP2 databases used with the geoip processor.
Get GeoIP statistics.
Get download statistics for GeoIP2 databases that are used with the GeoIP processor.
{ref}/geoip-processor.html[Endpoint documentation]
[source,ts]

@@ -6000,7 +6059,8 @@ client.ingest.geoIpStats()

[discrete]
==== get_geoip_database
Returns information about one or more geoip database configurations.
Get GeoIP database configurations.
Get information about one or more IP geolocation database configurations.

{ref}/get-geoip-database-api.html[Endpoint documentation]
[source,ts]

@@ -6031,7 +6091,8 @@ client.ingest.getIpLocationDatabase()

[discrete]
==== get_pipeline
Returns information about one or more ingest pipelines.
Get pipelines.
Get information about one or more ingest pipelines.
This API returns a local reference of the pipeline.

{ref}/get-pipeline-api.html[Endpoint documentation]

@@ -6053,8 +6114,9 @@ If no response is received before the timeout expires, the request fails and ret

[discrete]
==== processor_grok
Extracts structured fields out of a single text field within a document.
You choose which field to extract matched fields from, as well as the grok pattern you expect will match.
Run a grok processor.
Extract structured fields out of a single text field within a document.
You must choose which field to extract matched fields from, as well as the grok pattern you expect will match.
A grok pattern is like a regular expression that supports aliased expressions that can be reused.
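A minimal sketch (the response field access assumes the documented `patterns` map):

[source,ts]
----
// List the built-in grok patterns the cluster knows about.
const { patterns } = await client.ingest.processorGrok()
console.log(Object.keys(patterns).length, 'patterns available')
----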
{ref}/grok-processor.html[Endpoint documentation]

@@ -6066,7 +6128,8 @@ client.ingest.processorGrok()

[discrete]
==== put_geoip_database
Returns information about one or more geoip database configurations.
Create or update GeoIP database configurations.
Create or update IP geolocation database configurations.

{ref}/put-geoip-database-api.html[Endpoint documentation]
[source,ts]

@@ -6099,7 +6162,7 @@ client.ingest.putIpLocationDatabase()

[discrete]
==== put_pipeline
Creates or updates an ingest pipeline.
Create or update a pipeline.
Changes made using this API take effect immediately.

{ref}/ingest.html[Endpoint documentation]

@@ -6126,7 +6189,9 @@ When a deprecated ingest pipeline is referenced as the default or final pipeline

[discrete]
==== simulate
Executes an ingest pipeline against a set of provided documents.
Simulate a pipeline.
Run an ingest pipeline against a set of provided documents.
You can either specify an existing pipeline to use with the provided documents or supply a pipeline definition in the body of the request.
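A hedged sketch of the inline-definition variant (the processor and document are illustrative):

[source,ts]
----
// Run a pipeline defined in the request body against one test document.
const result = await client.ingest.simulate({
  pipeline: {
    processors: [{ set: { field: 'environment', value: 'staging' } }]
  },
  docs: [{ _source: { message: 'hello world' } }]
})
console.log(result.docs[0])
----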
{ref}/simulate-pipeline-api.html[Endpoint documentation]
[source,ts]

@@ -7458,7 +7523,7 @@ client.ml.postCalendarEvents({ calendar_id, events })

* *Request (object):*
** *`calendar_id` (string)*: A string that uniquely identifies a calendar.
** *`events` ({ calendar_id, event_id, description, end_time, start_time, skip_result, skip_model_update, force_time_shift }[])*: A list of one or more scheduled events. The event’s start and end times can be specified as integer milliseconds since the epoch or as a string in ISO 8601 format.
** *`events` ({ calendar_id, event_id, description, end_time, start_time }[])*: A list of one or more scheduled events. The event’s start and end times can be specified as integer milliseconds since the epoch or as a string in ISO 8601 format.
[discrete]
==== post_data

@@ -7650,7 +7715,7 @@ Create a datafeed.
Datafeeds retrieve data from Elasticsearch for analysis by an anomaly detection job.
You can associate only one datafeed with each anomaly detection job.
The datafeed contains a query that runs at a defined interval (`frequency`).
If you are concerned about delayed data, you can add a delay (`query_delay`) at each interval.
If you are concerned about delayed data, you can add a delay (`query_delay') at each interval.
When Elasticsearch security features are enabled, your datafeed remembers which roles the user who created it had
at the time of creation and runs the query using those same roles. If you provide secondary authorization headers,
those credentials are used instead.
@@ -8443,7 +8508,8 @@ client.monitoring.bulk({ system_id, system_api_version, interval })
=== nodes
[discrete]
==== clear_repositories_metering_archive
You can use this API to clear the archived repositories metering information in the cluster.
Clear the archived repositories metering.
Clear the archived repositories metering information in the cluster.

{ref}/clear-repositories-metering-archive-api.html[Endpoint documentation]
[source,ts]

@@ -8461,10 +8527,10 @@ All the nodes selective options are explained [here](https://www.elastic.co/guid

[discrete]
==== get_repositories_metering_info
You can use the cluster repositories metering API to retrieve repositories metering information in a cluster.
This API exposes monotonically non-decreasing counters and it’s expected that clients would durably store the
information needed to compute aggregations over a period of time. Additionally, the information exposed by this
API is volatile, meaning that it won’t be present after node restarts.
Get cluster repositories metering.
Get repositories metering information for a cluster.
This API exposes monotonically non-decreasing counters and it is expected that clients would durably store the information needed to compute aggregations over a period of time.
Additionally, the information exposed by this API is volatile, meaning that it will not be present after node restarts.
{ref}/get-repositories-metering-api.html[Endpoint documentation]
[source,ts]

@@ -8481,8 +8547,9 @@ All the nodes selective options are explained [here](https://www.elastic.co/guid

[discrete]
==== hot_threads
This API yields a breakdown of the hot threads on each selected node in the cluster.
The output is plain text with a breakdown of each node’s top hot threads.
Get the hot threads for nodes.
Get a breakdown of the hot threads on each selected node in the cluster.
The output is plain text with a breakdown of the top hot threads for each node.

{ref}/cluster-nodes-hot-threads.html[Endpoint documentation]
[source,ts]

@@ -8510,7 +8577,8 @@ before the timeout expires, the request fails and returns an error.

[discrete]
==== info
Returns cluster nodes information.
Get node information.
By default, the API returns all attributes and core settings for cluster nodes.

{ref}/cluster-nodes-info.html[Endpoint documentation]
[source,ts]

@@ -8530,7 +8598,15 @@ client.nodes.info({ ... })

[discrete]
==== reload_secure_settings
Reloads the keystore on nodes in the cluster.
Reload the keystore on nodes in the cluster.

Secure settings are stored in an on-disk keystore. Certain of these settings are reloadable.
That is, you can change them on disk and reload them without restarting any nodes in the cluster.
When you have updated reloadable secure settings in your keystore, you can use this API to reload those settings on each node.

When the Elasticsearch keystore is password protected and not simply obfuscated, you must provide the password for the keystore when you reload the secure settings.
Reloading the settings for the whole cluster assumes that the keystores for all nodes are protected with the same password; this method is allowed only when inter-node communications are encrypted.
Alternatively, you can reload the secure settings on each node by locally accessing the API and passing the node-specific Elasticsearch keystore password.
|
||||
|
||||
{ref}/secure-settings.html[Endpoint documentation]
|
||||
[source,ts]
|
||||
@ -8549,7 +8625,9 @@ If no response is received before the timeout expires, the request fails and ret
|
||||
|
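A hedged sketch for a password-protected keystore (the password value is a placeholder):

[source,ts]
----
// Reload reloadable secure settings on every node. The password is
// only required when the keystore is password protected.
client.nodes.reloadSecureSettings({
  secure_settings_password: 'keystore-password'
})
----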
[discrete]
==== stats
Returns cluster nodes statistics.
Get node statistics.
Get statistics for nodes in a cluster.
By default, all stats are returned. You can limit the returned information by using metrics.

{ref}/cluster-nodes-stats.html[Endpoint documentation]
[source,ts]
@ -8577,7 +8655,7 @@ client.nodes.stats({ ... })

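A hedged sketch that limits the response using metrics (the metric list is illustrative):

[source,ts]
----
// Return only JVM and filesystem statistics for all nodes.
client.nodes.stats({ metric: ['jvm', 'fs'] })
----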
[discrete]
==== usage
Returns information on the usage of features.
Get feature usage information.

{ref}/cluster-nodes-usage.html[Endpoint documentation]
[source,ts]
@ -8599,7 +8677,8 @@ If no response is received before the timeout expires, the request fails and ret
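A hedged sketch restricting the report to one usage metric:

[source,ts]
----
// Report REST action usage for all nodes.
client.nodes.usage({ metric: 'rest_actions' })
----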
=== query_rules
[discrete]
==== delete_rule
Deletes a query rule within a query ruleset.
Delete a query rule.
Delete a query rule within a query ruleset.

{ref}/delete-query-rule.html[Endpoint documentation]
[source,ts]
@ -8616,7 +8695,7 @@ client.queryRules.deleteRule({ ruleset_id, rule_id })

[discrete]
==== delete_ruleset
Deletes a query ruleset.
Delete a query ruleset.

{ref}/delete-query-ruleset.html[Endpoint documentation]
[source,ts]
@ -8632,7 +8711,8 @@ client.queryRules.deleteRuleset({ ruleset_id })

[discrete]
==== get_rule
Returns the details about a query rule within a query ruleset
Get a query rule.
Get details about a query rule within a query ruleset.

{ref}/get-query-rule.html[Endpoint documentation]
[source,ts]
@ -8649,7 +8729,8 @@ client.queryRules.getRule({ ruleset_id, rule_id })

[discrete]
==== get_ruleset
Returns the details about a query ruleset
Get a query ruleset.
Get details about a query ruleset.

{ref}/get-query-ruleset.html[Endpoint documentation]
[source,ts]
@ -8665,7 +8746,8 @@ client.queryRules.getRuleset({ ruleset_id })

[discrete]
==== list_rulesets
Returns summarized information about existing query rulesets.
Get all query rulesets.
Get summarized information about the query rulesets.

{ref}/list-query-rulesets.html[Endpoint documentation]
[source,ts]
@ -8682,7 +8764,8 @@ client.queryRules.listRulesets({ ... })

[discrete]
==== put_rule
Creates or updates a query rule within a query ruleset.
Create or update a query rule.
Create or update a query rule within a query ruleset.

{ref}/put-query-rule.html[Endpoint documentation]
[source,ts]
@ -8703,7 +8786,7 @@ client.queryRules.putRule({ ruleset_id, rule_id, type, criteria, actions })

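A hedged sketch following the signature above (the ruleset, rule, criterion, and document IDs are illustrative):

[source,ts]
----
// Pin two documents whenever the 'user_query' criterion exactly
// matches the value 'puggles'.
client.queryRules.putRule({
  ruleset_id: 'my-ruleset',
  rule_id: 'my-rule',
  type: 'pinned',
  criteria: [
    { type: 'exact', metadata: 'user_query', values: ['puggles'] }
  ],
  actions: { ids: ['id1', 'id2'] }
})
----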
[discrete]
==== put_ruleset
Creates or updates a query ruleset.
Create or update a query ruleset.

{ref}/put-query-ruleset.html[Endpoint documentation]
[source,ts]
@ -8720,7 +8803,8 @@ client.queryRules.putRuleset({ ruleset_id, rules })

[discrete]
==== test
Creates or updates a query ruleset.
Test a query ruleset.
Evaluate match criteria against a query ruleset to identify the rules that would match that criteria.

{ref}/test-query-ruleset.html[Endpoint documentation]
[source,ts]
@ -9014,7 +9098,7 @@ client.searchApplication.put({ name })

* *Request (object):*
** *`name` (string)*: The name of the search application to be created or updated.
** *`search_application` (Optional, { indices, analytics_collection_name, template })*
** *`search_application` (Optional, { name, indices, updated_at_millis, analytics_collection_name, template })*
** *`create` (Optional, boolean)*: If `true`, this request cannot replace or update existing Search Applications.

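A minimal hedged sketch, assuming `indices` is the only field set in the search application body (the application and index names are illustrative; depending on the version, a `template` can usually also be supplied):

[source,ts]
----
// Create (or update) a search application backed by one index.
client.searchApplication.put({
  name: 'my-search-app',
  search_application: {
    indices: ['my-index']
  }
})
----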
[discrete]
@ -9717,6 +9801,8 @@ client.security.getPrivileges({ ... })
Get roles.

Get roles in the native realm.
The role management APIs are generally the preferred way to manage roles, rather than using file-based role management.
The get roles API cannot retrieve roles that are defined in roles files.

{ref}/security-api-get-role.html[Endpoint documentation]
[source,ts]
@ -10088,7 +10174,7 @@ client.security.putRole({ name })
==== Arguments

* *Request (object):*
** *`name` (string)*: The name of the role that is being created or updated. On Elasticsearch Serverless, the role name must begin with a letter or digit and can only contain letters, digits and the characters '_', '-', and '.'. Each role must have a unique name, as this will serve as the identifier for that role.
** *`name` (string)*: The name of the role.
** *`applications` (Optional, { application, privileges, resources }[])*: A list of application privilege entries.
** *`cluster` (Optional, Enum("all" | "cancel_task" | "create_snapshot" | "cross_cluster_replication" | "cross_cluster_search" | "delegate_pki" | "grant_api_key" | "manage" | "manage_api_key" | "manage_autoscaling" | "manage_behavioral_analytics" | "manage_ccr" | "manage_data_frame_transforms" | "manage_data_stream_global_retention" | "manage_enrich" | "manage_ilm" | "manage_index_templates" | "manage_inference" | "manage_ingest_pipelines" | "manage_logstash_pipelines" | "manage_ml" | "manage_oidc" | "manage_own_api_key" | "manage_pipeline" | "manage_rollup" | "manage_saml" | "manage_search_application" | "manage_search_query_rules" | "manage_search_synonyms" | "manage_security" | "manage_service_account" | "manage_slm" | "manage_token" | "manage_transform" | "manage_user_profile" | "manage_watcher" | "monitor" | "monitor_data_frame_transforms" | "monitor_data_stream_global_retention" | "monitor_enrich" | "monitor_inference" | "monitor_ml" | "monitor_rollup" | "monitor_snapshot" | "monitor_stats" | "monitor_text_structure" | "monitor_transform" | "monitor_watcher" | "none" | "post_behavioral_analytics_event" | "read_ccr" | "read_fleet_secrets" | "read_ilm" | "read_pipeline" | "read_security" | "read_slm" | "transport_client" | "write_connector_secrets" | "write_fleet_secrets")[])*: A list of cluster privileges. These privileges define the cluster-level actions for users with this role.
** *`global` (Optional, Record<string, User-defined value>)*: An object defining global privileges. A global privilege is a form of cluster privilege that is request-aware. Support for global privileges is currently limited to the management of application privileges.
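A hedged sketch of a role definition (the role, privilege, and index names are illustrative; the `indices` entries follow the role descriptor format):

[source,ts]
----
// Grant cluster-wide monitoring plus read access to two indices.
client.security.putRole({
  name: 'my_monitor_role',
  cluster: ['monitor'],
  indices: [
    { names: ['index1', 'index2'], privileges: ['read'] }
  ]
})
----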
@ -10984,7 +11070,7 @@ client.snapshot.verifyRepository({ repository })
=== sql
[discrete]
==== clear_cursor
Clears the SQL cursor
Clear an SQL search cursor.

{ref}/clear-sql-cursor-api.html[Endpoint documentation]
[source,ts]
@ -11000,7 +11086,9 @@ client.sql.clearCursor({ cursor })

[discrete]
==== delete_async
Deletes an async SQL search or a stored synchronous SQL search. If the search is still running, the API cancels it.
Delete an async SQL search.
Delete an async SQL search or a stored synchronous SQL search.
If the search is still running, the API cancels it.

{ref}/delete-async-sql-search-api.html[Endpoint documentation]
[source,ts]
@ -11016,7 +11104,8 @@ client.sql.deleteAsync({ id })

[discrete]
==== get_async
Returns the current status and available results for an async SQL search or stored synchronous SQL search
Get async SQL search results.
Get the current status and available results for an async SQL search or stored synchronous SQL search.

{ref}/get-async-sql-search-api.html[Endpoint documentation]
[source,ts]
@ -11039,7 +11128,8 @@ meaning the request waits for complete search results.

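A hedged polling sketch (the search identifier is a placeholder for the `id` returned by the original `client.sql.query` call):

[source,ts]
----
// Wait up to 2 seconds for results and extend the retention period.
client.sql.getAsync({
  id: 'my-async-search-id',
  wait_for_completion_timeout: '2s',
  keep_alive: '5d'
})
----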
[discrete]
==== get_async_status
Returns the current status of an async SQL search or a stored synchronous SQL search
Get the async SQL search status.
Get the current status of an async SQL search or a stored synchronous SQL search.

{ref}/get-async-sql-search-status-api.html[Endpoint documentation]
[source,ts]
@ -11055,7 +11145,8 @@ client.sql.getAsyncStatus({ id })

[discrete]
==== query
Executes a SQL request
Get SQL search results.
Run an SQL request.

{ref}/sql-search-api.html[Endpoint documentation]
[source,ts]
@ -11090,7 +11181,8 @@ precedence over mapped fields with the same name.

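A hedged sketch (the index and query are illustrative; `fetch_size` controls result paging):

[source,ts]
----
// Run an SQL query, returning at most 5 rows per page.
client.sql.query({
  query: 'SELECT * FROM library ORDER BY page_count DESC',
  fetch_size: 5
})
----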
[discrete]
==== translate
Translates SQL into Elasticsearch queries
Translate SQL into Elasticsearch queries.
Translate an SQL search into a search API request containing Query DSL.

{ref}/sql-translate-api.html[Endpoint documentation]
[source,ts]
@ -11140,7 +11232,7 @@ client.ssl.certificates()
=== synonyms
[discrete]
==== delete_synonym
Deletes a synonym set
Delete a synonym set.

{ref}/delete-synonyms-set.html[Endpoint documentation]
[source,ts]
@ -11156,7 +11248,8 @@ client.synonyms.deleteSynonym({ id })

[discrete]
==== delete_synonym_rule
Deletes a synonym rule in a synonym set
Delete a synonym rule.
Delete a synonym rule from a synonym set.

{ref}/delete-synonym-rule.html[Endpoint documentation]
[source,ts]
@ -11173,7 +11266,7 @@ client.synonyms.deleteSynonymRule({ set_id, rule_id })

[discrete]
==== get_synonym
Retrieves a synonym set
Get a synonym set.

{ref}/get-synonyms-set.html[Endpoint documentation]
[source,ts]
@ -11191,7 +11284,8 @@ client.synonyms.getSynonym({ id })

[discrete]
==== get_synonym_rule
Retrieves a synonym rule from a synonym set
Get a synonym rule.
Get a synonym rule from a synonym set.

{ref}/get-synonym-rule.html[Endpoint documentation]
[source,ts]
@ -11208,7 +11302,8 @@ client.synonyms.getSynonymRule({ set_id, rule_id })

[discrete]
==== get_synonyms_sets
Retrieves a summary of all defined synonym sets
Get all synonym sets.
Get a summary of all defined synonym sets.

{ref}/list-synonyms-sets.html[Endpoint documentation]
[source,ts]
@ -11225,7 +11320,9 @@ client.synonyms.getSynonymsSets({ ... })

[discrete]
==== put_synonym
Creates or updates a synonym set.
Create or update a synonym set.
Synonyms sets are limited to a maximum of 10,000 synonym rules per set.
If you need to manage more synonym rules, you can create multiple synonym sets.

{ref}/put-synonyms-set.html[Endpoint documentation]
[source,ts]
@ -11242,7 +11339,8 @@ client.synonyms.putSynonym({ id, synonyms_set })

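A hedged sketch following the signature above (the set and rule identifiers are illustrative):

[source,ts]
----
// Create a synonym set containing a single rule. A set may hold
// up to 10,000 rules.
client.synonyms.putSynonym({
  id: 'my-synonyms-set',
  synonyms_set: [
    { id: 'rule-1', synonyms: 'hello, hi, howdy' }
  ]
})
----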
[discrete]
==== put_synonym_rule
Creates or updates a synonym rule in a synonym set
Create or update a synonym rule.
Create or update a synonym rule in a synonym set.

{ref}/put-synonym-rule.html[Endpoint documentation]
[source,ts]