Auto-generated code for main (#2320)

Co-authored-by: Josh Mock <joshua.mock@elastic.co>
Authored-by: Elastic Machine
Date: 2024-08-07 02:31:39 +10:00
Committed-by: GitHub
parent 84ab2a787d
commit 99cefe8b19
7 changed files with 2006 additions and 93 deletions


@ -1624,8 +1624,8 @@ client.autoscaling.putAutoscalingPolicy({ name })
Get aliases.
Retrieves the cluster's index aliases, including filter and routing information.
The API does not return data stream aliases.
> info
> CAT APIs are only intended for human consumption using the command line or the Kibana console. They are not intended for use by applications. For application consumption, use [the /_alias endpoints](#endpoint-alias).
CAT APIs are only intended for human consumption using the command line or the Kibana console. They are not intended for use by applications. For application consumption, use the aliases API.
{ref}/cat-alias.html[Endpoint documentation]
[source,ts]
@ -1663,9 +1663,9 @@ client.cat.allocation({ ... })
Get component templates.
Returns information about component templates in a cluster.
Component templates are building blocks for constructing index templates that specify index mappings, settings, and aliases.
> info
> CAT APIs are only intended for human consumption using the command line or Kibana console.
They are not intended for use by applications. For application consumption, use [the /_component_template endpoints](#endpoint-component-template).
CAT APIs are only intended for human consumption using the command line or Kibana console.
They are not intended for use by applications. For application consumption, use the get component template API.
{ref}/cat-component-templates.html[Endpoint documentation]
[source,ts]
@ -1684,9 +1684,9 @@ client.cat.componentTemplates({ ... })
Get a document count.
Provides quick access to a document count for a data stream, an index, or an entire cluster.
The document count only includes live documents, not deleted documents which have not yet been removed by the merge process.
> info
> CAT APIs are only intended for human consumption using the command line or Kibana console.
They are not intended for use by applications. For application consumption, use [the /_count endpoints](#endpoint-count).
CAT APIs are only intended for human consumption using the command line or Kibana console.
They are not intended for use by applications. For application consumption, use the count API.
{ref}/cat-count.html[Endpoint documentation]
[source,ts]
@ -1763,9 +1763,6 @@ client.cat.help()
==== indices
Get index information.
Returns high-level information about indices in a cluster, including backing indices for data streams.
> info
> CAT APIs are only intended for human consumption using the command line or Kibana console.
They are not intended for use by applications. For application consumption, use an index endpoint.
Use this request to get the following information for each index in a cluster:
- shard count
@ -1775,7 +1772,10 @@ Use this request to get the following information for each index in a cluster:
- total store size of all shards, including shard replicas
These metrics are retrieved directly from Lucene, which Elasticsearch uses internally to power indexing and search. As a result, all document counts include hidden nested documents.
To get an accurate count of Elasticsearch documents, use the [/_cat/count](#operation-cat-count) or [count](#endpoint-count) endpoints.
To get an accurate count of Elasticsearch documents, use the cat count or count APIs.
CAT APIs are only intended for human consumption using the command line or Kibana console.
They are not intended for use by applications. For application consumption, use an index endpoint.
{ref}/cat-indices.html[Endpoint documentation]
[source,ts]
@ -1813,10 +1813,9 @@ client.cat.master()
Get data frame analytics jobs.
Returns configuration and usage information about data frame analytics jobs.
> info
> CAT APIs are only intended for human consumption using the Kibana
CAT APIs are only intended for human consumption using the Kibana
console or command line. They are not intended for use by applications. For
application consumption, use [the /_ml/data_frame/analytics endpoints](#endpoint-ml).
application consumption, use the get data frame analytics jobs statistics API.
{ref}/cat-dfanalytics.html[Endpoint documentation]
[source,ts]
@ -1844,10 +1843,9 @@ This API returns a maximum of 10,000 datafeeds.
If the Elasticsearch security features are enabled, you must have `monitor_ml`, `monitor`, `manage_ml`, or `manage`
cluster privileges to use this API.
> info
> CAT APIs are only intended for human consumption using the Kibana
CAT APIs are only intended for human consumption using the Kibana
console or command line. They are not intended for use by applications. For
application consumption, use [the /_ml/datafeeds endpoints](#endpoint-ml).
application consumption, use the get datafeed statistics API.
{ref}/cat-datafeeds.html[Endpoint documentation]
[source,ts]
@ -1881,10 +1879,9 @@ This API returns a maximum of 10,000 jobs.
If the Elasticsearch security features are enabled, you must have `monitor_ml`,
`monitor`, `manage_ml`, or `manage` cluster privileges to use this API.
> info
> CAT APIs are only intended for human consumption using the Kibana
CAT APIs are only intended for human consumption using the Kibana
console or command line. They are not intended for use by applications. For
application consumption, use [the /_ml/anomaly_detectors endpoints](#endpoint-ml).
application consumption, use the get anomaly detection job statistics API.
{ref}/cat-anomaly-detectors.html[Endpoint documentation]
[source,ts]
@ -1916,10 +1913,9 @@ matches.
Get trained models.
Returns configuration and usage information about inference trained models.
> info
> CAT APIs are only intended for human consumption using the Kibana
CAT APIs are only intended for human consumption using the Kibana
console or command line. They are not intended for use by applications. For
application consumption, use [the /_ml/trained_models endpoints](#endpoint-ml).
application consumption, use the get trained models statistics API.
{ref}/cat-trained-model.html[Endpoint documentation]
[source,ts]
@ -2159,10 +2155,9 @@ Accepts wildcard expressions.
Get transforms.
Returns configuration and usage information about transforms.
> info
> CAT APIs are only intended for human consumption using the Kibana
CAT APIs are only intended for human consumption using the Kibana
console or command line. They are not intended for use by applications. For
application consumption, use [the /_transform endpoints](#endpoint-transform).
application consumption, use the get transform statistics API.
{ref}/cat-transforms.html[Endpoint documentation]
[source,ts]
@ -2582,7 +2577,7 @@ client.cluster.health({ ... })
==== Arguments
* *Request (object):*
** *`index` (Optional, string | string[])*: List of data streams, indices, and index aliases used to limit the request. Wildcard expressions (*) are supported. To target all data streams and indices in a cluster, omit this parameter or use _all or *.
** *`index` (Optional, string | string[])*: List of data streams, indices, and index aliases used to limit the request. Wildcard expressions (`*`) are supported. To target all data streams and indices in a cluster, omit this parameter or use _all or `*`.
** *`expand_wildcards` (Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[])*: Whether to expand wildcard expression to concrete indices that are open, closed or both.
** *`level` (Optional, Enum("cluster" | "indices" | "shards"))*: Can be one of cluster, indices or shards. Controls the details level of the health information returned.
** *`local` (Optional, boolean)*: If true, the request retrieves information from the local node only. Defaults to false, which means information is retrieved from the master node.
@ -2806,6 +2801,481 @@ client.cluster.stats({ ... })
If a node does not respond before its timeout expires, the response does not include its stats.
However, timed-out nodes are included in the response's `_nodes.failed` property. Defaults to no timeout.
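As a sketch, a stats request with a capped per-node wait might look like this (the `30s` value is illustrative):
[source,ts]
----
// Illustrative only: collect cluster stats, waiting at most 30 seconds per node.
// Nodes that time out are counted in the response's `_nodes.failed` property.
const stats = await client.cluster.stats({ timeout: '30s' })
----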
[discrete]
=== connector
[discrete]
==== check_in
Updates the last_seen field in the connector and sets it to the current timestamp
{ref}/check-in-connector-api.html[Endpoint documentation]
[source,ts]
----
client.connector.checkIn({ connector_id })
----
[discrete]
==== Arguments
* *Request (object):*
** *`connector_id` (string)*: The unique identifier of the connector to be checked in
[discrete]
==== delete
Deletes a connector.
{ref}/delete-connector-api.html[Endpoint documentation]
[source,ts]
----
client.connector.delete({ connector_id })
----
[discrete]
==== Arguments
* *Request (object):*
** *`connector_id` (string)*: The unique identifier of the connector to be deleted
** *`delete_sync_jobs` (Optional, boolean)*: A flag indicating if associated sync jobs should be also removed. Defaults to false.
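For example, a deletion that also removes the connector's sync jobs might look like this (the connector ID is illustrative):
[source,ts]
----
// Illustrative only: delete a connector together with its associated sync jobs.
await client.connector.delete({
  connector_id: 'my-connector',
  delete_sync_jobs: true
})
----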
[discrete]
==== get
Retrieves a connector.
{ref}/get-connector-api.html[Endpoint documentation]
[source,ts]
----
client.connector.get({ connector_id })
----
[discrete]
==== Arguments
* *Request (object):*
** *`connector_id` (string)*: The unique identifier of the connector
[discrete]
==== list
Returns existing connectors.
{ref}/list-connector-api.html[Endpoint documentation]
[source,ts]
----
client.connector.list({ ... })
----
[discrete]
==== Arguments
* *Request (object):*
** *`from` (Optional, number)*: Starting offset (default: 0)
** *`size` (Optional, number)*: Specifies a max number of results to get
** *`index_name` (Optional, string | string[])*: A list of connector index names to fetch connector documents for
** *`connector_name` (Optional, string | string[])*: A list of connector names to fetch connector documents for
** *`service_type` (Optional, string | string[])*: A list of connector service types to fetch connector documents for
** *`query` (Optional, string)*: A wildcard query string that filters connectors with matching name, description or index name
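A filtered listing might look like the following sketch (the service type and paging values are illustrative, and the response is assumed to expose `count` and `results`):
[source,ts]
----
// Illustrative only: page through connectors of one service type.
const { count, results } = await client.connector.list({
  service_type: 'sharepoint_online',
  from: 0,
  size: 20
})
console.log(`${count} connectors total, showing ${results.length}`)
----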
[discrete]
==== post
Creates a connector.
{ref}/create-connector-api.html[Endpoint documentation]
[source,ts]
----
client.connector.post({ ... })
----
[discrete]
==== Arguments
* *Request (object):*
** *`description` (Optional, string)*
** *`index_name` (Optional, string)*
** *`is_native` (Optional, boolean)*
** *`language` (Optional, string)*
** *`name` (Optional, string)*
** *`service_type` (Optional, string)*
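A minimal creation request might look like this sketch (all field values are illustrative):
[source,ts]
----
// Illustrative only: create a connector and let Elasticsearch generate its ID.
const created = await client.connector.post({
  index_name: 'search-docs',
  name: 'Docs connector',
  service_type: 'google_drive'
})
----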
[discrete]
==== put
Creates or updates a connector.
{ref}/create-connector-api.html[Endpoint documentation]
[source,ts]
----
client.connector.put({ ... })
----
[discrete]
==== Arguments
* *Request (object):*
** *`connector_id` (Optional, string)*: The unique identifier of the connector to be created or updated. ID is auto-generated if not provided.
** *`description` (Optional, string)*
** *`index_name` (Optional, string)*
** *`is_native` (Optional, boolean)*
** *`language` (Optional, string)*
** *`name` (Optional, string)*
** *`service_type` (Optional, string)*
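For example, creating or overwriting a connector under a known ID might look like this (all values are illustrative):
[source,ts]
----
// Illustrative only: upsert a connector at a caller-chosen ID.
await client.connector.put({
  connector_id: 'my-connector',
  index_name: 'search-docs',
  name: 'Docs connector',
  service_type: 'google_drive'
})
----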
[discrete]
==== sync_job_cancel
Cancels a connector sync job.
{ref}/cancel-connector-sync-job-api.html[Endpoint documentation]
[source,ts]
----
client.connector.syncJobCancel({ connector_sync_job_id })
----
[discrete]
==== Arguments
* *Request (object):*
** *`connector_sync_job_id` (string)*: The unique identifier of the connector sync job
[discrete]
==== sync_job_check_in
Checks in a connector sync job (refreshes 'last_seen').
{ref}/check-in-connector-sync-job-api.html[Endpoint documentation]
[source,ts]
----
client.connector.syncJobCheckIn()
----
[discrete]
==== sync_job_claim
Claims a connector sync job.
[source,ts]
----
client.connector.syncJobClaim()
----
[discrete]
==== sync_job_delete
Deletes a connector sync job.
{ref}/delete-connector-sync-job-api.html[Endpoint documentation]
[source,ts]
----
client.connector.syncJobDelete({ connector_sync_job_id })
----
[discrete]
==== Arguments
* *Request (object):*
** *`connector_sync_job_id` (string)*: The unique identifier of the connector sync job to be deleted
[discrete]
==== sync_job_error
Sets an error for a connector sync job.
{ref}/set-connector-sync-job-error-api.html[Endpoint documentation]
[source,ts]
----
client.connector.syncJobError()
----
[discrete]
==== sync_job_get
Retrieves a connector sync job.
{ref}/get-connector-sync-job-api.html[Endpoint documentation]
[source,ts]
----
client.connector.syncJobGet({ connector_sync_job_id })
----
[discrete]
==== Arguments
* *Request (object):*
** *`connector_sync_job_id` (string)*: The unique identifier of the connector sync job
[discrete]
==== sync_job_list
Lists connector sync jobs.
{ref}/list-connector-sync-jobs-api.html[Endpoint documentation]
[source,ts]
----
client.connector.syncJobList({ ... })
----
[discrete]
==== Arguments
* *Request (object):*
** *`from` (Optional, number)*: Starting offset (default: 0)
** *`size` (Optional, number)*: Specifies a max number of results to get
** *`status` (Optional, Enum("canceling" | "canceled" | "completed" | "error" | "in_progress" | "pending" | "suspended"))*: A sync job status to fetch connector sync jobs for
** *`connector_id` (Optional, string)*: A connector id to fetch connector sync jobs for
** *`job_type` (Optional, Enum("full" | "incremental" | "access_control") | Enum("full" | "incremental" | "access_control")[])*: A list of job types to fetch the sync jobs for
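A sketch of a filtered job listing (the connector ID and paging values are illustrative):
[source,ts]
----
// Illustrative only: fetch recent incremental sync jobs for one connector.
const jobs = await client.connector.syncJobList({
  connector_id: 'my-connector',
  job_type: 'incremental',
  size: 10
})
----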
[discrete]
==== sync_job_post
Creates a connector sync job.
{ref}/create-connector-sync-job-api.html[Endpoint documentation]
[source,ts]
----
client.connector.syncJobPost({ id })
----
[discrete]
==== Arguments
* *Request (object):*
** *`id` (string)*: The id of the associated connector
** *`job_type` (Optional, Enum("full" | "incremental" | "access_control"))*
** *`trigger_method` (Optional, Enum("on_demand" | "scheduled"))*
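For example, triggering an on-demand full sync might look like this (the connector ID is illustrative):
[source,ts]
----
// Illustrative only: start a full, on-demand sync job.
await client.connector.syncJobPost({
  id: 'my-connector',
  job_type: 'full',
  trigger_method: 'on_demand'
})
----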
[discrete]
==== sync_job_update_stats
Updates the stats fields in the connector sync job document.
{ref}/set-connector-sync-job-stats-api.html[Endpoint documentation]
[source,ts]
----
client.connector.syncJobUpdateStats()
----
[discrete]
==== update_active_filtering
Activates the valid draft filtering for a connector.
{ref}/update-connector-filtering-api.html[Endpoint documentation]
[source,ts]
----
client.connector.updateActiveFiltering({ connector_id })
----
[discrete]
==== Arguments
* *Request (object):*
** *`connector_id` (string)*: The unique identifier of the connector to be updated
[discrete]
==== update_api_key_id
Updates the API key id in the connector document
{ref}/update-connector-api-key-id-api.html[Endpoint documentation]
[source,ts]
----
client.connector.updateApiKeyId({ connector_id })
----
[discrete]
==== Arguments
* *Request (object):*
** *`connector_id` (string)*: The unique identifier of the connector to be updated
** *`api_key_id` (Optional, string)*
** *`api_key_secret_id` (Optional, string)*
[discrete]
==== update_configuration
Updates the configuration field in the connector document
{ref}/update-connector-configuration-api.html[Endpoint documentation]
[source,ts]
----
client.connector.updateConfiguration({ connector_id })
----
[discrete]
==== Arguments
* *Request (object):*
** *`connector_id` (string)*: The unique identifier of the connector to be updated
** *`configuration` (Optional, Record<string, { category, default_value, depends_on, display, label, options, order, placeholder, required, sensitive, tooltip, type, ui_restrictions, validations, value }>)*
** *`values` (Optional, Record<string, User-defined value>)*
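A sketch of a partial update via `values` (the keys are service-specific; `host` and `port` are illustrative):
[source,ts]
----
// Illustrative only: patch individual configuration values by key.
await client.connector.updateConfiguration({
  connector_id: 'my-connector',
  values: {
    host: 'https://example.com',
    port: 9400
  }
})
----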
[discrete]
==== update_error
Updates the error field in the connector document
{ref}/update-connector-error-api.html[Endpoint documentation]
[source,ts]
----
client.connector.updateError({ connector_id, error })
----
[discrete]
==== Arguments
* *Request (object):*
** *`connector_id` (string)*: The unique identifier of the connector to be updated
** *`error` (T | null)*
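For example, recording and then clearing an error might look like this (the message is illustrative):
[source,ts]
----
// Illustrative only: record a sync failure, then clear it by passing null.
await client.connector.updateError({
  connector_id: 'my-connector',
  error: 'Authentication expired'
})
await client.connector.updateError({ connector_id: 'my-connector', error: null })
----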
[discrete]
==== update_features
Updates the connector features in the connector document.
{ref}/update-connector-features-api.html[Endpoint documentation]
[source,ts]
----
client.connector.updateFeatures()
----
[discrete]
==== update_filtering
Updates the filtering field in the connector document
{ref}/update-connector-filtering-api.html[Endpoint documentation]
[source,ts]
----
client.connector.updateFiltering({ connector_id })
----
[discrete]
==== Arguments
* *Request (object):*
** *`connector_id` (string)*: The unique identifier of the connector to be updated
** *`filtering` (Optional, { active, domain, draft }[])*
** *`rules` (Optional, { created_at, field, id, order, policy, rule, updated_at, value }[])*
** *`advanced_snippet` (Optional, { created_at, updated_at, value })*
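A sketch of a basic include rule (field names follow the rule record above; the timestamp fields are omitted on the assumption that the server populates them, and the `policy`/`rule` values are illustrative):
[source,ts]
----
// Illustrative only: keep documents whose `status` field equals 'published'.
await client.connector.updateFiltering({
  connector_id: 'my-connector',
  rules: [
    {
      id: 'include-published',
      field: 'status',
      policy: 'include',
      rule: 'equals',
      value: 'published',
      order: 0
    }
  ]
})
----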
[discrete]
==== update_filtering_validation
Updates the draft filtering validation info for a connector.
[source,ts]
----
client.connector.updateFilteringValidation({ connector_id, validation })
----
[discrete]
==== Arguments
* *Request (object):*
** *`connector_id` (string)*: The unique identifier of the connector to be updated
** *`validation` ({ errors, state })*
[discrete]
==== update_index_name
Updates the index_name in the connector document
{ref}/update-connector-index-name-api.html[Endpoint documentation]
[source,ts]
----
client.connector.updateIndexName({ connector_id, index_name })
----
[discrete]
==== Arguments
* *Request (object):*
** *`connector_id` (string)*: The unique identifier of the connector to be updated
** *`index_name` (T | null)*
[discrete]
==== update_name
Updates the name and description fields in the connector document
{ref}/update-connector-name-description-api.html[Endpoint documentation]
[source,ts]
----
client.connector.updateName({ connector_id })
----
[discrete]
==== Arguments
* *Request (object):*
** *`connector_id` (string)*: The unique identifier of the connector to be updated
** *`name` (Optional, string)*
** *`description` (Optional, string)*
[discrete]
==== update_native
Updates the is_native flag in the connector document
[source,ts]
----
client.connector.updateNative({ connector_id, is_native })
----
[discrete]
==== Arguments
* *Request (object):*
** *`connector_id` (string)*: The unique identifier of the connector to be updated
** *`is_native` (boolean)*
[discrete]
==== update_pipeline
Updates the pipeline field in the connector document
{ref}/update-connector-pipeline-api.html[Endpoint documentation]
[source,ts]
----
client.connector.updatePipeline({ connector_id, pipeline })
----
[discrete]
==== Arguments
* *Request (object):*
** *`connector_id` (string)*: The unique identifier of the connector to be updated
** *`pipeline` ({ extract_binary_content, name, reduce_whitespace, run_ml_inference })*
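For example (the pipeline name is illustrative):
[source,ts]
----
// Illustrative only: point the connector at an ingest pipeline and enable binary extraction.
await client.connector.updatePipeline({
  connector_id: 'my-connector',
  pipeline: {
    name: 'my-ingest-pipeline',
    extract_binary_content: true,
    reduce_whitespace: true,
    run_ml_inference: false
  }
})
----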
[discrete]
==== update_scheduling
Updates the scheduling field in the connector document
{ref}/update-connector-scheduling-api.html[Endpoint documentation]
[source,ts]
----
client.connector.updateScheduling({ connector_id, scheduling })
----
[discrete]
==== Arguments
* *Request (object):*
** *`connector_id` (string)*: The unique identifier of the connector to be updated
** *`scheduling` ({ access_control, full, incremental })*
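A sketch that enables a daily full sync, assuming the individual sub-schedules are optional and take an `enabled` flag plus a cron `interval`:
[source,ts]
----
// Illustrative only: run a full sync every day at midnight.
await client.connector.updateScheduling({
  connector_id: 'my-connector',
  scheduling: {
    full: { enabled: true, interval: '0 0 0 * * ?' }
  }
})
----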
[discrete]
==== update_service_type
Updates the service type of the connector
{ref}/update-connector-service-type-api.html[Endpoint documentation]
[source,ts]
----
client.connector.updateServiceType({ connector_id, service_type })
----
[discrete]
==== Arguments
* *Request (object):*
** *`connector_id` (string)*: The unique identifier of the connector to be updated
** *`service_type` (string)*
[discrete]
==== update_status
Updates the status of the connector
{ref}/update-connector-status-api.html[Endpoint documentation]
[source,ts]
----
client.connector.updateStatus({ connector_id, status })
----
[discrete]
==== Arguments
* *Request (object):*
** *`connector_id` (string)*: The unique identifier of the connector to be updated
** *`status` (Enum("created" | "needs_configuration" | "configured" | "connected" | "error"))*
[discrete]
=== dangling_indices
[discrete]
@ -5517,7 +5987,8 @@ client.migration.postFeatureUpgrade()
=== ml
[discrete]
==== clear_trained_model_deployment_cache
Clears a trained model deployment cache on all nodes where the trained model is assigned.
Clear trained model deployment cache.
Cache will be cleared on all nodes where the trained model is assigned.
A trained model deployment may have an inference cache enabled.
As requests are handled by each allocated node, their responses may be cached on that individual node.
Calling this API clears the caches without restarting the deployment.
@ -5559,6 +6030,7 @@ client.ml.closeJob({ job_id })
[discrete]
==== delete_calendar
Delete a calendar.
Removes all scheduled events from a calendar, then deletes it.
{ref}/ml-delete-calendar.html[Endpoint documentation]
@ -5575,7 +6047,7 @@ client.ml.deleteCalendar({ calendar_id })
[discrete]
==== delete_calendar_event
Deletes scheduled events from a calendar.
Delete events from a calendar.
{ref}/ml-delete-calendar-event.html[Endpoint documentation]
[source,ts]
@ -5593,7 +6065,7 @@ You can obtain this identifier by using the get calendar events API.
[discrete]
==== delete_calendar_job
Deletes anomaly detection jobs from a calendar.
Delete anomaly jobs from a calendar.
{ref}/ml-delete-calendar-job.html[Endpoint documentation]
[source,ts]
@ -5611,7 +6083,7 @@ list of jobs or groups.
[discrete]
==== delete_data_frame_analytics
Deletes a data frame analytics job.
Delete a data frame analytics job.
{ref}/delete-dfanalytics.html[Endpoint documentation]
[source,ts]
@ -5629,7 +6101,7 @@ client.ml.deleteDataFrameAnalytics({ id })
[discrete]
==== delete_datafeed
Deletes an existing datafeed.
Delete a datafeed.
{ref}/ml-delete-datafeed.html[Endpoint documentation]
[source,ts]
@ -5650,7 +6122,7 @@ stopping and deleting the datafeed.
[discrete]
==== delete_expired_data
Deletes expired and unused machine learning data.
Delete expired ML data.
Deletes all job results, model snapshots and forecast data that have exceeded
their retention days period. Machine learning state documents that are not
associated with any job are also deleted.
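For example, the simplest form targets all anomaly detection jobs by omitting the job identifier:
[source,ts]
----
// Illustrative only: delete expired results, snapshots, and forecasts for all jobs.
await client.ml.deleteExpiredData()
----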
@ -5678,7 +6150,7 @@ behavior is no throttling.
[discrete]
==== delete_filter
Deletes a filter.
Delete a filter.
If an anomaly detection job references the filter, you cannot delete the
filter. You must update or delete the job before you can delete the filter.
@ -5696,7 +6168,7 @@ client.ml.deleteFilter({ filter_id })
[discrete]
==== delete_forecast
Deletes forecasts from a machine learning job.
Delete forecasts from a job.
By default, forecasts are retained for 14 days. You can specify a
different retention period with the `expires_in` parameter in the forecast
jobs API. The delete forecast API enables you to delete one or more
@ -5755,7 +6227,7 @@ job deletion completes.
[discrete]
==== delete_model_snapshot
Deletes an existing model snapshot.
Delete a model snapshot.
You cannot delete the active model snapshot. To delete that snapshot, first
revert to a different one. To identify the active model snapshot, refer to
the `model_snapshot_id` in the results from the get jobs API.
@ -5775,8 +6247,8 @@ client.ml.deleteModelSnapshot({ job_id, snapshot_id })
[discrete]
==== delete_trained_model
Deletes an existing trained inference model that is currently not referenced
by an ingest pipeline.
Delete an unreferenced trained model.
The request deletes a trained inference model that is not referenced by an ingest pipeline.
{ref}/delete-trained-models.html[Endpoint documentation]
[source,ts]
@ -5793,7 +6265,7 @@ client.ml.deleteTrainedModel({ model_id })
[discrete]
==== delete_trained_model_alias
Deletes a trained model alias.
Delete a trained model alias.
This API deletes an existing model alias that refers to a trained model. If
the model alias is missing or refers to a model other than the one identified
by the `model_id`, this API returns an error.
@ -5813,6 +6285,7 @@ client.ml.deleteTrainedModelAlias({ model_alias, model_id })
[discrete]
==== estimate_model_memory
Estimate job model memory usage.
Makes an estimation of the memory usage for an anomaly detection job model.
It is based on analysis configuration details for the job and cardinality
estimates for the fields it references.
@ -5844,7 +6317,7 @@ omitted from the request if no detectors have a `by_field_name`,
[discrete]
==== evaluate_data_frame
Evaluates the data frame analytics for an annotated index.
Evaluate data frame analytics.
The API packages together commonly used evaluation metrics for various types
of machine learning features. This has been designed for use on indexes
created by data frame analytics. Evaluation requires both a ground truth
@ -5866,7 +6339,7 @@ client.ml.evaluateDataFrame({ evaluation, index })
[discrete]
==== explain_data_frame_analytics
Explains a data frame analytics config.
Explain data frame analytics config.
This API provides explanations for a data frame analytics config that either
exists already or one that has not been created yet. The following
explanations are provided:
@ -9096,7 +9569,7 @@ client.security.putPrivileges({ ... })
==== Arguments
* *Request (object):*
** *`privileges` (Optional, Record<string, Record<string, User-defined value>>)*
** *`privileges` (Optional, Record<string, Record<string, { allocate, delete, downsample, freeze, forcemerge, migrate, readonly, rollover, set_priority, searchable_snapshot, shrink, unfollow, wait_for_snapshot }>>)*
** *`refresh` (Optional, Enum(true | false | "wait_for"))*: If `true` (the default) then refresh the affected shards to make this operation visible to search, if `wait_for` then wait for a refresh to make this operation visible to search, if `false` then do nothing with refreshes.
[discrete]


@ -45,7 +45,7 @@ export default class Cat {
}
/**
* Get aliases. Retrieves the cluster's index aliases, including filter and routing information. The API does not return data stream aliases. > info > CAT APIs are only intended for human consumption using the command line or the Kibana console. They are not intended for use by applications. For application consumption, use [the /_alias endpoints](#endpoint-alias).
* Get aliases. Retrieves the cluster's index aliases, including filter and routing information. The API does not return data stream aliases. CAT APIs are only intended for human consumption using the command line or the Kibana console. They are not intended for use by applications. For application consumption, use the aliases API.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/master/cat-alias.html | Elasticsearch API documentation}
*/
async aliases (this: That, params?: T.CatAliasesRequest | TB.CatAliasesRequest, options?: TransportRequestOptionsWithOutMeta): Promise<T.CatAliasesResponse>
@ -125,7 +125,7 @@ export default class Cat {
}
/**
* Get component templates. Returns information about component templates in a cluster. Component templates are building blocks for constructing index templates that specify index mappings, settings, and aliases. > info > CAT APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use [the /_component_template endpoints](#endpoint-component-template).
* Get component templates. Returns information about component templates in a cluster. Component templates are building blocks for constructing index templates that specify index mappings, settings, and aliases. CAT APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the get component template API.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/master/cat-component-templates.html | Elasticsearch API documentation}
*/
async componentTemplates (this: That, params?: T.CatComponentTemplatesRequest | TB.CatComponentTemplatesRequest, options?: TransportRequestOptionsWithOutMeta): Promise<T.CatComponentTemplatesResponse>
@ -165,7 +165,7 @@ export default class Cat {
}
/**
* Get a document count. Provides quick access to a document count for a data stream, an index, or an entire cluster. The document count only includes live documents, not deleted documents which have not yet been removed by the merge process. > info > CAT APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use [the /_count endpoints](#endpoint-count).
* Get a document count. Provides quick access to a document count for a data stream, an index, or an entire cluster. The document count only includes live documents, not deleted documents which have not yet been removed by the merge process. CAT APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the count API.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/master/cat-count.html | Elasticsearch API documentation}
*/
async count (this: That, params?: T.CatCountRequest | TB.CatCountRequest, options?: TransportRequestOptionsWithOutMeta): Promise<T.CatCountResponse>
@ -305,7 +305,7 @@ export default class Cat {
}
/**
* Get index information. Returns high-level information about indices in a cluster, including backing indices for data streams. > info > CAT APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use an index endpoint. Use this request to get the following information for each index in a cluster: - shard count - document count - deleted document count - primary store size - total store size of all shards, including shard replicas These metrics are retrieved directly from Lucene, which Elasticsearch uses internally to power indexing and search. As a result, all document counts include hidden nested documents. To get an accurate count of Elasticsearch documents, use the [/_cat/count](#operation-cat-count) or [count](#endpoint-count) endpoints.
* Get index information. Returns high-level information about indices in a cluster, including backing indices for data streams. Use this request to get the following information for each index in a cluster: - shard count - document count - deleted document count - primary store size - total store size of all shards, including shard replicas These metrics are retrieved directly from Lucene, which Elasticsearch uses internally to power indexing and search. As a result, all document counts include hidden nested documents. To get an accurate count of Elasticsearch documents, use the cat count or count APIs. CAT APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use an index endpoint.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/master/cat-indices.html | Elasticsearch API documentation}
*/
async indices (this: That, params?: T.CatIndicesRequest | TB.CatIndicesRequest, options?: TransportRequestOptionsWithOutMeta): Promise<T.CatIndicesResponse>
@ -375,7 +375,7 @@ export default class Cat {
}
/**
* Get data frame analytics jobs. Returns configuration and usage information about data frame analytics jobs. > info > CAT APIs are only intended for human consumption using the Kibana console or command line. They are not intended for use by applications. For application consumption, use [the /_ml/data_frame/analytics endpoints](#endpoint-ml).
* Get data frame analytics jobs. Returns configuration and usage information about data frame analytics jobs. CAT APIs are only intended for human consumption using the Kibana console or command line. They are not intended for use by applications. For application consumption, use the get data frame analytics jobs statistics API.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/master/cat-dfanalytics.html | Elasticsearch API documentation}
*/
async mlDataFrameAnalytics (this: That, params?: T.CatMlDataFrameAnalyticsRequest | TB.CatMlDataFrameAnalyticsRequest, options?: TransportRequestOptionsWithOutMeta): Promise<T.CatMlDataFrameAnalyticsResponse>
@ -415,7 +415,7 @@ export default class Cat {
}
/**
* Get datafeeds. Returns configuration and usage information about datafeeds. This API returns a maximum of 10,000 datafeeds. If the Elasticsearch security features are enabled, you must have `monitor_ml`, `monitor`, `manage_ml`, or `manage` cluster privileges to use this API. > info > CAT APIs are only intended for human consumption using the Kibana console or command line. They are not intended for use by applications. For application consumption, use [the /_ml/datafeeds endpoints](#endpoint-ml).
* Get datafeeds. Returns configuration and usage information about datafeeds. This API returns a maximum of 10,000 datafeeds. If the Elasticsearch security features are enabled, you must have `monitor_ml`, `monitor`, `manage_ml`, or `manage` cluster privileges to use this API. CAT APIs are only intended for human consumption using the Kibana console or command line. They are not intended for use by applications. For application consumption, use the get datafeed statistics API.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/master/cat-datafeeds.html | Elasticsearch API documentation}
*/
async mlDatafeeds (this: That, params?: T.CatMlDatafeedsRequest | TB.CatMlDatafeedsRequest, options?: TransportRequestOptionsWithOutMeta): Promise<T.CatMlDatafeedsResponse>
@ -455,7 +455,7 @@ export default class Cat {
}
/**
* Get anomaly detection jobs. Returns configuration and usage information for anomaly detection jobs. This API returns a maximum of 10,000 jobs. If the Elasticsearch security features are enabled, you must have `monitor_ml`, `monitor`, `manage_ml`, or `manage` cluster privileges to use this API. > info > CAT APIs are only intended for human consumption using the Kibana console or command line. They are not intended for use by applications. For application consumption, use [the /_ml/anomaly_detectors endpoints](#endpoint-ml).
* Get anomaly detection jobs. Returns configuration and usage information for anomaly detection jobs. This API returns a maximum of 10,000 jobs. If the Elasticsearch security features are enabled, you must have `monitor_ml`, `monitor`, `manage_ml`, or `manage` cluster privileges to use this API. CAT APIs are only intended for human consumption using the Kibana console or command line. They are not intended for use by applications. For application consumption, use the get anomaly detection job statistics API.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/master/cat-anomaly-detectors.html | Elasticsearch API documentation}
*/
async mlJobs (this: That, params?: T.CatMlJobsRequest | TB.CatMlJobsRequest, options?: TransportRequestOptionsWithOutMeta): Promise<T.CatMlJobsResponse>
@ -495,7 +495,7 @@ export default class Cat {
}
/**
* Get trained models. Returns configuration and usage information about inference trained models. > info > CAT APIs are only intended for human consumption using the Kibana console or command line. They are not intended for use by applications. For application consumption, use [the /_ml/trained_models endpoints](#endpoint-ml).
* Get trained models. Returns configuration and usage information about inference trained models. CAT APIs are only intended for human consumption using the Kibana console or command line. They are not intended for use by applications. For application consumption, use the get trained models statistics API.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/master/cat-trained-model.html | Elasticsearch API documentation}
*/
async mlTrainedModels (this: That, params?: T.CatMlTrainedModelsRequest | TB.CatMlTrainedModelsRequest, options?: TransportRequestOptionsWithOutMeta): Promise<T.CatMlTrainedModelsResponse>
@ -955,7 +955,7 @@ export default class Cat {
}
/**
* Get transforms. Returns configuration and usage information about transforms. > info > CAT APIs are only intended for human consumption using the Kibana console or command line. They are not intended for use by applications. For application consumption, use [the /_transform endpoints](#endpoint-transform).
* Get transforms. Returns configuration and usage information about transforms. CAT APIs are only intended for human consumption using the Kibana console or command line. They are not intended for use by applications. For application consumption, use the get transform statistics API.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/master/cat-transforms.html | Elasticsearch API documentation}
*/
async transforms (this: That, params?: T.CatTransformsRequest | TB.CatTransformsRequest, options?: TransportRequestOptionsWithOutMeta): Promise<T.CatTransformsResponse>

src/api/api/connector.ts (new file, 1,318 lines)

File diff suppressed because it is too large.


@ -45,7 +45,7 @@ export default class Ml {
}
/**
* Clears a trained model deployment cache on all nodes where the trained model is assigned. A trained model deployment may have an inference cache enabled. As requests are handled by each allocated node, their responses may be cached on that individual node. Calling this API clears the caches without restarting the deployment.
* Clear trained model deployment cache. Cache will be cleared on all nodes where the trained model is assigned. A trained model deployment may have an inference cache enabled. As requests are handled by each allocated node, their responses may be cached on that individual node. Calling this API clears the caches without restarting the deployment.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/master/clear-trained-model-deployment-cache.html | Elasticsearch API documentation}
*/
async clearTrainedModelDeploymentCache (this: That, params: T.MlClearTrainedModelDeploymentCacheRequest | TB.MlClearTrainedModelDeploymentCacheRequest, options?: TransportRequestOptionsWithOutMeta): Promise<T.MlClearTrainedModelDeploymentCacheResponse>
@ -121,7 +121,7 @@ export default class Ml {
}
/**
* Removes all scheduled events from a calendar, then deletes it.
* Delete a calendar. Removes all scheduled events from a calendar, then deletes it.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/master/ml-delete-calendar.html | Elasticsearch API documentation}
*/
async deleteCalendar (this: That, params: T.MlDeleteCalendarRequest | TB.MlDeleteCalendarRequest, options?: TransportRequestOptionsWithOutMeta): Promise<T.MlDeleteCalendarResponse>
@ -153,7 +153,7 @@ export default class Ml {
}
/**
* Deletes scheduled events from a calendar.
* Delete events from a calendar.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/master/ml-delete-calendar-event.html | Elasticsearch API documentation}
*/
async deleteCalendarEvent (this: That, params: T.MlDeleteCalendarEventRequest | TB.MlDeleteCalendarEventRequest, options?: TransportRequestOptionsWithOutMeta): Promise<T.MlDeleteCalendarEventResponse>
@ -186,7 +186,7 @@ export default class Ml {
}
/**
* Deletes anomaly detection jobs from a calendar.
* Delete anomaly jobs from a calendar.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/master/ml-delete-calendar-job.html | Elasticsearch API documentation}
*/
async deleteCalendarJob (this: That, params: T.MlDeleteCalendarJobRequest | TB.MlDeleteCalendarJobRequest, options?: TransportRequestOptionsWithOutMeta): Promise<T.MlDeleteCalendarJobResponse>
@ -219,7 +219,7 @@ export default class Ml {
}
/**
* Deletes a data frame analytics job.
* Delete a data frame analytics job.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/master/delete-dfanalytics.html | Elasticsearch API documentation}
*/
async deleteDataFrameAnalytics (this: That, params: T.MlDeleteDataFrameAnalyticsRequest | TB.MlDeleteDataFrameAnalyticsRequest, options?: TransportRequestOptionsWithOutMeta): Promise<T.MlDeleteDataFrameAnalyticsResponse>
@ -251,7 +251,7 @@ export default class Ml {
}
/**
* Deletes an existing datafeed.
* Delete a datafeed.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/master/ml-delete-datafeed.html | Elasticsearch API documentation}
*/
async deleteDatafeed (this: That, params: T.MlDeleteDatafeedRequest | TB.MlDeleteDatafeedRequest, options?: TransportRequestOptionsWithOutMeta): Promise<T.MlDeleteDatafeedResponse>
@ -283,7 +283,7 @@ export default class Ml {
}
/**
* Deletes expired and unused machine learning data. Deletes all job results, model snapshots and forecast data that have exceeded their retention days period. Machine learning state documents that are not associated with any job are also deleted. You can limit the request to a single or set of anomaly detection jobs by using a job identifier, a group name, a comma-separated list of jobs, or a wildcard expression. You can delete expired data for all anomaly detection jobs by using _all, by specifying * as the <job_id>, or by omitting the <job_id>.
* Delete expired ML data. Deletes all job results, model snapshots and forecast data that have exceeded their retention days period. Machine learning state documents that are not associated with any job are also deleted. You can limit the request to a single or set of anomaly detection jobs by using a job identifier, a group name, a comma-separated list of jobs, or a wildcard expression. You can delete expired data for all anomaly detection jobs by using _all, by specifying * as the <job_id>, or by omitting the <job_id>.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/master/ml-delete-expired-data.html | Elasticsearch API documentation}
*/
async deleteExpiredData (this: That, params?: T.MlDeleteExpiredDataRequest | TB.MlDeleteExpiredDataRequest, options?: TransportRequestOptionsWithOutMeta): Promise<T.MlDeleteExpiredDataResponse>
@ -335,7 +335,7 @@ export default class Ml {
}
/**
* Deletes a filter. If an anomaly detection job references the filter, you cannot delete the filter. You must update or delete the job before you can delete the filter.
* Delete a filter. If an anomaly detection job references the filter, you cannot delete the filter. You must update or delete the job before you can delete the filter.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/master/ml-delete-filter.html | Elasticsearch API documentation}
*/
async deleteFilter (this: That, params: T.MlDeleteFilterRequest | TB.MlDeleteFilterRequest, options?: TransportRequestOptionsWithOutMeta): Promise<T.MlDeleteFilterResponse>
@ -367,7 +367,7 @@ export default class Ml {
}
/**
* Deletes forecasts from a machine learning job. By default, forecasts are retained for 14 days. You can specify a different retention period with the `expires_in` parameter in the forecast jobs API. The delete forecast API enables you to delete one or more forecasts before they expire.
* Delete forecasts from a job. By default, forecasts are retained for 14 days. You can specify a different retention period with the `expires_in` parameter in the forecast jobs API. The delete forecast API enables you to delete one or more forecasts before they expire.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/master/ml-delete-forecast.html | Elasticsearch API documentation}
*/
async deleteForecast (this: That, params: T.MlDeleteForecastRequest | TB.MlDeleteForecastRequest, options?: TransportRequestOptionsWithOutMeta): Promise<T.MlDeleteForecastResponse>
@ -439,7 +439,7 @@ export default class Ml {
}
/**
* Deletes an existing model snapshot. You cannot delete the active model snapshot. To delete that snapshot, first revert to a different one. To identify the active model snapshot, refer to the `model_snapshot_id` in the results from the get jobs API.
* Delete a model snapshot. You cannot delete the active model snapshot. To delete that snapshot, first revert to a different one. To identify the active model snapshot, refer to the `model_snapshot_id` in the results from the get jobs API.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/master/ml-delete-snapshot.html | Elasticsearch API documentation}
*/
async deleteModelSnapshot (this: That, params: T.MlDeleteModelSnapshotRequest | TB.MlDeleteModelSnapshotRequest, options?: TransportRequestOptionsWithOutMeta): Promise<T.MlDeleteModelSnapshotResponse>
@ -472,7 +472,7 @@ export default class Ml {
}
/**
* Deletes an existing trained inference model that is currently not referenced by an ingest pipeline.
* Delete an unreferenced trained model. The request deletes a trained inference model that is not referenced by an ingest pipeline.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/master/delete-trained-models.html | Elasticsearch API documentation}
*/
async deleteTrainedModel (this: That, params: T.MlDeleteTrainedModelRequest | TB.MlDeleteTrainedModelRequest, options?: TransportRequestOptionsWithOutMeta): Promise<T.MlDeleteTrainedModelResponse>
@ -504,7 +504,7 @@ export default class Ml {
}
/**
* Deletes a trained model alias. This API deletes an existing model alias that refers to a trained model. If the model alias is missing or refers to a model other than the one identified by the `model_id`, this API returns an error.
* Delete a trained model alias. This API deletes an existing model alias that refers to a trained model. If the model alias is missing or refers to a model other than the one identified by the `model_id`, this API returns an error.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/master/delete-trained-models-aliases.html | Elasticsearch API documentation}
*/
async deleteTrainedModelAlias (this: That, params: T.MlDeleteTrainedModelAliasRequest | TB.MlDeleteTrainedModelAliasRequest, options?: TransportRequestOptionsWithOutMeta): Promise<T.MlDeleteTrainedModelAliasResponse>
@ -537,7 +537,7 @@ export default class Ml {
}
/**
* Makes an estimation of the memory usage for an anomaly detection job model. It is based on analysis configuration details for the job and cardinality estimates for the fields it references.
* Estimate job model memory usage. Makes an estimation of the memory usage for an anomaly detection job model. It is based on analysis configuration details for the job and cardinality estimates for the fields it references.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/master/ml-apis.html | Elasticsearch API documentation}
*/
async estimateModelMemory (this: That, params?: T.MlEstimateModelMemoryRequest | TB.MlEstimateModelMemoryRequest, options?: TransportRequestOptionsWithOutMeta): Promise<T.MlEstimateModelMemoryResponse>
@ -579,7 +579,7 @@ export default class Ml {
}
/**
* Evaluates the data frame analytics for an annotated index. The API packages together commonly used evaluation metrics for various types of machine learning features. This has been designed for use on indexes created by data frame analytics. Evaluation requires both a ground truth field and an analytics result field to be present.
* Evaluate data frame analytics. The API packages together commonly used evaluation metrics for various types of machine learning features. This has been designed for use on indexes created by data frame analytics. Evaluation requires both a ground truth field and an analytics result field to be present.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/master/evaluate-dfanalytics.html | Elasticsearch API documentation}
*/
async evaluateDataFrame (this: That, params: T.MlEvaluateDataFrameRequest | TB.MlEvaluateDataFrameRequest, options?: TransportRequestOptionsWithOutMeta): Promise<T.MlEvaluateDataFrameResponse>
@ -620,7 +620,7 @@ export default class Ml {
}
/**
* Explains a data frame analytics config. This API provides explanations for a data frame analytics config that either exists already or one that has not been created yet. The following explanations are provided: * which fields are included or not in the analysis and why, * how much memory is estimated to be required. The estimate can be used when deciding the appropriate value for model_memory_limit setting later on. If you have object fields or fields that are excluded via source filtering, they are not included in the explanation.
* Explain data frame analytics config. This API provides explanations for a data frame analytics config that either exists already or one that has not been created yet. The following explanations are provided: * which fields are included or not in the analysis and why, * how much memory is estimated to be required. The estimate can be used when deciding the appropriate value for model_memory_limit setting later on. If you have object fields or fields that are excluded via source filtering, they are not included in the explanation.
* @see {@link http://www.elastic.co/guide/en/elasticsearch/reference/master/explain-dfanalytics.html | Elasticsearch API documentation}
*/
async explainDataFrameAnalytics (this: That, params?: T.MlExplainDataFrameAnalyticsRequest | TB.MlExplainDataFrameAnalyticsRequest, options?: TransportRequestOptionsWithOutMeta): Promise<T.MlExplainDataFrameAnalyticsResponse>


@ -35,6 +35,7 @@ import CcrApi from './api/ccr'
import clearScrollApi from './api/clear_scroll'
import closePointInTimeApi from './api/close_point_in_time'
import ClusterApi from './api/cluster'
import ConnectorApi from './api/connector'
import countApi from './api/count'
import createApi from './api/create'
import DanglingIndicesApi from './api/dangling_indices'
@ -123,6 +124,7 @@ export default interface API {
clearScroll: typeof clearScrollApi
closePointInTime: typeof closePointInTimeApi
cluster: ClusterApi
connector: ConnectorApi
count: typeof countApi
create: typeof createApi
danglingIndices: DanglingIndicesApi
@ -206,6 +208,7 @@ const kAutoscaling = Symbol('Autoscaling')
const kCat = Symbol('Cat')
const kCcr = Symbol('Ccr')
const kCluster = Symbol('Cluster')
const kConnector = Symbol('Connector')
const kDanglingIndices = Symbol('DanglingIndices')
const kEnrich = Symbol('Enrich')
const kEql = Symbol('Eql')
@ -248,6 +251,7 @@ export default class API {
[kCat]: symbol | null
[kCcr]: symbol | null
[kCluster]: symbol | null
[kConnector]: symbol | null
[kDanglingIndices]: symbol | null
[kEnrich]: symbol | null
[kEql]: symbol | null
@ -289,6 +293,7 @@ export default class API {
this[kCat] = null
this[kCcr] = null
this[kCluster] = null
this[kConnector] = null
this[kDanglingIndices] = null
this[kEnrich] = null
this[kEql] = null
@ -389,6 +394,9 @@ Object.defineProperties(API.prototype, {
cluster: {
get () { return this[kCluster] === null ? (this[kCluster] = new ClusterApi(this.transport)) : this[kCluster] }
},
connector: {
get () { return this[kConnector] === null ? (this[kConnector] = new ConnectorApi(this.transport)) : this[kConnector] }
},
danglingIndices: {
get () { return this[kDanglingIndices] === null ? (this[kDanglingIndices] = new DanglingIndicesApi(this.transport)) : this[kDanglingIndices] }
},


@ -9125,9 +9125,9 @@ export interface ConnectorConnectorConfigProperties {
required: boolean
sensitive: boolean
tooltip?: string | null
type: ConnectorConnectorFieldType
ui_restrictions: string[]
validations: ConnectorValidation[]
type?: ConnectorConnectorFieldType
ui_restrictions?: string[]
validations?: ConnectorValidation[]
value: any
}
@ -9989,22 +9989,51 @@ export interface GraphExploreResponse {
vertices: GraphVertex[]
}
export type IlmActions = any
export interface IlmConfigurations {
rollover?: IndicesRolloverRolloverConditions
forcemerge?: IlmForceMergeConfiguration
shrink?: IlmShrinkConfiguration
export interface IlmActions {
allocate?: IlmAllocateAction
delete?: IlmDeleteAction
downsample?: IlmDownsampleAction
freeze?: EmptyObject
forcemerge?: IlmForceMergeAction
migrate?: IlmMigrateAction
readonly?: EmptyObject
rollover?: IlmRolloverAction
set_priority?: IlmSetPriorityAction
searchable_snapshot?: IlmSearchableSnapshotAction
shrink?: IlmShrinkAction
unfollow?: EmptyObject
wait_for_snapshot?: IlmWaitForSnapshotAction
}
export interface IlmForceMergeConfiguration {
export interface IlmAllocateAction {
number_of_replicas?: integer
total_shards_per_node?: integer
include?: Record<string, string>
exclude?: Record<string, string>
require?: Record<string, string>
}
export interface IlmDeleteAction {
delete_searchable_snapshot?: boolean
}
export interface IlmDownsampleAction {
fixed_interval: DurationLarge
wait_timeout?: Duration
}
export interface IlmForceMergeAction {
max_num_segments: integer
index_codec?: string
}
export interface IlmMigrateAction {
enabled?: boolean
}
export interface IlmPhase {
actions?: IlmActions
min_age?: Duration | long
configurations?: IlmConfigurations
}
export interface IlmPhases {
@ -10020,8 +10049,36 @@ export interface IlmPolicy {
_meta?: Metadata
}
export interface IlmShrinkConfiguration {
number_of_shards: integer
export interface IlmRolloverAction {
max_size?: ByteSize
max_primary_shard_size?: ByteSize
max_age?: Duration
max_docs?: long
max_primary_shard_docs?: long
min_size?: ByteSize
min_primary_shard_size?: ByteSize
min_age?: Duration
min_docs?: long
min_primary_shard_docs?: long
}
export interface IlmSearchableSnapshotAction {
snapshot_repository: string
force_merge_index?: boolean
}
export interface IlmSetPriorityAction {
priority?: integer
}
export interface IlmShrinkAction {
number_of_shards?: integer
max_primary_shard_size?: ByteSize
allow_write_after_shrink?: boolean
}
export interface IlmWaitForSnapshotAction {
policy: string
}
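// Illustrative usage sketch (not part of this diff): with IlmActions now a concrete
// interface, a warm phase can be expressed as typed actions, e.g.:
//
//   const warm: IlmPhase = {
//     min_age: '7d',
//     actions: {
//       shrink: { number_of_shards: 1 },
//       forcemerge: { max_num_segments: 1 },
//       set_priority: { priority: 50 }
//     }
//   }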
export interface IlmDeleteLifecycleRequest extends RequestBase {


@ -9226,9 +9226,9 @@ export interface ConnectorConnectorConfigProperties {
required: boolean
sensitive: boolean
tooltip?: string | null
type: ConnectorConnectorFieldType
ui_restrictions: string[]
validations: ConnectorValidation[]
type?: ConnectorConnectorFieldType
ui_restrictions?: string[]
validations?: ConnectorValidation[]
value: any
}
@ -10154,22 +10154,51 @@ export interface GraphExploreResponse {
vertices: GraphVertex[]
}
export type IlmActions = any
export interface IlmConfigurations {
rollover?: IndicesRolloverRolloverConditions
forcemerge?: IlmForceMergeConfiguration
shrink?: IlmShrinkConfiguration
export interface IlmActions {
allocate?: IlmAllocateAction
delete?: IlmDeleteAction
downsample?: IlmDownsampleAction
freeze?: EmptyObject
forcemerge?: IlmForceMergeAction
migrate?: IlmMigrateAction
readonly?: EmptyObject
rollover?: IlmRolloverAction
set_priority?: IlmSetPriorityAction
searchable_snapshot?: IlmSearchableSnapshotAction
shrink?: IlmShrinkAction
unfollow?: EmptyObject
wait_for_snapshot?: IlmWaitForSnapshotAction
}
export interface IlmForceMergeConfiguration {
export interface IlmAllocateAction {
number_of_replicas?: integer
total_shards_per_node?: integer
include?: Record<string, string>
exclude?: Record<string, string>
require?: Record<string, string>
}
export interface IlmDeleteAction {
delete_searchable_snapshot?: boolean
}
export interface IlmDownsampleAction {
fixed_interval: DurationLarge
wait_timeout?: Duration
}
export interface IlmForceMergeAction {
max_num_segments: integer
index_codec?: string
}
export interface IlmMigrateAction {
enabled?: boolean
}
export interface IlmPhase {
actions?: IlmActions
min_age?: Duration | long
configurations?: IlmConfigurations
}
export interface IlmPhases {
@ -10185,8 +10214,36 @@ export interface IlmPolicy {
_meta?: Metadata
}
export interface IlmShrinkConfiguration {
number_of_shards: integer
export interface IlmRolloverAction {
max_size?: ByteSize
max_primary_shard_size?: ByteSize
max_age?: Duration
max_docs?: long
max_primary_shard_docs?: long
min_size?: ByteSize
min_primary_shard_size?: ByteSize
min_age?: Duration
min_docs?: long
min_primary_shard_docs?: long
}
export interface IlmSearchableSnapshotAction {
snapshot_repository: string
force_merge_index?: boolean
}
export interface IlmSetPriorityAction {
priority?: integer
}
export interface IlmShrinkAction {
number_of_shards?: integer
max_primary_shard_size?: ByteSize
allow_write_after_shrink?: boolean
}
export interface IlmWaitForSnapshotAction {
policy: string
}
export interface IlmDeleteLifecycleRequest extends RequestBase {