Compare commits


49 Commits

SHA1 Message Date
469c853a10 Bump to 8.17.1 (#2632) 2025-02-24 13:37:19 -06:00
01f4cf9ba7 Auto-generated API code (#2626) 2025-02-24 10:55:36 -06:00
85dea32310 Auto-generated API code (#2619) 2025-02-18 10:39:36 -06:00
528dd6b24a Auto-generated API code (#2608) 2025-02-10 13:07:52 -06:00
d540d7fdb2 Report correct transport connection type in telemetry (#2599) (#2603)
Fixes #2324

(cherry picked from commit 172180cb21)

Co-authored-by: Josh Mock <joshua.mock@elastic.co>
2025-02-03 13:37:59 -06:00
07f75a4d9d Auto-generated API code (#2596) 2025-02-03 12:52:56 -06:00
4e6cbf96aa Auto-generated API code (#2579) 2025-01-28 11:52:01 -06:00
aa7d327d20 Auto-generated code for 8.17 (#2567) 2025-01-13 10:07:15 -06:00
6cdb08757d Auto-generated code for 8.17 (#2550) 2025-01-07 12:52:21 -06:00
48f369fe82 Update dependency @elastic/request-converter to v8.17.0 (#2555) (#2559)
Co-authored-by: elastic-renovate-prod[bot] <174716857+elastic-renovate-prod[bot]@users.noreply.github.com>
(cherry picked from commit e688f36396)

Co-authored-by: elastic-renovate-prod[bot] <174716857+elastic-renovate-prod[bot]@users.noreply.github.com>
2025-01-06 12:44:35 -06:00
abcdd08b89 Backport #2543 (#2544) 2024-12-12 11:46:51 -06:00
b3e523ad57 Auto-generated code for 8.17 (#2528) 2024-12-10 17:41:02 +00:00
dd0a304641 [Backport 8.17] Checkout correct branch of generator (#2532)
(cherry picked from commit ed3cace127)

Co-authored-by: Josh Mock <joshua.mock@elastic.co>
2024-12-10 10:32:53 -06:00
00675bf260 [Backport 8.17] Codegen for 8.x clients should use the 8.x generator branch (#2518)
(cherry picked from commit 15b9ee2f06)

Co-authored-by: Josh Mock <joshua.mock@elastic.co>
2024-12-05 10:39:14 -06:00
75b7b07b3d Auto-generated code for 8.17 (#2503) 2024-12-02 12:11:27 -06:00
cc9d8569b2 [Backport 8.17] Update changelog to include 8.16.2 and 8.15.3 (#2496)
Co-authored-by: Josh Mock <joshua.mock@elastic.co>
2024-11-21 10:42:49 -06:00
e07b8ebf68 [Backport 8.17] Ignore tap artifacts (#2493)
Co-authored-by: Josh Mock <joshua.mock@elastic.co>
2024-11-21 10:22:58 -06:00
e18ba8b7af Add docstrings for Client class and related properties (#2484) (#2486)
Co-authored-by: Josh Mock <joshua.mock@elastic.co>
2024-11-21 10:20:22 -06:00
3f3e9bac1e [Backport 8.x] Add changelog for 8.16.1 (#2480)
Co-authored-by: Josh Mock <joshua.mock@elastic.co>
2024-11-18 13:33:47 -06:00
b3b9d40293 [Backport 8.x] Fix ECMAScript import (#2476)
Co-authored-by: Josh Mock <joshua.mock@elastic.co>
2024-11-18 12:34:14 -06:00
14cdd64f7d Update dependency @elastic/request-converter to v8.16.1 (#2469) (#2474)
Co-authored-by: elastic-renovate-prod[bot] <174716857+elastic-renovate-prod[bot]@users.noreply.github.com>
2024-11-18 11:23:45 -06:00
5d41135190 Auto-generated code for 8.x (#2471) 2024-11-18 11:21:08 -06:00
e4c4f1acb7 [Backport 8.x] Update integration test automation branches (#2465)
Co-authored-by: Josh Mock <joshua.mock@elastic.co>
2024-11-18 11:19:20 -06:00
ad3caf9e94 [Backport 8.x] Upgrade ts-standard (#2461)
Co-authored-by: Josh Mock <joshua.mock@elastic.co>
2024-11-13 10:45:23 -06:00
fcb421a54e [Backport 8.x] Update dependency typescript to v5 (#2458)
Co-authored-by: elastic-renovate-prod[bot] <174716857+elastic-renovate-prod[bot]@users.noreply.github.com>
2024-11-12 13:38:46 -06:00
71e6e7007a [Backport 8.x] Update dependency chai to v5 (#2457)
Co-authored-by: elastic-renovate-prod[bot] <174716857+elastic-renovate-prod[bot]@users.noreply.github.com>
2024-11-12 11:29:02 -06:00
dabe34dae8 [Backport 8.x] Address feedback and add clarity (#2451)
Co-authored-by: Marci W <333176+marciw@users.noreply.github.com>
2024-11-12 11:07:14 -06:00
346663f704 [Backport 8.x] Update dependency @types/split2 to v4 (#2448)
Co-authored-by: elastic-renovate-prod[bot] <174716857+elastic-renovate-prod[bot]@users.noreply.github.com>
2024-11-11 11:52:38 -06:00
32345dac41 [Backport 8.x] Update dependency @types/node to v22 (#2447)
Co-authored-by: elastic-renovate-prod[bot] <174716857+elastic-renovate-prod[bot]@users.noreply.github.com>
Co-authored-by: Josh Mock <joshua.mock@elastic.co>
2024-11-11 11:52:06 -06:00
6efbed6be1 Auto-generated code for 8.x (#2441) 2024-11-11 10:41:35 -06:00
8be306b82d [Backport 8.x] Add changelog for 8.15.2 (#2445)
Co-authored-by: Josh Mock <joshua.mock@elastic.co>
2024-11-11 09:49:50 -06:00
979475b542 Update dependency @types/node to v18.19.64 (#2421) (#2431)
Co-authored-by: elastic-renovate-prod[bot] <174716857+elastic-renovate-prod[bot]@users.noreply.github.com>
2024-11-06 13:10:34 -06:00
a829634f83 [Backport 8.x] Add _id to the result of helpers.search (#2434)
(cherry picked from commit 2455dac4e5)

Co-authored-by: Rami <72725910+ramikg@users.noreply.github.com>
2024-11-06 12:27:50 -06:00
dd9b38b051 [Backport 8.x] Add streaming support to Arrow helper (#2429)
Co-authored-by: Josh Mock <joshua.mock@elastic.co>
2024-11-04 15:48:30 -06:00
0e98719d60 Auto-generated code for 8.x (#2426) 2024-11-04 09:58:15 -06:00
7da9976777 Auto-generated code for 8.x (#2412)
Co-authored-by: Josh Mock <joshua.mock@elastic.co>
2024-10-28 15:12:33 -05:00
4ca358cad6 [Backport 8.x] Skip flaky test (#2418)
Co-authored-by: Josh Mock <joshua.mock@elastic.co>
2024-10-28 11:57:48 -05:00
cf2eda1ab3 [Backport 8.x] Don't use hash-based Docker image version (#2415)
Co-authored-by: Josh Mock <joshua.mock@elastic.co>
2024-10-28 11:27:40 -05:00
5d747eec0c [Backport 8.x] Pin dependencies (#2413)
Co-authored-by: elastic-renovate-prod[bot] <174716857+elastic-renovate-prod[bot]@users.noreply.github.com>
2024-10-28 11:26:29 -05:00
f38cbde243 [Backport 8.x] Don't generate coverage during standard unit test run (#2405)
Co-authored-by: Josh Mock <joshua.mock@elastic.co>
2024-10-24 12:07:39 -05:00
61c18a6ba5 [Backport 8.x] Upgrade tap to latest (#2402)
Co-authored-by: Josh Mock <joshua.mock@elastic.co>
2024-10-24 11:38:47 -05:00
432cd36879 [Backport 8.x] Basic helper for ES|QL's Apache Arrow output format (#2394)
Co-authored-by: Josh Mock <joshua.mock@elastic.co>
2024-10-23 08:46:40 -05:00
e7de86a1f2 [Backport 8.x] Upgrade transport to 8.8.1 (#2390)
Co-authored-by: Josh Mock <joshua.mock@elastic.co>
2024-10-21 11:57:53 -05:00
f23f77cc41 Manually backport #2375 to 8.x (#2389) 2024-10-21 11:40:05 -05:00
09b5c84d24 [Backport 8.x] Respect disablePrototypePoisoningProtection option (#2388)
Co-authored-by: Josh Mock <joshua.mock@elastic.co>
2024-10-21 11:34:12 -05:00
604d4aefa7 [Backport 8.x] Update changelog for 8.15.1 (#2387)
Co-authored-by: Josh Mock <joshua.mock@elastic.co>
2024-10-21 11:05:24 -05:00
e279b3ebfa [Backport 8.x] Add doc about timeout best practices (#2386)
Co-authored-by: Josh Mock <joshua.mock@elastic.co>
2024-10-21 11:04:55 -05:00
fceebae8ae Auto-generated code for 8.x (#2372) 2024-10-14 11:18:55 -05:00
e45ed28c05 Auto-generated code for 8.x (#2369) 2024-09-30 13:41:23 -05:00
257 changed files with 52769 additions and 57494 deletions

View File

@ -12,3 +12,5 @@ WORKDIR /usr/src/app
COPY package.json .
RUN npm install
COPY . .

View File

@ -25,3 +25,6 @@ USER ${BUILDER_UID}:${BUILDER_GID}
# install dependencies
COPY package.json .
RUN npm install
# copy project files
COPY . .

View File

@ -1,6 +1,20 @@
/*
* Copyright Elasticsearch B.V. and contributors
* SPDX-License-Identifier: Apache-2.0
* Licensed to Elasticsearch B.V. under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch B.V. licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
/* global $ argv */
@ -109,7 +123,7 @@ async function codegen (args) {
await $`rm -rf ${join(import.meta.url, '..', 'src', 'api')}`
await $`mkdir ${join(import.meta.url, '..', 'src', 'api')}`
await $`cp -R ${join(import.meta.url, '..', '..', 'elastic-client-generator-js', 'output')}/* ${join(import.meta.url, '..', 'src', 'api')}`
await $`mv ${join(import.meta.url, '..', 'src', 'api', 'reference.md')} ${join(import.meta.url, '..', 'docs', 'reference', 'api-reference.md')}`
await $`mv ${join(import.meta.url, '..', 'src', 'api', 'reference.asciidoc')} ${join(import.meta.url, '..', 'docs', 'reference.asciidoc')}`
await $`npm run build`
// run docs example generation

View File

@ -25,7 +25,7 @@ steps:
provider: "gcp"
image: family/core-ubuntu-2204
plugins:
- junit-annotate#v2.6.0:
- junit-annotate#v2.4.1:
artifacts: "junit-output/junit-*.xml"
job-uuid-file-pattern: "junit-(.*).xml"
fail-build-on-error: true

2
.github/make.sh vendored
View File

@ -65,7 +65,7 @@ codegen)
if [ -v "$VERSION" ] || [[ -z "$VERSION" ]]; then
# fall back to branch name or `main` if no VERSION is set
branch_name=$(git rev-parse --abbrev-ref HEAD)
if [[ "$branch_name" =~ ^[0-9]+\.([0-9]+|x) ]]; then
if [[ "$branch_name" =~ ^[0-9]+\.[0-9]+ ]]; then
echo -e "\033[36;1mTARGET: codegen -> No VERSION argument found, using branch name: \`$branch_name\`\033[0m"
VERSION="$branch_name"
else

26
.github/stale.yml vendored Normal file
View File

@ -0,0 +1,26 @@
# Number of days of inactivity before an issue becomes stale
daysUntilStale: 15
# Number of days of inactivity before a stale issue is closed
daysUntilClose: 7
# Issues with these labels will never be considered stale
exemptLabels:
- "discussion"
- "feature request"
- "bug"
- "todo"
- "good first issue"
# Label to use when marking an issue as stale
staleLabel: stale
# Comment to post when marking an issue as stale. Set to `false` to disable
markComment: |
We understand that this might be important for you, but this issue has been automatically marked as stale because it has not had recent activity either from our end or yours.
It will be closed if no further activity occurs, please write a comment if you would like to keep this going.
Note: in the past months we have built a new client, that has just landed in master. If you want to open an issue or a pr for the legacy client, you should do that in https://github.com/elastic/elasticsearch-js-legacy
# Comment to post when closing a stale issue. Set to `false` to disable
closeComment: false

View File

@ -1,19 +0,0 @@
name: docs-build
on:
push:
branches:
- main
pull_request_target: ~
merge_group: ~
jobs:
docs-preview:
uses: elastic/docs-builder/.github/workflows/preview-build.yml@main
with:
path-pattern: docs/**
permissions:
deployments: write
id-token: write
contents: read
pull-requests: read

View File

@ -1,14 +0,0 @@
name: docs-cleanup
on:
pull_request_target:
types:
- closed
jobs:
docs-preview:
uses: elastic/docs-builder/.github/workflows/preview-cleanup.yml@main
permissions:
contents: none
id-token: write
deployments: write

View File

@ -41,7 +41,7 @@ jobs:
persist-credentials: false
- name: Use Node.js ${{ matrix.node-version }}
uses: actions/setup-node@cdca7365b2dadb8aad0a33bc7601856ffabcc48e # v4
uses: actions/setup-node@39370e3970a6d050c480ffad4ff0ed4d3fdee5af # v4
with:
node-version: ${{ matrix.node-version }}
@ -71,7 +71,7 @@ jobs:
persist-credentials: false
- name: Use Node.js
uses: actions/setup-node@cdca7365b2dadb8aad0a33bc7601856ffabcc48e # v4
uses: actions/setup-node@39370e3970a6d050c480ffad4ff0ed4d3fdee5af # v4
with:
node-version: 22.x
@ -83,9 +83,6 @@ jobs:
run: |
npm run license-checker
- name: SPDX header check
run: npm run license-header
test-bun:
name: Test Bun
runs-on: ${{ matrix.os }}

View File

@ -16,45 +16,23 @@ jobs:
with:
persist-credentials: false
ref: ${{ github.event.inputs.branch }}
- uses: actions/setup-node@cdca7365b2dadb8aad0a33bc7601856ffabcc48e # v4
- uses: actions/setup-node@39370e3970a6d050c480ffad4ff0ed4d3fdee5af # v4
with:
node-version: "22.x"
registry-url: "https://registry.npmjs.org"
- run: npm install -g npm
- run: npm install
- run: npm test
- name: npm publish
run: |
version=$(jq -r .version package.json)
tag_meta=$(echo "$version" | cut -s -d '-' -f2)
if [[ -z "$tag_meta" ]]; then
npm publish --provenance --access public
else
tag=$(echo "$tag_meta" | cut -d '.' -f1)
npm publish --provenance --access public --tag "$tag"
fi
- run: npm publish --provenance --access public
env:
NODE_AUTH_TOKEN: ${{ secrets.NPM_TOKEN }}
- name: Publish version on GitHub
run: |
- run: |
version=$(jq -r .version package.json)
tag_meta=$(echo "$version" | cut -s -d '-' -f2)
if [[ -z "$tag_meta" ]]; then
gh release create \
-n "[Changelog](https://www.elastic.co/guide/en/elasticsearch/client/javascript-api/$BRANCH_NAME/changelog-client.html)"
--target "$BRANCH_NAME" \
--title "v$version" \
"v$version"
else
tag_main=$(echo "$version" | cut -d '-' -f1)
gh release create \
-n "This is a $tag_main pre-release. Changes may not be stable." \
--latest=false \
--prerelease \
--target "$BRANCH_NAME" \
--title "v$version" \
"v$version"
fi
gh release create \
-n "[Changelog](https://www.elastic.co/guide/en/elasticsearch/client/javascript-api/$BRANCH_NAME/changelog-client.html)" \
--target "$BRANCH_NAME" \
-t "v$version" \
"v$version"
env:
BRANCH_NAME: ${{ github.event.inputs.branch }}
GH_TOKEN: ${{ github.token }}

43
.github/workflows/serverless-patch.sh vendored Executable file
View File

@ -0,0 +1,43 @@
#!/usr/bin/env bash
set -exuo pipefail
merge_commit_sha=$(jq -r '.pull_request.merge_commit_sha' "$GITHUB_EVENT_PATH")
pull_request_id=$(jq -r '.pull_request.number' "$GITHUB_EVENT_PATH")
pr_shortcode="elastic/elasticsearch-js#$pull_request_id"
# generate patch file
cd "$GITHUB_WORKSPACE/stack"
git format-patch -1 --stdout "$merge_commit_sha" > /tmp/patch.diff
# set committer info
git config --global user.email "elasticmachine@users.noreply.github.com"
git config --global user.name "Elastic Machine"
# apply patch file
cd "$GITHUB_WORKSPACE/serverless"
git am -C1 --reject /tmp/patch.diff || git am --quit
# generate PR body comment
comment="Patch applied from $pr_shortcode"
# enumerate rejected patches in PR comment
has_rejects='false'
for f in ./**/*.rej; do
has_rejects='true'
comment="$comment
## Rejected patch \`$f\` must be resolved:
\`\`\`diff
$(cat "$f")
\`\`\`
"
done
# delete .rej files
rm -fv ./**/*.rej
# send data to output parameters
echo "$comment" > /tmp/pr_body
echo "PR_DRAFT=$has_rejects" >> "$GITHUB_OUTPUT"

53
.github/workflows/serverless-patch.yml vendored Normal file
View File

@ -0,0 +1,53 @@
---
name: Apply PR changes to serverless
on:
pull_request_target:
types:
- closed
- labeled
jobs:
apply-patch:
name: Apply patch
runs-on: ubuntu-latest
# Only react to merged PRs for security reasons.
# See https://docs.github.com/en/actions/using-workflows/events-that-trigger-workflows#pull_request_target.
if: >
github.event.pull_request.merged
&& (
(
github.event.action == 'closed'
&& contains(github.event.pull_request.labels.*.name, 'apply-to-serverless')
)
||
(
github.event.action == 'labeled'
&& github.event.label.name == 'apply-to-serverless'
)
)
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
with:
persist-credentials: false
repository: elastic/elasticsearch-js
ref: main
path: stack
fetch-depth: 0
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
with:
persist-credentials: false
repository: elastic/elasticsearch-serverless-js
ref: main
path: serverless
- name: Apply patch from stack to serverless
id: apply-patch
run: $GITHUB_WORKSPACE/stack/.github/workflows/serverless-patch.sh
- uses: peter-evans/create-pull-request@c5a7806660adbe173f04e3e038b0ccdcd758773c # v6
with:
token: ${{ secrets.GH_TOKEN }}
path: serverless
title: "Apply patch from elastic/elasticsearch-js#${{ github.event.pull_request.number }}"
commit-message: "Apply patch from elastic/elasticsearch-js#${{ github.event.pull_request.number }}"
body-path: /tmp/pr_body
draft: "${{ steps.apply-patch.outputs.PR_DRAFT }}"
add-paths: ":!*.rej"

View File

@ -1,21 +1,21 @@
---
name: "Close stale issues and PRs"
name: 'Close stale issues and PRs'
on:
schedule:
- cron: "30 1 * * *"
- cron: '30 1 * * *'
jobs:
stale:
runs-on: ubuntu-latest
steps:
- uses: actions/stale@5bef64f19d7facfb25b37b414482c7164d639639 # v9
- uses: actions/stale@1160a2240286f5da8ec72b1c0816ce2481aabf84 # v8
with:
stale-issue-label: stale
stale-pr-label: stale
days-before-stale: 90
days-before-close: 14
exempt-issue-labels: "good first issue,tracking"
exempt-issue-labels: 'good first issue'
close-issue-label: closed-stale
close-pr-label: closed-stale
stale-issue-message: "This issue is stale because it has been open 90 days with no activity. Remove the `stale` label, or leave a comment, or this will be closed in 14 days."
stale-pr-message: "This pull request is stale because it has been open 90 days with no activity. Remove the `stale` label, or leave a comment, or this will be closed in 14 days."
stale-issue-message: 'This issue is stale because it has been open 90 days with no activity. Remove the `stale` label, or leave a comment, or this will be closed in 14 days.'
stale-pr-message: 'This pull request is stale because it has been open 90 days with no activity. Remove the `stale` label, or leave a comment, or this will be closed in 14 days.'

View File

@ -28,9 +28,6 @@ spec:
spec:
repository: elastic/elasticsearch-js
pipeline_file: .buildkite/pipeline.yml
env:
ELASTIC_SLACK_NOTIFICATIONS_ENABLED: "true"
SLACK_NOTIFICATIONS_CHANNEL: "#devtools-notify-javascript"
teams:
devtools-team:
access_level: MANAGE_BUILD_AND_READ
@ -48,9 +45,6 @@ spec:
8_x:
branch: "8.x"
cronline: "@daily"
8_17:
branch: "8.17"
cronline: "@daily"
8_18:
branch: "8.18"
8_14:
branch: "8.16"
cronline: "@daily"

View File

@ -1,27 +1,25 @@
---
mapped_pages:
- https://www.elastic.co/guide/en/elasticsearch/client/javascript-api/current/advanced-config.html
---
[[advanced-config]]
=== Advanced configuration
# Advanced configuration [advanced-config]
If you need to customize the client behavior heavily, you are in the right place! The client enables you to customize the following internals:
If you need to customize the client behavior heavily, you are in the right
place! The client enables you to customize the following internals:
* `ConnectionPool` class
* `Connection` class
* `Serializer` class
::::{note}
For information about the `Transport` class, refer to [Transport](/reference/transport.md).
::::
NOTE: For information about the `Transport` class, refer to <<transport>>.
[discrete]
==== `ConnectionPool`
## `ConnectionPool` [_connectionpool]
This class is responsible for keeping in memory all the {es} Connection that you
are using. There is a single Connection for every node. The connection pool
handles the resurrection strategies and the updates of the pool.
This class is responsible for keeping in memory all the {{es}} Connection that you are using. There is a single Connection for every node. The connection pool handles the resurrection strategies and the updates of the pool.
```js
[source,js]
----
const { Client, ConnectionPool } = require('@elastic/elasticsearch')
class MyConnectionPool extends ConnectionPool {
@ -36,14 +34,19 @@ const client = new Client({
cloud: { id: '<cloud-id>' },
auth: { apiKey: 'base64EncodedKey' }
})
```
----
## `Connection` [_connection]
[discrete]
==== `Connection`
This class represents a single node; it holds all the information we have on the node, such as roles, id, URL, custom headers, and so on. The actual HTTP request is performed here, which means that if you want to swap the default HTTP client (Node.js core), you should override the `request` method of this class.
This class represents a single node, it holds every information we have on the
node, such as roles, id, URL, custom headers and so on. The actual HTTP request
is performed here, this means that if you want to swap the default HTTP client
(Node.js core), you should override the `request` method of this class.
```js
[source,js]
----
const { Client, BaseConnection } = require('@elastic/elasticsearch')
class MyConnection extends BaseConnection {
@ -57,19 +60,22 @@ const client = new Client({
cloud: { id: '<cloud-id>' },
auth: { apiKey: 'base64EncodedKey' }
})
```
----
## `Serializer` [_serializer]
[discrete]
==== `Serializer`
This class is responsible for the serialization of every request; it offers the following methods:
This class is responsible for the serialization of every request, it offers the
following methods:
* `serialize(object: any): string;` serializes request objects.
* `deserialize(json: string): any;` deserializes response strings.
* `ndserialize(array: any[]): string;` serializes bulk request objects.
* `qserialize(object: any): string;` serializes request query parameters.
```js
[source,js]
----
const { Client, Serializer } = require('@elastic/elasticsearch')
class MySerializer extends Serializer {
@ -83,10 +89,11 @@ const client = new Client({
cloud: { id: '<cloud-id>' },
auth: { apiKey: 'base64EncodedKey' }
})
```
----
## Redaction of potentially sensitive data [redaction]
[discrete]
[[redaction]]
==== Redaction of potentially sensitive data
When the client raises an `Error` that originated at the HTTP layer, like a `ConnectionError` or `TimeoutError`, a `meta` object is often attached to the error object that includes metadata useful for debugging, like request and response information. Because this can include potentially sensitive data, like authentication secrets in an `Authorization` header, the client takes measures to redact common sources of sensitive data when this metadata is attached and serialized.
@ -94,7 +101,8 @@ If your configuration requires extra headers or other configurations that may in
By default, the `redaction` option is set to `{ type: 'replace' }`, which recursively searches for sensitive key names, case insensitive, and replaces their values with the string `[redacted]`.
```js
[source,js]
----
const { Client } = require('@elastic/elasticsearch')
const client = new Client({
@ -107,11 +115,12 @@ try {
} catch (err) {
console.log(err.meta.meta.request.options.headers.authorization) // prints "[redacted]"
}
```
----
If you would like to redact additional properties, you can include additional key names to search and replace:
```js
[source,js]
----
const { Client } = require('@elastic/elasticsearch')
const client = new Client({
@ -129,11 +138,12 @@ try {
} catch (err) {
console.log(err.meta.meta.request.options.headers['X-My-Secret-Password']) // prints "[redacted]"
}
```
----
Alternatively, if you know youre not going to use the metadata at all, setting the redaction type to `remove` will remove all optional sources of potentially sensitive data entirely, or replacing them with `null` for required properties.
Alternatively, if you know you're not going to use the metadata at all, setting the redaction type to `remove` will remove all optional sources of potentially sensitive data entirely, or replacing them with `null` for required properties.
```js
[source,js]
----
const { Client } = require('@elastic/elasticsearch')
const client = new Client({
@ -147,16 +157,14 @@ try {
} catch (err) {
console.log(err.meta.meta.request.options.headers) // undefined
}
```
----
Finally, if you prefer to turn off redaction altogether, perhaps while debugging on a local developer environment, you can set the redaction type to `off`. This will revert the client to pre-8.11.0 behavior, where basic redaction is only performed during common serialization methods like `console.log` and `JSON.stringify`.
::::{warning}
Setting `redaction.type` to `off` is not recommended in production environments.
::::
WARNING: Setting `redaction.type` to `off` is not recommended in production environments.
```js
[source,js]
----
const { Client } = require('@elastic/elasticsearch')
const client = new Client({
@ -170,10 +178,18 @@ try {
} catch (err) {
console.log(err.meta.meta.request.options.headers.authorization) // the actual header value will be logged
}
```
----
[discrete]
==== Migrate to v8
## Migrate to v8 [_migrate_to_v8]
The Node.js client can be configured to emit an HTTP header `Accept: application/vnd.elasticsearch+json; compatible-with=7` which signals to Elasticsearch that the client is requesting `7.x` version of request and response bodies. This allows for upgrading from 7.x to 8.x version of Elasticsearch without upgrading everything at once. Elasticsearch should be upgraded first after the compatibility header is configured, and clients should be upgraded second. To enable this setting, configure the environment variable `ELASTIC_CLIENT_APIVERSIONING` to `true`.
The Node.js client can be configured to emit an HTTP header
`Accept: application/vnd.elasticsearch+json; compatible-with=7`
which signals to Elasticsearch that the client is requesting
`7.x` version of request and response bodies. This allows for
upgrading from 7.x to 8.x version of Elasticsearch without upgrading
everything at once. Elasticsearch should be upgraded first after
the compatibility header is configured and clients should be upgraded
second.
To enable to setting, configure the environment variable
`ELASTIC_CLIENT_APIVERSIONING` to `true`.

270
docs/basic-config.asciidoc Normal file
View File

@ -0,0 +1,270 @@
[[basic-config]]
=== Basic configuration
This page shows you the possible basic configuration options that the client
offers.
[source,js]
----
const { Client } = require('@elastic/elasticsearch')
const client = new Client({
cloud: { id: '<cloud-id>' },
auth: { apiKey: 'base64EncodedKey' },
maxRetries: 5,
requestTimeout: 60000,
sniffOnStart: true
})
----
[cols=2*]
|===
|`node` or `nodes`
a|The Elasticsearch endpoint to use. +
It can be a single string or an array of strings:
[source,js]
----
node: 'http://localhost:9200'
----
Or it can be an object (or an array of objects) that represents the node:
[source,js]
----
node: {
url: new URL('http://localhost:9200'),
tls: 'tls options',
agent: 'http agent options',
id: 'custom node id',
headers: { 'custom': 'headers' },
roles: {
master: true,
data: true,
ingest: true,
ml: false
}
}
----
|`auth`
a|Your authentication data. You can use both basic authentication and
{ref}/security-api-create-api-key.html[ApiKey]. +
See <<authentication,Authentication>> for more details. +
_Default:_ `null`
Basic authentication:
[source,js]
----
auth: {
username: 'elastic',
password: 'changeme'
}
----
{ref}/security-api-create-api-key.html[ApiKey] authentication:
[source,js]
----
auth: {
apiKey: 'base64EncodedKey'
}
----
Bearer authentication, useful for https://www.elastic.co/guide/en/elasticsearch/reference/current/security-api-create-service-token.html[service account tokens]. Be aware that it does not handle automatic token refresh:
[source,js]
----
auth: {
bearer: 'token'
}
----
|`maxRetries`
|`number` - Max number of retries for each request. +
_Default:_ `3`
|`requestTimeout`
|`number` - Max request timeout in milliseconds for each request. +
_Default:_ `30000`
|`pingTimeout`
|`number` - Max ping request timeout in milliseconds for each request. +
_Default:_ `3000`
|`sniffInterval`
|`number, boolean` - Perform a sniff operation every `n` milliseconds. Sniffing might not be the best solution for you, take a look https://www.elastic.co/blog/elasticsearch-sniffing-best-practices-what-when-why-how[here] to know more. +
_Default:_ `false`
|`sniffOnStart`
|`boolean` - Perform a sniff once the client is started. Sniffing might not be the best solution for you, take a look https://www.elastic.co/blog/elasticsearch-sniffing-best-practices-what-when-why-how[here] to know more. +
_Default:_ `false`
|`sniffEndpoint`
|`string` - Endpoint to ping during a sniff. +
_Default:_ `'_nodes/_all/http'`
|`sniffOnConnectionFault`
|`boolean` - Perform a sniff on connection fault. Sniffing might not be the best solution for you, take a look https://www.elastic.co/blog/elasticsearch-sniffing-best-practices-what-when-why-how[here] to know more. +
_Default:_ `false`
|`resurrectStrategy`
|`string` - Configure the node resurrection strategy. +
_Options:_ `'ping'`, `'optimistic'`, `'none'` +
_Default:_ `'ping'`
|`suggestCompression`
|`boolean` - Adds `accept-encoding` header to every request. +
_Default:_ `false`
|`compression`
|`string, boolean` - Enables gzip request body compression. +
_Options:_ `'gzip'`, `false` +
_Default:_ `false`
|`tls`
|`http.SecureContextOptions` - tls https://nodejs.org/api/tls.html[configuration]. +
_Default:_ `null`
|`proxy`
a|`string, URL` - If you are using an http(s) proxy, you can put its url here.
The client will automatically handle the connection to it. +
_Default:_ `null`
[source,js]
----
const client = new Client({
node: 'http://localhost:9200',
proxy: 'http://localhost:8080'
})
// Proxy with basic authentication
const client = new Client({
node: 'http://localhost:9200',
proxy: 'http://user:pwd@localhost:8080'
})
----
|`agent`
a|`http.AgentOptions, function` - http agent https://nodejs.org/api/http.html#http_new_agent_options[options],
or a function that returns an actual http agent instance. If you want to disable the http agent entirely
(and disable the `keep-alive` feature), set the agent to `false`. +
_Default:_ `null`
[source,js]
----
const client = new Client({
node: 'http://localhost:9200',
agent: { agent: 'options' }
})
const client = new Client({
node: 'http://localhost:9200',
// the function takes as parameter the option
// object passed to the Connection constructor
agent: (opts) => new CustomAgent()
})
const client = new Client({
node: 'http://localhost:9200',
// Disable agent and keep-alive
agent: false
})
----
|`nodeFilter`
a|`function` - Filters out nodes that should not be used for a request. +
_Default:_
[source,js]
----
function defaultNodeFilter (node) {
// avoid master only nodes
if (node.roles.master === true &&
node.roles.data === false &&
node.roles.ingest === false) {
return false
}
return true
}
----
|`nodeSelector`
a|`function` - custom selection strategy. +
_Options:_ `'round-robin'`, `'random'`, custom function +
_Default:_ `'round-robin'` +
_Custom function example:_
[source,js]
----
function nodeSelector (connections) {
const index = calculateIndex()
return connections[index]
}
----
|`generateRequestId`
a|`function` - function to generate the request id for every request, it takes
two parameters, the request parameters and options. +
By default it generates an incremental integer for every request. +
_Custom function example:_
[source,js]
----
function generateRequestId (params, options) {
// your id generation logic
// must be synchronous
return 'id'
}
----
|`name`
|`string, symbol` - The name to identify the client instance in the events. +
_Default:_ `elasticsearch-js`
|`opaqueIdPrefix`
|`string` - A string that will be used to prefix any `X-Opaque-Id` header. +
See https://www.elastic.co/guide/en/elasticsearch/client/javascript-api/current/observability.html#_x-opaque-id_support[`X-Opaque-Id` support] for more details. +
_Default:_ `null`
|`headers`
|`object` - A set of custom headers to send in every request. +
_Default:_ `{}`
|`context`
|`object` - A custom object that you can use for observability in your events.
It will be merged with the API level context option. +
_Default:_ `null`
|`enableMetaHeader`
|`boolean` - If true, adds a header named `'x-elastic-client-meta'`, containing some minimal telemetry data,
such as the client and platform version. +
_Default:_ `true`
|`cloud`
a|`object` - Custom configuration for connecting to
https://cloud.elastic.co[Elastic Cloud]. See https://www.elastic.co/guide/en/elasticsearch/client/javascript-api/current/auth-reference.html[Authentication]
for more details. +
_Default:_ `null` +
_Cloud configuration example:_
[source,js]
----
const client = new Client({
cloud: {
id: '<cloud-id>'
},
auth: {
username: 'elastic',
password: 'changeme'
}
})
----
|`disablePrototypePoisoningProtection`
|`boolean`, `'proto'`, `'constructor'` - The client can protect you against prototype poisoning attacks. Read https://web.archive.org/web/20200319091159/https://hueniverse.com/square-brackets-are-the-enemy-ff5b9fd8a3e8?gi=184a27ee2a08[this article] to learn more about this security concern. If needed, you can enable prototype poisoning protection entirely (`false`) or one of the two checks (`'proto'` or `'constructor'`). For performance reasons, it is disabled by default. Read the `secure-json-parse` https://github.com/fastify/secure-json-parse[documentation] to learn more. +
_Default:_ `true`
|`caFingerprint`
|`string` - If configured, verify that the fingerprint of the CA certificate that has signed the certificate of the server matches the supplied fingerprint. Only accepts SHA256 digest fingerprints. +
_Default:_ `null`
|`maxResponseSize`
|`number` - When configured, it verifies that the uncompressed response size is lower than the configured number; if it's higher, it will abort the request. It cannot be higher than `buffer.constants.MAX_STRING_LENGTH`. +
_Default:_ `null`
|`maxCompressedResponseSize`
|`number` - When configured, it verifies that the compressed response size is lower than the configured number; if it's higher, it will abort the request. It cannot be higher than `buffer.constants.MAX_LENGTH`. +
_Default:_ `null`
|===

949
docs/changelog.asciidoc Normal file
View File

@ -0,0 +1,949 @@
[[changelog-client]]
== Release notes
[discrete]
=== 8.17.1
[discrete]
==== Fixes
[discrete]
===== Improved support for Elasticsearch `v8.17`
Updated TypeScript types based on fixes and improvements to the Elasticsearch specification.
[discrete]
===== Report correct transport connection type in telemetry
The client's telemetry reporting mechanism was incorrectly reporting all traffic as using `HttpConnection` when the default is `UndiciConnection`. https://github.com/elastic/elasticsearch-js/issues/2324[#2324]
[discrete]
=== 8.17.0
[discrete]
==== Features
[discrete]
===== Support for Elasticsearch `v8.17`
You can find all the API changes
https://www.elastic.co/guide/en/elasticsearch/reference/8.17/release-notes-8.17.0.html[here].
[discrete]
=== 8.16.4
[discrete]
==== Fixes
[discrete]
===== Improved support for Elasticsearch `v8.16`
Updated TypeScript types based on fixes and improvements to the Elasticsearch specification.
[discrete]
===== Report correct transport connection type in telemetry
The client's telemetry reporting mechanism was incorrectly reporting all traffic as using `HttpConnection` when the default is `UndiciConnection`. https://github.com/elastic/elasticsearch-js/issues/2324[#2324]
[discrete]
=== 8.16.3
[discrete]
==== Fixes
[discrete]
===== Improved support for Elasticsearch `v8.16`
Updated TypeScript types based on fixes and improvements to the Elasticsearch specification.
[discrete]
=== 8.16.2
[discrete]
==== Fixes
[discrete]
===== Improved support for Elasticsearch `v8.16`
Updated TypeScript types based on fixes and improvements to the Elasticsearch specification.
[discrete]
===== Drop testing artifacts from npm package
Tap, the unit testing tool used by this project, was recently upgraded and started writing to a `.tap` directory. Since tests are run prior to an `npm publish` in CI, this directory was being included in the published package and bloating its size.
[discrete]
=== 8.16.1
[discrete]
==== Fixes
[discrete]
===== Fix ECMAScript imports
Fixed package configuration to correctly support native ECMAScript `import` syntax.
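For reference, a minimal sketch of the native `import` syntax this fix enables (endpoint and credentials are placeholders):
[source,js]
----
// index.mjs — native ECMAScript module syntax
import { Client } from '@elastic/elasticsearch'

const client = new Client({
  node: 'http://localhost:9200',
  auth: { apiKey: 'base64EncodedKey' }
})
----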
[discrete]
=== 8.16.0
[discrete]
==== Features
[discrete]
===== Support for Elasticsearch `v8.16`
You can find all the API changes
https://www.elastic.co/guide/en/elasticsearch/reference/8.16/release-notes-8.16.0.html[here].
[discrete]
===== Support Apache Arrow in ES|QL helper
The ES|QL helper can now return results as an Apache Arrow `Table` or `RecordBatchReader`, which enables high-performance calculations on ES|QL results, even if the response data is larger than the system's available memory. See <<esql-helper>> for more information.
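A minimal sketch of consuming Arrow results, assuming the ES|QL helper exposes a `toArrowTable()` method as described in the helpers documentation (verify the exact method names for your client version):
[source,js]
----
// Run an ES|QL query and materialize the result as an Apache Arrow Table
const table = await client.helpers
  .esql({ query: 'FROM sample_data | LIMIT 100' })
  .toArrowTable()

// Arrow tables can be converted to plain JavaScript rows
console.table(table.toArray())
----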
[discrete]
==== Fixes
[discrete]
===== Pass prototype poisoning options to serializer correctly
The client's `disablePrototypePoisoningProtection` option was set to `true` by default, but when it was set to any other value it was ignored, making it impossible to enable prototype poisoning protection without providing a custom serializer implementation.
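As a sketch, enabling the protection is done through the client option described in the basic configuration docs (setting it to `false` turns on both checks; `'proto'` or `'constructor'` enables a single check):
[source,js]
----
const { Client } = require('@elastic/elasticsearch')

const client = new Client({
  node: 'http://localhost:9200',
  // protection is disabled by default for performance reasons;
  // `false` enables both the 'proto' and 'constructor' checks
  disablePrototypePoisoningProtection: false
})
----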
[discrete]
=== 8.15.3
[discrete]
==== Fixes
[discrete]
===== Improved support for Elasticsearch `v8.15`
Updated TypeScript types based on fixes and improvements to the Elasticsearch specification.
[discrete]
===== Drop testing artifacts from npm package
Tap, the unit testing tool, was recently upgraded and started writing to a `.tap` directory. Since tests are run prior to an `npm publish` in CI, this directory was being included in the published package and bloating its size.
[discrete]
=== 8.15.2
[discrete]
==== Fixes
[discrete]
===== Improved support for Elasticsearch `v8.15`
Updated TypeScript types based on fixes and improvements to the Elasticsearch specification.
[discrete]
=== 8.15.1
[discrete]
==== Fixes
[discrete]
===== Improved support for Elasticsearch `v8.15`
Updated TypeScript types based on fixes and improvements to the Elasticsearch specification.
[discrete]
=== 8.15.0
[discrete]
==== Features
[discrete]
===== Support for Elasticsearch `v8.15.0`
You can find all the API changes
https://www.elastic.co/guide/en/elasticsearch/reference/8.15/release-notes-8.15.0.html[here].
[discrete]
===== OpenTelemetry zero-code instrumentation support
For those that use an observability service that supports OpenTelemetry spans, the client will now automatically generate traces for each Elasticsearch request it makes.
See {jsclient}/observability.html#_opentelemetry[the docs]
for more information.
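The client emits the spans automatically; collecting them still requires an OpenTelemetry SDK to be configured in the application. A hedged sketch using the standard Node.js SDK packages (package and class names are from OpenTelemetry, not this client; adjust the exporter to your telemetry backend):
[source,js]
----
const { NodeSDK } = require('@opentelemetry/sdk-node')
const { ConsoleSpanExporter } = require('@opentelemetry/sdk-trace-base')

// Start an OpenTelemetry SDK before creating the client;
// spans generated for each Elasticsearch request will be exported here
const sdk = new NodeSDK({ traceExporter: new ConsoleSpanExporter() })
sdk.start()
----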
[discrete]
=== 8.14.1
[discrete]
==== Features
[discrete]
===== Improved support for Elasticsearch `8.14`
Updated types based on fixes and changes to the Elasticsearch specification.
[discrete]
=== 8.14.0
[discrete]
==== Features
[discrete]
===== Support for Elasticsearch `v8.14.0`
You can find all the API changes
https://www.elastic.co/guide/en/elasticsearch/reference/8.14/release-notes-8.14.0.html[here].
[discrete]
===== ES|QL object API helper
A helper method has been added that parses the response of an ES|QL query and converts it into an array of objects.
A TypeScript type parameter can also be provided to improve developer experience when working with the result. https://github.com/elastic/elasticsearch-js/pull/2238[#2238]
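A hedged sketch of the helper (the `toRecords` method name and `records` property follow the helper documentation; when using TypeScript, a type parameter can be passed to `toRecords`):
[source,js]
----
// Run an ES|QL query and parse the response into an array of objects
const result = await client.helpers
  .esql({ query: 'FROM sample_data | LIMIT 10' })
  .toRecords()

for (const record of result.records) {
  console.log(record)
}
----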
[discrete]
===== `onSuccess` callback added to bulk helper
The bulk helper now supports an `onSuccess` callback that will be called for each successful operation. https://github.com/elastic/elasticsearch-js/pull/2199[#2199]
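A sketch of the callback in use with the bulk helper (the shape of the callback argument is assumed here; check the bulk helper docs for the exact contract):
[source,js]
----
await client.helpers.bulk({
  datasource: documents, // any array or iterable of documents
  onDocument (doc) {
    return { index: { _index: 'my-index' } }
  },
  // called once for each operation that Elasticsearch reports as successful
  onSuccess ({ result, document }) {
    console.log('indexed', document)
  },
  onDrop (doc) {
    console.error('dropped', doc)
  }
})
----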
[discrete]
===== Request retries are more polite
https://github.com/elastic/elastic-transport-js/releases/tag/v8.6.0[`@elastic/transport` v8.6.0] was released, which refactored when and how failed requests are retried. Timed-out requests are no longer retried by default, and retries now use exponential backoff rather than running immediately.
[discrete]
=== 8.13.1
[discrete]
==== Fixes
[discrete]
===== Pin @elastic/transport to `~8.4.1`
Switching from `^8.4.1` to `~8.4.1` ensures 8.13 client users are not required to update to Node.js v18+, which is a new requirement set by `@elastic/transport` v8.5.0. See https://github.com/elastic/elastic-transport-js/issues/91[elastic/elastic-transport-js#91] for details.
v8.13.0 was also released depending on v8.4.0 of `@elastic/transport` instead of v8.4.1, which was unintentional.
[discrete]
=== 8.13.0
[discrete]
==== Features
[discrete]
===== Support for Elasticsearch `v8.13.0`
You can find all the API changes
https://www.elastic.co/guide/en/elasticsearch/reference/8.13/release-notes-8.13.0.html[here].
[discrete]
==== Fixes
[discrete]
===== Ensure new connections inherit client's set defaults https://github.com/elastic/elasticsearch-js/pull/2159[#2159]
When instantiating a client, any connection-related defaults (e.g. `requestTimeout`) set on that client instance would not be inherited by nodes if they were entered as strings rather than a `ConnectionOptions` object.
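In other words, a node passed as a plain string now picks up the client-level defaults just like one passed as a `ConnectionOptions` object; a small sketch:
[source,js]
----
const { Client } = require('@elastic/elasticsearch')

// The string node now inherits client-level defaults such as requestTimeout
const client = new Client({
  node: 'http://localhost:9200',
  requestTimeout: 60000
})
----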
[discrete]
=== 8.12.3
[discrete]
==== Fixes
[discrete]
===== Bump @elastic/transport to `~8.4.1`
Switching from `^8.4.1` to `~8.4.1` ensures 8.12 client users are not required to update to Node.js v18+, which is a new requirement set by `@elastic/transport` v8.5.0. See https://github.com/elastic/elastic-transport-js/issues/91[elastic/elastic-transport-js#91] for details.
[discrete]
=== 8.12.2
[discrete]
==== Fixes
[discrete]
===== Upgrade transport to 8.4.1 https://github.com/elastic/elasticsearch-js/pull/2137[#2137]
Upgrades `@elastic/transport` to 8.4.1 to resolve https://github.com/elastic/elastic-transport-js/pull/83[a bug] where arrays in error diagnostics were unintentionally transformed into objects.
[discrete]
=== 8.12.1
[discrete]
==== Fixes
[discrete]
===== Fix hang in bulk helper semaphore https://github.com/elastic/elasticsearch-js/pull/2027[#2027]
The failing state could be reached when a server's response times are slower than the `flushInterval`.
[discrete]
=== 8.12.0
[discrete]
==== Features
[discrete]
===== Support for Elasticsearch `v8.12.0`
You can find all the API changes
https://www.elastic.co/guide/en/elasticsearch/reference/8.12/release-notes-8.12.0.html[here].
[discrete]
=== 8.11.1
[discrete]
==== Fixes
[discrete]
===== Bump @elastic/transport to `~8.4.0`
Switching from `^8.4.0` to `~8.4.0` ensures 8.11 client users are not required to update to Node.js v18+, which is a new requirement set by `@elastic/transport` v8.5.0. See https://github.com/elastic/elastic-transport-js/issues/91[elastic/elastic-transport-js#91] for details.
[discrete]
=== 8.11.0
[discrete]
==== Features
[discrete]
===== Support for Elasticsearch `v8.11.0`
You can find all the API changes
https://www.elastic.co/guide/en/elasticsearch/reference/8.11/release-notes-8.11.0.html[here].
[discrete]
===== Enhanced support for redacting potentially sensitive data https://github.com/elastic/elasticsearch-js/pull/2095[#2095]
`@elastic/transport` https://github.com/elastic/elastic-transport-js/releases/tag/v8.4.0[version 8.4.0] introduces enhanced measures for ensuring that request metadata attached to some `Error` objects is redacted. This functionality is primarily to address custom logging solutions that don't use common serialization methods like `JSON.stringify`, `console.log`, or `util.inspect`, which were already accounted for.
See <<redaction>> for more information.
[discrete]
=== 8.10.1
[discrete]
==== Fixes
[discrete]
===== Bump @elastic/transport to `~8.3.4`
Switching from `^8.3.4` to `~8.3.4` ensures 8.10 client users are not required to update to Node.js v18+, which is a new requirement set by `@elastic/transport` v8.5.0. See https://github.com/elastic/elastic-transport-js/issues/91[elastic/elastic-transport-js#91] for details.
[discrete]
=== 8.10.0
[discrete]
==== Features
[discrete]
===== Support for Elasticsearch `v8.10.0`
You can find all the API changes
https://www.elastic.co/guide/en/elasticsearch/reference/8.10/release-notes-8.10.0.html[here].
[discrete]
=== 8.9.2
[discrete]
==== Fixes
[discrete]
===== Bump @elastic/transport to `~8.3.4`
Switching from `^8.3.4` to `~8.3.4` ensures 8.9 client users are not required to update to Node.js v18+, which is a new requirement set by `@elastic/transport` v8.5.0. See https://github.com/elastic/elastic-transport-js/issues/91[elastic/elastic-transport-js#91] for details.
[discrete]
=== 8.9.1
[discrete]
==== Fixes
[discrete]
===== Upgrade Transport https://github.com/elastic/elasticsearch-js/pull/1968[#1968]
Upgrades `@elastic/transport` to the latest patch release to fix https://github.com/elastic/elastic-transport-js/pull/69[a bug] that could cause the process to exit when handling malformed `HEAD` requests.
[discrete]
=== 8.9.0
[discrete]
==== Features
[discrete]
===== Support for Elasticsearch `v8.9.0`
You can find all the API changes
https://www.elastic.co/guide/en/elasticsearch/reference/8.9/release-notes-8.9.0.html[here].
[discrete]
===== Allow document to be overwritten in `onDocument` iteratee of bulk helper https://github.com/elastic/elasticsearch-js/pull/1732[#1732]
In the {jsclient}/client-helpers.html#bulk-helper[bulk helper], documents could not be modified before being sent to Elasticsearch. It is now possible to {jsclient}/client-helpers.html#_modifying_a_document_before_operation[modify a document] before sending it.
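A sketch of the new behaviour, following the pattern in the bulk helper docs (returning a two-element array lets you replace the document that gets sent):
[source,js]
----
await client.helpers.bulk({
  datasource: documents, // any array or iterable of documents
  onDocument (doc) {
    // returning [action, document] overrides the document sent to Elasticsearch
    return [
      { index: { _index: 'my-index' } },
      { ...doc, ingested_at: new Date().toISOString() }
    ]
  }
})
----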
[discrete]
==== Fixes
[discrete]
===== Updated `user-agent` header https://github.com/elastic/elasticsearch-js/pull/1954[#1954]
The `user-agent` header the client used to connect to Elasticsearch was using a non-standard format that has been improved.
[discrete]
=== 8.8.2
[discrete]
==== Fixes
[discrete]
===== Bump @elastic/transport to `~8.3.2`
Switching from `^8.3.2` to `~8.3.2` ensures 8.8 client users are not required to update to Node.js v18+, which is a new requirement set by `@elastic/transport` v8.5.0. See https://github.com/elastic/elastic-transport-js/issues/91[elastic/elastic-transport-js#91] for details.
[discrete]
=== 8.8.1
[discrete]
==== Features
[discrete]
===== Support for Elasticsearch `v8.8.1`
You can find all the API changes
https://www.elastic.co/guide/en/elasticsearch/reference/8.8/release-notes-8.8.1.html[here].
[discrete]
==== Fixes
[discrete]
===== Fix index drift bug in bulk helper https://github.com/elastic/elasticsearch-js/pull/1759[#1759]
Fixes a bug in the bulk helper that would cause `onDrop` to send back the wrong JSON document or error on a nonexistent document when an error occurred on a bulk HTTP request that contained a `delete` action.
[discrete]
===== Fix a memory leak caused by an outdated version of Undici https://github.com/elastic/elasticsearch-js/pull/1902[#1902]
Undici 5.5.1, used by https://github.com/elastic/elastic-transport-js[elastic-transport-js], could create a memory leak when a high volume of requests created too many HTTP `abort` listeners. Upgrading Undici to 5.22.1 removed the memory leak.
[discrete]
=== 8.8.0
[discrete]
==== Features
[discrete]
===== Support for Elasticsearch `v8.8.0`
You can find all the API changes
https://www.elastic.co/guide/en/elasticsearch/reference/8.8/release-notes-8.8.0.html[here].
[discrete]
==== Fixes
[discrete]
===== Fix type declarations for legacy types with a body key https://github.com/elastic/elasticsearch-js/pull/1784[#1784]
Prior releases contained a bug where type declarations for legacy types that include a `body` key were not actually importing the type that includes the `body` key.
[discrete]
=== 8.7.3
[discrete]
==== Fixes
[discrete]
===== Bump @elastic/transport to `~8.3.1`
Switching from `^8.3.1` to `~8.3.1` ensures 8.7 client users are not required to update to Node.js v18+, which is a new requirement set by `@elastic/transport` v8.5.0. See https://github.com/elastic/elastic-transport-js/issues/91[elastic/elastic-transport-js#91] for details.
[discrete]
=== 8.7.0
[discrete]
===== Support for Elasticsearch `v8.7.0`
You can find all the API changes
https://www.elastic.co/guide/en/elasticsearch/reference/8.7/release-notes-8.7.0.html[here].
[discrete]
=== 8.6.1
[discrete]
==== Fixes
[discrete]
===== Bump @elastic/transport to `~8.3.1`
Switching from `^8.3.1` to `~8.3.1` ensures 8.6 client users are not required to update to Node.js v18+, which is a new requirement set by `@elastic/transport` v8.5.0. See https://github.com/elastic/elastic-transport-js/issues/91[elastic/elastic-transport-js#91] for details.
[discrete]
=== 8.6.0
[discrete]
===== Bump @elastic/transport to 8.3.1+ https://github.com/elastic/elasticsearch-js/pull/1802[#1802]
The `@elastic/transport` dependency has been bumped to `~8.3.1` to ensure
fixes to the `maxResponseSize` option are available in the client.
[discrete]
===== Support for Elasticsearch `v8.6.0`
You can find all the API changes
https://www.elastic.co/guide/en/elasticsearch/reference/8.6/release-notes-8.6.0.html[here].
[discrete]
=== 8.5.0
[discrete]
===== Support for Elasticsearch `v8.5.0`
You can find all the API changes
https://www.elastic.co/guide/en/elasticsearch/reference/8.5/release-notes-8.5.0.html[here].
[discrete]
=== 8.4.0
[discrete]
===== Support for Elasticsearch `v8.4.0`
You can find all the API changes
https://www.elastic.co/guide/en/elasticsearch/reference/8.4/release-notes-8.4.0.html[here].
[discrete]
=== 8.2.1
[discrete]
==== Fixes
[discrete]
===== Support for Elasticsearch `v8.2.1`
You can find all the API changes
https://www.elastic.co/guide/en/elasticsearch/reference/8.2/release-notes-8.2.1.html[here].
[discrete]
===== Fix ndjson APIs https://github.com/elastic/elasticsearch-js/pull/1688[#1688]
The previous release contained a bug that broke the ndjson APIs.
We have released `v8.2.0-patch.1` to address this.
This release contains the same fix, and we strongly recommend upgrading to this version.
[discrete]
===== Fix node shutdown apis https://github.com/elastic/elasticsearch-js/pull/1697[#1697]
The shutdown APIs weren't complete; this fix completes them.
[discrete]
===== Types: move query keys to body https://github.com/elastic/elasticsearch-js/pull/1693[#1693]
The type definitions were wrongly representing the types of fields present in both query and body.
[discrete]
=== 8.2.0
[discrete]
==== Breaking changes
[discrete]
===== Drop Node.js v12 https://github.com/elastic/elasticsearch-js/pull/1670[#1670]
According to our https://github.com/elastic/elasticsearch-js#nodejs-support[Node.js support matrix].
[discrete]
==== Features
[discrete]
===== Support for Elasticsearch `v8.2`
You can find all the API changes
https://www.elastic.co/guide/en/elasticsearch/reference/8.2/release-notes-8.2.0.html[here].
[discrete]
===== More lenient parameter checks https://github.com/elastic/elasticsearch-js/pull/1662[#1662]
When creating a new client, an `undefined` `caFingerprint` no longer triggers an error for an http connection.
[discrete]
===== Update TypeScript docs and export estypes https://github.com/elastic/elasticsearch-js/pull/1675[#1675]
You can import the full TypeScript requests & responses definitions as follows:
[source,ts]
----
import { estypes } from '@elastic/elasticsearch'
----
If you need the legacy definitions with the body, you can do the following:
[source,ts]
----
import { estypesWithBody } from '@elastic/elasticsearch'
----
[discrete]
==== Fixes
[discrete]
===== Updated hpagent to the latest version https://github.com/elastic/elastic-transport-js/pull/49[transport/#49]
You can find the related changes https://github.com/delvedor/hpagent/releases/tag/v1.0.0[here].
[discrete]
=== 8.1.0
[discrete]
==== Features
[discrete]
===== Support for Elasticsearch `v8.1`
You can find all the API changes
https://www.elastic.co/guide/en/elasticsearch/reference/8.1/release-notes-8.1.0.html[here].
[discrete]
===== Export SniffingTransport https://github.com/elastic/elasticsearch-js/pull/1653[#1653]
Now the client exports the SniffingTransport class.
[discrete]
==== Fixes
[discrete]
===== Fix onFlushTimeout timer not being cleared when upstream errors https://github.com/elastic/elasticsearch-js/pull/1616[#1616]
Fixes a memory leak caused by an error in the upstream dataset of the bulk helper.
[discrete]
===== Cleanup abort listener https://github.com/elastic/elastic-transport-js/pull/42[transport/#42]
The legacy http client was not cleaning up the abort listener, which could cause a memory leak.
[discrete]
===== Improve undici performances https://github.com/elastic/elastic-transport-js/pull/41[transport/#41]
Improves the stream body collection and keep-alive timeout.
[discrete]
=== 8.0.0
[discrete]
==== Features
[discrete]
===== Support for Elasticsearch `v8.0`
You can find all the API changes
https://www.elastic.co/guide/en/elasticsearch/reference/8.0/release-notes-8.0.0.html[here].
[discrete]
===== Drop old typescript definitions
*Breaking: Yes* | *Migration effort: Medium*
The current TypeScript definitions will be removed from the client, and the new definitions, which contain request and response definitions as well, will be shipped by default.
[discrete]
===== Drop callback-style API
*Breaking: Yes* | *Migration effort: Large*
Maintaining both API styles is not a problem per se, but it makes error handling more convoluted due to async stack traces.
Moving to a full-promise API will solve this issue.
[source,js]
----
// callback-style api
client.search({ params }, { options }, (err, result) => {
console.log(err || result)
})
// promise-style api
client.search({ params }, { options })
.then(console.log)
.catch(console.log)
// async-style (sugar syntax on top of promises)
const response = await client.search({ params }, { options })
console.log(response)
----
If you are already using the promise-style API, this won't be a breaking change for you.
[discrete]
===== Remove the current abort API and use the new AbortController standard
*Breaking: Yes* | *Migration effort: Small*
The old abort API makes sense for callbacks, but it's annoying to use with promises.
[source,js]
----
// callback-style api
const request = client.search({ params }, { options }, (err, result) => {
console.log(err) // RequestAbortedError
})
request.abort()
// promise-style api
const promise = client.search({ params }, { options })
promise
.then(console.log)
.catch(console.log) // RequestAbortedError
promise.abort()
----
Node v12 has added the standard https://nodejs.org/api/globals.html#globals_class_abortcontroller[`AbortController`] API which is designed to work well with both callbacks and promises.
[source,js]
----
const ac = new AbortController()
client.search({ params }, { signal: ac.signal })
.then(console.log)
.catch(console.log) // RequestAbortedError
ac.abort()
----
[discrete]
===== Remove the body key from the request
*Breaking: Yes* | *Migration effort: Small*
Thanks to the new types we are developing, we now know exactly where a parameter should go.
The client API leaks HTTP-related notions in many places, and removing them would definitely improve the DX.
This could be a rather big breaking change, so a double solution could be used during the 8.x lifecycle (accepting body keys without them being wrapped in the body, as well as the current solution).
To convert code from 7.x, you need to remove the `body` parameter in all endpoint requests.
For instance, this is an example for the `search` endpoint:
[source,js]
----
// from
const response = await client.search({
index: 'test',
body: {
query: {
match_all: {}
}
}
})
// to
const response = await client.search({
index: 'test',
query: {
match_all: {}
}
})
----
[discrete]
===== Migrate to new separate transport
*Breaking: Yes* | *Migration effort: Small to none*
The separated transport has been rewritten in TypeScript and has already dropped the callback style API.
Given that it is now separated, most of the Elasticsearch-specific concepts have been removed, and the client will likely need to extend parts of it to reintroduce them.
If you weren't extending the internals of the client, this won't be a breaking change for you.
[discrete]
===== The returned value of API calls is the body and not the HTTP related keys
*Breaking: Yes* | *Migration effort: Small*
The client API leaks HTTP-related notions in many places, and removing them would definitely improve the DX.
The client will expose a new request-specific option to still get the full response details.
The new behaviour returns the `body` value directly as the response.
If you want to have the 7.x response format, you need to add `meta: true` to the request.
This will return all the HTTP meta information, including the `body`.
For instance, this is an example for the `search` endpoint:
[source,js]
----
// from
const response = await client.search({
index: 'test',
body: {
query: {
match_all: {}
}
}
})
console.log(response) // { body: SearchResponse, statusCode: number, headers: object, warnings: array }
// to
const response = await client.search({
index: 'test',
query: {
match_all: {}
}
})
console.log(response) // SearchResponse
// with a bit of TypeScript and JavaScript magic...
const response = await client.search({
index: 'test',
query: {
match_all: {}
}
}, {
meta: true
})
console.log(response) // { body: SearchResponse, statusCode: number, headers: object, warnings: array }
----
[discrete]
===== Use a weighted connection pool
*Breaking: Yes* | *Migration effort: Small to none*
Move from the current cluster connection pool to a weight-based implementation.
This new implementation offers better performance and runs less code in the background; the old connection pool can still be used.
If you weren't extending the internals of the client, this won't be a breaking change for you.
[discrete]
===== Migrate to the "undici" http client
*Breaking: Yes* | *Migration effort: Small to none*
By default, the client will no longer use the Node.js core HTTP client, but https://github.com/nodejs/undici[undici] instead.
Undici is a brand new HTTP client written from scratch; it offers vastly improved performance and better support for promises.
Furthermore, it offers comprehensive and predictable error handling. The old HTTP client can still be used.
If you weren't extending the internals of the client, this won't be a breaking change for you.
[discrete]
===== Drop support for old camelCased keys
*Breaking: Yes* | *Migration effort: Medium*
Currently, every path or query parameter can be expressed in both `snake_case` and `camelCase`. Internally, the client converts everything to `snake_case`.
This was done in an effort to reduce the friction of migrating from the legacy to the new client, but now it no longer makes sense.
If you are already using `snake_case` keys, this won't be a breaking change for you.
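For instance (a sketch using the `ignore_unavailable` search parameter; any camelCase-only keys in your code will need the same treatment):
[source,js]
----
// 7.x accepted both spellings
await client.search({ index: 'test', ignoreUnavailable: true })
await client.search({ index: 'test', ignore_unavailable: true })

// 8.x only accepts the snake_case form
await client.search({ index: 'test', ignore_unavailable: true })
----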
[discrete]
===== Rename `ssl` option to `tls`
*Breaking: Yes* | *Migration effort: Small*
People usually refer to this as `tls`; furthermore, we use the tls API internally and Node.js refers to it as tls everywhere.
[source,js]
----
// before
const client = new Client({
node: 'https://localhost:9200',
ssl: {
rejectUnauthorized: false
}
})
// after
const client = new Client({
node: 'https://localhost:9200',
tls: {
rejectUnauthorized: false
}
})
----
[discrete]
===== Remove prototype poisoning protection
*Breaking: Yes* | *Migration effort: Small*
Prototype poisoning protection is very useful, but it can cause performance issues with big payloads.
In v8 it will be removed, and the documentation will show how to add it back with a custom serializer.
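As a rough sketch of what that could look like, assuming you use the `secure-json-parse` package and that your client version accepts a custom `Serializer` class (check the advanced configuration docs for the exact option name):
[source,js]
----
const { Client } = require('@elastic/elasticsearch')
const { Serializer } = require('@elastic/transport')
const sjson = require('secure-json-parse')

// Sketch: re-enable prototype poisoning protection by stripping
// `__proto__` and `constructor.prototype` keys during deserialization.
class SecureSerializer extends Serializer {
  deserialize (json) {
    return sjson.parse(json, null, { protoAction: 'remove', constructorAction: 'remove' })
  }
}

const client = new Client({
  node: 'https://localhost:9200',
  // assumption: the client accepts a Serializer class option
  Serializer: SecureSerializer
})
----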
[discrete]
===== Remove client extensions API
*Breaking: Yes* | *Migration effort: Large*
Nowadays the client supports the entire Elasticsearch API, and the `transport.request` method can be used if necessary. The client extensions API has no reason to exist.
[source,js]
----
client.extend('utility.index', ({ makeRequest }) => {
return function _index (params, options) {
// your code
}
})
client.utility.index(...)
----
If you weren't using client extensions, this won't be a breaking change for you.
[discrete]
===== Move to TypeScript
*Breaking: No* | *Migration effort: None*
The new separated transport is already written in TypeScript, and it makes sense that the client v8 will be fully written in TypeScript as well.
[discrete]
===== Move from emitter-like interface to a diagnostic method
*Breaking: Yes* | *Migration effort: Small*
Currently, the client offers a subset of methods of the `EventEmitter` class; v8 will ship with a `diagnostic` property, which will be a proper event emitter.
[source,js]
----
// from
client.on('request', console.log)
// to
client.diagnostic.on('request', console.log)
----
[discrete]
===== Remove username & password properties from Cloud configuration
*Breaking: Yes* | *Migration effort: Small*
The Cloud configuration does not support ApiKey and Bearer auth, while the `auth` option does.
There is no need to keep the legacy basic auth support in the cloud configuration.
[source,js]
----
// before
const client = new Client({
cloud: {
id: '<cloud-id>',
username: 'elastic',
password: 'changeme'
}
})
// after
const client = new Client({
cloud: {
id: '<cloud-id>'
},
auth: {
username: 'elastic',
password: 'changeme'
}
})
----
If you are already passing the basic auth options in the `auth` configuration, this won't be a breaking change for you.
[discrete]
===== Calling `client.close` will reject new requests
Once you call `client.close`, every new request after that will be rejected with a `NoLivingConnectionsError`. In-flight requests will be executed normally, unless an in-flight request requires a retry, in which case it will be rejected.
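A short sketch of this behaviour:
[source,js]
----
await client.close()

client.info()
  .catch(console.log) // NoLivingConnectionsError
----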
[discrete]
===== Renamed parameters
- `ilm.delete_lifecycle`: `policy` parameter has been renamed to `name`
- `ilm.get_lifecycle`: `policy` parameter has been renamed to `name`
- `ilm.put_lifecycle`: `policy` parameter has been renamed to `name`
- `snapshot.cleanup_repository`: `repository` parameter has been renamed to `name`
- `snapshot.create_repository`: `repository` parameter has been renamed to `name`
- `snapshot.delete_repository`: `repository` parameter has been renamed to `name`
- `snapshot.get_repository`: `repository` parameter has been renamed to `name`
- `snapshot.verify_repository`: `repository` parameter has been renamed to `name`
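For example, deleting an ILM policy now takes `name` instead of `policy`:
[source,js]
----
// 7.x
await client.ilm.deleteLifecycle({ policy: 'my-policy' })

// 8.x
await client.ilm.deleteLifecycle({ name: 'my-policy' })
----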
[discrete]
===== Removal of snake_cased methods
The v7 client provided snake_cased methods, such as `client.delete_by_query`. This is no longer supported; now only camelCased methods are present.
For example, `client.delete_by_query` is now accessed as `client.deleteByQuery`, as shown below.
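[source,js]
----
// 7.x
await client.delete_by_query({ index: 'test', body: { query: { match_all: {} } } })

// 8.x
await client.deleteByQuery({ index: 'test', query: { match_all: {} } })
----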

36
docs/child.asciidoc Normal file
View File

@ -0,0 +1,36 @@
[[child]]
=== Creating a child client
There are some use cases where you may need multiple instances of the client.
You can easily do that by calling `new Client()` as many times as you need, but
you will lose all the benefits of using a single client, such as long-lived
connections and the connection pool handling. To avoid this problem, the
client offers a `child` API, which returns a new client instance that shares the
connection pool with the parent client.
NOTE: The event emitter is shared between the parent and the child(ren). If you
extend the parent client, the child client will have the same extensions, while
if the child client adds an extension, the parent client will not be extended.
You can pass to the `child` every client option you would pass to a normal
client, except the connection pool-specific options (`ssl`, `agent`, `pingTimeout`,
`Connection`, and `resurrectStrategy`).
CAUTION: If you call `close` in any of the parent/child clients, every client
will be closed.
[source,js]
----
const { Client } = require('@elastic/elasticsearch')
const client = new Client({
cloud: { id: '<cloud-id>' },
auth: { apiKey: 'base64EncodedKey' }
})
const child = client.child({
headers: { 'x-foo': 'bar' },
requestTimeout: 1000
})
client.info().then(console.log, console.log)
child.info().then(console.log, console.log)
----

View File

@ -0,0 +1,12 @@
[[client-configuration]]
== Configuration
The client is designed to be easily configured for your needs. In the following
section, you can see the possible options that you can use to configure it.
* <<basic-config>>
* <<advanced-config>>
* <<timeout-best-practices>>
* <<child>>
* <<client-testing>>

738
docs/connecting.asciidoc Normal file
View File

@ -0,0 +1,738 @@
[[client-connecting]]
== Connecting
This page contains the information you need to connect and use the Client with
{es}.
**On this page**
* <<authentication, Authentication options>>
* <<client-usage, Using the client>>
* <<client-faas-env, Using the Client in a Function-as-a-Service Environment>>
* <<client-connect-proxy, Connecting through a proxy>>
* <<client-error-handling, Handling errors>>
* <<keep-alive, Keep-alive connections>>
* <<close-connections, Closing a client's connections>>
* <<product-check, Automatic product check>>
[[authentication]]
[discrete]
=== Authentication
This document contains code snippets to show you how to connect to various {es}
providers.
[discrete]
[[auth-ec]]
==== Elastic Cloud
If you are using https://www.elastic.co/cloud[Elastic Cloud], the client offers
an easy way to connect to it via the `cloud` option. You must pass the Cloud ID
that you can find in the cloud console, then your username and password inside
the `auth` option.
NOTE: When connecting to Elastic Cloud, the client will automatically enable
both request and response compression by default, since it yields significant
throughput improvements. Moreover, the client will also set the tls option
`secureProtocol` to `TLSv1_2_method` unless specified otherwise. You can still
override this option by configuring it.
IMPORTANT: Do not enable sniffing when using Elastic Cloud. Since the nodes are
behind a load balancer, Elastic Cloud takes care of everything for you.
Take a look https://www.elastic.co/blog/elasticsearch-sniffing-best-practices-what-when-why-how[here]
to learn more.
[source,js]
----
const { Client } = require('@elastic/elasticsearch')
const client = new Client({
cloud: {
id: '<cloud-id>'
},
auth: {
username: 'elastic',
password: 'changeme'
}
})
----
[discrete]
[[connect-self-managed-new]]
=== Connecting to a self-managed cluster
By default {es} will start with security features like authentication and TLS
enabled. To connect to the {es} cluster you'll need to configure the Node.js {es}
client to use HTTPS with the generated CA certificate in order to make requests
successfully.
If you're just getting started with {es} we recommend reading the documentation
on https://www.elastic.co/guide/en/elasticsearch/reference/current/settings.html[configuring]
and
https://www.elastic.co/guide/en/elasticsearch/reference/current/starting-elasticsearch.html[starting {es}]
to ensure your cluster is running as expected.
When you start {es} for the first time you'll see a distinct block like the one
below in the output from {es} (you may have to scroll up if it's been a while):
[source,sh]
----
-> Elasticsearch security features have been automatically configured!
-> Authentication is enabled and cluster connections are encrypted.
-> Password for the elastic user (reset with `bin/elasticsearch-reset-password -u elastic`):
lhQpLELkjkrawaBoaz0Q
-> HTTP CA certificate SHA-256 fingerprint:
a52dd93511e8c6045e21f16654b77c9ee0f34aea26d9f40320b531c474676228
...
----
Depending on the circumstances, there are two options for verifying the HTTPS
connection: verifying with the CA certificate itself or via the HTTP CA
certificate fingerprint.
[discrete]
[[auth-tls]]
==== TLS configuration
The generated root CA certificate can be found in the `certs` directory in your
{es} config location (`$ES_CONF_PATH/certs/http_ca.crt`). If you're running {es}
in Docker there is
https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html[additional documentation for retrieving the CA certificate].
Without any additional configuration you can specify `https://` node URLs, and
the certificates used to sign these requests will be verified. To turn off
certificate verification, you must specify a `tls` object in the top-level
config and set `rejectUnauthorized: false`. The default `tls` values are the
same ones that Node.js's https://nodejs.org/api/tls.html#tls_tls_connect_options_callback[`tls.connect()`]
uses.
[source,js]
----
const fs = require('fs')
const { Client } = require('@elastic/elasticsearch')
const client = new Client({
node: 'https://localhost:9200',
auth: {
username: 'elastic',
password: 'changeme'
},
tls: {
ca: fs.readFileSync('./http_ca.crt'),
rejectUnauthorized: false
}
})
----
[discrete]
[[auth-ca-fingerprint]]
==== CA fingerprint
You can configure the client to only trust certificates that are signed by a specific CA certificate
(CA certificate pinning) by providing a `caFingerprint` option.
This will verify that the fingerprint of the CA certificate that has signed
the certificate of the server matches the supplied value.
You must configure a SHA256 digest.
[source,js]
----
const { Client } = require('@elastic/elasticsearch')
const client = new Client({
  node: 'https://example.com',
auth: { ... },
// the fingerprint (SHA256) of the CA certificate that is used to sign
// the certificate that the Elasticsearch node presents for TLS.
caFingerprint: '20:0D:CA:FA:76:...',
tls: {
// might be required if it's a self-signed certificate
rejectUnauthorized: false
}
})
----
The certificate fingerprint can be calculated using `openssl x509` with the
certificate file:
[source,sh]
----
openssl x509 -fingerprint -sha256 -noout -in /path/to/http_ca.crt
----
If you don't have access to the generated CA file from {es} you can use the
following script to output the root CA fingerprint of the {es} instance with
`openssl s_client`:
[source,sh]
----
# Replace the values of 'localhost' and '9200' with the
# corresponding host and port values for the cluster.
openssl s_client -connect localhost:9200 -servername localhost -showcerts </dev/null 2>/dev/null \
| openssl x509 -fingerprint -sha256 -noout -in /dev/stdin
----
The output of `openssl x509` will look something like this:
[source,sh]
----
SHA256 Fingerprint=A5:2D:D9:35:11:E8:C6:04:5E:21:F1:66:54:B7:7C:9E:E0:F3:4A:EA:26:D9:F4:03:20:B5:31:C4:74:67:62:28
----
[discrete]
[[connect-no-security]]
=== Connecting without security enabled
WARNING: Running {es} without security enabled is not recommended.
If your cluster is configured with
https://www.elastic.co/guide/en/elasticsearch/reference/current/security-settings.html[security explicitly disabled]
then you can connect via HTTP:
[source,js]
----
const { Client } = require('@elastic/elasticsearch')
const client = new Client({
node: 'http://example.com'
})
----
[discrete]
[[auth-strategies]]
=== Authentication strategies
Below you can find all the supported authentication strategies.
[discrete]
[[auth-apikey]]
==== ApiKey authentication
You can use the
{ref-7x}/security-api-create-api-key.html[ApiKey]
authentication by passing the `apiKey` parameter via the `auth` option. The
`apiKey` parameter can be either a base64 encoded string or an object with the
values that you can obtain from the
{ref-7x}/security-api-create-api-key.html[create api key endpoint].
NOTE: If you provide both basic authentication credentials and the ApiKey
configuration, the ApiKey takes precedence.
[source,js]
----
const { Client } = require('@elastic/elasticsearch')
const client = new Client({
node: 'https://localhost:9200',
auth: {
apiKey: 'base64EncodedKey'
}
})
----
[source,js]
----
const { Client } = require('@elastic/elasticsearch')
const client = new Client({
node: 'https://localhost:9200',
auth: {
apiKey: {
id: 'foo',
api_key: 'bar'
}
}
})
----
[discrete]
[[auth-bearer]]
==== Bearer authentication
You can provide your credentials by passing the `bearer` token
parameter via the `auth` option.
This is useful for https://www.elastic.co/guide/en/elasticsearch/reference/current/security-api-create-service-token.html[service account tokens].
Be aware that it does not handle automatic token refresh.
[source,js]
----
const { Client } = require('@elastic/elasticsearch')
const client = new Client({
node: 'https://localhost:9200',
auth: {
bearer: 'token'
}
})
----
[discrete]
[[auth-basic]]
==== Basic authentication
You can provide your credentials by passing the `username` and `password`
parameters via the `auth` option.
NOTE: If you provide both basic authentication credentials and the ApiKey
configuration, the ApiKey will take precedence.
[source,js]
----
const { Client } = require('@elastic/elasticsearch')
const client = new Client({
node: 'https://localhost:9200',
auth: {
username: 'elastic',
password: 'changeme'
}
})
----
Otherwise, you can provide your credentials in the node(s) URL.
[source,js]
----
const { Client } = require('@elastic/elasticsearch')
const client = new Client({
node: 'https://username:password@localhost:9200'
})
----
[discrete]
[[client-usage]]
=== Usage
Using the client is straightforward: it supports all the public APIs of {es},
and every method exposes the same signature.
[source,js]
----
const { Client } = require('@elastic/elasticsearch')
const client = new Client({
cloud: { id: '<cloud-id>' },
auth: { apiKey: 'base64EncodedKey' }
})
const result = await client.search({
index: 'my-index',
query: {
match: { hello: 'world' }
}
})
----
The returned value of every API call is the response body from {es}.
If you need to access additional metadata, such as the status code or headers,
you must specify `meta: true` in the request options:
[source,js]
----
const { Client } = require('@elastic/elasticsearch')
const client = new Client({
cloud: { id: '<cloud-id>' },
auth: { apiKey: 'base64EncodedKey' }
})
const result = await client.search({
index: 'my-index',
query: {
match: { hello: 'world' }
}
}, { meta: true })
----
In this case, the result will be:
[source,ts]
----
{
body: object | boolean
statusCode: number
headers: object
  warnings: string[]
meta: object
}
----
NOTE: The body is a boolean value when you use `HEAD` APIs.
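For example, checking whether a document exists returns a boolean directly:
[source,js]
----
const exists = await client.exists({ index: 'my-index', id: '1' })
console.log(exists) // true or false
----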
[discrete]
==== Aborting a request
If needed, you can abort a running request by using the `AbortController` standard.
CAUTION: If you abort a request, the request will fail with a
`RequestAbortedError`.
[source,js]
----
const AbortController = require('node-abort-controller')
const { Client } = require('@elastic/elasticsearch')
const client = new Client({
cloud: { id: '<cloud-id>' },
auth: { apiKey: 'base64EncodedKey' }
})
const abortController = new AbortController()
setImmediate(() => abortController.abort())
const result = await client.search({
index: 'my-index',
query: {
match: { hello: 'world' }
}
}, { signal: abortController.signal })
----
[discrete]
==== Request specific options
If needed, you can pass request-specific options in a second object:
[source,js]
----
const result = await client.search({
index: 'my-index',
body: {
query: {
match: { hello: 'world' }
}
}
}, {
ignore: [404],
maxRetries: 3
})
----
The supported request specific options are:
[cols=2*]
|===
|`ignore`
|`number[]` - HTTP status codes which should not be considered errors for this request. +
_Default:_ `null`
|`requestTimeout`
|`number | string` - Max request timeout for the request in milliseconds; it overrides the client default. +
_Default:_ `30000`
|`retryOnTimeout`
|`boolean` - Retry requests that have timed out. +
_Default:_ `false`
|`maxRetries`
|`number` - Max number of retries for the request; it overrides the client default. +
_Default:_ `3`
|`compression`
|`string | boolean` - Enables body compression for the request. +
_Options:_ `false`, `'gzip'` +
_Default:_ `false`
|`asStream`
|`boolean` - Instead of getting the parsed body back, you get the raw Node.js stream of data. +
_Default:_ `false`
|`headers`
|`object` - Custom headers for the request. +
_Default:_ `null`
|`querystring`
|`object` - Custom querystring for the request. +
_Default:_ `null`
|`id`
|`any` - Custom request id. _(overrides the top level request id generator)_ +
_Default:_ `null`
|`context`
|`any` - Custom object per request. _(you can use it to pass data to the clients events)_ +
_Default:_ `null`
|`opaqueId`
|`string` - Set the `X-Opaque-Id` HTTP header. See {ref}/api-conventions.html#x-opaque-id +
_Default:_ `null`
|`maxResponseSize`
|`number` - When configured, it verifies that the uncompressed response size is lower than the configured number; if it's higher, it will abort the request. It cannot be higher than `buffer.constants.MAX_STRING_LENGTH` +
_Default:_ `null`
|`maxCompressedResponseSize`
|`number` - When configured, it verifies that the compressed response size is lower than the configured number; if it's higher, it will abort the request. It cannot be higher than `buffer.constants.MAX_LENGTH` +
_Default:_ `null`
|`signal`
|`AbortSignal` - The AbortSignal instance to allow request abortion. +
_Default:_ `null`
|`meta`
|`boolean` - Rather than returning the body, return an object containing `body`, `statusCode`, `headers` and `meta` keys +
_Default_: `false`
|`redaction`
|`object` - Options for redacting potentially sensitive data from error metadata. See <<redaction>>.
|`retryBackoff`
|`(min: number, max: number, attempt: number) => number;` - A function that calculates how long to sleep, in seconds, before the next request retry +
_Default:_ A built-in function that uses exponential backoff with jitter.
|===
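For example, a request could combine several of these options; the `retryBackoff` shown here is an illustrative sketch that waits a fixed two seconds between retries instead of the default exponential backoff:
[source,js]
----
const result = await client.search({
  index: 'my-index',
  query: {
    match: { hello: 'world' }
  }
}, {
  maxRetries: 5,
  // sketch: fixed two-second wait between retries
  retryBackoff: (min, max, attempt) => 2
})
----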
[discrete]
[[client-faas-env]]
=== Using the Client in a Function-as-a-Service Environment
This section illustrates the best practices for leveraging the {es} client in a Function-as-a-Service (FaaS) environment.
The most influential optimization is to initialize the client outside of the function, in the global scope.
This practice not only improves performance but also enables background functionality such as https://www.elastic.co/blog/elasticsearch-sniffing-best-practices-what-when-why-how[sniffing].
The following examples provide a skeleton for the best practices.
[discrete]
==== GCP Cloud Functions
[source,js]
----
'use strict'
const { Client } = require('@elastic/elasticsearch')
const client = new Client({
// client initialisation
})
exports.testFunction = async function (req, res) {
// use the client
}
----
[discrete]
==== AWS Lambda
[source,js]
----
'use strict'
const { Client } = require('@elastic/elasticsearch')
const client = new Client({
// client initialisation
})
exports.handler = async function (event, context) {
// use the client
}
----
[discrete]
==== Azure Functions
[source,js]
----
'use strict'
const { Client } = require('@elastic/elasticsearch')
const client = new Client({
// client initialisation
})
module.exports = async function (context, req) {
// use the client
}
----
Resources used to assess these recommendations:
- https://cloud.google.com/functions/docs/bestpractices/tips#use_global_variables_to_reuse_objects_in_future_invocations[GCP Cloud Functions: Tips & Tricks]
- https://docs.aws.amazon.com/lambda/latest/dg/best-practices.html[Best practices for working with AWS Lambda functions]
- https://docs.microsoft.com/en-us/azure/azure-functions/functions-reference-python?tabs=azurecli-linux%2Capplication-level#global-variables[Azure Functions Python developer guide]
- https://docs.aws.amazon.com/lambda/latest/operatorguide/global-scope.html[AWS Lambda: Comparing the effect of global scope]
[discrete]
[[client-connect-proxy]]
=== Connecting through a proxy
~Added~ ~in~ ~`v7.10.0`~
If you need to pass through an http(s) proxy for connecting to {es}, the client
offers a handy out-of-the-box configuration to help you with it. Under the
hood, it uses the https://github.com/delvedor/hpagent[`hpagent`] module.
IMPORTANT: In versions 8.0+ of the client, the default `Connection` type is set to `UndiciConnection`, which does not support proxy configurations.
To use a proxy, you will need to use the `HttpConnection` class from `@elastic/transport` instead.
[source,js]
----
import { Client } from '@elastic/elasticsearch'
import { HttpConnection } from '@elastic/transport'
const client = new Client({
node: 'http://localhost:9200',
proxy: 'http://localhost:8080',
Connection: HttpConnection,
})
----
Basic authentication is supported as well:
[source,js]
----
const client = new Client({
node: 'http://localhost:9200',
  proxy: 'http://user:pwd@localhost:8080',
Connection: HttpConnection,
})
----
If you are connecting through a non-http(s) proxy, such as `socks5` or `pac`,
you can use the `agent` option to configure it.
[source,js]
----
const SocksProxyAgent = require('socks-proxy-agent')
const client = new Client({
node: 'http://localhost:9200',
agent () {
return new SocksProxyAgent('socks://127.0.0.1:1080')
},
Connection: HttpConnection,
})
----
[discrete]
[[client-error-handling]]
=== Error handling
The client exposes a variety of error objects that you can use to enhance your
error handling. You can find all the error objects inside the `errors` key in
the client.
[source,js]
----
const { errors } = require('@elastic/elasticsearch')
console.log(errors)
----
You can find the errors exported by the client in the table below.
[cols=3*]
|===
|*Error*
|*Description*
|*Properties*
|`ElasticsearchClientError`
|Every error inherits from this class; it is the basic error generated by the client.
a|* `name` - `string`
* `message` - `string`
|`TimeoutError`
|Generated when a request exceeds the `requestTimeout` option.
a|* `name` - `string`
* `message` - `string`
* `meta` - `object`, contains all the information about the request
|`ConnectionError`
|Generated when an error occurs during the request; it can be a connection error or a malformed stream of data.
a|* `name` - `string`
* `message` - `string`
* `meta` - `object`, contains all the information about the request
|`RequestAbortedError`
|Generated if the user calls the `request.abort()` method.
a|* `name` - `string`
* `message` - `string`
* `meta` - `object`, contains all the information about the request
|`NoLivingConnectionsError`
|Given the configuration, the ConnectionPool was not able to find a usable Connection for this request.
a|* `name` - `string`
* `message` - `string`
* `meta` - `object`, contains all the information about the request
|`SerializationError`
|Generated if the serialization fails.
a|* `name` - `string`
* `message` - `string`
* `data` - `object`, the object to serialize
|`DeserializationError`
|Generated if the deserialization fails.
a|* `name` - `string`
* `message` - `string`
* `data` - `string`, the string to deserialize
|`ConfigurationError`
|Generated if there is a malformed configuration or parameter.
a|* `name` - `string`
* `message` - `string`
|`ResponseError`
|Generated in case of a `4xx` or `5xx` response.
a|* `name` - `string`
* `message` - `string`
* `meta` - `object`, contains all the information about the request
* `body` - `object`, the response body
* `statusCode` - `number`, the response status code
* `headers` - `object`, the response headers
|===
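A common pattern is to catch a `ResponseError` and inspect its properties (a sketch):
[source,js]
----
const { Client, errors } = require('@elastic/elasticsearch')
const client = new Client({ node: 'https://localhost:9200' })

try {
  await client.search({
    index: 'missing-index',
    query: { match_all: {} }
  })
} catch (err) {
  if (err instanceof errors.ResponseError) {
    console.log(err.statusCode) // e.g. 404
    console.log(err.body) // the error body returned by Elasticsearch
  } else {
    throw err
  }
}
----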
[[keep-alive]]
[discrete]
=== Keep-alive connections
By default, the client uses persistent, keep-alive connections to reduce the overhead of creating a new HTTP connection for each Elasticsearch request.
If you are using the default `UndiciConnection` connection class, it maintains a pool of 256 connections with a keep-alive of 10 minutes.
If you are using the legacy `HttpConnection` connection class, it maintains a pool of 256 connections with a keep-alive of 1 minute.
If you need to disable keep-alive connections, you can override the HTTP agent with your preferred https://nodejs.org/api/http.html#http_new_agent_options[HTTP agent options]:
[source,js]
----
const client = new Client({
node: 'http://localhost:9200',
// the function takes as parameter the option
// object passed to the Connection constructor
agent: (opts) => new CustomAgent()
})
----
Or you can disable the HTTP agent entirely:
[source,js]
----
const client = new Client({
node: 'http://localhost:9200',
// Disable agent and keep-alive
agent: false
})
----
[discrete]
[[close-connections]]
=== Closing a client's connections
If you would like to close all open connections being managed by an instance of the client, use the `close()` function:
[source,js]
----
const client = new Client({
node: 'http://localhost:9200'
});
client.close();
----
[discrete]
[[product-check]]
=== Automatic product check
Since v7.14.0, the client performs a required product check before the first call.
This pre-flight product check allows the client to establish the version of Elasticsearch
that it is communicating with. The product check requires one additional HTTP request to
be sent to the server as part of the request pipeline before the main API call is sent.
In most cases, this will succeed during the very first API call that the client sends.
Once the product check completes, no further product check HTTP requests are sent for
subsequent API calls.

View File

@ -3,11 +3,17 @@
[source, js]
----
const response = await client.esql.asyncQuery({
format: "json",
query:
"\n FROM my-index-000001,cluster_one:my-index-000001,cluster_two:my-index*\n | STATS COUNT(http.response.status_code) BY user.id\n | LIMIT 2\n ",
include_ccs_metadata: true,
const response = await client.transport.request({
method: "POST",
path: "/_query/async",
querystring: {
format: "json",
},
body: {
query:
"\n FROM my-index-000001,cluster_one:my-index-000001,cluster_two:my-index*\n | STATS COUNT(http.response.status_code) BY user.id\n | LIMIT 2\n ",
include_ccs_metadata: true,
},
});
console.log(response);
----

View File

@ -3,20 +3,23 @@
[source, js]
----
const response = await client.searchApplication.renderQuery({
name: "my-app",
params: {
query_string: "my first query",
text_fields: [
{
name: "title",
boost: 5,
},
{
name: "description",
boost: 1,
},
],
const response = await client.transport.request({
method: "POST",
path: "/_application/search_application/my-app/_render_query",
body: {
params: {
query_string: "my first query",
text_fields: [
{
name: "title",
boost: 5,
},
{
name: "description",
boost: 1,
},
],
},
},
});
console.log(response);

View File

@ -3,16 +3,18 @@
[source, js]
----
const response = await client.inference.streamInference({
task_type: "chat_completion",
inference_id: "openai-completion",
model: "gpt-4o",
messages: [
{
role: "user",
content: "What is Elastic?",
},
],
const response = await client.transport.request({
method: "POST",
path: "/_inference/chat_completion/openai-completion/_stream",
body: {
model: "gpt-4o",
messages: [
{
role: "user",
content: "What is Elastic?",
},
],
},
});
console.log(response);
----

View File

@ -1,25 +0,0 @@
// This file is autogenerated, DO NOT EDIT
// Use `node scripts/generate-docs-examples.js` to generate the docs examples
[source, js]
----
const response = await client.indices.createFrom({
source: "my-index",
dest: "my-new-index",
create_from: {
settings_override: {
index: {
number_of_shards: 5,
},
},
mappings_override: {
properties: {
field2: {
type: "boolean",
},
},
},
},
});
console.log(response);
----

View File

@ -1,10 +0,0 @@
// This file is autogenerated, DO NOT EDIT
// Use `node scripts/generate-docs-examples.js` to generate the docs examples
[source, js]
----
const response = await client.indices.cancelMigrateReindex({
index: "my-data-stream",
});
console.log(response);
----

View File

@ -1,12 +0,0 @@
// This file is autogenerated, DO NOT EDIT
// Use `node scripts/generate-docs-examples.js` to generate the docs examples
[source, js]
----
const response = await client.security.delegatePki({
x509_certificate_chain: [
"MIIDeDCCAmCgAwIBAgIUBzj/nGGKxP2iXawsSquHmQjCJmMwDQYJKoZIhvcNAQELBQAwUzErMCkGA1UEAxMiRWxhc3RpY3NlYXJjaCBUZXN0IEludGVybWVkaWF0ZSBDQTEWMBQGA1UECxMNRWxhc3RpY3NlYXJjaDEMMAoGA1UEChMDb3JnMB4XDTIzMDcxODE5MjkwNloXDTQzMDcxMzE5MjkwNlowSjEiMCAGA1UEAxMZRWxhc3RpY3NlYXJjaCBUZXN0IENsaWVudDEWMBQGA1UECxMNRWxhc3RpY3NlYXJjaDEMMAoGA1UEChMDb3JnMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAllHL4pQkkfwAm/oLkxYYO+r950DEy1bjH+4viCHzNADLCTWO+lOZJVlNx7QEzJE3QGMdif9CCBBxQFMapA7oUFCLq84fPSQQu5AnvvbltVD9nwVtCs+9ZGDjMKsz98RhSLMFIkxdxi6HkQ3Lfa4ZSI4lvba4oo+T/GveazBDS+NgmKyq00EOXt3tWi1G9vEVItommzXWfv0agJWzVnLMldwkPqsw0W7zrpyT7FZS4iLbQADGceOW8fiauOGMkscu9zAnDR/SbWl/chYioQOdw6ndFLn1YIFPd37xL0WsdsldTpn0vH3YfzgLMffT/3P6YlwBegWzsx6FnM/93Ecb4wIDAQABo00wSzAJBgNVHRMEAjAAMB0GA1UdDgQWBBQKNRwjW+Ad/FN1Rpoqme/5+jrFWzAfBgNVHSMEGDAWgBRcya0c0x/PaI7MbmJVIylWgLqXNjANBgkqhkiG9w0BAQsFAAOCAQEACZ3PF7Uqu47lplXHP6YlzYL2jL0D28hpj5lGtdha4Muw1m/BjDb0Pu8l0NQ1z3AP6AVcvjNDkQq6Y5jeSz0bwQlealQpYfo7EMXjOidrft1GbqOMFmTBLpLA9SvwYGobSTXWTkJzonqVaTcf80HpMgM2uEhodwTcvz6v1WEfeT/HMjmdIsq4ImrOL9RNrcZG6nWfw0HR3JNOgrbfyEztEI471jHznZ336OEcyX7gQuvHE8tOv5+oD1d7s3Xg1yuFp+Ynh+FfOi3hPCuaHA+7F6fLmzMDLVUBAllugst1C3U+L/paD7tqIa4ka+KNPCbSfwazmJrt4XNiivPR4hwH5g==",
],
});
console.log(response);
----

View File

@ -3,23 +3,27 @@
[source, js]
----
const response = await client.simulate.ingest({
docs: [
{
_index: "my-index",
_id: "123",
_source: {
foo: "bar",
const response = await client.transport.request({
method: "POST",
path: "/_ingest/_simulate",
body: {
docs: [
{
_index: "my-index",
_id: "123",
_source: {
foo: "bar",
},
},
},
{
_index: "my-index",
_id: "456",
_source: {
foo: "rab",
{
_index: "my-index",
_id: "456",
_source: {
foo: "rab",
},
},
},
],
],
},
});
console.log(response);
----

View File

@ -1,15 +0,0 @@
// This file is autogenerated, DO NOT EDIT
// Use `node scripts/generate-docs-examples.js` to generate the docs examples
[source, js]
----
const response = await client.indices.migrateReindex({
reindex: {
source: {
index: "my-data-stream",
},
mode: "upgrade",
},
});
console.log(response);
----

View File

@ -3,10 +3,14 @@
[source, js]
----
const response = await client.security.oidcLogout({
token:
"dGhpcyBpcyBub3QgYSByZWFsIHRva2VuIGJ1dCBpdCBpcyBvbmx5IHRlc3QgZGF0YS4gZG8gbm90IHRyeSB0byByZWFkIHRva2VuIQ==",
refresh_token: "vLBPvmAB6KvwvJZr27cS",
const response = await client.transport.request({
method: "POST",
path: "/_security/oidc/logout",
body: {
token:
"dGhpcyBpcyBub3QgYSByZWFsIHRva2VuIGJ1dCBpdCBpcyBvbmx5IHRlc3QgZGF0YS4gZG8gbm90IHRyeSB0byByZWFkIHRva2VuIQ==",
refresh_token: "vLBPvmAB6KvwvJZr27cS",
},
});
console.log(response);
----

View File

@ -3,9 +3,12 @@
[source, js]
----
const response = await client.esql.asyncQueryGet({
id: "FmNJRUZ1YWZCU3dHY1BIOUhaenVSRkEaaXFlZ3h4c1RTWFNocDdnY2FSaERnUTozNDE=",
wait_for_completion_timeout: "30s",
const response = await client.transport.request({
method: "GET",
path: "/_query/async/FmNJRUZ1YWZCU3dHY1BIOUhaenVSRkEaaXFlZ3h4c1RTWFNocDdnY2FSaERnUTozNDE&#x3D;",
querystring: {
wait_for_completion_timeout: "30s",
},
});
console.log(response);
----

View File

@ -1,22 +0,0 @@
// This file is autogenerated, DO NOT EDIT
// Use `node scripts/generate-docs-examples.js` to generate the docs examples
[source, js]
----
const response = await client.indices.createFrom({
source: "my-index",
dest: "my-new-index",
create_from: {
settings_override: {
index: {
"blocks.write": null,
"blocks.read": null,
"blocks.read_only": null,
"blocks.read_only_allow_delete": null,
"blocks.metadata": null,
},
},
},
});
console.log(response);
----

View File

@ -1,21 +0,0 @@
// This file is autogenerated, DO NOT EDIT
// Use `node scripts/generate-docs-examples.js` to generate the docs examples
[source, js]
----
const response = await client.indices.create({
index: "semantic-embeddings",
mappings: {
properties: {
semantic_text: {
type: "semantic_text",
},
content: {
type: "text",
copy_to: "semantic_text",
},
},
},
});
console.log(response);
----

View File

@ -3,10 +3,14 @@
[source, js]
----
const response = await client.esql.asyncQuery({
query:
"\n FROM library\n | EVAL year = DATE_TRUNC(1 YEARS, release_date)\n | STATS MAX(page_count) BY year\n | SORT year\n | LIMIT 5\n ",
wait_for_completion_timeout: "2s",
const response = await client.transport.request({
method: "POST",
path: "/_query/async",
body: {
query:
"\n FROM library\n | EVAL year = DATE_TRUNC(1 YEARS, release_date)\n | STATS MAX(page_count) BY year\n | SORT year\n | LIMIT 5\n ",
wait_for_completion_timeout: "2s",
},
});
console.log(response);
----

View File

@ -3,8 +3,9 @@
[source, js]
----
const response = await client.esql.asyncQueryGet({
id: "FkpMRkJGS1gzVDRlM3g4ZzMyRGlLbkEaTXlJZHdNT09TU2VTZVBoNDM3cFZMUToxMDM=",
const response = await client.transport.request({
method: "GET",
path: "/_query/async/FkpMRkJGS1gzVDRlM3g4ZzMyRGlLbkEaTXlJZHdNT09TU2VTZVBoNDM3cFZMUToxMDM&#x3D;",
});
console.log(response);
----

View File

@ -3,41 +3,43 @@
[source, js]
----
const response = await client.inference.streamInference({
task_type: "chat_completion",
inference_id: "openai-completion",
messages: [
{
role: "user",
content: [
{
type: "text",
text: "What's the price of a scarf?",
},
],
},
],
tools: [
{
type: "function",
function: {
name: "get_current_price",
description: "Get the current price of a item",
parameters: {
type: "object",
properties: {
item: {
id: "123",
const response = await client.transport.request({
method: "POST",
path: "/_inference/chat_completion/openai-completion/_stream",
body: {
messages: [
{
role: "user",
content: [
{
type: "text",
text: "What's the price of a scarf?",
},
],
},
],
tools: [
{
type: "function",
function: {
name: "get_current_price",
description: "Get the current price of a item",
parameters: {
type: "object",
properties: {
item: {
id: "123",
},
},
},
},
},
},
],
tool_choice: {
type: "function",
function: {
name: "get_current_price",
],
tool_choice: {
type: "function",
function: {
name: "get_current_price",
},
},
},
});

View File

@ -1,10 +0,0 @@
// This file is autogenerated, DO NOT EDIT
// Use `node scripts/generate-docs-examples.js` to generate the docs examples
[source, js]
----
const response = await client.indices.getMigrateReindexStatus({
index: "my-data-stream",
});
console.log(response);
----

View File

@ -6,11 +6,6 @@
const response = await client.ml.startTrainedModelDeployment({
model_id: "my_model",
deployment_id: "my_model_for_search",
adaptive_allocations: {
enabled: true,
min_number_of_allocations: 3,
max_number_of_allocations: 10,
},
});
console.log(response);
----

View File

@ -3,10 +3,12 @@
[source, js]
----
const response = await client.inference.streamInference({
task_type: "completion",
inference_id: "openai-completion",
input: "What is Elastic?",
const response = await client.transport.request({
method: "POST",
path: "/_inference/completion/openai-completion/_stream",
body: {
input: "What is Elastic?",
},
});
console.log(response);
----

View File

@ -3,10 +3,14 @@
[source, js]
----
const response = await client.security.oidcPrepareAuthentication({
realm: "oidc1",
state: "lGYK0EcSLjqH6pkT5EVZjC6eIW5YCGgywj2sxROO",
nonce: "zOBXLJGUooRrbLbQk5YCcyC8AXw3iloynvluYhZ5",
const response = await client.transport.request({
method: "POST",
path: "/_security/oidc/prepare",
body: {
realm: "oidc1",
state: "lGYK0EcSLjqH6pkT5EVZjC6eIW5YCGgywj2sxROO",
nonce: "zOBXLJGUooRrbLbQk5YCcyC8AXw3iloynvluYhZ5",
},
});
console.log(response);
----

View File

@ -3,11 +3,17 @@
[source, js]
----
const response = await client.esql.asyncQuery({
format: "json",
query:
"\n FROM cluster_one:my-index*,cluster_two:logs*\n | STATS COUNT(http.response.status_code) BY user.id\n | LIMIT 2\n ",
include_ccs_metadata: true,
const response = await client.transport.request({
method: "POST",
path: "/_query/async",
querystring: {
format: "json",
},
body: {
query:
"\n FROM cluster_one:my-index*,cluster_two:logs*\n | STATS COUNT(http.response.status_code) BY user.id\n | LIMIT 2\n ",
include_ccs_metadata: true,
},
});
console.log(response);
----

View File

@ -1,11 +0,0 @@
// This file is autogenerated, DO NOT EDIT
// Use `node scripts/generate-docs-examples.js` to generate the docs examples
[source, js]
----
const response = await client.indices.getDataLifecycleStats({
human: "true",
pretty: "true",
});
console.log(response);
----

View File

@ -1,12 +0,0 @@
// This file is autogenerated, DO NOT EDIT
// Use `node scripts/generate-docs-examples.js` to generate the docs examples
[source, js]
----
const response = await client.indices.createFrom({
source: "my-index",
dest: "my-new-index",
create_from: null,
});
console.log(response);
----

View File

@ -1,10 +0,0 @@
// This file is autogenerated, DO NOT EDIT
// Use `node scripts/generate-docs-examples.js` to generate the docs examples
[source, js]
----
const response = await client.esql.asyncQueryDelete({
id: "FmdMX2pIang3UWhLRU5QS0lqdlppYncaMUpYQ05oSkpTc3kwZ21EdC1tbFJXQToxOTI=",
});
console.log(response);
----

View File

@ -3,8 +3,12 @@
[source, js]
----
const response = await client.security.bulkUpdateApiKeys({
ids: ["VuaCfGcBCdbkQm-e5aOx", "H3_AhoIBA9hmeQJdg7ij"],
const response = await client.transport.request({
method: "POST",
path: "/_security/api_key/_bulk_update",
body: {
ids: ["VuaCfGcBCdbkQm-e5aOx", "H3_AhoIBA9hmeQJdg7ij"],
},
});
console.log(response);
----

View File

@ -3,31 +3,35 @@
[source, js]
----
const response = await client.textStructure.findMessageStructure({
messages: [
"[2024-03-05T10:52:36,256][INFO ][o.a.l.u.VectorUtilPanamaProvider] [laptop] Java vector incubator API enabled; uses preferredBitSize=128",
"[2024-03-05T10:52:41,038][INFO ][o.e.p.PluginsService ] [laptop] loaded module [repository-url]",
"[2024-03-05T10:52:41,042][INFO ][o.e.p.PluginsService ] [laptop] loaded module [rest-root]",
"[2024-03-05T10:52:41,043][INFO ][o.e.p.PluginsService ] [laptop] loaded module [x-pack-core]",
"[2024-03-05T10:52:41,043][INFO ][o.e.p.PluginsService ] [laptop] loaded module [x-pack-redact]",
"[2024-03-05T10:52:41,043][INFO ][o.e.p.PluginsService ] [laptop] loaded module [ingest-user-agent]",
"[2024-03-05T10:52:41,044][INFO ][o.e.p.PluginsService ] [laptop] loaded module [x-pack-monitoring]",
"[2024-03-05T10:52:41,044][INFO ][o.e.p.PluginsService ] [laptop] loaded module [repository-s3]",
"[2024-03-05T10:52:41,044][INFO ][o.e.p.PluginsService ] [laptop] loaded module [x-pack-analytics]",
"[2024-03-05T10:52:41,044][INFO ][o.e.p.PluginsService ] [laptop] loaded module [x-pack-ent-search]",
"[2024-03-05T10:52:41,044][INFO ][o.e.p.PluginsService ] [laptop] loaded module [x-pack-autoscaling]",
"[2024-03-05T10:52:41,044][INFO ][o.e.p.PluginsService ] [laptop] loaded module [lang-painless]]",
"[2024-03-05T10:52:41,059][INFO ][o.e.p.PluginsService ] [laptop] loaded module [lang-expression]",
"[2024-03-05T10:52:41,059][INFO ][o.e.p.PluginsService ] [laptop] loaded module [x-pack-eql]",
"[2024-03-05T10:52:43,291][INFO ][o.e.e.NodeEnvironment ] [laptop] heap size [16gb], compressed ordinary object pointers [true]",
"[2024-03-05T10:52:46,098][INFO ][o.e.x.s.Security ] [laptop] Security is enabled",
"[2024-03-05T10:52:47,227][INFO ][o.e.x.p.ProfilingPlugin ] [laptop] Profiling is enabled",
"[2024-03-05T10:52:47,259][INFO ][o.e.x.p.ProfilingPlugin ] [laptop] profiling index templates will not be installed or reinstalled",
"[2024-03-05T10:52:47,755][INFO ][o.e.i.r.RecoverySettings ] [laptop] using rate limit [40mb] with [default=40mb, read=0b, write=0b, max=0b]",
"[2024-03-05T10:52:47,787][INFO ][o.e.d.DiscoveryModule ] [laptop] using discovery type [multi-node] and seed hosts providers [settings]",
"[2024-03-05T10:52:49,188][INFO ][o.e.n.Node ] [laptop] initialized",
"[2024-03-05T10:52:49,199][INFO ][o.e.n.Node ] [laptop] starting ...",
],
const response = await client.transport.request({
method: "POST",
path: "/_text_structure/find_message_structure",
body: {
messages: [
"[2024-03-05T10:52:36,256][INFO ][o.a.l.u.VectorUtilPanamaProvider] [laptop] Java vector incubator API enabled; uses preferredBitSize=128",
"[2024-03-05T10:52:41,038][INFO ][o.e.p.PluginsService ] [laptop] loaded module [repository-url]",
"[2024-03-05T10:52:41,042][INFO ][o.e.p.PluginsService ] [laptop] loaded module [rest-root]",
"[2024-03-05T10:52:41,043][INFO ][o.e.p.PluginsService ] [laptop] loaded module [x-pack-core]",
"[2024-03-05T10:52:41,043][INFO ][o.e.p.PluginsService ] [laptop] loaded module [x-pack-redact]",
"[2024-03-05T10:52:41,043][INFO ][o.e.p.PluginsService ] [laptop] loaded module [ingest-user-agent]",
"[2024-03-05T10:52:41,044][INFO ][o.e.p.PluginsService ] [laptop] loaded module [x-pack-monitoring]",
"[2024-03-05T10:52:41,044][INFO ][o.e.p.PluginsService ] [laptop] loaded module [repository-s3]",
"[2024-03-05T10:52:41,044][INFO ][o.e.p.PluginsService ] [laptop] loaded module [x-pack-analytics]",
"[2024-03-05T10:52:41,044][INFO ][o.e.p.PluginsService ] [laptop] loaded module [x-pack-ent-search]",
"[2024-03-05T10:52:41,044][INFO ][o.e.p.PluginsService ] [laptop] loaded module [x-pack-autoscaling]",
"[2024-03-05T10:52:41,044][INFO ][o.e.p.PluginsService ] [laptop] loaded module [lang-painless]]",
"[2024-03-05T10:52:41,059][INFO ][o.e.p.PluginsService ] [laptop] loaded module [lang-expression]",
"[2024-03-05T10:52:41,059][INFO ][o.e.p.PluginsService ] [laptop] loaded module [x-pack-eql]",
"[2024-03-05T10:52:43,291][INFO ][o.e.e.NodeEnvironment ] [laptop] heap size [16gb], compressed ordinary object pointers [true]",
"[2024-03-05T10:52:46,098][INFO ][o.e.x.s.Security ] [laptop] Security is enabled",
"[2024-03-05T10:52:47,227][INFO ][o.e.x.p.ProfilingPlugin ] [laptop] Profiling is enabled",
"[2024-03-05T10:52:47,259][INFO ][o.e.x.p.ProfilingPlugin ] [laptop] profiling index templates will not be installed or reinstalled",
"[2024-03-05T10:52:47,755][INFO ][o.e.i.r.RecoverySettings ] [laptop] using rate limit [40mb] with [default=40mb, read=0b, write=0b, max=0b]",
"[2024-03-05T10:52:47,787][INFO ][o.e.d.DiscoveryModule ] [laptop] using discovery type [multi-node] and seed hosts providers [settings]",
"[2024-03-05T10:52:49,188][INFO ][o.e.n.Node ] [laptop] initialized",
"[2024-03-05T10:52:49,199][INFO ][o.e.n.Node ] [laptop] starting ...",
],
},
});
console.log(response);
----

View File

@ -3,8 +3,9 @@
[source, js]
----
const response = await client.ingest.deleteIpLocationDatabase({
id: "my-database-id",
const response = await client.transport.request({
method: "DELETE",
path: "/_ingest/ip_location/database/my-database-id",
});
console.log(response);
----

View File

@ -3,26 +3,30 @@
[source, js]
----
const response = await client.security.bulkUpdateApiKeys({
ids: ["VuaCfGcBCdbkQm-e5aOx", "H3_AhoIBA9hmeQJdg7ij"],
role_descriptors: {
"role-a": {
indices: [
{
names: ["*"],
privileges: ["write"],
},
],
const response = await client.transport.request({
method: "POST",
path: "/_security/api_key/_bulk_update",
body: {
ids: ["VuaCfGcBCdbkQm-e5aOx", "H3_AhoIBA9hmeQJdg7ij"],
role_descriptors: {
"role-a": {
indices: [
{
names: ["*"],
privileges: ["write"],
},
],
},
},
},
metadata: {
environment: {
level: 2,
trusted: true,
tags: ["production"],
metadata: {
environment: {
level: 2,
trusted: true,
tags: ["production"],
},
},
expiration: "30d",
},
expiration: "30d",
});
console.log(response);
----

View File

@ -3,30 +3,32 @@
[source, js]
----
const response = await client.inference.streamInference({
task_type: "chat_completion",
inference_id: "openai-completion",
messages: [
{
role: "assistant",
content: "Let's find out what the weather is",
tool_calls: [
{
id: "call_KcAjWtAww20AihPHphUh46Gd",
type: "function",
function: {
name: "get_current_weather",
arguments: '{"location":"Boston, MA"}',
const response = await client.transport.request({
method: "POST",
path: "/_inference/chat_completion/openai-completion/_stream",
body: {
messages: [
{
role: "assistant",
content: "Let's find out what the weather is",
tool_calls: [
{
id: "call_KcAjWtAww20AihPHphUh46Gd",
type: "function",
function: {
name: "get_current_weather",
arguments: '{"location":"Boston, MA"}',
},
},
},
],
},
{
role: "tool",
content: "The weather is cold",
tool_call_id: "call_KcAjWtAww20AihPHphUh46Gd",
},
],
],
},
{
role: "tool",
content: "The weather is cold",
tool_call_id: "call_KcAjWtAww20AihPHphUh46Gd",
},
],
},
});
console.log(response);
----

View File

@ -3,9 +3,10 @@
[source, js]
----
const response = await client.ingest.putIpLocationDatabase({
id: "my-database-1",
configuration: {
const response = await client.transport.request({
method: "PUT",
path: "/_ingest/ip_location/database/my-database-1",
body: {
name: "GeoIP2-Domain",
maxmind: {
account_id: "1234567",

View File

@ -1,8 +0,0 @@
// This file is autogenerated, DO NOT EDIT
// Use `node scripts/generate-docs-examples.js` to generate the docs examples
[source, js]
----
const response = await client.indices.resolveCluster();
console.log(response);
----

View File

@ -3,8 +3,9 @@
[source, js]
----
const response = await client.ingest.deleteIpLocationDatabase({
id: "example-database-id",
const response = await client.transport.request({
method: "DELETE",
path: "/_ingest/ip_location/database/example-database-id",
});
console.log(response);
----

View File

@ -3,10 +3,10 @@
[source, js]
----
const response = await client.searchApplication.postBehavioralAnalyticsEvent({
collection_name: "my_analytics_collection",
event_type: "search_click",
payload: {
const response = await client.transport.request({
method: "POST",
path: "/_application/analytics/my_analytics_collection/event/search_click",
body: {
session: {
id: "1797ca95-91c9-4e2e-b1bd-9c38e6f386a9",
},

View File

@ -3,12 +3,16 @@
[source, js]
----
const response = await client.security.oidcAuthenticate({
redirect_uri:
"https://oidc-kibana.elastic.co:5603/api/security/oidc/callback?code=jtI3Ntt8v3_XvcLzCFGq&state=4dbrihtIAt3wBTwo6DxK-vdk-sSyDBV8Yf0AjdkdT5I",
state: "4dbrihtIAt3wBTwo6DxK-vdk-sSyDBV8Yf0AjdkdT5I",
nonce: "WaBPH0KqPVdG5HHdSxPRjfoZbXMCicm5v1OiAj0DUFM",
realm: "oidc1",
const response = await client.transport.request({
method: "POST",
path: "/_security/oidc/authenticate",
body: {
redirect_uri:
"https://oidc-kibana.elastic.co:5603/api/security/oidc/callback?code=jtI3Ntt8v3_XvcLzCFGq&state=4dbrihtIAt3wBTwo6DxK-vdk-sSyDBV8Yf0AjdkdT5I",
state: "4dbrihtIAt3wBTwo6DxK-vdk-sSyDBV8Yf0AjdkdT5I",
nonce: "WaBPH0KqPVdG5HHdSxPRjfoZbXMCicm5v1OiAj0DUFM",
realm: "oidc1",
},
});
console.log(response);
----

View File

@ -3,10 +3,13 @@
[source, js]
----
const response = await client.searchApplication.renderQuery({
name: "my_search_application",
params: {
query_string: "rock climbing",
const response = await client.transport.request({
method: "POST",
path: "/_application/search_application/my_search_application/_render_query",
body: {
params: {
query_string: "rock climbing",
},
},
});
console.log(response);

View File

@ -1,10 +0,0 @@
// This file is autogenerated, DO NOT EDIT
// Use `node scripts/generate-docs-examples.js` to generate the docs examples
[source, js]
----
const response = await client.indices.cancelMigrateReindex({
index: "my-data-stream",
});
console.log(response);
----

View File

@ -1,10 +0,0 @@
// This file is autogenerated, DO NOT EDIT
// Use `node scripts/generate-docs-examples.js` to generate the docs examples
[source, js]
----
const response = await client.indices.getMigrateReindexStatus({
index: "my-data-stream",
});
console.log(response);
----

View File

@ -3,8 +3,9 @@
[source, js]
----
const response = await client.searchApplication.renderQuery({
name: "my_search_application",
const response = await client.transport.request({
method: "POST",
path: "/_application/search_application/my_search_application/_render_query",
});
console.log(response);
----

View File

@ -1,10 +0,0 @@
// This file is autogenerated, DO NOT EDIT
// Use `node scripts/generate-docs-examples.js` to generate the docs examples
[source, js]
----
const response = await client.esql.asyncQueryStop({
id: "FkpMRkJGS1gzVDRlM3g4ZzMyRGlLbkEaTXlJZHdNT09TU2VTZVBoNDM3cFZMUToxMDM=",
});
console.log(response);
----

View File

@ -208,9 +208,13 @@ const response = await client.bulk({
});
console.log(response);
const response1 = await client.textStructure.findFieldStructure({
index: "test-logs",
field: "message",
const response1 = await client.transport.request({
method: "GET",
path: "/_text_structure/find_field_structure",
querystring: {
index: "test-logs",
field: "message",
},
});
console.log(response1);
----

View File

@ -3,32 +3,36 @@
[source, js]
----
const response = await client.simulate.ingest({
docs: [
{
_index: "my-index",
_id: "123",
_source: {
foo: "bar",
},
},
{
_index: "my-index",
_id: "456",
_source: {
foo: "rab",
},
},
],
pipeline_substitutions: {
"my-pipeline": {
processors: [
{
uppercase: {
field: "foo",
},
const response = await client.transport.request({
method: "POST",
path: "/_ingest/_simulate",
body: {
docs: [
{
_index: "my-index",
_id: "123",
_source: {
foo: "bar",
},
],
},
{
_index: "my-index",
_id: "456",
_source: {
foo: "rab",
},
},
],
pipeline_substitutions: {
"my-pipeline": {
processors: [
{
uppercase: {
field: "foo",
},
},
],
},
},
},
});

View File

@ -3,9 +3,13 @@
[source, js]
----
const response = await client.security.bulkUpdateApiKeys({
ids: ["VuaCfGcBCdbkQm-e5aOx", "H3_AhoIBA9hmeQJdg7ij"],
role_descriptors: {},
const response = await client.transport.request({
method: "POST",
path: "/_security/api_key/_bulk_update",
body: {
ids: ["VuaCfGcBCdbkQm-e5aOx", "H3_AhoIBA9hmeQJdg7ij"],
role_descriptors: {},
},
});
console.log(response);
----

View File

@ -3,65 +3,69 @@
[source, js]
----
const response = await client.simulate.ingest({
docs: [
{
_index: "my-index",
_id: "id",
_source: {
foo: "bar",
},
},
{
_index: "my-index",
_id: "id",
_source: {
foo: "rab",
},
},
],
pipeline_substitutions: {
"my-pipeline": {
processors: [
{
set: {
field: "field3",
value: "value3",
},
const response = await client.transport.request({
method: "POST",
path: "/_ingest/_simulate",
body: {
docs: [
{
_index: "my-index",
_id: "id",
_source: {
foo: "bar",
},
],
},
{
_index: "my-index",
_id: "id",
_source: {
foo: "rab",
},
},
],
pipeline_substitutions: {
"my-pipeline": {
processors: [
{
set: {
field: "field3",
value: "value3",
},
},
],
},
},
},
component_template_substitutions: {
"my-component-template": {
template: {
mappings: {
dynamic: "true",
properties: {
field3: {
type: "keyword",
component_template_substitutions: {
"my-component-template": {
template: {
mappings: {
dynamic: "true",
properties: {
field3: {
type: "keyword",
},
},
},
settings: {
index: {
default_pipeline: "my-pipeline",
},
},
},
settings: {
index: {
default_pipeline: "my-pipeline",
},
},
},
},
},
index_template_substitutions: {
"my-index-template": {
index_patterns: ["my-index-*"],
composed_of: ["component_template_1", "component_template_2"],
index_template_substitutions: {
"my-index-template": {
index_patterns: ["my-index-*"],
composed_of: ["component_template_1", "component_template_2"],
},
},
},
mapping_addition: {
dynamic: "strict",
properties: {
foo: {
type: "keyword",
mapping_addition: {
dynamic: "strict",
properties: {
foo: {
type: "keyword",
},
},
},
},

View File

@ -3,9 +3,13 @@
[source, js]
----
const response = await client.security.oidcPrepareAuthentication({
iss: "http://127.0.0.1:8080",
login_hint: "this_is_an_opaque_string",
const response = await client.transport.request({
method: "POST",
path: "/_security/oidc/prepare",
body: {
iss: "http://127.0.0.1:8080",
login_hint: "this_is_an_opaque_string",
},
});
console.log(response);
----

View File

@ -3,9 +3,10 @@
[source, js]
----
const response = await client.ingest.putIpLocationDatabase({
id: "my-database-2",
configuration: {
const response = await client.transport.request({
method: "PUT",
path: "/_ingest/ip_location/database/my-database-2",
body: {
name: "standard_location",
ipinfo: {},
},

View File

@ -3,8 +3,9 @@
[source, js]
----
const response = await client.ingest.getIpLocationDatabase({
id: "my-database-id",
const response = await client.transport.request({
method: "GET",
path: "/_ingest/ip_location/database/my-database-id",
});
console.log(response);
----

View File

@ -3,6 +3,9 @@
[source, js]
----
const response = await client.security.getSettings();
const response = await client.transport.request({
method: "GET",
path: "/_security/settings",
});
console.log(response);
----

View File

@ -1,14 +0,0 @@
// This file is autogenerated, DO NOT EDIT
// Use `node scripts/generate-docs-examples.js` to generate the docs examples
[source, js]
----
const response = await client.indices.createFrom({
source: "my-index",
dest: "my-new-index",
create_from: {
remove_index_blocks: false,
},
});
console.log(response);
----

View File

@ -3,8 +3,12 @@
[source, js]
----
const response = await client.security.oidcPrepareAuthentication({
realm: "oidc1",
const response = await client.transport.request({
method: "POST",
path: "/_security/oidc/prepare",
body: {
realm: "oidc1",
},
});
console.log(response);
----

View File

@ -3,34 +3,38 @@
[source, js]
----
const response = await client.simulate.ingest({
docs: [
{
_index: "my-index",
_id: "123",
_source: {
foo: "foo",
const response = await client.transport.request({
method: "POST",
path: "/_ingest/_simulate",
body: {
docs: [
{
_index: "my-index",
_id: "123",
_source: {
foo: "foo",
},
},
},
{
_index: "my-index",
_id: "456",
_source: {
bar: "rab",
{
_index: "my-index",
_id: "456",
_source: {
bar: "rab",
},
},
},
],
component_template_substitutions: {
"my-mappings_template": {
template: {
mappings: {
dynamic: "strict",
properties: {
foo: {
type: "keyword",
},
bar: {
type: "keyword",
],
component_template_substitutions: {
"my-mappings_template": {
template: {
mappings: {
dynamic: "strict",
properties: {
foo: {
type: "keyword",
},
bar: {
type: "keyword",
},
},
},
},

View File

@ -1,12 +0,0 @@
// This file is autogenerated, DO NOT EDIT
// Use `node scripts/generate-docs-examples.js` to generate the docs examples
[source, js]
----
const response = await client.indices.createFrom({
source: ".ml-anomalies-custom-example",
dest: ".reindexed-v9-ml-anomalies-custom-example",
create_from: null,
});
console.log(response);
----

View File

@ -1,12 +0,0 @@
project: 'Node.js client'
exclude:
- examples/proxy/README.md
cross_links:
- docs-content
- elasticsearch
toc:
- toc: reference
- toc: release-notes
subs:
stack: "Elastic Stack"
es: "Elasticsearch"

View File

@ -1,13 +1,11 @@
---
mapped_pages:
- https://www.elastic.co/guide/en/elasticsearch/client/javascript-api/current/as_stream_examples.html
---
[[as_stream_examples]]
=== asStream
# asStream [as_stream_examples]
Instead of getting the parsed body back, you will get the raw Node.js stream of
data.
Instead of getting the parsed body back, you will get the raw Node.js stream of data.
```js
[source,js]
----
'use strict'
const { Client } = require('@elastic/elasticsearch')
@ -68,14 +66,13 @@ async function run () {
}
run().catch(console.log)
```
----
::::{tip}
This can be useful if you need to pipe the {{es}}'s response to a proxy, or send it directly to another source.
::::
TIP: This can be useful if you need to pipe the {es}'s response to a proxy, or
send it directly to another source.
```js
[source,js]
----
'use strict'
const { Client } = require('@elastic/elasticsearch')
@ -99,5 +96,4 @@ fastify.post('/search/:index', async (req, reply) => {
})
fastify.listen(3000)
```
----

View File

@ -1,18 +1,13 @@
---
mapped_pages:
- https://www.elastic.co/guide/en/elasticsearch/client/javascript-api/current/bulk_examples.html
---
[[bulk_examples]]
=== Bulk
# Bulk [bulk_examples]
With the {jsclient}/api-reference.html#_bulk[`bulk` API], you can perform multiple index/delete operations in a
single API call. The `bulk` API significantly increases indexing speed.
With the [`bulk` API](/reference/api-reference.md#_bulk), you can perform multiple index/delete operations in a single API call. The `bulk` API significantly increases indexing speed.
NOTE: You can also use the {jsclient}/client-helpers.html[bulk helper].
::::{note}
You can also use the [bulk helper](/reference/client-helpers.md#bulk-helper).
::::
```js
[source,js]
----
'use strict'
require('array.prototype.flatmap').shim()
@ -95,5 +90,4 @@ async function run () {
}
run().catch(console.log)
```
----
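For quick reference, a minimal sketch of a single `bulk` call, assuming a configured `client`; in the 8.x client the request body is passed as `operations`, and the index and documents here are illustrative.
[source,js]
----
const response = await client.bulk({
  refresh: true,
  operations: [
    { index: { _index: 'game-of-thrones' } },
    { character: 'Ned Stark', quote: 'Winter is coming.' },
    { index: { _index: 'game-of-thrones' } },
    { character: 'Arya Stark', quote: 'A girl is Arya Stark of Winterfell.' }
  ]
})
// `errors` is true if any individual operation failed.
if (response.errors) console.log(response.items)
----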

View File

@ -1,18 +1,12 @@
---
mapped_pages:
- https://www.elastic.co/guide/en/elasticsearch/client/javascript-api/current/exists_examples.html
---
# Exists [exists_examples]
[[exists_examples]]
=== Exists
Check that the document `/game-of-thrones/1` exists.
::::{note}
Since this API uses the `HEAD` method, the body value will be boolean.
::::
NOTE: Since this API uses the `HEAD` method, the body value will be boolean.
```js
[source,js]
---------
'use strict'
const { Client } = require('@elastic/elasticsearch')
@ -40,5 +34,4 @@ async function run () {
}
run().catch(console.log)
```
---------
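A minimal sketch of the check described above, assuming a configured `client`; because the API uses the `HEAD` method, the resolved value is a boolean.
[source,js]
----
const exists = await client.exists({
  index: 'game-of-thrones',
  id: '1'
})
console.log(exists) // true or false
----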

View File

@ -1,13 +1,12 @@
---
mapped_pages:
- https://www.elastic.co/guide/en/elasticsearch/client/javascript-api/current/get_examples.html
---
[[get_examples]]
=== Get
# Get [get_examples]
The get API allows you to get a typed JSON document from the index based on its id.
The following example gets a JSON document from an index called
`game-of-thrones`, under a type called `_doc`, with the id `'1'`.
The get API allows you to get a typed JSON document from the index based on its id. The following example gets a JSON document from an index called `game-of-thrones`, under a type called `_doc`, with the id `'1'`.
```js
[source,js]
---------
'use strict'
const { Client } = require('@elastic/elasticsearch')
@ -35,5 +34,4 @@ async function run () {
}
run().catch(console.log)
```
---------
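A minimal sketch of the call described above, assuming a configured `client`:
[source,js]
----
const document = await client.get({
  index: 'game-of-thrones',
  id: '1'
})
// The document body is available under `_source`.
console.log(document._source)
----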

View File

@ -1,13 +1,10 @@
---
mapped_pages:
- https://www.elastic.co/guide/en/elasticsearch/client/javascript-api/current/ignore_examples.html
---
# Ignore [ignore_examples]
[[ignore_examples]]
=== Ignore
The `ignore` option lets you specify HTTP status codes which should not be considered errors for a specific request.
```js
[source,js]
----
'use strict'
const { Client } = require('@elastic/elasticsearch')
@ -65,5 +62,4 @@ async function run () {
}
run().catch(console.log)
```
----
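A minimal sketch of the `ignore` request option, assuming a configured `client`; the listed status codes are treated as successful responses instead of being thrown as errors, and the index name is illustrative.
[source,js]
----
// A 404 from this call will not throw, thanks to the `ignore` option.
const response = await client.indices.delete({
  index: 'an-index-that-may-not-exist'
}, { ignore: [404] })
console.log(response)
----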

View File

@ -0,0 +1,34 @@
[[examples]]
== Examples
Below you can find some examples of how to use the client.
* Use of the <<as_stream_examples,asStream>> parameter;
* Executing a <<bulk_examples,bulk>> request;
* Executing a <<exists_examples,exists>> request;
* Executing a <<get_examples,get>> request;
* Executing a <<sql_query_examples,sql.query>> request;
* Executing a <<update_examples,update>> request;
* Executing a <<update_by_query_examples,update by query>> request;
* Executing a <<reindex_examples,reindex>> request;
* Use of the <<ignore_examples,ignore>> parameter;
* Executing a <<msearch_examples,msearch>> request;
* How do I <<scroll_examples,scroll>>?
* Executing a <<search_examples,search>> request;
* I need <<suggest_examples,suggestions>>;
* How to use the <<transport_request_examples,transport.request>> method;
include::asStream.asciidoc[]
include::bulk.asciidoc[]
include::exists.asciidoc[]
include::get.asciidoc[]
include::ignore.asciidoc[]
include::msearch.asciidoc[]
include::scroll.asciidoc[]
include::search.asciidoc[]
include::suggest.asciidoc[]
include::transport.request.asciidoc[]
include::sql.query.asciidoc[]
include::update.asciidoc[]
include::update_by_query.asciidoc[]
include::reindex.asciidoc[]

View File

@ -1,13 +1,11 @@
---
mapped_pages:
- https://www.elastic.co/guide/en/elasticsearch/client/javascript-api/current/msearch_examples.html
---
[[msearch_examples]]
=== MSearch
# MSearch [msearch_examples]
The multi search API allows you to execute several search requests within a
single API call.
The multi search API allows you to execute several search requests within a single API call.
```js
[source,js]
----
'use strict'
const { Client } = require('@elastic/elasticsearch')
@ -59,5 +57,4 @@ async function run () {
}
run().catch(console.log)
```
----
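A minimal sketch of an `msearch` call with two searches, assuming a configured `client`; in the 8.x client the body is passed as `searches`, alternating header and body entries, and the index and queries are illustrative.
[source,js]
----
const response = await client.msearch({
  searches: [
    { index: 'game-of-thrones' },
    { query: { match: { character: 'Daenerys' } } },
    { index: 'game-of-thrones' },
    { query: { match: { character: 'Tyrion' } } }
  ]
})
// One entry per search, in the same order as the request.
console.log(response.responses)
----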

View File

@ -1,6 +1,20 @@
/*
* Copyright Elasticsearch B.V. and contributors
* SPDX-License-Identifier: Apache-2.0
* Licensed to Elasticsearch B.V. under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch B.V. licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
// IMPORTANT: this is not a production ready code & purely for demonstration purposes,

View File

@ -1,6 +1,20 @@
/*
* Copyright Elasticsearch B.V. and contributors
* SPDX-License-Identifier: Apache-2.0
* Licensed to Elasticsearch B.V. under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch B.V. licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
// IMPORTANT: this is not a production ready code & purely for demonstration purposes,

View File

@ -1,6 +1,20 @@
/*
* Copyright Elasticsearch B.V. and contributors
* SPDX-License-Identifier: Apache-2.0
* Licensed to Elasticsearch B.V. under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch B.V. licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
// IMPORTANT: this is not a production ready code & purely for demonstration purposes,

View File

@ -1,6 +1,20 @@
/*
* Copyright Elasticsearch B.V. and contributors
* SPDX-License-Identifier: Apache-2.0
* Licensed to Elasticsearch B.V. under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch B.V. licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
// IMPORTANT: this is not a production ready code & purely for demonstration purposes,

View File

@ -1,6 +1,20 @@
/*
* Copyright Elasticsearch B.V. and contributors
* SPDX-License-Identifier: Apache-2.0
* Licensed to Elasticsearch B.V. under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch B.V. licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
// IMPORTANT: this is not a production ready code & purely for demonstration purposes,

View File

@ -1,6 +1,20 @@
/*
* Copyright Elasticsearch B.V. and contributors
* SPDX-License-Identifier: Apache-2.0
* Licensed to Elasticsearch B.V. under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch B.V. licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
'use strict'

View File

@ -1,15 +1,17 @@
---
mapped_pages:
- https://www.elastic.co/guide/en/elasticsearch/client/javascript-api/current/reindex_examples.html
---
[[reindex_examples]]
=== Reindex
# Reindex [reindex_examples]
The `reindex` API extracts the document source from the source index and indexes
the documents into the destination index. You can copy all documents to the
destination index, reindex a subset of the documents, or update the source before
reindexing it.
The `reindex` API extracts the document source from the source index and indexes the documents into the destination index. You can copy all documents to the destination index, reindex a subset of the documents, or update the source before reindexing it.
In the following example we have a `game-of-thrones` index which contains
different quotes of various characters; we want to create a new index only for
the house Stark and remove the `house` field from the document source.
In the following example we have a `game-of-thrones` index which contains different quotes of various characters; we want to create a new index only for the house Stark and remove the `house` field from the document source.
```js
[source,js]
----
'use strict'
const { Client } = require('@elastic/elasticsearch')
@ -74,5 +76,4 @@ async function run () {
}
run().catch(console.log)
```
----
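A minimal sketch of the reindex described above, assuming a configured `client`; the destination index name and the painless script are illustrative.
[source,js]
----
await client.reindex({
  source: {
    index: 'game-of-thrones',
    query: { match: { house: 'stark' } }
  },
  dest: { index: 'stark-quotes' },
  // Drop the `house` field while copying the documents.
  script: { lang: 'painless', source: 'ctx._source.remove("house")' }
})
----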

View File

@ -1,27 +1,28 @@
---
mapped_pages:
- https://www.elastic.co/guide/en/elasticsearch/client/javascript-api/current/scroll_examples.html
---
[[scroll_examples]]
=== Scroll
# Scroll [scroll_examples]
While a search request returns a single “page” of results, the scroll API can be
used to retrieve large numbers of results (or even all results) from a single
search request, in much the same way as you would use a cursor on a traditional
database.
While a search request returns a single “page” of results, the scroll API can be used to retrieve large numbers of results (or even all results) from a single search request, in much the same way as you would use a cursor on a traditional database.
Scrolling is not intended for real time user requests, but rather for processing
large amounts of data, for example in order to reindex the contents of one index
into a new index with a different configuration.
Scrolling is not intended for real time user requests, but rather for processing large amounts of data, for example in order to reindex the contents of one index into a new index with a different configuration.
NOTE: The results that are returned from a scroll request reflect the state of
the index at the time that the initial search request was made, like a snapshot
in time. Subsequent changes to documents (index, update or delete) will only
affect later search requests.
::::{note}
The results that are returned from a scroll request reflect the state of the index at the time that the initial search request was made, like a snapshot in time. Subsequent changes to documents (index, update or delete) will only affect later search requests.
::::
In order to use scrolling, the initial search request should specify the scroll
parameter in the query string, which tells {es} how long it should keep the
“search context” alive.
NOTE: Did you know that we provide a helper for sending scroll requests? You can find it {jsclient}/client-helpers.html[here].
In order to use scrolling, the initial search request should specify the scroll parameter in the query string, which tells {{es}} how long it should keep the “search context” alive.
::::{note}
Did you know that we provide a helper for sending scroll requests? You can find it [here](/reference/client-helpers.md#scroll-search-helper).
::::
```js
[source,js]
----
'use strict'
const { Client } = require('@elastic/elasticsearch')
@ -110,11 +111,13 @@ async function run () {
}
run().catch(console.log)
```
----
Another cool usage of the `scroll` API can be done with Node.js ≥ 10, by using async iteration!
Another cool usage of the `scroll` API can be done with Node.js ≥ 10, by using
async iteration!
```js
[source,js]
----
'use strict'
const { Client } = require('@elastic/elasticsearch')
@ -189,5 +192,4 @@ async function run () {
}
run().catch(console.log)
```
----
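A minimal sketch of a plain scroll loop, assuming a configured `client`; the scroll duration, page size, and index are illustrative.
[source,js]
----
// The first search starts the scroll by passing the `scroll` parameter.
let response = await client.search({
  index: 'game-of-thrones',
  scroll: '30s',
  size: 100,
  query: { match_all: {} }
})

while (response.hits.hits.length > 0) {
  for (const hit of response.hits.hits) console.log(hit._source)
  // Fetch the next page with the scroll id returned by the previous call.
  response = await client.scroll({
    scroll_id: response._scroll_id,
    scroll: '30s'
  })
}
----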

View File

@ -1,13 +1,14 @@
---
mapped_pages:
- https://www.elastic.co/guide/en/elasticsearch/client/javascript-api/current/search_examples.html
---
[[search_examples]]
=== Search
# Search [search_examples]
The `search` API allows you to execute a search query and get back search hits
that match the query. The query can either be provided using a simple
https://www.elastic.co/guide/en/elasticsearch/reference/6.6/search-uri-request.html[query string as a parameter],
or using a
https://www.elastic.co/guide/en/elasticsearch/reference/6.6/search-request-body.html[request body].
The `search` API allows you to execute a search query and get back search hits that match the query. The query can either be provided using a simple [query string as a parameter](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-search), or using a [request body](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-request-body.html).
```js
[source,js]
----
'use strict'
const { Client } = require('@elastic/elasticsearch')
@ -60,5 +61,4 @@ async function run () {
}
run().catch(console.log)
```
----
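A minimal sketch of a search using a request body query, assuming a configured `client`; the index and field names are illustrative.
[source,js]
----
const response = await client.search({
  index: 'game-of-thrones',
  query: { match: { quote: 'winter' } }
})
console.log(response.hits.hits)
----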

View File

@ -1,15 +1,19 @@
---
mapped_pages:
- https://www.elastic.co/guide/en/elasticsearch/client/javascript-api/current/sql_query_examples.html
---
[[sql_query_examples]]
=== SQL
# SQL [sql_query_examples]
{es} SQL is an X-Pack component that allows SQL-like queries to be executed in
real-time against {es}. Whether using the REST interface, command-line or JDBC,
any client can use SQL to search and aggregate data natively inside {es}. One
can think of {es} SQL as a translator, one that understands both SQL and {es}
and makes it easy to read and process data in real-time, at scale by leveraging
{es} capabilities.
{{es}} SQL is an X-Pack component that allows SQL-like queries to be executed in real-time against {{es}}. Whether using the REST interface, command-line or JDBC, any client can use SQL to search and aggregate data natively inside {{es}}. One can think of {{es}} SQL as a translator, one that understands both SQL and {{es}} and makes it easy to read and process data in real-time, at scale by leveraging {{es}} capabilities.
In the following example we will search all the documents that have the field
`house` equal to `stark`, log the result with the tabular view, and then
manipulate the result to obtain an object that is easy to navigate.
In the following example we will search all the documents that have the field `house` equal to `stark`, log the result with the tabular view, and then manipulate the result to obtain an object that is easy to navigate.
```js
[source,js]
----
'use strict'
const { Client } = require('@elastic/elasticsearch')
@ -65,5 +69,4 @@ async function run () {
}
run().catch(console.log)
```
----
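A minimal sketch of an SQL query made with the client, assuming the `game-of-thrones` index described above:
[source,js]
----
const result = await client.sql.query({
  query: 'SELECT * FROM "game-of-thrones" WHERE house=\'stark\''
})
// Columns and rows come back separately in the tabular response.
console.log(result.columns)
console.log(result.rows)
----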

View File

@ -1,15 +1,14 @@
---
mapped_pages:
- https://www.elastic.co/guide/en/elasticsearch/client/javascript-api/current/suggest_examples.html
---
[[suggest_examples]]
=== Suggest
# Suggest [suggest_examples]
The suggest feature suggests similar looking terms based on a provided text by
using a suggester. _Parts of the suggest feature are still under development._
The suggest feature suggests similar looking terms based on a provided text by using a suggester. *Parts of the suggest feature are still under development.*
The suggest request part is defined alongside the query part in a `search`
request. If the query part is left out, only suggestions are returned.
The suggest request part is defined alongside the query part in a `search` request. If the query part is left out, only suggestions are returned.
```js
[source,js]
----
'use strict'
const { Client } = require('@elastic/elasticsearch')
@ -64,5 +63,5 @@ async function run () {
}
run().catch(console.log)
```
----
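A minimal sketch of a suggest section defined alongside the query in a `search` request, assuming a configured `client`; the field and suggester names are illustrative.
[source,js]
----
const response = await client.search({
  index: 'game-of-thrones',
  query: { match: { quote: 'winter' } },
  suggest: {
    mySuggestion: {
      text: 'wintr',
      term: { field: 'quote' }
    }
  }
})
console.log(response.suggest)
----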

View File

@ -1,23 +1,22 @@
---
mapped_pages:
- https://www.elastic.co/guide/en/elasticsearch/client/javascript-api/current/transport_request_examples.html
---
[[transport_request_examples]]
=== transport.request
# transport.request [transport_request_examples]
It can happen that you need to communicate with {es} by using an API that is not
supported by the client; to mitigate this issue you can directly call
`client.transport.request`, which is the internal utility that the client uses
to communicate with {es} when you use an API method.
It can happen that you need to communicate with {{es}} by using an API that is not supported by the client; to mitigate this issue you can directly call `client.transport.request`, which is the internal utility that the client uses to communicate with {{es}} when you use an API method.
::::{note}
When using the `transport.request` method you must provide all the parameters needed to perform an HTTP call, such as `method`, `path`, `querystring`, and `body`.
::::
NOTE: When using the `transport.request` method you must provide all the
parameters needed to perform an HTTP call, such as `method`, `path`,
`querystring`, and `body`.
::::{tip}
If you find yourself using this method too often, consider using `client.extend`, which will make your code look cleaner and easier to maintain.
::::
TIP: If you find yourself using this method too often, consider using
`client.extend`, which will make your code look cleaner and easier to
maintain.
```js
[source,js]
----
'use strict'
const { Client } = require('@elastic/elasticsearch')
@ -72,5 +71,4 @@ async function run () {
}
run().catch(console.log)
```
----
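A minimal sketch of a raw call made with `transport.request`, assuming a configured `client`; the method, path, and querystring must all be provided explicitly, and the path here is illustrative.
[source,js]
----
const response = await client.transport.request({
  method: 'GET',
  path: '/_cluster/health',
  querystring: { level: 'indices' }
})
console.log(response)
----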

View File

@ -1,13 +1,12 @@
---
mapped_pages:
- https://www.elastic.co/guide/en/elasticsearch/client/javascript-api/current/update_examples.html
---
[[update_examples]]
=== Update
# Update [update_examples]
The update API allows updates of a specific document using the given script. In
the following example, we will index a document that also tracks how many times
a character has said the given quote, and then we will update the `times` field.
The update API allows updates of a specific document using the given script. In the following example, we will index a document that also tracks how many times a character has said the given quote, and then we will update the `times` field.
```js
[source,js]
---------
'use strict'
const { Client } = require('@elastic/elasticsearch')
@ -48,11 +47,13 @@ async function run () {
}
run().catch(console.log)
```
---------
With the update API, you can also run a partial update of a document.
```js
[source,js]
---------
'use strict'
const { Client } = require('@elastic/elasticsearch')
@ -89,5 +90,6 @@ async function run () {
}
run().catch(console.log)
```
---------
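A minimal sketch of the two update flavours described above, assuming a configured `client`: a scripted update of the `times` field followed by a partial-document update; index, id, and field names are illustrative.
[source,js]
----
// Scripted update: increment the counter.
await client.update({
  index: 'game-of-thrones',
  id: '1',
  script: {
    lang: 'painless',
    source: 'ctx._source.times++'
  }
})

// Partial update: only the provided fields are changed.
await client.update({
  index: 'game-of-thrones',
  id: '1',
  doc: { house: 'stark' }
})
----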

View File

@ -1,13 +1,12 @@
---
mapped_pages:
- https://www.elastic.co/guide/en/elasticsearch/client/javascript-api/current/update_by_query_examples.html
---
[[update_by_query_examples]]
=== Update By Query
# Update By Query [update_by_query_examples]
The simplest usage of _update_by_query just performs an update on every document
in the index without changing the source. This is useful to pick up a new
property or some other online mapping change.
The simplest usage of _update_by_query just performs an update on every document in the index without changing the source. This is useful to pick up a new property or some other online mapping change.
```js
[source,js]
---------
'use strict'
const { Client } = require('@elastic/elasticsearch')
@ -57,5 +56,5 @@ async function run () {
}
run().catch(console.log)
```
---------
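A minimal sketch of the simplest `updateByQuery` usage, assuming a configured `client`; it touches every document in the index without changing the source, and the index name is illustrative.
[source,js]
----
await client.updateByQuery({
  index: 'game-of-thrones',
  refresh: true
})
----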

View File

@ -0,0 +1,170 @@
[[getting-started-js]]
== Getting started
This page guides you through the installation process of the Node.js client,
shows you how to instantiate the client, and how to perform basic Elasticsearch
operations with it.
[discrete]
=== Requirements
* https://nodejs.org/[Node.js] version 14.x or newer
* https://docs.npmjs.com/downloading-and-installing-node-js-and-npm[`npm`], usually bundled with Node.js
[discrete]
=== Installation
To install the latest version of the client, run the following command:
[source,shell]
--------------------------
npm install @elastic/elasticsearch
--------------------------
Refer to the <<installation>> page to learn more.
[discrete]
=== Connecting
You can connect to the Elastic Cloud using an API key and the Elasticsearch
endpoint.
[source,js]
----
const { Client } = require('@elastic/elasticsearch')
const client = new Client({
node: 'https://...', // Elasticsearch endpoint
auth: {
apiKey: { // API key ID and secret
id: 'foo',
api_key: 'bar',
}
}
})
----
Your Elasticsearch endpoint can be found on the **My deployment** page of your
deployment:
image::images/es-endpoint.jpg[alt="Finding Elasticsearch endpoint",align="center"]
You can generate an API key on the **Management** page under Security.
image::images/create-api-key.png[alt="Create API key",align="center"]
For other connection options, refer to the <<client-connecting>> section.
[discrete]
=== Operations
Time to use Elasticsearch! This section walks you through the basic, and most
important, operations of Elasticsearch.
[discrete]
==== Creating an index
This is how you create the `my_index` index:
[source,js]
----
await client.indices.create({ index: 'my_index' })
----
[discrete]
==== Indexing documents
This is a simple way of indexing a document:
[source,js]
----
await client.index({
index: 'my_index',
id: 'my_document_id',
document: {
foo: 'foo',
bar: 'bar',
},
})
----
[discrete]
==== Getting documents
You can get documents by using the following code:
[source,js]
----
await client.get({
index: 'my_index',
id: 'my_document_id',
})
----
[discrete]
==== Searching documents
This is how you can create a single match query with the client:
[source,js]
----
await client.search({
query: {
match: {
foo: 'foo'
}
}
})
----
[discrete]
==== Updating documents
This is how you can update a document, for example to add a new field:
[source,js]
----
await client.update({
index: 'my_index',
id: 'my_document_id',
doc: {
foo: 'bar',
new_field: 'new value'
}
})
----
[discrete]
==== Deleting documents
[source,js]
----
await client.delete({
index: 'my_index',
id: 'my_document_id',
})
----
[discrete]
==== Deleting an index
[source,js]
----
await client.indices.delete({ index: 'my_index' })
----
[discrete]
== Further reading
* Use <<client-helpers>> for a more comfortable experience with the APIs.
* For an elaborate example of how to ingest data into Elastic Cloud,
refer to {cloud}/ec-getting-started-node-js.html[this page].

748 docs/helpers.asciidoc Normal file
View File

@ -0,0 +1,748 @@
[[client-helpers]]
== Client helpers
The client comes with a handy collection of helpers to give you a more
comfortable experience with some APIs.
CAUTION: The client helpers are experimental, and the API may change in the next
minor releases. The helpers will not work in any Node.js version lower than 10.
[discrete]
[[bulk-helper]]
=== Bulk helper
~Added~ ~in~ ~`v7.7.0`~
Running bulk requests can be complex due to the shape of the API; this helper
aims to provide a nicer developer experience around the Bulk API.
[discrete]
==== Usage
[source,js]
----
const { createReadStream } = require('fs')
const split = require('split2')
const { Client } = require('@elastic/elasticsearch')
const client = new Client({
cloud: { id: '<cloud-id>' },
auth: { apiKey: 'base64EncodedKey' }
})
const result = await client.helpers.bulk({
datasource: createReadStream('./dataset.ndjson').pipe(split()),
onDocument (doc) {
return {
index: { _index: 'my-index' }
}
}
})
console.log(result)
// {
// total: number,
// failed: number,
// retry: number,
// successful: number,
// time: number,
// bytes: number,
// aborted: boolean
// }
----
To create a new instance of the Bulk helper, access it as shown in the example
above. The configuration options are:
[cols=2*]
|===
|`datasource`
a|An array, async generator or a readable stream with the data you need to index/create/update/delete.
It can be an array of strings or objects, but also a stream of JSON strings or JavaScript objects. +
If it is a stream, we recommend using the https://www.npmjs.com/package/split2[`split2`] package, which splits the stream on new-line delimiters. +
This parameter is mandatory.
[source,js]
----
const { createReadStream } = require('fs')
const split = require('split2')
const b = client.helpers.bulk({
// if you just use split(), the data will be used as array of strings
datasource: createReadStream('./dataset.ndjson').pipe(split())
// if you need to manipulate the data, you can pass JSON.parse to split
datasource: createReadStream('./dataset.ndjson').pipe(split(JSON.parse))
})
----
|`onDocument`
a|A function that is called for each document of the datasource. Inside this function you can manipulate the document and you must return the operation you want to execute with the document. Look at the link:{ref}/docs-bulk.html[Bulk API documentation] to see the supported operations. +
This parameter is mandatory.
[source,js]
----
const b = client.helpers.bulk({
onDocument (doc) {
return {
index: { _index: 'my-index' }
}
}
})
----
|`onDrop`
a|A function that is called every time a document can't be indexed after reaching the maximum number of retries.
[source,js]
----
const b = client.helpers.bulk({
onDrop (doc) {
console.log(doc)
}
})
----
|`onSuccess`
a|A function that is called for each successful operation in the bulk request, which includes the result from Elasticsearch along with the original document that was sent, or `null` for delete operations.
[source,js]
----
const b = client.helpers.bulk({
onSuccess ({ result, document }) {
console.log(`SUCCESS: Document ${result.index._id} indexed to ${result.index._index}`)
}
})
----
|`flushBytes`
a|The size of the bulk body, in bytes, to reach before sending it. Defaults to 5 MB. +
_Default:_ `5000000`
[source,js]
----
const b = client.helpers.bulk({
flushBytes: 1000000
})
----
|`flushInterval`
a|How much time (in milliseconds) the helper waits after the last document is read before flushing the body. +
_Default:_ `30000`
[source,js]
----
const b = client.helpers.bulk({
flushInterval: 30000
})
----
|`concurrency`
a|How many requests are executed at the same time. +
_Default:_ `5`
[source,js]
----
const b = client.helpers.bulk({
concurrency: 10
})
----
|`retries`
a|How many times a document is retried before calling the `onDrop` callback. +
_Default:_ Client max retries.
[source,js]
----
const b = client.helpers.bulk({
retries: 3
})
----
|`wait`
a|How much time to wait before retrying, in milliseconds. +
_Default:_ 5000.
[source,js]
----
const b = client.helpers.bulk({
wait: 3000
})
----
|`refreshOnCompletion`
a|If `true`, at the end of the bulk operation it runs a refresh on all indices or on the specified indices. +
_Default:_ false.
[source,js]
----
const b = client.helpers.bulk({
refreshOnCompletion: true
// or
refreshOnCompletion: 'index-name'
})
----
|===
[discrete]
==== Supported operations
[discrete]
===== Index
[source,js]
----
client.helpers.bulk({
datasource: myDatasource,
onDocument (doc) {
return {
index: { _index: 'my-index' }
}
}
})
----
[discrete]
===== Create
[source,js]
----
client.helpers.bulk({
datasource: myDatasource,
onDocument (doc) {
return {
create: { _index: 'my-index', _id: doc.id }
}
}
})
----
[discrete]
===== Update
[source,js]
----
client.helpers.bulk({
datasource: myDatasource,
onDocument (doc) {
// Note that the update operation requires you to return
// an array, where the first element is the action, while
// the second is the document options
return [
{ update: { _index: 'my-index', _id: doc.id } },
{ doc_as_upsert: true }
]
}
})
----
[discrete]
===== Delete
[source,js]
----
client.helpers.bulk({
datasource: myDatasource,
onDocument (doc) {
return {
delete: { _index: 'my-index', _id: doc.id }
}
}
})
----
[discrete]
==== Abort a bulk operation
If needed, you can abort a bulk operation at any time. The bulk helper returns a
https://promisesaplus.com/[thenable], which has an `abort` method.
NOTE: The abort method stops the execution of the bulk operation, but if you
are using a concurrency higher than one, the operations that are already running
will not be stopped.
[source,js]
----
const { createReadStream } = require('fs')
const split = require('split2')
const { Client } = require('@elastic/elasticsearch')
const client = new Client({
cloud: { id: '<cloud-id>' },
auth: { apiKey: 'base64EncodedKey' }
})
const b = client.helpers.bulk({
datasource: createReadStream('./dataset.ndjson').pipe(split()),
onDocument (doc) {
return {
index: { _index: 'my-index' }
}
},
onDrop (doc) {
b.abort()
}
})
console.log(await b)
----
[discrete]
==== Passing custom options to the Bulk API
You can pass any option supported by the
link:{ref}/docs-bulk.html#docs-bulk-api-query-params[Bulk API] to the helper, and the
helper uses those options in conjunction with the Bulk API call.
[source,js]
----
const result = await client.helpers.bulk({
datasource: [...],
onDocument (doc) {
return {
index: { _index: 'my-index' }
}
},
pipeline: 'my-pipeline'
})
----
[discrete]
==== Usage with an async generator
[source,js]
----
const { Client } = require('@elastic/elasticsearch')
async function * generator () {
const dataset = [
{ user: 'jon', age: 23 },
{ user: 'arya', age: 18 },
{ user: 'tyrion', age: 39 }
]
for (const doc of dataset) {
yield doc
}
}
const client = new Client({
cloud: { id: '<cloud-id>' },
auth: { apiKey: 'base64EncodedKey' }
})
const result = await client.helpers.bulk({
datasource: generator(),
onDocument (doc) {
return {
index: { _index: 'my-index' }
}
}
})
console.log(result)
----
[discrete]
==== Modifying a document before operation
~Added~ ~in~ ~`v8.8.2`~
If you need to modify documents in your datasource before they are sent to Elasticsearch, you can return an array in the `onDocument` function rather than an operation object. The first item in the array must be the operation object, and the second item must be the document or partial document object as you'd like it to be sent to Elasticsearch.
[source,js]
----
const { Client } = require('@elastic/elasticsearch')
const client = new Client({
cloud: { id: '<cloud-id>' },
auth: { apiKey: 'base64EncodedKey' }
})
const result = await client.helpers.bulk({
datasource: [...],
onDocument (doc) {
return [
{ index: { _index: 'my-index' } },
{ ...doc, favorite_color: 'mauve' },
]
}
})
console.log(result)
----
[discrete]
[[multi-search-helper]]
=== Multi search helper
~Added~ ~in~ ~`v7.8.0`~
If you send search requests at a high rate, this helper might be useful
for you. It uses the multi search API under the hood to batch the requests
and improve the overall performance of your application. The `result` exposes a
`documents` property as well, which allows you to access the hits
sources directly.
[discrete]
==== Usage
[source,js]
----
const { Client } = require('@elastic/elasticsearch')
const client = new Client({
cloud: { id: '<cloud-id>' },
auth: { apiKey: 'base64EncodedKey' }
})
const m = client.helpers.msearch()
m.search(
{ index: 'stackoverflow' },
{ query: { match: { title: 'javascript' } } }
)
.then(result => console.log(result.body)) // or result.documents
.catch(err => console.error(err))
----
To create a new instance of the multi search (msearch) helper, you should access
it as shown in the example above. The configuration options are:
[cols=2*]
|===
|`operations`
a|How many search operations should be sent in a single msearch request. +
_Default:_ `5`
[source,js]
----
const m = client.helpers.msearch({
operations: 10
})
----
|`flushInterval`
a|How much time (in milliseconds) the helper waits after the last operation is read before flushing the operations. +
_Default:_ `500`
[source,js]
----
const m = client.helpers.msearch({
flushInterval: 500
})
----
|`concurrency`
a|How many requests are executed at the same time. +
_Default:_ `5`
[source,js]
----
const m = client.helpers.msearch({
concurrency: 10
})
----
|`retries`
a|How many times an operation is retried before resolving the request. An operation is retried only in case of a 429 error. +
_Default:_ Client max retries.
[source,js]
----
const m = client.helpers.msearch({
retries: 3
})
----
|`wait`
a|How much time to wait before retrying, in milliseconds. +
_Default:_ 5000.
[source,js]
----
const m = client.helpers.msearch({
wait: 3000
})
----
|===
[discrete]
==== Stopping the msearch helper
If needed, you can stop an msearch processor at any time. The msearch helper
returns a https://promisesaplus.com/[thenable], which has a `stop` method.
If you are creating multiple msearch helper instances and using them for a
limited period of time, remember to always use the `stop` method once you have
finished using them, otherwise your application will start leaking memory.
The `stop` method accepts an optional error, which will be dispatched with every
subsequent search request.
NOTE: The stop method stops the execution of the msearch processor, but if
you are using a concurrency higher than one, the operations that are already
running will not be stopped.
[source,js]
----
const { Client } = require('@elastic/elasticsearch')
const client = new Client({
cloud: { id: '<cloud-id>' },
auth: { apiKey: 'base64EncodedKey' }
})
const m = client.helpers.msearch()
m.search(
{ index: 'stackoverflow' },
{ query: { match: { title: 'javascript' } } }
)
.then(result => console.log(result.body))
.catch(err => console.error(err))
m.search(
{ index: 'stackoverflow' },
{ query: { match: { title: 'ruby' } } }
)
.then(result => console.log(result.body))
.catch(err => console.error(err))
setImmediate(() => m.stop())
----
[discrete]
[[search-helper]]
=== Search helper
~Added~ ~in~ ~`v7.7.0`~
A simple wrapper around the search API. Instead of returning the entire `result`
object it returns only the source of the search documents. To improve
performance, this helper automatically adds `filter_path=hits.hits._source` to
the query string.
[source,js]
----
const documents = await client.helpers.search({
index: 'stackoverflow',
query: {
match: {
title: 'javascript'
}
}
})
for (const doc of documents) {
console.log(doc)
}
----
[discrete]
[[scroll-search-helper]]
=== Scroll search helper
~Added~ ~in~ ~`v7.7.0`~
This helper offers a simple and intuitive way to use the scroll search API.
Once called, it returns an
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/async_function[async iterator]
which can be used in conjunction with a for-await...of loop. It automatically handles
the `429` error and uses the `maxRetries` option of the client.
[source,js]
----
const scrollSearch = client.helpers.scrollSearch({
index: 'stackoverflow',
query: {
match: {
title: 'javascript'
}
}
})
for await (const result of scrollSearch) {
console.log(result)
}
----
[discrete]
==== Clear a scroll search
If needed, you can clear a scroll search by calling `result.clear()`:
[source,js]
----
for await (const result of scrollSearch) {
if (condition) {
await result.clear()
}
}
----
[discrete]
==== Quickly getting the documents
If you only need the documents from the result of a scroll search, you can
access them via `result.documents`:
[source,js]
----
for await (const result of scrollSearch) {
console.log(result.documents)
}
----
[discrete]
[[scroll-documents-helper]]
=== Scroll documents helper
~Added~ ~in~ ~`v7.7.0`~
It works in the same way as the scroll search helper, but it returns only the
documents instead. Note that every loop cycle returns a single document, and you
can't use the `clear` method. To improve performance, this helper
automatically adds `filter_path=hits.hits._source` to the query string.
[source,js]
----
const scrollSearch = client.helpers.scrollDocuments({
index: 'stackoverflow',
query: {
match: {
title: 'javascript'
}
}
})
for await (const doc of scrollSearch) {
console.log(doc)
}
----
[discrete]
[[esql-helper]]
=== ES|QL helper
ES|QL queries can return their results in {ref}/esql-rest.html#esql-rest-format[several formats].
[discrete]
==== Usage
[discrete]
===== `toRecords`
~Added~ ~in~ ~`v8.14.0`~
The default JSON format returned by ES|QL queries contains arrays of values
for each row, with column names and types returned separately:
[source,json]
----
{
"columns": [
{ "name": "@timestamp", "type": "date" },
{ "name": "client_ip", "type": "ip" },
{ "name": "event_duration", "type": "long" },
{ "name": "message", "type": "keyword" }
],
"values": [
[
"2023-10-23T12:15:03.360Z",
"172.21.2.162",
3450233,
"Connected to 10.1.0.3"
],
[
"2023-10-23T12:27:28.948Z",
"172.21.2.113",
2764889,
"Connected to 10.1.0.2"
]
]
}
----
In many cases, it's preferable to operate on an array of objects, one object per row,
rather than an array of arrays. The ES|QL `toRecords` helper converts row data into objects.
[source,js]
----
await client.helpers
.esql({ query: 'FROM sample_data | LIMIT 2' })
.toRecords()
// =>
// {
// "columns": [
// { "name": "@timestamp", "type": "date" },
// { "name": "client_ip", "type": "ip" },
// { "name": "event_duration", "type": "long" },
// { "name": "message", "type": "keyword" }
// ],
// "records": [
// {
// "@timestamp": "2023-10-23T12:15:03.360Z",
// "client_ip": "172.21.2.162",
// "event_duration": 3450233,
// "message": "Connected to 10.1.0.3"
// },
// {
// "@timestamp": "2023-10-23T12:27:28.948Z",
// "client_ip": "172.21.2.113",
// "event_duration": 2764889,
// "message": "Connected to 10.1.0.2"
// },
// ]
// }
----
In TypeScript, you can declare the type that `toRecords` returns:
[source,ts]
----
type EventLog = {
'@timestamp': string,
client_ip: string,
event_duration: number,
message: string,
}
const result = await client.helpers
.esql({ query: 'FROM sample_data | LIMIT 2' })
.toRecords<EventLog>()
----
[discrete]
===== `toArrowReader`
~Added~ ~in~ ~`v8.16.0`~
ES|QL can return results in multiple binary formats, including https://arrow.apache.org/[Apache Arrow]'s streaming format. Because it is a very efficient format to read, it can be valuable for performing high-performance in-memory analytics. And, because the response is streamed as batches of records, it can be used to produce aggregations and other calculations on larger-than-memory data sets.
`toArrowReader` returns a https://arrow.apache.org/docs/js/classes/Arrow_dom.RecordBatchReader.html[`RecordBatchStreamReader`].
[source,ts]
----
const reader = await client.helpers
.esql({ query: 'FROM sample_data' })
.toArrowReader()
// print each record as JSON
for (const recordBatch of reader) {
for (const record of recordBatch) {
console.log(record.toJSON())
}
}
----
[discrete]
===== `toArrowTable`
~Added~ ~in~ ~`v8.16.0`~
If you would like to pull the entire data set in Arrow format but without streaming, you can use the `toArrowTable` helper to get a https://arrow.apache.org/docs/js/classes/Arrow_dom.Table.html[Table] back instead.
[source,ts]
----
const table = await client.helpers
.esql({ query: 'FROM sample_data' })
.toArrowTable()
console.log(table.toArray())
----

View File

Binary image file (79 KiB before and after).

View File

Binary image file (361 KiB before and after).

24 docs/index.asciidoc Normal file
View File

@ -0,0 +1,24 @@
= Elasticsearch JavaScript Client
include::{asciidoc-dir}/../../shared/versions/stack/{source_branch}.asciidoc[]
include::{asciidoc-dir}/../../shared/attributes.asciidoc[]
include::introduction.asciidoc[]
include::getting-started.asciidoc[]
include::changelog.asciidoc[]
include::installation.asciidoc[]
include::connecting.asciidoc[]
include::configuration.asciidoc[]
include::basic-config.asciidoc[]
include::advanced-config.asciidoc[]
include::child.asciidoc[]
include::testing.asciidoc[]
include::integrations.asciidoc[]
include::observability.asciidoc[]
include::transport.asciidoc[]
include::typescript.asciidoc[]
include::reference.asciidoc[]
include::examples/index.asciidoc[]
include::helpers.asciidoc[]
include::redirects.asciidoc[]
include::timeout-best-practices.asciidoc[]

116 docs/installation.asciidoc Normal file
View File

@ -0,0 +1,116 @@
[[installation]]
== Installation
This page guides you through the installation process of the client.
To install the latest version of the client, run the following command:
[source,sh]
----
npm install @elastic/elasticsearch
----
To install a specific major version of the client, run the following command:
[source,sh]
----
npm install @elastic/elasticsearch@<major>
----
To learn more about the supported major versions, please refer to the
<<js-compatibility-matrix>>.
[discrete]
[[nodejs-support]]
=== Node.js support
NOTE: The minimum supported version of Node.js is `v18`.
The client versioning follows the {stack} versioning; this means that
major, minor, and patch releases are done following a precise schedule that
often does not coincide with the https://nodejs.org/en/about/releases/[Node.js release] times.
To avoid supporting insecure and unsupported versions of Node.js, the
client *will drop support for EOL versions of Node.js between minor releases*.
Typically, as soon as a Node.js version goes into EOL, the client will continue
to support that version for at least another minor release. If you are using the client
with a version of Node.js that will soon be unsupported, you will see a warning
in your logs (the client starts logging the warning two minor releases in advance).
Unless you are *always* using a supported version of Node.js,
we recommend defining the client dependency in your
`package.json` with `~` instead of `^`. In this way, you lock the
dependency to the minor release rather than the major (for example, `~7.10.0` instead
of `^7.10.0`).
[%header,cols=3*]
|===
|Node.js Version
|Node.js EOL date
|End of support
|`8.x`
|December 2019
|`7.11` (early 2021)
|`10.x`
|April 2021
|`7.12` (mid 2021)
|`12.x`
|April 2022
|`8.2` (early 2022)
|`14.x`
|April 2023
|`8.8` (early 2023)
|`16.x`
|September 2023
|`8.11` (late 2023)
|===
[discrete]
[[js-compatibility-matrix]]
=== Compatibility matrix
Language clients are forward compatible, meaning that clients support
communicating with greater or equal minor versions of {es} without breaking. This
does not mean that the client automatically supports new features of newer {es}
versions; that is only possible after a new client version is released. For
example, an 8.12 client version won't automatically support the new features of
the 8.13 version of {es}; the 8.13 client version is required for that.
{es} language clients are only backwards compatible with default distributions,
and without guarantees.
[%header,cols=3*]
|===
|{es} Version
|Client Version
|Supported
|`8.x`
|`8.x`
|`8.x`
|`7.x`
|`7.x`
|`7.17`
|`6.x`
|`6.x`
|
|`5.x`
|`5.x`
|
|===
[discrete]
==== Browser
WARNING: There is no official support for the browser environment. It exposes
your {es} instance to everyone, which could lead to security issues. We
recommend that you write a lightweight proxy that uses this client instead;
you can see a proxy example https://github.com/elastic/elasticsearch-js/tree/master/docs/examples/proxy[here].

View File

@ -0,0 +1,8 @@
[[integrations]]
== Integrations
The Client offers the following integration options for you:
* <<observability>>
* <<transport>>
* <<typescript>>

View File

@ -1,15 +1,12 @@
---
mapped_pages:
- https://www.elastic.co/guide/en/elasticsearch/client/javascript-api/current/index.html
- https://www.elastic.co/guide/en/elasticsearch/client/javascript-api/current/introduction.html
---
[[introduction]]
== Introduction
# JavaScript [introduction]
This is the official Node.js client for {{es}}. This page gives a quick overview of the features of the client.
This is the official Node.js client for {es}. This page gives a quick overview
of the features of the client.
## Features [_features]
[discrete]
=== Features
* One-to-one mapping with REST API.
* Generalized, pluggable architecture.
@ -20,35 +17,45 @@ This is the official Node.js client for {{es}}. This page gives a quick overview
* TypeScript support out of the box.
### Install multiple versions [_install_multiple_versions]
[discrete]
==== Install multiple versions
If you are using multiple versions of {{es}}, you need to use multiple versions of the client as well. In the past, installing multiple versions of the same package was not possible, but with `npm v6.9`, you can do it via aliasing.
If you are using multiple versions of {es}, you need to use multiple versions of
the client as well. In the past, installing multiple versions of the same
package was not possible, but with `npm v6.9`, you can do it via aliasing.
To install a different version of the client, run the following command:
```sh
[source,sh]
----
npm install <alias>@npm:@elastic/elasticsearch@<version>
```
----
For example, if you need to install `7.x` and `6.x`, run the following commands:
```sh
[source,sh]
----
npm install es6@npm:@elastic/elasticsearch@6
npm install es7@npm:@elastic/elasticsearch@7
```
----
Your `package.json` will look similar to the following example:
```json
[source,json]
----
"dependencies": {
"es6": "npm:@elastic/elasticsearch@^6.7.0",
"es7": "npm:@elastic/elasticsearch@^7.0.0"
}
```
----
Require the packages from your code by using the alias you have defined.
```js
[source,js]
----
const { Client: Client6 } = require('es6')
const { Client: Client7 } = require('es7')
@ -63,16 +70,15 @@ const client7 = new Client7({
client6.info().then(console.log, console.log)
client7.info().then(console.log, console.log)
```
----
Finally, if you want to install the client for the next version of {{es}} (the one that lives in the {{es}} main branch), use the following command:
```sh
Finally, if you want to install the client for the next version of {es} (the one
that lives in the {es} main branch), use the following command:
[source,sh]
----
npm install esmain@github:elastic/elasticsearch-js
```
::::{warning}
This command installs the main branch of the client which is not considered stable.
::::
----
WARNING: This command installs the main branch of the client which is not
considered stable.

Some files were not shown because too many files have changed in this diff Show More