Compare commits

..

54 Commits

Author SHA1 Message Date
f33aa8cccd Publish 9.0 tags as prereleases (#2524) 2024-12-05 14:30:28 -06:00
7cb973a206 Update changelog for 9.0.0 body param removal (#2523) 2024-12-05 14:12:59 -06:00
a4315a905e Auto-generated code for main (#2522) 2024-12-05 13:28:38 -06:00
6447fc10bf Drop support for body param in helpers and tests (#2521)
* Update tests and bulk helper to stop using body param

* Update compatible-with content-type header for 9.0
2024-12-05 13:03:19 -06:00
e9c2f8b0af Bump version to 9.0.0-alpha.1 (#2516)
* Bump version to 9.0.0-alpha.1

* Update npm publish workflow for 9.0.0 alpha
2024-12-05 10:40:34 -06:00
15b9ee2f06 Codegen for 8.x clients should use the 8.x generator branch (#2515) 2024-12-05 10:37:46 -06:00
e30e964131 Update stale rules to keep tracking issues open (#2514) 2024-12-03 11:18:16 -06:00
0f187f47c4 Disable Dockerfile Node.js upgrades by Renovate (#2509) 2024-12-02 12:17:45 -06:00
101f34bd5e Update dependency into-stream to v8 (#2507)
Co-authored-by: elastic-renovate-prod[bot] <174716857+elastic-renovate-prod[bot]@users.noreply.github.com>
2024-12-02 12:15:23 -06:00
ec0c561e36 Auto-generated code for main (#2504)
Co-authored-by: Josh Mock <joshua.mock@elastic.co>
2024-12-02 12:12:27 -06:00
c1e90b12f0 Update dependency @elastic/request-converter to v8.16.2 (#2505)
Co-authored-by: elastic-renovate-prod[bot] <174716857+elastic-renovate-prod[bot]@users.noreply.github.com>
2024-12-02 18:12:01 +00:00
5cb670256e Update dependency typescript to v5.7.2 (#2506)
Co-authored-by: elastic-renovate-prod[bot] <174716857+elastic-renovate-prod[bot]@users.noreply.github.com>
Co-authored-by: Josh Mock <joshua.mock@elastic.co>
2024-12-02 18:07:44 +00:00
86f488f68f Update dependency @types/node to v22.10.1 (#2499)
Co-authored-by: elastic-renovate-prod[bot] <174716857+elastic-renovate-prod[bot]@users.noreply.github.com>
2024-12-02 12:03:41 -06:00
6009fab7fe Update changelog to include 8.16.2 and 8.15.3 (#2490) 2024-11-21 10:40:58 -06:00
26ae260058 Ignore tap artifacts (#2487) 2024-11-21 10:14:17 -06:00
fbbbece711 Add docstrings for Client class and related properties (#2484) 2024-11-20 14:43:36 -06:00
a30c3dca2d Add changelog for 8.16.1 (#2479) 2024-11-18 13:32:52 -06:00
36cfacc409 Fix ECMAScript import (#2475) 2024-11-18 12:14:58 -06:00
6dc83cd33e Auto-generated code for main (#2473)
Co-authored-by: Josh Mock <joshua.mock@elastic.co>
2024-11-18 11:29:36 -06:00
7c7ce29127 Update dependency @elastic/request-converter to v8.16.1 (#2469)
Co-authored-by: elastic-renovate-prod[bot] <174716857+elastic-renovate-prod[bot]@users.noreply.github.com>
2024-11-18 11:13:38 -06:00
2b890af355 Update integration test automation branches (#2463) 2024-11-14 09:37:36 -06:00
421f953b00 Upgrade ts-standard (#2460) 2024-11-13 10:42:33 -06:00
c5e4107181 Update peter-evans/create-pull-request action to v7 (#2456)
Co-authored-by: elastic-renovate-prod[bot] <174716857+elastic-renovate-prod[bot]@users.noreply.github.com>
2024-11-12 13:44:49 -06:00
5880c84c13 Update dependency typescript to v5 (#2455)
Co-authored-by: elastic-renovate-prod[bot] <174716857+elastic-renovate-prod[bot]@users.noreply.github.com>
2024-11-12 17:34:24 +00:00
290639d168 Update dependency chai to v5 (#2453)
Co-authored-by: elastic-renovate-prod[bot] <174716857+elastic-renovate-prod[bot]@users.noreply.github.com>
2024-11-12 11:27:52 -06:00
0b90613694 Address feedback and add clarity (#2449) 2024-11-12 10:51:18 -06:00
1ad057abcc Update dependency @types/split2 to v4 (#2438)
Co-authored-by: elastic-renovate-prod[bot] <174716857+elastic-renovate-prod[bot]@users.noreply.github.com>
2024-11-11 11:51:35 -06:00
44d890ec57 Update dependency @types/node to v22 (#2437)
Co-authored-by: elastic-renovate-prod[bot] <174716857+elastic-renovate-prod[bot]@users.noreply.github.com>
Co-authored-by: Josh Mock <joshua.mock@elastic.co>
2024-11-11 17:16:54 +00:00
2b2a2f03e6 Auto-generated code for main (#2439) 2024-11-11 11:05:18 -06:00
7bcd75bdb0 Add changelog for 8.15.2 (#2444) 2024-11-11 09:47:00 -06:00
2455dac4e5 Add _id to the result of helpers.search (#2432) 2024-11-06 12:20:10 -06:00
edb5563bf8 Update dependency @types/node to v18.19.64 (#2421)
Co-authored-by: elastic-renovate-prod[bot] <174716857+elastic-renovate-prod[bot]@users.noreply.github.com>
2024-11-04 22:21:49 +00:00
11939fd22c Add streaming support to Arrow helper (#2407) 2024-11-04 15:47:53 -06:00
e0c613f898 Auto-generated code for main (#2425) 2024-11-04 10:06:34 -06:00
20f2c740cd Update buildkite plugin junit-annotate to v2.5.0 (#2422)
Co-authored-by: elastic-renovate-prod[bot] <174716857+elastic-renovate-prod[bot]@users.noreply.github.com>
2024-11-04 10:01:14 -06:00
97bdca22d8 Update actions/stale action to v9 (#2423)
Co-authored-by: elastic-renovate-prod[bot] <174716857+elastic-renovate-prod[bot]@users.noreply.github.com>
2024-11-04 10:00:38 -06:00
a7123f807d Auto-generated code for main (#2384) 2024-10-28 15:12:59 -05:00
20ac2a637e Skip flaky test (#2416) 2024-10-28 11:46:10 -05:00
e287c1edd9 Don't use hash-based Docker image version (#2414) 2024-10-28 11:19:02 -05:00
90d43f4f28 Pin dependencies (#2408)
Co-authored-by: elastic-renovate-prod[bot] <174716857+elastic-renovate-prod[bot]@users.noreply.github.com>
2024-10-28 11:03:46 -05:00
572927b4f1 Don't generate coverage during standard unit test run (#2404) 2024-10-24 11:55:11 -05:00
86b4d4e2f9 Upgrade tap to latest (#2400) 2024-10-24 11:33:46 -05:00
8e79bf847a Enable Renovate (#2398) 2024-10-23 11:38:08 -05:00
cef328c93d Prep 8.16.0 (#2396) 2024-10-23 08:48:04 -05:00
c3247d0c66 Basic helper for ES|QL's Apache Arrow output format (#2391) 2024-10-22 15:00:18 -05:00
e9fdcb0647 Add doc about timeout best practices (#2381) 2024-10-18 11:13:41 -05:00
82acfc33a9 Respect disablePrototypePoisoningProtection option (#2380) 2024-10-16 14:19:44 -05:00
661caf8422 Update changelog for 8.15.1 (#2379) 2024-10-15 11:36:54 -05:00
3430734fe0 Auto-generated code for main (#2371)
Co-authored-by: Josh Mock <joshua.mock@elastic.co>
2024-10-14 12:14:09 -05:00
810e009202 Update Github actions to reflect security best practices (#2375)
* Update Github actions to reflect security best practices

* Upgrade @types/node
2024-10-14 11:27:22 -05:00
c274b1b32f Upgraded @types/node package to v18 (#2374) 2024-10-14 11:13:55 -05:00
428a7b023d Auto-generated code for main (#2368) 2024-09-30 13:37:11 -05:00
aad41df231 Upgrade transport to 8.8.1 (#2366) 2024-09-26 13:29:27 -05:00
34704b2e5c Auto-generated code for main (#2362) 2024-09-23 14:54:43 -05:00
202 changed files with 7460 additions and 40305 deletions

View File

@ -1,20 +1,17 @@
---
agents:
provider: "gcp"
image: family/core-ubuntu-2204
memory: "8G"
cpu: "2"
steps:
- label: ":elasticsearch: :javascript: ES JavaScript ({{ matrix.nodejs }})"
- label: ":elasticsearch: :javascript: ES JavaScript ({{ matrix.nodejs }}) Test Suite: {{ matrix.suite }}"
agents:
provider: "gcp"
env:
NODE_VERSION: "{{ matrix.nodejs }}"
TEST_SUITE: "platinum"
STACK_VERSION: 9.0.0
GITHUB_TOKEN_PATH: "secret/ci/elastic-elasticsearch-js/github-token"
TEST_ES_STACK: "1"
TEST_SUITE: "{{ matrix.suite }}"
STACK_VERSION: 8.16.0
matrix:
setup:
suite:
- "free"
- "platinum"
nodejs:
- "18"
- "20"
@ -24,8 +21,11 @@ steps:
- wait: ~
continue_on_failure: true
- label: ":junit: Test results"
agents:
provider: "gcp"
image: family/core-ubuntu-2204
plugins:
- junit-annotate#v2.4.1:
- junit-annotate#v2.5.0:
artifacts: "junit-output/junit-*.xml"
job-uuid-file-pattern: "junit-(.*).xml"
fail-build-on-error: true

View File

@ -10,29 +10,22 @@ export NODE_VERSION=${NODE_VERSION:-18}
echo "--- :javascript: Building Docker image"
docker build \
--file "$script_path/Dockerfile" \
--tag elastic/elasticsearch-js \
--build-arg NODE_VERSION="$NODE_VERSION" \
.
--file "$script_path/Dockerfile" \
--tag elastic/elasticsearch-js \
--build-arg NODE_VERSION="$NODE_VERSION" \
.
GITHUB_TOKEN=$(vault read -field=token "$GITHUB_TOKEN_PATH")
export GITHUB_TOKEN
echo "--- :javascript: Running tests"
echo "--- :javascript: Running $TEST_SUITE tests"
mkdir -p "$repo/junit-output"
docker run \
--network="${network_name}" \
--env TEST_ES_STACK \
--env STACK_VERSION \
--env GITHUB_TOKEN \
--env "TEST_ES_SERVER=${elasticsearch_url}" \
--env "ELASTIC_PASSWORD=${elastic_password}" \
--env "ELASTIC_USER=elastic" \
--env "BUILDKITE=true" \
--volume "/usr/src/app/node_modules" \
--volume "$repo:/usr/src/app" \
--volume "$repo/junit-output:/junit-output" \
--name elasticsearch-js \
--rm \
elastic/elasticsearch-js \
bash -c "npm run test:integration; [ -f ./report-junit.xml ] && mv ./report-junit.xml /junit-output/junit-$BUILDKITE_JOB_ID.xml || echo 'No JUnit artifact found'"
--network="${network_name}" \
--env "TEST_ES_SERVER=${elasticsearch_url}" \
--env "ELASTIC_PASSWORD=${elastic_password}" \
--env "TEST_SUITE=${TEST_SUITE}" \
--env "ELASTIC_USER=elastic" \
--env "BUILDKITE=true" \
--volume "$repo/junit-output:/junit-output" \
--name elasticsearch-js \
--rm \
elastic/elasticsearch-js \
bash -c "npm run test:integration; [ -f ./$TEST_SUITE-report-junit.xml ] && mv ./$TEST_SUITE-report-junit.xml /junit-output/junit-$BUILDKITE_JOB_ID.xml || echo 'No JUnit artifact found'"

View File

@ -6,6 +6,3 @@ elasticsearch
lib
junit-output
.tap
rest-api-spec
yaml-rest-tests
generated-tests

.github/make.sh vendored
View File

@ -65,7 +65,7 @@ codegen)
if [ -v "$VERSION" ] || [[ -z "$VERSION" ]]; then
# fall back to branch name or `main` if no VERSION is set
branch_name=$(git rev-parse --abbrev-ref HEAD)
if [[ "$branch_name" =~ ^[0-9]+\.([0-9]+|x) ]]; then
if [[ "$branch_name" =~ ^[0-9]+\.[0-9]+ ]]; then
echo -e "\033[36;1mTARGET: codegen -> No VERSION argument found, using branch name: \`$branch_name\`\033[0m"
VERSION="$branch_name"
else
@ -176,7 +176,7 @@ else
--rm \
$product \
/bin/bash -c "cd /usr/src && \
git clone --branch $GENERATOR_BRANCH https://$CLIENTS_GITHUB_TOKEN@github.com/elastic/elastic-client-generator-js.git && \
git clone https://$CLIENTS_GITHUB_TOKEN@github.com/elastic/elastic-client-generator-js.git && \
mkdir -p /usr/src/elastic-client-generator-js/output && \
cd /usr/src/elasticsearch-js && \
node .buildkite/make.mjs --task $TASK ${TASK_ARGS[*]}"

.github/stale.yml vendored
View File

@ -1,26 +0,0 @@
# Number of days of inactivity before an issue becomes stale
daysUntilStale: 15
# Number of days of inactivity before a stale issue is closed
daysUntilClose: 7
# Issues with these labels will never be considered stale
exemptLabels:
- "discussion"
- "feature request"
- "bug"
- "todo"
- "good first issue"
# Label to use when marking an issue as stale
staleLabel: stale
# Comment to post when marking an issue as stale. Set to `false` to disable
markComment: |
We understand that this might be important for you, but this issue has been automatically marked as stale because it has not had recent activity either from our end or yours.
It will be closed if no further activity occurs, please write a comment if you would like to keep this going.
Note: in the past months we have built a new client, that has just landed in master. If you want to open an issue or a pr for the legacy client, you should do that in https://github.com/elastic/elasticsearch-js-legacy
# Comment to post when closing a stale issue. Set to `false` to disable
closeComment: false

View File

@ -23,15 +23,18 @@ jobs:
- run: npm install -g npm
- run: npm install
- run: npm test
- run: npm publish --provenance --access public
- run: npm publish --provenance --access public --tag alpha
env:
NODE_AUTH_TOKEN: ${{ secrets.NPM_TOKEN }}
- run: |
- name: Publish version on GitHub
run: |
version=$(jq -r .version package.json)
gh release create \
-n "[Changelog](https://www.elastic.co/guide/en/elasticsearch/client/javascript-api/$BRANCH_NAME/changelog-client.html)" \
-n "This is a 9.0.0 pre-release alpha. Changes may not be stable." \
--latest=false \
--prerelease \
--target "$BRANCH_NAME" \
-t "v$version" \
--title "v$version" \
"v$version"
env:
BRANCH_NAME: ${{ github.event.inputs.branch }}

View File

@ -42,7 +42,7 @@ jobs:
- name: Apply patch from stack to serverless
id: apply-patch
run: $GITHUB_WORKSPACE/stack/.github/workflows/serverless-patch.sh
- uses: peter-evans/create-pull-request@c5a7806660adbe173f04e3e038b0ccdcd758773c # v6
- uses: peter-evans/create-pull-request@5e914681df9dc83aa4e4905692ca88beb2f9e91f # v7
with:
token: ${{ secrets.GH_TOKEN }}
path: serverless

View File

@ -1,21 +1,21 @@
---
name: 'Close stale issues and PRs'
name: "Close stale issues and PRs"
on:
schedule:
- cron: '30 1 * * *'
- cron: "30 1 * * *"
jobs:
stale:
runs-on: ubuntu-latest
steps:
- uses: actions/stale@1160a2240286f5da8ec72b1c0816ce2481aabf84 # v8
- uses: actions/stale@28ca1036281a5e5922ead5184a1bbf96e5fc984e # v9
with:
stale-issue-label: stale
stale-pr-label: stale
days-before-stale: 90
days-before-close: 14
exempt-issue-labels: 'good first issue'
exempt-issue-labels: "good first issue,tracking"
close-issue-label: closed-stale
close-pr-label: closed-stale
stale-issue-message: 'This issue is stale because it has been open 90 days with no activity. Remove the `stale` label, or leave a comment, or this will be closed in 14 days.'
stale-pr-message: 'This pull request is stale because it has been open 90 days with no activity. Remove the `stale` label, or leave a comment, or this will be closed in 14 days.'
stale-issue-message: "This issue is stale because it has been open 90 days with no activity. Remove the `stale` label, or leave a comment, or this will be closed in 14 days."
stale-pr-message: "This pull request is stale because it has been open 90 days with no activity. Remove the `stale` label, or leave a comment, or this will be closed in 14 days."

.gitignore vendored
View File

@ -68,7 +68,3 @@ bun.lockb
test-results
processinfo
.tap
rest-api-spec
yaml-rest-tests
generated-tests
schema

View File

@ -74,6 +74,3 @@ CONTRIBUTING.md
src
bun.lockb
.tap
rest-api-spec
yaml-rest-tests
generated-tests

View File

@ -167,19 +167,16 @@ const client = new Client({
----
|`nodeFilter`
a|`function` - Takes a `Connection` and returns `true` if it can be sent a request, otherwise `false`. +
a|`function` - Filters out nodes that should not be used for a request. +
_Default:_
[source,js]
----
function defaultNodeFilter (conn) {
if (conn.roles != null) {
if (
// avoid master-only nodes
conn.roles.master &&
!conn.roles.data &&
!conn.roles.ingest &&
!conn.roles.ml
) return false
function defaultNodeFilter (node) {
// avoid master only nodes
if (node.roles.master === true &&
node.roles.data === false &&
node.roles.ingest === false) {
return false
}
return true
}
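For illustration, here is a minimal sketch of supplying a custom `nodeFilter` when instantiating the client; the node URL and the data-role check are assumptions made for this example, not part of the change above:
[source,js]
----
const { Client } = require('@elastic/elasticsearch')

const client = new Client({
  node: 'http://localhost:9200',
  // hypothetical filter: only send requests to nodes with the data role
  nodeFilter (node) {
    return node.roles != null && node.roles.data === true
  }
})
----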

View File

@ -2,69 +2,15 @@
== Release notes
[discrete]
=== 8.18.2
=== 9.0.0
[discrete]
==== Fixes
==== Breaking changes
[discrete]
===== Ensure Apache Arrow ES|QL helper uses async iterator
===== Drop support for deprecated `body` parameter
The `esql.toArrowReader()` helper function was trying to return `RecordBatchStreamReader`, a synchronous iterator, despite the fact that the `apache-arrow` package was, in most cases, automatically coercing it to `AsyncRecordBatchStreamReader`, its asynchronous counterpart. It is now always returned as an async iterator.
[discrete]
=== 8.18.1
[discrete]
==== Fixes
[discrete]
===== Fix broken node roles and node filter
The docs note a `nodeFilter` option on the client that will, by default, filter the nodes based on any `roles` values that are set at instantiation. At some point, this functionality was partially disabled. This brings the feature back, ensuring that it matches what the documentation has said it does all along.
[discrete]
=== 8.18.0
[discrete]
==== Features
[discrete]
===== Support for Elasticsearch `v8.18`
You can find all the API changes
https://www.elastic.co/guide/en/elasticsearch/reference/8.18/release-notes-8.18.0.html[here].
[discrete]
==== Fixes
[discrete]
===== Improved Cloud ID parsing
When using a Cloud ID as the `cloud` parameter to instantiate the client, that ID was assumed to be in the correct format. New assertions have been added to verify that format and throw a `ConfigurationError` if it is invalid. See https://github.com/elastic/elasticsearch-js/issues/2694[#2694].
[discrete]
=== 8.17.0
[discrete]
==== Features
[discrete]
===== Support for Elasticsearch `v8.17`
You can find all the API changes
https://www.elastic.co/guide/en/elasticsearch/reference/8.17/release-notes-8.17.0.html[here].
[discrete]
=== 8.16.3
[discrete]
==== Fixes
[discrete]
===== Improved support for Elasticsearch `v8.16`
Updated TypeScript types based on fixes and improvements to the Elasticsearch specification.
In 8.0, the top-level `body` parameter that was available on all API functions <<remove-body-key,was deprecated>>. In 9.0 this property is completely removed.
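As an illustration of the migration, a minimal sketch assuming a simple search request (the index name and query are placeholders):
[source,js]
----
// 8.x: query wrapped in the deprecated `body` parameter
const legacy = await client.search({
  index: 'my-index',
  body: {
    query: { match: { title: 'test' } }
  }
})

// 9.0: the same options are passed at the top level of the request
const current = await client.search({
  index: 'my-index',
  query: { match: { title: 'test' } }
})
----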
[discrete]
=== 8.16.2
@ -710,6 +656,7 @@ ac.abort()
----
[discrete]
[[remove-body-key]]
===== Remove the body key from the request
*Breaking: Yes* | *Migration effort: Small*

View File

@ -1,11 +0,0 @@
// This file is autogenerated, DO NOT EDIT
// Use `node scripts/generate-docs-examples.js` to generate the docs examples
[source, js]
----
const response = await client.indices.getDataStream({
name: "my-data-stream",
filter_path: "data_streams.indices.index_name",
});
console.log(response);
----

View File

@ -1,10 +0,0 @@
// This file is autogenerated, DO NOT EDIT
// Use `node scripts/generate-docs-examples.js` to generate the docs examples
[source, js]
----
const response = await client.indices.getMapping({
index: "kibana_sample_data_ecommerce",
});
console.log(response);
----

View File

@ -1,42 +0,0 @@
// This file is autogenerated, DO NOT EDIT
// Use `node scripts/generate-docs-examples.js` to generate the docs examples
[source, js]
----
const response = await client.indices.create({
index: "my-rank-vectors-bit",
mappings: {
properties: {
my_vector: {
type: "rank_vectors",
element_type: "bit",
},
},
},
});
console.log(response);
const response1 = await client.bulk({
index: "my-rank-vectors-bit",
refresh: "true",
operations: [
{
index: {
_id: "1",
},
},
{
my_vector: [127, -127, 0, 1, 42],
},
{
index: {
_id: "2",
},
},
{
my_vector: "8100012a7f",
},
],
});
console.log(response1);
----

View File

@ -5,8 +5,10 @@
----
const response = await client.cluster.putSettings({
persistent: {
"cluster.routing.allocation.disk.watermark.low": "90%",
"cluster.routing.allocation.disk.watermark.high": "95%",
"cluster.routing.allocation.disk.watermark.low": "100gb",
"cluster.routing.allocation.disk.watermark.high": "50gb",
"cluster.routing.allocation.disk.watermark.flood_stage": "10gb",
"cluster.info.update.interval": "1m",
},
});
console.log(response);

View File

@ -1,20 +0,0 @@
// This file is autogenerated, DO NOT EDIT
// Use `node scripts/generate-docs-examples.js` to generate the docs examples
[source, js]
----
const response = await client.transport.request({
method: "POST",
path: "/_inference/chat_completion/openai-completion/_stream",
body: {
model: "gpt-4o",
messages: [
{
role: "user",
content: "What is Elastic?",
},
],
},
});
console.log(response);
----

View File

@ -1,11 +0,0 @@
// This file is autogenerated, DO NOT EDIT
// Use `node scripts/generate-docs-examples.js` to generate the docs examples
[source, js]
----
const response = await client.indices.addBlock({
index: ".ml-anomalies-custom-example",
block: "read_only",
});
console.log(response);
----

View File

@ -6,15 +6,14 @@
const response = await client.search({
index: "test-index",
query: {
match: {
my_semantic_field: "Which country is Paris in?",
},
},
highlight: {
fields: {
my_semantic_field: {
number_of_fragments: 2,
order: "score",
nested: {
path: "inference_field.inference.chunks",
query: {
sparse_vector: {
field: "inference_field.inference.chunks.embeddings",
inference_id: "my-inference-id",
query: "mountain lake",
},
},
},
},

View File

@ -10,7 +10,7 @@ const response = await client.ingest.putPipeline({
{
attachment: {
field: "data",
remove_binary: true,
remove_binary: false,
},
},
],

View File

@ -1,19 +0,0 @@
// This file is autogenerated, DO NOT EDIT
// Use `node scripts/generate-docs-examples.js` to generate the docs examples
[source, js]
----
const response = await client.security.queryRole({
query: {
bool: {
must_not: {
term: {
"metadata._reserved": true,
},
},
},
},
sort: ["name"],
});
console.log(response);
----

View File

@ -14,7 +14,6 @@ const response = await client.indices.putSettings({
"index.search.slowlog.threshold.fetch.info": "800ms",
"index.search.slowlog.threshold.fetch.debug": "500ms",
"index.search.slowlog.threshold.fetch.trace": "200ms",
"index.search.slowlog.include.user": true,
},
});
console.log(response);

View File

@ -1,67 +0,0 @@
// This file is autogenerated, DO NOT EDIT
// Use `node scripts/generate-docs-examples.js` to generate the docs examples
[source, js]
----
const response = await client.indices.create({
index: "my-rank-vectors-bit",
mappings: {
properties: {
my_vector: {
type: "rank_vectors",
element_type: "bit",
},
},
},
});
console.log(response);
const response1 = await client.bulk({
index: "my-rank-vectors-bit",
refresh: "true",
operations: [
{
index: {
_id: "1",
},
},
{
my_vector: [127, -127, 0, 1, 42],
},
{
index: {
_id: "2",
},
},
{
my_vector: "8100012a7f",
},
],
});
console.log(response1);
const response2 = await client.search({
index: "my-rank-vectors-bit",
query: {
script_score: {
query: {
match_all: {},
},
script: {
source: "maxSimDotProduct(params.query_vector, 'my_vector')",
params: {
query_vector: [
[
0.35, 0.77, 0.95, 0.15, 0.11, 0.08, 0.58, 0.06, 0.44, 0.52, 0.21,
0.62, 0.65, 0.16, 0.64, 0.39, 0.93, 0.06, 0.93, 0.31, 0.92, 0,
0.66, 0.86, 0.92, 0.03, 0.81, 0.31, 0.2, 0.92, 0.95, 0.64, 0.19,
0.26, 0.77, 0.64, 0.78, 0.32, 0.97, 0.84,
],
],
},
},
},
},
});
console.log(response2);
----

View File

@ -1,11 +0,0 @@
// This file is autogenerated, DO NOT EDIT
// Use `node scripts/generate-docs-examples.js` to generate the docs examples
[source, js]
----
const response = await client.indices.addBlock({
index: ".ml-anomalies-custom-example",
block: "write",
});
console.log(response);
----

View File

@ -1,26 +0,0 @@
// This file is autogenerated, DO NOT EDIT
// Use `node scripts/generate-docs-examples.js` to generate the docs examples
[source, js]
----
const response = await client.search({
index: "my-rank-vectors-float",
query: {
script_score: {
query: {
match_all: {},
},
script: {
source: "maxSimDotProduct(params.query_vector, 'my_vector')",
params: {
query_vector: [
[0.5, 10, 6],
[-0.5, 10, 10],
],
},
},
},
},
});
console.log(response);
----

View File

@ -1,35 +0,0 @@
// This file is autogenerated, DO NOT EDIT
// Use `node scripts/generate-docs-examples.js` to generate the docs examples
[source, js]
----
const response = await client.ingest.putPipeline({
id: "attachment",
description: "Extract attachment information including original binary",
processors: [
{
attachment: {
field: "data",
remove_binary: false,
},
},
],
});
console.log(response);
const response1 = await client.index({
index: "my-index-000001",
id: "my_id",
pipeline: "attachment",
document: {
data: "e1xydGYxXGFuc2kNCkxvcmVtIGlwc3VtIGRvbG9yIHNpdCBhbWV0DQpccGFyIH0=",
},
});
console.log(response1);
const response2 = await client.get({
index: "my-index-000001",
id: "my_id",
});
console.log(response2);
----

View File

@ -1,24 +0,0 @@
// This file is autogenerated, DO NOT EDIT
// Use `node scripts/generate-docs-examples.js` to generate the docs examples
[source, js]
----
const response = await client.indices.create({
index: "test-index",
query: {
match: {
my_field: "Which country is Paris in?",
},
},
highlight: {
fields: {
my_field: {
type: "semantic",
number_of_fragments: 2,
order: "score",
},
},
},
});
console.log(response);
----

View File

@ -4,10 +4,9 @@
[source, js]
----
const response = await client.indices.putSettings({
index: "*",
index: "my-index-000001",
settings: {
"index.indexing.slowlog.include.user": true,
"index.indexing.slowlog.threshold.index.warn": "30s",
},
});
console.log(response);

View File

@ -1,23 +0,0 @@
// This file is autogenerated, DO NOT EDIT
// Use `node scripts/generate-docs-examples.js` to generate the docs examples
[source, js]
----
const response = await client.indices.create({
index: "test-index",
mappings: {
properties: {
source_field: {
type: "text",
fields: {
infer_field: {
type: "semantic_text",
inference_id: ".elser-2-elasticsearch",
},
},
},
},
},
});
console.log(response);
----

View File

@ -1,28 +0,0 @@
// This file is autogenerated, DO NOT EDIT
// Use `node scripts/generate-docs-examples.js` to generate the docs examples
[source, js]
----
const response = await client.search({
index: "my-index-*",
query: {
bool: {
must: [
{
match: {
"user.id": "kimchy",
},
},
],
must_not: [
{
terms: {
_index: ["my-index-01"],
},
},
],
},
},
});
console.log(response);
----

View File

@ -3,8 +3,8 @@
[source, js]
----
const response = await client.migration.deprecations({
index: ".ml-anomalies-*",
const response = await client.indices.unfreeze({
index: "my-index-000001",
});
console.log(response);
----

View File

@ -1,31 +0,0 @@
// This file is autogenerated, DO NOT EDIT
// Use `node scripts/generate-docs-examples.js` to generate the docs examples
[source, js]
----
const response = await client.ilm.putLifecycle({
name: "my_policy",
policy: {
phases: {
hot: {
actions: {
rollover: {
max_primary_shard_size: "50gb",
},
searchable_snapshot: {
snapshot_repository: "backing_repo",
replicate_for: "14d",
},
},
},
delete: {
min_age: "28d",
actions: {
delete: {},
},
},
},
},
});
console.log(response);
----

View File

@ -4,11 +4,9 @@
[source, js]
----
const response = await client.indices.putSettings({
index: ".reindexed-v9-ml-anomalies-custom-example",
index: "my-index-000001",
settings: {
index: {
number_of_replicas: 0,
},
"index.search.slowlog.include.user": true,
},
});
console.log(response);

View File

@ -6,7 +6,6 @@
const response = await client.indices.resolveCluster({
name: "not-present,clust*:my-index*,oldcluster:*",
ignore_unavailable: "false",
timeout: "5s",
});
console.log(response);
----

View File

@ -10,7 +10,7 @@ const response = await client.ingest.putPipeline({
{
attachment: {
field: "data",
remove_binary: true,
remove_binary: false,
},
},
],

View File

@ -1,70 +0,0 @@
// This file is autogenerated, DO NOT EDIT
// Use `node scripts/generate-docs-examples.js` to generate the docs examples
[source, js]
----
const response = await client.search({
index: "movies",
size: 10,
retriever: {
rescorer: {
rescore: {
window_size: 50,
query: {
rescore_query: {
script_score: {
query: {
match_all: {},
},
script: {
source:
"cosineSimilarity(params.queryVector, 'product-vector_final_stage') + 1.0",
params: {
queryVector: [-0.5, 90, -10, 14.8, -156],
},
},
},
},
},
},
retriever: {
rrf: {
rank_window_size: 100,
retrievers: [
{
standard: {
query: {
sparse_vector: {
field: "plot_embedding",
inference_id: "my-elser-model",
query: "films that explore psychological depths",
},
},
},
},
{
standard: {
query: {
multi_match: {
query: "crime",
fields: ["plot", "title"],
},
},
},
},
{
knn: {
field: "vector",
query_vector: [10, 22, 77],
k: 10,
num_candidates: 10,
},
},
],
},
},
},
},
});
console.log(response);
----

View File

@ -1,23 +0,0 @@
// This file is autogenerated, DO NOT EDIT
// Use `node scripts/generate-docs-examples.js` to generate the docs examples
[source, js]
----
const response = await client.indices.create({
index: "my-index",
settings: {
index: {
number_of_shards: 3,
"blocks.write": true,
},
},
mappings: {
properties: {
field1: {
type: "text",
},
},
},
});
console.log(response);
----

View File

@ -0,0 +1,23 @@
// This file is autogenerated, DO NOT EDIT
// Use `node scripts/generate-docs-examples.js` to generate the docs examples
[source, js]
----
const response = await client.bulk({
index: "test-index",
operations: [
{
update: {
_id: "1",
},
},
{
doc: {
infer_field: "updated inference field",
source_field: "updated source field",
},
},
],
});
console.log(response);
----

View File

@ -1,19 +0,0 @@
// This file is autogenerated, DO NOT EDIT
// Use `node scripts/generate-docs-examples.js` to generate the docs examples
[source, js]
----
const response = await client.search({
index: ".ml-anomalies-custom-example",
size: 0,
aggs: {
job_ids: {
terms: {
field: "job_id",
size: 100,
},
},
},
});
console.log(response);
----

View File

@ -1,61 +0,0 @@
// This file is autogenerated, DO NOT EDIT
// Use `node scripts/generate-docs-examples.js` to generate the docs examples
[source, js]
----
const response = await client.search({
index: "retrievers_example",
retriever: {
linear: {
retrievers: [
{
retriever: {
standard: {
query: {
function_score: {
query: {
term: {
topic: "ai",
},
},
functions: [
{
script_score: {
script: {
source: "doc['timestamp'].value.millis",
},
},
},
],
boost_mode: "replace",
},
},
sort: {
timestamp: {
order: "asc",
},
},
},
},
weight: 2,
normalizer: "minmax",
},
{
retriever: {
knn: {
field: "vector",
query_vector: [0.23, 0.67, 0.89],
k: 3,
num_candidates: 5,
},
},
weight: 1.5,
},
],
rank_window_size: 10,
},
},
_source: false,
});
console.log(response);
----

View File

@ -1,16 +0,0 @@
// This file is autogenerated, DO NOT EDIT
// Use `node scripts/generate-docs-examples.js` to generate the docs examples
[source, js]
----
const response = await client.indices.updateAliases({
actions: [
{
remove_index: {
index: "my-index-2099.05.06-000001",
},
},
],
});
console.log(response);
----

View File

@ -1,18 +0,0 @@
// This file is autogenerated, DO NOT EDIT
// Use `node scripts/generate-docs-examples.js` to generate the docs examples
[source, js]
----
const response = await client.search({
index: "kibana_sample_data_ecommerce",
size: 0,
aggs: {
order_stats: {
stats: {
field: "taxful_total_price",
},
},
},
});
console.log(response);
----

View File

@ -6,11 +6,15 @@
const response = await client.update({
index: "test",
id: 1,
doc: {
product_price: 100,
script: {
source: "ctx._source.counter += params.count",
lang: "painless",
params: {
count: 4,
},
},
upsert: {
product_price: 50,
counter: 1,
},
});
console.log(response);

View File

@ -1,47 +0,0 @@
// This file is autogenerated, DO NOT EDIT
// Use `node scripts/generate-docs-examples.js` to generate the docs examples
[source, js]
----
const response = await client.transport.request({
method: "POST",
path: "/_inference/chat_completion/openai-completion/_stream",
body: {
messages: [
{
role: "user",
content: [
{
type: "text",
text: "What's the price of a scarf?",
},
],
},
],
tools: [
{
type: "function",
function: {
name: "get_current_price",
description: "Get the current price of a item",
parameters: {
type: "object",
properties: {
item: {
id: "123",
},
},
},
},
},
],
tool_choice: {
type: "function",
function: {
name: "get_current_price",
},
},
},
});
console.log(response);
----

View File

@ -3,18 +3,15 @@
[source, js]
----
const response = await client.search({
index: "image-index",
const response = await client.knnSearch({
index: "my-index",
knn: {
field: "image-vector",
query_vector: [-5, 9, -12],
field: "image_vector",
query_vector: [0.3, 0.1, 1.2],
k: 10,
num_candidates: 100,
rescore_vector: {
oversample: 2,
},
},
fields: ["title", "file-type"],
_source: ["name", "file_type"],
});
console.log(response);
----

View File

@ -1,18 +0,0 @@
// This file is autogenerated, DO NOT EDIT
// Use `node scripts/generate-docs-examples.js` to generate the docs examples
[source, js]
----
const response = await client.ingest.simulate({
id: "query_helper_pipeline",
docs: [
{
_source: {
content:
"artificial intelligence in medicine articles published in the last 12 months",
},
},
],
});
console.log(response);
----

View File

@ -1,16 +0,0 @@
// This file is autogenerated, DO NOT EDIT
// Use `node scripts/generate-docs-examples.js` to generate the docs examples
[source, js]
----
const response = await client.search({
index: "jinaai-index",
query: {
semantic: {
field: "content",
query: "who inspired taking care of the sea?",
},
},
});
console.log(response);
----

View File

@ -1,10 +0,0 @@
// This file is autogenerated, DO NOT EDIT
// Use `node scripts/generate-docs-examples.js` to generate the docs examples
[source, js]
----
const response = await client.indices.getSettings({
index: ".reindexed-v9-ml-anomalies-custom-example",
});
console.log(response);
----

View File

@ -4,12 +4,16 @@
[source, js]
----
const response = await client.indices.create({
index: "jinaai-index",
index: "semantic-embeddings",
mappings: {
properties: {
content: {
semantic_text: {
type: "semantic_text",
inference_id: "jinaai-embeddings",
inference_id: "my-elser-endpoint",
},
content: {
type: "text",
copy_to: "semantic_text",
},
},
},

View File

@ -11,7 +11,7 @@ const response = await client.ingest.putPipeline({
attachment: {
field: "data",
properties: ["content", "title"],
remove_binary: true,
remove_binary: false,
},
},
],

View File

@ -1,15 +0,0 @@
// This file is autogenerated, DO NOT EDIT
// Use `node scripts/generate-docs-examples.js` to generate the docs examples
[source, js]
----
const response = await client.indices.putSettings({
index: "*",
settings: {
"index.search.slowlog.include.user": true,
"index.search.slowlog.threshold.fetch.warn": "30s",
"index.search.slowlog.threshold.query.warn": "30s",
},
});
console.log(response);
----

View File

@ -1,16 +0,0 @@
// This file is autogenerated, DO NOT EDIT
// Use `node scripts/generate-docs-examples.js` to generate the docs examples
[source, js]
----
const response = await client.reindex({
wait_for_completion: "false",
source: {
index: ".ml-anomalies-custom-example",
},
dest: {
index: ".reindexed-v9-ml-anomalies-custom-example",
},
});
console.log(response);
----

View File

@ -1,24 +0,0 @@
// This file is autogenerated, DO NOT EDIT
// Use `node scripts/generate-docs-examples.js` to generate the docs examples
[source, js]
----
const response = await client.search({
index: "my-index-000001",
query: {
prefix: {
full_name: {
value: "ki",
},
},
},
highlight: {
fields: {
full_name: {
matched_fields: ["full_name._index_prefix"],
},
},
},
});
console.log(response);
----

View File

@ -1,33 +0,0 @@
// This file is autogenerated, DO NOT EDIT
// Use `node scripts/generate-docs-examples.js` to generate the docs examples
[source, js]
----
const response = await client.search({
index: "kibana_sample_data_ecommerce",
size: 0,
aggs: {
daily_sales: {
date_histogram: {
field: "order_date",
calendar_interval: "day",
},
aggs: {
daily_revenue: {
sum: {
field: "taxful_total_price",
},
},
smoothed_revenue: {
moving_fn: {
buckets_path: "daily_revenue",
window: 3,
script: "MovingFunctions.unweightedAvg(values)",
},
},
},
},
},
});
console.log(response);
----

View File

@ -0,0 +1,26 @@
// This file is autogenerated, DO NOT EDIT
// Use `node scripts/generate-docs-examples.js` to generate the docs examples
[source, js]
----
const response = await client.search({
index: "test-index",
query: {
nested: {
path: "inference_field.inference.chunks",
query: {
knn: {
field: "inference_field.inference.chunks.embeddings",
query_vector_builder: {
text_embedding: {
model_id: "my_inference_id",
model_text: "mountain lake",
},
},
},
},
},
},
});
console.log(response);
----

View File

@ -1,35 +0,0 @@
// This file is autogenerated, DO NOT EDIT
// Use `node scripts/generate-docs-examples.js` to generate the docs examples
[source, js]
----
const response = await client.search({
query: {
intervals: {
my_text: {
all_of: {
ordered: false,
max_gaps: 1,
intervals: [
{
match: {
query: "my favorite food",
max_gaps: 0,
ordered: true,
},
},
{
match: {
query: "cold porridge",
max_gaps: 4,
ordered: true,
},
},
],
},
},
},
},
});
console.log(response);
----

View File

@ -1,37 +0,0 @@
// This file is autogenerated, DO NOT EDIT
// Use `node scripts/generate-docs-examples.js` to generate the docs examples
[source, js]
----
const response = await client.search({
index: "kibana_sample_data_ecommerce",
size: 0,
aggs: {
daily_sales: {
date_histogram: {
field: "order_date",
calendar_interval: "day",
format: "yyyy-MM-dd",
},
aggs: {
revenue: {
sum: {
field: "taxful_total_price",
},
},
unique_customers: {
cardinality: {
field: "customer_id",
},
},
avg_basket_size: {
avg: {
field: "total_quantity",
},
},
},
},
},
});
console.log(response);
----

View File

@ -1,34 +0,0 @@
// This file is autogenerated, DO NOT EDIT
// Use `node scripts/generate-docs-examples.js` to generate the docs examples
[source, js]
----
const response = await client.transport.request({
method: "POST",
path: "/_inference/chat_completion/openai-completion/_stream",
body: {
messages: [
{
role: "assistant",
content: "Let's find out what the weather is",
tool_calls: [
{
id: "call_KcAjWtAww20AihPHphUh46Gd",
type: "function",
function: {
name: "get_current_weather",
arguments: '{"location":"Boston, MA"}',
},
},
],
},
{
role: "tool",
content: "The weather is cold",
tool_call_id: "call_KcAjWtAww20AihPHphUh46Gd",
},
],
},
});
console.log(response);
----

View File

@ -11,8 +11,6 @@ const response = await client.indices.putSettings({
"index.indexing.slowlog.threshold.index.debug": "2s",
"index.indexing.slowlog.threshold.index.trace": "500ms",
"index.indexing.slowlog.source": "1000",
"index.indexing.slowlog.reformat": true,
"index.indexing.slowlog.include.user": true,
},
});
console.log(response);

View File

@ -7,14 +7,14 @@ const response = await client.indices.create({
index: "test-index",
mappings: {
properties: {
infer_field: {
type: "semantic_text",
inference_id: "my-elser-endpoint",
},
source_field: {
type: "text",
copy_to: "infer_field",
},
infer_field: {
type: "semantic_text",
inference_id: ".elser-2-elasticsearch",
},
},
},
});

View File

@ -1,12 +0,0 @@
// This file is autogenerated, DO NOT EDIT
// Use `node scripts/generate-docs-examples.js` to generate the docs examples
[source, js]
----
const response = await client.esql.query({
query:
'\nFROM library\n| EVAL year = DATE_EXTRACT("year", release_date)\n| WHERE page_count > ? AND match(author, ?, {"minimum_should_match": ?})\n| LIMIT 5\n',
params: [300, "Frank Herbert", 2],
});
console.log(response);
----

View File

@ -3,8 +3,8 @@
[source, js]
----
const response = await client.indices.getAlias({
index: ".ml-anomalies-custom-example",
const response = await client.security.queryRole({
sort: ["name"],
});
console.log(response);
----

View File

@ -1,39 +0,0 @@
// This file is autogenerated, DO NOT EDIT
// Use `node scripts/generate-docs-examples.js` to generate the docs examples
[source, js]
----
const response = await client.search({
index: "kibana_sample_data_ecommerce",
size: 0,
aggs: {
categories: {
terms: {
field: "category.keyword",
size: 5,
order: {
total_revenue: "desc",
},
},
aggs: {
total_revenue: {
sum: {
field: "taxful_total_price",
},
},
avg_order_value: {
avg: {
field: "taxful_total_price",
},
},
total_items: {
sum: {
field: "total_quantity",
},
},
},
},
},
});
console.log(response);
----

View File

@ -5,11 +5,16 @@
----
const response = await client.inference.put({
task_type: "sparse_embedding",
inference_id: "elser-model-eis",
inference_id: "my-elser-endpoint",
inference_config: {
service: "elastic",
service: "elser",
service_settings: {
model_name: "elser",
adaptive_allocations: {
enabled: true,
min_number_of_allocations: 3,
max_number_of_allocations: 10,
},
num_threads: 1,
},
},
});

View File

@ -1,42 +0,0 @@
// This file is autogenerated, DO NOT EDIT
// Use `node scripts/generate-docs-examples.js` to generate the docs examples
[source, js]
----
const response = await client.bulk({
index: "jinaai-index",
operations: [
{
index: {
_index: "jinaai-index",
_id: "1",
},
},
{
content:
"Sarah Johnson is a talented marine biologist working at the Oceanographic Institute. Her groundbreaking research on coral reef ecosystems has garnered international attention and numerous accolades.",
},
{
index: {
_index: "jinaai-index",
_id: "2",
},
},
{
content:
"She spends months at a time diving in remote locations, meticulously documenting the intricate relationships between various marine species. ",
},
{
index: {
_index: "jinaai-index",
_id: "3",
},
},
{
content:
"Her dedication to preserving these delicate underwater environments has inspired a new generation of conservationists.",
},
],
});
console.log(response);
----

View File

@ -1,32 +0,0 @@
// This file is autogenerated, DO NOT EDIT
// Use `node scripts/generate-docs-examples.js` to generate the docs examples
[source, js]
----
const response = await client.ingest.putPipeline({
id: "query_helper_pipeline",
processors: [
{
script: {
source:
"ctx.prompt = 'Please generate an elasticsearch search query on index `articles_index` for the following natural language query. Dates are in the field `@timestamp`, document types are in the field `type` (options are `news`, `publication`), categories in the field `category` and can be multiple (options are `medicine`, `pharmaceuticals`, `technology`), and document names are in the field `title` which should use a fuzzy match. Ignore fields which cannot be determined from the natural language query context: ' + ctx.content",
},
},
{
inference: {
model_id: "openai_chat_completions",
input_output: {
input_field: "prompt",
output_field: "query",
},
},
},
{
remove: {
field: "prompt",
},
},
],
});
console.log(response);
----

View File

@ -30,13 +30,6 @@ const response = await client.search({
],
},
},
highlight: {
fields: {
semantic_text: {
number_of_fragments: 2,
},
},
},
});
console.log(response);
----

View File

@ -1,30 +0,0 @@
// This file is autogenerated, DO NOT EDIT
// Use `node scripts/generate-docs-examples.js` to generate the docs examples
[source, js]
----
const response = await client.indices.create({
index: "my-rank-vectors-byte",
mappings: {
properties: {
my_vector: {
type: "rank_vectors",
element_type: "byte",
},
},
},
});
console.log(response);
const response1 = await client.index({
index: "my-rank-vectors-byte",
id: 1,
document: {
my_vector: [
[1, 2, 3],
[4, 5, 6],
],
},
});
console.log(response1);
----

View File

@ -5,7 +5,7 @@
----
const response = await client.cluster.putSettings({
persistent: {
"migrate.data_stream_reindex_max_request_per_second": 10000,
"cluster.routing.allocation.disk.watermark.low": "30gb",
},
});
console.log(response);

View File

@ -10,8 +10,7 @@ const response = await client.inference.put({
service: "openai",
service_settings: {
api_key: "<api_key>",
model_id: "text-embedding-3-small",
dimensions: 128,
model_id: "text-embedding-ada-002",
},
},
});

View File

@ -7,7 +7,7 @@ const response = await client.inference.put({
task_type: "sparse_embedding",
inference_id: "elser_embeddings",
inference_config: {
service: "elasticsearch",
service: "elser",
service_settings: {
num_allocations: 1,
num_threads: 1,

View File

@ -1,12 +0,0 @@
// This file is autogenerated, DO NOT EDIT
// Use `node scripts/generate-docs-examples.js` to generate the docs examples
[source, js]
----
const response = await client.cat.indices({
index: ".ml-anomalies-custom-example",
v: "true",
h: "index,store.size",
});
console.log(response);
----

View File

@ -1,12 +0,0 @@
// This file is autogenerated, DO NOT EDIT
// Use `node scripts/generate-docs-examples.js` to generate the docs examples
[source, js]
----
const response = await client.indices.get({
index: ".migrated-ds-my-data-stream-2025.01.23-000001",
human: "true",
filter_path: "*.settings.index.version.created_string",
});
console.log(response);
----

View File

@ -14,7 +14,7 @@ const response = await client.ingest.putPipeline({
attachment: {
target_field: "_ingest._value.attachment",
field: "_ingest._value.data",
remove_binary: true,
remove_binary: false,
},
},
},

View File

@ -1,18 +0,0 @@
// This file is autogenerated, DO NOT EDIT
// Use `node scripts/generate-docs-examples.js` to generate the docs examples
[source, js]
----
const response = await client.search({
index: "kibana_sample_data_ecommerce",
size: 0,
aggs: {
avg_order_value: {
avg: {
field: "taxful_total_price",
},
},
},
});
console.log(response);
----

View File

@ -1,21 +0,0 @@
// This file is autogenerated, DO NOT EDIT
// Use `node scripts/generate-docs-examples.js` to generate the docs examples
[source, js]
----
const response = await client.search({
index: "kibana_sample_data_ecommerce",
size: 0,
aggs: {
daily_orders: {
date_histogram: {
field: "order_date",
calendar_interval: "day",
format: "yyyy-MM-dd",
min_doc_count: 0,
},
},
},
});
console.log(response);
----

View File

@ -12,7 +12,7 @@ const response = await client.ingest.putPipeline({
field: "data",
indexed_chars: 11,
indexed_chars_field: "max_size",
remove_binary: true,
remove_binary: false,
},
},
],

View File

@ -1,12 +0,0 @@
// This file is autogenerated, DO NOT EDIT
// Use `node scripts/generate-docs-examples.js` to generate the docs examples
[source, js]
----
const response = await client.indices.getSettings({
index: "_all",
expand_wildcards: "all",
filter_path: "*.settings.index.*.slowlog",
});
console.log(response);
----

View File

@ -1,22 +0,0 @@
// This file is autogenerated, DO NOT EDIT
// Use `node scripts/generate-docs-examples.js` to generate the docs examples
[source, js]
----
const response = await client.search({
index: "kibana_sample_data_ecommerce",
size: 0,
aggs: {
sales_by_category: {
terms: {
field: "category.keyword",
size: 5,
order: {
_count: "desc",
},
},
},
},
});
console.log(response);
----

View File

@ -1,31 +0,0 @@
// This file is autogenerated, DO NOT EDIT
// Use `node scripts/generate-docs-examples.js` to generate the docs examples
[source, js]
----
const response = await client.search({
index: "kibana_sample_data_ecommerce",
size: 0,
aggs: {
daily_sales: {
date_histogram: {
field: "order_date",
calendar_interval: "day",
},
aggs: {
revenue: {
sum: {
field: "taxful_total_price",
},
},
cumulative_revenue: {
cumulative_sum: {
buckets_path: "revenue",
},
},
},
},
},
});
console.log(response);
----

View File

@ -5,9 +5,6 @@
----
const response = await client.indices.create({
index: "retrievers_example_nested",
settings: {
number_of_shards: 1,
},
mappings: {
properties: {
nested_field: {
@ -21,9 +18,6 @@ const response = await client.indices.create({
dims: 3,
similarity: "l2_norm",
index: true,
index_options: {
type: "flat",
},
},
},
},

View File

@ -1,22 +0,0 @@
// This file is autogenerated, DO NOT EDIT
// Use `node scripts/generate-docs-examples.js` to generate the docs examples
[source, js]
----
const response = await client.inference.put({
task_type: "rerank",
inference_id: "jinaai-rerank",
inference_config: {
service: "jinaai",
service_settings: {
api_key: "<api_key>",
model_id: "jina-reranker-v2-base-multilingual",
},
task_settings: {
top_n: 10,
return_documents: true,
},
},
});
console.log(response);
----

View File

@ -1,11 +0,0 @@
// This file is autogenerated, DO NOT EDIT
// Use `node scripts/generate-docs-examples.js` to generate the docs examples
[source, js]
----
const response = await client.esql.query({
query:
'\nFROM library\n| WHERE match(author, "Frank Herbert", {"minimum_should_match": 2, "operator": "AND"})\n| LIMIT 5\n',
});
console.log(response);
----

View File

@ -1,35 +0,0 @@
// This file is autogenerated, DO NOT EDIT
// Use `node scripts/generate-docs-examples.js` to generate the docs examples
[source, js]
----
const response = await client.search({
query: {
intervals: {
my_text: {
all_of: {
ordered: true,
max_gaps: 1,
intervals: [
{
match: {
query: "my favorite food",
max_gaps: 0,
ordered: true,
},
},
{
match: {
query: "cold porridge",
max_gaps: 4,
ordered: true,
},
},
],
},
},
},
},
});
console.log(response);
----

View File

@ -1,11 +0,0 @@
// This file is autogenerated, DO NOT EDIT
// Use `node scripts/generate-docs-examples.js` to generate the docs examples
[source, js]
----
const response = await client.cluster.state({
metric: "metadata",
filter_path: "metadata.indices.*.system",
});
console.log(response);
----

View File

@ -1,28 +0,0 @@
// This file is autogenerated, DO NOT EDIT
// Use `node scripts/generate-docs-examples.js` to generate the docs examples
[source, js]
----
const response = await client.search({
index: "jinaai-index",
retriever: {
text_similarity_reranker: {
retriever: {
standard: {
query: {
semantic: {
field: "content",
query: "who inspired taking care of the sea?",
},
},
},
},
field: "content",
rank_window_size: 100,
inference_id: "jinaai-rerank",
inference_text: "who inspired taking care of the sea?",
},
},
});
console.log(response);
----

View File

@ -1,44 +0,0 @@
// This file is autogenerated, DO NOT EDIT
// Use `node scripts/generate-docs-examples.js` to generate the docs examples
[source, js]
----
const response = await client.search({
index: "retrievers_example",
retriever: {
linear: {
retrievers: [
{
retriever: {
standard: {
query: {
query_string: {
query: "(information retrieval) OR (artificial intelligence)",
default_field: "text",
},
},
},
},
weight: 2,
normalizer: "minmax",
},
{
retriever: {
knn: {
field: "vector",
query_vector: [0.23, 0.67, 0.89],
k: 3,
num_candidates: 5,
},
},
weight: 1.5,
normalizer: "minmax",
},
],
rank_window_size: 10,
},
},
_source: false,
});
console.log(response);
----

View File

@ -1,17 +0,0 @@
// This file is autogenerated, DO NOT EDIT
// Use `node scripts/generate-docs-examples.js` to generate the docs examples
[source, js]
----
const response = await client.inference.put({
task_type: "chat_completion",
inference_id: "chat-completion-endpoint",
inference_config: {
service: "elastic",
service_settings: {
model_id: "model-1",
},
},
});
console.log(response);
----

View File

@ -1,57 +0,0 @@
// This file is autogenerated, DO NOT EDIT
// Use `node scripts/generate-docs-examples.js` to generate the docs examples
[source, js]
----
const response = await client.indices.updateAliases({
actions: [
{
add: {
index: ".reindexed-v9-ml-anomalies-custom-example",
alias: ".ml-anomalies-example1",
filter: {
term: {
job_id: {
value: "example1",
},
},
},
is_hidden: true,
},
},
{
add: {
index: ".reindexed-v9-ml-anomalies-custom-example",
alias: ".ml-anomalies-example2",
filter: {
term: {
job_id: {
value: "example2",
},
},
},
is_hidden: true,
},
},
{
remove: {
index: ".ml-anomalies-custom-example",
aliases: ".ml-anomalies-*",
},
},
{
remove_index: {
index: ".ml-anomalies-custom-example",
},
},
{
add: {
index: ".reindexed-v9-ml-anomalies-custom-example",
alias: ".ml-anomalies-custom-example",
is_hidden: true,
},
},
],
});
console.log(response);
----

View File

@ -3,13 +3,11 @@
[source, js]
----
const response = await client.inference.put({
const response = await client.inference.inference({
task_type: "my-inference-endpoint",
inference_id: "_update",
inference_config: {
service_settings: {
api_key: "<API_KEY>",
},
service_settings: {
api_key: "<API_KEY>",
},
});
console.log(response);

View File

@ -4,11 +4,9 @@
[source, js]
----
const response = await client.indices.putSettings({
index: ".reindexed-v9-ml-anomalies-custom-example",
index: "my-index-000001",
settings: {
index: {
number_of_replicas: "<original_number_of_replicas>",
},
"index.blocks.read_only_allow_delete": null,
},
});
console.log(response);

View File

@ -1,29 +0,0 @@
// This file is autogenerated, DO NOT EDIT
// Use `node scripts/generate-docs-examples.js` to generate the docs examples
[source, js]
----
const response = await client.indices.create({
index: "my-rank-vectors-float",
mappings: {
properties: {
my_vector: {
type: "rank_vectors",
},
},
},
});
console.log(response);
const response1 = await client.index({
index: "my-rank-vectors-float",
id: 1,
document: {
my_vector: [
[0.5, 10, 6],
[-0.5, 10, 10],
],
},
});
console.log(response1);
----

View File

@ -5,9 +5,6 @@
----
const response = await client.indices.create({
index: "retrievers_example",
settings: {
number_of_shards: 1,
},
mappings: {
properties: {
vector: {
@ -15,9 +12,6 @@ const response = await client.indices.create({
dims: 3,
similarity: "l2_norm",
index: true,
index_options: {
type: "flat",
},
},
text: {
type: "text",
@ -28,9 +22,6 @@ const response = await client.indices.create({
topic: {
type: "keyword",
},
timestamp: {
type: "date",
},
},
},
});
@ -44,7 +35,6 @@ const response1 = await client.index({
text: "Large language models are revolutionizing information retrieval by boosting search precision, deepening contextual understanding, and reshaping user experiences in data-rich environments.",
year: 2024,
topic: ["llm", "ai", "information_retrieval"],
timestamp: "2021-01-01T12:10:30",
},
});
console.log(response1);
@ -57,7 +47,6 @@ const response2 = await client.index({
text: "Artificial intelligence is transforming medicine, from advancing diagnostics and tailoring treatment plans to empowering predictive patient care for improved health outcomes.",
year: 2023,
topic: ["ai", "medicine"],
timestamp: "2022-01-01T12:10:30",
},
});
console.log(response2);
@ -70,7 +59,6 @@ const response3 = await client.index({
text: "AI is redefining security by enabling advanced threat detection, proactive risk analysis, and dynamic defenses against increasingly sophisticated cyber threats.",
year: 2024,
topic: ["ai", "security"],
timestamp: "2023-01-01T12:10:30",
},
});
console.log(response3);
@ -83,7 +71,6 @@ const response4 = await client.index({
text: "Elastic introduces Elastic AI Assistant, the open, generative AI sidekick powered by ESRE to democratize cybersecurity and enable users of every skill level.",
year: 2023,
topic: ["ai", "elastic", "assistant"],
timestamp: "2024-01-01T12:10:30",
},
});
console.log(response4);
@ -96,7 +83,6 @@ const response5 = await client.index({
text: "Learn how to spin up a deployment of our hosted Elasticsearch Service and use Elastic Observability to gain deeper insight into the behavior of your applications and systems.",
year: 2024,
topic: ["documentation", "observability", "elastic"],
timestamp: "2025-01-01T12:10:30",
},
});
console.log(response5);

View File

@ -12,7 +12,7 @@ const response = await client.ingest.putPipeline({
field: "data",
indexed_chars: 11,
indexed_chars_field: "max_size",
remove_binary: true,
remove_binary: false,
},
},
],

View File

@ -12,13 +12,6 @@ const response = await client.search({
fields: ["my_field", "my_field._2gram", "my_field._3gram"],
},
},
highlight: {
fields: {
my_field: {
matched_fields: ["my_field._index_prefix"],
},
},
},
});
console.log(response);
----

View File

@ -1,18 +0,0 @@
// This file is autogenerated, DO NOT EDIT
// Use `node scripts/generate-docs-examples.js` to generate the docs examples
[source, js]
----
const response = await client.inference.put({
task_type: "text_embedding",
inference_id: "jinaai-embeddings",
inference_config: {
service: "jinaai",
service_settings: {
model_id: "jina-embeddings-v3",
api_key: "<api_key>",
},
},
});
console.log(response);
----

View File

@ -715,7 +715,7 @@ const result = await client.helpers
ES|QL can return results in multiple binary formats, including https://arrow.apache.org/[Apache Arrow]'s streaming format. Because it is a very efficient format to read, it can be valuable for performing high-performance in-memory analytics. And, because the response is streamed as batches of records, it can be used to produce aggregations and other calculations on larger-than-memory data sets.
`toArrowReader` returns a https://github.com/apache/arrow/blob/520ae44272d491bbb52eb3c9b84864ed7088f11a/js/src/ipc/reader.ts#L216[`AsyncRecordBatchStreamReader`].
`toArrowReader` returns a https://arrow.apache.org/docs/js/classes/Arrow_dom.RecordBatchReader.html[`RecordBatchStreamReader`].
[source,ts]
----
@ -724,7 +724,7 @@ const reader = await client.helpers
.toArrowReader()
// print each record as JSON
for await (const recordBatch of reader) {
for (const recordBatch of reader) {
for (const record of recordBatch) {
console.log(record.toJSON())
}
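To illustrate the larger-than-memory use case mentioned above, here is a hedged sketch that folds a running total over each record batch as it streams in; the `orders` index and the `amount` column are hypothetical, and `for await` works whether the reader yields batches synchronously or asynchronously:
[source,js]
----
const reader = await client.helpers
  .esql({ query: 'FROM orders | LIMIT 1000000' })
  .toArrowReader()

// running total computed one record batch at a time, so the full
// result set never has to fit in memory
let total = 0
for await (const recordBatch of reader) {
  const amounts = recordBatch.getChild('amount')
  if (amounts === null) continue
  for (const value of amounts) {
    total += Number(value)
  }
}
console.log(total)
----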

View File

@ -97,7 +97,7 @@ client.diagnostic.on('request', (err, result) => {
----
|`deserialization`
a|Emitted before starting deserialization and decompression. If you want to measure this phase duration, you should measure the time elapsed between this event and `response`. This event might not be emitted in certain situations, like: when `asStream` is set to true; a response is terminated early due to content length being too large; or a response is terminated early by an `AbortController`.
a|Emitted before starting deserialization and decompression. If you want to measure this phase duration, you should measure the time elapsed between this event and `response`. _(This event might not be emitted in certain situations)_.
[source,js]
----
client.diagnostic.on('deserialization', (err, result) => {

File diff suppressed because it is too large

index.d.ts vendored
View File

@ -25,4 +25,3 @@ export * as estypes from './lib/api/types'
export * as estypesWithBody from './lib/api/typesWithBodyKey'
export { Client, SniffingTransport }
export type { ClientOptions, NodeOptions } from './lib/client'
export * as helpers from './lib/helpers'

Some files were not shown because too many files have changed in this diff