Compare commits


35 Commits

Author SHA1 Message Date
a28f9c36e4 Bumped v7.6.1 2020-03-13 14:48:25 +01:00
a9a905409e [Backport 7.x] ApiKey should take precedence over basic auth (#1117)
* ApiKey should take precedence over basic auth

* Updated docs

* Updated test

Co-authored-by: Tomas Della Vedova <delvedor@users.noreply.github.com>
2020-03-13 13:41:57 +01:00
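A minimal sketch of the behavior this backport describes, assuming the documented 7.x client auth options (node URL and key values are illustrative):

const { Client } = require('@elastic/elasticsearch')

const client = new Client({
  node: 'https://localhost:9200',
  auth: {
    username: 'elastic',
    password: 'changeme',
    // With this change, apiKey wins over the basic-auth credentials
    // when both are configured.
    apiKey: 'base64EncodedApiKey'
  }
})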
ca23e0b23a [Backport 7.x] Secure json parsing (#1113)
* Safe json parsing

* Updated test

Co-authored-by: Tomas Della Vedova <delvedor@users.noreply.github.com>
2020-03-13 09:29:40 +01:00
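The idea behind the change, sketched with the secure-json-parse package (the client's exact integration may differ): reject payloads whose keys could pollute Object.prototype instead of parsing them blindly.

const sjson = require('secure-json-parse')

// Plain JSON.parse would happily build this object; secure parsing
// throws a SyntaxError on __proto__ keys by default.
const payload = '{ "__proto__": { "polluted": true } }'
try {
  sjson.parse(payload)
} catch (err) {
  console.error('rejected unsafe payload:', err.message)
}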
db67c526e4 API generation 2020-03-11 19:12:21 +01:00
cf5f3c55e0 typo (#1109) 2020-03-11 19:10:32 +01:00
d18f877c77 [DOCS] Fixes out-dated monitoring links (#1096) 2020-03-11 19:10:25 +01:00
917a4e338f Updated api reference header comment (#1049) 2020-03-11 19:07:45 +01:00
9852906c23 Migrate to GitHub Actions (#1104)
* Create nodejs.yml

* Run only on push

* Renamed jobs

* Removed .travis.yml

* Split coverage job and cleanup

* Skip flaky test

* Code coverage reporting

* Renamed codecov file

* Added backport action

* Updated integration test configuration

* Removed unused dependencies

* Fixes
2020-03-10 09:12:30 +01:00
711625edf2 Updates .ci folder to latest incarnation (#1103)
* Updates .ci folder to latest incarnation

The .ci folder cleans up some of its cleanup/wait routines so that they
can be reused. It also significantly reduces the available environment
variable toggles. In lieu of those toggles, `run-repository.sh` can now
start multiple nodes using `NUMBER_OF_NODES`.

* update certs
2020-03-10 09:12:28 +01:00
372f91c76d Updated LICENSE 2020-03-04 11:04:14 +01:00
53e29db80e Bumped v7.6.0 2020-02-12 14:01:09 +01:00
8bd5c4c4ce Support for Elasticsearch 7.6 (#1075) 2020-02-12 11:17:56 +01:00
d19313a72c Update integration test runner (#1085)
* Improved user and roles handling

* Avoid deleting internal indices

* Updated skip version handling

* Fix leftover

* Improved indices and aliases cleanup

* Clean also internal indices

* Restore previous index/alias cleanup

* Ignore 404
2020-02-11 10:51:08 +01:00
874b04f819 Added integration test stats (#1083) 2020-02-06 12:11:14 +01:00
a91e5375ac Fix link 2020-02-04 12:10:02 +01:00
b99654602a API generation 2020-02-04 11:03:15 +01:00
733070963b Added new examples (#1031)
* Added new examples

* Fixed examples links
2020-02-04 11:02:40 +01:00
726d1824bd Bumped v7.5.1 2020-02-04 11:00:57 +01:00
e94eefe8a2 API generation 2020-02-04 10:36:58 +01:00
cd61e30bb3 Add examples to reference (#1076)
* Updated examples urls

* Added links to examples

* Updated docs generation script to include code examples

* Fixes

* Skip index api

* Fix link

* Fix url generation

* API generation

* Fix new line

* API generation

* Fix leftover

* API generation
2020-02-04 10:35:59 +01:00
21683e6826 Skip compression in case of empty string body (#1080)
* Fix #1069

* Updated test

* Updated test
2020-02-04 10:30:49 +01:00
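A sketch of the setup affected by #1069 (compression is a documented 7.x client option): with gzip enabled, a request whose body is an empty string is now sent uncompressed instead of as an empty gzip payload.

const { Client } = require('@elastic/elasticsearch')

const client = new Client({
  node: 'http://localhost:9200',
  compression: 'gzip' // empty-string bodies now skip compression
})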
647546a4e5 Merge branch '7.x' of https://github.com/elastic/elasticsearch-js into 7.x 2020-02-04 10:30:03 +01:00
1a6c36e291 Fix typo in NoLivingConnectionsError (#1045)
Co-authored-by: Tomas Della Vedova <delvedor@users.noreply.github.com>
2020-02-04 10:29:59 +01:00
a34c6dd3a7 [7.x][DOCS] Fine-tunes the Node.Js client Typescript and examples sections. (#1079) 2020-02-03 17:24:28 +01:00
2a59c634f7 Move to latest .ci script structure (#1042)
Introduces a dedicated `run-repository.sh` for the repository custom
bits.

This allows us to keep `run-elasticsearch.sh` and `run-tests` in sync
through file copying or patches easier.

Co-authored-by: Tomas Della Vedova <delvedor@users.noreply.github.com>
2020-01-31 12:00:35 +01:00
2d9bfd6730 [7.x][DOCS] Fine-tunes the Node.Js client extend the client sec… (#1065) 2020-01-29 18:09:15 +01:00
68730dc0e6 Renamed log skip function (#1061) 2020-01-23 09:24:54 +01:00
0f60d78e5d Improve integration test execution time (#1005)
* Integration test: Add limit of 3 minutes per yaml file

* Monitor all test files that take more than 1m to execute

* Set the threshold to 30s

* Refactored integration test runner

* Better time reporting

* Updated test time limits

* Updated CI script

* Run oss only in oss build

* Run only oss test

* Revert "Run only oss test"

This reverts commit fd3a07d42d.
2020-01-23 08:31:38 +01:00
0455b76fb8 [7.x][DOCS] Fine-tunes the Node.Js client observability section (#1056) 2020-01-16 14:31:29 +01:00
63a68fb615 [7.x][DOCS] Fine-tunes the Node.Js client child client section (#1058) 2020-01-16 14:30:28 +01:00
d58365eb70 Change TransportRequestOptions.ignore to number[] (#1053) 2020-01-15 18:25:41 +01:00
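In practice this means the ignore request option takes an array of HTTP status codes; a sketch (index name illustrative):

const { Client } = require('@elastic/elasticsearch')
const client = new Client({ node: 'http://localhost:9200' })

// A 404 listed in ignore is delivered as a normal response
// instead of being turned into a ResponseError.
client.indices.delete({ index: 'maybe-missing' }, { ignore: [404] }, (err, result) => {
  if (err) console.error(err) // not invoked for a 404
  else console.log(result.statusCode) // may be 404
})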
51568ed505 ClientOptions["cloud"] should have optional auth fields (#1032) 2020-01-07 12:09:42 +01:00
be7c9f5e9d Return super in example Transport subclass (#980)
If called without a callback, the request method returns a Promise, so
when calling into super.request, the result should be returned to
maintain promise behavior.
2019-12-19 13:37:35 +01:00
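A sketch of the corrected pattern, using the Transport export documented for the 7.x client:

const { Client, Transport } = require('@elastic/elasticsearch')

class MyTransport extends Transport {
  request (params, options, callback) {
    // custom logic here...
    // Returning super.request preserves the Promise when no callback is given.
    return super.request(params, options, callback)
  }
}

const client = new Client({
  node: 'http://localhost:9200',
  Transport: MyTransport
})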
35b03aed17 [7.x][DOCS] Fine-tunes the Node.Js client authentication section. (#1018) 2019-12-13 09:30:46 +01:00
c51fbfaafd [7.x][DOCS] Fine-tunes the Node.Js client breaking changes section. (#1013) 2019-12-05 13:20:56 +01:00
81 changed files with 3468 additions and 1377 deletions


@@ -1,20 +1,20 @@
-----BEGIN CERTIFICATE-----
MIIDSTCCAjGgAwIBAgIUIwN+0zglsexRKwE1RGHvlCcmrdwwDQYJKoZIhvcNAQEL
BQAwNDEyMDAGA1UEAxMpRWxhc3RpYyBDZXJ0aWZpY2F0ZSBUb29sIEF1dG9nZW5l
cmF0ZWQgQ0EwHhcNMTkwMjEzMDcyMjQwWhcNMjIwMjEyMDcyMjQwWjA0MTIwMAYD
VQQDEylFbGFzdGljIENlcnRpZmljYXRlIFRvb2wgQXV0b2dlbmVyYXRlZCBDQTCC
ASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBANILs0JO0e7x29zeVx21qalK
XKdX+AMlGJPH75wWO/Jq6YHtxt1wYIg762krOBXfG6JsFSOIwIv5VrzGGRGjSPt9
OXQyXrDDiQvsBT3rpzLNdDs7KMl2tZswwv7w9ujgud0cYnS1MOpn81rfPc73DvMg
xuhplofDx6fn3++PjVRU2FNiIVWyEoaxRjCeGPMBubKZYaYbQA6vYM4Z+ByG727B
AyAER3t7xmvYti/EoO2hv2HQk5zgcj/Oq3AJKhnt8LH8fnfm3TnYNM1htvXqhN05
vsvhvm2PHfnA5qLlSr/3W0aI/U/PqfsFDCgyRV097sMIaKkmavb0Ue7aQ7lgtp0C
AwEAAaNTMFEwHQYDVR0OBBYEFDRKlCMowWR1rwxE0d1lTEQe5O71MB8GA1UdIwQY
MBaAFDRKlCMowWR1rwxE0d1lTEQe5O71MA8GA1UdEwEB/wQFMAMBAf8wDQYJKoZI
hvcNAQELBQADggEBAKbCJ95EBpeuvF70KEt6QU70k/SH1NRvM9YzKryV0D975Jvu
HOSm9HgSTULeAUFZIa4oYyf3QUfVoI+2T/aQrfXA3gfrJWsHURkyNmiHOFAbYHqi
xA6i249G2GTEjc1+le/M2N2CcDKAmurW6vSGK4upXQbPd6KmnhHREX74zkWjnOa+
+tibbSSOCT4Tmja2DbBxAPuivU9IB1g/hIUmbYQqKffQrBJA0658tz6w63a/Q7xN
pCvvbSgiMZ6qcVIcJkBT2IooYie+ax45pQECHthgIUcQAzfmIfqlU0Qfl8rDgAmn
0c1o6HQjKGU2aVGgSRuaaiHaSZjbPIZVS51sOoI=
MIIDSjCCAjKgAwIBAgIVAJQLm8V2LcaCTHUcoIfO+KL63nG3MA0GCSqGSIb3DQEB
CwUAMDQxMjAwBgNVBAMTKUVsYXN0aWMgQ2VydGlmaWNhdGUgVG9vbCBBdXRvZ2Vu
ZXJhdGVkIENBMB4XDTIwMDIyNjA1NTA1N1oXDTIzMDIyNTA1NTA1N1owNDEyMDAG
A1UEAxMpRWxhc3RpYyBDZXJ0aWZpY2F0ZSBUb29sIEF1dG9nZW5lcmF0ZWQgQ0Ew
ggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDYyajkPvGtUOE5M1OowQfB
kWVrWjo1+LIxzgCeRHp0YztLtdVJ0sk2xoSrt2uZpxcPepdyOseLTjFJex1D2yCR
AEniIqcFif4G72nDih2LlbhpUe/+/MTryj8ZTkFTzI+eMmbQi5FFMaH+kwufmdt/
5/w8YazO18SxxJUlzMqzfNUrhM8vvvVdxgboU7PWhk28wZHCMHQovomHmzclhRpF
N0FMktA98vHHeRjH19P7rNhifSd7hZzoH3H148HVAKoPgqnZ6vW2O2YfAWOP6ulq
cyszr57p8fS9B2wSdlWW7nVHU1JuKcYD67CxbBS23BeGFgCj4tiNrmxO8S5Yf85v
AgMBAAGjUzBRMB0GA1UdDgQWBBSWAlip9eoPmnG4p4OFZeOUBlAbNDAfBgNVHSME
GDAWgBSWAlip9eoPmnG4p4OFZeOUBlAbNDAPBgNVHRMBAf8EBTADAQH/MA0GCSqG
SIb3DQEBCwUAA4IBAQA19qqrMTWl7YyId+LR/QIHDrP4jfxmrEELrAL58q5Epc1k
XxZLzOBSXoBfBrPdv+3XklWqXrZjKWfdkux0Xmjnl4qul+srrZDLJVZG3I7IrITh
AmQUmL9MuPiMnAcxoGZp1xpijtW8Qmd2qnambbljWfkuVaa4hcVRfrAX6TciIQ21
bS5aeLGrPqR14h30YzDp0RMmTujEa1o6ExN0+RSTkE9m89Q6WdM69az8JW7YkWqm
I+UCG3TcLd3TXmN1zNQkq4y2ObDK4Sxy/2p6yFPI1Fds5w/zLfBOvvPQY61vEqs8
SCCcQIe7f6NDpIRIBlty1C9IaEHj7edyHjF6rtYb
-----END CERTIFICATE-----

.ci/certs/ca.key Normal file

@@ -0,0 +1,27 @@
-----BEGIN RSA PRIVATE KEY-----
MIIEpgIBAAKCAQEA2Mmo5D7xrVDhOTNTqMEHwZFla1o6NfiyMc4AnkR6dGM7S7XV
SdLJNsaEq7drmacXD3qXcjrHi04xSXsdQ9sgkQBJ4iKnBYn+Bu9pw4odi5W4aVHv
/vzE68o/GU5BU8yPnjJm0IuRRTGh/pMLn5nbf+f8PGGsztfEscSVJczKs3zVK4TP
L771XcYG6FOz1oZNvMGRwjB0KL6Jh5s3JYUaRTdBTJLQPfLxx3kYx9fT+6zYYn0n
e4Wc6B9x9ePB1QCqD4Kp2er1tjtmHwFjj+rpanMrM6+e6fH0vQdsEnZVlu51R1NS
binGA+uwsWwUttwXhhYAo+LYja5sTvEuWH/ObwIDAQABAoIBAQC8QDGnMnmPdWJ+
13FYY3cmwel+FXXjFDk5QpgK15A2rUz6a8XxO1d7d1wR+U84uH4v9Na6XQyWjaoD
EyPQnuJiyAtgkZLUHoY244PGR5NsePEQlBSCKmGeF5w/j1LvP/2e9EmP4wKdQYJY
nLxFNcgEBCFnFbKIU5n8fKa/klybCrwlBokenyBro02tqH4LL7h1YMRRrl97fv1V
e/y/0WcMN+KnMglfz6haimBRV2yamCCHHmBImC+wzOgT/quqlxPfI+a3ScHxuA65
3QyCavaqlPh+T3lXnN/Na4UWqFtzMmwgJX2x1zM5qiln46/JoDiXtagvV43L3rNs
LhPRFeIRAoGBAPhEB7nNpEDNjIRUL6WpebWS9brKAVY7gYn7YQrKGhhCyftyaiBZ
zYgxPaJdqYXf+DmkWlANGoYiwEs40QwkR/FZrvO4+Xh3n3dgtl59ZmieuoQvDsG+
RYIj+TfBaqhewhZNMMl7dxz7DeyQhyRCdsvl3VqJM0RuOsIrzrhCIEItAoGBAN+K
lgWI7swDpOEaLmu+IWMkGImh1LswXoZqIgi/ywZ7htZjPzidOIeUsMi+lrYsKojG
uU3sBxASsf9kYXDnuUuUbGT5M/N2ipXERt7klUAA/f5sg1IKlTrabaN/HGs/uNtf
Efa8v/h2VyTurdPCJ17TNpbOMDwX1qGM62tyt2CLAoGBAIHCnP8iWq18QeuQTO8b
a3/Z9hHRL22w4H4MI6aOB6GSlxuTq6CJD4IVqo9IwSg17fnCy2l3z9s4IqWuZqUf
+XJOW8ELd2jdrT2qEOfGR1Z7UCVyqxXcq1vgDYx0zZh/HpalddB5dcJx/c8do2Ty
UEE2PcHqYB9uNcvzNbLc7RtpAoGBALbuU0yePUTI6qGnajuTcQEPpeDjhRHWSFRZ
ABcG1N8uMS66Mx9iUcNp462zgeP8iqY5caUZtMHreqxT+gWKK7F0+as7386pwElF
QPXgO18QMMqHBIQb0vlBjJ1SRPBjSiSDTVEML1DljvTTOX7kEJHh6HdKrmBO5b54
cqMQUo53AoGBAPVWRPUXCqlBz914xKna0ZUh2aesRBg5BvOoq9ey9c52EIU5PXL5
0Isk8sWSsvhl3tjDPBH5WuL5piKgnCTqkVbEHmWu9s1T57Mw6NuxlPMLBWvyv4c6
tB9brOxv0ui3qGMuBsBoDKbkNnwXyOXLyFg7O+H4l016A3mLQzJM+NGV
-----END RSA PRIVATE KEY-----


@@ -1,19 +1,19 @@
-----BEGIN CERTIFICATE-----
MIIDIjCCAgqgAwIBAgIUI4QU6jA1dYSCbdIA6oAb2TBEluowDQYJKoZIhvcNAQEL
BQAwNDEyMDAGA1UEAxMpRWxhc3RpYyBDZXJ0aWZpY2F0ZSBUb29sIEF1dG9nZW5l
cmF0ZWQgQ0EwHhcNMTkwMjEzMDcyMzEzWhcNMjIwMjEyMDcyMzEzWjATMREwDwYD
VQQDEwhpbnN0YW5jZTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAJeT
yOy6EAScZxrULKjHePciiz38grivCrhFFV+dThaRCcl3DhDzb9Eny5q5iEw3WvLQ
Rqmf01jncNIhaocTt66VqveXaMubbE8O0LcG6e4kpFO+JtnVF8JTARTc+ux/1uD6
hO1VG/HItM7WQrQxh4hfB2u1AX2YQtoqEtXXEC+UHWfl4QzuzXjBnKCkO/L9/6Tf
yNFQWXxKnIiTs8Xm9sEhhSCBJPlLTQu+MX4vR2Uwj5XZmflDUr+ZTenl9qYxL6b3
SWhh/qEl4GAj1+tS7ZZOxE0237mUh3IIFYSWSaMm8K2m/BYHkLNWL5B1dMic0lsv
osSoYrQuCef4HQMCitsCAwEAAaNNMEswHQYDVR0OBBYEFFMg4l1GLW8lYbwASY+r
YeWYRzIiMB8GA1UdIwQYMBaAFDRKlCMowWR1rwxE0d1lTEQe5O71MAkGA1UdEwQC
MAAwDQYJKoZIhvcNAQELBQADggEBAEQrgh1xALpumQTzsjxFRGque/vlKTgRs5Kh
xtgapr6wjIbdq7dagee+4yNOKzS5lGVXCgwrJlHESv9qY0uumT/33vK2uduJ7NAd
fR2ZzyBnhMX+mkYhmGrGYCTUMUIwOIQYa4Evis4W+LHmCIDG03l7gLHfdIBe9VMO
pDZum8f6ng0MM49s8/rXODNYKw8kFyUhnfChqMi/2yggb1uUIfKlJJIchkgYjE13
zuC+fjo029Pq1jeMIdxugLf/7I/8NiW1Yj9aCXevUXG1qzHFEuKAinBXYOZO/vWS
LaEqOhwrzNynwgGpYAr7Rfgv4AflltYIIav4PZT03P7fbyAAf8s=
MIIDIzCCAgugAwIBAgIVAMTO6uVx9dLox2t0lY4IcBKZXb5WMA0GCSqGSIb3DQEB
CwUAMDQxMjAwBgNVBAMTKUVsYXN0aWMgQ2VydGlmaWNhdGUgVG9vbCBBdXRvZ2Vu
ZXJhdGVkIENBMB4XDTIwMDIyNjA1NTA1OVoXDTIzMDIyNTA1NTA1OVowEzERMA8G
A1UEAxMIaW5zdGFuY2UwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDK
YLTOikVENiN/qYupOsoXd7VYYnryyfCC/dK4FC2aozkbqjFzBdvPGAasoc4yEiH5
CGeXMgJuOjk1maqetmdIsw00j4oHJviYsnGXzxxS5swhD7spcW4Uk4V4tAUzrbfT
vW/2WW/yYCLe5phVb2chz0jL+WYb4bBmdfs/t6RtP9RqsplYAmVp3gZ6lt2YNtvE
k9gz0TVk3DuO1TquIClfRYUjuywS6xDSvxJ8Jl91EfDWM8QU+9F+YAtiv74xl2U3
P0wwMqNvMxf9/3ak3lTQGsgO4L6cwbKpVLMMzxSVunZz/sgl19xy3qHHz1Qr2MjJ
/2c2J7vahUL4NPRkjJClAgMBAAGjTTBLMB0GA1UdDgQWBBS2Wn8E2VZv4oenY+pR
O8G3zfQXhzAfBgNVHSMEGDAWgBSWAlip9eoPmnG4p4OFZeOUBlAbNDAJBgNVHRME
AjAAMA0GCSqGSIb3DQEBCwUAA4IBAQAvwPvCiJJ6v9jYcyvYY8I3gP0oCwrylpRL
n91UlgRSHUmuAObyOoVN5518gSV/bTU2SDrstcLkLFxHvnfpoGJoxsQEHuGxwDRI
nhYNd62EKLerehNM/F9ILKmvTh8f6QPCzjUuExTXv+63l2Sr6dBS7FHsGs6UKUYO
llM/y9wMZ1LCuZuBg9RhtgpFXRSgDM9Z7Begu0d/BPX9od/qAeZg9Arz4rwUiCN4
IJOMEBEPi5q1tgeS0Fb1Grpqd0Uz5tZKtEHNKzLG+zSMmkneL62Nk2HsmEFZKwzg
u2pU42UaUE596G6o78s1aLn9ICcElPHTjiuZNSiyuu9IzvFDjGQw
-----END CERTIFICATE-----


@@ -1,27 +1,27 @@
-----BEGIN RSA PRIVATE KEY-----
MIIEpQIBAAKCAQEAl5PI7LoQBJxnGtQsqMd49yKLPfyCuK8KuEUVX51OFpEJyXcO
EPNv0SfLmrmITDda8tBGqZ/TWOdw0iFqhxO3rpWq95doy5tsTw7Qtwbp7iSkU74m
2dUXwlMBFNz67H/W4PqE7VUb8ci0ztZCtDGHiF8Ha7UBfZhC2ioS1dcQL5QdZ+Xh
DO7NeMGcoKQ78v3/pN/I0VBZfEqciJOzxeb2wSGFIIEk+UtNC74xfi9HZTCPldmZ
+UNSv5lN6eX2pjEvpvdJaGH+oSXgYCPX61Ltlk7ETTbfuZSHcggVhJZJoybwrab8
FgeQs1YvkHV0yJzSWy+ixKhitC4J5/gdAwKK2wIDAQABAoIBAQCRFTJna/xy/WUu
59FLR4qAOj8++JgCwACpue4oU7/vl6nffSYokWoAr2+RzG4qTX2vFi3cpA8+dGCn
sLZvTi8tWzKGxBTZdg2oakzaMzLr74SeZ052iCGyrZJGbvF6Ny7srr1XEXSq6+os
ZCb6pMHOhO7saBdiKMAsY8MdjTl/33AduuE6ztqv+L92xTr2g4QlbT1KvWlEgppU
k4Gy7zdETkPBTSH/17ZwyGJoJICIAhbL4IpmOM4dPIg8nFkVPPpy6p0z4uGjtgnK
nreZ2EKMzCafBaHn7A77gpi0OrQdl6pe0fsGqv/323YjCJPbwwl5TsoNq44DzwiX
3M7XiVJxAoGBAOCne56vdN4uZmCgLVGT2JSUNVPOu4bfjrxWH6cslzrPT2Zhp3lO
M4axZ3gmcervV252YEZXntXDHHCSfrECllRN1WFD63XmyQ/CkhuvZkkeRHfzL1TE
EdqHOTqs4sRETZ7+RITFC81DZQkWWOKeyXMjyPBqd7RnThQHijB1c8Y5AoGBAKy6
CVKBx+zz5crVD0tz4UhOmz1wRNN0CL0l+FXRuFSgbzMIvwpfiqe25crgeLHe2M2/
TogdWbjZ2nUZQTzoRsSkQ6cKHpj+G/gWurp/UcHHXFVwgLSPF7c3KHDtiYq7Vqw0
bvmhM03LI6+ZIPRV7hLBr7WP7UmpAiREMF7tTnmzAoGBAIkx3w3WywFQxtblmyeB
qbd7F2IaE23XoxyjX+tBEQ4qQqwcoSE0v8TXHIBEwjceeX+NLVhn9ClJYVniLRq+
oL3VVqVyzB4RleJZCc98e3PV1yyFx/b1Uo3pHOsXX9lKeTjKwV9v0rhFGzPEgP3M
yOvXA8TG0FnM6OLUg/D6GX0JAoGAMuHS4TVOGeV3ahr9mHKYiN5vKNgrzka+VEod
L9rJ/FQOrfADpyCiDen5I5ygsXU+VM3oanyK88NpcVlxOGoMft0M+OYoQVWKE7lO
ZKYhBX6fGqQ7pfUJPXXIOgwfmni5fZ0sm+j63g3bg10OsiumKGxaQJgXhL1+3gQg
Y7ZwibUCgYEAlZoFFvkMLjpOSaHk1z5ZZnt19X0QUIultBwkumSqMPm+Ks7+uDrx
thGUCoz4ecr/ci4bIUY7mB+zfAbqnBOMxreJqCRbAIuRypo1IlWkTp8DywoDOfMW
NfzjVmzJ7EJu44nGmVAi1jw4Pbseivvi1ujMCoPgaE8I1uSh144bwN8=
MIIEogIBAAKCAQEAymC0zopFRDYjf6mLqTrKF3e1WGJ68snwgv3SuBQtmqM5G6ox
cwXbzxgGrKHOMhIh+QhnlzICbjo5NZmqnrZnSLMNNI+KByb4mLJxl88cUubMIQ+7
KXFuFJOFeLQFM623071v9llv8mAi3uaYVW9nIc9Iy/lmG+GwZnX7P7ekbT/UarKZ
WAJlad4GepbdmDbbxJPYM9E1ZNw7jtU6riApX0WFI7ssEusQ0r8SfCZfdRHw1jPE
FPvRfmALYr++MZdlNz9MMDKjbzMX/f92pN5U0BrIDuC+nMGyqVSzDM8Ulbp2c/7I
Jdfcct6hx89UK9jIyf9nNie72oVC+DT0ZIyQpQIDAQABAoIBADAh7f7NjgnaInlD
ds8KB3SraPsbeQhzlPtiqRJU4j/MIFH/GYG03AGWQkget67a9y+GmzSvlTpoKKEh
6h2TXl9BDpv4o6ht0WRn1HJ5tM/Wyqf2WNpTew3zxCPgFPikkXsPrChYPzLTQJfp
GkP/mfTFmxfAOlPZSp4j41zVLYs53eDkAegFPVfKSr1XNNJ3QODLPcIBfxBYsiC9
oU+jRW8xYuj31cEl5k5UqrChJ1rm3mt6cguqXKbISuoSvi13gXI6DccqhuLAU+Kr
ib2XYrRP+pWocZo/pM9WUVoNGtFxfY88sAQtvG6gDKo2AURtFyq84Ow0h9mdixV/
gRIDPcECgYEA5nEqE3OKuG9WuUFGXvjtn4C0F6JjflYWh7AbX51S4F6LKrW6/XHL
Rg4BtF+XReT7OQ6llsV8kZeUxsUckkgDLzSaA8lysNDV5KkhAWHfRqH//QKFbqZi
JL9t3x63Qt81US8s2hQk3khPYTRM8ZB3xHiXvZYSGC/0x/DxfEO3QJECgYEA4NK5
sxtrat8sFz6SK9nWEKimPjDVzxJ0hxdX4tRq/JdOO5RncawVqt6TNP9gTuxfBvhW
MhJYEsQj8iUoL1dxo9d1eP8HEANNV0iX5OBvJNmgBp+2OyRSyr+PA55+wAxYuAE7
QKaitOjW57fpArNRt2hQyiSzTuqUFRWTWJHCWNUCgYAEurPTXF6vdFGCUc2g61jt
GhYYGhQSpq+lrz6Qksj9o9MVWE9zHh++21C7o+6V16I0RJGva3QoBMVf4vG4KtQt
5tV2WG8LI+4P2Ey+G4UajP6U8bVNVQrUmD0oBBhcvfn5JY+1Fg6/pRpD82/U0VMz
7AmpMWhDqNBMPiymkTk0kQKBgCuWb05cSI0ly4SOKwS5bRk5uVFhYnKNH255hh6C
FGP4acB/WzbcqC7CjEPAJ0nl5d6SExQOHmk1AcsWjR3wlCWxxiK5PwNJwJrlhh1n
reS1FKN0H36D4lFQpkeLWQOe4Sx7gKNeKzlr0w6Fx3Uwku0+Gju2tdTdAey8jB6l
08opAoGAEe1AuR/OFp2xw6V8TH9UHkkpGxy+OrXI6PX6tgk29PgB+uiMu4RwbjVz
1di1KKq2XecAilVbnyqY+edADxYGbSnci9x5wQRIebfMi3VXKtV8NQBv2as6qwtW
JDcQUWotOHjpdvmfJWWkcBhbAKrgX8ukww00ZI/lC3/rmkGnBBg=
-----END RSA PRIVATE KEY-----


@@ -0,0 +1,20 @@
-----BEGIN CERTIFICATE-----
MIIDVjCCAj6gAwIBAgIULh42yRefYlRRl1hvt055LrUH0HwwDQYJKoZIhvcNAQEL
BQAwNDEyMDAGA1UEAxMpRWxhc3RpYyBDZXJ0aWZpY2F0ZSBUb29sIEF1dG9nZW5l
cmF0ZWQgQ0EwHhcNMjAwMjI4MDMzNzIwWhcNMjMwMjI3MDMzNzIwWjATMREwDwYD
VQQDEwhpbnN0YW5jZTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAIUP
t267NN21z+3ukajej8eojSXwP6zHxy7CUAp+sQ7bTq2XCKxkYX3CW9ThcS4cV9mL
ayYdWEYnbEDGYPQDo7Wk3Ih5OEXTMZb/yNEx5D4S2lGMOS5bCDdYx6GvwCMG4jNx
aMktosaxpprAJiHh2oLgQk0hQc/a9JfMo6kJKtuhjxsxjxLwcOHhuaUD7NS0Pjop
CJkSYcrL+nnQPQjKe4uLhAbSyiX914h4QX0CJ0e4z1ccdDX2PFWTrwaIf//vQhCR
wP2YKdfjR0JB4oDAlu85GsIs2cFLPysM5ufuNZO4fCr8uOwloKI8zZ2HhlIfBEcY
Gcy4g9N/9epmxMXZlGcCAwEAAaOBgDB+MB0GA1UdDgQWBBRefYm8DHHDdkTPHhS1
HEUwTb2uiDAfBgNVHSMEGDAWgBSWAlip9eoPmnG4p4OFZeOUBlAbNDAxBgNVHREE
KjAogglsb2NhbGhvc3SHBH8AAAGHEAAAAAAAAAAAAAAAAAAAAAGCA2VzMTAJBgNV
HRMEAjAAMA0GCSqGSIb3DQEBCwUAA4IBAQC+pauqM2wJjQaHyHu+kIm59P4b/5Oj
IH1cYCQfMB7Y2UMLxp0ew+f7o7zzE2DA52YYFDWy6J5DVWtSBPyeFGgX+RH+aA+9
Iv4cc9QpAs6aFjncorHrzNOrWLgCHIeRAxTR0CAkeP2dUZfDBuMpRyP6rAsYzyLH
Rb3/BfYJSI5vxgt5Ke49Y/ljDKFJTyDmAVrHQ4JWrseYE1UZ2eDkBXeiRlYE/QtB
YsrUSqdL6zvFZyUcilxDUUabNcA+GgeGZ2lAEA90F8vwi62QwRXo3Iv1Hz+6xc43
nFofDK9D8/qkrUD9iuhpx1974QwPhwWyjn9RZRpbZA4ngRL+szdRXR4N
-----END CERTIFICATE-----


@@ -0,0 +1,27 @@
-----BEGIN RSA PRIVATE KEY-----
MIIEogIBAAKCAQEAhQ+3brs03bXP7e6RqN6Px6iNJfA/rMfHLsJQCn6xDttOrZcI
rGRhfcJb1OFxLhxX2YtrJh1YRidsQMZg9AOjtaTciHk4RdMxlv/I0THkPhLaUYw5
LlsIN1jHoa/AIwbiM3FoyS2ixrGmmsAmIeHaguBCTSFBz9r0l8yjqQkq26GPGzGP
EvBw4eG5pQPs1LQ+OikImRJhysv6edA9CMp7i4uEBtLKJf3XiHhBfQInR7jPVxx0
NfY8VZOvBoh//+9CEJHA/Zgp1+NHQkHigMCW7zkawizZwUs/Kwzm5+41k7h8Kvy4
7CWgojzNnYeGUh8ERxgZzLiD03/16mbExdmUZwIDAQABAoIBAEwhjulLMVc9JEfV
PP/qv0cUOBYh3LzF3T/yq4slq7Z9YgnOJYdFM8aZgqNNjc09KEJvE5JOLeiNu9Ff
768Nugg+2HM5MCo7SN9FYCfZLOcbMFCCM2FDcnMAV9A512vzD08xryuT8dNPZ6yZ
DfhK2hQRrb2lrpr3gwSrcGRRu3THqvq7X1RIjpLV3teDMeP8rQPAlpj8fmP+kdVV
5y1ihiDIo87McihG9FMavJtBDXQkUEuVw6eIeir8L/zHHD/ZwhYjNHZGWbrB88sz
CkJkfWh/FlA63tCVdJzkmnERALLTVy9mR0Sq6sUlnFhFNO2BRdWgYLrcp9McfTJC
e8+WsSECgYEAuwQ3nAaFL0jqYu1AREyKT/f3WUenf2UsX7dwwV2/yFtQvkzW7ji4
uZLnfUnZBojtHf35dRo+hDgtvhZhgZNAuPPsbOl/EIMTcbChEqV/3CSTFlhLFM1d
hfM9PoM+Bt/pyUNabjD1sWM0X7WeUhzcddshY3S4daBsNsLuOzweRRcCgYEAtiSS
4qiiGafYsY7gOHuAlOhs/00+1uWIFEHKgoHM9vzCxDN3LCmBdynHk8ZE2TAdhw+l
7xpu6LUxKQDfGmVZa9Epg0kQmVq9c54oQP57pJ3tR+68++insEkfnaZH8jblfq2s
sSkFrY3pdS19edq60nuft64kswKRUUkamCXTXTECgYBdoSfiMpV9bekC7DsPtq5M
iR3KEgi2zEViCmomNTRuL+GF1NyKWdWJ+xVwcYd5MRZdvKimyyPfeGzWTUg14i42
KtEEWgZmkukqMz8BIeCYq6sENeIpIQQgqv3PjU+Bi5r1S4Y7wsFPNRakkD4aaB6r
1rCppWcwZMeoxwEUoO2aswKBgBdDIIdWJi3EpAY5SyWrkEZ0UMdiZC4p7nE33ddB
IJ5CtdU9BXFcc652ZYjX/58FaCABvZ2F8LhDu92SwOusGfmNIxIjWL1dO2jywA1c
8wmZKd7P/M7nbdMz45fMzs9+d1zwbWfK53C8+R4AC1BuwQF0zHc3BHTgVRLelUjt
O8thAoGAdO2gHIqEsZzTgbvLbsh52eVbumjfNGnrnEv1fjb+o+/wAol8dymcmzbL
bZCRzoyA0qwU9kdPFgX46H6so6o1tUM2GQtVFoT6kDnPv7EkLQK0C4cDh6OOHxDU
NPvr/9fHhQd9EDWDvS1JnVMAdKDO6ELp3SoKGGmCXR2QplnqWAk=
-----END RSA PRIVATE KEY-----

.ci/functions/cleanup.sh Normal file

@@ -0,0 +1,67 @@
#!/usr/bin/env bash
#
# Shared cleanup routines between different steps
#
# Please source .ci/functions/imports.sh as a whole not just this file
#
# Version 1.0.0
# - Initial version after refactor
function cleanup_volume {
if [[ "$(docker volume ls -q -f name=$1)" ]]; then
echo -e "\033[34;1mINFO:\033[0m Removing volume $1\033[0m"
(docker volume rm "$1") || true
fi
}
function container_running {
if [[ "$(docker ps -q -f name=$1)" ]]; then
return 0;
else return 1;
fi
}
function cleanup_node {
if container_running "$1"; then
echo -e "\033[34;1mINFO:\033[0m Removing container $1\033[0m"
(docker container rm --force --volumes "$1") || true
fi
if [[ -n "$1" ]]; then
echo -e "\033[34;1mINFO:\033[0m Removing volume $1-${suffix}-data\033[0m"
cleanup_volume "$1-${suffix}-data"
fi
}
function cleanup_network {
if [[ "$(docker network ls -q -f name=$1)" ]]; then
echo -e "\033[34;1mINFO:\033[0m Removing network $1\033[0m"
(docker network rm "$1") || true
fi
}
function cleanup_trap {
status=$?
set +x
if [[ "$DETACH" != "true" ]]; then
echo -e "\033[34;1mINFO:\033[0m clean the network if not detached (start and exit)\033[0m"
cleanup_all_in_network "$1"
fi
# status is 0 or SIGINT
if [[ "$status" == "0" || "$status" == "130" ]]; then
echo -e "\n\033[32;1mSUCCESS run-tests\033[0m"
exit 0
else
echo -e "\n\033[31;1mFAILURE during run-tests\033[0m"
exit ${status}
fi
};
function cleanup_all_in_network {
if [[ -z "$(docker network ls -q -f name="^$1\$")" ]]; then
echo -e "\033[34;1mINFO:\033[0m $1 is already deleted\033[0m"
return 0
fi
containers=$(docker network inspect -f '{{ range $key, $value := .Containers }}{{ printf "%s\n" .Name}}{{ end }}' $1)
while read -r container; do
cleanup_node "$container"
done <<< "$containers"
cleanup_network $1
echo -e "\033[32;1mSUCCESS:\033[0m Cleaned up and exiting\033[0m"
};

.ci/functions/imports.sh Normal file

@@ -0,0 +1,60 @@
#!/usr/bin/env bash
#
# Sets up all the common variables and imports relevant functions
#
# Version 1.0.1
# - Initial version after refactor
# - Validate STACK_VERSION asap
function require_stack_version() {
if [[ -z $STACK_VERSION ]]; then
echo -e "\033[31;1mERROR:\033[0m Required environment variable [STACK_VERSION] not set\033[0m"
exit 1
fi
}
require_stack_version
if [[ -z $es_node_name ]]; then
# only set these once
set -euo pipefail
export TEST_SUITE=${TEST_SUITE-oss}
export RUNSCRIPTS=${RUNSCRIPTS-}
export DETACH=${DETACH-false}
export CLEANUP=${CLEANUP-false}
export es_node_name=instance
export elastic_password=changeme
export elasticsearch_image=elasticsearch
export elasticsearch_url=https://elastic:${elastic_password}@${es_node_name}:9200
if [[ $TEST_SUITE != "xpack" ]]; then
export elasticsearch_image=elasticsearch-${TEST_SUITE}
export elasticsearch_url=http://${es_node_name}:9200
fi
export external_elasticsearch_url=${elasticsearch_url/$es_node_name/localhost}
export elasticsearch_container="${elasticsearch_image}:${STACK_VERSION}"
export suffix=rest-test
export moniker=$(echo "$elasticsearch_container" | tr -C "[:alnum:]" '-')
export network_name=${moniker}${suffix}
export ssl_cert="${script_path}/certs/testnode.crt"
export ssl_key="${script_path}/certs/testnode.key"
export ssl_ca="${script_path}/certs/ca.crt"
fi
export script_path=$(dirname $(realpath -s $0))
source $script_path/functions/cleanup.sh
source $script_path/functions/wait-for-container.sh
trap "cleanup_trap ${network_name}" EXIT
if [[ "$CLEANUP" == "true" ]]; then
cleanup_all_in_network $network_name
exit 0
fi
echo -e "\033[34;1mINFO:\033[0m Creating network $network_name if it does not exist already \033[0m"
docker network inspect "$network_name" > /dev/null 2>&1 || docker network create "$network_name"


@@ -0,0 +1,36 @@
#!/usr/bin/env bash
#
# Exposes a routine scripts can call to wait for a container if that container has set up a health command
#
# Please source .ci/functions/imports.sh as a whole not just this file
#
# Version 1.0.1
# - Initial version after refactor
# - Make sure wait_for_container is silent
function wait_for_container {
set +x
until ! container_running "$1" || (container_running "$1" && [[ "$(docker inspect -f "{{.State.Health.Status}}" ${1})" != "starting" ]]); do
echo ""
docker inspect -f "{{range .State.Health.Log}}{{.Output}}{{end}}" ${1}
echo -e "\033[34;1mINFO:\033[0m waiting for node $1 to be up\033[0m"
sleep 2;
done;
# Always show logs if the container is running, this is very useful both on CI as well as while developing
if container_running $1; then
docker logs $1
fi
if ! container_running $1 || [[ "$(docker inspect -f "{{.State.Health.Status}}" ${1})" != "healthy" ]]; then
cleanup_all_in_network $2
echo
echo -e "\033[31;1mERROR:\033[0m Failed to start $1 in detached mode beyond health checks\033[0m"
echo -e "\033[31;1mERROR:\033[0m dumped the docker log before shutting the node down\033[0m"
return 1
else
echo
echo -e "\033[32;1mSUCCESS:\033[0m Detached and healthy: ${1} on docker network: ${network_name}\033[0m"
return 0
fi
}


@@ -1,4 +1,4 @@
---
##### GLOBAL METADATA
@@ -42,7 +42,7 @@
- axis:
type: yaml
filename: .ci/test-matrix.yml
name: ELASTICSEARCH_VERSION
name: STACK_VERSION
- axis:
type: yaml
filename: .ci/test-matrix.yml

.ci/run-elasticsearch.sh Normal file → Executable file

@@ -3,95 +3,34 @@
# Launch one or more Elasticsearch nodes via the Docker image,
# to form a cluster suitable for running the REST API tests.
#
# Export the ELASTICSEARCH_VERSION variable, e.g. 'elasticsearch:8.0.0-SNAPSHOT'.
# Export the STACK_VERSION variable, e.g. '8.0.0-SNAPSHOT'.
# Export the TEST_SUITE variable, e.g. 'oss' or 'xpack'; defaults to 'oss'.
# Export the NUMBER_OF_NODES variable to start more than 1 node
if [[ -z "$ELASTICSEARCH_VERSION" ]]; then
echo -e "\033[31;1mERROR:\033[0m Required environment variable [ELASTICSEARCH_VERSION] not set\033[0m"
exit 1
fi
# Version 1.1.0
# - Initial version of the run-elasticsearch.sh script
# - Deleting the volume should not depend on the container still running
# - Fixed `ES_JAVA_OPTS` config
# - Moved to STACK_VERSION and TEST_VERSION
# - Refactored into functions and imports
# - Support NUMBER_OF_NODES
set -euxo pipefail
script_path=$(dirname $(realpath -s $0))
source $script_path/functions/imports.sh
set -euo pipefail
moniker=$(echo "$ELASTICSEARCH_VERSION" | tr -C "[:alnum:]" '-')
suffix=rest-test
echo -e "\033[34;1mINFO:\033[0m Take down node if called twice with the same arguments (DETACH=true) or on seperate terminals \033[0m"
cleanup_node $es_node_name
NODE_NAME=${NODE_NAME-${moniker}node1}
MASTER_NODE_NAME=${MASTER_NODE_NAME-${NODE_NAME}}
CLUSTER_NAME=${CLUSTER_NAME-${moniker}${suffix}}
HTTP_PORT=${HTTP_PORT-9200}
ELASTIC_PASSWORD=${ELASTIC_PASSWORD-changeme}
SSL_CERT=${SSL_CERT-"$PWD/certs/testnode.crt"}
SSL_KEY=${SSL_KEY-"$PWD/certs/testnode.key"}
SSL_CA=${SSL_CA-"$PWD/certs/ca.crt"}
DETACH=${DETACH-false}
CLEANUP=${CLEANUP-false}
volume_name=${NODE_NAME}-${suffix}-data
network_default=${moniker}${suffix}
NETWORK_NAME=${NETWORK_NAME-"$network_default"}
set +x
function cleanup_volume {
if [[ "$(docker volume ls -q -f name=$1)" ]]; then
echo -e "\033[34;1mINFO:\033[0m Removing volume $1\033[0m"
(docker volume rm "$1") || true
fi
}
function cleanup_node {
if [[ "$(docker ps -q -f name=$1)" ]]; then
echo -e "\033[34;1mINFO:\033[0m Removing container $1\033[0m"
(docker container rm --force --volumes "$1") || true
cleanup_volume "$1-${suffix}-data"
fi
}
function cleanup_network {
if [[ "$(docker network ls -q -f name=$1)" ]]; then
echo -e "\033[34;1mINFO:\033[0m Removing network $1\033[0m"
(docker network rm "$1") || true
fi
}
function cleanup {
if [[ "$DETACH" != "true" ]] || [[ "$1" == "1" ]]; then
echo -e "\033[34;1mINFO:\033[0m clean the node and volume on startup (1) OR on exit if not detached\033[0m"
cleanup_node "$NODE_NAME"
fi
if [[ "$DETACH" != "true" ]]; then
echo -e "\033[34;1mINFO:\033[0m clean the network if not detached (start and exit)\033[0m"
cleanup_network "$NETWORK_NAME"
fi
};
trap "cleanup 0" EXIT
if [[ "$CLEANUP" == "true" ]]; then
trap - EXIT
if [[ -z "$(docker network ls -q -f name=${NETWORK_NAME})" ]]; then
echo -e "\033[34;1mINFO:\033[0m $NETWORK_NAME is already deleted\033[0m"
exit 0
fi
containers=$(docker network inspect -f '{{ range $key, $value := .Containers }}{{ printf "%s\n" .Name}}{{ end }}' ${NETWORK_NAME})
while read -r container; do
cleanup_node "$container"
done <<< "$containers"
cleanup_network "$NETWORK_NAME"
echo -e "\033[32;1mSUCCESS:\033[0m Cleaned up and exiting\033[0m"
exit 0
fi
echo -e "\033[34;1mINFO:\033[0m Making sure previous run leftover infrastructure is removed \033[0m"
cleanup 1
echo -e "\033[34;1mINFO:\033[0m Creating network $NETWORK_NAME if it does not exist already \033[0m"
docker network inspect "$NETWORK_NAME" > /dev/null 2>&1 || docker network create "$NETWORK_NAME"
master_node_name=${es_node_name}
cluster_name=${moniker}${suffix}
declare -a volumes
environment=($(cat <<-END
--env node.name=$NODE_NAME
--env cluster.name=$CLUSTER_NAME
--env cluster.initial_master_nodes=$MASTER_NODE_NAME
--env discovery.seed_hosts=$MASTER_NODE_NAME
--env node.name=$es_node_name
--env cluster.name=$cluster_name
--env cluster.initial_master_nodes=$master_node_name
--env discovery.seed_hosts=$master_node_name
--env cluster.routing.allocation.disk.threshold_enabled=false
--env bootstrap.memory_lock=true
--env node.attr.testattr=test
@@ -99,15 +38,9 @@ environment=($(cat <<-END
--env repositories.url.allowed_urls=http://snapshot.test*
END
))
volumes=($(cat <<-END
--volume $volume_name:/usr/share/elasticsearch/data
END
))
if [[ "$ELASTICSEARCH_VERSION" != *oss* ]]; then
if [[ "$TEST_SUITE" == "xpack" ]]; then
environment+=($(cat <<-END
--env ELASTIC_PASSWORD=$ELASTIC_PASSWORD
--env ELASTIC_PASSWORD=$elastic_password
--env xpack.license.self_generated.type=trial
--env xpack.security.enabled=true
--env xpack.security.http.ssl.enabled=true
@@ -122,56 +55,61 @@ if [[ "$ELASTICSEARCH_VERSION" != *oss* ]]; then
END
))
volumes+=($(cat <<-END
--volume $SSL_CERT:/usr/share/elasticsearch/config/certs/testnode.crt
--volume $SSL_KEY:/usr/share/elasticsearch/config/certs/testnode.key
--volume $SSL_CA:/usr/share/elasticsearch/config/certs/ca.crt
--volume $ssl_cert:/usr/share/elasticsearch/config/certs/testnode.crt
--volume $ssl_key:/usr/share/elasticsearch/config/certs/testnode.key
--volume $ssl_ca:/usr/share/elasticsearch/config/certs/ca.crt
END
))
fi
url="http://$NODE_NAME"
if [[ "$ELASTICSEARCH_VERSION" != *oss* ]]; then
url="https://elastic:$ELASTIC_PASSWORD@$NODE_NAME"
cert_validation_flags=""
if [[ "$TEST_SUITE" == "xpack" ]]; then
cert_validation_flags="--insecure --cacert /usr/share/elasticsearch/config/certs/ca.crt --resolve ${es_node_name}:443:127.0.0.1"
fi
echo -e "\033[34;1mINFO:\033[0m Starting container $NODE_NAME \033[0m"
set -x
docker run \
--name "$NODE_NAME" \
--network "$NETWORK_NAME" \
--env ES_JAVA_OPTS=-"Xms1g -Xmx1g" \
"${environment[@]}" \
"${volumes[@]}" \
--publish "$HTTP_PORT":9200 \
--ulimit nofile=65536:65536 \
--ulimit memlock=-1:-1 \
--detach="$DETACH" \
--health-cmd="curl --silent --insecure --fail $url:9200/_cluster/health || exit 1" \
--health-interval=2s \
--health-retries=20 \
--health-timeout=2s \
--rm \
docker.elastic.co/elasticsearch/"$ELASTICSEARCH_VERSION";
set +x
NUMBER_OF_NODES=${NUMBER_OF_NODES-1}
http_port=9200
for (( i=0; i<$NUMBER_OF_NODES; i++, http_port++ )); do
node_name=${es_node_name}$i
node_url=${external_elasticsearch_url/9200/${http_port}}$i
if [[ "$i" == "0" ]]; then node_name=$es_node_name; fi
environment+=($(cat <<-END
--env node.name=$node_name
END
))
echo "$i: $http_port $node_url "
volume_name=${node_name}-${suffix}-data
volumes+=($(cat <<-END
--volume $volume_name:/usr/share/elasticsearch/data${i}
END
))
if [[ "$DETACH" == "true" ]]; then
until [[ "$(docker inspect -f "{{.State.Health.Status}}" ${NODE_NAME})" != "starting" ]]; do
sleep 2;
echo ""
echo -e "\033[34;1mINFO:\033[0m waiting for node $NODE_NAME to be up\033[0m"
done;
# Always show the node getting started logs, this is very useful both on CI as well as while developing
docker logs "$NODE_NAME"
if [[ "$(docker inspect -f "{{.State.Health.Status}}" ${NODE_NAME})" != "healthy" ]]; then
cleanup 1
echo
echo -e "\033[31;1mERROR:\033[0m Failed to start ${ELASTICSEARCH_VERSION} in detached mode beyond health checks\033[0m"
echo -e "\033[31;1mERROR:\033[0m dumped the docker log before shutting the node down\033[0m"
exit 1
else
echo
echo -e "\033[32;1mSUCCESS:\033[0m Detached and healthy: ${NODE_NAME} on docker network: ${NETWORK_NAME}\033[0m"
echo -e "\033[32;1mSUCCESS:\033[0m Running on: ${url/$NODE_NAME/localhost}:${HTTP_PORT}\033[0m"
exit 0
# make sure we detach for all but the last node if DETACH=false (default) so all nodes are started
local_detach="true"
if [[ "$i" == "$((NUMBER_OF_NODES-1))" ]]; then local_detach=$DETACH; fi
echo -e "\033[34;1mINFO:\033[0m Starting container $node_name \033[0m"
set -x
docker run \
--name "$node_name" \
--network "$network_name" \
--env "ES_JAVA_OPTS=-Xms1g -Xmx1g" \
"${environment[@]}" \
"${volumes[@]}" \
--publish "$http_port":9200 \
--ulimit nofile=65536:65536 \
--ulimit memlock=-1:-1 \
--detach="$local_detach" \
--health-cmd="curl $cert_validation_flags --fail $elasticsearch_url/_cluster/health || exit 1" \
--health-interval=2s \
--health-retries=20 \
--health-timeout=2s \
--rm \
docker.elastic.co/elasticsearch/"$elasticsearch_container";
set +x
if wait_for_container "$es_node_name" "$network_name"; then
echo -e "\033[32;1mSUCCESS:\033[0m Running on: $node_url\033[0m"
fi
fi
done

.ci/run-repository.sh Executable file

@@ -0,0 +1,46 @@
#!/usr/bin/env bash
# Parameters available to this script:
# STACK_VERSION -- version, e.g. Major.Minor.Patch(-Prerelease)
# TEST_SUITE -- which test suite to run: oss or xpack
# ELASTICSEARCH_URL -- the URL at which Elasticsearch is reachable; a default is composed based on STACK_VERSION and TEST_SUITE
# NODE_JS_VERSION -- Node.js version (defined in test-matrix.yml; a default is hardcoded here)
script_path=$(dirname $(realpath -s $0))
source $script_path/functions/imports.sh
set -euo pipefail
NODE_JS_VERSION=${NODE_JS_VERSION-12}
ELASTICSEARCH_URL=${ELASTICSEARCH_URL-"$elasticsearch_url"}
elasticsearch_container=${elasticsearch_container-}
echo -e "\033[34;1mINFO:\033[0m VERSION ${STACK_VERSION}\033[0m"
echo -e "\033[34;1mINFO:\033[0m TEST_SUITE ${TEST_SUITE}\033[0m"
echo -e "\033[34;1mINFO:\033[0m URL ${ELASTICSEARCH_URL}\033[0m"
echo -e "\033[34;1mINFO:\033[0m CONTAINER ${elasticsearch_container}\033[0m"
echo -e "\033[34;1mINFO:\033[0m NODE_JS_VERSION ${NODE_JS_VERSION}\033[0m"
echo -e "\033[1m>>>>> Build docker container >>>>>>>>>>>>>>>>>>>>>>>>>>>>>\033[0m"
docker build \
--file .ci/Dockerfile \
--tag elastic/elasticsearch-js \
--build-arg NODE_JS_VERSION=${NODE_JS_VERSION} \
.
echo -e "\033[1m>>>>> NPM run test:integration >>>>>>>>>>>>>>>>>>>>>>>>>>>>>\033[0m"
repo=$(realpath $(dirname $(realpath -s $0))/../)
run_script_args=""
if [[ "$NODE_JS_VERSION" == "8" ]]; then
run_script_args="-- --node-arg=--harmony-async-iteration"
fi
docker run \
--network=${network_name} \
--env "TEST_ES_SERVER=${ELASTICSEARCH_URL}" \
--volume $repo:/usr/src/app \
--volume /usr/src/app/node_modules \
--name elasticsearch-js \
--rm \
elastic/elasticsearch-js \
npm run test:integration ${run_script_args}


@@ -1,59 +1,23 @@
#!/usr/bin/env bash
#
# Runs the client tests via Docker with the expectation that the required
# environment variables have already been exported before running this script.
#
# The required environment variables include:
#
# - $ELASTICSEARCH_VERSION
# - $NODE_JS_VERSION
# - $TEST_SUITE
#
# Version 1.1
# - Moved to .ci folder and separated out `run-repository.sh`
# - Add `$RUNSCRIPTS` env var for running Elasticsearch-dependent products
script_path=$(dirname $(realpath -s $0))
source $script_path/functions/imports.sh
set -euo pipefail
set -eo pipefail
echo -e "\033[1m>>>>> Start [$STACK_VERSION container] >>>>>>>>>>>>>>>>>>>>>>>>>>>>>\033[0m"
DETACH=true bash .ci/run-elasticsearch.sh
set +x
export VAULT_TOKEN=$(vault write -field=token auth/approle/login role_id="$VAULT_ROLE_ID" secret_id="$VAULT_SECRET_ID")
export CODECOV_TOKEN=$(vault read -field=token secret/clients-ci/elasticsearch-js/codecov)
unset VAULT_ROLE_ID VAULT_SECRET_ID VAULT_TOKEN
set -x
docker build \
--file .ci/Dockerfile \
--tag elastic/elasticsearch-js \
--build-arg NODE_JS_VERSION=${NODE_JS_VERSION} \
.
NODE_NAME="es1"
repo=$(pwd)
testnodecrt="/.ci/certs/testnode.crt"
testnodekey="/.ci/certs/testnode.key"
cacrt="/.ci/certs/ca.crt"
elasticsearch_image="elasticsearch"
elasticsearch_url="https://elastic:changeme@${NODE_NAME}:9200"
if [[ $TEST_SUITE != "xpack" ]]; then
elasticsearch_image="elasticsearch-oss"
elasticsearch_url="http://${NODE_NAME}:9200"
if [[ -n "$RUNSCRIPTS" ]]; then
for RUNSCRIPT in ${RUNSCRIPTS//,/ } ; do
echo -e "\033[1m>>>>> Running run-$RUNSCRIPT.sh >>>>>>>>>>>>>>>>>>>>>>>>>>>>>\033[0m"
CONTAINER_NAME=${RUNSCRIPT} \
DETACH=true \
bash .ci/run-${RUNSCRIPT}.sh
done
fi
ELASTICSEARCH_VERSION="${elasticsearch_image}:${ELASTICSEARCH_VERSION}" \
NODE_NAME="${NODE_NAME}" \
NETWORK_NAME="esnet" \
DETACH=true \
SSL_CERT="${repo}${testnodecrt}" \
SSL_KEY="${repo}${testnodekey}" \
SSL_CA="${repo}${cacrt}" \
bash .ci/run-elasticsearch.sh
docker run \
--network=esnet \
--env "TEST_ES_SERVER=${elasticsearch_url}" \
--env "CODECOV_TOKEN" \
--volume $repo:/usr/src/app \
--volume /usr/src/app/node_modules \
--name elasticsearch-js \
--rm \
elastic/elasticsearch-js \
npm run ci
echo -e "\033[1m>>>>> Repository specific tests >>>>>>>>>>>>>>>>>>>>>>>>>>>>>\033[0m"
bash .ci/run-repository.sh


@@ -1,6 +1,6 @@
---
ELASTICSEARCH_VERSION:
- 7.5.0
STACK_VERSION:
- 7.6-SNAPSHOT
NODE_JS_VERSION:
- 12

.github/workflows/backport.yml Normal file

@@ -0,0 +1,16 @@
name: Backport
on:
pull_request:
types:
- closed
- labeled
jobs:
backport:
runs-on: ubuntu-latest
name: Backport
steps:
- name: Backport
uses: tibdex/backport@v1
with:
github_token: ${{ secrets.GITHUB_TOKEN }}

.github/workflows/nodejs.yml Normal file

@@ -0,0 +1,120 @@
name: Node CI
on: [push]
jobs:
test:
name: Test
runs-on: ${{ matrix.os }}
strategy:
matrix:
node-version: [10.x, 12.x, 13.x]
os: [ubuntu-latest, windows-latest, macOS-latest]
steps:
- uses: actions/checkout@v2
- name: Use Node.js ${{ matrix.node-version }}
uses: actions/setup-node@v1
with:
node-version: ${{ matrix.node-version }}
- name: Install
run: |
npm install
- name: Lint
run: |
npm run lint
- name: Unit test
run: |
npm run test:unit
- name: Behavior test
run: |
npm run test:behavior
- name: Type Definitions
run: |
npm run test:types
test-node-v8:
name: Test
runs-on: ${{ matrix.os }}
strategy:
matrix:
node-version: [8.x]
os: [ubuntu-latest, windows-latest, macOS-latest]
steps:
- uses: actions/checkout@v2
- name: Use Node.js ${{ matrix.node-version }}
uses: actions/setup-node@v1
with:
node-version: ${{ matrix.node-version }}
- name: Install
run: |
npm install
- name: Test
run: |
npm run test:unit -- --node-arg=--harmony-async-iteration
code-coverage:
name: Code coverage
runs-on: ubuntu-latest
strategy:
matrix:
node-version: [12.x]
steps:
- uses: actions/checkout@v2
- name: Use Node.js ${{ matrix.node-version }}
uses: actions/setup-node@v1
with:
node-version: ${{ matrix.node-version }}
- name: Install
run: |
npm install
- name: Code coverage
run: |
npm run test:coverage
- name: Upload coverage to Codecov
uses: codecov/codecov-action@v1
with:
file: ./coverage.lcov
fail_ci_if_error: true
license:
name: License check
runs-on: ubuntu-latest
strategy:
matrix:
node-version: [12.x]
steps:
- uses: actions/checkout@v2
- name: Use Node.js ${{ matrix.node-version }}
uses: actions/setup-node@v1
with:
node-version: ${{ matrix.node-version }}
- name: Install
run: |
npm install
- name: License checker
run: |
npm run license-checker

.gitignore

@@ -45,6 +45,9 @@ jspm_packages
# vim swap files
*.swp
#Jetbrains editor folder
.idea
package-lock.json
# elasticsearch repo or binary files


@@ -1,27 +0,0 @@
language: node_js
node_js:
- "12"
- "10"
- "8"
cache:
npm: false
os:
- windows
- linux
install:
- npm install
script:
- if [ "$TRAVIS_OS_NAME" = "linux" ]; then npm run license-checker; fi
- npm run lint
- npm run test:coverage
- npm run test:types
notifications:
email:
on_success: never
on_failure: always


@@ -187,7 +187,7 @@
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Copyright 2020 Elastic and contributors
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.


@@ -0,0 +1,76 @@
// Licensed to Elasticsearch B.V under one or more agreements.
// Elasticsearch B.V licenses this file to you under the Apache 2.0 License.
// See the LICENSE file in the project root for more information
'use strict'
/* eslint camelcase: 0 */
/* eslint no-unused-vars: 0 */
function buildGetScriptContext (opts) {
// eslint-disable-next-line no-unused-vars
const { makeRequest, ConfigurationError, handleError, snakeCaseKeys } = opts
const acceptedQuerystring = [
'pretty',
'human',
'error_trace',
'source',
'filter_path'
]
const snakeCase = {
errorTrace: 'error_trace',
filterPath: 'filter_path'
}
/**
* Perform a get_script_context request
* Returns all script contexts.
*/
return function getScriptContext (params, options, callback) {
options = options || {}
if (typeof options === 'function') {
callback = options
options = {}
}
if (typeof params === 'function' || params == null) {
callback = params
params = {}
options = {}
}
// validate headers object
if (options.headers != null && typeof options.headers !== 'object') {
const err = new ConfigurationError(`Headers should be an object, instead got: ${typeof options.headers}`)
return handleError(err, callback)
}
var warnings = []
var { method, body, ...querystring } = params
querystring = snakeCaseKeys(acceptedQuerystring, snakeCase, querystring, warnings)
var ignore = options.ignore
if (typeof ignore === 'number') {
options.ignore = [ignore]
}
var path = ''
if (method == null) method = 'GET'
path = '/' + '_script_context'
// build request object
const request = {
method,
path,
body: null,
querystring
}
options.warnings = warnings.length === 0 ? null : warnings
return makeRequest(request, options, callback)
}
}
module.exports = buildGetScriptContext
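
A hedged usage sketch for the builder above; the generated APIs are exposed on the client in both camelCase and snake_case forms:

const { Client } = require('@elastic/elasticsearch')
const client = new Client({ node: 'http://localhost:9200' })

// Sends GET /_script_context, as built above.
client.getScriptContext({}, (err, { body }) => {
  if (err) throw err
  console.log(body)
})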


@@ -0,0 +1,76 @@
// Licensed to Elasticsearch B.V under one or more agreements.
// Elasticsearch B.V licenses this file to you under the Apache 2.0 License.
// See the LICENSE file in the project root for more information
'use strict'
/* eslint camelcase: 0 */
/* eslint no-unused-vars: 0 */
function buildGetScriptLanguages (opts) {
// eslint-disable-next-line no-unused-vars
const { makeRequest, ConfigurationError, handleError, snakeCaseKeys } = opts
const acceptedQuerystring = [
'pretty',
'human',
'error_trace',
'source',
'filter_path'
]
const snakeCase = {
errorTrace: 'error_trace',
filterPath: 'filter_path'
}
/**
* Perform a get_script_languages request
* Returns available script types, languages and contexts
*/
return function getScriptLanguages (params, options, callback) {
options = options || {}
if (typeof options === 'function') {
callback = options
options = {}
}
if (typeof params === 'function' || params == null) {
callback = params
params = {}
options = {}
}
// validate headers object
if (options.headers != null && typeof options.headers !== 'object') {
const err = new ConfigurationError(`Headers should be an object, instead got: ${typeof options.headers}`)
return handleError(err, callback)
}
var warnings = []
var { method, body, ...querystring } = params
querystring = snakeCaseKeys(acceptedQuerystring, snakeCase, querystring, warnings)
var ignore = options.ignore
if (typeof ignore === 'number') {
options.ignore = [ignore]
}
var path = ''
if (method == null) method = 'GET'
path = '/' + '_script_language'
// build request object
const request = {
method,
path,
body: null,
querystring
}
options.warnings = warnings.length === 0 ? null : warnings
return makeRequest(request, options, callback)
}
}
module.exports = buildGetScriptLanguages


@@ -32,7 +32,7 @@ function buildIndicesFlushSynced (opts) {
/**
* Perform a indices.flush_synced request
* Performs a synced flush operation on one or more indices.
* Performs a synced flush operation on one or more indices. Synced flush is deprecated and will be removed in 8.0. Use flush instead
* https://www.elastic.co/guide/en/elasticsearch/reference/master/indices-synced-flush-api.html
*/
return function indicesFlushSynced (params, options, callback) {
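
A migration sketch matching the updated comment (method names in the client's camelCase form; the index name is illustrative):

const { Client } = require('@elastic/elasticsearch')
const client = new Client({ node: 'http://localhost:9200' })

client.indices.flushSynced({ index: 'my-index' }) // deprecated, removed in 8.0
client.indices.flush({ index: 'my-index' })       // preferred replacement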


@@ -12,11 +12,12 @@ function buildLicenseGet (opts) {
const { makeRequest, ConfigurationError, handleError, snakeCaseKeys } = opts
const acceptedQuerystring = [
'local'
'local',
'accept_enterprise'
]
const snakeCase = {
acceptEnterprise: 'accept_enterprise'
}
/**
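
A usage sketch for the new parameter; the camelCase spelling is converted by the snakeCase map above:

const { Client } = require('@elastic/elasticsearch')
const client = new Client({ node: 'http://localhost:9200' })

// Sends GET /_license?accept_enterprise=true
client.license.get({ acceptEnterprise: true }, (err, { body }) => {
  if (err) throw err
  console.log(body.license)
})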


@@ -12,7 +12,7 @@ function buildMlDeleteDataFrameAnalytics (opts) {
const { makeRequest, ConfigurationError, handleError, snakeCaseKeys } = opts
const acceptedQuerystring = [
'force'
]
const snakeCase = {


@@ -0,0 +1,77 @@
// Licensed to Elasticsearch B.V under one or more agreements.
// Elasticsearch B.V licenses this file to you under the Apache 2.0 License.
// See the LICENSE file in the project root for more information
'use strict'
/* eslint camelcase: 0 */
/* eslint no-unused-vars: 0 */
function buildMlDeleteTrainedModel (opts) {
// eslint-disable-next-line no-unused-vars
const { makeRequest, ConfigurationError, handleError, snakeCaseKeys } = opts
const acceptedQuerystring = [
]
const snakeCase = {
}
/**
* Perform a ml.delete_trained_model request
* https://www.elastic.co/guide/en/elasticsearch/reference/current/delete-inference.html
*/
return function mlDeleteTrainedModel (params, options, callback) {
options = options || {}
if (typeof options === 'function') {
callback = options
options = {}
}
if (typeof params === 'function' || params == null) {
callback = params
params = {}
options = {}
}
// check required parameters
if (params['model_id'] == null && params['modelId'] == null) {
const err = new ConfigurationError('Missing required parameter: model_id or modelId')
return handleError(err, callback)
}
// validate headers object
if (options.headers != null && typeof options.headers !== 'object') {
const err = new ConfigurationError(`Headers should be an object, instead got: ${typeof options.headers}`)
return handleError(err, callback)
}
var warnings = []
var { method, body, modelId, model_id, ...querystring } = params
querystring = snakeCaseKeys(acceptedQuerystring, snakeCase, querystring, warnings)
var ignore = options.ignore
if (typeof ignore === 'number') {
options.ignore = [ignore]
}
var path = ''
if (method == null) method = 'DELETE'
path = '/' + '_ml' + '/' + 'inference' + '/' + encodeURIComponent(model_id || modelId)
// build request object
const request = {
method,
path,
body: body || '',
querystring
}
options.warnings = warnings.length === 0 ? null : warnings
return makeRequest(request, options, callback)
}
}
module.exports = buildMlDeleteTrainedModel
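
A usage sketch: the required-parameter check above makes a missing model_id fail fast with a ConfigurationError (model name illustrative):

const { Client } = require('@elastic/elasticsearch')
const client = new Client({ node: 'http://localhost:9200' })

// Sends DELETE /_ml/inference/my-model
client.ml.deleteTrainedModel({ model_id: 'my-model' }, (err) => {
  if (err) console.error(err)
})

// Omitting model_id: the callback receives
// 'Missing required parameter: model_id or modelId'
client.ml.deleteTrainedModel({}, (err) => {
  console.error(err.message)
})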


@@ -0,0 +1,76 @@
// Licensed to Elasticsearch B.V under one or more agreements.
// Elasticsearch B.V licenses this file to you under the Apache 2.0 License.
// See the LICENSE file in the project root for more information
'use strict'
/* eslint camelcase: 0 */
/* eslint no-unused-vars: 0 */
function buildMlExplainDataFrameAnalytics (opts) {
// eslint-disable-next-line no-unused-vars
const { makeRequest, ConfigurationError, handleError, snakeCaseKeys } = opts
const acceptedQuerystring = [
]
const snakeCase = {
}
/**
* Perform a ml.explain_data_frame_analytics request
* http://www.elastic.co/guide/en/elasticsearch/reference/current/explain-dfanalytics.html
*/
return function mlExplainDataFrameAnalytics (params, options, callback) {
options = options || {}
if (typeof options === 'function') {
callback = options
options = {}
}
if (typeof params === 'function' || params == null) {
callback = params
params = {}
options = {}
}
// validate headers object
if (options.headers != null && typeof options.headers !== 'object') {
const err = new ConfigurationError(`Headers should be an object, instead got: ${typeof options.headers}`)
return handleError(err, callback)
}
var warnings = []
var { method, body, id, ...querystring } = params
querystring = snakeCaseKeys(acceptedQuerystring, snakeCase, querystring, warnings)
var ignore = options.ignore
if (typeof ignore === 'number') {
options.ignore = [ignore]
}
var path = ''
if ((id) != null) {
if (method == null) method = body == null ? 'GET' : 'POST'
path = '/' + '_ml' + '/' + 'data_frame' + '/' + 'analytics' + '/' + encodeURIComponent(id) + '/' + '_explain'
} else {
if (method == null) method = body == null ? 'GET' : 'POST'
path = '/' + '_ml' + '/' + 'data_frame' + '/' + 'analytics' + '/' + '_explain'
}
// build request object
const request = {
method,
path,
body: body || '',
querystring
}
options.warnings = warnings.length === 0 ? null : warnings
return makeRequest(request, options, callback)
}
}
module.exports = buildMlExplainDataFrameAnalytics


@@ -0,0 +1,83 @@
// Licensed to Elasticsearch B.V under one or more agreements.
// Elasticsearch B.V licenses this file to you under the Apache 2.0 License.
// See the LICENSE file in the project root for more information
'use strict'
/* eslint camelcase: 0 */
/* eslint no-unused-vars: 0 */
function buildMlGetTrainedModels (opts) {
// eslint-disable-next-line no-unused-vars
const { makeRequest, ConfigurationError, handleError, snakeCaseKeys } = opts
const acceptedQuerystring = [
'allow_no_match',
'include_model_definition',
'decompress_definition',
'from',
'size'
]
const snakeCase = {
allowNoMatch: 'allow_no_match',
includeModelDefinition: 'include_model_definition',
decompressDefinition: 'decompress_definition'
}
/**
* Perform a ml.get_trained_models request
* https://www.elastic.co/guide/en/elasticsearch/reference/current/get-inference.html
*/
return function mlGetTrainedModels (params, options, callback) {
options = options || {}
if (typeof options === 'function') {
callback = options
options = {}
}
if (typeof params === 'function' || params == null) {
callback = params
params = {}
options = {}
}
// validate headers object
if (options.headers != null && typeof options.headers !== 'object') {
const err = new ConfigurationError(`Headers should be an object, instead got: ${typeof options.headers}`)
return handleError(err, callback)
}
var warnings = []
var { method, body, modelId, model_id, ...querystring } = params
querystring = snakeCaseKeys(acceptedQuerystring, snakeCase, querystring, warnings)
var ignore = options.ignore
if (typeof ignore === 'number') {
options.ignore = [ignore]
}
var path = ''
if ((model_id || modelId) != null) {
if (method == null) method = 'GET'
path = '/' + '_ml' + '/' + 'inference' + '/' + encodeURIComponent(model_id || modelId)
} else {
if (method == null) method = 'GET'
path = '/' + '_ml' + '/' + 'inference'
}
// build request object
const request = {
method,
path,
body: null,
querystring
}
options.warnings = warnings.length === 0 ? null : warnings
return makeRequest(request, options, callback)
}
}
module.exports = buildMlGetTrainedModels
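
Usage sketch: thanks to the snakeCase map above, the camelCase and snake_case spellings are interchangeable:

const { Client } = require('@elastic/elasticsearch')
const client = new Client({ node: 'http://localhost:9200' })

// Both send GET /_ml/inference?allow_no_match=true&size=10
client.ml.getTrainedModels({ allowNoMatch: true, size: 10 })
client.ml.getTrainedModels({ allow_no_match: true, size: 10 })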


@@ -0,0 +1,79 @@
// Licensed to Elasticsearch B.V under one or more agreements.
// Elasticsearch B.V licenses this file to you under the Apache 2.0 License.
// See the LICENSE file in the project root for more information
'use strict'
/* eslint camelcase: 0 */
/* eslint no-unused-vars: 0 */
function buildMlGetTrainedModelsStats (opts) {
// eslint-disable-next-line no-unused-vars
const { makeRequest, ConfigurationError, handleError, snakeCaseKeys } = opts
const acceptedQuerystring = [
'allow_no_match',
'from',
'size'
]
const snakeCase = {
allowNoMatch: 'allow_no_match'
}
/**
* Perform a ml.get_trained_models_stats request
* https://www.elastic.co/guide/en/elasticsearch/reference/current/get-inference-stats.html
*/
return function mlGetTrainedModelsStats (params, options, callback) {
options = options || {}
if (typeof options === 'function') {
callback = options
options = {}
}
if (typeof params === 'function' || params == null) {
callback = params
params = {}
options = {}
}
// validate headers object
if (options.headers != null && typeof options.headers !== 'object') {
const err = new ConfigurationError(`Headers should be an object, instead got: ${typeof options.headers}`)
return handleError(err, callback)
}
var warnings = []
var { method, body, modelId, model_id, ...querystring } = params
querystring = snakeCaseKeys(acceptedQuerystring, snakeCase, querystring, warnings)
var ignore = options.ignore
if (typeof ignore === 'number') {
options.ignore = [ignore]
}
var path = ''
if ((model_id || modelId) != null) {
if (method == null) method = 'GET'
path = '/' + '_ml' + '/' + 'inference' + '/' + encodeURIComponent(model_id || modelId) + '/' + '_stats'
} else {
if (method == null) method = 'GET'
path = '/' + '_ml' + '/' + 'inference' + '/' + '_stats'
}
// build request object
const request = {
method,
path,
body: null,
querystring
}
options.warnings = warnings.length === 0 ? null : warnings
return makeRequest(request, options, callback)
}
}
module.exports = buildMlGetTrainedModelsStats


@@ -7,7 +7,7 @@
/* eslint camelcase: 0 */
/* eslint no-unused-vars: 0 */
function buildMlEstimateMemoryUsage (opts) {
function buildMlPutTrainedModel (opts) {
// eslint-disable-next-line no-unused-vars
const { makeRequest, ConfigurationError, handleError, snakeCaseKeys } = opts
@@ -20,10 +20,10 @@ function buildMlEstimateMemoryUsage (opts) {
}
/**
* Perform a ml.estimate_memory_usage request
* http://www.elastic.co/guide/en/elasticsearch/reference/current/estimate-memory-usage-dfanalytics.html
* Perform a ml.put_trained_model request
* TODO
*/
return function mlEstimateMemoryUsage (params, options, callback) {
return function mlPutTrainedModel (params, options, callback) {
options = options || {}
if (typeof options === 'function') {
callback = options
@@ -36,6 +36,10 @@ function buildMlEstimateMemoryUsage (opts) {
}
// check required parameters
if (params['model_id'] == null && params['modelId'] == null) {
const err = new ConfigurationError('Missing required parameter: model_id or modelId')
return handleError(err, callback)
}
if (params['body'] == null) {
const err = new ConfigurationError('Missing required parameter: body')
return handleError(err, callback)
@@ -48,7 +52,7 @@ function buildMlEstimateMemoryUsage (opts) {
}
var warnings = []
var { method, body, ...querystring } = params
var { method, body, modelId, model_id, ...querystring } = params
querystring = snakeCaseKeys(acceptedQuerystring, snakeCase, querystring, warnings)
var ignore = options.ignore
@@ -58,8 +62,8 @@ function buildMlEstimateMemoryUsage (opts) {
var path = ''
if (method == null) method = 'POST'
path = '/' + '_ml' + '/' + 'data_frame' + '/' + 'analytics' + '/' + '_estimate_memory_usage'
if (method == null) method = 'PUT'
path = '/' + '_ml' + '/' + 'inference' + '/' + encodeURIComponent(model_id || modelId)
// build request object
const request = {
@@ -74,4 +78,4 @@ function buildMlEstimateMemoryUsage (opts) {
}
}
module.exports = buildMlEstimateMemoryUsage
module.exports = buildMlPutTrainedModel


@@ -15,6 +15,7 @@ function buildRankEval (opts) {
'ignore_unavailable',
'allow_no_indices',
'expand_wildcards',
'search_type',
'pretty',
'human',
'error_trace',
@@ -26,6 +27,7 @@ function buildRankEval (opts) {
ignoreUnavailable: 'ignore_unavailable',
allowNoIndices: 'allow_no_indices',
expandWildcards: 'expand_wildcards',
searchType: 'search_type',
errorTrace: 'error_trace',
filterPath: 'filter_path'
}
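
A usage sketch for the newly accepted search_type parameter (index name and ratings are illustrative; body shape per the rank_eval API):

const { Client } = require('@elastic/elasticsearch')
const client = new Client({ node: 'http://localhost:9200' })

client.rankEval({
  index: 'my-index',
  searchType: 'dfs_query_then_fetch', // sent as search_type
  body: {
    requests: [{
      id: 'query_1',
      request: { query: { match: { title: 'elasticsearch' } } },
      ratings: [{ _index: 'my-index', _id: '1', rating: 1 }]
    }],
    metric: { precision: { k: 10 } }
  }
})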


@@ -21,7 +21,7 @@ function buildSlmDeleteLifecycle (opts) {
/**
* Perform a slm.delete_lifecycle request
* https://www.elastic.co/guide/en/elasticsearch/reference/current/slm-api-delete.html
* https://www.elastic.co/guide/en/elasticsearch/reference/current/slm-api-delete-policy.html
*/
return function slmDeleteLifecycle (params, options, callback) {
options = options || {}


@@ -21,7 +21,7 @@ function buildSlmExecuteLifecycle (opts) {
/**
* Perform a slm.execute_lifecycle request
* https://www.elastic.co/guide/en/elasticsearch/reference/current/slm-api-execute.html
* https://www.elastic.co/guide/en/elasticsearch/reference/current/slm-api-execute-policy.html
*/
return function slmExecuteLifecycle (params, options, callback) {
options = options || {}


@@ -21,7 +21,7 @@ function buildSlmGetLifecycle (opts) {
/**
* Perform a slm.get_lifecycle request
* https://www.elastic.co/guide/en/elasticsearch/reference/current/slm-api-get.html
* https://www.elastic.co/guide/en/elasticsearch/reference/current/slm-api-get-policy.html
*/
return function slmGetLifecycle (params, options, callback) {
options = options || {}


@@ -21,7 +21,7 @@ function buildSlmGetStats (opts) {
/**
* Perform a slm.get_stats request
* https://www.elastic.co/guide/en/elasticsearch/reference/master/slm-get-stats.html
* https://www.elastic.co/guide/en/elasticsearch/reference/master/slm-api-get-stats.html
*/
return function slmGetStats (params, options, callback) {
options = options || {}

api/api/slm.get_status.js Normal file

@@ -0,0 +1,71 @@
// Licensed to Elasticsearch B.V under one or more agreements.
// Elasticsearch B.V licenses this file to you under the Apache 2.0 License.
// See the LICENSE file in the project root for more information
'use strict'
/* eslint camelcase: 0 */
/* eslint no-unused-vars: 0 */
function buildSlmGetStatus (opts) {
// eslint-disable-next-line no-unused-vars
const { makeRequest, ConfigurationError, handleError, snakeCaseKeys } = opts
const acceptedQuerystring = [
]
const snakeCase = {
}
/**
* Perform a slm.get_status request
* https://www.elastic.co/guide/en/elasticsearch/reference/current/slm-get-status.html
*/
return function slmGetStatus (params, options, callback) {
options = options || {}
if (typeof options === 'function') {
callback = options
options = {}
}
if (typeof params === 'function' || params == null) {
callback = params
params = {}
options = {}
}
// validate headers object
if (options.headers != null && typeof options.headers !== 'object') {
const err = new ConfigurationError(`Headers should be an object, instead got: ${typeof options.headers}`)
return handleError(err, callback)
}
var warnings = []
var { method, body, ...querystring } = params
querystring = snakeCaseKeys(acceptedQuerystring, snakeCase, querystring, warnings)
var ignore = options.ignore
if (typeof ignore === 'number') {
options.ignore = [ignore]
}
var path = ''
if (method == null) method = 'GET'
path = '/' + '_slm' + '/' + 'status'
// build request object
const request = {
method,
path,
body: null,
querystring
}
options.warnings = warnings.length === 0 ? null : warnings
return makeRequest(request, options, callback)
}
}
module.exports = buildSlmGetStatus
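For reference, a minimal usage sketch of the new API, assuming a locally running cluster. The method is exposed both as `slm.get_status` and `slm.getStatus`:

[source,js]
----
const { Client } = require('@elastic/elasticsearch')
const client = new Client({ node: 'http://localhost:9200' })

// GET /_slm/status
client.slm.getStatus()
  .then(({ body }) => console.log(body)) // e.g. { operation_mode: 'RUNNING' }
  .catch(console.log)
----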

View File

@ -21,7 +21,7 @@ function buildSlmPutLifecycle (opts) {
/**
* Perform a slm.put_lifecycle request
* https://www.elastic.co/guide/en/elasticsearch/reference/current/slm-api-put.html
* https://www.elastic.co/guide/en/elasticsearch/reference/current/slm-api-put-policy.html
*/
return function slmPutLifecycle (params, options, callback) {
options = options || {}

71
api/api/slm.start.js Normal file
View File

@ -0,0 +1,71 @@
// Licensed to Elasticsearch B.V under one or more agreements.
// Elasticsearch B.V licenses this file to you under the Apache 2.0 License.
// See the LICENSE file in the project root for more information
'use strict'
/* eslint camelcase: 0 */
/* eslint no-unused-vars: 0 */
function buildSlmStart (opts) {
// eslint-disable-next-line no-unused-vars
const { makeRequest, ConfigurationError, handleError, snakeCaseKeys } = opts
const acceptedQuerystring = [
]
const snakeCase = {
}
/**
* Perform a slm.start request
* https://www.elastic.co/guide/en/elasticsearch/reference/current/slm-start.html
*/
return function slmStart (params, options, callback) {
options = options || {}
if (typeof options === 'function') {
callback = options
options = {}
}
if (typeof params === 'function' || params == null) {
callback = params
params = {}
options = {}
}
// validate headers object
if (options.headers != null && typeof options.headers !== 'object') {
const err = new ConfigurationError(`Headers should be an object, instead got: ${typeof options.headers}`)
return handleError(err, callback)
}
var warnings = []
var { method, body, ...querystring } = params
querystring = snakeCaseKeys(acceptedQuerystring, snakeCase, querystring, warnings)
var ignore = options.ignore
if (typeof ignore === 'number') {
options.ignore = [ignore]
}
var path = ''
if (method == null) method = 'POST'
path = '/' + '_slm' + '/' + 'start'
// build request object
const request = {
method,
path,
body: body || '',
querystring
}
options.warnings = warnings.length === 0 ? null : warnings
return makeRequest(request, options, callback)
}
}
module.exports = buildSlmStart

71
api/api/slm.stop.js Normal file
View File

@ -0,0 +1,71 @@
// Licensed to Elasticsearch B.V under one or more agreements.
// Elasticsearch B.V licenses this file to you under the Apache 2.0 License.
// See the LICENSE file in the project root for more information
'use strict'
/* eslint camelcase: 0 */
/* eslint no-unused-vars: 0 */
function buildSlmStop (opts) {
// eslint-disable-next-line no-unused-vars
const { makeRequest, ConfigurationError, handleError, snakeCaseKeys } = opts
const acceptedQuerystring = [
]
const snakeCase = {
}
/**
* Perform a slm.stop request
* https://www.elastic.co/guide/en/elasticsearch/reference/current/slm-stop.html
*/
return function slmStop (params, options, callback) {
options = options || {}
if (typeof options === 'function') {
callback = options
options = {}
}
if (typeof params === 'function' || params == null) {
callback = params
params = {}
options = {}
}
// validate headers object
if (options.headers != null && typeof options.headers !== 'object') {
const err = new ConfigurationError(`Headers should be an object, instead got: ${typeof options.headers}`)
return handleError(err, callback)
}
var warnings = []
var { method, body, ...querystring } = params
querystring = snakeCaseKeys(acceptedQuerystring, snakeCase, querystring, warnings)
var ignore = options.ignore
if (typeof ignore === 'number') {
options.ignore = [ignore]
}
var path = ''
if (method == null) method = 'POST'
path = '/' + '_slm' + '/' + 'stop'
// build request object
const request = {
method,
path,
body: body || '',
querystring
}
options.warnings = warnings.length === 0 ? null : warnings
return makeRequest(request, options, callback)
}
}
module.exports = buildSlmStop
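Similarly, a hedged sketch of stopping and restarting the snapshot lifecycle management service with the two new methods:

[source,js]
----
const { Client } = require('@elastic/elasticsearch')
const client = new Client({ node: 'http://localhost:9200' })

async function run () {
  await client.slm.stop()  // POST /_slm/stop
  await client.slm.start() // POST /_slm/start
}

run().catch(console.log)
----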

View File

@ -12,14 +12,17 @@ function buildTransformStopTransform (opts) {
const { makeRequest, ConfigurationError, handleError, snakeCaseKeys } = opts
const acceptedQuerystring = [
'force',
'wait_for_completion',
'timeout',
'allow_no_match'
'allow_no_match',
'wait_for_checkpoint'
]
const snakeCase = {
waitForCompletion: 'wait_for_completion',
allowNoMatch: 'allow_no_match'
allowNoMatch: 'allow_no_match',
waitForCheckpoint: 'wait_for_checkpoint'
}
/**

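The new parameter can be passed in either naming style, as the `snakeCase` map above shows. A hedged usage sketch (`my-transform` is an illustrative transform id):

[source,js]
----
const { Client } = require('@elastic/elasticsearch')
const client = new Client({ node: 'http://localhost:9200' })

client.transform.stopTransform({
  transform_id: 'my-transform',
  waitForCheckpoint: true, // serialized as wait_for_checkpoint
  waitForCompletion: true
})
  .then(({ body }) => console.log(body))
  .catch(console.log)
----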
View File

@ -113,6 +113,10 @@ function ESAPI (opts) {
get: lazyLoad('get', opts),
get_script: lazyLoad('get_script', opts),
getScript: lazyLoad('get_script', opts),
get_script_context: lazyLoad('get_script_context', opts),
getScriptContext: lazyLoad('get_script_context', opts),
get_script_languages: lazyLoad('get_script_languages', opts),
getScriptLanguages: lazyLoad('get_script_languages', opts),
get_source: lazyLoad('get_source', opts),
getSource: lazyLoad('get_source', opts),
graph: {
@ -254,10 +258,12 @@ function ESAPI (opts) {
deleteJob: lazyLoad('ml.delete_job', opts),
delete_model_snapshot: lazyLoad('ml.delete_model_snapshot', opts),
deleteModelSnapshot: lazyLoad('ml.delete_model_snapshot', opts),
estimate_memory_usage: lazyLoad('ml.estimate_memory_usage', opts),
estimateMemoryUsage: lazyLoad('ml.estimate_memory_usage', opts),
delete_trained_model: lazyLoad('ml.delete_trained_model', opts),
deleteTrainedModel: lazyLoad('ml.delete_trained_model', opts),
evaluate_data_frame: lazyLoad('ml.evaluate_data_frame', opts),
evaluateDataFrame: lazyLoad('ml.evaluate_data_frame', opts),
explain_data_frame_analytics: lazyLoad('ml.explain_data_frame_analytics', opts),
explainDataFrameAnalytics: lazyLoad('ml.explain_data_frame_analytics', opts),
find_file_structure: lazyLoad('ml.find_file_structure', opts),
findFileStructure: lazyLoad('ml.find_file_structure', opts),
flush_job: lazyLoad('ml.flush_job', opts),
@ -293,6 +299,10 @@ function ESAPI (opts) {
getOverallBuckets: lazyLoad('ml.get_overall_buckets', opts),
get_records: lazyLoad('ml.get_records', opts),
getRecords: lazyLoad('ml.get_records', opts),
get_trained_models: lazyLoad('ml.get_trained_models', opts),
getTrainedModels: lazyLoad('ml.get_trained_models', opts),
get_trained_models_stats: lazyLoad('ml.get_trained_models_stats', opts),
getTrainedModelsStats: lazyLoad('ml.get_trained_models_stats', opts),
info: lazyLoad('ml.info', opts),
open_job: lazyLoad('ml.open_job', opts),
openJob: lazyLoad('ml.open_job', opts),
@ -314,6 +324,8 @@ function ESAPI (opts) {
putFilter: lazyLoad('ml.put_filter', opts),
put_job: lazyLoad('ml.put_job', opts),
putJob: lazyLoad('ml.put_job', opts),
put_trained_model: lazyLoad('ml.put_trained_model', opts),
putTrainedModel: lazyLoad('ml.put_trained_model', opts),
revert_model_snapshot: lazyLoad('ml.revert_model_snapshot', opts),
revertModelSnapshot: lazyLoad('ml.revert_model_snapshot', opts),
set_upgrade_mode: lazyLoad('ml.set_upgrade_mode', opts),
@ -454,8 +466,12 @@ function ESAPI (opts) {
getLifecycle: lazyLoad('slm.get_lifecycle', opts),
get_stats: lazyLoad('slm.get_stats', opts),
getStats: lazyLoad('slm.get_stats', opts),
get_status: lazyLoad('slm.get_status', opts),
getStatus: lazyLoad('slm.get_status', opts),
put_lifecycle: lazyLoad('slm.put_lifecycle', opts),
putLifecycle: lazyLoad('slm.put_lifecycle', opts)
putLifecycle: lazyLoad('slm.put_lifecycle', opts),
start: lazyLoad('slm.start', opts),
stop: lazyLoad('slm.stop', opts)
},
snapshot: {
cleanup_repository: lazyLoad('snapshot.cleanup_repository', opts),

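Every endpoint is registered twice, once snake_cased and once camelCased, and both names point at the same lazily loaded implementation. A hedged sketch using the newly added methods:

[source,js]
----
const { Client } = require('@elastic/elasticsearch')
const client = new Client({ node: 'http://localhost:9200' })

// These two calls are equivalent:
client.getScriptLanguages()
  .then(({ body }) => console.log(body))
  .catch(console.log)
client.get_script_languages()
  .then(({ body }) => console.log(body))
  .catch(console.log)

// The same applies to the namespaced APIs:
client.ml.getTrainedModels({ allow_no_match: true })
  .then(({ body }) => console.log(body))
  .catch(console.log)
----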
View File

@ -87,7 +87,7 @@ export interface CatHelp extends Generic {
export interface CatIndices extends Generic {
index?: string | string[];
format?: string;
bytes?: 'b' | 'k' | 'm' | 'g';
bytes?: 'b' | 'k' | 'kb' | 'm' | 'mb' | 'g' | 'gb' | 't' | 'tb' | 'p' | 'pb';
local?: boolean;
master_timeout?: string;
h?: string | string[];
@ -512,6 +512,12 @@ export interface GetScript extends Generic {
master_timeout?: string;
}
export interface GetScriptContext extends Generic {
}
export interface GetScriptLanguages extends Generic {
}
export interface GetSource extends Generic {
id: string;
index: string;
@ -1050,6 +1056,7 @@ export interface RankEval<T = any> extends Generic {
ignore_unavailable?: boolean;
allow_no_indices?: boolean;
expand_wildcards?: 'open' | 'closed' | 'none' | 'all';
search_type?: 'query_then_fetch' | 'dfs_query_then_fetch';
body: T;
}
@ -1501,6 +1508,7 @@ export interface LicenseDelete extends Generic {
export interface LicenseGet extends Generic {
local?: boolean;
accept_enterprise?: boolean;
}
export interface LicenseGetBasicStatus extends Generic {
@ -1551,6 +1559,7 @@ export interface MlDeleteCalendarJob extends Generic {
export interface MlDeleteDataFrameAnalytics extends Generic {
id: string;
force?: boolean;
}
export interface MlDeleteDatafeed extends Generic {
@ -1583,14 +1592,19 @@ export interface MlDeleteModelSnapshot extends Generic {
snapshot_id: string;
}
export interface MlEstimateMemoryUsage<T = any> extends Generic {
body: T;
export interface MlDeleteTrainedModel extends Generic {
model_id: string;
}
export interface MlEvaluateDataFrame<T = any> extends Generic {
body: T;
}
export interface MlExplainDataFrameAnalytics<T = any> extends Generic {
id?: string;
body?: T;
}
export interface MlFindFileStructure<T = any> extends Generic {
lines_to_sample?: number;
line_merge_size_limit?: number;
@ -1754,6 +1768,22 @@ export interface MlGetRecords<T = any> extends Generic {
body?: T;
}
export interface MlGetTrainedModels extends Generic {
model_id?: string;
allow_no_match?: boolean;
include_model_definition?: boolean;
decompress_definition?: boolean;
from?: number;
size?: number;
}
export interface MlGetTrainedModelsStats extends Generic {
model_id?: string;
allow_no_match?: boolean;
from?: number;
size?: number;
}
export interface MlInfo extends Generic {
}
@ -1807,6 +1837,11 @@ export interface MlPutJob<T = any> extends Generic {
body: T;
}
export interface MlPutTrainedModel<T = any> extends Generic {
model_id: string;
body: T;
}
export interface MlRevertModelSnapshot<T = any> extends Generic {
job_id: string;
snapshot_id: string;
@ -2067,11 +2102,20 @@ export interface SlmGetLifecycle extends Generic {
export interface SlmGetStats extends Generic {
}
export interface SlmGetStatus extends Generic {
}
export interface SlmPutLifecycle<T = any> extends Generic {
policy_id: string;
body?: T;
}
export interface SlmStart extends Generic {
}
export interface SlmStop extends Generic {
}
export interface SqlClearCursor<T = any> extends Generic {
body: T;
}
@ -2124,9 +2168,11 @@ export interface TransformStartTransform extends Generic {
export interface TransformStopTransform extends Generic {
transform_id: string;
force?: boolean;
wait_for_completion?: boolean;
timeout?: string;
allow_no_match?: boolean;
wait_for_checkpoint?: boolean;
}
export interface TransformUpdateTransform<T = any> extends Generic {

View File

@ -3,7 +3,7 @@ comment: off
coverage:
precision: 2
round: down
range: "90...100"
range: "95...100"
status:
project: yes

View File

@ -1,19 +1,25 @@
[[auth-reference]]
== Authentication
This document contains code snippets to show you how to connect to various Elasticsearch providers.
This document contains code snippets to show you how to connect to various {es}
providers.
=== Elastic Cloud
If you are using https://www.elastic.co/cloud[Elastic Cloud], the client offers a easy way to connect to it via the `cloud` option. +
You must pass the Cloud ID that you can find in the cloud console, then your username and password inside the `auth` option.
If you are using https://www.elastic.co/cloud[Elastic Cloud], the client offers
an easy way to connect to it via the `cloud` option. You must pass the Cloud ID
that you can find in the cloud console, then your username and password inside
the `auth` option.
NOTE: When connecting to Elastic Cloud, the client will automatically enable both request and response compression by default, since it yields significant throughput improvements. +
Moreover, the client will also set the ssl option `secureProtocol` to `TLSv1_2_method` unless specified otherwise.
You can still override this option by configuring them.
NOTE: When connecting to Elastic Cloud, the client will automatically enable
both request and response compression by default, since it yields significant
throughput improvements. Moreover, the client will also set the ssl option
`secureProtocol` to `TLSv1_2_method` unless specified otherwise. You can still
override these options in the client configuration.
IMPORTANT: Do not enable sniffing when using Elastic Cloud, since the nodes are behind a load balancer, Elastic Cloud will take care of everything for you.
IMPORTANT: Do not enable sniffing when using Elastic Cloud, since the nodes are
behind a load balancer; Elastic Cloud takes care of everything for you.
[source,js]
----
@ -29,9 +35,13 @@ const client = new Client({
})
----
=== Basic authentication
You can provide your credentials by passing the `username` and `password` parameters via the `auth` option.
You can provide your credentials by passing the `username` and `password`
parameters via the `auth` option.
NOTE: If you provide both basic authentication credentials and the Api Key configuration, the Api Key will take precedence.
[source,js]
----
@ -45,6 +55,7 @@ const client = new Client({
})
----
Otherwise, you can provide your credentials in the node(s) URL.
[source,js]
@ -55,10 +66,17 @@ const client = new Client({
})
----
=== ApiKey authentication
You can use the https://www.elastic.co/guide/en/elasticsearch/reference/7.x/security-api-create-api-key.html[ApiKey] authentication by passing the `apiKey` parameter via the `auth` option. +
The `apiKey` parameter can be either a base64 encoded string or an object with the values that you can obtain from the https://www.elastic.co/guide/en/elasticsearch/reference/7.x/security-api-create-api-key.html[create api key endpoint].
You can use the
https://www.elastic.co/guide/en/elasticsearch/reference/7.x/security-api-create-api-key.html[ApiKey]
authentication by passing the `apiKey` parameter via the `auth` option. The
`apiKey` parameter can be either a base64 encoded string or an object with the
values that you can obtain from the
https://www.elastic.co/guide/en/elasticsearch/reference/7.x/security-api-create-api-key.html[create api key endpoint].
NOTE: If you provide both basic authentication credentials and the Api Key configuration, the Api Key will take precedence.
[source,js]
----
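// NOTE: a hedged sketch, not the elided original snippet.
// The apiKey can be the base64 encoded string returned by the
// create API key endpoint...
const { Client } = require('@elastic/elasticsearch')
const client = new Client({
  node: 'https://localhost:9200',
  auth: {
    apiKey: 'base64EncodedKey'
  }
})

// ...or an object with the raw `id` and `api_key` values:
const client2 = new Client({
  node: 'https://localhost:9200',
  auth: {
    apiKey: {
      id: 'foo',
      api_key: 'bar'
    }
  }
})
----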
@ -88,7 +106,14 @@ const client = new Client({
=== SSL configuration
Without any additional configuration you can specify `https://` node urls, but the certificates used to sign these requests will not verified (`rejectUnauthorized: false`). To turn on certificate verification you must specify an `ssl` object either in the top level config or in each host config object and set `rejectUnauthorized: true`. The ssl config object can contain many of the same configuration options that https://nodejs.org/api/tls.html#tls_tls_connect_options_callback[tls.connect()] accepts.
Without any additional configuration you can specify `https://` node urls, but
the certificates used to sign these requests will not be verified
(`rejectUnauthorized: false`). To turn on certificate verification, you must
specify an `ssl` object either in the top level config or in each host config
object and set `rejectUnauthorized: true`. The ssl config object can contain
many of the same configuration options that
https://nodejs.org/api/tls.html#tls_tls_connect_options_callback[tls.connect()]
accepts.
[source,js]
----
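// NOTE: a hedged sketch, not the elided original snippet;
// the CA path and credentials are illustrative.
const { Client } = require('@elastic/elasticsearch')
const fs = require('fs')
const client = new Client({
  node: 'https://localhost:9200',
  auth: {
    username: 'elastic',
    password: 'changeme'
  },
  ssl: {
    ca: fs.readFileSync('./cacert.pem'),
    rejectUnauthorized: true
  }
})
----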

View File

@ -1,27 +1,44 @@
[[breaking-changes]]
== Breaking changes coming from the old client
If you were already using the previous version of this client --i.e. the one you used to install with `npm install elasticsearch`-- you will encounter some breaking changes.
If you were already using the previous version of this client, the one you used
to install with `npm install elasticsearch`, you will encounter some breaking
changes.
=== Don't panic!
Every breaking change was carefully weighed, and each is justified. Furthermore, the new codebase has been rewritten with modern JavaScript and has been carefully designed to be easy to maintain.
Every breaking change was carefully weighed, and each is justified. Furthermore,
the new codebase has been rewritten with modern JavaScript and has been
carefully designed to be easy to maintain.
=== Breaking changes
* Minimum supported version of Node.js is `v8`.
* Everything has been rewritten using ES6 classes to help users extend the defaults more easily.
* Everything has been rewritten using ES6 classes to help users extend the
defaults more easily.
* There is no longer an integrated logger. The client now is an event emitter that emits the following events: `request`, `response`, and `error`.
* There is no longer an integrated logger. The client now is an event emitter
that emits the following events: `request`, `response`, and `error`.
* The code is no longer shipped with all the versions of the API, but only that of the packages major version, This means that if you are using Elasticsearch `v6`, you will be required to install `@elastic/elasticsearch@6`, and so on.
* The code is no longer shipped with all the versions of the API, but only that
of the package's major version. This means that if you are using {es} `v6`, you
are required to install `@elastic/elasticsearch@6`, and so on.
* The internals are completely different, so if you used to tweak them a lot, you will need to refactor your code. The public API should be almost the same.
* The internals are completely different, so if you used to tweak them a lot,
you will need to refactor your code. The public API should be almost the same.
* No more browser support, for that will be distributed via another module, `@elastic/elasticsearch-browser`. This module is intended for Node.js only.
* There is no longer browser support; that will be distributed via another
module: `@elastic/elasticsearch-browser`. This module is intended for Node.js
only.
* The returned value of an API call will no longer be the `body`, `statusCode`,
and `headers` for callbacks, and only the `body` for promises. The new returned
value will be a unique object containing the `body`, `statusCode`, `headers`,
`warnings`, and `meta`, for both callback and promises.
* The returned value of an API call will no longer be the `body`, `statusCode`, and `headers` for callbacks and just the `body` for promises. The new returned value will be a unique object containing the `body`, `statusCode`, `headers`, `warnings`, and `meta`, for both callback and promises.
[source,js]
----
@ -53,14 +70,20 @@ client.search({
----
* Errors: there is no longer a custom error class for every HTTP status code (such as `BadRequest` or `NotFound`). There is instead a single `ResponseError`. Each error class has been renamed, and now each is suffixed with `Error` at the end.
* Errors: there is no longer a custom error class for every HTTP status code
(such as `BadRequest` or `NotFound`). There is instead a single `ResponseError`.
Every error class has been renamed, and now each is suffixed with `Error` at the
end.
* Errors that have been removed: `RequestTypeError`, `Generic`, and all the status code specific errors (such as `BadRequest` or `NotFound`).
* Removed errors: `RequestTypeError`, `Generic`, and all the status code
specific errors (such as `BadRequest` or `NotFound`).
* Errors that have been added: `ConfigurationError` (in case of bad configurations) and `ResponseError`, which contains all the data you may need to handle the specific error, such as `statusCode`, `headers`, `body`, and `message`.
* Added errors: `ConfigurationError` (in case of bad configurations) and
`ResponseError` that contains all the data you may need to handle the specific
error, such as `statusCode`, `headers`, `body`, and `message`.
* Errors that has been renamed:
* Renamed errors:
** `RequestTimeout` (408 statusCode) => `TimeoutError`
** `ConnectionFault` => `ConnectionError`
@ -68,9 +91,12 @@ client.search({
** `Serialization` => `SerializationError`
** `Serialization` => `DeserializationError`
* You must specify the port number in the configuration. In the previous version you can specify the host and port in a variety of ways, with the new client there is only one via the `node` parameter.
* You must specify the port number in the configuration. In the previous
version, you could specify the host and port in a variety of ways. With the new
client, there is only one way to do it, via the `node` parameter.
* The `plugins` option has been removed, if you want to extend the client now you should use the `client.extend` API.
* The `plugins` option has been removed. If you want to extend the client now,
you should use the `client.extend` API.
[source,js]
----
@ -84,7 +110,10 @@ const client = new Client({ ... })
client.extend(...)
----
* There is a clear distinction between the API related parameters and the client related configurations, the parameters `ignore`, `headers`, `requestTimeout` and `maxRetries` are no longer part of the API object, and you should specify them in a second option object.
* There is a clear distinction between the API related parameters and the client
related configurations. The parameters `ignore`, `headers`, `requestTimeout` and
`maxRetries` are no longer part of the API object and you need to specify them
in a second option object.
[source,js]
----
@ -121,7 +150,11 @@ client.search({
})
----
* The `transport.request` method will no longer accept the `query` key, but the `querystring` key instead (which can be a string or an object), furthermore, you need to send a bulk-like request, instead of the `body` key, you should use the `bulkBody` key. Also in this method, the client specific parameters should be passed as a second object.
* The `transport.request` method no longer accepts the `query` key. Use the
`querystring` key instead (which can be a string or an object). If you need to
send a bulk-like request, use the `bulkBody` key instead of the `body` key. In
this method, the client specific parameters should be passed as a second
object.
[source,js]
----
@ -168,7 +201,8 @@ client.transport.request({
=== Talk is cheap. Show me the code.
Following you will find a snippet of code with the old client, followed by the same code logic, but with the new client.
You can find a code snippet with the old client below, followed by the same code
logic with the new client.
[source,js]
----

View File

@ -1,14 +1,23 @@
[[child-client]]
== Creating a child client
There are some use cases where you may need multiple instances of the client. You can easily do that by calling `new Client()` as many times as you need, but you will lose all the benefits of using one single client, such as the long living connections and the connection pool handling. +
To avoid this problem the client offers a `child` API, which returns a new client instance that shares the connection pool with the parent client. +
There are some use cases where you may need multiple instances of the client.
You can easily do that by calling `new Client()` as many times as you need, but
you will lose all the benefits of using one single client, such as the long
living connections and the connection pool handling. To avoid this problem the
client offers a `child` API, which returns a new client instance that shares the
connection pool with the parent client.
NOTE: The event emitter is shared between the parent and the child(ren), and if you extend the parent client, the child client will have the same extensions, while if the child client adds an extension, the parent client will not be extended.
NOTE: The event emitter is shared between the parent and the child(ren). If you
extend the parent client, the child client will have the same extensions, while
if the child client adds an extension, the parent client will not be extended.
You can pass to the `child` every client option you would pass to a normal client, but the connection pool specific options (`ssl`, `agent`, `pingTimeout`, `Connection`, and `resurrectStrategy`).
You can pass to the `child` every client option you would pass to a normal
client, except the connection pool specific options (`ssl`, `agent`,
`pingTimeout`, `Connection`, and `resurrectStrategy`).
CAUTION: If you call `close` in any of the parent/child clients, every client will be closed.
CAUTION: If you call `close` in any of the parent/child clients, every client
will be closed.
[source,js]
----
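// NOTE: a hedged sketch, not the elided original snippet.
const { Client } = require('@elastic/elasticsearch')
const client = new Client({ node: 'http://localhost:9200' })

// The child shares the parent's connection pool,
// while accepting its own client options:
const child = client.child({
  headers: { 'x-foo': 'bar' },
  requestTimeout: 1000
})

client.info(console.log)
child.info(console.log)
----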

View File

@ -251,7 +251,7 @@ use the usual client code. In such cases, call `super.method`:
class MyTransport extends Transport {
request (params, options, callback) {
// your code
super.request(params, options, callback)
return super.request(params, options, callback)
}
}
----

View File

@ -1,7 +1,8 @@
[[as_stream_examples]]
== asStream
Instead of getting the parsed body back, you will get the raw Node.js stream of data.
Instead of getting the parsed body back, you will get the raw Node.js stream of
data.
[source,js]
----
@ -76,7 +77,8 @@ async function run () {
run().catch(console.log)
----
TIP: This can be useful if you need to pipe the Elasticsearch's response to a proxy, or send it directly to another source.
TIP: This can be useful if you need to pipe the {es} response to a proxy, or
send it directly to another source.
[source,js]
----
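// NOTE: a hedged sketch, not the elided original snippet.
const { Client } = require('@elastic/elasticsearch')
const client = new Client({ node: 'http://localhost:9200' })

async function run () {
  // With the asStream option the body is a raw stream,
  // ready to be piped to another destination.
  const { body } = await client.search({
    index: 'game-of-thrones',
    body: { query: { match_all: {} } }
  }, { asStream: true })

  body.pipe(process.stdout)
}

run().catch(console.log)
----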

View File

@ -1,8 +1,8 @@
[[bulk_examples]]
== Bulk
The `bulk` API makes it possible to perform many index/delete operations in a single API call. +
This can greatly increase the indexing speed.
The `bulk` API makes it possible to perform many index/delete operations in a
single API call. This can greatly increase the indexing speed.
[source,js]
----
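// NOTE: a hedged sketch, not the elided original snippet.
// The bulk body is a flat array that alternates action metadata
// and document source.
const { Client } = require('@elastic/elasticsearch')
const client = new Client({ node: 'http://localhost:9200' })

async function run () {
  const { body } = await client.bulk({
    refresh: true,
    body: [
      { index: { _index: 'game-of-thrones' } },
      { character: 'Ned Stark', quote: 'Winter is coming.' },
      { index: { _index: 'game-of-thrones' } },
      { character: 'Arya Stark', quote: 'A girl is Arya Stark of Winterfell.' }
    ]
  })
  if (body.errors) console.log('Some operations failed:', body.items)
}

run().catch(console.log)
----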

View File

@ -1,8 +1,9 @@
[[get_examples]]
== Get
The get API allows to get a typed JSON document from the index based on its id. +
The following example gets a JSON document from an index called `game-of-thrones`, under a type called `_doc`, with id valued `'1'`.
The get API allows you to get a typed JSON document from the index based on its
id. The following example gets a JSON document from an index called
`game-of-thrones`, under a type called `_doc`, with id `'1'`.
[source,js]
---------

View File

@ -7,6 +7,10 @@ Following you can find some examples on how to use the client.
* Executing a <<bulk_examples,bulk>> request;
* Executing a <<exists_examples,exists>> request;
* Executing a <<get_examples,get>> request;
* Executing a <<sql_query_examples,sql.query>> request;
* Executing a <<update_examples,update>> request;
* Executing a <<update_by_query_examples,update by query>> request;
* Executing a <<reindex_examples,reindex>> request;
* Use of the <<ignore_examples,ignore>> parameter;
* Executing a <<msearch_examples,msearch>> request;
* How do I <<scroll_examples,scroll>>?
@ -26,3 +30,7 @@ include::search.asciidoc[]
include::suggest.asciidoc[]
include::transport.request.asciidoc[]
include::typescript.asciidoc[]
include::sql.query.asciidoc[]
include::update.asciidoc[]
include::update_by_query.asciidoc[]
include::reindex.asciidoc[]

View File

@ -1,7 +1,8 @@
[[msearch_examples]]
== MSearch
The multi search API allows to execute several search requests within the same API.
The multi search API allows you to execute several search requests within the
same API call.
[source,js]
----
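// NOTE: a hedged sketch, not the elided original snippet.
// The msearch body alternates a header object and its search body.
const { Client } = require('@elastic/elasticsearch')
const client = new Client({ node: 'http://localhost:9200' })

async function run () {
  const { body } = await client.msearch({
    body: [
      { index: 'game-of-thrones' },
      { query: { match: { character: 'stark' } } },
      { index: 'game-of-thrones' },
      { query: { match: { character: 'lannister' } } }
    ]
  })
  console.log(body.responses)
}

run().catch(console.log)
----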

View File

@ -0,0 +1,75 @@
[[reindex_examples]]
== Reindex
The `reindex` API extracts the document source from the source index and indexes the documents into the destination index. You can copy all documents to the destination index, reindex a subset of the documents, or update the source before reindexing it.
In the following example, we have a `game-of-thrones` index which contains different quotes of various characters. We want to create a new index only for the house Stark and remove the `house` field from the document source.
[source,js]
----
'use strict'
const { Client } = require('@elastic/elasticsearch')
const client = new Client({ node: 'http://localhost:9200' })
async function run () {
await client.index({
index: 'game-of-thrones',
body: {
character: 'Ned Stark',
quote: 'Winter is coming.',
house: 'stark'
}
})
await client.index({
index: 'game-of-thrones',
body: {
character: 'Arya Stark',
quote: 'A girl is Arya Stark of Winterfell. And I\'m going home.',
house: 'stark'
}
})
await client.index({
index: 'game-of-thrones',
refresh: true,
body: {
character: 'Tyrion Lannister',
quote: 'A Lannister always pays his debts.',
house: 'lannister'
}
})
await client.reindex({
waitForCompletion: true,
refresh: true,
body: {
source: {
index: 'game-of-thrones',
query: {
match: { character: 'stark' }
}
},
dest: {
index: 'stark-index'
},
script: {
lang: 'painless',
source: 'ctx._source.remove("house")'
}
}
})
const { body } = await client.search({
index: 'stark-index',
body: {
query: { match_all: {} }
}
})
console.log(body.hits.hits)
}
run().catch(console.log)
----

View File

@ -1,13 +1,23 @@
[[scroll_examples]]
== Scroll
While a search request returns a single “page” of results, the scroll API can be used to retrieve large numbers of results (or even all results) from a single search request, in much the same way as you would use a cursor on a traditional database.
While a search request returns a single “page” of results, the scroll API can be
used to retrieve large numbers of results (or even all results) from a single
search request, in much the same way as you would use a cursor on a traditional
database.
Scrolling is not intended for real time user requests, but rather for processing large amounts of data, e.g. in order to reindex the contents of one index into a new index with a different configuration.
Scrolling is not intended for real time user requests, but rather for processing
large amounts of data, e.g. in order to reindex the contents of one index into a
new index with a different configuration.
NOTE: The results that are returned from a scroll request reflect the state of the index at the time that the initial search request was made, like a snapshot in time. Subsequent changes to documents (index, update or delete) will only affect later search requests.
NOTE: The results that are returned from a scroll request reflect the state of
the index at the time that the initial search request was made, like a snapshot
in time. Subsequent changes to documents (index, update or delete) will only
affect later search requests.
In order to use scrolling, the initial search request should specify the scroll parameter in the query string, which tells Elasticsearch how long it should keep the “search context” alive.
In order to use scrolling, the initial search request should specify the scroll
parameter in the query string, which tells Elasticsearch how long it should keep
the “search context” alive.
[source,js]
----
@ -100,7 +110,8 @@ async function run () {
run().catch(console.log)
----
Another cool usage of the `scroll` API can be done with Node.js ≥ 10, by using async iteration!
Another cool usage of the `scroll` API can be done with Node.js ≥ 10, by using
async iteration!
[source,js]
----

View File

@ -1,8 +1,11 @@
[[search_examples]]
== Search
The `search` API allows you to execute a search query and get back search hits that match the query. +
The query can either be provided using a simple https://www.elastic.co/guide/en/elasticsearch/reference/6.6/search-uri-request.html[query string as a parameter], or using a https://www.elastic.co/guide/en/elasticsearch/reference/6.6/search-request-body.html[request body].
The `search` API allows you to execute a search query and get back search hits
that match the query. The query can either be provided using a simple
https://www.elastic.co/guide/en/elasticsearch/reference/6.6/search-uri-request.html[query string as a parameter],
or using a
https://www.elastic.co/guide/en/elasticsearch/reference/6.6/search-request-body.html[request body].
[source,js]
----

View File

@ -0,0 +1,64 @@
[[sql_examples]]
== SQL
Elasticsearch SQL is an X-Pack component that allows SQL-like queries to be executed in real-time against Elasticsearch. Whether using the REST interface, command-line or JDBC, any client can use SQL to search and aggregate data natively inside Elasticsearch. One can think of Elasticsearch SQL as a translator, one that understands both SQL and Elasticsearch and makes it easy to read and process data in real-time, at scale, by leveraging Elasticsearch capabilities.
In the following example, we search all the documents that have the field `house` equal to `stark`, log the result in the tabular view, and then manipulate the result to obtain an object that is easy to navigate.
[source,js]
----
'use strict'
const { Client } = require('@elastic/elasticsearch')
const client = new Client({ node: 'http://localhost:9200' })
async function run () {
await client.index({
index: 'game-of-thrones',
body: {
character: 'Ned Stark',
quote: 'Winter is coming.',
house: 'stark'
}
})
await client.index({
index: 'game-of-thrones',
body: {
character: 'Arya Stark',
quote: 'A girl is Arya Stark of Winterfell. And I\'m going home.',
house: 'stark'
}
})
await client.index({
index: 'game-of-thrones',
refresh: true,
body: {
character: 'Tyrion Lannister',
quote: 'A Lannister always pays his debts.',
house: 'lannister'
}
})
const { body } = await client.sql.query({
body: {
query: "SELECT * FROM \"game-of-thrones\" WHERE house='stark'"
}
})
console.log(body)
const data = body.rows.map(row => {
const obj = {}
for (var i = 0; i < row.length; i++) {
obj[body.columns[i].name] = row[i]
}
return obj
})
console.log(data)
}
run().catch(console.log)
----

View File

@ -0,0 +1,64 @@
[[sql_query_examples]]
== SQL
Elasticsearch SQL is an X-Pack component that allows SQL-like queries to be executed in real-time against Elasticsearch. Whether using the REST interface, command-line or JDBC, any client can use SQL to search and aggregate data natively inside Elasticsearch. One can think of Elasticsearch SQL as a translator, one that understands both SQL and Elasticsearch and makes it easy to read and process data in real-time, at scale, by leveraging Elasticsearch capabilities.
In the following example, we search all the documents that have the field `house` equal to `stark`, log the result in the tabular view, and then manipulate the result to obtain an object that is easy to navigate.
[source,js]
----
'use strict'
const { Client } = require('@elastic/elasticsearch')
const client = new Client({ node: 'http://localhost:9200' })
async function run () {
await client.index({
index: 'game-of-thrones',
body: {
character: 'Ned Stark',
quote: 'Winter is coming.',
house: 'stark'
}
})
await client.index({
index: 'game-of-thrones',
body: {
character: 'Arya Stark',
quote: 'A girl is Arya Stark of Winterfell. And I\'m going home.',
house: 'stark'
}
})
await client.index({
index: 'game-of-thrones',
refresh: true,
body: {
character: 'Tyrion Lannister',
quote: 'A Lannister always pays his debts.',
house: 'lannister'
}
})
const { body } = await client.sql.query({
body: {
query: "SELECT * FROM \"game-of-thrones\" WHERE house='stark'"
}
})
console.log(body)
const data = body.rows.map(row => {
const obj = {}
for (var i = 0; i < row.length; i++) {
obj[body.columns[i].name] = row[i]
}
return obj
})
console.log(data)
}
run().catch(console.log)
----

View File

@ -1,10 +1,11 @@
[[suggest_examples]]
== Suggest
The suggest feature suggests similar looking terms based on a provided text by using a suggester. _Parts of the suggest feature are still under development._
The suggest feature suggests similar looking terms based on a provided text by
using a suggester. _Parts of the suggest feature are still under development._
The suggest request part is defined alongside the query part in a `search` request. +
If the query part is left out, only suggestions are returned.
The suggest request part is defined alongside the query part in a `search`
request. If the query part is left out, only suggestions are returned.
[source,js]
----

View File

@ -1,12 +1,19 @@
[[transport_request_examples]]
== transport.request
It can happen that you need to communicate with Elasticsearch by using an API that is not supported by the client, to mitigate this issue you can directly call `client.transport.request`, which is the internal utility that the client uses to communicate with Elasticsearch when you use an API method.
It can happen that you need to communicate with {es} by using an API that is not
supported by the client. To mitigate this issue, you can directly call
`client.transport.request`, which is the internal utility that the client uses
to communicate with {es} when you use an API method.
NOTE: When using the `transport.request` method you must provide all the parameters needed to perform an HTTP call, such as `method`, `path`, `querystring`, and `body`.
NOTE: When using the `transport.request` method you must provide all the
parameters needed to perform an HTTP call, such as `method`, `path`,
`querystring`, and `body`.
TIP: If you find yourself use this method too often, take in consideration the use of `client.extend`, which will make your code look cleaner and easier to maintain.
TIP: If you find yourself using this method too often, take into consideration
the use of `client.extend`, which will make your code look cleaner and easier to
maintain.
[source,js]
----
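// NOTE: a hedged sketch, not the elided original snippet.
const { Client } = require('@elastic/elasticsearch')
const client = new Client({ node: 'http://localhost:9200' })

async function run () {
  // Every HTTP detail is provided explicitly:
  const { body } = await client.transport.request({
    method: 'POST',
    path: '/game-of-thrones/_search',
    body: { query: { match_all: {} } },
    querystring: { size: '1' }
  })
  console.log(body.hits.hits)
}

run().catch(console.log)
----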

View File

@ -1,9 +1,11 @@
[[typescript_examples]]
== Typescript
The client offers a first-class support for TypeScript, since it ships the type definitions for every exposed API.
The client offers first-class support for TypeScript, since it ships the type
definitions for every exposed API.
NOTE: If you are using TypeScript you will be required to use _snake_case_ style to define the API parameters instead of _camelCase_.
NOTE: If you are using TypeScript you will be required to use _snake_case_ style
to define the API parameters instead of _camelCase_.
[source,ts]
----

View File

@ -0,0 +1,92 @@
[[update_examples]]
== Update
The update API allows updates of a specific document using the given script. +
In the following example, we will index a document that also tracks how many times a character has said the given quote, and then we will update the `times` field.
[source,js]
---------
'use strict'
const { Client } = require('@elastic/elasticsearch')
const client = new Client({ node: 'http://localhost:9200' })
async function run () {
await client.index({
index: 'game-of-thrones',
id: '1',
body: {
character: 'Ned Stark',
quote: 'Winter is coming.',
times: 0
}
})
await client.update({
index: 'game-of-thrones',
id: '1',
body: {
script: {
lang: 'painless',
source: 'ctx._source.times++'
// you can also use parameters
// source: 'ctx._source.times += params.count',
// params: { count: 1 }
}
}
})
const { body } = await client.get({
index: 'game-of-thrones',
id: '1'
})
console.log(body)
}
run().catch(console.log)
---------
With the update API, you can also run a partial update of a document.
[source,js]
---------
'use strict'
const { Client } = require('@elastic/elasticsearch')
const client = new Client({ node: 'http://localhost:9200' })
async function run () {
await client.index({
index: 'game-of-thrones',
id: '1',
body: {
character: 'Ned Stark',
quote: 'Winter is coming.',
isAlive: true
}
})
await client.update({
index: 'game-of-thrones',
id: '1',
body: {
doc: {
isAlive: false
}
}
})
const { body } = await client.get({
index: 'game-of-thrones',
id: '1'
})
console.log(body)
}
run().catch(console.log)
---------

View File

@ -0,0 +1,59 @@
[[update_by_query_examples]]
== Update By Query
The simplest usage of `_update_by_query` just performs an update on every document in the index without changing the source. This is useful to pick up a new property or some other online mapping change.
[source,js]
---------
'use strict'
const { Client } = require('@elastic/elasticsearch')
const client = new Client({ node: 'http://localhost:9200' })
async function run () {
await client.index({
index: 'game-of-thrones',
body: {
character: 'Ned Stark',
quote: 'Winter is coming.'
}
})
await client.index({
index: 'game-of-thrones',
refresh: true,
body: {
character: 'Arya Stark',
quote: 'A girl is Arya Stark of Winterfell. And I\'m going home.'
}
})
await client.updateByQuery({
index: 'game-of-thrones',
refresh: true,
body: {
script: {
lang: 'painless',
source: 'ctx._source["house"] = "stark"'
},
query: {
match: {
character: 'stark'
}
}
}
})
const { body } = await client.search({
index: 'game-of-thrones',
body: {
query: { match_all: {} }
}
})
console.log(body.hits.hits)
}
run().catch(console.log)
---------

View File

@ -1,10 +1,12 @@
[[extend-client]]
== Extend the client
Sometimes you need to reuse the same logic, or you want to build a custom API to allow you simplify your code. +
The easiest way to achieve that is by extending the client.
Sometimes you need to reuse the same logic, or you want to build a custom API to
allow you to simplify your code. The easiest way to achieve that is by extending
the client.
NOTE: If you want to override existing methods, you should specify the `{ force: true }` option.
NOTE: If you want to override existing methods, you should specify the
`{ force: true }` option.
[source,js]
----
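// NOTE: a hedged sketch, not the elided original snippet;
// 'utility.index' is an illustrative name.
const { Client } = require('@elastic/elasticsearch')
const client = new Client({ node: 'http://localhost:9200' })

client.extend('utility.index', ({ makeRequest, ConfigurationError }) => {
  return function utilityIndex (params, options) {
    if (params.body == null) {
      const err = new ConfigurationError('Missing required parameter: body')
      return Promise.reject(err)
    }
    const { index, body } = params
    return makeRequest({
      method: 'POST',
      path: `/${encodeURIComponent(index)}/_doc`,
      body: body,
      querystring: {}
    }, options)
  }
})

client.utility.index({ index: 'game-of-thrones', body: { character: 'Ned Stark' } })
  .then(({ body }) => console.log(body))
  .catch(console.log)
----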

View File

@ -1,15 +1,22 @@
[[observability]]
== Observability
The client does not provide a default logger, but instead it offers an event emitter interfaces to hook into internal events, such as `request` and `response`.
The client does not provide a default logger, but instead it offers an event
emitter interface to hook into internal events, such as `request` and
`response`.
Correlating those events can be quite hard, especially if your applications have a large codebase with many events happening at the same time.
Correlating those events can be quite hard, especially if your applications have
a large codebase with many events happening at the same time.
To help you with this, the client offers you a correlation id system and other features, let's see them in action.
To help you with this, the client offers you a correlation id system and other
features. Let's see them in action.
=== Events
The client is an event emitter, this means that you can listen for its event and add additional logic to your code, without need to change the client internals or your normal usage. +
You can find the events names by access the `events` key of the client.
The client is an event emitter. This means that you can listen for its events
and add additional logic to your code, without needing to change the client
internals or your normal usage. You can find the event names by accessing the
`events` key of the client.
[source,js]
----
@ -17,7 +24,9 @@ const { events } = require('@elastic/elasticsearch')
console.log(events)
----
The event emitter functionality can be useful if you want to log every request, response and error that is happening during the use of the client.
The event emitter functionality can be useful if you want to log every request,
response and error that is happening during the use of the client.
[source,js]
----
@ -34,11 +43,12 @@ client.on('response', (err, result) => {
})
----
The client emits the following events:
[cols=2*]
|===
|`request`
a|Emitted before sending the actual request to Elasticsearch _(emitted multiple times in case of retries)_.
a|Emitted before sending the actual request to {es} _(emitted multiple times in case of retries)_.
[source,js]
----
client.on('request', (err, result) => {
@ -47,7 +57,7 @@ client.on('request', (err, result) => {
----
|`response`
a|Emitted once Elasticsearch response has been received and parsed.
a|Emitted once {es} response has been received and parsed.
[source,js]
----
client.on('response', (err, result) => {
@ -76,6 +86,7 @@ client.on('resurrect', (err, result) => {
|===
The values of `result` in `request`, `response` and `sniff` will be:
[source,ts]
----
body: any;
@ -100,7 +111,9 @@ meta: {
};
----
While the `result` value in `resurrect` will be:
[source,ts]
----
strategy: string;
@ -112,8 +125,13 @@ request: {
};
----
=== Correlation id
Correlating events can be quite hard, especially if there are many events at the same time. The client offers you an automatic (and configurable) system to help you handle this problem.
Correlating events can be quite hard, especially if there are many events at the
same time. The client offers you an automatic (and configurable) system to help
you handle this problem.
[source,js]
----
const { Client } = require('@elastic/elasticsearch')
@ -141,7 +159,10 @@ client.search({
})
----
By default the id is an incremental integer, but you can easily configure that with the `generateRequestId` option:
By default the id is an incremental integer, but you can easily configure that
with the `generateRequestId` option:
[source,js]
----
const { Client } = require('@elastic/elasticsearch')
@ -156,7 +177,9 @@ const client = new Client({
})
----
You can also specify a custom id per request:
[source,js]
----
client.search({
@ -169,8 +192,12 @@ client.search({
})
----
=== Context object
Sometimes, you might need to make some custom data available in your events, you can do that via the `context` option of a request:
Sometimes, you might need to make some custom data available in your events. You
can do that via the `context` option of a request:
[source,js]
----
const { Client } = require('@elastic/elasticsearch')
@ -202,8 +229,14 @@ client.search({
})
----
=== Client name
If you are using multiple instances of the client or if you are using multiple child clients _(which is the recommended way to have multiple instances of the client)_, you might need to recognize which client you are using, the `name` options will help you in this regard:
If you are using multiple instances of the client or if you are using multiple
child clients _(which is the recommended way to have multiple instances of the
client)_, you might need to recognize which client you are using. The `name`
option will help you in this regard.
[source,js]
----
const { Client } = require('@elastic/elasticsearch')
@ -249,11 +282,19 @@ child.search({
})
----
=== X-Opaque-Id support
To improve the overall observability, the client offers an easy way to configure the `X-Opaque-Id` header. If you set the `X-Opaque-Id` in a specific request, this will allow you to discover this identifier in the https://www.elastic.co/guide/en/elasticsearch/reference/master/logging.html#deprecation-logging[deprecation logs], help you with https://www.elastic.co/guide/en/elasticsearch/reference/master/index-modules-slowlog.html#_identifying_search_slow_log_origin[identifying search slow log origin] as well as https://www.elastic.co/guide/en/elasticsearch/reference/master/tasks.html#_identifying_running_tasks[identifying running tasks].
The `X-Opaque-Id` should be configured in each request, for doing that you can use the `opaqueId` option, as you can see in the following example. +
The resulting header will be `{ 'X-Opaque-Id': 'my-search' }`.
=== X-Opaque-Id support
To improve the overall observability, the client offers an easy way to configure
the `X-Opaque-Id` header. If you set the `X-Opaque-Id` in a specific request,
this will allow you to discover this identifier in the
https://www.elastic.co/guide/en/elasticsearch/reference/master/logging.html#deprecation-logging[deprecation logs],
help you with https://www.elastic.co/guide/en/elasticsearch/reference/master/index-modules-slowlog.html#_identifying_search_slow_log_origin[identifying search slow log origin]
as well as https://www.elastic.co/guide/en/elasticsearch/reference/master/tasks.html#_identifying_running_tasks[identifying running tasks].
The `X-Opaque-Id` should be configured in each request. To do that, you can use
the `opaqueId` option, as you can see in the following example. The resulting
header will be `{ 'X-Opaque-Id': 'my-search' }`.
[source,js]
----
@ -272,8 +313,13 @@ client.search({
})
----
Sometimes it may be useful to prefix all the `X-Opaque-Id` headers with a specific string, in case you need to identify a specific client or server. For doing this, the client offers a top-level configuration option: `opaqueIdPrefix`. +
In the following example, the resulting header will be `{ 'X-Opaque-Id': 'proxy-client::my-search' }`.
Sometimes it may be useful to prefix all the `X-Opaque-Id` headers with a
specific string, in case you need to identify a specific client or server. To
do this, the client offers a top-level configuration option:
`opaqueIdPrefix`. In the following example, the resulting header will be
`{ 'X-Opaque-Id': 'proxy-client::my-search' }`.
[source,js]
----
const { Client } = require('@elastic/elasticsearch')

File diff suppressed because it is too large

View File

@ -1,12 +1,19 @@
[[typescript]]
== TypeScript support
The client offers a first-class support for TypeScript, since it ships the type definitions for every exposed API.
The client offers first-class support for TypeScript, since it ships the type
definitions for every exposed API.
NOTE: If you are using TypeScript you will be required to use _snake_case_ style to define the API parameters instead of _camelCase_.
NOTE: If you are using TypeScript you will be required to use _snake_case_ style
to define the API parameters instead of _camelCase_.
Other than the types for the surface API, the client offers the types for every request method, via the `RequestParams`, if you need the types for a search request for instance, you can access them via `RequestParams.Search`.
Every API that supports a body, accepts a https://www.typescriptlang.org/docs/handbook/generics.html[generics] which represents the type of the request body, if you don't configure anything, it will default to `any`.
Other than the types for the surface API, the client offers the types for every
request method via `RequestParams`. If you need the types for a search request,
for instance, you can access them via `RequestParams.Search`.
Every API that supports a body accepts a
https://www.typescriptlang.org/docs/handbook/generics.html[generic] which
represents the type of the request body; if you don't configure anything, it
defaults to `any`.
For example:
@ -40,7 +47,9 @@ const searchParams: RequestParams.Search = {
}
----
You can find the type definition of a response in `ApiResponse`, which accepts a generics as well if you want to specify the body type, otherwise it defaults to `any`.
You can find the type definition of a response in `ApiResponse`, which accepts a
generic as well if you want to specify the body type; otherwise it defaults to
`any`.
[source,ts]
----

24
index.d.ts vendored
View File

@ -101,8 +101,8 @@ interface ClientOptions {
cloud?: {
id: string;
// TODO: remove username and password here in 8
username: string;
password: string;
username?: string;
password?: string;
}
}
@ -212,6 +212,10 @@ declare class Client extends EventEmitter {
get: ApiMethod<RequestParams.Get>
get_script: ApiMethod<RequestParams.GetScript>
getScript: ApiMethod<RequestParams.GetScript>
get_script_context: ApiMethod<RequestParams.GetScriptContext>
getScriptContext: ApiMethod<RequestParams.GetScriptContext>
get_script_languages: ApiMethod<RequestParams.GetScriptLanguages>
getScriptLanguages: ApiMethod<RequestParams.GetScriptLanguages>
get_source: ApiMethod<RequestParams.GetSource>
getSource: ApiMethod<RequestParams.GetSource>
graph: {
@ -353,10 +357,12 @@ declare class Client extends EventEmitter {
deleteJob: ApiMethod<RequestParams.MlDeleteJob>
delete_model_snapshot: ApiMethod<RequestParams.MlDeleteModelSnapshot>
deleteModelSnapshot: ApiMethod<RequestParams.MlDeleteModelSnapshot>
estimate_memory_usage: ApiMethod<RequestParams.MlEstimateMemoryUsage>
estimateMemoryUsage: ApiMethod<RequestParams.MlEstimateMemoryUsage>
delete_trained_model: ApiMethod<RequestParams.MlDeleteTrainedModel>
deleteTrainedModel: ApiMethod<RequestParams.MlDeleteTrainedModel>
evaluate_data_frame: ApiMethod<RequestParams.MlEvaluateDataFrame>
evaluateDataFrame: ApiMethod<RequestParams.MlEvaluateDataFrame>
explain_data_frame_analytics: ApiMethod<RequestParams.MlExplainDataFrameAnalytics>
explainDataFrameAnalytics: ApiMethod<RequestParams.MlExplainDataFrameAnalytics>
find_file_structure: ApiMethod<RequestParams.MlFindFileStructure>
findFileStructure: ApiMethod<RequestParams.MlFindFileStructure>
flush_job: ApiMethod<RequestParams.MlFlushJob>
@ -392,6 +398,10 @@ declare class Client extends EventEmitter {
getOverallBuckets: ApiMethod<RequestParams.MlGetOverallBuckets>
get_records: ApiMethod<RequestParams.MlGetRecords>
getRecords: ApiMethod<RequestParams.MlGetRecords>
get_trained_models: ApiMethod<RequestParams.MlGetTrainedModels>
getTrainedModels: ApiMethod<RequestParams.MlGetTrainedModels>
get_trained_models_stats: ApiMethod<RequestParams.MlGetTrainedModelsStats>
getTrainedModelsStats: ApiMethod<RequestParams.MlGetTrainedModelsStats>
info: ApiMethod<RequestParams.MlInfo>
open_job: ApiMethod<RequestParams.MlOpenJob>
openJob: ApiMethod<RequestParams.MlOpenJob>
@ -413,6 +423,8 @@ declare class Client extends EventEmitter {
putFilter: ApiMethod<RequestParams.MlPutFilter>
put_job: ApiMethod<RequestParams.MlPutJob>
putJob: ApiMethod<RequestParams.MlPutJob>
put_trained_model: ApiMethod<RequestParams.MlPutTrainedModel>
putTrainedModel: ApiMethod<RequestParams.MlPutTrainedModel>
revert_model_snapshot: ApiMethod<RequestParams.MlRevertModelSnapshot>
revertModelSnapshot: ApiMethod<RequestParams.MlRevertModelSnapshot>
set_upgrade_mode: ApiMethod<RequestParams.MlSetUpgradeMode>
@ -553,8 +565,12 @@ declare class Client extends EventEmitter {
getLifecycle: ApiMethod<RequestParams.SlmGetLifecycle>
get_stats: ApiMethod<RequestParams.SlmGetStats>
getStats: ApiMethod<RequestParams.SlmGetStats>
get_status: ApiMethod<RequestParams.SlmGetStatus>
getStatus: ApiMethod<RequestParams.SlmGetStatus>
put_lifecycle: ApiMethod<RequestParams.SlmPutLifecycle>
putLifecycle: ApiMethod<RequestParams.SlmPutLifecycle>
start: ApiMethod<RequestParams.SlmStart>
stop: ApiMethod<RequestParams.SlmStop>
}
snapshot: {
cleanup_repository: ApiMethod<RequestParams.SnapshotCleanupRepository>

View File

@ -294,16 +294,14 @@ function resolve (host, path) {
function prepareHeaders (headers = {}, auth) {
if (auth != null && headers.authorization == null) {
if (auth.username && auth.password) {
headers.authorization = 'Basic ' + Buffer.from(`${auth.username}:${auth.password}`).toString('base64')
}
if (auth.apiKey) {
if (typeof auth.apiKey === 'object') {
headers.authorization = 'ApiKey ' + Buffer.from(`${auth.apiKey.id}:${auth.apiKey.api_key}`).toString('base64')
} else {
headers.authorization = `ApiKey ${auth.apiKey}`
}
} else if (auth.username && auth.password) {
headers.authorization = 'Basic ' + Buffer.from(`${auth.username}:${auth.password}`).toString('base64')
}
}
return headers

View File

@ -6,6 +6,7 @@
const { stringify } = require('querystring')
const debug = require('debug')('elasticsearch')
const sjson = require('secure-json-parse')
const { SerializationError, DeserializationError } = require('./errors')
class Serializer {
@ -22,7 +23,7 @@ class Serializer {
deserialize (json) {
debug('Deserializing', json)
try {
var object = JSON.parse(json)
var object = sjson.parse(json)
} catch (err) {
throw new DeserializationError(err.message)
}
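Swapping `JSON.parse` for `sjson.parse` guards the deserializer against prototype poisoning payloads. A minimal sketch of the difference, assuming secure-json-parse's default `protoAction: 'error'` behaviour:

[source,js]
----
const sjson = require('secure-json-parse')

// Regular JSON is parsed exactly as before:
console.log(sjson.parse('{"value":1}')) // { value: 1 }

// A prototype poisoning payload now throws,
// instead of silently attaching a __proto__ key:
try {
  sjson.parse('{"__proto__":{"polluted":true}}')
} catch (err) {
  console.log(err.name) // SyntaxError
}
----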

2
lib/Transport.d.ts vendored
View File

@ -81,7 +81,7 @@ export interface TransportRequestParams {
}
export interface TransportRequestOptions {
ignore?: [number];
ignore?: number[];
requestTimeout?: number | string;
maxRetries?: number;
asStream?: boolean;

View File

@ -109,7 +109,7 @@ class Transport {
if (meta.aborted === true) return
meta.connection = this.getConnection({ requestId: meta.request.id })
if (meta.connection === null) {
return callback(new NoLivingConnectionsError('There are not living connections'), result)
return callback(new NoLivingConnectionsError('There are no living connections'), result)
}
// TODO: make this assignment FAST
@@ -130,15 +130,17 @@
return callback(err, result)
}
}
headers['Content-Type'] = headers['Content-Type'] || 'application/json'
if (compression === 'gzip') {
if (isStream(params.body) === false) {
params.body = intoStream(params.body).pipe(createGzip())
} else {
params.body = params.body.pipe(createGzip())
if (params.body !== '') {
headers['Content-Type'] = headers['Content-Type'] || 'application/json'
if (compression === 'gzip') {
if (isStream(params.body) === false) {
params.body = intoStream(params.body).pipe(createGzip())
} else {
params.body = params.body.pipe(createGzip())
}
headers['Content-Encoding'] = compression
}
headers['Content-Encoding'] = compression
}
if (isStream(params.body) === false) {
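Wrapping the header assignment and the gzip branch in the body check means a request with an empty body is no longer piped through createGzip(), and no Content-Type or Content-Encoding header is attached to it. A sketch of a request that previously sent an empty compressed payload (node URL is a placeholder):

'use strict'
const { Client } = require('@elastic/elasticsearch')

// gzip-enabled client
const client = new Client({
  node: 'http://localhost:9200',
  compression: 'gzip'
})

// This request has no body: it is now sent without Content-Type,
// Content-Encoding, or an empty gzipped stream.
client.indices.delete({ index: 'my-index' }, err => {
  if (err) console.error(err)
})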

View File

@@ -42,13 +42,13 @@ class BaseConnectionPool {
opts = this.urlToHost(opts)
}
if (opts.url.username !== '' && opts.url.password !== '') {
if (this.auth !== null) {
opts.auth = this.auth
} else if (opts.url.username !== '' && opts.url.password !== '') {
opts.auth = {
username: decodeURIComponent(opts.url.username),
password: decodeURIComponent(opts.url.password)
}
} else if (this.auth !== null) {
opts.auth = this.auth
}
if (opts.ssl == null) opts.ssl = this._ssl
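The reordered branches give explicit auth options precedence over credentials embedded in the node URL, mirroring the prepareHeaders change above. A sketch with placeholder credentials:

'use strict'
const { Client } = require('@elastic/elasticsearch')

// Credentials appear both in the URL and in the auth option;
// this.auth is consulted first, so the ApiKey wins.
const client = new Client({
  node: 'http://user:pwd@localhost:9200',
  auth: { apiKey: 'Zm9vOmJhcg==' }
})

client.info(err => {
  if (err) console.error(err)
  // The request was authenticated with 'ApiKey Zm9vOmJhcg==',
  // not with a 'Basic' header derived from user:pwd.
})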

View File

@@ -4,7 +4,7 @@
"main": "index.js",
"types": "index.d.ts",
"homepage": "http://www.elastic.co/guide/en/elasticsearch/client/javascript-api/current/index.html",
"version": "7.5.0",
"version": "7.6.1",
"keywords": [
"elasticsearch",
"elastic",
@@ -19,9 +19,9 @@
"test": "npm run lint && npm run test:unit && npm run test:behavior && npm run test:types",
"test:unit": "tap test/unit/*.test.js -t 300 --no-coverage",
"test:behavior": "tap test/behavior/*.test.js -t 300 --no-coverage",
"test:integration": "tap test/integration/index.js -T --no-coverage",
"test:integration": "node test/integration/index.js",
"test:types": "tsc --project ./test/types/tsconfig.json",
"test:coverage": "nyc tap test/unit/*.test.js test/behavior/*.test.js -t 300 && nyc report --reporter=text-lcov > coverage.lcov && codecov",
"test:coverage": "nyc tap test/unit/*.test.js test/behavior/*.test.js -t 300 && nyc report --reporter=text-lcov > coverage.lcov",
"lint": "standard",
"lint:fix": "standard --fix",
"ci": "npm run license-checker && npm test && npm run test:integration && npm run test:coverage",
@@ -39,11 +39,11 @@
},
"devDependencies": {
"@types/node": "^12.6.2",
"codecov": "^3.3.0",
"convert-hrtime": "^3.0.0",
"dedent": "^0.7.0",
"deepmerge": "^4.0.0",
"dezalgo": "^1.0.3",
"fast-deep-equal": "^3.1.1",
"js-yaml": "^3.13.1",
"license-checker": "^25.0.1",
"lolex": "^4.0.1",
@@ -58,7 +58,6 @@
"standard": "^13.0.2",
"stoppable": "^1.1.0",
"tap": "^14.4.1",
"tap-mocha-reporter": "^4.0.1",
"typescript": "^3.4.5",
"workq": "^2.1.0"
},
@@ -68,7 +67,8 @@
"into-stream": "^5.1.0",
"ms": "^2.1.1",
"once": "^1.4.0",
"pump": "^3.0.0"
"pump": "^3.0.0",
"secure-json-parse": "^2.1.0"
},
"license": "Apache-2.0",
"repository": {

View File

@@ -4,11 +4,50 @@
'use strict'
const { readdirSync } = require('fs')
const { join } = require('path')
const dedent = require('dedent')
const codeExamples = readdirSync(join(__dirname, '..', '..', 'docs', 'examples'))
.map(file => file.slice(0, -9))
.filter(api => api !== 'index')
function generateDocs (common, spec) {
var doc = dedent`
[[api-reference]]
////////
===========================================================================================================================
|| ||
|| ||
|| ||
|| ██████╗ ███████╗ █████╗ ██████╗ ███╗ ███╗███████╗ ||
|| ██╔══██╗██╔════╝██╔══██╗██╔══██╗████╗ ████║██╔════╝ ||
|| ██████╔╝█████╗ ███████║██║ ██║██╔████╔██║█████╗ ||
|| ██╔══██╗██╔══╝ ██╔══██║██║ ██║██║╚██╔╝██║██╔══╝ ||
|| ██║ ██║███████╗██║ ██║██████╔╝██║ ╚═╝ ██║███████╗ ||
|| ╚═╝ ╚═╝╚══════╝╚═╝ ╚═╝╚═════╝ ╚═╝ ╚═╝╚══════╝ ||
|| ||
|| ||
|| This file is autogenerated, DO NOT send pull requests that changes this file directly. ||
|| You should update the script that does the generation, which can be found in '/scripts/utils/generateDocs.js'. ||
|| ||
|| You can run the script with the following command: ||
|| node scripts/generate --branch <branch_name> ||
|| or ||
|| node scripts/generate --tag <tag_name> ||
|| ||
|| ||
|| ||
===========================================================================================================================
////////
== API Reference
This document contains the entire list of the Elasticsearch API supported by the client, both OSS and commercial. The client is entirely licensed under Apache 2.0.
@@ -31,7 +70,7 @@ function generateDocs (common, spec) {
maxRetries: 3
})
// calback API
// callback API
client.search({
index: 'my-index',
from: 20,
@@ -47,14 +86,7 @@ function generateDocs (common, spec) {
In this document, you will find the reference of every parameter accepted by the querystring or the url. If you also need to send the body, you can find the documentation of its format in the reference link that is present along with every endpoint.
////////
This documentation is generated by running:
node scripts/run.js --tag tagName
or
node scripts/run.js --branch branchName
////////\n\n`
\n\n`
doc += commonParameters(common)
spec.forEach(s => {
doc += '\n' + generateApiDoc(s)
@@ -67,7 +99,7 @@ function commonParameters (spec) {
=== Common parameters
Parameters that are accepted by all API endpoints.
link:{ref}/common-options.html[Reference]
link:{ref}/common-options.html[Documentation]
[cols=2*]
|===\n`
Object.keys(spec.params).forEach(key => {
@@ -170,7 +202,10 @@ function generateApiDoc (spec) {
client.${camelify(name)}(${codeParameters.length > 0 ? `{\n ${codeParameters}\n}` : ''})
----\n`
if (documentationUrl) {
doc += `link:${documentationUrl}[Reference]\n`
doc += `link:${documentationUrl}[Documentation] +\n`
}
if (codeExamples.includes(name)) {
doc += `{jsclient}/${name.replace(/\./g, '_')}_examples.html[Code Example] +\n`
}
if (params.length !== 0) {
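Endpoints whose name matches a file in docs/examples now get a second link next to the renamed Documentation link; dots in the API name become underscores in the examples page name. A sketch of the transformation, using a placeholder API name:

'use strict'
// 'indices.create' -> '{jsclient}/indices_create_examples.html[Code Example] +'
const name = 'indices.create'
console.log(`{jsclient}/${name.replace(/\./g, '_')}_examples.html[Code Example] +`)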
@@ -209,7 +244,7 @@ const LINK_OVERRIDES = {
'license.post_start_basic': '{ref}/start-basic.html',
'license.post_start_trial': '{ref}/start-trial.html',
'migration.deprecations': '{ref}/migration-api-deprecation.html',
'monitoring.bulk': '{ref}/es-monitoring.html',
'monitoring.bulk': '{ref}/monitor-elasticsearch-cluster.html',
'ingest.delete_pipeline': '{ref}/delete-pipeline-api.html',
'ingest.get_pipeline': '{ref}/get-pipeline-api.html',
'ingest.put_pipeline': '{ref}/put-pipeline-api.html',

View File

@@ -111,7 +111,7 @@ test('Should handle hostnames in publish_address', t => {
})
})
test('Sniff interval', t => {
test('Sniff interval', { skip: 'Flaky on CI' }, t => {
t.plan(10)
buildCluster(({ nodes, shutdown, kill }) => {

View File

@@ -4,48 +4,6 @@
'use strict'
const esDefaultRoles = [
'apm_system',
'apm_user',
'beats_admin',
'beats_system',
'code_admin',
'code_user',
'data_frame_transforms_admin',
'data_frame_transforms_user',
'enrich_user',
'ingest_admin',
'kibana_dashboard_only_user',
'kibana_system',
'kibana_user',
'logstash_admin',
'logstash_system',
'machine_learning_admin',
'machine_learning_user',
'monitoring_user',
'remote_monitoring_agent',
'remote_monitoring_collector',
'reporting_user',
'rollup_admin',
'rollup_user',
'snapshot_user',
'superuser',
'transform_admin',
'transform_user',
'transport_client',
'watcher_admin',
'watcher_user'
]
const esDefaultUsers = [
'apm_system',
'beats_system',
'elastic',
'logstash_system',
'kibana',
'remote_monitoring_user'
]
function runInParallel (client, operation, options, clientOptions) {
if (options.length === 0) return Promise.resolve()
const operations = options.map(opts => {
@@ -76,4 +34,4 @@ function to (promise) {
const sleep = ms => new Promise(resolve => setTimeout(resolve, ms))
module.exports = { runInParallel, esDefaultRoles, esDefaultUsers, delve, to, sleep }
module.exports = { runInParallel, delve, to, sleep }

View File

@@ -8,16 +8,20 @@ const { readFileSync, accessSync, mkdirSync, readdirSync, statSync } = require('
const { join, sep } = require('path')
const yaml = require('js-yaml')
const Git = require('simple-git')
const tap = require('tap')
const { Client } = require('../../index')
const TestRunner = require('./test-runner')
const build = require('./test-runner')
const { sleep } = require('./helper')
const ms = require('ms')
const esRepo = 'https://github.com/elastic/elasticsearch.git'
const esFolder = join(__dirname, '..', '..', 'elasticsearch')
const yamlFolder = join(esFolder, 'rest-api-spec', 'src', 'main', 'resources', 'rest-api-spec', 'test')
const xPackYamlFolder = join(esFolder, 'x-pack', 'plugin', 'src', 'test', 'resources', 'rest-api-spec', 'test')
const MAX_API_TIME = 1000 * 90
const MAX_FILE_TIME = 1000 * 30
const MAX_TEST_TIME = 1000 * 3
const ossSkips = {
'cat.indices/10_basic.yml': ['Test cat indices output for closed index (pre 7.2.0)'],
'cluster.health/10_basic.yml': ['cluster health with closed index (pre 7.2.0)'],
@@ -68,235 +72,267 @@ const xPackBlackList = {
'xpack/15_basic.yml': ['*']
}
class Runner {
constructor (opts = {}) {
const options = { node: opts.node }
if (opts.isXPack) {
options.ssl = {
ca: readFileSync(join(__dirname, '..', '..', '.ci', 'certs', 'ca.crt'), 'utf8'),
rejectUnauthorized: false
}
}
this.client = new Client(options)
console.log('Loading yaml suite')
}
async waitCluster (client, times = 0) {
try {
await client.cluster.health({ waitForStatus: 'green', timeout: '50s' })
} catch (err) {
if (++times < 10) {
await sleep(5000)
return this.waitCluster(client, times)
}
console.error(err)
process.exit(1)
function runner (opts = {}) {
const options = { node: opts.node }
if (opts.isXPack) {
options.ssl = {
ca: readFileSync(join(__dirname, '..', '..', '.ci', 'certs', 'ca.crt'), 'utf8'),
rejectUnauthorized: false
}
}
const client = new Client(options)
log('Loading yaml suite')
start({ client, isXPack: opts.isXPack })
.catch(console.log)
}
async start ({ isXPack }) {
const { client } = this
const parse = this.parse.bind(this)
async function waitCluster (client, times = 0) {
try {
await client.cluster.health({ waitForStatus: 'green', timeout: '50s' })
} catch (err) {
if (++times < 10) {
await sleep(5000)
return waitCluster(client, times)
}
console.error(err)
process.exit(1)
}
}
console.log('Waiting for Elasticsearch')
await this.waitCluster(client)
async function start ({ client, isXPack }) {
log('Waiting for Elasticsearch')
await waitCluster(client)
const { body } = await client.info()
const { number: version, build_hash: sha } = body.version
const { body } = await client.info()
const { number: version, build_hash: sha } = body.version
console.log(`Checking out sha ${sha}...`)
await this.withSHA(sha)
log(`Checking out sha ${sha}...`)
await withSHA(sha)
console.log(`Testing ${isXPack ? 'XPack' : 'oss'} api...`)
log(`Testing ${isXPack ? 'XPack' : 'oss'} api...`)
const folders = []
.concat(getAllFiles(yamlFolder))
.concat(isXPack ? getAllFiles(xPackYamlFolder) : [])
.filter(t => !/(README|TODO)/g.test(t))
// we cluster the array based on the folder names,
// to provide a better test log output
.reduce((arr, file) => {
const path = file.slice(file.indexOf('/rest-api-spec/test'), file.lastIndexOf('/'))
var inserted = false
for (var i = 0; i < arr.length; i++) {
if (arr[i][0].includes(path)) {
inserted = true
arr[i].push(file)
break
const stats = {
total: 0,
skip: 0,
pass: 0,
assertions: 0
}
const folders = getAllFiles(isXPack ? xPackYamlFolder : yamlFolder)
.filter(t => !/(README|TODO)/g.test(t))
// we cluster the array based on the folder names,
// to provide a better test log output
.reduce((arr, file) => {
const path = file.slice(file.indexOf('/rest-api-spec/test'), file.lastIndexOf('/'))
var inserted = false
for (var i = 0; i < arr.length; i++) {
if (arr[i][0].includes(path)) {
inserted = true
arr[i].push(file)
break
}
}
if (!inserted) arr.push([file])
return arr
}, [])
const totalTime = now()
for (const folder of folders) {
// pretty name
const apiName = folder[0].slice(
folder[0].indexOf(`${sep}rest-api-spec${sep}test`) + 19,
folder[0].lastIndexOf(sep)
)
log('Testing ' + apiName.slice(1))
const apiTime = now()
for (const file of folder) {
const testRunner = build({
client,
version,
isXPack: file.includes('x-pack')
})
const fileTime = now()
const data = readFileSync(file, 'utf8')
// get the test yaml (as object), some file has multiple yaml documents inside,
// every document is separated by '---', so we split on the separator
// and then we remove the empty strings, finally we parse them
const tests = data
.split('\n---\n')
.map(s => s.trim())
.filter(Boolean)
.map(parse)
// get setup and teardown if present
var setupTest = null
var teardownTest = null
for (const test of tests) {
if (test.setup) setupTest = test.setup
if (test.teardown) teardownTest = test.teardown
}
const cleanPath = file.slice(file.lastIndexOf(apiName))
log(' ' + cleanPath)
for (const test of tests) {
const testTime = now()
const name = Object.keys(test)[0]
if (name === 'setup' || name === 'teardown') continue
stats.total += 1
if (shouldSkip(isXPack, file, name)) {
stats.skip += 1
continue
}
log(' - ' + name)
try {
await testRunner.run(setupTest, test[name], teardownTest, stats)
stats.pass += 1
} catch (err) {
console.error(err)
process.exit(1)
}
const totalTestTime = now() - testTime
if (totalTestTime > MAX_TEST_TIME) {
log(' took too long: ' + ms(totalTestTime))
} else {
log(' took: ' + ms(totalTestTime))
}
}
const totalFileTime = now() - fileTime
if (totalFileTime > MAX_FILE_TIME) {
log(` ${cleanPath} took too long: ` + ms(totalFileTime))
} else {
log(` ${cleanPath} took: ` + ms(totalFileTime))
}
}
const totalApiTime = now() - apiTime
if (totalApiTime > MAX_API_TIME) {
log(`${apiName} took too long: ` + ms(totalApiTime))
} else {
log(`${apiName} took: ` + ms(totalApiTime))
}
}
log(`Total testing time: ${ms(now() - totalTime)}`)
log(`Test stats:
- Total: ${stats.total}
- Skip: ${stats.skip}
- Pass: ${stats.pass}
- Assertions: ${stats.assertions}
`)
}
function log (text) {
process.stdout.write(text + '\n')
}
function now () {
var ts = process.hrtime()
return (ts[0] * 1e3) + (ts[1] / 1e6)
}
function parse (data) {
try {
var doc = yaml.safeLoad(data)
} catch (err) {
console.error(err)
return
}
return doc
}
/**
* Sets the elasticsearch repository to the given sha.
* If the repository is not present in `esFolder` it will
* clone the repository and the checkout the sha.
* If the repository is already present but it cannot checkout to
* the given sha, it will perform a pull and then try again.
* @param {string} sha
* @param {function} callback
*/
function withSHA (sha) {
return new Promise((resolve, reject) => {
_withSHA(err => err ? reject(err) : resolve())
})
function _withSHA (callback) {
var fresh = false
var retry = 0
if (!pathExist(esFolder)) {
if (!createFolder(esFolder)) {
return callback(new Error('Failed folder creation'))
}
fresh = true
}
const git = Git(esFolder)
if (fresh) {
clone(checkout)
} else {
checkout()
}
function checkout () {
log(`Checking out sha '${sha}'`)
git.checkout(sha, err => {
if (err) {
if (retry++ > 0) {
return callback(err)
}
return pull(checkout)
}
if (!inserted) arr.push([file])
return arr
}, [])
for (const folder of folders) {
// pretty name
const apiName = folder[0].slice(
folder[0].indexOf(`${sep}rest-api-spec${sep}test`) + 19,
folder[0].lastIndexOf(sep)
)
tap.test(`Testing ${apiName}`, { bail: true, timeout: 0 }, t => {
for (const file of folder) {
const data = readFileSync(file, 'utf8')
// get the test yaml (as object), some file has multiple yaml documents inside,
// every document is separated by '---', so we split on the separator
// and then we remove the empty strings, finally we parse them
const tests = data
.split('\n---\n')
.map(s => s.trim())
.filter(Boolean)
.map(parse)
t.test(
file.slice(file.lastIndexOf(apiName)),
testFile(file, tests)
)
}
t.end()
callback()
})
}
function testFile (file, tests) {
return t => {
// get setup and teardown if present
var setupTest = null
var teardownTest = null
for (const test of tests) {
if (test.setup) setupTest = test.setup
if (test.teardown) teardownTest = test.teardown
function pull (cb) {
log('Pulling elasticsearch repository...')
git.pull(err => {
if (err) {
return callback(err)
}
tests.forEach(test => {
const name = Object.keys(test)[0]
if (name === 'setup' || name === 'teardown') return
if (shouldSkip(t, isXPack, file, name)) return
// create a subtest for the specific folder + test file + test name
t.test(name, async t => {
const testRunner = new TestRunner({
client,
version,
tap: t,
isXPack: file.includes('x-pack')
})
await testRunner.run(setupTest, test[name], teardownTest)
})
})
t.end()
}
cb()
})
}
}
parse (data) {
try {
var doc = yaml.safeLoad(data)
} catch (err) {
console.error(err)
return
}
return doc
}
getTest (folder) {
const tests = readdirSync(folder)
return tests.filter(t => !/(README|TODO)/g.test(t))
}
/**
* Sets the elasticsearch repository to the given sha.
* If the repository is not present in `esFolder` it will
* clone the repository and the checkout the sha.
* If the repository is already present but it cannot checkout to
* the given sha, it will perform a pull and then try again.
* @param {string} sha
* @param {function} callback
*/
withSHA (sha) {
return new Promise((resolve, reject) => {
_withSHA.call(this, err => err ? reject(err) : resolve())
})
function _withSHA (callback) {
var fresh = false
var retry = 0
if (!this.pathExist(esFolder)) {
if (!this.createFolder(esFolder)) {
return callback(new Error('Failed folder creation'))
function clone (cb) {
log('Cloning elasticsearch repository...')
git.clone(esRepo, esFolder, err => {
if (err) {
return callback(err)
}
fresh = true
}
const git = Git(esFolder)
if (fresh) {
clone(checkout)
} else {
checkout()
}
function checkout () {
console.log(`Checking out sha '${sha}'`)
git.checkout(sha, err => {
if (err) {
if (retry++ > 0) {
return callback(err)
}
return pull(checkout)
}
callback()
})
}
function pull (cb) {
console.log('Pulling elasticsearch repository...')
git.pull(err => {
if (err) {
return callback(err)
}
cb()
})
}
function clone (cb) {
console.log('Cloning elasticsearch repository...')
git.clone(esRepo, esFolder, err => {
if (err) {
return callback(err)
}
cb()
})
}
cb()
})
}
}
}
/**
* Checks if the given path exists
* @param {string} path
* @returns {boolean} true if exists, false if not
*/
pathExist (path) {
try {
accessSync(path)
return true
} catch (err) {
return false
}
/**
* Checks if the given path exists
* @param {string} path
* @returns {boolean} true if exists, false if not
*/
function pathExist (path) {
try {
accessSync(path)
return true
} catch (err) {
return false
}
}
/**
* Creates the given folder
* @param {string} name
* @returns {boolean} true on success, false on failure
*/
createFolder (name) {
try {
mkdirSync(name)
return true
} catch (err) {
return false
}
/**
* Creates the given folder
* @param {string} name
* @returns {boolean} true on success, false on failure
*/
function createFolder (name) {
try {
mkdirSync(name)
return true
} catch (err) {
return false
}
}
@@ -306,18 +342,17 @@ if (require.main === module) {
node,
isXPack: node.indexOf('@') > -1
}
const runner = new Runner(opts)
runner.start(opts).catch(console.log)
runner(opts)
}
const shouldSkip = (t, isXPack, file, name) => {
const shouldSkip = (isXPack, file, name) => {
var list = Object.keys(ossSkips)
for (var i = 0; i < list.length; i++) {
const ossTest = ossSkips[list[i]]
for (var j = 0; j < ossTest.length; j++) {
if (file.endsWith(list[i]) && (name === ossTest[j] || ossTest[j] === '*')) {
const testName = file.slice(file.indexOf(`${sep}elasticsearch${sep}`)) + ' / ' + name
t.comment(`Skipping test ${testName} because is blacklisted in the oss test`)
log(`Skipping test ${testName} because is blacklisted in the oss test`)
return true
}
}
@@ -330,7 +365,7 @@ const shouldSkip = (t, isXPack, file, name) => {
for (j = 0; j < platTest.length; j++) {
if (file.endsWith(list[i]) && (name === platTest[j] || platTest[j] === '*')) {
const testName = file.slice(file.indexOf(`${sep}elasticsearch${sep}`)) + ' / ' + name
t.comment(`Skipping test ${testName} because is blacklisted in the XPack test`)
log(`Skipping test ${testName} because is blacklisted in the XPack test`)
return true
}
}
@@ -347,4 +382,4 @@ const getAllFiles = dir =>
return isDirectory ? [...files, ...getAllFiles(name)] : [...files, name]
}, [])
module.exports = Runner
module.exports = runner
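With the class gone, the runner is a plain exported function and the suite no longer depends on tap; the test:integration script now simply runs this file with node. Programmatic use mirrors the require.main block above (node URL is a placeholder):

'use strict'
const runner = require('./test/integration')

runner({
  node: 'https://elastic:changeme@localhost:9200',
  // as in the require.main block, credentials in the URL imply XPack
  isXPack: true
})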

File diff suppressed because it is too large

View File

@@ -467,6 +467,62 @@ test('Authentication', t => {
})
})
t.test('ApiKey should take precedence over basic auth (in url)', t => {
t.plan(3)
function handler (req, res) {
t.match(req.headers, {
authorization: 'ApiKey Zm9vOmJhcg=='
})
res.setHeader('Content-Type', 'application/json;utf=8')
res.end(JSON.stringify({ hello: 'world' }))
}
buildServer(handler, ({ port }, server) => {
const client = new Client({
node: `http://user:pwd@localhost:${port}`,
auth: {
apiKey: 'Zm9vOmJhcg=='
}
})
client.info((err, { body }) => {
t.error(err)
t.deepEqual(body, { hello: 'world' })
server.stop()
})
})
})
t.test('ApiKey should take precedence over basic auth (in opts)', t => {
t.plan(3)
function handler (req, res) {
t.match(req.headers, {
authorization: 'ApiKey Zm9vOmJhcg=='
})
res.setHeader('Content-Type', 'application/json;utf=8')
res.end(JSON.stringify({ hello: 'world' }))
}
buildServer(handler, ({ port }, server) => {
const client = new Client({
node: `http://localhost:${port}`,
auth: {
apiKey: 'Zm9vOmJhcg==',
username: 'user',
password: 'pwd'
}
})
client.info((err, { body }) => {
t.error(err)
t.deepEqual(body, { hello: 'world' })
server.stop()
})
})
})
t.end()
})
@@ -827,6 +883,44 @@ test('Elastic cloud config', t => {
t.deepEqual(pool._ssl, { secureProtocol: 'TLSv1_2_method' })
})
t.test('ApiKey should take precedence over basic auth', t => {
t.plan(5)
const client = new Client({
cloud: {
// 'localhost$abcd$efgh'
id: 'name:bG9jYWxob3N0JGFiY2QkZWZnaA=='
},
auth: {
username: 'elastic',
password: 'changeme',
apiKey: 'Zm9vOmJhcg=='
}
})
const pool = client.connectionPool
t.ok(pool instanceof CloudConnectionPool)
t.match(pool.connections.find(c => c.id === 'https://abcd.localhost/'), {
url: new URL('https://elastic:changeme@abcd.localhost'),
id: 'https://abcd.localhost/',
headers: {
authorization: 'ApiKey Zm9vOmJhcg=='
},
ssl: { secureProtocol: 'TLSv1_2_method' },
deadCount: 0,
resurrectTimeout: 0,
roles: {
master: true,
data: true,
ingest: true,
ml: false
}
})
t.strictEqual(client.transport.compression, 'gzip')
t.strictEqual(client.transport.suggestCompression, true)
t.deepEqual(pool._ssl, { secureProtocol: 'TLSv1_2_method' })
})
t.test('Override default options', t => {
t.plan(4)
const client = new Client({

View File

@@ -34,7 +34,6 @@ test('Should emit a request event when a request is performed', t => {
body: '',
querystring: 'q=foo%3Abar',
headers: {
'Content-Type': 'application/json',
'Content-Length': '0'
}
},
@@ -86,7 +85,6 @@ test('Should emit a response event in case of a successful response', t => {
body: '',
querystring: 'q=foo%3Abar',
headers: {
'Content-Type': 'application/json',
'Content-Length': '0'
}
},
@@ -136,7 +134,6 @@ test('Should emit a response event with the error set', t => {
body: '',
querystring: 'q=foo%3Abar',
headers: {
'Content-Type': 'application/json',
'Content-Length': '0'
}
},

View File

@@ -1813,6 +1813,55 @@ test('Compress request', t => {
}
})
t.test('Should skip the compression for empty strings/null/undefined', t => {
t.plan(9)
function handler (req, res) {
t.strictEqual(req.headers['content-encoding'], undefined)
t.strictEqual(req.headers['content-type'], undefined)
res.end()
}
buildServer(handler, ({ port }, server) => {
const pool = new ConnectionPool({ Connection })
pool.addConnection(`http://localhost:${port}`)
const transport = new Transport({
emit: () => {},
connectionPool: pool,
serializer: new Serializer(),
maxRetries: 3,
compression: 'gzip',
requestTimeout: 30000,
sniffInterval: false,
sniffOnStart: false
})
transport.request({
method: 'DELETE',
path: '/hello',
body: ''
}, (err, { body }) => {
t.error(err)
transport.request({
method: 'GET',
path: '/hello',
body: null
}, (err, { body }) => {
t.error(err)
transport.request({
method: 'GET',
path: '/hello',
body: undefined
}, (err, { body }) => {
t.error(err)
server.stop()
})
})
})
})
})
t.end()
})
@@ -2108,3 +2157,71 @@ test('Should pass request params and options to generateRequestId', t => {
transport.request(params, options, t.error)
})
test('Secure json parsing', t => {
t.test('__proto__ protection', t => {
t.plan(2)
function handler (req, res) {
res.setHeader('Content-Type', 'application/json;utf=8')
res.end('{"__proto__":{"a":1}}')
}
buildServer(handler, ({ port }, server) => {
const pool = new ConnectionPool({ Connection })
pool.addConnection(`http://localhost:${port}`)
const transport = new Transport({
emit: () => {},
connectionPool: pool,
serializer: new Serializer(),
maxRetries: 3,
requestTimeout: 30000,
sniffInterval: false,
sniffOnStart: false
})
transport.request({
method: 'GET',
path: '/hello'
}, (err, { body }) => {
t.true(err instanceof DeserializationError)
t.is(err.message, 'Object contains forbidden prototype property')
server.stop()
})
})
})
t.test('constructor protection', t => {
t.plan(2)
function handler (req, res) {
res.setHeader('Content-Type', 'application/json;utf=8')
res.end('{"constructor":{"prototype":{"bar":"baz"}}}')
}
buildServer(handler, ({ port }, server) => {
const pool = new ConnectionPool({ Connection })
pool.addConnection(`http://localhost:${port}`)
const transport = new Transport({
emit: () => {},
connectionPool: pool,
serializer: new Serializer(),
maxRetries: 3,
requestTimeout: 30000,
sniffInterval: false,
sniffOnStart: false
})
transport.request({
method: 'GET',
path: '/hello'
}, (err, { body }) => {
t.true(err instanceof DeserializationError)
t.is(err.message, 'Object contains forbidden prototype property')
server.stop()
})
})
})
t.end()
})