[[configuration]]
== Configuration
The `Client` constructor accepts a single object as its argument. All of the available options/keys are listed in the <<config-options>> section.
[source,js]
------
var elasticsearch = require('elasticsearch');
var client = new elasticsearch.Client({
  // ... config options ...
});
------
WARNING: Due to the complex nature of the configuration, the config object you pass in will be modified and can only be used to create one `Client` instance. Sorry for the inconvenience. Related GitHub issue: https://github.com/elasticsearch/elasticsearch-js/issues/33
[[config-options]]
=== Config options
[horizontal]
`host or hosts`[[config-hosts]]::
`String, String[], Object[]` -- Specify the hosts that this client will connect to. If sniffing is enabled, or you call `client.sniff()`, this list will be used as seeds to discover the rest of your cluster.
+
The value(s) are passed to the <<host-reference,`Host`>> constructor. `Host` objects can help enforce path-prefixes, default headers and query strings, and can be helpful in writing more intelligent selection algorithms; head over to <<host-reference,the `Host` docs>> for more information.
Default:::
+
[source,js]
------
'http://localhost:9200'
------
`log`[[config-log]]:: `String, String[], Object, Object[], Constructor` -- Unless a constructor is specified, this sets the output settings for the bundled logger. See <<configuring-logging,the section on logging>> for more information.
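+
For example, the log output can also be set with a single level string; a minimal sketch (assuming the bundled logger's string shorthand):
+
[source,js]
-----
var client = new elasticsearch.Client({
  log: 'trace' // log every request and response at the trace level
});
-----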
Default in Node:::
+
[source,js]
-----
[{
  type: 'stdio',
  levels: ['error', 'warning']
}]
-----
`apiVersion`[[config-api-version]]:: `String` -- Change the API that the client provides; specify the major version of the Elasticsearch nodes you will be connecting to.
+
WARNING: This default will track the latest version of Elasticsearch, and is only intended to be used during development. It is highly recommended that you set this parameter in all code that is headed to production.
Default::: `"1.3"`
Options in Node:::
* `"1.3"`
* `"1.2"`
* `"1.1"`
* `"1.0"`
* `"0.90"`
* `"master"` (unstable)
* `"1.x"` (unstable)
Options in the browser:::
* `"1.3"`
* `"1.2"`
* `"1.1"`
`sniffOnStart`[[config-sniff-on-start]]:: `Boolean` -- Should the client attempt to detect the rest of the cluster when it is first instantiated?
Default::: `false`
`sniffInterval`[[config-sniff-interval]]:: `Number, false` -- Every `n` milliseconds, perform a sniff operation and make sure our list of nodes is complete.
Default::: `false`
`sniffOnConnectionFault`[[config-sniff-on-connection-fault]]:: `Boolean` -- Should the client immediately sniff for a more current list of nodes when a connection dies?
Default::: `false`
`maxRetries`[[config-max-retries]]:: `Integer` -- How many times should the client try to connect to other nodes before returning a <<connection-fault,ConnectionFault>> error.
Default::: `3`
`requestTimeout`[[config-request-timeout]]:: `Number` -- Milliseconds before an HTTP request will be aborted and retried. This can also be set per request.
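+
Since the timeout can also be set per request, a request-level override might look like this (the `ping` call here is just an example endpoint):
+
[source,js]
-----
client.ping({
  requestTimeout: 1000 // override the 30000 ms default for this request only
}, function (err) {
  if (err) console.error('request timed out or the cluster is down');
});
-----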
Default::: `30000`
`deadTimeout`[[config-dead-timeout]]:: `Number` -- Milliseconds that a dead connection will wait before attempting to revive itself.
Default::: `30000`
`keepAlive`[[config-keep-alive]]:: `Boolean` -- Should the connections to the node be kept open forever? This behavior is recommended when you are connecting directly to Elasticsearch.
Default::: `true`
`maxSockets`[[config-max-sockets]]:: `Number` -- Maximum number of concurrent requests that can be made to any node.
Default::: `10`
`minSockets`[[config-min-sockets]]:: `Number` -- Minimum number of sockets to keep connected to a node. Only applies when `keepAlive` is `true`.
Default::: `10`
`suggestCompression`[[config-suggest-compression]]:: `Boolean` -- The client should inform Elasticsearch, on each request, that it can accept compressed responses. In order for the responses to actually be compressed, you must enable `http.compression` in Elasticsearch. See http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/modules-http.html[these docs] for additional info.
Default::: `false`
`connectionClass`[[config-connection-class]]:: `String, Constructor` -- Defines the class that will be used to create connections to store in the connection pool. If you are looking to implement additional protocols, you should probably start by writing a Connection class that extends `ConnectionAbstract`.
Defaults:::
* Node: `"http"`
* Browser Build: `"xhr"`
* Angular Build: `"angular"`
* jQuery Build: `"jquery"`
`selector`[[config-selector]]:: `String, Function` -- This function will be used to select a connection from the ConnectionPool. It should receive a single argument, the list of "active" connections, and return the connection to use. Use this selector to implement special logic for your client, such as preferring nodes in a certain rack or data-center.
+
To make this function asynchronous, accept a second argument which will be the callback to use. The callback should be called Node-style with a possible error like: `cb(err, selectedConnection)`.
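+
A sketch of the asynchronous form (the selection logic here is a placeholder; a real selector might consult an external service before answering):
+
[source,js]
-----
// Asynchronous selector: receives the active connections and a
// Node-style callback, and answers on the next tick.
function asyncSelector(connections, cb) {
  process.nextTick(function () {
    if (connections.length === 0) {
      return cb(new Error('no living connections'));
    }
    // placeholder logic: simply pick the first active connection
    cb(null, connections[0]);
  });
}
-----
+
Pass it to the client as `selector: asyncSelector`.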
Default::: `"roundRobin"`
Options:::
* `"roundRobin"`
* `"random"`
`defer`[[config-defer]]:: `Function` -- Override the way that the client creates promises. If you would rather use any other promise library this is how you'd do that. Elasticsearch.js expects that the defer object has a `promise` property (which will be returned to promise consumers), as well as `resolve` and `reject` methods.
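+
For example, a defer built on the runtime's native `Promise` (assuming an environment that provides one) could be swapped in like this:
+
[source,js]
-----
// Minimal defer backed by the built-in Promise. The client only needs
// the `promise`, `resolve`, and `reject` properties.
function promiseDefer() {
  var deferred = {};
  deferred.promise = new Promise(function (resolve, reject) {
    deferred.resolve = resolve;
    deferred.reject = reject;
  });
  return deferred;
}

// usage: new elasticsearch.Client({ defer: promiseDefer })
-----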
Default:::
+
[source,js]
-----
function () {
  return when.defer();
}
-----
`nodesToHostCallback`[[config-nodes-to-host-callback]]:: `Function` -- This function will receive the list of nodes returned from the `_cluster/nodes` API during a sniff operation. The function should return an array of objects which match the <<config-hosts,specification for the `hosts` config>>.
Default:::
see https://github.com/elasticsearch/elasticsearch-js/blob/master/src/lib/nodes_to_host.js[nodes_to_host.js]
=== Examples
Connect to just a single seed node, and use sniffing to find the rest of the cluster.
[source,js]
-----
var client = new elasticsearch.Client({
  host: 'localhost:9200',
  sniffOnStart: true,
  sniffInterval: 60000
});
-----
Specify a couple of hosts which use basic auth.
[source,js]
-----
var client = new elasticsearch.Client({
  hosts: [
    'https://user:pass@box1.server.org:9200',
    'https://user:pass@box2.server.org:9200'
  ]
});
-----
Use host objects to define extra properties, and a selector that uses those properties to pick a node.
[source,js]
-----
var client = new elasticsearch.Client({
  hosts: [
    {
      protocol: 'https',
      host: 'box1.server.org',
      port: 56394,
      country: 'EU',
      weight: 10
    },
    {
      protocol: 'https',
      host: 'box2.server.org',
      port: 56394,
      country: 'US',
      weight: 50
    }
  ],
  selector: function (connections) {
    var myCountry = process.env.COUNTRY;
    // first try to find a connection whose host is in the same country
    var selection = _.find(connections, function (connection) {
      return connection.host.country === myCountry;
    });
    if (!selection) {
      // otherwise choose the connection whose host has the smallest weight
      selection = _(connections).sortBy(function (connection) {
        return connection.host.weight;
      }).first();
    }
    return selection;
  }
});
-----
Use a custom `nodesToHostCallback` that will direct all of the requests to a proxy and select the node via a query-string param.
[source,js]
-----
var client = new elasticsearch.Client({
  nodesToHostCallback: function (nodes) {
    /*
     * The nodes object will look something like this:
     * {
     *   "y-YWd-LITrWXWoCi4r2GlQ": {
     *     name: "Supremor",
     *     transport_address: "inet[/192.168.1.15:9300]",
     *     hostname: "Small-ESBox.infra",
     *     version: "1.0.0",
     *     http_address: "inet[/192.168.1.15:9200]",
     *     attributes: {
     *       custom: "attribute"
     *     }
     *   },
     *   ...
     * }
     */
    return _.transform(nodes, function (nodeList, node, id) {
      var port = node.http_address.match(/:(\d+)/)[1];
      nodeList.push({
        host: 'esproxy.example.com',
        port: 80,
        query: {
          nodeHostname: node.hostname,
          nodePort: port
        }
      });
    }, []);
  }
});
-----