Cluster and Node Management Commands
A cluster is a set of Solr nodes operating in coordination with each other.
These API commands work with a SolrCloud cluster at the entire cluster level, or on individual nodes.
CLUSTERSTATUS: Cluster Status
Fetch the cluster status including collections, shards, replicas, configuration name as well as collection aliases and cluster properties.
Additionally, this command reports a health status of each collection and shard, in order to make it easier to monitor the operational state of the collections.
The following health state values are defined, ordered from best to worst, based on the percentage of active replicas (active):
GREEN - active == 100%, all replicas are active and there’s a shard leader.
YELLOW - 100% > active > 50%, AND there’s a shard leader.
ORANGE - 50% >= active > 0%, AND there’s a shard leader.
RED - No active replicas OR there’s no shard leader.
The collection health state is reported as the worst state of any shard, e.g., for a collection with all shards GREEN except for one YELLOW the collection health will be reported as YELLOW.
V1 API
http://localhost:8983/solr/admin/collections?action=CLUSTERSTATUS
V2 API
curl -X GET http://localhost:8983/api/cluster
CLUSTERSTATUS Parameters
collection
Optional
Default: none
The collection or alias name for which information is requested. If omitted, information on all collections in the cluster will be returned. If an alias is supplied, information on the collections in the alias will be returned.
shard
Optional
Default: none
The shard(s) for which information is requested. Multiple shard names can be specified as a comma-separated list.
_route_
Optional
Default: none
This can be used if you need the details of the shard to which a particular document belongs and you don’t know which shard it falls under.
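For example, to request the status of a single collection and two of its shards, the parameters above can be combined like this (the collection and shard names are illustrative):
http://localhost:8983/solr/admin/collections?action=CLUSTERSTATUS&collection=techproducts&shard=shard1,shard2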
CLUSTERSTATUS Response
The response will include the status of the request and the status of the cluster.
Examples using CLUSTERSTATUS
Input
http://localhost:8983/solr/admin/collections?action=CLUSTERSTATUS
Output
{
"responseHeader":{
"status":0,
"QTime":333},
"cluster":{
"collections":{
"collection1":{
"shards":{
"shard1":{
"range":"80000000-ffffffff",
"state":"active",
"health": "GREEN",
"replicas":{
"core_node1":{
"state":"active",
"core":"collection1",
"node_name":"127.0.1.1:8983_solr",
"base_url":"http://127.0.1.1:8983/solr",
"leader":"true"},
"core_node3":{
"state":"active",
"core":"collection1",
"node_name":"127.0.1.1:8900_solr",
"base_url":"http://127.0.1.1:8900/solr"}}},
"shard2":{
"range":"0-7fffffff",
"state":"active",
"health": "GREEN",
"replicas":{
"core_node2":{
"state":"active",
"core":"collection1",
"node_name":"127.0.1.1:7574_solr",
"base_url":"http://127.0.1.1:7574/solr",
"leader":"true"},
"core_node4":{
"state":"active",
"core":"collection1",
"node_name":"127.0.1.1:7500_solr",
"base_url":"http://127.0.1.1:7500/solr"}}}},
"router":{"name":"compositeId"},
"replicationFactor":"1",
"znodeVersion": 11,
"autoCreated":"true",
"configName" : "my_config",
"health": "GREEN",
"aliases":["both_collections"]
},
"collection2":{
"..."
}
},
"aliases":{ "both_collections":"collection1,collection2" },
"roles":{
"overseer":[
"127.0.1.1:8983_solr",
"127.0.1.1:7574_solr"]
},
"live_nodes":[
"127.0.1.1:7574_solr",
"127.0.1.1:7500_solr",
"127.0.1.1:8983_solr",
"127.0.1.1:8900_solr"]
}
}
CLUSTERPROP: Cluster Properties
Add, edit or delete a cluster-wide property.
V1 API
http://localhost:8983/solr/admin/collections?action=CLUSTERPROP&name=urlScheme&val=https
V2 API
curl -X POST http://localhost:8983/api/cluster -H 'Content-Type: application/json' -d '
{
"set-property": {
"name": "urlScheme",
"val": "https"
}
}
'
CLUSTERPROP Parameters
name
Optional
Default: none
The name of the property. Supported property names are location, maxCoresPerNode, urlScheme, and defaultShardPreferences. If the Jaeger tracing module has been enabled, the property samplePercentage is also available.
Other properties can be set (for example, if you need them for custom plugins) but they must begin with the prefix ext. Unknown properties that don’t begin with ext. will be rejected.
val
Optional
Default: none
The value of the property. If the value is empty or null, the property is unset.
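As an illustration of the ext. prefix rule described above (the property name and value below are made up for this example):
http://localhost:8983/solr/admin/collections?action=CLUSTERPROP&name=ext.myPluginSetting&val=42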
CLUSTERPROP Response
The response will include the status of the request and the properties that were updated or removed. If the status is anything other than "0", an error message will explain why the request failed.
Examples using CLUSTERPROP
Input
http://localhost:8983/solr/admin/collections?action=CLUSTERPROP&name=urlScheme&val=https&wt=xml
Output
<response>
<lst name="responseHeader">
<int name="status">0</int>
<int name="QTime">0</int>
</lst>
</response>
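To unset a property, send the same request with an empty (or null) val, as described in the parameter list above. For example, to remove the urlScheme property set earlier:
http://localhost:8983/solr/admin/collections?action=CLUSTERPROP&name=urlScheme&val=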
Setting Cluster-Wide Defaults
It is possible to set cluster-wide default values for certain attributes of a collection, using the defaults parameter.
Set/update default values
V1 API
There is no V1 equivalent of this action.
V2 API
curl -X POST -H 'Content-type:application/json' --data-binary '
{
"set-obj-property": {
"defaults" : {
"collection": {
"numShards": 2,
"nrtReplicas": 1,
"tlogReplicas": 1,
"pullReplicas": 1
}
}
}
}' http://localhost:8983/api/cluster
Unset only the value of nrtReplicas
curl -X POST -H 'Content-type:application/json' --data-binary '
{
"set-obj-property": {
"defaults" : {
"collection": {
"nrtReplicas": null
}
}
}
}' http://localhost:8983/api/cluster
Unset all values in defaults
curl -X POST -H 'Content-type:application/json' --data-binary '
{ "set-obj-property" : {
"defaults" : null
}' http://localhost:8983/api/cluster
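Once such defaults are in place, collections created without explicit values pick them up. A minimal sketch using the V1 CREATE action, assuming the defaults set above (the collection and config names are illustrative):
http://localhost:8983/solr/admin/collections?action=CREATE&name=newCollection&collection.configName=my_config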
Default Shard Preferences
Using the defaultShardPreferences parameter, you can implement rack or availability zone awareness.
First, make sure to "label" your nodes using a system property (e.g., -Drack=rack1).
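A minimal sketch of labeling a node at startup, assuming the standard bin/solr script is used and that -D system properties are passed through to the JVM (the rack value is illustrative):
bin/solr start -c -Drack=rack1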
Then, set the value of defaultShardPreferences to node.sysprop:sysprop.YOUR_PROPERTY_NAME like this:
curl -X POST -H 'Content-type:application/json' --data-binary '
{
"set-property" : {
"name" : "defaultShardPreferences",
"val" : "node.sysprop:sysprop.rack"
}
}' http://localhost:8983/api/cluster
At this point, if you run a query on a node having, e.g., rack=rack1, Solr will try to hit only replicas from rack1.
Balance Replicas
Shuffle the replicas across the given set of Solr nodes until an equilibrium is reached.
The configured Replica Placement Plugin will be used to decide:
- Which replicas should be moved for the balancing
- Which nodes those replicas should be placed on
- When the cluster has reached an "equilibrium"
V2 API
curl -X POST http://localhost:8983/api/cluster/replicas/balance -H 'Content-Type: application/json' -d '
{
"nodes": ["localhost:8983_solr", "localhost:8984_solr"],
"async": "balance-replicas-1"
}
'
Parameters
nodes
Optional
Default: none
The nodes over which replicas will be balanced. Replicas that live outside this set of nodes will not be included in the balancing.
If this parameter is not provided, all live data nodes will be used.
waitForFinalState
Optional
Default: false
If true, the request will complete only when all affected replicas become active. If false, the API will return when the bare minimum replicas are active, such as the affected leader replicas.
async
Optional
Default: none
Request ID to track this action which will be processed asynchronously.
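Because the example request above includes an async ID, it returns immediately; its progress can then be polled with the Collections API REQUESTSTATUS action:
http://localhost:8983/solr/admin/collections?action=REQUESTSTATUS&requestid=balance-replicas-1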
BalanceReplicas Response
The response will include the status of the request. If the status is anything other than "0", an error message will explain why the request failed.
This operation does not hold the necessary locks on the replicas that belong to the affected nodes, so don’t perform other collection operations during this period.
BALANCESHARDUNIQUE: Balance a Property Across Nodes
Ensures that a particular property is distributed evenly amongst the physical nodes that make up a collection. If the property already exists on a replica, every effort is made to leave it there. If the property is not on any replica on a shard, one is chosen and the property is added.
V1 API
http://localhost:8983/solr/admin/collections?action=BALANCESHARDUNIQUE&collection=techproducts&property=preferredLeader
V2 API
curl -X POST http://localhost:8983/api/collections/techproducts/balance-shard-unique -H 'Content-Type: application/json' -d '
{
"property": "preferredLeader"
}
'
BALANCESHARDUNIQUE Parameters
collection
Required
Default: none
The name of the collection to balance the property in.
property
Required
Default: none
The property to balance. The literal property. is prepended to this property name if not specified explicitly.
onlyactivenodes
Optional
Default: true
Normally, the property is instantiated on active nodes only. If this parameter is specified as false, then inactive nodes are also included for distribution.
shardUnique
Optional
Default: none
Something of a safety valve. There is one pre-defined property (preferredLeader) that defaults this value to true. For all other properties that are balanced, this must be set to true or an error message will be returned.
BALANCESHARDUNIQUE Response
The response will include the status of the request. If the status is anything other than "0", an error message will explain why the request failed.
Examples using BALANCESHARDUNIQUE
Input
Either of these commands would put the "preferredLeader" property on one replica in every shard in the "collection1" collection.
http://localhost:8983/solr/admin/collections?action=BALANCESHARDUNIQUE&collection=collection1&property=preferredLeader&wt=xml
http://localhost:8983/solr/admin/collections?action=BALANCESHARDUNIQUE&collection=collection1&property=property.preferredLeader&wt=xml
Output
<response>
<lst name="responseHeader">
<int name="status">0</int>
<int name="QTime">9</int>
</lst>
</response>
Examining the clusterstate after issuing this call should show exactly one replica in each shard that has this property.
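One way to check is to fetch the cluster status for the collection and inspect the replica details, assuming the balanced property appears there under a key such as property.preferredLeader (a hedged sketch; the exact key name may vary):
http://localhost:8983/solr/admin/collections?action=CLUSTERSTATUS&collection=collection1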
Migrate Replicas
Migrate all replicas off of a given set of source nodes.
If more than one node is used as a targetNode (either explicitly, or by default), then the configured Replica Placement Plugin will be used to determine which targetNode should be used for each migrated replica.
V2 API
curl -X POST http://localhost:8983/api/cluster/replicas/migrate -H 'Content-Type: application/json' -d '
{
"sourceNodes": ["localhost:8983_solr", "localhost:8984_solr"],
"targetNodes": ["localhost:8985_solr", "localhost:8986_solr"],
"async": "migrate-replicas-1"
}
'
Parameters
sourceNodes
Required
Default: none
The nodes from which all replicas will be migrated. Replicas that live outside this set of nodes will not be included in the migration.
targetNodes
Optional
Default: none
The nodes which the migrated replicas will be moved to. If none is provided, then the API will use all live nodes not listed in sourceNodes.
If there is more than one node to migrate the replicas to, then the configured Replica Placement Plugin will choose among these nodes for each replica.
waitForFinalState
Optional
Default: false
If true, the request will complete only when all affected replicas become active. If false, the API will return when the bare minimum replicas are active, such as the affected leader replicas.
async
Optional
Default: none
Request ID to track this action which will be processed asynchronously.
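If targetNodes is omitted, all live nodes outside sourceNodes are candidates and the placement plugin chooses among them. A minimal sketch (the node name is illustrative):
curl -X POST http://localhost:8983/api/cluster/replicas/migrate -H 'Content-Type: application/json' -d '
{
  "sourceNodes": ["localhost:8983_solr"]
}
'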
MigrateReplicas Response
The response will include the status of the request. If the status is anything other than "0", an error message will explain why the request failed.
This operation does not hold the necessary locks on the replicas that belong to the source nodes, so don’t perform other collection operations during this period.
REPLACENODE: Move All Replicas in a Node to Another
This API’s functionality has been replaced and enhanced by Migrate Replicas. Please consider using the new API instead, as this API may be removed in a future version.
This command recreates replicas from one node (the source) on one or more other nodes (the target). After each replica is copied, the replicas on the source node are deleted.
For source replicas that are also shard leaders, the operation will wait for the number of seconds set with the timeout parameter to make sure there’s an active replica that can become a leader (either an existing replica becoming a leader, or the new replica completing recovery and becoming a leader).
If no targetNode is provided, then the configured Replica Placement Plugin will be used to determine which node each recreated replica should be placed on.
V1 API
http://localhost:8983/solr/admin/collections?action=REPLACENODE&sourceNode=source-node&targetNode=target-node
V2 API
curl -X POST "http://localhost:8983/api/cluster/nodes/localhost:7574_solr/replace" -H 'Content-Type: application/json' -d '
{
"targetNodeName": "localhost:8983_solr",
"waitForFinalState": "false",
"async": "async"
}
'
REPLACENODE Parameters
sourceNode
Required
Default: none
The source node from which the replicas need to be copied.
targetNode
Optional
Default: none
The target node where replicas will be copied. If this parameter is not provided, Solr will use all live nodes except for the sourceNode. The configured Replica Placement Plugin will be used to determine which node will be used for each replica.
parallel
Optional
Default: false
If this flag is set to true, all replicas are created in separate threads. Keep in mind that this can lead to very high network and disk I/O if the replicas have very large indices.
waitForFinalState
Optional
Default: false
If true, the request will complete only when all affected replicas become active. If false, the API will return when the bare minimum replicas are active, such as the affected leader replicas.
async
Optional
Default: none
Request ID to track this action which will be processed asynchronously.
timeout
Optional
Default: 300 seconds
Time in seconds to wait until new replicas are created, and until leader replicas are fully recovered.
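For instance, a V1 request that copies replicas in parallel and runs asynchronously could combine the parameters above (the node names and async ID are illustrative):
http://localhost:8983/solr/admin/collections?action=REPLACENODE&sourceNode=localhost:7574_solr&targetNode=localhost:8983_solr&parallel=true&async=replace-node-1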
This operation does not hold the necessary locks on the replicas that belong to the source node, so don’t perform other collection operations during this period.
DELETENODE: Delete Replicas in a Node
Deletes all replicas of all collections in that node. Please note that the node itself will remain as a live node after this operation.
V1 API
http://localhost:8983/solr/admin/collections?action=DELETENODE&node=nodeName
V2 API
curl -X POST "http://localhost:8983/api/cluster/nodes/localhost:7574_solr/clear/" -H 'Content-Type: application/json' -d '
{
"async": "someAsyncId"
}
'
DELETENODE Parameters
node
Required
Default: none
The node to be removed.
async
Optional
Default: none
Request ID to track this action which will be processed asynchronously.
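A concrete V1 invocation, using the node name format shown in the CLUSTERSTATUS output earlier (the node name and async ID are illustrative):
http://localhost:8983/solr/admin/collections?action=DELETENODE&node=127.0.1.1:7574_solr&async=someAsyncId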
ADDROLE: Add a Role
Assigns a role to a given node in the cluster.
The only supported role is overseer.
Use this command to dedicate a particular node as Overseer. Invoke it multiple times to add more nodes. This is useful in large clusters where an Overseer is likely to get overloaded. If available, one among the list of nodes which are assigned the 'overseer' role would become the overseer. The system would assign the role to any other node if none of the designated nodes are up and running.
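For example, to designate two dedicated Overseer candidates, issue the command once per node (the node names are illustrative):
http://localhost:8983/solr/admin/collections?action=ADDROLE&role=overseer&node=localhost:8983_solr
http://localhost:8983/solr/admin/collections?action=ADDROLE&role=overseer&node=localhost:7574_solr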
V1 API
http://localhost:8983/solr/admin/collections?action=ADDROLE&role=overseer&node=localhost:8983_solr
V2 API
curl -X POST http://localhost:8983/api/cluster -H 'Content-Type: application/json' -d '
{
"add-role": {
"role": "overseer",
"node": "localhost:8983_solr"
}
}
'
ADDROLE Parameters
role
Required
Default: none
The name of the role. The only supported role as of now is overseer.
node
Required
Default: none
The name of the node that will be assigned the role. It is possible to assign a role even before that node is started.
REMOVEROLE: Remove Role
Remove an assigned role. This API is used to undo the roles assigned using the ADDROLE operation.
V1 API
http://localhost:8983/solr/admin/collections?action=REMOVEROLE&role=overseer&node=localhost:8983_solr
V2 API
curl -X POST http://localhost:8983/api/cluster -H 'Content-Type: application/json' -d '
{
"remove-role": {
"role": "overseer",
"node": "localhost:8983_solr"
}
}
'
REMOVEROLE Parameters
role
Required
Default: none
The name of the role. The only supported role as of now is overseer.
node
Required
Default: none
The name of the node where the role should be removed.
OVERSEERSTATUS: Overseer Status and Statistics
Returns the current status of the overseer, performance statistics of various overseer APIs, and the last 10 failures per operation type.
V1 API
http://localhost:8983/solr/admin/collections?action=OVERSEERSTATUS
V2 API
curl -X GET http://localhost:8983/api/cluster/overseer
Examples using OVERSEERSTATUS
Input:
http://localhost:8983/solr/admin/collections?action=OVERSEERSTATUS
{
"responseHeader":{
"status":0,
"QTime":33},
"leader":"127.0.1.1:8983_solr",
"overseer_queue_size":0,
"overseer_work_queue_size":0,
"overseer_collection_queue_size":2,
"overseer_operations":[
"createcollection",{
"requests":2,
"errors":0,
"avgRequestsPerSecond":0.7467088842794136,
"5minRateRequestsPerSecond":7.525069023276674,
"15minRateRequestsPerSecond":10.271274280947182,
"avgTimePerRequest":0.5050685,
"medianRequestTime":0.5050685,
"75thPcRequestTime":0.519016,
"95thPcRequestTime":0.519016,
"99thPcRequestTime":0.519016,
"999thPcRequestTime":0.519016},
"removeshard",{
"..."
}],
"collection_operations":[
"splitshard",{
"requests":1,
"errors":1,
"recent_failures":[{
"request":{
"operation":"splitshard",
"shard":"shard2",
"collection":"example1"},
"response":[
"Operation splitshard caused exception:","org.apache.solr.common.SolrException:org.apache.solr.common.SolrException: No shard with the specified name exists: shard2",
"exception",{
"msg":"No shard with the specified name exists: shard2",
"rspCode":400}]}],
"avgRequestsPerSecond":0.8198143044809885,
"5minRateRequestsPerSecond":8.043840552427673,
"15minRateRequestsPerSecond":10.502079828515368,
"avgTimePerRequest":2952.7164175,
"medianRequestTime":2952.7164175000003,
"75thPcRequestTime":5904.384052,
"95thPcRequestTime":5904.384052,
"99thPcRequestTime":5904.384052,
"999thPcRequestTime":5904.384052},
"..."
],
"overseer_queue":[
"..."
],
"..."
}