Collections API

The Collections API is used to create, remove, or reload collections.

In the context of SolrCloud you can use it to create collections with a specific number of shards and replicas, move replicas or shards, and create or delete collection aliases.

CREATE: Create a Collection

/admin/collections?action=CREATE&name=name

CREATE Parameters

The CREATE action allows the following parameters:

name

The name of the collection to be created. This parameter is required.

router.name

The router name that will be used. The router defines how documents will be distributed among the shards. Possible values are implicit or compositeId, which is the default.

The implicit router does not automatically route documents to different shards. Whichever shard you indicate on the indexing request (or within each document) will be used as the destination for those documents.

The compositeId router hashes the value in the uniqueKey field and looks up that hash in the collection’s clusterstate to determine which shard will receive the document, with the additional ability to manually direct the routing.

When using the implicit router, the shards parameter is required. When using the compositeId router, the numShards parameter is required.

For more information, see also the section Document Routing.
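
For example, the following requests (the collection names are only illustrative) create one collection that uses the default compositeId router and one that uses the implicit router; note that the first supplies numShards while the second supplies explicit shard names via shards:

http://localhost:8983/solr/admin/collections?action=CREATE&name=hashRouted&router.name=compositeId&numShards=2&replicationFactor=1

http://localhost:8983/solr/admin/collections?action=CREATE&name=manuallyRouted&router.name=implicit&shards=shard-x,shard-y,shard-z&replicationFactor=1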

numShards

The number of shards to be created as part of the collection. This is a required parameter when the router.name is compositeId.

shards

A comma separated list of shard names, e.g., shard-x,shard-y,shard-z. This is a required parameter when the router.name is implicit.

replicationFactor

The number of replicas to be created for each shard. The default is 1.

This will create an NRT type of replica. If you want another type of replica, see the tlogReplicas and pullReplicas parameters below. See the section Types of Replicas for more information about replica types.

nrtReplicas

The number of NRT (Near-Real-Time) replicas to create for this collection. This type of replica maintains a transaction log and updates its index locally. If you want all of your replicas to be of this type, you can simply use replicationFactor instead.

tlogReplicas

The number of TLOG replicas to create for this collection. This type of replica maintains a transaction log but only updates its index via replication from a leader. See the section Types of Replicas for more information about replica types.

pullReplicas

The number of PULL replicas to create for this collection. This type of replica does not maintain a transaction log and only updates its index via replication from a leader. This type is not eligible to become a leader and should not be the only type of replicas in the collection. See the section Types of Replicas for more information about replica types.
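
For example, the following request (the collection name and counts are only illustrative) creates a two-shard collection with one NRT, one TLOG, and one PULL replica per shard instead of relying solely on replicationFactor:

http://localhost:8983/solr/admin/collections?action=CREATE&name=mixedReplicaTypes&numShards=2&nrtReplicas=1&tlogReplicas=1&pullReplicas=1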

maxShardsPerNode

When creating collections, the shards and/or replicas are spread across all available (i.e., live) nodes, and two replicas of the same shard will never be on the same node.

If a node is not live when the CREATE action is called, it will not get any parts of the new collection, which could lead to too many replicas being created on a single live node. Defining maxShardsPerNode sets a limit on the number of replicas the CREATE action will spread to each node.

If the entire collection can not be fit into the live nodes, no collection will be created at all. The default maxShardsPerNode value is 1.

createNodeSet

Allows defining the nodes to spread the new collection across. The format is a comma-separated list of node_names, such as localhost:8983_solr,localhost:8984_solr,localhost:8985_solr.

If not provided, the CREATE operation will create shard-replicas spread across all live Solr nodes.

Alternatively, use the special value of EMPTY to initially create no shard-replica within the new collection and then later use the ADDREPLICA operation to add shard-replicas when and where required.
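
For example (the collection and node names are only illustrative), a collection can be created with no replicas at first and populated later with ADDREPLICA:

http://localhost:8983/solr/admin/collections?action=CREATE&name=emptyStart&numShards=1&createNodeSet=EMPTY

http://localhost:8983/solr/admin/collections?action=ADDREPLICA&collection=emptyStart&shard=shard1&node=localhost:8983_solr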

createNodeSet.shuffle

Controls whether the shard-replicas created for this collection will be assigned to the nodes specified by the createNodeSet in a sequential manner, or whether the list of nodes should be shuffled prior to creating individual replicas.

A false value makes the results of a collection creation predictable and gives more exact control over the location of the individual shard-replicas, but true can be a better choice for ensuring replicas are distributed evenly across nodes. The default is true.

This parameter is ignored if createNodeSet is not also specified.

collection.configName

Defines the name of the configuration (which must already be stored in ZooKeeper) to use for this collection. If not provided, Solr will use the _default configset to create a new (and mutable) configset named <collectionName>.AUTOCREATED and will use it for the new collection. When such a collection (one that uses a copy of the _default configset) is deleted, the autocreated configset is not deleted by default.

router.field

If this parameter is specified, the router will look at the value of the field in an input document to compute the hash and identify a shard instead of looking at the uniqueKey field. If the field specified is null in the document, the document will be rejected.

Please note that RealTime Get or retrieval by document ID would also require the parameter _route_ (or shard.keys) to avoid a distributed search.

property.name=value

Set core property name to value. See the section Defining core.properties for details on supported properties and values.

autoAddReplicas

When set to true, enables automatic addition of replicas when the number of active replicas falls below the value set for replicationFactor. This may occur if a replica goes down, for example. The default is false, which means new replicas will not be added.

While this parameter is provided as part of Solr’s set of features to provide autoscaling of clusters, it is available even when you have not implemented any other part of autoscaling (such as a policy). See the section SolrCloud Autoscaling Automatically Adding Replicas for more details about this option and how it can be used.

async

Request ID to track this action which will be processed asynchronously.

rule

Replica placement rules. See the section Rule-based Replica Placement for details.

snitch

Details of the snitch provider. See the section Rule-based Replica Placement for details.

policy

Name of the collection-level policy. See Defining Collection-Specific Policies for details.

waitForFinalState

If true, the request will complete only when all affected replicas become active. The default is false, which means that the API will return the status of the single action, which may be before the new replica is online and active.

withCollection

The name of the collection with which all replicas of this collection must be co-located. The collection must already exist and must have a single shard named shard1. See Colocating collections for more details.

CREATE Response

The response will include the status of the request and the new core names. If the status is anything other than "success", an error message will explain why the request failed.

Examples using CREATE

Input

http://localhost:8983/solr/admin/collections?action=CREATE&name=newCollection&numShards=2&replicationFactor=1&wt=xml

Output

<response>
  <lst name="responseHeader">
    <int name="status">0</int>
    <int name="QTime">3764</int>
  </lst>
  <lst name="success">
    <lst>
      <lst name="responseHeader">
        <int name="status">0</int>
        <int name="QTime">3450</int>
      </lst>
      <str name="core">newCollection_shard1_replica1</str>
    </lst>
    <lst>
      <lst name="responseHeader">
        <int name="status">0</int>
        <int name="QTime">3597</int>
      </lst>
      <str name="core">newCollection_shard2_replica1</str>
    </lst>
  </lst>
</response>

MODIFYCOLLECTION: Modify Attributes of a Collection

/admin/collections?action=MODIFYCOLLECTION&collection=<collection-name>&<attribute-name>=<attribute-value>&<another-attribute-name>=<another-value>&<yet_another_attribute_name>=

It’s possible to edit multiple attributes at a time. Changing these values only updates the z-node on ZooKeeper; it does not change the topology of the collection. For instance, increasing replicationFactor will not automatically add more replicas to the collection but will allow more ADDREPLICA commands to succeed.

An attribute can be deleted by passing an empty value. For example, yet_another_attribute_name= (with no value) will delete the yet_another_attribute_name parameter from the collection.
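
For example, the following illustrative request (the collection name and values are assumptions) raises replicationFactor to 2 and deletes a previously set rule attribute by passing an empty value:

http://localhost:8983/solr/admin/collections?action=MODIFYCOLLECTION&collection=newCollection&replicationFactor=2&rule=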

MODIFYCOLLECTION Parameters

collection

The name of the collection to be modified. This parameter is required.

attribute=value

Key-value pairs of attribute names and attribute values.

At least one attribute parameter is required.

The attributes that can be modified are:

  • maxShardsPerNode

  • replicationFactor

  • autoAddReplicas

  • collection.configName

  • rule

  • snitch

  • policy

  • withCollection

See the CREATE action section above for details on these attributes.

RELOAD: Reload a Collection

/admin/collections?action=RELOAD&name=name

The RELOAD action is used when you have changed a configuration in ZooKeeper.

RELOAD Parameters

name

The name of the collection to reload. This parameter is required.

async

Request ID to track this action which will be processed asynchronously.

RELOAD Response

The response will include the status of the request and the cores that were reloaded. If the status is anything other than "success", an error message will explain why the request failed.

Examples using RELOAD

Input

http://localhost:8983/solr/admin/collections?action=RELOAD&name=newCollection&wt=xml

Output

<response>
  <lst name="responseHeader">
    <int name="status">0</int>
    <int name="QTime">1551</int>
  </lst>
  <lst name="success">
    <lst name="10.0.1.6:8983_solr">
      <lst name="responseHeader">
        <int name="status">0</int>
        <int name="QTime">761</int>
      </lst>
    </lst>
    <lst name="10.0.1.4:8983_solr">
      <lst name="responseHeader">
        <int name="status">0</int>
        <int name="QTime">1527</int>
      </lst>
    </lst>
  </lst>
</response>

SPLITSHARD: Split a Shard

/admin/collections?action=SPLITSHARD&collection=name&shard=shardID

Splitting a shard will take an existing shard and break it into two pieces which are written to disk as two (new) shards. The original shard will continue to contain the same data as-is but it will start re-routing requests to the new shards. The new shards will have as many replicas as the original shard. A soft commit is automatically issued after splitting a shard so that documents are made visible on sub-shards. An explicit commit (hard or soft) is not necessary after a split operation because the index is automatically persisted to disk during the split operation.

This command allows for seamless splitting and requires no downtime. A shard being split will continue to accept query and indexing requests and will automatically start routing requests to the new shards once this operation is complete. This command can only be used for SolrCloud collections created with the numShards parameter, meaning collections which rely on Solr’s hash-based routing mechanism.

The split is performed by dividing the original shard’s hash range into two equal partitions and dividing up the documents in the original shard according to the new sub-ranges. Two parameters discussed below, ranges and split.key, provide further control over how the split occurs.

The newly created shards will have as many replicas as the parent shard, of the same replica types.

When using splitMethod=rewrite (default) you must ensure that the node running the leader of the parent shard has enough free disk space, i.e., more than twice the index size, for the split to succeed. The API uses the Autoscaling framework to find nodes that can satisfy the disk requirements for the new replicas but only when an Autoscaling policy is configured. Refer to the Autoscaling Policy and Preferences section for more details.

Also, the first replicas of resulting sub-shards will always be placed on the shard leader node, which may cause Autoscaling policy violations that need to be resolved either automatically (when appropriate triggers are in use) or manually.

Shard splitting can be a long running process. In order to avoid timeouts, you should run this as an asynchronous call.
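
For example (the request ID is arbitrary), the split can be submitted asynchronously and the request ID then polled with the REQUESTSTATUS action described later in this section:

http://localhost:8983/solr/admin/collections?action=SPLITSHARD&collection=anotherCollection&shard=shard1&async=split-1

http://localhost:8983/solr/admin/collections?action=REQUESTSTATUS&requestid=split-1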

SPLITSHARD Parameters

collection

The name of the collection that includes the shard to be split. This parameter is required.

shard

The name of the shard to be split. This parameter is required when split.key is not specified.

ranges

A comma-separated list of hash ranges in hexadecimal, such as ranges=0-1f4,1f5-3e8,3e9-5dc.

This parameter can be used to divide the original shard’s hash range into arbitrary hash range intervals specified in hexadecimal. For example, if the original hash range is 0-1500 then adding the parameter: ranges=0-1f4,1f5-3e8,3e9-5dc will divide the original shard into three shards with hash range 0-500, 501-1000, and 1001-1500 respectively.
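
An illustrative request using this parameter (the collection and shard names are assumptions) would look like:

http://localhost:8983/solr/admin/collections?action=SPLITSHARD&collection=anotherCollection&shard=shard1&ranges=0-1f4,1f5-3e8,3e9-5dc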

split.key

The key to use for splitting the index.

This parameter can be used to split a shard using a route key such that all documents of the specified route key end up in a single dedicated sub-shard. Providing the shard parameter is not required in this case because the route key is enough to figure out the right shard. A route key which spans more than one shard is not supported.

For example, suppose split.key=A! hashes to the range 12-15 and belongs to shard 'shard1' with range 0-20. Splitting by this route key would yield three sub-shards with ranges 0-11, 12-15 and 16-20. Note that the sub-shard with the hash range of the route key may also contain documents for other route keys whose hash ranges overlap.
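
An illustrative request isolating the route key "A!" into its own sub-shard (the collection name is an assumption) would be:

http://localhost:8983/solr/admin/collections?action=SPLITSHARD&collection=anotherCollection&split.key=A!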

splitMethod

Currently two methods of shard splitting are supported:

  • splitMethod=rewrite (default) after selecting documents to retain in each partition this method creates sub-indexes from scratch, which is a lengthy CPU- and I/O-intensive process but results in optimally-sized sub-indexes that don’t contain any data from documents not belonging to each partition.

  • splitMethod=link uses file system-level hard links for creating copies of the original index files and then only modifies the file that contains the list of deleted documents in each partition. This method is many times quicker and lighter on resources than the rewrite method but the resulting sub-indexes are still as large as the original index because they still contain data from documents not belonging to the partition. This slows down the replication process and consumes more disk space on replica nodes (the multiple hard-linked copies don’t occupy additional disk space on the leader node, unless hard-linking is not supported).

property.name=value

Set core property name to value. See the section Defining core.properties for details on supported properties and values.

waitForFinalState

If true, the request will complete only when all affected replicas become active. The default is false, which means that the API will return the status of the single action, which may be before the new replica is online and active.

timing

If true then each stage of processing will be timed and a timing section will be included in response.

async

Request ID to track this action which will be processed asynchronously

SPLITSHARD Response

The output will include the status of the request and the new shard names, which will use the original shard as their basis, adding an underscore and a number. For example, "shard1" will become "shard1_0" and "shard1_1". If the status is anything other than "success", an error message will explain why the request failed.

Examples using SPLITSHARD

Input

Split shard1 of the "anotherCollection" collection.

http://localhost:8983/solr/admin/collections?action=SPLITSHARD&collection=anotherCollection&shard=shard1&wt=xml

Output

<response>
  <lst name="responseHeader">
    <int name="status">0</int>
    <int name="QTime">6120</int>
  </lst>
  <lst name="success">
    <lst>
      <lst name="responseHeader">
        <int name="status">0</int>
        <int name="QTime">3673</int>
      </lst>
      <str name="core">anotherCollection_shard1_1_replica1</str>
    </lst>
    <lst>
      <lst name="responseHeader">
        <int name="status">0</int>
        <int name="QTime">3681</int>
      </lst>
      <str name="core">anotherCollection_shard1_0_replica1</str>
    </lst>
    <lst>
      <lst name="responseHeader">
        <int name="status">0</int>
        <int name="QTime">6008</int>
      </lst>
    </lst>
    <lst>
      <lst name="responseHeader">
        <int name="status">0</int>
        <int name="QTime">6007</int>
      </lst>
    </lst>
    <lst>
      <lst name="responseHeader">
        <int name="status">0</int>
        <int name="QTime">71</int>
      </lst>
    </lst>
    <lst>
      <lst name="responseHeader">
        <int name="status">0</int>
        <int name="QTime">0</int>
      </lst>
      <str name="core">anotherCollection_shard1_1_replica1</str>
      <str name="status">EMPTY_BUFFER</str>
    </lst>
    <lst>
      <lst name="responseHeader">
        <int name="status">0</int>
        <int name="QTime">0</int>
      </lst>
      <str name="core">anotherCollection_shard1_0_replica1</str>
      <str name="status">EMPTY_BUFFER</str>
    </lst>
  </lst>
</response>

CREATESHARD: Create a Shard

Shards can only be created with this API for collections that use the 'implicit' router (i.e., when the collection was created with router.name=implicit). A new shard with a name can be created for an existing 'implicit' collection.

Use SPLITSHARD for collections created with the 'compositeId' router (router.name=compositeId).

/admin/collections?action=CREATESHARD&shard=shardName&collection=name

CREATESHARD Parameters

collection

The name of the collection in which the new shard will be created. This parameter is required.

shard

The name of the shard to be created. This parameter is required.

createNodeSet

Allows defining the nodes to spread the replicas of the new shard across. If not provided, the CREATESHARD operation will create shard-replicas spread across all live Solr nodes.

The format is a comma-separated list of node_names, such as localhost:8983_solr,localhost:8984_solr,localhost:8985_solr.

property.name=value

Set core property name to value. See the section Defining core.properties for details on supported properties and values.

waitForFinalState

If true, the request will complete only when all affected replicas become active. The default is false, which means that the API will return the status of the single action, which may be before the new replica is online and active.

async

Request ID to track this action which will be processed asynchronously.

CREATESHARD Response

The output will include the status of the request. If the status is anything other than "success", an error message will explain why the request failed.

Examples using CREATESHARD

Input

Create 'shard-z' for the "anImplicitCollection" collection.

http://localhost:8983/solr/admin/collections?action=CREATESHARD&collection=anImplicitCollection&shard=shard-z&wt=xml

Output

<response>
  <lst name="responseHeader">
    <int name="status">0</int>
    <int name="QTime">558</int>
  </lst>
</response>

DELETESHARD: Delete a Shard

Deleting a shard will unload all replicas of the shard, remove them from clusterstate.json, and (by default) delete the instanceDir and dataDir for each replica. It will only remove shards that are inactive, or which have no range given for custom sharding.

/admin/collections?action=DELETESHARD&shard=shardID&collection=name

DELETESHARD Parameters

collection

The name of the collection that includes the shard to be deleted. This parameter is required.

shard

The name of the shard to be deleted. This parameter is required.

deleteInstanceDir

By default Solr will delete the entire instanceDir of each replica that is deleted. Set this to false to prevent the instance directory from being deleted.

deleteDataDir

By default Solr will delete the dataDir of each replica that is deleted. Set this to false to prevent the data directory from being deleted.

deleteIndex

By default Solr will delete the index of each replica that is deleted. Set this to false to prevent the index directory from being deleted.

async

Request ID to track this action which will be processed asynchronously.

DELETESHARD Response

The output will include the status of the request. If the status is anything other than "success", an error message will explain why the request failed.

Examples using DELETESHARD

Input

Delete 'shard1' of the "anotherCollection" collection.

http://localhost:8983/solr/admin/collections?action=DELETESHARD&collection=anotherCollection&shard=shard1&wt=xml

Output

<response>
  <lst name="responseHeader">
    <int name="status">0</int>
    <int name="QTime">558</int>
  </lst>
  <lst name="success">
    <lst name="10.0.1.4:8983_solr">
      <lst name="responseHeader">
        <int name="status">0</int>
        <int name="QTime">27</int>
      </lst>
    </lst>
  </lst>
</response>

CREATEALIAS: Create or Modify an Alias for a Collection

The CREATEALIAS action will create a new alias pointing to one or more collections. Aliases come in 2 flavors: standard and routed.

Standard aliases are simple: CREATEALIAS registers the alias name with the names of one or more collections provided by the command. If the alias already exists, it is replaced/updated. A standard alias can serve to have the appearance of renaming a collection, and can be used to atomically swap which backing/underlying collection is "live" for various purposes. When Solr searches an alias pointing to multiple collections, Solr will search all shards of all the collections as an aggregated whole. While it is possible to send updates to an alias spanning multiple collections, standard aliases have no logic for distributing documents among the referenced collections, so all updates will go to the first collection in the list.

/admin/collections?action=CREATEALIAS&name=name&collections=collectionlist

Routed aliases are aliases with additional capabilities to act as a kind of super-collection — routing updates to the correct collection. Since the only routing strategy at present is time oriented, these are also called Time Routed Aliases (TRAs). A TRA manages an alias and a time sequential series of collections that it will both create and optionally delete on-demand. See Time Routed Aliases for some important high-level information before getting started.

Presently this is only supported for temporal fields stored as a DatePointField or TrieDateField type. Other well ordered field types may be added in future versions.

localhost:8983/solr/admin/collections?action=CREATEALIAS&name=timedata&router.start=NOW/DAY&router.field=evt_dt&router.name=time&router.interval=%2B1DAY&router.maxFutureMs=3600000&create-collection.collection.configName=myConfig&create-collection.numShards=2

If run on Jan 15, 2018, the above will create an alias named timedata, that contains collections with names prefixed with timedata and an initial collection named timedata_2018_01_15 will be created immediately. Updates sent to this alias with a (required) value in evt_dt that is before or after 2018-01-15 will be rejected, until the last 60 minutes of 2018-01-15. After 2018-01-15T23:00:00 documents for either 2018-01-15 or 2018-01-16 will be accepted. As soon as the system receives a document for an allowable time window for which there is no collection it will automatically create the next required collection (and potentially any intervening collections if router.interval is smaller than router.maxFutureMs). Both the initial collection and any subsequent collections will be created using the specified configset. All collection creation parameters other than name are allowed, prefixed by create-collection.

This means that one could, for example, partition their collections by day, and within each daily collection route the data to shards based on customer id. Such shards can be of any type (NRT, PULL or TLOG), and rule-based replica placement strategies may also be used.

The values supplied in this command for collection creation will be retained in alias properties, and can be verified by inspecting aliases.json in ZooKeeper.

Presently only updates are routed and queries are distributed to all collections in the alias, but future features may enable routing of the query to the single appropriate collection based on a special parameter or perhaps a filter on the routed field.

CREATEALIAS Parameters

name

The alias name to be created. This parameter is required. If the alias is to be routed it also functions as a prefix for the names of the dependent collections that will be created. It must therefore adhere to normal requirements for collection naming.

async

Request ID to track this action which will be processed asynchronously.

Standard Alias Parameters

collections

A comma-separated list of collections to be aliased. The collections must already exist in the cluster. This parameter signals the creation of a standard alias. If it is present all routing parameters are prohibited. If routing parameters are present this parameter is prohibited.

Routed Alias Parameters

Most routed alias parameters become alias properties that can subsequently be inspected and modified.

router.start

The start date/time of data for this time routed alias in Solr’s standard date/time format (i.e., ISO-8601 or "NOW" optionally with date math).

The first collection created for the alias will be internally named after this value. If a document is submitted with a value for router.field that is earlier than the earliest collection the alias points to, it will yield an error since it can’t be routed. This date/time MUST NOT have a milliseconds component other than 0. In particular, this means NOW will fail 999 times out of 1000, though NOW/SECOND, NOW/MINUTE, etc. will work just fine. This parameter is required.

TZ

The timezone to be used when evaluating any date math in router.start or router.interval. This is equivalent to the same parameter supplied to search queries, but note that in this case it is persisted with most of the other parameters as an alias property.

If GMT-4 is supplied for this value then a document dated 2018-01-14T21:00:01.2345Z would be stored in the myAlias_2018-01-15_01 collection (assuming an interval of +1HOUR).

The default timezone is UTC.

router.field

The date field to inspect to determine which underlying collection an incoming document should be routed to. This field is required on all incoming documents.

router.name

The type of routing to use. Presently only time is valid. This parameter is required.

router.interval

A date math expression that will be appended to a timestamp to determine the next collection in the series. Any date math expression that can be evaluated if appended to a timestamp of the form 2018-01-15T16:17:18 will work here.

This parameter is required.

router.maxFutureMs

The maximum milliseconds into the future that a document is allowed to have in router.field for it to be accepted without error. If there were no limit, then an erroneous value could trigger many collections to be created.

The default is 10 minutes.

router.preemptiveCreateMath

A date math expression that results in early creation of new collections.

If a document arrives with a timestamp that is after the end time of the most recent collection minus this interval, then the next (and only the next) collection will be created asynchronously. Without this setting, collections are created synchronously when required by the document time stamp and thus block the flow of documents until the collection is created (possibly several seconds). Preemptive creation reduces these hiccups.

If set to enough time (perhaps an hour or more) then if there are problems creating a collection, this window of time might be enough to take corrective action. However, after a successful preemptive creation, the collection is consuming resources without being used, and new documents will tend to be routed through it only to be routed elsewhere. Also, note that router.autoDeleteAge is currently evaluated relative to the date of a newly created collection, and so you may want to increase the delete age by the preemptive window amount so that the oldest collection isn’t deleted too soon.

Note that it has to be possible to subtract the interval specified from a date, so if prepending a minus sign creates invalid date math, this will cause an error. Also note that a document that is itself destined for a collection that does not exist will still trigger synchronous creation up to that destination collection but will not trigger additional async preemptive creation. Only one type of collection creation can happen per document. Example: 90MINUTES.

This property is blank by default indicating just-in-time, synchronous creation of new collections.

router.autoDeleteAge

A date math expression that results in the oldest collections getting deleted automatically.

The date math is relative to the timestamp of a newly created collection (typically close to the current time), and thus this must produce an earlier time via rounding and/or subtracting. Collections to be deleted must have a time range that is entirely before the computed age. Collections are considered for deletion immediately prior to new collections getting created. Example: /DAY-90DAYS.

The default is not to delete.

create-collection.*

The * wildcard can be replaced with any parameter from the CREATE command except name. All other fields are identical in requirements and naming except that we insist that the configset be explicitly specified. The configset must be created beforehand, either uploaded or copied and modified. It’s probably a bad idea to use "data driven" mode as schema mutations might happen concurrently leading to errors.
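
Putting several of the optional routed alias parameters together, a request along the following lines (the alias name, field, and configset are only illustrative) creates a daily-partitioned alias that pre-creates the next collection 90 minutes early and automatically deletes collections older than 90 days:

http://localhost:8983/solr/admin/collections?action=CREATEALIAS&name=logsByDay&router.name=time&router.field=evt_dt&router.start=NOW/DAY&router.interval=%2B1DAY&router.preemptiveCreateMath=90MINUTES&router.autoDeleteAge=/DAY-90DAYS&TZ=UTC&create-collection.collection.configName=myConfig&create-collection.numShards=2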

CREATEALIAS Response

The output will simply be a responseHeader with details of the time it took to process the request. To confirm the creation of the alias, you can look in the Solr Admin UI, under the Cloud section and find the aliases.json file. The initial collection for routed aliases should also be visible in various parts of the admin UI.

Examples using CREATEALIAS

Input

Create an alias named "testalias" and link it to the collections named "anotherCollection" and "testCollection".

http://localhost:8983/solr/admin/collections?action=CREATEALIAS&name=testalias&collections=anotherCollection,testCollection&wt=xml

Output

<response>
  <lst name="responseHeader">
    <int name="status">0</int>
    <int name="QTime">122</int>
  </lst>
</response>

Create an alias named "myTimeData" for data begining on 2018-01-15 in the UTC time zone and partitioning daily based on the evt_dt field in the incomming documents. Data more than an hour beyond the latest (most recent) partiton is to be rejected and collections are created using a config set named myConfig and

Input

http://localhost:8983/solr/admin/collections?action=CREATEALIAS&name=myTimeData&router.start=NOW/DAY&router.field=evt_dt&router.name=time&router.interval=%2B1DAY&router.maxFutureMs=3600000&create-collection.collection.configName=myConfig&create-collection.numShards=2

Output

<response>
  <lst name="responseHeader">
    <int name="status">0</int>
    <int name="QTime">1234</int>
  </lst>
</response>

A somewhat contrived example demonstrating the V2 API usage and additional collection creation options. Notice that the collection creation parameters follow the v2 API naming convention, not the v1 naming conventions.

Input

POST /api/c
{
  "create-routed-alias" : {
    "name": "somethingTemporalThisWayComes",
    "router" : {
      "name": "time",
      "field": "evt_dt",
      "start":"NOW/MINUTE",
      "interval":"+2HOUR",
      "maxFutureMs":"14400000"
    },
    "create-collection" : {
      "config":"_default",
      "router": {
        "name":"implicit",
        "field":"foo_s"
      },
      "shards":"foo,bar,baz",
      "numShards": 3,
      "tlogReplicas":1,
      "pullReplicas":1,
      "maxShardsPerNode":2,
      "properties" : {
        "foobar":"bazbam"
      }
    }
  }
}

Output

{
    "responseHeader": {
        "status": 0,
        "QTime": 1234
    }
}

LISTALIASES: List of all aliases in the cluster

/admin/collections?action=LISTALIASES

The LISTALIASES action does not take any parameters.

LISTALIASES Response

The output will contain a list of aliases with the corresponding collection names.

Examples using LISTALIASES

Input

List the existing aliases, requesting information as XML from Solr:

http://localhost:8983/solr/admin/collections?action=LISTALIASES&wt=xml

Output

<response>
    <lst name="responseHeader">
        <int name="status">0</int>
        <int name="QTime">0</int>
    </lst>
    <lst name="aliases">
        <str name="testalias1">collection1</str>
        <str name="testalias2">collection1,collection2</str>
    </lst>
    <lst name="properties">
        <lst name="testalias1"/>
        <lst name="testalias2">
            <str name="someKey">someValue</str>
        </lst>
    </lst>
</response>

ALIASPROP: Modify Alias Properties for a Collection

The ALIASPROP action modifies the properties (metadata) on an alias. If a key is set with a value that is empty it will be removed.

/admin/collections?action=ALIASPROP&name=name&property.someKey=somevalue

ALIASPROP Parameters

name

The alias name on which to set properties. This parameter is required.

property.*

The name of the property to be modified replaces '*'; the value of the parameter is used as the value of the property.

async

Request ID to track this action which will be processed asynchronously.

ALIASPROP Response

The output will simply be a responseHeader with details of the time it took to process the request. To confirm the creation of the property or properties, you can look in the Solr Admin UI, under the Cloud section and find the aliases.json file or use the LISTALIASES api command.

Examples using ALIASPROP

Input

For an alias named "testalias2" and set the value "someValue" for a property of "someKey" and "otherValue" for "otherKey".

http://localhost:8983/solr/admin/collections?action=ALIASPROP&name=testalias2&property.someKey=someValue&property.otherKey=otherValue&wt=xml

Output

<response>
  <lst name="responseHeader">
    <int name="status">0</int>
    <int name="QTime">122</int>
  </lst>
</response>

DELETEALIAS: Delete a Collection Alias

/admin/collections?action=DELETEALIAS&name=name

DELETEALIAS Parameters

name

The name of the alias to delete. This parameter is required.

async

Request ID to track this action which will be processed asynchronously.

DELETEALIAS Response

The output will simply be a responseHeader with details of the time it took to process the request. To confirm the removal of the alias, you can look in the Solr Admin UI, under the Cloud section, and find the aliases.json file.

Examples using DELETEALIAS

Input

Remove the alias named "testalias".

http://localhost:8983/solr/admin/collections?action=DELETEALIAS&name=testalias&wt=xml

Output

<response>
  <lst name="responseHeader">
    <int name="status">0</int>
    <int name="QTime">117</int>
  </lst>
</response>

DELETE: Delete a Collection

/admin/collections?action=DELETE&name=collection

DELETE Parameters

name

The name of the collection to delete. This parameter is required.

async

Request ID to track this action which will be processed asynchronously.

DELETE Response

The response will include the status of the request and the cores that were deleted. If the status is anything other than "success", an error message will explain why the request failed.

Examples using DELETE

Input

Delete the collection named "newCollection".

http://localhost:8983/solr/admin/collections?action=DELETE&name=newCollection&wt=xml

Output

<response>
  <lst name="responseHeader">
    <int name="status">0</int>
    <int name="QTime">603</int>
  </lst>
  <lst name="success">
    <lst name="10.0.1.6:8983_solr">
      <lst name="responseHeader">
        <int name="status">0</int>
        <int name="QTime">19</int>
      </lst>
    </lst>
    <lst name="10.0.1.4:8983_solr">
      <lst name="responseHeader">
        <int name="status">0</int>
        <int name="QTime">67</int>
      </lst>
    </lst>
  </lst>
</response>

DELETEREPLICA: Delete a Replica

Deletes a named replica from the specified collection and shard.

If the corresponding core is up and running, the core is unloaded, the entry is removed from the clusterstate, and (by default) the instanceDir and dataDir are deleted. If the node/core is down, the entry is taken off the clusterstate and if the core comes up later it is automatically unregistered.

/admin/collections?action=DELETEREPLICA&collection=collection&shard=shard&replica=replica

DELETEREPLICA Parameters

collection

The name of the collection. This parameter is required.

shard

The name of the shard that includes the replica to be removed. This parameter is required.

replica

The name of the replica to remove.

If count is used instead, this parameter is not required. Otherwise, this parameter must be supplied.

count

The number of replicas to remove. If the requested number exceeds the number of replicas, no replicas will be deleted. If there is only one replica, it will not be removed.

If replica is used instead, this parameter is not required. Otherwise, this parameter must be supplied.
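
For example, an illustrative request that removes one replica from shard1 of the "test2" collection by count rather than by name:

http://localhost:8983/solr/admin/collections?action=DELETEREPLICA&collection=test2&shard=shard1&count=1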

deleteInstanceDir

By default Solr will delete the entire instanceDir of the replica that is deleted. Set this to false to prevent the instance directory from being deleted.

deleteDataDir

By default Solr will delete the dataDir of the replica that is deleted. Set this to false to prevent the data directory from being deleted.

deleteIndex

By default Solr will delete the index of the replica that is deleted. Set this to false to prevent the index directory from being deleted.

onlyIfDown

When set to true, no action will be taken if the replica is active. The default is false.

async

Request ID to track this action which will be processed asynchronously.

Examples using DELETEREPLICA

Input

http://localhost:8983/solr/admin/collections?action=DELETEREPLICA&collection=test2&shard=shard2&replica=core_node3&wt=xml

Output

<response>
  <lst name="responseHeader">
    <int name="status">0</int>
    <int name="QTime">110</int>
  </lst>
</response>

ADDREPLICA: Add Replica

Add a replica to a shard in a collection. The node name can be specified if the replica is to be created on a specific node.

The API uses the Autoscaling framework to find nodes that can satisfy the disk requirements for the new replica but only when an Autoscaling policy is configured. Refer to Autoscaling Policy and Preferences section for more details.

/admin/collections?action=ADDREPLICA&collection=collection&shard=shard&node=nodeName

ADDREPLICA Parameters

collection

The name of the collection where the replica should be created. This parameter is required.

shard

The name of the shard to which the replica is to be added.

If shard is not specified, then _route_ must be.

_route_

If the exact shard name is not known, users may pass the _route_ value and the system would identify the name of the shard.

Ignored if the shard parameter is also specified.

node

The name of the node where the replica should be created.

instanceDir

The instanceDir for the core that will be created.

dataDir

The directory in which the core should be created.

type

The type of replica to create. The possible values are:

  • nrt: The NRT type maintains a transaction log and updates its index locally. This is the default and the most commonly used.

  • tlog: The TLOG type maintains a transaction log but only updates its index via replication.

  • pull: The PULL type does not maintain a transaction log and only updates its index via replication. This type is not eligible to become a leader.

See the section Types of Replicas for more information about replica type options.
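
For example, an illustrative request that adds a PULL replica of shard1 to the "test2" collection on a specific node (the node name is an assumption):

http://localhost:8983/solr/admin/collections?action=ADDREPLICA&collection=test2&shard=shard1&type=pull&node=localhost:8983_solr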

property.name=value

Set core property name to value. See Defining core.properties for details about supported properties and values.

waitForFinalState

If true, the request will complete only when all affected replicas become active. The default is false, which means that the API will return the status of the single action, which may be before the new replica is online and active.

async

Request ID to track this action which will be processed asynchronously

Examples using ADDREPLICA

Input

http://localhost:8983/solr/admin/collections?action=ADDREPLICA&collection=test2&shard=shard2&node=192.167.1.2:8983_solr&wt=xml

Output

<response>
  <lst name="responseHeader">
    <int name="status">0</int>
    <int name="QTime">3764</int>
  </lst>
  <lst name="success">
    <lst>
      <lst name="responseHeader">
        <int name="status">0</int>
        <int name="QTime">3450</int>
      </lst>
      <str name="core">test2_shard2_replica4</str>
    </lst>
  </lst>
</response>

CLUSTERPROP: Cluster Properties

Add, edit or delete a cluster-wide property.

/admin/collections?action=CLUSTERPROP&name=propertyName&val=propertyValue

CLUSTERPROP Parameters

name

The name of the property. Supported property names are autoAddReplicas, legacyCloud, location, maxCoresPerNode and urlScheme. Other properties can be set (for example, if you need them for custom plugins) but they must begin with the prefix ext.. Unknown properties that don’t begin with ext. will be rejected.

val

The value of the property. If the value is empty or null, the property is unset.
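
For example, an illustrative request that unsets the urlScheme property by passing an empty value:

http://localhost:8983/solr/admin/collections?action=CLUSTERPROP&name=urlScheme&val=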

CLUSTERPROP Response

The response will include the status of the request and the properties that were updated or removed. If the status is anything other than "0", an error message will explain why the request failed.

Examples using CLUSTERPROP

Input

http://localhost:8983/solr/admin/collections?action=CLUSTERPROP&name=urlScheme&val=https&wt=xml

Output

<response>
  <lst name="responseHeader">
    <int name="status">0</int>
    <int name="QTime">0</int>
  </lst>
</response>

Deeply Nested Cluster Properties

collectionDefaults

It is possible to set cluster-wide default values for certain attributes of a collection.

Example 1: Set/update default values

curl -X POST -H 'Content-type:application/json' --data-binary '
{ "set-obj-property" : {
    "collectionDefaults" : {
        "numShards" : 2,
        "nrtReplicas" : 1,
        "tlogReplicas" : 1,
        "pullReplicas" : 1
    }
}}' http://localhost:8983/api/cluster

Example 2: Unset the value of nrtReplicas alone

curl -X POST -H 'Content-type:application/json' --data-binary '
{ "set-obj-property" : {
    "collectionDefaults" : {
        "nrtReplicas" : null
    }
}}' http://localhost:8983/api/cluster

Example 3: Unset all values in collectionDefaults

curl -X POST -H 'Content-type:application/json' --data-binary '
{ "set-obj-property" : {
    "collectionDefaults" : null
}}' http://localhost:8983/api/cluster

COLLECTIONPROP: Collection Properties

Add, edit or delete a collection property.

/admin/collections?action=COLLECTIONPROP&name=collectionName&propertyName=propertyName&propertyValue=propertyValue

COLLECTIONPROP Parameters

name

The name of the collection for which the property would be set.

propertyName

The name of the property.

propertyValue

The value of the property. When not provided, the property is deleted.
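
For example, an illustrative request that deletes a property named "foo" from the "coll" collection by omitting propertyValue:

http://localhost:8983/solr/admin/collections?action=COLLECTIONPROP&name=coll&propertyName=foo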

COLLECTIONPROP Response

The response will include the status of the request and the properties that were updated or removed. If the status is anything other than "0", an error message will explain why the request failed.

Examples using COLLECTIONPROP

Input

http://localhost:8983/solr/admin/collections?action=COLLECTIONPROP&name=coll&propertyName=foo&propertyValue=bar&wt=xml

Output

<response>
  <lst name="responseHeader">
    <int name="status">0</int>
    <int name="QTime">0</int>
  </lst>
</response>

MIGRATE: Migrate Documents to Another Collection

/admin/collections?action=MIGRATE&collection=name&split.key=key1!&target.collection=target_collection&forward.timeout=60

The MIGRATE command is used to migrate all documents having a given routing key to another collection. The source collection will continue to have the same data as-is but it will start re-routing write requests to the target collection for the number of seconds specified by the forward.timeout parameter. It is the responsibility of the user to switch to the target collection for reads and writes after the MIGRATE action completes.

The routing key specified by the split.key parameter may span multiple shards on both the source and the target collections. The migration is performed shard-by-shard in a single thread. One or more temporary collections may be created by this command during the ‘migrate’ process but they are cleaned up at the end automatically.

This is a long running operation and therefore using the async parameter is highly recommended. If the async parameter is not specified then the operation is synchronous by default and keeping a large read timeout on the invocation is advised. Even with a large read timeout, the request may still timeout but that doesn’t necessarily mean that the operation has failed. Users should check logs, cluster state, source and target collections before invoking the operation again.

This command works only with collections using the compositeId router. The target collection must not receive any writes during the time the MIGRATE command is running otherwise some writes may be lost.

Please note that the MIGRATE API does not perform any de-duplication on the documents so if the target collection contains documents with the same uniqueKey as the documents being migrated then the target collection will end up with duplicate documents.

MIGRATE Parameters

collection

The name of the source collection from which documents will be split. This parameter is required.

target.collection

The name of the target collection to which documents will be migrated. This parameter is required.

split.key

The routing key prefix. For example, if the uniqueKey of a document is "a!123", then you would use split.key=a!. This parameter is required.

forward.timeout

The timeout, in seconds, for which write requests made to the source collection for the given split.key will be forwarded to the target shard. The default is 60 seconds.

property.name=value

Set core property name to value. See the section Defining core.properties for details on supported properties and values.

async

Request ID to track this action which will be processed asynchronously.

MIGRATE Response

The response will include the status of the request.

Examples using MIGRATE

Input

http://localhost:8983/solr/admin/collections?action=MIGRATE&collection=test1&split.key=a!&target.collection=test2&wt=xml

Output

<response>
  <lst name="responseHeader">
    <int name="status">0</int>
    <int name="QTime">19014</int>
  </lst>
  <lst name="success">
    <lst>
      <lst name="responseHeader">
        <int name="status">0</int>
        <int name="QTime">1</int>
      </lst>
      <str name="core">test2_shard1_0_replica1</str>
      <str name="status">BUFFERING</str>
    </lst>
    <lst>
      <lst name="responseHeader">
        <int name="status">0</int>
        <int name="QTime">2479</int>
      </lst>
      <str name="core">split_shard1_0_temp_shard1_0_shard1_replica1</str>
    </lst>
    <lst>
      <lst name="responseHeader">
        <int name="status">0</int>
        <int name="QTime">1002</int>
      </lst>
    </lst>
    <lst>
      <lst name="responseHeader">
        <int name="status">0</int>
        <int name="QTime">21</int>
      </lst>
    </lst>
    <lst>
      <lst name="responseHeader">
        <int name="status">0</int>
        <int name="QTime">1655</int>
      </lst>
      <str name="core">split_shard1_0_temp_shard1_0_shard1_replica2</str>
    </lst>
    <lst>
      <lst name="responseHeader">
        <int name="status">0</int>
        <int name="QTime">4006</int>
      </lst>
    </lst>
    <lst>
      <lst name="responseHeader">
        <int name="status">0</int>
        <int name="QTime">17</int>
      </lst>
    </lst>
    <lst>
      <lst name="responseHeader">
        <int name="status">0</int>
        <int name="QTime">1</int>
      </lst>
      <str name="core">test2_shard1_0_replica1</str>
      <str name="status">EMPTY_BUFFER</str>
    </lst>
    <lst name="192.168.43.52:8983_solr">
      <lst name="responseHeader">
        <int name="status">0</int>
        <int name="QTime">31</int>
      </lst>
    </lst>
    <lst name="192.168.43.52:8983_solr">
      <lst name="responseHeader">
        <int name="status">0</int>
        <int name="QTime">31</int>
      </lst>
    </lst>
    <lst>
      <lst name="responseHeader">
        <int name="status">0</int>
        <int name="QTime">1</int>
      </lst>
      <str name="core">test2_shard1_1_replica1</str>
      <str name="status">BUFFERING</str>
    </lst>
    <lst>
      <lst name="responseHeader">
        <int name="status">0</int>
        <int name="QTime">1742</int>
      </lst>
      <str name="core">split_shard1_1_temp_shard1_1_shard1_replica1</str>
    </lst>
    <lst>
      <lst name="responseHeader">
        <int name="status">0</int>
        <int name="QTime">1002</int>
      </lst>
    </lst>
    <lst>
      <lst name="responseHeader">
        <int name="status">0</int>
        <int name="QTime">15</int>
      </lst>
    </lst>
    <lst>
      <lst name="responseHeader">
        <int name="status">0</int>
        <int name="QTime">1917</int>
      </lst>
      <str name="core">split_shard1_1_temp_shard1_1_shard1_replica2</str>
    </lst>
    <lst>
      <lst name="responseHeader">
        <int name="status">0</int>
        <int name="QTime">5007</int>
      </lst>
    </lst>
    <lst>
      <lst name="responseHeader">
        <int name="status">0</int>
        <int name="QTime">8</int>
      </lst>
    </lst>
    <lst>
      <lst name="responseHeader">
        <int name="status">0</int>
        <int name="QTime">1</int>
      </lst>
      <str name="core">test2_shard1_1_replica1</str>
      <str name="status">EMPTY_BUFFER</str>
    </lst>
    <lst name="192.168.43.52:8983_solr">
      <lst name="responseHeader">
        <int name="status">0</int>
        <int name="QTime">30</int>
      </lst>
    </lst>
    <lst name="192.168.43.52:8983_solr">
      <lst name="responseHeader">
        <int name="status">0</int>
        <int name="QTime">30</int>
      </lst>
    </lst>
  </lst>
</response>

ADDROLE: Add a Role

/admin/collections?action=ADDROLE&role=roleName&node=nodeName

Assigns a role to a given node in the cluster. The only supported role is overseer.

Use this command to dedicate a particular node as Overseer. Invoke it multiple times to add more nodes. This is useful in large clusters where an Overseer is likely to get overloaded. If available, one of the nodes assigned the 'overseer' role will become the overseer. The system will assign the role to any other node if none of the designated nodes are up and running.

ADDROLE Parameters

role

The name of the role. The only supported role as of now is overseer. This parameter is required.

node

The name of the node that will be assigned the role. It is possible to assign a role even before that node is started. This parameter is required.

ADDROLE Response

The response will include the status of the request and the properties that were updated or removed. If the status is anything other than "0", an error message will explain why the request failed.

Examples using ADDROLE

Input

http://localhost:8983/solr/admin/collections?action=ADDROLE&role=overseer&node=192.167.1.2:8983_solr&wt=xml

Output

<response>
  <lst name="responseHeader">
    <int name="status">0</int>
    <int name="QTime">0</int>
  </lst>
</response>

REMOVEROLE: Remove Role

Remove an assigned role. This API is used to undo a role assigned using the ADDROLE operation.

/admin/collections?action=REMOVEROLE&role=roleName&node=nodeName

REMOVEROLE Parameters

role

The name of the role. The only supported role as of now is overseer. This parameter is required.

node

The name of the node where the role should be removed.

REMOVEROLE Response

The response will include the status of the request and the properties that were updated or removed. If the status is anything other than "0", an error message will explain why the request failed.

Examples using REMOVEROLE

Input

http://localhost:8983/solr/admin/collections?action=REMOVEROLE&role=overseer&node=192.167.1.2:8983_solr&wt=xml

Output

<response>
  <lst name="responseHeader">
    <int name="status">0</int>
    <int name="QTime">0</int>
  </lst>
</response>

OVERSEERSTATUS: Overseer Status and Statistics

Returns the current status of the overseer, performance statistics of various overseer APIs, and the last 10 failures per operation type.

/admin/collections?action=OVERSEERSTATUS

Examples using OVERSEERSTATUS

Input

http://localhost:8983/solr/admin/collections?action=OVERSEERSTATUS

Output

{
  "responseHeader":{
    "status":0,
    "QTime":33},
  "leader":"127.0.1.1:8983_solr",
  "overseer_queue_size":0,
  "overseer_work_queue_size":0,
  "overseer_collection_queue_size":2,
  "overseer_operations":[
    "createcollection",{
      "requests":2,
      "errors":0,
      "avgRequestsPerSecond":0.7467088842794136,
      "5minRateRequestsPerSecond":7.525069023276674,
      "15minRateRequestsPerSecond":10.271274280947182,
      "avgTimePerRequest":0.5050685,
      "medianRequestTime":0.5050685,
      "75thPcRequestTime":0.519016,
      "95thPcRequestTime":0.519016,
      "99thPcRequestTime":0.519016,
      "999thPcRequestTime":0.519016},
    "removeshard",{
      "..."
  }],
  "collection_operations":[
    "splitshard",{
      "requests":1,
      "errors":1,
      "recent_failures":[{
          "request":{
            "operation":"splitshard",
            "shard":"shard2",
            "collection":"example1"},
          "response":[
            "Operation splitshard caused exception:","org.apache.solr.common.SolrException:org.apache.solr.common.SolrException: No shard with the specified name exists: shard2",
            "exception",{
              "msg":"No shard with the specified name exists: shard2",
              "rspCode":400}]}],
      "avgRequestsPerSecond":0.8198143044809885,
      "5minRateRequestsPerSecond":8.043840552427673,
      "15minRateRequestsPerSecond":10.502079828515368,
      "avgTimePerRequest":2952.7164175,
      "medianRequestTime":2952.7164175000003,
      "75thPcRequestTime":5904.384052,
      "95thPcRequestTime":5904.384052,
      "99thPcRequestTime":5904.384052,
      "999thPcRequestTime":5904.384052},
    "..."
  ],
  "overseer_queue":[
    "..."
  ],
  "..."
 }

CLUSTERSTATUS: Cluster Status

Fetch the cluster status including collections, shards, replicas, configuration name as well as collection aliases and cluster properties.

/admin/collections?action=CLUSTERSTATUS

CLUSTERSTATUS Parameters

collection

The collection name for which information is requested. If omitted, information on all collections in the cluster will be returned.

shard

The shard(s) for which information is requested. Multiple shard names can be specified as a comma-separated list.

_route_

This can be used if you need the details of the shard a particular document belongs to but don’t know which shard it falls under.
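
For example, an illustrative request that limits the status to shard1 and shard2 of "collection1":

http://localhost:8983/solr/admin/collections?action=CLUSTERSTATUS&collection=collection1&shard=shard1,shard2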

CLUSTERSTATUS Response

The response will include the status of the request and the status of the cluster.

Examples using CLUSTERSTATUS

Input

http://localhost:8983/solr/admin/collections?action=CLUSTERSTATUS

Output

{
  "responseHeader":{
    "status":0,
    "QTime":333},
  "cluster":{
    "collections":{
      "collection1":{
        "shards":{
          "shard1":{
            "range":"80000000-ffffffff",
            "state":"active",
            "replicas":{
              "core_node1":{
                "state":"active",
                "core":"collection1",
                "node_name":"127.0.1.1:8983_solr",
                "base_url":"http://127.0.1.1:8983/solr",
                "leader":"true"},
              "core_node3":{
                "state":"active",
                "core":"collection1",
                "node_name":"127.0.1.1:8900_solr",
                "base_url":"http://127.0.1.1:8900/solr"}}},
          "shard2":{
            "range":"0-7fffffff",
            "state":"active",
            "replicas":{
              "core_node2":{
                "state":"active",
                "core":"collection1",
                "node_name":"127.0.1.1:7574_solr",
                "base_url":"http://127.0.1.1:7574/solr",
                "leader":"true"},
              "core_node4":{
                "state":"active",
                "core":"collection1",
                "node_name":"127.0.1.1:7500_solr",
                "base_url":"http://127.0.1.1:7500/solr"}}}},
        "maxShardsPerNode":"1",
        "router":{"name":"compositeId"},
        "replicationFactor":"1",
        "znodeVersion": 11,
        "autoCreated":"true",
        "configName" : "my_config",
        "aliases":["both_collections"]
      },
      "collection2":{
        "..."
      }
    },
    "aliases":{ "both_collections":"collection1,collection2" },
    "roles":{
      "overseer":[
        "127.0.1.1:8983_solr",
        "127.0.1.1:7574_solr"]
    },
    "live_nodes":[
      "127.0.1.1:7574_solr",
      "127.0.1.1:7500_solr",
      "127.0.1.1:8983_solr",
      "127.0.1.1:8900_solr"]
  }
}
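
A client will typically walk the collections, shards, and replicas in this response to answer questions such as "which node is the leader of each shard?". The sketch below (Python with the requests library; the base URL and wt=json are assumptions for a local test node) mirrors the structure of the JSON output above.

import requests

# Hypothetical base URL for a local Solr node; adjust for your cluster.
SOLR = "http://localhost:8983/solr"

resp = requests.get(SOLR + "/admin/collections",
                    params={"action": "CLUSTERSTATUS", "wt": "json"})
cluster = resp.json()["cluster"]

print("live nodes:", cluster["live_nodes"])

# Walk collections -> shards -> replicas and report each shard leader.
for coll_name, coll in cluster["collections"].items():
    if not isinstance(coll, dict):
        continue
    for shard_name, shard in coll.get("shards", {}).items():
        for replica_name, replica in shard.get("replicas", {}).items():
            if replica.get("leader") == "true":
                print(coll_name, shard_name, "leader on", replica["node_name"])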

REQUESTSTATUS: Request Status of an Async Call

Request the status and response of an already submitted Asynchronous Collection API (below) call. This call is also used to clear up the stored statuses.

/admin/collections?action=REQUESTSTATUS&requestid=request-id

REQUESTSTATUS Parameters

requestid

The user defined request ID for the request. This can be used to track the status of the submitted asynchronous task. This parameter is required.

Examples using REQUESTSTATUS

Input: Valid Request ID

http://localhost:8983/solr/admin/collections?action=REQUESTSTATUS&requestid=1000&wt=xml

Output

<response>
  <lst name="responseHeader">
    <int name="status">0</int>
    <int name="QTime">1</int>
  </lst>
  <lst name="status">
    <str name="state">completed</str>
    <str name="msg">found 1000 in completed tasks</str>
  </lst>
</response>

Input: Invalid Request ID

http://localhost:8983/solr/admin/collections?action=REQUESTSTATUS&requestid=1004&wt=xml

Output

<response>
  <lst name="responseHeader">
    <int name="status">0</int>
    <int name="QTime">1</int>
  </lst>
  <lst name="status">
    <str name="state">notfound</str>
    <str name="msg">Did not find taskid [1004] in any tasks queue</str>
  </lst>
</response>

DELETESTATUS: Delete Status

Deletes the stored response of an already failed or completed Asynchronous Collection API call.

/admin/collections?action=DELETESTATUS&requestid=request-id

DELETESTATUS Parameters

requestid

The request ID of the asynchronous call whose stored response should be cleared.

flush

Set to true to clear all stored completed and failed async request responses.

Examples using DELETESTATUS

Input: Valid Request ID

http://localhost:8983/solr/admin/collections?action=DELETESTATUS&requestid=foo&wt=xml

Output

<response>
  <lst name="responseHeader">
    <int name="status">0</int>
    <int name="QTime">1</int>
  </lst>
  <str name="status">successfully removed stored response for [foo]</str>
</response>

Input: Invalid Request ID

http://localhost:8983/solr/admin/collections?action=DELETESTATUS&requestid=bar&wt=xml

Output

<response>
  <lst name="responseHeader">
    <int name="status">0</int>
    <int name="QTime">1</int>
  </lst>
  <str name="status">[bar] not found in stored responses</str>
</response>

Input: Clear All Stored Statuses

http://localhost:8983/solr/admin/collections?action=DELETESTATUS&flush=true&wt=xml

Output

<response>
  <lst name="responseHeader">
    <int name="status">0</int>
    <int name="QTime">1</int>
  </lst>
  <str name="status"> successfully cleared stored collection api responses </str>
</response>

LIST: List Collections

Fetch the names of the collections in the cluster.

/admin/collections?action=LIST

Examples using LIST

Input

http://localhost:8983/solr/admin/collections?action=LIST

Output

{
  "responseHeader":{
    "status":0,
    "QTime":2011},
  "collections":["collection1",
    "example1",
    "example2"]}

ADDREPLICAPROP: Add Replica Property

Assign an arbitrary property to a particular replica and give it the value specified. If the property already exists, it will be overwritten with the new value.

/admin/collections?action=ADDREPLICAPROP&collection=collectionName&shard=shardName&replica=replicaName&property=propertyName&property.value=value

ADDREPLICAPROP Parameters

collection

The name of the collection the replica belongs to. This parameter is required.

shard

The name of the shard the replica belongs to. This parameter is required.

replica

The replica, e.g., core_node1. This parameter is required.

property

The name of the property to add. This parameter is required.

This will have the literal property. prepended to distinguish it from system-maintained properties. So these two forms are equivalent:

property=special

and

property=property.special

property.value

The value to assign to the property. This parameter is required.

shardUnique

If true, then setting this property in one replica will remove the property from all other replicas in that shard. The default is false.

There is one pre-defined property, preferredLeader, for which shardUnique is forced to true; an error is returned if shardUnique is explicitly set to false.

PreferredLeader is a boolean property. Any value assigned that is not equal (case insensitive) to true will be interpreted as false for preferredLeader.

ADDREPLICAPROP Response

The response will include the status of the request. If the status is anything other than "0", an error message will explain why the request failed.

Examples using ADDREPLICAPROP

Input

This command would set the "preferredLeader" property (property.preferredLeader) to "true" on "core_node1", and remove that property from any other replica in the shard.

http://localhost:8983/solr/admin/collections?action=ADDREPLICAPROP&shard=shard1&collection=collection1&replica=core_node1&property=preferredLeader&property.value=true&wt=xml

Output

<response>
  <lst name="responseHeader">
    <int name="status">0</int>
    <int name="QTime">46</int>
  </lst>
</response>

Input

This pair of commands will set the "testprop" property (property.testprop) to 'value1' and 'value2' respectively for two nodes in the same shard.

http://localhost:8983/solr/admin/collections?action=ADDREPLICAPROP&shard=shard1&collection=collection1&replica=core_node1&property=testprop&property.value=value1

http://localhost:8983/solr/admin/collections?action=ADDREPLICAPROP&shard=shard1&collection=collection1&replica=core_node3&property=property.testprop&property.value=value2

Input

This pair of commands would result in "core_node3" having the "testprop" property (property.testprop) value set, because the second command specifies shardUnique=true, which causes the property to be removed from "core_node1".

http://localhost:8983/solr/admin/collections?action=ADDREPLICAPROP&shard=shard1&collection=collection1&replica=core_node1&property=testprop&property.value=value1

http://localhost:8983/solr/admin/collections?action=ADDREPLICAPROP&shard=shard1&collection=collection1&replica=core_node3&property=testprop&property.value=value2&shardUnique=true

DELETEREPLICAPROP: Delete Replica Property

Deletes an arbitrary property from a particular replica.

/admin/collections?action=DELETEREPLICAPROP&collection=collectionName&shard=shardName&replica=replicaName&property=propertyName

DELETEREPLICAPROP Parameters

collection

The name of the collection the replica belongs to. This parameter is required.

shard

The name of the shard the replica belongs to. This parameter is required.

replica

The replica, e.g., core_node1. This parameter is required.

property

The property to delete. This will have the literal property. prepended to distinguish it from system-maintained properties. So these two forms are equivalent:

property=special

and

property=property.special

DELETEREPLICAPROP Response

The response will include the status of the request. If the status is anything other than "0", an error message will explain why the request failed.

Examples using DELETEREPLICAPROP

Input

This command would delete the preferredLeader (property.preferredLeader) from core_node1.

http://localhost:8983/solr/admin/collections?action=DELETEREPLICAPROP&shard=shard1&collection=collection1&replica=core_node1&property=preferredLeader&wt=xml

Output

<response>
  <lst name="responseHeader">
    <int name="status">0</int>
    <int name="QTime">9</int>
  </lst>
</response>

BALANCESHARDUNIQUE: Balance a Property Across Nodes

/admin/collections?action=BALANCESHARDUNIQUE&collection=collectionName&property=propertyName

Ensures that a particular property is distributed evenly amongst the physical nodes that make up a collection. If the property already exists on a replica, every effort is made to leave it there. If the property is not on any replica on a shard, one is chosen and the property is added.

BALANCESHARDUNIQUE Parameters

collection

The name of the collection to balance the property in. This parameter is required.

property

The property to balance. The literal property. is prepended to this property if not specified explicitly. This parameter is required.

onlyactivenodes

Defaults to true. Normally, the property is instantiated on active nodes only. If this parameter is specified as false, then inactive nodes are also included for distribution.

shardUnique

Something of a safety valve. There is one pre-defined property (preferredLeader) that defaults this value to true. For all other properties that are balanced, this must be set to true or an error message will be returned.

BALANCESHARDUNIQUE Response

The response will include the status of the request. If the status is anything other than "0", an error message will explain why the request failed.

Examples using BALANCESHARDUNIQUE

Input

Either of these commands would put the "preferredLeader" property on one replica in every shard in the "collection1" collection.

http://localhost:8983/solr/admin/collections?action=BALANCESHARDUNIQUE&collection=collection1&property=preferredLeader&wt=xml

http://localhost:8983/solr/admin/collections?action=BALANCESHARDUNIQUE&collection=collection1&property=property.preferredLeader&wt=xml

Output

<response>
  <lst name="responseHeader">
    <int name="status">0</int>
    <int name="QTime">9</int>
  </lst>
</response>

Examining the clusterstate after issuing this call should show exactly one replica in each shard that has this property.
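
One way to perform that check programmatically is sketched below (Python with the requests library). The base URL, collection name, and the assumption that the replica property appears as "property.preferredLeader" in the CLUSTERSTATUS replica entries are illustrative, not guaranteed by this guide.

import requests

SOLR = "http://localhost:8983/solr"  # assumed local node

# Balance the preferredLeader property across every shard of collection1.
requests.get(SOLR + "/admin/collections",
             params={"action": "BALANCESHARDUNIQUE",
                     "collection": "collection1",
                     "property": "preferredLeader"})

# Verify: each shard should now have exactly one replica carrying the
# property (assumed to appear as "property.preferredLeader").
status = requests.get(SOLR + "/admin/collections",
                      params={"action": "CLUSTERSTATUS",
                              "collection": "collection1",
                              "wt": "json"}).json()
shards = status["cluster"]["collections"]["collection1"]["shards"]
for shard_name, shard in shards.items():
    tagged = [name for name, info in shard["replicas"].items()
              if info.get("property.preferredLeader") == "true"]
    print(shard_name, "preferredLeader replicas:", tagged)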

REBALANCELEADERS: Rebalance Leaders

Reassigns leaders in a collection according to the preferredLeader property across active nodes.

/admin/collections?action=REBALANCELEADERS&collection=collectionName

Leaders are assigned in a collection according to the preferredLeader property on active nodes. This command should be run after the preferredLeader property has been assigned via the BALANCESHARDUNIQUE or ADDREPLICAPROP commands.

It is not required that all shards in a collection have a preferredLeader property. Rebalancing will only attempt to reassign leadership to replicas that have the preferredLeader property set to true, are not currently the shard leader, and are currently active.

REBALANCELEADERS Parameters

collection

The name of the collection to rebalance preferredLeaders on. This parameter is required.

maxAtOnce

The maximum number of reassignments to have queued up at once. Values <= 0 use the default value Integer.MAX_VALUE.

When this number is reached, the process waits for one or more leaders to be successfully assigned before adding more to the queue.

maxWaitSeconds

Defaults to 60. This is the timeout value when waiting for leaders to be reassigned. If maxAtOnce is less than the number of reassignments that will take place, this is the maximum interval for any single wait for at least one reassignment to complete.

For example, if 10 reassignments are to take place and maxAtOnce is 1 and maxWaitSeconds is 60, the upper bound on the time that the command may wait is 10 minutes.

REBALANCELEADERS Response

The response will include the status of the request. If the status is anything other than "0", an error message will explain why the request failed.

Examples using REBALANCELEADERS

Input

Either of these commands would cause all the active replicas that had the preferredLeader property set and were not already the preferred leader to become leaders.

http://localhost:8983/solr/admin/collections?action=REBALANCELEADERS&collection=collection1&wt=xml
http://localhost:8983/solr/admin/collections?action=REBALANCELEADERS&collection=collection1&maxAtOnce=5&maxWaitSeconds=30&wt=xml

Output

In this example, two replicas in the "alreadyLeaders" section already had the leader assigned to the same node as the preferredLeader property so no action was taken.

The replica in the "inactivePreferreds" section had the preferredLeader property set but the node was down and no action was taken. The three nodes in the "successes" section were made leaders because they had the preferredLeader property set but were not leaders and they were active.

<response>
  <lst name="responseHeader">
    <int name="status">0</int>
    <int name="QTime">123</int>
  </lst>
  <lst name="alreadyLeaders">
    <lst name="core_node1">
      <str name="status">success</str>
      <str name="msg">Already leader</str>
      <str name="nodeName">192.168.1.167:7400_solr</str>
    </lst>
    <lst name="core_node17">
      <str name="status">success</str>
      <str name="msg">Already leader</str>
      <str name="nodeName">192.168.1.167:7600_solr</str>
    </lst>
  </lst>
  <lst name="inactivePreferreds">
    <lst name="core_node4">
      <str name="status">skipped</str>
      <str name="msg">Node is a referredLeader, but it's inactive. Skipping</str>
      <str name="nodeName">192.168.1.167:7500_solr</str>
    </lst>
  </lst>
  <lst name="successes">
    <lst name="_collection1_shard3_replica1">
      <str name="status">success</str>
      <str name="msg">
        Assigned 'Collection: 'collection1', Shard: 'shard3', Core: 'collection1_shard3_replica1', BaseUrl:
        'http://192.168.1.167:8983/solr'' to be leader
      </str>
    </lst>
    <lst name="_collection1_shard5_replica3">
      <str name="status">success</str>
      <str name="msg">
        Assigned 'Collection: 'collection1', Shard: 'shard5', Core: 'collection1_shard5_replica3', BaseUrl:
        'http://192.168.1.167:7200/solr'' to be leader
      </str>
    </lst>
    <lst name="_collection1_shard4_replica2">
      <str name="status">success</str>
      <str name="msg">
        Assigned 'Collection: 'collection1', Shard: 'shard4', Core: 'collection1_shard4_replica2', BaseUrl:
        'http://192.168.1.167:7300/solr'' to be leader
      </str>
    </lst>
  </lst>
</response>

Examining the clusterstate after issuing this call should show that every active replica that has the preferredLeader property also has the "leader" property set to true.

FORCELEADER: Force Shard Leader

In the unlikely event of a shard losing its leader, this command can be invoked to force the election of a new leader.

/admin/collections?action=FORCELEADER&collection=<collectionName>&shard=<shardName>

FORCELEADER Parameters

collection

The name of the collection. This parameter is required.

shard

The name of the shard where leader election should occur. This parameter is required.

This is an expert level command, and should be invoked only when regular leader election is not working. This may potentially lead to loss of data in the event that the new leader doesn’t have certain updates, possibly recent ones, which were acknowledged by the old leader before going down.

MIGRATESTATEFORMAT: Migrate Cluster State

An expert-level utility API to move a collection from the shared clusterstate.json ZooKeeper node (created with stateFormat=1, the default in all Solr releases prior to 5.0) to the per-collection state.json stored in ZooKeeper (created with stateFormat=2, the current default) seamlessly, without any application down-time.

/admin/collections?action=MIGRATESTATEFORMAT&collection=<collection_name>

MIGRATESTATEFORMAT Parameters

collection

The name of the collection to be migrated from clusterstate.json to its own state.json ZooKeeper node. This parameter is required.

async

Request ID to track this action which will be processed asynchronously.

This API is useful in migrating any collections created prior to Solr 5.0 to the more scalable cluster state format now used by default. If a collection was created in any Solr 5.x version or higher, then executing this command is not necessary.
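
As with other potentially long-running operations, the migration can be submitted asynchronously. The snippet below is a minimal sketch (Python with the requests library); the base URL, collection name, and request ID are assumptions chosen for illustration.

import requests

SOLR = "http://localhost:8983/solr"  # assumed local node

# Submit the migration asynchronously; the request ID "migrate-1" is
# arbitrary and can later be passed to REQUESTSTATUS and DELETESTATUS.
requests.get(SOLR + "/admin/collections",
             params={"action": "MIGRATESTATEFORMAT",
                     "collection": "collection1",
                     "async": "migrate-1"})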

BACKUP: Backup Collection

Backs up Solr collections and associated configurations to a shared filesystem - for example a Network File System.

/admin/collections?action=BACKUP&name=myBackupName&collection=myCollectionName&location=/path/to/my/shared/drive

The BACKUP command will back up Solr indexes and configurations for a specified collection. The BACKUP command takes one copy from each shard for the indexes. For configurations, it backs up the configSet that was associated with the collection, along with collection metadata.

BACKUP Parameters

collection

The name of the collection to be backed up. This parameter is required.

location

The location on a shared drive for the backup command to write to. Alternatively, it can be set as a cluster property.

async

Request ID to track this action which will be processed asynchronously.

repository

The name of a repository to be used for the backup. If no repository is specified then the local filesystem repository will be used automatically.
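
Since backups of large collections can take a while, it is common to submit the call asynchronously and check on it later. The following sketch (Python with the requests library) is illustrative only; the base URL, backup name, collection name, shared path, and request ID are all assumptions, and the status element is assumed to be returned as a JSON object with a "state" field, as shown in the REQUESTSTATUS examples above.

import requests

SOLR = "http://localhost:8983/solr"  # assumed local node

# Back up the collection to a shared drive, tracked as request "backup-1".
requests.get(SOLR + "/admin/collections",
             params={"action": "BACKUP",
                     "name": "myBackupName",
                     "collection": "myCollectionName",
                     "location": "/path/to/my/shared/drive",
                     "async": "backup-1"})

# Check on the backup later via REQUESTSTATUS.
state = requests.get(SOLR + "/admin/collections",
                     params={"action": "REQUESTSTATUS",
                             "requestid": "backup-1",
                             "wt": "json"}).json()["status"]["state"]
print("backup state:", state)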

RESTORE: Restore Collection

Restores Solr indexes and associated configurations.

/admin/collections?action=RESTORE&name=myBackupName&location=/path/to/my/shared/drive&collection=myRestoredCollectionName

The RESTORE operation will create a collection with the name specified in the collection parameter. You cannot restore into the same collection the backup was taken from. Also, the target collection should not be present at the time the API is called, as Solr will create it for you.

The collection created will have the same number of shards and replicas as the original collection, and will preserve routing information, etc. Optionally, you can override some parameters, as documented below.

While restoring, if a configSet with the same name exists in ZooKeeper then Solr will reuse it; otherwise it will upload the backed-up configSet to ZooKeeper and use that.

You can use the collection CREATEALIAS command to make sure clients don’t need to change the endpoint to query or index against the newly restored collection.

RESTORE Parameters

collection

The collection where the indexes will be restored into. This parameter is required.

location

The location on a shared drive for the RESTORE command to read from. Alternatively, it can be set as a cluster property.

async

Request ID to track this action which will be processed asynchronously.

repository

The name of the repository from which the backup will be restored. If no repository is specified then the local filesystem repository will be used automatically.

Override Parameters

Additionally, there are several parameters that may have been set on the original collection that can be overridden when restoring the backup:

collection.configName

Defines the name of the configurations to use for this collection. These must already be stored in ZooKeeper. If not provided, Solr will default to the collection name as the configuration name.

replicationFactor

The number of replicas to be created for each shard.

nrtReplicas

The number of NRT (Near-Real-Time) replicas to create for this collection. This type of replica maintains a transaction log and updates its index locally. This parameter behaves the same way as the replicationFactor parameter.

tlogReplicas

The number of TLOG replicas to create for this collection. This type of replica maintains a transaction log but only updates its index via replication from a leader. See the section Types of Replicas for more information about replica types.

pullReplicas

The number of PULL replicas to create for this collection. This type of replica does not maintain a transaction log and only updates its index via replication from a leader. This type is not eligible to become a leader and should not be the only type of replicas in the collection. See the section Types of Replicas for more information about replica types.

maxShardsPerNode

When creating collections, the shards and/or replicas are spread across all available (i.e., live) nodes, and two replicas of the same shard will never be on the same node.

If a node is not live when the CREATE operation is called, it will not get any parts of the new collection, which could lead to too many replicas being created on a single live node. Defining maxShardsPerNode sets a limit on the number of replicas CREATE will spread to each node. If the entire collection can not be fit into the live nodes, no collection will be created at all.

autoAddReplicas

When set to true, enables auto addition of replicas on shared file systems. See the section Automatically Add Replicas in SolrCloud for more details on settings and overrides.

property.name=value

Set core property name to value. See the section Defining core.properties for details on supported properties and values.
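
As an illustration of restoring with a few of the overrides above, the sketch below uses Python with the requests library. The base URL, backup name, location, target collection, override values, and request ID are all assumptions chosen for the example.

import requests

SOLR = "http://localhost:8983/solr"  # assumed local node

# Restore the backup into a new collection, overriding a couple of the
# original collection's settings. All names and values are illustrative.
requests.get(SOLR + "/admin/collections",
             params={"action": "RESTORE",
                     "name": "myBackupName",
                     "location": "/path/to/my/shared/drive",
                     "collection": "myRestoredCollectionName",
                     "replicationFactor": 2,
                     "maxShardsPerNode": 2,
                     "async": "restore-1"})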

DELETENODE: Delete Replicas in a Node

Deletes all replicas of all collections on that node. Please note that the node itself will remain as a live node after this operation.

/admin/collections?action=DELETENODE&node=nodeName

DELETENODE Parameters

node

The node to be removed. This parameter is required.

async

Request ID to track this action which will be processed asynchronously.

REPLACENODE: Move All Replicas in a Node to Another

This command recreates the replicas of one node (the source) on another node or nodes (the target). After each replica is copied, the replicas on the source node are deleted.

For source replicas that are also shard leaders the operation will wait for the number of seconds set with the timeout parameter to make sure there’s an active replica that can become a leader (either an existing replica becoming a leader or the new replica completing recovery and becoming a leader).

The API uses the Autoscaling framework to find nodes that can satisfy the disk requirements for the new replicas, but only when an Autoscaling policy is configured. Refer to the Autoscaling Policy and Preferences section for more details.

/admin/collections?action=REPLACENODE&sourceNode=source-node&targetNode=target-node

REPLACENODE Parameters

sourceNode

The source node from which the replicas need to be copied. This parameter is required.

targetNode

The target node where replicas will be copied. If this parameter is not provided, Solr will identify nodes automatically based on policies or number of cores in each node.

parallel

If this flag is set to true, all replicas are created in separate threads. Keep in mind that this can lead to very high network and disk I/O if the replicas have very large indices. The default is false.

async

Request ID to track this action which will be processed asynchronously.

timeout

Time in seconds to wait until new replicas are created, and until leader replicas are fully recovered. The default is 300, or 5 minutes.

This operation does not hold the necessary locks on the replicas that belong to the source node, so do not perform other collection operations during this period.
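
A typical use of REPLACENODE is emptying a node before decommissioning it. The sketch below (Python with the requests library) is illustrative only; the base URL, node names, and request ID are assumptions.

import requests

SOLR = "http://localhost:8983/solr"  # assumed local node

# Move every replica off a node that is about to be decommissioned.
# Node names follow the host:port_solr convention and are illustrative.
requests.get(SOLR + "/admin/collections",
             params={"action": "REPLACENODE",
                     "sourceNode": "localhost:8983_solr",
                     "targetNode": "localhost:8984_solr",
                     "async": "replacenode-1"})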

MOVEREPLICA: Move a Replica to a New Node

This command moves a replica from one node to a new node. In case of shared filesystems the dataDir will be reused.

The API uses the Autoscaling framework to find nodes that can satisfy the disk requirements for the replica to be moved, but only when an Autoscaling policy is configured. Refer to the Autoscaling Policy and Preferences section for more details.

/admin/collections?action=MOVEREPLICA&collection=collection&shard=shard&replica=replica&sourceNode=nodeName&targetNode=nodeName

MOVEREPLICA Parameters

collection

The name of the collection. This parameter is required.

shard

The name of the shard that the replica belongs to. This parameter is required.

replica

The name of the replica. This parameter is required.

sourceNode

The name of the node that contains the replica. This parameter is required.

targetNode

The name of the destination node. This parameter is required.

async

Request ID to track this action which will be processed asynchronously.
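
Putting the parameters above together, a single move might look like the sketch below (Python with the requests library). The collection, shard, replica, node names, and request ID are assumptions chosen for illustration.

import requests

SOLR = "http://localhost:8983/solr"  # assumed local node

# Move a single named replica to another node.
requests.get(SOLR + "/admin/collections",
             params={"action": "MOVEREPLICA",
                     "collection": "collection1",
                     "shard": "shard1",
                     "replica": "core_node3",
                     "sourceNode": "localhost:8983_solr",
                     "targetNode": "localhost:8984_solr",
                     "async": "movereplica-1"})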

UTILIZENODE: Utilize a New Node

This command can be used to move some replicas from the existing nodes to either a new node or a less loaded node, in order to reduce the load on the existing nodes.

This uses your autoscaling policies and preferences to identify which replica needs to be moved. It tries to fix any policy violations first and then it tries to move some load off of the most loaded nodes according to the preferences.

/admin/collections?action=UTILIZENODE&node=nodeName

UTILIZENODE Parameters

node

The name of the node that needs to be utilized. This parameter is required.

Asynchronous Calls

Since some collection API calls can be long running tasks (such as SPLITSHARD), you can optionally have the calls run asynchronously. Specifying async=<request-id> enables you to make an asynchronous call, the status of which can be requested using the REQUESTSTATUS call at any time.

As of now, REQUESTSTATUS does not automatically clean up the tracking data structures, meaning the status of completed or failed tasks stays stored in ZooKeeper unless cleared manually. DELETESTATUS can be used to clear the stored statuses. However, there is a limit of 10,000 on the number of async call responses stored in a cluster.

Examples of Async Requests

Input

http://localhost:8983/solr/admin/collections?action=SPLITSHARD&collection=collection1&shard=shard1&async=1000&wt=xml

Output

<response>
  <lst name="responseHeader">
    <int name="status">0</int>
    <int name="QTime">99</int>
  </lst>
  <str name="requestid">1000</str>
</response>
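
Putting the pieces together, a client typically submits the call with an async ID, polls REQUESTSTATUS until the task finishes, and then clears the stored status with DELETESTATUS. The following is a minimal sketch in Python using the requests library; the base URL and the request ID "1000" are assumptions taken from the example above, and the status element is assumed to be returned as a JSON object with a "state" field.

import time
import requests

ADMIN = "http://localhost:8983/solr/admin/collections"  # assumed local node

# 1. Submit a long-running call asynchronously.
requests.get(ADMIN, params={"action": "SPLITSHARD",
                            "collection": "collection1",
                            "shard": "shard1",
                            "async": "1000"})

# 2. Poll REQUESTSTATUS until the task is no longer submitted or running.
while True:
    state = requests.get(ADMIN, params={"action": "REQUESTSTATUS",
                                        "requestid": "1000",
                                        "wt": "json"}).json()["status"]["state"]
    if state in ("completed", "failed", "notfound"):
        break
    time.sleep(5)
print("final state:", state)

# 3. Clear the stored status so the tracking data does not accumulate.
requests.get(ADMIN, params={"action": "DELETESTATUS", "requestid": "1000"})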