Solr Control Script Reference
Solr includes a script known as “bin/solr” that allows you to perform many common operations on your Solr installation or cluster.
You can start and stop Solr, create and delete collections or cores, perform operations on ZooKeeper and check the status of Solr and configured shards.
You can find the script in the bin/
directory of your Solr installation.
The bin/solr
script makes Solr easier to work with by providing simple commands and options to quickly accomplish common goals.
More examples of bin/solr
in use are available throughout this Guide, but particularly in the sections Starting Solr and Getting Started with SolrCloud.
Starting and Stopping
Start and Restart
The start
command starts Solr.
The restart
command allows you to restart Solr while it is already running or if it has been stopped already.
The start
and restart
commands have several options to allow you to run in SolrCloud mode, use an example configuration set, start with a hostname or port that is not the default and point to a local ZooKeeper ensemble.
bin/solr start [options]
bin/solr start -help
bin/solr restart [options]
bin/solr restart -help
When using the restart
command, you must pass all of the parameters you initially passed when you started Solr.
Behind the scenes, a stop request is initiated, so Solr will be stopped before being started again.
If no nodes are already running, restart will skip the step to stop and proceed to starting Solr.
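For instance, if Solr was originally started in SolrCloud mode on a specific port against an external ZooKeeper ensemble, the restart would repeat those same flags (the hosts and port below are illustrative):
bin/solr restart -c -p 8983 -z server1:2181,server2:2181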
Start Parameters
The bin/solr
script provides many options to allow you to customize the server in common ways, such as changing the listening port.
However, most of the defaults are adequate for most Solr installations, especially when just getting started.
-a "<jvmParams>"
-
Optional
Default: none
Start Solr with additional JVM parameters, such as those starting with -X. For example, to set up a Java debugger to attach to the Solr JVM you could pass: -a "-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=18983". In most cases, you should wrap the additional parameters in double quotes.
If you are passing JVM parameters that begin with -D, you can omit the -a option.
Example:
bin/solr start -a "-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=1044"
-j "<jettyParams>"
-
Optional
Default: none
Additional parameters to pass to Jetty when starting Solr. For example, to add a configuration folder that Jetty should read you could pass: -j "--include-jetty-dir=/etc/jetty/custom/server/". In most cases, you should wrap the additional parameters in double quotes.
-cloud or -c
-
Optional
Default: none
Start Solr in SolrCloud mode, which will also launch the embedded ZooKeeper instance included with Solr. The embedded ZooKeeper instance is started on Solr port+1000, so 9983 if Solr is bound to 8983.
If you are already running a ZooKeeper ensemble that you want to use instead of the embedded (single-node) ZooKeeper, you should also either specify ZK_HOST in solr.in.sh/solr.in.cmd (see Updating Solr Include Files) or pass the -z parameter.
For more details, see the section SolrCloud Mode below.
Example:
bin/solr start -c
-d <dir>
-
Optional
Default:
server/
Define a server directory, defaults to server (as in, $SOLR_TIP/server). It is uncommon to override this option. When running multiple instances of Solr on the same host, it is more common to use the same server directory for each instance and use a unique Solr home directory using the -s option.
Example:
bin/solr start -d newServerDir
-e <name>
-
Optional
Default: none
Start Solr with an example configuration. These examples are provided to help you get started faster with Solr generally, or just try a specific feature.
The available options are:
- cloud: SolrCloud example
- techproducts: Comprehensive example illustrating many of Solr’s core capabilities
- schemaless: Schema-less example (schema is inferred from data during indexing)
- films: Example of starting with the _default configset and adding explicit fields dynamically
See the section Running with Example Configurations below for more details on the example configurations.
Example:
bin/solr start -e schemaless
-
-f
-
Optional
Default: none
Start Solr in the foreground and send stdout/stderr to solr-PORT-console.log. You cannot use this option when running examples with the -e option.
Example:
bin/solr start -f
-host <hostname> or -h <hostname>
-
Optional
Default:
localhost
Specify the hostname for this Solr instance. If this is not specified, localhost is assumed.
Example:
bin/solr start -h search.mysolr.com
-m <memory>
-
Optional
Default: 512m
Sets the min (-Xms) and max (-Xmx) heap size for the JVM running Solr.
Example:
bin/solr start -m 4g results in -Xms4g -Xmx4g settings.
-noprompt
-
Optional
Default: none
Don’t prompt for input; accept all defaults when running examples that accept user input.
For example, when using the "cloud" example, an interactive session guides you through several options for your SolrCloud cluster. If you want to accept all of the defaults, you can simply add the -noprompt option to your request.
Example:
bin/solr start -e cloud -noprompt
-p <port>
-
Optional
Default:
8983
Specify the port to start the Solr HTTP listener on; the classic default port for Solr is 8983. The specified port (SOLR_PORT) is also used to determine the stop port and the JMX RMI listen port: STOP_PORT=($SOLR_PORT-1000) and RMI_PORT=($SOLR_PORT+10000). For instance, if you set -p 8985, then STOP_PORT=7985 and RMI_PORT=18985. If this is not specified, 8983 will be used.
Example:
bin/solr start -p 8655
-s <dir>
-
Optional
Default:
server/solr
Sets the solr.solr.home system property. Solr will create core directories under this directory. This allows you to run multiple Solr instances on the same host while reusing the same server directory set with the -d parameter. If set, the specified directory should contain a solr.xml file, unless solr.xml exists in ZooKeeper.
This parameter is ignored when running examples (-e), as solr.solr.home depends on which example is run.
The default value is server/solr. If a relative directory is passed, it is validated against the current directory before falling back to the default server/<dir>.
Example:
bin/solr start -s newHome
-t <dir>
-
Optional
Default:
solr.solr.home
Sets the solr.data.home system property, where Solr will store index data in <instance_dir>/data subdirectories. If not set, Solr uses solr.solr.home for config and data.
-v
-
Optional
Default: none
Be more verbose. This changes the logging level of Log4j in Solr from INFO to DEBUG, having the same effect as if you edited log4j2.xml.
Example:
bin/solr start -f -v
-q
-
Optional
Default: none
Be more quiet. This changes the logging level of Log4j in Solr from INFO to WARN, having the same effect as if you edited log4j2.xml. This can be useful in a production setting where you want to limit logging to warnings and errors.
Example:
bin/solr start -f -q
-V or -verbose
-
Optional
Default: none
Verbose messages from this script.
Example:
bin/solr start -V
-z <zkHost> or -zkHost <zkHost>
-
Optional
Default: see description
ZooKeeper connection string. This option is only used with the -c option, to start Solr in SolrCloud mode. If ZK_HOST is not specified in solr.in.sh/solr.in.cmd and this option is not provided, Solr will start the embedded ZooKeeper instance and use that instance for SolrCloud operations.
Set the ZK_CREATE_CHROOT environment variable to true if your ZK host has a chroot path and you want to create it automatically.
Example:
bin/solr start -c -z server1:2181,server2:2181
-force
-
Optional
Default: none
If attempting to start Solr as the root user, the script will exit with a warning that running Solr as "root" can cause problems. It is possible to override this warning with the -force parameter.
Example:
sudo bin/solr start -force
To emphasize how the default settings work, take a moment to understand that the following commands are equivalent:
bin/solr start
bin/solr start -h localhost -p 8983 -d server -s solr -m 512m
It is not necessary to define all of the options when starting if the defaults are fine for your needs.
Setting Java System Properties
The bin/solr
script will pass any additional parameters that begin with -D
to the JVM, which allows you to set arbitrary Java system properties.
For example, to set the auto soft-commit frequency to 3 seconds, you can do:
bin/solr start -Dsolr.autoSoftCommit.maxTime=3000
The SOLR_OPTS
environment variable is also available to set additional System Properties for Solr.
In order to set custom System Properties when running any Solr utility other than start
(e.g. stop
, create
, auth
, status
, api
),
the SOLR_TOOL_OPTS
environment variable should be used.
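As a sketch, both variables can be exported in solr.in.sh; the property values below are illustrative only:
# In solr.in.sh: extra system properties for the Solr server JVM
SOLR_OPTS="$SOLR_OPTS -Dsolr.autoSoftCommit.maxTime=3000"
# Extra system properties for other bin/solr utilities (stop, create, auth, status, api)
SOLR_TOOL_OPTS="-Djavax.net.ssl.trustStore=/path/to/truststore.jks"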
SolrCloud Mode
The -c
and -cloud
options are equivalent:
bin/solr start -c
bin/solr start -cloud
If you specify a ZooKeeper connection string, such as -z 192.168.1.4:2181
, then Solr will connect to ZooKeeper and join the cluster.
If you have defined ZK_HOST in solr.in.sh/solr.in.cmd (see Updating Solr Include Files), you can omit -z <zk host string> from all bin/solr commands.
When starting Solr in SolrCloud mode, if you do not define ZK_HOST
in solr.in.sh
/solr.in.cmd
nor specify the -z
option, then Solr will launch an embedded ZooKeeper server listening on the Solr port + 1000.
For example, if Solr is running on port 8983, then the embedded ZooKeeper will listen on port 9983.
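For instance, a second node on the same host could join the first node’s embedded ZooKeeper as sketched below; the second Solr home directory is illustrative and would need its own solr.xml:
bin/solr start -c -p 8983
bin/solr start -c -p 7574 -z localhost:9983 -s example/node2/solr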
If your ZooKeeper connection string uses a chroot, such as localhost:2181/solr, then you need to create the /solr znode before launching SolrCloud using the bin/solr script.
To do this, use the zk mkroot command outlined in the ZooKeeper Operations section below.
When starting in SolrCloud mode, the interactive script session will prompt you to choose a configset to use.
For more information about starting Solr in SolrCloud mode, see also the section Getting Started with SolrCloud.
Running with Example Configurations
bin/solr start -e <name>
The example configurations allow you to get started quickly with a configuration that mirrors what you hope to accomplish with Solr.
Each example launches Solr with a managed schema, which allows use of the Schema API to make schema edits, but does not allow manual editing of a Schema file.
If you would prefer to manually modify a schema.xml
file directly, you can change this default as described in the section Schema Factory Configuration.
Unless otherwise noted in the descriptions below, the examples do not enable SolrCloud or Schemaless Mode.
The following examples are provided:
-
cloud: This example starts a 1-4 node SolrCloud cluster on a single machine. When chosen, an interactive session will start to guide you through options to select the initial configset to use, the number of nodes for your example cluster, the ports to use, and name of the collection to be created.
When using this example, you can choose from any of the available configsets found in $SOLR_TIP/server/solr/configsets.
-
techproducts: This example starts a single-node Solr instance with a schema designed for the sample documents included in the $SOLR_HOME/example/exampledocs directory.
The configset used can be found in $SOLR_TIP/server/solr/configsets/sample_techproducts_configs.
The data used can be found in $SOLR_HOME/example/exampledocs/.
-
schemaless: This example starts a single-node Solr instance using a managed schema, as described in the section Schema Factory Configuration, and provides a very minimal pre-defined schema. Solr will run in Schemaless Mode with this configuration, where Solr will create fields in the schema on the fly and will guess field types used in incoming documents.
The configset used can be found in $SOLR_TIP/server/solr/configsets/_default.
-
films: This example starts a single-node Solr instance using a managed schema, as described in the section Schema Factory Configuration, and then uses the Schema API to create some custom fields. Solr will run in Schemaless Mode with this configuration, where Solr will create fields in the schema on the fly and will guess field types used in incoming documents as well. It then loads some sample film data.
The configset used can be found in $SOLR_TIP/server/solr/configsets/_default.
The film data used can be found in $SOLR_HOME/example/films/films.json.
The run in-foreground option (-f) cannot be used when running the examples with the -e option.
Stop
The stop
command sends a STOP request to a running Solr node, which allows it to shutdown gracefully.
The command will wait up to 180 seconds for Solr to stop gracefully and then will forcefully kill the process (kill -9
).
bin/solr stop [options]
bin/solr stop -help
Stop Parameters
-p <port>
-
Optional
Default: none
Stop Solr running on the given port. If you are running more than one instance, or are running in SolrCloud mode, you either need to specify the ports in separate requests or use the -all option.
Example:
bin/solr stop -p 8983
-all
-
Optional
Default: none
Find and stop all running Solr servers on this host that have a valid PID.
Example:
bin/solr stop -all
-k <key>
-
Optional
Default: none
Stop key used to protect from stopping Solr inadvertently; default is "solrrocks".
Example:
bin/solr stop -k solrrocks
-V or -verbose
-
Optional
Default: none
Stop Solr with verbose messages from this script.
Example:
bin/solr stop -V
System Information
Version
The version
command simply returns the version of Solr currently installed and immediately exits.
$ bin/solr version
X.Y.0
Status
The status
command displays basic JSON-formatted status information for all locally running Solr servers.
The status
command uses the SOLR_PID_DIR
environment variable to locate Solr process ID files to find running Solr instances, which defaults to the bin
directory.
bin/solr status
The output will include a status of each node of the cluster, as in this example:
Found 2 Solr nodes:
Solr process 39920 running on port 7574
{
"solr_home":"/Applications/Solr/example/cloud/node2/solr/",
"version":"X.Y.0",
"startTime":"2015-02-10T17:19:54.739Z",
"uptime":"1 days, 23 hours, 55 minutes, 48 seconds",
"memory":"77.2 MB (%15.7) of 490.7 MB",
"cloud":{
"ZooKeeper":"localhost:9865",
"liveNodes":"2",
"collections":"2"}}
Solr process 39827 running on port 8865
{
"solr_home":"/Applications/Solr/example/cloud/node1/solr/",
"version":"X.Y.0",
"startTime":"2015-02-10T17:19:49.057Z",
"uptime":"1 days, 23 hours, 55 minutes, 54 seconds",
"memory":"94.2 MB (%19.2) of 490.7 MB",
"cloud":{
"ZooKeeper":"localhost:9865",
"liveNodes":"2",
"collections":"2"}}
Assert
The assert
command sanity checks common issues with Solr installations.
These include checking the ownership/existence of particular directories, and ensuring Solr is available on the expected URL.
The command can either output a specified error message, or change its exit code to indicate errors.
As an example:
$ bin/solr assert --exists /opt/bin/solr
Results in the output below:
ERROR: Directory /opt/bin/solr does not exist.
The basic usage of bin/solr assert
is:
$ bin/solr assert -h
usage: bin/solr assert [-m <message>] [-e] [-rR] [-s <url>] [-S <url>] [-c
<url>] [-C <url>] [-u <dir>] [-x <dir>] [-X <dir>]
-c,--cloud <url> Asserts that Solr is running in cloud mode.
Also fails if Solr not running. URL should
be for root Solr path.
-C,--not-cloud <url> Asserts that Solr is not running in cloud
mode. Also fails if Solr not running. URL
should be for root Solr path.
-e,--exitcode Return an exit code instead of printing
error message on assert fail.
-help Print this message
-m,--message <message> Exception message to be used in place of
the default error message.
-R,--not-root Asserts that we are NOT the root user.
-r,--root Asserts that we are the root user.
-S,--not-started <url> Asserts that Solr is NOT running on a
certain URL. Default timeout is 1000ms.
-s,--started <url> Asserts that Solr is running on a certain
URL. Default timeout is 1000ms.
-t,--timeout <ms> Timeout in ms for commands supporting a
timeout.
-u,--same-user <directory> Asserts that we run as same user that owns
<directory>.
-verbose Enable more verbose command output.
-x,--exists <directory> Asserts that directory <directory> exists.
-X,--not-exists <directory> Asserts that directory <directory> does NOT
exist.
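Because the -e/--exitcode option returns an exit code instead of printing a message, assert can be used from shell scripts; a minimal sketch, assuming Solr is expected on the default port:
if bin/solr assert -s http://localhost:8983/solr -e; then
  echo "Solr is answering"
else
  echo "Solr is not running"
fi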
Healthcheck
The healthcheck
command generates a JSON-formatted health report for a collection when running in SolrCloud mode.
The health report provides information about the state of every replica for all shards in a collection, including the number of committed documents and its current state.
bin/solr healthcheck [options]
bin/solr healthcheck -help
Healthcheck Parameters
-c <collection>
-
Required
Default: none
Name of the collection to run a healthcheck against.
Example:
bin/solr healthcheck -c gettingstarted
-solrUrl <url>
-
Optional
Default: none
Base Solr URL, which can be used in SolrCloud mode to determine the ZooKeeper connection string if that’s not known.
-z <zkhost> or -zkHost <zkhost>
-
Optional
Default:
localhost:9983
ZooKeeper connection string. If you are running Solr on a port other than 8983, you will have to specify the ZooKeeper connection string. By default, this will be the Solr port + 1000. This parameter is unnecessary if ZK_HOST is defined in solr.in.sh or solr.in.cmd.
Example:
bin/solr healthcheck -z localhost:2181
Below is an example healthcheck request and response using a non-standard ZooKeeper connect string, with 2 nodes running:
$ bin/solr healthcheck -c gettingstarted -z localhost:9865
{
"collection":"gettingstarted",
"status":"healthy",
"numDocs":0,
"numShards":2,
"shards":[
{
"shard":"shard1",
"status":"healthy",
"replicas":[
{
"name":"core_node1",
"url":"http://10.0.1.10:8865/solr/gettingstarted_shard1_replica2/",
"numDocs":0,
"status":"active",
"uptime":"2 days, 1 hours, 18 minutes, 48 seconds",
"memory":"25.6 MB (%5.2) of 490.7 MB",
"leader":true},
{
"name":"core_node4",
"url":"http://10.0.1.10:7574/solr/gettingstarted_shard1_replica1/",
"numDocs":0,
"status":"active",
"uptime":"2 days, 1 hours, 18 minutes, 42 seconds",
"memory":"95.3 MB (%19.4) of 490.7 MB"}]},
{
"shard":"shard2",
"status":"healthy",
"replicas":[
{
"name":"core_node2",
"url":"http://10.0.1.10:8865/solr/gettingstarted_shard2_replica2/",
"numDocs":0,
"status":"active",
"uptime":"2 days, 1 hours, 18 minutes, 48 seconds",
"memory":"25.8 MB (%5.3) of 490.7 MB"},
{
"name":"core_node3",
"url":"http://10.0.1.10:7574/solr/gettingstarted_shard2_replica1/",
"numDocs":0,
"status":"active",
"uptime":"2 days, 1 hours, 18 minutes, 42 seconds",
"memory":"95.4 MB (%19.4) of 490.7 MB",
"leader":true}]}]}
Collections and Cores
The bin/solr
script can also help you create new collections or cores, or delete collections or cores.
Create a Core or Collection
The create
command creates a core or collection depending on whether Solr is running in standalone (core) or SolrCloud mode (collection).
In other words, this action detects which mode Solr is running in, and then takes the appropriate action (either create_core
or create_collection
).
bin/solr create [options]
bin/solr create -help
Create Core or Collection Parameters
-c <name>
-
Required
Default: none
Name of the core or collection to create.
Example:
bin/solr create -c mycollection
-d <confdir>
-
Optional
Default:
_default
The configuration directory.
See the section Configuration Directories and SolrCloud below for more details about this option when running in SolrCloud mode.
Example:
bin/solr create -d _default
-n <configName>
-
Optional
Default: see description
The configuration name. This defaults to the same name as the core or collection.
Example:
bin/solr create -n basic
-p <port> or -port <port>
-
Optional
Default: see description
The port of a local Solr instance to send the create command to. By default the script tries to detect the port by looking for running Solr instances.
This option is useful if you are running multiple Solr instances on the same host, thus requiring you to be specific about which instance to create the core in.
Example:
bin/solr create -p 8983
-s <shards> or -shards <shards>
-
Optional
Default:
1
Number of shards to split a collection into. Only applies when Solr is running in SolrCloud mode.
Example:
bin/solr create -s 2
-rf <replicas> or -replicationFactor <replicas>
-
Optional
Default:
1
Number of copies of each document in the collection. The default is 1 (no replication).
Example:
bin/solr create -rf 2
-force
-
Optional
Default: none
If attempting to run create as "root" user, the script will exit with a warning that running Solr or actions against Solr as "root" can cause problems. It is possible to override this warning with the -force parameter.
Example:
bin/solr create -c foo -force
Create a Collection
The create_collection
command creates a collection, and is only available when running in SolrCloud mode.
bin/solr create_collection [options]
bin/solr create_collection -help
Create Collection Parameters
-c <name>
-
Required
Default: none
Name of the collection to create.
Example:
bin/solr create_collection -c mycollection
-d <confdir>
-
Optional
Default:
_default
Configuration directory to copy when creating the new collection.
See the section Configuration Directories and SolrCloud below for more details about this option when running in SolrCloud mode, including the built-in example configurations.
_default is also known as Schemaless Mode.
Example:
bin/solr create_collection -d _default
Alternatively, you can pass the path to your own configuration directory instead of using one of the built-in configurations.
Example:
bin/solr create_collection -c mycoll -d /tmp/myconfig
By default the script will upload the specified confdir directory into ZooKeeper using the same name as the collection (-c option). Alternatively, if you want to reuse an existing directory or create a confdir in ZooKeeper that can be shared by multiple collections, use the -n option.
-n <configName>
-
Optional
Default: see description
Name of the configuration directory in ZooKeeper. By default, the configuration will be uploaded to ZooKeeper using the collection name (-c), but if you want to reuse an existing configuration directory or override the name of the configuration in ZooKeeper, use the -n option.
Example:
bin/solr create_collection -n basic -c mycoll
-p <port> or -port <port>
-
Optional
Default: see description
Port of a local Solr instance where you want to create the new collection. If not specified, the script will search the local system for a running Solr instance and will use the port of the first server it finds.
This option is useful if you are running multiple Solr instances on the same host, thus requiring you to be specific about which instance to create the collection in.
Example:
bin/solr create -p 8983
-s <shards> or -shards <shards>
-
Optional
Default:
1
Number of shards to split a collection into.
Example:
bin/solr create_collection -s 2
-rf <replicas> or -replicationFactor <replicas>
-
Optional
Default:
1
Number of copies of each document in the collection. The default is 1 (no replication).
Example:
bin/solr create_collection -rf 2
-force
-
Optional
Default: none
If attempting to run create as "root" user, the script will exit with a warning that running Solr or actions against Solr as "root" can cause problems. It is possible to override this warning with the -force parameter.
Example:
bin/solr create_collection -c foo -force
Create a Core
The create_core
command creates a core and is only available when running in user-managed (single-node) mode.
bin/solr create_core [options]
bin/solr create_core -help
Create Core Parameters
-c <name>
-
Required
Default: none
Name of the core to create.
Example:
bin/solr create -c mycore
-d <confdir>
-
Optional
Default:
_default
The configuration directory to use when creating a new core.
Example:
bin/solr create -d _default
Alternatively, you can pass the path to your own configuration directory instead of using one of the built-in configurations.
Example:
bin/solr create_core -c mycore -d /tmp/myconfig
-p <port> or -port <port>
-
Optional
Default: see description
The port of a local Solr instance to create the new core. By default the script tries to detect the port by looking for running Solr instances.
This option is useful if you are running multiple Solr instances on the same host, thus requiring you to be specific about which instance to create the core in.
Example:
bin/solr create -p 8983
-force
-
Optional
Default: none
If attempting to run create as "root" user, the script will exit with a warning that running Solr or actions against Solr as "root" can cause problems. It is possible to override this warning with the -force parameter.
Example:
bin/solr create -c foo -force
Configuration Directories and SolrCloud
Before creating a collection in SolrCloud, the configuration directory used by the collection must be uploaded to ZooKeeper.
The create
and create_collection
commands support several use cases for how collections and configuration directories work.
The main decision you need to make is whether a configuration directory in ZooKeeper should be shared across multiple collections.
Let’s work through a few examples to illustrate how configuration directories work in SolrCloud.
First, if you don’t provide the -d
or -n
options, then the default configuration ($SOLR_TIP/server/solr/configsets/_default/conf
) is uploaded to ZooKeeper using the same name as the collection.
For example, the following command will result in the _default
configuration being uploaded to /configs/contacts
in ZooKeeper: bin/solr create -c contacts
.
If you create another collection with bin/solr create -c contacts2
, then another copy of the _default
directory will be uploaded to ZooKeeper under /configs/contacts2
.
Any changes you make to the configuration for the contacts collection will not affect the contacts2
collection.
Put simply, the default behavior creates a unique copy of the configuration directory for each collection you create.
You can override the name given to the configuration directory in ZooKeeper by using the -n
option.
For instance, the command bin/solr create -c logs -d _default -n basic
will upload the server/solr/configsets/_default/conf
directory to ZooKeeper as /configs/basic
.
Notice that we used the -d
option to specify a different configuration than the default.
Solr provides several built-in configurations under server/solr/configsets
.
However you can also provide the path to your own configuration directory using the -d
option.
For instance, the command bin/solr create -c mycoll -d /tmp/myconfigs
, will upload /tmp/myconfigs
into ZooKeeper under /configs/mycoll
.
To reiterate, the configuration directory is named after the collection unless you override it using the -n
option.
Other collections can share the same configuration by specifying the name of the shared configuration using the -n
option.
For instance, the following command will create a new collection that shares the basic configuration created previously: bin/solr create -c logs2 -n basic
.
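As a quick sanity check, the zk ls sub-command described later on this page can list the configuration directories that now exist in ZooKeeper; the connection string below assumes the embedded ZooKeeper on its default port:
bin/solr zk ls /configs -z localhost:9983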
Data-driven Schema and Shared Configurations
The _default
schema can mutate as data is indexed, since it has schemaless functionality (i.e., data-driven changes to the schema).
Consequently, we recommend that you do not share data-driven configurations between collections unless you are certain that all collections should inherit the changes made when indexing data into one of the collections.
You can turn off schemaless functionality for a collection with the following command, assuming the collection name is mycollection
.
$ bin/solr config -c mycollection -p 8983 -action set-user-property -property update.autoCreateFields -value false
See also the section Set or Unset Configuration Properties.
Delete Core or Collection
The delete
command detects the mode that Solr is running in and then deletes the specified core (user-managed or single-node) or collection (SolrCloud) as appropriate.
bin/solr delete [options]
bin/solr delete -help
If you’re deleting a collection in SolrCloud mode, the default behavior is to also delete the configuration directory from Zookeeper so long as it is not being used by another collection.
For example, if you created a collection with bin/solr create -c contacts
, then the delete command bin/solr delete -c contacts
will check to see if the /configs/contacts
configuration directory is being used by any other collections.
If not, then the /configs/contacts
directory is removed from ZooKeeper. You can override this behavior by passing -deleteConfig false when running this command.
Delete Core or Collection Parameters
-c <name>
-
Required
Default: none
Name of the core or collection to delete.
Example:
bin/solr delete -c mycoll
-deleteConfig
-
Optional
Default:
true
Whether or not the configuration directory should also be deleted from ZooKeeper.
If the configuration directory is being used by another collection, then it will not be deleted even if you pass -deleteConfig as true.
Example:
bin/solr delete -deleteConfig false
-p <port> or -port <port>
-
Optional
Default: see description
The port of a local Solr instance to send the delete command to. If not specified, the script will search the local system for a running Solr instance and will use the port of the first server it finds.
This option is useful if you are running multiple Solr instances on the same host, thus requiring you to be specific about which instance to delete the core from.
Example:
bin/solr delete -p 8983
Authentication
The bin/solr
script allows enabling or disabling Authentication, allowing you to configure authentication from the command line.
Currently this command is only available when using SolrCloud mode and must be run on the machine hosting Solr.
For Basic Authentication the script provides user roles and permission mappings, and maps the created user to the superadmin
role.
For Kerberos, it only enables security.json; it doesn’t set up any users or role mappings.
Enabling Basic Authentication
The command bin/solr auth enable
configures Solr to use Basic Authentication when accessing the User Interface, using bin/solr, and making any API requests.
For more information about Solr’s authentication plugins, see the section Securing Solr. For more information on Basic Authentication support specifically, see the section Basic Authentication Plugin.
The bin/solr auth enable
command makes several changes to enable Basic Authentication:
-
Takes the base security.json file, evolves it using auth command parameters, and uploads the new file to ZooKeeper.
-
Adds two lines to bin/solr.in.sh or bin\solr.in.cmd to set the authentication type, and the path to basicAuth.conf:
# The following lines added by ./solr for enabling BasicAuth
SOLR_AUTH_TYPE="basic"
SOLR_AUTHENTICATION_OPTS="-Dsolr.httpclient.config=/path/to/solr-9.4.1/server/solr/basicAuth.conf"
-
Creates the file server/solr/basicAuth.conf to store the credential information that is used with bin/solr commands.
Here are some example usages:
Usage: solr auth enable [-type basicAuth] -credentials user:pass [-blockUnknown <true|false>] [-updateIncludeFileOnly <true|false>] [-V]
solr auth enable [-type basicAuth] -prompt <true|false> [-blockUnknown <true|false>] [-updateIncludeFileOnly <true|false>] [-V]
solr auth enable -type kerberos -config "<kerberos configs>" [-updateIncludeFileOnly <true|false>] [-V]
solr auth disable [-updateIncludeFileOnly <true|false>] [-V]
The command takes the following parameters:
-credentials <user:pass>
-
Optional
Default: none
The username and password in the format username:password of the initial user. Applicable for basicAuth only.
If you prefer not to pass the username and password as an argument to the script, you can choose the -prompt option. Either -credentials or -prompt must be specified.
-prompt <true|false>
-
Optional
Default: none
Prompts the user to provide the credentials. If prompt is preferred, pass true as a parameter to request the script prompt the user to enter a username and password.
Either -credentials or -prompt must be specified.
-blockUnknown <true|false>
-
Optional
Default:
true
When true, this blocks unauthenticated users from accessing Solr. When false, unauthenticated users will still be able to access Solr, but only for operations not explicitly requiring a user role in the Authorization plugin configuration.
-solrIncludeFile <includeFilePath>
-
Optional
Default: none
Specify the full path to the include file in the environment. If not specified, this script looks for an include file named solr.in.sh to set environment variables. Specifically, the following locations are searched in this order:
-
<script location>/.
-
$HOME/.solr.in.sh
-
/usr/share/solr
-
/usr/local/share/solr
-
/etc/default
-
/var/solr
-
/opt/solr
-
-updateIncludeFileOnly <true|false>
-
Optional
Default:
false
When true, only update bin/solr.in.sh or bin\solr.in.cmd, and skip actual enabling/disabling of authentication (i.e., don’t update security.json).
-z <zkHost> or -zkHost <zkHost>
-
Optional
Default: none
Defines the ZooKeeper connect string. This is useful if you want to enable authentication before all your Solr nodes have come up. Unnecessary if ZK_HOST is defined in solr.in.sh or solr.in.cmd.
-d <dir>
-
Optional
Default:
$SOLR_TIP/server
Defines the Solr server directory, by default $SOLR_TIP/server. It is not common to need to override the default, and is only needed if you have customized the $SOLR_HOME directory path.
-s <dir> or -solr.home <dir>
-
Optional
Default:
server/solr
Defines the location of solr.solr.home, which by default is server/solr. If you have multiple instances of Solr on the same host, or if you have customized the $SOLR_HOME directory path, you likely need to define this. This is where any credentials or authentication configuration files (e.g., basicAuth.conf) would be placed.
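Putting these parameters together, a hedged example of enabling Basic Authentication with an initial user follows; the credentials are placeholders, and the flags are those shown in the usage above:
bin/solr auth enable -type basicAuth -credentials solradmin:StrongPassword1 -blockUnknown true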
Disabling Basic Authentication
You can disable Basic Authentication with bin/solr auth disable
.
If the -updateIncludeFileOnly
option is set to true, then only the settings in bin/solr.in.sh
or bin\solr.in.cmd
will be updated, and security.json
will not be removed.
If the -updateIncludeFileOnly
option is set to false, then the settings in bin/solr.in.sh
or bin\solr.in.cmd
will be updated, and security.json
will be removed.
However, the basicAuth.conf
file is not removed with either option.
Set or Unset Configuration Properties
The bin/solr
script enables a subset of the Config API: (un)setting common properties and (un)setting user-defined properties.
bin/solr config [options]
bin/solr config -help
Set or Unset Common Properties
To set the common property updateHandler.autoCommit.maxDocs
to 100
on collection mycollection
:
bin/solr config -c mycollection -p 8983 -action set-property -property updateHandler.autoCommit.maxDocs -value 100
The default -action
is set-property
, so the above can be shortened by not mentioning it:
bin/solr config -c mycollection -p 8983 -property updateHandler.autoCommit.maxDocs -value 100
To unset a previously set common property, specify -action unset-property
with no -value
:
bin/solr config -c mycollection -p 8983 -action unset-property -property updateHandler.autoCommit.maxDocs
Set or Unset User-Defined Properties
To set the user-defined property update.autoCreateFields
to false
(to disable Schemaless Mode):
bin/solr config -c mycollection -p 8983 -action set-user-property -property update.autoCreateFields -value false
To unset a previously set user-defined property, specify -action unset-user-property
with no -value
:
bin/solr config -c mycollection -p 8983 -action unset-user-property -property update.autoCreateFields
Config Parameters
-c <name> or -collection <name>
-
Required
Default: none
Name of the core or collection on which to change configuration.
-action <name>
-
Optional
Default:
set-property
Config API action, one of: set-property, unset-property, set-user-property, unset-user-property.
-property <name>
-
Required
Default: none
Name of the Config API property to apply the action to, such as: 'updateHandler.autoSoftCommit.maxTime'.
-value <new-value>
-
Optional
Default: none
Set the property to this value; accepts JSON objects and strings.
-z <zkHost> or -zkHost <zkHost>
-
Optional
Default:
localhost:9983
The ZooKeeper connection string, usable in SolrCloud mode. Unnecessary if ZK_HOST is defined in solr.in.sh or solr.in.cmd.
-p <port> or -port <port>
-
Optional
Default: none
The localhost port of the Solr node to use when applying the configuration change.
-solrUrl <url>
-
Optional
Default:
http://localhost:8983/solr
Base Solr URL, which can be used in SolrCloud mode to determine the ZooKeeper connection string if that’s not known.
-s <scheme> or -scheme <scheme>
-
Optional
Default:
http
The scheme for accessing Solr. Accepted values: http or https.
ZooKeeper Operations
The bin/solr
script allows certain operations affecting ZooKeeper.
These operations are for SolrCloud mode only.
The operations are available as sub-commands, which each have their own set of options.
bin/solr zk [sub-command] [options]
bin/solr zk -help
The basic usage of bin/solr zk is:
$ bin/solr zk -h
Usage: solr zk upconfig|downconfig -d <confdir> -n <configName> [-z zkHost]
solr zk cp [-r] <src> <dest> [-z zkHost]
solr zk rm [-r] <path> [-z zkHost]
solr zk mv <src> <dest> [-z zkHost]
solr zk ls [-r] <path> [-z zkHost]
solr zk mkroot <path> [-z zkHost]
Solr should have been started at least once before issuing these commands to initialize ZooKeeper with the znodes Solr expects. Once ZooKeeper is initialized, Solr doesn’t need to be running on any node to use these commands.
Upload a Configuration Set
Use the zk upconfig
command to upload one of the pre-configured configuration sets or a customized configuration set to ZooKeeper.
ZK Upload Parameters
All parameters below are required.
-n <name>
-
Required
Default: none
Name of the configuration set in ZooKeeper. This command will upload the configuration set to the "configs" ZooKeeper node giving it the name specified.
You can see all uploaded configuration sets in the Admin UI via the Cloud screens. Choose Cloud → Tree → configs to see them.
If a pre-existing configuration set is specified, it will be overwritten in ZooKeeper.
Example:
-n myconfig
-d <configset dir>
-
Required
Default: none
The local directory of the configuration set to upload. It should have a conf directory immediately below it that in turn contains solrconfig.xml, etc.
If just a name is supplied, $SOLR_TIP/server/solr/configsets will be checked for this name. An absolute path may be supplied instead.
Examples:
-
-d directory_under_configsets
-
-d /path/to/configset/source
-
-z <zkHost>
-
Required
Default: none
The ZooKeeper connection string. Not required if ZK_HOST is defined in solr.in.sh or solr.in.cmd.
Example:
-z 123.321.23.43:2181
An example of this command with all of the parameters is:
bin/solr zk upconfig -z 111.222.333.444:2181 -n mynewconfig -d /path/to/configset
Reload Collections When Changing Configurations
This command does not automatically make changes effective! It simply uploads the configuration sets to ZooKeeper. You can use the Collection API’s RELOAD command to reload any collections that use this configuration set.
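For example, after uploading a changed configset with zk upconfig, a collection that uses it could be reloaded through the Collections API; the collection name here is illustrative:
curl "http://localhost:8983/solr/admin/collections?action=RELOAD&name=mycollection"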
Download a Configuration Set
Use the zk downconfig
command to download a configuration set from ZooKeeper to the local filesystem.
ZK Download Parameters
All parameters listed below are required.
-n <name>
-
Required
Default: none
Name of the configset in ZooKeeper to download. The Admin UI Cloud → Tree → configs node lists all available configuration sets.
Example:
-n myconfig
-d <configset dir>
-
Required
Default: none
The path to write the downloaded configuration set into. If just a name is supplied, $SOLR_TIP/server/solr/configsets will be the parent. An absolute path may be supplied as well.
In either case, pre-existing configurations at the destination will be overwritten!
Examples:
-
-d directory_under_configsets
-
-d /path/to/configset/destination
-
-z <zkHost>
-
Required
Default: none
The ZooKeeper connection string. Unnecessary if ZK_HOST is defined in solr.in.sh or solr.in.cmd.
Example:
-z 123.321.23.43:2181
An example of this command with all parameters is:
bin/solr zk downconfig -z 111.222.333.444:2181 -n mynewconfig -d /path/to/configset
A best practice is to keep your configuration sets in some form of version control as the system-of-record.
In that scenario, downconfig
should rarely be used.
Copy between Local Files and ZooKeeper znodes
Use the zk cp
command for transferring files and directories between ZooKeeper znodes and your local drive.
This command will copy from the local drive to ZooKeeper, from ZooKeeper to the local drive or from ZooKeeper to ZooKeeper.
ZK Copy Parameters
-r
-
Optional
Default: none
Recursively copy <src> to <dest>. The command will fail if <src> has children and -r is not specified.
Example:
-r
<src>
-
Required
Default: none
The file or path to copy from. If prefixed with zk: then the source is presumed to be ZooKeeper. If there is no prefix or the prefix is file: then it is presumed to be the local drive. At least one of <src> or <dest> must be prefixed by zk: or the command will fail.
Examples:
-
zk:/configs/myconfigs/solrconfig.xml
-
file:/Users/apache/configs/src
-
<dest>
-
Required
Default: none
The file or path to copy to. If prefixed with zk: then the destination is presumed to be ZooKeeper. If there is no prefix or the prefix is file: then it is presumed to be the local drive.
At least one of <src> or <dest> must be prefixed by zk: or the command will fail. If <dest> ends in a slash character it names a directory.
Examples:
-
zk:/configs/myconfigs/solrconfig.xml
-
file:/Users/apache/configs/src
-
-z <zkHost>
-
Required
Default: none
The ZooKeeper connection string. Optional if ZK_HOST is defined in solr.in.sh or solr.in.cmd.
Example:
-z 123.321.23.43:2181
When <src>
is a zk resource, <dest>
may be '.'.
If <dest>
ends with '/', then <dest>
will be a local folder or parent znode and the last element of the <src> path will be appended unless <src>
also ends in a slash.
<dest>
may be zk:
, which may be useful when using the cp -r
form to backup/restore the entire zk state.
You must enclose local paths that end in a wildcard in quotes or just end the local path in a slash.
That is, bin/solr zk cp -r /some/dir/ zk:/ -z localhost:2181
is equivalent to bin/solr zk cp -r "/some/dir/" zk:/ -z localhost:2181
but bin/solr zk cp -r /some/dir/\ zk:/ -z localhost:2181
will throw an error.
Here’s an example of backup/restore for a ZK configuration:
To copy to local: bin/solr zk cp -r zk:/ /some/dir -z localhost:2181
To restore to ZK: bin/solr zk cp -r /some/dir/ zk:/ -z localhost:2181
The file:
prefix is stripped, thus file:/wherever
specifies an absolute local path and file:somewhere
specifies a relative local path.
All paths on Zookeeper are absolute.
ZooKeeper nodes CAN have data, so moving a single file to a parent znode will overlay the data on the parent znode; specifying the trailing slash can therefore be important.
Trailing wildcards are supported when copying from local and must be quoted.
Other examples are:
Recursively copy a directory from local to ZooKeeper: bin/solr zk cp -r file:/apache/confgs/whatever/conf zk:/configs/myconf -z 111.222.333.444:2181
Copy a single file from ZooKeeper to local: bin/solr zk cp zk:/configs/myconf/managed_schema /configs/myconf/managed_schema -z 111.222.333.444:2181
Remove a znode from ZooKeeper
Use the zk rm
command to remove a znode (and optionally all child nodes) from ZooKeeper.
ZK Remove Parameters
-r
-
Optional
Default: none
Recursively delete if <path> is a directory. The command will fail if <path> has children and -r is not specified.
Example:
-r
<path>
-
Required
Default: none
The path to remove from ZooKeeper, either a parent or leaf node.
There are limited safety checks; you cannot remove / or /zookeeper nodes.
The path is assumed to be a ZooKeeper node; no zk: prefix is necessary.
Examples:
-
/configs
-
/configs/myconfigset
-
/configs/myconfigset/solrconfig.xml
-
-z <zkHost>
-
Required
Default: none
The ZooKeeper connection string. Optional if ZK_HOST is defined in solr.in.sh or solr.in.cmd.
Example:
-z 123.321.23.43:2181
Examples of this command with the parameters are:
bin/solr zk rm -r /configs
bin/solr zk rm /configs/myconfigset/schema.xml
Move One ZooKeeper znode to Another (Rename)
Use the zk mv
command to move (rename) a ZooKeeper znode.
ZK Move Parameters
<src>
-
Required
Default: none
The znode to rename. The zk: prefix is assumed.
Example:
/configs/oldconfigset
<dest>
-
Required
Default: none
The new name of the znode. The zk: prefix is assumed.
Example:
/configs/newconfigset
-z <zkHost>
-
Required
Default: none
The ZooKeeper connection string. Unnecessary if ZK_HOST is defined in solr.in.sh or solr.in.cmd.
Example:
-z 123.321.23.43:2181
An example of this command is:
bin/solr zk mv /configs/oldconfigset /configs/newconfigset
List a ZooKeeper znode’s Children
Use the zk ls
command to see the children of a znode.
ZK List Parameters
-r
-
Optional
Default: none
Recursively list all descendants of a znode. Only the node names are listed, not the data.
Example:
-r
<path>
-
Required
Default: none
The path on ZooKeeper to list.
Example:
/collections/mycollection
-z <zkHost>
-
Required
Default: none
The ZooKeeper connection string. Optional if ZK_HOST is defined in solr.in.sh or solr.in.cmd.
Example:
-z 123.321.23.43:2181
An example of this command with the parameters is:
bin/solr zk ls -r /collections/mycollection
bin/solr zk ls /collections
Create a znode (supports chroot)
Use the zk mkroot
command to create a znode with no data.
The primary use-case for this command is to support ZooKeeper’s "chroot" concept.
However, it can also be used to create arbitrary paths.
Create znode Parameters
<path>
-
Required
Default: none
The path on ZooKeeper to create. Intermediate znodes will be created if necessary. A leading slash is assumed if not present.
Example:
/solr
-z <zkHost>
-
Required
Default: none
The ZooKeeper connection string. Optional if ZK_HOST is defined in solr.in.sh or solr.in.cmd.
Example:
-z 123.321.23.43:2181
Examples of this command:
bin/solr zk mkroot /solr -z 123.321.23.43:2181
bin/solr zk mkroot /solr/production
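Once the chroot znode exists, a hedged example of pointing a SolrCloud node at it appends the path to the ZooKeeper connection string (host and port as in the example above):
bin/solr start -c -z 123.321.23.43:2181/solr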
Exporting and Importing
Exporting Documents to a File
The export
command will allow you to export documents from a collection in JSON, JSON with Lines, or Javabin format.
All documents can be exported, or only those that match a query.
This hasn’t been tested with nested child documents and your results will vary.
The export command only works with Solr running in cloud mode.
bin/solr export [options]
bin/solr export -help
The bin/solr export
command takes the following parameters:
-url <url>
-
Required
Default: none
Fully-qualified address to a collection.
Example:
-url http://localhost:8983/solr/techproducts
-format <format>
-
Optional
Default:
json
The file format of the export: json, jsonl, or javabin. Choosing javabin exports in the native Solr format, and is compact and fast to import. jsonl is the JSON with Lines format; learn more at https://jsonlines.org/.
-out <path>
-
Optional
Default: see description
Either the path to the directory for the exported data to be written to, or a specific file to be written out.
If only a directory is specified then the file will be created with the name of the collection, as in <collection>.json.
-compress
-
Optional
Default: false
If you specify -compress then the resulting output file will be gzipped, for example <collection>.json.gz.
-query <query string>
-
Optional
Default:
*:*
A custom query. The default is *:* which will export all documents.
-fields <fields>
-
Optional
Default: none
Comma separated list of fields to be exported. By default all fields are fetched.
-limit <number of documents>
-
Optional
Default:
100
Maximum number of docs to download. The value -1 will export all documents.
Examples
Export all documents from a collection gettingstarted
:
bin/solr export -url http://localhost:8983/solr/gettingstarted -limit -1
Export all documents of collection gettingstarted
into a file called 1MDocs.json.gz
as a compressed JSON file:
bin/solr export -url http://localhost:8983/solr/gettingstarted -limit -1 -format json -compress -out 1MDocs
Importing Documents into a Collection
Once you have exported documents in a file, you can use the /update request handler to import them to a new Solr collection.
Example: import json
files
First export the documents, making sure to ignore any fields that are populated via a copyField
by specifying what fields you want to export:
$ bin/solr export -url http://localhost:8983/solr/gettingstarted -fields id,name,manu,cat,features
Create a new collection to import the exported documents into:
$ bin/solr create_collection -c test_collection -n techproducts
Now import the data with either of these methods:
$ curl -X POST -d @gettingstarted.json 'http://localhost:8983/solr/test_collection/update/json/docs?commit=true'
or
$ curl -H 'Content-Type: application/json' -X POST -d @gettingstarted.json 'http://localhost:8983/solr/test_collection/update?commit=true'
Example: import javabin
files
$ bin/solr export -url http://localhost:8983/solr/gettingstarted -format javabin -fields id,name,manu,cat,features
$ curl -X POST --header "Content-Type: application/javabin" --data-binary @gettingstarted.javabin 'http://localhost:8983/solr/test_collection/update?commit=true'
Interacting with API
The api
command will allow you to send an arbitrary HTTP request to a Solr API endpoint.
bin/solr api -help
The bin/solr api
command takes the following parameters:
-get <url>
-
Required
Default: none
Send a GET request to a Solr API endpoint.
Example:
bin/solr api -get http://localhost:8983/solr/COLL_NAME/sql?stmt=select+id+from+COLL_NAME+limit+10
If you have configured basicAuth or TLS with your Solr, you may find this easier than using a separate tool like curl.
$ bin/solr api -get http://localhost:8983/solr/techproducts/select?q=*:*
Here is an example of sending a SQL query to the techproducts /sql end point (assumes you started Solr in Cloud mode with the SQL module enabled):
$ bin/solr api -get http://localhost:8983/solr/techproducts/sql?stmt=select+id+from+techproducts+limit+10
Results are streamed to the terminal.