Upgrading a Solr Cluster
This page covers how to upgrade an existing Solr cluster that was installed using the service installation scripts.
The steps outlined on this page assume you use the default service name of solr.
Here is a checklist of things you need to prepare before starting the upgrade process:
Examine the Solr Upgrade Notes to determine if any behavior changes in the new version of Solr will affect your installation.
If not using replication (i.e., collections with replicationFactor less than 2), then you should make a backup of each collection. If all of your collections use replication, then you don’t technically need to make a backup, since you will be upgrading and verifying each node individually.
Determine which Solr node is currently hosting the Overseer leader process in SolrCloud, as you should upgrade this node last. To determine the Overseer, use the Overseer Status API.
Plan to perform your upgrade during a system maintenance window if possible. You’ll be doing a rolling restart of your cluster (each node, one-by-one), but we still recommend doing the upgrade when system usage is minimal.
Verify the cluster is currently healthy and all replicas are active, as you should not perform an upgrade on a degraded cluster.
Re-build and test all custom server-side components against the new Solr JAR files.
Determine the values of the following variables that are used by the Solr Control Scripts:
ZK_HOST: The ZooKeeper connection string your current SolrCloud nodes use to connect to ZooKeeper; this value will be the same for all nodes in the cluster.
SOLR_HOST: The hostname each Solr node used to register with ZooKeeper when joining the SolrCloud cluster; this value will be used to set the host Java system property when starting the new Solr process.
SOLR_PORT: The port each Solr node is listening on, such as 8983.
SOLR_HOME: The absolute path to the Solr home directory for each Solr node. This value will be passed to the new Solr process using the solr.solr.home system property; see Configuring solr.xml.
If you are upgrading from an installation of Solr 5.x or later, these values can typically be found in either /var/solr/solr.in.sh or /etc/default/solr.in.sh.
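For the Overseer check in the list above, the Overseer Status API can be called with curl from any machine that can reach the cluster. The node address below is illustrative; substitute any live node of your own:

```shell
# Ask any node which node currently holds the Overseer leader role.
# localhost:8983 is a placeholder; substitute a live node in your cluster.
SOLR_NODE="localhost:8983"
OVERSEER_URL="http://${SOLR_NODE}/solr/admin/collections?action=OVERSEERSTATUS"
curl -s "$OVERSEER_URL" || echo "request failed; is the node running?"
```

The leader field in the response identifies the Overseer node; plan to upgrade that node last.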
You should now be ready to upgrade your cluster. Please verify this process in a test or staging cluster before doing it in production.
The approach we recommend is to perform the upgrade of each Solr node, one-by-one. In other words, you will need to stop a node, upgrade it to the new version of Solr, and restart it before moving on to the next node. This means that for a short period of time, there will be a mix of "Old Solr" and "New Solr" nodes running in your cluster. We also assume that you will point the new Solr node to your existing Solr home directory where the Lucene index files are managed for each collection on the node. This means that you won’t need to move any index files around to perform the upgrade.
Begin by stopping the Solr node you want to upgrade.
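Assuming the default service name of solr (the same name used by the start command later on this page), stopping the node looks like:

```shell
# Stop the Solr service on the node being upgraded.
sudo service solr stop
```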
After stopping the node, if using replication (i.e., collections with replicationFactor greater than 1), verify that all leaders hosted on the downed node have successfully migrated to other replicas; you can do this by visiting the Cloud Screens in the Solr Admin UI.
If not using replication, then any collections with shards hosted on the downed node will be temporarily off-line.
Please follow the instructions to install Solr as a service on Linux documented at Taking Solr to Production. Use the -n parameter to avoid automatic start of Solr by the installer script.
You need to update the /etc/default/solr.in.sh include file in the next step to complete the upgrade process.
Open /etc/default/solr.in.sh with a text editor and verify that the following variables are set correctly, or add them to the bottom of the include file as needed:
ZK_HOST=
SOLR_HOST=
SOLR_PORT=
SOLR_HOME=
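For reference, a filled-in fragment of the include file might look like the following. All values here are illustrative; copy the real values you recorded from the old node:

```shell
# Illustrative values only; use the values recorded from your old node.
ZK_HOST="zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181"
SOLR_HOST="solr1.example.com"
SOLR_PORT="8983"
SOLR_HOME="/var/solr/data"
```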
Make sure the user you plan to own the Solr process is the owner of the SOLR_HOME directory. For instance, if you plan to run Solr as the "solr" user and SOLR_HOME is /var/solr/data, then you would do:
sudo chown -R solr: /var/solr/data
You are now ready to start the upgraded Solr node by doing:
sudo service solr start.
The upgraded instance will join the existing cluster because you’re using the same SOLR_HOME, SOLR_PORT, and SOLR_HOST settings used by the old Solr node; thus, the new server will look like the old node to the running cluster.
Be sure to look in /var/solr/logs/solr.log for errors during startup.
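One quick way to check is to grep the log for errors. This sketch assumes the default log location of a service install; adjust the path if yours differs:

```shell
# Default log location for a service install; change if you customized it.
LOG="/var/solr/logs/solr.log"
if [ -f "$LOG" ]; then
  # Show the most recent errors or stack traces, if any.
  grep -iE "ERROR|Exception" "$LOG" | tail -n 20
fi
```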
You should run the Solr healthcheck command for all collections that are hosted on the upgraded node before proceeding to upgrade the next node in your cluster.
For instance, if the newly upgraded node hosts a replica for the MyDocuments collection, then you can run the following command (replace ZK_HOST with the ZooKeeper connection string):
/opt/solr/bin/solr healthcheck -c MyDocuments -z ZK_HOST
Look for any problems reported about any of the replicas for the collection.
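If the node hosts replicas for several collections, the healthcheck can be run in a loop. The collection names and ZooKeeper connection string below are illustrative:

```shell
# Substitute your actual ZooKeeper connection string and collection names.
ZK_HOST="zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181"
for COLLECTION in MyDocuments MyOtherCollection; do
  /opt/solr/bin/solr healthcheck -c "$COLLECTION" -z "$ZK_HOST"
done
```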
Lastly, repeat Steps 1-5 for all nodes in your cluster.