Learning To Rank
With the Learning To Rank (or LTR for short) contrib module you can configure and run machine learned ranking models in Solr.
The module also supports feature extraction inside Solr. The only thing you need to do outside Solr is train your own ranking model.
Learning to Rank Concepts
Re-Ranking
Re-Ranking allows you to run a simple query for matching documents and then re-rank the top N documents using the scores from a different, more complex query. This page describes the use of LTR complex queries; information on other rank queries included in the Solr distribution can be found on the Query Re-Ranking page.
Learning To Rank Models
In information retrieval systems, Learning to Rank is used to re-rank the top N retrieved documents using trained machine learning models. The hope is that such sophisticated models can make more nuanced ranking decisions than standard ranking functions like TF-IDF or BM25.
Ranking Model
A ranking model computes the scores used to rerank documents. Irrespective of any particular algorithm or implementation, a ranking model’s computation can use three types of inputs:
parameters that represent the scoring algorithm
features that represent the document being scored
features that represent the query for which the document is being scored
Interleaving
Interleaving is an approach to Online Search Quality evaluation that allows you to compare two models by interleaving their results in the final ranked list returned to the user.
Currently only the Team Draft Interleaving algorithm is supported (and its implementation assumes all results are from the same shard).
Feature
A feature is a value, a number, that represents some quantity or quality of the document being scored or of the query for which documents are being scored. For example, documents often have a 'recency' quality, and 'number of past purchases' might be a quantity that is passed to Solr as part of the search query.
Normalizer
Some ranking models expect features on a particular scale. A normalizer can be used to translate arbitrary feature values into normalized values, e.g., on a 0..1 or 0..100 scale.
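For example, a min/max normalizer rescales a raw value v into the 0..1 range via (v - min) / (max - min). As a minimal sketch of how this looks in practice (the feature name and bounds here are illustrative), a normalizer is attached to a feature within a model definition, a concept described later on this page:
{
  "name" : "documentRecency",
  "norm" : {
    "class" : "org.apache.solr.ltr.norm.MinMaxNormalizer",
    "params" : { "min" : "0.0", "max" : "1.0" }
  }
}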
Training Models
Feature Engineering
The LTR contrib module includes several feature classes as well as support for custom features. Each feature class’s javadocs contain an example to illustrate use of that class. The process of feature engineering itself is then entirely up to your domain expertise and creativity.
Feature | Class | Example parameters | External Feature Information |
---|---|---|---|
field length | FieldLengthFeature | {"field":"title"} | not (yet) supported |
field value | FieldValueFeature | {"field":"hits"} | not (yet) supported |
original score | OriginalScoreFeature | {} | not applicable |
solr query | SolrFeature | {"q":"{!func} recip(ms(NOW,last_modified) ,3.16e-11,1,1)"} | supported |
solr filter query | SolrFeature | {"fq":["{!terms f=category}book"]} | supported |
solr query + filter query | SolrFeature | {"q":"{!func} recip(ms(NOW,last_modified), 3.16e-11,1,1)", "fq":["{!terms f=category}book"]} | supported |
value | ValueFeature | {"value":"${userFromMobile}","required":true} | supported |
(custom) | (custom class extending Feature) | (not applicable) | (not applicable) |
Normalizer | Class | Example parameters |
---|---|---|
Identity | IdentityNormalizer | {} |
MinMax | MinMaxNormalizer | {"min":"0", "max":"50" } |
Standard | StandardNormalizer | {"avg":"42","std":"6"} |
(custom) | (custom class extending Normalizer) | (not applicable) |
Feature Extraction
The ltr contrib module includes a [features] transformer to support the calculation and return of feature values for feature extraction purposes, including and especially when you do not yet have an actual reranking model.
Feature Selection and Model Training
Feature selection and model training take place offline and outside Solr. The ltr contrib module supports two generalized forms of models as well as custom models. Each model class’s javadocs contain an example to illustrate configuration of that class. Your trained model or models (e.g., different models for different customer geographies) can then be uploaded directly into Solr as JSON files using the provided REST APIs.
General form | Class | Specific examples |
---|---|---|
Linear | LinearModel | RankSVM, Pranking |
Multiple Additive Trees | MultipleAdditiveTreesModel | LambdaMART, Gradient Boosted Regression Trees (GBRT) |
Neural Network | NeuralNetworkModel | RankNet |
(wrapper) | DefaultWrapperModel | (not applicable) |
(custom) | (custom class extending AdapterModel) | (not applicable) |
(custom) | (custom class extending LTRScoringModel) | (not applicable) |
Quick Start with LTR
The "techproducts"
example included with Solr is pre-configured with the plugins required for learning-to-rank, but they are disabled by default.
To enable the plugins, please specify the solr.ltr.enabled
JVM System Property when running the example:
bin/solr start -e techproducts -Dsolr.ltr.enabled=true
Uploading Features
To upload features in a /path/myFeatures.json file, please run:
curl -XPUT 'http://localhost:8983/solr/techproducts/schema/feature-store' --data-binary "@/path/myFeatures.json" -H 'Content-type:application/json'
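For reference, a minimal sketch of what /path/myFeatures.json might contain, consistent with the feature names (documentRecency, isBook, originalScore) that appear in the extraction output below; the exact queries and fields are illustrative:
[
  {
    "name" : "documentRecency",
    "class" : "org.apache.solr.ltr.feature.SolrFeature",
    "params" : {
      "q" : "{!func}recip( ms(NOW,last_modified), 3.16e-11, 1, 1)"
    }
  },
  {
    "name" : "isBook",
    "class" : "org.apache.solr.ltr.feature.SolrFeature",
    "params" : {
      "fq" : [ "{!terms f=cat}book" ]
    }
  },
  {
    "name" : "originalScore",
    "class" : "org.apache.solr.ltr.feature.OriginalScoreFeature",
    "params" : {}
  }
]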
To view the features you just uploaded please open the following URL in a browser:
http://localhost:8983/solr/techproducts/schema/feature-store/_DEFAULT_
Extracting Features
To extract features as part of a query, add [features] to the fl parameter, for example:
http://localhost:8983/solr/techproducts/query?q=test&fl=id,score,[features]
The output will include feature values as a comma-separated list, resembling the output shown here:
{
"responseHeader":{
"status":0,
"QTime":0,
"params":{
"q":"test",
"fl":"id,score,[features]"}},
"response":{"numFound":2,"start":0,"maxScore":1.959392,"docs":[
{
"id":"GB18030TEST",
"score":1.959392,
"[features]":"documentRecency=0.020893794,isBook=0.0,originalScore=1.959392"},
{
"id":"UTF8TEST",
"score":1.5513437,
"[features]":"documentRecency=0.020893794,isBook=0.0,originalScore=1.5513437"}]
}}
Uploading a Model
To upload the model in a /path/myModel.json file, please run:
curl -XPUT 'http://localhost:8983/solr/techproducts/schema/model-store' --data-binary "@/path/myModel.json" -H 'Content-type:application/json'
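A minimal sketch of what /path/myModel.json might contain, assuming a LinearModel over the three example features; the weights shown are illustrative placeholders, not trained values:
{
  "class" : "org.apache.solr.ltr.model.LinearModel",
  "name" : "myModel",
  "features" : [
    { "name" : "documentRecency" },
    { "name" : "isBook" },
    { "name" : "originalScore" }
  ],
  "params" : {
    "weights" : {
      "documentRecency" : 1.0,
      "isBook" : 0.1,
      "originalScore" : 0.5
    }
  }
}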
To view the model you just uploaded please open the following URL in a browser:
http://localhost:8983/solr/techproducts/schema/model-store
Running a Rerank Query
To rerank the results of a query, add the rq parameter to your search, for example:
http://localhost:8983/solr/techproducts/query?q=test&rq={!ltr model=myModel reRankDocs=100}&fl=id,score
The addition of the rq parameter will not change the format of the search output, though document scores and ordering may change.
To obtain the feature values computed during reranking, add [features] to the fl parameter, for example:
http://localhost:8983/solr/techproducts/query?q=test&rq={!ltr model=myModel reRankDocs=100}&fl=id,score,[features]
The output will include feature values as a comma-separated list, resembling the output shown here:
{
"responseHeader":{
"status":0,
"QTime":0,
"params":{
"q":"test",
"fl":"id,score,[features]",
"rq":"{!ltr model=myModel reRankDocs=100}"}},
"response":{"numFound":2,"start":0,"maxScore":1.0005897,"docs":[
{
"id":"GB18030TEST",
"score":1.0005897,
"[features]":"documentRecency=0.020893792,isBook=0.0,originalScore=1.959392"},
{
"id":"UTF8TEST",
"score":0.79656565,
"[features]":"documentRecency=0.020893792,isBook=0.0,originalScore=1.5513437"}]
}}
Running a Rerank Query Interleaving Two Models
To rerank the results of a query, interleaving two models (myModelA, myModelB), add the rq parameter to your search, passing the two models as input, for example:
http://localhost:8983/solr/techproducts/query?q=test&rq={!ltr model=myModelA model=myModelB reRankDocs=100}&fl=id,score
To obtain the model that interleaving picked for a search result, computed during reranking, add [interleaving] to the fl parameter, for example:
http://localhost:8983/solr/techproducts/query?q=test&rq={!ltr model=myModelA model=myModelB reRankDocs=100}&fl=id,score,[interleaving]
The output will include the model picked for each search result, resembling the output shown here:
{
"responseHeader":{
"status":0,
"QTime":0,
"params":{
"q":"test",
"fl":"id,score,[interleaving]",
"rq":"{!ltr model=myModelA model=myModelB reRankDocs=100}"}},
"response":{"numFound":2,"start":0,"maxScore":1.0005897,"docs":[
{
"id":"GB18030TEST",
"score":1.0005897,
"[interleaving]":"myModelB"},
{
"id":"UTF8TEST",
"score":0.79656565,
"[interleaving]":"myModelA"}]
}}
Running a Rerank Query Interleaving a Model with the Original Ranking
When approaching Search Quality Evaluation with interleaving it may be useful to compare a model with the original ranking.
To rerank the results of a query, interleaving a model with the original ranking, add the rq parameter to your search, passing the special inbuilt _OriginalRanking_ model identifier as one model and your comparison model as the other, for example:
http://localhost:8983/solr/techproducts/query?q=test&rq={!ltr model=_OriginalRanking_ model=myModel reRankDocs=100}&fl=id,score
The addition of the rq parameter will not change the format of the search output, though document scores and ordering may change.
To obtain the model that interleaving picked for a search result, computed during reranking, add [interleaving] to the fl parameter, for example:
http://localhost:8983/solr/techproducts/query?q=test&rq={!ltr model=_OriginalRanking_ model=myModel reRankDocs=100}&fl=id,score,[interleaving]
The output will include the model picked for each search result, resembling the output shown here:
{
"responseHeader":{
"status":0,
"QTime":0,
"params":{
"q":"test",
"fl":"id,score,[features]",
"rq":"{!ltr model=_OriginalRanking_ model=myModel reRankDocs=100}"}},
"response":{"numFound":2,"start":0,"maxScore":1.0005897,"docs":[
{
"id":"GB18030TEST",
"score":1.0005897,
"[interleaving]":"_OriginalRanking_"},
{
"id":"UTF8TEST",
"score":0.79656565,
"[interleaving]":"myModel"}]
}}
Running a Rerank Query with Interleaving Passing a Specific Algorithm
To rerank the results of a query, interleaving two models using a specific algorithm, add the interleavingAlgorithm local parameter to the ltr query parser, for example:
http://localhost:8983/solr/techproducts/query?q=test&rq={!ltr model=myModelA model=myModelB reRankDocs=100 interleavingAlgorithm=TeamDraft}&fl=id,score
Currently the only (and default) algorithm supported is 'TeamDraft'.
External Feature Information
The ValueFeature and SolrFeature classes support the use of external feature information, efi for short.
Uploading Features
To upload features in a /path/myEfiFeatures.json file, please run:
curl -XPUT 'http://localhost:8983/solr/techproducts/schema/feature-store' --data-binary "@/path/myEfiFeatures.json" -H 'Content-type:application/json'
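A minimal sketch of what /path/myEfiFeatures.json might contain; the ${text}, ${preferredManufacturer}, ${fromMobile} and ${answer} placeholders are resolved from the efi.* parameters shown in the queries below, and the feature names here are illustrative:
[
  {
    "store" : "myEfiFeatureStore",
    "name" : "isPreferredManufacturer",
    "class" : "org.apache.solr.ltr.feature.SolrFeature",
    "params" : { "fq" : [ "{!field f=manu}${preferredManufacturer}" ] }
  },
  {
    "store" : "myEfiFeatureStore",
    "name" : "userAnswerValue",
    "class" : "org.apache.solr.ltr.feature.ValueFeature",
    "params" : { "value" : "${answer:42}" }
  },
  {
    "store" : "myEfiFeatureStore",
    "name" : "userFromMobileValue",
    "class" : "org.apache.solr.ltr.feature.ValueFeature",
    "params" : { "value" : "${fromMobile}", "required" : true }
  },
  {
    "store" : "myEfiFeatureStore",
    "name" : "userTextCat",
    "class" : "org.apache.solr.ltr.feature.SolrFeature",
    "params" : { "q" : "{!field f=cat}${text}" }
  }
]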
To view the features you just uploaded please open the following URL in a browser:
http://localhost:8983/solr/techproducts/schema/feature-store/myEfiFeatureStore
As an aside, you may have noticed that the myEfiFeatures.json example uses "store":"myEfiFeatureStore" attributes: read more about feature stores in the LTR Lifecycle section of this page.
Extracting Features
To extract myEfiFeatureStore features as part of a query, add efi.* parameters to the [features] part of the fl parameter, for example:
http://localhost:8983/solr/techproducts/query?q=test&fl=id,cat,manu,score,[features store=myEfiFeatureStore efi.text=test efi.preferredManufacturer=Apache efi.fromMobile=1]
http://localhost:8983/solr/techproducts/query?q=test&fl=id,cat,manu,score,[features store=myEfiFeatureStore efi.text=test efi.preferredManufacturer=Apache efi.fromMobile=0 efi.answer=13]
Uploading a Model
To upload the model in a /path/myEfiModel.json file, please run:
curl -XPUT 'http://localhost:8983/solr/techproducts/schema/model-store' --data-binary "@/path/myEfiModel.json" -H 'Content-type:application/json'
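Correspondingly, a sketch of what /path/myEfiModel.json might contain, assuming a LinearModel over the myEfiFeatureStore features sketched earlier; the weights are illustrative:
{
  "store" : "myEfiFeatureStore",
  "name" : "myEfiModel",
  "class" : "org.apache.solr.ltr.model.LinearModel",
  "features" : [
    { "name" : "isPreferredManufacturer" },
    { "name" : "userAnswerValue" },
    { "name" : "userFromMobileValue" },
    { "name" : "userTextCat" }
  ],
  "params" : {
    "weights" : {
      "isPreferredManufacturer" : 0.2,
      "userAnswerValue" : 0.1,
      "userFromMobileValue" : 0.1,
      "userTextCat" : 0.1
    }
  }
}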
To view the model you just uploaded please open the following URL in a browser:
http://localhost:8983/solr/techproducts/schema/model-store
Running a Rerank Query
To obtain the feature values computed during reranking, add [features] to the fl parameter and efi.* parameters to the rq parameter, for example:
http://localhost:8983/solr/techproducts/query?q=test&rq={!ltr model=myEfiModel efi.text=test efi.preferredManufacturer=Apache efi.fromMobile=1}&fl=id,cat,manu,score,[features]
http://localhost:8983/solr/techproducts/query?q=test&rq={!ltr model=myEfiModel efi.text=test efi.preferredManufacturer=Apache efi.fromMobile=0 efi.answer=13}&fl=id,cat,manu,score,[features]
Notice the absence of efi.* parameters in the [features] part of the fl parameter.
Extracting Features While Reranking
To extract myEfiFeatureStore features while still reranking with myModel:
http://localhost:8983/solr/techproducts/query?q=test&rq={!ltr model=myModel}&fl=id,cat,manu,score,[features store=myEfiFeatureStore efi.text=test efi.preferredManufacturer=Apache efi.fromMobile=1]
Notice the absence of efi.* parameters in the rq parameter (because myModel does not use efi features) and the presence of efi.* parameters in the [features] part of the fl parameter (because myEfiFeatureStore contains efi features).
Read more about model evolution in the LTR Lifecycle section of this page.
Training Example
Example training data and a demo train_and_upload_demo_model.py script can be found in the solr/contrib/ltr/example folder in the Apache lucene-solr Git repository (mirrored on github.com). This example folder is not shipped in the Solr binary release.
Installation of LTR
The ltr contrib module requires the dist/solr-ltr-*.jar JARs.
LTR Configuration
Learning-To-Rank is a contrib module and therefore its plugins must be configured in solrconfig.xml.
Minimum Requirements
Include the required contrib JARs. Note that by default paths are relative to the Solr core, so they may need adjusting for your configuration, or an explicit specification of $solr.install.dir:
<lib dir="${solr.install.dir:../../../..}/dist/" regex="solr-ltr-\d.*\.jar" />
Declaration of the ltr query parser:
<queryParser name="ltr" class="org.apache.solr.ltr.search.LTRQParserPlugin"/>
Configuration of the feature values cache:
<cache name="QUERY_DOC_FV" class="solr.search.LRUCache" size="4096" initialSize="2048" autowarmCount="4096" regenerator="solr.search.NoOpRegenerator" />
Declaration of the [features] transformer:
<transformer name="features" class="org.apache.solr.ltr.response.transform.LTRFeatureLoggerTransformerFactory">
  <str name="fvCacheName">QUERY_DOC_FV</str>
</transformer>
Declaration of the [interleaving] transformer:
<transformer name="interleaving" class="org.apache.solr.ltr.response.transform.LTRInterleavingTransformerFactory"/>
Advanced Options
LTRThreadModule
A thread module can be configured for the query parser and/or the transformer to parallelize the creation of feature weights. For details, please refer to the LTRThreadModule javadocs.
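As a sketch (parameter names as per the LTRThreadModule javadocs; the pool sizes shown are illustrative, not recommendations), threads might be configured on the query parser declaration like this:
<queryParser name="ltr" class="org.apache.solr.ltr.search.LTRQParserPlugin">
  <int name="threadModule.totalPoolThreads">10</int>
  <int name="threadModule.numThreadsPerRequest">5</int>
</queryParser>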
Feature Vector Customization
The features transformer returns dense CSV values such as featureA=0.1,featureB=0.2,featureC=0.3,featureD=0.0.
For sparse CSV output such as featureA:0.1 featureB:0.2 featureC:0.3 you can customize the feature logger transformer declaration in solrconfig.xml as follows:
<transformer name="features" class="org.apache.solr.ltr.response.transform.LTRFeatureLoggerTransformerFactory">
<str name="fvCacheName">QUERY_DOC_FV</str>
<str name="defaultFormat">sparse</str>
<str name="csvKeyValueDelimiter">:</str>
<str name="csvFeatureSeparator"> </str>
</transformer>
Implementation and Contributions
How does Solr Learning-To-Rank work under the hood?
Please refer to the ltr javadocs for an implementation overview.
How could I write additional models and/or features?
Contributions for further models, features, normalizers and interleaving algorithms are welcome.
LTR Lifecycle
Feature Stores
It is recommended that you organise all your features into stores which are akin to namespaces:
Features within a store must be named uniquely.
Across stores identical or similar features can share the same name.
If no store name is specified then the default _DEFAULT_ feature store will be used.
To discover the names of all your feature stores:
http://localhost:8983/solr/techproducts/schema/feature-store
To inspect the content of the commonFeatureStore feature store:
http://localhost:8983/solr/techproducts/schema/feature-store/commonFeatureStore
Models
A model uses features from exactly one feature store.
If no store is specified then the default _DEFAULT_ feature store will be used.
A model need not use all the features defined in a feature store.
Multiple models can use the same feature store.
To extract features for currentFeatureStore's features:
http://localhost:8983/solr/techproducts/query?q=test&fl=id,score,[features store=currentFeatureStore]
To extract features for nextFeatureStore features whilst reranking with currentModel based on currentFeatureStore:
http://localhost:8983/solr/techproducts/query?q=test&rq={!ltr model=currentModel reRankDocs=100}&fl=id,score,[features store=nextFeatureStore]
To view all models:
http://localhost:8983/solr/techproducts/schema/model-store
To delete the currentModel model:
curl -XDELETE 'http://localhost:8983/solr/techproducts/schema/model-store/currentModel'
A feature store may be deleted only when there are no models using it.
To delete the currentFeatureStore feature store:
curl -XDELETE 'http://localhost:8983/solr/techproducts/schema/feature-store/currentFeatureStore'
Using Large Models
With SolrCloud, large models may fail to upload due to ZooKeeper's buffer size limit. In this case, DefaultWrapperModel may help you to separate the model definition from the uploaded file. Assume that you want to use a large model, placed at /path/to/models/myModel.json, through DefaultWrapperModel; in outline the model file looks like this:
{
"store" : "largeModelsFeatureStore",
"name" : "myModel",
"class" : ...,
"features" : [
...
],
"params" : {
...
}
}
First, add the directory to Solr's resource paths with a <lib/> directive:
<lib dir="/path/to" regex="models" />
Then, configure DefaultWrapperModel to wrap myModel.json:
{
"store" : "largeModelsFeatureStore",
"name" : "myWrapperModel",
"class" : "org.apache.solr.ltr.model.DefaultWrapperModel",
"params" : {
"resource" : "myModel.json"
}
}
myModel.json will be loaded during initialization and can be used by specifying model=myWrapperModel.
No "features" are configured in myWrapperModel because the features of the wrapped model (myModel ) will be used; also note that the "store" configured for the wrapper model must match that of the wrapped model i.e., in this example the feature store called largeModelsFeatureStore is used.
|
<lib dir="/path/to/models" regex=".*\.json" /> doesn’t work as expected in this case, because SolrResourceLoader considers given resources as JAR if <lib /> indicates files.
|
As an alternative to the above-described DefaultWrapperModel, it is possible to increase ZooKeeper's file size limit.
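ZooKeeper's file size limit is governed by its jute.maxbuffer system property, which must be set to the same value on every ZooKeeper server and every Solr node. As a sketch (the 2 MB value is illustrative, not a recommendation), on a Solr node this could go in solr.in.sh:
SOLR_OPTS="$SOLR_OPTS -Djute.maxbuffer=2097152"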
Applying Changes
The feature store and the model store are both Managed Resources. Changes made to managed resources are not applied to the active Solr components until the Solr collection (or Solr core in single server mode) is reloaded.
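For example, in SolrCloud mode the techproducts collection could be reloaded via the Collections API after uploading new features or models (for a standalone core, the equivalent Core Admin RELOAD action applies):
http://localhost:8983/solr/admin/collections?action=RELOAD&name=techproducts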
LTR Examples
One Feature Store, Multiple Ranking Models
leftModel and rightModel both use features from commonFeatureStore and the only difference between the two models is the weights attached to each feature.
Conventions used:
commonFeatureStore.json file contains features for the commonFeatureStore feature store
leftModel.json file contains the model named leftModel
rightModel.json file contains the model named rightModel
The model's features and weights are sorted alphabetically by name; this makes it easy to see the commonalities and differences between the two models.
The store's features are sorted alphabetically by name; this makes it easy to look up the features used in the models.
Model Evolution
linearModel201701 uses features from featureStore201701
treesModel201702 uses features from featureStore201702
linearModel201701 and treesModel201702 and their feature stores can co-exist whilst both are needed.
When linearModel201701 has been deleted then featureStore201701 can also be deleted.
Conventions used:
<store>.json file contains features for the <store> feature store
<model>.json file contains the model named <model>
a 'generation' id (e.g., YYYYMM year-month) is part of the feature store and model names
The model's features and weights are sorted alphabetically by name; this makes it easy to see the commonalities and differences between the two models.
The stores' features are sorted alphabetically by name; this makes it easy to see the commonalities and differences between the two feature stores.
Additional LTR Resources
"Learning to Rank in Solr" presentation at Lucene/Solr Revolution 2015 in Austin:
The importance of Online Testing in Learning To Rank: