Class TeamDraftInterleaving

  • All Implemented Interfaces:
    Interleaving

    public class TeamDraftInterleaving
    extends Object
    implements Interleaving
    Interleaving was first introduced by Joachims in [1, 2].
    Team Draft Interleaving is among the most successful and widely used interleaving approaches [3].
    Team Draft Interleaving implements a method similar to the way captains pick their players in team matches: the two ranking models take turns contributing their highest-ranked result not yet selected.
    Team Draft Interleaving produces a fair distribution of the ranking models' elements in the final interleaved list.
    "Team draft interleaving" has also been shown to overcome an issue that the "Balanced interleaving" approach has in determining the winning model [4].

    [1] T. Joachims. Optimizing search engines using clickthrough data. KDD (2002)
    [2] T. Joachims. Evaluating retrieval performance using clickthrough data. In J. Franke, G. Nakhaeizadeh, and I. Renz, editors, Text Mining, pages 79–96. Physica/Springer (2003)
    [3] F. Radlinski, M. Kurup, and T. Joachims. How does clickthrough data reflect retrieval quality? In CIKM, pages 43–52. ACM Press (2008)
    [4] O. Chapelle, T. Joachims, F. Radlinski, and Y. Yue. Large-scale validation and analysis of interleaved search evaluation. ACM TOIS, 30(1):1–41, Feb. (2012)

    • Field Detail

      • RANDOM

        public static Random RANDOM
    • Constructor Detail

      • TeamDraftInterleaving

        public TeamDraftInterleaving()
    • Method Detail

      • interleave

        public InterleavingResult interleave​(org.apache.lucene.search.ScoreDoc[] rerankedA,
                                             org.apache.lucene.search.ScoreDoc[] rerankedB)
        Team Draft Interleaving considers two ranking models: modelA and modelB. For a given query, each model returns its ranked list of documents, La = (a1, a2, ...) and Lb = (b1, b2, ...). The algorithm creates a single interleaved ranked list I = (i1, i2, ...) by drawing elements alternately from La and Lb, as described by Chapelle et al. [1]. Each element ij is labelled TeamA if it was selected from La and TeamB if it was selected from Lb.

        [1] O. Chapelle, T. Joachims, F. Radlinski, and Y. Yue. Large-scale validation and analysis of interleaved search evaluation. ACM TOIS, 30(1):1–41, Feb. (2012)

        Assumptions:
        - rerankedA and rerankedB have the same length and contain the same search results, ranked differently by the two ranking models
        - each reranked list cannot contain the same search result more than once
        - all results come from the same shard

        Specified by:
        interleave in interface Interleaving
        Parameters:
        rerankedA - a ranked list of search results produced by a ranking model A
        rerankedB - a ranked list of search results produced by a ranking model B
        Returns:
        the interleaved ranking list
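        The selection procedure described above can be sketched in plain Java. This is a minimal illustration, not the Solr implementation: it uses String document IDs in place of Lucene ScoreDoc objects, and the class and method names here are hypothetical. It assumes the two input lists satisfy the stated preconditions (equal length, same duplicate-free result set).

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Random;
import java.util.Set;

public class TeamDraftSketch {

    // Team-draft selection: the team that has picked fewer results so far
    // (ties broken by a coin flip) picks its highest-ranked result that is
    // not yet in the interleaved list.
    static List<String> interleave(List<String> rerankedA, List<String> rerankedB, Random random) {
        List<String> interleaved = new ArrayList<>();
        Set<String> alreadyPicked = new HashSet<>();
        int picksA = 0, picksB = 0;   // how many elements each team has contributed
        int ia = 0, ib = 0;           // cursors into the two ranked lists

        while (interleaved.size() < rerankedA.size()) {
            boolean teamAPicks = picksA < picksB || (picksA == picksB && random.nextBoolean());
            if (teamAPicks) {
                while (alreadyPicked.contains(rerankedA.get(ia))) ia++;  // skip results the other team took
                interleaved.add(rerankedA.get(ia));
                alreadyPicked.add(rerankedA.get(ia));
                picksA++;
            } else {
                while (alreadyPicked.contains(rerankedB.get(ib))) ib++;
                interleaved.add(rerankedB.get(ib));
                alreadyPicked.add(rerankedB.get(ib));
                picksB++;
            }
        }
        return interleaved;
    }

    public static void main(String[] args) {
        List<String> la = List.of("d1", "d2", "d3", "d4");
        List<String> lb = List.of("d3", "d1", "d4", "d2");
        System.out.println(interleave(la, lb, new Random()));
    }
}
```

        Whatever the coin flips, the output always has the same length as the inputs, contains each result exactly once, and never lets one team get more than one pick ahead of the other, which is the fairness property noted in the class description.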
      • setRANDOM

        public static void setRANDOM​(Random RANDOM)
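        Because ties between the two teams are broken by a coin flip, interleaving is nondeterministic by default; setRANDOM exists so that callers (typically tests) can inject a seeded Random and make the interleaved order repeatable. The snippet below only demonstrates that two identically seeded Random instances produce the same flip sequence; the commented setRANDOM call is how that property would be used with this class.

```java
import java.util.Random;

public class SeededRandomDemo {
    public static void main(String[] args) {
        // Identically seeded Random instances yield identical coin-flip
        // sequences, so a seeded instance makes team-draft ties deterministic.
        Random r1 = new Random(7L);
        Random r2 = new Random(7L);
        for (int i = 0; i < 10; i++) {
            if (r1.nextBoolean() != r2.nextBoolean()) {
                throw new AssertionError("seeded sequences diverged");
            }
        }
        // With the Solr LTR class on the classpath, a test would do:
        // TeamDraftInterleaving.setRANDOM(new Random(7L));
        System.out.println("seeded sequences match");
    }
}
```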