
elastic search intelligent search #30

Closed GenweiWu closed 6 years ago

GenweiWu commented 6 years ago

Tutorial followed

Learning to Rank Demo

This demo uses data from TheMovieDB (TMDB) to demonstrate using Ranklib learning to rank models with Elasticsearch.

Install Dependencies and prep data...

This demo requires

An aside: X Pack

Using the LTR plugin with xpack requires configuring appropriate roles. These can be set up automatically by prepare_xpack.py, which takes a username and will prompt for a password. After this is run, settings.cfg must be edited to uncomment the ESUser and ESPassword properties.

python prepare_xpack.py <xpack admin username>

Download the TMDB Data & Ranklib Jar

The first time you run this demo, fetch RankLib.jar (used to train model) and tmdb.json (the dataset used)

python prepare.py

Start Elasticsearch/install plugin

Start a supported version of Elasticsearch and follow the instructions to install the learning to rank plugin.

Index to Elasticsearch

This script will create a 'tmdb' index with default/simple mappings. You can edit this file to play with mappings.

python indexMlTmdb.py

Onto the machine learning...

TLDR

If you're actually going to build a learning to rank system, read past this section. But to sum up, the full Movie demo can be run by

python train.py

Then you can search using

python search.py Rambo

and search results will be printed to the console.

More on how all this actually works below:

Create and upload features (loadFeatures.py)

A "feature" in ES LTR corresponds to an Elasticsearch query. The score yielded by the query is used to train and evaluate the model. For example, if you feel that a TF*IDF title score corresponds to higher relevance, then that's a feature you'd want to train on! Other features might include how old a movie is, the number of keywords in a query, or whatever else you suspect might correlate to your user's sense of relevance.

If you examine loadFeatures.py you'll see how we create features. We first initialize the default feature store (PUT /_ltr). We create a feature set (POST /_ltr/_featureset/movie_features). Now we have a place to create features for both logging & use by our models!

In the demo, features 1...n json are mustache templates that correspond to the features. In this case, the features are identified by ordinal (feature 1 is in 1.json). They are uploaded to Elasticsearch Learning to Rank with these ordinals as the feature name. In eachFeature, you'll see a loop where we access each mustache template on the file system and return a JSON body for adding the feature to Elasticsearch.
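To make that flow concrete, here is a rough Python sketch of the same idea using requests (the helper name each_feature and the exact request bodies are illustrative; loadFeatures.py may differ in detail):

import json
import requests

ES = "http://localhost:9200"  # assumed local Elasticsearch

def each_feature(n_features=2):
    """Yield one feature definition per mustache template on the file system."""
    for ordinal in range(1, n_features + 1):
        with open("%s.json" % ordinal) as f:
            template = json.load(f)
        yield {
            "name": str(ordinal),    # Ranklib features are identified by ordinal
            "params": ["keywords"],  # the mustache variable the templates expect
            "template": template,
        }

# initialize the default feature store, then create the feature set
requests.put("%s/_ltr" % ES)
requests.post("%s/_ltr/_featureset/movie_features" % ES,
              json={"featureset": {"features": list(each_feature())}})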

For traditional Ranklib models, the ordinal is the only way features are identified. Other models use feature names which make developing, logging, and managing features more maintainable.

Gather Judgments (sample_judgments.txt)

The first part of the training data is the judgment list. We've provided one in sample_judgments.txt.

What's a judgment list? A judgment list tells us how relevant a document is for a search query. In other words, a three-tuple of

<grade>,<docId>,<keywords>

Quality comes in the form of grades. For example, if the movie "First Blood" is considered extremely relevant for the query Rambo, we give it a grade of 4 ('exactly relevant'). The movie Bambi would receive a '0'. Instead of the notional CSV format above, Ranklib and other learning to rank systems use a format from LibSVM, shown below:

# qid:1: rambo
#
#
# grade (0-4)   queryid  # docId    title
4   qid:1 # 7555    Rambo

You'll notice we bastardize this syntax to add comments identifying the keywords associated with each query id, and append metadata to each line. Code provided in judgments.py handles this syntax.
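To illustrate, here is a minimal parsing sketch for this commented syntax (the names Judgment and parse_judgments are illustrative and not the actual judgments.py API):

import re
from collections import namedtuple

Judgment = namedtuple("Judgment", "grade qid keywords doc_id")

def parse_judgments(lines):
    """Parse the commented LibSVM-style judgment syntax shown above."""
    keywords_by_qid = {}
    for line in lines:
        # header comments map query ids to keywords: "# qid:1: rambo"
        header = re.match(r"#\s*qid:(\d+):\s*(.*)", line)
        if header:
            keywords_by_qid[int(header.group(1))] = header.group(2).strip()
            continue
        # judgment rows: "4   qid:1 # 7555    Rambo"
        row = re.match(r"(\d+)\s+qid:(\d+)\s+#\s+(\S+)", line)
        if row:
            grade, qid, doc_id = int(row.group(1)), int(row.group(2)), row.group(3)
            yield Judgment(grade, qid, keywords_by_qid[qid], doc_id)

for judgment in parse_judgments(open("sample_judgments.txt")):
    print(judgment)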

Log features (collectFeatures.py)

You saw above how we created features; the next step is to log features for each judgment 3-tuple. This code is in collectFeatures.py. Logging features can be done in several different contexts. Of course, in a production system, you may wish to log features as users search. In other contexts, you may have a hand-created judgment list (as we do) and wish to simply ask Elasticsearch Learning to Rank for feature values for query/document pairs.

In collectFeatures.py, you'll see an sltr query is included. This query points to a featureSet, not a model. So it does not influence the score. We filter down to needed document ids for each keyword and allow this sltr query to run.

You'll also notice an ext component in the request. This search extension is part of the Elasticsearch Learning to Rank plugin and allows you to configure feature logging. You'll notice it refers to the query name of sltr, allowing it to pluck out the sltr query and perform logging associated with the feature set.

Once features are gathered, the judgment list is fleshed out with feature values; the ordinals below correspond to the features in our 1..n.json files.

4   qid:1   1:12.318446 2:9.8376875 # 7555  rambo
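One way such a line could be produced from a judgment plus its logged feature values (a small formatting sketch, not the demo's exact code):

def to_ranklib_line(grade, qid, features, doc_id, keywords):
    """Format one training row with 1-based feature ordinals, as in the example above."""
    feature_part = " ".join("%d:%s" % (i, v) for i, v in enumerate(features, start=1))
    return "%d\tqid:%d\t%s # %s\t%s" % (grade, qid, feature_part, doc_id, keywords)

# the logged values for "First Blood" (7555) under the query "rambo"
print(to_ranklib_line(4, 1, [12.318446, 9.8376875], "7555", "rambo"))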

Train (train.py and RankLib.jar)

With training data in place, it's time to ask RankLib to train a model and output it to a text file. RankLib supports linear models, ListNet, and several tree-based models such as LambdaMART. In train.py you'll notice how RankLib is called with command line arguments. Models test_N are created in our feature store for each type of RankLib model. In the saveModel function, you can see how the model is uploaded to our "movie_features" feature set.
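A rough sketch of that train-and-upload flow (the RankLib flags and the plugin's _createmodel endpoint are used as I understand them; train.py's saveModel may differ in detail):

import subprocess
import requests

ES = "http://localhost:9200"  # assumed local Elasticsearch

# train one model; in RankLib's numbering, -ranker 6 is LambdaMART
subprocess.check_call([
    "java", "-jar", "RankLib-2.8.jar",
    "-ranker", "6",
    "-train", "sample_judgments_wfeatures.txt",
    "-save", "model.txt",
])

# upload the serialized model against the movie_features feature set
with open("model.txt") as f:
    definition = f.read()

requests.post("%s/_ltr/_featureset/movie_features/_createmodel" % ES,
              json={"model": {"name": "test_6",
                              "model": {"type": "model/ranklib",
                                        "definition": definition}}})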

Search using the model (search.py)

See what sort of search results you get! In search.py you'll see we execute the sltr query referring to a test_N model in the rescore phase. By default test_6 is used (corresponding to LambdaMART), but you can change which model is used at the command line.
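A minimal sketch of such a rescore search via the Python client (the equivalent Kibana console request appears later in this thread; the title/overview fields come from the tmdb mapping used in the demo):

import sys
from elasticsearch import Elasticsearch

es = Elasticsearch()  # assumed local cluster on localhost:9200

keywords = sys.argv[1] if len(sys.argv) > 1 else "rambo"
model = sys.argv[2] if len(sys.argv) > 2 else "test_6"

body = {
    "query": {"multi_match": {"query": keywords, "fields": ["title", "overview"]}},
    "rescore": {
        "query": {
            "rescore_query": {
                "sltr": {"params": {"keywords": keywords}, "model": model}
            }
        }
    },
}
for hit in es.search(index="tmdb", body=body)["hits"]["hits"]:
    print(hit["_source"]["title"])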

Search with default LambdaMART:

python search.py rambo

Try a different model:

python search.py rambo test_8
GenweiWu commented 6 years ago

Hands-on walkthrough

Setup guide

o19s/elasticsearch-learning-to-rank · GitHub


Preparation (the plugin version must match the Elasticsearch version, or installation will fail)


Installing X-Pack for Elasticsearch

1. Overview

X-Pack is an Elasticsearch extension that bundles security, alerting, monitoring, graph, and reporting features into one easy-to-install package. Although X-Pack is designed to work seamlessly as a whole, you can easily enable or disable individual features. It is commercial software.

2. Installation instructions

3. After installation

Installing X-Pack for Kibana

See "Installing X-Pack for Elasticsearch" above.


Installing the LTR plugin for Elasticsearch

1. Installation guide

o19s/elasticsearch-learning-to-rank · GitHub

2. Manual installation:

D:\2222\smartSearch\elasticsearch-6.1.2\bin>elasticsearch-plugin install file:///E:/software/elastic/6.1.2/ltr-1.0.0-es6.1.2.zip

After installation, list the installed plugins:

D:\2222\smartSearch\elasticsearch-6.1.2\bin>elasticsearch-plugin list
ltr
x-pack

Download the TMDB Data & Ranklib Jar

1. Run the script to download them:

$ python prepare.py
GET http://es-learn-to-rank.labs.o19s.com/tmdb.json
GET http://es-learn-to-rank.labs.o19s.com/RankLib-2.8.jar

2. If the script download fails, you can fetch the files manually (the URLs can be read from the script), e.g. with the sketch below.
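For example, a tiny Python 3 fallback that fetches the same two URLs printed by prepare.py:

import urllib.request

for url in ("http://es-learn-to-rank.labs.o19s.com/tmdb.json",
            "http://es-learn-to-rank.labs.o19s.com/RankLib-2.8.jar"):
    urllib.request.urlretrieve(url, url.rsplit("/", 1)[-1])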

1. Issue

plugin:elasticsearch Authentication Exception

Solution:

In your Kibana installation in /config/kibana.yml you have to set the username and password that Kibana should use to access Elasticsearch.

2. Issue

action [ltr:featurestore/data] is unauthorized for user [elastic]

Solution:

set xpack.security.enabled=false in kibana.yml and elasticsearch.yml

test.py

import os

# `java -version` writes to stderr, so redirect it to capture the output.
# This reported 1.6 even though JAVA_HOME points to 1.8.
print(os.popen("java -version 2>&1").read())
print(os.popen("echo %java_home%").read())

# Training fails because the 1.6 JRE on the PATH is used to run RankLib.
print(os.popen('java -jar RankLib-2.8.jar -ranker 0 -train sample_judgments_wfeatures.txt -save model.txt -frate 1.0').read())

Solution: In Control Panel > Programs > Java there was indeed a JDK 1.6 installed; uninstalling JDK 1.6 from Programs resolved it.



GenweiWu commented 6 years ago

Elasticsearch username and password

If this step asks for a username and password, install the X-Pack plugin first:

python prepare_xpack.py <xpack admin username>

Personally, I think we can actually do without X-Pack entirely.

GenweiWu commented 6 years ago

Corresponding console tests

The matching requests, tested in Kibana's Dev Tools console:

GET /tmdb/_search
GET /tmdb/movie/_search

GET /tmdb/_mapping
GET /tmdb/movie/_mapping

GET /tmdb/movie/_search
{
  "_source": [
    "title",
    "overview"
  ],
  "query": {
    "multi_match": {
      "query": "iron man",
      "fields": [
        "title",
        "overview"
      ]
    }
  },
  "rescore": {
    "query": {
      "rescore_query": {
        "sltr": {
          "params": {
            "keywords": "iron man"
          },
          "model": "test_6"
        }
      }
    }
  }
}

GET /_ltr/_featureset/movie_features

GET /tmdb/_search
{
  "size": 100,
  "query": {
    "bool": {
      "must": [
        {
          "terms": {
            "_id": [
              "7555",
              "1370",
              "1369",
              "1368",
              "136278",
              "102947",
              "13969",
              "61645",
              "14423",
              "54156"
            ]
          }
        }
      ],
      "should": [
        {
          "sltr": {
            "params": {
              "keywords": "rambo"
            },
            "_name": "logged_featureset",
            "featureset": "movie_features"
          }
        }
      ]
    }
  },
  "ext": {
    "ltr_log": {
      "log_specs": {
        "name": "main",
        "named_query": "logged_featureset",
        "missing_as_zero": true
      }
    }
  }
}
GenweiWu commented 6 years ago

How it works

An aside: X Pack

python prepare_xpack.py <xpack admin username>

This POSTs some X-Pack user and role information (skipped here).


Download the TMDB Data & Ranklib Jar

python prepare.py

As the name suggests, this simply downloads two files: tmdb.json and Ranklib.jar.


Index to Elasticsearch

python indexMlTmdb.py

  1. Delete the tmdb index and recreate it:

    index="tmdb"
    es.indices.delete(index, ignore=[400, 404])
    es.indices.create(index, body=settings)
  2. POST the data (bulk index the movie documents; a sketch follows below)
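A minimal sketch of step 2, assuming the elasticsearch-py bulk helper and that tmdb.json is a mapping of movie id to document (indexMlTmdb.py's actual batching may differ):

import json
from elasticsearch import Elasticsearch
from elasticsearch.helpers import bulk

es = Elasticsearch()  # assumed local cluster

# assuming tmdb.json is a mapping of movie id -> movie document
with open("tmdb.json") as f:
    movies = json.load(f)

bulk(es, ({"_index": "tmdb", "_type": "movie", "_id": movie_id, "_source": movie}
          for movie_id, movie in movies.items()))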


TLDR

python train.py

DELETE /_ltr
PUT /_ltr

step 1

# qid:1: rambo
# qid:2: rocky
# qid:3: bullwinkle

This yields:

{1: 'rambo', 2: 'rocky', 3: 'bullwinkle'}

step 2

4   qid:1 # 7555    Rambo

This yields:

docId = {str} '7555'
grade = {int} 4
keywords = {str} 'rambo'
qid = {int} 1

The judgments are then grouped by qid, as sketched below.
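A small illustration of that grouping (rows are shaped like the step 2 debug output above; the second grade is just a placeholder):

from collections import defaultdict

parsed = [
    {"grade": 4, "qid": 1, "docId": "7555", "keywords": "rambo"},
    {"grade": 0, "qid": 1, "docId": "1368", "keywords": "rambo"},
]

by_qid = defaultdict(list)
for row in parsed:
    by_qid[row["qid"]].append(row)

# each qid then drives one feature-logging query over its docIds (step 3)
print([row["docId"] for row in by_qid[1]])   # ['7555', '1368']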

step 3: each qid's docIds are collected into one set and queried together

The query:
GET /tmdb/_search
{
  "size": 100,
  "query": {
    "bool": {
      "must": [
        {
          "terms": {
            "_id": [
              "7555",
              "1370",
              "1369",
              "1368",
              "136278",
              "102947",
              "13969",
              "61645",
              "14423",
              "54156"
            ]
          }
        }
      ],
      "should": [
        {
          "sltr": {
            "params": {
              "keywords": "rambo"
            },
            "_name": "logged_featureset",
            "featureset": "movie_features"
          }
        }
      ]
    }
  },
  "ext": {
    "ltr_log": {
      "log_specs": {
        "name": "main",
        "named_query": "logged_featureset",
        "missing_as_zero": true
      }
    }
  }
}

The feature values are then computed and finally written to the file sample_judgments_wfeatures.txt:

docId = {str} '7555'
features = {list} <class 'list'>: [12.318446, 10.573845]
 0 = {float} 12.318446
 1 = {float} 10.573845
grade = {int} 4
keywords = {str} 'rambo'
qid = {int} 1

These scores come from the same feature-logging query already shown in step 3.
GenweiWu commented 6 years ago

Combining filters

https://www.elastic.co/guide/cn/elasticsearch/guide/cn/combining-filters.html
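For reference, a minimal example of combining a scored clause with a non-scoring filter in a bool query (the vote_average field is just an illustrative TMDB field, not necessarily in the demo mapping):

from elasticsearch import Elasticsearch

es = Elasticsearch()  # assumed local cluster

query = {
    "query": {
        "bool": {
            "must": [{"match": {"title": "rambo"}}],              # scored clause
            "filter": [{"range": {"vote_average": {"gte": 5}}}],  # non-scoring filter
        }
    }
}
print(es.search(index="tmdb", body=query)["hits"]["total"])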