Elasticsearch: A guide to building custom analyzers

In this post we will walk through the different built-in character filters, tokenizers, and token filters, and show how to combine them into a custom analyzer that fits our needs. For more background on analyzers, see the detailed companion article on analyzers.

Why do we need custom analyzers?

You can create a custom analyzer that meets your specific needs by combining character filters, a tokenizer, and token filters in whatever way you require. This makes text processing highly flexible and customizable.

As we have seen, an analyzer in Elasticsearch consists of three parts. Below we will look at the different built-in components for each of them.

Installation

To make today's tests easier, we will install Elasticsearch and Kibana without any security configuration. You can refer to the article "Elasticsearch: How to run Elasticsearch 8.x on Docker for local development".

We also need to install the required Python package:

pip3 install elasticsearch

$ pip3 list | grep elasticsearch
elasticsearch                            8.12.0
rag-elasticsearch                        0.0.1        /Users/liuxg/python/rag-elasticsearch/my-app/packages/rag-elasticsearch

Testing

First we create a client that connects to Elasticsearch:

from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")
print(es.info())

For more on how to connect to Elasticsearch, see "Elasticsearch: Everything you need to know about using Elasticsearch in Python - 8.x".

Character filters

HTML Strip Char Filter (html_strip)

Strips HTML elements from the text and decodes HTML entities.

response = es.indices.analyze(
    body={
        "char_filter": ["html_strip"],
        "tokenizer": "standard",
        "text": "<p>Hello <b>World</b>! This is <a href='http://example.com'>Elasticsearch</a>.</p>"
    }
)

# Extract tokens
[token['token'] for token in response['tokens']]

Pattern Replace Char Filter (pattern_replace)

Uses a regular expression to match characters or character sequences and replace them. In the example below, we extract the name parts from a username:

response = es.indices.analyze(
    body={
        "char_filter": [
            {
                "type": "pattern_replace",
                "pattern": "[-_@.]",  # matches hyphens, underscores, at signs and periods
                "replacement": " "
            }
        ],
        "tokenizer": "standard",
        "text": "liu_xiao_guo"
    }
)

# Extract tokens
[token['token'] for token in response['tokens']]

Mapping Char Filter (mapping)

Lets you define custom mappings of characters or character sequences. For example, you can define a mapping that replaces "&" with "and", or "€" with "euro".

response = es.indices.analyze(
    body={
        "tokenizer": "standard",
        "char_filter": [
            {
                "type": "mapping",
                "mappings": [
                    "@gmail.com=>",    # Replace @gmail.com with nothing
                    "$=>dollar",       # Replace $ with dollar
                ]
            }
        ],
        "text": "xiaoguo.liu@gmail.com gives me $"
    }
)

# Extract tokens
[token['token'] for token in response['tokens']]

Tokenizers

Standard Tokenizer (standard)

The standard tokenizer splits text into terms on word boundaries, as defined by the Unicode Text Segmentation algorithm. It removes most punctuation and is the best choice for most languages.

response = es.indices.analyze(
    body={
        "tokenizer": "standard",
        "text": "The 2 QUICK Brown-Foxes, jumps_over the lazy-dog's bone."
    }
)

# Extract tokens
[token['token'] for token in response['tokens']]

Letter Tokenizer (letter)

The letter tokenizer splits text into terms whenever it encounters a character that is not a letter.

response = es.indices.analyze(
    body={
        "tokenizer": "letter",
        "text": "The 2 QUICK Brown-Foxes, jumps_over the lazy-dog's bone."
    }
)

# Extract tokens
[token['token'] for token in response['tokens']]

Lowercase Tokenizer (lowercase)

The lowercase tokenizer behaves like the letter tokenizer, but it also lowercases all terms.

response = es.indices.analyze(
    body={
        "tokenizer": "lowercase",
        "text": "The 2 QUICK Brown-Foxes, jumps_over the lazy-dog's bone."
    }
)

# Extract tokens
[token['token'] for token in response['tokens']]

Whitespace Tokenizer (whitespace)

The whitespace tokenizer splits text into terms whenever it encounters a whitespace character.

response = es.indices.analyze(
    body={
        "tokenizer": "whitespace",
        "text": "The 2 QUICK Brown-Foxes, jumps_over the lazy-dog's bone."
    }
)

# Extract tokens
[token['token'] for token in response['tokens']]

Classic Tokenizer (classic)

The classic tokenizer is a grammar-based tokenizer for English.

response = es.indices.analyze(
    body={
        "tokenizer": "classic",
        "text": "The 2 QUICK Brown-Foxes, jumps_over the lazy-dog's bone."
    }
)

# Extract tokens
[token['token'] for token in response['tokens']]

UAX URL Email Tokenizer (uax_url_email)

The uax_url_email tokenizer is like the standard tokenizer, except that it recognizes URLs and email addresses as single tokens.

response = es.indices.analyze(
    body={
        "tokenizer": "uax_url_email",
        "text": "visit https://elasticstack.blog.csdn.net to get the best materials to learn Elastic Stack"
    }
)

# Extract tokens
[token['token'] for token in response['tokens']]

N-Gram Tokenizer (ngram)

The ngram tokenizer first breaks text into words whenever it encounters one of a list of specified characters (for example whitespace or punctuation), and then emits n-grams of each word: a sliding window of consecutive letters, e.g. Quick → [qu, ui, ic, ck]. The ngram tokenizer is especially useful in scenarios where partial matching of terms matters. It works best for autocomplete and search-as-you-type features, and for handling typos or matching substrings within words.

response = es.indices.analyze(
    body={
        "tokenizer": {
            "type": "ngram",
            "min_gram": 3,
            "max_gram": 4
        },
        "text": "Hello Xiaoguo"
    }
)

# Extract tokens
[token['token'] for token in response['tokens']]

Edge N-Gram Tokenizer (edge_ngram)

The edge_ngram tokenizer in Elasticsearch breaks words into smaller chunks, or n-grams, anchored at the beginning ("edge") of each word. It emits tokens within the specified length range, giving you prefixes of each word up to a given size.

response = es.indices.analyze(
    body={
        "tokenizer": {
            "type": "edge_ngram",
            "min_gram": 4,
            "max_gram": 5,
            "token_chars": ["letter", "digit"]
        },
        "text": "The 2 QUICK Brown-Foxes, jumps_over the lazy-dog's bone."
    }
)

# Extract tokens
[token['token'] for token in response['tokens']]

Keyword Tokenizer (keyword)

The keyword tokenizer accepts whatever text it is given and outputs the exact same text as a single term.

response = es.indices.analyze(
    body={
        "tokenizer": "keyword",
        "text": "The 2 QUICK Brown-Foxes, jumps_over the lazy-dog's bone."
    }
)

# Extract tokens
[token['token'] for token in response['tokens']]

Pattern Tokenizer (pattern)

The pattern tokenizer uses a regular expression either to split text into terms whenever it matches a word separator, or to capture the matching text as terms.

response = es.indices.analyze(
    body={
        "tokenizer": {
            "type": "pattern",
            "pattern": "_+"
        },
        "text": "hello_world_from_elasticsearch"
    }
)

# Extract tokens
[token['token'] for token in response['tokens']]

Path Tokenizer (path_hierarchy)

It breaks a path into tokens at each path separator.

response = es.indices.analyze(
    body={
        "tokenizer": "path_hierarchy",
        "text": "/usr/local/bin/python"
    }
)

# Extract tokens
[token['token'] for token in response['tokens']]

Token filters

Make sure you always pass the filters as a list, even if there is only one, and keep in mind that the order in which the filters are applied matters a great deal.
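
As a quick illustration of why the order matters, here is a minimal sketch that runs the same text through the stop and lowercase filters in both orders. The default stop filter is case-sensitive, so "The" is only removed when lowercasing happens first.

# With ["stop", "lowercase"], "The" survives: the case-sensitive stop filter
# runs before the token is lowercased.
response = es.indices.analyze(
    body={
        "tokenizer": "standard",
        "filter": ["stop", "lowercase"],
        "text": "The QUICK brown fox"
    }
)
print([token['token'] for token in response['tokens']])  # expect ['the', 'quick', 'brown', 'fox']

# With ["lowercase", "stop"], "the" is removed.
response = es.indices.analyze(
    body={
        "tokenizer": "standard",
        "filter": ["lowercase", "stop"],
        "text": "The QUICK brown fox"
    }
)
print([token['token'] for token in response['tokens']])  # expect ['quick', 'brown', 'fox']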

Apostrophe

Strips all characters after an apostrophe, including the apostrophe itself.

response = es.indices.analyze(
    body={
        "filter": ["apostrophe"],
        "tokenizer": "standard",
        "text": "The 2 QUICK Brown-Foxes, jumps_over the lazy-dog's bone."
    }
)

# Extract tokens
[token['token'] for token in response['tokens']]

Lowercase Filter

Converts all tokens to lowercase.

response = es.indices.analyze(
    body={
        "tokenizer": "standard",
        "filter": ["lowercase"],
        "text": "The 2 QUICK Brown-Foxes, jumps_over the lazy-dog's bone."
    }
)

# Extract tokens
[token['token'] for token in response['tokens']]

Uppercase Filter

Converts all tokens to uppercase.

response = es.indices.analyze(
    body={
        "tokenizer": "standard",
        "filter": ["uppercase"],
        "text": "The 2 QUICK Brown-Foxes, jumps_over the lazy-dog's bone."
    }
)

# Extract tokens
[token['token'] for token in response['tokens']]

Trim Filter

Removes leading and trailing whitespace from each token in the stream.

# Analyze the text using the keyword tokenizer with lowercase and trim filters
response = es.indices.analyze(
    body={
        "tokenizer": "keyword",
        "filter": [
            "lowercase",
            "trim"
        ],
        "text": " The 2 QUICK Brown-Foxes, jumps_over the lazy-dog's bone. "
    }
)

# Extract tokens
[token['token'] for token in response['tokens']]

ASCII Folding Filter (asciifolding)

The asciifolding filter removes diacritical marks from tokens. For example, Türkiye becomes Turkiye.

# Analyze the text using the asciifolding filter
response = es.indices.analyze(
    body={
        "filter": ["asciifolding"],
        "text": "Türkiye"
    }
)

# Extract tokens
[token['token'] for token in response['tokens']]

Synonym Filter

The synonym token filter makes it easy to handle synonyms during analysis.

response = es.indices.analyze(
    body={
        "tokenizer": "standard",
        "filter": [
            "lowercase",
            {
                "type": "synonym",
                "synonyms": ["jumps_over => leap"]
            }
        ],
        "text": "The 2 QUICK Brown-Foxes, jumps_over the lazy-dog's bone."
    }
)

# Extract tokens
[token['token'] for token in response['tokens']]

Synonym Graph Filter

Works best for multi-word synonyms.

response = es.indices.analyze(
    body={
        "tokenizer": "standard",
        "filter": [
            "lowercase",
            {
                "type": "synonym_graph",
                "synonyms": ["NYC, New York City", "LA, Los Angeles"]
            }
        ],
        "text": "Flight from LA to NYC has been delayed by an hour"
    }
)

# Extract tokens
[token['token'] for token in response['tokens']]

Keep in mind that the output does not intuitively show the internal graph structure, but Elasticsearch uses that structure during search queries.

Match phrase queries, which normally do not match synonyms, work perfectly with a synonym graph.
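
To make that concrete, here is a minimal sketch assuming a throwaway index (the name synonym_demo, the message field, and the analyzer names are illustrative, not from the original code): the synonym_graph filter is attached to a search analyzer, and a match_phrase query for "NYC" finds a document that only contains "New York City".

# synonym_graph applied at search time so that phrase queries match multi-word synonyms
es.indices.create(
    index="synonym_demo",
    body={
        "settings": {
            "analysis": {
                "filter": {
                    "city_synonyms": {
                        "type": "synonym_graph",
                        "synonyms": ["NYC, New York City", "LA, Los Angeles"]
                    }
                },
                "analyzer": {
                    "synonym_search": {
                        "type": "custom",
                        "tokenizer": "standard",
                        "filter": ["lowercase", "city_synonyms"]
                    }
                }
            }
        },
        "mappings": {
            "properties": {
                "message": {
                    "type": "text",
                    "analyzer": "standard",
                    "search_analyzer": "synonym_search"
                }
            }
        }
    }
)

es.index(index="synonym_demo", id=1,
         body={"message": "Flight from Los Angeles to New York City delayed"},
         refresh=True)

# The phrase "NYC" matches the multi-word synonym "New York City" in the document
resp = es.search(index="synonym_demo",
                 body={"query": {"match_phrase": {"message": "NYC"}}})
print(resp["hits"]["total"]["value"])  # expect 1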

Stemmer Filter

A stemming filter that supports stemming in several languages.

response = es.indices.analyze(
    body={
        "tokenizer": "standard",
        "filter": [
            {
                "type": "stemmer",
                "language": "english"
            }
        ],
        "text": "candies, ladies, plays, playing, ran, running, dresses"
    }
)

# Extract tokens
[token['token'] for token in response['tokens']]

KStem Filter

The kstem filter combines algorithmic stemming with a built-in dictionary. Compared with other English stemmers such as the porter_stem filter, the kstem filter stems less aggressively.

response = es.indices.analyze(
    body={
        "tokenizer": "standard",
        "filter": [
            "kstem"
        ],
        "text": "candies, ladies, plays, playing, ran, running"
    }
)

# Extract tokens
[token['token'] for token in response['tokens']]

Porter Stem Filter

Tends to stem more aggressively than other English stemmer filters, such as the kstem filter.

response = es.indices.analyze(
    body={
        "tokenizer": "whitespace",
        "filter": [
            {
                "type": "pattern_replace",
                "pattern": "[-.,]",   # strip hyphens, periods and commas from the tokens
                "replacement": ""
            },
            "porter_stem"             # porter_stem is English-only and takes no language option
        ],
        "text": "candies, ladies, plays, playing, ran, running, dresses"
    }
)

# Extract tokens
[token['token'] for token in response['tokens']]

Snowball Filter

A filter that stems words using a Snowball-generated stemmer. It works for a range of languages such as French, German, Russian, and Spanish.

response = es.indices.analyze(
    body={
        "tokenizer": "whitespace",
        "filter": [
            {
                "type": "snowball",
                "language": "English"
            }
        ],
        "text": "candies, ladies, plays, playing, ran, running, dresses"
    }
)

# Extract tokens
[token['token'] for token in response['tokens']]

Stemmer Override

Overrides stemming algorithms by applying custom mappings; the mapped terms are then protected from being modified by stemmers. It must be placed before any stemmer filters.

response = es.indices.analyze(
    body={
        "tokenizer": "standard",
        "filter": [
            {
                "type": "stemmer_override",
                "rules": [
                    "running, runs => run",
                    "stemmer => stemmer"
                ]
            }
        ],
        "text": "candies, ladies, plays, playing, ran, running, dresses"
    }
)

# Extract tokens
[token['token'] for token in response['tokens']]

For more usage details, see Stemmer override token filter | Elasticsearch Guide [8.12] | Elastic.
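
As a follow-up to the note above, here is a minimal sketch of placing stemmer_override before a stemmer: "running" and "runs" are mapped to "run" and then protected, while the remaining words are still stemmed normally.

# stemmer_override placed before the stemmer protects the overridden terms
response = es.indices.analyze(
    body={
        "tokenizer": "standard",
        "filter": [
            {
                "type": "stemmer_override",
                "rules": ["running, runs => run"]
            },
            {
                "type": "stemmer",
                "language": "english"
            }
        ],
        "text": "candies, ladies, plays, playing, ran, running, dresses"
    }
)

[token['token'] for token in response['tokens']]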

Keyword Marker Filter

Marks certain terms as keywords so that they are not modified by other filters such as stemmers.

response = es.indices.analyze(
    body={
        "tokenizer": "whitespace",
        "filter": [
            {
                "type": "keyword_marker",
                "keywords": ["running"]   # Mark 'running' as a keyword so it is not stemmed
            },
            {
                "type": "pattern_replace",
                "pattern": "[-.,]",
                "replacement": ""
            },
            "porter_stem"
        ],
        "text": "candies, ladies, plays, playing, runs, running"
    }
)

# Extract tokens
[token['token'] for token in response['tokens']]

Stop Filter

Removes stop words (common words that are usually ignored) from the token stream, for example: if, of, is, am, are, the. You can use the default stop word list or provide a custom one.

# Analyze the text using the stop filter with a custom stop word list
response = es.indices.analyze(
    body={
        "tokenizer": "standard",
        "filter": [
            {
                "type": "stop",
                "stopwords": ["is", "am", "are", "of", "if", "a", "the"],
                "ignore_case": True
            }
        ],
        "text": "i am sachin. I Am software engineer."
    }
)

# Extract tokens
[token['token'] for token in response['tokens']]

Unique Filter

Removes duplicate tokens from the stream.

response = es.indices.analyze(
    body={
        "tokenizer": "whitespace",
        "filter": [
            "lowercase", "unique"
        ],
        "text": "Happy happy joy joy"
    }
)

# Extract tokens
[token['token'] for token in response['tokens']]

Length Filter

Removes tokens that are shorter or longer than the specified character lengths.

response = es.indices.analyze(
    body={
        "tokenizer": "standard",
        "filter": [
            "lowercase",
            {
                "type": "length",
                "min": 1,
                "max": 4
            }
        ],
        "text": "The 2 QUICK Brown-Foxes, jumps_over the lazy-dog's bone."
    }
)

# Extract tokens
[token['token'] for token in response['tokens']]

NGram Token Filter

Forms n-grams of the specified lengths from each token. Best for autocomplete or search-as-you-type, and for searches where users may make mistakes or typos.

response = es.indices.analyze(
    body={
        "tokenizer": "whitespace",
        "filter": [
            {
                "type": "ngram",
                "min_gram": 3,
                "max_gram": 4
            }
        ],
        "text": "Skinny blue jeans by levis"
    }
)

# Extract tokens
[token['token'] for token in response['tokens']]

Edge NGram Token Filter

Forms n-grams of the specified lengths from the beginning of each token. Best for autocomplete or search-as-you-type; it works very well for the partial word matching that is common in search suggestions (see the index-level sketch after the example below).

response = es.indices.analyze(
    body={
        "tokenizer": "whitespace",
        "filter": [
            {
                "type": "edge_ngram",
                "min_gram": 3,
                "max_gram": 4
            }
        ],
        "text": "Skinny blue jeans by levis"
    }
)

# Extract tokens
[token['token'] for token in response['tokens']]
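
Because the typical use of edge_ngram is at index time, here is a minimal sketch of the usual autocomplete pattern (the index name autocomplete_demo and the filter, analyzer, and field names are assumptions for illustration): the edge_ngram filter is used in the index analyzer while the search side uses the plain standard analyzer, so a prefix such as "jea" matches "jeans".

# Index-time edge_ngram analyzer paired with a plain search analyzer --
# the usual autocomplete setup (names below are made up for the demo).
es.indices.create(
    index="autocomplete_demo",
    body={
        "settings": {
            "analysis": {
                "filter": {
                    "autocomplete_edge": {
                        "type": "edge_ngram",
                        "min_gram": 2,
                        "max_gram": 10
                    }
                },
                "analyzer": {
                    "autocomplete_index": {
                        "type": "custom",
                        "tokenizer": "standard",
                        "filter": ["lowercase", "autocomplete_edge"]
                    }
                }
            }
        },
        "mappings": {
            "properties": {
                "title": {
                    "type": "text",
                    "analyzer": "autocomplete_index",
                    "search_analyzer": "standard"
                }
            }
        }
    }
)

es.index(index="autocomplete_demo", id=1,
         body={"title": "Skinny blue jeans by levis"}, refresh=True)

# The prefix "jea" matches the edge n-grams produced for "jeans" at index time
resp = es.search(index="autocomplete_demo",
                 body={"query": {"match": {"title": "jea"}}})
print([hit["_source"]["title"] for hit in resp["hits"]["hits"]])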

Shingle Filter

Adds shingles, or word n-grams, to the token stream by concatenating adjacent tokens. By default, the shingle token filter outputs two-word shingles. It is mostly used to improve the performance of phrase queries.

response = es.indices.analyze(
    body={
        "tokenizer": "whitespace",
        "filter": [
            {
                "type": "shingle",
                "min_shingle_size": 2,
                "max_shingle_size": 3
            }
        ],
        "text": "Welcome to use Elastic Stack"
    }
)

[token['token'] for token in response['tokens']]

Creating a custom analyzer

Here is the text, followed by the desired output:

text = "The 2 QUICK Brown-Foxes, jumps_over the lazy-dog's bone."

# Desired output
['2', 'quick', 'brown', 'fox', 'jump', 'over', 'lazy', 'dog', 'bone']

What the analyzer needs to do:

  • Remove all symbols: hyphens and underscores.
  • Remove stop words.
  • Lowercase all the text.
  • Remove apostrophes.
  • Apply stemming.
response = es.indices.analyze(
    body={
        "char_filter": [
            {
                "type": "mapping",
                "mappings": [
                    "- => \\u0020",  # replace hyphens with a blank space
                    "_ => \\u0020",  # replace underscores with a blank space
                ]
            }
        ],
        "tokenizer": "standard",
        "filter": ["apostrophe", "lowercase", "stop", "porter_stem"],
        "text": "The 2 QUICK Brown-Foxes, jumps_over the lazy-dog's bone."
    }
)

# Extract and print tokens
tokens = [token['token'] for token in response['tokens']]
tokens

One thing to note here is the order. Whatever order you write in the request, Elasticsearch always processes the stages in the same internal order: char_filter > tokenizer > token_filter. Within the char_filter and token filter blocks, however, the order in which you list the entries does matter.

Adding the custom analyzer to an index

To keep things simple, it is best to create a new index and configure the analyzer there according to your requirements. Here is how to set up the analyzer; note that a custom char filter has to be declared under analysis.char_filter and then referenced by name from the analyzer.

settings = {
    "settings": {
        "analysis": {
            "char_filter": {
                # custom char filters are defined here and referenced by name
                # inside the analyzer definition
                "my_char_filter": {
                    "type": "mapping",
                    "mappings": [
                        "- => \\u0020",
                        "_ => \\u0020",
                    ]
                }
            },
            "analyzer": {
                "my_custom_analyzer": {
                    "type": "custom",
                    "char_filter": ["my_char_filter"],
                    "tokenizer": "standard",
                    "filter": ["lowercase", "apostrophe", "stop", "porter_stem"],
                }
            }
        },
        "index": {
            "number_of_shards": 1,
            "number_of_replicas": 0,
            "routing.allocation.include._tier_preference": "data_hot"
        },
    },
    "mappings": {
        "properties": {
            "title": {"type": "text", "analyzer": "my_custom_analyzer"},
            "brand": {"type": "text", "analyzer": "my_custom_analyzer", "fields": {"raw": {"type": "keyword"}}},
            "updated_time": {"type": "date"}
        }
    }
}

response = es.indices.create(index="trial_index", body=settings)
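
Once the index exists, the same _analyze API can be pointed at it to verify the custom analyzer by name:

# Verify the analyzer that was registered with trial_index
response = es.indices.analyze(
    index="trial_index",
    body={
        "analyzer": "my_custom_analyzer",
        "text": "The 2 QUICK Brown-Foxes, jumps_over the lazy-dog's bone."
    }
)

[token['token'] for token in response['tokens']]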

You can find all of the code at: github.com/liu-xiao-gu...
