Elasticsearch Analyzer Plugin Development (Elasticsearch Configuration: Verifying the Chinese Analysis Plugin)
1. Create an Index
First list the existing indices; at this point there are none:
[elk@linux1 ~]$ curl -X GET 'http://localhost:9200/_cat/indices?v'
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
Create an Index named index1:
[elk@linux1 ~]$ curl -XPUT http://localhost:9200/index1
{"acknowledged":true,"shards_acknowledged":true,"index":"index1"}[elk@linux1 ~]$
[elk@linux1 ~]$
[elk@linux1 ~]$ curl -X GET 'http://localhost:9200/_cat/indices?v'
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
yellow open index1 9jeo0t0eSh-Lw3Ntb73g5A 1 1 0 0 230b 230b
Create an Index named accounts:
[elk@linux1 ~]$ curl -X PUT 'localhost:9200/accounts'
{"acknowledged":true,"shards_acknowledged":true,"index":"accounts"}[elk@linux1 ~]$
[elk@linux1 ~]$
[elk@linux1 ~]$ curl -X GET 'http://localhost:9200/_cat/indices?v'
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
yellow open index1 9jeo0t0eSh-Lw3Ntb73g5A 1 1 0 0 230b 230b
yellow open accounts nK6QbHUwS4CUvHJf43BwTA 1 1 0 0 230b 230b
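The mappings in the next step rely on the IK analyzer, so it may be worth confirming first that the plugin is actually installed on the node. A minimal check via the _cat API (the plugin name reported, typically analysis-ik, depends on how it was installed):
[elk@linux1 ~]$ curl -X GET 'http://localhost:9200/_cat/plugins?v'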
2. Create the mapping
curl -XPOST http://localhost:9200/index1/_mapping -H 'Content-type:application/json' -d'
{
"properties": {
"content": {
"type": "text",
"analyzer": "ik_max_word",
"search_analyzer": "ik_max_word"
}
}
}'
curl -XPOST 'localhost:9200/accounts/_mapping' -H 'Content-Type:application/json' -d '
{
"properties": {
"user": {
"type": "text",
"analyzer": "ik_max_word",
"search_analyzer": "ik_max_word"
},
"title": {
"type": "text",
"analyzer": "ik_max_word",
"search_analyzer": "ik_max_word"
},
"desc": {
"type": "text",
"analyzer": "ik_max_word",
"search_analyzer": "ik_max_word"
}
}
}'
[Note] There are three fields: user, title, and desc. All three hold Chinese content and are of type text, so a Chinese analyzer must be specified; the default analyzer, which is geared toward English, cannot be used here.
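[Note] To double-check that ik_max_word was applied to these fields, the mapping can be read back; output omitted here, this is only a verification step:
curl -X GET 'localhost:9200/accounts/_mapping?pretty'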
[Note] The two steps of creating the Index and the mapping (first PUT the index, then POST to _mapping) can be combined into a single PUT:
curl -X PUT 'localhost:9200/accounts' -H 'Content-Type:application/json' -d '
{
"mappings": {
"properties": {
"user": {
"type": "text",
"analyzer": "ik_max_word",
"search_analyzer": "ik_max_word"
},
"title": {
"type": "text",
"analyzer": "ik_max_word",
"search_analyzer": "ik_max_word"
},
"desc": {
"type": "text",
"analyzer": "ik_max_word",
"search_analyzer": "ik_max_word"
}
}
}
}'
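[Note] The single-step PUT above fails with resource_already_exists_exception if accounts was already created via the two-step approach; in that case delete the index first (this removes any data in it, so use with care):
curl -X DELETE 'localhost:9200/accounts'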
3. Data operations
In Elasticsearch, the tokenizer component is called an analyzer. We specify an analyzer for each field:
"user": {
"type": "text",
"analyzer": "ik_max_word",
"search_analyzer": "ik_max_word"
}
In the snippet above, analyzer is the analyzer applied to the field's text at index time, and search_analyzer is the analyzer applied to the search terms. The ik_max_word analyzer is provided by the IK plugin and performs the most fine-grained segmentation, producing the maximum number of tokens from the text.
Take the accounts Index as an example:
a) Add a record: send a PUT request to /<index>/_create/<id> to add a record to the Index. For example, the following request adds a person record to accounts:
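A quick way to see what ik_max_word produces is the _analyze API. A minimal sketch, reusing the desc value from the sample data below (the exact tokens depend on the IK dictionary in your installation):
curl -X POST 'localhost:9200/_analyze?pretty' -H 'Content-Type:application/json' -d '
{
"analyzer": "ik_max_word",
"text": "数据库管理"
}'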
curl -X PUT 'localhost:9200/accounts/_create/1' -H 'Content-Type:application/json' -d '
{
"user": "张三",
"title": "工程师",
"desc": "数据库管理"
}'
{
"_index": "accounts",
"_type": "_doc",
"_id": "1",
"_version": 1,
"result": "created",
"_shards": {
"total": 2,
"successful": 1,
"failed": 0
},
"_seq_no": 0,
"_primary_term": 1
}
The JSON object returned by the server includes information such as the Index, Type, Id, and Version. Add a second record:
curl -X PUT 'localhost:9200/accounts/_create/2' -H 'Content-Type:application/json' -d '
{
"user": "张三四",
"title": "工程师",
"desc": "中间件管理"
}'
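Either document can be fetched back by id to confirm it was indexed; output omitted here:
curl -X GET 'localhost:9200/accounts/_doc/1?pretty'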
b) Query records
curl -XPOST http://localhost:9200/accounts/_search -H 'Content-Type:application/json' -d'
{
"query" : { "match" : { "user" : "张三" }},
"highlight" : {
"pre_tags" : ["<tag1>", "<tag2>"],
"post_tags" : ["</tag1>", "</tag2>"],
"fields" : {
"user" : {}
}
}
}
'
{
"took": 328,
"timed_out": false,
"_shards": {
"total": 1,
"successful": 1,
"skipped": 0,
"failed": 0
},
"hits": {
"total": {
"value": 1,
"relation": "eq"
},
"max_score": 0.8630463,
"hits": [
{
"_index": "accounts",
"_type": "_doc",
"_id": "1",
"_score": 0.8630463,
"_source": {
"user": "张三",
"title": "工程师",
"desc": "数据库管理"
},
"highlight": {
"user": [
"<tag1>张</tag1><tag1>三</tag1>"
]
}
}
]
}
}
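The same kind of highlighted match query works against the other analyzed fields, for example desc. This is a sketch only; the hits depend on the documents indexed above and on how IK tokenizes the text:
curl -XPOST http://localhost:9200/accounts/_search -H 'Content-Type:application/json' -d'
{
"query" : { "match" : { "desc" : "管理" }},
"highlight" : {
"fields" : {
"desc" : {}
}
}
}
'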