Detecting Plagiarism with Elasticsearch (Part 2)

In my previous article, "Detecting Plagiarism with Elasticsearch (Part 1)", I showed how to detect plagiarized articles. This matters in practice: my own articles on CSDN are frequently quoted or copied, sometimes without any attribution, which is unfair to the author, and the technique is just as relevant for many other blog sites. However, the setup described in that article may not run smoothly for every developer. In today's article, I deliberately use a self-managed deployment together with a Jupyter notebook for the demonstration, so that developers can run everything from start to finish, step by step.

Installation

Install Elasticsearch and Kibana

If you do not yet have your own Elasticsearch and Kibana, please refer to my earlier articles on installing them:

When installing, please choose Elastic Stack 8.x. During installation, the output includes security details such as the password for the elastic superuser, which we will need later:

To be able to upload models, we need at least a Platinum subscription or an active trial.
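If you are still on the free Basic license, one way to activate a 30-day trial is the start-trial API. Below is a minimal sketch using curl, assuming the same elastic password and certificate path used elsewhere in this article:

```bash
# Activate a 30-day trial license (adjust the password and certificate path to your setup)
curl -X POST -u "elastic:o6G_pvRL=8P*7on+o6XH" \
  --cacert /Users/liuxg/elastic/elasticsearch-8.11.0/config/certs/http_ca.crt \
  "https://localhost:9200/_license/start_trial?acknowledge=true"
```

You can also manage the license from Stack Management in Kibana.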

Upload the models

Note: if you upload the models from the command line as shown here, you do not need to upload them again in the Python code below and can skip those steps.

We can refer to the earlier article "Elasticsearch: using an NLP question-answering model to talk to your favorite Christmas songs". We use the following command to upload the OpenAI detector model:

```bash
eland_import_hub_model --url https://elastic:o6G_pvRL=8P*7on+o6XH@localhost:9200 \
	--hub-model-id roberta-base-openai-detector \
	--task-type text_classification \
	--ca-cert /Users/liuxg/elastic/elasticsearch-8.11.0/config/certs/http_ca.crt \
	--start
```

In the command above, adjust the certificate path and the Elasticsearch endpoint to match your own configuration.
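The eland_import_hub_model command ships with the eland Python package. If it is not installed yet, a minimal install (assuming pip and the PyTorch extras) looks like this:

```bash
# Install eland with its PyTorch/NLP dependencies
pip3 install 'eland[pytorch]'
```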

We can view the newly uploaded model in Kibana:
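Alternatively, the trained models API shows the same information from the command line. A sketch, again reusing this article's credentials and certificate path:

```bash
# List the trained models known to the cluster
curl -u "elastic:o6G_pvRL=8P*7on+o6XH" \
  --cacert /Users/liuxg/elastic/elasticsearch-8.11.0/config/certs/http_ca.crt \
  "https://localhost:9200/_ml/trained_models?pretty"
```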

Next, we install the text embedding model in the same way.

```bash
eland_import_hub_model --url https://elastic:o6G_pvRL=8P*7on+o6XH@localhost:9200 \
	--hub-model-id sentence-transformers/all-mpnet-base-v2 \
	--task-type text_embedding \
	--ca-cert /Users/liuxg/elastic/elasticsearch-8.11.0/config/certs/http_ca.crt \
	--start
```

To make it easier to follow along, the code for this article can be downloaded from the following address:

```bash
git clone https://github.com/liu-xiao-guo/elasticsearch-labs
```

The Jupyter notebook can be found at the following location:

```bash
$ pwd
/Users/liuxg/python/elasticsearch-labs/supporting-blog-content/plagiarism-detection-with-elasticsearch
$ ls
plagiarism_detection_es_self_managed.ipynb
```

Run the code

Next, we run the notebook. We first install the required Python packages:

```bash
pip3 install elasticsearch==8.11
pip3 -q install eland elasticsearch sentence_transformers transformers torch==2.1.0
```

Before running the code, we set the following environment variables in the shell from which we will launch the notebook:

```bash
export ES_USER="elastic"
export ES_PASSWORD="o6G_pvRL=8P*7on+o6XH"
export ES_ENDPOINT="localhost"
```

We also need to copy the Elasticsearch certificate into the current directory:

```bash
$ pwd
/Users/liuxg/python/elasticsearch-labs/supporting-blog-content/plagiarism-detection-with-elasticsearch
$ cp ~/elastic/elasticsearch-8.11.0/config/certs/http_ca.crt .
$ ls
http_ca.crt                                plagiarism_detection_es_self_managed.ipynb
plagiarism_detection_es.ipynb
```

Import the packages:

```python
from elasticsearch import Elasticsearch, helpers
from elasticsearch.client import MlClient
from eland.ml.pytorch import PyTorchModel
from eland.ml.pytorch.transformers import TransformerModel
from urllib.request import urlopen
import json
from pathlib import Path
import os
```

Connect to Elasticsearch

```python
elastic_user = os.getenv('ES_USER')
elastic_password = os.getenv('ES_PASSWORD')
elastic_endpoint = os.getenv("ES_ENDPOINT")

url = f"https://{elastic_user}:{elastic_password}@{elastic_endpoint}:9200"
client = Elasticsearch(url, ca_certs="./http_ca.crt", verify_certs=True)

print(client.info())
```

Upload the detector model

```python
hf_model_id = 'roberta-base-openai-detector'
tm = TransformerModel(model_id=hf_model_id, task_type="text_classification")

# Set the model ID as it is named in Elasticsearch
es_model_id = tm.elasticsearch_model_id()

# Download the model from Hugging Face
tmp_path = "models"
Path(tmp_path).mkdir(parents=True, exist_ok=True)
model_path, config, vocab_path = tm.save(tmp_path)

# Load the model into Elasticsearch
ptm = PyTorchModel(client, es_model_id)
ptm.import_model(model_path=model_path, config_path=None, vocab_path=vocab_path, config=config)

# Start the model
s = MlClient.start_trained_model_deployment(client, model_id=es_model_id)
s.body
```

We can check it in Kibana:
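If you prefer to verify from the notebook, the deployment state can be fetched through the same MlClient. A minimal sketch, assuming the model was started as above:

```python
# Confirm the detector deployment has reached the "started" state
stats = MlClient.get_trained_models_stats(client, model_id=es_model_id)
print(stats["trained_model_stats"][0]["deployment_stats"]["state"])
```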

Upload the text embedding model

```python
hf_model_id = 'sentence-transformers/all-mpnet-base-v2'
tm = TransformerModel(model_id=hf_model_id, task_type="text_embedding")

# Set the model ID as it is named in Elasticsearch
es_model_id = tm.elasticsearch_model_id()

# Download the model from Hugging Face
tmp_path = "models"
Path(tmp_path).mkdir(parents=True, exist_ok=True)
model_path, config, vocab_path = tm.save(tmp_path)

# Load the model into Elasticsearch
ptm = PyTorchModel(client, es_model_id)
ptm.import_model(model_path=model_path, config_path=None, vocab_path=vocab_path, config=config)

# Start the model
s = MlClient.start_trained_model_deployment(client, model_id=es_model_id)
s.body
```

We can check it in Kibana:

Create the source index

```python
client.indices.create(
    index="plagiarism-docs",
    mappings={
        "properties": {
            "title": {
                "type": "text",
                "fields": {
                    "keyword": {
                        "type": "keyword"
                    }
                }
            },
            "abstract": {
                "type": "text",
                "fields": {
                    "keyword": {
                        "type": "keyword"
                    }
                }
            },
            "url": {
                "type": "keyword"
            },
            "venue": {
                "type": "keyword"
            },
            "year": {
                "type": "keyword"
            }
        }
    }
)
```

We can check it in Kibana:
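The mapping can also be confirmed from the notebook with the standard indices API:

```python
# Sanity check: retrieve the mapping we just created
print(client.indices.get_mapping(index="plagiarism-docs"))
```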

Create the checker ingest pipeline

```python
client.ingest.put_pipeline(
    id="plagiarism-checker-pipeline",
    processors=[
        {
            "inference": { # infer against the data being ingested through the pipeline
                "model_id": "roberta-base-openai-detector", # text classification model ID
                "target_field": "openai-detector", # target field for the inference results
                "field_map": { # maps the document field names to the known field names of the model
                    "abstract": "text_field" # field matching the trained model input; for NLP models this is typically text_field
                }
            }
        },
        {
            "inference": {
                "model_id": "sentence-transformers__all-mpnet-base-v2", # text embedding model ID
                "target_field": "abstract_vector", # target field for the inference results
                "field_map": { # maps the document field names to the known field names of the model
                    "abstract": "text_field" # field matching the trained model input; for NLP models this is typically text_field
                }
            }
        }
    ]
)
```

We can check it in Kibana:
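Before reindexing everything, it is worth dry-running the pipeline against a single made-up document with the simulate API. A sketch (the abstract text here is invented purely for the test):

```python
# Dry-run the pipeline on one test document to confirm both inference processors fire
result = client.ingest.simulate(
    id="plagiarism-checker-pipeline",
    docs=[{"_source": {"abstract": "A short test abstract about question answering."}}],
)
print(result["docs"][0]["doc"]["_source"].keys())  # expect openai-detector and abstract_vector
```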

Create the plagiarism checker index

```python
client.indices.create(
    index="plagiarism-checker",
    mappings={
        "properties": {
            "title": {
                "type": "text",
                "fields": {
                    "keyword": {
                        "type": "keyword"
                    }
                }
            },
            "abstract": {
                "type": "text",
                "fields": {
                    "keyword": {
                        "type": "keyword"
                    }
                }
            },
            "url": {
                "type": "keyword"
            },
            "venue": {
                "type": "keyword"
            },
            "year": {
                "type": "keyword"
            },
            "abstract_vector.predicted_value": { # inference results field: target_field.predicted_value
                "type": "dense_vector",
                "dims": 768, # embedding size of all-mpnet-base-v2
                "index": "true",
                "similarity": "dot_product" # when indexing vectors for approximate kNN search, you need to specify the similarity function for comparing the vectors
            }
        }
    }
)
```

Note that the dot_product similarity requires vectors normalized to unit length; all-mpnet-base-v2 produces normalized embeddings, so it is a valid choice here (otherwise, use cosine). We can check the new index in Kibana:
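As a quick smoke test, we can also run the embedding model once and confirm that the vector size matches the dims in the mapping. A minimal sketch:

```python
# Embed one sentence and verify the output has 768 dimensions
docs = [{"text_field": "plagiarism detection with Elasticsearch"}]
response = MlClient.infer_trained_model(
    client, model_id="sentence-transformers__all-mpnet-base-v2", docs=docs
)
print(len(response["inference_results"][0]["predicted_value"]))  # expect 768
```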

Ingest the source documents

We first download the dataset from public.ukp.informatik.tu-darmstadt.de/reimers/sen... into the current directory:

```bash
$ pwd
/Users/liuxg/python/elasticsearch-labs/supporting-blog-content/plagiarism-detection-with-elasticsearch
$ ls
emnlp2016-2018.json                        plagiarism_detection_es.ipynb
http_ca.crt                                plagiarism_detection_es_self_managed.ipynb
models
```

As shown above, emnlp2016-2018.json is the file we just downloaded.

```python
# Load data into a JSON object
with open('emnlp2016-2018.json') as f:
    data_json = json.load(f)

print(f"Successfully loaded {len(data_json)} documents")

def create_index_body(doc):
    """Generate the body for an Elasticsearch document."""
    return {
        "_index": "plagiarism-docs",
        "_source": doc,
    }

# Prepare the documents to be indexed
documents = [create_index_body(doc) for doc in data_json]

# Use helpers.bulk to index
helpers.bulk(client, documents)

print("Done indexing documents into `plagiarism-docs` source index")
```

We can check it in Kibana:

Reindex with the ingest pipeline

```python
client.reindex(
    wait_for_completion=False,
    source={
        "index": "plagiarism-docs"
    },
    dest={
        "index": "plagiarism-checker",
        "pipeline": "plagiarism-checker-pipeline"
    }
)
```

Above we set wait_for_completion=False, so the reindex is an asynchronous operation and we need to give it some time to finish. We can track progress by checking the document counts:
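A sketch of that check from the notebook; the reindex is done once the destination count catches up with the source:

```python
# Compare source and destination counts to see how far the reindex has progressed
print(client.count(index="plagiarism-docs")["count"])
print(client.count(index="plagiarism-checker")["count"])
```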

Matching counts indicate that the reindex has completed. We can then take a look at the documents in the plagiarism-checker index:

Check for duplicated text

Direct plagiarism

```python
model_text = 'Understanding and reasoning about cooking recipes is a fruitful research direction towards enabling machines to interpret procedural text. In this work, we introduce RecipeQA, a dataset for multimodal comprehension of cooking recipes. It comprises of approximately 20K instructional recipes with multiple modalities such as titles, descriptions and aligned set of images. With over 36K automatically generated question-answer pairs, we design a set of comprehension and reasoning tasks that require joint understanding of images and text, capturing the temporal flow of events and making sense of procedural knowledge. Our preliminary results indicate that RecipeQA will serve as a challenging test bed and an ideal benchmark for evaluating machine comprehension systems. The data and leaderboard are available at http://hucvl.github.io/recipeqa.'

response = client.search(index='plagiarism-checker', size=1,
    knn={
        "field": "abstract_vector.predicted_value",
        "k": 9,
        "num_candidates": 974,
        "query_vector_builder": {
            "text_embedding": {
                "model_id": "sentence-transformers__all-mpnet-base-v2",
                "model_text": model_text
            }
        }
    }
)

for hit in response['hits']['hits']:
    score = hit['_score']
    title = hit['_source']['title']
    abstract = hit['_source']['abstract']
    openai = hit['_source']['openai-detector']['predicted_value']
    url = hit['_source']['url']

    if score > 0.9:
        print("\nHigh similarity detected! This might be plagiarism.")
        print(f"\nMost similar document: '{title}'\n\nAbstract: {abstract}\n\nurl: {url}\n\nScore:{score}\n")

        if openai == 'Fake':
            print("This document may have been created by AI.\n")

    elif score < 0.7:
        print("\nLow similarity detected. This might not be plagiarism.")

        if openai == 'Fake':
            print("This document may have been created by AI.\n")

    else:
        print("\nModerate similarity detected.")
        print(f"\nMost similar document: '{title}'\n\nAbstract: {abstract}\n\nurl: {url}\n\nScore:{score}\n")

        if openai == 'Fake':
            print("This document may have been created by AI.\n")

ml_client = MlClient(client)

model_id = 'roberta-base-openai-detector' # OpenAI text classification model

document = [
    {
        "text_field": model_text
    }
]

ml_response = ml_client.infer_trained_model(model_id=model_id, docs=document)

predicted_value = ml_response['inference_results'][0]['predicted_value']

if predicted_value == 'Fake':
    print("Note: The text query you entered may have been generated by AI.\n")
```

Similar text: paraphrase plagiarism

```python
model_text = 'Comprehending and deducing information from culinary instructions represents a promising avenue for research aimed at empowering artificial intelligence to decipher step-by-step text. In this study, we present CuisineInquiry, a database for the multifaceted understanding of cooking guidelines. It encompasses a substantial number of informative recipes featuring various elements such as headings, explanations, and a matched assortment of visuals. Utilizing an extensive set of automatically crafted question-answer pairings, we formulate a series of tasks focusing on understanding and logic that necessitate a combined interpretation of visuals and written content. This involves capturing the sequential progression of events and extracting meaning from procedural expertise. Our initial findings suggest that CuisineInquiry is poised to function as a demanding experimental platform.'

response = client.search(index='plagiarism-checker', size=1,
    knn={
        "field": "abstract_vector.predicted_value",
        "k": 9,
        "num_candidates": 974,
        "query_vector_builder": {
            "text_embedding": {
                "model_id": "sentence-transformers__all-mpnet-base-v2",
                "model_text": model_text
            }
        }
    }
)

for hit in response['hits']['hits']:
    score = hit['_score']
    title = hit['_source']['title']
    abstract = hit['_source']['abstract']
    openai = hit['_source']['openai-detector']['predicted_value']
    url = hit['_source']['url']

    if score > 0.9:
        print("\nHigh similarity detected! This might be plagiarism.")
        print(f"\nMost similar document: '{title}'\n\nAbstract: {abstract}\n\nurl: {url}\n\nScore:{score}\n")

        if openai == 'Fake':
            print("This document may have been created by AI.\n")

    elif score < 0.7:
        print("\nLow similarity detected. This might not be plagiarism.")

        if openai == 'Fake':
            print("This document may have been created by AI.\n")

    else:
        print("\nModerate similarity detected.")
        print(f"\nMost similar document: '{title}'\n\nAbstract: {abstract}\n\nurl: {url}\n\nScore:{score}\n")

        if openai == 'Fake':
            print("This document may have been created by AI.\n")

ml_client = MlClient(client)

model_id = 'roberta-base-openai-detector' # OpenAI text classification model

document = [
    {
        "text_field": model_text
    }
]

ml_response = ml_client.infer_trained_model(model_id=model_id, docs=document)

predicted_value = ml_response['inference_results'][0]['predicted_value']

if predicted_value == 'Fake':
    print("Note: The text query you entered may have been generated by AI.\n")
```

The complete code can be downloaded from: github.com/liu-xiao-gu...
