🍁 1. Cleaning Text
Perform basic cleaning on unstructured text data using three built-in string methods:
strip
split
replace
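Of the three, strip and replace appear in the cells below; split, which breaks a string into a list on a delimiter, can be sketched like this:

```python
# split breaks a string into a list of substrings on a delimiter
quote = ' Interrobang. By Aishwarya Henriette '
words = quote.strip().split(' ')
print(words)
```

Stripping first avoids empty strings from the leading and trailing spaces.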
```python
# Create the text
text_data = [' Interrobang. By Aishwarya Henriette ',
             'Parking And goding. by karl fautier',
             ' Today is the night. by jarek prakash ']
```
```python
# Strip whitespace from both ends of each string
strip_whitespace = [string.strip() for string in text_data]
strip_whitespace
```

```
['Interrobang. By Aishwarya Henriette',
 'Parking And goding. by karl fautier',
 'Today is the night. by jarek prakash']
```
```python
# Remove the periods
remove_periods = [string.replace('.', '') for string in text_data]
remove_periods
```

```
[' Interrobang By Aishwarya Henriette ',
 'Parking And goding by karl fautier',
 ' Today is the night by jarek prakash ']
```
```python
# Create a custom transformation function
def capitalizer(string):
    return string.upper()

[capitalizer(string) for string in remove_periods]
```

```
[' INTERROBANG BY AISHWARYA HENRIETTE ',
 'PARKING AND GODING BY KARL FAUTIER',
 ' TODAY IS THE NIGHT BY JAREK PRAKASH ']
```
```python
# Use a regular expression to replace every letter with 'x'
import re

def replace_letters_with_x(string):
    return re.sub(r'[a-zA-Z]', 'x', string)

[replace_letters_with_x(string) for string in remove_periods]
```

```
[' xxxxxxxxxxx xx xxxxxxxxx xxxxxxxxx ',
 'xxxxxxx xxx xxxxxx xx xxxx xxxxxxx',
 ' xxxxx xx xxx xxxxx xx xxxxx xxxxxxx ']
```
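The individual steps above can also be chained into a single cleaning function. A minimal sketch (the function name `clean_text` is ours, not from any library):

```python
import re

def clean_text(string):
    # Drop periods, strip the edges, then collapse any run of
    # whitespace into a single space
    return re.sub(r'\s+', ' ', string.replace('.', '').strip())

text_data = [' Interrobang. By Aishwarya Henriette ',
             'Parking And goding. by karl fautier',
             ' Today is the night. by jarek prakash ']
cleaned = [clean_text(s) for s in text_data]
```

Wrapping the steps in one function keeps the cleaning reusable and easy to test.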
🍂 2. Parsing and Cleaning HTML
```python
# Use Beautiful Soup to parse the HTML
from bs4 import BeautifulSoup

# Create some HTML (intentionally malformed)
html = """
<div class='full_name'><span style='font-weight:bold'>
Masege Azra"
"""

# Create a soup object; the lxml parser repairs the unclosed tags
soup = BeautifulSoup(html, 'lxml')
soup.find('div')
```

```
<div class="full_name"><span style="font-weight:bold">
Masege Azra"
</span></div>
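If Beautiful Soup is not available, the standard library's html.parser can also pull the text nodes out of markup. A minimal sketch (the `TextExtractor` class is our own helper, not a library API):

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect the text nodes of an HTML fragment."""
    def __init__(self):
        super().__init__()
        self.parts = []

    def handle_data(self, data):
        self.parts.append(data)

parser = TextExtractor()
parser.feed("<div class='full_name'><span style='font-weight:bold'>"
            "Masege Azra</span></div>")
text = ''.join(parser.parts).strip()
print(text)
```

This only extracts text; unlike lxml, it does not repair broken markup.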
🍃 3. Removing Punctuation
```python
import unicodedata
import sys

# Create the text
text_data = ['Hi!!!! I. love. This. Song....',
             '10000% Agree!!!! #LoveIT',
             'Right??!!']

# Build a translation table mapping every Unicode punctuation
# character (category 'P*') to None
punctuation = dict.fromkeys(i for i in range(sys.maxunicode)
                            if unicodedata.category(chr(i)).startswith('P'))

[string.translate(punctuation) for string in text_data]
```

```
['Hi I love This Song', '10000 Agree LoveIT', 'Right']
```
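If only ASCII punctuation matters, `string.punctuation` with `str.maketrans` avoids scanning the entire Unicode range. A minimal sketch:

```python
import string as string_module

# Translation table that deletes every ASCII punctuation character
table = str.maketrans('', '', string_module.punctuation)

text_data = ['Hi!!!! I. love. This. Song....',
             '10000% Agree!!!! #LoveIT',
             'Right??!!']
[s.translate(table) for s in text_data]
```

This is much cheaper to build than the full-Unicode dictionary above, at the cost of leaving non-ASCII punctuation untouched.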
🌍 4. Tokenizing Text
Here we use the jieba library. Note that jieba is designed for Chinese word segmentation; on English text it effectively splits on whitespace, keeping the spaces themselves as tokens.
```python
import jieba

# Create the text
string = 'The science of study is the technology of tomorrow'

# Segment the string into a list of tokens
seg = jieba.lcut(string)
print(seg)
```

```
['The', ' ', 'science', ' ', 'of', ' ', 'study', ' ', 'is', ' ', 'the', ' ', 'technology', ' ', 'of', ' ', 'tomorrow']
```
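The whitespace tokens jieba keeps between English words can be filtered out afterwards. A minimal sketch, starting from the output shown above:

```python
# Token list as produced above by jieba.lcut
seg = ['The', ' ', 'science', ' ', 'of', ' ', 'study', ' ', 'is', ' ',
       'the', ' ', 'technology', ' ', 'of', ' ', 'tomorrow']

# Drop tokens that are pure whitespace
tokens = [t for t in seg if t.strip()]
print(tokens)
```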
Of course, this article covers only the most basic text-processing methods used in data cleaning; later posts will introduce mainstream NLP methods and code.