NLPIR/ICTCLAS, the Chinese Academy of Sciences' Chinese word segmentation system, is a very handy segmentation tool. It is similar to jieba, but it offers more functionality and more room for customization.
PyNLPIR is a Python wrapper around the NLPIR/ICTCLAS Chinese segmentation software.
Easily segment text using NLPIR, one of the most widely-regarded Chinese text analyzers:
>>> import pynlpir
>>> pynlpir.open()
>>> s = '欢迎科研人员、技术工程师、企事业单位与个人参与NLPIR平台的建设工作。'
>>> pynlpir.segment(s)
[('欢迎', 'verb'), ('科研', 'noun'), ('人员', 'noun'), ('、', 'punctuation mark'), ('技术', 'noun'), ('工程师', 'noun'), ('、', 'punctuation mark'), ('企事业', 'noun'), ('单位', 'noun'), ('与', 'conjunction'), ('个人', 'noun'), ('参与', 'verb'), ('NLPIR', 'noun'), ('平台', 'noun'), ('的', 'particle'), ('建设', 'verb'), ('工作', 'verb'), ('。', 'punctuation mark')]
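Since segment() returns plain (token, part-of-speech) tuples, the output can be post-processed with ordinary Python. A minimal sketch that pulls only the nouns out of the result shown above:

```python
# Output of pynlpir.segment() copied from the example above.
segments = [('欢迎', 'verb'), ('科研', 'noun'), ('人员', 'noun'),
            ('、', 'punctuation mark'), ('技术', 'noun'), ('工程师', 'noun'),
            ('、', 'punctuation mark'), ('企事业', 'noun'), ('单位', 'noun'),
            ('与', 'conjunction'), ('个人', 'noun'), ('参与', 'verb'),
            ('NLPIR', 'noun'), ('平台', 'noun'), ('的', 'particle'),
            ('建设', 'verb'), ('工作', 'verb'), ('。', 'punctuation mark')]

# Keep only the tokens tagged as nouns.
nouns = [token for token, pos in segments if pos == 'noun']
print(nouns)  # ['科研', '人员', '技术', '工程师', '企事业', '单位', '个人', 'NLPIR', '平台']
```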
PyNLPIR is built with ctypes. Run pip install pynlpir to install PyNLPIR, then pynlpir update to download the latest license. A complete example:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import pynlpir
pynlpir.open()
s = '欢迎科研人员、技术工程师、企事业单位与个人参与NLPIR平台的建设工作。'
segments = pynlpir.segment(s)
for segment in segments:
    print(segment[0], segment[1], sep='\t')
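Beyond printing, a common follow-up step is tallying how often each part of speech occurs. A sketch using collections.Counter on a hypothetical (token, pos) list of the kind segment() returns:

```python
from collections import Counter

# Hypothetical (token, pos) pairs of the shape pynlpir.segment() returns.
segments = [('欢迎', 'verb'), ('科研', 'noun'), ('人员', 'noun'),
            ('参与', 'verb'), ('建设', 'verb')]

# Count occurrences of each part-of-speech tag.
pos_counts = Counter(pos for _, pos in segments)
print(pos_counts.most_common())  # [('verb', 3), ('noun', 2)]
```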
s = '聊天机器人到底该怎么做呢?'
key_words = pynlpir.get_key_words(s, weighted=True)
for key_word in key_words:
    print(key_word[0], key_word[1], sep='\t')
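With weighted=True, get_key_words() returns (keyword, weight) pairs, so they can be ranked by weight with plain Python. A sketch on assumed sample values (the weights here are made up for illustration, not taken from running the code above):

```python
# Hypothetical (keyword, weight) pairs of the shape
# pynlpir.get_key_words(s, weighted=True) returns.
key_words = [('聊天', 2.2), ('机器人', 4.1), ('到底', 1.0)]

# Rank keywords by weight, highest first.
ranked = sorted(key_words, key=lambda kw: kw[1], reverse=True)
print(ranked)  # [('机器人', 4.1), ('聊天', 2.2), ('到底', 1.0)]
```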
s = '海洋是如何形成的,.,。'
segments = pynlpir.segment(s, pos_names='all')
for segment in segments:
    print(segment[0], segment[1], sep='\t')
s = '海洋是如何形成的,.,。'
segments = pynlpir.segment(s, pos_names='all', pos_english=False)
for segment in segments:
    print(segment[0], segment[1], sep='\t')
pynlpir.close()
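With pos_names='all', PyNLPIR reports the full part-of-speech hierarchy as a colon-separated string rather than a single word. A minimal sketch that splits such a tag into its levels; the sample tag here is an assumed example, not output from the script above:

```python
# Assumed example of a hierarchical tag produced with pos_names='all'.
tag = 'noun:other proper noun'

# Split the colon-separated hierarchy into its individual levels.
levels = [level.strip() for level in tag.split(':')]
print(levels)  # ['noun', 'other proper noun']
```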