Tokenizing Text with Scikit-Learn

Python Programs

Post by Anonymous »

I have the following code to extract features from a set of files (the folder name is the category name) for text classification:

Code: Select all

import sklearn.datasets
from sklearn.feature_extraction.text import TfidfVectorizer

train = sklearn.datasets.load_files('./train', description=None, categories=None, load_content=True, shuffle=True, encoding=None, decode_error='strict', random_state=0)
print len(train.data)
print train.target_names

vectorizer = TfidfVectorizer()
X_train = vectorizer.fit_transform(train.data)

It throws the following stack trace:

Traceback (most recent call last):
File "C:\EclipseWorkspace\TextClassifier\main.py", line 16, in 
X_train = vectorizer.fit_transform(train.data)
File "C:\Python27\lib\site-packages\sklearn\feature_extraction\text.py", line 1285, in fit_transform
X = super(TfidfVectorizer, self).fit_transform(raw_documents)
File "C:\Python27\lib\site-packages\sklearn\feature_extraction\text.py", line 804, in fit_transform
self.fixed_vocabulary_)
File "C:\Python27\lib\site-packages\sklearn\feature_extraction\text.py", line 739, in _count_vocab
for feature in analyze(doc):
File "C:\Python27\lib\site-packages\sklearn\feature_extraction\text.py", line 236, in 
tokenize(preprocess(self.decode(doc))), stop_words)
File "C:\Python27\lib\site-packages\sklearn\feature_extraction\text.py", line 113, in decode
doc = doc.decode(self.encoding, self.decode_error)
File "C:\Python27\lib\encodings\utf_8.py", line 16, in decode
return codecs.utf_8_decode(input, errors, True)
UnicodeDecodeError: 'utf8' codec can't decode byte 0xff in position 32054: invalid start byte
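For context, the failure can be reproduced outside scikit-learn: 0xff is never a valid start byte in UTF-8, but Latin-1 (a close relative of ANSI/Windows-1252) maps every byte to a character. A minimal sketch:

```python
# Reproducing the decode failure outside scikit-learn: 0xff is never a
# valid start byte in UTF-8, but Latin-1 decodes every possible byte.
raw = b"some text \xff"

try:
    raw.decode("utf-8")
except UnicodeDecodeError as e:
    print(e)  # 'utf-8' codec can't decode byte 0xff ...

print(raw.decode("latin-1"))  # decodes without error
```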

I'm running Python 2.7. How can I get this to work? Is there a way to make sklearn.datasets.load_files() use ANSI encoding?
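Both load_files and TfidfVectorizer accept an encoding argument, so one idea (a sketch, not yet verified against my data) is to declare the encoding instead of leaving the documents as raw bytes. The byte strings below are stand-ins for my files:

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Stand-in documents: byte strings containing 0xff, which is invalid
# UTF-8 but decodes cleanly as Latin-1.
docs = [b"hello world \xff", b"another document here"]

# encoding='latin-1' tells the vectorizer how to decode the raw bytes;
# decode_error='replace' or 'ignore' would be a blunter fallback if the
# files use mixed encodings.
vectorizer = TfidfVectorizer(encoding="latin-1")
X = vectorizer.fit_transform(docs)
print(X.shape)
```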
