L4 Check-in Study Notes

Published: 2024-09-18

Import the data


import pandas as pd

# datingTestSet2.txt is tab-separated with no header row: 3 feature columns plus 1 label column
data = pd.read_table(r'C:\Users\11054\Desktop\kLearning\L4_learning\datingTestSet2.txt',
                     sep='\t', header=None)

data.head()
X = data.iloc[:,:3]
Y = data.iloc[:,3]
X,Y
(         0          1         2
 0    40920   8.326976  0.953952
 1    14488   7.153469  1.673904
 2    26052   1.441871  0.805124
 3    75136  13.147394  0.428964
 4    38344   1.669788  0.134296
 ..     ...        ...       ...
 995  11145   3.410627  0.631838
 996  68846   9.974715  0.669787
 997  26575  10.650102  0.866627
 998  48111   9.134528  0.728045
 999  43757   7.882601  1.332446
 
 [1000 rows x 3 columns],
 0      3
 1      2
 2      1
 3      1
 4      1
       ..
 995    2
 996    1
 997    3
 998    3
 999    3
 Name: 3, Length: 1000, dtype: int64)
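
Note that the three feature columns sit on very different scales: the first is in the tens of thousands while the third stays below 2. Because KNN compares samples by raw distance, the first column would dominate the neighbor search, so a scaling step is commonly added before fitting. A minimal sketch using sklearn's StandardScaler (X_scaled below is just an illustrative name, not used in the rest of these notes):

from sklearn.preprocessing import StandardScaler

# Standardize each feature to zero mean and unit variance so that no single
# column dominates the distance computation
scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)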

Split into training and test sets


from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, Y,
                                                    test_size=0.25,
                                                    random_state=3)

Create the model


from sklearn.neighbors import KNeighborsClassifier

knc = KNeighborsClassifier(n_neighbors=5)
knc.fit(X_train, y_train)

Prediction results


data["预测结果"] = knc.predict(data.iloc[:,:3])
data.head(10)

Score the model


scoreK = knc.score(X_test,y_test)

print(scoreK)
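
Since n_neighbors directly sets how many neighbors get a vote, a quick sweep over a few K values shows how sensitive the held-out accuracy is to this choice. A rough sketch reusing the split from above (the scores dict is just an illustrative name):

scores = {}
for k in (1, 3, 5, 7, 9, 11):
    model = KNeighborsClassifier(n_neighbors=k)
    model.fit(X_train, y_train)
    scores[k] = model.score(X_test, y_test)  # mean accuracy on the test set
print(scores)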

Personal summary

  • The K-nearest-neighbors (KNN) model is used for classification: the prediction for a sample is the majority class among the n most similar samples in the training data, and n_neighbors controls the value of K (see the sketch below).
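
To make the "majority vote among the K most similar samples" idea concrete, here is a rough NumPy sketch of what a single prediction does under the hood, assuming plain Euclidean distance and uniform weights (the helper predict_one is a made-up name for illustration):

import numpy as np
from collections import Counter

def predict_one(x, X_train, y_train, k=5):
    # Euclidean distance from the query point to every training sample
    dists = np.sqrt(((X_train.to_numpy() - x) ** 2).sum(axis=1))
    # Positions of the k closest training samples
    nearest = np.argsort(dists)[:k]
    # The prediction is the most common label among those k neighbors
    return Counter(y_train.iloc[nearest]).most_common(1)[0][0]

print(predict_one(X_test.iloc[0].to_numpy(), X_train, y_train, k=5))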
