2021-07-29
2021/7/29 23:05:54
Support Vector Machines
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sb
from scipy.io import loadmat
# Load the first example dataset (ex6data1.mat) into a DataFrame
raw_data = loadmat('data/ex6data1.mat')
raw_data
data = pd.DataFrame(raw_data['X'], columns=['X1', 'X2'])
data['y'] = raw_data['y']

# Plot the positive and negative examples
positive = data[data['y'].isin([1])]
negative = data[data['y'].isin([0])]
fig, ax = plt.subplots(figsize=(12,8))
ax.scatter(positive['X1'], positive['X2'], s=50, marker='x', label='Positive')
ax.scatter(negative['X1'], negative['X2'], s=50, marker='o', label='Negative')
ax.legend()
plt.show()
from sklearn import svm
# Linear SVM with a small C (stronger regularization, wider margin)
svc = svm.LinearSVC(C=1, loss='hinge', max_iter=1000)
svc
svc.fit(data[['X1', 'X2']], data['y'])
svc.score(data[['X1', 'X2']], data['y'])

# The same model with C=100, which tries harder to classify every point correctly
svc2 = svm.LinearSVC(C=100, loss='hinge', max_iter=1000)
svc2.fit(data[['X1', 'X2']], data['y'])
svc2.score(data[['X1', 'X2']], data['y'])
# Visualize each point's signed distance to the decision boundary (C=1)
data['SVM 1 Confidence'] = svc.decision_function(data[['X1', 'X2']])
fig, ax = plt.subplots(figsize=(12,8))
ax.scatter(data['X1'], data['X2'], s=50, c=data['SVM 1 Confidence'], cmap='seismic')
ax.set_title('SVM (C=1) Decision Confidence')
plt.show()

# The same confidence values for the C=100 model
data['SVM 2 Confidence'] = svc2.decision_function(data[['X1', 'X2']])
fig, ax = plt.subplots(figsize=(12,8))
ax.scatter(data['X1'], data['X2'], s=50, c=data['SVM 2 Confidence'], cmap='seismic')
ax.set_title('SVM (C=100) Decision Confidence')
plt.show()
plt.show()
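As a side note (not part of the original walkthrough), the two linear decision boundaries can also be drawn directly from the fitted models' coef_ and intercept_ attributes, which makes it easy to see how the boundary moves as C grows; the x1_range helper below is introduced only for plotting:

# Sketch: plot both boundaries w[0]*x1 + w[1]*x2 + b = 0 over the data
fig, ax = plt.subplots(figsize=(12,8))
ax.scatter(positive['X1'], positive['X2'], s=50, marker='x', label='Positive')
ax.scatter(negative['X1'], negative['X2'], s=50, marker='o', label='Negative')
x1_range = np.linspace(data['X1'].min(), data['X1'].max(), 100)
for model, name in [(svc, 'C=1'), (svc2, 'C=100')]:
    w, b = model.coef_[0], model.intercept_[0]
    # Solve the boundary equation for x2: x2 = -(w[0]*x1 + b) / w[1]
    ax.plot(x1_range, -(w[0] * x1_range + b) / w[1], label=name)
ax.legend()
plt.show()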
# Gaussian (RBF) kernel: similarity between x1 and x2, decaying with their squared distance
def gaussian_kernel(x1, x2, sigma):
    return np.exp(-np.sum((x1 - x2) ** 2) / (2 * sigma ** 2))

x1 = np.array([1.0, 2.0, 1.0])
x2 = np.array([0.0, 4.0, -1.0])
sigma = 2
gaussian_kernel(x1, x2, sigma)
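For these test vectors, x1 - x2 = (1, -2, 2), so ||x1 - x2||^2 = 1 + 4 + 4 = 9 and 2*sigma^2 = 8; the call above should therefore return exp(-9/8) ≈ 0.3247.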
# Load the second example dataset (ex6data2.mat), which is not linearly separable
raw_data = loadmat('data/ex6data2.mat')
data = pd.DataFrame(raw_data['X'], columns=['X1', 'X2'])
data['y'] = raw_data['y']

positive = data[data['y'].isin([1])]
negative = data[data['y'].isin([0])]
fig, ax = plt.subplots(figsize=(12,8))
ax.scatter(positive['X1'], positive['X2'], s=30, marker='x', label='Positive')
ax.scatter(negative['X1'], negative['X2'], s=30, marker='o', label='Negative')
ax.legend()
plt.show()
# Nonlinear SVM with the RBF kernel for the second dataset
svc = svm.SVC(C=100, gamma=10, probability=True)
svc
svc.fit(data[['X1', 'X2']], data['y'])
svc.score(data[['X1', 'X2']], data['y'])

# Color each point by its predicted probability of belonging to class 0
data['Probability'] = svc.predict_proba(data[['X1', 'X2']])[:,0]
fig, ax = plt.subplots(figsize=(12,8))
ax.scatter(data['X1'], data['X2'], s=30, c=data['Probability'], cmap='Reds')
plt.show()
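To make the learned nonlinear boundary itself visible (a sketch that is not part of the original walkthrough, reusing the svc fitted above), decision_function can be evaluated on a grid and the zero level contoured; the xx, yy, and grid names below are introduced only for this plot:

# Sketch: contour the decision boundary of the RBF-kernel SVM
xx, yy = np.meshgrid(np.linspace(data['X1'].min(), data['X1'].max(), 200),
                     np.linspace(data['X2'].min(), data['X2'].max(), 200))
grid = pd.DataFrame(np.c_[xx.ravel(), yy.ravel()], columns=['X1', 'X2'])
Z = svc.decision_function(grid).reshape(xx.shape)
fig, ax = plt.subplots(figsize=(12,8))
ax.scatter(data['X1'], data['X2'], s=30, c=data['y'], cmap='coolwarm')
ax.contour(xx, yy, Z, levels=[0], colors='k')  # boundary where the margin score is 0
plt.show()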
# The third dataset (ex6data3.mat) comes with a separate validation set
raw_data = loadmat('data/ex6data3.mat')
X = raw_data['X']
Xval = raw_data['Xval']
y = raw_data['y'].ravel()
yval = raw_data['yval'].ravel()
# Candidate values for the hyperparameters C and gamma
C_values = [0.01, 0.03, 0.1, 0.3, 1, 3, 10, 30, 100]
gamma_values = [0.01, 0.03, 0.1, 0.3, 1, 3, 10, 30, 100]

# Track the best validation score and the parameters that produced it
best_score = 0
best_params = {'C': None, 'gamma': None}

for C in C_values:
    for gamma in gamma_values:
        # STEP 2: fit an RBF-kernel SVM with the current parameters on the
        # training set and score it on the validation set
        svc = svm.SVC(C=C, gamma=gamma)
        svc.fit(X, y)
        score = svc.score(Xval, yval)
        # STEP 3: keep the combination with the highest validation score
        if score > best_score:
            best_score = score
            best_params['C'] = C
            best_params['gamma'] = gamma

best_score, best_params
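As a follow-up sketch (not in the original exercise), the model can be refit with the best combination found and checked against the validation set; best_svc is just an illustrative name:

best_svc = svm.SVC(C=best_params['C'], gamma=best_params['gamma'])
best_svc.fit(X, y)
best_svc.score(Xval, yval)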
# Load the preprocessed spam data (assumed here to be spamTrain.mat and
# spamTest.mat in the same data/ folder as the other .mat files)
spam_train = loadmat('data/spamTrain.mat')
spam_test = loadmat('data/spamTest.mat')
X = spam_train['X']
Xtest = spam_test['Xtest']
y = spam_train['y'].ravel()
ytest = spam_test['ytest'].ravel()
X.shape, y.shape, Xtest.shape, ytest.shape
# Train a default RBF-kernel SVM on the spam features and report accuracy
svc = svm.SVC()
svc.fit(X, y)
print('Training accuracy = {0}%'.format(np.round(svc.score(X, y) * 100, 2)))
print('Test accuracy = {0}%'.format(np.round(svc.score(Xtest, ytest) * 100, 2)))