Python Machine Learning - Chi-Square Test, LabelEncoder, One-hot, xgboost, shap

2021/9/12 1:05:15

This post introduces chi-square tests, LabelEncoder, one-hot encoding, xgboost, and shap in Python. It should serve as a handy reference for anyone working on these problems, so let's dive in.

I. Statistics

1. crosstab

import pandas as pd

# counts, with row/column totals in the 'All' margin
ct = pd.crosstab(label, feature, margins=True)
# row-wise proportions
ct_prob = ct.div(ct['All'], axis=0)
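
As a concrete usage sketch, here is the same pair of calls on a small made-up DataFrame (the data is invented purely for illustration):

import pandas as pd

# toy data: a binary label against a three-level categorical feature
df = pd.DataFrame({
    'label':   [0, 0, 1, 1, 0, 1, 0, 1],
    'feature': ['a', 'b', 'a', 'c', 'b', 'a', 'c', 'c'],
})

ct = pd.crosstab(df['label'], df['feature'], margins=True)
ct_prob = ct.div(ct['All'], axis=0)   # each cell divided by its row total
print(ct)
print(ct_prob)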

2. Chi-square test

import scipy.stats

# chi2_contingency returns (chi^2 statistic, p-value, degrees of freedom, expected counts)
p_value = scipy.stats.chi2_contingency(cross_table)[1]    # p-value
chi2_stat = scipy.stats.chi2_contingency(cross_table)[0]  # chi^2 statistic
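
Putting the two steps together, a hedged sketch that reuses the ct count table from the crosstab snippet above; the 'All' margin has to be dropped first, because chi2_contingency expects a table of raw counts only:

from scipy.stats import chi2_contingency

# strip the margin row/column before testing
counts = ct.drop(index='All', columns='All')
chi2_stat, p_value, dof, expected = chi2_contingency(counts)
if p_value < 0.05:
    print(f"chi2={chi2_stat:.2f}, p={p_value:.4f}: reject independence at the 5% level")
else:
    print(f"chi2={chi2_stat:.2f}, p={p_value:.4f}: no evidence against independence")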

3. SelectKBest

from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import chi2

# keep the k features with the highest chi^2 scores against the target
skb = SelectKBest(chi2, k=2)
skb = skb.fit(X, y)    # X: non-negative feature matrix, y: target labels
skb.get_support()      # boolean mask marking the selected columns
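
A self-contained sketch on toy data (chi2 requires non-negative feature values; the column names are made up), showing how get_support() maps back to column names:

import pandas as pd
from sklearn.feature_selection import SelectKBest, chi2

X = pd.DataFrame({
    'f1': [0, 1, 2, 3, 4, 5],
    'f2': [5, 4, 3, 2, 1, 0],
    'f3': [1, 1, 1, 1, 1, 1],   # uninformative constant feature
})
y = [0, 0, 0, 1, 1, 1]

skb = SelectKBest(chi2, k=2).fit(X, y)
print(X.columns[skb.get_support()].tolist())   # names of the 2 selected columns
print(skb.scores_)                             # chi^2 score per feature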

4. ANOVA

from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

# one-way ANOVA: does the mean of 'value' differ across treatment groups?
model = ols('value ~ C(treatments)', data=an_df).fit()
anova_table = anova_lm(model, typ=2)
anova_table
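
Since an_df is not defined above, here is a minimal self-contained sketch of the expected input: a long-format table with one measurement per row and a treatments column naming the group (the numbers are invented):

import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

an_df = pd.DataFrame({
    'treatments': ['A'] * 5 + ['B'] * 5 + ['C'] * 5,
    'value': [5.1, 4.9, 5.3, 5.0, 5.2,
              6.0, 6.2, 5.9, 6.1, 6.3,
              4.0, 4.2, 3.9, 4.1, 4.3],
})

model = ols('value ~ C(treatments)', data=an_df).fit()
print(anova_lm(model, typ=2))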

II. Data preprocessing

1. LabelEncoder

from sklearn.preprocessing import LabelEncoder

# encode every string (object-dtype) column into integer codes
le = LabelEncoder()
for col in data.columns:
    if data[col].dtype == 'object':
        data[col] = le.fit_transform(data[col])

Note that LabelEncoder's output carries numeric ordering (2 is greater than 1, 3 is greater than 2), so it is not recommended for unordered (nominal) categorical features.
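
To make the caveat concrete, a small sketch (the colour categories are invented):

from sklearn.preprocessing import LabelEncoder

le = LabelEncoder()
# classes are sorted alphabetically, then numbered: blue=0, green=1, red=2
print(le.fit_transform(['red', 'green', 'blue', 'green']))   # [2 1 0 1]
# a downstream model will treat red (2) as "greater than" green (1),
# even though the colours have no natural order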

2. OneHotEncoder

from sklearn.preprocessing import OneHotEncoder

onehot_encoder = OneHotEncoder()
# fit_transform returns a sparse matrix of 0/1 indicator columns
onehot_encoded = onehot_encoder.fit_transform(all_Feats)
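
A short self-contained sketch of what the encoder actually returns, assuming all_Feats is a DataFrame of categorical columns (get_feature_names_out needs scikit-learn >= 1.0):

import pandas as pd
from sklearn.preprocessing import OneHotEncoder

# stand-in for all_Feats
all_Feats = pd.DataFrame({'color': ['red', 'green', 'red'],
                          'size':  ['S', 'L', 'L']})

onehot_encoder = OneHotEncoder()
onehot_encoded = onehot_encoder.fit_transform(all_Feats)   # scipy sparse matrix

# dense view with readable column names
dense = pd.DataFrame(onehot_encoded.toarray(),
                     columns=onehot_encoder.get_feature_names_out(all_Feats.columns))
print(dense)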

3. get_dummies()

import pandas as pd

encoded = pd.get_dummies(X)               # the whole DataFrame at once
oh = pd.get_dummies(X[col], prefix=col)   # a single column, prefixed with its name
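
One practical caveat: get_dummies builds its columns from whatever categories it sees, so encoding the training and test sets separately can yield mismatched columns. A hedged sketch of aligning them (train_df and test_df are assumed names):

import pandas as pd

train_enc = pd.get_dummies(train_df)
test_enc = pd.get_dummies(test_df)
# align the test columns to the training layout, filling unseen dummies with 0
test_enc = test_enc.reindex(columns=train_enc.columns, fill_value=0)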

III. XGBoost

from xgboost import XGBClassifier, plot_importance
from sklearn import metrics
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt

train_X, test_X, train_y, test_y = train_test_split(encoded_x, y, test_size=0.2)

# eta is xgboost's learning rate (an alias of learning_rate)
xgbc = XGBClassifier(eta=0.2, max_depth=6)
xgbc.fit(train_X, train_y)  # optionally pass eval_metric='logloss', eval_set=evalset
test_predict = xgbc.predict(test_X)
metrics.accuracy_score(test_y, test_predict)

# xgboost's built-in importance plot (F score = number of splits per feature)
ax = plot_importance(xgbc)
ax.figure.set_size_inches(8, 36)
plt.show()
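
The optional eval_set hint above can be expanded into validation monitoring with early stopping; a hedged sketch, assuming xgboost >= 1.6 (where eval_metric and early_stopping_rounds are constructor arguments):

from xgboost import XGBClassifier

xgbc = XGBClassifier(
    eta=0.2,
    max_depth=6,
    eval_metric='logloss',        # metric tracked on the eval_set
    early_stopping_rounds=10,     # stop once the metric stops improving
)
xgbc.fit(train_X, train_y, eval_set=[(test_X, test_y)], verbose=False)
print(xgbc.best_iteration)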

IV. LightGBM

import lightgbm as lgb
import pandas as pd
import matplotlib.pyplot as plt
from sklearn import metrics


train_data = lgb.Dataset(train_X, label = train_y)
validation_data = lgb.Dataset(test_X, label = test_y)

params = {
    'learning_rate': 0.1,
    'lambda_l1': 0.1,            # L1 regularization
    'lambda_l2': 0.2,            # L2 regularization
    'max_depth': 6,
    'objective': 'multiclass',
    'num_class': 8,              # number of classes in the target
}

clf = lgb.train(params, train_data, valid_sets=[validation_data])
y_prob = clf.predict(test_X, num_iteration=clf.best_iteration)
# each row of y_prob holds one probability per class; the argmax is the predicted label
y_pred = y_prob.argmax(axis=1)
metrics.accuracy_score(test_y, y_pred)

# rank features by LightGBM's importance scores (split counts by default)
columns = test_X.columns.tolist()
df = pd.DataFrame()
df['feature name'] = columns
df['importance'] = clf.feature_importance()
df = df.sort_values('importance')
df.plot.barh(x='feature name', figsize=(10, 36))
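
LightGBM also ships a built-in plotting helper that produces an equivalent importance chart in one call; a short sketch:

import lightgbm as lgb
import matplotlib.pyplot as plt

lgb.plot_importance(clf, figsize=(10, 36))   # split-count importance by default
plt.show()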

V. SHAP (Shapley values)

Install SHAP first:

!pip install shap

Using the xgboost model above as an example:

import shap

# TreeExplainer gives exact SHAP values for tree ensembles such as xgboost
explainer = shap.TreeExplainer(xgbc)
shap_values = explainer.shap_values(test_X)
# bar plot of mean |SHAP value| per feature, i.e. global feature importance
shap.summary_plot(shap_values, test_X, plot_type="bar")
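
SHAP can also drill into a single feature or a single prediction; a hedged sketch assuming xgbc is a binary classifier, so that shap_values is a single 2-D array:

import shap

# how the SHAP value of one feature varies with that feature's value
shap.dependence_plot(test_X.columns[0], shap_values, test_X)

# signed per-feature contributions to the first test row's prediction
shap.force_plot(explainer.expected_value, shap_values[0], test_X.iloc[0], matplotlib=True)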



