Logistic Regression via Gradient Ascent with NumPy

2021/8/18 23:09:52

This post walks through implementing logistic regression trained by batch gradient ascent using only NumPy, and may serve as a reference for readers tackling similar problems.
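For context, the objective being maximized here (reconstructed from the loss computed in the code below) is the L2-regularized mean log-likelihood over the n training samples:

```latex
\ell(w, b) = \frac{1}{n}\sum_{i=1}^{n}\Big[\, y_i\,(w \cdot x_i + b) - \log\!\big(1 + e^{\,w \cdot x_i + b}\big) \Big] - \frac{\lambda}{2n}\, w \cdot w
```

Its gradients, which the code accumulates sample by sample, are

```latex
\frac{\partial \ell}{\partial w} = \frac{1}{n}\sum_{i=1}^{n}\big(y_i - \sigma(w \cdot x_i + b)\big)\, x_i \;-\; \frac{\lambda}{n}\, w,
\qquad
\frac{\partial \ell}{\partial b} = \frac{1}{n}\sum_{i=1}^{n}\big(y_i - \sigma(w \cdot x_i + b)\big)
```

where σ(z) = e^z / (1 + e^z) is the sigmoid. Gradient *ascent* steps in the direction of these gradients, since we are maximizing the likelihood rather than minimizing a loss.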

```python
import numpy as np

m = 700                    # total samples (first 650 train, last 50 test)
n = m - 50                 # training-set size
lam = 100                  # L2 regularization strength

# Synthetic data: 4 random integer features, standardized per column
train = np.random.randint(-300, 300, (m, 4)).astype(float)
for i in range(4):
    train[:, i] = (train[:, i] - train[:, i].mean()) / train[:, i].std()

# Labels from a known linear rule: 1*x0 + 2*x1 + 3*x2 + 4*x3 - 1 > 0
train_label = np.zeros(m)
for i in range(m):
    if 1*train[i, 0] + 2*train[i, 1] + 3*train[i, 2] + 4*train[i, 3] - 1 > 0:
        train_label[i] = 1

w = np.zeros(4)            # weights
b = 0                      # bias
beta = 1                   # learning rate

for step in range(300):
    grad = np.zeros(5)     # grad[0:4] -> d(ell)/dw, grad[4] -> d(ell)/db
    # Gradient of the regularized log-likelihood w.r.t. each weight
    for i1 in range(4):
        for i2 in range(n):
            p = np.exp(np.dot(w, train[i2]) + b) / (1 + np.exp(np.dot(w, train[i2]) + b))
            grad[i1] += train_label[i2]*train[i2, i1] - train[i2, i1]*p - lam/n*w[i1]
    # Gradient w.r.t. the bias
    for i2 in range(n):
        p = np.exp(np.dot(w, train[i2]) + b) / (1 + np.exp(np.dot(w, train[i2]) + b))
        grad[4] += train_label[i2] - p
    # Regularized mean log-likelihood (the quantity being maximized)
    loss = 0
    for i2 in range(n):
        loss += train_label[i2]*(np.dot(w, train[i2]) + b) - np.log(1 + np.exp(np.dot(w, train[i2]) + b))
    grad /= n
    loss = loss/n - lam/2/n*np.dot(w, w)
    # Shrink the step size once the objective gets close to its optimum
    if loss >= -0.9 and beta >= 1:
        beta /= 10
    print(step, beta, loss, grad, w, b)
    # Gradient *ascent* update: move in the direction of the gradient
    w += beta*grad[:4]
    b += beta*grad[4]

# Evaluate on the 50 held-out samples
acc = 0
for i in range(50):
    if np.dot(w, train[n+i]) + b > 0:
        if train_label[n+i] == 1:
            acc += 1
    else:
        if train_label[n+i] == 0:
            acc += 1
print(acc)
```
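The per-sample Python loops above are slow for larger datasets. The same update can be written with NumPy matrix operations. A minimal vectorized sketch follows, under a few assumptions: `fit_logreg` is an illustrative name not from the original, the learning rate is held constant (the original decays `beta` once the objective nears its optimum), and the L2 term uses the same λ/n scaling as the loop version.

```python
import numpy as np

def fit_logreg(X, y, lam=100.0, steps=300, lr=1.0):
    """Batch gradient ascent on the L2-regularized mean log-likelihood."""
    n, d = X.shape
    w = np.zeros(d)
    b = 0.0
    for _ in range(steps):
        z = X @ w + b
        p = 1.0 / (1.0 + np.exp(-z))               # sigmoid probabilities
        grad_w = (X.T @ (y - p)) / n - (lam / n) * w
        grad_b = np.mean(y - p)
        w += lr * grad_w                            # ascent: move along gradient
        b += lr * grad_b
    return w, b

# Same synthetic setup as the loop version, with a fixed seed
rng = np.random.default_rng(0)
X = rng.integers(-300, 300, (700, 4)).astype(float)
X = (X - X.mean(axis=0)) / X.std(axis=0)            # per-feature standardization
y = (X @ np.array([1.0, 2.0, 3.0, 4.0]) - 1 > 0).astype(float)

w, b = fit_logreg(X[:650], y[:650])
pred = (X[650:] @ w + b > 0).astype(float)          # held-out predictions
print((pred == y[650:]).mean())                     # held-out accuracy
```

Replacing the three nested loops with `X.T @ (y - p)` computes all four weight gradients in one matrix product, which is the main payoff of keeping the data in a single array.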

That concludes this introduction to logistic regression via gradient ascent with NumPy. We hope the article is helpful, and thank you for supporting 为之网.

