Nonlinear Regression


1. Basic Model
    The test data are X(x0, x1, x2, ..., xn)
    The parameters to learn are: Θ(θ0, θ1, θ2, ..., θn)
    Vector form:
        hθ(x) = θ0·x0 + θ1·x1 + ... + θn·xn = Θ^T X
    To handle binary-valued data, the Sigmoid function is introduced to smooth the curve:
        g(z) = 1 / (1 + e^(-z)),   hθ(x) = g(Θ^T X) = 1 / (1 + e^(-Θ^T X))
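A minimal sketch of these two formulas in code (the function names here are illustrative, not from the original post):

import numpy as np

def sigmoid(z):
    # maps any real number into the interval (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_hypothesis(theta, x):
    # logistic hypothesis: the sigmoid applied to the linear combination theta^T x
    return sigmoid(np.dot(x, theta))
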
import numpy as np
import random

# m denotes the number of examples here, not the number of features
def gradientDescent(x, y, theta, alpha, m, numIterations):  # alpha is the learning rate, m is the number of examples, numIterations is the number of updates
    xTrans = x.transpose()  # transpose of x, used to compute the gradient
    for i in range(0, numIterations):
        hypothesis = np.dot(x, theta)  # predicted values
        loss = hypothesis - y
        # avg cost per example (the 2 in 2*m doesn't really matter here.
        # But to be consistent with the gradient, I include it)
        cost = np.sum(loss ** 2) / (2 * m)
        print("Iteration %d | Cost: %f" % (i, cost))
        # avg gradient per example
        gradient = np.dot(xTrans, loss) / m
        # update
        theta = theta - alpha * gradient
    return theta


def genData(numPoints, bias, variance):     # create data; parameters: number of examples, bias, variance
    x = np.zeros(shape=(numPoints, 2))     # numPoints rows, 2 columns, as a NumPy array
    y = np.zeros(shape=numPoints)   # labels, a single column
    # basically a straight line
    for i in range(0, numPoints):   # 0 to numPoints-1
        # bias feature
        x[i][0] = 1  # the first column of x is all 1s
        x[i][1] = i
        # our target variable
        y[i] = (i + bias) + random.uniform(0, 1) * variance
    return x, y

# generate 100 points with a bias of 25 and a variance of 10 as a bit of noise
x, y = genData(100, 25, 10)
m, n = np.shape(x)
numIterations = 100000
alpha = 0.0005
theta = np.ones(n)
theta = gradientDescent(x, y, theta, alpha, m, numIterations)
print(theta)
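
The gradientDescent above fits a straight line under a squared-error cost. For the binary case from section 1, only the hypothesis changes: wrap the linear combination in the sigmoid. Under the cross-entropy cost the gradient keeps the same X^T(hypothesis - y) / m form, so the loop is nearly identical. A hedged sketch, assuming 0/1 labels (this variant is not part of the original script):

def logisticGradientDescent(x, y, theta, alpha, m, numIterations):
    # same update loop, but the hypothesis is sigmoid(x . theta)
    xTrans = x.transpose()
    for i in range(0, numIterations):
        hypothesis = 1.0 / (1.0 + np.exp(-np.dot(x, theta)))
        loss = hypothesis - y
        # gradient of the average cross-entropy cost
        gradient = np.dot(xTrans, loss) / m
        theta = theta - alpha * gradient
    return theta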

 

Program output: the cost at every iteration, followed by the learned theta. Since the generated data satisfy y[i] = (i + 25) + uniform(0, 1) * 10, the noise averages about 5, so theta should end up near [30, 1].
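
A quick sanity check (not in the original post): compare a few predictions from the learned theta against the noisy targets.

# illustrative check, appended after training
predictions = np.dot(x, theta)
print("first 5 predictions:", predictions[:5])
print("first 5 targets:    ", y[:5])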

 

