
AI Study Notes 1

Working through how the coefficients of a linear model are found by taking partial derivatives, running gradient descent, and backpropagating the error.
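To spell out the chain rule before the code (a standard derivation, written to match the variable names used below): for a single sample (x, y), the forward pass and loss are

$$\hat{y} = wx + b, \qquad L = \tfrac{1}{2}(\hat{y} - y)^2,$$

and backpropagation applies the chain rule from the loss back to the parameters:

$$\frac{\partial L}{\partial w} = \frac{\partial L}{\partial \hat{y}} \cdot \frac{\partial \hat{y}}{\partial w} = (\hat{y} - y)\,x, \qquad \frac{\partial L}{\partial b} = \frac{\partial L}{\partial \hat{y}} \cdot \frac{\partial \hat{y}}{\partial b} = (\hat{y} - y).$$

Gradient descent then averages these per-sample gradients over the N samples and steps against the gradient with learning rate $\eta$:

$$w \leftarrow w - \eta \cdot \frac{1}{N}\sum_i \frac{\partial L_i}{\partial w}, \qquad b \leftarrow b - \eta \cdot \frac{1}{N}\sum_i \frac{\partial L_i}{\partial b}.$$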

import numpy as np

# Training data sampled from the target line y = 2x + 1
x_data = np.array([2, 3, 4])
y_data = np.array([5, 7, 9])

# Initialize the parameters and the step size
w = 0.0
b = 0.0
learning_rate = 0.01

for epoch in range(10000):
    total_loss = 0
    grad_w = 0
    grad_b = 0

    # Accumulate loss and gradients over every sample (full-batch)
    for x, y_true in zip(x_data, y_data):
        # Forward pass: prediction and squared-error loss
        y_pred = w * x + b
        loss = 0.5 * (y_pred - y_true) ** 2
        total_loss += loss

        # Backpropagation: chain rule from the loss back to w and b
        dL_dy_pred = y_pred - y_true  # dL/dy_pred
        dy_pred_dw = x                # dy_pred/dw
        dy_pred_db = 1                # dy_pred/db

        grad_w += dL_dy_pred * dy_pred_dw
        grad_b += dL_dy_pred * dy_pred_db

    # Gradient descent step with the averaged gradients
    w = w - learning_rate * grad_w / len(x_data)
    b = b - learning_rate * grad_b / len(x_data)

    # Print progress every 1000 epochs
    if (epoch + 1) % 1000 == 0:
        print(f'Epoch {epoch+1}: w={w:.4f}, b={b:.4f}, Loss={total_loss:.4f}')

# Training results
print(f'\nTrained w: {w:.4f}')
print(f'Trained b: {b:.4f}')
Output:
Epoch 1000: w=2.0569, b=0.8177, Loss=0.0034
Epoch 2000: w=2.0304, b=0.9028, Loss=0.0010
Epoch 3000: w=2.0162, b=0.9482, Loss=0.0003
Epoch 4000: w=2.0086, b=0.9724, Loss=0.0001
Epoch 5000: w=2.0046, b=0.9853, Loss=0.0000
Epoch 6000: w=2.0025, b=0.9921, Loss=0.0000
Epoch 7000: w=2.0013, b=0.9958, Loss=0.0000
Epoch 8000: w=2.0007, b=0.9978, Loss=0.0000
Epoch 9000: w=2.0004, b=0.9988, Loss=0.0000
Epoch 10000: w=2.0002, b=0.9994, Loss=0.0000

Trained w: 2.0002
Trained b: 0.9994
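As expected, w and b converge toward the true line y = 2x + 1. For comparison, here is a minimal vectorized sketch of the same full-batch update (my rewrite, not from the original notes; NumPy broadcasting replaces the inner Python loop, and the names mirror the code above):

import numpy as np

x_data = np.array([2.0, 3.0, 4.0])
y_data = np.array([5.0, 7.0, 9.0])

w, b = 0.0, 0.0
learning_rate = 0.01
n = len(x_data)

for epoch in range(10000):
    y_pred = w * x_data + b              # forward pass on all samples at once
    error = y_pred - y_data              # dL/dy_pred for every sample
    grad_w = (error * x_data).sum() / n  # averaged dL/dw
    grad_b = error.sum() / n             # averaged dL/db
    w -= learning_rate * grad_w
    b -= learning_rate * grad_b

print(f'w={w:.4f}, b={b:.4f}')  # approaches w=2, b=1, matching the log above

Since these three points lie exactly on a line, the closed-form least-squares fit, e.g. np.polyfit(x_data, y_data, 1), returns roughly the same slope and intercept (approximately [2., 1.]), which makes a handy sanity check on the gradient-descent result.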
This post is licensed under CC BY 4.0 by the author.