Win10 + Gluon + GPU


1. Download the tutorials

You can download the tutorials as a zip in a browser and extract it, then type cmd in the address bar of File Explorer inside the extracted directory to open a command prompt there.

Alternatively, clone with git:

git clone https://github.com/mli/gluon-tutorials-zh

2. Install Gluon (CPU)

Add conda channels:

# prefer the Tsinghua conda mirror
conda config --prepend channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free/

# or use the USTC conda mirror instead
conda config --prepend channels http://mirrors.ustc.edu.cn/anaconda/pkgs/free/

Install from cmd:

conda env create -f environment.yml
activate gluon  # note: on Windows, no "source" prefix is needed

To update the tutorial environment later:

conda env update -f environment.yml

3. Install the GPU version

First uninstall the CPU build:

pip uninstall mxnet

Then install the build that matches your CUDA version:

pip install --pre mxnet-cu75 # CUDA 7.5
pip install --pre mxnet-cu80 # CUDA 8.0

[Optional] Users in mainland China can use the Douban PyPI mirror to speed up the download:

pip install --pre mxnet-cu75 -i https://pypi.douban.com/simple # CUDA 7.5
pip install --pre mxnet-cu80 -i https://pypi.douban.com/simple # CUDA 8.0

Verify the installation:

import pip
# note: pip.main() exists only in pip < 10; with newer pip, run `pip show mxnet` from the shell
for pkg in ['mxnet', 'mxnet-cu75', 'mxnet-cu80']:
    pip.main(['show', pkg])
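
Alternatively, a simpler check (a minimal sketch, assuming the install succeeded) is to import mxnet directly and print its version:

import mxnet
print(mxnet.__version__)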

 

4. View the tutorials

Then install notedown, and run Jupyter with the notedown plugin loaded:

pip install https://github.com/mli/notedown/tarball/master
jupyter notebook --generate-config
jupyter notebook --NotebookApp.contents_manager_class='notedown.NotedownContentsManager'
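
To avoid passing that flag on every launch, you can instead persist the setting in the config file generated above (a sketch; by default the file is ~/.jupyter/jupyter_notebook_config.py):

# add to jupyter_notebook_config.py
c.NotebookApp.contents_manager_class = 'notedown.NotedownContentsManager'

After that, a plain jupyter notebook will open the .md tutorials as notebooks.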

5. Tutorial notes

Converting to and from NumPy

from mxnet import ndarray as nd
import numpy as np
x = np.ones((2,3))
y = nd.array(x)  # numpy -> mxnet
z = y.asnumpy()  # mxnet -> numpy
print([z, y])

Automatic differentiation

import mxnet.autograd as ag

Suppose we want the derivative of the function $f = 2x^2$ with respect to $x$.

1. Create the variable:

x = nd.array([[1, 2], [3, 4]])

2. Call NDArray's attach_grad() method to ask the system to allocate space for the gradient:

x.attach_grad()

3. Define the function f, recording the computation so it can be differentiated:

with ag.record():
    y = x * 2
    z = y * x

4. Backpropagate to compute the gradient:

z.backward()

5. The gradient:

print('x.grad: ', x.grad)
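
Since $z = 2x^2$, the gradient should be $4x$; a quick sanity check, continuing from the steps above:

# dz/dx = 4x for z = 2 * x^2
assert (x.grad.asnumpy() == (4 * x).asnumpy()).all()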

Linear regression from scratch

#coding=utf-8
"""Linear regression from scratch"""

from mxnet import ndarray as nd
from mxnet import autograd
import matplotlib.pyplot as plt
import random

# 1. Create the dataset
# y[i] = 2 * X[i][0] - 3.4 * X[i][1] + 4.2 + noise
# y = X*w + b + noise
num_inputs = 2
num_examples = 1000

true_w = [2, -3.4]
true_b = 4.2

X = nd.random_normal(shape=(num_examples, num_inputs))
y = true_w[0] * X[:,0] + true_w[1] * X[:,1] + true_b
y += 0.01 * nd.random_normal(shape=y.shape)

# plt.scatter(X[:,1].asnumpy(), y.asnumpy())
# plt.show()

# 2. Read the data in mini-batches
batch_size = 10
def data_iter():
    # shuffle the example indices
    idx = list(range(num_examples))
    random.shuffle(idx)
    for i in range(0, num_examples, batch_size):
        j = nd.array(idx[i:min(i+batch_size, num_examples)])
        yield nd.take(X, j), nd.take(y, j)

# for data, label in data_iter():
#     print (data, label)
#     break

# 3. Initialize the model parameters
w = nd.random_normal(shape=(num_inputs,1))
b = nd.zeros((1,))
params = [w, b]

# print(params)
# allocate space for the gradients
for param in params:
    param.attach_grad()

# 4. Define the model
def net(X):
    return nd.dot(X, w) + b

# 5. Define the loss function
def square_loss(yhat, y):
    # reshape y to yhat's shape to avoid unintended broadcasting
    return (yhat - y.reshape(yhat.shape)) ** 2

# 6. Optimization: mini-batch stochastic gradient descent (SGD)
def SGD(params, lr):
    for param in params:
        param[:] = param - lr * param.grad

# 7. Training
# the ground-truth function
def real_fn(X):
    return 2 * X[:, 0] - 3.4 * X[:, 1] + 4.2
# plot the training-loss curve and a scatter of predicted vs. real values
def plot(losses, X, sample_size=100):
    xs = list(range(len(losses)))
    fig, axes = plt.subplots(1, 2)
    axes[0].set_title('Loss during training')
    axes[0].plot(xs, losses, '-r')
    axes[1].set_title('Estimated vs real function')
    axes[1].plot(X[:sample_size, 1].asnumpy(),
             net(X[:sample_size, :]).asnumpy(), 'or', label='Estimated')
    axes[1].plot(X[:sample_size, 1].asnumpy(),
             real_fn(X[:sample_size, :]).asnumpy(), '*g', label='Real')
    axes[1].legend()
    plt.show()

epochs = 5
learning_rate = 0.001
niter = 0
losses = []
moving_loss = 0
smoothing_constant = 0.01

# training loop
for e in range(epochs):
    total_loss = 0
    # one pass over the dataset
    for data, label in data_iter():
        with autograd.record():
            output = net(data)  # forward pass
            loss = square_loss(output, label)
        loss.backward()  # backward pass
        SGD(params, learning_rate)  # update the parameters
        iter_loss = nd.sum(loss).asscalar() / batch_size
        total_loss += nd.sum(loss).asscalar()

        # track the loss
        niter += 1
        curr_loss = nd.mean(loss).asscalar()
        # exponentially smoothed loss (kept for reference; not printed below)
        moving_loss = (1 - smoothing_constant) * moving_loss + smoothing_constant * curr_loss

        losses.append(iter_loss)
        if (niter + 1) % 100 == 0:
            print("Epoch %s, batch %s. Average loss: %f" % (e, niter, total_loss / num_examples))
            plot(losses, X)
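
After training, the learned parameters should be close to the values used to generate the data; a quick comparison, continuing from the script above:

# learned parameters vs. the ground truth (true_w = [2, -3.4], true_b = 4.2)
print('true_w:', true_w, 'learned w:', w.asnumpy().flatten())
print('true_b:', true_b, 'learned b:', b.asscalar())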

Using the GPU

import mxnet as mx

a = nd.array([1, 2, 3], ctx=mx.gpu())
b = nd.zeros((3, 2), ctx=mx.gpu())

Data can be transferred between devices with copyto and as_in_context:

x = nd.ones((2, 3))              # lives on the default context (CPU)
y = x.copyto(mx.gpu())
z = x.as_in_context(mx.gpu())

The main difference between the two functions: if the source and target contexts are identical, as_in_context does not copy, while copyto always allocates new memory.
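
A quick demonstration, continuing from x above, which lives on the CPU:

print(x.as_in_context(mx.cpu()) is x)  # True: same context, no copy is made
print(x.copyto(mx.cpu()) is x)         # False: copyto allocates a new NDArray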

This is similar to the CUDA memory operations in Caffe:

float* tmp_transform_bbox = NULL;
CUDA_CHECK(cudaMalloc(&tmp_transform_bbox,
    7 * sizeof(Dtype) * rpn_pre_nms_top_n));  // adjust retained_anchor_num
cudaMemcpy(tmp_transform_bbox, &transform_bbox_[transform_bbox_begin],
    rpn_pre_nms_top_n * sizeof(Dtype) * 7, cudaMemcpyDeviceToDevice);

Accessing parameters
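
The snippets below assume a small Gluon network has already been built and initialized; a minimal sketch (the layer sizes here are arbitrary):

from mxnet import nd
from mxnet.gluon import nn

net = nn.Sequential()
with net.name_scope():
    net.add(nn.Dense(4, activation='relu'))
    net.add(nn.Dense(2))
net.initialize()
net(nd.random_normal(shape=(3, 5)))  # one forward pass so parameter shapes are inferred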

w = net[0].weight
b = net[0].bias
print('name: ', net[0].name, '\nweight: ', w, '\nbias: ', b)

print('weight:', w.data())
print('weight gradient', w.grad())
print('bias:', b.data())
print('bias gradient', b.grad())

 

params = net.collect_params()
print(params)
print(params['sequential0_dense0_bias'].data())
print(params.get('dense0_weight').data())

Initializing parameters

from mxnet import init
params = net.collect_params()
params.initialize(init=init.Normal(sigma=0.02), force_reinit=True)
print(net[0].weight.data(), net[0].bias.data())
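
Other built-in initializers can be swapped in the same way, for example Xavier (Glorot) initialization via the standard mxnet.init.Xavier class:

# reinitialize every parameter with Xavier initialization
params.initialize(init=init.Xavier(), force_reinit=True)
print(net[0].weight.data())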

 

6. Troubleshooting

1. Printing weights raises an error under Python 2

w = net[0].weight
b = net[0].bias
print('name: ', net[0].name, '\nweight: ', w, '\nbias: ', b)

Change line 119 of C:\Anaconda2\envs\gluon\Lib\site-packages\mxnet\gluon\parameter.py to:

 

s = 'Parameter {name} (shape={_shape}, dtype={dtype})'

 

Also, under Python 2 you must drop the parentheses after print, since print is a statement there rather than a function.
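
A cleaner alternative, if you control the script, is to keep the parenthesized form and enable the print function under Python 2:

# at the top of the script: makes print() a function under Python 2 as well
from __future__ import print_function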

