Minkowski Engine

The Minkowski Engine is an auto-differentiation library for sparse tensors. It supports all standard neural network layers, such as convolution, pooling, unpooling, and broadcasting operations on sparse tensors. For more information, please visit the documentation page.

pip install git+https://github.com/NVIDIA/MinkowskiEngine.git

 

Sparse Tensor Networks: Neural Networks for Spatially Sparse Tensors

Compressing a neural network to speed up inference and minimize memory footprint has been studied widely. One of the popular techniques for model compression is pruning the weights in convolutional networks, also known as sparse convolutional networks. Such parameter-space sparsity used for model compression compresses networks that operate on dense tensors, and all intermediate activations of these networks are also dense tensors.

In this work, however, we focus on sparse data, in particular spatially sparse high-dimensional inputs. These data can also be represented as sparse tensors, and such sparse tensors are commonplace in high-dimensional problems such as 3D perception, registration, and statistical data. We define neural networks specialized for these inputs as sparse tensor networks; these sparse tensor networks process sparse tensors and generate sparse tensors as outputs. To construct a sparse tensor network, we build all the standard neural network layers, such as MLPs, non-linearities, convolution, normalization, and pooling operations, in the same way they are defined on dense tensors, as implemented in the Minkowski Engine.

The convolution layer on a sparse tensor works similarly to that on a dense tensor. On a sparse tensor, however, the convolution outputs are computed only at a set of specified points, which can be controlled in the generalized convolution.
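
To make the "outputs only at specified points" idea concrete, here is a small conceptual sketch in plain PyTorch, not the Minkowski Engine API: a sparse tensor is modeled as a dictionary from integer coordinates to feature vectors, and the generalized convolution sums the kernel weight times the neighbor feature over only those offsets whose neighbor coordinate actually exists in the input.

import torch

# Toy sparse input in 2D: a dict from integer coordinates to feature vectors.
in_feats = {
    (0, 0): torch.tensor([1.0, 2.0]),
    (0, 1): torch.tensor([3.0, 4.0]),
    (2, 2): torch.tensor([5.0, 6.0]),
}

# A 3x3 kernel: one weight matrix per offset, mapping 2 input channels to 1 output channel.
offsets = [(di, dj) for di in (-1, 0, 1) for dj in (-1, 0, 1)]
weights = {o: torch.randn(1, 2) for o in offsets}

def generalized_sparse_conv(in_feats, out_coords):
    # Compute outputs only at the requested coordinates; absent neighbors contribute nothing.
    out = {}
    for u in out_coords:
        acc = torch.zeros(1)
        for (di, dj), W in weights.items():
            v = (u[0] + di, u[1] + dj)
            if v in in_feats:
                acc = acc + W @ in_feats[v]
        out[u] = acc
    return out

# Here the outputs are requested at the input coordinates, but any other point set could be used.
print(generalized_sparse_conv(in_feats, list(in_feats.keys())))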

Features

  • Unlimited high-dimensional sparse tensor support
  • All standard neural network layers (convolution, pooling, broadcast, etc.)
  • Dynamic computation graph
  • Custom kernel shapes
  • Multi-GPU training
  • Multi-threaded kernel map
  • Multi-threaded compilation
  • Highly-optimized GPU kernels

Requirements

  • Ubuntu >= 14.04
  • 11.1 > CUDA >= 10.1.243
  • pytorch >= 1.5
  • python >= 3.6
  • GCC >= 7

Pip

MinkowskiEngine is distributed via PyPI MinkowskiEngine and can be installed simply with pip. Follow the instructions to install pytorch first, then install openblas.

sudo apt install libopenblas-dev
pip install torch
pip install -U MinkowskiEngine --install-option="--blas=openblas" -v
 
# For pip installation from the latest source
# pip install -U git+https://github.com/NVIDIA/MinkowskiEngine

If you want to specify arguments for the setup script, please refer to the following command.

# Uncomment some options if things don't work
pip install -U git+https://github.com/NVIDIA/MinkowskiEngine \
#                           \ # uncomment the following line if you want to force cuda installation
#                           --install-option="--force_cuda" \
#                           \ # uncomment the following line if you want to force no cuda installation. force_cuda supersedes cpu_only
#                           --install-option="--cpu_only" \
#                           \ # uncomment the following line when torch fails to find cuda_home.
#                           --install-option="--cuda_home=/usr/local/cuda" \
#                           \ # uncomment the following line to override to openblas, atlas, mkl, blas
#                           --install-option="--blas=openblas" \

Quick Start

To use the Minkowski Engine, you first need to import the engine. Then, you need to define the network. If the data you have is not quantized, you need to voxelize or quantize the (spatial) data into a sparse tensor. Fortunately, the Minkowski Engine provides a quantization function (MinkowskiEngine.utils.sparse_quantize).
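
As a rough sketch of that quantization step (the exact argument names and return types of sparse_quantize may differ between MinkowskiEngine versions, and the point cloud here is made up for illustration):

import numpy as np
import MinkowskiEngine as ME

# Hypothetical raw point cloud: N x 3 float coordinates (e.g. in meters) and N x 3 colors.
xyz = np.random.rand(1000, 3)
rgb = np.random.rand(1000, 3)

voxel_size = 0.02  # 2 cm voxels

# Quantize the continuous coordinates into unique integer voxel coordinates;
# points that fall into the same voxel are reduced to a single entry.
coords, feats = ME.utils.sparse_quantize(xyz / voxel_size, rgb)

The quantized coords and feats can then be collated into a batch and wrapped in an ME.SparseTensor, as in the network example further down this page.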

Anaconda

We recommend python>=3.6 for installation. First, follow the anaconda documentation to install anaconda on your computer.

sudo apt install libopenblas-dev
conda create -n py3-mink python=3.8
conda activate py3-mink
conda install numpy mkl-include pytorch cudatoolkit=11.0 -c pytorch
pip install -U git+https://github.com/NVIDIA/MinkowskiEngine

System Python

Like the anaconda installation, make sure that you install pytorch with the same CUDA version that nvcc uses.

# install system requirements
sudo apt install python3-dev libopenblas-dev
 
# Skip if you already have pip installed on your python3
curl https://bootstrap.pypa.io/get-pip.py | python3
 
# Get pip and install python requirements
python3 -m pip install torch numpy
 
git clone https://github.com/NVIDIA/MinkowskiEngine.git
 
cd MinkowskiEngine
 
python setup.py install
# To specify blas, CUDA_HOME and force CUDA installation, use the following command
# python setup.py install --blas=openblas --cuda_home=/usr/local/cuda --force_cuda

Creating a Network

import torch.nn as nn
import MinkowskiEngine as ME
 
class ExampleNetwork(ME.MinkowskiNetwork):
 
    def __init__(self, in_feat, out_feat, D):
        super(ExampleNetwork, self).__init__(D)
        self.conv1 = nn.Sequential(
            ME.MinkowskiConvolution(
                in_channels=in_feat,
                out_channels=64,
                kernel_size=3,
                stride=2,
                dilation=1,
                has_bias=False,
                dimension=D),
            ME.MinkowskiBatchNorm(64),
            ME.MinkowskiReLU())
        self.conv2 = nn.Sequential(
            ME.MinkowskiConvolution(
                in_channels=64,
                out_channels=128,
                kernel_size=3,
                stride=2,
                dimension=D),
            ME.MinkowskiBatchNorm(128),
            ME.MinkowskiReLU())
        self.pooling = ME.MinkowskiGlobalPooling()
        self.linear = ME.MinkowskiLinear(128, out_feat)
 
    def forward(self, x):
        out = self.conv1(x)
        out = self.conv2(out)
        out = self.pooling(out)
        return self.linear(out)

Forward and backward using the custom network

    # loss and network
    criterion = nn.CrossEntropyLoss()
    net = ExampleNetwork(in_feat=3, out_feat=5, D=2)
    print(net)
 
    # a data loader must return a tuple of coords, features, and labels.
    coords, feat, label = data_loader()
    input = ME.SparseTensor(feat, coords=coords)
    # Forward
    output = net(input)
 
    # Loss
    loss = criterion(output.F, label)
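
The heading above also promises a backward pass; a minimal continuation, assuming the net and loss variables from the snippet above and an SGD optimizer chosen purely for illustration, could look like this:

    import torch.optim as optim

    # Standard PyTorch optimizer over the sparse network's parameters
    optimizer = optim.SGD(net.parameters(), lr=0.1)

    # Backward
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()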

 

