Understanding contiguous tensors and the contiguous() method, and the difference between view and reshape




Memory sharing:
In the example below, x has the standard row-major layout of a freshly created tensor, while y shares the same underlying storage as x but reads it with different strides, so its element order is not that of a tensor of the same shape created from scratch.

 
For example, when you call transpose(), PyTorch does not allocate a new tensor with a new layout; it only modifies the meta information in the Tensor object (offset and strides) so that it describes the new shape. The transposed tensor and the original tensor really do share the same memory:

    import torch

    x = torch.randn(3, 2)
    y = torch.transpose(x, 0, 1)
    x[0, 0] = 42
    print(y[0, 0])   # prints 42, because y shares x's storage

This is where the concept of "contiguous" comes in. Above, x is contiguous but y is not, because y's memory layout differs from that of a tensor of the same shape created from scratch. Note that the word "contiguous" is a bit misleading: it does not mean the tensor's content is spread across disconnected blocks of memory. The bytes are still allocated in a single block; only the order of the elements is different. When you call contiguous(), PyTorch makes a copy of the tensor so that the element order in memory is the same as if a tensor of that shape had been created from scratch. Normally you do not need to worry about this: if PyTorch expects a contiguous tensor and the input is not, you will get "RuntimeError: input is not contiguous", and the fix is simply to add a call to contiguous().
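To make the layout difference concrete, here is a minimal sketch (not from the original post) that inspects is_contiguous() and stride() before and after calling contiguous():

    import torch

    x = torch.randn(3, 2)
    y = x.transpose(0, 1)            # same storage as x, different strides

    print(y.data_ptr() == x.data_ptr())  # True: y shares x's storage
    print(x.is_contiguous())         # True
    print(y.is_contiguous())         # False
    print(x.stride())                # (2, 1): row-major layout for shape (3, 2)
    print(y.stride())                # (1, 2): strides swapped by transpose

    z = y.contiguous()               # copies the data into row-major order
    print(z.is_contiguous())         # True
    print(z.stride())                # (3, 1): row-major layout for shape (2, 3)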

The difference between view and reshape

Another difference is that reshape() can operate on both contiguous and non-contiguous tensors, while view() can only operate on contiguous tensors. See the discussion of contiguous above for what that means.
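As a quick illustration (a minimal sketch, not from the original post): view() raises an error on a non-contiguous tensor, while reshape() succeeds by making a copy when necessary.

    import torch

    x = torch.randn(3, 2)
    y = x.transpose(0, 1)            # non-contiguous view of x

    print(y.reshape(6))              # works: reshape copies because y is not contiguous

    try:
        y.view(6)                    # fails: view requires a compatible (contiguous) layout
    except RuntimeError as e:
        print(e)

Note that when reshape() has to copy, the result no longer shares memory with the original tensor, whereas a successful view() always shares memory with its input.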

What to do when you hit a non-contiguous tensor error

If an operation such as view() complains that the tensor is not contiguous, either call contiguous() first to get a copy with a standard layout, or use reshape(), which accepts non-contiguous tensors and copies the data itself when needed.
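A minimal sketch of both fixes (illustrative only, not from the original post):

    import torch

    x = torch.randn(3, 2)
    y = x.transpose(0, 1)            # non-contiguous, shape (2, 3)

    # Fix 1: make a contiguous copy first, then view() works
    flat1 = y.contiguous().view(-1)

    # Fix 2: use reshape(), which handles non-contiguous tensors directly
    flat2 = y.reshape(-1)

    print(torch.equal(flat1, flat2))  # True: both produce the same flattened tensor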

