This is the second part of "Introduction to Tensors in PyTorch". If you missed the first part, read it here first and then come back to this post.
In the previous post I covered the basics of tensors: defining them, reshaping them, and casting their datatypes. But tensors are not limited to such operations; you can also perform arithmetic calculations, matrix multiplication, and statistical functions such as the mean.
In this post, you will learn the following:
- Indexing and Slicing of Tensors
- Arithmetic Operations on Tensors
- Basic Functions on the Tensors
- Operations on 2D Tensors
So let's begin...
Indexing and Slicing of Tensors
PyTorch was created to provide Pythonic ways to deal with tensors. Just like normal array slicing or indexing, you can perform such operations on tensors, and the returned values are also tensors. Indexing a tensor returns a tensor with one dimension fewer than the original.
Since the original tensor was 1-D, the returned value is of course a 0-D tensor. Similarly, if you have a 2-D tensor and index it with a single index, you will get a 1-D tensor.
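As a quick sketch of this behaviour (the tensor values here are made up for illustration):

```python
import torch

# 1-D tensor: indexing returns a 0-D tensor, slicing returns a 1-D tensor
u = torch.tensor([1, 2, 3, 4, 5])
print(u[0])         # tensor(1) -- a 0-D tensor
print(u[1:4])       # tensor([2, 3, 4])
print(u[0].item())  # 1 -- plain Python number

# 2-D tensor: a single index returns a 1-D row tensor
m = torch.tensor([[1, 2, 3], [4, 5, 6]])
print(m[0])         # tensor([1, 2, 3])
print(m[0].dim())   # 1
```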
NOTE: If you want to get the raw Python list from a 1-D or higher tensor, you can use `Tensor.tolist()`.
Like a normal list, you can also update a tensor value by simply assigning at an index. For example:
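A minimal sketch of in-place updating (the values are arbitrary):

```python
import torch

v = torch.tensor([10, 20, 30])
v[1] = 99    # update a single element in place, just like a Python list
print(v)     # tensor([10, 99, 30])
```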
NOTE: A slice of a tensor is a new tensor object, but it shares the underlying storage with the original (it is a view). So changing the value of the slice at a particular index will also overwrite the original tensor. Read More
In a 2-D tensor, slicing can be done on rows, on columns, or on both. The part before the comma (`,`) selects rows and the part after it selects columns. So it will look like:
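A sketch of row and column slicing, assuming an example 3x3 tensor:

```python
import torch

m = torch.tensor([[1, 2, 3],
                  [4, 5, 6],
                  [7, 8, 9]])

print(m[0, :])     # first row:     tensor([1, 2, 3])
print(m[:, 1])     # second column: tensor([2, 5, 8])
print(m[0:2, 1:])  # sub-matrix of the first two rows, last two columns
```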
Arithmetic Operations on Tensors
The most basic operations on tensors are vector addition and subtraction. Visit the link in case you want to study the maths behind these vector operations.
Suppose you have two tensors \(u\) and \(v\) defined as
Then the vector addition and subtraction operations look like this, respectively:
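A minimal sketch, assuming example values for \(u\) and \(v\):

```python
import torch

u = torch.tensor([1.0, 2.0])
v = torch.tensor([3.0, 4.0])

print(u + v)  # tensor([4., 6.])
print(u - v)  # tensor([-2., -2.])
```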
Scalar operations are also supported. For example, taking \(5\) as our scalar quantity
The operations of addition, subtraction, multiplication, and division are shown below, respectively:
In PyTorch, there is no need to create a 0-D tensor to perform scalar operations; you can simply use the scalar value directly. For example,
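A short sketch of the scalar operations, using 5 as the scalar and an assumed example tensor:

```python
import torch

u = torch.tensor([1.0, 2.0, 3.0])

print(u + 5)  # tensor([6., 7., 8.])
print(u - 5)  # tensor([-4., -3., -2.])
print(u * 5)  # tensor([ 5., 10., 15.])
print(u / 5)  # each element divided by 5
```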
Element-wise multiplication is fairly short and easy, using the `*` symbol. (Note that this is the Hadamard product, not the mathematical cross product; for the latter, PyTorch provides `torch.cross`.) To perform a dot product, you should use `torch.dot(vector1, vector2)` or `Tensor.dot(vector2)`. The dot product will return a 0-D tensor, as defined in maths.
NOTE: While performing element-wise multiplication or a dot product, the dimensions of both tensors must be equal; otherwise you will get a `RuntimeError`.
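Putting both products together in one sketch (example values assumed):

```python
import torch

u = torch.tensor([1, 2, 3])
v = torch.tensor([4, 5, 6])

print(u * v)            # element-wise product: 4, 10, 18
print(torch.dot(u, v))  # tensor(32) -- a 0-D tensor
print(u.dot(v).item())  # 32 as a plain Python number

# Mismatched sizes raise a RuntimeError, e.g.:
# torch.dot(u, torch.tensor([1, 2]))
```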
Basic Functions on the Tensors
Torch tensors provide a plethora of functions that you can apply to get the desired results. The first of them is the mean, `Tensor.mean()`:
NOTE: Since the mean of a tensor is a floating-point value, you can only take the mean of a floating tensor; calling it on an integer tensor raises a `RuntimeError`.
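A small sketch of `mean()` and a couple of the other built-in reductions (example values assumed):

```python
import torch

t = torch.tensor([1.0, 2.0, 3.0, 4.0])
print(t.mean())          # tensor(2.5000)
print(t.max(), t.min())  # other reductions work the same way
print(t.std())

# Integer tensors must be cast to a floating dtype first:
i = torch.tensor([1, 2, 3, 4])
print(i.float().mean())  # tensor(2.5000)
```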
You can also apply a function to a tensor element-wise. Suppose you have a tensor containing various values of pi and you want to apply the sine and cosine functions to it. You can use `torch.sin(tensor)` and `torch.cos(tensor)`.
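A sketch of applying `torch.sin` and `torch.cos` element-wise (the angle values are assumed for illustration):

```python
import math
import torch

angles = torch.tensor([0.0, math.pi / 2, math.pi])
print(torch.sin(angles))  # approximately [0, 1, 0]
print(torch.cos(angles))  # approximately [1, 0, -1]
```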
Sometimes you will need an evenly spaced list of numbers within a range; for that you can use `torch.linspace(start, end, steps)`, where `steps` is the number of points to generate, not the spacing between them. Let's make it more interactive by plotting \(\sin (- \pi ) \) to \( \sin ( \pi ) \) and \( \cos (- \pi ) \) to \( \cos ( \pi ) \).
Now let's import matplotlib and plot both graphs.
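One possible version of the plot; the choice of 100 sample points is arbitrary:

```python
import math
import torch
import matplotlib.pyplot as plt

# 100 evenly spaced points from -pi to pi
x = torch.linspace(-math.pi, math.pi, 100)

plt.plot(x.numpy(), torch.sin(x).numpy(), label="sin(x)")
plt.plot(x.numpy(), torch.cos(x).numpy(), label="cos(x)")
plt.legend()
plt.show()
```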
Operations on 2D Tensors
Greyscale images are the best example of 2D tensors. The \( width * height \) pixels that you see correspond to the \( cols * rows \) of the matrix.
The above image demonstrates how a binary image can be represented as a matrix. In the case of a grayscale image, each element of the matrix takes a value in the range \( 0 - 255 \).
You can create a random 2D tensor using the `torch.rand(*size)` method. This is a general method: you can create a random tensor of any dimension with it. The `Tensor.numel()` method returns the total number of elements in a tensor.
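A sketch of creating a random 2D tensor and counting its elements (the shape 3x4 is arbitrary):

```python
import torch

img = torch.rand(3, 4)   # 2D tensor of shape (3, 4), values drawn from [0, 1)
print(img)
print(img.shape)         # torch.Size([3, 4])
print(img.numel())       # 12 -- total number of elements
```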