The first thing we need to realise is that F.relu doesn't return a hidden layer. Rather, it activates the hidden layer that comes before it. F.relu is a function that simply takes a tensor as its input, converts all values in that tensor that are less than 0 to zero, and spits this out as an output. nn.ReLU does the exact same thing, except that it represents the operation in a different way, requiring us to first initialise the module with nn.ReLU() before using it in the forward call.

In fact, nn.ReLU itself encapsulates F.relu, as we can verify by peering directly into PyTorch's torch.nn code (repo url / source url):

```python
# Simplified from torch/nn/modules/activation.py
class ReLU(Module):
    def __init__(self, inplace=False):
        super(ReLU, self).__init__()
        self.inplace = inplace

    def forward(self, input):
        return F.relu(input, inplace=self.inplace)
```

Notice that nn.ReLU directly uses F.relu in its forward pass. Therefore we can surmise that the nn.ReLU approach is simply a more verbose way of calling the F.relu function we were using earlier. This led me to an important realisation: F.relu itself likely doesn't hold any tensor state. Hence the reason why it is known as the functional approach!

# Do we need to initialise nn.ReLU multiple times?

Which brings up the following question that had me stuck for a little bit. In other words (or code), do we need a separate nn.ReLU() for every activation in the network, or will a single instance do? Since F.relu holds no tensor state of its own, a single instance is enough and can be reused wherever an activation is needed.

If nn.ReLU is simply a more verbose way of calling the F.relu function, why would we bother with it in the first place? To me it's simply a question of consistency. The nn.ReLU approach offers us the ability to think in terms of a convenient set of layer abstractions. Instead of looking at a hidden layer and having to think of it as being activated by a ReLU function, I can look at a layer and think of it as a ReLU layer.

The fact is, of course, there is no such thing as a ReLU layer, or even a ReLU tensor. In fact, there isn't even such a thing as a hidden layer. Rather, in neural networks there are just a bunch of neurons (tensors, in PyTorch) interacting with each other. Even nn.Linear is an abstraction that defines the relationship between a set of tensors according to a specific formula: initialise a set of tensors that are connected in a certain way, then, with each pass, take in a certain number of inputs, do a bunch of computations, and spit out a certain number of outputs. It also records these outputs to assist in the back-propagation that comes later.

But being able to think of a set of interconnected neurons as a ReLU layer, a Dropout layer, or a Convolutional layer brings a set of benefits. It allows us to treat that specific set of interconnected neurons as a single abstraction. This allows me to code out my network in the following way:

```python
import torch.nn as nn

class Network(nn.Module):
    def __init__(self, num_input, num_output):
        super(Network, self).__init__()
        # The exact layer sizes were not recoverable from the original post;
        # a single Linear layer followed by a ReLU layer is assumed here.
        self.fc = nn.Linear(num_input, num_output)
        self.relu = nn.ReLU()

    def forward(self, x):
        x = self.fc(x)
        x = self.relu(x)
        return x
```
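As a quick usage sketch (my own addition rather than part of the original post; it assumes the Network class defined above, with sizes chosen purely for illustration), the layer-abstraction version can be exercised like this:

```python
import torch

# Hypothetical sizes, chosen purely for illustration.
net = Network(num_input=10, num_output=2)

x = torch.randn(4, 10)   # a batch of 4 samples with 10 features each
out = net(x)             # invokes Network.forward, i.e. Linear -> ReLU
print(out.shape)         # torch.Size([4, 2])
```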
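To close the loop on the two claims above, that nn.ReLU is just a wrapper around F.relu and that it holds no tensor state, here is a small sanity check I have added (it is not from the original post):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.randn(3, 5)

relu_module = nn.ReLU()      # the layer-abstraction approach, initialised once
out_module = relu_module(x)
out_functional = F.relu(x)   # the purely functional approach

# Both approaches zero out the negative entries and give identical results.
print(torch.equal(out_module, out_functional))  # True

# Because nn.ReLU keeps no tensor state, the same instance can be reused
# on any number of different tensors.
y = torch.randn(2, 7)
print(relu_module(y).shape)  # torch.Size([2, 7])
```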