13. Defining an MLP quickly with nn.Module

In 12. LR multi-class classification in practice (MNIST dataset), the network's parameters w and b were defined by hand, which required knowing their shapes exactly. With a deep learning framework, however, you can build the network directly from ready-made layer classes.

1. Building an MLP with nn.Linear:

import torch
from torch import nn

x = torch.randn(1, 784)
print(x.shape)

layer1 = nn.Linear(784,200)
layer2 = nn.Linear(200,200)
layer3 = nn.Linear(200,10)

x = layer1(x)
print(x.shape)

x = layer2(x)
print(x.shape)

x = layer3(x)
print(x.shape)

The output is:

torch.Size([1, 784])
torch.Size([1, 200])
torch.Size([1, 200])
torch.Size([1, 10])
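
Note that the w and b from the previous post are now managed by nn.Linear itself: for nn.Linear(784, 200) the weight has shape [200, 784], i.e. (out_features, in_features), and the bias has shape [200]. You can still inspect them if you want to, for example:

import torch
from torch import nn

layer1 = nn.Linear(784, 200)
print(layer1.weight.shape)  # torch.Size([200, 784]), i.e. (out_features, in_features)
print(layer1.bias.shape)    # torch.Size([200])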

2. Adding activation functions:

ReLU is used very widely nowadays; as a general rule, use ReLU whenever it fits.

import torch
from torch import nn
from torch.nn import functional as F

x = torch.randn(1, 784)
print(x.shape)

layer1 = nn.Linear(784,200)
layer2 = nn.Linear(200,200)
layer3 = nn.Linear(200,10)

x = layer1(x)
x = F.relu(x,inplace=True)
print(x.shape)

x = layer2(x)
x = F.relu(x,inplace=True)
print(x.shape)

x = layer3(x)
x = F.relu(x,inplace=True)
print(x.shape)

The output is:

torch.Size([1, 784])
torch.Size([1, 200])
torch.Size([1, 200])
torch.Size([1, 10])

3. Inheriting from nn.Module directly to define your own network structure:

class MLP(nn.Module):
	"""
	Inherit from nn.Module and define the network's layers yourself
	"""

	def __init__(self):
		"""在构造器中定义层次结构"""
		super(MLP, self).__init__()
		# Define each layer of the network here; any class that inherits from nn.Module can be added
		self.module = nn.Sequential(
			nn.Linear(784, 200),
			nn.ReLU(inplace=True),
			nn.Linear(200, 200),
			nn.ReLU(inplace=True),
			nn.Linear(200, 10),
			nn.ReLU(inplace=True)
		)
	def forward(self, x):
		"""
		Define the forward pass
		"""
		x = self.module(x)
		return x
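
Once the class is defined, a forward pass is just a call on an instance; nn.Module dispatches the call to forward() for us. A minimal usage sketch, assuming the imports from the earlier snippets:

net = MLP()
x = torch.randn(1, 784)
out = net(x)        # calling the instance runs forward(x) internally
print(out.shape)    # torch.Size([1, 10])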

4. torch.nn vs. torch.nn.functional:

  • Class-style API

APIs of this style generally live under torch.nn and their names start with an uppercase letter.
For example: nn.Linear, nn.ReLU

  • Function-style API

APIs of this style generally live under torch.nn.functional and their names are all lowercase.
For example: F.relu(), F.cross_entropy()
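
The two styles compute the same thing; the difference is that the class style is instantiated first (and can hold parameters as state), while the function style is called directly and is stateless. A small sketch comparing the two:

import torch
from torch import nn
from torch.nn import functional as F

x = torch.randn(4, 10)

# class style: build the module first, then call it
relu_layer = nn.ReLU()
y1 = relu_layer(x)

# function style: call the function directly
y2 = F.relu(x)

print(torch.equal(y1, y2))  # True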

5. Solving the MNIST classification task from the previous post with our own network:

Compared with the previous post, this approach is more high-level; the previous one worked at a lower level.

import torch
from torch import nn
from torch import optim
from torchvision import datasets, transforms


class MLP(nn.Module):

	def __init__(self):
		super(MLP, self).__init__()
		self.module = nn.Sequential(
			nn.Linear(784, 200),
			nn.ReLU(inplace=True),
			nn.Linear(200, 200),
			nn.ReLU(inplace=True),
			nn.Linear(200, 10),
			nn.ReLU(inplace=True)
		)

	def forward(self, x):
		x = self.module(x)
		return x


batch_size = 200
learning_rate = 0.01
epochs = 10

train_loader = torch.utils.data.DataLoader(
	datasets.MNIST('./data', train=True, download=True,
				   transform=transforms.Compose([
					   transforms.ToTensor(),
					   transforms.Normalize((0.1307,), (0.3081,))
				   ])),
	batch_size=batch_size, shuffle=True)


test_loader = torch.utils.data.DataLoader(
	datasets.MNIST('./data', train=False, transform=transforms.Compose([
		transforms.ToTensor(),
		transforms.Normalize((0.1307,), (0.3081,))
	])),
	batch_size=batch_size, shuffle=True)

# Training + testing loop
net = MLP()
# net.parameters() yields all parameters of the network defined by this class, i.e. every w and every b
optimizer = optim.SGD(net.parameters(), lr=learning_rate)
Loss = nn.CrossEntropyLoss()

for epoch in range(epochs):
	for batch_idx, (data, target) in enumerate(train_loader):
		data = data.reshape(-1, 28 * 28)
		logits = net(data)  # output of the MLP network defined above
		loss = Loss(logits, target)  # nn.CrossEntropyLoss() applies Softmax internally
		optimizer.zero_grad()  # clear old gradient information
		loss.backward()
		optimizer.step()

		if batch_idx % 100 == 0:
			print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
				epoch, batch_idx * len(data), len(train_loader.dataset),
					   100. * batch_idx / len(train_loader), loss.item()))

	test_loss = 0
	correct = 0

	for data, target in test_loader:
		data = data.reshape(-1, 28 * 28)
		logits = net(data)
		test_loss += Loss(logits, target).item()
	test_loss /= len(test_loader.dataset)
	print('\nTest set: Average loss: {:.4f}'.format(test_loss))

The output is:

Train Epoch: 0 [0/60000 (0%)]	Loss: 2.302029
Train Epoch: 0 [20000/60000 (33%)]	Loss: 2.012484
Train Epoch: 0 [40000/60000 (67%)]	Loss: 1.506401

Test set: Average loss: 0.0062

Train Epoch: 1 [0/60000 (0%)]	Loss: 1.219504
....... (truncated)
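
The test loop above defines correct but never updates it. If you also want to report test accuracy, the loop could be extended roughly like this (a sketch reusing the net, Loss and test_loader defined above):

test_loss = 0
correct = 0

with torch.no_grad():  # no gradients are needed during evaluation
	for data, target in test_loader:
		data = data.reshape(-1, 28 * 28)
		logits = net(data)
		test_loss += Loss(logits, target).item()
		pred = logits.argmax(dim=1)              # predicted class for each sample
		correct += pred.eq(target).sum().item()  # count correct predictions

test_loss /= len(test_loader.dataset)
print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.2f}%)'.format(
	test_loss, correct, len(test_loader.dataset),
	100. * correct / len(test_loader.dataset)))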

This time we did not do any manual parameter initialization.
First, w and b are now managed by nn.Linear rather than being tensors we created ourselves, so there is no point in the code above where we initialize them by hand.
Second, these high-level layers come with their own default initialization scheme.
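
That said, the parameters are still reachable through each layer's weight and bias attributes, so custom initialization is possible when you need it. A sketch using torch.nn.init on the MLP class above:

net = MLP()
for m in net.modules():
	if isinstance(m, nn.Linear):
		nn.init.kaiming_normal_(m.weight)  # e.g. Kaiming-normal initialization for the weights
		nn.init.zeros_(m.bias)             # zero the biases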
