Wrappers to use torch and lua from python
Create torch tensors and call operations on them; instantiate nn network modules, train them, and make predictions.

Create torch tensors:

import PyTorch
a = PyTorch.FloatTensor(2,3).uniform()
a += 3
print('a', a)
print('a.sum()', a.sum())
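For readers coming from numpy, the snippet above behaves like the following pure-numpy sketch (an analogy only; it does not use the PyTorch wrapper itself):

```python
import numpy as np

# 2x3 tensor drawn from U(0, 1), then shifted by 3, as in the wrapper example
a = np.random.uniform(size=(2, 3)).astype(np.float32)
a += 3
print('a', a)
# every element lies in [3, 4), so the sum of 6 elements lies in [18, 24)
print('a.sum()', a.sum())
```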
Instantiate nn network modules, train them, and make predictions:

import PyTorch
from PyTorchAug import nn
net = nn.Sequential()
net.add(nn.SpatialConvolutionMM(1, 16, 5, 5, 1, 1, 2, 2))
net.add(nn.ReLU())
net.add(nn.SpatialMaxPooling(3, 3, 3, 3))
net.add(nn.SpatialConvolutionMM(16, 32, 3, 3, 1, 1, 1, 1))
net.add(nn.ReLU())
net.add(nn.SpatialMaxPooling(2, 2, 2, 2))
net.add(nn.Reshape(32 * 4 * 4))
net.add(nn.Linear(32 * 4 * 4, 150))
net.add(nn.Tanh())
net.add(nn.Linear(150, 10))
net.add(nn.LogSoftMax())
net.float()
crit = nn.ClassNLLCriterion()
crit.float()
net.zeroGradParameters()
input = PyTorch.FloatTensor(5, 1, 28, 28).uniform()  # batch of 5 random 1x28x28 images
labels = PyTorch.ByteTensor(5).geometric(0.9).icmin(10)  # random class labels in [1, 10] (torch labels are 1-based)
output = net.forward(input)
loss = crit.forward(output, labels)
gradOutput = crit.backward(output, labels)
gradInput = net.backward(input, gradOutput)
net.updateParameters(0.02)
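The Reshape(32 * 4 * 4) value is determined by how each layer shrinks the 28x28 input. As a sketch, using the standard floor-mode output-size formulas for convolution and pooling:

```python
def conv_out(size, k, stride=1, pad=0):
    # convolution output size: floor((size + 2*pad - k) / stride) + 1
    return (size + 2 * pad - k) // stride + 1

def pool_out(size, k, stride):
    # max-pooling output size (floor mode): floor((size - k) / stride) + 1
    return (size - k) // stride + 1

s = 28                       # MNIST input is 28x28
s = conv_out(s, k=5, pad=2)  # SpatialConvolutionMM(1, 16, 5, 5, 1, 1, 2, 2) -> 28
s = pool_out(s, 3, 3)        # SpatialMaxPooling(3, 3, 3, 3) -> 9
s = conv_out(s, k=3, pad=1)  # SpatialConvolutionMM(16, 32, 3, 3, 1, 1, 1, 1) -> 9
s = pool_out(s, 2, 2)        # SpatialMaxPooling(2, 2, 2, 2) -> 4
print(32 * s * s)            # 512 = 32 * 4 * 4, the Reshape size
```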
Example lua class:
require 'torch'
require 'nn'

local TorchModel = torch.class('TorchModel')

function TorchModel:__init(backend, imageSize, numClasses)
  self:buildModel(backend, imageSize, numClasses)
  self.imageSize = imageSize
  self.numClasses = numClasses
  self.backend = backend
end

function TorchModel:buildModel(backend, imageSize, numClasses)
  self.net = nn.Sequential()
  local net = self.net
  net:add(nn.SpatialConvolutionMM(1, 16, 5, 5, 1, 1, 2, 2))
  net:add(nn.ReLU())
  net:add(nn.SpatialMaxPooling(3, 3, 3, 3))
  net:add(nn.SpatialConvolutionMM(16, 32, 3, 3, 1, 1, 1, 1))
  net:add(nn.ReLU())
  net:add(nn.SpatialMaxPooling(2, 2, 2, 2))
  net:add(nn.Reshape(32 * 4 * 4))
  net:add(nn.Linear(32 * 4 * 4, 150))
  net:add(nn.Tanh())
  net:add(nn.Linear(150, numClasses))
  net:add(nn.LogSoftMax())
  self.crit = nn.ClassNLLCriterion()
  self.net:float()
  self.crit:float()
end

function TorchModel:trainBatch(learningRate, input, labels)
  self.net:zeroGradParameters()
  local output = self.net:forward(input)
  local loss = self.crit:forward(output, labels)
  local gradOutput = self.crit:backward(output, labels)
  self.net:backward(input, gradOutput)
  self.net:updateParameters(learningRate)
  local _, prediction = output:max(2)
  local numRight = labels:int():eq(prediction:int()):sum()
  return {loss=loss, numRight=numRight}  -- you can return a table; it will become a python dictionary
end

function TorchModel:predict(input)
  local output = self.net:forward(input)
  local _, prediction = output:max(2)
  return prediction:byte()
end
Python script that calls this class. Assume the lua class is stored in the file "torch_model.lua":
import PyTorch
import PyTorchHelpers
import numpy as np
from mnist import MNIST
batchSize = 32
numEpochs = 2
learningRate = 0.02
TorchModel = PyTorchHelpers.load_lua_class('torch_model.lua', 'TorchModel')
backend = 'cpu'  # the constructor expects a backend name (buildModel above does not yet use it)
torchModel = TorchModel(backend, 28, 10)
mndata = MNIST('../../data/mnist')
imagesList, labelsList = mndata.load_training()
labels = np.array(labelsList, dtype=np.uint8)
images = np.array(imagesList, dtype=np.float32)
labels += 1 # since torch/lua labels are 1-based
N = labels.shape[0]
numBatches = N // batchSize
for epoch in range(numEpochs):
    epochLoss = 0
    epochNumRight = 0
    for b in range(numBatches):
        res = torchModel.trainBatch(
            learningRate,
            images[b * batchSize:(b + 1) * batchSize],
            labels[b * batchSize:(b + 1) * batchSize])
        epochLoss += res['loss']
        epochNumRight += res['numRight']
    print('epoch ' + str(epoch) + ' loss: ' + str(epochLoss / numBatches) +
          ' accuracy: ' + str(epochNumRight * 100.0 / N) + '%')
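Note that numBatches = N // batchSize floors, so up to batchSize - 1 trailing samples are skipped each epoch (while the accuracy denominator N still counts them). A quick sketch of that arithmetic:

```python
N = 60000  # MNIST training-set size
for batchSize in (32, 100, 128):
    numBatches = N // batchSize
    dropped = N - numBatches * batchSize  # samples never seen in an epoch
    print(batchSize, numBatches, dropped)
# 32  -> 1875 batches, 0 dropped
# 100 -> 600 batches, 0 dropped
# 128 -> 468 batches, 96 dropped
```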
It's easy to modify the lua script to use CUDA, or OpenCL.
Pre-requisites (assuming Torch is already installed):

luarocks install nn
sudo apt-get install lua5.1 liblua5.1-dev
To install the Python dependencies, run:
pip install -r requirements.txt
pip install -r test/requirements.txt
To clone and build, run:
git clone https://github.com/hughperkins/pytorch.git
cd pytorch
source ~/torch/install/bin/torch-activate
./build.sh
To run the tests:
source ~/torch/install/bin/torch-activate
cd pytorch
./run_tests.sh
Examples of training models/networks using pytorch:
Addons, for using cuda tensors and opencl tensors directly from python (not needed for training networks, but useful if you want to manipulate cuda tensors directly from python).
Please note that right now I'm focused 100% on cuda-on-cl, so please be patient during this period.
12 September:
8 September:
- added PyTorchAug.save(filename, object) and PyTorchAug.load(filename), to save/load Torch .t7 files
26 August:
- installs via --user, into home directory
14 April:
17 March:
16 March:
6 March:
- nn classes are available now, without needing to explicitly register inside pytorch
- bumped version to v3.0.0 to enable this, which is a breaking change, since the nn classes are now in PyTorchAug.nn, instead of directly in PyTorchAug
5 March:
- added PyTorchHelpers.load_lua_class(lua_filename, lua_classname), to easily import a lua class from a lua file
2 March:
28th February:
26th February:
- modified / to be the div operation for float and double tensors, and // for int-type tensors, such as byte, long, int
- bumped version from 1.0.0 to 2.0.0-SNAPSHOT
- added .asNumpyTensor() to convert a torch tensor to a numpy tensor
24th February: