
Auto-Encoders

Tuesday, June 18, 2019

In one of my classes last semester, we had to build a variational autoencoder for the MNIST dataset. MNIST is a pretty simple and small dataset, so that wasn't very difficult. Looking for something more challenging, I decided to try to build an autoencoder for faces.

The first challenge was finding a suitable dataset. There are several public datasets, but they are inconsistent in terms of image size, shape and other important details, so I ended up combining several. CelebA was the easiest one to use without having to do much work on it, but it is relatively small, consisting of only about 200,000 images. I also used the ETH Zurich dataset built from Wikipedia and IMDb images, which is quite large, but its images are all different shapes and sizes, so I had to pre-process the data to remove unusable images. I removed any images smaller than the input size I was using of 162x190, and I also removed any images that were wider than they were tall or larger than 500x500. This dataset also contains some images which have been stretched out at the edges to bizarre proportions; I removed these by deleting any image where the 10th row or column was identical to the first row or column. Finally, I resized the remaining large images down to a more reasonable size. This resulted in a dataset of about 390,000 faces, all of which were roughly the right size and shape.
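In case it is useful, the filtering amounts to something like the sketch below. The 162x190 input size, the 500x500 cap, and the 10th-row/column check come from the description above; the function names, the use of PIL and numpy, and resizing everything straight down to the input size are my own placeholders rather than the actual notebook code.

    import os
    import numpy as np
    from PIL import Image

    MIN_W, MIN_H = 162, 190    # input size of the autoencoder
    MAX_W, MAX_H = 500, 500    # anything bigger than this gets discarded

    def is_usable(img):
        """Apply the size and shape filters described above."""
        w, h = img.size
        if w < MIN_W or h < MIN_H:           # too small for the input size
            return False
        if w > h or w > MAX_W or h > MAX_H:  # wider than tall, or too big
            return False
        arr = np.asarray(img.convert("L"))
        # Stretched-out images repeat their edge pixels, so the 10th row or
        # column is identical to the first one.
        if np.array_equal(arr[0], arr[9]) or np.array_equal(arr[:, 0], arr[:, 9]):
            return False
        return True

    def preprocess(src_dir, dst_dir):
        os.makedirs(dst_dir, exist_ok=True)
        for name in os.listdir(src_dir):
            try:
                img = Image.open(os.path.join(src_dir, name)).convert("RGB")
            except OSError:
                continue                     # skip unreadable files
            if not is_usable(img):
                continue
            # Shrink everything down to the input size.
            img.resize((MIN_W, MIN_H), Image.LANCZOS).save(os.path.join(dst_dir, name))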

I decided to train my autoencoder as a normal autoencoder rather than a variational one, mostly to avoid the extra overhead required for the variational layers. I used a latent space of size 4096, and after training for 12 hours a day for a few weeks on Google Colab the results were surprisingly accurate. Once the model seemed to start overfitting the training data I stopped training it so I could play around with it.
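The actual notebooks are on GitHub (see the end of this post); purely as an illustration, a plain (non-variational) convolutional autoencoder with a 4096-dimensional latent space could look roughly like this in PyTorch. Only the latent size and the 162x190 input size come from above; the layer counts, filter sizes and activations are arbitrary placeholders.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    IMG_H, IMG_W = 190, 162   # input size mentioned above (height x width)
    LATENT = 4096             # latent space size mentioned above

    class FaceAutoencoder(nn.Module):
        def __init__(self):
            super().__init__()
            # Encoder: strided convolutions down to a small feature map,
            # then a fully connected layer into the latent space.
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(128, 256, 4, stride=2, padding=1), nn.ReLU(),
            )
            with torch.no_grad():
                feat = self.encoder(torch.zeros(1, 3, IMG_H, IMG_W))
            self.feat_shape = feat.shape[1:]   # (channels, height, width)
            self.to_latent = nn.Linear(feat.numel(), LATENT)
            self.from_latent = nn.Linear(LATENT, feat.numel())
            # Decoder: mirror of the encoder using transposed convolutions.
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),
            )

        def encode(self, x):
            return self.to_latent(self.encoder(x).flatten(1))

        def decode(self, z):
            feat = self.from_latent(z).view(-1, *self.feat_shape)
            out = self.decoder(feat)
            # The transposed convs don't land exactly on 190x162, so resize.
            out = F.interpolate(out, size=(IMG_H, IMG_W), mode="bilinear",
                                align_corners=False)
            return torch.sigmoid(out)

        def forward(self, x):
            return self.decode(self.encode(x))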

I wanted to try interpolating between faces, which was when I realized the advantage of making the autoencoder variational. Because the latent space was not continuous, interpolating between faces did not work as one would expect; rather, it was more like adding the faces together. Training the autoencoder as a variational one forces the latent space to be continuous, which makes interpolation possible, so I am currently retraining the model as a variational autoencoder.
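The interpolation itself is straightforward: encode both faces, walk along the straight line between the two latent vectors, and decode each point. A minimal sketch, assuming a model with encode()/decode() methods like the one above (the function name and number of steps are arbitrary):

    import torch

    def interpolate_faces(model, img_a, img_b, steps=8):
        """Decode points on the straight line between two latent vectors."""
        model.eval()
        with torch.no_grad():
            z_a = model.encode(img_a.unsqueeze(0))
            z_b = model.encode(img_b.unsqueeze(0))
            frames = []
            for t in torch.linspace(0.0, 1.0, steps):
                z = (1 - t) * z_a + t * z_b
                frames.append(model.decode(z).squeeze(0))
        return frames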

Since the non-variational autoencoder had started to overfit the training data, I wanted to find other ways to improve the quality, so I added a discriminator network which I am currently training as part of a GAN, using the autoencoder as the generator. I will post an update when I have results worth reporting.
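Very roughly, the setup is the autoencoder acting as the generator and a small convolutional discriminator scoring real images against reconstructions. The sketch below is only meant to show the shape of the training step; the discriminator architecture, losses and weighting are placeholders, not the network I am actually training.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Small CNN that scores an image as real (1) or reconstructed (0).
    disc = nn.Sequential(
        nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(128, 1),
    )
    bce = nn.BCEWithLogitsLoss()

    def adversarial_step(autoencoder, disc, real, opt_g, opt_d):
        recon = autoencoder(real)
        ones = torch.ones(real.size(0), 1)
        zeros = torch.zeros(real.size(0), 1)

        # Discriminator: push real images toward 1, reconstructions toward 0.
        d_loss = bce(disc(real), ones) + bce(disc(recon.detach()), zeros)
        opt_d.zero_grad()
        d_loss.backward()
        opt_d.step()

        # Generator (the autoencoder): keep reconstructing well while also
        # trying to fool the discriminator.
        g_loss = F.mse_loss(recon, real) + 0.01 * bce(disc(recon), ones)
        opt_g.zero_grad()
        g_loss.backward()
        opt_g.step()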

The notebooks used are available on GitHub, and the datasets I used are on Google Cloud Storage, although due to their size and the cost of downloading them they are not publicly available.

Labels: python, pytorch, autoencoders

PyTorch Update

Thursday, April 18, 2019

After another couple of weeks using PyTorch, my initial enthusiasm has faded somewhat. I still like it a lot, but I have encountered some disadvantages. For one, I can now see the advantage of TensorFlow's static graphs: they make the API easier to use. Since the graph is completely defined and then compiled, you can just tell each layer how many units it should have and it will infer the number of inputs from whatever its input is. In PyTorch you need to manually specify the inputs and outputs, which isn't a big deal, but it makes networks more difficult to tune: to change the number of units in a layer you need to change the inputs to the next layer, the batch normalization, and so on, whereas with TensorFlow you can just change one number and everything is adjusted automatically.
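A toy example of what I mean, with arbitrary layer sizes: in Keras each layer only needs its number of units and the input sizes are inferred when the model is first built, while in PyTorch widening one layer means also editing the next layer and the batch normalization.

    import tensorflow as tf
    import torch.nn as nn

    keras_model = tf.keras.Sequential([
        tf.keras.layers.Dense(256, activation="relu"),  # change 256 here...
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.Dense(10),                      # ...nothing else changes
    ])

    torch_model = nn.Sequential(
        nn.Linear(784, 256),   # change 256 here...
        nn.BatchNorm1d(256),   # ...and here...
        nn.ReLU(),
        nn.Linear(256, 10),    # ...and here
    )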

I also think that the TensorFlow API is better than PyTorch's. There are some things which are very easy to do in TensorFlow but become surprisingly complicated in PyTorch, like applying different amounts of regularization to different layers. In TensorFlow there is a parameter on the layer that controls the regularization; in PyTorch you apparently need to loop through all of the parameters and know which ones should get which amount of regularization.
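To illustrate the difference (sizes and decay values are arbitrary): in Keras the regularization is just an argument on the layer, while in PyTorch one common workaround is to hand the optimizer separate parameter groups with different weight_decay values.

    import tensorflow as tf
    import torch
    import torch.nn as nn

    # TensorFlow / Keras: one argument on the layer.
    regularized = tf.keras.layers.Dense(
        256, activation="relu",
        kernel_regularizer=tf.keras.regularizers.l2(1e-4),
    )

    # PyTorch: decide group by group which weight decay each layer gets.
    model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
    optimizer = torch.optim.Adam([
        {"params": model[0].parameters(), "weight_decay": 1e-4},  # hidden layer
        {"params": model[2].parameters(), "weight_decay": 0.0},   # output layer
    ], lr=1e-3)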

I suppose one could easily get around these limitations with custom functions and such, and it shouldn't be surprising that TensorFlow seems more mature given that it has the weight of Google behind it, is considered the "industry standard", and has been around for longer. But I now see that TensorFlow has some advantages over PyTorch.

Labels: python, machine_learning, tensorflow, pytorch

PyTorch

Monday, April 8, 2019

When I first started with neural networks I learned them with TensorFlow, and it seemed like TensorFlow was pretty much the industry standard. I kept hearing about PyTorch, which was supposedly better than TensorFlow in many ways, but I never really got around to learning it. Last week I had to do one of my assignments in PyTorch, so I finally gave it a try, and I am already impressed.

The biggest problem I always had with TensorFlow was that the graphs are static. The entire graph must be defined and compiled before it is run, and it can't be altered at runtime. You feed data into the graph and it returns output. This results in the rather awkward tf.Session(), which must be created before you can do anything and which holds all of the parameters for the model.
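For anyone who hasn't used it, the TensorFlow 1.x pattern looks roughly like this toy example: define the whole graph up front, then open a session and feed data through it.

    import numpy as np
    import tensorflow as tf

    x = tf.placeholder(tf.float32, shape=[None, 784])
    w = tf.Variable(tf.zeros([784, 10]))
    logits = tf.matmul(x, w)

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        batch = np.random.rand(32, 784).astype(np.float32)
        out = sess.run(logits, feed_dict={x: batch})  # nothing runs outside the session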

PyTorch has dynamic graphs which are built on the fly as the code runs. This means that you can change things as you go, including altering the graph while the program is running, and you don't need to have all the dimensions of all of the data specified in advance like you do in TensorFlow. You can also do things like change the number of neurons in a layer dynamically or drop entire layers at runtime, which you can't do with TensorFlow.
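A toy example of that flexibility: because the forward pass is just Python, the graph can be different on every call, for example running a different number of layers each time.

    import torch
    import torch.nn as nn

    class DynamicNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.layers = nn.ModuleList([nn.Linear(64, 64) for _ in range(4)])
            self.out = nn.Linear(64, 10)

        def forward(self, x, n_layers=4):
            # How much of the network to run is decided at call time;
            # the graph is rebuilt on every forward pass.
            for layer in self.layers[:n_layers]:
                x = torch.relu(layer(x))
            return self.out(x)

    net = DynamicNet()
    deep = net(torch.randn(8, 64), n_layers=4)
    shallow = net(torch.randn(8, 64), n_layers=2)   # drops two layers at runtime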

Debugging PyTorch is a lot easier since you can just make a change and test it; you don't need to rebuild the graph and instantiate a session to try it out. You can just run an optimization step whenever you want. Coming from TensorFlow, that is a breath of fresh air.
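And a single optimization step really is just a few lines you can run whenever you like, reusing the DynamicNet from the sketch above; there is no session anywhere.

    opt = torch.optim.SGD(net.parameters(), lr=0.1)
    x = torch.randn(8, 64)
    target = torch.randint(0, 10, (8,))

    loss = nn.functional.cross_entropy(net(x), target)
    opt.zero_grad()
    loss.backward()
    opt.step()
    print(loss.item())   # inspect the result immediately, no sess.run needed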

TensorFlow still has many advantages, including the fact that it is still an industry standard, is easier to deploy, and is better supported. But PyTorch is definitely a worthy competitor; it is far more flexible and solves many of the problems with TensorFlow.

Labels: python, machine_learning, tensorflow, pytorch
