Generative Neural Machine Translation

Machine learning models are still largely superficial: they don’t really ‘understand’ the meaning of the sentences they are translating. If we want increasingly ‘intelligent’ machines, it’s important that models begin to incorporate more knowledge of the world. Read more...

Learning From Scratch by Thinking Fast and Slow with Deep Learning and Tree Search

Training powerful reinforcement learning agents from scratch by Thinking Fast and Slow: a fast neural network proposes moves, a slower tree search improves on them, and the network is then trained to imitate the search. Read more...
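
For a flavour of the fast/slow loop, here is a toy sketch only: the chain environment, the rollout-based “expert”, and the tabular apprentice below are illustrative stand-ins for the paper’s MCTS expert and deep-network apprentice.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy chain MDP: states 0..N, actions -1/+1, reward 1 on reaching state N.
N = 10
actions = np.array([-1, 1])

# Apprentice: tabular action preferences (the "fast" policy).
prefs = np.zeros((N + 1, 2))

def apprentice_policy(s):
    p = np.exp(prefs[s] - prefs[s].max())
    return p / p.sum()

def rollout_value(s, depth=20):
    # Estimate the value of a state by rolling out the fast apprentice policy.
    for _ in range(depth):
        if s == N:
            return 1.0
        a = rng.choice(2, p=apprentice_policy(s))
        s = int(np.clip(s + actions[a], 0, N))
    return 0.0

def expert_action(s, n_rollouts=20):
    # "Slow" expert: one-step lookahead plus apprentice rollouts, a crude
    # stand-in for the tree search used in the paper.
    q = []
    for a in range(2):
        s2 = int(np.clip(s + actions[a], 0, N))
        q.append(np.mean([1.0 if s2 == N else rollout_value(s2)
                          for _ in range(n_rollouts)]))
    return int(np.argmax(q))

# Expert iteration: the apprentice imitates the expert's improved choices.
for it in range(200):
    s = int(rng.integers(0, N))
    a_star = expert_action(s)
    prefs[s] += 0.5 * (np.eye(2)[a_star] - apprentice_policy(s))

print("greedy apprentice actions:", [int(actions[np.argmax(prefs[s])]) for s in range(N)])
```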

Some modest insights into the error surface of Neural Nets

Did you know that feedforward Neural Nets (with piecewise linear transfer functions) have no smooth local maxima? In our recent ICML paper, Practical Gauss-Newton Optimisation for Deep Learning, we discuss a second-order method that can be applied successfully to accelerate the training of Neural Networks. Here, however, I want to discuss some of the fairly straightforward, but perhaps interesting, insights into the geometry of the error surface that this work provides. Read more...
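
As a taste of the argument (a condensed sketch, with notation simplified; see the paper for the precise statement): write the pre-activations of layer $\lambda$ as $h_\lambda = W_\lambda a_{\lambda-1}$ and the activations as $a_\lambda = f(h_\lambda)$. The Hessian of the error $E$ with respect to the pre-activations then satisfies a backward recursion:

$$
\mathcal{H}_\lambda = B_\lambda W_{\lambda+1}^\top \mathcal{H}_{\lambda+1} W_{\lambda+1} B_\lambda + D_\lambda,
\qquad
B_\lambda = \operatorname{diag}\!\left(f'(h_\lambda)\right),\;
D_\lambda = \operatorname{diag}\!\left(f''(h_\lambda)\,\frac{\partial E}{\partial a_\lambda}\right).
$$

For piecewise linear $f$, $f'' = 0$ wherever it is defined, so $D_\lambda$ vanishes; if the loss is convex in the network output (squared error, or cross-entropy with softmax), the recursion then makes every $\mathcal{H}_\lambda$ positive semidefinite, and the per-layer blocks of the parameter Hessian inherit this. At a smooth local maximum those blocks would also have to be negative semidefinite, hence zero, which forces the whole Hessian to vanish; so no smooth local maxima can exist.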

Evolutionary Optimization as a Variational Method

A simple connection between evolutionary optimisation and variational methods. Read more...
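
To make the connection concrete, here is a minimal sketch; the objective f, the Gaussian family, and the step sizes are illustrative choices, not taken from the post. Taking the expectation of f under a Gaussian q(x | mu, sigma) gives a smoothed variational objective whose gradient can be estimated from function evaluations alone, recovering an evolution-strategies-style update.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # Non-convex objective to minimise (illustrative choice).
    return np.sum(x**2) + 2.0 * np.sin(5.0 * x).sum()

# Variational view: minimise J(mu) = E_{x ~ N(mu, sigma^2 I)}[f(x)].
# The score-function (log-derivative) estimator needs only evaluations of f:
#   grad_mu J = E[ f(x) * (x - mu) / sigma^2 ]
mu, sigma = np.full(2, 3.0), 0.5
lr, n_samples = 0.05, 100

for step in range(200):
    eps = rng.standard_normal((n_samples, mu.size))
    fx = np.array([f(mu + sigma * e) for e in eps])
    fx -= fx.mean()                      # baseline reduces estimator variance
    grad_mu = (fx[:, None] * eps).mean(axis=0) / sigma
    mu -= lr * grad_mu                   # the usual evolution-strategies update

print("final mu:", mu, "f(mu):", f(mu))
```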

Training with a large number of classes

In machine learning we often face classification problems with a very large number of classes. This causes a computational bottleneck: normalising the class distribution requires a sum over every class. There is, though, a simple and effective way to deal with this. Read more...
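
To give a flavour of the kind of trick involved, here is a generic sketch of a sampling-based softmax approximation; the uniform negative sampling below is an illustrative choice and may differ in detail from the method in the post. Rather than summing over all C classes in the normaliser, we sum over the true class plus a small random subset of the others, rescaling the subset's contribution.

```python
import numpy as np

rng = np.random.default_rng(0)

def full_log_softmax_loss(logits, target):
    # Exact negative log-likelihood: O(C) in the number of classes.
    return np.logaddexp.reduce(logits) - logits[target]

def sampled_log_softmax_loss(logits, target, n_neg=100):
    # Approximate the normaliser Z = sum_c exp(s_c) by the target term
    # plus a uniformly sampled, rescaled subset of the remaining classes:
    #   Z  ~=  exp(s_target) + (C - 1) / n_neg * sum_{j in S} exp(s_j)
    C = logits.shape[0]
    neg = rng.choice(np.delete(np.arange(C), target), size=n_neg, replace=False)
    scaled = logits[neg] + np.log((C - 1) / n_neg)
    log_Z = np.logaddexp.reduce(np.append(scaled, logits[target]))
    return log_Z - logits[target]

logits = rng.standard_normal(50_000)    # e.g. a large vocabulary
target = 123
print("exact  :", full_log_softmax_loss(logits, target))
print("sampled:", sampled_log_softmax_loss(logits, target))
```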