In this lesson we have a look at machine learning with TensorFlow.
We will create our own linear classifier, and use TensorFlow’s built-in optimisation algorithm to train it.
First, we will have a look at the data and at what we are trying to do. For those new to machine learning, the task we are performing is called classification, a form of supervised machine learning.
The task is to work out the relationship between some input data and an output value. In practical terms, the input data could be measurements, such as height or weight, and the output value would be the prediction we expect, such as “cat” or “dog”.
This lesson builds on the work of our Convergence lesson, which can be found here. I recommend you complete that lesson first.
Let’s create and visualise some data:
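The original data-creation code is not shown here; the sketch below is a minimal numpy stand-in that produces three Gaussian blobs, one per colour. The centre positions and point counts are assumptions for illustration, not the lesson's exact values.

```python
import numpy as np

# A minimal sketch of the data-creation step: three Gaussian "blobs",
# one per colour/class, stacked into X with an integer class label in y.
rng = np.random.RandomState(0)
centres = np.array([[0.0, 0.0], [4.0, 4.0], [0.0, 5.0]])  # assumed positions

X = np.vstack([rng.randn(50, 2) + c for c in centres])  # (150, 2) positions
y = np.repeat([0, 1, 2], 50)                            # class label per point

print(X.shape, y.shape)  # (150, 2) (150,)
```

Visualising the result is then a single scatter plot, e.g. `plt.scatter(X[:, 0], X[:, 1], c=y)` with matplotlib.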
Here we have three blobs of data: the yellows, blues and purples. They are plotted in two dimensions, which we will call x0 and x1.
These values are stored in the X array.
When we perform machine learning, it is necessary to split the data into a training set, which we use for creating the model, and a testing set, which we use to evaluate it. If we don’t do that, then we could simply create a “cheating classifier” that just remembers our training data. By splitting, our classifier must learn the relationship between the inputs (the position on the plot) and the outputs.
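The splitting step can be sketched with plain numpy: shuffle the row indices, then hold out a fraction for testing. The one-third split and variable names here are illustrative assumptions, not the lesson's exact code.

```python
import numpy as np

# Sketch of a train/test split: shuffle row indices, then hold out a
# fraction of the rows as the testing set.
rng = np.random.RandomState(0)
X = rng.randn(150, 2)            # stand-in for the blob positions
y = np.repeat([0, 1, 2], 50)     # stand-in for the colour labels

idx = rng.permutation(len(X))    # shuffled row order
n_test = len(X) // 3             # hold out one third for testing

test_idx, train_idx = idx[:n_test], idx[n_test:]
X_train, y_train = X[train_idx], y[train_idx]
X_test, y_test = X[test_idx], y[test_idx]

print(len(X_train), len(X_test))  # 100 50
```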
Now we plot our testing data. After learning the relationship between position and colour from the training data, the classifier will be given the following points, and will be evaluated on how accurately it colours the points.
Our model will be a simple linear classifier. This means it will draw straight lines between the three colours. Points on one side of a line are given one colour, while those on the other side are given another colour. We will call these our decision lines, although they are normally called decision boundaries, because other models can learn shapes more complex than a straight line.
To mathematically represent our model, we use this equation:
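Written with the shapes described below, the linear model is:

```latex
Y_{\text{pred}} = XW + b
```

where X holds the observed positions, W the weights and b the biases (broadcast across all rows of XW).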
Our weights W form a (n_features by n_classes) matrix and represent the learned weights of our model; they dictate where the decision lines will sit. X is a (n_rows by n_features) matrix and holds the position data – where a given point sits on the graph. Finally, b is a (1 by n_classes) vector and holds the biases. We need this so that our lines don’t have to pass through the point (0, 0), giving us the ability to “draw” lines in any position on the graph.
The points in X are fixed – these are the training or testing data, and are called observed data. The values of W and b are the parameters of our model, and we have control over them. Choosing good values for these parameters gives us good decision lines.
The process of choosing good values for the parameters in our model is called training the algorithm, and is the “learning” in machine learning.
Let’s take our mathematical model from above, and turn it into a TensorFlow operation.
The Y_pred Tensor represents our mathematical model from above. By passing in observed data (X) we can get the expected values, in our case, the expected colour of a given point. Note the use of broadcasting for applying the bias across all of the predictions.
The actual values in Y_pred are composed of “likelihoods” that the model will select each of the classes for a given point, making it a (n_rows by n_classes) matrix. They aren’t true likelihoods, but we can find out which class our model thinks is most likely by finding the maximum value in each row.
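The forward pass and the "find the maximum" step can be sketched in numpy (the TensorFlow version uses the same shapes). W and b here are random stand-ins, since training hasn't happened yet; note how the (1, 3) bias is broadcast across all 150 rows.

```python
import numpy as np

# Sketch of the model's forward pass: a score per class for each point,
# then argmax along each row to pick the most likely class.
rng = np.random.RandomState(0)
X = rng.randn(150, 2)                 # (n_rows, n_features) positions
W = rng.randn(2, 3)                   # (n_features, n_classes) weights
b = rng.randn(1, 3)                   # (1, n_classes) biases

Y_pred = X @ W + b                    # b is broadcast across all 150 rows
predicted_class = np.argmax(Y_pred, axis=1)

print(Y_pred.shape, predicted_class.shape)  # (150, 3) (150,)
```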
Next, we need to define a function that evaluates how good a given set of weights is. Note that we haven’t learned the weights yet; they were simply given random values. TensorFlow has built-in loss functions that compare the predicted outputs (i.e. the values that come out of your model) with the actual values (the ground truth we created when we first created our testing set). We compare these and score how well our model performed. We call it a loss function because the worse we do, the higher the value – we attempt to minimise the loss.
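The text doesn't name the exact built-in loss, but for a multi-class linear model it is typically softmax cross-entropy. This numpy sketch shows the idea: convert the scores to probabilities, then penalise the model for assigning low probability to the true class.

```python
import numpy as np

# A numpy sketch of softmax cross-entropy: lower is better, and wrong
# confident predictions are penalised heavily.
def softmax_cross_entropy(scores, y_true):
    # subtract each row's max before exponentiating, for numerical stability
    shifted = scores - scores.max(axis=1, keepdims=True)
    probs = np.exp(shifted) / np.exp(shifted).sum(axis=1, keepdims=True)
    # mean negative log-probability assigned to the correct class
    return -np.mean(np.log(probs[np.arange(len(y_true)), y_true]))

scores = np.array([[2.0, 0.5, 0.1],   # model strongly favours class 0
                   [0.1, 0.2, 3.0]])  # model strongly favours class 2

print(softmax_cross_entropy(scores, np.array([0, 2])))  # labels agree: small
print(softmax_cross_entropy(scores, np.array([1, 0])))  # labels disagree: large
```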
The final step is to create an optimisation step that takes our loss function and finds values for the given variables that minimise the loss. Note that the loss function references Y_pred, which in turn references W and b. TensorFlow picks up this relationship and alters the values in these Variables to find good values.
Now for the training bit!
We run the learner in a loop to find the best weights. Each time through the loop, the weights learned in the previous iteration are improved slightly. The 0.1 in the previous line of code is the learning rate. If you increase the value, the algorithm learns faster; however, smaller values generally converge to better results. A value of 0.1 is a good starting point while you look at other aspects of the model.
In each loop, we pass in our training data to the learner through placeholders. Every 100th loop, we see how well our model is learning by passing the testing data in directly to the loss function.
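The lesson's loop uses TensorFlow's optimiser; the self-contained numpy sketch below mirrors what that loop does, computing the softmax cross-entropy gradients by hand and nudging W and b by the learning rate each step. The blob centres and step count are illustrative assumptions.

```python
import numpy as np

# A numpy sketch of the training loop: gradient descent on softmax
# cross-entropy, the same idea TensorFlow applies to W and b for us.
rng = np.random.RandomState(0)
X = np.vstack([rng.randn(50, 2) + c
               for c in [(0, 0), (4, 4), (0, 5)]])   # assumed blob data
y = np.repeat([0, 1, 2], 50)

W = rng.randn(2, 3) * 0.01
b = np.zeros((1, 3))
learning_rate = 0.1                      # the 0.1 mentioned in the text

def loss_and_grads(W, b):
    scores = X @ W + b
    scores -= scores.max(axis=1, keepdims=True)      # numerical stability
    probs = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
    loss = -np.mean(np.log(probs[np.arange(len(y)), y]))
    d_scores = probs.copy()
    d_scores[np.arange(len(y)), y] -= 1              # gradient w.r.t. scores
    d_scores /= len(y)
    return loss, X.T @ d_scores, d_scores.sum(axis=0, keepdims=True)

losses = []
for step in range(500):
    loss, dW, db = loss_and_grads(W, b)
    losses.append(loss)
    W -= learning_rate * dW              # each step slightly improves W and b
    b -= learning_rate * db
    if step % 100 == 0:                  # report progress every 100th loop
        print(step, loss)
```

Every 100th step we print the loss, mirroring how the lesson checks progress by evaluating the loss on the testing data.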
A little complex, but we are effectively creating a two-dimensional grid covering the possible values for x0 and x1.
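The grid step can be sketched with np.meshgrid: build a lattice of candidate (x0, x1) points, then flatten it into a matrix the model can score point by point. The plot limits and resolution here are assumed values.

```python
import numpy as np

# Sketch of the plotting grid: a 2-D lattice of candidate (x0, x1)
# points, which the trained model then colours region by region.
x0_values = np.linspace(-3, 7, 100)   # assumed plot limits for x0
x1_values = np.linspace(-3, 8, 100)   # assumed plot limits for x1
xx0, xx1 = np.meshgrid(x0_values, x1_values)

# flatten the grid into an (n_points, n_features) matrix for the model
grid_points = np.column_stack([xx0.ravel(), xx1.ravel()])
print(grid_points.shape)  # (10000, 2)
```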
There you have it! Our model will classify anything in the yellow region as yellow, and so on. If you overlay the actual test values (stored in y_test_flat), you can highlight any differences.