This page is marked as inprogress, which means I consider it largely incomplete in its current state. Expect rough, unfinished thoughts.
Some mathematical concepts and derivations that form the basis behind some simple machine learning algorithms. These notes can be viewed as a high-level summary of introductory machine learning.
Helpful Prior Knowledge
- Basic linear algebra, fundamental operations with vectors and matrices, transpose and inverse
- Basic conceptual calculus, taking derivatives
Notation
Common idiomatic notation from machine learning and linear algebra, used throughout these notes.
- $\theta$ - Vector of parameters
- $X$ - Matrix of features
- $n$ - Number of features
- $m$ - Number of training examples
- $y$ - The correct values for each set of features in the training set
- $h_\theta$ - Hypothesis function
- $J(\theta)$ - Cost function
Given a matrix $A$:
- $A_{ij}$ - the entry in the $i^{\text{th}}$ row, $j^{\text{th}}$ column
- $A^T$ - the transpose of $A$
- $A^{-1}$ - the inverse of $A$
For our matrix of features $X$:
- $x^{(i)}$ - vector of the features in the $i^{\text{th}}$ training example
- $x_j^{(i)}$ - value of feature $j$ in the $i^{\text{th}}$ training example
Linear Regression
Basic Concepts
- Features: The numerical inputs for the algorithm from which it makes predictions.
- Parameters: The weights by which we multiply the features to produce the final output (which corresponds to the prediction we make). These are the values we are trying to learn.
- Hypothesis function: Function which uses the parameters to map the input features to the final prediction. It is represented as $h_\theta(x)$, taking the features $x$ (could be a single value or a vector) as input.
- Some ways to learn $\theta$ values:
  - Gradient Descent:
    - Cost function: Function that returns the error between the hypothesis function's predictions and the actual correct values in the training set. It is represented as $J(\theta)$, taking our parameters used to make the prediction ($\theta$) as input.
    - We try to find the values of $\theta$ which minimize the cost function, because the lower the cost function, the lower our error is. We do this by finding the global minimum.
  - Normal Equation: An equation to calculate the minimizing $\theta$ directly, without the need for gradient descent. We will talk about this later.
How it works
Linear regression fits a straight-line model to the dataset, therefore our hypothesis function for univariate (one feature) linear regression is:

$$h_\theta(x) = \theta_0 + \theta_1 x$$

This is a basic 2D straight line, where $x$ is our feature and we are trying to learn the bias parameter $\theta_0$ and the weight parameter $\theta_1$. We only have a single weight parameter because we only have one feature.
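For example (with values chosen arbitrarily, just for illustration): if we have learned $\theta_0 = 1$ and $\theta_1 = 2$, the hypothesis is $h_\theta(x) = 1 + 2x$, so an input of $x = 3$ produces the prediction $h_\theta(3) = 1 + 2 \cdot 3 = 7$.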
If we do the same for multiple features, we will get a linear multi-dimensional equation:

$$h_\theta(x) = \theta_0 + \theta_1 x_1 + \theta_2 x_2 + \dots + \theta_n x_n$$

Where $n$ is the number of features. Each feature (the $x_1$ through $x_n$ terms) has a weight parameter ($\theta_1$ through $\theta_n$), and the individual bias terms are collected into one term $\theta_0$. Notice how the input is no longer a single value, and is instead a collection of values $x_1$ through $x_n$, which we can represent as a column vector:

$$x = \begin{bmatrix} x_0 \\ x_1 \\ \vdots \\ x_n \end{bmatrix}$$

We can do the same thing with our $\theta$ values:

$$\theta = \begin{bmatrix} \theta_0 \\ \theta_1 \\ \vdots \\ \theta_n \end{bmatrix}$$
Notice how we have added an extra term $x_0$ to the start of the $x$ vector, and we set it equal to 1. This corresponds to the bias term $\theta_0$, which we used in the hypothesis equations: because $x_0 = 1$, the product $\theta_0 x_0$ is just $\theta_0$. This also matches the dimensions of the two vectors, enabling us to do operations such as multiplication with them.
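In code this might look like the following sketch (NumPy and the example numbers are my choice, purely for illustration):

import numpy as np

# an example with n = 3 features; the values are arbitrary
x = np.array([2104.0, 5.0, 45.0])

# prepend the bias entry x_0 = 1 so that theta_0 * x_0 = theta_0
x = np.insert(x, 0, 1.0)  # x is now [1, 2104, 5, 45]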
With our two vectors, we can write out a vectorized version of the hypothesis function as $h_\theta(x) = \theta^T x$, which we can see is equivalent to our original equation:

$$h_\theta(x) = \theta^T x = \theta_0 x_0 + \theta_1 x_1 + \dots + \theta_n x_n = \theta_0 + \theta_1 x_1 + \dots + \theta_n x_n$$
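As an illustrative sketch (again with arbitrary example values, not anything from the notes), the vectorized hypothesis is a single dot product:

import numpy as np

def hypothesis(theta, x):
    # h_theta(x) = theta^T x; for 1-D NumPy arrays, @ is the dot product
    return theta @ x

theta = np.array([1.0, 2.0, 3.0])  # [theta_0, theta_1, theta_2]
x = np.array([1.0, 4.0, 5.0])      # [x_0 = 1, x_1, x_2]
print(hypothesis(theta, x))        # 1*1 + 2*4 + 3*5 = 24.0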
Gradient Descent
One way to learn the values of $\theta$ is gradient descent. In order to implement this, we need a cost function which calculates the error of our hypothesis function above. There are a variety of cost functions that could be used, but the typical one for simple regression is a variation on the average of the squared error:

$$J(\theta) = \frac{1}{2m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right)^2$$
Recall that $m$ represents the number of training examples we have, and $y^{(i)}$ represents the actual correct prediction for the $i^{\text{th}}$ set of features in our training set. What this function does is, for each of our training examples, take the value of the hypothesis for that example ($h_\theta(x^{(i)})$), calculate the difference between it and the corresponding actual value, then square that difference. This guarantees a positive value. We then sum up each one of these squared positive values and divide by $2m$, a slight variation on calculating the mean squared error (which would divide by $m$ only). The reason we also divide by 2 is because it makes the derivative nicer: the term inside the summation is squared, so when we derive it, a coefficient of 2 appears in front, which nicely cancels with the 2 in the denominator.
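As a sketch of how this might be implemented (NumPy, with my assumption that $X$ stores one training example per row and that its first column is all ones):

import numpy as np

def cost(theta, X, y):
    # J(theta) = (1 / 2m) * sum over i of (h_theta(x_i) - y_i)^2
    # X is m x (n+1), with a leading column of ones; y has length m
    m = len(y)
    errors = X @ theta - y  # h_theta(x_i) - y_i for every example at once
    return (errors @ errors) / (2 * m)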
The actual gradient descent step comes from finding values of $\theta$ that minimize this function the most, in other words, the global minimum. At the minimum point, the derivative (in this case the partial derivative) of the cost function with respect to each $\theta_j$ will be 0. We can calculate the derivative as follows:

$$\frac{\partial}{\partial \theta_j} J(\theta) = \frac{1}{m} \sum_{i=1}^{m} \left( h_\theta(x) - y^{(i)} \right) x_j$$
One way to get to the minimum is to repeatedly subtract the value of the derivative from the old $\theta_j$ value. By doing this, when the derivative is positive (indicating we are to the right of the minimum), $\theta_j$ will be lowered (move to the left); when the derivative is negative (indicating we are to the left of the minimum), $\theta_j$ will be raised (move to the right). Thus, with many iterations of this, we will eventually approach the minimum. Here is the mathematical representation (the $:=$ is used to show that we are updating the value, rather than as an equality operator):

$$\theta_j := \theta_j - \frac{\partial}{\partial \theta_j} J(\theta)$$
Substituting the derivative we took above. $x_j$ is replaced with $x_j^{(i)}$ because, when dealing with multiple training examples, we mean the value of feature $j$ in the specific $i^{\text{th}}$ training example:

$$\theta_j := \theta_j - \frac{\alpha}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right) x_j^{(i)}$$
We have added a new variable: $\alpha$. This is called the learning rate, and as you can probably guess from the equation, it corresponds to the size of the step we take with each iteration. A large $\alpha$ value will lead to subtracting or adding larger values to each $\theta_j$ every iteration. Too small of a learning rate will lead to gradient descent taking too long to converge, because we are taking very small steps each time. Too large of a learning rate can cause the algorithm to never converge, because it will overshoot the minimum each time.
One important point is that we are repeating this step for multiple parameters. If we were to write it out fully, assuming we have 50 features (meaning that $n = 50$ and we update $\theta_0$ through $\theta_{50}$), there would be a line like the following for each parameter, as sketched in code after these equations:

$$\theta_0 := \theta_0 - \frac{\alpha}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right) x_0^{(i)}$$

$$\theta_1 := \theta_1 - \frac{\alpha}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right) x_1^{(i)}$$

$$\vdots$$

$$\theta_{50} := \theta_{50} - \frac{\alpha}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right) x_{50}^{(i)}$$
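Rather than writing each update out, a loop can compute all of them. Here is a rough NumPy sketch (the names, and the assumption that $X$ stores one training example per row with a leading column of ones, are mine, not from the notes):

import numpy as np

def gradient_descent_step(theta, X, y, alpha):
    # one update of every theta_j, following the summation formula above
    m, n_plus_1 = X.shape
    new_theta = np.empty_like(theta)
    for j in range(n_plus_1):
        # sum over all m examples of (h_theta(x_i) - y_i) * x_i_j
        total = sum((X[i] @ theta - y[i]) * X[i, j] for i in range(m))
        new_theta[j] = theta[j] - (alpha / m) * total
    return new_theta  # computed from the old theta only; see the pitfall below

Note that the sketch returns a fresh vector instead of overwriting theta entry by entry, which is exactly the simultaneous-update issue discussed next.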
Because our $h_\theta$ is dependent on the values of the parameter vector $\theta$, we need to make sure we are updating our $\theta$ values simultaneously, after we are done with the computations. Consider the following incorrect pseudocode for a single gradient descent step on three parameters:
# assume:
# theta_0 is the bias term
# theta_1 is the 1st parameter, theta_2 is the 2nd parameter, ... etc.
# alpha is the learning rate
# dcost_0, dcost_1, ... etc. compute the partial derivative of the cost
# function for each respective theta, using the current values of ALL thetas
theta_0 = theta_0 - ((alpha / m) * dcost_0)
theta_1 = theta_1 - ((alpha / m) * dcost_1)  # dcost_1 now sees the updated theta_0!
theta_2 = theta_2 - ((alpha / m) * dcost_2)  # dcost_2 sees updated theta_0 and theta_1
This is wrong because we are updating the $\theta$ values before we are finished using all of them: each later derivative is computed against a partially updated $\theta$. Here is a correct implementation, where we update the values simultaneously after the computation:
# compute every update using only the old theta values
temp0 = theta_0 - ((alpha / m) * dcost_0)
temp1 = theta_1 - ((alpha / m) * dcost_1)
temp2 = theta_2 - ((alpha / m) * dcost_2)
# then overwrite all thetas at once
theta_0 = temp0
theta_1 = temp1
theta_2 = temp2
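For what it's worth, a vectorized NumPy update (a sketch, using the same assumed $X$ and $y$ layout as the earlier snippets) sidesteps this pitfall automatically, since the entire right-hand side is evaluated with the old theta before the assignment happens:

# all partial derivatives come from the old theta,
# then theta is replaced in one assignment
theta = theta - (alpha / m) * (X.T @ (X @ theta - y))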