A regression problem is one in which the output variable is a real value, such as “dollars” or “weight”. In other words, the output variable is continuous.

Regression is a method of modeling a target variable based on independent predictors. It is mainly used for prediction and for examining cause-and-effect relationships between variables. Regression techniques differ mainly in the number of independent variables and in the nature of the relationship between the independent and dependent variables.

Figure 2. Linear Regression

Simple linear regression is a form of regression analysis in which there is a single independent variable and a linear relationship between the independent variable (x) and the dependent variable (y). The red line in Figure 2 is the best-fit straight line: given the data points, we try to draw the line that models them best. The line can be described by the linear equation below.

y = a_0 + a_1 * x

The goal of the linear regression algorithm is to find the best values for a_0 and a_1. To understand how this is done, we need two important concepts: the cost function and gradient descent.
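As a quick illustration (a minimal Python sketch; the function name predict, the parameter values, and the sample data are assumptions for this example, not from the original), the line model is simply:

    import numpy as np

    def predict(x, a0, a1):
        # Line model: y = a_0 + a_1 * x
        return a0 + a1 * x

    x = np.array([1.0, 2.0, 3.0])
    print(predict(x, a0=0.5, a1=2.0))  # [2.5 4.5 6.5]

Finding the "best" a_0 and a_1 means choosing the pair that makes these predictions as close as possible to the observed y values.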

i. Cost Function:

The cost function determines the best possible values for a_0 and a_1, i.e. those that give the best-fit line for the data points. Searching for the best values of a_0 and a_1 turns the task into a minimization problem, in which the error between the predicted values and the actual values must be minimized. The minimization objective and the cost function are:

    minimize (1/N) * sum_{i=1..N} (pred_i - y_i)^2

    J = (1/N) * sum_{i=1..N} (pred_i - y_i)^2

where pred_i = a_0 + a_1 * x_i is the prediction for the i-th data point.

The difference between a predicted value and the corresponding ground-truth value is the error at that data point. Each error is squared, the squares are summed over all data points, and the sum is divided by the total number of data points N. The result is the mean squared error over all data points, which is why this cost function is called the Mean Squared Error (MSE) function. Using the MSE function, the values of a_0 and a_1 are updated so that the MSE value reaches its minimum.
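As a sketch of the MSE computation (assuming NumPy arrays; the function name mse and the toy values are illustrative):

    import numpy as np

    def mse(y_pred, y_true):
        # Mean squared error: average of the squared residuals over all N points
        return np.mean((y_pred - y_true) ** 2)

    y_true = np.array([2.0, 4.0, 6.0])
    y_pred = np.array([2.5, 4.5, 6.5])
    print(mse(y_pred, y_true))  # 0.25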

ii. Gradient Descent:

Gradient descent is a method for updating a_0 and a_1 so as to reduce the cost function. The idea is to start with some initial values for a_0 and a_1 and then change them iteratively to reduce the cost; gradient descent tells us how to change the values.
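A minimal gradient-descent sketch for this model (the learning rate, step count, and toy data are assumptions for illustration; the gradient formulas follow from differentiating the MSE with respect to a_0 and a_1):

    import numpy as np

    def gradient_descent(x, y, lr=0.05, steps=1000):
        a0, a1 = 0.0, 0.0                      # start with some initial values
        for _ in range(steps):
            err = (a0 + a1 * x) - y            # residuals at the current a_0, a_1
            a0 -= lr * 2 * np.mean(err)        # dMSE/da_0 = (2/N) * sum(err)
            a1 -= lr * 2 * np.mean(err * x)    # dMSE/da_1 = (2/N) * sum(err * x)
        return a0, a1

    x = np.array([1.0, 2.0, 3.0, 4.0])
    y = 0.5 + 2.0 * x                          # noiseless toy data on a known line
    print(gradient_descent(x, y))             # approaches (0.5, 2.0)

Each iteration moves a_0 and a_1 a small step in the direction that decreases the MSE, which is how gradient descent helps change the values.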
