How Linear Regression Really Works: The Math Explained


Linear regression is one of the most commonly used statistical methods in the sciences today, but how exactly does it work? To get to the bottom of this question, we need to begin with an explanation of linear regression itself, and how this type of statistical analysis differs from non-linear regression.

Linear regression can be used to model a linear relationship between two variables – if we have data on both variables, we can use linear regression to estimate the relationship between them, and then use that relationship to predict values of one variable given values of the other.

Linear regression is a simple but powerful way to identify the relationship between two variables and find the best line of fit through them, but its structure can seem confusing or overwhelming at first glance. Don’t worry! This guide will walk you through what linear regression really means, how it works, and why it’s so important to data science, statistics, and research more broadly.

What is Linear Regression?

Regression is a statistical method for fitting a curve to data points. Usually, you’ll want to predict an output based on one or more inputs. Linear regression assumes that the relationship can be described by a linear model, that is, a straight line.

There are many different variations of linear regression and choosing which one is right for you can get confusing, but we’ll talk about that in a minute. First, let’s cover some basics. You’re going to need three things before you get started with linear regression: 

  1. Inputs (x) – the input variables you use to make a prediction. 
  2. Model Output (y) – the outcome you are trying to predict. 
  3. Error Term (e) – the difference between your prediction and the actual value. 

So, if your prediction was 5 centimeters and the actual measurement was 7 centimeters, the error term would be 2 centimeters. Squaring these error terms and averaging them over all data points gives the mean squared error: MSE = (1/n) Σ(yᵢ − ŷᵢ)², where n is the number of points used in your experiment, yᵢ is the actual value, and ŷᵢ is the predicted value.
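
As a quick illustration, here is a minimal sketch of that calculation in Python (the sample values are made up for demonstration):

import numpy as np

# actual values and model predictions (illustrative numbers only)
y_actual = np.array([7.0, 5.5, 9.0])
y_pred = np.array([5.0, 6.0, 8.5])

# mean squared error: the average of the squared error terms
mse = np.mean((y_actual - y_pred) ** 2)
print(mse)  # 1.5 -> (2^2 + 0.5^2 + 0.5^2) / 3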

Why do we need linear regression?

In order to answer that question, we need to look at what linear regression actually does. Linear regression is used when we want to predict or estimate one variable based on another.

For example, we might be interested in understanding how salary depends on age.

We might also be interested in understanding if being a movie star affects life expectancy—or if any factor could have an impact on test scores.

As another example, we might want to estimate demand for a product based on stock levels and competitor prices. It turns out that linear regression has other applications too, but its utility comes primarily from its ability to quantify the relationship between variables.
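
That demand example uses two predictors, which calls for multiple linear regression (covered later in this post). As a hedged sketch of how that might look with NumPy's least-squares solver, using invented numbers:

import numpy as np

# invented data: stock level and competitor price -> units sold
stock = np.array([10, 20, 30, 40, 50])
price = np.array([9.0, 8.5, 8.0, 7.5, 7.0])
sales = np.array([15, 25, 40, 48, 60])

# design matrix with a column of ones for the intercept
X = np.column_stack([np.ones_like(stock, dtype=float), stock, price])

# solve for the coefficients that minimize the sum of squared errors
coef, *_ = np.linalg.lstsq(X, sales, rcond=None)
print(coef)  # [intercept, stock coefficient, price coefficient]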

Understanding the DataSet

A DataSet is a collection of two or more DataTables. Think of a DataSet as an Excel workbook, which contains multiple sheets (tables).

In Power BI, the DataSet and the individual DataTables share some common characteristics. Both have columns and rows, both can be refreshed from a data source, and you can even create relationships between tables, much as you can relate different sheets in the same Excel workbook.
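
For the regression examples in this post, the dataset is simply a table with two numeric columns. A minimal sketch of such a table using pandas (my choice of library; the sample code later uses plain NumPy arrays):

import pandas as pd

# a tiny two-column dataset: inputs x and outputs y
data = pd.DataFrame({
    "x": [0, 1, 2, 3, 4],
    "y": [1, 3, 2, 5, 7],
})
print(data.describe())  # quick summary statistics per column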

Doing the math – how does it really work?

Linear regression is used for a broad range of business and science applications, from marketing to pharmaceutical research.

It does its job by extrapolating information that can be derived from the data you give it, but how does it really work? That’s the topic of today’s post; I hope to answer two questions for you: what goes into linear regression, and how does it function? By the end of our discussion, you should understand how linear regression works in a mathematical sense.

My hope is that afterward you will find some deeper insight into what happened the last time your boss asked if anyone had tried linear regression to solve an issue, and why they didn’t use it in the first place.

Types of linear regression

In general, there are two types of linear regression:

  1. Simple linear regression
  2. Multiple linear regression

With simple linear regression, we can examine the relationship between two variables: a single independent variable is used to predict the value of the dependent variable. Multiple linear regression extends this by using two or more independent variables.

For example, a study might use simple linear regression to ask whether the share of unhealthy foods in a person’s diet predicts their risk of developing heart disease.

The simple linear regression equation is:

y = b0 + b1x + e

This is just the familiar equation of a straight line, y = mx + c, with the variables renamed and something extra added: the slope m becomes b1, the intercept c becomes b0, and the term e is included to account for error. The intercept b0 is the value of y at the point where the line crosses the y-axis, that is, the point (0, b0). The slope of a line can be positive, negative, zero, or undefined; when the slope is positive, the line slopes upwards as x increases, meaning that y also increases.
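
As a quick numeric illustration (the coefficient values here are made up):

# made-up fitted coefficients: intercept b0 = 1.0, slope b1 = 2.0
b0, b1 = 1.0, 2.0

# predicted y for x = 3, ignoring the error term e
x = 3.0
y_hat = b0 + b1 * x
print(y_hat)  # 7.0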

Cost Function

Simply put, a cost function is a mathematical way of determining what parameters you should pick for your model. It will return an error (or cost) associated with your current set of parameters, and it’s your job to tweak those numbers until that cost is minimized.

In linear regression, the cost function measures how far the line’s predictions fall from the observed data; the standard choice is the sum of squared errors, whether you are dealing with one input variable or many.

When we do regression analysis on one variable at a time, we use ordinary least squares (OLS) regression. In OLS regression, the cost function is the sum of squared residuals, J(b0, b1) = Σ(yᵢ − (b0 + b1xᵢ))², and we pick the b0 and b1 that minimize it. That is the math behind the regression.
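
Before the full example in the next section, here is a minimal sketch of that cost function in Python (the function and parameter names are my own):

import numpy as np

def ols_cost(b0, b1, x, y):
    # sum of squared residuals for candidate parameters b0, b1
    residuals = y - (b0 + b1 * x)
    return np.sum(residuals ** 2)

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 3.0, 2.0, 5.0])
print(ols_cost(1.0, 1.0, x, y))  # cost of the line y = 1 + x, here 3.0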

Sample Code

import numpy as np
import matplotlib.pyplot as plt

def estimate_coef(x, y):
	# number of observations/points
	n = np.size(x)

	# mean of x and y vector
	m_x = np.mean(x)
	m_y = np.mean(y)

	# calculating cross-deviation and deviation about x
	SS_xy = np.sum(y*x) - n*m_y*m_x
	SS_xx = np.sum(x*x) - n*m_x*m_x

	# calculating regression coefficients
	b_1 = SS_xy / SS_xx
	b_0 = m_y - b_1*m_x

	return (b_0, b_1)

def plot_regression_line(x, y, b):
	# plotting the actual points as scatter plot
	plt.scatter(x, y, color = "m",
			marker = "o", s = 30)

	# predicted response vector
	y_pred = b[0] + b[1]*x

	# plotting the regression line
	plt.plot(x, y_pred, color = "g")

	# putting labels
	plt.xlabel('x')
	plt.ylabel('y')

	# function to show plot
	plt.show()

def main():
	# observations / data
	x = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
	y = np.array([1, 3, 2, 5, 7, 8, 8, 9, 10, 12])

	# estimating coefficients
	b = estimate_coef(x, y)
	print("Estimated coefficients:\nb_0 = {}\nb_1 = {}".format(b[0], b[1]))

	# plotting regression line
	plot_regression_line(x, y, b)

if __name__ == "__main__":
	main()

Output:

Estimated coefficients:
b_0 = 1.2363636363636363
b_1 = 1.1696969696969697

The script also displays a scatter plot of the data points with the fitted regression line drawn through them.

Why does this work?

If you took a linear regression class in college, you were taught that this technique was an excellent way to model continuous data—but also that it didn’t work for certain types of data.

But why does it work? And how can we tell whether data is linear or not? In order to answer these questions, let’s take a closer look at what exactly a line is and what happens when linear regression is applied to data of various shapes.

We can further break down our examination by looking at two scenarios and seeing what happens when we either include them or exclude them from our analysis.

Let’s start with the scenario where we have been told that some data should be modeled as straight lines, but they are actually curves.

For example, suppose you had a simple scatterplot where the y-axis represents weight (in kilograms) and the x-axis represents height (in centimeters), and the true relationship were curved rather than straight. A fitted line would then systematically miss: its predictions would run too high over some height ranges and too low over others.

You might see many points stacked on top of each other around the typical height and weight, while unusual cases, such as a very tall person who weighs no more than a much shorter one, would scatter far from the line.
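
One common way to check for this, offered here as a hedged sketch rather than anything prescribed above, is to plot the residuals of the fit: if the data really are linear, the residuals should scatter randomly around zero with no visible curve.

import numpy as np
import matplotlib.pyplot as plt

# reuse the data and coefficients from the sample code above
x = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
y = np.array([1, 3, 2, 5, 7, 8, 8, 9, 10, 12])
b0, b1 = 1.2363636363636363, 1.1696969696969697

# residuals: actual minus predicted values
residuals = y - (b0 + b1 * x)
plt.scatter(x, residuals, color="m")
plt.axhline(0, color="g")  # residuals should hover around this line
plt.xlabel('x')
plt.ylabel('residual')
plt.show()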


Are there any exceptions to this rule?

Though linear regression is one of the most common statistical techniques, it isn’t always appropriate. For example, if your goal is to model a non-linear relationship or capture some kind of cyclical component in your data, you’ll want a model that can capture curvature, such as polynomial regression.
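
As a hedged sketch, NumPy's polyfit can fit such a curve; here a quadratic is fitted to made-up, clearly curved data:

import numpy as np

# made-up data that follows a curve rather than a line
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([0.1, 1.2, 3.9, 9.1, 16.2, 24.8])  # roughly y = x^2

# fit a degree-2 polynomial: coefficients are returned highest power first
coeffs = np.polyfit(x, y, deg=2)
print(coeffs)  # leading coefficient should come out close to 1.0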

In fact, many experts recommend simply fitting a straight line first and examining the fit visually for obvious departures from linearity before calculating an equation for a more complex model.

Doing so may save you from spending time on a more complicated model only to find that it doesn’t fit well with your data.

Verdict

Linear regression is one of the most popular and widely used machine learning algorithms. Despite its popularity, there are still many misconceptions about how it works. In this blog post, we’ve dispelled some of those myths and explained the math behind linear regression.

First, let’s define what linear regression is. It’s a statistical technique for estimating relationships between two variables, where one variable is independent and the other dependent.

For example, height might be an independent variable in predicting someone’s weight. To do this, we need to find a line that minimizes error when applied to data points.
