If you have read the previous post, you could probably have guessed the idea behind this new model. The "forest" gives it away: just as a forest in nature is made up of many trees, Random Forest regression is a combination of many Decision Tree regressions.
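As a minimal sketch of the idea, here is a Random Forest regressor from sklearn fitted on a small made-up dataset (the feature/target values below are invented purely for illustration):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Toy data: one feature (e.g. a level from 1 to 10) and a target value.
X = np.arange(1, 11).reshape(-1, 1)
y = np.array([45, 50, 60, 80, 110, 150, 200, 300, 500, 1000], dtype=float)

# n_estimators is the number of Decision Trees in the forest.
forest = RandomForestRegressor(n_estimators=100, random_state=0)
forest.fit(X, y)

# The forest's prediction is the average of all its trees' predictions.
prediction = forest.predict([[6.5]])
print(prediction)
```

Because each tree predicts an average of training targets, the forest's prediction always stays within the range of the target values it was trained on.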
Decision trees are used for many purposes, from sorting and searching to classification, but using them for regression is harder because the target can take infinitely many possible values. The math behind it is simple and can be implemented with a series of 'if' conditions. This model is available in the sklearn.tree library. Take for example this data.
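A minimal sketch of a Decision Tree regressor, using the same kind of invented one-feature data as above (the numbers are assumptions for illustration, not from the post):

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Toy data: distinct feature values with a non-linear target.
X = np.arange(1, 11).reshape(-1, 1)
y = np.array([45, 50, 60, 80, 110, 150, 200, 300, 500, 1000], dtype=float)

# By default the tree grows until every leaf is pure, so it will
# reproduce the training targets exactly on data like this.
tree = DecisionTreeRegressor(random_state=0)
tree.fit(X, y)

print(tree.predict([[6.5]]))
```

Internally the fitted tree is exactly that chain of 'if' conditions: each split compares the feature against a threshold and each leaf stores a constant prediction.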
This is the model to use when the features are non-linearly related to the target. The degree of the relationship must be provided when the model object is created, so finding the right degree is a matter of trial and error if it is not known from the start.
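A minimal sketch of Polynomial regression in sklearn, done by expanding the feature with PolynomialFeatures and then fitting an ordinary LinearRegression on top (the quadratic toy data is an assumption for illustration):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

# Toy data with an exactly quadratic relationship: y = x^2.
X = np.arange(1, 11).reshape(-1, 1)
y = (X ** 2).ravel().astype(float)

# degree is the trial-and-error knob mentioned above.
poly = PolynomialFeatures(degree=2)
X_poly = poly.fit_transform(X)  # columns: 1, x, x^2

model = LinearRegression()
model.fit(X_poly, y)

# Predict for an unseen value; remember to transform it the same way.
prediction = model.predict(poly.transform([[11]]))
print(prediction)
```

Since the data here is exactly quadratic and the degree matches, the model recovers the curve and predicts 11² = 121 for the unseen point.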
This is the same idea as Simple Linear Regression, but more flexible: unlike the former, it considers multiple features at once. The model serves the same purpose, fitting a straight line (a flat plane, once there is more than one feature) to the training data.
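A minimal sketch of Multiple Linear Regression: the synthetic data below is generated from a known linear rule (an assumption for illustration) so we can see the model recover one coefficient per feature plus an intercept:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# 100 samples, 3 features.
X = rng.random((100, 3))
# Noiseless target built from a known linear rule:
# y = 4*x1 + 2*x2 - 3*x3 + 5
y = 4 * X[:, 0] + 2 * X[:, 1] - 3 * X[:, 2] + 5

model = LinearRegression()
model.fit(X, y)

print(model.coef_)       # one slope per feature
print(model.intercept_)  # the constant term
```

With noiseless data the fitted coefficients match the true rule exactly (up to floating-point precision).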
This is the most basic model out there: it has only two variables, and the math involved can easily be carried out by hand. The model simply tries to fit a straight line with the equation Y = mX + c, where Y is the target variable, X is the feature, m is the slope, and c is the intercept (a constant).
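A minimal sketch of Simple Linear Regression: the toy data below is generated from Y = 2X + 1 (slope and intercept chosen arbitrarily for illustration), so the fitted m and c should come back as 2 and 1:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# One feature, five samples, following Y = 2X + 1 exactly.
X = np.array([[1], [2], [3], [4], [5]], dtype=float)
y = 2 * X.ravel() + 1

model = LinearRegression()
model.fit(X, y)

print(model.coef_[0])    # m, the slope
print(model.intercept_)  # c, the constant
```

Because the points lie exactly on a line, this is the one case where the "trial and error" of model choice disappears: the straight line is a perfect fit.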