When fitting a model to a set of data, we can choose how complex or simple to make it. If we make the model more complex, for example by increasing the number of parameters, the training error shrinks, meaning a smaller bias (on average, how far the predictions are from the true values). The problem is that such a model may no longer generalize beyond the training data, hence it has a higher variance (how much an individual prediction scatters around the correct value as the training set changes). This is also called overfitting.

On the other hand, we may make the model less complex, for example by reducing the number of parameters or by adding regularization techniques. This improves the model's generalization, decreasing its variance, but it produces a larger training error, hence the bias increases. This is also called underfitting.

In order to produce a model that generalizes well to unseen data, we need to find a tradeoff, or a balance, between bias and variance.
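The tradeoff above can be made concrete with a small sketch (the data, degrees, and noise level here are illustrative choices, not from the text): fitting polynomials of increasing degree to noisy samples of a sine curve. A low degree underfits (high error on both sets, high bias), a very high degree overfits (low training error but high test error, high variance), and an intermediate degree balances the two.

```python
# Illustrative sketch of the bias-variance tradeoff using polynomial fits.
# The function, sample sizes, and degrees are arbitrary choices for the demo.
import numpy as np

rng = np.random.default_rng(0)

def true_fn(x):
    return np.sin(2 * np.pi * x)

# Small noisy training set and a dense, noise-free test grid.
x_train = rng.uniform(0, 1, 20)
y_train = true_fn(x_train) + rng.normal(0, 0.2, x_train.size)
x_test = np.linspace(0, 1, 200)
y_test = true_fn(x_test)

for degree in (1, 4, 15):
    # Fit a polynomial of the given degree by least squares.
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree={degree:2d}  train MSE={train_mse:.3f}  test MSE={test_mse:.3f}")
```

Degree 1 underfits (both errors are large), degree 15 has enough freedom to chase the noise (training error drops while test error does not follow), and degree 4 sits near the balance point.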
