The basis of every machine learning model is an objective function, which is either maximised or minimised depending on the type of problem.
This objective function usually takes the form of an error metric to be reduced. The gradient descent approach behind this optimisation attempts to reduce the error metric, using it as an indication of how well the model parameters fit the behaviour of the data.
Given how strongly the choice of error metric influences the optimisation process and the kind of fit we end up with, a basic understanding of the various metrics and their attributes is warranted.
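To make the connection between objective function and optimisation concrete, here is a minimal sketch of gradient descent minimising a mean-squared-error objective for a one-variable linear model. The data, learning rate, and iteration count are illustrative assumptions, not taken from any particular library.

```python
import numpy as np

# Toy data generated from y = 2x + 1, so the ideal fit is w = 2, b = 1
X = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 2.0 * X + 1.0

w, b = 0.0, 0.0   # model parameters to fit
lr = 0.05         # learning rate (illustrative choice)

for _ in range(2000):
    pred = w * X + b
    err = pred - y
    # Gradients of the MSE objective with respect to w and b
    grad_w = 2.0 * np.mean(err * X)
    grad_b = 2.0 * np.mean(err)
    # Step against the gradient to reduce the error metric
    w -= lr * grad_w
    b -= lr * grad_b

mse = np.mean((w * X + b - y) ** 2)
```

After enough steps, `w` and `b` approach 2 and 1 and the MSE approaches zero; swapping in a different error metric would change the gradients, and therefore the fit the procedure converges to.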
This covers only a subset of cases and plenty of others are missing; send a message if you would like to know about any specific ones. Any other constructive comments are welcome.