Parameter


A parameter in a machine learning model is a value that the model learns during training to make predictions on new data.

What is a parameter?

In machine learning, a parameter refers to a value that a model learns during the training process, which it then uses to make predictions on new data. 

The model's parameters are essentially the knobs that the training algorithm turns to adjust the model's behavior and improve its accuracy. They are also called "trainable parameters", because they can be "trained" by the learning algorithm. The specific parameters a model has depend on the type of model being used.

Machine learning models may have millions or billions of parameters. 

Example of trainable model parameters

For example, in a linear regression model, the parameters are the coefficients the model assigns to each input feature, together with the intercept term. In a neural network, the parameters include the weights on the connections between neurons and the biases used to shift each neuron's output.
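As a rough illustration, here is a minimal sketch in NumPy (with made-up feature counts and layer sizes) that builds the parameter arrays for a linear regression and a small neural network, and counts them:

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear regression on 3 input features:
# parameters = one coefficient per feature, plus one intercept.
n_features = 3
coefficients = rng.normal(size=n_features)   # trainable
intercept = 0.0                              # trainable

def linear_regression(x):
    return x @ coefficients + intercept

# Small neural network (3 inputs -> 16 hidden units -> 1 output):
# parameters = a weight matrix and a bias vector for each layer.
W1, b1 = rng.normal(size=(3, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 1)), np.zeros(1)

def neural_network(x):
    hidden = np.maximum(0.0, x @ W1 + b1)    # ReLU activation
    return hidden @ W2 + b2

print("Linear regression parameters:", coefficients.size + 1)   # 4
print("Neural network parameters:",
      sum(p.size for p in (W1, b1, W2, b2)))                    # 3*16 + 16 + 16*1 + 1 = 81
```

The same bookkeeping applies to much larger models: a large neural network simply has many more (and bigger) weight matrices and bias vectors, which is how parameter counts reach the millions or billions mentioned above.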

During the training process, the model tries different values for its parameters until it finds the set of values that minimizes the difference between its predicted output and the true output, as measured by a loss function. This process is known as optimization or learning, and it is what allows the model to make accurate predictions on new data.
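A minimal sketch of that optimization loop, using gradient descent on a mean-squared-error loss (the synthetic data and the learning rate here are made up for illustration), might look like this:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data generated from a known linear rule, so we can check the fit.
X = rng.normal(size=(200, 3))
true_w, true_b = np.array([2.0, -1.0, 0.5]), 0.3
y = X @ true_w + true_b + 0.01 * rng.normal(size=200)

# Start from arbitrary parameter values.
w = np.zeros(3)
b = 0.0
learning_rate = 0.1  # chosen by us in advance, not learned

for step in range(500):
    predictions = X @ w + b
    error = predictions - y
    # Gradients of the mean squared error with respect to each parameter.
    grad_w = 2 * X.T @ error / len(y)
    grad_b = 2 * error.mean()
    # "Turn the knobs" slightly in the direction that reduces the error.
    w -= learning_rate * grad_w
    b -= learning_rate * grad_b

print(w, b)  # should end up close to true_w and true_b
```

Each iteration nudges the parameters in the direction that reduces the loss, which is exactly the knob-turning described above.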

What's the difference between hyper-parameters and parameters?

A hyper-parameter is a parameter that is not learned during training. Instead, it is either set by the user in advance (for example, the learning rate or the number of training epochs allowed) or is a characteristic of the dataset or experimental methodology (such as the number of samples available).
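To make the distinction concrete, here is a small, hypothetical sketch that separates the values chosen before training from the values that training produces:

```python
import numpy as np

# Hyper-parameters: fixed by the user (or the experimental setup) before
# training starts, and never updated by the training loop itself.
hyperparameters = {
    "learning_rate": 0.1,
    "n_epochs": 500,
    "hidden_units": 16,
}

# Parameters: initialised to arbitrary values, then repeatedly updated by
# the training loop until the loss stops improving.
rng = np.random.default_rng(0)
parameters = {
    "W1": rng.normal(size=(3, hyperparameters["hidden_units"])),
    "b1": np.zeros(hyperparameters["hidden_units"]),
    "W2": rng.normal(size=(hyperparameters["hidden_units"], 1)),
    "b2": np.zeros(1),
}
```

Changing a hyper-parameter (say, the number of hidden units) means re-running training, which then produces a new set of learned parameters; tuning hyper-parameters is therefore an outer search wrapped around the training process.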
