We always hear about supervised & unsupervised learning.
But there is another way to divide ML models.
A thread on Parametric and Nonparametric models.
🧵
As the name suggests, the distinction comes down to the parameters the models use.
They differ in two main aspects:
- Assumptions about the data
- Complexity of the model used
Let's start with Parametric models 🔽
1๏ธโƒฃ Parametric models
A parametric model has a fixed number of parameters, regardless of the size of the training data.
These models rely on assumptions about the data distribution.
For example, linear regression assumes a linear relationship between inputs and output, so it works well only when that assumption roughly holds.
Pros ✅
- Simple: once the parameters are estimated, the model is ready to use.
- Fast: A fixed number of parameters makes them efficient.
Cons 🚫
- Strong assumptions mean less flexibility.
- Limited complexity: their simplicity also means they can't capture complex relationships.
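Here's a minimal sketch of the idea (assuming NumPy and a synthetic dataset, not anything from this thread): no matter how many training rows you have, a linear regression is still just two numbers.

```python
import numpy as np

# Synthetic data: 1,000 samples from a noisy linear relationship (demo assumption)
rng = np.random.default_rng(0)
X = rng.uniform(-5, 5, size=(1000, 1))
y = 3.0 * X[:, 0] + 1.5 + rng.normal(0, 0.5, size=1000)

# Add a bias column and solve ordinary least squares.
# Whether X has 1,000 rows or 1,000,000, the fitted model is
# the same fixed size: one slope and one intercept.
X_design = np.hstack([X, np.ones((X.shape[0], 1))])
slope, intercept = np.linalg.lstsq(X_design, y, rcond=None)[0]

print(f"slope={slope:.2f}, intercept={intercept:.2f}")  # ~3.00 and ~1.50
```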
2๏ธโƒฃ Nonparametric models
In a nonparametric model, the number of parameters is determined by the data.
They make fewer assumptions and learn the data distribution directly from the training samples.
Pros ✅
- They can understand complex relationships.
- Flexible: Can adapt to different distributions.
Cons 🚫
- For large datasets, they can be slow.
- A larger training sample is required for accurate results.
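A classic example is k-nearest neighbors. Here's a minimal sketch (again with synthetic data, a demo assumption): the "model" is the stored training set itself, so it grows with the data.

```python
import numpy as np

# Synthetic 2-class data (demo assumption)
rng = np.random.default_rng(1)
X_train = rng.normal(0, 1, size=(200, 2))
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)

def knn_predict(x, k=5):
    # The entire training set acts as the parameters:
    # more data means a bigger model and slower predictions.
    dists = np.linalg.norm(X_train - x, axis=1)
    nearest = y_train[np.argsort(dists)[:k]]
    return np.bincount(nearest).argmax()  # majority vote among k neighbors

print(knn_predict(np.array([1.0, 1.0])))    # likely 1
print(knn_predict(np.array([-1.0, -1.0])))  # likely 0
```

Note there's no training step at all: all the work happens at prediction time, which is exactly why these models get slow on large datasets.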
That's it for today.
I hope you've found this thread helpful.
Like/Retweet the first tweet below for support and follow @levikul09 for more Data Science threads.
Thanks 😉
