
5-Level Explainer: Hyperparameters



Welcome back to Alectio’s Explainer series! In this episode, we’re exploring one of the hardest-to-understand concepts in AI: hyperparameters.

Timestamps:
00:00 – Intro
00:38 – Level 1: Kindergartener
01:52 – Level 2: Teenager
04:03 – Level 3: Non-expert adult
06:20 – Level 4: Computer science major
09:01 – Level 5: Machine learning expert

Contact us to learn how we can help your team build better models with less data. We’d love to show you how it works!

Alectio Explains Hyperparameters in 5 Levels of Difficulty

Transcript:


A 5-Year-Old

Have you ever wondered why your grandma’s cookies taste so good? One day, you simply ask her: “Why are your chocolate chip cookies so good?” And she responds with the very long story of how a recipe was passed down and how she followed the exact same ingredients in the recipe: the exact amount of eggs, the exact amount of flour, the exact amount of chocolate.

But the only thing that she really had to change was how long to bake the cookies for. She realized that baking the cookies for 20 minutes wasn’t the best idea, because they came out burnt. And then, after a whole bunch of trials, she realized that 12 minutes was the perfect number.

And that’s pretty much how hyperparameters work: you tweak these numbers until you get the best result. In this case, the number was the amount of time she baked the cookies for, which was 12 minutes.

At the end of the day, scientists aren’t perfect either, and they do trial-and-error runs too. That’s exactly what tuning hyperparameters is like.


A Teenager

I want you to imagine that you want to play guitar for your friends, so you pull a guitar out of the closet. It’s completely out of tune; you start playing; it sounds terrible. You decide: “All right! I’ve got to tune it.” But you don’t have a tuner, so you just start randomly twisting the pegs. Maybe you get the A string right, but all the rest of them are off. You play a chord, and it sounds terrible again.

You’re going to have to do this over and over, making more or less random choices, until the guitar overall sounds like something you could actually play a song on. Whether you play guitar or not doesn’t really matter if the instrument is completely out of tune.

It’s going to sound bad whether you’re Jimi Hendrix or some kid who’s at Guitar Center for the first time. The crazy thing is that there’s something in Machine Learning that’s very similar to this, and it’s called hyperparameters. Machine Learning is a science, but even in science there’s some guesswork, and this is one of those areas. In a model, there are both parameters and hyperparameters. Parameters are things that you understand and can describe: the mass of something, for instance. They’re measurements; they mean something.

Hyperparameters are more like arbitrary numbers; they’re just figures. When Machine Learning scientists are trying to make a model work, they insert more or less random numbers into their models, much like you would do twisting the pegs on a guitar.

The weird thing is that there are thousands, or even hundreds of thousands, of possible combinations. Eventually you can get to a more optimal mixture of those figures, but along the way what you’re doing is crossing candidates off a gigantic list of potential hyperparameter settings.

And much like with a guitar, there’s an art to this. It takes time, and it’s a little counter-intuitive that something that is a science involves this much guesswork. But in my mind, that’s one of the things that makes Machine Learning really interesting and exciting.


A Non-Expert Adult

You know how scientists build models for everything? You can think of a model as an equation that maps a set of independent variables to a target. These equations have a set of parameters that influence the model’s predictions. Tuning those parameters is essential for the model to better fit real-world data.
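To make the “equation with parameters” idea concrete, here is a minimal sketch in Python (the function, slope, and intercept below are illustrative assumptions, not something from the transcript):

```python
# A tiny model: a straight line y = w * x + b.
# w (slope) and b (intercept) are the model's parameters; tuning them
# so that predictions match real-world data is what "fitting" means.
def predict(x, w=2.0, b=1.0):
    """Map an independent variable x to a predicted target y."""
    return w * x + b

print(predict(3.0))  # 2.0 * 3.0 + 1.0 = 7.0
```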

These parameters are tangible in most cases. For an example of a tangible parameter, think about a model that predicts the spread of an infectious disease. The tangible parameter in this case would be the average number of people an infected person goes on to infect. Unfortunately, not all of the parameters involved in building a model are tangible. In some cases, you have a non-tangible parameter that influences the model’s predictions. And if you have one non-tangible parameter, chances are that you have more than one influencing the predictions or the training of the model.

For an example of a non-tangible parameter, think about the combination that opens a lock. The brute-force way of finding the right combination would be to try different combinations until you arrive at the one that opens the lock. Of course, you could be smarter about it by listening for the clicks until you arrive at the right combination. In that case you’d arrive at the result faster, but you’d have no reasoning behind it: you couldn’t explain why the combination you tried worked. Unfortunately, this is the situation Deep Learning engineers are in when they build models for a specific task. They have a set of hyperparameters that they try, or brute-force, until they arrive at the combination that makes the model predict better.


A CS Student

If you’ve taken a Machine Learning class while pursuing your degree, you might have heard of something called hyperparameters. These are basically parameters, or settings, on a Machine Learning model. The thing is, finding the right hyperparameters for a Machine Learning model is a really important task, because it’s the difference between your model performing really poorly and performing really well on the test set.
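As a concrete illustration, here is a minimal sketch of hyperparameters as “settings,” using scikit-learn’s RandomForestClassifier (an assumed choice; the transcript doesn’t name a library or model):

```python
from sklearn.ensemble import RandomForestClassifier

# n_estimators and max_depth are hyperparameters: they are chosen
# *before* training and are not learned from the data.
model = RandomForestClassifier(n_estimators=100, max_depth=5)
```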

For example, with bad hyperparameters you might get very low accuracy, like zero or five percent; with good hyperparameters you might get 90% accuracy or better (this is just an illustration). An active area of research is looking into better ways to do hyperparameter search, i.e., finding the right hyperparameters for your model so it performs as well as it can. One of the most basic approaches is called grid search. Grid search is a brute-force approach in which you go through every single combination of hyperparameters in the search space, as in the sketch below. This is a rather naive approach, because it is computationally very expensive; on a large search space, doing this exhaustively is not really feasible in the real world.
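A minimal grid-search sketch, assuming scikit-learn and a toy dataset (the model, grid values, and dataset are illustrative assumptions):

```python
from itertools import product

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=200, random_state=0)

# The search space: every combination of these values gets evaluated.
grid = {"n_estimators": [10, 50, 100], "max_depth": [3, 5, None]}

best_score, best_params = -1.0, None
for n, d in product(grid["n_estimators"], grid["max_depth"]):
    model = RandomForestClassifier(n_estimators=n, max_depth=d, random_state=0)
    score = cross_val_score(model, X, y, cv=3).mean()  # brute-force evaluation
    if score > best_score:
        best_score, best_params = score, {"n_estimators": n, "max_depth": d}

print(best_params, best_score)
```

Even with just two hyperparameters and a handful of values each, this is already 9 training runs; the cost grows multiplicatively with every hyperparameter you add, which is why exhaustive search quickly becomes infeasible.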

What’s a smarter way of approaching something like this? Well, one approach that has proven really effective, and that people use in both academia and industry, is called Bayesian optimization. It is fundamentally based on Bayes’ theorem, which is a method for calculating a conditional probability. What Bayesian optimization tries to do is estimate the objective function of a Machine Learning problem. Typically, a Machine Learning problem is defined by an objective function: a formal definition of the problem space. The thing is, for many Machine Learning problems the objective function cannot be solved analytically, and the solution cannot be obtained in anything like linear time. This means that computing the exact solution is either intractable or way too computationally expensive.
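For reference, Bayes’ theorem states that

P(A | B) = P(B | A) × P(A) / P(B)

that is, the probability of A given that B has been observed can be computed from the reverse conditional probability and the individual probabilities of A and B.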

The idea is that instead of calculating the objective function directly, we get a general sense of where to go: using Bayes’ theorem, Bayesian optimization incrementally guides the search through the hyperparameter space in the right direction. That is why Bayesian optimization is a significant improvement over something like grid search, and why it’s so frequently used in industry. A minimal sketch follows.
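Here is a minimal Bayesian-optimization sketch over a single hyperparameter, using the scikit-optimize library (an assumed choice; the transcript names no specific tool). gp_minimize fits a Gaussian-process surrogate to past evaluations and uses it to pick the next promising point, instead of brute-forcing the whole grid:

```python
from skopt import gp_minimize
from skopt.space import Integer

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=200, random_state=0)

def objective(params):
    """Score one hyperparameter setting; gp_minimize minimizes,
    so return the negative cross-validated accuracy."""
    (n_estimators,) = params
    model = RandomForestClassifier(n_estimators=n_estimators, random_state=0)
    return -cross_val_score(model, X, y, cv=3).mean()

# 15 evaluations, each chosen using the surrogate model's belief
# about where the objective is likely to be best.
result = gp_minimize(objective, [Integer(10, 200)], n_calls=15, random_state=0)
print(result.x, -result.fun)
```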


An ML Expert

Will we ever truly understand those pesky hyperparameters? Of course, I’m not talking about their mathematical meaning. We all know what a number of epochs, a number of layers, or a batch size stands for. I’m talking about understanding the reason why a specific value turns out to be optimal for a specific hyperparameter. Even those of us who are extremely good at hyperparameter tuning might still not claim to truly understand them, just because we understand some of the heuristics.

I was trained originally as a particle physicist, and I spent years trying to contribute to what scientists call the Standard Model of particle physics. That’s basically one big equation trying to explain the behavior of every single particle in the universe. In other words, it’s one gigantic equation trying to summarize the entire universe. Pretty ambitious, right?

At first, I thought that my job as a particle physicist was to build models to measure as many of those parameters as possible, but I eventually found out that a lot of scientists were actually just using arbitrary values that made the model look right, closer to reality. For example, you might not know this, but even the gravitational constant is a value that no scientist can truly explain. Later on, scientists would sometimes discover a new way to actually measure one of those arbitrary parameters, and sometimes they would find out that one of those constants was not a constant at all: it had to be replaced by an entirely new mathematical formula, because progress in theoretical physics had made it clear that it was never a constant in the first place.

Sometimes I wonder if the exact same thing isn’t happening with Deep Learning. In fact, did you ever wonder whether the fact that we still don’t know how to explain those hyperparameters is nothing other than a sign that we haven’t done enough modeling yet? Well, only time will tell!


