Different tests had different averages and standard deviations (by the way, the standard deviation is roughly the typical difference between a randomly chosen score and the average). For example, let us imagine a test with 50 items and a test with 150 items. As you can guess, the average score in the first test could be 35 while in the second test maybe it is 100. And what is the usual difference between any random score and the average? Maybe in the first test it is 1 point but in the second test it is 3 points. How can we compare them? We can't, unless we transform the scores.
When plotted, the scores from all tests looked the same: most people had average intelligence, with fewer at the extremes. To find common ground, they "decided to use a scale, the average would always be 100 and the standard deviation 15". This way scores are always comparable.
Getting to that scale is a pretty easy two-step process. First, you take the score from the test, subtract the test's average, and divide the result by the standard deviation. That is a normalized score. You could already compare across tests with it, but we want to rescale it to the typical IQ scale.
Example: imagine a score of 39 in a test with an average of 35 and a standard deviation of 2 -> (39 - 35) / 2 = 2. The normalized score is "2".
The second step is simply to rescale to the typical IQ scale, with an average of 100 and a standard deviation of 15 -> (2 * 15) + 100 = 130. Perfect.
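For readers who prefer code, here is a minimal Python sketch of the same two steps (the function name and the default values are just illustrative, not from any standard library):

```python
def to_iq_scale(raw_score, test_mean, test_sd, iq_mean=100, iq_sd=15):
    """Convert a raw test score to the conventional IQ scale."""
    z = (raw_score - test_mean) / test_sd  # step 1: normalize the raw score
    return z * iq_sd + iq_mean             # step 2: rescale to mean 100, SD 15

# The worked example from above: score 39, average 35, standard deviation 2
print(to_iq_scale(39, 35, 2))  # -> 130.0
```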
Now that it is clear, let us recheck average IQ or jump to percentile.