In astronomy the brightness of stars is expressed on a scale created in the 2nd century B.C. by Hipparchus, called the scale of stellar magnitudes. Hipparchus assigned the brightest stars a magnitude of +1 m and the faintest stars visible to the naked eye +6 m. In the 19th century astronomers realized that the human eye estimates the ratio of the brightness of two stars, not their absolute difference. Mathematically this means that if the actual brightness increases geometrically, the perceived brightness increases arithmetically (the Weber-Fechner law).
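
The logarithmic character of perception can be illustrated with a minimal Python sketch (not from the text; the base-10 logarithm merely stands in for the eye's response):

import math

# Weber-Fechner: perception grows as the logarithm of the stimulus.
# If the physical brightness grows geometrically (a factor of ~2.512 per step),
# the logarithmic "perceived" value grows arithmetically (equal steps of ~0.4).
flux = 1.0
for step in range(6):
    perceived = math.log10(flux)   # stand-in for the eye's response
    print(f"flux = {flux:9.3f}  ->  perceived ~ {perceived:.3f}")
    flux *= 2.512                  # geometric increase in brightness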

In the 19th century the first instruments for measuring stellar magnitudes were built: visual photometers. When these instruments had been used successfully to determine many stellar magnitudes, it became clear that the eye-estimated magnitudes of Hipparchus and the other ancient astronomers were not far off. It was established that an interval of 5 magnitudes corresponds to a 100-fold difference in brightness, and that two stars separated by 1 magnitude differ in brightness by a factor of about 2.512. The full range of stellar magnitudes used in astronomy (from the Sun to the faintest star visible in a telescope) is about 51 magnitudes, which corresponds to a range in brightness of about 10^20.
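
These ratios are easy to verify numerically; the following is a minimal Python sketch (the function name brightness_ratio is chosen here for illustration):

# From the text: a 5-magnitude interval corresponds to a factor of 100,
# so 1 magnitude corresponds to 100**(1/5), about 2.512.
def brightness_ratio(delta_m):
    """Brightness ratio corresponding to a magnitude difference delta_m."""
    return 10 ** (0.4 * delta_m)

print(100 ** (1 / 5))                 # ~2.512, the ratio for 1 magnitude
print(brightness_ratio(5))            # 100.0
print(brightness_ratio(1))            # ~2.512
print(f"{brightness_ratio(51):.1e}")  # ~2.5e+20, i.e. of order 10^20 for 51 magnitudes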