The Law of Luminous Intensity Variation and Technical Vision

The success of the majority of image processing solutions depends crucially on suitable lighting. In this paper, we study the problem of monitoring passenger flows and consider an indirect method to take luminous intensity variation into account.

A large number of problems of technical vision have received a lot of attention recently (see e.g. [1]-[5]). In particular, we can mention various problems of robot visual navigation (see e.g. [6]-[13]).
The success of the majority of image processing solutions depends crucially on suitable lighting. Therefore, technical vision models must predict luminous intensity variation or, at least, brightness variation. To solve this problem we need methods to measure and take into account any luminance variation. Brightness is a fairly simple attribute of visual perception, describing how strongly a source appears to radiate or reflect light, so the brightness of an image is easy to calculate. However, knowledge of this attribute alone is often insufficient. For instance, the luminous intensity observed from a reflecting surface depends on the angle between the observer's line of sight and the surface normal. It should also be noted that there are many direct and indirect reciprocal effects between lighting and the environment: test objects, ambient light, lenses, cameras, and the machine environment, as well as the image processing hardware and software, all affect the success or failure of lighting.
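The two attributes discussed above can be made concrete. Below is a minimal sketch, assuming grayscale NumPy images; the function names and the idealized Lambertian (cosine) reflection model are illustrative assumptions, not part of the original algorithm.

```python
import numpy as np

def mean_brightness(image):
    """Mean grayscale intensity of an image: a simple brightness measure."""
    return float(np.mean(image))

def lambertian_intensity(i0, theta):
    """Illustrative angle dependence: for an ideal diffuse (Lambertian)
    surface, observed intensity falls off with the cosine of the angle
    theta (radians) between the line of sight and the surface normal."""
    return i0 * np.cos(theta)

frame = np.array([[10, 200], [30, 120]], dtype=np.uint8)
print(mean_brightness(frame))  # 90.0
```

This illustrates why brightness alone is not sufficient: two frames with equal mean brightness can still produce different observed intensities once viewing geometry changes.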
In this paper, we study the problem of monitoring passenger flows and consider an indirect method to take luminous intensity variation into account. We use the following algorithm for passenger detection. We consider videos received from a single bus camera. Our model uses a reference image Im; as the reference image we use an image of the bus with only free chairs. First, the reference image should be recognized; in particular, areas of interest should be defined. We consider only rectangular areas of interest, so the set of areas of interest can be viewed as a sequence M_1, ..., M_k. We consider a video file as a sequence of grayscale images. If N_l ≥ MinS, then we assume that the chair is occupied (see Figure 1). Our algorithm uses five constants, InnerThreshold, ExterThreshold, R, S, and MinS, whose values have no natural evidence; in general, we need five genetic algorithms for the proper adjustment of these constants. We denote our algorithm after adjustment of the constants by GS[0]. Let GS[1] be the algorithm GS[0] extended with the intelligent visual landmarks model from [1].
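The paper does not spell out how N_l is computed, so the following sketch fills the gap with one plausible reading: N_l counts the pixels in a rectangular area of interest whose grayscale difference from the reference image exceeds InnerThreshold. The function name, region encoding, and the per-pixel rule are assumptions for illustration only.

```python
import numpy as np

def occupied_chairs(frame, reference, regions, inner_threshold, min_s):
    """For each rectangular area of interest (x, y, w, h), count the pixels
    that differ from the reference image by more than inner_threshold (N_l)
    and flag the chair as occupied when N_l >= min_s."""
    occupied = []
    for (x, y, w, h) in regions:
        diff = np.abs(frame[y:y+h, x:x+w].astype(int)
                      - reference[y:y+h, x:x+w].astype(int))
        n_l = int(np.count_nonzero(diff > inner_threshold))
        occupied.append(n_l >= min_s)
    return occupied

ref = np.zeros((4, 4), dtype=np.uint8)   # reference: all chairs free
cur = ref.copy()
cur[0:2, 0:2] = 255                      # a "passenger" in the first region
print(occupied_chairs(cur, ref, [(0, 0, 2, 2), (2, 2, 2, 2)],
                      inner_threshold=50, min_s=3))  # [True, False]
```

The remaining constants (ExterThreshold, R, S) would enter the fuller pipeline the paper alludes to; they are left out here because their roles are not defined in the text.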
For our experiments, we consider a sequence of files [11].
Using visual observation, we have obtained the exact number Num(F[i]) of passengers for each file F[i]. For any image X and detector Y, let R(X, Y) be the result of detection of passengers on the image X by the detector Y.
It is natural that R(X, Y) includes some errors: in particular, the detection of a passenger where none exists, and the re-detection of the same passenger. Let L(X, Y) be the number of detections of passengers where they do not exist, and let M(X, Y) be the number of re-detections. From these we consider derived error measures parameterized by h, where 0 ≤ h ≤ 5 and 0 ≤ h ≤ 1, respectively. Selected experimental results are given in Figure 3. Although the algorithms GS[0] and GS[1] give us a relatively high level of errors, they have an important additional property: GS[0] and GS[1] allow us to detect the distribution of changes of lightness in the images of passengers. We can compare changes of lightness of the environment with the distribution of changes of lightness in the images of passengers. This gives us the capacity to create an intelligent algorithm for predicting the distribution of changes of lightness in the images of passengers, which we can use as a law of luminous intensity.
ACKNOWLEDGEMENTS. The work was partially supported by the Analytical Departmental Program "Developing the scientific potential of high school" 8.1616.2011.

Figure 1: An example of initial image (right) and result of recognition (left).

Figure 2: Comparison of GS and Haar cascades.

Figure 3: Comparison of errors of GS and Haar cascades.