If CCD data are to be reduced together, it is essential that they all be on the same instrumental system. First, for a given filter, all the data for each night must be reduced with a common average flat field. (It is possible to use a different flat for each night; this merely introduces a zero-point shift from one night to the next.)
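The nightly flat-field combination can be sketched as follows; this is a minimal pure-NumPy illustration (frame sizes, counts, and function names are hypothetical), using a median stack so that stars and cosmic rays in individual flat exposures are rejected:

```python
import numpy as np

def average_flat(flat_frames):
    """Combine one night's flat exposures (one filter) into a single
    master flat.  Each frame is first scaled to unit median so that
    exposure-level differences do not bias the median stack."""
    stack = np.stack([f / np.median(f) for f in flat_frames])
    master = np.median(stack, axis=0)
    return master / np.median(master)   # normalize to unit median

def flatten(frame, master_flat):
    """Apply the SAME master flat to every frame of the night."""
    return frame / master_flat

# Demo with synthetic flat exposures (hypothetical numbers).
rng = np.random.default_rng(0)
flats = [rng.normal(10000.0, 100.0, (64, 64)) for _ in range(5)]
master = average_flat(flats)
```

Dividing every frame of the night by the same `master` keeps the whole night on one instrumental system; a different master for another night only shifts that night's zero point.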
Second, all the data for a given night must be comparable, to satisfy Steinheil's principle. One can have problems with some image-extraction routines that use PSF fitting. Because of seeing variations during the night -- and especially because of the dependence of seeing on airmass -- systematic errors may be introduced by using different point-spread functions on different frames. If a very detailed PSF model is available, so that the whole energy in a star image is well extracted, with very small residuals, one may expect PSF fitting to work adequately. However, one must be sure that the extracted magnitudes refer to the total energy in the image, and are not just scaled to the peak.
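One practical check on whether an extraction refers to total energy is a curve of growth: on a bright, isolated star, the enclosed flux should level off at large radii, and the PSF-fitted flux should agree with that plateau. A minimal sketch, with a synthetic Gaussian star of known total flux standing in for real data (all numbers hypothetical):

```python
import numpy as np

def curve_of_growth(image, x0, y0, radii):
    """Enclosed counts versus aperture radius around (x0, y0).
    A PSF-fitted magnitude correctly scaled to total energy should
    match the flux where this curve flattens out."""
    yy, xx = np.indices(image.shape)
    r2 = (xx - x0) ** 2 + (yy - y0) ** 2
    return np.array([image[r2 <= r * r].sum() for r in radii])

# Synthetic Gaussian star: total flux 1000, sigma = 2 pixels.
flux, sigma = 1000.0, 2.0
yy, xx = np.indices((41, 41))
star = flux * np.exp(-((xx - 20) ** 2 + (yy - 20) ** 2)
                     / (2 * sigma ** 2)) / (2 * np.pi * sigma ** 2)

growth = curve_of_growth(star, 20, 20, radii=[2, 4, 8, 12])
# growth rises monotonically and converges toward the total flux
```

If the PSF-fitted flux for such stars falls systematically below the plateau, and by an amount that varies with seeing, the extracted magnitudes are effectively peak-scaled rather than total-energy measurements.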
If you use a PSF-fitting routine that leaves obvious ``blemishes'' when the fitted profile is subtracted from the original frame, it is likely that there will be systematic errors that depend on seeing. In turn, this means systematic errors that depend on airmass, which will spoil the determination of extinction coefficients.
In general, the safest approach with CCD data is to simulate ``aperture'' photometry, as it is often called -- just integrate the total signal in a box (round or square) of fixed size centered accurately on each star. This may give larger random errors than PSF-fitting, but smaller systematic errors. This balance between accuracy and precision is a common dilemma in stellar photometry.
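Synthetic-aperture photometry of this kind reduces, in essence, to summing counts inside a fixed aperture and subtracting a sky estimate from a surrounding annulus. A minimal NumPy sketch (radii, counts, and function names are hypothetical; pixels are counted wholly in or out, whereas production code would weight boundary pixels fractionally):

```python
import numpy as np

def aperture_photometry(image, x0, y0, r_ap, r_in, r_out):
    """Total counts in a circular aperture of fixed radius r_ap centered
    at (x0, y0), minus the sky level estimated as the median in an
    annulus between r_in and r_out."""
    yy, xx = np.indices(image.shape)
    r2 = (xx - x0) ** 2 + (yy - y0) ** 2
    ap = r2 <= r_ap ** 2
    ann = (r2 >= r_in ** 2) & (r2 <= r_out ** 2)
    sky_level = np.median(image[ann])
    return image[ap].sum() - sky_level * ap.sum()

# Synthetic star (total flux 1000, sigma = 2 px) on a flat sky of 50.
flux, sigma, sky = 1000.0, 2.0, 50.0
yy, xx = np.indices((61, 61))
star = sky + flux * np.exp(-((xx - 30) ** 2 + (yy - 30) ** 2)
                           / (2 * sigma ** 2)) / (2 * np.pi * sigma ** 2)

counts = aperture_photometry(star, 30, 30, r_ap=10, r_in=15, r_out=25)
```

Because the aperture size is fixed for every star on every frame, any light lost outside it is (to first order) the same fraction for all stars, so it cancels in the instrumental system rather than varying with seeing and airmass as a poor PSF fit can.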