I would use a novelty detection approach: train a one-class SVM, which learns a tight decision boundary around the existing positive samples (a hyperplane in the kernel's feature space). Alternatively, you could fit a Gaussian mixture model (GMM), which encloses the positive examples in a set of hyper-ellipsoids. Then, given a test image, the one-class SVM tells you whether it falls inside the learned boundary, while for the GMM you check whether its likelihood under the mixture exceeds a threshold. Both approaches are proven to work well in practice.
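A minimal sketch of both detectors, assuming scikit-learn and a made-up 2-D feature space standing in for your image features (the cluster locations, `nu`, and the 5th-percentile threshold are illustrative choices, not prescriptions):

```python
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Hypothetical stand-in for your positive image features:
# two tight clusters in a 2-D feature space.
X_pos = np.vstack([
    rng.normal(loc=0.0, scale=0.3, size=(100, 2)),
    rng.normal(loc=3.0, scale=0.3, size=(100, 2)),
])

# One-class SVM: nu upper-bounds the fraction of training
# points allowed to fall outside the learned boundary.
ocsvm = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(X_pos)

# GMM: fit one Gaussian per cluster, then threshold the
# log-likelihood at e.g. the 5th percentile of training scores.
gmm = GaussianMixture(n_components=2, random_state=0).fit(X_pos)
threshold = np.percentile(gmm.score_samples(X_pos), 5)

X_test = np.array([[0.1, -0.1],    # near a positive cluster
                   [10.0, 10.0]])  # far from both clusters

svm_pred = ocsvm.predict(X_test)                 # +1 = inlier, -1 = novelty
gmm_pred = gmm.score_samples(X_test) >= threshold  # True = inlier
```

With well-separated data like this, both detectors accept the point near the cluster and reject the distant one; on real features you would tune `nu` and the likelihood threshold on held-out positives.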
If you also have some unlabeled data in your training set, I would certainly adopt a variant of transfer learning (or, more specifically, semi-supervised self-training): you may be able to automatically label the unlabeled data based on the model already learned from the positive samples.
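One simple way to realize this is pseudo-labeling: let the novelty detector trained on the known positives accept or reject each unlabeled point, then refit on the expanded positive set. A sketch, again with scikit-learn and synthetic features (all names and cluster positions are illustrative assumptions):

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(1)
# Hypothetical features: a labeled positive set plus an unlabeled
# pool mixing further positives with unrelated samples.
X_pos = rng.normal(loc=0.0, scale=0.3, size=(100, 2))
X_unlabeled = np.vstack([
    rng.normal(loc=0.0, scale=0.3, size=(50, 2)),  # likely positives
    rng.normal(loc=5.0, scale=0.3, size=(50, 2)),  # likely not
])

# Round 1: train the novelty detector on the known positives only.
detector = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(X_pos)

# Pseudo-label: unlabeled points the detector accepts (+1)
# are treated as additional positives.
accepted = detector.predict(X_unlabeled) == 1
X_pos_expanded = np.vstack([X_pos, X_unlabeled[accepted]])

# Round 2: refit on the expanded positive set; this could be
# iterated until no new points are accepted.
detector = detector.fit(X_pos_expanded)
```

Pseudo-labels inherit the detector's mistakes, so in practice you would only accept high-confidence points (e.g. thresholding `decision_function`) rather than every prediction of +1.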