I got a tracked robot toy and I'm controlling it with an iPhone. The robot outputs a live camera feed with a known frame size, and I'm displaying it as a UIImage.
I've added a laser pointer to the robot and mounted it parallel to the robot's axis. I'm trying to detect the laser dot in the image and use it to estimate the proximity of whatever the robot is facing. If the dot sits far from the center of the frame, I know the robot is right up against a wall and needs to back up.
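
For the proximity part, the usual trick with a laser mounted parallel to the camera axis is simple triangulation: the closer the obstacle, the farther the dot appears from the image center. A hedged sketch, assuming you know the laser's offset from the camera and the camera's angular resolution (all names below are illustrative, not an established API):

```swift
import Foundation

/// Sketch of the standard laser-rangefinder relation for a laser mounted parallel
/// to the camera axis at a known sideways offset.
/// `pixelsFromCenter` is how far the detected dot sits from the image center,
/// `radiansPerPixel` is roughly the camera's field of view divided by the frame width.
func estimatedDistance(pixelsFromCenter: Double,
                       radiansPerPixel: Double,
                       laserOffsetMeters: Double) -> Double {
    let angle = pixelsFromCenter * radiansPerPixel
    guard angle > 0 else { return .infinity }  // dot at the center means "very far"
    // The closer the object, the larger the apparent offset of the dot.
    return laserOffsetMeters / tan(angle)
}
```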
How can I go about detecting a dot of bright white-red pixels in the image? One option would be to sample the color of pixels within a certain radius of the center and look for a blob of bright color. Can anyone suggest an algorithm for this?
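
A straightforward start is exactly what you describe: walk the pixels in a window around the center, collect the ones that are both very bright and red-dominant, and take their centroid. A rough Swift sketch, assuming an RGBA frame delivered as a UIImage; all thresholds are guesses to tune against your own feed:

```swift
import UIKit

/// Minimal sketch: scan a square window around the image center for bright,
/// red-dominant pixels and return their centroid if enough of them are found.
func detectLaserDot(in image: UIImage,
                    searchRadius: Int = 60,
                    minBrightness: CGFloat = 0.85,
                    redDominance: CGFloat = 0.25,
                    minBlobPixels: Int = 12) -> CGPoint? {
    guard let cgImage = image.cgImage else { return nil }
    let width = cgImage.width, height = cgImage.height

    // Render into a plain RGBA8 buffer so pixels can be indexed directly.
    var pixels = [UInt8](repeating: 0, count: width * height * 4)
    let rendered = pixels.withUnsafeMutableBytes { buffer -> Bool in
        guard let ctx = CGContext(data: buffer.baseAddress,
                                  width: width, height: height,
                                  bitsPerComponent: 8, bytesPerRow: width * 4,
                                  space: CGColorSpaceCreateDeviceRGB(),
                                  bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
        else { return false }
        ctx.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
        return true
    }
    guard rendered else { return nil }

    let cx = width / 2, cy = height / 2
    var sumX = 0, sumY = 0, count = 0

    for y in max(0, cy - searchRadius)..<min(height, cy + searchRadius) {
        for x in max(0, cx - searchRadius)..<min(width, cx + searchRadius) {
            let i = (y * width + x) * 4
            let r = CGFloat(pixels[i])     / 255
            let g = CGFloat(pixels[i + 1]) / 255
            let b = CGFloat(pixels[i + 2]) / 255
            // "White-red": either saturated-bright overall, or red well above green/blue.
            let brightness = (r + g + b) / 3
            if brightness > minBrightness || (r > 0.9 && r - max(g, b) > redDominance) {
                sumX += x; sumY += y; count += 1
            }
        }
    }

    // Require a small cluster of hits so one noisy pixel doesn't count as the dot.
    guard count >= minBlobPixels else { return nil }
    return CGPoint(x: CGFloat(sumX) / CGFloat(count),
                   y: CGFloat(sumY) / CGFloat(count))
}
```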
Another approach would be to track the dot's average position over the last few frames and use it to shrink the search radius. If no dot turns up inside the predefined region, the search region could be expanded again.
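
For that tracking idea, an exponential moving average plus an adaptive radius is usually enough; something along these lines (the type name and constants are illustrative):

```swift
import CoreGraphics

/// Sketch of a simple tracker: keep a smoothed estimate of the dot position and
/// shrink or grow the search radius depending on whether the dot was found.
struct LaserDotTracker {
    private(set) var estimate: CGPoint?
    private(set) var searchRadius: CGFloat = 60

    private let smoothing: CGFloat = 0.3   // weight of the newest observation
    private let minRadius: CGFloat = 20
    private let maxRadius: CGFloat = 160

    mutating func update(with detection: CGPoint?) {
        if let p = detection {
            if let e = estimate {
                // Exponential moving average of the dot position.
                estimate = CGPoint(x: e.x + smoothing * (p.x - e.x),
                                   y: e.y + smoothing * (p.y - e.y))
            } else {
                estimate = p
            }
            // Dot found: tighten the search region around the estimate.
            searchRadius = max(minRadius, searchRadius * 0.8)
        } else {
            // Dot missed: widen the search region until it is found again.
            searchRadius = min(maxRadius, searchRadius * 1.5)
            if searchRadius >= maxRadius { estimate = nil }  // drop a stale estimate
        }
    }
}
```

Each frame you would run the detector in a window of `tracker.searchRadius` around `tracker.estimate` (falling back to the image center when there is no estimate) and then call `tracker.update(with:)` on the result.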
Finally, I want the robot to be able to detect carpet around it. Carpet reflects a laser pointer in a particular way, and I want to check how many of the patches around the robot share that appearance. Since I know where the laser dot is on screen, I can clip a small rectangle from the image around it and compare those patches to one another. Is there an efficient way to compare many small images against one another to tell whether they show the same kind of surface?
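
If you crop equally sized rectangles around the dot (for example with CGImage's `cropping(to:)`) and reduce them to grayscale intensity arrays, normalized cross-correlation is a cheap similarity measure that tolerates brightness changes between frames. A sketch under those assumptions:

```swift
import Foundation

/// Sketch of patch comparison, assuming both patches have already been cropped to
/// the same size and converted to grayscale intensities in 0...1.
/// Values near 1 mean "same texture"; values near 0 mean "unrelated".
func normalizedCrossCorrelation(_ a: [Float], _ b: [Float]) -> Float {
    precondition(a.count == b.count && !a.isEmpty)
    let n = Float(a.count)
    let meanA = a.reduce(0, +) / n
    let meanB = b.reduce(0, +) / n

    var num: Float = 0, varA: Float = 0, varB: Float = 0
    for i in 0..<a.count {
        let da = a[i] - meanA
        let db = b[i] - meanB
        num  += da * db
        varA += da * da
        varB += db * db
    }
    let denom = (varA * varB).squareRoot()
    return denom > 0 ? num / denom : 0
}
```

If you end up comparing many patches per frame, the Accelerate framework (vDSP) can vectorize these sums, and a simple histogram comparison can serve as an even cheaper first filter.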
I've also noticed that the laser reflects off glossy surfaces, and the direction of that reflection may tell me something about the orientation of the surface in space, per the law of reflection.
Thank you!

