I have used motionEyeOS in the past to implement a motion-triggered video recording system on a Raspberry Pi, right out of the box. It was nice in that it could do motion detection, video recording, and a live HTTP webcam server simultaneously. However, the motion detection wasn't good enough for my situation (I was getting lots of false positives), so I decided to take a stab at my own motion detection algorithm, custom-tailored to my environment.
I managed to use 'videoci' to stream raw RGB pixel buffers from the camera into C code so I could develop my motion detection algorithm. I then called 'raspivid' directly to record MP4 video clips from the camera. Finally, I got 'uv4l' working, which provided a live HTTP webcam server.
The problem is that I can only run one of these tasks at a time. It seems the camera becomes an exclusive resource: once one process activates it, the others can't access it.
Is there a way to run these tasks simultaneously? If not, is there a better approach that would make it possible? I'd prefer to keep my algorithm in C/C++ if possible.
