First of all, I hope this is the right place to post the question I'm about to ask.
A little background on my problem: I am a data scientist / machine learning engineer / you name it (PhD in applied neural networks). One big problem in machine learning is getting a dataset large enough to properly train your model for the task at hand, and it is especially common in image detection / classification. I recently found a paper where people trained neural networks on 3D-rendered images, so I wanted to try that out.
I'm totally new to Blender (5 days of following tutorials at most), but I'm more or less getting the scenes I want with Eevee. The reason I'm posting here is that I couldn't find a plug-in/add-on able to automatically outline specific visible items in a render; otherwise I'll have to do it myself on several hundred images.
I tested the rendered picture on a panoptic segmentation network (Detectron2) and it seems to work. Being able to generate whole training-ready image datasets this way would be really huge.
Is there a way to do this automatically (or is there a tiny button somewhere that dumps the coordinates of the outline polygon to a file)? Is it possible to create an add-on that does it (I code ML models in Python, so I should be fine)?
EDIT: About outlining - With a 3D-rendered scene I can get enough data to train a model. However, I still have to manually create the polygon containing the object I want to detect (so the model knows what it is looking for during training), and doing that on hundreds or thousands of images could take ages. I can use the coordinates of the polygon's corner points, or I can extract them myself if the corresponding mask is also generated (like in the following picture).
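To make the goal concrete, here is a rough sketch of the kind of "dump the polygon coordinates to a file" step I have in mind, projecting an object's vertices into image coordinates. The object name "Target" and the output path are placeholders, and this only gives projected vertices (no occlusion handling), which is why a rendered mask would probably be the better route:

```python
import json
import bpy
from bpy_extras.object_utils import world_to_camera_view

scene = bpy.context.scene
cam = scene.camera
obj = bpy.data.objects["Target"]  # placeholder: the object of interest

res_x = scene.render.resolution_x
res_y = scene.render.resolution_y

points = []
for v in obj.data.vertices:
    world_co = obj.matrix_world @ v.co
    # world_to_camera_view returns normalized (x, y, depth) camera-view coords
    co = world_to_camera_view(scene, cam, world_co)
    if 0.0 <= co.x <= 1.0 and 0.0 <= co.y <= 1.0 and co.z > 0.0:
        # Flip y: the camera view origin is bottom-left, image coords are top-left
        points.append((co.x * res_x, (1.0 - co.y) * res_y))

# Dump the projected points; turning them into an outline polygon
# (e.g. a convex hull) would still be up to the training pipeline
with open("/tmp/target_points.json", "w") as f:
    json.dump(points, f)
```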
The whole point of this is to:
- Create a scene and place my camera
- Render the scene / get a picture
- Get the mask / coordinates / ... of the item I'm interested in from the render
- Move the camera / tweak the scene, and go back to step 2
Note: Steps 2 and 3 should be as fast as possible, given that I'll have to repeat them hundreds of times (or maybe the whole loop can be automated too? That is beyond my current understanding of Blender, but I've sketched my best guess below).
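Here is a rough sketch of what I imagine the whole loop could look like with Blender's Python API. This is guesswork on my side: it assumes Cycles (as far as I can tell the object-index pass is not available in Eevee), an object named "Target", and hard-coded camera positions and output paths, all of which are placeholders:

```python
import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'
scene.view_layers[0].use_pass_object_index = True

# Tag the object of interest so the ID Mask node can pick it out
target = bpy.data.objects["Target"]  # placeholder object name
target.pass_index = 1

# Build a small compositor graph: Render Layers -> ID Mask -> File Output
scene.use_nodes = True
tree = scene.node_tree
tree.nodes.clear()
rl = tree.nodes.new("CompositorNodeRLayers")
id_mask = tree.nodes.new("CompositorNodeIDMask")
id_mask.index = 1
comp = tree.nodes.new("CompositorNodeComposite")
mask_out = tree.nodes.new("CompositorNodeOutputFile")
mask_out.base_path = "/tmp/masks"
tree.links.new(rl.outputs["Image"], comp.inputs["Image"])
tree.links.new(rl.outputs["IndexOB"], id_mask.inputs["ID value"])
tree.links.new(id_mask.outputs["Alpha"], mask_out.inputs[0])

camera = scene.camera
camera_positions = [(5, -5, 3), (6, -4, 3), (4, -6, 2)]  # made-up poses

for i, loc in enumerate(camera_positions):
    camera.location = loc
    # The File Output node appends the frame number, so give each
    # iteration its own slot path to avoid overwriting the mask
    mask_out.file_slots[0].path = f"mask_{i:04d}_"
    scene.render.filepath = f"/tmp/renders/render_{i:04d}.png"
    bpy.ops.render.render(write_still=True)
```

If something like this is on the right track, steps 2-4 would reduce to editing the camera positions (or sampling them randomly) and running the script; corrections welcome.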
