I am a Blender beginner, so I hope this question is not too basic for the pros.
I am trying to reverse-engineer the dimensions of a riverbed based on a photo taken by a drone and a 3D model of a weir structure, which happens to be in the picture too ;) (see model and picture below).
In the picture, the red lines delimit the riverbed, and I need to know their length as well as their distance from the weir structure and from each other.
My idea was to render a picture of the 3D model of my reference object in Blender that matches the actual photo. After having a match (and thus having calibrated the camera position as well as its properties), I wanted to "find" the 3D position of the red lines by adjusting my 3D model (basically drawing these lines in 3D) and rendering it again until the lines on the 3D model match the red lines in the picture. This would then give me a 3D model from which to extract all the dimensions I need. (Note: I could draw the 3D lines on a plane, as I can reasonably assume that the river surface is close to planar.)
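To make the last step concrete: once the camera is calibrated, recovering the 3D position of an image point on the river surface amounts to intersecting the camera ray through that pixel with the assumed river plane. A minimal sketch of that intersection, assuming the river surface is the horizontal plane z = 0 and using made-up camera values (not from my actual setup):

```python
import numpy as np

def intersect_ray_with_plane(cam_pos, ray_dir, plane_z=0.0):
    """Intersect a camera ray (origin + direction) with the plane z = plane_z."""
    cam_pos = np.asarray(cam_pos, dtype=float)
    ray_dir = np.asarray(ray_dir, dtype=float)
    if abs(ray_dir[2]) < 1e-12:
        raise ValueError("Ray is parallel to the river plane")
    t = (plane_z - cam_pos[2]) / ray_dir[2]
    if t < 0:
        raise ValueError("Plane is behind the camera")
    return cam_pos + t * ray_dir

# Hypothetical example: drone 40 m above the river, looking forward and down.
point = intersect_ray_with_plane([0.0, 0.0, 40.0], [0.0, 0.5, -1.0])
# point is (0, 20, 0): where that viewing ray meets the water surface
```

Repeating this for the endpoints of each red line would give their 3D positions directly, without the render-and-compare iteration for the lines themselves.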
Now, I figured that, apart from requiring the 3D model of my reference object, I would need information about the camera. What I know is the drone and the type of camera that was used (DJI Phantom 3 Professional with an f/2.8 lens and a 94° field of view).
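For turning that 94° figure into Blender camera settings: as far as I can tell, DJI quotes the 94° as the diagonal field of view, and the camera is often described as a 20 mm full-frame equivalent. A minimal sketch of the conversion, assuming a diagonal FOV and a 36×24 mm equivalent sensor (these assumptions should be double-checked against the camera's spec sheet):

```python
import math

def focal_from_diagonal_fov(fov_deg, sensor_w=36.0, sensor_h=24.0):
    """Focal length in mm from a diagonal field of view and sensor size."""
    diagonal = math.hypot(sensor_w, sensor_h)  # 36x24 mm -> about 43.27 mm
    return (diagonal / 2.0) / math.tan(math.radians(fov_deg) / 2.0)

focal = focal_from_diagonal_fov(94.0)  # comes out near 20 mm
```

In Blender terms, this would mean setting the camera's sensor width to 36 mm and the focal length to roughly the value computed above; alternatively one could enter the physical sensor width and focal length, as long as both come from the same spec.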
After unsuccessfully trying for days to find the position of the drone and the angle of the camera when the picture was taken, I thought I'd ask you guys how to go about this task of camera calibration and dimension reverse engineering.
I hope there is a simpler way and certainly appreciate any help you are willing to give!

