
I'm building a real-time image preprocessing pipeline using the Blender Python API and NumPy.
I couldn't find a proper way to access the rendered image from the camera view without saving it to disk. This seems like it should be a basic thing, so please tell me if there is a way to get the rendered image directly inside Python code.
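For what it's worth, the closest idea I have come across is routing the render through a compositor Viewer Node and reading its pixels back from bpy.data.images, but I have not confirmed this is the proper way, which is partly why I am asking. A rough sketch of that idea (assuming the 'Viewer Node' image datablock is populated after rendering):

import bpy
import numpy as np

# Route the render through a compositor Viewer Node so the result
# stays in memory instead of going through the file system.
scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree
tree.nodes.clear()

render_layers = tree.nodes.new('CompositorNodeRLayers')
viewer = tree.nodes.new('CompositorNodeViewer')
viewer.use_alpha = False
tree.links.new(render_layers.outputs['Image'], viewer.inputs['Image'])

bpy.ops.render.render()

# The Viewer Node writes into the 'Viewer Node' image datablock.
img = bpy.data.images['Viewer Node']
width, height = img.size
pixels = np.array(img.pixels[:], dtype=np.float32).reshape(height, width, 4)  # RGBA floats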

So, I decided to save the image as a .jpg inside a tmpfs file system and then read it back with PIL, but somehow I see no performance difference between tmpfs and ext4. Here is the code:

import bpy
import time
from PIL import Image

start_time = time.time()
counter = 0

# Render to JPEG at a fixed path (tmpfs or ext4, depending on the test).
bpy.context.scene.render.image_settings.file_format = 'JPEG'
bpy.context.scene.render.filepath = '/tmpfs_or_ext4_file_system/img.jpg'

for i in range(2000):
    bpy.ops.render.render(write_still=True)  # render and write the still image to disk
    pil_img = Image.open('/tmpfs_or_ext4_file_system/img.jpg')
    counter += 1

print((time.time() - start_time) / counter)

The outputs for ext4 and tmpfs are almost exactly the same. I thought tmpfs was supposed to speed up I/O-bound operations. Am I doing something wrong? How else can I try to improve the performance?
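In case it helps, here is a minimal sketch of how I plan to split the timing (same filepath setup as above, fewer iterations), to check whether the render call itself dominates and the file read is negligible, which would explain why tmpfs makes no difference:

import bpy
import time
from PIL import Image

filepath = '/tmpfs_or_ext4_file_system/img.jpg'
bpy.context.scene.render.image_settings.file_format = 'JPEG'
bpy.context.scene.render.filepath = filepath

render_time = 0.0
read_time = 0.0
n = 100  # fewer iterations, just to compare the two phases

for i in range(n):
    t0 = time.time()
    bpy.ops.render.render(write_still=True)  # render + JPEG encode + write
    t1 = time.time()
    Image.open(filepath).load()              # .load() forces PIL to actually decode the file
    t2 = time.time()
    render_time += t1 - t0
    read_time += t2 - t1

print('avg render+write:', render_time / n)
print('avg file read:   ', read_time / n)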
