
I'm just getting started with Blender and have been playing around with it for a few weeks. I have a pretty particular use case and wanted to see if Blender could assist me with it.

I have written a program that is focused on distributed computing/interactions. The best way to describe it is through an example. Take 100 drones that I want to set up in a specific formation. I could have one program that coordinates every drone, but instead I'm working on a distributed model where each drone makes its own decision based on the information it sources locally. The end result should be the same: the swarm conforms to the target shape, but in my case using only local information rather than global information.
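To make that concrete, here is a minimal, self-contained sketch of the kind of local rule I mean (plain Python, hypothetical names, not my actual program): each drone knows only a shared target radius and the positions of the few neighbours it can sense, nothing else.

import math

def local_step(pos, vel, neighbours, radius=3.0, dt=0.1):
    """pos and vel are [x, y] lists; neighbours holds the positions of the
    few drones within sensor range, the only external information used."""
    d = math.hypot(pos[0], pos[1]) or 1e-6
    # steer toward the target ring: `radius` away from the origin
    ax = pos[0] / d * (radius - d)
    ay = pos[1] / d * (radius - d)
    # repel from each sensed neighbour, harder the closer it is
    for nx, ny in neighbours:
        dx, dy = pos[0] - nx, pos[1] - ny
        d2 = max(dx * dx + dy * dy, 1e-6)
        ax += dx / d2
        ay += dy / d2
    vel[0] = (vel[0] + ax * dt) * 0.95  # damped so the drones settle
    vel[1] = (vel[1] + ay * dt) * 0.95
    pos[0] += vel[0] * dt
    pos[1] += vel[1] * dt

Run 100 drones with a rule like this and the swarm should relax onto a ring without any central coordinator.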

How does this tie into Blender? Ideally, I'm looking for a way to simulate this scenario: basically, create 100 drones, load them with my software, click run, and visually watch the movement to understand whether the target shape was achieved.

Is this possible with Blender? If not, I'm open to hearing about other tools.

joethemow
1 Answer


Not drones but Suzannes: only 8 of them, only moving in 2D, in a very unimaginative formation, but the key principles are here. Run the script, move the mouse over the 3D Viewport, and press Space.

import bpy, copy
from bpy import context as C, data as D
from mathutils import Vector

sim_start_frame = 1  # when to start the simulation
sim_end_frame = 2000  # save CPU cycles if you go get a coffee after starting the sim
substeps = 5  # it's common in simulations to allow more than one step per frame
seconds_per_frame = 1 / C.scene.render.fps
drones = [d for d in D.collections['drones'].objects]  # the objects to move
mem = {d.name: {"a": Vector(), "v": Vector()} for d in drones}  # per-drone acceleration and velocity
frame_data = {"mem": mem, "loc": {d.name: d.location.copy() for d in drones}}
cache = {sim_start_frame-1: frame_data}  # state per frame, so scrubbing back replays instead of recomputing

def brain(dt, memory, gps):
    """
    This is drone logic. Notice how I didn't properly encapsulate it, and so
    for example you can set memory["v"] to a huge number. Normally, you want
    to only be able to send signals to engines here, and have the outside of
    this function take care of simulating engines accelerating etc.
    """
    gps[0].z, gps[1].z, gps[2].z = 0, 0, 0  # 2D: keep everything in the XY plane
    ideal_dist_from_center = 3
    # goal1: close the gap between this drone and the target ring around the origin
    goal1 = gps[0].normalized() * (ideal_dist_from_center - gps[0].length)

    # goal2: push away from the nearest neighbour, harder the closer it is
    to_d1 = gps[0] - gps[1]
    dist_d1 = to_d1.length
    goal2 = to_d1.normalized() * 2.5 / dist_d1

    # goal3: same for the second-nearest neighbour
    to_d2 = gps[0] - gps[2]  # fixed: was gps[1], which just duplicated goal2
    dist_d2 = to_d2.length
    goal3 = to_d2.normalized() * 2.5 / dist_d2

    jerk = (goal1 + goal2 + goal3) / 3000
    new_a = memory["a"] * .9 + jerk
    max_a = new_a.normalized() / 1000  # cap: length 0.001, pointing the same way as new_a
    memory["a"] = new_a if new_a.length < max_a.length else max_a  # clamp the magnitude, not the components
    memory["v"] *= .95  # damping, air resistance etc.
    memory["v"] += memory["a"]


def frame_change_pre(scene, depsgraph=None):
    frame = C.scene.frame_current
    if frame in cache:  # restore positions from cache
        for name, loc in cache[frame]["loc"].items():
            D.objects[name].location = loc.copy()
        return
    if frame < sim_start_frame or frame > sim_end_frame:
        return
    if frame-1 not in cache:
        return  # don't compute if a frame was skipped
    frame_data = copy.deepcopy(cache[frame-1])
    cache[frame] = frame_data

    dt = seconds_per_frame / substeps
    for _ in range(substeps):
        for d in drones:
            distance = lambda x: (x - d.location).length
            # get this and the 2 nearest drones' locations:
            gps = sorted([l for l in frame_data["loc"].values()], key=distance)[:3]
            brain(dt, mem[d.name], gps)
            d.location += mem[d.name]["v"]
            frame_data["loc"][d.name] = d.location.copy()


listeners = bpy.app.handlers.frame_change_post
listeners.clear()  # remove old handlers before adding updated versions
listeners.append(frame_change_pre)  # note: registered on frame_change_post despite the function's name
C.scene.frame_current = sim_start_frame
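The script assumes a collection named 'drones' already exists in the scene. If you want something quick to test with, one possible setup sketch (the size and the 8-around-a-circle layout are arbitrary choices of mine, not part of the script above):

import bpy
from math import cos, sin, tau

# Build a 'drones' collection with 8 Suzannes placed off the target ring,
# so there is visible motion once the simulation starts.
col = bpy.data.collections.new('drones')
bpy.context.scene.collection.children.link(col)
for i in range(8):
    angle = tau * i / 8
    bpy.ops.mesh.primitive_monkey_add(size=0.5, location=(5 * cos(angle), 5 * sin(angle), 0))
    obj = bpy.context.active_object
    for c in obj.users_collection:
        c.objects.unlink(obj)  # take it out of whatever collection the operator used...
    col.objects.link(obj)      # ...and put it into 'drones'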

Markus von Broady
    In the end I didn't use dt (delta time) in the brain, but because of that, changing the number of substeps changes the speed of the simulation instead of just its quality. – Markus von Broady Sep 17 '21 at 22:40
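    For anyone wondering what folding dt back in might look like, here is a sketch under the assumption that "a" and "v" are reinterpreted as per-second rates (this is not the answer's code): with these units, raising substeps only refines the result instead of speeding up the swarm.

    from mathutils import Vector

    def integrate(memory, location, dt):
        """Hypothetical dt-aware tail of brain() plus the position update."""
        memory["v"] *= 0.95 ** dt           # damping as a per-second rate
        memory["v"] += memory["a"] * dt     # v += a * dt
        return location + memory["v"] * dt  # x += v * dt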