
I want to create particles that are 2d, and always drawn the exact same size, no matter where they are in the scene, or at what angle, OR what size the final render is.

So if my particle texture is 8x8 pixels, I want it to show up as 8x8 pixels exactly on the final render.

This can be done with an alpha overlay in the compositor, but I'd like to apply it to moving particles, and other objects. It would also be nice if they respected the depth of other objects (so if another object is in front of it, it obscures the particle).

EDIT:

I have found a way to do this with geometry nodes by scaling objects depending on their distance from the camera:

[screenshot: geometry nodes setup scaling instances by their distance from the camera]
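The scaling step the nodes implement can be sketched as plain math (a minimal stand-in for the node tree, not the .blend itself; the 50 mm focal length, 36 mm sensor width, 1920 px Resolution X, and horizontal sensor fit are assumed example values):

```python
def pixel_world_size(depth, focal_length_mm=50.0, sensor_width_mm=36.0,
                     resolution_x=1920):
    """World-space width of one render pixel at the given depth from the
    camera (perspective camera, horizontal sensor fit assumed)."""
    return depth * (sensor_width_mm / focal_length_mm) / resolution_x

def sprite_plane_size(depth, sprite_px=8, **camera):
    """Edge length for an instanced plane so an NxN-pixel texture covers
    exactly NxN pixels in the final render."""
    return sprite_px * pixel_world_size(depth, **camera)
```

The key point is that the plane's world size must grow linearly with its depth along the camera axis, so its projection stays a constant number of pixels.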

Unfortunately it's not quite right, and the sprites frequently get stretched or squashed by 1 px, which looks very distracting, especially in animations:

[screenshot: rendered sprites stretched or squashed by one pixel]

Here are the nodes / .blend

https://blenderartists.org/uploads/short-url/5qs6SVA0ARB0jijR0EDvTBhPTCX.blend

[screenshot: full node setup]

If anyone can figure out how to avoid this, it would be great.

stackers
  • Something to do with the “Window” texture coordinate? – TheLabCat Feb 10 '22 at 21:01
  • I would guess the reason is rounding/precision issues – Chris Feb 27 '22 at 08:26
  • Okay, how to fix it? – stackers Feb 27 '22 at 17:54
  • The reason is that even for a perfectly aligned plane, in perspective projection various "pixels" (understood as regions translating to a pixel) have different areas depending on their distance from the camera, which is not constant (it would be constant only if instead of a plane you used part of a sphere, with the camera's pinhole at the center of that sphere). – Markus von Broady Feb 28 '22 at 12:18
  • @MarkusvonBroady I think I see what you're saying: the top right corner of the render is farther from the camera than something in the middle. The sphere idea is interesting, though I'm not sure exactly how it would be calculated; surely a normal sphere would have far too much curvature. Perhaps I could generate a grid and bend it with geonodes, probably based on the camera perspective. – stackers Feb 28 '22 at 19:25
  • I finally got some time to play around with this, and as soon as I opened Blender the practice brutally defeated the theory... What I said would be true for a human eye, but is not true for the flat sensor that Blender simulates. – Markus von Broady Feb 28 '22 at 21:06
  • In "theory" I could imagine that if you used a bigger image with more resolution, it would flicker less... – Chris Mar 01 '22 at 08:13
  • Yes, if I render at a higher resolution there is no flickering, but I want to render the rest of the scene with pixels of the same size as the texture. – stackers Mar 02 '22 at 16:43
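The flicker discussed in the comments can be reproduced numerically: a plane of fixed world size drifts across fractional pixel coverage as its depth changes, while a depth-corrected plane stays at a constant 8 px (a standalone sketch; the 50 mm focal length, 36 mm sensor width, and 1920 px width are assumed example values):

```python
def projected_px(plane_size, depth, focal_length_mm=50.0,
                 sensor_width_mm=36.0, resolution_x=1920):
    """How many render pixels the plane spans at this depth
    (perspective camera, horizontal sensor fit assumed)."""
    pixel_world = depth * (sensor_width_mm / focal_length_mm) / resolution_x
    return plane_size / pixel_world

# Fixed-size plane: pixel coverage varies with depth, so rasterization
# rounds it to a different whole-pixel size from frame to frame.
sizes_fixed = [projected_px(0.03, d) for d in (9.0, 10.0, 11.0)]

# Depth-corrected plane: rescaled per depth, coverage stays exactly 8 px.
sizes_corrected = [projected_px(8 * d * (36.0 / 50.0) / 1920, d)
                   for d in (9.0, 10.0, 11.0)]
```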

1 Answer


If your camera is aligned to the axes, you can Separate XYZ to easily get the distance between the sensor and an instanced plane along the camera's view axis (the central camera ray, the center of the camera's frustum), and use the formula from Gordon Brinkmann's answer:

Why does reducing Resolution X increase my camera's FOV and increasing it reduce FOV?
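For reference, the underlying pinhole relation (the standard textbook formula, stated here as an assumption rather than quoted from that answer) ties the FOV to the focal length and the sensor dimension the render is fitted to:

```python
import math

def horizontal_fov(focal_length_mm, sensor_width_mm=36.0):
    """Horizontal field of view (radians) of a perspective camera
    with horizontal sensor fit."""
    return 2.0 * math.atan(sensor_width_mm / (2.0 * focal_length_mm))
```

With AUTO sensor fit, the sensor maps to whichever render dimension is larger, which is why changing Resolution X can change the effective FOV.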

If you can't afford to have the camera aligned, you can create a vector representing the camera's direction when it has no rotation (0; 0; -1), rotate it by the camera's rotation, and calculate the Dot Product (a.k.a. projection product) with the difference between the camera location and a given point location.
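That construction can be sketched in plain Python (a pure-Python stand-in for the node math; inside Blender, mathutils would do the same job; XYZ Euler order, Blender's default, is assumed):

```python
import math

def rotate_euler_xyz(v, rot):
    """Rotate vector v by Euler angles (rx, ry, rz) in radians,
    applied in XYZ order (Blender's default)."""
    x, y, z = v
    rx, ry, rz = rot
    # rotate about X
    y, z = y * math.cos(rx) - z * math.sin(rx), y * math.sin(rx) + z * math.cos(rx)
    # rotate about Y
    x, z = x * math.cos(ry) + z * math.sin(ry), -x * math.sin(ry) + z * math.cos(ry)
    # rotate about Z
    x, y = x * math.cos(rz) - y * math.sin(rz), x * math.sin(rz) + y * math.cos(rz)
    return (x, y, z)

def camera_depth(cam_loc, cam_rot, point):
    """Distance from the camera along its view axis: rotate (0, 0, -1)
    by the camera's rotation, then dot it with (point - camera location)."""
    fx, fy, fz = rotate_euler_xyz((0.0, 0.0, -1.0), cam_rot)
    dx, dy, dz = (p - c for p, c in zip(point, cam_loc))
    return fx * dx + fy * dy + fz * dz
```

A point directly in front of the camera gets a positive depth; the result is the perpendicular distance to the sensor plane, not the Euclidean distance to the camera origin.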

But then you would still need to calculate the sensor's XY coordinates in a similar way, so I decided to just rotate the coordinates to where they would have been if the camera looked straight down, snap them, and then rotate back.

The oddness/evenness of each axis's resolution has to be the same for the image and the render; otherwise the center of the image falls between pixels (on a pixel boundary) while it gets snapped to a pixel center, or vice versa. The node setup could be improved to deal with that, but it would clutter the main solution.
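The snapping step together with the parity caveat can be illustrated per axis (a sketch under the same assumptions; when the resolution along an axis is even, the frame center sits on a pixel boundary, so pixel centers fall at half-offsets):

```python
import math

def snap_axis(coord, pixel_size, resolution):
    """Snap a camera-space coordinate to the nearest render-pixel center.
    Even resolution: centers at (n + 0.5) * pixel_size;
    odd resolution: centers at n * pixel_size."""
    if resolution % 2 == 0:
        return (math.floor(coord / pixel_size) + 0.5) * pixel_size
    return round(coord / pixel_size) * pixel_size
```

If the image's pixel parity doesn't match the render's, this snap lands texture pixel boundaries half a pixel off the render grid, which is exactly the 1 px stretch/squash the question describes.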

Likewise, I only use Resolution X, because I assume either horizontal sensor fit, or auto fit with the horizontal dimension being bigger.

Markus von Broady
  • Wow. I really thought this was impossible. :) – Robin Betts Mar 11 '22 at 16:48
  • I'm amazed, it seems like you actually solved it. Still trying to understand it all. The camera distance from a plane rather than from the center of the camera makes sense; I actually tried something similar but it didn't seem to work. By "having the camera aligned" did you mean pointing at 0,0? – stackers Mar 12 '22 at 05:54
  • @stackers I didn't mean location but orientation. If your camera rotations are 0;0;0, then you can use my setup without the 2 Vector Rotate nodes. If your camera has some rotations but is aligned to axes, for example 90°;0;90° rotations, then you just need to treat the XYZ components differently (sensor Y becomes world Z, sensor X is still world X, distance to camera becomes world Y). – Markus von Broady Mar 12 '22 at 09:03
  • I've just noticed the textures are actually upside down (a bit hard to tell with my crappy texture). I tried to add some rotation at different points in the nodes, but it seems to move the points or rotate them to an incorrect angle (if I put a Vector Rotate right before Rotation on Instance on Points). Any ideas? – stackers Mar 14 '22 at 16:14