In the v1.11.0 release, an example of BVH raycasting for box volumes is shown:
However, it is not clear to me whether this works the same way with meshes. The above seems to work, but it also feels like I'm doing something stupid, since I am manually creating the lower and upper bounds rather than using the meshes directly for the BVH calculation.
@StafaH, can you please weigh in, as you have most recently worked on this code?
Hi @tueboesen,

There are two examples you can follow for how to set up a BVH for rendering. The first is being built out in mujoco-warp and will be released soon; it can be found in the render branch of mujoco-warp: https://github.com/google-deepmind/mujoco_warp/tree/render The second is currently being built out in newton, and is called warp-raytrace: https://github.com/newton-physics/newton/blob/main/newton/_src/sensors/warp_raytrace/render.py

The example you have shown is correct in concept. Rendering usually takes the form of Top Level Acceleration Structures (TLAS) and Bottom Level Acceleration Structures (BLAS). At the top level you have your bounding volumes, which are your primitives in world space; at the bottom level you have meshes and other geometric types. You only need to build your lowers/uppers array once for the TLAS. Then, every frame, you should update that array with a kernel that adjusts each box based on the movement of its geometry. The BLAS usually stays static unless you have deformable geometry.

Those two repos have clean code that should be easy to follow to get an idea of how to build a rendering pipeline using this BVH (or you can use those libraries directly to avoid re-writing your own renderer). Hope this helps!
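To make the TLAS/BLAS split concrete, here is a minimal plain-Python sketch of the idea (no Warp dependency; `MeshBLAS` and `TLAS` are illustrative names, not the mujoco-warp API, and rotation is omitted here for brevity):

```python
class MeshBLAS:
    """Bottom level: one per mesh, built once for static geometry."""
    def __init__(self, vertices):
        self.vertices = vertices  # iterable of (x, y, z) points in mesh space
        # Canonical (local-space) bounds, computed once at build time.
        self.lower = tuple(min(v[i] for v in vertices) for i in range(3))
        self.upper = tuple(max(v[i] for v in vertices) for i in range(3))


class TLAS:
    """Top level: world-space AABBs over all geoms, refit every frame."""
    def __init__(self, blases):
        self.blases = blases
        # Placeholder world-space bounds until the first refit.
        self.lowers = [b.lower for b in blases]
        self.uppers = [b.upper for b in blases]

    def refit(self, positions):
        # Translate each canonical box by its geom's current world position
        # (in a real renderer this would be a per-geom kernel launch).
        for i, (b, p) in enumerate(zip(self.blases, positions)):
            self.lowers[i] = tuple(b.lower[k] + p[k] for k in range(3))
            self.uppers[i] = tuple(b.upper[k] + p[k] for k in range(3))
```

The point of the split is that `MeshBLAS` construction happens once, while `TLAS.refit` is the only work repeated per frame.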
Mesh BVHs are built like this: https://github.com/google-deepmind/mujoco_warp/blob/render/mujoco_warp/_src/bvh.py#L300
Static meshes only need to be built once. When you build a mesh, you should calculate its canonical size.
Then, when you update your lowers/uppers (which you do every frame, since the geometry is moving), you can take the originally calculated size, update it based on the new position/rotation, and write the new lowers/uppers for that mesh: https://github.com/google-deepmind/mujoco_warp/blob/render/mujoco_warp/_src/bvh.py#L210
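A common way to do that per-frame update is the absolute-rotation-matrix trick: transform the canonical box's center by the rigid transform, and bound its half-extents with `|R|`. This is a hedged sketch of the general technique, not the mujoco-warp kernel itself (`refit_aabb` is a hypothetical name):

```python
def refit_aabb(canonical_lower, canonical_upper, R, p):
    """World-space AABB of a canonical box under rotation R (3x3) and position p.

    The canonical bounds are computed once at mesh build time; this function
    is the cheap per-frame work that replaces re-scanning every vertex.
    """
    center = [(lo + hi) * 0.5 for lo, hi in zip(canonical_lower, canonical_upper)]
    half = [(hi - lo) * 0.5 for lo, hi in zip(canonical_lower, canonical_upper)]
    # World-space center: R @ center + p
    wc = [sum(R[i][j] * center[j] for j in range(3)) + p[i] for i in range(3)]
    # World-space half-extent: |R| @ half bounds the rotated box conservatively.
    wh = [sum(abs(R[i][j]) * half[j] for j in range(3)) for i in range(3)]
    lower = tuple(wc[i] - wh[i] for i in range(3))
    upper = tuple(wc[i] + wh[i] for i in range(3))
    return lower, upper
```

For example, a box with canonical bounds (-1, -2, -3) to (1, 2, 3) rotated 90 degrees about z swaps its x and y extents, which this computes without touching any vertices.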
As you mentioned, there is no smarter way to do this, other than streamlining your example to make sure you create the meshes in advance and sto…