Bobinas P4G

Conversation

Notices

  1. Bernie (codewiz@mstdn.io)'s status on Sunday, 26-Feb-2023 03:09:56 UTC

    First working demo using VK_KHR_ray_tracing_pipeline on #amdgpu

    The extension is still experimental and disabled by default in the newly released #Mesa 23.0. I had to turn it on with RADV_PERFTEST=rt.
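
    Before flipping that switch, one can check whether the driver exposes the extension at all. A minimal C sketch, assuming a VkPhysicalDevice `phys` was already selected during the usual instance setup:

      /* Sketch: does the driver expose VK_KHR_ray_tracing_pipeline?
       * `phys` is a VkPhysicalDevice selected elsewhere. */
      #include <stdbool.h>
      #include <stdlib.h>
      #include <string.h>
      #include <vulkan/vulkan.h>

      bool has_ray_tracing_pipeline(VkPhysicalDevice phys)
      {
          uint32_t count = 0;
          vkEnumerateDeviceExtensionProperties(phys, NULL, &count, NULL);
          VkExtensionProperties *exts = malloc(count * sizeof *exts);
          vkEnumerateDeviceExtensionProperties(phys, NULL, &count, exts);
          bool found = false;
          for (uint32_t i = 0; i < count; i++) {
              if (strcmp(exts[i].extensionName,
                         VK_KHR_RAY_TRACING_PIPELINE_EXTENSION_NAME) == 0) {
                  found = true;
                  break;
              }
          }
          free(exts);
          return found;
      }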

    Attachments


    1. https://media.mstdn.io/mstdn-media/media_attachments/files/109/928/801/382/227/297/original/3164715fe93578a7.png
    • Bernie (codewiz@mstdn.io)'s status on Sunday, 26-Feb-2023 03:27:25 UTC
      in reply to

      Framerate is good enough, but when I zoom on details I can see the characteristic sampling noise on soft shadows and in the blurry edges of objects not in focus.

      #raytracing #gpu #graphics

      Attachments


      1. https://media.mstdn.io/mstdn-media/media_attachments/files/109/928/856/322/858/831/original/d09524d6f6dc5130.png
    • Bernie (codewiz@mstdn.io)'s status on Sunday, 26-Feb-2023 03:43:10 UTC
      in reply to

      Quantization noise also occurs in extreme low-light photography, in which a small number of photons make it through the lenses to excite the CCD or the film.

      https://en.wikipedia.org/wiki/Shot_noise
      #raytracing #graphics
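
      (The analogy is quantitative; a standard result, not from the thread: both photon arrivals and Monte Carlo ray samples follow Poisson statistics, so with a mean of N events per pixel the standard deviation is sqrt(N), giving

          SNR = N / sqrt(N) = sqrt(N)

      which is why halving the visible noise takes roughly 4x the samples, or 4x the light.)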

      Attachments


      1. https://media.mstdn.io/mstdn-media/media_attachments/files/109/928/915/229/459/953/original/1013cdeb54dd9f29.png
    • Bernie (codewiz@mstdn.io)'s status on Sunday, 26-Feb-2023 22:16:05 UTC
      in reply to penguin42

      @penguin42 That's what I want to learn.

      I'm familiar with the "naive" #RayTracing algorithm, in which you iterate over each pixel on the screen, casting a ray through it and testing for intersections with every object in the scene.

      Once you know the intersection, you cast rays towards every light source in the scene, testing for intersections with objects that could occlude the light. Semi-transparent and reflective materials require recursion (with limits).

      It's very simple... but too slow.
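
      In code, the structure described above looks roughly like this. A self-contained C sketch with spheres as the only primitive and a single point light; all names are illustrative, not from any real renderer:

        /* Naive ray tracing sketch: one loop per pixel, test every object,
         * then cast a shadow ray toward the single light. */
        #include <math.h>
        #include <stdio.h>

        typedef struct { double x, y, z; } Vec;
        static Vec sub(Vec a, Vec b) { return (Vec){a.x-b.x, a.y-b.y, a.z-b.z}; }
        static double dot(Vec a, Vec b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
        static Vec scale(Vec a, double s) { return (Vec){a.x*s, a.y*s, a.z*s}; }
        static Vec norm(Vec a) { return scale(a, 1.0 / sqrt(dot(a, a))); }

        typedef struct { Vec center; double radius; } Sphere;
        static Sphere scene[] = { {{0, 0, -5}, 1.0}, {{1.5, 0.5, -4}, 0.5} };
        static const int nobj = sizeof scene / sizeof scene[0];
        static const Vec light = {5, 5, 0};

        /* Smallest positive t with origin + t*dir on the sphere, or -1. */
        static double hit(Sphere s, Vec origin, Vec dir)
        {
            Vec oc = sub(origin, s.center);
            double b = dot(oc, dir);
            double disc = b*b - (dot(oc, oc) - s.radius * s.radius);
            if (disc < 0) return -1;
            double t = -b - sqrt(disc);
            return t > 1e-6 ? t : -1;
        }

        int main(void)
        {
            const int W = 64, H = 32;
            for (int py = 0; py < H; py++) {
                for (int px = 0; px < W; px++) {
                    /* Cast a primary ray through this pixel. */
                    Vec eye = {0, 0, 0};
                    Vec dir = norm((Vec){(px - W/2) / (double)W,
                                         -(py - H/2) / (double)H, -1});
                    double tmin = 1e30; int id = -1;
                    for (int i = 0; i < nobj; i++) {   /* every object */
                        double t = hit(scene[i], eye, dir);
                        if (t > 0 && t < tmin) { tmin = t; id = i; }
                    }
                    char c = ' ';
                    if (id >= 0) {
                        /* Shadow ray: is the light occluded? */
                        Vec p = {eye.x + dir.x*tmin, eye.y + dir.y*tmin,
                                 eye.z + dir.z*tmin};
                        Vec ldir = norm(sub(light, p));
                        int lit = 1;
                        for (int i = 0; i < nobj; i++)
                            if (hit(scene[i], p, ldir) > 0) lit = 0;
                        c = lit ? '#' : '.';
                    }
                    putchar(c);
                }
                putchar('\n');
            }
            return 0;
        }

      Every pixel pays for an intersection test against every object, so the cost is O(pixels x objects) before lights even enter; that product is the "too slow" part.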

    • penguin42 (penguin42@mastodon.org.uk)'s status on Sunday, 26-Feb-2023 22:16:06 UTC
      in reply to

      @codewiz What does the API for that look like? Ray tracing has traditionally needed a full scene description at once, which OpenGL etc. never really had.
      (And sorry, but you are missing a teapot or 3)

    • Bernie (codewiz@mstdn.io)'s status on Sunday, 26-Feb-2023 22:29:31 UTC
      in reply to penguin42

      @penguin42 The schoolbook #RayTracing algorithm has an uneven per-pixel workload, making parallelization onto thousands of compute units inefficient.

      Furthermore, computing the intersections requires access to the entire scene. Efficient data structures tend to be trees of bounding boxes, with recursive lookups.
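
      The recursive lookup over a tree of bounding boxes, in miniature. A hedged C sketch with hypothetical types; real BVH builders pick split planes far more carefully:

        /* Bounding volume hierarchy walk: descend only into boxes the
         * ray actually crosses. `inv_d` holds 1/dir per axis, computed
         * once per ray by the caller. */
        typedef struct { double min[3], max[3]; } Aabb;
        typedef struct BvhNode {
            Aabb box;
            struct BvhNode *left, *right;   /* NULL in leaves */
            int first_tri, tri_count;       /* valid in leaves only */
        } BvhNode;

        /* Slab test: does origin + t*dir cross the box for t in [t0,t1]? */
        static int aabb_hit(const Aabb *b, const double o[3],
                            const double inv_d[3], double t0, double t1)
        {
            for (int a = 0; a < 3; a++) {
                double ta = (b->min[a] - o[a]) * inv_d[a];
                double tb = (b->max[a] - o[a]) * inv_d[a];
                if (ta > tb) { double tmp = ta; ta = tb; tb = tmp; }
                if (ta > t0) t0 = ta;
                if (tb < t1) t1 = tb;
                if (t1 < t0) return 0;
            }
            return 1;
        }

        static void bvh_walk(const BvhNode *n, const double o[3],
                             const double inv_d[3])
        {
            if (!n || !aabb_hit(&n->box, o, inv_d, 1e-6, 1e30))
                return;                    /* whole subtree culled */
            if (!n->left && !n->right) {
                /* leaf: intersect the tri_count triangles at first_tri */
                return;
            }
            bvh_walk(n->left, o, inv_d);
            bvh_walk(n->right, o, inv_d);
        }

      With a reasonably balanced tree, the per-ray visit count drops from O(objects) to roughly O(log objects).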

    • Bernie (codewiz@mstdn.io)'s status on Sunday, 26-Feb-2023 22:41:12 UTC
      in reply to penguin42

      Some old RT algorithms swapped the two nested loops: the outer loop iterates over each triangle in the scene, and the inner one tests screen pixels for intersections.

      The advantage is that you only need to consider the pixels within the projection of the triangle.

      You can also cull triangles that are completely occluded by other triangles in front of them.

      I don't know whether this improved algorithm is actually a win for scenes with millions of triangles.

      @penguin42
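
      A sketch of that inverted loop order, with hypothetical types and the point-in-triangle test elided, to show where the savings come from:

        /* Triangle-major order: for each triangle, visit only the pixels
         * inside its screen-space bounding box, keeping the nearest hit
         * per pixel in a depth buffer. Illustrative structure only. */
        typedef struct { double x, y, z; } Pt;     /* already projected */
        typedef struct { Pt v[3]; } Tri;

        void draw(const Tri *tris, int ntris, double *depth, int W, int H)
        {
            for (int t = 0; t < ntris; t++) {      /* outer loop: triangles */
                const Tri *tr = &tris[t];
                int x0 = W, y0 = H, x1 = 0, y1 = 0;
                for (int i = 0; i < 3; i++) {      /* screen bounding box */
                    if ((int)tr->v[i].x < x0) x0 = (int)tr->v[i].x;
                    if ((int)tr->v[i].y < y0) y0 = (int)tr->v[i].y;
                    if ((int)tr->v[i].x > x1) x1 = (int)tr->v[i].x;
                    if ((int)tr->v[i].y > y1) y1 = (int)tr->v[i].y;
                }
                if (x0 < 0) x0 = 0;
                if (y0 < 0) y0 = 0;
                if (x1 >= W) x1 = W - 1;
                if (y1 >= H) y1 = H - 1;
                for (int y = y0; y <= y1; y++)     /* inner loops: only the */
                    for (int x = x0; x <= x1; x++) { /* triangle's pixels  */
                        double z = tr->v[0].z;     /* placeholder for the
                                                      interpolated depth */
                        /* point-in-triangle test omitted; the depth test
                           below is what discards fragments hidden behind
                           triangles drawn earlier. */
                        if (z < depth[y * W + x])
                            depth[y * W + x] = z;
                    }
            }
        }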

    • Bernie (codewiz@mstdn.io)'s status on Sunday, 26-Feb-2023 22:50:43 UTC
      in reply to penguin42

      Back to the hardware RT: for a long time, GPUs could dispatch the parallel execution of short functions (historically called shaders). Each instance gets constants such as texture data and variables such as transformed coordinates. The output of these functions can be used to paint pixels on the screen or intermediate buffers.

      @penguin42

    • Bernie (codewiz@mstdn.io)'s status on Sunday, 26-Feb-2023 22:59:09 UTC
      in reply to penguin42

      With this, we've reached the boundaries of my hands-on knowledge of #GPU-accelerated rendering.

      This is a little program I wrote a few years ago to learn the basics of GLSL shaders:
      https://github.com/codewiz/mandelwow

      (it's also the very first code I wrote in #Rust, please forgive the poor style).

      @penguin42

      Attachments


      1. https://media.mstdn.io/mstdn-media/media_attachments/files/109/933/491/697/697/465/original/28de4d726c69f439.png

    • Bernie (codewiz@mstdn.io)'s status on Sunday, 26-Feb-2023 23:28:34 UTC
      in reply to penguin42

      The first step to understanding real-time #RayTracing involves a leap to #Vulkan, which generalizes the old #OpenGL rendering model to enable building parallel, multi-pass pipelines using low-level primitives such as command buffers, queues, etc.

      I've been reading a few introductory guides, and this is the official one:
      https://github.com/KhronosGroup/Vulkan-Guide

      @penguin42
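
      The flavor of those primitives, reduced to the smallest loop. A C sketch; `device`, `pool`, and `queue` are assumed to come out of the usual (long) initialization:

        #include <vulkan/vulkan.h>

        /* Record commands into a buffer, then hand the buffer to a queue. */
        void submit_once(VkDevice device, VkCommandPool pool, VkQueue queue)
        {
            VkCommandBufferAllocateInfo alloc = {
                .sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_ALLOCATE_INFO,
                .commandPool = pool,
                .level = VK_COMMAND_BUFFER_LEVEL_PRIMARY,
                .commandBufferCount = 1,
            };
            VkCommandBuffer cmd;
            vkAllocateCommandBuffers(device, &alloc, &cmd);

            VkCommandBufferBeginInfo begin = {
                .sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO,
            };
            vkBeginCommandBuffer(cmd, &begin);
            /* vkCmdBindPipeline / vkCmdDispatch / draw calls get recorded here */
            vkEndCommandBuffer(cmd);

            VkSubmitInfo submit = {
                .sType = VK_STRUCTURE_TYPE_SUBMIT_INFO,
                .commandBufferCount = 1,
                .pCommandBuffers = &cmd,
            };
            vkQueueSubmit(queue, 1, &submit, VK_NULL_HANDLE);
        }

      Because recording and submission are split, multiple threads can build command buffers in parallel and feed the same queue.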

    • Bernie (codewiz@mstdn.io)'s status on Monday, 27-Feb-2023 00:19:22 UTC
      in reply to penguin42

      Enter the #Vulkan #RayTracing spec, which was finalized over 2 years ago.

      Like OpenGL, Vulkan evolves by means of vendor extensions, which then get standardized and, later, incorporated into the core API.

      https://www.khronos.org/blog/vulkan-ray-tracing-final-specification-release

      @penguin42

    • Bernie (codewiz@mstdn.io)'s status on Monday, 27-Feb-2023 00:45:58 UTC
      in reply to penguin42

      The vendor-neutral ray tracing acceleration emerges from the composition of several building blocks, such as VK_KHR_ray_query:
      https://registry.khronos.org/vulkan/specs/1.3-extensions/man/html/VK_KHR_ray_query.html

      You can guess from the number of contributors that standardization took considerable effort. The API must support a variety of use-cases, including hybrid rendering.

      Ray queries are the simpler of the two available techniques: SPIR-V shaders can cast arbitrary rays and get back a (distance-sorted?) list of hits.

      @penguin42
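
      On the C side, adopting ray queries is mostly a handshake at device creation; the queries themselves live in the shaders. A sketch, with queue setup and error handling omitted:

        #include <vulkan/vulkan.h>

        /* Create a device with VK_KHR_ray_query enabled by chaining the
         * feature structs and requesting the extensions it depends on. */
        VkDevice make_ray_query_device(VkPhysicalDevice phys)
        {
            VkPhysicalDeviceAccelerationStructureFeaturesKHR as_feats = {
                .sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_ACCELERATION_STRUCTURE_FEATURES_KHR,
                .accelerationStructure = VK_TRUE,
            };
            VkPhysicalDeviceRayQueryFeaturesKHR rq_feats = {
                .sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_RAY_QUERY_FEATURES_KHR,
                .pNext = &as_feats,
                .rayQuery = VK_TRUE,
            };
            const char *exts[] = {
                VK_KHR_RAY_QUERY_EXTENSION_NAME,
                VK_KHR_ACCELERATION_STRUCTURE_EXTENSION_NAME,
                VK_KHR_DEFERRED_HOST_OPERATIONS_EXTENSION_NAME, /* dependency */
            };
            VkDeviceCreateInfo info = {
                .sType = VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO,
                .pNext = &rq_feats,
                .enabledExtensionCount = 3,
                .ppEnabledExtensionNames = exts,
                /* .pQueueCreateInfos etc. omitted for brevity */
            };
            VkDevice device = VK_NULL_HANDLE;
            vkCreateDevice(phys, &info, NULL, &device);
            return device;
        }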

    • Bernie (codewiz@mstdn.io)'s status on Monday, 27-Feb-2023 01:03:30 UTC
      in reply to penguin42

      The more advanced technique is used in the screenshot at the beginning of this thread: Ray Tracing Pipelines.

      This extension is present in #Mesa, but still flag-guarded for the RADV driver. It crashed my RDNA3 card until last week 😅

      AFAICT, the #Vulkan RT pipeline takes away control of the core algorithm, calling shaders attached to surfaces in the scene when they're hit.

      This diagram shows that ray generation is still performed by a user-defined shader:

      @penguin42
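
      Where control changes hands is visible in the pipeline setup. A hedged C sketch; the shader modules, pipeline layout, and shader binding table are assumed to exist, and extension entry points would be fetched via vkGetDeviceProcAddr in real code:

        #include <vulkan/vulkan.h>

        /* Build a ray tracing pipeline from three stages. The raygen
         * group is the user-defined part; the hit group is what the
         * driver invokes per surface during traversal. */
        VkPipeline make_rt_pipeline(VkDevice device, VkPipelineLayout layout,
                                    VkShaderModule raygen, VkShaderModule miss,
                                    VkShaderModule chit)
        {
            VkPipelineShaderStageCreateInfo stages[3] = {
                { .sType = VK_STRUCTURE_TYPE_PIPELINE_SHADER_STAGE_CREATE_INFO,
                  .stage = VK_SHADER_STAGE_RAYGEN_BIT_KHR,
                  .module = raygen, .pName = "main" },
                { .sType = VK_STRUCTURE_TYPE_PIPELINE_SHADER_STAGE_CREATE_INFO,
                  .stage = VK_SHADER_STAGE_MISS_BIT_KHR,
                  .module = miss, .pName = "main" },
                { .sType = VK_STRUCTURE_TYPE_PIPELINE_SHADER_STAGE_CREATE_INFO,
                  .stage = VK_SHADER_STAGE_CLOSEST_HIT_BIT_KHR,
                  .module = chit, .pName = "main" },
            };
            VkRayTracingShaderGroupCreateInfoKHR groups[3];
            for (int i = 0; i < 3; i++)
                groups[i] = (VkRayTracingShaderGroupCreateInfoKHR){
                    .sType = VK_STRUCTURE_TYPE_RAY_TRACING_SHADER_GROUP_CREATE_INFO_KHR,
                    .type = VK_RAY_TRACING_SHADER_GROUP_TYPE_GENERAL_KHR,
                    .generalShader = VK_SHADER_UNUSED_KHR,
                    .closestHitShader = VK_SHADER_UNUSED_KHR,
                    .anyHitShader = VK_SHADER_UNUSED_KHR,
                    .intersectionShader = VK_SHADER_UNUSED_KHR,
                };
            groups[0].generalShader = 0;    /* raygen: user-defined casting */
            groups[1].generalShader = 1;    /* miss: ray escaped the scene  */
            groups[2].type = VK_RAY_TRACING_SHADER_GROUP_TYPE_TRIANGLES_HIT_GROUP_KHR;
            groups[2].closestHitShader = 2; /* invoked per surface hit      */

            VkRayTracingPipelineCreateInfoKHR info = {
                .sType = VK_STRUCTURE_TYPE_RAY_TRACING_PIPELINE_CREATE_INFO_KHR,
                .stageCount = 3, .pStages = stages,
                .groupCount = 3, .pGroups = groups,
                .maxPipelineRayRecursionDepth = 1,
                .layout = layout,
            };
            VkPipeline pipeline = VK_NULL_HANDLE;
            vkCreateRayTracingPipelinesKHR(device, VK_NULL_HANDLE, VK_NULL_HANDLE,
                                           1, &info, NULL, &pipeline);
            return pipeline;
        }

      The launch is then a single vkCmdTraceRaysKHR call with the shader binding table regions and a width x height x 1 grid: one raygen invocation per pixel, with everything after that driven by the acceleration structure rather than by user code.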

      Attachments


      1. https://media.mstdn.io/mstdn-media/media_attachments/files/109/933/979/179/916/586/original/216efd1f512cdba7.png
    • Bernie (codewiz@mstdn.io)'s status on Tuesday, 28-Feb-2023 15:37:25 UTC
      in reply to penguin42

      @penguin42 I just finished watching this fine series of lectures on #Vulkan.

      Episode 6 is specifically about real-time #RayTracing, and explains how shaders are bound to objects in the acceleration structure, one of the concepts I couldn't figure out by looking at the source code of the demos.

      https://www.youtube.com/watch?v=12k_frqw7tM&list=PLmIqTlJ6KsE1Jx5HV4sd2jOe3V1KMHHgn&index=6

    • penguin42 (penguin42@mastodon.org.uk)'s status on Tuesday, 28-Feb-2023 15:37:26 UTC
      in reply to

      @codewiz Thanks for the long description! I have had a play with Vulkan; it's a bit... head-breaking (it didn't help that I was using vulkano in Rust at the time, learning both in parallel). Yeah, that ray_query is interesting - I guess it's kind of the other way around from the traditional stuff.

    • Bernie (codewiz@mstdn.io)'s status on Saturday, 04-Mar-2023 03:51:32 UTC
      in reply to penguin42

      I resumed my #Vulkan #RayTracing studies. One thing I still don't quite understand is how you'd move objects around without rebuilding the acceleration structure every frame.

      I tried asking #ChatGPT, and the answer, if correct, implies you *have* to rebuild the AS every frame!

      This might be fine for my demo, but how would it scale to complex scenes with hundreds or thousands of objects like the roads of Night City?

      @penguin42
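
      For what it's worth, the extension does provide a refit path that sidesteps full rebuilds: geometry lives in bottom-level structures that can stay untouched, while the top-level structure holds one transform per instance and accepts a cheaper UPDATE-mode build. A hedged C sketch; the instance geometry and scratch buffer are assumed to exist:

        #include <vulkan/vulkan.h>

        /* Refit a top-level AS in place each frame after rewriting the
         * per-instance transforms. `inst_geom` wraps the instance buffer
         * (VK_GEOMETRY_TYPE_INSTANCES_KHR); the TLAS must have been built
         * with ALLOW_UPDATE originally. */
        void refit_tlas(VkCommandBuffer cmd, VkAccelerationStructureKHR tlas,
                        const VkAccelerationStructureGeometryKHR *inst_geom,
                        uint32_t instance_count, VkDeviceAddress scratch)
        {
            VkAccelerationStructureBuildGeometryInfoKHR build = {
                .sType = VK_STRUCTURE_TYPE_ACCELERATION_STRUCTURE_BUILD_GEOMETRY_INFO_KHR,
                .type = VK_ACCELERATION_STRUCTURE_TYPE_TOP_LEVEL_KHR,
                .flags = VK_BUILD_ACCELERATION_STRUCTURE_FLAG_ALLOW_UPDATE_BIT_KHR,
                .mode = VK_BUILD_ACCELERATION_STRUCTURE_MODE_UPDATE_KHR, /* refit */
                .srcAccelerationStructure = tlas,
                .dstAccelerationStructure = tlas,
                .geometryCount = 1,
                .pGeometries = inst_geom,
                .scratchData = { .deviceAddress = scratch },
            };
            const VkAccelerationStructureBuildRangeInfoKHR range = {
                .primitiveCount = instance_count,
            };
            const VkAccelerationStructureBuildRangeInfoKHR *const ranges[] = { &range };
            vkCmdBuildAccelerationStructuresKHR(cmd, 1, &build, ranges);
        }

      Refitting degrades the tree as instances drift far from where it was built, so a periodic full rebuild is still commonly recommended; whether that scales to the roads of Night City is exactly the open question.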

      Attachments


      1. https://media.mstdn.io/mstdn-media/media_attachments/files/109/962/952/554/422/573/original/52abf288c7b6c6c8.png
    • Bernie (codewiz@mstdn.io)'s status on Saturday, 04-Mar-2023 04:06:21 UTC
      in reply to penguin42

      @penguin42 Oh man, I just had to ask...

      I'm hesitant to ask for a full code example... will it generate sensible Vulkan code?

      Attachments


      1. https://media.mstdn.io/mstdn-media/media_attachments/files/109/963/026/416/986/852/original/16c1fef8c09ce035.png
    • Bernie (codewiz@mstdn.io)'s status on Saturday, 04-Mar-2023 04:34:10 UTC
      in reply to penguin42

      Exceeds my expectations, but I can't tell if this is what a sensible implementation would actually look like...
      https://sharegpt.com/c/mMNr1qm

      @penguin42 #ChatGPT #Vulkan #RayTracing

    • Bernie (codewiz@mstdn.io)'s status on Saturday, 04-Mar-2023 20:15:44 UTC
      in reply to penguin42

      @penguin42 Tutorials say that it's an opaque, hardware-dependent data structure. The part that's visible is AABBs (axis aligned bounding boxes), which are similar to BSPs, but the hierarchy is defined manually.

      These bounding boxes can contain regular triangle meshes or procedural primitives defined by a user shader.
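
      On the API side, declaring such procedural geometry is just a buffer of boxes. A C sketch where `aabb_addr` is an assumed device address:

        #include <vulkan/vulkan.h>

        /* A BLAS geometry made of AABBs. The driver sees only boxes; the
         * surface inside each one is defined by the intersection shader
         * bound in the hit group. `aabb_addr` points at tightly packed
         * VkAabbPositionsKHR entries. */
        VkAccelerationStructureGeometryKHR aabb_geometry(VkDeviceAddress aabb_addr)
        {
            return (VkAccelerationStructureGeometryKHR){
                .sType = VK_STRUCTURE_TYPE_ACCELERATION_STRUCTURE_GEOMETRY_KHR,
                .geometryType = VK_GEOMETRY_TYPE_AABBS_KHR,
                .geometry.aabbs = {
                    .sType = VK_STRUCTURE_TYPE_ACCELERATION_STRUCTURE_GEOMETRY_AABBS_DATA_KHR,
                    .data = { .deviceAddress = aabb_addr },
                    .stride = sizeof(VkAabbPositionsKHR),
                },
                .flags = VK_GEOMETRY_OPAQUE_BIT_KHR,
            };
        }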

    • penguin42 (penguin42@mastodon.org.uk)'s status on Saturday, 04-Mar-2023 20:15:45 UTC
      in reply to

      @codewiz Oh hell, we've got AI doing matrix maths. Now it's making me curious what the acceleration structures have in them; I guess something like BSP indicating which parts of space are influenced by each object.

    • Bernie (codewiz@mstdn.io)'s status on Saturday, 04-Mar-2023 20:16:39 UTC
      in reply to penguin42

      I'm already fantasizing about a raytraced mandelbulb 🙂

      @penguin42

