Why should we distinguish the concept of ray tracing from rasterization?
According to my understanding, modern computer monitors mainly display images in raster form; that is, what we see on the screen is made up of dots or pixels. What confuses me is that we often say ray tracing is not rasterization, but no matter how the 3D world is computed, shouldn't everything rendered by the computer finally be converted to pixels forming a raster image on the screen? Or have I misunderstood some critical concept?
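To make my mental model concrete, here is a rough sketch (not real renderer code, just the loop structure as I currently picture it, with made-up minimal types) of how I think each approach ends up filling the same framebuffer of pixels:

    #include <array>
    #include <cstddef>

    struct Color    { float r, g, b; };
    struct Triangle { /* vertices, material, ... */ };

    constexpr std::size_t W = 640, H = 480;
    using Framebuffer = std::array<Color, W * H>;

    // Rasterization: the outer loop is over geometry; each triangle is projected
    // to the screen and the pixels it covers are filled in.
    void rasterize(const Triangle* tris, std::size_t count, Framebuffer& fb) {
        for (std::size_t i = 0; i < count; ++i) {
            // project tris[i] into screen space and fill the pixels it covers in fb
        }
    }

    // Ray tracing: the outer loop is over pixels; each pixel shoots a ray into the
    // scene, asks what it hits (possibly spawning more rays), and writes a color.
    void rayTrace(const Triangle* tris, std::size_t count, Framebuffer& fb) {
        for (std::size_t y = 0; y < H; ++y) {
            for (std::size_t x = 0; x < W; ++x) {
                // build the camera ray through pixel (x, y), intersect it against
                // the scene (tris[0..count)), shade the closest hit, then store
                // the final color for this pixel:
                fb[y * W + x] = Color{0.0f, 0.0f, 0.0f};
            }
        }
    }

If that picture is right, both approaches do end with a raster image in the framebuffer, and what I'm really asking is whether the distinction is just about which loop is outermost (geometry for rasterization, pixels/rays for ray tracing) or something deeper.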
See also questions close to this topic
- Differences between rigid and smooth skinning
In computer graphics, how do we compute the displacement of a vertex in smooth skinning using a weight function W and the displacement d_i from each affecting joint to the vertex?
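For reference, this is the weighted sum I have in mind for linear blend (smooth) skinning; the names (Vec3, skinDisplacement, and the per-joint arrays) are just placeholders for this sketch:

    #include <cstddef>
    #include <vector>

    struct Vec3 {
        float x, y, z;
        Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
        Vec3 operator*(float s)       const { return {x * s, y * s, z * s}; }
    };

    // Smooth skinning as I understand it: the vertex displacement is the weighted
    // sum of the displacements d_i contributed by the affecting joints, where the
    // per-joint weights W_i come from the weight function and sum to 1.
    Vec3 skinDisplacement(const std::vector<float>& weights,       // W_i per joint
                          const std::vector<Vec3>& displacements)  // d_i per joint
    {
        Vec3 result{0.0f, 0.0f, 0.0f};
        for (std::size_t i = 0; i < weights.size(); ++i) {
            result = result + displacements[i] * weights[i];
        }
        return result;
    }

In rigid skinning, by contrast, each vertex would follow a single joint, i.e. a single d_i with weight 1.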
- Questions about Computer graphics?
I have some questions from a computer graphics course. Are these statements true or not?
1- In character animation, obtaining joint angles from given coordinates is done by forward kinematics.
2- Orthographic projection projects all the points in the world along parallel lines onto the image plane.
3- Ray tracing produces viewer-independent lighting solutions.
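For statement 2, here is how I currently picture orthographic projection (a rough sketch with made-up structs): every point is pushed along parallel projectors onto the image plane, which for a camera looking down -z just means keeping x and y and discarding depth:

    struct Vec3 { float x, y, z; };
    struct Vec2 { float x, y; };

    // Orthographic projection onto the z = 0 image plane for a camera looking down -z:
    // all projectors are parallel to the view direction, so a point's image position
    // is just (x, y), independent of its depth.
    Vec2 projectOrthographic(const Vec3& p) {
        return Vec2{p.x, p.y};
    }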
- python how to delete an oval in a list
Hi, I started Python about a month ago and there is something I can't resolve in my program. I'm making a little game like Space Invaders, but the oval21 does not disappear when oval21Set.vie == 0.
I need help please!

    elif event.char == "p":
        # fire: draw a vertical red line at the projectile's x position
        id_liste.append(cnv.create_line(projectile_trajectoire, 550,
                                        projectile_trajectoire, 0,
                                        fill="red", width="3"))
        if projectile_trajectoire in range(60, 130):
            if oval21Set.vie > 0:
                oval21Set.vie -= 1
            if oval21Set.vie < 1:
                oval21 = oval_list[2]
                delete_oval(oval21)
- Questions on ray tracing
I have come across a question on ray tracing:
Consider a ray tracer applied to a three-dimensional scene containing n teapots, where each teapot is composed of m polygons. The scene is illuminated with 3 point light sources and the program is rendering onto a screen containing p pixels. The program is to render ‘geometric’ or hard-edged shadows.
- In terms of n, m and p, how many polygon intersection tests are invoked for the first generation (i.e. initial) rays for the set of screen pixels (not including shadow calculations)?
- Assuming 40% of the initial rays hit a teapot, what is the minimum and maximum number of polygon intersection tests that are invoked for shadow calculations for the first generation of rays?
- Assuming 40% of the initial rays hit a teapot, how many polygon intersection tests are invoked for the second generation of rays (not including shadow calculations)?
- Assume each teapot is now enclosed in a bounding sphere. For first generation rays, the bounding spheres have a 60% hit rate on average. What is the combined total of bounding sphere and polygon intersection tests that are invoked for the first generation rays (not including shadow calculations)?
I know that for 1, the answer is p*m*n polygon intersections, but I am really confused about 2, 3 and 4.
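Here is how I have been trying to set the counts up so far, with example numbers plugged in just so it runs; the assumptions (brute-force tests against every polygon, one shadow ray per light, shadow rays that may stop at the first blocking polygon for the minimum case, and exactly one secondary ray per hit) are my own guesses, so please correct them if they are wrong:

    #include <cstdio>

    int main() {
        // Scene parameters from the question (the numbers are arbitrary examples).
        const double n = 10;            // teapots
        const double m = 1000;          // polygons per teapot
        const double p = 640.0 * 480.0; // screen pixels
        const double hitRate = 0.40;    // fraction of initial rays that hit a teapot
        const double lights  = 3;       // point light sources

        // (1) Initial rays: every pixel ray is tested against every polygon.
        const double primaryTests = p * m * n;

        // (2) Shadow rays: each initial hit fires one shadow ray per light.
        //     Maximum: every shadow ray is tested against every polygon.
        //     Minimum: every shadow ray is blocked by the very first polygon it tests.
        const double shadowMax = hitRate * p * lights * n * m;
        const double shadowMin = hitRate * p * lights * 1;

        // (3) Second generation: assuming one reflected ray per initial hit,
        //     each is again tested against every polygon.
        const double secondaryTests = hitRate * p * n * m;

        // (4) Bounding spheres: every initial ray tests all n spheres, and only the
        //     60% of sphere tests that hit go on to test that teapot's m polygons.
        const double sphereTests  = p * n;
        const double polygonTests = 0.60 * p * n * m;
        const double combined     = sphereTests + polygonTests;

        std::printf("1: %.0f\n2: min %.0f, max %.0f\n3: %.0f\n4: %.0f\n",
                    primaryTests, shadowMin, shadowMax, secondaryTests, combined);
        return 0;
    }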
- DXR Descriptor Heap management for raytracing
After watching videos and reading the documentation on DXR and DX12, I'm still not sure how to manage resources for DX12 raytracing (DXR).
There is quite a difference between rasterization and ray tracing in terms of resource management, the main one being that rasterization has a lot of transient resources that can be bound on the fly, while ray tracing needs all resources ready to go at the time the rays are cast. The reason is obvious: a ray can hit anything in the whole scene, so we need to have every shader, every texture, and every heap ready and filled with data before we cast a single ray.
So far so good.
My first test was adding all resources to a single heap, based on some DXR tutorials. The problem with this approach arises with objects that have the same shaders but different textures. I defined one root signature for my single hit group, which I had to prepare before ray tracing. But when creating a root signature, we have to state exactly which position in the heap corresponds to the SRV where the texture is located. Since there are many textures at different positions in the heap, I would need to create one root signature per object with a different texture. This of course is not preferred, since based on the documentation and common sense we should keep the number of root signatures as small as possible. Therefore, I discarded this approach.
My second approach was creating a descriptor heap per object, which contained all local descriptors for that particular object (textures, constants, etc.). The global resources, i.e. the TLAS (Top Level Acceleration Structure), the output buffer, and the camera constant buffer, were kept in a separate global heap. In this approach, I think I misunderstood the documentation by assuming I could add multiple heaps to a root signature. As I'm writing this post, I have not found a way of adding two separate heaps to a single root signature. If this is possible, I would love to know how, so any help is appreciated.
Here is the code I'm using for my root signature (using the DX12 helpers):
    bool PipelineState::CreateHitSignature(Microsoft::WRL::ComPtr<ID3D12RootSignature>& signature)
    {
        const auto device = RaytracingModule::GetInstance()->GetDevice();
        if (device == nullptr)
        {
            return false;
        }

        nv_helpers_dx12::RootSignatureGenerator rsc;
        rsc.AddRootParameter(D3D12_ROOT_PARAMETER_TYPE_SRV, 0); // "t0" vertices and colors

        // Add a single range pointing to the TLAS in the heap
        rsc.AddHeapRangesParameter({
            {2 /*t2*/, 1, 0, D3D12_DESCRIPTOR_RANGE_TYPE_SRV, 1}, /* 2nd slot of the first heap */
            {3 /*t3*/, 1, 0, D3D12_DESCRIPTOR_RANGE_TYPE_SRV, 3}, /* 4th slot of the first heap. Per-instance data */
        });

        signature = rsc.Generate(device, true);
        return signature.Get() != nullptr;
    }
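One pattern I have seen described elsewhere, which I have not verified myself (the function and variable names here, like BuildHitGroupRecords and objectHeapOffsets, are made up for this sketch), is to keep a single shared CBV_SRV_UAV heap and give every object its own hit-group shader record in the shader binding table. Each record's local root argument is a GPU descriptor handle pointing at that object's slice of the shared heap, so one hit-group root signature can serve all objects with different textures:

    #include <d3d12.h>
    #include <cstdint>
    #include <cstring>
    #include <vector>

    // Sketch: writes one hit-group record per object into a CPU-side buffer that would
    // later be uploaded as the hit-group table. 'objectHeapOffsets' gives, for each
    // object, the index of its first descriptor inside the shared heap.
    void BuildHitGroupRecords(ID3D12Device* device,
                              ID3D12DescriptorHeap* sharedHeap,
                              ID3D12StateObjectProperties* props,
                              const std::vector<UINT>& objectHeapOffsets,
                              std::vector<uint8_t>& outTable,
                              UINT& outRecordStride)
    {
        const UINT increment =
            device->GetDescriptorHandleIncrementSize(D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV);

        // Record layout: shader identifier + one D3D12_GPU_DESCRIPTOR_HANDLE root argument,
        // padded up to the required shader record alignment.
        const UINT unaligned =
            D3D12_SHADER_IDENTIFIER_SIZE_IN_BYTES + sizeof(D3D12_GPU_DESCRIPTOR_HANDLE);
        outRecordStride = (unaligned + D3D12_RAYTRACING_SHADER_RECORD_BYTE_ALIGNMENT - 1)
                          & ~(D3D12_RAYTRACING_SHADER_RECORD_BYTE_ALIGNMENT - 1);

        // Same hit group (and therefore same local root signature) for every object.
        void* hitGroupId = props->GetShaderIdentifier(L"HitGroup");

        outTable.assign(outRecordStride * objectHeapOffsets.size(), 0);
        for (size_t i = 0; i < objectHeapOffsets.size(); ++i)
        {
            uint8_t* record = outTable.data() + i * outRecordStride;
            std::memcpy(record, hitGroupId, D3D12_SHADER_IDENTIFIER_SIZE_IN_BYTES);

            // Per-object root argument: a descriptor table handle into the *same* heap,
            // just at a different offset, so the textures can differ per object.
            D3D12_GPU_DESCRIPTOR_HANDLE handle = sharedHeap->GetGPUDescriptorHandleForHeapStart();
            handle.ptr += static_cast<UINT64>(objectHeapOffsets[i]) * increment;
            std::memcpy(record + D3D12_SHADER_IDENTIFIER_SIZE_IN_BYTES, &handle, sizeof(handle));
        }
    }

My understanding is also that SetDescriptorHeaps only lets you bind one CBV_SRV_UAV heap and one sampler heap at a time, which is part of why I suspect that referencing two separate heaps from a single root signature is not the intended route; please correct me if that is wrong.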
Now my last approach would be to create a heap containing all necessary resources (TLAS, CBVs, SRVs/textures, etc.) per object, which is effectively one heap per object. Again, as I was reading the documentation, this was not advised; the documentation states that we should group resources into global heaps. At this point, I have a feeling I'm mixing DX12 and DXR documentation and best practices by applying proposals from DX12 in the DXR domain, which is probably wrong.
I also read part of the Nvidia Falcor source code, and they seem to have one resource heap per descriptor type, effectively limiting the number of descriptor heaps to a minimum (which makes total sense), but I have not yet found how a root signature is created with multiple separate heaps.
I feel like I'm missing one last puzzle piece to this mystery before it all falls into place and creates a beautiful image. So if anyone could explain how resource management (heaps, descriptors, etc.) should be handled in DXR when we want to have many objects with different resources, it would help me a lot.
So thanks in advance! Jakub
- Raytracing: mirror reflection in object coordinates
I'm having fun writing my own ray tracer. I'm correctly doing metal-surface ray reflection in world coordinates:
- I have an incident ray in world coords, defined by origin_point and direction_vector; inverse_transform_matrix * origin_point gives me the ray origin in object space, and inverse_transform_matrix ^ direction_vector gives me the ray direction in object space (with ^ I mean matrix-vector multiplication with no translation).
- Now I do my calculations to get the hit point and hit normal vector, in object space; transform_matrix * hit_point gives me the hit point in world coords, and (inverse_transform_matrix_transposed ^ hit_normal).normalized() gives me the world_normal_vector with length 1 in world coordinates.
Now I can get the mirror-like reflected ray direction in world coords: reflected_dir_vector = direction_vector - 2*(direction_vector • world_normal_vector) * world_normal_vector (where • is the dot product). The hit point in world coords is the origin point of my new ray.
Now I can check the angles of the incoming ray and the reflected ray with respect to the normal (still in world coordinates) using the cross product, and they are equal; I can also export my vectors to a Wavefront file, load it in Blender, and see with my own eyes that everything is fine: the angles between the vectors are correct and all the vectors lie on the same plane. My eyes aren't a mathematical proof, but the picture rendered with these calculations looks fine.
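For reference, the world-space version that works looks roughly like this in code (a sketch with minimal stand-in Vec3/Mat4 types, not my actual classes):

    #include <cmath>

    struct Vec3 {
        double x, y, z;
        Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
        Vec3 operator*(double s)      const { return {x * s, y * s, z * s}; }
        double dot(const Vec3& o)     const { return x * o.x + y * o.y + z * o.z; }
    };

    // Minimal row-major 4x4 matrix with only what this sketch needs.
    struct Mat4 {
        double m[4][4];

        // Full transform: rotation/scale plus translation (used for points).
        Vec3 transformPoint(const Vec3& p) const {
            return { m[0][0]*p.x + m[0][1]*p.y + m[0][2]*p.z + m[0][3],
                     m[1][0]*p.x + m[1][1]*p.y + m[1][2]*p.z + m[1][3],
                     m[2][0]*p.x + m[2][1]*p.y + m[2][2]*p.z + m[2][3] };
        }

        // The '^' operation above: matrix * vector with no translation (directions, normals).
        Vec3 transformDirection(const Vec3& v) const {
            return { m[0][0]*v.x + m[0][1]*v.y + m[0][2]*v.z,
                     m[1][0]*v.x + m[1][1]*v.y + m[1][2]*v.z,
                     m[2][0]*v.x + m[2][1]*v.y + m[2][2]*v.z };
        }
    };

    Vec3 normalized(const Vec3& v) {
        const double len = std::sqrt(v.dot(v));
        return { v.x / len, v.y / len, v.z / len };
    }

    // World-space mirror reflection, following the steps above.
    Vec3 reflectedDirection(const Mat4& inverse_transform_matrix_transposed,
                            const Vec3& direction_vector,  // incident direction, world coords
                            const Vec3& hit_normal)        // normal at the hit, object coords
    {
        // Normals move to world space with the inverse transpose, then get re-normalized.
        const Vec3 world_normal_vector =
            normalized(inverse_transform_matrix_transposed.transformDirection(hit_normal));

        // reflected = d - 2 (d . n) n
        return direction_vector -
               world_normal_vector * (2.0 * direction_vector.dot(world_normal_vector));
    }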
BUT:
I'd like to calculate the reflected ray direction in object coordinates and after that go back to world coordinates, due to the way I want my code to work.
I can calculate the reflected vector direction in object space just like before, but using object coords: reflected_dir_vector = object_coords_direction_vector - 2*(object_coords_direction_vector • object_coords_normal_vector) * object_coords_normal_vector. Now I don't get which matrix I should use to move this vector to world coordinates; I have tried all of my direct, inverse, and transposed transformation matrices, but all of them give me a wrong reflected direction. Where's my error?
I'm not showing code so as not to make an excessively long post.
Thanks.