Surface analysis in OpenGL

I'm new to OpenGL. I want to draw a polygon and run an analysis on it based on strength, affected velocity, etc. I can already draw a polygon, for example a triangle, using the basic primitive types, and I can also fill the triangle with colors.

Now my question is: I want to fill the colors for a surface that was constructed using a triangle mesh, as shown in the attached output image.

Can anyone help me achieve this in OpenGL without using shaders? I am restricted from using vertex or fragment shaders.

1 answer

  • answered 2018-07-11 06:21 Spektre

    Without shaders you need to do the analysis on the CPU side and use OpenGL for visualization only. I see 3 options:

    1. per vertex color

      Just compute a color from your analysis result for each vertex of your mesh and render. This is doable only if you have uniform point sampling or sufficient vertex density in the mesh (which is highly unlikely).
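
      For illustration, a minimal fixed-function sketch (the vertex layout and the analysisToColor mapping are assumptions, not from the question):

      // hypothetical per-vertex data: position + scalar analysis result
      struct Vertex { GLfloat x,y,z,value; };
      // map analysis value <0,1> to an assumed blue->red gradient
      void analysisToColor(GLfloat v,GLfloat &r,GLfloat &g,GLfloat &b)
          {
          r=v; g=0.0f; b=1.0f-v;
          }
      void drawMesh(const Vertex *vert,int n)           // n is a multiple of 3 (triangle list)
          {
          GLfloat r,g,b;
          glShadeModel(GL_SMOOTH);                      // interpolate colors across triangles
          glBegin(GL_TRIANGLES);
          for (int i=0;i<n;i++)
              {
              analysisToColor(vert[i].value,r,g,b);
              glColor3f(r,g,b);
              glVertex3f(vert[i].x,vert[i].y,vert[i].z);
              }
          glEnd();
          }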

    2. texturing

      If you have texture coordinates for your mesh then you should use a texture: compute/render your analysis colors into a 2D image and use that as the texture for your mesh.
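
      A sketch of that, assuming the analysis colors were already computed into an RGBA8 image img of size txs*tys, and reusing the hypothetical vert array with added u,v texture coordinates (all of these names are assumptions):

      GLuint tex;
      glGenTextures(1,&tex);
      glBindTexture(GL_TEXTURE_2D,tex);
      glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR);
      glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_LINEAR);
      glTexImage2D(GL_TEXTURE_2D,0,GL_RGBA,txs,tys,0,GL_RGBA,GL_UNSIGNED_BYTE,img);
      
      glEnable(GL_TEXTURE_2D);
      glBegin(GL_TRIANGLES);
      for (int i=0;i<n;i++)
          {
          glTexCoord2f(vert[i].u,vert[i].v);    // per-vertex texture coordinates
          glVertex3f(vert[i].x,vert[i].y,vert[i].z);
          }
      glEnd();
      glDisable(GL_TEXTURE_2D);

      On old GL versions power-of-two texture sizes are the safest choice; non-power-of-two textures require GL 2.0 or the relevant extension.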

    3. glReadPixels

      In case you can not use #1 or #2 you can still use per-pixel coloring. Just render your mesh (no need for lighting or coloring), then read the depth buffer to CPU-side memory as a 2D image, do your analysis coloring on a per-pixel basis, and render the result back as a single 2D textured QUAD covering the whole screen (a sketch of that last step is at the end of this answer).

      You can read depth buffer like this:

      const int xs=640; // your GL screen resolution
      const int ys=480;
      GLfloat depth[xs*ys];
      
      glReadPixels(0,0,xs,ys,GL_DEPTH_COMPONENT,GL_FLOAT,depth); // read whole viewport into depth[]
      

      In case you are using a perspective projection you need to linearize the depth before processing:

      int i;
      double per[16],z,zFar,zNear;
      glGetDoublev(GL_PROJECTION_MATRIX,per);                 // actual perspective matrix
      zFar =0.5*per[14]*(1.0-((per[10]-1.0)/(per[10]+1.0)));  // compute zFar from perspective matrix
      zNear=zFar*(per[10]+1.0)/(per[10]-1.0);                 // compute zNear from perspective matrix
      for (i=0;i<xs*ys;i++)
          {
          z=depth[i];                                         // nonlinear depth buffer value <0,1>
          z=(2.0*z)-1.0;                                      // nonlinear NDC <-1,+1>
          z=(2.0*zFar*zNear)/(zFar+zNear-(z*(zFar-zNear)));   // linear <zNear,zFar>
          depth[i]=-z;                                        // camera space z (negative in front of camera)
          }
      

      Now depth[x+(xs*y)] holds the z coordinate of pixel x,y of your mesh in camera space. From the camera parameters and the depth you can compute the real mesh coordinates in camera space (see the sketch below) and even transform them to the mesh's local coordinate system. Anyway, now you have the x,y,z coordinates of all visible surface points of your mesh, which is all you need for your analysis.
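
      For completeness, a sketch of recovering the full camera-space position from pixel coordinates and the linearized depth, assuming a symmetric perspective frustum (per[0] and per[5] are the x and y scaling terms of the projection matrix read above):

      double xn,yn,xc,yc,zc;
      for (int y=0;y<ys;y++)
       for (int x=0;x<xs;x++)
          {
          zc=depth[x+(xs*y)];                   // camera space z (negative in front of camera)
          xn=((2.0*(x+0.5))/double(xs))-1.0;    // pixel center -> NDC <-1,+1>
          yn=((2.0*(y+0.5))/double(ys))-1.0;
          xc=-zc*xn/per[0];                     // undo perspective divide and projection scale
          yc=-zc*yn/per[5];
          // (xc,yc,zc) is the visible surface point for pixel (x,y) in camera space
          }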

    You did not provide any specifics about your mesh or analysis, so we can only guess. I would go for option #3 as it is the best for this kind of task: you get access to a topologically sorted 3D surface, which makes the analysis much easier and provides per-pixel output.
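
    And the render-back step of option #3, as a minimal sketch, assuming the per-pixel analysis colors ended up in an RGBA8 buffer rgba of size xs*ys (the buffer name is an assumption):

    // upload analysis result and draw it as a screen covering QUAD
    GLuint tex;
    glGenTextures(1,&tex);
    glBindTexture(GL_TEXTURE_2D,tex);
    glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D,0,GL_RGBA,xs,ys,0,GL_RGBA,GL_UNSIGNED_BYTE,rgba);
    // identity matrices so the QUAD covers the whole screen in NDC
    glMatrixMode(GL_PROJECTION); glPushMatrix(); glLoadIdentity();
    glMatrixMode(GL_MODELVIEW);  glPushMatrix(); glLoadIdentity();
    glDisable(GL_DEPTH_TEST);
    glEnable(GL_TEXTURE_2D);
    glBegin(GL_QUADS);
    glTexCoord2f(0.0,0.0); glVertex2f(-1.0,-1.0);
    glTexCoord2f(1.0,0.0); glVertex2f(+1.0,-1.0);
    glTexCoord2f(1.0,1.0); glVertex2f(+1.0,+1.0);
    glTexCoord2f(0.0,1.0); glVertex2f(-1.0,+1.0);
    glEnd();
    glDisable(GL_TEXTURE_2D);
    glEnable(GL_DEPTH_TEST);
    glMatrixMode(GL_MODELVIEW);  glPopMatrix();
    glMatrixMode(GL_PROJECTION); glPopMatrix();

    If you do this every frame, create the texture once and update it with glTexSubImage2D instead of recreating it, and delete it with glDeleteTextures when done.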