OpenGL depth testing and blending not working simultaneously
I'm currently writing a gravity simulation and I have a small problem displaying the particles with OpenGL.
To get "round" particles, I create a small float-array like this:
for (int n = 0; n < 16; n++)
    for (int m = 0; m < 16; m++)
    {
        AlphaData[n * 16 + m] = ((n - 8) * (n - 8) + (m - 8) * (m - 8) < 64);
    }
I then put this in a GL_TEXTURE_2D with format GL_RED. In the fragment shader (via glDrawArraysInstanced), I draw the particles like this:
color = vec4(ParticleColor.rgb, texture(Sampler, UV).r);
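For context (this part is not from the original post), the GL_RED texture upload described above might look roughly like this; the variable names are assumptions:

float AlphaData[16 * 16];
// ... filled by the loop shown above ...

GLuint alphaTex;                 // hypothetical texture handle
glGenTextures(1, &alphaTex);
glBindTexture(GL_TEXTURE_2D, alphaTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RED, 16, 16, 0, GL_RED, GL_FLOAT, AlphaData);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);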
This works as it should, producing a picture like this (particles enlarged for demonstration):
As you can see, no artifacts. Every particle here is the same size, so every smaller one you see on a "larger" particle is in the background and should not be visible. When I turn on depth-testing with
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LESS);
For the most part this looks correct ("smaller" particles being behind the "bigger" ones), but I now get artifacts from the underlying quads. Weirdly, not all particles show this behavior.
Can anybody tell me what I'm doing wrong? Or do depth testing and blending simply not work nicely together?
I'm not sure what other code you might need for a diagnosis (everything else seems to work correctly), so just tell me if you need additional code.
I'm using a perspective projection here (the particles are, of course, in 3D space).
1 answer
-
answered 2022-01-19 16:51
BDL
You're in a special case where your fragments are either fully opaque or fully transparent, so it is possible to get depth testing and blending to work at the same time. The actual problem is that, with depth testing, even a fully transparent fragment stores its depth value. You can prevent that write by explicitly discarding the fragment in the shader. Something like:
color = vec4(ParticleColor.rgb, texture(Sampler, UV).r);
if (color.a == 0.0)
    discard;
Note that the conditional branch might introduce some additional overhead, but I wouldn't expect too many problems in your case.
For the general case with semi-transparent fragments, blending and depth-testing at the same time will not work. In order for blending to produce the correct result, you have to depth sort your geometry prior to rendering and render from back to front.
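To illustrate that general case (a common arrangement, not something from the answer itself; the draw and sort calls are placeholders):

// Opaque pass: depth test and depth writes enabled.
glEnable(GL_DEPTH_TEST);
glDepthMask(GL_TRUE);
drawOpaqueObjects();                                   // placeholder

// Transparent pass: blend, keep testing depth, but stop writing it,
// and draw the depth-sorted geometry back to front.
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glDepthMask(GL_FALSE);
sortBackToFront(transparentObjects, cameraPosition);   // placeholder
drawTransparentObjects();                              // placeholder
glDepthMask(GL_TRUE);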
See also questions close to this topic
-
C++ increment with macro
I have the following code and I want to use the increment operator with a macro:
#include <iostream>

#define ABS(x) ((x) < 0 ? -(x) : (x))

int main(int argc, char** argv)
{
    int x = 5;
    const int result = ABS(x++);

    std::cout << "R: " << result << std::endl;
    std::cout << "X: " << x << std::endl;

    return EXIT_SUCCESS;
}
But the output is incorrect:
R: 6 X: 7
Is it possible to somehow use macros with an increment, or should this be abandoned altogether?
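Not from the question, but for reference: the usual alternative is an inline function, which evaluates its argument exactly once, so the increment behaves as expected:

#include <iostream>

// An inline function evaluates its argument exactly once,
// unlike the ABS macro, which expands x++ twice.
inline int abs_once(int v) { return v < 0 ? -v : v; }

int main()
{
    int x = 5;
    const int result = abs_once(x++);   // result is 5, x becomes 6
    std::cout << "R: " << result << std::endl;
    std::cout << "X: " << x << std::endl;
    return 0;
}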
-
Can anyone please tell me what's wrong with my code? I've been stuck for the last 3 hours; this question is about a bipartite graph in C++
I don't know why I'm getting an error, can someone help? I'm trying to check whether a graph is bipartite or not in C++.
bool isBipartite(vector<int> graph[], int V)
{
    vector<int> vis(V, 0);
    vector<int> color(V, -1);
    color[0] = 1;

    queue<int> q;
    q.push(0);

    while (!q.empty())
    {
        int temp = q.front();
        q.pop();

        for (int i = 0; i < V; i++)
        {
            if (!vis[i] && color[i] == -1)               // "if there is an edge, and colour is not assigned"
            {
                color[i] = 1 - color[temp];
                q.push(i);
                vis[i] = 1;
            }
            else if (!vis[i] && color[i] == color[temp]) // "if there is an edge and both vertices have same colours"
            {
                vis[i] = 1;
                return 0; // graph is not bipartite
            }
        }
    }
    return 1;
}
It gives the output "no" for whatever I enter.
-
How to assign two or more values to a QMap Variable in Qt
I am confused about how to store the values returned from 3 different functions in a single map variable.
QMap<QString, QString> TrainMap = nullptr;
if(......)
{
    TrainMap = PrevDayTrainMap();
    TrainMap = NextDayTrainMap();
    TrainMap = CurrentDayTrainMap();
}
PrevDayTrainMap, NextDayTrainMap and CurrentDayTrainMap each return a set of values with the date and the TrainIdName. I need to store the values from the previous, current and next day all in TrainMap, but it only keeps the current-day values because that assignment comes last. I'm not sure how to stop it from overwriting. If I should merge the maps instead, what is the way to do it?
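One common way to do this (a sketch, not from the question; it assumes the three functions each return a QMap<QString, QString>) is to insert each returned map into the combined map instead of assigning over it:

QMap<QString, QString> TrainMap;

// Hypothetical merge: later inserts win when a key appears more than once.
const QList<QMap<QString, QString>> parts = {
    PrevDayTrainMap(), CurrentDayTrainMap(), NextDayTrainMap()
};
for (const auto &part : parts)
    for (auto it = part.constBegin(); it != part.constEnd(); ++it)
        TrainMap.insert(it.key(), it.value());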
-
School Project, set up OpenGL on VS 2019 Community
I am trying to set up OpenGL on my PC for a project. The project gave me sample code to run and a folder of include directories and libs to add to my sample-code solution in VS 2019 Community. The directions were awful and just wanted me to add the include and lib directories to VC++. This did not work even remotely; none of the glfw or glew headers were recognized. I found other directions online to add the include folders under C/C++ > General > Additional Include Directories and the libs under Linker > General > Additional Library Directories. Then I added glfw3.lib;glu32.lib;glew32.lib;opengl32.lib to my Linker > Input > Additional Dependencies. This allowed my program to compile, but now I get the following errors plus 90 more like them:
Warning LNK4098 defaultlib 'MSVCRT' conflicts with use of other libs; use /NODEFAULTLIB:library  OpenGLSample  C:\OpenGL_Projects\OpenGLSample\OpenGLSample\LINK  1
Error LNK2019 unresolved external symbol __imp__TranslateMessage@4 referenced in function __glfwPlatformInit  OpenGLSample  C:\OpenGL_Projects\OpenGLSample\OpenGLSample\glfw3.lib(win32_init.obj)  1
-
OpenGL GLFW3: Can't update vertices of two objects at the same time
I am trying to make a game where I want to update my player and enemy. Enemy and player are squares. I have a Square class with VAO, VBO, EBO and vertices, where I load my square.
void Square::loadSquare()
{
    unsigned int indices[] = {
        0, 1, 3,  // first triangle
        1, 2, 3   // second triangle
    };

    glGenVertexArrays(1, &VAO);
    glGenBuffers(1, &VBO);
    glGenBuffers(1, &EBO);

    // bind vertex array object
    glBindVertexArray(VAO);

    // bind vertex buffer object
    glBindBuffer(GL_ARRAY_BUFFER, VBO);
    glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_DYNAMIC_DRAW);

    // bind element buffer objects
    // EBO is stored in the VAO
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, EBO);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);

    // registered VBO as the vertex attributes
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), (void*)0);
    glEnableVertexAttribArray(0);

    // unbind the VAO
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    glBindVertexArray(0);
}
and draw method:
void Square::drawSquare()
{
    // Bind the VAO so OpenGL knows to use it
    glBindVertexArray(VAO);
    // Draw the triangle using the GL_TRIANGLES primitive
    glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, 0);
}
The Player and Enemy classes inherit from the Square class, and each has its own update method, which I call in my Carte class's update method.
void Carte::update(){
    drawMap();
    enemy.update(GrapMap, 0);
    player.update(GrapMap);
}
In drawMap I simply draw my enemy and player.
void Carte::drawMap(){
    enemy.drawSquare();
    player.drawSquare();
    for(auto & wall : walls){
        wall.drawSquare();
    }
}
Walls are squares too, but I draw them where I want and don't have a problem with them. At the end of every update of enemy and player, after changing their vertices, I call
glBindBuffer(GL_ARRAY_BUFFER, VAO);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_DYNAMIC_DRAW);
When it was only the player that I was updating, it worked flawlessly. But with the enemy, I can't see the player, yet I can see that its vertices are changing according to the user's key input. When I comment out the player and try to update only the enemy, the enemy is not drawn as updated, but again I can see its vertices changing as they should.
Before creating my Carte object in main, I did this:
// Vertex Shader source code
const char* vertexShaderSource = "#version 330 core\n"
"layout (location = 0) in vec3 aPos;\n"
"void main()\n"
"{\n"
"   gl_Position = vec4(aPos.x, aPos.y, aPos.z, 1.0);\n"
"}\0";

// Fragment Shader source code
const char* fragmentShaderSource = "#version 330 core\n"
"out vec4 FragColor;\n"
"void main()\n"
"{\n"
"   FragColor = vec4(0.8f, 0.3f, 0.02f, 1.0f);\n"
"}\n\0";

int main()
{
    // Initialize GLFW
    glfwInit();

    // Tell GLFW what version of OpenGL we are using
    // In this case we are using OpenGL 3.3
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
    // Tell GLFW we are using the CORE profile
    // So that means we only have the modern functions
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);

    // Create a GLFWwindow object of 800 by 800 pixels, naming it "window"
    GLFWwindow* window = glfwCreateWindow(1000, 1000, "Window", NULL, NULL);
    // Error check if the window fails to create
    if (window == NULL)
    {
        std::cout << "Failed to create GLFW window" << std::endl;
        glfwTerminate();
        return -1;
    }
    // Introduce the window into the current context
    glfwMakeContextCurrent(window);

    // Load GLAD so it configures OpenGL
    gladLoadGL();
    // Specify the viewport of OpenGL in the Window
    // In this case the viewport goes from x = 0, y = 0, to x = 800, y = 800
    //glViewport(0, 0, 1400, 1400);

    // Create Vertex Shader Object and get its reference
    GLuint vertexShader = glCreateShader(GL_VERTEX_SHADER);
    // Attach Vertex Shader source to the Vertex Shader Object
    glShaderSource(vertexShader, 1, &vertexShaderSource, NULL);
    // Compile the Vertex Shader into machine code
    glCompileShader(vertexShader);

    // Create Fragment Shader Object and get its reference
    GLuint fragmentShader = glCreateShader(GL_FRAGMENT_SHADER);
    // Attach Fragment Shader source to the Fragment Shader Object
    glShaderSource(fragmentShader, 1, &fragmentShaderSource, NULL);
    // Compile the Fragment Shader into machine code
    glCompileShader(fragmentShader);

    // Create Shader Program Object and get its reference
    GLuint shaderProgram = glCreateProgram();
    // Attach the Vertex and Fragment Shaders to the Shader Program
    glAttachShader(shaderProgram, vertexShader);
    glAttachShader(shaderProgram, fragmentShader);
    // Wrap-up/Link all the shaders together into the Shader Program
    glLinkProgram(shaderProgram);

    // Delete the now useless Vertex and Fragment Shader objects
    glDeleteShader(vertexShader);
    glDeleteShader(fragmentShader);
In my while loop I am doing this:
// Specify the color of the background
glClearColor(0.07f, 0.13f, 0.17f, 1.0f);
// Clean the back buffer and assign the new color to it
glClear(GL_COLOR_BUFFER_BIT);
// Tell OpenGL which Shader Program we want to use
glUseProgram(shaderProgram);

int keyW = glfwGetKey(window, GLFW_KEY_W);
int keyA = glfwGetKey(window, GLFW_KEY_A);
int keyS = glfwGetKey(window, GLFW_KEY_S);
int keyD = glfwGetKey(window, GLFW_KEY_D);

carte.update();

if(keyW)
    deneme.setPlayerDirection(Directions::UP);
else if(keyA)
    deneme.setPlayerDirection(Directions::LEFT);
else if(keyS)
    deneme.setPlayerDirection(Directions::DOWN);
else if(keyD)
    deneme.setPlayerDirection(Directions::RIGHT);

// Swap the back buffer with the front buffer
glfwSwapBuffers(window);
// Take care of all GLFW events
glfwPollEvents();
I don't understand why I can't update my two objects at the same time, and why I can't draw the squares with their changed vertices when more than one object's vertices are changing.
Edit, to show how I change my vertices in Player:
if(getDirection() == Directions::UP){
    trans = glm::translate(trans, glm::vec3(0.0f, 0.0002f, 0.0f));
    setCenter(center.first, center.second + 0.0002f);
}
else if (getDirection() == Directions::LEFT){
    trans = glm::translate(trans, glm::vec3(-0.0002f, 0.0f, 0.0f));
    setCenter(center.first - 0.0002f, center.second);
}
else if (getDirection() == Directions::DOWN){
    trans = glm::translate(trans, glm::vec3(0.0f, -0.0002f, 0.0f));
    setCenter(center.first, center.second - 0.0002f);
}
else if (getDirection() == Directions::RIGHT){
    trans = glm::translate(trans, glm::vec3(0.0002f, 0.0f, 0.0f));
    setCenter(center.first + 0.0002f, center.second);
}
else if (getDirection() == Directions::STOP)
    trans = glm::translate(trans, glm::vec3(0.0f, 0.0f, 0.0f));

for(int i = 0; i < 4; i++){
    glm::vec4 tmp = trans * glm::vec4(getVertices()[i], 1);
    setVertices(i, tmp.x, tmp.y, tmp.z);
}

glBindBuffer(GL_ARRAY_BUFFER, VAO);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_DYNAMIC_DRAW);
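As an aside (not stated in the question): the GL_ARRAY_BUFFER bind normally takes the VBO handle rather than the VAO, so a generic per-object vertex update usually looks roughly like this, assuming each Square keeps its own VBO:

// Re-upload this square's vertex data into its own VBO.
glBindBuffer(GL_ARRAY_BUFFER, VBO);
glBufferSubData(GL_ARRAY_BUFFER, 0, sizeof(vertices), vertices);
glBindBuffer(GL_ARRAY_BUFFER, 0);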
-
Repeat a cubemap texture on a cube face with OpenGL
Is it possible to make a cube map texture (GL_TEXTURE_CUBE_MAP_POSITIVE_X...) repeat on a given face with OpenGL?
I have a simple unit cube with 24 vertexes centered around the origin (xyz between (-0.5, -0.5, -0.5) and (0.5, 0.5, 0.5)). Among its attributes, I initially set the uvw coords to the xyz position in the fragment shader, as was done in the cubemap tutorial on learnopengl.com. I also tried to have separate uvw coordinates set to values equal to the scale, but the texture wasn't repeating on any cube face, as can be seen from the one displayed below (I painted the uvw coordinates in the fragment shader):
The height (y-coord of the top pixels) is 3.5 in the image above, so by setting v = 3.5 for the vertices at the top, I'd expect the gradient to repeat vertically (which is not the case).
If it's not possible, the only way left for me to fix it is to assign a 2D texture with custom uv coordinates on each vertex, right?
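For reference (not from the question), repeat wrapping on an ordinary 2D texture, the fallback mentioned above, is just a texture parameter; a minimal sketch with a placeholder texture name:

glBindTexture(GL_TEXTURE_2D, faceTexture);                      // faceTexture is hypothetical
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);   // repeat horizontally
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);   // repeat vertically
// With v = 3.5 at the top vertices, the image then tiles 3.5 times along the face.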
-
Can mesh / geometry overlap cause artifacts when rendering to multiple render targets?
I am currently trying to implement a method of Snow accumulation.
Everything seems to be working, except the actual accumulation part. In short, in the first stage of this method, the scene is rendered from the perspective of the source (very similar to shadow mapping). The output of this stage is two occlusion textures, each storing different information. The first one stores the ID of the object and the second texture stores the texture coordinates of the object at the visible point. In the second stage, for every pixel of the occlusion textures (which basically means a texture_resolution**2 grid), texture coordinates stored in the second occlusion texture are transformed by a geometry shader to result in a quad covering an area in a unique accumulation texture - each object is assigned an accumulation texture, which stores information about the snow height. The actual accumulation texture to render to is determined using the stored ID in the first occlusion texture. Each accumulation texture is a single render target.
Now, back to the actual problem. Obviously, at some point, parts of the UV maps of different objects will overlap, meaning they have the same texture coordinates. What I've noticed is that, for some reason, this overlap causes the information to be output to only one accumulation texture; when I limit the transformation in the geometry shader to a single ID, the output looks correct. It's almost as if I took one texture and subtracted it from the other. When I output all of the information to a single texture, it can be clearly seen how the two textures complement each other. I will post pictures and code to illustrate the problem better.
The individual accumulation maps
The output of all information to a single map
Geometry shader code:
#version 330 core

layout (triangles) in;
layout (triangle_strip, max_vertices = 3) out;

in vec2 texture_coords[3];
in vec2 pos[3];

uniform sampler2D occlusion1;
uniform sampler2D occlusion2;

flat out float accumulation;
flat out int texID;

void main()
{
    vec2 coords = texture_coords[0] + texture_coords[1] + texture_coords[2];
    coords = coords / 3.0;

    vec4 ids = texture(occlusion1, coords);
    vec4 map = texture(occlusion2, coords);

    float minx = map[0], maxx = map[2], miny = map[1], maxy = map[3];
    vec2 minPoint = vec2(minx, miny);
    vec2 maxPoint = vec2(maxx, maxy);
    minPoint = (minPoint*2) + vec2(-1.0,-1.0);
    maxPoint = (maxPoint*2) + vec2(-1.0,-1.0);

    float noise = ids.y;

    if (ids.r != 0)
    {
        for(int i = 0; i < 3; i++)
        {
            vec2 resulting_position;

            if(pos[i].x < 0)
            {
                resulting_position.x = minPoint.x;
                //gModifier.x = -1.0;
            }
            else
            {
                resulting_position.x = maxPoint.x;
                //gModifier.x = 1.0;
            }

            if(pos[i].y < 0)
            {
                resulting_position.y = minPoint.y;
                //gModifier.y = -1.0;
            }
            else
            {
                resulting_position.y = maxPoint.y;
                //gModifier.y = 1.0;
            }

            texID = int(ids.r);
            accumulation = 1.0f;
            //accumulation = 0.002f * noise;
            gl_Position = vec4(resulting_position, 0.0, 1.0);
            EmitVertex();
        }
    }
    EndPrimitive();
}
Fragment shader code:
#version 330 core

layout(location = 0) out vec4 acc_0;
layout(location = 1) out vec4 acc_1;
layout(location = 2) out vec4 acc_2;
layout(location = 3) out vec4 acc_3;
layout(location = 4) out vec4 acc_4;
layout(location = 5) out vec4 acc_5;
layout(location = 6) out vec4 acc_6;

flat in float accumulation;
flat in int texID;

void main()
{
    switch(texID)
    {
        case 1:
            acc_0 = vec4(vec3(accumulation), 1.0f);
            break;
        case 2:
            acc_1 = vec4(vec3(accumulation), 1.0f);
            break;
        case 3:
            acc_2 = vec4(vec3(accumulation), 1.0f);
            break;
    }
}
-
How to calculate lookAt matrix for 3D rotation with mouseMoveEvent?
I have been trying to calculate the view matrix with a lookAt function for each mouseMoveEvent. Right now I can rotate the image, but the rotation is not correct when I zoom out. I change the value of the FOV in the projection matrix for each wheelEvent.
Here is my vertex shader code.
const char *vsrc = "attribute vec3 vertexIn; \ attribute vec3 textureIn; \ varying vec3 textureOut; \ uniform mat4 model; \ uniform mat4 view; \ uniform mat4 projection; \ void main(void) \ { \ gl_Position = projection*view*model*vec4(vertexIn/1000.0,1.0); \ textureOut = textureIn; \ }";
Here is my fragment shader code.
const char *fsrc = "#ifdef GL_ES\n" "precision highp float;\n" "#endif\n" "varying vec3 textureOut;\n" "void main()\n" "{\n" " gl_FragColor = vec4(textureOut.bgr,1.0);" "}";
The functions called for each event are:
void ABC::wheelEvent(QWheelEvent *event)
{
    if(event->delta() > 0)
    {
        FoV -= 0.5f;           //for zoom out have to decrease FoV
    }
    else if(event->delta() <= 0)
    {
        FoV += 0.5f;           //for zoom in have to increase FoV
    }

    if (FoV < 0.5f)            //maximum zoom
        FoV = 0.5f;
    else if(FoV > 177.5)       //minimum zoom
        FoV = 177.5;

    ProjectionMatrix.setToIdentity();
    ProjectionMatrix.perspective(FoV, 4.0f / 3.0f, 0.1f, 100.0f);
}

void ABC::mousePressEvent(QMouseEvent *event)
{
    lastMousePos = event->pos();
}

void ABC::mouseMoveEvent(QMouseEvent *event)
{
    // Compute new orientation
    horizontalAngle += mouseSpeed * float(event->pos().x() - lastMousePos.x());
    verticalAngle   += mouseSpeed * float(event->pos().y() - lastMousePos.y());
    lastMousePos = event->pos();

    // Direction : Spherical coordinates to Cartesian coordinates conversion
    direction = QVector3D(            //to find out the direction for camera to look at
        cos(verticalAngle) * sin(horizontalAngle),
        sin(verticalAngle),
        cos(verticalAngle) * cos(horizontalAngle)
    );

    ViewMatrix.setToIdentity();
    ViewMatrix.lookAt(
        direction,                    // Camera is here
        QVector3D(0.0, 0.0, 0.0),     // and looks here
        QVector3D(0.0, 1.0, 0.0)      // Head is up (set to 0,-1,0 to look upside-down)
    );
}
I'm new to 3D rendering, so forgive me if I made some lame mistake. Can anyone please tell me what I'm doing wrong here, or provide me with some examples so I can fix it?
The link to the video: https://www.kapwing.com/videos/6274ae966e15330062fb7fba
-
WebGL Phong Shader Bug
Here is a pretty simple shader, but it only works for spheres. It will only render the ambient light effect for other shapes.
The dot(L, N) and dot(R, V) in the fragment shader always seem to stay at 0. Is something wrong with the calculation?
vertex shader
#version 300 es
precision mediump float;

// Vertex shader for phong illumination model
// Per vertex shading

// Vertex Attributes
in vec3 aVertexPosition;   // in model coords
in vec3 aNormal;           // in model coords

// outputs
out vec3 N;
out vec3 L;
out vec3 V;

// Transforms
uniform mat4 modelT;
uniform mat4 viewT;
uniform mat4 projT;

// Light parameters
uniform vec3 ambientLight;
uniform vec3 lightPosition;   // in world coords
uniform vec3 lightColor;

// object color parameters
uniform vec3 baseColor;
uniform vec3 specHighlightColor;

// Phong parameters
uniform float ka;
uniform float kd;
uniform float ks;
uniform float ke;

void main()
{
    // All calculations will be done in camera space
    mat4 modelView = viewT * modelT;
    mat4 normalmatrix = transpose(inverse(modelView));

    vec3 vcam = (modelView * vec4(aVertexPosition, 1.0)).xyz;
    vec3 lcam = (viewT * vec4(lightPosition, 1.0)).xyz;
    vec3 ncam = (normalmatrix * vec4(aNormal, 1.0)).xyz;
    ncam = faceforward(ncam, vcam, ncam);

    // vectors to pass on to Fragment Shader
    N = normalize(ncam);
    L = normalize(lcam - vcam);
    V = -normalize(vcam);

    // transform vertex to clip space
    gl_Position = projT * viewT * modelT * vec4(aVertexPosition, 1.0);
}
fragment shader
#version 300 es
// Fragment shader for phong illumination model
// Per vertex shading

precision mediump float;

// calculated by vertex shader and passed to fragment
in vec3 N;
in vec3 L;
in vec3 V;

// Light parameters
uniform vec3 ambientLight;
uniform vec3 lightColor;

// object color parameters
uniform vec3 baseColor;
uniform vec3 specHighlightColor;

// Phong parameters
uniform float ka;
uniform float kd;
uniform float ks;
uniform float ke;

// Color that is the result of this shader
out vec4 fragColor;

void main(void)
{
    // individual components
    vec3 R = normalize(reflect(-L, N));

    vec3 ambient = ka * ambientLight * baseColor;
    vec3 diffuse = kd * lightColor * baseColor * max(dot(L, N), 0.0);
    vec3 spec = ks * specHighlightColor * lightColor * pow(max(dot(R, V), 0.0), ke);

    // final color
    fragColor = vec4(ambient + diffuse + spec, 1.0);
}
Here are the output results
-
how to create blendshapes from mediapipe facemesh
I need to know how to create and store blendshapes at a certain position for each part of the face (e.g. left eye, left eyebrow, right eye, right eyebrow, nose, upper lip, lower lip, left cheek, right cheek)
using the landmark output from the camera feed.
Example from the documentation:
app.py

import cv2
import mediapipe as mp

mp_drawing = mp.solutions.drawing_utils
mp_drawing_styles = mp.solutions.drawing_styles
mp_face_mesh = mp.solutions.face_mesh

# For webcam input:
drawing_spec = mp_drawing.DrawingSpec(thickness=1, circle_radius=1)
cap = cv2.VideoCapture(0)
with mp_face_mesh.FaceMesh(
    max_num_faces=1,
    refine_landmarks=True,
    min_detection_confidence=0.5,
    min_tracking_confidence=0.5) as face_mesh:
  while cap.isOpened():
    success, image = cap.read()
    if not success:
      print("Ignoring empty camera frame.")
      # If loading a video, use 'break' instead of 'continue'.
      continue

    # To improve performance, optionally mark the image as not writeable to
    # pass by reference.
    image.flags.writeable = False
    image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
    results = face_mesh.process(image)

    # Draw the face mesh annotations on the image.
    image.flags.writeable = True
    image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR)
    if results.multi_face_landmarks:
      for face_landmarks in results.multi_face_landmarks:
        mp_drawing.draw_landmarks(
            image=image,
            landmark_list=face_landmarks,
            connections=mp_face_mesh.FACEMESH_TESSELATION,
            landmark_drawing_spec=None,
            connection_drawing_spec=mp_drawing_styles
            .get_default_face_mesh_tesselation_style())
    # Flip the image horizontally for a selfie-view display.
    cv2.imshow('MediaPipe Face Mesh', cv2.flip(image, 1))
    if cv2.waitKey(5) & 0xFF == 27:
      break
cap.release()
What I'm trying to do is create some blendshapes for each part of the face, as I mentioned earlier.
How do I create blendshapes by, for example, pressing space while the loop is running and the points keep updating (e.g. for the default pose, smiling, frowning, angry, etc.)?
And how do I store them in a landmark points array, in mesh(es), or in other files?
-
Dual depth peeling in transparency
I'm doing a transparency project and now I have to implement depth peeling.
So I looked for some resources to start implementing the algorithm, and I found this link:
https://www.kitware.com/vtk-technical-highlight-dual-depth-peeling/
There are some things I don't understand in the algorithm, for example what exactly are
lastFrontPeel and lastDepthPeel
And should I run this algorithm for every single pixel of my mesh models?
Answering these questions would help a lot, thanks in advance.
-
Unity sprite with white transparent area shows white line at border in scene
I have this sprite here which is part of a character. As you probably can see, it has a white area in the lower middle, which is semi-transparent.
But when placed into a sprite renderer in the scene it suddenly shows this white line where the black contour and the white area meet:
In the import settings, "Alpha is transparent" is checked. I tried messing with the texture MaxSize, and it made the line thinner but didn't get rid of it. The sprite renderer has material Sprite-Unlit-Default, I also tried Sprite-Default.
How do I get rid of this?
I am using Unity 2019.4.35f1 LTS and URP. The image was created with ClipStudio and exported as PNG to import in Unity. This is the only image with that issue. We have tried re-exporting but no dice.
I have found this related issue: Incorrect white matte behind antialiasing on imported sprites but the solution seems not applicable to my problem.
Thank you!
-
DirectX: how to account for alpha blending in the depth test
I am trying to implement a simple way to render particles in my toy app. I'm rendering billboarded quads with alpha blending, but this causes a problem with the depth stenciling: parts of the quad that are fully transparent still obscure the particles behind them, since those particles fail the depth test. The result looks like this:
This is the way I've setup my pipeline:
// rasterizer
D3D11_RASTERIZER_DESC rasterizerDesc;
rasterizerDesc.AntialiasedLineEnable = true;
rasterizerDesc.CullMode = D3D11_CULL_BACK;
rasterizerDesc.DepthBias = 0;
rasterizerDesc.DepthBiasClamp = 0.0f;
rasterizerDesc.DepthClipEnable = true;
rasterizerDesc.FillMode = D3D11_FILL_SOLID;
rasterizerDesc.FrontCounterClockwise = false;
rasterizerDesc.MultisampleEnable = true;
rasterizerDesc.ScissorEnable = false;
rasterizerDesc.SlopeScaledDepthBias = 0.0f;

// depth stencil state
D3D11_DEPTH_STENCIL_DESC depthStencilStateDesc;
ZeroMemory(&depthStencilStateDesc, sizeof(D3D11_DEPTH_STENCIL_DESC));
depthStencilStateDesc.DepthEnable = TRUE;
depthStencilStateDesc.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ALL;
depthStencilStateDesc.DepthFunc = D3D11_COMPARISON_LESS;
depthStencilStateDesc.StencilEnable = FALSE;

// blend state
D3D11_BLEND_DESC blendStateDescription;
ZeroMemory(&blendStateDescription, sizeof(D3D11_BLEND_DESC));
blendStateDescription.AlphaToCoverageEnable = FALSE;
blendStateDescription.IndependentBlendEnable = FALSE;
blendStateDescription.RenderTarget[0].BlendEnable = TRUE;
blendStateDescription.RenderTarget[0].SrcBlend = D3D11_BLEND_SRC_ALPHA;
blendStateDescription.RenderTarget[0].DestBlend = D3D11_BLEND_INV_SRC_ALPHA;
blendStateDescription.RenderTarget[0].BlendOp = D3D11_BLEND_OP_ADD;
blendStateDescription.RenderTarget[0].SrcBlendAlpha = D3D11_BLEND_ONE;
blendStateDescription.RenderTarget[0].DestBlendAlpha = D3D11_BLEND_ZERO;
blendStateDescription.RenderTarget[0].BlendOpAlpha = D3D11_BLEND_OP_ADD;
blendStateDescription.RenderTarget[0].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;
I've been searching for a while, but I'm not really sure how to set up the depth stenciling to work with alpha blending correctly here, or whether this is even the correct approach. Looking at a RenderDoc capture, it just doesn't care about transparency:
Not sure what other setup might be important here, so let me know in the comments and I'll include whatever else is relevant to this case.
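(Not part of the original question, but for parity with the OpenGL answer above: the usual D3D11 counterparts are clip() in the pixel shader for fully transparent texels, or a second depth-stencil state with depth writes disabled for the blended, depth-sorted pass. A rough sketch with placeholder names:)

// Option 1, in the pixel shader (HLSL): discard fully transparent fragments
// so they write neither color nor depth:
//     clip(color.a - 0.001f);
//
// Option 2, on the CPU side: keep depth testing but disable depth writes
// while the sorted particles are drawn.
D3D11_DEPTH_STENCIL_DESC particleDepthDesc = {};
particleDepthDesc.DepthEnable    = TRUE;
particleDepthDesc.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ZERO;   // no depth writes
particleDepthDesc.DepthFunc      = D3D11_COMPARISON_LESS;
particleDepthDesc.StencilEnable  = FALSE;
// device->CreateDepthStencilState(&particleDepthDesc, &particleDepthState);
// context->OMSetDepthStencilState(particleDepthState, 0);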
-
Direct3D11 depth test LESS_EQUAL not working as expected
How is it possible that a fragment is generated, passes the depth test but isn't written to the current render target?
This is the pixel history I see if I capture a frame in RenderDoc:
The fragment should pass the depth test (same depth of the corresponding pixel in the depthbuffer and depthfunc = Less equal), blending is not enabled, stencil test is not enabled, but the new color is not written.
What am I missing? Is there some other state I'm not taking into account?