Changing Sin frequency smoothly in shader
In my vertex shader I am using a sin function to offset vertices. But as I change the frequency of my sin function I notice some "flickering". I suspect this happens because the phase is no longer synchronised once the frequency changes.
float s = sin(frequency * _Time);
Is there a way to avoid this flickering effect while changing the frequency?
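A common fix (not stated in the question itself) is to accumulate phase incrementally instead of evaluating sin(frequency * time): when the frequency changes, the accumulated phase stays continuous, so the wave does not jump. A minimal CPU-side sketch of the idea in Python; the class and names are illustrative, and in Unity you would upload the accumulated phase to the shader as a uniform each frame and compute sin(phase) there:

```python
import math

class SmoothOscillator:
    """Accumulate phase so frequency changes keep the wave continuous."""
    def __init__(self):
        self.phase = 0.0

    def sample(self, frequency, dt):
        # Advance phase by the *current* frequency each frame,
        # instead of evaluating sin(frequency * absolute_time).
        self.phase += frequency * dt
        return math.sin(self.phase)
```

With sin(frequency * time), changing the frequency rescales the whole time axis at once, which teleports the wave to a different phase; integrating frequency over time only changes the speed going forward.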
See also questions close to this topic

Solving math equations with fractions on google scripts
https://docs.google.com/spreadsheets/d/1s0tltmOLDwWmjROBZN_ibpfc5jcnPk6aUaQvLp80/edit?usp=sharing
So I am making a Google Sheets program to create a math worksheet that randomly generates the numbers within certain parameters.
I have succeeded in addition, subtraction, a mix of addition and subtraction, and multiplication. I am now trying to add fractions to the mix, but have run into a formatting issue. While I have been able to generate random fractions and have them appear correctly, the problem comes in trying to have the correct answer generate on the Key sheet.
I have tried finding a way to read the cells occupied by fractions as strings and parse out the individual numbers; however, that would only solve the issue with the numerator. I still have the issue with the denominator (finding the common one, or at least producing an answer that can be simplified by the student or teacher after solving).
I included a link above; please let me know if you have any ideas to help with this issue.
Also, since I don't think I have it properly documented in the script: the maximum value of generated integers is located on the Key page in cell "K2".
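The answer-key arithmetic itself (common denominator plus simplification) is just GCD math. A sketch of the idea in Python rather than Apps Script, since the logic ports directly; the function name is illustrative, not from the spreadsheet:

```python
from math import gcd

def add_fractions(n1, d1, n2, d2):
    """Add n1/d1 + n2/d2 and return the simplified (numerator, denominator)."""
    num = n1 * d2 + n2 * d1   # put both terms over the common denominator d1*d2
    den = d1 * d2
    g = gcd(num, den)         # reduce to lowest terms
    return num // g, den // g

# → add_fractions(1, 2, 1, 3) gives (5, 6)
```

The same two steps (cross-multiply, then divide by the GCD) work in Apps Script with a small hand-written gcd helper, which sidesteps parsing the rendered fraction strings entirely if the script keeps the numerators and denominators as numbers.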

Find the number of distinct ways of writing '1' as a sum of fractions, each with '1' as numerator and a power of '2' as denominator
I am given a number n and I have to find the number of distinct ways of writing the number 1 as a sum of n fractions, where each fraction has the following format:
The numerator is always 1.
The denominator is a power of 2 (for example 2^1, 2^2, etc).
Two ways of writing 1 as a sum of such fractions are NOT distinct if they contain the same fractions. For example, let's say n=4. One way of writing 1 as a sum of 4 fractions would be the following: 1/2 + 1/4 + 1/8 + 1/8. But writing it as 1/8 + 1/4 + 1/2 + 1/8 is considered the same (because it contains the exact same fractions, only the order changed) and therefore NOT distinct when compared to the first way of writing. So for n=4 there would only be two ways of writing 1 as a sum of 4 fractions. The first would be 1/2 + 1/4 + 1/8 + 1/8 (the one mentioned above) and the second would be 1/4 + 1/4 + 1/4 + 1/4. So the result would be 2. The boundaries of n are 2 <= n <= 2000.
I worked the first few out on paper (for n=2, for n=3, for n=4 and a few more) and I thought that the results were part of the Fibonacci sequence, so that's what I tried, but when I submitted the source on the site it said that it was wrong. I have a feeling that I have to use dynamic programming but I am not sure how to implement it. Any help would be very appreciated. Thanks a lot!
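One dynamic-programming formulation (my own sketch, not a known-correct reference solution) walks down the powers of two level by level: at each level, every remaining "unit" of value either becomes a final fraction at the current denominator or splits into two units at the next smaller denominator:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def ways(n, units=1):
    """Number of ways to write 1 as a sum of n unit fractions with
    power-of-two denominators. `units` = value pieces at the current level."""
    if n == 0:
        return 1 if units == 0 else 0
    if units == 0 or units > n:
        return 0  # nothing left to split, or more pieces than fractions remain
    total = 0
    for final in range(units + 1):
        # `final` units become fractions at this denominator;
        # the rest each split into two units of half the size.
        total += ways(n - final, 2 * (units - final))
    return total
```

This reproduces the example in the question (two ways for n=4) and the hand-countable small cases (one way each for n=2 and n=3); memoization keeps the state space at roughly n^2 pairs, which is feasible for n up to 2000.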
C#: Math Round() results in different results
I was just surfing some coding problems in C#; fiddle link here.
Q: What is the output of the following code?
using System;

public class Program
{
    public static void Main()
    {
        Console.WriteLine(Math.Round(6.5));
        Console.WriteLine(Math.Round(11.5));
    }
}
6
12
This is the output.
My doubt is: if 6.5 rounds to 6, how come 11.5 rounds to 12? It should be 11, or else 6.5 should round to 7.
Maybe it is a very unwise question, but any suggestion/explanation would help me understand clearly.
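This is "round half to even" (banker's rounding), the default midpoint rule for Math.Round in .NET: exact .5 midpoints go to the nearest even integer, which avoids a systematic upward bias when many roundings are summed. Python's built-in round happens to follow the same rule, which makes the behaviour easy to demonstrate:

```python
# Round-half-to-even ("banker's rounding"): midpoints go to the nearest
# even integer, so 6.5 and 7.5 both land on even neighbours.
print(round(6.5))   # 6  (6 is even)
print(round(11.5))  # 12 (12 is even)
print(round(7.5))   # 8  (8 is even)
```

In C#, the schoolbook behaviour is available explicitly via Math.Round(6.5, MidpointRounding.AwayFromZero), which returns 7.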

Rendering multiple different materials who will sometimes require different shader configurations
I'm attempting to build a 3D voxel-based game engine to learn how to use Vulkan. I've run up against a wall that I can't find documentation on how to climb. Right now I am drawing a 2D triangle and moving it around the screen; my 2D triangle is defined as one vertex in a vertex buffer that will, in the future, be translated into screen space in my vertex shader. This single vertex is turned into 3 vertices by my geometry shader and is then passed to my fragment shader.
triangle_center_position[0] = (cursor_x - (vulkan_window_width_get() / 2.0f)) / vulkan_window_width_get();
triangle_center_position[1] = (cursor_y - (vulkan_window_height_get() / 2.0f)) / vulkan_window_height_get();
// send to gpu via memory mapped region
memcpy(triangle_position_buffer.mapped_memory, &triangle_center_position, sizeof(vec2) * 1);
It is bound in the Command Buffer in the following manner:
VkBuffer vertexBuffers[] = {buffer->buffer};
VkDeviceSize offsets[] = {0};
vkCmdBindVertexBuffers(command_buffer[i], 0, 1, vertexBuffers, offsets);
vkCmdDraw(command_buffer[i], (uint32_t) buffer->num_elements, 1, 0, 0);
My plan is to eventually modify this code to take in a 3D point and have my geometry shader expand my 3D point into a voxel.
Unfortunately I only want this transformation to happen to blocks within the voxel world not to other things (player models, etc).
In OpenGL I'd just call glUseProgram() on two shader programs built for different "material"s, which seems to be extremely discouraged in Vulkan. My instinct is something like this:
currentMaterial = null
for (Renderable r : sort(everything, by material type))
    if (!r.isWithinViewOfScreen()) continue
    if (r.material != currentMaterial)
        currentMaterial = r.material
        r.material.use()
    r.render()
The main issue is that some Renderables will have completely different shader requirements, but there does not seem to be a provision within Vulkan for swapping between shader programs.
glsl calculating hue slow on mobile device
I'm currently developing a game in Corona SDK. I have been trying to write a shader that turns pixels gray-scale if their color is not "similar" to some given colors. Finding a metric to tell whether two colors are similar is the hard part, but since the given colors are yellow, green and blue, I thought it was a good idea to get the hue of the current pixel and then calculate its distance from the hues of green, yellow and blue. This is the fragment shader I wrote:
P_COLOR vec3 rgb2hsv(P_COLOR vec3 c)
{
    P_COLOR vec4 K = vec4(0.0, -1.0 / 3.0, 2.0 / 3.0, -1.0);
    P_COLOR vec4 p = mix(vec4(c.bg, K.wz), vec4(c.gb, K.xy), step(c.b, c.g));
    P_COLOR vec4 q = mix(vec4(p.xyw, c.r), vec4(c.r, p.yzx), step(p.x, c.r));
    P_COLOR float d = q.x - min(q.w, q.y);
    P_COLOR float e = 1.0e-10;
    return vec3(abs(q.z + (q.w - q.y) / (6.0 * d + e)), d / (q.x + e), q.x);
}

P_COLOR vec4 FragmentKernel( P_UV vec2 texCoord )
{
    P_COLOR vec4 texColor = texture2D( CoronaSampler0, texCoord );
    P_COLOR float avgTexColor = (texColor.r + texColor.g + texColor.b) * 0.3333333;
    P_COLOR vec4 outputColor = vec4(avgTexColor, avgTexColor, avgTexColor, texColor.a);
    P_COLOR vec3 currentColor = texColor.rgb;
    P_COLOR vec3 hsv = rgb2hsv(currentColor);
    P_COLOR float minDistance = abs(hsv.r - 0.33);        // 0.33 hue for green
    minDistance = min(minDistance, abs(hsv.r - 0.6667));  // 0.6667 hue for blue
    minDistance = min(minDistance, abs(hsv.r - 0.1667));  // 0.1667 hue for yellow
    P_COLOR float mixFactor = min(minDistance * 4.0, 1.0);
    outputColor = mix(vec4(currentColor, texColor.a), outputColor, mixFactor);
    return CoronaColorScale( outputColor );
}
rgb2hsv is taken from here. Unfortunately, this shader is using up to 70% of my Samsung S7's GPU. I also tried to retrieve only the hue of the current color using
P_COLOR float hue(P_COLOR vec3 c) { return atan(1.7 * (c.g - c.b), 2.0 * c.r - c.g - c.b); }
as described here, but GPU usage is pretty much the same.
I would really like to know why this shader is using so many resources and if there is a better way to calculate color similarity in glsl. Thanks!
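One thing worth noting about the metric itself, independent of GPU cost: hue is circular, so a plain abs(h1 - h2) overestimates distances that cross the 0/1 wrap point (reds near hue 0 versus reds near hue 1). A small wrap-aware hue distance, sketched in Python with the standard colorsys module rather than GLSL, purely to illustrate the math:

```python
import colorsys

def hue_distance(rgb1, rgb2):
    """Distance between two colors' hues on the circular [0, 1) hue wheel."""
    h1 = colorsys.rgb_to_hsv(*rgb1)[0]
    h2 = colorsys.rgb_to_hsv(*rgb2)[0]
    d = abs(h1 - h2)
    return min(d, 1.0 - d)   # take the shorter way around the circle
```

In the shader this is just one extra min() per comparison; it does not address the performance question, but it makes the "similar color" test correct for hues near the wrap point.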

run glDispatchCompute in a loop
Can I run this code in a loop without reading results from the SSBO, and only read the SSBO results after 100 iterations?
for (int i = 0; i < 100; i++) {
    glDispatchCompute(1, 200, 1);
    // I understand this is needed to ensure the GLSL code from the
    // previous iteration has finished running on the GPU
    glMemoryBarrier(GL_SHADER_IMAGE_ACCESS_BARRIER_BIT);
}
Also, will the GLSL code executed the second time through the loop (i==1) see the results that the first execution (i==0) wrote to the SSBO?
Finally, do I really need the glMemoryBarrier call inside the loop, or can it be outside the loop? I am concerned that the GPU code will not see the changes the first iteration made to the SSBO when it executes the second time.

MonoGame 2MGFX still using ps_2_0 even though ps_4_0_level_9_1 specified
I have the 2MGFX.exe file I got with the latest installer (3.6) downloaded from the MonoGame site (the exe size is 336,896 bytes, and build date is 03/01/2017 9:05AM).
I renamed MonoGame's BasicEffect shader source and added it to my project. I removed the original PixelLighting code (to avoid the error described below) and added some of my custom code, and it compiles and runs correctly (but I don't have PixelLighting, of course).
But if I leave the PixelLighting code in there along with my additions, I get:
error X5608: Compiled shader code uses too many arithmetic instruction slots (78). Max. allowed by the target (ps_2_0) is 64.
So I started looking at upping the shader model. But what I found was that the current technique line already says "ps_4_0_level_9_1". For fun I changed it to ps_3_0 (which also should have compiled everything), but I still get the error.
Any ideas why I still get the "ps_2_0" error?

Can the Shader Model affect Compute Shader behaviour?
When I run my compute shader on my PC, it works. If I run it on my Mac, it does not. Based on some debugging, I think the problem is that for whatever reason the Y, Z and W components of an important texture are always zero, even if I force-assign an arbitrary value to them.
So: Can the Shader Model affect Compute Shader behaviour?
If I call SystemInfo.graphicsShaderLevel it returns this:
- On PC: 50 ("Shader Model 5.0 (DX11.0)" according to Unity)
- On Mac: 45 ("Metal / OpenGL ES 3.1 capabilities (Shader Model 3.5 + compute shaders)" according to Unity)
I've tried looking for more information on the topic, but the most I could find were these: https://docs.microsoft.com/en-us/windows/desktop/direct3dhlsl/dx-graphics-hlsl-sm3 and https://docs.microsoft.com/en-us/windows/desktop/direct3dhlsl/d3d11-graphics-reference-sm5
And I can't find any official information about what Unity refers to as "Shader Model 3.5 + compute shaders".

Custom skybox shader for tiled skybox
I am new to writing shaders. I want to use a texture for a 6-sided skybox in Unity, and I want that texture to be repeated several times, also called tiling.
But the default 6-sided skybox shader in Unity doesn't have a tiling option. Can anyone write a custom shader for a 6-sided skybox in Unity which has an option to tile textures? I also want an option to apply a color tint on the texture, if possible. Thanks in advance.

Convert colors from RGB to NV12
I'm working on an app that encodes video with the Media Foundation h264 encoder. The sink writer crashes on Windows 7 with RGB input in VRAM, saying "0x8876086C D3DERR_INVALIDCALL", so I've implemented my own RGB -> NV12 conversion on the GPU, saving more than 60% of PCI Express bandwidth.
Here's what's in my media types, both input (NV12) and output (h264):
mt->SetUINT32( MF_MT_VIDEO_CHROMA_SITING, MFVideoChromaSubsampling_MPEG2 );
// Specifies the chroma encoding scheme for MPEG-2 video. Chroma samples are aligned horizontally
// with the luma samples, but are not aligned vertically. The U and V planes are aligned vertically.
mt->SetUINT32( MF_MT_YUV_MATRIX, MFVideoTransferMatrix_BT709 );
// ITU-R BT.709 transfer matrix.
mt->SetUINT32( MF_MT_VIDEO_NOMINAL_RANGE, MFNominalRange_0_255 );
// The normalized range [0...1] maps to [0...255] for 8-bit samples or [0...1023] for 10-bit samples.
mt->SetUINT32( MF_MT_TRANSFER_FUNCTION, MFVideoTransFunc_10 );
// Linear RGB (gamma = 1.0).
The best result so far I have with this formula:
inline float3 yuvFromRgb( float3 rgba )
{
    float3 res;
    res.x = dot( rgba, float3( 0.182585880, 0.614230573, 0.0620070584 ) );
    res.y = dot( rgba, float3( -0.121760942, -0.409611613, 0.531372547 ) );
    res.z = dot( rgba, float3( 0.531372547, -0.482648790, -0.0487237722 ) );
    res += float3( 0.0627451017, 0.500000000, 0.500000000 );
    return saturate( res );
}
What worries me is that the formula contradicts everything I've read on the internet, the code samples, and the official ITU specs.
For Y the formula's fine: I took the BT.709 coefficients and scaled them linearly to map [0..255] into [16..235], as written in the spec. The brightness is OK.
The specs say I must scale U and V to map from [0..255] into [16..240]. My eyes, however, tell me the result is undersaturated. For correct colors I have to scale U and V the other way, from [0..255] into something like [-8, 255 + 8].
Why do I need to scale the other way to achieve correct colors after h264 encoding & decoding? Will this code work on other people’s computers?
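For comparison, here is how the standard limited-range BT.709 coefficients fall out of the spec constants (Kr = 0.2126, Kb = 0.0722, with Y scaled to [16..235] and U/V to [16..240] for 8-bit). This is a sketch of the textbook derivation, not Media Foundation-specific code; if correct colors only appear with a different scale, the mismatch is likely elsewhere in the pipeline, for example a decoder or renderer applying a range conversion a second time:

```python
# Derive the limited-range BT.709 RGB -> YUV matrix from the spec constants.
KR, KB = 0.2126, 0.0722
KG = 1.0 - KR - KB

# Full-range rows: luma, then chroma as scaled (B - Y) and (R - Y) differences.
y_row = [KR, KG, KB]
u_row = [-KR / (2 * (1 - KB)), -KG / (2 * (1 - KB)), 0.5]   # (B - Y) / 2(1 - Kb)
v_row = [0.5, -KG / (2 * (1 - KR)), -KB / (2 * (1 - KR))]   # (R - Y) / 2(1 - Kr)

# Limited-range scaling: Y spans 219 of 255 codes, U/V span 224 of 255.
y_scaled = [c * 219 / 255 for c in y_row]   # ~[0.1826, 0.6142, 0.0620]
u_scaled = [c * 224 / 255 for c in u_row]
v_scaled = [c * 224 / 255 for c in v_row]
```

The derived Y row matches the coefficients in the question's shader, but the derived U/V scale for 0.5 comes out near 0.4392 (224/255 * 0.5) rather than the 0.5314 the question's formula uses, which quantifies exactly how far the working formula deviates from the spec.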

Unity shader (hlsl) equivalent of Vector3.ProjectOnPlane
In a shader for Unity (HLSL) I'm looking for a way to project a vector (float3 or float4) onto a plane, given the plane's normal direction. What I need is something equivalent to Unity's Vector3.ProjectOnPlane function: https://docs.unity3d.com/ScriptReference/Vector3.ProjectOnPlane.html
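The math behind Vector3.ProjectOnPlane is just subtracting the component along the normal: projected = v - dot(v, n) * n for a unit-length normal n (the HLSL version is the same one-liner using dot()). A quick Python sketch of the formula, assuming n is already normalized; for an unnormalized normal, divide by dot(n, n):

```python
def project_on_plane(v, n):
    """Project vector v onto the plane with unit normal n: v - dot(v, n) * n."""
    d = sum(a * b for a, b in zip(v, n))        # dot(v, n), the height above the plane
    return tuple(a - d * b for a, b in zip(v, n))

# → project_on_plane((1, 1, 0), (0, 1, 0)) gives (1, 0, 0)
```

Subtracting the normal component leaves only the part of the vector lying in the plane, which is exactly what Unity's helper returns.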

Is there a 3D matrix in HLSL?
Is there a 3D matrix in HLSL, similar to Texture3D, that would support mul() operations? Or how could such a thing be implemented in HLSL?
Wrong asin result
My code:
import math
x = input()
print(math.asin(math.radians(float(x))))
My x was 0.7071067811865475, and the result was some irrational number between 0 and 1, but to my knowledge it should have been around 45.
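The likely bug (my reading of the code above): math.asin takes a plain ratio, not an angle, and returns radians, so wrapping the input in math.radians treats the ratio as if it were degrees. To get 45 back, convert the result instead of the argument:

```python
import math

x = 0.7071067811865475               # this is sin(45 degrees)
wrong = math.asin(math.radians(x))   # converts the ratio as if it were an angle
right = math.degrees(math.asin(x))   # asin returns radians; convert those to degrees

# right is approximately 45.0
```

The "irrational number between 0 and 1" in the question is consistent with this: asin of a small radians-converted value stays small, and even the correct asin(x) is about 0.785 radians until it is converted to degrees.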

Trigonometric functions approximation using deep learning
Is it possible to use deep learning to approximate trigonometric functions?
I have used this code. It only gives me a horizontal line (y = 0). Is there any way to make it vary like the sin function?
from numpy import sin, arange
from keras.models import Sequential
from keras.layers import Dense

def f(x):
    return sin(x)

t = arange(0.0, 1.0, 0.01)
y = f(t)

model = Sequential()
model.add(Dense(512, input_shape=(1,), activation='relu'))
model.add(Dense(256, activation='linear'))
model.add(Dense(128, activation='relu'))
model.add(Dense(64, activation='relu'))
model.add(Dense(32, activation='relu'))
model.add(Dense(16, activation='relu'))
model.add(Dense(8, activation='relu'))
model.add(Dense(4, activation='relu'))
model.add(Dense(1, activation='softplus'))
model.compile(optimizer='adam', loss='mean_absolute_percentage_error', metrics=['accuracy'])
model.fit(t, y, epochs=75000, batch_size=10)

def fit(t):
    return model.predict(t)
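One likely culprit (a guess, not confirmed against the original run): t starts at 0, so the targets include sin(0) = 0, and mean_absolute_percentage_error divides by the target, which makes the loss explode around zero targets; plain mean squared error is the usual regression choice here. The effect can be seen without Keras, using a Keras-style MAPE with its small clipped denominator:

```python
import numpy as np

y_true = np.array([0.0, 0.5, 0.841])   # targets including sin(0) = 0
y_pred = np.array([0.1, 0.5, 0.841])   # a prediction off by 0.1 at the zero target

mse = np.mean((y_true - y_pred) ** 2)

# Keras-style MAPE clips the denominator at a tiny epsilon,
# so near-zero targets dominate the average catastrophically.
eps = 1e-7
mape = 100.0 * np.mean(np.abs(y_true - y_pred) / np.maximum(np.abs(y_true), eps))
```

Here mse stays small while mape is astronomically large, so a network trained on MAPE gets almost no useful gradient signal from the rest of the curve; switching to loss='mse' (and dropping the meaningless accuracy metric for regression) is the first thing to try.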

c++ trigonometric function returns unexpected value
I've been assigned to write a program that takes an int or a float in degrees as input and prints the sin and cos of that value with 6 decimal places. But when the input is 90 or 180 I get weird negative zeros instead of "positive" zeros. Code:
#include <iostream>
#include <cmath>
#include <iomanip>
using namespace std;

const double PI = 3.14159265;

int main()
{
    float x;
    cin >> x;
    x = x * PI / 180;
    cout << fixed;
    cout << setprecision(6) << sin(x);
    cout << ' ' << setprecision(6) << cos(x) << endl;
}
Undesired input and output:
In: 90    Out: 1.000000 -0.000000
In: 180   Out: -0.000000 -1.000000
I've tried to fix this by multiplying the result of the cos function by -1 when the input is 90, but it still outputs a signed zero.
Solution: As @harold and @user2618142 pointed out, the result of the calculation was negative, so the approximation returned a negative 0.
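The mechanism is easy to reproduce in any IEEE 754 language: the computed value is a tiny negative number (the float angle is slightly past pi/2 or pi), fixed formatting rounds it to zero but keeps the sign. Since the value is a small negative rather than an exact -0.0, the robust fix is clamping values below the displayed precision, sketched here in Python; the threshold is illustrative:

```python
# A tiny negative result formats as "-0.000000" at six decimal places:
# format(-8.7e-8, '.6f') == '-0.000000'

# Clamping values smaller than the displayed precision avoids the signed zero:
def clean(v, eps=1e-6):
    """Snap values that would display as (-)0.000000 to exactly 0.0."""
    return 0.0 if abs(v) < eps else v

print(format(-8.7e-8, '.6f'))         # the raw tiny negative keeps its sign
print(format(clean(-8.7e-8), '.6f'))  # the clamped value prints as 0.000000
```

The equivalent one-liner in the C++ program would be a check like if (fabs(v) < 1e-6) v = 0; applied before printing.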