Changing Sin frequency smoothly in shader
In my vertex shader I am using a sin function to offset vertices. But as I change the frequency of my sin function, I notice some "flickering". I suspect this happens because the phase is no longer synchronized once the frequency changes.
float s = sin(frequency * _Time);
Is there a way to avoid this flickering effect while changing the frequency?
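A common fix (the usual approach for this symptom, not something stated in the post): stop evaluating sin(frequency * _Time) and instead accumulate phase on the CPU each frame — phase += frequency * deltaTime — and pass the phase to the shader as a uniform, so the shader just computes sin(phase). The phase is then continuous no matter how the frequency changes. A minimal sketch of the difference:

```python
import math

def naive(freqs, dt):
    # s = sin(f * t): the effective phase f*t jumps whenever f changes mid-run
    out, t = [], 0.0
    for f in freqs:
        out.append(math.sin(f * t))
        t += dt
    return out

def phase_accumulated(freqs, dt):
    # phase += f * dt: the phase is continuous, so the wave never jumps
    out, phase = [], 0.0
    for f in freqs:
        out.append(math.sin(phase))
        phase += f * dt
    return out

def max_step(samples):
    # Largest jump between consecutive samples (a proxy for visible flicker)
    return max(abs(b - a) for a, b in zip(samples, samples[1:]))
```

Switching the frequency from 1 to 50 mid-run makes the naive version jump by more than 1.0 in a single sample, while the accumulated version can never move more than frequency * dt per sample.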
See also questions close to this topic

how to generate random numbers :)
So I am a newbie to Crystal and was curious what the Crystal equivalent of the following JavaScript code is:
var _number = Math.floor(Math.random() * x)
where x is some arbitrary number. In general, how would you go about creating a random int in Crystal?
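For reference (based on Crystal's standard library — worth verifying in the docs): Crystal's top-level rand(x) already returns a uniform integer in 0...x, so _number = rand(x) should be the direct equivalent. The JavaScript idiom itself is just "uniform integer below x", sketched here in Python:

```python
import random

def random_int_below(x):
    # Same semantics as Math.floor(Math.random() * x):
    # a uniform integer in [0, x)
    return random.randrange(x)
```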

Find a,b,n so that (a^b)%n=x
Say I choose a value for x that can be between 0 and 2147483647 (Int32.MaxValue). I am trying to figure out how I can find values for a, b, n so that (a^b)%n = x.
I already know that I can use ModPow to verify the values, but I don't know how I can find a fitting a, b and n.

#include <cstdio>
#include <iostream>

/// Calculate (a^b)%n
/// \param a The base
/// \param b The exponent
/// \param n The modulo
/// \return (a^b)%n
int ModPow(int a, int b, int n)
{
    long long x = 1, y = a;
    while (b > 0)
    {
        if (b % 2 == 1)
        {
            x = (x * y) % n; // multiplying with base
        }
        y = (y * y) % n; // squaring the base
        b /= 2;
    }
    return x % n;
}

int main()
{
    int x = 1337;
    // How to find a,b,n so that (a^b)%n=x
    int a = ?;
    int b = ?;
    int n = ?;
    if (x == ModPow(a, b, n))
        printf("ok");
    return 0;
}
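One observation that may help (not in the original post): the constraint is extremely loose, so a trivial construction works — pick a = x, b = 1, and any n > x, since x^1 mod n = x whenever n > x. A sketch, with a Python port of the ModPow above:

```python
def mod_pow(a, b, n):
    # Square-and-multiply, mirroring the C++ ModPow above
    x, y = 1, a % n
    while b > 0:
        if b % 2 == 1:
            x = (x * y) % n  # multiplying with base
        y = (y * y) % n      # squaring the base
        b //= 2
    return x % n

def find_abn(x):
    # Trivial solution: x^1 mod (x + 1) == x for any x >= 0
    return x, 1, x + 1
```

If a, b, n should be "interesting" (e.g. b > 1), the problem becomes a discrete-logarithm-style search and is much harder; nothing in the question rules out the trivial answer, though.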

Creating a helix following a curve
I wish to create a helix that follows a curve. To see what I am after, this link has a gif representing it:
https://www.mapleprimes.com/posts/202940SpiralAroundTheCurve
(I'm not interested in the animation, or other geometry just the resulting pink helix)
Ideally I should be able to have any shape of curve, know a diameter/radius I want for the helix, and from them generate a second curve (the helix) that travels around it at a constant pitch.
I am doing this in Javascript (three.js), but I think it's more of a general maths problem.
Using the following, I can get a helix around a straight section, but it fails miserably when the curve changes direction/bends:
let helixPoints = [];
let helixDiameter = 30;
for (let t = 0; t < 1; t += 0.01) {
    let curvePoint = curve.getPointAt(t);
    let helixX = curvePoint.x + (helixDiameter * Math.cos(t * 100));
    let helixY = curvePoint.y + (helixDiameter * Math.sin(t * 100));
    let helixZ = curvePoint.z;
    helixPoints.push(new THREE.Vector3(helixX, helixY, helixZ));
}
let helixCurve = new THREE.CatmullRomCurve3(helixPoints);
I know I need to do something more to helixZ, and I think that to make it follow any curve I may need to get the tangent at the points.
I just cannot get my head around the maths. If anyone could point me in the right direction I would be grateful.
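The tangent intuition is right: the standard approach is to build a moving frame along the curve — at each sample take the unit tangent T, derive two perpendicular unit axes N and B, and offset the curve point by r*(cosθ·N + sinθ·B) while θ advances with t. A sketch in plain Python (a three.js version would use curve.getTangentAt(t) and Vector3.cross; the fixed "up" vector here is a simplification that can twist near vertical tangents — a Frenet or parallel-transport frame avoids that):

```python
import math

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def normalize(v):
    L = math.sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2])
    return (v[0]/L, v[1]/L, v[2]/L)

def helix_around_curve(curve, radius, turns, steps=200):
    """curve(t) -> (x, y, z) for t in [0, 1]. Returns helix sample points."""
    eps = 1e-4
    pts = []
    for i in range(steps):
        t = i / (steps - 1)
        p = curve(t)
        # Numerical tangent (finite difference, clamped to [0, 1])
        p0 = curve(max(t - eps, 0.0))
        p1 = curve(min(t + eps, 1.0))
        T = normalize((p1[0]-p0[0], p1[1]-p0[1], p1[2]-p0[2]))
        # Build two axes perpendicular to T from a fixed 'up' vector
        up = (0.0, 0.0, 1.0) if abs(T[2]) < 0.9 else (1.0, 0.0, 0.0)
        N = normalize(cross(T, up))
        B = cross(T, N)  # already unit length: T and N are unit and perpendicular
        theta = 2.0 * math.pi * turns * t
        pts.append(tuple(p[k] + radius * (math.cos(theta)*N[k] + math.sin(theta)*B[k])
                         for k in range(3)))
    return pts
```

For a straight curve this reduces exactly to a circle of the chosen radius swept along the axis; for a bending curve the frame tilts with the tangent, which is what the original helixX/helixY/helixZ code was missing.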

Three.js - Get mouse and vertex position in a Vertex Shader
Using Three.js, I'm trying to simply use mouse position in a Vertex Shader in order to modify vertex position (represented by gl_Position).
Despite searching almost everywhere, I couldn't get it done... Here's what I did:
First, I send mouse coordinates from Javascript to the vertex shader, using a simple uniform called "mouse", like so:

object.material.uniforms.mouse.value.set(mouseX, mouseY, 0);

where "mouseX" and "mouseY" are calculated with event.clientX - (window.innerWidth / 2) and event.clientY - (window.innerHeight / 2) respectively, in a mousemove listener.

In my vertex shader, vertex position is calculated in the standard way:
gl_Position = projectionMatrix * viewMatrix * modelMatrix * vec4(position, 1.);
where "position" is the attribute I pass using

geometry.addAttribute('position', new THREE.BufferAttribute(new Float32Array(positionArray), 3));

when creating my BufferGeometry.

Before I even think about calculating the distance between mouse and vertex coordinates, I know I have to convert one or the other into a different coordinate system. I'm trying hard to understand how this works, but sadly I haven't come up with an answer yet...
So my first question is: should I convert coordinates directly in the vertex shader (even if I don't know how for now), or should I calculate mouse coordinates in Javascript in order to match vertex coordinate system in the shader?
My second question is: if I were to handle conversion in the vertex shader (which I would prefer as my code would be much cleaner that way), how should I do it?
I know I don't give a lot of information here, but I really don't know what could be relevant, so please ask me if anything is unclear or missing. :/
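One workable answer to the first question (a common pattern, not the only one): do the conversion on the JavaScript side and pass the mouse in normalized device coordinates. After the projection multiply, gl_Position is in clip space, and gl_Position.xy / gl_Position.w is NDC — so if the mouse uniform is also in NDC, both values live in the same [-1, 1] space and can be compared directly. The pixel-to-NDC conversion is just:

```python
def mouse_to_ndc(client_x, client_y, width, height):
    # Map pixel coords ([0, width] x [0, height], y pointing down)
    # to NDC ([-1, 1] x [-1, 1], y pointing up)
    ndc_x = (client_x / width) * 2.0 - 1.0
    ndc_y = -((client_y / height) * 2.0 - 1.0)
    return ndc_x, ndc_y
```

In the vertex shader the comparable quantity would then be vec2 ndc = gl_Position.xy / gl_Position.w, computed after the matrix multiply (assumption: the effect is wanted in screen space; for a world-space effect, unproject the mouse onto a plane instead).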

What could be the source of this aliasing?
I am manually raytracing a 3D image. I have noticed that the farther from the 3D image I am, the bigger the aliasing.
This 3D image is basically a voxelized representation of the Stanford dragon. I have placed the volume centered at the origin (the diagonals cross at (0,0,0)), meaning that one of the corners is at (-cube_dim, -cube_dim, -cube_dim) and the other is at (cube_dim, cube_dim, cube_dim).
At close range the image is fine:
(The minor "aliasing" you see here is due to me doing a ray marching algorithm; this is not the aliasing I am worried about — it was expected and acceptable.)
However, if we get far away enough, some aliasing starts to be seen (this is a completely different kind of aliasing):
The fragment shader used to generate the image is this:
#version 430

in vec2 f_coord;
out vec4 fragment_color;

uniform layout(binding=0, rgba8) image3D volume_data;
uniform vec3 camera_pos;
uniform float aspect_ratio;
uniform float cube_dim;
uniform int voxel_resolution;

#define EPSILON 0.01

// Check whether the position is inside of the specified box
bool inBoxBounds(vec3 corner, float size, vec3 position)
{
    bool inside = true;
    // Put the position in the coordinate frame of the box
    position -= corner;
    // The point is inside only if all of its components are inside
    for(int i=0; i<3; i++)
    {
        inside = inside && (position[i] > -EPSILON);
        inside = inside && (position[i] < size+EPSILON);
    }
    return inside;
}

// Calculate the distance to the intersection with a box, or infinity if the box cannot be hit
float boxIntersection(vec3 origin, vec3 dir, vec3 corner0, float size)
{
    //dir = normalize(dir);
    // Calculate opposite corner
    vec3 corner1 = corner0 + vec3(size,size,size);
    // Set the ray plane intersections
    float coeffs[6];
    coeffs[0] = (corner0.x - origin.x)/(dir.x);
    coeffs[1] = (corner0.y - origin.y)/(dir.y);
    coeffs[2] = (corner0.z - origin.z)/(dir.z);
    coeffs[3] = (corner1.x - origin.x)/(dir.x);
    coeffs[4] = (corner1.y - origin.y)/(dir.y);
    coeffs[5] = (corner1.z - origin.z)/(dir.z);

    float t = 1.f/0.f;
    // Check for the smallest valid intersection distance
    // We allow negative values up to -size to create correct sorting if the origin is
    // inside the box
    for(uint i=0; i<6; i++)
        t = (coeffs[i]>=0) && inBoxBounds(corner0, size, origin+dir*coeffs[i]) ?
            min(coeffs[i], t) : t;

    return t;
}

void main()
{
    float v_size = cube_dim/voxel_resolution;

    vec3 r = vec3(f_coord.xy, 1.f/tan(radians(40)));
    r.y /= aspect_ratio;
    vec3 dir = normalize(r); //*v_size*0.5;
    r += camera_pos;

    float t = boxIntersection(r, dir, vec3(-cube_dim), cube_dim*2);
    if(isinf(t))
        discard;

    if(!((r.x>=-cube_dim) && (r.x<=cube_dim) && (r.y>=-cube_dim) && (r.y<=cube_dim)
        && (r.z>=-cube_dim) && (r.z<=cube_dim)))
        r += dir*t;

    vec4 color = vec4(0);
    int c = 0;
    while((r.x>=-cube_dim) && (r.x<=cube_dim) && (r.y>=-cube_dim) && (r.y<=cube_dim)
        && (r.z>=-cube_dim) && (r.z<=cube_dim))
    {
        r += dir*v_size*0.5;
        vec4 val = imageLoad(volume_data, ivec3((r*0.5/cube_dim + vec3(0.5))*(voxel_resolution-1)));
        if(val.w > 0)
        {
            color = val;
            break;
        }
        c++;
    }
    fragment_color = color;
}
Understanding the algorithm
First, we create a ray based on the screen coordinates (we use the standard raytracing technique, where the focal length is 1/tan(angle)).
We then start the ray at the camera's current position
We check intersection of the ray with the box containing our object (we basically assume that our 3D texture is a big cube in the scene and we check whether we hit it).
If we don't hit it, we discard the fragment. If we do hit it and we're outside, we move along the ray until we are at the surface of the box. If we hit it and are inside, we stay where we are.
At this point we are guaranteed that the position of our ray is inside the box.
Now we move in small segments along the ray until we either find a non-zero value or hit the end of the box.
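For reference, the box test in step 3 can also be written with the classic slab method, which avoids the six-coefficient loop and the epsilon-padded inBoxBounds check. A sketch in plain Python with the same semantics (distance to the box, or infinity on a miss, 0 when already inside):

```python
import math

def box_intersection(origin, direction, corner0, size):
    """Distance along `direction` to the axis-aligned box
    [corner0, corner0 + size]^3, or math.inf on a miss.
    Returns 0.0 when the origin is already inside."""
    tmin, tmax = -math.inf, math.inf
    for i in range(3):
        if direction[i] != 0.0:
            t1 = (corner0[i] - origin[i]) / direction[i]
            t2 = (corner0[i] + size - origin[i]) / direction[i]
            tmin = max(tmin, min(t1, t2))  # latest entry across the slabs
            tmax = min(tmax, max(t1, t2))  # earliest exit across the slabs
        elif not (corner0[i] <= origin[i] <= corner0[i] + size):
            return math.inf  # ray parallel to this slab and outside it
    if tmax < max(tmin, 0.0):
        return math.inf  # exit before entry, or box entirely behind the ray
    return max(tmin, 0.0)
```

This does not by itself explain the distance-dependent aliasing (which is characteristic of undersampling: at long range each pixel's fixed-size march step skips whole voxels), but a tighter intersection makes the marching loop easier to reason about.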
 GLSL shader to boost the color

THREE.js apply shader on cube Texture
I have a cubeTextureLoader() setting my scene.background. I was wondering if there's a way to apply a shader to this texture so I can change its colors. I intend to use this background as a sky, and I want to change it to simulate the day and night cycle.
The cubeTextureLoader() I'm using: https://threejs.org/examples/#webgl_materials_cubemap
The sky shader I would like to apply: https://threejs.org/examples/#webgl_shaders_sky

How to pass nontexture data to SCNTechnique Metal shaders
I can pass a custom parameter of type sampler2D to the Metal fragment function of an SCNTechnique, and I have a working 2nd pass.
PList:
<key>inputs</key>
<dict>
    <key>imageFromPass1</key>
    <string>COLOR</string>
    <key>myCustomImage</key>
    <string>myCustomImage_sym</string>
</dict>
...
<key>symbols</key>
<dict>
    <key>myCustomImage_sym</key>
    <dict>
        <key>type</key>
        <string>sampler2D</string>
    </dict>
</dict>
Relevant ObjC code:
[technique setValue: UIImagePNGRepresentation(myCustomTexture) forKey:@"myCustomImage_sym"];
Metal function parameters:
fragment half4 myFS(out_vertex_t vert [[stage_in]],
                    texture2d<float, access::sample> imageFromPass1 [[texture(0)]],
                    texture2d<float, access::sample> myCustomImage [[texture(1)]],
                    constant SCNSceneBuffer& scn_frame [[buffer(0)]])
{
    ...
I access and use all these inputs in the shader function. It works!
So far so good!
However, when I add another custom parameter of type float ...

<key>blob_pos</key>
<string>blob_pos_sym</string>

...

<key>blob_pos_sym</key>
<dict>
    <key>type</key>
    <string>float</string>
</dict>

[_sceneView.technique setValue:[NSNumber numberWithFloat:0.5f] forKey:@"blob_pos_sym"];

constant float& blob_pos [[buffer(2)]]
... the passed values never reach the shader function.
I have tried
- using different buffer(N) values up to 6
- having the custom parameter in the vertex function
- type vec3 and float3 instead of type float
- different means of encoding my float into NSData
- wrapping my float in a struct
[technique setValue:[NSValue valueWithSCNVector3:SCNVector3Make(0.5, 0.5, 0.5)] forKey:@"blob_pos_"];

SCNVector3 xx = SCNVector3Make(0.5, 0.5, 0.5);
[technique setValue:[NSData dataWithBytes:&xx length:sizeof(xx)] forKey:@"blob_pos_"];
[technique setValue:[NSData dataWithBytesNoCopy:&xx length:sizeof(xx)] forKey:@"blob_pos_"];

simd_float3 x = simd_make_float3(0.5, 0.5, 0.5);
[technique setValue:[NSData dataWithBytes:&x length:sizeof(x)] forKey:@"blob_pos_"];

float y = 0.5;
[technique setValue:[NSData dataWithBytes:&y length:sizeof(y)] forKey:@"blob_pos_"];

struct MyStruct { float x; };
struct MyStruct myStruct = { 0.5 };
[technique setValue:[NSValue valueWithBytes:&myStruct objCType:@encode(struct MyStruct)] forKey:@"blob_pos_"];
[technique setObject:[NSValue valueWithBytes:&myStruct objCType:@encode(struct MyStruct)] forKeyedSubscript:@"blob_pos_"];
... and it all failed.
Then I looked at handleBindingOfSymbol:usingBlock:, but it is GLSL-only.
I found its Metal counterpart, handleBindingOfBufferNamed:frequency:usingBlock:, which is not available in SCNTechnique.
I Googled SCNTechnique Metal ... and realized all of the projects used sampler2D parameters only.
Finally I learned that this isn't new; it has bugged developers for years.
Before I go and encode this float in a texture, let me know the missing bit to make it work the way it's intended.
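Since sampler2D symbols demonstrably work, the texture workaround mentioned above is at least predictable: quantize the scalar into a 1x1 texture and sample it in the shader. A sketch of the encode/decode roundtrip (8-bit unorm, so ~1/255 precision; split the value across channels if more precision is needed — the channel layout here is an illustrative choice, not an SCNTechnique requirement):

```python
def float_to_rgba8(v):
    # Clamp to [0, 1] and quantize into one 8-bit channel
    b = round(max(0.0, min(1.0, v)) * 255)
    return (b, b, b, 255)

def rgba8_to_float(texel):
    # What a unorm texture sample would hand back in the shader
    return texel[0] / 255.0
```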

HLSL lighting based on texture pixels instead of screen
In HLSL, how can I calculate lighting based on pixels of a texture, instead of pixels that make up the object?
In other words, if I have a 64x64px texture being rendered on a 1024x768px screen, I want to calculate the lighting as it affects the 64x64px space, resulting in jagged pixels instead of a smooth line.
I've researched dozens of answers but I'm not sure how I can determine at all times if a fragment is a part of a pixel that should be fully lit or not. Maybe this is the wrong approach?
The current implementation uses a diffuse texture and a normal map. It results in what appear as artifacts (diagonal lines) in the output:
Note: The reason it almost looks correct is because of the normal map, which causes some adjacent pixels to have normals that are angled just enough to light some pixels and not others.
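One common approach for this effect (an assumption about the goal, not a known fix for this exact shader): quantize the interpolated UVs to texel centers before every lighting-related sample, so all screen fragments covering one texel sample the same normal and compute identical lighting. In HLSL that is roughly uv = (floor(uv * texSize) + 0.5) / texSize; the quantization itself:

```python
import math

def snap_to_texel_center(u, v, tex_w, tex_h):
    # All UVs inside one texel collapse to that texel's center,
    # so lighting becomes constant per texel (jagged, pixel-art style)
    su = (math.floor(u * tex_w) + 0.5) / tex_w
    sv = (math.floor(v * tex_h) + 0.5) / tex_h
    return su, sv
```

Any per-pixel input that varies across a texel (world position used for attenuation, for example) needs the same treatment, or the diagonal artifacts reappear through that input.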

directx: shader constant buffer initial value
If I have a constant buffer in my shader like

cbuffer buffer { float4 vector; };

what is the value of vector if I forget to call SetConstantBuffers to set its value in C++ (or pass a nullptr for it in SetConstantBuffers)? Will it be initialized to zero, or will it be an undefined value? In my tests it is always zero, but I want to know whether that is defined by the documentation.

What happens when I pass the center of a texel to Texture2D::GatherRed?
The definition of GatherRed (from memory) is:
Gathers the four texels that would participate in bilinear interpolation at that point.
But if I pass the center of a texel, then bilinear interpolation would return just the value of that texel. In that case, what does GatherRed return?
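The usual mental model (hedged — check the D3D functional spec for the exact rounding rule at half-texel boundaries): Gather returns the same 2x2 quad that bilinear filtering would read, i.e. the texels whose centers surround the sample point. At an exact texel center the point coincides with one of those centers; the quad is still 2x2, the bilinear weights would just be 1, 0, 0, 0 for it. A sketch of the footprint computation:

```python
import math

def gather_footprint(u, v, tex_w, tex_h):
    # Bilinear interpolates between texel centers, so the participating
    # 2x2 quad starts at floor(unnormalized_coord - 0.5)
    x0 = math.floor(u * tex_w - 0.5)
    y0 = math.floor(v * tex_h - 0.5)
    return {(x0, y0), (x0 + 1, y0), (x0, y0 + 1), (x0 + 1, y0 + 1)}
```

So GatherRed at a texel center still returns four values — the queried texel plus three neighbors on one side — rather than the single texel's value repeated.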

Is it possible to pass an array into a PixelShader register using WPF
I'm trying to write a Gaussian blur pixel shader for use with WPF Effects, but I'm having difficulties finding a way to pass the kernel into the shader.
The way my shader works is pretty standard. It calculates the kernel on the CPU and then (tries to) pass it to the GPU for use.
Currently, my kernel is formatted as a float array of size 144 (24x24 kernel with symmetry). From what I've gathered, it's not like OpenGL, where I can pass raw data through a vbo. WPF dependency objects don't seem to support array types, and HLSL doesn't support native arrays.
I could store the kernel in a texture, but I wouldn't know how to generate and then pass that texture. The texture input for shaders seems to be a brush, but I can't find a way to set brush data directly in WPF.
Is passing an array in possible at all? Or do I have to hard-code the kernel in the shader? It's such a trivial thing to do in OpenGL; I can't imagine it's not possible in WPF.
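Independent of how the data gets in, the kernel itself can shrink a lot: a Gaussian is separable, so a 24x24 blur can be done as two 1-D passes (horizontal, then vertical), and symmetry halves the unique weights again — which may make hard-coding the weights into the shader (or packing them into a few float4 registers) practical. Generating the weights, as the CPU side would precompute them:

```python
import math

def gaussian_kernel_1d(radius, sigma):
    # One row of weights; a full 2-D Gaussian blur is this applied
    # horizontally, then vertically (separability of the Gaussian)
    w = [math.exp(-(i * i) / (2.0 * sigma * sigma))
         for i in range(-radius, radius + 1)]
    s = sum(w)
    return [x / s for x in w]  # normalize so the blur preserves brightness
```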

Asteroids libGDX cosine and sine rotation equations, shooting bullets at wrong angle
I've made an Asteroids game for an Android device, and noticed that the player was facing one direction but firing the bullet in another, slightly displaced direction. I've looked through the code, and it only happens when the player accelerates and rotates, not when the player is only rotating. I hope this code is enough to tell you something about it and give me a suggestion.
The player is rotating in the correct direction while the bullet isn't. I tried putting the update functions in different orders, but that didn't help either. I do not see what the problem is, because when shooting, the player class creates a bullet every time with the radians from the player. So I think it has something to do with the radians not being updated when the bullet is fired.
private void initForces() {
    maxSpeed = 300;
    acceleration = 200;
    friction = 10;
}

private void initRotationSpeed() {
    // Radians are used to determine the angle the player points in
    radians = MathUtils.PI / 2;
    rotationSpeed = 3;
}

public void shoot() {
    final int MAX_BULLETS = 4;
    if (bullets.size() == MAX_BULLETS || isHit())
        return;
    bullets.add(new Bullet(x, y, radians));
}

public void update(float dt) {
    if (updateCheckIfPlayerHit(dt)) {
        return;
    }
    updateCheckExtraLives();

    // Forces
    updateAcceleration(dt);
    updateRotationSpeed(dt);
    updateTurning(dt);
    updateFriction(dt);

    // Set shape
    setShape();

    // Screen wrap
    wrap();
}

private void updateTurning(float dt) {
    // Turning: Tilt the screen to left or right to rotate the ship
    final float ROTATION_SENSITIVITY = 3;
    if (left || Gdx.input.getAccelerometerX() > ROTATION_SENSITIVITY) {
        radians += rotationSpeed * dt;
    } else if (right || Gdx.input.getAccelerometerX() < -ROTATION_SENSITIVITY) {
        radians -= rotationSpeed * dt;
    }
}

private void updateRotationSpeed(float dt) {
    x += dx * dt;
    y += dy * dt;
}

private void updateAcceleration(float dt) {
    // Accelerating
    if (up || Gdx.input.isTouched() && Gdx.input.getX() > 0
            && Gdx.input.getX() < Gdx.graphics.getWidth() / 2) {
        dx += MathUtils.cos(radians) * acceleration * dt;
        dy += MathUtils.sin(radians) * acceleration * dt;
    }
}

private void updateFriction(float dt) {
    // Friction
    float vector = (float) Math.sqrt(dx * dx + dy * dy);
    if (vector > 0) {
        dx -= (dx / vector) * friction * dt;
        dy -= (dy / vector) * friction * dt;
    }
    if (vector > maxSpeed) {
        dx = (dx / vector) * maxSpeed;
        dy = (dy / vector) * maxSpeed;
    }
}
Bullet.java
package com.mygdx.entities;

import com.badlogic.gdx.graphics.glutils.ShapeRenderer;
import com.badlogic.gdx.math.MathUtils;

public class Bullet extends SpaceObject {

    private float lifeTime;
    private float lifeTimer;
    private boolean remove;

    Bullet(float x, float y, float radians) {
        this.x = x;
        this.y = y;
        this.radians = radians;
        float speed = 350;
        dx = MathUtils.cos(radians) * speed;
        dy = MathUtils.sin(radians) * speed;
        width = height = 2;
        lifeTimer = 0;
        lifeTime = 1;
    }

    public boolean shouldRemove() {
        return remove;
    }

    public void update(float dt) {
        updateRotationSpeed(dt);
        wrap();
        updateLifeTime(dt);
    }

    private void updateRotationSpeed(float dt) {
        x += dx * dt;
        y += dy * dt;
    }

    private void updateLifeTime(float dt) {
        // How long the bullet is supposed to live
        lifeTimer += dt;
        if (lifeTimer > lifeTime) {
            remove = true;
        }
    }

    public void draw(ShapeRenderer shapeRenderer) {
        shapeRenderer.setColor(1, 1, 1, 1);
        shapeRenderer.begin(ShapeRenderer.ShapeType.Line);
        shapeRenderer.circle(x - width / 2, y - height / 2, width / 2);
        shapeRenderer.end();
    }
}
And to shoot with the player the code is:
if (Gdx.input.justTouched() && Gdx.input.getX() > Gdx.graphics.getWidth() / 2 && Gdx.input.getX() < Gdx.graphics.getWidth()) { player.shoot(); }

sine wave generation in c++
I am trying to generate a set of points which, when plotted as a graph, represent a sine wave of 1 cycle. The requirements are:
- a sine wave of 1 cycle
- lower limit = 29491
- upper limit = 36043
- no of points = 100
- amplitude = 3276
- zero offset = 32767
Code :
#include <cmath>
#include <fstream>

using namespace std;

int main()
{
    ofstream outfile;
    outfile.open("data.dat", ios::trunc | ios::out);
    for(int i = 0; i < 100; i++)
    {
        outfile << int(3276 * sin(i) + 32767) << "\n";
    }
    outfile.close();
    return 0;
}
I am generating and storing the points in a file. When these points are plotted I get the following graph.
But I only need one cycle. How can I do this?
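The issue is that sin(i) sweeps i = 0..99 radians, which is nearly 16 full cycles. To get exactly one cycle, map the loop index to [0, 2π): angle = 2*pi*i/100. A sketch of the corrected point generation (same math the C++ loop needs):

```python
import math

def sine_cycle(n=100, amplitude=3276, offset=32767):
    # i/n sweeps [0, 1), so 2*pi*i/n sweeps exactly one cycle;
    # the result then spans [offset - amplitude, offset + amplitude]
    return [int(amplitude * math.sin(2.0 * math.pi * i / n) + offset)
            for i in range(n)]
```

In the C++ loop the change is just outfile << int(3276 * sin(2 * M_PI * i / 100.0) + 32767) (with M_PI from <cmath>, or a hand-defined pi constant if M_PI is unavailable).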

Why does sin not work on my Python?
When trying to do some basic trigonometry for Python turtle, I decided to use sin, so I used:
import math num = math.sin(15)
It returned 0.650287840157. I tried using the value but it was way off, so I checked on my real calculator and got 0.2588190451, which did work.
I tried making it 15.0 to see if it had to do with integer rounding, because the value was not a float, but it still returned the exact same result.
Am I using it wrong, or is there just an error in the library's function?
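Nothing is wrong with the library: math.sin takes radians, while the calculator was in degrees mode. sin(15 radians) really is ≈0.6503; for 15 degrees, convert first:

```python
import math

# math.sin interprets its argument as radians
fifteen_radians = math.sin(15)                # 0.6502878...
fifteen_degrees = math.sin(math.radians(15))  # 0.2588190...
```

math.radians(15) is just 15 * math.pi / 180; math.degrees does the reverse.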