Function to smoothly scale a line at certain cutpoints
I have a set of points that range over (0, 300,000), and I want to scale them so they range over (1, 2). I can scale them linearly between (1, 1) as shown below:
What I really want, however, is for the transformation function to scale them nonlinearly: over (0, 100,000] I'd like a smooth sigmoid-like curve ranging over (1, 1], and over (100,000, 300,000] I'd like it to scale like a log curve going from (1, 2). Something like this:
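A sketch of one possible piecewise function in Python. The details are assumptions, not taken from the post: a logistic segment that rises to exactly 1 at the 100,000 cutpoint, then a log segment that reaches exactly 2 at 300,000, with an arbitrary steepness constant:

```python
import numpy as np

CUT, TOP = 100_000.0, 300_000.0   # assumed cutpoint and upper end of the range

def smooth_scale(x):
    """Sigmoid-like below CUT, log-like above, meeting at f(CUT) = 1."""
    x = np.asarray(x, dtype=float)
    # Logistic centred mid-segment, rescaled so f(0) = 0 and f(CUT) = 1 exactly.
    k = 10.0 / CUT                                   # steepness: an arbitrary choice
    raw = 1.0 / (1.0 + np.exp(-k * (x - CUT / 2.0)))
    lo = 1.0 / (1.0 + np.exp(k * CUT / 2.0))
    hi = 1.0 / (1.0 + np.exp(-k * CUT / 2.0))
    sig = (raw - lo) / (hi - lo)
    # Log segment: exactly 1 at CUT and exactly 2 at TOP.
    log_seg = 1.0 + np.log(np.maximum(x, CUT) / CUT) / np.log(TOP / CUT)
    return np.where(x <= CUT, sig, log_seg)
```

Because both branches equal 1 at the cutpoint, the curve is continuous there; the steepness `k` controls how sharply the sigmoid segment turns.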
See also questions close to this topic

Detect if a rectangle intersects another rectangle in geographic coordinates
I have two rectangles and want to detect whether one rectangle intersects the other in geographic coordinates. My current code works only with positive geographic coordinate values:
if (rect1.MaxLongitude < rect2.MinLongitude || rect2.MaxLongitude < rect1.MinLongitude ||
    rect1.MaxLatitude < rect2.MinLatitude || rect2.MaxLatitude < rect1.MinLatitude) {
    return false;
} else {
    return true;
}
How can I detect intersection in geographic coordinates with both negative and positive values (even within one coordinate)?
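The separating-axis test above actually works unchanged with negative coordinates, as long as neither rectangle crosses the antimeridian (where min longitude can exceed max longitude). A sketch of that idea in Python; the field names and the ±180° handling are my assumptions, not taken from the original code:

```python
def intersects(r1, r2):
    """Axis-aligned overlap test; works for negative and positive coordinates."""
    return not (r1["max_lon"] < r2["min_lon"] or r2["max_lon"] < r1["min_lon"] or
                r1["max_lat"] < r2["min_lat"] or r2["max_lat"] < r1["min_lat"])

def split_at_antimeridian(r):
    """A rectangle whose min_lon > max_lon crosses the +/-180 line;
    split it into two ordinary rectangles before testing."""
    if r["min_lon"] <= r["max_lon"]:
        return [r]
    return [dict(r, max_lon=180.0), dict(r, min_lon=-180.0)]

def intersects_geo(r1, r2):
    return any(intersects(a, b)
               for a in split_at_antimeridian(r1)
               for b in split_at_antimeridian(r2))
```

If no rectangle ever wraps around ±180°, the plain `intersects` test alone is enough.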

OpenGL 3D object rendering
I have been trying to get into OpenGL recently and I've kind of hit a brick wall. To begin with I followed "The Cherno"'s YouTube series about OpenGL and quite liked it; however, so far it covered only 2D rendering. I then tried to get 3D rendering working on my own, but that didn't work at all. I also tried to look into other people's tutorials and apply their explanations to the code I already have, which didn't work either. On the other hand, I don't want to drop all the stuff I already wrote from "The Cherno"'s series and watch another 5 hours of YouTube videos. I'm now at a point where I feel like giving up, due to feeling too incompetent to draw a simple 3D world. I hope that someone here might be able to help me, although I am fully aware that it might be too much to ask. My hope is that someone else saw "The Cherno"'s series and got 3D rendering working on that basis, maybe even having faced similar difficulties. I also looked a lot at this video from "R PLANET ACADEMY": https://www.youtube.com/watch?v=W4_WtHXA6Hs.
I will try to keep my code as short as possible and only mention the parts I think are important, so you don't have to ransack through all the code I have. By the way, GLCall() is a macro that checks for errors in the OpenGL calls. Here we go:
Activating this made my 2D stuff disappear; I got it from "R PLANET ACADEMY". I guess this is 100% necessary for 3D?
GLCall(glEnable(GL_DEPTH_TEST));
Positions for one side of a cube. If I understand everything correctly, I should still be able to draw a 2D object. I'm unsure whether the z-coordinate should be between, say, 0.0f and 1.0f, or has to be a pixel value like int 100. Same with the texture coordinates, although those work with my 2D stuff.
// x, y, z, textureCoordinateX, textureCoordinateY
float positions[] = {
    -50.0f, -50.0f, 0.0f, 0.0f, 0.0f, // 0
     50.0f, -50.0f, 0.0f, 1.0f, 0.0f, // 1
     50.0f,  50.0f, 0.0f, 1.0f, 1.0f, // 2
    -50.0f,  50.0f, 0.0f, 0.0f, 1.0f  // 3
};
Indices should be trivial, right?
unsigned int indices[]{ 0, 1, 2, 2, 3, 0 };
Abstracted code to create vertex arrays, vertex buffers, etc. This code worked 100% fine with my 2D stuff, and I also trust "The Cherno" enough to say it isn't bug-riddled; however, if someone needs that code, I can provide it. I first create a vertex buffer, then push three floats onto the layout for my 3 values x, y and z, then two more for the texture coordinates. Lastly I create the index buffer.
VertexArray va;
VertexBuffer vb(positions, 5 * 4 * sizeof(float));
VertexBufferLayout layout;
layout.Push<float>(3);
layout.Push<float>(2);
va.AddBuffer(vb, layout);
IndexBuffer ib(indices, 6);
This code is from "R PLANET ACADEMY" again, although I think he used a rotation for the model (which I didn't want to use, because I want an unmoving cube for now). Here is probably where my problem lies, because I got a little confused with the maths.
glm::vec3 translation2(300, 200, 0);
glm::mat4 projPerspective = glm::perspective(45.0f, (GLfloat)windowWidth / (GLfloat)windowHeight, 0.1f, 1000.0f);
glm::mat4 viewPerspective = glm::translate(viewPerspective, glm::vec3(0.0f, 0.0f, 3.0f));
glm::mat4 modelPerspective = glm::translate(modelPerspective, translation2);
glm::mat4 mvpPerspective = projPerspective * viewPerspective * modelPerspective;
Here I bind my texture; probably not the problem.
Texture t_brick("resources/textures/brick.png");
t_brick.Bind();
shader.SetUniform1i("u_Texture", 0);
And at last, I set the uniform in my shader and (try to) draw it.
shader.SetUniformMat4f("u_MVP", mvpPerspective);
renderer.Draw(va, ib, shader);
Thank you in advance if you looked over my mess of code. I will definitely be happy about any suggestions! If you think this code is unsalvageable or that it makes more sense to start anew, please let me know; then I might watch "R PLANET ACADEMY"'s series, although that will take forever again. Or maybe someone has a nice tutorial on OpenGL 3D rendering to suggest? Either way, thank you for reading!
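One thing worth checking in the matrix code above: `glm::translate(viewPerspective, ...)` reads `viewPerspective` on the very line that declares it, so the translation appears to be applied to an uninitialized matrix (same for `modelPerspective`); GLM matrices are normally seeded with the identity, `glm::mat4(1.0f)`. A small sketch of why that matters, in NumPy terms (illustrative only, not the GLM API):

```python
import numpy as np

def translate(m, v):
    """Append a translation to an existing 4x4 matrix (column-vector
    convention), mirroring what glm::translate(m, v) does."""
    t = np.eye(4)
    t[:3, 3] = v
    return m @ t

view = translate(np.eye(4), [0.0, 0.0, -3.0])       # start from the identity
garbage = np.full((4, 4), np.nan)                   # an "uninitialized" matrix
broken_view = translate(garbage, [0.0, 0.0, -3.0])  # garbage in, garbage out
```

Starting each matrix from the identity before translating is the usual fix for invisible geometry of this kind.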

How do you do this in R?
This problem uses the package Lahman, which you will probably need to install. Consider the data set Batting, which should now be available. It contains batting statistics of all major league players, broken down by season, since 1871. We will be using this data set some more in the data wrangling chapter of this book.
A.) What is the largest number of triples (X3B) that have been hit in a single season?
B.) What is the playerID(s) of the person(s) who hit the most triples in a single season?
C.) In what year did it happen? Which player hit the most triples in a single season since 1960?
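The max / who / when pattern these questions ask for translates directly to any dataframe library. Sketched here in Python/pandas on a tiny synthetic stand-in for the Batting table (the values below are made up, not real Lahman data):

```python
import pandas as pd

# synthetic stand-in for Lahman's Batting table (illustrative values only)
batting = pd.DataFrame({
    "playerID": ["a01", "b02", "c03", "a01"],
    "yearID":   [1912, 1884, 1975, 1920],
    "X3B":      [36, 31, 12, 18],
})

max_triples = batting["X3B"].max()                    # A: most triples in a season
hitters = batting.loc[batting["X3B"] == max_triples]  # B/C: who, and in which year
since_1960 = batting[batting["yearID"] >= 1960]
top_since_1960 = since_1960.loc[since_1960["X3B"].idxmax(), "playerID"]
```

In R the same steps would use `max(Batting$X3B)`, logical subsetting, and `which.max` on the filtered rows.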

Transform object data with recursion
I'm trying to transform data with recursion but I can't; I'm very new to recursion, so please help me.
Do you think this needs to be done with recursion or not?
(Sorry for my English.)
This is my data:
const mock = [
  { $: { id: '001' } },
  {
    $: { id: '002' },
    question: [{
      $: { id: 'r001' },
      prompt: 'somer001',
      choices: [{
        question: [
          {
            $: { id: 'r0011' },
            prompt: 'somer0011',
            choices: [{
              question: [{
                $: { id: 'r00111' },
                prompt: 'somer00111',
                choices: [""],
              }]
            }]
          },
          {
            $: { id: 'r0012' },
            prompt: 'somer0012',
            choices: [""],
          },
        ]
      }]
    }]
  }
]
I want to transform it to this:
const result = {
  'r001': {
    prompt: 'somer001',
    next: ['r0011', 'r0012'],
  },
  'r0011': {
    prompt: 'somer0011',
    next: ['r00111'],
  },
  'r00111': {
    prompt: 'somer00111',
    next: [],
  },
  'r0012': {
    prompt: 'somer0012',
    next: [],
  },
}

Behaviour of an affine transform on a 3D image with nonuniform resolution in SciPy
I'm looking to apply an affine transformation, defined in homogeneous coordinates, to images of different resolutions, but I encounter an issue when one axis has a different resolution than the others.
Normally, as only the translation part of the affine depends on the resolution, I normalize the translation part by the resolution and apply the corresponding affine to the image using scipy.ndimage.affine_transform.
If the resolution of the image is the same for all axes, it works perfectly. Below you can see the same transformation (a scale+translation, or a rotation+translation; see code below) applied to an image and to its downsampled (i.e. lower-resolution) version. The images match (almost) perfectly; as far as I know, the differences in voxel values are mainly caused by interpolation errors.
You can see that the shapes overlay between the downsampled transformed image and the transformed (then downsampled for comparison) image.
Scale affine transformation applied on the same image, at two different (uniform) resolutions
Rotation affine transformation applied on the same image, at two different (uniform) resolutions
Unfortunately, if one of the image axes has a different resolution than the others (see code below), it still works well for affine transforms with zero off-diagonal terms (like a translation or a scaling), but otherwise the transformation gives a completely wrong result.
Rotation affine transformation applied on the same image, at two different (nonuniform) resolutions
Here you can see a minimal working example of the code:
import numpy as np
import nibabel as nib
from scipy.ndimage import zoom
from scipy.ndimage import affine_transform
import matplotlib.pyplot as plt

################################
#### LOAD ANY 3D IMAGE HERE ####
################################
#%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% TO BE DEFINED BY USER
orig_img = any 3D grayscale image
ndim = orig_img.ndim

################################
##### DEFINE RESOLUTIONS #######
################################
#%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% TO BE DEFINED BY USER
# Comment/uncomment to choose the resolutions (in mm) of the images
# ORIG_RESOLUTION = [1., 1., 1.]
# TARGET_RESOLUTION = [2., 2., 2.]
ORIG_RESOLUTION = [1., 0.5, 1.]
TARGET_RESOLUTION = [2., 2., 2.]

#####################################
##### DEFINE AFFINE TRANSFORM #######
#####################################
affine_scale_translation = np.array([[2.0, 0.0, 0.0, 150.],
                                     [0.0, 0.8, 0.0, 0.  ],
                                     [0.0, 0.0, 1.0, 0.  ],
                                     [0.0, 0.0, 0.0, 1.0 ]])
a = np.sqrt(2)/2.
affine_rotation_translation = np.array([[a,   -a,   0.0, 50.  ],
                                        [a,    a,   0.0, 100. ],
                                        [0.0,  0.0, 1.0, 0.0  ],
                                        [0.0,  0.0, 0.0, 1.0  ]])
#%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% TO BE DEFINED BY USER
# Comment/uncomment to choose the transformation to be applied
# affine_tf, name_affine = affine_scale_translation, "Tf scale"
affine_tf, name_affine = affine_rotation_translation, "Tf rotation"

######################################################
######## DOWNSAMPLE IMAGE TO LOWER RESOLUTION ########
######################################################
downsample_img = zoom(orig_img,
                      zoom=np.array(ORIG_RESOLUTION)/np.array(TARGET_RESOLUTION),
                      prefilter=False, order=1)

##############################################################################
######## APPLY AFFINE TRANSFORMATION TO ORIGINAL AND DOWNSAMPLE IMAGE ########
##############################################################################
affine_st_full_res, affine_st_low_res = affine_tf.copy(), affine_tf.copy()
# Inverse transform, as affine_transform applies the tf from the target space to the original space
affine_st_full_res, affine_st_low_res = np.linalg.inv(affine_st_full_res), np.linalg.inv(affine_st_low_res)
# Normalize translation part (normally expressed in millimeters) for the resolution
affine_st_full_res[:ndim, ndim] = affine_st_full_res[:ndim, ndim] / ORIG_RESOLUTION
affine_st_low_res[:ndim, ndim] = affine_st_low_res[:ndim, ndim] / TARGET_RESOLUTION
# Apply transforms on images of different resolutions
orig_tf_img = affine_transform(orig_img, affine_st_full_res, prefilter=False, order=1)
downsample_tf_img = affine_transform(downsample_img, affine_st_low_res, prefilter=False, order=1)
# Downsample result at full resolution to be compared to result on downsampled image
downsample_orig_tf_img = zoom(orig_tf_img,
                              zoom=np.array(ORIG_RESOLUTION)/np.array(TARGET_RESOLUTION),
                              prefilter=False, order=1)
# print(orig_img.shape)
# print(downsample_img.shape)
# print(orig_tf_img.shape)
# print(downsample_orig_tf_img.shape)

###############################
######## VISUALISATION ########
###############################
# We'll visualize in 2D the slice at the middle of the z (third) axis of the image, at both resolutions
mid_z_slice_full, mid_z_slice_low = orig_img.shape[2]//2, downsample_img.shape[2]//2
fig, ((ax1, ax2, ax3), (ax4, ax5, ax6)) = plt.subplots(nrows=2, ncols=3)
ax1.imshow(orig_img[:, :, mid_z_slice_full], cmap='gray')
ax1.axis('off')
ax1.set_title('1/ Origin image, at full res: {}'.format(ORIG_RESOLUTION))
ax2.imshow(downsample_img[:, :, mid_z_slice_low], cmap='gray')
ax2.axis('off')
ax2.set_title('2/ Downsampled image, at low res: {}'.format(TARGET_RESOLUTION))
ax3.imshow(downsample_tf_img[:, :, mid_z_slice_low], cmap='gray')
ax3.axis('off')
ax3.set_title('3/ Transformed downsampled image')
ax4.imshow(orig_tf_img[:, :, mid_z_slice_full], cmap='gray')
ax4.axis('off')
ax4.set_title('4/ Transformed original image')
ax5.imshow(downsample_tf_img[:, :, mid_z_slice_low], cmap='gray')
ax5.axis('off')
ax5.set_title('5/ Downsampled transformed image')
error = ax6.imshow(np.abs(downsample_tf_img[:, :, mid_z_slice_low]
                          - downsample_orig_tf_img[:, :, mid_z_slice_low]), cmap='hot')
ax6.axis('off')
ax6.set_title('Error map between 3/ and 5/')
fig.colorbar(error)
plt.suptitle('Result for {} applied on {} and {} resolution'.format(name_affine, ORIG_RESOLUTION, TARGET_RESOLUTION))
plt.tight_layout()
plt.show()
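For what it's worth, with anisotropic voxels it is usually not enough to rescale only the translation: the whole millimeter-space affine has to be conjugated by the voxel-size scaling, A_vox = S^-1 A S with S = diag(resolution, 1), so that the rotation part also picks up resolution factors. A minimal sketch of that idea (the names are mine, not from the code above):

```python
import numpy as np

def mm_affine_to_voxel(affine_mm, resolution):
    """Re-express a 4x4 millimeter-space affine in the voxel coordinates of an
    image with the given per-axis voxel size: A_vox = S^-1 @ A_mm @ S."""
    s = np.diag(list(resolution) + [1.0])
    return np.linalg.inv(s) @ affine_mm @ s

# 90-degree rotation about z plus a translation, in millimeters
a_mm = np.array([[0., -1., 0., 5.],
                 [1.,  0., 0., 0.],
                 [0.,  0., 1., 0.],
                 [0.,  0., 0., 1.]])
a_vox = mm_affine_to_voxel(a_mm, [1.0, 0.5, 1.0])

# Voxel (2, 4, 0) in a [1, 0.5, 1] mm image sits at (2, 2, 0) mm; rotating in
# mm gives (-2 + 5, 2, 0) = (3, 2, 0) mm, i.e. voxel (3, 4, 0).
p_vox = np.array([2., 4., 0., 1.])
```

Dividing only the translation column by the resolution leaves the rotation block in millimeter units, which would explain why pure translations and scalings work but rotations break on anisotropic images.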

How to transform a unique identifier's positions over time from multiple rows to a single row?
I need help with an unusual query; I couldn't find any solutions yet, but that could be because I didn't know how to word what I am looking for. Below is a sample dataset:
name  position              start_date  end_date
ABC   Contractor            09/02/2017  07/01/2018
ABC   Associate Consultant  08/01/2018  31/12/2018
ABC   Consultant            01/01/2019  31/05/2019
Essentially, ABC is a person who has held different positions over time. I want to transform this dataset so as to place ABC in a single row and track their positions over time. Attached is an image that displays the solution I am looking for.
I'd appreciate any help here!
Best, Rohit
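A sketch of one approach in Python/pandas (in SQL the equivalent would be `row_number()` over start_date plus a pivot/crosstab): number each person's positions in start-date order, then pivot so each numbered position becomes a column.

```python
import pandas as pd

df = pd.DataFrame({
    "name": ["ABC", "ABC", "ABC"],
    "position": ["Contractor", "Associate Consultant", "Consultant"],
    "start_date": ["09/02/2017", "08/01/2018", "01/01/2019"],
    "end_date": ["07/01/2018", "31/12/2018", "31/05/2019"],
})

# Number each person's positions in chronological order, then pivot wide
df["start"] = pd.to_datetime(df["start_date"], dayfirst=True)
df["n"] = df.sort_values("start").groupby("name").cumcount() + 1
wide = df.pivot(index="name", columns="n", values="position")
wide.columns = [f"position_{i}" for i in wide.columns]
```

The same numbering trick also lets the start/end dates be pivoted into `start_1`, `end_1`, and so on, if those columns are wanted in the single row too.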

Solve a system of linear equations in R for non-numeric arguments
I have a 3x3 orthogonal matrix U such that UU' = I. For this matrix to be uniquely identified I need to impose 3 restrictions, which means in the end solving a system of 9 equations in 9 unknowns.
For now, I just want to solve UU' = I and get 6 equations. The problem is that the matrix U is not numeric, and I don't know how to solve a non-numeric system in R.
This is the code, so you can see what I would like to do:
U <- matrix(c("u11","u21","u31","u12","u22","u23","u13","u32","u33"), 3, 3)
     [,1]  [,2]  [,3]
[1,] "u11" "u12" "u13"
[2,] "u21" "u22" "u32"
[3,] "u31" "u23" "u33"

identity <- matrix(c(1, 0, 0, 0, 1, 0, 0, 0, 1), 3, 3)  # the identity matrix
     [,1] [,2] [,3]
[1,]    1    0    0
[2,]    0    1    0
[3,]    0    0    1

# I would like to solve: U %*% t(U) = identity
# which would look like:
# | u11 u12 u13 |   | u11 u21 u31 |   | 1 0 0 |
# | u21 u22 u32 | * | u12 u22 u23 | = | 0 1 0 |
# | u31 u23 u33 |   | u13 u32 u33 |   | 0 0 1 |
# to get the 6 equations I need. solve() is just for numeric vectors/matrices, so I don't know what to do.
Can anyone help me with this?
Thanks a lot!
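Outside base R, a computer-algebra system can generate these equations symbolically (in R itself, packages such as Ryacas play a similar role). A sketch with Python's sympy, assuming switching tools is acceptable; since UU' is symmetric, the upper triangle of the residual gives exactly the 6 distinct equations:

```python
import sympy as sp

# 3x3 symbolic matrix with entries u11 .. u33
u = sp.Matrix(3, 3, lambda i, j: sp.Symbol(f"u{i+1}{j+1}"))

# U U' - I = 0; the product is symmetric, so the upper triangle
# holds the 6 distinct equations
residual = u * u.T - sp.eye(3)
equations = [sp.Eq(residual[i, j], 0) for i in range(3) for j in range(i, 3)]
```

From here `sp.solve(equations, ...)` can be used once the 3 extra identification restrictions are appended to the list.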

Eigen: does matrix.transpose() create a copy of the matrix?
I would like to do the following matrix product using Eigen:
Eigen::VectorXd vector = Eigen::VectorXd::Random(1000000); // a given long vector
Eigen::MatrixXd product = vector * vector.transpose();
I'm not sure whether Eigen will create a copy of vector when calling
vector.transpose()
or just a view. I experimented by creating a vector and its transpose, then modifying a value of the original vector:
Eigen::VectorXd vector(3);
vector << 1, 2, 3;
Eigen::VectorXd vectorTranspose = vector.transpose();
vector(0) = 10;
std::cout << vector << "\n";                      // shows col vector [10, 2, 3]
std::cout << vectorTranspose << "\n";             // still shows col vector [1, 2, 3]
std::cout << vector * vectorTranspose << "\n";    // this gives the error "invalid matrix product"
std::cout << vector * vector.transpose() << "\n"; // this gives the correct behavior
So my questions are:
- For a column vector with shape n by 1, why does transpose still give a column vector instead of a row vector?
- Does calling vector * vector.transpose() waste work by creating vector.transpose(), or does Eigen do something clever about it?

Why is my vector rotation function changing the vector's magnitude?
I'm looking to make a simple function that rotates a vector's point b around point a for a given number of degrees.
What's odd is that my code seems to work somewhat: the vector is rotating, but it's changing length pretty drastically.
If I stop erasing the screen every frame to see every frame at once, I see the lines producing a sort of octagon around my origin.
Even weirder, the origin isn't even in the center of the octagon; it's in the bottom left.
Here's my code:
struct Point { int x, y; };

struct Line {
    Point a, b;
    void rotate(double);
};

void Line::rotate(double t) {
    t *= 3.141592 / 180;
    double cs = cos(t);
    double sn = sin(t);
    double trans_x = (double)b.x - a.x;
    double trans_y = (double)b.y - a.y;
    double newx = trans_x * cs - trans_y * sn;
    double newy = trans_x * sn + trans_y * cs;
    newx += a.x;
    newy += a.y;
    b.x = (int)newx;
    b.y = (int)newy;
}
I'm using the olc::PixelGameEngine to render, which is why I'm storing coordinates as ints.
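The shrinking is consistent with the `(int)` casts: truncating to int after every incremental rotation discards a fraction of a pixel per frame, and the errors compound. A small sketch of the effect in Python (the numbers are hypothetical, not from the post):

```python
import math

def rotate_once(x, y, degrees, as_int):
    """One incremental rotation about the origin, optionally truncating
    to int afterwards the way a C-style cast does."""
    t = math.radians(degrees)
    nx = x * math.cos(t) - y * math.sin(t)
    ny = x * math.sin(t) + y * math.cos(t)
    return (int(nx), int(ny)) if as_int else (nx, ny)

def length_after(n_steps, as_int):
    x, y = 100.0, 0.0
    for _ in range(n_steps):
        x, y = rotate_once(x, y, 5.0, as_int)
    return math.hypot(x, y)
```

Keeping the coordinates in doubles (or re-rotating the original, never-truncated endpoint by the accumulated angle each frame) avoids the drift; truncation toward zero rather than rounding to nearest would also explain the off-center octagon pattern.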

Python pandas: filter out nonmatching pairs from two different dataframes
I have two data frames as follows:
1. dataframe_1

   id          joinKey
0  a000jz4hqo  [clickart, 950, 000]
1  a0006zf55o  [ca, international, arcserve, lap, desktop]
2  a00004tkvy  [noah, activity, centre, jewel, case]
3  a000g80lqo  [white, newest]
4  a0006se5bq  [singing, coach, carry, a, tune]

2. dataframe_2

   id          joinKey
0  b000jz4hqo  [clickart, 950, 000]
1  b0006zf55o  [ca, international, arcserve, lap, desktop]
2  b00004tkvy  [noah, s, ark, activity, centre, jewel, case]
3  b000g80lqo  [peachtree, jewel]
4  b0006se5bq  [singing, coach, unlimited, carry, a, tune]
I want to filter out non-matching pairs, so as to prepare a data frame combining the data of dataframe_1 and dataframe_2 for further analysis.
To avoid n*n comparisons, I am flattening the dataframe_1 and dataframe_2 using pandas.DataFrame.explode.
I get new data frames as follows:
new_df1:

   id          joinKey
0  a000jz4hqo  clickart
0  a000jz4hqo  950
0  a000jz4hqo  000
0  a0006zf55o  ca
0  a0006zf55o  international

new_df2:

   id          joinKey
0  b000jz4hqo  clickart
0  b000jz4hqo  950
0  b000jz4hqo  000
0  b0006zf55o  ca
0  b0006zf55o  international
As I understand it, the SQL query to filter the data would be as follows:
SELECT a.id, b.id
FROM `new_df1` a, `new_df2` b
WHERE a.joinKey = b.joinKey
  AND a.id < b.id;
My questions:
- What is the equivalent pandas syntax for the above SQL, so as to filter out the non-matching pairs?
- Is there any better approach to filter out non-matching pairs?
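A sketch of the pandas equivalent of that SQL, assuming the exploded frames from above: merge on the key, keep pairs with `id_a < id_b`, and drop duplicates so that matching on several tokens still yields one row per pair. The tiny frames below are cut-down stand-ins:

```python
import pandas as pd

new_df1 = pd.DataFrame({"id": ["a000jz4hqo"] * 3,
                        "joinKey": ["clickart", "950", "000"]})
new_df2 = pd.DataFrame({"id": ["b000jz4hqo"] * 3 + ["b000g80lqo"],
                        "joinKey": ["clickart", "950", "000", "peachtree"]})

# merge = SQL equi-join on joinKey; query = the a.id < b.id filter
pairs = (new_df1.merge(new_df2, on="joinKey", suffixes=("_a", "_b"))
                .query("id_a < id_b")[["id_a", "id_b"]]
                .drop_duplicates()
                .reset_index(drop=True))
```

Note that this keeps any pair sharing at least one token; if a minimum overlap (e.g. Jaccard similarity) is wanted, count the merged rows per pair with `groupby(["id_a", "id_b"]).size()` before thresholding.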

How do you write a rolling "Case When" statement in PostgreSQL?
Let's say I have a table called "means" that looks like this:
year  mean
1990  1.5
1991  1.0
1992  1.3
1993  1.0
And I have a second table called "values" that looks like this:
year  tag  value
1990  A    0.25
1991  B    1.10
1992  C    2.32
1993  A    0.70
I want to create another column where if the value for a given year is greater than the mean for a given year, the value of that column should be "Greater". If it's less than the mean for a given year, it should be "Less" and if it's equal to the mean, it should be "Equal".
Essentially, I want to create a series of Case When statements that are indexed to the year given in the table.
How would I go about doing that?
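One common pattern is to JOIN the two tables on year and put the comparison in a single CASE expression, so the mean being compared against is always the one from the matching year. A sketch, run through SQLite from Python here only to keep the example self-contained; the same query works in PostgreSQL (table and column names as in the example, with "values" quoted because it is a reserved word):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE means (year INT, mean REAL);
CREATE TABLE "values" (year INT, tag TEXT, value REAL);
INSERT INTO means VALUES (1990, 1.5), (1991, 1.0), (1992, 1.3), (1993, 1.0);
INSERT INTO "values" VALUES (1990, 'A', 0.25), (1991, 'B', 1.10),
                            (1992, 'C', 2.32), (1993, 'A', 0.70);
""")
rows = con.execute("""
SELECT v.year, v.tag, v.value,
       CASE WHEN v.value > m.mean THEN 'Greater'
            WHEN v.value < m.mean THEN 'Less'
            ELSE 'Equal' END AS comparison
FROM "values" v
JOIN means m ON m.year = v.year
ORDER BY v.year
""").fetchall()
```

The join replaces the "rolling" series of per-year CASE statements: one CASE suffices because `m.mean` already refers to the right year's mean on every row.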

Multilabel classification using fasttext, with label probabilities not necessarily summing to one
I followed the multi-label classification documentation from fasttext to apply it to my free-text dataset, which looks like this after processing/labelling:
__label__nothing nothing
__label__choice __label__goodprices Inexpensive and large selection
__label__choice The wide range of products to choose from
__label__fastdelivery __label__choice great choice and fast delivery
__label__badprices sometimes also expensive
__label__choice The wide range of products
__label__nothing there is nothing especially
...
I set up a notebook instance on AWS SageMaker and trained the model. For simplicity, let's say with 5 labels (choice, fastdelivery, goodprices, badprices, nothing). The problem is that when I predict some text, setting k to -1 to get all of the labels, the label probabilities always sum to 100%. For example:
wide range of products as well as fast delivery
I expect something like:
choice (95%)
fastdelivery (95%)
goodprices (10%)
badprices (5%)
nothing (10%)
and then I could set the threshold to greater than 50%, so only 2 labels match (choice and fastdelivery).
instead I got something like:
choice (40%)
fastdelivery (40%)
goodprices (5%)
badprices (5%)
nothing (10%)
which means that if a text really matches all 5 labels strongly, it will return 20% for each, and all of them will be dismissed by the threshold.
N.B.: in the documentation's example they got the output as expected, but following the docs it doesn't work like that for me.
The question is: how could I achieve the expected output, within fasttext or even with some other tool? Are there some parameters to change/add?
Thanks in advance!
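For what it's worth, the sum-to-100% behaviour is what a softmax output produces: the labels compete for the same probability mass. Independent per-label probabilities come from one-vs-all training with a sigmoid per label, which, if I remember the fastText API correctly, corresponds to the `loss='ova'` option of `train_supervised` (still predicting with k set to -1). The numerical difference between the two output types, sketched with NumPy on hypothetical scores:

```python
import numpy as np

scores = np.array([2.0, 2.0, -1.0, -2.0, -1.0])  # hypothetical raw scores, 5 labels

softmax = np.exp(scores) / np.exp(scores).sum()  # competing labels: sums to 1
sigmoid = 1.0 / (1.0 + np.exp(-scores))          # independent labels: each in (0, 1)
```

With the softmax, two equally strong labels cap out below 0.5 each; with independent sigmoids both can be near 1, so a fixed 50% threshold behaves as expected.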

How to rescale new data based on an old MinMaxScaler?
Now I'm stuck on the problem of scaling new data. In my scheme, I have trained and tested the model, with all of x_train and x_test scaled using sklearn's MinMaxScaler(). Then, applying this to the real-time process, how can I scale the new input to the same scale as the training and testing data? The steps are as below:
featuresData = df[features].values  # Array of all features, with a length of thousands
sc = MinMaxScaler(feature_range=(-1, 1), copy=False)
featuresData = sc.fit_transform(featuresData)

# Running model to make the final model
model.fit(X, Y)
model.predict(X_test)
# Saving to abcxyz.h5
Then implementing with new data
# load the model abcxyz.h5
# catching new data
# Scaling new data to put into the loaded model  << I'm stuck at this step
# ...
So how do I scale the new data for prediction, and then inverse-transform the final result? From my logic, it needs to be scaled in the same manner as by the old scaler from before training the model.
Please help!
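A sketch of the usual scikit-learn pattern: fit the scaler once on the training data, persist it next to the model (e.g. with joblib), and call transform(), not fit_transform(), on new inputs; inverse_transform() maps scaled predictions back. The data and file names below are placeholders:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

train = np.array([[0.0], [50.0], [100.0]])  # stand-in for the training features
sc = MinMaxScaler(feature_range=(-1, 1))
train_scaled = sc.fit_transform(train)      # fit ONCE, on training data only

# ... later, in the real-time process: reuse the SAME fitted scaler
# (persist it with e.g. joblib.dump(sc, "scaler.pkl") next to the model)
new_data = np.array([[25.0], [75.0]])
new_scaled = sc.transform(new_data)         # transform only; do NOT refit

# predictions that live in the scaled space go back through inverse_transform
restored = sc.inverse_transform(new_scaled)
```

Calling fit_transform on the new data would learn a fresh min/max from it and silently put the inputs on a different scale than the model was trained on.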

PanAndZoom: Image disappears on panning after rotation
I'm using the wieslawsoltes/PanAndZoom library to enable panning and zooming of some images in my program. It works very well while the image is in its original position, but whenever I rotate and then pan, the image disappears from my viewer.
I don't know if it's a library issue, so I suspect there are some problems in my code.
XAML:
<paz:ZoomBorder Name="zoomBorder" Stretch="None" ZoomSpeed="1.2"
                Background="Gray" ClipToBounds="True" Focusable="True"
                VerticalAlignment="Stretch" HorizontalAlignment="Stretch"
                Grid.Row="4" Grid.Column="1">
    <Image Source="{Binding SelectedImageToDisplay, IsAsync=True}" Stretch="UniformToFill">
        <Image.LayoutTransform>
            <TransformGroup>
                <ScaleTransform ScaleX="{Binding ScaleX, UpdateSourceTrigger=PropertyChanged, Mode=TwoWay}"
                                ScaleY="{Binding ScaleY, UpdateSourceTrigger=PropertyChanged, Mode=TwoWay}"/>
                <RotateTransform Angle="{Binding RotationAngle, UpdateSourceTrigger=PropertyChanged, Mode=TwoWay}"/>
            </TransformGroup>
        </Image.LayoutTransform>
    </Image>
</paz:ZoomBorder>
Rotation:
private async Task RotationEvent(string direction)
{
    if (SelectedImageToDisplay == null) return;
    switch (direction)
    {
        case "LEFT":
            await Task.Run(() => { RotationAngle = RotationAngle - 90; });
            break;
        case "RIGHT":
            await Task.Run(() => { RotationAngle = RotationAngle + 90; });
            break;
        case "FLIP":
            await Task.Run(() =>
            {
                if (ScaleX == -1) { ScaleX = 1; } else { ScaleX = -1; }
            });
            break;
        default:
            break;
    }
}
Image Displaying:
SelectedImageToDisplay = File.ReadAllBytes(SelectedImageItem.IMG_FULLPATH);
ScaleX = 1;
ScaleY = 1;
Scale X and Y default values are both 1.
I have another workaround, but it sucks to create another file just to display its rotated version, and some of my program's users are on slow PCs.
I hope you can help me! Thanks.

Least squares for uniform scale and translation
I have two meshes that I want to align; I'll call the reference mesh the template mesh and the other the target mesh. I have a 1:1 point-to-point correspondence between my template and target meshes. I am trying to find the uniform scale and translation that align these two meshes. This is what I've been doing:
My optimal transformation matrix is

M = [M11 0   0   M14;
     0   M11 0   M24;
     0   0   M11 M34;
     0   0   0   1  ]

where M11 is my scale and M14, M24, M34 are the translations in x, y and z. I represent points from the template mesh in homogeneous coordinates, A = [xt, yt, zt, 1],
and the corresponding points on the target mesh as B = [xp, yp, zp, 1].
I know that :
xp = M11*xt + M14
yp = M11*yt + M24
zp = M11*zt + M34
This is how I form and solve my least squares problem:
findMin((X*T - B)^2)
Where T, X and B are :
B = X * T

[xp]   [xt 1 0 0]   [M11]
[yp] = [yt 0 1 0] * [M14]
[zp]   [zt 0 0 1]   [M24]
[1 ]   [1  1 1 1]   [M34]
This is how I find the least squares fit for T :
(Inverse(X'*X))*X'*B
I am not sure if this is the right way to go about this, particularly:
- Is it acceptable to add the row of ones as the last row of matrix X in my least squares problem? It has no physical meaning; I just added it to make the dimensions match and to ensure that the matrix X is invertible.
- With this I get negative scale values. Is there a way to add constraints to the least squares problem to ensure that I get a scale that is always greater than 0? Is this even the right approach to find a scale value?
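For comparison, a common formulation stacks the three coordinate equations of every correspondence into one tall system, with no row of ones: the normal-equations matrix X'X is already invertible once the points are non-degenerate. A sketch with NumPy on synthetic data (the names and data are mine):

```python
import numpy as np

rng = np.random.default_rng(0)
template = rng.normal(size=(10, 3))             # template mesh points
true_s, true_t = 2.0, np.array([1.0, -2.0, 3.0])
target = true_s * template + true_t             # corresponding target points

# One row per coordinate equation: [coord, is_x, is_y, is_z] @ [s, tx, ty, tz]
rows, rhs = [], []
for p, q in zip(template, target):
    for axis in range(3):
        row = np.zeros(4)
        row[0] = p[axis]
        row[1 + axis] = 1.0
        rows.append(row)
        rhs.append(q[axis])
X, b = np.array(rows), np.array(rhs)

params, *_ = np.linalg.lstsq(X, b, rcond=None)
s, t = params[0], params[1:]
```

On consistent correspondences the recovered scale is positive by construction; if real data still yields a negative s, that usually signals flipped correspondences, and a hard s > 0 constraint can be imposed by switching to a bounded solver such as scipy.optimize.lsq_linear with bounds on the first parameter.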