How to prevent auto rounding in arithmetic operations in Python
Is there any way to prevent auto rounding in Python while performing arithmetic operations? In the following code segment, n = 3; t is initially 0; x is initially 0; M is a 3-by-1 array; L = [[1, 0, 0], [2.56, 1, 0], [5.76, 3.5, 1]]; B = [[106.8], [177.2], [279.2]].
def forwardSubs(M, n, t, L, B, x):
    if t == 0:
        y = B[0][0]
        x = y
    else:
        y = B[t][0] - L[t][0]*x
        for i in range(1, t):
            y = y - M[i][0]*L[t][i]
    M[t][0] = y
    if t + 1 == n:
        return
    forwardSubs(M, n, t+1, L, B, x)
The expected result is M = [[106.8], [-96.208], [0.76]], but the program shows M = [[106], [-96], [0]].
See also questions close to this topic

min function doesn't work properly or at all with the character's health values
I was trying to implement a new item into my game that strikes thunder down and hurts the player beneath it. I thought that using the min function would do the trick, as there was a similar item that does the opposite and recovers health:
char.HP = min(char.HP + 4, char.MAXHP)
Here's the part of the code that has been problematic:
def Thunder_Scroll(self, chars):
    for char in chars:
        if [char.x, char.y] in positions:
            char.HP = min(char.HP, char.HP - 8)
I expect the health of the characters in those selected boxes to be drained.
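For what it's worth, min(char.HP, char.HP - 8) always evaluates to char.HP - 8, so the min there is redundant; the mirror image of the healing pattern clamps damage at zero with max. A minimal sketch with a stand-in class (the Char class and its starting values are hypothetical, mirroring the attribute names in the question):

```python
# Hypothetical stand-in class; HP/MAXHP mirror the names in the question.
class Char:
    def __init__(self, hp, maxhp):
        self.HP = hp
        self.MAXHP = maxhp

c = Char(hp=5, maxhp=20)

# Healing clamps at the top with min:
c.HP = min(c.HP + 4, c.MAXHP)   # 5 + 4 = 9, below MAXHP

# Damage is the mirror image: clamp at the bottom with max.
c.HP = max(c.HP - 8, 0)         # 9 - 8 = 1, never negative
```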

Does ORTools allow you to force a final destination before returning to a depot?
I implemented the Capacitated Vehicle Routing Problem With Time Windows (CVRPTW) by following along with the examples provided by OR-Tools here: https://developers.google.com/optimization/routing/vrptw. I am looking at a situation where two nodes are landfills that vehicles need to visit before returning to the origin depot. Once a vehicle has collected the maximum load it can, it goes to the landfill closest to it and the total load collected by the vehicle is reset to zero (to show that it dumped the garbage); then it returns to the origin depot. Does OR-Tools allow this? Any suggestions on how to add this constraint would be great.

Easiest way to export dictionary (or current output text file) to .csv file instead of a .txt file?
I created a parser and extractor for a log file and wanted to see an example of a quick way to either:
1. take the current output written to a .txt file and convert it into a new .csv file (possibly with pandas), or
2. use the csv module to change the write sequence into a csv.writer and then use csv.DictReader.
What is most efficient in terms of practicality and resource consumption? My current exported .txt file and relevant code are posted below.
Exported data:
Request ID : bf710010
Username : kadaniel
ECID : 6ca4862b14d14a7f81585e6cac363144001477ac
Start Time : 2019-06-12T09:14:54.947
End Time : 2019-06-12T09:14:55.22

Request ID : bf710020
Username : kadaniel
ECID : 6ca4862b14d14a7f81585e6cac363144001477ac
Start Time : 2019-06-12T09:14:55.343
End Time : 2019-06-12T09:14:55.514
Code:
process_records = {}
with open(log_file_path, "r") as file:
    for line in file:
        m = pattern.match(line)
        if m is not None:  # If there is a match with pattern
            (timestamp, ecid, requestid, username) = m.groups()
            if requestid not in process_records:
                process_records[requestid] = (timestamp, username, ecid, None)
            else:
                process_records[requestid] = process_records[requestid][:3] + (timestamp,)

for requestid, (start, username, ecid, end) in process_records.items():
    print("Request ID: {}\nUsername: {}\nECID: {}\nStart Time: {}\nEnd Time: {}\n\n".format(
        requestid, username, ecid, start, end))

with open(export_file, 'w+') as file:
    file.write("EXPORTED DATA:\n\n")
    if pattern is not None:
        for requestid, (start, username, ecid, end) in process_records.items():
            file.write("Request ID : {}\nUsername : {}\nECID : {}\nStart Time : {}\nEnd Time : {}\n\n".format(
                requestid, username, ecid, start, end))
I currently have the data in a dictionary, process_records. Each key (requestid) is associated with 4 elements in a tuple. I want the key and each element thereafter to represent its own column.

Row-wise outer product on sparse matrices
Given two sparse scipy matrices A, B, I want to compute the row-wise outer product. I can do this with numpy in a number of ways, the easiest perhaps being
np.einsum('ij,ik->ijk', A, B).reshape(n, -1)
or
(A[:, :, np.newaxis] * B[:, np.newaxis, :]).reshape(n, -1)
where n is the number of rows in A and B. In my case, however, going through dense matrices eats up way too much RAM. The only option I have found is thus to use a Python loop:
sp.sparse.vstack((ra.T@rb).reshape(1, -1) for ra, rb in zip(A, B)).tocsr()
While using less RAM, this is very slow.
My question is thus, is there a sparse (RAM efficient) way to take the rowwise outer product of two matrices, which keeps things vectorized?
(A similar question is numpy elementwise outer product with sparse matrices but all answers there go through dense matrices.)
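One vectorized, still-sparse approach (a sketch, not claimed optimal): the flattened row-wise outer product can be written as an element-wise product of two Kronecker expansions, since kron(A, ones((1, m)))[i, j*m + k] == A[i, j] and kron(ones((1, p)), B)[i, j*m + k] == B[i, k]:

```python
import numpy as np
import scipy.sparse as sp

def rowwise_outer(A, B):
    # Row-wise outer product of sparse A (n x p) and B (n x m),
    # flattened to shape (n, p*m), without going through dense arrays.
    p = A.shape[1]
    m = B.shape[1]
    left = sp.kron(A, np.ones((1, m)))    # repeats each column of A m times
    right = sp.kron(np.ones((1, p)), B)   # tiles B's columns p times
    return left.multiply(right).tocsr()   # element-wise product stays sparse

A = sp.random(4, 3, density=0.5, format="csr", random_state=0)
B = sp.random(4, 2, density=0.5, format="csr", random_state=1)
dense = np.einsum('ij,ik->ijk', A.toarray(), B.toarray()).reshape(4, -1)
assert np.allclose(rowwise_outer(A, B).toarray(), dense)
```

The two kron products are themselves sparse, so peak memory scales with the number of nonzeros rather than with n*p*m.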

Is there a way to speed up Numpy array calculations when they only contain values in upper/lower triangle?
I'm doing some matrix calculations (2d) that only involve values in the upper triangle of the matrices.
So far I've found that using Numpy's triu method ("return a copy of a matrix with the elements below the k-th diagonal zeroed") works and is quite fast. But presumably the calculations are still being carried out for the whole matrix, including unnecessary calculations on the zeros. Or are they? Here is an example of what I tried first:
# Initialize vars
N = 160
u = np.empty(N)
u[0] = 1000
u[1:] = np.cumprod(np.full(N - 1, 1/2**(1/16)))*1000
m = np.random.random(N)

def method1():
    # Prepare matrices with values only in upper triangle
    ones_ut = np.triu(np.ones((N, N)))
    u_ut = np.triu(np.broadcast_to(u, (N, N)))
    m_ut = np.triu(np.broadcast_to(m, (N, N)))
    # Do calculation
    return (ones_ut - np.divide(u_ut, u.reshape(N, 1)))**3*m_ut
Then I realized I only need to zero out the final result matrix:
def method2():
    return np.triu((np.ones((N, N)) - np.divide(u, u.reshape(N, 1)))**3*m)

assert np.array_equal(method1(), method2())
But to my surprise, this was slower.
In [62]: %timeit method1()
662 µs ± 3.65 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
In [63]: %timeit method2()
836 µs ± 3.74 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
Does numpy do some kind of special optimization when it knows the matrices contain half zeros?
I'm curious about why it is slower, but my main question is: is there a way to speed up vectorized calculations by taking into account the fact that you are not interested in half the values in the matrix?
UPDATE
I tried just doing the calculations over 3 of the quadrants of the matrices but it didn't achieve any speed increase over method 1:
def method4():
    split = N//2
    x = np.zeros((N, N))
    u_mat = 1 - u/u.reshape(N, 1)
    x[:split, :] = u_mat[:split, :]**3*m
    x[split:, split:] = u_mat[split:, split:]**3*m[split:]
    return np.triu(x)

assert np.array_equal(method1(), method4())
In [86]: %timeit method4()
683 µs ± 1.99 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
But this is faster than method 2.
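As far as I know, NumPy ufuncs have no sparsity optimization: they process every element, zeros included, so both methods pay for the full N×N computation. One way to actually skip the lower triangle is fancy indexing with np.triu_indices, though for small N the indexing overhead may outweigh the savings (a sketch reusing the question's variables):

```python
import numpy as np

N = 160
u = np.empty(N)
u[0] = 1000
u[1:] = np.cumprod(np.full(N - 1, 1/2**(1/16)))*1000
m = np.random.random(N)

# Sketch: compute only the upper-triangle entries via fancy indexing.
# Entry [i, j] of method2's result is (1 - u[j]/u[i])**3 * m[j].
rows, cols = np.triu_indices(N)
x = np.zeros((N, N))
x[rows, cols] = (1 - u[cols]/u[rows])**3 * m[cols]

full = np.triu((np.ones((N, N)) - np.divide(u, u.reshape(N, 1)))**3*m)
assert np.allclose(x, full)
```

This does roughly half the arithmetic, but the gather/scatter indexing has its own cost, so it is worth timing against the dense version for the actual N in use.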

Translate MATLAB matrix concatenation to Python
How can I translate the following MATLAB code for matrix concatenation to Python?
nr_a = 10;
nc_a = 23;
nr_b = 500;
a = zeros(nr_a, nc_a);
b = zeros(nr_b, nc_a - 1);
c = zeros(nr_b, 1);
d = [a; b c];
In Python, d.shape should equal (nr_a + nr_b, nc_a). My incorrect Python attempt is
d = np.block([a, [b, c]])
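MATLAB's [a; b c] stacks a on top of the horizontal concatenation [b c]; in np.block each inner list is one block-row, so the nesting would be (a sketch):

```python
import numpy as np

nr_a, nc_a, nr_b = 10, 23, 500
a = np.zeros((nr_a, nc_a))
b = np.zeros((nr_b, nc_a - 1))
c = np.zeros((nr_b, 1))

# Each inner list of np.block is one block-row: [a] on top, [b, c] below.
d = np.block([[a], [b, c]])
assert d.shape == (nr_a + nr_b, nc_a)
```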

JavaScript Numerics: multiply first then divide?
I just found an arithmetic bug: very simplified, when multiplying two numbers and then dividing, the order of operations gives different values!
In this case, multiplying first gives the correct answer:
(1455/1279) * 1279
1455.0000000000002
(1455*1279) / 1279
1455
I can see that the larger numerator might be advantageous, but it too could have problems, such as overflow.
So the question is: why does this occur, and what are the best practices for JavaScript numeric operations?
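JavaScript numbers are IEEE-754 doubles, the same as Python floats, so the effect reproduces in Python: 1455/1279 has no exact binary representation and is rounded before the multiply, while 1455*1279 = 1860945 is exact (well below 2**53), so the later division lands exactly on 1455. A sketch:

```python
import math

# 1455/1279 is rounded to the nearest double first; that tiny error
# survives (and is scaled by) the multiplication.
a = (1455/1279) * 1279

# 1455*1279 == 1860945 is exactly representable, and so is the quotient
# 1455, so the division rounds to exactly 1455.0.
b = (1455*1279) / 1279

# In practice, compare floats with a tolerance rather than ==:
close = math.isclose(a, 1455)
```

Hence the usual best practices: keep intermediate values as exact integers where possible (integer products stay exact up to 2**53 in doubles) and compare results with a tolerance rather than strict equality.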

How to square a number in C# using the Console.ReadLine function?
I am making a simple calculator in C# and need to square a number. How can I do it?
I tried making the number constant so the user need not enter it twice, but it shows an error that the expression assigned to it must be constant.
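The usual fix is to parse the input once into an ordinary variable and reuse it, rather than making it const (a C# const must be a compile-time constant, which run-time input can never be). A Python sketch of the same idea (the square helper is hypothetical; in C# the equivalent would parse Console.ReadLine() into a double once and multiply it by itself):

```python
def square(text):
    # Parse the user's input once, store it, and reuse the stored value
    # instead of asking the user to type it twice.
    x = float(text)
    return x * x

result = square("7")   # the string stands in for one line of user input
```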