Is there a cleaner way to find p*q = n, when p and q are coprime?
Basically I'm trying to write a simple RSA brute force. I have the modulus N and I'm trying to find p and q such that p*q = n, where p and q are coprime.
So far I'm able to find the prime factors of N and put them in a list, then take that list apart into variables and go through each pair looking for coprimes. It's messy, and I was wondering if there is a cleaner way of doing it?
My code so far:
    from fractions import gcd
    import sys

    n = 17
    print "Modulus (n) = ", n
    print "p & q are:"

    # while loop to find the prime factors of n
    i = 1
    factors = []          # renamed from "list" so the builtin isn't shadowed
    while i <= n:
        k = 0
        if n % i == 0:
            j = 1
            while j <= i:
                if i % j == 0:
                    k = k + 1
                j = j + 1
            if k == 2:    # exactly two divisors, so i is prime
                factors.append(i)
        i = i + 1

    # takes the list of prime factors and splits it into variables
    for idx, val in enumerate(factors):   # "idx" so the modulus n isn't clobbered
        globals()["var%d" % idx] = val

    # really messy: take the gcd of the first two variables; if coprime, print them;
    # if no coprime factors, move on to the next two variables;
    # if no prime factors in the first place, exit
    try:
        gg0 = gcd(var0, var1)
    except NameError:
        print "No Prime Factors"
        sys.exit()
    try:
        gg1 = gcd(var1, var2)
        if gg1 == 1:      # gcd of 1 means coprime ("== True" only worked because 1 == True)
            print var1, var2
    except NameError:
        if gg0 == 1:
            print var0, var1
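For a cleaner approach: every factor pair of n can be found by trial division up to the square root of n, so the whole pipeline can collapse into one small function. A sketch in plain Python (note that for a prime modulus like 17 there is simply no such pair):

```python
def factor_pair(n):
    """Return (p, q) with p * q == n and 1 < p <= q, or None if n is prime.

    For an RSA modulus n = p * q with distinct primes p and q,
    the returned pair is automatically coprime.
    """
    p = 2
    while p * p <= n:
        if n % p == 0:
            return p, n // p
        p += 1
    return None  # n is prime (or 1): no nontrivial factor pair


print(factor_pair(15))  # (3, 5)
print(factor_pair(17))  # None: 17 is prime, like the n in the question
```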
See also questions close to this topic

Keras Lambda Layer Before Embedding: Use to Convert Text to Integers
I currently have a keras model which uses an Embedding layer. Something like this:

    input = tf.keras.layers.Input(shape=(20,), dtype='int32')
    x = tf.keras.layers.Embedding(input_dim=1000, output_dim=50, input_length=20,
                                  trainable=True, embeddings_initializer='glorot_uniform',
                                  mask_zero=False)(input)
This is great and works as expected. However, I want to be able to send text to my model, have it preprocess the text into integers, and continue normally.
Two issues:
1) The Keras docs say that Embedding layers can only be used as the first layer in a model: https://keras.io/layers/embeddings/
2) Even if I could add a Lambda layer before the Embedding, I'd need it to keep track of certain state (like a dictionary mapping specific words to integers). How might I go about this stateful preprocessing?

In short, I need to modify the underlying TensorFlow DAG, so when I save my model and upload it to ML Engine, it will be able to handle my sending it raw text.
Thanks!
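(For context, a common workaround is to keep the word-to-integer vocabulary outside the graph and apply it as a preprocessing step before the Embedding layer. The sketch below shows only that mapping step in plain Python; the scheme of reserving 0 for padding and 1 for out-of-vocabulary words is an illustrative assumption, not a Keras API.)

```python
def build_vocab(texts):
    """Assign each word a fixed integer id; 0 is reserved for padding, 1 for OOV."""
    vocab = {}
    for text in texts:
        for word in text.split():
            if word not in vocab:
                vocab[word] = len(vocab) + 2
    return vocab


def texts_to_sequences(texts, vocab, maxlen=20, oov_id=1):
    """Map texts to fixed-length integer sequences (pad with 0 / truncate to maxlen)."""
    return [([vocab.get(w, oov_id) for w in t.split()] + [0] * maxlen)[:maxlen]
            for t in texts]


vocab = build_vocab(["the cat sat", "the dog ran"])
print(texts_to_sequences(["the cat ran fast"], vocab, maxlen=5))  # [[2, 3, 6, 1, 0]]
```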

PySpark: Sort or OrderBy DataFrame Column *Numerically* not working correctly
I have some fitness data that I'm trying to sort numerically; however, the results are not turning out the way the manual and other examples (e.g. Spark DataFrame groupBy and sort in the descending order (pyspark)) show:

    display(df.sort(col("Calories Burned").desc()))          # fails to sort correctly: shows 876, then 4756
    display(df.orderBy("Calories Burned", ascending=False))  # fails to sort correctly: shows 876, then 4756
    display(df.sort(desc("Calories Burned")))

All these examples display the following two columns of data (there are more columns, but I'm abbreviating for space):

    Date      Calories Burned
    20181018  876
    20180526  4756
    20180505  4440
As you can see, these are not sorting numerically. Spark isn't taking the number of digits into account, so 876 appears before 4756.
Whether or not I include the col function makes no difference, either. How can this dataset be sorted numerically so the data looks more like this?

    Date      Calories Burned
    20180526  4756
    20180505  4440
    20181018  876
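(876 sorting above 4756 is the classic symptom of a string-typed column being compared lexicographically, character by character. The plain-Python snippet below reproduces the effect; in PySpark the analogous fix would presumably be casting the column to a numeric type before sorting, e.g. df.orderBy(col("Calories Burned").cast("int").desc()), assuming the standard Column.cast API.)

```python
values = ["876", "4756", "4440"]

# Lexicographic (string) sort: "8" > "4", so 876 comes first
print(sorted(values, reverse=True))           # ['876', '4756', '4440']

# Numeric sort: compare as integers instead
print(sorted(values, key=int, reverse=True))  # ['4756', '4440', '876']
```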

Turtle is not reacting to onkeypress [SOLVED]
So, I am new to Python, so I took some time and watched some videos about how to make a simple "snake" game. I was doing everything the video said, but when it came to the keyboard binding something went wrong and I can't move my turtle.
code:
https://pastebin.com/GLSRNKLR

    import turtle
    import time

    delay = 0.1

    # Screen
    wn = turtle.Screen()
    wn.title("Snake Game By AniPita")
    wn.bgcolor('black')
    wn.setup(600, 600)
    wn.tracer(0)

    # Snake Head
    head = turtle.Turtle()
    head.speed(0)
    head.shape("square")
    head.color("white")
    head.penup()
    head.goto(0, 0)
    head.direction = "stop"

    # Functions
    def go_up():
        head.direction == "up"

    def go_down():
        head.direction == "down"

    def go_left():
        head.direction == "left"

    def go_right():
        head.direction == "right"

    def move():
        if head.direction == "up":
            y = head.ycor()
            head.sety(y + 10)
        if head.direction == "down":
            y = head.ycor()
            head.sety(y - 10)
        if head.direction == "left":
            x = head.xcor()
            head.setx(x - 10)
        if head.direction == "right":
            x = head.xcor()
            head.setx(x + 10)

    # Keyboard Bindings
    wn.onkeypress(go_up(), 'w')
    wn.onkeypress(go_down(), 's')
    wn.onkeypress(go_left(), 'a')
    wn.onkeypress(go_right(), 'd')
    wn.listen()

    # Main Game
    while True:
        wn.update()
        time.sleep(delay)
        move()

    wn.mainloop()
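(Two details in the posted code are worth a close look: head.direction == "up" inside the handlers is a comparison, not an assignment, so the direction never changes; and wn.onkeypress(go_up(), 'w') calls go_up immediately and registers its return value, None, instead of passing the function itself — it should be wn.onkeypress(go_up, 'w'). A minimal turtle-free illustration of the first point:)

```python
class Head:
    """Stand-in for the turtle head object."""
    pass


head = Head()
head.direction = "stop"

head.direction == "up"   # a comparison: evaluates to False and is discarded
print(head.direction)    # still "stop"

head.direction = "up"    # an assignment: actually changes the attribute
print(head.direction)    # now "up"
```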

Writing nested lists into single dataframe in pandas
I have a list of lists that looks like:

    new_file = [['Moscow', '1', '2', '90'], ['New York', '2', '3', '60'], ....etc ]

and I'm passing it through a function. My pseudocode looks like:

    def joinFrame(n):
        for listy in n:
            .
            .
            df = pd.DataFrame(r)
            df.to_csv('temp.csv')
When I call joinFrame(new_file) it only writes the last list in new_file. I know I have to iterate all of the lists into the dataframe, but I don't know how to edit the code to do so. I want each list within the list to be written into a single dataframe where each entry is a new column:

    Moscow    1  2  90
    New York  2  3  60
etc....
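(pandas can build a DataFrame from a list of lists in one call, one row per inner list, so the loop may not be needed at all; a minimal sketch, with made-up column names:)

```python
import pandas as pd

new_file = [['Moscow', '1', '2', '90'], ['New York', '2', '3', '60']]

# One row per inner list; the column names here are illustrative only
df = pd.DataFrame(new_file, columns=['city', 'a', 'b', 'c'])
df.to_csv('temp.csv', index=False)
print(df.shape)  # (2, 4)
```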

Sorting a list in C# (custom-defined sorting rules)
I have a list of strings called choosedGroupList consisting of 5 items. Each item represents a grouping.
For example: L1andL4 means that L1 will be grouped with L4.
Another example: L1,L4andL5,L6 means that the group L1,L4 will be grouped with the group L5,L6.

I am trying to sort this list to be like this:
L1andL4
L5andL6
L1,L4andL5,L6
L2andL1,L4,L5,L6
L3andL2,L1,L4,L5,L6

So I wrote this code to perform this task:

    // sorting choosedGroupList
    for (int k = 0; k < choosedGroupList.Count; k++)
    {
        for (int j = k + 1; j < choosedGroupList.Count; j++)
        {
            string[] parts = choosedGroupList[j].Split(new string[] { "and" }, StringSplitOptions.None);
            if (parts[0] == choosedGroupList[k].Replace("and", ",") || parts[1] == choosedGroupList[k].Replace("and", ","))
            {
                string[] parts2 = choosedGroupList[k + 1].Split(new string[] { "and" }, StringSplitOptions.None);
                //if (parts[0] != parts2[0] || parts[1] != parts2[1])
                //{
                    String Temp = choosedGroupList[k + 1];
                    choosedGroupList[k + 1] = choosedGroupList[j];
                    choosedGroupList[j] = Temp;
                //}
            }
        }
    }
I get no exceptions from the code, but I do not get the desired results.
After executing the code this is the result:
L1andL4
L1,L4andL5,L6
L2andL1,L4,L5,L6
L5andL6
L3andL2,L1,L4,L5,L6 
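(One observation that may simplify this: in the desired order, each entry contains at least as many L-labels as the one before it, so a single stable sort keyed on the label count reproduces it. A sketch of the idea in Python; the same key would translate directly to C#'s OrderBy:)

```python
choosedGroupList = [
    "L1andL4",
    "L1,L4andL5,L6",
    "L2andL1,L4,L5,L6",
    "L5andL6",
    "L3andL2,L1,L4,L5,L6",
]


def label_count(item):
    """Number of L-labels in an entry, counting both sides of the 'and'."""
    return len(item.replace("and", ",").split(","))


# sorted() is stable, so entries with equal counts keep their input order
ordered = sorted(choosedGroupList, key=label_count)
for item in ordered:
    print(item)
```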
Extracting lat/lon from Geocode result list
I have a data set dfnew with a column "Coordinates", which is a Geocode result list. I want to loop through the list, get the lat and lon, and attach them to a new column dfnew['lat'].
I wrote this code but I get this error: TypeError: 'float' object is not callable

    a = 0
    x = dfnew['Coordinates']
    for i in x():
        dfnew['lat'][a] = dfnew['Coordinates'][a][0]["geometry"]["location"]["lat"]
        print(dfnew['lat'])
        a = a + 1
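(The x() call is the immediate trigger of the error: a Series is iterated directly, not called. The per-row extraction itself can be done with .apply; a sketch below, where the nested geocoder-style structure is inferred from the question's indexing and may not match the real data exactly:)

```python
import pandas as pd

# Toy stand-in for dfnew: each cell holds a geocode-style list of result dicts
dfnew = pd.DataFrame({
    "Coordinates": [
        [{"geometry": {"location": {"lat": 55.75, "lng": 37.61}}}],
        [{"geometry": {"location": {"lat": 40.71, "lng": -74.00}}}],
    ]
})

# Extract lat/lng from the first result of each row, no explicit loop needed
dfnew["lat"] = dfnew["Coordinates"].apply(lambda c: c[0]["geometry"]["location"]["lat"])
dfnew["lng"] = dfnew["Coordinates"].apply(lambda c: c[0]["geometry"]["location"]["lng"])
print(dfnew["lat"].tolist())  # [55.75, 40.71]
```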

Encryption/Decryption using RSA algorithm
When I call the decryption method I see this message:
javax.crypto.BadPaddingException
The problem actually comes from cipher.doFinal when I decrypt.
I do not know where the problem is; please, if anyone can help, correct my code.

    public class RSA {
        public static String encryptData(byte[] data, String fileName) {
            byte[] byteEncryptedData = null;
            try {
                Cipher cipher = Cipher.getInstance("RSA");
                cipher.init(Cipher.ENCRYPT_MODE, (PublicKey) file.readPublicKeyFromFile(fileName, "public"));
                byteEncryptedData = cipher.doFinal(data);
                return Base64.getEncoder().encodeToString(byteEncryptedData);
            } catch (Exception e) {
                e.printStackTrace();
            }
            return null;
        }

        public static String decryptData(String data, String fileName) {
            byte[] byteData = Base64.getDecoder().decode(data);
            try {
                Cipher cipher = Cipher.getInstance("RSA");
                cipher.init(Cipher.DECRYPT_MODE, (PrivateKey) file.readPrivateKeyFromFile(fileName, "private"));
                byte[] byteData2 = cipher.doFinal(byteData);
                return new String(byteData2);
            } catch (Exception e) {
                e.printStackTrace();
            }
            return null;
        }
    }

    ///////////////////////////////

    public class encry_decry {
        private static PrivateKey privateKey = null;
        private static PublicKey publicKey = null;

        public static void main(String[] args) throws InvalidKeySpecException, NoSuchAlgorithmException {
            try {
                KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
                kpg.initialize(2048);
                KeyPair kp = kpg.genKeyPair();
                publicKey = kp.getPublic();
                privateKey = kp.getPrivate();
                KeyFactory kf = KeyFactory.getInstance("RSA");
                RSAPublicKeySpec rsaPublicKeySpec = kf.getKeySpec(publicKey, RSAPublicKeySpec.class);
                RSAPrivateKeySpec rsaprivateKeySpec = kf.getKeySpec(privateKey, RSAPrivateKeySpec.class);
                file.writeKeyToFile(CLIENT_PUBLIC_KEY_FILE, rsaPublicKeySpec.getModulus(), rsaPublicKeySpec.getPublicExponent());
                file.writeKeyToFile(CLIENT_PRIVATE_KEY_FILE, rsaprivateKeySpec.getModulus(), rsaprivateKeySpec.getPrivateExponent());
                String Data = "hello";
                byte[] sentenceArr = Data.getBytes();
                String encryptData = RSA.encryptData(sentenceArr, file.CLIENT_PUBLIC_KEY_FILE);
                RSA.decryptData(encryptData, "doctorPrivateKey");
            } catch (IOException ex) {
                ex.printStackTrace();
            }
        }
    }

RSA decryption with Cipher.doFinal() do not recover original plain text
It is pretty straightforward to generate a public/private key pair from raw resource files and use them to encrypt/decrypt in an Android app. However, the following code does NOT correctly recover the plaintext when run on an Android-x86 v4.4.4 emulator in VirtualBox. Could anyone please point out what is wrong with this code? It does not give any error or generate any exceptions. (Changing to Cipher.getInstance("RSA/NONE/NoPadding") is also of no help.)
    PublicKey mPublicKey = null;
    PrivateKey mPrivateKey = null;
    String mPlainText = "The quick brown fox jumped over the lazy dog";
    byte[] mEncryptText = null;
    byte[] mDecryptText = null;

    try {
        InputStream mIS = getResources().openRawResource(R.raw.test1_public_key);
        DataInputStream dis = new DataInputStream(mIS);
        byte[] keyBytes = new byte[(int) mIS.available()];
        dis.readFully(keyBytes);
        dis.close();
        mIS.close();
        X509EncodedKeySpec mX509KeySpec = new X509EncodedKeySpec(keyBytes);
        mPublicKey = (KeyFactory.getInstance("RSA")).generatePublic(mX509KeySpec);
        Toast.makeText(this, "Publickey generated", Toast.LENGTH_LONG).show();
    } catch (Exception e) {
        Log.e("onButtondecrypt", "exception", e);
        Log.e("onButtondecrypt", "exception: " + Log.getStackTraceString(e));
    }

    try {
        InputStream mIS = getResources().openRawResource(R.raw.test1_private_key);
        DataInputStream dis = new DataInputStream(mIS);
        byte[] keyBytes = new byte[(int) mIS.available()];
        dis.readFully(keyBytes);
        dis.close();
        mIS.close();
        PKCS8EncodedKeySpec mPKCS8keySpec = new PKCS8EncodedKeySpec(keyBytes);
        mPrivateKey = (KeyFactory.getInstance("RSA")).generatePrivate(mPKCS8keySpec);
        Toast.makeText(this, "PRIVATE key generated", Toast.LENGTH_LONG).show();
    } catch (Exception e) {
        Log.e("onButtondecrypt", "exception", e);
        Log.e("onButtondecrypt", "exception: " + Log.getStackTraceString(e));
    }

    Toast.makeText(this, mPlainText, Toast.LENGTH_LONG).show();
    Toast.makeText(this, "Encrypting with Publickey ...", Toast.LENGTH_LONG).show();
    try {
        Cipher cipher = Cipher.getInstance("RSA");
        cipher.init(Cipher.ENCRYPT_MODE, mPublicKey);
        mEncryptText = cipher.doFinal(mPlainText.getBytes());
        Toast.makeText(this, mEncryptText.toString(), Toast.LENGTH_LONG).show();
    } catch (Exception e) {
        Log.e("onButtondecrypt", "exception", e);
        Log.e("onButtondecrypt", "exception: " + Log.getStackTraceString(e));
    }

    Toast.makeText(this, "Decrypting with PRIVATE key ...", Toast.LENGTH_LONG).show();
    try {
        Cipher cipher = Cipher.getInstance("RSA");
        cipher.init(Cipher.DECRYPT_MODE, mPrivateKey);
        mDecryptText = cipher.doFinal(mEncryptText);
        Toast.makeText(this, mDecryptText.toString(), Toast.LENGTH_LONG).show();
    } catch (Exception e) {
        Log.e("onButtondecrypt", "exception", e);
        Log.e("onButtondecrypt", "exception: " + Log.getStackTraceString(e));
    }
Thanks to all.

crypto_akcipher_set_pub_key in kernel return error
I'm currently developing a kernel module in which I perform RSA signature verification. crypto_akcipher_set_pub_key always fails on the public key.
I generated the public key with Python. Here is my public key:

    -----BEGIN PUBLIC KEY-----
    MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCqPdPMzEYirodOYw/GoLyFUo547OBHm3O9/KpF6yoW9lqiDHGUF4Hs5pk/tTElSMh2o5wtM1zuehmJHtetnoV16Sko4Fx6C0VXxUqJyg8twKvC4Cj/nmFK4ARayn5AaJRhvIMq560mfh2UotyIL6Zsi+f9Z8usuDP8MWyhM9nZGQIDAQAB
    -----END PUBLIC KEY-----
Here is my code:

    *tfm = crypto_alloc_akcipher("rsa", 0, 0);
    ...
    req = akcipher_request_alloc(*tfm, GFP_KERNEL);
    ...
    err = crypto_akcipher_set_pub_key(*tfm, data, len);
    if (err) {
        printk("set the public key error\n");
        akcipher_request_free(req);
        return NULL;
    }
I have the same problem as this: crypto_akcipher_set_pub_key in kernel asymmetric crypto always returns error
I typed the command:

    openssl asn1parse -in 2

which outputs:

     0:d=0  hl=3 l= 159 cons: SEQUENCE
     3:d=1  hl=2 l=  13 cons: SEQUENCE
     5:d=2  hl=2 l=   9 prim: OBJECT  :rsaEncryption
    16:d=2  hl=2 l=   0 prim: NULL
    18:d=1  hl=3 l= 141 prim: BIT STRING

Then I input:

    openssl asn1parse -in 2 -strparse 18

which outputs:

      0:d=0  hl=3 l= 137 cons: SEQUENCE
      3:d=1  hl=3 l= 129 prim: INTEGER :AA3DD3CCCC4622AE874E630FC6A0BC85528E78ECE0479B73BDFCAA45EB2A16F65AA20C71941781ECE6993FB5312548C876A39C2D335CEE7A19891ED7AD9E8575E92928E05C7A0B4557C54A89CA0F2DC0ABC2E028FF9E614AE0045ACA7E40689461BC832AE7AD267E1D94A2DC882FA66C8BE7FD67CBACB833FC316CA133D9D919
    135:d=1  hl=2 l=   3 prim: INTEGER :010001
Here is my code:
const char *public_key="AA3DD3CCCC4622AE874E630FC6A0BC85528E78ECE0479B73BDFCAA45EB2A16F65AA20C71941781ECE6993FB5312548C876A39C2D335CEE7A19891ED7AD9E8575E92928E05C7A0B4557C54A89CA0F2DC0ABC2E028FF9E614AE0045ACA7E40689461BC832AE7AD267E1D94A2DC882FA66C8BE7FD67CBACB833FC316CA133D9D919";
But the public key still always fails at crypto_akcipher_set_pub_key.
Thanks!

Multiplying two arrays in Python with different lengths
I want to know if it's possible to solve this problem. I have these values:

    yf = (0.23561643, 0.312328767, 0.3506849315, 0.3890410958, 0.4273972602, 0.84931506)
    z = (4.10592285e-05, 0.0012005020, 0.00345332906, 0.006367483, 0.0089151571, 0.01109750, 0.01718827)

I want to use this function (discount factor), but it's not going to work because of the different lengths of z and yf.

    def f(x):
        res = 1 / (1 + x * yf)
        return res

    f(z)

    output: ValueError: cannot evaluate a numeric op with unequal lengths
My question is whether there is a way to solve this. The approximate output values are:

    res = (0.99923, 0.99892, 0.99837, 0.99802, 0.99763, 0.99175)

Any help with this would be perfect, and I want to thank in advance everyone who takes the time to read this or tries to help.
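(Whatever the right pairing is, the two sequences must be brought to a common length before the element-wise formula can run; which elements should pair up is a modelling decision for the asker. A sketch that simply truncates the longer input, using NumPy arrays, which is an assumption since the original objects may be pandas Series:)

```python
import numpy as np

yf = np.array([0.23561643, 0.312328767, 0.3506849315,
               0.3890410958, 0.4273972602, 0.84931506])
z = np.array([4.10592285e-05, 0.0012005020, 0.00345332906,
              0.006367483, 0.0089151571, 0.01109750, 0.01718827])

m = min(len(yf), len(z))           # align lengths: the extra z value is dropped
res = 1.0 / (1.0 + z[:m] * yf[:m])
print(res)                         # six discount factors, all slightly below 1
```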

Numpy array and matrix multiplication
I am trying to get rid of the for loop and instead do an array-matrix multiplication to decrease the processing time when the weights array is very large:

    import numpy as np

    sequence = [np.random.random(10), np.random.random(10), np.random.random(10)]
    weights = np.array([[0.1,0.3,0.6],[0.5,0.2,0.3],[0.1,0.8,0.1]])
    Cov_matrix = np.matrix(np.cov(sequence))
    results = []
    for w in weights:
        result = np.matrix(w) * Cov_matrix * np.matrix(w).T
        results.append(result.A)
Where Cov_matrix is a 3x3 matrix and weights is an array of n 1x3 matrices. Is there a way to multiply/map weights onto Cov_matrix and bypass the for loop? I am not very familiar with all the numpy functions.
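(The loop computes the quadratic form w·C·wᵀ for each row w of weights; np.einsum can do all rows in one call. A sketch that checks the vectorized result against the loop, avoiding np.matrix:)

```python
import numpy as np

rng = np.random.default_rng(0)
sequence = rng.random((3, 10))
weights = np.array([[0.1, 0.3, 0.6], [0.5, 0.2, 0.3], [0.1, 0.8, 0.1]])
C = np.cov(sequence)                       # 3x3 covariance matrix

# Loop version: one quadratic form w C w^T per weight row
loop = np.array([w @ C @ w for w in weights])

# Vectorized: out[i] = sum_jk weights[i,j] * C[j,k] * weights[i,k]
vec = np.einsum('ij,jk,ik->i', weights, C, weights)

print(np.allclose(loop, vec))  # True
```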
Pandas Multiply Specific Columns by Value In Row
I am attempting to multiply specific columns by a value in their respective row.
For example:

       X   Y   Z   A
    A  10   1   0   1
    B  50   0   0   0
    C  80   1   1   1

would become:

       X   Y   Z   A
    A  10  10   0  10
    B  50   0   0   0
    C  80  80  80  80
The problem I am having is that it times out when I use mul(). My real dataset is very large. I tried to iterate with a loop in my real code as follows:

    for i in range(1, df_final_small.shape[0]):
        df_final_small.iloc[i].values[3:248] = df_final_small.iloc[i].values[3:248] * df_final_small.iloc[i].values[2]

which, when applied to the example dataframe, would look like this:

    for i in range(1, df_final_small.shape[0]):
        df_final_small.iloc[i].values[1:4] = df_final_small.iloc[i].values[1:4] * df_final_small.iloc[i].values[0]

There must be a better way to do this; I am having trouble figuring out how to apply the multiplication only to certain columns in the row rather than the entire row.
EDIT: To detail further here is my df.head(5).
id gross 150413 Welcome Email 150413 Welcome Email Repeat Cust 151001 Welcome Email 151001 Welcome Email Repeat Cust 161116 eKomi 1702 Hot Leads Email 1702 Welcome Email  All Purchases 1804 Hot Leads ... SILVER GOLD PLATINUM Acquisition Direct Mail Conversion Direct Mail Retention Direct Mail Retention eMail cluster x y 0 0033333 46.2 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 ... 0.0 1.0 0.0 0.0 0.0 1.0 0.0 10 0.230876 0.461990 1 0033331 2359.0 0.0 0.0 0.0 0.0 1.0 0.0 0.0 0.0 ... 0.0 1.0 0.0 0.0 0.0 1.0 0.0 9 0.231935 0.648713 2 0033332 117.0 0.0 0.0 0.0 0.0 1.0 0.0 0.0 0.0 ... 0.0 1.0 0.0 0.0 0.0 1.0 0.0 5 0.812921 0.139403 3 0033334 89.0 0.0 0.0 0.0 0.0 1.0 0.0 0.0 0.0 ... 0.0 1.0 0.0 0.0 0.0 1.0 0.0 5 0.812921 0.139403 4 0033335 1908.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 ... 0.0 1.0 0.0 0.0 1.0 0.0 0.0 7 0.974142 0.145032
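(For the small example, the row-wise multiplication of columns Y, Z and A by X vectorizes with DataFrame.mul on axis=0, which removes the Python-level loop; a sketch, with the real 245-column slice standing in as a short list:)

```python
import pandas as pd

df = pd.DataFrame({'X': [10, 50, 80],
                   'Y': [1, 0, 1],
                   'Z': [0, 0, 1],
                   'A': [1, 0, 1]}, index=list('ABC'))

cols = ['Y', 'Z', 'A']                     # in the real data, the wide column slice
df[cols] = df[cols].mul(df['X'], axis=0)   # each row's slice scaled by its own X
print(df)
```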

How to factorize or vectorize multiple strings in a column to apply TF-IDF and build a vocabulary
Below is an example of a data frame called df containing two important columns. I would like my model to learn the contents of the Composition column, build a vocabulary using TF-IDF, and then help predict the Item column.

    UID  Item             Composition
    1    [Sweater]        [Wool, knitting, handmade, knitting needle]
    2    [Jeans]          [Denim, cotton, orange thread, stonewash, blue dye]
    3    [CottonTrouser]  [Cotton, littlepolyster, weaving, handstitch, vcut]
    4    [SilkShirt]      [wormsilk, artificialsilk, weaving, hand looming, color dying, coating]
    5    [Carpet]         [Wool, cotton, organic cotton, knitting, sewing]

I applied the following:

    df['Item'] = df['Item'].apply(lambda x: ''.join(str(x).strip('[]') if isinstance(x, list) else x))
    df['Composition'] = df['Composition'].apply(lambda x: ''.join(str(x).strip('[]') if isinstance(x, list) else x))

Now it looks like below; it consists of two columns full of strings.

    UID  Item             Composition
    1    'Sweater'        'Wool', 'knitting', 'handmade', 'knitting needle'
    2    'Jeans'          'Denim', 'cotton', 'orange thread', 'stonewash', 'blue dye'
    3    'CottonTrouser'  'Cotton', 'littlepolyster', 'weaving', 'handstitch', 'vcut'
    4    'SilkShirt'      'wormsilk', 'artificialsilk', 'weaving', 'hand looming', 'color dying', 'coating'
    5    'Carpet'         'Wool', 'cotton', 'organic cotton', 'knitting', 'sewing'
I am trying to apply pd.factorize() to the data but it doesn't work well. I would like to convert the strings to integers and make the model learn the words.

    print(df['Indexer'])
    0    [0, 1, 2, 3, 4, 5]
    1    Index([''Denim ', 'cotton', 'orange thread...
    2    NaN
    3    NaN
    4    NaN

I would like to predict the Item column value using the combination of strings found in the Composition column. I need some expert advice on how to get this done using TF-IDF. Once that is done, I would like to pass it through the MultinomialNB classifier, or any such classifier, to make predictions.
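(A common baseline for this setup is a TfidfVectorizer feeding MultinomialNB; the sketch below uses scikit-learn with the question's five rows as toy data, so it is illustrative rather than a tuned model:)

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

compositions = [
    "Wool knitting handmade knitting needle",
    "Denim cotton orange thread stonewash blue dye",
    "Cotton littlepolyster weaving handstitch vcut",
    "wormsilk artificialsilk weaving hand looming color dying coating",
    "Wool cotton organic cotton knitting sewing",
]
items = ["Sweater", "Jeans", "CottonTrouser", "SilkShirt", "Carpet"]

# TF-IDF builds the vocabulary; MultinomialNB learns word-to-item associations
model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(compositions, items)
print(model.predict(["Denim cotton blue dye"]))
```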
Trying to use BigInt with the Julia software
I am trying to factor the number 2^200 with the Julia software, but the program displays the result 0, which is not correct. I think this is because I am not using BigInt. How can I use BigInt? Do I have to install some particular package?

Shanks's square form factorization implementation
Recently I have been studying Shanks's square form factorization from this wiki page. An implementation in C is provided on that page. I was testing that function and noticed that it fails to find a factor of 27.
This is the given C function:
    #include <inttypes.h>
    #include <math.h>   /* for sqrtl */

    #define nelems(x) (sizeof(x) / sizeof((x)[0]))

    const int multiplier[] = {1, 3, 5, 7, 11, 3*5, 3*7, 3*11, 5*7, 5*11, 7*11,
                              3*5*7, 3*5*11, 3*7*11, 5*7*11, 3*5*7*11};

    uint64_t SQUFOF( uint64_t N )
    {
        uint64_t D, Po, P, Pprev, Q, Qprev, q, b, r, s;
        uint32_t L, B, i;
        s = (uint64_t)(sqrtl(N) + 0.5);
        if (s*s == N) return s;
        for (int k = 0; k < nelems(multiplier) && N <= UINT64_MAX/multiplier[k]; k++) {
            D = multiplier[k]*N;
            Po = Pprev = P = sqrtl(D);
            Qprev = 1;
            Q = D - Po*Po;
            L = 2 * sqrtl(2*s);
            B = 3 * L;
            for (i = 2; i < B; i++) {
                b = (uint64_t)((Po + P)/Q);
                P = b*Q - P;
                q = Q;
                Q = Qprev + b*(Pprev - P);
                r = (uint64_t)(sqrtl(Q) + 0.5);
                if (!(i & 1) && r*r == Q) break;
                Qprev = q;
                Pprev = P;
            }
            if (i >= B) continue;
            b = (uint64_t)((Po - P)/r);
            Pprev = P = b*r + P;
            Qprev = r;
            Q = (D - Pprev*Pprev)/Qprev;
            i = 0;
            do {
                b = (uint64_t)((Po + P)/Q);
                Pprev = P;
                P = b*Q - P;
                q = Q;
                Q = Qprev + b*(Pprev - P);
                Qprev = q;
                i++;
            } while (P != Pprev);
            r = gcd(N, Qprev);   /* gcd() is assumed to be defined elsewhere */
            if (r != 1 && r != N) return r;
        }
        return 0;
    }
Is this a bug in the implementation given on that page? Can this algorithm fail to find a factor for some numbers?