ValueError: Variable rnn/basic_rnn_cell/kernel already exists, disallowed. Did you mean to set reuse=True or reuse=tf.AUTO_REUSE in VarScope?
Any ideas how I can solve the problem shown below? From what I found on the web it is associated with reusing a TensorFlow variable scope, but nothing I have tried works.
ValueError: Variable rnn/basic_rnn_cell/kernel already exists, disallowed. Did you mean to set reuse=True or reuse=tf.AUTO_REUSE in VarScope? Originally defined at:
File "/code/backend/management/commands/RNN.py", line 370, in predict
states_series, current_state = tf.nn.dynamic_rnn(cell=cell, inputs=batchX_placeholder, dtype=tf.float32)
File "/code/backend/management/commands/RNN.py", line 499, in Command
predict("string")
File "/code/backend/management/commands/RNN.py", line 12, in <module>
class Command(BaseCommand):
I tried, for instance, this:

with tf.variable_scope('scope'):
    states_series, current_state = tf.nn.dynamic_rnn(cell=cell, inputs=batchX_placeholder, dtype=tf.float32)

and this:

with tf.variable_scope('scope', reuse=True):
    states_series, current_state = tf.nn.dynamic_rnn(cell=cell, inputs=batchX_placeholder, dtype=tf.float32)

and this:

with tf.variable_scope('scope', reuse=tf.AUTO_REUSE):
    states_series, current_state = tf.nn.dynamic_rnn(cell=cell, inputs=batchX_placeholder, dtype=tf.float32)
Any ideas?
1 answer

Does this happen when you run the model for the first time (upon opening a new Python console)?
If not, you need to clear your computational graph. You can do that by putting this line at the beginning of your script:
tf.reset_default_graph()
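Note also that `reuse` only helps if the same scope wraps *every* call that creates the variables, not just the second one. A minimal sketch, assuming the TensorFlow 1.x API used in the question (the function and scope names here are illustrative, not from the asker's code):

```python
import tensorflow as tf  # assumes TensorFlow 1.x

tf.reset_default_graph()  # drop variables left over from a previous run


def build_rnn(batchX_placeholder, state_size):
    # AUTO_REUSE creates rnn/basic_rnn_cell/kernel on the first call and
    # silently reuses it on later calls instead of raising ValueError.
    with tf.variable_scope('rnn_scope', reuse=tf.AUTO_REUSE):
        cell = tf.nn.rnn_cell.BasicRNNCell(state_size)
        return tf.nn.dynamic_rnn(cell=cell, inputs=batchX_placeholder,
                                 dtype=tf.float32)
```

If `predict()` is called repeatedly from a long-lived process (the traceback suggests a Django management command), resetting the graph, or building the graph once and reusing the ops, is usually the cleaner fix.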
See also questions close to this topic

Raspberry Pi Zero W  Detect iBeacon and perform an action
I have a Kontakt.io Beacon Pro, which broadcasts iBeacons. I want to detect the iBeacon using a Raspberry Pi (Zero W) and then have the Pi perform an action from a Python script (turn on LEDs via GPIO). I can detect the iBeacon using the hcitool lescan feature of BlueZ, but I don't know how (or whether) I can set up a Python script that detects the iBeacon and, upon detecting it, turns the LEDs on.

Setting parameters in pyomo
I am using CPLEX with pyomo. I would like to set the parameter mip.limits.solutions = 1. How can I do this, with either .options(, .set_options(, or any other way? I have tried the following, but nothing works:

from pyomo.environ import *
opt = SolverFactory("cplex")
opt.set_options('miplimitssolutions=1')    # does not work
opt.set_options('mip.limits.solutions=1')  # does not work
opt.options['mip'] = 'limits'              # this works up to here, but how to continue?

decoding entities for Element tree
Is there a comprehensive way to find HTML entities (including foreign-language characters) and convert them to hexadecimal encoding, or another encoding accepted by ElementTree? Is there a best practice for this?
I'm parsing a large XML data set which used HTML entities to encode Unicode and special characters. My script passes in an XML file line by line. When I parse the data using Python's ElementTree, I get the following error:
ParseError: undefined entity: line 296, column 29
I have started by building a dictionary to parse the string and encode it into hexadecimal. This has alleviated many of the errors: for example, converting the trademark symbol ™ to &#x2122;. However, there is no end in sight, because I keep finding HTML-escaped characters such as 'Å' and 'ö', which are for foreign languages. I have looked at several options and describe them below.
xmlcharrefreplace: this did not find foreign-language HTML-escaped values.
line = line.encode('ascii', 'xmlcharrefreplace')
HTMLParser.unescape(): did not work, I believe because XML needs some characters escaped, such as '<', '&', '>'.
h = HTMLParser.HTMLParser()
line = h.unescape(line)
Encoding to UTF-8: did not work, I believe because XML needs some characters escaped.
line = line.encode('utf8')
BeautifulSoup: this returned a BeautifulSoup object, and converting it to a string added an XML version tag to each line; even after replacing that, some other characters were added.
line = BeautifulSoup(line, "xml")
line = str(line).replace('<?xml version="1.0" encoding="utf8"?>', "").replace("\n", "")
htmlentitydefs: still manages to miss many characters, for example '?' and '='; however, this got me further than the other options.
from htmlentitydefs import name2codepoint
line = re.sub('&(%s);' % '|'.join(name2codepoint), lambda m: unichr(name2codepoint[m.group(1)]), line)
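A Python 3 sketch of the htmlentitydefs approach (the module is `html.entities` in Python 3), going the other direction: instead of decoding entities to characters, rewrite named HTML entities as numeric character references, which ElementTree's XML parser accepts, while leaving XML's five predefined entities alone so the markup stays well-formed. The function name is illustrative:

```python
import re
from html.entities import name2codepoint  # `htmlentitydefs` in Python 2

XML_PREDEFINED = {'amp', 'lt', 'gt', 'quot', 'apos'}


def entities_to_charrefs(text):
    """Replace named HTML entities with numeric character references."""
    def repl(match):
        name = match.group(1)
        if name in XML_PREDEFINED or name not in name2codepoint:
            return match.group(0)  # leave predefined/unknown entities alone
        return '&#x%X;' % name2codepoint[name]
    return re.sub(r'&([A-Za-z][A-Za-z0-9]*);', repl, text)
```

After this pass, a line such as `caf&eacute;` becomes `caf&#xE9;`, which ElementTree parses without the "undefined entity" error.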

ImportError: No module named 'xgboost'
When I use

import xgboost as xgb
from xgboost import XGBClassifier

either of the imports, I get

ImportError: No module named xgboost

from a Jupyter notebook. I have installed xgboost:

Successfully installed numpy-1.14.1 scipy-1.0.0 xgboost-0.7.post3

Do I need to install any prerequisites? I am using a Linux system.
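One common cause (an assumption here, since the environment details aren't shown) is that pip installed xgboost into a different Python environment than the one the Jupyter kernel runs. A quick check from inside the notebook:

```python
import sys

# The interpreter the notebook kernel is actually running; if this path is
# not the one that `pip install xgboost` used, the import will fail.
print(sys.executable)

# Installing against the kernel's own interpreter sidesteps the mismatch
# (run this in a notebook cell):
#     !{sys.executable} -m pip install xgboost
```

If the two paths differ, either install into the kernel's environment as above or register the other environment as a Jupyter kernel.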

Object reference issue in Python while creating a tree
I'm trying to construct a tree from a list in Python. The nodes in my tree have the indices of my list as values, and the parent of each node is the node specified in the list at that index.
In the code sample below, varlist stores the node elements of the tree, with values from the input array. For example, an input list of [-1, 0, 4, 0, 3] should give the following tree:

  0
 / \
1   3
     \
      4
       \
        2

The way I'm doing this is to first initialize the nodes separately in a list, with the default parent as None. Then I assign the parent and children as I traverse the array, as follows:

class Node1:
    def __init__(self, val, parent, children=[]):
        self.val = val
        if parent == -1:
            self.parent = None
        else:
            self.parent = parent
        self.children = children

    def __str__(self):
        return str(self.val)

def treeHeight(array):
    varlist = [0] * len(array)
    for i in range(len(array)):
        varlist[i] = Node1(i, None)
    for i in range(len(varlist)):
        if array[i] != -1:
            varlist[i].parent = varlist[array[i]]
            varlist[array[i]].children.append(varlist[i])
        else:
            root = varlist[i]
    for i in range(len(array)):
        print(varlist[i].val, varlist[i].parent, varlist[i].children)
    return None

if __name__ == '__main__':
    print(treeHeight([-1, 0, 4, 0, 3]))
The output I get is this:
0 None [<__main__.Node1 object at 0x1041051d0>, <__main__.Node1 object at 0x104105208>, <__main__.Node1 object at 0x104105780>, <__main__.Node1 object at 0x104105cc0>]
1 0 [<__main__.Node1 object at 0x1041051d0>, <__main__.Node1 object at 0x104105208>, <__main__.Node1 object at 0x104105780>, <__main__.Node1 object at 0x104105cc0>]
2 4 [<__main__.Node1 object at 0x1041051d0>, <__main__.Node1 object at 0x104105208>, <__main__.Node1 object at 0x104105780>, <__main__.Node1 object at 0x104105cc0>]
3 0 [<__main__.Node1 object at 0x1041051d0>, <__main__.Node1 object at 0x104105208>, <__main__.Node1 object at 0x104105780>, <__main__.Node1 object at 0x104105cc0>]
4 3 [<__main__.Node1 object at 0x1041051d0>, <__main__.Node1 object at 0x104105208>, <__main__.Node1 object at 0x104105780>, <__main__.Node1 object at 0x104105cc0>]
The output is not what I'm expecting because, somehow, the children list of every node has 4 elements in it, when I was expecting 2 of the nodes to have children and the rest to be empty. Can someone please help me understand what is going on here?
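The symptom described matches Python's mutable-default-argument pitfall: `children=[]` is evaluated once, at function definition time, so every instance created with the default shares one list, and every append shows up on every node. A minimal sketch of the pitfall and the usual fix (names here are illustrative):

```python
# The pitfall in miniature: the default list is created once and shared
# across every call that relies on the default.
def track(x, acc=[]):
    acc.append(x)
    return acc


# The fix: default to None and allocate a fresh list inside the body,
# so each instance gets its own children list.
class Node:
    def __init__(self, val, parent=None, children=None):
        self.val = val
        self.parent = parent
        self.children = children if children is not None else []
```

With `children=None` in `Node1.__init__`, each node would accumulate only its own children.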
python 3.x override class with new class that all references use
Alright, this is going to be difficult.
I have three modules: maincards, present, and newcards. The maincards module has a Card class that holds all the information for the deck of cards. present basically has the functions that reference the cards from the Card class in maincards and presents them on screen (pygame).
Now what I need is for newcards, which serves as a modification, to have its own Card class that overrides maincards's class, which present will then pull, under these conditions:
Present will only have a prompt that activates the newcards module, meaning all the coding that will change maincards has to be in newcards.
I cannot change anything in present or maincards.
It has to be an on/off thing, sort of like an option.
In other words, I want a third module with a class that overrides a class from the first module on command, so that all references to the first class are replaced by the new class WITHOUT changing anything in the first two modules.
If you could point me in the right direction or explain how this would be done, that would be great. I hope, for both our sanity, that I was clear and concise enough.
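One way this is commonly done is monkey-patching: the third module rebinds the class attribute on the first module when activated, so any code that looks the class up *through the module* sees the replacement. A self-contained sketch with stand-in modules (the real maincards/present modules are assumed, not shown):

```python
import sys
import types

# Stand-in for the question's maincards module, built in-memory so the
# sketch runs on its own.
maincards = types.ModuleType('maincards')


class BaseCard:
    def describe(self):
        return 'base card'


maincards.Card = BaseCard
sys.modules['maincards'] = maincards


# --- roughly what newcards would contain ---
class ModCard(BaseCard):
    def describe(self):
        return 'modified card'


_original_card = maincards.Card


def activate():
    # Rebind the attribute: any later lookup of maincards.Card gets ModCard.
    # Names imported earlier with `from maincards import Card` are NOT
    # affected -- present must reference the class via the module attribute.
    maincards.Card = ModCard


def deactivate():
    maincards.Card = _original_card
```

The on/off requirement maps to `activate()`/`deactivate()`; the key caveat is how `present` references the class, since a `from maincards import Card` done at import time binds the original object permanently.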

Model implementation in production with python
I built a binary-classification machine learning model in Python.
It works on my laptop (as a command-line tool). Now I want to deploy it in production on a separate server at my company. It has to take inputs from another server (a C# application), do some calculations, and return the outputs.
My question is: what are the best practices for doing such a thing in production? As far as I know, it can be done over a TCP/IP connection.
I am new to this field and don't know the terms used here, so can anybody guide me?
Thanks.
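A common practice is to expose the model behind a small HTTP service, which the C# application can call with ordinary HTTP/JSON rather than a raw TCP socket. A minimal sketch using only the standard library (the `predict` function is a placeholder, not a real classifier; production deployments usually use a framework such as Flask behind a proper WSGI server):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


def predict(features):
    # Placeholder for the real binary classifier.
    return {'label': int(sum(features) > 0)}


class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON request body, run the model, return JSON.
        length = int(self.headers.get('Content-Length', 0))
        payload = json.loads(self.rfile.read(length).decode('utf-8'))
        body = json.dumps(predict(payload['features'])).encode('utf-8')
        self.send_response(200)
        self.send_header('Content-Type', 'application/json')
        self.send_header('Content-Length', str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # silence per-request logging


def make_server(port=0):
    # Port 0 asks the OS for any free port; in production a fixed port
    # would be used so the C# application knows where to POST.
    return HTTPServer(('127.0.0.1', port), PredictHandler)
```

The C# side then issues an HTTP POST with a JSON body and parses the JSON response, which keeps the interface language-neutral.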

Spark 2.2: Load org.apache.spark.ml.feature.LabeledPoint from file
The following line of code loads the (soon to be deprecated) mllib.regression.LabeledPoint from file into an RDD[LabeledPoint]:

MLUtils.loadLibSVMFile(spark.sparkContext, s"$path${File.separator}${fileName}_data_sparse").repartition(defaultPartitionSize)

I'm unable to find the equivalent function for ml.feature.LabeledPoint, which is not yet heavily used in the Spark documentation examples. Can someone point me to the relevant function?

Can I Feed a Different Start Symbol to RNN Decoder?
When an RNN is used to predict a sentence, the decoder state is initialized with the input <S> (the start symbol, which prompts the decoder to produce the first word of the response). For example: given <S>, predict "The"; then, given "The", predict "car".
If my goal was to predict the end of a sequence given the start,

Input: "The car was"
Target: "moving down the hill."

would it make sense to feed the first decoder cell "was" instead of <S>? Feeding "was" should increase the information accessible to the decoder, compared to using a static token that carries no example-specific information. I'm posting this question because, conversely, it seems like it could be necessary for the <S> to remain static during training.
Tensorflow LSTM model parameter learning inside parameter
I'm trying to train my LSTM model in TensorFlow, and my module has to calculate a parameter inside a parameter. I want to train both parameters together. More details are in the picture below.
I think that the TensorFlow LSTM module's input must be a complete sequence, with parameters like tf.placeholder. How can I do this in TensorFlow? Or can you recommend another framework more appropriate for this task?

How to not resize input image while running Tensorflow SSD's inference
From what I understand of the Single Shot Multibox Detector paper, it is a fully convolutional network. As such, it shouldn't require the rescaling (to 300x300) that TensorFlow performs during inference. How can I remove this resizing during inference in TensorFlow?

Python: How to use multiprocessing in python to speed up a python code with multiple functions
After going through a couple of YouTube videos on the multiprocessing library and https://pymotw.com/2/multiprocessing/basics.html, I developed a basic understanding of how it works. I even went through similar problems posed by other members of SO, but none is similar to my requirement. My requirement is to reduce the run time of my code, which is taking a lot of time, and I have identified the particular function that is responsible. Here is my code:
import time
ti = time.time()
import matplotlib.pyplot as plt
from scipy.integrate import ode
import numpy as np
from numpy import sin, cos, tan, zeros, exp, tanh, dot, array
from matplotlib import rc
import itertools
from const_for_drdo_split_thrust import *
plt.style.use('bmh')
import mpltex
linestyles = mpltex.linestyle_generator()

def zetta(x, spr, c):
    num = len(x)*len(c)
    Mu = [[] for i in range(len(x))]
    for i in range(len(x)):
        Mu[i] = np.zeros(len(c))
    m = []
    for i in range(len(x)):
        for j in range(len(c)):
            Mu[i][j] = exp(-.5*((x[i]-c[j])/spr)**2)
    b = list(itertools.product(*Mu))
    for i in range(len(b)):
        m.append(reduce(lambda x, y: x*y, b[i]))
    m = np.array(m)
    S = np.sum(m)
    return m/S

def f(t, Y, tim, So, C, param):
    V,alpha,beta,p,q,r,phi,theta,psi = Y[0],Y[1],Y[2],Y[3],Y[4],Y[5],Y[6],Y[7],Y[8]
    CY_b,CY_p,CY_r,Cl_b,Cl_p,Cl_r,Cn_b,Cn_p,Cn_r,CD_0,CD_q,CL0,CL_q,Cm0,Cm_q,dtr,m,g,B,rho,Jx,Jy,Jz,Jxz,S,b,az,ax,dx,bx,bz,c,z,x,c1,c2,c3,K0,K,eta,mx,my,mz,sigma,spr,lr1,lr2,lr3,V_desired,alpha_desired,beta_desired = param[0],param[1],param[2],param[3],param[4],param[5],param[6],param[7],param[8],param[9],param[10],param[11],param[12],param[13],param[14],param[15],param[16],param[17],param[18],param[19],param[20],param[21],param[22],param[23],param[24],param[25],param[26],param[27],param[28],param[29],param[30],param[31],param[32],param[33],param[34],param[35],param[36],param[37],param[38],param[39],param[40],param[41],param[42],param[43],param[44],param[45],param[46],param[47],param[48],param[49],param[50]
    e1,e2,e3,e4,e5 = V-V_desired, alpha-alpha_desired, phi-phi_desired, theta-theta_desired, psi-psi_desired
    xx = [e1, e2, e3, e4, e5]
    s1old,s2old,s3old,s4old,s5old = So[0],So[1],So[2],So[3],So[4]
    theta1,theta2,theta3,theta4,theta5 = (Y[9:9+len(C)**len(xx)],
        Y[9+len(C)**len(xx):9+2*len(C)**len(xx)],
        Y[9+2*len(C)**len(xx):9+3*len(C)**len(xx)],
        Y[9+3*len(C)**len(xx):9+4*len(C)**len(xx)],
        Y[9+4*len(C)**len(xx):])
    theta1dot,theta2dot,theta3dot,theta4dot,theta5dot = (np.zeros(len(C)**len(xx)),
        np.zeros(len(C)**len(xx)), np.zeros(len(C)**len(xx)),
        np.zeros(len(C)**len(xx)), np.zeros(len(C)**len(xx)))
    CY = CY_b(alpha)*beta + CY_p(alpha)*p + CY_r(alpha)*r
    Cl = Cl_b(alpha)*beta + Cl_p(alpha)*p + Cl_r(alpha)*r
    Cn = Cn_b(alpha)*beta + Cn_p(alpha)*p + Cn_r(alpha)*r
    CD = CD_0(alpha) + CD_q(alpha)*q
    CL = CL0(alpha) + CL_q(alpha)*q
    Cm = Cm0(alpha) + Cm_q(alpha)*q
    #alpha,beta,phi,theta,psi = alpha*dtr, beta*dtr, phi*dtr, theta*dtr, psi*dtr
    a1,a2,a3,a4,a5,a6,a7,a8,a9,a10,a11,a12,a13,a14,a15,a16,a17,a18,a19,a20,a21,a22,a23,a24,a25,a26,a27,a28,a29 = (.5*rho*CD*S/mx, .5*rho*CY*S/mx, 2/mx, 1/mx, (m*g-B)/mx,
        .5*rho*CL*S/mz, 2/mz, (m*g-B)/mz, 2/my, 1/my, (m*g-B)/my,
        .5*rho*S*CD/my, .5*rho*S*CY/my, (Jy-Jz)/Jx, Jxz/Jx, .5*rho*S*b*Cl/Jx,
        (m*g*az+B*bz)/Jx, (Jz-Jx)/Jy, Jxz/Jy, .5*rho*S*c*Cm/Jy, 2*z/Jy, 2*x/Jy,
        (m*g*az+B*bz)/Jy, (m*g*ax+B*bx)/Jy, (Jx-Jy)/Jz, (Jxz/Jz),
        .5*rho*b*Cn/Jz, dx/Jz, (m*g*ax+B*bx)/Jz)
    Q = .5*rho*V**2*S
    # sin(gamma), sin(mu)cos(gamma) and all
    sg = (cos(alpha)*cos(beta)*sin(theta)) - (sin(beta)*sin(phi)*cos(theta)) - (sin(alpha)*cos(beta)*cos(phi)*cos(theta))
    smcg = (sin(theta)*cos(alpha)*sin(beta)) + (sin(phi)*cos(theta)*cos(beta)) - (sin(alpha)*sin(beta)*cos(phi)*cos(theta))
    cmcg = (sin(theta)*sin(alpha)) + (cos(alpha)*cos(phi)*cos(theta))
    #import pdb;pdb.set_trace()
    # sliding surface and control design
    thetadot = q*cos(phi) - r*sin(phi)
    phidot = p + q*sin(phi)*tan(theta) + r*cos(phi)*tan(theta)
    psidot = (q*sin(phi) + r*cos(phi))/cos(theta)
    de3, de4, de5 = phidot, thetadot, psidot
    s1,s2,s3,s4,s5 = e1, e2, de3 + c3*e3, de4 + c4*e4, de5 + c5*e5
    s = np.array([[s1],[s2],[s3],[s4],[s5]])
    # adaptive fuzzy and the formulation of control
    #s1dot,s2dot,s3dot = vdot+2*e1, alphadot+2*e2, betadot+2*e3
    dt = time.time() - tim
    s1dot,s2dot,s3dot,s4dot,s5dot = (s1-s1old)/dt, (s2-s2old)/dt, (s3-s3old)/dt, (s4-s4old)/dt, (s5-s5old)/dt
    tim = time.time()
    Z = zetta(xx, spr, C)
    u1 = dot(Z, theta1)
    u2 = dot(Z, theta2)
    #import pdb;pdb.set_trace()
    u3 = dot(Z, theta3)
    u4 = dot(Z, theta4)
    u5 = dot(Z, theta5)
    u = [u1, u2, u3, u4, u5]
    # dynamical equations
    vdot = (1/mx)*(Q*(CD*cos(beta) - CY*sin(beta)) + (m*g-B)*sg + u[2]*sin(beta) + (u[0]+u[1])*cos(alpha)*cos(beta) + (u[3]+u[4])*sin(alpha)*cos(beta))
    alphadot = q - (1/cos(beta))*((p*cos(alpha) + r*sin(alpha))*sin(beta) - Q*CL/(mz*V) - (m*g-B)*cmcg/(mz*V) - ((u[0]+u[1])*sin(alpha) + (u[3]+u[4])*cos(alpha)))
    betadot = (p*sin(alpha) - r*cos(alpha)) + (1/my*V)*((m*g-B)*smcg + Q*(CD*cos(beta) + CY*sin(beta)) + u[2]*cos(beta) - (u[0]+u[1])*sin(beta)*cos(alpha) - (u[3]+u[4])*sin(alpha)*sin(beta))
    pdot = ((Jy-Jz)/Jx)*q*r + (Jxz/Jx)*p*q + (1/Jx)*(Q*b*Cl - (m*g*az + B*bz)*sin(phi)*cos(theta) + u[3]*y - u[4]*y)
    qdot = ((Jz-Jx)/Jy)*p*r + (Jxz/Jy)*(r**2 - p**2) + (1/Jy)*(Q*c*Cm - (m*g*az + B*bz)*sin(theta) - (m*g*ax + B*bx)*cos(phi)*cos(theta) + u[0]*z + u[1]*z + u[3]*x + u[4]*x)
    rdot = ((Jx-Jy)/Jz)*p*q - (Jxz/Jz)*q*r + (1/Jz)*(Q*b*Cn + (m*g*ax + B*bx)*sin(phi)*cos(theta) + u[0]*y - u[1]*y + u[2]*dx)
    #thetaddot = qdot*cos(phi) - q*phidot*sin(phi) - (rdot*sin(phi) + r*phidot*cos(phi))
    for i in range(len(C)**len(xx)):
        #import pdb;pdb.set_trace()
        theta1dot[i] = lr1*Z[i]*(s1dot + .5*s1 + 5*tanh(s1)) - lr1*sigma*theta1[i]
        theta2dot[i] = lr2*Z[i]*(s2dot + .6*s2 + 6*tanh(s2)) - lr2*sigma*theta2[i]
        theta3dot[i] = lr3*Z[i]*(s3dot + 2*s3 + 10*tanh(s3)) - lr3*sigma*theta3[i]
        theta4dot[i] = lr4*Z[i]*(s4dot + 3*s4 + 12*tanh(s4)) - lr4*sigma*theta4[i]
        theta5dot[i] = lr5*Z[i]*(s5dot + 4*s5 + 20*tanh(s5)) - lr5*sigma*theta5[i]
        #theta1dot[i] = lr1*Z[i]*s1
        #theta2dot[i] = lr2*Z[i]*s2
        #theta3dot[i] = lr3*Z[i]*s3
    s1old,s2old,s3old,s4old,s5old = s1, s2, s3, s4, s5
    f = np.array([vdot,alphadot,betadot,pdot,qdot,rdot,phidot,thetadot,psidot])
    f1 = np.concatenate((f, theta1dot), axis=0)
    f2 = np.concatenate((f1, theta2dot), axis=0)
    f3 = np.concatenate((f2, theta3dot), axis=0)
    f4 = np.concatenate((f3, theta4dot), axis=0)
    f5 = np.concatenate((f4, theta5dot), axis=0)
    return [f5, u]

def sliding_solver(t0, y0, dt, t1, tim, So, C, num, param):
    n = len(C)**num
    x, t = [[] for i in range(9 + 5*n)], []
    u = [[] for i in range(5)]
    r = ode(lambda t, y, tim, So, C, param: f(t, y, tim, So, C, param)[0]).set_integrator('dopri5', method='bdf')
    r.set_initial_value(y0, t0).set_f_params(tim, So, C, param)
    while r.successful() and r.t < t1:
        r.integrate(r.t + dt)
        for i in range(9 + 5*n):
            x[i].append(r.y[i])
        for i in range(5):
            u[i].append(f(r.t, r.y, tim, So, C, param)[1][i])
        t.append(r.t)
    return x, t, u

if __name__ == '__main__':
    X, t, u = sliding_solver(0, x0, 1e-2, T, tim, So, C, num, param)
    names = ["V($m/s$)","alpha($degree$)","beta($degree$)","p($degree/sec$)","q($degree/sec$)","r($degree/sec$)","phi($degree$)","theta($degree$)","psi($degree$)"]
    plt.rc('text', usetex=True)
    plt.rc('font', family='serif', serif='Times')
    plt.rc('xtick', labelsize=20)
    plt.rc('ytick', labelsize=20)
    plt.rc('axes', labelsize=20)
    for i in range(9):
        plt.figure(i+1)
        plt.plot(t, X[i], label=names[i])
        plt.legend(loc='lower left', prop={'size': 20})
        plt.xlabel(r'\textbf{time} (s)')
        plt.ylabel(names[i])
        #plt.savefig('amar1_20100.eps')
    var = 9
    for j in range(5):
        for i in range(len(C)**num):
            plt.figure(10+j)
            plt.plot(t, X[var+i])
        var = var + len(C)**num
    plt.figure(15)
    for i in range(5):
        plt.plot(t, u[i])
    plt.show()
In the code above, as you can see, I am calling just one function in main(); the function being called is nothing more than an ODE solver, to which a function named f() is passed. It is inside f() that another function, zetta(), is used. The prime culprit here is zetta(), which takes an awful lot of time to return its array. I wanted to speed things up by using the multiprocessing library. I tried assigning sliding_solver() to multiprocessing, but it still uses only one processor. I want zetta() to use one processor and the solver another. Is that possible? If so, how can I accomplish it? PS: (1) I have two systems, one with a 4-core Xeon and another with an 8-core i7. (2) All the constants and parameters used in the code above are imported from the const_for_drdo_split_thrust module.
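One thing to note is that the products inside zetta() are independent of one another, so they can be split across worker processes with a Pool even while the solver stays sequential. A hedged sketch of just that inner step (chunking and data sizes would need tuning against the real Mu arrays, and the worker must be a top-level function so it can be pickled):

```python
from functools import reduce
from multiprocessing import Pool


def product(factors):
    # One membership product -- stands in for the reduce() inside zetta().
    return reduce(lambda a, b: a * b, factors)


def parallel_products(rows, processes=2):
    # Each row is one tuple from itertools.product(*Mu); the products are
    # independent, so Pool.map spreads them across worker processes.
    with Pool(processes) as pool:
        return pool.map(product, rows)
```

The normalization m/S afterwards is cheap and can stay in the parent process. Whether this wins overall depends on the per-row work versus the cost of shipping the rows to the workers.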

Q-values exploding when training DQN
I'm training a DQN to play OpenAI's Atari environments, but the Q-values of my network quickly explode far above anything realistic.
Here's the relevant portion of the code:
for state, action, reward, next_state, done in minibatch:
    if not done:
        # To save on memory, next_state is just one frame
        # So we have to add it to the current state to get the actual input for the network
        next_4_states = np.array(state)
        next_4_states = np.roll(next_4_states, 1, axis=3)
        next_4_states[:, :, :, 0] = next_state
        target = reward + self.gamma * \
            np.amax(self.target_model.predict(next_4_states))
    else:
        target = reward
    target_f = self.target_model.predict(state)
    target_f[0][action] = target
    self.target_model.fit(state, target_f, epochs=1, verbose=0)
The discount factor is 0.99 (it doesn't happen with a discount factor of 0.9, but that also doesn't converge because it can't think far enough ahead).
Stepping through the code, the reason it's happening is that all the Q-values that aren't meant to be updated (the ones for actions we didn't take) increase slightly. My understanding is that passing the network's own output back to it during training should keep those outputs the same, not increase or decrease them. Is there something wrong with my model? Is there some way I can mask the update so it only updates the relevant Q-value?
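One way to see why the untouched outputs drift: fitting the network to its own predictions still moves shared hidden-layer weights, so those outputs wobble rather than stay fixed. Masking means building the target from the current predictions and letting only the taken action contribute to the loss. A framework-free sketch of both pieces (function names are illustrative, not from any library):

```python
def build_target(q_pred, action, reward, q_next_max, gamma, done):
    # Copy the network's current predictions so the error starts at zero
    # for every action that was not taken; only the chosen entry moves.
    target = list(q_pred)
    target[action] = reward if done else reward + gamma * q_next_max
    return target


def masked_loss(q_pred, q_target, action):
    # Squared error on the taken action only: gradients for the other
    # Q-outputs are exactly zero, instead of relying on predict-then-refit.
    diff = q_pred[action] - q_target
    return diff * diff
```

In Keras this masking is usually implemented with a custom loss or by multiplying the error by a one-hot action mask; separately, standard DQN predicts targets with a *frozen* target network while fitting the online network, rather than fitting the target network itself as the snippet above does.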

Multiclass Classification with Neural Network: One Hot Encoding
I am required to use the neuralnet function to work with a neural network in a multiclass classification problem. The problem is that neuralnet does not like factors, but my predictors and response variable are factors. Therefore, I tried to use One-Hot encoding via the class.ind() function from the nnet package. However, I still get an error.
My code is:
CH <- read.table("http://data.princeton.edu/wws509/datasets/copen.dat", header=TRUE)
CH_transformed <- CH %>% rowwise() %>% mutate(id = list(seq(1:n))) %>% unnest(id) %>% dplyr::select(-n)
nn_CH <- neuralnet(class.ind(CH_transformed$satisfaction) ~ class.ind(CH_transformed$housing) + class.ind(CH_transformed$influence) + class.ind(CH_transformed$contact),
                   data=CH_transformed, hidden=2, err.fct="ce", linear.output=FALSE)
I would appreciate any advice on how to correctly use One Hot encoding for my predictors and response variable!