Optimizing Mathematical Calculation
I have a model for the four possibilities of purchasing a pair of items (purchasing both, neither, or just one of them) and need to optimize the (pseudo) log-likelihood function. Part of this, of course, is the calculation/definition of the pseudo-log-likelihood function.
The following is my code, where Beta is a 2-d vector for each customer (there are U customers and U different beta vectors), X is a 2-d vector for each item (different for each of the N items) and Gamma is a symmetric matrix with a scalar value gamma(i,j) for each pair of items. df is a dataframe of the purchases, with one row for each customer and N columns for the items.
It would seem to me that all of these loops are inefficient and take up too much time, but I am not sure how to speed up this calculation and would appreciate any help improving it. Thank you in advance!
def pseudo_likelihood(Args):
    Beta = np.reshape(Args[0:2*U], (U, 2))
    Gamma = np.reshape(Args[2*U:], (N, N))
    L = 0
    for u in range(U):
        print(datetime.datetime.today(), " for user {}".format(u))
        y = df.loc[u][1:]
        beta_u = Beta[u, :]
        for l in range(N):
            print(datetime.datetime.today(), " for item {}".format(l))
            for i in range(N - 1):
                if i == l:
                    continue
                for j in range(i + 1, N):
                    if y[i] == y[j]:
                        if y[i] == 1:
                            # Log of the exponent of this expression
                            L += np.dot(beta_u, (x_vals.iloc[i, 1:] + x_vals.iloc[j, 1:])) + Gamma[i, j]
                        else:
                            L += np.log(
                                1 - np.exp(np.dot(beta_u, (x_vals.iloc[i, 1:] + x_vals.iloc[j, 1:])) + Gamma[i, j])
                                - np.exp(np.dot(beta_u, x_vals.iloc[i, 1:])) * (
                                    1 - np.exp(np.dot(beta_u, x_vals.iloc[j, 1:])))
                                - np.exp(np.dot(beta_u, x_vals.iloc[j, 1:])) * (
                                    1 - np.exp(np.dot(beta_u, x_vals.iloc[i, 1:]))))
                    else:
                        if y[i] == 1:
                            L += np.dot(beta_u, x_vals.iloc[i, 1:]) + np.log(1 - np.exp(np.dot(beta_u, x_vals.iloc[j, 1:])))
                        else:
                            L += np.dot(beta_u, x_vals.iloc[j, 1:]) + np.log(1 - np.exp(np.dot(beta_u, x_vals.iloc[i, 1:])))
            L -= (N - 2) * np.dot(beta_u, x_vals.iloc[l, 1:])
            for k in range(N):
                if k != l:
                    L -= np.dot(beta_u, x_vals.iloc[k, 1:])
    return L
To add/clarify: I am using this calculation to optimize and find the beta and gamma parameters that generated the data for this pseudo-likelihood function.
I am using scipy.optimize.minimize with the 'Powell' method.
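One direction for speeding this up: almost all of the work in the loops above is the repeated np.dot(beta_u, x_vals.iloc[i, 1:]) inner products, and those can all be precomputed in a single matrix product, with the pairwise sums formed by broadcasting. A minimal sketch, with hypothetical small U and N and random data standing in for the real df / x_vals:

```python
import numpy as np

# Hypothetical small sizes for illustration; the real U, N come from the data.
U, N = 3, 5
rng = np.random.default_rng(0)
X = rng.normal(size=(N, 2))          # one 2-d covariate vector per item
Beta = rng.normal(size=(U, 2))       # one 2-d beta vector per customer
Gamma = rng.normal(size=(N, N))
Gamma = (Gamma + Gamma.T) / 2        # symmetric gamma(i, j)

# Precompute every inner product once: S[u, i] = beta_u . x_i
S = Beta @ X.T                       # shape (U, N)

# Pairwise sums S[u, i] + S[u, j] for every item pair, per customer:
pair_sum = S[:, :, None] + S[:, None, :]   # shape (U, N, N)

# e.g. the "both purchased" term for every (u, i, j) at once:
both_term = pair_sum + Gamma[None, :, :]   # shape (U, N, N)
```

Boolean masks built from the purchase matrix y (shape (U, N)) can then select which of the four cases applies to each pair, and np.triu_indices(N, k=1) restricts the sum to each unordered pair once, replacing the three inner loops with array operations.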
See also questions close to this topic

Showing all the images with matplotlib
I'm using numpy and matplotlib to read all the images in a folder for image-processing techniques. I have done the part of reading the image dataset from folders and processing it with numpy arrays, but the problem I'm facing is showing all the images with the matplotlib imshow function. Every time I want to show all the images with imshow, it unfortunately just gives me the first image and nothing else. My code is below:
import os
import numpy as np
import matplotlib.pyplot as mpplot
import matplotlib.image as mpimg

images = []
path = "../path/to/folder"
for root, _, files in os.walk(path):
    current_directory_path = os.path.abspath(root)
    for f in files:
        name, ext = os.path.splitext(f)
        if ext == ".jpg":
            current_image_path = os.path.join(current_directory_path, f)
            current_image = mpimg.imread(current_image_path)
            images.append(current_image)

for img in images:
    print(len(img.shape))
    i = 0
    for i in range(len(img.shape)):
        mpplot.imshow(img)
        mpplot.show()
I will be thankful if somebody can help me with this.
P.S. I'm pretty new to Python, numpy, and Stack Overflow, so please don't mind if the question is unclear or not direct.
Thanks,

Tkinter, changing a button and its command
We have a button that calls function A. We want function A to change the image of that button and to change its command to call function B instead of A. How to do that? We can't have two buttons at the same time.
Thank you in advance.

SYN flooding in Python
Is there any way to perform SYN flooding on a target IP address? I have tried many programs but none of them seems to flood the targeted IP address. Kindly help.

How to define the Jacobian with numdifftools to use a scipy algorithm in Python?
I have to minimize a quadratic convex function of the single variable x. The function is:
f(x, y, set1, set2, set3, r, s)
I want to use the Newton-CG algorithm in scipy, because with BFGS or CG the values were too high. This algorithm asks me to define the Jacobian, so I defined it as:
def jaco(x, y, set1, set2, set3, r, s):
    return nd.Jacobian(lambda v: f(v, y, set1, set2, set3, r, s))(x).ravel()
using the numdifftools library. Then I optimized with scipy:
result = scipy.optimize.minimize(f, x, args=(y, set1, set2, set3, r, s),
                                 method='Newton-CG', jac=jaco, tol=0.0001,
                                 options={'disp': True, 'maxiter': 10**3})
The problem is that I have never used this library (numdifftools), so maybe I've done something wrong in the definition of the Jacobian. The value of the function I obtained using the new algorithm was higher than before (using an algorithm that does not need the Jacobian or Hessian defined). Could someone tell me what's wrong in the definition of the Jacobian, and how to define it? Thank you.
Efficient code using nested functions in R
I am wondering what is more efficient in R when it comes to using nested functions. Essentially, I have three functions f1, f2, f3; f3 uses f2, which itself uses f1.
The 2 options I have are:
- Define f1, f2, f3 independently; then use f3, which will use f1 and f2 predefined in the environment.
- Define f3 and include f1 and f2 as part of the code of f3; then use f3.
To your knowledge, is one of these ways more efficient than the other?
Many thanks

Fisher information in EWC
EWC, proposed in this paper, uses Fisher information to decide which parameters are important to previous tasks. I found an implementation. They calculate the Fisher information by squaring the gradient:
# compute first-order derivatives
ders = sess.run(tf.gradients(tf.log(probs[0, class_ind]), self.var_list),
                feed_dict={self.x: imgset[im_ind:im_ind+1]})
# square the derivatives and add to total
for v in range(len(self.F_accum)):
    self.F_accum[v] += np.square(ders[v])
According to wiki, higher Fisher information indicates more confidence about the parameters.
However, according to gradient descent, the gradient at the optimum is 0. From this perspective, a better-optimized value indicates low Fisher information; in other words, we have less confidence in a better-optimized value. In my own opinion, an optimum should contain more information about the tasks and the model should have more confidence in it. Could anyone tell me where I am wrong? Thanks.
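The two statements are in fact consistent: the Fisher information is the expectation of the squared score over the data, not the square of the averaged gradient. At the optimum the average score vanishes, while the average squared score does not. A small numpy illustration for a Gaussian mean with known unit variance (simulated data):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=1.0, size=100_000)

mu = x.mean()              # maximum-likelihood estimate of the mean
score = x - mu             # per-sample d/dmu of log N(x | mu, 1)

print(score.mean())            # ~0: the *average* gradient vanishes at the optimum
print(np.square(score).mean()) # ~1: the average *squared* gradient, the Fisher information
```

So a well-fit model can simultaneously have zero average gradient and high Fisher information; it is the per-sample squared gradients (as in the EWC code above) that carry the information.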

How could I write the Jacobian with numdifftools to use Newton-CG in scipy.optimize.minimize in Python?
I have to minimize a quadratic convex function of the single variable v, but the function has more arguments. I want to use the Newton-CG algorithm, which asks me to define the Jacobian. So I imported the numdifftools library to define the Jacobian, but it doesn't work, so maybe I've done something wrong in writing the code (it's the first time I'm using this library). The function to minimize is:
fun(v, y, z, set1, set2, b, u)
so I define the Jacobian as:
jaco = nd.Jacobian(fun(v, y, z, set1, set2, b, u))
and if I print it I have a matrix. But when I put it in the optimization as:
result = scipy.optimize.minimize(fun, v, args=(y, z, set1, set2, b, u),
                                 method='Newton-CG', jac=jaco, tol=0.000001,
                                 options={'disp': True, 'maxiter': 10**3})
this doesn't work and gives me various errors such as "invalid syntax", but it always worked before I defined the Jacobian and put it in the call.
Could someone tell me what's wrong and how to write this correctly? Thank you.

How to find Mahalanobis distance between two 1D arrays in Python?
I have two 1D arrays, and I need to find out the Mahalanobis distance between them.
Array 1
0.125510275,0.067021735,0.140631825,0.014300184,0.122152582,0.002372072,0.050777748,0.106606245,0.149123222,0.159149423,0.210138127,0.031959131,0.068411253,0.038253143,0.024590122,0.101361006,0.160774037,0.183688596,0.07163775,0.096662685,0.000117288,0.14251323,0.030461289,0.006710192,0.217195332,0.338565469,0.030219197,0.100772612,0.144092739,0.092911556,0.008420993,0.042907588,0.212668449,0.009366207,7.01E05,0.134508118,0.015715659,0.050884761,0.18804647,0.04946585,0.242626131,0.099951334,0.053660966,0.275807977,0.216019884,0.009127878,0.019819722,0.043750495,0.12940146,0.259942383,0.061821692,0.107142501,0.098196507,0.022301452,0.079412982,0.131031215,0.049483716,0.126781181,0.195536733,0.077051811,0.061049294,0.039563753,0.02573989,0.025330214,0.204785526,0.099218346,0.050533134,0.109173119,0.205652237,0.168003649,0.062734045,0.100320764,0.063513778,0.120843001,0.223983109,0.075016715,0.481291831,0.107607022,0.141365036,0.075003348,0.042418435,0.041501854,0.096700639,0.083469011,0.033227846,0.050748199,0.045331556,0.065955319,0.26927036,0.082820699,0.014033476,0.176714703,0.042264186,0.011814327,0.041769091,0.00132945,0.114337325,0.013483777,0.111367472,0.051828772,0.022199111,0.030011443,0.015529033,0.171916366,0.172722578,0.214662731,0.0219073,0.067695767,0.040487193,0.04814541,0.003313571,0.01360167,0.115932293,0.235844463,0.185181856,0.130868644,0.010789306,0.171733275,0.059378762,0.003508842,0.039326921,0.024174646,0.195897669,0.088932432,0.025385177,0.134177506,0.08158315,0.049005955
And, Array 2
0.120652862,0.030241199,0.146165773,0.044423241,0.138606027,0.048646796,0.00780057,0.101798892,0.185339138,0.210505784,0.1637595,0.015000292,0.10359703,0.102251172,0.043159217,0.183324724,0.171825036,0.173819616,0.112194099,0.161590934,0.002507193,0.163269699,0.037766434,0.041060638,0.178659558,0.268946916,0.055348843,0.11808344,0.113775767,0.073903576,0.039505914,0.032382272,0.159118786,0.007761603,0.057116233,0.043675732,0.057895001,0.104836114,0.22844176,0.055832602,0.245030299,0.006276659,0.140012532,0.21449241,0.159539059,0.049584024,0.016899824,0.074179329,0.119686954,0.242336214,0.001390997,0.097442642,0.059720818,0.109706804,0.073196828,0.16272822,0.022305552,0.102650747,0.192103565,0.104134969,0.099571452,0.101140082,0.038911857,0.071292967,0.202927336,0.12729995,0.047885433,0.165100336,0.220239595,0.19612211,0.075948663,0.096906625,0.07410948,0.108219706,0.155030385,0.042231761,0.484629512,0.093194947,0.105109185,0.072906494,0.056871444,0.057923764,0.101847053,0.092042476,0.061295755,0.031595342,0.01854251,0.074671492,0.266587347,0.052284949,0.003548023,0.171518356,0.053180017,0.022400264,0.061757766,0.038441688,0.139473096,0.05759665,0.101672307,0.074863717,0.02349415,0.011674869,0.010008151,0.141401738,0.190440938,0.216421023,0.028323224,0.078021556,0.011468113,0.100600921,0.019697987,0.014288296,0.114862509,0.162037179,0.171686187,0.149788797,0.01235011,0.136169329,0.008751356,0.024811052,0.003802934,0.00500867,0.1840965,0.086204343,0.018549766,0.110649876,0.068768717,0.03012047
I found that Scipy has already implemented the function. However, I am confused about what the value of IV should be. I tried to do the following
V = np.cov(np.array([array_1, array_2]))
IV = np.linalg.inv(V)
print(mahalanobis(array_1, array_2, IV))
But, I get the following error:
File "C:\Users\XXXXXX\AppData\Local\Continuum\anaconda3\envs\face\lib\site-packages\scipy\spatial\distance.py", line 1043, in mahalanobis
    m = np.dot(np.dot(delta, VI), delta)
ValueError: shapes (128,) and (2,2) not aligned: 128 (dim 0) != 2 (dim 0)
EDIT:
array_1 = [0.10577646642923355, 0.09617947787046432, 0.029290344566106796, 0.02092641592025757, 0.021434104070067406, 0.13410840928554535, 0.028282659128308296, 0.12082239985466003, 0.21936850249767303, 0.06512433290481567, 0.16812698543071747, 0.03302834928035736, 0.18088334798812866, 0.04598559811711311, 0.014739632606506348, 0.06391328573226929, 0.15650317072868347, 0.13678401708602905, 0.01166679710149765, 0.13967938721179962, 0.14632365107536316, 0.025218486785888672, 0.046839646995067596, 0.09690812975168228, 0.13414686918258667, 0.2883925437927246, 0.1435326784849167, 0.17896348237991333, 0.10746842622756958, 0.09142691642045975, 0.04860316216945648, 0.031577128916978836, 0.17280976474285126, 0.059613555669784546, 0.05718057602643967, 0.0401446670293808, 0.026440180838108063, 0.017025159671902657, 0.22091664373874664, 0.024703698232769966, 0.15607595443725586, 0.0018572667613625526, 0.037675946950912476, 0.3210170865058899, 0.10884962230920792, 0.030370134860277176, 0.056784629821777344, 0.030112050473690033, 0.023124486207962036, 0.1449904441833496, 0.08885903656482697, 0.17527811229228973, 0.08804896473884583, 0.038310401141643524, 0.01704210229218006, 0.17355971038341522, 0.018237406387925148, 0.030551932752132416, 0.23085585236549377, 0.13475817441940308, 0.16338199377059937, 0.06968289613723755, 0.04330683499574661, 0.04434924200177193, 0.22637797892093658, 0.07463733851909637, 0.15070196986198425, 0.07500549405813217, 0.10863590240478516, 0.22288714349269867, 0.0010778247378766537, 0.057608842849731445, 0.12828609347343445, 0.17236559092998505, 0.23064571619033813, 0.09910193085670471, 0.46647992730140686, 0.0634111613035202, 0.13985536992549896, 0.052741192281246185, 0.1558966338634491, 0.022585246711969376, 0.10514408349990845, 0.11794176697731018, 0.06241249293088913, 0.06389056891202927, 0.14145469665527344, 0.060088545083999634, 0.09667345881462097, 0.004665130749344826, 0.07927791774272919, 0.21978208422660828, 0.0016187895089387894, 
0.04876316711306572, 0.03137822449207306, 0.08962501585483551, 0.09108036011457443, 0.01795950159430504, 0.04094596579670906, 0.03533276170492172, 0.01394269522279501, 0.08244197070598602, 0.05095399543642998, 0.04305890575051308, 0.1195211187005043, 0.16731074452400208, 0.03894471749663353, 0.0222858227789402, 0.07944411784410477, 0.0614166259765625, 0.1481470763683319, 0.09113290905952454, 0.14758692681789398, 0.24051085114479065, 0.164126917719841, 0.1753545105457306, 0.003193420823663473, 0.20875433087348938, 0.03357946127653122, 0.1259773075580597, 0.00022807717323303223, 0.039092566817998886, 0.13582147657871246, 0.01937306858599186, 0.015938198193907738, 0.00787206832319498, 0.05792934447526932, 0.03294186294078827] array_2 = [0.1966051608324051, 0.0940953716635704, 0.0031937970779836178, 0.03691547363996506, 0.07240629941225052, 0.07114037871360779, 0.07133384048938751, 0.1283963918685913, 0.15377545356750488, 0.091400146484375, 0.10803385823965073, 0.09235749393701553, 0.1866973638534546, 0.021168243139982224, 0.09094691276550293, 0.07300164550542831, 0.20971564948558807, 0.1847742646932602, 0.009817334823310375, 0.05971141159534454, 0.09904412180185318, 0.0278592761605978, 0.012554554268717766, 0.09818517416715622, 0.1747943013906479, 0.31632938981056213, 0.0864541232585907, 0.13249783217906952, 0.002135572023689747, 0.04935726895928383, 0.010047778487205505, 0.04549024999141693, 0.26334646344184875, 0.05263081565499306, 0.013573898002505302, 0.2042253464460373, 0.06646320968866348, 0.08540669083595276, 0.12267164140939713, 0.018634958192706108, 0.19135263562202454, 0.01208433136343956, 0.09216200560331345, 0.2779296934604645, 0.1531585156917572, 0.10681629925966263, 0.021275708451867104, 0.059720948338508606, 0.06610126793384552, 0.21058350801467896, 0.005440462380647659, 0.18833838403224945, 0.08883830159902573, 0.025969548150897026, 0.0337764173746109, 0.1585341989994049, 0.02370697632431984, 0.10416869819164276, 0.19022507965564728, 
0.11423652619123459, 0.09144753962755203, 0.08765758574008942, 0.0032832929864525795, 0.0051014479249715805, 0.19875964522361755, 0.07349056005477905, 0.1031823456287384, 0.10447365045547485, 0.11358538269996643, 0.24666038155555725, 0.05960353836417198, 0.07124857604503632, 0.039664581418037415, 0.20122921466827393, 0.31481748819351196, 0.006801256909966469, 0.41940364241600037, 0.1236235573887825, 0.12495145946741104, 0.12580059468746185, 0.02020396664738655, 0.03004150651395321, 0.11967054009437561, 0.09008713811635971, 0.07470540702342987, 0.09324200451374054, 0.13763070106506348, 0.07720538973808289, 0.19568027555942535, 0.036567769944667816, 0.030284458771348, 0.14119629561901093, 0.03820852190256119, 0.06232285499572754, 0.036639824509620667, 0.07704029232263565, 0.12276224792003632, 0.0035170004703104496, 0.13103705644607544, 0.027697769924998283, 0.01527332328259945, 0.04027168080210686, 0.03659897670149803, 0.03330300375819206, 0.12293602526187897, 0.09043421596288681, 0.019673841074109077, 0.07563626766204834, 0.13991905748844147, 0.014788001775741577, 0.07630413770675659, 0.00017269013915210962, 0.16345393657684326, 0.25710681080818176, 0.19869503378868103, 0.19393865764141083, 0.07422225922346115, 0.19553625583648682, 0.09189949929714203, 0.051557887345552444, 0.0008843056857585907, 0.006250975653529167, 0.1680600494146347, 0.10320111364126205, 0.03232177346944809, 0.08931156992912292, 0.11964476853609085, 0.00814182311296463]
The covariance matrix of the above arrays turns out to be singular, and thus I am unable to invert it. Why does it end up being a singular matrix?
EDIT 2: Solution
Since the covariance matrix here is singular, I had to pseudo-invert it using np.linalg.pinv(V).
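A sketch of both points above, using hypothetical 4-dimensional stand-ins for the 128-dimensional arrays: np.cov treats each row as a variable by default, so two stacked rows give the 2x2 matrix behind the shape error; with rowvar=False the matrix has the right size but is built from only two observations, so it has rank 1, is singular, and needs the pseudo-inverse.

```python
import numpy as np
from scipy.spatial.distance import mahalanobis

# Toy stand-ins for the two 128-dimensional arrays:
array_1 = np.array([0.1, -0.2, 0.3, 0.05])
array_2 = np.array([0.0, -0.1, 0.25, 0.1])

# rowvar=False makes columns the variables: a 4x4 matrix here (128x128 for
# the real data), but rank 1 when estimated from just two observations.
V = np.cov(np.stack([array_1, array_2]), rowvar=False)
VI = np.linalg.pinv(V)            # pseudo-inverse, since V is singular

d = mahalanobis(array_1, array_2, VI)
print(d)  # sqrt(2), regardless of the data
```

Note the degenerate result: with only two observations, V is proportional to the outer product of the difference vector with itself, so the distance collapses to sqrt(2) whatever the inputs. A meaningful covariance estimate needs many more observations than dimensions.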
Nonlinear Least Squares Model Parameter Estimation Code in Javascript
Anyone with Javascript code for nonlinear least squares for model parameter optimization? Thank you

finite discretization in fortran
Are the following equations being discretized correctly, and are the boundary conditions being implemented right?
I am trying to discretize the following equations in 2 dimensions (x, y) on a square grid in Fortran but cannot obtain a convergent (steady-state) solution. Below are the equations and code.
Can you help me find the problem? Naively I expect that the discretization is not correct, or my boundary conditions are not being coded right.
The functions psi, A_x, A_y (the components of A) are denoted by psi(1,i,j), psi(2,i,j) and psi(3,i,j) in the code below, respectively. I discretize them in the function 'grad' using finite differences. The boundary conditions are that the curl of A equals Bz on the boundary, and the gradient of psi dotted with the normal is zero on the boundary.
I've posted this here: "problem with a function so I never exit the do while loop" as a bounty, but never got a solution, so I simplified the post a bit. I will delete the bounty post when the site allows me, since nobody replied to it.
The equations are:
function grad(psi)  !There is something wrong within this function.
    implicit none
    integer :: i,j
    real, parameter :: h = 0.1
    integer, parameter :: nx = 50, ny = 50
    complex, dimension(3,-nx:nx,-ny:ny) :: psi, grad
    real :: x,y
    real :: kappa, k1, k2
    complex :: ImK1
    kappa = 4.0
    k1 = kappa**(-1.)
    k2 = kappa**(-2.)
    ImK1 = cmplx(0,k1)
    do j=-ny+1,ny-1
        do i=-nx+1,nx-1
            grad(1,i,j) = k2*(psi(1,i+1,j)+psi(1,i-1,j)+psi(1,i,j+1)+psi(1,i,j-1)-4*psi(1,i,j))/h**2 &
                + psi(1,i,j)*(1-abs(psi(1,i,j))**2) - (abs(psi(2,i,j))**2+abs(psi(3,i,j))**2)*psi(1,i,j) &
                - ImK1*(psi(2,i+1,j)-psi(2,i,j))*psi(1,i,j)/h &
                - ImK1*(psi(3,i,j+1)-psi(3,i,j))*psi(1,i,j)/h &
                - 2.*ImK1*psi(2,i,j)*(psi(1,i+1,j)-psi(1,i,j))/h &
                - 2.*ImK1*psi(3,i,j)*(psi(1,i,j+1)-psi(1,i,j))/h
            grad(2,i,j) = (psi(2,i,j+1)-2.*psi(2,i,j)+psi(2,i,j-1))/h**2 &
                - (psi(3,i+1,j+1)+psi(3,i-1,j-1)-psi(3,i+1,j-1)-psi(3,i-1,j+1))/(4*h**2) &
                + 0.5*ImK1*psi(1,i,j)*(CONJG(psi(1,i+1,j))-CONJG(psi(1,i,j)))/h &
                - 0.5*ImK1*CONJG(psi(1,i,j))*(psi(1,i+1,j)-psi(1,i,j))/h &
                - abs(psi(1,i,j))**2*psi(2,i,j)
            grad(3,i,j) = (psi(3,i+1,j)-2.*psi(3,i,j)+psi(3,i-1,j))/h**2 &
                - (psi(2,i+1,j+1)+psi(2,i-1,j-1)-psi(2,i+1,j-1)-psi(2,i-1,j+1))/(4*h**2) &
                + 0.5*ImK1*psi(1,i,j)*(conjg(psi(1,i,j+1))-conjg(psi(1,i,j)))/h &
                - 0.5*ImK1*conjg(psi(1,i,j))*(psi(1,i,j+1)-psi(1,i,j))/h &
                - abs(psi(1,i,j))**2.*psi(3,i,j)
        end do
    end do
    do i = -nx,nx
        x = real(i)*h
        grad(1,i,ny)  = cmplx(0,0)
        grad(1,i,-ny) = cmplx(0,0)
        grad(2,i,ny)  = cmplx(0,0)
        grad(2,i,-ny) = cmplx(0,0)
        grad(3,i,ny)  = cmplx(0,0)
        grad(3,i,-ny) = cmplx(0,0)
    end do
    do j = -ny,ny
        y = real(j)*h
        grad(1,nx,j)  = cmplx(0,0)
        grad(1,-nx,j) = cmplx(0,0)
        grad(2,nx,j)  = cmplx(0,0)
        grad(2,-nx,j) = cmplx(0,0)
        grad(3,nx,j)  = cmplx(0,0)
        grad(3,-nx,j) = cmplx(0,0)
    end do
end function grad

function z1(x,y)
    real, intent(in) :: x,y
    real, parameter :: a = 0.0, b = 0.0, c = 0.000000001
    z1 = tanh(sqrt((x-a)**2+(y-b)**2))*x/sqrt(x**2+y**2+c)
end function z1

function z2(x,y)
    real, intent(in) :: x,y
    real, parameter :: a = 0.0, b = 0.0, c = 0.000000001
    z2 = tanh(sqrt((x-a)**2+(y-b)**2))*y/sqrt(x**2+y**2+c)
end function z2

program MyProgram
    implicit none
    integer :: i,j
    real :: x,y,L,Bz
    real, parameter :: h = 0.1
    integer, parameter :: nx = 50, ny = 50
    complex, dimension(3,-nx:nx,-ny:ny) :: psi0
    L = h*nx
    Bz = 0.8
    !initial conditions
    do i=-nx+1,nx-1
        do j=-ny+1,ny-1
            x = real(i)*h
            y = real(j)*h
            psi0(1,i,j) = cmplx(z1(x,y),z2(x,y))
            psi0(2,i,j) = cmplx(0.0,0)
            psi0(3,i,j) = cmplx(0.0,0)
        end do
    end do
    !boundary conditions
    do i = -nx, nx
        x = real(i)*h
        psi0(1,i,ny)  = psi0(1,i,ny-1)
        psi0(1,i,-ny) = psi0(1,i,-ny+1)
        psi0(2,i,ny)  = cmplx(-L*Bz/2.,0)
        psi0(2,i,-ny) = cmplx(L*Bz/2.,0)
        psi0(3,i,ny)  = cmplx(L*x/2.,0)
        psi0(3,i,-ny) = cmplx(L*x/2.,0)
    end do
    !boundary conditions
    do j = -ny, ny
        y = real(j)*h
        psi0(1,nx,j)  = psi0(1,nx-1,j)
        psi0(1,-nx,j) = psi0(1,-nx+1,j)
        psi0(2,nx,j)  = cmplx(-Bz*y/2.,0)
        psi0(2,-nx,j) = cmplx(-Bz*y/2.,0)
        psi0(3,nx,j)  = cmplx(Bz*L/2.,0)
        psi0(3,-nx,j) = cmplx(-Bz*L/2.,0)
    end do
end program

State machine compression
I have a log of outputs given the inputs, for example:
0 =A=> 1 =B=> 1 =B=> 0 =A=> 1 =A=> 0 =A=> 0
And I would like to find the minimal state machine representing it.
I tried, by hand, to break it down into an ordered list of transitions:
0 =A=> 1
1 =B=> 1
1 =B=> 0
0 =A=> 1
1 =A=> 0
0 =A=> 0
If we consider that there are only two states: q0 with output 0, and q1 with output 1.
The list becomes:
q0 (0) =A=> q1 (1)
q1 (1) =B=> q1 (1)
q1 (1) =B=> q0 (0)
q0 (0) =A=> q1 (1)
q1 (1) =A=> q0 (0)
q0 (0) =A=> q0 (0)
We can see that from state q0, the input A leads to q1 in lines 1 & 4, but to state q0 in line 6. Same issue in the q1 state with the action B. So I have to create two additional states: q2 with output 0, and q3 with output 1. I can then rewrite the list the following way:
q0 (0) =A=> q1 (1)
q1 (1) =B=> q3 (1)
q3 (1) =B=> q0 (0)
q0 (0) =A=> q1 (1)
q1 (1) =A=> q2 (0)
q2 (0) =A=> q0 (0)
And done.
It seems simple by hand, but I can't find an algorithm that achieves this given the list of transitions. I know that there are several solutions to this example, but I only need to find one.
I considered treating this as an optimization problem and using, for instance, simulated annealing or a genetic algorithm, but this seems overkill. Plus, I really feel that there is a simple way to do it, maybe something related to graph theory?
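One small building block, as a sketch rather than the full minimization: a checker that verifies a candidate machine against the logged trace. With it, a search over machines of increasing state count (or a state-merging heuristic) can stop at the first consistent one. The machine below is the hypothetical 4-state solution worked out by hand above.

```python
def consistent(delta, outputs, start, trace):
    """Check that a Moore-style machine reproduces the logged trace.

    delta:   {(state, input): next_state}
    outputs: {state: output}
    trace:   (initial_output, [(input, next_output), ...])
    """
    o0, steps = trace
    state = start
    if outputs[state] != o0:
        return False
    for inp, o in steps:
        nxt = delta.get((state, inp))
        if nxt is None or outputs[nxt] != o:
            return False
        state = nxt
    return True

# The 4-state machine found by hand, against the original log:
outputs = {'q0': 0, 'q1': 1, 'q2': 0, 'q3': 1}
delta = {('q0', 'A'): 'q1', ('q1', 'B'): 'q3', ('q3', 'B'): 'q0',
         ('q1', 'A'): 'q2', ('q2', 'A'): 'q0'}
trace = (0, [('A', 1), ('B', 1), ('B', 0), ('A', 1), ('A', 0), ('A', 0)])
print(consistent(delta, outputs, 'q0', trace))  # True
```

The exact-minimization problem itself is the classic "smallest automaton consistent with a sample" problem, for which evidence-driven state-merging algorithms are the usual non-brute-force approach.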
Best regards, Alexandre

Calculating Loglikelihood for the Skew Normal Distribution in R
library(sn)
library(fGarch)
library(maxLik)
set.seed(12)
nl = 100
locl = 0
scalel = 1
shapel = 1
# data
y = c(rsn(n = nl, xi = locl, omega = scalel, alpha = shapel, tau = 0, dp = NULL))
Assume only the shape parameter is unknown.
snormFit <- function(x, ...){
    start = c(mean = 0, sd = 1, xi = 1)
    # Log-likelihood Function:
    loglik = function(x, y = x){
        f = -sum(log(dsnorm(y, 0, 1, x[3])))
        f
    }
    # Minimization:
    fit = nlminb(start = start, objective = loglik,
                 lower = c(-Inf, 0, 0), upper = c(Inf, Inf, Inf), y = x)
    # Return Value:
    fit
}
shape.l = snormFit(y)$par

Warning message:
In nlminb(start = start, objective = loglik, lower = c(-Inf, 0,  :
  NA/NaN function evaluation

> shape.l
     mean        sd        xi
0.0000000 1.0000000 0.8216856
# Likelihood
logLik.sn = sum(log(dnorm(shape.l*y)))

Warning message:
In shape.l * y :
  longer object length is not a multiple of shorter object length

> logLik.sn
[1] -112.9638
Is the way I computed the log-likelihood correct? I am getting some warning messages; what is the reason for them? And are there any other ways to compute the log-likelihood for the skew normal distribution in R? Thank you in advance.

Likelihood ratio test with Wald statistic
I wrote code for the log-likelihood of my unrestricted model and of my restricted model, and optimized both with optim. My test is to check whether two standard deviations are the same. Now I want to check whether my constraint is true or not, and I used the statistic w = (s1 - s2) / sqrt(var(s1) + var(s2) - 2*cov(s1, s2)). However, it is not working. What am I doing wrong?

How to get p-values comparing a heteroscedastic model to a non-heteroscedastic one in growth modeling using varFixed in the lme function in R?
I am trying to test a linear growth model and I am checking for autocorrelation and heteroscedasticity, to see whether I should include them in my (quadratic) model.
I managed to test for autocorrelation and found that the model with it is a better fit. I then moved on to test for heteroscedasticity. I first ran a preliminary test of variance homogeneity with the tapply function and saw that my variance is decreasing over time, so I added the varFixed element to my lme call.
LMModel <- lme(Y ~ Time + I(Time^2), random = ~Time | ID, data = DATABASE,
               control = list(opt = "optim"), na.action = na.exclude,
               correlation = corAR1(), weights = varFixed(~Time))
The model runs correctly, but when I try to compare it to my previous model (with autocorrelation but no heteroscedasticity, i.e. without the varFixed element), the anova does not return p-values, only AIC, BIC and log-likelihood. Running the anova to compare all my previous models with each other also gave me significance tests showing whether the new model was better. How can I compare the heteroscedastic model to the one with the assumption of homogeneous variance?
This is my output:
            Model df        AIC        BIC   logLik
LMModel4b       1  8 -1185.5020 -1139.1204 600.7510
LMModel4bb      2  8  -608.4465  -562.0649 312.2233
Thanks in advance!