Psych Library polychoric Function Error "You have more than 8 categories for your items, polychoric is probably not needed"
I'm new to R and running into an error with the polychoric
function in the psych
package. I'm attempting to store the polychoric correlation matrix in a data frame using the following syntax:
    RPOL36 <- polychoric(norm.kdqol36, smooth = TRUE)
where norm.kdqol36
is a data frame with vectors of ordered variables, each with 5 levels (0, 25, 50, 75, 100). Here is an example:
    level:     0    25    50    75   100
    count: 11962 19953  4987 12998  8261
Despite each variable having 5 levels, I get this error:
Error in polychoric(norm.kdqol36, smooth = TRUE) : You have more than 8 categories for your items, polychoric is probably not needed
Could there be a formatting issue causing the polychoric
function to read my variables as having more than 5 categories?
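For what it's worth, here is a minimal sketch of one likely cause (an assumption based on the error message, not verified against this exact data): polychoric appears to infer the number of categories from the range of the numeric codes, so responses coded 0/25/50/75/100 can look like far more than 8 categories. Recoding to consecutive integers 1 through 5 is one way to test this (the data frame below is hypothetical):

```r
# Hypothetical example: recode 0/25/50/75/100 to consecutive integers 1..5,
# since polychoric() may infer the category count from the numeric range.
kdqol <- data.frame(
  q1 = c(0, 25, 50, 75, 100, 50),
  q2 = c(25, 25, 0, 100, 75, 50)
)
kdqol_recoded <- as.data.frame(lapply(kdqol, function(x) x / 25 + 1))
range(unlist(kdqol_recoded))  # values now run from 1 to 5
```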
See also questions close to this topic

MySQL query works well in Workbench but takes too long in R
I have a query to run in R that retrieves data from the database and performs operations on it. When I run it in MySQL Workbench it works just fine, but in R it takes far too long and may hang the entire system. I also tried to run it in the command prompt but got the error:
Error: memory exhausted (limit reached?)
R code and query:
    library(DBI)
    library(RMySQL)
    con <- dbConnect(RMySQL::MySQL(), dbname = "mydb", host = "localhost",
                     port = 3306, user = "root", password = "")
    pedigree <- dbGetQuery(con, "SELECT aa.name as person, mother as mom, father as dad
      FROM addweight
      LEFT JOIN aa ON addweight.name2 = aa.name2 OR addweight.name = aa.name
      LEFT JOIN death ON addweight.name2 = death.name2 OR addweight.name = death.name
      WHERE ((death.dodeath > curdate()
              OR aa.name2 NOT IN (SELECT name2 FROM death)
              OR aa.name NOT IN (SELECT name FROM death))
             AND (dob < curdate() AND domove < curdate()))")
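One hedged workaround for the memory error (a sketch, not tested against this database): fetch the result set in chunks with DBI's dbSendQuery()/dbFetch() instead of pulling everything in one dbGetQuery() call, so the full result set is never materialized at once on the R side:

```r
library(DBI)
library(RMySQL)

con <- dbConnect(RMySQL::MySQL(), dbname = "mydb", host = "localhost",
                 port = 3306, user = "root", password = "")

# Send the query without fetching, then pull rows in chunks.
res <- dbSendQuery(con, "SELECT ...")  # the pedigree query from above
chunks <- list()
while (!dbHasCompleted(res)) {
  chunks[[length(chunks) + 1]] <- dbFetch(res, n = 10000)  # 10k rows at a time
}
pedigree <- do.call(rbind, chunks)
dbClearResult(res)
dbDisconnect(con)
```

If even the accumulated data frame is too large for memory, pushing more of the "operations on it" into the SQL itself may be the only option.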

Can I use Zeppelin as an alternative to Shiny?
I've read that Zeppelin can also do R visualizations using spark.r.
My question is: can I use it to do visualizations based on user inputs? These users would have no R/Zeppelin technical experience.

R Write data in a file
I'm trying to save data in a file, but every time I hit the save button, it saves the new data but deletes the data that was already there. What could be the problem?
    saveData <- function(data) {
      data <- as.data.frame(t(data))
      if (exists("responses")) {
        responses <<- rbind(responses, data)
      } else {
        responses <<- data
      }
      write.csv(responses, file = "read.csv", row.names = FALSE)
    }
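A possible fix (a sketch, assuming the goal is to keep all previously saved rows across sessions): append to the file with write.table() instead of rewriting it with write.csv(), which replaces the file contents on every call. The `responses.csv` filename here is illustrative:

```r
# Append each new submission to the CSV instead of rewriting the file;
# the header row is written only when the file does not exist yet.
saveData <- function(data, file = "responses.csv") {
  data <- as.data.frame(t(data))
  write.table(data, file = file, sep = ",",
              row.names = FALSE,
              col.names = !file.exists(file),
              append = file.exists(file))
}
```

This also avoids relying on a `responses` object in the global environment, which disappears when the R session restarts.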

Why does scoreItems (psych package) return means without decimals?
I am using the psych package to calculate constructs' scores. The first construct (Coherence) has 4 items.
    # Create domain list
    Domain.list <- list(Coherence = c(1:4), CogParticipation = c(5:8),
                        CollectAction = c(9:15), RefMonitoring = c(16:20))
    # Calculate scale scores
    Domain.keys <- make.keys(NoMAD.Survey, Domain.list,
                             item.labels = colnames(NoMAD.Survey))
    Domain.scored <- scoreItems(Domain.keys, NoMAD.Survey, impute = "none", digits = 2)
When I read the output of the means for each respondent, I don't get the answer I would expect.
My first respondent's answers were 4, 4, 5, 5, so I would have expected a mean of 4.5.
Yet Domain.scored$scores shows 4 for that first respondent. How do you explain this?
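As a sanity check (a sketch on toy data, not the NoMAD survey), scoreItems() scores can be compared against a plain rowMeans() on the same items. $scores holds unrounded means when totals = FALSE (the default), so a discrepancy like 4 vs. 4.5 usually points at the keys or the item selection rather than rounding:

```r
library(psych)

# Toy data: four items, first respondent answers 4, 4, 5, 5
toy <- data.frame(i1 = c(4, 2, 3, 1), i2 = c(4, 3, 2, 2),
                  i3 = c(5, 2, 4, 1), i4 = c(5, 3, 3, 2))
keys <- make.keys(toy, list(Coherence = 1:4))
sc <- scoreItems(keys, toy, impute = "none")

sc$scores[1, "Coherence"]  # mean of the keyed items for respondent 1
rowMeans(toy)[1]           # 4.5, should agree
```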

Optimal algorithm to check various combinations of items when the number of items is large
I have a data frame which has 20 columns/items in it, and 593 rows (the number of rows doesn't matter here).
Using psych::alpha, the reliability of the test comes out as 0.94. The output also gives the new value of Cronbach's alpha if I drop one of the items. However, I want to know how many items I can drop while retaining an alpha of at least 0.8. I used a brute-force approach: creating every combination of the items in my data frame and checking whether its alpha is in the range (0.7, 0.9). Is there a better way of doing this? It is taking forever to run, because the number of items is too large to check every combination. Below is my current piece of code:
    numberOfItems <- 20
    for (i in 2:(2^numberOfItems - 1)) {  # ignoring the first case, i.e. i = 1, as it doesn't represent any model
      # convert the value of i to binary, e.g. i = 5 gives
      # combination = 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 1
      # using the binaryLogic package
      combination <- as.binary(i, n = numberOfItems)
      model <- c()
      for (j in 1:length(combination)) {
        # choose which columns to consider depending on the combination
        if (combination[j]) model <- c(model, j)
      }
      itemsToUse <- itemResponses[, c(model)]
      if (length(model) > 13) {
        alphaVal <- psych::alpha(itemsToUse)$total$raw_alpha
        if (alphaVal > 0.7 && alphaVal < 0.9) {
          cat(alphaVal)
          print(model)
        }
      }
    }
A sample output from this code is as follows:
    0.8989831 1 4 5 7 8 9 10 11 13 14 15 16 17 19 20
    0.899768  1 4 5 7 8 9 10 11 12 13 15 17 18 19 20
    0.899937  1 4 5 7 8 9 10 11 12 13 15 16 17 19 20
    0.8980605 1 4 5 7 8 9 10 11 12 13 14 15 17 19 20
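A hedged alternative to enumerating all 2^20 subsets (a sketch; `greedy_drop` is a hypothetical helper, not part of psych): greedily drop, one at a time, the item whose removal keeps raw alpha highest, stopping before alpha would fall below the target. This is not guaranteed to find the single best subset, but it needs on the order of k^2 alpha computations instead of 2^k:

```r
# Greedy backward elimination: repeatedly drop the item whose removal
# leaves the highest raw alpha, while alpha stays at or above `target`.
greedy_drop <- function(items, target = 0.8) {
  repeat {
    if (ncol(items) <= 3) break  # alpha needs several items to be meaningful
    fit <- psych::alpha(items)
    drop_alphas <- fit$alpha.drop$raw_alpha  # alpha with each item removed
    best <- which.max(drop_alphas)
    if (drop_alphas[best] < target) break    # dropping more would undershoot
    items <- items[, -best, drop = FALSE]
  }
  items
}
```

Usage would be something like `reduced <- greedy_drop(itemResponses)`, with `ncol(itemResponses) - ncol(reduced)` giving the number of droppable items.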

R psych package - plot weighted correlation matrix - how to reference the weights source?
I am using pairs.panels from the psych package in R to create nice plots of the scatterplots and correlations for multiple variables in a data frame.
    require(psych)
    # plot correlations, no weights
    pairs.panels(iris[, 1:3],
                 gap = 0, pch = 21, ellipses = TRUE, show.points = TRUE,
                 smoother = TRUE, rug = FALSE,
                 main = '~ Naive Correlations', cor = TRUE)
The help for the package indicates that a wt argument can be used to compute weighted correlations ... see my attempt below:
    # weighted correlations
    pairs.panels(iris[, 1:3],
                 gap = 0, pch = 21, ellipses = TRUE, show.points = TRUE,
                 smoother = TRUE, rug = FALSE,
                 main = '~ Weighted Correlations', cor = TRUE,
                 wt = iris[, 4])
This attempt returns the error 'Error in wt[, c(1:2)] : incorrect number of dimensions'. I'm not sure why I am getting this error, as I assumed that pointing at a column of weights would work. Suggestions appreciated.
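One hedged guess at a workaround (an assumption read off the error message, not confirmed from the psych documentation): the subscript wt[, c(1:2)] suggests pairs.panels() indexes the weights by column, i.e. it expects one weight column per plotted variable rather than a single vector. Supplying a matrix with the weight vector repeated once per variable would test that:

```r
library(psych)

# Assumption: wt is subset column-wise per panel, so repeat the weight
# vector once per plotted variable (3 columns for iris[, 1:3]).
w   <- iris[, 4]
wts <- matrix(w, nrow = length(w), ncol = 3)

pairs.panels(iris[, 1:3],
             gap = 0, pch = 21, ellipses = TRUE, show.points = TRUE,
             smoother = TRUE, rug = FALSE,
             main = '~ Weighted Correlations', cor = TRUE,
             wt = wts)
```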