Why doesn't `pivot_wider` work on a `data.table`?
A simple example of long to wide pivot:
library(tidyverse)
library(data.table)
df <- data.frame(names=letters[1:3],values=1:3)
df %>% pivot_wider(names_from=names,values_from=values)
#works
dt <- data.table(names=letters[1:3],values=1:3)
dt %>% pivot_wider(names_from=names,values_from=values)
Error in data.frame(row = row_id, col = col_id) :
arguments imply differing number of rows: 0, 3
Why does this error happen?
PS: one fix is to remove the data.table-ness with as.data.frame(dt):
dt %>% as.data.frame %>% pivot_wider(names_from=names,values_from=values)
#works
1 answer
-
answered 2020-11-28 09:37
jangorecki
The manual entry for this function in tidyr mentions only "data frame", without specifying other classes like tibble or data.table. So, addressing your question: the function is either not designed to handle a data.table, or there is a bug in pivot_wider.

As a workaround you can use as.data.frame (as you already mentioned); if your data is big, then possibly setDF, to avoid an extra in-memory copy.

You can also use the data.table function dcast to perform the same kind of transformation.

library(tidyr)
library(data.table)
df <- data.frame(names=letters[1:3], values=1:3)

pivot_wider(df, names_from=names, values_from=values)
## A tibble: 1 x 3
#      a     b     c
#  <int> <int> <int>
#1     1     2     3

setDT(df)
dcast(df, . ~ names, value.var="values")[, ".":=NULL][]
#   a b c
#1: 1 2 3
See also questions close to this topic
-
Multiple random samples from a given distribution
I want to take multiple random samples from a Poisson distribution, then calculate the mean of each one. Then I need to calculate the mean and variance of the means. How can I do this effectively? I am quite new to R. The only way I can think of is getting random samples one by one and calculating each mean one by one, then putting them in a vector to calculate the overall mean and variance.
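For reference, a minimal sketch of a vectorised approach with replicate(); the sample count (1000), sample size (50) and lambda (4) are arbitrary placeholders:

# draw 1000 samples of size 50 from Poisson(lambda = 4), keeping each sample's mean
sample_means <- replicate(1000, mean(rpois(50, lambda = 4)))

mean(sample_means)  # mean of the sample means
var(sample_means)   # variance of the sample means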
-
Create dataframe from grouped elements of a vector
I have a vector in R that looks like this:
vector = c('name1',100,'name2',101,'name3',102,'name4',103)
What I want to do is create a data frame from this vector that looks like this:
+------------------+ | User| Value | +------------------+ | name1| 100 | | name2| 101 | | name3| 102 | | name4| 103 | +------------------+
What would be the most efficient way to do so? Thanks in advance.
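For reference, a minimal base-R sketch: since the vector alternates name/value pairs, logical index recycling can split it without a loop (note that everything in the vector is character, because c() coerces the numbers to strings):

vector <- c('name1', 100, 'name2', 101, 'name3', 102, 'name4', 103)

# odd positions are users, even positions are values
df <- data.frame(User  = vector[c(TRUE, FALSE)],
                 Value = as.numeric(vector[c(FALSE, TRUE)]))
df
#    User Value
# 1 name1   100
# 2 name2   101
# 3 name3   102
# 4 name4   103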
-
how to set breaks only at integers on log10 axes in ggplot2
Transforming ggplot2 axes to log10 using scales::trans_breaks() can sometimes (if the range is small enough) produce un-pretty breaks, at non-integer powers of ten.

Is there a general-purpose way of setting these breaks to occur only at 10^x, where x are all integers and, ideally, consecutive (e.g. 10^1, 10^2, 10^3)?
Here's an example of what I mean.
library(ggplot2)

# dummy data
df <- data.frame(fct = rep(c("A", "B", "C"), each = 3),
                 x = rep(1:3, 3),
                 y = 10^seq(from = -4, to = 1, length.out = 9))

p <- ggplot(df, aes(x, y)) +
  geom_point() +
  # faceted to try and emphasise that it's general purpose,
  # rather than specific to a particular axis range
  facet_wrap(~ fct, scales = "free_y")
The unwanted result -- y-axis breaks are at non-integer powers of ten (e.g. 10^2.8)
p + scale_y_log10(
  breaks = scales::trans_breaks("log10", function(x) 10^x),
  labels = scales::trans_format("log10", scales::math_format(10^.x))
)
I can achieve the desired result for this particular example by adjusting the n argument to scales::trans_breaks(), as below. But this is not a general-purpose solution, of the kind that could be applied without needing to adjust anything on a case-by-case basis.

p + scale_y_log10(
  breaks = scales::trans_breaks("log10", function(x) 10^x, n = 1),
  labels = scales::trans_format("log10", scales::math_format(10^.x))
)
I should add that I'm not wed to using scales::trans_breaks(); it's just that I've found it's the function that gets me closest to what I'm after. Any help would be much appreciated, thank you!
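For reference, a sketch of one general-purpose option: give breaks a function of the axis limits that returns only integer powers of ten. This is a hedged sketch rather than a confirmed fix; the limits arrive on the data scale, and panels whose range sits inside a single decade would need a fallback:

int_log_breaks <- function(lims) {
  # return every 10^k lying inside the (data-scale) limits;
  # if the limits span less than one decade this yields nothing useful,
  # so a fallback such as scales::log_breaks() may be needed there
  10^seq(ceiling(log10(min(lims))), floor(log10(max(lims))))
}

p + scale_y_log10(
  breaks = int_log_breaks,
  labels = scales::trans_format("log10", scales::math_format(10^.x))
)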
-
Attribute value to new column based on values in similarly named columns
I have a data frame which has distances from a unit's centroid to different points. The points are identified by numbers, and what I am trying to obtain is a new column with the distance to the closest object.
So the data frame looks like this:
FID <- c(12, 12, 14, 15, 17, 18)
year <- c(1990, 1994, 1983, 1953, 1957, 2000)
centroid_distance_1 <- c(220.3, 220.3, 515.6, NA, 200.2, 22)
centroid_distance_2 <- c(520, 520, 24.3, NA, NA, 51.8)
centroid_distance_3 <- c(NA, 12.8, 124.2, NA, NA, 18.8)
centroid_distance_4 <- c(725.3, 725.3, 44.2, NA, 62.9, 217.9)
sample2 <- data.frame(FID, year, centroid_distance_1, centroid_distance_2,
                      centroid_distance_3, centroid_distance_4)
sample2

  FID year centroid_distance_1 centroid_distance_2 centroid_distance_3 centroid_distance_4
1  12 1990               220.3               520.0                  NA               725.3
2  12 1994               220.3               520.0                12.8               725.3
3  14 1983               515.6                24.3               124.2                44.2
4  15 1953                  NA                  NA                  NA                  NA
5  17 1957               200.2                  NA                  NA                62.9
6  18 2000                22.0                51.8                18.8               217.9
FID is an identifier of each unit and year a year indicator; each row is a FID*year pair. centroid_distance_x is the distance between the row's centroid and object x. This is a small sample of the data frame, which contains many more columns and rows.

What I am looking for is something like this:
short_distance <- c(220.3, 12.8, 24.3, NA, 62.9, 18.8)
unit <- c(1, 3, 2, NA, 4, 3)
ideal.df <- data.frame(FID, year, short_distance, unit)
ideal.df

  FID year short_distance unit
1  12 1990          220.3    1
2  12 1994           12.8    3
3  14 1983           24.3    2
4  15 1953             NA   NA
5  17 1957           62.9    4
6  18 2000           18.8    3
Basically, I add one column named short_distance, which is the lowest value a row takes across all the centroid_distance_* columns above, and one named unit, which identifies the object to which the row has the smallest distance (so if a row has its smallest value in centroid_distance_1, it takes the value 1 for unit).

I have tried a bunch of things with dplyr and pivoting and re-pivoting the data frame, but I'm really not getting there. Thanks a lot for the help!
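For reference, a minimal base-R sketch over sample2 as defined above (it assumes the centroid_distance_* columns are ordered 1..n so that column position equals object number, and ties go to the first minimum):

dist_cols <- grep("^centroid_distance_", names(sample2), value = TRUE)

# row-wise minimum and its position, guarding the all-NA rows
sample2$short_distance <- apply(sample2[dist_cols], 1, function(r)
  if (all(is.na(r))) NA else min(r, na.rm = TRUE))
sample2$unit <- apply(sample2[dist_cols], 1, function(r)
  if (all(is.na(r))) NA else which.min(r))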
-
Subsetting data and filtering giving me two different answers to same question in R
So I'm trying to figure out how many observations meet a certain criterion in my datasets, using the subsetting options on two different data sets. For one of the datasets (store), the function worked and gave me the right answer. On the other dataset (data), the function worked but didn't give me the right answer; I know this because I went into the original Excel file and checked. Then I used the filter option (from dplyr) to subset data to see how many observations were left, and it gave me the right answer that time. One difference between the data sets is that store's class is "data.table" "data.frame", while data's class is just "data.frame". I technically know the right answer, but I want to figure out the source of the difference between the two datasets, and why filtering data gave me the right answer when my original function didn't.
Here's the code that I used on store. This gave me the right row count, corresponding to the number of observations that met the criteria I was looking at.
doc <- xmlTreeParse(fileUrl3, useInternalNodes = T)
rootNode <- xmlRoot(doc)
store <- data.table(xpathSApply(rootNode, "//zipcode", xmlValue))
nrow(store[store$V1 == "21231",])
Here's the code for data:
data <- read.table(fileUrl, sep = ",", header = TRUE, nrows = 6496)
nrow(data[data$VAL == "24",])
When I use that code, I get the wrong answer (2129). When I filter the data like this:
newdata <- data %>% filter(data$VAL == 24)
The newdata shows the correct answer (53 observations).
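For reference, a guess at the cause (the file itself isn't shown, so this is an assumption about the data): base data.frame subsetting keeps rows where the condition evaluates to NA, returning them as all-NA rows, while dplyr::filter() and data.table treat NA as FALSE and drop them. A minimal sketch of the difference:

df <- data.frame(VAL = c("24", NA, "7", "24"), other = 1:4)

nrow(df[df$VAL == "24", ])            # 3: the NA comparison comes back as an all-NA row
nrow(dplyr::filter(df, VAL == "24"))  # 2: filter() keeps only rows where the condition is TRUE

library(data.table)
dt <- as.data.table(df)
nrow(dt[VAL == "24"])                 # 2: data.table treats NA in a logical subset as FALSE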
-
correlation matrix from text file
I am trying to make a correlation matrix from the text files that I have. I want to get the correlation values from these files.

The text file I have:
[56] "[1] \”values “”of the [57] "[1] \”e”xamples [58] "[1] \”dummy “”lines [59] "[1] \”testing” [60] "[1] \"Correlation Values\”” [61] "[1] \"Correlation between XXX and YYY: 0.7054 (0.0429)\"" [62] "[1] \"Correlation between XXX and ZZZ: 0.601 (0.0289)\"" [63] "[1] \"Correlation between YYY and ZZZ: 0.6434 (0.0306)\"" [64] "[1] \”Finished\”” [65] "[1] \”testing “”linne [66] “test” [67] “test “again
The matrix will look like
        XXX     YYY     ZZZ
XXX       1  0.7054   0.601
YYY  0.7054       1  0.6434
ZZZ   0.601  0.6434       1
I understand that there is some regex technique involved, but I think it's too advanced for a novice like me. I can get the lines I want from the file using the following, but I am still not able to work out how to extract those numbers and put them in a matrix.

mm[grep("Correlation Values”, mm, value = FALSE) + c(1:3)] ## mm is the above file that I loaded
To add to the complexity, the variables and numbers change across files. Say this is the case of a 4x4 matrix:
[95] "[1] \"Correlation Values\”” [96] "[1] \"Correlation between XXX and YYY: 0.7054 (0.0429)\"" [97] "[1] \"Correlation between XXX and ZZZ: 0.601 (0.0289)\"" [98] "[1] \"Correlation between XXX and CCC: 0.0178 (0.0281)\"" [99] "[1] \"Correlation between YYY and ZZZ: 0.6434 (0.0306)\"" [100] "[1] \"Correlation between YYY and CCC: 0.0103 (0.0286)\"" [101] "[1] \"Correlation between ZZZ and CCC: 0.0174 (0.0202)\"" [102] "[1] \”Finished\””
-
Wordpress and wp_usermeta PIVOT results
I am trying to build a report based on user access to the platform. All users are registered, and they come from different divisions. All of this is stored in the database in the wp_usermeta table.
For example, I want the totals of all the users from the HR division that logged in during December 2020. This is what I came up with, looking at examples from the web:
SELECT (CASE WHEN meta_value = "HR Division" THEN meta_value END) AS school,
       (CASE WHEN meta_key = "last-login" AND YEAR(FROM_UNIXTIME(meta_value)) = 2020
             AND MONTH(FROM_UNIXTIME(meta_value)) = 12 THEN meta_value END) AS logins
       (COUNT(user_id)) as totals,
FROM wp_usermeta
GROUP BY school
but gives me the following error:
You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '(COUNT(user_id)) as totals, FROM wp_usermeta GROUP BY school' at line 4
I really don't know where to go from here. Any ideas?
-
How to divide a pandas pivot table by a dataframe with a different shape?
Objective: I have a pivot table, where I would like to divide each cell by a value from my dataframe, if there is a match.
Specifically, all the cells in the column 0 should be divided by 4 because Store1 is 4 in the dataframe. Similarly, the last column would be divided by 3.
The expected outcome is...
Data:
df = pd.DataFrame({'Start': ['Store1','Store1','Store1','Store2','Store2','Store2','Store3','Store3','Store3'],
                   'Stop': ['Store1','Store2','Store3','Store1','Store2','Store3','Store1','Store2','Store3'],
                   'Distance': [0,100,200,100,0,100,100,100,0]}).pivot(columns='Start', index='Stop', values=None)

df_div = pd.DataFrame({'Distance': ['Store1','Store3'], 'Import': [4,3]})
df_div = df_div.set_index('Distance')
-
Reading and formatting Multilevel, Uneven JSON
I have a JSON like the one shown below
{ "timestamps": [ "2020-12-17T20:05:00Z", "2020-12-17T20:10:00Z", "2020-12-17T20:15:00Z", "2020-12-17T20:20:00Z", "2020-12-17T20:25:00Z", "2020-12-17T20:30:00Z" ], "properties": [ { "values": [ -20.58975828559592, -19.356728999226693, -19.808982964173023, -19.673928070777993, -19.712275037138411, -19.48422739982918 ], "name": "Neg Flow", "type": "Double" }, { "values": [ 2, 20, 19, 20, 19, 16 ], "name": "Event Count", "type": "Long" } ], "progress": 100.0
}
How can I convert this to a data frame like the following? Though I was able to loop through the individual data items, I am interested in finding out if there is a sleeker way to do this.
+----------------------+---------------------+-------------+ |Time Stamps | Neg Flow | Event Count | +----------------------+---------------------+-------------+ |2020-12-17T20:05:00Z |-20.58975828559592 | 2 | +----------------------+---------------------+-------------+ |2020-12-17T20:10:00Z |-19.356728999226693 | 20 | +----------------------+---------------------+-------------+
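For reference, a sketch assuming R with jsonlite (the file name "data.json" is a placeholder): fromJSON() simplifies properties to a data frame whose values column is a list of vectors, one per property, so the whole thing can be bound to the timestamps:

library(jsonlite)
j <- fromJSON("data.json")

# name each property's vector of values and bind them to the timestamps
vals <- setNames(j$properties$values, j$properties$name)
df <- data.frame(Time_Stamps = j$timestamps, vals, check.names = FALSE)
df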
-
How to do a conditional NA fill in R dataframe
It may be simple, but I could not figure it out. How do I fill NA in the feature column with the conditions below in the data frame dt?
.The conditions to fill NA are:
- if the difference in Date is 1, fill the NA with the previous row's value (easily done with the fill function of tidyverse)
dt_fl <- dt %>%
  fill(feature, .direction = "down")
dt_fl
- if the difference in Date is >1, then fill the NA with the previous feature value + 1, and replace the following rows' feature values with increments of 1 to keep the feature values continuous. dt_output shows what I am expecting from dt after filling the NA values and renumbering the features accordingly.
dt <- structure(list(Date = structure(c(15126, 15127, 15128, 15129,
    15130, 15131, 15132, 15133, 15134, 15138, 15139, 15140, 15141,
    15142, 15143, 15144, 15145, 15146, 15147, 15148, 15149), class = "Date"),
    feature = c(1, 1, 1, 1, 1, 1, 1, 1, NA, NA, NA, NA, NA, NA,
    2, 2, 2, 2, 2, 2, NA)), row.names = c(NA, -21L),
    class = c("tbl_df", "tbl", "data.frame"))
dt

dt_output <- structure(list(Date = structure(c(15126, 15127, 15128, 15129,
    15130, 15131, 15132, 15133, 15134, 15138, 15139, 15140, 15141,
    15142, 15143, 15144, 15145, 15146, 15147, 15148, 15149), class = "Date"),
    feature = c(1, 1, 1, 1, 1, 1, 1, 1, NA, NA, NA, NA, NA, NA,
    2, 2, 2, 2, 2, 2, NA),
    finaloutput = c(1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2,
    3, 3, 3, 3, 3, 3, 3)), row.names = c(NA, -21L),
    spec = structure(list(cols = list(
        Date = structure(list(), class = c("collector_character", "collector")),
        feature = structure(list(), class = c("collector_double", "collector")),
        finaloutput = structure(list(), class = c("collector_double", "collector"))),
        default = structure(list(), class = c("collector_guess", "collector")),
        skip = 1L), class = "col_spec"),
    class = c("spec_tbl_df", "tbl_df", "tbl", "data.frame"))
dt_output
The help is greatly appreciated, thank you.
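For reference, a sketch that reproduces finaloutput on the dt above. It assumes the rule is: start a new feature group exactly where the date gap is >1 or where the (filled) feature value changes:

library(dplyr)
library(tidyr)

dt_out <- dt %>%
  fill(feature, .direction = "down") %>%
  mutate(new_grp = as.numeric(Date - lag(Date)) > 1 | feature != lag(feature),
         new_grp = replace_na(new_grp, FALSE),       # first row starts group 1
         finaloutput = cumsum(new_grp) + first(feature))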
-
long form dataset to long(er) form dataset using pivot_longer
I'm trying to get my input dataset to look like the output below. I have tried pivot_longer(input, hyp, math) from library(tidyverse) without success. Is there a way to achieve my desired output?
input <- read.csv("https://quantdev.ssri.psu.edu/sites/qdev/files/nlsy_math_hyp_long.csv")

#==== A few rows of desired output:
    id var grade d_math d_hyp  grp
1  201  38     3      1     0 math
2  201  55     5      1     0 math
3  303  26     2      1     0 math
4  303  33     5      1     0 math
5 2702  56     2      1     0 math
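For reference, a sketch assuming the input has per-grade columns named math and hyp (hedged, since the exact column layout of the CSV isn't shown here):

library(tidyverse)

out <- input %>%
  pivot_longer(cols = c(math, hyp),
               names_to = "grp", values_to = "var",
               values_drop_na = TRUE) %>%
  mutate(d_math = as.integer(grp == "math"),
         d_hyp  = as.integer(grp == "hyp")) %>%
  select(id, var, grade, d_math, d_hyp, grp)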
-
Create and fill columns in a dataset with data in rows from a different dataset
I have DATASET1 with three main columns: country, ID (of political parties) and ideology (a measurement of political position in a 0-10 scale). Each row is a different political party amongst a given country.
I need to transfer this data to DATASET2, where each row is a country (in fact, the real data I am working with has thousands of rows for each country, representing interviewees in a public opinion survey, but I have grouped it into country-level data to make it easier to work with and solve this problem). In this dataset I have some groups of columns with data related to the most important political parties of each country - PARTY A, PARTY B, PARTY C, etc. One of these groups of columns has the same ID as in the other dataset, so we can use it to match data from this other dataset.
The output I want: columns with the "ideology" of each party corresponding to PARTY A, B, etc., but also for the other parties that are not included in DATASET2 (something like "ideology_otherparty_1", "ideology_otherparty_2", etc.).
I am trying to find a way of doing this using tidyr functions like gather and spread, together with other functions like "match" or "case_when", to match PARTY_A etc. with the corresponding rows in DATASET1. The problem is I don't know how to combine these functions in order to make it work.
Here is a sample of the data I have:
dataset1 <- structure(list(country = structure(c(1L, 1L, 1L, 1L, 1L, 1L,
    1L, 1L, 1L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L,
    2L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 4L, 4L, 4L, 4L,
    4L, 4L, 5L, 5L, 5L, 5L, 5L, 5L, 5L, 5L, 5L, 5L, 5L, 5L, 5L, 6L, 6L,
    6L, 6L, 6L, 6L, 6L, 6L, 7L, 7L, 7L, 7L, 8L, 8L, 8L, 8L),
    .Label = c("Argentina", "Brazil", "France", "Japan", "Mexico",
    "Switzerland", "UK", "US"), class = "factor"),
    ID = 1:71,
    ideology = c(5L, 9L, 4L, 8L, 9L, 0L, 0L, 4L, 7L, 3L, 0L, 1L, 2L, 9L,
    1L, 7L, 7L, 1L, 6L, 6L, 1L, 1L, 1L, 8L, 2L, 5L, 5L, 0L, 0L, 9L, 8L,
    6L, 8L, 7L, 7L, 8L, 6L, 4L, 4L, 8L, 6L, 3L, 6L, 5L, 3L, 7L, 9L, 4L,
    0L, 0L, 1L, 0L, 9L, 4L, 8L, 4L, 9L, 4L, 5L, 1L, 4L, 7L, 7L, 9L, 4L,
    4L, 1L, 7L, 3L, 6L, 6L)), class = "data.frame", row.names = c(NA, -71L))

dataset2 <- structure(list(country = structure(1:8, .Label = c("Argentina",
    "Brazil", "France", "Japan", "Mexico", "Switzerland", "UK", "US"),
    class = "factor"),
    party_A.ID = c(1L, 10L, 25L, 37L, 43L, 56L, 64L, 68L),
    party_B_ID = c(2L, 11L, 26L, 38L, 44L, 57L, 65L, 69L),
    party_C_ID = c(3L, 12L, 27L, 39L, 47L, 58L, 66L, 70L),
    party_D_ID = c(4L, 13L, 28L, 40L, 48L, 59L, 67L, 71L),
    party_E_ID = c(6L, 14L, 29L, NA, 49L, 60L, NA, NA)),
    class = "data.frame", row.names = c(NA, -8L))
And here is the OUTPUT I want:
output <- structure(list(country = structure(1:8, .Label = c("Argentina",
    "Brazil", "France", "Japan", "Mexico", "Switzerland", "UK", "US"),
    class = "factor"),
    party_A.ID = c(1L, 10L, 25L, 37L, 43L, 56L, 64L, 68L),
    party_B_ID = c(2L, 11L, 26L, 38L, 44L, 57L, 65L, 69L),
    party_C_ID = c(3L, 12L, 27L, 39L, 47L, 58L, 66L, 70L),
    party_D_ID = c(4L, 13L, 28L, 40L, 48L, 59L, 67L, 71L),
    party_E_ID = c(6L, 14L, 29L, NA, 49L, 60L, NA, NA),
    other_party_1 = c(5L, 15L, 30L, 41L, 45L, 61L, NA, NA),
    other_party_2 = c(NA, 16L, 31L, 42L, 46L, 62L, NA, NA),
    other_party_3 = c(NA, 17L, 32L, NA, 50L, 63L, NA, NA),
    other_party_4 = c(NA, 18L, 33L, NA, 51L, NA, NA, NA),
    other_party_5 = c(NA, 19L, 34L, NA, 52L, NA, NA, NA),
    other_party_6 = c(NA, 20L, 35L, NA, 53L, NA, NA, NA),
    other_party_7 = c(NA, 21L, 36L, NA, 54L, NA, NA, NA),
    other_party_8 = c(NA, 22L, NA, NA, 55L, NA, NA, NA),
    other_party_9 = c(NA, 23L, NA, NA, NA, NA, NA, NA),
    other_party_10 = c(NA, 24L, NA, NA, NA, NA, NA, NA),
    ideology_party_A = c(5L, 3L, 2L, 6L, 6L, 4L, 9L, 7L),
    ideology_party_B = c(9L, 0L, 5L, 4L, 5L, 9L, 4L, 3L),
    ideology_party_C = c(4L, 1L, 5L, 4L, 9L, 4L, 4L, 6L),
    ideology_party_D = c(8L, 2L, 0L, 8L, 4L, 5L, 1L, 6L),
    ideology_party_E = c(NA, 9L, 0L, NA, 0L, 1L, NA, NA),
    ideology_other_party_1 = c(9L, 1L, 9L, 6L, 3L, 4L, NA, NA),
    ideology_other_party_2 = c(NA, 7L, 8L, 3L, 7L, 7L, NA, NA),
    ideology_other_party_3 = c(NA, 7L, 6L, NA, 0L, 7L, NA, NA),
    ideology_other_party_4 = c(NA, 1L, 8L, NA, 1L, NA, NA, NA),
    ideology_other_party_5 = c(NA, 6L, 7L, NA, 0L, NA, NA, NA),
    ideology_other_party_6 = c(NA, 6L, 7L, NA, 9L, NA, NA, NA),
    ideology_other_party_7 = c(NA, 1L, 8L, NA, 4L, NA, NA, NA),
    ideology_other_party_8 = c(NA, 1L, NA, NA, 8L, NA, NA, NA),
    ideology_other_party_9 = c(NA, 1L, NA, NA, NA, NA, NA, NA),
    ideology_other_party_10 = c(NA, 8L, NA, NA, NA, NA, NA, NA)),
    class = "data.frame", row.names = c(NA, -8L))
Notice the ID of parties in DATASET1 is in a sequence 1, 2, 3 ... but this sequence is not always followed when transferring to PARTY A-E. In fact, in my REAL data it is not even close to a sequence; I have 5-digit codes for each party. This is important, because I need a correct matching of each party in the rows of DATASET1 with the rows in DATASET2, based on the "ID" columns.

For the remaining parties (the columns labeled "other_party"), the order doesn't matter (which party turns out to be "other_party_1", "other_party_2", etc.); I just need these columns to be filled with data from the parties that have not been covered by the variables labeled "Party A", "Party B", etc.
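For reference, a partial tidyr/dplyr sketch of the matching step (untested against the real data; the other_party numbering and the final re-widening are only roughed in):

library(tidyverse)

# long view of dataset2's ID columns
ids_long <- dataset2 %>%
  pivot_longer(-country, names_to = "party", values_to = "ID")

# ideology for the named parties, matched on ID and re-widened
named <- ids_long %>%
  left_join(select(dataset1, ID, ideology), by = "ID") %>%
  pivot_wider(id_cols = country, names_from = party,
              values_from = ideology, names_prefix = "ideology_")

# parties never referenced in dataset2, numbered within country
# to become other_party_1, other_party_2, ... (order is arbitrary)
others <- dataset1 %>%
  anti_join(ids_long, by = "ID") %>%
  group_by(country) %>%
  mutate(slot = paste0("other_party_", row_number())) %>%
  ungroup()

A pivot_wider() of others on slot (once for ID, once for ideology), joined back to dataset2 by country, should then give the remaining columns.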
-
How to reshape conjoint data from wide to long?
I have received data from a conjoint survey experiment. What I want to do is to reshape it from wide to long format. However, this seems to be slightly complicated. I am pretty sure it is possible to do with cj_tidy (package cregg), but I can't solve it myself.

In the survey, the respondents were asked to compare two organizations that vary across 7 profiles (Efficiency, Opennes, Inclusion, Leader, Gain & System). In total, respondents were presented with four comparisons. So 2 organizations and 4 comparisons (4x2). They had to choose one of the presented organizations and rate them separately after choosing one.
At the moment, the profile variables are structured in this way: org1_Efficiency_conj_1, org1_Opennes_conj1, etc. The first part, "org", indicates whether it is the first or second organization. The last part, "conj", indicates the order of the conjoint/comparison, where "conj4" is the last comparison. The CHOICE variables also follow the order of the conjoint; for example, "CHOICE_conj1", "CHOICE_conj2", where =1 means the respondent chose "org1" (if =2, then org2 was chosen). The RATING variable indicates a value from 0 to 10 for each organization: RATING_conj1_org1, RATING_conj1_org2, etc.
The current wide format of the data is not suitable for conjoint analysis. What I need is to create 8 observations for each respondent (4x2=8), where the variable CHOICE indicates which of the organizations was chosen (=1 if yes; =0 if no). In a similar way, the variable RATING should indicate the rating given by respondents to both of the organizations (0 to 10).
This is how I would like the data to look like:
Note please that there are also covariates such as Q1 and Q2 in the picture, they are not a part of the experiment and should remain constant for each individual observation.
Below I share 50 observations from my real data.
> dput(cjdata_wide) structure(list(ID = 1:50, org1_Effeciency_conj_1 = > c(3L, 2L, 1L, 3L, 3L, 2L, 3L, 3L, 3L, 3L, 2L, 1L, 1L, 1L, 1L, 1L, 3L, > 2L, 3L, 3L, 3L, 2L, 3L, 1L, 2L, 1L, 3L, 3L, 1L, 1L, 3L, 1L, 1L, 3L, > 3L, 2L, 3L, 2L, 3L, 2L, 1L, 1L, 3L, 2L, 1L, 1L, 1L, 2L, 2L, 1L ), > org1_Oppenes_conj_1 = c(3L, 3L, 1L, 3L, 1L, 3L, 2L, 3L, 2L, 3L, 1L, > 1L, 1L, 2L, 3L, 2L, 2L, 1L, 3L, 1L, 1L, 1L, 1L, 3L, 3L, 3L, 1L, 3L, > 1L, 2L, 2L, 3L, 2L, 2L, 1L, 3L, 1L, 3L, 2L, 2L, 1L, 2L, 3L, 3L, 3L, > 3L, 3L, 2L, 3L, 1L), org1_Inclusion_conj_1 = c(2L, 1L, 1L, 2L, 2L, > 2L, 1L, 1L, 2L, 2L, 1L, 1L, 2L, 2L, 1L, 2L, 1L, 2L, 1L, 1L, 1L, 1L, > 1L, 1L, 1L, 1L, 1L, 1L, 2L, 2L, 1L, 2L, 2L, 1L, 1L, 1L, 1L, 1L, 1L, > 1L, 2L, 1L, 1L, 1L, 2L, 1L, 1L, 2L, 1L, 2L), org1_Leader_conj_1 = > c(5L, 6L, 3L, 6L, 1L, 4L, 2L, 6L, 1L, 6L, 1L, 2L, 2L, 6L, 3L, 2L, 6L, > 3L, 5L, 6L, 3L, 1L, 4L, 3L, 5L, 5L, 2L, 1L, 4L, 1L, 3L, 4L, 2L, 3L, > 5L, 2L, 1L, 3L, 3L, 2L, 1L, 4L, 1L, 5L, 2L, 6L, 1L, 4L, 2L, 3L), > org1_Gain_conj_1 = c(4L, 4L, 1L, 3L, 3L, 8L, 3L, 2L, 6L, 5L, 1L, 6L, > 3L, 8L, 1L, 3L, 6L, 2L, 2L, 5L, 5L, 3L, 4L, 8L, 6L, 4L, 5L, 6L, 6L, > 8L, 4L, 4L, 5L, 7L, 6L, 7L, 3L, 7L, 8L, 2L, 6L, 4L, 6L, 4L, 8L, 4L, > 6L, 4L, 3L, 6L), org1_System_conj_1 = c(5L, 4L, 5L, 1L, 4L, 4L, 5L, > 1L, 2L, 2L, 4L, 3L, 1L, 4L, 4L, 2L, 3L, 3L, 2L, 4L, 3L, 1L, 4L, 3L, > 1L, 1L, 5L, 3L, 1L, 3L, 5L, 4L, 5L, 3L, 2L, 4L, 1L, 2L, 3L, 4L, 1L, > 1L, 3L, 5L, 5L, 5L, 1L, 1L, 5L, 3L), org2_Effeciency_conj_1 = c(2L, > 1L, 3L, 2L, 1L, 3L, 1L, 2L, 2L, 2L, 3L, 2L, 3L, 3L, 3L, 2L, 2L, 1L, > 2L, 2L, 2L, 3L, 1L, 3L, 1L, 3L, 2L, 1L, 2L, 2L, 1L, 2L, 3L, 1L, 2L, > 1L, 1L, 3L, 2L, 1L, 3L, 3L, 2L, 3L, 3L, 2L, 2L, 3L, 3L, 3L), > org2_Oppenes_conj_1 = c(1L, 1L, 3L, 1L, 3L, 1L, 1L, 2L, 3L, 2L, 3L, > 3L, 2L, 1L, 1L, 3L, 3L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 1L, 2L, 3L, 1L, > 2L, 3L, 1L, 1L, 1L, 1L, 2L, 1L, 2L, 1L, 3L, 1L, 2L, 3L, 1L, 1L, 1L, > 2L, 1L, 1L, 1L, 3L), org2_Inclusion_conj_1 = c(1L, 2L, 2L, 1L, 1L, > 1L, 2L, 2L, 1L, 1L, 2L, 2L, 1L, 1L, 2L, 1L, 2L, 1L, 2L, 2L, 2L, 2L, > 2L, 2L, 2L, 2L, 2L, 2L, 1L, 1L, 2L, 1L, 1L, 2L, 2L, 2L, 2L, 2L, 2L, > 2L, 1L, 2L, 2L, 2L, 1L, 2L, 2L, 1L, 2L, 1L), org2_Leader_conj_1 = > c(4L, 5L, 6L, 3L, 2L, 5L, 1L, 3L, 6L, 2L, 4L, 6L, 6L, 5L, 6L, 4L, 1L, > 2L, 4L, 2L, 4L, 6L, 5L, 6L, 4L, 1L, 3L, 5L, 3L, 5L, 6L, 1L, 6L, 4L, > 1L, 3L, 4L, 2L, 1L, 3L, 4L, 3L, 5L, 2L, 4L, 4L, 3L, 3L, 4L, 2L), > org2_Gain_conj_1 = c(5L, 1L, 6L, 5L, 8L, 6L, 4L, 3L, 8L, 8L, 7L, 7L, > 7L, 5L, 7L, 7L, 2L, 6L, 7L, 7L, 6L, 8L, 3L, 1L, 8L, 2L, 6L, 2L, 5L, > 6L, 7L, 1L, 7L, 2L, 2L, 5L, 8L, 6L, 2L, 7L, 8L, 7L, 1L, 8L, 4L, 3L, > 4L, 7L, 7L, 7L), org2_System_conj_1 = c(3L, 3L, 3L, 4L, 3L, 3L, 3L, > 5L, 4L, 4L, 1L, 4L, 3L, 1L, 5L, 5L, 5L, 4L, 3L, 3L, 4L, 4L, 1L, 5L, > 5L, 3L, 4L, 2L, 5L, 2L, 2L, 5L, 3L, 4L, 3L, 5L, 5L, 5L, 5L, 2L, 3L, > 4L, 2L, 1L, 3L, 3L, 2L, 4L, 4L, 2L), org1_Effeciency_conj_2 = c(2L, > 1L, 2L, 3L, 3L, 2L, 1L, 2L, 1L, 3L, 1L, 1L, 1L, 2L, 3L, 3L, 2L, 3L, > 3L, 1L, 2L, 1L, 2L, 3L, 2L, 3L, 3L, 3L, 2L, 2L, 2L, 3L, 2L, 1L, 2L, > 1L, 1L, 3L, 1L, 3L, 1L, 2L, 3L, 3L, 1L, 2L, 1L, 2L, 3L, 3L), > org1_Oppenes_conj_2 = c(1L, 3L, 2L, 1L, 2L, 3L, 3L, 2L, 1L, 3L, 3L, > 2L, 1L, 2L, 3L, 3L, 1L, 1L, 1L, 2L, 1L, 3L, 1L, 3L, 2L, 1L, 3L, 2L, > 3L, 3L, 3L, 3L, 2L, 2L, 1L, 2L, 1L, 2L, 3L, 2L, 1L, 1L, 1L, 1L, 1L, > 1L, 3L, 3L, 2L, 3L), org1_Inclusion_conj_2 = c(2L, 1L, 1L, 2L, 1L, > 1L, 1L, 1L, 1L, 1L, 1L, 2L, 1L, 1L, 2L, 1L, 2L, 2L, 1L, 2L, 2L, 2L, > 2L, 2L, 1L, 2L, 1L, 1L, 2L, 1L, 1L, 1L, 1L, 1L, 2L, 1L, 2L, 2L, 2L, > 1L, 2L, 1L, 1L, 1L, 2L, 2L, 2L, 1L, 1L, 2L), org1_Leader_conj_2 = > 
c(3L, 3L, 2L, 2L, 5L, 5L, 6L, 2L, 2L, 1L, 6L, 5L, 2L, 1L, 2L, 4L, 5L, > 4L, 3L, 6L, 4L, 1L, 5L, 3L, 1L, 5L, 5L, 4L, 6L, 6L, 5L, 6L, 5L, 4L, > 4L, 6L, 3L, 4L, 6L, 2L, 4L, 4L, 1L, 4L, 4L, 3L, 3L, 1L, 4L, 4L), > org1_Gain_conj_2 = c(3L, 1L, 7L, 7L, 2L, 1L, 8L, 1L, 2L, 7L, 5L, 4L, > 4L, 3L, 6L, 3L, 1L, 1L, 8L, 3L, 4L, 3L, 3L, 5L, 4L, 3L, 4L, 8L, 6L, > 8L, 3L, 1L, 8L, 5L, 6L, 3L, 3L, 6L, 7L, 1L, 3L, 6L, 5L, 7L, 6L, 6L, > 3L, 4L, 2L, 6L), org1_System_conj_2 = c(5L, 1L, 5L, 1L, 4L, 3L, 3L, > 4L, 2L, 1L, 5L, 3L, 5L, 3L, 4L, 2L, 2L, 3L, 4L, 1L, 1L, 4L, 3L, 4L, > 3L, 2L, 1L, 1L, 4L, 5L, 2L, 3L, 5L, 3L, 5L, 2L, 4L, 2L, 1L, 5L, 5L, > 1L, 2L, 2L, 5L, 2L, 4L, 3L, 2L, 3L), org2_Effeciency_conj_2 = c(3L, > 3L, 1L, 2L, 2L, 1L, 3L, 1L, 3L, 2L, 2L, 2L, 2L, 1L, 1L, 2L, 3L, 1L, > 2L, 3L, 3L, 2L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 3L, 3L, 2L, 3L, 3L, 3L, > 2L, 2L, 1L, 2L, 2L, 2L, 1L, 1L, 1L, 2L, 1L, 2L, 1L, 1L, 1L), > org2_Oppenes_conj_2 = c(2L, 2L, 1L, 3L, 1L, 1L, 1L, 1L, 3L, 2L, 2L, > 3L, 3L, 1L, 2L, 1L, 2L, 3L, 3L, 3L, 3L, 2L, 3L, 1L, 1L, 3L, 1L, 3L, > 2L, 2L, 2L, 2L, 3L, 3L, 2L, 3L, 3L, 3L, 2L, 3L, 2L, 2L, 2L, 2L, 2L, > 2L, 2L, 1L, 1L, 2L), org2_Inclusion_conj_2 = c(1L, 2L, 2L, 1L, 2L, > 2L, 2L, 2L, 2L, 2L, 2L, 1L, 2L, 2L, 1L, 2L, 1L, 1L, 2L, 1L, 1L, 1L, > 1L, 1L, 2L, 1L, 2L, 2L, 1L, 2L, 2L, 2L, 2L, 2L, 1L, 2L, 1L, 1L, 1L, > 2L, 1L, 2L, 2L, 2L, 1L, 1L, 1L, 2L, 2L, 1L), org2_Leader_conj_2 = > c(6L, 6L, 1L, 4L, 1L, 4L, 4L, 1L, 4L, 4L, 1L, 3L, 5L, 2L, 1L, 5L, 4L, > 6L, 4L, 2L, 3L, 3L, 1L, 4L, 2L, 2L, 6L, 6L, 1L, 5L, 4L, 4L, 1L, 3L, > 3L, 4L, 5L, 5L, 3L, 3L, 6L, 3L, 2L, 5L, 2L, 6L, 4L, 2L, 5L, 1L), > org2_Gain_conj_2 = c(8L, 5L, 3L, 6L, 8L, 2L, 2L, 2L, 7L, 6L, 4L, 1L, > 6L, 7L, 2L, 1L, 2L, 2L, 3L, 2L, 5L, 5L, 4L, 2L, 7L, 2L, 7L, 4L, 7L, > 1L, 2L, 5L, 1L, 2L, 7L, 1L, 6L, 2L, 8L, 7L, 7L, 1L, 6L, 3L, 3L, 2L, > 5L, 3L, 4L, 2L), org2_System_conj_2 = c(1L, 5L, 3L, 4L, 5L, 1L, 4L, > 3L, 4L, 4L, 4L, 5L, 2L, 2L, 1L, 3L, 4L, 4L, 5L, 2L, 5L, 1L, 2L, 1L, > 2L, 3L, 3L, 4L, 1L, 3L, 3L, 5L, 4L, 5L, 1L, 5L, 5L, 5L, 4L, 3L, 2L, > 4L, 4L, 3L, 3L, 4L, 3L, 1L, 1L, 2L), org1_Effeciency_conj_3 = c(1L, > 3L, 3L, 1L, 2L, 3L, 3L, 1L, 2L, 3L, 1L, 3L, 3L, 3L, 2L, 3L, 2L, 1L, > 1L, 2L, 2L, 3L, 2L, 1L, 3L, 3L, 2L, 3L, 2L, 1L, 2L, 3L, 3L, 1L, 3L, > 3L, 2L, 1L, 1L, 1L, 3L, 2L, 3L, 1L, 3L, 3L, 2L, 3L, 3L, 1L), > org1_Oppenes_conj_3 = c(2L, 3L, 3L, 3L, 1L, 2L, 1L, 2L, 1L, 2L, 3L, > 2L, 3L, 3L, 1L, 3L, 3L, 2L, 3L, 3L, 3L, 3L, 1L, 3L, 1L, 3L, 3L, 1L, > 3L, 1L, 2L, 3L, 2L, 1L, 3L, 1L, 3L, 1L, 2L, 2L, 2L, 1L, 2L, 2L, 3L, > 3L, 2L, 3L, 3L, 3L), org1_Inclusion_conj_3 = c(1L, 1L, 1L, 2L, 1L, > 1L, 1L, 2L, 2L, 2L, 1L, 1L, 2L, 2L, 1L, 1L, 1L, 2L, 1L, 2L, 2L, 2L, > 2L, 2L, 1L, 2L, 1L, 2L, 2L, 2L, 2L, 1L, 1L, 2L, 2L, 1L, 2L, 1L, 1L, > 1L, 1L, 1L, 1L, 2L, 2L, 1L, 1L, 2L, 1L, 1L), org1_Leader_conj_3 = > c(3L, 1L, 5L, 6L, 3L, 2L, 2L, 6L, 4L, 3L, 3L, 2L, 2L, 1L, 2L, 3L, 5L, > 6L, 4L, 1L, 2L, 4L, 5L, 1L, 2L, 2L, 2L, 6L, 4L, 6L, 4L, 6L, 1L, 1L, > 3L, 5L, 4L, 1L, 3L, 6L, 2L, 6L, 6L, 1L, 2L, 2L, 6L, 2L, 6L, 5L), > org1_Gain_conj_3 = c(2L, 7L, 2L, 4L, 6L, 7L, 2L, 4L, 1L, 5L, 5L, 7L, > 5L, 7L, 7L, 3L, 2L, 6L, 2L, 5L, 6L, 6L, 7L, 3L, 5L, 6L, 3L, 8L, 1L, > 2L, 8L, 5L, 2L, 8L, 5L, 6L, 5L, 2L, 5L, 3L, 3L, 2L, 4L, 2L, 4L, 5L, > 7L, 6L, 2L, 7L), org1_System_conj_3 = c(5L, 5L, 1L, 1L, 4L, 3L, 1L, > 1L, 2L, 5L, 1L, 5L, 2L, 1L, 5L, 4L, 1L, 1L, 3L, 4L, 5L, 1L, 5L, 3L, > 3L, 5L, 1L, 3L, 2L, 5L, 2L, 1L, 5L, 1L, 3L, 2L, 5L, 5L, 2L, 1L, 3L, > 2L, 2L, 4L, 4L, 4L, 2L, 3L, 5L, 4L), org2_Effeciency_conj_3 = c(2L, > 1L, 2L, 2L, 1L, 2L, 2L, 3L, 1L, 2L, 3L, 1L, 2L, 2L, 1L, 1L, 1L, 3L, > 3L, 
1L, 3L, 1L, 1L, 2L, 2L, 1L, 3L, 2L, 1L, 3L, 1L, 1L, 1L, 3L, 1L, > 2L, 1L, 2L, 3L, 3L, 1L, 1L, 1L, 2L, 1L, 1L, 1L, 1L, 2L, 2L), > org2_Oppenes_conj_3 = c(1L, 1L, 1L, 2L, 3L, 3L, 2L, 1L, 3L, 3L, 1L, > 3L, 1L, 1L, 2L, 1L, 1L, 1L, 1L, 1L, 2L, 1L, 3L, 2L, 3L, 1L, 2L, 3L, > 1L, 2L, 1L, 1L, 3L, 3L, 1L, 3L, 1L, 2L, 3L, 3L, 3L, 3L, 3L, 1L, 2L, > 2L, 1L, 1L, 2L, 1L), org2_Inclusion_conj_3 = c(2L, 2L, 2L, 1L, 2L, > 2L, 2L, 1L, 1L, 1L, 2L, 2L, 1L, 1L, 2L, 2L, 2L, 1L, 2L, 1L, 1L, 1L, > 1L, 1L, 2L, 1L, 2L, 1L, 1L, 1L, 1L, 2L, 2L, 1L, 1L, 2L, 1L, 2L, 2L, > 2L, 2L, 2L, 2L, 1L, 1L, 2L, 2L, 1L, 2L, 2L), org2_Leader_conj_3 = > c(1L, 5L, 2L, 1L, 2L, 4L, 4L, 1L, 2L, 4L, 5L, 5L, 5L, 4L, 3L, 4L, 6L, > 3L, 2L, 2L, 5L, 2L, 2L, 5L, 5L, 3L, 5L, 3L, 3L, 1L, 5L, 5L, 2L, 2L, > 2L, 2L, 1L, 6L, 1L, 5L, 1L, 5L, 1L, 2L, 6L, 6L, 4L, 3L, 2L, 6L), > org2_Gain_conj_3 = c(1L, 8L, 3L, 5L, 2L, 6L, 3L, 2L, 7L, 1L, 2L, 2L, > 8L, 1L, 2L, 6L, 1L, 8L, 6L, 3L, 7L, 4L, 5L, 2L, 6L, 8L, 2L, 7L, 6L, > 8L, 5L, 7L, 3L, 6L, 1L, 8L, 4L, 3L, 7L, 5L, 8L, 8L, 3L, 6L, 3L, 4L, > 5L, 4L, 4L, 5L), org2_System_conj_3 = c(4L, 1L, 4L, 3L, 3L, 5L, 3L, > 3L, 4L, 2L, 3L, 1L, 1L, 5L, 2L, 3L, 3L, 2L, 5L, 3L, 1L, 2L, 3L, 5L, > 1L, 4L, 5L, 2L, 3L, 2L, 3L, 2L, 4L, 3L, 5L, 3L, 1L, 1L, 3L, 2L, 4L, > 5L, 5L, 3L, 1L, 1L, 4L, 1L, 4L, 5L), org1_Effeciency_conj_4 = c(1L, > 1L, 2L, 2L, 3L, 2L, 2L, 3L, 3L, 2L, 3L, 3L, 3L, 3L, 1L, 1L, 2L, 3L, > 3L, 1L, 1L, 3L, 1L, 3L, 2L, 3L, 3L, 3L, 1L, 1L, 3L, 3L, 1L, 3L, 2L, > 3L, 3L, 2L, 3L, 1L, 2L, 2L, 3L, 2L, 1L, 1L, 3L, 3L, 1L, 3L), > org1_Oppenes_conj_4 = c(2L, 1L, 2L, 2L, 2L, 3L, 2L, 3L, 2L, 1L, 1L, > 1L, 3L, 1L, 3L, 2L, 2L, 3L, 2L, 3L, 1L, 3L, 3L, 1L, 1L, 1L, 3L, 1L, > 1L, 1L, 2L, 3L, 2L, 2L, 1L, 2L, 1L, 2L, 1L, 3L, 1L, 3L, 3L, 1L, 3L, > 3L, 3L, 2L, 3L, 2L), org1_Inclusion_conj_4 = c(2L, 2L, 1L, 2L, 2L, > 2L, 2L, 2L, 1L, 1L, 2L, 1L, 2L, 2L, 1L, 2L, 2L, 2L, 2L, 2L, 2L, 1L, > 2L, 2L, 2L, 2L, 2L, 2L, 1L, 1L, 1L, 2L, 2L, 1L, 2L, 2L, 1L, 2L, 1L, > 2L, 2L, 1L, 1L, 2L, 1L, 1L, 1L, 1L, 2L, 1L), org1_Leader_conj_4 = > c(4L, 6L, 5L, 1L, 2L, 1L, 1L, 3L, 3L, 6L, 2L, 5L, 6L, 6L, 6L, 2L, 3L, > 3L, 4L, 4L, 4L, 1L, 5L, 5L, 2L, 6L, 2L, 5L, 4L, 4L, 2L, 5L, 6L, 5L, > 1L, 4L, 4L, 3L, 4L, 2L, 3L, 2L, 5L, 1L, 3L, 6L, 2L, 6L, 4L, 1L), > org1_Gain_conj_4 = c(3L, 1L, 2L, 3L, 4L, 7L, 2L, 7L, 4L, 1L, 6L, 3L, > 5L, 8L, 3L, 7L, 8L, 1L, 3L, 6L, 7L, 1L, 1L, 1L, 1L, 3L, 4L, 3L, 1L, > 8L, 3L, 2L, 1L, 7L, 2L, 4L, 4L, 1L, 6L, 8L, 6L, 3L, 7L, 3L, 8L, 7L, > 3L, 1L, 3L, 3L), org1_System_conj_4 = c(5L, 1L, 2L, 3L, 2L, 5L, 5L, > 2L, 3L, 5L, 3L, 4L, 5L, 2L, 4L, 2L, 3L, 2L, 4L, 4L, 1L, 1L, 4L, 3L, > 2L, 4L, 3L, 1L, 5L, 5L, 2L, 4L, 5L, 4L, 3L, 3L, 1L, 5L, 4L, 1L, 2L, > 3L, 5L, 5L, 3L, 2L, 5L, 2L, 3L, 3L), org2_Effeciency_conj_4 = c(3L, > 3L, 3L, 1L, 1L, 3L, 1L, 2L, 2L, 1L, 2L, 2L, 2L, 2L, 3L, 2L, 1L, 1L, > 2L, 2L, 2L, 2L, 2L, 1L, 1L, 2L, 1L, 1L, 3L, 2L, 2L, 1L, 3L, 1L, 3L, > 2L, 2L, 3L, 2L, 2L, 1L, 1L, 1L, 1L, 2L, 3L, 2L, 2L, 3L, 1L), > org2_Oppenes_conj_4 = c(1L, 3L, 1L, 3L, 3L, 2L, 3L, 2L, 3L, 2L, 2L, > 3L, 2L, 2L, 2L, 1L, 3L, 1L, 3L, 2L, 2L, 1L, 1L, 3L, 3L, 2L, 1L, 3L, > 3L, 2L, 3L, 1L, 3L, 3L, 2L, 1L, 3L, 1L, 3L, 1L, 2L, 2L, 1L, 2L, 1L, > 1L, 2L, 3L, 1L, 1L), org2_Inclusion_conj_4 = c(1L, 1L, 2L, 1L, 1L, > 1L, 1L, 1L, 2L, 2L, 1L, 2L, 1L, 1L, 2L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, > 1L, 1L, 1L, 1L, 1L, 1L, 2L, 2L, 2L, 1L, 1L, 2L, 1L, 1L, 2L, 1L, 2L, > 1L, 1L, 2L, 2L, 1L, 2L, 2L, 2L, 2L, 1L, 2L), org2_Leader_conj_4 = > c(1L, 5L, 2L, 6L, 6L, 6L, 2L, 1L, 2L, 4L, 5L, 3L, 4L, 4L, 2L, 1L, 6L, > 1L, 1L, 2L, 6L, 3L, 1L, 4L, 4L, 3L, 3L, 4L, 6L, 5L, 3L, 2L, 3L, 6L, > 6L, 5L, 2L, 6L, 
3L, 5L, 5L, 1L, 6L, 5L, 4L, 5L, 1L, 2L, 2L, 6L), > org2_Gain_conj_4 = c(5L, 8L, 1L, 2L, 7L, 2L, 7L, 8L, 2L, 6L, 7L, 7L, > 7L, 5L, 8L, 4L, 6L, 6L, 6L, 4L, 6L, 6L, 7L, 2L, 5L, 6L, 6L, 1L, 8L, > 5L, 2L, 5L, 6L, 3L, 3L, 7L, 7L, 8L, 4L, 7L, 5L, 2L, 2L, 7L, 6L, 4L, > 7L, 4L, 4L, 1L), org2_System_conj_4 = c(2L, 3L, 3L, 2L, 4L, 4L, 4L, > 4L, 1L, 4L, 1L, 2L, 4L, 5L, 2L, 3L, 5L, 1L, 1L, 1L, 5L, 4L, 2L, 2L, > 3L, 2L, 1L, 4L, 3L, 4L, 5L, 3L, 1L, 3L, 2L, 4L, 4L, 1L, 3L, 3L, 4L, > 5L, 4L, 4L, 1L, 1L, 3L, 5L, 5L, 1L), CHOICE_conj1 = c(2L, 2L, 1L, 2L, > 2L, 2L, 2L, 2L, 2L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 2L, 1L, 1L, 2L, > 1L, 1L, 1L, 2L, 2L, 2L, 2L, 1L, 1L, 1L, 1L, 1L, 2L, 1L, 1L, 1L, 1L, > 2L, 2L, 2L, 1L, 2L, 1L, 1L, 2L, 1L, 1L, 2L, 1L ), RATING_conj1_org1 = > c(1L, 3L, 6L, 5L, 3L, 1L, 5L, 2L, 0L, 7L, 6L, 8L, 5L, 10L, 8L, 10L, > 1L, 6L, 5L, 8L, 2L, 7L, 0L, 6L, 8L, 0L, 4L, 2L, 8L, 6L, 7L, 7L, 7L, > 2L, 3L, 8L, 6L, 7L, 2L, 7L, 3L, 8L, 5L, 7L, 8L, 6L, 6L, 10L, 3L, 9L), > RATING_conj1_org2 = c(7L, 6L, 4L, 7L, 7L, 1L, 6L, 6L, 0L, 3L, 2L, 0L, > 0L, 9L, 5L, 3L, 1L, 6L, 8L, 5L, 2L, 2L, 0L, 4L, 5L, 0L, 6L, 8L, 3L, > 5L, 6L, 6L, 5L, 8L, 3L, 8L, 3L, 1L, 5L, 9L, 7L, 3L, 7L, 6L, 6L, 4L, > 4L, 0L, 6L, 7L), CHOICE_conj2 = c(1L, 1L, 2L, 2L, 2L, 1L, 2L, 2L, 1L, > 1L, 2L, 1L, 1L, 2L, 2L, 2L, 1L, 1L, 2L, 1L, 1L, 1L, 2L, 2L, 2L, 2L, > 2L, 2L, 2L, 1L, 1L, 2L, 1L, 1L, 1L, 1L, 1L, 2L, 1L, 2L, 1L, 2L, 2L, > 2L, 1L, 1L, 2L, 1L, 1L, 2L), RATING_conj2_org1 = c(5L, 4L, 4L, 4L, > 5L, 1L, 5L, 7L, 0L, 3L, 5L, 6L, 5L, 9L, 5L, 3L, 1L, 4L, 4L, 8L, 3L, > 7L, 0L, 9L, 9L, 1L, 3L, 2L, 3L, 5L, 6L, 4L, 5L, 8L, 3L, 7L, 6L, 1L, > 7L, 0L, 7L, 6L, 6L, 8L, 9L, 7L, 5L, 10L, 7L, 7L), RATING_conj2_org2 = > c(0L, 2L, 7L, 4L, 8L, 1L, 7L, 8L, 0L, 3L, 6L, 0L, 0L, 7L, 8L, 10L, > 0L, 3L, 6L, 8L, 2L, 5L, 0L, 4L, 5L, 2L, 5L, 5L, 7L, 5L, 5L, 7L, 1L, > 2L, 3L, 8L, 3L, 7L, 3L, 6L, 2L, 8L, 8L, 8L, 7L, 6L, 6L, 5L, 5L, 9L), > CHOICE_conj3 = c(2L, 2L, 2L, 1L, 1L, 1L, 1L, 2L, 2L, 1L, 1L, 1L, 2L, > 2L, 1L, 2L, 1L, 1L, 1L, 2L, 1L, 2L, 2L, 2L, 1L, 2L, 1L, 1L, 2L, 2L, > 2L, 1L, 2L, 1L, 2L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 2L, 1L, 1L, 2L, 2L, > 2L, 1L, 1L), RATING_conj3_org1 = c(4L, 6L, 4L, 6L, 7L, 1L, 6L, 3L, > 0L, 6L, 2L, 7L, 0L, 9L, 5L, 3L, 1L, 3L, 4L, 7L, 1L, 8L, 0L, 5L, 5L, > 1L, 5L, 2L, 8L, 5L, 5L, 5L, 3L, 8L, 2L, 4L, 5L, 7L, 8L, 6L, 7L, 6L, > 4L, 9L, 7L, 5L, 4L, 2L, 8L, 9L), RATING_conj3_org2 = c(7L, 4L, 6L, > 5L, 6L, 1L, 3L, 7L, 0L, 3L, 2L, 3L, 3L, 6L, 5L, 10L, 0L, 3L, 4L, 10L, > 0L, 4L, 0L, 7L, 5L, 2L, 3L, 2L, 3L, 5L, 8L, 2L, 7L, 2L, 7L, 5L, 3L, > 3L, 0L, 0L, 2L, 6L, 7L, 8L, 5L, 2L, 8L, 10L, 6L, 8L), CHOICE_conj4 = > c(2L, 1L, 1L, 2L, 2L, 2L, 1L, 2L, 1L, 1L, 2L, 1L, 1L, 2L, 1L, 1L, 1L, > 2L, 1L, 2L, 2L, 2L, 1L, 2L, 1L, 2L, 2L, 2L, 1L, 1L, 1L, 2L, 1L, 2L, > 2L, 1L, 1L, 1L, 1L, 1L, 2L, 2L, 2L, 2L, 2L, 1L, 2L, 1L, 1L, 2L), > RATING_conj4_org1 = c(4L, 5L, 8L, 6L, 4L, 1L, 8L, 3L, 0L, 7L, 5L, 5L, > 2L, 8L, 7L, 10L, 1L, 5L, 5L, 10L, 1L, 3L, 0L, 6L, 7L, 1L, 2L, 5L, 7L, > 8L, 7L, 3L, 6L, 2L, 2L, 8L, 5L, 5L, 4L, 5L, 3L, 7L, 3L, 8L, 8L, 6L, > 2L, 10L, 7L, 7L), RATING_conj4_org2 = c(6L, 4L, 4L, 4L, 5L, 1L, 6L, > 7L, 0L, 3L, 6L, 2L, 0L, 5L, 5L, 3L, 0L, 3L, 4L, 9L, 4L, 8L, 0L, 5L, > 6L, 2L, 8L, 3L, 2L, 5L, 5L, 7L, 2L, 6L, 7L, 8L, 3L, 3L, 1L, 5L, 7L, > 10L, 7L, 10L, 5L, 5L, 7L, 5L, 5L, 8L), Q7 = c(0L, 0L, 8L, 9L, 6L, > 10L, 2L, 2L, 6L, 8L, 0L, 0L, 5L, 2L, 7L, 7L, 3L, 0L, 0L, 5L, 6L, 4L, > 7L, 2L, 977L, 0L, 6L, 3L, 2L, 4L, 7L, 8L, 2L, 1L, 9L, 8L, 10L, 6L, > 0L, 9L, 5L, 0L, 3L, 0L, 0L, 0L, 2L, 5L, 977L, 2L), Q8 = c(1L, 1L, 2L, > 2L, 2L, 2L, 2L, 1L, 2L, 2L, 1L, 1L, 977L, 1L, 2L, 2L, 
1L, 3L, 1L, 1L, > 3L, 1L, 3L, 1L, 2L, 1L, 977L, 1L, 1L, 1L, 2L, 2L, 1L, 1L, 2L, 2L, 2L, > 3L, 3L, 2L, 3L, 3L, 1L, 1L, 1L, 1L, 1L, 1L, 977L, 1L), Q9 = c(4L, 8L, > 1L, 0L, 4L, 0L, 8L, 7L, 0L, 0L, 10L, 10L, 0L, 4L, 0L, 10L, 4L, 5L, > 10L, 8L, 2L, 9L, 0L, 5L, 2L, 0L, 5L, 4L, 4L, 8L, 0L, 0L, 5L, 6L, 2L, > 0L, 0L, 0L, 7L, 4L, 5L, 5L, 6L, 10L, 7L, 4L, 6L, 0L, 977L, 7L), Q10 = > c(8L, 10L, 7L, 5L, 7L, 2L, 7L, 8L, 0L, 2L, 10L, 10L, 0L, 10L, 2L, > 10L, 8L, 8L, 10L, 8L, 7L, 10L, 5L, 7L, 4L, 0L, 7L, 7L, 10L, 10L, 4L, > 2L, 5L, 9L, 5L, 6L, 2L, 4L, 10L, 3L, 5L, 7L, 9L, 10L, 10L, 10L, 8L, > 977L, 977L, 10L), Q11 = c(10L, 9L, 1L, 4L, 5L, 0L, 5L, 6L, 1L, 3L, > 9L, 10L, 0L, 10L, 7L, 7L, 5L, 7L, 10L, 10L, 9L, 7L, 0L, 8L, 7L, 0L, > 7L, 7L, 8L, 10L, 5L, 2L, 2L, 10L, 5L, 1L, 2L, 4L, 6L, 4L, 7L, 10L, > 6L, 8L, 8L, 6L, 8L, 6L, 977L, 10L), Q12 = c(0L, 0L, 0L, 5L, 1L, 10L, > 2L, 0L, 0L, 2L, 0L, 0L, 5L, 0L, 6L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, > 0L, 0L, 10L, 3L, 0L, 0L, 977L, 10L, 7L, 0L, 0L, 5L, 8L, 2L, 0L, 966L, > 7L, 977L, 0L, 0L, 0L, 0L, 0L, 0L, 977L, 977L, 0L), Q13 = c(2L, 2L, > 2L, 2L, 2L, 2L, 1L, 2L, 2L, 1L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, > 2L, 1L, 2L, 2L, 2L, 1L, 1L, 2L, 2L, 2L, 2L, 1L, 1L, 2L, 2L, 2L, 2L, > 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 1L, 977L, 2L), Q14 = > c(3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, > 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, > 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L), Q1 = > c(2L, 2L, 8L, 6L, 5L, 1L, 7L, 3L, 7L, 4L, 1L, 6L, 4L, 1L, 5L, 10L, > 5L, 4L, 3L, 7L, 2L, 5L, 3L, 5L, 977L, 0L, 5L, 4L, 4L, 7L, 5L, 3L, 8L, > 3L, 3L, 0L, 5L, 6L, 3L, 4L, 0L, 3L, 3L, 2L, 7L, 4L, 2L, 7L, 4L, 7L), > Q2 = c(1L, 1L, 1L, 977L, 1L, 3L, 3L, 1L, 2L, 2L, 3L, 1L, 2L, 1L, 3L, > 1L, 1L, 1L, 2L, 1L, 1L, 1L, 2L, 1L, 977L, 2L, 3L, 3L, 1L, 1L, 3L, 2L, > 1L, 1L, 3L, 3L, 2L, 3L, 3L, 2L, 1L, 3L, 3L, 3L, 977L, 1L, 3L, 977L, > 977L, 1L), gender = c(1L, 2L, 1L, 1L, 1L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, > 1L, 2L, 2L, 1L, 1L, 2L, 1L, 2L, 1L, 1L, 1L, 1L, 2L, 1L, 2L, 1L, 2L, > 1L, 1L, 2L, 1L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 1L, 2L, 2L, 2L, 2L, 2L, > 1L, 2L, 2L, 1L), profile_age = c(5L, 2L, 5L, 5L, 3L, 5L, 2L, 5L, 3L, > 5L, 3L, 3L, 5L, 5L, 5L, 5L, 5L, 5L, 2L, 5L, 5L, 5L, 5L, 2L, 5L, 5L, > 5L, 5L, 5L, 5L, 5L, 5L, 5L, 5L, 5L, 1L, 5L, 5L, 1L, 1L, 1L, 1L, 1L, > 1L, 1L, 1L, 1L, 1L, 1L, 1L ), educ = c(6L, 5L, 2L, 5L, 6L, 6L, 4L, 6L, > 3L, 5L, 4L, 5L, 6L, 4L, 4L, 6L, 6L, 6L, 3L, 6L, 5L, 6L, 5L, 5L, 3L, > 4L, 6L, 6L, 5L, 3L, 3L, 4L, 3L, 6L, 3L, 5L, 5L, 6L, 3L, 5L, 3L, 3L, > 3L, 3L, 4L, 5L, 5L, 4L, 2L, 3L)), class = "data.frame", row.names = > c(NA, > -50L))
What I have done so far is this:
library(cregg)

str(long <- cj_tidy(cjdata_wide,
                    profile_variables = c("All the profile variables"),
                    task_variables = c("CHOICE AND RATING VARIABLES HERE"),
                    id = ~ id))
stopifnot(nrow(long) == nrow(data)*4*2)
But I keep getting errors. I have tried to follow the example given by the cregg package, but with no success. Any help is much appreciated! I am open to all possible ways, be it through the cregg package or tidyr, for instance.
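For what it's worth, a partial tidyr sketch (not cregg) for the profile columns, assuming the org{1,2}_{Attribute}_conj_{k} naming seen in the dput; the CHOICE_conj{k} and RATING_conj{k}_org{j} columns would need a similar reshape plus a join by ID, task and org:

library(tidyverse)

profiles_long <- cjdata_wide %>%
  select(ID, matches("^org[12]_.*_conj_[1-4]$")) %>%
  pivot_longer(-ID,
               names_pattern = "org([12])_(.*)_conj_([1-4])",
               names_to = c("org", "attribute", "task"),
               values_to = "level") %>%
  pivot_wider(names_from = attribute, values_from = level)

# this gives one row per ID x org x task, i.e. nrow(cjdata_wide)*4*2 rows
-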
How to plot sjPlots from a nested tibble?
I create some models like this using a nested tidyr dataframe:
set.seed(1)
library(tidyr)
library(dplyr)
library(sjPlot)
library(tibble)
library(purrr)

fits <- tribble(~group, ~colA, ~colB, ~colC,
                sample(c("group1", "group2"), 10, replace = T), 0, sample(10, replace = T), sample(10, replace = T),
                sample(c("group1", "group2"), 10, replace = T), 1, sample(10, replace = T), sample(10, replace = T)) %>%
  unnest(cols = c(colB, colC)) %>%
  nest(data = -group) %>%
  mutate(fit = map(data, ~glm(formula = colA ~ colB + colC, data = .x, family = "binomial"))) %>%
  dplyr::select(group, fit) %>%
  tibble::column_to_rownames("group")
I would like to use this data to create some quick marginal effects plots with sjPlot::plot_models, like this:

plot_models(as.list(fits), type = "pred", terms = c("colB", "colA", "colC"))
Unfortunately, I get the error:

Error in if (fam.info$is_linear) tf <- NULL else tf <- "exp" :
  argument is of length zero
In addition: Warning message:
Could not access model information.
I've played around a bit with the nesting of the data, but I've been unable to get it into a format that sjPlot::plot_models will accept.

What I was expecting to get is a "Forest plot of multiple regression models", as described in the help file. Ultimately, the goal is to plot the marginal effects of the regression models by group, which I was hoping plot_models would do (please correct me if I'm wrong).
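For reference, one observation, offered as a guess at the cause rather than a confirmed fix: after tibble::column_to_rownames(), fits is a one-column data frame, so as.list(fits) yields a single-element list containing the list-column, not the glm objects themselves. Extracting the column gives a plain list of models:

# as.list(fits) is list(fit = <list of 2 glm objects>): one element, not two
models <- setNames(fits$fit, rownames(fits))
str(models, max.level = 1)

# hypothetical call, assuming plot_models() accepts a list of models:
# plot_models(models, type = "pred", terms = c("colB", "colA", "colC"))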