# 4-dimensional choice array in JAGS

I am trying to run a model in JAGS where I have the following dimensions:

• n Subjects (nSubjects)
• 2 sessions (nSessions)
• 4 conditions (nConditions)
• 42 trials per condition (nTrials)

It is a reinforcement learning model that weights action and stimulus values. I started by passing JAGS 4D (nSubjects x nSessions x nTrials x nConditions) variables: the choice, the reward magnitudes associated with the available options, the stimulus and action properties of the choice, and the choice outcome. JAGS then returned the following error:

``````
Error: 4D and higher dimensional arrays not accepted
``````
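For concreteness, a minimal MATLAB sketch of the kind of layout that triggers this (the variable names here are illustrative, not my actual ones; the dimension sizes are assumed to be defined):

``````
% Sketch: a 4D choice array, nSubjs x nSessions x nTrials x nConds
y = nan(nSubjs, nSessions, nTrials, nConds);
% passing any 4D field to JAGS via matjags produces the error above
dataStruct = struct('y', y);
``````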

I then tried to work around the problem by passing four separate variables, one per condition. This made the code much longer and uglier because I had to resort to nested ifelse() calls (see code below); it might work, but it does not feel like a proper solution. More importantly, I don't know how to handle the fact that my choice data is 4D as well. All the other variables appear on the right-hand side of the relations, so the nested ifelse() calls can handle them. But if I split the choice data into four variables, that approach breaks down: as far as I can see, ifelse() is a function, not a control-flow statement, so I cannot write

``````
ifelse(c == 1, yCond1 = ..., ifelse(c == 2, yCond2 = ..., ifelse(c == 3, yCond3 = ..., yCond4 = ...)))
``````
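(To spell out what I mean: as far as I can tell the only legal use of ifelse() in JAGS is assigning its return value, with both branches already being values; `x`, `a` and `b` below are placeholders:)

``````
# ifelse(test, yes, no) returns a value; it cannot perform assignments
x <- ifelse(c == 1, a, b)
``````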

I also tried to create a new 4D variable within the JAGS model itself (4D variables can apparently be created inside JAGS; they just cannot be passed in as data), like so:

`y[i,s,1:nTrials,c] <- ifelse(c == 1, yAct1, ifelse(c == 2, yAct2, ifelse(c == 3, yStim1, yStim2)))`

But then the problem is that my choice data contains NaN values, which are allowed in the dependent variable (left-hand side of a relation) but not in predictors (right-hand side), where they now appear. I assume there must be an elegant way to do this; I just can't come up with one. Any help greatly appreciated. My MATLAB and JAGS code is below (the model is still at an early stage; hierarchical priors etc. are still to be implemented).
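In case it helps to see what I mean by restructuring: one direction I have been toying with (not sure it is idiomatic) is to stack the four condition slices along the trial dimension, so that every data array becomes 3D, plus an index vector mapping each stacked trial back to its condition:

``````
% Stack conditions along the trial dimension: nSubjs x nSessions x (4*nTrials)
yAll = cat(3, choiceComb_Cond1, choiceComb_Cond2, choiceComb_Cond3, choiceComb_Cond4);
% condition index of each stacked trial: [1 1 ... 1 2 2 ... 2 3 ... 4]
condIdx = repelem(1:4, size(choiceComb_Cond1, 3));
``````

Inside the model one could then presumably index condition-specific parameters as `alpha[i,s,condIdx[t]]`, but I haven't worked this through.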

Cheers, David

muLogBetaL = -2.3; muLogBetaU = 3.4; % bounds on mean of distribution of log beta
sigmaLogBetaL = 0.01; sigmaLogBetaU = sqrt(((muLogBetaU-muLogBetaL)^2)/12); % bounds on std of distribution of log beta

dataStruct = struct(...
'nConds',           4,...
'nTrials',          size(choiceComb_Cond1,3),...
'nSubjs',           numel(subjIDs),...
'nSessions',        2,...
'choiceRewCond1',   choiceRewarded_Cond1,...
'choiceRewCond2',   choiceRewarded_Cond2,...
'choiceRewCond3',   choiceRewarded_Cond3,...
'choiceRewCond4',   choiceRewarded_Cond4,...
'choicePushCond1',  choicePush_Cond1,...
'choicePushCond2',  choicePush_Cond2,...
'choicePushCond3',  choicePush_Cond3,...
'choicePushCond4',  choicePush_Cond4,...
'choiceYellCond1',  choiceYell_Cond1,...
'choiceYellCond2',  choiceYell_Cond2,...
'choiceYellCond3',  choiceYell_Cond3,...
'choiceYellCond4',  choiceYell_Cond4,...
'winAmtPushCond1',  winAmtPush_Cond1,...
'winAmtPushCond2',  winAmtPush_Cond2,...
'winAmtPushCond3',  winAmtPush_Cond3,...
'winAmtPushCond4',  winAmtPush_Cond4,...
'winAmtYellCond1',  winAmtYell_Cond1,...
'winAmtYellCond2',  winAmtYell_Cond2,...
'winAmtYellCond3',  winAmtYell_Cond3,...
'winAmtYellCond4',  winAmtYell_Cond4,...
'yCond1',           choiceComb_Cond1,...
'yCond2',           choiceComb_Cond2,...
'yCond3',           choiceComb_Cond3,...
'yCond4',           choiceComb_Cond4);

monitorParameters = {'alpha','beta','acw','vPush'};
initStruct.alpha=ones(numel(subjIDs),2,4).*.5;
initStruct.beta=ones(numel(subjIDs),2).*.02;
initStruct.acw=ones(numel(subjIDs),2,4).*.02;
init0(1) = initStruct; % a structure array that has the initial values for all latent variables for each chain

doParallel = 0;
modelName = 'modelRL';
nChains = 1;
nBurnin = 500;
nSamples = 5000;
nThin = 1;
doDIC = 0;

fprintf( 'Running JAGS ...\n' ); % display
[samples, stats] = matjags( ...
dataStruct, ...                           % Observed data
fullfile(pwd, [modelName '.txt']), ...    % File that contains model definition
init0 , ...                               % Initial values for latent variables
1 , ...                                   % which copy of matjags to run
'doparallel' , doParallel, ...            % Parallelization flag
'nchains', nChains,...                    % Number of MCMC chains
'nburnin', nBurnin,...                    % Number of burnin steps
'nsamples', nSamples, ...                 % Number of samples to extract
'thin', nThin, ...                        % Thinning parameter
'dic', doDIC, ...                         % Do the DIC?
'monitorparams', monitorParameters, ...   % List of latent variables to monitor
'savejagsoutput' , 0 , ...                % Save command line output produced by JAGS?
'verbosity' , 2 , ...                     % 0=do not produce any output; 1=minimal text output; 2=maximum text output
'cleanup' , 0 ,...                        % clean up of temporary files?
'rndseed',1);                             % Randomise seed; 0=no; 1=yes
``````
``````
model{
  for (i in 1:nSubjs) {
    for (s in 1:nSessions) {
      for (c in 1:nConds) {

        # STARTING VALUES FOR FIRST TRIAL
        # Reward probability estimate for pushing/yellow
        pPushRew[i,s,1,c] <- .5
        pYellRew[i,s,1,c] <- .5

        # Expected values (deterministic relations use <- in JAGS, not =)
        vPush[i,s,1,c] <- ifelse(c == 1,
          pPushRew[i,s,1,c] * winAmtPushCond1[i,s,1],
          ifelse(c == 2,
            pPushRew[i,s,1,c] * winAmtPushCond2[i,s,1],
            ifelse(c == 3,
              pPushRew[i,s,1,c] * winAmtPushCond3[i,s,1],
              pPushRew[i,s,1,c] * winAmtPushCond4[i,s,1])))
        vYell[i,s,1,c] <- ifelse(c == 1,
          pYellRew[i,s,1,c] * winAmtYellCond1[i,s,1],
          ifelse(c == 2,
            pYellRew[i,s,1,c] * winAmtYellCond2[i,s,1],
            ifelse(c == 3,
              pYellRew[i,s,1,c] * winAmtYellCond3[i,s,1],
              pYellRew[i,s,1,c] * winAmtYellCond4[i,s,1])))
        vPull[i,s,1,c] <- ifelse(c == 1,
          (1-pPushRew[i,s,1,c]) * (100-winAmtPushCond1[i,s,1]),
          ifelse(c == 2,
            (1-pPushRew[i,s,1,c]) * (100-winAmtPushCond2[i,s,1]),
            ifelse(c == 3,
              (1-pPushRew[i,s,1,c]) * (100-winAmtPushCond3[i,s,1]),
              (1-pPushRew[i,s,1,c]) * (100-winAmtPushCond4[i,s,1]))))
        vBlue[i,s,1,c] <- ifelse(c == 1,
          (1-pYellRew[i,s,1,c]) * (100-winAmtYellCond1[i,s,1]),
          ifelse(c == 2,
            (1-pYellRew[i,s,1,c]) * (100-winAmtYellCond2[i,s,1]),
            ifelse(c == 3,
              (1-pYellRew[i,s,1,c]) * (100-winAmtYellCond3[i,s,1]),
              (1-pYellRew[i,s,1,c]) * (100-winAmtYellCond4[i,s,1]))))

        # Weighted Action-Color combination value
        AC[i,s,1,c,1] <- (acw[i,s,c] * vPull[i,s,1,c]) + ((1-acw[i,s,c]) * vBlue[i,s,1,c])
        AC[i,s,1,c,2] <- (acw[i,s,c] * vPull[i,s,1,c]) + ((1-acw[i,s,c]) * vYell[i,s,1,c])
        AC[i,s,1,c,3] <- (acw[i,s,c] * vPush[i,s,1,c]) + ((1-acw[i,s,c]) * vBlue[i,s,1,c])
        AC[i,s,1,c,4] <- (acw[i,s,c] * vPush[i,s,1,c]) + ((1-acw[i,s,c]) * vYell[i,s,1,c])

        # softmax choice probability for each action-color combination
        # (each numerator term must also appear in the denominator)
        for (k in 1:4) {
          pAC[i,s,1,c,k] <- exp(beta[i,s]*AC[i,s,1,c,k]) / (exp(beta[i,s]*AC[i,s,1,c,1])+exp(beta[i,s]*AC[i,s,1,c,2])+exp(beta[i,s]*AC[i,s,1,c,3])+exp(beta[i,s]*AC[i,s,1,c,4]))
        }

        # realizing choice
        y[i,s,1,c] ~ dcat(pAC[i,s,1,c,1:4]) # THIS DOESN'T WORK BECAUSE y IS A 4D ARRAY

        # LOOP OVER TRIALS
        for (t in 2:nTrials) {

          # updating of reward probability estimate for pushing
          # (if the other option was chosen, the prediction error uses the inverted outcome 1-r)
          pPushRew[i,s,t,c] <- ifelse(c == 1,
            ifelse(choicePushCond1[i,s,t-1] == 1,
              pPushRew[i,s,t-1,c] + alpha[i,s,c]*(choiceRewCond1[i,s,t-1]-pPushRew[i,s,t-1,c]),
              pPushRew[i,s,t-1,c] + alpha[i,s,c]*((1-choiceRewCond1[i,s,t-1])-pPushRew[i,s,t-1,c])),
            ifelse(c == 2,
              ifelse(choicePushCond2[i,s,t-1] == 1,
                pPushRew[i,s,t-1,c] + alpha[i,s,c]*(choiceRewCond2[i,s,t-1]-pPushRew[i,s,t-1,c]),
                pPushRew[i,s,t-1,c] + alpha[i,s,c]*((1-choiceRewCond2[i,s,t-1])-pPushRew[i,s,t-1,c])),
              ifelse(c == 3,
                ifelse(choicePushCond3[i,s,t-1] == 1,
                  pPushRew[i,s,t-1,c] + alpha[i,s,c]*(choiceRewCond3[i,s,t-1]-pPushRew[i,s,t-1,c]),
                  pPushRew[i,s,t-1,c] + alpha[i,s,c]*((1-choiceRewCond3[i,s,t-1])-pPushRew[i,s,t-1,c])),
                ifelse(choicePushCond4[i,s,t-1] == 1,
                  pPushRew[i,s,t-1,c] + alpha[i,s,c]*(choiceRewCond4[i,s,t-1]-pPushRew[i,s,t-1,c]),
                  pPushRew[i,s,t-1,c] + alpha[i,s,c]*((1-choiceRewCond4[i,s,t-1])-pPushRew[i,s,t-1,c])))))

          # updating of reward probability estimate for yellow
          pYellRew[i,s,t,c] <- ifelse(c == 1,
            ifelse(choiceYellCond1[i,s,t-1] == 1,
              pYellRew[i,s,t-1,c] + alpha[i,s,c]*(choiceRewCond1[i,s,t-1]-pYellRew[i,s,t-1,c]),
              pYellRew[i,s,t-1,c] + alpha[i,s,c]*((1-choiceRewCond1[i,s,t-1])-pYellRew[i,s,t-1,c])),
            ifelse(c == 2,
              ifelse(choiceYellCond2[i,s,t-1] == 1,
                pYellRew[i,s,t-1,c] + alpha[i,s,c]*(choiceRewCond2[i,s,t-1]-pYellRew[i,s,t-1,c]),
                pYellRew[i,s,t-1,c] + alpha[i,s,c]*((1-choiceRewCond2[i,s,t-1])-pYellRew[i,s,t-1,c])),
              ifelse(c == 3,
                ifelse(choiceYellCond3[i,s,t-1] == 1,
                  pYellRew[i,s,t-1,c] + alpha[i,s,c]*(choiceRewCond3[i,s,t-1]-pYellRew[i,s,t-1,c]),
                  pYellRew[i,s,t-1,c] + alpha[i,s,c]*((1-choiceRewCond3[i,s,t-1])-pYellRew[i,s,t-1,c])),
                ifelse(choiceYellCond4[i,s,t-1] == 1,
                  pYellRew[i,s,t-1,c] + alpha[i,s,c]*(choiceRewCond4[i,s,t-1]-pYellRew[i,s,t-1,c]),
                  pYellRew[i,s,t-1,c] + alpha[i,s,c]*((1-choiceRewCond4[i,s,t-1])-pYellRew[i,s,t-1,c])))))

          # EV for pushing/yellow
          vPush[i,s,t,c] <- ifelse(c == 1,
            pPushRew[i,s,t,c] * winAmtPushCond1[i,s,t],
            ifelse(c == 2,
              pPushRew[i,s,t,c] * winAmtPushCond2[i,s,t],
              ifelse(c == 3,
                pPushRew[i,s,t,c] * winAmtPushCond3[i,s,t],
                pPushRew[i,s,t,c] * winAmtPushCond4[i,s,t])))
          vYell[i,s,t,c] <- ifelse(c == 1,
            pYellRew[i,s,t,c] * winAmtYellCond1[i,s,t],
            ifelse(c == 2,
              pYellRew[i,s,t,c] * winAmtYellCond2[i,s,t],
              ifelse(c == 3,
                pYellRew[i,s,t,c] * winAmtYellCond3[i,s,t],
                pYellRew[i,s,t,c] * winAmtYellCond4[i,s,t])))

          # EV for pulling/blue
          vPull[i,s,t,c] <- ifelse(c == 1,
            (1-pPushRew[i,s,t,c]) * (100-winAmtPushCond1[i,s,t]),
            ifelse(c == 2,
              (1-pPushRew[i,s,t,c]) * (100-winAmtPushCond2[i,s,t]),
              ifelse(c == 3,
                (1-pPushRew[i,s,t,c]) * (100-winAmtPushCond3[i,s,t]),
                (1-pPushRew[i,s,t,c]) * (100-winAmtPushCond4[i,s,t]))))
          vBlue[i,s,t,c] <- ifelse(c == 1,
            (1-pYellRew[i,s,t,c]) * (100-winAmtYellCond1[i,s,t]),
            ifelse(c == 2,
              (1-pYellRew[i,s,t,c]) * (100-winAmtYellCond2[i,s,t]),
              ifelse(c == 3,
                (1-pYellRew[i,s,t,c]) * (100-winAmtYellCond3[i,s,t]),
                (1-pYellRew[i,s,t,c]) * (100-winAmtYellCond4[i,s,t]))))

          # Weighted Action-Color combination value
          AC[i,s,t,c,1] <- (acw[i,s,c] * vPull[i,s,t,c]) + ((1-acw[i,s,c]) * vBlue[i,s,t,c])
          AC[i,s,t,c,2] <- (acw[i,s,c] * vPull[i,s,t,c]) + ((1-acw[i,s,c]) * vYell[i,s,t,c])
          AC[i,s,t,c,3] <- (acw[i,s,c] * vPush[i,s,t,c]) + ((1-acw[i,s,c]) * vBlue[i,s,t,c])
          AC[i,s,t,c,4] <- (acw[i,s,c] * vPush[i,s,t,c]) + ((1-acw[i,s,c]) * vYell[i,s,t,c])

          # softmax choice probability for each action-color combination
          # (each numerator term must also appear in the denominator)
          for (k in 1:4) {
            pAC[i,s,t,c,k] <- exp(beta[i,s]*AC[i,s,t,c,k]) / (exp(beta[i,s]*AC[i,s,t,c,1])+exp(beta[i,s]*AC[i,s,t,c,2])+exp(beta[i,s]*AC[i,s,t,c,3])+exp(beta[i,s]*AC[i,s,t,c,4]))
          }

          # realizing choice
          y[i,s,t,c] ~ dcat(pAC[i,s,t,c,1:4]) # THIS DOESN'T WORK BECAUSE y IS A 4D ARRAY
        } # end trial loop

      } # end condition loop

      # PRIORS
      for (c in 1:nConds) {
        alpha[i,s,c] ~ dbeta(1,1)
        acw[i,s,c] ~ dbeta(1,1)
      } # end condition-specific priors
      beta[i,s] ~ dnorm(.001, 0.03) T(.0001, .045)
    } # end session loop
  } # end subject loop
} # end model
``````