Which is better: a subtraction analysis (RFX) or modelling participant intercepts (MFX)?

Let's make some fake data

Thirty participants in a paired-samples design, yielding 60 observations in total:
N = 30; % Number of participants;
GT.Corr = 0.4; % Correlation between observations;
GT.D = 0.3; % Effect size;
% So...
GT.Mu = [0, GT.D]; % Condition means;
GT.Sigma = [1, GT.Corr; GT.Corr, 1]; % Population covariance matrix;
rng(123456);
Y = mvnrnd(GT.Mu, GT.Sigma, N); % Observations;
% Plot the data:
figure;
scatter(...
[zeros(N,1);ones(N,1)],...
Y(:),...
50,'k','filled');
hold on;
for iSubject = 1:N
line([0,1],Y(iSubject,:));
end
xlim([-1,2]);

Run a subtraction analysis (RFX)

RFX.Y = diff(Y,1,2); % Y(:,2) - Y(:,1);
RFX.X = ones(N,1); % Design matrix for a 1-sample test;
figure;
imagesc(RFX.X);
title('Design matrix for a one sample t-test');
xlim([0,2]); colormap(copper); colorbar; caxis([0,1]);
First, compute the following:
  1. beta values (i.e. model parameters/coefficients);
  2. the projection (a.k.a Hat) matrix;
  3. the residual maker matrix;
  4. an unbiased estimate of the mean squared error (MSE);
  5. the parameter covariance matrix;
  6. the contrast matrix;
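In matrix notation, with X the n-by-p design matrix and y the data vector, the quantities computed in the code below are:

$$\hat{\beta} = X^{+} y, \qquad P = X\,(X^{\top}X)^{+}X^{\top}, \qquad M = I_n - P, \qquad r = M y$$

$$\mathrm{MSE} = \frac{r^{\top} r}{n - p}, \qquad \widehat{\mathrm{Cov}}(\hat{\beta}) = \mathrm{MSE}\,(X^{\top}X)^{+}$$

(For a full-rank design, n - p equals the residual degrees of freedom trace(M); this distinction matters for MFX1 below.)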
RFX.B = pinv(RFX.X) * RFX.Y; % Beta value ...
% ... Note that this is the mean of RFX.Y;
RFX.P = RFX.X*pinv(RFX.X'*RFX.X)*RFX.X'; % Hat matrix ...
% ... Produces predicted values;
RFX.M = eye(size(RFX.P,1)) - RFX.P; % Residual maker matrix...
% ... Produces residual values;
RFX.R = RFX.M * RFX.Y; % Residuals;
RFX.MSE = (RFX.R'*RFX.R) / (size(RFX.X,1)-size(RFX.X,2)); % MSE ...
% ... An unbiased estimate of the mean squared error;
RFX.C = RFX.MSE * pinv(RFX.X'*RFX.X); % Parameter covariance...
% ... Note that this is simply the squared standard error of RFX.Y ...
% ... It is equivalent to var(RFX.Y)/N;
RFX.H = 1; % The contrast matrix for a 1-sample test;
Now, we can compute the stats that we really care about:
  1. Mu: the mean difference between conditions;
  2. SE: the standard error of that difference;
  3. Tval: the t-value;
  4. DFE: the error degrees of freedom;
  5. Pval: the p-value;
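With contrast matrix H, these are:

$$\hat{\mu} = H\hat{\beta}, \qquad \mathrm{SE} = \sqrt{H\,\widehat{\mathrm{Cov}}(\hat{\beta})\,H^{\top}}, \qquad t = \hat{\mu}/\mathrm{SE}, \qquad \nu = \mathrm{trace}(M), \qquad p = 2\bigl(1 - F_t(\lvert t \rvert;\ \nu)\bigr)$$

where F_t(·; ν) is the cumulative distribution function of Student's t with ν degrees of freedom (tcdf below).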
RFX.Mu = RFX.H * RFX.B; % Mean difference ...
% ... Equivalent to mean(RFX.Y);
RFX.SE = sqrt(RFX.H * RFX.C * RFX.H'); % Contrast standard error ...
% ... Equivalent to std(RFX.Y)/sqrt(N);
RFX.Tval = RFX.Mu / RFX.SE; % T-statistic;
RFX.DFE = trace(RFX.M); % Error degrees of freedom;
RFX.Pval = (1 - tcdf(abs(RFX.Tval),RFX.DFE)) * 2; % p-value ...
% ... 2-tailed;
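Since the script already uses mvnrnd and tcdf from the Statistics and Machine Learning Toolbox, ttest should also be available; as a quick sanity check, the subtraction analysis should reproduce MATLAB's built-in paired t-test (the *_check variables below are local to this check):
[~, p_check, ~, stats_check] = ttest(Y(:,2), Y(:,1)); % paired test on Y(:,2) - Y(:,1);
fprintf('ttest: t = %.4f, df = %.0f, p = %.6f\n', stats_check.tstat, stats_check.df, p_check);
fprintf('RFX:   t = %.4f, df = %.0f, p = %.6f\n', RFX.Tval, RFX.DFE, RFX.Pval);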

Run model with participant intercepts (MFX1)

MFX1.Y = Y(:); % [Condition 1;Condition 2] ...
% ... This reshapes the observations so that:
% - all condition 1 observations are listed at the top;
% - all condition 2 observations are listed at the bottom;
% These two columns represent the fixed effects that we care about ...
% ... (encode each condition mean):
MFX1.X_Fixed = kron(eye(2),ones(N,1));
% These are the random intercepts for participants:
MFX1.X_Rando = kron(ones(2,1),eye(N));
% Stick them together to make the design matrix:
MFX1.X = [MFX1.X_Fixed,MFX1.X_Rando];
% Plot the design matrix:
figure;
imagesc(MFX1.X);
title('Design matrix for a quasi-mixed-effects model');
xlim([0,N+2+1]); colormap(copper); colorbar;
Exactly as before, compute the following:
  1. beta values (i.e. model parameters/coefficients);
  2. the projection (a.k.a Hat) matrix;
  3. the residual maker matrix;
  4. an unbiased estimate of the mean squared error (MSE);
  5. the parameter covariance matrix;
  6. the contrast matrix;
MFX1.B = pinv(MFX1.X) * MFX1.Y; % Beta values;
MFX1.P = MFX1.X*pinv(MFX1.X'*MFX1.X)*MFX1.X'; % Hat matrix ...
% ... Produces predicted values;
MFX1.M = eye(size(MFX1.P,1)) - MFX1.P; % Residual maker matrix...
% ... Produces residual values;
MFX1.R = MFX1.M * MFX1.Y; % Residuals;
MFX1.MSE = (MFX1.R'*MFX1.R) / (size(MFX1.X,1)-size(MFX1.X,2)); % MSE ...
% ... An unbiased estimate of the mean squared error;
MFX1.C = MFX1.MSE * pinv(MFX1.X'*MFX1.X); % Parameter covariance;
MFX1.H = [-1,1,zeros(1,N)]; % The contrast matrix;
Again, we can now compute the stats that we really care about:
  1. Mu: the mean difference between conditions;
  2. SE: the standard error of that difference;
  3. Tval: the t-value;
  4. DFE: the error degrees of freedom;
  5. Pval: the p-value;
MFX1.Mu = MFX1.H * MFX1.B; % Mean difference ...
% ... Equivalent to mean(Y(:,2)-Y(:,1));
MFX1.SE = sqrt(MFX1.H * MFX1.C * MFX1.H'); % Contrast standard error;
MFX1.Tval = MFX1.Mu / MFX1.SE; % T-statistic;
MFX1.DFE = trace(MFX1.M); % Error degrees of freedom;
MFX1.Pval = (1 - tcdf(abs(MFX1.Tval),MFX1.DFE)) * 2; % p-value ...
% ... 2-tailed;

Now make the comparison

Stats_RFX = [RFX.Mu;RFX.SE;RFX.Tval;RFX.DFE;RFX.Pval];
Stats_MFX1 = [MFX1.Mu;MFX1.SE;MFX1.Tval;MFX1.DFE;MFX1.Pval];
Stats = table(Stats_RFX,Stats_MFX1,...
'VariableNames',{'RFX','MFX1'},...
'RowNames',{'Mu','SE','Tval','DFE','Pval'});
disp(Stats);
              RFX         MFX1  
            ________    ________

    Mu       0.37574     0.37574
    SE        0.1965     0.19998
    Tval      1.9121      1.8789
    DFE           29          29
    Pval    0.065786    0.070349
The standard error from MFX1 is larger, which reduces the t-value. The cause is the rank deficiency of the MFX1 design: the sum of the first two columns of MFX1.X equals the sum of the remaining columns, so the design matrix is over-specified and MFX1.X'*MFX1.X is singular (hence the use of pinv).
Because of this, size(MFX1.X,2) = N+2 = 32 over-counts the independent parameters by one, and the MSE denominator size(MFX1.X,1)-size(MFX1.X,2) = 28 understates the true error degrees of freedom, trace(MFX1.M) = 29. The MSE is therefore inflated by a factor of 29/28, and the standard error by sqrt(29/28), relative to the subtraction analysis; the contrast estimate itself (the Mu row above) is unaffected.
In this two-condition example, the discrepancy can be eliminated by removing one of the participant intercepts from MFX1.X, which restores a full-rank design (implemented in MFX2 below).
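A minimal check of the rank argument, using only base MATLAB functions and the quantities already computed above:
fprintf('Columns in MFX1.X:          %d\n', size(MFX1.X,2));                % N+2 = 32;
fprintf('Rank of MFX1.X:             %d\n', rank(MFX1.X));                  % N+1 = 31;
fprintf('MSE denominator (n - p):    %d\n', size(MFX1.X,1)-size(MFX1.X,2)); % 60 - 32 = 28;
fprintf('Error df, trace(MFX1.M):    %.0f\n', trace(MFX1.M));               % 60 - 31 = 29;
fprintf('MFX1.SE / RFX.SE:           %.5f\n', MFX1.SE/RFX.SE);              % Matches sqrt(29/28);
fprintf('sqrt(29/28):                %.5f\n', sqrt(29/28));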

Removing one participant intercept (MFX2)

MFX2.Y = Y(:); % [Condition 1;Condition 2] ...
% ... This reshapes the observations so that:
% - all condition 1 observations are listed at the top;
% - all condition 2 observations are listed at the bottom;
% These two columns represent the fixed effects that we care about ...
% ... (encode each condition mean):
MFX2.X_Fixed = kron(eye(2),ones(N,1));
% These are the random intercepts for participants:
MFX2.X_Rando = kron(ones(2,1),eye(N));
% NOW! CHOP OFF THE LAST PARTICIPANT INTERCEPT!
MFX2.X_Rando = MFX2.X_Rando(:,1:end-1);
% Stick them together to make the design matrix:
MFX2.X = [MFX2.X_Fixed,MFX2.X_Rando];
Same stuff as before...
MFX2.B = pinv(MFX2.X) * MFX2.Y; % Beta values;
MFX2.P = MFX2.X*pinv(MFX2.X'*MFX2.X)*MFX2.X'; % Hat matrix ...
% ... Produces predicted values;
MFX2.M = eye(size(MFX2.P,1)) - MFX2.P; % Residual maker matrix...
% ... Produces residual values;
MFX2.R = MFX2.M * MFX2.Y; % Residuals;
MFX2.MSE = (MFX2.R'*MFX2.R) / (size(MFX2.X,1)-size(MFX2.X,2)); % MSE ...
% ... An unbiased estimate of the mean squared error;
MFX2.C = MFX2.MSE * pinv(MFX2.X'*MFX2.X); % Parameter covariance;
MFX2.H = [-1,1,zeros(1,N-1)]; % The contrast matrix;
MFX2.Mu = MFX2.H * MFX2.B; % Mean difference ...
% ... Equivalent to mean(Y(:,2)-Y(:,1));
MFX2.SE = sqrt(MFX2.H * MFX2.C * MFX2.H'); % Contrast standard error;
MFX2.Tval = MFX2.Mu / MFX2.SE; % T-statistic;
MFX2.DFE = trace(MFX2.M); % Error degrees of freedom;
MFX2.Pval = (1 - tcdf(abs(MFX2.Tval),MFX2.DFE)) * 2; % p-value ...
% ... 2-tailed;
Stats_MFX2 = [MFX2.Mu;MFX2.SE;MFX2.Tval;MFX2.DFE;MFX2.Pval];
Stats = table(Stats_RFX,Stats_MFX1,Stats_MFX2,...
'VariableNames',{'RFX','MFX1','MFX2'},...
'RowNames',{'Mu','SE','Tval','DFE','Pval'});
disp(Stats);
              RFX         MFX1        MFX2  
            ________    ________    ________

    Mu       0.37574     0.37574     0.37574
    SE        0.1965     0.19998      0.1965
    Tval      1.9121      1.8789      1.9121
    DFE           29          29          29
    Pval    0.065786    0.070349    0.065786
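As a final sanity check, MFX2 should reproduce the subtraction analysis down to floating-point rounding:
disp(max(abs(Stats_RFX - Stats_MFX2))); % Expected to be ~0 (floating-point rounding only);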