Conversation
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 546c2c6e57
ℹ️ About Codex in GitHub
Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".
R/reliablity_single_item.R
Outdated
```r
T <- t(I) %*% theta %*% I

# denominators D_f
denom <- vapply(lVs, function(lv)
  getCompositeDenominator(lv, parTable, cfa, scale.corrected),
  numeric(1)
)
if (scale.corrected) {
```
Restore mean-scaling for uncorrected composite residual covariance
When scale.corrected = FALSE, composites are built with rowMeans earlier in this file, so residual covariances must be divided by the indicator counts for each latent variable; this refactor now computes T <- t(I) %*% theta %*% I and only rescales inside if (scale.corrected), so the FALSE branch returns raw block sums. That inflates off-diagonal residual covariances by roughly k_i * k_j and propagates incorrect ~~ terms into the generated syntax.
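As a minimal sketch of what this review is asking for — assuming `theta` is the indicator residual covariance matrix and `item_list` maps each latent variable to its indicator names (both names are hypothetical, not the package's actual objects) — the uncorrected branch would rescale every block sum by the indicator counts:

```r
# Hypothetical sketch: rescale block sums of theta by indicator counts
# so the result matches composites built with rowMeans().
# `theta`: indicator residual covariance matrix (assumed name)
# `item_list`: named list mapping each latent variable to its indicators
composite_rescov <- function(theta, item_list) {
  p <- length(item_list)
  lVs <- names(item_list)
  resCov <- matrix(0, p, p, dimnames = list(lVs, lVs))
  for (i in seq_len(p)) {
    for (j in i:p) {
      block_sum <- sum(theta[item_list[[i]], item_list[[j]], drop = FALSE])
      # rowMeans() composites: divide by k_i * k_j, not the raw block sum
      val <- block_sum / (length(item_list[[i]]) * length(item_list[[j]]))
      resCov[i, j] <- val
      resCov[j, i] <- val
    }
  }
  resCov
}
```

With mean composites, cov(mean(x_i), mean(x_j)) equals the block sum of theta divided by k_i * k_j, which is exactly the division the refactor's `FALSE` branch dropped.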
R/reliablity_single_item.R
Outdated
```r
l <- apply(lambda, MARGIN = 2L, FUN = sum)

l.inv <- 1 / l
l.inv[l <= 0] <- 1L
```
Keep loading-sign scaling in scale-corrected residual covariance
Replacing 1 / l with 1 for all l <= 0 changes the model math for factors whose summed loadings are negative (or cancel near zero): scale-corrected composites are defined using division by sum(lambda) in getScaleCorrectedItem, but this line drops both sign and magnitude, so cross-factor residual covariances no longer match the actual composite construction in those cases.
How does this look now @codex? When scale.corrected = FALSE, composites are built with rowMeans earlier in this file, so residual covariances must be divided by the indicator counts for each latent variable; this refactor now computes T <- t(I) %*% theta %*% I and only rescales inside if (scale.corrected), so the FALSE branch returns raw block sums. That inflates off-diagonal residual covariances by roughly k_i * k_j and propagates incorrect ~~ terms into the generated syntax.
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 19c0bf1513
R/reliablity_single_item.R
Outdated
```r
.f <- if (scale.corrected) sum else length

p <- length(lVs)
resCov <- matrix(0, p, p, dimnames = list(lVs, lVs))

for (i in seq_len(p)) {
  Ii <- item_list[[i]]
  for (j in i:p) {
    Ij <- item_list[[j]]
    block_sum <- sum(thetaF[Ii, Ij, drop = FALSE])
    val <- block_sum / (denom[i] * denom[j])
    resCov[i, j] <- val
    resCov[j, i] <- val
  }
}
l <- apply(lambda, MARGIN = 2L, FUN = .f)
```
Count nonzero indicators for unscaled composites
When scale.corrected = FALSE, using length here computes the total row count of the sliced lambda matrix for every latent variable, not that latent’s indicator count. Because lambda rows are the union of indicators across all selected factors, l becomes a shared/global count (or otherwise incorrect per-factor count), so L.inv %*% T %*% L.inv rescales residual covariances with wrong denominators and propagates incorrect ~~ terms in generated syntax; this regresses the prior behavior that divided by each factor’s own k_i.
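A toy illustration of the difference (the loading matrix here is made up, and treating "indicator of a factor" as "nonzero loading in that factor's column" is an assumption about the intended count):

```r
# Toy loading matrix: 4 indicators (rows), 2 latent variables (columns).
lambda <- matrix(
  c(0.7, 0.8, 0,   0,
    0,   0,   0.9, 0.6),
  nrow = 4, ncol = 2,
  dimnames = list(paste0("x", 1:4), c("F1", "F2"))
)

# Wrong: length() over each column of the sliced matrix is always
# nrow(lambda) = 4, a shared count over the union of indicators.
wrong_k <- apply(lambda, MARGIN = 2L, FUN = length)           # c(4, 4)

# Assumed fix: count only each factor's own (nonzero) loadings.
k <- apply(lambda, MARGIN = 2L, FUN = function(col) sum(col != 0))  # c(2, 2)

L.inv <- diag(1 / k)  # per-factor 1/k_i used in L.inv %*% T %*% L.inv
```

With the per-factor `k`, the rescaling again divides each block by that factor's own indicator count k_i, matching the prior behavior the review says was regressed.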
How about now @codex? When scale.corrected = FALSE, using length here computes the total row count of the sliced lambda matrix for every latent variable, not that latent's indicator count. Because lambda rows are the union of indicators across all selected factors, l becomes a shared/global count (or otherwise incorrect per-factor count), so L.inv %*% T %*% L.inv rescales residual covariances with wrong denominators and propagates incorrect ~~ terms in generated syntax; this regresses the prior behavior that divided by each factor's own k_i.
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: fbd7de117b
```r
}
l <- apply(lambda, MARGIN = 2L, FUN = FUN)
l.inv <- 1 / l
l.inv[l <= 0] <- 1L
```
Preserve denominator sign when inverting loadings
Overwriting l.inv for all l <= 0 silently changes the scaling for any factor whose corrected denominator is negative (or exactly zero), so L.inv %*% T %*% L.inv no longer matches the intended per-factor normalization and can emit incorrect ~~ terms in generated syntax for models with negative/canceling loadings. The previous implementation divided by each factor’s actual denominator, so this guard should not collapse all non-positive values to 1.
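A hedged sketch of the sign-preserving alternative the review describes (the toy denominators are invented, and falling back to 1 only at an exact zero is an assumption about the intended degenerate-case handling):

```r
# Toy summed-loading denominators; F2 is negative, F3 cancels to zero.
l <- c(F1 = 1.5, F2 = -0.8, F3 = 0)

l.inv <- 1 / l
l.inv[l == 0] <- 1  # fall back only where the division is undefined
# F2 keeps its actual scaling (1 / -0.8 = -1.25) instead of being
# collapsed to 1, so L.inv %*% T %*% L.inv still matches how the
# scale-corrected composites were constructed.
```

Guarding `l == 0` (or a small tolerance around zero) instead of `l <= 0` is what keeps both the sign and magnitude for negative denominators.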