
Refactor code for compositeRVCOV() #134

Merged
Kss2k merged 4 commits into main from refactor-rcs-rvcov on Mar 17, 2026
Conversation

@Kss2k
Owner

@Kss2k Kss2k commented Mar 17, 2026

No description provided.


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 546c2c6e57

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".

Comment on lines +613 to +615
T <- t(I) %*% theta %*% I

# denominators D_f
denom <- vapply(lVs, function(lv)
  getCompositeDenominator(lv, parTable, cfa, scale.corrected),
  numeric(1)
)
if (scale.corrected) {

P1 Badge Restore mean-scaling for uncorrected composite residual covariance

When scale.corrected = FALSE, composites are built with rowMeans earlier in this file, so residual covariances must be divided by the indicator counts for each latent variable; this refactor now computes T <- t(I) %*% theta %*% I and only rescales inside if (scale.corrected), so the FALSE branch returns raw block sums. That inflates off-diagonal residual covariances by roughly k_i * k_j and propagates incorrect ~~ terms into the generated syntax.

Useful? React with 👍 / 👎.
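The k_i * k_j scaling the review describes can be illustrated with a small sketch (every object here — theta, items — is invented for illustration, not the package's actual variables):

```r
# Hypothetical residual covariance matrix for 4 indicators:
# two factors with two indicators each, off-diagonals of 0.1.
theta <- matrix(0.1, 4, 4)
diag(theta) <- 1
items <- list(f1 = 1:2, f2 = 3:4)
k <- lengths(items)                        # indicator counts k_i

# Raw block sum, as the FALSE branch returns after the refactor:
raw <- sum(theta[items$f1, items$f2])      # 4 entries of 0.1 -> 0.4

# Mean composites built with rowMeans() imply dividing by k_i * k_j:
scaled <- raw / (k[["f1"]] * k[["f2"]])    # 0.4 / 4 -> 0.1
```

The raw block sum overstates the composite covariance by exactly the factor k_i * k_j that the review points out.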

l <- apply(lambda, MARGIN = 2L, FUN = sum)

l.inv <- 1 / l
l.inv[l <= 0] <- 1L

P2 Badge Keep loading-sign scaling in scale-corrected residual covariance

Replacing 1 / l with 1 for all l <= 0 changes the model math for factors whose summed loadings are negative (or cancel near zero): scale-corrected composites are defined using division by sum(lambda) in getScaleCorrectedItem, but this line drops both sign and magnitude, so cross-factor residual covariances no longer match the actual composite construction in those cases.

Useful? React with 👍 / 👎.

@Kss2k
Owner Author

Kss2k commented Mar 17, 2026

How does this look now @codex? When scale.corrected = FALSE, composites are built with rowMeans earlier in this file, so residual covariances must be divided by the indicator counts for each latent variable; this refactor now computes T <- t(I) %*% theta %*% I and only rescales inside if (scale.corrected), so the FALSE branch returns raw block sums. That inflates off-diagonal residual covariances by roughly k_i * k_j and propagates incorrect ~~ terms into the generated syntax.


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 19c0bf1513

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".

Comment on lines +615 to +617
.f <- if (scale.corrected) sum else length

p <- length(lVs)
resCov <- matrix(0, p, p, dimnames = list(lVs, lVs))

for (i in seq_len(p)) {
  Ii <- item_list[[i]]
  for (j in i:p) {
    Ij <- item_list[[j]]
    block_sum <- sum(thetaF[Ii, Ij, drop = FALSE])
    val <- block_sum / (denom[i] * denom[j])
    resCov[i, j] <- val
    resCov[j, i] <- val
  }
  l <- apply(lambda, MARGIN = 2L, FUN = .f)

P1 Badge Count nonzero indicators for unscaled composites

When scale.corrected = FALSE, using length here computes the total row count of the sliced lambda matrix for every latent variable, not that latent’s indicator count. Because lambda rows are the union of indicators across all selected factors, l becomes a shared/global count (or otherwise incorrect per-factor count), so L.inv %*% T %*% L.inv rescales residual covariances with wrong denominators and propagates incorrect ~~ terms in generated syntax; this regresses the prior behavior that divided by each factor’s own k_i.

Useful? React with 👍 / 👎.
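The distinction between `length` and a per-factor indicator count can be sketched as follows (this lambda is invented for illustration; its rows are the union of indicators across factors):

```r
# Hypothetical loading matrix: x1, x2 load on f1; x3, x4 on f2.
lambda <- matrix(
  c(0.8, 0.7, 0.0, 0.0,
    0.0, 0.0, 0.6, 0.5),
  nrow = 4, dimnames = list(paste0("x", 1:4), c("f1", "f2"))
)

# FUN = length returns the shared row count (4) for every factor:
n_rows <- apply(lambda, MARGIN = 2L, FUN = length)              # c(4, 4)

# Each factor's own k_i is its number of nonzero loadings:
k <- apply(lambda, MARGIN = 2L, FUN = function(x) sum(x != 0))  # c(2, 2)
```

With `length`, every factor gets the global row count instead of its own k_i, which is exactly the wrong-denominator effect the review describes.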

@Kss2k
Owner Author

Kss2k commented Mar 17, 2026

How about now @codex? When scale.corrected = FALSE, using length here computes the total row count of the sliced lambda matrix for every latent variable, not that latent’s indicator count. Because lambda rows are the union of indicators across all selected factors, l becomes a shared/global count (or otherwise incorrect per-factor count), so L.inv %*% T %*% L.inv rescales residual covariances with wrong denominators and propagates incorrect ~~ terms in generated syntax; this regresses the prior behavior that divided by each factor’s own k_i.


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: fbd7de117b

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".

}
l <- apply(lambda, MARGIN = 2L, FUN = FUN)
l.inv <- 1 / l
l.inv[l <= 0] <- 1L

P2 Badge Preserve denominator sign when inverting loadings

Overwriting l.inv for all l <= 0 silently changes the scaling for any factor whose corrected denominator is negative (or exactly zero), so L.inv %*% T %*% L.inv no longer matches the intended per-factor normalization and can emit incorrect ~~ terms in generated syntax for models with negative/canceling loadings. The previous implementation divided by each factor’s actual denominator, so this guard should not collapse all non-positive values to 1.

Useful? React with 👍 / 👎.
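A sketch of the sign-preserving alternative the review suggests (these denominators are invented for illustration; only an exact zero actually needs a guard):

```r
# Hypothetical per-factor denominators: f2's summed loadings are negative.
l <- c(f1 = 2.0, f2 = -1.5)

# Collapsing all l <= 0 to 1 drops f2's sign and magnitude:
l.inv <- 1 / l
l.inv[l <= 0] <- 1L                  # c(0.5, 1) -- wrong scaling for f2

# Guarding only exact zeros keeps both sign and magnitude:
l.inv2 <- ifelse(l != 0, 1 / l, 1)   # c(0.5, -0.666...)
```

Under the first guard, L.inv %*% T %*% L.inv would silently use the wrong normalization for any factor whose corrected denominator is non-positive; the second keeps the actual 1/l everywhere it is defined.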

@Kss2k Kss2k merged commit cfd4ab5 into main Mar 17, 2026
6 checks passed
@Kss2k Kss2k deleted the refactor-rcs-rvcov branch March 17, 2026 08:51