Reduced performance with scaled-down data #14

@uricohen

Description

I'm a very happy user of Clarabel and have now moved away from all my previous choices (ECOS, SCS, quadprog).

I am using it from Python, through the CVXPY API and qpsolvers, to solve large-scale problems, e.g. 256 variables and 100K linear equality and inequality constraints.

I have now run into an issue where scaling the problem data by a factor of 100 changes the results considerably. Clarabel seems to work well when the data is of order 1, but performance degrades noticeably when the data is 100 times smaller, even though the two problems are equivalent.

Is it a tolerance issue?
Should I scale the data myself?
What's your recommendation on this issue?
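For context, here is a small sketch (my own toy reproduction, not taken from the actual problem) checking the equivalence claim: uniformly rescaling all the data of an equality-constrained QP by a constant leaves the exact solution unchanged, so any difference a solver reports must come from numerics/tolerances rather than the problem itself. The helper `solve_eq_qp` is a hypothetical exact KKT solve using numpy, not a Clarabel call:

```python
import numpy as np

def solve_eq_qp(P, q, A, b):
    # Exact solution of: minimize 1/2 x'Px + q'x  subject to Ax = b,
    # via the KKT system  [P A'; A 0] [x; lam] = [-q; b].
    n, m = P.shape[0], A.shape[0]
    K = np.block([[P, A.T], [A, np.zeros((m, m))]])
    rhs = np.concatenate([-q, b])
    return np.linalg.solve(K, rhs)[:n]

rng = np.random.default_rng(0)
P = np.eye(3)
q = rng.standard_normal(3)
A = rng.standard_normal((1, 3))
b = rng.standard_normal(1)

x_ref = solve_eq_qp(P, q, A, b)

# Scale all problem data down by 100, as in the report above.
alpha = 1e-2
x_scaled = solve_eq_qp(alpha * P, alpha * q, alpha * A, alpha * b)

print(np.allclose(x_ref, x_scaled))  # the two problems are equivalent
```

With an exact solve the two solutions coincide, which suggests the observed discrepancy is driven by the solver's absolute tolerances shrinking relative to the data magnitude.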

Well done, and best wishes.
