
Conversation

@robertbaldyga

No description provided.

@robertbaldyga robertbaldyga force-pushed the lru-improvements branch 3 times, most recently from 9ab3640 to e27a3a0 on February 10, 2026 at 06:36
roelap and others added 15 commits February 10, 2026 20:43
Signed-off-by: Roel Apfelbaum <roel.apfelbaum@huawei.com>
Signed-off-by: Robert Baldyga <robert.baldyga@unvertical.com>
Signed-off-by: Roel Apfelbaum <roel.apfelbaum@huawei.com>
Signed-off-by: Robert Baldyga <robert.baldyga@unvertical.com>
Signed-off-by: Roel Apfelbaum <roel.apfelbaum@huawei.com>
Signed-off-by: Robert Baldyga <robert.baldyga@unvertical.com>
Signed-off-by: Roel Apfelbaum <roel.apfelbaum@huawei.com>
Signed-off-by: Robert Baldyga <robert.baldyga@unvertical.com>
Signed-off-by: Roel Apfelbaum <roel.apfelbaum@huawei.com>
Signed-off-by: Robert Baldyga <robert.baldyga@unvertical.com>
This reduces the breakdown of big requests into many small ones.

Signed-off-by: Roel Apfelbaum <roel.apfelbaum@huawei.com>
Signed-off-by: Robert Baldyga <robert.baldyga@unvertical.com>
This change may waste up to 31 cache lines, but it simplifies the
OCF_LRU_GET_LIST_INDEX() macro, which is heavily used in ocf_lru_hot_cline()
(see the sketch below).

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@huawei.com>
Signed-off-by: Robert Baldyga <robert.baldyga@unvertical.com>
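
A minimal sketch of the kind of simplification this commit describes, assuming each LRU list covers an equal, contiguous range of cache lines rounded up to a multiple of the list count; the struct and function names are illustrative, not the actual OCF code:

```c
/*
 * Hypothetical layout: every LRU list owns a fixed-size, contiguous range of
 * cache lines, so the list index is a single division.
 */
#define NUM_LRU_LISTS 32u

struct lru_space {
	unsigned num_clines;      /* total cache lines in the partition */
	unsigned clines_per_list; /* fixed per-list range after rounding up */
};

static void lru_space_init(struct lru_space *s, unsigned num_clines)
{
	s->num_clines = num_clines;
	/* Round up: at most NUM_LRU_LISTS - 1 (31) cache lines stay unused. */
	s->clines_per_list = (num_clines + NUM_LRU_LISTS - 1) / NUM_LRU_LISTS;
}

static unsigned lru_get_list_index(const struct lru_space *s, unsigned cline)
{
	/* One division, no special case for the remainder. */
	return cline / s->clines_per_list;
}
```

The round-up trades at most 31 unused cache lines for a branch-free index computation on the hot path.
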
Make a best effort to evict contiguous cache line ranges to avoid
splitting requests (see the sketch below).

Signed-off-by: Roel Apfelbaum <roel.apfelbaum@huawei.com>
Signed-off-by: Robert Baldyga <robert.baldyga@unvertical.com>
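
A hedged sketch of what best-effort contiguous victim selection can look like; `pick_contiguous_victims` and `is_evictable` are hypothetical names, not OCF functions:

```c
/*
 * Starting from an LRU victim, grow the candidate range while the neighbouring
 * cache lines are also evictable, so a large request can be mapped without
 * being split into many small ones.
 */
#include <stdbool.h>

unsigned pick_contiguous_victims(unsigned victim, unsigned num_clines,
		unsigned want, bool (*is_evictable)(unsigned cline),
		unsigned *first_out)
{
	unsigned first = victim, last = victim;

	/* Grow forward, then backward; stop at non-evictable neighbours. */
	while (last - first + 1 < want && last + 1 < num_clines &&
			is_evictable(last + 1))
		last++;
	while (last - first + 1 < want && first > 0 &&
			is_evictable(first - 1))
		first--;

	*first_out = first;
	return last - first + 1; /* may be smaller than `want`: best effort */
}
```
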
Signed-off-by: Roel Apfelbaum <roel.apfelbaum@huawei.com>
Signed-off-by: Ian Levine <ian.levine@huawei.com>
Signed-off-by: Robert Baldyga <robert.baldyga@unvertical.com>
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@huawei.com>
Signed-off-by: Robert Baldyga <robert.baldyga@unvertical.com>
The bug is fixed by taking the lock when adding a cache line to the LRU list.
Rename ocf_lru_add() to ocf_lru_add_locked() and add a wrapper function
ocf_lru_add() that takes the lock (see the sketch below).

Signed-off-by: Sara Merzel <sara.merzel@huawei.com>
Signed-off-by: Robert Baldyga <robert.baldyga@unvertical.com>
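
A self-contained sketch of the rename-plus-wrapper pattern described in the commit message; the list type, the pthread mutex, and the insertion body are placeholders rather than the actual OCF implementation:

```c
#include <pthread.h>

struct lru_list {
	pthread_mutex_t lock;
	unsigned head;
};

/* Former ocf_lru_add(): now assumes the caller already holds list->lock. */
static void ocf_lru_add_locked(struct lru_list *list, unsigned cline)
{
	/* Link `cline` at the head of the list (details omitted). */
	list->head = cline;
}

/* New wrapper under the old name: takes the lock around the unlocked body. */
void ocf_lru_add(struct lru_list *list, unsigned cline)
{
	pthread_mutex_lock(&list->lock);
	ocf_lru_add_locked(list, cline);
	pthread_mutex_unlock(&list->lock);
}
```

Keeping the old name on the locking wrapper leaves existing callers correct, while internal paths that already hold the lock can switch to the _locked variant.
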
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@huawei.com>
Signed-off-by: Robert Baldyga <robert.baldyga@unvertical.com>
Passing the LRU list id as a parameter avoids recalculating the id within
the macro each time the lock is acquired or released (see the sketch below).

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@huawei.com>
Signed-off-by: Robert Baldyga <robert.baldyga@unvertical.com>
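
An illustrative sketch of the idea, assuming hypothetical macro names: the caller computes the list index once and hands it to the lock/unlock macros instead of letting each macro re-derive it from the cache line:

```c
#include <pthread.h>

#define NUM_LRU_LISTS 32u

struct lru_partition {
	pthread_mutex_t list_lock[NUM_LRU_LISTS];
	unsigned clines_per_list;
};

#define LRU_LIST_INDEX(p, cline)   ((cline) / (p)->clines_per_list)

/* Old style: the index is recomputed inside every lock call. */
#define LRU_LOCK_CLINE(p, cline) \
	pthread_mutex_lock(&(p)->list_lock[LRU_LIST_INDEX(p, cline)])

/* New style: the caller passes the precomputed index. */
#define LRU_LOCK_IDX(p, idx)     pthread_mutex_lock(&(p)->list_lock[(idx)])
#define LRU_UNLOCK_IDX(p, idx)   pthread_mutex_unlock(&(p)->list_lock[(idx)])

static void lru_touch(struct lru_partition *p, unsigned cline)
{
	unsigned idx = LRU_LIST_INDEX(p, cline); /* computed once */

	LRU_LOCK_IDX(p, idx);
	/* ... move `cline` to the hot end of its list ... */
	LRU_UNLOCK_IDX(p, idx);
}
```
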
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@huawei.com>
Signed-off-by: Robert Baldyga <robert.baldyga@unvertical.com>
When filling the cache with reads, a backfill thread runs asynchronously.
Wait for the cache I/Os to settle before checking the occupancy stats
(see the sketch below).

Signed-off-by: Robert Baldyga <robert.baldyga@unvertical.com>
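
A minimal sketch of the settling logic the test relies on, under the assumption of a simple in-flight I/O counter; neither the counter nor the helper is part of the OCF or pyocf API:

```c
#include <stdatomic.h>
#include <unistd.h>

/* Incremented when a cache I/O is submitted, decremented on completion. */
static atomic_uint pending_cache_io;

/* Poll until no cache I/O is in flight, then it is safe to read stats. */
static void wait_for_cache_io_settle(void)
{
	while (atomic_load(&pending_cache_io) > 0)
		usleep(1000); /* back off 1 ms between checks */
}
```
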