Support for NumPy #6

@dirkgr

Description

We have to be able to store NumPy arrays in OOCMap.

For tensors above some size (larger than 4 KB?), it would be a killer feature if we could lazy-load the entire tensor. That would mean that LazyTensor (or whatever class we use for this) keeps an open LMDB transaction, so that the pointers to the data stay valid for as long as possible. If we do this, we will need some mechanism to yoink transactions away from live objects when we run out of transaction slots.

Or maybe it would be better if large tensors get their own memory-mapped file. That means they are not transactional like the rest of OOCMap is. I'm not sure that's really a problem.
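The per-file alternative could be as simple as the sketch below, using `numpy.memmap`. The function names, path handling, and the 4 KB cutoff constant are illustrative assumptions, not existing OOCMap code; the point is that a read-only memory map gives lazy loading for free, because the OS only faults pages in when they are accessed.

```python
import numpy as np

# Illustrative cutoff from the discussion above: tensors bigger than
# this would get their own file instead of living inside the LMDB map.
LARGE_TENSOR_THRESHOLD = 4096  # bytes

def store_tensor(path: str, array: np.ndarray) -> None:
    # Write the tensor into its own file via a writable memory map.
    mm = np.memmap(path, dtype=array.dtype, mode="w+", shape=array.shape)
    mm[...] = array
    mm.flush()

def load_tensor(path: str, dtype, shape) -> np.ndarray:
    # Read-only map: pages are faulted in lazily on access, so this
    # returns immediately no matter how large the tensor is.
    return np.memmap(path, dtype=dtype, mode="r", shape=shape)
```

The trade-off is the one noted above: writes to such files happen outside LMDB's transactions, so a crash mid-write can leave a tensor file inconsistent with the map that references it.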
