
[TODO] SIMD, BF16/FP16, INT8 optimization #79

@syoyo

Description


Currently NanoRT does not utilize SIMD/AVX.

There is also no quantized-BVH support.

It would be good to start considering SIMD optimization and BVH quantization.

Fortunately, recent CPU architectures (Alder Lake, Zen 4) have native BF16/FP16 and INT8 operations, which would speed up quantized BVH construction and traversal.
