Inconsistencies caused by split-patch inference #55

@mingheyuemankong

Description

The current repository code performs patch-based inference on the resized image, then fuses the patches back into a single large image using wavelet fusion.

However, this approach still produces a significant number of inconsistencies. For instance, when processing walls, some patches smooth out the wall texture during inference while others generate extra detail, so the final image looks as if it were stitched together from multiple patches. An example of this inconsistency is shown below:

[Image: example of visible patch seams on a wall texture]

This issue becomes particularly unacceptable for large-scale portraits such as ID photos. Are there any effective solutions?
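One common mitigation for visible patch seams (independent of the repo's wavelet fusion, which I haven't modified) is to run inference on overlapping tiles and blend the results with feathered per-pixel weights, so each output pixel is a weighted average of several patches rather than coming from a single one. A minimal NumPy sketch for a single-channel image, assuming a hypothetical per-patch `infer` function and that the image is at least one tile in size:

```python
import numpy as np

def blend_weight(tile, feather):
    """2-D feathered weight map: ramps up from the tile border to 1 at the
    interior. Clipped to stay strictly positive so every pixel gets weight."""
    ramp = 0.5 - 0.5 * np.cos(np.pi * np.arange(feather) / feather)
    ramp = np.clip(ramp, 1e-3, None)
    w1d = np.ones(tile)
    w1d[:feather] = ramp
    w1d[-feather:] = ramp[::-1]
    return np.outer(w1d, w1d)

def tiled_infer(img, infer, tile=64, overlap=16):
    """Run `infer` on overlapping tiles, accumulate weighted outputs, and
    normalize by the accumulated weights. Assumes img is at least tile x tile."""
    H, W = img.shape
    out = np.zeros((H, W), dtype=np.float64)
    acc = np.zeros((H, W), dtype=np.float64)
    step = tile - overlap
    # Tile origins; force a final tile flush against each border.
    ys = list(range(0, H - tile + 1, step))
    xs = list(range(0, W - tile + 1, step))
    if ys[-1] != H - tile:
        ys.append(H - tile)
    if xs[-1] != W - tile:
        xs.append(W - tile)
    w = blend_weight(tile, overlap)
    for y in ys:
        for x in xs:
            patch = infer(img[y:y + tile, x:x + tile])
            out[y:y + tile, x:x + tile] += patch * w
            acc[y:y + tile, x:x + tile] += w
    return out / acc
```

This doesn't remove the underlying cause (patches being processed with different "interpretations" of the texture), but it does turn hard seams into gradual transitions; a larger `overlap` gives smoother blending at the cost of more compute. For the root cause itself, conditioning all patches on shared global context (e.g. a downscaled whole-image pass) is the usual direction, but that depends on the model architecture.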
