
[VLM] Dynamic image embed size and Gemma 3 Vision support#774

Open
gnguralnick wants to merge 3 commits into mlc-ai:main from gnguralnick:dynamic-image-embed-size

Conversation

@gnguralnick

Replace the hardcoded IMAGE_EMBED_SIZE constant (1921, Phi3.5-V specific) with dynamic per-model computation:

  • computeImageEmbedSize() calculates the correct size per model type (Phi3-V from crop shape, others from mm_tokens_per_image in model_config)
  • Add BOI/EOI token wrapping around image embeddings for models that require it (e.g. Gemma 3 Vision)
  • Expose model_type and model_config fields in ChatConfig to pass through model-specific parameters from mlc-chat-config.json
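The per-model dispatch described above can be sketched as follows. This is a minimal illustration, not the actual patch: the `phi3_v` crop-shape arithmetic here is placeholder math, and the `ModelConfigLike` shape is an assumption; only the names `computeImageEmbedSize` and `mm_tokens_per_image` come from the PR description.

```typescript
// Hypothetical sketch of per-model image embed size computation.
interface ModelConfigLike {
  mm_tokens_per_image?: number;
}

function computeImageEmbedSize(
  modelType: string,
  modelConfig: ModelConfigLike,
  cropHeight: number,
  cropWidth: number,
): number {
  if (modelType === "phi3_v") {
    // Phi3-V derives the token count from the image's crop shape.
    // Illustrative placeholder formula: one 144-token block per 336x336 crop tile.
    const hBlocks = Math.ceil(cropHeight / 336);
    const wBlocks = Math.ceil(cropWidth / 336);
    return hBlocks * wBlocks * 144;
  }
  if (modelConfig.mm_tokens_per_image !== undefined) {
    // Models like Gemma 3 Vision declare a fixed per-image token count.
    return modelConfig.mm_tokens_per_image;
  }
  throw new Error(`Cannot determine image embed size for model type "${modelType}"`);
}
```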

Also see: mlc-ai/mlc-llm#3429, apache/tvm-ffi#475

…tation

The image embed size was hardcoded to 1921 (Phi3.5-V specific). This
replaces it with a dynamic computeImageEmbedSize() that calculates the
correct size per model type (Phi3-V from crop shape, others from
mm_tokens_per_image in model_config). Also adds BOI/EOI token wrapping
for models like Gemma3-V that require it, and exposes model_type and
model_config fields in ChatConfig.
@gemini-code-assist

Summary of Changes

Hello @gnguralnick, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request enhances the visual language model (VLM) support by introducing dynamic image embedding size computation and model-specific image token wrapping. Removing the hardcoded image embed size makes the pipeline adaptable to additional VLM architectures, including Gemma 3 Vision, and makes image input processing in the chat pipeline easier to extend.

Highlights

  • Dynamic Image Embed Size Calculation: Replaced the hardcoded IMAGE_EMBED_SIZE with a dynamic computeImageEmbedSize method that calculates the correct size based on the model type (e.g., Phi3-V from crop shape, others from mm_tokens_per_image in model_config).
  • Gemma 3 Vision Support: Implemented conditional wrapping of image embeddings with Begin-of-Image (BOI) and End-of-Image (EOI) tokens for models like Gemma 3 Vision, configurable via model_config.
  • Model Configuration Exposure: Added model_type and model_config fields to ChatConfig to allow passing model-specific parameters from mlc-chat-config.json for greater flexibility.
  • Refactored Image Input Handling: Updated the getInputData and getChunkedPrefillInputData functions to asynchronously preload image dimensions and use the dynamic embed size calculation, removing previous hardcoded assumptions and error checks.
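The BOI/EOI wrapping highlighted above can be sketched like this. The field names `boi_token_id` and `eoi_token_id` and the token values in the usage example are illustrative assumptions; the real identifiers come from `mlc-chat-config.json` via `model_config`.

```typescript
// Hypothetical config shape; real field names come from mlc-chat-config.json.
interface VisionTokenConfig {
  boi_token_id?: number;
  eoi_token_id?: number;
}

// Wrap an image's embedding token slots with Begin-of-Image / End-of-Image
// markers when the model config provides them (e.g. Gemma 3 Vision);
// otherwise pass the tokens through unchanged (e.g. Phi3-V).
function wrapImageTokens(imageTokens: number[], cfg: VisionTokenConfig): number[] {
  if (cfg.boi_token_id === undefined || cfg.eoi_token_id === undefined) {
    return imageTokens;
  }
  return [cfg.boi_token_id, ...imageTokens, cfg.eoi_token_id];
}
```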


Changelog
  • src/config.ts
    • Added optional model_type and model_config fields to the ChatConfig interface to support model-specific parameters.
  • src/llm_chat.ts
    • Removed the import and usage of the IMAGE_EMBED_SIZE constant.
    • Removed the PrefillChunkSizeSmallerThanImageError as dynamic sizing makes it obsolete.
    • Modified getInputData to be an asynchronous function, returning an additional getEmbedSize callback.
    • Introduced a private computeImageEmbedSize method to dynamically calculate image embedding token count based on model type and image dimensions.
    • Updated the image embedding validation logic to use the dynamically computed expectedSize.
    • Implemented logic to insert BOI and EOI tokens around image embeddings if specified in the model configuration.
    • Adjusted calls to getChunkedPrefillInputData to pass the new getEmbedSize function.
  • src/support.ts
    • Removed the hardcoded IMAGE_EMBED_SIZE constant.
    • Updated the getChunkedPrefillInputData function to accept a getImageEmbedSize callback.
    • Modified the calculation of curDataLen within getChunkedPrefillInputData to use the provided getImageEmbedSize callback for image data.
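The callback-driven length calculation described in the src/support.ts changes can be sketched as below. The `InputItem` type and the summing loop are simplified assumptions for illustration; only the `getImageEmbedSize` callback idea follows the changelog.

```typescript
// Simplified stand-in for the prefill input items; the real types live in src/support.ts.
type InputItem =
  | { kind: "text"; tokens: number[] }
  | { kind: "image"; url: string };

// Sum the token length of a sequence of inputs, asking the callback for each
// image's embed size instead of relying on a hardcoded constant.
function totalInputLen(
  inputs: InputItem[],
  getImageEmbedSize: (url: string) => number,
): number {
  let len = 0;
  for (const item of inputs) {
    len += item.kind === "text" ? item.tokens.length : getImageEmbedSize(item.url);
  }
  return len;
}
```

The same callback would feed the `curDataLen` computation when deciding where to split prefill chunks, so different images in one prompt can contribute different embed sizes.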
Activity
  • No specific activity (comments, reviews, or progress updates) were provided in the context for this pull request.


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request effectively refactors the handling of image embeddings to support dynamic sizes, which is a great improvement for supporting various vision-language models like Gemma 3 Vision. However, the audit identified critical security and robustness issues related to missing validation of user-supplied or configuration-supplied data. Specifically, token IDs in logit_bias and special tokens in model_config are passed to GPU kernels without bounds checking, potentially leading to out-of-bounds memory access on the GPU. Additionally, the removal of the prefill chunk size check for images could lead to unhandled internal errors. I also have a couple of suggestions to enhance performance and error handling.

- Use Promise.all to fetch image dimensions concurrently instead of
  sequentially, improving performance for multi-image prompts.
- Add back the PrefillChunkSizeSmallerThanImageError check in
  getChunkedPrefillInputData for when an image embed size exceeds
  prefillChunkSize, which was lost when removing the hardcoded
  IMAGE_EMBED_SIZE.
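The reviewer's first suggestion can be sketched as follows: load all image dimensions concurrently with `Promise.all`, so total latency is roughly the slowest fetch rather than the sum of all fetches. `loadImageDims` is a hypothetical stand-in for whatever async dimension loader the code actually uses.

```typescript
interface ImageDims {
  width: number;
  height: number;
}

// Hypothetical stand-in; a real implementation would decode the image header.
async function loadImageDims(url: string): Promise<ImageDims> {
  return { width: 336, height: 336 };
}

// Concurrent preload: kick off every fetch before awaiting any of them.
// A sequential for-await loop would instead pay each fetch's latency in turn.
async function preloadAllDims(urls: string[]): Promise<ImageDims[]> {
  return Promise.all(urls.map((u) => loadImageDims(u)));
}
```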
@gnguralnick force-pushed the dynamic-image-embed-size branch from 1eefcc7 to 8de6410 on March 9, 2026 at 20:48
@akaashrp self-assigned this on Mar 9, 2026
- Fix getChunkedPrefillInputData tests: add required getImageEmbedSize callback
- Fix getInputData mock: now async, returns 3-tuple with getEmbedSize
- Fix pre-existing bug: schemaOrGrammarStr -> responseFormatCacheKey
- Add tests for computeImageEmbedSize (phi3_v, mm_tokens, unknown model)
- Add tests for calculateResizeShape and calculateCropShape
- Add test for PrefillChunkSizeSmallerThanImageError in chunking
- Add test for dynamic per-image embed sizes
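The restored chunking guard mentioned in the commits above can be sketched like this. The error class name `PrefillChunkSizeSmallerThanImageError` comes from the PR; the guard function and message wording are illustrative assumptions.

```typescript
// Thrown when a single image's embedding cannot fit in one prefill chunk.
class PrefillChunkSizeSmallerThanImageError extends Error {
  constructor(embedSize: number, prefillChunkSize: number) {
    super(
      `Image embed size ${embedSize} exceeds prefillChunkSize ${prefillChunkSize}; ` +
        `the image cannot be prefilled in a single chunk.`,
    );
    this.name = "PrefillChunkSizeSmallerThanImageError";
  }
}

// Fail fast during chunked prefill instead of hitting an unhandled internal error.
function checkImageFitsChunk(embedSize: number, prefillChunkSize: number): void {
  if (embedSize > prefillChunkSize) {
    throw new PrefillChunkSizeSmallerThanImageError(embedSize, prefillChunkSize);
  }
}
```

With dynamic sizing, this check has to run per image inside the chunking loop, since each image may now report a different embed size.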