
feat(pt): make se_a and dpa1 compatible with torch.export #5193

Draft
wanghan-iapcm wants to merge 3 commits into deepmodeling:master from wanghan-iapcm:feat-export-descrpt

Conversation

@wanghan-iapcm
Collaborator

@wanghan-iapcm wanghan-iapcm commented Feb 5, 2026

Summary by CodeRabbit

  • New Features

    • Expanded model export capabilities with support for additional configuration options.
  • Tests

    • Added comprehensive test coverage for descriptor-based model export and integration flows.
    • Updated descriptor tests to utilize enhanced export mechanisms.
  • Refactor

    • Improved internal import organization and structure in core modules for better maintainability.

@github-actions github-actions bot added the Python label Feb 5, 2026
@wanghan-iapcm wanghan-iapcm marked this pull request as draft February 5, 2026 02:42
@gemini-code-assist

Summary of Changes

Hello @wanghan-iapcm, I'm Gemini Code Assist [1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request focuses on enhancing the deepmd.pt module by making the se_a descriptor fully compatible with PyTorch's torch.export feature. This change facilitates more robust and efficient model deployment by leveraging the latest PyTorch export capabilities. The modifications include updating core descriptor logic, refining import statements, and introducing comprehensive unit and integration tests to ensure the reliability of the new export pipeline.

Highlights

  • torch.export Compatibility: The se_a descriptor in deepmd.pt has been updated to be compatible with torch.export, replacing previous torch.jit specific considerations.
  • New Export Tests: A new test file, test_export.py, has been introduced to specifically validate the torch.export functionality for both the DescrptSeA descriptor and the EnergyModel using se_e2_a.
  • Refactored Imports and Code: Imports in se_a.py were streamlined, and torch.jit-related comments and conditional Final imports were removed, indicating a cleaner integration with modern Python typing and PyTorch export mechanisms.
  • JIT Test Removal/Modification: Existing torch.jit tests related to se_e2_a were either removed (TestEnergyModelSeA in test_jit.py) or refactored to use torch.export (test_jit renamed to test_export in test_se_e2_a.py).
Changelog
  • deepmd/pt/model/descriptor/se_a.py
    • Added new imports for Final, DPEnvMat, EmbeddingNet, NetworkCollection, PairExcludeMask, DPTabulate, and ActivationFn to support updated descriptor logic.
    • Removed a try-except block for typing.Final as it's now directly imported.
    • Eliminated torch.jit-specific comments within the forward method, simplifying the code.
  • source/tests/pt/model/test_export.py
    • Introduced a new test file dedicated to torch.export functionality.
    • Added TestExport class with methods to verify the export of DescrptSeA descriptor and EnergyModel with se_e2_a descriptor.
    • Included ExportIntegrationTest for broader integration testing of torch.export.export.
  • source/tests/pt/model/test_jit.py
    • Removed the import of model_se_e2_a.
    • Deleted the TestEnergyModelSeA class, which was a JIT-specific test for the se_e2_a model.
  • source/tests/pt/model/test_se_e2_a.py
    • Renamed the test_jit method to test_export to reflect the change in testing methodology.
    • Updated the test implementation to use torch.export.export instead of torch.jit.script (see the sketch after this changelog).
    • Expanded the test matrix to include the type_one_side parameter for comprehensive export validation.
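
A rough sketch of this migration (a toy module rather than the PR's models, assuming PyTorch 2.x where torch.export is available):

import torch


class Toy(torch.nn.Module):
    def forward(self, x):
        return x * 2.0


model = Toy().eval()
x = torch.randn(4, 3)

scripted = torch.jit.script(model)           # previous path: TorchScript
exported = torch.export.export(model, (x,))  # new path: torch.export

# Both paths should agree with eager execution on the same inputs.
torch.testing.assert_close(scripted(x), exported.module()(x))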
Activity
  • The pull request was opened by wanghan-iapcm.
  • No further activity such as comments or reviews has been recorded in the provided context.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

Feature Command Description
Code Review /gemini review Performs a code review for the current pull request in its current state.
Pull Request Summary /gemini summary Provides a summary of the current pull request in its current state.
Comment @gemini-code-assist Responds in comments when explicitly tagged, both in pull request comments and review comments.
Help /gemini help Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request successfully makes the se_a descriptor compatible with torch.export. The changes include refactoring imports and removing obsolete code in se_a.py. Additionally, new tests for torch.export are introduced, and existing JIT tests are updated to use torch.export. The overall implementation is solid and aligns with the PR's goal. I have one suggestion to improve code maintainability by addressing duplicated code in the new test file.

Comment on lines +146 to +152
class ForwardLowerWrapper(torch.nn.Module):
    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, extended_coord, extended_atype, nlist):
        return self.model.forward_lower(extended_coord, extended_atype, nlist)


Severity: medium

This ForwardLowerWrapper class is a duplicate of the one defined in test_export_energy_model_se_a starting at line 108. To avoid code duplication and improve maintainability, this class should be defined only once at the module level and then reused in both test methods. I suggest moving the other definition to the top of the file (e.g., after imports) and removing this one.

@dosubot dosubot bot added the new feature label Feb 5, 2026

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: a4729b0b6c

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".

Comment on lines +48 to +58
model = DescrptSeA(
    rcut=self.rcut,
    rcut_smth=self.rcut_smth,
    sel=self.sel,
    neuron=self.neuron,
    axis_neuron=self.axis_neuron,
    precision="float32",
    trainable=False,
    type_one_side=type_one_side,
)
model.eval()


P2: Move descriptor to DEVICE before export

When CUDA is available, env.DEVICE becomes a GPU device, but the DescrptSeA module stays on CPU. The test builds inputs on env.DEVICE and then calls torch.export.export(model, ...), which will raise a device-mismatch error (CPU parameters vs. CUDA inputs). This means the new export test will fail on GPU runners. Moving the model to env.DEVICE (or forcing CPU inputs) avoids the mismatch.
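
A minimal sketch of this fix with a toy module in place of DescrptSeA (the device variable stands in for env.DEVICE; none of this is the PR's actual code):

import torch

device = "cuda" if torch.cuda.is_available() else "cpu"  # stand-in for env.DEVICE

model = torch.nn.Linear(3, 1).to(device).eval()  # toy stand-in for the descriptor
coord = torch.randn(2, 15, 3, device=device)     # inputs on the same device as the parameters

# With parameters and inputs co-located, torch.export no longer hits a device mismatch.
exported = torch.export.export(model, (coord,))
print(exported.module()(coord).shape)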


Comment on lines 128 to 132
    precision=prec,
    resnet_dt=idt,
    seed=GLOBAL_SEED,
    type_one_side=type_one_side,
)


P2: Export test mixes CPU model with DEVICE inputs

This test constructs coord_ext, atype_ext, and nlist on env.DEVICE, but the DescrptSeA instance remains on CPU. On GPU-capable environments, torch.export.export(dd0, ...) will error because module parameters and buffers are on CPU while inputs are on CUDA. This makes the test non-portable across CPU/GPU CI. Ensure the model is moved to env.DEVICE (or create CPU inputs) before exporting.


@coderabbitai
Contributor

coderabbitai bot commented Feb 5, 2026

📝 Walkthrough

Walkthrough

The changes refactor import organization in a descriptor module, introduce comprehensive torch.export testing, remove an outdated test class, and migrate existing tests from torch.jit.script to torch.export.export with added type_one_side parameter support.

Changes

Cohort / File(s) Summary
Descriptor Module Refactoring
deepmd/pt/model/descriptor/se_a.py
Consolidated and reorganized imports (added Final, EnvMat, EmbeddingNet, NetworkCollection, PairExcludeMask, DPTabulate, ActivationFn), removed duplicate declarations, and stripped inline JIT-related comments. No functional logic changes.
Export Testing
source/tests/pt/model/test_export.py
New comprehensive test module covering descriptor SeA export with type_one_side configurations, energy model export via forward_lower wrapper, and end-to-end integration export flow with training and model export verification. Includes setup and cleanup logic for artifacts.
Test Migration & Updates
source/tests/pt/model/test_se_e2_a.py
Renamed test_jit to test_export, added type_one_side parameter iteration and constructor support, replaced torch.jit.script serialization round-trip with direct torch.export.export call.
Test Cleanup
source/tests/pt/model/test_jit.py
Removed model_se_e2_a import and deleted TestEnergyModelSeA test class.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Suggested labels

Python

Suggested reviewers

  • njzjz
  • iProzd
  • anyangml
🚥 Pre-merge checks | ✅ 2 | ❌ 1
❌ Failed checks (1 warning)
Check name Status Explanation Resolution
Docstring Coverage ⚠️ Warning Docstring coverage is 21.43% which is insufficient. The required threshold is 80.00%. Write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (2 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Title check ✅ Passed The title accurately describes the main change: making se_a and dpa1 descriptors compatible with torch.export. This aligns with the substantial work across multiple test files and the core descriptor module to replace TorchScript serialization with torch.export export paths.




Comment @coderabbitai help to get the list of available commands and usage tips.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
source/tests/pt/model/test_se_e2_a.py (1)

124-140: ⚠️ Potential issue | 🟡 Minor

Model not moved to device, potential device mismatch.

The model is created but not moved to env.DEVICE, while the input tensors are created on env.DEVICE. This could cause device mismatch errors when running on GPU.

Compare with test_consistency in this same file (line 63) which calls .to(env.DEVICE) on the model, and with test_export.py line 92.

Proposed fix
             dd0 = DescrptSeA(
                 self.rcut,
                 self.rcut_smth,
                 self.sel,
                 precision=prec,
                 resnet_dt=idt,
                 seed=GLOBAL_SEED,
                 type_one_side=type_one_side,
-            )
+            ).to(env.DEVICE)
🤖 Fix all issues with AI agents
In `@source/tests/pt/model/test_export.py`:
- Around line 45-74: The test_export_descriptor_se_a creates a DescrptSeA
instance but never moves it to env.DEVICE while inputs are created on
env.DEVICE, causing device-mismatch errors on GPU; fix by sending the model to
the same device (call model.to(env.DEVICE) or model.to(device)) before calling
model.eval() and before export (so adjust around the DescrptSeA instantiation
and before torch.export.export) to ensure model, coord_ext, atype_ext, and nlist
are all on env.DEVICE.
- Around line 158-167: The tearDown method is attempting to remove the
"checkpoint" directory with os.remove, which fails for directories; update the
cleanup logic in tearDown (the method in test_export.py) to check whether
"checkpoint" is a directory and use shutil.rmtree to remove it (mirroring how
"stat_files" is handled), otherwise use os.remove for files — ensure the branch
targets the symbol "checkpoint" and uses shutil.rmtree when
os.path.isdir("checkpoint").
🧹 Nitpick comments (1)
source/tests/pt/model/test_export.py (1)

108-114: Consider extracting ForwardLowerWrapper to reduce duplication.

The ForwardLowerWrapper class is defined identically in two places. Consider moving it to module level to avoid duplication.

Proposed refactor
+class ForwardLowerWrapper(torch.nn.Module):
+    """Wrapper to expose forward_lower for export compatibility."""
+
+    def __init__(self, model):
+        super().__init__()
+        self.model = model
+
+    def forward(self, extended_coord, extended_atype, nlist):
+        return self.model.forward_lower(extended_coord, extended_atype, nlist)
+
+
 class TestExport(unittest.TestCase):
     # ... in test_export_energy_model_se_a:
-        class ForwardLowerWrapper(torch.nn.Module):
-            def __init__(self, model):
-                super().__init__()
-                self.model = model
-
-            def forward(self, extended_coord, extended_atype, nlist):
-                return self.model.forward_lower(extended_coord, extended_atype, nlist)
-
         wrapper = ForwardLowerWrapper(model)

Apply similar change to ExportIntegrationTest.test_export.

Also applies to: 146-152

Comment on lines +45 to +74
def test_export_descriptor_se_a(self):
    """Test DescrptSeA descriptor export."""
    for type_one_side in [True, False]:
        model = DescrptSeA(
            rcut=self.rcut,
            rcut_smth=self.rcut_smth,
            sel=self.sel,
            neuron=self.neuron,
            axis_neuron=self.axis_neuron,
            precision="float32",
            trainable=False,
            type_one_side=type_one_side,
        )
        model.eval()

        nf = 2
        nloc = 5
        nnei = sum(self.sel)
        nall = nloc + 10

        coord_ext = torch.randn(nf, nall * 3, device=env.DEVICE)
        atype_ext = torch.randint(
            0, 2, (nf, nall), dtype=torch.int32, device=env.DEVICE
        )
        nlist = torch.randint(
            0, nall, (nf, nloc, nnei), dtype=torch.int32, device=env.DEVICE
        )

        exported = torch.export.export(model, (coord_ext, atype_ext, nlist))
        self.assertIsNotNone(exported)

⚠️ Potential issue | 🟡 Minor

Model not moved to device, potential device mismatch.

Same issue as in test_se_e2_a.py: the DescrptSeA model is created but not moved to env.DEVICE, while tensors are on env.DEVICE. This will fail on GPU.

Proposed fix
             model = DescrptSeA(
                 rcut=self.rcut,
                 rcut_smth=self.rcut_smth,
                 sel=self.sel,
                 neuron=self.neuron,
                 axis_neuron=self.axis_neuron,
                 precision="float32",
                 trainable=False,
                 type_one_side=type_one_side,
-            )
+            ).to(env.DEVICE)
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
def test_export_descriptor_se_a(self):
    """Test DescrptSeA descriptor export."""
    for type_one_side in [True, False]:
        model = DescrptSeA(
            rcut=self.rcut,
            rcut_smth=self.rcut_smth,
            sel=self.sel,
            neuron=self.neuron,
            axis_neuron=self.axis_neuron,
            precision="float32",
            trainable=False,
            type_one_side=type_one_side,
        ).to(env.DEVICE)
        model.eval()

        nf = 2
        nloc = 5
        nnei = sum(self.sel)
        nall = nloc + 10

        coord_ext = torch.randn(nf, nall * 3, device=env.DEVICE)
        atype_ext = torch.randint(
            0, 2, (nf, nall), dtype=torch.int32, device=env.DEVICE
        )
        nlist = torch.randint(
            0, nall, (nf, nloc, nnei), dtype=torch.int32, device=env.DEVICE
        )

        exported = torch.export.export(model, (coord_ext, atype_ext, nlist))
        self.assertIsNotNone(exported)
🤖 Prompt for AI Agents
In `@source/tests/pt/model/test_export.py` around lines 45 - 74, The
test_export_descriptor_se_a creates a DescrptSeA instance but never moves it to
env.DEVICE while inputs are created on env.DEVICE, causing device-mismatch
errors on GPU; fix by sending the model to the same device (call
model.to(env.DEVICE) or model.to(device)) before calling model.eval() and before
export (so adjust around the DescrptSeA instantiation and before
torch.export.export) to ensure model, coord_ext, atype_ext, and nlist are all on
env.DEVICE.

Comment on lines +158 to +167
def tearDown(self) -> None:
    for f in os.listdir("."):
        if f.startswith("model") and f.endswith("pt"):
            os.remove(f)
        if f in ["lcurve.out", "frozen_model.pth"]:
            os.remove(f)
        if f in ["stat_files"]:
            shutil.rmtree(f)
        if f in ["checkpoint"]:
            os.remove(f)

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

# First, examine the test file to understand the context
cat -n source/tests/pt/model/test_export.py | sed -n '150,175p'

Repository: deepmodeling/deepmd-kit

Length of output: 1278


🏁 Script executed:

# Search for checkpoint creation and usage in test files
rg -n "checkpoint" source/tests/pt/ --type py -C 2 | head -50

Repository: deepmodeling/deepmd-kit

Length of output: 3797


🏁 Script executed:

# Check if there's any checkpoint directory creation or usage patterns
rg -n "checkpoint" source/pt/ --type py -C 2 | head -50

Repository: deepmodeling/deepmd-kit

Length of output: 117


🏁 Script executed:

# Find _file_delete function definition
rg -n "def _file_delete" source/tests/pt/ --type py

Repository: deepmodeling/deepmd-kit

Length of output: 322


🏁 Script executed:

# Search for checkpoint creation in the codebase to understand its nature
rg -n "checkpoint.*mkdir\|mkdir.*checkpoint\|os\.makedirs.*checkpoint" source/ --type py

Repository: deepmodeling/deepmd-kit

Length of output: 49


🏁 Script executed:

# Look at how training creates checkpoints
rg -n "checkpoint" source/deepmd/pt/ --type py -C 1 | head -80

Repository: deepmodeling/deepmd-kit

Length of output: 124


🏁 Script executed:

# Read _file_delete function
cat -n source/tests/pt/test_model_compression_se_a.py | sed -n '25,40p'

Repository: deepmodeling/deepmd-kit

Length of output: 725


🏁 Script executed:

# Check repository structure
ls -la source/ | head -20

Repository: deepmodeling/deepmd-kit

Length of output: 1165


🏁 Script executed:

# Search for where "checkpoint" is created as a directory in tests
rg -n "checkpoint" source/tests/pt/model/ --type py -B 2 -A 2

Repository: deepmodeling/deepmd-kit

Length of output: 729


🏁 Script executed:

# Search for where checkpoint directory is created
rg -n "checkpoint" source/tests/pt/model/ --type py | grep -E "mkdir|makedirs|mkdir_if_not_exist"

Repository: deepmodeling/deepmd-kit

Length of output: 49


🏁 Script executed:

# Check the training code to see if checkpoint is a directory
find source -name "*.py" -type f | xargs rg -l "checkpoint" | head -5 | xargs grep -l "mkdir\|makedirs" | head -3

Repository: deepmodeling/deepmd-kit

Length of output: 49


🏁 Script executed:

# Look for what checkpoint actually contains in training context
rg -n "os\.listdir\|os\.makedirs" source/tests/pt/model/test_export.py -B 5 -A 5

Repository: deepmodeling/deepmd-kit

Length of output: 49


🏁 Script executed:

# Read the entire test_export.py file to understand test setup
cat source/tests/pt/model/test_export.py

Repository: deepmodeling/deepmd-kit

Length of output: 5727


Use shutil.rmtree() for the checkpoint directory in cleanup.

Line 167 uses os.remove("checkpoint"), which only works for files. During training execution (line 136), the trainer creates a checkpoint directory to save model state. Attempting to remove a directory with os.remove() will raise IsADirectoryError. The fix should check the path type before removal, consistent with how stat_files is handled on line 165.

Proposed fix
             if f in ["stat_files"]:
                 shutil.rmtree(f)
-            if f in ["checkpoint"]:
-                os.remove(f)
+            if f == "checkpoint":
+                if os.path.isdir(f):
+                    shutil.rmtree(f)
+                else:
+                    os.remove(f)
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
def tearDown(self) -> None:
    for f in os.listdir("."):
        if f.startswith("model") and f.endswith("pt"):
            os.remove(f)
        if f in ["lcurve.out", "frozen_model.pth"]:
            os.remove(f)
        if f in ["stat_files"]:
            shutil.rmtree(f)
        if f == "checkpoint":
            if os.path.isdir(f):
                shutil.rmtree(f)
            else:
                os.remove(f)
🤖 Prompt for AI Agents
In `@source/tests/pt/model/test_export.py` around lines 158 - 167, The tearDown
method is attempting to remove the "checkpoint" directory with os.remove, which
fails for directories; update the cleanup logic in tearDown (the method in
test_export.py) to check whether "checkpoint" is a directory and use
shutil.rmtree to remove it (mirroring how "stat_files" is handled), otherwise
use os.remove for files — ensure the branch targets the symbol "checkpoint" and
uses shutil.rmtree when os.path.isdir("checkpoint").

@codecov

codecov bot commented Feb 5, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 81.95%. Comparing base (8787b45) to head (26a73cf).

Additional details and impacted files
@@           Coverage Diff           @@
##           master    #5193   +/-   ##
=======================================
  Coverage   81.95%   81.95%           
=======================================
  Files         714      714           
  Lines       73441    73437    -4     
  Branches     3616     3616           
=======================================
- Hits        60187    60185    -2     
+ Misses      12091    12089    -2     
  Partials     1163     1163           

☔ View full report in Codecov by Sentry.


@wanghan-iapcm wanghan-iapcm changed the title feat(pt): make se_a compatible with torch.export feat(pt): make se_a and dpa1 compatible with torch.export Feb 5, 2026
    self.fail(f"Full model compilation failed: {e}")

# 3. Execute
program = Runtime.get().load_program(executorch_program.buffer)

Check failure

Code scanning / CodeQL

Potentially uninitialized local variable (Error, test)

Local variable 'executorch_program' may be used before it is initialized.
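
A minimal, self-contained sketch of one way to satisfy this check (toy names, not the PR's test): bind the variable before the try block so a later read can never hit an unbound local, even when the static analysis cannot see that self.fail() raises.

import unittest


class ExportGuardExample(unittest.TestCase):
    def test_guarded_use(self):
        program_buffer = None  # bound up front, so later reads are always defined
        try:
            program_buffer = b"exported-program"  # stands in for the real export/lowering step
        except Exception as e:
            self.fail(f"Full model compilation failed: {e}")

        if program_buffer is None:
            self.skipTest("no program produced")
        self.assertTrue(len(program_buffer) > 0)


if __name__ == "__main__":
    unittest.main()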
# We need to wrap it to return specific tensor(s) or handle dict if export supports it (export usually flattens).

# Pristine output (dict)
pristine_dict = model.forward_lower(coord_ext, atype_ext, nlist)

Check notice

Code scanning / CodeQL

Unused local variable (Note, test)

Variable pristine_dict is not used.
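
One way to resolve this, sketched with a toy dict-returning module instead of the PR's model (the energy key and the wrapper below are assumptions, not the PR's code): compare the eager "pristine" output against the exported program rather than discarding it.

import torch


class ToyDictModel(torch.nn.Module):
    def forward(self, x):
        return {"energy": (x * x).sum(dim=-1)}


class DictToTensor(torch.nn.Module):
    """Wrap the dict-returning forward so export sees a plain tensor output."""

    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, x):
        return self.model(x)["energy"]


model = ToyDictModel().eval()
x = torch.randn(2, 3)

pristine = model(x)["energy"]  # eager reference output, now actually used
exported = torch.export.export(DictToTensor(model), (x,))
torch.testing.assert_close(exported.module()(x), pristine)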
