[torchlib] Pow(int, float) isn't converted correctly #2213


Open
justinchuby opened this issue Apr 17, 2025 · 0 comments
Labels:
- bug (Something isn't working)
- contribution welcome (We welcome code contributions for this)
- module: torchlib (Related to the torch/aten function lib in development)


```python
import torch


class PowModel(torch.nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x ** 0.5


model = PowModel()
print(model(torch.tensor(2)))
prog = torch.onnx.export(PowModel(), (torch.tensor(2),), dynamo=True)
print(prog(torch.tensor(2)))

print(prog)
```

Here PyTorch promotes the integer base to float, so `x ** 0.5` yields a float32 tensor, but the exported ONNX graph keeps the base as INT64 and `Pow` returns an integer result:

```
tensor(1.4142)
(tensor(1),)
ONNXProgram(
    model=
        <
            ir_version=10,
            opset_imports={'': 18},
            producer_name='pytorch',
            producer_version='2.8.0.dev20250414+cpu',
            domain=None,
            model_version=None,
        >
        graph(
            name=main_graph,
            inputs=(
                %"x"<INT64,[]>
            ),
            outputs=(
                %"pow_1"<INT64,[]>
            ),
        ) {
            0 |  # node_Constant_0
                 %"val_0"<FLOAT,[]> ⬅️ ::Constant() {value=Tensor<FLOAT,[]>(array(0.5, dtype=float32), name=None)}
            1 |  # node_Pow_1
                 %"pow_1"<INT64,[]> ⬅️ ::Pow(%"x", %"val_0")
            return %"pow_1"<INT64,[]>
        }


    ,
    exported_program=
        ExportedProgram:
            class GraphModule(torch.nn.Module):
                def forward(self, x: "i64[]"):
                     # File: /home/justinchu/dev/onnxscript/test.py:7 in forward, code: return x ** 0.5
                    pow_1: "f32[]" = torch.ops.aten.pow.Tensor_Scalar(x, 0.5);  x = None
                    return (pow_1,)

        Graph signature:
            # inputs
            x: USER_INPUT

            # outputs
            pow_1: USER_OUTPUT

        Range constraints: {}

)
```
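The expected promotion behavior can be sketched as follows. This is an illustrative sketch in NumPy, not the actual torchlib conversion code; the function name `pow_with_promotion` is an assumption for illustration. The idea is that when the base is an integer tensor and the exponent is floating point, the conversion should insert a cast of the base to the exponent's dtype before emitting `Pow`, mirroring PyTorch's type-promotion rules.

```python
import numpy as np


def pow_with_promotion(base: np.ndarray, exponent: np.ndarray) -> np.ndarray:
    # Hypothetical sketch of the promotion the torchlib Pow conversion
    # would need: an int base raised to a float exponent is first cast
    # to the exponent's floating-point dtype (as PyTorch does).
    if np.issubdtype(base.dtype, np.integer) and np.issubdtype(
        exponent.dtype, np.floating
    ):
        base = base.astype(exponent.dtype)
    return np.power(base, exponent)


x = np.array(2, dtype=np.int64)
e = np.array(0.5, dtype=np.float32)
print(pow_with_promotion(x, e))  # ≈ 1.4142 as float32, matching tensor(1.4142)
```

Without the cast, an integer `Pow` truncates the result to 1, which is the incorrect output shown above.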
@justinchuby added the bug (Something isn't working), module: torchlib (Related to the torch/aten function lib in development), and contribution welcome (We welcome code contributions for this) labels on Apr 17, 2025