
Add generic support for Intel Gaudi accelerator (hpu device) #11328

Open · wants to merge 5 commits into main

Conversation

Contributor

@dsocek dsocek commented Apr 15, 2025

What does this PR do?

Adds generic support for the HPU (Intel Gaudi) device inside HF Diffusers. This exposes Gaudi to a wider audience and allows all available Diffusers pipelines to run on it, functional and accelerated via the HPU device. Some of these pipelines are fully optimized in Optimum-Habana, but many are not available there, which blocks users from running them at all.

This PR adds HPU support by relying on the GPU-HPU Migration Toolkit under the hood (see https://docs.habana.ai/en/latest/PyTorch/PyTorch_Model_Porting/GPU_Migration_Toolkit/GPU_Migration_Toolkit.html). With it in place, one can run any Diffusers pipeline out of the box directly from HF Diffusers, even if that pipeline has not been ported to Optimum-Habana; it will not be particularly optimized, but it will be functional and HPU-accelerated.

For example, with this support added, one can run the Sana pipeline (which is NOT in Optimum-Habana) on HPU directly via Hugging Face Diffusers like this:

import torch
from diffusers import SanaPipeline
pipe = SanaPipeline.from_pretrained("Efficient-Large-Model/Sana_1600M_1024px_BF16_diffusers", torch_dtype=torch.bfloat16)
pipe.to("hpu") # <-- KEY CHANGE
image = pipe(prompt='a cyberpunk cat with a neon sign that says "Sana"').images[0]
image.save("sana.png")

I tested 15 different Diffusers pipelines with this PR on Intel Gaudi2 and Gaudi3 devices. All of them run OK except for one, which fails because it uses a complex data type (HPU does not support complex dtypes):

Test results

| Pipeline | Diffusers Class | Task | No Errors | Output Quality |
|---|---|---|---|---|
| SD | StableDiffusionPipeline | text-to-image | ✅ | Good |
| SDXL | StableDiffusionXLPipeline | text-to-image | ✅ | Good |
| SD3 | StableDiffusion3Pipeline | text-to-image | ✅ | Good |
| FLUX | FluxPipeline | text-to-image | ✅ | Good |
| SD3 I2I | StableDiffusion3Img2ImgPipeline | image-to-image | ✅ | Good |
| Sana | SanaPipeline | text-to-image | ✅ | Good |
| Lumina2 | Lumina2Text2ImgPipeline | text-to-image | ❌* | - |
| Latte | LattePipeline | text-to-video | ✅ | Good |
| Kolors | KolorsPipeline | text-to-image | ✅ | Good |
| Marigold | MarigoldDepthPipeline | image-to-depth | ✅ | Good |
| Music LDM | MusicLDMPipeline | text-to-audio | ✅ | Good |
| Omni Gen | OmniGenPipeline | text-to-image | ✅ | Good |
| Paint By Example | PaintByExamplePipeline | in-painting | ✅ | Good |
| LTX | LTXPipeline | text-to-video | ✅ | Good |
| SDXL+CN | StableDiffusionXLControlNetPipeline | controlled-text-to-image | ✅ | Good |

*❌ The Lumina2 error occurs because the pipeline uses a complex data type, which is currently not supported on HPU. This failure is expected behavior.

Contributor

@regisss regisss left a comment

LGTM

Comment on lines 448 to 453
# Enable generic support for intel gaudi accelerator using GPU/HPU migration
if device_type == "hpu" and kwargs.pop("hpu_migration", True):
os.environ["PT_HPU_GPU_MIGRATION"] = "1"
os.environ["PT_HPU_MAX_COMPOUND_OP_SIZE"] = "1"

import habana_frameworks.torch.core as htcore # noqa: F401
Member

Does this import need to always be present?

import habana_frameworks.torch.core as htcore

?

Also, since we're setting some environment variables here, maybe we could logger.debug() this info?

Contributor Author

@regisss @sayakpaul
Thanks for the quick review!

@sayakpaul
Yes, the import is needed; otherwise we get ModuleNotFoundError: No module named 'torch.hpu'. This requirement is also documented here (see step 2).

Good idea to add logger.debug(); I will add this next.

@sayakpaul sayakpaul requested review from DN6 and yiyixuxu April 16, 2025 08:25
@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@dsocek
Contributor Author

dsocek commented Apr 16, 2025

@sayakpaul added loggers

logger.debug('Environment variable set: PT_HPU_MAX_COMPOUND_OP_SIZE=1')

try:
import habana_frameworks.torch.core as htcore # noqa: F401
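
The excerpt above is truncated mid-try. A plausible completed form is sketched below; the stand-in logger and the except branch are assumptions for illustration, not the exact lines from the PR.

import logging
import os

logger = logging.getLogger(__name__)  # stand-in; diffusers uses its own logging utility

os.environ["PT_HPU_MAX_COMPOUND_OP_SIZE"] = "1"
logger.debug("Environment variable set: PT_HPU_MAX_COMPOUND_OP_SIZE=1")

try:
    import habana_frameworks.torch.core as htcore  # noqa: F401
except ImportError as e:
    # Assumed error handling: fail loudly if the Gaudi PyTorch bridge is missing.
    raise ImportError(
        "Using the 'hpu' device requires habana_frameworks.torch to be installed."
    ) from e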
Collaborator

Can this package be installed via PyPI? I wasn't able to find any instructions here:
https://docs.habana.ai/en/latest/PyTorch/PyTorch_Model_Porting/GPU_Migration_Toolkit/GPU_Migration_Toolkit.html#enabling-the-gpu-migration-toolkit

Could we add an import check like is_habana_available, similar to how we have it here?

_torch_xla_available, _torch_xla_version = _is_package_available("torch_xla")

Contributor Author

@DN6 Thanks for the review!

The habana_frameworks.torch package is the Gaudi PyTorch bridge, part of the larger software stack for Intel Gaudi accelerators. You can see the (non-trivial) installation instructions here and, more specifically, here. However, for all practical purposes users will be working inside the official release Docker image, which already has it installed. @regisss, can you pitch in with your view on this?

Thanks for the good suggestion about using an importlib-based check is_habana_available(); I will add this next.

Contributor Author

@DN6 Unfortunately, we can't use importlib here. _is_package_available falsely reports that "habana_frameworks.torch.core" is not available because it cannot obtain the version via importlib_metadata, even though importlib.util.find_spec("habana_frameworks.torch.core") finds the package just fine.

import importlib.util
import sys

# The package importlib_metadata is in a different place, depending on the python version.
if sys.version_info < (3, 8):
    import importlib_metadata
else:
    import importlib.metadata as importlib_metadata

def _is_package_available(pkg_name: str):
    pkg_exists = importlib.util.find_spec(pkg_name) is not None
    pkg_version = "N/A"

    if pkg_exists:
        try:
            pkg_version = importlib_metadata.version(pkg_name)
            print(f"Successfully imported {pkg_name} version {pkg_version}")
        except (ImportError, importlib_metadata.PackageNotFoundError):
            pkg_exists = False

    return pkg_exists, pkg_version

print(importlib.util.find_spec("habana_frameworks.torch.core") is not None)
print(_is_package_available("habana_frameworks.torch.core"))

Output:
True
(False, 'N/A')

Any other suggestion for handling this import?
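
One possible direction, sketched here under the assumption that a presence-only check is enough (no version lookup), is to rely on importlib.util.find_spec alone; this is roughly where the thread ends up:

import importlib.util

def is_habana_available() -> bool:
    # Presence check only: habana_frameworks submodules do not expose package
    # metadata, so importlib_metadata.version() cannot be used here.
    return (
        importlib.util.find_spec("habana_frameworks") is not None
        and importlib.util.find_spec("habana_frameworks.torch") is not None
    )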

Contributor

@dsocek Here is how we do it in Accelerate: https://github.com/huggingface/accelerate/blob/34c1779828b3d0769992e6492e6de93d869f71b5/src/accelerate/utils/imports.py#L435

habana_frameworks.torch.core should be there anyway if habana_frameworks is there, right?

Contributor Author

@regisss Great example of how it's done in Accelerate!

I did a quick test; it seems that at minimum we need to import habana_frameworks.torch (importing only habana_frameworks would cause an error during migration: ModuleNotFoundError: No module named 'torch.hpu').

I will copy most of what Accelerate does and update this PR.
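
A minimal sketch of that quick test (illustrative only, assuming a machine with the Gaudi software stack installed):

# Importing only habana_frameworks is not sufficient; habana_frameworks.torch
# must be imported so that the torch.hpu backend is registered.
import habana_frameworks.torch  # noqa: F401
import torch

print(hasattr(torch, "hpu") and torch.hpu.is_available())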

@dsocek
Contributor Author

dsocek commented Apr 17, 2025

@DN6 @regisss
Refactored the code to use is_hpu_available(), in the style of how it's done in Accelerate.

Contributor

@regisss regisss left a comment

LGTM

@dsocek
Contributor Author

dsocek commented Apr 17, 2025

@yiyixuxu fixed style, sorry about that :)

@@ -445,6 +446,11 @@ def module_is_offloaded(module):
f"It seems like you have activated model offloading by calling `enable_model_cpu_offload`, but are now manually moving the pipeline to GPU. It is strongly recommended against doing so as memory gains from offloading are likely to be lost. Offloading automatically takes care of moving the individual components {', '.join(self.components.keys())} to GPU when needed. To make sure offloading works as expected, you should consider moving the pipeline back to CPU: `pipeline.to('cpu')` or removing the move altogether if you use offloading."
)

# Enable generic support for Intel Gaudi accelerator using GPU/HPU migration
if kwargs.pop("hpu_migration", True) and is_hpu_available():
Collaborator

Shouldn't we keep the device_type check that was here earlier? E.g. if an HPU is available on the machine and we call pipe.to(torch.float16), this path would still run and set the device silently, right?

Contributor Author

@DN6 With the new is_hpu_available() we don't need an explicit device check. The device will silently be set to hpu within is_hpu_available() when all checks for the HPU environment pass.

Collaborator

Ohh, this would not be expected by our users and is not aligned with our design philosophy: they would need to explicitly set device_type if they want to use a non-default one.

Contributor Author

If habana_frameworks is hooked into torch, then HPU would be the default device.

Collaborator

@dsocek So if I understand correctly, if an HPU is available and the pipeline needs to run on CPU, it won't unless it's explicitly moved?

Assuming the following snippet runs on an HPU machine, e.g.:

# automatically run on HPU if it is available? 
pipe = DiffusionPipeline.from_pretrained("..")
pipe(**args)

To run on CPU you would have to explicitly call pipe.to("cpu")?

Contributor Author

@DN6 @yiyixuxu
Apologies for the earlier confusion - my previous comment was incorrect. 🙇

I was referring to the if statement in the to() function of this PR, where I originally used if device == "hpu" and, after refactoring, now use is_hpu_available(). We no longer have an explicit "hpu" check in that if statement.
This new check will not change any behavior a user would expect:

The current PR's behavior is as follows:

  • pipe.to() not called: the device used will be the (default) CPU
  • pipe.to("cpu"): the device used will be CPU
  • pipe.to("hpu"): the device used will be HPU

def is_hpu_available():
    if (
        importlib.util.find_spec("habana_frameworks") is None
        or importlib.util.find_spec("habana_frameworks.torch") is None
    ):
        return False

    os.environ["PT_HPU_GPU_MIGRATION"] = "1"
    logger.debug("Environment variable set: PT_HPU_GPU_MIGRATION=1")

    import habana_frameworks.torch  # noqa: F401
    import torch

    return hasattr(torch, "hpu") and torch.hpu.is_available()

Here we first check whether "habana_frameworks" and "habana_frameworks.torch" are present in the environment.

If they are, we set the PT_HPU_GPU_MIGRATION runtime variable and import habana_frameworks.torch.
We must define this runtime variable before importing habana_frameworks.torch.

All of this still keeps the device on CPU unless the user explicitly sets it to HPU.

Finally, we also do hard checks: hasattr(torch, "hpu") and torch.hpu.is_available().

Let me know if further adjustments are needed :)
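
For reference, a rough sketch of how the guard in DiffusionPipeline.to() ties into this check, based on the diff excerpt above (this is a fragment from inside to(), where kwargs, logger, and is_hpu_available already exist; it is not the exact merged code):

# Enable generic support for Intel Gaudi accelerator using GPU/HPU migration
if kwargs.pop("hpu_migration", True) and is_hpu_available():
    # PT_HPU_GPU_MIGRATION is set inside is_hpu_available(); here only the
    # compound-op-size limit is set and the Gaudi core bridge is imported.
    os.environ["PT_HPU_MAX_COMPOUND_OP_SIZE"] = "1"
    logger.debug("Environment variable set: PT_HPU_MAX_COMPOUND_OP_SIZE=1")

    import habana_frameworks.torch.core as htcore  # noqa: F401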

Comment on lines +346 to +352
os.environ["PT_HPU_GPU_MIGRATION"] = "1"
logger.debug("Environment variable set: PT_HPU_GPU_MIGRATION=1")

import habana_frameworks.torch # noqa: F401
import torch

return hasattr(torch, "hpu") and torch.hpu.is_available()
Collaborator

Suggested change
os.environ["PT_HPU_GPU_MIGRATION"] = "1"
logger.debug("Environment variable set: PT_HPU_GPU_MIGRATION=1")
import habana_frameworks.torch # noqa: F401
import torch
return hasattr(torch, "hpu") and torch.hpu.is_available()

Ohh, the check should just return True or False, indicating whether HPU is available or not.

Contributor Author

@yiyixuxu Yes, this does return True or False. We need to enable migration for HPU before importing habana_frameworks.torch, which is why I set it there as well.
