Intel XeSS Upscaler Plugin Now Available in Unreal Engine
Unreal Engine now has Nvidia DLSS and NIS, AMD FSR, and Intel XeSS integration
After announcing Unreal Engine support in March, Intel has finally launched XeSS integration for Unreal Engine 4 and 5 as a plugin. Developers can now add XeSS to Unreal Engine projects without manually integrating the XeSS SDK.
The plugin's release means Unreal Engine now supports all four major upscalers from Nvidia, AMD, and Intel: DLSS, NIS, FSR, and XeSS. That's in addition to Unreal Engine's own built-in upscaling solutions.
The XeSS plugin works with Unreal Engine versions 4.26, 4.27, and 5.0. For the time being it is available exclusively on GitHub, but we expect it to arrive on the Unreal Engine Marketplace soon, as AMD's FSR and Nvidia's DLSS plugins did.
Intel's plugin replaces Unreal Engine's Temporal Anti-Aliasing (TAA) with XeSS, applying the upscaler after the rasterization and lighting stages of the rendering pipeline, at the beginning of the post-processing stage. This way, XeSS upscales only the parts of the frame that need it, while elements like the HUD are rendered at native resolution for better image quality.
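To make that ordering concrete, here is a minimal C++ sketch of the frame flow described above. The function and struct names are ours, purely illustrative, and not Unreal Engine's or Intel's actual API; the point is simply that the upscale happens after lighting but before native-resolution UI composition.

```cpp
#include <cstdio>

// Hypothetical resolution type for illustration only.
struct Resolution { int Width, Height; };

// Placeholder pipeline stages; names are illustrative, not the engine's API.
void RasterizeAndLight(Resolution RenderRes) {
    std::printf("Scene rendered at %dx%d\n", RenderRes.Width, RenderRes.Height);
}

void UpscaleWithXeSS(Resolution From, Resolution To) {
    // XeSS slots in here, at the start of post-processing, where TAA would run.
    std::printf("XeSS upscales %dx%d -> %dx%d\n",
                From.Width, From.Height, To.Width, To.Height);
}

void DrawHUD(Resolution NativeRes) {
    // The HUD is composited after upscaling, so it stays at native resolution.
    std::printf("HUD drawn at %dx%d\n", NativeRes.Width, NativeRes.Height);
}

int main() {
    Resolution Native{3840, 2160};
    Resolution Render{2560, 1440};   // e.g., a quality-oriented scaling preset

    RasterizeAndLight(Render);       // geometry and lighting at reduced resolution
    UpscaleWithXeSS(Render, Native); // temporal upscale before post-processing
    DrawHUD(Native);                 // the HUD bypasses the upscaler entirely
    return 0;
}
```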
For the uninitiated, Xe Super Sampling (XeSS) is Intel's take on temporal resolution upscaling, competing directly with AMD's FidelityFX Super Resolution (FSR) and Nvidia's Deep Learning Super Sampling (DLSS). From a design standpoint, XeSS aligns closely with DLSS: it is an AI-based upscaler that uses trained neural networks to reconstruct higher-resolution images. But unlike DLSS, XeSS has different modes of operation for different GPU types.
The first is a "higher level" version that runs on the XMX AI cores found exclusively in Intel's Arc Alchemist GPUs; the second is a "lower level" mode that uses the DP4a instruction, allowing it to run on other GPU types, including those from Nvidia and AMD.
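A rough sketch of how such a two-path dispatch could look is below. The capability flags and kernel names are our own assumptions for illustration, not Intel's SDK; real detection would query the graphics API for hardware features.

```cpp
#include <iostream>

// Illustrative capability flags; a real implementation would query the
// graphics API rather than hardcode these.
struct GpuCaps {
    bool HasXmxCores;  // Intel Arc Alchemist XMX matrix engines
    bool SupportsDp4a; // packed INT8 dot-product instruction
};

// Hypothetical kernel paths; these names are ours, not Intel's API.
void RunXessXmxPath()  { std::cout << "XMX path: matrix-engine INT8 network\n"; }
void RunXessDp4aPath() { std::cout << "DP4a path: shader-core INT8 network\n"; }

void DispatchXeSS(const GpuCaps& Caps) {
    if (Caps.HasXmxCores) {
        RunXessXmxPath();   // dedicated AI hardware: the "higher level" mode
    } else if (Caps.SupportsDp4a) {
        RunXessDp4aPath();  // fallback for Nvidia, AMD, and other GPUs
    } else {
        std::cout << "XeSS unavailable on this GPU\n";
    }
}

int main() {
    DispatchXeSS({ /*HasXmxCores=*/false, /*SupportsDp4a=*/true });
    return 0;
}
```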
We don't know much about these modes or their actual quality and performance differences, but we do know that the DP4a model uses a separately trained network from the version that runs on Intel's XMX cores. A different network doesn't necessarily mean better or worse image quality, but we wouldn't be surprised if the DP4a version makes some performance and visual sacrifices.
DP4a runs its INT8 operations on the GPU's general-purpose shader cores, which is much slower than running them on Intel's XMX cores. XMX units are dedicated matrix engines built for INT8 math, so they can process those operations far more quickly.
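To illustrate what DP4a actually computes, here is a minimal C++ sketch of its scalar equivalent: a dot product of four packed 8-bit integers accumulated into a 32-bit result, the building block of INT8 neural-network inference. The function name is ours; on the GPU, this entire loop executes as a single hardware instruction.

```cpp
#include <cstdint>
#include <cstdio>

// Scalar equivalent of the DP4a instruction: multiply four signed 8-bit
// lanes pairwise, sum the products, and add a 32-bit accumulator.
// On a GPU shader core, this whole loop is one instruction.
int32_t Dp4aScalar(const int8_t a[4], const int8_t b[4], int32_t acc) {
    for (int i = 0; i < 4; ++i) {
        acc += static_cast<int32_t>(a[i]) * static_cast<int32_t>(b[i]);
    }
    return acc;
}

int main() {
    const int8_t Weights[4]     = { 3, -1,  4, 2 };  // e.g., quantized network weights
    const int8_t Activations[4] = { 5,  6, -2, 7 };  // e.g., quantized pixel features
    // 3*5 + (-1)*6 + 4*(-2) + 2*7 = 15 - 6 - 8 + 14 = 15
    std::printf("dp4a result: %d\n", Dp4aScalar(Weights, Activations, 0));
    return 0;
}
```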
Aaron Klotz is a contributing writer for Tom's Hardware, covering news related to computer hardware such as CPUs and graphics cards.