Nvidia’s upcoming RTX 50 series GPUs are expected to offer game-changing frame generation support, potentially delivering flagship RTX 40 series performance at a far more affordable price. However, gamers are divided: many have recently criticized the RTX 5070’s low VRAM, questioning whether it can handle higher resolutions despite Nvidia marketing it as on par with the RTX 4090 when DLSS 4 is enabled.
Interestingly, a key detail about Nvidia’s upcoming frame generation technology seems to have gone unnoticed, and it offers insight into how the company plans to address this issue. In a blog post discussing DLSS 4, Nvidia stated, “Faster frame rates and lower total VRAM consumption with the new frame generation model.” The key phrase is lower VRAM consumption, which is a big deal for GPUs like the RTX 5070 that ship with 12GB of VRAM.
As for why the 5070 has only 12GB of VRAM, Jensen Huang was asked exactly that in a recent CES 2025 interview. His answer comes down to finding the right balance, which is reflected in the improvements Nvidia is making with DLSS 4.
“We’ve always been looking for the optimal balance between the compute engine, computational power, bandwidth, and memory capacity. While it’s difficult to achieve perfection, that’s our goal. Memory and computational power need to match. Too much memory would waste computational resources, and too much computational power would be limited by memory capacity. Finding this balance is a significant challenge for us.”
Jensen Huang, Nvidia CEO & Co-founder
Nvidia’s frame-generation AI model significantly reduces computational cost
Previously, Nvidia’s DLSS 3 technology used AI to generate one additional frame per rendered frame by leveraging game data like motion vectors, depth, and an optical flow field. While effective, this approach required both the optical flow field and the AI model to run once for every generated frame, making multi-frame generation computationally expensive. With DLSS 4, Nvidia optimized the process, enabling multiple frames to be generated more efficiently through hardware and software advancements.
According to Nvidia’s blog post, the optimized AI model in DLSS 4 is not only 40% faster but also uses 30% less VRAM, making it more memory-efficient. Additionally, instead of running the AI model for every new frame, it processes game data once per rendered frame and generates multiple frames from that single pass. Nvidia also replaced the hardware-based optical flow generator used in the GeForce RTX 40 series with an AI-driven optical flow model, which is faster and requires fewer resources.
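The scheduling change Nvidia describes (one AI-model pass per generated frame in DLSS 3, versus a single pass per rendered frame in DLSS 4) can be sketched with a toy interpolation model. All function names and the linear-blend math below are invented purely for illustration; Nvidia's actual pipeline is proprietary and works on real image data, not numbers:

```python
# Toy sketch of the scheduling difference only, NOT Nvidia's API.
# Frames are plain floats and "optical flow" is just the motion delta.

def optical_flow(prev_frame, next_frame):
    # Stand-in for the optical flow field between two rendered frames.
    return next_frame - prev_frame

def dlss3_style(prev_frame, next_frame, n_generated):
    # DLSS 3 approach: flow field and AI model run once per GENERATED
    # frame, so cost grows linearly with the number of extra frames.
    frames, model_runs = [], 0
    for i in range(1, n_generated + 1):
        flow = optical_flow(prev_frame, next_frame)   # recomputed each time
        model_runs += 1                               # one AI pass per frame
        frames.append(prev_frame + flow * i / (n_generated + 1))
    return frames, model_runs

def dlss4_style(prev_frame, next_frame, n_generated):
    # DLSS 4 approach: game data is processed once per RENDERED frame,
    # and multiple intermediate frames come from that single pass.
    flow = optical_flow(prev_frame, next_frame)       # one AI-driven flow pass
    model_runs = 1                                    # single AI-model pass
    frames = [prev_frame + flow * i / (n_generated + 1)
              for i in range(1, n_generated + 1)]
    return frames, model_runs

old_frames, old_runs = dlss3_style(0.0, 4.0, 3)
new_frames, new_runs = dlss4_style(0.0, 4.0, 3)
print(old_runs, new_runs)   # 3 vs 1 model passes for the same three frames
print(new_frames)           # [1.0, 2.0, 3.0]
```

The toy numbers make the trade-off visible: producing three extra frames costs three model passes under the old scheme but only one under the new one, which is where the speed and memory savings come from.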

Nvidia showcased the results of its new frame generation AI model in Warhammer 40,000: Darktide, where DLSS 4 Frame Generation delivered a 10% higher frame rate while using 400MB less memory at 4K, max settings. That could be a meaningful boost for 4K gaming, and on cards with less VRAM, even a modest saving can make a real difference. While Multi Frame Generation is exclusive to RTX 50 series GPUs, Nvidia stated that the enhanced frame generation with lower VRAM usage will also work on RTX 40 series cards.
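As a rough sanity check on that figure (the 400MB saving is Nvidia's own Darktide number; real-world savings will vary by game and settings), here is what it amounts to on a 12GB card:

```python
# Back-of-the-envelope math on Nvidia's quoted Darktide saving.
total_vram_mb = 12 * 1024   # RTX 5070's 12GB, in MiB
saved_mb = 400              # memory saved at 4K, max settings (Nvidia's figure)

headroom_gain = saved_mb / total_vram_mb
print(f"{headroom_gain:.1%} of the card's VRAM freed up")  # 3.3%
```

A ~3% saving sounds small, but on a card that can sit near its VRAM ceiling at 4K, a few hundred megabytes can be the difference between staying in video memory and spilling into much slower system RAM.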
DLSS 4 Multi Frame Generation only works in supported games
However, there is a catch. Nvidia says you’ll only be able to take advantage of boosted performance and reduced VRAM usage on supported games that work with Multi Frame Generation. Currently, 75 games and apps are confirmed to support this technology on RTX 50 series GPUs at launch, including titles like Alan Wake 2, Cyberpunk 2077, Indiana Jones and the Great Circle, and more.
Nvidia also mentions that updating to the latest NVIDIA app version will let you enable DLSS 4 in games that haven’t yet been added to the list, via the DLSS Override feature. However, the company hasn’t said much about how effective this forced support will be compared to native support. Bear in mind that these numbers come from Nvidia’s own presentation, and while the lower VRAM usage and enhanced features are likely to improve performance, the technology is unlikely to be perfect from day one. Even if it only comes close to that ideal, it will still mark a significant step toward an AI-powered gaming future.