Real-time rendering

What is it?

Real-time rendering is the process by which a computer continuously computes and displays 3D images fast enough that a user sees interactive results without noticeable delay. It requires rapid calculation of geometry, lighting, shading, and post-processing within a strict time budget (the frame budget) to maintain smooth frame rates and low latency. In 3D contexts such as games, AR, VR, and other XR applications, techniques like culling, level-of-detail, GPU shaders, and increasingly real-time ray tracing are used to balance visual fidelity with performance.
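The frame budget mentioned above can be made concrete with a minimal sketch. The target frame rates below (60 FPS for desktop, 90 FPS for VR) are common illustrative values, not tied to any particular engine:

```python
def frame_budget_ms(target_fps: float) -> float:
    """Time available to render one frame at the given frame rate."""
    return 1000.0 / target_fps

# Typical targets: 60 FPS for desktop games, 90 FPS for VR headsets.
desktop_budget = frame_budget_ms(60)   # ~16.67 ms per frame
vr_budget = frame_budget_ms(90)        # ~11.11 ms per frame
```

Every per-frame cost (geometry, lighting, shading, post-processing) has to fit inside that budget, which is why VR, with its higher frame-rate requirement, leaves noticeably less time per frame.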

Practical example

Imagine a VR experience where you walk through a virtual city: real-time rendering recalculates buildings, shadows, and reflections every frame as you move or change viewpoint. 3D artists must optimize assets (e.g., reduce polygon counts, use texture atlases and normal maps) so the engine can meet the frame budget. AR on a smartphone adds further constraints, such as overlay rendering, tracking corrections, and power consumption, so techniques like foveated rendering and occlusion culling are used to reduce GPU work while keeping perceived quality high.
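Level-of-detail selection, one of the optimizations described above, can be sketched as a simple distance check. The threshold distances here are illustrative scene units, not engine defaults:

```python
def select_lod(distance: float, thresholds=(10.0, 30.0, 80.0)) -> int:
    """Pick a level of detail: 0 = full-detail mesh, higher = coarser mesh.

    Thresholds are hypothetical distances chosen for illustration.
    """
    for lod, limit in enumerate(thresholds):
        if distance < limit:
            return lod
    return len(thresholds)  # coarsest LOD beyond the last threshold

# A nearby building gets the full mesh; a distant one a simplified proxy.
near_lod = select_lod(5.0)    # full detail
far_lod = select_lod(200.0)   # coarsest proxy
```

Real engines typically blend or hysteresis-smooth these transitions to avoid visible popping, but the core idea is the same: spend polygons where the viewer can actually see the difference.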

Test your knowledge

Why do real-time 3D engines use techniques like level-of-detail (LOD) and culling, especially in AR/VR applications?
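As a hint, a minimal distance-culling sketch (with hypothetical scene data) shows how skipping far-away objects shrinks the per-frame workload:

```python
def cull_by_distance(objects, camera_pos, max_distance):
    """Return only objects close enough to be worth drawing.

    objects: list of (name, (x, y, z)) tuples; a stand-in for real scene data.
    """
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5
    return [name for name, pos in objects if dist(pos, camera_pos) <= max_distance]

scene = [("tower", (0, 0, 5)), ("bridge", (0, 0, 120))]
visible = cull_by_distance(scene, (0, 0, 0), 100.0)  # only "tower" survives
```

Production engines use frustum and occlusion tests rather than a plain distance check, but the principle is identical: anything the camera cannot see should never reach the GPU.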

