Geometry outside your field of view, as well as occluded parts, has always been culled in 3D rendering, ever since people started doing it.

PoolCleaningRobot said:
Wot. They're talking about the game's field of vision, not your eyes. Basically, let's say you're playing an FPS: the game only needs to render everything in your field of vision. You can't see anything behind you, so the game saves resources by not loading them.

Somebloke said:
Given that I doubt the Kinect camera can manage all that much eye-tracking precision, with several metres between the user and the screen and likely dark rooms, I am going to assume this will mostly be a matter of concentrating detail around contextual things like the centre of the screen, for first-person views, or around the cursor if the title has one, or in places to which the game designers figure themselves attracting viewer attention, e.g. because that's where the action takes place.
Will it be annoying not to get to choose which part of the screen you want to inspect more closely, at a high LOD, without having to move the in-game focus over there (EDIT: ...such as with the highly irritating depth-of-field effects you may have seen in games before), or will we never notice? What about onlookers, if the player is not alone? Time will tell, I guess. :7
This is about switching to lower-detail rendering for things that are on screen (such as models with fewer polygons, fewer texture layers, simpler particle systems, etc. - not smaller bitmap sizes, which offer little to no speed gain; some people seem to misunderstand the concept of mipmapping), based on criteria in addition to the usual distance-based level-of-detail (LOD) complexity reduction.
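To make that concrete, here is a minimal C++ sketch of what "extra criteria on top of distance LOD" could look like. All names (pickLodLevel, detailBias, etc.) are hypothetical, not from any actual engine: the usual distance-based pick just gets an extra bias term that other criteria can feed into.

```cpp
#include <algorithm>
#include <cmath>

constexpr int kLodLevels = 4;          // 0 = full detail, 3 = coarsest

// Plain distance-based LOD: one level coarser per doubling of distance past lodStart.
int distanceLod(float distance, float lodStart) {
    if (distance <= lodStart) return 0;
    int level = static_cast<int>(std::log2(distance / lodStart)) + 1;
    return std::min(level, kLodLevels - 1);
}

// Extended pick: an extra bias (0 = focus of attention, positive = peripheral)
// simply pushes the object one or more levels coarser than distance alone would.
int pickLodLevel(float distance, float lodStart, int detailBias) {
    return std::min(distanceLod(distance, lodStart) + detailBias, kLodLevels - 1);
}
```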
The idea is that if you are looking where you aim, eyes on the crosshair, stuff towards the edge of the screen will be in your peripheral vision and can be rendered with less detail, because you are unlikely to notice it much.
The fovea of the human eye, the only part that sees at full sharpness, is really tiny: it covers only a degree or two of your visual field.
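And a matching sketch (again, hypothetical names and purely illustrative thresholds) of how the angle between the gaze direction and the direction to an object could be turned into the detailBias used above: full detail near the gaze point, progressively coarser towards the edge of the view.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static float length(Vec3 a)      { return std::sqrt(dot(a, a)); }

// Angle (in degrees) between the gaze direction and the direction to an object,
// both taken from the camera position.
float eccentricityDeg(Vec3 gazeDir, Vec3 toObject) {
    float c = dot(gazeDir, toObject) / (length(gazeDir) * length(toObject));
    c = std::fmax(-1.0f, std::fmin(1.0f, c));   // guard against rounding error
    return std::acos(c) * 180.0f / 3.14159265f;
}

// Map eccentricity to an extra LOD bias: 0 around the gaze point,
// coarser in the periphery. Thresholds are made up for illustration.
int gazeDetailBias(float eccentricity) {
    if (eccentricity < 5.0f)  return 0;   // roughly foveal / parafoveal
    if (eccentricity < 20.0f) return 1;   // near periphery
    return 2;                             // far periphery
}
```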