
Virtual Reality Box

A virtual reality headset is a head-mounted device that provides virtual reality for the wearer. VR headsets are widely used with computer games, but they are also used in other applications, including simulators and trainers. They comprise a stereoscopic head-mounted display (providing separate images for each eye), stereo sound, and head motion tracking sensors (which may include gyroscopes, accelerometers, structured light systems, etc.). Some VR headsets also have eye tracking sensors and gaming controllers.

Because virtual reality headsets stretch a single display across a wide field of view (up to 110° for some devices, according to manufacturers), the magnification factor makes flaws in display technology much more apparent. One issue is the so-called screen-door effect, where the gaps between rows and columns of pixels become visible, much like looking through a screen door. This was especially noticeable in earlier prototypes and development kits, which had lower resolutions than the retail versions.
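
A rough back-of-the-envelope calculation shows why. The sketch below assumes a first-generation panel of about 1080 pixels per eye horizontally, stretched over the claimed 110° field of view; figures in this range were commonly cited for the original consumer headsets, but treat them as illustrative rather than exact.

```python
# Rough angular pixel density estimate (illustrative figures; see text).
horizontal_pixels = 1080     # per-eye panel width, first-gen consumer headsets
horizontal_fov_deg = 110     # manufacturer-claimed horizontal field of view

ppd = horizontal_pixels / horizontal_fov_deg
print(f"{ppd:.1f} pixels per degree")            # ~9.8 ppd
print(f"~{60 / ppd:.0f}x below ~60 ppd foveal acuity")
```

At roughly 10 pixels per degree, against the ~60 pixels per degree often quoted for foveal acuity, the gaps between pixels sit well within what the eye can resolve, which is exactly why they show up as a screen-door pattern.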

The lenses of the headset are responsible for mapping the up-close display to a wide field of view, while also providing a more comfortable distant point of focus. One challenge is consistency of focus: because the eyes are free to turn within the headset, the optics must avoid forcing the user to refocus, which would cause eye strain.

Medical training
Virtual reality headsets are currently being used to train medical students for surgery, allowing them to perform essential procedures in a virtual, controlled environment. Students perform surgeries on virtual patients, which lets them acquire the skills needed to operate on real patients. It also allows them to revisit surgeries from the perspective of the lead surgeon.
Traditionally, students had to participate in surgeries, and they would often miss essential parts. With VR headsets, students can watch a procedure from the lead surgeon’s point of view without missing anything, and they can pause, rewind, and fast-forward it. They can also perfect their techniques in a real-time simulation, in a risk-free environment.
Latency requirements
Virtual reality headsets have significantly stricter requirements for latency (the time it takes for a change in input to have a visual effect) than ordinary video games. If the system is too sluggish to react to head movement, it can cause the user to experience virtual reality sickness, a kind of motion sickness. According to a Valve engineer, the ideal latency would be 7–15 milliseconds. A major component of this latency is the refresh rate of the display, which has driven the adoption of displays with refresh rates from 90 Hz (Oculus Rift and HTC Vive) to 120 Hz (PlayStation VR).
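
To see why refresh rate dominates that budget, it helps to convert it into a per-frame interval. The short calculation below uses only the refresh rates quoted above:

```python
# Refresh interval vs. the 7-15 ms motion-to-photon budget.
for name, hz in [("Oculus Rift / HTC Vive", 90), ("PlayStation VR", 120)]:
    print(f"{name}: {hz} Hz -> {1000 / hz:.1f} ms per refresh")

# 90 Hz  -> 11.1 ms: a single refresh nearly fills the budget on its own.
# 120 Hz ->  8.3 ms: leaves a bit more headroom for tracking and rendering.
```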
The graphics processing unit (GPU) also needs to be more powerful to render frames at these higher rates. Oculus cited the limited processing power of the Xbox One and PlayStation 4 as the reason for targeting the PC gaming market with its first devices.

Asynchronous reprojection / time warp
A common way to reduce perceived latency, or to compensate for a lower frame rate, is to take an (older) already-rendered frame and morph it according to the most recent head-tracking data just before presenting the image on the screens. This is called asynchronous reprojection, or “asynchronous time warp” in Oculus jargon.

PlayStation VR synthesizes “in-between frames” in this manner, so games that render natively at 60 fps produce 120 updates per second. SteamVR (HTC Vive) also uses “interleaved reprojection” for games that cannot keep up with its 90 Hz refresh rate, dropping the render rate to 45 fps.
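
The toy simulation below sketches that scheduling: a renderer that only finishes a frame every other 90 Hz refresh (i.e. 45 fps), with the compositor filling the missed refreshes by warping the previous frame toward a freshly sampled head pose. The constant-rate “tracker” and all names here are made-up stand-ins, not any vendor’s API.

```python
# Toy simulation of interleaved reprojection: 45 fps rendering on a 90 Hz
# display, yet every vsync still shows a pose-corrected image. (Illustrative.)
REFRESH_HZ = 90
RENDER_EVERY_N = 2                   # renderer keeps up with every 2nd vsync

def head_yaw_deg(t):
    """Stand-in head tracker: the head turns at 30 degrees per second."""
    return 30.0 * t

rendered_yaw = None
for vsync in range(6):
    t = vsync / REFRESH_HZ
    if vsync % RENDER_EVERY_N == 0:
        rendered_yaw = head_yaw_deg(t)            # pose baked into the frame
    correction = head_yaw_deg(t) - rendered_yaw   # applied by the warp
    print(f"vsync {vsync}: frame pose {rendered_yaw:6.3f} deg, "
          f"warp {correction:+.3f} deg")
```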

The simplest technique applies only a rotational (projection) transformation to the image for each eye, simulating a rotation of the eye. The downsides are that this approach cannot take into account translation (changes in position) of the head, and that the rotation can only happen around the axis of the eyeball rather than the neck, which is the true axis of head rotation. When applied multiple times to a single frame, this causes “positional judder”, because position is not updated with every frame.
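
For concreteness, a rotation-only warp can be expressed as a homography: with camera intrinsics K and the rotation accumulated since the frame was rendered, old pixel positions map through K·R·K⁻¹. The intrinsics below are invented for illustration, and sign conventions for the rotation differ between engines; this is a minimal sketch, not any runtime’s actual implementation.

```python
import numpy as np

# Rotation-only time warp expressed as a homography (illustrative).
W, H_px = 1080, 1200                          # made-up per-eye resolution
f = (W / 2) / np.tan(np.radians(110.0) / 2)   # focal length from a 110° FOV
K = np.array([[f, 0.0, W / 2],
              [0.0, f, H_px / 2],
              [0.0, 0.0, 1.0]])

def yaw(deg):
    c, s = np.cos(np.radians(deg)), np.sin(np.radians(deg))
    return np.array([[c, 0.0, s],
                     [0.0, 1.0, 0.0],
                     [-s, 0.0, c]])

R_delta = yaw(2.0)                      # head turned 2° since the frame began
H_warp = K @ R_delta @ np.linalg.inv(K)

p = H_warp @ np.array([W / 2, H_px / 2, 1.0])    # warp the image centre
p /= p[2]                                        # perspective divide
print(f"image centre moves to ({p[0]:.1f}, {p[1]:.1f})")  # ~ (553.2, 600.0)
```

Note that the warp needs nothing but the image and a rotation, which is what makes it so cheap, and also why it cannot express any parallax.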

A more complex technique is positional time warp, which uses per-pixel depth information from the Z-buffer to morph the scene into a different perspective. This produces artifacts of its own: the warp has no information about surfaces hidden by occlusion, and it cannot compensate for position-dependent effects like reflections and specular lighting. While it eliminates positional judder, judder still appears in animations, because time-warped frames are effectively frozen.
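
Positional time warp recovers the missing translation by using each pixel’s depth: unproject the pixel to a 3D point, shift it opposite to the head’s movement, and reproject it. The single-pixel version below is a hedged sketch with the same invented intrinsics as before; real implementations do this for every pixel at once on the GPU.

```python
import numpy as np

# Positional time warp for one pixel, using its Z-buffer depth (illustrative).
f, cx, cy = 378.0, 540.0, 600.0         # made-up intrinsics, in pixels

def positional_warp(u, v, depth_m, head_translation_m):
    # Unproject the pixel to a 3D point in the old camera's frame.
    point = depth_m * np.array([(u - cx) / f, (v - cy) / f, 1.0])
    # The camera moved by head_translation, so the scene shifts the other way.
    point = point - head_translation_m
    # Reproject into the new view. Pixels that were occluded in the old
    # frame have no colour here; that is the disocclusion artifact above.
    return f * point[0] / point[2] + cx, f * point[1] / point[2] + cy

# Head moves 1 cm right; compare a near object (0.5 m) to a far wall (5 m).
for depth in (0.5, 5.0):
    u2, v2 = positional_warp(700.0, 600.0, depth, np.array([0.01, 0.0, 0.0]))
    print(f"depth {depth} m -> pixel moves to ({u2:.1f}, {v2:.1f})")
```

The near point shifts roughly ten times farther than the distant one for the same 1 cm head movement; that parallax is what the depth buffer makes recoverable, and what a rotation-only warp misses.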


