Idea: What if we tried alternating frames between eyes? If you sent the two images asynchronously, one after the other, would that mean a smoother experience?

just throwing this in here for future reference. normally you render both eye frames at the same time. if you've watched Nvidia's multi-res rendering video you've seen what the image looks like as it leaves the GPU: two images side by side, completely filling the frame. with this method you only render one eye image at a time, but at twice the framerate. this is of course more taxing on the CPU, because normally the twin images can share some of the same work (scene traversal, culling and so on only have to happen once per stereo pair).
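
just so the difference is clear, here's a rough sketch of the two schedules in C++. renderEye(), present() and pose() are made-up placeholders for illustration, not any real VR API:

    enum Eye { LEFT, RIGHT };
    struct Pose  {};  // one head orientation/position sample
    struct Image {};  // one rendered eye view

    Pose  pose()                      { return {}; }  // latest tracking sample
    Image renderEye(Eye, const Pose&) { return {}; }  // stub renderer
    void  present(const Image&)       {}              // stub scanout to the HMD

    // normal: both eyes rendered from the same pose, scanned out together at 90 Hz
    void stereoLoop() {
        for (;;) {
            Pose p  = pose();
            Image l = renderEye(LEFT, p);
            Image r = renderEye(RIGHT, p);
            present(l); present(r);  // in practice one double-wide image
        }
    }

    // alternating: one eye at a time at 180 Hz, each with a fresher pose
    void alternatingLoop() {
        for (;;) {
            present(renderEye(LEFT,  pose()));
            present(renderEye(RIGHT, pose()));
        }
    }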

with this method you could easily rotate the image 90 degrees and have a lot of unused space left around it in the GPU's output image. this could be used in conjunction with positional timewarp and the 1000 Hz motion sensors the Vive/Rift CV1 have.

the MEMS gyroscopes (one of the motion sensors) provide very fast rotation data, so theoretically you could make the image leaving the GPU slightly larger than the one shown on the panel, and use that extra border for timewarp without getting the black edge you normally get when timewarp is used heavily (the lower the framerate, the bigger the black edge). that would solve timewarp's border problem completely and make it a more reliable technique. normally you can't do this because there isn't enough spare space in the GPU's output image.
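
some back-of-the-envelope math on how big that border would need to be. every number below is a guess for illustration, not a measured spec:

    #include <cstdio>

    int main() {
        const double panelW = 1080, panelH = 1200;  // Vive/Rift CV1, per eye
        const double fovDeg = 110.0;                // rough horizontal FOV
        const double maxHeadVel = 300.0;            // deg/s, a fast head turn
        const double warpWindow = 1.0 / 90.0;       // worst case: warping a frame one refresh old

        double maxShiftDeg = maxHeadVel * warpWindow;  // ~3.3 degrees of rotation to cover
        double pxPerDeg    = panelW / fovDeg;          // flat approximation, ignores lens distortion
        double marginPx    = maxShiftDeg * pxPerDeg;   // ~33 px of border on each side

        printf("render %.0f x %.0f, show %.0f x %.0f\n",
               panelW + 2 * marginPx, panelH + 2 * marginPx, panelW, panelH);
        return 0;
    }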


by sending the data serially you also reduce latency. normally the GPU has to finish rendering an image twice as big before sending it off to the HMD controller, and the controller has to wait for all of that data before it can update the panel (a global update requires the full frame). with this method you render an image half that size and send it to one of the panels; the HMD controller receives it sooner because there's half as much data, and can therefore display it sooner. maybe the controller part is just a marginal improvement, but there should be considerable gains on the GPU side.
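
some rough numbers for the link side, assuming a made-up 10 Gbit/s cable and uncompressed 24-bit pixels:

    #include <cstdio>

    int main() {
        const double linkBps   = 10e9;          // hypothetical cable bandwidth, bits/s
        const double bitsPerPx = 24.0;          // uncompressed RGB
        const double eyeW = 1080, eyeH = 1200;  // one eye, CV1-class panel

        double eyeBits    = eyeW * eyeH * bitsPerPx;
        double stereoBits = 2 * eyeBits;        // both eyes side by side

        printf("one eye:      %.2f ms on the wire\n", eyeBits    / linkBps * 1e3);
        printf("stereo frame: %.2f ms on the wire\n", stereoBits / linkBps * 1e3);
        // with alternating frames the panel can start updating as soon as
        // its half arrives, instead of waiting for the whole stereo frame
        return 0;
    }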

as explained before this is probably more taxing on the computer, since the normal method lets the left and right eye share resources even though they're technically rendering two different images. now they're not only rendered separately, they're also slightly offset from each other in time. this is by no means twice as taxing, but I'd guess something like 10-40% more. who knows, maybe by evening out the strain on the computer over the frame you actually gain performance?

the payoff is worth it if it works as intended (read: without motion sickness). twice the perceived framerate means a smoother experience, with little additional hardware (read: it's cheap!). even if 144 Hz were the theoretical maximum (why would it be?), a perceived 144 Hz experience might actually be better than a normal 90 Hz one.

it does need a slight hardware change in the HMD itself though. the best way I can describe it is that the controller needs to run at 180 Hz while the panels stay at 90 Hz. the controller receives an image from the GPU and immediately puts it on the (left) panel, then 1/180th of a second later it receives the next image and puts that on the right panel > repeat.
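
in C++-ish pseudocode, with receiveFrame() and pushToPanel() standing in for whatever the real controller silicon would do (both made up):

    #include <vector>
    #include <cstdint>

    struct Frame { std::vector<uint8_t> pixels; };
    enum Panel { LEFT_PANEL, RIGHT_PANEL };

    Frame receiveFrame()                   { return {}; }  // stub: one frame arrives every 1/180 s
    void  pushToPanel(Panel, const Frame&) {}              // stub: each panel still refreshes at 90 Hz

    void controllerLoop() {
        Panel next = LEFT_PANEL;  // GPU and controller must agree on the eye order
        for (;;) {
            Frame f = receiveFrame();
            pushToPanel(next, f);  // display immediately, no double buffering
            next = (next == LEFT_PANEL) ? RIGHT_PANEL : LEFT_PANEL;
        }
    }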

Uhm.... yeah, that's about it. one minor additional detail: it might make it easier to upgrade the panels on future HMDs to something as high as 1440p per eye, since the GPU's output image is "free" now and the two eyes no longer have to compete for space in it.
