How accurately does Lighthouse scan a room? When translated into a 3D mesh, what does it look like?

Here is how I gather it works. If anyone knows otherwise, please correct me.

The lighthouses pulse IR lasers out in all directions, and the various sensors on the peripherals capture the pulses if they have a direct line of sight to their origin. Peripherals can tell the difference between the light that comes from each lighthouse. Since each pulse of IR laser light travels outward from lighthouses A and B in a straight line, the position and orientation of the peripheral determine which sensors are occluded from the pulse of light from A and which are occluded from the pulse of light from B.
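To make the occlusion idea concrete, here is a minimal Python sketch of that geometry under one simplifying assumption I'm adding myself: a sensor "sees" a base station when its outward-facing normal points toward it (self-occlusion by the device body only, no walls or furniture). All function and variable names here are hypothetical, not Valve's.

    import numpy as np

    def visible_sensors(sensor_positions, sensor_normals, pose_R, pose_t, lighthouse):
        """Boolean mask: True where a sensor has line of sight to the station.

        sensor_positions: (N, 3) sensor locations in the peripheral's own frame
        sensor_normals:   (N, 3) outward-facing unit normals, same frame
        pose_R, pose_t:   rotation (3x3) and translation (3,) into the room frame
        lighthouse:       (3,) base-station position in the room frame
        """
        world_pos = sensor_positions @ pose_R.T + pose_t   # sensors in room frame
        world_nrm = sensor_normals @ pose_R.T              # normals in room frame
        to_light = lighthouse - world_pos                  # rays toward the station
        to_light /= np.linalg.norm(to_light, axis=1, keepdims=True)
        # A sensor is lit only if it faces the station (angle under 90 degrees).
        return np.einsum("ij,ij->i", world_nrm, to_light) > 0.0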

The peripheral simply looks at which sensors captured the pulse from lighthouse A and which captured the pulse from lighthouse B. At any given moment the peripheral knows which sensors have a line of sight to each lighthouse. When you combine this data with the known dimensions of the room -- and the time delay between when the light from A and B hits its first sensor and when it hits its last -- you can mathematically determine the exact location and orientation of that peripheral inside the volume.
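If that model is right, one crude way to solve for the pose would be to search candidate poses and keep the one whose predicted visibility pattern best matches what the peripheral reported. The sketch below (reusing the hypothetical visible_sensors from above) only scores the binary hit/miss masks and only searches translation on a coarse grid; a real solver would also fold in the pulse timing and the rotation, so treat this purely as an illustration of the idea.

    from itertools import product
    import numpy as np

    def score_pose(R, t, positions, normals, lh_a, lh_b, seen_a, seen_b):
        # Count how many sensors' predicted hit/miss states match the report.
        pred_a = visible_sensors(positions, normals, R, t, lh_a)
        pred_b = visible_sensors(positions, normals, R, t, lh_b)
        return np.sum(pred_a == seen_a) + np.sum(pred_b == seen_b)

    def solve_position(positions, normals, lh_a, lh_b, seen_a, seen_b, grid):
        """Brute-force the translation over a coarse grid (identity rotation
        only, to keep the sketch short); return the best-scoring position."""
        best_score, best_t = -1, None
        for t in product(grid, grid, grid):
            s = score_pose(np.eye(3), np.array(t), positions, normals,
                           lh_a, lh_b, seen_a, seen_b)
            if s > best_score:
                best_score, best_t = s, np.array(t)
        return best_t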

How do they get the dimensions of the room? I suspect you have to measure the room when you first set the system up and then manually enter the measurements into the system. However, there is almost certainly a calibration program that lets you move a peripheral to each corner of the room and press a button to set it as a boundary. You could probably use this to set boundaries at the corners of oddly shaped rooms, around furniture, and at the edges of tables.
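The calibration flow I'm guessing at would be something as simple as this (again, an invented sketch, not the actual setup program): each button press records the peripheral's current tracked position as a boundary vertex, and the recorded corners form the playable-area polygon.

    class BoundaryCalibration:
        def __init__(self):
            self.corners = []  # room-frame positions, in the order they were set

        def on_button_press(self, tracked_position):
            """Record the peripheral's current position as a boundary corner."""
            self.corners.append(tuple(tracked_position))

        def boundary_polygon(self):
            """The playable area is the polygon through the recorded corners."""
            return list(self.corners)

    # Example: walk a controller to four corners of a rectangular space.
    cal = BoundaryCalibration()
    for corner in [(0.0, 0.0, 0.0), (4.0, 0.0, 0.0),
                   (4.0, 0.0, 3.0), (0.0, 0.0, 3.0)]:  # made-up positions, metres
        cal.on_button_press(corner)
    print(cal.boundary_polygon())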

To further clarify how I think this works, I am going to explain how it doesn't work. The peripherals cannot actually 'see' walls or objects in the room. The lighthouses do not 'paint' invisible laser QR codes on the walls for a camera to read from the light bouncing off them. The only light the sensors recognize is light that comes directly from the origin -- not light that bounces off walls. The sensors do not see a full image (a matrix of pixels) like a camera does, so there would be no way for them to take a snapshot of a wall and detect a QR code being projected onto it. The sensors only see in binary: they can only tell whether or not a laser coming directly from a base station is striking them.
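The difference in data models is stark enough to show in two lines. This is just my illustration of the contrast, not real driver code: a tracking photodiode yields a single bit per pulse, while a camera yields a whole pixel matrix it could pattern-match against.

    import numpy as np

    def photodiode_sample(direct_hit: bool) -> int:
        """All a tracking sensor reports: struck by a direct laser, or not."""
        return 1 if direct_hit else 0

    def camera_sample(width: int = 640, height: int = 480) -> np.ndarray:
        """A camera frame: a full matrix of pixels (zeros here, for shape only)."""
        return np.zeros((height, width), dtype=np.uint8)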
