Google's robo-cars still struggle with stop lights, sunsets, junctions...

This highlights major issues with unmapped, vision-based systems. Let's be clear: despite the headline, this article isn't really about Google. It's highlighting the problems of vision-based systems that operate with no maps.

This is one of the reasons Google builds such highly detailed maps and wants to roll out in defined zones. The vehicle knows exactly where each traffic light is meant to be, so glare from the sun doesn't confuse it. If a bus is blocking the view of the traffic light, so what? The car knows a light is meant to be there, and until that light is visible it will not proceed through the junction. Another major benefit of Google's system is that it can predict the paths of hundreds of objects at junctions.
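A minimal sketch of that hold-until-visible logic, assuming a hypothetical map schema and perception output (none of these names come from Google's actual stack):

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class MappedLight:
    """A surveyed traffic light from the HD map (hypothetical schema)."""
    light_id: str
    position_m: Tuple[float, float, float]  # (x, y, z) in the map frame, metres

def may_proceed(light: MappedLight, perceived_state: Optional[str]) -> bool:
    """Proceed only when the mapped light is currently visible AND green.

    perceived_state is None when the camera can't see the light, e.g. a bus
    blocks the view or sun glare washes it out. Because the map says a light
    exists at this position, the safe default is to wait, not to guess.
    """
    if perceived_state is None:
        return False  # mapped light not visible yet: hold position
    return perceived_state == "green"

# A bus blocks the view of a mapped light: the car waits.
light = MappedLight("example_junction_nb", (120.4, 88.1, 6.2))
print(may_proceed(light, None))     # False
print(may_proceed(light, "green"))  # True
```

The point is the asymmetry: an unmapped system that can't see a light has no idea one exists, while a mapped system treats "light not visible" as a reason to stop.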

Image A shows an issue Google ran into directly in Austin. They had to manually map all the traffic lights again, because the cars were used to traffic signals mounted at the side of the road rather than above it. With an accurate map, the vehicle won't be confused by an object at the side of the road, because it knows exactly where the traffic lights are meant to be.

While the headline names Google, the article doesn't actually highlight any issues with their approach. It highlights the immense issues surrounding unmapped, vision-based systems, or systems with inferior maps released into uncontrolled environments.

Issues Google does face include temporary traffic lights and failed lights. In both cases, a sensor array that can accurately detect objects from hundreds of feet out benefits them, as does predicting the paths of those objects. This is the type of testing they will be doing on private property: when the normal rules of the road don't apply, how should the vehicle approach the situation? If you understand how a human approaches these cases, you can see they are not struggling at all.
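For illustration, the crudest possible form of that path prediction is a constant-velocity extrapolation; real systems use far richer motion models, and everything here is a hypothetical sketch:

```python
from typing import List, Tuple

def predict_path(position: Tuple[float, float],
                 velocity: Tuple[float, float],
                 horizon_s: float = 3.0,
                 step_s: float = 0.5) -> List[Tuple[float, float]]:
    """Extrapolate future (x, y) positions under a constant-velocity model."""
    steps = int(horizon_s / step_s)
    return [(position[0] + velocity[0] * k * step_s,
             position[1] + velocity[1] * k * step_s)
            for k in range(1, steps + 1)]

# An object detected 60 m ahead, closing at 4 m/s and drifting 1 m/s laterally.
print(predict_path(position=(60.0, 2.0), velocity=(-4.0, -1.0)))
```

Run this for every tracked object at a junction and you get the set of future positions the planner has to stay clear of, which is why long-range, accurate detection matters so much.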
