Google's Machine Learning Software Has Learned to Replicate Itself

I mean, it's certainly cool, but it's a lot less significant than the article makes it out to be.

Usually the machine learning scientist draws on their experience to determine a suitable architecture for a task through trial and error. The idea of automating that process by semi-randomly searching a space of combinations of hyperparameters describing a promising architecture - instead of doing it manually - has been around for a long time (e.g. with genetic algorithms).
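To make the idea concrete, here's a toy sketch of that kind of semi-random search. Everything here is invented for illustration: the search space, the `evaluate` scoring function (which stands in for the hugely expensive step of actually training a child net), and the loop itself.

```python
import random

# Hypothetical hyperparameter space describing a small network architecture.
SEARCH_SPACE = {
    "num_layers": [2, 3, 4, 5],
    "units_per_layer": [32, 64, 128, 256],
    "learning_rate": [1e-2, 1e-3, 1e-4],
}

def sample_architecture(rng):
    """Semi-randomly pick one value per hyperparameter."""
    return {name: rng.choice(values) for name, values in SEARCH_SPACE.items()}

def evaluate(arch):
    """Stand-in for the expensive part: training the child net described
    by `arch` and returning its validation accuracy. Toy score only."""
    return 1.0 / (arch["num_layers"] * arch["learning_rate"] + 1.0)

def random_search(iterations=20, seed=0):
    """Keep the best-scoring architecture seen over `iterations` samples."""
    rng = random.Random(seed)
    best_arch, best_score = None, float("-inf")
    for _ in range(iterations):
        arch = sample_architecture(rng)
        score = evaluate(arch)  # in practice: hours or days of training
        if score > best_score:
            best_arch, best_score = arch, score
    return best_arch, best_score
```

Genetic algorithms and the newer reinforcement-learning approaches just replace the blind sampling with something smarter, but the inner loop - propose a child architecture, train it, score it - is the same, which is exactly where the cost comes from.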

The problem is, and has always been, the inherent computational expense. Training state-of-the-art networks takes a long time, and with this approach you're essentially training and evaluating a new child net in every iteration of the search algorithm (in this case, every training step of the parent net). Hardware was bound to catch up, and given Google's resources it's hardly surprising that they're the ones taking the next step towards feasibility.

What we've essentially got here is a tool to gain some 0.x% of accuracy and shave off a bit of development time. Like I said: neat, but not groundbreaking, and of little use if you're doing state-of-the-art ML.

/r/Futurology Thread Parent Link - sciencealert.com