Funny & scary at the same time

Wake words work by doing a small amount of on-board signal processing, which amounts to a rolling on-device recording about 30 seconds long. Only when the device detects what sounds like its wake word does it connect to a server that can actually parse the audio for spoken words.
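As a rough sketch of that flow (toy numbers and a made-up "template match", not real DSP or any vendor's actual pipeline): new audio frames push old ones out of a fixed-size ring buffer, and a cheap pattern check decides whether to ship the buffer off-device.

```python
from collections import deque

BUFFER_SECONDS = 30
FRAMES_PER_SECOND = 4  # toy rate; real devices process far more samples

# Rolling buffer: old frames fall off as new ones arrive, so the
# device never holds more than ~30 seconds of audio.
ring = deque(maxlen=BUFFER_SECONDS * FRAMES_PER_SECOND)

# Hypothetical "shape" of the wake word as a few signal values.
WAKE_TEMPLATE = [0.9, 0.1, 0.8]

def matches_wake_word(frames, template, tolerance=0.15):
    """Crude template match: does the tail of the buffer resemble the pattern?"""
    if len(frames) < len(template):
        return False
    tail = list(frames)[-len(template):]
    return all(abs(a - b) <= tolerance for a, b in zip(tail, template))

def on_new_frame(frame):
    ring.append(frame)
    if matches_wake_word(ring, WAKE_TEMPLATE):
        # Only now would the device open a connection and send the
        # buffered audio to a server for real speech recognition.
        return "send buffer to server"
    return "keep listening"

# Background noise, then something that matches the wake pattern:
results = [on_new_frame(f) for f in [0.2, 0.3, 0.9, 0.1, 0.8]]
```

Note that nothing here understands speech; it's just comparing numbers against a stored pattern, which is the whole point.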

It’s “listening” only insofar as it’s looking to match a signal; there isn’t much more onboard intelligence than that. The actual speech recognition happens on the audio clip that gets sent off after wake-word detection.

The problem with saying it’s “listening” is that the word implies some level of understanding or intelligence.

Imagine you’re at a strip mall in a foreign country. You have the name of the store you need to go to on a piece of paper. It’s in a language and alphabet you don’t understand, and so are the store signs. You hold up your piece of paper and pick the sign that looks like a match for the writing on your paper. You didn’t read the paper or the sign, but you still picked out the right one. That’s all the onboard electronics are doing when listening for a wake word.

/r/AppleWatch Thread Parent Link - i.redd.it