
It theoretically could. Math problems are largely bounded: anyone who has worked through proofs or hunted for common patterns (problem-solving heuristics, exam prep, etc.) keeps seeing the same structures recur. The pipeline would largely take the following form (a toy sketch follows the list):

  1. Translate the question into a well-formed formula using natural language understanding, for which GPT-2 is one development among many (and where commercial/larger-scale systems are arguably more advanced).
  2. If the formula is ill-formed, this can be verified without AI: parsing is largely solved and efficient thanks to formal theories of language/computation. The parser's verdict can then be fed back as an unsupervised training signal, so the NN develops heuristics on the generative side. This isn't too different from how humans learn to map word problems into equations.
  3. The rest is also symbolic, handled by automatic theorem provers.
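As a concrete (and heavily simplified) illustration, here's a toy Python sketch of that pipeline. Everything in it is hypothetical scaffolding: the `translate` lookup table stands in for a trained model like GPT-2, `ast.parse` plays the classical parser, and plain arithmetic evaluation stands in for an automatic theorem prover.

```python
import ast
import operator

def translate(question: str) -> str:
    """Step 1 (the NN's job): natural language -> well-formed formula.
    A toy lookup stands in for the trained language model."""
    toy_model = {
        "what is two plus two": "2 + 2",
        "what is three times four": "3 * 4",
    }
    return toy_model[question.lower().rstrip("?")]

def is_well_formed(formula: str) -> bool:
    """Step 2: no AI needed -- a classical parser decides well-formedness,
    and its verdict can double as a training signal for the generator."""
    try:
        ast.parse(formula, mode="eval")
        return True
    except SyntaxError:
        return False

def solve(formula: str):
    """Step 3: purely symbolic -- a real system would hand the formula to
    an automatic theorem prover; arithmetic evaluation stands in here."""
    ops = {ast.Add: operator.add, ast.Sub: operator.sub, ast.Mult: operator.mul}
    def ev(node):
        if isinstance(node, ast.Constant):
            return node.value
        if isinstance(node, ast.BinOp):
            return ops[type(node.op)](ev(node.left), ev(node.right))
        raise ValueError("unsupported construct")
    return ev(ast.parse(formula, mode="eval").body)

formula = translate("What is two plus two?")
assert is_well_formed(formula)
print(solve(formula))  # -> 4
```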

The only arithmetic an arbitrary NN could not figure out (after thinking about Scott's conjecture that it could add numbers) is multiplication/exponentiation, and only if it's a feedforward (first-order) net: such a net just weights its inputs and their various linear combinations, so it interpolates within its training range without extrapolating beyond it.
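A quick numpy demo of that limitation (my own toy, not from the article): fit a single linear layer, i.e. a pure weighted sum of inputs, to addition and to multiplication on a bounded range, then query it far outside that range. Addition is itself linear, so it extrapolates exactly; multiplication isn't, so the fit falls apart.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(1000, 2))   # training pairs in [0, 10)
y_add = X.sum(axis=1)                    # target: a + b
y_mul = X.prod(axis=1)                   # target: a * b

# "Training" the linear layer = least-squares fit of the output weights.
w_add, *_ = np.linalg.lstsq(X, y_add, rcond=None)
w_mul, *_ = np.linalg.lstsq(X, y_mul, rcond=None)

test = np.array([[50.0, 50.0]])          # far outside the training range
print(test @ w_add)  # ~100: addition is linear, extrapolates perfectly
print(test @ w_mul)  # ~286: nowhere near the true 2500 -- no extrapolation
```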

However, adding any recurrent connections changes the assumptions: the net can then simulate any Turing machine (and scale toward general AI with more parameters, convolutions, etc.). GPT-2 is functionally equivalent to an RNN. So I'm not sure why it was argued (in the article comments) that the network wouldn't be able to add two numbers, when the paper outlines that it's a general Q&A machine, and it's established that RNNs generalize into TMs.
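To make the recurrence point concrete, here's a hand-wired toy (my illustration, not a trained model): treat a single carry bit as the hidden state of a minimal "RNN" that consumes one bit pair per timestep. That one bit of recurrent state is exactly what lets it add numbers of arbitrary length, and exactly what a feedforward net lacks.

```python
def rnn_add(a_bits, b_bits):
    """Add two binary numbers (LSB first) with the carry bit as the
    recurrent hidden state -- hand-wired, standing in for learned weights."""
    carry, out = 0, []
    for a, b in zip(a_bits, b_bits):
        total = a + b + carry    # combine this step's input with the state
        out.append(total & 1)    # emit the output bit for this step
        carry = total >> 1       # update the hidden state
    out.append(carry)            # final carry-out
    return out

# 6 + 7 = 13, bits written LSB first
print(rnn_add([0, 1, 1, 0], [1, 1, 1, 0]))  # [1, 0, 1, 1, 0] == 13
```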
