This comment was posted to reddit on Jun 11, 2015 at 5:53 am and was deleted within 8 hours and 58 minutes.

There is a subtle difference with deep implications.

A vector in R^{n} is a column vector, period.

A linear map f : R^{n} -> R is traditionally called a *linear functional*. If f is a linear functional, it has a matrix representation as a row vector. If you multiply a row vector on the left by a column vector on the right, the result is a scalar. That's consistent with the fact that applying a linear functional to an element of R^{n} should yield an element of R, i.e., a scalar.
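
To make that concrete, here's a small numpy sketch (the particular functional and vector are just made-up examples): the functional is stored as a 1×n row vector, the vector as an n×1 column vector, and matrix-multiplying the two produces a 1×1 result, i.e. a scalar.

```python
import numpy as np

# A linear functional f : R^3 -> R, stored as a 1x3 row vector.
f_row = np.array([[2.0, -1.0, 0.5]])

# A vector in R^3, stored as a 3x1 column vector.
x_col = np.array([[1.0], [4.0], [2.0]])

# Row vector (1x3) times column vector (3x1) gives a 1x1 matrix,
# i.e. a single number: f(x) = 2*1 + (-1)*4 + 0.5*2 = -1.
result = f_row @ x_col
print(result.shape)   # (1, 1)
print(result[0, 0])   # -1.0
```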

Note that a row vector times a column vector actually gives you the dot product of the two vectors, if you think of the row vector as a regular vector. That's why the dot product of x and y is sometimes defined as the matrix product x^{T}y: you need to turn x from a column vector into a row vector before matrix-multiplying it by y.

http://mathinsight.org/dot_product_matrix_notation
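
In code, the same identity looks like this (again with arbitrary example vectors): transposing the column vector x and matrix-multiplying by y gives exactly the ordinary dot product.

```python
import numpy as np

# Two vectors in R^3 as column vectors.
x = np.array([[1.0], [2.0], [3.0]])
y = np.array([[4.0], [5.0], [6.0]])

# x^T y: turn x into a row vector, then matrix-multiply by y.
xt_y = (x.T @ y)[0, 0]

# The ordinary dot product of the same two vectors.
dot = np.dot(x.flatten(), y.flatten())

print(xt_y, dot)   # 32.0 32.0 -- the two agree
```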

Linear functionals are sometimes called *covectors*. The mnemonic is "A row vector is a covector." That's a shorthand for the fact that a row vector is the matrix representation of a covector, or linear functional.

In finite dimensions this is much ado about nothing, because the space of covectors is isomorphic to R^{n} anyway. So there's not much difference between column and row vectors after all.

But in infinite dimensions, the idea of a covector acting on a vector by matrix multiplication takes on additional significance. For example, the famous bra-ket notation of quantum physics is actually nothing more than a linear functional acting on a vector in Hilbert space. If you know the difference between a column vector and a row vector, you're halfway to QM (for suitable values of "halfway", of course).

http://en.wikipedia.org/wiki/Bra%E2%80%93ket_notation
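
Here's a deliberately finite-dimensional sketch of that idea (toy vectors in C^2, nothing physically meaningful): the bra is the conjugate transpose of the ket, i.e. a row covector, and applying it by matrix multiplication gives the inner product.

```python
import numpy as np

# Two states in a finite-dimensional "Hilbert space" C^2, as column vectors.
psi = np.array([[1.0 + 1.0j], [0.0 + 2.0j]])
phi = np.array([[2.0 + 0.0j], [1.0 - 1.0j]])

# The bra <phi| is the conjugate transpose of the ket |phi>: a row (co)vector.
# Applying it to |psi> by matrix multiplication gives the inner product
# <phi|psi>, a single complex number.
bra_phi = phi.conj().T
inner = (bra_phi @ psi)[0, 0]
print(inner)   # 4j
```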

Now if we consider the collection of linear maps from R^{n} to R (these are called linear *functionals*), then that collection is itself a vector space, called the dual space of R^{n}. In finite dimensions it's isomorphic to R^{n} itself, which is exactly why the row/column distinction feels like a technicality there.
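
A minimal sketch of that isomorphism (the functional g below is just an arbitrary example): evaluating a linear functional on the standard basis vectors recovers the row vector that represents it, after which the functional acts by a plain dot product.

```python
import numpy as np

# Some linear functional g : R^3 -> R (any linear map would do).
def g(x):
    return 3.0 * x[0] - 2.0 * x[1] + x[2]

# Evaluate g on the standard basis vectors to get the row vector
# (covector) that represents it.
basis = np.eye(3)
g_row = np.array([g(e) for e in basis])   # [ 3., -2.,  1.]

# Now g acts on any vector by a plain dot product.
x = np.array([1.0, 4.0, 2.0])
print(g(x), g_row @ x)   # -3.0 -3.0 -- same value
```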