There is a subtle difference with deep implications.
A vector in Rn is a column vector, period.
A linear map f : Rn -> R is traditionally called a linear functional. If f is a linear functional, it has a matrix representation as a row vector. Multiplying a row vector on the left by a column vector on the right yields a scalar. That's consistent with the fact that applying a linear functional to an element of Rn should yield an element of R, i.e. a scalar.
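Here's a quick sketch of that in NumPy (the particular numbers are just made up for illustration):

```python
import numpy as np

# A linear functional f : R^3 -> R, represented as a 1x3 row vector.
f = np.array([[2.0, -1.0, 3.0]])     # shape (1, 3): row vector

x = np.array([[1.0], [0.0], [4.0]])  # shape (3, 1): column vector

# Row times column: (1x3) @ (3x1) gives a 1x1 matrix, i.e. a scalar.
result = f @ x
print(result.item())  # 2*1 + (-1)*0 + 3*4 = 14.0
```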
Note that a row vector times a column vector actually gives you the dot product of the two vectors, if you think of the row vector as a regular vector. That's why the dot product of x and y is sometimes defined as the matrix product x^T y: you turn x from a column vector into a row vector before matrix-multiplying it by y.
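The same identity, spelled out with NumPy (example vectors chosen arbitrarily):

```python
import numpy as np

x = np.array([[1.0], [2.0], [3.0]])  # column vector, shape (3, 1)
y = np.array([[4.0], [5.0], [6.0]])  # column vector, shape (3, 1)

# x.T turns the column vector into a row vector; the matrix product
# x.T @ y is then a 1x1 matrix whose single entry is the dot product.
xTy = (x.T @ y).item()

# Same number via the ordinary dot product on flat arrays.
dot = np.dot(x.ravel(), y.ravel())
print(xTy, dot)  # 32.0 32.0
```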
Linear functionals are sometimes called covectors. The mnemonic is "A row vector is a covector." That's a shorthand for the fact that a row vector is the matrix representation of a covector, or linear functional.
In finite dimensions this is much ado about nothing, because the space of covectors is isomorphic to Rn anyway. So there's not much difference between column and row vectors after all.
But in infinite dimensions, the idea of a covector acting on a vector by matrix multiplication takes on additional significance. For example, the famous bra-ket notation of quantum physics is actually nothing more than a linear functional acting on a vector in Hilbert space. If you know the difference between a column vector and a row vector, you're halfway to QM (for suitable values of "halfway", of course).
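To make the bra-ket point concrete in a toy finite-dimensional case (a 2-dimensional complex "Hilbert space", with amplitudes invented purely for illustration): the bra is the conjugate transpose of the ket, i.e. a row covector, and the inner product <phi|psi> is just row-times-column.

```python
import numpy as np

psi = np.array([[1.0 + 1.0j], [0.0 + 1.0j]])  # ket |psi>: column vector
phi = np.array([[1.0 + 0.0j], [1.0 - 1.0j]])  # ket |phi>: column vector

# The bra <phi| is the conjugate transpose of |phi>: a row covector.
bra_phi = phi.conj().T

# <phi|psi> is then a row vector times a column vector: a scalar.
braket = (bra_phi @ psi).item()
print(braket)  # (2j) in this made-up example
```

(In a genuine infinite-dimensional Hilbert space the pairing is an integral rather than a finite sum, but the row-acts-on-column picture is the same.)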
Now if we consider the collection of all linear maps from Rn to R (these are the linear functionals), then that collection is itself a vector space: the dual space of Rn, whose elements are exactly the covectors above.