This comment was posted to reddit on Jan 02, 2015 at 5:31 am and was deleted within 1 hour(s) and 52 minutes.

I did not do the grad school thing. I graduated with an engineering degree with basic stats, linear algebra, and differential equations. Then I worked in an EE lab and corrected a friend's paper for NIPS and for his PhD thesis. My friend gave me his stats book on probability theory when he graduated and left for a job as an associate professor. I looked through that book on and off since I lacked statistical knowledge, and from it I learned MCMC, the different distributions (Poisson, Bernoulli), and Chebyshev's inequality. I started collecting statistics notes in a doc. I also put all the statistical tests in the document since I cannot remember what all of them are for. I was also pretty disappointed with the programming curriculum at the college. I spent last year learning programming to improve my engineering skills; I went from Java and Matlab to Perl to R, and briefly to Octave.
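(Editor's note: the Chebyshev's inequality mentioned above is easy to sanity-check numerically. This sketch is mine, not from the original comment; it draws a skewed sample and verifies that the fraction of points more than k standard deviations from the mean stays under the 1/k² bound.)

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(scale=1.0, size=100_000)  # skewed, non-normal sample
mu, sigma = x.mean(), x.std()

k = 2.0
empirical = np.mean(np.abs(x - mu) >= k * sigma)  # mass beyond k std devs
bound = 1.0 / k ** 2                              # Chebyshev: at most 1/k^2
print(empirical, "<=", bound)
```

The bound holds for any distribution with finite variance, which is why it shows up so early in probability books.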

In late 2013 I worked for a friend briefly on marketing. I wanted to do market segmentation, so I did some research and found several algorithms such as CHAID, PCA, clustering, and apriori. I found those interesting, so I put them down in my doc. At the same time I switched from R to Python because I found that R was slow and lagged the computer when I used some packages. I also started using pandas in Python. I was also reading up on forecasting financial markets and game theory, just in case I couldn't get a job; those would be my backup skills to fall back on.
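(Editor's note: a minimal sketch of the PCA-then-cluster segmentation workflow mentioned above, using sklearn. The synthetic "customer" data and all parameter choices are illustrative assumptions, not from the original comment.)

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Two fake "customer" groups with different spending patterns (3 features).
group_a = rng.normal(loc=[20.0, 5.0, 1.0], scale=1.0, size=(50, 3))
group_b = rng.normal(loc=[5.0, 20.0, 10.0], scale=1.0, size=(50, 3))
X = np.vstack([group_a, group_b])

X2 = PCA(n_components=2).fit_transform(X)            # reduce to 2 components
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X2)
print(X2.shape, sorted(set(labels)))
```

Reducing with PCA first is optional; for well-separated groups like these, k-means alone would find the same segments.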

Last year I found that there were different algorithms for the same things, such as in clustering, dimensionality reduction, and market basket analysis, and I started collecting them like Pokemon, adding them to my document so I know when to use which. I also signed up for Stack Overflow for when I got stuck. I wanted to learn neural networks and time series analysis and made that my goal for the end of the year. I learned by looking at several texts online, research papers, the NIST website, and attempting to create a library for programming practice. At some point I also switched to Python 3 and found sklearn. I mostly learned Python through NextDayVideo, since the basics transfer over easily from the other languages. In one of the videos I found the Julia programming language, which I use because of its speed and ease of command input.

With the help of PDFs, EViews tutorials, and people's PowerPoints I was able to learn most time series analysis methods, including Box-Jenkins, vector autoregression, and ARCH models, along with the tests they need. I also found the Wikibooks page on data mining using R and went through it over several days. In the middle of the year I went through the Andrew Ng class and just improved my notes more. Some time around then I also went through the Stanford NLP lectures because they were on the sidebar, and I saw Naive Bayes and thought it looked interesting.
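(Editor's note: at its simplest, the autoregressive modeling behind the Box-Jenkins and VAR material above is just a regression on lagged values. This plain-numpy AR(1) fit is my own illustrative sketch; the simulated data and coefficient are assumptions, not from the post.)

```python
import numpy as np

rng = np.random.default_rng(1)
phi_true = 0.7          # true autoregressive coefficient
n = 500
y = np.zeros(n)
for t in range(1, n):   # simulate AR(1): y_t = phi * y_{t-1} + noise
    y[t] = phi_true * y[t - 1] + rng.normal()

# Estimating phi reduces to regressing y_t on y_{t-1}
X = y[:-1].reshape(-1, 1)
phi_hat = np.linalg.lstsq(X, y[1:], rcond=None)[0][0]
print(phi_hat)   # close to 0.7
```

Real Box-Jenkins workflows add model identification (ACF/PACF plots) and diagnostic tests on the residuals on top of this fitting step.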

I started on neural nets with feedforward networks and Self-Organizing Maps on Wikibooks. Then I found the Hugo Larochelle series of videos on neural nets and went through them, along with Hinton's Google talk on RBMs. The last things I looked at were Dirichlet distributions and Sarmanov beta priors for meta-analysis, both for research work I have.
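(Editor's note: the feedforward networks mentioned above boil down to alternating matrix multiplies and nonlinearities. This forward-pass sketch in numpy is my own minimal illustration; the layer sizes, weights, and input are arbitrary assumptions.)

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(2)
W1 = rng.normal(size=(4, 3))   # hidden layer: 3 inputs -> 4 units
b1 = np.zeros(4)
W2 = rng.normal(size=(1, 4))   # output layer: 4 hidden -> 1 output
b2 = np.zeros(1)

x = np.array([0.5, -1.0, 2.0])
h = sigmoid(W1 @ x + b1)       # hidden activations
out = sigmoid(W2 @ h + b2)     # single output squashed into (0, 1)
print(out.shape)
```

Training adds backpropagation on top of this, which is where the Larochelle lectures pick up.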

TLDR: Learned ML by browsing vids, PDFs, and programming examples. Ended up using Python 3 and Julia as the best languages for ML.