The Robots Are Coming for Phil in Accounting: Workers with college degrees and specialized training once felt relatively safe from automation. They aren’t.

I answered this in another comment; I'll repost it here to clarify my take on the matter.

I think the biggest impediment to new breakthroughs in AI is the actual tools, models, and methods we use. We're stuck in a particular mindset, and it's hard to change direction while what we have still feels like it can be improved upon.

The fact that money is drying up has more to do with the fact that we're getting better at seeing and understanding the limits of these approaches. We as a profession are starting to develop a broad understanding of what we can and can't easily do with what we have, and as a result we're seeing fewer blindly optimistic investments.

A lot of the fundamental methodology underlying our current approaches to AI was developed in the 1960s. Since then we've gotten much better at characterizing it and at building architectures that use these methods to achieve interesting results, but as time goes on we inevitably reach a point of saturation where our current systems fail to address the needs and demands we place on them. Sure, we still see articles about new uses of ML, ever deeper neural networks, original architectures that solve slightly different classes of problems, and the ever-present march towards automation, but in the grand scheme of things all of those are incremental optimizations.

A big factor in understanding this is the jump from treating AI as a creative problem-solving challenge to treating it as a software engineering challenge (which includes thinking about long-term viability and maintenance). When trying to achieve a goal, it's easy to say "let's assume this goal is possible" and work backwards. However, just because we've assumed our tools are powerful enough to meet our demands doesn't make it true.

The issue we're hitting now is that the behaviors we want out of AI are getting more and more complex and nuanced, while the data we can provide is becoming more limited and harder to classify. To make matters worse, we have algorithms that have been in use for years and now need fixes and improvements, and our tooling for that is even more limited. Training a computer to identify birds was difficult, but doable. Training a computer to identify birds you might want to see right now, based on your mood and recent events in your life, is much harder. Training a computer to account for the fact that your taste in birds, as well as the way you perceive emotions, might change over time... that's hair-ripping territory.
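To make that last point concrete, here's a minimal sketch of the drift problem; the features, the "taste" weights, and the whole bird-recommendation setup are invented for illustration, not taken from any real system:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "bird recommendation" setup: each bird is a 2-feature vector
# (size, colorfulness). Yesterday the user liked colorful birds;
# later their taste drifts toward large birds.
def make_data(n, prefer_colorful=True):
    X = rng.uniform(0, 1, size=(n, 2))                      # [size, colorfulness]
    taste = np.array([0.2, 0.8]) if prefer_colorful else np.array([0.8, 0.2])
    y = (X @ taste > 0.5).astype(int)                       # 1 = user likes this bird
    return X, y

def train_logreg(X, y, steps=2000, lr=0.5):
    """Plain logistic regression trained with gradient descent."""
    w, b = np.zeros(2), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        w -= lr * (X.T @ (p - y)) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

def accuracy(w, b, X, y):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    return np.mean((p > 0.5) == y)

X_old, y_old = make_data(500, prefer_colorful=True)
X_new, y_new = make_data(500, prefer_colorful=False)        # taste has drifted

w, b = train_logreg(X_old, y_old)
print("accuracy on old preferences:", accuracy(w, b, X_old, y_old))
print("accuracy after the drift:   ", accuracy(w, b, X_new, y_new))
```

The model isn't wrong about the data it saw; the world it was trained on simply stopped existing, and nothing in the trained weights tells you that.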

Our approach to achieving these behaviors is still essentially to brute-force a solution by throwing more and more data and computation at it. Unfortunately, this approach is not well suited to situations where you have very limited data, or where you get contradictory answers. Certainly you can build a system that calculates probabilities and then picks the most likely answer, but the more layers you add here, the harder you make it for someone to understand what it's doing when they inevitably want to change something.
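As a toy illustration of the "calculate probabilities and pick the most likely one" pattern (all the contexts and labels below are made up), contradictory data collapses into a majority vote, and missing data collapses into nothing at all:

```python
from collections import Counter

# Invented example: the same context was labeled differently on
# different days (contradictory data), and one context has no data.
observations = {
    "rainy_morning":   ["owl", "owl", "sparrow", "owl", "sparrow"],
    "sunny_afternoon": ["sparrow", "sparrow", "sparrow"],
    "new_context":     [],
}

def most_likely(labels):
    """Turn raw labels into empirical probabilities and pick the argmax."""
    if not labels:
        return None, 0.0
    label, count = Counter(labels).most_common(1)[0]
    return label, count / len(labels)

for context, labels in observations.items():
    label, prob = most_likely(labels)
    print(f"{context:16s} -> {label} (p={prob:.2f})")
```

The output looks confident enough, but nothing in it explains the contradiction or tells you what to change when the answer turns out to be wrong.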

This is where the engineering challenge comes in. I think the field needs to re-evaluate the goals and expectations of what we want to achieve in the next couple of decades, and determine whether our existing direction is really the right way forward. Up to now we've created a lot of systems that can stand alone and solve specific problems, but we've done very little towards creating a system that can take an arbitrary number of challenges and solve them all. In fact, most of the work done in that direction has used the exact same brute-force approach, with a bit more recursion. However, complexity is not really on our side here. As we have to join together bigger and bigger systems, the number of possible ways to do so grows out of control, which will keep limiting our rate of progress (a rough illustration of that growth follows below).
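To put rough numbers on that growth (a back-of-the-envelope illustration, not a claim about any specific architecture), even just counting the orderings of n subsystems in a pipeline and the directed links between them blows up almost immediately:

```python
import math

# Rough illustration: ways to order n subsystems in a pipeline (n!),
# and the number of possible directed links between them (n*(n-1)).
for n in [3, 5, 10, 20]:
    orderings = math.factorial(n)
    links = n * (n - 1)
    print(f"{n:2d} subsystems: {orderings:>22,} orderings, "
          f"{links:3d} possible directed links")
```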

Note that this is not as difficult a task as it first sounds. Our approach to traditional software development has changed quite a bit over the past 30 years, to the point where modern software is practically unrecognizable compared to something written in the '90s. A lot of these new approaches were the result of cross-discipline innovation, which is also what I think will help AI progress.

A pretty big jump will likely happen when we eventually get large-scale quantum computers. That would at least open up the possibility of finding optimal solutions for a class of problems that would otherwise need too much computational power. Beyond that, we really need a new generation of AI technology: something built on our current understanding of not only computation, but also the more philosophical topics such as consciousness, empathy, emotions, and intelligence. We need methods to think about and to architect large-scale AI systems; methods that capture best practices, failure scenarios, and desired outcomes using a language that's better suited to the task than anything we have now.

Until that happens, contemporary AI will take its place as a software development methodology that will continue to get incremental improvements at a rate similar to any other software discipline over the past few decades. That is to say, it's not going away, and it will most likely continue to grow at a very fast pace; it's just that we're going to see the demand normalize, with more certifications, more best practices, and a lot fewer philosopher CEOs talking about how the robots will inevitably turn everyone into paperclips.

I merely wanted to help people understand that this process is moving much faster behind the scenes than people realize. These advancements will come in our lifetimes.
