I worked with a version of GPT-3, and a lot of what we're seeing in clips like this is often the best response out of 5. It "understands" in the sense that it can derive context and tone and then craft a response that meets those requirements. This is why GitHub's new GPT-3 code work is doing well. However, it will still come out with nonsense at least some of the time. It's effectively a good mimic if you give it the right instructions.
I also don't see a world, in Europe at least, where the latter decision-making examples you've given will come to fruition. The EU is taking critical decision-making responsibility (loans, healthcare) away from AI, and given the size of the voting bloc, this will have worldwide impact. Yes, some banks already use it for credit scoring and the like, but that will be watered down by coming legislation after a series of (quite rightly) scares about encoded bias.
TL;DR: I think you're right, but I don't think we're in that future just yet, and we may never be.