I was interested in the interview claims, so I gave ChatGPT an interview. It did pretty well at giving me the correct answer, but when asked to elaborate and explain, it failed miserably. At one point it explained that the reason a particular index was correct was that "7 + 11 = 13". I asked why it believed 7 + 11 is 13, and it said "That is not right, it is 15". I then said that wasn't right either, and it went back to 13.
Personally, if I were interviewing a candidate and they made such a huge error and consistently seemed to just be guessing, I would assume the candidate had merely memorized this problem and would not be able to handle new challenges they hadn't seen before. That is exactly what ChatGPT is.
I think ChatGPT is going to be a huge tool in the programmer's belt, but unless there's some breakthrough in how it determines what to "say", it's not ready to replace programmers.
I do love the development of this technology, and it sparks great philosophical discussions about knowledge. Perhaps humans are just more advanced versions of these machines, having fine-tuned our biases and our scoring of the next words in a sentence. But as a human, I know you don't always want to throw in randomness when determining the next phrase. Something like math is precise, and being off by even one integer is completely wrong.
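The randomness point can be sketched with a toy next-token chooser. The scores below are made up purely for illustration; real models assign probabilities over a whole vocabulary, but the contrast between sampling and always taking the top choice is the same:

```python
import random

# Hypothetical, made-up scores for completing "7 + 11 = ".
# The correct answer ("18") is merely the most likely, not guaranteed.
scores = {"18": 0.55, "15": 0.25, "13": 0.20}

def pick_greedy(scores):
    # Always take the highest-scoring candidate: deterministic,
    # which is what you want when only one answer is acceptable.
    return max(scores, key=scores.get)

def pick_sampled(scores, rng):
    # Sample in proportion to the scores: fine for varied prose,
    # but it will emit a wrong sum some fraction of the time.
    tokens = list(scores)
    weights = [scores[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random(0)
print(pick_greedy(scores))  # always "18"
print({pick_sampled(scores, rng) for _ in range(100)})
```

With sampling, even a model that "knows" the right answer is the most likely one can still blurt out 13 or 15, which matches the flip-flopping behavior described above.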