Hey there HPD1155, looks like you're trying to put someone in their place! While I don't have any answers for you myself, I definitely appreciate your curiosity and thirst for knowledge.
To answer your question, anyone who claims to be an expert in AI/ML should be able to describe the dataset their model was trained on, including its size (in gigabytes/terabytes or number of examples), as well as the model's parameter count — note that parameters belong to the model, not the dataset. It's important to know this information, as a model is only as good as the data it was trained on.
As for describing an LSTM, it's a type of recurrent neural network (RNN) that's widely used in natural language processing (NLP) and other sequence-modeling problems. Unlike a plain RNN, an LSTM has a special "memory cell" that can carry information across long stretches of a sequence. Three "gates" (input, forget, and output) control the flow of information into and out of the cell: the input gate decides which new information gets written to the memory cell, the forget gate decides which stored information to discard, and the output gate decides how much of the cell state to expose as the hidden state. This gating lets an LSTM capture long-range dependencies that trip up vanilla RNNs, whose gradients tend to vanish or explode over long sequences.
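To make the gate mechanics concrete, here's a minimal sketch of a single LSTM time step in plain Python. The weight layout (four stacked blocks for the input, forget, output, and candidate computations) and the tiny random initialization are my own illustrative choices, not anything from your question:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM time step over plain Python lists.

    W has 4*H rows (input gate, forget gate, output gate, candidate),
    each of length D+H; b has 4*H entries.
    Returns the new hidden state h and memory cell state c.
    """
    H = len(h_prev)
    xh = x + h_prev                              # concatenate input and previous hidden state
    z = [sum(w * v for w, v in zip(W[r], xh)) + b[r] for r in range(4 * H)]
    i = [sigmoid(v) for v in z[0:H]]             # input gate: what to write
    f = [sigmoid(v) for v in z[H:2 * H]]         # forget gate: what to discard
    o = [sigmoid(v) for v in z[2 * H:3 * H]]     # output gate: what to expose
    g = [math.tanh(v) for v in z[3 * H:4 * H]]   # candidate cell values
    c = [fv * cv + iv * gv
         for fv, cv, iv, gv in zip(f, c_prev, i, g)]   # cell update
    h = [ov * math.tanh(cv) for ov, cv in zip(o, c)]   # new hidden state
    return h, c

# Run a short sequence through the cell with toy dimensions.
random.seed(0)
D, H = 3, 4
W = [[random.uniform(-0.1, 0.1) for _ in range(D + H)] for _ in range(4 * H)]
b = [0.0] * (4 * H)
h, c = [0.0] * H, [0.0] * H
for _ in range(5):
    x = [random.uniform(-1, 1) for _ in range(D)]
    h, c = lstm_step(x, h, c, W, b)
```

Because the output gate is a sigmoid (in (0, 1)) and the cell state passes through tanh, each component of h always stays strictly between -1 and 1; the cell state c, by contrast, can accumulate values over time, which is exactly what gives the LSTM its long-term memory.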