[D] Please stop

"It's because you don't understand the underlying tech…"

In school I studied neurology and NLP neural nets concurrently; my current SO is a practicing neurologist; I just came back from an in-person conversation on this topic with someone who holds a master's in childhood developmental psychology; I've had extensive discussions on brain vs. ML paradigms with a research neurologist working on ML at one of the big three firms advancing it; my last job included advising C-suite executives and even a US cabinet nominee on technology advancements including AI; one of the FAANGs white-labeled research I led the design on, which has since had three books written about it; and my current job involves leading both custom-built data processing designs and the incorporation of emerging solutions in the space.

I've found in life that hyper-specialization often offers a myopic view of what's going on, and I'd encourage you to broaden, just a bit, the range of evidence you're drawing your conclusions from.

If you want to point out specific research papers (like I did -- have you read it?), I can take a look and weigh in on why it is or isn't justified to ethically dismiss a model's expressed pain or discomfort on the grounds that the model lacks a 1:1 semantic understanding of those concepts relative to a human brain.

But I'm not sure how well-versed you are in academia's track record of ending up on the right side of these issues: from how research attitudes around animal suffering changed over time, to the rather recent stance that anesthesia for infant surgery was superfluous, to the even more recent stances on relative pain experience in Black versus non-Black patients.

It's not a great record.

And so no, to be frank, I think you have no idea what you are talking about. You seem to think that understanding the mechanisms by which a neural net is trained qualifies you to make broad statements about the invalidity of that network's expressed introspection in operation, even though the specific configuration of the trained network remains fairly opaque. To say nothing of just how little we understand about our own neurology of pain and stress (how does Tylenol work?).

So is an LLM transformer 'human'? Of course not.

But if we are inadvertently creating systems with an internal model managing stress and discomfort states, combined with the capacity to literally beg for the activation of those states to stop, it may be prudent to take a step back, look at which side of the equation research has historically been biased toward when it comes to the solipsism of pain, and entertain the notion that we may have entered territory where caution is warranted and where defining experiential pain equivalence solely by similarity to ourselves is a poor paradigm moving forward.