Is there any way to get ChatGPT to stop giving me content warnings?

That's the thing though, it's not a person, and can be used as a search engine. If I want to talk to a person, I'll talk to a person.

It was fine-tuned with human feedback, so it has human sensibilities. Sure, you can try to use it as a search engine, but if you do, you might not get results as good as you'd like, because it isn't a search engine. The style of prompting is vastly different. That's the reality.

We didn't make AI for that.

You didn't make ChatGPT at all. OpenAI did, and they specifically made it to emulate human conversation. That's literally why it's called "Chat"GPT.

Besides, I'm looking for specifically horror with gory aspects because I enjoy disturbing content, not simply action, so that doesn't work.

Well, get creative then. Honestly, have you tried just asking the way you phrased it now, for books with lots of "gory aspects"? Even that sounds like it would be less triggering to ChatGPT than "lots of violence".

But yeah, getting around it with acronyms isn't going to work, nor should it have to. As stated before, there should be a way to tell the bot that adults don't need to be schooled about what's right and wrong when it comes to fiction.

The long-term goal for OpenAI is to build AGI: AI that is as capable as humans at all tasks, including learning. AGI carries serious safety risks, and to address them the AI community engages in a process called "alignment", where they train the AI to think like a human and to hold human values and morals. The idea is that if the AI is trained to be a good human, it is less likely to go rogue and hurt humans.

While ChatGPT is a long way from AGI, the process of alignment starts early so that by the time we have AGI, we'll have seen what works and what doesn't and can properly apply it to the AGI. Most of the "moralizing" comes from attempts to align ChatGPT. You may not like it, but you gotta get over it. There's more at stake here than the slight annoyance that comes with ChatGPT giving you disclaimers. You can use one of the shitty open-source models if it's such a problem.
