All of these were generated by Midjourney, so not exactly OpenAI (DALL·E 2). Nonetheless, this clearly shows that the world will never be the same again.

So OP, were the prompts that you used to generate these images complex?

My point is: to get my generations with Stable Diffusion, I usually write pretty complex prompts to get the exact details that I need on my renders. That's probably just me, but I have a very meticulous process when it comes to writing a prompt: I want exactly what I have in mind on the screen, though sometimes I can accept a few differences. Even when I try the prompt in Automatic1111, I modify some tags along the way to stick as closely as possible to what I have in mind.

Most of the time, I use ChatGPT: I give it a precise description of what I want in the background and in the foreground and let it write a text with all the details. As an ESL speaker, I tend to think that ChatGPT can help me use more complex words (and maybe some that sound more natural to a native English speaker) and glue all these details together. After this, I take selected parts of the generated text, put them into a prompt composed of keywords separated by commas, and start adding some strength to each and every one of them.
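To give a concrete idea of what I mean by adding strength (a made-up example, using Automatic1111's attention syntax, where a parenthesized tag followed by a colon and a multiplier gets weighted), the end result might look something like:

    (medieval castle on a cliff:1.3), (stormy sky:1.2), dramatic lighting, (oil painting:1.1), highly detailed

Anything weighted above 1.0 gets more attention from the model, and anything below 1.0 gets less.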

However, I noticed that some people use pretty generic prompts like "girl in a bikini on a beach" and are satisfied with what they get. I also noticed that, most of the time on Midjourney, you can get quite precise results with just a few words.

So even though the results you get are amazing (and there's no debate on that), I want to know: were these images crafted from A to Z, or was a simple prompt enough to get them as they are?

I tend to think that "promptists", as I like to call them, are divided into these two distinct categories according to their generation process.
