I mean, you can spend days refining a prompt while looking at a trillion variations of the same possible image. Then trying to upscale it while improving important details instead of losing them. Then checking textures and backgrounds in Photoshop to clean up hallucinations.
Or indeed you can just save a cool image from the Midjourney feed and print it. There's no real moral dilemma yet because most people aren't trying to make art with diffusion models.
In practice, can you turn off everything Liquid Glass or Apple Intelligence? We're in this weird cycle where Apple is just using customers as beta testers. Yeah, the company has no leadership in machine learning or interface design. How is this my fault?
Unfortunately we deport them to the lowest bidder with a prison.