What is NSFW?
NSFW ("not safe for work") images are those considered inappropriate for a work environment, often involving sexual or violent content. However, the way these images are handled in today's society is a controversial topic that highlights the hypocrisy surrounding what is considered censorable.
On the one hand, artworks containing nudity or erotic content are commonly censored, and sometimes even barred from exhibition in museums or galleries. On the other hand, violent and explicit imagery is freely shown in the media and entertainment industry.
This double standard in the censorship of NSFW images is concerning and raises questions about the ethics and values of our society. Why is an artistic nude considered more offensive than a scene of graphic violence? Why is violent content permitted while sexual content is censored?
The answer to these questions is complex and tied to the values and prejudices entrenched in our culture. In many cases, the censorship of NSFW images rests on notions of modesty and morality deemed appropriate for a society. But this selective censorship may also be shaped by sexism and misogyny, which frame the female body as indecent and offensive.
Moreover, the double standard in the censorship of NSFW images also reflects how violence and sex are perceived in our culture. Violence has been normalized and is considered acceptable in many forms of entertainment, while sex remains a taboo subject and is deemed inappropriate for certain settings.
In conclusion, the hypocrisy surrounding the censorship of NSFW images reflects the values and prejudices entrenched in our culture. It is essential to question these attitudes in order to build a more just and equitable society. Selective censorship of images must be re-evaluated, and decisions about what counts as offensive or inappropriate should be made from a critical and objective perspective.
The MidJourney AI moderation system is still evolving. It currently uses a two-phase filter, but the second phase appears overly aggressive. Despite being in alpha, users are receiving bans for any content the AI mod deems NSFW, including sexual, offensive, gory, and violent material. Paradoxically, the Rank page displays a significant amount of exactly that kind of imagery, which raises questions about the filters' effectiveness.
On multiple occasions, users have reused the same prompt repeatedly without consequences; then, some time later, the very same prompt suddenly triggers a ban, or at least a warning. The situation is worse for users of the alpha Web app: the AI mod there is strict, but the warning messages are inadequate, barely visible, and disappear within seconds. If users miss them and keep repeating the same prompt, believing something merely went wrong, they inadvertently rack up a violation and receive bans lasting an hour, three hours, five hours, or whatever duration the AI mod deems appropriate. Caution is therefore advised when using the alpha Web interface.
So, what can you do in this situation? Well, my suggestion would be to refrain from using the Web interface until both the AI mod and the Web interface are fine-tuned. Instead, consider utilizing Discord, where there is a clear warning system in place to help you avoid bans.
If, like me, you appreciate unconventional and diverse imagery such as zombies, gory scenes, burlesque, impressionist art, or shocking visuals, I recommend generating a variety of content. Every now and then, try something different, such as a photo of a duck or a supercar. It seems that our accounts are being flagged by the AI mod system, perhaps to evaluate our reputation or adherence to good practices. It may sound somewhat simplistic, but that’s the current reality of the situation.
There are cases I can't explain. The censorship of zombie images, for example. You can spend weeks generating images, tweaking parts of the prompt to achieve a particular effect or detail, altering the meaning of the scene, and so on, and everything seems fine. Then, suddenly, warnings and censorship appear. They say it's too gory for some people. Yet the TV series "The Walking Dead" has been one of the most-watched in television history. What's happening here? Double standards? Could it be that those who build this image moderator have personal biases they can't avoid?
Those who single out zombies and offensive sexual content should remember that other topics are equally or even more offensive. What about religious themes? Christian crosses can be highly offensive to certain cultures, not to mention vampires. If we dig deeper, we will find many subjects that are taboo in one culture but not in another. Without a doubt, the NSFW filter applied to images with erotic or sexual content in the United States may have no validity in Japan, for example: in that country, reading explicit material on public transportation is entirely normal, and nothing happens. Who has the authority to position themselves as the ultimate arbiter of correctness and decorum worldwide? From my perspective, nobody.