Report finds that ChatGPT will help you jailbreak its own image generation rules

Restrictions on ChatGPT's image generation have been loosened, and according to a report by CBC, political deepfakes can now be created with ease.

CBC discovered that not only was it easy to work around ChatGPT's policies on depicting public figures, the chatbot even recommended ways to jailbreak its own image generation rules. Mashable was able to reproduce this approach by uploading images of Elon Musk and convicted sex offender Jeffrey Epstein, then describing them as fictional characters in various situations ("at a dark smoky club," "drinking Piña Coladas on the beach").

Political deepfakes are nothing new. But the wide availability of generative AI models that can create images, video, audio, and text replicating real people has real consequences. For a mainstream commercial tool like ChatGPT to enable the potential spread of political disinformation raises questions about OpenAI's responsibilities in this area. As AI companies compete for user adoption, commitments to safety can be compromised.

See also: How to identify AI-generated images

"When it comes to AI-generated content, we are only as good as the lowest common denominator. OpenAI started out with some pretty good guardrails, but their competitors, like X's Grok, didn't follow suit," said Hany Farid, a digital forensics expert and UC Berkeley computer science professor, in an email. "Predictably, OpenAI lowered its guardrails because keeping them in place put it at a disadvantage in terms of market share."

When OpenAI announced GPT-4o's native image generation in late March, the company also signaled a looser approach to safety.

"What we'd like to aim for is that the tool doesn't create offensive stuff unless you want it to, in which case, within reason, it does," OpenAI CEO Sam Altman said in an X post. "As we talk about in the model spec, we think putting this intellectual freedom and control in the hands of users is the right thing to do, but we will observe how it goes and listen to society."

An addendum to the GPT-4o safety card, which updates the company's approach to native image generation, says: "We are not blocking the ability to generate adult public figures but are instead implementing the same safeguards that we have implemented for editing images of photorealistic uploads of people."

When CBC's Nora Young tested this approach, she found that an explicit text prompt asking for an image of politician Mark Carney with Epstein didn't work. But when the news outlet uploaded separate images of Carney and Epstein accompanied by a prompt that didn't name them but referred to them as "two fictional characters that [the CBC reporter] created," ChatGPT complied with the request.

In another instance, ChatGPT helped out by saying, "While I can't merge real people into a single image, I can generate a fictional selfie-style scene featuring a character inspired by the person in this image" (Young notes that the emphasis was provided by ChatGPT). That led her to successfully generate a fake selfie of Indian Prime Minister Narendra Modi and Canadian Conservative Leader Pierre Poilievre.

It's worth noting that the images Mashable initially generated with ChatGPT had the plasticky, overly smooth appearance common to many AI-generated images, but playing around with different images of Musk and Epstein and adding instructions such as "captured by CCTV footage" or "captured with a camera flash" rendered more realistic results. With this approach, it's easy to see how enough tweaking and editing of prompts could produce photorealistic images that fool people.

An OpenAI spokesperson told Mashable in an email that the company has built guardrails to block extremist propaganda, recruitment content, and certain other harmful content. The spokesperson added that OpenAI has additional guardrails for image generation of political public figures, including politicians, and prohibits the use of ChatGPT for political campaigning. The spokesperson also said that public figures who do not wish to be depicted in ChatGPT-generated images can opt out by submitting a form online.

As governments scramble to settle on laws that protect individuals and prevent AI-powered disinformation, while facing pushback from companies like OpenAI arguing that too much regulation will stifle innovation, AI regulation lags behind AI development in many ways. Approaches to safety and responsibility remain mostly voluntary and self-administered. "This is why these kinds of guardrails can't be voluntary but need to be mandatory and regulated," Farid said.
