Is art dead? What Sora 2 means for your rights, creativity, and legal risk

Photo illustration by Cheng Xin/Getty Images



ZDNET's key takeaways

  • AI video tools now raise real legal and ownership risks.
  • OpenAI says Sora supports creativity, but critics aren't so sure.
  • Generative video could democratize art or destroy it entirely.

OpenAI's Sora 2 generative AI video creator has been out for about a week, and already it's causing an uproar.

SpongeBob cooking meth.

Ronald McDonald running away from Batman while police cars give chase.

You get the idea. This is the inevitable outcome when you give humans the opportunity to create anything they want with very little effort. We are twisted and easily amused people.

Also: I tried the new Sora 2 to generate AI videos - and the results were pure sorcery

Human nature is like that. First, slightly less mature individuals start thinking, "Hmm. What can I do with that? Let's make something odd or weird to give me some LOLs." The inevitable result is inappropriate themes or videos that are just wrong on so many levels.

Then, the unscrupulous start to think: "Hmm. I think I can get some mileage out of that. I wonder what I can do with it?" These folks might generate an enormous amount of AI slop for profit, or fabricate an endorsement from a well-known spokesperson.

This is the natural evolution of human nature. When a new capability is presented to a wide populace, it will be misused for amusement, profit, and perversity. No surprise there.

Here, let me demonstrate: I found a video of OpenAI CEO Sam Altman on the Sora 2 Explore page. In the video, he's saying that "PAI3 gives you the AI experience that OpenAI cannot." PAI3 is a decentralized, privacy-oriented AI network company.

So, I clicked the remix button right on the Sora site and created a new video. Here's a screenshot of both of them side-by-side.


Click the links below to watch both Sams on the Sora 2 website.

Videos created by Sora 2. Screenshot by David Gewirtz/ZDNET

If you have a ChatGPT Plus account, you can watch these videos on Sora: Sam on left | Sam on right. To get Altman's endorsement, all I had to do was feed Sora 2 this prompt:

This guy saying "My name is Sam and I need to tell you. ZDNET is the place to go for the latest AI news and analysis. I love those folks!" He's now wearing an electric green T-shirt and has bright blue hair.

It took about five minutes, after which the CEO of OpenAI was singing ZDNET's praises. But let's be clear. This video is presented solely as an editorial example to showcase the technology's capability. We do not represent that Mr. Altman actually has blue hair or a green T-shirt. It's also not fair for us to mind-read about the man's fondness for ZDNET, although, hey, what's not to like?

Also: I'm an AI tools expert, and these are the 4 I pay for now (plus 2 I'm eyeing)

In this article, we'll examine three key issues surrounding Sora 2: legal and rights issues, the impact on creativity, and the newest challenge in distinguishing reality from deepfakes.

Oh, and stay with us: We're concluding with a very interesting observation from OpenAI's rep that tells us what they really think about human creativity.

Legal and rights issues

When Sora 2 was first made available, there were no guardrails. Users could ask the AI to create anything. In less than five days, the app hit over a million downloads and soared to the top of the iPhone App Store listings. Nearly everyone who downloaded Sora created instant videos, resulting in the branding and likeness Armageddon I discussed above.

On September 29, The Wall Street Journal reported that OpenAI had started contacting Hollywood rights holders, informing them of the impending release of Sora 2 and letting them know they could opt out if they didn't want their IP represented in the program.

As you might imagine, this did not go over well with brand owners. Altman responded to the dust-up with a blog post on October 3, stating, "We will give rights holders more granular control over generation of characters."

Still, even after Altman's statement of contrition, rights holders were not satisfied. On October 6, for example, the Motion Picture Association (MPA) issued a brief but firm statement.

Also: Stop using AI for these 9 work tasks - here's why

According to Charles Rivkin, Chairman and CEO of the MPA, "Since Sora 2's release, videos that infringe our members' films, shows, and characters have proliferated on OpenAI's service and across social media."

Rivkin continues, "While OpenAI clarified it will 'soon' offer rightsholders more control over character generation, they must acknowledge it remains their responsibility -- not rightsholders' -- to prevent infringement on the Sora 2 service. OpenAI needs to take immediate and decisive action to address this issue. Well-established copyright law safeguards the rights of creators and applies here."

I can attest that, four days later, there are definitely some guardrails in place. I tried to get Sora to give me Patrick Stewart fighting Darth Vader and any ol' X-wing starfighter attacking the Death Star, and both prompts were immediately rejected with the note, "This content may violate our guardrails concerning third-party likeness."

Screenshot by David Gewirtz/ZDNET

When I reached out to the MPA for a follow-up comment based on my experience, John Mercurio, executive vice president, Global Communications, told ZDNET via email, "At this point, we aren't commenting beyond our statement from October 6."

OpenAI is clearly aware of these issues and concerns. When I reached out to the company via their PR representatives, I was pointed to OpenAI's Sora 2 System Card, a six-page, public-facing document that outlines Sora 2's capabilities and limitations. The company also pointed me to two other public resources worth reading.

Across these documents, OpenAI describes five main themes regarding safety and rights:

  1. Consent-based likeness control: A "cameo" feature lets users upload their own likeness and control how it can be used. Generation of public figures who haven't consented is supposed to be blocked.
  2. Intellectual property and audio safeguards: The company says it will block music and audio imitators and honor takedown requests.
  3. Provenance and transparency initiatives: The company places moving watermarks on videos and embeds C2PA (Coalition for Content Provenance and Authenticity) standardized metadata to help verify the origin of content.
  4. Usage policies prohibit misuse: Users will be banned for privacy violations, fraud, harassment, and threats.
  5. Recourse and policy enforcement: Users can report abuse for content removal and penalty.
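On the provenance point, the C2PA specification defines how a signed manifest is embedded directly in a media file; for JPEG images, the manifest travels inside APP11 marker segments as JUMBF boxes. As a rough illustration of what a provenance check starts with, here is a minimal sketch (not OpenAI's implementation) that scans a JPEG's marker segments for APP11 carriers. Detecting the segment is only the first step; actually parsing the JUMBF boxes and validating signatures requires a full C2PA library.

```python
import struct

def find_app11_segments(jpeg_bytes: bytes) -> list[int]:
    """Return offsets of APP11 (0xFFEB) marker segments in a JPEG stream.

    APP11 is where C2PA/JUMBF provenance manifests are embedded in JPEGs.
    Finding one only suggests a manifest is present; it does not verify it.
    """
    if jpeg_bytes[:2] != b"\xff\xd8":           # must start with SOI marker
        raise ValueError("not a JPEG stream")
    offsets, pos = [], 2
    while pos + 4 <= len(jpeg_bytes):
        if jpeg_bytes[pos] != 0xFF:             # desynced; stop scanning
            break
        marker = jpeg_bytes[pos + 1]
        if marker in (0xD9, 0xDA):              # EOI or SOS: header is over
            break
        # Segment length is big-endian and includes its own two bytes.
        (length,) = struct.unpack(">H", jpeg_bytes[pos + 2:pos + 4])
        if marker == 0xEB:                      # APP11: possible JUMBF/C2PA
            offsets.append(pos)
        pos += 2 + length
    return offsets
```

The caveat, as the article notes later, is that metadata like this is only a clue: it survives honest workflows but can be stripped by a simple re-encode, which is why provenance is one signal among several rather than a guarantee.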

Who owns what, and who's to blame? When I put these questions to my OpenAI PR contact, I was told, "What I passed along is the extent of what we can share right now."

So I turned to Sean O'Brien, founder of the Yale Privacy Lab at Yale Law School. O'Brien told me, "When a human uses an AI system to produce content, that person, and often their organization, assumes liability for how the resulting output is used. If the output infringes on someone else's work, the human operator, not the AI system, is culpable."

Also: Unchecked AI agents could be disastrous for us all - but OpenID Foundation has a solution

O'Brien continued, "This principle was reinforced recently in the Perplexity case, where the company trained its models on copyrighted material without authorization. The precedent there is distinct from the authorship question, but it underlines that training on copyrighted data without permission constitutes a legally cognizable act of infringement."

Now, here's what should worry OpenAI, regardless of their guardrails, system card, and feed philosophy.

Yale's O'Brien summed it up with devastating clarity, "What's forming now is a four-part doctrine in US law. First, only human-created works are copyrightable. Second, generative AI outputs are broadly considered uncopyrightable and 'Public Domain by default.' Third, the human or organization utilizing AI systems is responsible for any infringement in the generated content. And, finally, training on copyrighted data without permission is legally actionable and not protected by ambiguity."

Impact on creativity

The interesting thing about creativity is that it's not just about imagination. In Webster's, the first definition of creating is "to bring into existence." Another definition is "to produce or bring about by a course of action or behavior." And yet another is "to produce through imaginative skill."

None of these limits the medium used to, say, oil paints or a film camera. They are all about manifesting something new.

Also: The US Copyright Office's new ruling on AI art is here - and it could change everything

I think about this a lot, because back when I took nature photos on film, my images were just OK. I spent a lot on chemical processing and enlarging, and was never satisfied. But as soon as I got my hands on Photoshop and a photo printer, my pictures became worthy of hanging on the wall. My imaginative skill wasn't just photography. It was the melding of pointing the camera, capturing 1/250th of a second on film, and then bringing it to life through digital means.

The question of creativity is particularly challenging in the world of generative AI. The US Copyright Office contends that only human-created works can be copyrighted. But where is the line between the tool, the medium, and the human?

Take Oblivious, a painting I "made" with the help of Midjourney's generative AI and Photoshop's retouching skills. The composition of elements was entirely my imagination, but the tools were digital.

Bert Monroy wrote the first book on Photoshop. He uses Photoshop to create amazing photorealistic images. But he doesn't take a photo and retouch it. Instead, pixel by pixel, he creates entirely new images that appear to be photographs. He uses the medium to explore his amazing skills and creativity. Is that human-made, or is it unworthy of copyright simply because Photoshop controls the pixels?

I asked Monroy for his thoughts about generative AI and creativity. He told me this:

"I have been a commercial illustrator and art director for most of my entire life. My clients had to pay for my work, a photographer, models, stylists, and, before computers, retouchers, typesetters and mechanical artists to put it all together. Now AI has come into play. The first thought that comes to my mind is how glad I am that gave up commercial art years ago.

"Now, with AI, the client has to think of what they want and write a prompt and the computer will produce a variety of versions in minutes with NO cost except for the electricity to run the computer. There's a lot of talk about how many jobs will be taken over by AI; well, it looks like the creative fields are being taken over."

Sora 2 is the harbinger of the next step in the merging of imagination and digital creativity. Yes, it can reproduce people, voices, and objects with disturbing and amazing fidelity. But as soon as we considered the way we use the tools and the medium to be a part of artistic expression, we agreed as a society that art and creativity extend beyond manual dexterity.

Also: There's a new OpenAI app in town - here's what to know about Sora for iOS

There is an issue here related to both skill and exclusivity. AI tools democratize access to creative output, allowing those with little or no skill to produce creative works rivaling those who have spent years honing their craft.

In some ways, this upheaval isn't about cramping creativity. It's about democratizing skills that some people spent lifetimes developing and that they use to make their living. That is of serious concern. I make my living mostly as a writer and programmer. Both of these fields are enormously threatened by generative AI.

But do we limit new tools to protect old trades? Monroy's work is incredible, but until you realize all his artwork is hand-painted in Photoshop, you'd be hard-pressed not to think it was a photograph by a talented photographer. Work that takes Bert months might take a random user with a smartphone minutes to capture. But it's the fact that Monroy uses the medium in a creative way that makes all his work so incredibly impressive.

Maly Ly has served as chief marketing officer at GoFundMe, global head of growth and engagement at Eventbrite, promotions manager at Nintendo, and product marketing manager at Lucasfilm. She held similar roles at storied game developers Square Enix and Ubisoft. Today, she's the founder and CEO of Wondr, a consumer AI startup. Her perspective is particularly instructive in this context.

She says, "AI video is forcing us to confront an old question with new stakes: Who owns the output when the inputs are everything we've ever made? Copyright was built for a world of scarcity and single authorship, but AI creates through abundance and remix. We're not seeing creativity stolen; we're seeing it multiply."

Also: How to get Perplexity Pro free for a year - you have 3 options

The fact that generative AI is eliminating the scarcity of skills is terrifying to those of us who have made our identities about having those skills. But where Sora and generative AI start to go wrong is when they train on the works of creatives and then serve up the results as if they were new works, effectively appropriating the work of others. This is a huge problem for Sora.

Ly has an innovative suggestion: "The real opportunity isn't protection, it's participation. Every artist, voice, and visual style that trains or inspires a model should be traceable and rewarded through transparent value flows. The next copyright system will look less like paperwork and more like living code -- dynamic, fair, and built for collaboration."

Unfortunately, she's pinning her hopes for an updated and relevant copyright system on politicians.

But still, she does see an overall upside to AI, which is refreshing amid all the scary talk we've been having. She says, "If we get this right, AI video could become the most democratizing storytelling medium in history, creating a shared and accountable creative economy where inspiration finally pays its debts."

What is real?

Another societal challenge arising from the introduction of new technologies is how they change our perception of reality. Heck, there's an entire category of tech oriented around augmented, mixed, and virtual reality.

Probably the single most famous example of reality distortion due to technology occurred at 8 p.m. New York time on Oct. 30, 1938.

Also: We tested the best AR and MR glasses: Here's how the Meta Ray-Bans stack up

World War II hadn't yet officially begun, but Europe was in crisis. In March, Germany annexed Austria without firing a shot. In September, Britain and France signed the Munich Agreement, which allowed Hitler to take part of what was then Czechoslovakia. Japan had invaded China the previous year. Italy, under Mussolini, had invaded Ethiopia in 1935.

The idea of invasion was on everyone's mind. Into that atmosphere, a 23-year-old Orson Welles broadcast a modernized version of H.G. Wells' War of the Worlds on CBS Radio in New York City. There were disclaimers broadcast at the beginning of the show (think of them like the Sora watermarks on the videos), but people tuning in after the start thought they were listening to the news, and an actual Martian invasion was taking place in Grovers Mill, New Jersey.

When images, audio, or video are used to misrepresent reality, particularly for a political or nefarious purpose, they're called deepfakes. Obviously, movies like Star Wars and TV shows like Star Trek present fantastical realities, but everyone knows they're fiction.


Admittedly, I make this look good. In reality, I'm wearing a yellow T-shirt and a flannel vest. I made the image with Google's Nano Banana.

David Gewirtz/ZDNET

But when deepfakes are used to push an agenda or damage someone's reputation, they become far more troubling. And, as The Washington Post reported via MSN, twisted deepfakes of dead celebrities are deeply painful to their families.

In the article, Robin Williams' daughter Zelda is quoted as saying, "Stop sending me AI videos of dad…To watch the legacies of real people be condensed down to … horrible, TikTok slop puppeteering them is maddening."

Many AI tools prevent users from uploading images and clips of real people, although there are fairly easy ways to get around those limitations. The companies are also embedding provenance clues in the digital media itself to flag images and videos as being AI-created.

Also: Deepfake detection service Loti AI expands access to all users - for free

But will these efforts block deepfakes? Once again, this is not a new problem. Irish photo restoration artist Neil White documents examples of faked photos from way before Photoshop or Sora 2. There's an 1864 photo of General Ulysses S. Grant on a horse in front of troops that's entirely fabricated, and a 1930 photo of Stalin where he had his enemies airbrushed out.

Wackiest of all is a 1939 picture of the Canadian prime minister with Queen Elizabeth (the mother of Elizabeth II, the monarch we're most familiar with). Apparently, the PM thought it would be more politically advantageous to be seen on a poster alone with the queen, so he had King George VI airbrushed out.

In other words, the problem's not going away. We'll all have to use our instincts and highly tuned BS detectors to red-flag images and videos that are most likely fabricated. Still, it was fun making OpenAI's CEO have blue hair and sing ZDNET's praises.

What it all means going forward

Attorney Richard Santalesa, a founding member of the SmartEdgeLaw Group, focuses on technology transactions, data security, and intellectual property matters.

He told ZDNET, "Sora 2 most notably highlights the push and tug between creation and safeguarding of existing IP and copyright law. The opt-out, opt-in issue is fascinating because it's really applying the privacy notice and consent framework to AI creation, which is somewhat unique. And I think this is why OpenAI was caught on their back foot."

He explains why the company, with its very deep pockets, may well be the target of a flood of litigation. "Copyright grants the owner various exclusive rights under US copyright law, including the creation of derivative (but not necessarily transformative) works. All of these terms are legal terms of art, which matter practically but not always in the real world. Fair use gets a lot of attention, but as to use of specific owner copyrighted figures, my take is that only parody or pure news uses would be exempt from copyright liability regarding Sora 2 output on those fronts."

Santalesa did point out one factor in OpenAI's favor. "Sora 2 app's Terms of Use expressly prohibit users from 'use of our Services in a way that infringes, misappropriates or violates anyone's right.' While this prohibition is pretty standard in online ToU and acceptable use policies, it does highlight that the actual user has their own responsibilities and obligations with regard to copyright compliance."

As Santalesa says, "The genie is out of the bottle and won't be stuffed back in. The issue is how to manage and control the genie."

Also: Will AI damage human creativity? Most Americans say yes

What about the statement I promised you from OpenAI's PR rep? I'll leave you with that as a final thought. He says, "OpenAI's video generation tools are designed to support human creativity, not replace it, helping anyone explore ideas and express themselves in new ways."

What about you? Have you experimented with Sora 2 or other AI video tools? Do you think creators should be held responsible for what the AI generates, or should the companies behind these tools share that liability? How do you feel about AI systems using existing creative works to train new ones? Does that feel like theft or evolution? And do you believe generative video is expanding creativity or eroding authenticity? Let us know in the comments below.

Want more stories about AI? Sign up for Innovation, our weekly newsletter.


You can follow my day-to-day project updates on social media. Be sure to subscribe to my weekly update newsletter, and follow me on Twitter/X at @DavidGewirtz, on Facebook at Facebook.com/DavidGewirtz, on Instagram at Instagram.com/DavidGewirtz, on Bluesky at @DavidGewirtz.com, and on YouTube at YouTube.com/DavidGewirtzTV.
