April 18, 2018

Artificial Intelligence and Machine Learning:

Hollywood’s Next Frontier

The NAB Show in Las Vegas included a few panels on how media companies can leverage AI and machine learning in daily operations. For example:

  • At the heart of the NFL’s Next Gen Stats initiative is artificial intelligence (AI), which takes in-game statistics to a whole new level, providing fans with new ways to engage with NFL Media properties, broadcasts, and digital platforms.
  • C-SPAN is using machine learning and AI to index its vast archives of video content, using facial recognition technology to catalog speakers with incredible accuracy.
  • Movie studios like 20th Century Fox are looking to AI, and recently worked with the IBM Watson team to automatically create a movie trailer for its film Morgan.

What is driving these new initiatives?

Today, media companies are playing in a very different ball game than even a few short years ago. To engage viewers, they must produce multiple versions of the same video content in order to reach audiences on countless platforms and devices, often without additional budget or headcount. As video became the centerpiece of digital engagement, media companies had to find ways to feed this voracious machine, and researchers began touting the benefits of machine learning.

At NAB, Tom Ohanian, a broadcast and digital media transformation leader, noted that the use of AI and machine learning centers on three main themes: boosting efficiency and reducing human workload, content insight and workflow steering, and automatic content creation.

Today, neural networks are the building blocks of AI and machine learning applications such as speech-to-text, language recognition, metadata extraction, image recognition, and more. For example, machine transcription of content is faster and more efficient than human transcription, with similar or lower error rates. Additionally, the technology can be used for closed captioning in over 80 languages with a 95-99% accuracy rate; a 5.1% machine error rate is on par with human transcriptionists.
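Transcription accuracy figures like these are typically reported as word error rate (WER), the fraction of reference words a system gets wrong. As an illustrative sketch only (not drawn from any vendor's tooling), WER can be computed with a word-level edit distance:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / number of reference words,
    computed via Levenshtein edit distance over word sequences."""
    ref = reference.split()
    hyp = hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # deleting all reference words
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # inserting all hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution or match
    return dp[len(ref)][len(hyp)] / len(ref)

# One substitution ("sit" for "sat") and one deletion ("the") against a
# six-word reference gives a WER of 2/6, roughly 33%.
wer = word_error_rate("the cat sat on the mat", "the cat sit on mat")
```

By this measure, the 5.1% machine error rate cited above means roughly one word in twenty is substituted, dropped, or inserted relative to a human-verified transcript.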

Jean Bolot, VP of Research & Innovation at Technicolor, where he leads the company's AI Lab in Palo Alto, says production can be optimized with AI: by feeding scripts into machine learning models, studios can predict the complexity of scenes, better estimate visual effects costs, and determine staffing needs. The goal: to make productions better, faster, and cheaper.

Even more eye opening is automated content creation. While this has been around for a few years in the blogosphere, it offers new tools for visual storytellers. For example, IBM Watson AI is now being used at Wimbledon to not only power analytics, but more importantly to automatically create highlight clips. The same technology is used to power highlights for The Masters. NHK has taken a different approach, tapping into Twitter to listen for trending topics to automatically build highlights.

Additional use cases in sports involve automatic directing of live broadcasts, and the use of AI to extract highlights from live soccer games at the proper length and correct aspect ratio for social media platforms.

Relatedly, NHK has developed five technologies, including robotic description of table tennis and CG-based sign language that uses an animated on-camera host to sign sporting events.

The Next Big Transition in the Editing Room
Machine learning and AI are also being touted as tools to create rough cuts according to specific rules: for example, cutting to whoever is speaking, favoring a certain actor, or even interpreting facial emotions. In demonstrations, the technology has shown impressive accuracy.

Norman Hollyn, A.C.E., film editor, distinguished professor at the USC School of Cinematic Arts, and author of The Lean Forward Moment: Create Compelling Stories for Film, TV and the Web, says we are on the verge of the next big transition in the editing room. It’s not unlike the days when physical film editing was replaced by an all-digital workflow.

But it doesn’t stop there. Hollyn also notes that AI can be used to analyze sentiment and determine emotion, and even gauge a timeline of emotion, “a heightened version of what Walter Murch does right now.”

The movie industry is looking at AI for such things as color correction, quality control, subtitling, even instant movie trailers. Disney Research has demonstrated working models of auto-editing of multicam footage.

But these advances don’t sit so well with some in the creative community, including New York producer Kevin Heck, who produces commercials and corporate videos. “My concern is that basically having the models replacing actual human jobs and doing it in a very short time frame. I would say in about two to three years we’re going to start seeing people who own companies, the corporations that are trying to meet the bottom line, easily letting people go as a result of saving money, as opposed to doing what’s right for humanity.”

“Of course we’re concerned with that,” says Hollyn. He notes that lower level assistant jobs are likely to be altered or eliminated earlier and more dramatically than others. That being said, he argues that these technologies will give rise to entirely new job categories. “No one really knew there would be a DIT job category ten years ago…. It will be up to the people who are doing the affected jobs to change, as well.”

New Concerns of Fake News
Automatic lip replacement technology is also advancing. Researchers have seamlessly melded the lip and mouth movements from one Barack Obama interview with the audio of another, unrelated clip, showcasing how convincingly such tools can manipulate video. Could script changes or complete actor replacement be far behind? Could Ridley Scott use similar technology should he encounter another Kevin Spacey moment?

But there’s more. Auto lip replacement can now be combined with speech synthesis: the artificial production of human speech by splicing previously recorded speech into new combinations. There’s a sinister side to this technology, particularly in the hands of bad actors, where it could take fake news to a whole new level.

Professor Hollyn says that it may, in fact, be AI that saves us from such scenarios “as people are already developing AI forensic tools to track changes made and identify the original source of material. The blockchain promises to bring some order to that chaos. But it will take leadership and a not insubstantial amount of money, to push for these things.”

With such dramatic advances in the field, are synthetic actors next? The idea was widely dismissed as pure science fiction in the last decade, but one need only drive a few miles from the hubbub of the Las Vegas Convention Center to the venue for the current Cirque du Soleil show, Michael Jackson ONE. There, audience members experience a hologram of the late singer in a lifelike performance, one created entirely with computers. Fans are left spellbound, some visibly brought to tears, proving just how real such technology-created representations can be.

Legislative Solutions and Full Transparency
Developers of AI models say this is all about augmenting the human element, not replacing it. For smaller productions, AI-powered automation can make the difference between having a production or not; for large-scale productions, it allows producers to focus on storytelling.

Still, it’s not too early to ask if it’s time for legislators to get involved, or whether they should even play a role in regulating AI as it advances. Critics say researchers have an obligation to aim for full transparency and exposure of the technology and its potential impact on society. Such awareness will raise critical thinking as fake news concerns escalate.

Entertainment unions and trade organizations need to take a bigger role in examining the full implications of AI on their workforces and encourage open discussion. Thought leaders discussing the ethics and implications of artificial intelligence need more exposure in mainstream media. Elon Musk, one of the most vocal leaders in the arena, has warned that we need to “…pay close attention to the development of AI. We need to be very careful in how we adopt AI and make sure that researchers don’t get carried away.”

Patrick Perez is a digital professional and multi-platform specialist with a stellar track record generating multi-million dollar returns within the digital ecosystem: SVOD, OTT, mobile, authentication, Pay-Per-View, and other digital platforms. He lives in Los Angeles.

