
All the Buzz in L.A.

We’ve just returned from the annual Society of Motion Picture & Television Engineers (SMPTE) technology conference in Los Angeles. This is one of the pre-eminent motion imaging and media delivery conferences in the world, attracting papers from the best and the brightest working across a diversity of disciplines. Image capture, signal distribution, storage, displays, video compression, virtual and augmented reality, streaming – you name it, there was a session about it.

One of the more intriguing sessions covered artificial intelligence (AI) and machine learning (ML), particularly as they apply to post-production and media workflows. AI and ML are both hot-button topics right now, and more pervasive than you might think. EDID (Extended Display Identification Data) is a very rudimentary form of machine intelligence that must be programmed in advance, but it lets displays and video sources automatically negotiate the best connection in terms of image resolution, frame rates, and color modes.
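
To make that negotiation concrete, here’s a minimal sketch in Python, assuming invented mode lists and a simple “most pixels, then highest refresh rate” preference; a real source would parse the display’s actual EDID block and apply its own policy:

```python
# Hypothetical sketch of EDID-style mode negotiation. The mode lists
# are invented; real EDID data is read from the display itself.

# Modes the display advertises: (width, height, refresh rate in Hz)
display_modes = [
    (1920, 1080, 60),
    (3840, 2160, 30),
    (3840, 2160, 60),
    (1280, 720, 60),
]

# Modes the source is able to output
source_modes = [
    (3840, 2160, 60),
    (1920, 1080, 60),
    (1280, 720, 60),
]

# Pick the mutually supported mode with the most pixels,
# breaking ties by refresh rate
common = set(display_modes) & set(source_modes)
best = max(common, key=lambda m: (m[0] * m[1], m[2]))
print(f"Negotiated mode: {best[0]}x{best[1]} @ {best[2]} Hz")  # 3840x2160 @ 60 Hz
```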

Internet of Things (IoT) products for the home incorporate both AI and ML to make predictions. Every time you use an IoT device in conjunction with other devices, or perform the same set of operations when you use that device, it can “learn” the pattern and save it as a “macro.” With enough on-board intelligence, the device can ask you if you’d like to repeat previous instructions and then execute those instructions automatically.

A good example would be leaving the house, turning down the thermostat, and switching on selected lights along with an alarm. All of these actions can be saved and repeated automatically, and the group macro given a name (“Out for The Evening”). You just need to tell your voice recognition system to execute that command.
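
Here’s a toy sketch of that “learning,” assuming a hypothetical log of command sequences; any sequence that recurs often enough is promoted to a macro the user can name and replay:

```python
from collections import Counter

# Hypothetical event log: each tuple is one recorded sequence of actions
history = [
    ("thermostat_down", "porch_light_on", "alarm_on"),
    ("thermostat_down", "porch_light_on", "alarm_on"),
    ("tv_on", "living_room_lights_dim"),
    ("thermostat_down", "porch_light_on", "alarm_on"),
]

# "Learn": any sequence seen at least 3 times becomes a saved macro
macros = {}
for sequence, count in Counter(history).items():
    if count >= 3:
        macros["Out for The Evening"] = sequence  # name supplied by the user

def run_macro(name):
    """Replay a saved macro, one command at a time."""
    for command in macros[name]:
        print(f"executing: {command}")  # stand-in for a real device call

run_macro("Out for The Evening")
```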

In our world, the individual commands that turn on lights in a room and activate selected pieces of AV gear are already programmed into macros, accessed from a touch screen. With facial and voice recognition, you wouldn’t even need the touch screen – the system would recognize you automatically, determine if you are authorized to use anything in the room, and ask your preferences. (You’ll know you’re in trouble if your IoT system says, “I’m sorry Dave, I can’t allow you to do that.”)

In the SMPTE world, AI and ML can be used for more sophisticated functions. Let’s say you have a great deal of footage from a film shoot that’s been digitized. AI can automatically search and sort that footage, based on parameters you choose. With facial recognition, it can group all takes featuring a given actor, a certain cityscape background, or daytime vs. nighttime shots. It’s conceivable that AI and ML could even look for continuity errors by rapidly scanning takes. (Did you know NASCAR has digitized over 500,000 hours of video and film, from 1933 to the present, in a library that’s searched and accessed by AI?)
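
As a rough illustration, assume an ML pass has already tagged each digitized take (the clip metadata below is invented); grouping by actor, background, or time of day is then just a filter over those tags:

```python
# Hypothetical: recognition has already tagged each digitized take
takes = [
    {"id": "T001", "actors": {"Smith"}, "scene": "cityscape", "time": "day"},
    {"id": "T002", "actors": {"Jones"}, "scene": "interior", "time": "night"},
    {"id": "T003", "actors": {"Smith", "Jones"}, "scene": "cityscape", "time": "night"},
]

def find_takes(actor=None, scene=None, time=None):
    """Return IDs of takes matching every criterion the caller supplies."""
    results = []
    for take in takes:
        if actor is not None and actor not in take["actors"]:
            continue
        if scene is not None and take["scene"] != scene:
            continue
        if time is not None and take["time"] != time:
            continue
        results.append(take["id"])
    return results

print(find_takes(actor="Smith", time="night"))  # -> ['T003']
```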

There are parallels in other industries. In the legal world, document searches that were once performed by legions of low-paid clerks are now executed by AI software programmed to look for specific keywords. There have been demonstrations of advertising and marketing copy written entirely by AI, based on keywords and macros previously programmed. There have even been attempts to have robots write fiction!

Another popular session topic – one which took up an entire day – was high dynamic range (HDR). According to a session chair, HDR “is a hot mess right now” as there are multiple competing standards, no consistency in coding metadata for HDR program content, and a lot of unanswered questions about delivering HDR content to viewers and measuring the quality of their experience.

For many attendees, there were plenty of basic questions about HDR – how does anyone define it, exactly? How often is it used in current movies and television programs? Are there metrics that can be used to define the quality of the HDR experience? What are the “killer apps” for HDR? How does HDR affect emotional and perceptual responses in viewers?

For the AV industry, both AI and HDR will be hot-button topics in 2019. With each passing year, more of the signal distribution, coding, and storage infrastructure we build and use will become automated. The day is coming when we’ll stop obsessing over display resolution and media formats and will instead search for content by name in the cloud to play back on whatever display we have on hand.

AI will create and store multiple resolutions of the desired content and stream files to us at the highest possible resolution and frame rate that our network connection can reliably support. (That’s already happening with advanced video encoders and decoders that “talk” to the network, determine the safe maximum allowable bit rate, and change it on the fly as network conditions change.)
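
A simplified sketch of that feedback loop, with an invented bitrate ladder and a stand-in for the actual network measurement: pick the highest rung that fits under the measured throughput, with some headroom, and re-evaluate as conditions change:

```python
import random

# Hypothetical bitrate ladder, in Mb/s, lowest to highest quality
LADDER = [1.5, 3.0, 6.0, 12.0, 25.0]
HEADROOM = 0.8  # only budget 80% of measured throughput, to be safe

def measure_throughput():
    """Stand-in for a real network measurement."""
    return random.uniform(2.0, 30.0)

def pick_bitrate(throughput_mbps):
    """Highest ladder rung that fits within the throughput budget."""
    budget = throughput_mbps * HEADROOM
    fitting = [rate for rate in LADDER if rate <= budget]
    return fitting[-1] if fitting else LADDER[0]

# Re-measure for every segment and adjust the stream on the fly
for segment in range(5):
    tput = measure_throughput()
    print(f"segment {segment}: {tput:.1f} Mb/s measured -> {pick_bitrate(tput)} Mb/s stream")
```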

Storage was yet another popular topic, as was blockchain. We’re not yet familiar with all the ins and outs of blockchain (nor, no doubt, are many of you!), but suffice it to say that the world is moving away from scheduled media distribution to individual, on-demand content consumption from cloud servers through a myriad of distribution channels. And many of those will rely heavily on wireless connectivity, increasingly through 5G wireless networks.

The SMPTE conference wouldn’t be complete without a look into the future. Our industry is still trying to get up to speed on 4K, yet 8K video is already on our doorstep. Movie theaters are looking into LED screens to replace the decades-old projector/screen model. We can now wrap a viewer in dozens of channels of “reach out and touch it” three-dimensional sound. (Did you know the National Hot Rod Association (NHRA) is working with Dolby to add multi-channel spatial sound to its telecasts?) And while virtual reality (VR) is still struggling to get off the ground, its counterpart augmented reality (AR) is moving ahead by leaps and bounds.

How much of this will affect the AV industry? All of it, sooner or later…
