Kramer Blog

Welcome to our new blog! The place to share our thoughts on the next big ideas.

  • Making AV systems and services completely IT-friendly – Easy, Secure, and Managed

    In recent years, Kramer has been shifting its strategy and investing in IT-friendly Pro AV solutions, integrating hardware, software, and cloud systems like Kramer Control, the Kramer Network Enterprise AV Management Platform, VIA wireless presentation, KronoMeet Room Booking & Scheduling, Maestro integrated automation, and more. Software-driven AV functionality running on a single device is implemented across different systems, creating Kramer Open AV Platforms. This scalable and smart approach to AV architecture enables multiple functionalities like integrated video conferencing coupled with powerful room control and automation, digital signage, and much more.

    The industry is in the final stages of the convergence shift. IT has indeed taken over AV, and this requires a new approach to AV in general. It requires new kinds of solutions and different ways of working. We are committed to positioning Kramer as a provider of IT-friendly AV solutions, investing heavily in meeting the challenges and motivations of the IT department. We have integrated our IT-friendly philosophy into many of our products and solutions. The Kramer AV over IT approach is aimed at making AV systems and services completely IT-friendly – Easy, Secure, and Managed.

    We believe that user experience stands at the core of any easy, simple, accessible technology – and AV is no exception. Whether it is videoconferencing or wireless collaboration, AV must be fail-safe and user-proof, and always simple to navigate. For the end user, our focus on easy operation results in a fully transparent AV experience.

    Zero-touch automation with Kramer Maestro is our way of creating meeting space AV systems that do not require user intervention. Users can enter a room, connect their device to the BYOD interface – wired or wireless – and Maestro instantly activates a set of pre-configured actions to prepare the environment for collaboration.
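
    Conceptually, that trigger-and-actions pattern is simple. Here is a minimal sketch in Python – the device names, actions, and the on_connect() hook are invented for illustration and are not the actual Maestro API:

    ```python
    # Hypothetical zero-touch automation: a BYOD connection event fires a
    # set of pre-configured room actions. All names are illustrative only.
    ROOM_PRESET = [
        ("display", "power_on"),
        ("shades", "lower"),
        ("camera", "wake"),
        ("audio", "unmute"),
    ]

    def on_connect(interface: str) -> None:
        """Fire the pre-configured actions when a device connects."""
        print(f"Device detected on {interface} interface; preparing room...")
        for device, action in ROOM_PRESET:
            print(f"  -> {device}: {action}")

    on_connect("wireless")
    ```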

    We also simplify the process by equipping our VIA line with fully native support for AirPlay, Miracast, and Chromecast, enabling Windows 10, Apple devices, and Android devices to connect wirelessly and present content quickly and natively. This is what we call ‘True BYOD,’ which means users don’t need to install any extra piece of software to get connected and present wirelessly.

    “Easy” in the age of AV over IT matters to both the IT department and the AV installer. For the IT department, easy solutions are designed to be fail-safe so they will not generate service calls. To meet the technology and management needs of huddle spaces and small-to-medium size rooms built for ad-hoc meetings, we deliver solutions consisting of as few hardware components as possible. This strategy reduces points of failure and overall costs.

    For AV installers, “easy” also means selecting products that are simple to integrate into a larger system. Kramer’s cloud-driven control requires no programming or coding; it offers a drag-and-drop process that makes AV configuration easy and setup significantly quicker. Installers can configure a space and remotely access and update the AV system without having to leave their office to make local adjustments.

    We want to meet the expectation of IT managers that they can manage AV in the same way they manage IT. IT managers expect the ability to view and remotely access every component in the AV installation, just as they would in IT deployments. For an IT department, managing AV remotely is simply natural.

    IT departments also expect management tools that provide an immediate overview of the organization’s AV, showing – at a quick glance – the status of all connected devices. With the Kramer Network Enterprise AV Management Platform, we are enabling IT departments to commission, deploy, and manage AV just like an IT domain, with the power to remotely manage and access devices, push firmware upgrades, and receive notifications via a rich dashboard accessible from any type of device.

    Recent research has found that more than 95% of IT professionals would rate security as the primary factor when choosing any type of technology. This is why Kramer considers security a strategic goal – as essential as product functionality – and the company applies IT industry security standards across its own AV portfolio.


    Kramer’s products comply with top industry security standards such as IEEE 802.1X, 2048-bit SSL encryption, LDAP identity management, Common Criteria PPS 3.0, and more, and products are penetration tested against OWASP 2019 criteria. Pursuing ever-higher quality standards, Kramer is ISO 27001 certified and GDPR compliant.


    Kramer Open AV Platforms™ will show ISE attendees how the company is delivering on the promise of smart AV. Open AV Platforms runs multiple software-driven AV functions on a single device. The license-activated nature of Open AV Platforms lets customers determine what they need, and when they need it, rather than paying for unnecessary functionality. At any given point, a client can add an available feature or expand AV functionality by simply activating it as software. This software-driven method makes AV more scalable and more cost-effective; users can add features to an existing device, reducing the need to replace devices when extra functionality is needed.
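
    As a thought experiment, here is what the license-activated model might look like in code – the class, keys, and feature names are invented, not the actual platform API:

    ```python
    # Hypothetical sketch of license-activated AV functionality on a
    # single device; a real platform would validate keys with a server.
    class OpenAVDevice:
        def __init__(self) -> None:
            self.active_features: set[str] = set()

        def activate(self, feature: str, license_key: str) -> None:
            # Stand-in for real license validation.
            if license_key:
                self.active_features.add(feature)

        def run(self, feature: str) -> str:
            if feature not in self.active_features:
                return f"{feature}: not licensed yet (activate as software, no new hardware)"
            return f"{feature}: running"

    device = OpenAVDevice()
    device.activate("video_conferencing", "KEY-1234")
    print(device.run("video_conferencing"))  # running
    print(device.run("digital_signage"))     # not licensed yet
    ```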


    At Kramer, we understand that it is not enough to provide the right technology or solution. We have made a strategic decision to make customer education a key enabler of our business. With the launch of Kramer Academy, we are positioning ourselves as the bridge between two worlds – educating IT departments about AV and how to handle AV effectively, and enhancing the knowledge of traditional AV professionals to better understand how to operate in an IT-enabled world.

    To summarise: we are making AV systems and services completely IT-friendly by making them Easy to integrate, install, and run fail-safe; Secure from cyber attack; and Managed in the way an IT professional would expect.

  • Onward and Upward

    We recently attended the annual Society of Motion Picture & Television Engineers technology conference in Los Angeles. SMPTE has been holding this event for decades and it attracts the best and the brightest to talk about advancements in everything from television and motion picture production to human visual science, advances in audio, video compression and transmission, and lately, promoting the field to women and college graduates.


    One invited papers session in particular stood out this year, and its focus was on 8K television. Regular readers of this blog may remember that much of the research and development in this area is being undertaken by NHK, the national Japanese broadcasting network. NHK commenced their work way, way back in 1995, first achieving 4K resolution in camera sensors in 2004 and then introducing their first 8K camera sensor in 2006.


    Since then, they’ve designed and built a 4-pound 8K camera system, pushed sensor speeds to as high as 480 Hz, and created a simultaneous downconversion product that ingests 8K video and spits it out at native resolution, 4K resolution, and Full HD resolution. As you can imagine, that requires a ton of processing power!


    At this year’s session, one speaker from NHK detailed their work in next-generation camera sensors for 8K that incorporate a unique organic photoconductive film (OPF) layer to boost sensitivity while keeping noise to a minimum. That’s a real challenge when working with small camera sensors – Super 35 sensors for 4K (Ultra HD) production are already jammed full of tiny photosites, a design headache for camera engineers. Now, imagine you’re asked to double the number of pixels in the same size sensor, where the average pixel measures about 3 micrometers!


    The second paper described a new 1.25” 8K camera sensor that can record video at frame rates as high as 480 Hz, or eight times as fast as conventional sensors. Using this sensor, fast motion can be captured with minimal blurring and very fine detail. The captured video is down-converted in-camera to 120 Hz for eventual recording and playback. As you might guess, the data flowing from the camera sensor is a gusher: Uncompressed, with 10-bit 4:2:2 color sampling, it approaches 100 gigabits per second (Gb/s), or more than twice as fast as the latest version of HDMI (2.1) can handle.
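
    A quick back-of-envelope check shows where that figure comes from, assuming the common 8K raster of 8800 x 4500 total pixels (active plus blanking) at the down-converted 120 Hz rate:

    ```python
    # Rough check of the quoted figure: 8K raster including blanking,
    # 120 Hz, 10-bit 4:2:2 color (two 10-bit samples per pixel on average).
    total_pixels = 8800 * 4500
    bits_per_pixel = 10 * 2
    rate = total_pixels * 120 * bits_per_pixel
    print(f"{rate / 1e9:.1f} Gb/s")  # ~95 Gb/s -- approaching 100 Gb/s
    ```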


    The final NHK paper talked about setting up the world’s first full-time 4K/8K satellite broadcasting system, which launched in December of 2018. Aside from the technical challenges of bandwidth (both left-hand and right-hand circular polarization of the radio waves was necessary to carry all of the signal data), there was an additional obstacle: Many residents live in older apartment buildings, making the cable infrastructure upgrade process difficult. It was eventually solved by installing parallel lines of plastic optical fiber (POF, or Toslink) alongside existing coaxial cable, telephone, and power lines.


    Where is the relevance to our industry? Consider that, ten years ago, Ultra HD and 4K video were largely lab experiments to many of us. In 2009, we were just getting used to managing Full HD signals over signal distribution and interfacing systems, wrestling with color bit depth and varying frame rates, not to mention the limitations and foibles of HDMI.

    Yet, three years later, the first commercial Ultra HD monitors washed up on our shores. A decade later, Ultra HD has become the default resolution for consumer televisions and most commercial AV monitors and displays. Just as we did in 2009, we’re wrestling with the same signal management issues, color bit depths, refresh rates, and a whole new version of HDMI…which isn’t even ready to handle the higher bit rates that 4K video requires for higher frame rates and higher color bit depths.


    So, while we fuss, argue, complain, and try to adjust to this latest jump in resolution to 4K, there is a country that is already working at TWICE that video resolution for acquisition, editing, storage, and distribution to the home. There’s every reason to think that we’ll catch up to them eventually – the first 8K televisions already launched to the North American market earlier this year, and we’re seeing early interest in 8K displays for specialized installations like command and control, surveillance, visualization and augmented reality, and (of all things) 3D visualization using autostereo displays.


    Skeptics can scoff all they want, but this never-ending push upward and onward in spatial resolution isn’t going to stop. If anything, additional momentum will be provided by enhancements like high dynamic range, wider color gamuts, and high frame rate video. (Did you know that, as display screens get larger and fields of view become wider, any flicker in images created by judder and slower frame rates becomes increasingly noticeable? NHK studied this phenomenon and concluded that a minimum frame rate of 80 Hz was required for 4K and 8K on large displays.)


    And as usual, we’ll be expected to interface and transport these signals. The SMPTE SDI standard for UHD (12G SDI) is already inadequate for single-wire serial digital connections, having a maximum data rate of 11.88 Gb/s. This has resulted in 8K camera manufacturers employing four separate 12G SDI ports and some light compression in-camera to record 8K/60 video with 10-bit 4:2:2 color (uncompressed data rate of 47.7 Gb/s). And it’s also revived interest in the latest SMPTE standard for SDI, 24G (23.76 Gb/s, likely over optical fiber).
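
    The same back-of-envelope math shows just how tight that four-link arrangement is (again assuming an 8800 x 4500 total raster):

    ```python
    # 8K/60 with 10-bit 4:2:2 color (20 bits/pixel) vs. four 12G SDI links.
    rate_8k60 = 8800 * 4500 * 60 * 20
    print(f"8K/60 10-bit 4:2:2: {rate_8k60 / 1e9:.2f} Gb/s")  # ~47.5 Gb/s
    print(f"Four 12G SDI links: {4 * 11.88:.2f} Gb/s")        # 47.52 Gb/s
    # The links barely cover the payload, which is why a little in-camera
    # compression is used for headroom.
    ```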


    This should be interesting to watch, particularly since our industry is still working around the six-year-old TMDS-based HDMI 2.0 interface (18 Gb/s), is still largely allergic to optical fiber, and is promoting an AV/IT video codec that’s barely fast enough to squeeze 4K/60 10-bit 4:4:4 video through a 10-gigabit network switch.


    Do you feel the need for speed yet?



  • Newer! Faster! Brighter! (And Barely Fast Enough)

    Back in September, we discussed the escalation of “Ks,” as in how many thousands of pixels the display industry is trying to stuff into next-generation LCD, OLED, and inorganic LED panels. We mentioned that the first 8K displays are now coming to market, even as our industry is still trying to come to grips with the care, feeding, and handling of 4K / Ultra HD video signals.

    Things are moving more quickly than anticipated. The HDMI Forum recently held a press conference in New York City to talk about HDMI 2.1 and where it’s headed. This newer, faster version of HDMI was first introduced at CES in 2017 and is quite the departure from previous versions.

    Instead of using transition-minimized differential signaling (TMDS), which was the foundation of digital display interfaces going back to DVI in 1999, version 2.1 has adopted a packet format very similar to that of DisplayPort. By doing so, HDMI 2.1 can now expand signal carriage to four lanes of data with an embedded clock, compared to the three lanes with separate clock used in all HDMI versions through 2.0.

    There are other advantages. Because the signal is now fully packetized, it can be compressed using Display Stream Compression (DSC), which will come in really handy with the massive signals needed to handle high frame rate video and 8K. Another advantage is that the clock rates and data are free to zoom far beyond the 18 Gb/s limit of version 2.0.

    Indeed, HDMI 2.1 now has a maximum data rate of 48 Gb/s (or 12 Gb/s per lane). That number is mind-boggling: we’re only starting to see network switches with that much speed come to market. But if you run the numbers, you WILL need that kind of speed for advanced high-resolution imaging.

    Consider a 4K signal with high dynamic range and a 120 Hz frame rate. The base clock rate for such a signal, using standard CTA blanking, would be 4400 pixels x 2250 pixels x 120, or 1188 MHz (1.188 GHz). Add in 12-bit color (HDR requires at least 10-bit) with 4:4:4 (RGB) color resolution, and the grand total (after shopper coupons) is 1188 x 12 x 3 = 42.77 Gb/s. Going to lower color resolution lowers the tab a little: with 4:2:2 color, the data rate is 28.51 Gb/s, and with 4:2:0 color, it drops to 21.39 Gb/s.

    That’s still pretty fast – too fast for HDMI 2.0. And if we start talking about 8K imaging, things get even crazier. An 8K video stream (again, using standard CTA blanking) with 12-bit RGB color at a 60 Hz refresh will leave you in a cloud of dust:

    8800 x 4500 x 60 x 12 x 3 = 85.536 Gb/s.

    Zoom-zoom! We’d have to drop to 4:2:0 color resolution just to get that signal through an HDMI 2.1 connection. Even 4:2:2 color would be too fast at about 57 Gb/s. The current version of DisplayPort would also vanish in the rear-view mirror, as it is capped at 32.4 Gb/s. (We expect to hear about a new version of DP at CES next month, presumably one that’s a LOT faster.)

    This is presumably where DSC would enter the picture. It is capable of 2:1 compression with extremely low latency, and that would get our example 8K/60 signal down to earth and to a point where it could travel over HDMI 2.1 (but not DP). The only catch is, DSC requires quite a bit of computation to work correctly and is considered “CPU-hungry,” which of course adds cost to its implementation.
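
    For readers who want to play with these numbers, here is a small Python calculator that reproduces the arithmetic above – total rasters with standard CTA blanking, and 12-bit components (36, 24, and 18 bits per pixel for 4:4:4, 4:2:2, and 4:2:0, respectively):

    ```python
    # Reproducing the article's data-rate arithmetic.
    def data_rate_gbps(h_total, v_total, fps, bits_per_pixel):
        return h_total * v_total * fps * bits_per_pixel / 1e9

    examples = {
        "4K/120 4:4:4": (4400, 2250, 120, 36),
        "4K/120 4:2:2": (4400, 2250, 120, 24),
        "4K/120 4:2:0": (4400, 2250, 120, 18),
        "8K/60  4:4:4": (8800, 4500, 60, 36),
    }
    for name, args in examples.items():
        rate = data_rate_gbps(*args)
        verdict = "fits" if rate <= 48 else "exceeds"
        print(f"{name}: {rate:5.2f} Gb/s ({verdict} HDMI 2.1's 48 Gb/s)")

    # 2:1 DSC brings the 8K/60 4:4:4 stream back under the limit:
    print(f"8K/60 with 2:1 DSC: {data_rate_gbps(8800, 4500, 60, 36) / 2:.2f} Gb/s")
    ```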

    What’s curious about HDMI 2.1 to us is the continued lack of a native optical transport specification. Any signal running in the 40 Gb/s range should probably travel over optical fiber. Certainly, if it’s going to travel through a 40 Gb/s network switch, that transport will be as pulses of light and not electrons dancing on the outer edge of copper conductors.

    We inquired at the NYC press event if any HDMI Forum members were actually making v2.1 transmitter and receiver chipsets yet. So far, only one company in Japan (Socionext) is doing that, but you would be hard-pressed to find any commercial or consumer products that support v2.1 at present. (We’ll certainly have our eyes open at CES for one!)

    As mentioned in September, it’s expected that over 5 million 8K TVs will be shipped worldwide by the end of 2020 – just two years from now. Hand-in-hand will be a small but growing number of 8K monitors for commercial use (yes, there are customers waiting for such products, believe it or not) and the vast majority of those will come from super-sized LCD panel “fabs” in China that are currently under construction or just firing up.

    We’ve frequently used this expression in the past: “What good is a Ferrari if you live on a dirt road?” Well, that’s pretty much the situation we’re looking at with the next generation of displays. Higher resolution, high dynamic range, wider color gamuts, and high frame rates will all add up to super-sized packages of display data that dwarf what we switch and distribute today.

    New codecs like JPEG XS / TiCo will help to squeeze things through network switches, but we’ll still have a choke point at the physical display interface. And we don’t have any real solutions to the problem just yet: Do we use compression? Double up on interface connections? Skip the traditional HDMI / DP interface altogether, and use a decoder inside the display to decompress the signal?

    Stay tuned…

  • All the Buzz in L.A.

    We’ve just returned from the annual Society of Motion Picture & Television Engineers (SMPTE) technology conference in Los Angeles. This is one of the pre-eminent motion imaging and media delivery conferences in the world, attracting papers from the best and the brightest working across a diversity of disciplines. Image capture, signal distribution, storage, displays, video compression, virtual and augmented reality, streaming – you name it, there was a session about it.

    One of the more intriguing sessions covered artificial intelligence (AI) and machine learning (ML), particularly as those apply to post-production and media workflows. AI and ML are both hot-button topics right now, and more pervasive than you might think. EDID is a very rudimentary form of AI that must be programmed, but it allows displays and video sources to automatically make the best connection in terms of image resolution, frame rates, and color modes.
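
    To underline just how rudimentary that “intelligence” is: EDID is simply a 128-byte data table. Here is a short Python sketch, using the standard EDID layout and a made-up sample block, that decodes the three-letter manufacturer ID packed into bytes 8 and 9 (0x10AC is the well-known code for Dell):

    ```python
    # EDID manufacturer ID: a 16-bit word holding three 5-bit letters,
    # where 1 = 'A'. The display hands this table to the source at hot-plug.
    def manufacturer_id(edid: bytes) -> str:
        word = (edid[8] << 8) | edid[9]
        return "".join(
            chr(ord("A") - 1 + ((word >> shift) & 0x1F))
            for shift in (10, 5, 0)
        )

    header = bytes([0x00, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0x00])
    sample = header + bytes([0x10, 0xAC]) + bytes(118)  # pad to 128 bytes
    print(manufacturer_id(sample))  # DEL
    ```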

    Internet of Things (IoT) products for the home incorporate both AI and ML, based on predictions. Every time you use an IoT device in conjunction with other devices, or perform the same set of operations when you use that device, it can “learn” the patterns and save them as a “macro.” With enough on-board intelligence, the device can ask you if you’d like to repeat previous instructions and then execute those instructions automatically.

    A good example would be leaving the house, turning down the thermostat, and switching on selected lights along with an alarm. All of these actions can be saved and repeated automatically, and the group macro given a name (“Out for The Evening”). You just need to tell your voice recognition system to execute that command.
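
    In code, that “learn and replay” pattern is nothing exotic. Here is a minimal sketch – the device and command names are invented for illustration:

    ```python
    # A named macro is just a recorded list of (device, command) pairs
    # that can be replayed on a voice trigger.
    macros: dict[str, list[tuple[str, str]]] = {}

    def record_macro(name: str, actions: list[tuple[str, str]]) -> None:
        macros[name] = actions

    def run_macro(name: str) -> None:
        for device, command in macros.get(name, []):
            print(f"{device} -> {command}")

    record_macro("Out for The Evening", [
        ("thermostat", "set 65F"),
        ("lights", "hallway on, rest off"),
        ("alarm", "arm"),
    ])
    run_macro("Out for The Evening")
    ```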

    In our world, the individual commands that turn on lights in a room and activate selected pieces of AV gear are already programmed into macros, accessed from a touch screen. With facial and voice recognition, you wouldn’t even need the touchscreen – the system would recognize you automatically, determine if you are authorized to use anything in the room, and ask your preferences. (You’ll know you’re in trouble if your IoT system says, “I’m sorry Dave, I can’t allow you to do that.”)

    In the SMPTE world, AI and ML can be used for more sophisticated functions. Let’s say you have a great deal of footage from a film shoot that’s been digitized. AI can search that footage automatically and sort it, based on parameters you choose. With facial recognition, it can group all takes featuring a given actor, a certain cityscape background, or daytime vs. nighttime shots. It’s conceivable that AI & ML could even look for continuity errors by rapidly scanning takes. (Did you know NASCAR has digitized over 500,000 hours of video and film from 1933 to the present in their library, searched and accessed by AI?)

    There are parallels to other industries. In the legal world, document searches that were once performed by legions of low-paid clerks are now executed by AI robots, programmed to look for specific key words. Demonstrations have been made of advertising and marketing copy written entirely by AI, based on keywords and macros previously programmed. There have even been attempts to have robots write fiction!

    Another popular session topic – one which took up an entire day – was high dynamic range (HDR). According to a session chair, HDR “is a hot mess right now” as there are multiple competing standards, no consistency in coding metadata for HDR program content, and a lot of unanswered questions about delivering HDR content to viewers and measuring the quality of their experience.

    For many attendees, there were plenty of basic questions about HDR – how does anyone define it, exactly? How often is it used in current movies and television programs? Are there metrics that can be used to define the quality of the HDR experience? What are the “killer apps” for HDR? How does HDR affect emotional and perceptual responses in viewers?

    For the AV industry, both AI and HDR will be hot-button topics in 2019. With each passing year, more of the signal distribution, coding, and storage infrastructure we build and use will become automated. The day is coming when we’ll stop obsessing over display resolution and media formats and will instead search for content by name in the cloud to play back on whatever display we have on hand.

    AI will create and store multiple resolutions of the desired content and stream files to us at the highest possible resolution and frame rate that our network connection can reliably support. (That’s already happening with advanced video encoders and decoders that “talk” to the network, determine the safe maximum allowable bit rate, and change it on the fly as network conditions change.)
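
    That negotiation is easy to picture in code. Below is a deliberately simplified sketch of the idea – the bit-rate ladder and safety margin are invented numbers, not any particular encoder’s algorithm:

    ```python
    # Simplified rate adaptation: probe the network, then pick the highest
    # encoder bit rate that fits within a safety margin.
    LADDER_MBPS = [5, 10, 20, 40]   # available encoder bit rates
    SAFETY = 0.8                    # use at most 80% of measured throughput

    def pick_bitrate(measured_mbps: float) -> int:
        budget = measured_mbps * SAFETY
        usable = [r for r in LADDER_MBPS if r <= budget]
        return max(usable) if usable else LADDER_MBPS[0]

    for throughput in (50, 22, 9):  # network conditions changing on the fly
        print(f"{throughput} Mb/s link -> encode at {pick_bitrate(throughput)} Mb/s")
    ```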

    Storage was yet another popular topic, as was blockchain. We’re not quite yet familiar with the ins and outs of blockchain (and we suspect many of you aren’t, either!), but suffice it to say that the world is moving away from scheduled media distribution to individual, on-demand content consumption from cloud servers through a myriad of distribution channels. And many of those will rely heavily on wireless connectivity, increasingly through 5G wireless networks.

    The SMPTE conference wouldn’t be complete without a look into the future. Our industry is still trying to get up to speed on 4K, yet 8K video is already on our doorstep. Movie theaters are looking into LED screens to replace the decades-old projector/screen model. We can now wrap a viewer in dozens of channels of “reach out and touch it” three-dimensional sound. (Did you know the National Hot Rod Association (NHRA) is working with Dolby to add multi-channel spatial sound to its telecasts?) And while virtual reality (VR) is still struggling to get off the ground, its counterpart augmented reality (AR) is moving ahead by leaps and bounds.

    How much of this will affect the AV industry? All of it, sooner or later…

  • Is There a Future for Projectors?

    As a company primarily focused on signal management (switching, mixing, interfacing, and format conversion), Kramer doesn’t pick sides when it comes to signal sources and “sinks” (a/k/a displays). We’re more concerned with getting the signals there intact over a variety of connections, which today could mean anything from full-bandwidth HDMI cables to AV-over-IT and fast WiFi.

    But we can’t help but observe overarching trends in the AV industry. And one that clearly stands out is a shift away from front projection to direct-view displays, a category that includes everything from flat panel LCD and OLED technology to emissive LED walls, particularly those that use fine-pitch LED arrays.

    If you like to attend concerts by popular singers and groups (and who doesn’t?), you may have noticed the extensive use of image magnification (IMAG) in the form of towers of LED cubes, or arrayed as wide walls behind the band (or even both!). It’s hard to miss these stacks, particularly if the concert is outdoors and the sun hasn’t set yet.

    Sharp-eyed viewers might also notice that just about every touring act – whether it be U2, Paul McCartney, Blake Shelton, Keith Urban, or a Broadway musical – now uses LED walls as set pieces and IMAG displays. And why not? They’re super bright, scalable, and from a staging standpoint, comparatively easy to assemble and disassemble. At least, more so than flying projectors and screens, the “old school” way it used to be done.

    Fact is, LED walls (which primarily use components made in China and are priced very competitively) have substantially eaten into the market share of high-brightness projectors. And it’s easy to see why: there are no lamps (or filters) to change and no lenses to fit. Need a bigger image? You simply build a bigger LED wall. No worries about high ambient light levels, not when you’ve got upward of 3,000 nits of brightness to start with.

    To achieve that level of brightness with a projector, you’d have to start with well over 30,000 lumens. And that doesn’t even consider the size of the projected image. For those playing at home, let’s assume we want to light up a 10’ x 18’ screen area with the equivalent of 3,000 nits, or 877 foot-lamberts.

    10 x 18 = 180 square feet; 180 x 877 = 157,860 lumens

    Yikes! That’s a LOT of lumens. Even five stacked 30,000-lumen projectors would come up short.
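
    For anyone who wants to rerun the math, here it is in a few lines of Python (assuming a unity-gain screen):

    ```python
    # Screen area in square feet times target luminance in foot-lamberts
    # gives the required projector lumens.
    area_sqft = 10 * 18                 # 180 sq ft screen
    target_fl = 877                     # ~3,000 nits
    lumens = area_sqft * target_fl
    print(f"{lumens:,} lumens needed")  # 157,860
    print(f"vs. five 30,000-lumen projectors: {5 * 30_000:,}")  # still short
    ```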

    Perhaps a more dramatic example can be found closer to home. As noted in previous commentaries, the free-fall in LCD panel pricing has resulted in bargain-basement deals on televisions, specifically those with Ultra HD (4K) resolution. But you may not have noticed just how low those prices have fallen recently.

    It is now possible to purchase 75-inch Ultra HDTVs for less than $1,500, with some Chinese brands falling perilously close to $1,000. This, in turn, has led panel manufacturers to “go big” and bring out even larger panels in the 80” range. Consequently, anyone can buy 82-inch and 85-inch Ultra HD televisions for less than $4,000 – and these sets also support high dynamic range and its associated, wider color space.

    It wasn’t that many years ago that a Full HD home theater projector with about 2,000 lumens light output was priced at around $7,000. Add in a screen, brackets, and associated wiring, and you’d be well on your way to $10,000! Given that many home theater installations used screens in the 80-inch to 90-inch range, it’s almost a no-brainer to opt for the self-contained LCD television and do away with the screen, bracket, and a lot of extra wiring.

    As you can see, fine-pitch LEDs and super-sized LCD televisions and monitors are nibbling away at projector market share from both ends. And this trend is only going to continue as panel prices and LED pitches continue to drop. So, what market does that leave for projectors?

    The answer is any image that requires three-dimensional mapping, like curved walls, spheres, or unusual shapes (trapezoidal, multiple planes). LCD panels can be formed in many ways, but it’s not easy to make them into curved shapes. Their more expensive cousins (OLEDs) can be printed onto flexible substrates and warped into all kinds of unusual shapes – even cylinders – but have nowhere near the horsepower of inorganic LED walls. Projectors make more sense here, logistically and financially.

    Where projectors fell a bit behind but are catching up is in resolution. The move to 4K started first and foremost with projectors almost 15 years ago, but attention shifted to large LCD displays around 2012. Since then, LCDs and now OLEDs have dominated the discussion, and compared to an 85-inch Ultra HDTV with HDR, a ‘true’ 4K home theater projector is quite an expensive beast. Projectors that use lower-resolution chips and image shifting have now come to the AV marketplace to try and keep pace with the move to 4K.

    From our standpoint, all of these trends point toward two things: faster clock rates and a ton of pixel data moving from point A to point B. It doesn’t matter whether you opt for an LED videowall, a super-large LCD display, or a 4K projector: the increased refresh rates, expanded color bit depth for HDR, and new tricks like high frame rates will put greater demands on your signal switching and distribution systems.

    We’ve got your back. Our engineers already have their calculators out…


  • How Many K's Do You Need?

    In baseball, the letter ‘K’ is shorthand for strikeout – getting a batter to swing at or take a third strike. It’s not unusual to see fans of a particular pitcher holding up signs with large letter ‘K’s on them to signify how many strikeouts that pitcher compiles during a game. By the way, the record for a nine-inning game is 20 strikeouts, an almost-impossible feat accomplished by just two major league pitchers. Who were they? (Answer at the end of this article)

    In the world of electronics, “K” stands for the more conventional value of one thousand, being derived from the “K” in “Kilo,” which according to the dictionary is “…a Greek combining form meaning “thousand,” introduced from French in the nomenclature of the metric system” and “…French, representing Greek chī́lioi or a thousand.”

    The display industry has become fixated on “Ks” lately. Until the late 1990s, we didn’t have any displays capable of “kilo” pixels of resolution: just 20 years ago, the first plasma display monitors came to market with 1,280 horizontal imaging pixels, making them the first “kilo” displays (at least, in one axis). After the turn of the 21st century, we started to see display panels with almost 2,000 horizontal pixels (1920, to be exact) and, for the first time, more than 1,000 vertical pixels (1080 and 1200, respectively).

    Wow, that was a lot of pixels – 2,073,600, to be precise. And most of us figured that would be good for some time to come – who would need more resolution than that?

    Turns out, everyone. Aside from some unusual high-resolution displays from Apple that had 2560 horizontal pixels, we were stuck at 1920x1080 and 1920x1200 for a few years. That is, until 2012, when a new crop of so-called “4K” displays made an appearance at the IFA consumer electronics show. (To be fair, Sony had been selling high-brightness SXRD projectors for cinema applications since the mid-2000s, and these models had 4096 horizontal pixels.)

    Six years later, televisions and monitors with 4K resolution (mostly 3840 horizontal and 2160 vertical pixels) have become commodities. What happened? For starters, Chinese display manufacturers made big bets on 4K LCD panels for TVs, figuring that Full HD (1920x1080) would provide diminishing returns over time. They constructed new fabrication lines to crank out tens and hundreds of thousands of panels each month.

    Korean manufacturers didn’t sit on their hands, ramping up production of 4K LCD and organic light-emitting diode (OLED) panels for televisions and commercial applications. “4K” became a buzzword for the latest and greatest in flatscreen displays. Screen sizes increased as prices continued to drop, from $238 per diagonal inch for those original, limited-function 84-inch 4K LCD monitors to an amazing $9 per diagonal inch for a 55-inch 4K television today.

    While that’s a bargain price for consumers, there’s little or no profit for panel and display manufacturers when a 60-inch 4K TV can be had for less than $1,000. So, the Chinese took the lead again, deciding maybe it was time to jump to the “next” K – 8K, or more specifically, 7680 horizontal and 4320 vertical pixels. (For those keeping score at home, that represents about 33 million total pixels, or 16 times the resolution of an old-fashioned Full HDTV.)

    What – has the world gone crazy? Aside from some Ultra HD Blu-ray discs and Netflix/Amazon streaming, there’s very little 4K content to watch today. And you want me to buy an 8K TV next time around?

    Remember – it’s not about content, it’s about profitability. And it’s also about televisions and monitors getting larger and larger. 8K resolution on a 42-inch display that sits ten feet from the nearest viewer makes no sense at all. But 8K resolution on an 80-inch display that sits just a few feet away does make sense. Think of how coarse outdoor LED signs appeared in the early 2000s. Now, wander through InfoComm and notice the 20-foot and 30-foot fine-pitch LED displays that are popping up everywhere: with a fine dot pitch (say, 1.2mm and down), they’re approaching 8K resolution.

    Market research done by a few companies predicts that over 5 million 8K TVs will be produced and sold worldwide by 2020 – less than a year and a half from now. Granted, many of those sales will take place in China, but you will see 8K televisions on store shelves by Christmas of this year and certainly no later than the 2019 Super Bowl.

    Think we’re crazy? At the recent IFA Show, both LG and Samsung announced they were bringing 80-inch-class 8K televisions to retail this fall. LG’s entry, which has no model number or pricing information yet, is an 88-inch OLED TV. (For those keeping score at home, that’s a little more than seven diagonal feet.) Samsung’s answer is the Q900FN QLED 8K TV, an 85-inch LCD display that uses quantum dot backlighting to produce high dynamic range images and is supposed to arrive on these shores in October. There’s no way to predict retail prices for either product, but it’s a safe bet they’ll be more than $9 per diagonal inch.

    If you still can’t get your head around the fact that 8K TV is right around the corner, this will blow you away: Innolux, a Taiwanese display manufacturer, showed a 16K 100-inch monitor at a trade show in China in late August. Yep, you read that right – 16K, or more specifically, 15,360 horizontal x 8,640 vertical pixels. If you can’t see the pixel structure on an 8K display unless you are just 1.5 feet away, you’ll never spot it on this display without a jeweler’s loupe.

    Crazy, right? So, what does this mean for signal management and interfacing products? With Full HD, we have to move over 2 million pixels every frame. For 4K (Ultra HD), the payload jumps to 9.9 million pixels per frame, and for 8K, we’re looking at 39.6 million pixels per frame (the latter two including the blanking interval). Our 594 MHz pixel clock for 4K now accelerates to about 2.4 GHz, and a data rate of about 22 gigabits per second (Gb/s) now rockets to about 90 Gb/s for a 12-bit RGB 8K signal. Got bandwidth? We sure hope so….
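
    If you would like to check those figures yourself, here is the arithmetic in a few lines of Python, using the total rasters (active pixels plus blanking) and the 12-bit RGB assumption from above:

    ```python
    # Pixel payloads, pixel clocks, and data rates at 60 Hz, 12-bit RGB
    # (36 bits/pixel), using total rasters including blanking.
    rasters = {"Full HD": (2200, 1125), "4K": (4400, 2250), "8K": (8800, 4500)}
    for name, (h, v) in rasters.items():
        clock_mhz = h * v * 60 / 1e6
        rate_gbps = clock_mhz * 36 / 1e3
        print(f"{name}: {h * v / 1e6:.1f} Mpixels/frame, "
              f"{clock_mhz:.0f} MHz clock, {rate_gbps:.1f} Gb/s")
    ```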

  • On Voice Recognition Hardware for Control (and the Underlying Motives)

    It’s generally accepted that most of the product innovation in the AV industry originated in the world of consumer electronics. Blu-ray discs, tablets, big and inexpensive Ultra HD displays, faster WiFi, and streaming video all got off the starting line ten years ago. Even more recent trends like Internet of Things control systems and the migration from projected images to fine-pitch LED screens are largely driven by consumer behavior.

    Look at all of the wireless collaboration systems that must continually update their operating systems to handle content from Apple hardware as each version of iOS comes to market. (It’s a wonder Apple hasn’t come out with its own wireless collaboration platform.) And incremental improvements to WiFi are also being driven by a growing demand to stream video to the home. Speaking of video, more institutions of higher learning are posting instructional videos to YouTube channels, and of course those channels must be accessible to students.

    The migration of AV signals to IT infrastructures has followed a similar trend in the CE world. Cable TV companies are already implementing delivery of video, audio, and data over fast wireless networks in the home, eliminating the need for a traditional coaxial cable connection. That’s largely because many new homes are not being built with category wire in their walls. Instead, builders assume homeowners will just rely on fast wireless to make all their connections.

    It’s not a stretch to imagine a meeting room in the not-too-distant future that relies on wireless connectivity for every device – even videoconferencing. We’re already at that point with home offices, where teleconferences use GoToMeeting, WebEx, Skype, Zoom, and other software-based codecs through 802.11ac routers. It works! So why not add in wireless voice control systems?

    Well, things don’t always work out as planned. A recent survey conducted by the Web site The Information revealed that, while about 50 million Amazon Alexa voice recognition systems have been purchased, only 100,000 or so of their users have actually ordered anything from Amazon using Alexa. It appears that Alexa users are primarily ordering up music from streaming sites, not groceries, toilet paper, or pet food. (Or headphones, bicycles, smart televisions, clothing, batteries, etc.)

    It’s clear that the Alexa platform is quite popular, but still a novelty for many. Anecdotal evidence reveals that there is still a learning curve to be tackled for people to fully embrace voice control systems, whether they come from Amazon, Google (Assistant), Samsung (Bixby), or other companies. But with an increasing number of gadgets coming to market equipped with network interfaces, there will be solid growth in IoT control systems applications. Right now, things like doorbell cameras linked to televisions and lights that dim and change color seem to be the popular applications.

    One thing that may give pause is the perception that some large corporation is collecting data on you and your viewing/buying/consumption habits as you issue voice commands. Well, they probably are. But that concern could present a market opportunity for a somewhat dumber voice control system that can handle all of the control “stuff” and leave the shopping to others. Granted, such a device would come at a cost premium: Amazon eats a lot of the cost of Alexa boxes because they want you to buy things with it and figure they’ll make their profit at the back end, especially from Prime members.

    What’s intriguing is the fact that most appliances for the home that were shown at CES back in January incorporate the Google Assistant platform. (“OK, Google!”) Google is not in the retail business, but they are very much in the “big data” business. Their voice control product is obviously aimed at supporting IoT control in the home, but it can and will also gather data every time you issue a command, even if it’s just to stream music from the Google Play store.

    Samsung’s Bixby VC platform exists for yet a different reason, and that is to convince you to buy as many Samsung products for your home as possible. Televisions, washers and dryers, refrigerators, tablets, laptops, smartphones – any and all of these can easily integrate into a Bixby-controlled universe of appliances. Samsung has gone so far as to state that EVERY product they make will be “connected” by 2020.

    So, what does all of this mean to our market? First off, the more end-users become comfortable with voice recognition and control, the more they’ll request it be part of an AV installation. If you can walk into your house and tell Bixby to turn on the lights and television (and probably the oven to warm up last night’s pizza), you will logically assume that you can walk into any room and turn things on with your dulcet tones. That would include classrooms, meeting rooms, and even huddle spaces.

    Second, the smart companies will be busy developing drivers for these VC systems so they can indeed be used to control the AV gear in a given room. (Even if it’s just adjusting lights and drapes at first.) Once a potential customer learns that a high-profile installation has adopted VC, they’ll want it for their facility. There’s nothing like keeping up with the Joneses to motivate customers, especially in high-profile installations.

    Third, the adoption of VC systems will drive IT administrators crazy. The security issues alone could represent a Pandora’s Box (or Pandora’s voice control) to admins: How can you be sure the voice recognition system isn’t responding to a recording? Do you require two-factor identification, such as facial recognition, a thumbprint, or even a spoken password before executing a command or series of commands?

    Fourth, someone will develop a generic voice control system not tied to selling groceries, collecting data, or checking to see if the spin cycle is over. This VC system will be targeted specifically at the commercial AV market (and possibly residential customers) and come with an appropriate interface to translate commands into addressable IP packets to operate just about anything with an Internet hook-up. There may even be multiple, incompatible VC products offered for sale.

    But make no mistake, voice control is coming. You have our word on that. (And “no, Alexa, I did not just order five cases of Double Stuff Oreos!”)

  • Speed is Everything

    If you’ve been paying even the slightest attention to trends in the AV industry, you know that fast wireless connectivity has become a very important part of any installation. In particular, WiFi is integral to all of the wireless collaboration and presentation-sharing devices that are overrunning classrooms and meeting rooms.


    What you may not be aware of is how hard WiFi protocol developers are working to try and stay ahead of the tidal wave of consumer and commercial products that are completely and utterly reliant on wireless connections. Those readers who’ve been around long enough to remember the crude attempts in the late 1990s to stream static images and presentations to projectors should be suitably amazed that anyone can now watch 1080p/60 video on a mobile device – without dropped packets and buffering.


    But is that benchmark good enough? Nope, particularly with 4K video now getting a foothold. The current “fast” protocol is IEEE 802.11ac, otherwise known as channel-bonding WiFi. This protocol combines two or more 20 MHz wireless channels to boost bandwidth and connection speeds, enabling the “holy grail” of reliable 1080p/60 streaming.


    Well, that used to be the holy grail. Now, customers want to stream multiple 1080p/60 videos without buffering while simultaneously passing the usual TCP/IP traffic. So, the hard-working wireless techies have come up with an even faster standard – 802.11ax.


    What’s in an “x”? Supposedly, improved performance, extended coverage, and longer battery life. 802.11ax can deliver a single video stream at 3.5 Gb/s, and with new multiplexing technology, can deliver four simultaneous streams to a single endpoint with a theoretical bandwidth of 14 Gb/s.


    802.11ax does this by using a higher level of modulation – quadrature amplitude modulation (QAM), to be precise. Your cable TV company sends you digital TV programs using 256-QAM (256 symbol levels). 802.11ax goes even further by employing 1024-QAM (more bits of data per symbol) and combines it with more antennas (multiple-in, multiple-out, or MIMO).


    A newer version of MIMO, Multiple User MIMO (MU-MIMO), can provide up to eight simultaneous streams of video from one wireless access point. Unlike 802.11ac, version “x” operates in both the 2.4 GHz and 5 GHz wireless bands to combine and format channels. And a technology called Orthogonal Frequency Division Multiple Access (OFDMA) allows each MU-MIMO stream to be split into four additional streams, boosting the effective bandwidth per user by a factor of four.
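
    As a rough, back-of-envelope illustration – relative capacity only, not real-world throughput – the modulation and stream multipliers stack up like this:

    ```python
    # 1024-QAM carries more bits per symbol than 256-QAM, while MU-MIMO
    # and OFDMA multiply the simultaneous streams a channel can carry.
    import math

    bits_256 = math.log2(256)     # 8 bits per symbol
    bits_1024 = math.log2(1024)   # 10 bits per symbol
    print(f"QAM gain: {bits_1024 / bits_256:.2f}x raw bits per symbol")  # 1.25x

    streams = 8       # MU-MIMO spatial streams per access point
    ofdma_split = 4   # sub-streams per MU-MIMO stream (per the text)
    print(f"Concurrent streams per access point: {streams * ofdma_split}")
    ```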


    The difference between earlier versions of the 802.11 protocol and version “x” is like the difference between a Toyota Corolla and a Ferrari. Whereas older versions of wireless lacked the ability to dynamically shape streams and antenna beaming paths, “x” basically modifies every single parameter of a wireless connection to optimize streams of packets. (Did we mention it’s also smart enough to detect ongoing activity and wait until the channel is clear to begin transmissions?)


    If you’ve figured out that you’ll need all-new WiFi routers and access points to use any of these features, you win the giant stuffed teddy bear! Your in-room wireless networks will have to be upgraded to 802.11ax at some point, but the good news is that new WiFi gear is pretty inexpensive. What’s more, 802.11ax is backward-compatible with systems like 802.11ac and 802.11n.


    Hey – wait a minute. Haven’t we written about 802.11ad before and called it the best thing since waffle irons? True, we did. And for sheer speed, you can’t beat “d,” as it supports streaming rates of over 3 Gb/s per channel, with six 2 GHz channels to work with. No congestion here – it’s the equivalent of having an 8-lane highway accessible during rush hour.


    But there’s a catch. (There’s ALWAYS a catch.) 802.11ad operates in the 60 GHz radio band; way, WAY above the frequencies used for 802.11ac and 802.11ax. Radio waves at this frequency are quite small – a full wave measures just 5 millimeters, or about a fifth of an inch – meaning that antennas for this band can be formed right on a semiconductor layer.


    And the catch is that 60 GHz radio waves have a very limited range at maximum power levels (FCC rules allow about 1 watt), which means if you move more than about 30 feet or 10 meters from a 60 GHz WiFi access point, you’ll probably lose the signal. 60 GHz signals also won’t pass through solid objects of any kind, which does provide a backhanded form of security when you think about it.


    In contrast, 2.4 GHz wireless signals can travel through many solid surfaces, provided they aren’t conductive. And 5 GHz radio waves can penetrate wood floors, glass, sheet rock, curtains, and plastics for quite a distance. With boosters, you can increase the range of 2.4 and 5 GHz signals by several tens of feet. (A similar trick will work at 60 GHz with steerable antennas.)


    Practically speaking, both wireless modes can co-exist nicely. “d” is perfectly suited for short-range, high-bandwidth links in rooms, and “x” can pick up the slack over longer distances in larger spaces. Indeed, we’ve seen tri-band modems come to market in recent years that incorporate the older “c” version with “d,” although they have been slow to gain adoption.


    As our industry finds more ways to get rid of wires, especially when connecting devices configured for Internet of Things (IoT) control, demands for more efficient and faster WiFi will only increase. Versions “x” and “d” should satisfy those demands for a few years…we hope…


  • INFOCOMM 2018: What Did It All Mean?

    If you managed to make it out to this year’s running of InfoComm, you might have summarized your trip to colleagues with these talking points:


    (a) LED displays, and

    (b) AV-over-IT.


    Indeed, it was impossible to escape these two trends. LED walls and cubes were everywhere in the Las Vegas Convention Center, in many cases promoted by a phalanx of Chinese brands you’ve likely never heard of. But make no mistake about it – LEDs are the future of displays, whether they are used for massive outdoor signage or compact indoor arrays.


    With the development of micro LED technology, we’re going to see an expansion of LEDs into televisions, monitors, and even that smart watch on your wrist. (Yes, Apple is working on micro LEDs for personal electronics.)


    Projector manufacturers are understandably nervous about the inroads LEDs are making into large venues. Indeed, this author recently saw Paul Simon’s “farewell tour” performance at the Wells Fargo Center in Philadelphia, and the backdrop was an enormous widescreen LED wall that provided crystal-clear image magnification (very handy when concertgoers around you are up and dancing, blocking your view of the stage).


    As for the other talking point – well, it was impossible to avoid in conversations at InfoComm. Between manufacturers hawking their “ideal” solutions for compressing and streaming audio and video and all of the seminars in classrooms and booths, you’d think that AV-over-IT is a done deal.


    The truth is a little different. Not all installations are looking to route signals through a 10 Gb/s Cisco switch. In fact, a brand-spanking-new studio built for ESPN in lower Manhattan, overlooking the East River and the Brooklyn Bridge, relies on almost 500 circuits of 3G SDI video through an enormous router. Any network-centric signal distribution within this space is mostly for IT traffic.

    That’s not to say that installers are pooh-poohing AV-over-IT and the new SMPTE 2110 standards for network distribution of deterministic video. It’s still early in the game, and sometimes tried-and-tested signal distribution methods like SDI are perfectly acceptable, especially in the case of this particular facility with its 1080p/60 backbone.


    Even so, the writing on the wall couldn’t be more distinct with respect to LEDs and network distribution of AV. But there were other concerns at the show that didn’t receive nearly as much media attention.


    At the IMCCA Emerging Trends session on Tuesday, several presentations focused on interfacing humans and technology. With “OK Google” and Alexa all the rage, discussions focused on how fast these consumer interfaces would migrate to AV control systems. An important point was made about the need for two-factor authentication – simple voice control might not be adequately secure for, say, a boardroom in a large financial institution.


    What would the second factor be? Facial recognition? (This was a popular suggestion.) Fingerprints? Retinal scans? A numeric code that could be spoken or entered on a keypad? The name of your favorite pet? Given that hackers in England recently gained access to a casino’s customer database via an Internet-connected thermometer in a fish tank, two-factor authentication for AV control systems doesn’t seem like a bad idea.


    Another topic of discussion was 8K video. With a majority of display manufacturers showing 4K LCD (and in some cases OLED) monitors in Vegas, the logical question was: could resolutions be pushed higher? Of course, the answer is a resounding “yes!”


    Display analysts predict there will be over 5 million 8K televisions shipped by 2022, and we’re bound to see commercial monitors adapted from those products. But 8K doesn’t have to be achieved in a single, stand-alone display: with the advent of smaller 4K monitors (some as small as 43 inches), it is a simple matter to tile a 2x2 array to achieve 7680x4320 pixels. And there doesn’t appear to be a shortage of customers for such a display, especially in the command and control and process control verticals.


    The other conversations of interest revolved around the need for faster wireless. We now have 802.11ac channel bonding, with 802.11ax on the horizon. For in-room super-speed WiFi, 802.11ad provides six channels at 60 GHz, each 2 GHz wide, or 100x the bandwidth of individual channels at 2.4 and 5 GHz.


    But wise voices counsel to pay attention to 5G mobile networks, which promise download speeds of 1 Gb/s. While not appropriate for in−room AV connectivity, 5G delivery of streaming video assets to classrooms and meetings is inevitable. Some purveyors of wireless connectivity services like AT&T and Verizon insist that 5G could eventually make WiFi obsolete. (That’s a bit of a stretch, but this author understands the motivation for making such a claim.)


    The point of this missive? Simply that our industry is headed for some mind-boggling changes in the next decade. Networked AV, LEDs, 8K video and displays, multi-factor authentication for control systems, and super-fast wireless connections are all in the wings.


    And if you were observant at InfoComm, you know it’s coming…and quickly.

  • “Who Are Those Guys?”

    Readers may remember Butch Cassidy and The Sundance Kid, a classic western from 1969 that featured cinema icons Paul Newman and Robert Redford in the title roles as a pair of happy-go-lucky (and often violent) train and bank robbers. They were the leaders of the infamous “Wild Bunch,” also known as the Hole-In-The-Wall Gang, so named for the remote hideaway in Wyoming that they used to evade authorities after pulling off a heist.

    The story goes that Cassidy and The Kid eventually decamped to South America to get away from relentless manhunts, but ultimately met their maker during a bloody shoot-out in Bolivia. In the movie, Butch and The Kid are constantly riding across the countryside from Argentina to Chile and Bolivia, pursued by a small but determined band of soldiers. “Who ARE those guys?” was The Kid’s constant refrain, as he looked over his shoulder with fear.

    In the AV industry, there are plenty of “those guys” manufacturing hardware and writing software code. For the display industry, “those guys” are Chinese LCD panel fabricators, who are slowly subsuming the flat panel display business once dominated by Japan and later by Korea. In consumer electronics, “those guys” are companies like Hisense and TCL in televisions, Huawei and ZTE in smartphones, and Lenovo in laptops.

    You can find corollaries in control systems, video encoders, network switches, and cable. But there’s one sector of the industry where “those guys” haven’t been able to catch up with the leaders – and that’s the display interface.

    For the past 16 years, the High Definition Multimedia Interface (HDMI) has ruled the roost for display connections, pushing aside VGA at first and then DVI on everything from televisions and Blu-ray players to laptop computers and camcorders. It’s evolved numerous times from a basic plug-and-play interface for televisions and AV receivers to a high-speed transport system for 4K and ultimately 8K video. Ironically, HDMI is often the input and output connection for video encoders and decoders that, in theory, could displace it from the market altogether.

    So, who are “those guys” in this sector? Why, the folks at the Video Electronics Standards Association (VESA), who developed and periodically update DisplayPort. First launched in 2006, DisplayPort was intended to replace the old analog VGA connector with a newer, 100%-digital version that could handle many times the bandwidth of an XGA (1024x768) or UXGA (1600x1200) video signal.

    Other forward-looking features included direct display drivers (no need for a video card), support for optical fiber, multiplexing with USB and other data bus formats, and even a wireless specification (it never really caught on). Like HDMI, DP had its “mini” and “micro” versions (Mini DP and Mobility DP).

    In recent years, VESA stayed current by upping the speed limit from 21.6 to 32.4 gigabits per second (Gb/s), supporting the DisplayPort Alternate Mode over USB-C, adding some cool bells and whistles like simultaneous multi-display output, adopting the first compression system for display signals (Display Stream Compression), recognizing high dynamic range metadata formats, and even accepting color formats other than RGB.

    Best of all, there continue to be no royalties associated with DP use, unlike HDMI. The specification is available to anyone who’s interested, unlike HDMI. And DP was ready to support deep color and high frame rate 4K video as recently as 2013, unlike HDMI.

    However…unlike HDMI, DisplayPort has had limited success penetrating the consumer electronics display interfacing market. While some laptop manufacturers have adopted the interface, along with commercial AV monitors and video cards for high-performance PCs, HDMI is still the undisputed king of the hill when it comes to plugging any sort of media device into a display.

    Even long-time supporters of DP have switched allegiances. Apple, known for using Mini DisplayPort on its MacBook laptops, is now adding HDMI connections. Lenovo, another DP stalwart, is doing the same thing on its newer ThinkPad laptops. Clearly, the HDMI Forum isn’t worried at all about “those guys.”

    That’s not to say “those guys” are giving up the chase. Earlier this year at CES, VESA had several stands in their booth demonstrating a new set of standards for high dynamic range and wide color gamuts on computer monitors – specifically, those using LCD technology. DisplayHDR calls out specific numbers that must be achieved to qualify for DisplayHDR 400, DisplayHDR 600, and DisplayHDR 1000 certification.

    Those numbers fall into the categories of 10% full white, full screen white “flash,” and full screen white “sustained” operation, minimum black level, minimum color gamut, minimum color bit depth, and black-to-white transition time. With interest in HDR video growing, the DisplayHDR specifications are an attempt to get around vague descriptions of things like color range (“70% of NTSC!”) and contrast ratios that don’t specify how the measurements were taken.

    And this is actually a good thing. In the CE world, the UHD Alliance has a vague set of minimum requirements for a TV to qualify as high dynamic range. Compared to the more stringent DisplayHDR requirements, the UHD Alliance specs are equivalent to asking if you can walk and chew gum at the same time. HDMI version 2.0 (currently the fastest available) can safely transport an Ultra HD signal with 8-bit RGB color at 60 Hz, but that’s setting the bar kinda low in our opinion.

    In contrast, DisplayPort 1.3 and 1.4 (the latter adds HDR metadata and support for 4:2:0 and 4:2:2 color) aren’t even breathing hard with a 12-bit RGB Ultra HD video stream refreshed at 60 Hz. And that means a computer display certified to meet one of the DisplayHDR standards can actually accept a robust HDR signal. (Note that VESA isn’t choosing sides here – DisplayHDR-certified screens can also use HDMI connections, but signal options are limited by HDMI 2.0’s top speed of 18 Gb/s.)
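
    If you want to sanity-check those claims, here’s a quick back-of-the-envelope sketch in Python. It counts active pixels only and ignores blanking overhead (so real-world requirements run somewhat higher), and it uses the usual published payload figures of 14.4 Gb/s for HDMI 2.0 and 25.92 Gb/s for DP 1.3/1.4, both after 8b/10b encoding.

    # Which 4K/60 RGB bit depths fit each display link?
    # Active-pixel payload only; blanking is ignored, so treat this as a rough sketch.
    LINK_PAYLOAD_GBPS = {
        "HDMI 2.0 (18 Gb/s TMDS)": 14.4,
        "DP 1.3/1.4 (HBR3 x4)": 25.92,
    }

    def payload_gbps(width, height, fps, bits_per_channel):
        # Three color channels for an RGB (or 4:4:4) stream.
        return width * height * fps * bits_per_channel * 3 / 1e9

    for bpc in (8, 10, 12):
        need = payload_gbps(3840, 2160, 60, bpc)
        for link, cap in LINK_PAYLOAD_GBPS.items():
            verdict = "fits" if need <= cap else "does NOT fit"
            print(f"4K/60 {bpc}-bit RGB (~{need:.1f} Gb/s) {verdict} {link}")

    Run it and you’ll see that 12-bit RGB Ultra HD at 60 Hz needs about 17.9 Gb/s: comfortably inside DP 1.3/1.4, well past what HDMI 2.0 can carry.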

    With HDMI 2.1 looming on the horizon – a new version of the interface that liberally borrows from DisplayPort architecture – it doesn’t appear that “those guys” are going to catch up any time soon. But you never know: Even Butch and Sundance had their day of reckoning…

    You can learn more about DisplayHDR here. Check it out!

  • NAB 2018: It's All Up in the Air

    We just returned from our annual visit to the NAB Show in Las Vegas with a lot to think about – not the least of which was, where do we go from here?

    By “here,” we mean conventional AV signal management and distribution, using industry standard formats like HD SDI, DVI, and HDMI. “There” isn’t as clearly defined, but we’re pretty sure it will ultimately involve TCP and IP, fast switches, and optical and twisted-pair cable.

    Even so, there was no shortage of vendors trying to convince booth visitors that AV-over-IT is the way to go, and right now! Some NAB exhibitors have staked their entire business model on it, with flashy exhibits featuring powerful codecs, cloud media storage and retrieval, high dynamic range (HDR) imaging, and production workflows (editing, color correction, and visual effects) all interconnected via an IT infrastructure.

    And, of course, there is now a SMPTE standard for transporting professional media over managed IP networks (note the word “managed”), and that’s ST 2110. The pertinent documents that define the standards are (to date) SMPTE ST 2110-10/-20/-30 for addressing system concerns and uncompressed video and audio streams, and SMPTE ST 2110-21 for specifying traffic shaping and delivery timing of uncompressed video.

    Others at NAB weren’t so sure about this rush to IT and extolled the virtues of next-generation SDI (6G, 12G, and even 24G). Their argument is that deterministic video doesn’t always travel well with the non-real-time traffic you find on networks. And the “pro” SDI crowd may have a point, based on all of the 12G connectivity demos we saw. 3G video, to be more specific, runs at about 2.97 Gb/s, so a 12G connection would be good for 11.88 Gb/s – fast enough to transport an uncompressed 4K/60 video signal with 8-bit 4:2:2 color or 10-bit 4:2:0 color.
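
    Here’s that headroom as a rough Python check. We count image payload only; an SDI line rate also carries blanking and ancillary data, so actual headroom is a bit tighter than these numbers suggest.

    PIXELS_4K60 = 3840 * 2160 * 60   # ~497.7 million pixels per second

    def payload_gbps(bits_per_pixel):
        # Raw image payload in Gb/s, ignoring blanking and ancillary data.
        return PIXELS_4K60 * bits_per_pixel / 1e9

    print(f"4K/60  8-bit 4:2:2: {payload_gbps(16):.2f} Gb/s")   # ~7.96
    print(f"4K/60 10-bit 4:2:0: {payload_gbps(15):.2f} Gb/s")   # ~7.47
    print("12G-SDI line rate  : 11.88 Gb/s")

    The bits-per-pixel figures follow from the chroma subsampling: 4:2:2 halves the horizontal chroma resolution (8 + 4 + 4 = 16 bits per pixel), while 4:2:0 halves it vertically as well (10 + 2.5 + 2.5 = 15 bits per pixel).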

    The challenge to date has been to manufacture suitable cables for the transport of 12G SDI signals that will meet signal-to-noise (S/N) specifications. Moving to optical fiber is one way around the problem, but it appears most of the demos we saw relied on coaxial connections. To drive a 4K display, some manufacturers rely on quad 3G inputs. Ditto on getting 4K footage out of a camera, although to be fair, there were some single-wire 4K camera connections exhibited.

    In an earlier blog post, we talked about the quantum leap to 8K video and displays. Well, we were quite surprised – perhaps pleasantly – to see Sharp exhibiting at NAB, showing an entire acquisition, editing, production, storage, and display system for 8K video. (Yes, that Sharp, the same guys that make those huge LCD displays. And copiers. And kitchen appliances. Now owned by Hon Hai Precision Industry.)

    Sharp’s 8K broadcast camera, the 8C-B60A, uses a single Super 35mm sensor with an effective resolution of 7680x4320 pixels arrayed in a Bayer format. That’s 16 times the resolution of a Full HD camera, which means data rates that are 16x that of 3G SDI. In case you are math challenged, we’re talking in the range of 48 Gb/s of data for a 4320p/60 video signal with 8-bit 4:2:2 color, which requires four 12G connections.
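
    That “four connections” figure falls straight out of the arithmetic. A two-line sketch, scaling the 3G-SDI line rate (which already includes blanking and overhead):

    import math

    GBPS_3G_SDI = 2.97               # 3G-SDI line rate (1080p/60 10-bit 4:2:2)
    GBPS_12G_SDI = 4 * GBPS_3G_SDI   # 11.88 Gb/s

    need = 16 * GBPS_3G_SDI          # 16x the pixels of Full HD -> ~47.5 Gb/s
    print(math.ceil(need / GBPS_12G_SDI))   # -> 4 x 12G-SDI links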

    Like they say at the dragstrip, “Now THAT’S fast!” It’s so fast, in fact, that you can’t even use an expensive 40 Gb/s network switch to port this signal over an IT network. Indeed, light compression must come into play to get that number down to a manageable level. TICO (the “Tiny Codec”) is a good candidate – 4:1 TICO compression would pack our 8K signal back down to 12 Gb/s. JPEG2000 at 4:1 would do the same thing, and both are low-latency codecs (and are similar to each other in how they work). For that matter, 4:1 compression would drop a 4K/60 signal down to 3G levels, making it a heckuva lot easier to switch. That ratio arithmetic is sketched below; the same one-liner covers the 2:1 and 5:1 cases discussed in the next two paragraphs.
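
    A minimal sketch of that ratio arithmetic, assuming an N:1 codec simply divides the raw data rate by N:

    def compressed(raw_gbps, ratio):
        # Data rate after N:1 mezzanine compression.
        return raw_gbps / ratio

    print(compressed(48.0, 4))    # our 8K/60 example -> 12.0 Gb/s (one 12G link)
    print(compressed(11.88, 4))   # 4K/60 on 12G-SDI  -> 2.97 Gb/s (3G levels)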

    The Blue River NT technology sweeping through the AV industry uses a codec that’s adapted from VESA’s Display Stream Compression (DSC) and is even gentler, packing things down a maximum of 2:1 to get a 4K/60 10−bit 4:4:4 video stream through a 10 Gb/s switch. We haven’t seen it used in conjunction with an 8K video source yet, but be advised that DSC can actually work up to 3:1 compression levels and still remain visually lossless with very low latency.

    In the NHK booth, you could watch a demonstration of 8K/60 video traveling through a 10 Gb/s switch using so-called mezzanine compression based on the TICO system. In this case, NHK was using 5:1 TICO compression to squeeze a 40 Gb/s 8K/60 video stream down to 8 Gb/s. Even our 48 Gb/s example from earlier would make it under the bar at 9.6 Gb/s.

    So, what does this all mean? First off, SDI isn’t quite dead yet, to paraphrase Monty Python. It may not be suitable for long distance transmission of 4K video, but it’s still workable for short runs from cameras to switchers and other studio gear, and longer runs using optical connections. (The S/N hurdle still has to be cleared for long coaxial cable runs.)

    Second, it’s becoming clear that some degree of light compression is going to be a way of life with 4K and 8K production, especially when you factor in the additional bits for HDR and high-frame-rate video (of which there was plenty on display in Vegas). You think 48 Gb/s is fast? Try moving 8K/120 video around: Both NHK and NTT were showing exactly that, with corresponding data rates of about 96 Gb/s. Definitely “funny car” territory.

    Third, we’re still a long way from resolving the SDI vs. IP argument. Indeed, some recent studio projects we’re aware of have been built using both SDI and IP architectures to move data and video on separate paths. While offerings like cloud storage will require the network hookup, point-to-point 1080p video can still travel happily over SDI connections. (It should also be pointed out that, for all the ballyhoo about 4K, very little in the way of 4K video production is being undertaken at present.)

    NAB 2018 reflected all of this thinking, and then some. It’s almost as if everyone at the show was waiting for the other guy to take the first step. SDI or IP? Or both? Come on, doggone it, make up your mind…

  • Tired Of 4K TV Yet? Here Comes 8K TV…

    Yes, you read that right: 8K displays are coming. For that matter, 8K test broadcasting has already been underway in Japan since 2016, and several companies are developing 8K video cameras to be shown at next month’s NAB show in Las Vegas.

    “Hold on a minute!” you’re probably thinking. “I don’t even own a 4K TV yet. And now they’re already on the endangered species list?”

    Well, not exactly. But two recent press releases show just how crazy the world of display technology has become.

    The first release came from Insight Media in February and stated that, “The 2020 Tokyo Olympics will be a major driver in the development of 8K infrastructure with Japanese broadcaster NHK leading efforts to produce and broadcast Olympic programming to homes…cameras from Hitachi, Astrodesign, Ikegami, Sharp and Sony address the many challenges in capturing 8K video…the display industry plans for massive expansion of Gen 10.5 capacity, which will enable efficient production of 65" and 75" display panels for both LCD and OLED TV…. sales of 8K Flat Panel TVs are expected to increase from 0.1 million in 2018 to 5.8 million in 2022, with China leading the way representing more than 60% of the total market during this period.”

    You read that right. Almost 6 million 8K LCD and OLED TVs are expected to be sold four years from now, and over 3 million of those sales will be in China.

    But there’s more. Analyst firm IHS Markit issued their own forecasts for 8K TV earlier this month, predicting that, “While ultra-high definition (UHD) panels are estimated to account for more than 98 percent of the 60-inch and larger display market in 2017, most TV panel suppliers are planning to mass produce 8K displays in 2018. The 7680 x 4320-pixel resolution display is expected to make up about 1 percent of the 60-inch and larger display market this year and 9 percent in 2020.”

    According to IHS Markit, companies with skin in the 8K game include Innolux, which will supply 65-inch LCD panels to Sharp for use in consumer televisions and in commercial AV displays. Meanwhile, Sharp – which had previously shown an 85-inch 8K TV prototype – will ramp up production of a new 70-inch 8K LCD display (LV-70X500E) in their Sakai Gen 10 LCD plant. This display was shown in Sharp’s booth at ISE, along with their new 8K video camera.

    Sony and Samsung are also expected to launch 8K LCD TVs this year. Both companies showed prototypes at CES with Samsung’s offering measuring about 85 inches. Sony’s prototype also measured 85 inches but included micro light-emitting diodes (LEDs) in the backlight to achieve what Sony described as “full high dynamic range,” achieving peak (specular) brightness of 10,000 nits. (That’ll give you a pretty good sunburn!)

    Other players in 8K include LG Display, who already announced an 88-inch 8K OLED TV prior to CES, along with panel fabricators BOE, AUO, and China Electronics Corporation (CEC). What’s even more interesting is that some of these 8K LCD and OLED panels will be equipped with indium gallium zinc oxide (IGZO) switching transistors.

    No, IGZO isn’t a cure for aging. But what it does is provide much higher pixel density in a given screen size with lower power consumption. More importantly, it will allow these 8K TVs to refresh their pictures as fast as 120 Hz – double the normal refresh rate we use today. And that will be important as High Frame Rate (HFR) video production ramps up.

    Predictably, prices for TVs and monitors using panels with 4K resolution are collapsing. In the AV channel, 4K (Ultra HD) displays are only beginning to show up in product lines, but manufacturers are well aware of pricing trends with Ultra HD vs. Full HD (1920x1080p). With some consumer models now selling for as little as $8 per diagonal inch, the move from Full HD to 4K / Ultra HD will pick up lots of steam.

    And with 8K displays now becoming a ‘premium’ product, 4K / Ultra HD will be the ‘everyday’ or mainstream display offering in screen sizes as small as 40 inches and as large as – well, you name it. We’ve already seen 84-inch, 88-inch, and 98-inch commercial displays, and prototypes as large as 120 inches – yes, 10’ of diagonal screen, wrap your head around that – have been exhibited at CES and other shows.

    We saw quite a few demonstrations of 4K commercial displays at ISE and expect to see a whole lot more at InfoComm in June, along with the inevitable price wars. And there will be the usual “my encoder handles 4K better than yours with less latency” battles, shoot−outs, and arguments. But that could ultimately turn out to be the appetizer in this full−course meal.

    For companies manufacturing signal distribution and switching equipment, 4K / Ultra HD already presents us with a full plate. 8K would be too much to bite off at present! Consider that an 8K/60 video signal using 12-bit RGB color requires a data rate approaching 100 gigabits per second (Gb/s), as compared to a 12-bit, 60 Hz Full HD signal’s rate of about 6 Gb/s, and you can see we will have some pretty steep hills to climb to manage 8K.

    Distributing 8K over a network will be equally challenging and will require switching speeds somewhere north of 40 Gb/s even for a basic form of 8K video, which (we assume) will also incorporate high dynamic range and wide color gamuts. 40 Gb/s switches do exist but are pricey and would require 8K signals to be compressed by at least 25% to be manageable. And they’d certainly use optical fiber for all their connections.
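
    To put numbers on those hills, here’s the same rough arithmetic again in Python, counting active pixels only; blanking overhead pushes the totals up toward the figures quoted above.

    def gbps(width, height, fps, bits_per_pixel):
        # Active-pixel data rate in Gb/s; blanking overhead not included.
        return width * height * fps * bits_per_pixel / 1e9

    print(f"8K/60 12-bit RGB   : ~{gbps(7680, 4320, 60, 36):.0f} Gb/s")    # ~72 active
    print(f"1080p/60 12-bit RGB: ~{gbps(1920, 1080, 60, 36):.1f} Gb/s")    # ~4.5 active
    # Even a 'basic' 48 Gb/s 8-bit 4:2:2 8K/60 stream needs roughly 17%
    # compression to squeeze under a 40 Gb/s switch; 25% leaves real headroom.
    print(f"48 Gb/s less 25%   : {48.0 * 0.75:.0f} Gb/s")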

    To summarize, 4K / Ultra HD isn’t on the endangered species list just yet. (You can still buy Full HD monitors and TVs, if that’s any comfort.) And the pace of change in display technology is so rapid nowadays that you can’t be blamed if you feel like Rip Van Winkle sometimes!

    But whether it makes sense or not – or whether we’re ready or not – it’s “full speed ahead” for 8K displays as we head into the third decade of the 21st century…

  • CES 2018: Get Ready For The Next Wave Of Displays

    You could be forgiven if you wondered where all of the televisions disappeared to at this year’s CES. Ten years ago, the walls of booths occupied by the likes of Sharp, Sony, Panasonic, Samsung, and LG were stuffed full of LCD and plasma televisions. This was the flagship product for all of these companies and a big part of their sales.

    But this year? A very different look. With the continued emphasis on “connected everything,” TVs moved to the background as “connected solutions” for home and office grabbed center stage. And there’s a good reason why: Display panels are inexpensive to manufacture now and the TVs they wind up in have dropped dramatically in price.

    A quick check at online pre-Super Bowl TV sales showed that you can pick up a first-tier 55-inch 4K (Ultra HD) TV with “smart” functionality for about $500, spending about $100 less for a 2nd-tier brand. Want high dynamic range? Add around $300 to $400 to the price. And we’re talking about Ultra HDTVs here, not Full HD sets that can be had in the same screen size for as little as $399.

    You can attribute this collapse in TV prices to large-scale manufacturing in China, where both raw material and labor costs are much lower than in older industrialized economies. Robotics (another big thing at CES) also play a part: The most up-to-date display panel fabrication lines in Asia may sit in multistory buildings, but they only require about 15 to 20 people to monitor and control everything.

    Lower production costs and increasing use of robotics have made it possible to jump to 8K (7680x4320) display resolution. Indeed, many pundits are predicting that 8K displays will replace 4K in a very short time period. But that’s a fanciful prediction at best, given that there is no commercially produced 8K video and movie content, and the storage required for such content would amount to 16 times that needed for plain old Full HD (1920x1080).

    Still, large flat screen displays continue to push projectors out of the market. More AV installations are now using large LCD screens, some with 4K resolution. At the high end, light−emitting diode (LED) displays are now preferred for large indoor and outdoor electronic signs. They’re intensely bright, pushing out 3,000, 4,000, and 5,000 nits over wide viewing angles and creating images that hold up well even under full daylight.

    But now there’s a wild card, and that’s the micro LED display. Most commercial LED displays have a dot (pixel) pitch of 4-6 mm for outdoor use. In recent years, fine pitch LED displays have dropped down below 2 mm, with some videowalls touting 1.8, 1.6, 1.2, and even 0.9 mm pitches. (For some perspective, a 50-inch plasma monitor from 1999 had about a 1.2 mm dot pitch and 1366x768 resolution.)

    The micro LED takes that a step further with dot pitches much smaller than 1 mm. Take that same 50-inch TV and stuff it full of 4K pixels (3840x2160), and you’ll see that a dot pitch of about 0.3 mm is required for each pixel. (8K resolution would cut that in half again, to 0.15 mm.) It’s easy nowadays to form LCD and OLED pixels that small, but micro LEDs are a bit trickier.
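
    Those pitch figures follow from simple geometry. A quick sketch, assuming a 16:9 screen (width is the diagonal times 16 divided by the square root of 16² + 9²):

    import math

    def pitch_mm(diagonal_inches, horizontal_pixels):
        # Horizontal width of a 16:9 screen, divided across the pixels.
        width_mm = diagonal_inches * 25.4 * 16 / math.hypot(16, 9)
        return width_mm / horizontal_pixels

    print(f"50-inch 4K: {pitch_mm(50, 3840):.2f} mm")   # ~0.29 mm
    print(f"50-inch 8K: {pitch_mm(50, 7680):.2f} mm")   # ~0.14 mm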

    Nevertheless, Samsung showed a 146-inch diagonal micro LED display with 4K resolution. Because the display uses LEDs exclusively, it’s very bright (over 2,000 nits for specular highlights) and has a wide viewing angle. This concept display was also able to show high dynamic range (HDR) video and a much wider color gamut than we usually see. Since LEDs can pulse on and off at very high speeds, this type of display is perfect for the next big revolution in imaging – high frame rate video.

    We’ve seen micro LED technology before. Sony exhibited a hand-wired TV full of micro LEDs about 6-7 years ago at CES, and conservative estimates were that it probably cost in excess of $100,000 to make. Samsung’s model surely came in a lot lower than that, and for one big reason: It’s modular. The final product is actually made up of several smaller LED tiles, which is quite a revolutionary approach to building a TV.

    Here’s what we find interesting: It may actually catch on. Tiling is a familiar concept to those in the AV and staging markets who routinely put together large displays for temporary or permanent installations. The thinking at CES is, “why not do this with televisions?” In essence, you could decide just how big a display you’d want in your home or office and then order up the correct number of tiles. Stack them together, connect all of the driver cables, and away you go.

    Okay, maybe it won’t be that simple. But building TVs out of super-thin tiles could represent a significant manufacturing revolution, just as flat screen displays kicked tube TVs out of the market 15 years ago. What we don’t know is the final pixel resolution of those tiled TVs and how we’ll interface signals to them. Newer and faster versions of HDMI and DisplayPort may be the answer. Or perhaps it will require an entirely different method of signal transport.

    Stay tuned!

  • CES 2018: Tying It All Together

    A few weeks ago, the annual Consumer Electronics Show took place in Las Vegas. For the first time, the majority of exhibits emphasized applications over hardware, or to put it another way, “it’s not what you have, it’s what you do with it.”

    And that shouldn’t be a surprise at all. Prices of commercial and consumer gear have been steadily declining over the last decade to the point where much of that gear is now considered consumable and disposable. Buy it, use it, and replace it over ever-shorter product life cycles.

    Attendees wandering through the LVCC couldn’t help but pick up on the “connected” vibe: Connecting and controlling everything with voice commands is all the buzz nowadays. So are faster WiFi and 5G cellular, along with smart, connected appliances and smart, connected cars. To make things even more interesting, Amazon and Google voice recognition systems were found on everything from televisions to cars.

    Speech recognition and control has come a long way since we first saw it implemented at the turn of this decade by companies like Conexant, and it works. And it’s cheap. And you can use it to control just about everything in your home that’s tied to a network, so it’s not unreasonable to assume voice recognition could also be used to operate everything in an AV installation.

    This isn’t fantasy. Every TV manufacturer had at least one model at CES that supported Amazon Alexa or Google Assistant. (Some models support both systems.) You can link your TV to your refrigerator, washer, dryer, and other appliances in your home and control just about anything or get status updates. Or you can just ask your assistant general questions, and depending on the question, the system can anticipate what you’re about to do and activate or deactivate devices.

    LG has this feature in their 2018 TVs (ThinQ with Cloi), while Samsung claims that every product they make will be interconnected by 2020 and voice controlled using their Bixby system. While the Chinese brands are not quite up to that level, they did show sample rooms with interconnected devices that all respond to voice prompts.

    In addition, Samsung’s purchase of Harman in 2016 gives them entry to the multi-billion-dollar car audio market. And by extension, they can support voice recognition and control in cars, linking them back to homes and offices. On the TV side, both TiVo and Comcast have had voice control and search features for some time, using adaptive intelligence to hunt down and locate programs.

    Examples were shown of voice commands through an LG OLED TV to (a) adjust room lighting, (b) adjust room temperature and humidity, (c) check where the washer and/or dryer cycles stand, (d) check to see what’s in the refrigerator and suggest a recipe for the food that was found, and (e) ultimately order takeout food from a restaurant.

    Another key part of this voice−centered control system is machine learning. As implemented in televisions, these systems can anticipate which programs you’re likely to watch. As part of a wider control system, they can remember what room temperatures you prefer, when you cycle room lights, and what combinations of lighting/temperature/humidity you like when you retire for the night. Needless to say, the system can also activate alarms and outside lighting.

    Samsung also showed an advanced wide-angle front-door camera that lets you see who’s ringing the doorbell. Not exactly a new concept, but it can be linked to your TV or to a display built into your kitchen appliances. (Yes, that’s becoming a thing now, with a large LCD screen that serves as a hub for everything from your daily schedule to recalling recipes from cloud storage.)

    Another example of machine learning was discussed at the Panasonic press conference. Their big thing is “smart cities” (also a themed area in the Westgate Convention Center) wherein everything is connected – your home, your car, the highways, you name it. Panasonic talked about getting into your car and driving to Starbucks (either with you driving or an autonomous system) and the car will automatically call ahead and order your favorite beverage.

    If it’s this easy to implement in the consumer space, why aren’t we doing more of it in the professional AV world? Kramer Control is already using the icon-oriented drag-and-drop method of building a control system, using cloud-based drivers. All of the control systems for home use that were shown at CES work the same way. The big question is, which voice recognition system will be paired up with this next generation of control systems?

    A big concern that comes to mind is security. It sounds like a great idea to command the function of every piece of hardware in a building, but if all of that gear is interconnected through Ethernet or WiFi, then it’s open to hacking from the outside world. Google’s Nest thermostat was hacked a couple of years ago, so it’s reasonable to assume anything from a TV to a projector, lighting control system, or HVAC could be at risk.

    Samsung announced at CES that every product they make will be connected by 2020, largely using 5G cellular networks. No doubt companies like LG, Sony, Panasonic, and Chinese brands will follow suit. After all, it’s what consumers want – right? (At least, that’s what we heard all week long in Las Vegas…)
