A Primer on Video Encoding
What is Video Encoding?
Whenever you watch a video – whether it is streamed over a network or downloaded onto a device – chances are that the original file has been compressed to shrink its size, making it more manageable for the bandwidth and storage available to you at the time.
This compression is known as video encoding and it is a hugely important process to ensure that video data can be stored and transmitted efficiently, even over constrained networks where bandwidth is limited.
Video encoding is particularly important for defense applications, where video data is collected from a variety of sensors across the battlespace and sent over wide area networks with limited or congested bandwidth, often to be viewed in real-time for rapid decision making.
Take, for example, platforms carrying out vital Intelligence, Surveillance and Reconnaissance (ISR) missions, such as persistent unmanned aerial systems (UAS) that collect hours of mission-critical video footage during operations.
It would place a significant strain on the network if the UAS sent large volumes of raw video data from its powerful electro-optical/infrared (EO/IR) sensors back to operators on the ground. This is one of the key reasons for encoding: ensuring that networks are not overburdened and that warfighters can still receive vital ISR data to inform operational decision making.
How does video encoding work?
Video encoding is the process of compressing video content and its associated metadata so that the resulting files are smaller and more manageable from a storage and bandwidth perspective, while retaining as much quality and information as possible from the original source.
When the compressed video data is decompressed, or decoded, it is reconstructed into a viewable form – although, as we shall see, it may not retain the original quality, depending on the codec being used.
This whole process is not possible without codecs; the word “codec” is a portmanteau of coder and decoder.
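To make the encode/decode round trip concrete, here is a minimal Python sketch that drives the FFmpeg command-line tool. It assumes FFmpeg is installed and on the PATH, and the filenames are placeholders; with a lossy codec, the decoded frames are viewable again but not bit-identical to the source.

```python
import subprocess

# Round trip through a lossy codec: encode a source clip, then decode it
# back to raw frames. Filenames are placeholders; FFmpeg must be installed.
subprocess.run(
    ["ffmpeg", "-i", "source.mp4", "-c:v", "libx264", "encoded.mp4"],
    check=True,  # encode (compress)
)
subprocess.run(
    ["ffmpeg", "-i", "encoded.mp4", "-f", "rawvideo", "-pix_fmt", "yuv420p",
     "decoded.yuv"],
    check=True,  # decode (decompress) back to raw pixel data
)
```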
One of the earliest and most widely used video codec standards was MPEG-2, defined by the Moving Picture Experts Group (MPEG), which became a standard for digital TV. In very simple terms, MPEG-2 compression worked by removing repeated content across frames of the same scene (known as temporal redundancy), as well as removing replicated elements within a single frame of video (known as spatial redundancy).
These compression techniques are also known as inter-frame and intra-frame compression.
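As a toy illustration in Python (not how any production codec is implemented internally), the sketch below shows why inter-frame compression pays off: between consecutive frames of the same scene, most pixels are unchanged, so storing a mostly-zero difference is far cheaper than storing the whole frame again.

```python
import numpy as np

# Two synthetic 8-bit grayscale frames of the "same scene": the second is
# identical except for one small region that changes, mimicking typical
# frame-to-frame behavior (temporal redundancy).
rng = np.random.default_rng(0)
frame1 = rng.integers(0, 256, size=(480, 640), dtype=np.uint8)
frame2 = frame1.copy()
frame2[100:120, 200:220] = 255  # a small region changes between frames

# Inter-frame idea: transmit only the difference from the previous frame.
delta = frame2.astype(np.int16) - frame1.astype(np.int16)
print(f"{np.mean(delta == 0):.1%} of pixels are unchanged between frames")
# A real encoder entropy-codes this mostly-zero residual (plus motion
# vectors) far more cheaply than a full frame.
```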
But MPEG-2 was “lossy”, meaning that some of the original information was permanently discarded during compression, so quality after decompression was lower than the source. This resulted in the blocky artifacts that we often associate with low-quality videos, and while this is a nuisance when watching videos for entertainment, it can be the difference between mission success and failure for critical applications such as military ISR.
Imagine trying to determine from a low-quality video whether a person is carrying a weapon, or whether the person you’re surveilling is the high-value target you’re looking for.
H.264 vs. H.265 – the future of codecs
In the era of high definition and ultra-high definition, the H.264 codec has changed the game when it comes to streaming high-quality digital video, and it remains probably the most widely used codec across a range of applications.
Implemented in either hardware or software, the H.264 standard can compress video data to roughly half the storage space MPEG-2 requires while retaining the same high quality, and it scales to ultra-high resolutions such as 4K and even 8K.
H.264 is also known as Advanced Video Coding (AVC), or MPEG-4 Part 10.
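As a hedged, illustrative example, the Python snippet below drives FFmpeg’s libx264 software encoder; the filenames are placeholders and FFmpeg is assumed to be installed. The CRF value trades quality against file size, and the preset trades encoding speed against compression efficiency.

```python
import subprocess

# Re-encode a source clip to H.264/AVC with FFmpeg's libx264 software
# encoder. Filenames are placeholders; FFmpeg must be installed separately.
subprocess.run(
    [
        "ffmpeg",
        "-i", "input.mp4",    # source video
        "-c:v", "libx264",    # H.264/AVC software encoder
        "-crf", "23",         # constant-quality mode (lower = higher quality)
        "-preset", "medium",  # speed vs. compression trade-off
        "output_h264.mp4",
    ],
    check=True,
)
```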
But the rapid advance of technology shows no signs of slowing, and H.264 AVC is being superseded by the next-generation compression standard known as HEVC – High Efficiency Video Coding.
This is H.265 (or MPEG-H Part 2), and it is seeing increased adoption thanks to its high performance, delivering roughly half the bitrate and file size of H.264 for the same visual quality. This is particularly important for sending video data over constrained networks.
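A simple way to see the difference for yourself is to encode the same source with both codecs and compare file sizes, as in the hedged Python sketch below (placeholder filenames, FFmpeg assumed installed). Note that CRF scales are not directly comparable across encoders; x265’s default CRF of 28 is commonly treated as roughly equivalent in quality to x264’s default of 23.

```python
import os
import subprocess

# Encode the same placeholder source with x264 (H.264) and x265 (H.265)
# at commonly used quality settings, then compare output sizes.
for codec, crf, out in [("libx264", "23", "out_avc.mp4"),
                        ("libx265", "28", "out_hevc.mp4")]:
    subprocess.run(
        ["ffmpeg", "-y", "-i", "input.mp4",
         "-c:v", codec, "-crf", crf,
         "-an",  # drop audio so only video size is compared
         out],
        check=True,
    )
    print(f"{codec}: {os.path.getsize(out) / 1e6:.1f} MB")
```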
Video encoding using rugged computers and servers
Systel has a long heritage of supplying rugged rackmount servers and embedded computers that are able to perform mission-critical functions, including specialist video encoding to support ISR missions across domains.
It has been standard for some time for rackmount servers to include dedicated graphics cards that provide encoding and decoding capabilities. In recent years, video encoding has also become possible using mobile CPUs with integrated graphics – such as Intel’s Core i series, as seen in many consumer laptops – and embedded GPUs featuring NVIDIA’s Turing and latest-gen Ampere architectures. NVIDIA’s Jetson system-on-modules (SOMs), integrated into Systel’s Kite-Strike products, offer robust video encode and decode capabilities for edge-deployed AI compute solutions.
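As an illustrative sketch of GPU-offloaded encoding, the Python snippet below invokes FFmpeg’s NVENC path; it assumes an NVENC-capable NVIDIA GPU, an FFmpeg build compiled with NVENC support, and placeholder filenames.

```python
import subprocess

# Offload H.264 encoding to the GPU's dedicated NVENC hardware block,
# freeing the CPU for other tasks. Assumes an NVENC-capable NVIDIA GPU
# and an FFmpeg build with NVENC enabled; filenames are placeholders.
subprocess.run(
    ["ffmpeg", "-i", "input.mp4",
     "-c:v", "h264_nvenc",  # NVIDIA hardware H.264 encoder
     "-preset", "p4",       # NVENC speed/quality preset (p1 fastest, p7 best)
     "output_nvenc.mp4"],
    check=True,
)
```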
Rugged embedded video capture and encoder cards are also becoming more prevalent, opening up the possibility of providing direct hardware video capture and encode capabilities on embedded computers installed on mobile platforms such as UAS or ground vehicles, and not just on large rackmount server configurations.
Systel’s Hawk-Strike IV – part of our Strike mission computer family – can be configured to provide up to four SDI or analog video inputs, with a hardware video encoder that converts the video data to H.264/H.265 for display inside the platform or for transmission over a network. This configuration can also support 360-degree situational awareness (360SA), using video feeds from cameras placed around a vehicle.
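Purely as an illustrative sketch (not the actual Hawk-Strike software stack), the Python snippet below shows what a capture-encode-stream chain can look like in software terms: grab frames from a capture device, encode to H.264, and push the result over the network. The device path and destination address are hypothetical, and it assumes a Linux system exposing the capture card via V4L2.

```python
import subprocess

# Capture from a video device, encode to H.264, and stream it over the
# network as MPEG-TS via UDP. /dev/video0 and the destination address are
# hypothetical placeholders; assumes Linux with a V4L2 capture device.
subprocess.run(
    ["ffmpeg",
     "-f", "v4l2", "-i", "/dev/video0",           # hypothetical capture device
     "-c:v", "libx264", "-preset", "veryfast",    # software H.264 encode
     "-f", "mpegts", "udp://192.168.1.50:5000"],  # placeholder destination
    check=True,
)
```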
Glass-to-glass latency is critical, especially for 360SA systems, and the challenge is ensuring that the encoding process does not add latency that is noticeable to the human operator. In some instances, it is more beneficial to use raw video data to ensure lower latency. For encoded video streams, each component of the latency chain must be optimized (we’ll take care of the compute component!) to ensure the lowest possible latency for real-time capabilities.
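On the encoder side, one common software lever is disabling frame look-ahead and B-frames so the encoder never buffers frames waiting for future input. The hedged Python sketch below shows this with x264’s zerolatency tune; the input and destination are placeholders.

```python
import subprocess

# Low-latency x264 settings: "zerolatency" disables look-ahead and
# B-frames so each frame is emitted as soon as it is encoded, at some
# cost in compression efficiency. Input and destination are placeholders.
subprocess.run(
    ["ffmpeg", "-i", "input.mp4",
     "-c:v", "libx264",
     "-tune", "zerolatency",  # no look-ahead, no B-frames
     "-preset", "ultrafast",  # minimize per-frame encode time
     "-f", "mpegts", "udp://192.168.1.50:5000"],
    check=True,
)
```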
However our end users wish to process their video data, and whatever the scenario, we offer a wide portfolio of MOSA-aligned, rugged, configurable COTS and MCOTS solutions that will enable them to achieve operational and mission success.