

Streaming video in Safari: Why is it so difficult?


The problem

I recently implemented support for AI tagging of videos in my product Sortal. A part of the feature is that you can then play back the videos you uploaded. I thought, no problem — video streaming seems pretty simple.


In fact, it is so simple (just a few lines of code) that I chose video streaming as the theme for examples in my book Bootstrapping Microservices.

But when we came to testing in Safari, I learned the ugly truth. So let me rephrase the previous assertion: video streaming is simple for Chrome, but not so much for Safari.

Why is it so difficult for Safari? What does it take to make it work for Safari? The answers to these questions are revealed in this blog post.

Try it for yourself

Before we start looking at the code together, please try it for yourself! The code that accompanies this blog post is available on GitHub. You can download the code or use Git to clone the repository. You’ll need Node.js installed to try it out.

Start the server as instructed in the readme and navigate your browser to http://localhost:3000. You’ll see either Figure 1 or Figure 2, depending on whether you are viewing the page in Chrome or Safari.

Notice that in Figure 2, when the webpage is viewed in Safari, the video on the left side doesn’t work. However, the example on the right does work, and this post explains how I achieved a working version of the video streaming code for Safari.

Figure 1: Video streaming example in Chrome

Basic video streaming

The basic form of video streaming that works in Chrome is trivial to implement in your HTTP server. We are simply streaming the entire video file from the backend to the frontend, as illustrated in Figure 3.

Figure 3: Simple video streaming flow

In the frontend

To render a video in the frontend, we use the HTML5 video element. There’s not much to it; Listing 1 shows how it works. This is the version that works only in Chrome. You can see that the src of the video is handled in the backend by the /works-in-chrome route.

Listing 1: A simple webpage to render streaming video that works in Chrome
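The listing's code isn't reproduced in this excerpt, so here is a minimal sketch of what it contains. The /works-in-chrome route name comes from the text; the particular video attributes (muted, loop, controls) are assumptions:

```html
<!DOCTYPE html>
<html>
  <body>
    <!-- The video element streams its content from the backend route. -->
    <video muted playsinline loop controls
      src="/works-in-chrome">
    </video>
  </body>
</html>
```

The browser issues the HTTP request for the video itself; all we supply is the src.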

In the backend

The backend for this example is a very simple HTTP server built on the Express framework running on Node.js. You can see the code in Listing 2. This is where the /works-in-chrome route is implemented.

In response to the HTTP GET request, we stream the whole file to the browser. Along the way, we set various HTTP response headers.

The content-type header is set to video/mp4 so the browser knows it’s receiving a video.

Then we stat the file to get its length and set that as the content-length header so the browser knows how much data it’s receiving.

Listing 2: Node.js Express web server with simple video streaming that works for Chrome

But it doesn’t work in Safari

Unfortunately, we can’t just send the entire video file to Safari and expect it to work. Chrome can deal with it, but Safari refuses to play the game.

What’s missing?

Safari doesn’t want the entire file delivered in one go. That’s why the brute-force tactic of streaming the whole file doesn’t work.

Safari would like to stream portions of the file so that it can be incrementally buffered in a piecemeal fashion. It also wants random, ad hoc access to any portion of the file that it requires.


This actually makes sense. Imagine that a user wants to rewind the video a bit — you wouldn’t want to start the whole file streaming again, would you?

Instead, Safari wants to just go back a bit and request that portion of the file again. In fact, this works in Chrome as well. Even though the basic streaming video works in Chrome, Chrome can indeed issue HTTP range requests for more efficient handling of streaming videos.

Figure 4 gives you an idea of how this works. We need to modify our HTTP server so that rather than streaming the entire video file to the frontend, we can instead serve random access portions of the file depending on what the browser is requesting.

Figure 4: Range streaming flow

Supporting HTTP range requests

Specifically, we have to support HTTP range requests. But how do we implement it?

There’s surprisingly little readable documentation for it. Of course, we could read the HTTP specifications, but who has the time and motivation for that? (I’ll give you links to resources at the end of this post.)

Instead, allow me to guide you through an overview of my implementation. The key to it is the HTTP request range header, whose value starts with the prefix "bytes=".

This header is how the frontend asks for a particular range of bytes to be retrieved from the video file. You can see in Listing 3 how we can parse the value for this header to obtain starting and ending values for the range of bytes.


Listing 3: Parsing the HTTP range header
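The listing's code is missing from this excerpt; a sketch of the parsing logic, written as a small pure function, might look like this:

```javascript
// Sketch: parse the value of the HTTP "range" request header,
// e.g. "bytes=100-200", "bytes=100-", or "bytes=-200".
// Returns { start, end }; either can be undefined for open-ended ranges.
function parseRangeHeader(range) {
  let start;
  let end;
  if (range && range.startsWith("bytes=")) {
    const [startStr, endStr] = range.slice("bytes=".length).split("-");
    if (startStr) start = parseInt(startStr, 10);
    if (endStr) end = parseInt(endStr, 10);
  }
  return { start, end };
}
```

In the request handler, this would be invoked as `parseRangeHeader(req.headers.range)`.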

Responding to the HTTP HEAD request

An HTTP HEAD request is how the frontend probes the backend for information on a particular resource. We should take some care with how we handle this.

The Express framework also sends HEAD requests to our HTTP GET handler, so we can check the req.method and return early from the request handler before we do more work than is necessary for the HEAD request.

Listing 4 shows how we respond to the HEAD request. We don’t have to return any data from the file, but we do have to configure the response headers to tell the frontend that we are supporting the HTTP range request and to let it know the full size of the video file.

The accept-ranges response header used here indicates that this request handler can respond to an HTTP range request.

Listing 4: Responding to the HTTP HEAD request

Full file vs. partial file

Now for the tricky part. Are we sending the full file or are we sending a portion of the file?

With some care, we can make our request handler support both methods. You can see in Listing 5 how we compute retrievedLength from the start and end variables when it is a range request and those variables are defined; otherwise, we just use contentLength (the complete file’s size) when it’s not a range request.

Listing 5: Determining the content length based on the portion of the file requested
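The listing is not reproduced in this excerpt; a sketch of the computation described above:

```javascript
// Sketch: how many bytes will this response carry?
// start/end come from the parsed range header (either may be undefined);
// contentLength is the complete file's size.
function getRetrievedLength(start, end, contentLength) {
  if (start !== undefined && end !== undefined) {
    return end + 1 - start;       // closed range, e.g. bytes=100-200
  }
  if (start !== undefined) {
    return contentLength - start; // open-ended, e.g. bytes=100-
  }
  if (end !== undefined) {
    return end + 1;               // only end given: bytes 0 through end
  }
  return contentLength;           // not a range request: the whole file
}
```
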

Sending the status code and response headers

We’ve dealt with the HEAD request. All that’s left to handle is the HTTP GET request.

Listing 6 shows how we send an appropriate success status code and response headers.

The status code varies depending on whether this is a request for the full file or a range request for a portion of the file. If it’s a range request, the status code will be 206 (for partial content); otherwise, we use the regular old success status code of 200.

Listing 6: Sending response headers
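Since the listing's code is missing from this excerpt, here is a sketch. Note that for a range response the content-range header (in the form `bytes start-end/total`) and accept-ranges are also set, so the browser knows which slice of the file it is receiving; the helper's exact shape is an assumption:

```javascript
// Sketch: send the success status code and response headers.
// 206 (partial content) for a range request, plain 200 otherwise.
function sendResponseHeaders(res, { start, end }, retrievedLength, contentLength) {
  const isRangeRequest = start !== undefined || end !== undefined;
  const headers = {
    "content-type": "video/mp4",
    "content-length": retrievedLength,
  };
  if (isRangeRequest) {
    // Tell the browser which byte range it is receiving, and the total size.
    headers["content-range"] =
      `bytes ${start ?? 0}-${end ?? contentLength - 1}/${contentLength}`;
    headers["accept-ranges"] = "bytes";
  }
  res.writeHead(isRangeRequest ? 206 : 200, headers);
}
```
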

Streaming a portion of the file

Now the easiest part: streaming a portion of the file. The code in Listing 7 is almost identical to the code in the basic video streaming example way back in Listing 2.

The difference now is that we are passing in the options object. Conveniently, the createReadStream function from Node.js’ file system module takes start and end values in the options object, which enable reading a portion of the file from the hard drive.

In the case of an HTTP range request, the earlier code in Listing 3 will have parsed the start and end values from the header, and we inserted them into the options object.

In the case of a normal HTTP GET request (not a range request), start and end won’t have been parsed and won’t be in the options object; in that case, we simply read the entire file.

Listing 7: Streaming a portion of the file

Putting it all together

Now let’s put all the code together into a complete request handler for streaming video that works in both Chrome and Safari.

Listing 8 is the combined code from Listing 3 through to Listing 7, so you can see it all in context. This request handler can work either way. It can retrieve a portion of the video file if requested to do so by the browser. Otherwise, it retrieves the entire file.

Listing 8: Full HTTP request handler

Updated frontend code

Nothing needs to change in the frontend code besides making sure the video element is pointing to an HTTP route that can handle HTTP range requests.

Listing 9 shows that we have simply rerouted the video element to a route called /works-in-chrome-and-safari . This frontend will work both in Chrome and in Safari.

Listing 9: Updated frontend code
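The listing isn't reproduced in this excerpt; it is identical in shape to the page from Listing 1, with only the route changed (the video attributes remain assumptions):

```html
<!DOCTYPE html>
<html>
  <body>
    <!-- Same page as before; only the backend route has changed. -->
    <video muted playsinline loop controls
      src="/works-in-chrome-and-safari">
    </video>
  </body>
</html>
```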

Even though video streaming is simple to get working for Chrome, it’s quite a bit more difficult to figure out for Safari — at least if you are trying to figure it out by yourself from the HTTP specification.

Lucky for you, I’ve already trodden that path, and this blog post has laid the groundwork that you can build on for your own streaming video implementation.

Resources

  • Example code for this blog post
  • A Stack Overflow post that helped me understand what I was missing
  • HTTP specification
  • Range requests
  • 206 Partial Content success status



10 Replies to "Streaming video in Safari: Why is it so difficult?"

Hi, thank you for this blog, it is very helpful. It works perfectly with Safari (Mac), but not in Mobile Safari on iOS (downloading the full video before starting to play…). Do you know why?

Thanks for your feedback!

That’s strange… the production version I’m using works although I never tested the code for the blog post. It might have been broken when I simplified it.

Could you please log an issue against the code in GitHub? That will be an easier way to work through the problem, thanks.

Ashley Davis

I’ve encountered this problem with a web page I’m writing that shows videos. I’m doing the range queries correctly, but they still malfunction because the user shows first one video, then another. Your code assumes there’s only one video path, so it won’t work in this case. Range requests don’t include the file path. So how does the server code know which file to retrieve the bytes of data from? I tried using a session variable, but this fails because the session variable can change asynchronously with respect to what the video player is actually doing. The user can reposition and switch between a number of different videos! On Mac/Safari, this abnormally halts the video player.

Range requests can have a route, they can have query parameters, or they can have a body… so that’s at least three ways to statelessly include the file path or ID of a video to play. Use whichever one best satisfies your needs.

The problem doesn’t happen because of accept range requests (using the Accept-Ranges header), or for requests to open and play a video file. You are correct, all information from the server is available there.

The problem happens for range requests from the client, which are made by the client application, such as a video player. These requests ARE ONLY MARKED by the HTTP_RANGE header. There is no other information that the server can use to locate the file whose bytes are to be returned to the client. This is one of the rare cases where a client calls a server to obtain information. Usually, it’s the other way around.

This appears to be a very poor design, since multiple overlapping video displaying can be done by one server on behalf of one user. The bytes returned by the server may go to the wrong client player, which causes a crash of the player.

You must be misunderstanding something. If what you were saying were true, then YouTube wouldn’t work.

You might be able to learn more here: https://developer.mozilla.org/en-US/docs/Web/HTTP/Range_requests

YouTube works fine because it serves one video at a time. So its byte requests don’t need to specify which file is the context. When a client of mine tried to access a second video right after a first, the log showed that the byte requests were interleaved, probably due to lack of sync in setting the path in the Session variables. Anyway, when I look at the $_SERVER information for byte requests from the video player, the video path is NOT there, just the range of bytes.

Ok maybe I’m misunderstanding something.

But I just opened two browser windows and started two YouTube videos simultaneously. It was serving two videos simultaneously and seemed to have no problem. In fact I imagine that YouTube is serving millions of videos at once because that’s how many people are probably watching YouTube at the same time.

If you want I can look at your code and see if I can spot a problem with it, just email me on [email protected]

It is possible that my explanation is incorrect. The fact is that my code works for Firefox, Chrome, and Safari on a mobile device. It only fails for Safari on MacOS. It is a known bug with apparently no solution published as yet.

Do you have links to information on the bug? I might be able to help if I learn more about it.


Codecs used by WebRTC

The WebRTC API makes it possible to construct websites and apps that let users communicate in real time, using audio and/or video as well as optional data and other information. To communicate, the two devices need to be able to agree upon a mutually-understood codec for each track so they can successfully communicate and present the shared media. This guide reviews the codecs that browsers are required to implement as well as other codecs that some or all browsers support for WebRTC.

Containerless media

WebRTC uses bare MediaStreamTrack objects for each track being shared from one peer to another, without a container or even a MediaStream associated with the tracks. Which codecs can be within those tracks is not mandated by the WebRTC specification. However, RFC 7742 specifies that all WebRTC-compatible browsers must support VP8 and H.264’s Constrained Baseline profile for video, and RFC 7874 specifies that browsers must support at least the Opus codec as well as G.711’s PCMA and PCMU formats.

These two RFCs also lay out options that must be supported for each codec, as well as specific user comfort features such as echo cancellation.

While compression is always a necessity when dealing with media on the web, it's of additional importance when videoconferencing in order to ensure that the participants are able to communicate without lag or interruptions. Of secondary importance is the need to keep the video and audio synchronized, so that movements and any ancillary information (such as slides or a projection) are presented at the same time as the corresponding audio.

General codec requirements

Before looking at codec-specific capabilities and requirements, there are a few overall requirements that must be met by any codec configuration used with WebRTC.

Unless the SDP specifically signals otherwise, the web browser receiving a WebRTC video stream must be able to handle video at 20 FPS at a minimum resolution of 320 pixels wide by 240 pixels tall. It's encouraged that video be encoded at a frame rate and size no lower than that, since that's essentially the lower bound of what WebRTC generally is expected to handle.

SDP supports a codec-independent way to specify preferred video resolutions (RFC 6236). This is done by sending an a=imageattr SDP attribute to indicate the maximum resolution that is acceptable. The sender is not required to support this mechanism, however, so you have to be prepared to receive media at a different resolution than you requested. Beyond this simple maximum resolution request, specific codecs may offer further ways to ask for specific media configurations.

Supported video codecs

WebRTC establishes a baseline set of codecs which all compliant browsers are required to support. Some browsers may choose to allow other codecs as well.

Below are the video codecs which are required in any fully WebRTC-compliant browser, as well as the profiles which are required and the browsers which actually meet the requirement.

For details on WebRTC-related considerations for each codec, see the sub-sections below by following the links on each codec's name.

Complete details of what video codecs and configurations WebRTC is required to support can be found in RFC 7742: WebRTC Video Processing and Codec Requirements . It's worth noting that the RFC covers a variety of video-related requirements, including color spaces (sRGB is the preferred, but not required, default color space), recommendations for webcam processing features (automatic focus, automatic white balance, automatic light level), and so on.

Note: These requirements are for web browsers and other fully-WebRTC compliant products. Non-WebRTC products that are able to communicate with WebRTC to some extent may or may not support these codecs, although they're encouraged to by the specification documents.

In addition to the mandatory codecs, some browsers support additional codecs as well. Those are listed in the following table.

VP8, which we describe in general in the main guide to video codecs used on the web, has some specific requirements that must be followed when using it to encode or decode a video track on a WebRTC connection.

Unless signaled otherwise, VP8 will use square pixels (that is, pixels with an aspect ratio of 1:1).

Other notes

The network payload format for sharing VP8 using RTP (such as when using WebRTC) is described in RFC 7741: RTP Payload Format for VP8 Video .

AVC / H.264

Support for AVC's Constrained Baseline (CB) profile is required in all fully-compliant WebRTC implementations. CB is a subset of the main profile, and is specifically designed for low-complexity, low-delay applications such as mobile video and videoconferencing, as well as for platforms with lower performing video processing capabilities.

Our overview of AVC and its features can be found in the main video codec guide.

Special parameter support requirements

AVC offers a wide array of parameters for controlling optional values. In order to improve reliability of WebRTC media sharing across multiple platforms and browsers, it's required that WebRTC endpoints that support AVC handle certain parameters in specific ways. Sometimes this means a parameter must (or must not) be supported. Sometimes it means requiring a specific value for a parameter, or that a specific set of values be allowed. And sometimes the requirements are more intricate.

Parameters which are useful but not required

These parameters don't have to be supported by the WebRTC endpoint, and their use is not required either. Their use can improve the user experience in various ways, but don't have to be used. Indeed, some of these are pretty complicated to use.

If specified and supported by the software, the max-br parameter specifies the maximum video bit rate in units of 1,000 bps for VCL and 1,200 bps for NAL. You'll find details about this on page 47 of RFC 6184 .

If specified and supported by the software, max-cpb specifies the maximum coded picture buffer size. This is a fairly complicated parameter whose unit size can vary. See page 45 of RFC 6184 for details.

If specified and supported, max-dpb indicates the maximum decoded picture buffer size, given in units of 8/3 macroblocks. See RFC 6184, page 46 for further details.

If specified and supported by the software, max-fs specifies the maximum size of a single video frame, given as a number of macroblocks.

If specified and supported by the software, the max-mbps parameter is an integer specifying the maximum rate at which macroblocks should be processed per second (in macroblocks per second).

If specified and supported by the software, the max-smbps parameter is an integer stating the maximum static macroblock processing rate in static macroblocks per second (computed under the hypothetical assumption that all macroblocks are static macroblocks).

Parameters with specific requirements

These parameters may or may not be required, but have some special requirement when used.

All endpoints are required to support packetization-mode 1 (non-interleaved mode). Support for other packetization modes is optional, and the packetization-mode parameter itself is not required to be specified.

Sequence and picture information for AVC can be sent either in-band or out-of-band. When AVC is used with WebRTC, this information must be signaled in-band; the sprop-parameter-sets parameter must therefore not be included in the SDP.

Parameters which must be specified

These parameters must be specified whenever using AVC in a WebRTC connection.

All WebRTC implementations are required to specify and interpret the profile-level-id parameter in their SDP, identifying the sub-profile used by the codec. The specific value that is set is not defined; what matters is that the parameter be used at all. This is useful to note, since in RFC 6184 ("RTP Payload Format for H.264 Video"), profile-level-id is entirely optional.

Other requirements

For the purposes of supporting switching between portrait and landscape orientations, there are two methods that can be used. The first is the video orientation (CVO) header extension to the RTP protocol. However, if this isn't signaled as supported in the SDP, then browsers are encouraged, though not required, to support Display Orientation SEI messages.

Unless signaled otherwise, the pixel aspect ratio is 1:1, indicating that pixels are square.

The payload format used for AVC in WebRTC is described in RFC 6184: RTP Payload Format for H.264 Video . AVC implementations for WebRTC are required to support the special "filler payload" and "full frame freeze" SEI messages; these are used to support switching among multiple input streams seamlessly.

Supported audio codecs

The audio codecs which RFC 7874 mandates that all WebRTC-compatible browsers must support are shown in the table below.

See below for more details about any WebRTC-specific considerations that exist for each codec listed above.

It's useful to note that RFC 7874 defines more than a list of audio codecs that a WebRTC-compliant browser must support; it also provides recommendations and requirements for special audio features such as echo cancellation, noise reduction, and audio leveling.

Note: The list above indicates the minimum required set of codecs that all WebRTC-compatible endpoints are required to implement. A given browser may also support other codecs; however, cross-platform and cross-device compatibility may be at risk if you use other codecs without carefully ensuring that support exists in all browsers your users might choose.

In addition to the mandatory audio codecs, some browsers support additional codecs as well. Those are listed in the following table.

Internet Low Bitrate Codec ( iLBC ) is an open-source narrow-band codec developed by Global IP Solutions and now Google, designed specifically for streaming voice audio. Google and some other browser developers have adopted it for WebRTC.

The Internet Speech Audio Codec ( iSAC ) is another codec developed by Global IP Solutions and now owned by Google, which has open-sourced it. It's used by Google Talk, QQ, and other instant messaging clients and is specifically designed for voice transmissions which are encapsulated within an RTP stream.

Comfort noise (CN) is a form of artificial background noise which is used to fill gaps in a transmission instead of using pure silence. This helps to avoid the jarring effect that can occur when voice activation and similar features cause a stream to stop sending data temporarily, a capability known as Discontinuous Transmission (DTX). RFC 3389 defines a method for providing an appropriate filler to use during silences.

Comfort Noise is used with G.711, and may potentially be used with other codecs that don't have a built-in CN feature. Opus, for example, has its own CN capability; as such, using RFC 3389 CN with the Opus codec is not recommended.

An audio sender is never required to use discontinuous transmission or comfort noise.

The Opus format, defined by RFC 6716, is the primary format for audio in WebRTC. The RTP payload format for Opus is found in RFC 7587. You can find more general information about Opus and its capabilities, and how other APIs can support Opus, in the corresponding section of our guide to audio codecs used on the web.

Both the speech and general audio modes should be supported. Opus's scalability and flexibility are useful when dealing with audio that may have varying degrees of complexity. Its support of in-band stereo signals allows support for stereo without complicating the demultiplexing process.

The entire range of bit rates supported by Opus (6 kbps to 510 kbps) is supported in WebRTC, with the bit rate allowed to be dynamically changed. Higher bit rates typically improve quality.

Bit rate recommendations

Given a 20 millisecond frame size, the following table shows the recommended bit rates for various forms of media.

The bit rate may be adjusted at any time. In order to avoid network congestion, the average audio bit rate should not exceed the available network bandwidth (minus any other known or anticipated added bandwidth requirements).

G.711 defines the format for Pulse Code Modulation ( PCM ) audio as a series of 8-bit integer samples taken at a sample rate of 8,000 Hz, yielding a bit rate of 64 kbps. Both µ-law and A-law encodings are allowed.

G.711 is defined by the ITU and its payload format is defined in RFC 3551, section 4.5.14 .

WebRTC requires that G.711 use 8-bit samples at the standard 64 kbps rate, even though G.711 supports some other variations. Neither G.711.0 (lossless compression), G.711.1 (wideband capability), nor any other extensions to the G.711 standard are mandated by WebRTC.

Due to its low sample rate and sample size, G.711 audio quality is generally considered poor by modern standards, even though it's roughly equivalent to what a landline telephone sounds like. It is generally used as a least common denominator to ensure that browsers can achieve an audio connection regardless of platforms and browsers, or as a fallback option in general.

Specifying and configuring codecs

Getting the supported codecs

Because a given browser and platform may have different availability among the potential codecs—and may have multiple profiles or levels supported for a given codec—the first step when configuring codecs for an RTCPeerConnection is to get the list of available codecs. To do this, you first have to establish a connection on which to get the list.

There are a couple of ways you can do this. The most efficient way is to use the static method RTCRtpSender.getCapabilities() (or the equivalent RTCRtpReceiver.getCapabilities() for a receiver), specifying the type of media as the input parameter. For example, to determine the supported codecs for video, you can do this:
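The example code itself is missing from this excerpt; based on the surrounding text, it amounts to a one-liner along these lines (a browser-only API, so it runs in a page rather than under Node):

```javascript
// Ask the browser which codec configurations it can send for video.
// getCapabilities() also reports header extensions; we only need codecs.
const codecList = RTCRtpSender.getCapabilities("video").codecs;
```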

Now codecList is an array of codec objects, each describing one codec configuration. Also present in the list will be entries for retransmission (RTX), redundant coding (RED), and forward error correction (FEC).

If the connection is in the process of starting up, you can use the icegatheringstatechange event to watch for the completion of ICE candidate gathering, then fetch the list.
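The code for this approach is also missing from the excerpt; a sketch consistent with the explanation that follows, assuming pc is an existing RTCPeerConnection and codecList is declared in an enclosing scope:

```javascript
// Wait for ICE gathering to finish, then grab the codec list
// from the first video sender on the connection.
pc.addEventListener("icegatheringstatechange", (event) => {
  if (pc.iceGatheringState === "complete") {
    const senderList = pc.getSenders();
    for (const sender of senderList) {
      if (sender.track.kind === "video") {
        codecList = sender.getParameters().codecs;
        return;
      }
    }
    codecList = null; // no video track found
  }
});
```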

The event handler for icegatheringstatechange is established; in it, we look to see if the ICE gathering state is complete, indicating that no further candidates will be collected. The method RTCPeerConnection.getSenders() is called to get a list of all the RTCRtpSender objects used by the connection.

With that in hand, we walk through the list of senders, looking for the first one whose MediaStreamTrack indicates that its kind is video , indicating that the track's data is video media. We then call that sender's getParameters() method and set codecList to the codecs property in the returned object, and then return to the caller.

If no video track is found, we set codecList to null .

On return, then, codecList is either null to indicate that no video tracks were found or it's an array of RTCRtpCodecParameters objects, each describing one permitted codec configuration. Of special importance in these objects: the payloadType property, which is a one-byte value which uniquely identifies the described configuration.

Note: The two methods for obtaining lists of codecs shown here use different output types in their codec lists. Be aware of this when using the results.

Customizing the codec list

Once you have a list of the available codecs, you can alter it and then send the revised list to RTCRtpTransceiver.setCodecPreferences() to rearrange the codec list. This changes the order of preference of the codecs, letting you tell WebRTC to prefer a different codec over all others.
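The sample code referred to below doesn't appear in this excerpt; here is a sketch consistent with the description, assuming peerConnection is an existing RTCPeerConnection and preferCodec() is the reordering helper this section describes:

```javascript
// Reorder codec preferences so the given MIME type is tried first,
// then renegotiate the connection.
async function changeVideoCodec(mimeType) {
  const transceivers = peerConnection.getTransceivers();

  transceivers.forEach((transceiver) => {
    const kind = transceiver.sender.track.kind;
    let sendCodecs = RTCRtpSender.getCapabilities(kind).codecs;
    let recvCodecs = RTCRtpReceiver.getCapabilities(kind).codecs;

    if (kind === "video") {
      sendCodecs = preferCodec(sendCodecs, mimeType);
      recvCodecs = preferCodec(recvCodecs, mimeType);
      transceiver.setCodecPreferences([...sendCodecs, ...recvCodecs]);
    }
  });

  // Trigger renegotiation so the new preferences take effect.
  peerConnection.onnegotiationneeded();
}
```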

In this sample, the function changeVideoCodec() takes as input the MIME type of the codec you wish to use. The code starts by getting a list of all of the RTCPeerConnection 's transceivers.

Then, for each transceiver, we get the kind of media represented by the transceiver from the RTCRtpSender 's track's kind . We also get the lists of all codecs supported by the browser for both sending and receiving video, using the getCapabilities() static method of both RTCRtpSender and RTCRtpReceiver .

If the media is video, we call a method called preferCodec() for both the sender's and receiver's codec lists; this method rearranges the codec list the way we want (see below).

Finally, we call the RTCRtpTransceiver 's setCodecPreferences() method to specify that the given send and receive codecs are allowed, in the newly rearranged order.

That's done for each transceiver on the RTCPeerConnection ; once all of the transceivers have been updated, we call the onnegotiationneeded event handler, which will create a new offer, update the local description, send the offer along to the remote peer, and so on, thereby triggering the renegotiation of the connection.

The preferCodec() function called by the code above moves a specified codec to the top of the list so that it is prioritized during negotiation.

This code is just splitting the codec list into two arrays: one containing codecs whose MIME type matches the one specified by the mimeType parameter, and the other with all the other codecs. Once the list has been split up, they're concatenated back together with the entries matching the given mimeType first, followed by all of the other codecs. The rearranged list is then returned to the caller.
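The listings this walkthrough refers to were lost in extraction; here is a reconstruction consistent with the description above. Assumptions: peerConnection is passed in explicitly rather than taken from an outer scope, and its onnegotiationneeded handler performs the renegotiation.

```javascript
// Split the list into codecs matching mimeType and everything else, then
// concatenate with the matching codecs first.
function preferCodec(codecs, mimeType) {
  const preferredCodecs = codecs.filter((codec) => codec.mimeType === mimeType);
  const otherCodecs = codecs.filter((codec) => codec.mimeType !== mimeType);
  return [...preferredCodecs, ...otherCodecs];
}

function changeVideoCodec(peerConnection, mimeType) {
  for (const transceiver of peerConnection.getTransceivers()) {
    const kind = transceiver.sender.track.kind;
    if (kind !== "video") continue;

    // Every codec the browser can send and receive for this media kind.
    const sendCodecs = RTCRtpSender.getCapabilities(kind).codecs;
    const recvCodecs = RTCRtpReceiver.getCapabilities(kind).codecs;

    // Move the requested codec to the front of each list, then apply.
    transceiver.setCodecPreferences([
      ...preferCodec(sendCodecs, mimeType),
      ...preferCodec(recvCodecs, mimeType),
    ]);
  }
  // Trigger renegotiation so the new preferences take effect.
  peerConnection.onnegotiationneeded();
}
```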

Default codecs

Unless otherwise specified, the default—or, more accurately, preferred—codecs requested by each browser's implementation of WebRTC are shown in the table below.

Choosing the right codec

Before choosing a codec that isn't one of the mandatory codecs (VP8 or AVC for video and Opus or PCM for audio), you should seriously consider the potential drawbacks: in particular, only these codecs can be generally assumed to be available on essentially all devices that support WebRTC.

If you choose to prefer a codec other than the mandatory ones, you should at least allow for fallback to one of the mandatory codecs if support is unavailable for the codec you prefer.

In general, if it's available and the audio you wish to send has a sample rate greater than 8 kHz, you should strongly consider using Opus as your primary codec. For voice-only connections in a constrained environment, using G.711 at an 8 kHz sample rate can provide an acceptable experience for conversation, but typically you'll use G.711 as a fallback option, since there are other options which are more efficient and sound better, such as Opus in its narrowband mode.

There are a number of factors that come into play when deciding upon a video codec (or set of codecs) to support.

Licensing terms

Before choosing a video codec, make sure you're aware of any licensing requirements around the codec you select; you can find information about possible licensing concerns in our main guide to video codecs used on the web. Of the two mandatory codecs for video—VP8 and AVC/H.264—only VP8 is completely free of licensing requirements. If you select AVC, make sure you're aware of any potential fees you may need to pay; that said, the patent holders have generally said that most typical website developers shouldn't need to worry about paying the license fees, which are typically focused more on the developers of the encoding and decoding software.

Warning: The information here does not constitute legal advice! Be sure to confirm your exposure to liability before making any final decisions where potential exists for licensing issues.

Power needs and battery life

Another factor to consider—especially on mobile platforms—is the impact a codec may have on battery life. If a codec is handled in hardware on a given platform, that codec is likely to allow for much better battery life and less heat production.

For example, Safari for iOS and iPadOS introduced WebRTC with AVC as the only supported video codec. AVC has the advantage, on iOS and iPadOS, of being able to be encoded and decoded in hardware. Safari 12.1 introduced support for VP8 within its WebRTC implementation, which improves interoperability, but at a cost—VP8 has no hardware support on iOS devices, so using it causes increased processor impact and reduced battery life.

Performance

Fortunately, VP8 and AVC perform similarly from an end-user perspective, and are equally adequate for use in videoconferencing and other WebRTC solutions. The final decision is yours. Whichever you choose, be sure to read the information provided in this article about any particular configuration issues you may need to contend with for that codec.

Keep in mind that choosing a codec that isn't on the list of mandatory codecs likely runs the risk of selecting a codec which isn't supported by a browser your users might prefer. See the article Handling media support issues in web content to learn more about how to offer support for your preferred codecs while still being able to fall back on browsers that don't implement that codec.

Security implications

There are interesting potential security issues that come up while selecting and configuring codecs. WebRTC video is protected using Datagram Transport Layer Security (DTLS), but it is theoretically possible for a motivated party to infer the amount of change that's occurring from frame to frame when using variable bit rate (VBR) codecs, by monitoring the stream's bit rate and how it changes over time. This could potentially allow a bad actor to infer something about the content of the stream, given the ebb and flow of the bit rate.

For more about security considerations when using AVC in WebRTC, see RFC 6184, section 9: RTP Payload Format for H.264 Video: Security Considerations .

RTP payload format media types

It may be useful to refer to the IANA's list of RTP payload format media types; this is a complete list of the MIME media types defined for potential use in RTP streams, such as those used in WebRTC. Most of these are not used in WebRTC contexts, but the list may still be useful.

See also RFC 4855 , which covers the registry of media types.

  • Introduction to WebRTC protocols
  • WebRTC connectivity
  • Guide to video codecs used on the web
  • Guide to audio codecs used on the web
  • Digital video concepts
  • Digital audio concepts

webrtcHacks

[10 years of] guides and information for WebRTC developers

Guide · apple, code, getUserMedia, ios, Safari · Chad Phillips · September 7, 2018

Guide to WebRTC with Safari in the Wild (Chad Phillips)

It has been more than a year since Apple first added WebRTC support to Safari. My original post reviewing the implementation continues to be popular here, but it does not reflect some of the updates since the first limited release. More importantly, given its differences and limitations, many questions still remained on how to best develop WebRTC applications for Safari.

I ran into Chad Phillips at Cluecon  (again) this year and we ended up talking about his arduous experience making WebRTC work on Safari. He had a great, recent list of tips and tricks so I asked him to share it here.

Chad is a long-time open source guy and contributor to the FreeSWITCH product. He has been involved with WebRTC development since 2015. He recently launched  MoxieMeet , a videoconferencing platform for online experiential events, where he is CTO and developed a lot of the insights for this post.

{“editor”: “chad hart“}

In June of 2017, Apple became the last major vendor to release support for WebRTC, paving the (still bumpy) road for platform interoperability.

And yet, more than a year later, I continue to be surprised by the lack of guidance available for developers to integrate their WebRTC apps with Safari/iOS. Outside of a couple posts by the Webkit team, some scattered StackOverflow questions, the knowledge to be gleaned from scouring the Webkit bug reports for WebRTC, and a few posts on this very website, I really haven’t seen much support available. This post is an attempt to begin rectifying the gap.

I have spent many months of hard work integrating WebRTC in Safari for a very complex videoconferencing application. Most of my time was spent getting iOS working, although some of the below pointers also apply to Safari on MacOS.

This post assumes you have some level of experience with implementing WebRTC — it’s not meant to be a beginner’s how to, but a guide for experienced developers to smooth the process of integrating their apps with Safari/iOS. Where appropriate I’ll point to related issues filed in the Webkit bug tracker so that you may add your voice to those discussions, as well as some other informative posts.

I did an awful lot of bushwhacking in order to claim iOS support in my app; hopefully the knowledge below will make for a smoother journey for you!

Some good news first

First, the good news:

  • Apple’s current implementation is fairly solid
  • For something simple like a 1-1 audio/video call, the integration is quite easy

Let’s have a look at some requirements and trouble areas.

General Guidelines and Annoyances

Use the current WebRTC spec.

If you’re building your application from scratch, I recommend using the current WebRTC API spec (it’s undergone several iterations). The following resources are great in this regard:

  • https://developer.mozilla.org/en-US/docs/Web/API/WebRTC_API
  • https://github.com/webrtc/samples

For those of you running apps with older WebRTC implementations, I’d recommend you upgrade to the latest spec if you can, as the next release of iOS  disables the legacy APIs by default. In particular, it’s best to avoid the legacy addStream APIs, which make it more difficult to manipulate tracks in a stream.

More background on this here: https://blog.mozilla.org/webrtc/the-evolution-of-webrtc/
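For example, the track-based equivalent of the legacy stream call looks like this (the helper name is my own):

```javascript
// Legacy API (avoid): pc.addStream(localStream);
// Current spec: add each track individually. Each call returns an
// RTCRtpSender you can later use to replace or remove that track.
function addLocalTracks(pc, stream) {
  return stream.getTracks().map((track) => pc.addTrack(track, stream));
}
```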

iPhone and iPad have unique rules – test both

Since the iPhone and iPad have different rules and limitations, particularly around video, I’d strongly recommend that you test your app on both devices. It’s probably smarter to start by getting it working fully on the iPhone, which seems to have more limitations than the iPad.

More background on this here: https://webkit.org/blog/6784/new-video-policies-for-ios

Let the iOS madness begin

It’s possible that may be all you need to get your app working on iOS. If not, now comes the bad news: the iOS implementation has some rather maddening bugs/restrictions, especially in more complex scenarios like multiparty conference calls.

Other browsers on iOS missing WebRTC integration

The WebRTC APIs have not yet been exposed to iOS browsers using WKWebView. In practice, this means that your web-based WebRTC application will only work in Safari on iOS, and not in any other browser the user may have installed (Chrome, for example), nor in an ‘in-app’ version of Safari.

To avoid user confusion, you’ll probably want to include some helpful user error message if they try to open your app in another browser/environment besides Safari proper.

Related issues:

  • https://bugs.webkit.org/show_bug.cgi?id=183201
  • https://bugs.chromium.org/p/chromium/issues/detail?id=752458
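One way to decide when to show such a message (a heuristic sketch — the user-agent patterns here are assumptions, not an official API):

```javascript
// Heuristic: on iOS, third-party browsers identify as CriOS (Chrome),
// FxiOS (Firefox), EdgiOS (Edge); in-app web views typically expose no
// RTCPeerConnection at all.
function needsSafariWarning(userAgent, hasRTCPeerConnection) {
  const isIOS = /iPad|iPhone|iPod/.test(userAgent);
  const isThirdPartyBrowser = /CriOS|FxiOS|EdgiOS/.test(userAgent);
  return isIOS && (isThirdPartyBrowser || !hasRTCPeerConnection);
}
```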

No beforeunload event, use pagehide

According to this Safari event documentation, the unload event has been deprecated, and the beforeunload event has been completely removed in Safari. So if you’re using these events, for example, to handle call cleanup, you’ll want to refactor your code to use the pagehide event on Safari instead.

source:  https://gist.github.com/thehunmonkgroup/6bee8941a49b86be31a787fe8f4b8cfe
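A cleanup registration that covers both cases might look like this (a sketch; the guard ensures cleanup runs only once even if both events fire):

```javascript
// Register call cleanup on pagehide (Safari) as well as beforeunload
// (other browsers), running the cleanup at most once.
function registerCallCleanup(target, cleanup) {
  let done = false;
  const runOnce = () => {
    if (!done) { done = true; cleanup(); }
  };
  target.addEventListener("pagehide", runOnce);
  target.addEventListener("beforeunload", runOnce);
}
```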

Getting & playing media, playsinline attribute.

Step one is to add the required playsinline attribute to your video tags, which allows the video to start playing inline on iOS.
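The original before/after snippets were stripped during extraction; a minimal reconstruction:

```html
<!-- Before: plays in most browsers, but not inline on iOS -->
<video autoplay></video>

<!-- After: playsinline lets iOS Safari start playback inside the page -->
<video autoplay playsinline></video>
```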

playsinline was originally only a requirement for Safari on iOS, but now you might need to use it in some cases in Chrome too – see Dag-Inge’s post for more on that.

See the thread here for details on this requirement: https://github.com/webrtc/samples/issues/929

Autoplay rules

Next you’ll need to be aware of the Webkit WebRTC rules on autoplaying audio/video. The main rules are:

  • MediaStream-backed media will autoplay if the web page is already capturing.
  • MediaStream-backed media will autoplay if the web page is already playing audio.
  • A user gesture is required to initiate any audio playback – WebRTC or otherwise.

This is good news for the common use case of a video call, since you’ve most likely already gotten permission from the user to use their microphone/camera, which satisfies the first rule. Note that these rules work alongside the base autoplay rules for MacOS and iOS, so it’s good to be aware of them as well.
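Because play() returns a promise that rejects when autoplay is blocked, you can detect the blocked case and fall back to a tap-to-play control (a sketch; the fallback callback is yours to supply):

```javascript
// Attempt playback; if autoplay is blocked, invoke a fallback that should
// surface a control requiring a user gesture.
function attemptPlay(videoElement, onBlocked) {
  return videoElement.play().catch(onBlocked);
}
```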

Related webkit posts:

  • https://webkit.org/blog/7763/a-closer-look-into-webrtc
  • https://webkit.org/blog/7734/auto-play-policy-changes-for-macos
  • https://webkit.org/blog/6784/new-video-policies-for-ios

No low/limited video resolutions

UPDATE 2019-08-18:

Unfortunately this bug has only gotten worse in  iOS 12, as their attempt to fix it broke the sending of video to peer connections for non-standard resolutions. On the positive side the issue does seem to be fully fixed in the latest iOS 13 Beta: https://bugs.webkit.org/show_bug.cgi?id=195868

Visiting https://jsfiddle.net/thehunmonkgroup/kmgebrfz/15/ (or the webrtcHacks WebRTC-Camera-Resolution project) in a WebRTC-compatible browser will give you a quick analysis of common resolutions supported by the tested device/browser combination. You’ll notice that in Safari on both MacOS and iOS, there aren’t any available low video resolutions such as the industry-standard QQVGA (160×120 pixels). These small resolutions are pretty useful for serving thumbnail-sized videos — think of the filmstrip of users in a Google Hangouts call, for example.

Now you could just send whatever the lowest available native resolution is along the peer connection and let the receiver’s browser downscale the video, but you’ll run the risk of saturating the download bandwidth for users that have less speedy internet in mesh/SFU scenarios.

I’ve worked around this issue by restricting the bitrate of the sent video, which is a fairly quick and dirty compromise. Another solution that would take a bit more work is to handle downscaling the video stream in your app before passing it to the peer connection, although that will result in the client’s device spending some CPU cycles.

Example code:

  • https://webrtc.github.io/samples/src/content/peerconnection/bandwidth/
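One way to cap the sent bitrate is via RTCRtpSender.setParameters() (a sketch; the post's own workaround may instead have used SDP-level `b=AS` munging, as the linked sample does):

```javascript
// Cap outgoing bitrate on a sender by setting maxBitrate (bits per second)
// on every encoding, creating one if the browser reports none.
async function capBitrate(sender, maxBitrate) {
  const params = sender.getParameters();
  if (!params.encodings || params.encodings.length === 0) {
    params.encodings = [{}];
  }
  for (const encoding of params.encodings) {
    encoding.maxBitrate = maxBitrate;
  }
  await sender.setParameters(params);
  return params;
}
```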

New getUserMedia() request kills existing stream track

If your application grabs media streams from multiple getUserMedia() requests, you are likely in for problems with iOS. From my testing, the issue can be summarized as follows: if getUserMedia() requests a media type requested in a previous getUserMedia(), the previously requested media track’s muted property is set to true, and there is no way to programmatically unmute it. Data will still be sent along a peer connection, but it’s not of much use to the other party with the track muted! This limitation is currently expected behavior on iOS.

I was able to successfully work around it by:

  • Grabbing a global audio/video stream early on in my application’s lifecycle
  • Using MediaStream.clone(), MediaStream.addTrack(), and MediaStream.removeTrack() to create/manipulate additional streams from the global stream without calling getUserMedia() again.

source:  https://gist.github.com/thehunmonkgroup/2c3be48a751f6b306f473d14eaa796a0

See this post for more: https://developer.mozilla.org/en-US/docs/Web/API/MediaStream  and

this related issue: https://bugs.webkit.org/show_bug.cgi?id=179363
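The workaround can be sketched as follows (function names are my own; the audio-only derivation is just one example of manipulating a clone):

```javascript
// Grab media once, early in the app lifecycle, and keep the stream around.
let globalStream = null;

async function initMedia(constraints) {
  globalStream = await navigator.mediaDevices.getUserMedia(constraints);
  return globalStream;
}

// Derive additional streams from the global one instead of calling
// getUserMedia() again (which would mute the earlier tracks on iOS).
function deriveAudioOnlyStream() {
  const clone = globalStream.clone();
  clone.getVideoTracks().forEach((track) => clone.removeTrack(track));
  return clone;
}
```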

Managing Media Devices

Media device ids change on page reload.

This has been improved as of iOS 12.2, where device IDs are now stable across browsing sessions after getUserMedia() has been called once. However, device IDs are still not preserved across browser sessions, so this improvement isn’t really helpful for storing a user’s device preferences longer term. For more info, see https://webkit.org/blog/8672/on-the-road-to-webrtc-1-0-including-vp8/

Many applications include support for user selection of audio/video devices. This eventually boils down to passing the deviceId to getUserMedia() as a constraint.

Unfortunately for you as a developer, as part of Webkit’s security protocols, random deviceIds are generated for all devices on each new page load. This means, unlike every other platform, you can’t simply stuff the user’s selected deviceId into persistent storage for future reuse.

The cleanest workaround I’ve found for this issue is:

  • Store both device.deviceId and device.label for the device the user selects
  • Try using the saved deviceId
  • If that fails, enumerate the devices again, and try looking up the deviceId from the saved device label.

On a related note: Webkit further prevents fingerprinting by only exposing a user’s actual available devices after the user has granted device access. In practice, this means you need to make a getUserMedia() call before you call enumerateDevices().
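That lookup flow, sketched (enumerateDevices is injected as a parameter here only to keep the example self-contained; in the browser you'd pass navigator.mediaDevices.enumerateDevices bound to mediaDevices):

```javascript
// Resolve a stored device preference: prefer the saved deviceId, fall back
// to matching the saved label, and return null if neither is found.
async function resolveDeviceId(saved, enumerateDevices) {
  const devices = await enumerateDevices();
  if (devices.some((device) => device.deviceId === saved.deviceId)) {
    return saved.deviceId;
  }
  const byLabel = devices.find((device) => device.label === saved.label);
  return byLabel ? byLabel.deviceId : null;
}
```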

source:  https://gist.github.com/thehunmonkgroup/197983bc111677c496bbcc502daeec56

Related issue: https://bugs.webkit.org/show_bug.cgi?id=179220

Related post: https://webkit.org/blog/7763/a-closer-look-into-webrtc

Speaker selection not supported

Webkit does not yet support HTMLMediaElement.setSinkId(), which is the API method used for assigning audio output to a specific device. If your application includes support for this, you’ll need to make sure it can handle cases where the underlying API support is missing.

source:  https://gist.github.com/thehunmonkgroup/1e687259167e3a48a55cd0f3260deb70

Related issue: https://bugs.webkit.org/show_bug.cgi?id=179415
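A simple guard before exposing speaker selection in your UI (sketch):

```javascript
// Only offer output-device selection when the browser implements setSinkId().
function supportsSpeakerSelection(mediaElement) {
  return typeof mediaElement.setSinkId === "function";
}
```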

PeerConnections & Calling

Beware, no vp8 support.

Support for VP8 has now been added as of iOS 12.2. See https://webkit.org/blog/8672/on-the-road-to-webrtc-1-0-including-vp8/

While the W3C spec clearly states that support for the VP8 video codec (along with the H.264 codec) is to be implemented, Apple has thus far chosen not to support it. Sadly, this is anything but a technical issue, as libwebrtc includes VP8 support, and Webkit actively disables it.

So at this time, my advice to achieve the best interoperability in various scenarios is:

  • Multiparty MCU – make sure that H.264 is a supported codec
  • Multiparty SFU – use H.264
  • Multiparty Mesh and peer to peer – pray everyone can negotiate a common codec

I say best interop because while this gets you a long way, it won’t be all the way. For example, Chrome for Android does not support software H.264 encoding yet. In my testing, many (but not all) Android phones have hardware H.264 encoding, but those that are missing hardware encoding will not work in Chrome for Android.

Associated bug reports:

  • https://bugs.webkit.org/show_bug.cgi?id=167257
  • https://bugs.webkit.org/show_bug.cgi?id=173141
  • https://bugs.chromium.org/p/chromium/issues/detail?id=719023

Send/receive only streams

As previously mentioned, iOS doesn’t support the legacy WebRTC APIs. However, not all browser implementations fully support the current specification either.

As of this writing, a good example is creating a send-only audio/video peer connection. iOS doesn’t support the legacy RTCPeerConnection.createOffer() options of offerToReceiveAudio/offerToReceiveVideo, and the current stable Chrome doesn’t support the RTCRtpTransceiver spec by default.
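Under the current spec, a receive-only connection is expressed through transceiver directions rather than the legacy offer options (a sketch; the helper name is my own):

```javascript
// Declare receive-only audio and video using the transceiver API instead of
// the legacy offerToReceiveAudio/offerToReceiveVideo offer options.
function setupRecvOnly(pc) {
  pc.addTransceiver("audio", { direction: "recvonly" });
  pc.addTransceiver("video", { direction: "recvonly" });
}
```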

Other more esoteric bugs and limitations

There are certainly other corner cases you can hit that seem a bit out of scope for this post. However, an excellent resource should you run aground is the Webkit issue queue, which you can filter just for WebRTC-related issues: https://bugs.webkit.org/buglist.cgi?component=WebRTC&list_id=4034671&product=WebKit&resolution=—

Remember, Webkit/Apple’s implementation is young

It’s still missing some features (like the speaker selection mentioned above), and in my testing isn’t as stable as the more mature implementation in Google Chrome.

There have also been some major bugs — capturing audio was completely broken for the majority of the iOS 12 Beta release cycle (thankfully they finally fixed that in Beta 8).

Apple’s long-term commitment to WebRTC as a platform isn’t clear, particularly because they haven’t released much information about it beyond basic support. As an example, the previously mentioned lack of VP8 support is troubling with respect to their intention to honor the agreed upon W3C specifications.

These are things worth thinking about when considering a browser-native implementation versus a native app. For now, I’m cautiously optimistic, and hopeful that their support of WebRTC will continue, and extend into other non-Safari browsers on iOS.

{“author”: “ Chad Phillips “}

Related Posts

Put in a Bug in Apple’s Apple – Alex Gouaillard’s Plan

Reader Interactions

September 7, 2018 at 9:42 am

One of the most detailed posts I’ve seen on the subject; thank you Chad, for sharing.

September 11, 2018 at 7:04 am

Please also note that Safari does not support data channels.

September 11, 2018 at 12:40 pm

@JSmitty, all of the ‘RTCDataChannel’ examples at https://webrtc.github.io/samples/ do work in Safari on MacOS, but do not currently work in Safari on iOS 11/12. I’ve filed https://bugs.webkit.org/show_bug.cgi?id=189503 and https://github.com/webrtc/samples/issues/1123 — would like to get some feedback on those before I incorporate this info into the post. Thanks for the heads up!

September 26, 2018 at 2:44 pm

OK, so I’ve confirmed data channels DO work in Safari on iOS, but there’s a caveat: iOS does not include local ICE candidates by default, and many of the data channel examples I’ve seen depend on that, as they’re merely sending data between two peer connections on the same device.

See https://bugs.webkit.org/show_bug.cgi?id=189503#c2 for how to temporarily enable local ICE on iOS.

January 22, 2020 at 4:21 pm

Great article. Thanks Chad & Chad for sharing your expertise.

As to DataChannel support. Looks like Safari officially still doesn’t support it according to the support matrix. https://developer.mozilla.org/en-US/docs/Web/API/RTCDataChannel

My own testing shows that DataChannel works between two Safari browser windows. However at this time (Jan 2020) it does not work between Chrome and Safari windows. Also fails between Safari and aiortc (Python WebRTC provider). DataChannel works fine between Chrome and aiortc.

A quick way to test this problem is via sharedrop.io. Transferring files works fine between same-brand browser windows, but not across brands.

Hope Apple is working on the compatibility issues with Chrome.

September 13, 2018 at 2:37 pm

Nice summary Chad. Thanks for this! –

September 18, 2018 at 4:29 pm

Very good post, Chad. Just what I was looking for. Thanks for sharing this knowledge. 🙂

October 4, 2018 at 10:11 am

Thanks for this Chad, currently struggling with this myself, where a portable ‘web’ app is being written.. I’m hopeful it will creep into wkwebview soon!

October 5, 2018 at 2:43 am

Thanks for detailing the issues.

One suggestion for any future article would be including the iOS Safari limitation on simultaneous playing of multiple media elements with audio present.

This means refactoring so that multiple (remote) audio sources are rendered by a single element.

October 5, 2018 at 9:46 am

There’s a good bit of detail/discussion about this limitation here: https://bugs.webkit.org/show_bug.cgi?id=176282

Media servers that mix the audio are a good solution.

December 18, 2018 at 1:10 pm

The same issue I’m facing New getUserMedia() request kills existing stream track. Let’s see whether it helps me or not.

December 19, 2018 at 6:23 am

iOS calling getUserMedia() again kills video display of first getUserMedia(). This is the issue I’m facing but I want to pass the stream from one peer to another peer.

April 26, 2019 at 12:07 am

Thank you Chad for sharing this, I was struggling with the resolution issue on iOS and I was not sure why I was not getting the full hd streaming. Hope this will get supported soon.

May 21, 2019 at 12:54 am

VP8 is a nightmare. I work on a platform where we publish user-generated content, including video, and the lack of support for VP8 forces us to do expensive transcoding on these videos. I wonder why won’t vendors just settle on a universal codec for mobile video.

August 18, 2019 at 2:17 pm

VP8 is supported as of iOS 12.2: https://webkit.org/blog/8672/on-the-road-to-webrtc-1-0-including-vp8/

July 3, 2019 at 3:38 am

Great Post! Chad I am facing an issue with iOS Safari, The issue is listed below. I am using KMS lib for room server handling and calling, There wasn’t any support for Safari / iOS safari in it, I added adapter.js (shim) to make my application run on Safari and iOS (Safari). After adding it worked perfectly on Safari and iOS, but when more than 2 persons join the call, The last added remote stream works fine but the existing remote stream(s) get struck/disconnected which means only peer to peer call works fine but not multiple remote streams. Can you please guide how to handle multiple remote streams in iOS (Safari). Thanks

July 3, 2019 at 1:45 pm

Your best bet is probably to search the webkit bugtracker, and/or post a bug there.

August 7, 2019 at 6:40 pm

No low/limited video resolutions: 1920×1080 not supported -> are you talking about IOS12 ? Because I’m doing 4K on IOS 12.3.1 with janus echo test with iphone XS Max (only one with 4K front cam) Of course if I run your script on my MBP it will say fullHD not supported -> because the cam is only 720p.

August 18, 2019 at 2:20 pm

That may be a standard camera resolution on that particular iPhone. The larger issue has been that only resolutions natively supported by the camera have been available, leading to difficultly in reliably selecting resolutions in apps, especially lower resolutions like those used in thumbnails.

Thankfully, this appears to be fully addressed in the latest beta of iOS 13.

April 18, 2020 at 7:01 am

How many days of work I lost before find this article. It’s amazing and explain a lot the reasons of all the strange bugs in iOS. Thank you so much.

September 21, 2020 at 11:38 am

Hi, i’m having issues with Safari on iOS. In the video tag, adding autoplay and playsinline doesn’t work on our Webrtc implementation. Obviously it works fine in any browser on any other platform.

I need to add the controls tag, then manually go to full screen and press play.

Is there a way to play the video inside the web page ?

December 9, 2020 at 2:35 am

First of all, thanks for detailing the issues.

This article is unique to provide many insides for WebRTC/Safari related issues. I learned a lot and applied some the techniques in our production application.

But I had very unique case which I am struggling with right now, as you might guess with Safari. I would be very grateful if you can help me or at least to guide to the right direction.

We have webrtc-based one-2-one video chat, one side always mobile app (host) who is the initiator and the other side is always browser both desktop and mobile. Making the app working across different networks was pain in the neck up to recently, but managed to fix this by changing some configurations. So the issue was in different networks WebRTC was not generating relay and most of the time server reflexive candidates, as you know without at least stun provided candidates parties cannot establish any connection. Solution was simple as though it look a lot of search on google, ( https://github.com/pion/webrtc/issues/810 ), we found out that mobile data providers mostly assigning IPv6 to mobile users. And when they used mobile data plan instead of local wifi, they could not connect to each other. By the way, we are using cloud provider for STUN/TURN servers (Xirsys). And when we asked their technical support team they said their servers should handle IPv6 based requests, but in practice it did not work. So we updated RTCPeerConnection configurations, namely, added optional constraints (and this optional constraints are also not provided officially, found them from other non official sources), the change was just disabling IPv6 on both mobile app (iOS and Android) and browser. After this change, it just worked perfectly until we found out Safari was not working at all. So we reverted back for Safari and disabled IPv6 for other cases (chrome, firefox, Android browsers)

const iceServers = [ { urls: "stun:" }, { urls: ["turn:","turn:",… ], credential: "secret", username: "secret" } ];

let RTCConfig;
// just dirty browser detection
const ua = navigator.userAgent.toLocaleLowerCase();
const isSafari = ua.includes("safari") && !ua.includes("chrome");

if (isSafari) { RTCConfig = iceServers; } else { RTCConfig = { iceServers, constraints: { optional: [{ googIPv6: false }] } }; }

if I wrap the iceServers array inside an envelope object with optional constraints and use it in new RTCPeerConnection(RTCConfig); it is throwing an error saying: Attempted to assign readonly property, pointing into => safari_shim.js : 255

Can you please help with this issue, our main customers use iPhone, so making our app work in Safari across different networks are very critical to our business. If you provide some kind of paid consultation, it is also ok for us

Looking forward to hearing from you

July 13, 2022 at 2:42 pm

Thanks for the great summary regarding Safari/IOS. The work-around for the low-bandwidth issue is very interesting. I played with the sample. It worked as expected. It’s played on the same device, isn’t it? When I tried to add a similar “a=AS:500\r\n” to the sdp and tested it on different devices – one being a windows laptop with browser: Chrome, another an ipad with browser: Safari – it seemed not working. The symptom was: the stream was not received or sent. In a word, the connection for media communications was not there. I checked the sdp, it’s like,

"sdp": { "type": "offer", "sdp": "v=0\r\n o=- 3369656808988177967 2 IN IP4 127.0.0.1\r\n s=-\r\n t=0 0\r\n a=group:BUNDLE 0 1 2\r\n a=extmap-allow-mixed\r\n a=msid-semantic: WMS 7BLOSVujr811EZHSiFZI2t8yMML8LpOgo0in\r\n m=audio 9 UDP/TLS/RTP/SAVPF 111 63 103 104 9 0 8 106 105 13 110 112 113 126\r\n c=IN IP4 0.0.0.0\r\n b=AS:500\r\n … }

Also I didn’t quite understand the statement in the article. “I’ve worked around this issue by restricting the bitrate of the sent video, which is a fairly quick and dirty compromise. Another solution that would take a bit more work is to handle downscaling the video stream in your app before passing it to the peer connection” – don’t the both scenarios work on the sent side?

Leave a Reply Cancel reply

Your email address will not be published. Required fields are marked *

Save my name, email, and website in this browser for the next time I comment.

This site uses Akismet to reduce spam. Learn how your comment data is processed .

Using HEIF or HEVC media on Apple devices

Upgrade to iOS 11 or later or macOS High Sierra or later to view, edit, or duplicate HEIF or HEVC media captured with an iPhone or iPad.

iOS 11 and macOS High Sierra introduced support for these new, industry-standard media formats:

HEIF (High Efficiency Image File Format) for photos

HEVC (High Efficiency Video Coding), also known as H.265, for videos

HEIF and HEVC offer better compression than JPEG and H.264, so they use less storage space on your devices and iCloud Photos , while preserving the same visual quality.

To fully view, edit, or duplicate HEIF and HEVC media on your device, upgrade to the latest version of iOS 11 or later or macOS High Sierra or later .

Capturing this media

When using iOS 11 or later, iPadOS, or visionOS, the following devices can capture media in HEIF or HEVC format. Other devices can view, edit, or duplicate this media with limitations, if using iOS 11 or later, iPadOS, visionOS, or macOS High Sierra or later.

iPhone 7 or iPhone 7 Plus or later

iPad (6th generation) or later

iPad Air (3rd generation) or later

iPad mini (5th generation) or later

iPad Pro (10.5 inch), iPad Pro (11 inch), and iPad Pro 12.9-inch (2nd generation) or later

Apple Vision Pro

Learn how to identify your iPhone model or iPad model .

Though capturing in HEIF and HEVC format is recommended, you can set these devices to capture media using the older formats, which are more broadly compatible with other devices and operating systems:

Go to Settings > Camera.

Tap Formats.

Tap Most Compatible. This setting is available only on devices that can capture media in HEIF or HEVC format, and only when using iOS 11 or later, or iPadOS.

All new photos and videos will now use JPEG or H.264 format. To return to using the space-saving HEIF and HEVC formats, choose High Efficiency.

Working with this media

Support for HEIF and HEVC is built into iOS 11 and later, iPadOS, visionOS, and macOS High Sierra and later, which lets you view, edit, and duplicate this media in apps that support these features.

On some older devices, support for HEVC is affected by the resolution and frame rate (fps) of the video. Resolutions of 1080p or lower and frame rates of 60 fps or lower are more broadly compatible with older devices. To reduce the resolution and frame rate that your compatible iPhone or iPad uses for recording video, go to Settings > Camera > Record Video, as well as Settings > Camera > Record Slo-mo.


Sharing and converting this media

If sharing this media via iCloud Photos, the media is preserved in its original format, resolution, and frame rate. If your device can't fully view, edit, or duplicate HEIF or HEVC media in iCloud Photos, or displays it at a lower resolution, upgrade to iOS 11 or later or macOS High Sierra or later .

If sharing this media using other methods, such as AirDrop, Messages, or email, the media might automatically be shared in a more compatible format, such as JPEG or H.264, depending on whether the receiving device supports the newer media format.

To convert HEIF and HEVC media manually, export it to a different format from an Apple or third-party app. For example:

If you open an HEIF image in Photos or Preview on your Mac, choose File > Export, then choose a format such as JPEG or PNG before saving.

If you open an HEVC video in QuickTime Player on your Mac, choose File > Export As. In the dialog that opens, use the pop-up menu to change from Smaller File Size (HEVC) to Greater Compatibility (H.264) before you click Save. If you see an HEVC checkbox instead of a pop-up menu, just deselect the checkbox before saving.

Importing this media via USB

When you import HEIF or HEVC media from an attached iPhone or iPad into Photos, Image Capture, or a PC, the media might be converted to JPEG or H.264. If you don't want it to be converted, go to Settings > Photos on your device, then scroll down and tap Keep Originals.


The Challenging Path to WebRTC H.264 Video Codec Hardware Support

WebRTC H.264 hardware acceleration is no guarantee for anything. Not even for hardware acceleration.


There was a big war going on when it came to the video codec in WebRTC. Should we all be using VP8 or should we be using H.264 ? A lot of digital ink was spilled on this topic (here as well as in other places). The final decision that was made?

Both VP8 and H.264 became mandatory to implement by browsers.

So… which of these video codecs should you use in your application? Here’s a free mini video course to help you decide.


Fast forward to today, and you have this interesting conundrum:

  • Chrome, Firefox and Edge implement VP8 and H.264
  • Safari implements H.264. No VP8

Leaving aside the question of what mandatory really means in English (I’ll leave that for the good people at Apple to review), that’s only a fraction of the whole story.

There are reasons why one would like to use VP8:

  • It has been there from the start, so its implementation is highly optimized already
  • Royalty free, so no need to deal with patents and payments and whatnot. I know there’s FUD around patents in VP8, but for the most part, 100% of the industry is treating it as free
  • It nicely supports simulcast , so quite friendly to video group calling scenarios

There are reasons why one would like to use H.264:

  • You already have H.264 equipment, so don’t want to transcode – be it cameras, video conferencing gear or the need to broadcast via HLS or RTMP
  • You want to support Safari
  • You want to leverage hardware based encoding and decoding to increase battery life on your mobile devices

I want to open up the challenges here. Especially in leveraging hardware based encoding in WebRTC H.264 implementations. Before we dive into them though, there’s one more thing I want to make clear:

You can use a mobile app with VP8 (or H.264) on iOS devices.

The fact that Apple decided NOT to implement VP8, doesn’t bar your own mobile app from supporting it.

WebRTC H.264 Challenges

Before you decide to go with a WebRTC H.264 implementation, you need to take into consideration a few of the challenges associated with it.

I want to start by explaining one thing about video codecs – they come with multiple features, knobs, capabilities, configurations and profiles. These additional doozies are there to improve the final quality of the video, but they aren’t always available. To use them, BOTH the encoder and the decoder need to support them, which is where a lot of the problems you’ll be facing stem from.
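To make the encoder/decoder matching concrete, here is an illustrative sketch (the helper and the sample data are mine, not a browser API) of the intersection that SDP offer/answer negotiation effectively computes. The object shape mirrors what `RTCRtpSender.getCapabilities("video").codecs` returns; real negotiation has more lenient rules (e.g. level asymmetry for H.264), but the gist is the same:

```javascript
// A codec is only usable when both sides support the same mimeType AND the
// same parameters (for H.264 that's the profile-level-id in sdpFmtpLine).
// A matching mimeType with a different profile is not a match.
function commonCodecs(localCodecs, remoteCodecs) {
  return localCodecs.filter(l =>
    remoteCodecs.some(r =>
      r.mimeType.toLowerCase() === l.mimeType.toLowerCase() &&
      (r.sdpFmtpLine || "") === (l.sdpFmtpLine || "")
    )
  );
}
```

If the intersection comes back empty for video, the call simply has no usable video codec – which is exactly the failure mode described further down in this post.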

#1 – You might not have access to a hardware implementation of H.264

In the past, developers had no access to the H.264 codec on iOS. You could only get it to record a file or play one back – not use it to stream media in real time. This has changed, and now that’s possible.

But there’s also Android to contend with. And in Android, you’re living in the wild wild west and not the world wide web.

It would be safe to say that all modern Android devices today have H.264 encoder and decoder available in hardware acceleration, which is great. But do you have access to it?

(Illustration: the value chain of H.264 hardware acceleration)

The illustration above shows the value chain of the hardware acceleration. Who’s in charge of exposing that API to you as a developer?

The silicon designer? The silicon manufacturer? The one who built the hardware acceleration component and licensed it to the chipset vendor? Maybe the handset manufacturer? Or is it Google?

The answer is all of them and none of them.

WebRTC is a corner case of a niche of a capability inside the device. No one cares about it enough to make sure it works out of the factory gate. Which is why in some of the devices, you won’t have access to the hardware acceleration for H.264 and will be left to deal with a software implementation.

Which brings us to the next challenge:

#2 – Software implementations of H.264 encoders might require royalty payments


Since you will be needing a software implementation of H.264, you might end up needing to pay royalties for using this codec.

I know there’s this thing called OpenH264. I am not a lawyer, though my understanding is that you can’t really compile it on your own if you want to keep it “open” in the sense of no royalty payments. And you’ll probably need to compile it or link it with your code statically to work.

This being the case, tread carefully here.

Oh, and if you’re using a 3rd party CPaaS, you might want to ask that vendor if they are taking care of that royalty payment for you – my guess is that they aren’t.

#3 – Simulcast isn’t really supported. At least not everywhere


Simulcast is how most of us do group video calls these days. At least until SVC becomes more widely available.

What simulcast does is allow devices to send multiple resolutions/bitrates of the same video towards the server. This removes the need for an SFU to transcode media and, at the same time, lets the SFU offer the most suitable experience for each participant without resorting to lowest-common-denominator strategies.

The problem is that simulcast in H.264 isn’t available yet in any of the web browsers. It is coming to Chrome, but that’s about it for now. And even when it will be, there’s no guarantee that Apple will be so kind as to add it to Safari.

It is better than nothing, though not as good as VP8 simulcast support today.
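For comparison, this is roughly how simulcast is requested from a browser that supports it for the negotiated codec. The rid names and bitrates below are arbitrary choices of mine; only the commented `addTransceiver()` call is the actual WebRTC API:

```javascript
// Three simulcast layers: quarter, half and full resolution of the captured
// track, each with its own bitrate ceiling. The SFU then forwards whichever
// layer suits each receiver.
function buildSimulcastEncodings() {
  return [
    { rid: "q", scaleResolutionDownBy: 4, maxBitrate: 150000 },
    { rid: "h", scaleResolutionDownBy: 2, maxBitrate: 500000 },
    { rid: "f", scaleResolutionDownBy: 1, maxBitrate: 1500000 },
  ];
}

// In a browser:
// pc.addTransceiver(videoTrack, {
//   direction: "sendonly",
//   sendEncodings: buildSimulcastEncodings(),
// });
```

The point of the section above is that asking for these layers with H.264 doesn’t mean the browser will actually honor them.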

#4 – H.264 hardware implementations aren’t always compatible with WebRTC

Here’s the kicker – I learned this one last month, from a thread in discuss-webrtc – the implementation requirements of H.264 in WebRTC are such that it isn’t always easy to use hardware acceleration even if and when it is available.

Read this from that thread:

Remember to differentiate between the encoder and the decoder. The Chrome software encoder is OpenH264 – https://github.com/cisco/openh264 Contributions are welcome, but the encoder currently doesn’t support either High or Main (or even full Baseline), according to the README file. Hardware encoders vary greatly in their capabilities.

Harald Alvestrand from Google offers here a few interesting statements. Let me translate them for you:

  • H.264 encoders and decoders are different kinds of pain. You need to solve the problem of each of these separately (more about that later)


  • The encoder’s implementation in OpenH264 isn’t really High profile or Main profile or even Baseline profile. It just implements something in-between that fits well into real-time communications
  • And if you decide not to use it and use a hardware encoder instead, then be sure to check what that encoder is capable of. As this is the wild wild west, as we said, even if the encoder is accessible, it is going to be like a box of chocolates – you never know what they’re going to support


And then comes this nice reply from the good guys at Fuze:

@Harald: we’ve actually been facing issues related to the different profiles support with OpenH264 and the hardware encoders. Wouldn’t it make more sense for Chrome to only offer profiles supported by both? Here’s the bad corner case we hit: we were accidentally picking a profile only supported by the hardware encoder on Mac. As a result, when Chrome detected CPU issues for instance, it would try to reduce quality to a level not supported by the hardware encoder which actually led to a fallback to the software encoder… which didn’t support the profile. There didn’t seem to be a good way to handle this scenario as the other side would just stop receiving anything.

If I may translate this one as well for your entertainment:

  • You pick a profile for the encoder which might not be available in the decoder. And Chrome doesn’t seem to be doing the matchmaking here (not sure if that’s true, or if Chrome could even do that if it really wanted to)
  • Mac’s hardware H.264 encoder, like any other Apple product, has its very own profile configuration that only it supports. When Chrome falls back to the software encoder, that configuration is no longer available, which kills off the ability to keep the stream going
  • This is one edge case, but there are probably more like it lurking around

So. Got hardware encoder and/or decoder. Might not be able to use it.

#5 – For now, H.264 video quality is… lower than VP8

That implementation of H.264 in WebRTC? It isn’t as good as the VP8 one. At least not in Chrome.

I’ve taken testRTC for a spin on this one, running AppRTC with it. Once with VP8 and another time with H.264. Here’s what I got:

(Chart: testRTC results for the VP8 and H.264 test runs)

This is for the same scenario running on the same machines encoding the same raw video. The outgoing bitrate variance for VP8 is 0.115 while it is 0.157 for H.264 (the lower the better). Not such a big difference. The framerate of H.264 seems to be somewhat lower at times.

I tried out our new scoring system in testRTC that is available in beta on both these test runs, and got these numbers:

(Chart: testRTC scores for the two test runs)

The 9.0 score was given to the VP8 test run while H.264 got an 8.8 score.

There’s a bit of a difference with how stable VP8’s implementation is versus the H.264 one. It isn’t that Cisco’s H.264 code is bad. It might just be that the way it got integrated into WebRTC isn’t as optimized as the VP8’s integration.

Then there’s this from the same discuss-webrtc thread:

We tried h264 baseline at 6mbps. The problem we ran into is the bitrate drastically jumped all over the place.

I am not sure if this relates to the fact that it is H.264 or just to trying to use WebRTC at such high bitrates, or the machine or something else entirely. But the encoder here is suspect as well.

I also have a feeling that Google’s own telemetry and stats about the video codecs being used will point to VP8 having a larger portion of ongoing WebRTC sessions.

#6 – The future lies in AV1

After VP8 and H.264 there’s VP9 and HEVC/H.265 respectively.

A WebRTC video codec war has started between HEVC and AV1 recently. I am siding with AV1 here – AV1 includes among its founding members Apple, Google, Microsoft and Mozilla (who all happen to be the companies behind the major web browsers).

The best trajectory for video codecs in WebRTC looks something like this:

(Diagram: the expected trajectory of video codecs in WebRTC)

Why doesn’t this happen in VP8?

It does. To some extent. But a lot less.

The challenges in VP8 are limited as it is mostly software based, with a single main implementation to baseline against – the one coming from Google directly. Which happens to be the one used by Chrome’s WebRTC as well.

Since everyone works against the same codebase, using the same bitstreams and software to test against, you don’t see the same set of headaches.

There’s also the limitation of available hardware acceleration for VP8, which ends up being an advantage here – hardware acceleration is hard to upgrade. Software is easy. Especially if it gets automatically upgraded every 6-8 weeks like Chrome does.

Hardware beats software at speed and performance. But software beats hardware on flexibility and agility. Every. Day. of. The. Week.

What’s Next?

The current situation isn’t a healthy one, but it is all we’ve got to work with.

I am not advocating against H.264, just against using it blindly.


How the future will unfold depends greatly on the progress made in AV1 as well as the steps Apple will be taking with WebRTC and their decisions of the video codecs to incorporate into Webkit, Safari and the iOS ecosystem.

Whatever you end up deciding to go with, make sure you do it with your eyes wide open.


H.264 codec use certainly comes with its challenges, but it absolutely has its place in WebRTC. When interfacing with video phones, the only available codec is H.264. I think a broader investigation into the performance characteristics of the browsers’ software implementations is warranted.

We have seen Chrome’s browser-embedded H.264 software encoder/decoder begin exhibiting poor performance or even losing entire frames despite the host machine having more than adequate hardware. Chrome only officially supports a fragment of hardware accelerated platforms, and currently I believe it is restricted purely to Nvidia chipsets.

Our efforts have been to uncover these H264-related video problems and to identify solutions. It is a juggling act between correctly configured buffers, SDP video options, the host machine’s capabilities, and the browser’s codec implementation itself. It is a complex and delicate dance that can result in pristine video performance or, the more likely scenario, unusable video. Many factors contribute to this, and I would love to have a much larger discussion on the topic to bring more attention to the codec’s implementation, what can currently be done, and what needs to be done in the future to address compatibility and quality concerns.

Thanks John

H.264 definitely has its place in WebRTC. If somehow that part wasn’t understood then sorry for that 🙂

The thing is, that if you want to use H.264 in WebRTC today, then you better have a very good reason for doing it, and you better know what you’re doing and how to make it work.

Nice post Tsahi. The thing I found more interesting about H.264 is using it for 1:1 iOS calls. In that case you don’t need to worry about Android and OpenH264 and you get access to High Profile.

The fact that it is (supposed to be) used by Duo in that specific setup makes me feel it is worth the effort and is tested/stable enough in that setup. Still, it is not trivial to use, and the battery savings may not be as big as they were supposed to be.

As with the case of John, your 2c are worth their weight in pure gold 😉

Mind you that the VP8 realtime hardware requirements haven’t really changed since 2012: https://www.webmproject.org/hardware/rtc-coding-requirements/ The page doesn’t say “simulcast” anywhere but the requirements ensure the encoder and decoder are capable of decoding what Chrome produces with simulcast including temporal scalability.

and VP8 hardware acceleration is around the corner for Intel’s Kaby Lake processors: https://groups.google.com/a/chromium.org/forum/?utm_medium=email&utm_source=footer#!msg/blink-dev/vbYCDv5ve5w/pq-uV_QRAwAJ

Thanks Fippo 🙂

I think VP8 is a lot simpler. If you add it to your hardware, there’s only one reason for you to do so (or a main one) – WebRTC. And the way you test WebRTC today is by running it against Chrome.

H.264 is the swiss army knife of the current video codec generation, which means it gets pitted against many different use cases where WebRTC is but a minor niche.

As far as I understood, codec hardware acceleration today is mainly accelerated components, algorithms and methods rather than complete codecs. So your GPU could do efficient transforms, motion vector estimation, etc., which can then be used to accelerate any codec (VP8/9 as well as H.26x). With this methodology you also get the advantage of being able to update the CPU codec logic without replacing the accelerating hardware and without falling back to software-only processing.

Thanks. Do note you are making the assumption that all devices everywhere have a GPU that is accessible to you, with all the building blocks needed to get that codec implemented, and that the implementation is uniform in its feature set across all devices. Which is where things fall apart – even before the fact that many of these devices don’t accelerate video coding using a GPU in the first place.

Apparently the Edge team also doesn’t understand the meaning of “mandatory”. The underdog of WebRTC: Data Channels! 😉

How do these pieces of video support software and specifications for VP8, H.264 and WebRTC relate to the SpectrumTV website in Safari (it currently works – does not use VP8) and in Firefox (it does not work since v62.0, ~9/5/18 – uses VP8)?

Safari is still in its infancy when it comes to WebRTC, so things tend to break there more than in other places. Who fixes stuff when things break? Should it be Safari or the other browser vendors? That’s not easy to answer.

As a developer, the question then becomes who do you care about more at the moment? Safari users or Firefox/other users?

Assuming iOS (not inside an app) is of high priority for you, then you’ll be opting for H.264 and make do with whatever quirks and limits Safari imposes on you. Otherwise, I’d just focus on VP8 if I were you.

H.264 High Profile level 4.2, CODECS attribute of the EXT-X-STREAM-INF tag

Hi Experts,

iPhone 6/6s support H.264 High Profile level 4.2.

http://www.apple.com/jp/iphone-6s/specs/

http://www.apple.com/jp/iphone-6/specs/

However, there is no description of "H.264 High Profile level 4.2" in Apple's formal spec sheet.

https://developer.apple.com/library/ios/documentation/NetworkingInternet/Conceptual/StreamingMediaGuide/FrequentlyAskedQuestions/FrequentlyAskedQuestions.html

What is the value of H.264 High Profile level 4.2?

e.g. H.264 High Profile level 4.1 "avc1.640029"

  • HTTP Live Streaming

These are determined by various specs but the value you seek is:

avc1.64002A
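The value can be decoded rather than memorised: the hex digits after `avc1.` are profile_idc, constraint flags and level_idc, one byte each. A small sketch (the helper name is mine):

```javascript
// avc1.PPCCLL: PP = profile_idc, CC = constraint flags, LL = level_idc (hex).
// 0x64 = 100 = High profile; 0x2A = 42 = level 4.2 -> "avc1.64002A".
function parseAvc1(codec) {
  const hex = codec.split(".")[1];
  const profiles = { 66: "Baseline", 77: "Main", 100: "High" };
  const profileIdc = parseInt(hex.slice(0, 2), 16);
  const levelIdc = parseInt(hex.slice(4, 6), 16);
  return {
    profile: profiles[profileIdc] || `profile_idc ${profileIdc}`,
    level: (levelIdc / 10).toFixed(1), // 42 -> "4.2", 41 -> "4.1"
  };
}
```

This is also why High Profile level 4.1 is "avc1.640029" – same profile byte, level_idc 0x29 = 41.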

MPEG-4/H.264 video format

Commonly used video compression format. Browser support (reconstructed from caniuse data; some browser labels were lost in extraction and are inferred from caniuse's usual ordering):

  • Chrome: 4 - 127 supported
  • Edge: 12 - 123 supported
  • Safari: 3.1 not supported; 3.2 - TP supported
  • Firefox: 2 - 20 not supported; 21 - 34 partial support; 35 - 128 supported
  • Opera: 9 - 24 not supported; 25 - 109 supported
  • IE: 5.5 - 8 not supported; 9 - 11 supported
  • Chrome for Android: supported
  • Safari on iOS: 17.5 supported
  • Samsung Internet: 4 - 24 supported
  • Opera Mini: all versions not supported
  • Opera Mobile: 10 not supported; 11 - 12.1 supported; 80 supported
  • UC Browser for Android: 15.5 supported
  • Android Browser: 2.1 - 4.3 partial support; 4.4 - 4.4.4 supported
  • Firefox for Android: 125 partial support
  • QQ Browser: 14.9 supported
  • Baidu Browser: 13.52 supported
  • KaiOS Browser: 2.5 - 3 supported
Firefox supports H.264 on Windows 7 and later since version 21. Firefox supports H.264 on Linux since version 26 if the appropriate gstreamer plug-ins are installed.

Partial support for older Firefox versions refers to the lack of support in OS X & some non-Android Linux platforms.
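In practice, these per-version tables can be sidestepped at runtime with `HTMLMediaElement.canPlayType()`. The helper below is a sketch of mine that takes the probe function as a parameter so it can run outside a browser; in a page you would pass `t => document.createElement("video").canPlayType(t)`:

```javascript
// canPlayType() returns "", "maybe" or "probably" for a MIME type with codecs.
// Probes a few common H.264 profile/level combinations.
function h264Support(probe) {
  return {
    baseline30: probe('video/mp4; codecs="avc1.42E01E"'),
    high41: probe('video/mp4; codecs="avc1.640029"'),
    high42: probe('video/mp4; codecs="avc1.64002A"'),
  };
}
```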

WebRTC by Dr Alex

WebRTC by an insider – use at your own risk 🙂 How to enable HEVC/H265 and AV1 in #webrtc in your browser.

CoSMo provided the H264 simulcast implementation to chrome and safari (based on an earlier patch by Highfive, kudos to them). We helped Intel and Apple work together to put H265 in libwebrtc. As AOMedia members, we also were among the first to have a real-time implementation of AV1 in libwebrtc, and have been regularly speaking publicly at different conferences about it. Today, some of this work is becoming available in consumer versions of the browsers. Let us walk you through enabling it and taking it out for a ride.

If you are only interested in enabling the codecs, you can just skip the following sections and go directly to the bottom of this blog post.

Why are Apple and Google taking decisions that appear to contradict each other when it comes to codecs, or webrtc?

If you want to generate traffic, take all the announcements, point to the inconsistencies, and claim that there is a codec war, or secret roadmaps. If you can bring it to the conspiracy theory level, and create a big enough controversy, you have generated all the buzz you could hope for.

More seriously, first, a little word about the perceived inconsistencies of decisions. People see Apple, or Google, as unique entities with a common unified goal. The reality is, beyond a certain size, any corporation is divided into teams, or units, with different goals, and internal politics emerge.

Let’s take Apple for example: you have the VOD side and the RTC side. On the VOD side you have the king technology, HLS, which has reigned without mercy over the streaming world for most of the past decade. Even though it is **NOT** a standard, it is used everywhere, by almost everyone, and is mandated by the App Store for streaming anything to an Apple device. The entire AppleTV ecosystem is based on it. Lots of revenue is based on it. This technology is based on the usual codecs, specifically H264 and HEVC/H265 for video. Those are mandatory to use if you want to be compliant and for your app to go in the store. That gives Apple huge leverage. The counterpart is that Apple provides hardware H265 support in all its devices. Needless to say, latency or bandwidth management (or security, which HLS delegates to the underlying transport) is not a focus for this side of Apple; the focus is more on what they call quality, and resolution. In the HLS world, quality means adding buffers everywhere.

Now, it is worth stopping here for a minute to address the licensing problem. The H265 licensing situation is a mess. However, as with any codec, if you leverage a hardware implementation, the burden of the license is on the hardware provider. That’s a key point that a lot of people in the webrtc industry forget or underestimate.

Let me provide a telling example.

When we provided the H264 simulcast implementation to libwebrtc as a patch, it took 9 months to get accepted by Google. Apple actually adopted it before Google did. The (official) reason? Legal review. Even with a much simpler license landscape for H264 than for H265, the legal review for H264 simulcast took Google 9 months (and maybe 6 for Apple). Admittedly, there was more than the codec to be validated (RTP, simulcast itself), but still.

Most people in the WebRTC ecosystem know that today H264 in WebRTC is only supported on Android on a limited number of devices that have hardware acceleration. One of the reasons is that shipping it with a software implementation would make the browser vendors liable. Windows Firefox users have been prompted to download the openH264 dll the first time they used that codec. That is because for Firefox not to be liable, they both:

  • cannot compile the codec implementation (which Cisco does for everyone with openH264),
  • but also cannot ship it. The end user needs to install it themselves. Since legally the only binding action on a web page is a click in a prompt, there you go.

On the RTC side of things at Apple, you have FaceTime and now Safari. For them, using the hardware accelerated encoder is always good (battery life has a great impact on UX), but it should not come at the cost of latency. Those are antagonistic goals to the much bigger HLS/MPEG-DASH/CMAF team within Apple. For example, as of today, none of the hardware encoders has a true real-time mode, even if a private API called VTC is used, and should be made public soon, with among other things a 0-frame buffer.

They are fighting an uphill battle, as outside of the voip/webrtc world, #webrtc is perceived as a low quality solution that is barely good enough for 1-1 social calls and not much more. The fact that HLS/MPEG-DASH is directly generating a lot of revenue, while Safari and FaceTime are not, makes their fight even more difficult. What has helped is the improvement of webrtc usage over the past 10 years, the pressure from Cisco originally (a big part of the Cisco/Apple partnership was about enabling the same experience with webrtc that FaceTime, or native calls, could provide, and it led to the opening of the H264 hardware acceleration API and ReplayKit, among other things), and then from all the other big players that had a product depending on webrtc (first app in the App Store: FB Messenger; second app in the App Store: WhatsApp…). The current work-from-home situation also provides extra pressure for vendors to support webrtc.

With all this in mind, let’s revisit some of Apple decisions.

Why add H265 to webrtc? The question is more, why not? The code was already made available by Intel. Apple already had H265 hardware acceleration. It does not reduce the capacity for those who can’t support it, but it allows peers with the capacity to have an improved experience. It’s only a win. It also helped internally in bringing the two groups together and leveraging a common asset. In practice, it took less than 2 days for one very jet-lagged Apple engineer and the main coder behind the implementation at Intel to get the code into libwebrtc-in-webkit and to have a working version. It was not a big effort.

Of course, Apple never takes a decision about webrtc without asking Google/Mozilla/MS about it, because they cannot afford to maintain too big a fork, and because the web platform is consensus based. MS already had hardware-only support for H265, and Google was not opposed to the patch, provided it was hardware based, replicating more or less what they had done with H264 for Android.

Why care about AOMedia and AV1 if you’re betting on H265?

First, those are not mutually exclusive. The decision to use H265 in multiple Apple products was taken a long time ago, while AV1 is more recent, and already much more efficient. As far as the codec is concerned, AV1 has been out for years now, and AOMedia is already discussing AV2. While Apple, as usual, did not comment on why they joined AOMedia, the individual they sent belongs to the HLS group and only asked questions about CMAF packaging of the AV1 bitstream, seemingly indicating that it has nothing to do with WebRTC for the time being.

The fact that the webrtc team within safari decided to support H265 and that the HLS team at Apple is involved in AOMedia do not seem linked at all for now.

What is interesting about AOMedia is that the membership comes with some very interesting protective measures when it comes to IP. This is a problem that has plagued the streaming industry for a long time, and the state of H265 licensing is but one example of it. It is possible that Apple joining AOMedia was in part motivated by the legal protection that AOMedia provides, both in terms of legal due diligence on codecs and the litigation protection fund for its members.

What about Google then?

Google is also a big corporation, with the same problems. If you look at webrtc, you have the core webrtc team in Stockholm, the Hangouts/Meet team, the webrtc network team in Seattle, the webrtc Chrome team in Mountain View, the Stadia team and the Duo team in Seattle. There are two P. managers, Niklas and Huib, and then the founding fathers, Serge and Justin. That of course overlaps with the Chrome team (if only for the implementation, build system, test system, QA…) and the YouTube team, which owns the codec development, and the Stadia client. Lots of stakeholders, with sometimes different focuses, roadmaps, and timelines.

For YouTube, the 2-pass version of the encoder is the most important. Since they own the codec team, that can make the real-time aspects of codec development a secondary goal at times. That being said, from our experience, libaom had a real-time mode way before SVT-AV1 (the other contender for becoming the official reference code base for AV2 moving forward) did, so practically, there has been no problem for us during the AV1-in-webrtc project.

The Google representative at AOMedia (and founder) is from the YouTube team. However, in the real-time group inside the codec group, which discusses the RTP payload and SVC for MANEs and SFUs specifically, the Google webrtc team is represented by no less than 3 engineers and one engineering manager.

Since so many products depend on webrtc, the two bigger groups (Chrome and YouTube) have to take webrtc into account. When working on enabling real-time AV1 in webrtc in Chrome, all those groups had to be involved; it seemed to be a first for those specific individuals, but business as usual otherwise. libaom had to add a real-time mode, which was done in April 2019. The default libaom support in Chrome was non-realtime, and decoder only, which makes sense for YouTube but is not appropriate for webrtc. The libaom version had to be updated, and support for the encoder added in Chrome, which was done in March 2020. Then the webrtc team had to add the RTP payload support, which took roughly 5 months, between November 2019 and April 2020. Then we jumped in to prepare an SFU and the tests. SVC support should land soon and is more or less the last remaining big feature before declaring beta status, at which point we need to test, test, and test some more to find corner cases and make sure the spec is complete.

The only way to be faster is to have a product that sits on the side. At Google, as far as WebRTC and communications are concerned, this is Duo. Duo is native only, has its own infrastructure, and can afford, to a certain extent, to release features without depending on any other Google group, or on Chrome (or the standards committees), to agree to them and implement them. That's how Duo was the first product to release true end-to-end encryption, and how Duo is the first one to release AV1 support. That explains why Duo is always first and the other Google products catch up later.

ENABLING H265 IN SAFARI (TECH PREVIEW)

Every now and then, I plant Easter eggs in my blog posts. That allows me to differentiate from other bloggers who copy content, proxying one's quotes and sources without giving any credit. With Apple news, this is especially efficient, as there is much less info out there, and most WebRTC bloggers do not bother reading WebKit commits and tickets.

Once upon a time, I blogged about the first screen sharing support in Safari, or the new WebDriver API in Safari for testing WebRTC features. Of course, those were not usable as is, and you needed some pretty low-level command-line magic, and/or a recompile of WebKit nightly, to make them work.

This time again, I pointed to STP 104, and very quickly (time is of the essence to capture the light), the info spread around based on a one-liner in the release notes. Only those who actually tested realised that the support had been added but was not enabled.

This time, the Easter egg goes to voluntas, the main developer of SORA, one of the best WebRTC SFUs out there.

So if you want to enable H265 in Safari, you will need to get Safari Technology Preview 105 or newer and enable it through the Develop menu, under "Experimental Features" and then the WebRTC-prefixed options. The first results show a drastic reduction in CPU consumption, as expected.


HOW TO ENABLE REAL-TIME AV1 IN CHROME

libwebrtc and Chrome are notoriously difficult to compile. Asking people who want to benchmark or do black-box testing to compile them themselves is unrealistic. That applies to many individuals currently working on the AV1 payload specification, who still need to make sure things run the way they should.

To mitigate this problem and make the AV1 implementation easier to test, CoSMo is preparing pre-compiled native example apps (peerconnection_client, AppRTCMobile) for everyone, running on Mac, Windows, Linux, iOS, and Android. They come in two flavors: 1-to-1 in P2P, and 1-to-1 through an SFU. The code is also open source, for the more advanced coders out there to draw inspiration from. While libwebrtc comes with AV1 enabled by default (for desktop platforms), Chrome does not yet. Here too, CoSMo is providing custom builds of Chrome on Windows, Mac, and Linux for people to test their apps. We provide the necessary patches for AppRTCMobile (macOS) and Chrome (desktop) for now, and plan to add support in the Obj-C and Android bindings unless Google beats us to it 🙂

Once the underlying implementation provides SVC support, the SFU code will be updated to support AV1 SVC as well.

All of that (and a lot more) is explained in the wiki section of the corresponding project:

https://github.com/CoSMoSoftware/libwebrtc-AV1/wiki

Happy Hacking.


