These files should not be added

This commit is contained in:
Gabe Kangas
2020-10-19 22:22:33 -07:00
parent 226242f07b
commit 205ef8926e
6368 changed files with 0 additions and 603959 deletions

@@ -1,486 +0,0 @@
<a name="2.2.0"></a>
# [2.2.0](https://github.com/videojs/http-streaming/compare/v2.1.0...v2.2.0) (2020-09-25)
### Features
* default handleManifestRedirect to true ([#927](https://github.com/videojs/http-streaming/issues/927)) ([556321f](https://github.com/videojs/http-streaming/commit/556321f))
* support MPD.Location ([#926](https://github.com/videojs/http-streaming/issues/926)) ([c4a43d7](https://github.com/videojs/http-streaming/commit/c4a43d7))
* Update minimumUpdatePeriod handling ([#942](https://github.com/videojs/http-streaming/issues/942)) ([8648e76](https://github.com/videojs/http-streaming/commit/8648e76))
### Bug Fixes
* audio groups with the same uri as media do not count ([#952](https://github.com/videojs/http-streaming/issues/952)) ([3927c0c](https://github.com/videojs/http-streaming/commit/3927c0c))
* dash manifest not refreshed if only some playlists are updated ([#949](https://github.com/videojs/http-streaming/issues/949)) ([31d3441](https://github.com/videojs/http-streaming/commit/31d3441))
* detect demuxed video underflow gaps ([#948](https://github.com/videojs/http-streaming/issues/948)) ([d0ef298](https://github.com/videojs/http-streaming/commit/d0ef298))
* MPD not refreshed if minimumUpdatePeriod is 0 ([#954](https://github.com/videojs/http-streaming/issues/954)) ([3a0682f](https://github.com/videojs/http-streaming/commit/3a0682f)), closes [#942](https://github.com/videojs/http-streaming/issues/942)
* noop vtt segment loader handle data ([#959](https://github.com/videojs/http-streaming/issues/959)) ([d1dcd7b](https://github.com/videojs/http-streaming/commit/d1dcd7b))
* report the correct buffered regardless of playlist change ([#950](https://github.com/videojs/http-streaming/issues/950)) ([043ccc6](https://github.com/videojs/http-streaming/commit/043ccc6))
* Throw a player error when trying to play DRM content without eme ([#938](https://github.com/videojs/http-streaming/issues/938)) ([ce4d6fd](https://github.com/videojs/http-streaming/commit/ce4d6fd))
* use playlist NAME when available as its ID ([#929](https://github.com/videojs/http-streaming/issues/929)) ([2269464](https://github.com/videojs/http-streaming/commit/2269464))
* use TIME_FUDGE_FACTOR rather than rounding by decimal digits ([#881](https://github.com/videojs/http-streaming/issues/881)) ([7eb112d](https://github.com/videojs/http-streaming/commit/7eb112d))
### Chores
* **package:** remove engine check in pkcs7 ([#947](https://github.com/videojs/http-streaming/issues/947)) ([89392fa](https://github.com/videojs/http-streaming/commit/89392fa))
* mark angel one dash subs as broken ([#956](https://github.com/videojs/http-streaming/issues/956)) ([56a0970](https://github.com/videojs/http-streaming/commit/56a0970))
* mediaConfig_ -> startingMediaInfo_, startingMedia_ -> currentMediaInfo_ ([#953](https://github.com/videojs/http-streaming/issues/953)) ([8801d1c](https://github.com/videojs/http-streaming/commit/8801d1c))
* playlist selector logging ([#921](https://github.com/videojs/http-streaming/issues/921)) ([ccdbaef](https://github.com/videojs/http-streaming/commit/ccdbaef))
* update m3u8-parser to v4.4.3 ([#928](https://github.com/videojs/http-streaming/issues/928)) ([af5b4ee](https://github.com/videojs/http-streaming/commit/af5b4ee))
### Reverts
* fix: use playlist NAME when available as its ID ([#929](https://github.com/videojs/http-streaming/issues/929)) ([#957](https://github.com/videojs/http-streaming/issues/957)) ([fe8376b](https://github.com/videojs/http-streaming/commit/fe8376b))
<a name="2.1.0"></a>
# [2.1.0](https://github.com/videojs/http-streaming/compare/v2.0.0...v2.1.0) (2020-07-28)
### Features
* Easier manual playlist switching, add codecs to renditions ([#850](https://github.com/videojs/http-streaming/issues/850)) ([f60fa1f](https://github.com/videojs/http-streaming/commit/f60fa1f))
* exclude all incompatible browser/muxer codecs ([#903](https://github.com/videojs/http-streaming/issues/903)) ([2d0f0d7](https://github.com/videojs/http-streaming/commit/2d0f0d7))
* expose canChangeType on the VHS property ([#911](https://github.com/videojs/http-streaming/issues/911)) ([a4ab285](https://github.com/videojs/http-streaming/commit/a4ab285))
* let back buffer be configurable ([8c96e6c](https://github.com/videojs/http-streaming/commit/8c96e6c))
* Support codecs switching when possible via sourceBuffer.changeType ([#841](https://github.com/videojs/http-streaming/issues/841)) ([267cc34](https://github.com/videojs/http-streaming/commit/267cc34))
### Bug Fixes
* always append init segment after trackinfo change ([#913](https://github.com/videojs/http-streaming/issues/913)) ([ea3650a](https://github.com/videojs/http-streaming/commit/ea3650a))
* cleanup mediasource listeners on dispose ([#871](https://github.com/videojs/http-streaming/issues/871)) ([e50f4c9](https://github.com/videojs/http-streaming/commit/e50f4c9))
* do not try to use unsupported audio ([#896](https://github.com/videojs/http-streaming/issues/896)) ([7711b26](https://github.com/videojs/http-streaming/commit/7711b26))
* do not use remove source buffer on ie 11 ([#904](https://github.com/videojs/http-streaming/issues/904)) ([1ab0f07](https://github.com/videojs/http-streaming/commit/1ab0f07))
* do not wait for audio appends for muxed segments ([#894](https://github.com/videojs/http-streaming/issues/894)) ([406cbcd](https://github.com/videojs/http-streaming/commit/406cbcd))
* Fixed issue with MPEG-Dash MPD Playlist Finalisation during Live Play. ([#874](https://github.com/videojs/http-streaming/issues/874)) ([c807930](https://github.com/videojs/http-streaming/commit/c807930))
* handle null return value from CaptionParser.parse ([#890](https://github.com/videojs/http-streaming/issues/890)) ([7b8fff2](https://github.com/videojs/http-streaming/commit/7b8fff2)), closes [#863](https://github.com/videojs/http-streaming/issues/863)
* have reloadSourceOnError get src from player ([#893](https://github.com/videojs/http-streaming/issues/893)) ([1e50bc5](https://github.com/videojs/http-streaming/commit/1e50bc5)), closes [videojs/video.js#6744](https://github.com/videojs/video.js/issues/6744)
* initialize EME for all playlists and PSSH values ([#872](https://github.com/videojs/http-streaming/issues/872)) ([e0e497f](https://github.com/videojs/http-streaming/commit/e0e497f))
* more conservative stalled download check, better logging ([#884](https://github.com/videojs/http-streaming/issues/884)) ([615e77f](https://github.com/videojs/http-streaming/commit/615e77f))
* pause/abort loaders before an exclude, preventing bad appends ([#902](https://github.com/videojs/http-streaming/issues/902)) ([c9126e1](https://github.com/videojs/http-streaming/commit/c9126e1))
* stop alt loaders on main mediachanging to prevent append race ([#895](https://github.com/videojs/http-streaming/issues/895)) ([8690c78](https://github.com/videojs/http-streaming/commit/8690c78))
* Support aac data with or without id3 tags by using mux.js@5.6.6 ([#899](https://github.com/videojs/http-streaming/issues/899)) ([9c742ce](https://github.com/videojs/http-streaming/commit/9c742ce))
* Use revokeObjectURL dispose for created MSE blob urls ([#849](https://github.com/videojs/http-streaming/issues/849)) ([ca73cac](https://github.com/videojs/http-streaming/commit/ca73cac))
* Wait for sourceBuffer creation so drm setup uses valid codecs ([#878](https://github.com/videojs/http-streaming/issues/878)) ([f879563](https://github.com/videojs/http-streaming/commit/f879563))
### Chores
* Add vhs & mpc (vhs.masterPlaylistController_) to window of index.html ([#875](https://github.com/videojs/http-streaming/issues/875)) ([bab61d6](https://github.com/videojs/http-streaming/commit/bab61d6))
* **demo:** add a representations selector to the demo page ([#901](https://github.com/videojs/http-streaming/issues/901)) ([0a54ae2](https://github.com/videojs/http-streaming/commit/0a54ae2))
* fix tears of steel playready on the demo page ([#915](https://github.com/videojs/http-streaming/issues/915)) ([29a10d0](https://github.com/videojs/http-streaming/commit/29a10d0))
* keep window vhs/mpc up to date on source switch ([#883](https://github.com/videojs/http-streaming/issues/883)) ([3ba85fd](https://github.com/videojs/http-streaming/commit/3ba85fd))
* update DASH stream urls ([#918](https://github.com/videojs/http-streaming/issues/918)) ([902c2a5](https://github.com/videojs/http-streaming/commit/902c2a5))
* update local video.js ([#876](https://github.com/videojs/http-streaming/issues/876)) ([c2cc9aa](https://github.com/videojs/http-streaming/commit/c2cc9aa))
* use playready license server ([#916](https://github.com/videojs/http-streaming/issues/916)) ([6728837](https://github.com/videojs/http-streaming/commit/6728837))
### Code Refactoring
* remove duplicate bufferIntersection code in util/buffer.js ([#880](https://github.com/videojs/http-streaming/issues/880)) ([0ca43bd](https://github.com/videojs/http-streaming/commit/0ca43bd))
* simplify setupEmeOptions and add tests ([#869](https://github.com/videojs/http-streaming/issues/869)) ([e3921ed](https://github.com/videojs/http-streaming/commit/e3921ed))
<a name="2.0.0"></a>
# [2.0.0](https://github.com/videojs/http-streaming/compare/v2.0.0-rc.2...v2.0.0) (2020-06-16)
### Features
* add external vhs properties and deprecate hls and dash references ([#859](https://github.com/videojs/http-streaming/issues/859)) ([22af0b2](https://github.com/videojs/http-streaming/commit/22af0b2))
* Use VHS playback on any non-Safari browser ([#843](https://github.com/videojs/http-streaming/issues/843)) ([225d127](https://github.com/videojs/http-streaming/commit/225d127))
### Chores
* fix demo page on firefox, always use vhs on safari ([#851](https://github.com/videojs/http-streaming/issues/851)) ([d567b7d](https://github.com/videojs/http-streaming/commit/d567b7d))
* **stats:** update vhs usage in the stats page ([#867](https://github.com/videojs/http-streaming/issues/867)) ([4dda42a](https://github.com/videojs/http-streaming/commit/4dda42a))
### Code Refactoring
* Move caption parser to a webworker, saving 5732 bytes and offloading work ([#863](https://github.com/videojs/http-streaming/issues/863)) ([491d194](https://github.com/videojs/http-streaming/commit/491d194))
* remove aes-decrypter objects from Hls, saving 1415 gzipped bytes ([#860](https://github.com/videojs/http-streaming/issues/860)) ([a4f8302](https://github.com/videojs/http-streaming/commit/a4f8302))
### Documentation
* add supported features doc ([#848](https://github.com/videojs/http-streaming/issues/848)) ([38f5860](https://github.com/videojs/http-streaming/commit/38f5860))
### Reverts
* "fix: Use middleware and a wrapped function for seeking instead of relying on unreliable 'seeking' events ([#161](https://github.com/videojs/http-streaming/issues/161))"([#856](https://github.com/videojs/http-streaming/issues/856)) ([1165f8e](https://github.com/videojs/http-streaming/commit/1165f8e))
### BREAKING CHANGES
* The Hls object which was exposed on videojs no longer has Decrypter, AsyncStream, and decrypt from aes-decrypter.
<a name="1.10.2"></a>
## [1.10.2](https://github.com/videojs/http-streaming/compare/v1.10.1...v1.10.2) (2019-05-13)
### Bug Fixes
* clear the blacklist for other playlists if final rendition errors ([#479](https://github.com/videojs/http-streaming/issues/479)) ([fe3b378](https://github.com/videojs/http-streaming/commit/fe3b378)), closes [#396](https://github.com/videojs/http-streaming/issues/396) [#471](https://github.com/videojs/http-streaming/issues/471)
* **development:** rollup watch, via `npm run watch`, should work for es/cjs ([#484](https://github.com/videojs/http-streaming/issues/484)) ([ad6f292](https://github.com/videojs/http-streaming/commit/ad6f292))
* **HLSe:** slice keys properly on IE11 ([#506](https://github.com/videojs/http-streaming/issues/506)) ([681cd6f](https://github.com/videojs/http-streaming/commit/681cd6f))
* **package:** update mpd-parser to version 0.8.1 🚀 ([#490](https://github.com/videojs/http-streaming/issues/490)) ([a49ad3a](https://github.com/videojs/http-streaming/commit/a49ad3a))
* **package:** update mux.js to version 5.1.2 🚀 ([#477](https://github.com/videojs/http-streaming/issues/477)) ([57a38e9](https://github.com/videojs/http-streaming/commit/57a38e9)), closes [#503](https://github.com/videojs/http-streaming/issues/503) [#504](https://github.com/videojs/http-streaming/issues/504)
* **source-updater:** run callbacks after setting timestampOffset ([#480](https://github.com/videojs/http-streaming/issues/480)) ([6ecf859](https://github.com/videojs/http-streaming/commit/6ecf859))
* livestream timeout issues ([#469](https://github.com/videojs/http-streaming/issues/469)) ([cf3fafc](https://github.com/videojs/http-streaming/commit/cf3fafc)), closes [segment#16](https://github.com/segment/issues/16) [segment#15](https://github.com/segment/issues/15)
* remove both vttjs listeners to prevent leaking one of them ([#495](https://github.com/videojs/http-streaming/issues/495)) ([1db1e72](https://github.com/videojs/http-streaming/commit/1db1e72))
### Performance Improvements
* don't enable captionParser for audio or subtitle loaders ([#487](https://github.com/videojs/http-streaming/issues/487)) ([358877f](https://github.com/videojs/http-streaming/commit/358877f))
<a name="1.10.1"></a>
## [1.10.1](https://github.com/videojs/http-streaming/compare/v1.10.0...v1.10.1) (2019-04-16)
### Bug Fixes
* **dash-playlist-loader:** clear out timers on dispose ([#472](https://github.com/videojs/http-streaming/issues/472)) ([2f1c222](https://github.com/videojs/http-streaming/commit/2f1c222))
### Reverts
* "fix: clear the blacklist for other playlists if final rendition errors ([#396](https://github.com/videojs/http-streaming/issues/396))" ([#471](https://github.com/videojs/http-streaming/issues/471)) ([dd55028](https://github.com/videojs/http-streaming/commit/dd55028))
<a name="1.10.0"></a>
# [1.10.0](https://github.com/videojs/http-streaming/compare/v1.9.3...v1.10.0) (2019-04-12)
### Features
* add option to cache encryption keys in the player ([#446](https://github.com/videojs/http-streaming/issues/446)) ([599b94d](https://github.com/videojs/http-streaming/commit/599b94d)), closes [#140](https://github.com/videojs/http-streaming/issues/140)
* add support for dash manifests describing sidx boxes ([#455](https://github.com/videojs/http-streaming/issues/455)) ([80dde16](https://github.com/videojs/http-streaming/commit/80dde16))
### Bug Fixes
* clear the blacklist for other playlists if final rendition errors ([#396](https://github.com/videojs/http-streaming/issues/396)) ([6e6c8c2](https://github.com/videojs/http-streaming/commit/6e6c8c2))
* on dispose, don't call abort on SourceBuffer until after remove() has finished ([3806750](https://github.com/videojs/http-streaming/commit/3806750))
### Documentation
* **README:** update broken link to full docs ([#440](https://github.com/videojs/http-streaming/issues/440)) ([fbd615c](https://github.com/videojs/http-streaming/commit/fbd615c))
<a name="1.9.3"></a>
## [1.9.3](https://github.com/videojs/http-streaming/compare/v1.9.2...v1.9.3) (2019-03-21)
### Bug Fixes
* **id3:** ignore unsupported id3 frames ([#437](https://github.com/videojs/http-streaming/issues/437)) ([7040b7d](https://github.com/videojs/http-streaming/commit/7040b7d)), closes [videojs/video.js#5823](https://github.com/videojs/video.js/issues/5823)
### Documentation
* add diagrams for playlist loaders ([#426](https://github.com/videojs/http-streaming/issues/426)) ([52201f9](https://github.com/videojs/http-streaming/commit/52201f9))
<a name="1.9.2"></a>
## [1.9.2](https://github.com/videojs/http-streaming/compare/v1.9.1...v1.9.2) (2019-03-14)
### Bug Fixes
* expose `custom` segment property in the segment metadata track ([#429](https://github.com/videojs/http-streaming/issues/429)) ([17510da](https://github.com/videojs/http-streaming/commit/17510da))
<a name="1.9.1"></a>
## [1.9.1](https://github.com/videojs/http-streaming/compare/v1.9.0...v1.9.1) (2019-03-05)
### Bug Fixes
* fix for streams that would occasionally never fire an `ended` event ([fc09926](https://github.com/videojs/http-streaming/commit/fc09926))
* Fix video playback freezes caused by not using absolute current time ([#401](https://github.com/videojs/http-streaming/issues/401)) ([957ecfd](https://github.com/videojs/http-streaming/commit/957ecfd))
* only fire seekablechange when values of seekable ranges actually change ([#415](https://github.com/videojs/http-streaming/issues/415)) ([a4c056e](https://github.com/videojs/http-streaming/commit/a4c056e))
* Prevent infinite buffering at the start of looped video on edge ([#392](https://github.com/videojs/http-streaming/issues/392)) ([b6d1b97](https://github.com/videojs/http-streaming/commit/b6d1b97))
### Code Refactoring
* align DashPlaylistLoader closer to PlaylistLoader states ([#386](https://github.com/videojs/http-streaming/issues/386)) ([5d80fe7](https://github.com/videojs/http-streaming/commit/5d80fe7))
<a name="1.9.0"></a>
# [1.9.0](https://github.com/videojs/http-streaming/compare/v1.8.0...v1.9.0) (2019-02-07)
### Features
* Use exposed transmuxer time modifications for more accurate conversion between program and player times ([#371](https://github.com/videojs/http-streaming/issues/371)) ([41df5c0](https://github.com/videojs/http-streaming/commit/41df5c0))
### Bug Fixes
* m3u8 playlist is not updating when only endList changes ([#373](https://github.com/videojs/http-streaming/issues/373)) ([c7d1306](https://github.com/videojs/http-streaming/commit/c7d1306))
* Prevent exceptions from being thrown by the MediaSource ([#389](https://github.com/videojs/http-streaming/issues/389)) ([8c06366](https://github.com/videojs/http-streaming/commit/8c06366))
### Chores
* Update mux.js to the latest version 🚀 ([#397](https://github.com/videojs/http-streaming/issues/397)) ([38ec2a5](https://github.com/videojs/http-streaming/commit/38ec2a5))
### Tests
* added test for playlist not updating when only endList changes ([#394](https://github.com/videojs/http-streaming/issues/394)) ([39d0be2](https://github.com/videojs/http-streaming/commit/39d0be2))
<a name="1.8.0"></a>
# [1.8.0](https://github.com/videojs/http-streaming/compare/v1.7.0...v1.8.0) (2019-01-10)
### Features
* expose custom M3U8 mapper API ([#325](https://github.com/videojs/http-streaming/issues/325)) ([609beb3](https://github.com/videojs/http-streaming/commit/609beb3))
### Bug Fixes
* **id3:** cuechange event not being triggered on audio-only HLS streams ([#334](https://github.com/videojs/http-streaming/issues/334)) ([bab70fd](https://github.com/videojs/http-streaming/commit/bab70fd)), closes [#130](https://github.com/videojs/http-streaming/issues/130)
<a name="1.7.0"></a>
# [1.7.0](https://github.com/videojs/http-streaming/compare/v1.6.0...v1.7.0) (2019-01-04)
### Features
* expose custom M3U8 parser API ([#331](https://github.com/videojs/http-streaming/issues/331)) ([b0643a4](https://github.com/videojs/http-streaming/commit/b0643a4))
<a name="1.6.0"></a>
# [1.6.0](https://github.com/videojs/http-streaming/compare/v1.5.1...v1.6.0) (2018-12-21)
### Features
* Add allowSeeksWithinUnsafeLiveWindow property ([#320](https://github.com/videojs/http-streaming/issues/320)) ([74b28e8](https://github.com/videojs/http-streaming/commit/74b28e8))
### Chores
* add clock.ticks to now-async operations in tests ([#315](https://github.com/videojs/http-streaming/issues/315)) ([895c86a](https://github.com/videojs/http-streaming/commit/895c86a))
### Documentation
* Add README entry on DRM and videojs-contrib-eme ([#307](https://github.com/videojs/http-streaming/issues/307)) ([93b6167](https://github.com/videojs/http-streaming/commit/93b6167))
<a name="1.5.1"></a>
## [1.5.1](https://github.com/videojs/http-streaming/compare/v1.5.0...v1.5.1) (2018-12-06)
### Bug Fixes
* added missing manifest information on to segments (EXT-X-PROGRAM-DATE-TIME) ([#236](https://github.com/videojs/http-streaming/issues/236)) ([a35dd09](https://github.com/videojs/http-streaming/commit/a35dd09))
* remove player props on dispose to stop middleware ([#229](https://github.com/videojs/http-streaming/issues/229)) ([cd13f9f](https://github.com/videojs/http-streaming/commit/cd13f9f))
### Documentation
* add dash to package.json description ([#267](https://github.com/videojs/http-streaming/issues/267)) ([3296c68](https://github.com/videojs/http-streaming/commit/3296c68))
* add documentation for reloadSourceOnError ([#266](https://github.com/videojs/http-streaming/issues/266)) ([7448b37](https://github.com/videojs/http-streaming/commit/7448b37))
<a name="1.5.0"></a>
# [1.5.0](https://github.com/videojs/http-streaming/compare/v1.4.2...v1.5.0) (2018-11-13)
### Features
* Add useBandwidthFromLocalStorage option ([#275](https://github.com/videojs/http-streaming/issues/275)) ([60c88ae](https://github.com/videojs/http-streaming/commit/60c88ae))
### Bug Fixes
* don't wait for requests to finish when encountering an error in media-segment-request ([#286](https://github.com/videojs/http-streaming/issues/286)) ([970e3ce](https://github.com/videojs/http-streaming/commit/970e3ce))
* throttle final playlist reloads when using DASH ([#277](https://github.com/videojs/http-streaming/issues/277)) ([1c2887a](https://github.com/videojs/http-streaming/commit/1c2887a))
<a name="1.4.2"></a>
## [1.4.2](https://github.com/videojs/http-streaming/compare/v1.4.1...v1.4.2) (2018-11-01)
### Chores
* pin to node 8 for now ([#279](https://github.com/videojs/http-streaming/issues/279)) ([f900dc4](https://github.com/videojs/http-streaming/commit/f900dc4))
* update mux.js to 5.0.1 ([#282](https://github.com/videojs/http-streaming/issues/282)) ([af6ee4f](https://github.com/videojs/http-streaming/commit/af6ee4f))
<a name="1.4.1"></a>
## [1.4.1](https://github.com/videojs/http-streaming/compare/v1.4.0...v1.4.1) (2018-10-25)
### Bug Fixes
* **subtitles:** set default property if default and autoselect are both enabled ([#239](https://github.com/videojs/http-streaming/issues/239)) ([ee594e5](https://github.com/videojs/http-streaming/commit/ee594e5))
<a name="1.4.0"></a>
# [1.4.0](https://github.com/videojs/http-streaming/compare/v1.3.1...v1.4.0) (2018-10-24)
### Features
* limited experimental DASH multiperiod support ([#268](https://github.com/videojs/http-streaming/issues/268)) ([a213807](https://github.com/videojs/http-streaming/commit/a213807))
* smoothQualityChange flag ([#235](https://github.com/videojs/http-streaming/issues/235)) ([0e4fdf9](https://github.com/videojs/http-streaming/commit/0e4fdf9))
### Bug Fixes
* immediately setup EME if available ([#263](https://github.com/videojs/http-streaming/issues/263)) ([7577e90](https://github.com/videojs/http-streaming/commit/7577e90))
<a name="1.3.1"></a>
## [1.3.1](https://github.com/videojs/http-streaming/compare/v1.3.0...v1.3.1) (2018-10-15)
### Bug Fixes
* ensure content loops ([#259](https://github.com/videojs/http-streaming/issues/259)) ([26300df](https://github.com/videojs/http-streaming/commit/26300df))
<a name="1.3.0"></a>
# [1.3.0](https://github.com/videojs/http-streaming/compare/v1.2.6...v1.3.0) (2018-10-05)
### Features
* add an option to ignore player size in selection logic ([#238](https://github.com/videojs/http-streaming/issues/238)) ([7ae42b1](https://github.com/videojs/http-streaming/commit/7ae42b1))
### Documentation
* Update CONTRIBUTING.md ([#242](https://github.com/videojs/http-streaming/issues/242)) ([9d83e9d](https://github.com/videojs/http-streaming/commit/9d83e9d))
<a name="1.2.6"></a>
## [1.2.6](https://github.com/videojs/http-streaming/compare/v1.2.5...v1.2.6) (2018-09-21)
### Bug Fixes
* stutter after fast quality change in IE/Edge ([#213](https://github.com/videojs/http-streaming/issues/213)) ([2c0d9b2](https://github.com/videojs/http-streaming/commit/2c0d9b2))
### Documentation
* update issue template to link to the troubleshooting guide ([#215](https://github.com/videojs/http-streaming/issues/215)) ([413f0e8](https://github.com/videojs/http-streaming/commit/413f0e8))
* update README notes for video.js 7 ([#200](https://github.com/videojs/http-streaming/issues/200)) ([d68ce0c](https://github.com/videojs/http-streaming/commit/d68ce0c))
* update troubleshooting guide for Edge/mobile Chrome ([#216](https://github.com/videojs/http-streaming/issues/216)) ([21e5335](https://github.com/videojs/http-streaming/commit/21e5335))
<a name="1.2.5"></a>
## [1.2.5](https://github.com/videojs/http-streaming/compare/v1.2.4...v1.2.5) (2018-08-24)
### Bug Fixes
* fix replay functionality ([#204](https://github.com/videojs/http-streaming/issues/204)) ([fd6be83](https://github.com/videojs/http-streaming/commit/fd6be83))
<a name="1.2.4"></a>
## [1.2.4](https://github.com/videojs/http-streaming/compare/v1.2.3...v1.2.4) (2018-08-13)
### Bug Fixes
* Remove buffered data on fast quality switches ([#113](https://github.com/videojs/http-streaming/issues/113)) ([bc94fbb](https://github.com/videojs/http-streaming/commit/bc94fbb))
<a name="1.2.3"></a>
## [1.2.3](https://github.com/videojs/http-streaming/compare/v1.2.2...v1.2.3) (2018-08-09)
### Chores
* link to minified example in main page ([#189](https://github.com/videojs/http-streaming/issues/189)) ([15a7f92](https://github.com/videojs/http-streaming/commit/15a7f92))
* use netlify for easier testing ([#188](https://github.com/videojs/http-streaming/issues/188)) ([d2e0d35](https://github.com/videojs/http-streaming/commit/d2e0d35))
<a name="1.2.2"></a>
## [1.2.2](https://github.com/videojs/http-streaming/compare/v1.2.1...v1.2.2) (2018-08-07)
### Bug Fixes
* typeof minification ([#182](https://github.com/videojs/http-streaming/issues/182)) ([7c68335](https://github.com/videojs/http-streaming/commit/7c68335))
* Use middleware and a wrapped function for seeking instead of relying on unreliable 'seeking' events ([#161](https://github.com/videojs/http-streaming/issues/161)) ([6c68761](https://github.com/videojs/http-streaming/commit/6c68761))
### Chores
* add logo ([#184](https://github.com/videojs/http-streaming/issues/184)) ([a55626c](https://github.com/videojs/http-streaming/commit/a55626c))
### Documentation
* add note for Safari captions error ([#174](https://github.com/videojs/http-streaming/issues/174)) ([7b03530](https://github.com/videojs/http-streaming/commit/7b03530))
### Tests
* add support for real segments in tests ([#178](https://github.com/videojs/http-streaming/issues/178)) ([2b07fca](https://github.com/videojs/http-streaming/commit/2b07fca))
<a name="1.2.1"></a>
## [1.2.1](https://github.com/videojs/http-streaming/compare/v1.2.0...v1.2.1) (2018-07-17)
### Bug Fixes
* convert non-latin characters in IE ([#157](https://github.com/videojs/http-streaming/issues/157)) ([17678fb](https://github.com/videojs/http-streaming/commit/17678fb))
<a name="1.2.0"></a>
# [1.2.0](https://github.com/videojs/http-streaming/compare/v1.1.0...v1.2.0) (2018-07-16)
### Features
* **captions:** write in-band captions from DASH fmp4 segments to the textTrack API ([#108](https://github.com/videojs/http-streaming/issues/108)) ([7c11911](https://github.com/videojs/http-streaming/commit/7c11911))
### Chores
* add welcome bot config from video.js ([#150](https://github.com/videojs/http-streaming/issues/150)) ([922cfee](https://github.com/videojs/http-streaming/commit/922cfee))
<a name="1.1.0"></a>
# [1.1.0](https://github.com/videojs/http-streaming/compare/v1.0.2...v1.1.0) (2018-06-06)
### Features
* Utilize option to override native on tech ([#76](https://github.com/videojs/http-streaming/issues/76)) ([5c7ab4c](https://github.com/videojs/http-streaming/commit/5c7ab4c))
### Chores
* update tests and pages for video.js 7 ([#102](https://github.com/videojs/http-streaming/issues/102)) ([d6f5005](https://github.com/videojs/http-streaming/commit/d6f5005))
<a name="1.0.2"></a>
## [1.0.2](https://github.com/videojs/http-streaming/compare/v1.0.1...v1.0.2) (2018-05-17)
### Bug Fixes
* make project Video.js 7 ready ([#92](https://github.com/videojs/http-streaming/issues/92)) ([decad87](https://github.com/videojs/http-streaming/commit/decad87))
* make sure that es build is babelified ([#97](https://github.com/videojs/http-streaming/issues/97)) ([5f0428d](https://github.com/videojs/http-streaming/commit/5f0428d))
### Documentation
* update documentation with a glossary and intro page, added DASH background ([#94](https://github.com/videojs/http-streaming/issues/94)) ([4b0fde9](https://github.com/videojs/http-streaming/commit/4b0fde9))
<a name="1.0.1"></a>
## [1.0.1](https://github.com/videojs/http-streaming/compare/v1.0.0...v1.0.1) (2018-04-12)
### Bug Fixes
* minified build ([#84](https://github.com/videojs/http-streaming/issues/84)) ([2402ac6](https://github.com/videojs/http-streaming/commit/2402ac6))
<a name="1.0.0"></a>
# [1.0.0](https://github.com/videojs/http-streaming/compare/v0.9.0...v1.0.0) (2018-04-10)
### Chores
* sync videojs-contrib-hls updates ([#75](https://github.com/videojs/http-streaming/issues/75)) ([9223588](https://github.com/videojs/http-streaming/commit/9223588))
* update the aes-decrypter ([#71](https://github.com/videojs/http-streaming/issues/71)) ([27ed914](https://github.com/videojs/http-streaming/commit/27ed914))
### Documentation
* update docs for overrideNative ([#77](https://github.com/videojs/http-streaming/issues/77)) ([98ca6d3](https://github.com/videojs/http-streaming/commit/98ca6d3))
* update known issues for fmp4 captions ([#79](https://github.com/videojs/http-streaming/issues/79)) ([c418301](https://github.com/videojs/http-streaming/commit/c418301))
<a name="0.9.0"></a>
# [0.9.0](https://github.com/videojs/http-streaming/compare/v0.8.0...v0.9.0) (2018-03-30)
### Features
* support in-manifest DRM data ([#60](https://github.com/videojs/http-streaming/issues/60)) ([a1cad82](https://github.com/videojs/http-streaming/commit/a1cad82))
<a name="0.8.0"></a>
# [0.8.0](https://github.com/videojs/http-streaming/compare/v0.7.2...v0.8.0) (2018-03-30)
### Code Refactoring
* export corrections ([#68](https://github.com/videojs/http-streaming/issues/68)) ([aab3b90](https://github.com/videojs/http-streaming/commit/aab3b90))
* use rollup for build ([#69](https://github.com/videojs/http-streaming/issues/69)) ([c28c25c](https://github.com/videojs/http-streaming/commit/c28c25c))
# 0.7.0
* feat: Live support for DASH
# 0.6.1
* use webwackify for webworkers to support webpack bundle ([#50](https://github.com/videojs/http-streaming/pull/45))
# 0.5.3
* fix: program date time handling ([#45](https://github.com/videojs/http-streaming/pull/45))
* update m3u8-parser to v4.2.0
* use segment program date time info
* feat: Adding support for segments in Period and Representation ([#47](https://github.com/videojs/http-streaming/pull/47))
* wait for both main and audio loaders for endOfStream if main starting media unknown ([#44](https://github.com/videojs/http-streaming/pull/44))
# 0.5.2
* add debug logging statement for seekable updates ([#40](https://github.com/videojs/http-streaming/pull/40))
# 0.5.1
* Fix audio only streams with EXT-X-MEDIA tags ([#34](https://github.com/videojs/http-streaming/pull/34))
* Merge videojs-contrib-hls master into http-streaming master ([#35](https://github.com/videojs/http-streaming/pull/35))
* Update sinon to 1.10.3
* Update videojs-contrib-quality-levels to ^2.0.4
* Fix test for event handler cleanup on dispose by calling event handling methods
* fix: Don't reset eme options ([#32](https://github.com/videojs/http-streaming/pull/32))
# 0.5.0
* update mpd-parser to support more segment list types ([#27](https://github.com/videojs/http-streaming/issues/27))
# 0.4.0
* Removed Flash support ([#15](https://github.com/videojs/http-streaming/issues/15))
* Blacklist playlists not supported by browser media source before initial selection ([#17](https://github.com/videojs/http-streaming/issues/17))
# 0.3.1
* Skip flash-based source handler with DASH sources ([#14](https://github.com/videojs/http-streaming/issues/14))
# 0.3.0
* Added additional properties to the stats object ([#10](https://github.com/videojs/http-streaming/issues/10))
# 0.2.1
* Updated the mpd-parser to fix IE11 DASH support ([#12](https://github.com/videojs/http-streaming/issues/12))
# 0.2.0
* Initial DASH Support ([#8](https://github.com/videojs/http-streaming/issues/8))
# 0.1.0
* Initial release, based on [videojs-contrib-hls 5.12.2](https://github.com/videojs/videojs-contrib-hls)

@@ -1,30 +0,0 @@
# CONTRIBUTING
We welcome contributions from everyone!
## Getting Started
Make sure you have Node.js 8 or higher and npm installed.
1. Fork this repository and clone your fork
1. Install dependencies: `npm install`
1. Run a development server: `npm start`
### Making Changes
Refer to the [video.js plugin conventions][conventions] for more detail on best practices and tooling for video.js plugin authorship.
When you've made your changes, push your commit(s) to your fork and issue a pull request against the original repository.
### Running Tests
Testing is a crucial part of any software project. For all but the most trivial changes (typos, etc.), test cases are expected. Tests are run in actual browsers using [Karma][karma].
- In all available and supported browsers: `npm test`
- In a specific browser: `npm run test:chrome`, `npm run test:firefox`, etc.
- While the development server is running (`npm start`), navigate to [`http://localhost:9999/test/`][local]
[karma]: http://karma-runner.github.io/
[local]: http://localhost:9999/test/
[conventions]: https://github.com/videojs/generator-videojs-plugin/blob/master/docs/conventions.md

@@ -1,49 +0,0 @@
Copyright Brightcove, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
The AES decryption implementation in this project is derived from the
Stanford Javascript Cryptography Library
(http://bitwiseshiftleft.github.io/sjcl/). That work is covered by the
following copyright and permission notice:
Copyright 2009-2010 Emily Stark, Mike Hamburg, Dan Boneh.
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following
disclaimer in the documentation and/or other materials provided
with the distribution.
THIS SOFTWARE IS PROVIDED BY THE AUTHORS ``AS IS'' AND ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL <COPYRIGHT HOLDER> OR CONTRIBUTORS BE
LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE
OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN
IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
The views and conclusions contained in the software and documentation
are those of the authors and should not be interpreted as representing
official policies, either expressed or implied, of the authors.

@@ -1,936 +0,0 @@
<img width=300 src="./logo.svg" alt="VHS Logo consisting of a VHS tape, the Video.js logo and the words VHS" />
# videojs-http-streaming (VHS)
[![Build Status][travis-icon]][travis-link]
[![Slack Status][slack-icon]][slack-link]
[![Greenkeeper badge][greenkeeper-icon]][greenkeeper-link]
Play HLS, DASH, and future HTTP streaming protocols with video.js, even where they're not
natively supported.
Included in video.js 7 by default! See the [video.js 7 blog post](https://blog.videojs.com/video-js-7-is-here/).
Maintenance Status: Stable
Video.js Compatibility: 6.0, 7.0
<!-- START doctoc generated TOC please keep comment here to allow auto update -->
<!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->
- [Installation](#installation)
- [NPM](#npm)
- [CDN](#cdn)
- [Releases](#releases)
- [Manual Build](#manual-build)
- [Contributing](#contributing)
- [Troubleshooting](#troubleshooting)
- [Talk to us](#talk-to-us)
- [Getting Started](#getting-started)
- [Compatibility](#compatibility)
- [Browsers which support MSE](#browsers-which-support-mse)
- [Native only](#native-only)
- [Flash Support](#flash-support)
- [DRM](#drm)
- [Documentation](#documentation)
- [Options](#options)
- [How to use](#how-to-use)
- [Initialization](#initialization)
- [Source](#source)
- [List](#list)
- [withCredentials](#withcredentials)
- [handleManifestRedirects](#handlemanifestredirects)
- [useCueTags](#usecuetags)
- [overrideNative](#overridenative)
- [blacklistDuration](#blacklistduration)
- [bandwidth](#bandwidth)
- [useBandwidthFromLocalStorage](#usebandwidthfromlocalstorage)
- [enableLowInitialPlaylist](#enablelowinitialplaylist)
- [limitRenditionByPlayerDimensions](#limitrenditionbyplayerdimensions)
- [useDevicePixelRatio](#usedevicepixelratio)
- [smoothQualityChange](#smoothqualitychange)
- [allowSeeksWithinUnsafeLiveWindow](#allowseekswithinunsafelivewindow)
- [customTagParsers](#customtagparsers)
- [customTagMappers](#customtagmappers)
- [cacheEncryptionKeys](#cacheencryptionkeys)
- [handlePartialData](#handlepartialdata)
- [Runtime Properties](#runtime-properties)
- [vhs.playlists.master](#vhsplaylistsmaster)
- [vhs.playlists.media](#vhsplaylistsmedia)
- [vhs.systemBandwidth](#vhssystembandwidth)
- [vhs.bandwidth](#vhsbandwidth)
- [vhs.throughput](#vhsthroughput)
- [vhs.selectPlaylist](#vhsselectplaylist)
- [vhs.representations](#vhsrepresentations)
- [vhs.xhr](#vhsxhr)
- [vhs.stats](#vhsstats)
- [Events](#events)
- [loadedmetadata](#loadedmetadata)
- [VHS Usage Events](#vhs-usage-events)
- [Presence Stats](#presence-stats)
- [Use Stats](#use-stats)
- [In-Band Metadata](#in-band-metadata)
- [Segment Metadata](#segment-metadata)
- [Object as Source](#object-as-source)
- [Hosting Considerations](#hosting-considerations)
- [Known Issues and Workarounds](#known-issues-and-workarounds)
- [Fragmented MP4 Support](#fragmented-mp4-support)
- [Assets with an Audio-Only Rate Get Stuck in Audio-Only](#assets-with-an-audio-only-rate-get-stuck-in-audio-only)
- [DASH Assets with `$Time` Interpolation and `SegmentTimeline`s with No `t`](#dash-assets-with-time-interpolation-and-segmenttimelines-with-no-t)
- [Testing](#testing)
- [Debugging](#debugging)
- [Release History](#release-history)
- [Building](#building)
- [Development](#development)
- [Tools](#tools)
- [Commands](#commands)
<!-- END doctoc generated TOC please keep comment here to allow auto update -->
## Installation
### NPM
To install `videojs-http-streaming` with npm, run:
```bash
npm install --save @videojs/http-streaming
```
### CDN
Select a version of VHS from the [CDN](https://unpkg.com/@videojs/http-streaming/dist/)
### Releases
Download a release of [videojs-http-streaming](https://github.com/videojs/http-streaming/releases)
### Manual Build
Download a copy of this git repository and then follow the steps in [Building](#building)
## Contributing
See [CONTRIBUTING.md](/CONTRIBUTING.md)
## Troubleshooting
See [our troubleshooting guide](/docs/troubleshooting.md)
## Talk to us
Drop by our slack channel (#playback) on the [Video.js slack][slack-link].
## Getting Started
This library is included in video.js 7 by default. If you are using an older version of video.js,
get a copy of [videojs-http-streaming](#installation) and include it in your page along with video.js:
```html
<video-js id=vid1 width=600 height=300 class="vjs-default-skin" controls>
<source
src="https://example.com/index.m3u8"
type="application/x-mpegURL">
</video-js>
<script src="video.js"></script>
<script src="videojs-http-streaming.min.js"></script>
<script>
var player = videojs('vid1');
player.play();
</script>
```
Check out our [live example](https://jsbin.com/gejugat/edit?html,output) if you're having trouble.
It is recommended to use the `<video-js>` element or load a source with `player.src(sourceObject)` in order to prevent the video element from playing the source natively where HLS is supported.
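For example, a minimal sketch of loading a source through `player.src` (the manifest URL is a placeholder):
```javascript
// Loading the source through video.js lets VHS handle playback
// instead of the browser's native HLS implementation.
var player = videojs('vid1');

player.src({
  src: 'https://example.com/index.m3u8',
  type: 'application/x-mpegURL'
});
```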
## Compatibility
The [Media Source Extensions](http://caniuse.com/#feat=mediasource) API is required for http-streaming to play HLS or MPEG-DASH.
### Browsers which support MSE
- Chrome
- Firefox
- Internet Explorer 11 on Windows 10 or 8.1
These browsers have some level of native HLS support, which will be used unless the [overrideNative](#overridenative) option is used:
- Chrome Android
- Firefox Android
- Edge
### Native only
- Mac Safari
- iOS Safari
Mac Safari does have MSE support, but native HLS playback is recommended there.
### Flash Support
This plugin does not support Flash playback. Instead, it is recommended to use the [videojs-flashls-source-handler](https://github.com/brightcove/videojs-flashls-source-handler) plugin as a fallback option for browsers that don't have a native
[HLS](https://caniuse.com/#feat=http-live-streaming)/[DASH](https://caniuse.com/#feat=mpeg-dash) player or support for [Media Source Extensions](http://caniuse.com/#feat=mediasource).
### DRM
DRM is supported through [videojs-contrib-eme](https://github.com/videojs/videojs-contrib-eme). In order to use DRM, include the videojs-contrib-eme plugin, [initialize it](https://github.com/videojs/videojs-contrib-eme#initialization), and add options to either the [plugin](https://github.com/videojs/videojs-contrib-eme#plugin-options) or the [source](https://github.com/videojs/videojs-contrib-eme#source-options).
Detailed option information can be found in the [videojs-contrib-eme README](https://github.com/videojs/videojs-contrib-eme/blob/master/README.md).
## Documentation
[HTTP Live Streaming](https://developer.apple.com/streaming/) (HLS) has
become a de-facto standard for streaming video on mobile devices
thanks to its native support on iOS and Android. There are a number of
reasons independent of platform to recommend the format, though:
- Supports (client-driven) adaptive bitrate selection
- Delivered over standard HTTP ports
- Simple, text-based manifest format
- No proprietary streaming servers required
Unfortunately, all the major desktop browsers except for Safari are
missing HLS support. That leaves web developers in the unfortunate
position of having to maintain alternate renditions of the same video
and potentially having to forego HTML-based video entirely to provide
the best desktop viewing experience.
This project addresses that situation by providing a polyfill for HLS
on browsers that have support for [Media Source
Extensions](http://caniuse.com/#feat=mediasource).
You can deploy a single HLS stream, code against the
regular HTML5 video APIs, and create a fast, high-quality video
experience across all the big web device categories.
Check out the [full documentation](docs/README.md) for details on how HLS works
and advanced configuration. A description of the [adaptive switching
behavior](docs/bitrate-switching.md) is available, too.
videojs-http-streaming supports a bunch of HLS features. Here
are some highlights:
- video-on-demand and live playback modes
- backup or redundant streams
- mid-segment quality switching
- AES-128 segment encryption
- CEA-608 captions are automatically translated into standard HTML5
[caption text tracks][0]
- In-Manifest WebVTT subtitles are automatically translated into standard HTML5
subtitle tracks
- Timed ID3 Metadata is automatically translated into HTML5 metadata
text tracks
- Highly customizable adaptive bitrate selection
- Automatic bandwidth tracking
- Cross-domain credentials support with CORS
- Tight integration with video.js and a philosophy of exposing as much
as possible with standard HTML APIs
- Streams with multiple audio tracks, and switching between those audio tracks
(see the docs folder for info)
- Media content in
[fragmented MP4s](https://developer.apple.com/videos/play/wwdc2016/504/)
instead of the MPEG2-TS container format.
[0]: https://developer.mozilla.org/en-US/docs/Web/HTML/Element/track
For a more complete list of supported and missing features, refer to
[this doc](docs/supported-features.md).
### Options
#### How to use
##### Initialization
You may pass in an options object to the hls source handler at player
initialization. You can pass in options just like you would for other
parts of video.js:
```javascript
// VHS options are nested under the html5 tech options
videojs(video, {
html5: {
vhs: {
withCredentials: true
}
}
});
```
##### Source
Some options, such as `withCredentials`, can be passed to VHS during
`player.src`:
```javascript
var player = videojs('some-video-id');
player.src({
src: 'https://d2zihajmogu5jn.cloudfront.net/bipbop-advanced/bipbop_16x9_variant.m3u8',
type: 'application/x-mpegURL',
withCredentials: true
});
```
#### List
##### withCredentials
* Type: `boolean`
* can be used as a source option
* can be used as an initialization option
When the `withCredentials` property is set to `true`, all XHR requests for
manifests and segments will have `withCredentials` set to `true` as well. This
enables storing and passing cookies from the server on which the manifests and
segments live. This has implications for CORS: when set, the
`Access-Control-Allow-Origin` header cannot be `*`, and the response must also
include the `Access-Control-Allow-Credentials` header set to `true`.
See html5rocks's [article](http://www.html5rocks.com/en/tutorials/cors/)
for more info.
##### handleManifestRedirects
* Type: `boolean`
* Default: `false`
* can be used as a source option
* can be used as an initialization option
When the `handleManifestRedirects` property is set to `true`, manifest requests
which are redirected will have their URL updated to the new URL for future
requests.
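For example, a minimal sketch of enabling it as an initialization option:
```javascript
// Follow manifest redirects and use the redirected URL for future requests.
videojs('vid1', {
  html5: {
    vhs: {
      handleManifestRedirects: true
    }
  }
});
```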
##### useCueTags
* Type: `boolean`
* can be used as an initialization option
When the `useCueTags` property is set to `true`, a text track is created with
label 'ad-cues' and kind 'metadata'. The track is then added to
`player.textTracks()`. Changes in active cue may be
tracked by following the Video.js cue points API for text tracks. For example:
```javascript
let textTracks = player.textTracks();
let cuesTrack;

for (let i = 0; i < textTracks.length; i++) {
  if (textTracks[i].label === 'ad-cues') {
    cuesTrack = textTracks[i];
  }
}

cuesTrack.addEventListener('cuechange', function() {
  let activeCues = cuesTrack.activeCues;

  for (let i = 0; i < activeCues.length; i++) {
    let activeCue = activeCues[i];

    console.log('Cue runs from ' + activeCue.startTime +
                ' to ' + activeCue.endTime);
  }
});
```
##### overrideNative
* Type: `boolean`
* can be used as an initialization option
Try to use videojs-http-streaming even on platforms that provide some
level of HLS support natively. There are a number of platforms that
*technically* play back HLS content but aren't very reliable or are
missing features like CEA-608 captions support. When `overrideNative`
is true, if the platform supports Media Source Extensions,
videojs-http-streaming will take over HLS playback to provide a more
consistent experience.
```javascript
// via the constructor
var player = videojs('playerId', {
html5: {
vhs: {
overrideNative: true
},
nativeAudioTracks: false,
nativeVideoTracks: false
}
});
```
Since MSE playback may be desirable on all browsers with some native support other than Safari, `overrideNative: !videojs.browser.IS_SAFARI` could be used.
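A sketch of that suggestion:
```javascript
// Override native HLS everywhere except Safari, where native playback
// is generally preferred.
var player = videojs('playerId', {
  html5: {
    vhs: {
      overrideNative: !videojs.browser.IS_SAFARI
    },
    nativeAudioTracks: false,
    nativeVideoTracks: false
  }
});
```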
##### blacklistDuration
* Type: `number`
* can be used as an initialization option
When the `blacklistDuration` property is set to a duration in seconds, a
playlist that is blacklisted will remain excluded for that duration. This
enables the blacklist duration to be configured by the user.
##### bandwidth
* Type: `number`
* can be used as an initialization option
When the `bandwidth` property is set (bits per second), it will be used in
the calculation for initial playlist selection, before more bandwidth
information is seen by the player.
##### useBandwidthFromLocalStorage
* Type: `boolean`
* can be used as an initialization option
If true, `bandwidth` and `throughput` values are stored in and retrieved from local
storage on startup (for initial rendition selection). This setting is `false` by default.
##### enableLowInitialPlaylist
* Type: `boolean`
* can be used as an initialization option
When `enableLowInitialPlaylist` is set to true, the lowest bitrate playlist
is selected initially. This helps to decrease playback start time.
This setting is `false` by default.
##### limitRenditionByPlayerDimensions
* Type: `boolean`
* can be used as an initialization option
When `limitRenditionByPlayerDimensions` is set to true, rendition
selection logic will take into account the player size and rendition
resolutions when making a decision.
This setting is `true` by default.
##### useDevicePixelRatio
* Type: `boolean`
* can be used as an initialization option
If true, this will take the device pixel ratio into account when doing rendition switching. This means that if you have a player with a width of `540px` on a high density display with a device pixel ratio of 2, a `1080p` rendition will be allowed.
This setting is `false` by default.
##### smoothQualityChange
* Type: `boolean`
* can be used as a source option
* can be used as an initialization option
When the `smoothQualityChange` property is set to `true`, a manual quality
change triggered via the [representations API](#vhsrepresentations) will use
smooth quality switching rather than the default fast (buffer-ejecting)
quality switching. Using smooth quality switching will mean no loading spinner
will appear during quality switches, but will cause quality switches to only
be visible after a few seconds.
Note that this _only_ affects quality changes triggered via the representations
API; automatic quality switches based on available bandwidth will always be
smooth switches.
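For example, a minimal sketch of enabling it as a source option (the URL is a placeholder):
```javascript
player.src({
  src: 'https://example.com/index.m3u8',
  type: 'application/x-mpegURL',
  smoothQualityChange: true
});
```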
##### allowSeeksWithinUnsafeLiveWindow
* Type: `boolean`
* can be used as a source option
When `allowSeeksWithinUnsafeLiveWindow` is set to `true`, if the active playlist is live
and a seek is made to a time between the safe live point (end of manifest minus three
times the target duration,
see [the hls spec](https://tools.ietf.org/html/draft-pantos-http-live-streaming-23#section-6.3.3)
for details) and the end of the playlist, the seek is allowed, rather than corrected to
the safe live point.
This option can help in instances where the live stream's target duration is greater than
the segment durations, playback ends up in the unsafe live window, and there are gaps in
the content. In this case the player will attempt to seek past the gaps, but will
end up seeking inside the unsafe range, leading to a correction and a seek back
into previously played content.
The property defaults to `false`.
##### customTagParsers
* Type: `Array`
* can be used as a source option
With `customTagParsers` you can pass an array of custom m3u8 tag parser objects. See https://github.com/videojs/m3u8-parser#custom-parsers
##### customTagMappers
* Type: `Array`
* can be used as a source option
Similar to `customTagParsers`, with `customTagMappers` you can pass an array of custom m3u8 tag mapper objects. See https://github.com/videojs/m3u8-parser#custom-parsers
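A hedged sketch of both options, assuming the parser and mapper object shapes documented by m3u8-parser (the tag names here are hypothetical):
```javascript
player.src({
  src: 'https://example.com/index.m3u8',
  type: 'application/x-mpegURL',
  // parse a hypothetical #EXAMPLE-TAG into custom manifest data
  customTagParsers: [{
    expression: /^#EXAMPLE-TAG/,
    customType: 'exampleTag',
    dataParser: function(line) {
      return line.split(':')[1];
    },
    segment: true
  }],
  // rewrite a hypothetical #OLD-TAG into #EXAMPLE-TAG before parsing
  customTagMappers: [{
    expression: /^#OLD-TAG/,
    map: function(line) {
      return line.replace('#OLD-TAG', '#EXAMPLE-TAG');
    }
  }]
});
```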
##### cacheEncryptionKeys
* Type: `boolean`
* can be used as a source option
* can be used as an initialization option
This option forces the player to cache AES-128 encryption keys internally instead of requesting the key alongside every segment request.
This option defaults to `false`.
##### handlePartialData
* Type: `boolean`
* Default: `false`
* Use partial appends in the transmuxer and segment loader
### Runtime Properties
Runtime properties are attached to the tech object when HLS is in
use. You can get a reference to the VHS source handler like this:
```javascript
var vhs = player.tech().vhs;
```
If you *were* thinking about modifying runtime properties in a
video.js plugin, we'd recommend you avoid it. Your plugin won't work
with videos that don't use videojs-http-streaming and the best plugins
work across all the media types that video.js supports. If you're
deploying videojs-http-streaming on your own website and want to make a
couple tweaks though, go for it!
#### vhs.playlists.master
Type: `object`
An object representing the parsed master playlist. If a media playlist
is loaded directly, a master playlist with only one entry will be
created.
#### vhs.playlists.media
Type: `function`
A function that can be used to retrieve or modify the currently active
media playlist. The active media playlist is referred to when
additional video data needs to be downloaded. Calling this function
with no arguments returns the parsed playlist object for the active
media playlist. Calling this function with a playlist object from the
master playlist or a URI string as specified in the master playlist
will kick off an asynchronous load of the specified media
playlist. Once it has been retrieved, it will become the active media
playlist.
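For example, a minimal sketch (the playlist index is illustrative):
```javascript
var vhs = player.tech().vhs;

// read the parsed playlist object for the active media playlist
var active = vhs.playlists.media();
console.log(active.uri);

// kick off an asynchronous load of another playlist from the master;
// once retrieved, it becomes the active media playlist
vhs.playlists.media(vhs.playlists.master.playlists[0]);
```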
#### vhs.systemBandwidth
Type: `number`
`systemBandwidth` is a combination of two serial processes' bitrates. The first
is the network bitrate provided by `bandwidth` and the second is the bitrate of
the entire process after that (decryption, transmuxing, and appending) provided
by `throughput`. This value is used by the default implementation of `selectPlaylist`
to select an appropriate bitrate to play.
Since the two processes are serial, the overall system bandwidth is given by:
`systemBandwidth = 1 / (1 / bandwidth + 1 / throughput)`
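A worked example with illustrative numbers:
```javascript
var bandwidth = 10e6;  // 10 Mbps network rate for the last segment
var throughput = 40e6; // 40 Mbps decrypt/transmux/append rate

// harmonic combination of two serial stages
var systemBandwidth = 1 / (1 / bandwidth + 1 / throughput);

console.log(systemBandwidth); // 8000000 -- the slower stage dominates
```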
#### vhs.bandwidth
Type: `number`
The number of bits downloaded per second in the last segment download.
Before the first video segment has been downloaded, it's hard to
estimate bandwidth accurately. The HLS tech uses a starting value of 4194304 bits/s (0.5 MB/s). If you
have a more accurate source of bandwidth information, you can override
this value as soon as the HLS tech has loaded to provide an initial
bandwidth estimate.
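For example, a hedged sketch of seeding the estimate (the value and URL are illustrative, and this assumes the VHS tech is handling the source):
```javascript
var player = videojs('vid1');

player.src({
  src: 'https://example.com/index.m3u8',
  type: 'application/x-mpegURL'
});

player.ready(function() {
  // assume roughly a 10 Mbps connection (bits per second)
  player.tech().vhs.bandwidth = 10e6;
});
```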
#### vhs.throughput
Type: `number`
The number of bits decrypted, transmuxed, and appended per second as a cumulative average across active processing time.
#### vhs.selectPlaylist
Type: `function`
A function that returns the media playlist object to use to download
the next segment. It is invoked by the tech immediately before a new
segment is downloaded. You can override this function to provide your
adaptive streaming logic. You must, however, be sure to return a valid
media playlist object that is present in `player.tech().vhs.master`.
Overriding this function with your own is very powerful, but is overkill
for many purposes. Most of the time, you should use the much simpler
function below to selectively enable or disable a playlist from the
adaptive streaming logic.
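Still, as a hedged sketch of the shape of an override (always choosing the lowest-bandwidth playlist, which a real selector would not do):
```javascript
var vhs = player.tech().vhs;

vhs.selectPlaylist = function() {
  var playlists = vhs.playlists.master.playlists;

  // return a playlist object that is present in the master playlist
  return playlists.reduce(function(lowest, playlist) {
    return playlist.attributes.BANDWIDTH < lowest.attributes.BANDWIDTH ?
      playlist : lowest;
  });
};
```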
#### vhs.representations
Type: `function`
It is recommended to include the [videojs-contrib-quality-levels](https://github.com/videojs/videojs-contrib-quality-levels) plugin on your page so that videojs-http-streaming will automatically populate the QualityLevelList exposed on the player by the plugin. You can access this list by calling `player.qualityLevels()`. See the [videojs-contrib-quality-levels project page](https://github.com/videojs/videojs-contrib-quality-levels) for more information on how to use the API.
Example, only enabling representations with a width greater than or equal to 720:
```javascript
var qualityLevels = player.qualityLevels();
for (var i = 0; i < qualityLevels.length; i++) {
var quality = qualityLevels[i];
if (quality.width >= 720) {
quality.enabled = true;
} else {
quality.enabled = false;
}
}
```
If including [videojs-contrib-quality-levels](https://github.com/videojs/videojs-contrib-quality-levels) is not an option, you can use the representations API. To get all of the available representations, call the `representations()` method on `player.tech().vhs`. This will return a list of plain objects, each with `width`, `height`, `bandwidth`, and `id` properties, and an `enabled()` method.
```javascript
player.tech().vhs.representations();
```
To see whether the representation is enabled or disabled, call its `enabled()` method with no arguments. To set whether it is enabled/disabled, call its `enabled()` method and pass in a boolean value. Calling `<representation>.enabled(true)` will allow the adaptive bitrate algorithm to select the representation while calling `<representation>.enabled(false)` will disallow any selection of that representation.
Example, only enabling representations with a width greater than or equal to 720:
```javascript
player.tech().vhs.representations().forEach(function(rep) {
if (rep.width >= 720) {
rep.enabled(true);
} else {
rep.enabled(false);
}
});
```
#### vhs.xhr
Type: `function`
The xhr function that is used by HLS internally is exposed on the per-
player `vhs` object. While it is possible, we do not recommend replacing
the function with your own implementation. Instead, `xhr` provides
the ability to specify a `beforeRequest` function that will be called
with an object containing the options that will be used to create the
xhr request.
Example:
```javascript
player.tech().vhs.xhr.beforeRequest = function(options) {
options.uri = options.uri.replace('example.com', 'foo.com');
return options;
};
```
The global `videojs.Vhs` also exposes an `xhr` property. Specifying a
`beforeRequest` function on that will allow you to intercept the options
for *all* requests in every player on a page. For consistency across
browsers, the video source should be set at runtime once the video player
is ready.
Example:
```javascript
videojs.Vhs.xhr.beforeRequest = function(options) {
/*
* Modifications to requests that will affect every player.
*/
return options;
};
var player = videojs('video-player-id');
player.ready(function() {
this.src({
src: 'https://d2zihajmogu5jn.cloudfront.net/bipbop-advanced/bipbop_16x9_variant.m3u8',
type: 'application/x-mpegURL',
});
});
```
For information on the type of options that you can modify see the
documentation at [https://github.com/Raynos/xhr](https://github.com/Raynos/xhr).
#### vhs.stats
Type: `object`
This object contains a summary of HLS- and player-related stats.
| Property Name | Type | Description |
| --------------------- | ------ | ----------- |
| bandwidth | number | Rate of the last segment download in bits/second |
| mediaRequests | number | Total number of media segment requests |
| mediaRequestsAborted | number | Total number of aborted media segment requests |
| mediaRequestsTimedout | number | Total number of timed-out media segment requests |
| mediaRequestsErrored | number | Total number of errored media segment requests |
| mediaTransferDuration | number | Total time spent downloading media segments in milliseconds |
| mediaBytesTransferred | number | Total number of content bytes downloaded |
| mediaSecondsLoaded | number | Total number of content seconds downloaded |
| buffered | array | List of time ranges of content that are in the SourceBuffer |
| currentTime | number | The current position of the player |
| currentSource | object | The source object. Has the structure `{src: 'url', type: 'mimetype'}` |
| currentTech | string | The name of the tech in use |
| duration | number | Duration of the video in seconds |
| master | object | The [master playlist object](#vhsplaylistsmaster) |
| playerDimensions | object | Contains the width and height of the player |
| seekable | array | List of time ranges that the player can seek to |
| timestamp | number | Timestamp of when `vhs.stats` was accessed |
| videoPlaybackQuality | object | Media playback quality metrics as specified by the [W3C's Media Playback Quality API](https://wicg.github.io/media-playback-quality/) |
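For example, logging a snapshot of a few of these properties:
```javascript
var stats = player.tech().vhs.stats;

console.log('last segment bandwidth (bits/s):', stats.bandwidth);
console.log('media segments requested:', stats.mediaRequests);
console.log('content seconds downloaded:', stats.mediaSecondsLoaded);
```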
### Events
Standard HTML video events are handled by video.js automatically and
are triggered on the player object.
#### loadedmetadata
Fired after the first segment is downloaded for a playlist. This will not happen
until playback begins if video.js's `preload` option is set to `none`.
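For example:
```javascript
player.one('loadedmetadata', function() {
  console.log('first segment downloaded; duration:', player.duration());
});
```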
### VHS Usage Events
Usage tracking events are fired when we detect a certain HLS feature, encoding setting,
or API is used. These can be helpful for analytics, and to pinpoint the cause of HLS errors.
For instance, if errors are being fired in tandem with a usage event indicating that the
player was playing an AES encrypted stream, then we have a possible avenue to explore when
debugging the error.
Note that although these usage events are listed below, they may change at any time without
a major version change.
VHS usage events are triggered on the tech, with the exception of the three
`vhs-error-reload` events, which are triggered on the player.
To listen for usage events triggered on the tech, listen for the event type of `'usage'`:
```javascript
player.on('ready', () => {
player.tech().on('usage', (e) => {
console.log(e.name);
});
});
```
Note that these events are triggered as soon as a case is encountered, and often only
once. For example, the `vhs-demuxed` usage event will be triggered as soon as the master
manifest is downloaded and parsed, and will not be triggered again.
#### Presence Stats
Each of the following usage events is fired once per source if (and when) detected:
| Name | Description |
| ------------- | ------------- |
| vhs-webvtt | master manifest has at least one segmented WebVTT playlist |
| vhs-aes | a playlist is AES encrypted |
| vhs-fmp4 | a playlist used fMP4 segments |
| vhs-demuxed | audio and video are demuxed by default |
| vhs-alternate-audio | alternate audio available in the master manifest |
| vhs-playlist-cue-tags | a playlist used cue tags (see [useCueTags](#usecuetags) for details) |
| vhs-bandwidth-from-local-storage | starting bandwidth was retrieved from local storage (see [useBandwidthFromLocalStorage](#usebandwidthfromlocalstorage) for details) |
| vhs-throughput-from-local-storage | starting throughput was retrieved from local storage (see [useBandwidthFromLocalStorage](#usebandwidthfromlocalstorage) for details) |
#### Use Stats
Each of the following usage events is fired per use:
| Name | Description |
| ------------- | ------------- |
| vhs-gap-skip | player skipped a gap in the buffer |
| vhs-player-access | player.vhs was accessed |
| vhs-audio-change | a user selected an alternate audio stream |
| vhs-rendition-disabled | a rendition was disabled |
| vhs-rendition-enabled | a rendition was enabled |
| vhs-rendition-blacklisted | a rendition was blacklisted |
| vhs-timestamp-offset | a timestamp offset was set in HLS (can identify discontinuities) |
| vhs-unknown-waiting | the player stopped for an unknown reason and we seeked to the current time to try to address it |
| vhs-live-resync | playback fell off the back of a live playlist and we resynced to the live point |
| vhs-video-underflow | we seeked to the current time to address video underflow |
| vhs-error-reload-initialized | the reloadSourceOnError plugin was initialized |
| vhs-error-reload | the reloadSourceOnError plugin reloaded a source |
| vhs-error-reload-canceled | an error occurred too soon after the last reload, so we didn't reload again (to prevent error loops) |
### In-Band Metadata
The HLS tech supports [timed
metadata](https://developer.apple.com/library/ios/#documentation/AudioVideo/Conceptual/HTTP_Live_Streaming_Metadata_Spec/Introduction/Introduction.html)
embedded as [ID3 tags](http://id3.org/id3v2.3.0). When a stream is
encountered with embedded metadata, an [in-band metadata text
track](https://html.spec.whatwg.org/multipage/embedded-content.html#text-track-in-band-metadata-track-dispatch-type)
will automatically be created and populated with cues as they are
encountered in the stream. UTF-8 encoded
[TXXX](http://id3.org/id3v2.3.0#User_defined_text_information_frame)
and [WXXX](http://id3.org/id3v2.3.0#User_defined_URL_link_frame) ID3
frames are mapped to cue points and their values set as the cue
text. Cues are created for all other frame types and the data is
attached to the generated cue:
```javascript
cue.value.data
```
There are lots of guides and references to using text tracks [around
the web](http://www.html5rocks.com/en/tutorials/track/basics/).
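As a sketch of reading those cues, assuming the browser surfaces the in-band
track with a `kind` of `metadata` (inspect `player.textTracks()` for your own
stream to confirm):
```javascript
player.textTracks().addEventListener('addtrack', function(event) {
  var track = event.track;

  if (track.kind !== 'metadata') {
    return;
  }

  track.addEventListener('cuechange', function() {
    var cues = track.activeCues;

    for (var i = 0; i < cues.length; i++) {
      // TXXX/WXXX frames land in cue.text; other frames attach cue.value.data
      console.log(cues[i].text, cues[i].value && cues[i].value.data);
    }
  });
});
```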
### Segment Metadata
You can get metadata about the segments currently in the buffer by using the `segment-metadata`
text track. You can get the metadata of the currently rendered segment by looking at the
track's `activeCues` array. The metadata will be attached to the `cue.value` property and
will have this structure:
```javascript
cue.value = {
byteLength, // The size of the segment in bytes
bandwidth, // The peak bitrate reported by the segment's playlist
resolution, // The resolution reported by the segment's playlist
codecs, // The codecs reported by the segment's playlist
uri, // The Segment uri
timeline, // Timeline of the segment for detecting discontinuities
playlist, // The Playlist uri
start, // Segment start time
end // Segment end time
};
```
Example: detecting when a change in quality is rendered on screen.
```javascript
let tracks = player.textTracks();
let segmentMetadataTrack;
for (let i = 0; i < tracks.length; i++) {
if (tracks[i].label === 'segment-metadata') {
segmentMetadataTrack = tracks[i];
}
}
let previousPlaylist;
if (segmentMetadataTrack) {
segmentMetadataTrack.on('cuechange', function() {
let activeCue = segmentMetadataTrack.activeCues[0];
if (activeCue) {
if (previousPlaylist !== activeCue.value.playlist) {
console.log('Switched from rendition ' + previousPlaylist +
' to rendition ' + activeCue.value.playlist);
}
previousPlaylist = activeCue.value.playlist;
}
});
}
```
### Object as Source
*Note* that this is an advanced use-case, and may be more fragile for production
environments, as the schema for a VHS object and how it's used internally are not set in
stone and may change in future releases.
In normal use, VHS accepts a URL as the source of the video. But VHS also has the ability
to accept a JSON object as the source.
Passing a JSON object as the source has many uses. A couple of examples include:
* The manifest has already been downloaded, so there's no need to make another request
* You want to change some aspect of the manifest, e.g., add a segment, without modifying
the manifest itself
In order to pass a JSON object as the source, provide a parsed manifest object via a
[data URI](https://developer.mozilla.org/en-US/docs/Web/HTTP/Basics_of_HTTP/Data_URIs),
using the "vnd.videojs.vhs+json" media type when setting the source type. For instance:
```javascript
// M3u8Parser comes from the m3u8-parser package
const parser = new M3u8Parser();

parser.push(manifestString);
parser.end();

const player = videojs('some-video-id');

player.src({
  src: `data:application/vnd.videojs.vhs+json,${JSON.stringify(parser.manifest)}`,
  type: 'application/vnd.videojs.vhs+json'
});
```
The manifest object should follow the "VHS manifest object schema" (a somewhat flexible
and informally documented structure) provided in the README of
[m3u8-parser](https://github.com/videojs/m3u8-parser) and
[mpd-parser](https://github.com/videojs/mpd-parser). This may be referred to in the
project as `vhs-json`.
## Hosting Considerations
Unlike a native HLS implementation, the HLS tech has to comply with
the browser's security policies. That means that all the files that
make up the stream must be served from the same domain as the page
hosting the video player or from a server that has appropriate [CORS
headers](https://developer.mozilla.org/en-US/docs/HTTP/Access_control_CORS)
configured. Easy [instructions are
available](http://enable-cors.org/server.html) for popular web servers,
and most CDNs should have no trouble turning CORS on for your account.
## Known Issues and Workarounds
Issues that are currently known. If you want to
help find a solution, it would be appreciated!
### Fragmented MP4 Support
Edge has native support for HLS but only in the MPEG2-TS container. If
you attempt to play an HLS stream with fragmented MP4 segments (without
[overriding native playback](#overridenative)), Edge will stall.
Fragmented MP4s are only supported on browsers that have
[Media Source Extensions](http://caniuse.com/#feat=mediasource) available.
### Assets with an Audio-Only Rate Get Stuck in Audio-Only
Some assets that include an audio-only rate, but whose lowest-bandwidth
audio+video rate isn't especially low, can get stuck in audio-only mode. This is
because the initial bandwidth calculation concludes there's insufficient
bandwidth for selecting the lowest-quality audio+video playlist, so it picks
the audio-only one, which unfortunately locks playback to audio-only,
preventing a switch to an audio+video playlist once a better bandwidth
estimate is available.
Until we've implemented a full fix, it is recommended to set the
[`enableLowInitialPlaylist` option](#enablelowinitialplaylist) for any assets
that include an audio-only rate; it should always select the lowest-bandwidth
audio+video playlist for its first playlist.
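A sketch of that workaround, assuming the usual `html5.vhs` option path for
VHS settings:
```javascript
// only worth doing for sources known to include an audio-only rendition
var player = videojs('video-player-id', {
  html5: {
    vhs: {
      enableLowInitialPlaylist: true
    }
  }
});
```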
It's also worth mentioning that Apple no longer requires having an audio-only
rate; instead, they require a 192kbps audio+video rate (see Apple's current
[HLS Authoring Specification][]). Removing the audio-only rate would of course
eliminate this problem since there would be only audio+video playlists to
choose from.
Follow progress on this in issue [#175](https://github.com/videojs/http-streaming/issues/175).
[HLS Authoring Specification]: https://developer.apple.com/documentation/http_live_streaming/hls_authoring_specification_for_apple_devices
### DASH Assets with `$Time` Interpolation and `SegmentTimeline`s with No `t`
DASH assets which use `$Time` in a `SegmentTemplate`, and also have a
`SegmentTimeline` where only the first `S` has a `t` and the rest only have a
`d`, currently do not load.
There is currently no workaround for this, but you can track progress on this
in issue [#256](https://github.com/videojs/http-streaming/issues/256).
## Testing
To run the test suite, use `npm run test`. You will need Chrome and Firefox installed to run the tests.
_videojs-http-streaming uses [BrowserStack](https://browserstack.com) for compatibility testing._
## Debugging
videojs-http-streaming makes use of `videojs.log` for debug logging. You can enable these logs
by setting the log level to `debug` using `videojs.log.level('debug')`. You can access a complete
history of the logs using `videojs.log.history()`. This history is maintained even when the
log level is not set to `debug`.
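For example:
```javascript
// turn on debug logging, reproduce the issue, then inspect what was logged
videojs.log.level('debug');

// ... reproduce the issue ...

console.log(videojs.log.history());
```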
`vhs.stats` can also be helpful when debugging. Accessing this object will give you
a snapshot summary of various HLS and player stats. See [vhs.stats](#vhsstats) for details
about what this object contains.
__NOTE__: The `debug` level is only available in video.js v6.6.0+. With earlier versions of
video.js, no debug messages will be logged to console.
## Release History
Check out the [changelog](CHANGELOG.md) for a summary of each release.
## Building
To build a copy of videojs-http-streaming, run the following commands:
```bash
git clone https://github.com/videojs/http-streaming
cd http-streaming
npm i
npm run build
```
The build will place all of the files needed to use videojs-http-streaming in a `dist` folder.
## Development
### Tools
* Download stream locally with the [HLS Fetcher](https://github.com/videojs/hls-fetcher)
* Simulate errors with [Murphy](https://github.com/videojs/murphy)
* Inspect content with [Thumbcoil](http://thumb.co.il)
### Commands
All commands for development are listed in the `package.json` file and are run using:
```bash
npm run <command>
```
[slack-icon]: http://slack.videojs.com/badge.svg
[slack-link]: http://slack.videojs.com
[travis-icon]: https://travis-ci.org/videojs/http-streaming.svg?branch=main
[travis-link]: https://travis-ci.org/videojs/http-streaming
[issue-stats-link]: http://issuestats.com/github/videojs/http-streaming
[issue-stats-pr-icon]: http://issuestats.com/github/videojs/http-streaming/badge/pr
[issue-stats-issues-icon]: http://issuestats.com/github/videojs/http-streaming/badge/issue
[greenkeeper-icon]: https://badges.greenkeeper.io/videojs/http-streaming.svg
[greenkeeper-link]: https://greenkeeper.io/
# Overview
This project supports both [HLS][hls] and [MPEG-DASH][dash] playback in the video.js player. This document is intended as a primer for anyone interested in contributing or just better understanding how bits from a server get turned into video on their display.
## HTTP Live Streaming
[HLS][apple-hls-intro] has two primary characteristics that distinguish it from other video formats:
- Delivered over HTTP(S): it uses the standard application protocol of the web to deliver all its data
- Segmented: longer videos are broken up into smaller chunks which can be downloaded independently and switched between at runtime
A standard HLS stream consists of a *Master Playlist* which references one or more *Media Playlists*. Each Media Playlist contains one or more sequential video segments. All these components form a logical hierarchy that informs the player of the different quality levels of the video available and how to address the individual segments of video at each of those levels:
![HLS Format](images/hls-format.png)
HLS streams can be delivered in two different modes: a "static" mode for videos that can be played back from any point, often referred to as video-on-demand (VOD); or a "live" mode where later portions of the video become available as time goes by. In the static mode, the Master and Media playlists are fixed. The player is guaranteed that the set of video segments referenced by those playlists will not change over time.
Live mode can work in one of two ways. For truly live events, the most common configuration is for each individual Media Playlist to only include the latest video segment and a small number of consecutive previous segments. In this mode, the player may be able to seek backwards a short time in the video but probably not all the way back to the beginning. In the other live configuration, new video segments can be appended to the Media Playlists but older segments are never removed. This configuration allows the player to seek back to the beginning of the stream at any time during the broadcast and transitions seamlessly to the static stream type when the event finishes.
If you're interested in a more in-depth treatment of the HLS format, check out [Apple's documentation][apple-hls-intro] and the IETF [Draft Specification][hls-spec].
## Dynamic Adaptive Streaming over HTTP
Similar to HLS, [DASH][dash-wiki] content is segmented and is delivered over HTTP(S).
A DASH stream consists of a *Media Presentation Description* (MPD) that describes segment metadata such as timing information, URLs, resolution and bitrate. Each segment can contain either ISO base media file format (e.g. MP4) or MPEG-2 TS data. Typically, the MPD will describe the various *Representations* that map to collections of segments at different bitrates to allow bitrate selection. These Representations can be organized as a SegmentList, SegmentTemplate, SegmentBase, or SegmentTimeline.
DASH streams can be delivered in both video-on-demand (VOD) and live streaming modes. In the VOD case, the MPD describes all the segments and representations available, and the player can choose which representation to play based on its capabilities.
Live mode is accomplished using the ISOBMFF Live profile if the segments are in ISOBMFF. There are a few different ways to set up the MPD including but not limited to updating the MPD after an interval of time, using *Periods*, or using the *availabilityTimeOffset* field. A few examples of this are provided by the [DASH Reference Client][dash-if-reference-client]. The MPD will provide enough information for the player to play back the live stream and seek back as far as is specified in the MPD.
If you're interested in a more in-depth description of MPEG-DASH, check out [MDN's tutorial on setting up DASH][mdn-dash-tut] or the [DASHIF Guidelines][dash-if-guide].
# Further Documentation
- [Architecture](arch.md)
- [Glossary](glossary.md)
- [Adaptive Bitrate Switching](bitrate-switching.md)
- [Multiple Alternative Audio Tracks](multiple-alternative-audio-tracks.md)
- [reloadSourceOnError](reload-source-on-error.md)
# Helpful Tools
- [FFmpeg](http://trac.ffmpeg.org/wiki/CompilationGuide)
- [Thumbcoil](http://thumb.co.il/): web based video inspector
[hls]: /docs/intro.md#http-live-streaming
[dash]: /docs/intro.md#dynamic-adaptive-streaming-over-http
[apple-hls-intro]: https://developer.apple.com/library/ios/documentation/NetworkingInternet/Conceptual/StreamingMediaGuide/Introduction/Introduction.html
[hls-spec]: https://datatracker.ietf.org/doc/draft-pantos-http-live-streaming/
[dash-wiki]: https://en.wikipedia.org/wiki/Dynamic_Adaptive_Streaming_over_HTTP
[dash-if-reference-client]: https://reference.dashif.org/dash.js/
[mdn-dash-tut]: https://developer.mozilla.org/en-US/Apps/Fundamentals/Audio_and_video_delivery/Setting_up_adaptive_streaming_media_sources
[dash-if-guide]: http://dashif.org/guidelines/
## HLS Project Overview
This project has three primary duties:
1. Download and parse playlist files
1. Implement the [HTMLVideoElement](https://html.spec.whatwg.org/multipage/embedded-content.html#the-video-element) interface
1. Feed content bits to a SourceBuffer by downloading and transmuxing video segments
### Playlist Management
The [playlist loader](../src/playlist-loader.js) handles all of the details of requesting, parsing, updating, and switching playlists at runtime. Its operation is described by this state diagram:
![Playlist Loader States](images/playlist-loader-states.png)
During VOD playback, the loader will move quickly to the HAVE_METADATA state and then stay there unless a quality switch request sends it to SWITCHING_MEDIA while it fetches an alternate playlist. The loader enters the HAVE_CURRENT_METADATA state when a live stream is detected and it's time to refresh the current media playlist to find out about new video segments.
### HLS Tech
Currently, the HLS project integrates with [video.js](http://www.videojs.com/) as a [tech](https://github.com/videojs/video.js/blob/master/docs/guides/tech.md). That means it's responsible for providing an interface that closely mirrors the `<video>` element. You can see that implementation in [videojs-http-streaming.js](../src/videojs-http-streaming.js), the primary entry point of the project.
### Transmuxing
Most browsers don't have support for the file type that HLS video segments are stored in. To get HLS playing back on those browsers, contrib-hls strings together a number of technologies:
1. The [Netstream](http://help.adobe.com/en_US/FlashPlatform/reference/actionscript/3/flash/net/NetStream.html) in [video.js SWF](https://github.com/videojs/video-js-swf) has a special mode of operation that allows binary video data packaged as an [FLV](http://en.wikipedia.org/wiki/Flash_Video) to be provided directly
1. [videojs-contrib-media-sources](https://github.com/videojs/videojs-contrib-media-sources) provides an abstraction layer over the SWF that operates like a [Media Source](https://w3c.github.io/media-source/#mediasource)
1. A pure javascript transmuxer that repackages HLS segments as FLVs
Transmuxing is the process of transforming media stored in one container format into another container without modifying the underlying media data. If that last sentence doesn't make any sense to you, check out the [Introduction to Media](media.md) for more details.
### Buffer Management
Buffering in contrib-hls is driven by two functions in videojs-hls.js: fillBuffer() and drainBuffer(). During its operation, contrib-hls periodically calls fillBuffer() which determines when more video data is required and begins a segment download if so. Meanwhile, drainBuffer() is invoked periodically during playback to process incoming segments and append them onto the [SourceBuffer](http://w3c.github.io/media-source/#sourcebuffer). In conjunction with a goal buffer length, this producer-consumer relationship drives the buffering behavior of contrib-hls.
# Adaptive Switching Behavior
The HLS tech tries to ensure the highest-quality viewing experience
possible, given the available bandwidth and encodings. This doesn't
always mean using the highest-bitrate rendition available; if the player
is 300px by 150px, it would be a big waste of bandwidth to download a 4k
stream. By default, the player attempts to load the highest-bitrate
variant that is less than the most recently detected segment bandwidth,
with one condition: if there are multiple variants with dimensions greater
than the current player size, it will only switch up one size greater
than the current player size.
If you're the visual type, the whole process is illustrated
below. Whenever a new segment is downloaded, we calculate the download
bitrate based on the size of the segment and the time it took to
download:
![New bitrate info is available](images/bitrate-switching-1.png)
First, we filter out all the renditions that have a higher bitrate
than the new measurement:
![Bitrate filtering](images/bitrate-switching-2.png)
Then we get rid of any renditions that are bigger than the current
player dimensions:
![Resolution filtering](images/bitrate-switching-3.png)
We don't want a significant quality drop just because your player is
one pixel too small, so we add back in the next highest
resolution. The highest bitrate rendition that remains is the one that
gets used:
![Final selection](images/bitrate-switching-4.png)
If it turns out no rendition is acceptable based on the filtering
described above, the first encoding listed in the master playlist will
be used.
If you'd like your player to use a different set of priorities, it's
possible to completely replace the rendition selection logic. For
instance, you could always choose the most appropriate rendition by
resolution, even though this might mean more stalls during playback.
See the documentation on `player.vhs.selectPlaylist` for more details.
# Creating Content
## Commands for creating test streams
### Streams with EXT-X-PROGRAM-DATE-TIME for testing seekToProgramTime and convertToProgramTime
* `lavfi` and `testsrc` are provided for creating a test stream in ffmpeg
* `-g 300` sets the GOP size to 300 (keyframe interval; at 30fps, one keyframe every 10 seconds)
* `-f hls` sets the format to HLS (creates an m3u8 and TS segments)
* `-hls_time 10` sets the goal segment size to 10 seconds
* `-hls_list_size 20` sets the number of segments in the m3u8 file to 20
* `-hls_flags program_date_time` sets `#EXT-X-PROGRAM-DATE-TIME` on each segment
```
ffmpeg \
-f lavfi \
-i testsrc=duration=200:size=1280x720:rate=30 \
-g 300 \
-f hls \
-hls_time 10 \
-hls_list_size 20 \
-hls_flags program_date_time \
stream.m3u8
```
## Commands used for segments in `test/segments` dir
### video.ts
Copy only the first two video frames, leave out audio.
```
$ ffmpeg -i index0.ts -vframes 2 -an -vcodec copy video.ts
```
### audio.ts
Copy only the first two audio frames, leave out video.
```
$ ffmpeg -i index0.ts -aframes 2 -vn -acodec copy audio.ts
```
### caption.ts
Copy the first two frames of video out of a ts segment that already includes CEA-608 captions.
`ffmpeg -i index0.ts -vframes 2 -an -vcodec copy caption.ts`
### id3.ts
Copy only the first five frames of video, leave out audio.
`ffmpeg -i index0.ts -vframes 5 -an -vcodec copy smaller.ts`
Create an ID3 tag using [id3taggenerator][apple_streaming_tools]:
`id3taggenerator -text "{\"id\":1, \"data\": \"id3\"}" -o tag.id3`
Create a file `macro.txt` with the following:
`0 id3 tag.id3`
Run [mediafilesegmenter][apple_streaming_tools] with the small video segment and macro file, to produce a new segment with ID3 tags inserted at the specified times.
`mediafilesegmenter -start-segments-with-iframe --target-duration=1 --meta-macro-file=macro.txt -s -A smaller.ts`
### mp4Video.mp4
Copy only the first two video frames, leave out audio.
movflags:
* frag\_keyframe: "Start a new fragment at each video keyframe."
* empty\_moov: "Write an initial moov atom directly at the start of the file, without describing any samples in it."
* omit\_tfhd\_offset: "Do not write any absolute base\_data\_offset in tfhd atoms. This avoids tying fragments to absolute byte positions in the file/streams." (see also: https://www.w3.org/TR/mse-byte-stream-format-isobmff/#movie-fragment-relative-addressing)
```
$ ffmpeg -i file.mp4 -movflags frag_keyframe+empty_moov+omit_tfhd_offset -vframes 2 -an -vcodec copy mp4Video.mp4
```
### mp4Audio.mp4
Copy only the first two audio frames, leave out video.
movflags:
* frag\_keyframe: "Start a new fragment at each video keyframe."
* empty\_moov: "Write an initial moov atom directly at the start of the file, without describing any samples in it."
* omit\_tfhd\_offset: "Do not write any absolute base\_data\_offset in tfhd atoms. This avoids tying fragments to absolute byte positions in the file/streams." (see also: https://www.w3.org/TR/mse-byte-stream-format-isobmff/#movie-fragment-relative-addressing)
```
$ ffmpeg -i file.mp4 -movflags frag_keyframe+empty_moov+omit_tfhd_offset -aframes 2 -vn -acodec copy mp4Audio.mp4
```
### mp4VideoInit.mp4 and mp4AudioInit.mp4
Using DASH as the format type (-f) will lead to two init segments, one for video and one for audio. Using HLS will lead to one joined.
The resulting init segments were renamed from `.m4s` to `.mp4`.
```
$ ffmpeg -i input.mp4 -f dash out.mpd
```
### webmVideoInit.webm and webmVideo.webm
```
$ cat mp4VideoInit.mp4 mp4Video.mp4 > video.mp4
$ ffmpeg -i video.mp4 -dash_segment_type webm -c:v libvpx-vp9 -f dash output.mpd
$ mv init-stream0.webm webmVideoInit.webm
$ mv chunk-stream0-00001.webm webmVideo.webm
```
## Other useful commands
### Joined (audio and video) initialization segment (for HLS)
Using DASH as the format type (-f) will lead to two init segments, one for video and one for audio. Using HLS will lead to one joined.
Note that -hls\_fmp4\_init\_filename defaults to init.mp4, but is here for readability.
Without specifying fmp4 for hls\_segment\_type, ffmpeg defaults to ts.
```
$ ffmpeg -i input.mp4 -f hls -hls_fmp4_init_filename init.mp4 -hls_segment_type fmp4 out.m3u8
```
[apple_streaming_tools]: https://developer.apple.com/documentation/http_live_streaming/about_apple_s_http_live_streaming_tools
# DASH Playlist Loader
## Purpose
The [DashPlaylistLoader][dpl] (DPL) is responsible for requesting MPDs, parsing them and keeping track of the media "playlists" associated with the MPD. The [DPL] is used with a [SegmentLoader][sl] to load fmp4 fragments from a DASH source.
## Basic Responsibilities
1. To request an MPD.
2. To parse an MPD into a format [videojs-http-streaming][vhs] can understand.
3. To refresh MPDs according to their minimumUpdatePeriod.
4. To allow selection of a specific media stream.
5. To sync the client clock with a server clock according to the UTCTiming node.
6. To refresh a live MPD for changes.
## Design
The [DPL] is written to be as similar as possible to the [PlaylistLoader][pl]. This means that the majority of the public API for these two classes is the same, and so are the states they go through and the events they trigger.
### States
![DashPlaylistLoader States](images/dash-playlist-loader-states.nomnoml.svg)
- `HAVE_NOTHING` the state before the MPD is received and parsed.
- `HAVE_MASTER` the state before a media stream is setup but the MPD has been parsed.
- `HAVE_METADATA` the state after a media stream is setup.
### API
- `load()` this will either start or kick the loader during playback.
- `start()` this will start the [DPL] and request the MPD.
- `parseMasterXml()` this will parse the MPD manifest and return the result.
- `media()` this will return the currently active media stream or set a new active media stream.
### Events
- `loadedplaylist` signals the setup of a master playlist, representing the DASH source as a whole, from the MPD; or a media playlist, representing a media stream.
- `loadedmetadata` signals initial setup of a media stream.
- `minimumUpdatePeriod` signals that an update period has ended and the MPD must be requested again.
- `playlistunchanged` signals that no changes have been made to an MPD.
- `mediaupdatetimeout` signals that a live MPD and media stream must be refreshed.
- `mediachanging` signals that the currently active media stream is going to be changed.
- `mediachange` signals that the new media stream has been updated.
### Interaction with Other Modules
![DPL with MPC and MG](images/dash-playlist-loader-mpc-mg-sequence.plantuml.png)
### Special Features
There are a few features of [DPL] that are different from [PL] due to fundamental differences between HLS and DASH standards.
#### MinimumUpdatePeriod
This is a time period specified in the MPD after which the MPD should be re-requested and parsed. There could be any number of changes to the MPD between these update periods.
#### SyncClientServerClock
There is a UTCTiming node in the MPD that allows the client clock to be synced with a clock on the server. This may affect the results of parsing the MPD.
#### Requesting `sidx` Boxes
To be filled out.
### Previous Behavior
Until version 1.9.0 of [VHS], we thought that [DPL] could skip the `HAVE_NOTHING` and `HAVE_MASTER` states, as no other XHR requests are needed once the MPD has been downloaded and parsed. However, this is incorrect as there are some Presentations that signal the use of a "Segment Index box" or `sidx`. This `sidx` references specific byte ranges in a file that could contain media or potentially other `sidx` boxes.
A DASH MPD that describes a `sidx` is therefore similar to an HLS master manifest, in that the MPD contains references to something that must be requested and parsed first before references to media segments can be obtained. With this in mind, it was necessary to update the initialization and state transitions of [DPL] to allow further XHR requests to be made after the initial request for the MPD.
### Current Behavior
In [this PR](https://github.com/videojs/http-streaming/pull/386), the [DPL] was updated to go through the `HAVE_NOTHING` and `HAVE_MASTER` states before arriving at `HAVE_METADATA`. If the MPD does not contain `sidx` boxes, then this transition happens quickly after `load()` is called, spending little time in the `HAVE_MASTER` state.
The initial media selection for `masterPlaylistLoader` is made in the `loadedplaylist` handler located in [MasterPlaylistController][mpc]. We now use `hasPendingRequest` to determine whether to automatically select a media playlist for the `masterPlaylistLoader` as a fallback in case one is not selected by [MPC]. The child [DPL]s are created with a media playlist passed in as an argument, so this fallback is not necessary for them. Instead, that media playlist is saved and auto-selected once we enter the `HAVE_MASTER` state.
The `updateMaster` method will return `null` if no updates are found.
The `selectinitialmedia` event is not triggered until an audioPlaylistLoader (which for DASH is always a child [DPL]) has a media playlist. This is signaled by triggering `loadedmetadata` on the respective [DPL]. This event is used to initialize the [Representations API][representations] and setup EME (see [contrib-eme]).
[dpl]: ../src/dash-playlist-loader.js
[sl]: ../src/segment-loader.js
[vhs]: intro.md
[pl]: ../src/playlist-loader.js
[mpc]: ../src/master-playlist-controller.js
[representations]: ../README.md#hlsrepresentations
[contrib-eme]: https://github.com/videojs/videojs-contrib-eme
# Glossary
**Playlist**: This is a representation of an HLS or DASH manifest.
**Media Playlist**: This is a manifest that represents a single rendition or media stream of the source.
**Master Playlist Controller**: This acts as the main controller for the playback engine. It interacts with the SegmentLoaders, PlaylistLoaders, PlaybackWatcher, etc.
**Playlist Loader**: This will request the source and load the master manifest. It is also instructed by the ABR algorithm to load a media playlist, or wraps a media playlist if one is provided as the source. There are more details about the playlist loader [here](./arch.md).
**DASH Playlist Loader**: This will do as the PlaylistLoader does, but for DASH sources. It also handles DASH-specific functionality, such as refreshing the MPD according to the minimumUpdatePeriod and synchronizing to a server clock.
**Segment Loader**: This determines which segment should be loaded, requests it via the Media Segment Loader and passes the result to the Source Updater.
**Media Segment Loader**: This requests a given segment, decrypts the segment if necessary, and returns it to the Segment Loader.
**Source Updater**: This manages the browser's [SourceBuffers](https://developer.mozilla.org/en-US/docs/Web/API/SourceBuffer). It appends decrypted segment bytes provided by the Segment Loader to the corresponding Source Buffer.
**ABR(Adaptive Bitrate) Algorithm**: This concept is described more in detail [here](https://en.wikipedia.org/wiki/Adaptive_bitrate_streaming). Our chosen ABR algorithm is referenced by [selectPlaylist](../README.md#hlsselectplaylist) and is described more [here](./bitrate-switching.md).
**Playback Watcher**: This attempts to resolve common playback stalls caused by improper seeking, gaps in content, and browser issues.
**Sync Controller**: This will attempt to create a mapping between the segment index and a display time on the player.
# Encrypted HTTP Live Streaming
The [HLS spec](http://tools.ietf.org/html/draft-pantos-http-live-streaming-13#section-6.2.3) requires segments to be encrypted with AES-128 in CBC mode with PKCS7 padding. You can encrypt data to that specification with a combination of [OpenSSL](https://www.openssl.org/) and the [pkcs7 utility](https://github.com/brightcove/pkcs7). From the command-line:
```sh
# encrypt the text "hello" into a file
# since this is for testing, skip the key salting so the output is stable
# using -nosalt outside of testing is a terrible idea!
echo -n "hello" | pkcs7 | \
openssl enc -aes-128-cbc -nopad -nosalt -K $KEY -iv $IV > hello.encrypted
# xxd is a handy way of translating binary into a format easily consumed by
# javascript
xxd -i hello.encrypted
```
Later, you can decrypt it:
```sh
openssl enc -d -nopad -aes-128-cbc -K $KEY -iv $IV
```
[SVG image: DashPlaylistLoader states: HAVE_NOTHING -> load() -> HAVE_MASTER -> media() -> HAVE_METADATA]
[SVG image: H264 Network Abstraction Layer (NAL) Unit: a 1-byte Header followed by the Raw Bitstream Payload (RBSP)]
[SVG image: PlaylistLoader states: HAVE_NOTHING -> load() -> HAVE_MASTER -> media() -> SWITCHING_MEDIA -> media()/start() -> HAVE_METADATA <-> mediaupdatetimeout <-> HAVE_CURRENT_METADATA]
@startuml
header DashPlaylistLoader sequences
title DashPlaylistLoader sequences: Master Manifest with Alternate Audio
Participant "MasterPlaylistController" as MPC #red
Participant "MasterDashPlaylistLoader" as MPL #blue
Participant "mainSegmentLoader" as SL #blue
Participant "AudioDashPlaylistLoader" as APL #green
Participant "audioSegmentLoader" as ASL #green
Participant "external server" as ext #brown
Participant "mpdParser" as parser #orange
Participant "mediaGroups" as MG #purple
Participant Tech #lightblue
== Initialization ==
MPC -> MPL : construct MasterPlaylistLoader
MPC -> MPL: load()
== Requesting Master Manifest ==
MPL -> MPL : start()
MPL -> ext: xhr request for master manifest
ext -> MPL : response with master manifest
MPL -> parser: parse manifest
parser -> MPL: object representing manifest
note over MPL #lightblue: trigger 'loadedplaylist'
== Requesting Video Manifest ==
note over MPL #lightblue: handling loadedplaylist
MPL -> MPL: media(x)
alt if no sidx
note over MPL #lightgray: zero delay to fake network request
else if sidx
break
MPL -> ext: request sidx
end
end
note over MPL #lightblue: trigger 'loadedmetadata' on master loader [T1]
note over MPL #lightblue: handling 'loadedmetadata'
opt vod and preload !== 'none'
MPL -> SL: playlist()
MPL -> SL: load()
end
== Initializing Media Groups, Choosing Active Tracks ==
MPL -> MG: setupMediaGroups()
MG -> MG: initialize()
== Initializing Alternate Audio Loader ==
MG -> APL: create child playlist loader for alt audio
MG -> MG: activeGroup and audio variant selected
MG -> MG: enable activeTrack, onTrackChanged()
MG -> ASL: reset audio segment loader
== Requesting Alternate Audio Manifest ==
MG -> MG: startLoaders()
MG -> APL: load()
APL -> APL: start()
APL -> APL: zero delay to fake network request
break finish pending tasks
MG -> Tech: add audioTrack
MPL -> MPC: setupSourceBuffers_()
MPL -> MPC: setupFirstPlay()
loop mainSegmentLoader.monitorBufferTick_()
SL -> ext: requests media segments
ext -> SL: response with media segment bytes
end
end
APL -> APL: zero delay over
APL -> APL: media(x)
alt if no sidx
note over APL #lightgray: zero delay to fake network request
else if sidx
break
MPL -> ext: request sidx
end
end
== Requesting Alternate Audio Segments ==
note over APL #lightblue: trigger 'loadedplaylist'
note over APL #lightblue: handling 'loadedplaylist'
APL -> ASL: playlist()
note over ASL #lightblue: trigger 'loadedmetadata' [T2]
note over APL #lightblue: handling 'loadedmetadata'
APL -> ASL: playlist()
APL -> ASL: load()
loop audioSegmentLoader.monitorBufferTick_()
ASL -> ext: requests media segments
ext -> ASL: response with media segment bytes
end
@enduml
#title: DASH Playlist Loader States
#arrowSize: 0.5
#bendSize: 1
#direction: down
#gutter: 10
#edgeMargin: 1
#edges: rounded
#fillArrows: false
#font: Arial
#fontSize: 10
#leading: 1
#lineWidth: 2
#padding: 20
#spacing: 50
#stroke: #33322E
#zoom: 1
#.label: align=center visual=none italic
[HAVE_NOTHING] load()-> [HAVE_MASTER]
[HAVE_MASTER] media()-> [HAVE_METADATA]
@startuml
header PlaylistLoader sequences
title PlaylistLoader sequences: Master Manifest and Alternate Audio
Participant "MasterPlaylistController" as MPC #red
Participant "MasterPlaylistLoader" as MPL #blue
Participant "mainSegmentLoader" as SL #blue
Participant "AudioPlaylistLoader" as APL #green
Participant "audioSegmentLoader" as ASL #green
Participant "external server" as ext #brown
Participant "m3u8Parser" as parser #orange
Participant "mediaGroups" as MG #purple
Participant Tech #lightblue
== Initialization ==
group MasterPlaylistController.constructor()
MPC -> MPL : setting up MasterPlaylistLoader
note left #lightyellow
sets up mediaupdatetimeout
handler for live playlist staleness
end note
note over MPL #lightgray: state = 'HAVE_NOTHING'
MPC -> MPL: load()
end
group MasterPlaylistLoader.load()
MPL -> MPL : start()
note left #lightyellow: not started yet
== Requesting Master Manifest ==
group start()
note over MPL #lightgray: started = true
MPL -> ext: xhr request for master manifest
ext -> MPL : response with master manifest
MPL -> parser: parse master manifest
parser -> MPL: object representing manifest
MPL -> MPL: set loader's master playlist
note over MPL #lightgray: state = 'HAVE_MASTER'
note over MPL #lightblue: trigger 'loadedplaylist' on master loader
== Requesting Video Manifest ==
group 'loadedplaylist' handler
note over MPL #lightblue: handling loadedplaylist
MPL -> MPL : media()
note left #lightgray: select initial (video) playlist
note over MPL #lightyellow: state = 'SWITCHING_MEDIA'
group media()
MPL -> ext : request child manifest
ext -> MPL: child manifest returned
MPL -> MPL: haveMetadata()
note over MPL #lightyellow: state = 'HAVE_METADATA'
group haveMetadata()
MPL -> parser: parse child manifest
parser -> MPL: object representing the child manifest
note over MPL #lightyellow
update master and media playlists
end note
opt live
MPL -> MPL: setup mediaupdatetimeout
end
note over MPL #lightblue
trigger 'loadedplaylist' on master loader.
This does not end up requesting segments
at this point.
end note
group MasterPlaylistLoader 'loadedplaylist' handler
MPL -> MPL : setup durationchange handler
end
end
== Requesting Video Segments ==
note over MPL #lightblue: trigger 'loadedmetadata'
group 'loadedmetadata' handler
note over MPL #lightblue: handling 'loadedmetadata'
opt vod and preload !== 'none'
MPL -> SL: playlist()
note over SL #lightyellow: updates playlist
MPL -> SL: load()
note right #lightgray
This does nothing as mimeTypes
have not been set yet.
end note
end
MPL -> MG: setupMediaGroups()
== Initializing Media Groups, Choosing Active Tracks ==
group MediaGroups.setupMediaGroups()
group initialize()
MG -> APL: create child playlist loader for alt audio
note over APL #lightyellow: state = 'HAVE_NOTHING'
note left #lightgray
setup 'loadedmetadata' and 'loadedplaylist' listeners
on child alt audio playlist loader
end note
MG -> Tech: add audioTracks
end
MG -> MG: activeGroup and audio variant selected
MG -> MG: enable activeTrack, onTrackChanged()
note left #lightgray
There is no activePlaylistLoader at this point,
but there is an audio playlistLoader
end note
group onTrackChanged()
MG -> SL: reset mainSegmentLoader
note left #lightgray: Clears buffer, aborts all inflight requests
== Requesting Alternate Audio Manifest ==
MG -> MG: startLoaders()
group startLoaders()
note over MG #lightyellow
activePlaylistLoader = AudioPlaylistLoader
end note
MG -> APL: load()
end
group AudioPlaylistLoader.load()
APL -> APL: start()
group alt start()
note over APL #lightyellow: started = true
APL -> ext: request alt audio media manifest
break MasterPlaylistLoader 'loadedmetadata' handler
MPL -> MPC: setupSourceBuffers()
note left #lightgray
This will set mimeType.
Segments can be loaded from now on.
end note
MPL -> MPC: setupFirstPlay()
note left #lightgray
Immediate exit since the player
is paused
end note
end
ext -> APL: responds with child manifest
APL -> parser: parse child manifest
parser -> APL: object representing child manifest returned
note over APL #lightyellow: state = 'HAVE_MASTER'
note left #lightgray: Infer a master playlist
APL -> APL: haveMetadata()
note over APL #lightyellow: state = 'HAVE_METADATA'
group haveMetadata()
APL -> parser: parsing the child manifest again
parser -> APL: returning object representing child manifest
note over APL #lightyellow
update master and media references
end note
== Requesting Alternate Audio Segments ==
note over APL #lightblue: trigger 'loadedplaylist'
group 'loadedplaylist' handler
note over APL #lightblue: handling 'loadedplaylist'
APL -> ASL: playlist()
note over ASL #lightyellow: set playlist
end
end
note over APL #lightblue: trigger 'loadedmetadata'
group 'loadedmetadata' handler
note over APL #lightblue: handling 'loadedmetadata'
APL -> ASL: playlist()
APL -> ASL: load()
loop audioSegmentLoader.load()
ASL -> ext: requests media segments
ext -> ASL: response with media segment bytes
end
end
end
end
end
end
end
end
end
end
@enduml
@startuml
header PlaylistLoader sequences
title PlaylistLoader sequences: Master Manifest and Alternate Audio
Participant "MasterPlaylistController" as MPC #red
Participant "MasterPlaylistLoader" as MPL #blue
Participant "mainSegmentLoader" as SL #blue
Participant "AudioPlaylistLoader" as APL #green
Participant "audioSegmentLoader" as ASL #green
Participant "external server" as ext #brown
Participant "m3u8Parser" as parser #orange
Participant "mediaGroups" as MG #purple
Participant Tech #lightblue
== Initialization ==
MPC -> MPL : construct MasterPlaylistLoader
MPC -> MPL: load()
MPL -> MPL : start()
== Requesting Master Manifest ==
MPL -> ext: xhr request for master manifest
ext -> MPL : response with master manifest
MPL -> parser: parse master manifest
parser -> MPL: object representing manifest
note over MPL #lightblue: trigger 'loadedplaylist'
== Requesting Video Manifest ==
note over MPL #lightblue: handling loadedplaylist
MPL -> MPL : media()
MPL -> ext : request child manifest
ext -> MPL: child manifest returned
MPL -> parser: parse child manifest
parser -> MPL: object representing the child manifest
note over MPL #lightblue: trigger 'loadedplaylist'
note over MPL #lightblue: handling 'loadedplaylist'
MPL -> SL: playlist()
MPL -> SL: load()
== Requesting Video Segments ==
note over MPL #lightblue: trigger 'loadedmetadata'
note over MPL #lightblue: handling 'loadedmetadata'
opt vod and preload !== 'none'
MPL -> SL: playlist()
MPL -> SL: load()
end
MPL -> MG: setupMediaGroups()
== Initializing Media Groups, Choosing Active Tracks ==
MG -> APL: create child playlist loader for alt audio
MG -> MG: activeGroup and audio variant selected
MG -> MG: enable activeTrack, onTrackChanged()
MG -> SL: reset mainSegmentLoader
== Requesting Alternate Audio Manifest ==
MG -> MG: startLoaders()
MG -> APL: load()
APL -> APL: start()
APL -> ext: request alt audio media manifest
break finish pending tasks
MG -> Tech: add audioTracks
MPL -> MPC: setupSourceBuffers()
MPL -> MPC: setupFirstPlay()
loop on monitorBufferTick
SL -> ext: requests media segments
ext -> SL: response with media segment bytes
end
end
ext -> APL: responds with child manifest
APL -> parser: parse child manifest
parser -> APL: object representing child manifest returned
== Requesting Alternate Audio Segments ==
note over APL #lightblue: trigger 'loadedplaylist'
note over APL #lightblue: handling 'loadedplaylist'
APL -> ASL: playlist()
note over APL #lightblue: trigger 'loadedmetadata'
note over APL #lightblue: handling 'loadedmetadata'
APL -> ASL: playlist()
APL -> ASL: load()
loop audioSegmentLoader.load()
ASL -> ext: requests media segments
ext -> ASL: response with media segment bytes
end
@enduml

View File

@@ -1,25 +0,0 @@
#title: Playlist Loader States
#arrowSize: 0.5
#bendSize: 1
#direction: down
#gutter: 10
#edgeMargin: 1
#edges: rounded
#fillArrows: false
#font: Arial
#fontSize: 10
#leading: 1
#lineWidth: 2
#padding: 20
#spacing: 50
#stroke: #33322E
#zoom: 1
#.label: align=center visual=none italic
[HAVE_NOTHING] load()-> [HAVE_MASTER]
[HAVE_MASTER] media()-> [SWITCHING_MEDIA]
[SWITCHING_MEDIA] media()/ start()-> [HAVE_METADATA]
[HAVE_METADATA] <--> [<label> mediaupdatetimeout]
[<label> mediaupdatetimeout] <--> [HAVE_CURRENT_METADATA]

View File

@@ -1,13 +0,0 @@
@startuml
state "Download Segment" as DL
state "Prepare for Append" as PfA
[*] -> DL
DL -> PfA
PfA : transmux (if needed)
PfA -> Append
Append : MSE source buffer
Append -> [*]
@enduml

Binary file not shown.


View File

@@ -1,57 +0,0 @@
@startuml
participant SegmentLoader order 1
participant "media-segment-request" order 2
participant "videojs-contrib-media-sources" order 3
participant mux.js order 4
participant "Native Source Buffer" order 5
SegmentLoader -> "media-segment-request" : mediaSegmentRequest(...)
group Request
"media-segment-request" -> SegmentLoader : doneFn(...)
note left
At end of all requests
(key/segment/init segment)
end note
SegmentLoader -> SegmentLoader : handleSegment(...)
note left
"Probe" (parse) segment for
timing and track information
end note
SegmentLoader -> "videojs-contrib-media-sources" : append to "fake" source buffer
note left
Source buffer here is a
wrapper around native buffers
end note
group Transmux
"videojs-contrib-media-sources" -> mux.js : postMessage(...setAudioAppendStart...)
note left
Used for checking for overlap when
prefixing audio with silence.
end note
"videojs-contrib-media-sources" -> mux.js : postMessage(...alignGopsWith...)
note left
Used for aligning gops when overlapping
content (switching renditions) to fix
some browser glitching.
end note
"videojs-contrib-media-sources" -> mux.js : postMessage(...push...)
note left
Pushes bytes into the transmuxer pipeline.
end note
"videojs-contrib-media-sources" -> mux.js : postMessage(...flush...)
"mux.js" -> "videojs-contrib-media-sources" : postMessage(...data...)
"videojs-contrib-media-sources" -> "Native Source Buffer" : append
"Native Source Buffer" -> "videojs-contrib-media-sources" : //updateend//
"videojs-contrib-media-sources" -> SegmentLoader : handleUpdateEnd(...)
end
end
SegmentLoader -> SegmentLoader : handleUpdateEnd_()
note left
Saves segment timing info
and starts next request.
end note
@enduml

Binary file not shown.


View File

@@ -1,29 +0,0 @@
@startuml
state "Request Segment" as RS
state "Partial Response (1)" as PR1
state "..." as DDD
state "Partial Response (n)" as PRN
state "Prepare for Append (1)" as PfA1
state "Prepare for Append (n)" as PfAN
state "Append (1)" as A1
state "Append (n)" as AN
[*] -> RS
RS --> PR1
PR1 --> DDD
DDD --> PRN
PR1 -> PfA1
PfA1 : transmux (if needed)
PfA1 -> A1
A1 : MSE source buffer
PRN -> PfAN
PfAN : transmux (if needed)
PfAN -> AN
AN : MSE source buffer
AN --> [*]
@enduml

Binary file not shown.


View File

@@ -1,109 +0,0 @@
# LHLS
### Table of Contents
* [Background](#background)
* [Current Support for LHLS in VHS](#current-support-for-lhls-in-vhs)
* [Request Segment Pieces](#request-segment-pieces)
* [Transmux and Append Segment Pieces](#transmux-and-append-segment-pieces)
* [videojs-contrib-media-sources background](#videojs-contrib-media-sources-background)
* [Transmux Before Append](#transmux-before-append)
* [Transmux Within media-segment-request](#transmux-within-media-segment-request)
* [mux.js](#muxjs)
* [The New Flow](#the-new-flow)
* [Resources](#resources)
### Background
LHLS stands for Low-Latency HLS (see [Periscope's post](https://medium.com/@periscopecode/introducing-lhls-media-streaming-eb6212948bef)). It's meant to be used for ultra low latency live streaming, where a server can send pieces of a segment before the segment is done being written, and the player can append those pieces to the browser's buffer, allowing sub-segment-duration latency from true live.
In order to support LHLS, a few components are required:
* A server that supports [chunked transfer encoding](https://en.wikipedia.org/wiki/Chunked_transfer_encoding).
* A client that can:
* request segment pieces
* transmux segment pieces (for browsers that don't natively support the media type)
* append segment pieces
### Current Support for LHLS in VHS
At the moment, VHS doesn't support any of the client requirements: it waits until a request completes before handling the response, and the transmuxer expects full segments.
Current flow:
![current flow](./current-flow.plantuml.png)
Expected flow:
![expected flow](./expected-flow.plantuml.png)
### Request Segment Pieces
The first change was to request pieces of a segment. There are a few approaches to accomplish this:
* [Range Requests](https://developer.mozilla.org/en-US/docs/Web/HTTP/Range_requests)
* requires server support
* more round trips
* [Fetch](https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API)
* limited browser support
* doesn't support aborts
* [Plain text MIME type](https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest/Sending_and_Receiving_Binary_Data)
* slightly non-standard
* incurs a cost of converting from string to bytes
*Plain text MIME type* was chosen because of its wide support. It provides a mechanism to access progressive bytes downloaded on [XMLHttpRequest progress events](https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequestEventTarget/onprogress).
This change was made in [media-segment-request](https://github.com/videojs/http-streaming/blob/master/src/media-segment-request.js).
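As a rough illustration of the technique (a hedged sketch, not the actual media-segment-request code; `segmentUrl` and `handleSegmentBytes` are placeholders), progressive bytes can be read like this:

```js
const xhr = new XMLHttpRequest();

xhr.open('GET', segmentUrl);
// Deliver the response as a binary string so that responseText is readable
// while the download is still in flight.
xhr.overrideMimeType('text/plain; charset=x-user-defined');

let bytesHandled = 0;

xhr.onprogress = function() {
  // Only convert the characters received since the last progress event.
  const newData = xhr.responseText.substring(bytesHandled);

  bytesHandled = xhr.responseText.length;

  const bytes = new Uint8Array(newData.length);

  for (let i = 0; i < newData.length; i++) {
    // Mask to a single byte; this charset can report values above 0xFF.
    bytes[i] = newData.charCodeAt(i) & 0xFF;
  }

  handleSegmentBytes(bytes);
};

xhr.send();
```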
### Transmux and Append Segment Pieces
Getting the progress bytes is easy. Supporting partial transmuxing and appending is harder.
Current flow:
![current transmux and append flow](./current-transmux-and-append-flow.plantuml.png)
In order to support partial transmuxing and appending in the current flow, videojs-contrib-media-sources would have to get more complicated.
##### videojs-contrib-media-sources background
Browsers, via MSE source buffers, only support a limited set of media types. For most browsers, this means MP4/fragmented MP4. HLS uses TS segments (it also supports fragmented MP4, but that case is less common). This is why transmuxing is necessary.
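This is easy to verify in a browser console (a sketch; the exact codec strings are illustrative):

```js
// Most MSE browsers accept fragmented MP4...
MediaSource.isTypeSupported('video/mp4; codecs="avc1.4d401e, mp4a.40.2"'); // usually true
// ...but not raw MPEG-TS, which is why TS segments must be transmuxed.
MediaSource.isTypeSupported('video/mp2t; codecs="avc1.4d401e, mp4a.40.2"'); // usually false
```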
Just like Video.js is a wrapper around the browser video element, bridging compatibility and adding support to extend features, videojs-contrib-media-sources provides support for more media types across different browsers by building in a transmuxer.
Not only did videojs-contrib-media-sources allow us to transmux TS to FMP4, but it also allowed us to transmux TS to FLV for flash support.
Over time, the complexity of logic grew in videojs-contrib-media-sources, and it coupled tightly with videojs-contrib-hls and videojs-http-streaming, firing events to communicate between the two.
Once flash support was moved to a distinct flash module, [via flashls](https://github.com/brightcove/videojs-flashls-source-handler), it was decided to move the videojs-contrib-media-sources logic into VHS, and to remove coupled logic by using only the native source buffers (instead of the wrapper) and transmuxing somewhere within VHS before appending.
##### Transmux Before Append
As the LHLS work started, and videojs-contrib-media-sources needed more logic, the native media source [abstraction leaked](https://en.wikipedia.org/wiki/Leaky_abstraction), adding non-standard functions to work around limitations. In addition, the logic in videojs-contrib-media-sources required more conditional paths, leading to more confusing code.
It was decided that it would be easier to do the transmux before append work in the process of adding support for LHLS. This was widely considered a *good decision*, and provided a means of reducing tech debt while adding in a new feature.
##### Transmux Within media-segment-request
Work started by moving transmuxing into segment-loader; however, we quickly realized that media-segment-request provided a better home.
media-segment-request already handled decrypting segments. If it handled transmuxing as well, then segment-loader could stick with only deciding which segment to request, getting bytes as FMP4, and appending them.
The transmuxing logic moved to a new module called segment-transmuxer, which wrapped around the [WebWorker](https://developer.mozilla.org/en-US/docs/Web/API/Worker/Worker) that wrapped around mux.js (the transmuxer itself).
##### mux.js
While most of the [mux.js pipeline](https://github.com/videojs/mux.js/blob/master/docs/diagram.png) supports pushing pieces of data (and should support LHLS by default), its "flushes" to send transmuxed data back to the caller expected full segments.
Much of the pipeline was reused; however, the top level audio and video segment streams, as well as the entry point, were rewritten so that instead of providing a full segment on flushes, each frame of video is provided individually (audio frames still flush as a group). The new concept of partial flushes was added to the pipeline to handle this case.
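For reference, the pre-existing full-segment entry point looks roughly like this (a hedged sketch of the public mux.js API; `tsBytes` is a placeholder for a complete TS segment):

```js
const muxjs = require('mux.js');

const transmuxer = new muxjs.mp4.Transmuxer();

transmuxer.on('data', (segment) => {
  // segment.initSegment and segment.data are Uint8Arrays that can be
  // concatenated and appended to an MSE source buffer.
});

// Push bytes into the pipeline, then flush to force a full-segment emit.
transmuxer.push(tsBytes);
transmuxer.flush();
```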
##### The New Flow
One benefit to transmuxing before appending is the possibility of extracting track and timing information from the segments. Previously, this required a separate parsing step to happen on the full segment. Now, it is included in the transmuxing pipeline, and comes back to us on separate callbacks.
![new segment loader sequence](./new-segment-loader-sequence.plantuml.png)
### Resources
* https://medium.com/@periscopecode/introducing-lhls-media-streaming-eb6212948bef
* https://github.com/jordicenzano/webserver-chunked-growingfiles

View File

@@ -1,118 +0,0 @@
@startuml
participant SegmentLoader order 1
participant "media-segment-request" order 2
participant XMLHttpRequest order 3
participant "segment-transmuxer" order 4
participant mux.js order 5
SegmentLoader -> "media-segment-request" : mediaSegmentRequest(...)
"media-segment-request" -> XMLHttpRequest : request for segment/key/init segment
group Request
XMLHttpRequest -> "media-segment-request" : //segment progress//
note over "media-segment-request" #moccasin
If handling partial data,
tries to transmux new
segment bytes.
end note
"media-segment-request" -> SegmentLoader : progressFn(...)
note left
Forwards "progress" events from
the XML HTTP Request.
end note
group Transmux
"media-segment-request" -> "segment-transmuxer" : transmux(...)
"segment-transmuxer" -> mux.js : postMessage(...setAudioAppendStart...)
note left
Used for checking for overlap when
prefixing audio with silence.
end note
"segment-transmuxer" -> mux.js : postMessage(...alignGopsWith...)
note left
Used for aligning gops when overlapping
content (switching renditions) to fix
some browser glitching.
end note
"segment-transmuxer" -> mux.js : postMessage(...push...)
note left
Pushes bytes into the transmuxer pipeline.
end note
"segment-transmuxer" -> mux.js : postMessage(...partialFlush...)
note left #moccasin
Collates any complete frame data
from partial segment and
caches remainder.
end note
"segment-transmuxer" -> mux.js : postMessage(...flush...)
note left
Collates any complete frame data
from segment, caches only data
required between segments.
end note
"mux.js" -> "segment-transmuxer" : postMessage(...trackinfo...)
"segment-transmuxer" -> "media-segment-request" : onTrackInfo(...)
"media-segment-request" -> SegmentLoader : trackInfoFn(...)
note left
Gets whether the segment
has audio and/or video.
end note
"mux.js" -> "segment-transmuxer" : postMessage(...audioTimingInfo...)
"segment-transmuxer" -> "media-segment-request" : onAudioTimingInfo(...)
"mux.js" -> "segment-transmuxer" : postMessage(...videoTimingInfo...)
"segment-transmuxer" -> "media-segment-request" : onVideoTimingInfo(...)
"media-segment-request" -> SegmentLoader : timingInfoFn(...)
note left
Gets the audio/video
start/end times.
end note
"mux.js" -> "segment-transmuxer" : postMessage(...caption...)
"segment-transmuxer" -> "media-segment-request" : onCaptions(...)
"media-segment-request" -> SegmentLoader : captionsFn(...)
note left
Gets captions from transmux.
end note
"mux.js" -> "segment-transmuxer" : postMessage(...id3Frame...)
"segment-transmuxer" -> "media-segment-request" : onId3(...)
"media-segment-request" -> SegmentLoader : id3Fn(...)
note left
Gets metadata from transmux.
end note
"mux.js" -> "segment-transmuxer" : postMessage(...data...)
"segment-transmuxer" -> "media-segment-request" : onData(...)
"media-segment-request" -> SegmentLoader : dataFn(...)
note left
Gets an fmp4 segment
ready to be appended.
end note
"mux.js" -> "segment-transmuxer" : postMessage(...done...)
note left
Gathers GOP info, and calls
done callback.
end note
"segment-transmuxer" -> "media-segment-request" : onDone(...)
"media-segment-request" -> SegmentLoader : doneFn(...)
note left
Queues callbacks on source
buffer queue to wait for
appends to complete.
end note
end
XMLHttpRequest -> "media-segment-request" : //segment request finished//
end
SegmentLoader -> SegmentLoader : handleAppendsDone_()
note left
Saves segment timing info
and starts next request.
end note
@enduml

Binary file not shown.


View File

@@ -1,36 +0,0 @@
# Transmux Before Append Changes
## Overview
In moving our transmuxing stage from after append (to a virtual source buffer from videojs-contrib-media-sources) to before appending (to a native source buffer), some changes were required, and others made the logic simpler. What follows are some details into some of the changes made, why they were made, and what impact they will have.
### Source Buffer Creation
In a pre-TBA (transmux before append) world, videojs-contrib-media-sources' source buffers provided an abstraction around the native source buffers. They also required a bit more information than the native buffers. For instance, when creating the source buffers, they used the full MIME types instead of relying on the codec information alone. This provided the container types, which let the virtual source buffer know whether the media needed to be transmuxed or not. In a post-TBA world, the container type is no longer required, therefore only the codec strings are passed along.
In terms of when the source buffers are created, in the post-TBA world, creation is delayed until we are sure we have all of the information we need. This means that we don't create the native source buffers until the PMT is parsed from the main media. Even if the content is demuxed, we only need to parse the main media, since, for now, we don't rely on codec information from the segment itself, and instead use the manifest-provided codec info, or default codecs. While we could create the source buffers earlier if the codec information is provided in the manifest, delaying provides a simpler, single code path, and more opportunity to be flexible about how much codec info the manifest's CODECS attribute provides. While the HLS specification requires this information, other formats may not, and we have seen content that plays fine but does not adhere to the strict rule of providing all necessary codec information.
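In MSE terms, the post-TBA creation step reduces to something like the following (a sketch; the codec strings are illustrative and would come from the manifest or defaults):

```js
const mediaSource = new MediaSource();
const video = document.querySelector('video');

video.src = URL.createObjectURL(mediaSource);

mediaSource.addEventListener('sourceopen', () => {
  // Only codec strings matter now; the container is always fragmented MP4
  // by the time bytes reach the native buffers.
  const videoBuffer = mediaSource.addSourceBuffer('video/mp4; codecs="avc1.4d401e"');
  const audioBuffer = mediaSource.addSourceBuffer('audio/mp4; codecs="mp4a.40.2"');
});
```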
### Appending Init Segments
Previously, init segments were handled by videojs-contrib-media-sources for TS segments and segment-loader for FMP4 segments.
videojs-contrib-media-sources and TS:
* video segments
* append the video init segment returned from the transmuxer with every segment
* audio segments
* append the audio init segment returned from the transmuxer only in the following cases:
* first append
* after timestampOffset is set
* audio track events: change/addtrack/removetrack
* 'mediachange' event
segment-loader and FMP4:
* if segment.map is set:
* save (cache) the init segment after the request finished
* append the init segment directly to the source buffer if the segment loader's activeInitSegmentId doesn't match the segment.map generated init segment ID
With the transmux before append and LHLS changes, we only append video init segments on changes as well. This is more important with LHLS, as prepending an init segment before every frame of video would be wasteful.
### Test Changes
Some tests were removed because they were no longer relevant after the change to creating source buffers later. For instance, `waits for both main and audio loaders to finish before calling endOfStream if main loader starting media is unknown` no longer can be tested by waiting for an audio loader response and checking for end of stream, as the test will time out since MasterPlaylistController will wait for track info from the main loader before the source buffers are created. That condition is checked elsewhere.

View File

@@ -1,29 +0,0 @@
This doc is just a stub right now. Check back later for updates.
# General
When we talk about video, we normally think about it as one monolithic thing. If you ponder it for a moment though, you'll realize it's actually two distinct sorts of information that are presented to the viewer in tandem: a series of pictures and a sequence of audio samples. The temporal nature of audio and video is shared but the techniques used to efficiently transmit them are very different and necessitate a lot of the complexity in video file formats. Bundling up these (at least) two streams into a single package is the first of many issues introduced by the need to serialize video data and is solved by meta-formats called _containers_.
Container formats are probably the most recognizable of the video components because they get the honor of determining the file extension. You've probably heard of MP4, MOV, and WMV, all of which are container formats. Containers specify how to serialize audio, video, and metadata streams into a sequential series of bits and how to unpack them for decoding. A container is basically a box that can hold video information and timed media data:
![Containers](images/containers.png)
- codecs
- containers, multiplexing
# MPEG2-TS
![MPEG2-TS Structure](images/mp2t-structure.png)
![MPEG2-TS Packet Types](images/mp2t-packet-types.png)
- streaming vs storage
- program table
- program map table
- history, context
# H.264
- NAL units
- Annex B vs MP4 elementary stream
- access unit -> sample
# MP4
- origins: quicktime

View File

@@ -1,75 +0,0 @@
# Media Source Extensions Notes
A collection of findings experimenting with Media Source Extensions on
Chrome 36.
* Specifying an audio and video codec when creating a source buffer
but passing in an initialization segment with only a video track
results in a decode error
## ISO Base Media File Format (BMFF)
### Init Segment
A working initialization segment is outlined below. It may be possible
to trim this structure down further.
- `ftyp`
- `moov`
- `mvhd`
- `trak`
- `tkhd`
- `mdia`
- `mdhd`
- `hdlr`
- `minf`
- `mvex`
### Media Segment
The structure of a minimal media segment that actually encapsulates
movie data is outlined below:
- `moof`
- `mfhd`
- `traf`
- `tfhd`
- `tfdt`
- `trun` containing samples
- `mdat`
### Structure
```
sample:  time {number}, data {array}
chunk:   samples {array}
track:   samples {array}
segment: moov {box}, mdats {array} | moof {box}, mdats {array}, data {array}

track
  chunk
    sample

movie fragment -> track fragment -> [samples]
```
### Sample Data Offsets
Movie-fragment Relative Addressing: all trun data offsets are relative
to the containing moof (?).
Without default-base-is-moof, the base data offset for each trun in
trafs after the first is the *end* of the previous traf.
#### iso5/DASH Style
```
moof
|- traf (default-base-is-moof)
|  |- trun_0 <size of moof> + 0
|  `- trun_1 <size of moof> + 100
`- traf (default-base-is-moof)
   `- trun_2 <size of moof> + 300
mdat
|- samples_for_trun_0 (100 bytes)
|- samples_for_trun_1 (200 bytes)
`- samples_for_trun_2
```
#### Single Track Style
```
moof
`- traf
   `- trun_0 <size of moof> + 0
mdat
`- samples_for_trun_0
```

View File

@@ -1,96 +0,0 @@
# Multiple Alternative Audio Tracks
## General
m3u8 manifests with multiple audio streams will have those streams added to `video.js` in an `AudioTrackList`. The `AudioTrackList` can be accessed using `player.audioTracks()` or `tech.audioTracks()`.
## Mapping m3u8 metadata to AudioTracks
The mapping between `AudioTrack` and the parsed m3u8 file is fairly straightforward. The table below shows the mapping:
| m3u8 | AudioTrack |
|---------|------------|
| label | label |
| lang | language |
| default | enabled |
| ??? | kind |
| ??? | id |
As you can see, m3u8s do not have a property for `AudioTrack.id`, which means that we let `video.js` randomly generate the `id` for `AudioTrack`s. This has no real impact on any part of the system, as we do not use the `id` anywhere.
The other property that does not have a mapping in the m3u8 is `AudioTrack.kind`. It was decided that we would set the `kind` to `main` when `default` is set to `true`, and otherwise to `alternative`, unless the track has `characteristics` which include `public.accessibility.describes-video`, in which case we set it to `main-desc` (note that this `kind` indicates that the track is a mix of the main track and description, so it can be played *instead* of the main track; a track with kind `description` *only* has the description, not the main track).
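That rule can be summarized in a few lines (a sketch of the logic described above, not the exact VHS source; `audioTrackKind` is a hypothetical helper name):

```js
const audioTrackKind = (properties) => {
  if (properties.default) {
    return 'main';
  }

  if ((properties.characteristics || '').includes('public.accessibility.describes-video')) {
    return 'main-desc';
  }

  return 'alternative';
};
```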
Below is a basic example of a mapping
m3u8 layout
``` JavaScript
{
  'media-group-1': [{
    'audio-track-1': {
      default: true,
      lang: 'eng'
    },
    'audio-track-2': {
      default: false,
      lang: 'fr'
    },
    'audio-track-3': {
      default: false,
      lang: 'eng',
      characteristics: 'public.accessibility.describes-video'
    }
  }]
}
```
Corresponding AudioTrackList when media-group-1 is used (before any tracks have been changed)
``` JavaScript
[{
  label: 'audio-track-1',
  enabled: true,
  language: 'eng',
  kind: 'main',
  id: 'random'
}, {
  label: 'audio-track-2',
  enabled: false,
  language: 'fr',
  kind: 'alternative',
  id: 'random'
}, {
  label: 'audio-track-3',
  enabled: false,
  language: 'eng',
  kind: 'main-desc',
  id: 'random'
}]
```
## Startup (how tracks are added and used)
> AudioTrack & AudioTrackList live in video.js
1. `HLS` creates a `MasterPlaylistController` and watches for the `loadedmetadata` event
1. `HLS` parses the m3u8 using the `MasterPlaylistController`
1. `MasterPlaylistController` creates a `PlaylistLoader` for the master m3u8
1. `MasterPlaylistController` creates `PlaylistLoader`s for every audio playlist
1. `MasterPlaylistController` creates a `SegmentLoader` for the main m3u8
1. `MasterPlaylistController` creates a `SegmentLoader` for a potential audio playlist
1. `HLS` sees the `loadedmetadata` and finds the currently selected MediaGroup and all the metadata
1. `HLS` removes all `AudioTrack`s from the `AudioTrackList`
1. `HLS` creates `AudioTrack`s for the MediaGroup and adds them to the `AudioTrackList`
1. `HLS` calls `MasterPlaylistController`s `useAudio` with no arguments (causes it to use the currently enabled audio)
1. `MasterPlaylistController` turns off the current audio `PlaylistLoader` if it is on
1. `MasterPlaylistController` maps the `label` to the `PlaylistLoader` containing the audio
1. `MasterPlaylistController` turns on that `PlaylistLoader` and the corresponding `SegmentLoader` (master or audio only)
1. `MediaSource`/`mux.js` determine how to mux
## How tracks are switched
> AudioTrack & AudioTrackList live in video.js
1. `HLS` is setup to watch for the `changed` event on the `AudioTrackList`
1. User selects a new `AudioTrack` from a menu (where only one track can be enabled)
1. `AudioTrackList` enables the new `AudioTrack` and disables all others
1. `AudioTrackList` triggers a `changed` event
1. `HLS` sees the `changed` event and finds the newly enabled `AudioTrack`
1. `HLS` sends the `label` for the new `AudioTrack` to `MasterPlaylistController`s `useAudio` function
1. `MasterPlaylistController` turns off the current audio `PlaylistLoader` if it is on
1. `MasterPlaylistController` maps the `label` to the `PlaylistLoader` containing the audio
1. `MasterPlaylistController` turns on that `PlaylistLoader` and the corresponding `SegmentLoader` (master or audio only)
1. `MediaSource`/`mux.js` determine how to mux

View File

@@ -1,16 +0,0 @@
# How to get player time from program time
NOTE: See the doc on [Program Time to Player Time](program-time-to-player-time.md) for definitions and an overview of the conversion process.
## Overview
To convert a program time to a player time, the following steps must be taken:
1. Find the right segment by sequentially searching through the playlist until the program time requested is >= the EXT-X-PROGRAM-DATE-TIME of the segment, and < the EXT-X-PROGRAM-DATE-TIME of the following segment (or the end of the playlist is reached).
2. Determine the segment's start and end player times.
To accomplish #2, the segment must be downloaded and transmuxed (right now only TS segments are handled, and TS is always transmuxed to FMP4). This will obtain start and end times post transmuxer modifications. These are the times that the source buffer will receive and report for the segment's newly created MP4 fragment.
Since there isn't a simple code path for downloading a segment without appending, the easiest approach is to seek to the estimated start time of that segment using the playlist duration calculation function. Because this process is not always accurate (manifest timing values are almost never accurate), a few seeks may be required to accurately seek into that segment.
If all goes well, and the target segment is downloaded and transmuxed, the player time may be found by taking the difference between the requested program time and the EXT-X-PROGRAM-DATE-TIME of the segment, then adding that difference to `segment.videoTimingInfo.transmuxedPresentationStart`.
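A minimal sketch of that final calculation (hedged; it assumes the target segment has been downloaded and transmuxed so that `segment.videoTimingInfo` is populated, and that `programTime` is a `Date`):

```js
const programTimeToPlayerTime = (programTime, segment) => {
  if (!segment.dateTimeObject || !segment.videoTimingInfo) {
    // Without EXT-X-PROGRAM-DATE-TIME and post-transmux timing info there
    // is no anchor point for the conversion.
    return null;
  }

  // Seconds between the requested program time and the segment's
  // EXT-X-PROGRAM-DATE-TIME.
  const offsetFromSegmentStart =
    (programTime.getTime() - segment.dateTimeObject.getTime()) / 1000;

  return segment.videoTimingInfo.transmuxedPresentationStart + offsetFromSegmentStart;
};
```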

View File

@@ -1,47 +0,0 @@
# Playlist Loader
## Purpose
The [PlaylistLoader][pl] (PL) is responsible for requesting m3u8s, parsing them, and keeping track of the media "playlists" associated with the manifest. The [PL][pl] is used with a [SegmentLoader][sl] to load ts or fmp4 fragments from an HLS source.
## Basic Responsibilities
1. To request an m3u8.
2. To parse an m3u8 into a format [videojs-http-streaming][vhs] can understand.
3. To allow selection of a specific media stream.
4. To refresh a live master m3u8 for changes.
## Design
### States
![PlaylistLoader States](images/playlist-loader-states.nomnoml.svg)
- `HAVE_NOTHING` the state before the m3u8 is received and parsed.
- `HAVE_MASTER` the state before a media manifest is parsed and setup but after the master manifest has been parsed and setup.
- `HAVE_METADATA` the state after a media stream is setup.
- `SWITCHING_MEDIA` the intermediary state we go through while changing to a newly selected media playlist.
- `HAVE_CURRENT_METADATA` a temporary state after requesting a refresh of the live manifest and before receiving the update.
### API
- `load()` this will either start or kick the loader during playback.
- `start()` this will start the [PL] and request the m3u8.
- `media()` this will return the currently active media stream or set a new active media stream.
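A hedged usage sketch of that API (constructor arguments simplified; `masterManifestUrl` and `vhs` are placeholders):

```js
const loader = new PlaylistLoader(masterManifestUrl, vhs);

loader.load();

loader.on('loadedmetadata', () => {
  // A media playlist is set up; media() with no arguments reads the
  // active playlist, media(playlist) switches to a new one.
  const active = loader.media();
});
```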
### Events
- `loadedplaylist` signals the setup of a master playlist, representing the HLS source as a whole, from the m3u8; or a media playlist, representing a media stream.
- `loadedmetadata` signals initial setup of a media stream.
- `playlistunchanged` signals that no changes have been made to an m3u8.
- `mediaupdatetimeout` signals that a live m3u8 and media stream must be refreshed.
- `mediachanging` signals that the currently active media stream is going to be changed.
- `mediachange` signals that the new media stream has been updated.
### Interaction with Other Modules
![PL with MPC and MG](images/playlist-loader-mpc-mg-sequence.plantuml.png)
[pl]: ../src/playlist-loader.js
[sl]: ../src/segment-loader.js
[vhs]: intro.md

View File

@@ -1,141 +0,0 @@
# How to get program time from player time
## Definitions
NOTE: All times referenced in seconds unless otherwise specified.
*Player Time*: any time that can be gotten/set from player.currentTime() (e.g., any time within player.seekable().start(0) to player.seekable().end(0)).<br />
*Stream Time*: any time within one of the stream's segments. Used by video frames (e.g., dts, pts, base media decode time). While these times natively use clock values, throughout the document the times are referenced in seconds.<br />
*Program Time*: any time referencing the real world (e.g., EXT-X-PROGRAM-DATE-TIME).<br />
*Start of Segment*: the pts (presentation timestamp) value of the first frame in a segment.<br />
## Overview
In order to convert from a *player time* to a *stream time*, an "anchor point" is required to match up a *player time*, *stream time*, and *program time*.
Two anchor points that are usable are the time since the start of a new timeline (e.g., the time since the last discontinuity or start of the stream), and the start of a segment. Because, in our requirements for this conversion, each segment is tagged with its *program time* in the form of an [EXT-X-PROGRAM-DATE-TIME tag](https://tools.ietf.org/html/draft-pantos-http-live-streaming-23#section-4.3.2.6), using the segment start as the anchor point is the easiest solution. It's the closest potential anchor point to the time to convert, and it doesn't require us to track time changes across segments (e.g., trimmed or prepended content).
Those time changes are the result of the transmuxer, which can add/remove content in order to keep the content playable (without gaps or other breaking changes between segments), particularly when a segment doesn't start with a key frame.
In order to make use of the segment start, and to calculate the offset between the segment start and the time to convert, a few properties are needed:
1. The start of the segment before transmuxing
1. Time changes made to the segment during transmuxing
1. The start of the segment after transmuxing
While the start of the segment before and after transmuxing is trivial to retrieve, getting the time changes made during transmuxing is more complicated, as we must account for any trimming, prepending, and gap filling made during the transmux stage. However, the required use-case only needs the position of a video frame, allowing us to ignore any changes made to the audio timeline (because VHS uses video as the timeline of truth), as well as a couple of the video modifications.
What follows are the changes made to a video stream by the transmuxer that could alter the timeline, and if they must be accounted for in the conversion:
* Keyframe Pulling
* Used when: the segment doesn't start with a keyframe.
* Impact: the keyframe with the lowest dts value in the segment is "pulled" back to the first dts value in the segment, and all frames in-between are dropped.
* Need to account in time conversion? No. If a keyframe is pulled, and frames before it are dropped, then the segment will maintain the same segment duration, and the viewer is only seeing the keyframe during that period.
* GOP Fusion
* Used when: the segment doesn't start with a keyframe.
* Impact: if GOPs were saved from previous segment appends, the last GOP will be prepended to the segment.
* Need to account in time conversion? Yes. The segment is artificially extended, so while it shouldn't impact the stream time itself (since it will overlap with content already appended), it will impact the post transmux start of segment.
* GOPS to Align With
* Used when: switching renditions, or appending segments with overlapping GOPs (intersecting time ranges).
* Impact: GOPs in the segment will be dropped until there are no overlapping GOPs with previous segments.
* Need to account in time conversion? No. So long as we aren't switching renditions, and the content is sane enough to not contain overlapping GOPs, this should not have a meaningful impact.
Among the changes, with only GOP Fusion having an impact, the task is simplified. Instead of accounting for any changes to the video stream, only those from GOP Fusion should be accounted for. Since GOP fusion will potentially only prepend frames to the segment, we just need the number of seconds prepended to the segment when offsetting the time. As such, we can add the following properties to each segment:
```
segment: {
  // calculated start of segment from either end of previous segment or end of last buffer
  // (in stream time)
  start,
  ...
  videoTimingInfo: {
    // number of seconds prepended by GOP fusion
    transmuxerPrependedSeconds,
    // start of transmuxed segment (in player time)
    transmuxedPresentationStart
  }
}
```
## The Formula
With the properties listed above, calculating a *program time* from a *player time* is given as follows:
```
const playerTimeToProgramTime = (playerTime, segment) => {
  if (!segment.dateTimeObject) {
    // Can't convert without an "anchor point" for the program time (i.e., a time that can
    // be used to map the start of a segment with a real world time).
    return null;
  }

  const transmuxerPrependedSeconds = segment.videoTimingInfo.transmuxerPrependedSeconds;
  const transmuxedStart = segment.videoTimingInfo.transmuxedPresentationStart;

  // get the start of the content from before old content is prepended
  const startOfSegment = transmuxedStart + transmuxerPrependedSeconds;
  const offsetFromSegmentStart = playerTime - startOfSegment;

  return new Date(segment.dateTimeObject.getTime() + offsetFromSegmentStart * 1000);
};
```
## Examples
```
// Program Times:
// segment1: 2018-11-10T00:00:30.1Z => 2018-11-10T00:00:32.1Z
// segment2: 2018-11-10T00:00:32.1Z => 2018-11-10T00:00:34.1Z
// segment3: 2018-11-10T00:00:34.1Z => 2018-11-10T00:00:36.1Z
//
// Player Times:
// segment1: 0 => 2
// segment2: 2 => 4
// segment3: 4 => 6

const segment1 = {
  dateTimeObject: new Date('2018-11-10T00:00:30.1Z'),
  videoTimingInfo: {
    transmuxerPrependedSeconds: 0,
    transmuxedPresentationStart: 0
  }
};

playerTimeToProgramTime(0.1, segment1);
// startOfSegment = 0 + 0 = 0
// offsetFromSegmentStart = 0.1 - 0 = 0.1
// return 2018-11-10T00:00:30.1Z + 0.1 = 2018-11-10T00:00:30.2Z

const segment2 = {
  dateTimeObject: new Date('2018-11-10T00:00:32.1Z'),
  videoTimingInfo: {
    transmuxerPrependedSeconds: 0.3,
    transmuxedPresentationStart: 1.7
  }
};

playerTimeToProgramTime(2.5, segment2);
// startOfSegment = 1.7 + 0.3 = 2
// offsetFromSegmentStart = 2.5 - 2 = 0.5
// return 2018-11-10T00:00:32.1Z + 0.5 = 2018-11-10T00:00:32.6Z

const segment3 = {
  dateTimeObject: new Date('2018-11-10T00:00:34.1Z'),
  videoTimingInfo: {
    transmuxerPrependedSeconds: 0.2,
    transmuxedPresentationStart: 3.8
  }
};

playerTimeToProgramTime(4, segment3);
// startOfSegment = 3.8 + 0.2 = 4
// offsetFromSegmentStart = 4 - 4 = 0
// return 2018-11-10T00:00:34.1Z + 0 = 2018-11-10T00:00:34.1Z
```
## Transmux Before Append Changes
Even though segment timing values are retained for transmux before append, the formula does not need to change, as all that matters for calculation is the offset from the transmuxed segment start, which can then be applied to the stream time start of segment, or the program time start of segment.
## Getting the Right Segment
In order to make use of the above calculation, the right segment must be chosen for a given player time. This time may be retrieved by simply using the times of the segment after transmuxing (as the start/end pts/dts values then reflect the player time it should slot into in the source buffer). These are included in `videoTimingInfo` as `transmuxedPresentationStart` and `transmuxedPresentationEnd`.
Although there may be a small amount of overlap due to `transmuxerPrependedSeconds`, as long as the search is sequential from the beginning of the playlist to the end, the right segment will be found, as the prepended times will only come from content from prior segments.
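A sketch of that search (hedged; `findSegmentForPlayerTime` is a hypothetical helper, and it assumes every segment has been through the transmuxer so `videoTimingInfo` exists):

```js
const findSegmentForPlayerTime = (playerTime, playlist) => {
  // Sequential search from the start of the playlist, so that any
  // transmuxerPrependedSeconds overlap resolves to the earlier segment.
  for (const segment of playlist.segments) {
    const timingInfo = segment.videoTimingInfo;

    if (timingInfo &&
        playerTime >= timingInfo.transmuxedPresentationStart &&
        playerTime < timingInfo.transmuxedPresentationEnd) {
      return segment;
    }
  }

  return null;
};
```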

View File

@@ -1,43 +0,0 @@
# Using the reloadSourceOnError Plugin
Call the plugin to activate it:
```js
player.reloadSourceOnError()
```
Now if the player encounters a fatal error during playback, it will automatically
attempt to reload the current source. If the error was caused by a transient
browser or networking problem, this can allow playback to continue with a minimum
of disruption to your viewers.
The plugin will only restart your player once in a 30 second time span so that your
player doesn't get into a reload loop if it encounters non-transient errors. You
can tweak the amount of time required between restarts by adjusting the
`errorInterval` option.
If your video URLs are time-sensitive, the original source could be invalid by the
time an error occurs. If that's the case, you can provide a `getSource` callback
to regenerate a valid source object. In your callback, the `this` keyword is a
reference to the player that errored. The first argument to `getSource` is a
function. Invoke that function and pass in your new source object when you're ready.
```js
player.reloadSourceOnError({
  // getSource allows you to override the source object used when an error occurs
  getSource: function(reload) {
    console.log('Reloading because of an error');

    // call reload() with a fresh source object
    // you can do this step asynchronously if you want (but the error dialog will
    // show up while you're waiting)
    reload({
      src: 'https://example.com/index.m3u8?token=abc123ef789',
      type: 'application/x-mpegURL'
    });
  },

  // errorInterval specifies the minimum amount of seconds that must pass before
  // another reload will be attempted
  errorInterval: 5
});
```

View File

@@ -1,291 +0,0 @@
# Supported Features
## Browsers
Any browser that supports [MSE] (media source extensions). See
https://caniuse.com/#feat=mediasource
Note that browsers with native HLS support may play content with the native player, unless
the [overrideNative] option is used. Some notable browsers with native HLS players are:
* Safari (macOS and iOS)
* Chrome Android
* Firefox Android
However, due to the limited features offered by some of the native players, the only
browser on which VHS defaults to using the native player is Safari (macOS and iOS).
## Streaming Formats and Media Types
### Streaming Formats
VHS aims to be mostly streaming format agnostic. So long as the manifest can be parsed to
a common JSON representation, VHS should be able to play it. However, due to some large
differences between the major streaming formats (HLS and DASH), some format specific code
is included in VHS. If you have another format you would like supported, please reach out
to us (e.g., file an issue).
* [HLS] (HTTP Live Streaming)
* [MPEG-DASH] (Dynamic Adaptive Streaming over HTTP)
### Media Container Formats
* [TS] (MPEG Transport Stream)
* [MP4] (MPEG-4 Part 14: MP4, M4A, M4V, M4S, MPA), ISOBMFF
* [AAC] (Advanced Audio Coding)
### Codecs
If the content is packaged in an [MP4] container, then any codec supported by the browser
is supported. If the content is packaged in a [TS] container, then the codec must be
supported by [the transmuxer]. The following codecs are supported by the transmuxer:
* [AVC] (Advanced Video Coding, h.264)
* [AVC1] (Advanced Video Coding, h.264)
* [HE-AAC] (High Efficiency Advanced Audio Coding, mp4a.40.5)
* LC-AAC (Low Complexity Advanced Audio Coding, mp4a.40.2)
## General Notable Features
The following is a list of some, but not all, common streaming features supported by VHS.
It is meant to highlight some common use cases (and provide for easy searching), but is
not meant to serve as an exhaustive list.
* VOD (video on demand)
* LIVE
* Multiple audio tracks
* Timed [ID3] Metadata is automatically translated into HTML5 metadata text tracks
* Cross-domain credentials support with [CORS]
* Any browser supported resolution (e.g., 4k)
* Any browser supported framerate (e.g., 60fps)
* [DRM] via [videojs-contrib-eme]
* Audio only (non DRM)
* Video only (non DRM)
* In-manifest [WebVTT] subtitles are automatically translated into standard HTML5 subtitle
tracks
* [AES-128] and SAMPLE-AES segment encryption
## Notable Missing Features
Note that the following features have either not yet been implemented, or may work but are
not currently supported, in browsers that do not rely on the native player. For browsers
that use the native player (e.g., Safari for HLS), please refer to their documentation.
### Container Formats
* [WebM]
* [WAV]
* [MP3]
* [OGG]
### Codecs
If the content is packaged within an [MP4] container and the browser supports the codec, it
will play. However, the following are some codecs that are not routinely tested, or are not
supported when packaged within [TS].
* [MP3]
* [Vorbis]
* [WAV]
* [FLAC]
* [Opus]
* [VP8]
* [VP9]
* [Dolby Vision] (DVHE)
* [Dolby Digital] Audio (AC-3)
* [Dolby Digital Plus] (E-AC-3)
### General Missing Features
* Audio/video only DRM streams
### HLS Missing Features
Note: features for low latency HLS in the [2nd edition of HTTP Live Streaming] are on the
roadmap, but not currently available.
VHS strives to support all of the features in the HLS specification, however, some have
not yet been implemented. VHS currently supports everything in the
[HLS specification v7, revision 23], except the following:
* Use of [EXT-X-MAP] with [TS] segments
* [EXT-X-MAP] is currently supported for [MP4] segments, but not yet for TS
* I-Frame playlists via [EXT-X-I-FRAMES-ONLY] and [EXT-X-I-FRAME-STREAM-INF]
* [MP3] Audio
* [Dolby Digital] Audio (AC-3)
* [Dolby Digital Plus] Audio (E-AC-3)
* KEYFORMATVERSIONS of [EXT-X-KEY]
* [EXT-X-DATERANGE]
* [EXT-X-SESSION-DATA]
* [EXT-X-SESSION-KEY]
* [EXT-X-INDEPENDENT-SEGMENTS]
* Use of [EXT-X-START] (value parsed but not used)
* Alternate video via [EXT-X-MEDIA] of type video
* ASSOC-LANGUAGE in [EXT-X-MEDIA]
* CHANNELS in [EXT-X-MEDIA]
* Use of AVERAGE-BANDWIDTH in [EXT-X-STREAM-INF] (value parsed but not used)
* Use of FRAME-RATE in [EXT-X-STREAM-INF] (value parsed but not used)
* Use of HDCP-LEVEL in [EXT-X-STREAM-INF]
In the event of encoding changes within a playlist (see
https://tools.ietf.org/html/draft-pantos-http-live-streaming-23#section-6.3.3), the
behavior will depend on the browser.
### DASH Missing Features
DASH support is more recent than HLS support in VHS, however, VHS strives to achieve as
complete compatibility as possible with the DASH spec. The following are some notable
features in the DASH specification that are not yet implemented in VHS:
Note that many of the following are parsed by [mpd-parser] but are either not yet used, or
simply take on their default values (in the case where they have valid defaults).
* Audio rendition switching
* Each video rendition is paired with an audio rendition for the duration of playback.
* MPD
* @id
* @profiles
* @availabilityStartTime
* @availabilityEndTime
* @minBufferTime
* @maxSegmentDuration
* @maxSubsegmentDuration
* ProgramInformation
* Metrics
* Period
* @xlink:href
* @xlink:actuate
* @id
* @duration
* Normally used for determining the PeriodStart of the next period; VHS instead relies
on segment durations to determine the timing of each segment and timeline
* @bitstreamSwitching
* Subset
* AdaptationSet
* @xlink:href
* @xlink:actuate
* @id
* @group
* @par (picture aspect ratio)
* @minBandwidth
* @maxBandwidth
* @minWidth
* @maxWidth
* @minHeight
* @maxHeight
* @minFrameRate
* @maxFrameRate
* @segmentAlignment
* @bitstreamSwitching
* @subsegmentAlignment
* @subsegmentStartsWithSAP
* Accessibility
* Rating
* Viewpoint
* ContentComponent
* Representation
* @id (used for SegmentTemplate but not exposed otherwise)
* @qualityRanking
* @dependencyId (dependent representation)
* @mediaStreamStructureId
* SubRepresentation
* CommonAttributesElements (for AdaptationSet, Representation and SubRepresentation elements)
* @profiles
* @sar
* @frameRate
* @audioSamplingRate
* @segmentProfiles
* @maximumSAPPeriod
* @startWithSAP
* @maxPlayoutRate
* @codingDependency
* @scanType
* FramePacking
* AudioChannelConfiguration
* SegmentBase
* @presentationTimeOffset
* @indexRangeExact
* RepresentationIndex
* MultipleSegmentBaseInformation elements
* SegmentList
* @xlink:href
* @xlink:actuate
* MultipleSegmentBaseInformation
* SegmentURL
* @index
* @indexRange
* SegmentTemplate
* MultipleSegmentBaseInformation
* @index
* @bitstreamSwitching
* BaseURL
* @serviceLocation
* Template-based Segment URL construction
* Live DASH assets that use $Time$ in a SegmentTemplate, and also have a SegmentTimeline
where only the first S has a t and the rest only have a d, do not update on playlist
refreshes.
See: https://github.com/videojs/http-streaming#dash-assets-with-time-interpolation-and-segmenttimelines-with-no-t
* ContentComponent elements
* Right now manifests are assumed to have a single content component, with the properties
described directly on the AdaptationSet element
* SubRepresentation elements
* Subset elements
* Early Available Periods (may work, but has not been tested)
* Access to subsegments via a subsegment index ('ssix')
* The @profiles attribute is ignored (best support for all profiles is attempted, without
consideration of the specific profile). For descriptions on profiles, see section 8 of
the DASH spec.
* Construction of byte range URLs via a BaseURL byteRange template (Annex E.2)
* Multiperiod content where the representation sets are not the same across periods
* In the event that an S element has a t attribute that is greater than what is expected,
it is not treated as a discontinuity, but instead retains its segment value, and may
result in a gap in the content
[MSE]: https://www.w3.org/TR/media-source/
[HLS]: https://en.wikipedia.org/wiki/HTTP_Live_Streaming
[MPEG-DASH]: https://en.wikipedia.org/wiki/Dynamic_Adaptive_Streaming_over_HTTP
[TS]: https://en.wikipedia.org/wiki/MPEG_transport_stream
[MP4]: https://en.wikipedia.org/wiki/MPEG-4_Part_14
[AAC]: https://en.wikipedia.org/wiki/Advanced_Audio_Coding
[AVC]: https://en.wikipedia.org/wiki/Advanced_Video_Coding
[AVC1]: https://en.wikipedia.org/wiki/Advanced_Video_Coding
[HE-AAC]: https://en.wikipedia.org/wiki/High-Efficiency_Advanced_Audio_Coding
[ID3]: https://en.wikipedia.org/wiki/ID3
[CORS]: https://en.wikipedia.org/wiki/Cross-origin_resource_sharing
[DRM]: https://en.wikipedia.org/wiki/Digital_rights_management
[WebVTT]: https://www.w3.org/TR/webvtt1/
[AES-128]: https://en.wikipedia.org/wiki/Advanced_Encryption_Standard
[WebM]: https://en.wikipedia.org/wiki/WebM
[WAV]: https://en.wikipedia.org/wiki/WAV
[MP3]: https://en.wikipedia.org/wiki/MP3
[OGG]: https://en.wikipedia.org/wiki/Ogg
[Vorbis]: https://en.wikipedia.org/wiki/Vorbis
[FLAC]: https://en.wikipedia.org/wiki/FLAC
[Opus]: https://en.wikipedia.org/wiki/Opus_(audio_format)
[VP8]: https://en.wikipedia.org/wiki/VP8
[VP9]: https://en.wikipedia.org/wiki/VP9
[overrideNative]: https://github.com/videojs/http-streaming#overridenative
[the transmuxer]: https://github.com/videojs/mux.js
[videojs-contrib-eme]: https://github.com/videojs/videojs-contrib-eme
[2nd edition of HTTP Live Streaming]: https://tools.ietf.org/html/draft-pantos-hls-rfc8216bis-07.html
[HLS specification v7, revision 23]: https://tools.ietf.org/html/draft-pantos-http-live-streaming-23
[EXT-X-MAP]: https://tools.ietf.org/html/draft-pantos-http-live-streaming-23#section-4.3.2.5
[EXT-X-STREAM-INF]: https://tools.ietf.org/html/draft-pantos-http-live-streaming-23#section-4.3.4.2
[EXT-X-SESSION-DATA]: https://tools.ietf.org/html/draft-pantos-http-live-streaming-23#section-4.3.4.4
[EXT-X-DATERANGE]: https://tools.ietf.org/html/draft-pantos-http-live-streaming-23#section-4.3.2.7
[EXT-X-KEY]: https://tools.ietf.org/html/draft-pantos-http-live-streaming-23#section-4.3.2.7
[EXT-X-I-FRAMES-ONLY]: https://tools.ietf.org/html/draft-pantos-http-live-streaming-23#section-4.3.3.6
[EXT-X-I-FRAME-STREAM-INF]: https://tools.ietf.org/html/draft-pantos-http-live-streaming-23#section-4.3.4.3
[EXT-X-SESSION-KEY]: https://tools.ietf.org/html/draft-pantos-http-live-streaming-23#section-4.3.4.5
[EXT-X-INDEPENDENT-SEGMENTS]: https://tools.ietf.org/html/draft-pantos-http-live-streaming-23#section-4.3.5.1
[EXT-X-START]: https://tools.ietf.org/html/draft-pantos-http-live-streaming-23#section-4.3.5.2
[EXT-X-MEDIA]: https://tools.ietf.org/html/draft-pantos-http-live-streaming-23#section-4.3.4.1
[Dolby Vision]: https://en.wikipedia.org/wiki/High-dynamic-range_video#Dolby_Vision
[Dolby Digital]: https://en.wikipedia.org/wiki/Dolby_Digital
[Dolby Digital Plus]: https://en.wikipedia.org/wiki/Dolby_Digital_Plus
[mpd-parser]: https://github.com/videojs/mpd-parser

View File

@@ -1,67 +0,0 @@
# Troubleshooting Guide
## Other troubleshooting guides
For issues around data embedded into media segments (e.g., 608 captions), see the [mux.js troubleshooting guide](https://github.com/videojs/mux.js/blob/master/docs/troubleshooting.md).
## Tools
### Thumbcoil
Thumbcoil is a video inspector tool that can unpackage various media containers and inspect the bitstreams therein. Thumbcoil runs entirely within your browser so that none of your video data is ever transmitted to a server.
http://thumb.co.il<br/>
http://beta.thumb.co.il<br/>
https://github.com/videojs/thumbcoil<br/>
## Table of Contents
- [Content plays on Mac but not Windows](#content-plays-on-mac-but-not-windows)
- ["No compatible source was found" on IE11 Win 7](#no-compatible-source-was-found-on-ie11-win-7)
- [CORS: No Access-Control-Allow-Origin header](#cors-no-access-control-allow-origin-header)
- [Desktop Safari/iOS Safari/Android Chrome/Edge exhibit different behavior from other browsers](#desktop-safariios-safariandroid-chromeedge-exhibit-different-behavior-from-other-browsers)
- [MEDIA_ERR_DECODE error on Desktop Safari](#media_err_decode-error-on-desktop-safari)
- [Network requests are still being made while paused](#network-requests-are-still-being-made-while-paused)
## Content plays on Mac but not Windows
Some browsers may not be able to play audio sample rates higher than 48 kHz. See https://docs.microsoft.com/en-gb/windows/desktop/medfound/aac-decoder#format-constraints
Potential solution: re-encode with a Windows supported audio sample rate
## "No compatible source was found" on IE11 Win 7
videojs-http-streaming does not support Flash HLS playback (like the videojs-contrib-hls plugin does)
Solution: include the FlashLS source handler https://github.com/brightcove/videojs-flashls-source-handler#usage
## CORS: No Access-Control-Allow-Origin header
If you see an error along the lines of
```
XMLHttpRequest cannot load ... No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin ... is therefore not allowed access.
```
you need to properly configure CORS on your server: https://github.com/videojs/http-streaming#hosting-considerations
## Desktop Safari/iOS Safari/Android Chrome/Edge exhibit different behavior from other browsers
Some browsers support native playback of certain streaming formats. By default, we defer to the native players. However, this means that features specific to videojs-http-streaming will not be available.
On Edge and mobile Chrome, 608 captions, ID3 tags, or live streaming may not work as expected with native playback; it is recommended that `overrideNative` be used on those platforms if necessary.
Solution: use videojs-http-streaming based playback on those devices: https://github.com/videojs/http-streaming#overridenative
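For example (a sketch based on the linked README; the option key has also appeared as `hls` in older releases):

```js
const player = videojs('my-player', {
  html5: {
    vhs: {
      overrideNative: true
    },
    // Let VHS manage tracks rather than the native player.
    nativeAudioTracks: false,
    nativeVideoTracks: false
  }
});
```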
## MEDIA_ERR_DECODE error on Desktop Safari
This error may occur for a number of reasons, and is particularly common for misconfigured content. One instance of misconfiguration is if the source manifest has `CLOSED-CAPTIONS=NONE` and an external text track is loaded into the player. Safari does not allow the inclusion of any captions if the manifest indicates that captions will not be provided.
Solution: remove `CLOSED-CAPTIONS=NONE` from the manifest
## Network requests are still being made while paused
There are a couple of cases where network requests will still be made by VHS when the video is paused.
1) If the forward buffer (buffered content ahead of the playhead) has not reached the GOAL\_BUFFER\_LENGTH. For instance, if the playhead is at time 10 seconds, the buffered range goes from 5 seconds to 20 seconds, and the GOAL\_BUFFER\_LENGTH is set to 30 seconds, then segments will continue to be requested, even while paused, until the buffer ends at a time greater than or equal to 10 seconds (current time) + 30 seconds (GOAL\_BUFFER\_LENGTH) = 40 seconds. This is expected behavior in order to provide a better playback experience.
2) If the stream is LIVE, then the manifest will continue to be refreshed even while paused. This is because it is easier to keep playback in sync if we receive manifest updates consistently.
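If the paused-state segment requests in case 1 are a concern, the forward-buffer goal can be lowered (a sketch based on the project README; older releases exposed this setting on `videojs.Hls` rather than `videojs.Vhs`):

```js
// Request fewer seconds of forward buffer ahead of the playhead (default is 30).
videojs.Vhs.GOAL_BUFFER_LENGTH = 10;
```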

View File

@@ -1,113 +0,0 @@
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<title>videojs-http-streaming Demo</title>
<link rel="icon" href="logo.svg">
<link href="node_modules/video.js/dist/video-js.css" rel="stylesheet">
<link href="node_modules/videojs-http-source-selector/dist/videojs-http-source-selector.css" rel="stylesheet">
<style>
body {
font-family: Arial, sans-serif;
margin: 20px;
}
.info {
background-color: #eee;
border: thin solid #333;
border-radius: 3px;
padding: 0 5px;
margin: 20px 0;
}
label {
display: block;
width: 700px;
width: fit-content;
margin-top: 4px;
}
.options label {
background-color: hsl(0, 0%, 90%);
padding: 0.25em;
margin: 0.25em;
}
input[type=url], select {
min-width: 600px;
}
h3 {
margin-bottom: 5px;
}
#keysystems {
display: block;
}
</style>
</head>
<body>
<div id="player-fixture">
</div>
<label>Representations</label>
<select id='representations'></select>
<h3>Options</h3>
<div class="options">
<label>
<input id=minified type="checkbox">
Minified VHS (reloads player)
</label>
<label>
<input id=liveui type="checkbox">
Enable the live UI (reloads player)
</label>
<label>
<input id=debug type="checkbox">
Debug Logging
</label>
<label>
<input id=muted type="checkbox">
Muted
</label>
<label>
<input id=autoplay type="checkbox">
Autoplay
</label>
<label>
<input id=partial type="checkbox">
Handle Partial (reloads player)
</label>
</div>
<h3>Load a URL</h3>
<label>Url:</label>
<input id=url type=url>
<label>Type: (uses url extension if blank, usually application/x-mpegURL or application/dash+xml)</label>
<input id=type type=text>
<label>Optional Keystems JSON:</label>
<textarea id=keysystems cols=100 rows=5></textarea>
<button id=load-url type=button>Load</button>
<h3>Load a Source</h3>
<select id=load-source>
<optgroup label="hls">
</optgroup>
<optgroup label="dash">
</optgroup>
<optgroup label="drm">
</optgroup>
<optgroup label="live">
</optgroup>
<optgroup label="low latency live">
</optgroup>
</select>
<h3>Navigation</h3>
<ul>
<li><a href="test/debug.html">Run unit tests in browser.</a></li>
<li><a href="docs/api/">Read generated docs.</a></li>
<li><a href="utils/stats/">Stats</a></li>
</ul>
<script src="scripts/index-demo-page.js"></script>
<script>
window.startDemo(function(player) {
// do something with setup player
});
</script>
</body>
</html>

View File

@@ -1,152 +0,0 @@
{
"_args": [
[
"@videojs/http-streaming@2.2.0",
"/home/runner/work/owncast/owncast/build/javascript"
]
],
"_from": "@videojs/http-streaming@2.2.0",
"_id": "@videojs/http-streaming@2.2.0",
"_inBundle": false,
"_integrity": "sha512-0LlotccmEt19Z03rI5busKGBvtpYNPHsnLNz0wl6zVEpx8trzYxTQSwKDCUD994hfzLeikQTHJc8BqMrfs1OgA==",
"_location": "/@videojs/http-streaming",
"_phantomChildren": {},
"_requested": {
"type": "version",
"registry": true,
"raw": "@videojs/http-streaming@2.2.0",
"name": "@videojs/http-streaming",
"escapedName": "@videojs%2fhttp-streaming",
"scope": "@videojs",
"rawSpec": "2.2.0",
"saveSpec": null,
"fetchSpec": "2.2.0"
},
"_requiredBy": [
"/"
],
"_resolved": "https://registry.npmjs.org/@videojs/http-streaming/-/http-streaming-2.2.0.tgz",
"_spec": "2.2.0",
"_where": "/home/runner/work/owncast/owncast/build/javascript",
"author": {
"name": "Brightcove, Inc"
},
"browserslist": [
"defaults",
"ie 11"
],
"bugs": {
"url": "https://github.com/videojs/http-streaming/issues"
},
"dependencies": {
"@babel/runtime": "^7.5.5",
"@videojs/vhs-utils": "^2.2.0",
"aes-decrypter": "3.0.2",
"global": "^4.3.2",
"m3u8-parser": "4.4.3",
"mpd-parser": "0.12.0",
"mux.js": "5.6.6",
"video.js": "^6 || ^7"
},
"description": "Play back HLS and DASH with Video.js, even where it's not natively supported",
"devDependencies": {
"@gkatsev/rollup-plugin-bundle-worker": "^1.0.2",
"@videojs/generator-helpers": "~1.2.0",
"d3": "^3.4.8",
"es5-shim": "^4.5.13",
"es6-shim": "^0.35.5",
"jsdoc": "github:BrandonOCasey/jsdoc#feat/plugin-from-cli",
"karma": "^4.0.0",
"lodash": "^4.17.4",
"lodash-compat": "^3.10.0",
"nomnoml": "^0.3.0",
"rollup": "^1.19.4",
"shelljs": "^0.8.2",
"sinon": "^8.1.1",
"url-toolkit": "^2.1.3",
"videojs-contrib-eme": "^3.2.0",
"videojs-contrib-quality-levels": "^2.0.4",
"videojs-generate-karma-config": "~5.3.1",
"videojs-generate-rollup-config": "~5.0.1",
"videojs-generator-verify": "~2.0.0",
"videojs-http-source-selector": "^1.1.6",
"videojs-standard": "^8.0.3"
},
"engines": {
"node": ">=8",
"npm": ">=5"
},
"files": [
"CONTRIBUTING.md",
"dist/",
"docs/",
"index.html",
"scripts/",
"src/"
],
"generator-videojs-plugin": {
"version": "7.6.3"
},
"homepage": "https://github.com/videojs/http-streaming#readme",
"husky": {
"hooks": {
"pre-commit": "lint-staged"
}
},
"keywords": [
"videojs",
"videojs-plugin"
],
"license": "Apache-2.0",
"lint-staged": {
"*.js": [
"vjsstandard --fix",
"git add"
],
"README.md": [
"doctoc --notitle",
"git add"
]
},
"main": "dist/videojs-http-streaming.cjs.js",
"module": "dist/videojs-http-streaming.es.js",
"name": "@videojs/http-streaming",
"repository": {
"type": "git",
"url": "git+ssh://git@github.com/videojs/http-streaming.git"
},
"scripts": {
"build": "npm-run-all -s clean -p build:*",
"build-prod": "cross-env-shell NO_TEST_BUNDLE=1 'npm run build'",
"build-test": "cross-env-shell TEST_BUNDLE_ONLY=1 'npm run build'",
"build:js": "rollup -c scripts/rollup.config.js",
"clean": "shx rm -rf ./dist ./test/dist && shx mkdir -p ./dist ./test/dist",
"docs": "npm-run-all docs:*",
"docs:api": "jsdoc src -g plugins/markdown -r -d docs/api",
"docs:images": "node ./scripts/create-docs-images.js",
"docs:toc": "doctoc --notitle README.md",
"lint": "vjsstandard",
"netlify": "node scripts/netlify.js",
"prenetlify": "npm run build",
"prepublishOnly": "npm-run-all build-prod && vjsverify --verbose",
"preversion": "npm test",
"server": "karma start scripts/karma.conf.js --singleRun=false --auto-watch",
"start": "npm-run-all -p server watch",
"test": "npm-run-all lint build-test && karma start scripts/karma.conf.js",
"update-changelog": "conventional-changelog -p videojs -i CHANGELOG.md -s",
"version": "is-prerelease || npm run update-changelog && git add CHANGELOG.md",
"watch": "npm-run-all -p watch:*",
"watch:js": "npm run build:js -- -w"
},
"version": "2.2.0",
"vjsstandard": {
"ignore": [
"dist",
"docs",
"deploy",
"test/dist",
"utils",
"src/*.worker.js"
]
}
}

View File

@@ -1,31 +0,0 @@
/* eslint-disable no-console */
const nomnoml = require('nomnoml');
const fs = require('fs');
const path = require('path');
const basePath = path.resolve(__dirname, '..');
const docImageDir = path.join(basePath, 'docs/images');
const nomnomlSourceDir = path.join(basePath, 'docs/images/sources');
const buildImages = {
build() {
const files = fs.readdirSync(nomnomlSourceDir);
while (files.length > 0) {
const file = path.resolve(nomnomlSourceDir, files.shift());
const basename = path.basename(file, '.txt');
if (/\.nomnoml$/.test(basename)) {
const fileContents = fs.readFileSync(file, 'utf-8');
const generated = nomnoml.renderSvg(fileContents);
const newFilePath = path.join(docImageDir, basename) + '.svg';
const outFile = fs.createWriteStream(newFilePath);
console.log(`wrote file ${newFilePath}`);
outFile.write(generated);
}
}
}
};
buildImages.build();

View File

@@ -1,131 +0,0 @@
/* global window */
const fs = require('fs');
const path = require('path');
const baseDir = path.join(__dirname, '..');
const manifestsDir = path.join(baseDir, 'test', 'manifests');
const segmentsDir = path.join(baseDir, 'test', 'segments');
const base64ToUint8Array = function(base64) {
const decoded = window.atob(base64);
const uint8Array = new Uint8Array(new ArrayBuffer(decoded.length));
for (let i = 0; i < decoded.length; i++) {
uint8Array[i] = decoded.charCodeAt(i);
}
return uint8Array;
};
const utf16CharCodesToString = (typedArray) => {
let val = '';
Array.prototype.forEach.call(typedArray, (x) => {
val += String.fromCharCode(x);
});
return val;
};
const getManifests = () => (fs.readdirSync(manifestsDir) || [])
.filter((f) => ((/\.(m3u8|mpd)/).test(path.extname(f))))
.map((f) => path.resolve(manifestsDir, f));
const getSegments = () => (fs.readdirSync(segmentsDir) || [])
.filter((f) => ((/\.(ts|mp4|key|webm|aac|ac3)/).test(path.extname(f))))
.map((f) => path.resolve(segmentsDir, f));
const buildManifestString = function() {
let manifests = 'export default {\n';
getManifests().forEach((file) => {
// translate this manifest
manifests += ' \'' + path.basename(file, path.extname(file)) + '\': ';
manifests += fs.readFileSync(file, 'utf8')
.split(/\r\n|\n/)
// quote and concatenate
.map((line) => ' \'' + line + '\\n\' +\n')
.join('')
// strip leading spaces and the trailing '+'
.slice(4, -3);
manifests += ',\n';
});
// clean up and close the objects
manifests = manifests.slice(0, -2);
manifests += '\n};\n';
return manifests;
};
const buildSegmentString = function() {
const segmentData = {};
getSegments().forEach((file) => {
// read the file directly as a buffer before converting to base64
const base64Segment = fs.readFileSync(file).toString('base64');
segmentData[path.basename(file, path.extname(file))] = base64Segment;
});
const segmentDataExportStrings = Object.keys(segmentData).reduce((acc, key) => {
// use a function since the segment may be cleared out on usage
acc.push(`export const ${key} = () => {
cache.${key} = cache.${key} || base64ToUint8Array('${segmentData[key]}');
const dest = new Uint8Array(cache.${key}.byteLength);
dest.set(cache.${key});
return dest;
};`);
// strings can be used to fake responseText in progress events
// when testing partial appends of data
acc.push(`export const ${key}String = () => {
cache.${key}String = cache.${key}String || utf16CharCodesToString(${key}());
return cache.${key}String;
};`);
return acc;
}, []);
const segmentsFile =
'const cache = {};\n' +
`const base64ToUint8Array = ${base64ToUint8Array.toString()};\n` +
`const utf16CharCodesToString = ${utf16CharCodesToString.toString()};\n` +
segmentDataExportStrings.join('\n');
return segmentsFile;
};
/* we refer to them as .js, so that babel and other plugins can work on them */
const segmentsKey = 'create-test-data!segments.js';
const manifestsKey = 'create-test-data!manifests.js';
module.exports = function() {
return {
name: 'createTestData',
buildStart() {
this.addWatchFile(segmentsDir);
this.addWatchFile(manifestsDir);
[].concat(getSegments())
.concat(getManifests())
.forEach((file) => this.addWatchFile(file));
},
resolveId(importee, importer) {
// if this is not an id we can resolve return
if (importee.indexOf('create-test-data!') !== 0) {
return;
}
const name = importee.split('!')[1];
return (name.indexOf('segments') === 0) ? segmentsKey : manifestsKey;
},
load(id) {
if (id === segmentsKey) {
return buildSegmentString.call(this);
}
if (id === manifestsKey) {
return buildManifestString.call(this);
}
}
};
};
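
For context, a hypothetical test module consuming the virtual modules this plugin resolves; the 'muxed' export name is illustrative and depends on the files present in test/segments:

// Hypothetical usage inside a test bundled with the plugin above.
import manifests from 'create-test-data!manifests.js';
import { muxed as muxedSegment } from 'create-test-data!segments.js';

// manifests maps each filename (without extension) to its manifest text
console.log(Object.keys(manifests));
// each segment export is a function returning a fresh Uint8Array copy
console.log(muxedSegment().byteLength);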

View File

@@ -1,415 +0,0 @@
/* global window document */
/* eslint-disable vars-on-top, no-var, object-shorthand, no-console */
(function(window) {
var representationsEl = document.getElementById('representations');
representationsEl.addEventListener('change', function() {
var selectedIndex = representationsEl.selectedIndex;
if (!selectedIndex || selectedIndex < 1 || !window.vhs) {
return;
}
var selectedOption = representationsEl.options[representationsEl.selectedIndex];
if (!selectedOption) {
return;
}
var id = selectedOption.value;
window.vhs.representations().forEach(function(rep) {
rep.playlist.disabled = rep.id !== id;
});
window.mpc.smoothQualityChange_();
});
var hlsOptGroup = document.querySelector('[label="hls"]');
var dashOptGroup = document.querySelector('[label="dash"]');
var drmOptGroup = document.querySelector('[label="drm"]');
var liveOptGroup = document.querySelector('[label="live"]');
var llliveOptGroup = document.querySelector('[label="low latency live"]');
// get the sources list squared away
var xhr = new window.XMLHttpRequest();
xhr.addEventListener('load', function() {
var sources = JSON.parse(xhr.responseText);
sources.forEach(function(source) {
const option = document.createElement('option');
option.innerText = source.name;
option.value = source.uri;
if (source.keySystems) {
option.setAttribute('data-key-systems', JSON.stringify(source.keySystems, null, 2));
}
if (source.mimetype) {
option.setAttribute('data-mimetype', source.mimetype);
}
if (source.features.indexOf('low-latency') !== -1) {
llliveOptGroup.appendChild(option);
} else if (source.features.indexOf('live') !== -1) {
liveOptGroup.appendChild(option);
} else if (source.keySystems) {
drmOptGroup.appendChild(option);
} else if (source.mimetype === 'application/x-mpegurl') {
hlsOptGroup.appendChild(option);
} else if (source.mimetype === 'application/dash+xml') {
dashOptGroup.appendChild(option);
}
});
});
xhr.open('GET', './scripts/sources.json');
xhr.send();
// all relevant elements
var urlButton = document.getElementById('load-url');
var sources = document.getElementById('load-source');
var stateEls = {};
var getInputValue = function(el) {
if (el.type === 'url' || el.type === 'text' || el.nodeName.toLowerCase() === 'textarea') {
return encodeURIComponent(el.value);
} else if (el.type === 'checkbox') {
return el.checked;
}
console.warn('unhandled input type ' + el.type);
return '';
};
var setInputValue = function(el, value) {
if (el.type === 'url' || el.type === 'text' || el.nodeName.toLowerCase() === 'textarea') {
el.value = decodeURIComponent(value);
} else {
el.checked = value === 'true' ? true : false;
}
};
var newEvent = function(name) {
var event;
if (typeof window.Event === 'function') {
event = new window.Event(name);
} else {
event = document.createEvent('Event');
event.initEvent(name, true, true);
}
return event;
};
// taken from video.js
var getFileExtension = function(path) {
var splitPathRe;
var pathParts;
if (typeof path === 'string') {
splitPathRe = /^(\/?)([\s\S]*?)((?:\.{1,2}|[^\/]*?)(\.([^\.\/\?]+)))(?:[\/]*|[\?].*)$/i;
pathParts = splitPathRe.exec(path);
if (pathParts) {
return pathParts.pop().toLowerCase();
}
}
return '';
};
var saveState = function() {
var query = '';
if (!window.history.replaceState) {
return;
}
Object.keys(stateEls).forEach(function(elName) {
var symbol = query.length ? '&' : '?';
query += symbol + elName + '=' + getInputValue(stateEls[elName]);
});
window.history.replaceState({}, 'vhs demo', query);
};
window.URLSearchParams = window.URLSearchParams || function(locationSearch) {
this.get = function(name) {
var results = new RegExp('[\?&]' + name + '=([^&#]*)').exec(locationSearch);
return results ? decodeURIComponent(results[1]) : null;
};
};
// eslint-disable-next-line
var loadState = function() {
var params = new window.URLSearchParams(window.location.search);
return Object.keys(stateEls).reduce(function(acc, elName) {
acc[elName] = typeof params.get(elName) !== 'object' ? params.get(elName) : getInputValue(stateEls[elName]);
return acc;
}, {});
};
// eslint-disable-next-line
var reloadScripts = function(urls, cb) {
var el = document.getElementById('reload-scripts');
var onload = function() {
var script;
if (!urls.length) {
cb();
return;
}
script = document.createElement('script');
script.src = urls.shift();
script.onload = onload;
el.appendChild(script);
};
if (!el) {
el = document.createElement('div');
el.id = 'reload-scripts';
document.body.appendChild(el);
}
while (el.firstChild) {
el.removeChild(el.firstChild);
}
onload();
};
var regenerateRepresentations = function() {
while (representationsEl.firstChild) {
representationsEl.removeChild(representationsEl.firstChild);
}
var selectedIndex;
window.vhs.representations().forEach(function(rep, i) {
var option = document.createElement('option');
option.value = rep.id;
option.innerText = JSON.stringify({
id: rep.id,
videoCodec: rep.codecs.video,
audioCodec: rep.codecs.audio,
bandwidth: rep.bandwidth,
height: rep.height,
width: rep.width
});
if (window.mpc.media().id === rep.id) {
selectedIndex = i;
}
representationsEl.appendChild(option);
});
representationsEl.selectedIndex = selectedIndex;
};
['debug', 'autoplay', 'muted', 'minified', 'liveui', 'partial', 'url', 'type', 'keysystems'].forEach(function(name) {
stateEls[name] = document.getElementById(name);
});
window.startDemo = function(cb) {
var state = loadState();
Object.keys(state).forEach(function(elName) {
setInputValue(stateEls[elName], state[elName]);
});
Array.prototype.forEach.call(sources.options, function(s, i) {
if (s.value === state.url) {
sources.selectedIndex = i;
}
});
stateEls.muted.addEventListener('change', function(event) {
saveState();
window.player.muted(event.target.checked);
});
stateEls.autoplay.addEventListener('change', function(event) {
saveState();
window.player.autoplay(event.target.checked);
});
stateEls.debug.addEventListener('change', function(event) {
saveState();
window.videojs.log.level(event.target.checked ? 'debug' : 'info');
});
stateEls.partial.addEventListener('change', function(event) {
saveState();
window.videojs.options = window.videojs.options || {};
window.videojs.options.vhs = window.videojs.options.vhs || {};
window.videojs.options.vhs.handlePartialData = event.target.checked;
if (window.player) {
window.player.src(window.player.currentSource());
}
});
stateEls.liveui.addEventListener('change', function(event) {
saveState();
stateEls.minified.dispatchEvent(newEvent('change'));
});
stateEls.minified.addEventListener('change', function(event) {
var urls = [
'node_modules/video.js/dist/alt/video.core',
'node_modules/videojs-contrib-eme/dist/videojs-contrib-eme',
'node_modules/videojs-contrib-quality-levels/dist/videojs-contrib-quality-levels',
'node_modules/videojs-http-source-selector/dist/videojs-http-source-selector',
'dist/videojs-http-streaming'
].map(function(url) {
return url + (event.target.checked ? '.min' : '') + '.js';
});
saveState();
if (window.player) {
window.player.dispose();
delete window.player;
}
if (window.videojs) {
delete window.videojs;
}
reloadScripts(urls, function() {
var player;
var fixture = document.getElementById('player-fixture');
var videoEl = document.createElement('video-js');
videoEl.setAttribute('controls', '');
videoEl.className = 'vjs-default-skin';
fixture.appendChild(videoEl);
stateEls.partial.dispatchEvent(newEvent('change'));
player = window.player = window.videojs(videoEl, {
plugins: {
httpSourceSelector: {
default: 'auto'
}
},
liveui: stateEls.liveui.checked,
html5: {
vhs: {
overrideNative: true
}
}
});
player.width(640);
player.height(264);
// configure videojs-contrib-eme
player.eme();
stateEls.debug.dispatchEvent(newEvent('change'));
stateEls.muted.dispatchEvent(newEvent('change'));
stateEls.autoplay.dispatchEvent(newEvent('change'));
// run the load url handler for the initial source
if (stateEls.url.value) {
urlButton.dispatchEvent(newEvent('click'));
} else {
sources.dispatchEvent(newEvent('change'));
}
player.on('loadedmetadata', function() {
if (player.vhs) {
window.vhs = player.tech_.vhs;
window.mpc = player.tech_.vhs.masterPlaylistController_;
window.mpc.masterPlaylistLoader_.on('mediachange', regenerateRepresentations);
regenerateRepresentations();
} else {
window.vhs = null;
window.mpc = null;
}
});
cb(player);
});
});
const urlButtonClick = function(event) {
var ext;
var type = stateEls.type.value;
if (!type.trim()) {
ext = getFileExtension(stateEls.url.value);
if (ext === 'mpd') {
type = 'application/dash+xml';
} else if (ext === 'm3u8') {
type = 'application/x-mpegURL';
}
}
saveState();
var source = {
src: stateEls.url.value,
type: type
};
if (stateEls.keysystems.value) {
source.keySystems = JSON.parse(stateEls.keysystems.value);
}
sources.selectedIndex = -1;
Array.prototype.forEach.call(sources.options, function(s, i) {
if (s.value === stateEls.url.value) {
sources.selectedIndex = i;
}
});
window.player.src(source);
};
urlButton.addEventListener('click', urlButtonClick);
urlButton.addEventListener('tap', urlButtonClick);
sources.addEventListener('change', function(event) {
var selectedOption = sources.options[sources.selectedIndex];
if (!selectedOption) {
return;
}
var src = selectedOption.value;
stateEls.url.value = src;
stateEls.type.value = selectedOption.getAttribute('data-mimetype');
stateEls.keysystems.value = selectedOption.getAttribute('data-key-systems');
urlButton.dispatchEvent(newEvent('click'));
});
stateEls.url.addEventListener('keyup', function(event) {
if (event.key === 'Enter') {
urlButton.click();
}
});
stateEls.type.addEventListener('keyup', function(event) {
if (event.key === 'Enter') {
urlButton.click();
}
});
// run the change handler for the first time
stateEls.minified.dispatchEvent(newEvent('change'));
};
}(window));

View File

@@ -1,41 +0,0 @@
const generate = require('videojs-generate-karma-config');
module.exports = function(config) {
// see https://github.com/videojs/videojs-generate-karma-config
// for options
const options = {
coverage: false,
preferHeadless: false,
browsers(aboutToRun) {
return aboutToRun.filter(function(launcherName) {
return !(/^Safari/).test(launcherName);
});
},
files(defaults) {
defaults.unshift('node_modules/es5-shim/es5-shim.js');
defaults.unshift('node_modules/es6-shim/es6-shim.js');
defaults.splice(
defaults.indexOf('node_modules/video.js/dist/video.js'),
1,
'node_modules/video.js/dist/alt/video.core.js'
);
return defaults;
},
browserstackLaunchers(defaults) {
delete defaults.bsSafariMojave;
delete defaults.bsSafariElCapitan;
return defaults;
},
serverBrowsers() {
return [];
}
};
config = generate(config, options);
// any other custom stuff not supported by options here!
};

View File

@@ -1,35 +0,0 @@
const path = require('path');
const sh = require('shelljs');
const deployDir = 'deploy';
const files = [
'node_modules/video.js/dist/video-js.css',
'node_modules/video.js/dist/alt/video.core.js',
'node_modules/video.js/dist/alt/video.core.min.js',
'node_modules/videojs-contrib-eme/dist/videojs-contrib-eme.js',
'node_modules/videojs-contrib-eme/dist/videojs-contrib-eme.min.js',
'node_modules/videojs-contrib-quality-levels/dist/videojs-contrib-quality-levels.js',
'node_modules/videojs-contrib-quality-levels/dist/videojs-contrib-quality-levels.min.js',
'node_modules/videojs-http-source-selector/dist/videojs-http-source-selector.css',
'node_modules/videojs-http-source-selector/dist/videojs-http-source-selector.js',
'node_modules/videojs-http-source-selector/dist/videojs-http-source-selector.min.js',
'node_modules/d3/d3.min.js',
'logo.svg',
'scripts/sources.json',
'scripts/index-demo-page.js'
];
// cleanup previous deploy
sh.rm('-rf', deployDir);
// make sure the directory exists
sh.mkdir('-p', deployDir);
// create nested directories
files
.map((file) => path.dirname(file))
.forEach((dir) => sh.mkdir('-p', path.join(deployDir, dir)));
// copy files/folders to deploy dir
files
.concat('dist', 'index.html', 'utils')
.forEach((file) => sh.cp('-r', file, path.join(deployDir, file)));

View File

@@ -1,85 +0,0 @@
const generate = require('videojs-generate-rollup-config');
const worker = require('@gkatsev/rollup-plugin-bundle-worker');
const {terser} = require('rollup-plugin-terser');
const createTestData = require('./create-test-data.js');
// see https://github.com/videojs/videojs-generate-rollup-config
// for options
const options = {
input: 'src/videojs-http-streaming.js',
distName: 'videojs-http-streaming',
globals(defaults) {
defaults.browser.xmldom = 'window';
defaults.test.xmldom = 'window';
return defaults;
},
externals(defaults) {
return Object.assign(defaults, {
module: defaults.module.concat([
'aes-decrypter',
'm3u8-parser',
'mpd-parser',
'mux.js',
'@videojs/vhs-utils'
])
});
},
plugins(defaults) {
defaults.module.splice(2, 0, 'worker');
defaults.browser.splice(2, 0, 'worker');
defaults.test.splice(3, 0, 'worker');
defaults.test.splice(0, 0, 'createTestData');
// istanbul is only in the list for regular builds and not watch
if (defaults.test.indexOf('istanbul') !== -1) {
defaults.test.splice(defaults.test.indexOf('istanbul'), 1);
}
return defaults;
},
primedPlugins(defaults) {
return Object.assign(defaults, {
worker: worker(),
uglify: terser({
output: {comments: 'some'},
compress: {passes: 2},
include: [/^.+\.min\.js$/]
}),
createTestData: createTestData()
});
},
babel(defaults) {
const presetEnvSettings = defaults.presets[0][1];
presetEnvSettings.exclude = presetEnvSettings.exclude || [];
presetEnvSettings.exclude.push('@babel/plugin-transform-typeof-symbol');
return defaults;
}
};
const config = generate(options);
// Add additional builds/customization here!
// export the builds to rollup
export default [
config.makeBuild('browser', {
input: 'src/decrypter-worker.js',
output: {
format: 'iife',
name: 'decrypterWorker',
file: 'src/decrypter-worker.worker.js'
},
external: []
}),
config.makeBuild('browser', {
input: 'src/transmuxer-worker.js',
output: {
format: 'iife',
name: 'transmuxerWorker',
file: 'src/transmuxer-worker.worker.js'
},
external: []
})
].concat(Object.values(config.builds));

View File

@@ -1,307 +0,0 @@
[
{
"name": "Bipbop - Muxed TS with 1 alt Audio, 5 captions",
"uri": "https://d2zihajmogu5jn.cloudfront.net/bipbop-advanced/bipbop_16x9_variant.m3u8",
"mimetype": "application/x-mpegurl",
"features": []
},
{
"name": "FMP4 and ts both muxed",
"uri": "https://d2zihajmogu5jn.cloudfront.net/ts-fmp4/index.m3u8",
"mimetype": "application/x-mpegurl",
"features": []
},
{
"name": "Advanced Bipbop - ts and captions muxed",
"uri": "https://devstreaming-cdn.apple.com/videos/streaming/examples/img_bipbop_adv_example_ts/master.m3u8",
"mimetype": "application/x-mpegurl",
"features": []
},
{
"name": "Advanced Bipbop - FMP4 and captions muxed",
"uri": "https://devstreaming-cdn.apple.com/videos/streaming/examples/img_bipbop_adv_example_fmp4/master.m3u8",
"mimetype": "application/x-mpegurl",
"features": []
},
{
"name": "Advanced Bipbop - FMP4 hevc, demuxed",
"uri": "https://devstreaming-cdn.apple.com/videos/streaming/examples/bipbop_adv_example_hevc/master.m3u8",
"mimetype": "application/x-mpegurl",
"features": []
},
{
"name": "Angel One - FMP4 demuxed, many audio/captions",
"uri": "https://storage.googleapis.com/shaka-demo-assets/angel-one-hls/hls.m3u8",
"mimetype": "application/x-mpegurl",
"features": []
},
{
"name": "Parkour - FMP4 demuxed",
"uri": "https://bitdash-a.akamaihd.net/content/MI201109210084_1/m3u8s-fmp4/f08e80da-bf1d-4e3d-8899-f0f6155f6efa.m3u8",
"mimetype": "application/x-mpegurl",
"features": []
},
{
"name": "Song - ts Audio only",
"uri": "https://s3.amazonaws.com/qa.jwplayer.com/~alex/121628/new_master.m3u8",
"mimetype": "application/x-mpegurl",
"features": []
},
{
"name": "Coit Tower drone footage - 4 8 second segment",
"uri": "https://d2zihajmogu5jn.cloudfront.net/CoitTower/master_ts_segtimes.m3u8",
"mimetype": "application/x-mpegurl",
"features": []
},
{
"name": "Disney's Oceans trailer - HLSe, ts Encrypted",
"uri": "https://playertest.longtailvideo.com/adaptive/oceans_aes/oceans_aes.m3u8",
"mimetype": "application/x-mpegurl",
"features": []
},
{
"name": "Sintel - ts with audio/subs and a 4k rendtion",
"uri": "https://bitmovin-a.akamaihd.net/content/sintel/hls/playlist.m3u8",
"mimetype": "application/x-mpegurl",
"features": []
},
{
"name": "Boat Ipsum Subs - HLS + subtitles",
"uri": "https://d2zihajmogu5jn.cloudfront.net/hls-webvtt/master.m3u8",
"mimetype": "application/x-mpegurl",
"features": []
},
{
"name": "Boat Video Only",
"uri": "https://d2zihajmogu5jn.cloudfront.net/video-only/out.m3u8",
"mimetype": "application/x-mpegurl",
"features": []
},
{
"name": "Boat Audio Only",
"uri": "https://d2zihajmogu5jn.cloudfront.net/audio-only/out.m3u8",
"mimetype": "application/x-mpegurl",
"features": []
},
{
"name": "Boat 4K",
"uri": "https://d2zihajmogu5jn.cloudfront.net/4k-hls/out.m3u8",
"mimetype": "application/x-mpegurl",
"features": []
},
{
"name": "Boat Misaligned - 3, 5, 7, second segment playlists",
"uri": "https://d2zihajmogu5jn.cloudfront.net/misaligned-playlists/master.m3u8",
"mimetype": "application/x-mpegurl",
"features": []
},
{
"name": "BBB-CMIF: Big Buck Bunny Dark Truths - demuxed, fmp4",
"uri": "https://storage.googleapis.com/shaka-demo-assets/bbb-dark-truths-hls/hls.m3u8",
"mimetype": "application/x-mpegurl",
"features": []
},
{
"name": "Big Buck Bunny - demuxed audio/video, includes 4K, burns in frame, pts, resolution, bitrate values",
"uri": "https://dash.akamaized.net/akamai/bbb_30fps/bbb_30fps.mpd",
"mimetype": "application/dash+xml",
"features": []
},
{
"name": "Angel One - fmp4, webm, subs (TODO: subs are broken), alternate audio tracks",
"uri": "https://storage.googleapis.com/shaka-demo-assets/angel-one/dash.mpd",
"mimetype": "application/dash+xml",
"features": []
},
{
"name": "Angel One - Widevine, fmp4, webm, subs, alternate audio tracks",
"uri": "https://storage.googleapis.com/shaka-demo-assets/angel-one-widevine/dash.mpd",
"mimetype": "application/dash+xml",
"features": [],
"keySystems": {
"com.widevine.alpha": "https://cwip-shaka-proxy.appspot.com/no_auth"
}
},
{
"name": "BBB-CMIF: Big Buck Bunny Dark Truths - demuxed, fmp4",
"uri": "https://storage.googleapis.com/shaka-demo-assets/bbb-dark-truths/dash.mpd",
"mimetype": "application/dash+xml",
"features": []
},
{
"name": "SIDX demuxed, 2 audio",
"uri": "https://dash.akamaized.net/dash264/TestCases/10a/1/iis_forest_short_poem_multi_lang_480p_single_adapt_aaclc_sidx.mpd",
"mimetype": "application/dash+xml",
"features": []
},
{
"name": "SIDX bipbop-like",
"uri": "https://download.tsi.telecom-paristech.fr/gpac/DASH_CONFORMANCE/TelecomParisTech/mp4-onDemand/mp4-onDemand-mpd-AV.mpd",
"mimetype": "application/dash+xml",
"features": []
},
{
"name": "Google self-driving car - SIDX",
"uri": "https://yt-dash-mse-test.commondatastorage.googleapis.com/media/car-20120827-manifest.mpd",
"mimetype": "application/dash+xml",
"features": []
},
{
"name": "Sintel - single rendition",
"uri": "https://d2zihajmogu5jn.cloudfront.net/sintel_dash/sintel_vod.mpd",
"mimetype": "application/dash+xml",
"features": []
},
{
"name": "HLS - Live - Axinom live stream, may not always be available",
"uri": "https://akamai-axtest.akamaized.net/routes/lapd-v1-acceptance/www_c4/Manifest.m3u8",
"mimetype": "application/x-mpegurl",
"features": ["live"]
},
{
"name": "DASH - Live - Axinom live stream, may not always be available",
"uri": "https://akamai-axtest.akamaized.net/routes/lapd-v1-acceptance/www_c4/Manifest.mpd",
"mimetype": "application/dash+xml",
"features": ["live"]
},
{
"name": "DASH - Live simulated DASH from DASH IF",
"uri": "https://livesim.dashif.org/livesim/mup_30/testpic_2s/Manifest.mpd",
"mimetype": "application/dash+xml",
"features": ["live"]
},
{
"name": "DASH - Shaka Player Source Simulated Live",
"uri": "https://storage.googleapis.com/shaka-live-assets/player-source.mpd",
"mimetype": "application/dash+xml",
"features": ["live"]
},
{
"name": "Apple's LL-HLS test stream",
"uri": "https://ll-hls-test.apple.com/master.m3u8",
"mimetype": "application/x-mpegurl",
"features": ["live", "low-latency"]
},
{
"name": "Apple's LL-HLS test stream, cmaf, fmp4",
"uri": "https://ll-hls-test.apple.com/cmaf/master.m3u8",
"mimetype": "application/x-mpegurl",
"features": ["live", "low-latency"]
},
{
"name": "Axinom Multi DRM - DASH, 4k, HEVC, Playready, Widevine",
"uri": "https://media.axprod.net/TestVectors/v7-MultiDRM-SingleKey/Manifest.mpd",
"mimetype": "application/dash+xml",
"features": [],
"keySystems": {
"com.microsoft.playready": {
"url": "https://drm-playready-licensing.axtest.net/AcquireLicense",
"licenseHeaders": {
"X-AxDRM-Message": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ2ZXJzaW9uIjoxLCJjb21fa2V5X2lkIjoiYjMzNjRlYjUtNTFmNi00YWUzLThjOTgtMzNjZWQ1ZTMxYzc4IiwibWVzc2FnZSI6eyJ0eXBlIjoiZW50aXRsZW1lbnRfbWVzc2FnZSIsImtleXMiOlt7ImlkIjoiOWViNDA1MGQtZTQ0Yi00ODAyLTkzMmUtMjdkNzUwODNlMjY2IiwiZW5jcnlwdGVkX2tleSI6ImxLM09qSExZVzI0Y3Iya3RSNzRmbnc9PSJ9XX19.4lWwW46k-oWcah8oN18LPj5OLS5ZU-_AQv7fe0JhNjA"
}
},
"com.widevine.alpha": {
"url": "https://drm-widevine-licensing.axtest.net/AcquireLicense",
"licenseHeaders": {
"X-AxDRM-Message": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ2ZXJzaW9uIjoxLCJjb21fa2V5X2lkIjoiYjMzNjRlYjUtNTFmNi00YWUzLThjOTgtMzNjZWQ1ZTMxYzc4IiwibWVzc2FnZSI6eyJ0eXBlIjoiZW50aXRsZW1lbnRfbWVzc2FnZSIsImtleXMiOlt7ImlkIjoiOWViNDA1MGQtZTQ0Yi00ODAyLTkzMmUtMjdkNzUwODNlMjY2IiwiZW5jcnlwdGVkX2tleSI6ImxLM09qSExZVzI0Y3Iya3RSNzRmbnc9PSJ9XX19.4lWwW46k-oWcah8oN18LPj5OLS5ZU-_AQv7fe0JhNjA"
}
}
}
},
{
"name": "Axinom Multi DRM, Multi Period - DASH, 4k, HEVC, Playready, Widevine",
"uri": "https://media.axprod.net/TestVectors/v7-MultiDRM-MultiKey-MultiPeriod/Manifest.mpd",
"mimetype": "application/dash+xml",
"features": [],
"keySystems": {
"com.microsoft.playready": {
"url": "https://drm-playready-licensing.axtest.net/AcquireLicense",
"licenseHeaders": {
"X-AxDRM-Message": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ2ZXJzaW9uIjoxLCJjb21fa2V5X2lkIjoiYjMzNjRlYjUtNTFmNi00YWUzLThjOTgtMzNjZWQ1ZTMxYzc4IiwibWVzc2FnZSI6eyJ0eXBlIjoiZW50aXRsZW1lbnRfbWVzc2FnZSIsImtleXMiOlt7ImlkIjoiMDg3Mjc4NmUtZjllNy00NjVmLWEzYTItNGU1YjBlZjhmYTQ1IiwiZW5jcnlwdGVkX2tleSI6IlB3NitlRVlOY3ZqWWJmc2gzWDNmbWc9PSJ9LHsiaWQiOiJjMTRmMDcwOS1mMmI5LTQ0MjctOTE2Yi02MWI1MjU4NjUwNmEiLCJlbmNyeXB0ZWRfa2V5IjoiLzErZk5paDM4bXFSdjR5Y1l6bnQvdz09In0seyJpZCI6IjhiMDI5ZTUxLWQ1NmEtNDRiZC05MTBmLWQ0YjVmZDkwZmJhMiIsImVuY3J5cHRlZF9rZXkiOiJrcTBKdVpFanBGTjhzYVRtdDU2ME9nPT0ifSx7ImlkIjoiMmQ2ZTkzODctNjBjYS00MTQ1LWFlYzItYzQwODM3YjRiMDI2IiwiZW5jcnlwdGVkX2tleSI6IlRjUlFlQld4RW9IT0tIcmFkNFNlVlE9PSJ9LHsiaWQiOiJkZTAyZjA3Zi1hMDk4LTRlZTAtYjU1Ni05MDdjMGQxN2ZiYmMiLCJlbmNyeXB0ZWRfa2V5IjoicG9lbmNTN0dnbWVHRmVvSjZQRUFUUT09In0seyJpZCI6IjkxNGU2OWY0LTBhYjMtNDUzNC05ZTlmLTk4NTM2MTVlMjZmNiIsImVuY3J5cHRlZF9rZXkiOiJlaUkvTXNsbHJRNHdDbFJUL0xObUNBPT0ifSx7ImlkIjoiZGE0NDQ1YzItZGI1ZS00OGVmLWIwOTYtM2VmMzQ3YjE2YzdmIiwiZW5jcnlwdGVkX2tleSI6IjJ3K3pkdnFycERWM3hSMGJKeTR1Z3c9PSJ9LHsiaWQiOiIyOWYwNWU4Zi1hMWFlLTQ2ZTQtODBlOS0yMmRjZDQ0Y2Q3YTEiLCJlbmNyeXB0ZWRfa2V5IjoiL3hsU0hweHdxdTNnby9nbHBtU2dhUT09In0seyJpZCI6IjY5ZmU3MDc3LWRhZGQtNGI1NS05NmNkLWMzZWRiMzk5MTg1MyIsImVuY3J5cHRlZF9rZXkiOiJ6dTZpdXpOMnBzaTBaU3hRaUFUa1JRPT0ifV19fQ.BXr93Et1krYMVs-CUnf7F3ywJWFRtxYdkR7Qn4w3-to"
}
},
"com.widevine.alpha": {
"url": "https://drm-widevine-licensing.axtest.net/AcquireLicense",
"licenseHeaders": {
"X-AxDRM-Message": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ2ZXJzaW9uIjoxLCJjb21fa2V5X2lkIjoiYjMzNjRlYjUtNTFmNi00YWUzLThjOTgtMzNjZWQ1ZTMxYzc4IiwibWVzc2FnZSI6eyJ0eXBlIjoiZW50aXRsZW1lbnRfbWVzc2FnZSIsImtleXMiOlt7ImlkIjoiMDg3Mjc4NmUtZjllNy00NjVmLWEzYTItNGU1YjBlZjhmYTQ1IiwiZW5jcnlwdGVkX2tleSI6IlB3NitlRVlOY3ZqWWJmc2gzWDNmbWc9PSJ9LHsiaWQiOiJjMTRmMDcwOS1mMmI5LTQ0MjctOTE2Yi02MWI1MjU4NjUwNmEiLCJlbmNyeXB0ZWRfa2V5IjoiLzErZk5paDM4bXFSdjR5Y1l6bnQvdz09In0seyJpZCI6IjhiMDI5ZTUxLWQ1NmEtNDRiZC05MTBmLWQ0YjVmZDkwZmJhMiIsImVuY3J5cHRlZF9rZXkiOiJrcTBKdVpFanBGTjhzYVRtdDU2ME9nPT0ifSx7ImlkIjoiMmQ2ZTkzODctNjBjYS00MTQ1LWFlYzItYzQwODM3YjRiMDI2IiwiZW5jcnlwdGVkX2tleSI6IlRjUlFlQld4RW9IT0tIcmFkNFNlVlE9PSJ9LHsiaWQiOiJkZTAyZjA3Zi1hMDk4LTRlZTAtYjU1Ni05MDdjMGQxN2ZiYmMiLCJlbmNyeXB0ZWRfa2V5IjoicG9lbmNTN0dnbWVHRmVvSjZQRUFUUT09In0seyJpZCI6IjkxNGU2OWY0LTBhYjMtNDUzNC05ZTlmLTk4NTM2MTVlMjZmNiIsImVuY3J5cHRlZF9rZXkiOiJlaUkvTXNsbHJRNHdDbFJUL0xObUNBPT0ifSx7ImlkIjoiZGE0NDQ1YzItZGI1ZS00OGVmLWIwOTYtM2VmMzQ3YjE2YzdmIiwiZW5jcnlwdGVkX2tleSI6IjJ3K3pkdnFycERWM3hSMGJKeTR1Z3c9PSJ9LHsiaWQiOiIyOWYwNWU4Zi1hMWFlLTQ2ZTQtODBlOS0yMmRjZDQ0Y2Q3YTEiLCJlbmNyeXB0ZWRfa2V5IjoiL3hsU0hweHdxdTNnby9nbHBtU2dhUT09In0seyJpZCI6IjY5ZmU3MDc3LWRhZGQtNGI1NS05NmNkLWMzZWRiMzk5MTg1MyIsImVuY3J5cHRlZF9rZXkiOiJ6dTZpdXpOMnBzaTBaU3hRaUFUa1JRPT0ifV19fQ.BXr93Et1krYMVs-CUnf7F3ywJWFRtxYdkR7Qn4w3-to"
}
}
}
},
{
"name": "Axinom Clear - DASH, 4k, HEVC",
"uri": "https://media.axprod.net/TestVectors/v7-Clear/Manifest.mpd",
"mimetype": "application/dash+xml",
"features": []
},
{
"name": "Axinom Clear MultiPeriod - DASH, 4k, HEVC",
"uri": "https://media.axprod.net/TestVectors/v7-Clear/Manifest_MultiPeriod.mpd",
"mimetype": "application/dash+xml",
"features": []
},
{
"name": "DASH-IF simulated live",
"uri": "https://livesim.dashif.org/livesim/testpic_2s/Manifest.mpd",
"mimetype": "application/dash+xml",
"features": ["live"]
},
{
"name": "Tears of Steal - Widevine (Unified Streaming)",
"uri": "https://demo.unified-streaming.com/video/tears-of-steel/tears-of-steel-dash-widevine.ism/.mpd",
"mimetype": "application/dash+xml",
"features": [],
"keySystems": {
"com.widevine.alpha": "https://widevine-proxy.appspot.com/proxy"
}
},
{
"name": "Tears of Steal - PlayReady (Unified Streaming)",
"uri": "https://demo.unified-streaming.com/video/tears-of-steel/tears-of-steel-dash-playready.ism/.mpd",
"mimetype": "application/dash+xml",
"features": [],
"keySystems": {
"com.microsoft.playready": "https://test.playready.microsoft.com/service/rightsmanager.asmx"
}
},
{
"name": "Unified Streaming Live DASH",
"uri": "https://live.unified-streaming.com/scte35/scte35.isml/.mpd",
"mimetype": "application/dash+xml",
"features": ["live"]
},
{
"name": "Unified Streaming Live HLS",
"uri": "https://live.unified-streaming.com/scte35/scte35.isml/.m3u8",
"mimetype": "application/x-mpegurl",
"features": ["live"]
},
{
"name": "DOESN'T WORK - Bayerrischer Rundfunk Recorded Loop - DASH, may not always be available",
"uri": "https://irtdashreference-i.akamaihd.net/dash/live/901161/keepixo1/manifestBR2.mpd",
"mimetype": "application/dash+xml",
"features": ["live"]
},
{
"name": "DOESN'T WORK - Bayerrischer Rundfunk Recorded Loop - HLS, may not always be available",
"uri": "https://irtdashreference-i.akamaihd.net/dash/live/901161/keepixo1/playlistBR2.m3u8",
"mimetype": "application/x-mpegurl",
"features": ["live"]
},
{
"name": "Big Buck Bunny - Azure - DASH, Widevine, PlayReady",
"uri": "https://amssamples.streaming.mediaservices.windows.net/622b189f-ec39-43f2-93a2-201ac4e31ce1/BigBuckBunny.ism/manifest(format=mpd-time-csf)",
"mimetype": "application/dash+xml",
"features": [],
"keySystems": {
"com.widevine.alpha": "https://amssamples.keydelivery.mediaservices.windows.net/Widevine/?KID=1ab45440-532c-4399-94dc-5c5ad9584bac",
"com.microsoft.playready": "https://amssamples.keydelivery.mediaservices.windows.net/PlayReady/"
}
},
{
"name": "Big Buck Bunny Audio only, groups have same uri as renditons",
"uri": "https://d2zihajmogu5jn.cloudfront.net/audio-only-dupe-groups/prog_index.m3u8",
"mimetype": "application/x-mpegurl",
"features": []
}
]

View File

@@ -1,92 +0,0 @@
/**
* @file ad-cue-tags.js
*/
import window from 'global/window';
/**
* Searches for an ad cue that overlaps with the given mediaTime
*/
export const findAdCue = function(track, mediaTime) {
const cues = track.cues;
for (let i = 0; i < cues.length; i++) {
const cue = cues[i];
if (mediaTime >= cue.adStartTime && mediaTime <= cue.adEndTime) {
return cue;
}
}
return null;
};
export const updateAdCues = function(media, track, offset = 0) {
if (!media.segments) {
return;
}
let mediaTime = offset;
let cue;
for (let i = 0; i < media.segments.length; i++) {
const segment = media.segments[i];
if (!cue) {
// Since the cues will span for at least the segment duration, adding a fudge
// factor of half segment duration will prevent duplicate cues from being
// created when timing info is not exact (e.g. cue start time initialized
// at 10.006677, but next call mediaTime is 10.003332)
cue = findAdCue(track, mediaTime + (segment.duration / 2));
}
if (cue) {
if ('cueIn' in segment) {
// Found a CUE-IN so end the cue
cue.endTime = mediaTime;
cue.adEndTime = mediaTime;
mediaTime += segment.duration;
cue = null;
continue;
}
if (mediaTime < cue.endTime) {
// Already processed this mediaTime for this cue
mediaTime += segment.duration;
continue;
}
// otherwise extend cue until a CUE-IN is found
cue.endTime += segment.duration;
} else {
if ('cueOut' in segment) {
cue = new window.VTTCue(
mediaTime,
mediaTime + segment.duration,
segment.cueOut
);
cue.adStartTime = mediaTime;
// Assumes tag format to be
// #EXT-X-CUE-OUT:30
cue.adEndTime = mediaTime + parseFloat(segment.cueOut);
track.addCue(cue);
}
if ('cueOutCont' in segment) {
// Entered into the middle of an ad cue
// Assumes tag format to be
// #EXT-X-CUE-OUT-CONT:10/30
const [adOffset, adTotal] = segment.cueOutCont.split('/').map(parseFloat);
cue = new window.VTTCue(
mediaTime,
mediaTime + segment.duration,
''
);
cue.adStartTime = mediaTime - adOffset;
cue.adEndTime = cue.adStartTime + adTotal;
track.addCue(cue);
}
}
mediaTime += segment.duration;
}
};
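
A minimal usage sketch, assuming a browser with window.VTTCue, a player with a metadata text track, and segments shaped like parsed m3u8 output (all values illustrative):

import { findAdCue, updateAdCues } from './ad-cue-tags';

// A playlist with a 30-second ad break signaled across three segments.
const media = {
  segments: [
    { duration: 10, cueOut: '30' }, // #EXT-X-CUE-OUT:30 opens the break
    { duration: 10 },
    { duration: 10, cueIn: '' } // #EXT-X-CUE-IN closes it at 20 seconds
  ]
};
const track = player.textTracks()[0]; // assumed metadata track

updateAdCues(media, track);
// findAdCue returns the cue covering a media time, or null outside ad breaks
const cue = findAdCue(track, 15); // non-null: 15s falls inside [0s, 20s]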

View File

@@ -1,114 +0,0 @@
/**
* @file bin-utils.js
*/
/**
* convert a TimeRange to text
*
* @param {TimeRange} range the timerange to use for conversion
* @param {number} i the iterator on the range to convert
*/
const textRange = function(range, i) {
return range.start(i) + '-' + range.end(i);
};
/**
* format a number as hex string
*
* @param {number} e The number
* @param {number} i the iterator
*/
const formatHexString = function(e, i) {
const value = e.toString(16);
return '00'.substring(0, 2 - value.length) + value + (i % 2 ? ' ' : '');
};
const formatAsciiString = function(e) {
if (e >= 0x20 && e < 0x7e) {
return String.fromCharCode(e);
}
return '.';
};
/**
* Creates an object for sending to a web worker modifying properties that are TypedArrays
* into a new object with separated properties for the buffer, byteOffset, and byteLength.
*
* @param {Object} message
* Object of properties and values to send to the web worker
* @return {Object}
* Modified message with TypedArray values expanded
* @function createTransferableMessage
*/
export const createTransferableMessage = function(message) {
const transferable = {};
Object.keys(message).forEach((key) => {
const value = message[key];
if (ArrayBuffer.isView(value)) {
transferable[key] = {
bytes: value.buffer,
byteOffset: value.byteOffset,
byteLength: value.byteLength
};
} else {
transferable[key] = value;
}
});
return transferable;
};
/**
* Returns a unique string identifier for a media initialization
* segment.
*/
export const initSegmentId = function(initSegment) {
const byterange = initSegment.byterange || {
length: Infinity,
offset: 0
};
return [
byterange.length, byterange.offset, initSegment.resolvedUri
].join(',');
};
/**
* Returns a unique string identifier for a media segment key.
*/
export const segmentKeyId = function(key) {
return key.resolvedUri;
};
/**
* utils to help dump binary data to the console
*/
export const hexDump = (data) => {
const bytes = Array.prototype.slice.call(data);
const step = 16;
let result = '';
let hex;
let ascii;
for (let j = 0; j < bytes.length / step; j++) {
hex = bytes.slice(j * step, j * step + step).map(formatHexString).join('');
ascii = bytes.slice(j * step, j * step + step).map(formatAsciiString).join('');
result += hex + ' ' + ascii + '\n';
}
return result;
};
export const tagDump = ({ bytes }) => hexDump(bytes);
export const textRanges = (ranges) => {
let result = '';
let i;
for (i = 0; i < ranges.length; i++) {
result += textRange(ranges, i) + ' ';
}
return result;
};
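
A small sketch exercising the helpers above with an illustrative byte buffer:

import { hexDump, initSegmentId } from './bin-utils';

// Dump the first bytes of a hypothetical TS packet (0x47 sync byte first).
const bytes = new Uint8Array([0x47, 0x40, 0x11, 0x10, 0x00, 0x42, 0xf0, 0x25]);
console.log(hexDump(bytes));
// -> '4740 1110 0042 f025  G@...B.%' (hex pairs, then an ASCII gutter)

// initSegmentId keys an init segment by byterange and URI:
initSegmentId({ byterange: { length: 100, offset: 0 }, resolvedUri: 'init.mp4' });
// -> '100,0,init.mp4'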

View File

@@ -1,15 +0,0 @@
export default {
GOAL_BUFFER_LENGTH: 30,
MAX_GOAL_BUFFER_LENGTH: 60,
BACK_BUFFER_LENGTH: 30,
GOAL_BUFFER_LENGTH_RATE: 1,
// 0.5 MB/s
INITIAL_BANDWIDTH: 4194304,
// A fudge factor to apply to advertised playlist bitrates to account for
// temporary fluctuations in client bandwidth
BANDWIDTH_VARIANCE: 1.2,
// How much of the buffer must be filled before we consider upswitching
BUFFER_LOW_WATER_LINE: 0,
MAX_BUFFER_LOW_WATER_LINE: 30,
BUFFER_LOW_WATER_LINE_RATE: 1
};
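
These defaults drive the forward-buffer check described in the FAQ answer earlier in this diff; a simplified sketch of that decision, not the library's actual internal logic:

import Config from './config';

// Keep requesting segments until the buffer reaches currentTime + goal.
const needsMoreSegments = (currentTime, bufferedEnd) =>
  bufferedEnd - currentTime < Config.GOAL_BUFFER_LENGTH;

needsMoreSegments(10, 20); // true: buffer should grow to 10 + 30 = 40s
needsMoreSegments(10, 40); // false: goal reached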

View File

@@ -1,866 +0,0 @@
import videojs from 'video.js';
import {
parse as parseMpd,
parseUTCTiming
} from 'mpd-parser';
import {
refreshDelay,
updateMaster as updatePlaylist
} from './playlist-loader';
import { resolveUrl, resolveManifestRedirect } from './resolve-url';
import parseSidx from 'mux.js/lib/tools/parse-sidx';
import { segmentXhrHeaders } from './xhr';
import window from 'global/window';
import {
forEachMediaGroup,
addPropertiesToMaster
} from './manifest';
import containerRequest from './util/container-request.js';
import {toUint8} from '@videojs/vhs-utils/dist/byte-helpers';
const { EventTarget, mergeOptions } = videojs;
/**
* Parses the master XML string and updates playlist URI references.
*
* @param {Object} config
* Object of arguments
* @param {string} config.masterXml
* The mpd XML
* @param {string} config.srcUrl
* The mpd URL
* @param {number} config.clientOffset
* A time difference between server and client clocks, in milliseconds
* @param {Object} config.sidxMapping
* SIDX mappings for moof/mdat URIs and byte ranges
* @return {Object}
* The parsed mpd manifest object
*/
export const parseMasterXml = ({ masterXml, srcUrl, clientOffset, sidxMapping }) => {
const master = parseMpd(masterXml, {
manifestUri: srcUrl,
clientOffset,
sidxMapping
});
addPropertiesToMaster(master, srcUrl);
return master;
};
/**
* Returns a new master manifest that is the result of merging an updated master manifest
* into the original version.
*
* @param {Object} oldMaster
* The old parsed mpd object
* @param {Object} newMaster
* The updated parsed mpd object
* @return {Object}
* A new object representing the original master manifest with the updated media
* playlists merged in
*/
export const updateMaster = (oldMaster, newMaster) => {
let noChanges = true;
let update = mergeOptions(oldMaster, {
// These are top level properties that can be updated
duration: newMaster.duration,
minimumUpdatePeriod: newMaster.minimumUpdatePeriod
});
// First update the playlists in playlist list
for (let i = 0; i < newMaster.playlists.length; i++) {
const playlistUpdate = updatePlaylist(update, newMaster.playlists[i]);
if (playlistUpdate) {
update = playlistUpdate;
noChanges = false;
}
}
// Then update media group playlists
forEachMediaGroup(newMaster, (properties, type, group, label) => {
if (properties.playlists && properties.playlists.length) {
const id = properties.playlists[0].id;
const playlistUpdate = updatePlaylist(update, properties.playlists[0]);
if (playlistUpdate) {
update = playlistUpdate;
// update the playlist reference within media groups
update.mediaGroups[type][group][label].playlists[0] = update.playlists[id];
noChanges = false;
}
}
});
if (newMaster.minimumUpdatePeriod !== oldMaster.minimumUpdatePeriod) {
noChanges = false;
}
if (noChanges) {
return null;
}
return update;
};
export const generateSidxKey = (sidxInfo) => {
// the inclusive end of the byterange (offset + length - 1)
const sidxByteRangeEnd =
sidxInfo.byterange.offset +
sidxInfo.byterange.length -
1;
return sidxInfo.uri + '-' +
sidxInfo.byterange.offset + '-' +
sidxByteRangeEnd;
};
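// For illustration (hypothetical values): a sidx box at bytes 0-199 of
// 'video/sidx.mp4' yields the key 'video/sidx.mp4-0-199', i.e.
// uri + '-' + offset + '-' + (offset + length - 1).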
// SIDX should be equivalent if the URI and byteranges of the SIDX match.
// If the SIDXs have maps, the two maps should match,
// both `a` and `b` missing SIDXs is considered matching.
// If `a` or `b` but not both have a map, they aren't matching.
const equivalentSidx = (a, b) => {
const neitherMap = Boolean(!a.map && !b.map);
const equivalentMap = neitherMap || Boolean(a.map && b.map &&
a.map.byterange.offset === b.map.byterange.offset &&
a.map.byterange.length === b.map.byterange.length);
return equivalentMap &&
a.uri === b.uri &&
a.byterange.offset === b.byterange.offset &&
a.byterange.length === b.byterange.length;
};
// exported for testing
export const compareSidxEntry = (playlists, oldSidxMapping) => {
const newSidxMapping = {};
for (const id in playlists) {
const playlist = playlists[id];
const currentSidxInfo = playlist.sidx;
if (currentSidxInfo) {
const key = generateSidxKey(currentSidxInfo);
if (!oldSidxMapping[key]) {
break;
}
const savedSidxInfo = oldSidxMapping[key].sidxInfo;
if (equivalentSidx(savedSidxInfo, currentSidxInfo)) {
newSidxMapping[key] = oldSidxMapping[key];
}
}
}
return newSidxMapping;
};
/**
* Filters out changed SIDX entries from the mapping, since changed entries need to be requested separately.
*
* The method is exported for testing
*
* @param {string} masterXml the mpd XML
* @param {string} srcUrl the mpd url
* @param {number} clientOffset a time difference between server and client, passed through to the mpd parser
* @param {Object} oldSidxMapping the SIDX to compare against
*/
export const filterChangedSidxMappings = (masterXml, srcUrl, clientOffset, oldSidxMapping) => {
// Don't pass current sidx mapping
const master = parseMpd(masterXml, {
manifestUri: srcUrl,
clientOffset
});
const videoSidx = compareSidxEntry(master.playlists, oldSidxMapping);
let mediaGroupSidx = videoSidx;
forEachMediaGroup(master, (properties, mediaType, groupKey, labelKey) => {
if (properties.playlists && properties.playlists.length) {
const playlists = properties.playlists;
mediaGroupSidx = mergeOptions(
mediaGroupSidx,
compareSidxEntry(playlists, oldSidxMapping)
);
}
});
return mediaGroupSidx;
};
// exported for testing
export const requestSidx_ = (loader, sidxRange, playlist, xhr, options, finishProcessingFn) => {
const sidxInfo = {
// resolve the segment URL relative to the playlist
uri: resolveManifestRedirect(options.handleManifestRedirects, sidxRange.resolvedUri),
// resolvedUri: sidxRange.resolvedUri,
byterange: sidxRange.byterange,
// the segment's playlist
playlist
};
const sidxRequestOptions = videojs.mergeOptions(sidxInfo, {
responseType: 'arraybuffer',
headers: segmentXhrHeaders(sidxInfo)
});
return containerRequest(sidxInfo.uri, xhr, (err, request, container, bytes) => {
if (err) {
return finishProcessingFn(err, request);
}
if (!container || container !== 'mp4') {
return finishProcessingFn({
status: request.status,
message: `Unsupported ${container || 'unknown'} container type for sidx segment at URL: ${sidxInfo.uri}`,
// response is just bytes in this case
// but we really don't want to return that.
response: '',
playlist,
internal: true,
blacklistDuration: Infinity,
// MEDIA_ERR_NETWORK
code: 2
}, request);
}
// if we already downloaded the sidx bytes in the container request, use them
const {offset, length} = sidxInfo.byterange;
if (bytes.length >= (length + offset)) {
return finishProcessingFn(err, {
response: bytes.subarray(offset, offset + length),
status: request.status,
uri: request.uri
});
}
// otherwise request sidx bytes
loader.request = xhr(sidxRequestOptions, finishProcessingFn);
});
};
export default class DashPlaylistLoader extends EventTarget {
// DashPlaylistLoader must accept either a src url or a playlist because subsequent
// playlist loader setups from media groups will expect to be able to pass a playlist
// (since there aren't external URLs to media playlists with DASH)
constructor(srcUrlOrPlaylist, vhs, options = { }, masterPlaylistLoader) {
super();
const { withCredentials = false, handleManifestRedirects = false } = options;
this.vhs_ = vhs;
this.withCredentials = withCredentials;
this.handleManifestRedirects = handleManifestRedirects;
if (!srcUrlOrPlaylist) {
throw new Error('A non-empty playlist URL or object is required');
}
// event naming?
this.on('minimumUpdatePeriod', () => {
this.refreshXml_();
});
// live playlist staleness timeout
this.on('mediaupdatetimeout', () => {
this.refreshMedia_(this.media().id);
});
this.state = 'HAVE_NOTHING';
this.loadedPlaylists_ = {};
// initialize the loader state
// The masterPlaylistLoader will be created with a string
if (typeof srcUrlOrPlaylist === 'string') {
this.srcUrl = srcUrlOrPlaylist;
// TODO: reset sidxMapping between period changes
// once multi-period is refactored
this.sidxMapping_ = {};
return;
}
this.setupChildLoader(masterPlaylistLoader, srcUrlOrPlaylist);
}
setupChildLoader(masterPlaylistLoader, playlist) {
this.masterPlaylistLoader_ = masterPlaylistLoader;
this.childPlaylist_ = playlist;
}
dispose() {
this.trigger('dispose');
this.stopRequest();
this.loadedPlaylists_ = {};
window.clearTimeout(this.minimumUpdatePeriodTimeout_);
window.clearTimeout(this.mediaRequest_);
window.clearTimeout(this.mediaUpdateTimeout);
this.off();
}
hasPendingRequest() {
return this.request || this.mediaRequest_;
}
stopRequest() {
if (this.request) {
const oldRequest = this.request;
this.request = null;
oldRequest.onreadystatechange = null;
oldRequest.abort();
}
}
sidxRequestFinished_(playlist, master, startingState, doneFn) {
return (err, request) => {
// disposed
if (!this.request) {
return;
}
// pending request is cleared
this.request = null;
if (err) {
// use the provided error or create one
// see requestSidx_ for the container request
// that can cause this.
this.error = typeof err === 'object' ? err : {
status: request.status,
message: 'DASH playlist request error at URL: ' + playlist.uri,
response: request.response,
// MEDIA_ERR_NETWORK
code: 2
};
if (startingState) {
this.state = startingState;
}
this.trigger('error');
return;
}
const bytes = toUint8(request.response);
const sidx = parseSidx(bytes.subarray(8));
return doneFn(master, sidx);
};
}
media(playlist) {
// getter
if (!playlist) {
return this.media_;
}
// setter
if (this.state === 'HAVE_NOTHING') {
throw new Error('Cannot switch media playlist from ' + this.state);
}
const startingState = this.state;
// find the playlist object if the target playlist has been specified by URI
if (typeof playlist === 'string') {
if (!this.master.playlists[playlist]) {
throw new Error('Unknown playlist URI: ' + playlist);
}
playlist = this.master.playlists[playlist];
}
const mediaChange = !this.media_ || playlist.id !== this.media_.id;
// switch to previously loaded playlists immediately
if (mediaChange &&
this.loadedPlaylists_[playlist.id] &&
this.loadedPlaylists_[playlist.id].endList) {
this.state = 'HAVE_METADATA';
this.media_ = playlist;
// trigger media change if the active media has been updated
if (mediaChange) {
this.trigger('mediachanging');
this.trigger('mediachange');
}
return;
}
// switching to the active playlist is a no-op
if (!mediaChange) {
return;
}
// switching from an already loaded playlist
if (this.media_) {
this.trigger('mediachanging');
}
if (!playlist.sidx) {
// Continue asynchronously if there is no sidx
// wait one tick to allow haveMaster to run first on a child loader
this.mediaRequest_ = window.setTimeout(
this.haveMetadata.bind(this, { startingState, playlist }),
0
);
// exit early and don't do sidx work
return;
}
// we have sidx mappings
let oldMaster;
let sidxMapping;
// sidxMapping is used when parsing the masterXml, so store
// it on the masterPlaylistLoader
if (this.masterPlaylistLoader_) {
oldMaster = this.masterPlaylistLoader_.master;
sidxMapping = this.masterPlaylistLoader_.sidxMapping_;
} else {
oldMaster = this.master;
sidxMapping = this.sidxMapping_;
}
const sidxKey = generateSidxKey(playlist.sidx);
sidxMapping[sidxKey] = {
sidxInfo: playlist.sidx
};
this.request = requestSidx_(
this,
playlist.sidx,
playlist,
this.vhs_.xhr,
{ handleManifestRedirects: this.handleManifestRedirects },
this.sidxRequestFinished_(playlist, oldMaster, startingState, (newMaster, sidx) => {
if (!newMaster || !sidx) {
throw new Error('failed to request sidx');
}
// update loader's sidxMapping with parsed sidx box
sidxMapping[sidxKey].sidx = sidx;
// everything is ready just continue to haveMetadata
this.haveMetadata({
startingState,
playlist: newMaster.playlists[playlist.id]
});
})
);
}
haveMetadata({startingState, playlist}) {
this.state = 'HAVE_METADATA';
this.loadedPlaylists_[playlist.id] = playlist;
this.mediaRequest_ = null;
// This will trigger loadedplaylist
this.refreshMedia_(playlist.id);
// fire loadedmetadata the first time a media playlist is loaded
// to resolve setup of media groups
if (startingState === 'HAVE_MASTER') {
this.trigger('loadedmetadata');
} else {
// trigger media change if the active media has been updated
this.trigger('mediachange');
}
}
pause() {
this.stopRequest();
window.clearTimeout(this.mediaUpdateTimeout);
window.clearTimeout(this.minimumUpdatePeriodTimeout_);
if (this.state === 'HAVE_NOTHING') {
// If we pause the loader before any data has been retrieved, it's as if we never
// started, so reset to an unstarted state.
this.started = false;
}
}
load(isFinalRendition) {
window.clearTimeout(this.mediaUpdateTimeout);
window.clearTimeout(this.minimumUpdatePeriodTimeout_);
const media = this.media();
if (isFinalRendition) {
const delay = media ? (media.targetDuration / 2) * 1000 : 5 * 1000;
this.mediaUpdateTimeout = window.setTimeout(() => this.load(), delay);
return;
}
// because the playlists are internal to the manifest, load should either load the
// main manifest, or do nothing but trigger an event
if (!this.started) {
this.start();
return;
}
if (media && !media.endList) {
this.trigger('mediaupdatetimeout');
} else {
this.trigger('loadedplaylist');
}
}
start() {
this.started = true;
// We don't need to request the master manifest again
// Call this asynchronously to match the xhr request behavior below
if (this.masterPlaylistLoader_) {
this.mediaRequest_ = window.setTimeout(
this.haveMaster_.bind(this),
0
);
return;
}
// request the specified URL
this.request = this.vhs_.xhr({
uri: this.srcUrl,
withCredentials: this.withCredentials
}, (error, req) => {
// disposed
if (!this.request) {
return;
}
// clear the loader's request reference
this.request = null;
if (error) {
this.error = {
status: req.status,
message: 'DASH playlist request error at URL: ' + this.srcUrl,
responseText: req.responseText,
// MEDIA_ERR_NETWORK
code: 2
};
if (this.state === 'HAVE_NOTHING') {
this.started = false;
}
return this.trigger('error');
}
this.masterXml_ = req.responseText;
if (req.responseHeaders && req.responseHeaders.date) {
this.masterLoaded_ = Date.parse(req.responseHeaders.date);
} else {
this.masterLoaded_ = Date.now();
}
this.srcUrl = resolveManifestRedirect(this.handleManifestRedirects, this.srcUrl, req);
this.syncClientServerClock_(this.onClientServerClockSync_.bind(this));
});
}
/**
* Parses the master xml for UTCTiming node to sync the client clock to the server
* clock. If the UTCTiming node requires a HEAD or GET request, that request is made.
*
* @param {Function} done
* Function to call when clock sync has completed
*/
syncClientServerClock_(done) {
const utcTiming = parseUTCTiming(this.masterXml_);
// No UTCTiming element found in the mpd. Use Date header from mpd request as the
// server clock
if (utcTiming === null) {
this.clientOffset_ = this.masterLoaded_ - Date.now();
return done();
}
if (utcTiming.method === 'DIRECT') {
this.clientOffset_ = utcTiming.value - Date.now();
return done();
}
this.request = this.vhs_.xhr({
uri: resolveUrl(this.srcUrl, utcTiming.value),
method: utcTiming.method,
withCredentials: this.withCredentials
}, (error, req) => {
// disposed
if (!this.request) {
return;
}
if (error) {
// sync request failed, fall back to using date header from mpd
// TODO: log warning
this.clientOffset_ = this.masterLoaded_ - Date.now();
return done();
}
let serverTime;
if (utcTiming.method === 'HEAD') {
if (!req.responseHeaders || !req.responseHeaders.date) {
// expected date header not preset, fall back to using date header from mpd
// TODO: log warning
serverTime = this.masterLoaded_;
} else {
serverTime = Date.parse(req.responseHeaders.date);
}
} else {
serverTime = Date.parse(req.responseText);
}
this.clientOffset_ = serverTime - Date.now();
done();
});
}
haveMaster_() {
this.state = 'HAVE_MASTER';
// clear media request
this.mediaRequest_ = null;
if (!this.masterPlaylistLoader_) {
this.updateMainManifest_(parseMasterXml({
masterXml: this.masterXml_,
srcUrl: this.srcUrl,
clientOffset: this.clientOffset_,
sidxMapping: this.sidxMapping_
}));
// We have the master playlist at this point, so
// trigger this to allow MasterPlaylistController
// to make an initial playlist selection
this.trigger('loadedplaylist');
} else if (!this.media_) {
// no media playlist was specifically selected so select
// the one the child playlist loader was created with
this.media(this.childPlaylist_);
}
}
updateMinimumUpdatePeriodTimeout_() {
// Clear existing timeout
window.clearTimeout(this.minimumUpdatePeriodTimeout_);
const createMUPTimeout = (mup) => {
this.minimumUpdatePeriodTimeout_ = window.setTimeout(() => {
this.trigger('minimumUpdatePeriod');
}, mup);
};
const minimumUpdatePeriod = this.master && this.master.minimumUpdatePeriod;
if (minimumUpdatePeriod > 0) {
createMUPTimeout(minimumUpdatePeriod);
// If the minimumUpdatePeriod has a value of 0, that indicates that the current
// MPD has no future validity, so a new one will need to be acquired when new
// media segments are to be made available. Thus, we use the target duration
// in this case
} else if (minimumUpdatePeriod === 0) {
// If we haven't yet selected a playlist, wait until then so we know the
// target duration
if (!this.media()) {
this.one('loadedplaylist', () => {
createMUPTimeout(this.media().targetDuration * 1000);
});
} else {
createMUPTimeout(this.media().targetDuration * 1000);
}
}
}
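// A hypothetical illustration of the two branches above: with a
// minimumUpdatePeriod of 30000, 'minimumUpdatePeriod' is triggered after 30
// seconds; with a minimumUpdatePeriod of 0 and a selected playlist whose
// targetDuration is 6, the timeout is created with 6 * 1000 = 6000ms instead.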
/**
* Handler for after client/server clock synchronization has happened. Sets up
 * xml refresh timer if specified by the manifest.
*/
onClientServerClockSync_() {
this.haveMaster_();
if (!this.hasPendingRequest() && !this.media_) {
this.media(this.master.playlists[0]);
}
this.updateMinimumUpdatePeriodTimeout_();
}
/**
 * Given a new manifest, update our pointer to it and update the srcUrl based on
 * the location elements of the manifest, if they exist.
*
* @param {Object} updatedManifest the manifest to update to
*/
updateMainManifest_(updatedManifest) {
this.master = updatedManifest;
// if locations isn't set or is an empty array, exit early
if (!this.master.locations || !this.master.locations.length) {
return;
}
const location = this.master.locations[0];
if (location !== this.srcUrl) {
this.srcUrl = location;
}
}
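// For example (URL is hypothetical): a manifest containing
//   <Location>https://cdn.example.com/live/stream.mpd</Location>
// repoints this.srcUrl at that URL for subsequent refreshes.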
/**
* Sends request to refresh the master xml and updates the parsed master manifest
* TODO: Does the client offset need to be recalculated when the xml is refreshed?
*/
refreshXml_() {
// The srcUrl here *may* need to pass through handleManifestRedirects when
// sidx is implemented
this.request = this.vhs_.xhr({
uri: this.srcUrl,
withCredentials: this.withCredentials
}, (error, req) => {
// disposed
if (!this.request) {
return;
}
// clear the loader's request reference
this.request = null;
if (error) {
this.error = {
status: req.status,
message: 'DASH playlist request error at URL: ' + this.srcUrl,
responseText: req.responseText,
// MEDIA_ERR_NETWORK
code: 2
};
if (this.state === 'HAVE_NOTHING') {
this.started = false;
}
return this.trigger('error');
}
this.masterXml_ = req.responseText;
// This will filter out updated sidx info from the mapping
this.sidxMapping_ = filterChangedSidxMappings(
this.masterXml_,
this.srcUrl,
this.clientOffset_,
this.sidxMapping_
);
const master = parseMasterXml({
masterXml: this.masterXml_,
srcUrl: this.srcUrl,
clientOffset: this.clientOffset_,
sidxMapping: this.sidxMapping_
});
const updatedMaster = updateMaster(this.master, master);
const currentSidxInfo = this.media().sidx;
if (updatedMaster) {
if (currentSidxInfo) {
const sidxKey = generateSidxKey(currentSidxInfo);
// the sidx was updated, so the previous mapping was removed
if (!this.sidxMapping_[sidxKey]) {
const playlist = this.media();
this.request = requestSidx_(
this,
playlist.sidx,
playlist,
this.vhs_.xhr,
{ handleManifestRedirects: this.handleManifestRedirects },
this.sidxRequestFinished_(playlist, master, this.state, (newMaster, sidx) => {
if (!newMaster || !sidx) {
throw new Error('failed to request sidx on minimumUpdatePeriod');
}
// update loader's sidxMapping with parsed sidx box
this.sidxMapping_[sidxKey].sidx = sidx;
this.updateMinimumUpdatePeriodTimeout_();
// TODO: do we need to reload the current playlist?
this.refreshMedia_(this.media().id);
return;
})
);
}
} else {
this.updateMainManifest_(updatedMaster);
if (this.media_) {
this.media_ = this.master.playlists[this.media_.id];
}
}
}
this.updateMinimumUpdatePeriodTimeout_();
});
}
/**
* Refreshes the media playlist by re-parsing the master xml and updating playlist
* references. If this is an alternate loader, the updated parsed manifest is retrieved
 * from the master loader.
 *
 * @param {string} mediaID
 *        The id of the media playlist to refresh
 */
refreshMedia_(mediaID) {
if (!mediaID) {
throw new Error('refreshMedia_ must take a media id');
}
let oldMaster;
let newMaster;
if (this.masterPlaylistLoader_) {
oldMaster = this.masterPlaylistLoader_.master;
newMaster = parseMasterXml({
masterXml: this.masterPlaylistLoader_.masterXml_,
srcUrl: this.masterPlaylistLoader_.srcUrl,
clientOffset: this.masterPlaylistLoader_.clientOffset_,
sidxMapping: this.masterPlaylistLoader_.sidxMapping_
});
} else {
oldMaster = this.master;
newMaster = parseMasterXml({
masterXml: this.masterXml_,
srcUrl: this.srcUrl,
clientOffset: this.clientOffset_,
sidxMapping: this.sidxMapping_
});
}
const updatedMaster = updateMaster(oldMaster, newMaster);
if (updatedMaster) {
if (this.masterPlaylistLoader_) {
this.masterPlaylistLoader_.master = updatedMaster;
} else {
this.master = updatedMaster;
}
this.media_ = updatedMaster.playlists[mediaID];
} else {
this.media_ = oldMaster.playlists[mediaID];
this.trigger('playlistunchanged');
}
if (!this.media().endList) {
this.mediaUpdateTimeout = window.setTimeout(() => {
this.trigger('mediaupdatetimeout');
}, refreshDelay(this.media(), !!updatedMaster));
}
this.trigger('loadedplaylist');
}
}

@@ -1,48 +0,0 @@
/* global self */
import { Decrypter } from 'aes-decrypter';
import { createTransferableMessage } from './bin-utils';
/**
* Our web worker interface so that things can talk to aes-decrypter
 * that will be running in a web worker. The scope is passed to this by
* webworkify.
*
* @param {Object} self
* the scope for the web worker
*/
const DecrypterWorker = function(self) {
self.onmessage = function(event) {
const data = event.data;
const encrypted = new Uint8Array(
data.encrypted.bytes,
data.encrypted.byteOffset,
data.encrypted.byteLength
);
const key = new Uint32Array(
data.key.bytes,
data.key.byteOffset,
data.key.byteLength / 4
);
const iv = new Uint32Array(
data.iv.bytes,
data.iv.byteOffset,
data.iv.byteLength / 4
);
/* eslint-disable no-new, handle-callback-err */
new Decrypter(
encrypted,
key,
iv,
function(err, bytes) {
self.postMessage(createTransferableMessage({
source: data.source,
decrypted: bytes
}), [bytes.buffer]);
}
);
/* eslint-enable */
};
};
export default new DecrypterWorker(self);
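// A minimal sketch of how a main thread could drive this worker, assuming a
// `worker` handle created via webworkify and hypothetical segment data
// (encryptedBytes is a Uint8Array; keyWords and ivWords are Uint32Arrays):
//
//   worker.postMessage(createTransferableMessage({
//     source: 'segment-loader',
//     encrypted: encryptedBytes,
//     key: keyWords,
//     iv: ivWords
//   }));
//   worker.onmessage = (event) => {
//     const { bytes, byteOffset, byteLength } = event.data.decrypted;
//     const decryptedSegment = new Uint8Array(bytes, byteOffset, byteLength);
//   };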

@@ -1,666 +0,0 @@
/*! @name @videojs/http-streaming @version 2.2.0 @license Apache-2.0 */
var decrypterWorker = (function () {
'use strict';
function _defineProperties(target, props) {
for (var i = 0; i < props.length; i++) {
var descriptor = props[i];
descriptor.enumerable = descriptor.enumerable || false;
descriptor.configurable = true;
if ("value" in descriptor) descriptor.writable = true;
Object.defineProperty(target, descriptor.key, descriptor);
}
}
function _createClass(Constructor, protoProps, staticProps) {
if (protoProps) _defineProperties(Constructor.prototype, protoProps);
if (staticProps) _defineProperties(Constructor, staticProps);
return Constructor;
}
var createClass = _createClass;
function _inheritsLoose(subClass, superClass) {
subClass.prototype = Object.create(superClass.prototype);
subClass.prototype.constructor = subClass;
subClass.__proto__ = superClass;
}
var inheritsLoose = _inheritsLoose;
/*! @name @videojs/vhs-utils @version 2.2.0 @license MIT */
/**
* @file stream.js
*/
/**
 * A lightweight readable stream implementation that handles event dispatching.
*
* @class Stream
*/
var Stream =
/*#__PURE__*/
function () {
function Stream() {
this.listeners = {};
}
/**
* Add a listener for a specified event type.
*
* @param {string} type the event name
* @param {Function} listener the callback to be invoked when an event of
* the specified type occurs
*/
var _proto = Stream.prototype;
_proto.on = function on(type, listener) {
if (!this.listeners[type]) {
this.listeners[type] = [];
}
this.listeners[type].push(listener);
}
/**
* Remove a listener for a specified event type.
*
* @param {string} type the event name
* @param {Function} listener a function previously registered for this
* type of event through `on`
* @return {boolean} if we could turn it off or not
*/
;
_proto.off = function off(type, listener) {
if (!this.listeners[type]) {
return false;
}
var index = this.listeners[type].indexOf(listener); // TODO: which is better?
// In Video.js we slice listener functions
// on trigger so that it does not mess up the order
// while we loop through.
//
// Here we slice on off so that the loop in trigger
// can continue using its old reference to loop without
// messing up the order.
this.listeners[type] = this.listeners[type].slice(0);
this.listeners[type].splice(index, 1);
return index > -1;
}
/**
* Trigger an event of the specified type on this stream. Any additional
* arguments to this function are passed as parameters to event listeners.
*
* @param {string} type the event name
*/
;
_proto.trigger = function trigger(type) {
var callbacks = this.listeners[type];
if (!callbacks) {
return;
} // Slicing the arguments on every invocation of this method
// can add a significant amount of overhead. Avoid the
// intermediate object creation for the common case of a
// single callback argument
if (arguments.length === 2) {
var length = callbacks.length;
for (var i = 0; i < length; ++i) {
callbacks[i].call(this, arguments[1]);
}
} else {
var args = Array.prototype.slice.call(arguments, 1);
var _length = callbacks.length;
for (var _i = 0; _i < _length; ++_i) {
callbacks[_i].apply(this, args);
}
}
}
/**
* Destroys the stream and cleans up.
*/
;
_proto.dispose = function dispose() {
this.listeners = {};
}
/**
* Forwards all `data` events on this stream to the destination stream. The
* destination stream should provide a method `push` to receive the data
* events as they arrive.
*
* @param {Stream} destination the stream that will receive all `data` events
* @see http://nodejs.org/api/stream.html#stream_readable_pipe_destination_options
*/
;
_proto.pipe = function pipe(destination) {
this.on('data', function (data) {
destination.push(data);
});
};
return Stream;
}();
var stream = Stream;
/*! @name pkcs7 @version 1.0.4 @license Apache-2.0 */
/**
* Returns the subarray of a Uint8Array without PKCS#7 padding.
*
* @param padded {Uint8Array} unencrypted bytes that have been padded
* @return {Uint8Array} the unpadded bytes
* @see http://tools.ietf.org/html/rfc5652
*/
function unpad(padded) {
return padded.subarray(0, padded.byteLength - padded[padded.byteLength - 1]);
}
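// Worked example (values hypothetical): a block whose last 6 bytes are PKCS#7
// padding, i.e. each equal to 0x06, is trimmed back to its 10 plaintext bytes:
//
//   unpad(new Uint8Array([...tenPlainBytes, 6, 6, 6, 6, 6, 6]));
//   // -> the first 10 bytes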
/*! @name aes-decrypter @version 3.0.2 @license Apache-2.0 */
/**
* @file aes.js
*
* This file contains an adaptation of the AES decryption algorithm
 * from the Stanford Javascript Cryptography Library. That work is
* covered by the following copyright and permissions notice:
*
* Copyright 2009-2010 Emily Stark, Mike Hamburg, Dan Boneh.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions are
* met:
*
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
*
* 2. Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials provided
* with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHORS ``AS IS'' AND ANY EXPRESS OR
* IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
* WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
* DISCLAIMED. IN NO EVENT SHALL <COPYRIGHT HOLDER> OR CONTRIBUTORS BE
* LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
* CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
* SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
* BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
* WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE
* OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN
* IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*
* The views and conclusions contained in the software and documentation
* are those of the authors and should not be interpreted as representing
* official policies, either expressed or implied, of the authors.
*/
/**
* Expand the S-box tables.
*
* @private
*/
var precompute = function precompute() {
var tables = [[[], [], [], [], []], [[], [], [], [], []]];
var encTable = tables[0];
var decTable = tables[1];
var sbox = encTable[4];
var sboxInv = decTable[4];
var i;
var x;
var xInv;
var d = [];
var th = [];
var x2;
var x4;
var x8;
var s;
var tEnc;
var tDec; // Compute double and third tables
for (i = 0; i < 256; i++) {
th[(d[i] = i << 1 ^ (i >> 7) * 283) ^ i] = i;
}
for (x = xInv = 0; !sbox[x]; x ^= x2 || 1, xInv = th[xInv] || 1) {
// Compute sbox
s = xInv ^ xInv << 1 ^ xInv << 2 ^ xInv << 3 ^ xInv << 4;
s = s >> 8 ^ s & 255 ^ 99;
sbox[x] = s;
sboxInv[s] = x; // Compute MixColumns
x8 = d[x4 = d[x2 = d[x]]];
tDec = x8 * 0x1010101 ^ x4 * 0x10001 ^ x2 * 0x101 ^ x * 0x1010100;
tEnc = d[s] * 0x101 ^ s * 0x1010100;
for (i = 0; i < 4; i++) {
encTable[i][x] = tEnc = tEnc << 24 ^ tEnc >>> 8;
decTable[i][s] = tDec = tDec << 24 ^ tDec >>> 8;
}
} // Compactify. Considerable speedup on Firefox.
for (i = 0; i < 5; i++) {
encTable[i] = encTable[i].slice(0);
decTable[i] = decTable[i].slice(0);
}
return tables;
};
var aesTables = null;
/**
* Schedule out an AES key for both encryption and decryption. This
* is a low-level class. Use a cipher mode to do bulk encryption.
*
* @class AES
* @param key {Array} The key as an array of 4, 6 or 8 words.
*/
var AES =
/*#__PURE__*/
function () {
function AES(key) {
/**
* The expanded S-box and inverse S-box tables. These will be computed
* on the client so that we don't have to send them down the wire.
*
* There are two tables, _tables[0] is for encryption and
* _tables[1] is for decryption.
*
* The first 4 sub-tables are the expanded S-box with MixColumns. The
* last (_tables[01][4]) is the S-box itself.
*
* @private
*/
// if we have yet to precompute the S-box tables
// do so now
if (!aesTables) {
aesTables = precompute();
} // then make a copy of that object for use
this._tables = [[aesTables[0][0].slice(), aesTables[0][1].slice(), aesTables[0][2].slice(), aesTables[0][3].slice(), aesTables[0][4].slice()], [aesTables[1][0].slice(), aesTables[1][1].slice(), aesTables[1][2].slice(), aesTables[1][3].slice(), aesTables[1][4].slice()]];
var i;
var j;
var tmp;
var sbox = this._tables[0][4];
var decTable = this._tables[1];
var keyLen = key.length;
var rcon = 1;
if (keyLen !== 4 && keyLen !== 6 && keyLen !== 8) {
throw new Error('Invalid aes key size');
}
var encKey = key.slice(0);
var decKey = [];
this._key = [encKey, decKey]; // schedule encryption keys
for (i = keyLen; i < 4 * keyLen + 28; i++) {
tmp = encKey[i - 1]; // apply sbox
if (i % keyLen === 0 || keyLen === 8 && i % keyLen === 4) {
tmp = sbox[tmp >>> 24] << 24 ^ sbox[tmp >> 16 & 255] << 16 ^ sbox[tmp >> 8 & 255] << 8 ^ sbox[tmp & 255]; // shift rows and add rcon
if (i % keyLen === 0) {
tmp = tmp << 8 ^ tmp >>> 24 ^ rcon << 24;
rcon = rcon << 1 ^ (rcon >> 7) * 283;
}
}
encKey[i] = encKey[i - keyLen] ^ tmp;
} // schedule decryption keys
for (j = 0; i; j++, i--) {
tmp = encKey[j & 3 ? i : i - 4];
if (i <= 4 || j < 4) {
decKey[j] = tmp;
} else {
decKey[j] = decTable[0][sbox[tmp >>> 24]] ^ decTable[1][sbox[tmp >> 16 & 255]] ^ decTable[2][sbox[tmp >> 8 & 255]] ^ decTable[3][sbox[tmp & 255]];
}
}
}
/**
* Decrypt 16 bytes, specified as four 32-bit words.
*
* @param {number} encrypted0 the first word to decrypt
* @param {number} encrypted1 the second word to decrypt
* @param {number} encrypted2 the third word to decrypt
* @param {number} encrypted3 the fourth word to decrypt
* @param {Int32Array} out the array to write the decrypted words
* into
* @param {number} offset the offset into the output array to start
* writing results
* @return {Array} The plaintext.
*/
var _proto = AES.prototype;
_proto.decrypt = function decrypt(encrypted0, encrypted1, encrypted2, encrypted3, out, offset) {
var key = this._key[1]; // state variables a,b,c,d are loaded with pre-whitened data
var a = encrypted0 ^ key[0];
var b = encrypted3 ^ key[1];
var c = encrypted2 ^ key[2];
var d = encrypted1 ^ key[3];
var a2;
var b2;
var c2; // key.length === 2 ?
var nInnerRounds = key.length / 4 - 2;
var i;
var kIndex = 4;
var table = this._tables[1]; // load up the tables
var table0 = table[0];
var table1 = table[1];
var table2 = table[2];
var table3 = table[3];
var sbox = table[4]; // Inner rounds. Cribbed from OpenSSL.
for (i = 0; i < nInnerRounds; i++) {
a2 = table0[a >>> 24] ^ table1[b >> 16 & 255] ^ table2[c >> 8 & 255] ^ table3[d & 255] ^ key[kIndex];
b2 = table0[b >>> 24] ^ table1[c >> 16 & 255] ^ table2[d >> 8 & 255] ^ table3[a & 255] ^ key[kIndex + 1];
c2 = table0[c >>> 24] ^ table1[d >> 16 & 255] ^ table2[a >> 8 & 255] ^ table3[b & 255] ^ key[kIndex + 2];
d = table0[d >>> 24] ^ table1[a >> 16 & 255] ^ table2[b >> 8 & 255] ^ table3[c & 255] ^ key[kIndex + 3];
kIndex += 4;
a = a2;
b = b2;
c = c2;
} // Last round.
for (i = 0; i < 4; i++) {
out[(3 & -i) + offset] = sbox[a >>> 24] << 24 ^ sbox[b >> 16 & 255] << 16 ^ sbox[c >> 8 & 255] << 8 ^ sbox[d & 255] ^ key[kIndex++];
a2 = a;
a = b;
b = c;
c = d;
d = a2;
}
};
return AES;
}();
/**
* A wrapper around the Stream class to use setTimeout
 * and run stream "jobs" asynchronously
*
* @class AsyncStream
* @extends Stream
*/
var AsyncStream =
/*#__PURE__*/
function (_Stream) {
inheritsLoose(AsyncStream, _Stream);
function AsyncStream() {
var _this;
_this = _Stream.call(this, stream) || this;
_this.jobs = [];
_this.delay = 1;
_this.timeout_ = null;
return _this;
}
/**
* process an async job
*
* @private
*/
var _proto = AsyncStream.prototype;
_proto.processJob_ = function processJob_() {
this.jobs.shift()();
if (this.jobs.length) {
this.timeout_ = setTimeout(this.processJob_.bind(this), this.delay);
} else {
this.timeout_ = null;
}
}
/**
* push a job into the stream
*
* @param {Function} job the job to push into the stream
*/
;
_proto.push = function push(job) {
this.jobs.push(job);
if (!this.timeout_) {
this.timeout_ = setTimeout(this.processJob_.bind(this), this.delay);
}
};
return AsyncStream;
}(stream);
/**
* Convert network-order (big-endian) bytes into their little-endian
* representation.
*/
var ntoh = function ntoh(word) {
return word << 24 | (word & 0xff00) << 8 | (word & 0xff0000) >> 8 | word >>> 24;
};
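// e.g. ntoh(0x11223344) === 0x44332211; each of the four bytes is mirrored
// around the middle of the 32-bit word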
/**
* Decrypt bytes using AES-128 with CBC and PKCS#7 padding.
*
* @param {Uint8Array} encrypted the encrypted bytes
* @param {Uint32Array} key the bytes of the decryption key
* @param {Uint32Array} initVector the initialization vector (IV) to
* use for the first round of CBC.
* @return {Uint8Array} the decrypted bytes
*
* @see http://en.wikipedia.org/wiki/Advanced_Encryption_Standard
* @see http://en.wikipedia.org/wiki/Block_cipher_mode_of_operation#Cipher_Block_Chaining_.28CBC.29
* @see https://tools.ietf.org/html/rfc2315
*/
var decrypt = function decrypt(encrypted, key, initVector) {
// word-level access to the encrypted bytes
var encrypted32 = new Int32Array(encrypted.buffer, encrypted.byteOffset, encrypted.byteLength >> 2);
var decipher = new AES(Array.prototype.slice.call(key)); // byte and word-level access for the decrypted output
var decrypted = new Uint8Array(encrypted.byteLength);
var decrypted32 = new Int32Array(decrypted.buffer); // temporary variables for working with the IV, encrypted, and
// decrypted data
var init0;
var init1;
var init2;
var init3;
var encrypted0;
var encrypted1;
var encrypted2;
var encrypted3; // iteration variable
var wordIx; // pull out the words of the IV to ensure we don't modify the
// passed-in reference and easier access
init0 = initVector[0];
init1 = initVector[1];
init2 = initVector[2];
init3 = initVector[3]; // decrypt four word sequences, applying cipher-block chaining (CBC)
// to each decrypted block
for (wordIx = 0; wordIx < encrypted32.length; wordIx += 4) {
// convert big-endian (network order) words into little-endian
// (javascript order)
encrypted0 = ntoh(encrypted32[wordIx]);
encrypted1 = ntoh(encrypted32[wordIx + 1]);
encrypted2 = ntoh(encrypted32[wordIx + 2]);
encrypted3 = ntoh(encrypted32[wordIx + 3]); // decrypt the block
decipher.decrypt(encrypted0, encrypted1, encrypted2, encrypted3, decrypted32, wordIx); // XOR with the IV, and restore network byte-order to obtain the
// plaintext
decrypted32[wordIx] = ntoh(decrypted32[wordIx] ^ init0);
decrypted32[wordIx + 1] = ntoh(decrypted32[wordIx + 1] ^ init1);
decrypted32[wordIx + 2] = ntoh(decrypted32[wordIx + 2] ^ init2);
decrypted32[wordIx + 3] = ntoh(decrypted32[wordIx + 3] ^ init3); // setup the IV for the next round
init0 = encrypted0;
init1 = encrypted1;
init2 = encrypted2;
init3 = encrypted3;
}
return decrypted;
};
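// Hedged usage sketch (all values hypothetical); note the output still carries
// its PKCS#7 padding, which Decrypter below strips via unpad():
//
//   var padded = decrypt(
//     encryptedBytes,     // Uint8Array, byteLength a multiple of 16
//     keyWords,           // Uint32Array of 4 words (128-bit key)
//     ivWords             // Uint32Array of 4 words
//   );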
/**
* The `Decrypter` class that manages decryption of AES
* data through `AsyncStream` objects and the `decrypt`
* function
*
* @param {Uint8Array} encrypted the encrypted bytes
* @param {Uint32Array} key the bytes of the decryption key
 * @param {Uint32Array} initVector the initialization vector (IV) to
 *                      use for the first round of CBC.
 * @param {Function} done the function to run when done
* @class Decrypter
*/
var Decrypter =
/*#__PURE__*/
function () {
function Decrypter(encrypted, key, initVector, done) {
var step = Decrypter.STEP;
var encrypted32 = new Int32Array(encrypted.buffer);
var decrypted = new Uint8Array(encrypted.byteLength);
var i = 0;
this.asyncStream_ = new AsyncStream(); // split up the decryption job and do the individual chunks asynchronously
this.asyncStream_.push(this.decryptChunk_(encrypted32.subarray(i, i + step), key, initVector, decrypted));
for (i = step; i < encrypted32.length; i += step) {
initVector = new Uint32Array([ntoh(encrypted32[i - 4]), ntoh(encrypted32[i - 3]), ntoh(encrypted32[i - 2]), ntoh(encrypted32[i - 1])]);
this.asyncStream_.push(this.decryptChunk_(encrypted32.subarray(i, i + step), key, initVector, decrypted));
} // invoke the done() callback when everything is finished
this.asyncStream_.push(function () {
// remove pkcs#7 padding from the decrypted bytes
done(null, unpad(decrypted));
});
}
/**
 * A getter for STEP, the maximum number of bytes to process at one time
 *
 * @return {number} the value of STEP (32000)
*/
var _proto = Decrypter.prototype;
/**
* @private
*/
_proto.decryptChunk_ = function decryptChunk_(encrypted, key, initVector, decrypted) {
return function () {
var bytes = decrypt(encrypted, key, initVector);
decrypted.set(bytes, encrypted.byteOffset);
};
};
createClass(Decrypter, null, [{
key: "STEP",
get: function get() {
// 4 * 8000;
return 32000;
}
}]);
return Decrypter;
}();
/**
* @file bin-utils.js
*/
/**
* Creates an object for sending to a web worker modifying properties that are TypedArrays
 * into a new object with separated properties for the buffer, byteOffset, and byteLength.
*
* @param {Object} message
* Object of properties and values to send to the web worker
* @return {Object}
* Modified message with TypedArray values expanded
* @function createTransferableMessage
*/
var createTransferableMessage = function createTransferableMessage(message) {
var transferable = {};
Object.keys(message).forEach(function (key) {
var value = message[key];
if (ArrayBuffer.isView(value)) {
transferable[key] = {
bytes: value.buffer,
byteOffset: value.byteOffset,
byteLength: value.byteLength
};
} else {
transferable[key] = value;
}
});
return transferable;
};
/* global self */
/**
* Our web worker interface so that things can talk to aes-decrypter
 * that will be running in a web worker. The scope is passed to this by
* webworkify.
*
* @param {Object} self
* the scope for the web worker
*/
var DecrypterWorker = function DecrypterWorker(self) {
self.onmessage = function (event) {
var data = event.data;
var encrypted = new Uint8Array(data.encrypted.bytes, data.encrypted.byteOffset, data.encrypted.byteLength);
var key = new Uint32Array(data.key.bytes, data.key.byteOffset, data.key.byteLength / 4);
var iv = new Uint32Array(data.iv.bytes, data.iv.byteOffset, data.iv.byteLength / 4);
/* eslint-disable no-new, handle-callback-err */
new Decrypter(encrypted, key, iv, function (err, bytes) {
self.postMessage(createTransferableMessage({
source: data.source,
decrypted: bytes
}), [bytes.buffer]);
});
/* eslint-enable */
};
};
var decrypterWorker = new DecrypterWorker(self);
return decrypterWorker;
}());

@@ -1,226 +0,0 @@
import videojs from 'video.js';
import window from 'global/window';
import { Parser as M3u8Parser } from 'm3u8-parser';
import { resolveUrl } from './resolve-url';
const { log } = videojs;
export const createPlaylistID = (index, uri) => {
return `${index}-${uri}`;
};
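// e.g. createPlaylistID(0, 'media/low.m3u8') === '0-media/low.m3u8'
// (the uri value is illustrative)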
/**
* Parses a given m3u8 playlist
*
* @param {string} manifestString
* The downloaded manifest string
* @param {Object[]} [customTagParsers]
* An array of custom tag parsers for the m3u8-parser instance
* @param {Object[]} [customTagMappers]
* An array of custom tag mappers for the m3u8-parser instance
* @return {Object}
* The manifest object
*/
export const parseManifest = ({
manifestString,
customTagParsers = [],
customTagMappers = []
}) => {
const parser = new M3u8Parser();
customTagParsers.forEach(customParser => parser.addParser(customParser));
customTagMappers.forEach(mapper => parser.addTagMapper(mapper));
parser.push(manifestString);
parser.end();
return parser.manifest;
};
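// A hedged usage sketch with a minimal, hypothetical media playlist string:
//
//   const manifest = parseManifest({
//     manifestString:
//       '#EXTM3U\n#EXT-X-TARGETDURATION:10\n#EXTINF:10,\nseg0.ts\n#EXT-X-ENDLIST\n'
//   });
//   // manifest.targetDuration === 10
//   // manifest.segments[0].uri === 'seg0.ts'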
/**
* Loops through all supported media groups in master and calls the provided
* callback for each group
*
* @param {Object} master
* The parsed master manifest object
* @param {Function} callback
* Callback to call for each media group
*/
export const forEachMediaGroup = (master, callback) => {
['AUDIO', 'SUBTITLES'].forEach((mediaType) => {
for (const groupKey in master.mediaGroups[mediaType]) {
for (const labelKey in master.mediaGroups[mediaType][groupKey]) {
const mediaProperties = master.mediaGroups[mediaType][groupKey][labelKey];
callback(mediaProperties, mediaType, groupKey, labelKey);
}
}
});
};
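// Usage sketch (the callback body is illustrative):
//
//   forEachMediaGroup(master, (properties, mediaType, groupKey, labelKey) => {
//     log(`${mediaType}/${groupKey}/${labelKey}`, properties.resolvedUri);
//   });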
/**
* Adds properties and attributes to the playlist to keep consistent functionality for
* playlists throughout VHS.
*
* @param {Object} config
* Arguments object
* @param {Object} config.playlist
* The media playlist
* @param {string} [config.uri]
* The uri to the media playlist (if media playlist is not from within a master
* playlist)
 * @param {string} config.id
* ID to use for the playlist
*/
export const setupMediaPlaylist = ({ playlist, uri, id }) => {
playlist.id = id;
if (uri) {
// For media playlists, m3u8-parser does not have access to a URI, as HLS media
// playlists do not contain their own source URI, but one is needed for consistency in
// VHS.
playlist.uri = uri;
}
// For HLS master playlists, even though certain attributes MUST be defined, the
// stream may still be played without them.
// For HLS media playlists, m3u8-parser does not attach an attributes object to the
// manifest.
//
// To avoid undefined reference errors through the project, and make the code easier
// to write/read, add an empty attributes object for these cases.
playlist.attributes = playlist.attributes || {};
};
/**
* Adds ID, resolvedUri, and attributes properties to each playlist of the master, where
* necessary. In addition, creates playlist IDs for each playlist and adds playlist ID to
* playlist references to the playlists array.
*
* @param {Object} master
* The master playlist
*/
export const setupMediaPlaylists = (master) => {
let i = master.playlists.length;
while (i--) {
const playlist = master.playlists[i];
setupMediaPlaylist({
playlist,
id: createPlaylistID(i, playlist.uri)
});
playlist.resolvedUri = resolveUrl(master.uri, playlist.uri);
master.playlists[playlist.id] = playlist;
// URI reference added for backwards compatibility
master.playlists[playlist.uri] = playlist;
// Although the spec states an #EXT-X-STREAM-INF tag MUST have a BANDWIDTH attribute,
// the stream can be played without it. Although an attributes property may have been
// added to the playlist to prevent undefined references, issue a warning to fix the
// manifest.
if (!playlist.attributes.BANDWIDTH) {
log.warn('Invalid playlist STREAM-INF detected. Missing BANDWIDTH attribute.');
}
}
};
/**
* Adds resolvedUri properties to each media group.
*
* @param {Object} master
* The master playlist
*/
export const resolveMediaGroupUris = (master) => {
forEachMediaGroup(master, (properties) => {
if (properties.uri) {
properties.resolvedUri = resolveUrl(master.uri, properties.uri);
}
});
};
/**
* Creates a master playlist wrapper to insert a sole media playlist into.
*
* @param {Object} media
* Media playlist
* @param {string} uri
* The media URI
*
* @return {Object}
* Master playlist
*/
export const masterForMedia = (media, uri) => {
const id = createPlaylistID(0, uri);
const master = {
mediaGroups: {
'AUDIO': {},
'VIDEO': {},
'CLOSED-CAPTIONS': {},
'SUBTITLES': {}
},
uri: window.location.href,
resolvedUri: window.location.href,
playlists: [{
uri,
id,
resolvedUri: uri,
// m3u8-parser does not attach an attributes property to media playlists so make
// sure that the property is attached to avoid undefined reference errors
attributes: {}
}]
};
// set up ID reference
master.playlists[id] = master.playlists[0];
// URI reference added for backwards compatibility
master.playlists[uri] = master.playlists[0];
return master;
};
/**
* Does an in-place update of the master manifest to add updated playlist URI references
* as well as other properties needed by VHS that aren't included by the parser.
*
* @param {Object} master
* Master manifest object
* @param {string} uri
* The source URI
*/
export const addPropertiesToMaster = (master, uri) => {
master.uri = uri;
for (let i = 0; i < master.playlists.length; i++) {
if (!master.playlists[i].uri) {
// Set up phony URIs for the playlists since playlists are referenced by their URIs
// throughout VHS, but some formats (e.g., DASH) don't have external URIs
// TODO: consider adding dummy URIs in mpd-parser
const phonyUri = `placeholder-uri-${i}`;
master.playlists[i].uri = phonyUri;
}
}
forEachMediaGroup(master, (properties, mediaType, groupKey, labelKey) => {
if (!properties.playlists ||
!properties.playlists.length ||
properties.playlists[0].uri) {
return;
}
// Set up phony URIs for the media group playlists since playlists are referenced by
// their URIs throughout VHS, but some formats (e.g., DASH) don't have external URIs
const phonyUri = `placeholder-uri-${mediaType}-${groupKey}-${labelKey}`;
const id = createPlaylistID(0, phonyUri);
properties.playlists[0].uri = phonyUri;
properties.playlists[0].id = id;
// setup ID and URI references (URI for backwards compatibility)
master.playlists[id] = properties.playlists[0];
master.playlists[phonyUri] = properties.playlists[0];
});
setupMediaPlaylists(master);
resolveMediaGroupUris(master);
};

File diff suppressed because it is too large

@@ -1,839 +0,0 @@
import videojs from 'video.js';
import PlaylistLoader from './playlist-loader';
import DashPlaylistLoader from './dash-playlist-loader';
import noop from './util/noop';
/**
* Convert the properties of an HLS track into an audioTrackKind.
*
* @private
*/
const audioTrackKind_ = (properties) => {
let kind = properties.default ? 'main' : 'alternative';
if (properties.characteristics &&
properties.characteristics.indexOf('public.accessibility.describes-video') >= 0) {
kind = 'main-desc';
}
return kind;
};
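// e.g. { default: true } maps to 'main' and { default: false } to
// 'alternative', while a track whose characteristics include
// 'public.accessibility.describes-video' maps to 'main-desc'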
/**
* Pause provided segment loader and playlist loader if active
*
* @param {SegmentLoader} segmentLoader
* SegmentLoader to pause
* @param {Object} mediaType
* Active media type
* @function stopLoaders
*/
export const stopLoaders = (segmentLoader, mediaType) => {
segmentLoader.abort();
segmentLoader.pause();
if (mediaType && mediaType.activePlaylistLoader) {
mediaType.activePlaylistLoader.pause();
mediaType.activePlaylistLoader = null;
}
};
/**
* Start loading provided segment loader and playlist loader
*
* @param {PlaylistLoader} playlistLoader
* PlaylistLoader to start loading
* @param {Object} mediaType
* Active media type
* @function startLoaders
*/
export const startLoaders = (playlistLoader, mediaType) => {
// Segment loader will be started after `loadedmetadata` or `loadedplaylist` from the
// playlist loader
mediaType.activePlaylistLoader = playlistLoader;
playlistLoader.load();
};
/**
* Returns a function to be called when the media group changes. It performs a
* non-destructive (preserve the buffer) resync of the SegmentLoader. This is because a
* change of group is merely a rendition switch of the same content at another encoding,
* rather than a change of content, such as switching audio from English to Spanish.
*
* @param {string} type
* MediaGroup type
* @param {Object} settings
* Object containing required information for media groups
* @return {Function}
* Handler for a non-destructive resync of SegmentLoader when the active media
* group changes.
* @function onGroupChanged
*/
export const onGroupChanged = (type, settings) => () => {
const {
segmentLoaders: {
[type]: segmentLoader,
main: mainSegmentLoader
},
mediaTypes: { [type]: mediaType }
} = settings;
const activeTrack = mediaType.activeTrack();
const activeGroup = mediaType.activeGroup(activeTrack);
const previousActiveLoader = mediaType.activePlaylistLoader;
stopLoaders(segmentLoader, mediaType);
if (!activeGroup) {
// there is no group active
return;
}
if (!activeGroup.playlistLoader) {
if (previousActiveLoader) {
// The previous group had a playlist loader but the new active group does not
// this means we are switching from demuxed to muxed audio. In this case we want to
// do a destructive reset of the main segment loader and not restart the audio
// loaders.
mainSegmentLoader.resetEverything();
}
return;
}
// Non-destructive resync
segmentLoader.resyncLoader();
startLoaders(activeGroup.playlistLoader, mediaType);
};
export const onGroupChanging = (type, settings) => () => {
const {
segmentLoaders: {
[type]: segmentLoader
}
} = settings;
segmentLoader.abort();
segmentLoader.pause();
};
/**
* Returns a function to be called when the media track changes. It performs a
* destructive reset of the SegmentLoader to ensure we start loading as close to
* currentTime as possible.
*
* @param {string} type
* MediaGroup type
* @param {Object} settings
* Object containing required information for media groups
* @return {Function}
* Handler for a destructive reset of SegmentLoader when the active media
* track changes.
* @function onTrackChanged
*/
export const onTrackChanged = (type, settings) => () => {
const {
segmentLoaders: {
[type]: segmentLoader,
main: mainSegmentLoader
},
mediaTypes: { [type]: mediaType }
} = settings;
const activeTrack = mediaType.activeTrack();
const activeGroup = mediaType.activeGroup(activeTrack);
const previousActiveLoader = mediaType.activePlaylistLoader;
stopLoaders(segmentLoader, mediaType);
if (!activeGroup) {
// there is no group active so we do not want to restart loaders
return;
}
if (type === 'AUDIO') {
if (!activeGroup.playlistLoader) {
// when switching from demuxed audio/video to muxed audio/video (noted by no
// playlist loader for the audio group), we want to do a destructive reset of the
// main segment loader and not restart the audio loaders
mainSegmentLoader.setAudio(true);
// don't have to worry about disabling the audio of the audio segment loader since
// it should be stopped
mainSegmentLoader.resetEverything();
return;
}
// although the segment loader is an audio segment loader, call the setAudio
// function to ensure it is prepared to re-append the init segment (or handle other
// config changes)
segmentLoader.setAudio(true);
mainSegmentLoader.setAudio(false);
}
if (previousActiveLoader === activeGroup.playlistLoader) {
// Nothing has actually changed. This can happen because track change events can fire
// multiple times for a "single" change. One for enabling the new active track, and
// one for disabling the track that was active
startLoaders(activeGroup.playlistLoader, mediaType);
return;
}
if (segmentLoader.track) {
// For WebVTT, set the new text track in the segmentloader
segmentLoader.track(activeTrack);
}
// destructive reset
segmentLoader.resetEverything();
startLoaders(activeGroup.playlistLoader, mediaType);
};
export const onError = {
/**
* Returns a function to be called when a SegmentLoader or PlaylistLoader encounters
* an error.
*
* @param {string} type
* MediaGroup type
* @param {Object} settings
* Object containing required information for media groups
* @return {Function}
* Error handler. Logs warning (or error if the playlist is blacklisted) to
* console and switches back to default audio track.
* @function onError.AUDIO
*/
AUDIO: (type, settings) => () => {
const {
segmentLoaders: { [type]: segmentLoader },
mediaTypes: { [type]: mediaType },
blacklistCurrentPlaylist
} = settings;
stopLoaders(segmentLoader, mediaType);
// switch back to default audio track
const activeTrack = mediaType.activeTrack();
const activeGroup = mediaType.activeGroup();
const id = (activeGroup.filter(group => group.default)[0] || activeGroup[0]).id;
const defaultTrack = mediaType.tracks[id];
if (activeTrack === defaultTrack) {
// Default track encountered an error. All we can do now is blacklist the current
// rendition and hope another will switch audio groups
blacklistCurrentPlaylist({
message: 'Problem encountered loading the default audio track.'
});
return;
}
videojs.log.warn('Problem encountered loading the alternate audio track. ' +
'Switching back to default.');
for (const trackId in mediaType.tracks) {
mediaType.tracks[trackId].enabled = mediaType.tracks[trackId] === defaultTrack;
}
mediaType.onTrackChanged();
},
/**
* Returns a function to be called when a SegmentLoader or PlaylistLoader encounters
* an error.
*
* @param {string} type
* MediaGroup type
* @param {Object} settings
* Object containing required information for media groups
* @return {Function}
* Error handler. Logs warning to console and disables the active subtitle track
* @function onError.SUBTITLES
*/
SUBTITLES: (type, settings) => () => {
const {
segmentLoaders: { [type]: segmentLoader },
mediaTypes: { [type]: mediaType }
} = settings;
videojs.log.warn('Problem encountered loading the subtitle track. ' +
'Disabling subtitle track.');
stopLoaders(segmentLoader, mediaType);
const track = mediaType.activeTrack();
if (track) {
track.mode = 'disabled';
}
mediaType.onTrackChanged();
}
};
export const setupListeners = {
/**
* Setup event listeners for audio playlist loader
*
* @param {string} type
* MediaGroup type
* @param {PlaylistLoader|null} playlistLoader
* PlaylistLoader to register listeners on
* @param {Object} settings
* Object containing required information for media groups
* @function setupListeners.AUDIO
*/
AUDIO: (type, playlistLoader, settings) => {
if (!playlistLoader) {
// no playlist loader means audio will be muxed with the video
return;
}
const {
tech,
requestOptions,
segmentLoaders: { [type]: segmentLoader }
} = settings;
playlistLoader.on('loadedmetadata', () => {
const media = playlistLoader.media();
segmentLoader.playlist(media, requestOptions);
// if the video is already playing, or if this isn't a live video and preload
// permits, start downloading segments
if (!tech.paused() || (media.endList && tech.preload() !== 'none')) {
segmentLoader.load();
}
});
playlistLoader.on('loadedplaylist', () => {
segmentLoader.playlist(playlistLoader.media(), requestOptions);
// If the player isn't paused, ensure that the segment loader is running
if (!tech.paused()) {
segmentLoader.load();
}
});
playlistLoader.on('error', onError[type](type, settings));
},
/**
* Setup event listeners for subtitle playlist loader
*
* @param {string} type
* MediaGroup type
* @param {PlaylistLoader|null} playlistLoader
* PlaylistLoader to register listeners on
* @param {Object} settings
* Object containing required information for media groups
* @function setupListeners.SUBTITLES
*/
SUBTITLES: (type, playlistLoader, settings) => {
const {
tech,
requestOptions,
segmentLoaders: { [type]: segmentLoader },
mediaTypes: { [type]: mediaType }
} = settings;
playlistLoader.on('loadedmetadata', () => {
const media = playlistLoader.media();
segmentLoader.playlist(media, requestOptions);
segmentLoader.track(mediaType.activeTrack());
// if the video is already playing, or if this isn't a live video and preload
// permits, start downloading segments
if (!tech.paused() || (media.endList && tech.preload() !== 'none')) {
segmentLoader.load();
}
});
playlistLoader.on('loadedplaylist', () => {
segmentLoader.playlist(playlistLoader.media(), requestOptions);
// If the player isn't paused, ensure that the segment loader is running
if (!tech.paused()) {
segmentLoader.load();
}
});
playlistLoader.on('error', onError[type](type, settings));
}
};
export const initialize = {
/**
* Setup PlaylistLoaders and AudioTracks for the audio groups
*
* @param {string} type
* MediaGroup type
* @param {Object} settings
* Object containing required information for media groups
* @function initialize.AUDIO
*/
'AUDIO': (type, settings) => {
const {
vhs,
sourceType,
segmentLoaders: { [type]: segmentLoader },
requestOptions,
master: { mediaGroups, playlists },
mediaTypes: {
[type]: {
groups,
tracks
}
},
masterPlaylistLoader
} = settings;
// force a default if we have none
if (!mediaGroups[type] ||
Object.keys(mediaGroups[type]).length === 0) {
mediaGroups[type] = { main: { default: { default: true } } };
}
for (const groupId in mediaGroups[type]) {
if (!groups[groupId]) {
groups[groupId] = [];
}
// List of playlists that have an AUDIO attribute value matching the current
// group ID
const groupPlaylists = playlists.filter(playlist => {
return playlist.attributes[type] === groupId;
});
for (const variantLabel in mediaGroups[type][groupId]) {
let properties = mediaGroups[type][groupId][variantLabel];
// List of playlists for the current group ID that have a matching uri with
// this alternate audio variant
const matchingPlaylists = groupPlaylists.filter(playlist => {
return playlist.resolvedUri === properties.resolvedUri;
});
if (matchingPlaylists.length) {
// If there is a playlist that has the same uri as this audio variant, assume
// that the playlist is audio only. We delete the resolvedUri property here
// to prevent a playlist loader from being created so that we don't have
// both the main and audio segment loaders loading the same audio segments
// from the same playlist.
delete properties.resolvedUri;
}
let playlistLoader;
// if vhs-json was provided as the source, and the media playlist was resolved,
// use the resolved media playlist object
if (sourceType === 'vhs-json' && properties.playlists) {
playlistLoader = new PlaylistLoader(
properties.playlists[0],
vhs,
requestOptions
);
} else if (properties.resolvedUri) {
playlistLoader = new PlaylistLoader(
properties.resolvedUri,
vhs,
requestOptions
);
} else if (properties.playlists && sourceType === 'dash') {
playlistLoader = new DashPlaylistLoader(
properties.playlists[0],
vhs,
requestOptions,
masterPlaylistLoader
);
} else {
// no resolvedUri means the audio is muxed with the video when using this
// audio track
playlistLoader = null;
}
properties = videojs.mergeOptions(
{ id: variantLabel, playlistLoader },
properties
);
setupListeners[type](type, properties.playlistLoader, settings);
groups[groupId].push(properties);
if (typeof tracks[variantLabel] === 'undefined') {
const track = new videojs.AudioTrack({
id: variantLabel,
kind: audioTrackKind_(properties),
enabled: false,
language: properties.language,
default: properties.default,
label: variantLabel
});
tracks[variantLabel] = track;
}
}
}
// setup single error event handler for the segment loader
segmentLoader.on('error', onError[type](type, settings));
},
/**
* Setup PlaylistLoaders and TextTracks for the subtitle groups
*
* @param {string} type
* MediaGroup type
* @param {Object} settings
* Object containing required information for media groups
* @function initialize.SUBTITLES
*/
'SUBTITLES': (type, settings) => {
const {
tech,
vhs,
sourceType,
segmentLoaders: { [type]: segmentLoader },
requestOptions,
master: { mediaGroups },
mediaTypes: {
[type]: {
groups,
tracks
}
},
masterPlaylistLoader
} = settings;
for (const groupId in mediaGroups[type]) {
if (!groups[groupId]) {
groups[groupId] = [];
}
for (const variantLabel in mediaGroups[type][groupId]) {
if (mediaGroups[type][groupId][variantLabel].forced) {
// Subtitle playlists with the forced attribute are not selectable in Safari.
// According to Apple's HLS Authoring Specification:
// If content has forced subtitles and regular subtitles in a given language,
// the regular subtitles track in that language MUST contain both the forced
// subtitles and the regular subtitles for that language.
// Because of this requirement and that Safari does not add forced subtitles,
// forced subtitles are skipped here to maintain consistent experience across
// all platforms
continue;
}
let properties = mediaGroups[type][groupId][variantLabel];
let playlistLoader;
if (sourceType === 'hls') {
playlistLoader =
new PlaylistLoader(properties.resolvedUri, vhs, requestOptions);
} else if (sourceType === 'dash') {
playlistLoader = new DashPlaylistLoader(
properties.playlists[0],
vhs,
requestOptions,
masterPlaylistLoader
);
} else if (sourceType === 'vhs-json') {
playlistLoader = new PlaylistLoader(
// if the vhs-json object included the media playlist, use the media playlist
// as provided, otherwise use the resolved URI to load the playlist
properties.playlists ? properties.playlists[0] : properties.resolvedUri,
vhs,
requestOptions
);
}
properties = videojs.mergeOptions({
id: variantLabel,
playlistLoader
}, properties);
setupListeners[type](type, properties.playlistLoader, settings);
groups[groupId].push(properties);
if (typeof tracks[variantLabel] === 'undefined') {
const track = tech.addRemoteTextTrack({
id: variantLabel,
kind: 'subtitles',
default: properties.default && properties.autoselect,
language: properties.language,
label: variantLabel
}, false).track;
tracks[variantLabel] = track;
}
}
}
// setup single error event handler for the segment loader
segmentLoader.on('error', onError[type](type, settings));
},
/**
* Setup TextTracks for the closed-caption groups
*
 * @param {string} type
* MediaGroup type
* @param {Object} settings
* Object containing required information for media groups
* @function initialize['CLOSED-CAPTIONS']
*/
'CLOSED-CAPTIONS': (type, settings) => {
const {
tech,
master: { mediaGroups },
mediaTypes: {
[type]: {
groups,
tracks
}
}
} = settings;
for (const groupId in mediaGroups[type]) {
if (!groups[groupId]) {
groups[groupId] = [];
}
for (const variantLabel in mediaGroups[type][groupId]) {
const properties = mediaGroups[type][groupId][variantLabel];
// We only support CEA608 captions for now, so ignore anything that
// doesn't use a CCx INSTREAM-ID
if (!properties.instreamId.match(/CC\d/)) {
continue;
}
// No PlaylistLoader is required for Closed-Captions because the captions are
// embedded within the video stream
groups[groupId].push(videojs.mergeOptions({ id: variantLabel }, properties));
if (typeof tracks[variantLabel] === 'undefined') {
const track = tech.addRemoteTextTrack({
id: properties.instreamId,
kind: 'captions',
default: properties.default && properties.autoselect,
language: properties.language,
label: variantLabel
}, false).track;
tracks[variantLabel] = track;
}
}
}
}
};
/**
* Returns a function used to get the active group of the provided type
*
* @param {string} type
* MediaGroup type
* @param {Object} settings
* Object containing required information for media groups
* @return {Function}
* Function that returns the active media group for the provided type. Takes an
* optional parameter {TextTrack} track. If no track is provided, a list of all
* variants in the group, otherwise the variant corresponding to the provided
* track is returned.
* @function activeGroup
*/
export const activeGroup = (type, settings) => (track) => {
const {
masterPlaylistLoader,
mediaTypes: { [type]: { groups } }
} = settings;
const media = masterPlaylistLoader.media();
if (!media) {
return null;
}
let variants = null;
if (media.attributes[type]) {
variants = groups[media.attributes[type]];
}
variants = variants || groups.main;
if (typeof track === 'undefined') {
return variants;
}
if (track === null) {
// An active track was specified so a corresponding group is expected. track === null
// means no track is currently active so there is no corresponding group
return null;
}
return variants.filter((props) => props.id === track.id)[0] || null;
};
export const activeTrack = {
/**
* Returns a function used to get the active track of type provided
*
* @param {string} type
* MediaGroup type
* @param {Object} settings
* Object containing required information for media groups
* @return {Function}
* Function that returns the active media track for the provided type. Returns
* null if no track is active
* @function activeTrack.AUDIO
*/
AUDIO: (type, settings) => () => {
const { mediaTypes: { [type]: { tracks } } } = settings;
for (const id in tracks) {
if (tracks[id].enabled) {
return tracks[id];
}
}
return null;
},
/**
* Returns a function used to get the active track of type provided
*
* @param {string} type
* MediaGroup type
* @param {Object} settings
* Object containing required information for media groups
* @return {Function}
* Function that returns the active media track for the provided type. Returns
* null if no track is active
* @function activeTrack.SUBTITLES
*/
SUBTITLES: (type, settings) => () => {
const { mediaTypes: { [type]: { tracks } } } = settings;
for (const id in tracks) {
if (tracks[id].mode === 'showing' || tracks[id].mode === 'hidden') {
return tracks[id];
}
}
return null;
}
};
/**
* Setup PlaylistLoaders and Tracks for media groups (Audio, Subtitles,
* Closed-Captions) specified in the master manifest.
*
* @param {Object} settings
* Object containing required information for setting up the media groups
* @param {Tech} settings.tech
* The tech of the player
* @param {Object} settings.requestOptions
* XHR request options used by the segment loaders
* @param {PlaylistLoader} settings.masterPlaylistLoader
* PlaylistLoader for the master source
* @param {VhsHandler} settings.vhs
* VHS SourceHandler
* @param {Object} settings.master
* The parsed master manifest
* @param {Object} settings.mediaTypes
* Object to store the loaders, tracks, and utility methods for each media type
* @param {Function} settings.blacklistCurrentPlaylist
* Blacklists the current rendition and forces a rendition switch.
* @function setupMediaGroups
*/
export const setupMediaGroups = (settings) => {
['AUDIO', 'SUBTITLES', 'CLOSED-CAPTIONS'].forEach((type) => {
initialize[type](type, settings);
});
const {
mediaTypes,
masterPlaylistLoader,
tech,
vhs
} = settings;
// setup active group and track getters and change event handlers
['AUDIO', 'SUBTITLES'].forEach((type) => {
mediaTypes[type].activeGroup = activeGroup(type, settings);
mediaTypes[type].activeTrack = activeTrack[type](type, settings);
mediaTypes[type].onGroupChanged = onGroupChanged(type, settings);
mediaTypes[type].onGroupChanging = onGroupChanging(type, settings);
mediaTypes[type].onTrackChanged = onTrackChanged(type, settings);
});
// DO NOT enable the default subtitle or caption track.
// DO enable the default audio track
const audioGroup = mediaTypes.AUDIO.activeGroup();
if (audioGroup) {
const groupId = (audioGroup.filter(group => group.default)[0] || audioGroup[0]).id;
mediaTypes.AUDIO.tracks[groupId].enabled = true;
mediaTypes.AUDIO.onTrackChanged();
}
masterPlaylistLoader.on('mediachange', () => {
['AUDIO', 'SUBTITLES'].forEach(type => mediaTypes[type].onGroupChanged());
});
masterPlaylistLoader.on('mediachanging', () => {
['AUDIO', 'SUBTITLES'].forEach(type => mediaTypes[type].onGroupChanging());
});
// custom audio track change event handler for usage event
const onAudioTrackChanged = () => {
mediaTypes.AUDIO.onTrackChanged();
tech.trigger({ type: 'usage', name: 'vhs-audio-change' });
tech.trigger({ type: 'usage', name: 'hls-audio-change' });
};
tech.audioTracks().addEventListener('change', onAudioTrackChanged);
tech.remoteTextTracks().addEventListener(
'change',
mediaTypes.SUBTITLES.onTrackChanged
);
vhs.on('dispose', () => {
tech.audioTracks().removeEventListener('change', onAudioTrackChanged);
tech.remoteTextTracks().removeEventListener(
'change',
mediaTypes.SUBTITLES.onTrackChanged
);
});
// clear existing audio tracks and add the ones we just created
tech.clearTracks('audio');
for (const id in mediaTypes.AUDIO.tracks) {
tech.audioTracks().addTrack(mediaTypes.AUDIO.tracks[id]);
}
};
/**
* Creates skeleton object used to store the loaders, tracks, and utility methods for each
* media type
*
* @return {Object}
* Object to store the loaders, tracks, and utility methods for each media type
* @function createMediaTypes
*/
export const createMediaTypes = () => {
const mediaTypes = {};
['AUDIO', 'SUBTITLES', 'CLOSED-CAPTIONS'].forEach((type) => {
mediaTypes[type] = {
groups: {},
tracks: {},
activePlaylistLoader: null,
activeGroup: noop,
activeTrack: noop,
onGroupChanged: noop,
onTrackChanged: noop
};
});
return mediaTypes;
};

@@ -1,932 +0,0 @@
import videojs from 'video.js';
import { createTransferableMessage } from './bin-utils';
import { stringToArrayBuffer } from './util/string-to-array-buffer';
import { transmux } from './segment-transmuxer';
import { probeTsSegment } from './util/segment';
import mp4probe from 'mux.js/lib/mp4/probe';
import { segmentXhrHeaders } from './xhr';
import {
detectContainerForBytes,
isLikelyFmp4MediaSegment
} from '@videojs/vhs-utils/dist/containers';
export const REQUEST_ERRORS = {
FAILURE: 2,
TIMEOUT: -101,
ABORTED: -102
};
/**
* Abort all requests
*
* @param {Object} activeXhrs - an object that tracks all XHR requests
*/
const abortAll = (activeXhrs) => {
activeXhrs.forEach((xhr) => {
xhr.abort();
});
};
/**
* Gather important bandwidth stats once a request has completed
*
* @param {Object} request - the XHR request from which to gather stats
*/
const getRequestStats = (request) => {
return {
bandwidth: request.bandwidth,
bytesReceived: request.bytesReceived || 0,
roundTripTime: request.roundTripTime || 0
};
};
/**
* If possible gather bandwidth stats as a request is in
* progress
*
* @param {Event} progressEvent - an event object from an XHR's progress event
*/
const getProgressStats = (progressEvent) => {
const request = progressEvent.target;
const roundTripTime = Date.now() - request.requestTime;
const stats = {
bandwidth: Infinity,
bytesReceived: 0,
roundTripTime: roundTripTime || 0
};
stats.bytesReceived = progressEvent.loaded;
// This can result in Infinity if stats.roundTripTime is 0 but that is ok
// because we should only use bandwidth stats on progress to determine when to
// abort a request early due to insufficient bandwidth
stats.bandwidth = Math.floor((stats.bytesReceived / stats.roundTripTime) * 8 * 1000);
return stats;
};
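// Worked example (numbers hypothetical): 500000 bytes received 1000ms after
// the request started yields (500000 / 1000) * 8 * 1000 = 4,000,000 bits/s.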
/**
* Handle all error conditions in one place and return an object
* with all the information
*
 * @param {Error|null} error - if non-null signals an error occurred with the XHR
* @param {Object} request - the XHR request that possibly generated the error
*/
const handleErrors = (error, request) => {
if (request.timedout) {
return {
status: request.status,
message: 'HLS request timed-out at URL: ' + request.uri,
code: REQUEST_ERRORS.TIMEOUT,
xhr: request
};
}
if (request.aborted) {
return {
status: request.status,
message: 'HLS request aborted at URL: ' + request.uri,
code: REQUEST_ERRORS.ABORTED,
xhr: request
};
}
if (error) {
return {
status: request.status,
message: 'HLS request errored at URL: ' + request.uri,
code: REQUEST_ERRORS.FAILURE,
xhr: request
};
}
return null;
};
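// e.g. a request whose `timedout` flag is set resolves to code
// REQUEST_ERRORS.TIMEOUT (-101), an aborted one to REQUEST_ERRORS.ABORTED
// (-102), and any other XHR error to REQUEST_ERRORS.FAILURE (2); a null
// return means the request succeeded.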
/**
* Handle responses for key data and convert the key data to the correct format
* for the decryption step later
*
* @param {Object} segment - a simplified copy of the segmentInfo object
* from SegmentLoader
* @param {Function} finishProcessingFn - a callback to execute to continue processing
* this request
*/
const handleKeyResponse = (segment, finishProcessingFn) => (error, request) => {
const response = request.response;
const errorObj = handleErrors(error, request);
if (errorObj) {
return finishProcessingFn(errorObj, segment);
}
if (response.byteLength !== 16) {
return finishProcessingFn({
status: request.status,
message: 'Invalid HLS key at URL: ' + request.uri,
code: REQUEST_ERRORS.FAILURE,
xhr: request
}, segment);
}
const view = new DataView(response);
segment.key.bytes = new Uint32Array([
view.getUint32(0),
view.getUint32(4),
view.getUint32(8),
view.getUint32(12)
]);
return finishProcessingFn(null, segment);
};
/**
* Handle init-segment responses
*
* @param {Object} segment - a simplified copy of the segmentInfo object
* from SegmentLoader
* @param {Function} finishProcessingFn - a callback to execute to continue processing
* this request
*/
const handleInitSegmentResponse =
({segment, finishProcessingFn}) => (error, request) => {
const response = request.response;
const errorObj = handleErrors(error, request);
if (errorObj) {
return finishProcessingFn(errorObj, segment);
}
// stop processing if received empty content
if (response.byteLength === 0) {
return finishProcessingFn({
status: request.status,
message: 'Empty HLS segment content at URL: ' + request.uri,
code: REQUEST_ERRORS.FAILURE,
xhr: request
}, segment);
}
segment.map.bytes = new Uint8Array(request.response);
const type = detectContainerForBytes(segment.map.bytes);
// TODO: We should also handle ts init segments here, but we
// only know how to parse mp4 init segments at the moment
if (type !== 'mp4') {
return finishProcessingFn({
status: request.status,
message: `Found unsupported ${type || 'unknown'} container for initialization segment at URL: ${request.uri}`,
code: REQUEST_ERRORS.FAILURE,
internal: true,
xhr: request
}, segment);
}
const tracks = mp4probe.tracks(segment.map.bytes);
tracks.forEach(function(track) {
segment.map.tracks = segment.map.tracks || {};
// only support one track of each type for now
if (segment.map.tracks[track.type]) {
return;
}
segment.map.tracks[track.type] = track;
if (track.id && track.timescale) {
segment.map.timescales = segment.map.timescales || {};
segment.map.timescales[track.id] = track.timescale;
}
});
return finishProcessingFn(null, segment);
};
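// Illustrative sketch (values hypothetical) of the shape produced above: after
// a successful probe of an mp4 init segment, segment.map might look like:
//
//   {
//     bytes: Uint8Array,
//     tracks: {
//       video: { id: 1, type: 'video', codec: 'avc1.4d401f', timescale: 90000 },
//       audio: { id: 2, type: 'audio', codec: 'mp4a.40.2', timescale: 44100 }
//     },
//     timescales: { 1: 90000, 2: 44100 }
//   }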
/**
* Response handler for segment requests, being sure to set the correct
* property depending on whether the segment is encrypted or not.
* Also records and keeps track of stats that are used for ABR purposes
*
* @param {Object} segment - a simplified copy of the segmentInfo object
* from SegmentLoader
* @param {Function} finishProcessingFn - a callback to execute to continue processing
* this request
*/
const handleSegmentResponse = ({
segment,
finishProcessingFn,
responseType
}) => (error, request) => {
const response = request.response;
const errorObj = handleErrors(error, request);
if (errorObj) {
return finishProcessingFn(errorObj, segment);
}
const newBytes =
// although responseText "should" exist, this guard serves to prevent an error being
// thrown for two primary cases:
// 1. the mime type override stops working, or is not implemented for a specific
// browser
// 2. when using mock XHR libraries like sinon that do not allow the override behavior
(responseType === 'arraybuffer' || !request.responseText) ?
request.response :
stringToArrayBuffer(request.responseText.substring(segment.lastReachedChar || 0));
// stop processing if received empty content
if (response.byteLength === 0) {
return finishProcessingFn({
status: request.status,
message: 'Empty HLS segment content at URL: ' + request.uri,
code: REQUEST_ERRORS.FAILURE,
xhr: request
}, segment);
}
segment.stats = getRequestStats(request);
if (segment.key) {
segment.encryptedBytes = new Uint8Array(newBytes);
} else {
segment.bytes = new Uint8Array(newBytes);
}
return finishProcessingFn(null, segment);
};
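// stringToArrayBuffer is defined elsewhere in this module; a minimal sketch of
// the idea (assuming the 'text/plain; charset=x-user-defined' override used
// below, which maps each response byte to one character) might look like:
//
//   const stringToArrayBuffer = (string) => {
//     const view = new Uint8Array(new ArrayBuffer(string.length));
//     for (let i = 0; i < string.length; i++) {
//       view[i] = string.charCodeAt(i) & 0xFF;
//     }
//     return view.buffer;
//   };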
const transmuxAndNotify = ({
segment,
bytes,
isPartial,
trackInfoFn,
timingInfoFn,
videoSegmentTimingInfoFn,
id3Fn,
captionsFn,
dataFn,
doneFn
}) => {
const fmp4Tracks = segment.map && segment.map.tracks || {};
const isMuxed = Boolean(fmp4Tracks.audio && fmp4Tracks.video);
// Keep references to each function so we can null them out after we're done with them.
// One reason for this is that in the case of full segments, we want to trust start
// times from the probe, rather than the transmuxer.
let audioStartFn = timingInfoFn.bind(null, segment, 'audio', 'start');
const audioEndFn = timingInfoFn.bind(null, segment, 'audio', 'end');
let videoStartFn = timingInfoFn.bind(null, segment, 'video', 'start');
const videoEndFn = timingInfoFn.bind(null, segment, 'video', 'end');
// Check to see if we are appending a full segment.
if (!isPartial && !segment.lastReachedChar) {
// In the full segment transmuxer, we don't yet have the ability to extract a "proper"
// start time. Meaning cached frame data may corrupt our notion of where this segment
// really starts. To get around this, full segment appends should probe for the info
// needed.
const probeResult = probeTsSegment(bytes, segment.baseStartTime);
if (probeResult) {
trackInfoFn(segment, {
hasAudio: probeResult.hasAudio,
hasVideo: probeResult.hasVideo,
isMuxed
});
trackInfoFn = null;
if (probeResult.hasAudio && !isMuxed) {
audioStartFn(probeResult.audioStart);
}
if (probeResult.hasVideo) {
videoStartFn(probeResult.videoStart);
}
audioStartFn = null;
videoStartFn = null;
}
}
transmux({
bytes,
transmuxer: segment.transmuxer,
audioAppendStart: segment.audioAppendStart,
gopsToAlignWith: segment.gopsToAlignWith,
isPartial,
remux: isMuxed,
onData: (result) => {
result.type = result.type === 'combined' ? 'video' : result.type;
dataFn(segment, result);
},
onTrackInfo: (trackInfo) => {
if (trackInfoFn) {
if (isMuxed) {
trackInfo.isMuxed = true;
}
trackInfoFn(segment, trackInfo);
}
},
onAudioTimingInfo: (audioTimingInfo) => {
// we only want the first start value we encounter
if (audioStartFn && typeof audioTimingInfo.start !== 'undefined') {
audioStartFn(audioTimingInfo.start);
audioStartFn = null;
}
// we want to continually update the end time
if (audioEndFn && typeof audioTimingInfo.end !== 'undefined') {
audioEndFn(audioTimingInfo.end);
}
},
onVideoTimingInfo: (videoTimingInfo) => {
// we only want the first start value we encounter
if (videoStartFn && typeof videoTimingInfo.start !== 'undefined') {
videoStartFn(videoTimingInfo.start);
videoStartFn = null;
}
// we want to continually update the end time
if (videoEndFn && typeof videoTimingInfo.end !== 'undefined') {
videoEndFn(videoTimingInfo.end);
}
},
onVideoSegmentTimingInfo: (videoSegmentTimingInfo) => {
videoSegmentTimingInfoFn(videoSegmentTimingInfo);
},
onId3: (id3Frames, dispatchType) => {
id3Fn(segment, id3Frames, dispatchType);
},
onCaptions: (captions) => {
captionsFn(segment, [captions]);
},
onDone: (result) => {
// To handle partial appends, there won't be a done function passed in (since
// there's still, potentially, more segment to process), so there's nothing to do.
if (!doneFn || isPartial) {
return;
}
result.type = result.type === 'combined' ? 'video' : result.type;
doneFn(null, segment, result);
}
});
};
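// Illustrative sketch of the one-shot start / rolling end pattern used above
// (handler name hypothetical): the bound start callback is nulled after its
// first use, so start times are reported once while end times keep updating:
//
//   let startFn = timingInfoFn.bind(null, segment, 'video', 'start');
//   const endFn = timingInfoFn.bind(null, segment, 'video', 'end');
//   const onTimingInfo = ({start, end}) => {
//     if (startFn && typeof start !== 'undefined') {
//       startFn(start);
//       startFn = null;
//     }
//     if (typeof end !== 'undefined') {
//       endFn(end);
//     }
//   };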
const handleSegmentBytes = ({
segment,
bytes,
isPartial,
trackInfoFn,
timingInfoFn,
videoSegmentTimingInfoFn,
id3Fn,
captionsFn,
dataFn,
doneFn
}) => {
const bytesAsUint8Array = new Uint8Array(bytes);
// TODO:
// We should have a handler that fetches the number of bytes required
// to check if something is fmp4. This will allow us to save bandwidth
// because we can only blacklist a playlist and abort requests
// by codec after trackinfo triggers.
if (isLikelyFmp4MediaSegment(bytesAsUint8Array)) {
segment.isFmp4 = true;
const {tracks} = segment.map;
const trackInfo = {
isFmp4: true,
hasVideo: !!tracks.video,
hasAudio: !!tracks.audio
};
// if we have an audio track with a codec that is not set to
// encrypted audio
if (tracks.audio && tracks.audio.codec && tracks.audio.codec !== 'enca') {
trackInfo.audioCodec = tracks.audio.codec;
}
// if we have a video track with a codec that is not set to
// encrypted video
if (tracks.video && tracks.video.codec && tracks.video.codec !== 'encv') {
trackInfo.videoCodec = tracks.video.codec;
}
if (tracks.video && tracks.audio) {
trackInfo.isMuxed = true;
}
// since we don't support appending fmp4 data on progress, we know we have the full
// segment here
trackInfoFn(segment, trackInfo);
// The probe doesn't provide the segment end time, so only callback with the start
// time. The end time can be roughly calculated by the receiver using the duration.
//
// Note that the start time returned by the probe reflects the baseMediaDecodeTime, as
// that is the true start of the segment (where the playback engine should begin
// decoding).
const timingInfo = mp4probe.startTime(segment.map.timescales, bytesAsUint8Array);
if (trackInfo.hasAudio && !trackInfo.isMuxed) {
timingInfoFn(segment, 'audio', 'start', timingInfo);
}
if (trackInfo.hasVideo) {
timingInfoFn(segment, 'video', 'start', timingInfo);
}
const finishLoading = (captions) => {
// if the track still has audio at this point it is only possible
// for it to be audio only. See `tracks.video && tracks.audio` if statement
// above.
// we make sure to use the current `bytes` here, as ownership of the
// underlying buffer may have been transferred to (and back from) the
// transmuxer for caption parsing below
dataFn(segment, {data: bytes, type: trackInfo.hasAudio && !trackInfo.isMuxed ? 'audio' : 'video'});
if (captions && captions.length) {
captionsFn(segment, captions);
}
doneFn(null, segment, {});
};
// Run through the CaptionParser in case there are captions.
// Initialize CaptionParser if it hasn't been yet
if (!tracks.video || !bytes.byteLength || !segment.transmuxer) {
finishLoading();
return;
}
const buffer = bytes instanceof ArrayBuffer ? bytes : bytes.buffer;
const byteOffset = bytes instanceof ArrayBuffer ? 0 : bytes.byteOffset;
const listenForCaptions = (event) => {
if (event.data.action !== 'mp4Captions') {
return;
}
segment.transmuxer.removeEventListener('message', listenForCaptions);
const data = event.data.data;
// transfer ownership of bytes back to us.
segment.bytes = bytes = new Uint8Array(data, data.byteOffset || 0, data.byteLength);
finishLoading(event.data.captions);
};
segment.transmuxer.addEventListener('message', listenForCaptions);
// transfer ownership of bytes to worker.
segment.transmuxer.postMessage({
action: 'pushMp4Captions',
timescales: segment.map.timescales,
trackIds: [tracks.video.id],
data: buffer,
byteOffset,
byteLength: bytes.byteLength
}, [ buffer ]);
return;
}
// VTT or other segments that don't need processing
if (!segment.transmuxer) {
doneFn(null, segment, {});
return;
}
if (typeof segment.container === 'undefined') {
segment.container = detectContainerForBytes(bytesAsUint8Array);
}
if (segment.container !== 'ts' && segment.container !== 'aac') {
trackInfoFn(segment, {hasAudio: false, hasVideo: false});
doneFn(null, segment, {});
return;
}
// ts or aac
transmuxAndNotify({
segment,
bytes,
isPartial,
trackInfoFn,
timingInfoFn,
videoSegmentTimingInfoFn,
id3Fn,
captionsFn,
dataFn,
doneFn
});
};
/**
* Decrypt the segment via the decryption web worker
*
* @param {WebWorker} decryptionWorker - a WebWorker interface to AES-128 decryption
* routines
* @param {Object} segment - a simplified copy of the segmentInfo object
* from SegmentLoader
* @param {Function} trackInfoFn - a callback that receives track info
* @param {Function} dataFn - a callback that is executed when segment bytes are available
* and ready to use
* @param {Function} doneFn - a callback that is executed after decryption has completed
*/
const decryptSegment = ({
decryptionWorker,
segment,
trackInfoFn,
timingInfoFn,
videoSegmentTimingInfoFn,
id3Fn,
captionsFn,
dataFn,
doneFn
}) => {
const decryptionHandler = (event) => {
if (event.data.source === segment.requestId) {
decryptionWorker.removeEventListener('message', decryptionHandler);
const decrypted = event.data.decrypted;
segment.bytes = new Uint8Array(
decrypted.bytes,
decrypted.byteOffset,
decrypted.byteLength
);
handleSegmentBytes({
segment,
bytes: segment.bytes,
isPartial: false,
trackInfoFn,
timingInfoFn,
videoSegmentTimingInfoFn,
id3Fn,
captionsFn,
dataFn,
doneFn
});
}
};
decryptionWorker.addEventListener('message', decryptionHandler);
let keyBytes;
if (segment.key.bytes.slice) {
keyBytes = segment.key.bytes.slice();
} else {
keyBytes = new Uint32Array(Array.prototype.slice.call(segment.key.bytes));
}
// this is an encrypted segment
// incrementally decrypt the segment
decryptionWorker.postMessage(createTransferableMessage({
source: segment.requestId,
encrypted: segment.encryptedBytes,
key: keyBytes,
iv: segment.key.iv
}), [
segment.encryptedBytes.buffer,
keyBytes.buffer
]);
};
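// Illustrative note on the transfer list above: buffers listed in
// postMessage's second argument have their ownership transferred to the
// worker, which is why the key is copied (sliced) first, leaving
// segment.key.bytes usable for later segments encrypted with the same key.
// A minimal sketch:
//
//   const keyBytes = segment.key.bytes.slice();
//   decryptionWorker.postMessage(
//     { encrypted: segment.encryptedBytes, key: keyBytes },
//     [segment.encryptedBytes.buffer, keyBytes.buffer]
//   );
//   // after this call both transferred buffers are detached on this side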
/**
* This function waits for all XHRs to finish (with either success or failure)
* before continuing processing via its callback. The function gathers errors
* from each request into a single errors array so that the error status for
* each request can be examined later.
*
* @param {Object} activeXhrs - an object that tracks all XHR requests
* @param {WebWorker} decryptionWorker - a WebWorker interface to AES-128 decryption
* routines
* @param {Function} trackInfoFn - a callback that receives track info
* @param {Function} timingInfoFn - a callback that receives timing info
* @param {Function} id3Fn - a callback that receives ID3 metadata
* @param {Function} captionsFn - a callback that receives captions
* @param {Function} dataFn - a callback that is executed when segment bytes are available
* and ready to use
* @param {Function} doneFn - a callback that is executed after all resources have been
* downloaded and any decryption completed
*/
const waitForCompletion = ({
activeXhrs,
decryptionWorker,
trackInfoFn,
timingInfoFn,
videoSegmentTimingInfoFn,
id3Fn,
captionsFn,
dataFn,
doneFn
}) => {
let count = 0;
let didError = false;
return (error, segment) => {
if (didError) {
return;
}
if (error) {
didError = true;
// If there are errors, we have to abort any outstanding requests
abortAll(activeXhrs);
// Even though the requests above are aborted, and in theory we could wait until we
// handle the aborted events from those requests, there are some cases where we may
// never get an aborted event. For instance, if the network connection is lost and
// there were two requests, the first may have triggered an error immediately, while
// the second request remains unsent. In that case, the aborted algorithm will not
// trigger an abort: see https://xhr.spec.whatwg.org/#the-abort()-method
//
// We also can't rely on the ready state of the XHR, since the request that
// triggered the connection error may also show as a ready state of 0 (unsent).
// Therefore, we have to finish this group of requests immediately after the first
// seen error.
return doneFn(error, segment);
}
count += 1;
if (count === activeXhrs.length) {
// Keep track of when *all* of the requests have completed
segment.endOfAllRequests = Date.now();
if (segment.encryptedBytes) {
return decryptSegment({
decryptionWorker,
segment,
trackInfoFn,
timingInfoFn,
videoSegmentTimingInfoFn,
id3Fn,
captionsFn,
dataFn,
doneFn
});
}
// Otherwise, everything is ready, so just continue
handleSegmentBytes({
segment,
bytes: segment.bytes,
isPartial: false,
trackInfoFn,
timingInfoFn,
videoSegmentTimingInfoFn,
id3Fn,
captionsFn,
dataFn,
doneFn
});
}
};
};
/**
* Calls the abort callback if any request within the batch was aborted. Will only call
* the callback once per batch of requests, even if multiple were aborted.
*
* @param {Object} loadendState - state to check to see if the abort function was called
* @param {Function} abortFn - callback to call for abort
*/
const handleLoadEnd = ({ loadendState, abortFn }) => (event) => {
const request = event.target;
if (request.aborted && abortFn && !loadendState.calledAbortFn) {
abortFn();
loadendState.calledAbortFn = true;
}
};
/**
* Simple progress event callback handler that gathers some stats before
* executing a provided callback with the `segment` object
*
* @param {Object} segment - a simplified copy of the segmentInfo object
* from SegmentLoader
* @param {Function} progressFn - a callback that is executed each time a progress event
* is received
* @param {Function} trackInfoFn - a callback that receives track info
* @param {Function} dataFn - a callback that is executed when segment bytes are available
* and ready to use
* @param {Event} event - the progress event object from XMLHttpRequest
*/
const handleProgress = ({
segment,
progressFn,
trackInfoFn,
timingInfoFn,
videoSegmentTimingInfoFn,
id3Fn,
captionsFn,
dataFn,
handlePartialData
}) => (event) => {
const request = event.target;
if (request.aborted) {
return;
}
// don't support encrypted segments or fmp4 for now
if (
handlePartialData &&
!segment.key &&
// although responseText "should" exist, this guard serves to prevent an error being
// thrown on the next check for two primary cases:
// 1. the mime type override stops working, or is not implemented for a specific
// browser
// 2. when using mock XHR libraries like sinon that do not allow the override behavior
request.responseText &&
// in order to determine if it's an fmp4 we need at least 8 bytes
request.responseText.length >= 8
) {
const newBytes = stringToArrayBuffer(request.responseText.substring(segment.lastReachedChar || 0));
if (segment.lastReachedChar || !isLikelyFmp4MediaSegment(new Uint8Array(newBytes))) {
segment.lastReachedChar = request.responseText.length;
handleSegmentBytes({
segment,
bytes: newBytes,
isPartial: true,
trackInfoFn,
timingInfoFn,
videoSegmentTimingInfoFn,
id3Fn,
captionsFn,
dataFn
});
}
}
segment.stats = videojs.mergeOptions(segment.stats, getProgressStats(event));
// record the time that we receive the first byte of data
if (!segment.stats.firstBytesReceivedAt && segment.stats.bytesReceived) {
segment.stats.firstBytesReceivedAt = Date.now();
}
return progressFn(event, segment);
};
/**
* Loads all resources and does any processing necessary for a media-segment
*
* Features:
* decrypts the media-segment if it has a key uri and an iv
* aborts *all* requests if *any* one request fails
*
* The segment object, at minimum, has the following format:
* {
* resolvedUri: String,
* [transmuxer]: Object,
* [byterange]: {
* offset: Number,
* length: Number
* },
* [key]: {
* resolvedUri: String
* [byterange]: {
* offset: Number,
* length: Number
* },
* iv: {
* bytes: Uint32Array
* }
* },
* [map]: {
* resolvedUri: String,
* [byterange]: {
* offset: Number,
* length: Number
* },
* [bytes]: Uint8Array
* }
* }
* ...where [name] denotes optional properties
*
* @param {Function} xhr - an instance of the xhr wrapper in xhr.js
* @param {Object} xhrOptions - the base options to provide to all xhr requests
* @param {WebWorker} decryptionWorker - a WebWorker interface to AES-128
* decryption routines
* @param {Object} segment - a simplified copy of the segmentInfo object
* from SegmentLoader
* @param {Function} abortFn - a callback called (only once) if any piece of a request was
* aborted
* @param {Function} progressFn - a callback that receives progress events from the main
* segment's xhr request
* @param {Function} trackInfoFn - a callback that receives track info
* @param {Function} id3Fn - a callback that receives ID3 metadata
* @param {Function} captionsFn - a callback that receives captions
* @param {Function} dataFn - a callback that receives data from the main segment's xhr
* request, transmuxed if needed
* @param {Function} doneFn - a callback that is executed only once all requests have
* succeeded or failed
* @return {Function} a function that, when invoked, immediately aborts all
* outstanding requests
*/
export const mediaSegmentRequest = ({
xhr,
xhrOptions,
decryptionWorker,
segment,
abortFn,
progressFn,
trackInfoFn,
timingInfoFn,
videoSegmentTimingInfoFn,
id3Fn,
captionsFn,
dataFn,
doneFn,
handlePartialData
}) => {
const activeXhrs = [];
const finishProcessingFn = waitForCompletion({
activeXhrs,
decryptionWorker,
trackInfoFn,
timingInfoFn,
videoSegmentTimingInfoFn,
id3Fn,
captionsFn,
dataFn,
doneFn
});
// optionally, request the decryption key
if (segment.key && !segment.key.bytes) {
const keyRequestOptions = videojs.mergeOptions(xhrOptions, {
uri: segment.key.resolvedUri,
responseType: 'arraybuffer'
});
const keyRequestCallback = handleKeyResponse(segment, finishProcessingFn);
const keyXhr = xhr(keyRequestOptions, keyRequestCallback);
activeXhrs.push(keyXhr);
}
// optionally, request the associated media init segment
if (segment.map && !segment.map.bytes) {
const initSegmentOptions = videojs.mergeOptions(xhrOptions, {
uri: segment.map.resolvedUri,
responseType: 'arraybuffer',
headers: segmentXhrHeaders(segment.map)
});
const initSegmentRequestCallback = handleInitSegmentResponse({
segment,
finishProcessingFn
});
const initSegmentXhr = xhr(initSegmentOptions, initSegmentRequestCallback);
activeXhrs.push(initSegmentXhr);
}
const segmentRequestOptions = videojs.mergeOptions(xhrOptions, {
uri: segment.resolvedUri,
responseType: 'arraybuffer',
headers: segmentXhrHeaders(segment)
});
if (handlePartialData) {
// setting to text is required for partial responses
// conversion to ArrayBuffer happens later
segmentRequestOptions.responseType = 'text';
segmentRequestOptions.beforeSend = (xhrObject) => {
// XHR binary charset opt by Marcus Granado 2006 [http://mgran.blogspot.com]
// makes the browser pass through the "text" unparsed
xhrObject.overrideMimeType('text/plain; charset=x-user-defined');
};
}
const segmentRequestCallback = handleSegmentResponse({
segment,
finishProcessingFn,
responseType: segmentRequestOptions.responseType
});
const segmentXhr = xhr(segmentRequestOptions, segmentRequestCallback);
segmentXhr.addEventListener(
'progress',
handleProgress({
segment,
progressFn,
trackInfoFn,
timingInfoFn,
videoSegmentTimingInfoFn,
id3Fn,
captionsFn,
dataFn,
handlePartialData
})
);
activeXhrs.push(segmentXhr);
// since all parts of the request must be considered, but callbacks should
// not be made multiple times, provide a shared state object
const loadendState = {};
activeXhrs.forEach((activeXhr) => {
activeXhr.addEventListener(
'loadend',
handleLoadEnd({ loadendState, abortFn })
);
});
return () => abortAll(activeXhrs);
};
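// Illustrative usage sketch (callback bodies and option values hypothetical;
// `xhr` is the wrapper from xhr.js and `decryptionWorker` an AES-128 worker):
//
//   const abort = mediaSegmentRequest({
//     xhr,
//     xhrOptions: { timeout: 45000 },
//     decryptionWorker,
//     segment,
//     abortFn: () => {},
//     progressFn: (event, seg) => {},
//     trackInfoFn: (seg, info) => {},
//     timingInfoFn: (seg, mediaType, timeType, time) => {},
//     videoSegmentTimingInfoFn: (info) => {},
//     id3Fn: (seg, frames, dispatchType) => {},
//     captionsFn: (seg, captions) => {},
//     dataFn: (seg, data) => {},
//     doneFn: (error, seg, result) => {}
//   });
//   // later, e.g. on a rendition switch, cancel all in-flight requests:
//   abort();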


@@ -1,634 +0,0 @@
/**
* @file playback-watcher.js
*
* Playback starts, and now my watch begins. It shall not end until my death. I shall
* take no wait, hold no uncleared timeouts, father no bad seeks. I shall wear no crowns
* and win no glory. I shall live and die at my post. I am the corrector of the underflow.
* I am the watcher of gaps. I am the shield that guards the realms of seekable. I pledge
* my life and honor to the Playback Watch, for this Player and all the Players to come.
*/
import window from 'global/window';
import * as Ranges from './ranges';
import logger from './util/logger';
import videojs from 'video.js';
// Set of events that reset the playback-watcher time check logic and clear the timeout
const timerCancelEvents = [
'seeking',
'seeked',
'pause',
'playing',
'error'
];
/**
* Returns whether or not the current time should be considered close to buffered content,
* taking into consideration whether there's enough buffered content for proper playback.
*
* @param {Object} options
* Options object
* @param {TimeRange} options.buffered
* Current buffer
* @param {number} options.targetDuration
* The active playlist's target duration
* @param {number} options.currentTime
* The current time of the player
* @return {boolean}
* Whether the current time should be considered close to the buffer
*/
export const closeToBufferedContent = ({ buffered, targetDuration, currentTime }) => {
if (!buffered.length) {
return false;
}
// At least two to three segments worth of content should be buffered before there's a
// full enough buffer to consider taking any actions.
if (buffered.end(0) - buffered.start(0) < targetDuration * 2) {
return false;
}
// It's possible that, on seek, a remove hasn't completed and the buffered range is
// somewhere past the current time. In that event, don't consider the buffered content
// close.
if (currentTime > buffered.start(0)) {
return false;
}
// Since target duration generally represents the max (or close to max) duration of a
// segment, if the buffer is within a segment of the current time, the gap probably
// won't be closed, and current time should be considered close to buffered content.
return buffered.start(0) - currentTime < targetDuration;
};
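// Illustrative worked example (values hypothetical, assuming video.js's
// createTimeRanges helper): with a buffer of [30, 55], a target duration of
// 10, and a current time of 22:
//   - buffered.end(0) - buffered.start(0) === 25, at least two segments' worth
//   - currentTime (22) is not past buffered.start(0) (30)
//   - buffered.start(0) - currentTime === 8, which is < 10
//
//   closeToBufferedContent({
//     buffered: videojs.createTimeRanges([[30, 55]]),
//     targetDuration: 10,
//     currentTime: 22
//   }); // => true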
/**
* @class PlaybackWatcher
*/
export default class PlaybackWatcher {
/**
* Represents a PlaybackWatcher object.
*
* @class
* @param {Object} options an object that includes the tech and settings
*/
constructor(options) {
this.masterPlaylistController_ = options.masterPlaylistController;
this.tech_ = options.tech;
this.seekable = options.seekable;
this.allowSeeksWithinUnsafeLiveWindow = options.allowSeeksWithinUnsafeLiveWindow;
this.media = options.media;
this.consecutiveUpdates = 0;
this.lastRecordedTime = null;
this.timer_ = null;
this.checkCurrentTimeTimeout_ = null;
this.logger_ = logger('PlaybackWatcher');
this.logger_('initialize');
const canPlayHandler = () => this.monitorCurrentTime_();
const waitingHandler = () => this.techWaiting_();
const cancelTimerHandler = () => this.cancelTimer_();
const fixesBadSeeksHandler = () => this.fixesBadSeeks_();
const mpc = this.masterPlaylistController_;
const loaderTypes = ['main', 'subtitle', 'audio'];
const loaderChecks = {};
loaderTypes.forEach((type) => {
loaderChecks[type] = {
reset: () => this.resetSegmentDownloads_(type),
updateend: () => this.checkSegmentDownloads_(type)
};
mpc[`${type}SegmentLoader_`].on('appendsdone', loaderChecks[type].updateend);
// If a rendition switch happens during a playback stall where the buffer
// isn't changing we want to reset. We cannot assume that the new rendition
// will also be stalled, until after new appends.
mpc[`${type}SegmentLoader_`].on('playlistupdate', loaderChecks[type].reset);
// Playback stalls should not be detected right after seeking.
// This prevents one segment playlists (single vtt or single segment content)
// from being detected as stalling. As the buffer will not change in those cases, since
// the buffer is the entire video duration.
this.tech_.on(['seeked', 'seeking'], loaderChecks[type].reset);
});
this.tech_.on('seekablechanged', fixesBadSeeksHandler);
this.tech_.on('waiting', waitingHandler);
this.tech_.on(timerCancelEvents, cancelTimerHandler);
this.tech_.on('canplay', canPlayHandler);
// Define the dispose function to clean up our events
this.dispose = () => {
this.logger_('dispose');
this.tech_.off('seekablechanged', fixesBadSeeksHandler);
this.tech_.off('waiting', waitingHandler);
this.tech_.off(timerCancelEvents, cancelTimerHandler);
this.tech_.off('canplay', canPlayHandler);
loaderTypes.forEach((type) => {
mpc[`${type}SegmentLoader_`].off('appendsdone', loaderChecks[type].updateend);
mpc[`${type}SegmentLoader_`].off('playlistupdate', loaderChecks[type].reset);
this.tech_.off(['seeked', 'seeking'], loaderChecks[type].reset);
});
if (this.checkCurrentTimeTimeout_) {
window.clearTimeout(this.checkCurrentTimeTimeout_);
}
this.cancelTimer_();
};
}
/**
* Periodically check current time to see if playback stopped
*
* @private
*/
monitorCurrentTime_() {
this.checkCurrentTime_();
if (this.checkCurrentTimeTimeout_) {
window.clearTimeout(this.checkCurrentTimeTimeout_);
}
// 42ms would be one frame at 24 fps; 250ms is what WebKit uses; Firefox uses 15ms
this.checkCurrentTimeTimeout_ =
window.setTimeout(this.monitorCurrentTime_.bind(this), 250);
}
/**
* Reset stalled download stats for a specific type of loader
*
* @param {string} type
* The segment loader type to check.
*
* @listens SegmentLoader#playlistupdate
* @listens Tech#seeking
* @listens Tech#seeked
*/
resetSegmentDownloads_(type) {
const loader = this.masterPlaylistController_[`${type}SegmentLoader_`];
if (this[`${type}StalledDownloads_`] > 0) {
this.logger_(`resetting possible stalled download count for ${type} loader`);
}
this[`${type}StalledDownloads_`] = 0;
this[`${type}Buffered_`] = loader.buffered_();
}
/**
* Checks on every segment `appendsdone` to see
* if segment appends are making progress. If they are not,
* and we are still downloading bytes, we blacklist the playlist.
*
* @param {string} type
* The segment loader type to check.
*
* @listens SegmentLoader#appendsdone
*/
checkSegmentDownloads_(type) {
const mpc = this.masterPlaylistController_;
const loader = mpc[`${type}SegmentLoader_`];
const buffered = loader.buffered_();
const isBufferedDifferent = Ranges.isRangeDifferent(this[`${type}Buffered_`], buffered);
this[`${type}Buffered_`] = buffered;
// if another watcher is going to fix the issue, or the buffered value
// for this loader changed, then appends are working
if (isBufferedDifferent) {
this.resetSegmentDownloads_(type);
return;
}
this[`${type}StalledDownloads_`]++;
this.logger_(`found #${this[`${type}StalledDownloads_`]} ${type} appends that did not increase buffer (possible stalled download)`, {
playlistId: loader.playlist_ && loader.playlist_.id,
buffered: Ranges.timeRangesToArray(buffered)
});
// after 10 possibly stalled appends with no reset, exclude
if (this[`${type}StalledDownloads_`] < 10) {
return;
}
this.logger_(`${type} loader stalled download exclusion`);
this.resetSegmentDownloads_(type);
this.tech_.trigger({type: 'usage', name: `vhs-${type}-download-exclusion`});
if (type === 'subtitle') {
// TODO: Is there anything else that we can do here?
// removing the track and disabling could have accessibility implications.
const track = loader.track();
const label = track.label || track.language || 'Unknown';
videojs.log.warn(`Text track "${label}" is not working correctly. It will be disabled and excluded.`);
track.mode = 'disabled';
this.tech_.textTracks().removeTrack(track);
return;
}
// TODO: should we exclude audio tracks rather than main tracks
// when type is audio?
mpc.blacklistCurrentPlaylist({
message: `Excessive ${type} segment downloading detected.`
}, Infinity);
}
/**
* The purpose of this function is to emulate the "waiting" event on
* browsers that do not emit it when they are waiting for more
* data to continue playback
*
* @private
*/
checkCurrentTime_() {
if (this.tech_.seeking() && this.fixesBadSeeks_()) {
this.consecutiveUpdates = 0;
this.lastRecordedTime = this.tech_.currentTime();
return;
}
if (this.tech_.paused() || this.tech_.seeking()) {
return;
}
const currentTime = this.tech_.currentTime();
const buffered = this.tech_.buffered();
if (this.lastRecordedTime === currentTime &&
(!buffered.length ||
currentTime + Ranges.SAFE_TIME_DELTA >= buffered.end(buffered.length - 1))) {
// If current time is at the end of the final buffered region, then any playback
// stall is most likely caused by buffering in a low bandwidth environment. The tech
// should fire a `waiting` event in this scenario, but due to browser and tech
// inconsistencies it may not. Calling `techWaiting_` here allows us to simulate
// responding to a native `waiting` event when the tech fails to emit one.
return this.techWaiting_();
}
if (this.consecutiveUpdates >= 5 &&
currentTime === this.lastRecordedTime) {
this.consecutiveUpdates++;
this.waiting_();
} else if (currentTime === this.lastRecordedTime) {
this.consecutiveUpdates++;
} else {
this.consecutiveUpdates = 0;
this.lastRecordedTime = currentTime;
}
}
/**
* Cancels any pending timers and resets the 'timeupdate' mechanism
* designed to detect that we are stalled
*
* @private
*/
cancelTimer_() {
this.consecutiveUpdates = 0;
if (this.timer_) {
this.logger_('cancelTimer_');
clearTimeout(this.timer_);
}
this.timer_ = null;
}
/**
* Fixes situations where there's a bad seek
*
* @return {boolean} whether an action was taken to fix the seek
* @private
*/
fixesBadSeeks_() {
const seeking = this.tech_.seeking();
if (!seeking) {
return false;
}
const seekable = this.seekable();
const currentTime = this.tech_.currentTime();
const isAfterSeekableRange = this.afterSeekableWindow_(
seekable,
currentTime,
this.media(),
this.allowSeeksWithinUnsafeLiveWindow
);
let seekTo;
if (isAfterSeekableRange) {
const seekableEnd = seekable.end(seekable.length - 1);
// sync to live point (if VOD, our seekable was updated and we're simply adjusting)
seekTo = seekableEnd;
}
if (this.beforeSeekableWindow_(seekable, currentTime)) {
const seekableStart = seekable.start(0);
// sync to the beginning of the live window
// provide a buffer of .1 seconds to handle rounding/imprecise numbers
seekTo = seekableStart +
// if the playlist is too short and the seekable range is an exact time (can
// happen in live with a 3 segment playlist), then don't use a time delta
(seekableStart === seekable.end(0) ? 0 : Ranges.SAFE_TIME_DELTA);
}
if (typeof seekTo !== 'undefined') {
this.logger_(`Trying to seek outside of seekable at time ${currentTime} with ` +
`seekable range ${Ranges.printableRange(seekable)}. Seeking to ` +
`${seekTo}.`);
this.tech_.setCurrentTime(seekTo);
return true;
}
const buffered = this.tech_.buffered();
if (
closeToBufferedContent({
buffered,
targetDuration: this.media().targetDuration,
currentTime
})
) {
seekTo = buffered.start(0) + Ranges.SAFE_TIME_DELTA;
this.logger_(`Buffered region starts (${buffered.start(0)}) ` +
`just beyond seek point (${currentTime}). Seeking to ${seekTo}.`);
this.tech_.setCurrentTime(seekTo);
return true;
}
return false;
}
/**
* Handler for situations when we determine the player is waiting.
*
* @private
*/
waiting_() {
if (this.techWaiting_()) {
return;
}
// All tech waiting checks failed. Use last resort correction
const currentTime = this.tech_.currentTime();
const buffered = this.tech_.buffered();
const currentRange = Ranges.findRange(buffered, currentTime);
// Sometimes the player can stall for unknown reasons within a contiguous buffered
// region with no indication that anything is amiss (seen in Firefox). Seeking to
// currentTime is usually enough to kickstart the player. This checks that the player
// is currently within a buffered region before attempting a corrective seek.
// Chrome does not appear to continue `timeupdate` events after a `waiting` event
// until there is ~ 3 seconds of forward buffer available. PlaybackWatcher should also
// make sure there is ~3 seconds of forward buffer before taking any corrective action
// to avoid triggering an `unknownwaiting` event when the network is slow.
if (currentRange.length && currentTime + 3 <= currentRange.end(0)) {
this.cancelTimer_();
this.tech_.setCurrentTime(currentTime);
this.logger_(`Stopped at ${currentTime} while inside a buffered region ` +
`[${currentRange.start(0)} -> ${currentRange.end(0)}]. Attempting to resume ` +
'playback by seeking to the current time.');
// unknown waiting corrections may be useful for monitoring QoS
this.tech_.trigger({type: 'usage', name: 'vhs-unknown-waiting'});
this.tech_.trigger({type: 'usage', name: 'hls-unknown-waiting'});
return;
}
}
/**
* Handler for situations when the tech fires a `waiting` event
*
* @return {boolean}
* True if an action was taken (or none was needed) to correct the waiting.
* False if no checks passed
* @private
*/
techWaiting_() {
const seekable = this.seekable();
const currentTime = this.tech_.currentTime();
if (this.tech_.seeking() && this.fixesBadSeeks_()) {
// Tech is seeking or bad seek fixed, no action needed
return true;
}
if (this.tech_.seeking() || this.timer_ !== null) {
// Tech is seeking or already waiting on another action, no action needed
return true;
}
if (this.beforeSeekableWindow_(seekable, currentTime)) {
const livePoint = seekable.end(seekable.length - 1);
this.logger_(`Fell out of live window at time ${currentTime}. Seeking to ` +
`live point (seekable end) ${livePoint}`);
this.cancelTimer_();
this.tech_.setCurrentTime(livePoint);
// live window resyncs may be useful for monitoring QoS
this.tech_.trigger({type: 'usage', name: 'vhs-live-resync'});
this.tech_.trigger({type: 'usage', name: 'hls-live-resync'});
return true;
}
const sourceUpdater = this.tech_.vhs.masterPlaylistController_.sourceUpdater_;
const buffered = this.tech_.buffered();
const videoUnderflow = this.videoUnderflow_({
audioBuffered: sourceUpdater.audioBuffered(),
videoBuffered: sourceUpdater.videoBuffered(),
currentTime
});
if (videoUnderflow) {
// Even though the video underflowed and was stuck in a gap, the audio overplayed
// the gap, leading currentTime into a buffered range. Seeking to currentTime
// allows the video to catch up to the audio position without losing any audio
// (only suffering ~3 seconds of frozen video and a pause in audio playback).
this.cancelTimer_();
this.tech_.setCurrentTime(currentTime);
// video underflow may be useful for monitoring QoS
this.tech_.trigger({type: 'usage', name: 'vhs-video-underflow'});
this.tech_.trigger({type: 'usage', name: 'hls-video-underflow'});
return true;
}
const nextRange = Ranges.findNextRange(buffered, currentTime);
// check for gap
if (nextRange.length > 0) {
const difference = nextRange.start(0) - currentTime;
this.logger_(`Stopped at ${currentTime}, setting timer for ${difference}, seeking ` +
`to ${nextRange.start(0)}`);
this.cancelTimer_();
this.timer_ = setTimeout(
this.skipTheGap_.bind(this),
difference * 1000,
currentTime
);
return true;
}
// All checks failed. Returning false to indicate failure to correct waiting
return false;
}
afterSeekableWindow_(seekable, currentTime, playlist, allowSeeksWithinUnsafeLiveWindow = false) {
if (!seekable.length) {
// we can't make a solid case if there's no seekable, default to false
return false;
}
let allowedEnd = seekable.end(seekable.length - 1) + Ranges.SAFE_TIME_DELTA;
const isLive = !playlist.endList;
if (isLive && allowSeeksWithinUnsafeLiveWindow) {
allowedEnd = seekable.end(seekable.length - 1) + (playlist.targetDuration * 3);
}
if (currentTime > allowedEnd) {
return true;
}
return false;
}
beforeSeekableWindow_(seekable, currentTime) {
if (seekable.length &&
// can't fall before 0, and a seekable start of 0 identifies a VOD stream
seekable.start(0) > 0 &&
currentTime < seekable.start(0) - Ranges.SAFE_TIME_DELTA) {
return true;
}
return false;
}
videoUnderflow_({videoBuffered, audioBuffered, currentTime}) {
// audio only content will not have video underflow :)
if (!videoBuffered) {
return;
}
let gap;
// find a gap in demuxed content.
if (videoBuffered.length && audioBuffered.length) {
// in Chrome audio will continue to play for ~3s when we run out of video
// so we have to check that the video buffer did have some buffer in the
// past.
const lastVideoRange = Ranges.findRange(videoBuffered, currentTime - 3);
const videoRange = Ranges.findRange(videoBuffered, currentTime);
const audioRange = Ranges.findRange(audioBuffered, currentTime);
if (audioRange.length && !videoRange.length && lastVideoRange.length) {
gap = {start: lastVideoRange.end(0), end: audioRange.end(0)};
}
// find a gap in muxed content.
} else {
const nextRange = Ranges.findNextRange(videoBuffered, currentTime);
// Even if there is no available next range, there is still a possibility we are
// stuck in a gap due to video underflow.
if (!nextRange.length) {
gap = this.gapFromVideoUnderflow_(videoBuffered, currentTime);
}
}
if (gap) {
this.logger_(`Encountered a gap in video from ${gap.start} to ${gap.end}. ` +
`Seeking to current time ${currentTime}`);
return true;
}
return false;
}
/**
* Timer callback. If playback still has not proceeded, then we seek
* to the start of the next buffered region.
*
* @private
*/
skipTheGap_(scheduledCurrentTime) {
const buffered = this.tech_.buffered();
const currentTime = this.tech_.currentTime();
const nextRange = Ranges.findNextRange(buffered, currentTime);
this.cancelTimer_();
if (nextRange.length === 0 ||
currentTime !== scheduledCurrentTime) {
return;
}
this.logger_(
'skipTheGap_:',
'currentTime:', currentTime,
'scheduled currentTime:', scheduledCurrentTime,
'nextRange start:', nextRange.start(0)
);
// only seek if we still have not played
this.tech_.setCurrentTime(nextRange.start(0) + Ranges.TIME_FUDGE_FACTOR);
this.tech_.trigger({type: 'usage', name: 'vhs-gap-skip'});
this.tech_.trigger({type: 'usage', name: 'hls-gap-skip'});
}
gapFromVideoUnderflow_(buffered, currentTime) {
// At least in Chrome, if there is a gap in the video buffer, the audio will continue
// playing for ~3 seconds after the video gap starts. This is done to account for
// video buffer underflow/underrun (note that this is not done when there is audio
// buffer underflow/underrun -- in that case the video will stop as soon as it
// encounters the gap, as audio stalls are more noticeable/jarring to a user than
// video stalls). The player's time will reflect the playthrough of audio, so the
// time will appear as if we are in a buffered region, even if we are stuck in a
// "gap."
//
// Example:
// video buffer: 0 => 10.1, 10.2 => 20
// audio buffer: 0 => 20
// overall buffer: 0 => 10.1, 10.2 => 20
// current time: 13
//
// Chrome's video froze at 10 seconds, where the video buffer encountered the gap,
// however, the audio continued playing until it reached ~3 seconds past the gap
// (13 seconds), at which point it stops as well. Since current time is past the
// gap, findNextRange will return no ranges.
//
// To check for this issue, we see if there is a gap that starts somewhere within
// a 3 second range (3 seconds +/- 1 second) back from our current time.
const gaps = Ranges.findGaps(buffered);
for (let i = 0; i < gaps.length; i++) {
const start = gaps.start(i);
const end = gaps.end(i);
// the gap starts between 2 and 4 seconds back from the current time
if (currentTime - start < 4 && currentTime - start > 2) {
return {
start,
end
};
}
}
return null;
}
}


@@ -1,623 +0,0 @@
/**
* @file playlist-loader.js
*
* A state machine that manages the loading, caching, and updating of
* M3U8 playlists.
*
*/
import { resolveUrl, resolveManifestRedirect } from './resolve-url';
import videojs from 'video.js';
import window from 'global/window';
import {
parseManifest,
addPropertiesToMaster,
masterForMedia,
setupMediaPlaylist
} from './manifest';
const { mergeOptions, EventTarget } = videojs;
/**
* Returns a new array of segments that is the result of merging
* properties from an older list of segments onto an updated
* list. No properties on the updated playlist will be overridden.
*
* @param {Array} original the outdated list of segments
* @param {Array} update the updated list of segments
* @param {number=} offset the index of the first update
* segment in the original segment list. For non-live playlists,
* this should always be zero and does not need to be
* specified. For live playlists, it should be the difference
* between the media sequence numbers in the original and updated
* playlists.
* @return a list of merged segment objects
*/
export const updateSegments = (original, update, offset) => {
const result = update.slice();
offset = offset || 0;
const length = Math.min(original.length, update.length + offset);
for (let i = offset; i < length; i++) {
result[i - offset] = mergeOptions(original[i], result[i - offset]);
}
return result;
};
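// Illustrative worked example (segments hypothetical): merging a live refresh
// whose media sequence advanced by one, so the update overlaps the original
// by a single segment:
//
//   const original = [{ uri: 's0.ts', duration: 10 }, { uri: 's1.ts', duration: 10 }];
//   const update = [{ uri: 's1.ts' }, { uri: 's2.ts' }];
//   updateSegments(original, update, 1);
//   // => [{ uri: 's1.ts', duration: 10 }, { uri: 's2.ts' }]
//   // s1.ts keeps its previously-known duration; s2.ts is brand new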
export const resolveSegmentUris = (segment, baseUri) => {
if (!segment.resolvedUri) {
segment.resolvedUri = resolveUrl(baseUri, segment.uri);
}
if (segment.key && !segment.key.resolvedUri) {
segment.key.resolvedUri = resolveUrl(baseUri, segment.key.uri);
}
if (segment.map && !segment.map.resolvedUri) {
segment.map.resolvedUri = resolveUrl(baseUri, segment.map.uri);
}
};
/**
* Returns a new master playlist that is the result of merging an
* updated media playlist into the original version. If the
* updated media playlist does not match any of the playlist
* entries in the original master playlist, null is returned.
*
* @param {Object} master a parsed master M3U8 object
* @param {Object} media a parsed media M3U8 object
* @return {Object} a new object that represents the original
* master playlist with the updated media playlist merged in, or
* null if the merge produced no change.
*/
export const updateMaster = (master, media) => {
const result = mergeOptions(master, {});
const playlist = result.playlists[media.id];
if (!playlist) {
return null;
}
// consider the playlist unchanged if the number of segments is equal, the media
// sequence number is unchanged, and this playlist hasn't become the end of the playlist
if (playlist.segments &&
media.segments &&
playlist.segments.length === media.segments.length &&
playlist.endList === media.endList &&
playlist.mediaSequence === media.mediaSequence) {
return null;
}
const mergedPlaylist = mergeOptions(playlist, media);
// if the update could overlap existing segment information, merge the two segment lists
if (playlist.segments) {
mergedPlaylist.segments = updateSegments(
playlist.segments,
media.segments,
media.mediaSequence - playlist.mediaSequence
);
}
// resolve any segment URIs to prevent us from having to do it later
mergedPlaylist.segments.forEach((segment) => {
resolveSegmentUris(segment, mergedPlaylist.resolvedUri);
});
// TODO Right now in the playlists array there are two references to each playlist, one
// that is referenced by index, and one by URI. The index reference may no longer be
// necessary.
for (let i = 0; i < result.playlists.length; i++) {
if (result.playlists[i].id === media.id) {
result.playlists[i] = mergedPlaylist;
}
}
result.playlists[media.id] = mergedPlaylist;
// URI reference added for backwards compatibility
result.playlists[media.uri] = mergedPlaylist;
return result;
};
/**
* Calculates the time to wait before refreshing a live playlist
*
* @param {Object} media
* The current media
* @param {boolean} update
* True if there were any updates from the last refresh, false otherwise
* @return {number}
* The time in ms to wait before refreshing the live playlist
*/
export const refreshDelay = (media, update) => {
const lastSegment = media.segments[media.segments.length - 1];
let delay;
if (update && lastSegment && lastSegment.duration) {
delay = lastSegment.duration * 1000;
} else {
// if the playlist is unchanged since the last reload or last segment duration
// cannot be determined, try again after half the target duration
delay = (media.targetDuration || 10) * 500;
}
return delay;
};
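// Illustrative worked examples (playlist objects abbreviated): if the refresh
// produced an update and the last segment is 6 seconds long, wait 6000ms; if
// nothing changed, retry after half the target duration:
//
//   refreshDelay({ segments: [{ duration: 6 }], targetDuration: 10 }, true);  // => 6000
//   refreshDelay({ segments: [{ duration: 6 }], targetDuration: 10 }, false); // => 5000
//   refreshDelay({ segments: [{}] }, false); // => 5000 (defaults to a 10s target)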
/**
* Load a playlist from a remote location
*
* @class PlaylistLoader
* @extends Stream
* @param {string|Object} src url or object of manifest
* @param {boolean} withCredentials the withCredentials xhr option
* @class
*/
export default class PlaylistLoader extends EventTarget {
constructor(src, vhs, options = { }) {
super();
if (!src) {
throw new Error('A non-empty playlist URL or object is required');
}
const { withCredentials = false, handleManifestRedirects = false } = options;
this.src = src;
this.vhs_ = vhs;
this.withCredentials = withCredentials;
this.handleManifestRedirects = handleManifestRedirects;
const vhsOptions = vhs.options_;
this.customTagParsers = (vhsOptions && vhsOptions.customTagParsers) || [];
this.customTagMappers = (vhsOptions && vhsOptions.customTagMappers) || [];
// initialize the loader state
this.state = 'HAVE_NOTHING';
// live playlist staleness timeout
this.on('mediaupdatetimeout', () => {
if (this.state !== 'HAVE_METADATA') {
// only refresh the media playlist if no other activity is going on
return;
}
this.state = 'HAVE_CURRENT_METADATA';
this.request = this.vhs_.xhr({
uri: resolveUrl(this.master.uri, this.media().uri),
withCredentials: this.withCredentials
}, (error, req) => {
// disposed
if (!this.request) {
return;
}
if (error) {
return this.playlistRequestError(this.request, this.media(), 'HAVE_METADATA');
}
this.haveMetadata({
playlistString: this.request.responseText,
url: this.media().uri,
id: this.media().id
});
});
});
}
playlistRequestError(xhr, playlist, startingState) {
const {
uri,
id
} = playlist;
// any in-flight request is now finished
this.request = null;
if (startingState) {
this.state = startingState;
}
this.error = {
playlist: this.master.playlists[id],
status: xhr.status,
message: `HLS playlist request error at URL: ${uri}.`,
responseText: xhr.responseText,
code: (xhr.status >= 500) ? 4 : 2
};
this.trigger('error');
}
/**
* Update the playlist loader's state in response to a new or updated playlist.
*
* @param {string} [playlistString]
* Playlist string (if playlistObject is not provided)
* @param {Object} [playlistObject]
* Playlist object (if playlistString is not provided)
* @param {string} url
* URL of playlist
* @param {string} id
* ID to use for playlist
*/
haveMetadata({ playlistString, playlistObject, url, id }) {
// any in-flight request is now finished
this.request = null;
this.state = 'HAVE_METADATA';
const playlist = playlistObject || parseManifest({
manifestString: playlistString,
customTagParsers: this.customTagParsers,
customTagMappers: this.customTagMappers
});
setupMediaPlaylist({
playlist,
uri: url,
id
});
// merge this playlist into the master
const update = updateMaster(this.master, playlist);
this.targetDuration = playlist.targetDuration;
if (update) {
this.master = update;
this.media_ = this.master.playlists[id];
} else {
this.trigger('playlistunchanged');
}
// refresh live playlists after a target duration passes
if (!this.media().endList) {
window.clearTimeout(this.mediaUpdateTimeout);
this.mediaUpdateTimeout = window.setTimeout(() => {
this.trigger('mediaupdatetimeout');
}, refreshDelay(this.media(), !!update));
}
this.trigger('loadedplaylist');
}
/**
* Abort any outstanding work and clean up.
*/
dispose() {
this.trigger('dispose');
this.stopRequest();
window.clearTimeout(this.mediaUpdateTimeout);
window.clearTimeout(this.finalRenditionTimeout);
this.off();
}
stopRequest() {
if (this.request) {
const oldRequest = this.request;
this.request = null;
oldRequest.onreadystatechange = null;
oldRequest.abort();
}
}
/**
* When called without any arguments, returns the currently
* active media playlist. When called with a single argument,
* triggers the playlist loader to asynchronously switch to the
* specified media playlist. Calling this method while the
* loader is in the HAVE_NOTHING state causes an error to be emitted
* but otherwise has no effect.
*
* @param {Object=} playlist the parsed media playlist
* object to switch to
* @param {boolean=} isFinalRendition whether this is the last available playlist
*
* @return {Playlist} the current loaded media
*/
media(playlist, isFinalRendition) {
// getter
if (!playlist) {
return this.media_;
}
// setter
if (this.state === 'HAVE_NOTHING') {
throw new Error('Cannot switch media playlist from ' + this.state);
}
// find the playlist object if the target playlist has been
// specified by URI
if (typeof playlist === 'string') {
if (!this.master.playlists[playlist]) {
throw new Error('Unknown playlist URI: ' + playlist);
}
playlist = this.master.playlists[playlist];
}
window.clearTimeout(this.finalRenditionTimeout);
if (isFinalRendition) {
const delay = (playlist.targetDuration / 2) * 1000 || 5 * 1000;
this.finalRenditionTimeout =
window.setTimeout(this.media.bind(this, playlist, false), delay);
return;
}
const startingState = this.state;
const mediaChange = !this.media_ || playlist.id !== this.media_.id;
// switch to fully loaded playlists immediately
if (this.master.playlists[playlist.id].endList ||
// handle the case of a playlist object (e.g., if using vhs-json with a resolved
// media playlist or, for the case of demuxed audio, a resolved audio media group)
(playlist.endList && playlist.segments.length)) {
// abort outstanding playlist requests
if (this.request) {
this.request.onreadystatechange = null;
this.request.abort();
this.request = null;
}
this.state = 'HAVE_METADATA';
this.media_ = playlist;
// trigger media change if the active media has been updated
if (mediaChange) {
this.trigger('mediachanging');
if (startingState === 'HAVE_MASTER') {
// The initial playlist was a master manifest, and the first media selected was
// also provided (in the form of a resolved playlist object) as part of the
// source object (rather than just a URL). Therefore, since the media playlist
// doesn't need to be requested, loadedmetadata won't trigger as part of the
// normal flow, and needs an explicit trigger here.
this.trigger('loadedmetadata');
} else {
this.trigger('mediachange');
}
}
return;
}
// switching to the active playlist is a no-op
if (!mediaChange) {
return;
}
this.state = 'SWITCHING_MEDIA';
// there is already an outstanding playlist request
if (this.request) {
if (playlist.resolvedUri === this.request.url) {
// requesting to switch to the same playlist multiple times
// has no effect after the first
return;
}
this.request.onreadystatechange = null;
this.request.abort();
this.request = null;
}
// request the new playlist
if (this.media_) {
this.trigger('mediachanging');
}
this.request = this.vhs_.xhr({
uri: playlist.resolvedUri,
withCredentials: this.withCredentials
}, (error, req) => {
// disposed
if (!this.request) {
return;
}
playlist.resolvedUri = resolveManifestRedirect(this.handleManifestRedirects, playlist.resolvedUri, req);
if (error) {
return this.playlistRequestError(this.request, playlist, startingState);
}
this.haveMetadata({
playlistString: req.responseText,
url: playlist.uri,
id: playlist.id
});
// fire loadedmetadata the first time a media playlist is loaded
if (startingState === 'HAVE_MASTER') {
this.trigger('loadedmetadata');
} else {
this.trigger('mediachange');
}
});
}
/**
* pause loading of the playlist
*/
pause() {
this.stopRequest();
window.clearTimeout(this.mediaUpdateTimeout);
if (this.state === 'HAVE_NOTHING') {
// If we pause the loader before any data has been retrieved, it's as if we never
// started, so reset to an unstarted state.
this.started = false;
}
// Need to restore state now that no activity is happening
if (this.state === 'SWITCHING_MEDIA') {
// if the loader was in the process of switching media, it should either return to
// HAVE_MASTER or HAVE_METADATA depending on if the loader has loaded a media
// playlist yet. This is determined by the existence of loader.media_
if (this.media_) {
this.state = 'HAVE_METADATA';
} else {
this.state = 'HAVE_MASTER';
}
} else if (this.state === 'HAVE_CURRENT_METADATA') {
this.state = 'HAVE_METADATA';
}
}
/**
* start loading of the playlist
*/
load(isFinalRendition) {
window.clearTimeout(this.mediaUpdateTimeout);
const media = this.media();
if (isFinalRendition) {
const delay = media ? (media.targetDuration / 2) * 1000 : 5 * 1000;
this.mediaUpdateTimeout = window.setTimeout(() => this.load(), delay);
return;
}
if (!this.started) {
this.start();
return;
}
if (media && !media.endList) {
this.trigger('mediaupdatetimeout');
} else {
this.trigger('loadedplaylist');
}
}
/**
* start loading of the playlist
*/
start() {
this.started = true;
if (typeof this.src === 'object') {
// in the case of an entirely constructed manifest object (meaning there's no actual
// manifest on a server), default the uri to the page's href
if (!this.src.uri) {
this.src.uri = window.location.href;
}
// resolvedUri is added on internally after the initial request. Since there's no
// request for pre-resolved manifests, add on resolvedUri here.
this.src.resolvedUri = this.src.uri;
// Since a manifest object was passed in as the source (instead of a URL), the first
// request can be skipped (since the top level of the manifest, at a minimum, is
// already available as a parsed manifest object). However, if the manifest object
// represents a master playlist, some media playlists may need to be resolved before
// the starting segment list is available. Therefore, go directly to setup of the
// initial playlist, and let the normal flow continue from there.
//
// Note that the call to setup is asynchronous, as other sections of VHS may assume
// that the first request is asynchronous.
setTimeout(() => {
this.setupInitialPlaylist(this.src);
}, 0);
return;
}
// request the specified URL
this.request = this.vhs_.xhr({
uri: this.src,
withCredentials: this.withCredentials
}, (error, req) => {
// disposed
if (!this.request) {
return;
}
// clear the loader's request reference
this.request = null;
if (error) {
this.error = {
status: req.status,
message: `HLS playlist request error at URL: ${this.src}.`,
responseText: req.responseText,
// MEDIA_ERR_NETWORK
code: 2
};
if (this.state === 'HAVE_NOTHING') {
this.started = false;
}
return this.trigger('error');
}
this.src = resolveManifestRedirect(this.handleManifestRedirects, this.src, req);
const manifest = parseManifest({
manifestString: req.responseText,
customTagParsers: this.customTagParsers,
customTagMappers: this.customTagMappers
});
this.setupInitialPlaylist(manifest);
});
}
srcUri() {
return typeof this.src === 'string' ? this.src : this.src.uri;
}
/**
* Given a manifest object that's either a master or media playlist, trigger the proper
* events and set the state of the playlist loader.
*
* If the manifest object represents a master playlist, `loadedplaylist` will be
* triggered to allow listeners to select a playlist. If none is selected, the loader
* will default to the first one in the playlists array.
*
* If the manifest object represents a media playlist, `loadedplaylist` will be
* triggered followed by `loadedmetadata`, as the only available playlist is loaded.
*
* In the case of a media playlist, a master playlist object wrapper with one playlist
* will be created so that all logic can handle playlists in the same fashion (as an
* assumed manifest object schema).
*
* @param {Object} manifest
* The parsed manifest object
*/
setupInitialPlaylist(manifest) {
this.state = 'HAVE_MASTER';
if (manifest.playlists) {
this.master = manifest;
addPropertiesToMaster(this.master, this.srcUri());
// If the initial master playlist has playlists with segments already resolved,
// then resolve URIs in advance, as that is usually done after a playlist request,
// which may not happen if the playlist is already resolved.
manifest.playlists.forEach((playlist) => {
if (playlist.segments) {
playlist.segments.forEach((segment) => {
resolveSegmentUris(segment, playlist.resolvedUri);
});
}
});
this.trigger('loadedplaylist');
if (!this.request) {
// no media playlist was specifically selected so start
// from the first listed one
this.media(this.master.playlists[0]);
}
return;
}
// In order to support media playlists passed in as vhs-json, the case where the uri
// is not provided as part of the manifest should be considered, and an appropriate
// default used.
const uri = this.srcUri() || window.location.href;
this.master = masterForMedia(manifest, uri);
this.haveMetadata({
playlistObject: manifest,
url: uri,
id: this.master.playlists[0].id
});
this.trigger('loadedmetadata');
}
}

View File

@@ -1,489 +0,0 @@
import window from 'global/window';
import Config from './config';
import Playlist from './playlist';
import { codecsForPlaylist } from './util/codecs.js';
import logger from './util/logger';
const logFn = logger('PlaylistSelector');
const representationToString = function(representation) {
if (!representation || !representation.playlist) {
return;
}
const playlist = representation.playlist;
return JSON.stringify({
id: playlist.id,
bandwidth: representation.bandwidth,
width: representation.width,
height: representation.height,
codecs: playlist.attributes && playlist.attributes.CODECS || ''
});
};
// Utilities
/**
* Returns the CSS value for the specified property on an element
* using `getComputedStyle`. Firefox has a long-standing issue where
* getComputedStyle() may return null when running in an iframe with
* `display: none`.
*
* @see https://bugzilla.mozilla.org/show_bug.cgi?id=548397
* @param {HTMLElement} el the HTML element to work on
* @param {string} property the property to get the style for
*/
const safeGetComputedStyle = function(el, property) {
if (!el) {
return '';
}
const result = window.getComputedStyle(el);
if (!result) {
return '';
}
return result[property];
};
/**
* Reusable stable sort function; sorts the array in place while preserving
* the original relative order of items the comparator considers equal
*
* @param {Playlists} array the playlists to sort
* @param {Function} sortFn the comparator function
* @function stableSort
*/
const stableSort = function(array, sortFn) {
const newArray = array.slice();
array.sort(function(left, right) {
const cmp = sortFn(left, right);
if (cmp === 0) {
return newArray.indexOf(left) - newArray.indexOf(right);
}
return cmp;
});
};
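// Illustrative sketch (not part of the original module): because ties fall
// back to the elements' positions in the pre-sort copy, equal-bandwidth
// entries keep their manifest order:
//
//   const reps = [
//     { id: 'b', bandwidth: 100 },
//     { id: 'a', bandwidth: 100 },
//     { id: 'c', bandwidth: 50 }
//   ];
//   stableSort(reps, (left, right) => left.bandwidth - right.bandwidth);
//   // reps is now 'c' (50), then 'b' and 'a' in their original order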
/**
* A comparator function to sort two playlist objects by bandwidth.
*
* @param {Object} left a media playlist object
* @param {Object} right a media playlist object
* @return {number} Greater than zero if the bandwidth attribute of
* left is greater than the corresponding attribute of right. Less
* than zero if the bandwidth of right is greater than left and
* exactly zero if the two are equal.
*/
export const comparePlaylistBandwidth = function(left, right) {
let leftBandwidth;
let rightBandwidth;
if (left.attributes.BANDWIDTH) {
leftBandwidth = left.attributes.BANDWIDTH;
}
leftBandwidth = leftBandwidth || window.Number.MAX_VALUE;
if (right.attributes.BANDWIDTH) {
rightBandwidth = right.attributes.BANDWIDTH;
}
rightBandwidth = rightBandwidth || window.Number.MAX_VALUE;
return leftBandwidth - rightBandwidth;
};
/**
* A comparator function to sort two playlist objects by resolution (width).
*
* @param {Object} left a media playlist object
* @param {Object} right a media playlist object
* @return {number} Greater than zero if the resolution.width attribute of
* left is greater than the corresponding attribute of right. Less
* than zero if the resolution.width of right is greater than left and
* exactly zero if the two are equal.
*/
export const comparePlaylistResolution = function(left, right) {
let leftWidth;
let rightWidth;
if (left.attributes.RESOLUTION &&
left.attributes.RESOLUTION.width) {
leftWidth = left.attributes.RESOLUTION.width;
}
leftWidth = leftWidth || window.Number.MAX_VALUE;
if (right.attributes.RESOLUTION &&
right.attributes.RESOLUTION.width) {
rightWidth = right.attributes.RESOLUTION.width;
}
rightWidth = rightWidth || window.Number.MAX_VALUE;
// NOTE - Fall back to bandwidth sort as appropriate in cases where multiple renditions
// have the same media dimensions/resolution
if (leftWidth === rightWidth &&
left.attributes.BANDWIDTH &&
right.attributes.BANDWIDTH) {
return left.attributes.BANDWIDTH - right.attributes.BANDWIDTH;
}
return leftWidth - rightWidth;
};
/**
* Chooses the appropriate media playlist based on bandwidth and player size
*
* @param {Object} master
* Object representation of the master manifest
* @param {number} playerBandwidth
* Current calculated bandwidth of the player
* @param {number} playerWidth
* Current width of the player element (should account for the device pixel ratio)
* @param {number} playerHeight
* Current height of the player element (should account for the device pixel ratio)
* @param {boolean} limitRenditionByPlayerDimensions
* True if the player width and height should be used during the selection, false otherwise
* @return {Playlist} the highest bitrate playlist less than the
* currently detected bandwidth, accounting for some amount of
* bandwidth variance
*/
export const simpleSelector = function(
master,
playerBandwidth,
playerWidth,
playerHeight,
limitRenditionByPlayerDimensions
) {
const options = {
bandwidth: playerBandwidth,
width: playerWidth,
height: playerHeight,
limitRenditionByPlayerDimensions
};
// convert the playlists to an intermediary representation to make comparisons easier
let sortedPlaylistReps = master.playlists.map((playlist) => {
let bandwidth;
const width = playlist.attributes.RESOLUTION && playlist.attributes.RESOLUTION.width;
const height = playlist.attributes.RESOLUTION && playlist.attributes.RESOLUTION.height;
bandwidth = playlist.attributes.BANDWIDTH;
bandwidth = bandwidth || window.Number.MAX_VALUE;
return {
bandwidth,
width,
height,
playlist
};
});
stableSort(sortedPlaylistReps, (left, right) => left.bandwidth - right.bandwidth);
// filter out any playlists that have been excluded due to
// incompatible configurations
sortedPlaylistReps = sortedPlaylistReps.filter((rep) => !Playlist.isIncompatible(rep.playlist));
// filter out any playlists that have been disabled manually through the representations
// api or blacklisted temporarily due to playback errors.
let enabledPlaylistReps = sortedPlaylistReps.filter((rep) => Playlist.isEnabled(rep.playlist));
if (!enabledPlaylistReps.length) {
// if there are no enabled playlists, then they have all been blacklisted or disabled
// by the user through the representations api. In this case, ignore blacklisting and
// fallback to what the user wants by using playlists the user has not disabled.
enabledPlaylistReps = sortedPlaylistReps.filter((rep) => !Playlist.isDisabled(rep.playlist));
}
// filter out any variant that has greater effective bitrate
// than the current estimated bandwidth
const bandwidthPlaylistReps = enabledPlaylistReps.filter((rep) => rep.bandwidth * Config.BANDWIDTH_VARIANCE < playerBandwidth);
let highestRemainingBandwidthRep =
bandwidthPlaylistReps[bandwidthPlaylistReps.length - 1];
// get all of the renditions with the same (highest) bandwidth
// and then take the very first element
const bandwidthBestRep = bandwidthPlaylistReps.filter((rep) => rep.bandwidth === highestRemainingBandwidthRep.bandwidth)[0];
// if we're not going to limit renditions by player size, make an early decision.
if (limitRenditionByPlayerDimensions === false) {
const chosenRep = (
bandwidthBestRep ||
enabledPlaylistReps[0] ||
sortedPlaylistReps[0]
);
if (chosenRep && chosenRep.playlist) {
let type = 'sortedPlaylistReps';
if (bandwidthBestRep) {
type = 'bandwidthBestRep';
}
if (enabledPlaylistReps[0]) {
type = 'enabledPlaylistReps';
}
logFn(`choosing ${representationToString(chosenRep)} using ${type} with options`, options);
return chosenRep.playlist;
}
logFn('could not choose a playlist with options', options);
return null;
}
// filter out playlists without resolution information
const haveResolution = bandwidthPlaylistReps.filter((rep) => rep.width && rep.height);
// sort variants by resolution
stableSort(haveResolution, (left, right) => left.width - right.width);
// if we have the exact resolution as the player use it
const resolutionBestRepList = haveResolution.filter((rep) => rep.width === playerWidth && rep.height === playerHeight);
highestRemainingBandwidthRep = resolutionBestRepList[resolutionBestRepList.length - 1];
// ensure that we pick the highest bandwidth variant that has the exact resolution
const resolutionBestRep = resolutionBestRepList.filter((rep) => rep.bandwidth === highestRemainingBandwidthRep.bandwidth)[0];
let resolutionPlusOneList;
let resolutionPlusOneSmallest;
let resolutionPlusOneRep;
// find the smallest variant that is larger than the player
// if there is no match of exact resolution
if (!resolutionBestRep) {
resolutionPlusOneList = haveResolution.filter((rep) => rep.width > playerWidth || rep.height > playerHeight);
// find all the variants that have the same smallest resolution
resolutionPlusOneSmallest = resolutionPlusOneList.filter((rep) => rep.width === resolutionPlusOneList[0].width &&
rep.height === resolutionPlusOneList[0].height);
// ensure that we also pick the highest bandwidth variant that
// is just-larger-than the video player
highestRemainingBandwidthRep =
resolutionPlusOneSmallest[resolutionPlusOneSmallest.length - 1];
resolutionPlusOneRep = resolutionPlusOneSmallest.filter((rep) => rep.bandwidth === highestRemainingBandwidthRep.bandwidth)[0];
}
// fallback chain of variants
const chosenRep = (
resolutionPlusOneRep ||
resolutionBestRep ||
bandwidthBestRep ||
enabledPlaylistReps[0] ||
sortedPlaylistReps[0]
);
if (chosenRep && chosenRep.playlist) {
let type = 'sortedPlaylistReps';
if (resolutionPlusOneRep) {
type = 'resolutionPlusOneRep';
} else if (resolutionBestRep) {
type = 'resolutionBestRep';
} else if (bandwidthBestRep) {
type = 'bandwidthBestRep';
} else if (enabledPlaylistReps[0]) {
type = 'enabledPlaylistReps';
}
logFn(`choosing ${representationToString(chosenRep)} using ${type} with options`, options);
return chosenRep.playlist;
}
logFn('could not choose a playlist with options', options);
return null;
};
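// Usage sketch (hypothetical values): select a rendition for a 1280x720
// player measuring roughly 5 Mbps, letting player dimensions constrain the
// choice:
//
//   const playlist = simpleSelector(master, 5e6, 1280, 720, true);
//
// With limitRenditionByPlayerDimensions set to false, only the bandwidth
// filter applies: the highest rendition whose declared BANDWIDTH times
// Config.BANDWIDTH_VARIANCE is still below the measured bandwidth wins.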
// Playlist Selectors
/**
* Chooses the appropriate media playlist based on the most recent
* bandwidth estimate and the player size.
*
* Expects to be called within the context of an instance of VhsHandler
*
* @return {Playlist} the highest bitrate playlist less than the
* currently detected bandwidth, accounting for some amount of
* bandwidth variance
*/
export const lastBandwidthSelector = function() {
const pixelRatio = this.useDevicePixelRatio ? window.devicePixelRatio || 1 : 1;
return simpleSelector(
this.playlists.master,
this.systemBandwidth,
parseInt(safeGetComputedStyle(this.tech_.el(), 'width'), 10) * pixelRatio,
parseInt(safeGetComputedStyle(this.tech_.el(), 'height'), 10) * pixelRatio,
this.limitRenditionByPlayerDimensions
);
};
/**
* Chooses the appropriate media playlist based on an
* exponential-weighted moving average of the bandwidth after
* filtering for player size.
*
* Expects to be called within the context of an instance of VhsHandler
*
* @param {number} decay - a number between 0 and 1. Higher values of
* this parameter will cause previous bandwidth estimates to lose
* significance more quickly.
* @return {Function} a function which can be invoked to create a new
* playlist selector function.
* @see https://en.wikipedia.org/wiki/Moving_average#Exponential_moving_average
*/
export const movingAverageBandwidthSelector = function(decay) {
let average = -1;
if (decay < 0 || decay > 1) {
throw new Error('Moving average bandwidth decay must be between 0 and 1.');
}
return function() {
const pixelRatio = this.useDevicePixelRatio ? window.devicePixelRatio || 1 : 1;
if (average < 0) {
average = this.systemBandwidth;
}
average = decay * this.systemBandwidth + (1 - decay) * average;
return simpleSelector(
this.playlists.master,
average,
parseInt(safeGetComputedStyle(this.tech_.el(), 'width'), 10) * pixelRatio,
parseInt(safeGetComputedStyle(this.tech_.el(), 'height'), 10) * pixelRatio,
this.limitRenditionByPlayerDimensions
);
};
};
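// Worked sketch (illustrative numbers): with decay = 0.5 the estimate follows
// average = decay * systemBandwidth + (1 - decay) * average, so successive
// measurements of 4, 8 and 6 Mbps yield estimates of 4, 6 and 6 Mbps (the
// first call seeds the average with the raw measurement):
//
//   const selectPlaylist = movingAverageBandwidthSelector(0.5);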
/**
* Chooses the appropriate media playlist based on the potential to rebuffer
*
* @param {Object} settings
* Object of information required to use this selector
* @param {Object} settings.master
* Object representation of the master manifest
* @param {number} settings.currentTime
* The current time of the player
* @param {number} settings.bandwidth
* Current measured bandwidth
* @param {number} settings.duration
* Duration of the media
* @param {number} settings.segmentDuration
* Segment duration to be used in round trip time calculations
* @param {number} settings.timeUntilRebuffer
* Time left in seconds until the player has to rebuffer
* @param {number} settings.currentTimeline
* The current timeline segments are being loaded from
* @param {SyncController} settings.syncController
* SyncController for determining if we have a sync point for a given playlist
* @return {Object|null}
* {Object} return.playlist
* The highest bandwidth playlist with the least amount of rebuffering
* {Number} return.rebufferingImpact
* The amount of time in seconds switching to this playlist will rebuffer. A
* negative value means that switching will cause zero rebuffering.
*/
export const minRebufferMaxBandwidthSelector = function(settings) {
const {
master,
currentTime,
bandwidth,
duration,
segmentDuration,
timeUntilRebuffer,
currentTimeline,
syncController
} = settings;
// filter out any playlists that have been excluded due to
// incompatible configurations
const compatiblePlaylists = master.playlists.filter(playlist => !Playlist.isIncompatible(playlist));
// filter out any playlists that have been disabled manually through the representations
// api or blacklisted temporarily due to playback errors.
let enabledPlaylists = compatiblePlaylists.filter(Playlist.isEnabled);
if (!enabledPlaylists.length) {
// if there are no enabled playlists, then they have all been blacklisted or disabled
// by the user through the representations api. In this case, ignore blacklisting and
// fallback to what the user wants by using playlists the user has not disabled.
enabledPlaylists = compatiblePlaylists.filter(playlist => !Playlist.isDisabled(playlist));
}
const bandwidthPlaylists =
enabledPlaylists.filter(Playlist.hasAttribute.bind(null, 'BANDWIDTH'));
const rebufferingEstimates = bandwidthPlaylists.map((playlist) => {
const syncPoint = syncController.getSyncPoint(
playlist,
duration,
currentTimeline,
currentTime
);
// If there is no sync point for this playlist, switching to it will require a
// sync request first. This will double the request time
const numRequests = syncPoint ? 1 : 2;
const requestTimeEstimate = Playlist.estimateSegmentRequestTime(
segmentDuration,
bandwidth,
playlist
);
const rebufferingImpact = (requestTimeEstimate * numRequests) - timeUntilRebuffer;
return {
playlist,
rebufferingImpact
};
});
const noRebufferingPlaylists = rebufferingEstimates.filter((estimate) => estimate.rebufferingImpact <= 0);
// Sort by bandwidth DESC
stableSort(
noRebufferingPlaylists,
(a, b) => comparePlaylistBandwidth(b.playlist, a.playlist)
);
if (noRebufferingPlaylists.length) {
return noRebufferingPlaylists[0];
}
stableSort(rebufferingEstimates, (a, b) => a.rebufferingImpact - b.rebufferingImpact);
return rebufferingEstimates[0] || null;
};
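// Worked sketch (hypothetical numbers): with a 4-second segmentDuration, a
// playlist BANDWIDTH of 2,000,000 bits/s and a measured bandwidth of
// 4,000,000 bits/s, estimateSegmentRequestTime yields (4 * 2e6) / 4e6 = 2
// seconds per request. Without a sync point that doubles to 4 seconds, so
// with 3 seconds until rebuffer the rebufferingImpact is 4 - 3 = 1: switching
// would stall playback for about a second.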
/**
* Chooses the appropriate media playlist, which in this case is the lowest bitrate
* one with video. If no renditions with video exist, return the lowest audio rendition.
*
* Expects to be called within the context of an instance of VhsHandler
*
* @return {Object|null}
* {Object} return.playlist
* The lowest bitrate playlist that contains a video codec. If no such rendition
* exists pick the lowest audio rendition.
*/
export const lowestBitrateCompatibleVariantSelector = function() {
// filter out any playlists that have been excluded due to
// incompatible configurations or playback errors
const playlists = this.playlists.master.playlists.filter(Playlist.isEnabled);
// Sort ascending by bitrate
stableSort(
playlists,
(a, b) => comparePlaylistBandwidth(a, b)
);
// Parse and assume that playlists with no video codec have no video
// (this is not necessarily true, although it is generally true).
//
// If an entire manifest has no valid videos everything will get filtered
// out.
const playlistsWithVideo = playlists.filter(playlist => !!codecsForPlaylist(this.playlists.master, playlist).video);
return playlistsWithVideo[0] || null;
};

View File

@@ -1,559 +0,0 @@
/**
* @file playlist.js
*
* Playlist related utilities.
*/
import videojs from 'video.js';
import window from 'global/window';
import {TIME_FUDGE_FACTOR} from './ranges.js';
const {createTimeRange} = videojs;
/**
* walk backward until we find a duration we can use
* or return a failure
*
* @param {Playlist} playlist the playlist to walk through
* @param {Number} endSequence the mediaSequence to stop walking on
*/
const backwardDuration = function(playlist, endSequence) {
let result = 0;
let i = endSequence - playlist.mediaSequence;
// if a start time is available for the segment immediately following
// the interval, use it
let segment = playlist.segments[i];
// Walk backward until we find the latest segment with timeline
// information that is earlier than endSequence
if (segment) {
if (typeof segment.start !== 'undefined') {
return { result: segment.start, precise: true };
}
if (typeof segment.end !== 'undefined') {
return {
result: segment.end - segment.duration,
precise: true
};
}
}
while (i--) {
segment = playlist.segments[i];
if (typeof segment.end !== 'undefined') {
return { result: result + segment.end, precise: true };
}
result += segment.duration;
if (typeof segment.start !== 'undefined') {
return { result: result + segment.start, precise: true };
}
}
return { result, precise: false };
};
/**
* walk forward until we find a duration we can use
* or return a failure
*
* @param {Playlist} playlist the playlist to walk through
* @param {number} endSequence the mediaSequence to stop walking on
*/
const forwardDuration = function(playlist, endSequence) {
let result = 0;
let segment;
let i = endSequence - playlist.mediaSequence;
// Walk forward until we find the earliest segment with timeline
// information
for (; i < playlist.segments.length; i++) {
segment = playlist.segments[i];
if (typeof segment.start !== 'undefined') {
return {
result: segment.start - result,
precise: true
};
}
result += segment.duration;
if (typeof segment.end !== 'undefined') {
return {
result: segment.end - result,
precise: true
};
}
}
// indicate we didn't find a useful duration estimate
return { result: -1, precise: false };
};
/**
* Calculate the media duration from the segments associated with a
* playlist. The duration of a subinterval of the available segments
* may be calculated by specifying an end index.
*
* @param {Object} playlist a media playlist object
* @param {number=} endSequence an exclusive upper boundary
* for the playlist. Defaults to playlist length.
* @param {number} expired the amount of time that has dropped
* off the front of the playlist in a live scenario
* @return {number} the duration between the first available segment
* and end index.
*/
const intervalDuration = function(playlist, endSequence, expired) {
if (typeof endSequence === 'undefined') {
endSequence = playlist.mediaSequence + playlist.segments.length;
}
if (endSequence < playlist.mediaSequence) {
return 0;
}
// do a backward walk to estimate the duration
const backward = backwardDuration(playlist, endSequence);
if (backward.precise) {
// if we were able to base our duration estimate on timing
// information provided directly from the Media Source, return
// it
return backward.result;
}
// walk forward to see if a precise duration estimate can be made
// that way
const forward = forwardDuration(playlist, endSequence);
if (forward.precise) {
// we found a segment that has been buffered and so its
// position is known precisely
return forward.result;
}
// return the less-precise, playlist-based duration estimate
return backward.result + expired;
};
/**
* Calculates the duration of a playlist. If a start and end index
* are specified, the duration will be for the subset of the media
* timeline between those two indices. The total duration for live
* playlists is always Infinity.
*
* @param {Object} playlist a media playlist object
* @param {number=} endSequence an exclusive upper
* boundary for the playlist. Defaults to the playlist media
* sequence number plus its length.
* @param {number=} expired the amount of time that has
* dropped off the front of the playlist in a live scenario
* @return {number} the duration between the start index and end
* index.
*/
export const duration = function(playlist, endSequence, expired) {
if (!playlist) {
return 0;
}
if (typeof expired !== 'number') {
expired = 0;
}
// if a slice of the total duration is not requested, use
// playlist-level duration indicators when they're present
if (typeof endSequence === 'undefined') {
// if present, use the duration specified in the playlist
if (playlist.totalDuration) {
return playlist.totalDuration;
}
// duration should be Infinity for live playlists
if (!playlist.endList) {
return window.Infinity;
}
}
// calculate the total duration based on the segment durations
return intervalDuration(
playlist,
endSequence,
expired
);
};
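// Illustrative sketch: a live playlist (no endList) reports Infinity, while a
// VOD playlist sums its segment durations:
//
//   duration({ endList: false, mediaSequence: 0, segments: [] }); // Infinity
//   duration({
//     endList: true,
//     mediaSequence: 0,
//     segments: [{ duration: 10 }, { duration: 9 }]
//   }); // 19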
/**
* Calculate the time between two indices in the current playlist. Neither the
* start index nor the end index needs to be within the current playlist; in
* that case, the playlist's targetDuration is used to approximate the
* durations of the segments.
*
* @param {Object} playlist a media playlist object
* @param {number} startIndex
* @param {number} endIndex
* @return {number} the number of seconds between startIndex and endIndex
*/
export const sumDurations = function(playlist, startIndex, endIndex) {
let durations = 0;
if (startIndex > endIndex) {
[startIndex, endIndex] = [endIndex, startIndex];
}
if (startIndex < 0) {
for (let i = startIndex; i < Math.min(0, endIndex); i++) {
durations += playlist.targetDuration;
}
startIndex = 0;
}
for (let i = startIndex; i < endIndex; i++) {
durations += playlist.segments[i].duration;
}
return durations;
};
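// Illustrative sketch: swapped indices are reordered, and negative indices
// are approximated with targetDuration. For a playlist with targetDuration 10
// and segment durations [9, 11]:
//
//   sumDurations(playlist, 0, 2);  // 9 + 11 = 20
//   sumDurations(playlist, -1, 1); // 10 (approximation) + 9 = 19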
/**
* Determines the media index of the segment corresponding to the safe edge of the live
* window which is the duration of the last segment plus 2 target durations from the end
* of the playlist.
*
* A liveEdgePadding can be provided which will be used instead of calculating the safe live edge.
* This corresponds to suggestedPresentationDelay in DASH manifests.
*
* @param {Object} playlist
* a media playlist object
* @param {number} [liveEdgePadding]
* A number in seconds indicating how far from the end we want to be.
* If provided, this value is used instead of calculating the safe live index from the target durations.
* Corresponds to suggestedPresentationDelay in DASH manifests.
* @return {number}
* The media index of the segment at the safe live point. 0 if there is no "safe"
* point.
* @function safeLiveIndex
*/
export const safeLiveIndex = function(playlist, liveEdgePadding) {
if (!playlist.segments.length) {
return 0;
}
let i = playlist.segments.length;
const lastSegmentDuration = playlist.segments[i - 1].duration || playlist.targetDuration;
const safeDistance = typeof liveEdgePadding === 'number' ?
liveEdgePadding :
lastSegmentDuration + playlist.targetDuration * 2;
if (safeDistance === 0) {
return i;
}
let distanceFromEnd = 0;
while (i--) {
distanceFromEnd += playlist.segments[i].duration;
if (distanceFromEnd >= safeDistance) {
break;
}
}
return Math.max(0, i);
};
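// Worked sketch: for a live playlist with targetDuration 10 and five
// 10-second segments, the default safe distance is 10 + 10 * 2 = 30 seconds.
// Walking back from the end, distanceFromEnd reaches 30 at index 2, so
// safeLiveIndex returns 2 as the segment index at the safe live point.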
/**
* Calculates the playlist end time
*
* @param {Object} playlist a media playlist object
* @param {number=} expired the amount of time that has
* dropped off the front of the playlist in a live scenario
* @param {boolean} useSafeLiveEnd a boolean value indicating whether or not the
* playlist end calculation should consider the safe live end
* (truncate the playlist end by three segments). This is normally
* used for calculating the end of the playlist's seekable range.
* This takes into account the value of liveEdgePadding.
* Setting liveEdgePadding to 0 is equivalent to setting this to false.
* @param {number} liveEdgePadding a number indicating how far from the end of the playlist we should be in seconds.
* If this is provided, it is used in the safe live end calculation.
* Setting useSafeLiveEnd=false or liveEdgePadding=0 are equivalent.
* Corresponds to suggestedPresentationDelay in DASH manifests.
* @return {number} the end time of playlist
* @function playlistEnd
*/
export const playlistEnd = function(playlist, expired, useSafeLiveEnd, liveEdgePadding) {
if (!playlist || !playlist.segments) {
return null;
}
if (playlist.endList) {
return duration(playlist);
}
if (expired === null) {
return null;
}
expired = expired || 0;
const endSequence = useSafeLiveEnd ? safeLiveIndex(playlist, liveEdgePadding) : playlist.segments.length;
return intervalDuration(
playlist,
playlist.mediaSequence + endSequence,
expired
);
};
/**
* Calculates the interval of time that is currently seekable in a
* playlist. The returned time ranges are relative to the earliest
* moment in the specified playlist that is still available. A full
* seekable implementation for live streams would need to offset
* these values by the duration of content that has expired from the
* stream.
*
* @param {Object} playlist a media playlist object
* @param {number=} expired the amount of time that has
* dropped off the front of the playlist in a live scenario
* @param {number} liveEdgePadding how far from the end of the playlist we should be in seconds.
* Corresponds to suggestedPresentationDelay in DASH manifests.
* @return {TimeRanges} the periods of time that are valid targets
* for seeking
*/
export const seekable = function(playlist, expired, liveEdgePadding) {
const useSafeLiveEnd = true;
const seekableStart = expired || 0;
const seekableEnd = playlistEnd(playlist, expired, useSafeLiveEnd, liveEdgePadding);
if (seekableEnd === null) {
return createTimeRange();
}
return createTimeRange(seekableStart, seekableEnd);
};
/**
* Determine the index and estimated starting time of the segment that
* contains a specified playback position in a media playlist.
*
* @param {Object} playlist the media playlist to query
* @param {number} currentTime The number of seconds since the earliest
* possible position to determine the containing segment for
* @param {number} startIndex
* @param {number} startTime
* @return {Object}
*/
export const getMediaInfoForTime = function(
playlist,
currentTime,
startIndex,
startTime
) {
let i;
let segment;
const numSegments = playlist.segments.length;
let time = currentTime - startTime;
if (time < 0) {
// Walk backward from startIndex in the playlist, adding durations
// until we find a segment that contains `time` and return it
if (startIndex > 0) {
for (i = startIndex - 1; i >= 0; i--) {
segment = playlist.segments[i];
time += (segment.duration + TIME_FUDGE_FACTOR);
if (time > 0) {
return {
mediaIndex: i,
startTime: startTime - sumDurations(playlist, startIndex, i)
};
}
}
}
// We were unable to find a good segment within the playlist
// so select the first segment
return {
mediaIndex: 0,
startTime: currentTime
};
}
// When startIndex is negative, we first walk forward to the first segment,
// adding target durations. If we "run out of time" before getting to
// the first segment, return the first segment
if (startIndex < 0) {
for (i = startIndex; i < 0; i++) {
time -= playlist.targetDuration;
if (time < 0) {
return {
mediaIndex: 0,
startTime: currentTime
};
}
}
startIndex = 0;
}
// Walk forward from startIndex in the playlist, subtracting durations
// until we find a segment that contains `time` and return it
for (i = startIndex; i < numSegments; i++) {
segment = playlist.segments[i];
time -= segment.duration + TIME_FUDGE_FACTOR;
if (time < 0) {
return {
mediaIndex: i,
startTime: startTime + sumDurations(playlist, startIndex, i)
};
}
}
// We are out of possible candidates so load the last one...
return {
mediaIndex: numSegments - 1,
startTime: currentTime
};
};
/**
* Check whether the playlist is blacklisted or not.
*
* @param {Object} playlist the media playlist object
* @return {boolean} whether the playlist is blacklisted or not
* @function isBlacklisted
*/
export const isBlacklisted = function(playlist) {
return playlist.excludeUntil && playlist.excludeUntil > Date.now();
};
/**
* Check whether the playlist is compatible with current playback configuration or has
* been blacklisted permanently for being incompatible.
*
* @param {Object} playlist the media playlist object
* @return {boolean} whether the playlist is incompatible or not
* @function isIncompatible
*/
export const isIncompatible = function(playlist) {
return playlist.excludeUntil && playlist.excludeUntil === Infinity;
};
/**
* Check whether the playlist is enabled or not.
*
* @param {Object} playlist the media playlist object
* @return {boolean} whether the playlist is enabled or not
* @function isEnabled
*/
export const isEnabled = function(playlist) {
const blacklisted = isBlacklisted(playlist);
return (!playlist.disabled && !blacklisted);
};
/**
* Check whether the playlist has been manually disabled through the representations api.
*
* @param {Object} playlist the media playlist object
* @return {boolean} whether the playlist is disabled manually or not
* @function isDisabled
*/
export const isDisabled = function(playlist) {
return playlist.disabled;
};
/**
* Returns whether the current playlist is an AES encrypted HLS stream
*
* @return {boolean} true if it's an AES encrypted HLS stream
*/
export const isAes = function(media) {
for (let i = 0; i < media.segments.length; i++) {
if (media.segments[i].key) {
return true;
}
}
return false;
};
/**
* Checks if the playlist has a value for the specified attribute
*
* @param {string} attr
* Attribute to check for
* @param {Object} playlist
* The media playlist object
* @return {boolean}
* Whether the playlist contains a value for the attribute or not
* @function hasAttribute
*/
export const hasAttribute = function(attr, playlist) {
return playlist.attributes && playlist.attributes[attr];
};
/**
* Estimates the time required to complete a segment download from the specified playlist
*
* @param {number} segmentDuration
* Duration of requested segment
* @param {number} bandwidth
* Current measured bandwidth of the player
* @param {Object} playlist
* The media playlist object
* @param {number=} bytesReceived
* Number of bytes already received for the request. Defaults to 0
* @return {number}
* The estimated time to request the segment, or NaN if bandwidth information
* for the given playlist is unavailable
* @function estimateSegmentRequestTime
*/
export const estimateSegmentRequestTime = function(
segmentDuration,
bandwidth,
playlist,
bytesReceived = 0
) {
if (!hasAttribute('BANDWIDTH', playlist)) {
return NaN;
}
const size = segmentDuration * playlist.attributes.BANDWIDTH;
return (size - (bytesReceived * 8)) / bandwidth;
};
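// Worked sketch (hypothetical numbers): a 6-second segment from a playlist
// with BANDWIDTH 1,000,000 bits/s is roughly 6,000,000 bits. At a measured
// bandwidth of 3,000,000 bits/s, with 250,000 bytes (2,000,000 bits) already
// received:
//
//   (6e6 - 250000 * 8) / 3e6; // ~1.33 seconds remaining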
/**
* Returns whether the current playlist is the lowest enabled rendition
*
* @param {Object} master the master playlist object
* @param {Object} media the current media playlist object
* @return {boolean} true if on the lowest enabled rendition
*/
export const isLowestEnabledRendition = (master, media) => {
if (master.playlists.length === 1) {
return true;
}
const currentBandwidth = media.attributes.BANDWIDTH || Number.MAX_VALUE;
return (master.playlists.filter((playlist) => {
if (!isEnabled(playlist)) {
return false;
}
return (playlist.attributes.BANDWIDTH || 0) < currentBandwidth;
}).length === 0);
};
// exports
export default {
duration,
seekable,
safeLiveIndex,
getMediaInfoForTime,
isEnabled,
isDisabled,
isBlacklisted,
isIncompatible,
playlistEnd,
isAes,
hasAttribute,
estimateSegmentRequestTime,
isLowestEnabledRendition
};

View File

@@ -1,439 +0,0 @@
/**
* ranges
*
* Utilities for working with TimeRanges.
*
*/
import videojs from 'video.js';
// Fudge factor to account for TimeRanges rounding
export const TIME_FUDGE_FACTOR = 1 / 30;
// Comparisons between time values such as current time and the end of the buffered range
// can be misleading because of precision differences or when the current media has poorly
// aligned audio and video, which can cause values to be slightly off from what you would
// expect. This value is what we consider to be safe to use in such comparisons to account
// for these scenarios.
export const SAFE_TIME_DELTA = TIME_FUDGE_FACTOR * 3;
/**
* Clamps a value to within a range
*
* @param {number} num - the value to clamp
* @param {number[]} range - a two-element [start, end] array giving the range
* to clamp within; both bounds are inclusive
* @return {number}
*/
const clamp = function(num, [start, end]) {
return Math.min(Math.max(start, num), end);
};
const filterRanges = function(timeRanges, predicate) {
const results = [];
let i;
if (timeRanges && timeRanges.length) {
// Search for ranges that match the predicate
for (i = 0; i < timeRanges.length; i++) {
if (predicate(timeRanges.start(i), timeRanges.end(i))) {
results.push([timeRanges.start(i), timeRanges.end(i)]);
}
}
}
return videojs.createTimeRanges(results);
};
/**
* Attempts to find the buffered TimeRange that contains the specified
* time.
*
* @param {TimeRanges} buffered - the TimeRanges object to query
* @param {number} time - the time to filter on.
* @return {TimeRanges} a new TimeRanges object
*/
export const findRange = function(buffered, time) {
return filterRanges(buffered, function(start, end) {
return start - SAFE_TIME_DELTA <= time &&
end + SAFE_TIME_DELTA >= time;
});
};
/**
* Returns the TimeRanges that begin later than the specified time.
*
* @param {TimeRanges} timeRanges - the TimeRanges object to query
* @param {number} time - the time to filter on.
* @return {TimeRanges} a new TimeRanges object.
*/
export const findNextRange = function(timeRanges, time) {
return filterRanges(timeRanges, function(start) {
return start - TIME_FUDGE_FACTOR >= time;
});
};
/**
* Returns gaps within a list of TimeRanges
*
* @param {TimeRanges} buffered - the TimeRanges object
* @return {TimeRanges} a TimeRanges object of gaps
*/
export const findGaps = function(buffered) {
if (buffered.length < 2) {
return videojs.createTimeRanges();
}
const ranges = [];
for (let i = 1; i < buffered.length; i++) {
const start = buffered.end(i - 1);
const end = buffered.start(i);
ranges.push([start, end]);
}
return videojs.createTimeRanges(ranges);
};
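// Illustrative sketch: buffered ranges of [0, 10] and [12, 20] produce a
// single gap:
//
//   const buffered = videojs.createTimeRanges([[0, 10], [12, 20]]);
//   printableRange(findGaps(buffered)); // '10 => 12'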
/**
* Search for a likely end time for the segment that was just appended
* based on the state of the `buffered` property before and after the
* append. If we find only one such uncommon end-point, return it.
*
* @param {TimeRanges} original - the buffered time ranges before the update
* @param {TimeRanges} update - the buffered time ranges after the update
* @return {number|null} the end time added between `original` and `update`,
* or null if one cannot be unambiguously determined.
*/
export const findSoleUncommonTimeRangesEnd = function(original, update) {
let i;
let start;
let end;
const result = [];
const edges = [];
// In order to qualify as a possible candidate, the end point must:
// 1) Not have already existed in the `original` ranges
// 2) Not result from the shrinking of a range that already existed
// in the `original` ranges
// 3) Not be contained inside of a range that existed in `original`
const overlapsCurrentEnd = function(span) {
return (span[0] <= end && span[1] >= end);
};
if (original) {
// Save all the edges in the `original` TimeRanges object
for (i = 0; i < original.length; i++) {
start = original.start(i);
end = original.end(i);
edges.push([start, end]);
}
}
if (update) {
// Save any end-points in `update` that are not in the `original`
// TimeRanges object
for (i = 0; i < update.length; i++) {
start = update.start(i);
end = update.end(i);
if (edges.some(overlapsCurrentEnd)) {
continue;
}
// at this point it must be a unique non-shrinking end edge
result.push(end);
}
}
// we err on the side of caution and return null if we didn't find
// exactly *one* differing end edge in the search above
if (result.length !== 1) {
return null;
}
return result[0];
};
/**
* Calculate the intersection of two TimeRanges
*
* @param {TimeRanges} bufferA
* @param {TimeRanges} bufferB
* @return {TimeRanges} The intersection of `bufferA` with `bufferB`
*/
export const bufferIntersection = function(bufferA, bufferB) {
let start = null;
let end = null;
let arity = 0;
const extents = [];
const ranges = [];
if (!bufferA || !bufferA.length || !bufferB || !bufferB.length) {
return videojs.createTimeRange();
}
// Handle the case where we have both buffers and create an
// intersection of the two
let count = bufferA.length;
// A) Gather up all start and end times
while (count--) {
extents.push({time: bufferA.start(count), type: 'start'});
extents.push({time: bufferA.end(count), type: 'end'});
}
count = bufferB.length;
while (count--) {
extents.push({time: bufferB.start(count), type: 'start'});
extents.push({time: bufferB.end(count), type: 'end'});
}
// B) Sort them by time
extents.sort(function(a, b) {
return a.time - b.time;
});
// C) Go along one by one incrementing arity for start and decrementing
// arity for ends
for (count = 0; count < extents.length; count++) {
if (extents[count].type === 'start') {
arity++;
// D) If arity is ever incremented to 2 we are entering an
// overlapping range
if (arity === 2) {
start = extents[count].time;
}
} else if (extents[count].type === 'end') {
arity--;
// E) If arity is ever decremented to 1 we are leaving an
// overlapping range
if (arity === 1) {
end = extents[count].time;
}
}
// F) Record overlapping ranges
if (start !== null && end !== null) {
ranges.push([start, end]);
start = null;
end = null;
}
}
return videojs.createTimeRanges(ranges);
};
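// Illustrative sketch of the sweep: for bufferA = [[0, 5], [8, 12]] and
// bufferB = [[3, 10]], arity rises to 2 at 3 and 8 (entering an overlap) and
// falls back to 1 at 5 and 10 (leaving one), so the intersection is
// [[3, 5], [8, 10]]:
//
//   bufferIntersection(
//     videojs.createTimeRanges([[0, 5], [8, 12]]),
//     videojs.createTimeRanges([[3, 10]])
//   );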
/**
* Calculates the percentage of `segmentRange` that overlaps the
* `buffered` time ranges.
*
* @param {TimeRanges} segmentRange - the time range that the segment
* covers adjusted according to currentTime
* @param {TimeRanges} referenceRange - the original time range that the
* segment covers
* @param {number} currentTime - time in seconds where the current playback
* is at
* @param {TimeRanges} buffered - the currently buffered time ranges
* @return {number} percent of the segment currently buffered
*/
const calculateBufferedPercent = function(
adjustedRange,
referenceRange,
currentTime,
buffered
) {
const referenceDuration = referenceRange.end(0) - referenceRange.start(0);
const adjustedDuration = adjustedRange.end(0) - adjustedRange.start(0);
const bufferMissingFromAdjusted = referenceDuration - adjustedDuration;
const adjustedIntersection = bufferIntersection(adjustedRange, buffered);
const referenceIntersection = bufferIntersection(referenceRange, buffered);
let adjustedOverlap = 0;
let referenceOverlap = 0;
let count = adjustedIntersection.length;
while (count--) {
adjustedOverlap += adjustedIntersection.end(count) -
adjustedIntersection.start(count);
// If the current overlap segment starts at currentTime, then increase the
// overlap duration so that it actually starts at the beginning of referenceRange
// by including the difference between the two Range's durations
// This is a workaround for the fact that Flash has no buffer before currentTime
// TODO: see if this is still necessary since Flash isn't included
if (adjustedIntersection.start(count) === currentTime) {
adjustedOverlap += bufferMissingFromAdjusted;
}
}
count = referenceIntersection.length;
while (count--) {
referenceOverlap += referenceIntersection.end(count) -
referenceIntersection.start(count);
}
// Use whichever overlap value is larger for the percentage-buffered since
// the larger value is likely the more accurate one
return Math.max(adjustedOverlap, referenceOverlap) / referenceDuration * 100;
};
/**
* Return the amount of a range specified by the startOfSegment and segmentDuration
* overlaps the current buffered content.
*
* @param {number} startOfSegment - the time where the segment begins
* @param {number} segmentDuration - the duration of the segment in seconds
* @param {number} currentTime - time in seconds where the current playback
* is at
* @param {TimeRanges} buffered - the state of the buffer
* @return {number} percentage of the segment's time range that is
* already in `buffered`
*/
export const getSegmentBufferedPercent = function(
startOfSegment,
segmentDuration,
currentTime,
buffered
) {
const endOfSegment = startOfSegment + segmentDuration;
// The entire time range of the segment
const originalSegmentRange = videojs.createTimeRanges([[
startOfSegment,
endOfSegment
]]);
// The adjusted segment time range that is set up such that it starts
// no earlier than currentTime
// Flash has no notion of a back-buffer so adjustedSegmentRange adjusts
// for that and the function will still return 100% if only half of a
// segment is actually in the buffer, as long as the currentTime is also
// half-way through the segment
const adjustedSegmentRange = videojs.createTimeRanges([[
clamp(startOfSegment, [currentTime, endOfSegment]),
endOfSegment
]]);
// This condition happens when the currentTime is beyond the segment's
// end time
if (adjustedSegmentRange.start(0) === adjustedSegmentRange.end(0)) {
return 0;
}
const percent = calculateBufferedPercent(
adjustedSegmentRange,
originalSegmentRange,
currentTime,
buffered
);
// If the segment is reported as having a zero duration, return 0%
// since it is likely that we will need to fetch the segment
if (isNaN(percent) || percent === Infinity || percent === -Infinity) {
return 0;
}
return percent;
};
/**
* Gets a human readable string for a TimeRange
*
* @param {TimeRange} range
* @return {string} a human readable string
*/
export const printableRange = (range) => {
const strArr = [];
if (!range || !range.length) {
return '';
}
for (let i = 0; i < range.length; i++) {
strArr.push(range.start(i) + ' => ' + range.end(i));
}
return strArr.join(', ');
};
/**
* Calculates the amount of time left in seconds until the player hits the end of the
* buffer and causes a rebuffer
*
* @param {TimeRange} buffered
* The state of the buffer
* @param {number} currentTime
* The current time of the player
* @param {number} playbackRate
* The current playback rate of the player. Defaults to 1.
* @return {number}
* Time until the player has to start rebuffering in seconds.
* @function timeUntilRebuffer
*/
export const timeUntilRebuffer = function(buffered, currentTime, playbackRate = 1) {
const bufferedEnd = buffered.length ? buffered.end(buffered.length - 1) : 0;
return (bufferedEnd - currentTime) / playbackRate;
};
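// Worked sketch: with the buffer ending at 30s, currentTime at 22s and a 2x
// playback rate, the player has (30 - 22) / 2 = 4 seconds of real time before
// it runs out of buffered content.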
/**
* Converts a TimeRanges object into an array representation
*
* @param {TimeRanges} timeRanges
* @return {Array}
*/
export const timeRangesToArray = (timeRanges) => {
const timeRangesList = [];
for (let i = 0; i < timeRanges.length; i++) {
timeRangesList.push({
start: timeRanges.start(i),
end: timeRanges.end(i)
});
}
return timeRangesList;
};
/**
* Determines if two time range objects are different.
*
* @param {TimeRange} a
* the first time range object to check
*
* @param {TimeRange} b
* the second time range object to check
*
* @return {Boolean}
* Whether the time range objects differ
*/
export const isRangeDifferent = function(a, b) {
// same object
if (a === b) {
return false;
}
// one or the other is undefined
if (!a && b || (!b && a)) {
return true;
}
// length is different
if (a.length !== b.length) {
return true;
}
// see if any start/end pair is different
for (let i = 0; i < a.length; i++) {
if (a.start(i) !== b.start(i) || a.end(i) !== b.end(i)) {
return true;
}
}
// if the length and every pair is the same
// this is the same time range
return false;
};

View File

@@ -1,127 +0,0 @@
import videojs from 'video.js';
const defaultOptions = {
errorInterval: 30,
getSource(next) {
const tech = this.tech({ IWillNotUseThisInPlugins: true });
const sourceObj = tech.currentSource_ || this.currentSource();
return next(sourceObj);
}
};
/**
* Main entry point for the plugin
*
* @param {Player} player a reference to a videojs Player instance
* @param {Object} [options] an object with plugin options
* @private
*/
const initPlugin = function(player, options) {
let lastCalled = 0;
let seekTo = 0;
const localOptions = videojs.mergeOptions(defaultOptions, options);
player.ready(() => {
player.trigger({type: 'usage', name: 'vhs-error-reload-initialized'});
player.trigger({type: 'usage', name: 'hls-error-reload-initialized'});
});
/**
* Player modifications to perform that must wait until `loadedmetadata`
* has been triggered
*
* @private
*/
const loadedMetadataHandler = function() {
if (seekTo) {
player.currentTime(seekTo);
}
};
/**
* Set the source on the player element, play, and seek if necessary
*
* @param {Object} sourceObj An object specifying the source url and mime-type to play
* @private
*/
const setSource = function(sourceObj) {
if (sourceObj === null || sourceObj === undefined) {
return;
}
seekTo = (player.duration() !== Infinity && player.currentTime()) || 0;
player.one('loadedmetadata', loadedMetadataHandler);
player.src(sourceObj);
player.trigger({type: 'usage', name: 'vhs-error-reload'});
player.trigger({type: 'usage', name: 'hls-error-reload'});
player.play();
};
/**
* Attempt to get a source from either the built-in getSource function
* or a custom function provided via the options
*
* @private
*/
const errorHandler = function() {
// Do not attempt to reload the source if a source-reload occurred before
// 'errorInterval' time has elapsed since the last source-reload
if (Date.now() - lastCalled < localOptions.errorInterval * 1000) {
player.trigger({type: 'usage', name: 'vhs-error-reload-canceled'});
player.trigger({type: 'usage', name: 'hls-error-reload-canceled'});
return;
}
if (!localOptions.getSource ||
typeof localOptions.getSource !== 'function') {
videojs.log.error('ERROR: reloadSourceOnError - The option getSource must be a function!');
return;
}
lastCalled = Date.now();
return localOptions.getSource.call(player, setSource);
};
/**
* Unbind any event handlers that were bound by the plugin
*
* @private
*/
const cleanupEvents = function() {
player.off('loadedmetadata', loadedMetadataHandler);
player.off('error', errorHandler);
player.off('dispose', cleanupEvents);
};
/**
* Cleanup before re-initializing the plugin
*
* @param {Object} [newOptions] an object with plugin options
* @private
*/
const reinitPlugin = function(newOptions) {
cleanupEvents();
initPlugin(player, newOptions);
};
player.on('error', errorHandler);
player.on('dispose', cleanupEvents);
// Overwrite the plugin function so that we can correctly cleanup before
// initializing the plugin
player.reloadSourceOnError = reinitPlugin;
};
/**
* Reload the source when an error is detected as long as there
* wasn't an error previously within the last 30 seconds
*
* @param {Object} [options] an object with plugin options
*/
const reloadSourceOnError = function(options) {
initPlugin(this, options);
};
export default reloadSourceOnError;

View File

@@ -1,111 +0,0 @@
import { isIncompatible, isEnabled } from './playlist.js';
import { codecsForPlaylist } from './util/codecs.js';
/**
* Returns a function that acts as the Enable/disable playlist function.
*
* @param {PlaylistLoader} loader - The master playlist loader
* @param {string} playlistID - id of the playlist
* @param {Function} changePlaylistFn - A function to be called after a
* playlist's enabled-state has been changed. Will NOT be called if a
* playlist's enabled-state is unchanged
* @param {boolean=} enable - Value to set the playlist enabled-state to
* or if undefined returns the current enabled-state for the playlist
* @return {Function} Function for setting/getting enabled
*/
const enableFunction = (loader, playlistID, changePlaylistFn) => (enable) => {
const playlist = loader.master.playlists[playlistID];
const incompatible = isIncompatible(playlist);
const currentlyEnabled = isEnabled(playlist);
if (typeof enable === 'undefined') {
return currentlyEnabled;
}
if (enable) {
delete playlist.disabled;
} else {
playlist.disabled = true;
}
if (enable !== currentlyEnabled && !incompatible) {
// Ensure the outside world knows about our changes
changePlaylistFn();
if (enable) {
loader.trigger('renditionenabled');
} else {
loader.trigger('renditiondisabled');
}
}
return enable;
};
/**
* The representation object encapsulates the publicly visible information
* in a media playlist along with a setter/getter-type function (enabled)
* for changing the enabled-state of a particular playlist entry
*
* @class Representation
*/
class Representation {
constructor(vhsHandler, playlist, id) {
const {
masterPlaylistController_: mpc,
options_: { smoothQualityChange }
} = vhsHandler;
// Get a reference to a bound version of the quality change function
const changeType = smoothQualityChange ? 'smooth' : 'fast';
const qualityChangeFunction = mpc[`${changeType}QualityChange_`].bind(mpc);
// some playlist attributes are optional
if (playlist.attributes.RESOLUTION) {
const resolution = playlist.attributes.RESOLUTION;
this.width = resolution.width;
this.height = resolution.height;
}
this.bandwidth = playlist.attributes.BANDWIDTH;
this.codecs = codecsForPlaylist(mpc.master(), playlist);
this.playlist = playlist;
// The id is simply the ordinality of the media playlist
// within the master playlist
this.id = id;
// Partially-apply the enableFunction to create a playlist-
// specific variant
this.enabled = enableFunction(
vhsHandler.playlists,
playlist.id,
qualityChangeFunction
);
}
}
/**
* A mixin function that adds the `representations` api to an instance
* of the VhsHandler class
*
* @param {VhsHandler} vhsHandler - An instance of VhsHandler to add the
* representation API into
*/
const renditionSelectionMixin = function(vhsHandler) {
const playlists = vhsHandler.playlists;
// Add a single API-specific function to the VhsHandler instance
vhsHandler.representations = () => {
if (!playlists || !playlists.master || !playlists.master.playlists) {
return [];
}
return playlists
.master
.playlists
.filter((media) => !isIncompatible(media))
.map((e, i) => new Representation(vhsHandler, e, e.id));
};
};
export default renditionSelectionMixin;

View File

@@ -1,36 +0,0 @@
/**
* @file resolve-url.js - Handling how URLs are resolved and manipulated
*/
import _resolveUrl from '@videojs/vhs-utils/dist/resolve-url.js';
export const resolveUrl = _resolveUrl;
/**
* Checks whether an xhr request was redirected and returns the correct url
* depending on the `handleManifestRedirects` option
*
* @api private
*
* @param {boolean} handleManifestRedirect - whether a redirected URL should be used
* @param {string} url - the url being requested
* @param {XMLHttpRequest} req - xhr request result
*
* @return {string}
*/
export const resolveManifestRedirect = (handleManifestRedirect, url, req) => {
// To understand how the responseURL below is set and generated:
// - https://fetch.spec.whatwg.org/#concept-response-url
// - https://fetch.spec.whatwg.org/#atomic-http-redirect-handling
if (
handleManifestRedirect &&
req &&
req.responseURL &&
url !== req.responseURL
) {
return req.responseURL;
}
return url;
};
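// Usage sketch (hypothetical request object): when redirect handling is on
// and the request was redirected, the redirected URL wins:
//
//   resolveManifestRedirect(
//     true,
//     'https://example.com/master.m3u8',
//     { responseURL: 'https://cdn.example.com/master.m3u8' }
//   ); // -> 'https://cdn.example.com/master.m3u8'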
export default resolveUrl;

File diff suppressed because it is too large

View File

@@ -1,240 +0,0 @@
const transmuxQueue = [];
let currentTransmux;
export const handleData_ = (event, transmuxedData, callback) => {
const {
type,
initSegment,
captions,
captionStreams,
metadata,
videoFrameDtsTime,
videoFramePtsTime
} = event.data.segment;
transmuxedData.buffer.push({
captions,
captionStreams,
metadata
});
// right now, boxes will come back from the partial transmuxer and data from the full one
const boxes = event.data.segment.boxes || {
data: event.data.segment.data
};
const result = {
type,
// cast ArrayBuffer to TypedArray
data: new Uint8Array(
boxes.data,
boxes.data.byteOffset,
boxes.data.byteLength
),
initSegment: new Uint8Array(
initSegment.data,
initSegment.byteOffset,
initSegment.byteLength
)
};
if (typeof videoFrameDtsTime !== 'undefined') {
result.videoFrameDtsTime = videoFrameDtsTime;
}
if (typeof videoFramePtsTime !== 'undefined') {
result.videoFramePtsTime = videoFramePtsTime;
}
callback(result);
};
export const handleDone_ = ({
transmuxedData,
callback
}) => {
// Previously we only returned data on data events,
// not on done events. Clear out the buffer to keep that consistent.
transmuxedData.buffer = [];
// all buffers should have been flushed from the muxer, so start processing anything we
// have received
callback(transmuxedData);
};
export const handleGopInfo_ = (event, transmuxedData) => {
transmuxedData.gopInfo = event.data.gopInfo;
};
export const processTransmux = ({
transmuxer,
bytes,
audioAppendStart,
gopsToAlignWith,
isPartial,
remux,
onData,
onTrackInfo,
onAudioTimingInfo,
onVideoTimingInfo,
onVideoSegmentTimingInfo,
onId3,
onCaptions,
onDone
}) => {
const transmuxedData = {
isPartial,
buffer: []
};
const handleMessage = (event) => {
if (!currentTransmux) {
// disposed
return;
}
if (event.data.action === 'data') {
handleData_(event, transmuxedData, onData);
}
if (event.data.action === 'trackinfo') {
onTrackInfo(event.data.trackInfo);
}
if (event.data.action === 'gopInfo') {
handleGopInfo_(event, transmuxedData);
}
if (event.data.action === 'audioTimingInfo') {
onAudioTimingInfo(event.data.audioTimingInfo);
}
if (event.data.action === 'videoTimingInfo') {
onVideoTimingInfo(event.data.videoTimingInfo);
}
if (event.data.action === 'videoSegmentTimingInfo') {
onVideoSegmentTimingInfo(event.data.videoSegmentTimingInfo);
}
if (event.data.action === 'id3Frame') {
onId3([event.data.id3Frame], event.data.id3Frame.dispatchType);
}
if (event.data.action === 'caption') {
onCaptions(event.data.caption);
}
// wait for the transmuxed event since we may have audio and video
if (event.data.type !== 'transmuxed') {
return;
}
transmuxer.onmessage = null;
handleDone_({
transmuxedData,
callback: onDone
});
/* eslint-disable no-use-before-define */
dequeue();
/* eslint-enable */
};
transmuxer.onmessage = handleMessage;
if (audioAppendStart) {
transmuxer.postMessage({
action: 'setAudioAppendStart',
appendStart: audioAppendStart
});
}
// allow empty arrays to be passed to clear out GOPs
if (Array.isArray(gopsToAlignWith)) {
transmuxer.postMessage({
action: 'alignGopsWith',
gopsToAlignWith
});
}
if (typeof remux !== 'undefined') {
transmuxer.postMessage({
action: 'setRemux',
remux
});
}
if (bytes.byteLength) {
const buffer = bytes instanceof ArrayBuffer ? bytes : bytes.buffer;
const byteOffset = bytes instanceof ArrayBuffer ? 0 : bytes.byteOffset;
transmuxer.postMessage(
{
action: 'push',
// Send the typed-array of data as an ArrayBuffer so that
// it can be sent as a "Transferable" and avoid the costly
// memory copy
data: buffer,
// To recreate the original typed-array, we need information
// about what portion of the ArrayBuffer it was a view into
byteOffset,
byteLength: bytes.byteLength
},
[ buffer ]
);
}
// even if we didn't push any bytes, we have to make sure we flush in case we reached
// the end of the segment
transmuxer.postMessage({ action: isPartial ? 'partialFlush' : 'flush' });
};
export const dequeue = () => {
currentTransmux = null;
if (transmuxQueue.length) {
currentTransmux = transmuxQueue.shift();
if (typeof currentTransmux === 'function') {
currentTransmux();
} else {
processTransmux(currentTransmux);
}
}
};
export const processAction = (transmuxer, action) => {
transmuxer.postMessage({ action });
dequeue();
};
export const enqueueAction = (action, transmuxer) => {
if (!currentTransmux) {
currentTransmux = action;
processAction(transmuxer, action);
return;
}
transmuxQueue.push(processAction.bind(null, transmuxer, action));
};
export const reset = (transmuxer) => {
enqueueAction('reset', transmuxer);
};
export const endTimeline = (transmuxer) => {
enqueueAction('endTimeline', transmuxer);
};
export const transmux = (options) => {
if (!currentTransmux) {
currentTransmux = options;
processTransmux(options);
return;
}
transmuxQueue.push(options);
};
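// Queueing sketch: only one transmux operation runs at a time. Later calls
// wait in transmuxQueue and are drained by dequeue() once the worker posts
// its final 'transmuxed' message:
//
//   transmux(optionsA); // processed immediately
//   transmux(optionsB); // queued until optionsA completes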
export const dispose = () => {
// clear out module-level references
currentTransmux = null;
transmuxQueue.length = 0;
};
export default {
reset,
dispose,
endTimeline,
transmux
};

View File

@@ -1,806 +0,0 @@
/**
* @file source-updater.js
*/
import videojs from 'video.js';
import logger from './util/logger';
import noop from './util/noop';
import { bufferIntersection } from './ranges.js';
import {getMimeForCodec} from '@videojs/vhs-utils/dist/codecs.js';
import window from 'global/window';
import toTitleCase from './util/to-title-case.js';
const bufferTypes = [
'video',
'audio'
];
const updating = (type, sourceUpdater) => {
const sourceBuffer = sourceUpdater[`${type}Buffer`];
return (sourceBuffer && sourceBuffer.updating) || sourceUpdater.queuePending[type];
};
const nextQueueIndexOfType = (type, queue) => {
for (let i = 0; i < queue.length; i++) {
const queueEntry = queue[i];
if (queueEntry.type === 'mediaSource') {
// If the next entry is a media source entry (uses multiple source buffers), block
// processing to allow it to go through first.
return null;
}
if (queueEntry.type === type) {
return i;
}
}
return null;
};
const shiftQueue = (type, sourceUpdater) => {
if (sourceUpdater.queue.length === 0) {
return;
}
let queueIndex = 0;
let queueEntry = sourceUpdater.queue[queueIndex];
if (queueEntry.type === 'mediaSource') {
if (!sourceUpdater.updating() && sourceUpdater.mediaSource.readyState !== 'closed') {
sourceUpdater.queue.shift();
queueEntry.action(sourceUpdater);
if (queueEntry.doneFn) {
queueEntry.doneFn();
}
// Only specific source buffer actions must wait for async updateend events. Media
// Source actions process synchronously. Therefore, both audio and video source
// buffers are now clear to process the next queue entries.
shiftQueue('audio', sourceUpdater);
shiftQueue('video', sourceUpdater);
}
// Media Source actions require both source buffers, so if the media source action
// couldn't process yet (because one or both source buffers are busy), block other
// queue actions until both are available and the media source action can process.
return;
}
if (type === 'mediaSource') {
// If the queue was shifted by a media source action (this happens when pushing a
// media source action onto the queue), then it wasn't from an updateend event from an
// audio or video source buffer, so there's no change from previous state, and no
// processing should be done.
return;
}
// Media source queue entries don't need to consider whether the source updater is
// started (i.e., source buffers are created) as they don't need the source buffers, but
// source buffer queue entries do.
if (!sourceUpdater.started_ || sourceUpdater.mediaSource.readyState === 'closed' || updating(type, sourceUpdater)) {
return;
}
if (queueEntry.type !== type) {
queueIndex = nextQueueIndexOfType(type, sourceUpdater.queue);
if (queueIndex === null) {
// Either there's no queue entry that uses this source buffer type in the queue, or
// there's a media source queue entry before the next entry of this type, in which
// case wait for that action to process first.
return;
}
queueEntry = sourceUpdater.queue[queueIndex];
}
sourceUpdater.queue.splice(queueIndex, 1);
queueEntry.action(type, sourceUpdater);
if (!queueEntry.doneFn) {
// synchronous operation, process next entry
shiftQueue(type, sourceUpdater);
return;
}
// asynchronous operation, so keep a record that this source buffer type is in use
sourceUpdater.queuePending[type] = queueEntry;
};
const cleanupBuffer = (type, sourceUpdater) => {
const buffer = sourceUpdater[`${type}Buffer`];
const titleType = toTitleCase(type);
if (!buffer) {
return;
}
buffer.removeEventListener('updateend', sourceUpdater[`on${titleType}UpdateEnd_`]);
buffer.removeEventListener('error', sourceUpdater[`on${titleType}Error_`]);
sourceUpdater.codecs[type] = null;
sourceUpdater[`${type}Buffer`] = null;
};
const inSourceBuffers = (mediaSource, sourceBuffer) => mediaSource && sourceBuffer &&
Array.prototype.indexOf.call(mediaSource.sourceBuffers, sourceBuffer) !== -1;
const actions = {
appendBuffer: (bytes, segmentInfo) => (type, sourceUpdater) => {
const sourceBuffer = sourceUpdater[`${type}Buffer`];
// can't do anything if the media source / source buffer is null
// or the media source does not contain this source buffer.
if (!inSourceBuffers(sourceUpdater.mediaSource, sourceBuffer)) {
return;
}
sourceUpdater.logger_(`Appending segment ${segmentInfo.mediaIndex}'s ${bytes.length} bytes to ${type}Buffer`);
sourceBuffer.appendBuffer(bytes);
},
remove: (start, end) => (type, sourceUpdater) => {
const sourceBuffer = sourceUpdater[`${type}Buffer`];
// can't do anything if the media source / source buffer is null
// or the media source does not contain this source buffer.
if (!inSourceBuffers(sourceUpdater.mediaSource, sourceBuffer)) {
return;
}
sourceUpdater.logger_(`Removing ${start} to ${end} from ${type}Buffer`);
sourceBuffer.remove(start, end);
},
timestampOffset: (offset) => (type, sourceUpdater) => {
const sourceBuffer = sourceUpdater[`${type}Buffer`];
// can't do anything if the media source / source buffer is null
// or the media source does not contain this source buffer.
if (!inSourceBuffers(sourceUpdater.mediaSource, sourceBuffer)) {
return;
}
sourceUpdater.logger_(`Setting ${type} timestampOffset to ${offset}`);
sourceBuffer.timestampOffset = offset;
},
callback: (callback) => (type, sourceUpdater) => {
callback();
},
endOfStream: (error) => (sourceUpdater) => {
if (sourceUpdater.mediaSource.readyState !== 'open') {
return;
}
sourceUpdater.logger_(`Calling mediaSource endOfStream(${error || ''})`);
try {
sourceUpdater.mediaSource.endOfStream(error);
} catch (e) {
videojs.log.warn('Failed to call media source endOfStream', e);
}
},
duration: (duration) => (sourceUpdater) => {
sourceUpdater.logger_(`Setting mediaSource duration to ${duration}`);
try {
sourceUpdater.mediaSource.duration = duration;
} catch (e) {
videojs.log.warn('Failed to set media source duration', e);
}
},
abort: () => (type, sourceUpdater) => {
if (sourceUpdater.mediaSource.readyState !== 'open') {
return;
}
const sourceBuffer = sourceUpdater[`${type}Buffer`];
// can't do anything if the media source / source buffer is null
// or the media source does not contain this source buffer.
if (!inSourceBuffers(sourceUpdater.mediaSource, sourceBuffer)) {
return;
}
sourceUpdater.logger_(`calling abort on ${type}Buffer`);
try {
sourceBuffer.abort();
} catch (e) {
videojs.log.warn(`Failed to abort on ${type}Buffer`, e);
}
},
addSourceBuffer: (type, codec) => (sourceUpdater) => {
const titleType = toTitleCase(type);
const mime = getMimeForCodec(codec);
sourceUpdater.logger_(`Adding ${type}Buffer with codec ${codec} to mediaSource`);
const sourceBuffer = sourceUpdater.mediaSource.addSourceBuffer(mime);
sourceBuffer.addEventListener('updateend', sourceUpdater[`on${titleType}UpdateEnd_`]);
sourceBuffer.addEventListener('error', sourceUpdater[`on${titleType}Error_`]);
sourceUpdater.codecs[type] = codec;
sourceUpdater[`${type}Buffer`] = sourceBuffer;
},
removeSourceBuffer: (type) => (sourceUpdater) => {
const sourceBuffer = sourceUpdater[`${type}Buffer`];
cleanupBuffer(type, sourceUpdater);
// can't do anything if the media source / source buffer is null
// or the media source does not contain this source buffer.
if (!inSourceBuffers(sourceUpdater.mediaSource, sourceBuffer)) {
return;
}
sourceUpdater.logger_(`Removing ${type}Buffer with codec ${sourceUpdater.codecs[type]} from mediaSource`);
try {
sourceUpdater.mediaSource.removeSourceBuffer(sourceBuffer);
} catch (e) {
videojs.log.warn(`Failed to removeSourceBuffer ${type}Buffer`, e);
}
},
changeType: (codec) => (type, sourceUpdater) => {
const sourceBuffer = sourceUpdater[`${type}Buffer`];
const mime = getMimeForCodec(codec);
// can't do anything if the media source / source buffer is null
// or the media source does not contain this source buffer.
if (!inSourceBuffers(sourceUpdater.mediaSource, sourceBuffer)) {
return;
}
// do not update codec if we don't need to.
if (sourceUpdater.codecs[type] === codec) {
return;
}
sourceUpdater.logger_(`changing ${type}Buffer codec from ${sourceUpdater.codecs[type]} to ${codec}`);
sourceBuffer.changeType(mime);
sourceUpdater.codecs[type] = codec;
}
};
const pushQueue = ({type, sourceUpdater, action, doneFn, name}) => {
sourceUpdater.queue.push({
type,
action,
doneFn,
name
});
shiftQueue(type, sourceUpdater);
};
const onUpdateend = (type, sourceUpdater) => (e) => {
// Although there should, in theory, be a pending action for any updateend received,
// there are some actions that may trigger updateend events without set definitions in
// the w3c spec. For instance, setting the duration on the media source may trigger
// updateend events on source buffers. This does not appear to be in the spec. As such,
// if we encounter an updateend without a corresponding pending action from our queue
// for that source buffer type, process the next action.
if (sourceUpdater.queuePending[type]) {
const doneFn = sourceUpdater.queuePending[type].doneFn;
sourceUpdater.queuePending[type] = null;
if (doneFn) {
// if there's an error, report it
doneFn(sourceUpdater[`${type}Error_`]);
}
}
shiftQueue(type, sourceUpdater);
};
/**
* A queue of callbacks to be serialized and applied when a
* MediaSource and its associated SourceBuffers are not in the
* updating state. It is used by the segment loader to update the
* underlying SourceBuffers when new data is loaded, for instance.
*
* @class SourceUpdater
* @param {MediaSource} mediaSource the MediaSource to create the SourceBuffer from
*/
export default class SourceUpdater extends videojs.EventTarget {
constructor(mediaSource) {
super();
this.mediaSource = mediaSource;
this.sourceopenListener_ = () => shiftQueue('mediaSource', this);
this.mediaSource.addEventListener('sourceopen', this.sourceopenListener_);
this.logger_ = logger('SourceUpdater');
// initial timestamp offset is 0
this.audioTimestampOffset_ = 0;
this.videoTimestampOffset_ = 0;
this.queue = [];
this.queuePending = {
audio: null,
video: null
};
this.delayedAudioAppendQueue_ = [];
this.videoAppendQueued_ = false;
this.codecs = {};
this.onVideoUpdateEnd_ = onUpdateend('video', this);
this.onAudioUpdateEnd_ = onUpdateend('audio', this);
this.onVideoError_ = (e) => {
// used for debugging
this.videoError_ = e;
};
this.onAudioError_ = (e) => {
// used for debugging
this.audioError_ = e;
};
this.started_ = false;
}
ready() {
return this.started_;
}
createSourceBuffers(codecs) {
if (this.ready()) {
// already created them before
return;
}
// the initial addOrChangeSourceBuffers call will always
// add two source buffers.
this.addOrChangeSourceBuffers(codecs);
this.started_ = true;
this.trigger('ready');
}
/**
* Add a type of source buffer to the media source.
*
* @param {string} type
* The type of source buffer to add.
*
* @param {string} codec
* The codec to add the source buffer with.
*/
addSourceBuffer(type, codec) {
pushQueue({
type: 'mediaSource',
sourceUpdater: this,
action: actions.addSourceBuffer(type, codec),
name: 'addSourceBuffer'
});
}
/**
* call abort on a source buffer.
*
* @param {string} type
* The type of source buffer to call abort on.
*/
abort(type) {
pushQueue({
type,
sourceUpdater: this,
action: actions.abort(type),
name: 'abort'
});
}
/**
* Call removeSourceBuffer and remove a specific type
* of source buffer on the mediaSource.
*
* @param {string} type
* The type of source buffer to remove.
*/
removeSourceBuffer(type) {
if (!this.canRemoveSourceBuffer()) {
videojs.log.error('removeSourceBuffer is not supported!');
return;
}
pushQueue({
type: 'mediaSource',
sourceUpdater: this,
action: actions.removeSourceBuffer(type),
name: 'removeSourceBuffer'
});
}
/**
* Whether or not the removeSourceBuffer function is supported
* on the mediaSource.
*
* @return {boolean}
* if removeSourceBuffer can be called.
*/
canRemoveSourceBuffer() {
// IE reports that it supports removeSourceBuffer, but often throws
// errors when attempting to use the function. So we report that it
// does not support removeSourceBuffer.
return !videojs.browser.IE_VERSION && window.MediaSource &&
window.MediaSource.prototype &&
typeof window.MediaSource.prototype.removeSourceBuffer === 'function';
}
/**
* Whether or not the changeType function is supported
* on our SourceBuffers.
*
* @return {boolean}
* if changeType can be called.
*/
static canChangeType() {
return window.SourceBuffer &&
window.SourceBuffer.prototype &&
typeof window.SourceBuffer.prototype.changeType === 'function';
}
/**
* Whether or not the changeType function is supported
* on our SourceBuffers.
*
* @return {boolean}
* if changeType can be called.
*/
canChangeType() {
return this.constructor.canChangeType();
}
/**
* Call the changeType function on a source buffer, given the code and type.
*
* @param {string} type
* The type of source buffer to call changeType on.
*
* @param {string} codec
* The codec string to change type with on the source buffer.
*/
changeType(type, codec) {
if (!this.canChangeType()) {
videojs.log.error('changeType is not supported!');
return;
}
pushQueue({
type,
sourceUpdater: this,
action: actions.changeType(codec),
name: 'changeType'
});
}
/**
* Add source buffers with a codec or, if they are already created,
* call changeType on them.
*
* @param {Object} codecs
* Codecs to switch to
*/
addOrChangeSourceBuffers(codecs) {
if (!codecs || typeof codecs !== 'object' || Object.keys(codecs).length === 0) {
throw new Error('Cannot addOrChangeSourceBuffers to undefined codecs');
}
Object.keys(codecs).forEach((type) => {
const codec = codecs[type];
if (!this.ready()) {
return this.addSourceBuffer(type, codec);
}
if (this.canChangeType()) {
this.changeType(type, codec);
}
});
}
/**
* Queue an update to append an ArrayBuffer.
*
* @param {Object} options object containing type ('audio' or 'video'), bytes, and optional segmentInfo
* @param {Function} done the function to call when done
* @see http://www.w3.org/TR/media-source/#widl-SourceBuffer-appendBuffer-void-ArrayBuffer-data
*/
appendBuffer(options, doneFn) {
const {segmentInfo, type, bytes} = options;
this.processedAppend_ = true;
if (type === 'audio' && this.videoBuffer && !this.videoAppendQueued_) {
this.delayedAudioAppendQueue_.push([options, doneFn]);
this.logger_(`delayed audio append of ${bytes.length} until video append`);
return;
}
pushQueue({
type,
sourceUpdater: this,
action: actions.appendBuffer(bytes, segmentInfo || {mediaIndex: -1}),
doneFn,
name: 'appendBuffer'
});
if (type === 'video') {
this.videoAppendQueued_ = true;
if (!this.delayedAudioAppendQueue_.length) {
return;
}
const queue = this.delayedAudioAppendQueue_.slice();
this.logger_(`queuing delayed audio ${queue.length} appendBuffers`);
this.delayedAudioAppendQueue_.length = 0;
queue.forEach((que) => {
this.appendBuffer.apply(this, que);
});
}
}
/**
* Get the audio buffer's buffered timerange.
*
* @return {TimeRange}
* The audio buffer's buffered time range
*/
audioBuffered() {
// no media source/source buffer or it isn't in the media sources
// source buffer list
if (!inSourceBuffers(this.mediaSource, this.audioBuffer)) {
return videojs.createTimeRange();
}
return this.audioBuffer.buffered ? this.audioBuffer.buffered :
videojs.createTimeRange();
}
/**
* Get the video buffer's buffered timerange.
*
* @return {TimeRange}
* The video buffer's buffered time range
*/
videoBuffered() {
// no media source/source buffer or it isn't in the media sources
// source buffer list
if (!inSourceBuffers(this.mediaSource, this.videoBuffer)) {
return videojs.createTimeRange();
}
return this.videoBuffer.buffered ? this.videoBuffer.buffered :
videojs.createTimeRange();
}
/**
* Get a combined video/audio buffer's buffered timerange.
*
* @return {TimeRange}
* the combined time range
*/
buffered() {
const video = inSourceBuffers(this.mediaSource, this.videoBuffer) ? this.videoBuffer : null;
const audio = inSourceBuffers(this.mediaSource, this.audioBuffer) ? this.audioBuffer : null;
if (audio && !video) {
return this.audioBuffered();
}
if (video && !audio) {
return this.videoBuffered();
}
return bufferIntersection(this.audioBuffered(), this.videoBuffered());
}
/**
* Add a callback to the queue that will set duration on the mediaSource.
*
* @param {number} duration
* The duration to set
*
* @param {Function} [doneFn]
* function to run after duration has been set.
*/
setDuration(duration, doneFn = noop) {
// In order to set the duration on the media source, it's necessary to wait for all
// source buffers to no longer be updating. "If the updating attribute equals true on
// any SourceBuffer in sourceBuffers, then throw an InvalidStateError exception and
// abort these steps." (source: https://www.w3.org/TR/media-source/#attributes).
pushQueue({
type: 'mediaSource',
sourceUpdater: this,
action: actions.duration(duration),
name: 'duration',
doneFn
});
}
/**
* Add a mediaSource endOfStream call to the queue
*
* @param {Error} [error]
* Call endOfStream with an error
*
* @param {Function} [doneFn]
* A function that should be called when the
* endOfStream call has finished.
*/
endOfStream(error = null, doneFn = noop) {
if (typeof error !== 'string') {
error = undefined;
}
// In order to set the duration on the media source, it's necessary to wait for all
// source buffers to no longer be updating. "If the updating attribute equals true on
// any SourceBuffer in sourceBuffers, then throw an InvalidStateError exception and
// abort these steps." (source: https://www.w3.org/TR/media-source/#attributes).
pushQueue({
type: 'mediaSource',
sourceUpdater: this,
action: actions.endOfStream(error),
name: 'endOfStream',
doneFn
});
}
/**
* Queue an update to remove a time range from the buffer.
*
* @param {number} start where to start the removal
* @param {number} end where to end the removal
* @param {Function} [done=noop] optional callback to be executed when the remove
* operation is complete
* @see http://www.w3.org/TR/media-source/#widl-SourceBuffer-remove-void-double-start-unrestricted-double-end
*/
removeAudio(start, end, done = noop) {
if (!this.audioBuffered().length || this.audioBuffered().end(0) === 0) {
done();
return;
}
pushQueue({
type: 'audio',
sourceUpdater: this,
action: actions.remove(start, end),
doneFn: done,
name: 'remove'
});
}
/**
* Queue an update to remove a time range from the buffer.
*
* @param {number} start where to start the removal
* @param {number} end where to end the removal
* @param {Function} [done=noop] optional callback to be executed when the remove
* operation is complete
* @see http://www.w3.org/TR/media-source/#widl-SourceBuffer-remove-void-double-start-unrestricted-double-end
*/
removeVideo(start, end, done = noop) {
if (!this.videoBuffered().length || this.videoBuffered().end(0) === 0) {
done();
return;
}
pushQueue({
type: 'video',
sourceUpdater: this,
action: actions.remove(start, end),
doneFn: done,
name: 'remove'
});
}
/**
* Whether the underlying sourceBuffer is updating or not
*
* @return {boolean} the updating status of the SourceBuffer
*/
updating() {
// the audio/video source buffer is updating
if (updating('audio', this) || updating('video', this)) {
return true;
}
return false;
}
/**
* Set/get the timestampoffset on the audio SourceBuffer
*
* @return {number} the timestamp offset
*/
audioTimestampOffset(offset) {
if (typeof offset !== 'undefined' &&
this.audioBuffer &&
// no point in updating if it's the same
this.audioTimestampOffset_ !== offset) {
pushQueue({
type: 'audio',
sourceUpdater: this,
action: actions.timestampOffset(offset),
name: 'timestampOffset'
});
this.audioTimestampOffset_ = offset;
}
return this.audioTimestampOffset_;
}
/**
* Set/get the timestampoffset on the video SourceBuffer
*
* @return {number} the timestamp offset
*/
videoTimestampOffset(offset) {
if (typeof offset !== 'undefined' &&
this.videoBuffer &&
// no point in updating if it's the same
this.videoTimestampOffset_ !== offset) {
pushQueue({
type: 'video',
sourceUpdater: this,
action: actions.timestampOffset(offset),
name: 'timestampOffset'
});
this.videoTimestampOffset_ = offset;
}
return this.videoTimestampOffset_;
}
/**
* Add a function to the queue that will be called
* when it is its turn to run in the audio queue.
*
* @param {Function} callback
* The callback to queue.
*/
audioQueueCallback(callback) {
if (!this.audioBuffer) {
return;
}
pushQueue({
type: 'audio',
sourceUpdater: this,
action: actions.callback(callback),
name: 'callback'
});
}
/**
* Add a function to the queue that will be called
* when it is its turn to run in the video queue.
*
* @param {Function} callback
* The callback to queue.
*/
videoQueueCallback(callback) {
if (!this.videoBuffer) {
return;
}
pushQueue({
type: 'video',
sourceUpdater: this,
action: actions.callback(callback),
name: 'callback'
});
}
/**
* dispose of the source updater and the underlying sourceBuffer
*/
dispose() {
this.trigger('dispose');
bufferTypes.forEach((type) => {
this.abort(type);
if (this.canRemoveSourceBuffer()) {
this.removeSourceBuffer(type);
} else {
this[`${type}QueueCallback`](() => cleanupBuffer(type, this));
}
});
this.videoAppendQueued_ = false;
this.delayedAudioAppendQueue_.length = 0;
if (this.sourceopenListener_) {
this.mediaSource.removeEventListener('sourceopen', this.sourceopenListener_);
}
this.off();
}
}
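
A short sketch of the intended SourceUpdater call pattern, assuming transmuxed segment bytes and a segmentInfo object produced elsewhere (codec strings are examples):

import window from 'global/window';
import SourceUpdater from './source-updater';

const mediaSource = new window.MediaSource();
const sourceUpdater = new SourceUpdater(mediaSource);

// deferred internally until the media source fires 'sourceopen'
sourceUpdater.createSourceBuffers({
video: 'avc1.4d400d',
audio: 'mp4a.40.2'
});

// queued append; the callback runs on the source buffer's 'updateend';
// videoBytes (a Uint8Array) and segmentInfo are assumed to exist
sourceUpdater.appendBuffer({ type: 'video', bytes: videoBytes, segmentInfo }, (error) => {
// error is the last 'error' event seen on the video buffer, if any
});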


@@ -1,521 +0,0 @@
/**
* @file sync-controller.js
*/
import {sumDurations} from './playlist';
import videojs from 'video.js';
import logger from './util/logger';
export const syncPointStrategies = [
// Stategy "VOD": Handle the VOD-case where the sync-point is *always*
// the equivalence display-time 0 === segment-index 0
{
name: 'VOD',
run: (syncController, playlist, duration, currentTimeline, currentTime) => {
if (duration !== Infinity) {
const syncPoint = {
time: 0,
segmentIndex: 0
};
return syncPoint;
}
return null;
}
},
// Stategy "ProgramDateTime": We have a program-date-time tag in this playlist
{
name: 'ProgramDateTime',
run: (syncController, playlist, duration, currentTimeline, currentTime) => {
if (!syncController.datetimeToDisplayTime) {
return null;
}
const segments = playlist.segments || [];
let syncPoint = null;
let lastDistance = null;
currentTime = currentTime || 0;
for (let i = 0; i < segments.length; i++) {
const segment = segments[i];
if (segment.dateTimeObject) {
const segmentTime = segment.dateTimeObject.getTime() / 1000;
const segmentStart = segmentTime + syncController.datetimeToDisplayTime;
const distance = Math.abs(currentTime - segmentStart);
// Once the distance begins to increase, or if distance is 0, we have passed
// currentTime and can stop looking for better candidates
if (lastDistance !== null && (distance === 0 || lastDistance < distance)) {
break;
}
lastDistance = distance;
syncPoint = {
time: segmentStart,
segmentIndex: i
};
}
}
return syncPoint;
}
},
// Stategy "Segment": We have a known time mapping for a timeline and a
// segment in the current timeline with timing data
{
name: 'Segment',
run: (syncController, playlist, duration, currentTimeline, currentTime) => {
const segments = playlist.segments || [];
let syncPoint = null;
let lastDistance = null;
currentTime = currentTime || 0;
for (let i = 0; i < segments.length; i++) {
const segment = segments[i];
if (segment.timeline === currentTimeline &&
typeof segment.start !== 'undefined') {
const distance = Math.abs(currentTime - segment.start);
// Once the distance begins to increase, we have passed
// currentTime and can stop looking for better candidates
if (lastDistance !== null && lastDistance < distance) {
break;
}
if (!syncPoint || lastDistance === null || lastDistance >= distance) {
lastDistance = distance;
syncPoint = {
time: segment.start,
segmentIndex: i
};
}
}
}
return syncPoint;
}
},
// Stategy "Discontinuity": We have a discontinuity with a known
// display-time
{
name: 'Discontinuity',
run: (syncController, playlist, duration, currentTimeline, currentTime) => {
let syncPoint = null;
currentTime = currentTime || 0;
if (playlist.discontinuityStarts && playlist.discontinuityStarts.length) {
let lastDistance = null;
for (let i = 0; i < playlist.discontinuityStarts.length; i++) {
const segmentIndex = playlist.discontinuityStarts[i];
const discontinuity = playlist.discontinuitySequence + i + 1;
const discontinuitySync = syncController.discontinuities[discontinuity];
if (discontinuitySync) {
const distance = Math.abs(currentTime - discontinuitySync.time);
// Once the distance begins to increase, we have passed
// currentTime and can stop looking for better candidates
if (lastDistance !== null && lastDistance < distance) {
break;
}
if (!syncPoint || lastDistance === null || lastDistance >= distance) {
lastDistance = distance;
syncPoint = {
time: discontinuitySync.time,
segmentIndex
};
}
}
}
}
return syncPoint;
}
},
// Stategy "Playlist": We have a playlist with a known mapping of
// segment index to display time
{
name: 'Playlist',
run: (syncController, playlist, duration, currentTimeline, currentTime) => {
if (playlist.syncInfo) {
const syncPoint = {
time: playlist.syncInfo.time,
segmentIndex: playlist.syncInfo.mediaSequence - playlist.mediaSequence
};
return syncPoint;
}
return null;
}
}
];
export default class SyncController extends videojs.EventTarget {
constructor(options = {}) {
super();
// ...for syncing across variants
this.timelines = [];
this.discontinuities = [];
this.datetimeToDisplayTime = null;
this.logger_ = logger('SyncController');
}
/**
* Find a sync-point for the playlist specified
*
* A sync-point is defined as a known mapping from display-time to
* a segment-index in the current playlist.
*
* @param {Playlist} playlist
* The playlist that needs a sync-point
* @param {number} duration
* Duration of the MediaSource (Infinite if playing a live source)
* @param {number} currentTimeline
* The last timeline from which a segment was loaded
* @param {number} currentTime
* Current player time, used to select the sync-point closest to it
* @return {Object}
* A sync-point object
*/
getSyncPoint(playlist, duration, currentTimeline, currentTime) {
const syncPoints = this.runStrategies_(
playlist,
duration,
currentTimeline,
currentTime
);
if (!syncPoints.length) {
// Signal that we need to attempt to get a sync-point manually
// by fetching a segment in the playlist and constructing
// a sync-point from that information
return null;
}
// Now find the sync-point that is closest to the currentTime because
// that should result in the most accurate guess about which segment
// to fetch
return this.selectSyncPoint_(syncPoints, { key: 'time', value: currentTime });
}
/**
* Calculate the amount of time that has expired off the playlist during playback
*
* @param {Playlist} playlist
* Playlist object to calculate expired from
* @param {number} duration
* Duration of the MediaSource (Infinity if playing a live source)
* @return {number|null}
* The amount of time that has expired off the playlist during playback. Null
* if no sync-points for the playlist can be found.
*/
getExpiredTime(playlist, duration) {
if (!playlist || !playlist.segments) {
return null;
}
const syncPoints = this.runStrategies_(
playlist,
duration,
playlist.discontinuitySequence,
0
);
// Without sync-points, there is not enough information to determine the expired time
if (!syncPoints.length) {
return null;
}
const syncPoint = this.selectSyncPoint_(syncPoints, {
key: 'segmentIndex',
value: 0
});
// If the sync-point is beyond the start of the playlist, we want to subtract the
// duration from index 0 to syncPoint.segmentIndex instead of adding.
if (syncPoint.segmentIndex > 0) {
syncPoint.time *= -1;
}
return Math.abs(syncPoint.time + sumDurations(playlist, syncPoint.segmentIndex, 0));
}
/**
* Runs each sync-point strategy and returns a list of sync-points returned by the
* strategies
*
* @private
* @param {Playlist} playlist
* The playlist that needs a sync-point
* @param {number} duration
* Duration of the MediaSource (Infinity if playing a live source)
* @param {number} currentTimeline
* The last timeline from which a segment was loaded
* @return {Array}
* A list of sync-point objects
*/
runStrategies_(playlist, duration, currentTimeline, currentTime) {
const syncPoints = [];
// Try to find a sync-point by running each of the strategies...
for (let i = 0; i < syncPointStrategies.length; i++) {
const strategy = syncPointStrategies[i];
const syncPoint = strategy.run(
this,
playlist,
duration,
currentTimeline,
currentTime
);
if (syncPoint) {
syncPoint.strategy = strategy.name;
syncPoints.push({
strategy: strategy.name,
syncPoint
});
}
}
return syncPoints;
}
/**
* Selects the sync-point nearest the specified target
*
* @private
* @param {Array} syncPoints
* List of sync-points to select from
* @param {Object} target
* Object specifying the property and value we are targeting
* @param {string} target.key
* Specifies the property to target. Must be either 'time' or 'segmentIndex'
* @param {number} target.value
* The value to target for the specified key.
* @return {Object}
* The sync-point nearest the target
*/
selectSyncPoint_(syncPoints, target) {
let bestSyncPoint = syncPoints[0].syncPoint;
let bestDistance = Math.abs(syncPoints[0].syncPoint[target.key] - target.value);
let bestStrategy = syncPoints[0].strategy;
for (let i = 1; i < syncPoints.length; i++) {
const newDistance = Math.abs(syncPoints[i].syncPoint[target.key] - target.value);
if (newDistance < bestDistance) {
bestDistance = newDistance;
bestSyncPoint = syncPoints[i].syncPoint;
bestStrategy = syncPoints[i].strategy;
}
}
this.logger_(`syncPoint for [${target.key}: ${target.value}] chosen with strategy` +
` [${bestStrategy}]: [time:${bestSyncPoint.time},` +
` segmentIndex:${bestSyncPoint.segmentIndex}]`);
return bestSyncPoint;
}
/**
* Save any meta-data present on the segments when segments leave
* the live window to the playlist to allow for synchronization at the
* playlist level later.
*
* @param {Playlist} oldPlaylist - The previous active playlist
* @param {Playlist} newPlaylist - The updated and most current playlist
*/
saveExpiredSegmentInfo(oldPlaylist, newPlaylist) {
const mediaSequenceDiff = newPlaylist.mediaSequence - oldPlaylist.mediaSequence;
// When a segment expires from the playlist and it has a start time
// save that information as a possible sync-point reference in future
for (let i = mediaSequenceDiff - 1; i >= 0; i--) {
const lastRemovedSegment = oldPlaylist.segments[i];
if (lastRemovedSegment && typeof lastRemovedSegment.start !== 'undefined') {
newPlaylist.syncInfo = {
mediaSequence: oldPlaylist.mediaSequence + i,
time: lastRemovedSegment.start
};
this.logger_(`playlist refresh sync: [time:${newPlaylist.syncInfo.time},` +
` mediaSequence: ${newPlaylist.syncInfo.mediaSequence}]`);
this.trigger('syncinfoupdate');
break;
}
}
}
/**
* Save the mapping from playlist's ProgramDateTime to display. This should
* only ever happen once at the start of playback.
*
* @param {Playlist} playlist - The currently active playlist
*/
setDateTimeMapping(playlist) {
if (!this.datetimeToDisplayTime &&
playlist.segments &&
playlist.segments.length &&
playlist.segments[0].dateTimeObject) {
const playlistTimestamp = playlist.segments[0].dateTimeObject.getTime() / 1000;
this.datetimeToDisplayTime = -playlistTimestamp;
}
}
/**
* Calculates and saves timeline mappings, playlist sync info, and segment timing values
* based on the latest timing information.
*
* @param {Object} options
* Options object
* @param {SegmentInfo} options.segmentInfo
* The current active request information
* @param {boolean} options.shouldSaveTimelineMapping
* If there's a timeline change, determines if the timeline mapping should be
* saved in timelines.
*/
saveSegmentTimingInfo({ segmentInfo, shouldSaveTimelineMapping }) {
const didCalculateSegmentTimeMapping = this.calculateSegmentTimeMapping_(
segmentInfo,
segmentInfo.timingInfo,
shouldSaveTimelineMapping
);
if (didCalculateSegmentTimeMapping) {
this.saveDiscontinuitySyncInfo_(segmentInfo);
// If the playlist does not have sync information yet, record that information
// now with segment timing information
if (!segmentInfo.playlist.syncInfo) {
segmentInfo.playlist.syncInfo = {
mediaSequence: segmentInfo.playlist.mediaSequence + segmentInfo.mediaIndex,
time: segmentInfo.segment.start
};
}
}
}
timestampOffsetForTimeline(timeline) {
if (typeof this.timelines[timeline] === 'undefined') {
return null;
}
return this.timelines[timeline].time;
}
mappingForTimeline(timeline) {
if (typeof this.timelines[timeline] === 'undefined') {
return null;
}
return this.timelines[timeline].mapping;
}
/**
* Use the "media time" for a segment to generate a mapping to "display time" and
* save that display time to the segment.
*
* @private
* @param {SegmentInfo} segmentInfo
* The current active request information
* @param {Object} timingInfo
* The start and end time of the current segment in "media time"
* @param {boolean} shouldSaveTimelineMapping
* If there's a timeline change, determines if the timeline mapping should be
* saved in timelines.
* @return {boolean}
* Returns false if segment time mapping could not be calculated
*/
calculateSegmentTimeMapping_(segmentInfo, timingInfo, shouldSaveTimelineMapping) {
const segment = segmentInfo.segment;
let mappingObj = this.timelines[segmentInfo.timeline];
if (segmentInfo.timestampOffset !== null) {
mappingObj = {
time: segmentInfo.startOfSegment,
mapping: segmentInfo.startOfSegment - timingInfo.start
};
if (shouldSaveTimelineMapping) {
this.timelines[segmentInfo.timeline] = mappingObj;
this.trigger('timestampoffset');
this.logger_(`time mapping for timeline ${segmentInfo.timeline}: ` +
`[time: ${mappingObj.time}] [mapping: ${mappingObj.mapping}]`);
}
segment.start = segmentInfo.startOfSegment;
segment.end = timingInfo.end + mappingObj.mapping;
} else if (mappingObj) {
segment.start = timingInfo.start + mappingObj.mapping;
segment.end = timingInfo.end + mappingObj.mapping;
} else {
return false;
}
return true;
}
/**
* Each time we have a discontinuity in the playlist, attempt to calculate the
* location, in display time, of the start of the discontinuity and save that. We
* also save an accuracy value so that we keep the values with the most accuracy
* (closest to 0).
*
* @private
* @param {SegmentInfo} segmentInfo - The current active request information
*/
saveDiscontinuitySyncInfo_(segmentInfo) {
const playlist = segmentInfo.playlist;
const segment = segmentInfo.segment;
// If the current segment is a discontinuity then we know exactly where
// the range starts, and its accuracy is 0 (greater accuracy values
// mean more approximation)
if (segment.discontinuity) {
this.discontinuities[segment.timeline] = {
time: segment.start,
accuracy: 0
};
} else if (playlist.discontinuityStarts && playlist.discontinuityStarts.length) {
// Search for future discontinuities that we can provide better timing
// information for and save that information for sync purposes
for (let i = 0; i < playlist.discontinuityStarts.length; i++) {
const segmentIndex = playlist.discontinuityStarts[i];
const discontinuity = playlist.discontinuitySequence + i + 1;
const mediaIndexDiff = segmentIndex - segmentInfo.mediaIndex;
const accuracy = Math.abs(mediaIndexDiff);
if (!this.discontinuities[discontinuity] ||
this.discontinuities[discontinuity].accuracy > accuracy) {
let time;
if (mediaIndexDiff < 0) {
time = segment.start - sumDurations(
playlist,
segmentInfo.mediaIndex,
segmentIndex
);
} else {
time = segment.end + sumDurations(
playlist,
segmentInfo.mediaIndex + 1,
segmentIndex
);
}
this.discontinuities[discontinuity] = {
time,
accuracy
};
}
}
}
}
dispose() {
this.trigger('dispose');
this.off();
}
}
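
A worked example of the "Playlist" strategy above, assuming a live rendition whose sync info was previously saved by saveExpiredSegmentInfo (all numbers are illustrative):

import SyncController from './sync-controller';

const syncController = new SyncController();
const playlist = {
mediaSequence: 100,
// a segment that has since expired started at display-time 200
syncInfo: { mediaSequence: 98, time: 200 },
segments: []
};

// a duration of Infinity rules out "VOD"; with no date-time mapping, segment
// timing, or discontinuities, only "Playlist" yields a sync-point:
// { time: 200, segmentIndex: -2 } (98 - 100)
const syncPoint = syncController.getSyncPoint(playlist, Infinity, 0, 210);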


@@ -1,48 +0,0 @@
import videojs from 'video.js';
/**
* The TimelineChangeController acts as a source of timeline-change events that
* segment loaders can listen to, and keeps track of the latest and pending
* timeline changes. This is useful to ensure proper
* sync, as each loader may need to make a consideration for what timeline the other
* loader is on before making changes which could impact the other loader's media.
*
* @class TimelineChangeController
* @extends videojs.EventTarget
*/
export default class TimelineChangeController extends videojs.EventTarget {
constructor() {
super();
this.pendingTimelineChanges_ = {};
this.lastTimelineChanges_ = {};
}
clearPendingTimelineChange(type) {
this.pendingTimelineChanges_[type] = null;
this.trigger('pendingtimelinechange');
}
pendingTimelineChange({ type, from, to }) {
if (typeof from === 'number' && typeof to === 'number') {
this.pendingTimelineChanges_[type] = { type, from, to };
this.trigger('pendingtimelinechange');
}
return this.pendingTimelineChanges_[type];
}
lastTimelineChange({ type, from, to }) {
if (typeof from === 'number' && typeof to === 'number') {
this.lastTimelineChanges_[type] = { type, from, to };
delete this.pendingTimelineChanges_[type];
this.trigger('timelinechange');
}
return this.lastTimelineChanges_[type];
}
dispose() {
this.trigger('dispose');
this.pendingTimelineChanges_ = {};
this.lastTimelineChanges_ = {};
this.off();
}
}
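
A sketch of how two loaders might coordinate through this controller; 'main' is the loader type used for the muxed/video loader elsewhere in this project:

import TimelineChangeController from './timeline-change-controller';

const timelineChangeController = new TimelineChangeController();

// the main loader announces it is about to cross from timeline 0 to 1
timelineChangeController.pendingTimelineChange({ type: 'main', from: 0, to: 1 });

// once the append on the new timeline completes, record it; this clears the
// pending entry and fires 'timelinechange'
timelineChangeController.lastTimelineChange({ type: 'main', from: 0, to: 1 });

// without numeric from/to these calls act as getters another loader can poll
timelineChangeController.lastTimelineChange({ type: 'main' }); // { type: 'main', from: 0, to: 1 }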


@@ -1,433 +0,0 @@
/* global self */
/**
* @file transmuxer-worker.js
*/
/**
* videojs-contrib-media-sources
*
* Copyright (c) 2015 Brightcove
* All rights reserved.
*
* Handles communication between the browser-world and the mux.js
* transmuxer running inside of a WebWorker by exposing a simple
* message-based interface to a Transmuxer object.
*/
import {Transmuxer as FullMux} from 'mux.js/lib/mp4/transmuxer';
import PartialMux from 'mux.js/lib/partial/transmuxer';
import CaptionParser from 'mux.js/lib/mp4/caption-parser';
import {
secondsToVideoTs,
videoTsToSeconds
} from 'mux.js/lib/utils/clock';
const typeFromStreamString = (streamString) => {
if (streamString === 'AudioSegmentStream') {
return 'audio';
}
return streamString === 'VideoSegmentStream' ? 'video' : '';
};
/**
* Re-emits transmuxer events by converting them into messages to the
* world outside the worker.
*
* @param {Object} transmuxer the transmuxer to wire events on
* @private
*/
const wireFullTransmuxerEvents = function(self, transmuxer) {
transmuxer.on('data', function(segment) {
// transfer ownership of the underlying ArrayBuffer
// instead of doing a copy to save memory
// ArrayBuffers are transferable but generic TypedArrays are not
// @link https://developer.mozilla.org/en-US/docs/Web/API/Web_Workers_API/Using_web_workers#Passing_data_by_transferring_ownership_(transferable_objects)
const initArray = segment.initSegment;
segment.initSegment = {
data: initArray.buffer,
byteOffset: initArray.byteOffset,
byteLength: initArray.byteLength
};
const typedArray = segment.data;
segment.data = typedArray.buffer;
self.postMessage({
action: 'data',
segment,
byteOffset: typedArray.byteOffset,
byteLength: typedArray.byteLength
}, [segment.data]);
});
transmuxer.on('done', function(data) {
self.postMessage({ action: 'done' });
});
transmuxer.on('gopInfo', function(gopInfo) {
self.postMessage({
action: 'gopInfo',
gopInfo
});
});
transmuxer.on('videoSegmentTimingInfo', function(timingInfo) {
const videoSegmentTimingInfo = {
start: {
decode: videoTsToSeconds(timingInfo.start.dts),
presentation: videoTsToSeconds(timingInfo.start.pts)
},
end: {
decode: videoTsToSeconds(timingInfo.end.dts),
presentation: videoTsToSeconds(timingInfo.end.pts)
},
baseMediaDecodeTime: videoTsToSeconds(timingInfo.baseMediaDecodeTime)
};
if (timingInfo.prependedContentDuration) {
videoSegmentTimingInfo.prependedContentDuration = videoTsToSeconds(timingInfo.prependedContentDuration);
}
self.postMessage({
action: 'videoSegmentTimingInfo',
videoSegmentTimingInfo
});
});
transmuxer.on('id3Frame', function(id3Frame) {
self.postMessage({
action: 'id3Frame',
id3Frame
});
});
transmuxer.on('caption', function(caption) {
self.postMessage({
action: 'caption',
caption
});
});
transmuxer.on('trackinfo', function(trackInfo) {
self.postMessage({
action: 'trackinfo',
trackInfo
});
});
transmuxer.on('audioTimingInfo', function(audioTimingInfo) {
// convert to video TS since we prioritize video time over audio
self.postMessage({
action: 'audioTimingInfo',
audioTimingInfo: {
start: videoTsToSeconds(audioTimingInfo.start),
end: videoTsToSeconds(audioTimingInfo.end)
}
});
});
transmuxer.on('videoTimingInfo', function(videoTimingInfo) {
self.postMessage({
action: 'videoTimingInfo',
videoTimingInfo: {
start: videoTsToSeconds(videoTimingInfo.start),
end: videoTsToSeconds(videoTimingInfo.end)
}
});
});
};
const wirePartialTransmuxerEvents = function(self, transmuxer) {
transmuxer.on('data', function(event) {
// transfer ownership of the underlying ArrayBuffer
// instead of doing a copy to save memory
// ArrayBuffers are transferable but generic TypedArrays are not
// @link https://developer.mozilla.org/en-US/docs/Web/API/Web_Workers_API/Using_web_workers#Passing_data_by_transferring_ownership_(transferable_objects)
const initSegment = {
data: event.data.track.initSegment.buffer,
byteOffset: event.data.track.initSegment.byteOffset,
byteLength: event.data.track.initSegment.byteLength
};
const boxes = {
data: event.data.boxes.buffer,
byteOffset: event.data.boxes.byteOffset,
byteLength: event.data.boxes.byteLength
};
const segment = {
boxes,
initSegment,
type: event.type,
sequence: event.data.sequence
};
if (typeof event.data.videoFrameDts !== 'undefined') {
segment.videoFrameDtsTime = videoTsToSeconds(event.data.videoFrameDts);
}
if (typeof event.data.videoFramePts !== 'undefined') {
segment.videoFramePtsTime = videoTsToSeconds(event.data.videoFramePts);
}
self.postMessage({
action: 'data',
segment
}, [ segment.boxes.data, segment.initSegment.data ]);
});
transmuxer.on('id3Frame', function(id3Frame) {
self.postMessage({
action: 'id3Frame',
id3Frame
});
});
transmuxer.on('caption', function(caption) {
self.postMessage({
action: 'caption',
caption
});
});
transmuxer.on('done', function(data) {
self.postMessage({
action: 'done',
type: typeFromStreamString(data)
});
});
transmuxer.on('partialdone', function(data) {
self.postMessage({
action: 'partialdone',
type: typeFromStreamString(data)
});
});
transmuxer.on('endedsegment', function(data) {
self.postMessage({
action: 'endedSegment',
type: typeFromStreamString(data)
});
});
transmuxer.on('trackinfo', function(trackInfo) {
self.postMessage({ action: 'trackinfo', trackInfo });
});
transmuxer.on('audioTimingInfo', function(audioTimingInfo) {
// This can happen if flush is called when no
// audio has been processed. This should be an
// unusual case, but if it does occur the null
// start is forwarded untouched rather than being
// converted into a bogus time
if (audioTimingInfo.start === null) {
self.postMessage({
action: 'audioTimingInfo',
audioTimingInfo
});
return;
}
// convert to video TS since we prioritize video time over audio
const timingInfoInSeconds = {
start: videoTsToSeconds(audioTimingInfo.start)
};
if (audioTimingInfo.end) {
timingInfoInSeconds.end = videoTsToSeconds(audioTimingInfo.end);
}
self.postMessage({
action: 'audioTimingInfo',
audioTimingInfo: timingInfoInSeconds
});
});
transmuxer.on('videoTimingInfo', function(videoTimingInfo) {
const timingInfoInSeconds = {
start: videoTsToSeconds(videoTimingInfo.start)
};
if (videoTimingInfo.end) {
timingInfoInSeconds.end = videoTsToSeconds(videoTimingInfo.end);
}
self.postMessage({
action: 'videoTimingInfo',
videoTimingInfo: timingInfoInSeconds
});
});
};
/**
* All incoming messages route through this hash. If no function exists
* to handle an incoming message, then we ignore the message.
*
* @class MessageHandlers
* @param {Object} options the options to initialize with
*/
class MessageHandlers {
constructor(self, options) {
this.options = options || {};
this.self = self;
this.init();
}
/**
* initialize our web worker and wire all the events.
*/
init() {
if (this.transmuxer) {
this.transmuxer.dispose();
}
this.transmuxer = this.options.handlePartialData ?
new PartialMux(this.options) :
new FullMux(this.options);
if (this.options.handlePartialData) {
wirePartialTransmuxerEvents(this.self, this.transmuxer);
} else {
wireFullTransmuxerEvents(this.self, this.transmuxer);
}
}
pushMp4Captions(data) {
if (!this.captionParser) {
this.captionParser = new CaptionParser();
this.captionParser.init();
}
const segment = new Uint8Array(data.data, data.byteOffset, data.byteLength);
const parsed = this.captionParser.parse(
segment,
data.trackIds,
data.timescales
);
this.self.postMessage({
action: 'mp4Captions',
captions: parsed && parsed.captions || [],
data: segment.buffer
}, [segment.buffer]);
}
clearAllMp4Captions() {
if (this.captionParser) {
this.captionParser.clearAllCaptions();
}
}
clearParsedMp4Captions() {
if (this.captionParser) {
this.captionParser.clearParsedCaptions();
}
}
/**
* Adds data (a ts segment) to the start of the transmuxer pipeline for
* processing.
*
* @param {ArrayBuffer} data data to push into the muxer
*/
push(data) {
// Cast array buffer to correct type for transmuxer
const segment = new Uint8Array(data.data, data.byteOffset, data.byteLength);
this.transmuxer.push(segment);
}
/**
* Recreate the transmuxer so that the next segment added via `push`
* starts with a fresh transmuxer.
*/
reset() {
this.transmuxer.reset();
}
/**
* Set the value that will be used as the `baseMediaDecodeTime` time for the
* next segment pushed in. Subsequent segments will have their `baseMediaDecodeTime`
* set relative to the first based on the PTS values.
*
* @param {Object} data used to set the timestamp offset in the muxer
*/
setTimestampOffset(data) {
const timestampOffset = data.timestampOffset || 0;
this.transmuxer.setBaseMediaDecodeTime(Math.round(secondsToVideoTs(timestampOffset)));
}
setAudioAppendStart(data) {
this.transmuxer.setAudioAppendStart(Math.ceil(secondsToVideoTs(data.appendStart)));
}
setRemux(data) {
this.transmuxer.setRemux(data.remux);
}
/**
* Forces the pipeline to finish processing the last segment and emit its
* results.
*
* @param {Object} data event data, not really used
*/
flush(data) {
this.transmuxer.flush();
// transmuxed done action is fired after both audio/video pipelines are flushed
self.postMessage({
action: 'done',
type: 'transmuxed'
});
}
partialFlush(data) {
this.transmuxer.partialFlush();
// transmuxed partialdone action is fired after both audio/video pipelines are flushed
self.postMessage({
action: 'partialdone',
type: 'transmuxed'
});
}
endTimeline() {
this.transmuxer.endTimeline();
// transmuxed endedtimeline action is fired after both audio/video pipelines end their
// timelines
self.postMessage({
action: 'endedtimeline',
type: 'transmuxed'
});
}
alignGopsWith(data) {
this.transmuxer.alignGopsWith(data.gopsToAlignWith.slice());
}
}
/**
* Our web worker interface so that things can talk to mux.js
* that will be running in a web worker. The scope is passed to this by
* webworkify.
*
* @param {Object} self the scope for the web worker
*/
const TransmuxerWorker = function(self) {
self.onmessage = function(event) {
if (event.data.action === 'init' && event.data.options) {
this.messageHandlers = new MessageHandlers(self, event.data.options);
return;
}
if (!this.messageHandlers) {
this.messageHandlers = new MessageHandlers(self);
}
if (event.data && event.data.action && event.data.action !== 'init') {
if (this.messageHandlers[event.data.action]) {
this.messageHandlers[event.data.action](event.data);
}
}
};
};
export default new TransmuxerWorker(self);
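
The worker above is driven entirely by postMessage actions; a minimal sketch of the conversation from the main thread (worker construction via webworkify is assumed, as noted in the comment above):

// main thread (sketch); `worker` and `segmentBytes` (a Uint8Array) are assumed
worker.postMessage({ action: 'init', options: { handlePartialData: false } });

const data = segmentBytes.buffer;
worker.postMessage({
action: 'push',
data,
byteOffset: segmentBytes.byteOffset,
byteLength: segmentBytes.byteLength
}, [data]); // transfer the ArrayBuffer instead of copying it

worker.postMessage({ action: 'flush' });

worker.onmessage = ({ data: message }) => {
if (message.action === 'data') {
// one remuxed fMP4 segment arrives as message.segment
}
if (message.action === 'done' && message.type === 'transmuxed') {
// both audio and video pipelines have flushed
}
};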

File diff suppressed because it is too large


@@ -1,102 +0,0 @@
/**
* @file - codecs.js - Handles tasks regarding codec strings, such as translating
* them between formats or into objects that can be examined.
*/
import {
translateLegacyCodec,
parseCodecs,
codecsFromDefault
} from '@videojs/vhs-utils/dist/codecs.js';
/**
* Returns a set of codec strings parsed from the playlist or the default
* codec strings if no codecs were specified in the playlist
*
* @param {Playlist} media the current media playlist
* @return {Object} an object with the video and audio codecs
*/
const getCodecs = function(media) {
// if the codecs were explicitly specified, use them instead of the
// defaults
const mediaAttributes = media.attributes || {};
if (mediaAttributes.CODECS) {
return parseCodecs(mediaAttributes.CODECS);
}
};
export const isMaat = (master, media) => {
const mediaAttributes = media.attributes || {};
return master && master.mediaGroups && master.mediaGroups.AUDIO &&
mediaAttributes.AUDIO &&
master.mediaGroups.AUDIO[mediaAttributes.AUDIO];
};
export const isMuxed = (master, media) => {
if (!isMaat(master, media)) {
return true;
}
const mediaAttributes = media.attributes || {};
const audioGroup = master.mediaGroups.AUDIO[mediaAttributes.AUDIO];
for (const groupId in audioGroup) {
// If an audio group has a URI (the case for HLS, as HLS will use external playlists),
// or there are listed playlists (the case for DASH, as the manifest will have already
// provided all of the details necessary to generate the audio playlist, as opposed to
// HLS' externally requested playlists), then the content is demuxed.
if (!audioGroup[groupId].uri && !audioGroup[groupId].playlists) {
return true;
}
}
return false;
};
/**
* Calculates the codec strings for a working configuration of
* SourceBuffers to play variant streams in a master playlist. If
* there is no possible working configuration, an empty object will be
* returned.
*
* @param {Object} master the m3u8 object for the master playlist
* @param {Object} media the m3u8 object for the variant playlist
* @return {Object} the codec strings.
*
* @private
*/
export const codecsForPlaylist = function(master, media) {
const mediaAttributes = media.attributes || {};
const codecInfo = getCodecs(media) || {};
// HLS with multiple-audio tracks must always get an audio codec.
// Put another way, there is no way to have a video-only multiple-audio HLS!
if (isMaat(master, media) && !codecInfo.audio) {
if (!isMuxed(master, media)) {
// It is possible for codecs to be specified on the audio media group playlist but
// not on the rendition playlist. This is mostly the case for DASH, where audio and
// video are always separate (and separately specified).
const defaultCodecs = codecsFromDefault(master, mediaAttributes.AUDIO);
if (defaultCodecs) {
codecInfo.audio = defaultCodecs.audio;
}
}
}
const codecs = {};
if (codecInfo.video) {
codecs.video = translateLegacyCodec(`${codecInfo.video.type}${codecInfo.video.details}`);
}
if (codecInfo.audio) {
codecs.audio = translateLegacyCodec(`${codecInfo.audio.type}${codecInfo.audio.details}`);
}
return codecs;
};
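
A small example of codecsForPlaylist with CODECS declared on the variant; the exact parsing is delegated to vhs-utils, so treat the output shape as approximate:

import { codecsForPlaylist } from './util/codecs'; // assumed path

const master = { mediaGroups: { AUDIO: {} } };
const media = { attributes: { CODECS: 'avc1.4d400d,mp4a.40.2' } };

// no AUDIO attribute on the variant, so isMaat is false and no default audio
// codec lookup happens; the declared codecs pass straight through
codecsForPlaylist(master, media);
// -> { video: 'avc1.4d400d', audio: 'mp4a.40.2' }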


@@ -1,85 +0,0 @@
import {detectContainerForBytes, getId3Offset} from '@videojs/vhs-utils/dist/containers';
import {stringToBytes, concatTypedArrays} from '@videojs/vhs-utils/dist/byte-helpers';
import {callbackWrapper} from '../xhr';
// calls back if the request has readyState DONE,
// which only happens once the request is complete.
const callbackOnCompleted = (request, cb) => {
if (request.readyState === 4) {
return cb();
}
return;
};
const containerRequest = (uri, xhr, cb) => {
let bytes = [];
let id3Offset;
let finished = false;
const endRequestAndCallback = function(err, req, type, _bytes) {
req.abort();
finished = true;
return cb(err, req, type, _bytes);
};
const progressListener = function(error, request) {
if (finished) {
return;
}
if (error) {
return endRequestAndCallback(error, request, '', bytes);
}
// grab the new part of content that was just downloaded
const newPart = request.responseText.substring(
bytes && bytes.byteLength || 0,
request.responseText.length
);
// add that onto bytes
bytes = concatTypedArrays(bytes, stringToBytes(newPart, true));
id3Offset = id3Offset || getId3Offset(bytes);
// we need at least 10 bytes to determine a type
// or we need at least two bytes after an id3Offset
if (bytes.length < 10 || (id3Offset && bytes.length < id3Offset + 2)) {
return callbackOnCompleted(request, () => endRequestAndCallback(error, request, '', bytes));
}
const type = detectContainerForBytes(bytes);
// if this looks like a ts segment but we don't have enough data
// to see the second sync byte, wait until we have enough data
// before declaring it ts
if (type === 'ts' && bytes.length < 188) {
return callbackOnCompleted(request, () => endRequestAndCallback(error, request, '', bytes));
}
// this may be an unsynced ts segment, so wait until
// we have 376 bytes before deciding there is no container
if (!type && bytes.length < 376) {
return callbackOnCompleted(request, () => endRequestAndCallback(error, request, '', bytes));
}
return endRequestAndCallback(null, request, type, bytes);
};
const options = {
uri,
beforeSend(request) {
// this forces the browser to pass the bytes to us unprocessed
request.overrideMimeType('text/plain; charset=x-user-defined');
request.addEventListener('progress', function({total, loaded}) {
return callbackWrapper(request, null, {statusCode: request.status}, progressListener);
});
}
};
const request = xhr(options, function(error, response) {
return callbackWrapper(request, error, response, progressListener);
});
return request;
};
export default containerRequest;
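
A sketch of the calling convention; `xhr` is assumed to be an xhr function with the (options, callback) shape this project uses elsewhere:

import containerRequest from './util/container-request'; // assumed path

containerRequest('https://example.com/init.bin', xhr, (err, request, type, bytes) => {
if (err) {
return;
}
// type is 'ts', 'mp4', '' (unknown), etc. per detectContainerForBytes; the
// request was aborted as soon as enough bytes arrived to decide
});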


@@ -1,119 +0,0 @@
import { ONE_SECOND_IN_TS } from 'mux.js/lib/utils/clock';
/**
* Returns a list of gops in the buffer that have a pts value of 3 seconds or more in
* front of current time.
*
* @param {Array} buffer
* The current buffer of gop information
* @param {number} currentTime
* The current time
* @param {Double} mapping
* Offset to map display time to stream presentation time
* @return {Array}
* List of gops considered safe to append over
*/
export const gopsSafeToAlignWith = (buffer, currentTime, mapping) => {
if (typeof currentTime === 'undefined' || currentTime === null || !buffer.length) {
return [];
}
// pts value for current time + 3 seconds to give a bit more wiggle room
const currentTimePts = Math.ceil((currentTime - mapping + 3) * ONE_SECOND_IN_TS);
let i;
for (i = 0; i < buffer.length; i++) {
if (buffer[i].pts > currentTimePts) {
break;
}
}
return buffer.slice(i);
};
/**
* Appends gop information (timing and byteLength) received by the transmuxer for the
* gops appended in the last call to appendBuffer
*
* @param {Array} buffer
* The current buffer of gop information
* @param {Array} gops
* List of new gop information
* @param {boolean} replace
* If true, replace the buffer with the new gop information. If false, append the
* new gop information to the buffer in the right location of time.
* @return {Array}
* Updated list of gop information
*/
export const updateGopBuffer = (buffer, gops, replace) => {
if (!gops.length) {
return buffer;
}
if (replace) {
// If we are in safe append mode, then completely overwrite the gop buffer
// with the most recent appended data. This will make sure that when appending
// future segments, we only try to align with gops that are both ahead of current
// time and in the last segment appended.
return gops.slice();
}
const start = gops[0].pts;
let i = 0;
for (i; i < buffer.length; i++) {
if (buffer[i].pts >= start) {
break;
}
}
return buffer.slice(0, i).concat(gops);
};
/**
* Removes gop information in buffer that overlaps with provided start and end
*
* @param {Array} buffer
* The current buffer of gop information
* @param {Double} start
* position to start the remove at
* @param {Double} end
* position to end the remove at
* @param {Double} mapping
* Offset to map display time to stream presentation time
*/
export const removeGopBuffer = (buffer, start, end, mapping) => {
const startPts = Math.ceil((start - mapping) * ONE_SECOND_IN_TS);
const endPts = Math.ceil((end - mapping) * ONE_SECOND_IN_TS);
const updatedBuffer = buffer.slice();
let i = buffer.length;
while (i--) {
if (buffer[i].pts <= endPts) {
break;
}
}
if (i === -1) {
// no removal because end of remove range is before start of buffer
return updatedBuffer;
}
let j = i + 1;
while (j--) {
if (buffer[j].pts <= startPts) {
break;
}
}
// clamp remove range start to 0 index
j = Math.max(j, 0);
updatedBuffer.splice(j, i - j + 1);
return updatedBuffer;
};
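
A worked example on a 90kHz MPEG-TS clock; only the pts field matters to these helpers, though real gop entries also carry dts and byteLength:

import { updateGopBuffer, removeGopBuffer } from './util/gops'; // assumed path

const ONE_SECOND_IN_TS = 90000;

let buffer = [];
buffer = updateGopBuffer(buffer, [{ pts: 0 }, { pts: ONE_SECOND_IN_TS }], false);
// appending overlapping info drops buffered gops at/after the first new pts
buffer = updateGopBuffer(buffer, [{ pts: ONE_SECOND_IN_TS }, { pts: 2 * ONE_SECOND_IN_TS }], false);
// -> pts values [0, 90000, 180000]

// removing display range [1s, 3s] with a mapping of 0 drops the last two gops
buffer = removeGopBuffer(buffer, 1, 3, 0);
// -> pts values [0]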


@@ -1,11 +0,0 @@
import videojs from 'video.js';
const logger = (source) => {
if (videojs.log.debug) {
return videojs.log.debug.bind(videojs, 'VHS:', `${source} >`);
}
return function() {};
};
export default logger;
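
Usage is one call per subsystem to create a tagged logger; output only appears when videojs debug logging is available:

import logger from './util/logger';

const log = logger('SegmentLoader');
log('buffered', '0-10');
// with videojs.log.debug available this prints: VHS: SegmentLoader > buffered 0-10
// otherwise the returned function is a no-op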


@@ -1 +0,0 @@
export default function noop() {}


@@ -1,58 +0,0 @@
import tsInspector from 'mux.js/lib/tools/ts-inspector.js';
import { ONE_SECOND_IN_TS } from 'mux.js/lib/utils/clock';
/**
* Probe an mpeg2-ts segment to determine the start time of the segment in its
* internal "media time," as well as whether it contains video and/or audio.
*
* @private
* @param {Uint8Array} bytes - segment bytes
* @param {number} [baseStartTime] - start time, in seconds, used as a reference
* when adjusting for timestamp rollover
* @return {Object} The start time of the current segment in "media time" as well as
* whether it contains video and/or audio
*/
export const probeTsSegment = (bytes, baseStartTime) => {
const timeInfo = tsInspector.inspect(bytes, baseStartTime * ONE_SECOND_IN_TS);
if (!timeInfo) {
return null;
}
const result = {
// each type's time info comes back as an array of 2 times, start and end
hasVideo: timeInfo.video && timeInfo.video.length === 2 || false,
hasAudio: timeInfo.audio && timeInfo.audio.length === 2 || false
};
if (result.hasVideo) {
result.videoStart = timeInfo.video[0].ptsTime;
}
if (result.hasAudio) {
result.audioStart = timeInfo.audio[0].ptsTime;
}
return result;
};
/**
* Combine all segments into a single Uint8Array
*
* @param {Object} segmentObj
* @return {Uint8Array} concatenated bytes
* @private
*/
export const concatSegments = (segmentObj) => {
let offset = 0;
let tempBuffer;
if (segmentObj.bytes) {
tempBuffer = new Uint8Array(segmentObj.bytes);
// combine the individual segments into one large typed-array
segmentObj.segments.forEach((segment) => {
tempBuffer.set(segment, offset);
offset += segment.byteLength;
});
}
return tempBuffer;
};
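
A sketch of both helpers; `tsBytes`, `part1`, and `part2` are Uint8Arrays assumed to have been downloaded elsewhere:

import { probeTsSegment, concatSegments } from './util/segment'; // assumed path

const probe = probeTsSegment(tsBytes, 0);
if (probe && probe.hasVideo) {
// probe.videoStart is the first video PTS, in seconds of "media time"
}

// stitch partial downloads back into one typed array; `bytes` is the total
// byte length the caller tracked while downloading
const full = concatSegments({
bytes: part1.byteLength + part2.byteLength,
segments: [part1, part2]
});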


@@ -1,41 +0,0 @@
const shallowEqual = function(a, b) {
// if both are undefined
// or one or the other is undefined
// they are not equal
if ((!a && !b) || (!a && b) || (a && !b)) {
return false;
}
// they are the same object and thus, equal
if (a === b) {
return true;
}
// sort keys so we can make sure they have
// all the same keys later.
const akeys = Object.keys(a).sort();
const bkeys = Object.keys(b).sort();
// different number of keys, not equal
if (akeys.length !== bkeys.length) {
return false;
}
for (let i = 0; i < akeys.length; i++) {
const key = akeys[i];
// different sorted keys, not equal
if (key !== bkeys[i]) {
return false;
}
// different values, not equal
if (a[key] !== b[key]) {
return false;
}
}
return true;
};
export default shallowEqual;
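
Note the deliberate edge cases: two missing objects are unequal, and nested values compare by reference:

import shallowEqual from './util/shallow-equal'; // assumed path

shallowEqual({ a: 1 }, { a: 1 }); // true
shallowEqual({ a: 1 }, { a: 1, b: 2 }); // false: different key counts
shallowEqual(undefined, undefined); // false by design (see above)
shallowEqual({ a: {} }, { a: {} }); // false: distinct nested references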


@@ -1,9 +0,0 @@
export const stringToArrayBuffer = (string) => {
const view = new Uint8Array(new ArrayBuffer(string.length));
for (let i = 0; i < string.length; i++) {
view[i] = string.charCodeAt(i);
}
return view.buffer;
};


@@ -1,2 +0,0 @@
export const uint8ToUtf8 = (uintArray) =>
decodeURIComponent(escape(String.fromCharCode.apply(null, uintArray)));
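
These two helpers round-trip ASCII and UTF-8 byte sequences cleanly; the module paths are assumptions:

import { stringToArrayBuffer } from './util/string-to-array-buffer';
import { uint8ToUtf8 } from './util/string';

const buffer = stringToArrayBuffer('key-data');
uint8ToUtf8(new Uint8Array(buffer)); // -> 'key-data'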

Some files were not shown because too many files have changed in this diff