
Output video from multiple webcams on one page
I had already used getUserMedia to capture sound from a microphone, so I assumed video would pose no problems either, yet problems surfaced anyway. That is, capturing the video stream itself was straightforward; outputting data from several sources on one page at the same time turned out to be trickier than expected.
So, let's start from the very beginning, namely capturing and outputting video from a single source. For that we will use the getUserMedia function ( Stream API ), which is supported in all decent modern browsers, except of course IE.
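In these browsers getUserMedia still lives behind vendor prefixes, so before anything else it is worth checking that the function actually exists. A minimal sketch of such a check (the rest of the article simply calls the webkit-prefixed version directly):
// Normalize the vendor-prefixed implementations into one function;
// if none of them exist (hello, IE), we end up with undefined.
navigator.getUserMedia = navigator.getUserMedia ||
    navigator.webkitGetUserMedia ||
    navigator.mozGetUserMedia;
if (!navigator.getUserMedia) {
    alert("getUserMedia is not supported in this browser");
}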
Explanations
- All the code examples below are written for angularjs , since that is what I am currently working with.
- All scripts target the Chrome and Opera browsers; the reason is explained below.
getUserMedia
To access a webcam you first have to ask the user for permission, and this is where getUserMedia enters the scene. It takes three arguments:
- constraints - here we specify what kind of data we want to access; it is covered in more detail below;
- successCallback - a function that receives a LocalMediaStream object, i.e. our stream from the camera;
- errorCallback - a function that fires if an error occurs while trying to capture the stream or if the user refuses to grant access to the device.
For output we will use a video element whose src attribute receives a Blob URL created from the LocalMediaStream object.
As a result, the easiest way to capture a stream would be:
// A snippet from a directive
navigator.webkitGetUserMedia({'video': true}, function (stream) {
var video = document.createElement("video");
video.src = window.URL.createObjectURL(stream);
video.controls = true;
video.play();
angular.element(document.querySelector('body')).append(video);
}, function (e) {
alert("Ошибка при доступе к камере!");
});
Here is what happens:
- We create a video element;
- Using the createObjectURL function we turn the LocalMediaStream object into a Blob URL and pass it as the source of the video element;
- We enable the controls and start playback;
- We append the newly created element to the page.
The minimal problem is solved: we have output the stream from one camera to our page. Now we need to get the streams from the rest of our cameras.
MediaStreamTrack
Naturally, while trying to solve my problem I turned for help to the MediaStreamTrack object, which is an interface for working with the streams of all multimedia devices the browser can reach. MediaStreamTrack is still a fairly rare beast and is found only in recent versions of Chrome , Opera and Firefox . So why do we need it? To obtain information about the data sources.
So we have found a thread to pull on. But as soon as I started to feel the joy of a dream come true, I realized I could not get hold of all the sources at once to output them. After a frantic search for a solution it turned out that in Chrome and Opera the MediaStreamTrack object has a getSources function, which is our salvation. As the name implies, this function returns information about all audio and video sources.
Well, let's find our cameras:
getMediaSources: function () {
    var mediaSources = [];
    // getSources is asynchronous, so mediaSources is only filled once the callback runs
    // (in the final module below this is wrapped in a promise)
    MediaStreamTrack.getSources(function (sources) {
        angular.forEach(sources, function (val) {
            if (val.kind === 'video') {
                mediaSources.push(val);
            }
        });
    });
}
The sources object that getSources hands us is an array of objects describing the data sources. Each of these objects contains the following fields:
- id - a unique source identifier generated by the browser;
- kind - the type of the source ( audio or video );
- label - the label of the device (source); in my case it was USB Video Device ;
- facing - as far as I understand, this parameter only matters on mobile platforms and indicates the front or rear camera (it takes two values: user for the front camera and environment for the rear camera).
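To get a feel for this data, here is a small sketch (not part of the module, just a console dump of the fields listed above):
// Print every detected source and its properties to the console
MediaStreamTrack.getSources(function (sources) {
    sources.forEach(function (source) {
        console.log(source.id, source.kind, source.label, source.facing);
    });
});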
Solution
So, let's sum up what we can do now. We can get a list of all sources with their identifiers, and we can capture data from them and output it. All that remains is to put it together, and we get what we were after.
The sequence of actions will be as follows:
- When the page loads, we detect all video sources with MediaStreamTrack.getSources ;
- We list the sources on the page. We do this because we will have to grant access permission for each camera anyway. This could be avoided if the page were served over https.
- When we click on any source in the list, we capture its data with getUserMedia , create a video element for it and display it. (If the same source is selected several times, a copy of the stream is simply created.)
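To make the flow concrete before we wrap it in an Angular module, here is a minimal plain-JS sketch of these three steps; the buttons and the click handler are purely illustrative, and the constraints format used here is explained right below:
// Step 1: find all video sources
MediaStreamTrack.getSources(function (sources) {
    sources.forEach(function (source) {
        if (source.kind !== 'video') { return; }
        // Step 2: list each source on the page as a button
        var button = document.createElement("button");
        button.textContent = source.label || source.id;
        button.onclick = function () {
            // Step 3: on click, capture this particular source and display it
            navigator.webkitGetUserMedia(
                {video: {optional: [{sourceId: source.id}]}},
                function (stream) {
                    var video = document.createElement("video");
                    video.src = window.URL.createObjectURL(stream);
                    video.play();
                    document.body.appendChild(video);
                },
                function () { alert("Error while getting the stream from the camera!"); }
            );
        };
        document.body.appendChild(button);
    });
});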
Before giving the final working example, let's return to the webkitGetUserMedia function, namely its first argument, constraints . The documentation says that the source types are passed there in the format:
{"video": true,"audio":true}
This is clearly not enough for us, since we at least need to pass the source identifier. It turns out that instead of this basic object you can pass a so-called constraints object, which lets us configure quite a few parameters, such as the frame rate and the resolution.
var constraints = {};
constraints.video = {
mandatory: {
minWidth: 640,
minHeight: 480,
minFrameRate: 30
},
optional: [
{
sourceId: sourceid
}
]
};
Our object is divided into two parts:
- mandatory - the mandatory restrictions for our video; if they cannot be met, an error is raised.
- optional - optional parameters that are applied to the stream when possible (i.e. if we say here that we want a frame rate of 60 instead of 30 and our camera can provide such a stream, we get what we want; if the camera cannot meet the condition, the video is output at 30 frames per second, matching the minFrameRate value in the mandatory block).
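For example, the 30-versus-60 fps case described above could look like this (a sketch; onStream and onError stand for whatever success and error callbacks you use):
var constraints = {
    video: {
        mandatory: {
            minFrameRate: 30       // an error is raised if even 30 fps cannot be provided
        },
        optional: [
            { minFrameRate: 60 },  // prefer 60 fps, silently ignored if the camera can't do it
            { sourceId: sourceid } // and, as before, bind the stream to a specific camera
        ]
    }
};
navigator.webkitGetUserMedia(constraints, onStream, onError);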
From the parameters that can be configured, I found these:
- frameRate - frame rate
- aspectRatio - aspect ratio
- minWidth - minimum width
- minHeight - minimum height
- sourceId - unique identifier of the source
- width
- height
Now everything is ready to write the final version of our module:
Module code
/**
* Created by abaddon on 11.09.14.
*/
/*global window, document, angular, MediaStreamTrack, console, navigator */
(function (w, d, an, mst, nav) {
"use strict";
angular.module("camersRoom", []).
value("$sectors", {}).
directive("ngVideoSector", ['$sectors', function ($sectors) {
return {
restrict: "A",
link: function (scope, elem, attr) {
$sectors[attr.ngVideoSector] = elem;
}
};
}]).
directive("ngRoomPlace", ["$room", "$sectors", "$compile", function ($room, $sectors, $compile) {
return {
restrict: "A",
controller: function ($scope, $element) {
this.createViews = function (html) {
var videoBlock = $sectors.rec, content;
videoBlock.append(html);
content = videoBlock.contents();
$compile(content)($scope);
};
},
link: function (scope, elem, attr, cont) {
if ($room.support) {
var mediaSources = [], html, count;
$room.getMediaSources().then(function (sources) {
an.forEach(sources, function (val, key) {
if (sources[key].kind === 'video') {/*select only the video devices*/
mediaSources.push(val);
}
});
count = mediaSources.length;
if (count) {
html = $room.createSourcePreview(mediaSources);
cont.createViews(html);
} else {
scope.error = {
show: true,
text: "Ну для работы надо хоть одну камеру подключить!"
};
}
/*create video block views.*/
});
} else {
scope.error = {
show: true,
text: "Очень жаль, но ваш браузер никуда не годится. Откройте Google Chrome"
};
}
}
};
}]).
factory("$room", ["$q", "$sectors", function ($q, $sectors) {
var Room = function () {
var methods = {
get support() {
return !!this.media;
},
set support(value) {
this.media = value;
}
};
an.extend(this, methods);
this.support = mst.getSources;
};
Room.prototype = {
_createVideoElement: function (stream) {
var video = d.createElement("video");
video.src = w.URL.createObjectURL(stream);
video.controls = true;
video.play();
$sectors.place.append(video);
},
getMediaSources: function () {/*get all media sources (audio and video devices)*/
var defer = $q.defer();
mst.getSources(function (sources) {
defer.resolve(sources);
});
return defer.promise;
},
createSourcePreview: function (mediaSources) {
var htmlString = '', i = 0;
an.forEach(mediaSources, function (val) {
i++;
htmlString += ''; /* the preview markup for source number i is built here; the template string is not reproduced in the article, see the demo/GitHub */
});
return htmlString;
},
addVideoPlace: function (sourceid) {
var constraints = {};
constraints.video = {
mandatory: {
minWidth: 640,
minHeight: 480,
minFrameRate: 30
},
optional: [
{ sourceId: sourceid }
]
};
nav.webkitGetUserMedia(constraints, function (stream) {
this._createVideoElement(stream);
}.bind(this), function (e) {
alert("Ошибка при получении потока с камеры!");
});
}
};
return new Room();
}]);
}(window, document, angular, MediaStreamTrack, navigator));
I will not include the html code of the template here; everything can be seen in the demo and on github.
That's all, thanks for your attention, and I hope that this article will be useful to someone.
Well, of course, the list of references:
- www.w3.org/TR/mediacapture-streams
- www.blaccspot.com/blog/webrtc/tutorials/getusermedia-with-resolution-constraints-tutorial
- developer.mozilla.org/en-US/docs/Web/API
- www.sitepoint.com/introduction-getusermedia-api
- w3c.github.io/mediacapture-main/getusermedia.html#idl-def-Constraints
- muaz-khan.blogspot.ru