A package that makes Dyte's video middlewares easy to use
The goal of Video Background Transformer is to make creating VideoMiddlewares for Dyte Meetings easy and fun to play around with.
Please make sure that the Dyte Web Core & UI Kit versions in your webapp are >= 2.0.
To upgrade from 1.x to 2.x, please refer to our upgrade guide.
Versions earlier than 2.0 cannot disable per-frame rendering of video middlewares, which this middleware relies on to improve quality & speed.
If you can't upgrade to Web Core 2.x yet, please use the legacy version of this video background middleware instead.
- npm
- Web Core Version >= 2.0 (Web Core, React Web Core, Angular Web Core etc.)
- UI Kit Version >= 2.0 (UI Kit, React UI Kit, Angular UI Kit etc.)
```sh
npm install @dytesdk/video-background-transformer
```
Note: Make sure that you are using the 2.x versions of Web Core & UI Kit.
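The 2.x requirement can also be guarded at runtime. Below is a hypothetical sketch (the `meetsMinimumMajor` helper is not part of the Dyte SDK), assuming the installed packages expose a semver-style version string:

```js
// Hypothetical helper: checks that a semver-style version string ("2.1.0")
// satisfies the ">= 2.0" requirement for Web Core and UI Kit.
function meetsMinimumMajor(version, minimumMajor = 2) {
  const major = Number.parseInt(version.split('.')[0], 10);
  return Number.isInteger(major) && major >= minimumMajor;
}

// e.g. fall back to the legacy middleware package when this returns false.
```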
Disable the default per-frame canvas rendering of video middlewares so that this middleware can control rendering on its own, improving both speed and quality.
```js
await meeting.self.setVideoMiddlewareGlobalConfig({
  disablePerFrameCanvasRendering: true,
});
```
A `videoBackgroundTransformer` object can be created using the `DyteVideoBackgroundTransformer.init({ meeting: meeting })` method.
```js
const videoBackgroundTransformer = await DyteVideoBackgroundTransformer.init({
  meeting,
});
```
Types of middlewares exposed by `videoBackgroundTransformer`:
- `createStaticBackgroundVideoMiddleware` expects an `imageUrl` as a parameter and sets the image as the background for the current user.

```js
meeting.self.addVideoMiddleware(
  await videoBackgroundTransformer.createStaticBackgroundVideoMiddleware(imageUrl)
);
```
- `createBackgroundBlurVideoMiddleware` expects a `blurLength` as a parameter (4px by default) and blurs the user's background by the given `blurLength`.

```js
meeting.self.addVideoMiddleware(
  await videoBackgroundTransformer.createBackgroundBlurVideoMiddleware(10)
);
```
Note: Some browsers, or older versions of them, may not support WebGL or the browser APIs that this package uses. We recommend checking support beforehand using:
```js
if (DyteVideoBackgroundTransformer.isSupported()) {
  const videoBackgroundTransformer = await DyteVideoBackgroundTransformer.init({
    meeting: meeting,
  });
  meeting.self.addVideoMiddleware(
    await videoBackgroundTransformer.createStaticBackgroundVideoMiddleware(`REPLACE_THIS_WITH_IMAGE_URL`)
  );
}
```
Note: Image URLs must allow CORS to avoid tainting the canvas. You can find such images on https://unsplash.com/ & https://imgur.com.
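To catch a non-CORS image before it taints the canvas, you could probe the URL up front. A minimal sketch (the `canLoadCrossOrigin` helper is hypothetical, not part of this package), assuming an environment with `fetch` (any modern browser, or Node 18+):

```js
// Hypothetical helper: returns true if the URL responds successfully to a
// CORS request, so drawing the image to a canvas should not taint it.
async function canLoadCrossOrigin(url) {
  try {
    const response = await fetch(url, { mode: 'cors' });
    return response.ok;
  } catch {
    return false; // invalid URL, network error, or CORS failure
  }
}
```

If the probe fails, pick an image from a CORS-friendly host such as the ones listed above.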
If you want to tweak the segmentation for sharper results, pass the desired segmentation config while initialising DyteVideoBackgroundTransformer.
```js
const videoBackgroundTransformer = await DyteVideoBackgroundTransformer.init({
  meeting,
  segmentationConfig: {
    model: 'mlkit', // 'meet' | 'mlkit'
    backend: 'wasmSimd',
    inputResolution: '256x256', // '256x144' for meet
    pipeline: 'webgl2', // 'webgl2' | 'canvas2dCpu'; canvas2dCpu gives a sharper blur, webgl2 is faster
    targetFps: 35,
  },
});
```
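For example, to trade speed for a sharper blur, the comments above suggest pairing the `meet` model with its `256x144` input resolution and the `canvas2dCpu` pipeline. A sketch (the exact values that work best will depend on your hardware):

```js
const videoBackgroundTransformer = await DyteVideoBackgroundTransformer.init({
  meeting,
  segmentationConfig: {
    model: 'meet',              // 'meet' pairs with the 256x144 resolution
    backend: 'wasmSimd',
    inputResolution: '256x144',
    pipeline: 'canvas2dCpu',    // sharper blur, but slower than webgl2
    targetFps: 35,
  },
});
```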
Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.
1. Fork the Project
2. Create your Feature Branch (`git checkout -b feature/AmazingFeature`)
3. Commit your Changes (`git commit -m 'feat: Add some AmazingFeature'`)
4. Push to the Branch (`git push -u origin feature/AmazingFeature`)
5. Open a Pull Request
Distributed under the MIT License. See `LICENSE` for more information.