When a video is done transcoding, you'll see the
output property appear in the list assets endpoint response, containing URLs for every asset created. You'll generally want to use ABR streaming, so note the HLS URL in particular, as HLS has the widest device compatibility:
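As a sketch, picking the HLS URL out of a finished asset's output property might look like the following. The response shape (an "output" list with "type" and "url" fields) is an assumption for illustration; check the list assets response in the API docs for the actual field names:

```python
# Hypothetical asset record as returned by the list assets endpoint once
# transcoding is finished. Field names here are illustrative assumptions.
asset = {
    "id": "abc123",
    "status": "ready",
    "output": [
        {"type": "hls", "url": "https://cdn.example.com/abc123/master.m3u8"},
        {"type": "dash", "url": "https://cdn.example.com/abc123/manifest.mpd"},
        {"type": "mp4", "url": "https://cdn.example.com/abc123/720p.mp4"},
    ],
}

# Prefer the HLS manifest for playback, since it has the widest device support.
hls_url = next(o["url"] for o in asset["output"] if o["type"] == "hls")
print(hls_url)
```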
Then, use the Veeplay video player to render the video:
Direct upload is a feature we're currently developing and will be available soon. In the meantime, you'll need to provide a publicly accessible URL for the source media file. If the file is on your local filesystem, a simple way to get a temporary public URL is to serve it with a local fileserver and expose that server with
ngrok - see ngrok's docs on how to achieve this.
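One common way to do this (a sketch, not an official recipe - adjust ports and paths to your setup) is to serve the file's directory with Python's built-in HTTP server, then tunnel it with ngrok:

```shell
# Serve the directory containing your source file over HTTP on port 8000.
python3 -m http.server 8000 --directory /path/to/your/media &

# Tunnel port 8000 to a temporary public URL. ngrok prints a forwarding
# address; append the filename to it to get the source URL to submit.
ngrok http 8000
```

Remember that the URL only works while both processes are running, and that ngrok URLs are temporary - which is fine here, since the source only needs to be reachable during ingestion.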
No. After the Veeplay API processes the source file, you can safely make it private or remove it from your storage.
No. After the Veeplay API processes the source file, it is removed from our systems permanently.
See the supported inputs section of the API docs for a list of accepted formats.
When generating multiple renditions to support adaptive bitrate streaming with HLS and DASH, Veeplay doesn't use a static bitrate ladder. Instead, we use machine learning to infer the optimal ladder per title, based on the properties of each individual video. The result is a selection of renditions that maximizes perceived video quality while minimizing bandwidth requirements.
You can set up a webhook URL to receive a notification on every media asset status update during the ingestion workflow. Read more about setting up webhooks in the API documentation.
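A minimal handler for such notifications might look like the sketch below. The payload fields ("asset_id", "status", "error") are assumptions for illustration; consult the webhooks section of the API documentation for the real event schema:

```python
import json

def handle_webhook(raw_body: bytes) -> str:
    """Process one status-update notification (hypothetical schema)."""
    event = json.loads(raw_body)
    status = event.get("status")
    if status == "ready":
        # Transcoding finished: store the playback URLs, notify users, etc.
        return f"asset {event['asset_id']} is ready"
    if status == "errored":
        return f"asset {event['asset_id']} failed: {event.get('error', 'unknown')}"
    return f"asset {event['asset_id']} moved to {status}"

# Simulate receiving a "ready" notification.
body = json.dumps({"asset_id": "abc123", "status": "ready"}).encode()
print(handle_webhook(body))  # → asset abc123 is ready
```

In production you'd wire this into your web framework's request handler and respond with a 2xx status so the notification isn't retried.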
Yes, clipping is supported. See the
clip parameter of the create asset endpoint. Here's an example input that extracts a 40-second clip starting at the 10-second mark:
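A sketch of such a create asset payload is below. The field names inside "clip" (and "source_url") are assumptions for illustration; check the create asset endpoint documentation for the exact schema:

```python
# Hypothetical create asset payload: trim a 40-second clip that
# starts 10 seconds into the source.
payload = {
    "source_url": "https://example.com/source.mp4",
    "clip": {
        "start": 10,     # seconds into the source video
        "duration": 40,  # length of the resulting clip, in seconds
    },
}
```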
Yes, overlays are supported. See the
overlays parameter of the create asset endpoint. Here's an example that adds a logo to the bottom left area of the video, with the width equal to 20% of the full video width:
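A sketch of that payload follows. The overlay field names ("url", "position", "width") are assumptions for illustration; check the create asset endpoint documentation for the exact schema:

```python
# Hypothetical create asset payload: place a logo in the bottom-left
# corner, scaled to 20% of the full video width.
payload = {
    "source_url": "https://example.com/source.mp4",
    "overlays": [
        {
            "url": "https://example.com/logo.png",
            "position": "bottom_left",  # anchor corner for the overlay
            "width": "20%",             # relative to the full video width
        }
    ],
}
```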
Yes, audio normalization is supported. This brings the loudness of your input to a standard target level during encoding. The algorithm used for audio normalization is ITU-R BS.1770-1, and the target loudness is -24 LKFS. See the
audio_normalization parameter of the create asset endpoint. Here's an example of applying audio normalization:
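A sketch of that payload follows, assuming audio_normalization is a boolean flag; check the create asset endpoint documentation for the exact schema:

```python
# Hypothetical create asset payload enabling audio normalization
# (normalizes loudness to -24 LKFS per ITU-R BS.1770-1).
payload = {
    "source_url": "https://example.com/source.mp4",
    "audio_normalization": True,
}
```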