**_`basic_pipeline/07_Redemands.md`_**
## In Source elements
In the [source elements](../glossary/glossary.md#source), there is a "side-channel" from which we can receive data. That "side-channel" can be, as in the example [pipeline](../glossary/glossary.md#pipeline) we are working on, in the form of a file from which we are reading the data. In real-life scenarios, it could also be, e.g., an [RTP](../glossary/glossary.md#rtp) stream received via the network. Since we have that "side-channel", there is no need to receive data via an input [pad](../glossary/glossary.md#pad) (that is why the source element doesn't have one).
The whole logic of fetching the data can be put inside the `handle_demand/5` callback - once we are asked to provide the [buffers](../glossary/glossary.md#buffer), the `handle_demand/5` callback gets called and we can provide the desired number of buffers from the "side-channel" inside the body of that callback. No processing occurs here - we get asked for the buffer and we provide the buffer, simple as that.
The redemand mechanism here lets you focus on providing a single buffer in the `handle_demand/5` body - later on, you can simply return the `:redemand` action and that action will invoke `handle_demand/5` once again, with the updated number of buffers which are expected to be provided. Let's see it in an example - we could have such a `handle_demand/5` definition (and it wouldn't be a mistake!):
```elixir
@impl true
def handle_demand(:output, size, _unit, _context, state) do
  actions =
    for _ <- 1..size do
      payload = Input.get_next() # Input.get_next() is an example function providing data
      {:buffer, {:output, %Membrane.Buffer{payload: payload}}}
    end

  {actions, state}
end
```
As you can see in the snippet above, we need to generate the required `size` of buffers in a single `handle_demand/5` run. The logic of supplying the demand there is quite simple - but what if you also needed to check whether there is enough data to provide a sufficient number of buffers? You would need to check it in advance (or try to read as much data as possible before supplying the desired number of buffers). And what if an exception occurred during the generation, before all the buffers were supplied?
You would need to consider all these situations and your code would become larger and larger.
Wouldn't it be better to focus on a single buffer in each `handle_demand/5` call - and let the Membrane Framework automatically update the demand's size? This can be done in the following way:
```elixir
@impl true
def handle_demand(:output, _size, _unit, _context, state) do
  # provide a single buffer...
  payload = Input.get_next() # Input.get_next() is an example function providing data
  # ...and return :redemand, so that handle_demand/5 is invoked again
  # if there are still buffers to be provided
  {[buffer: {:output, %Membrane.Buffer{payload: payload}}, redemand: :output], state}
end
```

## In Filter elements
In a filter element, the situation is quite different.
Since the filter's responsibility is to process the data sent via the input pads and transmit it through the output pads, there is no "side-channel" from which we could take data. That is why, in normal circumstances, you would transmit the buffer through the output pad in the `handle_buffer/4` callback (which means - once your element receives a buffer, you process it, and then you 'mark' it as ready to be output with the `:buffer` action). When it comes to handling a demand on the output pad in `handle_demand/5`, all you need to do is demand the appropriate number of buffers on the element's input pad.
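Such a demand-forwarding `handle_demand/5` could be sketched as below (a sketch only, assuming the demand unit is `:buffers` and that one input buffer is needed per output buffer):

```elixir
@impl true
def handle_demand(:output, size, :buffers, _context, state) do
  # A filter has no "side-channel" to read from - to satisfy the demand
  # on the output pad, we simply demand the same number of buffers
  # on the input pad and produce output later, in handle_buffer/4.
  {[demand: {:input, size}], state}
end
```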
That behavior is easy to specify when we know exactly how many input buffers correspond to one output buffer (recall the situation in the [Depayloader](../glossary/glossary.md#payloader-and-depayloader) of our pipeline, where we knew *a priori* that each output buffer ([frame](../glossary/glossary.md#frame)) consists of a given number of input buffers ([packets](../glossary/glossary.md#packet))). However, it becomes impossible to define when an output buffer may be a combination of an arbitrary number of input buffers. We dealt with exactly such an unknown number of required buffers in the OrderingBuffer implementation, where we didn't know how many input buffers we would need to demand to fill the missing spaces between the packets ordered in the list. How did we manage to do it?
We simply used the `:redemand` action! In case there was a missing space between the packets, we returned the `:redemand` action, which immediately called the `handle_demand/5` callback (implemented so as to request a buffer on the input pad). The fact that that callback invocation was immediate (the callback was called synchronously, right after returning from the `handle_buffer/4` callback, before processing any other message from the element's mailbox) might be crucial in some situations, since it guarantees that the demand will be made before handling any other event.
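That pattern could be sketched as below - note that `try_fill_gaps/2` is a hypothetical helper, not the actual OrderingBuffer code:

```elixir
@impl true
def handle_buffer(:input, buffer, _context, state) do
  # try_fill_gaps/2 is a hypothetical helper which either produces an
  # output buffer out of the ordered packets or reports a missing packet
  case try_fill_gaps(state, buffer) do
    {:ok, out_buffer, state} ->
      {[buffer: {:output, out_buffer}], state}

    {:missing_packet, state} ->
      # No output buffer can be produced yet - return :redemand, so that
      # handle_demand/5 is called synchronously and demands more input
      {[redemand: :output], state}
  end
end
```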
Recall the situation in the [Mixer](../glossary/glossary.md#mixer), where we were producing the output buffers right in the `handle_demand/5` callback. We needed to attempt to create the output buffer after:
- updating the buffers' list in `handle_buffer/4`
- updating the status of the [track](../glossary/glossary.md#track) in `handle_end_of_stream/3`
Therefore, we were simply returning the `:redemand` action, and `handle_demand/5` was called right afterwards, trying to produce the output buffer.
As you can see, the redemand mechanism in filters helps us deal with situations where we do not know how many input buffers to demand in order to be able to produce output buffers.
In case we don't provide enough buffers in the `handle_demand/5` callback (or we are not sure that we do), we should return the `:redemand` action somewhere else (usually in `handle_buffer/4`) to make sure that the demand is not lost.
With that knowledge, let's carry on with the next element in our pipeline - the `Depayloader`.
First, we define the list of children:
- `:src` - a `Membrane.RTMP.SourceBin`, an RTMP server which, according to its `:port` configuration, will listen on port `9009`. This bin will act as the source of our pipeline. For more information on the RTMP source bin, please visit [the documentation](https://hexdocs.pm/membrane_rtmp_plugin/Membrane.RTMP.SourceBin.html).
- `:sink` - a `Membrane.HTTPAdaptiveStream.SinkBin`, acting as the sink of the pipeline. The full documentation of that bin is available [here](https://hexdocs.pm/membrane_http_adaptive_stream_plugin/Membrane.HTTPAdaptiveStream.SinkBin.html). We need to specify some of its options:
  - `:manifest_module` - a module implementing the [`Membrane.HTTPAdaptiveStream.Manifest`](https://hexdocs.pm/membrane_http_adaptive_stream_plugin/Membrane.HTTPAdaptiveStream.Manifest.html#c:serialize/1) behaviour. A manifest allows aggregating tracks (of different types, e.g. an audio track and a video track, as well as many tracks of the same type, e.g. a few video tracks with different resolutions). For each track, the manifest holds a reference to the list of segments which form that track. Furthermore, the manifest module is equipped with the `serialize/1` function, which transforms the manifest into a string (which later on can be written to a file). In our case, we use a built-in implementation of a manifest module - `Membrane.HTTPAdaptiveStream.HLS`, designed to serialize a manifest into the form required by HLS.
  - `:target_window_duration` - determines the minimal manifest duration. The oldest segments of the tracks will be removed whenever possible if persisting them would result in exceeding the manifest duration.
  - `:muxer_segment_duration` - the maximal duration of a segment. No segment of any track should exceed that value. In our case, we have decided to limit the length of each segment to 8 seconds.
  - `:storage` - the module responsible for writing down the HLS playlist and manifest files. In our case, we use the pre-implemented `Membrane.HTTPAdaptiveStream.FileStorage` module, designed to write the files to the local filesystem. We configure it so that the files will be put in the `output/` directory (make sure that this directory exists, as the storage module won't create it itself).
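Put together, the children definition could look roughly like the sketch below. This is an illustrative sketch only - the exact child-spec syntax and storage-module name depend on the Membrane and plugin versions, and the `target_window_duration` value is an assumption:

```elixir
spec = [
  # RTMP server listening on port 9009, acting as the source of the pipeline
  child(:src, %Membrane.RTMP.SourceBin{port: 9009}),
  # HLS sink writing the playlist and segment files to the output/ directory
  child(:sink, %Membrane.HTTPAdaptiveStream.SinkBin{
    manifest_module: Membrane.HTTPAdaptiveStream.HLS,
    target_window_duration: Membrane.Time.seconds(120), # assumed value
    muxer_segment_duration: Membrane.Time.seconds(8),
    storage: %Membrane.HTTPAdaptiveStream.FileStorage{directory: "output"}
  })
]
```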
After providing the children's specifications, we are ready to connect the pads.
The structure of links reflects the desired architecture of the application.
The final thing done in the `handle_init/1` callback's implementation is returning the list of actions:
**_`lib/rtmp_to_hls/pipeline.ex`_**
```elixir
@impl true
def handle_init(_opts) do
  ...
  {[spec: spec, playback: :playing], %{}}
end
```
The first action is the `:spec` action, which spawns the children. The second action changes the playback state of the pipeline to `:playing`, meaning that data can start flowing through the pipeline.