fluent / fluent-bit-docs-stream-processing

Fluent Bit Stream Processing Guide

Exception traps in Stream Processing model #3


snowjunkie commented 5 years ago

As I understand it, stream processing creates a second stream that is ingested back through the various processing layers of Fluent Bit. I'd like to understand the recommended way to use stream processing to branch to various outputs, but also include an exception trap or catch-all route so that any events not matching the stream condition are routed to a remediation destination. I can't currently see how to achieve this: in my tests the default output also caught the 'selected' messages, not just the exceptions (i.e. the non-stream path included the stream-path data instead of only the non-stream data). Could this be covered in the 101 example?

edsiper commented 5 years ago

Hi,

As I understand it, stream processing creates a second stream that is ingested back through the various processing layers of Fluent Bit.

A new stream of data is created only if you use the CREATE STREAM statement; otherwise, results are delivered only to the standard output.
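
For example, a bare SELECT statement only prints its results to the standard output of the Fluent Bit service, while CREATE STREAM feeds them back into the pipeline as new records (the stream name 'data' below is just a placeholder):

SELECT * FROM STREAM:data;

CREATE STREAM test AS SELECT * FROM STREAM:data;

Only the second form produces records that the rest of the pipeline can see.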

I'd like to understand the recommended way to use stream processing to branch to various outputs, but also include an exception trap or catch-all route so that any events not matching the stream condition are routed to a remediation destination.

The results of a stream created by the stream processor go through the normal pipeline process, routed by Tags and Matching rules. So when you create a new stream of data you can tag it, e.g.:

CREATE STREAM test WITH (tag='mytag') AS SELECT * FROM STREAM:data;

then just append a matching rule for an output plugin that matches 'mytag', e.g.:

[OUTPUT]
    Name stdout
    Match mytag
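
For completeness, the SQL statement itself lives in a separate streams file that the main configuration references from the [SERVICE] section; a sketch (the file and task names here are just examples):

[SERVICE]
    Streams_File stream_processor.conf

and stream_processor.conf contains the task definition:

[STREAM_TASK]
    Name  tag_data
    Exec  CREATE STREAM test WITH (tag='mytag') AS SELECT * FROM STREAM:data;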

At the moment the Fluent Bit engine will discard any record that doesn't match any output matching rule. Note that the stream processor runs before that phase; there is no custom behavior for non-matching records.

Whether or not the stream processor generates a new stream of data, your original data will go through the normal pipeline anyway.
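
That said, you can build the catch-all yourself by creating two streams with complementary WHERE conditions, so every record lands in exactly one of them. A minimal sketch, assuming records carry a numeric field named code (the field, the threshold, the tags and the destinations are all hypothetical):

CREATE STREAM selected WITH (tag='sp.selected') AS SELECT * FROM STREAM:data WHERE code >= 400;

CREATE STREAM fallback WITH (tag='sp.fallback') AS SELECT * FROM STREAM:data WHERE code < 400;

then route each tag to its own destination:

[OUTPUT]
    Name  es
    Match sp.selected

[OUTPUT]
    Name  stdout
    Match sp.fallback

Keep in mind that the original records still flow through the pipeline under their original tag, so either add an output that matches that tag as well or let the engine discard them as unmatched.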