Table of Contents

Namespace CounterpointCollective.Dataflow

Classes

AdmissionGateBlock<T>

A dataflow block that conditionally admits messages into an inner target block based on user-defined AdmissionGateHooks.

Use this block when you want to control dynamically when, and which, messages are allowed to enter the block, e.g. to implement throttling rules.

Features include:

  • Hooks to decide whether a message may enter, when it enters, and if entry fails.
  • Automatic postponement of messages that are not currently allowed to enter.
  • Processing of postponed messages when state changes allow them to be admitted.
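The hooks above might be wired up as in the sketch below. Only the AdmissionGateBlock<T> and AdmissionGateHooks type names come from this documentation; the hook member names and the constructor shape are assumptions for illustration, not the library's confirmed API.

```csharp
// Hypothetical sketch: the hook names (MayEnter, OnEntered, OnEntryFailed)
// and the constructor shape are assumptions, not the library's confirmed API.
int inFlight = 0;
var hooks = new AdmissionGateHooks
{
    MayEnter = () => Volatile.Read(ref inFlight) < 8,       // may a message enter now?
    OnEntered = () => Interlocked.Increment(ref inFlight),  // a message was admitted
    OnEntryFailed = ex => Console.Error.WriteLine(ex),      // entry failed
};

// Messages that are not currently admitted are postponed automatically and
// re-offered once the gate's state allows them in again.
var inner = new ActionBlock<string>(msg =>
{
    Process(msg);
    Interlocked.Decrement(ref inFlight);
});
var gate = new AdmissionGateBlock<string>(inner, hooks);
```
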

AdmissionGateHooks

Hooks for controlling and observing admission into a block.

AutoScalingBlockExtensions
BatchRunEvent<T>
BatchSizeCalculation
BoundedPropagatorBlockExtensions
BoundedPropagatorBlock<I, O>

A wrapper that enforces a BoundedCapacity on an arbitrary IPropagatorBlock<TInput, TOutput>. See also its base class BoundedTargetBlock<T>.

BoundedTargetBlock<T>

A wrapper that enforces a bounded capacity on an arbitrary ITargetBlock<TInput>.

The number of messages currently owned by the block is tracked by Count. Incoming messages increment Count. Exits can be created with CreateExit<TO>(ISourceBlock<TO>); messages that leave through an exit are deducted from Count. Count can also be adjusted manually with AdjustCount(int), e.g. if you make messages re-enter the block.

If Count reaches BoundedCapacity, no new messages will be allowed to enter the block until Count drops below BoundedCapacity again; postponed messages are then processed in FIFO order.
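Under those rules, a typical wiring might look like the sketch below. CreateExit<TO>(ISourceBlock<TO>) and AdjustCount(int) are documented above; the wrapper's constructor signature and the surrounding blocks are assumptions.

```csharp
// Sketch: the BoundedTargetBlock constructor shape is an assumption.
var worker = new TransformBlock<int, int>(x => x * 2);
var bounded = new BoundedTargetBlock<int>(worker, boundedCapacity: 100);

// Route the worker's output through an exit so that departing messages
// are deducted from Count.
ISourceBlock<int> exit = bounded.CreateExit<int>(worker);
exit.LinkTo(downstream, new DataflowLinkOptions { PropagateCompletion = true });

// If a message re-enters the block outside the normal entry path,
// account for it manually:
bounded.AdjustCount(1);
```
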

ChoiceBlock<TInput, TOutput>
DefaultBatchSizeStrategy

Default implementation that monitors throughput and adjusts batch size accordingly.

DeferredBlock<I, O>

A dataflow block that lets you construct and wire up an entire pipeline before the actual underlying block is available.

Use DeferredBlock when the real IPropagatorBlock<TInput, TOutput> requires asynchronous setup before it can be created (e.g. initializing a database connection, loading configuration, performing I/O, warming caches, etc.).

With this block you can:

  • Set up all LinkTo(ITargetBlock<O>, DataflowLinkOptions) connections immediately. These links are recorded and only activated once the inner block is created.
  • Start building the rest of your pipeline without having to await the initialization task.
  • Accept incoming messages during the initialization phase; they are stored and offered to the inner block once it is ready.
  • Request completion or fault at any time; the request is applied either immediately (if the inner block already exists) or propagated correctly once initialization finishes.

As soon as the asynchronous factory produces the real block:

  • Pending links are created.
  • Postponed messages are forwarded.
  • Any previously requested completion or fault is propagated.

After the inner block has been created, DeferredBlock behaves exactly as a transparent wrapper around it.
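A sketch of the pattern, assuming the block is constructed from an asynchronous factory (the constructor signature and the Database helper are assumptions, not confirmed API):

```csharp
// Sketch: the constructor signature and Database.ConnectAsync are assumptions.
var deferred = new DeferredBlock<string, Record>(async () =>
{
    var db = await Database.ConnectAsync();  // asynchronous setup
    return new TransformBlock<string, Record>(id => db.Load(id));
});

// Wire the pipeline immediately; the link is recorded and only activated
// once the inner block has been created.
deferred.LinkTo(downstream, new DataflowLinkOptions { PropagateCompletion = true });

// Messages offered during initialization are stored and forwarded later.
deferred.Post("record-1");
```
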

GroupAdjacentBlock<T, K, V>
GuaranteedBroadcastBlockOptions
GuaranteedBroadcastBlock<T>

Guarantees delivery of messages to NrOfSources sources. Messages are queued in the sources until the smallest queue reaches BoundedCapacity.

IAsyncEnumerableExtensions
IEnumerableExtensions
NonOrderPreservingChoiceBlock<TInput, TOutput>
OrderPreservingChoiceBlock<TInput, TOutput>
ParallelBlock<I, T, O>

To prevent deadlocks, Source will always accept messages. However, the block itself will only accept new messages if the combined count of Source and the joining side (using either the smallest or largest queue count, depending on the configured mode) is below the capacity limit.

Be careful when you pass GuaranteedBroadcastBlockBoundedCapacityMode.LargestQueue in the options. This may trigger a deadlock in some cases, namely:

  • Queue[1] is empty.
  • Queue[2] has items.
  • The ParallelBlock is full; it will not enqueue more messages before some are consumed.
  • Worker 1 must consume messages before it will produce its next message, e.g. because it is batching.
  • Worker 2 cannot consume more messages at the moment because its output side is full.

In that situation the JoiningTarget will not consume messages until all workers have output available, causing a deadlock.

ParallelDataflowBlockExtensions
PriorityBufferBlock<T, TPriority>
ResizableBatchTransformBlock<I, O>
ResizableBufferBlock<T>
SynchronousFilterBlock<T>
SynchronousTransformingBlock<I, O>

Important: transform must be idempotent and cheap! It will be run multiple times for the same input in some scenarios.

TeeBlock<I, T, O>
TokenGateBlock<T>
UnboundedTargetBlock<T>

Will always accept messages until it is explicitly told to complete or fault.

Interfaces

IBatchSizeStrategy

Enums

FilterMode

Block means messages are not accepted if the predicate does not hold. Drop means messages are accepted and then discarded if the predicate does not hold.
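A minimal sketch of the difference, assuming SynchronousFilterBlock<T> accepts a predicate and a FilterMode (the constructor shape is an assumption):

```csharp
// Block: odd numbers are not accepted, so a bounded upstream keeps them
// (back-pressure). Drop: odd numbers are accepted and silently discarded.
var blocking = new SynchronousFilterBlock<int>(x => x % 2 == 0, FilterMode.Block);
var dropping = new SynchronousFilterBlock<int>(x => x % 2 == 0, FilterMode.Drop);
```
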

GuaranteedBroadcastBlockBoundedCapacityMode

Delegates

BatchAction<T>