AsyncManager

The critical factor in many applications is the cycle time of a task: the shorter it is, the more frequently the process signals controlled by the program steps of this task can be checked. This minimizes the application’s response time to changes in process signals. The cycle time can only be as short as possible if the number of instructions processed in one cycle corresponds exactly to the number of instructions strictly necessary to meet the given requirements.

If a particular job does not have to be completed in one cycle, and moving this job to another task with a longer cycle time is not an option, the individual calculation steps are assigned to different groups of statements (states) and the execution of each group (state) is deferred to one of the following cycles. This way, only the computing time for one statement group is required in the current cycle.

This approach reaches its limit when a statement group needs an unpredictable amount of time for its calculation, or may under certain circumstances even block the complete program execution. In this case, such a group of statements must be moved to a background task so that the cycle time of the foreground task is not affected. A transport mechanism is then required so that the parameters, states, and calculation results can be exchanged consistently between the foreground and the background task. Usually, a “queue” and a “shared data area” are used for this purpose.
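The following minimal sketch illustrates the statement-group approach: a calculation is split into states, and exactly one state is executed per cycle. The function block name, the three groups, and their contents are illustrative only.

FUNCTION_BLOCK SplitCalculation
VAR_INPUT
    xStart : BOOL;      (* starts a new calculation *)
END_VAR
VAR_OUTPUT
    xDone : BOOL;       (* TRUE when the last statement group has finished *)
END_VAR
VAR
    iState : INT;       (* statement group to be executed in the next cycle *)
END_VAR

CASE iState OF
    0:  (* idle: wait for a start request *)
        IF xStart THEN
            xDone := FALSE;
            iState := 1;
        END_IF
    1:  (* statement group 1, e.g. prepare the input data *)
        iState := 2;
    2:  (* statement group 2, e.g. the actual calculation *)
        iState := 3;
    3:  (* statement group 3, e.g. post-processing; then back to idle *)
        xDone := TRUE;
        iState := 0;
END_CASE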

The following use cases were kept in mind while specifying this library.

Simple One-To-One Relationship

  • One action (IAsyncActionProvider) is assigned to one background task.

  • The current foreground task fills the parameter queue with parameter sets.

  • The result of the background task is available via the AsyncResult property.

Several Background Tasks Are Sharing Parameters and Results

  • Every instance of a BackgroundTask function block is connected to one and the same instance of an IAsyncActionProvider.

  • One parameter queue is connected to the group of BackgroundTasks. Each task fetches the next available parameter set into its context and executes IAsyncActionProvider.AsyncAction with this set of parameters (a declaration sketch follows this list).

  • The results of these joint efforts are available for the foreground task through the common SHD.ISharedArea.
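A minimal declaration sketch of this use case, based on the example further below; the task names, priorities and intervals are placeholders, and the assignment of the action provider instance is assumed to happen elsewhere:

VAR
    sqParameters      : SHD.SharedQueue;            (* common parameter queue *)
    itfActionProvider : AJM.IAsyncActionProvider;   (* one shared action provider *)
    bgtWorkerA : AJM.BackgroundTask := (
        tgTaskGroup:='IEC-Tasks', anAppName:='Application', tnTaskName:='WorkerA',
        usiTaskPrio:=10, udiTaskInterval:=50000,
        itfParams := sqParameters, itfAction := itfActionProvider);
    bgtWorkerB : AJM.BackgroundTask := (
        tgTaskGroup:='IEC-Tasks', anAppName:='Application', tnTaskName:='WorkerB',
        usiTaskPrio:=10, udiTaskInterval:=50000,
        itfParams := sqParameters, itfAction := itfActionProvider);
END_VAR

Both instances fetch from the same sqParameters queue; which task ends up executing a given parameter set is therefore decided at runtime.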

Several Background Tasks Are Connected In A Row

Like on a conveyor belt, the BackgroundTasks are connected in series via their parameter/result queues. Each task processes a portion of a chain of transformations and passes its result on to its subsequent task.
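A hedged sketch of such a chain, assuming that the intermediate SHD.SharedQueue acts both as observer of the first stage’s result area and as parameter queue of the second stage (see the following sections), and that the result nodes delivered by the first stage are usable as parameter sets for the second stage:

VAR
    sqStageOneParams  : SHD.SharedQueue;    (* raw parameters for stage one *)
    sqStageOneToTwo   : SHD.SharedQueue;    (* results of stage one = parameters of stage two *)
    itfActionStageOne : AJM.IAsyncActionProvider;
    itfActionStageTwo : AJM.IAsyncActionProvider;
    bgtStageOne : AJM.BackgroundTask := (
        tgTaskGroup:='IEC-Tasks', anAppName:='Application', tnTaskName:='StageOne',
        usiTaskPrio:=10, udiTaskInterval:=50000,
        itfParams := sqStageOneParams, itfAction := itfActionStageOne);
    bgtStageTwo : AJM.BackgroundTask := (
        tgTaskGroup:='IEC-Tasks', anAppName:='Application', tnTaskName:='StageTwo',
        usiTaskPrio:=10, udiTaskInterval:=50000,
        itfParams := sqStageOneToTwo, itfAction := itfActionStageTwo);
    xChained : BOOL;
END_VAR

bgtStageOne();
bgtStageTwo();

(* once stage one is running, register the intermediate queue as observer of its
   result area, so that every result of stage one is enqueued for stage two *)
IF bgtStageOne.xBusy AND NOT xChained THEN
    bgtStageOne.itfResult.AreaSetObserver(sqStageOneToTwo);
    xChained := TRUE;
END_IF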

The Parameter Queue Feeding the Background Task

The “SharedData Utilities” library provides the interface SHD.ISharedQueue.

INTERFACE ISharedQueue EXTENDS __SYSTEM.IQueryInterface
METHOD Dequeue : IQueueableNode
VAR_OUTPUT
    /// Insertion point in time
    ltTimeStamp : LTIME;
    eErrorID : ERROR;
END_VAR

METHOD Enqueue : ERROR
VAR_INPUT
    itfNode : IQueueableNode;
END_VAR

An instance of the BackgroundTask function block takes the parameter sets for its IAsyncActionProvider instance out of this queue. One possible implementation is SHD.SharedQueue, which is defined in the “SharedData Utilities” library.

Example: Connecting Parameters to a BackgroundTask function block

VAR
    sqParameters : SHD.SharedQueue;                  (* parameter queue feeding the background task *)
    itfActionProvider : AJM.IAsyncActionProvider (* := myActionProvider *);
    bgtBackgroundTask : AJM.BackgroundTask := (
        tgTaskGroup:='IEC-Tasks',
        anAppName:='Application',
        tnTaskName:='BackgroundTask',
        usiTaskPrio:=10,
        udiTaskInterval:=50000,
        itfParams := sqParameters,                   (* queue with the parameter sets *)
        itfAction := itfActionProvider               (* provides the AsyncAction to be executed *)
    );
END_VAR

To feed the BackgroundTask with new parameters, the method ISharedQueue.Enqueue has to be called. The related parameter structure is passed behind the itfNode reference.

INTERFACE IQueueableNode EXTENDS __SYSTEM.IQueryInterface
METHOD NodeDispose
PROPERTY IsNodeValid : BOOL

Behind the IQueueableNode, any suitable implementation is possible. Thus the parameter structure is freely customizable and can be adapted very well to special requirements.

FUNCTION_BLOCK Parameter IMPLEMENTS SHD.IQueueableNode
VAR_INPUT
    (* Any required data structure *)
END_VAR

The method NodeDispose and the property IsNodeValid are used for resource management and make it possible to mark a parameter set as no longer valid while it is still waiting in the queue.
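As a hedged example, a minimal parameter implementation and its use from the foreground task could look as follows. The function block name ParameterSet, its payload members and the simple rotating pool (which does not check whether a slot is still in use) are assumptions for illustration; sqParameters is the queue from the example above.

FUNCTION_BLOCK ParameterSet IMPLEMENTS SHD.IQueueableNode
VAR_INPUT
    lrInput1 : LREAL;   (* example payload; any required data structure *)
    lrInput2 : LREAL;
END_VAR

METHOD NodeDispose
(* nothing to release for this statically allocated parameter set *)

PROPERTY IsNodeValid : BOOL
(* GET *)
IsNodeValid := TRUE;    (* the parameter sets in this example never become invalid *)

In the foreground task, a parameter set is filled and handed over via Enqueue:

VAR
    aParameters : ARRAY [1..10] OF ParameterSet;    (* simple pool of parameter sets *)
    udiNext     : UDINT := 1;
    eErrorID    : SHD.ERROR;
END_VAR

aParameters[udiNext].lrInput1 := 17.0;
aParameters[udiNext].lrInput2 := 4.0;
eErrorID := sqParameters.Enqueue(itfNode := aParameters[udiNext]);
udiNext := udiNext MOD 10 + 1;                      (* advance to the next pool slot *)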

The Background Task’s Result

The SHD.ISharedArea interface is defined in the “SharedData Utilities” library. The implementation behind it provides a consistent transport of data structures, for example between multiple cores of a processor.

INTERFACE ISharedArea EXTENDS __SYSTEM.IQueryInterface
METHOD AreaSetObserver : ISharedAreaObserver
VAR_INPUT
    itfAreaObserver : ISharedAreaObserver;
END_VAR
VAR_OUTPUT
    eErrorID : ERROR;
END_VAR

One possible implementation of SHD.ISharedAreaObserver is the SHD.SharedQueue, which is defined in the “SharedData Utilities” library.

Example: Handling the results of a BackgroundTask function block

VAR
    sqResults : SHD.SharedQueue;              (* receives the result nodes as observer of the shared area *)
    xObserved : BOOL;
    itfNode : SHD.IQueueableNode;
    (* interface reference to the result data; the concrete interface type is an
       assumption and depends on the implementation behind the result nodes *)
    itfSharedAreaRef : SHD.ISharedAreaRef;
    eErrorID : SHD.ERROR;
    bgtBackgroundTask : AJM.BackgroundTask;   (* configured as shown in the example above *)
END_VAR
bgtBackgroundTask();

IF bgtBackgroundTask.xBusy THEN
    (* register the result queue as observer of the background task's shared area once *)
    IF NOT xObserved THEN
        bgtBackgroundTask.itfResult.AreaSetObserver(sqResults);
        xObserved := TRUE;
    END_IF

    (* fetch the next result node and query it for the interface of the result data *)
    itfNode := sqResults.Dequeue(eErrorID=>eErrorID);
    __QUERYINTERFACE(itfNode, itfSharedAreaRef);
    IF itfSharedAreaRef <> 0 THEN

        (* Process the results *)

        itfNode.NodeDispose();                (* release the node after processing *)
        itfNode := 0;
        itfSharedAreaRef := 0;
    END_IF
END_IF