How the engine works
Last updated
The client's application sends statements (events) to the engine in JSON format over an HTTP connection (REST API). Events reach the engine through a Kafka queue. Each event is saved to a repository to enable offline processing.
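A minimal sketch of what the client side of this step might look like. The field names (`user_id`, `event_type`, `attributes`) are illustrative assumptions, not the engine's actual event schema:

```python
import json
import time

def build_event(user_id, event_type, attributes):
    """Serialize a client statement (event) into the JSON format
    sent to the engine's REST API. The field names here are
    assumptions for illustration, not the engine's real schema."""
    return json.dumps({
        "user_id": user_id,
        "event_type": event_type,
        "timestamp": time.time(),
        "attributes": attributes,
    })

# In production this payload would be POSTed over HTTP to the
# engine's REST endpoint, which places it on a Kafka topic and
# persists it in the event repository for offline processing.
payload = build_event("user-42", "purchase", {"amount": 19.99})
```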
An event is transformed into variables
The given user's aggregate values are refreshed
Scoring conditions are checked for each model (conditions that trigger scoring evaluation, and conditions that determine whether the given user should be scored by a given model)
For every model whose scoring conditions are fulfilled, a line of data is prepared
The score is calculated
The score is returned to the client
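The online steps above can be sketched in Python. Everything here is an illustrative assumption (the `transform`, `refresh_aggregates`, and `Model` names, and the toy scoring logic), not the engine's actual implementation:

```python
def transform(event):
    # Step: an event is transformed into variables (here, a flat dict).
    return dict(event["attributes"], event_type=event["event_type"])

def refresh_aggregates(aggregates, event):
    # Step: refresh the given user's aggregates (here, an event counter).
    user = aggregates.setdefault(event["user_id"], {"event_count": 0})
    user["event_count"] += 1

class Model:
    """Toy model with a scoring condition and a scoring function."""
    def __init__(self, name, trigger_type):
        self.name = name
        self.trigger_type = trigger_type

    def should_score(self, event):
        # Step: scoring conditions checked per model.
        return event["event_type"] == self.trigger_type

    def score(self, row):
        # Toy score: proportional to the user's activity.
        return 0.1 * row["event_count"]

def handle_event(event, models, aggregates, repository):
    repository.append(event)               # saved for offline processing
    variables = transform(event)
    refresh_aggregates(aggregates, event)
    scores = {}
    for model in models:
        if model.should_score(event):
            # Step: a line of data is prepared from variables + aggregates.
            row = {**variables, **aggregates[event["user_id"]]}
            scores[model.name] = model.score(row)
    return scores                          # returned to the client

repo, aggs = [], {}
models = [Model("churn", "purchase")]
event = {"user_id": "u1", "event_type": "purchase",
         "attributes": {"amount": 5}}
scores = handle_event(event, models, aggs, repo)
```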
Aggregates are computed (based on the saved events) for each user and each model
In the analytical table containing the calculated aggregates and the target value, a line of data may be created for each user. For some users this line will not be created, because:
The scoring condition is not fulfilled
The conditions for calculating the target window are not fulfilled (for example, the target window spans 3 days, while the data contain events from only 2 days)
For each model a separate analytical table is created
The analytical table is the input to the process that computes the ABM model
Selected models are automatically deployed
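The offline side can likewise be sketched: compute per-user aggregates from the saved events and emit one analytical row per user, skipping users whose scoring condition fails or whose data cannot cover the target window. All names, the 3-day window, and the target definition are illustrative assumptions:

```python
from collections import defaultdict

TARGET_WINDOW_DAYS = 3  # assumed target-window length (matches the example above)

def build_analytical_table(events, scoring_condition):
    """Build one analytical table for one model: a row of aggregates
    plus a target value per user, with some users skipped."""
    per_user = defaultdict(list)
    for ev in events:
        per_user[ev["user_id"]].append(ev)

    table = []
    for user_id, evs in per_user.items():
        days = {ev["day"] for ev in evs}
        aggregates = {"event_count": len(evs), "active_days": len(days)}
        if not scoring_condition(aggregates):
            continue  # scoring condition not fulfilled -> no row
        if max(days) - min(days) + 1 < TARGET_WINDOW_DAYS:
            continue  # data span too short to compute the target window
        # Toy target: did the user purchase within the observed data?
        target = int(any(ev["event_type"] == "purchase" for ev in evs))
        table.append({"user_id": user_id, **aggregates, "target": target})
    return table

events = [
    {"user_id": "u1", "day": 1, "event_type": "view"},
    {"user_id": "u1", "day": 3, "event_type": "purchase"},
    {"user_id": "u2", "day": 1, "event_type": "view"},
    {"user_id": "u2", "day": 2, "event_type": "view"},
]
table = build_analytical_table(events, lambda a: a["event_count"] >= 1)
```

Here `u2` gets no row: its events span only 2 days, so the 3-day target window cannot be computed, matching the exclusion rule described above.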