The model is being trained incrementally on the user’s system.
At first, only a fraction of the training data is used.
The SDK “listens” for clues about the training process. Those clues are forwarded to a data selection engine, which suggests the subset of data to use next.
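As a rough illustration, the "suggest a subset" step can be implemented as uncertainty sampling, a standard active-learning heuristic. The sketch below is hypothetical (the actual selection engine is not described here): it picks the samples the current model is least confident about.

```python
import numpy as np

def select_next_batch(probs: np.ndarray, batch_size: int) -> np.ndarray:
    """Pick the samples the model is least certain about.

    probs: (n_samples, n_classes) predicted class probabilities.
    Returns the indices of the `batch_size` samples with the lowest
    top-class probability -- a common active-learning heuristic.
    """
    confidence = probs.max(axis=1)          # certainty of the top prediction
    return np.argsort(confidence)[:batch_size]

# Example: four samples, two classes
probs = np.array([[0.90, 0.10],
                  [0.55, 0.45],
                  [0.80, 0.20],
                  [0.51, 0.49]])
select_next_batch(probs, 2)  # -> [3, 1], the two most uncertain samples
```

In practice the engine would also weigh class balance and data diversity, but least-confidence selection captures the core idea.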
Annotations for the selected data are generated immediately. No waiting period.
The results are returned to the auditing module, which identifies anomalies.
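One simple way an auditing module can flag anomalies is to compare each incoming label against a confident model prediction. The heuristic below is an illustrative assumption, not the product's actual auditing logic:

```python
def flag_anomalies(labels, model_preds, confidences, threshold=0.9):
    """Flag labels that a confident model disagrees with.

    Hypothetical audit heuristic: a label is suspicious when the model
    predicts a different class AND its confidence exceeds `threshold`.
    Returns the indices of suspicious labels for re-routing.
    """
    return [
        i
        for i, (y, p, c) in enumerate(zip(labels, model_preds, confidences))
        if y != p and c >= threshold
    ]

flag_anomalies(
    labels=["cat", "dog", "cat"],
    model_preds=["cat", "cat", "dog"],
    confidences=[0.99, 0.95, 0.60],
)  # -> [1]: index 2 also disagrees, but the model is not confident there
```

Only confident disagreements are flagged, which keeps the review queue small; genuinely ambiguous samples pass through untouched.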
Anomalous data is re-routed for correction, either to the same labeling company or process or to a different one.
You can also choose to fix the labels yourself.
Labeling partners are penalized when they make too many mistakes, and those penalties feed back into future routing recommendations.
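A minimal sketch of how such a penalty could influence future recommendations, assuming each partner carries a reliability score updated after every audited batch (the scoring rule here is hypothetical):

```python
def update_reliability(score: float, errors: int, total: int,
                       weight: float = 0.3) -> float:
    """Update a partner's reliability score after an audited batch.

    Exponential moving average of per-batch accuracy: partners whose
    batches contain more audit-flagged errors see their score drop,
    which lowers their share of future routing.
    """
    batch_accuracy = 1 - errors / total
    return (1 - weight) * score + weight * batch_accuracy

# A partner at 0.9 delivers a batch with 50 errors out of 100 labels:
update_reliability(0.9, errors=50, total=100)  # -> 0.78
```

Routing the next batch then becomes a matter of ranking partners by score, so poor batches reduce a partner's future workload rather than cutting them off outright.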
Each batch of labels can be visualized and analyzed on the auditing module and the labeling dashboard.
The labels are versioned and sent back to the SDK, which resumes training with the optimal data.
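Label versioning can be as simple as snapshotting the label set on every commit, so any earlier state can be restored for a training run. A minimal illustrative store (not the product's actual implementation):

```python
import copy

class LabelStore:
    """Minimal versioned label store (illustrative sketch).

    Each commit snapshots the full label set, so training can resume
    from any version; corrections never overwrite history.
    """
    def __init__(self):
        self.versions = []

    def commit(self, labels: dict) -> int:
        """Snapshot the labels and return the new version id."""
        self.versions.append(copy.deepcopy(labels))
        return len(self.versions) - 1

    def checkout(self, version: int) -> dict:
        """Return a copy of the labels at the given version."""
        return copy.deepcopy(self.versions[version])

store = LabelStore()
v0 = store.commit({"img1.png": "cat"})          # original labels
v1 = store.commit({"img1.png": "dog"})          # after correction
store.checkout(v0)  # -> {'img1.png': 'cat'}: earlier labels still retrievable
```

A production system would store deltas and metadata rather than full copies, but the contract is the same: training always references an immutable label version.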
Your model just got trained on tuned training data, without your involvement.