Our Platform

Find the Smart Data inside your Big Data


How Alectio Works

Alectio is a machine learning optimization platform, currently available as a software development kit (SDK).

In practice, Alectio is a wrapper around your model. As you train your model, Alectio “listens” to what your model likes and what data it needs to become more accurate. It understands which data is actually useful to your algorithms and which isn’t.

The result is that you save on data labeling and spend less time and money training your models, all without trading down in performance.

Smart Data > Big Data

At Alectio, we understand not all data is created equal. Some data is helpful to your model, other data is irrelevant, and some data is actively harmful. The trick is figuring out which data is what. 

That’s exactly what our platform does. We use an ensemble method that combines active learning, reinforcement learning, meta-learning, and more to recommend the smart data hidden inside your big data training sets.


Total data privacy


Alectio is unique. Our product doesn’t require us to look at your data or your model to work. 

That means you keep your data and your model private. We never see either. You keep your existing workflows. You expose nothing.

Save time & money


Alectio reduces the amount of data you need to train your models. That means less time waiting on your data labeling partner to provide annotations. That means less money spent on those labeling tasks. That means less budget spent on compute.

Get better accuracy


Since Alectio helps you find only the most useful data to train your models with, you skip the irrelevant and harmful information that makes your model fail. In other words, Alectio helps you solve the “garbage in, garbage out” problem, all while saving you money.

What we offer

Data Curation

Alectio’s flagship offering helps you understand what data helps your model learn, what data is irrelevant, and what data is actively hurting your model’s performance. Our data curation solution drastically cuts labeling costs, reduces model training (and retraining) times, and uncovers the data your model really needs to reach the performance you’re after.

Hybrid Labeling Solution

With Alectio, you don’t have to get tied into long-term contracts with big providers. Instead, we combine our model-powered autolabeling solution with a marketplace full of expert, responsive, nimble labeling companies to get you the best labels for your data. That means combining the best of machine and human intelligence to get you faster turnaround on every row of data you label.

Data Collection

Alectio not only helps you find the best data to train your models with; we can also show you what data to collect next. Once you know the information your model needs to learn, the data it already understands, and the data that hurts its performance, you no longer have to collect everything. You just need to collect the right things.

Data Filtering

Alectio can also help with on-edge data collection by filtering and curating your data as you collect it. This can be especially useful for domains like autonomous vehicles, where large volumes of data are collected and stored in the cloud and where labeling costs are especially high. Instead of collecting everything, we’ll help you save only the most important information, in real time.

FAQs

What is active learning?

Active learning is a semi-supervised machine learning strategy. Generally speaking, active learning aims to reduce the amount of labeled data required to train an effective model. AL models do this by first learning from a random sample of data, after which the model actively requests labels for the specific types of data that would most improve its performance. This lets the model converge faster while using less data. Alectio has extensive expertise in active learning, and it forms a crucial cornerstone of our platform.
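The loop described above can be sketched in a few lines. This is a generic, illustrative example using scikit-learn and a synthetic dataset (the model, seed size, and batch size of 25 are arbitrary assumptions, not Alectio's implementation):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Toy pool-based active learning with least-confidence sampling.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

rng = np.random.default_rng(0)
labeled = list(rng.choice(len(X), size=50, replace=False))   # random seed set
unlabeled = [i for i in range(len(X)) if i not in set(labeled)]

model = LogisticRegression(max_iter=1000)
for _ in range(5):
    model.fit(X[labeled], y[labeled])
    probs = model.predict_proba(X[unlabeled])
    uncertainty = 1.0 - probs.max(axis=1)        # least confidence score
    query_idx = np.argsort(uncertainty)[-25:]    # 25 most uncertain points
    picked = [unlabeled[i] for i in query_idx]
    labeled += picked                            # "request" labels for them
    unlabeled = [i for i in unlabeled if i not in set(picked)]

print(f"labeled pool size: {len(labeled)}")  # 50 + 5 * 25 = 175
```

The key idea is that after each round the model itself picks the next batch to label, instead of labeling the whole 2,000-point pool up front.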

How does active learning help with labeling costs?

A lot of data we use to train models is either of minimal value (think duplicative data) or is actually harmful (think of mislabeled or spammy data). Training models with a lot of useless or detrimental data reduces their efficacy. Active learning seeks to solve that problem by listening to what your models need to succeed.

Aren’t there other solutions to reduce labeling costs?

There are, yes. Labeling costs are a real issue for a lot of businesses, either because of the volume of their typical dataset or because getting quality labels is slow or expensive (think of use cases where the labels need to come from experts like surgeons, lawyers, or geophysicists). Labeling is also expensive because you often need to label data more than once. Crowdsourced labeling has helped in some cases, but some companies don’t want to share data with third parties, simply cannot because of privacy concerns, or need expertise the crowd doesn’t have.

Human-in-the-loop and Snorkel are two popular approaches, but there are still issues you may run into. For example, both still require you to label a ton of data (more efficient though they may be), which is often a waste of time and money.

Our approach at Alectio is different because we’re interested in finding and prioritizing the most useful data for your models to ingest. This solves the issues around labeling bottlenecks, overfitting, compute resources, and more since you label less data but label the right data for your project.

Doesn't active learning use a lot of compute resources?

Active learning was originally developed to help people save on labeling costs, not compute resources. That means that yes, it can use more compute resources than “regular” supervised learning. That said, active learning can actually help reduce your consumption of compute resources if your number of training loops isn’t too high.

Okay, so Alectio is an active learning company?

We definitely leverage active learning here at Alectio, but we combine it with reinforcement learning, meta-learning, information theory, entropy analysis, topological data analysis, Data Shapley, and more. That’s because active learning in and of itself isn’t enough to get you the results you need. And as we mentioned in our last answer, active learning was originally designed to reduce labeling costs; it generally will not reduce the compute power or training time your problem requires. Combining it with other methods and concepts helps keep those in check.

So why isn’t active learning used more in the industry?

Largely because people are most familiar with one kind of active learning, where the model selects the data it is least confident about to train on next. That works in academia, where the data is clean and the labels are accurate, but it breaks down in the real world, where data can be messy.
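For context, "least confidence" simply scores each unlabeled point by how unsure the model is about its top prediction. A minimal sketch (the scoring function and example probabilities are illustrative, assuming a scikit-learn-style `predict_proba` output):

```python
import numpy as np

def least_confidence_scores(probs: np.ndarray) -> np.ndarray:
    """Score each point by 1 - (max class probability): higher = more uncertain.
    Mislabeled or junk points often look maximally uncertain too, which is why
    this strategy alone can misfire on messy real-world data."""
    return 1.0 - probs.max(axis=1)

# A confident prediction scores low; a 50/50 prediction scores high.
probs = np.array([[0.9, 0.1],
                  [0.5, 0.5]])
scores = least_confidence_scores(probs)
print(scores.round(2))  # [0.1 0.5]
```

Note that a hopelessly mislabeled or out-of-distribution point also tends to produce a near-uniform probability vector, so this score alone cannot tell "informative" apart from "garbage."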

How can using less data lead to better model performance?

The belief that more data always means better performance is definitely pervasive. But remember something we said up above: not all data is created equal. Some of it is really useful for model training. Some of it less so (redundant data, for example, can cause overfitting). Some of it is actively harmful (mislabeled data, for example, can cause serious confusion).

What kind of data does Alectio work with?

We can work with virtually any kind of data, though our approach especially excels with images. We’re data-type agnostic because our tech learns from the metadata (log files) generated by the training process itself, not from the raw data.

Will Alectio help me with feature engineering?

Our technology identifies which records are the most impactful and useful to a model, not which features should be used in a model. That said, since we can identify which data is useless to a learning process, it can occasionally be used to find weaknesses in the model itself, which can in turn help with feature engineering.

What if I don’t have a model yet or I’m still developing it?

In most situations, usefulness is actually a function of the use case and the data rather than the model itself. Think about a facial recognition problem: regardless of whether you’ve selected a model, images without a person in them or with poor resolution are going to be less useful than other data. We can uncover that without knowing what model you’re using.

So data usefulness isn’t model-specific?

Usually not! Our research shows that usefulness is data-specific, not model-specific. For example, data uselessness is usually due to either redundancy or irrelevance, and while irrelevance is use-case specific, redundancy is a more general concept. Data harmfulness is also fairly use-case agnostic. You can read a bit more about that here.

Can I still use human-in-the-loop to curate my data?

Of course! Many companies have dedicated teams focused on data curation these days. The issue is that people don’t understand how models work, especially black-box models like deep neural networks. Having them decide which data matters often amounts to wild guessing and can inject biases into your data. At Alectio, we sometimes say we give the model a voice. It decides what data it needs to learn.


Get Started

Try Alectio for free or get in touch with us and let us show you how we can help.