How to Build Better Multilingual Datasets for Computer Vision Training
Computer vision is an integral part of the modern world. Facial recognition, autonomous vehicles, and image-based diagnostics are just a few of the technologies that rely on it to operate. So, what is computer vision? Simply put, it’s an interdisciplinary field that enables computers to “see” and interpret the content of digital images and videos.
How computer vision differs from image processing
The goal of computer vision is to replicate the human visual system, including the ability to describe an image, summarize a video, or recognize a face after “seeing” it once. This differentiates it from image processing, which can create a new image from an existing one, but can’t understand it.
However, image processing often plays an important role in computer vision training. When it’s applied to raw images, it can help the model analyze training data by (see the short sketch after this list):
- Cropping the bounds of the image.
- Adjusting brightness and color.
- Removing shadows or other digital noise.
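As an illustration, here’s a minimal preprocessing sketch using the Pillow imaging library; the file names, crop box, and adjustment values are hypothetical and would depend on your raw data.

```python
from PIL import Image, ImageEnhance, ImageFilter

# Hypothetical raw training image; the file name is illustrative.
img = Image.open("raw_street_photo.jpg").convert("RGB")

# Crop to the region of interest: (left, upper, right, lower) in pixels.
img = img.crop((100, 50, 900, 650))

# Adjust brightness and color saturation (factors above 1.0 increase them).
img = ImageEnhance.Brightness(img).enhance(1.2)
img = ImageEnhance.Color(img).enhance(1.1)

# Soften digital noise with a small median filter.
img = img.filter(ImageFilter.MedianFilter(size=3))

img.save("cleaned_street_photo.jpg")
```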
The model then uses a machine-learning algorithm to recognize objects from patterns of pixels. Yet the process is nowhere near as simple as a human glancing at a photo of a car and identifying it instantly. Computer vision requires a significant amount of data and training to do so. In this article, we’ll explore how computer vision training works and how to generate the multilingual datasets that allow models to operate worldwide.
How does computer vision training work?
Computer vision training involves teaching deep neural networks (DNNs) through supervised learning. This type of learning uses labeled training data to show the model what the desired output looks like. Different types of models can answer different questions about an image, such as:
- Which objects are in it?
- Where are the objects in it?
- What are the interest points of an object?
- Which pixels belong to each object?
Types of models
Before we dive deeper into how computer vision training works, let’s take a look at the types of models that can answer the questions above.
Image classification. An image classification model attempts to identify the most important object class in an image. A class is a group of similar objects such as humans, buildings, or vehicles. This model determines which objects are in an image.
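To make this concrete, here’s a minimal classification sketch assuming a recent version of PyTorch and torchvision; the photo name is hypothetical, and a real project would map the predicted class index back to a human-readable label.

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Hypothetical input photo; any RGB image works here.
img = Image.open("rush_hour.jpg").convert("RGB")

# Standard ImageNet preprocessing: resize, crop, convert to tensor, normalize.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet50(weights="DEFAULT")  # ImageNet-pretrained classifier
model.eval()

with torch.no_grad():
    logits = model(preprocess(img).unsqueeze(0))       # [1, 1000] class scores
    probabilities = torch.softmax(logits, dim=1)
    confidence, class_index = probabilities.max(dim=1)

# The output is a label (class index) plus a confidence score.
print(f"class {class_index.item()} with confidence {confidence.item():.2f}")
```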
Object detection. When the location of an object is important, an object detection model attempts to identify where it is. It returns a set of coordinates, called a bounding box, that encloses the area containing the object. This model determines where objects are in an image.
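As a rough sketch, the snippet below runs a pretrained detector from torchvision on a hypothetical traffic photo; the 0.8 confidence threshold is an arbitrary example value.

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Hypothetical traffic photo; the detector expects a 0-1 float tensor.
img = transforms.ToTensor()(Image.open("rush_hour.jpg").convert("RGB"))

model = models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

with torch.no_grad():
    prediction = model([img])[0]  # boxes, labels, and scores for one image

# Each detection is a bounding box, a class label, and a confidence score.
for box, label, score in zip(prediction["boxes"],
                             prediction["labels"],
                             prediction["scores"]):
    if score.item() > 0.8:  # keep only confident detections
        print(f"class {label.item()} at {box.tolist()} "
              f"(score {score.item():.2f})")
```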
Object landmark detection. An object landmark detection model labels specific interest points in an image to capture its important features. This model determines the interest points of an object.
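For a concrete, if narrow, example, torchvision ships a pretrained keypoint model that predicts 17 human body landmarks per detected person; other landmark models, such as facial-landmark detectors, follow the same idea. The photo name below is hypothetical.

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Hypothetical photo of pedestrians; the model predicts 17 body keypoints
# (eyes, shoulders, knees, and so on) for every person it detects.
img = transforms.ToTensor()(Image.open("pedestrians.jpg").convert("RGB"))

model = models.detection.keypointrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

with torch.no_grad():
    prediction = model([img])[0]

# Shape [num_people, 17, 3]: each keypoint is (x, y, visibility).
print(prediction["keypoints"].shape)
```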
Image segmentation. Sometimes knowing the exact shape of an object is necessary in order to complete a task. When that’s the case, an image segmentation model will draw pixel-level boundaries for each object and attempt to classify each region by object type. This model determines which pixels belong to each object.
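The sketch below shows the idea with a pretrained semantic segmentation model from torchvision, which assigns one of 21 Pascal VOC classes to every pixel; the file name is hypothetical.

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Standard ImageNet-style preprocessing for the segmentation backbone.
preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
img = preprocess(Image.open("street.jpg").convert("RGB")).unsqueeze(0)

model = models.segmentation.deeplabv3_resnet50(weights="DEFAULT")
model.eval()

with torch.no_grad():
    scores = model(img)["out"]     # [1, 21, H, W] per-pixel class scores
    mask = scores.argmax(dim=1)    # [1, H, W] class index for each pixel

# The mask tells you which pixels belong to which object class.
print(mask.shape, mask.unique())
```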
Once a model learns to answer a specific question, it can use this ability to solve problems. For example, an image classification model can analyze an image of rush hour traffic and attempt to determine how many vehicles are on the road. Typically, the model’s output consists of a label and a confidence score, which is an estimate of the likelihood that it correctly labeled the object.
Types of training datasets
In order to learn how to meet their objectives, computer vision models use training data to practice. And proper training begins with a quality dataset. The training data should resemble the real-world data the model will need to analyze, but the type of dataset you use will depend on your goals and the degree of accuracy you need. Below are four options:
Use an existing annotated dataset. This option works well if you need to quickly develop a general detection model that fits into a prototype. It won’t deliver the same degree of accuracy as the other options below, but it will allow you to demonstrate what the model can do.
Build and annotate your own dataset. If you want a model that performs one task well, you can build your own dataset using images that closely resemble the environment your model will perform in. It can include photos and videos you took yourself as well as stock images. However, before you include an image, consider the angle, lighting, resolution, and any other characteristics that may impact the model’s ability to analyze it.
Use a digitally generated dataset. If you can’t collect enough data on your own, you can use synthetic data to generate a much larger dataset and achieve better results. This method is especially useful when you want to train a model to recognize unusual circumstances.
Augment your data. You can boost your dataset through data augmentation. This involves modifying images from an existing dataset. You can flip, rotate, crop, pad, or modify images in other ways to make them different enough to create a new data point. This adds variety to your dataset and helps you avoid overfitting.
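Here’s a minimal augmentation pipeline sketch, assuming torchvision; the specific transforms and parameter values are illustrative choices.

```python
from PIL import Image
from torchvision import transforms

# Each pass through this pipeline produces a slightly different variant
# of the same source image: flipped, rotated, padded, cropped, or recolored.
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomRotation(degrees=15),
    transforms.RandomCrop(size=224, padding=16, pad_if_needed=True),
    transforms.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2),
])

img = Image.open("car.jpg").convert("RGB")    # hypothetical source image
variants = [augment(img) for _ in range(5)]   # five new data points
```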
Dataset annotation, model training, and testing
Unless you use an existing annotated dataset, the next step is to label or annotate the data. This process involves selecting a portion of an image and assigning a label to that region. Examples include (a sample annotation record follows this list):
- Drawing bounding boxes for object detection
- Tracing objects for semantic and instance segmentation
- Identifying interest points or landmarks
- Placing images in groups and labeling each group for image classification
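As a point of reference, here’s a simplified annotation record for a single image, expressed as a Python dictionary and loosely modeled on the COCO format; the field names and values are illustrative, not a prescribed schema.

```python
# One annotated image: a class label, a bounding box, a segmentation
# polygon, and a landmark for the same object. All values are made up.
annotation = {
    "image": "rush_hour_0001.jpg",
    "width": 1920,
    "height": 1080,
    "objects": [
        {
            "label": "car",
            "bbox": [412, 365, 180, 95],  # [x, y, width, height] in pixels
            "segmentation": [[412, 365, 592, 365, 592, 460, 412, 460]],
            "landmarks": [[502, 380, 2]],  # (x, y, visibility) points
        },
    ],
}
```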
As it analyzes the dataset, the model compares its predictions to the annotations and adjusts its parameters to improve accuracy. You can repeat this process until the model achieves the results you want based on the metrics you set.
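A minimal supervised training loop, assuming PyTorch, looks roughly like the sketch below; the random tensors stand in for a real annotated dataset, and the class count, learning rate, and epoch count are placeholder values.

```python
import torch
from torch import nn, optim
from torch.utils.data import DataLoader, TensorDataset
from torchvision import models

# Fake stand-ins for an annotated dataset: 32 random "images" and labels.
images = torch.randn(32, 3, 64, 64)
labels = torch.randint(0, 10, (32,))
loader = DataLoader(TensorDataset(images, labels), batch_size=8)

model = models.resnet18(num_classes=10)  # 10 hypothetical object classes
criterion = nn.CrossEntropyLoss()        # measures distance from the labels
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

for epoch in range(3):
    for batch_images, batch_labels in loader:
        optimizer.zero_grad()
        predictions = model(batch_images)            # the model's guesses
        loss = criterion(predictions, batch_labels)  # compare to annotations
        loss.backward()                              # compute adjustments
        optimizer.step()                             # update the weights
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```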
Once the model meets your metrics with annotated data, continue training it by feeding it new, unannotated images. You can add similar photos or videos that you took yourself to quickly surface any shortcomings or edge cases the model struggles with. Then you can retrain the model using annotations from images that depict those edge cases, or simply note the environments in which it performs best.
The challenges of multilingual data training
As you can see, training a computer vision model to achieve the results you want is a challenging and time-consuming process. Data preparation and labeling tasks alone take up 80% of the time spent on AI initiatives. Multilingual projects add another layer of complexity due to the need for multilingual annotated data. Below are three major challenges for these types of projects:
- Network size. Companies may not have a big enough network to cover all the desired locations and languages.
- Data quality. Many data sourcing companies use a crowdsourcing system. While this works for some types of content, it makes it difficult to implement the quality control necessary for cleaner datasets.
- Volume. Quality datasets with enough volume are more abundant in English than in other languages. Companies that want to train models in additional languages may struggle to build large enough training datasets.
To solve these issues, we recommend partnering with a language service provider that:
- Has a worldwide network of native speakers living in-country.
- Provides managed services and can source subject matter experts or set qualifications for people working on your data.
- Uses English data as the source and has professional linguists provide the translations.
How Argos AI can help
We launched Argos AI to help companies meet the growing demand for multilingual data collection, annotation, validation, and more. Using our expertise in Natural Language Processing (NLP), we build custom programs for enterprise clients that facilitate computer vision training and improve results. Our global team offers services in over 75 languages and provides the support you need at every step of the process.
Interested in learning more about how Argos AI can improve your computer vision models? Get in touch for more information.