Citation Link: https://doi.org/10.25819/ubsi/10510
Synthesising large, low cost and diverse datasets for robust semantic segmentation in self-driving tasks
Source Type
Other
Institute
Issue Date
2022
Abstract
Robust scene understanding algorithms are crucial for the success of autonomous navigation. Unfortunately, supervised learning of semantic segmentation requires large and diverse datasets. For some self-driving tasks, such as navigating a robot within an industrial facility, no freely available datasets exist, and manually annotating large datasets is impractical for smaller development teams. While there are approaches to automatically generate synthetic data, they can be computationally expensive, require significant preparation effort, or lack a wide variety of features. This paper presents a new framework for generating synthetic datasets with high variance at low computational cost that can easily be adapted to different self-driving tasks. The details of the framework can be found at https://github.com/cITIcar/SAD-Generator. As a demonstration, this approach was applied to a semantic segmentation task on a miniature road with random obstacles, lane markings, and disturbing artifacts. A U-Net was trained on synthesized data and later fine-tuned with a small amount of manually annotated data. This resulted in an improvement of 2.5 percentage points in pixel accuracy (PA) and 11.19 percentage points in mean intersection over union (mIoU).
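As a reference for the two metrics reported in the abstract, the following is a minimal sketch (not taken from the paper's repository) of how pixel accuracy and mean intersection over union are commonly computed on flattened label maps:

```python
def pixel_accuracy(pred, target):
    """Fraction of pixels whose predicted class matches the ground truth."""
    correct = sum(p == t for p, t in zip(pred, target))
    return correct / len(target)

def mean_iou(pred, target, num_classes):
    """Mean over classes of |intersection| / |union|.

    Classes absent from both prediction and ground truth are skipped,
    a common convention so that unused classes do not drag the mean down.
    """
    ious = []
    for c in range(num_classes):
        inter = sum(p == c and t == c for p, t in zip(pred, target))
        union = sum(p == c or t == c for p, t in zip(pred, target))
        if union > 0:
            ious.append(inter / union)
    return sum(ious) / len(ious)
```

For example, with prediction `[0, 0, 1, 1]` against ground truth `[0, 1, 1, 1]`, pixel accuracy is 0.75, while mIoU averages the per-class IoUs (0.5 for class 0 and 2/3 for class 1).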
Description
This article presents a framework to artificially generate computer vision datasets with high variance at low computational cost that is easily adaptable to different semantic segmentation tasks.
The source code for this article is available on GitHub (https://github.com/cITIcar/SAD-Generator).
File(s)
Name
Dietz_Romero_Mengel_Czekansky_Synthesising_2nd_edition.pdf
Size
529.3 KB
Format
Adobe PDF
Checksum
(MD5):8f400bc6ede74e604626c003edc34e7a
Owning collection