Citation link: https://doi.org/10.3389/fcomp.2024.1379788
DC Field | Value | Language
crisitem.author.orcid | 0000-0001-5296-5347 | -
dc.contributor.author | Hölzemann, Alexander | -
dc.contributor.author | Van Laerhoven, Kristof | -
dc.date.accessioned | 2024-09-26T07:12:20Z | -
dc.date.available | 2024-09-26T07:12:20Z | -
dc.date.issued | 2024 | de
dc.description | Funded by the DFG-supported open-access publication fund for journal articles of the Universität Siegen | de
dc.description.abstract | Research into the detection of human activities from wearable sensors is a highly active field, benefiting numerous applications, from ambulatory monitoring of healthcare patients through fitness coaching to streamlining manual work processes. We present an empirical study that evaluates and contrasts four commonly employed annotation methods in user studies focused on in-the-wild data collection. For both the user-driven, in situ annotations, where participants annotate their activities during the actual recording process, and the recall methods, where participants retrospectively annotate their data at the end of each day, the participants had the flexibility to select their own set of activity classes and corresponding labels. Our study illustrates that different labeling methodologies directly impact the quality of the annotations, as well as the capabilities of a deep learning classifier trained on the data. We noticed that in situ methods produce fewer but more precise labels than recall methods. Furthermore, we combined an activity diary with a visualization tool that enables participants to inspect and label their activity data. The introduction of this tool allowed us to decrease missing annotations and increase annotation consistency, raising the F1-score of the deep learning model by up to 8% (ranging between 82.1% and 90.4% F1-score). Finally, we discuss the advantages and disadvantages of the methods compared in our study, the biases they could introduce, their consequences for human activity recognition studies, and possible solutions. | en
dc.identifier.doi | 10.3389/fcomp.2024.1379788 | de
dc.identifier.uri | https://dspace.ub.uni-siegen.de/handle/ubsi/2765 | -
dc.identifier.urn | urn:nbn:de:hbz:467-27652 | -
dc.language.iso | en | de
dc.rights | Namensnennung 4.0 International (Attribution 4.0 International) | *
dc.rights.uri | http://creativecommons.org/licenses/by/4.0/ | *
dc.source | Frontiers in computer science ; Vol. 6, 1379788. - https://doi.org/10.3389/fcomp.2024.1379788 | de
dc.subject.ddc | 004 Informatik (computer science) | de
dc.subject.other | Data annotation ambiguity | en
dc.subject.other | Data labeling | en
dc.subject.other | Deep learning | en
dc.subject.other | Dataset | en
dc.subject.other | Annotation method | en
dc.subject.other | Mehrdeutigkeit von Datenkommentaren | de
dc.subject.other | Datenbeschriftung | de
dc.subject.other | Tiefes Lernen | de
dc.subject.other | Datensatz | de
dc.subject.other | Annotationsmethode | de
dc.title | A matter of annotation: an empirical study on in situ and self-recall activity annotations from wearable sensors | en
dc.type | Article | de
item.fulltext | With Fulltext | -
ubsi.contributor.contributor | Lukowicz, Paul | -
ubsi.origin.dspace5 | true | -
ubsi.publication.affiliation | Department Elektrotechnik - Informatik | de
ubsi.source.issn | 2624-9898 | -
ubsi.source.issued | 2024 | de
ubsi.source.pages | 22 | de
ubsi.source.place | Lausanne | de
ubsi.source.publisher | Frontiers Media | de
ubsi.source.title | Frontiers in computer science | de
ubsi.source.volume | 6 | de
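
Note on the metric: the abstract above reports classifier quality as F1-scores (82.1% to 90.4%). As a point of reference, the short Python sketch below shows how a per-class F1-score is computed from prediction counts; the function and the example counts are illustrative assumptions, not code or results from the paper.

# Illustrative sketch only: how an F1-score like the one cited in the
# abstract is computed for a single activity class. The counts below are
# made-up example values, not results from the paper.
def f1_score(tp: int, fp: int, fn: int) -> float:
    """Harmonic mean of precision and recall."""
    precision = tp / (tp + fp)  # correct predictions among all predictions
    recall = tp / (tp + fn)     # correct predictions among all true instances
    return 2 * precision * recall / (precision + recall)

print(f"{f1_score(tp=85, fp=10, fn=20):.3f}")  # prints 0.850
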
Appears in Collections: Geförderte Open-Access-Publikationen
Files in This Item:
File | Size | Format
A_matter_of_annotation.pdf | 2.36 MB | Adobe PDF

This item is licensed under a Creative Commons Attribution 4.0 International License.