What is OPUS?
Siegen University Library provides a free-of-charge repository named OPUS Siegen (OPUS = Online PUblication Server) for publishing, archiving, and retrieving electronic documents produced at the University of Siegen.
What will you find here?
You will find Open Access publications from all faculties of the University of Siegen and from the "universi" publishing house. The University Library applies recognized quality standards and offers support for publishing your documents.
How to participate?
To upload documents, sign in to OPUS via Shibboleth using your ZIMT account.
Recently published
Publication (Open Access): On the Complementarity of Video and Inertial Data for Human Activity Recognition (2025)
With research in fields such as psychiatry having shown strong links between activities and behavior, there has been growing interest in the development of automatic activity recognition systems using machine learning methods, also known as Human Activity Recognition (HAR). Within the last decade, Deep Learning methods have surpassed classical machine learning models in performance and have become the de facto standard learning-based approach for sensor-based HAR. While Deep Learning has largely automated feature extraction from inertial data, reducing the dependence on expert-crafted features, it has inadvertently introduced new challenges to the activity recognition community. This dissertation, structured in two parts, addresses two of these core challenges associated with applying deep learning to inertial-based HAR by leveraging concepts and methodologies from the domain of computer vision.
The first part of the dissertation focuses on the so-called labeling bottleneck, which denotes the considerable manual effort and cost associated with annotating data from wearable inertial sensors. This issue has significantly limited the scale and complexity of publicly available HAR benchmark datasets, thereby negatively affecting methodological progress. In an effort to decrease annotator workload, a weak-annotation pipeline is proposed that only requires labels for representative segments of a synchronously recorded video stream by leveraging the discriminative capabilities of vision foundation models. The second part examines the sliding window problem, referring to the temporal modeling limitations caused by HAR approaches relying on fixed-length window classification. Showcasing a reformulated view of inertial-based HAR, this dissertation introduces vision-based Temporal Action Localization (TAL) into the inertial domain.
Benchmark experiments demonstrate that both existing TAL models from the video domain and a newly proposed TAL-inspired architecture for inertial data significantly outperform classical inertial HAR models. By leveraging inter-segment temporal context, both approaches also exhibit reduced sensitivity to hyperparameters selected during segmentation. The demonstrated use cases show how recent advancements in video-based activity recognition can help overcome limitations inherent to inertial sensing. While each approach exhibits certain constraints, these works offer novel perspectives on long-standing issues and introduce methodologies that, if adopted, could inspire further research and innovation within the inertial HAR community.
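To make the "sliding window problem" concrete, the following is a minimal sketch of the conventional fixed-length window segmentation that the dissertation's TAL-based reformulation moves away from. The function name, window length, and sampling rate are illustrative choices, not taken from the dissertation:

```python
import numpy as np

def sliding_windows(signal: np.ndarray, win_len: int, stride: int) -> np.ndarray:
    """Segment a (timesteps, channels) inertial recording into
    fixed-length, possibly overlapping windows for per-window
    classification -- the classical sensor-based HAR setup."""
    n_windows = 1 + (len(signal) - win_len) // stride
    return np.stack(
        [signal[i * stride : i * stride + win_len] for i in range(n_windows)]
    )

# Example: a 10-second 3-axis accelerometer stream at 50 Hz,
# cut into 2-second windows with 50% overlap.
stream = np.zeros((500, 3))
windows = sliding_windows(stream, win_len=100, stride=50)
print(windows.shape)  # (9, 100, 3)
```

Each window is classified independently, so activities spanning window boundaries are fragmented and the results depend on the chosen window length and stride, which is exactly the hyperparameter sensitivity the TAL-based approaches reduce.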
Publication: Intra- & Extra-Source Exemplar-Based Style Synthesis for Improved Domain Generalization (2023)
The generalization with respect to domain shifts, as they frequently appear in applications such as autonomous driving, is one of the remaining big challenges for deep learning models. We therefore propose an exemplar-based style synthesis pipeline to improve domain generalization in semantic segmentation. Our method is based on a novel masked noise encoder for StyleGAN2 inversion. The model learns to faithfully reconstruct the image, preserving its semantic layout through noise prediction. Random masking of the estimated noise enables the style mixing capability of our model, i.e. it allows altering the global appearance without affecting the semantic layout of an image. Using the proposed masked noise encoder to randomize style and content combinations in the training set, i.e. intra-source style augmentation (ISSA), effectively increases the diversity of training data and reduces spurious correlation. As a result, we achieve up to 12.4% mIoU improvement on driving-scene semantic segmentation under different types of data shifts, i.e. changing geographic locations, adverse weather conditions, and day to night. ISSA is model-agnostic and straightforwardly applicable with CNNs and Transformers. It is also complementary to other domain generalization techniques, e.g. it improves the recent state-of-the-art solution RobustNet by 3% mIoU on Cityscapes to Dark Zürich. In addition, we demonstrate the strong plug-and-play ability of the proposed style synthesis pipeline, which is readily usable for extra-source exemplars, e.g. web-crawled images, without any retraining or fine-tuning. Moreover, we study a new use case to indicate a neural network's generalization capability by building a stylized proxy validation set. This application is of significant practical value for selecting models to be deployed in an open-world environment. Our code is available at https://github.com/boschresearch/ISSA .
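The core idea of style mixing via random noise masking can be illustrated with a toy sketch: positions masked out in one image's estimated noise are filled from another image's noise. This is a deliberately simplified stand-in, not the paper's actual StyleGAN2 encoder; all names and shapes here are hypothetical:

```python
import numpy as np

def masked_noise_mix(noise_a: np.ndarray, noise_b: np.ndarray,
                     mask_ratio: float = 0.5, rng=None) -> np.ndarray:
    """Toy illustration of style mixing by random masking:
    a random subset of positions in image A's estimated noise
    is replaced by image B's noise, perturbing global appearance
    while the surviving positions keep A's reconstruction signal."""
    rng = rng or np.random.default_rng(0)
    mask = rng.random(noise_a.shape) < mask_ratio  # True -> take B's noise
    return np.where(mask, noise_b, noise_a)

# Two stand-in noise maps with distinguishable values.
a = np.zeros((4, 4))
b = np.ones((4, 4))
mixed = masked_noise_mix(a, b, mask_ratio=0.5)
print(mixed.shape)  # (4, 4)
```

In the actual pipeline the masking acts on noise maps predicted by a trained encoder inside a StyleGAN2 inversion, which is what preserves the semantic layout; the sketch only conveys the mix-by-mask mechanism.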
Publication: Dispersive analysis of B → K(*) and Bs → ϕ form factors (2023)
We propose a stronger formulation of the dispersive (or unitarity) bounds à la Boyd-Grinstein-Lebed (BGL), which are commonly applied in analyses of the hadronic form factors for B decays. In our approach, the existing bounds are split into several new bounds, thereby disentangling form factors that are jointly bounded in the common approach. This leads to stronger constraints on these objects, to a significant simplification of our numerical analysis, and to the removal of spurious correlations among the form factors. We apply these novel bounds to $\overline{B}\to \overline{K}^{(*)}$ and $\overline{B}_s\to \phi$ form factors by fitting them to purely theoretical constraints. Using a suitable parametrization, we take into account the form factors' below-threshold branch cuts arising from on-shell $\overline{B}_s\pi^0$ and $\overline{B}_s\pi^0\pi^0$ states, which have so far been ignored in the literature. In this way, we eliminate a source of hard-to-quantify systematic uncertainties. We provide machine-readable files to obtain the full set of $\overline{B}\to \overline{K}^{(*)}$ and $\overline{B}_s\to \phi$ form factors in and beyond the entire semileptonic phase space.
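For readers unfamiliar with the BGL setup referenced in the abstract, the standard construction can be summarized as follows (a textbook-level sketch in generic notation, not the paper's specific formulation):

```latex
% Conformal variable mapping the pair-production cut t > t_+ onto the unit disk |z| < 1
z(t) = \frac{\sqrt{t_+ - t} - \sqrt{t_+ - t_0}}{\sqrt{t_+ - t} + \sqrt{t_+ - t_0}}

% BGL expansion of a form factor F, with Blaschke factor B(z) removing
% sub-threshold poles and outer function \phi(z) encoding the dispersive kernel
F(t) = \frac{1}{B(z)\,\phi(z)} \sum_{n \ge 0} a_n\, z^n

% Common (joint) unitarity bound over all form factors F entering one dispersive integral
\sum_{F} \sum_{n \ge 0} \bigl(a_n^{F}\bigr)^2 \le 1
```

The paper's "stronger formulation" splits such a joint bound into several separate bounds, so that each form factor is constrained individually rather than only through the shared sum.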
Publication: Non-factorisable effects in the decays $\overline{B}_s^0\to D_s^{+}\pi^{-}$ and $\overline{B}^0\to D^{+}K^{-}$ from LCSR (2023)
In light of the current discrepancies between the recent predictions based on QCD factorisation (QCDF) and the experimental data for several non-leptonic colour-allowed two-body B-meson decays, we obtain new determinations of the non-factorisable soft-gluon contribution to the decays $\overline{B}_s^0\to D_s^{+}\pi^{-}$ and $\overline{B}^0\to D^{+}K^{-}$, using the framework of light-cone sum rules (LCSR), with a suitable three-point correlation function and B-meson light-cone distribution amplitudes. In particular, we discuss the problem associated with a double light-cone (LC) expansion of the correlator, and motivate future determinations of the three-particle B-meson matrix element with the gluon and the spectator quark aligned along different light-cone directions. Performing a LC-local operator product expansion of the correlation function, we find, for both modes considered, the non-factorisable part of the amplitude to be sizeable and positive, however with very large systematic uncertainties. Furthermore, we also determine for the first time, using LCSR, the factorisable amplitudes at LO-QCD, and thus the corresponding branching fractions. Our predictions are in agreement with the experimental data and consistent with the results based on QCDF, although again within very large uncertainties. In this respect, we provide a rich outlook for future improvements and investigations.
Publication: Taming new physics in b → cūd(s) with τ(B+)/τ(Bd) and $a_{sl}^d$ (2023)
Inspired by the recently observed tensions between the experimental data and the theoretical predictions, based on QCD factorisation, for several colour-allowed non-leptonic B-meson decays, we study the potential size of new physics (NP) effects in the decay channels b → cūd(s). Starting from the most general effective Hamiltonian describing the b → cūd(s) transitions, we compute NP contributions to the theoretical predictions of the B-meson lifetime and of B-mixing observables. The well-known lifetime ratio τ(B+)/τ(Bd) and the experimental bound on the semileptonic CP asymmetry $a_{sl}^d$ provide strong, complementary constraints on some of the NP Wilson coefficients.

