Browsing by Author "Waltemath, Dagmar"
Now showing 1 - 7 of 7
- Item: Annotation-based feature extraction from sets of SBML models (BioMed Central, 2015-04-15)
  Authors: Alm, Rebekka; Waltemath, Dagmar; Wolfien, Markus; Wolkenhauer, Olaf; Henkel, Ron
  Background: Model repositories such as BioModels Database provide computational models of biological systems for the scientific community. These models contain rich semantic annotations that link model entities to concepts in well-established bio-ontologies such as the Gene Ontology. Consequently, thematically similar models are likely to share similar annotations. Based on this assumption, we argue that semantic annotations are a suitable tool for characterizing sets of models. These characteristics improve model classification, help identify additional features for model retrieval tasks, and enable the comparison of sets of models.
  Results: In this paper we discuss four methods for annotation-based feature extraction from model sets. We tested all methods on sets of models in SBML format composed from BioModels Database. To characterize each of these sets, we analyzed and extracted concepts from three frequently used ontologies, namely the Gene Ontology, ChEBI and the SBO. We find that three of the four methods are suitable for determining characteristic features of arbitrary sets of models: the selected features vary with, and are specific to, the underlying model set. We show that the identified features map to concepts higher up in the ontology hierarchies than the concepts used for model annotation. Our analysis also reveals that the information content of ontology concepts and their usage for model annotation do not correlate.
  Conclusions: Annotation-based feature extraction enables the comparison of model sets, as opposed to existing methods for model-to-keyword or model-to-model comparison.
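A minimal illustrative sketch of the idea behind this paper, not code from it: collect the ontology-term URIs referenced by the RDF annotations of a set of SBML files and count how often each term occurs across the set. The file names and the frequency-based notion of a "characteristic feature" are hypothetical simplifications.

```python
# Illustrative sketch (not from the paper): count ontology-term URIs found in
# the RDF annotations of a hypothetical set of SBML files.
from collections import Counter
import xml.etree.ElementTree as ET

RDF_NS = "{http://www.w3.org/1999/02/22-rdf-syntax-ns#}"

def annotation_uris(sbml_path):
    """Yield every rdf:resource URI that appears anywhere in the model file."""
    tree = ET.parse(sbml_path)
    for elem in tree.iter():
        uri = elem.get(RDF_NS + "resource")
        if uri:
            yield uri

model_set = ["model1.xml", "model2.xml", "model3.xml"]  # hypothetical model set
term_counts = Counter(uri for path in model_set for uri in annotation_uris(path))

# Terms shared by many models in the set are candidates for characteristic features.
for uri, count in term_counts.most_common(10):
    print(count, uri)
```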
- Item: COMODI: an ontology to characterise differences in versions of computational models in biology (BioMed Central, 2016-07-11)
  Authors: Scharm, Martin; Waltemath, Dagmar; Mendes, Pedro; Wolkenhauer, Olaf
  Background: Open model repositories provide ready-to-reuse computational models of biological systems. Models within those repositories evolve over time, leading to different model versions. Taken together, the underlying changes reflect a model's provenance and can thus give valuable insights into the studied biology. Currently, however, changes cannot be interpreted semantically. To improve this situation, we developed an ontology of terms describing changes in models. The ontology can be used by scientists and within software to characterise model updates at the level of single changes. When studying or reusing a model, these annotations help determine the relevance of a change in a given context.
  Methods: We manually studied changes in selected models from BioModels and the Physiome Model Repository. Using the BiVeS tool for difference detection, we then performed an automatic analysis of changes in all models published in these repositories. The resulting set of concepts led us to define candidate terms for the ontology. In a final step, we aggregated and classified these terms and built the first version of the ontology.
  Results: We present COMODI, an ontology needed because COmputational MOdels DIffer. It empowers users and software to describe changes in a model at the semantic level. COMODI also enables software to implement user-specific filter options for the display of model changes. Finally, COMODI is a step towards predicting how a change in a model influences the simulation results.
  Conclusion: COMODI, coupled with our algorithm for difference detection, ensures the transparency of a model's evolution and enhances the traceability of updates and error corrections. COMODI is encoded in OWL and is openly available at http://comodi.sems.uni-rostock.de/.
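Since COMODI is encoded in OWL, an ontology library can load it and enumerate its terms. The sketch below assumes rdflib and uses a placeholder file name ("comodi.owl"); the abstract only gives the project page, where the actual OWL file is linked, so the download location is an assumption.

```python
# Illustrative sketch: load the COMODI OWL file with rdflib and list its classes.
# "comodi.owl" is a placeholder for the file obtained from http://comodi.sems.uni-rostock.de/.
from rdflib import Graph
from rdflib.namespace import OWL, RDF, RDFS

g = Graph()
g.parse("comodi.owl", format="xml")  # OWL serialized as RDF/XML (assumption)

for cls in g.subjects(RDF.type, OWL.Class):
    label = g.value(cls, RDFS.label)
    if label:
        print(label, cls)  # terms that can annotate individual model changes
```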
- Item: Evolution of computational models in BioModels Database and the Physiome Model Repository (BioMed Central, 2018-04-12)
  Authors: Scharm, Martin; Gebhardt, Tom; Toure, Vasundra; Bagnacani, Andrea; Salehzadeh-Yazdi, Ali; Wolkenhauer, Olaf; Waltemath, Dagmar
  Background: A useful model is one that is being (re)used. The development of a successful model does not end with its publication. During reuse, models are modified, i.e. expanded, corrected and refined. Even small changes in the encoding of a model can, however, significantly affect its interpretation. Our motivation for the present study is to identify changes in models and to make them transparent and traceable.
  Methods: We analysed 13,734 models from BioModels Database and the Physiome Model Repository. For each model, we studied the frequency and types of updates between its first and latest release. To demonstrate the impact of changes, we explored the history of a Repressilator model in BioModels Database.
  Results: We observed continuous updates in the majority of models. Surprisingly, even early models are still being modified. We furthermore detected that many updates target annotations, which improves the information one can gain from the models. To support the analysis of changes in model repositories, we developed MoSt, an online tool for visualising changes in models. The scripts used to generate the data and figures for this study are available from GitHub (github.com/binfalse/BiVeS-StatsGenerator) and as a Docker image (hub.docker.com/r/binfalse/bives-statsgenerator). The website most.bio.informatik.uni-rostock.de provides interactive access to model versions and their evolution statistics.
  Conclusion: The reuse of models is still impeded by a lack of trust and documentation. Detailed and transparent documentation of all aspects of a model, including its provenance, will improve this situation. Knowledge about a model's provenance can prevent the repetition of mistakes that others have already faced, and it gives insight into how understanding of the system evolves from initial findings to a more profound picture. We argue that it is the responsibility of the maintainers of model repositories to offer transparent model provenance to their users.
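The study itself uses BiVeS for difference detection; the fragment below is only a simplified stand-in showing one way to surface changes between two releases of a model, here by comparing the ontology-term URIs they annotate. The file names are hypothetical.

```python
# Simplified stand-in for difference detection between two model releases
# (the actual analysis in the paper uses BiVeS). File names are hypothetical.
import xml.etree.ElementTree as ET

RDF_NS = "{http://www.w3.org/1999/02/22-rdf-syntax-ns#}"

def uris(path):
    """Return the set of annotation URIs referenced anywhere in the model file."""
    return {e.get(RDF_NS + "resource") for e in ET.parse(path).iter()
            if e.get(RDF_NS + "resource")}

first, latest = uris("model_v1.xml"), uris("model_v7.xml")
print("annotations added:  ", len(latest - first))
print("annotations removed:", len(first - latest))
```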
- Item: Harmonizing semantic annotations for computational models in biology (Oxford University Press, 2019)
  Authors: Neal, Maxwell Lewis; Konig, Matthias; Nickerson, David; Mısırlı, Goksel; Kalbasi, Reza; Drager, Andreas; Atalag, Koray; Chelliah, Vijayalakshmi; Cooling, Michael T.; Cook, Daniel L.; Crook, Sharon; De Alba, Miguel; Friedman, Samuel H.; Garny, Alan; Gennari, John H.; Gleeson, Padraig; Golebiewski, Martin; Hucka, Michael; Juty, Nick; Myers, Chris; Olivier, Brett G.; Sauro, Herbert M.; Scharm, Martin; Snoep, Jacky L.; Toure, Vasundra; Wipat, Anil; Wolkenhauer, Olaf; Waltemath, Dagmar
  Abstract: Life science researchers use computational models to articulate and test hypotheses about the behavior of biological systems. Semantic annotation is a critical component for enhancing the interoperability and reusability of such models, as well as for integrating the data needed for model parameterization and validation. Encoded as machine-readable links to knowledge resource terms, semantic annotations describe the computational or biological meaning of what models and data represent. These annotations help researchers find and repurpose models, accelerate model composition, and enable knowledge integration across model repositories and experimental data stores. However, realizing the potential benefits of semantic annotation requires the development of model annotation standards that adhere to a community-based annotation protocol. Without such standards, tool developers must account for a variety of annotation formats and approaches, a situation that can become prohibitively cumbersome and that can defeat the purpose of linking model elements to controlled knowledge resource terms. Currently, no consensus protocol for semantic annotation exists among the larger biological modeling community. Here, we report on the landscape of current annotation practices among the COmputational Modeling in BIology NEtwork (COMBINE) community and provide a set of recommendations for building a consensus approach to semantic annotation.
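To make "machine-readable links to knowledge resource terms" concrete, here is a typical MIRIAM/COMBINE-style RDF annotation for an SBML species, linking a model element to a ChEBI term via an identifiers.org URI, together with a few lines that pull the linked term back out. The element ID "glucose" is an invented example.

```python
# Illustrative example of a semantic annotation: an RDF fragment of the kind
# embedded in SBML models, linking the (hypothetical) species "glucose" to the
# ChEBI term for glucose through a biology qualifier and an identifiers.org URI.
import xml.etree.ElementTree as ET

annotation = """
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:bqbiol="http://biomodels.net/biology-qualifiers/">
  <rdf:Description rdf:about="#glucose">
    <bqbiol:is>
      <rdf:Bag>
        <rdf:li rdf:resource="http://identifiers.org/chebi/CHEBI:17234"/>
      </rdf:Bag>
    </bqbiol:is>
  </rdf:Description>
</rdf:RDF>
"""

RDF_NS = "{http://www.w3.org/1999/02/22-rdf-syntax-ns#}"
root = ET.fromstring(annotation)
for li in root.iter(RDF_NS + "li"):
    print(li.get(RDF_NS + "resource"))  # the knowledge-resource term the element is linked to
```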
- Item: Minimum Information About a Simulation Experiment (MIASE) (PLOS, 2011-04)
  Authors: Waltemath, Dagmar; Adams, Richard; Beard, Daniel A.; Bergmann, Frank T.; Bhalla, Upinder S.; Britten, Randall; Chelliah, Vijayalakshmi; Cooling, Michael T.; Cooper, Jonathan; Crampin, Edmund J.; Garny, Alan; Hoops, Stefan; Hucka, Michael; Hunter, Peter; Klipp, Edda; Laibe, Camille; Miller, Andrew K.; Moraru, Ion; Nickerson, David; Nielsen, Poul; Nikolski, Macha; Sahle, Sven; Sauro, Herbert M.; Schmidt, Henning; Snoep, Jacky L.; Tolle, Dominic; Wolkenhauer, Olaf; Le Novere, Nicolas
  Abstract: Reproducibility of experiments is a basic requirement of science. Minimum Information (MI) guidelines have proved a helpful means of enabling the reuse of existing work in modern biology. The Minimum Information Required in the Annotation of Models (MIRIAM) guidelines promote the exchange and reuse of biochemical computational models. However, information about a model alone is not sufficient to enable its efficient reuse in a computational setting. The advanced numerical algorithms and complex modeling workflows used in modern computational biology make the reproduction of simulations difficult. It is therefore essential to define the core information necessary to perform simulations of those models. The Minimum Information About a Simulation Experiment (MIASE) guidelines describe the minimal set of information that must be provided to make the description of a simulation experiment available to others. It includes the list of models to use and their modifications, all the simulation procedures to apply and in which order, the processing of the raw numerical results, and the description of the final output. MIASE allows for the reproduction of any simulation experiment: providing this information, along with a set of the required models, guarantees that the simulation experiment represents the intention of the original authors. Following the MIASE guidelines will thus improve the quality of scientific reporting and will also enable collaborative, more distributed efforts in computational modeling and simulation of biological processes.
- Item: Minimum Information About a Simulation Experiment (MIASE) (Public Library of Science (PLOS), 2011-04)
  Authors: Waltemath, Dagmar; Adams, Richard
  Abstract: Reproducibility of experiments is a basic requirement for science. Minimum Information (MI) guidelines have proved a helpful means of enabling reuse of existing work in modern biology. The Minimum Information Required in the Annotation of Models (MIRIAM) guidelines promote the exchange and reuse of biochemical computational models. However, information about a model alone is not sufficient to enable its efficient reuse in a computational setting. Advanced numerical algorithms and complex modeling workflows used in modern computational biology make reproduction of simulations difficult. It is therefore essential to define the core information necessary to perform simulations of those models. The Minimum Information About a Simulation Experiment (MIASE, Glossary in Box 1) describes the minimal set of information that must be provided to make the description of a simulation experiment available to others.
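As an informal illustration of the information categories listed in the MIASE abstracts above, the snippet below collects them into a plain data structure for an invented experiment. The field names, the accession number and all values are hypothetical and do not constitute a normative MIASE encoding (that role is played by formats such as SED-ML, described in the next item).

```python
# Hypothetical, non-normative illustration of the MIASE information categories
# for one invented simulation experiment. All names and values are examples only.
miase_description = {
    "models": [
        {"source": "BIOMD0000000012",                      # example BioModels accession
         "modifications": ["set parameter k1 = 0.5"]},     # changes applied before simulation
    ],
    "simulation_procedures": [
        {"type": "uniform time course", "algorithm": "deterministic ODE solver",
         "start": 0.0, "end": 100.0, "points": 1000},
    ],
    "order": ["run the time course on the modified model"],
    "post_processing": ["normalise species concentrations to their maxima"],
    "output": ["plot of normalised concentrations over time"],
}
```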
- Item: Reproducible computational biology experiments with SED-ML - The Simulation Experiment Description Markup Language (BioMed Central, 2011-12)
  Authors: Waltemath, Dagmar; Adams, Richard; Bergmann, Frank T.; Hucka, Michael; Kolpakov, Fedor; Miller, Andrew K.; Moraru, Ion I.; Nickerson, David; Sahle, Sven; Snoep, Jacky L.; Le Novere, Nicolas
  Background: The increasing use of computational simulation experiments to inform modern biological research creates new challenges for annotating, archiving, sharing and reproducing such experiments. The recently published Minimum Information About a Simulation Experiment (MIASE) guidelines propose a minimal set of information that should be provided to allow the reproduction of simulation experiments among users and software tools.
  Results: In this article, we present the Simulation Experiment Description Markup Language (SED-ML). SED-ML encodes, in a computer-readable exchange format, the information required by MIASE to enable the reproduction of simulation experiments. It has been developed as a community project, is defined in a detailed technical specification and additionally provides an XML schema. The version of SED-ML described in this publication is Level 1 Version 1. It covers the description of the most frequent type of simulation experiment in the area, namely time-course simulations. SED-ML documents specify which models to use in an experiment, the modifications to apply to the models before using them, which simulation procedures to run on each model, which analysis results to output, and how the results should be presented. These descriptions are independent of the underlying model implementation. SED-ML is a software-independent format for encoding descriptions of simulation experiments; it is not specific to particular simulation tools. Here, we demonstrate that with the growing software support for SED-ML we can effectively exchange executable simulation descriptions.
  Conclusions: With SED-ML, software tools can exchange simulation experiment descriptions, enabling the validation and reuse of simulation experiments in different tools. Authors of papers reporting simulation experiments can make their simulation protocols available for other scientists to reproduce the results. Because SED-ML is agnostic about the exact modeling language(s) used, experiments covering models from different fields of research can be accurately described and combined.
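To give a feel for the format, the sketch below embeds a schematic, heavily abbreviated SED-ML-style document and parses it to show how tasks tie models to simulations. Element and attribute names follow the Level 1 Version 1 specification as best recalled here, the data generators and outputs are omitted, and the model source is a placeholder; the official schema should be consulted for authoritative details.

```python
# Schematic, abbreviated SED-ML-style document (placeholder model source,
# data generators and outputs omitted). Parsed only to show how a task
# references a model and a simulation setup.
import xml.etree.ElementTree as ET

sedml = """
<sedML xmlns="http://sed-ml.org/" level="1" version="1">
  <listOfModels>
    <model id="model1" language="urn:sedml:language:sbml" source="model.xml"/>
  </listOfModels>
  <listOfSimulations>
    <uniformTimeCourse id="sim1" initialTime="0" outputStartTime="0"
                       outputEndTime="100" numberOfPoints="1000">
      <algorithm kisaoID="KISAO:0000019"/>
    </uniformTimeCourse>
  </listOfSimulations>
  <listOfTasks>
    <task id="task1" modelReference="model1" simulationReference="sim1"/>
  </listOfTasks>
</sedML>
"""

NS = "{http://sed-ml.org/}"
root = ET.fromstring(sedml)
for task in root.iter(NS + "task"):
    print("task", task.get("id"), "runs", task.get("simulationReference"),
          "on", task.get("modelReference"))
```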