We are excited to announce our workshop, which will bring together researchers who are using data to derive and parametrise mechanistic models. The workshop will begin around noon on 11 December and finish in the early afternoon of 12 December. Thanks to LMS, UKRI and Newton Institute funding, we have support for travel, preferentially for Early Career Researchers. For late registration, please email Fabian Spill.
The meeting will take place in Lecture Theatre A in the Watson building of the University of Birmingham, a 10-minute walk from the local train station, University (Birmingham). For those staying in the Edgbaston Park hotel, the hotel is another few minutes north of the Watson building. A code of conduct is at the bottom of this page.
The conference dinner will take place at 19:00 in the White Swan in Harborne, about a 30-minute walk from the University of Birmingham.
Monday 11 December
- Registration and lunch

Tuesday 12 December
- Manoja Rajalakshmi Aravindakshan
- Aravind Kumar Kamaraj
- Coffee and end of workshop
Structural Identifiability Analysis: A Tool for Hybrid Mechanistic/Data-Driven Modelling
For many systems (certainly those in biology, medicine and pharmacology), the mathematical models generated invariably include state variables that cannot be directly measured, together with associated model parameters, many of which may be unknown and which also cannot be measured. For such systems there is often also limited access for inputs or perturbations. These limitations can cause immense problems when investigating the existence of hidden pathways or attempting to estimate unknown parameters, and can severely hinder model validation. It is therefore highly desirable to have a formal approach to determine what additional inputs and/or measurements are necessary to reduce or remove these limitations and permit the derivation of models that can be used for practical purposes with greater confidence.
Structural identifiability arises in the inverse problem of inferring from the known, or assumed, properties of a biomedical or biological system a suitable model structure and estimates for the corresponding rate constants and other parameters. Structural identifiability analysis considers the uniqueness of the unknown model parameters from the input-output structure corresponding to proposed experiments to collect data for parameter estimation (under an assumption of the availability of perfect, noise-free observations). This is an important, but often overlooked, theoretical prerequisite to experiment design, system identification and parameter estimation, since estimates for unidentifiable parameters are effectively meaningless. If parameter estimates are to be used to inform about intervention or inhibition strategies, or other critical decisions, then it is essential that the parameters be uniquely identifiable.
Numerous techniques for performing a structural identifiability analysis on linear parametric models exist and this is a well-understood topic. In comparison, there are relatively few techniques available for nonlinear systems (the Taylor series approach, similarity transformation-based approaches, differential algebra techniques and the more recent observable normal form approach and symmetries approaches) and significant (symbolic) computational problems can arise, even for relatively simple models.
In this talk an introduction to structural identifiability analysis will be provided demonstrating the application of the techniques available to both linear and nonlinear systems and to models of a hybrid mechanistic/data-driven structure.
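As a toy illustration of the Taylor series approach (our own sketch, not taken from the talk), consider the hypothetical one-compartment model dx/dt = -kx with scaled observation y = cx. The output derivatives at t = 0 form an exhaustive summary from which structural identifiability can be read off symbolically:

```python
import sympy as sp

t = sp.symbols("t", positive=True)
k, c, x0 = sp.symbols("k c x0", positive=True)

# Hypothetical one-compartment model dx/dt = -k*x, x(0) = x0, output y = c*x.
x = x0 * sp.exp(-k * t)
y = c * x

# Taylor-series approach: the output derivatives at t = 0 are the
# "exhaustive summary" of the experiment.
a0 = y.subs(t, 0)                 # c*x0
a1 = sp.diff(y, t).subs(t, 0)     # -k*c*x0
a2 = sp.diff(y, t, 2).subs(t, 0)  # k**2*c*x0

# k is globally identifiable: it is recovered uniquely from the summary...
assert sp.simplify(-a1 / a0 - k) == 0
# ...but c and x0 enter only through the product c*x0, so neither is
# identifiable on its own without an extra measurement or a known
# initial condition.
print(a0, a1, a2)
```

Here the analysis is trivial by hand; the point is that the same symbolic computation quickly becomes demanding for realistic nonlinear models, which is where the computational problems mentioned above arise.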
Fully Personalised Degenerative Disease Modelling - A Duchenne Muscular Dystrophy Case Study
The ambulatory abilities of Duchenne Muscular Dystrophy (DMD) patients can be measured using the NSAA score. This test can be taken many times, giving a trajectory over time. The ability to extrapolate these trajectories effectively would give clinicians the opportunity to construct treatment plans earlier, allowing treatments to begin before conditions worsen. Furthermore, the available datasets have high missingness, and a model of DMD progression provides an opportunity to construct high-quality synthetic data without missingness. We therefore create a hierarchical Bayesian model for NSAA scores, allowing personalised, probabilistic trajectory predictions. Variations of this model can be created, such as using a Gaussian process to model discrepancy, covarying parameters with other predictors, or including terms for treatments. We hence examine criteria such as quantile coverage, sharpness, and the quality of synthetic data to perform model selection. Additionally, we assess how effective the model is at predicting minimal clinically important differences, in order to determine whether its predictions can be clinically useful.
Mathematical Modeling of Subunit Formation in Pex14
Peroxisomes are dynamic organelles essential for health and development. The enzymes acting in peroxisomes have to be imported via a multi-protein machinery. Pex14 is a component of the import machinery complex, and studies have indicated that the ability of Pex14 to form homo-oligomers is important for an efficient import process. We measured subunit formation in Pex14 and developed a mathematical model to analyse the exchange of monomers, dimers and trimers. With the help of the model we were able to determine time-scales of the exchange process and analyse the fluxes between the different species. The experimental data used for the modeling were measured with native MS, where each species has a different response factor. The determination of response factors is an often overlooked and challenging task, and analysis via dynamic modeling could be a promising strategy. Via observation functions, the model was used to determine the different response factors from the data, and the values found are in good agreement with existing results.
Neuroendocrine Dopamine Neuron Oscillations: From Circadian Dopamine Dynamics to Ultradian Bursting
In the arcuate nucleus of the hypothalamus, tuberoinfundibular dopaminergic neurons (TIDA) that regulate the release of the reproductive hormone prolactin are the site of multiple oscillatory phenomena. The dopamine output of the population follows a circadian cycle, and TIDA neurons regularly fire bursts of action potentials on the scale of seconds. Here, a combined experimental and modelling approach is taken to elucidate the origin of these rhythms. First, multielectrode array recordings of TIDA activity throughout the circadian cycle and in the presence of time-of-day signalling neuropeptides show how a combination of intrinsic TIDA timekeeping and communication from the master circadian pacemaker regulates daily dopamine release. Second, intracellular current-clamp recordings are used to derive a conductance-based model of TIDA bursting. A mechanism driven by a calcium-activated potassium current and a slowly activated persistent sodium current can lead to sustained bursting that closely resembles experimental recordings. Recent data suggest that electrical coupling between TIDA cells sets the network burst frequency; hence we explore the effect of this coupling between simulated cells and show that the collective frequency depends nonlinearly on the coupling strength. Electrical coupling is also shown to recruit non-bursting cells into burst oscillations and drive emergent oscillations between quiescent cells.
Heterogeneity in mathematical biology
Cell-to-cell variability is often a primary source of variability in experimental data. Yet, it is common for mathematical analysis of biological systems to neglect biological variability by assuming that model parameters remain fixed between measurements. In this talk, I present new mathematical and statistical tools to identify cell-to-cell variability from experimental data, based on mathematical models with random parameters. First, I identify variability in the internalisation of material by cells using approximate Bayesian computation and noisy flow cytometry measurements from several million cells. Second, I develop a computationally efficient method for inference and identifiability analysis of random parameter models based on an approximate moment-matched solution constructed through a multivariate Taylor expansion. Overall, I show how analysis of random parameter models can provide more precise parameter estimates and more accurate predictions with minimal additional computational cost compared to traditional modelling approaches. I conclude by discussing how mathematical theory can not only identify heterogeneity in biological systems, but explain its emergence.
Scalable inference for epidemic models with individual level data
As individual level epidemiological and pathogen genetic data become available in ever increasing quantities, the task of analysing such data becomes more and more challenging. Inferences for this type of data are complicated by the fact that the data is usually incomplete, in the sense that the times of acquiring and clearing infection are not directly observed, making the evaluation of the model likelihood intractable. A solution to this problem can be given in the Bayesian framework with unobserved data being imputed within Markov chain Monte Carlo (MCMC) algorithms at the cost of considerable extra computational effort.
Motivated by this demand, we develop a novel method for updating individual level infection states within MCMC algorithms that respects the dependence structure inherent within epidemic data. We apply our new methodology to an epidemic of Escherichia coli O157:H7 in feedlot cattle in which eight competing strains were identified using genetic typing methods. We show that surprisingly little genetic data is needed to produce a probabilistic reconstruction of the epidemic trajectories, despite some possibility of misclassification in the genetic typing. We believe that this complex model, capturing the interactions between strains, could not have been fitted using existing methodologies.
Data-driven modelling of calcium puffs using integrodifferential equations
The calcium signalling system is important for many cellular processes within the human body. Abnormal ion channels can cause dysfunction in the release of calcium ions, and thus be detrimental to an individual’s health. Hybrid stochastic systems are often used to model the dynamics of the ion channel and calcium release, with Markov models simulating the stochastic behaviour of the ion channel and ordinary differential equations modelling the deterministic release of calcium ions. Whilst data-driven mathematical models have helped to advance knowledge of the calcium signalling system, they often require a high number of parameters and equations. Here, we simplify an existing hybrid stochastic system and present a model that consists of two integrodifferential equations. Our new model produces qualitatively similar results to more complex models, whilst enabling us to analyse the effect that delaying the ion channel memory has on the calcium dynamics. Through our research we show that the ion channel is not able to function if only the present calcium concentration is known, and that a longer delay is necessary to produce calcium puffs.
Experimental design for ion channel modelling
Combining machine learning and digital twins to provide mechanistic cardiac phenotypes of large populations
Large multi-modal studies of cardiovascular imaging and diagnostic datasets make it possible to explore associations between heart anatomy, function, and outcomes, but fail to reveal the underlying biological mechanisms. Cardiac digital twins (DTs) provide a physics- and physiology-constrained in-silico representation of specific individuals, allowing inference of multi-scale structural and functional properties linked to underlying biological mechanisms. Here we constructed a population-based cohort of DTs through a highly automated framework, using clinical images and ECGs from the UK Biobank (UKBB). We explored how the DT-derived phenotypes vary according to sex, body mass index (BMI) and age, comparing them to image- and ECG-derived phenotypes. Moreover, we have conducted a phenome-wide association study to identify their correlations with various phenotypes reported in the UKBB and to further investigate their associations with clinical outcomes. Our study illustrates how population-based DT-derived phenotypes can be used to assess inter-subject variability in human cardiology and potentially be linked to mental health.
In Silico Electrophysiological Modeling of the Calcium Dynamics in Mouse Detrusor Smooth Muscle Cells Reveals the Spontaneous Contractions in the Context of Urinary Incontinence
Urinary Incontinence (UI) is the involuntary leakage of urine, posing significant social challenges in daily life. While various pathological factors contribute to UI, detrusor smooth muscle (DSM) instability stands out as a primary culprit. Multiple experimental findings have indicated that DSM cells across different species exhibit spontaneous contractile activity at varying frequencies. Empirical and clinical investigations have underscored the link between calcium dynamics and these spontaneous contractions within DSM cells. Computational models offer a valuable means to quantitatively dissect the mechanisms governing calcium dynamics and explore their roles in cellular mechanical activities. In this study, we present a foundational model of calcium dynamics rooted in voltage-gated calcium channels, incorporating the kinetic processes within DSM cells based on the classical Hodgkin-Huxley formalism. This model integrates voltage-dependent calcium channels (ICa), voltage-gated potassium channels (IKv), calcium-activated potassium channels (IKCa), and background leakage currents (Il) to generate action potentials (APs) and calcium transients. By injecting an external stimulus current of 1-3 nA for 0.5-1 ms, we observe that a 3 nA current over 0.6 ms initiates the first spike, with a voltage threshold at -30 mV. The calcium transient is recorded at a depth of 2 µm from the cell membrane, with a resting [Ca2+]i set at 100 nM. The L-type channel emerges as the primary contributor to the increase in [Ca2+]i, resulting in a peak [Ca2+]i transient of 1900 nM in our model.
The close resemblance between the shape and peak value of the [Ca2+]i transient in our model and experimental data underscores the accuracy of our model in representing the kinetics of channels, pumps, and calcium transients. At present, our computational model offers a fundamental tool for analyzing the physiological calcium dynamics and ionic channel kinetics underpinning DSM cell contractions, paving the way for exploring various hypotheses in the genesis of bladder overactivity.
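The Hodgkin-Huxley formalism underlying this model can be illustrated with a minimal gating-variable sketch. The conductance, reversal potential and activation curve below are hypothetical placeholders; only the -30 mV threshold is taken from the abstract:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Minimal Hodgkin-Huxley-style gating sketch (illustrative, not the authors'
# DSM model): a voltage-gated current I = g_max * m * (V - E_rev) whose
# activation m relaxes to a sigmoidal steady state m_inf(V) with time
# constant tau. All constants are hypothetical.
g_max, E_rev = 1.0, 60.0   # conductance (nS) and reversal potential (mV)
tau = 5.0                  # activation time constant (ms)
m_inf = lambda V: 1.0 / (1.0 + np.exp(-(V + 30.0) / 5.0))  # half-activation at -30 mV

def gate(t, y, V):
    (m,) = y
    return [(m_inf(V) - m) / tau]

# Clamp the voltage at the -30 mV threshold reported in the abstract:
sol = solve_ivp(gate, (0, 50), [0.0], args=(-30.0,))
m_end = sol.y[0, -1]
print(m_end)  # relaxes toward m_inf(-30 mV) = 0.5
```

A full model of this kind couples several such gated currents to the membrane voltage equation and a calcium balance, which is what generates the action potentials and calcium transients described above.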
Keywords: Urinary Incontinence; Ca2+ channel; PMCA; SERCA; calcium transient; computational model
Omics data integration using networks and partial least squares.
Data integration methods are used to obtain a unified summary of multiple datasets and their association with an outcome variable. For multi-modal data, we propose a computational workflow to analyse datasets from cell lines. The workflow comprises a novel partial least squares (PLS) method for multi-omics data integration and a network method to integrate the identified omics features with validated drug compounds. The workflow is motivated by a study on synucleinopathies in which transcriptomics, proteomics, and drug screening data are measured in affected LUHMES cell lines and controls. The LUHMES cell lines are used to study the biological mechanisms underlying multiple system atrophy (MSA), a rare neurological disease related to Parkinson’s disease (PD).
The partial least squares method for omics integration reduces the dimensionality and addresses the heterogeneity arising from different measurement technologies. The correlation structure within and between the datasets is modelled by joint and data-specific components. Model parameters are estimated using maximum likelihood. In a second step, the drug compounds are integrated with the relevant omics features via neighbouring genes in a functional network obtained from bioinformatic databases. We illustrate the methods by application to multi-omics datasets from the cell lines. The performance of the novel PLS method is compared with existing multivariate and univariate approaches in terms of prediction accuracy and interpretation of the obtained gene set via enrichment analysis.
To conclude, for this dataset our proposed PLS method outperforms existing methods. The gene set obtained with PLS is enriched for MSA and PD gene sets. The workflow identified several genes which are targeted by validated drugs.
Bayesian nonparametric methods for individual level stochastic epidemic models
Infectious disease transmission models require assumptions about how the pathogen spreads between individuals. These assumptions may be somewhat arbitrary, particularly when it comes to describing how transmission varies between individuals of different types or in different locations, and may, in turn, lead to incorrect conclusions or policy decisions. We develop a general Bayesian nonparametric framework for transmission modelling that removes the need to make such specific assumptions with regard to the infection process. We will use multioutput Gaussian process prior distributions to model different infection rates in populations containing multiple types of individuals. Further challenges arise because the transmission process itself is unobserved, and large outbreaks can be computationally demanding to analyse. I will address these issues by data augmentation and a suitable efficient approximation method. Simulation studies using synthetic data demonstrate that our framework gives accurate results. We analyse an outbreak of foot and mouth disease in the United Kingdom and Avian Influenza in the Netherlands.
Manoja Rajalakshmi Aravindakshan
Advancing organ-on-chip research: data-driven digital twins for improved drug development.
Digital twins, driven by data and mathematical equations, have emerged as powerful tools for simulating complex biological systems. In this work, we focus on the development of data-driven digital twins for a liver chip that closely mimics the functionalities of the human liver. Our approach involves the creation of a compartmental physiological model of the liver using ordinary differential equations (ODEs), with a primary focus on estimating parameters related to on-chip liver clearance. The model uses published data from pharmacokinetics (metabolism) and toxicology (IC50) studies to determine PK (pharmacokinetic) and PD (pharmacodynamic) parameters more effectively, surpassing the limitations of currently used least-squares methods. Strategies that allow exploration of sets of initial values and discovery of better local minima of the objective function are implemented, providing more accurate and reliable parameter estimation. The in vitro liver clearance for 15 drugs was predicted using a three-compartment model of the liver chip, and in vitro to in vivo extrapolation (IVIVE) was assessed using time series kinetic data. Three ODEs define the drug concentrations in the plasma, interstitium, and intracellular compartments of the liver chip. Both the analytical and numerical solutions of the system were found to fit most of the drug kinetic data. To address discrepancies in fitting for some drugs, we reverse-engineer the kinetics and back-calculate the clearance parameter from the observed in vivo clearance to obtain the actual clearance from the chip. The discrepancy between the observed kinetics and the model’s predictions suggests that the model may need further refinement, as it fails to describe the data effectively. A key factor contributing to this inaccuracy appears to be the insufficient drug concentration within the intracellular compartment.
Changes in two parameters, the permeability coefficient of the intracellular compartment and the surface area of the liver chip, were observed to yield a better fit to the kinetic data. This suggests that increasing the surface area of the chip allows more compound to flow into the intracellular compartment. Our primary conclusion is that measuring intracellular concentration is crucial for validating the model and for understanding whether the mathematical model needs to be refined or the biological model parameters need to be determined more carefully. Thus, further experiments are needed to improve the overall accuracy and reliability of the liver-on-chip model. Through the refinement of parameter estimation techniques and the exploration of additional data integration possibilities, this work aims to provide valuable tools and insights that significantly benefit pharmaceutical research and development.
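The three-compartment structure described in the abstract can be sketched as a simple linear ODE system. The rate constants and the elimination term below are illustrative placeholders, not the fitted values from this work:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative three-compartment liver-chip model: plasma (p), interstitium (i),
# intracellular (c). All rate constants are hypothetical placeholders;
# CL_int stands in for on-chip intrinsic clearance.
k_pi, k_ip = 0.5, 0.3    # plasma <-> interstitium exchange (1/h)
k_ic, k_ci = 0.2, 0.05   # interstitium <-> intracellular exchange (1/h)
CL_int = 0.1             # intracellular elimination (1/h)

def rhs(t, y):
    Cp, Ci, Cc = y
    dCp = -k_pi * Cp + k_ip * Ci
    dCi = k_pi * Cp - (k_ip + k_ic) * Ci + k_ci * Cc
    dCc = k_ic * Ci - (k_ci + CL_int) * Cc
    return [dCp, dCi, dCc]

# Start with all drug in the plasma compartment and simulate 24 h:
sol = solve_ivp(rhs, (0, 24), [1.0, 0.0, 0.0], rtol=1e-8)
total = sol.y.sum(axis=0)
# Mass only leaves via intracellular clearance, so total drug decays over time.
print(total[0], total[-1])
```

Fitting such a model to observed concentration-time data in only one compartment is exactly the situation where the intracellular concentration measurement advocated above becomes critical.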
Reducing the model: Disentangling inositol pyrophosphate fluxomics via mathematical modeling
Inorganic phosphate (Pi) is often a limiting factor in the development and proliferation of cells and microorganisms. Cells therefore need mechanisms to adapt quickly to changes in the abundance of Pi. The maintenance of a certain level of Pi in the cytosol, known as phosphate homeostasis, is a process that is poorly studied in microorganisms and even mammalian cells. Inositol pyrophosphates play a vital role in this process, yet their metabolic pathways have so far been only sparsely investigated. In this work we use 18O water to label the transferable ATP γ-phosphates in cells, which in turn label the different inositol phosphates through enzymatic phosphorylation by kinases. The different inositol phosphates were measured using mass spectrometry, providing absolute concentrations of un-, singly and doubly 18O-labelled phosphates at each time point, and thus a dataset that gives insight into the kinetics of the reactions involved. Based on these data, and through model reduction using the profile likelihood method, we build a mathematical model capable of describing the kinetics of the inositol pyrophosphate metabolic pathway for two entirely different species, namely the yeast Saccharomyces cerevisiae and the human cell line HCT 116. While the current opinion in the literature is that this pathway must be cyclic, our analysis reveals that for both species the kinetics are well in accordance with a chain-like transition between the different inositol pyrophosphates, a significant advance in understanding the pathway of inositol pyrophosphates in vitro.
Machine Learning incorporating expert knowledge: combining the predictive power of machine learning with the explanatory power of modelling
The incorporation of expert knowledge into AI and machine learning techniques is of growing interest. This has led to the emerging interdisciplinary field often referred to as scientific Machine Learning (SciML), which combines techniques from machine learning with scientific methods to solve complex scientific problems.
The scientific knowledge often appears in the form of differential equations modelling the dynamics of the underlying data-generating process. By including it, data-driven ML methods become more robust, more interpretable and transparent, and extrapolate better to novel unseen data. Scientific ML can therefore automate experimentation and optimization processes while addressing ethical considerations such as model interpretability and data bias. Furthermore, mathematical modelling has developed an impressive theoretical understanding of identifiability problems caused by limitations in data observability, and a wide variety of approaches exist to analyse the corresponding parameter relationships. The problems caused by such overparameterized systems are well known in both ML and scientific modelling, but how they are handled can vary greatly. In our interdisciplinary projects we attempt to bring the theoretical understanding of dynamic model identifiability into SciML, to improve optimization and to deepen the understanding of how both data and model uncertainty are handled.
Scaling Cardiac Digital Twins: Bridging the Divide between Simulation and Real-World Medicine
Cardiac digital twins, constrained by physics and physiology, offer a transformative framework for integrating patient data, predicting outcomes, and shaping therapy strategies. Despite promising early examples, the scalability of this technology remains a significant challenge, necessitating a shift from artisanal, bespoke solutions to a streamlined, automated workflow. Scaling cardiac digital twins to reduce the computational and labour costs of their creation will open the door to characterizing and studying patient cohorts and whole-population variation, providing new insight into cardiovascular physiology and health. Reducing manual steps in model creation will improve precision, allowing effective studies with smaller numbers of patients. Finally, scaling cardiac digital twins is needed to bring them into routine clinical care. As these tools and twins become more widely available, there will be growing opportunities to use them in device development, drug discovery, education and improving patient care.
Choreography of hormonal rhythms: mathematical analysis and clinical significance
The Hypothalamic-Pituitary-Adrenal (HPA) axis is the key regulatory pathway responsible for maintaining homeostasis under conditions of real or perceived stress. Endocrine responses to stressors are mediated by adrenocorticotrophic hormone (ACTH) and corticosteroid (CORT) hormones. In healthy, non-stressed conditions, ACTH and CORT exhibit highly correlated ultradian pulsatility with an amplitude modulated by circadian processes. Disruption of these hormonal rhythms can occur as a result of stressors or in the very early stages of disease. Despite the fact that misaligned endocrine rhythms are associated with increased morbidity, a quantitative understanding of their mechanistic origin and pathogenicity is missing. Mathematically, the HPA axis can be understood as a dynamical system that is optimised to respond and adapt to perturbations. Normally, the body copes well with minor disruptions, but finds it difficult to withstand severe, repeated or long-lasting perturbations. Whilst a healthy HPA axis maintains a certain degree of robustness to stressors, its fragility in diseased states is largely unknown, and this understanding constitutes a critical step toward the development of digital tools to support clinical decision-making. This talk will explore how these challenges are being addressed by combining high-resolution biosampling techniques with mathematical and computational analysis methods. This interdisciplinary approach is helping us quantify the inter-individual variability of daily hormone profiles and develop novel “dynamic biomarkers” that serve as a normative reference and signal endocrine dysfunction. By shifting from a qualitative to a quantitative description of the HPA axis, these insights take us a step closer to personalised clinical interventions for which timing is key.
Accurate and efficient numerical methods for molecular and particle simulation using adaptive thermostats
I will discuss how we can use the so-called adaptive thermostats, which rely on a negative feedback loop, to develop accurate and efficient numerical methods for molecular and particle simulations, focusing on applications at mesoscales.
Quantifying Cytoskeletal Dynamics and Remodeling from Live-imaging Microscopy Data
The shape of biological cells emerges from dynamic remodeling of the cell’s internal scaffolding, the cytoskeleton. Hence, correct cytoskeletal regulation is crucial for the control of cell behaviour, such as cell division and migration. A main component of the cytoskeleton is actin. Interlinked actin filaments span the body of the cell and contribute to a cell’s stiffness. The molecular motor myosin can induce constriction of the cell by moving actin filaments against each other. Capturing and quantifying these interactions between myosin and actin in living cells is an ongoing challenge. For example, live-imaging microscopy can be used to study the dynamic changes of actin and myosin density in deforming cells. These imaging data can be quantified using Optical Flow algorithms, which locally assign velocities of cytoskeletal movement to the data. Extended Optical Flow algorithms also quantify actin polymerization and depolymerization. However, these measurements of cytoskeletal dynamics may be influenced by noise in the image acquisition, by ad-hoc parameter choices in the algorithm, and by image pre-processing steps. The development of our Optical Flow method will be a starting point for identifying differences in cytoskeletal movement and remodeling under experimental perturbations.
A mathematical model for BMP4-induced differentiation therapy in combination with radiotherapy in glioblastoma.
Glioblastoma (GBM) is the most aggressive and most common primary brain tumour in adults and is uniformly fatal, with a poor median survival time of 15 months and 5-year survival rates of only 5%. Standard of care for GBM consists of radiotherapy either alone or following surgical resection; despite this, radio-resistance almost always occurs, making recurrence inevitable. Failure of the current standard of care has been partly attributed to a special sub-population, the glioma stem cells (GSCs), which initiate and drive tumour growth. Treatment cannot be successful unless all GSCs are eliminated. However, GSCs are known to be highly resistant to radiotherapy, and complete surgical removal is usually impossible in GBM. Therefore, new treatments that specifically target the GSCs could have a potentially large benefit. BMP4 has been shown to induce differentiation of GSCs towards a less malignant, astrocytic-like (ALC) lineage. Furthermore, new delivery systems (nanoparticles) provide a potential mechanism by which BMP4 could be successfully administered to reverse the GSC state and reduce radio-resistance in a patient. We develop a data-driven mechanistic mathematical model that accounts for the GSCs, tumour cells (TCs) and ALCs as well as their response to both radiotherapy and BMP4-induced differentiation therapy. Our model allows us to run in-silico experiments to investigate how varying several key parameters, such as the radiosensitivity of all cellular populations and the strength of the BMP4 effect on differentiation rate, affects treatment outcome. Our model shows that treatments specifically targeting the GSCs are vital for prolonging survival in GBM and that a combination of BMP4 therapy and radiotherapy can provide superior outcomes to either one individually.
Aravind Kumar Kamaraj
Using compartmental models to understand excitation-inhibition imbalance in epilepsy
Epileptic seizures are characterized by abnormal synchronous bursting of neurons due to an imbalance between excitatory and inhibitory neurotransmission. We apply compartmental models from epidemiology to study this interaction between excitatory and inhibitory populations of neurons in the context of epilepsy. Neurons can either be bursting or susceptible, and the propagation of action potentials within the brain through the bursting of neurons is considered as an infection spreading through a population. We model the recruitment of neurons into bursting and their subsequent decay to susceptibility as influenced by the proportion of excitatory and inhibitory neurons bursting, resulting in a two-population Susceptible-Infected-Susceptible (SIS) model. This approach provides a tractable framework to inspect the mechanisms behind seizure generation and termination. Considering the excitatory neurotransmission as a disease spreading through the neuronal population and the inhibitory neurotransmission as a competing disease that stops the spread of excitation, we establish the conditions for a seizure-like state to be stable. Subsequently, we show how an activity-dependent dysfunction of inhibitory mechanisms, such as impaired GABAergic inhibition or inhibitory-inhibitory interactions, could result in a seizure even when the above conditions are not satisfied.
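The two-population SIS idea can be sketched as a pair of mean-field ODEs. The functional forms and parameter values below are our own illustrative choices, not those of the talk:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy two-population SIS sketch of the excitation-inhibition idea.
# E, I = fraction of excitatory / inhibitory neurons currently bursting
# ("infected"); recruitment grows with bursting excitation and is
# suppressed by bursting inhibition. All parameters are illustrative.
beta_e, beta_i = 2.0, 1.5    # recruitment rates driven by bursting excitation
gamma_e, gamma_i = 1.0, 1.2  # decay rates back to susceptibility
alpha = 1.5                  # strength of inhibitory suppression

def rhs(t, y):
    E, I = y
    dE = beta_e * E * (1 - E) / (1 + alpha * I) - gamma_e * E
    dI = beta_i * E * (1 - I) - gamma_i * I
    return [dE, dI]

# A small initial burst of excitation grows (beta_e > gamma_e) and settles
# at an endemic "seizure-like" fixed point where inhibition partly, but not
# fully, suppresses the excitation.
sol = solve_ivp(rhs, (0, 50), [0.05, 0.0], rtol=1e-8)
E_end, I_end = sol.y[:, -1]
print(f"steady state: E={E_end:.3f}, I={I_end:.3f}")
```

In this framing, the stability condition for the seizure-like state plays the role of an epidemic threshold, and weakening the inhibitory terms (smaller alpha, larger gamma_i) shifts the system toward sustained bursting.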
Cardiovascular disease mechanisms: laboratory-based data analytics and modelling.
The formation of thrombi that lead to heart attacks and strokes is underpinned by the twin processes of coagulation and platelet activation. The effectiveness of anti-coagulant and anti-platelet therapies is well established; however, there is great inter-individual variability in response to these medications and it is unclear why they fail to work for some patients. A deeper understanding of the mechanisms that underpin changes in thrombus formation, how they are regulated and how they vary across cohorts of donors is the focus of work at the Institute for Cardiovascular and Metabolic Research.
I will highlight the challenges of working in a laboratory based setting, presenting a number of mathematical models that have been used both to increase understanding of the mechanisms that underpin thrombus formation and to contribute to the development of quantitative experimental data. Furthermore, I will discuss how we use machine-learning techniques to categorise and measure platelet function phenotypes across diverse donor populations, helping bridge experimental observations with clinical applicability.
SCHOOL OF MATHEMATICS: CODE OF CONDUCT
The School of Mathematics is committed to providing a welcoming, inclusive and safe community for all. We expect co-operation and support from all staff, students and visitors to help ensure a harassment-free environment where everyone is treated with courtesy, respect and dignity.
Examples of harassment include, but are not limited to:
• offensive or belittling comments related to age, body size, disability, ethnicity, gender, gender identity and expression, physical appearance, sexual orientation, socio-economic status and religion;
• inappropriate language (this does not need to be aimed directly at an individual(s) for it to contravene the code);
• harassing photography or recording;
• inappropriate physical contact;
• unwelcome sexual or other forms of attention.
Such behaviours are not welcome in the School of Mathematics and will not be tolerated.
We also reject complicity that knowingly promotes, encourages, or protects discrimination or unprofessional behaviour on the part of others. Should anyone experience or witness behaviour contravening the School of Mathematics’ Code of Conduct, please intervene where you feel comfortable doing so. We strongly urge you to report this behaviour using the QR code provided.
Alternatively, please speak to a member of staff to whom you feel comfortable disclosing this information. The Director of Equality, Diversity and Inclusivity in the School of Mathematics is Dr Sara Jabbari in Watson 113 (firstname.lastname@example.org).
Multi-Faith Prayer Rooms
The Multi-Faith Chaplaincy offers an inclusive space for prayer, meditation, relaxation and worship, among many other activities. All staff and students are welcome, whether religious or not. Many groups in the Multi-Faith Chaplaincy can provide you with more specific information.
The locations of the prayer rooms are:
- St Francis Hall Chaplaincy (Various spaces available but one specifically allocated at all times)
- Medical School, Room CLG30
- Westhill Chapel on the Selly Oak Campus
- Guild of Students (two specifically Islamic prayer rooms upstairs)