The DySCo meetings are currently held in FAL157, unless otherwise stated below, in which case they are held on Zoom. The meeting ID is 450 507 042 and the password is Dysco2021!. Meetings usually take place on Wednesdays between 11:15 and 12:30.
We discuss topics related to neuroimaging methods and functional models of computations in the brain. The meetings are open to everyone interested in cognitive neuroscience modelling.
If you are interested in presenting your work at the meeting, or have further questions about it, please contact Sarah Schwöbel.
Upcoming
31.01.2024, 13:00 - 14:30, Dario Cuevas
Title: TBA
17.01.2024, 13:00 - 14:30, Conor Heins
Title: TBA
10.01.2024, 13:00 - 14:30, Eric Legler
Title: TBA
03.01.2024, 13:00 - 14:30, Speaker TBA
Title: TBA
13.12.2023, 13:00 - 14:30, Sia Ivova Hranova
Title: TBA
06.12.2023, 13:00 - 14:30, Sarah Schwöbel
Title: TBA
15.11.2023, 13:00 - 14:30, Speaker TBA
Title: TBA
08.11.2023, 13:00 - 14:30, Ben Wagner
Title: TBA
01.11.2023, 13:00 - 14:30, Dimitrije Markovic
Title: Learning interactive d-sprite environments with Bayesian sparse coding.
Next Meeting
11.10.2023, 13:00 - 14:30, Martin Butz
Title: TBA
Past Meetings
12.07.2023, 11:30-12:40, Sarah Schwöbel
Title: Modern AI Technologies: Diffusion image models and large language models
Abstract: I will introduce two recent AI models that have become popular far beyond the scientific community. I will show examples of what they can do and how they can be applied, go through how they are implemented, and also discuss limitations of the technology.
28.06.2023, 11:30-12:40, Eric Legler
Title: Investigating the influence of repeated behavior in a sequential decision-making task with a Bayesian habit-learning model
21.06.2023, 11:30-12:40, Sascha
Title: Computational Modeling of the Action Sequences Task
14.06.2023, 11:30-12:40, Dario
Title: Update on my current project: modeling sequential movements
I'll bring you up to date on what I've been doing, and what my plans are for this project.
The meeting will not be in the usual DySCo room. Instead, it will take place via the DySCo Zoom link:
https://tu-dresden.zoom.us/j/69247096901?pwd=Q1hTcGQwbDRlZ3VpSnF0ZWRxUEI5QT09
17.05.2023, 11:30-12:40, Ben Wagner
Title: "Relative Learning and Learning by Repetition"
12.04.2023, 11:30-12:40, Dimitrije Markovic
Discussing the 2023 Algonauts Challenge
01.02.2023, 11:15-12:45, Sarah Schwöbel
18.01.2023, 11:15-12:45, Dimitrije Markovic
Title: Bayesian model reduction for nonlinear regression
11.01.2023, 11:15-12:45, NO DYSCO
04.01.2023, 11:15-12:45, Dario Cuevas
Title: Journal club: Independent generation of sequence elements by motor cortex
14.12.2022, 11:15-12:45, Ben Wagner
07.12.2022, 11:15-12:45, Hrvoje Stojić
Title: Human approaches to solving exploration-exploitation problem through the lens of novelty and attention
Abstract: The exploration-exploitation dilemma is a key problem that decision makers navigating the real world have to tackle. Should you choose an option that you know and currently like best? Or should you be curious and try a more uncertain option in order to learn about it? Previous studies have shown mixed results on how humans approach this trade-off between reward and information. Some studies show that humans tend to explore in a random, undirected fashion, while others provide evidence that humans are more sophisticated, exploring in a directed manner. I will present two studies, which I hope provide convincing evidence for humans as sophisticated decision makers: one through the lens of how humans deal with novel options, and one through the lens of joint interactions between learning, decision making and attention. Both studies show that humans learn about decision options by building probabilistic models, and then leverage the uncertainty afforded by the probabilistic nature of learning when deciding between options - a form of sophisticated directed exploration. In addition, the novelty study shows that uncertainty-guided exploration plays a key role in explaining behaviour towards novel options, while the attention study exploited gaze-allocation data to provide additional evidence for uncertainty-guided exploration.
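The contrast between undirected and directed exploration described in the abstract can be sketched in a few lines. This is a minimal illustration of the general idea (my own toy example with made-up Gaussian beliefs, not the talk's models): undirected exploration chooses by softmax over estimated values only, while directed exploration adds an uncertainty bonus, UCB-style.

```python
import math
import random

# Hypothetical beliefs about two options: (posterior mean, posterior std. dev.).
# Option B is less well known (higher uncertainty) but has a lower mean.
beliefs = {"A": (1.0, 0.1), "B": (0.8, 1.5)}

def random_exploration(beliefs, temperature=1.0):
    """Undirected exploration: softmax over posterior means only."""
    opts = list(beliefs)
    weights = [math.exp(beliefs[o][0] / temperature) for o in opts]
    total = sum(weights)
    return random.choices(opts, weights=[w / total for w in weights])[0]

def directed_exploration(beliefs, bonus=1.0):
    """Directed exploration: add an uncertainty bonus (UCB-style) to each mean."""
    return max(beliefs, key=lambda o: beliefs[o][0] + bonus * beliefs[o][1])

print(directed_exploration(beliefs))  # "B": the uncertainty bonus outweighs the lower mean
```

With the bonus set to zero, the directed rule collapses to pure exploitation and picks "A"; only the uncertainty term makes trying the lesser-known option systematically attractive.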
09.11.2022, 11:15-12:45, Jean Daunizeau
02.11.2022, 11:15-12:45, Eric Legler,
Investigating the transition from goal-directed to habitual behavior with a Bayesian habit-learning model in a sequential decision-making task
26.10.2022, 11:15-12:45, Florian Ott
Discussing ideas on how cognitive task abstraction might make planning efficient in the sequential "limited-energy task".
12.10.2022, 11:15-12:45, Start of semester intro
29.06.2022, 11:15-12:45, FAL157 Dr. Dario Cuevas
Title: short review of motor planning + a new hierarchical model for sequences of movements.
22.06.2022, 11:15-12:45, FAL157 Dr. Florian Ott
Title: Journal Club: Human Orbitofrontal Cortex Represents a Cognitive Map of State Space (2016) by Nicolas Schuck, Mingbo Cai, Robert Wilson and Yael Niv
DOI: https://doi.org/10.1016/j.neuron.2016.08.019
25.05.2022, 11:15-12:45, FAL157 Dr. Sarah Schwoebel
Title: A brief introduction to pyro
Abstract: Pyro is a probabilistic programming language and a state-of-the-art Bayesian inference tool in Python. It can be used for a wide range of data analyses, from Bayesian linear regression to fits of complex custom behavioral models. It natively implements many distributions and allows for the definition of Bayesian models, building on its backend, pytorch. I will introduce pytorch and show minimal working examples for inference with pyro. I will also show common pitfalls and do's and don'ts when using this toolbox.
18.05.2022, 11:15-12:45, FAL157 Sascha Frölich
Title: The Motor-Sequences Task: Main Findings.
11.05.2022, 11:15-12:45, FAL157 Dr. Ben Wagner
04.05.2022, 11:15-12:45, FAL157
16.02.2022, 11:15-12:45, Online Meeting: Dimitrije Markovic
Title: Bayesian models for sparse regression
Abstract: I will talk about hierarchical priors, shrinkage priors, and Gaussian processes. The problem is how to identify relevant predictors when you have many of them and a comparatively small data set.
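A minimal illustration of why shrinkage priors help in that regime (my own toy example, not from the talk): a zero-mean Gaussian prior on a regression weight turns the MAP estimate into a ridge estimate, pulling small, noise-driven effects toward zero.

```python
# One-predictor regression: compare the ordinary least-squares slope with the
# MAP slope under a zero-mean Gaussian (shrinkage) prior on the weight.

def ols_slope(x, y):
    """Ordinary least-squares slope through the origin."""
    return sum(a * b for a, b in zip(x, y)) / sum(a * a for a in x)

def map_slope(x, y, noise_var=1.0, prior_var=0.1):
    """MAP under y ~ N(w*x, noise_var), w ~ N(0, prior_var): ridge shrinkage."""
    ridge = noise_var / prior_var
    return sum(a * b for a, b in zip(x, y)) / (sum(a * a for a in x) + ridge)

x = [0.1, -0.2, 0.15]   # tiny data set, weak predictor
y = [0.5, -0.1, 0.3]    # mostly noise
print(ols_slope(x, y), map_slope(x, y))  # the OLS slope is large; the MAP slope is shrunk
```

With many predictors and few data points, OLS happily assigns large weights to pure noise; the prior suppresses exactly those, which is the intuition behind the hierarchical and shrinkage priors in the talk.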
19.01.2022, 11:15-12:45, Online Meeting: Ben Wagner
Title: "Dopaminergic modulation of intertemporal choice" (Ben's PhD defense practice talk)
Abstract: We constantly make decisions, and these often involve trade-offs between immediate and future outcomes. Often these trade-offs are minor or irrelevant, but in other cases they have major impacts. Even small decisions can have long-lasting consequences when a specific decision pattern (e.g. never taking the future into account) persists and negative effects accumulate over time. In consequence, such decision patterns can become bad habits and contribute to maladaptive behavior and harmful consequences in the long run. An example of such a trade-off is the decision between going out with friends, playing a video game, or listening to music, and writing a thesis. Enjoying social interactions, games and music pays off immediately, while writing a thesis on the trade-off between smaller-sooner and larger-but-later rewards (probably) pays off in the future. How should one decide? Daily life requires these frequent trade-offs in various forms, and different processes such as valuation of decision options, prospection into the future, self-control and context orchestrate their outcome.
This dissertation aims to contribute to a better understanding of such so-called intertemporal choices. It focuses in particular on the role of dopamine in modulating decisions with time trade-offs in various populations, from healthy controls to people with gambling problems and participants with neurological and psychiatric disorders.
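The smaller-sooner versus larger-but-later trade-off is commonly formalized with hyperbolic discounting, V = A / (1 + kD), where k is an individual discount-rate parameter. The sketch below shows the standard model as an illustration; it is not claimed to be the dissertation's exact model.

```python
# Hyperbolic discounting: subjective value of a reward of size `amount`
# delivered after `delay` time units, for an individual discount rate k.

def discounted_value(amount, delay, k):
    return amount / (1.0 + k * delay)

def prefers_larger_later(small_now, large, delay, k):
    """True if the delayed larger reward beats the immediate smaller one."""
    return discounted_value(large, delay, k) > small_now

# A patient decision maker (small k) waits; a steep discounter (large k) does not.
print(prefers_larger_later(10.0, 20.0, delay=30, k=0.01))  # True  (20/1.3 ≈ 15.4 > 10)
print(prefers_larger_later(10.0, 20.0, delay=30, k=0.10))  # False (20/4.0 = 5.0 < 10)
```

Fitting k per participant is what lets such studies relate steep discounting to, for example, dopaminergic manipulations or clinical group membership.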
12.01.2022, 11:15-12:45, Online Meeting: TBD
Title: TBA
05.01.2022, 11:15-12:45, Online Meeting TBA
Title: TBA
15.12.2021, 11:15-12:45, Online Meeting: Sascha Frölich
Title: The Motor-Sequences Task: An Automaticity Paradigm
08.12.2021, 11:15-12:45, Online Meeting: Eric Legler
Title: Modulation of Default Strategy Selection in a Sequential Decision-Making Task
01.12.2021, 11:15-12:45, Online Meeting: Kevin Miller
Title: Habits without Values
Abstract:
Habits form a crucial component of behavior. In recent years, key computational models have conceptualized habits as arising from model-free reinforcement learning mechanisms, which typically select between available actions based on the future value expected to result from each. Traditionally, however, habits have been understood as behaviors that can be triggered directly by a stimulus, without requiring the animal to evaluate expected outcomes. Here, we develop a computational model instantiating this traditional view, in which habits develop through the direct strengthening of recently taken actions rather than through the encoding of outcomes. We demonstrate that this model accounts for key behavioral manifestations of habits, including insensitivity to outcome devaluation and contingency degradation, as well as the effects of reinforcement schedule on the rate of habit formation. The model also explains the prevalent observation of perseveration in repeated-choice tasks as an additional behavioral manifestation of the habit system. We suggest that mapping habitual behaviors onto value-free mechanisms provides a parsimonious account of existing behavioral and neural data. This mapping may provide a new foundation for building robust and comprehensive models of the interaction of habits with other, more goal-directed types of behaviors and help to better guide research into the neural mechanisms underlying control of instrumental behavior more generally.
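The core mechanism of the abstract — habits as direct strengthening of recently taken actions, with no outcome value in the update — can be sketched as follows. This is my minimal caricature of the idea, not Miller et al.'s actual equations.

```python
# Value-free habit learning: habit strength moves toward recently taken
# actions; no reward or outcome term appears anywhere in the update.

def update_habits(habits, action, rate=0.1):
    """Strengthen the chosen action and decay the others, outcome-free."""
    return {a: h + rate * ((a == action) - h) for a, h in habits.items()}

habits = {"lever": 0.5, "rest": 0.5}
for _ in range(100):               # extended training: always press the lever
    habits = update_habits(habits, "lever")

# After overtraining, the habit persists even if the outcome were devalued,
# because no outcome value enters the update -- the signature behavioral
# finding (insensitivity to devaluation) falls out of the mechanism.
print(habits["lever"] > 0.99)
```

The same repetition-strengthening term also produces perseveration in repeated-choice tasks: whatever was chosen recently becomes more likely to be chosen again, regardless of reward.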
10.11.2021, 11:15-12:45, Online Meeting: Sia Hranova
Title: Joint Modeling of Bayesian Forward Planning and Reaction Times using Evidence Accumulation Models
Abstract: I will present my semester project on the topic of whether a prior-based contextual control model can be successfully combined with a type of EAM called the Racing Diffusion Model and hence model both choice probabilities and reaction times. The success of the combined model is judged by its ability to recreate habitual behaviours in terms of response accuracy and latency in a simulated sequential decision task.
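The racing-accumulator idea behind the Racing Diffusion Model can be simulated in a few lines. This is a toy sketch of the general mechanism (parameters and details are my own assumptions, not the talk's model): each response option accumulates noisy evidence, and the first accumulator to reach threshold determines both the choice and the reaction time.

```python
import random

random.seed(1)

def race(drifts, threshold=1.0, dt=0.01, noise_sd=0.1):
    """Simulate one race; return (winning option index, reaction time)."""
    x = [0.0] * len(drifts)
    t = 0
    while True:
        t += 1
        for i, v in enumerate(drifts):
            # Euler step of a diffusion: drift plus scaled Gaussian noise.
            x[i] += v * dt + random.gauss(0.0, noise_sd) * dt ** 0.5
            if x[i] >= threshold:
                return i, t * dt

# Option 0 has the higher drift, so it should win most races -- and jointly
# determine choice probabilities and reaction-time distributions.
choices = [race([1.5, 0.5])[0] for _ in range(200)]
print(sum(c == 0 for c in choices) / len(choices))  # mostly option 0 wins
```

Coupling the drifts to the prior-based contextual control model's action probabilities is, in spirit, how such a combined model produces both response accuracies and latencies.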
03.11.2021, 11:15-12:45, Online Meeting: Sarah Schwoebel
Title: Joint modeling of choices and reaction times based on Bayesian contextual behavioral control
Abstract: I will present my recent work on how prior-based contextual control in conjunction with a sampling-based reaction time algorithm can explain behavioral findings across different experimental domains.
27.10.2021, 11:15-12:45, Online Meeting Daniel McNamee
Title: Flexible generative sampling in the entorhinal-hippocampal system
Abstract: I will present an abstract model of the entorhinal-hippocampal system regarding how sequences of positions within cognitive maps may be sampled. In particular, I will highlight how systematically modulating medial entorhinal input into hippocampus facilitates a remarkably flexible generative process. Leveraging this flexibility, I will argue that the brain may interpolate between various modes of sequential hippocampal reactivations which are contrastingly optimized for distinct downstream cognitive processes such as planning versus consolidation. Via simulation, it will be demonstrated that the model coheres several phenomena, such as generative cycling, diffusive hippocampal reactivations, and “jumping” trajectory events, within a normative framework.
13.10.2021, 11:15-12:45, Online Meeting - Sia Hranova
Title: Joint Modeling of Bayesian Forward Planning and Reaction Times using Evidence Accumulation Models
Abstract: I will present my semester project on the topic of whether a prior-based contextual control model can be successfully combined with a type of EAM called the Racing Diffusion Model and hence model both choice probabilities and reaction times. The success of the combined model is judged by its ability to recreate habitual behaviours in terms of response accuracy and latency in a simulated sequential decision task.
14.07.2021, 11:15-12:45, Online Meeting: Dimitrije Markovic
Title: Introduction to Monte Carlo Tree Search
23.06.2021, 11:15-12:45, Online Meeting: Dario Cuevas
Title: Motor adaptation at the single-cell level.
16.06.2021, 11:15-12:45, Online Meeting: Sarah Schwoebel
Title: Topographic feature organization in the brain: What is it good for and where does it come from?
09.06.2021, 11:15-12:45, Online Meeting: Sascha Frölich
Title: Boltzmann Machines: An Introduction
26.05.2021, 11:15-12:45, Online Meeting: Alexandra Felea
Title: Journal Club: Gradual extinction prevents the return of fear: implications for the discovery of state (2013) by Samuel J. Gershman, Carolyn E. Jones, Kenneth A. Norman, Marie-H. Monfils and Yael Niv
12.05.2021, 11:15-12:45, Online Meeting: Eric Legler
Title: Journal Club: Hardwick, R. M., Forrence, A. D., Krakauer, J. W., & Haith, A. M. (2019). Time-dependent competition between goal-directed and habitual response preparation. Nature human behaviour, 3(12), 1252-1262.
05.05.2021, 11:15-12:45, Online Meeting: Cassandra Visconti
Title: Clinically Informed Model-Fitting: Fear of Negative Evaluation and Social Anxiety
28.04.2021, 11:15-12:45, Online Meeting: Sascha Frölich
Journal Club: Momennejad, I., Otto, A. R., Daw, N. D., & Norman, K. A. (2018). Offline replay supports planning in human reinforcement learning. Elife, 7, e32548.
14.04.2021, 11:15-12:45, Room TBA: Semester Kickoff Meeting
20.01.2021, 11:15-12:45, Room TBA: Dario Cuevas
Title: Demonstration of arm movements and (maybe) hierarchical inference.
13.01.2021, 11:15-12:45, Room TBA: Eric Legler
Title: Modulation of Strategy Selection in a Sequential Decision Making Task
06.01.2021, 11:15-12:45, Room TBA: Cassandra Visconti
Title: A general model of hippocampal and dorsal striatal learning and decision making
16.12.2020, 11:15-12:45, Room TBA: Sascha Froelich
Title: Presentation of a Sequential Habit-Learning Task
09.12.2020, 11:15-12:45, Room TBA: Sascha Froelich
Title: Habits: An Overview
11.11.2020, 11:15-12:45, Online meeting: Johannes Steffen
Title: Forward Planning Differences between Young and Old Adults
04.11.2020, 11:15-12:45, Online meeting: Janik Fechtelpeter
Title: Variational autoencoders
23.07.2020, 13:15-14:45, FAL 156: Sascha Frölich
Journal Club: Deneve, S. (2008). Bayesian spiking neurons I: inference. Neural computation, 20(1), 91-117
16.07.2020, 15:00-16:30, FAL 156: Florian Ott
Title: Dorsal anterior cingulate cortex compares expected future consequences of choice options to resolve conflict during difficult decisions
09.07.2020, 13:15-14:45, FAL 156: Dario Cuevas
Title: Final presentation of students' project
25.06.2020, 13:00-14:00, FAL 156: Eric Legler
Title: Context-dependent decision making
28.05.2020, 13:15-14:45, FAL 156: Dimitrije Markovic
Title: Active inference and dynamic multi-armed bandits
14.05.2020, 13:15-14:45, FAL 156: Dimitrije Markovic
Title: Analysis of nonstationary time-series - progress report
07.05.2020, 13:15-14:45, FAL 156: Dimitrije Markovic
Title: Random stuff about SARS-CoV-2
23.04.2020, 13:15-14:45, FAL 156: Quick team meeting and info about conducting behavioral experiments online
09.04.2020, 13:15-14:45, FAL 156: Dario Cuevas
Title: Project presentation
29.01.2020, 11:30-13:00, FAL 156: Florian Ott
Title: Neural mechanisms of sustainable decision making
15.01.2020, 13:00-14:30, FAL 156: Eric Legler
Title: Red Cap
08.01.2020, 13:00-14:30, Quick internal meeting at Stefan's office.
18.12.2019, 13:00-14:30, FAL 156: Sascha Froelich
Title: Uncertainty in the brain [3]
11.12.2019, 13:00-14:30, FAL 156: Dimitrije Markovic
Title: Testing the ergodicity assumption for model-based data analysis
Paper Review: Ergodicity-breaking reveals time optimal economic behavior in humans [5,4]
04.12.2019, 13:00-14:30, FAL 156: Sarah Schwoebel
Title: A (non-exhaustive) overview of decision making models
13.11.2019, 13:00-14:30, FAL 156: Cassandra Visconti
Title: Progress report
23.10.2019, 13:00-14:30, FAL 156: Semester Kickoff
10.07.2019, 14:15-15:15, FAL 157: Vahid Rahmati
Title: Progress report - analysing calcium imaging data
26.06.2019, 14:15-15:15, FAL 157: Florian Ott
Title: Anticipatory resource allocation in a dynamic sequential task
19.06.2019, 16:00-17:30, FAL 157: Dimitrije Markovic
Title: Statistical significance - the good, the bad and the ugly
12.06.2019, 14:15-15:15, FAL 157: Dario Cuevas Rivera
Title: Effort discounting in sequential decision making
05.06.2019, 14:15-15:15, FAL 157: Sascha Frölich
Presented a paper "A hierarchical neuronal model for generation and online recognition of birdsongs".
15.05.2019, 14:15-15:15, FAL 157: Sarah Schwoebel
Title: Active inference, habit learning, and implications for addiction
08.05.2019, 14:15-15:15, FAL 157: Cassandra Visconti
Title: TBA
10.04.2019, 14:15-15:15, FAL 157: Eric Legler
Title: TBA
30.01.2019, 11:00-12:30, FAL 157: Florian Ott
Title: Adaptive goal selection
23.01.2019, 11:00-12:30, FAL 157: Sascha Frölich
Title: Stable heteroclinic channels and probabilistic inference
12.12.2018, 10:45-11:45, FAL 157: Dimitrije Markovic
Title: The time interpretation of the expected utility theory
05.12.2018, 11:00-12:30, FAL 157: Dario Cuevas Rivera
Title: "Journal club: Experimental practices in economics: a methodological challenge for psychologists?"
14.11.2018, 11:00-12:30, FAL 157: Vahid Rahmati
Title: Neural development: Data analysis
07.11.2018, 11:00-12:30, FAL 157: Cassandra & Sarah practice talks
24.10.2018, 11:00-12:30, FAL 157: Sarah Schwoebel
Title: Beyond Active Inference: A discrete model in a continuous world & possible implementations in neural networks
10.10.2018, 11:00-12:30, FAL 157: Semester Kickoff Meeting
20.06.2018, 14:45-16:15, FAL 156: Sarah Schwoebel
Title: Active Inference and Anxiety
15.06.2018, 15:00-17:00, FAL 156:
Special Interest Session: Probabilistic programming in Python
13.06.2018, 14:45-16:15, FAL 156: Dario Cuevas Rivera
Title: A very complicated arbitration between model-based and model-free decisions.
31.05.2018, 11:00-12:30, FAL 156: Special Interest Session: Psychtoolbox
16.05.2018, 14:45-16:15, FAL 156: Florian Ott
Title: Dynamic prioritization in a new sequential decision making task with multiple goals
07.05.2018, 11:00-12:30, FAL 156: Sebastian Bitzer
Title: Perception for action: How the brain prepares suitable actions already during perception
02.05.2018, 14:45-16:15, FAL 156: Florian Bolenz
Title: Cost-benefit arbitration between model-free and model-based learning strategies in human aging
17.01.2018, 14:00-15:30, FAL 156: Pouyan Rafieifard
Title: Response bias in incoherent motion discrimination is explained by motion energy and bounded accumulation model
13.12.2017, 14:00-15:30, FAL 156: Vahid Rahmati
Title: Calcium Imaging: Biophysics, Advantages, Limitations, Analysis Methods
29.11.2017, 14:00-15:30, FAL 156: Sebastian Bitzer
Title: Sailing the high seas of correlation analysis to find new land in brain processes involved in perceptual decision making.
09.11.2017, 15:00-16:30, FAL 156: Sarah Schwoebel
Title: Active inference, belief propagation, and the Bethe approximation: Results
08.11.2017, 14:00-15:30, FAL 156: Dimitrije Markovic
Title: "Planning and inference with semi-Markov models: A reversal learning example"
25.10.2017, 14:00-15:30: Dario Cuevas Rivera
talked about fitting active inference to behavioral data.
28.06.2017, 14:00-15:30: Florian Ott
Title: Dynamic self-regulation and multiple goal pursuit
21.06.2017, 14:00-15:30: Pouyan Rafieifard
Title: "Stimulus or Bias? Neural and Behavioral Responses to Incoherent Motion Signals"
14.06.2017, 14:00-15:30: Sarah Schwoebel
Title: Active Inference, Belief Propagation and the Bethe Approximation
17.05.2017, 14:00-15:30: Alexander Strobel
Title: "Need for Cognition"
26.04.2017, 14:00-15:30: Dario Cuevas Rivera
Title: "Fitting active inference to behavioral data."
19.04.2017, 14:00-15:30: Sebastian Bitzer
Title: "BeeMEG Experiment - Current analyses of sequential evidence integration in the brain and the resulting findings."
12.04.2017, 14:00-15:30: Dimitrije Markovic
Title: Learning in changing environments: Hierarchical and Switching Gaussian Filters
25.01.2017, 12:00-13:30: Florian Bolenz
Title: "Arbitration of habitual and goal-directed learning strategies"
11.01.2017, 12:00-13:30: Dimitrije Markovic
We discussed the organization and structure of the tutorial on Active Inference.
04.01.2017, 12:00-13:30: Sebastian Bitzer
Talked about how probabilistic modelling technology changes the required skill set for the modern psychologist.
07.12.2016, 12:00-13:30: Pouyan Rafieifard
Title: "Recent advances in Bayesian models of perceptual decision making".
23.11.2016, 12:00-13:30: Gruppe Christian Beste
Title: EEG source-based connectome comparisons – small world and other metrics
Abstract: For several decades, EEG has been a well-established measurement tool for gaining insights into a wide variety of brain functions. In recent years, more advanced computational methods like time-frequency analysis, beamforming and connectivity analysis have been developed to extend the scope of EEG analysis beyond FFTs and event-related potentials. In this talk, we want to discuss the methodological possibilities and restrictions of 3D connectome analysis based on EEG data.
09.11.2016, 12:00-13:30: Cassandra Visconti
talked about the recent evidence for ideal observer-like sensitivity to complex acoustic patterns [8].
12.10.2016, 12:00-13:30: Stefan Kiebel
Organisational meeting for the start of the winter semester.
11.07.2016, 14:00-15:00: Arne Doose
gave an introduction to "Learning and conditioning".
21.06.2016, 11:30-13:00: Sebastian Bitzer
spoke about the advantages and disadvantages of model expansion over model comparison.
07.06.2016, 11:30-13:00: Hame Park
presented recent work by Drugowitsch et al. about the speed-accuracy tradeoff in decisions based on information from two modalities (vision and vestibular sensors) [9]. We figured that the experiment mostly showed that the vestibular modality led to very different behaviour than vision in the setup used.
11.05.2016, 14:00-15:00: Stefan Kiebel
will present a paper with the title "Anterior cingulate cortex instigates adaptive switches in choice by integrating immediate and delayed components of value in ventromedial prefrontal cortex." [10].
27.04.2016, 14:00-15:00, Vahid Rahmati
talked about his recent paper: "Inferring Neuronal Dynamics from Calcium Imaging Data Using Biophysical Models and Bayesian Inference" [11].
19.04.2016, 11:30-13:00, Dario Cuevas Rivera
Active Inference in decision making
13.04.2016, 14:00-15:00, Sebastian Bitzer
Moodle and probabilistic programming
27.01.2016, 11:10 – 12:10
Pouyan R. Fard will give a talk on "Multi-voxel Pattern Analysis for Neuroimaging Data".
20.01.2016, 11:10 – 12:10 Dimitrije Markovic
Dimitrije Markovic will present a paper by Bernacchia et al. titled "A reservoir of time constants for memory traces in cortical neurons" [12]
06.01.2016, 11:10 – 12:10 Dario Cuevas Rivera
Reviewed several papers discussing the classification capacity of the human olfactory system [13, 14, 15].
16.12.2015, 11:10 – 12:10
Xmas preparations
09.12.2015, 12:11 – 12:40
Carolin Hübner gave a practice talk for her master thesis defense.
Dario Cuevas Rivera gave a practice talk on modeling insect olfaction with stable heteroclinic cycles and Bayesian inference.
02.12.2015, 11:10 – 12:10
Carolin Hübner gave a practice talk for her master thesis defense.
25.11.2015, 11:10 – 12:10 Hame Park
Talk title: "Modeling perceptual decision-making with MEG/EEG: a literature review"
11.11.2015, 11:00 – 12:00
Sebastian Bitzer talked about efficient numerical approximations of probability distributions for nonlinear filtering.
04.11.2015, 11:00 – 12:00
Dimitrije Markovic presented his recent work on modeling inference in changing environments.
21.10.2015, 11:00 – 12:00
Carolin Hübner presented an advanced analysis of the Re-Decisions experiment.
14.10.2015, 15:00 – 16:00
Pouyan R. Fard presented [16].
22.7.2015, 13:00 – 14:00
Carolin Hübner presented her results on "Re-Decisions Experiment".
08.07.2015, 13:00 – 14:00
Pouyan R. Fard talked about "An Extended Bayesian Model Equivalent to Extended Drift-diffusion Model".
03.06.2015, 11:00 – 12:00
Talk by Stefan Scherbaum
13.05.2015, 11:00 – 12:30 Stefan's office:
We talked about ZIH's computer servers.
06.05.2015, 11:00 – 12:30 in Stefan's office
Hame Park discussed a few ideas for a new neuroimaging study on perceptual decision making.
29.04.2015, 11:00 – 12:00 Holger Mohr
Talked about: "Analysis and modeling of instruction-based learning: Completed, ongoing and future work".
Past Meetings @ MPI CBS
25.11.2014, 11:15, Valentin Schmutz
Talked about fitting the results of [17] with the Bayesian attractor model.
27.10.2014, 14:00, Dimitrije Marković
will present his work on Bayesian modeling of a probabilistic WCST.
14.10.2014, 11:10, Carolin Hübner
presented her Bachelor's thesis project on the concept of information across disciplinary boundaries.
13.10.2014, 12:00, Sophie Seidenbecher
presented her work on the model of [28].
16.07.2014, 14:00, Sabrina Trapp
will tell us about her results from an experiment investigating choice behavior in a reinforcement learning task in which the perceptual predictability of the stimulus material is additionally manipulated.
21.05.2014 Sophie Seidenbecher
presented a paper about chimera states in dynamical networks [18].
30.04.2014 Hame Park & Jan-Matthis Lückmann
presented the intermediate results for the Bee Experiment.
09.04.14 Andrej Warkentin
I will present my current progress on the extensions of the Bayesian model from Sebastian.
19.03.2014 Sebastian Bitzer
presented Expectation Propagation - Approximate Bayesian Inference (EP-ABC) [19].
05.02.2014 Stefan Kiebel
presented [20].
22.01.2014 Sophie Seidenbecher
went through the details of Brunton et al.'s [28] methods.
04.12.2013 Hame Park
presented [21].
13.11.2013 Sebastian Bitzer
Based on [22], discussed Shadlen's beliefs about how neurons implement probabilistic reasoning.
30.10.2013 Dimitrije Markovic
presented [23].
02.10.2013 Stefan Kiebel
presented [24].
26.07.2013 Hame Park
Presented [25].
19.07.2013 Dimitrije Markovic
Presented a paper [26] which investigates the computational properties of the amygdala during Pavlovian conditioning.
05.07.2013 Sebastian Bitzer
I will present [27]. The authors propose an explanation of changes of mind in terms of switching between fixed points in attractor networks.
17.05.2013 Stefan Kiebel
Stefan will present a recent paper [28] which investigates where the noise in perceptual decision making comes from by using a clever task together with computational modelling.
26.04.2013 Hame Park
Hame presented an article by Amador et al. [29], titled "Elemental gesture dynamics are encoded by song premotor cortical neurons".
12.04.2013 Jelle Bruineberg
Jelle presented a model for online discrimination of movements [30] and discussed possible simulations which relate model behaviour to experimental results.
08.03.2013 Burak Yildiz
Burak talked about the following NIPS paper [31] which shows how spiking neurons can learn to represent continuous signals.
22.02.2013 Sebastian Bitzer
Institute Colloquium practice talk.
08.02.2013 Sebastian Bitzer
I will discuss a paper [32] which argues that the timing of decisions is critically influenced by a perceived need to make a decision soon (urgency gating). The basis for the argument is a perceptual decision experiment in which participants receive varying amounts of evidence within a trial.
18.01.2013 Felix Effenberger
Felix told us about transfer entropy [33] and its application to physiological recordings as, e.g., in [34].
11.01.2013 Kai Dannies
Kai presented his maze simulator.
07.12.2012: Burak Yildiz
Burak talked about an interesting phenomenon called "suprathreshold stochastic resonance". This illustrates mathematically how noise can be beneficial for information transfer in a parallel array of neurons. It has possible applications in cochlear implants. For the talk, it is enough to look at the following Scholarpedia page [35]. If you get more interested, you can read the following review [36].
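The phenomenon can be demonstrated with a toy simulation (my own sketch of the standard setup, not the review's exact equations): an array of identical threshold units shares one suprathreshold signal, and independent per-unit noise makes the summed population output track the signal better than noise-free copies would, because without noise all units give the same single bit.

```python
import math
import random

random.seed(0)

def population_output(signal, n_units, noise_sd):
    """Each unit fires if signal + private Gaussian noise exceeds threshold 0;
    the population output is the count of firing units per time step."""
    return [sum(1 for _ in range(n_units)
                if s + random.gauss(0.0, noise_sd) > 0.0)
            for s in signal]

def correlation(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

signal = [math.sin(2 * math.pi * t / 50) for t in range(500)]
quiet = population_output(signal, 100, 1e-6)  # all units answer identically (1 bit)
noisy = population_output(signal, 100, 1.0)   # independent noise decorrelates them

print(correlation(signal, noisy) > correlation(signal, quiet))  # noise helps
```

The design choice here is deliberately minimal: correlation with the input stands in for the information-theoretic measures used in the SSR literature, but the qualitative effect — noise improving signal transmission in a parallel array — is the same.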
23.11.2012: Sebastian Bitzer
Sebastian presented a paper [37] which tries to differentiate models of perceptual decision making based on two measures of firing rate variability in monkey LIP neurons.
02.11.2012: Stefan Kiebel
Stefan talked about a study describing efficient auditory coding [38].
25.09.2012: Burak Yildiz
Burak talked about sparse coding in the auditory pathway following a recent publication: [39]
04.09.2012: Arnold Ziesche
Arnold presented a neurofunctional model for the McGurk effect [40].
21.08.2012: Sebastian Bitzer
Sebastian presented Friston's latest work on how explorative behaviour may be understood within the free energy framework [41].
26.06.2012: Stefan Kiebel
Stefan reviewed a paper about optimal feedback control [42] and discussed how results presented in this paper may be implemented by active inference.
29.05.2012: Burak Yildiz
Burak initiated a discussion and brainstorming session: What insights about the brain did we gain from our theoretical work, and how could we test and confirm these insights with (fMRI) experiments? These insights include topics such as uncertainty, precision, the relation between priors and posteriors, prediction error and hierarchical message passing. Burak went through (mostly) the second half of the following review paper to stimulate discussion: [43]
15.05.2012: Arnold Ziesche
Arnold presented the results of his model for multisensory integration.
08.05.2012: Sebastian Bitzer
Sebastian gave an introduction to Gaussian Processes [44] with a focus on their use in DEM.
17.04.2012: Stefan Kiebel
Stefan presented a well-established model of multisensory integration [45].
10.04.2012: Stefan Kiebel
Stefan presented a lecture about computational models for bistable perception. 45 min + some discussion time.
03.04.2012: Burak Yildiz
Burak reported from CONAS 2012.
27.03.2012: Sam Mathias
Sam discussed with us potential approaches to model his newest auditory psychophysics experiments relating to frequency shift detectors.
21.02.2012: Burak Yildiz
Burak talked about the work he has been doing in the last couple of months. It was about learning and recognition of human speech.
07.02.2012: Arnold Ziesche
Arnold presented a generic model he is currently considering. There was nothing to read.
24.01.2012: Sebastian Bitzer
Sebastian presented Katori et al. (2011) [46], who suggest a spiking neural network which implements switching of stable states represented by some form of attractor dynamics. The basis for all of this is apparently recordings of neurons in prefrontal cortex.
Also, please remember to list any interesting conferences.
10.01.2012: Stefan Kiebel
Stefan presented a very recent science paper about balancing excitation/inhibition [47].
13.12.2011: Stollen meeting
29.11.2011: Burak Yildiz
Burak talked about a side project of his. This was a rather mathematical talk about the echo state property (ESP) of Echo State Networks (ESN). It would be helpful to review the definitions and sufficient conditions for ESP of the standard and leaky integrator ESNs which can be found in [48].
15.11.2011: Arnold Ziesche
Arnold presented a paper which uses an echo state network (ESN) with decoupled reservoirs to learn dynamics on different time scales simultaneously [49]. For some reason, this might be related to his own work.
01.11.2011: Sebastian Bitzer
Peter Dayan pointed me to Tishby and Polani [50] when we spoke about the importance of actions in perception and the free energy principle. They unify ideas from information theory (determining the information, or surprise, of an event) with reinforcement learning (finding actions which maximise reward) and show that the minimisation of surprise is sufficient for optimising actions in the special case that the environment is "perfectly adapted", i.e., the transition probabilities of the environment directly reflect the associated rewards. Thus, they formalise a common reservation against the free energy principle: that surprise alone might not be sufficient to explain behaviour. Simultaneously, they embrace the potential existence of perfectly adapted environments through the coevolution of agents (their sensors) and the environment.
I took their formalism as a basis for discussing the representation of values in the form of priors in the free energy principle (see also the associated discussion on hunch.net).
Unfortunately, the book chapter is a bit long and quite technical. On the other hand, it is very well written and provides an excellent overview of the basic ideas in information theory and reinforcement learning.
25.10.2011: Stefan Kiebel
Stefan presented a review article [51] which stresses that a) spiking activity of a population of neurons depends on the hidden physiological state of the neurons (e.g. short-term synaptic plasticity), b) the response of sensory neurons is often projected into a higher dimensional space (of processing neurons) and c) linear read-out units may be sufficient to discriminate high-dimensional trajectories of neuron responses. There are hardly any equations in it, but the topic is relevant for us.
11.10.2011: coffee meeting
23.09.2011: Burak Yildiz
Burak discussed DCM for fMRI. He went through the first ten pages of [52] and also the visual example in Section 5.1.
30.08.2011: Arnold Ziesche
Arnold presented a paper which critically discusses a computational model for audiovisual speech perception [53].
23.08.2011: Sebastian Bitzer
Sebastian continued his session on speedDEM. He demonstrated the performance of speedDEM for two test cases and various network sizes.
16.08.2011: Sebastian Bitzer
After a reasonable amount of DEM theory, this will be a practical session on the implementation of DEM in SPM and the resulting hooks for increasing the speed of DEM in speedDEM. In particular, I will discuss the functional logic of the very general SPM implementation and show profiling results for it, which indicate that numerical differentiation is the major contributor to running time. Subsequently we will discuss possible changes and extensions which can reduce this, i.e., we will discuss the future of speedDEM.
09.08.2011: Stefan Kiebel
Stefan talked about a paper [54] that uses reservoir computing, slow feature analysis and independent component analysis to let an agent self-localize in a maze.
02.08.2011: Burak Yildiz (different time: 14:00)
He continued with the mathematical investigation of the DEM paper [57]. After a short review of the previous talk, he explained the Laplace approximation formulas on pages 854 and 855 and the integration scheme based on Ozaki's work.
14.06.2011: Eduardo Aponte
He presented his recent work on unsupervised learning of spatiotemporal sequences in hippocampus.
07.06.2011: Arnold Ziesche
Arnold talked about a neural oscillator model for vowel recognition [55].
31.05.2011: Sebastian Bitzer
After a short recap of the things discussed on 05.04.2011 Sebastian continued to describe covariance hyperparameter estimation for the dynamic models presented in [60]. This also included a short general introduction to expectation maximisation (EM) and potentially some material from appendix A.1 in [56].
24.05.2011: Stefan Kiebel
Stefan gave a talk about his current pet project where they use the free-energy principle to derive a functional model of intracellular single neuron dynamics.
17.05.2011: Burak Yildiz
Burak went through some of the details in the DEM paper [57] to gain more insight into the physics background of dynamic variational Bayes, especially ensemble dynamics and generalized coordinates. He started with the proof of Lemma 1 on page 850 and continued up to page 854, where the Laplace approximation starts.
26.04.2011: Eduardo Aponte
It is not obvious why one should use dynamical systems as computational tools. Moreover, recurrent neural networks often show undesirable properties; in particular, they may fall into chaotic regimes. Legenstein and Maass [58] address these questions and provide a non-technical explanation of the relation between chaos, dynamical systems and computation. The paper includes a few equations and inequalities.
19.04.2011: Agata Checinska
She gave an introduction to quantum information and tried to highlight potential relations to neuroscience.
12.04.2011: Arnold Ziesche
I'm interested in understanding the motivation for using generalized coordinates. [59] discusses a good reason, along with a simple model and some nice perceptual illusions.
05.04.2011: Sebastian Bitzer
I presented the algorithm which underlies various forms of dynamic causal modeling and which we use to estimate RNN parameters. At its core is an iterative computation of the posterior over the parameters of a dynamical model, based on a first-order Taylor series approximation of a meta-function mapping parameter values to observations; i.e., the dynamical system is hidden in this function so that the probabilistic model does not have to care about it. This is possible because the dynamics is assumed to be deterministic and noise only enters at the level of observations. It can be shown that the resulting update equations for the posterior mode are equivalent to a Gauss-Newton optimisation of the log-joint probability of observations and parameters (i.e., MAP estimation of the parameters). Consequently, the rate of convergence may be up to quadratic, but the algorithm is not guaranteed to increase the likelihood at every step, or to converge at all. It should work well close to an optimum (when observations are well fitted), or when the dynamics is close to linear in the parameters. Because the dynamical system is integrated numerically to obtain observation predictions, and the Jacobian of the observations with respect to the parameters is also obtained numerically, this algorithm can be very slow.
This algorithm is described in [60], embedded in an application to fMRI. I did not present the specifics of this application and, in particular, ignored the influence of the inputs u defined there. The derivation of the parameter posterior described above is embedded in an EM algorithm for hyperparameters on the covariance of observations. I will discuss this in a future session.
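The update described above can be sketched in a few lines. This is a toy illustration with an invented one-parameter ODE and made-up settings, not the SPM implementation or the model in [60]:

```python
import numpy as np

# Toy stand-in for the meta-function mapping parameters to observations:
# numerically integrate dx/dt = -theta * x (Euler) and read out x at
# the observation times. The real generative model is far richer.
def g(theta, ts, x0=1.0, dt=1e-3):
    ys, x, t = [], x0, 0.0
    for t_obs in ts:
        while t < t_obs:
            x += dt * (-theta[0] * x)
            t += dt
        ys.append(x)
    return np.array(ys)

def jacobian(theta, ts, eps=1e-6):
    """Numerical Jacobian of the observations w.r.t. the parameters."""
    y0 = g(theta, ts)
    J = np.zeros((len(y0), len(theta)))
    for i in range(len(theta)):
        th = theta.copy()
        th[i] += eps
        J[:, i] = (g(th, ts) - y0) / eps
    return J

def gauss_newton_map(y, ts, eta, P_inv, sigma2, n_iter=20):
    """MAP estimate of theta under a Gaussian prior N(eta, P) and
    observation noise variance sigma2, via Gauss-Newton updates."""
    theta = eta.copy()
    for _ in range(n_iter):
        r = y - g(theta, ts)                      # observation residuals
        J = jacobian(theta, ts)
        H = J.T @ J / sigma2 + P_inv              # approximate posterior precision
        grad = J.T @ r / sigma2 - P_inv @ (theta - eta)
        theta = theta + np.linalg.solve(H, grad)  # Gauss-Newton step
    return theta

ts = np.linspace(0.5, 3.0, 10)
y = g(np.array([1.5]), ts)                        # noise-free data, true theta = 1.5
theta_hat = gauss_newton_map(y, ts, eta=np.array([0.5]),
                             P_inv=0.01 * np.eye(1), sigma2=1e-4)
print(theta_hat)                                  # close to the true value 1.5
```

Note how both the integration in `g` and the Jacobian are purely numerical, which is exactly why the full algorithm is slow in practice.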
22.+29.03.2011: Stefan Kiebel
I'll present a variational Bayesian approach for localizing dipoles in MEG and EEG data [61]. This is meant as a teaching session for variational Bayes. We will go through the motivation, some of the key steps and assumptions, and the detailed math for the derivation of some of the update equations.
15.02.2011: Burak Yildiz
Fiete et al. [62] investigate why birds have sparse coding in the high vocal center (HVC) for song production. It was experimentally observed that each HVC neuron fires only once during a song motif to control the RA neurons. In this paper, the authors give a mathematical foundation for the benefits of such a mechanism for learning new songs. They conclude that learning slows down when HVC neurons fire more than once during the song motif.
01.02.2011: Sebastian Bitzer
Sebastian presented Lazar et al. (2009) [63] as an example of the re-emergence of recurrent neural network learning. While the RNNs in reservoir computing are usually fixed, the authors show that it can be beneficial to update the RNN connections as well as the output connections. The learning algorithm is local, biologically motivated and based on spike-timing-dependent plasticity, intrinsic plasticity and synaptic normalisation. The units in the network are binary.
The results in the paper show that the learning algorithm can linearise the activity of the recurrent excitatory units with respect to the (next predicted) input symbol. Although STDP is essential for capturing the sequential nature of the input patterns, the homeostatic plasticity mechanisms are necessary to maintain distributed network activity and achieve good results. We presume that the architecture with an additional set of inhibitory neurons, as used here, also contributes to stable activity in the network, but this is not discussed in the paper. As a result of learning, the network exhibits sub-critical activity, i.e., when perturbed, it tends to restore its last activity pattern. Even though random networks (reservoir computing) have been shown to perform best in critical regimes, the sub-critical SORN still outperforms critical random networks. However, both network types share the necessity for sparse connections between excitatory neurons.
25.01.2011: Stefan Kiebel
Stefan talked about a 1995 paper by Geoff Hinton and others about a learning algorithm for neural networks [64]. It is actually one of the first papers mentioning the free energy in the context of learning algorithms.
18.01.2011: Burak Yildiz
Burak talked about a hierarchical neural model inspired by the activities of neural circuits in the brains of songbirds. There was no paper to read since the talk was about the things he has been working on recently.
23.11.2010: Arnold Ziesche
Arnold told us about his thesis work, 'A computational model for visual stability'.
16.11.2010: Sebastian Bitzer
I will present the brand-new Neuron Viewpoint by Churchland et al. [66]. They investigated the functional role of preparatory activity (here: neural firing) in (pre-)motor cortex and suggest that "preparatory tuning serves not to represent specific factors but to initialize a dynamical system whose future evolution will produce movement."
Summary:
What are the variables that best explain the preparatory tuning of neurons in dorsal premotor and primary motor cortex of monkeys doing a reaching task? This is the core question of the paper, motivated by the authors' observation that preparatory and perimovement (i.e., within-movement) activity of a single neuron may differ considerably, even qualitatively (which conflicts with the view that preparatory activity is a subthreshold version of perimovement activity). This observation is underlined experimentally in the paper by showing that average preparatory activity and average perimovement activity of a single neuron are largely uncorrelated across experimental conditions.
To quantify how well a set of variables explains the preparatory activity of a neuron, the authors use a linear regression approach in which the values of these variables for a given experimental condition are used to predict the firing rate of the neuron in that condition. The authors compute the generalisation error of the learnt linear model with cross-validation and compare the performance of several sets of variables based on this error. The variables performing best are the principal component scores of the perimovement population activity of all recorded neurons. The difference to alternative sets of variables is significant, and in particular the wide range of considered variables makes the result convincing (e.g. target position, initial velocity, endpoints and maximum speed, but also principal component scores of EMG activity and kinematic variables, i.e. position, speed and acceleration of the hand). That perimovement activity is the best regressor for preparatory activity is quite odd, or as Burak aptly put it: "They are predicting the past."
The authors suggest a dynamical systems view as an explanation for their results and hypothesise that preparatory activity sets the initial state of the dynamical system constituted by the population of neurons. In this view, the preparatory activity of a single neuron is not sufficient to predict its evolution of activity (note that the correlation between preparatory and perimovement activity assesses only one particular way of predicting perimovement from preparatory activity: scaling), but the evolution of activity of all neurons can be used to determine the preparatory activity of a single neuron, under the assumption that the evolution of activity is governed by approximately linear dynamics. If the dynamics is linear, then any state in the future is a linear transformation of the initial state, and given enough data points from the future, the initial state can be determined by an appropriate linear inversion. The additional PCA, also a linear transformation, doesn't change that, but makes the regression easier and, importantly for the noisy data, also regularises.
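The linear-inversion argument can be illustrated numerically. This toy sketch (invented dimensions, random dynamics) is not the paper's analysis:

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear population dynamics x_{t+1} = A x_t: the "preparatory" state is
# the initial condition x0; the "perimovement" activity is the trajectory.
n, T = 20, 10
A = 0.2 * rng.standard_normal((n, n))
x0 = rng.standard_normal(n)

states = [x0]
for _ in range(T):
    states.append(A @ states[-1])
traj = np.concatenate(states[1:])     # flattened future activity

# Every future state is a linear transform of x0, so stacking those
# transforms gives traj = M @ x0, and x0 follows by linear inversion.
M = np.vstack([np.linalg.matrix_power(A, t) for t in range(1, T + 1)])
x0_rec, *_ = np.linalg.lstsq(M, traj, rcond=None)
print(np.allclose(x0_rec, x0))        # True
```

With nonlinear dynamics the stacked-transform construction of M breaks down, which is exactly why the approximate linearity assumption matters for the paper's regression.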
These findings and suggestions are all quite interesting and certainly fit into our preconceptions about neuronal activity, but are the presented results really surprising? Do people still believe that you can make sense of the activity of isolated neurons in cortex, or isn't it already accepted that population dynamics is necessary to characterise neuronal responses? For example, Pillow et al. [65] used coupled spiking models to successfully predict spike trains directly from stimuli in retinal ganglion cells. On the other hand, Churchland et al. indirectly claim in this paper that the population dynamics is (approximately) linear, which is certainly disputable, but what would nonlinear dynamics mean for their analysis?
09.11.2010: Stefan Kiebel
The paper by Varona et al. [67] uses findings on a marine mollusc to motivate a dynamic model of the mollusc's behavioural repertoire.
26.10.2010: Burak Yildiz
We will have a look at song generation in birds. The paper by Fee et al. [68] compares two possible models of song generation. One model supports autonomous activity among RA neurons, while the second supports a close relationship between the higher-level neurons (HVC) and the lower-level neurons (RA).
19.10.2010: Sebastian Bitzer
In this meeting I presented the machine learning view of learning (nonlinear) dynamics and showed how autoregressive state-space models can be derived from certain types of differential equations, which allows standard machine learning methods to be applied to the resulting learning problems.
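As a minimal illustration of that derivation, assuming a simple Euler discretization (the toy system and parameter names are mine, not from the meeting):

```python
import numpy as np

# dx/dt = a * x with a = -0.7; Euler discretization with step dt turns
# this into the AR(1) model x_{t+1} = (1 + dt * a) * x_t.
a_true, dt = -0.7, 0.01
x = [1.0]
for _ in range(500):
    x.append(x[-1] + dt * a_true * x[-1])
x = np.array(x)

# Least-squares fit of the AR coefficient c in x_{t+1} = c * x_t,
# then map back to the continuous-time parameter: a = (c - 1) / dt.
c = (x[:-1] @ x[1:]) / (x[:-1] @ x[:-1])
a_hat = (c - 1.0) / dt
print(round(a_hat, 3))  # -0.7
```

Once the dynamics is in this autoregressive form, any standard regression method can estimate the coefficients; the differential-equation view only reappears when mapping them back to continuous time.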
Then I went on to discuss Langford et al. (2009) [69], who proposed the sufficient posterior representation of a dynamic model, which is entirely based on function approximation. Unfortunately, their consistency proof, which would guarantee that the learnt state representation is a sufficient statistic of the posterior distribution, does not apply to their setup: the way they define the prediction operator $C^f$, it will never be invertible (a requirement of the consistency proof). Interestingly, the particular instantiation of the functions they used is very similar to the recurrent neural networks that we use.
We finished by discussing the advantages of representing dynamics with differential equations rather than autoregressive models: it is much easier to represent long-range dependencies, which frequently occur in our world, with differential equations (especially using hierarchies), but how can you learn them?
12.10.2010: Georg Martius
Georg presents his own work.
It is about self-organizing robot behavior based on the principle of homeokinesis; see robot.informatik.uni-leipzig.de. I will introduce a dynamical systems description of the sensorimotor loop and present the learning rules. I have many videos to show. Then we can choose from some extensions of the original framework, depending on your interest.
05.10.2010: Stefan Kiebel
Stefan discussed Sussillo and Abbott (2009) [70].
28.09.2010: Burak Yildiz
Tomorrow, I will give a brief review of different ways to obtain stable heteroclinic cycles. I will look into one of these models in more detail and have attached the related paper [71] to the calendar. I will talk about only the first 10 pages of that paper.
31.08.2010: Sebastian Bitzer
In this week's meeting I will present Legenstein et al. (2010) [72]. The paper is interesting for us, and for me in particular, for several reasons:
1. it addresses the issue of how to learn a compact representation of a stimulus from data
2. it does so by exploiting slowly varying features of the input in a hierarchy with increasing spatial scale (you'll notice that they have the same keywords as we do ;)
3. they test the found representation by seeing how suitable it is for reinforcement learning of two independent tasks
Point 2 is the one most interesting for us, while points 1 and 3 are exactly the content of my last paper, which will be presented at the International Conference on Intelligent Robots and Systems in October. I obviously used different methods, stimuli and tasks, but the idea is the same.
24.08.2010: David Wozny
I will discuss the Haken-Kelso-Bunz model for coordination dynamics.
Attached is the original 1985 paper, but I think the Scholarpedia page is an excellent review.
So this page will be the focus of discussion. I will also provide my thoughts on why this is important in our line of research, and a summary of the diverse topics studied with coupled oscillators.
www.scholarpedia.org/article/Haken-Kelso-Bunz_model
For those who want more math, attached is one of the more advanced and complicated papers, which is just one example of the possibilities in understanding coupled dynamical systems. From what I can tell, Peter Ashwin is one of the leaders in the field.
To clarify, you ONLY need to read the Scholarpedia page listed above. This will be the focus of discussion. The other papers are simply examples of early and current stages of the topic of coupled dynamical systems.