Dynamical Systems and Computations in the Brain (DySCo)

The DySCo meetings are held at the Chair of Neuroimaging, Department of Psychology, Technische Universität Dresden. Meetings usually take place on Wednesdays between 12:00 and 13:30 in the Falkenbrunnen building, Chemnitzer Str. 46B, but see below for the particular dates.

We discuss topics related to neuroimaging methods and functional models of computations in the brain. The meetings are open to everyone interested in cognitive neuroscience modelling.

If you are interested in presenting your work at the meeting, or have further questions about the meeting, please contact Dimitrije Marković.

Upcoming

Next Meeting

12.07.2017, 14:00-15:30, FAL 157: Vahid Rahmati

Title: TBA

Past Meetings

28.06.2017, 14:00-15:30, FAL 157: Florian Ott

Title: Dynamic self-regulation and multiple goal pursuit

21.06.2017, 14:00-15:30, FAL 157: Pouyan Rafieifard

Title: "Stimulus or Bias? Neural and Behavioral Responses to Incoherent Motion Signals​"

14.06.2017, 14:00-15:30, FAL 157: Sarah Schwoebel

Title: Active Inference, Belief Propagation and the Bethe Approximation

17.05.2017, 14:00-15:30, FAL 157: Alexander Strobel

Title: "Need for Cognition"

26.04.2017, 14:00-15:30, FAL 157: Dario Cuevas Rivera

Title: ""Fitting active inference to behavioral data."

19.04.2017, 14:00-15:30, FAL 157: Sebastian Bitzer

Title: "BeeMEG Experiment - Current analyses of sequential evidence integration in the brain and the resulting findings."

12.04.2017, 14:00-15:30, FAL 157: Dimitrije Markovic

Title: Learning in changing environments: Hierarchical and Switching Gaussian Filters

25.01.2017, 12:00-13:30, FAL 156: Florian Bolenz

Title: "Arbitration of habitual and goal-directed learning strategies"

11.01.2017, 12:00-13:30, FAL 156: Dimitrije Markovic

We discussed the organization and structure of the tutorial on Active Inference.

04.01.2017, 12:00-13:30, FAL 156: Sebastian Bitzer

Talked about how probabilistic modelling technology changes the required skill set for the modern psychologist.

07.12.2016, 12:00-13:30, FAL 156: Pouyan Rafieifard

Title: "Recent advances in Bayesian models of perceptual decision making".

23.11.2016, 12:00-13:30, FAL 156: Christian Beste's group

Title: EEG source-based connectome comparisons – small world and other metrics
Abstract: For several decades, EEG has been a well-established measurement tool for gaining insights into a wide variety of brain functions. In recent years, more advanced computational methods such as time-frequency analysis, beamforming and connectivity analysis have been developed to extend the scope of EEG analysis beyond FFTs and event-related potentials. In this talk, we want to discuss the methodological possibilities and restrictions of 3D connectome analysis based on EEG data.

09.11.2016, 12:00-13:30, FAL 156: Cassandra Visconti

talked about the recent evidence for ideal observer-like sensitivity to complex acoustic patterns [1].

12.10.2016, 12:00-13:30, FAL 156: Stefan Kiebel

Organisational meeting for the start of the winter semester.

11.07.2016, 14:00-15:00, FAL 157: Arne Doose

gave an introduction to "Learning and conditioning".

21.06.2016, 11:30-13:00, FAL 158: Sebastian Bitzer

spoke about the advantages and disadvantages of model expansion over model comparison.

07.06.2016, 11:30-13:00, FAL 158: Hame Park

presented recent work by Drugowitsch et al. on the speed-accuracy tradeoff in decisions based on information from two modalities (visual and vestibular) [2]. We figured that the experiment mostly showed that, in the setup used, the vestibular modality led to very different behaviour than vision.

11.05.2016, 14:00-15:00, FAL 157: Stefan Kiebel

presented a paper titled "Anterior cingulate cortex instigates adaptive switches in choice by integrating immediate and delayed components of value in ventromedial prefrontal cortex" [3].

27.04.2016, 14:00-15:00, FAL 157: Vahid Rahmati

talked about his recent paper: "Inferring Neuronal Dynamics from Calcium Imaging Data Using Biophysical Models and Bayesian Inference" [4].

19.04.2016, 11:30-13:00, FAL 158: Dario Cuevas

Active Inference in decision making

13.04.2016, 14:00-15:00, FAL 157: Sebastian Bitzer

Moodle and probabilistic programming

27.01.2016, 11:10 – 12:10 FAL 157

Pouyan R. Fard gave a talk on "Multi-voxel Pattern Analysis for Neuroimaging Data".

20.01.2016, 11:10 – 12:10 FAL 157: Dimitrije Markovic

Dimitrije Markovic presented a paper by Bernacchia et al. titled "A reservoir of time constants for memory traces in cortical neurons" [5].

06.01.2016, 11:10 – 12:10 FAL 157: Dario Cuevas Rivera

Reviewed several papers discussing the classification capacity of the human olfactory system [6, 7, 8].

16.12.2015, 11:10 – 12:10 FAL 157

Xmas preparations

09.12.2015, 12:11 – 12:40 FAL 157

Carolin Hübner gave a practice talk for her master's thesis defense.
Dario Cuevas Rivera gave a practice talk on modeling insect olfaction with stable heteroclinic cycles and Bayesian inference.

02.12.2015, 11:10 – 12:10 FAL 157

Carolin Hübner gave a practice talk for her master's thesis defense.

25.11.2015, 11:10 – 12:10 FAL 157: Hame Park

Talk title: "Modeling perceptual decision-making with MEG/EEG: a literature review"

11.11.2015, 11:00 – 12:00 FAL 157

Sebastian Bitzer talked about efficient numerical approximations of probability distributions for nonlinear filtering.

04.11.2015, 11:00 – 12:00 FAL 157

Dimitrije Markovic presented his recent work on modeling inference in changing environments.

21.10.2015, 11:00 – 12:00 FAL 157

Carolin Hübner presented an advanced analysis of the Re-Decisions experiment.

14.10.2015, 15:00 – 16:00 FAL 157

Pouyan R. Fard presented [9].

22.07.2015, 13:00 – 14:00 FAL 157

Carolin Hübner presented her results on the "Re-Decisions" experiment.

08.07.2015, 13:00 – 14:00 FAL 157

Pouyan R. Fard talked about "An Extended Bayesian Model Equivalent to Extended Drift-diffusion Model".

03.06.2015, 11:00 – 12:00 FAL 158:

Talk by Stefan Scherbaum

13.05.2015, 11:00 – 12:30 Stefan's office:

We talked about ZIH's computer servers.

06.05.2015, 11:00 – 12:30 in Stefan's office

Hame Park discussed a few ideas for a new neuroimaging study on perceptual decision making.

29.04.2015, 11:00 – 12:00 FAL 158: Holger Mohr

Talked about: "Analysis and modeling of instruction-based learning: Completed, ongoing and future work".

Past Meetings @ MPI CBS

25.11.2014, 11:15, A311: Valentin Schmutz

Talked about fitting the results of [10] with the Bayesian attractor model.

27.10.2014, 14:00, C206: Dimitrije Marković

presented his work on Bayesian modeling of a probabilistic WCST.

14.10.2014, 11:10, A311: Carolin Hübner

presented her Bachelor's thesis project on the concept of information across disciplinary boundaries.

13.10.2014, 12:00, A311: Sophie Seidenbecher

presented her work on the model of [21].

16.07.2014, 14:00, A311: Sabrina Trapp

told us about her results from an experiment that investigated choice behavior in a reinforcement learning task in which the perceptual predictability of the stimulus material was additionally manipulated.

21.05.2014 Sophie Seidenbecher

presented a paper about chimera states in dynamical networks [11].

30.04.2014 Hame Park & Jan-Matthis Lückmann

presented the intermediate results for the Bee Experiment.

09.04.2014 Andrej Warkentin

I presented my current progress on the extensions of Sebastian's Bayesian model.

19.03.2014 Sebastian Bitzer

presented Expectation Propagation for Approximate Bayesian Computation (EP-ABC) [12].

05.02.2014 Stefan Kiebel

presented [13].

22.01.2014 Sophie Seidenbecher

went through the details of Brunton et al.'s [21] methods.

04.12.2013 Hame Park

presented [14].

13.11.2013 Sebastian Bitzer

Based on [15], discussed Shadlen's beliefs about how neurons implement probabilistic reasoning.

30.10.2013 Dimitrije Markovic

presented [16].

02.10.2013 Stefan Kiebel

presented [17].

26.07.2013 Hame Park

Presented [18].

19.07.2013 Dimitrije Markovic

Presented a paper [19] that investigates the computational properties of the amygdala during Pavlovian conditioning.

05.07.2013 Sebastian Bitzer

I presented [20]. The authors propose an explanation of changes of mind in terms of switching between fixed points in attractor networks.

17.05.2013 Stefan Kiebel

Stefan presented a recent paper [21] which investigates where the noise in perceptual decision making comes from, using a clever task together with computational modelling.

26.04.2013 Hame Park

Hame presented an article by Amador et al. [22], titled "Elemental gesture dynamics are encoded by song premotor cortical neurons".

12.04.2013 Jelle Bruineberg

Jelle presented a model for online discrimination of movements [23] and discussed possible simulations which relate model behaviour to experimental results.

08.03.2013 Burak Yildiz

Burak talked about the following NIPS paper [24] which shows how spiking neurons can learn to represent continuous signals.

22.02.2013 Sebastian Bitzer

Institute Colloquium practice talk.

08.02.2013 Sebastian Bitzer

I discussed a paper [25] which argues that the timing of decisions is critically influenced by a perceived need to make a decision soon (urgency gating). The basis for the argument is a perceptual decision experiment in which participants received varying amounts of evidence within a trial.

18.01.2013 Felix Effenberger

Felix told us about transfer entropy [26] and its application to physiological recordings, as, e.g., in [27].

11.01.2013 Kai Dannies

Kai presented his maze simulator.

07.12.2012: Burak Yildiz

Burak talked about an interesting phenomenon called "suprathreshold stochastic resonance". This illustrates mathematically how noise can be beneficial for information transfer in a parallel array of neurons. It has possible applications in cochlear implants. For the talk, it is enough to look at the following Scholarpedia page [28]. If you get more interested, you can read the following review [29].
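
As a quick numerical illustration of the effect (my own sketch, assuming the textbook setup rather than anything shown in the talk): a common suprathreshold signal drives a parallel array of identical threshold units, each corrupted by independent noise, and the summed spike count tracks the signal best at a nonzero noise level.

```python
# Quick numerical illustration (a sketch, not from the talk or [28]): a common
# suprathreshold signal drives N identical threshold units with independent
# noise; the population spike count tracks the signal best at nonzero noise.
import numpy as np

rng = np.random.default_rng(3)
N, T = 63, 20000
signal = rng.standard_normal(T)                    # common input signal

for sigma in [0.0, 0.5, 1.0, 2.0, 4.0]:
    noise = sigma * rng.standard_normal((N, T))    # independent noise per unit
    count = (signal + noise > 0).sum(axis=0)       # population spike count
    r = np.corrcoef(signal, count)[0, 1]
    print(f"sigma = {sigma:3.1f}  corr(signal, count) = {r:.3f}")
```

With zero noise all units fire in lockstep and the count is a coarse binary readout; moderate noise de-correlates the units and makes the count a graded, more informative code, before large noise swamps the signal again.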

23.11.2012: Sebastian Bitzer

Sebastian presented a paper [30] which tries to differentiate models of perceptual decision making based on two measures of firing rate variability in monkey LIP neurons.

02.11.2012: Stefan Kiebel

Stefan talked about a study describing efficient auditory coding [31].

25.09.2012: Burak Yildiz

Burak talked about sparse coding in the auditory pathway following a recent publication [32].

04.09.2012: Arnold Ziesche

Arnold presented a neurofunctional model for the McGurk effect [33].

21.08.2012: Sebastian Bitzer

Sebastian presented Friston's latest work on how explorative behaviour may be understood within the free energy framework [34].

26.06.2012: Stefan Kiebel

Stefan reviewed a paper about optimal feedback control [35] and discussed how results presented in this paper may be implemented by active inference.

29.05.2012: Burak Yildiz

Burak initiated a discussion and brainstorming session: What insights about the brain did we gain from our theoretical work, and how could we test and confirm these insights with (fMRI) experiments? These insights include topics such as uncertainty, precision, the relation between priors and posteriors, prediction error and hierarchical message passing. Burak went through (mostly the second half of) the following review paper to stimulate discussion: [36]

15.05.2012: Arnold Ziesche

Arnold presented the results of his model for multisensory integration.

08.05.2012: Sebastian Bitzer

Sebastian gave an introduction to Gaussian Processes [37] with a focus on their use in DEM.

17.04.2012: Stefan Kiebel

Stefan presented a well-established model of multisensory integration [38].

10.04.2012: Stefan Kiebel

Stefan presented a lecture about computational models of bistable perception (45 min plus some discussion time).

03.04.2012: Burak Yildiz

Burak reported from CONAS 2012.

27.03.2012: Sam Mathias

Sam discussed with us potential approaches to model his newest auditory psychophysics experiments relating to frequency shift detectors.

21.02.2012: Burak Yildiz

Burak talked about the work he has been doing over the last couple of months, on learning and recognition of human speech.

07.02.2012: Arnold Ziesche

Arnold presented a generic model he is currently considering. There was nothing to read.

24.01.2012: Sebastian Bitzer

Sebastian presented Katori et al. (2011) [39], who suggest a spiking neural network which implements switching of stable states represented by some form of attractor dynamics. Apparently, the basis of it all is recordings of neurons in prefrontal cortex.

Also, please remember to list any interesting conferences.

10.01.2012: Stefan Kiebel

Stefan presented a very recent Science paper about balancing excitation/inhibition [40].

13.12.2011: Stollen meeting

29.11.2011: Burak Yildiz

Burak talked about a side project of his: a rather mathematical talk about the echo state property (ESP) of echo state networks (ESNs). It is helpful to review the definitions of and sufficient conditions for the ESP of standard and leaky-integrator ESNs, which can be found in [41].
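
The two standard algebraic conditions from [41] are easy to check numerically; below is a minimal sketch (my own, assuming a standard non-leaky tanh ESN): a spectral radius of at least 1 rules out the ESP for input sets containing zero, while a largest singular value below 1 is sufficient for it.

```python
# Minimal numerical check of the standard ESP conditions (a sketch, assuming a
# standard non-leaky tanh ESN as in [41]).
import numpy as np

rng = np.random.default_rng(0)
n = 200
W = rng.uniform(-1.0, 1.0, (n, n)) * (rng.random((n, n)) < 0.1)  # sparse reservoir

rho = max(abs(np.linalg.eigvals(W)))      # spectral radius of W
sigma_max = np.linalg.norm(W, 2)          # largest singular value of W

# rho(W) >= 1 rules out the ESP (for input sets containing zero);
# sigma_max(W) < 1 is sufficient for the ESP.
print(f"rho = {rho:.2f}, sigma_max = {sigma_max:.2f}")

# Common practice: rescale W to a spectral radius just below one, which is
# usually (though not provably always) enough in applications.
W *= 0.95 / rho
print(f"rescaled rho = {max(abs(np.linalg.eigvals(W))):.2f}")
```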

15.11.2011: Arnold Ziesche

Arnold presented a paper which uses an echo state network (ESN) with decoupled reservoirs to learn dynamics on different time scales simultaneously [42]; this might be related to his own work.

01.11.2011: Sebastian Bitzer

Peter Dayan pointed me to Tishby and Polani [43] when we spoke about the importance of actions in perception and the free energy principle. They unify ideas from information theory (determining the information, or surprise, of an event) with reinforcement learning (finding actions which maximise reward) and show that the minimisation of surprise is sufficient for optimising actions in the special case that the environment is "perfectly adapted", i.e., the transition probabilities of the environment directly reflect the associated rewards. Thus, they formalise a common reservation against the free energy principle: that surprise alone might not be sufficient to explain behaviour. Simultaneously, they embrace the potential existence of perfectly adapted environments through the coevolution of agents (their sensors) and the environment.

I took their formalism as a basis for discussing the representation of values in the form of priors in the free energy principle (see also the associated discussion on hunch.net).

Unfortunately, the book chapter is a bit long and quite technical. On the other hand, it is very well written and provides an excellent overview of the basic ideas in information theory and reinforcement learning.

25.10.2011: Stefan Kiebel

Stefan presented a review article [44] which stresses that a) the spiking activity of a population of neurons depends on the hidden physiological state of the neurons (e.g., short-term synaptic plasticity), b) the response of sensory neurons is often projected into a higher-dimensional space (of processing neurons), and c) linear read-out units may be sufficient to discriminate high-dimensional trajectories of neuron responses. There are hardly any equations in it, but the topic is relevant for us.

11.10.2011: coffee meeting

23.09.2011: Burak Yildiz

Burak discussed DCM for fMRI. He went through the first ten pages of [45] and also the visual example in Section 5.1.

30.08.2011: Arnold Ziesche

Arnold presented a paper which critically discusses a computational model for audiovisual speech perception [46].

23.08.2011: Sebastian Bitzer

Sebastian continued his session on speedDEM. He demonstrated the performance of speedDEM for two test cases and various network sizes.

16.08.2011: Sebastian Bitzer

After a reasonable amount of DEM theory, this was a practical session regarding the implementation of DEM in SPM and the resulting hooks for increasing the speed of DEM in speedDEM. In particular, I discussed the functional logic of the very general SPM implementation and showed profiling results for it, which indicate that numerical differentiation is the major contributor to running time. Subsequently we discussed possible changes and extensions which can reduce this, i.e., we discussed the future of speedDEM.

09.08.2011: Stefan Kiebel

Stefan talked about a paper [47] that uses reservoir computing, slow feature analysis and independent component analysis to let an agent self-localize in a maze.

02.08.2011: Burak Yildiz (different time: 14:00)

He continued with the mathematical investigation of the DEM paper [50]. After a short review of the previous talk, he continued explaining the Laplace approximation formulas on pages 854 and 855 and the integration scheme based on Ozaki's work.

14.06.2011: Eduardo Aponte

He presented his recent work on unsupervised learning of spatiotemporal sequences in the hippocampus.

07.06.2011: Arnold Ziesche

Arnold talked about a neural oscillator model for vowel recognition [48].

31.05.2011: Sebastian Bitzer

After a short recap of the things discussed on 05.04.2011, Sebastian continued to describe covariance hyperparameter estimation for the dynamic models presented in [53]. This also included a short general introduction to expectation maximisation (EM) and some material from appendix A.1 in [49].

24.05.2011: Stefan Kiebel

Stefan gave a talk about his current pet project, in which the free energy principle is used to derive a functional model of intracellular single-neuron dynamics.

17.05.2011: Burak Yildiz

Burak went through some of the details in the DEM paper [50] to gain more insight into the physics background of dynamic variational Bayes, especially ensemble dynamics and generalized coordinates. He started with the proof of Lemma 1 on page 850 and covered everything up to page 854, where the Laplace approximation starts.

26.04.2011: Eduardo Aponte

There is no obvious reason why one should use dynamical systems as computational tools. Moreover, recurrent neural networks often show undesirable properties; in particular, they may fall into chaotic regimes. Legenstein and Maass [51] address these questions and provide a non-technical explanation of the relation between chaos, dynamical systems and computation. The paper includes a few equations and inequalities.

19.04.2011: Agata Checinska

She gave an introduction to quantum information and tried to highlight potential relations to neuroscience.

12.04.2011: Arnold Ziesche

I'm interested in understanding the motivation for using generalized coordinates. [52] discusses a good reason, along with a simple model and some nice perceptual illusions.

05.04.2011: Sebastian Bitzer

I presented the algorithm which underlies various forms of dynamic causal modeling and which we use to estimate RNN parameters. At its core is an iterative computation of the posterior of the parameters of a dynamical model based on a first-order Taylor series approximation of a meta-function mapping parameter values to observations, i.e., the dynamical system is hidden in this function so that the probabilistic model does not have to care about it. This is possible because the dynamics is assumed to be deterministic and noise only contributes at the level of observations. It can be shown that the resulting update equations for the posterior mode are equivalent to a Gauss-Newton optimisation of the log-joint probability of observations and parameters (this is MAP estimation of the parameters). Consequently, the rate of convergence of the posterior may be up to quadratic, but the algorithm is not guaranteed to increase the likelihood at every step or, indeed, to converge at all. It should work well close to an optimum (when observations are well fitted), or if the dynamics is close to linear with respect to the parameters. Because the dynamical system is integrated numerically to get observation predictions, and the Jacobian of the observations with respect to the parameters is also obtained numerically, this algorithm may be very slow.

This algorithm is described in [53], embedded into an application to fMRI. I did not present the specifics of this application and, in particular, ignored the influence of the inputs u defined there. The derivation of the parameter posterior described above is embedded in an EM algorithm for hyperparameters on the covariance of observations. I will discuss this in a future session.
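
A minimal sketch of the scheme as described above (my notation and simplifications, not the SPM code): `g` is assumed to integrate the dynamical system numerically and return stacked predicted observations, the Jacobian is obtained by finite differences, and each iteration is a Gauss-Newton step on the log-joint.

```python
# Minimal sketch of the parameter update (illustrative notation, not the SPM
# code): y = g(theta) + noise with a Gaussian prior on theta; g is assumed to
# integrate the dynamical system numerically and return stacked observations.
import numpy as np

def gauss_newton_map(g, y, theta0, prior_mean, prior_cov, obs_cov,
                     n_iter=50, eps=1e-6):
    """MAP estimation of theta by Gauss-Newton steps on the log-joint."""
    theta = theta0.copy()
    P_inv = np.linalg.inv(prior_cov)
    R_inv = np.linalg.inv(obs_cov)
    for _ in range(n_iter):
        pred = g(theta)
        # Numerical Jacobian of predictions w.r.t. parameters; this, plus the
        # numerical integration inside g, dominates the running time.
        J = np.column_stack([(g(theta + eps * e) - pred) / eps
                             for e in np.eye(len(theta))])
        # The first-order Taylor approximation of g around theta turns the
        # MAP objective into a quadratic; solve for the step.
        H = J.T @ R_inv @ J + P_inv                         # approx. Hessian
        grad = J.T @ R_inv @ (y - pred) - P_inv @ (theta - prior_mean)
        theta = theta + np.linalg.solve(H, grad)            # Gauss-Newton step
    return theta, np.linalg.inv(H)  # posterior mode and Laplace covariance
```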

22.+29.03.2011: Stefan Kiebel

I presented a variational Bayesian approach for localizing dipoles for MEG and EEG data [54]. This was meant as a teaching session for variational Bayes. We went through the motivation, some of the key steps and assumptions, and the detailed math for the derivation of some of the update equations.

15.02.2011: Burak Yildiz

Fiete et al. [55] investigate why birds have sparse coding in the high vocal center (HVC) for song production. It was experimentally observed that each HVC neuron fires only once during a song motif to control the RA neurons. In this paper, the authors give a mathematical foundation for the benefits of such a mechanism for learning new songs. They conclude that learning slows down when HVC neurons are used more than once during the song motif.

01.02.2011: Sebastian Bitzer

Sebastian presented Lazar et al. (2009) [56] as an example of the reemergence of recurrent neural network learning. While the RNNs in reservoir computing are usually fixed, the authors show that it can be beneficial to update the RNN connections as well as the output connections. The learning algorithm is local, biologically motivated and based on spike-timing-dependent plasticity, intrinsic plasticity and synaptic normalisation. The units in the network are only binary.

The results in the paper show that the learning algorithm can linearise the activity of the recurrent excitatory units with respect to the (next predicted) input symbol. Although STDP is essential to capture the sequential nature of the input patterns, the homeostatic plasticity mechanisms are necessary to maintain distributed network activity and achieve good results. We presume that the architecture with an additional set of inhibitory neurons, as used here, also contributes to stable activity in the network, but this is not discussed in the paper. As a result of learning, the network exhibits sub-critical activity, i.e., when perturbed, it tends to restore its last activity pattern. Even though random networks (reservoir computing) have been shown to perform best in critical regimes, the sub-critical SORN still outperforms critical random networks. However, both network types share the necessity for sparse connections between excitatory neurons.
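
To give a feel for the three plasticity rules, here is a rough sketch (my simplification of [56]; the parameter values and the reduced inhibitory loop are placeholder assumptions, not the paper's):

```python
# Rough sketch of the three SORN plasticity rules (a simplification of [56];
# parameter values and the reduced inhibitory loop are placeholder assumptions).
import numpy as np

rng = np.random.default_rng(1)
NE, NI = 200, 40                                   # excitatory / inhibitory units
W_EE = rng.random((NE, NE)) * (rng.random((NE, NE)) < 0.05)  # sparse E->E weights
W_EI = rng.random((NE, NI)) * 0.1                  # I->E weights
W_IE = rng.random((NI, NE)) * 0.1                  # E->I weights
T_E = rng.random(NE)                               # excitatory thresholds
x = (rng.random(NE) < 0.1).astype(float)           # binary excitatory states
y = np.zeros(NI)                                   # binary inhibitory states
eta_stdp, eta_ip, h_ip = 0.004, 0.01, 0.1          # learning rates, target rate

for t in range(1000):
    # Binary threshold units driven by excitation minus inhibition
    x_new = ((W_EE @ x - W_EI @ y - T_E) > 0).astype(float)
    y = ((W_IE @ x_new - 0.5) > 0).astype(float)   # crude stand-in for the I loop
    # STDP: strengthen pre(t-1) -> post(t), weaken post(t-1) -> pre(t)
    W_EE += eta_stdp * (np.outer(x_new, x) - np.outer(x, x_new))
    np.clip(W_EE, 0.0, None, out=W_EE)
    # Synaptic normalisation: incoming E->E weights of each unit sum to one
    W_EE /= W_EE.sum(axis=1, keepdims=True) + 1e-12
    # Intrinsic plasticity: nudge each threshold toward a target firing rate
    T_E += eta_ip * (x_new - h_ip)
    x = x_new
```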

25.01.2011: Stefan Kiebel

Stefan talked about a 1995 paper by Geoff Hinton and others about some learning algorithm for neural networks [57]. This is actually one of the first papers mentioning the free energy in the context of learning algorithms.

18.01.2011: Burak Yildiz

Burak talked about a hierarchical neural model inspired by the activity of neural circuits in songbird brains. There was no paper to read, since the talk was about his own recent work.

23.11.2010: Arnold Ziesche

Arnold told us about his thesis work, "A computational model for visual stability".

16.11.2010: Sebastian Bitzer

I presented the brand-new Neuron Viewpoint by Churchland et al. [59]. They investigated the functional role of preparatory activity (here: neural firing) in (pre-)motor cortex and suggest that

preparatory tuning serves not to represent specific factors but to initialize a dynamical system whose future evolution will produce movement.

Summary

What are the variables that best explain the preparatory tuning of neurons in dorsal premotor and primary motor cortex of monkeys doing a reaching task? This is the core question of the paper, which is motivated by the authors' observation that the preparatory and perimovement (i.e., within-movement) activity of a single neuron may differ considerably, even qualitatively (something that conflicts with the view that preparatory activity is a subthreshold version of perimovement activity). This observation is experimentally underlined in the paper by showing that the average preparatory activity and the average perimovement activity of a single neuron are largely uncorrelated across experimental conditions.

To quantify how well a set of variables explains the preparatory activity of a neuron, the authors use a linear regression approach in which the values of these variables for a given experimental condition are used to predict the firing rate of the neuron in that condition. The authors compute the generalisation error of the learnt linear model with cross-validation and compare the performance of several sets of variables based on this error. The variables performing best are the principal component scores of the perimovement population activity of all recorded neurons. The difference to alternative sets of variables is significant, and in particular the wide range of considered variables makes the result convincing (e.g., target position, initial velocity, endpoints and maximum speed, but also principal component scores of EMG activity and kinematic variables, i.e., position, speed and acceleration of the hand). That perimovement activity is the best regressor for preparatory activity is quite odd, or, as Burak aptly put it: "They are predicting the past."

The authors suggest a dynamical systems view as an explanation for their results and hypothesise that preparatory activity sets the initial state of the dynamical system constituted by the population of neurons. In this view, the preparatory activity of a single neuron is not sufficient to predict its evolution of activity (note that the correlation between preparatory and perimovement activity assesses only one particular way of predicting perimovement from preparatory activity: scaling), but the evolution of activity of all neurons can be used to determine the preparatory activity of a single neuron, under the assumption that the evolution of activity is governed by approximately linear dynamics. If the dynamics is linear, then any state in the future is a linear transformation of the initial state, and given enough data points from the future, the initial state can be determined by an appropriate linear inversion. The additional PCA, also a linear transformation, doesn't change that, but it makes the regression easier and, importantly for the noisy data, also regularises.
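
A toy simulation makes the linear-inversion argument concrete (my illustration, not the paper's analysis): with linear dynamics, the stacked later activity is a linear function of the initial state, so the preparatory value of a single neuron can be recovered by ordinary linear regression.

```python
# Toy version of the linear-inversion argument (an illustration, not the
# paper's analysis): with linear dynamics x_{t+1} = A x_t, later population
# activity is a linear function of the initial (preparatory) state, so the
# preparatory value of one neuron is recoverable by linear regression.
import numpy as np

rng = np.random.default_rng(2)
n, n_conditions, T = 20, 500, 10
A = 0.3 * rng.standard_normal((n, n))          # linear population dynamics
X0 = rng.standard_normal((n, n_conditions))    # preparatory state per condition

traj = []                                      # noisy "perimovement" activity
x = X0
for t in range(T):
    x = A @ x
    traj.append(x + 0.01 * rng.standard_normal(x.shape))
Y = np.vstack(traj)                            # shape (n*T, n_conditions)

# Regress neuron 0's preparatory activity from the later population activity
coef, *_ = np.linalg.lstsq(Y.T, X0[0], rcond=None)
print(np.corrcoef(Y.T @ coef, X0[0])[0, 1])    # ~1: initial state recovered
```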

These findings and suggestions are all quite interesting and certainly fit into our preconceptions about neuronal activity, but are the presented results really surprising? Do people still believe that you can make sense of the activity of isolated neurons in cortex, or isn't it already accepted that population dynamics is necessary to characterise neuronal responses? For example, Pillow et al. [58] used coupled spiking models to successfully predict spike trains directly from stimuli in retinal ganglion cells. On the other hand, Churchland et al. indirectly claim in this paper that the population dynamics is (approximately) linear, which is certainly disputable, but what would nonlinear dynamics mean for their analysis?

09.11.2010: Stefan Kiebel

The paper by Varona et al. [60] used findings on a marine mollusc to motivate a dynamic model for the mollusc's behavioural repertoire.

26.10.2010: Burak Yildiz

We had a look at song generation in birds. The paper by Fee et al. [61] compares two possible models of song generation. One of these models supports autonomous activity among RA neurons, while the second supports a close relationship between the higher-level neurons (HVC) and the lower-level neurons (RA).

19.10.2010: Sebastian Bitzer

In this meeting I presented the machine learning view of learning (nonlinear) dynamics and showed how autoregressive state-space models can be derived from certain types of differential equations, which allows standard machine learning methods to be applied to the resulting learning problems (see the small example below).
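
A small example of that derivation (my illustration): a forward-Euler discretisation turns a linear differential equation into a first-order autoregressive model whose transition matrix can then be learnt by ordinary least squares.

```python
# Small example of the derivation (an illustration): forward-Euler
# discretisation turns dx/dt = A x into the AR(1) model x_{t+1} = (I + dt A) x_t,
# whose transition matrix can then be learnt by ordinary least squares.
import numpy as np

dt = 0.01
A = np.array([[0.0, 1.0],
              [-4.0, -0.4]])                   # damped harmonic oscillator

x = np.array([1.0, 0.0])
X = [x]
for _ in range(2000):                          # simulate the Euler steps
    x = x + dt * (A @ x)
    X.append(x)
X = np.array(X)

# Learning the dynamics is now plain least squares on (x_t, x_{t+1}) pairs
B, *_ = np.linalg.lstsq(X[:-1], X[1:], rcond=None)
print(np.allclose(B.T, np.eye(2) + dt * A, atol=1e-8))   # True: recovers I + dt*A
```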

Then I went on to discuss Langford et al. (2009) [62], who proposed the sufficient posterior representation of a dynamic model, which is entirely based on function approximation. Unfortunately, their consistency proof, which would guarantee that the learnt state representation is a sufficient statistic for the posterior distribution, does not apply to their setup, because the prediction operator $C^f$, as they define it, will never be invertible (a requirement of the consistency proof). Interestingly, the particular instantiation of the functions they used is very similar to the recurrent neural networks that we use.

We finished by discussing the advantages of representing dynamics with differential equations rather than autoregressive models: it is much easier to represent long-range dependencies, which frequently occur in our world, with differential equations (especially using hierarchies), but how can you learn them?

12.10.2010: Georg Martius

Presented his own work on self-organizing robot behavior from the principle of homeokinesis; see robot.informatik.uni-leipzig.de. He introduced a dynamical systems description of the sensorimotor loop, presented the learning rules, and showed many videos, with extensions of the original framework chosen depending on the audience's interest.

05.10.2010: Stefan Kiebel

Discussed Sussillo and Abbott (2009) [63].

28.09.2010: Burak Yildiz

Gave a brief review of different ways to obtain stable heteroclinic cycles, looking at one of these models in more detail, following the first 10 pages of the related paper [64].

31.08.2010: Sebastian Bitzer

In this week's meeting I presented Legenstein et al. (2010) [65]. The paper is interesting for us, and me in particular, for several reasons:

  1. it addresses the issue of how you can learn a compact representation of a stimulus from data
  2. it does so by exploiting slowly varying features of the input in a hierarchy with increasing spatial scale (you notice that they have the same keywords as we do ;)
  3. they test the found representation by seeing how suitable it is for reinforcement learning of 2 independent tasks

Point 2 is the one most interesting for us, while points 1 and 3 are exactly the content of my last paper, which will be presented at the International Conference on Intelligent Robots and Systems in October. I obviously used different methods, stimuli and tasks, but the idea is the same.

24.08.2010: David Wozny

I discussed the Haken-Kelso-Bunz model for coordination dynamics. The original 1985 paper was attached, but the Scholarpedia page below is an excellent review and was the focus of the discussion. I also gave my thoughts on why this is important for our line of research, and a summary of the diverse topics studied with coupled oscillators.

www.scholarpedia.org/article/Haken-Kelso-Bunz_model

For those who want more math, one of the more advanced and complicated papers was also attached, as just one example of the possibilities in understanding coupled dynamical systems; from what I can tell, Peter Ashwin is one of the leaders in the field. To be clear, you ONLY need to read the Scholarpedia page listed above; the other papers are simply examples of early and current stages of the topic.

Bibliography
1. Barascud, Nicolas, et al. "Brain responses in humans reveal ideal observer-like sensitivity to complex acoustic patterns." Proceedings of the National Academy of Sciences 113.5 (2016): E616-E625. doi
2. Drugowitsch, J.; DeAngelis, G. C.; Angelaki, D. E. & Pouget, A. Tuning the speed-accuracy trade-off to maximize reward rate in multisensory decision-making. Elife, 2015, 4 doi
3. Economides, M.; Guitart-Masip, M.; Kurth-Nelson, Z. & Dolan, R. J. Anterior cingulate cortex instigates adaptive switches in choice by integrating immediate and delayed components of value in ventromedial prefrontal cortex. J Neurosci, 2014, 34, 3340-3349 doi
4. Rahmati V, Kirmse K, Markovic D, Holthoff K, Kiebel SJ (2016). Inferring Neuronal Dynamics from Calcium Imaging Data Using Biophysical Models and Bayesian Inference. PLoS Comput Biol, 12(2): e1004736. doi
5. Bernacchia, A., Seo, H., Lee, D., & Wang, X. J. (2011). A reservoir of time constants for memory traces in cortical neurons. Nature neuroscience, 14(3), 366-372. doi
6. Bushdid, C., Magnasco, M. O., Vosshall, L. B., & Keller, A. (2014). Humans can discriminate more than 1 trillion olfactory stimuli. Science, 343(6177), 1370-1372. doi
7. Gerkin, R. C., & Castro, J. B. (2015). The number of olfactory stimuli that humans can discriminate is still unknown. Elife, 4, e08127. doi
8. Meister, M. (2015). "On the dimensionality of odor space." Elife, 4, e07865. doi
9. Summerfield, Christopher, and Konstantinos Tsetsos. "Do humans make good decisions?" Trends in Cognitive Sciences 19.1 (2015): 27-34. doi
10. Rüter, J.; Marcille, N.; Sprekeler, H.; Gerstner, W. & Herzog, M. H. Paradoxical evidence integration in rapid decision processes. PLoS Comput Biol, 2012, 8, e1002382. doi
11. Omelchenko I., Maistrenko Y., Hövel P. & Schöll E. Loss of Coherence in Dynamical Networks: Spatial Chaos and Chimera States. PRL, 2011, url
12. Barthelmé, S. & Chopin, N. Expectation-Propagation for Likelihood-Free Inference. Journal of the American Statistical Association, 2014, doi, preprint
13. O'Reilly, J. X.; Jbabdi, S.; Rushworth, M. F. S. & Behrens, T. E. J. Brain systems for probabilistic and dynamic prediction: computational specificity and integration. PLoS Biol, 2013, 11, e1001662 doi
14. Ratcliff, R.; Philiastides, MG.; Sajda, P. Quality of evidence for perceptual decision making is indexed by trial-to-trial variability of the EEG. PNAS, 2009, 106(16), 6539-6544, doi
15. Yang, T. & Shadlen, M. N. Probabilistic reasoning by neurons. Nature, 2007, 447, 1075-1080, doi
16. Iglesias, S.; Mathys, C.; Brodersen, K. H.; Kasper, L.; Piccirelli, M.; den Ouden, H. E. M. & Stephan, K. E. Hierarchical Prediction Errors in Midbrain and Basal Forebrain during Sensory Learning. Neuron, 2013, 80, 519-530, doi
17. Summerfield, C.; Behrens, T. E. & Koechlin, E. Perceptual classification in a rapidly changing environment. Neuron, 2011, 71, 725-736, doi
18. Laje, R. & Buonomano, D. V. Robust timing and motor patterns by taming chaos in recurrent neural networks. Nat Neurosci, 2013, 16, 925-933, doi
19. Prévost, C. et al. Evidence for Model-based Computations in the Human Amygdala during Pavlovian Conditioning. PLoS Comput Biol, 2013, 9(2), e1002918, doi
20. Albantakis, L. & Deco, G. Changes of mind in an attractor network of decision-making. PLoS Comput Biol, 2011, 7, e1002086, doi
21. Brunton, B. W.; Botvinick, M. M. & Brody, C. D. Rats and humans can optimally accumulate evidence for decision-making. Science, 2013, 340, 95-98, doi
22. Amador, A.; Perl, Y. S.; Mindlin, G. B. & Margoliash, D. Elemental gesture dynamics are encoded by song premotor cortical neurons. 2013. web
23. Bitzer, S.; Yildiz, I. B. & Kiebel, S. J. Online Discrimination of Nonlinear Dynamics with Switching Differential Equations. arXiv:1211.0947, 2012
24. Bourdoukan, R.; Barrett, D.; Machens, C. & Deneve, S. Learning optimal spike-based representations. NIPS, 2012, pdf
25. Cisek, P.; Puskas, G. A. & El-Murr, S. Decisions in changing conditions: the urgency-gating model. J Neurosci, 2009, 29, 11560-11571, doi
26. Vicente, R.; Wibral, M.; Lindner, M. & Pipa, G. Transfer entropy—a model-free measure of effective connectivity for the neurosciences. J Comput Neurosci, 2011, 30, 45-67, doi
27. Grützner, C.; Uhlhaas, P. J.; Genc, E.; Kohler, A.; Singer, W. & Wibral, M. Neuroelectromagnetic correlates of perceptual closure processes. J Neurosci, 2010, 30, 8342-8352 doi
28. McDonnell, M.; Stocks, N. Suprathreshold stochastic resonance Scholarpedia, 2009, 4(6):6508. web
29. McDonnell, M.; Abbott, D. What Is Stochastic Resonance? Definitions, Misconceptions, Debates, and Its Relevance to Biology. PLoS Comput Biol, 2009, 5(5): e1000348. doi
30. Churchland, A. K.; Kiani, R.; Chaudhuri, R.; Wang, X.-J.; Pouget, A. & Shadlen, M. N. Variance as a signature of neural computations during decision making. Neuron, 2011, 69, 818-831. doi
31. Smith, E. C. & Lewicki, M. S. Efficient auditory coding. Nature, 2006, 439, 978-982. doi
32. Carlson, N. L.; Ming, V. L. & DeWeese, M. R. Sparse Codes for Speech Predict Spectrotemporal Receptive Fields in the Inferior Colliculus. PLoS Comput Biol, 2012, 8(7). doi
33. Kröger, B. J. & Kannampuzha, J. A Neurofunctional Model of Speech Production Including Aspects of Auditory and Audio-Visual Speech Perception. AVSP 2008 pdf
34. Friston, K.; Adams, R. A.; Perrinet, L. and Breakspear, M. Perceptions as hypotheses: saccades as experiments. Front Psychol, 2012, 3, 151. doi
35. Todorov E and Jordan MI (2002) Nat Neurosci, 7: 907-915. doi
36. Fiser, J.; Berkes, P.; Orbán, G. & Lengyel, M. Statistically optimal perception and learning: from behavior to neural representations. Trends Cogn Sci, 2010, 14, 119-130. web
37. Rasmussen, C. E. & Williams, C. K. I. Gaussian Processes for Machine Learning MIT Press, 2006. web
38. Ernst, M. O. & Banks, M. S. Humans integrate visual and haptic information in a statistically optimal fashion. Nature, 2002, 415, 429-433. doi
39. Katori, Y.; Sakamoto, K.; Saito, N.; Tanji, J.; Mushiake, H. & Aihara, K. Representational switching by dynamical reorganization of attractor structure in a network model of the prefrontal cortex. PLoS Comput Biol, 2011, 7, e1002266. doi
40. T. P. Vogels, H. Sprekeler, F. Zenke, C. Clopath, W. Gerstner Inhibitory Plasticity Balances Excitation and Inhibition in Sensory Pathways and Memory Networks. Science, 2011, 334, 1569-1573. doi
41. Jaeger H. The "echo state" approach to analysing and training recurrent neural networks. GMD Report 148, GMD - German National Research Institute for Computer Science pdf
42. Xue, Y.; Yang, L.; Haykin, S. Decoupled echo state networks with lateral inhibition. Neural Networks, 2007, 20, 365-376. doi
43. Tishby, N. & Polani, D. Information Theory of Decisions and Actions in Cutsuridis, V.; Hussain, A. & Taylor, J. G. (ed.), Perception-Action Cycle, Springer New York, 2011, 601-636. doi
44. Buonomano, D. V. & Maass, W. State-dependent computations: spatiotemporal processing in cortical networks. Nat Rev Neurosci, 2009, 10, 113-125. doi
45. Friston, K. J.; Harrison, L. & Penny, W. Dynamic causal modelling. Neuroimage, 2003, 19, 1273-1302. doi
46. Schwartz, J. Why the FLMP should not be applied to McGurk data … or how to better compare models in the Bayesian framework. AVSP 2003. pdf
47. Antonelo, E. A. & Schrauwen, B. Unsupervised Learning in Reservoir Computing: Modeling Hippocampal Place Cells for Small Mobile Robots. Artificial Neural Networks - ICANN 2009, Part I, Lecture Notes in Computer Science, 2009, 5768, 747-756. doi
48. Liu, F.; Yamaguchi, Y. & Shimizu, H. Flexible vowel recognition by the generation of dynamic coherence in oscillator neural networks: speaker-independent vowel recognition. Biological Cybernetics, 1994, 71(2), 105-114. doi
49. Friston, K. J.; Penny, W.; Phillips, C.; Kiebel, S.; Hinton, G. & Ashburner, J. Classical and Bayesian inference in neuroimaging: theory. Neuroimage, 2002, 16, 465-483, doi
50. Friston, K. J.; Trujillo-Barreto, N. & Daunizeau, J. DEM: a variational treatment of dynamic systems. Neuroimage, 2008, 41(3), 849-885. doi
51. Legenstein, R. & Maass, W. What makes a dynamical system computationally powerful? New Directions in Statistical Signal Processing: From Systems to Brain, MIT Press, 2005. pdf
52. Grush, R. Internal models and the construction of time: generalizing from state estimation to trajectory estimation to address temporal features of perception, including temporal illusions. J Neural Eng, 2005, 2, 209-218, doi
53. Friston, K. J. Bayesian estimation of dynamical systems: an application to fMRI. Neuroimage, 2002, 16, 513-530, doi
54. Kiebel, S. J.; Daunizeau, J.; Phillips, C. & Friston, K. J. Variational Bayesian inversion of the equivalent current dipole model in EEG/MEG. NeuroImage, 2008, 39(2), 728-741. doi
55. Fiete, I.; Hahnloser, R.; Fee, M. & Seung, S. Temporal Sparseness of the Premotor Drive Is Important for Rapid Learning in a Neural Network Model of Birdsong. J Neurophysiol, 2004, 92(4). pdf
56. Lazar, A.; Pipa, G. & Triesch, J. SORN: a self-organizing recurrent neural network. Front Comput Neurosci, 2009, 3, 23 doi
57. Hinton, G. E.; Dayan, P.; Frey, B. J. & Neal, R. M. The "wake-sleep" algorithm for unsupervised neural networks. Science, 1995, 268, 1158-1161. pdf
58. Pillow, J. W.; Shlens, J.; Paninski, L.; Sher, A.; Litke, A. M.; Chichilnisky, E. J. & Simoncelli, E. P. Spatio-temporal correlations and visual signalling in a complete neuronal population. Nature, 2008, 454, 995-999. doi
59. Churchland, M. M.; Cunningham, J. P.; Kaufman, M. T.; Ryu, S. I. & Shenoy, K. V. Cortical Preparatory Activity: Representation of Movement or First Cog in a Dynamical Machine? Neuron, 2010, 68, 387 - 400. doi
60. Varona, P.; Rabinovich, M. I.; Selverston, A. I. & Arshavsky, Y. I. Winnerless competition between sensory neurons generates chaos: A possible mechanism for molluscan hunting behavior. Chaos, 2002, 12, 672-677.
61. Fee, M. S.; Kozhevnikov, A. A. & Hahnloser, R. H. Neural Mechanisms of Vocal Sequence Generation in the Songbird. Annals of the New York Academy of Sciences, 2004, 1016, 153-170.
62. Langford, J.; Salakhutdinov, R. & Zhang, T.; Learning Nonlinear Dynamic Models. Proceedings of the 26th International Conference on Machine Learning (ICML), 2009, 593-600, pdf
63. Sussillo, D. & Abbott, L. F. Generating coherent patterns of activity from chaotic neural networks. Neuron, 2009, 63, 544-557.
64. Gros, C. Neural networks with transient state dynamics. New Journal of Physics, 2007, 9, 109.
65. Legenstein, R.; Wilbert, N. & Wiskott, L. Reinforcement Learning on Slow Features of High-Dimensional Input Streams. PLoS Comput Biol, 2010, 6, e1000894.