The DySCo meetings are held at the Chair of Neuroimaging, Department of Psychology at the Technische Universität Dresden. Meeting times are usually on Wednesdays between 12:00 and 13:30 in the Falkenbrunnen building, Chemnitzer Str. 46B, but see below for particular dates.
We discuss topics related to neuroimaging methods and functional models of computations in the brain. The meetings are open to everyone interested in cognitive neuroscience modelling.
If you are interested in presenting your work at the meeting, or have more questions about the meeting, please contact Dimitrije Marković.
12.07.2017, 14:00-15:30, FAL 157: TBA
28.06.2017, 14:00-15:30, FAL 157: Florian Ott
21.06.2017, 14:00-15:30, FAL 157: Vahid Rahmati
14.06.2017, 14:00-15:30, FAL 157: Sarah Schwoebel
24.05.2017, 14:00-15:30, FAL 157: Cassandra Visconti
17.05.2017, 14:00-15:30, FAL 157: Alexander Strobel
26.04.2017, 14:00-15:30, FAL 157: TBA
19.04.2017, 14:00-15:30, FAL 157: Sebastian Bitzer
12.04.2017, 14:00-15:30, FAL 157: Dimitrije Markovic
25.01.2017, 12:00-13:30, FAL 156: Florian Bolenz
Title: "Arbitration of habitual and goal-directed learning strategies"
11.01.2017, 12:00-13:30, FAL 156: Dimitrije Markovic
We discussed the organization and structure of the tutorial on Active Inference.
04.01.2017, 12:00-13:30, FAL 156: Sebastian Bitzer
Talked about how probabilistic modelling technology changes the required skill set for the modern psychologist.
07.12.2016, 12:00-13:30, FAL 156: Pouyan Rafieifard
Title: "Recent advances in Bayesian models of perceptual decision making".
23.11.2016, 12:00-13:30, FAL 156: Christian Beste's group
Title: EEG source-based connectome comparisons – small world and other metrics
Abstract: For several decades, EEG has been a well-established measurement tool for gaining insights into a wide variety of brain functions. In recent years, more advanced computational methods such as time-frequency analysis, beamforming and connectivity analysis have been developed to extend the scope of EEG analysis beyond FFTs and event-related potentials. In this talk, we want to discuss the methodological possibilities and restrictions of 3D connectome analysis based on EEG data.
09.11.2016, 12:00-13:30, FAL 156: Cassandra Visconti
talked about the recent evidence for ideal observer-like sensitivity to complex acoustic patterns.
12.10.2016, 12:00-13:30, FAL 156: Stefan Kiebel
Organisational meeting for the start of the winter semester.
11.07.2016, 14:00-15:00, FAL 157: Arne Doose
gave an introduction to "Learning and conditioning".
21.06.2016, 11:30-13:00, FAL 158: Sebastian Bitzer
spoke about the advantages and disadvantages of model expansion over model comparison.
07.06.2016, 11:30-13:00, FAL 158: Hame Park
presented recent work by Drugowitsch et al. about the speed-accuracy tradeoff in decisions based on information from two modalities (vision and vestibular sensors). We concluded that the experiment mostly showed that the vestibular modality led to very different behaviour than vision in the setup used.
11.05.2016, 14:00-15:00, FAL 157: Stefan Kiebel
will present a paper with the title "Anterior cingulate cortex instigates adaptive switches in choice by integrating immediate and delayed components of value in ventromedial prefrontal cortex."
27.04.2016, 14:00-15:00, FAL 157: Vahid Rahmati
talked about his recent paper: "Inferring Neuronal Dynamics from Calcium Imaging Data Using Biophysical Models and Bayesian Inference".
19.04.2016, 11:30-13:00, FAL 158: Dario Cuevas
Active Inference in decision making
13.04.2016, 14:00-15:00, FAL 157 - Sebastian Bitzer
Moodle and probabilistic programming
27.01.2016, 11:10 – 12:10 FAL 157
Pouyan R. Fard will give a talk on "Multi-voxel Pattern Analysis for Neuroimaging Data".
20.01.2016, 11:10 – 12:10 FAL 157 - Dimitrije Markovic
Dimitrije Markovic will present a paper from Bernacchia et al., titled "A reservoir of time constants for memory traces in cortical neurons".
06.01.2016, 11:10 – 12:10 FAL 157 - Dario Cuevas Rivera
16.12.2015, 11:10 – 12:10 FAL 157
09.12.2015, 12:11 – 12:40 FAL 157
Carolin Hübner gave a practice talk for her master's thesis defense.
Dario Cuevas Rivera gave a practice talk on modeling insect olfaction with stable heteroclinic cycles and Bayesian inference.
02.12.2015, 11:10 – 12:10 FAL 157
Carolin Hübner gave a practice talk for her master's thesis defense.
25.11.2015, 11:10 – 12:10 FAL 157 - Hame Park
Talk title: "Modeling perceptual decision-making with MEG/EEG: a literature review"
11.11.2015, 11:00 – 12:00 FAL 157
Sebastian Bitzer talked about efficient numerical approximations of probability distributions for nonlinear filtering.
04.11.2015, 11:00 – 12:00 FAL 157
Dimitrije Markovic presented his recent work on modeling inference in changing environments.
21.10.2015, 11:00 – 12:00 FAL 157
Carolin Hübner presented an advanced analysis of the Re-Decisions experiment.
14.10.2015, 15:00 – 16:00 FAL 157
Pouyan R. Fard presented [9].
22.7.2015, 13:00 – 14:00 FAL 157
Carolin Hübner presented her results on the "Re-Decisions" experiment.
08.07.2015, 13:00 – 14:00 FAL 157
Pouyan R. Fard talked about "An Extended Bayesian Model Equivalent to Extended Drift-diffusion Model".
03.06.2015, 11:00 – 12:00 FAL 158:
Talk by Stefan Scherbaum
13.05.2015, 11:00 – 12:30 Stefan's office:
We talked about ZIH's computer servers.
06.05.2015, 11:00 – 12:30 in Stefan's office
Hame Park discussed a few ideas for a new neuroimaging study on perceptual decision making.
29.04.2015, 11:00 – 12:00 FAL 158: Holger Mohr
Talked about: "Analysis and modeling of instruction-based learning: Completed, ongoing and future work".
Past Meetings @ MPI CBS
25.11.2014, 11:15, A311: Valentin Schmutz
Talked about fitting the results of  with the Bayesian attractor model.
27.10.2014, 14:00, C206: Dimitrije Marković
will present his work on Bayesian modeling of probabilistic WCST.
14.10.2014, 11:10, A311: Carolin Hübner
presented her Bachelor's thesis project on the concept of information across disciplinary boundaries.
13.10.2014, 12:00, A311: Sophie Seidenbecher
presented her work on the model of .
16.07.2014, 14:00, A311: Sabrina Trapp
will tell us about her results from an experiment that investigated choice behavior in a reinforcement learning task in which the perceptual predictability of the stimulus material was additionally manipulated.
21.05.2014 Sophie Seidenbecher
presented a paper about chimera states in dynamical networks.
30.04.2014 Hame Park & Jan-Matthis Lückmann
presented the intermediate results for the Bee Experiment.
09.04.14 Andrej Warkentin
I will present my current progress on the extensions of the Bayesian model from Sebastian.
19.03.2014 Sebastian Bitzer
presented Expectation Propagation - Approximate Bayesian Inference (EP-ABC).
05.02.2014 Stefan Kiebel
22.01.2014 Sophie Seidenbecher
went through the details of Brunton et al.'s methods.
04.12.2013 Hame Park
13.11.2013 Sebastian Bitzer
Based on , discussed Shadlen's beliefs about how neurons implement probabilistic reasoning.
30.10.2013 Dimitrije Markovic
02.10.2013 Stefan Kiebel
26.07.2013 Hame Park
19.07.2013 Dimitrije Markovic
Presented a paper which investigates the computational properties of the amygdala during Pavlovian conditioning.
05.07.2013 Sebastian Bitzer
I will present . The authors propose an explanation of changes of mind in terms of switching between fixed points in attractor networks.
17.05.2013 Stefan Kiebel
Stefan will present a recent paper  which investigates where the noise in perceptual decision making comes from by using a clever task together with computational modelling.
26.04.2013 Hame Park
Hame presented an article by Amador et al., titled "Elemental gesture dynamics are encoded by song premotor cortical neurons".
12.04.2013 Jelle Bruineberg
Jelle presented a model for online discrimination of movements  and discussed possible simulations which relate model behaviour to experimental results.
08.03.2013 Burak Yildiz
Burak talked about the following NIPS paper  which shows how spiking neurons can learn to represent continuous signals.
22.02.2013 Sebastian Bitzer
Institute Colloquium practice talk.
08.02.2013 Sebastian Bitzer
I will discuss a paper  which argues that the timing of decisions is critically influenced by a perceived need to make a decision soon (urgency gating). The basis for the argument is a perceptual decision experiment in which participants receive varying amounts of evidence within a trial.
18.01.2013 Felix Effenberger
11.01.2013 Kai Dannies
Kai presented his maze simulator.
07.12.2012: Burak Yildiz
Burak talked about an interesting phenomenon called "suprathreshold stochastic resonance". This illustrates mathematically how noise can be beneficial for information transfer in a parallel array of neurons. It has possible applications in cochlear implants. For the talk, it is enough to look at the following Scholarpedia page. If you get more interested, you can read the following review.
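A minimal pure-Python sketch of the effect (all parameter values invented for illustration, not taken from the cited review): a population of identical threshold units is driven by the same signal, and the summed population output tracks the signal better when each unit receives its own independent noise than when the units are noise-free.

```python
import math
import random

random.seed(0)

def pearson(a, b):
    # plain Pearson correlation between two equally long sequences
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)

def population_output(signal, n_units, noise_sd):
    # each unit fires iff signal + its own independent noise exceeds 0;
    # the population output is the number of units firing at each time step
    out = []
    for s in signal:
        out.append(sum(1 for _ in range(n_units)
                       if s + random.gauss(0.0, noise_sd) > 0.0))
    return out

signal = [math.sin(2 * math.pi * t / 100) for t in range(1000)]
# with (near-)zero noise all units are identical, so the population
# output is just a step function of the signal
corr_silent = pearson(population_output(signal, 100, 1e-9), signal)
# with independent unit noise the population output becomes a graded,
# near-linear function of the signal and correlates better with it
corr_noisy = pearson(population_output(signal, 100, 1.0), signal)
```

The noisy population carries more information about the signal than the silent one, even though each individual noisy unit is less reliable.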
23.11.2012: Sebastian Bitzer
Sebastian presented a paper  which tries to differentiate models of perceptual decision making based on two measures of firing rate variability in monkey LIP neurons.
02.11.2012: Stefan Kiebel
Stefan talked about a study describing efficient auditory coding.
25.09.2012: Burak Yildiz
Burak talked about sparse coding in the auditory pathway following a recent publication.
04.09.2012: Arnold Ziesche
Arnold presented a neurofunctional model for the McGurk effect.
21.08.2012: Sebastian Bitzer
Sebastian presented Friston's latest work on how explorative behaviour may be understood within the free energy framework.
26.06.2012: Stefan Kiebel
Stefan reviewed a paper about optimal feedback control  and discussed how results presented in this paper may be implemented by active inference.
29.05.2012: Burak Yildiz
Burak initiated a discussion and brainstorming session: What insights about the brain did we gain from our theoretical work, and how could we test and confirm these insights with (fMRI) experiments? These insights include topics such as uncertainty, precision, the relation between priors and posteriors, prediction error and hierarchical message passing. Burak went through (mostly the second half of) the following review paper to stimulate discussion.
15.05.2012: Arnold Ziesche
Arnold presented the results of his model for multisensory integration.
08.05.2012: Sebastian Bitzer
Sebastian gave an introduction to Gaussian Processes  with a focus on their use in DEM.
17.04.2012: Stefan Kiebel
Stefan presented a well-established model of multisensory integration.
10.04.2012: Stefan Kiebel
Stefan presented a lecture about computational models for bistable perception. 45 min + some discussion time.
03.04.2012: Burak Yildiz
Burak reported from CONAS 2012.
27.03.2012: Sam Mathias
Sam discussed with us potential approaches to model his newest auditory psychophysics experiments relating to frequency shift detectors.
21.02.2012: Burak Yildiz
Burak talked about the work he has been doing in the last couple of months. It was about learning and recognition of human speech.
07.02.2012: Arnold Ziesche
Arnold presented a generic model he is currently considering. There was nothing to read.
24.01.2012: Sebastian Bitzer
Sebastian presented Katori et al. (2011), who suggest a spiking neural network which implements switching of stable states represented by some form of attractor dynamics. The basis of it all is apparently recordings of neurons in prefrontal cortex.
Also, please remember to list any interesting conferences.
10.01.2012: Stefan Kiebel
Stefan presented a very recent Science paper about balancing excitation/inhibition.
13.12.2011: Stollen meeting
29.11.2011: Burak Yildiz
Burak talked about a side project of his. This was a rather mathematical talk about the echo state property (ESP) of Echo State Networks (ESN). It would be helpful to review the definitions and sufficient conditions for ESP of the standard and leaky integrator ESNs which can be found in .
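One standard sufficient condition for the echo state property, a recurrent weight matrix whose induced norm is below one, can be illustrated directly (a toy tanh network with invented sizes, not taken from the referenced material): two runs from very different initial states converge to the same trajectory, i.e. the network state becomes a function of the input history alone.

```python
import math
import random

random.seed(2)
n, T = 20, 100

# random recurrent weights, rescaled so the infinity norm is below 1 --
# with 1-Lipschitz tanh units this makes the state update a contraction,
# which is a sufficient condition for the echo state property
W = [[random.gauss(0.0, 1.0) for _ in range(n)] for _ in range(n)]
norm = max(sum(abs(w) for w in row) for row in W)
W = [[0.9 * w / norm for w in row] for row in W]

inputs = [random.uniform(-1.0, 1.0) for _ in range(T)]

def run(x0):
    # iterate x[t+1] = tanh(W x[t] + u[t]) over the shared input sequence
    x = list(x0)
    for u in inputs:
        x = [math.tanh(sum(W[i][j] * x[j] for j in range(n)) + u)
             for i in range(n)]
    return x

# two very different initial states end up on the same trajectory
xa = run([1.0] * n)
xb = run([-1.0] * n)
diff = max(abs(a - b) for a, b in zip(xa, xb))
```

The difference shrinks by at least the factor 0.9 per step, so after 100 steps the two runs are numerically indistinguishable.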
15.11.2011: Arnold Ziesche
Arnold presented a paper which uses an echo state network (ESN) with decoupled reservoirs to learn dynamics on different time scales simultaneously . For some reason, this might be related to his own work.
01.11.2011: Sebastian Bitzer
Peter Dayan pointed me to Tishby and Polani  when we spoke about the importance of actions in perception and the free energy principle. They unify ideas from information theory (determining the information, or surprise, of an event) with reinforcement learning (finding actions which maximise reward) and show that the minimisation of surprise is sufficient for optimising actions in the special case that the environment is "perfectly adapted", i.e., the transition probabilities of the environment directly reflect the associated rewards. Thus, they formalise a common reservation against the free energy principle: that surprise alone might not be sufficient to explain behaviour. Simultaneously, they embrace the potential existence of perfectly adapted environments through the coevolution of agents (their sensors) and the environment.
I took their formalism as a basis for discussing the representation of values in the form of priors in the free energy principle (see also the associated discussion on hunch.net).
Unfortunately, the book chapter is a bit long and quite technical. On the other hand, it is very well written and provides an excellent overview of the basic ideas in information theory and reinforcement learning.
25.10.2011: Stefan Kiebel
Stefan presented a review article which stresses that a) the spiking activity of a population of neurons depends on the hidden physiological state of the neurons (e.g. short-term synaptic plasticity), b) the response of sensory neurons is often projected into a higher-dimensional space (of processing neurons) and c) linear read-out units may be sufficient to discriminate high-dimensional trajectories of neuron responses. There are hardly any equations in it, but the topic is relevant for us.
11.10.2011: coffee meeting
23.09.2011: Burak Yildiz
Burak discussed DCM for fMRI. He went through the first ten pages of  and also the visual example in Section 5.1.
30.08.2011: Arnold Ziesche
Arnold presented a paper which critically discusses a computational model for audiovisual speech perception.
23.08.2011: Sebastian Bitzer
Sebastian continued his session on speedDEM. He demonstrated the performance of speedDEM for two test cases and various network sizes.
16.08.2011: Sebastian Bitzer
After a reasonable amount of DEM theory this will be a practical session regarding the implementation of DEM in SPM and the resulting hooks for increasing the speed of DEM in speedDEM. In particular, I will discuss the functional logic of the very general SPM implementation and show profiling results for it which indicate that numeric differentiation is the major contributor to running time. Subsequently we will discuss possible changes and extensions which can reduce this, i.e., we will discuss the future of speedDEM.
09.08.2011: Stefan Kiebel
Stefan talked about a paper  that uses reservoir computing, slow feature analysis and independent component analysis to let an agent self-localize in a maze.
02.08.2011: Burak Yildiz (different time: 14:00)
He continued with the mathematical investigation of the DEM paper . After a short review of the previous talk, he continued explaining the Laplace approximation formulas on pages 854 and 855 and the integration scheme based on Ozaki's work.
14.06.2011: Eduardo Aponte
He presented his recent work on unsupervised learning of spatiotemporal sequences in hippocampus.
07.06.2011: Arnold Ziesche
Arnold talked about a neural oscillator model for vowel recognition.
31.05.2011: Sebastian Bitzer
After a short recap of the things discussed on 05.04.2011 Sebastian continued to describe covariance hyperparameter estimation for the dynamic models presented in . This also included a short general introduction to expectation maximisation (EM) and potentially some material from appendix A.1 in .
24.05.2011: Stefan Kiebel
Stefan gave a talk about his current pet project where they use the free-energy principle to derive a functional model of intracellular single neuron dynamics.
17.05.2011: Burak Yildiz
Burak went through some of the details in the DEM paper  to gain more insight into the physics background of dynamic variational Bayes, especially ensemble dynamics and generalized coordinates. He started with the proof of Lemma 1 on page 850 and covered things until page 854, where the Laplace approximation starts.
26.04.2011: Eduardo Aponte
There is no obvious reason to use dynamical systems as computational tools. Moreover, recurrent neural networks often show undesirable properties; in particular, they may fall into chaotic regimes. Legenstein and Maass address these questions and provide a non-technical explanation of the relation between chaos, dynamical systems and computation. This paper includes a few equations and inequalities.
19.04.2011: Agata Checinska
She gave an introduction to quantum information and tried to highlight potential relations to neuroscience.
12.04.2011: Arnold Ziesche
I'm interested in understanding the motivation to use generalized coordinates.  discusses a good reason along with a simple model and some nice perceptual illusions.
05.04.2011: Sebastian Bitzer
I presented the algorithm which underlies various forms of dynamic causal modeling and which we use to estimate RNN parameters. At the core of it is an iterative computation of the posterior of the parameters of a dynamical model based on a first-order Taylor series approximation of a meta-function mapping parameter values to observations, i.e., the dynamical system is hidden in this function such that the probabilistic model does not have to care about it. This is possible because the dynamics is assumed to be deterministic and noise only contributes at the level of observations. It can be shown that the resulting update equations for the posterior mode are equivalent to a Gauss-Newton optimisation of the log-joint probability of observations and parameters (this is MAP estimation of the parameters). Consequently, the rate of convergence of the posterior may be up to quadratic, but it is not guaranteed to increase the likelihood at every step or actually converge at all. It should work well close to an optimum (when observations are well fitted), or if the dynamics is close to linear with respect to parameters. Because the dynamical system is integrated numerically to get observation predictions and the Jacobian of the observations with respect to parameters is also obtained numerically, this algorithm may be very slow.
This algorithm is described in  embedded into an application to fMRI. I did not present the specifics of this application and, particularly, ignored the influence of the inputs u defined there. The derivation of the parameter posterior described above is embedded in an EM algorithm for hyperparameters on the covariance of observations. I will discuss this in a future session.
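As a generic illustration of the scheme described above (not the SPM/DCM code; the toy model, parameter values and function names are all invented), a Gauss-Newton MAP update with a numerically differentiated observation function might look like this:

```python
import numpy as np

def simulate(theta, T=20, dt=0.1):
    # toy deterministic dynamical system (a damped oscillator);
    # theta = (stiffness, damping), observations are the position over time
    w, d = theta
    x, v = np.zeros(T), np.zeros(T)
    x[0] = 1.0
    for t in range(1, T):
        a = -w * x[t - 1] - d * v[t - 1]
        v[t] = v[t - 1] + dt * a
        x[t] = x[t - 1] + dt * v[t]
    return x

def numeric_jacobian(f, theta, eps=1e-6):
    # finite-difference Jacobian of observations w.r.t. parameters
    y0 = f(theta)
    J = np.zeros((y0.size, theta.size))
    for i in range(theta.size):
        tp = theta.copy()
        tp[i] += eps
        J[:, i] = (f(tp) - y0) / eps
    return J

def gauss_newton_map(y, theta0, prior_mean, prior_cov, obs_var=0.01, n_iter=30):
    # iterated linearisation: at each step, approximate the observation
    # function to first order and solve the resulting Gaussian MAP problem
    theta = theta0.copy()
    Sp_inv = np.linalg.inv(prior_cov)
    for _ in range(n_iter):
        J = numeric_jacobian(simulate, theta)
        r = y - simulate(theta)
        H = J.T @ J / obs_var + Sp_inv                 # posterior precision
        g = J.T @ r / obs_var - Sp_inv @ (theta - prior_mean)
        theta = theta + np.linalg.solve(H, g)          # Gauss-Newton step
    return theta

# generate data from known parameters and try to recover them,
# starting the iteration from the prior mean
true_theta = np.array([2.0, 0.5])
rng = np.random.default_rng(0)
y = simulate(true_theta) + 0.01 * rng.standard_normal(20)
est = gauss_newton_map(y, theta0=np.array([1.5, 0.8]),
                       prior_mean=np.array([1.5, 0.8]),
                       prior_cov=np.eye(2))
```

Note that both the trajectory and its Jacobian are obtained numerically here, which is exactly why the full-scale version of this algorithm can be so slow.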
22.+29.03.2011: Stefan Kiebel
I'll present a variational Bayesian approach for localizing dipoles for MEG and EEG data. This is meant as a teaching session for variational Bayes. We will go through the motivation, some of the key steps and assumptions and the detailed math for the derivation of some of the update equations.
15.02.2011: Burak Yildiz
Fiete et al.  investigate the reasons why birds have sparse coding in the high vocal center (HVC) for song production. It was experimentally observed that each HVC neuron fires only once during a song motif to control the RA neurons. In this paper, the authors give a mathematical foundation for the benefits of such a mechanism for learning new songs. They conclude that learning slows down when HVC neurons are used more than once during the song motif.
01.02.2011: Sebastian Bitzer
Sebastian presented Lazar et al. (2009)  as an example of the reemergence of recurrent neural network learning. While the RNNs in reservoir computing are usually fixed, the authors show that it can be beneficial to update the RNN connections as well as the output connections. The learning algorithm is local, biologically motivated and based on spike-timing-dependent plasticity, intrinsic plasticity and synaptic normalisation. The units in the network are only binary.
The results in the paper show that the used learning algorithm can linearise the activity of the recurrent excitatory units with respect to the (next predicted) input symbol. Although STDP is essential to capture the sequential nature of the input patterns, the homeostatic plasticity mechanisms are necessary to maintain distributed network activity and achieve good results. We presume that the architecture with an additional set of inhibitory neurons, as used here, also contributes to stable activity in the network, but this is not discussed in the paper. As a result of learning, the network exhibits sub-critical activity, i.e., when perturbed, it tends to restore its last activity pattern. Even though random networks (reservoir computing) have been shown to perform best in critical regimes, the sub-critical SORN still outperforms critical random networks. However, both network types share the necessity for sparse connections between excitatory neurons.
25.01.2011: Stefan Kiebel
Stefan talked about a 1995 paper by Geoff Hinton and others about some learning algorithm for neural networks . This is actually one of the first papers mentioning the free energy in the context of learning algorithms.
18.01.2011: Burak Yildiz
Burak talked about a hierarchical neural model inspired by the activities of neural circuits in the brains of songbirds. There was no paper to read since the talk was about the things he has been working on recently.
23.11.2010: Arnold Ziesche
Arnold told us about his thesis work 'A computational model for visual stability'.
16.11.2010: Sebastian Bitzer
I will present the brand new Neuron Viewpoint by Churchland et al. They investigated the functional role of preparatory activity (here: neural firing) in (pre-)motor cortex and suggest that
preparatory tuning serves not to represent specific factors but to initialize a dynamical system whose future evolution will produce movement.
What are the variables that best explain the preparatory tuning of neurons in dorsal premotor and primary motor cortex of monkeys doing a reaching task? This is the core question of the paper, which is motivated by the authors' observation that preparatory and perimovement (i.e., within-movement) activity of a single neuron may differ considerably, even qualitatively (something that conflicts with the view that preparatory activity is a subthreshold version of perimovement activity). This observation is experimentally underlined in the paper by showing that the average preparatory activity and average perimovement activity of a single neuron are largely uncorrelated across experimental conditions.
To quantify how well a set of variables explains the preparatory activity of a neuron, the authors use a linear regression approach in which the values of these variables for a given experimental condition are used to predict the firing rate of the neuron in that condition. The authors compute the generalisation error of the learnt linear model with cross-validation and compare the performance of several sets of variables based on this error. The variables performing best are the principal component scores of the perimovement population activity of all recorded neurons. The difference to alternative sets of variables is significant, and in particular the wide range of considered variables makes the result convincing (e.g. target position, initial velocity, endpoints and maximum speed, but also principal component scores of EMG activity and kinematic variables, i.e. position, speed and acceleration of the hand). That perimovement activity is the best regressor for preparatory activity is quite odd, or as Burak aptly put it: "They are predicting the past."
The authors suggest a dynamical systems view as an explanation for their results and hypothesise that preparatory activity sets the initial state of the dynamical system constituted by the population of neurons. In this view, the preparatory activity of a single neuron is not sufficient to predict its evolution of activity (note that the correlation between preparatory and perimovement activity assesses only one particular way of predicting perimovement from preparatory activity - scaling), but the evolution of activity of all neurons can be used to determine the preparatory activity of a single neuron under the assumption that the evolution of activity is governed by approximately linear dynamics. If the dynamics is linear, then any state in the future is a linear transformation of the initial state, and given enough data points from the future the initial state can be determined by an appropriate linear inversion. The additional PCA, also a linear transformation, doesn't change that, but makes the regression easier and, importantly for the noisy data, also regularises.
These findings and suggestions are all quite interesting and certainly fit into our preconceptions about neuronal activity, but are the presented results really surprising? Do people still believe that you can make sense of the activity of isolated neurons in cortex, or isn't it already accepted that population dynamics is necessary to characterise neuronal responses? For example, Pillow et al.  used coupled spiking models to successfully predict spike trains directly from stimuli in retinal ganglion cells. On the other hand, Churchland et al. indirectly claim in this paper that the population dynamics is (approximately) linear, which is certainly disputable, but what would nonlinear dynamics mean for their analysis?
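The linear-inversion argument can be checked in a toy simulation (the dimensions, random dynamics matrix and condition counts are all invented, not taken from the paper): when population activity follows linear dynamics, a single neuron's initial state is an exact linear function of the later population state, so regressing "the past" on "the future" recovers it perfectly.

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_conditions, t_future = 5, 50, 3

# random stable linear population dynamics x[t+1] = A @ x[t]
A = rng.standard_normal((n_neurons, n_neurons))
A *= 0.9 / np.max(np.abs(np.linalg.eigvals(A)))

# one initial ("preparatory") state per experimental condition
X0 = rng.standard_normal((n_conditions, n_neurons))
# population activity a few steps later ("perimovement" activity)
Xt = X0 @ np.linalg.matrix_power(A, t_future).T

# regress one neuron's preparatory activity on the later population activity
target = X0[:, 0]
coef, *_ = np.linalg.lstsq(Xt, target, rcond=None)
pred = Xt @ coef
```

Since Xt is an invertible linear image of X0, the regression fits the preparatory activity essentially exactly; in real, noisy data the PCA step additionally regularises this inversion.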
09.11.2010: Stefan Kiebel
The paper by Varona et al.  used findings on a marine mollusc to motivate a dynamic model for the mollusc's behavioural repertoire.
26.10.2010: Burak Yildiz
We will have a look at song generation in birds. The paper by Fee et al.  compares two possible models that can describe the song generation. One of these models supports autonomous activity among RA neurons while the second model supports a close relationship between higher level neurons (HVC) and the lower level neurons (RA).
19.10.2010: Sebastian Bitzer
In this meeting I presented the machine learning view of learning (nonlinear) dynamics and showed how autoregressive state-space models can be derived from certain types of differential equations, which allows application of standard machine learning methods to the resulting learning problems.
Then I went on to discuss Langford et al. (2009) , who have proposed the sufficient posterior representation of a dynamic model which is entirely based on function approximation. Unfortunately, their consistency proof, which would guarantee that the learnt state representation is a sufficient statistic for the posterior distribution, does not apply to their setup, because the way they have defined the prediction operator $C^f$ means it will never be invertible (a requirement of the consistency proof). Interestingly, the particular instantiation of the functions they used is very similar to the recurrent neural networks that we use.
We finished by discussing the advantages of representing dynamics with differential equations in contrast to autoregressive models: it is much easier to represent long-range dependencies, which frequently occur in our world, with differential equations (especially using hierarchies), but how can you learn them?
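The link between the two representations can be made concrete with the simplest possible case (a toy linear ODE, not one of the models from the talk): Euler discretisation turns dx/dt = a*x into a first-order autoregressive model whose coefficient directly encodes the ODE parameter, and least squares on the discrete data recovers it.

```python
# Euler discretisation of the linear ODE dx/dt = a*x:
#   x[t+1] = x[t] + dt * a * x[t] = (1 + a*dt) * x[t]
# i.e. a first-order autoregressive model with coefficient 1 + a*dt
a, dt, T = -0.5, 0.1, 50
x = [1.0]
for _ in range(T):
    x.append((1 + a * dt) * x[-1])

# going back: recover the AR coefficient from the data by least squares,
# then read off the ODE parameter
num = sum(x[t] * x[t + 1] for t in range(T))
den = sum(x[t] ** 2 for t in range(T))
ar_coef = num / den            # equals 1 + a*dt = 0.95
a_hat = (ar_coef - 1) / dt     # recovered ODE parameter, -0.5
```

For nonlinear differential equations the same discretisation yields a nonlinear autoregressive model, which is where standard machine-learning function approximators come in.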
12.10.2010: Georg Martius
Presents his own work.
It is about self-organizing robot behavior from the principle of homeokinesis. See robot.informatik.uni-leipzig.de. I will introduce a dynamical systems description of the sensorimotor loop and present the learning rules. I have many videos to show. Then we can choose from some extensions of the original framework depending on your interest.
05.10.2010: Stefan Kiebel
Discussed Sussillo and Abbott (2009) .
28.09.2010: Burak Yildiz
Tomorrow, I will give a brief review of different ways to obtain Stable Heteroclinic Cycles. I will look into one of these models in more detail and I attached the related paper  to the calendar. I will talk about only the first 10 pages of that paper.
31.08.2010: Sebastian Bitzer
In this week's meeting I will present Legenstein et al. (2010) . The paper is interesting for us, and me in particular, for several reasons:
1. it addresses the issue of how you can learn a compact representation of a stimulus from data
2. it does so by exploiting slowly varying features of the input in a hierarchy with increasing spatial scale (you notice that they have the same keywords as we do ;)
3. they test the found representation by seeing how suitable it is for reinforcement learning of 2 independent tasks
Point 2. is the one most interesting for us while 1. and 3. are exactly the content of my last paper which will be presented at the International Conference on Intelligent Robots and Systems in October. I obviously used different methods, stimuli and tasks, but the idea is the same.
24.08.2010: David Wozny
I will discuss the Haken-Kelso-Bunz model for coordination dynamics.
Attached is the original 1985 paper, but I think the Scholarpedia page is an excellent review.
So this page will be the focus of discussion. I will also provide my thoughts on why this is important in our line of research, and a summary of the diverse topics studied with coupled oscillators.
For those who want more math, attached is one of the more advanced and complicated papers, which is just one example of the possibilities in understanding coupled dynamic systems. From what I can tell, Peter Ashwin is one of the leaders in the field.
To clarify, you ONLY need to read the scholarpedia page listed above. This will be the focus of discussion. The other papers are simply examples of beginning and current stages of the topic of coupled dynamic systems.
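The core of the HKB model is a one-dimensional equation for the relative phase phi of the two oscillating limbs, d(phi)/dt = -a*sin(phi) - 2*b*sin(2*phi). A minimal sketch (parameter values invented for illustration) shows the model's signature bistability: anti-phase movement (phi = pi) is stable only when the coupling ratio b/a is large enough.

```python
import math

def relax(phi0, a, b, dt=0.01, steps=20000):
    # Euler integration of the HKB relative-phase equation
    # d(phi)/dt = -a*sin(phi) - 2*b*sin(2*phi)
    phi = phi0
    for _ in range(steps):
        phi += dt * (-a * math.sin(phi) - 2 * b * math.sin(2 * phi))
    return phi

# large b/a: both in-phase (phi = 0) and anti-phase (phi = pi) are stable,
# so a start near pi stays near pi
anti = relax(math.pi - 0.1, a=1.0, b=1.0)

# small b/a: anti-phase loses stability and the system falls into in-phase
collapsed = relax(math.pi - 0.1, a=1.0, b=0.2)
```

Lowering b/a below 1/4 destabilises the anti-phase state, which is the model's account of the involuntary switch from anti-phase to in-phase finger movement as movement frequency increases.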