RIKEN BRAIN SCIENCE INSTITUTE (RIKEN BSI)


Date: Thursday, March 16th, 2017

Venue: RIKEN Wako Campus
     Okochi Hall (Campus map: Bldg. No.C32)

Program

13:00 Opening
Susumu Tonegawa (Director, RIKEN Brain Science Institute)
13:05-13:40 Canonical models for neural information processing and neuromorphic computing
Si Wu (Beijing Normal Univ)

13:40-14:15 From statistical neurodynamics of Amari-Hopfield model to bi-directional computation in deep generative model
Masato Okada (Univ Tokyo)
14:15-14:30 Break
14:30-15:05 Estimation of neural connections from multiple spike trains
Noboru Murata (Waseda Univ)
15:05-15:40 Machine learning and AI for the sciences -- towards understanding
Klaus-Robert Müller (Technical Univ Berlin, Korea University)
15:40-16:10  Coffee break
16:10-17:10 Information geometry of Wasserstein distance
Shun-ichi Amari (RIKEN BSI)

17:10 Closing
17:30-20:00 Reception at Hirosawa club (Campus map: Bldg. No.C72)



Online registration

Seating in the hall is limited, so we encourage you to arrive early.




Organizers

Taro Toyoizumi (RIKEN BSI)
Hiroyuki Nakahara (RIKEN BSI)
Tomoki Fukai (RIKEN BSI)



Sponsored by

RIKEN Brain Science Institute



Inquiries

RIKEN BSI Mathematical Neuroscience
Emi Namioka e-mail: emi(at)brain.riken.jp

Canonical models for neural information processing and neuromorphic computing
 

Si Wu
Beijing Normal Univ

Owing to its many computationally desirable properties, the model of continuous attractor neural networks (CANNs) has been successfully applied to describe the encoding of simple continuous features in neural systems, such as orientation, moving direction, head direction, and the spatial location of objects.
Recent experimental and theoretical studies further suggest that CANNs may serve as a canonical model for neural systems encoding the similarity/dissimilarity between knowledge.
CANNs, with their relatively simple structure and powerful functions, may also serve as a canonical model for neurochip design. In this talk, I will briefly introduce some applications of CANNs in neural computation and their potential applications in neuromorphic computing.
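As a rough illustration of the bump dynamics described in the abstract, the following is a minimal numpy sketch of a 1-D CANN with Gaussian recurrent connectivity and divisive global inhibition. All sizes and parameters here are hypothetical choices for illustration, not taken from the talk.

```python
import numpy as np

# Minimal 1-D CANN sketch: neurons prefer features on a ring, recurrent
# weights fall off as a Gaussian of feature distance, and divisive global
# inhibition keeps a localized bump of activity stable.  All parameters
# are illustrative, not from the talk.
N = 128
x = np.linspace(-np.pi, np.pi, N, endpoint=False)

def ring_dist(a, b):
    """Shortest signed distance on the feature ring."""
    return (a - b + np.pi) % (2 * np.pi) - np.pi

a_width = 0.5
J = np.exp(-ring_dist(x[:, None], x[None, :]) ** 2 / (2 * a_width**2))
J /= J.sum(axis=1, keepdims=True)        # row-normalized recurrent weights

def step(u, I_ext, tau=10.0, dt=1.0, k=0.005):
    """Euler step of tau du/dt = -u + J r + I_ext, with divisive inhibition."""
    r = np.maximum(u, 0.0) ** 2
    r = r / (1.0 + k * r.sum())
    return u + dt / tau * (-u + J @ r + I_ext)

# Cue a feature value (0.3 rad) for a while, then remove the input.
u = np.zeros(N)
I_cue = 5.0 * np.exp(-ring_dist(x, 0.3) ** 2 / (2 * a_width**2))
for t in range(300):
    u = step(u, I_cue if t < 150 else 0.0)

peak = x[np.argmax(u)]                   # activity stays centered on the cue
```

With these (hypothetical) parameters the peak of activity remains at the cued feature after the input is removed, which is the persistent-encoding property the abstract refers to.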

 

From statistical neurodynamics of Amari-Hopfield model to bi-directional computation in deep generative model
 

Masato Okada
The University of Tokyo and RIKEN BSI

Amari proposed the correlation-type associative memory model in 1977, and Hopfield discussed the storage capacity of this model in 1982. Amari also showed that a mixed state in the Amari-Hopfield model with hierarchically correlated memory patterns spontaneously becomes an attractor, and called this phenomenon concept formation. Amari and Maginu proposed a statistical neurodynamical theory of the Amari-Hopfield model in 1988. Using this theory, it was shown that the model state initially approaches a mixed state; it then diverges from the mixed state and finally converges to a memory pattern. Next, I discuss the relationship among the retrieval dynamics of the Amari-Hopfield model, the temporal dynamics of face-responsive neurons in the inferior temporal cortex (Sugase et al., 1999; Matsumoto et al., 2005), and recently obtained findings regarding bi-directional computation in deep generative models (Nagano, Karakida and Okada, 2017).
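For readers unfamiliar with the model, here is a minimal numpy sketch of correlation-type storage and retrieval in the spirit of the 1977/1982 model. The network size, number of patterns, and noise level are arbitrary illustrative choices.

```python
import numpy as np

# Correlation-type associative memory: store P random +/-1 patterns in a
# Hebbian weight matrix, then retrieve one from a noisy cue.
rng = np.random.default_rng(0)
N, P = 200, 5                            # neurons, stored patterns
xi = rng.choice([-1, 1], size=(P, N))    # memory patterns

W = (xi.T @ xi) / N                      # correlation (Hebbian) learning rule
np.fill_diagonal(W, 0.0)                 # no self-connections

def retrieve(s, steps=20):
    """Synchronous sign-threshold updates of the network state."""
    for _ in range(steps):
        h = W @ s
        s = np.where(h >= 0, 1, -1)
    return s

cue = xi[0].copy()
flip = rng.choice(N, size=N // 5, replace=False)
cue[flip] *= -1                          # corrupt 20% of the bits

out = retrieve(cue)
overlap = (out @ xi[0]) / N              # overlap m -> 1 on successful retrieval
```

At this low memory load (P/N = 0.025, well below the capacity discussed in the talk), the dynamics pull the corrupted cue back toward the stored pattern, i.e. the overlap approaches 1.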

 

Estimation of neural connections from multiple spike trains
 

Noboru Murata
Waseda Univ

Estimating neural connections from multiple spike trains is an important task for analyzing mechanisms of information processing in the brain. There are many proposals for estimating connections between observed neurons, but most of them pay little attention to the influence of unobserved neurons. By introducing a probabilistic firing model of observed and unobserved neurons, we propose a method for inferring the effects of unobserved neurons and estimating the connections among partially observed neurons. The proposed method is verified and validated on synthetic and real data.

This work is done in collaboration with
Mr. T. Iwasaki at Waseda University,
Dr. S. Akaho at AIST,
Dr. H. Hino at University of Tsukuba,
and Dr. M. Tatsuno at University of Lethbridge.
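To make the baseline concrete, the sketch below estimates directed couplings from fully observed binary spike trains with a Bernoulli GLM (one logistic regression per postsynaptic neuron on the previous time bin). This is not the talk's method, which goes beyond it by modeling unobserved neurons; all coupling values and sizes here are hypothetical.

```python
import numpy as np

# Baseline (fully observed) coupling estimation with a Bernoulli GLM.
rng = np.random.default_rng(1)
N, T = 3, 20000
W_true = np.array([[0.0,  2.0, 0.0],     # W_true[i, j]: coupling i -> j
                   [0.0,  0.0, 2.0],     # (hypothetical values)
                   [-2.0, 0.0, 0.0]])
b_true = -2.0                            # baseline log-odds of firing

# Simulate spike trains: firing probability depends on the previous bin.
S = np.zeros((T, N))
for t in range(1, T):
    p = 1.0 / (1.0 + np.exp(-(b_true + S[t - 1] @ W_true)))
    S[t] = rng.random(N) < p

def fit_glm(X, y, lr=2.0, iters=2000):
    """Logistic regression by gradient ascent on the Bernoulli likelihood."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(b + X @ w)))
        err = y - p
        w += lr * X.T @ err / len(y)
        b += lr * err.mean()
    return w, b

W_hat = np.zeros((N, N))
for j in range(N):                       # one GLM per postsynaptic neuron
    W_hat[:, j], _ = fit_glm(S[:-1], S[1:, j])
```

With all neurons observed, the signs and rough magnitudes of the true couplings are recovered; when some neurons are hidden, their influence is misattributed to observed ones, which is the problem the talk addresses.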

 

Machine learning and AI for the sciences -- towards understanding
 

Klaus-Robert Müller
Technische Universität Berlin, Germany / Korea University, Seoul, South Korea

In recent years, machine learning (ML) and artificial intelligence (AI) methods have begun to play an increasingly enabling role in the sciences and in industry.
In particular, the advent of large and/or complex data corpora has given rise to new technological challenges and possibilities.

The talk will touch upon ML applications in the sciences, here in neuroscience and physics, and discuss possibilities for extracting information from machine learning models to further our understanding by explaining nonlinear ML models.

Finally, perspectives and limits will be briefly outlined.

 

Information geometry of Wasserstein distance
 

Shun-ichi Amari
RIKEN BSI

Clustering, or automatic category formation, is one of the major topics in both neuroscience and AI. Patterns are clustered based on their similarity (dissimilarity). We regard a pattern as a distribution on a frame (consisting of pixels in the case of pictures). Information geometry studies the invariant structure among (probability) distributions, whereas the Wasserstein distance is a structure that takes the closeness of pixels into account. The two have developed separately over many years, and both have useful applications to machine learning and neuroscience. Now is the time to build a unified framework for the geometry of distributions, unifying the KL divergence and the Wasserstein distance. This will lead to strong applications in the future.
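To make the contrast concrete, here is a small numpy example (my own illustration, not from the talk) on a 1-D "frame" of bins. The KL divergence is blind to how far mass moves across the frame, while the Wasserstein-1 distance, computable in 1-D as the L1 distance between cumulative distribution functions, reflects the frame's geometry.

```python
import numpy as np

# Three histograms on a 1-D "frame" of 10 bins (illustrative example).
p = np.zeros(10); p[2] = 1.0             # all mass at bin 2
q = np.zeros(10); q[3] = 1.0             # mass at bin 3: close to p
r = np.zeros(10); r[9] = 1.0             # mass at bin 9: far from p

def w1(a, b):
    """Wasserstein-1 distance on the line = L1 distance between CDFs."""
    return np.abs(np.cumsum(a) - np.cumsum(b)).sum()

# KL(p||q) and KL(p||r) are both infinite (the supports are disjoint),
# so KL cannot rank q and r by closeness to p.  W1 uses bin geometry:
print(w1(p, q))                          # 1.0: mass moved one bin
print(w1(p, r))                          # 7.0: mass moved seven bins
```

This is exactly the sense in which the Wasserstein distance "takes the closeness of pixels into account" while an invariant divergence such as KL does not.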