NIMH Brain Experts Podcast – How can computer models help us better understand the brain?

Welcome to the Brain Experts Podcast, where we meet neuroscience experts and talk about their work, the field in general, and where it's going. We hope to provide both education and inspiration. I am Peter Bandettini with the National Institute of Mental Health. Please note that the views expressed by the guests do not reflect NIH policy. This is episode three, with Niko Kriegeskorte. We will discuss, among other things, how brain imaging might help us to truly understand the brain. Let's chat.

Dr. Niko Kriegeskorte is a computational neuroscientist who studies how our brains enable us to see and understand the world around us. Kriegeskorte's lab uses deep neural networks to build computer models that can see and recognize objects in ways that are similar to biological visual systems. Niko received his PhD in cognitive neuroscience from Maastricht University, held postdoctoral positions at the University of Minnesota as well as at the National Institute of Mental Health here in Bethesda (actually, he was in my group), and was a programme leader at the UK Medical Research Council Cognition and Brain Sciences Unit at the University of Cambridge. Niko is currently a professor at Columbia University, affiliated with the Departments of Psychology and Neuroscience. He is principal investigator and director of cognitive imaging at the Zuckerman Mind Brain Behavior Institute at Columbia University. Niko is also a co-founder of the conference Cognitive Computational Neuroscience (CCN), which held its inaugural meeting in September 2017 at Columbia University.
Well, you've pioneered a few techniques, and now you're a professor at the Zuckerman Institute at Columbia. So why don't you tell me a little bit about what motivated you to get started in this area, how your interests have moved you along, and where you're at right now?

My initial interest was in computer science and psychology. I read a book by Paul Watzlawick and others called Human Communication that inspired me a little bit, so I started analyzing my parents' relationship. Then I went to university and studied psychology initially, and I did some computer science on the side. It was actually quite broad, so it allowed me to explore a lot. I looked around a little bit in Germany and found a lot of summer schools in cognitive science, and I found the Max Planck Institute, so I started doing internships there and going to these schools. That got me into cognitive science, and studying computer science on the side got me into machine learning as well.

When I graduated and it came time to choose a lab to do my PhD in, I realized that I wanted to study the brain. This was in the late 90s, when brain imaging was still quite new. There was this whole revolution going on: we could measure brain activity in humans non-invasively, which was super exciting to me. Also, in electrophysiology in Germany there was Wolf Singer's group studying representations of visual stimuli and higher-level phenomena such as grouping by synchrony, and they had some computational work going on as well. I was interested in doing neural network modeling, and that's how I found my PhD advisor, Rainer Goebel, whom I became aware of after seeing a talk of his on a neural network modeling project. I ended up having dinner with Rainer, and he was very excited about what he was doing with brain imaging data. Before finishing my degree I did an internship at the Max Planck Institute and became a graduate student there, initially to do neural net modeling all the way (that was my plan), and I was not initially interested in doing empirical work. But then everyone at the Max Planck Institute was measuring brains, and I did some rotations helping with that, and then in Rainer's lab people were doing fMRI and I got drawn into that. For the next years I was working with fMRI and thinking about how to analyze my data. That was quite a transition for me, and it has never been easy for me to be an empirical scientist, because there are several aspects of it that I find actually difficult. There is always this tension between collecting messy data at a very specific spatial and temporal scale and then deciding whether to model the data itself or to try to actually model the underlying mechanisms behind the data. That's really hard work.
What is your research now?

It's interesting how it has evolved. From the very beginning of my PhD I wanted to study visual representations, and I noticed that there was this concept of the population code: the information is encoded in a distributed fashion across a population of neurons in an area. At the same time we were using fMRI, and we had significantly higher resolution than we had earlier with PET and with early versions of fMRI, so we could measure these fine-scale patterns. But the dominant mode of analysis was to pass a smoothing kernel of several millimeters over the data and filter out all the fine-scale structure. So I saw a kind of tension between theory and experiment: in theory we think of the representations as fine-grained patterns, but in the analysis we treat those patterns as though they were noise and look just at the overall activation of entire regions. There seemed to be something wrong with that.

Then, in terms of experimental design, similarly, in fMRI vision people classically grouped stimuli into blocks, for example a block of faces and a block of places, and then averaged the responses to all these different visual stimuli. That averaging across the space of stimuli was a similar thing: you are averaging across lots of different things that are uniquely represented in the brain. Every image looks entirely different, and the subjective experience is entirely different. So I formed this overall conviction that we need to change these two things: we need to make every stimulus a condition in its own right, and we need a lot of different stimuli, and at the same time we need to analyze the information in every single voxel. We don't want to lose any of the precious information that we can capture with our measurement technologies, which could be array recordings or fMRI voxels. I also worked a lot in my thesis on pushing the resolution and using high-field fMRI at 7 Tesla to get more detailed measurements. So these were two things that I became quite interested in during my PhD that have stuck with me, and actually even today I'm building on them in a sense.
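As a concrete illustration of this kind of pattern-information (decoding) analysis, here is a minimal sketch on simulated data. The array sizes, category labels, and classifier are illustrative assumptions, not details from the conversation; a real analysis would use measured voxel responses.

```python
# Minimal sketch of multivariate pattern (decoding) analysis on simulated data.
# Assumes scikit-learn and NumPy; the "voxel patterns" here are synthetic.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_trials, n_voxels = 200, 500          # hypothetical: 200 trials, 500 voxels
labels = rng.integers(0, 2, n_trials)  # two stimulus categories (e.g., faces vs. places)

# Simulated fine-scale patterns: a weak category signal spread across many voxels
signal = np.outer(labels - 0.5, rng.normal(0, 1, n_voxels))
patterns = signal * 0.3 + rng.normal(0, 1, (n_trials, n_voxels))

# A linear classifier asks whether the fine-scale pattern carries category information
clf = LinearSVC(C=1.0, max_iter=10000)
accuracy = cross_val_score(clf, patterns, labels, cv=5).mean()
print(f"cross-validated decoding accuracy: {accuracy:.2f}")  # above chance if the patterns carry information
```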
Then during my postdoc, what was added to that was the idea that we don't only want to decode these patterns of activity and see whether we can distinguish particular stimuli; we want to explore the geometry of the representation, if you will. We want to look at all the dimensions of the representational space, not just the couple of dimensions that decoders would focus on. With a decoder you say: I'm interested in this kind of stimulus information, let me see if I can decode it. If you generalize, you could ask: well, there are many different properties of the stimuli, so why don't we just fit decoders for all of them? The limiting case of that is being interested in the entire geometry of the representation.
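One common way to operationalize that geometry is a representational dissimilarity matrix (RDM), as used in representational similarity analysis. The sketch below is illustrative only: the response patterns are simulated, and the distance measure and comparison statistic are assumptions rather than specifics from the interview.

```python
# Sketch: characterizing representational geometry with dissimilarity matrices (RDMs).
# Responses are simulated; the distance and comparison choices are illustrative assumptions.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_stimuli, n_units = 40, 300   # hypothetical: 40 stimuli, 300 measured channels/units

brain_patterns = rng.normal(size=(n_stimuli, n_units))               # stand-in for measured patterns
model_patterns = brain_patterns @ rng.normal(size=(n_units, 100))    # stand-in for model features

# One vector of pairwise dissimilarities per representation (correlation distance here)
brain_rdm = pdist(brain_patterns, metric="correlation")
model_rdm = pdist(model_patterns, metric="correlation")

# Compare the two geometries by rank-correlating their RDMs
rho, _ = spearmanr(brain_rdm, model_rdm)
print(f"RDM correlation (brain vs. model): {rho:.2f}")
```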
So your work pulled out these patterns, which were different and much more informative than just blobs. And then you had this beautiful work comparing behavioral responses, or preferences, of people, how they categorized objects. You basically had a circular arrangement with a bunch of objects, people sorted them, and the pattern differentiation seemed to match the sorting to some degree. But then, you mentioned computational models. Could you unpack what you mean by that? What do you define as a computational model?

So, "computational model" is a term used in many different senses. When I say computational model, I mean primarily what could be called a task-performing brain-computational model. What I mean by that is that the model should be interpretable as a process model of what's going on in the brain at some level of abstraction. It doesn't need to be biologically detailed, it doesn't need to have spiking neurons, it doesn't need to correspond one-to-one to the neurons in the brain, and it doesn't even need to be a neural network model. In psychophysics there is a long history of more abstract models of the information processing that still have the flavor of wanting to capture the information processing going on in the brain in a way that captures task performance. The idea is that in order to link cognition to the brain, we need a model that implements a hypothesis about how the required information processing might work.
OK, so that leads to the next question, which is kind of the prevailing question over all of this. We have the abstraction, the computational model, and we have the data, and we have ever more sophisticated data and ever more sophisticated models, everything from linear models to nonlinear models to neural networks and convolutional neural networks, so hopefully they're going to meet in the middle. The goal here is to understand the brain, and we need to define that. How would you define understanding the brain, first of all, and what do you think that would look like? Would we be able to simulate it, or emulate it, or make one? What would that mean?

That's a great question, and I think a very important question. We need to carve up all of cognition with different tasks, and the tasks bring the information processing and the behavior into the lab and make the performance measurable. They allow us to quantify how well a given system, a human brain or an animal brain or a computational model, can perform the task. They also allow us to measure under what conditions task performance suffers, under what conditions the system makes mistakes or takes longer, things like that. This gives us behavioral characterizations of task performance that we can compare between models and brains.

At the same time we want these models to relate to the brain itself, not just to produce the behavior, to be able to perform the tasks and match the patterns of errors as well. We're not primarily interested in the engineering objective of doing the task as well as possible; we also want to match the situations in which the human brain fails, for example, or the situations in which a subject might take much longer to recognize an image. So that's the behavioral level. In addition, we want to be able to relate the dynamics inside the model to the dynamics in the brain of a human or animal performing the same tasks. That's a very interesting methodological challenge, and it's not a purely technical and methodological challenge: it gets to the core of some fundamental theoretical questions of neuroscience, including at what level of detail we can hope to make this comparison and find correspondence, and how the correspondence should be defined between the dynamics in the brain and the dynamics in the model. So in this way we want to be able to compare the models, in terms of their behavior and in terms of their internal activity, to human brains. And we have succeeded when it is no longer possible for cognitive scientists to come up with tasks at which humans outperform our neurobiologically plausible models.

OK, so it's going to be a somewhat fun adversarial cooperation, I think, between cognitive scientists, many of whom will be multidisciplinary cognitive scientists, I'm guessing. But some in our community will focus more on the behavioral aspect of it, and I see it as their job to design these tasks, program them, and share them with the community: tasks that highlight exactly what neural network models, or cognitive models more generally, can't match in human behavior. At the moment it's still easy for them to do that, but when we get to the point where that is dwindling and there is nothing you can come up with anymore where we can't find a neurobiologically plausible model that matches the task performance, that's when we're done.
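To make the behavioral part of that comparison concrete, here is a minimal sketch, on simulated data, of correlating the error (confusion) patterns of a model and a group of human subjects. The confusion matrices, the eight-way task, and the correlation measure are assumptions for the example, not specifics from the interview.

```python
# Sketch: comparing a model's pattern of errors to human errors on the same task.
# Confusion matrices here are simulated; a real study would tabulate them from trials.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n_classes = 8  # hypothetical 8-way recognition task

def random_confusions(strength):
    """Simulate a confusion matrix: mostly correct, with structured off-diagonal errors."""
    base = np.eye(n_classes) * 0.7
    errors = rng.dirichlet(np.ones(n_classes) * strength, size=n_classes) * 0.3
    return base + errors

human_confusions = random_confusions(strength=0.5)
# A model that partially shares the human error structure (simulated for illustration)
model_confusions = 0.5 * human_confusions + 0.5 * random_confusions(strength=0.5)

# Compare only the error structure (off-diagonal cells), not overall accuracy
off_diag = ~np.eye(n_classes, dtype=bool)
rho, _ = spearmanr(human_confusions[off_diag], model_confusions[off_diag])
print(f"error-pattern correlation (human vs. model): {rho:.2f}")
```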
To use one quick example: something like playing chess or Go. Computers can outmatch humans, they can outperform cognition, and it's very likely that the algorithm they use, if you were to model that process computationally, is an extremely different process at all spatial and temporal scales, and also different in its architecture. So are we really gaining insight into the brain by designing better AI algorithms that might outperform humans, but for very different reasons? It's not so much that they're outperforming us; they're doing something that exploits the fact that they can compute so much faster, even if the algorithm might be much less efficient. There might be many variables in play that may not add up to understanding the human brain.

Absolutely, I think this is a great example. Chess and Go, these games, actually have a long history in cognitive science as well. Allen Newell, in his essay "You can't play 20 questions with nature and win", proposed chess, if I remember correctly, as a possible test case for cognitive models. But even if we were done with that task, we wouldn't understand how the brain works; we would just understand how it works for that task. And that's why I'm saying we need all the different tasks.

I see. So do we understand that now for chess?

Definitely not. For chess, engineering has surpassed human ability; however, that doesn't mean that we know how humans play chess. Of course, the fact that it is within our reach in terms of engineering helps us a lot in modeling how humans play chess, but it would be a very interesting and cutting-edge project, now that we are at this stage with the engineering, to revisit the question of how humans play chess. And you already brought up all the ways in which the programs that have superhuman performance are different from the human brain. That's why it's important: we want to match not only the general level of performance, we also want to match the limits of performance, the kinds of mistakes that humans make, and the dynamics in the system. We want to be able to relate the representations in a chess-playing, neurobiologically plausible model to the dynamics in the brain while playing chess, and we want to be able to explain how humans acquire the skill, for example how much they have to learn. A lot of these models, for example AlphaGo Zero, do a lot of self-play, an amount of self-play that is impossible for a human to perform. That's how understanding the brain is fundamentally different from the AI engineering challenge, although the AI engineering challenge is a key component of all of this, which is why AI is a key component of cognitive computational neuroscience.
What about neuroimaging data? How limited is it in terms of informing really true models of how the brain is actually working?

All our data are limited. I think someone said that behavioral data, or reaction times, are like having a single measurement for the entire system, like having one voxel, basically a uni-dimensional measure of the entire system. But it gives you a lot if you combine it: all of cognitive science is based on human behavioral data. It's not just reaction times, obviously, you can have much more complex data, but it's usually low-dimensional data, and it's thought to be extremely powerful in combination with the constraint that your model has to be able to perform the task. For adjudicating between models, that's a special case of empirical inference where you have a little clue and a lot of assumptions, and when you bring them together you can make surprisingly deep inferences. Maybe you can find out who committed a murder when you have only three different clues and some prior knowledge about who it might have been, and then by elimination you can find out who it was. That's potentially powerful, but of course it's also true that you need strong data. Maybe more along the lines of your original argument: behavioral data alone are definitely not sufficient, and I'm interested in the brain because I think that we need the constraints of thinking about how the brain is organized, and of massive multivariate brain-activity data and anatomical data as well, in order to constrain our theoretical and modeling efforts appropriately. But this was just to illustrate that even a coarse overall measure can constrain your theory.

And of course, fMRI data are a wonderfully rich source of data: tens of thousands of channels. So it's always how you look at it. If you look at it in terms of coverage, it's perfect, you see the whole brain. If you look at it in terms of the number of channels, well, tens of thousands of channels is also an extremely rich informational sample per unit of time of this dynamic system that you want to understand. But of course, if you look at it from the other perspective, within each voxel we are averaging across tens of thousands or hundreds of thousands of neurons, so if we think of brain function at the very detailed level of single neurons, it's very unsatisfying. The reason why we can still use those data is that we can interpret them in the light of strong theory. For example, when we have a computational model, and the computational model has much more fine-scale structure, then we can still predict the coarse-scale dynamics from the fine-scale model.
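One simple way to picture that link is to treat each voxel as pooling many model units and then test how well the model's features predict the coarse measurement on held-out stimuli. The sketch below is an illustrative toy with random model features and a hypothetical unit-to-voxel pooling scheme, not a description of any particular published method.

```python
# Sketch: predicting coarse-scale (voxel-like) responses from a fine-scale model.
# Model features and the unit-to-voxel pooling are simulated assumptions.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n_stimuli, n_units, n_voxels = 120, 1000, 50

# Fine-scale model: one activation per model unit per stimulus
model_units = rng.normal(size=(n_stimuli, n_units))

# Each simulated voxel pools many units (unknown weights) plus measurement noise
pooling = rng.random(size=(n_units, n_voxels)) / n_units
voxels = model_units @ pooling + rng.normal(scale=0.1, size=(n_stimuli, n_voxels))

# Fit a regularized linear mapping from model features to one voxel; test on held-out stimuli
X_train, X_test, y_train, y_test = train_test_split(model_units, voxels[:, 0], random_state=0)
encoder = RidgeCV(alphas=np.logspace(-2, 4, 13)).fit(X_train, y_train)
print(f"held-out R^2 for voxel 0: {encoder.score(X_test, y_test):.2f}")
```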
If we get to the point where we have the engineering principles that put us in a position to build something that performs any task we can come up with, and that is consistent with all the data we have, that's not to say that our data uniquely identify one solution. We should expect there to be an infinite space of equivalent solutions. But there is also a large population of humans, and everyone's brain is different, yet at the level that I'm interested in our brains are identical. What I want to understand is not exactly what a particular neuron in your brain does; I want to understand what all those neurons do together, and those in your brain do fundamentally the same thing, they use the same algorithms, as those in my brain. That's what I want to understand.

I want to go back to these scales. We went from behavior to the level of neuroimaging, and we said that even though both of these are very coarse-scale, in a sense, when you think of them at the scale of single neurons, they can still help us constrain theory and adjudicate between different models. But of course there are also increasingly high-quality fine-scale data, so increasingly we can get constraints at that level as well, and that's essential too. At the same time, if we take those incredibly rich and beautiful data and analyze them only with data-driven approaches and with insufficient theoretical constraints, we also don't make a lot of progress. It's all about combining the strong theoretical perspective with rich data and linking these two up very well, so that we can combine the theoretical constraints and the data in an optimal way, so as to bridge the huge canyon between them.

Right, the data only have meaning in the context of a model, in some sense; otherwise it's just measurements. So do you look at yourself now as basically a model builder?

It's true that I kind of came full circle, from the initial interest in neural networks and machine learning in the 90s, then through the empirical science, and now we're doing a lot of modeling again.

How do you see that actually ratcheting forward? I mean, every single model is falsifiable, it's all testable, but the data are maybe not good enough to necessarily falsify the models.

Our data tend to be limited, but they're often good enough to eliminate our models. We compute a number that characterizes how well a model explains the data, and then we also compute what we call a noise ceiling, which is an upper bound on the performance of the true model. If some oracle gave us the true model, then given the noise in the data we would expect it to reach some level of performance; we would also expect a lot of untrue models to reach the same level, but the noise ceiling helps us eliminate models that fall short even given the limitations of our data. This is how our data can drive progress.
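The noise-ceiling idea can be made concrete with a split-half style estimate: how well would even the true model be expected to predict the data, given measurement noise? The sketch below is a simplified illustration on simulated data; the correlation-based estimator is an assumption chosen for brevity, and real procedures are more careful about bias.

```python
# Sketch: a simple noise-ceiling estimate from repeated measurements of the same conditions.
# Data are simulated; the correlation-based estimator is a deliberately simplified assumption.
import numpy as np

rng = np.random.default_rng(4)
n_conditions, n_repeats = 60, 10

true_signal = rng.normal(size=n_conditions)                      # unknown "true" responses
measurements = true_signal + rng.normal(scale=1.0, size=(n_repeats, n_conditions))

# Upper-bound flavor: correlate each repeat with the mean of all repeats (optimistic)
mean_all = measurements.mean(axis=0)
upper = np.mean([np.corrcoef(m, mean_all)[0, 1] for m in measurements])

# Lower-bound flavor: correlate each repeat with the mean of the *other* repeats
lower = np.mean([
    np.corrcoef(m, np.delete(measurements, i, axis=0).mean(axis=0))[0, 1]
    for i, m in enumerate(measurements)
])

print(f"estimated noise ceiling: lower ~{lower:.2f}, upper ~{upper:.2f}")
# A model whose prediction-data correlation falls well below the lower bound can be rejected;
# one within the ceiling range cannot be distinguished from the true model by these data.
```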
All right, that's a path that more people should probably try to understand and embrace. I think it's useful, as opposed to just being in a silo of either collecting data, or modeling, or trying to make AI. So, you started this meeting a couple of years ago, and I've been really fascinated with it, I've gone twice: the Cognitive Computational Neuroscience meeting. Could you just talk a little bit about the meeting?

Yeah, sure. This meeting was not my idea; it was the idea of Kendrick Kay and Thomas Naselaris, two very good computational neuroscientists, and they approached me about it. I agreed immediately that it is a really necessary meeting, because all the other meetings that I go to are related to a subset of the fields that need to come together, but don't really put all these fields together and get these communities to interact with each other. We argued quite a bit about what the name of the meeting should be, and the two versions were "computational cognitive neuroscience" or "cognitive computational neuroscience", which in my mind are totally different; the other order just doesn't make any sense to me. The reason, from my perspective, is that the cognitive level is the top level, and computation is the glue that relates cognition to the brain; that's one reason why "computational" should be in the middle. Another perspective is that it's cognitive science plus computational neuroscience, so we wanted to keep "computational neuroscience" intact in the second half. And the other perspective is that it emphasizes something that's missing in the current scene, and that's the top-down component: it starts with cognition, it starts with the task, it starts with the idea that we want to understand how cognition works, and then we use those top-down constraints to interpret our data, which can be behavioral data or brain data, and which can be at a very detailed level as well. You can think of it as cognitive science finding roots in neuroscience, or you can think of it as computational neuroscience, which has always been about understanding components of computation that might be useful in the context of cognition but hasn't really fully related them to cognition, growing out toward cognition.
All right, is there anything that you'd like to highlight as a recent advancement that you're excited about?

There have been huge advances in brain-inspired AI in the last couple of years, and deep neural networks are a big and famous one. I think this is really a revolution, and not just for AI but also for brain science, because it vindicates the old intuition that this intermediate level of brain-inspired computation, which abstracts from a lot of the details of the biology, is already very useful. It gives us a common modeling language that links cognitive science, computational neuroscience, and AI, and it gives us technology and software tools that enable us to implement our theories as these task-performing brain-computational models. What that means is that there's really no excuse anymore, in a way. When this started a few years ago, and I thought about how I think vision works, and vision is what I'm trying to understand every day, I didn't really have a very good understanding of my own intuitions. That has to do with the fact that for a dozen years I had been thinking almost exclusively about how to analyze my data, my brain data. I had been thinking about multivariate data, about modeling the noise in those data, about being statistically efficient in my analyses, and things like that, but I was not thinking on a daily basis about how the brain achieves these amazing things, how it computes. Now that's totally changed: what I think about when I fall asleep is neural networks and convolutional neural networks, and I like that very much. If someone has a theory and thinks they understand some aspect of, for example, visual recognition, there is no excuse for not implementing that in a neural network model and then showing, on the one hand, that it really is capable of performing the task, but also that it predicts behavioral patterns of reaction times and errors, and brain-activity data.
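In practice, "implementing the theory" can start with something as small as a task-performing network whose internal activations can later be compared with behavior and brain data. The sketch below is a generic, hypothetical example, not a description of any specific model from Kriegeskorte's lab; the architecture, input size, and class count are assumptions.

```python
# Sketch: a tiny task-performing vision model whose internal activations
# can be read out for comparison with behavioral and brain data.
# Architecture, input size, and class count are illustrative assumptions.
import torch
import torch.nn as nn

class TinyVisionModel(nn.Module):
    def __init__(self, n_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, n_classes)  # assumes 32x32 input images

    def forward(self, x):
        feats = self.features(x)                  # internal representation to compare with brain data
        logits = self.classifier(feats.flatten(1))
        return logits, feats                      # task output and representation

model = TinyVisionModel()
images = torch.randn(4, 3, 32, 32)                # stand-in for a batch of stimuli
logits, feats = model(images)
print(logits.shape, feats.shape)                  # task predictions and activations for later comparison
```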
So this brings to the fore the central challenge of how brain information processing works, and it's very exciting that we are meeting that challenge head-on. In engineering, too, there is massive creativity in terms of architectures for neural networks and in inventing new kinds of tasks that were never before thought to be tasks for machines. Think, for example, of the task of creating an image: the task is, give me an image, it's supposed to look like a photo, but it could be anything. This is not the kind of task that a few years ago we would have associated with a machine; it's more of a creative task. But now neural network engineers are thinking about these kinds of tasks, and of course many of them are not thinking about how the brain does this, and that's as it should be. But in brain science and in cognitive science, people are using the same kinds of models to explain how the human mind and the human brain achieve these feats.
Well, thanks. OK, the last, final question: if someone is either established in the field or just starting out, is there any piece of advice that you would give them to help navigate?

I guess two things come to mind. The list is long, so take this with a grain of salt, but two things I think of. First, choose good advisors. I chose two great advisors, Rainer Goebel and you, and they enabled me to find my way: they gave me their wisdom, they gave me the freedom to pursue my own ideas and follow my intuitions, and they always supported me. That was, I think, absolutely key. The second thing, maybe, is to trust your intuition about what's interesting. For me it has happened time and again that I had an intuition that something was somehow deep or interesting or attracted me, and I couldn't always immediately, fully rationally explain it, or explain it to others, and I also met others who said it was a bad idea for an experiment. Some of these things I didn't do and others I did do, but when I look back at the ideas I was very excited about, I'd say today there was a reason for that. In some cases I pursued the idea and later noticed that it was important in ways I hadn't anticipated. In other cases I was scared off, maybe there was a project meeting and people had good arguments against it, and I didn't do it, but in more cases than not, years later I found other people pursuing it and doing really interesting things. So I think trusting our intuition and following what we feel is interesting, or where there is some mystery, is important.

I think that's great advice, and I appreciate the compliment regarding the advising. Well, I'd just like to thank you, I appreciate the time you've spent, and I wish you the best of luck.
