Katrin Amunts is a professor of brain research at Heinrich Heine University Düsseldorf, Director of the Cécile and Oskar Vogt Institute for Brain Research at Düsseldorf University Hospital, and Director of the Institute of Neuroscience and Medicine at the Jülich Research Centre. She has been the Scientific Director of the Human Brain Project (HBP) since 2016. In this interview, conducted during the Platform for Advanced Scientific Computing (PASC) Conference held in Lugano on 26–28 June, she talks about her research and goals.
July 26, 2017 – by Simone Ulmer
Ms. Amunts, you gave a keynote speech entitled “Towards the Decoding of the Human Brain” at the PASC17 Conference. It was primarily about understanding the brain and its functions in such a way that they can be represented and simulated on a computer.
Simulation is a tool that helps us gain insights into the human brain; it is based on models developed from empirical findings. Simulation, but also data-analytics procedures such as machine learning, enables us to better understand the organisational principles of the brain. When new findings are obtained, they can be tested in experiments, and new, more effective models can be developed. Understanding the organisational principles of the brain means grasping how the various spatial levels of brain organisation are connected–from the molecular level via the cellular level to the large networks that regulate cognitive processes and ultimately behaviour. We aim to make a significant contribution towards this with the Human Brain Project.
Where did you start your research?
My starting point was cell-body-stained histological sections, which I used to study the architecture of cells in various areas of the brain. This architecture is closely related to the connections between areas and to their function. At the border between two adjacent areas, the cellular architecture changes. This is the basis for developing maps of the brain.
Can you give an example?
For instance, we know an area that controls motor functions has a very different cellular and connectional architecture from one that receives sensory input from the eye, ear or sense of touch. The former controls the hand muscles, for example; the information goes from the motor region to the muscles, so the motor region is output-dominated. The latter, on the other hand, is more of an input region–it receives information and processes it. This is reflected in the architecture. The different aspects of brain organisation are closely linked: the cells have a particular molecular and genetic pattern and highly characteristic connections to each other; they are organised into brain areas and nuclei that also have particular properties and form large networks. The brain is a highly complex system in the best sense of the word–not only because there are 86 billion nerve cells with several thousand contacts per cell to other nerve cells, but also because of these various organisational levels.
Your remarks about the different levels and aspects of brain organisation alone give the impression that decoding the brain is an almost insurmountable task.
That’s why we launched the Human Brain Project. For every level, there is today a community of scientists, who usually conduct research on just one of these levels using completely different methods. However, there is still far too little exchange between them–either because it is difficult or impossible to exchange and integrate the wealth of individual results and data, or simply because the various communities have had little contact thus far.
We want to bring the various approaches together and offer scientists an infrastructure where they can exchange their data and find tools that help them conduct their own research. These are tools they can use, for instance, to draw on data from other groups or to run simulations and analyses particularly well; or an atlas that enables them to view various maps of the brain and interpret their own findings. If we succeed in getting scientists within and outside the Human Brain Project to collaborate more intensively and become aware of what the others are doing, then decoding the brain is not so unrealistic after all.
Do you think the brain can be decoded with the Human Brain Project then?
Yes. We can certainly take a major step in that direction. The key difference compared to similar projects in the field worldwide is that, in the HBP, groups from many European countries coordinate closely with each other with regard to the scientific approach. The American BRAIN Initiative focuses mainly on open calls and assumes that everything else will sort itself out within the framework of these calls. In the HBP, we try to define research goals and distribute tasks in such a way that the critical points are precisely addressed–the cornerstones, as it were, around which everything else is then arranged.
Through your time-consuming, data-intensive mapping of the human brain, you have compiled a probabilistic cytoarchitectonic atlas of the brain and made it publicly accessible. What was the motivation behind this?
Today, we assume that the cerebral cortex has roughly 200 cortical areas. Then there are 100 or more subcortical nuclei. We map these areas in tissue sections of brains from body donors before reconstructing them in 3D. In order to capture the variability in brain structure and reveal how much brain regions differ from one person to the next, we create these maps for ten brains and superimpose them. This generates cytoarchitectonic probability maps, which tell me how high the probability is of an area being located at a particular point in space. These maps may help interpret results from healthy test subjects and from patients examined in imaging studies.
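To make the superposition step concrete, here is a minimal sketch in Python. It assumes the area labels for each of the ten brains are already available as binary masks registered to a common reference space; the grid size, mask contents and voxel indices are purely illustrative, not real HBP data.

```python
import numpy as np

# Hypothetical setup: for each of ten post-mortem brains, a binary mask of
# one area (1 = voxel belongs to the area, 0 = it does not), already
# registered to a common reference space. Random masks stand in for the
# real cytoarchitectonic delineations.
rng = np.random.default_rng(0)
shape = (64, 64, 64)                                  # illustrative reference grid
masks = [(rng.random(shape) > 0.9).astype(np.float32) for _ in range(10)]

# Superimposing the ten masks and averaging yields the probability map:
# each voxel holds the fraction of brains in which it belongs to the area.
probability_map = np.mean(np.stack(masks), axis=0)

# Example query: how high is the probability that a given voxel lies in the area?
x, y, z = 32, 20, 41                                  # illustrative voxel indices
print(f"P(area at this voxel) = {probability_map[x, y, z]:.1f}")
```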
What else can researchers and neurologists read from the maps, and how can they work with them?
The maps have already been used frequently; there have been thousands of downloads. They can be used in patient studies, for example to determine where a stroke is localised, or how much volume a patient’s brain loses in a particular area over a certain period of time during a particular illness.
Apart from the probabilistic atlas, you also play a leading role in the BigBrain project, which is itself part of the atlas.
Exactly. While the resolution of the probabilistic atlas is one millimetre, corresponding to the resolution of neuroimaging data, with BigBrain we went down to the cellular level at a resolution of 20 micrometres. Although that’s a bit above single-cell resolution, you can already recognise the cells and spot clear differences between the cell layers and brain regions. Our major breakthrough here came when we managed to reconstruct a brain at this resolution in 3D for the first time, from almost 7,500 sections. From this brain we can extract the density distribution of the cells, for instance. This is genuinely interesting for simulations, as the cell distribution of a motor area looks different from that of the visual centre. Based on the 3D reconstruction, we can also measure the thickness of the cerebral cortex. BigBrain is therefore an important “gold standard” for measuring the thickness of the cerebral cortex in neurodegenerative diseases, where the cortex becomes thinner.
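As an illustration of the thickness measurement, here is a minimal sketch in Python. It assumes the reconstructed white-matter and pial surfaces of a small cortical patch are available as vertex arrays; the flat synthetic sheets and the simple nearest-neighbour distance are deliberate simplifications of real thickness pipelines.

```python
import numpy as np
from scipy.spatial import cKDTree

# Hypothetical vertex arrays for the white-matter and pial surfaces of a
# small cortical patch (x, y, z in mm). Flat synthetic sheets stand in for
# real, curved surface meshes.
rng = np.random.default_rng(2)
xy = rng.random((1000, 2)) * 10.0                    # 10 mm x 10 mm patch
white = np.column_stack([xy, np.zeros(len(xy))])     # white-matter sheet at z = 0
pial = np.column_stack([xy, np.full(len(xy), 2.5)])  # pial sheet at z = 2.5 mm
pial += rng.normal(0.0, 0.05, pial.shape)            # small measurement noise

# A simple thickness estimate: for each white-matter vertex, the distance to
# the nearest pial vertex. Production pipelines use more careful definitions.
distances, _ = cKDTree(pial).query(white)
print(f"Mean cortical thickness estimate: {distances.mean():.2f} mm")
```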
How else can researchers and neurologists use BigBrain?
BigBrain is also used as a reference, or atlas brain, to which other findings can be referred. If a researcher has obtained results on layer III pyramidal cells in a particular region, for example, they have to enter them into a reference system with pinpoint precision if they want to make the data available to other researchers. BigBrain’s high resolution enables this. BigBrain can collate the vastly different research results for a brain region from different studies, such as the molecular structure, gene expression, connection patterns of the cells or even the involvement in brain functions. Researchers who are interested in memory and therefore conduct research on the hippocampus, for instance, can enter their findings in a database and indicate precisely which region of the hippocampus and which layer they refer to. This wasn’t possible before. So now you can gradually fill BigBrain with data.
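Anchoring a finding with pinpoint precision comes down to mapping it into the reference brain’s coordinate system. Here is a minimal sketch in Python, assuming the finding’s location is known as voxel indices in the reconstructed volume and the volume’s affine transformation to millimetre coordinates is given; every number below is hypothetical.

```python
import numpy as np

# Hypothetical affine: 20-micrometre isotropic voxels, as in BigBrain, with
# an illustrative origin offset. Real affines ship with the released volumes.
affine = np.array([[0.02, 0.0,  0.0,  -70.0],
                   [0.0,  0.02, 0.0,  -70.0],
                   [0.0,  0.0,  0.02, -58.0],
                   [0.0,  0.0,  0.0,    1.0]])

# A hypothetical finding on layer III pyramidal cells, located at these voxel
# indices in the reconstructed volume (homogeneous coordinates).
voxel = np.array([3500.0, 2800.0, 3100.0, 1.0])
xyz_mm = affine @ voxel                 # voxel indices -> reference-space mm
print("Reference-space coordinate (mm):", xyz_mm[:3])
```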
How might the data from macroscopic to microscopic studies one day be linked and combined?
You could try to describe every place in the brain via a vast number of features–such as how high the packing density of the cells is, how the cells are connected to those in other brain areas, and in which large-scale networks, such as those controlling language function, they are involved. Then you could try to collate all this knowledge and identify patterns in it, patterns that tell me things like: if I have an area that’s important for language, then such-and-such preconditions need to be fulfilled. This kind of linkage is only possible with a multimodal atlas. Multimodal means integrating vastly different aspects.
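One simple way to picture such a multimodal description is as a feature vector per brain location, with pattern discovery done by clustering. A minimal sketch, with random numbers standing in for real multimodal measurements:

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical multimodal feature table: one row per brain location, one
# column per feature (e.g. cell packing density, receptor densities,
# connectivity to other areas, network involvement). Random data stands in
# for real measurements.
rng = np.random.default_rng(0)
features = rng.random((5000, 12))        # 5000 locations x 12 features

# Clustering groups locations with similar multimodal profiles; recurring
# profiles are candidates for organisational patterns of the kind described
# above.
labels = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(features)
print("Locations per candidate pattern:", np.bincount(labels))
```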
Simulations are less central to your own research than data storage and analysis.
Yes, that’s true for my own research projects. Tackling such large amounts of data requires secure data connections and close collaboration with specialists from the computing sector. CSCS and Jülich have secure data transfer within PRACE. We’re in the process of setting up a European infrastructure in conjunction with Thomas Schulthess in Lugano and Thomas Lippert in Jülich, so that neuroscientists like me don’t have to worry about where the enormous amounts of data are stored and how we can access them for analysis. How data can be handled safely is an issue for many researchers, which is why I find forums like the PASC Conference interesting.
PASC embodies interdisciplinarity. After your presentation, you had a lively exchange with researchers from other disciplines. Can you give them something, and take something away from them in return?
I think so. There is certainly a methodological proximity to many projects. What’s more, I can find out about the state of the art and learn what is currently going on in high-performance computing. I found the panel discussion on neuromorphic computing, quantum computing and new materials absolutely fascinating. These are things I like to keep an eye on as a neuroscientist, because I’d like to know how far they will take me and where I can address new questions. One colleague I spoke to deals with other tissues and organs, but the growth mechanisms she studies might be very similar to those in the brain, so you keep meeting each other. We use the same optical methods as the materials researcher on the panel; although she tends to use them for materials and in my case it’s tissue, there are frequent overlaps. When you look at other research institutes, sometimes you’re astonished to find they have already solved problems you’re still grappling with yourself. At the moment, I’m trying to set up a consortium on deep learning together with colleagues from aerospace who deal with remote sensing and colleagues who conduct cellular analyses. We have similar issues and want to try developing neural networks tailored exactly to them. We all bring different use cases to the table and look at what these cases have in common and what theory lies behind them.
Deep learning was a central topic at the PASC Conference. Where do you find support here, and how is deep learning situated between simulations, theory and experiments in brain research?
Analysing big data with deep learning yields patterns that enable patients to be divided into groups, for instance. This in turn can help distinguish patients who benefit from a particular course of therapy from those who benefit less and might need a different therapeutic approach. Deep learning gives us access to such patterns. Data analyses and simulations complement each other and, together with empirical approaches and theory, they may help us recognise organisational principles in the brain.
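As a toy illustration of such patient stratification, here is a minimal sketch with a small feed-forward network; the data are entirely synthetic, and the features, labels and model stand in for the far richer clinical pipelines meant here.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Entirely synthetic data: per-patient features (e.g. regional volumes
# derived from an atlas) and a label for whether the patient responded to a
# particular therapy. Not real clinical data or a real HBP model.
rng = np.random.default_rng(1)
X = rng.random((300, 20))                     # 300 patients x 20 features
y = (X[:, 0] + X[:, 3] > 1.0).astype(int)     # synthetic "responder" label

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A small feed-forward network standing in for a deep-learning model that
# separates likely responders from non-responders.
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)
print(f"Held-out accuracy: {clf.score(X_test, y_test):.2f}")
```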
Simulations, on the other hand, can help us understand and predict which substance binds particularly well to a receptor in the brain on account of its chemical make-up. This enables us to forecast which class of chemical compounds is particularly suitable as a therapeutic agent. If successful, this would be revolutionary, as classic drug development is extremely complex and takes a huge amount of time and money. People have been working on this for years. With its interdisciplinary composition and expertise in the field of molecular research, neuroscience, clinical research, simulation and scientific computing, the HBP might just make a key contribution here.
Have you got any idea where brain research will be in 20 years? What’s your dream?
By then, I hope we’ll have a basic overview of why the brain has such a heterogeneous structure and what the principles are that give us such different cognitive and emotional capabilities. What exactly is the interplay between the cells that gives rise to emotionality, or that lets me make a movement or recognise a face? What, ultimately, is the special thing that makes us human beings? This question really isn’t all that easy to answer. Or what is consciousness? Various groups are working on this in the HBP. I’d like us to be able to define what consciousness is and what cellular preconditions it is bound to on a neurobiological level. Where and how does consciousness develop across the many levels of the brain’s organisation? That’s what I’d really like to know.
And how might this be achieved?
By then, we should have computers that enable us to keep an entire human brain model in memory at the cellular level and analyse it. In 20 years, I should think we’ll have realistic simulations of a human brain at nerve-cell level that factor in the most important boundary conditions. The results of such simulations could well predict the electrical activity of nerve cells and networks, such as when I plan a grasping movement. A simulation might also help gauge what happens if a brain region or connection malfunctions, as is the case after a stroke, for instance.
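To give a flavour of what predicting the electrical activity of nerve cells means computationally, here is a minimal leaky integrate-and-fire neuron in Python; it is a deliberately tiny toy model with illustrative parameters, not the HBP simulation stack.

```python
import numpy as np

# A minimal leaky integrate-and-fire (LIF) neuron. All parameters are
# illustrative textbook values, not fitted to data.
dt, t_max = 0.1, 100.0                             # time step, duration (ms)
tau = 10.0                                         # membrane time constant (ms)
v_rest, v_thresh, v_reset = -65.0, -50.0, -65.0    # potentials (mV)
drive = 20.0                                       # effective input (R*I, mV)

v = v_rest
spike_times = []
for t in np.arange(0.0, t_max, dt):
    # The membrane potential decays towards rest while integrating the input.
    v += dt * (-(v - v_rest) + drive) / tau
    if v >= v_thresh:                              # threshold crossing: spike
        spike_times.append(t)
        v = v_reset                                # reset after the spike
print(f"{len(spike_times)} spikes in {t_max:.0f} ms")
```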
Are you also considering personalised medicine so we might one day have a kind of personalised atlas for every individual brain as and when we need it?
I’m a physician, and that’s particularly close to my heart. I think we’ll need much more individual information to diagnose and treat individual patients. It would be important to determine more effectively what exactly a patient needs after a stroke in terms of drug-based therapy, physiological procedures, behaviour therapy, cognitive training or rehabilitation, so that the tissue remaining around the stroke site can take over functions as effectively as possible. Providing comprehensive brain maps to support neurosurgeons when they plan surgery is another such point. This might help prepare deep brain stimulation in Parkinson’s patients, for instance. I think we’ll have made progress in planning operations and placing electrodes in 20 years’ time, because we’ll have a better grasp of exactly where the nerve cells and connections that the electrodes are supposed to stimulate are located, and we’ll be able to factor in the patient’s individual characteristics more effectively. I assume we’ll have come on in leaps and bounds by then.
And in the early detection of degenerative processes?
Here too. If someone develops Alzheimer’s, the process begins many years before any symptoms appear. Today, we’re in a situation where the brain is already seriously damaged by the time we diagnose Alzheimer’s. If we managed to spot the signs with certainty even five years sooner, it would already be a massive plus. After all, we’d have gained that much more time to counteract the degenerative process. Perhaps today’s drugs would then have an entirely different impact.