In recent years, there has been an intense effort to characterize both the structure and the dynamics of the brain at different scales. Initiatives such as the BRAIN Initiative aim to develop innovative large-scale recording technologies, while the Blue Brain Project and the Human Brain Project are opening the path to simulations of cortical columns and other brain structures at an unprecedented level of detail.
One of the most urgent challenges for neuroscience today is to understand how to integrate all this information and data, a task made especially difficult by the many different spatial and temporal scales involved. Computational models that can bridge these scales, in a way that helps shape our intuition and lets us explore ideas, are a core ingredient for overcoming this challenge. As an example in this direction, I will review some of my recent work on computational models of the macaque brain across many scales of description. In particular, I will show how multi-scale models can be used to (i) study bottom-up and top-down attentional interactions between cortical areas and their frequency-dependent nature, (ii) formulate a mechanism for efficient long-range signal propagation and its potential connection with conscious perception, and (iii) propose a mechanistic hypothesis of working memory based on distributed representations across large-scale cortical networks.
Finally, I will briefly discuss possible future directions involving models of the mouse and human brain.