Real-world human-environmental problems tend to be vast, urgent, and complex. Confronted with such problems, we are often tempted to act fast by pulling together bits and pieces of information from different research fields and adding these to pre-existing models or frameworks. Seldom, though, do we take a step back to consider how best to draw from these fields in order to build structures that will remain powerful for addressing social-ecological challenges in the long term. In the Macroecology & Society lab, we try to change this status quo. Combined, our different research projects provide components of a growing, interdisciplinary ‘infrastructure’ that will integrate and regularly update best-available data, theory, and methods from multiple fields, to facilitate an empirical, predictive understanding of the complex relationships between global human and environmental dynamics. We hope that our modular approach to interdisciplinary research will help provide robust foundations for addressing some of the grand sustainability challenges of the 21st century.
This crosscutting research field provides an organizing framework for the projects in the lab's other four research fields, which in turn develop proof-of-concept implementations of different components of the framework.
Computational answers to big questions
We routinely deal with multi-scale research questions that require large volumes of data. These data come in multiple forms and at different spatial, temporal, and thematic resolutions. We therefore need a well-thought-out system that offers standardized, transferable solutions for accessing, processing, and reporting on these data efficiently. To this end, we develop semi-automated geocomputation algorithms powered by R, Python, and GDAL, which we use to derive global, high-resolution, multi-temporal data products and to run data analyses through high-performance computing. Similarly, we set up the analysis scripts of our individual projects as generic, reproducible workflows, which allows us to use them within a larger system of interoperable datasets and analytical pipelines. We share these data, scripts, and algorithms through common databases, documentation standards, and coding conventions, so that current projects can accelerate one another. Moreover, these common resources can foster new research projects that require similar data and tools, allowing them to jump-start complex analyses.
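To illustrate what such generic, reproducible pipeline steps can look like, here is a minimal Python sketch of one parameterized processing step that records its own provenance. All names here (`StepConfig`, `run_step`, `resample`) are hypothetical stand-ins, not the lab's actual tooling, and the placeholder function would in practice call GDAL or a similar geospatial library.

```python
from dataclasses import dataclass, asdict
from typing import Callable
import hashlib
import json


@dataclass(frozen=True)
class StepConfig:
    """Hypothetical parameter set for one pipeline step."""
    input_path: str
    output_path: str
    resolution_m: int
    crs: str = "EPSG:4326"


def config_hash(cfg: StepConfig) -> str:
    """Stable fingerprint of a step's parameters, so any data
    product can be traced back to the exact settings that made it."""
    payload = json.dumps(asdict(cfg), sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()[:12]


def run_step(cfg: StepConfig, func: Callable[[StepConfig], None]) -> dict:
    """Run one pipeline step and return a provenance record that
    can be logged alongside the output in a shared database."""
    func(cfg)
    return {
        "step": func.__name__,
        "params": asdict(cfg),
        "hash": config_hash(cfg),
    }


def resample(cfg: StepConfig) -> None:
    """Placeholder: a real step would invoke GDAL/rasterio here."""
    pass


record = run_step(StepConfig("in.tif", "out.tif", 1000), resample)
```

Because every step is driven by an explicit, hashable configuration rather than hard-coded paths, the same script can be reused across projects and its outputs remain reproducible and auditable.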