Professor John Crimaldi using lasers to render odor plumes visible. (Courtesy John Crimaldi)

The Fascinating Significance of Understanding How Animals Detect Odors

Professor John Crimaldi studies the subject with students in the Ecological Fluid Dynamics Lab at the University of Colorado Boulder.
By Iris McCloughan
May 27, 2022
12 minute read

How might society benefit from understanding the ways in which the brain processes smell? John Crimaldi, a professor of civil, environmental, and architectural engineering at the University of Colorado Boulder, who leads the Ecological Fluid Dynamics Lab, has more than a few answers. Getting clarity around brain function, he says, could lead to breakthroughs in a wide range of areas, from neurological disorders to artificial intelligence.

Historically, his lab has focused on how the physics of fluid dynamics interacts with biological and ecological processes. Its scientists use a variety of experimental methods to investigate how organisms have evolved and adapted to the opportunities and constraints of their physical environments. But recently, his team began applying its knowledge of fluid dynamics to the study of olfaction, working as part of a larger research group to understand more clearly how animals use their sense of smell. Together, they explore how to visualize and quantify odor. Crimaldi recognizes that studying olfaction may not be the most obvious path to understanding how the brain works—but because the network of neurons involved in smelling is simpler than those underlying other senses, such as vision, he says, scent is an ideal means of investigating the subject. (The effort is funded by the National Science Foundation’s NeuroNex initiative, which seeks to leverage existing neuroscience technologies in a multidisciplinary way to better understand brain function.)

Here, Crimaldi gives us a glimpse into his lab’s novel techniques for making odors visible, and the wide-ranging potential of this fascinating research.

In simple terms, what is your research on olfaction all about?

I’ll answer from my perspective as the lead investigator of the Odor2Action Network. Odor2Action is a group of fifteen investigators from around the world, and if you add in postdocs and students, it’s close to eighty people. It’s a large, international, interdisciplinary network that is studying a very specific problem in neuroscience: How do animals encode odor stimuli in their brains, and subsequently use that information to generate some sort of natural behavioral response? This could be anything from changing the way they’re flicking their whiskers to sensing danger and running away. There’s a whole mechanistic process there that we’re trying to understand. Ultimately, we’re trying to reverse-engineer the mechanistic functions of a certain part of the brain.

And this part of the brain is very special, from an evolutionary perspective. That’s because smelling, which is a form of chemical sensing, was the very first sense to evolve in life forms on Earth. Even back in the primordial soup, when there were just bacteria, those organisms had crude abilities to sense chemicals in the environment around them, which aided their ability to find food and resources. All brain evolution has taken place in the presence of olfaction—this ability to sense and smell your environment—which is not true of the other senses.

The other thing that makes it special, and that makes the idea of studying smell a viable path for studying how the brain works, is that the architecture of the neural circuitry in this part of the brain—the part that does olfactory sensing and leads to the behavioral decisions—is relatively simple across a range of animals that we can study. What I mean by that is, the number of layers of neurons involved in making this process work is much smaller than in, say, vision. So it turns out that even though it’s maybe not an intuitive thing to study if you want to understand how the brain works, olfaction is actually a very natural portal through which to study brain function.

As part of this work, you developed a novel technique for making odor visible. Can you explain it, and how it’s used in your research?

In my lab, we investigate how to visualize and quantify odor, because if we’re going to study how animals respond to olfactory stimuli, we’d better make sure we understand what that stimulus is like.

As odors move through the environment, they have both spatial and temporal structure. If you looked at an odor plume in space, you’d see that there are places where it has higher concentration, places where it has lower concentration, and places where it has no concentration at all. Similarly, if you sat at a point in space as these odors wafted by you, you would see the odor changing with time. We call that structure. One of the things that we’re focused on is the idea that the statistical nature of these structures in space and time encodes information about where an odor source is, and how an animal could locate it.
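To make that idea of statistical structure concrete, here is a minimal sketch of the kinds of statistics one might compute from a concentration time series recorded at a single point. The synthetic signal, the detection threshold, and the particular statistics are illustrative, not the lab’s actual analysis.

```python
import numpy as np

# Synthetic stand-in for a concentration time series at one point:
# mostly clean air, with intermittent odor "whiffs" of random strength.
rng = np.random.default_rng(42)
n_samples = 3000
whiff = rng.random(n_samples) < 0.1                 # odor present ~10% of the time
conc = np.where(whiff, rng.exponential(1.0, n_samples), 0.0)

threshold = 0.05                                    # illustrative detection limit
detected = conc > threshold

mean_conc = conc.mean()                             # average concentration
intermittency = detected.mean()                     # fraction of time odor is detectable
whiff_onsets = np.flatnonzero(np.diff(detected.astype(int)) == 1)

print(f"mean concentration: {mean_conc:.3f}")
print(f"intermittency factor: {intermittency:.2f}")
print(f"number of whiffs: {whiff_onsets.size}")
```

Statistics like these tend to vary with distance from the source, which is one way a time series at a fixed point can carry information about where the odor is coming from.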

We also address the fact that odors are invisible by using some experimental tricks to render them visible. Once they’re visible, they’re easier for us to track, because humans tend to be visually oriented. Instead of using a particular pheromone or chemical that the animal responds to, we use a different chemical that moves through the environment in exactly the same way as that odor, but has a special property: When you shine a laser onto that surrogate, it fluoresces, and it fluoresces in a different color than the laser. If we image that fluorescent field with a scientific camera—and if that camera has a filter on it that only lets in light that corresponds to that color—then the only thing we see is the spatial distribution of that odor at that instant. If we take a lot of those images really quickly, maybe thirty frames per second, we can see how the odor’s structure evolves over time. An irony of this study is that we’re studying odors, but to do that, we end up studying them from a visual perspective.
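In rough computational terms, the imaging step amounts to turning each camera frame into a concentration field. The sketch below assumes a simple linear calibration between fluorescence intensity and surrogate-dye concentration; the function name, array sizes, and constants are illustrative, not the lab’s actual pipeline.

```python
import numpy as np

def frames_to_concentration(frames, dark_frame, calib_slope):
    """Convert raw fluorescence images into odor-concentration fields.

    frames      : (n_frames, height, width) raw camera intensities
    dark_frame  : (height, width) background image taken with the laser off
    calib_slope : concentration per intensity count, from imaging a known dye mix

    Assumes fluorescence intensity varies linearly with dye concentration,
    which holds for dilute dye under uniform illumination.
    """
    corrected = frames - dark_frame            # remove camera/background offset
    corrected = np.clip(corrected, 0.0, None)  # negative counts are just noise
    return calib_slope * corrected             # intensity -> concentration

# Example: a 10-second recording at thirty frames per second
rng = np.random.default_rng(0)
raw = rng.uniform(0, 4095, size=(300, 256, 256))  # stand-in for camera data
dark = np.full((256, 256), 100.0)
odor_movie = frames_to_concentration(raw, dark, calib_slope=1e-3)
print(odor_movie.shape)  # (300, 256, 256): a quantitative "odor movie"
```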

Why are the data in these visualizations significant?

The data that come out of my lab are essentially nothing more than movies of odor plumes. But they are accurate, scientific movies—meaning that, at each pixel of that movie, the intensity of the light that is recorded corresponds, in an accurate, quantitative way, to the concentration of odor at that point. So if we want to understand, for example, search algorithms that animal brains might use to navigate through these odor plumes—or algorithms that could be used by robots to navigate through odor plumes—we can put a virtual search agent into this digital odor landscape that we’ve measured. Based on a set of rules we give it to use for navigation, we can test how effective that particular algorithm is at locating the odor’s source, and we can do it over and over to understand statistically how that works. It’s a way to essentially test a robotic approach to source localization, or test the way an animal might do source localization—but we can do it on a computer using an odor landscape that was measured and quantified in my lab.
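As a toy illustration of the kind of experiment this enables, the sketch below drops an agent into a single frame of an odor field and has it step toward higher concentration. The field and the one-line navigation rule are stand-ins; real plume-tracking strategies have to cope with turbulent, intermittent signals rather than a smooth gradient.

```python
import numpy as np

def greedy_search(field, start, max_steps=500):
    """Step a virtual agent toward its highest-concentration neighbor.

    field : 2D array of odor concentration (one frame of an odor movie)
    start : (row, col) starting position
    Returns the path taken; stops at a local maximum, a crude stand-in
    for "source located."
    """
    pos = start
    path = [pos]
    moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    for _ in range(max_steps):
        r, c = pos
        neighbors = [
            (r + dr, c + dc)
            for dr, dc in moves
            if 0 <= r + dr < field.shape[0] and 0 <= c + dc < field.shape[1]
        ]
        best = max(neighbors, key=lambda p: field[p])
        if field[best] <= field[pos]:
            break  # no neighbor improves: treat as source found
        pos = best
        path.append(pos)
    return path

# Smooth toy "plume" with a source near row 40, column 70
rows, cols = np.mgrid[0:100, 0:100]
field = np.exp(-((cols - 70) ** 2 + (rows - 40) ** 2) / 400.0)
path = greedy_search(field, start=(90, 10))
print(f"steps taken: {len(path)}, final position: {path[-1]}")
```

Running many such trials, from random starting points and across many frames of a measured odor movie, is what lets you characterize statistically how well a given navigation rule performs.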

Another thing we do is create virtual-reality environments for animals. We take our odor landscapes and build systems where, instead of a set of goggles projecting information into the animal’s eyes, we project odors, using olfactometers—little devices that puff odors into the right or left nostril of the animal. The amount of odor delivered to each side of the animal, and the pattern of that delivery over time, are driven by the virtual system created from our data set. As the animal moves, that stimulus changes according to the data.
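A minimal sketch of that closed loop, assuming the odor movie is stored as a 3D array and the olfactometer exposes some command for setting left and right flow rates; the hardware call shown is hypothetical, invented here for illustration.

```python
import numpy as np

def nostril_stimuli(odor_movie, frame, pos, offset=2):
    """Sample the measured odor field at the animal's two nostrils.

    odor_movie : (n_frames, height, width) concentration data
    frame      : index of the current time step
    pos        : (row, col) of the animal's head within the data set
    offset     : pixels between the midline and each nostril
    """
    r, c = pos
    width = odor_movie.shape[2]
    left = odor_movie[frame, r, max(c - offset, 0)]
    right = odor_movie[frame, r, min(c + offset, width - 1)]
    return left, right

# Closed loop: as the animal's tracked position changes, so does the stimulus
rng = np.random.default_rng(1)
odor_movie = rng.random((300, 128, 128))  # stand-in for a measured odor movie
pos = (64, 10)
for t in range(odor_movie.shape[0]):
    left, right = nostril_stimuli(odor_movie, t, pos)
    # olfactometer.set_flows(left, right)  # hypothetical hardware command
    pos = (pos[0], min(pos[1] + 1, 127))   # animal walking across the landscape
```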

Wow. What can you learn from this setup?

We have a number of techniques and technologies that allow us to look into an animal’s brain and measure the specific neuron-by-neuron responses to this stimulus. We can’t do that as easily if the animal is walking around in a real environment, but we can do it much more easily in this virtual-reality setup. The virtual-reality system allows us to make the animal think and behave like it’s in a real environment, and we can measure the response. By doing that, we can start building up an actual mechanistic model of how the brain is responding to this olfactory stimulus. Ultimately, the goal is to build a mechanistic model of that portion of the brain.

In what ways might this research help society at large?

We’re trying to understand brain function, which has enormous implications for public health. It could lead to advances in treating neurological disorders like Alzheimer’s or epilepsy. It could also lead to breakthroughs in artificial intelligence. If you understand how the brain figures things out, you can use that as a model for developing intelligent AI systems.

There are also applications that have nothing to do with brain function. There are certain things that we rely on animals for—things that involve detecting chemical signatures, like sniffing for explosives in an airport, searching for contraband, or finding people who’ve been buried in an avalanche. All of these things are done using trained animals, instead of robots. If you want to build a drone to map out the size of all the lakes in Minnesota, for instance, it’s a relatively trivial thing, based on the technologies we have. We have really good cameras, which are artificial eyes, and really good technology for moving the robot around.

What we don’t have is a really good analogue for that with smell. We have artificial noses that can detect and identify olfactory stimuli and measure their concentrations. But they can’t perform the next step, which is locating the source of the odor. So the work that we’re doing could lead to breakthroughs in what we call autonomous platforms, which are basically robots that could do many of these tasks that I outlined—many of which are dangerous. If you’re looking for chemical weapons, or unexploded landmines, or pollution leaks at the bottom of a very deep lake—all those things are either very time-consuming, or unsafe. If you could do all that with robots it would be much more efficient.

Another good example: One of the single biggest contributors to greenhouse-gas emissions is methane leaking from natural gas pipelines. I work with colleagues who are interested in developing technology to fly drones that can “smell” the methane, which is the easier part, and then determine where it’s coming from, which is the harder part. If we can understand how to program robots to locate the source of these leaks, we could fix them more easily.