The grounds of the Natural History Museum (NHM) in London have undergone a complete transformation in recent years, designed to ensure visitors are immersed in the wonders of the natural world from the moment they enter the site.
Visitors entering the grounds via the London Underground tunnels, which also link the NHM to the other museums and local attractions dotted along Exhibition Road, now do so along a walkway planted with ferns and cycads and populated by dinosaur models charting the Earth’s evolutionary history.
To the left of the building, just a stone’s throw from South Kensington, is the NHM’s Nature Discovery Garden, which opened to the public in the summer of 2024.
The space is teeming with flora and fauna, providing a “living laboratory” for the museum’s 400 scientists, who monitor its biodiversity using a network of sensors dotted around the site.
“The ambition with the museum’s garden sensors is to create this ‘always on’ capability in the open air to collect and use urban environmental data,” says Rachel Wiles, technology manager at the NHM. “At present, we have a corpus of around 58,000 visual observations from the museum’s garden, but these sensors will take things beyond that.”
The sensors will collect environmental data, such as tracking the temperature and moisture of the garden’s soil and waterways, and record extremely detailed acoustic data.
“You could hear birdsong in the garden, but the sensors are sensitive enough to pick up the beat of an insect’s wing and things like that,” adds Wiles. “Our goal is to combine all the sensor data we will be receiving with the huge corpus of visual observation data we already have to create this research platform that our scientists can access and use.”
Connecting the dots
The Nature Discovery Garden’s data is collected from a variety of sensors, including 25 Raspberry Pi devices, which are linked together by a complex data transfer network.
“There are sensors and there are devices, and the device boxes can have sensors in them, which is also where the Raspberry Pi and all the computing power for the installation sits,” says Ed Baker, an acoustic biology researcher at the NHM. “In each of the Raspberry Pi boxes, there is some form of audio sensor, and there can be a number of other environmental sensors [linked to] them.”
As an example, Baker cites one of the devices used to record audio from the pond, which is also linked to a series of thermometers placed at different depths to assess how differences in water temperature can affect what grows or lives there. The acoustic sensors are also fitted with several microphones, so sound can be captured from different directions.
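The article does not specify which thermometers or wiring the garden devices use, but a minimal sketch of the pattern being described, assuming waterproof DS18B20 probes on a Raspberry Pi’s 1-Wire bus, might look like this:

```python
# Minimal sketch of a Raspberry Pi polling waterproof DS18B20 thermometers
# placed at different pond depths. The sensor model, probe IDs and depth
# labels are assumptions for illustration; the article does not say which
# hardware the NHM devices actually use.
import glob
import time

# Each DS18B20 probe appears under the 1-Wire bus, e.g.
# /sys/bus/w1/devices/28-0316a279d4ff/w1_slave (requires the w1-gpio and
# w1-therm kernel modules to be enabled).
SENSOR_GLOB = "/sys/bus/w1/devices/28-*/w1_slave"

# Hypothetical mapping of probe IDs to pond depths in centimetres.
DEPTH_BY_ID = {"28-0316a279d4ff": 10, "28-0316a27a81ff": 50}


def read_celsius(path: str) -> float | None:
    """Parse a w1_slave file; the second line ends with 't=<millidegrees>'."""
    with open(path) as f:
        lines = f.read().splitlines()
    if len(lines) < 2 or not lines[0].strip().endswith("YES"):
        return None  # CRC check failed, skip this reading
    return int(lines[1].split("t=")[-1]) / 1000.0


while True:
    for path in glob.glob(SENSOR_GLOB):
        probe_id = path.split("/")[-2]
        temp = read_celsius(path)
        if temp is not None:
            depth = DEPTH_BY_ID.get(probe_id, "unknown")
            print(f"{time.time():.0f} depth={depth}cm temp={temp:.2f}C")
    time.sleep(60)  # poll once a minute
```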
To help Baker and his team determine the source of the sounds being picked up, they use machine learning models, including the Merlin bird identification model, he says.
“That has various lists of [bird] species, and there is one that can tell the difference between human voices, cars and everything else going on in the environment,” he says. “Most of the time, people record in ways that avoid noise, but [we want to understand] how birds interact with the noise of an urban environment, so we want to keep the noise. That is very different from working in a nice, pristine forest or an acoustic laboratory, but we want to break down the soundscape and then derive some meaning from it.”
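The piece does not describe how these models are wired together in practice; as a rough, hypothetical illustration of the “keep the noise” approach, each clip could be tagged with both bird detections and ambient noise classes, with the detector functions below standing in for whichever classifiers the team actually runs:

```python
# Illustrative sketch only: annotate each recording with bird detections and
# ambient noise classes, keeping the noise rather than filtering it out.
# detect_birds() and detect_noise() are hypothetical stand-ins for whatever
# models the NHM team actually uses (e.g. Merlin's species lists); the article
# does not describe the real interface.
from dataclasses import dataclass, field


@dataclass
class ClipAnnotation:
    clip_path: str
    birds: list[str] = field(default_factory=list)   # species heard in the clip
    noise: list[str] = field(default_factory=list)   # e.g. "traffic", "speech"


def detect_birds(clip_path: str) -> list[str]:
    # Placeholder: a real implementation would run a bird species classifier.
    return []


def detect_noise(clip_path: str) -> list[str]:
    # Placeholder: a real implementation would label anthropogenic sounds.
    return []


def annotate(clip_path: str) -> ClipAnnotation:
    # Run both classifiers over the same clip, so researchers can later ask
    # how bird activity co-occurs with urban noise instead of discarding it.
    return ClipAnnotation(
        clip_path=clip_path,
        birds=detect_birds(clip_path),
        noise=detect_noise(clip_path),
    )
```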
At present, only modest amounts of the Raspberry Pis’ computing power are being used, but the devices have the potential to deliver edge computing-style benefits to help process the data collected by the sensors in the future.
“The gardens were extensively redeveloped for the first time since 1881 before they reopened last summer, and it will probably be a long time before we get the chance to change things,” says Baker. “So, for now, we have erred on the side of extra computational power in the gardens, and in the infrastructure that underpins it all, so we have some level of future-proofing.
“Where we are now, data transfer back [to where the data is processed] is really easy, so doing that right now on the edge isn’t important. As we expand out to some of our other sites, where we can’t dig up their gardens, we’ll revisit that.”
Sensor install and deployment
While the Nature Discovery Garden has been open to the public for more than a year, the sensors themselves were switched on in mid-September 2025, after a period of sensor installation and testing.
“There have been some challenges with the sensor install, as it comes to managing timelines across the estate,” says Wiles.
For example, the terracotta brickwork on the exterior of the museum has been under restoration for some time, and the sensor install project has had to fit in with the timelines for that work, she says. The data collected by the sensors will be fed into an Amazon Web Services (AWS) product stack known as the Data Ecosystem Platform, which Wiles is the product lead for, having joined the project eight months ago.

She describes her role as being the “bridge between the engineering and science teams”, as it is her job to ensure the huge amounts of data being collected are processed efficiently.
“We had a testing week of the sensors recently,” she says, in response to a question about the scale of the data being collected for the project, “And we’ve had about 36,000 recordings per day over the whole sensor network, which are all fed into the Data Ecosystem. We expect that in just audio data alone, we’re going to generate around 20 terabytes of data a year, so there are big considerations to take in terms of the tech we use to process that and feed it into the system.”
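The article does not say how recordings are physically moved into the platform; as a hedged sketch of one plausible approach, assuming the audio lands in an S3 bucket (the bucket name and key layout below are invented for illustration), a daily upload step could look like this:

```python
# Rough sketch of pushing a day's sensor recordings into S3 for the Data
# Ecosystem Platform to pick up. The bucket name, key scheme and metadata
# fields are placeholders; the article only says the data feeds into an AWS
# stack, not how it is keyed or organised.
import pathlib
import boto3

s3 = boto3.client("s3")
BUCKET = "nhm-garden-sensor-audio"  # hypothetical bucket name


def upload_days_recordings(local_dir: str, device_id: str, date: str) -> None:
    """Upload every WAV file recorded by one device on one day."""
    for wav in pathlib.Path(local_dir).glob("*.wav"):
        key = f"audio/{device_id}/{date}/{wav.name}"  # partition by device and date
        s3.upload_file(
            str(wav),
            BUCKET,
            key,
            ExtraArgs={"Metadata": {"device-id": device_id, "recorded-on": date}},
        )


# Example: upload_days_recordings("/data/recordings/2025-09-18", "pond-01", "2025-09-18")
```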
As well as information gleaned from the garden’s sensors, there is also data collected during the museum’s BioBlitz events, whereby people are invited to record and share details of the flora and fauna they have spotted in the gardens during a defined period.
“Most of the [BioBlitz] data comes into the Data Ecosystem through an ingestion pipeline, and, from the sensor point of view, what we will be doing is loading that data into the ecosystem so we can check and track that everything is working as expected.”
Working through problems
However, with the sensors now switched on, the hope is that data will be loaded into the Data Ecosystem with much greater regularity.
“We are moving into the automation phase to have this ‘always on’ element and live insights into what the data looks like,” says Wiles. “At the moment, it may be that we only get [the data] once a day, but we want that continuous stream of data coming in as soon as we have confirmed this first set of sensors is working as expected.”
There are also “lots of technical issues” that need to be worked through to ensure the information entering the Data Ecosystem platform is consistent, given that “all this data is coming into one place from many, many sources”, says Wiles.
“In the natural world, we have around two million described species, but we have around 20 million different ways of naming those two million species, so we have had to build a taxonomic equivalence engine, which is an internal process we use to make sure the names in different datasets are equivalent.
“And that means our researchers have a very standardised, accurate and reliable way of making sure they see all the records for a particular taxon, which will really help to combine this data, create connections and accelerate research.”
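The NHM’s actual taxonomic equivalence engine is not described in the article, but the underlying idea can be sketched as a lookup from the many name variants onto a single accepted name; the example entries below are illustrative only:

```python
# Minimal sketch of the idea behind a taxonomic equivalence engine: map the
# many synonyms and spelling variants found in incoming datasets onto one
# canonical accepted name, so records about the same taxon can be combined.
# The mapping entries are illustrative; the NHM's real engine and its data
# sources are not described in the article.

# Synonym -> canonical accepted name (normalised to lower case for lookup).
SYNONYMS = {
    "parus caeruleus": "Cyanistes caeruleus",    # older name for the blue tit
    "cyanistes caeruleus": "Cyanistes caeruleus",
    "blue tit": "Cyanistes caeruleus",           # common-name variant
}


def canonical_name(raw_name: str) -> str | None:
    """Return the accepted name for a raw label, or None if it is unknown."""
    return SYNONYMS.get(raw_name.strip().lower())


def merge_records(records: list[dict]) -> dict[str, list[dict]]:
    """Group observation records from mixed sources under canonical names."""
    grouped: dict[str, list[dict]] = {}
    for rec in records:
        name = canonical_name(rec["species"])
        if name is not None:
            grouped.setdefault(name, []).append(rec)
    return grouped
```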
Digging into the Data Ecosystem
As previously mentioned, the Data Ecosystem Platform is built entirely on AWS technologies, including Amazon DocumentDB and Amazon S3 for data storage, as well as AWS’s serverless integration offerings for ingesting the sensor data.
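How the serverless ingestion is actually wired up is not detailed in the piece; a minimal sketch of the pattern, assuming an AWS Lambda-style handler writing readings into DocumentDB via its MongoDB-compatible interface (the event shape, connection string and collection names are assumptions), might look like this:

```python
# Hedged sketch of a serverless ingestion step: a Lambda-style handler that
# writes one sensor reading into Amazon DocumentDB, which speaks the MongoDB
# wire protocol, so pymongo works. The payload shape, connection string and
# database/collection names are assumptions for illustration only.
import json
import os

from pymongo import MongoClient

# DocumentDB connection string supplied via environment configuration.
client = MongoClient(os.environ["DOCDB_URI"], tls=True)
readings = client["garden"]["sensor_readings"]  # hypothetical db/collection


def lambda_handler(event, context):
    """Persist a single environmental reading pushed by a garden device."""
    body = json.loads(event["body"])  # assumed JSON payload from the device
    doc = {
        "device_id": body["device_id"],
        "sensor": body["sensor"],        # e.g. "soil_moisture", "pond_temp_50cm"
        "value": body["value"],
        "recorded_at": body["recorded_at"],
    }
    readings.insert_one(doc)
    return {"statusCode": 201, "body": json.dumps({"status": "stored"})}
```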
“We are currently using Amazon QuickSight [to assist with our] visual observations [data], which is fantastic because you can use Amazon Q to pivot and interrogate that data to see new perspectives you haven’t necessarily seen before, and do things such as visualising it on a map or as a graph and unlocking new insights,” says Wiles.
With the garden sensors now switched on, additional data will be flowing into the platform, which the NHM also plans to query using Amazon Q generative artificial intelligence (AI), and Amazon SageMaker will have a role to play too. The latter technology is pitched by AWS as combining machine learning with analytics capabilities to help users make sense of their data.
“We are aiming to switch on SageMaker as our main analysis tool towards the end of this year,” says Wiles. “And once that is done, we will be able to give our research teams access to all the data sitting in the underlying Data Ecosystem, so they can carry out their research within SageMaker.”
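The article does not show what that researcher access will look like in practice; as a hedged illustration, a notebook running in SageMaker could pull an observations extract straight out of the platform’s S3 storage into pandas (the bucket, key and column names below are placeholders):

```python
# Hedged sketch: load one observations extract from the Data Ecosystem's S3
# storage into a pandas DataFrame from inside a SageMaker notebook. The bucket
# name, object key, file format and column names are placeholders; the article
# does not describe how the platform actually exposes data to researchers.
import boto3
import pandas as pd

s3 = boto3.client("s3")
obj = s3.get_object(
    Bucket="nhm-data-ecosystem",                        # hypothetical bucket
    Key="observations/garden/2025/observations.csv",    # hypothetical key
)

# The S3 response body is file-like, so pandas can read it directly.
observations = pd.read_csv(obj["Body"])

# Example: count records per species for a quick view of garden activity.
print(observations.groupby("species").size().sort_values(ascending=False).head(10))
```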
For Baker, however, the focus at present is on using the sensors to gather a year’s worth of data to build a baseline picture of the effect of, for example, seasonality on temperature patterns across the site.
“I want to get a baseline year [of data], because the UK is very seasonal, so we can be sure that any differences we find are not just down to seasonality,” he says. “We need solid data that leads to solid interpretation, so we can make solid policy, which means we can put in place the mitigations that make urban places like this good for everyone.”
As an example, he points to the amount of noise pollution the gardens are subjected to, owing to their location in the heart of South Kensington, and how this could be mitigated.
“There isn’t much vegetation between us and the road, and in winter the noise pollution is much louder. And you can imagine interventions to address that and boost biodiversity, for example planting trees and shrubs, but which ones? Because we want to make urban spaces more pleasant for people and give nature a bit of a fighting chance.”