
What green artificial intelligence needs

Long before the real-world effects of climate change became so abundantly obvious, the data painted a bleak picture – in painful detail – of the scale of the problem. For decades, carefully collected data on weather patterns and sea temperatures were fed into models that analysed, predicted, and explained the effects of human activities on our climate. And now that we know the alarming answer, one of the biggest questions we face in the next few decades is how data-driven approaches can be used to overcome the climate crisis.
Data and technologies like artificial intelligence (AI) are expected to play a very large role. But that will happen only if we make major changes in data management. We will need to move away from the commercial proprietary models that currently predominate in large developed economies. While the digital world might seem like a climate-friendly world (it is better to Zoom to work than to drive there), digital and Internet activity already accounts for around 3.7% of total greenhouse-gas (GHG) emissions, which is about the same as air travel. In the United States, data centres account for around 2% of total electricity use.
The figures for AI are much worse. According to one estimate, the process of training a machine-learning algorithm emits a staggering 626,000 lb (284,000 kg) of carbon dioxide – five times the lifetime emissions of the average car, and 60 times more than a transatlantic flight. With the rapid growth of AI, these emissions are expected to rise sharply. And blockchain, the technology behind Bitcoin, is perhaps the worst offender of all. On its own, Bitcoin mining (the computing process used to verify transactions) leaves a carbon footprint roughly equivalent to that of New Zealand.
Fortunately, there are also many ways that AI can be used to cut CO2 emissions, with the biggest opportunities in buildings, electricity, transport, and farming. The electricity sector, which accounts for around one-third of GHG emissions, has advanced the furthest. The relatively small cohort of big companies that dominate the sector have recognised that AI is particularly useful for optimising electricity grids, which have complex inputs – including the intermittent contribution of renewables like wind power – and complex usage patterns. Similarly, one of Google DeepMind’s AI projects aims to improve the prediction of wind patterns and thus the usability of wind power, enabling “optimal hourly delivery commitments to the power grid a full day in advance.”
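To make the grid example concrete, here is a minimal, purely illustrative sketch of day-ahead wind-power forecasting – not DeepMind’s system. The data are synthetic and the quadratic relationship between wind speed and output is an assumption made only for the sake of the example:

```python
# Illustrative only: a toy day-ahead wind-power forecast, not DeepMind's system.
# All data here are synthetic; a real forecaster would use numerical weather
# predictions, turbine telemetry, and a far richer model.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic history: hourly wind speed (m/s) and the power (MW) a farm produced.
hours = 24 * 365
wind_speed = rng.gamma(shape=2.0, scale=4.0, size=hours)
power = np.clip(0.5 * wind_speed**2, 0, 100) + rng.normal(0, 3, size=hours)

# Fit a simple least-squares model: power as a function of speed and speed squared.
features = np.column_stack([np.ones(hours), wind_speed, wind_speed**2])
coeffs, *_ = np.linalg.lstsq(features, power, rcond=None)

# "Tomorrow's" forecast wind speeds (here invented) become hourly commitments.
forecast_speed = rng.gamma(shape=2.0, scale=4.0, size=24)
forecast_features = np.column_stack([np.ones(24), forecast_speed, forecast_speed**2])
hourly_commitment_mw = np.clip(forecast_features @ coeffs, 0, 100)

for hour, mw in enumerate(hourly_commitment_mw):
    print(f"hour {hour:02d}: commit {mw:5.1f} MW")
```

However it is implemented in practice, the pattern is the same: learn from historical weather and output data, forecast tomorrow’s conditions, and turn that forecast into firm hourly commitments to the grid.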
Using similar techniques, AI can also help to anticipate vehicle traffic flows or bring greater precision to agricultural management, such as by predicting weather patterns or pest infestations.
But Big Tech itself has been slow to engage seriously with the climate crisis. For example, Apple, under pressure to keep delivering new generations of iPhones and iPads, used to be notoriously uninterested in environmental issues, even though it – like other hardware firms – contributes heavily to the problem of e-waste. Facebook, too, was long silent on the issue, before creating an online Climate Science Information Center late last year. And until the launch of the $10bn Bezos Earth Fund in 2020, Amazon and its leadership were also missing in action. These recent developments are welcome, but what took so long?
Big Tech’s belated response reflects the deeper problem with using AI to help the world get to net-zero emissions. There is a wealth of data – the fuel that powers all AI systems – about what is happening in energy grids, buildings, and transportation systems, but it is almost all proprietary and jealously guarded within companies. To make the most of this critical resource – such as by training new generations of AI – these data sets will need to be opened up, standardised, and shared.
Work on this is already underway. The C40 Knowledge Hub offers an interactive dashboard to track global emissions; NGOs like Carbon Tracker use satellite data to map coal emissions; and the Icebreaker One project aims to help investors track the full carbon impact of their decisions. But these initiatives are still small-scale, fragmented, and limited by the data that are available.
Freeing up much more data ultimately will require an act of political will. With local or regional “data commons,” AIs could be commissioned to help whole cities or countries cut their emissions. As a widely circulated 2019 paper by David Rolnick of the University of Pennsylvania and 21 other machine-learning experts demonstrates, there is no shortage of ideas for how this technology can be brought to bear.
But that brings us to a second major challenge: Who will own or govern these data and algorithms? Right now, no one has a good, complete answer. Over the next decade, we will need to devise new and different kinds of data trusts to curate and share data in a variety of contexts.
For example, in sectors like transport and energy, public-private partnerships (for instance, to gather “smart-meter” data) are probably the best approach, whereas in areas like research, purely public bodies will be more appropriate. The lack of such institutions is one reason why so many “smart-city” projects fail. Whether it is Google’s Sidewalk Labs in Toronto or Replica in Portland, such projects have been unable to persuade the public that they are trustworthy.
We will also need new rules of the road. One option is to make data sharing a default condition for securing an operating licence. Private entities that provide electricity, oversee 5G networks, use city streets (such as ride-hailing companies), or seek local planning permission would be required to provide relevant data in a suitably standardised, anonymised, and machine-readable form.
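As a purely illustrative example of what “suitably standardised, anonymised, and machine-readable” could mean, the sketch below pseudonymises a smart-meter reading before it is shared. The field names and the salted-hash scheme are assumptions for the sake of illustration, not a published standard:

```python
# Illustrative sketch: one way a licensed operator might publish smart-meter
# readings in a standardised, anonymised, machine-readable form.
# The schema and the salted-hash pseudonymisation are assumptions, not a standard.
import hashlib
import json

SALT = "operator-secret-salt"  # kept private by the data holder


def anonymise_reading(meter_id: str, timestamp: str, kwh: float, postcode: str) -> dict:
    """Replace the meter ID with a salted hash and coarsen the location."""
    pseudonym = hashlib.sha256((SALT + meter_id).encode()).hexdigest()[:16]
    return {
        "meter_pseudonym": pseudonym,   # stable, but not reversible without the salt
        "timestamp_utc": timestamp,     # ISO 8601, start of the metering interval
        "consumption_kwh": round(kwh, 2),
        "area": postcode.split()[0],    # outward code only, to limit re-identification
    }


record = anonymise_reading("METER-0042", "2021-03-01T17:30:00Z", 1.237, "SW1A 1AA")
print(json.dumps(record, indent=2))
```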
These are just a few of the structural changes that are needed to get the tech sector on the right side of the fight against climate change. The failure to mobilise the power of AI reflects both the dominance of data-harvesting business models and a deep imbalance in our public institutional structures. The European Union, for example, has major financial agencies like the European Investment Bank but no comparable institutions that specialise in orchestrating the flow of data and knowledge. We have the International Monetary Fund and the World Bank, but no equivalent World Data Fund.
This problem is not insoluble. But first, it must be acknowledged and taken seriously. Perhaps then a tiny fraction of the massive financing being channelled into green investments will be directed toward funding the basic data and knowledge plumbing that we so urgently need. – Project Syndicate

• Geoff Mulgan, a former chief executive of NESTA, is Professor of Collective Intelligence, Public Policy and Social Innovation at University College London and the author of Big Mind: How Collective Intelligence Can Change Our World.
