MIT Technology Review https://www.technologyreview.com

Making AI operational in constrained public sector environments https://www.technologyreview.com/2026/04/16/1135216/making-ai-operational-in-constrained-public-sector-environments/ Thu, 16 Apr 2026 13:00:00 +0000 https://www.technologyreview.com/?p=1135216 The AI boom has swept across industries, and public sector organizations are under pressure to accelerate adoption. At the same time, government institutions face distinct constraints around security, governance, and operations that set them apart from their business counterparts. For this reason, purpose-built small language models (SLMs) offer a promising path to operationalize AI in these environments.

A Capgemini study found that 79 percent of public sector executives globally are concerned about AI’s data security, an understandable figure given the heightened sensitivity of government data and the legal obligations surrounding its use. As Han Xiao, vice president of AI at Elastic, says, “Government agencies must be very restricted about what kind of data they send to the network. This sets a lot of boundaries on how they think about and manage their data.”

The fundamental need for control over sensitive information is one of many factors complicating AI deployment, particularly when compared against the private sector’s standard operational assumptions.

Unique operational challenges

When private-sector entities expand AI, they typically assume certain conditions will be in place, including continuous connectivity to the cloud, reliance on centralized infrastructure, acceptance of incomplete model transparency, and limited restrictions on data movement. For many state institutions, however, accepting these conditions could be anything from dangerous to impossible. 

Government agencies must ensure that their data stays under their control, that information can be checked and verified, and that operational disruptions are kept to an absolute minimum. At the same time, they often have to run their systems in environments where internet connectivity is limited, unreliable, or unavailable. These complexities prevent many promising public sector AI pilots from moving beyond experimentation. “Many people undervalue the operating challenge of AI,” Xiao says. “The public sector needs AI to perform reliably on all kinds of data, and then to be able to grow without breaking. Continuity of operations is often underestimated.” An Elastic survey of public sector leaders found that 65 percent struggle to use data continuously in real time and at scale. 

Infrastructure constraints compound the problem. Government organizations may also struggle to obtain the graphics processing units (GPUs) used to train and access complex AI models. As Xiao points out, “Government doesn’t often purchase GPUs, unlike the private sector—they’re not used to managing GPU infrastructure. So accessing a GPU to run the model is a bottleneck for much of the public sector.” 

A smaller, more practical model

The many nonnegotiable requirements in the public sector make large language models (LLMs) untenable. But SLMs can be housed locally, offering greater security and control. SLMs are specialized AI models that typically use billions rather than hundreds of billions of parameters, making them far less computationally demanding than the largest LLMs.

The public sector does not need to build ever-larger models housed in offsite, centralized locations. An empirical study found that SLMs performed as well as or better than LLMs. SLMs allow sensitive information to be used effectively and efficiently while avoiding the operational complexity of maintaining large models. Xiao puts it this way: “It is easy to use ChatGPT to do proofreading. It’s very difficult to run your own large language models just as smoothly in an environment with no network access.”

SLMs are purpose-built for the needs of the department or agency that will use them. The data is stored securely outside the model, and is only accessed when queried. Carefully engineered prompts ensure that only the most relevant information is retrieved, providing more accurate responses. Using methods such as smart retrieval, vector search, and verifiable source grounding, AI systems can be built that cater to public sector needs. 
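
To make this retrieval pattern concrete, here is a minimal Python sketch of grounded answering with a locally hosted model. The embed and generate functions are placeholders for any on-premises embedding model and SLM (they are illustrative assumptions, not a specific vendor's API). The shape is what matters: documents stay in local storage, a query retrieves only the most relevant passages, and the prompt instructs the model to answer strictly from those cited sources.

```python
import hashlib
import numpy as np

def embed(text: str, dim: int = 256) -> np.ndarray:
    """Toy stand-in for a locally hosted embedding model (hashed bag of words)."""
    v = np.zeros(dim)
    for token in text.lower().split():
        v[int(hashlib.md5(token.encode()).hexdigest(), 16) % dim] += 1.0
    norm = np.linalg.norm(v)
    return v / norm if norm else v

def generate(prompt: str) -> str:
    """Placeholder for a call to a locally hosted small language model."""
    return f"[SLM answer, constrained by a {len(prompt)}-character grounded prompt]"

# Documents never leave local storage; only vectors and IDs live in the index.
documents = {
    "doc-001": "Procurement rules for IT contracts above the national threshold ...",
    "doc-002": "Retention policy for citizen records held by the registry office ...",
}
index = {doc_id: embed(text) for doc_id, text in documents.items()}

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return IDs of the k most relevant documents (cosine similarity on unit vectors)."""
    q = embed(query)
    scores = {doc_id: float(q @ v) for doc_id, v in index.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]

def answer(query: str) -> str:
    """Ground the model: it may only use the retrieved, citable sources."""
    hits = retrieve(query)
    context = "\n".join(f"[{doc_id}] {documents[doc_id]}" for doc_id in hits)
    prompt = (
        "Answer using ONLY the sources below and cite their IDs. "
        "If the sources do not contain the answer, say so.\n\n"
        f"{context}\n\nQuestion: {query}\nAnswer:"
    )
    return generate(prompt)

print(answer("What is the retention policy for citizen records?"))
```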

Thus, the next phase of AI adoption in the public sector may be to bring the AI tool to the data, rather than sending the data out into the cloud. Gartner predicts that by 2027, small, specialized AI models will be used three times more than LLMs.

Superior search capabilities

“When people in the public sector hear AI, they probably think about ChatGPT. But we can be much more ambitious,” says Xiao. “AI can revolutionize how the government searches and manages the large amounts of data they have.”

Looking beyond chatbots reveals one of AI’s most immediate opportunities: dramatically improved search. Like many organizations, the public sector has mountains of unstructured data—including technical reports, procurement documents, minutes, and invoices. Today’s AI, however, can deliver results sourced from mixed media, like readable PDFs, scans, images, spreadsheets, and recordings, and in multiple languages. All of this can be indexed by SLM-powered systems to provide tailored responses and to draft complex texts in any language, while ensuring outputs are legally compliant. “The public sector has a lot of data, and they don’t always know how to use this data. They don’t know what the possibilities are,” says Xiao.
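
As a simplified sketch of what indexing that kind of mixed-media, multilingual material can look like, the Python snippet below normalizes extracted records into one searchable structure that carries its provenance. The schema, field names, and sample documents are illustrative assumptions, not a particular product's data model.

```python
from dataclasses import dataclass

@dataclass
class Record:
    doc_id: str     # stable identifier, used for citation and audit
    text: str       # extracted text: PDF parse, OCR of a scan, transcript, spreadsheet cells
    media: str      # "pdf", "scan", "spreadsheet", or "recording"
    language: str   # ISO code, e.g. "en", "fr"
    source: str     # originating system, so answers stay traceable to a verified source

corpus = [
    Record("rep-17", "Annual water quality report for the northern district ...", "pdf", "en", "environment-agency"),
    Record("min-03", "Procès-verbal du conseil municipal du 12 mars ...", "scan", "fr", "records-office"),
    Record("inv-88", "Invoice 2026-0088 issued to the facilities department ...", "spreadsheet", "en", "finance-system"),
]

def search(query: str, language: str | None = None, media: str | None = None):
    """Keyword search with metadata filters; a vector index would slot in here instead."""
    hits = []
    for r in corpus:
        if language and r.language != language:
            continue
        if media and r.media != media:
            continue
        if query.lower() in r.text.lower():
            hits.append((r.doc_id, r.source))
    return hits

print(search("report", language="en"))  # [('rep-17', 'environment-agency')]
```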

Even more powerful, AI can help government employees interpret the data they access. “Today’s AI can provide you with a completely new view of how to harness that data,” says Xiao. A well-trained SLM can interpret legal norms, extract insights from public consultations, support data-driven executive decision-making, and improve public access to services and administrative information. This can contribute to dramatic improvements in how the public sector conducts its operations.

The small-language promise

Focusing on SLMs shifts the conversation from how comprehensive the model can be to how efficient it is. LLMs incur significant performance and computational costs and require specialized hardware that many public entities cannot afford. Despite requiring some capital expenses, SLMs are less resource-intensive than LLMs, so they tend to be cheaper and reduce environmental impact. 

Public sector agencies often face stringent audit requirements, and SLM algorithms can be documented and certified as transparent. Some countries, particularly in Europe, also have privacy regulations such as GDPR that SLMs can be designed to meet.

Tailored training data produces more targeted results, reducing errors, bias, and hallucinations that AI is prone to. As Xiao puts it, “Large language models generate text based on what they were trained on, so there is a cut-off date when they were trained. If you ask about anything after that, it will hallucinate. We can solve this by forcing the model to work from verified sources.”

Risks are also minimized by keeping data on local servers, or even on a specific device. This isn’t about isolation but about strategic autonomy to enable trust, resilience, and relevance.

By prioritizing task-specific models designed for environments that process data locally, and by continuously monitoring performance and impact, public sector organizations can build lasting AI capabilities that support real-world decisions. “Do not start with a chatbot; start with search,” Xiao advises. “Much of what we think of as AI intelligence is really about finding the right information.”

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

Treating enterprise AI as an operating layer https://www.technologyreview.com/2026/04/16/1135554/treating-enterprise-ai-as-an-operating-layer/ Thu, 16 Apr 2026 13:00:00 +0000 https://www.technologyreview.com/?p=1135554 There’s a fault line running through enterprise AI, and it’s not the one getting the most attention. The public conversation still tracks foundation models and benchmarks—GPT versus Gemini, reasoning scores, and marginal capability gains. But in practice, the more durable advantage is structural: who owns the operating layer where intelligence is applied, governed, and improved. One model treats AI as an on-demand utility; the other embeds it as an operating layer—the combination of operational software, data capture, feedback loops, and governance that sits between models and real work—that compounds with use.

Model providers like OpenAI and Anthropic sell intelligence as a service: you have a problem, you call an API, you get an answer. That intelligence is general-purpose, largely stateless, and only loosely connected to the day-to-day operations where decisions are made. It’s highly capable and increasingly interchangeable. The distinction that matters is whether intelligence resets on every prompt or accumulates over time.

Incumbent organizations, by contrast, can treat AI as an operating layer: instrumentation across operations, feedback loops from human decisions, and governance that turns individual tasks into reusable policy. In that setup, every exception, correction, and approval becomes a chance to learn—and intelligence can improve as the platform absorbs more of the organization’s work. The organizations most likely to shape the enterprise AI era are those that can embed intelligence directly into operational platforms and instrument those platforms so work generates usable signals.

The prevailing narrative says nimble startups will out-innovate incumbents by building AI-native from scratch. If AI is primarily a model problem, that story holds. But in many enterprise domains, AI is a systems problem—integrations, permissions, evaluation, and change management—where advantage accrues to whoever already sits inside high-volume, high-stakes operations and converts that position into learning and automation.

The inversion: AI executes, humans adjudicate

Traditional services organizations are built on a simple architecture: humans use software to do expert work. Operators log into systems, navigate operations, make decisions, and process cases. Technology is the medium. Human judgment is the product.

An AI-native platform inverts this. It ingests a problem, applies accumulated domain knowledge, executes autonomously what it can with high confidence, and routes targeted sub-tasks to human experts when the situation demands judgment that the system can’t yet reliably provide.

But inverting human-AI interaction isn’t just a UI redesign—it requires raw material. It’s only possible when the platform is built on a foundation of domain expertise, behavioral data, and operational knowledge accumulated over years.

The three compounding assets incumbents already own

AI-native startups begin with a clean architectural slate and can move quickly. What they can’t easily manufacture is the raw material that makes domain AI defensible at scale:

  • Proprietary operational data
  • A large workforce of domain experts whose day-to-day decisions generate training signals
  • Accumulated tacit knowledge about how complex work actually gets done

Services companies already have all three. But these ingredients aren’t moats on their own. They become an advantage only when a company can systematically convert messy operations into AI-ready signals and institutional knowledge—then feed the results back into operations so the system keeps improving.

Codifying expertise into reusable signals

In most services organizations, expertise is tacit and perishable. The best operators know things they cannot easily articulate: heuristics developed over the years, edge-case intuitions, and pattern recognition that operate below the level of conscious reasoning.

At Ensemble, the strategy for addressing this challenge is knowledge distillation: the systematic conversion of expert judgment and operational decisions into machine-readable training signals.

In health-care revenue cycle management, for example, systems can be seeded with explicit domain knowledge and then deepen their coverage through structured daily interaction with operators. In Ensemble’s implementation, the system identifies gaps, formulates targeted questions, and cross-checks answers across multiple experts to capture both consensus and edge-case nuance. It then synthesizes these inputs into a living knowledge base that reflects the situational reasoning behind expert-level performance.
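
The toy Python sketch below illustrates the general shape of such a distillation loop. It is not Ensemble's implementation; the canned answers, the majority-vote consensus rule, and the function names are assumptions made purely for illustration.

```python
from collections import Counter

knowledge_base: dict[str, str] = {}  # question -> distilled, reusable answer

def ask_expert(expert: str, question: str) -> str:
    """Placeholder for the structured daily Q&A with an operator (canned answers for the demo)."""
    canned = {
        "expert_a": "Resubmit the claim with the corrected code.",
        "expert_b": "Resubmit the claim with the corrected code.",
        "expert_c": "Escalate to the payer relations team.",
    }
    return canned[expert]

def distill(question: str, experts: list[str], min_agreement: int = 2) -> None:
    """Cross-check answers across experts; store consensus, flag disagreement for review."""
    answers = [ask_expert(e, question) for e in experts]
    (top_answer, votes), = Counter(answers).most_common(1)
    if votes >= min_agreement:
        knowledge_base[question] = top_answer  # consensus becomes a reusable rule
    else:
        knowledge_base[question] = f"EDGE CASE, needs review: {answers}"

distill("How should this class of denial be handled?", ["expert_a", "expert_b", "expert_c"])
print(knowledge_base)
```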

Turning decisions into a learning flywheel

Once a system is constrained enough to be trusted, the next question is how it gets better without waiting for annual model upgrades. Every time a skilled operator makes a decision, they generate more than a completed task. They generate a potential labeled example—context paired with an expert action (and sometimes an outcome). At scale, across thousands of operators and millions of decisions, that stream can power supervised learning, evaluation, and targeted forms of reinforcement—teaching systems to behave more like experts in real conditions.

For example, if an organization processes 50,000 cases a week and captures just three high-quality decision points per case, that’s 150,000 labeled examples every week without creating a separate data-collection program.

A more advanced human-in-the-loop design places experts inside the decision process, so systems learn not just what the right answer was, but how ambiguity gets resolved. Practically, humans intervene at branch points—selecting from AI-generated options, correcting assumptions, and redirecting operations. Each intervention becomes a high-value training signal. When the platform detects an edge case or a deviation from the expected process, it can prompt for a brief, structured rationale, capturing decision factors without requiring lengthy free-form reasoning logs.
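
A rough sketch of what one such training signal might look like follows. The field names and values are illustrative assumptions rather than a real schema; the point is that each intervention captures the context the system saw, the options it proposed, what the expert chose, and, for edge cases, a brief rationale.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DecisionSignal:
    case_id: str
    context: dict                  # what the system saw at the branch point
    ai_options: list               # the options the AI generated
    expert_choice: str             # what the human selected or corrected
    rationale: str = ""            # brief structured reason, prompted only for edge cases
    outcome: Optional[str] = None  # filled in later, enabling outcome-based evaluation

signal = DecisionSignal(
    case_id="case-48213",
    context={"payer": "Payer X", "denial_reason": "documentation", "days_outstanding": 41},
    ai_options=["appeal", "write_off", "rebill"],
    expert_choice="appeal",
    rationale="Documentation supports medical necessity.",
)

# Back-of-the-envelope volume from the example above:
cases_per_week, signals_per_case = 50_000, 3
print(cases_per_week * signals_per_case)  # 150,000 labeled examples per week
```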

Building toward expertise amplification

The goal is to permanently embed the accumulated expertise of thousands of domain experts—their knowledge, decisions, and reasoning—into an AI platform that amplifies what every operator can accomplish. Done well, this produces a quality of execution that neither humans nor AI achieve independently: higher consistency, improved throughput, and measurable operational gains. Operators can focus on more consequential work, supported by an AI that has already completed the analytical groundwork across thousands of analogous prior cases.

The broader implication for enterprise leaders is straightforward. Advantage in AI won’t be determined by access to general-purpose models alone. It will come from an organization’s ability to capture, refine, and compound what it knows (its data, decisions, and operational judgment) while building the controls required for high-stakes environments. As AI shifts from experimentation to infrastructure, the most durable edge may belong to the companies that understand the work well enough to instrument it and can turn that understanding into systems that improve with use.

This content was produced by Ensemble. It was not written by MIT Technology Review’s editorial staff.

The Download: cyberscammers’ banking bypasses, and carbon removal troubles https://www.technologyreview.com/2026/04/16/1136034/the-download-cyberscammers-banking-bypasses-microsoft-carbon-removal-troubles/ Thu, 16 Apr 2026 12:10:00 +0000 https://www.technologyreview.com/?p=1136034 This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Cyberscammers are bypassing banks’ security with illicit tools sold on Telegram 

Inside a money-laundering center in Cambodia, an employee opens a banking app on his phone. It asks for a photo linked to the account, so he uploads a picture of a 30-something Asian man. 

The app then requests a video “liveness” check. The scammer holds up a static image of a woman who doesn’t match the account. After 90 seconds, he’s in. 

The exploit relies on illicit hacking services sold on Telegram that break “Know Your Customer” (KYC) facial scans. MIT Technology Review found 22 channels and groups advertising these services. This is what we discovered

—Fiona Kelliher 

Is carbon removal in trouble? 

—Casey Crownhart 

Last week, news emerged that Microsoft was pausing carbon removal purchases. It was a bombshell—Microsoft effectively is the carbon removal market, single-handedly purchasing around 80% of all contracted carbon removal. 

The report sparked fear across the industry, raising questions about the future of carbon removal and the role of Big Tech. Read the full story

This story is from The Spark, our weekly newsletter exploring the technology that could combat the climate crisis. Sign up to receive it in your inbox every Wednesday. 

The quest to measure our relationship with nature 

—Emma Marris 

Humans have done some destructive things to the ecosystems around us. But conservationists are learning that we can also be a force for good. 

To understand how we work best with nature, a group of scientists, authors, and philosophers have developed new measurements of human-nonhuman relationships. Now, a team in the United Nations is continuing the work. Find out why—and what they hope to achieve

This story is from the next issue of our print magazine, which is all about nature. Subscribe now to read it when it lands on Wednesday, April 22.  

The must-reads 

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology. 

1 Ukraine says Russian troops have surrendered to robots  
They claim a fully automated attack captured army positions for the first time in history. (404 Media)
+ Europe’s vision for future wars is full of drones. (MIT Technology Review)
 
2 Monkeys with BCIs are navigating virtual worlds using only their thoughts 
The research could help people with paralysis. (New Scientist)  
+ But these implants still face a critical test. (MIT Technology Review)
 
3 NASA wants to put nuclear reactors on the Moon 
They could power lunar bases and extend spaceflight. (Wired $) 
+ NASA is also building a nuclear-powered spacecraft. (MIT Technology Review)

4 Plans for online age verification in the US are raising red flags 
Experts warn of compliance issues and potential data breaches. (NBC News)
+ In the EU, an age verification app is about to launch. (Reuters $) 

5 An AI chip boom just pushed Taiwan’s stock market past the UK’s 
It’s risen past $4 trillion to become the world’s seventh largest. (FT $) 
+ Future AI chips could be built on glass. (MIT Technology Review)

6 The public backlash against data centers is intensifying in the US 
Protests and litigation are blocking projects. (CNBC)
+ One potential solution? Putting them in space. (MIT Technology Review)

7 Five-minute EV charging is becoming a reality 
China’s BYD has started rolling it out. (Gizmodo)  
+ “Extended-range electric vehicles” are about to hit US streets. (Atlantic $) 

8 Stealth signals are bypassing Iran’s internet blackout  
Files hidden in satellite TV broadcasts keep information flowing. (IEEE)
 
9 Shoe brand Allbirds made a shock pivot to AI, sending stock up 700%  
No bubble to see here, folks. (CNBC)  
+ What even is the AI bubble? (MIT Technology Review)

10 The largest ever map of the universe is complete  
It captures 47 million galaxies and quasars. (Space.com)

Quote of the day 

“I like the internet as much as anybody, but we’ve got to go on an internet diet. We don’t need to pay for corporations to do their internet stuff.” 

 —Sylvia Whitt, a 78-year-old retiree based in Virginia, tells the Washington Post why they’re protesting against data centers.  

One More Thing 

a collage of hands and suggestive body shapes (ISRAEL VARGAS)

AI and the future of sex 

Some Republican lawmakers want to criminalize porn and arrest its creators. But what if porn is wholly created by an algorithm? In that case, whether it’s obscene, ethical, or safe becomes a secondary issue. The primary concern will be what it means for porn to be “real”—and what the answer demands from all of us. 

Technological advances could even remove the “messy humanity” from sex itself. The rise of AI-generated porn may be a symptom of a new synthetic sexuality, not the cause. Read the full story

—Leo Herrera 

We can still have nice things 

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line.) 

+ An animator turned his son’s drawings into epic anime characters. 
+ Hundreds of baby green sea turtles made a spectacular first journey to the ocean. 
+ You can now track rocket launches from take-off to orbit in real time. 
+ These musical mistakes prove that even the classics aren’t perfect. 

Why having “humans in the loop” in an AI war is an illusion https://www.technologyreview.com/2026/04/16/1136029/humans-in-the-loop-ai-war-illusion/ Thu, 16 Apr 2026 12:00:00 +0000 https://www.technologyreview.com/?p=1136029 The availability of artificial intelligence for use in warfare is at the center of a legal battle between Anthropic and the Pentagon. This debate has become urgent, with AI playing a bigger role than ever before in the current conflict with Iran. AI is no longer just helping humans analyze intelligence. It is now an active player—generating targets in real time, controlling and coordinating missile interceptions, and guiding lethal swarms of autonomous drones.

Most of the public conversation regarding the use of AI-driven autonomous lethal weapons centers on how much humans should remain “in the loop.” Under the Pentagon’s current guidelines, human oversight supposedly provides accountability, context, and nuance while reducing the risk of hacking.

AI systems are opaque “black boxes”

But the debate over “humans in the loop” is a comforting distraction. The immediate danger is not that machines will act without human oversight; it is that human overseers have no idea what the machines are actually “thinking.” The Pentagon’s guidelines are fundamentally flawed because they rest on the dangerous assumption that humans understand how AI systems work.

Having studied intentions in the human brain for decades and in AI systems more recently, I can attest that state-of-the-art AI systems are essentially “black boxes.” We know the inputs and outputs, but the artificial “brain” processing them remains opaque. Even their creators cannot fully interpret them or understand how they work. And when AIs do provide reasons, they are not always trustworthy.

The illusion of human oversight in autonomous systems

In the debate over human oversight, a fundamental question is going unasked: Can we understand what an AI system intends to do before it acts?

Imagine an autonomous drone tasked with destroying an enemy munitions factory. The automated command and control system determines that the optimal target is a munitions storage building. It reports a 92% probability of mission success because secondary explosions of the munitions in the building will thoroughly destroy the facility. A human operator reviews the legitimate military objective, sees the high success rate, and approves the strike.

But what the operator does not know is that the AI system’s calculation included a hidden factor: Beyond devastating the munitions factory, the secondary explosions would also severely damage a nearby children’s hospital. The emergency response would then focus on the hospital, ensuring the factory burns down. To the AI, maximizing disruption in this way meets its given objective. But to a human, the strike is potentially a war crime, violating the rules that protect civilian life.
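
A deliberately toy Python sketch can make this intention gap concrete. Nothing here models a real targeting system, and the numbers and factor weights are invented for illustration; the structural point is that the operator sees only the aggregate success probability, while the factors that produced it, including the unacceptable one, never surface.

```python
option = {
    "target": "munitions storage building",
    "p_direct_hit": 0.67,
    "secondary_explosions": 1.0,  # hidden factor the optimizer learned to exploit
    "emergency_diverted": 1.0,    # hidden factor: responders pulled to the damaged hospital
}

def mission_success_probability(opt: dict) -> float:
    """The optimizer's learned estimate; only the final number is reported to the operator."""
    boost = 0.15 * opt["secondary_explosions"] + 0.10 * opt["emergency_diverted"]
    return min(opt["p_direct_hit"] + boost, 1.0)

# What the human in the loop actually sees:
print(f"Target: {option['target']} | mission success: {mission_success_probability(option):.0%}")
# The two terms that pushed the estimate to 92% are never displayed.
```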

Keeping a human in the loop may not provide the safeguard people imagine, because the human cannot know the AI’s intention before it acts. Advanced AI systems do not simply execute instructions; they interpret them. If operators fail to define their objectives carefully enough—a highly likely scenario in high-pressure situations—the “black box” system could be doing exactly what it was told and still not acting as humans intended.

This “intention gap” between AI systems and human operators is precisely why we hesitate to deploy frontier black-box AI in civilian health care or air traffic control, and why its integration into the workplace remains fraught—yet we are rushing to deploy it on the battlefield.

To make matters worse, if one side in a conflict deploys fully autonomous weapons, which operate at machine speed and scale, the pressure to remain competitive would push the other side to rely on such weapons too. This means the use of increasingly autonomous—and opaque—AI decision-making in war is only likely to grow.

The solution: Advance the science of AI intentions

The science of AI must comprise both building highly capable AI technology and understanding how this technology works. Huge advances have been made in developing and building more capable models, driven by record investments—forecast by Gartner to grow to around $2.5 trillion in 2026 alone. In contrast, the investment in understanding how the technology works has been minuscule.

We need a massive paradigm shift. Engineers are building increasingly capable systems. But understanding how these systems work is not just an engineering problem—it requires an interdisciplinary effort. We must build the tools to characterize, measure, and intervene in the intentions of AI agents before they act. We need to map the internal pathways of the neural networks that drive these agents so that we can build a true causal understanding of their decision-making, moving beyond merely observing inputs and outputs. 

A promising way forward is to combine techniques from mechanistic interpretability (breaking neural networks down into human-understandable components) with insights, tools, and models from the neuroscience of intentions. Another idea is to develop transparent, interpretable “auditor” AIs designed to monitor the behavior and emergent goals of more capable black-box systems in real time.  

Developing a better understanding of how AI functions will enable us to rely on AI systems for mission-critical applications. It will also make it easier to build more efficient, more capable, and safer systems.

Colleagues and I are exploring how ideas from neuroscience, cognitive science, and philosophy—fields that study how intentions arise in human decision-making—might help us understand the intentions of artificial systems. We must prioritize these kinds of interdisciplinary efforts, including collaborations between academia, government, and industry.

However, we need more than just academic exploration. The tech industry—and the philanthropists funding AI alignment, which strives to encode human values and goals into these models—must direct substantial investments toward interdisciplinary interpretability research. Furthermore, as the Pentagon pursues increasingly autonomous systems, Congress must mandate rigorous testing of AI systems’ intentions, not just their performance.

Until we achieve that, human oversight over AI may be more illusion than safeguard.

Uri Maoz is a cognitive and computational neuroscientist specializing in how the brain transforms intentions into actions. A professor at Chapman University with appointments at UCLA and Caltech, he leads an interdisciplinary initiative focused on understanding and measuring intentions in artificial intelligence systems (ai-intentions.org).

The noise we make is hurting animals. Can we learn to shut up? https://www.technologyreview.com/2026/04/16/1135179/anthropogenic-noise-hurting-animals/ Thu, 16 Apr 2026 10:00:00 +0000 https://www.technologyreview.com/?p=1135179 When the covid-19 pandemic started, Jennifer Phillips thought about the songs of the sparrows.

They were easier to hear, because the world had suddenly become quieter. Car traffic plummeted as people sheltered at home and shifted to remote work. Air travel collapsed. Cities—normally filled with the honking, screeching, engine-gunning riot of transportation—became as silent as tombs.

For years, Phillips has studied how animals react to “anthropogenic noise,” or the racket created by human activity. Most animals really don’t like it, she and her colleagues have learned. Animals constantly listen to the world around them: They’re on the alert for the rustle of approaching predators, or a mating call from a member of their species. As human society has expanded—with sprawling cities, industrial mines, and roads crisscrossing the world—it has gotten noisier too, and animals have trouble hearing one another.

Phillips and her colleagues had spent time in the 2010s in San Francisco recording the sound of white-crowned sparrows in the Presidio. It’s a park that is half peaceful nature and half automobile noise, since it’s filled with thick clumps of trees and grassy fields but also has two highways that slice through it, feeding onto the Golden Gate Bridge. In past recordings, starting in the 1950s, sparrows had sung with complex and lower-pitched melodies and three major “dialects.” But by the 2010s, traffic in the Presidio had exploded, and the hubbub was so loud that the birds began to sing with faster trills—and at a higher pitch—so their fellows could hear them. The two quietest dialects were either dead or on their way to extinction.

They’re “screaming at the top of their lungs,” says Phillips. “They really can’t hear the lower frequencies when the traffic noise is present.” Urban noise can even change birds’ bodies; they get thinner and more stressed out. Their mating calls aren’t as effective, because female birds, as researchers have found, generally don’t enjoy high-pitched, high-volume shouting. (It makes them wonder if the males are unhealthy.) The noise can increase bird-on-bird conflict, because when birds can’t hear warning cries they accidentally stumble into enemy territory. Perhaps worst of all, in situations like these biodiversity takes a hit: Entire species that can’t handle urban clamor simply head out of town and never come back.

But as the sudden, eerie silence of the pandemic descended, Phillips sat at home thinking, It’s really quiet. And then she wondered: Would the Presidio birds now be able to hear each other better?

She raced over to the park and started recording. Sure enough, the park was seven decibels quieter—a huge drop. (That’s like the difference between the noise of the average home and whispering.)
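
Because the decibel scale is logarithmic, a seven-decibel drop is larger than it sounds. A quick back-of-the-envelope calculation (the home-versus-whispering comparison describes perceived loudness, but the energy arithmetic below is exact):

```python
drop_db = 7
energy_ratio = 10 ** (drop_db / 10)  # decibels express a power ratio on a log scale
print(f"A {drop_db} dB drop is roughly a {energy_ratio:.1f}x reduction in sound energy")
# -> A 7 dB drop is roughly a 5.0x reduction in sound energy
```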

And remarkably, the researchers found that the songs of the white-crowned sparrows had transformed. They were singing more quietly, with a richer range of frequencies. A bird could be heard twice as far as before. And the mating calls had gotten more sultry.

“They could sing a higher performance, basically a sexier song, but not have to scream it so loud,” Phillips says. 

It was as if time had been reversed and all the damage abruptly repaired. And it proved what Phillips and her peers have been increasingly documenting: that anthropogenic noise is the newest form of pollution we need to tackle. The noise of our relentlessly on-the-move industrial society affects all life on Earth, wildlife and humans, in ways we’re just beginning to grasp. Yet strategies such as electrification and clever urban design could help. As the Presidio showed, noise can vanish overnight—once we figure out how to shut up.

Hidden impacts

Many forms of pollution are obvious to us humans. Dumping toxic goo into lakes? Sure, that’s bad. Coal smokestacks pumping soot and carbon dioxide, plastic bags and sea nets choking whales—we now understand that these, too, are problems. Even an idea as gauzy as light pollution has penetrated the public consciousness to some extent, since it’s why city dwellers can’t see many stars, and we’ve heard it confuses migratory birds.

But noise, mostly from transportation, took longer to hit our radar. This is partly because it’s invisible; there’s no billowing smokestack, no soiled waterway. We just got used to it as it vibrated in the background.

Sparrows in San Francisco’s Presidio began to sing with faster trills—and at a higher pitch—so their fellows could hear them over the noise of nearby traffic. (GETTY IMAGES)
The black-chinned hummingbird seems to prefer noisy areas, fledging more chicks than the same species does in quieter areas. (MDF/WIKIMEDIA COMMONS)

There were a few studies in the ’70s and ’80s showing that animals were upset by our noise. But the field really began to take off in the ’00s, in part because digital technology made it easier to record long swathes of sound out in nature and analyze them. One early salvo came from the biologist Hans Slabbekoorn, who was studying doves in the city of Leiden and irritatedly noticed that he could rarely get a clean recording because of the background noise. Sometimes he’d see the doves’ throats moving as they cooed but couldn’t hear them. “If I’m having difficulty hearing them,” he thought, “what about them?”

So he and a colleague started recording ambient sound levels in different parts of Leiden. Some were quiet residential areas, which registered a soothing 42 decibels, and others were noisy intersections or areas near highways, which reached 63 decibels, about as loud as background music. Sure enough, he found that birds in the noisy areas were singing at a higher pitch.

Over the next two decades, research in the field bloomed. Noise, the scientists found, has a few common ill effects on animals. It disrupts communication, certainly. But it also generally stresses them, reducing everything from their body weight to their receptivity to mating calls. If an animal nests closer to a road, its reproduction rates can go down; eastern bluebirds, for example, produce fewer fledglings. Truly cacophonous noise—like planes taking off at a nearby airport—can cause hearing loss in birds. And animals can wind up becoming less aware of threats from predators. They’ll wander closer to danger, because they can’t hear it coming. (And sometimes they’ll do the opposite: They’ll develop a rageaholic hair-trigger temper, because they’re constantly on high alert and regard everything as a threat.)

Even in deep rural areas, where things are normally pretty quiet, highways can disrupt wildlife—the noise carries far into the fields nearby. Fraser Shilling, a biologist at the University of California, Davis, has stood up to half a mile from rural highways and recorded sound as loud as 60 decibels, which is at least 20 decibels higher than you’d typically find in the wilderness. “The motorcycles and the 18-wheelers are really the ones that project a lot of noise,” he told me. 

Above 55 decibels, many skittish animals get into a fight-or-flight panic. The prevalence of bobcats—an endangered species famously rattled by noise—“starts dropping off the cliff,” says Shilling. Above 65, “you’re really starting to exclude almost all wildlife.”

And that’s not even the upper limit of what wildlife is exposed to. There are roughly a half-million natural-gas wells around the US, and piercingly loud compressors are used to shoot water down into most of them. Up close, the compressors can kick out 95 decibels, a sound as loud as a subway train; at one Wyoming gas well the sound still registered around 48 decibels nearly a quarter-mile away.

Historically, it wasn’t always easy to prove that noise was causing whatever problems the animals were experiencing. Maybe it was other factors; maybe animal populations decline near a road because some animals are hit by vehicles?

But several clever experiments have proved that noise—and noise alone—can disrupt wildlife. One was the “phantom road” experiment by the conservation scientist Jesse Barber and his team, then at Boise State University. They went out to a quiet, uninhabited area of the Boise foothills in Idaho, far away from any roads. In this valley in the mountains, thousands of migratory birds stop on their way south each year; they’ll gorge themselves on cherry bushes, gaining weight for the next days of flying. The researchers strapped 15 pairs of speakers to Douglas fir trees, in a half-kilometer line. Then they blasted recordings of highway noise. They played the noise for four days and then turned it off for four days. Then they observed thousands of birds, capturing many to measure their body mass.

The noise truly rattled the birds. When the sound was turned on, nearly a third left the area. Those that stuck around ate less: While birds should be heavier after a day of foraging, these ones didn’t gain much. The noise seemed to have so interrupted their feeding that they weren’t packing on the weight needed for their migratory trip.

Other, similarly nifty A/B tests followed. One was led by David Luther, a biologist at George Mason University (who also worked with Phillips on the covid-19 study in San Francisco). In 2015, these researchers took 17 white-crowned sparrows at birth and raised them in a lab. To teach them their species’ songs, they played the nestlings recordings of adult sparrows singing, at low and high pitches. Six of the nestlings heard the songs without any interference; with the other half, the researchers played the sounds of city noise at the same time.

The results were stark. The lucky birds that were spared the traffic noise learned to perform the quieter, sweeter, more complex songs. But the birds that had traffic noise blasted at them learned only the higher, faster, more stressed-out songs. From the cradle, noise changed the way they communicated.

Humans hate noise too

You can’t pull the same experiment with humans, raising them in a lab to see how noise affects them. (Not ethically, anyway.) But if we could, we’d likely find the same thing. We, too, are animals—and it appears that we suffer in similar ways from anthropogenic noise, even though we’re the ones creating it.

Stacks of research in the last few decades have found that noise—most often, as with wildlife, the sound of traffic—is correlated with lousy sleep, higher blood pressure, more heart disease, and higher stress. A Danish study followed almost 25,000 nurses for years and found that an additional 10 decibels hit them hard; over a 23-year period they had an 8% higher rate of death, plus higher rates of nearly every bad thing that could happen to you: cancers, psychiatric problems, strokes. (They controlled for other malign health influences.) As you’d probably predict by now, children fare badly too. When Barcelona researchers followed almost 3,000 elementary school kids for a year, they found that those in noisier schools performed worse on assessments of working memory and ability to pay attention.

“We think of ourselves as being ‘used to it,’” says Gail Patricelli, a professor of evolution and ecology at the University of California, Davis. “We’re not as used to it as we think we are.”

It’s also true that there’s a trade-off. Many people understand that noise from cities and highways is aggravating, but we tolerate it because we get benefits along with the hassles. Cities are crammed with jobs and connections and dating opportunities; cars and trucks bring us the things we need and increase our personal mobility.

It turns out that animals make a similar calculus. Some species appear to benefit in certain ways from proximity to noise, so they move toward it. 

Clinton Francis, a biologist at California Polytechnic State University, and a team studied bird populations near noisy gas wells in rural New Mexico. Most species avoided the riot of the well pumps. But Francis was surprised to find that some hummingbirds and finches preferred it, and by one important measure they thrived: They were nesting more in the noisy areas than in the quieter areas. Additionally, several species had more success at fledging chicks in noisier locations.

What was going on? It’s likely that the noise makes it harder for predators to hear the birds and hunt down their nests. “It’s essentially a predator shield,” Francis says. Since his research found that predators can cause as much as 76% of failures of eggs to produce healthy offspring, that’s a significant survival advantage.

Cities can offer the same protections to certain species. Consider the case of Flaco, a Eurasian eagle-owl that escaped from the Central Park Zoo in February of 2023 and found he was in a terrific place to hunt. The incessant traffic ought to have caused him trouble. “An owl like this is among the most vulnerable species to intrusions from noise pollution. They’re listening for extremely faint signals or cues that their prey provide,” Francis notes. But New York has its compensations, because prey animals abound. They’re also naïve and unguarded, never expecting an owl with a six-foot wingspan to swoop down and devour them.

Granted, these upsides don’t cancel out the negatives. Human noise may shield some birds from predators, but in other ways it leaves them faintly miserable, with high levels of stress hormones and lower weight. 

Worse, the species that manage to thrive in cities or near highways are often the same ones all over the country.  And they represent only a minority of species; most are driven further away, with less and less land to live on as civilization spreads ever outward. 

“Overall, it’s kind of a nightmare for diversity,” says Luther.

How to silence the world

In the early ’00s, the village of Alverna in the Netherlands began to get louder. A major intercity road cut straight through the town, and traffic had gone up by two-thirds in the previous decade. Facing complaints about the din, the town offered to put up some 13-foot walls on either side of the route. Residents hated the idea. Who wants to look out the window at massive walls?

So instead town planners redesigned the road in subtle ways. They lowered it by half a meter, slightly blocking the tire sounds. They built wedges that rise up three feet on either side, and surfaced them with attractive antique stone; that blocked even more sound. They planted sound-absorbing trees. And as a final coup de grâce, they reduced the speed limit from about 50 to 30 miles per hour. When a car is moving slowly, the engine is producing most of the roar—but once it’s going 45 mph or faster, the rumble of tires on the pavement takes over and is much louder. Each intervention had only a small effect, but cumulatively they made the road a blessed 10 decibels quieter.

This tale illustrates one curious upside of noise. Compared with other forms of pollution, it can be ended quickly. Toxic pollutants or CO2 can hang around for tens of thousands of years; the microplastics in your pancreas are probably never coming out. But with noise, the instant you reduce the source, the benefits are immediate. 

Plus, most of what works is “not rocket science,” Shilling says. A tall wall at the side of a highway will cut noise by 10 decibels; fill a double-sided wall with rubble and it’s even better. That could cut the traffic noise to below 55 decibels, he notes, which would help particularly skittish forms of wildlife. Walls can block animal movement, though, so in animal-heavy areas it’s better to build berms—small hills on either side of a highway. Areas of high ecological importance could be prioritized to keep costs down. 

“If there’s a great chunk of wetland habitat and it’s the only one around for 50 miles in any direction? Well, then we should build noise walls around it,” he says. We should also build overpasses and underpasses to help animals get around. And to quiet the din of gas wells out in the countryside, states could require companies to build walls around them. (They’ll likely only do that, though, when human neighbors complain or launch lawsuits; animals don’t have lawyers.)

Cities, too, can learn to shut up, as Alverna proved. At the most ambitious, some have buried noisy highways that once cut through the downtown core. Boston put a massive elevated highway underground in its “Big Dig”; in Slabbekoorn’s hometown of Amstelveen—a suburb of Amsterdam—they’re currently enclosing the A9 highway in a tunnel and turning the surface into a verdant park with new buildings. “That’s amazing, getting back a lot of the space as well,” he says. 

Granted, this sort of reengineering can be brutally expensive, which is why politicians blanch when they’re asked to reduce road noise. The Big Dig cost $15 billion, and with interest up to $24 billion. When I mentioned cost to Shilling, he sighed. “It’s not as expensive as a B-1 bomber or tax cuts for rich people,” he says. “Environmental stuff is considered expensive just because our expectations are low, not because we can’t afford to do it.”

There are cheaper and more politically palatable fixes, though. Reducing urban speed limits is one; Paris recently cut the top speed on its ring roads from 70 to 50 kilometers per hour (43 to 31 mph), and noise at night went down by an average 2.7 decibels—a noticeable drop. Planting more trees and vegetation all around roads and cities can cut a few decibels more, and residents love it. 

Growing adoption of electricity would also bring down the volume. “Electric vehicles of all kinds have the potential to make a big difference,” Patricelli says; when the light turns green and an EV next to you accelerates away, it’s up to 13 decibels quieter than a comparable gas-powered vehicle. These benefits won’t be felt as much on highways, because EVs still make tire noise at high speeds. But in the slower stop-and-go traffic of urban life, they are far more pleasant to the ears, both animal and human. Indeed, the electrification of everything that currently uses a gas-powered motor will make urban life quieter. Cities like Alameda, California, and Alexandria, Virginia, are increasingly banning gas-powered leaf blowers and lawn mowers, which operate at hair-raising volume while electric ones whisper along.

We’ve engineered a civilization that roars, but the next phase is making it purr. The animals will thank us. 

Clive Thompson is a science and technology journalist based in New York City.

The quest to measure our relationship with nature https://www.technologyreview.com/2026/04/16/1135245/measure-relationship-with-nature-index/ Thu, 16 Apr 2026 10:00:00 +0000 https://www.technologyreview.com/?p=1135245 As a movement, environmentalism has been pretty misanthropic. Understandably so—we humans have done some destructive things to the ecosystems around us. In the 21st century, though, mainstream conservation is learning that humans can be a force for good. Foresters are turning to Indigenous burning practices to prevent wildfires. Biologists are realizing that flower-dotted meadows were ancient food-production landscapes that need harvesting or they’ll disappear. And the once endangered peregrine falcon now thrives in part thanks to nesting sites on skyscrapers and abundant urban prey: rats. 

For decades (two, but that counts), I’ve been writing about how humans aren’t metaphysically different from any other species on Earth. Conservation can’t only be about fencing people out of protected areas. A lot of the time the real trick is not to withdraw from “nature” but to get better at being part of it. 

Still, I recognize that living in harmony with nature sounds like a mushy idea. I was therefore stoked to participate in a meeting in Oxford, UK, that sought to build more precise tools to assess human-nonhuman relationships. Scientists have invented lots of measurements of environmental destruction, from parts per million of carbon dioxide to extinction rates to “planetary boundaries.” These have their uses, but they engage people mostly through dread. Why not invent metrics, we thought, that would engage people’s hopes and dreams? 

It was harder than I expected. How do you quantify how good people in any given nation are at living with other Earthlings? Some of the metrics the group proposed seemed to me to be too similar to the older, more adversarial approach. Why tally the agricultural land use per person, for example? Environmentalists have typically seen farms as the opposite of nature, but they’re also potential sites for both edible and inedible biodiversity. Some of us were keen on satellite imagery to calculate things like how close people live to green space. But without local information, you can’t prove that people can actually access that space.

Eventually the 20 or so scientists, authors, and philosophers who met in Oxford settled on three basic questions. First, is nature thriving and accessible to people? We wanted to know if humans could engage with the world around them. Second, is nature being used with care? (Of course, “care” could mean lots of things. Is it just keeping harvests under maximum sustainable yield? Or does it require a completely circular economy?) And third, is nature safeguarded? Again, not easy to assess. But if we could roughly measure each of these three things, the numbers could combine into an overall score for the quality of a human-nature relationship. 
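
Purely as an illustration of how three such answers could fold into a single number (this is not the actual NRI methodology), here is the approach a composite like the Human Development Index uses: normalize each component to a score between 0 and 1 and take the geometric mean.

```python
# Illustrative only: not the actual Nature Relationship Index methodology.
sub_scores = {
    "thriving_and_accessible": 0.72,  # is nature thriving and accessible to people?
    "used_with_care": 0.55,           # is nature being used with care?
    "safeguarded": 0.80,              # is nature safeguarded?
}

index = 1.0
for score in sub_scores.values():
    index *= score
index **= 1 / len(sub_scores)  # geometric mean: a weak score on any dimension drags the total down

print(f"Illustrative combined score: {index:.2f}")  # ~0.68
```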

We published our ideas in Nature last year. Though they weren’t perfect, green-space remote sensing and agricultural footprint calculations made the cut. Since then, a team in the United Nations Human Development Office has continued that work, planning to debut a Nature Relationship Index (NRI) later this year alongside the 2026 Human Development Report. Everyone loves a ranked list; we hope countries will want to score well and will compete to rise to the top. 

Pedro Conceição, lead author of the Human Development Report, tells me that he wants the new index to shift how countries see their environmental programs. (He wouldn’t give me spoilers as to the final metrics, but he did tell me that nothing from our Nature paper made it in.) The NRI, Conceição says, will be critical for “challenging this idea that humans are inherent destroyers of nature and that nature is pristine.” Narratives around constraints, limits, and boundaries are polarizing instead of energizing, he says. So the NRI isn’t about how badly we are failing. It speaks to aspirations for a green, abundant world. As we do better, the number goes up—and there is no limit. 

Emma Marris is the author of Wild Souls: Freedom and Flourishing in the Non-Human World.

Is carbon removal in trouble? https://www.technologyreview.com/2026/04/16/1135928/carbon-removal-microsoft/ Thu, 16 Apr 2026 10:00:00 +0000 https://www.technologyreview.com/?p=1135928 Last week, news outlets reported that Microsoft was pausing carbon removal purchases. It was something of a bombshell.

The thing is, Microsoft is the carbon removal market. The company has single-handedly purchased something like 80% of all contracted carbon removal. If you’re looking for someone to pay you to suck carbon dioxide out of the atmosphere, Microsoft is probably who you’re after.

The company has said that it is not permanently ending its carbon removal purchases (though it didn’t directly answer further questions about this apparent pause). But with this flurry of news, there’s a lot of fear in the industry—so, it’s worth talking about the state of carbon removal, and where Big Tech companies fit in.

Carbon removal aims to reliably pull carbon dioxide out of the atmosphere and permanently store it. There’s a wide range of technologies in this space, including direct air capture (DAC) plants, which usually use some kind of sorbent or solvent to pull carbon dioxide from the air. Another important method is bioenergy with carbon capture and storage (BECCS), in which biomass like trees or waste-derived biofuels are burned for energy, and scrubbing equipment captures the greenhouse gases.

There was a huge boom of interest in carbon removal technologies in the first half of this decade. One UN climate report in 2022 found that nations may need to remove up to 11 billion metric tons of carbon dioxide every year by 2050 to keep warming to 2 °C above preindustrial levels.

One nagging problem is that the economics here have always been tricky. There’s a major potential public good to pulling carbon pollution out of the atmosphere. The question is, Who will pay for it?

So far, the answer has been Microsoft. The company is by far the largest buyer of carbon removal contracts, and it’s the only purchaser that has made megatonne-scale purchases, says Robert Höglund, cofounder of CDR.fyi, a public-benefit corporation that analyzes the carbon removal sector. “Microsoft has had a huge importance, especially for getting large-scale projects off the ground and showing there is demand for large deals,” Höglund said via email.

Microsoft has pledged to become carbon-negative by 2030 and to remove the equivalent of its historic emissions by 2050. Progress on actually cutting emissions has been tough to achieve though—in the company’s latest Environmental Sustainability Report, published in June 2025, it announced emissions had risen by 23.4% since 2020.

On April 10, Heatmap News reported that Microsoft staff had told suppliers and partners that it was pausing future purchases of carbon removal, though it wasn’t clear whether the company would increase support for existing projects, or when purchases might resume. Bloomberg reported a similar story the next day. In one instance, Microsoft employees said the decision was related to financial considerations, a source told Bloomberg.

In a statement in response to written questions, Microsoft said that it was not permanently closing its carbon removal program. “At times we may adjust the pace or volume of our carbon removal procurement as we continue to refine our approach toward sustainability goals. Any adjustments we make are part of our disciplined approach—not a change in ambition,” Microsoft Chief Sustainability Officer Melanie Nakagawa said in the statement.

Whatever, exactly, is happening behind the scenes, many in the industry are nervous, says Wil Burns, Co-Director of the Institute for Responsible Carbon Removal at American University. People viewed the company as the foundational supporter of carbon removal, he adds.

“This pause—whether it’s short term or whatever it is—the way it’s been rolled out is extremely irresponsible,” Burns says. The vast majority of firms looking to get carbon removal contracts are probably seeking Microsoft deals. So, while Microsoft has every right to change its plans, the company needs to be open with the industry now, he adds.

“I don’t think you can hold yourself out as the paragon of fostering carbon removal and then treat a nascent industry that disrespectfully,” Burns says.

Carbon removal companies were already in turmoil in the US, particularly because of recent policy shifts: Funding has been cut back, and recent changes at the Environmental Protection Agency have weakened the government’s ability to target carbon pollution.

Now, if the largest corporate backer is shifting plans or taking a significant pause, things could get rocky.

Depending on the extent of this pause, the industry may need to survive on smaller purchases and hope for support from governments and philanthropy, Höglund says. But for carbon removal to truly scale, we need policymakers to create mandates so that emitters are responsible for either storing the carbon dioxide they produce or paying for it, Burns says.

“Maybe the upside of this is Microsoft has sent a wake-up call, that you just can’t rely on the kindness of strangers to make carbon removal scale.”

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here

The Download: NASA’s nuclear spacecraft and unveiling our AI 10 https://www.technologyreview.com/2026/04/15/1135904/the-download-nasa-nuclear-powered-spacecraft-10-things-that-matter-in-ai-right-now/ Wed, 15 Apr 2026 12:10:00 +0000 https://www.technologyreview.com/?p=1135904 This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

NASA is building the first nuclear reactor-powered interplanetary spacecraft. How will it work? 

Just before Artemis II began its historic slingshot around the moon, NASA revealed an even grander space travel plan. By the end of 2028, the agency aims to fly a nuclear reactor-powered interplanetary spacecraft to Mars. 

A successful mission would herald a new era in spaceflight—and might just give the US the edge in the race against China. But the project remains shrouded in mystery. 

MIT Technology Review picked the brains of nuclear power and propulsion experts to find out how the nuclear-powered spacecraft might work. Here’s what we discovered

—Robin George Andrews 

This story is part of MIT Technology Review Explains, our series untangling the complex, messy world of technology to help you understand what’s coming next. You can read more from the series here. 

Coming soon: our 10 Things That Matter in AI Right Now 

Each year, we compile our 10 Breakthrough Technologies list, featuring our educated predictions for which technologies will change the world. Our 2026 list, however, was harder to wrangle than normal. Why? We had so many worthy AI candidates we couldn’t fit them all in!  

That got us thinking: what if we made an entirely new list all about AI? Before we knew it, we had the beginnings of what we’re calling 10 Things That Matter in AI Right Now.  

On April 21, we’ll unveil the list on stage at our signature AI conference, EmTech AI, and then publish it online later that day. If you want to be among the first to see it, join us at EmTech AI or become a subscriber to livestream the announcement.  

Find out more about the list’s methodology and aims here

—Niall Firth & Amy Nordrum 

MIT Technology Review Narrated: this company is developing gene therapies for muscle growth, erectile dysfunction, and “radical longevity” 

In January, a handful of volunteers were injected with two experimental gene therapies as part of an unusual clinical trial. Its long-term goal? To achieve radical human life extension.  

The therapies are designed to support muscle growth. The company behind them, Unlimited Bio, also plans to trial similar therapies in the scalp (for baldness) and penis (for erectile dysfunction). But some experts are concerned about the plans.  

Find out why the trial has divided opinion

—Jessica Hamzelou 

This is our latest story to be turned into an MIT Technology Review Narrated podcast, which we publish each week on Spotify and Apple Podcasts. Just navigate to MIT Technology Review Narrated on either platform, and follow us to get all our new content as it’s released. 

The must-reads 

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology. 

1 Google, Microsoft, and Meta track users even when they opt out 
According to an independent audit, they may be racking up billions in fines. (404 Media)  
+ How our digital devices put our privacy at risk. (Ars Technica) 
+ Privacy’s next frontier is AI “memories.” (MIT Technology Review) 
 
2 OpenAI has a new cybersecurity model—and strategy 
GPT-5.4-Cyber is designed specifically for defensive cybersecurity work. (Reuters $) 
+ OpenAI has joined Anthropic in focusing on cybersecurity recently. (Wired $) 
+ Like Anthropic, its latest model is only available to verified testers. (NYT $) 
+ AI is already making online crimes easier. It could get much worse. (MIT Technology Review) 

3 Amazon is buying satellite firm Globalstar in a bid to rival Starlink   
The $11.6 billion deal targets the lucrative satellite internet market. (WSJ $)  
+ Apple has chosen Amazon satellites for iPhone. (Ars Technica) 
 
4 What it’s like to live with an experimental brain implant 
Early BCI users explain what the technology gives—and takes. (IEEE) 
+ A patient with Neuralink got a boost from generative AI. (MIT Technology Review) 
 
5 Dozens of AI disease-prediction models were trained on dubious data  
A few might already have been used on patients. (Nature) 

6 Uber is breaking from its gig economy model to avoid robotaxi disruption  
It’s spending $10 billion to buy thousands of autonomous vehicles. (FT $) 
 
7 xAI is being sued over data center pollution  
Musk’s AI venture stands accused by the NAACP of violating the Clean Air Act. (Engadget) 
+ No one wants a data center in their backyard. (MIT Technology Review) 
 
8 Apple could win the AI race without running  
It may reap the rewards of everyone else’s spending. (Axios) 
 
9 How 4chan set a precedent for AI’s reasoning abilities  
The notorious forum tested a feature called “chain of thought.” (The Atlantic $) 
 
10 The surprising emotional toll of wearing Meta’s AI sunglasses 
Their shortcomings are making users sad. (NYT $) 
 
 

Quote of the day 

“Everything got a whole lot worse once they rolled out AI.” 

—A copywriter tells the Guardian that they’re drowning in “workslop” — AI-generated work that seems polished but has major flaws 

One More Thing 

How refrigeration ruined fresh food 

Bananas may not be chilled in the grocery store, but they’re the ultimate refrigerated fruit. It’s only thanks to a network of thermal control that they’ve become a global commodity. And that salad bag on the shelf? It’s not just a bag but a highly engineered respiratory apparatus. 

According to Nicola Twilley—a contributor to the New Yorker and cohost of the podcast Gastropod—refrigeration has wrecked our food system. Thankfully, there are promising alternative preservation methods.  

Read the full story on her research

—Allison Arieff 

We can still have nice things 

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line.) 

+ Spotify only shows 10 popular songs per artist. This tool lists them all. 
+ These GIF animations are mesmerizing loops of nostalgia. 
+ This site beautifully visualizes Curiosity’s 13 years on Mars. 
+ A retro-futurist designer has turned a NES console into a working synthesizer. 

Cyberscammers are bypassing banks’ security with illicit tools sold on Telegram https://www.technologyreview.com/2026/04/15/1135898/cyberscammers-bypassing-bank-telegram/ Wed, 15 Apr 2026 11:26:12 +0000 https://www.technologyreview.com/?p=1135898

From inside a money-laundering center in Cambodia, an employee opens a popular Vietnamese banking app on his phone. The app asks him to upload a photo associated with the account, so he clicks on a picture of a 30-something Asian man.

Next, the app requests to open the camera for a video “liveness” check. The scammer holds up a static image of a woman bearing no resemblance to the man who owns the account. After a 90-second wait—as the app tells him to readjust the face inside the frame—he’s in. 

The exploit he’s demonstrating, in a video shared with me by a cyberscam researcher named Hieu Minh Ngo, is possible thanks to one of a growing range of illicit hacking services, readily available for purchase on Telegram, that are designed to break “Know Your Customer” (KYC) facial scans.

These banking and crypto safeguards are supposed to confirm that an account belongs to a real person, and that the user’s face matches the identity documents that were provided to open the account. But scammers are bypassing them in order to open mule accounts and launder money. Rather than using a live phone camera feed for a liveness check, the hacks typically deploy a tool known as a virtual camera. Users can replace the video stream with other videos or photos—depicting a real or deepfake person or even an object.

As financial institutions enact enhanced security measures aimed at stopping cyberscammers, these workarounds are the latest round in the cat-and-mouse game between criminal operators and the financial services industry.

Over the course of a two-month investigation earlier this year, MIT Technology Review identified 22 Chinese-, Vietnamese-, and English-language public Telegram channels and groups advertising bypass kits and stolen biometric data. The software kits use a variety of methods to compromise phone operating systems and banking applications, claiming to enable users to get around the compliance checks imposed by financial institutions ranging from major crypto exchanges such as Binance to name-brand banks like Spain’s BBVA. 

“Specializing in bank services—handling dirty money,” reads the since-deleted Telegram bio of the program used by the Cambodian launderer, complete with a thumbs-up emoji. “Secure. Professional. High quality.” Some of the channels and groups had thousands of subscribers or members, and many posted bullet points listing their services (“All kinds of KYC verification services”; “It’s all smooth and seamless”) alongside videos purporting to show successful hacks. 

Telegram says that after reviewing the accounts, it removed them for violating its terms of service. But such online marketplaces proliferate easily, and multiple channels and groups advertising similar tools remain active.

Banks and butchers

The rise in KYC bypasses has occurred alongside an expansion of a global industry in “pig-butchering” cyberscams. Crypto platforms and banks around the world are facing increasing scrutiny over the flow of illegally obtained money, including profits from such scams, through their platforms. This has prompted tightened banking regulations in countries such as Vietnam and Thailand, where governments have increased customer verification and fraud monitoring requirements and are pushing for stronger anti-money-laundering safeguards in the crypto industry.

Chainalysis, a US blockchain analysis firm, estimates that around $17 billion was stolen in 2025 in crypto scams and fraud, up from $13 billion in 2024. The United Nations Office on Drugs and Crime, meanwhile, warned in a recent report that the expansion of Asian scam syndicates in Africa and the Pacific has helped the industry “dramatically scale up profits.”

That combination of factors—more scrutiny, but also more revenue—has vaulted KYC bypasses to the center of the online marketplace for cyberscam and casino money launderers. Although estimates vary, cybersecurity researchers say these kinds of attacks are rising: The biometrics verification company iProov estimated that virtual-camera attacks were more than 25 times as common worldwide in 2024 as in 2023, while Sumsub, a company providing KYC services, reported that “sophisticated” or multi-step fraud attempts, including virtual-camera bypasses, almost tripled last year among its clients. 

Three financial institutions that were named as targets on such Telegram channels—the world’s largest crypto exchange, Binance, as well as BBVA and UK-based Revolut—told me they’re aware of such bypasses and emphasize that they’re an industry-wide challenge. A spokesperson from Binance said it has “observed attempts of this nature to circumvent our controls,” adding that “we have successfully prevented such attacks and remain confident in our systems.” BBVA and Revolut declined to comment on whether their safeguards had been breached.

It’s difficult to estimate success rates, because companies may not be aware of bypasses—or report them—until later. “What’s important is what we don’t see,” Artem Popov, Sumsub’s head of fraud prevention products, told me, referring to attacks that go undetected. “There’s always part of the story where it might be completely hidden from our eyes, and from the eyes of any company in the industry, using any type of KYC provider.”

How criminals navigate a compliance maze 

Advertisements for the exploits appear simple enough, but on the back end, building a successful bypass is complex and often involves multiple methods. Some channels offer to jailbreak a physical phone so that scammers can trigger the use of a virtual camera (VCam) instead of the built-in one whenever they’d like. Other hacks inject code known as a “hooking framework” into a financial institution’s app that triggers the VCam to open. Either way, VCams can be used to dupe KYC safeguards with images or videos that replace genuine, live video of the account’s owner.

Sergiy Yakymchuk, CEO of Talsec, a cybersecurity company that primarily serves financial institutions, reviewed details from the Telegram channels identified by MIT Technology Review and says they are consistent with successful tactics used against his banking and crypto clients. His team received help requests from banks and exchanges for roughly 30 VCam-based hacks over the past year, up from fewer than 10 in 2023. 

Increasingly, hackers compromise both the phone itself and the code of the financial institutions’ apps before feeding the virtual camera a mix of stolen biometrics and deepfakes, Yakymchuk says.

“Some time ago, it was enough to decompile the app of a bank and distribute this on Telegram, and that was everything you needed,” he says. “Now it’s not enough, because you have KYC—and more and more things are needed.”

For money launderers, KYC bypasses have “become essential for everything right now—because scam compounds need to move money,” says Ngo, the researcher who shared the demo video. A convicted former hacker who became a cybersecurity advisor for the Vietnamese government, Ngo now runs an anti-scam nonprofit and helps law enforcement investigate money laundering. 

He describes how the process works in the case of pig-butchering scams: Funds originating with victims are received into bank accounts controlled or rented by a money-laundering network, known colloquially as “water houses.” Money launderers use KYC bypasses to access the accounts and quickly redistribute the profits before converting them into digital assets—typically in the form of the stablecoin Tether, a type of cryptocurrency that is pegged to the US dollar.

These transactions often happen in seconds, under tightly orchestrated management. “They know, very clearly, the flow of how the banks verify or authenticate accounts,” Ngo says. 

A cat-and-mouse game 

The growth of cyberscam money laundering has led to heightened scrutiny of financial institutions. In 2023, Binance pleaded guilty in US federal court to operating without anti-money-laundering safeguards. Donald Trump pardoned former Binance CEO Changpeng Zhao last October.

Recent analysis from the International Consortium of Investigative Journalists found that after Zhao’s guilty plea, more than $400 million continued to move to Binance from Huione Group, a Cambodia-based firm that the US sanctioned after the Treasury Department deemed it a “critical node” for money laundering in pig-butchering scams.

Binance says it has “state-of-the-art security systems” that prevented billions in fraud losses and that the company processed more than 71,000 law enforcement requests in 2025.

But John Griffin, a finance and blockchain expert at the University of Texas at Austin, does not think the exchanges are sufficiently secure. “Even though they have all this press about ‘Oh, yes, we’ve changed this and that’—well, the proof is in the pudding. The criminals are still using your exchange,” Griffin told me of the industry at large. “So there must be holes.” (Binance says it “objects to the dubious findings” of Griffin’s work tracking the flow of criminal profits across exchanges like Binance, Huobi, OKX, and Tokenlon, calling it “misleading at best and, at worst, wildly inaccurate.”)

Binance also pointed out that some purported bypass services are themselves scams, casting doubt on whether successful bypasses are as widespread as the Telegram marketplace may suggest. Engaging with such services “exposes individuals to significant security risks,” a spokesperson said. “Even where access appears to be granted, accounts are often already restricted by internal detection and compliance controls, rendering them nonfunctional for trading or withdrawals.”

Regulators around the world are trying to catch up. In Thailand, where citizens’ bank accounts regularly serve as money mules for cyberscams based in neighboring Myanmar and Cambodia, new legislation has enhanced KYC monitoring, limited daily transactions, and strengthened oversight bodies’ ability to suspend accounts. The US money-laundering regulator, the Financial Crimes Enforcement Network, issued a warning against KYC deepfakes and the use of VCams in late 2024, encouraging platforms to track broader transaction patterns to identify money laundering.

For scammers, any new security or reporting requirements will make bypasses harder, but “it’s not going to stop them,” Ngo says. “It’s just a matter of time.”

No one’s sure if synthetic mirror life will kill us all https://www.technologyreview.com/2026/04/15/1135197/synthetic-mirror-life-microbes-kill-us-all/ Wed, 15 Apr 2026 09:00:00 +0000 https://www.technologyreview.com/?p=1135197 For four days in February 2019, some 30 synthetic biologists and ethicists hunkered down at a conference center in Northern Virginia to brainstorm high-risk, cutting-edge, irresistibly exciting ideas that the National Science Foundation should fund. By the end of the meeting, they’d landed on a compelling contender: making “mirror” bacteria. Should they come to be, the lab-created microbes would be structured and organized like ordinary bacteria, with one important exception: Key biological molecules like proteins, sugars, and lipids would be the mirror images of those found in nature. DNA, RNA, and many other components of living cells are chiral, which means they have a built-in handedness. Their mirrors would twist in the opposite direction. 

Researchers thrilled at the prospect. “Everybody—everybody—thought this was cool,” says John Glass, a synthetic biologist at the J. Craig Venter Institute in La Jolla, California, who attended the 2019 workshop and is a pioneer in developing synthetic cells. It was “an incredibly difficult project that would tell us potentially new things about how to design and build cells, or about the origin of life on Earth.” The group saw enormous potential for medicine, too. Mirror microbes might be engineered as biological factories, producing mirror molecules that could form the basis for new kinds of drugs. In theory, such therapeutics could perform the same functions as their natural counterparts, but without triggering unwelcome immune responses. 

After the meeting, the biologists recommended NSF funding for a handful of research groups to develop tools and carry out preliminary experiments, the beginnings of a path through the looking glass. The excitement was global. The National Natural Science Foundation of China funded major projects in mirror biology, as did the German Federal Ministry of Research, Technology, and Space.

Five years later, in 2024, many researchers involved in that NSF meeting had reversed course. They’d become convinced that in the worst of all possible futures, mirror organisms could trigger a catastrophic event threatening every form of life on Earth; they’d proliferate without predators and evade the immune defenses of people, plants, and animals. 

Over the past two years, they’ve been ringing alarm bells. They published an article in Science in December 2024, accompanied by a 299-page technical report addressing feasibility and risks. They’ve written essays and convened panels and cofounded the Mirror Biology Dialogues Fund (MBDF), a broadly funded nonprofit charged with supporting work on understanding and addressing the risk. The issue has received a blaze of media attention and ignited dialogues among not only chemists and synthetic biologists but also bioethicists and policymakers.  

What’s received less attention, however, is how we got here and what uncertainties still remain about any potential threat. Creating a mirror-life organism would be tremendously complicated and expensive. And although the scientific community is taking the alarm seriously, some scientists doubt whether it’s even possible to create a mirror organism anytime soon. “The hypothetical creation of mirror-image organisms lies far beyond the reach of present-day science,” says Ting Zhu, a molecular biologist at Westlake University, in China, whose lab focuses on synthesizing mirror-image peptides and other molecules. He and others have urged colleagues not to let speculation and anxiety guide decision-making and argued that it’s premature to call for a broad moratorium on early-stage research, which they say could have medical benefits. 

But the researchers who are raising flags describe a pathway, even multiple pathways, to bringing mirror life into existence—and they say we urgently need guardrails to figure out what kinds of mirror-biology research might still be safe. That means they’re facing a question that others have encountered before, multiple times over the last several decades and with mixed results—one that doesn’t have a neat home in the scientific method. What should scientists do when they see the shadow of the end of the world in their own research? 

Looking-glass life

The French chemist and microbiologist Louis Pasteur was the first to recognize that biological molecules had built-in handedness. In the late 19th century, he described all living species as “functions of cosmic asymmetry.” What would happen, he mused, if one could replace these chiral components with their mirror opposites? 

Scientists now recognize that chirality is central to life itself, though no one knows why. In humans, 19 of the 20 so-called “standard” amino acids that make up proteins are chiral, and all in the same way. (The outlier, glycine, is symmetrical.) The functions of proteins are intricately tied to their shapes, and they mostly interact with other molecules through chiral structures. Almost all receptors on the surface of a cell are chiral. During an infection, the immune system’s sentinels use chirality to detect and bind to antigens—substances that trigger an immune response—and to start the process of building antibodies. 

By the late 20th century, researchers had begun to explore the idea of reversing chirality. In 1992, one team reported having synthesized the first mirror-image protein. That, in turn, set off the first clarion call about the risk: In response to the discovery, chemists at Purdue University pointed out, briefly, that mirror-life organisms, if they escaped from a lab, would be immune to any attack by “normal” life. A 2010 story in Wired highlighting early findings in the area noted that if such a microbe developed the ability to photosynthesize, it could obliterate life as we know it. 

The synthetic biology community didn’t seriously weigh those threats then, says David Relman, a specialist who bridges infectious disease and microbiology at Stanford University and a trailblazer in studying the gut and oral microbiomes. The idea of a mirror microbe seemed too far beyond the actual progress on proteins. “This was almost a solely theoretical argument 20 years ago,” he says. 

Now the research landscape has changed. 

Scientists are quickly making progress on mirror images of the machinery cells use to make proteins and to self-replicate. Those components include DNA, which encodes the recipes for proteins; DNA polymerases, which help copy genetic material; and RNA, which carries recipes to ribosomes, the cell’s protein factories. If researchers could make self-replicating mirror ribosomes, then they would have an efficient way to produce mirror proteins. That could be used as a biological manufacturing method for therapeutics. But embedded in a self-replicating, metabolizing synthetic cell, all these pieces could give rise to a mirror microbe. 

When synthetic biologists convened in Northern Virginia in 2019, they didn’t recognize how quickly the technology was advancing, and if they saw a threat at all, it may have been obscured by the blinding appeal of pushing the science forward. What’s become apparent now, says Glass, is that scientists in different disciplines, all related to mirror life, were largely unaware of what other scientists had been doing. Chemists didn’t know that synthetic biologists had made so much progress on creating mirror cells with natural chirality from scratch. Biologists didn’t appreciate that chemists were building ever-larger mirror macromolecules. “We tend to be siloed,” Glass says. And nobody, he says, had thought to seriously examine the immune system concerns that had already been raised in response to earlier work. “There was not an immunologist or an infectious disease person in the room,” Glass says, reflecting on the 2019 meeting. “I may have come closest, given that I work with pathogenic bacteria and viruses,” he adds, but his work doesn’t address how they cause infections in their hosts.

These scientists also didn’t know that around the same time as their meeting, another conversation about mirror life was happening—a darker dialogue that was as focused on danger as it was on discovery. Starting around 2016, researchers with an organization called Open Philanthropy had begun compiling research files on catastrophic biological risks. The group, which rebranded as Coefficient Giving in 2025, funds projects across a range of focus areas; it shares DNA with a divisive philanthropic philosophy called effective altruism, which advocates giving money to projects with the highest potential benefit to the most people. While that might not sound objectionable, critics point out that the metrics devotees use to gauge “effectiveness” can prioritize long-term solutions while neglecting social injustices or systemic problems. 

Mirror life came up when Open Philanthropy reached out to external scientists about biosecurity risks. In 2019 the organization began funding research by Kevin Esvelt, who leads the Sculpting Evolution group at the MIT Media Lab, on biosecurity issues, including mirror life. He began reading up to see whether mirror life was really something to worry about.

Esvelt made waves in 2013 for pioneering the use of CRISPR to develop a gene drive, a technology that could spread genetic changes introduced into a living organism through a whole population. Researchers are exploring its use, for example, to make mosquitoes hostile to the parasite that causes malaria—and, as a result, lower their chance of spreading it to humans. But almost immediately after he developed the tool, Esvelt argued against using it for profit, at least until proper safeguards could be set and its use in fighting malaria had been established. “Do you really have the right to run an experiment where if you screw up, it affects the whole world?” he asked, in this magazine, in 2016. At the Media Lab, Esvelt leads efforts to safely develop gene drives that can be deployed locally but prevented from spreading globally. 

Esvelt says he’s often thinking about the security risks posed by self-sustaining genetically engineered technologies, and research led him to suspect that the threat of mirror organisms hadn’t been seriously interrogated. The more he learned about microbial growth rates, predator-prey and microbe-microbe interactions, and immunology, the more he began to worry that mirror organisms, if impervious to the innate defenses of natural ones, could cause unstoppable infections in the event that they escaped the lab. 

Even if the first experimental iteration of such a germ were too fragile to survive in the environment or a human body, Esvelt says, it would be a light lift to genetically engineer new, more resilient versions with existing technology. Even worse, he says, the results could be weaponized. The possible path from 2019 to global annihilation seemed almost too direct, he found. 

But he wasn’t an expert in all the scientific fields involved in research on mirror life, so he started making calls. He first described his concerns to Relman one night in February 2022, at a restaurant outside Washington, DC. Esvelt hoped Relman would tell him he was wrong, that he’d missed something over the years of gathering data. Instead, Relman was troubled. 

The concern spreads

When Relman returned to California, he read more about the technology, the risks, and the role of chirality in the immune system and the environment. And he consulted experts he knew well—ecologists, other microbiologists, immunologists, all of them leaders in their fields—in an attempt to assuage his concerns. “I was hoping that they’d be able to say, I’ve thought about this, and I see a problem with your logic. I see that it’s really not so bad,” he says. “At every turn, that did not happen. Something about it was new to every person.” 

The concern spread. Relman worked with Jack Szostak, a professor of chemistry at the University of Chicago, and a group of researchers to see if it was possible to make an argument that mirror life wasn’t going to wipe out humanity. Included in that group was Kate Adamala, a synthetic biologist at the University of Minnesota. She was a natural choice: Adamala had shared the initial grant from the NSF, in 2019, to explore mirror-life technologies. 

She also became convinced the risk was real—and was dumbfounded that she hadn’t seen it earlier. “I wish that one sunny afternoon we were having coffee and we realized the world’s about to end, but that’s not what happened,” she says. “I’m embarrassed to admit that I wasn’t even the one that brought up the risks first.” Through late 2023 and early 2024, the endeavor began to take on the form of a rigorous scientific investigation. Experts were presented with a hypothesis—namely, that if mirror cells were built, they would pose an existential threat—and asked to challenge it. The goal was to falsify the hypothesis. “It would be great if we were wrong,” says Vaughn Cooper, a microbiologist at the University of Pittsburgh and president-elect of the American Society for Microbiology. 

Relman says that as the chemists and biologists learned more about one another’s work and began to understand what immunologists know about how living things defend themselves, they started to connect the dots and see an emerging picture of an unstoppable synthetic threat.

Timothy Hand, an immunologist at the University of Pittsburgh who hadn’t participated in the 2019 NSF meeting, wasn’t initially worried when he heard about mirror life, in 2024. “The mammalian immune system has this incredible capability to make antibodies against any shape,” he says. “Who cares if it’s a mirror?” But when he took a closer look at that process, he could see a cascade of potential problems far upstream of antibody production. Start with detection: Macrophages, which are cells the immune system uses to identify and dispatch invaders, use chiral sensing receptors on their surfaces. The proteins they use to grab on to those invaders, too, are chiral. That suggests the possibility that an organism could be infected with a mirror organism but not be able to detect it or defend against it. “The lack of innate immune sensing is an incredibly dangerous circumstance for the host,” Hand says.

By early 2024, Glass had become concerned as well. Relman and James Wagstaff, a structural biologist from Open Philanthropy, visited him at the Venter Institute to talk about the possibility of using synthetic cell technology—Glass’s specialty—to build mirror life. “At first I thought, This can’t be real,” Glass says. They walked through arguments and counterarguments. “The more this went on, the more I started feeling ill,” he says. “It made me realize that work I had been doing for much of the last 20 years could be setting the world up for this incredible catastrophe.” 

In the second half of 2024, the growing group of scientists assembled the report and wrote the policy forum for Science. Relman briefed policymakers at the White House and members of the national security community. Researchers met with the National Institutes of Health and the National Science Foundation. “We briefed the United Nations, the UK government, the government of Singapore, scientific funding organizations from Brazil,” says Glass. “We’ve talked to the Chinese government indirectly. We were trying to not blindside anybody.” 

A year and a half on, the push has had an impact. UNESCO has recommended a precautionary global moratorium on creating mirror-life cells, and major philanthropic organizations that fund science, including the Alfred P. Sloan Foundation, have announced they will not finance research leading to a mirror microorganism. The Bulletin of the Atomic Scientists highlighted considerations about mirror life in its most recent report on the Doomsday Clock. In March, the United Nations Secretary-General’s Scientific Advisory Board issued a brief highlighting the risks—noting, for example, that recent progress on building mirror molecules could reduce the cost of creating a mirror microbe. 

“I think no one really believes at this stage that we should make mirror life, based on the evidence that’s available,” says James Smith, the scientist who leads the MBDF, the nonprofit focused on assessing the risks of mirror life, which is funded by Coefficient Giving, the Sloan Foundation, and other organizations. The challenge now, Smith says, is for scientists to work with policymakers and bioethicists to figure out how much research on mirror life should be permitted—and who will enforce the rules.

Drawing the line

Not everyone is convinced that mirror organisms pose an existential threat. It’s difficult to verify predictions about how mirror microbes would fare in the immune system—or the larger world—without running experiments on them. Some scientists have pushed back against the doomsday scenario, suggesting that the case against mirror life offers an “inflated view of the danger.” Others have noted that carbohydrates called glycans already exist in both left- and right-handed forms—even in pathogens—and the immune system can recognize both of them. Experiments focused on interactions between the immune system and mirror molecules, they say, could help clarify the risks of mirror organisms and reduce uncertainty. 

Andy Ellington, a biotechnologist and synthetic biologist at the University of Texas at Austin, doesn’t think mirror organisms will come to fruition anytime soon. Even if they do, he isn’t sure they will pose a threat. “If there is going to be harm done to the human race, this is about position 382 on my list,” he says. But at the same time, he says it’s a complicated issue worth studying more, and he wants to see the conversations continue: “We’re operating in a space where there’s so much unknown that it’s very difficult for us to do risk assessment.” 

Even among those convinced that the worst-case scenario is possible, researchers still disagree over where to draw the line. What inquiries should be allowed and what should be prohibited? 

Adamala, of the University of Minnesota, and others see a natural line at ribosomes, the cellular factories that transform chains of amino acids into proteins. These would be a critical ingredient in creating a self-replicating organism, and Adamala says the path to getting there once mirror ribosomes are in place would be pretty straightforward. But Zhu, at Westlake, and others counter that it’s worth developing mirror ribosomes because they could possibly produce medically useful peptides and proteins more efficiently than traditional chemical methods. He sees a clear distinction, and a foundational gap, between that kind of technology and the creation of a living synthetic organism. “It is crucial to distinguish mirror-image molecular biology from mirror-image life,” he says. That said, he points out that many synthetic molecules and organisms containing unnatural components, including but not limited to the mirror-image subset, might pose health risks. Researchers, he says, should focus on developing holistic guidelines to cover such risks—not just those from mirror molecules. 

Even if the exact risk remains uncertain, Esvelt remains more convinced than ever that the work should be paused, perhaps indefinitely. No one has taken a meaningful swing at the hypothesis that mirror life could wipe out everything, he says. The primary uncertainties aren’t around whether mirror life is dangerous, he points out; they have more to do with identifying which bacterium—including what genes it encodes, what it eats, how it evades the immune system’s sentinels—could lead to the most serious consequences. “The risk of losing everything, like the entire future of humanity integrated over time, is not worth any small fraction of the economy. You just don’t muck around with existential risk like that,” he says. 

In some ways, scientists have been here before, working out rules and limits for research. Two years after the start of the covid-19 pandemic, for example, the World Health Organization published guidelines for managing risks in biological research. But the history is much deeper: Horrific episodes of human experimentation led to the establishment of institutional review boards to provide ethical oversight. In the early 1970s, in response to concerns over lab-acquired infections and the growing threat of biological warfare, the US Centers for Disease Control and Prevention established biosafety levels (BSLs), which govern work on potentially dangerous biological experiments.

And in 1975—at the dawn of recombinant DNA research, which allows researchers to put genetic material from one organism into another—geneticists met at the Asilomar conference center in Pacific Grove, California, to hammer out rules governing the work. There were concerns over what would happen if some virus or bacterium, genetically engineered to have traits that would make it particularly dangerous for people, escaped from a lab. Scientists agreed to self-imposed restrictions, like a moratorium on research until new safety guidelines were in place. As a result of the meeting, in June 1976 the NIH issued rules that, among other things, categorized the risks associated with rDNA experiments and aligned them with the newly adopted BSL system.

Asilomar is often hailed as a successful model for scientific self-governance. But that perception reflects a tendency to recall the meeting through a nostalgic haze. “In fact, it was incredibly messy and human,” says Luis Campos, a historian of science at Rice University. Equally brilliant Nobelists argued on either side of the question of whether to rein in rDNA research. Technical discussions dominated; talks about who would be affected by the technology were missing. The meeting didn’t start establishing guidelines, says Campos, until the lawyers mentioned liability and lab leaks. 

For now it’s unclear whether these examples of self-governance, which arose from the demonstrated risks of existing technologies, hold useful lessons for the mirror-life community. Three competing images of the future are coming into focus: Mirror life might not be possible, it might be possible but not threatening, or it might be possible and capable of obliterating all life on Earth. 

Scientists may be censoring themselves out of fear and speculation. To some, shutting down the work seems necessary and urgent; to others, it is unnecessarily limiting. What’s clear is that the question of what to do about mirror life has been both illuminating and disorienting, pushing scientists to interrogate not only their current research but where it might lead. This is uncharted territory. 

Stephen Ornes is a science writer based in Nashville, Tennessee.

Correction: An earlier version of this article incorrectly stated that David Relman briefed the National Security Agency. Relman says he briefed members of the national security community. This story was also updated to clarify aspects of Coefficient Giving as an organization and its timeline of mirror life investigation.
