The surging value of firms such as NVIDIA has fuelled concerns about an AI stock market bubble. Credit: Kent Nishimura/Bloomberg via Getty

After years of hype and ballooning investment, the boom in artificial intelligence technology is beginning to show signs of strain. Many financial analysts now agree that there is an ‘AI bubble’, and some speculate it could finally burst in the next few months.

In economic terms, the rise of AI is unlike any other tech boom in history — there is now 17 times more investment in AI than there was in Internet companies before the dot-com crash of the early 2000s. And, valued at around US$4.6 trillion, the AI company NVIDIA was worth more than the economies of every nation except the United States, China and Germany.

But AI is not living up to the promise of revolutionizing multiple sectors — nearly 80% of companies using AI found it had had no significant impact on their earnings, according to a report from management consulting firm McKinsey, and concerns over the basic architecture of chatbots are leading scientists to say that AI has the potential to harm their research. These doubts over the technology’s utility and financial viability are leading analysts and investors to speculate that a crash is coming. Even tech chief executives such as Sam Altman of ChatGPT’s parent company OpenAI in San Francisco, California, have admitted that parts of the field are “kind of bubbly right now”. So, if a crash is imminent, how will it affect AI research and the scientists and engineers who make it happen?

Lessons from the noughties

Some analysts say that an AI-market collapse would be even more catastrophic than the dot-com crash — a shock that wiped out more than $5 trillion in stock-market value and led to hundreds of thousands of job losses in the tech industry alone. Like other tech bubbles before it, the dot-com crash had a lasting impact on computer-science research, says John Turner, an economist and historian at Queen’s University Belfast, UK. “But it wasn’t all bad,” he adds. “In 2000, a lot of highly skilled electronic engineers and computer scientists lost their jobs” and demand for computer-science graduates plummeted, he says. This led to a drop in the number of computer-science graduates — but despite this, research output didn’t falter, and the average number of computer-science publications continued to rise each year during and after the dot-com crash (see ‘Dot-com crash aftermath’). Similarly, the roll-out of telecommunication technologies such as mobile phones and the Internet continued unabated.

Sources: National Center for Education Statistics; Artificial Intelligence Index Report 2025.

Brent Goldfarb, an economist at the University of Maryland in College Park, says similar lay-offs of AI researchers and developers would happen were the AI bubble to burst. The biggest impact “would be on the horde of start-ups jumping on the AI bandwagon, like the tenth AI notetaking app or AI scientist”, he says. OpenAI, Google, NVIDIA and other major AI companies “will likely survive”, he says. “The last thing they’ll do is get rid of their scientific core; that’s the path to the future.”

In fact, crashes can have a silver lining: they can push innovation into other sectors when leading scientists change jobs, Turner says. Take, for instance, the crash of the British bicycle bubble of 1896. “Motorcycles, motorcars, the Wright brothers; all can trace their origins to the bicycle bubble,” he says.
“The railway ‘manias’ of the nineteenth century left the legacy of railways for the benefit of people, much like the dot-com bubble gave society the Internet.”

Liberated researchers

Currently, the tech industry eclipses academia when it comes to AI, in terms of both investment and publication output. Some researchers have called this an “AI brain drain”, which sidelines exploratory science in favour of commercial interest1. “If I’m an AI researcher working at OpenAI, why would I go to a university when I earn ten times the salary?” Goldfarb says.

Could industry lay-offs after an AI crash have the opposite effect, and push more researchers into academic jobs? Possibly, says Goldfarb, adding that “AI researchers coming back to academia would be good to train future generations”. But he doubts whether enough AI researchers would be drawn into academia to make universities a dominant centre of AI research. Tech lay-offs in 2022 and 2023 were the worst since the dot-com bubble, but there is little indication that they have affected academic AI research — industry has gained most of the PhD graduates in AI research, and 90% of the largest AI models topping benchmark rankings were developed in industry (see ‘AI brain drain’).

Source: Ahmed, N. et al. Science 379, 884–886 (2023).

David Kirsch, a historian of modern technology at the University of Maryland, says that even if they go into academia, the “talent liberated from an AI bust” would go on to create tools that are much more valuable for society than for the companies that created the AI models. The protein-folding software AlphaFold, for example, is “super useful” for solving problems in biology. “I could imagine researchers solving other historically challenging things that need to combine AI and deep human knowledge to generate meaningful innovation,” he says.

There are already signs that this is happening. Top AI researchers left OpenAI, Meta and Google this year to found Periodic Labs, a start-up in San Francisco that aims to use AI to accelerate scientific discoveries in physics and chemistry. And Meta chief executive Mark Zuckerberg’s plans to push for AI ‘superintelligence’ have led the company’s chief scientist Yann LeCun to say he intends to leave the company and launch his own start-up, developing “world models” — neural networks that understand the physical properties of the real world and can plan actions rather than just react to prompts.

Whatever happens to the AI bubble, the money and human resources invested in it will spread innovation into sectors outside the tech industry, says Turner. “The question is: what is that ‘something else’ in AI?”
Published: 2025-11-19 | Nature

Before a car crash in 2008 left her paralysed from the neck down, Nancy Smith enjoyed playing the piano. Years later, Smith started making music again, thanks to an implant that recorded and analysed her brain activity. When she imagined playing an on-screen keyboard, her brain–computer interface (BCI) translated her thoughts into keystrokes — and simple melodies, such as ‘Twinkle, Twinkle, Little Star’, rang out1.

But there was a twist. For Smith, it seemed as if the piano played itself. “It felt like the keys just automatically hit themselves without me thinking about it,” she said at the time. “It just seemed like it knew the tune, and it just did it on its own.”

Smith’s BCI system, implanted as part of a clinical trial, trained on her brain signals as she imagined playing the keyboard. That learning enabled the system to detect her intention to play hundreds of milliseconds before she consciously attempted to do so, says trial leader Richard Andersen, a neuroscientist at the California Institute of Technology in Pasadena.

Smith is one of roughly 90 people who, over the past two decades, have had BCIs implanted to control assistive technologies, such as computers, robotic arms or synthetic voice generators. These volunteers — paralysed by spinal-cord injuries, strokes or neuromuscular disorders, such as motor neuron disease (amyotrophic lateral sclerosis) — have demonstrated how command signals for the body’s muscles, recorded from the brain’s motor cortex as people imagine moving, can be decoded into commands for connected devices.

But Smith, who died of cancer in 2023, was among the first volunteers to have an extra interface implanted in her posterior parietal cortex, a brain region associated with reasoning, attention and planning. Andersen and his team think that by also capturing users’ intentions and pre-motor planning, such ‘dual-implant’ BCIs will improve the performance of prosthetic devices.

Nancy Smith used a brain–computer interface to make music after a car accident left her paralysed from the neck down. Credit: Caltech

Andersen’s research also illustrates the potential of BCIs that access areas outside the motor cortex. “The surprise was that when we go into the posterior parietal, we can get signals that are mixed together from a large number of areas,” says Andersen. “There’s a wide variety of things that we can decode.”

The ability of these devices to access aspects of a person’s innermost life, including preconscious thought, raises the stakes on concerns about how to keep neural data private. It also poses ethical questions about how neurotechnologies might shape people’s thoughts and actions — especially when paired with artificial intelligence. Meanwhile, AI is enhancing the capabilities of wearable consumer products that record signals from outside the brain. Ethicists worry that, left unregulated, these devices could give technology companies access to new and more precise data about people’s internal reactions to online and other content.

Ethicists and BCI developers are now asking how previously inaccessible information should be handled and used. “Whole-brain interfacing is going to be the future,” says Tom Oxley, chief executive of Synchron, a BCI company in New York City. He predicts that the desire to treat psychiatric conditions and other brain disorders will lead to more brain regions being explored.
Along the way, he says, AI will continue to improve decoding capabilities and change how these systems serve their users. “It leads you to the final question: how do we make that safe?”

Consumer concerns

Consumer neurotech products capture less-sophisticated data than implanted BCIs do. Unlike implanted BCIs, which rely on the firings of specific collections of neurons, most consumer products rely on electroencephalography (EEG). This measures ripples of electrical activity that arise from the averaged firing of huge neuronal populations and are detectable on the scalp. Rather than being designed to capture the best recording possible, consumer devices are designed to be stylish (such as in sleek headbands) or unobtrusive (with electrodes hidden inside headphones or headsets for augmented or virtual reality).

Still, EEG can reveal overall brain states, such as alertness, focus, tiredness and anxiety levels. Companies already offer headsets and software that give customers real-time scores relating to these states, with the intention of helping them to improve their sports performance, meditate more effectively or become more productive, for example.

AI has helped to turn noisy signals from suboptimal recording systems into reliable data, explains Ramses Alcaide, chief executive of Neurable, a neurotech company in Boston, Massachusetts, that specializes in EEG signal processing and sells a headphone-based headset for this purpose. “We’ve made it so that EEG doesn’t suck as much as it used to,” Alcaide says. “Now, it can be used in real-life environments, essentially.”

And there is widespread anticipation that AI will allow further aspects of users’ mental processes to be decoded. For example, Marcello Ienca, a neuroethicist at the Technical University of Munich in Germany, says that EEG can detect small voltage changes in the brain that occur within hundreds of milliseconds of a person perceiving a stimulus. Such signals could reveal how their attention and decision-making relate to that specific stimulus.

Although accurate user numbers are hard to gather, many thousands of enthusiasts are already using neurotech headsets. And ethicists say that a big tech company could suddenly catapult the devices to widespread use. Apple, for example, patented a design in 2023 for EEG sensors for future use in its AirPods wireless earphones.

Yet unlike BCIs aimed at the clinic, which are governed by medical regulations and privacy protections, the consumer BCI space has little legal oversight, says David Lyreskog, an ethicist at the University of Oxford, UK. “There’s a wild west when it comes to the regulatory standards,” he says.

In 2018, Ienca and his colleagues found that most consumer BCIs don’t use secure data-sharing channels or implement state-of-the-art privacy technologies2. “I believe that has not changed,” Ienca says. What’s more, a 2024 analysis3 of the data policies of 30 consumer neurotech companies by the Neurorights Foundation, a non-profit organization in New York City, showed that nearly all of the firms had complete control over the data users provided. That means most firms can use the information as they please, including selling it.

Responding to such concerns, the government of Chile and the legislators of four US states have passed laws that give direct recordings of any form of nerve activity protected status.
But Ienca and Nita Farahany, an ethicist at Duke University in Durham, North Carolina, fear that such laws are insufficient because they focus on the raw data and not on the inferences that companies can make by combining neural information with parallel streams of digital data. Inferences about a person’s mental health, say, or their political allegiances could still be sold to third parties and used to discriminate against or manipulate a person. “The data economy, in my view, is already quite privacy-violating and cognitive-liberty-violating,” Ienca says. Adding neural data, he says, “is like giving steroids to the existing data economy”.

Several key international bodies, including the United Nations cultural organization UNESCO and the Organisation for Economic Co-operation and Development, have issued guidelines on these issues. Furthermore, in September, three US senators introduced an act that would require the Federal Trade Commission to review how data from neurotechnology should be protected.

Heading to the clinic

While their development advances at pace, so far no implanted BCI has been approved for general clinical use. Synchron’s device is closest to the clinic. This relatively simple BCI allows users to select on-screen options by imagining moving their foot. Because it is inserted into a blood vessel on the surface of the motor cortex, it doesn’t require neurosurgery. It has proved safe, robust and effective in initial trials4, and Oxley says Synchron is discussing a pivotal trial with the US Food and Drug Administration that could lead to clinical approval.

Elon Musk’s neurotech firm Neuralink in Fremont, California, has surgically implanted its more complex device in the motor cortices of at least 13 volunteers, who are using it to play computer games, for example, and control robotic hands. Company representatives say that more than 10,000 people have joined waiting lists for its clinical trials. At least five more BCI companies have tested their devices in humans for the first time over the past two years, making short-term recordings (on timescales ranging from minutes to weeks) in people undergoing neurosurgical procedures.

Researchers in the field say the first approvals are likely to be for devices in the motor cortex that restore independence to people who have severe paralysis — including BCIs that enable speech through synthetic voice technology.

As for what’s next, Farahany says that moving beyond the motor cortex is a widespread goal among BCI developers. “All of them hope to go back further in time in the brain,” she says, “and to get to that subconscious precursor to thought.”

Last year, Andersen’s group published a proof-of-concept study5 in which internal dialogue was decoded from the parietal cortex of two participants, albeit with an extremely limited vocabulary. The team has also recorded from the parietal cortex while a BCI user played the card game blackjack (pontoon)6. Certain neurons responded to the face values of cards, whereas others tracked the cumulative total of a player’s hand. Some even became active when the player decided whether to stick with their current hand or take another card.

Casey Harrell (with his wife Levana Saxon) uses his brain implant to generate synthetic speech. Credit: Ian Bates/New York Times/Redux/eyevine

Both Oxley and Matt Angle, chief executive of BCI company Paradromics, based in Austin, Texas, agree that BCIs in brain regions other than the motor cortex might one day help to diagnose and treat psychiatric conditions.
Maryam Shanechi, an engineer and computer scientist at the University of Southern California in Los Angeles, is working towards this goal — in part by aiming to identify and monitor neural signatures of psychiatric diseases and their symptoms7. BCIs could potentially track such symptoms in a person, deliver stimulation that adjusts neural activity and quantify how the brain responds to that stimulation or other interventions. “That feedback is important, because you want to precisely tailor the therapy to that individual’s own needs,” Shanechi says.

Shanechi does not yet know whether the neural correlates of psychiatric symptoms will be trackable across many brain regions or whether they will require recording from specific brain areas. Either way, a central aspect of her work is building foundation models of brain activity. Such models, constructed by training AI algorithms on thousands of hours of neural data from numerous people, would in theory be generalizable across individuals’ brains.

Synchron is also using the learning potential of AI to build foundation models, in collaboration with the AI and chip company NVIDIA in Santa Clara, California. Oxley says these models are revealing unexpected signals in what was thought to be noise in the motor cortex. “The more we apply deeper learning techniques,” he says, “the more we can separate out signal from noise. But it’s not actually signal from noise, it’s signal from signal.”

Oxley predicts that BCI data integrated with multimodal streams of digital data will increasingly be able to make inferences about people’s inner lives. After evaluating that data, a BCI could respond to thoughts and wants — potentially subconscious ones — in ways that might nudge thinking and behaviour. Shanechi is sceptical. “It’s not magic,” she says, emphasizing that what BCIs can detect and decode is limited by the training data, which is challenging to obtain.

The I in AI

In unpublished work, researchers at Synchron have found that, like Andersen’s team, they can decode a type of preconscious thought with the help of AI. In this case, it’s an error signal that happens just before a user selects an unintended on-screen option. That is, the BCI recognizes that the person has made a mistake slightly before the person is aware of their mistake. Oxley says the company must now decide how to use this insight. “If the system knows you’ve just made a mistake, then it can behave in a way that is anticipating what your next move is,” he says.

Automatically correcting mistakes would speed up performance, he says, but would do so by taking action on the user’s behalf. Although this might prove uncontroversial for BCIs that record from the motor cortex, what about BCIs that are inferring other aspects of a person’s thinking? Oxley asks: “Is there ever going to be a moment at which the user enables a feature to act on their behalf without their consent?”

Angle says that the addition of AI has introduced an “interesting dial” that allows BCI users to trade off agency and speed. When users hand over some control, such as when brain data are limited or ambiguous, “will people feel that the action is disembodied, or will they just begin to feel that that was what they wanted in the first place?” Angle asks.

Farahany points to Neuralink’s use of the AI chatbot Grok with its BCI as an early example of the potentially blurry boundaries between person and machine.
One research volunteer who is non-verbal can generate synthetic speech at a typical conversational speed with the help of his BCI and Grok. The chatbot suggests and drafts replies that help to speed up communication.

Although many people now use AI to draft e-mails and other responses, Farahany suspects that a BCI-embedded AI chatbot that mediates a person’s every communication is likely to have an outsized influence over what a user ends up saying. This effect would be amplified if an AI were to act on intentions or preconscious ideas. The chatbot, with its built-in design features and biases, she argues, would mould how a person thinks. “What you express, you incorporate into your identity, and it unconsciously shapes who you are,” she says.

Farahany and her colleagues argued in a July preprint8 for a new form of BCI regulation that would give developers in both experimental and consumer spaces a legal fiduciary duty to users of their products. As happens with a lawyer and their client, or a physician and their patient, the BCI developers would be duty-bound to act in the user’s best interests.

Previous thinking about neurotech, she says, was centred mainly on keeping users’ brain data private, to prevent third parties from accessing sensitive personal information. Going forward, the questions will be more about how AI-empowered BCI systems work in full alignment with users’ best interests. “If you care about mental privacy, you should care a lot about what happens to the data when it comes off of the device,” she says. “I think I worry a lot more about what happens on the device now.”
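At the core of the motor BCIs described above is a decoding step: a statistical model that maps patterns of neural firing onto an intended movement. The snippet below is a minimal, self-contained sketch of that general idea, using synthetic data, hypothetical variable names and ridge regression; it is not the pipeline used by any of the groups or companies mentioned in this article.

```python
# Minimal sketch of a motor-intention decoder: map binned spike counts to an
# intended 2-D cursor velocity with ridge regression. Illustrative only --
# the data are synthetic and all names are hypothetical.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

n_samples, n_channels = 5000, 96                 # e.g. 50-ms bins from a 96-channel array
true_tuning = rng.normal(size=(n_channels, 2))   # each channel's preferred direction

velocity = rng.normal(size=(n_samples, 2))       # intended (vx, vy): the training target
rates = velocity @ true_tuning.T + rng.normal(scale=0.5, size=(n_samples, n_channels))

# Fit during a calibration block in which the user imagines prescribed movements.
decoder = Ridge(alpha=1.0).fit(rates[:4000], velocity[:4000])

# At run time, each new bin of firing rates becomes a velocity command.
predicted = decoder.predict(rates[4000:])
print("decoding R^2:", decoder.score(rates[4000:], velocity[4000:]))
```

Real systems add calibration sessions, recursive filtering and far richer models, but the principle is the same: regress the intended action onto the recorded activity, then run the fitted model on live data.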
Published: 2025-11-19 | Nature

Download the Nature Podcast 19 November 2025

In this episode:

00:45 A molecule that delivers insulin through the skin

Researchers have developed a skin-permeable polymer that can deliver insulin into the body, which they say could one day offer an alternative to injections for diabetes management. The skin’s structure presents a formidable barrier to the delivery of large drugs, but in this work a team shows that their polymer can penetrate through the different layers without causing damage. Insulin attached to this polymer was able to reduce blood glucose levels in animal models of diabetes at a speed comparable to that of injected insulin. While further research is required on the long-term safety of this strategy, the team hopes it could offer a way to non-invasively deliver other large-molecule drugs into the body.

Research Article: Wei et al.

09:23 Research Highlights

How extreme drought may be humanity’s biggest challenge after a huge volcanic eruption — plus, turning a bacterium into a factory for a colour-changing pigment.

Research Highlight: Volcano mega-eruptions lead to parched times

Research Highlight: Dye or die: bacterium forced to make pigment to stay alive

11:42 How language lights up the brain, whatever the tongue

The human brain responds in a similar way to both familiar and unfamiliar languages, but there are some key differences, according to new research — a finding that may explain why learning a language can be difficult. A study involving 34 people showed that listening to an unfamiliar language triggers neural activity similar to that triggered by a person’s native tongue. The finding implies that human speech triggers a common reaction in the brain regardless of understanding. However, there were subtle differences when listening to a known language that may help explain how people actually understand words.

Research Article: Bhaya-Grossman et al.

Neuron: Zhang et al.

Sounds used under CC BY 4.0

27:18 Briefing Chat

Signs that greenhouse-gas emissions may peak around 2030 — plus, evidence of dog breeding by ancient humans.

Nature: Global greenhouse-gas emissions are still rising: when will they peak?

Nature: How ancient humans bred and traded the first domestic dogs

Subscribe to Nature Briefing, an unmissable daily round-up of science news, opinion and analysis, free in your inbox every weekday.

Never miss an episode. Subscribe to the Nature Podcast on Apple Podcasts, Spotify, YouTube Music or your favourite podcast app. An RSS feed for the Nature Podcast is available too.
Published: 2025-11-19 | Nature

People with measles are barred from entering a hospital in Canada, which has seen a surge in cases of the highly infectious disease in 2025. Credit: Nicole Osborne/The Canadian Press via ZUMA Press

A surge in measles cases has cost Canada its official measles-free designation — and the United States looks likely to follow suit. The spike in Canada’s measles rate has been dramatic: so far, there have been 4,843 confirmed cases in 2025, up from just 147 cases in 2024 (see ‘Canadian surge’). Meanwhile, the United States has had more than 1,720 confirmed cases this year, more than in any year in the past three decades. If the disease continues to spread until January 2026, the United States could lose its ‘measles elimination’ status — a label applied to regions that have had no endemic measles transmission for at least 12 months — early next year.

Source: Health Infobase, Government of Canada

Public-health officials say that the surge in cases in North America is concerning, and possibly a sign of worse things to come. But these outbreaks are not unprecedented in the broader picture. The global number of measles cases was much higher in 2019 (see ‘Up and down and up again’), when Africa was hit hard by outbreaks. And 2024 was particularly bad in Europe; the region saw double the number of measles cases it recorded in 2023, and the United Kingdom declared a national incident.

Source: World Health Organization

Health officials around the world have been trying to quash measles for decades, with variable success. Countries and regions come and go from the measles-free list frequently. “Elimination is a fragile state,” says William Moss, an epidemiologist at the Johns Hopkins Bloomberg School of Public Health in Baltimore, Maryland. The Americas became the first — and, so far, only — World Health Organization (WHO) region to be declared measles-free, in 2016, but this status didn’t last long, thanks to an outbreak in Venezuela in 2018 that spread to Brazil. The region clawed back its status in 2024, but has now lost it again because of the 10 November decision about Canada, made by the Pan American Health Organization.

High bar

The WHO recommends that countries vaccinate 95% of children with two doses of measles-containing vaccine. Few countries hit that mark: in the European Union in 2023, for example, only four countries achieved it. “That’s an aspirational goal. A country can eliminate measles without 95% coverage,” notes Moss; outbreaks tend to happen in communities in which coverage is much lower than the national average. Globally, the rate of measles vaccination with one dose — which provides some protection — has hovered between 81% and 86% for more than a decade, and is currently recovering from a dip that occurred during the pandemic as a result of disruptions in vaccine delivery (see ‘Widespread protection’). Many countries, including the United States and Canada, maintain a rate of around 90% for one dose.

Source: UNICEF

However, those high national rates can mask gaps and problematic trends in certain communities or sectors. The rate of immunization with the measles, mumps and rubella (MMR) vaccine among kindergartners in the United States, for example, is creeping downwards, from 95.2% in the 2019–2020 school year to 92.5% in the 2024–2025 school year. And the proportion of children in Canada who received a second dose of measles vaccine dropped from 87% before the COVID-19 pandemic to 79% in 2024.
Some of the declines are due to changing attitudes towards vaccination. This trend is starkly visible in data on the number of US kindergartners receiving exemptions from one or more vaccines (see ‘Sitting it out’). Although exemptions for medical reasons have remained steady, the proportion of children whose parents opt out of vaccination for non-medical reasons (such as religion or personal choice) has risen sharply, to 3.6% in the 2024–25 school year — the highest level since records began.

Source: US Centers for Disease Control and Prevention

One factor at play is complacency, says Marco Cavaleri, head of vaccines strategy for the European Medicines Agency in London. “The problem with these diseases is that, thanks to the vaccines, they have almost disappeared in many places. People think they are gone.”

This year has also seen cuts to foreign aid for vaccination programmes and a change in vaccination policies, particularly in the United States, which mean the measles surge might be a sign of things to come with other diseases. “We tend to see measles outbreaks first, just because it’s so contagious,” says Moss.
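The 95% coverage target quoted above follows from the standard herd-immunity threshold of 1 − 1/R0. The back-of-envelope calculation below is my own illustration (not from the article), assuming the commonly cited measles R0 range of 12–18 and a two-dose vaccine effectiveness of roughly 97%.

```python
# Rough herd-immunity estimate for measles (illustrative; values are textbook
# assumptions, not figures from the article).
for r0 in (12, 18):                  # commonly cited range for measles R0
    threshold = 1 - 1 / r0           # fraction of the population that must be immune
    coverage = threshold / 0.97      # coverage needed given ~97% two-dose effectiveness
    print(f"R0={r0}: immune fraction ~{threshold:.0%}, coverage needed ~{coverage:.0%}")
```

Under these assumptions the required coverage comes out at roughly 94–97%, which is why the WHO's two-dose target sits at 95%.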
Published: 2025-11-18 | Nature

No one could accuse Demis Hassabis of dreaming small. In 2016, the company he co-founded, DeepMind, shocked the world when an artificial intelligence model it created beat the best human player of the strategy game Go. Then Hassabis set his sights even higher: in 2019, he told colleagues that his goal was to win Nobel prizes with the company’s AI tools.

It took only five years for Hassabis and DeepMind’s John Jumper to do so, collecting a share of the 2024 Nobel Prize in Chemistry for creating AlphaFold, the AI that revolutionized the prediction of protein structures.

AlphaFold is just one in a string of science successes that DeepMind has achieved over the past decade. When he co-founded the company in 2010, Hassabis, a neuroscientist and game developer, says his aim was to make “a world-class scientific research lab, but in industry”. In that quest, the company sought to apply the scientific method to the development of AI, and to do so ethically and responsibly by anticipating risks and reducing potential harms. Establishing an AI ethics board was a condition of the firm’s agreement to be acquired by Google in 2014 for around US$400 million, according to media reports.

Now Google DeepMind is trying to replicate the success of AlphaFold in other fields of science. “We’re applying AI to nearly every other scientific discipline now,” says Hassabis.

Demis Hassabis co-founded DeepMind in 2010. Credit: Antonio Olmos/Guardian/eyevine

But the climate for this marriage of science and industry has changed drastically since the release of ChatGPT in 2022 — an event that Hassabis calls a “wake-up moment”. The arrival of chatbots and the large language models (LLMs) that power them led to an explosion in AI usage across society, as well as a scramble by a growing number of well-funded competitors to achieve human-level artificial general intelligence (AGI). Google DeepMind is now racing to release commercial products — including iterations of the firm’s Gemini LLMs — almost weekly, while continuing its machine-learning research and producing science-specific models. The acceleration has made doing responsible AI harder, and some staff are unhappy with the firm’s more commercial outlook, say several former employees. All of this raises questions about where DeepMind is headed, and whether it can achieve blockbuster successes in other fields of science.

Nobel bound

At Google DeepMind’s slick headquarters in London’s King’s Cross technology hub, gleaming geometric sculptures and the smell of espresso hang in the reception hall. Time is so precious that staff members — thought to number between 500 and 1,000 worldwide — can pick up a scooter to race the few hundred metres from one office to another.

It’s a far cry from the humble origins of the company, which sought to build general AI systems by melding ideas from neuroscience and machine learning. “They were absolutely just super geniuses,” says Joanna Bryson, a computer scientist and researcher in AI ethics at the Hertie School in Berlin. “They were these 12 guys that everybody wanted.”

The laboratory pioneered the deep-learning AI technique, which uses simulated neurons to learn associations in data after studying real-world examples, as well as reinforcement learning, in which a model learns by trial, error and reward.
After applying these techniques to teach models how to play arcade games1 in 2015 and master the ancient game of Go2 in 2016, DeepMind turned its sights to its first scientific problem — predicting the 3D structure of proteins3 from their constituent amino acids.

A member of the AlphaFold team examines a prediction of a protein structure. Credit: Alecsandra Dragoi for Nature

Hassabis first came across the puzzle of protein structure as an undergraduate at the University of Cambridge, UK, in the 1990s and noted it as a problem that AI might one day help to solve. AI learning techniques require a database of examples as well as clear metrics of success that guide the model’s progress. Thanks to a long-standing database of known structures and an established competition that judged the accuracy of predictions, proteins had both. Protein folding also ticked a crucial box for Hassabis: it is a ‘root node’ problem that, once solved, opens up branches of downstream research and applications. Those kinds of problem “are worth spending five years or ten years on, and loads of computers and researchers”, he says.

DeepMind released its first iteration of AlphaFold in 2018, and by 2020, its performance far outstripped that of tools from any other team. Today, a spin-off from DeepMind, Isomorphic Labs, is seeking to use AlphaFold in drug discovery. And DeepMind’s AlphaFold database of more than 200 million protein-structure predictions has been used in a range of research efforts, from improving bees’ immunity to disease in the face of global population declines, to screening for antiparasitic compounds to treat Chagas disease, a potentially life-threatening parasitic infection4.

Science is not just a source of problems to solve; the firm tries to approach all of its AI development in a scientific way, says Pushmeet Kohli, who leads the company’s science efforts. Researchers tend to go back to first principles for each problem and try fresh techniques, he says. Staff members at many other AI firms are more like engineers, applying ingenuity but not doing basic discovery, says Jonathan Godwin, chief executive of the AI firm Orbital Materials in London, who was a researcher at Google DeepMind until the end of 2022.

John Jumper and Pushmeet Kohli speak to researcher Olaf Ronneberger in the DeepMind offices. Credit: Alecsandra Dragoi for Nature

But replicating the success of AlphaFold will be tough: “Not many scientific endeavours work like that,” says Godwin.

Unlocking the genome

Google DeepMind is throwing its resources at several problems for which it thinks AI could speed development, and which could have “transformative impact”, says Kohli. These include weather forecasting5 and nuclear fusion, which has the potential to become a clean, abundant energy source. The company picks projects through a strict selection process, but individual researchers can choose which to work on and how to tackle a problem, he says. AI models that work on such problems often require specialized data and researchers to program knowledge into them.

One project that shows promise, says Kohli, is AlphaGenome, which launched in June as an attempt to decipher long stretches of human non-coding DNA and predict their possible functions6. But the challenge is harder than for AlphaFold, because each sequence yields multiple valid functions.

Materials science is another area in which the company hopes that AI could be revolutionary.
Materials are hard to model because the complex interactions of atomic nuclei and electrons can only be approximated. Learning from a database of simulated structures, DeepMind developed its GNoME model, which in 2023 predicted 400,000 potential new substances7. Now, Kohli says, the team is using machine learning to develop better ways to simulate electron behaviour — ones that are learnt from example interactions rather than derived from the principles of physics. The end goal is to predict materials with specific properties, such as magnetism or superconductivity, he says. “We want to see the era where AI can basically design any material with any sort of magical property that you want, if it is possible,” he says.

John Jumper and Pushmeet Kohli in the headquarters building. Credit: Alecsandra Dragoi for Nature

AI models have a variety of known safety issues, from the risk of being used to create bioweapons to the perpetuation of racial and gender-based biases, and these come to the fore when releasing models into the world. Google DeepMind has a dedicated committee on responsibility and safety that works across the company and is consulted at each major stage of development, says Anna Koivuniemi, who runs its ‘impact accelerator’, an effort to scour society for areas in which AI could make a difference. Committee members stress-test each idea to see what could go wrong, including by consulting externally. “We take it very, very seriously,” she says.

Another advantage the firm has is that its researchers are pursuing the kind of AI that the world ultimately wants, says Godwin. “People don’t really want random videos of themselves being generated and put on a social-media network; they want limitless energy or diseases being cured,” he says.

But DeepMind now has company in the quest to use AI for science. Some firms that started out making LLMs seem to be coming around to Hassabis’s vision of AI for science. In the past two months, both OpenAI and the Paris-based AI firm Mistral have created teams dedicated to scientific discovery.

Company concerns

For AI companies and researchers, OpenAI’s 2022 release of ChatGPT changed everything. Its success was “pretty surprising to everyone”, says Hassabis. Following that watershed moment, in 2023 DeepMind merged with Google Brain, Google’s other major AI research team, to centralize its AI expertise and compete with other companies in rolling out LLMs. The newly formed Google DeepMind launched Google’s first commercial LLM, Gemini, in December 2023.

The acceleration of AI development means the firm now has a commercial imperative, as well as a research one, says Hassabis. On the one hand, that means more investment, computing resources and intensity of work. On the other hand, it makes it harder to focus on doing research for its own sake, foreseeing the impacts of technologies and deploying them responsibly, he says. “That’s something that is harder when you’re in this commercial kind of flywheel,” he adds.

The shift means more competition for talent, and the company has been forced to take on more of an engineering-focused culture to stay at the cutting edge, says Godwin. However, Google DeepMind researchers working on AI for science face fewer commercial demands than do their colleagues who work on other projects, he says.

There is some evidence that the increased competition has also changed the company’s openness towards publishing its work.
Nicholas Carlini, an AI safety researcher who joined Google DeepMind from Google Brain during the merger, left earlier this year, claiming in an open letter that it had become harder to publish at the company. The share of papers co-authored by researchers listed as being at DeepMind, Google Brain or Google DeepMind, for example, has fallen at three top AI conferences — NeurIPS, ICLR and ICML — from 10.5% at its peak in 2018 to 4.5% in 2024, although the overall number of papers from these researchers has risen sharply over these years (see ‘DeepMind’s research output’).

Source: Nature analysis

Some staff members also objected to Google DeepMind’s decision, in February, to drop from its ‘AI principles’ its commitment not to apply AI to surveillance or weapons. In April, the Financial Times reported that around 300 UK staff members were making moves to unionize, in protest at the company’s stance on military involvement.

“The updates to our AI Principles aren’t a wholesale change,” a Google DeepMind spokesperson told Nature, “but rather an opportunity for deeper engagement. Our core commitment — to pursue AI where benefits substantially outweigh the risks — remains the same.” The spokesperson also said that the company regularly updates policies “to preserve the ability for our teams to publish and contribute to the broader research ecosystem”. The company calculates that its share of papers accepted at top AI conferences in 2024 was about the same as its long-term average; this calculation tracks yearly publications by current Google DeepMind staff members.

Like other AI companies, Google DeepMind is pursuing AGI — a somewhat vague term for a system that is resourceful enough to excel at any cognitive task. Many companies that specialize in LLMs are betting that AGI can be reached by scaling up these models, increasing the data and computing power available until they develop skills general enough to tackle any task. But for that, Hassabis says, fresh conceptual breakthroughs in AI techniques will probably be needed. In that regard, the company’s wide research base could bear fruit, say some researchers. The firm has “intellectual diversity there that I don’t see in the other companies”, says Gary Marcus, a neuroscientist at New York University. “I’ve always thought they have a better chance of actually getting to AGI than the other companies,” which are more focused on LLMs.

Wendy Hall, a computer scientist at the University of Southampton, UK, says there’s another key difference between DeepMind and other companies pursuing AGI. Hassabis “understands the boundaries of what they’re doing” and considers what it might mean for humanity to reach AGI, she says. Hassabis says he feels a duty to demonstrate a responsible scientific approach, in contrast to Silicon Valley’s “move fast and break things” method.

Despite the pressures, Google DeepMind has a chance to do things better than other firms, says Bryson. “They’re in Europe, they have that bit of distance. And they were never in it only for the money,” she says. “But I don’t know if it’s enough.”
Published: 2025-11-18 | Nature

Drugs such as Ozempic and Mounjaro act on GLP-1 receptors in the brain to regulate appetite. Credit: Oliver Berg/dpa via Alamy

The obesity drug tirzepatide, sold as Mounjaro or Zepbound, can suppress patterns of brain activity associated with food cravings, a study suggests. Researchers measured the changing electrical signals in the brain of a person with severe obesity who had experienced persistent ‘food noise’ — intrusive, compulsive thoughts about eating — shortly after the individual began taking the medication.

The study is the first to use electrodes to directly measure how blockbuster obesity drugs that mimic the hormone GLP-1 affect brain activity in people, and to hint at how they curb extreme food cravings. “It’s a great strategy to try and find a neural signature of food noise, and then try to understand how drugs can manipulate it,” says Amber Alhadeff, a neuroscientist at the Monell Chemical Senses Center in Philadelphia, Pennsylvania. The findings were published today in Nature Medicine1.

Bonus finding

Casey Halpern, a neurosurgeon-scientist at the University of Pennsylvania in Philadelphia, and his colleagues did not set out to investigate the effects of obesity drugs on the brain. The team’s goal was to test whether a type of deep brain stimulation — a therapy that involves delivering a weak electrical current directly into the brain — can help to reduce compulsive eating in people with obesity for whom treatments such as bariatric surgery haven’t worked.

The scientists set up a study in which participants had an electrode implanted into their nucleus accumbens, a region of the brain that is involved in feelings of reward. It also expresses the GLP-1 receptor, notes Christian Hölscher, a neuroscientist at the Henan Academy of Innovations in Medical Science in Zhengzhou, China, “so we know that GLP-1 plays a role in modulating reward here”. This type of electrode, which can both record electrical activity and deliver an electrical current when needed, is already used in people to treat some forms of epilepsy.

For the study’s first two participants, the researchers found that episodes of intense food noise were accompanied by a surge in low-frequency brain activity. This pattern suggested that these changes could serve as a measurable sign of compulsive food cravings. The third trial participant, a 60-year-old woman, had just started taking a high dose of tirzepatide — which had been prescribed by her physician to treat type 2 diabetes — when she had the electrode implanted. “We took advantage of this serendipitous opportunity because of the excitement around these drugs,” Halpern says.

Food noise silenced

In the following months, while taking the drug, her compulsions to binge eat vanished. “It was very striking to see such an absence in food noise, in somebody who had a long history of cravings and food preoccupation,” says Halpern. “Also very striking was that this was preceded by this profound silence in the nucleus accumbens, in terms of the electrical activity that you record in that area,” he adds.

Between five and seven months after electrode implantation, however, the researchers observed that the type of brain activity that had been associated with food compulsion in the other two participants started to ramp up in the third participant’s brain. They wondered whether that was a warning sign that the food noise was going to return. And it did.
“All of a sudden, the signal that we had identified before was there, and it happened before the return of the food cravings and the loss-of-control eating behaviour,” says Halpern.

The fact that the neural biomarker preceded the return of the food noise makes the link between the two “pretty compelling”, says Alhadeff, despite the finding being in just one person. “It will have to be validated in more people,” she adds.

The participant was still taking tirzepatide when her food cravings returned, which suggests that she might have developed a tolerance to this effect of the medication. It might also be that the GLP-1 receptors in that brain region become desensitized to the drug, says Hölscher.

Halpern hopes that the findings will inspire companies to design drugs that specifically target food noise. “Right now, these drugs are optimized for weight loss,” he says. “That temporarily seems to help people with food noise, but it may not be a long-term, durable treatment.”
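The ‘neural signature’ described in this study is a rise in low-frequency power in recordings from the nucleus accumbens. A generic way to track a biomarker of that kind is to compute band power over sliding windows and flag sustained excursions above a baseline. The sketch below is a hypothetical illustration of that general approach on synthetic data; it is not the analysis used in the Nature Medicine paper, and the band limits, window length and threshold are assumptions.

```python
# Hypothetical sketch: track low-frequency band power in a local field potential
# and flag sustained rises above baseline, as a stand-in for a craving biomarker.
import numpy as np
from scipy.signal import welch

fs = 1000                                               # sampling rate in Hz (assumed)
lfp = np.random.default_rng(1).normal(size=fs * 600)    # 10 minutes of synthetic signal

def low_freq_power(segment, fs, band=(1.0, 8.0)):
    """Average spectral power in a low-frequency band, via Welch's method."""
    freqs, psd = welch(segment, fs=fs, nperseg=fs * 2)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

window = 10 * fs                                        # 10-second sliding windows
powers = np.array([low_freq_power(lfp[i:i + window], fs)
                   for i in range(0, len(lfp) - window, window)])

baseline = np.median(powers)
flagged = powers > 2 * baseline                         # crude threshold for an excursion
print(f"{flagged.sum()} of {len(powers)} windows exceed twice the baseline power")
```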
Published: 2025-11-17 | Nature

Researchers gave an AI data from physics experiments involving systems with pendulum-like motion to see if it could derive basic laws of physics. Credit: stefilyn/Getty

Most artificial-intelligence (AI) models can reliably identify patterns in data and make predictions, but struggle to use that data to come up with broad scientific concepts, such as the laws of gravity. Now, a team in China has developed a system called AI-Newton that, after being fed experimental data, can autonomously ‘discover’ key physics principles, such as Newton’s second law describing the effect of force and mass on acceleration.

The model mimics the human scientific process by incrementally building a knowledge base of concepts and laws, says Yan-Qing Ma, a physicist at Peking University in Beijing who helped to develop the system. Being able to identify useful concepts means that the system can potentially discover scientific insights without human pre-programming, Ma adds.

Keyon Vafa, a computer scientist at Harvard University in Cambridge, Massachusetts, explains that AI-Newton uses an approach called symbolic regression, in which the model hunts for the best mathematical equation to represent physical phenomena. This technique is a promising method for scientific discovery, he adds, because the system is programmed in a way that encourages it to deduce concepts.

The team at Peking University used a simulator to generate data from 46 physics experiments1 involving the free motion of balls and springs, collisions between objects and the behaviour of systems exhibiting vibrations, oscillations and pendulum-like motion. The simulator also deliberately introduced statistical errors to mimic real-world data. For example, AI-Newton was given data on the position of a ball at a given time and asked to come up with a mathematical equation that explains the relationship between the two variables of time and position. It was able to provide an equation for velocity. It stored this knowledge for the next set of tasks, in which it successfully derived the mass of the ball using Newton’s second law. The results have not yet been peer reviewed.

Planetary trajectories

Scientists have previously used AI models to predict planetary orbits. In 2019, researchers at the Swiss Federal Institute of Technology (ETH) in Zurich developed ‘AI Copernicus’, a neural network that used Earth-based observations to derive formulae for planetary trajectories. In that case, humans were needed to interpret the equations and understand how they relate to the movement of the planets around the Sun.

Vafa and his colleagues at the Massachusetts Institute of Technology in Cambridge tried a similar experiment with several foundation models — a type of AI model trained on vast data sets — including large language models such as GPT, Claude and Llama. They trained the models to predict the location of planets across solar systems and then asked them to predict the forces that govern planetary trajectories. In a preprint2, the researchers showed that when the models were trained on orbital trajectories, they could not use that knowledge to do any task other than predicting planetary trajectories. And when the models were asked to turn the orbital-trajectory data into a law about how forces behave, they derived a nonsensical law of gravitation. “A language model trained to predict the outcome of physics experiments will not encode concepts in a simple and parsimonious way.
Instead, it will find some very non-human way to approximate physical solutions,” says Vafa. David Powers, who specializes in computer and cognitive science at Flinders University in Adelaide, Australia, says that models that can derive scientific laws are useful. However, for AI to make autonomous discoveries, it would need to be involved in other stages of a project, such as identifying problems worth solving, determining what experiments need to be done, analysing the data generated and creating hypotheses or confirming predictions. “Experimental science is about coming up with the variables of interest and performing systematic experiments to obtain the data and verify predictions,” Powers says. Ma agrees that models such as AI-Newton are far from making truly autonomous discoveries, but he thinks their work can help to train future AIs to use real-world data to discover new general laws of physics. His team is now testing whether the model can uncover quantum theories.
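Symbolic regression, the approach Vafa describes, searches over candidate mathematical expressions rather than fitting the weights of a fixed model, trading accuracy against the complexity of the formula. The toy example below is my own illustration of that idea, not AI-Newton’s actual algorithm: it searches a tiny hand-written library of expression forms for the one that best explains noisy position-versus-time data from a simulated free fall.

```python
# Toy illustration of symbolic regression (not AI-Newton itself): search a small
# library of candidate expressions for the one that best explains noisy
# position-vs-time data, penalizing more complex formulas.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 2, 50)
x = 0.5 * 9.8 * t**2 + rng.normal(scale=0.05, size=t.size)   # simulated fall, with noise

# Candidate forms: name -> (design matrix of basis terms, complexity score)
candidates = {
    "a*t":          (t[:, None],                1),
    "a*t + b":      (np.c_[t, np.ones_like(t)], 2),
    "a*t**2":       ((t**2)[:, None],           2),
    "a*t**2 + b*t": (np.c_[t**2, t],            3),
}

def score(features, complexity):
    """Least-squares fit of one candidate form; return (penalized error, coefficients)."""
    coef, residual, *_ = np.linalg.lstsq(features, x, rcond=None)
    mse = np.mean((features @ coef - x) ** 2)
    return mse + 1e-3 * complexity, coef       # small penalty favours simpler formulas

best_name, (best_features, best_complexity) = min(
    candidates.items(), key=lambda kv: score(*kv[1])[0])
print("selected form:", best_name,
      "coefficients:", score(best_features, best_complexity)[1])
```

Run on this data, the search selects the quadratic form with a coefficient close to 4.9, i.e. x ≈ ½gt². Systems such as AI-Newton work over a far larger, automatically generated space of expressions and reuse discovered quantities (such as velocity) as building blocks for later laws, but the accuracy-versus-complexity trade-off is the same.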
Published: 2025-11-14 | Nature

Download the Nature Podcast Extra 14 November 2025

Yoshua Bengio, considered by many to be one of the godfathers of AI, has long been at the forefront of machine-learning research. However, his opinions on the technology have shifted in recent years — he joins us to talk about ways to address the risks posed by AI, and his efforts to develop an AI with safety built in from the start.

Nature: ‘It keeps me awake at night’: machine-learning pioneer on AI’s threat to humanity

Subscribe to Nature Briefing, an unmissable daily round-up of science news, opinion and analysis, free in your inbox every weekday.

Never miss an episode. Subscribe to the Nature Podcast on Apple Podcasts, Spotify, YouTube Music or your favourite podcast app. An RSS feed for the Nature Podcast is available too.
Published: 2025-11-14 | Nature

Credit: Qilai Shen/Bloomberg via Getty

China has made no secret of its goal to attract the world’s best scientists. In the past three years, a parade of highly accomplished researchers has emigrated there. Wolfgang Baumeister, a molecular biologist, started working in China in 2019, following nearly three decades at the Max Planck Institute of Biochemistry in Munich, Germany.

Baumeister is the pioneer of cryogenic-electron tomography, which enables researchers to construct 3D images of large molecules and the insides of cells. For this work, he was awarded Hong Kong’s Shaw Prize for life science and medicine this year. Now based at the iHuman Institute at ShanghaiTech University in China, he continues to study the molecular machinery involved in type 2 diabetes.

Nature met with Baumeister in Hong Kong. The following is an edited version of that conversation and his talk to journalists at the Hong Kong Laureate Forum 2025.

Wolfgang Baumeister receiving the Shaw Prize this year. Credit: Hou Yu/China News Service/VCG via Getty

Why did you decide to move to ShanghaiTech University?

My colleagues and I had a big European Research Council grant for work on neurotoxic aggregates inside cells. But we have mandatory retirement in Germany. My contract was extended beyond the normal retirement age, and my colleagues in China knew that and said, ‘Why not come to China and you can continue?’ I also had offers from the United States to continue my research there, but they would have required that I move there permanently. With ShanghaiTech University, I can come and go. I have been there six times this year, typically for two weeks at a time.

What is it like working as a scientist in China?

There are things I had to get used to. For instance, human-resources departments at universities are more powerful here. In my role as managing director of the institute in Munich, I always tried to make sure that administration serves the scientists and does not command them. In Germany, when we bought an instrument, I was used to making that decision myself. What happens here is that the university wants the responsibility for such a decision to be with a committee, whose members are often non-experts. Very often I say, okay, we can pay for the instrument, and then I will be told that the committee will meet in two months and then make the decision. This is often a waste of time. But when it comes to purchasing very expensive, high-end instrumentation, such as a $15-million electron microscope, I just talked to the president of the university for 10 minutes and he approved it. The very big things are often decided spontaneously by the leadership. That is pretty good.

How are tensions between China and the US affecting international scientists in China?

There have been more restrictions coming from the US in the past few years. The current US government is certainly quite restrictive. If you have money from the US National Institutes of Health, you can no longer easily run a lab in both China and the US. Many Americans have problems travelling to China, not only those who are NIH-funded. Some US companies don’t allow their employees to travel to China, or it requires longer negotiations. And then, of course, they are not allowed to take their laptop or cell phone with them. For our Chinese students, getting a visa to travel to the US is increasingly a problem. Even if they do get a visa, it happens that they are still rejected at the border.
There is a conference in Hawaii this December on cryo-electron tomography and cryo-electron microscopy. Attendance from mainland China is limited: it is currently difficult for Chinese investigators to get a visa to go there. The geopolitical situation means science is unfortunately no longer without borders. That is a sad development. I mean, science should be without borders, but we are not living in an ideal world.
Published: 2025-11-14 | Nature

The tiny, drug-filled robots are guided through blood vessels using magnets. Credit: Luca Donati/lad.studio Zürich

A remote-controlled robot the size of a grain of sand can swim through blood vessels to deliver drugs before dissolving into the body. The technology could allow doctors to administer small amounts of drugs to specific sites, avoiding the toxic side effects of body-wide therapies. The microrobots — guided by magnetic fields — work in the blood vessels of pigs and sheep, researchers showed in a paper published in Science on 13 November1.

The system has yet to be trialled in people, but it shows promise because it works in a roughly human-sized body, and because all of its components have already been shown to be biocompatible, says Bradley Nelson, a mechanical engineer at the Swiss Federal Institute of Technology (ETH) in Zurich, who co-led the work.

Around one-third of developed drugs that fail to come to market do so because they’re too toxic2, says Nelson. The team says the microrobots would allow smaller amounts of drugs to be given directly to the affected areas, thereby reducing potential side effects. The technique could be used to target stroke-causing blockages or brain tumours.

“The demonstrations are compelling but still preclinical,” says Wei Gao, a medical engineer at the California Institute of Technology in Pasadena, whose team has developed an alternative robotic drug-delivery system. But if further studies proceed smoothly, remote-controlled drug-delivery robots could be used in the first medical applications within five to ten years, he says.

Credit: ETH Zürich

Robot-delivered drugs

Researchers have explored for decades how to use tiny robots to deliver drugs, including by steering them with ultrasound and by using rotating devices that mimic bacteria. The system developed by the ETH team involves filling a tiny bead of gelatine with a drug, as well as with nanoparticles of magnetic iron oxide, which allow the bead’s movement to be controlled by magnetic fields surrounding the patient.

In trials in the brains of pigs and sheep, the team showed that they could use a catheter to insert the bots before making them roll along the edges of blood vessels, swim against the flow or navigate with the stream at speeds as fast as 40 centimetres per second. They used X-ray images to observe and manoeuvre the bots in real time with millimetre precision. In trials in pigs, the team showed that in more than 95% of cases the drugs were delivered to the correct location. To release the drugs, the team used fast-changing magnetic fields to heat and break down the gelatine. Before the robots are used in humans, researchers will need to monitor how the body clears the leftover nanoparticles, says Gao.

Finding the right mix of materials that allowed the bots to be controlled from a distance, while keeping them small enough to navigate tiny blood vessels, was a “big deal” and took the team 20 years, says Nelson. “It all looks obvious in hindsight. But getting there was the big leap.”

“The next step is looking at doing some kind of clinical trials with this in humans,” he adds.