Robert F. Kennedy Jr wants a study about vaccines retracted. Credit: Tom Williams/CQ Roll Call/Sipa US via Alamy

US health secretary and vaccine sceptic Robert F. Kennedy Jr has called for the retraction of a Danish study that found no link between aluminium in vaccines and chronic diseases in children — a rare move for a US public official. Aluminium has been used for almost a century to enhance the immune system’s response to some vaccines. But some people claim the ingredient is linked to rising rates of childhood disorders such as autism.

Public-health officials in Kennedy’s position rarely request that studies be retracted, says Ivan Oransky, a specialist in academic publishing and co-founder of the media organization Retraction Watch. Through this request, “Secretary Kennedy has demonstrated that he wants the scientific literature to bend to his will”, says Oransky.

The study1 in question, published in Annals of Internal Medicine in July, is one of the largest of its kind, looking at 1.2 million children born over more than two decades in Denmark. The authors reported that no significant risk of developing autoimmune, allergic or neurodevelopmental disorders was associated with exposure to aluminium compounds in vaccines.

In an opinion piece published on TrialSite News on 1 August, Kennedy called into question the study’s methodology, analysis and results. Since his appointment as head of the US Department of Health and Human Services, Kennedy has bypassed normal scientific review processes to change vaccine recommendations and terminated grants for projects on mRNA vaccines.

Annals of Internal Medicine says it stands by the study and has no plans to retract it. Christine Laine, editor in chief of the journal, wrote in a comment on the study’s web page on 11 August that “retraction is warranted only when serious errors invalidate findings or there is documented scientific misconduct, neither of which occurred here”.
The Department of Health and Human Services said that Kennedy’s article spoke for itself, and that the department did not have any further comment in response to Nature’s questions about Kennedy’s request for a retraction.

Widely used

Aluminium, in the form of salts such as potassium aluminium sulfate, has been administered in vaccines — for diseases ranging from whooping cough to pneumonia — to millions of people worldwide, and the vaccines have been widely studied for safety issues2,3. Gary Grohmann, an independent virologist in Canberra, says there is no evidence of significant side effects caused by the small amount of aluminium in vaccines.

But in 2011, a study4 published in the Journal of Inorganic Biochemistry claimed to show a causal relationship between rising autism diagnoses in children and increased exposure to aluminium-containing vaccines. In 2012, the World Health Organization’s Global Advisory Committee on Vaccine Safety said the study and another by the same authors were “seriously flawed” because they used inappropriate study designs, incorrect assumptions and questionable data. Since then, Grohmann says, the claim that aluminium in vaccines causes autism has been debunked “again and again”. “If there was a mechanism of action where a particular vaccine caused autism, we’d see it in 80, 90, 100% of people receiving the vaccine, and we don’t,” he says. Any association between autism and vaccines is probably a coincidence of timing, he says. “In other words, vaccines might be given at the age of two, and autism genetically might also kick in at the age of two,” he adds.

Allen Cheng, an epidemiologist at Monash University in Melbourne, Australia, says the Danish study adds to the evidence that vaccines containing aluminium are safe.

Kennedy’s concerns

Among Kennedy’s criticisms of the Danish study is that the analysis excluded children who had died before the age of two.
According to Kennedy, this means that the children “most likely to reveal injuries” associated with aluminium exposure were excluded. Kennedy also criticized the fact that the authors did not compare vaccinated and unvaccinated children to determine whether any aluminium exposure causes harm, even though they had some data on unvaccinated children.

Other critiques posted on the journal website overlapped with Kennedy’s criticisms, says Anders Hviid, the senior author and an epidemiologist at the Statens Serum Institut in Copenhagen, Denmark’s public-health agency. Hviid says he and his colleagues addressed the critiques “one by one”. He also published a rebuttal of Kennedy’s article on TrialSite News on 3 August.

In one response on the study’s webpage, the Danish researchers said they did not use unvaccinated children as a control group because completely unvaccinated children were rare — only 1.2% (15,200) of the 1.2 million children in the study did not receive a vaccine containing aluminium before age two. Such a small group would have made their statistical analysis imprecise, the researchers said. Instead, they examined the relationship between the risk of developing childhood disorders and how much aluminium in vaccines children were exposed to, ranging from 0 mg to 4.5 mg, before the age of two. But they acknowledged that the study did not evaluate whether any exposure, regardless of the total, increased the risk of childhood disorders.

In another response, the researchers said they excluded children who experienced outcomes or died before age two to allow for the expected lag between symptom onset and diagnosis. They noted that most disorders could not reliably be diagnosed before age two. Their additional analysis of outcomes starting at 14 months showed similar results to their main findings.

Kennedy’s article also refers to a secondary analysis in the supplementary data, which he claims “contradicts the study’s conclusions”.
The analysis showed there was no overall risk of developing neurodevelopmental disorders with increasing aluminium exposure, but Kennedy pointed out that there was a 67% increased risk of Asperger syndrome for every 1 mg increase in aluminium for children born after 2007. The authors said that analysis should be interpreted cautiously. They didn’t include it in their main findings because the underlying data were incomplete.

Hviid says Kennedy’s call for retraction has not fazed him. He and his colleagues presented their preliminary data, which showed similar results to those of the final study, to the US Centers for Disease Control and Prevention’s Advisory Committee on Immunization Practices in 2023. “We have put out a solid study on an important topic,” he says.
Published: 2025-08-22 (Nature)

The brain’s map of the body in the primary somatosensory cortex remains unchanged after amputation. Credit: Zephyr/Science Photo Library

A brain-imaging study of people with amputated arms has upended a long-standing belief: that the brain’s map of the body reorganizes itself to compensate for missing body parts. Previous research1 had suggested that neurons in the brain region holding this internal map, called the primary somatosensory cortex, would grow into the neighbouring area of the cortex that previously sensed the limb. But the latest findings, published in Nature Neuroscience on 21 August2, reveal that the primary somatosensory cortex stays remarkably constant even years after arm amputation.

The study refutes foundational knowledge in the field of neuroscience that losing a limb results in a drastic reorganization of this region, the authors say. “Pretty much every neuroscientist has learnt through their textbook that the brain has the capacity for reorganization, and this is demonstrated through studies on amputees,” says study senior author Tamar Makin, a cognitive neuroscientist at the University of Cambridge, UK. But “textbooks can be wrong”, she adds. “We shouldn’t take anything for granted, especially when it comes to brain research.”

The discovery could lead to the development of better prosthetic devices, or improved treatments for pain in ‘phantom limbs’ — when people continue to sense the amputated limb. It could also help scientists working to restore sensation in people who have had amputations.

Mapping cortical plasticity

Study first author Hunter Schone, a neuroscientist at the University of Pittsburgh in Pennsylvania, says that previous reports from some people with amputations had led him and his colleagues to doubt the idea that the brain’s map of the body is reorganized after amputation. These maps are responsible for processing sensory information, such as touch or temperature, at specific body regions.
“They would say: ‘I can still feel the limb, I can still move individual fingers of a hand I haven’t had for decades,’” Schone says. To investigate this contradiction, the researchers followed three people who were due to undergo amputation of one of their arms. The team used functional magnetic resonance imaging (fMRI) to map the cortical representations of the body before the surgery, and then after the amputation for up to five years. It is the first study to do this. Before their amputations, participants performed various movements, such as tapping their fingers, pursing their lips and flexing their toes while inside an fMRI scanner that measured the activity in different parts of the brain. This allowed the researchers to create a cortical ‘map’ showing which regions sensed the hand. To test the idea that neighbouring neurons redistribute in the cortex after amputation, they also made maps of the adjacent cortical area — in this case, the part that processes sensations from the lips. The participants repeated this exercise several times after their amputation, tapping “with their phantom fingers”, says Schone. The analysis revealed that the brain’s representation of the body was consistent after the arm was amputated. Even five years after surgery, the cortical map of the missing hand was still activated in the same way as before amputation. There was also no evidence that the cortical representation of the lips had shifted into the hand region following amputation — which is what previous studies suggested would happen. Makin says their study is “the most decisive direct evidence” that the brain’s in-built body map remains stable after the loss of a limb. “It just goes against the foundational knowledge of the field,” she says.
Solaiman Shokur, a neuroengineer at the Sant’Anna School of Advanced Studies in Pisa, Italy, says he was surprised to see the evidence shown “in such a clear manner” and that the results “contradict something that is believed in the field, and do so to some extent”.

Implications for research

Giacomo Valle, a neuroengineer at Chalmers University of Technology in Gothenburg, Sweden, praised the study’s methodology and says it “puts a final dot — or conclusion — on the debate” about the brain’s map of the body following amputation. “This is important proof,” he adds. He says that the findings could have implications for research on prosthetic limbs that are controlled through brain–computer interfaces implanted in the somatosensory cortex. The information is relevant to the recruitment of volunteers in clinical trials of such devices and for potential participants who might benefit from brain–computer interfaces, he says. The study authors note that their findings also explain why treatments for phantom limb pain aimed at ‘reversing’ reorganization in the brain’s map have shown limited success. “Researchers may have missed the profound resilience of cortical representations,” they write.
Published: 2025-08-21 (Nature)

Reviewers are more likely to approve a manuscript if their own work is cited in subsequent versions than are reviewers who are not cited, according to an analysis of 18,400 articles from four open-access publications. The study, which is yet to be peer reviewed, was posted online as a preprint earlier this month1.

The study was inspired by anecdotes from authors who cited articles only because reviewers asked them to, says study author Adrian Barnett, who researches peer review and meta-research at Queensland University of Technology in Brisbane, Australia. Sometimes, these requests are fine, he says. But if reviewers ask for too many citations or the reason to cite their work is not justified, the peer-review process can become transactional, says Barnett. Citations increase a researcher’s h-index, a metric reflecting the impact of their publications.

Making unnecessary or unjustified requests for citations, sometimes called coercive citation, is generally considered poor practice. Balazs Aczel, a psychologist who studies metascience at Eötvös Loránd University in Budapest, says that the latest work isn’t the first to investigate reviewers asking for citations, but that the number of peer reviews included and the level of analysis are novel. A barrier to studying the practice is a lack of data sharing from publishers, he says.

Approvals, rejections and reservations

The preprint considered articles from four publishing platforms — F1000Research, Wellcome Open Research, Gates Open Research and Open Research Europe — that make all versions of their articles, as well as reviewer comments, publicly available. The publishers ask reviewers to approve articles, reject them or approve them with reservations. Reviewers are also asked to explain why when they ask authors to cite their own work.
Of 37,000 reviews — at least two people reviewed each article — 54% approved the article with no changes and 8% rejected it. Almost 5,000 reviewed articles cited a reviewer, and roughly 2,300 reviews requested a citation of the reviewer’s own work.

The analysis found that reviewers who were cited were more likely to approve the article after the first review than were reviewers who were not cited. But reviewers who suggested that their own research be cited were about half as likely to approve the article as to reject it or express reservations. In more than 400 reviews in which the reviewer was not cited in version 1 of the article and requested a citation in their review, 92% of reviewers who were cited in version 2 recommended approval, compared with 76% of reviewers who were not cited.

When a reviewer rejects a paper, they and the authors know that the reviewer is probably going to evaluate any revised versions of the article, says Barnett, so authors might opt for the path of least resistance and include the citation to get their paper accepted.

Reviewer comments

Barnett also analysed 2,700 reviewer comments and identified the 100 most frequently used words. He found that reviewers who requested citations were more likely to use words such as ‘need’ or ‘please’ in their comments when they rejected an article, which he says suggests that coercive language was used. Jan Feld, a metascience researcher at the Victoria University of Wellington, New Zealand, is not convinced that such language is a sign of coercion. “That seems like a bit of a stretch,” he says. There are other explanations for a reviewer rejecting an article besides the author refusing to cite their work. He doesn’t doubt that reviewers request citations that are not warranted, but reviewers can legitimately recommend citations, including of their own work, to address issues they’ve identified.
But even after those recommendations, “if the paper has not improved or I still have concerns, I cannot recommend publication”, he adds. Barnett acknowledges that the analysis does not differentiate between unjustified requests for citation and legitimate ones.

Solutions

To reduce unjustified requests for citation, Barnett suggests that reviewers should always state in their review comments when they recommend authors cite their work, and why. Using an algorithm to detect and flag requests for citations to editors, who could check whether the citation is reasonable, is another option, he adds. Greater oversight from journal editors could catch “really blatant” cases in which the citations are unnecessary, says Feld, but in most cases the situation is ambiguous. Although reviewers could suggest citing research from other people, they might suggest their own work because they know it best, he says.

A spokesperson for F1000, the publisher of one of the publishing platforms in the study, said the analysis raised questions “about reviewer behavior for further investigation and consideration by the academic community”. “However, as the preprint acknowledges, there is no suggestion that reviewer misconduct is any more common for articles published on F1000 platforms,” they said. Publishing reviewer comments should dissuade many reviewers from making inappropriate requests for citation, they added.
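The headline comparison (92% approval when a requested citation was added versus 76% when it was not) boils down to a difference in approval rates between two groups of reviewers. A minimal sketch of that comparison, using hypothetical counts — the preprint reports only the percentages, so the numbers below are invented for illustration:

```python
# Toy illustration (not the study's data pipeline): compare the approval
# rate of reviewers whose requested citation was added in version 2 with
# that of reviewers whose request was declined. Counts are hypothetical.

def approval_rate(approved: int, total: int) -> float:
    """Fraction of reviews recommending approval."""
    return approved / total

# Hypothetical counts for reviewers who requested a citation in round 1:
cited_in_v2 = {"approved": 46, "total": 50}      # reviewer's work was cited
not_cited_in_v2 = {"approved": 38, "total": 50}  # request was declined

rate_cited = approval_rate(**cited_in_v2)          # 0.92
rate_not_cited = approval_rate(**not_cited_in_v2)  # 0.76

print(f"cited: {rate_cited:.0%}, not cited: {rate_not_cited:.0%}")
```

As the article notes, a gap like this on its own cannot distinguish coercion from reviewers simply being satisfied that their concerns were addressed.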
Published: 2025-08-21 (Nature)

Download the Nature Podcast 18 August 2025. In this episode:

00:46 Electrochemical fusion
Researchers have used electrochemistry to increase the rates of nuclear fusion reactions in a desktop reactor. Fusion energy promises abundant clean energy, but fusion events are rare, hindering progress. Now, inspired by the controversial claim of cold fusion, researchers used electrochemistry to get palladium to absorb more of the deuterium ions that are used in fusion. When a beam of deuterium was fired at the deuterium-filled palladium, they saw a 15% increase in fusion events. They did not get more energy than they put in, but the authors believe this is a step towards enhancing fusion energy and shows the promise of electrochemical techniques.
Research Article: Chen et al.
News and Views: Low-energy nuclear fusion boosted by electrochemistry

10:06 Research Highlights
Do ants hold the key to better teamwork? Plus, the coins that hint at extensive hidden trade networks in southeast Asia.
Research Highlight: Super-efficient teamwork is possible — if you’re an ant
Research Highlight: Ancient coins unveil web of trade across southeast Asia

12:31 The microbial taste of chocolate
Chocolate gets its best tastes from microbes, according to a new study. Fermentation of cocoa beans helps create chocolate tastes, but not much has been known about the process. Now, the temperature, pH and microbes involved have been identified, and the researchers showed how it would be possible to manipulate these to produce premium chocolate flavours.
News: Why chocolate tastes so good: microbes that fine-tune its flavour

Never miss an episode. Subscribe to the Nature Podcast on Apple Podcasts, Spotify, YouTube Music or your favourite podcast app. An RSS feed for the Nature Podcast is available too.
Published: 2025-08-20 (Nature)

This January, Byeongjun Park, a researcher in artificial intelligence (AI), received a surprising e-mail. Two researchers from India told him that an AI-generated manuscript had used methods from one of his papers, without credit. Park looked up the manuscript. It wasn’t formally published, but had been posted online (see go.nature.com/45pdgqb) as one of a number of papers generated by a tool called The AI Scientist — announced in 2024 by researchers at Sakana AI, a company in Tokyo1.

The AI Scientist is an example of fully automated research in computer science. The tool uses a large language model (LLM) to generate ideas, writes and runs the code by itself, and then writes up the results as a research paper — clearly marked as AI-generated. It’s the start of an effort to have AI systems make their own research discoveries, says the team behind it.

The AI-generated work wasn’t copying his paper directly, Park saw. It proposed a new architecture for diffusion models, the sorts of model behind image-generating tools. Park’s paper dealt with improving how those models are trained2. But to his eyes, the two did share similar methods. “I was surprised by how closely the core methodology resembled that of my paper,” says Park, who works at the Korea Advanced Institute of Science and Technology (KAIST) in Daejeon, South Korea.

The researchers who e-mailed Park, Tarun Gupta and Danish Pruthi, are computer scientists at the Indian Institute of Science in Bengaluru. They say that the issue is bigger than just his paper. In February, Gupta and Pruthi reported3 that they’d found multiple examples of AI-generated manuscripts that, according to external experts they consulted, used others’ ideas without attribution, although without directly copying words and sentences.
Gupta and Pruthi say that this amounts to the software tools plagiarizing others’ ideas — albeit with no ill intention on the part of their creators. “A significant portion of LLM-generated research ideas appear novel on the surface but are actually skillfully plagiarized in ways that make their originality difficult to verify,” they write. In July, their work won an ‘outstanding paper’ award at the Association for Computational Linguistics conference in Vienna.

But some of their findings are disputed. The team behind The AI Scientist told Nature that it strongly disagrees with Gupta and Pruthi’s findings, and doesn’t accept that any plagiarism occurred in The AI Scientist case studies that the paper examines. In Park’s specific case, one independent specialist told Nature that he thought the AI manuscript’s methods didn’t overlap enough with Park’s paper to be termed plagiarism. Park himself also demurred at using ‘plagiarism’ to describe what he saw as a strong methodological overlap.

Beyond the specific debate about The AI Scientist lies a broader concern. So many papers are published each year — especially in computer science — that researchers already struggle to keep track of whether their ideas are really innovative, says Joeran Beel, a specialist in machine-learning and information science at the University of Siegen, Germany. And if more LLM-based tools are used to generate ideas, this could deepen the erosion of intellectual credit in science. Because LLMs work in part by remixing and interpolating the text they’re trained on, it would be natural for them to borrow from earlier work, says Parshin Shojaee, a computer scientist at the Virginia Tech Research Center — Arlington. The issue of ‘idea plagiarism’, although little discussed, is already a problem with human-authored papers, says Debora Weber-Wulff, a plagiarism researcher at the University of Applied Sciences, Berlin, and she expects that it will get worse with work created by AI.
But, unlike the more familiar forms of plagiarism — involving copied or subtly rewritten sentences — it’s hard to prove the reuse of ideas, she says. That makes it difficult to see how to automate the task of checking for true novelty or originality, to match the pace at which AIs are going to be able to synthesize manuscripts. “There’s no one way to prove idea plagiarism,” Weber-Wulff says.

Overlapping methods

Bad actors can, of course, already use AI to deliberately plagiarize others or rewrite others’ work to pass it off as their own (see Nature https://doi.org/gt5rjz; 2025). But Gupta and Pruthi wondered whether well-intentioned AI approaches might be using others’ methods or ideas too. They were first alerted to the issue when they read a 2024 study led by Chenglei Si, a computer scientist at Stanford University in California4. Si’s team asked both people and LLMs to generate “novel research ideas” on topics in computer science. Although Si’s protocol included a novelty check and asked human reviewers to assess the ideas, Gupta and Pruthi argue that some of the AI-generated ideas produced by the protocol nevertheless lifted from existing works — and so weren’t ‘novel’ at all. They picked out one of the AI-generated ideas in Si’s paper, which they say borrowed from a paper first posted as a preprint5 in 2023. Si tells Nature that he agrees that the ‘high-level’ idea was similar to material in the preprint, but that “whether the low-level implementation differences count as novelty is probably a subjective judgement”. Shubhendu Trivedi, a machine-learning researcher who co-authored that 2023 preprint, and was until recently at the Massachusetts Institute of Technology in Cambridge, says that “the LLM-generated paper was basically very similar to our paper, despite some superficial-level differences”.
Gupta and Pruthi further tested their concern by taking the four AI-generated research proposals publicly released by Si’s team and the ten AI manuscripts released by Sakana AI, and generating 36 fresh proposals themselves, using Si’s methodology. They then asked 13 specialists to try to find overlaps in methods between the AI-made works and existing papers, using a 5-point scale, on which 5 corresponded to a ‘one-to-one mapping in methods’ and 4 to ‘mix-and-match from two-to-three prior works’; 3 and 2 represented more-modest overlaps and 1 indicated no overlap. “It’s essentially about copying of the idea or crux of the paper,” says Gupta. The researchers also asked the authors of original papers identified by the specialists to give their own views on the overlaps. Including this step, Gupta and Pruthi report that 12 papers in their sample of AI-generated works reached levels 4 and 5, implying, they said, a plagiarism proportion of 24%; the figure rises to 18 (36%) if cases in which the original authors didn’t reply are included. Some were from Sakana’s and Si’s work, although Gupta and Pruthi discuss in detail only the examples reported in this story.

They also said they’d found a similar kind of overlap in an AI-generated manuscript (see go.nature.com/4oym4ru) that, Sakana announced this March, had passed through a stage of peer review for a workshop at a prestigious machine-learning conference, the International Conference on Learning Representations. At the time, the firm said that this was the first fully AI-generated paper to pass human peer review. It also explained that it had agreed with workshop organizers to trial putting AI-generated papers into peer review and to withdraw them if they were accepted, because the community hadn’t yet decided whether AI-generated papers should be published in conference proceedings. (The workshop organizers declined Nature’s request for comment.)
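The proportions quoted above follow from the sample size: the four proposals from Si’s team, the ten Sakana manuscripts and the 36 fresh proposals make 50 AI-generated works in total. A quick check of the arithmetic:

```python
# Checking the reported proportions: the sample of AI-generated works is
# 4 proposals + 10 manuscripts + 36 fresh proposals = 50 items.
sample = 4 + 10 + 36

confirmed = 12      # works rated 4 or 5, with the original authors' input
with_no_reply = 18  # including cases where the original authors didn't reply

print(f"{confirmed / sample:.0%}")      # 24%
print(f"{with_no_reply / sample:.0%}")  # 36%
```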
Gupta and Pruthi say that this paper borrowed its core contribution from a 2015 work6, without citing it. Their report quotes the authors of that paper, computer scientists David Krueger and Roland Memisevic, as saying that the Sakana work is “definitively not novel”, and identifying a second uncited manuscript7 that the paper borrowed from. Another computer scientist, Radu Ionescu at the University of Bucharest, told Nature he rated the similarity between the AI-generated work and Krueger and Memisevic’s paper as a 5.

Krueger, who is at the University of Montreal in Canada, told Nature that the related works should have been cited, but that he “wouldn’t be surprised to see human researchers reinvent this and miss previous work” too. “I think this AI system and others are not capable of achieving academic standards for referencing related work,” he said, adding that the AI paper was “extremely low quality overall”. But he wasn’t sure whether the word plagiarism should be applied, because he feels that term implies that the person (or AI tool) reusing methods was aware of earlier work, but chose not to cite it.

Pushback

The team behind The AI Scientist, which includes researchers at the University of Oxford, UK, and the University of British Columbia in Vancouver, Canada, pushed back strongly against Gupta and Pruthi’s work when asked by Nature. “The plagiarism claims are false,” the team wrote in an e-mailed point-by-point critique, adding that they were “unfounded, inaccurate, extreme, and should be ignored”. On two AI Scientist manuscripts discussed in Gupta and Pruthi’s paper, for instance, the team says that these works have different hypotheses from those in the earlier papers and apply them to different domains, even if some elements of the methods are related.
The references found by the specialists for Gupta and Pruthi’s analysis are work that the AI-generated papers could have cited, but nothing more, the AI Scientist team says, adding: “What they should have reported is some related work that went uncited (a daily occurrence by human authors).” The team says it would be “appropriate” to have cited Park’s paper. In the case of Krueger’s paper and the second uncited manuscript, the AI Scientist team says, “these two papers are related, so, while it is an everyday occurrence by humans not to include works like this, it would have been good for The AI Scientist to cite them”.

Ben Hoover, a machine-learning researcher at the Georgia Institute of Technology in Atlanta who specializes in diffusion models, told Nature that he’d score the overlap with Park’s paper as a ‘3’ on Gupta’s scale. He said the AI-generated paper is of much lower quality and less thorough than Park’s work, and should have cited it, but “I would not go so far as to say plagiarism.” Gupta and Pruthi’s analysis relies on ‘superficial similarities’ between generic statements in the AI-generated work that, when read in detail, don’t meaningfully map to Park’s paper, he adds. Ionescu told Nature he would give the AI-generated paper a rating of 2 or 3. Park judges the overlap with his paper to be much stronger than Hoover’s and Ionescu’s ratings. He says he would give it a score of 5 on Gupta’s scale, and adds that it “reflects a strong methodological resemblance that I consider noteworthy”. Even so, this does not necessarily align with what he sees as the legal or ethical definition of plagiarism, he told Nature.

What counts as plagiarism

Part of the disagreement could stem from different operational understandings of what ‘plagiarism’ means, especially when it comes to overlap in ideas or methods.
Researchers who study plagiarism hold different views on the term from those of some of the computer scientists in the current debate, says Weber-Wulff. “Plagiarism is a word we should and do reserve for extreme cases of intentional fraudulent cheating,” the AI Scientist team wrote, adding that Gupta and Pruthi “are wildly out of line with established conventions regarding what counts as plagiarism in academia”. But Weber-Wulff disagrees: she says that intent shouldn’t be a factor. “The machine has no intent,” she says. “We don’t have a good mechanism for explaining why the system is saying something and where it got it from, because these systems are not built to give references.” Weber-Wulff’s own favoured definition of plagiarism is that it occurs when a manuscript “uses words, ideas, or work products attributable to another identifiable person or source without properly attributing the work to the source from which it was obtained in a situation in which there is a legitimate expectation of original authorship”. That definition was produced by Teddi Fishman, the former director of a US non-profit consortium of universities called the International Center for Academic Integrity. Pruthi says that although what counts as plagiarized research is subjective, the researchers felt that scores of 4 and 5 on their scale were “serious enough that if people knew about this, they would complain about it”. Si and the AI Scientist team both say that Gupta and Pruthi could also have found examples of human-authored research papers that had borrowed ideas from earlier work without credit — if they had specifically asked experts to look for this. Gupta and Pruthi concede that point.
In their paper, they make an attempt at comparison by examining peer reviews of hundreds of papers from computer-science conferences, and argue, on the basis of an automated analysis using LLMs, that only 1–5% of these reviews contain mentions of plagiarism equivalent to their 4 or 5 score. But they didn’t ask a team of experts to review human-authored papers, as they had done with the AI ones.

The AI Scientist team also adds that it had already said in its paper that, in general, The AI Scientist makes citation mistakes; that it should cite more related papers; and that researchers should validate the tool’s outputs themselves. “Our paper was announcing a proof of concept, that we have now reached an ‘it’s-now-possible-to-do-even-if-imperfectly’ moment in AI-generated science papers,” the team says. “Ultimately, The AI Scientist and systems like it will soon be making obviously new, major discoveries.” It adds that “we do think there are major upsides for AI-generated science”, that AI software will improve in quality, and that for now, the tool should be used mostly to inspire ideas, and researchers shouldn’t trust its outputs without validating its work themselves.

How to check for novelty

Whether it’s even possible to reliably automate checks on AI-generated research to be sure that it is original, and that related works are credited, is still a major challenge. After The AI Scientist generates a fresh paper or idea, for instance, the system typically examines whether it is original, and what to cite, by feeding relevant search-query terms (themselves produced by an LLM) into the Semantic Scholar search engine; another LLM is then asked to judge the top papers returned. For instance, the LLM might judge that the AI-generated work is so similar to an existing paper that the idea isn’t original. Or, in a separate step, it might recommend that the AI-generated paper cite an earlier paper.
Repeating this process a number of times “essentially mimics how human researchers search for papers to cite,” the AI Scientist team says. But this can be simplistic, Beel says. It’s hard to reduce an idea to a list of keywords, and search engines might not have full papers in their databases. The top hits that search engines return in this automated process — which might be ranked by a criterion such as citation count — could easily miss out relevant work that a specialist researcher in the field would know. And although there is research on automatically detecting the semantic similarity of sentences, “there’s little work on idea-level or concept-level similarity checking”, says Yan Liu, an AI researcher at Nanyang Technological University in Singapore. Gupta and Pruthi tested Turnitin, a commercial plagiarism-detection tool, and OpenScholar, an LLM built to answer queries by searching the scientific literature, on the AI-generated papers that scored 4 and 5 in their study. Turnitin identified none of the source papers their human experts had spotted, whereas OpenScholar found only one, they say. But human reviewers disagree about this sort of thing, too, says Jinheon Baek, a graduate student in AI at KAIST. At conferences, he says, he’s seen reviewers argue about what counts as original in research papers. “Novelty is very subjective,” he says. Some researchers think that it will be difficult to improve automated tools to devise scientific ideas without first improving the plagiarism-detection steps. “The important thing is these tools are going to be here. We need to find the right way of using them,” says computer-science researcher Min-Yen Kan at the National University of Singapore. Si says he appreciates Gupta and Pruthi’s study. “For people working on AI scientists, we should be holding ourselves to higher standards of what counts as novel and good research,” he says.
Published: 2025-08-20 | Nature
Aedes mosquitoes, which spread dengue, thrive in warm conditions. Credit: Soumyabrata Roy/NurPhoto via Getty
Major dengue outbreaks in the Americas tend to occur about five months after an El Niño event — the periodic warming of the Pacific Ocean that can disrupt global weather — a study1 has found. Meanwhile, local outbreaks tend to happen about three months after summer temperatures peak and roughly one month after peak rainfall. The study, published today in Science Translational Medicine, paints a clearer picture of the relationship between the mosquito-borne disease and climatic conditions in the Americas, a region that saw a record-breaking 13 million cases in 2024. Dengue is caused by four closely related viruses and spread by Aedes mosquitoes. There is no specific treatment, and the disease can lead to fever, bone pain and even death. The research relied on roughly three decades of surveillance data from 14 countries. Cases in the region tended to rise and fall in sync, on average six months apart, even in locations separated by as much as 10,000 kilometres. The findings are “useful to anticipate when a region might expect to see an epidemic, which can help inform planning and preparedness”, says Talia Quandelacy, a co-author and an infectious-disease epidemiologist at the University of Colorado School of Public Health in Aurora. She notes that, although the link between dengue and climate is well known, what stands out in the findings is how this association plays out across the entire continent, “especially given that it’s such a climatically diverse region”.
Toasty mosquitoes
Aedes mosquitoes typically thrive in warm and humid conditions. Furthermore, the incubation period of dengue viruses in the mosquito — the time between infection and when the mosquito can transmit the virus — shortens at high temperatures, says Quandelacy. The virus replicates more quickly in warmer environments.
“There’s just more-efficient transmission of the dengue virus when we have warmer temperatures,” she says. But climate is just one factor driving dengue epidemics. Population immunity and other local conditions are equally relevant. For example, mosquitoes rely on standing water to lay their eggs, which is abundant in urban areas that lack proper sanitation. “Analyses showing the impact of extreme weather events like El Niño are important but, particularly for arboviruses, we have to account for the local characteristics of urban areas,” says Marcia Castro, a public-health specialist at Harvard T.H. Chan School of Public Health in Boston, Massachusetts. “You have cities without infrastructure, areas like slums growing, and then an El Niño comes and exacerbates all of the consequences.”
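The kind of lag the study reports — cases trailing a climate driver by a few months — can be illustrated with synthetic monthly series: shift one series against the other and keep the lag with the highest correlation. This is a minimal sketch, not the study's actual statistical analysis, and the data here are simulated rather than surveillance records.

```python
# Toy lagged-correlation analysis on synthetic monthly climate and case data.
import numpy as np

rng = np.random.default_rng(0)
months = np.arange(240)  # 20 years of monthly observations
temperature = np.sin(2 * np.pi * months / 12) + 0.1 * rng.standard_normal(240)
true_lag = 3             # cases follow temperature by three months
cases = np.roll(temperature, true_lag) + 0.1 * rng.standard_normal(240)

def best_lag(driver, response, max_lag=12):
    # Correlate the response with the driver shifted by 0..max_lag months,
    # and return the lag that maximizes the Pearson correlation.
    corrs = [np.corrcoef(driver[:-lag or None], response[lag:])[0, 1]
             for lag in range(max_lag + 1)]
    return int(np.argmax(corrs))

print(best_lag(temperature, cases))  # recovers the 3-month lag
```

Real surveillance data are far messier — reporting delays, immunity dynamics and the local urban factors Castro describes all blur the signal — which is why a continent-scale result like this one is notable.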
Published: 2025-08-20 | Nature
In 1970, a woman in Mexico might have expected to have seven children, on average. By 2014, that figure had fallen to around two. As of 2023, it was just 1.6. That means that the population is no longer making enough babies to maintain itself. Mexico is not alone: countries around the world are witnessing falling fertility rates1. Exceptions are few. The Institute for Health Metrics and Evaluation (IHME) at the University of Washington in Seattle estimates that, by 2050, more than three-quarters of countries will be in a comparable situation. “There has been an absolutely incredible drop in fertility — much faster than anyone had anticipated,” says Jesús Fernández-Villaverde, an economist at the University of Pennsylvania in Philadelphia. “And it is happening in a lot of countries you would have never guessed.” The numbers are clear. What’s uncertain is how problematic this global ‘baby bust’ will be, and how nations should respond. In economies that have been built around the prospect of steady population growth, the concern is over future slumps in innovation and productivity, as well as having too few working-age citizens to support a growing number of older people. Researchers warn of ripple effects, from weakened military power and less political influence for countries with lower fertility rates, to fewer investments in green technology. It is imperative that countries address population decline and its impacts now, says Austin Schumacher, a health metrics researcher at the IHME. Many countries have been trying to take action, and the data suggest that some strategies are helpful — if politically fraught. But to scientists familiar with the data, even the most effective efforts are unlikely to bring a full rebound in fertility rates. That’s why many researchers are recommending a shift in focus from reversal to resilience. They see room for optimism.
Even if countries can only slow the decline, that should buy them time to prepare for future demographic shifts. Ultimately, scientists say, fertility rates that are low, but not too low, could have some benefits. “We’re not not making babies,” says Barbara Katz Rothman, a sociologist at the City University of New York. “The human race is not folding in on itself.”
What the data say
In the mid-twentieth century, the world’s total fertility rate — generally defined as the average number of children a woman would have during her reproductive years — was five. (Nature recognizes that transgender men and non-binary people might become pregnant. We use ‘woman’ and ‘women’ in this story to reflect language used in the field.) Some dubbed this mid-twentieth-century surge the baby boom. Ecologist Paul Ehrlich and conservation biologist Anne Ehrlich saw it differently, warning in their 1968 book The Population Bomb that overpopulation would lead to famine and environmental devastation. But they failed to anticipate advances in agricultural and health technology that would enable the population to double to eight billion in a little more than five decades. Humanity’s impact on the environment has intensified, owing to that growth and to increased consumption in many parts of the world. But concerns about overpopulation have flipped. Population growth has been slowing down over the past 50 years, and the average total fertility rate stands at 2.2. In about half of countries, it has fallen below 2.1, the threshold generally needed to maintain a steady population (see ‘Declining fertility’). Small changes in these numbers can have strong effects. A fertility rate of 1.7 could reduce a population to half its original size several generations sooner than a rate of 1.9, for example.
[Chart: ‘Declining fertility’. Source: Our World in Data (https://go.nature.com/45RWYFJ)]
The case of South Korea is under close scrutiny.
Its fertility rate fell from 4.5 in 1970 to 0.75 in 2024, and its population peaked at just under 52 million in 2020. That figure is now declining at a pace that is expected to accelerate. Forecasts for the world vary. The United Nations and the International Institute for Applied Systems Analysis in Laxenburg, Austria, project gentler declines than the IHME does (see, for example, go.nature.com/4mtkj8b). But demographers generally expect that the global population will peak in the next 30 to 60 years and then contract. If it does, that will be the first such decline since the Black Death in the 1300s. According to the UN, China’s population might already have peaked in around 2022, at 1.4 billion. India’s could do the same in the early 2060s, topping out at 1.7 billion people. And, assuming the most likely immigration scenario, the US Census Bureau predicts that the US population will peak in 2080 at around 370 million. Meanwhile, many of the steepest near-term crashes are anticipated in middle-income countries: Cuba is expected to lose more than 15% of its population by 2050. Sub-Saharan Africa is the notable exception. By 2100, more than half of the world’s babies are likely to be born there1, despite it having some of the world’s lowest incomes, weakest health-care systems and most fragile food and water supplies. Nigeria’s fertility rate remains above four, and its population is projected to grow by another 76% by 2050, which will make it the world’s third-most-populous country. Still, fertility-rate trends are hard to predict. Data gaps persist, and many models rely on the expectation that rates will rebound as they’ve done before. And as the Ehrlichs’ failed forecasts show, the past isn’t always indicative of the future. “We are groping in the dark,” says demographer Anne Goujon, programme director for population and just societies at the International Institute for Applied Systems Analysis. What’s driving the decline? 
The factors behind fertility collapse are numerous. They range from expanded access to contraception and education, to shifting norms around relationships and parenting. Debate continues over which factors matter most, and how they vary across regions. Some drivers reflect positive societal changes. In the United States, data from the US Centers for Disease Control and Prevention show that fertility has declined in part because of fewer unplanned pregnancies and teenage births. A long-term drop in domestic violence might also have contributed. Research in 2018 by Jennifer Barber, a sociologist now at Indiana University in Bloomington, and her colleagues showed that women in violent relationships have children at around twice the rate of those in non-violent ones2. Globally, access to contraception has helped to decouple sex from reproduction. In Iran, a national family-planning campaign that started in the 1980s contributed to the largest and fastest fall in fertility rates ever recorded: from nearly seven to under two in less than two decades3. The country reversed course around 2006, and is once again promoting policies to increase fertility rates. Young people in wealthy countries are also forming fewer partnerships and having less sex. Alice Evans, a sociologist at King’s College London, has suggested that online entertainment is outcompeting real-world interactions and eroding social confidence. As women worldwide have gained education and career opportunities, many have grown more selective. Women want independence, while many men expect a “servant at home”, says Fernández-Villaverde. “Women are asking, ‘Why would I marry this person?’ A lot of men are undateable.
Truly undateable.” This disconnect fuels trends such as South Korea’s Four Nos feminist movement — in which many young women are rejecting dating, marriage, sex and childbirth — and a similar ‘boy sober’ movement among US women.
Mexico is one of several countries that has a fertility rate below replacement level. Credit: Bernd Vogel/Getty
Many young people are also pursuing more education so as to gain jobs that might come with high stress and little stability early on. As a result, even people who pair up might postpone having children or have trouble conceiving because they are older. Those who do have kids face pressure to prepare them for the same high-stakes race for university and career, says Matthias Doepke, an economist at the London School of Economics and Political Science. “It’s not like we have withdrawn from parenting. It’s just that we concentrate all this investment, all these hours, on fewer children.” Rising costs create further pressures. A UN survey of more than 14,000 people in 14 countries found that 39% cited financial limitations as a reason not to have children (see www.unfpa.org/swp2025). In US cities, births have fallen most sharply where housing prices have risen most rapidly (see go.nature.com/4tqqzsg). Ultra-low fertility rates tend to emerge where these pressures converge, says Doepke. In South Korea, he says, housing is expensive, the parenting culture is intense and the working culture rewards long hours. Other contributors include declining sperm counts, potentially linked to environmental toxins. Many prospective parents also have growing anxiety about political and environmental instability, as highlighted in the UN survey. It’s not clear which of these many factors are most important in individual countries.
But ultimately, low fertility rates “reflect broken systems and broken institutions that prevent people from having the number of children they want”, says Stuart Gietel-Basten, a sociologist at the Hong Kong University of Science and Technology. “That is the real crisis.”
Countering the crash
The fallout will play out differently around the world. Middle-income countries, such as Cuba, Colombia and Turkey, could be the hardest hit, with falling fertility compounded by rising emigration to wealthier nations. Urban–rural divides will also deepen. As young people leave small towns, infrastructure such as schools, supermarkets and hospitals shuts down — prompting more to move away. Often, it’s older people who remain. Globally, ageing is the core issue with population decline. In countries that have shrinking fertility rates, the proportion of people aged 65 or older is projected to nearly double, from 17% to 31%, in the next 25 years (see go.nature.com/4fspvh5). As life expectancy rises, the demand for physical and fiscal support grows, yet there is a lag in supply. For the many countries hoping to arrest the fall in fertility, tools exist. These include financial incentives, such as US President Donald Trump’s proposal to give each newborn baby US$1,000 in an investment fund. Data show that baby bonuses yield modest results for fertility. Australia brought in a $3,000 bonus in 2004, later increased to $5,000 (see go.nature.com/4mgrwsc). Although the policy led to 7% more births in the short term, it’s unclear whether families had more children overall or just chose to have them earlier in life. And scientists caution that such incentives can undermine gender equity and reproductive rights by prioritizing population growth over personal choice, restricting access to contraception and abortion, and reinforcing conventional gender roles.
More-effective approaches, they say, include generous parental leave and subsidies for childcare and housing. Nordic countries pioneered such investments, including leave for fathers. Those nations saw slower fertility declines than elsewhere in Europe — although decreases persist. Researchers say more can be done, such as placing a higher value on care work. “Everything about the making of babies — growing them, birthing them, feeding them — is treated as cheap labour,” says Katz Rothman. Countries where fathers take on more childcare tend to have higher fertility rates4. One study in Bulgaria, the Czech Republic, Hungary, Poland and Russia linked greater paternal involvement with a higher likelihood that the mother would have a second child and work full-time5. Of course, putting a higher value on care work could increase the costs of raising a child. There is no silver bullet. No policy will restore fertility rates any time soon, researchers say. But even small gains can add up to form a valuable cushion. “Part of the reason progressive policies get a bad rap is because people expect too much from them,” says Fernández-Villaverde. Even a combined 0.2 or 0.3 increase in the fertility rate could slow down declines and give countries time to adapt. And adaptation deserves more attention, says political demographer Jennifer Sciubba, president of the non-profit Population Reference Bureau in Washington DC. “If people aren’t having children for a mix of reasons, we are better off using our time, money and good ideas to support adaptation,” she says.
Adapting to a new reality
Some strategies can achieve both goals. Strengthening the care workforce, for example, could both encourage people to have families and patch gaps in care for older people. But there are also policies that governments could use to stabilize strained state pension and protection programmes, such as raising the Social Security tax cap in the United States.
Increasing the retirement age, as some countries are doing, is another option. On average, a 70-year-old in 2022 had the same cognitive ability as a 53-year-old had in 2000, according to data from 41 advanced and emerging economies6. Older people who stay productive — whether through continuing to work or caring for grandchildren — can also see improvements in their health and experience less loneliness. Still, such policy changes can provoke backlash. Proposals to increase the retirement age in Russia in 2018 and in France in 2023 sparked protests, for example. “But it doesn’t have to be a matter of compelling people to work late into old age,” says Rebecca Zerzan, senior editor of the UN Population Fund’s State of World Population report, who is based in New York City. In fact, according to research by the multinational investment bank Goldman Sachs, working lives are already lengthening in some countries, even in those that haven’t brought in major pension reforms. Immigration is another lever. It can match labour shortages in wealthier nations to high birth rates in poorer ones, says Schumacher. Migrants boost tax revenues and innovation, even when they don’t receive the tax benefits or government assistance that they help to fund, says Karen Guzzo, a sociologist at the University of North Carolina at Chapel Hill. South Korea and Japan have relaxed immigration rules and helped to fill some of their workforce gaps. Still, immigration is a politically sensitive issue. In countries that open their borders to spur growth, people often blame migrants for the challenges brought on by population decline. And brain drain can hurt the economies that migrants leave. Gietel-Basten urges policymakers to consider several dimensions, beyond the obvious ones. “It is much easier to eradicate child poverty than to boost fertility,” he says.
Even if certain prosocial policies don’t “magically unlock an additional baby per family”, says Zerzan, “you’re going to have people who are happier, healthier and able to pursue education alongside work. That will help create a world where people have more hope. And if they have more hope, then they might have the number of kids that they want to have.” Sciubba agrees. The path to helping people thrive, she says, “is the same path that could potentially create the conditions for people to want to have children.” Researchers say that a smaller population should bring benefits: a society that has fewer people can lessen pressure on the environment and allow for greater investment in each individual. But a stable economy is key. Without it, a fiscal squeeze could worsen environmental damage, weaken support systems and undermine human rights. Still, there’s reason for optimism. “If you invest in health and education, which can boost productivity, then a slightly lower population can actually raise gross domestic product,” says Gietel-Basten. Today’s population isn’t necessarily the optimal population, he says. “Declining fertility is only a disaster if you don’t adapt.”
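The halving-time comparison made earlier in the piece (a fertility rate of 1.7 shrinking a population to half its size several generations sooner than a rate of 1.9) can be checked with back-of-envelope arithmetic. The sketch below assumes, crudely, that each generation the population scales by TFR divided by the 2.1 replacement threshold quoted in the article; real projections account for mortality, migration and age structure.

```python
# Rough halving-time arithmetic under a constant per-generation scaling factor.
import math

def generations_to_halve(tfr, replacement=2.1):
    ratio = tfr / replacement  # crude per-generation growth factor
    return math.log(0.5) / math.log(ratio)

for tfr in (1.7, 1.9):
    print(f"TFR {tfr}: population halves in ~{generations_to_halve(tfr):.1f} generations")
```

Under this simple model, a rate of 1.7 halves the population in roughly 3.3 generations versus roughly 6.9 at 1.9 — consistent with the "several generations sooner" framing, and a reminder of how sensitive long-run outcomes are to small fertility differences.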
Published: 2025-08-19 | Nature
Manipulating microbes involved in the fermentation of cocoa beans could lead to the creation of exciting new flavours of chocolate. Credit: Say-Cheese/iStock via Getty
When you bite into a piece of chocolate, you can taste its distinct fruity, nutty and earthy flavours. Now, scientists have gained fresh insights into how the process of fermenting cocoa beans can affect that flavour profile. In a study published today in Nature Microbiology1, researchers found that pH, temperature and microbial species in the fermentation process all influence how the resulting chocolate tastes. They also replicated the flavour attributes of a high-quality chocolate in the laboratory by creating the ideal environment for fermentation. The researchers hope that using these techniques will “create novelty and exciting new flavours for consumers in the future”, says study co-author David Gopaulchan, a plant geneticist at the University of Nottingham, UK. “I think this definitely has promise for people to start to play with and look at in terms of designer chocolates”, says Heather Hallen-Adams, a food scientist at the University of Nebraska–Lincoln.
Fermented flavours
Fermentation is a flavour-enhancing step in the production of some foods and drinks. Making wine, cheese or beer involves adding yeast or other microorganisms. To make chocolate, cocoa beans are removed from their pods, put together and left to ferment, after which they are dried and roasted. But unlike the production of wine, beer and cheese, cocoa fermentation is a natural process that usually takes place without adding specific microbes. As a result, little is known about how different conditions or microbial species might influence the flavour of chocolate. “Ultimately, what we are trying to do is increase the quality of chocolates,” says Gopaulchan.
His team took samples of cocoa beans from a farm in the Santander district of Colombia and measured changes in the pH and temperature during the fermentation process. They suspected that these conditions would affect the chocolate’s flavour, because of how they influenced interactions between bacteria and fungi.
Raw cocoa beans must undergo fermentation before they can be made into chocolate. Credit: Ute Grabowsky/Photothek via Getty
The researchers then compared cocoa samples from the Santander district with those from farms in the Huila and Antioquia regions of Colombia. They prepared cocoa ‘liquors’ from the fermented beans from the three farms to test their flavour profiles. This process involves drying, roasting and breaking down the beans to produce cocoa nibs, which are ground into a paste. A panel of trained food tasters sampling the liquors reported that those from Santander and Huila shared flavour attributes, with notes of roasted nuts, ripe berries and coffee. By comparison, cocoa liquor from Antioquia had a simpler, more bitter flavour. The cocoa from all three farms had similar genetic backgrounds, which allowed the researchers to exclude genotype as a factor influencing flavour. Analysis of the fermentation conditions at the three farms found that unique microbial communities influenced the flavour profiles of the three cocoa liquors. For example, the fungal genera Torulaspora and Saccharomyces were strongly associated with flavour attributes of finer chocolate.
Designer chocolate
The researchers next aimed to reproduce the fine flavours of chocolate in the lab by designing and controlling the features of cocoa fermentation. The team designed ‘synthetic’ microbial communities of bacteria and fungi to ferment the cocoa beans, and prepared liquors for taste-testing, as before. The panel of tasters confirmed that beans fermented with the lab-controlled microbial communities had the same fine chocolate notes as those from Santander and Huila.
The researchers say their findings show that relationships between pH, temperature and microbiota help to explain regional differences in chocolate flavour and quality. They also hint at a method to more closely control the flavour and quality of chocolate in industrial food labs. “This is going to give us controllability of the process and give a specific flavour, increase the quality of the cocoa and not wait on a specific time or a specific environment that we cannot control,” says Andrés Fernando Gonzales Barrios, a chemical engineer at Universidad de los Andes in Bogotá. It could ultimately “increase the value of cocoa”, he adds.
Published: 2025-08-18 | Nature
A microscopic view of a nine-day-old human embryo shows a protein found in embryonic stem cells in green, developing tissue in magenta and DNA in blue. Credit: Institute for Bioengineering of Catalonia (IBEC)
A time-lapse film offers a glimpse of a hidden milestone of human development: the moment when the newly formed embryo latches onto the uterine lining. Researchers have captured real-time footage of an embryo pulling on a high-fidelity replica of the lining to bury itself inside, effectively remodelling its new home. The team reports its findings today in Science Advances1.
Hidden figures
The authors were inspired to simulate the implantation process because the actual events are so difficult to capture. “It’s very inaccessible because it’s all happening inside the mother,” says co-author Samuel Ojosnegros, a bioengineer at the Institute for Bioengineering of Catalonia (IBEC) in Barcelona, Spain. “It’s such an important process for human reproduction, but at the same time, we don’t have the technology to study it.”
A human embryo contracts itself to minimize its exposure to the outside environment. Credit: Institute for Bioengineering of Catalonia (IBEC)
Although previous studies have investigated how human embryos interact with glass, the embryo can’t penetrate this material as it does real human tissue. So the team set out to create a more lifelike mock-up, devising a faux uterine lining from a gel rich in collagen and proteins that are crucial for embryonic development. To shoot their stop-motion film, researchers placed human embryos, donated by a local hospital, near the gel. As the embryo attached to the ‘uterus’, the team used a microscope to capture an image about every 20 minutes for 16–24 hours, and then stitched the stills together. Co-author Amélie Godeau, a biomechanics researcher at IBEC, was shocked to see how quickly the embryo burrowed down into the gel.
“My first reflex was to think my experiment had gone wrong and there was some drift in the microscope,” Godeau says. By contrast, the team found that mouse embryos adhere to the surface of the uterus rather than embedding themselves inside.
A human embryo plunges into a synthetic uterine lining by applying force to pull the tissue apart. Credit: Institute for Bioengineering of Catalonia (IBEC)
It was known that the human embryo releases enzymes to break down the uterine lining during implantation. But the new study also suggests that the embryo must exert some sort of extra force on the uterus to lodge itself there, pulling at the intricate network of tissue so it can cosy up inside. Ripla Arora, a uterine biologist at Michigan State University in East Lansing, says this study is the first to document the mechanics of the implantation process in such detail — although she’d be interested to see how the uterus might reciprocate by applying force back on the embryo. “An exciting next step is to know what the uterus is doing in this scenario, but that’s harder to mimic,” Arora says. In the future, Godeau would like to study why some healthy embryos fail to attach to the uterus, or to follow the process of implantation over a longer period. “Just by looking one day later at the distribution of forces, we may learn even more about how this implantation is happening,” she says.
Published: 2025-08-15 | Nature
Quantum-computing qubits are arranged in a grid in this artist’s illustration. Credit: Getty
Artificial intelligence (AI) tools are increasingly helping scientists to write papers, conduct literature reviews and even design laboratory experiments. Now researchers can add optimizing quantum computing to the list. A team has used an AI model to calculate the best way to rapidly assemble a grid of atoms that might one day serve as the ‘brain’ of a quantum computer. To show just how quickly the model can re-shuffle the atoms, the team also used the system to create a tiny animation of Schrödinger's cat. The work was reported last week in Physical Review Letters1. Study co-author Jian-Wei Pan, a physicist at the University of Science and Technology of China in Hefei, says the team became interested in using AI to speed up the building of these ‘neutral atom arrays’ after one of his former students got a job in an AI laboratory. “AI for science is emerging as a powerful paradigm for addressing complex scientific problems,” he says. One of the big challenges in using arrays of atoms for quantum computing is working out how to rearrange them in an “efficient, fast and scalable manner”, Pan says. AI solved that problem for the team — and did it quickly.
Playing with atoms
Classical computers carry out operations using binary digits, or bits, encoded as a 1 or 0. Quantum computers use qubits, which can be put into a ‘superposition’, in which the two states — 1 and 0 — exist simultaneously. Calculations involve entangling qubits, which means that their states become linked. Researchers have been creating qubits with materials such as superconducting circuits, trapped ions and grids of neutral atoms, which are prized for their ability to maintain their quantum states over a relatively long time.
To use the atoms as qubits, scientists trap them with laser light and then store quantum information in the energy levels of their electrons. The hope is that if you use enough atoms, a quantum computer will one day overcome the errors that often plague these systems — and eventually perform calculations that aren’t feasible for classical computers. Pan and his colleagues trained their AI model by showing it how various distributions of rubidium atoms could be nudged into a range of grid configurations using different patterns of laser light. Depending on the atoms’ starting locations, the model could then quickly work out the correct pattern of light needed to rearrange them into a selection of 2D and 3D shapes.
An animation created with an AI-guided laser pattern depicts Schrödinger's cat (version here slowed by a factor of 33). Credit: R. Lin et al., Phys. Rev. Lett.
The researchers used their model to assemble an array of up to 2,024 rubidium atoms in just 60 milliseconds. By contrast, another group assembled about 800 neutral atoms last year2, but without the use of AI, it took an entire second. For the video of Schrödinger's cat, the AI system directed laser light to move atoms to create the desired patterns. The atoms became visible when they emitted light in response to laser pulses.
Scaling up
Creating the right pattern of light, or hologram, that dictates how to arrange neutral atom arrays usually involves a slew of painstaking calculations. “And doing those calculations as you make the arrays bigger and bigger can take up a fair amount of time,” says Mark Saffman, a physicist at the University of Wisconsin–Madison. That’s why many of his colleagues “were really impressed by this work, as was I”. As the array gets larger, it also “becomes more challenging to calculate solutions to rearrange atoms”, says Joonhee Choi, a quantum researcher at Stanford University in California.
Choi says he thinks the new work is “remarkable”, adding that, “thanks to this investment in AI, we can actually come up with better algorithms to rearrange large-scale arrays”. Pan says that other research teams have already reached out with questions about his group’s methods and successfully reproduced the study’s results. But a fully functional quantum computer using neutral atoms, or any other qubit system, is still a distant prospect. To perform complex calculations with minimal error, a quantum computer would need about a million atoms’ worth of information — many more than the couple of thousand pieced together in this study, Saffman says. But, as physicists work towards those staggering numbers, Pan notes that the AI model shouldn’t have a problem keeping up. Adding on atoms shouldn’t create a lag in the AI’s ‘thought process’, he says, meaning that the method is “readily scalable to 10,000 or even 100,000 atoms in the future”.
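At its core, the rearrangement task the article describes is an assignment problem: decide which loaded atom should be moved to which target grid site. The sketch below is a hand-written greedy baseline of the kind the AI model replaces — not the authors' method — pairing each target site with the nearest still-free atom; real systems solve this optimally (for example with the Hungarian algorithm) and at vastly larger scales.

```python
# Greedy baseline for the atom-rearrangement assignment problem (illustrative
# only; the paper uses a trained AI model, not this heuristic).

def greedy_assignment(atoms, targets):
    """Match each target site to the nearest still-unassigned atom."""
    remaining = list(atoms)
    moves = []
    for tx, ty in targets:
        # Pick the closest free atom by squared Euclidean distance.
        best = min(remaining, key=lambda a: (a[0] - tx) ** 2 + (a[1] - ty) ** 2)
        remaining.remove(best)
        moves.append((best, (tx, ty)))
    return moves

# Atoms load stochastically; suppose six landed and we want a 2x2 grid.
atoms = [(0.1, 0.2), (3.9, 0.1), (2.0, 3.0), (0.2, 4.1), (4.0, 3.8), (2.5, 1.5)]
grid = [(0, 0), (0, 4), (4, 0), (4, 4)]

for start, end in greedy_assignment(atoms, grid):
    print(f"move atom at {start} -> site {end}")
```

Even this toy version hints at why the problem gets hard: the greedy choice for one site can strand a distant atom for a later site, and the number of candidate pairings grows rapidly with array size — the scaling challenge Choi describes.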