As the cyclone got closer, Fugate says he struck up a conversation on Twitter with a resident of American Samoa who reported that winds were picking up and ferries had stopped running. Then the local shared another crucial nugget of information: He started tweeting about the NFL game on TV. “I knew he had power and a TV signal,” Fugate says. The then-administrator passed the intel along to FEMA colleagues trying to figure out where different emergency resources needed to go.

More than a decade later, Twitter has become an even more powerful and established tool for collecting and disseminating information in a crisis. Government agencies and organizations like the Red Cross have built the platform into operating procedures for responding to natural disasters like cyclones or earthquakes, or manmade ones like war.

But now that Tesla and SpaceX CEO Elon Musk has acquired (and hollowed out) Twitter, the platform is changing in ways that threaten to transform how people dealing with disaster and the authorities trying to help them communicate. Musk has said he favors looser moderation, welcomed back banned users, and attempted to allow anyone to pay for the check mark originally designed to verify notable accounts, including those of government agencies, NGOs, and journalists.

Emergency managers and humanitarian groups worry the changes to Twitter could hinder their lifesaving work. “I don’t think Twitter has looked at the second, third, fourth tier effects of what they do—and that’s what we do, generally,” says Kate Hutton, the communications coordinator at the Seattle Office of Emergency Management.

Crisis and Twitter have gone hand in hand since shortly after the service debuted in 2006. A disaster even helped popularize the hashtag as an organizing tool. In 2007, users adopted #sandiegofire as a way to track and aid others in the midst of fast-moving wildfires in Southern California.
As the platform grew, some emergency managers began to use it more formally to blast out crucial messages to the public and inform decisions about where to send resources. Twitter provided a direct route to residents and media, who could in turn easily amplify information via retweets.

Robert Mardini, the director general of the International Committee of the Red Cross (ICRC), says that the organization has its own trends analysis unit that uses software to monitor Twitter and other online sources in places where the organization operates. That can help keep workers safe in conflict zones, for example.

Of course, you can’t believe everything you read on Twitter. During a crisis, emergency responders using social media must figure out which posts are false or unreliable, and when to call out dangerous rumors. This is where Twitter’s own moderation capacity can be crucial, experts say, and an area for concern as the downsized company changes.

In conflict zones, military campaigns sometimes include online operations that try to use the platform for weaponized falsehoods. “Misinformation and disinformation can inflict harm on humanitarian organizations,” Mardini says. “When the ICRC or our Red Cross Red Crescent Movement partners face false rumors about our work or behavior, it can put our staff’s safety in jeopardy.”

In May, Twitter introduced a special moderation policy for Ukraine aimed at curbing misinformation about its conflict with Russia. Nathaniel Raymond, coleader of the Humanitarian Research Lab at Yale’s School of Public Health, says that though Twitter has not made any recent announcements about that policy, he and his team have seen evidence that it is being enforced less consistently since Musk took over as CEO and fired many staff working on moderation. “Without a doubt we are seeing more bots,” he says.
“This is anecdotal, but it appears that that information space has regressed.” Musk’s takeover has also put into doubt Twitter’s ability to preserve evidence of potential war crimes posted to the platform. “Before, we knew who to talk to to get that evidence preserved,” Raymond says. “Now we don’t know what’s going to happen.”

Other emergency responders worry about the effects of Twitter’s new verification plan, which is on hold after some users who paid for a verification check mark used their new status to imitate major brands, including Coca-Cola and drug company Eli Lilly. Emergency responders and people on the front lines of a disaster both need to be able to determine quickly whether an account is the legitimate Twitter presence of an official organization, says R. Clayton Wukich, a professor at Cleveland State University who studies how local governments use social media. “They’re literally making life and death decisions,” he says.

WIRED asked Twitter whether the company’s special moderation policy for Ukraine remains in place, but did not receive a response, as the company recently fired its communications team. A company blog post published Wednesday says that “none of our policies have changed” but also that the platform will rely more on automation to moderate abuse. Yet automated moderation systems are far from perfect and require constant upkeep from human workers to keep up with changes in problematic content over time.

For people who work in emergency management, the upheaval at Twitter has raised larger questions about what role the internet should play in crisis response. If Twitter becomes unreliable, can any other service fill the same role as a source of distraction and entertainment, but also dependable information on an ongoing disaster? “With the absence of this kind of public square, it’s not clear where public communication goes,” says Leysia Palen, a professor at the University of Colorado Boulder who has studied crisis response.
Twitter wasn’t perfect, and her research suggests the platform’s community has become less good at organically amplifying high-quality information. “But it was better than having nothing at all, and I don’t know we can say that anymore,” she says.

Some emergency managers are making contingency plans. If Twitter becomes too toxic or spammy, they could turn their accounts into one-way communication tools, simply a way to hand out directions rather than gather information and quell worried people’s fears directly. Eventually, they could leave the platform altogether. “This is emergency management,” says Joseph Riser, a public information officer with Los Angeles’ Emergency Management Department. “We always have a plan B.”