Recalls are common in the auto industry and mostly target particular parts or road situations. Tesla’s latest recall is sweeping, with the National Highway Traffic Safety Administration saying the Full Self-Driving software can break local traffic laws and act in ways the driver doesn’t expect in a grab bag of road situations.

According to the agency’s filing, those include driving through a yellow light on the verge of turning red; failing to come to a complete stop at a stop sign; speeding, either because the software fails to detect a speed limit sign or because the driver has set the car to default to a faster speed; and making unexpected lane changes to move out of turn-only lanes while traveling straight through an intersection. Drivers will be able to keep using the feature while Tesla builds a software patch for the defects.

The situations highlighted by the recall appear to be united by a design flaw that some safety experts argue has long been at the heart of Tesla’s driver assistance technology: the notion that drivers can let the software handle the driving but are also expected to intervene at a moment’s notice when the software needs help. Humans do not work that way, says Philip Koopman, who studies self-driving car safety as an associate professor at Carnegie Mellon University. “That’s a fundamental issue with this technology: You have a short reaction time to avoid these situations, and people aren’t good at that if they’re trained to think that the car does the right thing,” he says. The car is designed to buzz and beep when it determines that the human driver needs to take over.

Today’s recall shows that the US government is “dipping its toe in the water” when it comes to setting firmer limits not only on Tesla’s ambitious technology but on all automakers’ advanced driver assistance features, Koopman says. These features are meant to make driving more fun, less tedious, and safer, but they also require carmakers to make tricky decisions about the limits of human attention and how to market and explain their technology’s capabilities.

Tesla’s approach has been unique. Led by CEO Elon Musk, the company has bucked government scrutiny, criticized lawmakers, and in some cases built technology faster than regulators could regulate. “This is an interesting exercise in NHTSA figuring out how to use its authority with Tesla,” Koopman says.

A statement provided by NHTSA spokesperson Lucia Sanchez said the agency detected the issues cited in the new recall through analyses related to an investigation opened in 2022, a probe into why vehicles using Tesla’s Autopilot feature have a history of colliding with stationary first responder vehicles. The NHTSA filing says Tesla did not agree with the agency’s analyses but agreed to go forward with the recall anyway. The software defects will be fixed via an over-the-air update “in the coming weeks,” the agency says, which means drivers won’t have to bring their vehicles in for service.

Tesla did not respond to a request for comment, and it’s unclear what changes the automaker will make to its Full Self-Driving feature. (The company reportedly disbanded its press team in 2020.) But Tesla, SpaceX, and Twitter CEO Elon Musk tweeted that using the word “recall” to describe the update “is anachronistic and just flat wrong!”

Tesla’s Full Self-Driving feature isn’t actually “self-driving” as most people would understand the term.
Even Tesla calls it a “driver assistance” feature that is in “beta.” The company’s documentation says drivers have to stay vigilant and be ready to take over at any moment. The feature is meant to keep cars within a lane, make lane changes automatically, parallel park, and slow and stop for stop signs and traffic lights. Drivers have paid anywhere from $5,000 to $15,000 for the “beta” feature.

It was first released in 2020 to customers who Tesla said had proven themselves safe and skilled enough to test the software on public roads. In late November, Tesla released the feature to everyone who had paid for it. Some Tesla owners have filed a class action fraud lawsuit over the technology, citing Musk’s numerous promises that truly self-driving technology was just months away.

Tesla releases quarterly vehicle safety reports in which it says that cars using Autopilot are much less likely to get into crashes than the average American vehicle. But that comparison doesn’t account for variables that would make Autopilot’s role in crashes clearer, including the type and age of the car (new and luxury vehicles like Teslas are involved in fewer crashes) and location (rural areas, where Teslas are less popular, see more crashes on average). Federal data shows that Tesla vehicles equipped with Autopilot have been involved in at least 633 crashes since July 2021.

This is just Tesla’s latest tangle with the federal government. The investigation into collisions between first responder vehicles and Teslas on Autopilot continues. NHTSA also opened an investigation last year after receiving hundreds of driver complaints that the company’s vehicles on Autopilot had displayed “phantom braking,” suddenly stopping without warning or cause.

Some of Tesla’s interactions with the US government have been more pleasant. Just this week, the Biden administration announced that the company would take part in its effort to create a nationwide, public electric-vehicle charging network by opening part of its well-developed Supercharger network to drivers of other electric vehicles for the first time. The announcement marks a detente after years of frostiness between Musk and the White House. The CEO has argued that the administration hasn’t given Tesla proper credit for kick-starting climate-friendly vehicle electrification in the US; the administration has pushed back against Tesla’s anti-union stance. The truce came in Musk’s love language: a presidential tweet.