This “almost” phase [of machine autonomy] isn’t a brief transition. It’s the product—one that will be with us for years, maybe decades. So it’s important to notice the patterns. When an AI system never admits uncertainty, or when a car’s marketing says “self-driving” but the fine print says “driver responsible,” that’s a warning sign. When you realize that you haven’t really been paying attention for the past 10 miles, or the past 10 auto-composed emails, that’s the trap.
Things don’t have to be this way, but they won’t change unless consumers see the situation clearly and refuse to accept it. We should reject the deal we’ve been handed—the one where the terms of service become a shield for companies and a sword against users. We should demand that companies share the risk they’re enticing us into taking. If they design for complacency, they should get some of the blame when their product fails. – “My Tesla Was Driving Itself Perfectly—Until It Crashed” by Raffi Krikorian, The Atlantic, April 2026
Remember this the next time someone says that AI-governed systems (cars, research, whatever) are more reliable than humans. Even if that's true, when the AI fails, the AI merchants should bear responsibility for the resulting damage.