From SB 243 to ChatGPT: Why it’s ‘Not Cool’ to be Cautious About AI
If you voiced concern about a new, powerful, and unproven technology in the 1990s, you were often met with a specific brand of dismissal. You were a Luddite, a technophobe, standing in the way of progress. Your caution was seen as a failure of imagination.
This sentiment is perfectly encapsulated in a piece of 1999 political history: California Senate Bill 243. The bill, which sought to place a moratorium on the use of biometric technology like facial recognition in public spaces, was vetoed by then-Governor Gray Davis. His reasoning? A now-famous line: “While I am concerned about the potential ‘Big Brother’ implications of this technology, I do not believe it is cool to send a message that California is fearful of the future.”
Fast forward a quarter-century, replace “biometric technology” with “generative AI,” and you have the exact same cultural dynamic playing out. Today, urging caution about the breakneck speed of AI development is, in many circles, simply “not cool.”
But why? And at what cost does this pressure to be “fearless” come?
The Ghost of SB 243: A Blueprint for Dismissal
Governor Davis’s veto message is a masterclass in framing caution as a character flaw. The word “cool” is doing the heavy lifting. It implies that the stance of a state, or an individual, should be governed by aesthetics and market appeal, not by prudence or ethical consideration. To be cautious is to be uncool, backward, and fearful. To be enthusiastic, regardless of the consequences, is to be forward-thinking and brave.
This framing was incredibly effective. It sidelined the substantive concerns about privacy, state surveillance, and the erosion of civil liberties that SB 243 sought to address—concerns that have proven to be prescient and critically important today. By making it about “fear,” the debate was no longer about the content of the fear, but the emotion itself.
The ChatGPT Era: “Move Fast and Break Things” on Steroids
Enter the age of ChatGPT, Midjourney, and their rapidly evolving successors. The launch of these powerful AI tools has triggered a gold rush mentality unlike any since the early internet. The pressure to adopt, integrate, and monetize is immense. In this environment, caution is once again treated as a social and professional liability.
This pressure manifests in several ways:
- The FOMO (Fear Of Missing Out) Economy: Businesses are terrified of being left behind. Boards are asking CEOs, “What is our AI strategy?” Startups are pivoting to be “AI-native.” In this scramble, pausing to ask about data provenance, copyright infringement, environmental costs, or potential job displacement is seen as a luxury that competitors will not grant you. To hesitate is to lose.
- The “Inevitabilism” Narrative: A common rhetorical tactic is to frame AI progress as an unstoppable force of nature, like the tide coming in. “You can’t stop it, so you might as well get on board.” This deterministic view removes agency and moral responsibility from the developers and deployers of the technology. If it’s inevitable, why waste time with pesky things like regulation or ethical guardrails?
- The Cool Factor of the Tech Elite: The leading voices in AI are often brilliant, charismatic, and incredibly wealthy. Their vision of a techno-utopian future is seductive. When they speak of AI solving climate change or curing cancer, questioning the path to that goal can make you sound like you’re against the goals themselves. You’re not a thoughtful critic; you’re a party pooper.
The High Cost of Being “Uncool”
The social pressure to be uncritically enthusiastic about AI has real-world consequences. It creates a chilling effect where valid concerns are dismissed or silenced.
- Ethical Debates are Rushed: Complex questions about bias, disinformation, and the concentration of power are glossed over with promises that “the next model will fix it.” We’re building the plane while it’s in the air, and asking the passengers to just enjoy the ride.
- Regulation is Stifled: Just as with SB 243, calls for sensible, proactive regulation are framed as “innovation-killing.” This creates a regulatory vacuum where powerful technologies can become deeply embedded in our societal infrastructure before any rules of the road are established.
- Human-Centric Concerns are Marginalized: Worries about artistic integrity, the value of human labor, and mental health in an AI-saturated world are often dismissed as sentimental or irrelevant. The focus remains on capability and efficiency, often at the expense of humanity.
Redefining “Cool”: The Courage to be Thoughtful
It’s time to reclaim the narrative. True progress isn’t about blind acceleration; it’s about intelligent navigation. Being cautious about AI isn’t a sign of fear; it’s a sign of wisdom, responsibility, and a genuine commitment to a better future.
The truly “cool” figure in the age of AI is not the wide-eyed enthusiast but the clear-eyed steward. It’s the developer who advocates for rigorous testing and red-teaming. It’s the CEO who invests in AI ethics alongside AI R&D. It’s the artist who questions the data their tools are built upon. It’s the legislator who has the courage to write rules for a world that doesn’t fully exist yet.
The lessons from the veto of SB 243 are clear. The concerns we dismiss today as “uncool” will be the crises we grapple with tomorrow. We had a chance to thoughtfully shape the future of facial recognition in 1999, and we prioritized being “cool” over being careful. We are now living with the results of that choice.
Let’s not make the same mistake with AI. Let’s build a culture where it’s not just acceptable, but laudable, to pause, to question, and to ensure that this incredible technology is shaped by human wisdom, not just by market hype. The future is too important to be left to the “cool” kids. It belongs to the careful, the critical, and the courageous.