POWER AND TOOLS
Every powerful tool in history has been used for both construction and destruction. Fire, printing, nuclear fission, the internet. AI is no different, except that the gap between invention and widespread deployment has compressed from decades to months.
GPT-4 was released in March 2023. Within weeks it was being used to write phishing emails, generate propaganda, and automate scams. Simultaneously, it was accelerating drug discovery, making education more accessible, and helping developers ship better software faster.
The policy response has been predictable: calls for regulation from people who don't understand the technology, and calls for unrestricted access from people who underestimate the risks. Neither position is useful.
What actually works is building safety into the tools themselves. Anthropic's Constitutional AI approach, OpenAI's usage policies, Google's safety filters—these are imperfect but practical. They reduce harm without eliminating capability.
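What "safety built into the tool" looks like at the API layer is easy to sketch. The example below is a minimal, hypothetical wrapper, not any vendor's actual pipeline: it runs each prompt through OpenAI's moderation endpoint before making the completion call. The block-on-any-flag policy, the refusal message, and the model name are illustrative choices.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def guarded_completion(prompt: str) -> str:
    # Screen the input before it reaches the model; block on any flag.
    # Real deployments use finer-grained policies than a single boolean.
    moderation = client.moderations.create(input=prompt)
    if moderation.results[0].flagged:
        return "Request declined: the input appears to violate usage policy."

    # Only unflagged prompts reach the completion call.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```

Real systems layer many such checks, on inputs, on outputs, and on fine-tuning data. And none of them survive contact with open weights, which is the next problem.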
The harder problem is open-source models. Once weights are released, there's no taking them back, and the safety training baked into a hosted model can be fine-tuned away by anyone with the weights and a GPU. Meta's Llama models are already being fine-tuned for purposes their creators never intended. This isn't an argument against open source. It's an argument for investing more in defense than offense.
The organizations building detection tools, verification systems, and defensive AI will matter more than the ones building the next frontier model.