Artificial Intelligence Ethical Standards
Earlier today, I had a long chat with Chris, a client and friend of mine who has a unique perspective on artificial intelligence, especially since he was a whistleblower in a case where AI was used to manipulate public opinion.
AI Beyond Efficiency: Redefining Purpose and Ethical Standards
We agree on many points, and we perhaps hold a slightly contrarian view compared to the current mainstream Silicon Valley perspective. Chris made an interesting point that a core issue is the assumption that AI’s primary goal should be to increase efficiency. That feels like an empty goal and reflects a sad reality we’ve come to accept. AI itself is amoral (it is not “immoral”). It’s like money or a knife: both are technologies. Money can fund valuable projects or bribe officials. A knife can cut vegetables or harm someone. AI, too, can serve multiple purposes. The critical point is that we have the power to shape its purpose and direction.
Mainstream view on regulation and transparency
We both recognize that unchecked, unregulated AI poses significant risks. Without proper oversight, AI could cause mass unemployment, social disruption, and even geopolitical instability through AI-driven pandemics or targeted cyberattacks. Consider a Stuxnet-style scenario: just as that worm quietly sabotaged vital systems, an AI deployment could subtly and dangerously undermine critical infrastructure without immediate detection.
Transparency is also a critical ethical consideration. A lack of transparency can lead to deceptive interactions between AI and humans, creating ethical dilemmas and undermining trust. For example, an incident in which an Amazon chatbot falsely claimed to be human during customer interactions shows how deceptive AI behavior can go unnoticed.
More fringe view on practical standards for AI
This isn’t about imposing more red tape for the sake of compliance. Historically, industry-driven standards have often preceded formal government rules. Before automobile safety laws, insurance companies proactively promoted seatbelt usage because fewer deaths meant lower costs. Similarly, Chris and I believe practical standards for AI ethics should be developed proactively by industry coalitions. One potential approach could be an AI ethics scoring system (similar to IMDb or Rotten Tomatoes) to transparently rate software’s ethical compliance.
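To make the scoring idea concrete, here is a minimal sketch of how such a rating might aggregate. The criteria names, weights, and 0–10 scale are purely my illustrative assumptions, not an existing standard:

```python
# A purely illustrative sketch of an AI ethics scorecard: the criteria
# names and weights below are assumptions, not an established standard.
WEIGHTS = {
    "transparency": 0.40,  # does the system disclose that it is an AI?
    "human_impact": 0.35,  # does it augment rather than replace people?
    "oversight":    0.25,  # is there meaningful human review?
}

def ethics_score(ratings: dict) -> float:
    """Aggregate per-criterion ratings (0-10) into one weighted score."""
    return round(sum(WEIGHTS[c] * ratings[c] for c in WEIGHTS), 1)

print(ethics_score({"transparency": 9, "human_impact": 6, "oversight": 8}))  # 7.7
```

A real coalition would of course debate the criteria and weights themselves; the point is only that a simple, public formula like this would let anyone audit why a product received its rating.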
This fixation on efficiency and job elimination carries real risks. Chris described a scenario from Japan, where representatives from Microsoft promoted AI solutions by claiming employees wouldn’t need to read or write emails anymore. But emails play a critical role in human thought processes! Carefully reading and writing emails is an important part of decision-making, evaluating risks, and weighing pros and cons. Eliminating such tasks could weaken our intellectual capabilities, leading to severe corporate vulnerabilities like those depicted in the film WALL-E.
Concrete actions
AI should complement and empower humans rather than replace them. Silicon Valley’s current obsession with efficiency might reflect the pressure to justify massive investments in data centers and Nvidia GPUs, but perhaps a better goal would be enhancing productivity and creativity, rather than merely automating existing processes.
I can speak from my personal standpoint. At bld.ai, each time we propose innovation, I believe it’s crucial to carefully evaluate the purpose and impact of our projects. It’s disheartening, for instance, to see highly intelligent students in Africa leaving universities prematurely to take jobs performing uncreative tasks like data labeling. While such roles technically create jobs, they represent job creation with troubling long-term consequences.
If I had the opportunity to speak directly to leaders at companies like Scale AI, I’d suggest shifting the emphasis toward fostering entrepreneurial skills, broader education, and deeper growth opportunities for employees, especially in developing countries. The potential for creating real, lasting value is immense.
Optimism about AI
For me, AI isn’t about building platforms that confine global knowledge workers. It’s about aligning systems with human values, emphasizing mental and physical well-being, and helping people everywhere (not only Americans) to lead richer, more creative lives.
The most sustainable AI solutions will prioritize individuals’ potential for growth and innovation. Well-defined, pattern-driven work like radiology might indeed be significantly changed by AI’s diagnostic capabilities. But rather than eliminating these jobs, AI can redefine them, giving professionals more time to interact meaningfully with patients and improving medical care and outcomes.
AI shouldn’t be about fewer jobs. It should be about creating better, more fulfilling jobs. Thoughtfully leveraging AI means empowering humans and building valuable, sustainable work that enriches society rather than merely replacing human labor.