18 September 2024
|
Zaker Adham
Generative AI has often been compared to the world's most brilliant intern — one that’s incredibly smart but lacks experience, leading to moments of both brilliance and mishaps. Despite its capabilities, AI tools have caused a few controversies.
Take, for example, the lawyer who relied on an AI tool's eloquence, only to face fines after submitting fabricated legal documents. Or Air Canada’s lawsuit following its AI-powered customer service chatbot’s mistake in promising bereavement fare reimbursement. These incidents highlight the need for more responsible AI.
A 2023 Gallup/Bentley University survey revealed that only 21% of consumers trust businesses to handle AI responsibly, raising concerns as AI chatbots become more widespread. If we can train interns to act responsibly, surely we can do the same with AI chatbots.
Here are five steps to achieve that:
Users are quick to notice when their rights are respected online. For example, no one wants sensitive information shared without consent. Studies show that transparency about how data is collected and used influences consumer trust. When building responsible AI chatbots, follow these three guidelines:
For instance, while ChatGPT's privacy policy is vague about data retention, Claude’s AI system automatically deletes data after 30 days, providing more transparency.
Like any good intern, AI chatbots need regular performance evaluations. Dr. Catherine Breslin, an AI consultant and former Alexa developer, emphasizes the importance of testing AI both before and after deployment. Continuous testing ensures that AI chatbots are trained on unbiased and diverse data sets to reduce errors and ethical concerns.
Fine-tuning and prompt engineering are essential steps. These processes help the AI learn specific tasks and styles, such as understanding company policies in the case of an HR chatbot. A responsible chatbot should also be transparent about how it reaches decisions, building user trust.
Much like new employees undergoing safety training, chatbots must meet security standards. According to Jonny Pelter, former CISO at Thames Water, chatbot security should include measures like incident response and AI security monitoring. With increasing AI-driven threats, organizations must adopt adversarial testing and secure software development practices.
AI regulations, such as the EU AI Act and the U.S. government's executive order on AI, are pushing companies toward stronger safety protocols.
Defining responsibility for AI’s actions is crucial. Legal frameworks like the EU’s AI Act and GDPR impose strict guidelines for AI systems in Europe, while global standards like NIST and ISO 23894 encourage fairness and accountability. In highly regulated industries like finance, AI chatbots must meet stringent compliance standards to avoid mishaps.
Just like interns, we expect AI chatbots to follow ethical standards, such as respecting customers and acting with integrity. AI systems are managed by "humans in the loop" to ensure accountability for any actions taken by the chatbot. Dr. Nataliya Tkachenko of Cambridge Judge Business School notes that AI's environmental impact is also an important concern, as chatbots require significant computational resources, especially for real-time applications.
Ultimately, responsible AI training can lead to more reliable and trustworthy AI tools — just as we train interns to become valued professionals.