Sameer Gupta is chief analytics officer at DBS Group. He is also head of the DBS Data Chapter, which brings together over 700 data professionals to establish a pool of deep data and AI modeling experts. He has more than 25 years of banking and financial services experience across advanced analytics, AI and machine learning, strategic marketing, customer experience, and change management. Previously, he was the customer care and insights solutions leader for banking clients in the Association of Southeast Asian Nations at IBM. Before that, he spent 16 years in various roles across Asia and Europe with GE Capital. He holds an MBA from the Indian Institute of Management.
Voting History
Statement: Effective human oversight reduces the need for explainability in AI systems.
Response: Disagree

“Effective human oversight and AI explainability are not interchangeable but complementary elements of responsible AI. Human oversight guides and controls AI behavior but is only as effective as the level of explainability available. Without clear insight into how and why an AI system reaches its conclusions, oversight becomes superficial, reducing human involvement to a rubber stamp rather than acting as a critical check. Explainability ensures that AI decisions can be understood, evaluated, and corrected when needed — especially vital when outcomes are unexpected or problematic.”

Statement: General-purpose AI producers (e.g., companies like DeepSeek, OpenAI, Anthropic) can be held accountable for how their products are developed.
Response: Agree

“Consider the analogy of buying a car. Manufacturers need to ensure that the cars they produce meet safety standards, and distributors need to do their own checks to make sure the cars are fit for sale, while drivers need to be sufficiently trained to operate the vehicle. Similarly, responsible AI producers should proactively mitigate risks, including those around intellectual property, hallucinations, toxicity, bias, fairness, and security. However, companies that use AI products need to establish their own governance frameworks to determine which products are suitable for adoption. The shape and form of such guardrails vary across regulatory boundaries, industry standards, and customer expectations. Individual users of AI should also be aware of AI risks and how best to manage them. Companies and regulators can play a role by stepping up digital and AI literacy.

But what happens when things go wrong? Returning to the car analogy, accountability depends on the circumstances — was the accident caused by driver negligence or by a mechanical failure? While there are well-established guidelines for car accidents, the same can’t be said about AI-related incidents. This is a topic that warrants further discussion.”

Statement: There is sufficient international alignment on emerging codes of conduct and standards for global companies to effectively implement RAI requirements across the organization.
Response: Disagree

“It is reassuring to observe a growing global consensus around the need to establish consistent principles for trusted and safe AI while enabling space for innovation. In May 2023, the G7 launched the Hiroshima AI Process, the world’s first multinational effort to promote safe, secure, and trustworthy AI. In October 2023, the UN created its 39-member AI Advisory Body to examine the risks, opportunities, and global governance of AI. While these early steps are going in the right direction, we have a long way to go in aligning on the principles, and even further in how these principles will be adopted.

What is absent is a supranational platform that enables various stakeholders to discuss and take action on some of the thorniest questions around AI. Companies must take it upon themselves to think about governance frameworks and to best manage the real and material risks that AI presents. We recognize that companies operating in multiple jurisdictions will have to navigate diverse regulatory landscapes and sometimes divergent processes around AI. At DBS, some of the governance mechanisms we have in place include a senior-level committee that opines on whether an AI use case is fit not just legally but ethically.”

Statement: Organizations are sufficiently expanding risk management capabilities to address AI-related risks.
Response: Agree

“Organizations have been gradually unlocking the potential of data over the years while simultaneously developing data and risk management frameworks to mitigate associated risks. Recently, the adoption of AI/ML and generative AI capabilities has accelerated the industrialization of data usage in organizations, introducing a new set of amplified risks, such as fairness, bias, explainability, ethics, and hallucination. This has led to significant changes in how organizations manage data-related risks, shifting the focus from managing the data itself to ensuring its responsible use.

In essence, organizations are expanding their risk management frameworks to address AI-related risks. However, the challenge lies in ensuring that these measures can keep pace with the rapid evolution of AI capabilities, particularly in light of the emerging risks associated with generative AI.”