Responsible AI / Panelist

Pierre-Yves Calloc’h

Pernod Ricard

France

Pierre-Yves Calloc’h is chief digital officer of Pernod Ricard, which owns more than 240 premium wine and spirit brands. He leads the strategic digital programs at the heart of Pernod Ricard’s digital transformation, combining the implementation of AI solutions at scale with the incubation and acceleration of new businesses. He joined the company in 2003, serving in CIO roles and then as managing director of several subsidiaries in Latin America before becoming global digital acceleration director in charge of digital marketing and e-commerce acceleration. Before joining Pernod Ricard, he was IT director for the Gérard Darel fashion group in Paris. Calloc’h holds an engineering degree from École Polytechnique.

Voting History

Statement / Response
Effective human oversight reduces the need for explainability in AI systems. Neither agree nor disagree “Before an AI system is validated for use, it needs approval, often by experts. During the development and tuning phase of an AI system, explainability is a must to create upfront trust; otherwise, the system will not go live.

In other words, one of the reasons for implementing AI systems is to identify better ‘insights’ — something previously unseen. Before going live, the insights generated by the AI system are reviewed by experts, who will need explainability to validate the system. This is why explainability is a prerequisite, meaning ‘no explainability, no live system.’”
General-purpose AI producers (e.g., companies like DeepSeek, OpenAI, Anthropic) can be held accountable for how their products are developed. Agree “In the development of GPAI tools, a number of decisions are made, such as the vetting of training sources (not all websites are vetted), the directions given to the reinforcement learning teams when finalizing the training, and the level of transparency on the code. Although we should not expect these companies to check the actual content of each training source, they should provide transparency on how their design and fine-tuning decisions impact the output of the models.”
There is sufficient international alignment on emerging codes of conduct and standards for global companies to effectively implement RAI requirements across the organization. Disagree “Most emerging codes of conduct and standards contain similar principles, but there are many variations that an international company needs to adapt to.”
Companies should be required to make disclosures about the use of AI in their products and offerings to customers. Agree “Companies should always care about the trust built with the people they engage with: employees, consumers, customers, providers, and partners. Use of AI should be disclosed in particular when there is an expectation that the work has been done in a traditional way. This will allow receivers of AI-generated content, for instance, to be more wary of known limitations of the technology.”
Organizations will be ready to meet the requirements of the EU AI Act as they phase in over the next 12 months. Strongly agree “The EU framework is pragmatic and adapted by level of risk. Driven by respect for employees, consumers, and business partners, many companies have already started implementing some of the requirements for ethical reasons.”
Organizations are sufficiently expanding risk management capabilities to address AI-related risks. Neither agree nor disagree “Because of a lack of knowledge and understanding, there is a mix of overestimation of risks and some underestimation. We are missing an approach by type of usage.”