2024-04-29

How Product Managers Can Build Trusted and Safe Products in the Age of AI

This article is a translation only and does not represent any views of the translator.


Introduction

With the advent of Generative AI, bad actors now have access to large language models (LLMs) and GenAI tools such as Dall-E and Midjourney to commit a wide variety of fraud. AI-generated text, selfies, video, and audio can all be used to open fake accounts and establish synthetic IDs. For online marketplaces and other social platforms, this raises serious questions about the future of trust and safety. As the importance of trust and safety continues to grow, product managers play a critical role in creating and maintaining secure online environments. With my experience leading business integrity at WhatsApp, I want to share insights on building strategies, overcoming challenges, and implementing best practices to effectively navigate the new complexities of trust and safety in the age of AI.

Generative AI has the potential to transform the way companies interact with customers and drive business growth. It is projected that AI bots will power 95% of all customer service interactions by 2025. Companies are exploring how it could impact every part of the business, including sales, customer service, marketing, commerce, IT, legal, HR, and others.

Meanwhile, bad actors aren’t far behind, leveraging these tools in a plethora of ways to commit fraud. As much as AI is an opportunity, it also poses an immense threat, bringing bias, misinformation, and user privacy challenges. This is especially true for online marketplaces such as Lyft and Airbnb, and for social technology platforms such as Twitter, Facebook, Snap, and WhatsApp.

Businesses need a game plan for how they will deal with the threat, and this is where trust and safety in products becomes more important than ever.

What is trust and safety, and why is it important?

Trust and safety refer to the measures and practices implemented to create a secure and reliable environment for users engaging in online platforms, services, and transactions. It encompasses strategies, policies, and tools designed to protect users from risks, such as fraud, harassment, misinformation, and other harmful activities. Trust and safety efforts aim to build confidence, foster positive user experiences, and safeguard the integrity of online ecosystems.

Challenges product managers face in building a platform with integrity

Balancing user experience and safety: The biggest challenge in building robust trust and safety measures is maintaining a seamless user experience. At WhatsApp, we recently launched a cloud platform to enable businesses to interact with their customers on WhatsApp. We need to ensure that any business onboarding onto the platform is real and authentic by taking it through a multi-step verification process. At the same time, however, we need to make this onboarding experience seamless so businesses can get started quickly. On top of that, we also provide a set of user controls to enable blocking or reporting of malicious activities. Striking the right balance between these controls, which can interfere with a seamless user experience, is a formula product managers need to crack.

Legal and regulatory compliance: Building a global, social platform requires you to stay updated on relevant laws and regulations to ensure compliance. At WhatsApp, for example, we prohibit use of the platform for buying, selling, or promoting certain regulated or restricted goods and services, such as firearms, alcohol, and adult products. Local country and regional laws further add to the complexity of these policies. As a product manager, building a safe platform for your users requires you and your team to stay abreast of global laws and regulations.

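
To illustrate how such layered restrictions might be encoded, here is a minimal sketch. The categories and regional overrides below are invented examples, not WhatsApp's actual policy.

```python
# Minimal sketch of a region-aware restricted-goods check.
# All categories and regional rules below are illustrative examples,
# not any platform's actual policy.

GLOBAL_RESTRICTED = {"firearms", "alcohol", "adult_products"}

# Local laws can restrict additional categories on top of the global baseline.
REGIONAL_RESTRICTED = {
    "US": {"tobacco"},
    "IN": {"tobacco", "e_cigarettes"},
}

def is_listing_allowed(category: str, region: str) -> bool:
    """Return True if a product category may be listed in the given region."""
    restricted = GLOBAL_RESTRICTED | REGIONAL_RESTRICTED.get(region, set())
    return category not in restricted
```

Keeping a global baseline separate from per-region overrides keeps the policy auditable as local regulations change.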

The emerging threat of AI: Recently, it was reported that a fake version of President Joe Biden’s voice had been used in automatically generated robocalls to discourage Democrats from taking part in the primary. AI-generated text, selfies, video, and audio can all be used to open fake accounts and establish synthetic IDs. If left unchecked, fake profiles, fake product listings, and other fake content can all cause serious hardship for your users and irreparably damage the hard-fought trust that you have built. At best, this may cause your users to think twice before completing a transaction on your platform; at worst, it may send them toward your competitors.

How do you build a product that users trust?

Risk and policy frameworks: At the core of trust and safety is the development and implementation of robust policies that align with legal and ethical standards. Once policies are defined, you need a framework and the necessary operational support to enforce them and mitigate risk.

Risk assessment: As threats evolve in this new AI age, so will your risks. Leverage ML, analytics, and data science to evolve risk assessment techniques for AI-generated content, transactions, and interactions. Don’t assume you’re immune. Deepfakes and synthetic IDs can be tricky to spot, so stay vigilant, and keep scanning for risky signals and suspicious connections between accounts.

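
As one hedged example of such signal scanning, a simple population z-score can flag accounts whose activity deviates sharply from the norm. The signal and threshold here are illustrative; production systems would combine many signals and far richer models.

```python
import statistics

def flag_outlier_accounts(signal_by_account: dict[str, float],
                          threshold: float = 2.0) -> list[str]:
    """Flag accounts whose activity signal (e.g. messages sent per hour)
    deviates more than `threshold` standard deviations from the mean."""
    values = list(signal_by_account.values())
    mean = statistics.mean(values)
    stdev = statistics.stdev(values)
    if stdev == 0:
        # All accounts behave identically; nothing stands out.
        return []
    return [account for account, value in signal_by_account.items()
            if abs(value - mean) / stdev > threshold]
```

An outlier check like this is only a first-pass signal; flagged accounts would typically feed into deeper review rather than automatic enforcement.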

Content moderation: Explore strategies for efficient and scalable content moderation, including automation, machine learning, human review and user reporting.

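
To make the combination of automation, machine learning, human review, and user reporting concrete, here is a minimal routing sketch. The score thresholds and outcome names are hypothetical, not any platform's actual policy.

```python
def route_content(ml_score: float, user_reports: int) -> str:
    """Route a content item using a hypothetical ML abuse score (0-1)
    and the number of user reports it has received.

    - score >= 0.95: confident enough to remove automatically
    - score >= 0.60, or any user report: queue for human review
    - otherwise: allow
    """
    if ml_score >= 0.95:
        return "auto_remove"
    if ml_score >= 0.60 or user_reports > 0:
        return "human_review"
    return "allow"
```

Routing the ambiguous middle band to humans is what lets automation scale without letting borderline calls be decided purely by a model.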

Account verification frameworks: Invest in building robust verification systems, such as user profiling and account verification combined with anomaly detection. You may need to incentivize your users to go through these verification steps, for example with access to advanced features or a verification symbol such as a badge or blue check.

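
One way to sketch such an incentive structure is to tie perks to completed verification steps. The step names and perks below are hypothetical examples.

```python
# Hypothetical verification tiers: each completed step unlocks features,
# and completing all steps earns a verified badge.

VERIFICATION_STEPS = ["email", "phone", "business_documents"]

def account_perks(completed_steps: set[str]) -> dict:
    """Map the verification steps an account has completed to its perks."""
    done = [step for step in VERIFICATION_STEPS if step in completed_steps]
    return {
        "can_message_customers": "phone" in completed_steps,
        "advanced_features": len(done) >= 2,
        "verified_badge": len(done) == len(VERIFICATION_STEPS),
    }
```

Gating each perk on a concrete step gives users a visible reason to complete verification rather than treating it as pure friction.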

User education: Building a T&S system alone isn’t enough! Invest in educating users about platform guidelines, security measures, privacy policies, and quality best practices. After all, you are building a product for legit, well-intentioned users. Think of creative, timely, and precise pieces of communication on the website, in-app, or triggered by a specific action.

Measuring trust

Trust can be very subjective. It means different things to different people. So how do you measure the results of your work, making sure the features you worked on had a positive impact on the product? In other words, how do you measure trust? Spam, fraud, and abuse keep evolving, and with the advent of LLMs and GenAI tools, there is a substantial risk of scaled abuse. Product managers in trust and safety have always looked at ‘effectiveness’ metrics such as prevalence, false positive rate, and precision. However, in the world of GenAI, ‘efficiency’ becomes equally important, if not more so. Cost of review, turnaround time, ease of scaling, and similar measures need to be part of your core metrics to ensure you optimize not only for accuracy but also for the speed at which scaled harm spreads.

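
As a minimal illustration, the effectiveness metrics above (prevalence, precision, false positive rate) plus one efficiency metric can be computed from a labeled enforcement sample. The counts and field names here are illustrative.

```python
def trust_metrics(tp: int, fp: int, tn: int, fn: int,
                  review_hours: list[float]) -> dict:
    """Effectiveness metrics from a labeled enforcement sample, plus one
    efficiency metric (mean human-review turnaround in hours).

    tp/fp/tn/fn: true/false positives and negatives against ground truth,
    where 'positive' means the item was enforced as abusive.
    """
    total = tp + fp + tn + fn
    return {
        "prevalence": (tp + fn) / total,        # share of traffic that is abusive
        "precision": tp / (tp + fp),            # enforced items that were truly bad
        "false_positive_rate": fp / (fp + tn),  # good items wrongly enforced
        "mean_turnaround_h": sum(review_hours) / len(review_hours),
    }
```

Tracking the turnaround figure alongside the accuracy figures is what surfaces the trade-off the paragraph above describes: a highly precise system that reviews too slowly still loses to scaled abuse.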

With the advent of GenAI, constant investment in and improvement of trust and safety across the platform is a critical factor in an organization’s success. Users expect a safe and secure environment, free from fraud, abuse, and harmful content, which in turn drives trust and potentially repeat customers in the future. AI is here to stay, and product managers need to adapt to these changes, be creative, make the right investments, and balance product growth with integrity and safety. Playing the long game is the only way to ensure sustainable user growth.

