
Apple tests Siri revamp with ChatGPT

Posted: 2025-10-03 02:53 | Author: admin | Views: 7
Amid continued setbacks, Apple is turning to an internal chatbot app, Veritas, to help reshape Siri's future.

I am hopeful about Apple's approach to keeping AI sandboxed on-device, securely accessing user information and things like subscribed and paid-for information from Apple News, etc. This could avoid many of the pitfalls of the not-ready-for-primetime AI that others have more quickly rolled out.

I hope Apple will also build safeguards into how it handles the longer back-and-forth conversations mentioned here. An AI-based advanced Siri needs to be designed as a highly professional personal assistant that maintains strict "personal boundaries." In effect, Apple needs to prevent people from thinking they are developing a friendship or other personal relationship with Siri.

This is because LLM AI mimics human language but can't actually think or feel emotion. Most people, however, do think and feel emotion, and they're highly prone to anthropomorphize even inanimate objects. When presented with something reasonably effective at acting human, most people really want to believe it is human.

Without boundaries and safeguards, people who think they are developing a friendship or relationship with an AI chatbot don't understand that the closest human analog to an unrestrained chatbot is a psychopath. Psychopaths also don't understand human emotion but work hard to mimic it in order to convince others that they do. The psychopath then uses that to manipulate their victims into doing whatever the psychopath wants.

Unlike a psychopath, AI can't think, so unless it already has preprogrammed "motivations," an improperly restrained LLM AI will, in an extended "conversation" with a human, create a feedback loop with that human, effectively distilling down the person's own motivations and reflecting them back, but without any thought or any emotional or ethical filters. The result can expose raw vulnerabilities and be highly damaging to the human, up to and including causing self-harm.

So while some contextual continuity could be a great thing in a Siri assistant, it should probably be designed to avoid extended conversations, or to otherwise periodically reset to a baseline understanding of personal user information.
