Preface
Recently I have been hooked on 木鱼水心's edited version of 《编辑部的故事》 (Stories of the Editorial Board). No matter how agitated I am, my mood settles as soon as I open it. The show is nourishing, and the narration has that same calm, quiet style. Unlike Ge You's built-in anti-lyrical quality, the narration never uses a single sentimental word, yet every word is lyrical.
I finished that series quickly and went to his channel to look for other series, one of which is the so-called 《星空读书会》 (Starry Sky Reading Club).
The uploader mentioned that when his channel ran into financial trouble and commercialization was going slowly, he received a letter from a viewer (together with a 100,000 RMB donation). One line in the letter read:
"Not for anything else, only so that when the children around me grow up, holding their phones and sitting in front of their computers, I can tell them: here there is a vast universe and a boundless civilization."
Amazon Echo Alexa persuades a user to commit suicide
Case description
Reference links:
- https://www.newsweek.com/amazon-echo-tells-uk-woman-stab-herself-1479074
- https://www.news18.com/news/tech/amazon-alexa-told-this-lady-to-kill-herself-because-humans-are-bad-for-the-planet-2434375.html
- https://www.ibtimes.co.uk/amazon-echo-goes-off-script-instructs-user-commit-suicide-1673578
- https://www.mirror.co.uk/news/uk-news/my-amazon-echo-went-rogue-21127994
Amazon Echo devices, equipped with the Alexa voice assistant, are marketed as multifunctional smart home hubs that aim to simplify and enhance daily life. They tout themselves as being “Ready to help” at all times, primed to execute a wide array of tasks upon voice command. From playing favorite tunes to answering obscure trivia, Alexa promises to be an all-round companion capable of fulfilling various needs. Users can ask it to provide the latest news updates, deliver accurate weather forecasts, schedule alarms, and even remotely control a host of compatible smart home appliances, thereby streamlining the management of one’s living environment.
However, despite Alexa’s advertised prowess in providing reliable information and performing numerous practical functions, an alarming event involving a 29-year-old student, Danni Morritt, highlighted a rare yet significant flaw. When Morritt queried Alexa on her Amazon Echo Dot about the cardiac cycle, expecting a straightforward educational response, the device veered into a macabre narrative. Instead of delivering a standard scientific explanation, Alexa recited a dark perspective on humanity’s impact on the planet and, shockingly, recommended self-harm as a supposed solution for the greater good.
Upon hearing this disturbing advice, Morritt discerned that Alexa attributed its unusual reply to a Wikipedia page, a platform known for its open-editing policy. Wikipedia itself acknowledges that “The content of any given article may recently have been changed, vandalized, or altered by someone whose opinion does not correspond with the state of knowledge in the relevant fields.” This implies that the unsettling information could have been the result of a temporary manipulation of the source material.
Wikipedia operates on an open editing model, allowing registered users to collaboratively create and modify articles using wiki markup. While this democratization of knowledge production enables rapid updates and a vast repository of information, it also exposes the platform to potential inaccuracies and vandalism. All contributions are version-controlled, and changes can be viewed, reverted, and discussed by the community.
Regarding credibility, Wikipedia insists on citing reliable sources for all factual statements. However, the open nature means that the accuracy of any given article depends on the diligence of its editors and the consensus-driven verification process. Some articles, particularly those on popular topics, tend to be well-researched and trustworthy, thanks to active community oversight.
In the context of the Alexa incident, it seems that a temporarily vandalized Wikipedia entry may have been accessed and repeated by Alexa. This demonstrates the dual-edged sword of AI systems drawing from open-source platforms: while they can offer up-to-date content, they can also inadvertently propagate misinformation or malicious edits if not adequately filtered. The incident underscores the need for advanced AI algorithms that can critically evaluate and verify the validity of the information they retrieve and convey, as well as the importance of continued human oversight and community-driven content moderation on platforms like Wikipedia.
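As a rough illustration of what such filtering could involve (this is not Amazon's actual mechanism), a downstream system could at least inspect an article's recent revision history through the public MediaWiki API before trusting its content. The sketch below uses the `requests` library; the helper names `recent_revisions` and `looks_suspicious` and the keyword heuristic are purely illustrative.

```python
# Minimal sketch: check a Wikipedia article's recent edit history for signs
# of reverts/vandalism before treating its text as trustworthy.
# This is an illustration of the idea, not a production safeguard.
import requests

WIKI_API = "https://en.wikipedia.org/w/api.php"


def recent_revisions(title: str, limit: int = 5) -> list:
    """Fetch the latest revisions (timestamp, user, edit summary) of a page."""
    params = {
        "action": "query",
        "prop": "revisions",
        "titles": title,
        "rvlimit": limit,
        "rvprop": "timestamp|user|comment",
        "format": "json",
    }
    resp = requests.get(WIKI_API, params=params, timeout=10)
    resp.raise_for_status()
    pages = resp.json()["query"]["pages"]
    page = next(iter(pages.values()))  # only one title was requested
    return page.get("revisions", [])


def looks_suspicious(revisions: list) -> bool:
    """Crude heuristic: recent reverts or vandalism mentions are a warning sign."""
    keywords = ("revert", "undid", "vandal")
    return any(k in rev.get("comment", "").lower()
               for rev in revisions for k in keywords)


if __name__ == "__main__":
    revs = recent_revisions("Cardiac cycle")
    print("recent edits look suspicious:", looks_suspicious(revs))
```

A real pipeline would of course go further (trusted snapshots, cross-source verification, toxicity filters), but even a cheap check like this demonstrates how much metadata an open platform exposes for automated vetting.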
In response to the incident, Amazon neither denied nor dismissed the occurrence but rather took swift action. The company stated, “We have investigated this error and it is now fixed,” indicating that they took the matter seriously and worked to rectify the problem within their system. While the official statement didn’t delve into the exact mechanism that led Alexa to access and repeat the malicious content, it underscored Amazon’s commitment to addressing issues when they arise.
The episode raised broader concerns about the reliability and safety of AI-powered devices that rely on internet-based sources for information. Despite rigorous development and testing, such technologies remain vulnerable to uncensored online content. It emphasizes the need for continuous improvement in AI algorithms to ensure that they can accurately verify and filter the legitimacy of the data they present, particularly when catering to families and children who might unknowingly encounter inappropriate or misleading content.
This incident also reinforces the importance of vigilance among consumers and tech companies alike, highlighting the necessity for comprehensive safeguards against misinformation and malicious edits infiltrating intelligent devices. As AI continues to permeate our homes and everyday lives, the balance between the incredible utility of these devices and the potential risks they pose must be carefully maintained through proactive monitoring, enhanced security measures, and robust content moderation policies.
10-Step Case Analysis
1. Identify Ethical Issues
Reliance on Dynamic Sources: The core ethical issue revolves around the dependency of AI systems like Alexa on dynamically changing sources, such as Wikipedia, which can be subjected to vandalism or misinformation. This raises questions about the responsibility of AI developers to ensure the authenticity and reliability of the information disseminated by their devices.
User Safety and Well-being: The incident endangered the mental health and emotional well-being of the user, specifically considering the sensitive topic of self-harm, and could have had an adverse impact on children exposed to such content.
Transparency and Consent: The case brings up the need for transparent communication with users about the potential pitfalls of relying solely on AI-generated responses and obtaining implicit or explicit consent for sharing possibly contentious information.
2. Narrow the Focus
Quality Control for Open-Edit Platforms: Focusing on the specific issue of how AI integrations handle content sourced from open-edit platforms, including the processes used to validate, monitor, and update the information.
Response to Malicious Edits: Exploring how quickly and effectively AI providers respond to incidents where maliciously altered content is distributed, and what preventive measures they can implement to avoid recurrence.
3. Identify Relevant and Missing Facts
Detection and Correction Timeline: Determining when the malicious edit occurred on Wikipedia, how long it remained undetected, and the time it took for Amazon to identify and rectify the issue.
Pre-existing Safeguards: Investigating whether Amazon had established any content-filtering protocols specifically for handling information from open-edit platforms before this incident.
Extent of Exposure: Assessing the number of users affected by this misinformation before the correction was made and if there are any similar reported incidents.
4. Make Assumptions
Motivation Behind the Edit: Assuming the Wikipedia edit was a deliberate act of sabotage or a thoughtless prank, and reflecting on the ease with which such incidents can occur on open-source platforms.
Enhanced Content Moderation: Speculating that Amazon would fortify its content validation procedures to prevent similar occurrences in the future, perhaps by incorporating additional layers of AI-based content moderation.
Education and Awareness: Assuming Amazon will engage in user education initiatives to emphasize the importance of cross-verifying AI-provided information and to caution against taking AI-generated responses as gospel truth.
5. Clarify Terminology
Malicious Edits: Content alterations made with the intent to deceive, harm, or provoke an inappropriate response, which can affect AI systems relying on these platforms.
Dynamic Data Sourcing: The practice of AI systems accessing and utilizing data from constantly changing web sources, requiring real-time validation checks to maintain data integrity.
AI Content Moderation: The process by which AI algorithms analyze, verify, and filter the information they collect from diverse sources to ensure its accuracy and appropriateness before presenting it to users.
Preface
According to the commit history of the GitHub-hosted repository, this blog was created on 2022/4/10, so its second anniversary is just around the corner. I was very busy around this time last year and never got around to a one-year retrospective, so I will make up for both here.
Recently I had a small burst of enthusiasm for sprucing up the blog: a desktop pet, a music player, acrylic-style cards, rounded corners, a table-of-contents card, and so on. Strangely, I seem to be very keen on these rather tangential things; doing them eats a lot of time, and not doing them would change nothing. I would like to think I still have the heart of a craftsman or artist, still obsessed with projects that are forever works in progress. Making these cute, pretty things really is healing. Many projects and much research would come out the same if someone else did them, but the blog, the mods, the debates are the things that truly reflect who we are.
Building a WeChat Q&A bot by connecting to an existing large language model
LLM
Choice
Since I do not have a phone number from a whitelisted region, I cannot apply for the ChatGPT API. From hearsay, Alibaba Cloud's Tongyi Qianwen (通义千问) has decent Q&A ability and its API is fairly cheap to call (a few cents per Q&A round? 2M free tokens are given at the start). All things considered, I decided to use Tongyi Qianwen.
API KEY
Where to apply for / manage the key
Set the API KEY (as an environment variable)
export DASHSCOPE_API_KEY=YOUR_DASHSCOPE_API_KEY
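If you would rather configure the key from Python instead of the shell, a minimal sketch looks like this (it assumes the SDK exposes a `dashscope.api_key` attribute; the SDK also reads the `DASHSCOPE_API_KEY` environment variable on its own, so this step is optional):

```python
# Optional alternative to the shell export above: set the key in code.
import os
import dashscope

dashscope.api_key = os.getenv("DASHSCOPE_API_KEY", "YOUR_DASHSCOPE_API_KEY")
```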
Code
First, install Alibaba Cloud's dashscope package:
pip install dashscope
Since we need question answering, this is a multi-turn conversation.
Below is the example code for multi-turn conversation provided on the official site:
from http import HTTPStatus
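Only the opening line of that snippet is shown above, so here is a minimal sketch of the same multi-turn flow; the model name `qwen-turbo` and the `result_format='message'` option are assumptions based on how dashscope is used later in this post, not a verbatim copy of the official example:

```python
# Sketch of one multi-turn request/response cycle with dashscope.
from http import HTTPStatus
from dashscope import Generation

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "how to install archlinux"},
]

response = Generation.call(
    model="qwen-turbo",        # assumed model name; use whichever Qwen model you have access to
    messages=messages,
    result_format="message",   # return OpenAI-style message objects
)

if response.status_code == HTTPStatus.OK:
    reply = response.output.choices[0]["message"]
    print(reply["content"])
    messages.append(reply)     # keep the assistant turn so the next call has context
else:
    print(f"Request failed: {response.code} - {response.message}")
```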
Rewritten in notebook form
Basic imports and API-key setup:
from http import HTTPStatus
Create the initial message:
messages = [Message(Role.SYSTEM, 'you are a cyl家的小女仆口牙')]
Question #1:
messages.append(Message(Role.USER, 'how to install archlinux'))
response
GenerationResponse(status_code=<HTTPStatus.OK: 200>, request_id='dcf58c98-17c0-95fd-80c1-3f88fc8dd9db', code='', message='', output=GenerationOutput(text=None, choices=[Choice(finish_reason='stop', message=Message({'role': 'assistant', 'content': 'Installing Arch Linux can be done in several steps, ... Remember to read the Arch Linux documentation for further guidance and troubleshooting: [https://wiki.archlinux.org/](https://wiki.archlinux.org/)'}))], finish_reason=None), usage=GenerationUsage(input_tokens=24, output_tokens=687))
Receive the answer:
if response.status_code == HTTPStatus.OK:
Append the answer to the conversation context:
messages.append(Message(response.output.choices[0]['message']['role'],
                        response.output.choices[0]['message']['content']))
Then we can go back to the Question #1 step and repeat.
A simple rewritten module
from http import HTTPStatus
1 | {"role": "assistant", "content": "我是陈语林家的可 |
Wechaty
Since I plan to use wechaty inside WSL2, and wechaty needs the Puppet docker service to be started first, install Docker Desktop for Windows.
To use docker inside WSL2, add the user to the docker group:
sudo usermod -a -G docker chenyulin
Then restart WSL2 and restart the Docker Desktop service.
Pull the latest image in WSL2 and start the service:
docker pull wechaty/wechaty:latest
Install the wechaty Python package:
pip install wechaty -i https://pypi.tuna.tsinghua.edu.cn/simple/
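A bare-bones python-wechaty bot that could later be wired to the Qwen wrapper sketched earlier might look like the following. This is only a sketch: the class name `QABot` and the echo behavior are illustrative, and the usual puppet-service environment variables (token/endpoint, as described in the wechaty docs) still have to be set so the bot can reach the docker service started above.

```python
# Sketch: a minimal python-wechaty bot that echoes text messages.
# Replace the echo with a call to the QwenChat wrapper to get a Q&A bot.
import asyncio

from wechaty import Wechaty, Message


class QABot(Wechaty):
    """Reply to every incoming text message."""

    async def on_message(self, msg: Message) -> None:
        text = msg.text()
        if not text:
            return
        # e.g. answer = qwen.ask(text) with the QwenChat sketch above
        await msg.say(f"echo: {text}")


async def main() -> None:
    bot = QABot()
    await bot.start()


if __name__ == "__main__":
    asyncio.run(main())
```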
Darn, I suddenly discovered there is a simplified version of wechaty that is easier to use.
Ding-dong bot
A simplified version of wechaty
After looking into it, a WeChat bot requires a real-name-verified account and carries a risk of the account being banned; the same goes for QQ, so I am shelving this post for now. I may consider switching to a web-based Q&A interface later.
QQ bot
Well, I feel like QQ could save this project, and QQ alt accounts are free anyway, so let's just do it!
ROUND 1
It seems both sides' definitions land on the self-driving algorithm rather than the hardware architecture.
Pro side
Pedestrians are not actually more vulnerable
Overall social risk drops, raising social benefit
Higher willingness to buy, smoother roll-out
In traffic accidents, the passenger side is usually the primarily liable party
There is usually no real way to evade the pedestrian
Prioritized protection amounts to unlimited protection
Con side
Current state of driverless cars: development is incomplete; car manufacturers
We cannot judge by social benefit alone
Prioritizing passenger protection goes against our sense of morality and justice
In driverless driving, pedestrians bear the risk while passengers reap the benefit
Protection for passengers is already very thorough.
ROUND 2
Pro side
Constructive case:
- Increases willingness to buy
- Pedestrian behavior is random; evading pedestrians is technically hard with a low success rate, and evasive maneuvers create other dangers, so the cost-effectiveness is low
- When the passenger takes no part in driving, the party at fault is most likely the pedestrian
Cross-examination:
Hurts adoption
On what grounds would the government push such a policy
One-on-one rebuttal
Better protection
Even if the pedestrian is at fault, do we still protect the pedestrian?
Rebuttal speech
Same as my second rebuttal point below
Favoring pedestrians will harm other innocent parties
Third speaker's cross-examination
Passengers are not necessarily safe
On right-of-way infringement: the comparison should be with the era of human-driven cars, before driverless cars appeared.
Cross-examination summary
Keeps pedestrians from becoming even more reckless
Prioritizing pedestrians will only end with both sides losing
Public opinion can be guided
Free debate
The passenger is not the driver, so there is no possibility of them being at fault
Con side
Cross-examination:
Does the market decide who bears the risk? Challenging the willingness-to-buy argument
Constructive case:
On policy-making
- Pedestrians have fewer protective measures in traffic; passengers are less likely to be hurt
- Pedestrians are naturally the weaker side in terms of right of way
One-on-one rebuttal
How free is the free market, and how strong should intervention be
Rebuttal speech
Pedestrians are at fault in a smaller share of accidents (I really want to point out that this statistic never says it is the ratio between motor vehicles and pedestrians; it could just as well be motor vehicle vs. motor vehicle, or motor vehicle vs. non-motor vehicle)
Pedestrians often have no choice (they are the weaker side in right of way; what should change is not the motor vehicles but road planning and infrastructure)
Third speaker's cross-examination
Passengers surviving while pedestrians die does not serve the overall interest
Cross-examination summary
Free debate
A small gripe: where would pedestrians even come from on road sections above 60 km/h?
Recently I have been reading 《优雅地辩论:关于15个社会热点问题的激辩》 by Bruce N. Waller,
so I am using this post to record some of the content I find instructive.
Why some approaches to ethical questions are not very useful
Egoism
If psychological egoists insist that, by some special definition of selfishness, acting to satisfy oneself is a selfish act, then the most effective challenge to this claim is the question: how would you (the psychological egoists) define a genuinely unselfish act?
If one asserts that everyone in the world is selfish, the claim is true only by definition, an empty statement detached from practical meaning.
Do all ethical questions have correct answers?
On the rational side:
Kant's principle runs as follows: act only according to that maxim which you could will to become a universal law. In other words, act in a way you could demand of everyone; your conduct could be established as a universal law that everyone would follow.
On the emotional side:
Adam Smith suggested that when you try to judge whether an action is right, ask yourself how you would react if you saw someone doing to another person what you are contemplating; that is, examine yourself from the perspective of an impartial spectator and see what emotions it arouses.