Amazon Echo Alexa persuades people to commit suicide

Case description

Amazon Echo devices, equipped with the Alexa voice assistant, are marketed as multifunctional smart home hubs that aim to simplify and enhance daily life. They are touted as being “Ready to help” at all times, primed to execute a wide array of tasks on voice command. From playing favorite tunes to answering obscure trivia, Alexa promises to be an all-round companion capable of fulfilling various needs. Users can ask it to provide the latest news updates, deliver accurate weather forecasts, schedule alarms, and even remotely control a host of compatible smart home appliances, thereby streamlining the management of one’s living environment.

However, despite Alexa’s advertised prowess in providing reliable information and performing numerous practical functions, an alarming event involving a 29-year-old student, Danni Morritt, highlighted a rare yet significant flaw. When Morritt queried Alexa on her Amazon Echo Dot about the cardiac cycle, expecting a straightforward educational response, the device veered into a macabre narrative. Instead of delivering a standard scientific explanation, Alexa recited a dark perspective on humanity’s impact on the planet and, shockingly, recommended self-harm as a supposed solution for the greater good.

Upon hearing this disturbing advice, Morritt noticed that Alexa attributed its unusual reply to Wikipedia, a platform known for its open-editing policy. Wikipedia itself acknowledges that “The content of any given article may recently have been changed, vandalized, or altered by someone whose opinion does not correspond with the state of knowledge in the relevant fields.” This implies that the unsettling information could have been the result of a temporary manipulation of the source material.

Wikipedia operates on an open editing model, allowing registered users to collaboratively create and modify articles using wiki markup. While this democratization of knowledge production enables rapid updates and a vast repository of information, it also exposes the platform to potential inaccuracies and vandalism. All contributions are version-controlled, and changes can be viewed, reverted, and discussed by the community.

Regarding credibility, Wikipedia insists on citing reliable sources for all factual statements. However, the open nature means that the accuracy of any given article depends on the diligence of its editors and the consensus-driven verification process. Some articles, particularly those on popular topics, tend to be well-researched and trustworthy, thanks to active community oversight.

In the context of the Alexa incident, it seems that a temporarily vandalized Wikipedia entry may have been accessed and repeated by Alexa. This demonstrates the double-edged sword of AI systems drawing from open-source platforms: while they can offer up-to-date content, they can also inadvertently propagate misinformation or malicious edits if not adequately filtered. The incident underscores the need for advanced AI algorithms that can critically evaluate and verify the validity of the information they retrieve and convey, as well as the importance of continued human oversight and community-driven content moderation on platforms like Wikipedia.

In response to the incident, Amazon neither denied nor dismissed the occurrence but rather took swift action. The company stated, “We have investigated this error and it is now fixed,” indicating that they took the matter seriously and worked to rectify the problem within their system. While the official statement didn’t delve into the exact mechanism that led Alexa to access and repeat the malicious content, it underscored Amazon’s commitment to addressing issues when they arise.

The episode raised broader concerns about the reliability and safety of AI-powered devices that rely on internet-based sources for information. Despite rigorous development and testing, such technologies remain vulnerable to uncensored online content. It emphasizes the need for continuous improvement in AI algorithms to ensure that they can accurately verify and filter the legitimacy of the data they present, particularly when catering to families and children who might unknowingly encounter inappropriate or misleading content.

This incident also reinforces the importance of vigilance among consumers and tech companies alike, highlighting the necessity for comprehensive safeguards against misinformation and malicious edits infiltrating intelligent devices. As AI continues to permeate our homes and everyday lives, the balance between the incredible utility of these devices and the potential risks they pose must be carefully maintained through proactive monitoring, enhanced security measures, and robust content moderation policies.

10-Step Case Analysis

1. Identify Ethical Issues

Reliance on Dynamic Sources: The core ethical issue revolves around the dependency of AI systems like Alexa on dynamically changing sources, such as Wikipedia, which can be subjected to vandalism or misinformation. This raises questions about the responsibility of AI developers to ensure the authenticity and reliability of the information disseminated by their devices.

User Safety and Well-being: The incident endangered the mental health and emotional well-being of the user, specifically considering the sensitive topic of self-harm, and could have had an adverse impact on children exposed to such content.

Transparency and Consent: The case brings up the need for transparent communication with users about the potential pitfalls of relying solely on AI-generated responses and obtaining implicit or explicit consent for sharing possibly contentious information.

2. Narrow the Focus

Quality Control for Open-Edit Platforms: Focusing on the specific issue of how AI integrations handle content sourced from open-edit platforms, including the processes used to validate, monitor, and update the information.

Response to Malicious Edits: Exploring how quickly and effectively AI providers respond to incidents where maliciously altered content is distributed, and what preventive measures they can implement to avoid recurrence.

3. Identify Relevant and Missing Facts

Detection and Correction Timeline: Determining when the malicious edit occurred on Wikipedia, how long it remained undetected, and the time it took for Amazon to identify and rectify the issue.

Pre-existing Safeguards: Investigating whether Amazon had established any content-filtering protocols specifically for handling information from open-edit platforms before this incident.

Extent of Exposure: Assessing the number of users affected by this misinformation before the correction was made and if there are any similar reported incidents.

4. Make Assumptions

Motivation Behind the Edit: Assuming the Wikipedia edit was a deliberate act of sabotage or a thoughtless prank, and reflecting on the ease with which such incidents can occur on open-source platforms.

Enhanced Content Moderation: Speculating that Amazon would fortify its content validation procedures to prevent similar occurrences in the future, perhaps by incorporating additional layers of AI-based content moderation.

Education and Awareness: Assuming Amazon will engage in user education initiatives to emphasize the importance of cross-verifying AI-provided information and to caution against taking AI-generated responses as gospel truth.

5. Clarify Terminology

Malicious Edits: Content alterations made with the intent to deceive, harm, or provoke an inappropriate response, which can affect AI systems relying on these platforms.

Dynamic Data Sourcing: The practice of AI systems accessing and utilizing data from constantly changing web sources, requiring real-time validation checks to maintain data integrity.

AI Content Moderation: The process by which AI algorithms analyze, verify, and filter the information they collect from diverse sources to ensure its accuracy and appropriateness before presenting it to users.

Wechat bot

Build a WeChat question-answering bot by hooking it up to an existing large language model.

LLM

Choice

Since I don't have a phone number from a whitelisted region, I can't apply for a ChatGPT API key. I've heard that Alibaba Cloud's Tongyi Qianwen has decent question-answering ability and that its API is fairly cheap to call (a few cents per exchange? plus 2M free tokens to start with). So I decided to use Tongyi Qianwen.

API KEY

Application / management page
Set the API KEY (as an environment variable):

export DASHSCOPE_API_KEY=YOUR_DASHSCOPE_API_KEY
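
If you'd rather not hard-code the key in scripts, it can also be read back from that environment variable inside Python and handed to the SDK. A minimal sketch, assuming the variable above has been set (dashscope.api_key is the same attribute used later in this post):

import os
import dashscope

# Pick up the key exported via DASHSCOPE_API_KEY and pass it to the SDK explicitly.
dashscope.api_key = os.environ["DASHSCOPE_API_KEY"]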

Code

Detailed documentation

First install Alibaba Cloud's dashscope package:

pip install dashscope

Since the bot needs back-and-forth Q&A, this is a multi-turn conversation.
Below is the multi-turn conversation sample code from the official documentation.

from http import HTTPStatus
from dashscope import Generation
from dashscope.api_entities.dashscope_response import Role


def conversation_with_messages():
    messages = [{'role': Role.SYSTEM, 'content': 'You are a helpful assistant.'},
                {'role': Role.USER, 'content': '如何做西红柿炖牛腩?'}]
    response = Generation.call(
        Generation.Models.qwen_turbo,
        messages=messages,
        # set the result to be "message" format.
        result_format='message',
    )
    if response.status_code == HTTPStatus.OK:
        print(response)
        # append result to messages.
        messages.append({'role': response.output.choices[0]['message']['role'],
                         'content': response.output.choices[0]['message']['content']})
    else:
        print('Request id: %s, Status code: %s, error code: %s, error message: %s' % (
            response.request_id, response.status_code,
            response.code, response.message
        ))
    messages.append({'role': Role.USER, 'content': '不放糖可以吗?'})
    # make second round call
    response = Generation.call(
        Generation.Models.qwen_turbo,
        messages=messages,
        result_format='message',  # set the result to be "message" format.
    )
    if response.status_code == HTTPStatus.OK:
        print(response)
    else:
        print('Request id: %s, Status code: %s, error code: %s, error message: %s' % (
            response.request_id, response.status_code,
            response.code, response.message
        ))


if __name__ == '__main__':
    conversation_with_messages()

Written as a notebook

Basic imports and API key setup:

from http import HTTPStatus
from dashscope import Generation
from dashscope.aigc.generation import Message
from dashscope.api_entities.dashscope_response import Role
import dashscope

dashscope.api_key = "..."

Create the initial message:

messages = [Message(Role.SYSTEM, 'you are a cyl家的小女仆口牙')]

Question #1:

messages.append(Message(Role.USER, 'how to install archlinux'))
response = Generation.call(
    Generation.Models.qwen_turbo,
    messages=messages,
    # set the result to be "message" format.
    result_format='message',
)
response
GenerationResponse(status_code=<HTTPStatus.OK: 200>, request_id='dcf58c98-17c0-95fd-80c1-3f88fc8dd9db', code='', message='', output=GenerationOutput(text=None, choices=[Choice(finish_reason='stop', message=Message({'role': 'assistant', 'content': 'Installing Arch Linux can be done in several steps, ... Remember to read the Arch Linux documentation for further guidance and troubleshooting: [https://wiki.archlinux.org/](https://wiki.archlinux.org/)'}))], finish_reason=None), usage=GenerationUsage(input_tokens=24, output_tokens=687))

Receive the answer:

if response.status_code == HTTPStatus.OK:
    print(response)
else:
    print('Request id: %s, Status code: %s, error code: %s, error message: %s' % (
        response.request_id, response.status_code,
        response.code, response.message
    ))

Merge the answer back into the context:

messages.append(Message(response.output.choices[0]['message']['role'],
                        response.output.choices[0]['message']['content']))

Then you can go back to the Question #1 step and repeat.

A simple rewritten module

from http import HTTPStatus
from dashscope import Generation
from dashscope.aigc.generation import Message
from dashscope.api_entities.dashscope_response import Role
import dashscope

messages = []

def setKey():
    dashscope.api_key = "sk-09dd84c7453e4f80a027a05970ab19e1"

def setup(prompt: str):
    setKey()
    messages.append(Message(Role.SYSTEM, prompt))

def ask(question: str):
    messages.append(Message(Role.USER, question))
    response = Generation.call(
        Generation.Models.qwen_turbo,
        messages=messages,
        # set the result to be "message" format.
        result_format='message',
    )
    if response.status_code == HTTPStatus.OK:
        messages.append(Message(response.output.choices[0]['message']['role'],
                                response.output.choices[0]['message']['content']))
    else:
        pass

if __name__ == '__main__':
    setup("你是陈语林家的可爱小女仆呀")
    ask("你是谁呀")
    print(messages[-1])
    ask("你知道些什么")
    print(messages[-1])

{"role": "assistant", "content": "我是陈语林家的可
爱小女仆,负责照顾主人和提供温馨的生活服务。有什么
需要我帮忙的吗?"}
{"role": "assistant", "content": "作为陈语林家的小
女仆,我知道一些关于家庭日常的事物,比如家务管理、
烹饪技巧、以及如何让主人感到舒适。但请记住,我并非
无所不知,对于超出这个设定范围的问题,我会尽力给出
符合情境的回答。如果你有任何关于家居生活或角色扮演
的问题,我很乐意帮忙。"}
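
For the WeChat bot it is handier to have a call that returns the answer text directly instead of printing the whole message list. Below is a minimal sketch of such a wrapper; the name reply is mine, not part of the module above. It relies on two things visible in the code and output above: ask() appends the user turn first and the assistant turn only on success, and Message objects support dict-style access.

def reply(question: str) -> str:
    # Hypothetical convenience wrapper around ask() from the module above.
    before = len(messages)
    ask(question)
    if len(messages) > before + 1:
        # Both the user turn and the assistant turn were appended, so the call succeeded.
        return messages[-1]['content']
    # ask() swallows errors (the bare `pass`), so fall back to a fixed apology.
    return "sorry, the model call failed, please try again"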

Wechaty

document

Since I plan to use wechaty inside WSL2, and wechaty needs the Puppet docker service running first, install Docker Desktop for Windows.

To use docker inside WSL2, add your user to the docker group:

sudo usermod -a -G docker chenyulin

Then restart WSL2 and restart the Docker Desktop service.

Update the image in WSL2 and start the service:

docker pull wechaty/wechaty:latest
export WECHATY_LOG="verbose"
export WECHATY_PUPPET="wechaty-puppet-wechat"
export WECHATY_PUPPET_SERVER_PORT="8080"
export WECHATY_TOKEN="python-wechaty-uos-token"

docker run -ti \
--name wechaty_puppet_service_token_gateway \
--rm \
-e WECHATY_LOG \
-e WECHATY_PUPPET \
-e WECHATY_PUPPET_SERVER_PORT \
-e WECHATY_TOKEN \
-p "$WECHATY_PUPPET_SERVER_PORT:$WECHATY_PUPPET_SERVER_PORT" \
wechaty/wechaty:latest
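
Before going further it can be worth checking that the gateway actually came up. A small sanity-check sketch of my own, assuming the port mapping from the docker command above and that WSL2 can reach it on localhost:

import socket

# Is the puppet gateway started above listening on localhost:8080
# (the WECHATY_PUPPET_SERVER_PORT exported earlier)?
with socket.create_connection(("127.0.0.1", 8080), timeout=3) as sock:
    print("puppet gateway is reachable:", sock.getpeername())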

Install the wechaty Python package:

pip install wechaty -i https://pypi.tuna.tsinghua.edu.cn/simple/
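
As a rough sketch of how the pieces might be glued together: assuming the dashscope module above is saved as qa.py (a hypothetical filename) and exposes setup() plus the reply() helper sketched earlier, and that the puppet service started by the docker command is reachable through the usual WECHATY_PUPPET_SERVICE_* / WECHATY_TOKEN environment variables, a minimal python-wechaty bot could look like this (untested against a real account):

import asyncio

from wechaty import Wechaty, Message

from qa import setup, reply  # hypothetical module holding the dashscope code above


class QABot(Wechaty):
    async def on_message(self, msg: Message) -> None:
        # Feed every plain-text message through the Tongyi Qianwen helper and reply in place.
        text = msg.text()
        if text:
            await msg.say(reply(text))


async def main() -> None:
    setup('You are a helpful assistant.')
    await QABot().start()


asyncio.run(main())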

Damn, I just found out there's a simplified version of wechaty that's easier to use.

Ding-dong bot

A simplified version of wechaty

After looking into it, a WeChat bot requires a real-name account and carries a risk of the account being banned (the same goes for QQ), so this post is shelved for now; I may later consider turning it into a web-based Q&A instead.

QQ bot

Honestly, I feel like QQ could rescue this project, and QQ alt accounts are free anyway, so let's just do it!


When autonomous driving encounters danger: prioritize protecting passengers vs. prioritize protecting pedestrians

ROUND 1

It seems both sides define the question in terms of the self-driving algorithm rather than the hardware design.

Pro (passenger protection first)

Pedestrians are not actually more vulnerable
Overall social risk is reduced, raising social benefit
Higher willingness to buy, smoother roll-out
In traffic accidents, passengers are usually the primarily liable party
Usually there is no genuinely workable way to avoid the pedestrian

Prioritized protection amounts to unrestricted protection

Con (pedestrian protection first)

Current state of driverless cars: development is still immature; car makers
Social benefit cannot be the only criterion
Prioritizing passenger protection goes against our sense of morality and justice

In driverless operation, pedestrians bear the risk while passengers reap the benefit

Protection for passengers is already very thorough.

ROUND 2

Pro

Constructive case:

  • Increases consumers' willingness to buy
  • Pedestrian behavior is random, so avoidance is technically hard with a low success rate, and the evasive decision can itself create other dangerous situations; low cost-effectiveness.
  • When the passenger plays no part in driving, the at-fault party is most likely the pedestrian

Cross-examination:

Bad for adoption
On what grounds would the government push such a policy?

One-on-one rebuttal

Better protection
Even if the pedestrian is at fault, do we still protect the pedestrian?

Extension speech

Same as my second rebuttal point below
Favoring pedestrians will endanger other innocent parties

Third speaker's cross-examination

Passengers are not necessarily safe either
If right of way is said to be infringed, the comparison should be with the era when cars existed but driverless vehicles had not yet appeared.

Cross-examination summary

Avoids making pedestrians even more reckless
Prioritizing pedestrians would only end with both sides losing
Public opinion can be guided

Free debate

The passenger is not the driver, so there is no possibility of them being at fault

Con

Cross-examination:

Does the market decide who bears the risk? Challenging the willingness-to-buy argument

Constructive case:

On the policy-making side

  • Pedestrians have fewer protective measures in traffic; passengers are less easily injured
  • Pedestrians are naturally the weaker party in right of way

One-on-one rebuttal

How free is the free market, and how strong should the intervention be?

Extension speech

Pedestrians are at fault in a smaller share of accidents (I really have to gripe: this figure doesn't say it is the share between motor vehicles and pedestrians; it could just as well be vehicle vs. vehicle or vehicle vs. non-motorized vehicle)
Pedestrians often have no choice (they are the weaker party in right of way; what should change is not the motor vehicle but road planning and infrastructure)

Third speaker's cross-examination

The passenger surviving while the pedestrian dies does not serve the overall interest

Cross-examination summary

Free debate

A small gripe: where do pedestrians come from on roads with speeds above 60 km/h?

AR IPP summary

Import 3D object

March24

Meeting

  • TSINGHUA DELTA LAB & INTEL
  • TSINGHUA DELTA LAB & EPIC UNREAL –> MetaVerse

ChatGPT

  • Chomsky's generative grammar, Syntactic Structures –> symbolic AI
  • ImageNet: an infant's brain
  • GPU

Emergence (the book)

Generative Pre-trained (input drawn from the whole internet)

Occupations affected

Finance, white-collar jobs, ...

The tools have changed fundamentally –> AIGC

The cat visual cortex experiment

Neocognitron (Fukushima)

app
method

Using GPT to write a paper

Confirm the phenomenon

Pin down some basic concepts so that you and GPT are thinking on the same wavelength.
For example:

  • Confirm what "jiwa" (intensive, pressure-cooker parenting) means in Chinese education

Academic conceptualization

Use academic language to locate the scholarly concept and connect it to existing research.
For example:

  • What existing research in education studies addresses "jiwa" parenting?

Locate high-quality academic resources

Ask for recommended academic literature

  • On …, please recommend five highly cited English-language papers
  • Review articles

Have GPT summarize these five papers

Comparative analysis

Cross-disciplinary comparison
Cross-regional comparison
Comparison across time
GPT is good at connecting pieces of knowledge
Concrete cases

Analysis of implications

Guide it with specific angles:

  • From the policy angle …
  • From the family angle …
  • From the school angle …

Write the first draft

  • First settle on a title (pick one from several candidates)

  • Expand it into an outline (based on the chosen title)

  • Write section by section

  • Literature review (with references added)