
Because the oh-my-live2d official site went down, I need to self-host the Live2D service so that Umaru on my blog can come back to life. (Bonus: a chance to try building my own Live2D character.)
Update 9/25: after putting this off for ages, the official site came back up on its own. Fine, fine, skipping straight ahead to a customized Live2D.
The Gitalk comment system I used before stopped working in mainland China, so I tried switching to an option that is accessible there. One of them is Twikoo, built by the Chinese developer @imaegoo.
> Respect: my blog is also adapted from his modified hexo-icarus theme, and for Icarus-family themes the official docs provide a detailed walkthrough.

The work is mainly split into a cloud-function deployment part and a front-end configuration part.
In essence, this gives the static blog a service that can accept user-submitted data and display it.
There are two main pieces.
First, you need to apply for a database on MongoDB Atlas, following this guide: https://twikoo.js.org/mongodb-atlas.html
> (password kept private) The Hugging Face service then interacts with this database through that connection string.
Then spin up the cloud service on Hugging Face: https://twikoo.js.org/backend.html#hugging-face-%E9%83%A8%E7%BD%B2
My Space is here: https://huggingface.co/spaces/CallMeChen/BlogComment
Once the service starts successfully, you can see a log like the following:
See https://www.anzifan.com/post/icarus_to_candy_2
Visitors can also enter a numeric QQ email address to comment with their QQ avatar.
Add an image host to make uploading pictures easier (solved).
10/20: the comment system stopped working again. It turned out MongoDB had paused the database; I remembered that you are supposed to connect to it yourself every month (presumably so they can reclaim resources from inactive users).
During weekly maintenance, go to https://cloud.mongodb.com/v2/6610ebcc85d7126b38c3837b#/overview and connect once. Later I found the Hugging Face Space stuck in the build stage indefinitely, so I switched to the Netlify cloud-function platform instead.
https://app.netlify.com/sites/admirable-sunburst-4762e3/configuration/general
Take a look at the differences between a microprocessor (in our personal computers or phones) and a microcontroller.
### Von Neumann vs Harvard architecture
Efficiency: the Harvard architecture can avoid the "Von Neumann bottleneck".
> VERBOSE~ **Von Neumann bottleneck**: the bandwidth between the CPU and RAM is much lower than the speed at which a typical CPU can process data, because the shared bus for instructions and data causes contention.

In embedded systems, the Harvard architecture is widely used.
Our board (STM32F103C8T6) uses the Harvard architecture at the physical level (refer to the block diagram in the reference manual).
However, at the software level we treat the instruction memory and data memory as a single block of memory (therefore, it is more accurate to say that the STM32 uses a mixed Harvard and von Neumann architecture).
In the STM32, instruction memory, data memory, and the registers of peripherals/IO are all mapped into one address space.
Table from https://embeddedsecurity.io/vendor-stm32

### `const`
If you want to save the limited RAM space (data memory) for other variables, you can use this keyword to place a variable in ROM (program memory) instead. This matters on a Harvard architecture, where program memory and data memory are separate.
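A minimal sketch of the idea (the table values here are just made-up sample data):

```c
#include <stdint.h>

/* Marking the table 'const' lets the toolchain keep it in flash (program memory)
 * instead of copying it into the much smaller RAM (data memory) at startup.     */
static const uint8_t sine_table[8] = {128, 218, 255, 218, 128, 38, 1, 38};
```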
### `volatile`
This means two things:
- The compiler will not try to optimize accesses to a `volatile` variable. See the two examples on the slides.
- Each time the program reads a `volatile` variable, the processor will not look into cached data memory, meaning that the program always gets the newest data in memory (which is very important when external hardware changes the variable). However, this point is not relevant for the STM32 MCU since it has no data cache.
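A minimal sketch of the classic use case, assuming a flag shared between an interrupt handler and the main loop (the handler name `EXTI0_IRQHandler` is just an illustrative choice here):

```c
#include <stdint.h>

/* Set from an ISR, polled in main(): 'volatile' forces the compiler to re-read
 * the flag from memory on every iteration instead of caching it in a register. */
static volatile uint8_t data_ready = 0;

void EXTI0_IRQHandler(void)   /* hypothetical external-interrupt handler */
{
    /* (clearing the EXTI pending bit is omitted in this sketch) */
    data_ready = 1;
}

int main(void)
{
    while (!data_ready) {
        /* without 'volatile', the compiler could legally turn this into an endless loop */
    }
    /* ... consume the data ... */
    return 0;
}
```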
### `const volatile`?
`const volatile char *a` declares a pointer to a value that cannot be changed by the program through `*a`, but the value of `a` itself can be changed (to point to another value). `*a = 0` is not allowed; `a = &b` is allowed. Generally, we use `const volatile` to declare pointers that point to hardware registers or memory-mapped input ports (read only).
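A minimal sketch, using a made-up register address purely for illustration:

```c
#include <stdint.h>

/* Hypothetical read-only status register, memory-mapped at an example address.
 * volatile: hardware may change it at any time, so every read must hit memory.
 * const:    the program must not write through this pointer.                   */
#define STATUS_REG ((const volatile uint32_t *)0x40000000u)

uint32_t read_status(void)
{
    return *STATUS_REG;   /* allowed: reading the register                      */
 /* *STATUS_REG = 0; */   /* not allowed: writing through a const pointer       */
}
```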
```c
// include the header file for the board (containing the declarations of the SFRs)
#include "stm32f10x.h"   // exact header name depends on the toolchain/library in use
```
We need to use C code to set the values of the SFRs.
SFRs are registers embedded in the peripherals, used for configuration and control of those peripherals.
VERBOSE~
If we want to get the status of a peripheral, we read the value of an SFR.
If we want to send something to a peripheral, we write a value to an SFR.
Let's take the timer as an example:
SFRs in the block diagram of the timer:
SFR declarations in code:
We operate on these registers through bit operations.
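A minimal sketch of the usual read-modify-write bit operations, assuming the CMSIS device header so that `TIM2`, `TIM_CR1_CEN` and `TIM_SR_UIF` are available:

```c
#include "stm32f10x.h"   /* CMSIS device header; exact name depends on the toolchain */

void timer_bit_ops(void)
{
    TIM2->CR1 |=  TIM_CR1_CEN;      /* set a bit: enable (start) the counter          */
    TIM2->CR1 &= ~TIM_CR1_CEN;      /* clear a bit: stop the counter                  */

    if (TIM2->SR & TIM_SR_UIF) {    /* test a bit: has an update event occurred?      */
        TIM2->SR &= ~TIM_SR_UIF;    /* write the flag bit back to 0 to clear it       */
    }
}
```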
All the modes of GPIO:
The whole block diagram:
You may see an unfamiliar unit here.
After we configure the GPIO as an input port, the output driver is disabled (disconnected).
When configured as an output, the input driver part is still enabled so that we can read back the pin status.
**Open-drain output**: can "generate" a voltage higher than VDD at the IO pin (via an external pull-up to a higher supply).
**Push-pull output**: the most common one.
- "0" in the output register activates the N-MOS (LOW, 0 V, at the IO pin)
- "1" in the output register activates the P-MOS (HIGH, VDD, at the IO pin)

The other output modes are not covered.
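A minimal configuration sketch for the push-pull case (pin PA5 is an arbitrary choice; register and bit names are from the CMSIS device header for the STM32F1):

```c
#include "stm32f10x.h"   /* CMSIS device header; exact name depends on the toolchain */

/* Configure PA5 as a 2 MHz push-pull output and drive it high, then low. */
void pa5_pushpull_demo(void)
{
    RCC->APB2ENR |= RCC_APB2ENR_IOPAEN;   /* enable the GPIOA peripheral clock            */

    GPIOA->CRL &= ~(0xFu << (5 * 4));     /* clear the MODE5/CNF5 field                   */
    GPIOA->CRL |=  (0x2u << (5 * 4));     /* MODE5 = 10 (output, 2 MHz), CNF5 = 00 (push-pull) */

    GPIOA->BSRR = (1u << 5);              /* "1": P-MOS on, pin at VDD                    */
    GPIOA->BRR  = (1u << 5);              /* "0": N-MOS on, pin at 0 V                    */
}
```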
Peripherals inform the processor through interrupts.
We mainly deal with the interrupts from peripherals.
Each peripheral can have multiple interrupt sources, indicating different events. The processor responds to them through an interrupt service routine (ISR).
#### Interrupt vectors
Interrupt vectors are **addresses that inform the interrupt handler as to where to find the ISR**.

| Vector Number | Interrupt Number | Description | Vector Address |
| --- | --- | --- | --- |

This is an arbitrary IVT that depicts the general pattern; for the detailed IVT of the STM32, please refer to the reference manual.
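As a minimal illustration of what the ISR side looks like in code (assuming the standard CMSIS startup file, whose vector table points the TIM2 entry at a weak default handler with this exact name):

```c
#include "stm32f10x.h"

volatile uint32_t tick;          /* shared with main(), hence volatile                    */

/* Defining a function with the name used in the vector table overrides the weak default. */
void TIM2_IRQHandler(void)
{
    if (TIM2->SR & TIM_SR_UIF) {     /* update (overflow) event from TIM2?                */
        TIM2->SR &= ~TIM_SR_UIF;     /* clear the flag so the ISR is not re-entered       */
        tick++;
    }
}
```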
Let's split the timer peripheral into 3 parts:
#### Blue part: Master/slave controller
The master/slave unit provides the time-base unit with the **counting clock signal** (for example the CK_PSC signal; PSC here means it is for the prescaler in the time-base unit), as well as the counting-direction (up/down) control signal. This unit mainly provides the **control signals** for the time-base unit.
#### Yellow part: Time-base unit
The main block of the programmable timer is a **16-bit** counter with its related auto-reload register. The counter can count up, down, or both up and down. The counter clock can be divided by a **prescaler** (which is basically another counter).
> In the reference manual there are many waveforms that show how these control registers take effect.

The reset frequency of the counter is
$$\frac{f_{\text{input}}}{(\text{Prescaler}+1)\times(\text{Counter Period}+1)}$$
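As a quick worked example (assuming the 72 MHz maximum timer clock of the STM32F103), choosing Prescaler = 7199 and Counter Period = 9999 gives
$$\frac{72\,000\,000}{(7199+1)\times(9999+1)} = \frac{72\,000\,000}{7200\times 10\,000} = 1\ \text{Hz},$$
i.e. one counter reset (update event) per second.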
#### Timer channels
The timer channels are the working elements of the timer.
They are the means by which the timer peripheral interacts with its external environment (through input capture or output compare).
Just refer to RC2_LCD. I think it's not the focus of the final exam.
IC is a peripheral that can monitor input signal changes (positive/negative edges) independently of the processor (core).
OC is a peripheral that can generate precise output signals independently of the processor (core).
In the STM32, it is embedded in the timer peripheral (together with output compare).
You can consider IC as a timer-value recorder. It records the timer value each time the capture condition is met (you can see from the diagram that the timer value comes from the `CNT` counter).
These conditions can be:
- rising edge
- falling edge
- both
With a prescaler, we trigger capture events every few edges.
Similarly for interrupts, we can trigger an interrupt every few captures.
Note:
The IC does not capture the edge immediately when a rising or falling edge happens; the capture event needs to be synchronized with PB_clk.
Furthermore, the module will capture the timer counter value that is valid 2-3 PB_clk cycles after the capture event.
For the detailed configuration, please refer to Reference Manual.pdf on Canvas, pages 349-359 and 382-385.
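A minimal sketch of how the captured value is typically read, assuming TIM3 channel 1 has been configured for input capture and its interrupt enabled in the NVIC (register and bit names are from the CMSIS device header):

```c
#include "stm32f10x.h"

volatile uint16_t period_ticks;          /* measured period, in counter ticks             */

void TIM3_IRQHandler(void)
{
    static uint16_t last;
    if (TIM3->SR & TIM_SR_CC1IF) {       /* a capture event was latched on channel 1      */
        uint16_t now = TIM3->CCR1;       /* reading CCR1 also clears the CC1IF flag       */
        period_ticks = (uint16_t)(now - last);   /* wrap-safe for a 16-bit up-counter     */
        last = now;
    }
}
```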
Just refer to RC2_Output_Campare and RC3_Lab4 for the concepts and PWM configuration.
Also, the solution to HW2 has been uploaded to Canvas; please take a look.
Below is a diagram of the project layout
(the depth camera is at the bottom left)
Below is the architecture diagram of the project
To explain this diagram in more detail:
First, the user wears the HoloLens. The HoloLens renders the interactable objects (object data comes from the object-detection algorithm) on the UI, and the user then gives interaction commands based on this visual information.
The system then plans the robot arm's motion path according to the given command and the information about the interactable objects.
The path information is first sent to the digital twin system. This system contains three twins: the human body (real-time data from the human-detection algorithm), the environment (real-time data from the HoloLens's environment-perception system), and the robot arm (driven by the path information mentioned above).
The DT system then uses the path information to work out whether a collision will occur while the robot arm moves.
Code repository:
https://github.com/Chen-Yulin/Unity-Python-UDP-Communication
The transmitted data is a string:
```python
def SendData(self, strToSend):
    ...  # sends the string over UDP (see the repository for the full implementation)
```
Information to include: the object class, its 3-axis position, 3-axis rotation, and 3-axis size.
Format:
```
{Object Detection}
```
Example:
```
{Object Detection}
```
Information to include: the 6 joint angles (in degrees)
Format:
```
{Current Joint}
```
Information to include: the 6 joint angles (in degrees)
Format:
```
{Target Joint}
```
https://github.com/Siliconifier/Python-Unity-Socket-Communication
First, the RGB-D camera recognizes an object on the desk and feeds it back to the HoloLens interface; the user sees a bounding box and a set of axes representing the pose rendered on the corresponding object:
The user can then move this target box (the detection box on the object still remains) to some position using a hand ray; a UI then appears on the target box asking whether the object should be moved there. If the user confirms, the robot arm moves the corresponding object to the target box's position.
Obstacles can be placed along the path to demonstrate the robot arm's obstacle-detection capability.
When the user gazes at an object, an interface appears asking whether the object the operator is looking at should be moved to their hand; flipping the wrist up counts as confirmation. The robot arm then moves the object to the hand.
Repository: https://github.com/liuyuan-pal/Gen6D
Guide: https://github.com/liuyuan-pal/Gen6D/blob/main/custom_object.md
Step-by-step commands:
```bash
python prepare.py --action video2image --input data/custom/part1/ref.mp4 --output data/custom/part1/images --frame_inter 10 --image_size 960
```
On how to handle inaccurate predictions: https://github.com/liuyuan-pal/Gen6D/issues/29
Unity uses a left-handed coordinate system while most 6D pose algorithms use a right-handed one, so after obtaining [R; t] we need an extra reflection transform about the y axis.
```python
def right_to_left_hand_pose_R(R):
    ...
```
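One common way to write this conversion (an assumption about the exact convention used here, since only the function signature survives above): let $S=\operatorname{diag}(1,-1,1)$ be the reflection that flips the y axis; then
$$R_{\text{left}} = S\,R\,S,\qquad t_{\text{left}} = S\,t,$$
because conjugating the rotation by the reflection maps the right-handed pose to the equivalent left-handed one, while the translation only needs its y component negated.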
You can see that the result is quite good:
State of the art: FoundationPose (https://github.com/NVlabs/FoundationPose)
CASAPose (https://github.com/fraunhoferhhi/casapose?tab=readme-ov-file)
MegaPose (https://github.com/megapose6d/megapose6d)
OVE6D (https://github.com/dingdingcai/OVE6D-pose)