I attended the RT-Thread Developer Conference, and it was a lot of fun. Running around all day was tiring, but after sleeping half a day I did a quick edit of the footage from the 22nd, which left me time to write up my own notes from the event.
Unlike the openEuler community, the RT-Thread community focuses squarely on embedded systems and works closely with hardware vendors. I came here through openEuler's Embedded SIG; openEuler's Embedded distribution is mainly Yocto-based, quite different from its server-side releases. Obviously openEuler cannot run on ARM Cortex-M class chips, and that is where an RTOS (real-time operating system) is clearly the better fit. As one of the conference speakers put it, you can think of RT-Thread as filling the gap between large operating systems and the hardware underneath.
We were among the first batch to arrive. After riding a driverless shuttle to the venue, we wandered around collecting dev boards while company after company was still setting up its booth, checking out demos and other interesting things: boards, systems, samples. Everyone keeps talking about how heavy the layoff pressure is this year, but the products on display all looked pretty good.
The morning was the opening session (I dozed off for a bit). The host, RT-Thread, talked about how the community has grown: more people, more software packages that are also more polished, with development now entering a slower, steadier phase. Infineon introduced its PSoC MCUs, and Renesas gave several industry application examples; what pleasantly surprised me was how many solutions they showed for the new-energy sector. That gave me plenty of ideas for my course project: if my own work could solve a real industry problem, that seems like a direction worth pursuing.
In the afternoon I sat in on the edge-computing breakout session, with talks from Renesas and NXP.
One talk walked through an experiment on migrating and deploying e-AI models onto MCUs, stressing that the boundary between MPUs and MCUs is blurring. The demos included:

- Check MOTOR, a motor condition-monitoring demo
- a platform for benchmarking on-chip compute
- HVAC fan monitoring
- vision detection at about 13 fps on the RA8 MCU
The other talk covered a similar direction (emmm, I didn't catch all of it; a buddy called and dragged me back to the grind):

- machine learning
- controllers
- products
Oh, and there was also a company doing in-vehicle AI vision, a technology partner of 大宋汽车: Black Sesame Technologies (黑芝麻).
The hands-on session was fairly simple, but quite fun.

The RA8 MCU development board launched at the event

We used the OpenMV IDE; the interface is pretty simple (compared with PR, anyway), and it was fun to play with. The demos are below.
That's it for the blog. Bye, 2023 RT-Thread Developer Conference.
Here's the code used in the demos. The first one is OpenMV's fast linear regression (line-following) example:

```python
# Fast Linear Regression Example
#
# This example shows off how to use the get_regression() method on your OpenMV Cam
# to get the linear regression of a ROI. Using this method you can easily build
# a robot which can track lines which all point in the same general direction
# but are not actually connected. Use find_blobs() on lines that are nicely
# connected for better filtering options and control.
#
# This is called the fast linear regression because we use the least-squares
# method to fit the line. However, this method is NOT GOOD FOR ANY images that
# have a lot (or really any) outlier points which corrupt the line fit...
import sensor
import time
THRESHOLD = (0, 100) # Grayscale threshold for dark things.
BINARY_VISIBLE = True # Binary pass first to see what linear regression is running on.
sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QQVGA)
sensor.skip_frames(time=2000)
clock = time.clock()
while True:
    clock.tick()
    img = sensor.snapshot().binary([THRESHOLD]) if BINARY_VISIBLE else sensor.snapshot()

    # Returns a line object similar to line objects returned by find_lines() and
    # find_line_segments(). You have x1(), y1(), x2(), y2(), length(),
    # theta() (rotation in degrees), rho(), and magnitude().
    #
    # magnitude() represents how well the linear regression worked. It goes from
    # (0, INF] where 0 is returned for a circle. The more linear the
    # scene is the higher the magnitude.
    line = img.get_regression([(255, 255) if BINARY_VISIBLE else THRESHOLD])
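    # Not in the original snippet: the upstream OpenMV example also draws the fitted
    # line onto the frame, which makes the result visible in the IDE frame buffer.
    if line:
        img.draw_line(line.line(), color=127)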
    print(
        "FPS %f, mag = %s" % (clock.fps(), str(line.magnitude()) if (line) else "N/A")
    )

    # About negative rho values:
    #
    # A [theta+0:-rho] tuple is the same as [theta+180:+rho].
```

The second one is OpenMV's automatic RGB565 color-tracking example: it learns LAB color thresholds from a 50x50 box at the center of the frame, then tracks blobs matching that color.

```python
# Automatic RGB565 Color Tracking Example
#
# This example shows off single color automatic RGB565 color tracking using the OpenMV Cam.
import sensor
import time
print("请勿在相机前放置任何物品")
sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time=2000)
sensor.set_auto_gain(False) # must be turned off for color tracking
sensor.set_auto_whitebal(False) # must be turned off for color tracking
clock = time.clock()
# Capture the color thresholds for whatever was in the center of the image.
r = [(320 // 2) - (50 // 2), (240 // 2) - (50 // 2), 50, 50] # 50x50 center of QVGA.
print("Put the object you want to track in the box in front of the camera.")
print("MAKE SURE THE COLOR OF THE OBJECT YOU WANT TO TRACK IS FULLY ENCLOSED BY THE BOX!")
for i in range(60):
    img = sensor.snapshot()
    img.draw_rectangle(r)

print("Learning colors...")
threshold = [50, 50, 0, 0, 0, 0] # Middle L, A, B values.
for i in range(60):
    img = sensor.snapshot()
    hist = img.get_histogram(roi=r)
    lo = hist.get_percentile(0.01)  # Get the CDF of the histogram at the 1% range (ADJUST AS NECESSARY)!
    hi = hist.get_percentile(0.99)  # Get the CDF of the histogram at the 99% range (ADJUST AS NECESSARY)!
    # Average in percentile values.
    threshold[0] = (threshold[0] + lo.l_value()) // 2
    threshold[1] = (threshold[1] + hi.l_value()) // 2
    threshold[2] = (threshold[2] + lo.a_value()) // 2
    threshold[3] = (threshold[3] + hi.a_value()) // 2
    threshold[4] = (threshold[4] + lo.b_value()) // 2
    threshold[5] = (threshold[5] + hi.b_value()) // 2
    for blob in img.find_blobs(
        [threshold], pixels_threshold=100, area_threshold=100, merge=True, margin=10
    ):
        img.draw_rectangle(blob.rect())
        img.draw_cross(blob.cx(), blob.cy())
    img.draw_rectangle(r)
print("Thresholds learned...")
print("Tracking colors...")
while True:
    clock.tick()
    img = sensor.snapshot()
    for blob in img.find_blobs(
        [threshold], pixels_threshold=100, area_threshold=100, merge=True, margin=10
    ):
        img.draw_rectangle(blob.rect())
        img.draw_cross(blob.cx(), blob.cy())
    print(clock.fps())
```

If you're interested in the RA8 MCU dev board, I can put together a dedicated post on it. Let's see whether anyone wants to read that: if this gets more than 10 votes, I'll publish it, hehe.