Playing-Card Image Classification with a Convolutional Neural Network on the MAX78000FTHR
This project uses the MAX78000FTHR to implement CNN-based image classification of playing cards. Its main functions are: 1. a camera recognizes the suit and rank of a playing card, as accurately as possible; 2. a TFT LCD displays the result in real time, in both Chinese and English; 3. spoken announcement of the result is planned as a later addition.
Tags: Embedded Systems, MPU
Author: ylml
Updated: 2024-01-11

Project introduction:

I joined the contest to pick up a new skill for the AI era, and to use AI to solve some fun tasks close at hand.

Contest page: https://www.eetree.cn/page/analog-max78000-2

I tried several approaches and several models before this, but was not satisfied with the results. In the end I settled on playing cards: everyone knows them and can play with them, and they provide plenty of classes (54 categories this time, plus an extra "unknown" class to resist interference).

That makes them well suited to sharing and playing with others, to seeing how well AI classification works, and to demonstrating the chip's potential.

There are many reasons why image recognition of playing cards may work well or poorly. The main factors are:

1. Consistency and distinctiveness of card design:
- Playing cards are fairly standardized in size, shape, and the styling of suits and numbers.
- Each card carries unique symbols and numbers that distinguish it from the others: A, K, Q, J, and the numbers 2 through 10.
- The four suits (spades, hearts, clubs, diamonds) differ clearly in color and pattern.
These factors help a recognition algorithm distinguish and identify the various cards.

2. Image quality:
- The resolution, sharpness, and contrast of the card image affect the recognizer's performance; higher-quality images make accurate recognition easier.
- Lighting also has a major impact: shadows, glare, or uneven illumination can interfere with recognition.

3. Viewing angle and distortion:
- If the card is tilted, warped, or partially occluded when the image is captured, recognition may suffer.
- Advanced recognition pipelines may apply a perspective transform to correct the viewing angle.

4. Background:
- A complex or cluttered background can interfere with extracting the card's features.
- A plain or high-contrast background helps the algorithm detect and segment the card in the foreground.

5. Training data quality:
- Image recognition usually relies on machine learning, and performance depends on the diversity and quality of the training set.
- If the training data covers cards under many different conditions, recognition is likely to be better.

6. Algorithm complexity:
- Advanced machine learning and deep learning models such as convolutional neural networks (CNNs) usually deliver higher accuracy.
- The algorithm's generalization and adaptability to new conditions also determine real-world performance.

Ideally, if we could control all of the above factors, image recognition should identify playing cards very well. In practice many factors intervene and actual results vary; improving input image quality, optimizing the algorithm, and using high-quality training data can all significantly boost recognition performance.
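As a concrete illustration of the perspective correction mentioned in point 3, here is a minimal OpenCV sketch (the corner coordinates are hypothetical and would normally come from a contour detector; this is not part of this project's pipeline):

import cv2
import numpy as np

img = cv2.imread('card.jpg')  # hypothetical input photo

# Corners of the tilted card in the photo, ordered top-left, top-right,
# bottom-right, bottom-left (hypothetical values).
src = np.float32([[120, 80], [430, 95], [445, 520], [105, 505]])
# Target: an upright 128x128 image, matching the network's input size.
dst = np.float32([[0, 0], [127, 0], [127, 127], [0, 127]])

M = cv2.getPerspectiveTransform(src, dst)
card = cv2.warpPerspective(img, M, (128, 128))
cv2.imwrite('card_rectified.jpg', card)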


I will try to describe the learning process and the steps clearly, so that others can reproduce the project.


Model selection: after studying several image-classification projects from earlier contests, I tried reproducing them to see how their models performed.

Several reproductions failed, which sent me back to study how the experts had written their model definition files and YAML files.

I ran into many problems along the way, and working through them deepened my understanding of the models.


Buying the LCD and wiring it with DuPont jumpers:

For external components, I bought a TFT LCD.

Connecting the LCD to the dev board with DuPont jumper wires works; it is just not very comfortable to use.

The wiring pin assignments are as follows (I found them by reading the source code of the screen's driver library):

· CLK: P0_7

· MOSI: P0_5

· MISO: P0_6

· CS: P0_11

· DC: P0_8

Of course, the pins on the bottom of the LCD must also be connected to the dev board for it to work:

· 3V3

· GND

· RST

I tested the LCD with the cats-vs-dogs classification demo code, and it works fine, as shown below:

Fiddling with the enclosure: with a case added, it becomes a simple camera.

A large button in the top-right corner of the "camera" box serves as the shutter button.

The battery outputs 3.3 V, which powers the dev board.

The MAX78000FTHR board plugs into the back, with the camera exposed exactly where it should be.


While tinkering, I found an open-source PCB on JLC's OSHWHub here: https://oshwhub.com/woshinidiegun/Max-Expansion-Board

Thanks to the author for open-sourcing it and giving more hobbyists a new toy.

I had the PCB fabricated; once it arrived I dismantled the simple camera, and all later testing used the PCB below.

Following a minimum-viable principle, I simplified the schematic and removed the lithium battery charge/discharge circuit. The circuit connections I used are shown below:

Photo of the assembled board:


Setting up the development environment

Following the official documentation, I installed WSL2 on Windows 10, then set up the environment per the instructions at:

https://github.com/MaximIntegratedAI/ai8x-training

https://github.com/MaximIntegratedAI/ai8x-synthesis

The process went fairly smoothly.

The main commands used are as follows:

-------------------------------------- venv virtual environment usage ----------------------------------
python.exe -m pip install --upgrade pip

1. Create a virtual environment: in the terminal, run the following to create a virtual environment named venv:
python -m venv venv

2. Activate the virtual environment:
On Windows, activate it with:
venv\Scripts\activate.bat

On Linux or macOS, activate it with:
source venv/bin/activate
Once activated, the terminal prompt shows the environment's name, indicating that you are now working inside it.

3. Install third-party packages in the virtual environment: with the environment active, use pip to install what you need, for example:
pip install pytest
Any other required packages can be installed the same way.

4. Exit the virtual environment: when you are done working in it, leave it with:
deactivate
------------------------------------------------------------------------------------------
pip install pyenv
pyenv install 3.8.11
pyenv local 3.8.11 # select which Python version to use
python --version # check the version

$ python -m venv venv --prompt ai8x-training
$ source venv/bin/activate
(ai8x-training) $ pip3 install -U pip wheel setuptools
(ai8x-training) $ pip3 install -r requirements-cu11.txt -i https://mirrors.aliyun.com/pypi/simple/

(ai8x-training) $ git pull
(ai8x-training) $ git submodule update --init
(ai8x-training) $ pip3 install -U pip setuptools
(ai8x-training) $ pip3 install -U -r requirements.txt # or requirements-xxx.txt, as shown above

(ai8x-training) $ deactivate # exit the virtual environment

nvidia-smi # check the CUDA version

ai8x-training$ ./check_cuda.py
System: linux
Python version: 3.8.11 (default, Nov 26 2023, 00:09:03) [GCC 9.4.0]
PyTorch version: 1.8.1+cu111
CUDA acceleration: available in PyTorch
---------------------------------------- WSL2 ----------------------------------------
Install following the instructions at https://github.com/MaximIntegratedAI/ai8x-training
curl -L https://github.com/pyenv/pyenv-installer/raw/master/bin/pyenv-installer | bash

On Ubuntu 20.04 under WSL2, add the following to ~/.bashrc:
# WSL2
export PYENV_ROOT="$HOME/.pyenv"
export PATH="$PYENV_ROOT/bin:$PATH"
eval "$(pyenv init --path)"
eval "$(pyenv virtualenv-init -)"

File sharing between Windows and the WSL2 environment:

After WSL2 is installed, the WSL2 Linux filesystem can be accessed from Windows via the network path: \\wsl$


Print-based debugging of the model to quickly locate problems

In the model definition file, I added a print statement after each layer. The printed shapes show roughly how data is passed from one layer to the next.

For example, my model file is as follows:

"""
playcards classification network for AI85
"""
import torch
import torch.nn as nn

import ai8x

class AI85PlayingCardsNet(nn.Module):
"""
Define CNN model for image classification.
"""
def __init__(self, num_classes=54, num_channels=3, dimensions=(128, 128),
fc_inputs=16, bias=False, **kwargs):
super().__init__()

# AI85 Limits
assert dimensions[0] == dimensions[1] # Only square supported

# Keep track of image dimensions so one constructor works for all image sizes
dim = dimensions[0]

self.conv1 = ai8x.FusedConv2dReLU(num_channels, 16, 3,
padding=1, bias=bias, **kwargs)
# padding 1 -> no change in dimensions -> 16x128x128

pad = 2 if dim == 28 else 1
self.conv2 = ai8x.FusedMaxPoolConv2dReLU(16, 32, 3, pool_size=2, pool_stride=2,
padding=pad, bias=bias, **kwargs)
dim //= 2 # pooling, padding 0 -> 32x64x64
if pad == 2:
dim += 2 # padding 2 -> 32x32x32

self.conv3 = ai8x.FusedMaxPoolConv2dReLU(32, 64, 3, pool_size=2, pool_stride=2, padding=1,
bias=bias, **kwargs)
dim //= 2 # pooling, padding 0 -> 64x32x32

self.conv4 = ai8x.FusedMaxPoolConv2dReLU(64, 32, 3, pool_size=2, pool_stride=2, padding=1,
bias=bias, **kwargs)
dim //= 2 # pooling, padding 0 -> 32x16x16

self.conv5 = ai8x.FusedMaxPoolConv2dReLU(32, 32, 3, pool_size=2, pool_stride=2, padding=1,
bias=bias, **kwargs)
dim //= 2 # pooling, padding 0 -> 32x8x8

self.conv6 = ai8x.FusedConv2dReLU(32, fc_inputs, 3, padding=1, bias=bias, **kwargs)

self.fc = ai8x.Linear(fc_inputs*dim*dim, num_classes, bias=True, wide=True, **kwargs)

for m in self.modules():
if isinstance(m, nn.Conv2d):
nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')

def forward(self, x): # pylint: disable=arguments-differ
"""Forward prop"""
x = self.conv1(x)
print('After conv1 shape:', x.shape)
x = self.conv2(x)
print('After conv2 shape:', x.shape)
x = self.conv3(x)
print('After conv3 shape:', x.shape)
x = self.conv4(x)
print('After conv4 shape:', x.shape)
x = self.conv5(x)
print('After conv5 shape:', x.shape)
x = self.conv6(x)
print('After conv6 shape:', x.shape)
x = x.view(x.size(0), -1)
print('After flatten shape:', x.shape)
x = self.fc(x)
print('After fc shape:', x.shape)

return x

def ai85net_playingcards(pretrained=False, **kwargs):
assert not pretrained
return AI85PlayingCardsNet(**kwargs)

models = [
{
'name': 'ai85net_playingcards',
'min_input': 1,
'dim': 2,
},
]

if __name__ == '__main__':
ai8x.set_device(device=85, simulate=True, round_avg=False)

model = AI85PlayingCardsNet()
input_tensor = torch.randn(54, 3, 128, 128)
output_tensor = model(input_tensor)
print('Output shape:', output_tensor.shape)

Running the model file directly makes it print the tensor size after each layer, which gives a clear picture of the model:

python models/ai85net-playingcards.py

Configuring device: MAX78000, simulate=True.
After conv1 shape: torch.Size([54, 16, 128, 128])
After conv2 shape: torch.Size([54, 32, 64, 64])
After conv3 shape: torch.Size([54, 64, 32, 32])
After conv4 shape: torch.Size([54, 32, 16, 16])
After conv5 shape: torch.Size([54, 32, 8, 8])
After conv6 shape: torch.Size([54, 16, 8, 8])
After flatten shape: torch.Size([54, 1024])
After fc shape: torch.Size([54, 54])
Output shape: torch.Size([54, 54])
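The same idea extends to parameter counts. A small sketch that could be appended to the __main__ block above (it reuses the `model` defined there) to print per-layer and total parameter counts:

# Sketch: per-layer and total parameter counts for the model defined above.
total = 0
for name, p in model.named_parameters():
    print(f'{name}: {tuple(p.shape)} -> {p.numel()} params')
    total += p.numel()
print('Total parameters:', total)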

Several times I took someone else's model, printed it with this method, and found that it worked in their setup but not in mine.

For example, the fruit-classification model works at the original author's 64x64 resolution, as shown below:

(ai8x-training) ml@DESKTOP-UHGOPTO:~/ai8x-training$ python models/ai85net-fruit.py
Configuring device: MAX78000, simulate=True.
Input shape: torch.Size([54, 3, 64, 64])
After conv1 shape: torch.Size([54, 16, 32, 32])
After conv2 shape: torch.Size([54, 16, 32, 32])
After conv3 shape: torch.Size([54, 20, 32, 32])
After resid1 shape: torch.Size([54, 20, 32, 32])
After conv4 shape: torch.Size([54, 20, 32, 32])
After conv5 shape: torch.Size([54, 20, 32, 32])
After conv6 shape: torch.Size([54, 20, 16, 16])
After resid2 shape: torch.Size([54, 20, 16, 16])
After conv7 shape: torch.Size([54, 44, 16, 16])
After conv8 shape: torch.Size([54, 44, 16, 16])
After conv9 shape: torch.Size([54, 48, 8, 8])
After resid3 shape: torch.Size([54, 48, 8, 8])
After conv10 shape: torch.Size([54, 32, 2, 2])
After flatten shape: torch.Size([54, 128])
After fc shape: torch.Size([54, 54])
Output shape: torch.Size([54, 54])


For playing-card recognition, given the chip's memory size, I wanted the largest resolution the chip supports, 128x128. After that change, running the file produced the error below.

With the error in hand, I had a concrete direction for turning an unusable model into a usable one,

and gained another quick, effective way to test whether a model will work:

(ai8x-training) ml@DESKTOP-UHGOPTO:~/ai8x-training$ python models/ai85net-fruit.py
Configuring device: MAX78000, simulate=True.
Input shape: torch.Size([54, 3, 128, 128])
After conv1 shape: torch.Size([54, 16, 64, 64])
After conv2 shape: torch.Size([54, 16, 64, 64])
After conv3 shape: torch.Size([54, 20, 64, 64])
After resid1 shape: torch.Size([54, 20, 64, 64])
After conv4 shape: torch.Size([54, 20, 64, 64])
After conv5 shape: torch.Size([54, 20, 64, 64])
After conv6 shape: torch.Size([54, 20, 32, 32])
After resid2 shape: torch.Size([54, 20, 32, 32])
After conv7 shape: torch.Size([54, 44, 32, 32])
After conv8 shape: torch.Size([54, 44, 32, 32])
After conv9 shape: torch.Size([54, 48, 16, 16])
After resid3 shape: torch.Size([54, 48, 16, 16])
After conv10 shape: torch.Size([54, 32, 6, 6])
After flatten shape: torch.Size([54, 1152])
Traceback (most recent call last):
File "models/ai85net-fruit.py", line 90, in <module>
output_tensor = model(input_tensor)
File "/home/ml/ai8x-training/venv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "models/ai85net-fruit.py", line 64, in forward
x = self.fc(x)
File "/home/ml/ai8x-training/venv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/ml/ai8x-training/models/ai8x.py", line 450, in forward
x = self.op(x)
File "/home/ml/ai8x-training/venv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/ml/ai8x-training/venv/lib/python3.8/site-packages/torch/nn/modules/linear.py", line 94, in forward
return F.linear(input, self.weight, self.bias)
File "/home/ml/ai8x-training/venv/lib/python3.8/site-packages/torch/nn/functional.py", line 1753, in linear
return torch._C._nn.linear(input, weight, bias)
RuntimeError: mat1 and mat2 shapes cannot be multiplied (54x1152 and 128x54)

With the error in view, looking back at the chip's specifications turned out to be worthwhile after all.

My personal reading of the error above: due to the chip's resource limits, the number of inputs to the fully connected layer cannot exceed 1024.

So we need to design a leaner network structure, one that satisfies both the chip's constraints and our training requirements.
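The arithmetic behind the error message can be reproduced directly from the printed shapes (a sketch based on the fruit model's output above):

# At 128x128 input, conv10 outputs 32 channels of 6x6, so flattening yields
# 32*6*6 = 1152 values, while the fc layer was sized for the 64x64 case
# (32*2*2 = 128 inputs) -- hence "(54x1152 and 128x54)" cannot be multiplied.
flattened = 32 * 6 * 6   # 1152 at 128x128 input
fc_inputs = 32 * 2 * 2   # 128, what the fc layer was built for at 64x64
print(flattened, fc_inputs)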


Later I also looked at the "AI Pokédex" project; its resolution is likewise 128x128, which suits playing-card recognition, but it turned out to have pitfalls of its own.

Along the way I accumulated a lot of interesting experience.

In the end I took the manufacturer's classic cats-vs-dogs classification model as the reference, and finally trained playing-card recognition to a satisfying level.

The design of each layer of the convolutional neural network is as follows:


Training and porting steps

After this period of study, I believe bringing CNN-based image classification into an embedded product takes the following steps:

(0) Collect a usable dataset.

(1) Train.

(2) Quantize.

(3) Generate a sample test file (sample_playingcards.npy).

(4) After quantization, evaluate the quantized model to check that it is still usable.

(5) Convert the quantized model into embedded C CNN code.

The relationship between the programs used for training and quantization is as follows:


(0) Collect a suitable playing-card dataset.

The main reference datasets I found:

https://github.com/lordloh/playing-cards

https://www.kaggle.com/datasets/gpiosenka/cards-image-datasetclassification


While merging the two card datasets, I also added more than 3,000 new card photos taken with my phone.

Phone cameras offer a square photo mode; when shooting, I tried to capture the whole card.

For every suit and rank, carefully cropped card photos with a variety of backgrounds were merged into the original data, to improve resistance to interference; see the merge sketch below.
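A minimal sketch of the merge step (the paths are hypothetical; it assumes each source is already organized as one folder per class):

import shutil
from pathlib import Path

SOURCES = [Path('playing-cards'), Path('kaggle-cards'), Path('phone-photos')]  # hypothetical
MERGED = Path('puke_merged')

for src in SOURCES:
    for class_dir in src.iterdir():
        if not class_dir.is_dir():
            continue
        dest = MERGED / class_dir.name
        dest.mkdir(parents=True, exist_ok=True)
        for img in class_dir.glob('*.jpg'):
            # Prefix with the source name to avoid filename collisions.
            shutil.copy(img, dest / f'{src.name}_{img.name}')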

(1) Prepare the dataset and the model, then train.

The command is as follows:

python train.py --epochs 500  --optimizer Adam --lr 0.001 --wd 0.001 --batch-size 200 --gpus 0 --deterministic --compress policies/schedule-playingcards.yaml --model ai85net_playingcards --dataset puke --param-hist --pr-curves --embedding --device MAX78000 

WSL's Linux crashed a few times in the middle; fortunately, training can resume from a checkpoint.

That way each run continues from where the previous one left off, which saves a lot of time. The only difference is one extra argument: "--resume-from ../ai8x-synthesis/trained/playingcard_qat_best.pth.tar".

The command to continue training:

python train.py --epochs 800  --optimizer Adam --lr 0.001 --wd 0.001 --batch-size 210 --gpus 0 --deterministic --compress policies/schedule-playingcards.yaml --model ai85net_playingcards --dataset puke  --resume-from  ../ai8x-synthesis/trained/playingcard_qat_best.pth.tar --param-hist --pr-curves --embedding --device MAX78000

It crashed again.

python train.py --epochs 900  --optimizer Adam --lr 0.001 --wd 0.001 --batch-size 200 --gpus 0 --deterministic --compress policies/schedule-playingcards.yaml --model ai85net_playingcards --dataset puke  --resume-from  logs/2023.12.31-141013/qat_best.pth.tar --param-hist --pr-curves --embedding --device MAX78000

Persistence pays: training finished after 1100 epochs in total. On my RTX 3050, the whole run took roughly two days and two nights.

python train.py --epochs 1100  --optimizer Adam --lr 0.001 --wd 0.001 --batch-size 200 --gpus 0 --deterministic --compress policies/schedule-playingcards.yaml --model ai85net_playingcards --dataset puke  --resume-from  logs/2024.01.04-210257/qat_best.pth.tar --param-hist --pr-curves --embedding --device MAX78000

A new discovery: the --batch-size 200 argument is very useful. My RTX 3050 has 4 GB of VRAM, and with the default batch size the run crashed immediately.

I found that a batch size of 200 or 210 just fits the VRAM. This discovery saved me a lot of time; setting up another environment in the cloud would never be as quick and convenient as training locally.
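Before settling on a batch size, it can help to check the available VRAM first (a sketch using PyTorch's CUDA API):

import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(f'{props.name}: {props.total_memory / 1024**3:.1f} GB VRAM')
    # After a trial forward/backward pass, current usage can be inspected too:
    print(f'allocated: {torch.cuda.memory_allocated(0) / 1024**3:.2f} GB')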


With 54 classes, the number of training images per suit-and-rank is actually quite small, only about 120 to 180 each; the dataset contains 7,063 images in total.

The results after completing all 1100 epochs of training:

Training epoch: 10928 samples (200 per mini-batch)
Epoch: [1099][ 10/ 55] Overall Loss 0.459331 Objective Loss 0.459331 LR 0.000250 Time 2.016463
Epoch: [1099][ 20/ 55] Overall Loss 0.445955 Objective Loss 0.445955 LR 0.000250 Time 1.775931
Epoch: [1099][ 30/ 55] Overall Loss 0.439101 Objective Loss 0.439101 LR 0.000250 Time 1.886492
Epoch: [1099][ 40/ 55] Overall Loss 0.437805 Objective Loss 0.437805 LR 0.000250 Time 1.784566
Epoch: [1099][ 50/ 55] Overall Loss 0.440521 Objective Loss 0.440521 LR 0.000250 Time 1.832200
Epoch: [1099][ 55/ 55] Overall Loss 0.445095 Objective Loss 0.445095 Top1 86.280488 Top5 97.865854 LR 0.000250 Time 1.756030
--- validate (epoch=1099)-----------
1214 samples (200 per mini-batch)
Epoch: [1099][ 7/ 7] Loss 0.868613 Top1 74.299835 Top5 91.598023
==> Top1: 74.300 Top5: 91.598 Loss: 0.869

==> Best [Top1: 77.842 Top5: 92.339 Sparsity:0.00 Params: 111024 on epoch: 1084]
Saving checkpoint to: logs/2024.01.05-232502/qat_checkpoint.pth.tar
--- test ---------------------
549 samples (200 per mini-batch)
Test: [ 3/ 3] Loss 0.518195 Top1 84.335155 Top5 95.264117
==> Top1: 84.335 Top5: 95.264 Loss: 0.518

Copy ai8x-training/logs/2024.01.05-232502/qat_best.pth.tar to ai8x-synthesis/trained/playingcards5_qat_best.pth.tar.

As you can see, the one-shot (Top1) recognition rate is 84.335% and Top5 is 95.264%. That is already quite good, enough for casual play and for demos in many settings.


(2) Quantize the model.

(ai8x-synthesis) ml@DESKTOP-UHGOPTO:~/ai8x-synthesis$ python quantize.py trained/playingcards5_qat_best.pth.tar trained/playingcards5_qat_best_q8.pth.tar --device MAX78000 -v -c networks/playingcards.yaml "$@"
Configuring device: MAX78000
Reading networks/playingcards.yaml to configure network...
Converting checkpoint file trained/playingcards5_qat_best.pth.tar to trained/playingcards5_qat_best_q8.pth.tar

Model keys (state_dict):
conv1.output_shift, conv1.weight_bits, conv1.bias_bits, conv1.quantize_activation, conv1.adjust_output_shift, conv1.shift_quantile, conv1.op.weight, conv2.output_shift, conv2.weight_bits, conv2.bias_bits, conv2.quantize_activation, conv2.adjust_output_shift, conv2.shift_quantile, conv2.op.weight, conv3.output_shift, conv3.weight_bits, conv3.bias_bits, conv3.quantize_activation, conv3.adjust_output_shift, conv3.shift_quantile, conv3.op.weight, conv4.output_shift, conv4.weight_bits, conv4.bias_bits, conv4.quantize_activation, conv4.adjust_output_shift, conv4.shift_quantile, conv4.op.weight, conv5.output_shift, conv5.weight_bits, conv5.bias_bits, conv5.quantize_activation, conv5.adjust_output_shift, conv5.shift_quantile, conv5.op.weight, conv6.output_shift, conv6.weight_bits, conv6.bias_bits, conv6.quantize_activation, conv6.adjust_output_shift, conv6.shift_quantile, conv6.op.weight, fc.output_shift, fc.weight_bits, fc.bias_bits, fc.quantize_activation, fc.adjust_output_shift, fc.shift_quantile, fc.op.weight, fc.op.bias
conv1.op.weight avg_max: 0.3080426 max: 0.50178313 mean: -0.0016146562 factor: [128.] bits: 8
conv2.op.weight avg_max: 0.21638438 max: 0.37782183 mean: -0.004252241 factor: [256.] bits: 8
conv3.op.weight avg_max: 0.10264358 max: 0.18610927 mean: -0.00275823 factor: [512.] bits: 8
conv4.op.weight avg_max: 0.18799132 max: 0.52683735 mean: -0.0013671168 factor: [128.] bits: 8
conv5.op.weight avg_max: 0.2335556 max: 0.3548111 mean: 0.0019469503 factor: [256.] bits: 8
conv6.op.weight avg_max: 0.31261 max: 0.40786323 mean: 0.004781828 factor: [256.] bits: 8
fc.op.weight avg_max: 0.11929286 max: 0.19588341 mean: -0.00024378698 factor: [512.] bits: 8
fc.op.bias avg_max: 0.00069900794 max: 0.10763672 mean: 0.00069900794 factor: [512.] bits: 8
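The factor values above are the scales used during 8-bit quantization: conceptually, each weight tensor is multiplied by its factor, rounded, and clamped to the int8 range (a simplified sketch of the idea, not the tool's exact implementation):

import numpy as np

def quantize_q8(w, factor):
    # Scale float weights by `factor`, round, and clamp to the int8 range.
    return np.clip(np.round(w * factor), -128, 127).astype(np.int8)

w = np.array([0.3080, -0.5018, -0.0016])   # values like conv1's stats above
print(quantize_q8(w, 128.0))               # -> [ 39 -64   0]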


(3) Generate the sample test file (sample_playingcards.npy)

The command used:

(ai8x-training) ml@DESKTOP-UHGOPTO:~/ai8x-training$ python train.py --batch-size 200 --optimizer Adam --lr 0.0001 --model ai85net_playingcards --save-sample 100 --dataset puke --evaluate --exp-load-weights-from ../ai8x-synthesis/trained/playingcards2_qat_best_q8.pth.tar -8 --device MAX78000 --use-bias "$@"
Warning: the open-file limit is 1024. Please raise the limit (see the documentation).
Configuring device: MAX78000, simulate=True.
Log file for this run: /home/ml/ai8x-training/logs/2023.12.31-212714/2023.12.31-212714.log
{'start_epoch': 10, 'weight_bits': 8}
Model `../ai8x-synthesis/trained/playingcards2_qat_best_q8.pth.tar` is old. Missing parameters added with default values!
=> loading checkpoint ../ai8x-synthesis/trained/playingcards2_qat_best_q8.pth.tar
=> Checkpoint contents:
+----------------------+-------------+----------------------+
| Key | Type | Value |
|----------------------+-------------+----------------------|
| arch | str | ai85net_playingcards |
| compression_sched | dict | |
| epoch | int | 401 |
| extras | dict | |
| optimizer_state_dict | dict | |
| optimizer_type | type | Adam |
| state_dict | OrderedDict | |
+----------------------+-------------+----------------------+

=> Checkpoint['extras'] contents:
+-----------------+--------+-------------------+
| Key | Type | Value |
|-----------------+--------+-------------------|
| best_epoch | int | 401 |
| best_mAP | int | 0 |
| best_top1 | float | 82.90816326530613 |
| clipping_method | str | MAX_BIT_SHIFT |
| current_mAP | int | 0 |
| current_top1 | float | 82.90816326530613 |
+-----------------+--------+-------------------+

Loaded compression schedule from checkpoint (epoch 401)
=> loaded 'state_dict' from checkpoint '../ai8x-synthesis/trained/playingcards2_qat_best_q8.pth.tar'
Optimizer Type: <class 'torch.optim.adam.Adam'>
Optimizer Args: {'lr': 0.0001, 'betas': (0.9, 0.999), 'eps': 1e-08, 'weight_decay': 0.0001, 'amsgrad': False}
Dataset sizes:
test=318
--- test ---------------------
318 samples (200 per mini-batch)
==> Saving sample at index 100 to sample_puke.npy
Test: [ 2/ 2] Loss 0.532525 Top1 89.937107 Top5 97.169811
==> Top1: 89.937 Top5: 97.170 Loss: 0.533

After the run, copy the file sample_playingcards.npy from this directory to ai8x-synthesis\tests; it will be used when generating the embedded CNN code.

(4) After quantization, evaluate the quantized result

to check whether the quantized model is still usable.

Script path: ai8x-training\scripts\evaluate_playingcards.sh

(ai8x-training) ml@DESKTOP-UHGOPTO:~/ai8x-training$ python train.py --model ai85net_playingcards --dataset puke --confusion --evaluate --exp-load-weights-from ../ai8x-synthesis/trained/playingcards5_qat_best_q8.pth.tar -8 --device MAX78000 --use-bias "$@"
Warning: the open-file limit is 1024. Please raise the limit (see the documentation).
Configuring device: MAX78000, simulate=True.
Log file for this run: /home/ml/ai8x-training/logs/2024.01.06-073135/2024.01.06-073135.log
{'start_epoch': 10, 'weight_bits': 8}
Model `../ai8x-synthesis/trained/playingcards5_qat_best_q8.pth.tar` is old. Missing parameters added with default values!
=> loading checkpoint ../ai8x-synthesis/trained/playingcards5_qat_best_q8.pth.tar
=> Checkpoint contents:
+----------------------+-------------+----------------------+
| Key | Type | Value |
|----------------------+-------------+----------------------|
| arch | str | ai85net_playingcards |
| compression_sched | dict | |
| epoch | int | 1084 |
| extras | dict | |
| optimizer_state_dict | dict | |
| optimizer_type | type | Adam |
| state_dict | OrderedDict | |
+----------------------+-------------+----------------------+

=> Checkpoint['extras'] contents:
+-----------------+--------+-------------------+
| Key | Type | Value |
|-----------------+--------+-------------------|
| best_epoch | int | 1084 |
| best_mAP | int | 0 |
| best_top1 | float | 77.84184514003294 |
| clipping_method | str | MAX_BIT_SHIFT |
| current_mAP | int | 0 |
| current_top1 | float | 77.84184514003294 |
+-----------------+--------+-------------------+

Loaded compression schedule from checkpoint (epoch 1084)
=> loaded 'state_dict' from checkpoint '../ai8x-synthesis/trained/playingcards5_qat_best_q8.pth.tar'
Optimizer Type: <class 'torch.optim.sgd.SGD'>
Optimizer Args: {'lr': 0.1, 'momentum': 0.9, 'dampening': 0, 'weight_decay': 0.0001, 'nesterov': False}
Dataset sizes:
test=549
--- test ---------------------
549 samples (256 per mini-batch)
Test: [ 3/ 3] Loss 0.772329 Top1 85.245902 Top5 95.992714
==> Top1: 85.246 Top5: 95.993 Loss: 0.772

==> Confusion:
[[ 8 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0]
[ 0 10 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
[ 0 0 10 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 10 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 8 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 9 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 9 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 7 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 6 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0]
[ 0 1 0 0 0 0 0 0 0 8 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 9 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 1 0 0 0 0 0 0 0 8 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0 10 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
[ 0 1 0 0 0 0 0 0 0 0 0 0 0 9 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 1 0 0 0 8 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 8 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 8 1 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 9 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 10 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 6 0 1 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 12 0 0 0 0 1 0 0 0 3 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 9 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
[ 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 8 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 9 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 9 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 8 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 8 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 7 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 10 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 8 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 1]
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 10 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 10 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 1 0 0 1 0 0 0 0 0 0 0 7 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 9 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 10 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 9 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 10 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 1 0 0 0 0 0 0 0 8 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 9 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 8 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 9 0 0 0 0 0 0 0 0 0 0 0 1 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 10 0 0 0 0 0 0 0 0 0 0 0 0]
[ 0 0 1 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 7 0 0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 10 0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 9 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 7 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 7 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 8 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 9 1 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 10 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 10 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 8 0 0]
[ 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2 0 0 6 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 10]]
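Reading the matrix: rows are the true classes and columns the predictions, so per-class accuracy is the diagonal entry divided by its row sum (a quick sketch, shown on a small 3-class example rather than the full 54x54 matrix above):

import numpy as np

# A 3-class excerpt for illustration; `cm` would be the full 54x54 matrix.
cm = np.array([[ 8,  1, 1],
               [ 0, 10, 0],
               [ 2,  0, 8]])
per_class = cm.diagonal() / cm.sum(axis=1)   # accuracy per true class
overall = cm.diagonal().sum() / cm.sum()     # overall accuracy
print(per_class, overall)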

A few classes looked off on the test set. On inspection, some of the test images had been chosen poorly (several were barely legible), so I replaced them with more reasonable ones.

As the numbers show, over the 1100 epochs of training the best one-shot recognition rate was reached at epoch 1084: Top1 85.246%, Top5 95.993%. That accuracy is already high, plenty for having fun with and for demos in most settings.



(5) Convert the quantized model into embedded C CNN code

Copy the file ai8x-training/puke.npy to ai8x-synthesis/tests/sample_playingcard5.npy.

Then run the following command to do the conversion.

ml@DESKTOP-UHGOPTO:~$ cd ai8x-synthesis/
ml@DESKTOP-UHGOPTO:~/ai8x-synthesis$ source venv/bin/activate
(ai8x-synthesis) ml@DESKTOP-UHGOPTO:~/ai8x-synthesis$ python ai8xize.py --test-dir "sdk/Examples/$DEVICE/CNN" --prefix playingcard --checkpoint-file trained/playingcards2_qat_best_q8.pth.tar --config-file networks/playingcards.yaml --fifo --softmax --device MAX78000 --timer 0 --display-checkpoint --verbose --compact-data --mexpress --sample-input 'tests/sample_playingcards2.npy' --boost 2.5 "$@"

Configuring device: MAX78000
Reading networks/playingcards.yaml to configure network...
WARNING: Cannot run "yamllint" linter to check networks/playingcards.yaml
Reading trained/playingcards2_qat_best_q8.pth.tar to configure network weights...
Checkpoint for epoch 401, model ai85net_playingcards - weight and bias data:
InCh OutCh Weights Quant Shift Min Max Size Key Bias Quant Min Max Size Key
3 16 (48, 3, 3) 8 -1 -89 112 432 conv1.op.weight (16,) 8 -1 0 16 conv1.op.bias
16 32 (512, 3, 3) 8 -1 -78 94 4608 conv2.op.weight (32,) 8 -1 0 32 conv2.op.bias
32 64 (2048, 3, 3) 8 -2 -81 96 18432 conv3.op.weight (64,) 8 -1 0 64 conv3.op.bias
64 32 (2048, 3, 3) 8 -1 -59 102 18432 conv4.op.weight (32,) 8 -1 0 32 conv4.op.bias
32 32 (1024, 3, 3) 8 -1 -69 85 9216 conv5.op.weight (32,) 8 -1 0 32 conv5.op.bias
32 16 (512, 3, 3) 8 -1 -103 91 4608 conv6.op.weight (16,) 8 -1 0 16 conv6.op.bias
1024 54 (1, 54, 1024) 8 -2 -90 109 55296 fc.op.weight (54,) 8 -29 37 54 fc.op.bias
TOTAL: 7 parameter layers, 111,270 parameters, 111,270 bytes
Configuring data set: puke.
playingcard...
Arranging weights... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100%
Storing weights... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100%
WARNING: Layer 0: Combining streaming and a bias. THIS COMBINATION MIGHT NOT FUNCTION CORRECTLY!!!
WARNING: Layer 1: Combining streaming and a bias. THIS COMBINATION MIGHT NOT FUNCTION CORRECTLY!!!
Creating network... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100%


(6) Port the CNN code into the embedded demo project


Adding support for printing Chinese:

This came about while demonstrating to the kids at home; one of them said he could not quite read the English, so I added Chinese output. See the project code in the attachment for how it is called.

In the TFT LCD driver file "MaximSDK\Libraries\MiscDrivers\Display\tft_ili9341.c" I added a Chinese printing routine. The main source code is:

// Newly added function: draw a single 16x16 Chinese character at (x, y).
// `str` points to a 32-byte glyph bitmap: 2 bytes per row, 16 rows,
// LSB first within each byte (1 = foreground, 0 = background).
void tft_Print_CN_char(unsigned int x, unsigned int y, char *str)
{
    unsigned int column;
    unsigned char tm = 0, temp = 0, xxx = 0;

    write_command(0x2a);  // Column address set
    write_data(x >> 8);   // start column
    write_data(x);
    x = x + 15;
    write_data(x >> 8);   // end column
    write_data(x);

    write_command(0x2b);  // Row address set
    write_data(y >> 8);   // start row
    write_data(y);
    y = y + 15;
    write_data(y >> 8);   // end row
    write_data(y);
    write_command(0x2C);  // Memory write

    for (column = 0; column < 32; column++)  // 32 bytes = 16x16 pixels
    {
        temp = str[xxx];
        for (tm = 0; tm < 8; tm++)
        {
            if (temp & 0x01)
            {
                write_data(g_foreground_color >> 8);
                write_data(g_foreground_color);
            }
            else
            {
                write_data(g_background_color >> 8);
                write_data(g_background_color);
            }
            temp >>= 1;
        }
        xxx++;
    }
}
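The function consumes a 32-byte glyph bitmap: two bytes per row, sixteen rows, LSB first within each byte. As a sketch of how such glyph data could be generated on the PC side (it assumes Pillow and a Chinese TTF font such as 'simhei.ttf' are available; the byte ordering matches the rendering loop above, and this tool is not part of the original project):

from PIL import Image, ImageDraw, ImageFont   # pip install pillow

def glyph_bytes(ch, font_path='simhei.ttf'):  # font path is hypothetical
    """Render one character into the 32-byte 16x16 LSB-first bitmap
    format expected by tft_Print_CN_char()."""
    font = ImageFont.truetype(font_path, 16)
    img = Image.new('1', (16, 16), 0)
    ImageDraw.Draw(img).text((0, 0), ch, fill=1, font=font)
    out = []
    for y in range(16):
        for half in range(2):                 # two bytes per 16-pixel row
            b = 0
            for bit in range(8):
                if img.getpixel((half * 8 + bit, y)):
                    b |= 1 << bit             # LSB = leftmost pixel
            out.append(b)
    return bytes(out)

print(', '.join(f'0x{b:02X}' for b in glyph_bytes('梅')))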


After 1100 epochs of training, the recognition rates are: Top1: 85.246, Top5: 95.993, Loss: 0.714.

Actual use confirms the recognition rate is acceptable; common playing cards are currently recognized quite reliably.

However, the model was never specifically trained on damaged or partial cards, so a photo of half a card cannot be recognized for now.

Whole cards are recognized well; even photographed from the side, with mild distortion of the card, the suit and rank are still identified correctly.


A brief flow of how the project works:

Test results and display

Recognizing the eight of diamonds, result and display:

Testing recognition of a queen of clubs:


Shortcomings:

The training set contained relatively few tilted card images; once a card is rotated beyond a certain angle, recognition is noticeably less accurate.

Training for a real commercial product would need many more anti-interference measures.

Pointed at nothing in particular, the system still classifies the scene into one of the 54 card classes, though with low confidence (below 50%).

One such anti-interference measure: when the confidence is below 50%, report the result as "unrecognized".

Since this project targets demos and learning, that rule was not added; a sketch of it follows.
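If you did want the rule, a minimal sketch of the idea (the names are illustrative; on the device the check would sit in the C demo code right after the softmax step):

CONF_THRESHOLD = 0.5

def classify(probs, class_names):
    """Map the softmax output over the 54 classes to a label,
    or to 'unrecognized' when the top confidence is too low."""
    best = max(range(len(probs)), key=lambda i: probs[i])
    if probs[best] < CONF_THRESHOLD:
        return 'unrecognized'
    return class_names[best]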

Taking part in this activity taught me a lot of useful lessons.

Attachment download
playingcards发布.zip
Team introduction
A solo hobbyist and veteran embedded systems engineer; I build fun things in my spare time out of personal interest. I have done 4G development, built industrial PLC/DCS products, and have spent 8-9 years on IoT security research. I also play with robots, drones, smart cars, and the like. Mildly obsessive about details.
Team members
ylml