Seatbelt Misuse Detection: New Euro NCAP Requirements and a Computer Vision Approach

New in Euro NCAP 2026: detection of seatbelt misuse
Misuse types: shoulder belt behind the back, loose lap belt, twisted belt, multiple occupants sharing one belt
Technical approach: visual detection + depth estimation


Euro NCAP Seatbelt Requirements

New detection scenarios

Euro NCAP belt detection scenarios:

├── B-01: Belt not worn
│   └── Detection: shoulder belt not pulled out at all
├── B-02: Shoulder belt behind back
│   └── Detection: shoulder belt routed behind the back instead of across the chest
├── B-03: Shoulder belt under arm
│   └── Detection: shoulder belt routed under the arm
├── B-04: Lap belt loose
│   └── Detection: lap belt not snug against the hips
├── B-05: Twisted belt
│   └── Detection: belt twisted by more than 180°
└── B-06: Multiple occupants, one belt
    └── Detection: one belt routed around several occupants
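This taxonomy maps naturally onto a classifier label set. A minimal sketch (the B-xx IDs follow the tree above; the label strings are our own naming convention, not Euro NCAP terminology):

```python
# Hypothetical mapping from the scenario IDs above to classifier labels.
SCENARIO_TO_LABEL = {
    "B-01": "not_worn",          # belt not fastened at all
    "B-02": "shoulder_behind",   # shoulder belt routed behind the back
    "B-03": "under_arm",         # shoulder belt routed under the arm
    "B-04": "loose_lap",         # lap belt not snug against the hips
    "B-05": "twisted",           # belt twisted by more than 180 degrees
    "B-06": "multiple_users",    # one belt around several occupants
}

def label_for(scenario_id: str) -> str:
    """Return the classifier label for a scenario ID, or 'unknown'."""
    return SCENARIO_TO_LABEL.get(scenario_id, "unknown")
```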

Detection performance requirements

| Scenario | Detection time | Warning level | Accuracy |
|---|---|---|---|
| B-01 Not worn | ≤ 2 s | Level 1 | ≥ 99% |
| B-02 Behind back | ≤ 3 s | Level 1 | ≥ 95% |
| B-03 Under arm | ≤ 3 s | Level 1 | ≥ 95% |
| B-04 Loose lap | ≤ 5 s | Level 2 | ≥ 90% |
| B-05 Twisted | ≤ 5 s | Level 2 | ≥ 90% |
| B-06 Shared belt | ≤ 3 s | Level 1 | ≥ 95% |
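The table's limits can be encoded directly as a compliance check used during validation runs; a small sketch (the dictionary layout and function name are our own):

```python
# Per-scenario detection deadline (seconds), warning level, and minimum
# accuracy, transcribed from the requirements table above.
REQUIREMENTS = {
    "B-01": {"deadline_s": 2.0, "level": 1, "min_accuracy": 0.99},
    "B-02": {"deadline_s": 3.0, "level": 1, "min_accuracy": 0.95},
    "B-03": {"deadline_s": 3.0, "level": 1, "min_accuracy": 0.95},
    "B-04": {"deadline_s": 5.0, "level": 2, "min_accuracy": 0.90},
    "B-05": {"deadline_s": 5.0, "level": 2, "min_accuracy": 0.90},
    "B-06": {"deadline_s": 3.0, "level": 1, "min_accuracy": 0.95},
}

def meets_requirement(scenario: str, detection_time_s: float, accuracy: float) -> bool:
    """True when a measured detection time and accuracy satisfy the table."""
    req = REQUIREMENTS[scenario]
    return detection_time_s <= req["deadline_s"] and accuracy >= req["min_accuracy"]
```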

Detection challenges

Visual challenges

| Challenge | Description | Mitigation |
|---|---|---|
| Occlusion | Arms or clothing hide the belt | Multi-angle cameras |
| Similar colors | Belt blends into the occupant's clothing | IR imaging |
| Depth ambiguity | Judging where the belt sits relative to the body | Depth camera / stereo pair |
| Lighting changes | Day/night cycles, tunnels | Adaptive algorithms |
| Pose diversity | Varied occupant body shapes and postures | Diverse training data |
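For the color-similarity and lighting rows, a common trick is to fall back from the RGB stream to the IR stream when the cabin is too dark. A deliberately crude sketch (the mean-luma threshold is an assumption; a production system would use the camera's exposure metadata instead):

```python
import numpy as np

def pick_frame(rgb: np.ndarray, ir: np.ndarray,
               min_mean_luma: float = 60.0) -> np.ndarray:
    """Return the RGB frame when it is bright enough, else the IR frame.

    Uses a crude mean-brightness estimate over all channels; this is a
    placeholder for a real exposure-driven switching policy.
    """
    luma = float(rgb.mean())
    return rgb if luma >= min_mean_luma else ir
```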

Camera placement

In-cabin camera layout:

┌─────────────────────────────────────────────┐
│  A-pillar cameras                           │
│  (watch the upper belt anchor points)       │
│  ○                                       ○  │
│                                             │
│   ┌─────────────────────────────────┐       │
│   │  Overhead console camera        │       │
│   │               ○                 │       │
│   │  (watches front-row belts)      │       │
│   └─────────────────────────────────┘       │
│                                             │
│      Driver                  Passenger      │
│        👤                       👤          │
│     ═══════                  ═══════        │
│     ║ belt ║                 ║ belt ║       │
│     ═══════                  ═══════        │
│                                             │
│   ┌─────────────────────────────────┐       │
│   │  Rear-row center camera         │       │
│   │               ○                 │       │
│   │  (watches rear-row belts)       │       │
│   └─────────────────────────────────┘       │
│                                             │
└─────────────────────────────────────────────┘
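One way to wire the layout above into software is a coverage map from camera to the seats it is responsible for; the camera names and responsibilities below are illustrative placeholders, not a vendor specification:

```python
# Hypothetical camera-to-seat coverage for the layout sketched above.
CAMERA_COVERAGE = {
    "a_pillar_left":  {"seats": ["driver"], "watches": "upper anchor"},
    "a_pillar_right": {"seats": ["front_passenger"], "watches": "upper anchor"},
    "overhead_front": {"seats": ["driver", "front_passenger"], "watches": "belt path"},
    "rear_center":    {"seats": ["rear_left", "rear_center", "rear_right"],
                       "watches": "belt path"},
}

def cameras_for_seat(seat: str) -> list:
    """All cameras that cover a given seat, in stable order."""
    return sorted(name for name, cfg in CAMERA_COVERAGE.items()
                  if seat in cfg["seats"])
```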

Core implementation

1. Belt detection model

"""
安全带错误佩戴检测模型
基于YOLOv8的关键点检测 + 状态分类
"""

import torch
import torch.nn as nn
import torch.nn.functional as F
from typing import Dict, List, Tuple, Optional
import numpy as np


class BeltKeypointDetector(nn.Module):
"""
安全带关键点检测器

检测安全带的路径关键点,用于判断佩戴状态
"""

# 关键点定义
KEYPOINTS = [
"shoulder_anchor", # 肩部固定点
"shoulder_point", # 肩带与肩部交叉点
"chest_point", # 胸前交叉点
"buckle_point", # 卡扣点
"hip_left", # 左髋部点
"hip_right" # 右髋部点
]

def __init__(
self,
backbone: str = "mobilenetv3",
num_keypoints: int = 6
):
super().__init__()

self.num_keypoints = num_keypoints

# 骨干网络
if backbone == "mobilenetv3":
from torchvision.models import mobilenet_v3_small
base = mobilenet_v3_small(pretrained=True)
self.backbone = nn.Sequential(*list(base.children())[:-1])
feature_dim = 576
else:
raise ValueError(f"Unknown backbone: {backbone}")

# 关键点检测头
self.keypoint_head = nn.Sequential(
nn.Conv2d(feature_dim, 256, 3, 1, 1),
nn.BatchNorm2d(256),
nn.ReLU(inplace=True),
nn.Conv2d(256, num_keypoints * 3, 1) # x, y, visibility
)

# 安全带分割头
self.segment_head = nn.Sequential(
nn.Conv2d(feature_dim, 128, 3, 1, 1),
nn.BatchNorm2d(128),
nn.ReLU(inplace=True),
nn.Conv2d(128, 1, 1) # belt mask
)

def forward(
self,
x: torch.Tensor
) -> Tuple[torch.Tensor, torch.Tensor]:
"""
Args:
x: (batch, 3, H, W)

Returns:
keypoints: (batch, num_keypoints, 3) - x, y, confidence
belt_mask: (batch, 1, H, W)
"""
batch_size = x.size(0)

# 骨干特征
features = self.backbone(x)

# 关键点
kpt_heatmap = self.keypoint_head(features)
kpt_heatmap = kpt_heatmap.view(batch_size, self.num_keypoints, 3, -1)
kpt_heatmap = kpt_heatmap.permute(0, 1, 3, 2)

# 分割
belt_mask = torch.sigmoid(self.segment_head(features))

return kpt_heatmap, belt_mask


class BeltMisuseClassifier(nn.Module):
"""
安全带错误佩戴分类器

基于关键点位置判断佩戴状态
"""

MISUSE_TYPES = [
"normal", # 正确佩戴
"not_worn", # 未系
"shoulder_behind", # 肩带后置
"under_arm", # 腋下穿带
"loose_lap", # 腰带松弛
"twisted", # 扭曲
"multiple_users" # 多人共用
]

def __init__(
self,
num_keypoints: int = 6
):
super().__init__()

self.num_keypoints = num_keypoints

# 分类器
self.classifier = nn.Sequential(
nn.Linear(num_keypoints * 3, 128),
nn.ReLU(inplace=True),
nn.Dropout(0.3),
nn.Linear(128, 64),
nn.ReLU(inplace=True),
nn.Linear(64, len(self.MISUSE_TYPES))
)

def forward(
self,
keypoints: torch.Tensor
) -> torch.Tensor:
"""
Args:
keypoints: (batch, num_keypoints, 3) - x, y, confidence

Returns:
logits: (batch, num_classes)
"""
# 展平
batch_size = keypoints.size(0)
kpt_flat = keypoints.view(batch_size, -1)

# 分类
logits = self.classifier(kpt_flat)

return logits

def analyze_keypoints(
self,
keypoints: np.ndarray,
image_size: Tuple[int, int]
) -> Dict:
"""
分析关键点位置,判断错误类型

Args:
keypoints: (num_keypoints, 3) - x, y, confidence
image_size: (H, W)

Returns:
analysis: 分析结果
"""
h, w = image_size

# 归一化坐标
kpt_normalized = keypoints.copy()
kpt_normalized[:, 0] /= w
kpt_normalized[:, 1] /= h

# 提取关键点
shoulder_anchor = kpt_normalized[0, :2]
shoulder_point = kpt_normalized[1, :2]
chest_point = kpt_normalized[2, :2]
buckle = kpt_normalized[3, :2]
hip_left = kpt_normalized[4, :2]
hip_right = kpt_normalized[5, :2]

analysis = {
"is_misuse": False,
"misuse_type": "normal",
"confidence": 0.0,
"details": {}
}

# 检查:肩带后置
# 正常情况:肩点在胸前 (x坐标在肩锚点和卡扣之间)
if shoulder_point[0] < shoulder_anchor[0] - 0.1:
# 肩点在锚点后方,可能后置
analysis["is_misuse"] = True
analysis["misuse_type"] = "shoulder_behind"
analysis["details"]["shoulder_point_x"] = float(shoulder_point[0])
return analysis

# 检查:腋下穿带
# 肩点位置偏低 (y坐标过大)
if shoulder_point[1] > shoulder_anchor[1] + 0.2:
analysis["is_misuse"] = True
analysis["misuse_type"] = "under_arm"
analysis["details"]["shoulder_point_y"] = float(shoulder_point[1])
return analysis

# 检查:腰带松弛
# 髋部点与卡扣距离过大
hip_center = (hip_left + hip_right) / 2
lap_distance = np.linalg.norm(hip_center - buckle)
if lap_distance > 0.15:
analysis["is_misuse"] = True
analysis["misuse_type"] = "loose_lap"
analysis["details"]["lap_distance"] = float(lap_distance)
return analysis

# 检查:未系
# 关键点置信度过低
avg_confidence = np.mean(keypoints[:, 2])
if avg_confidence < 0.3:
analysis["is_misuse"] = True
analysis["misuse_type"] = "not_worn"
analysis["details"]["avg_confidence"] = float(avg_confidence)
return analysis

analysis["confidence"] = float(avg_confidence)
return analysis


class BeltMisuseDetector(nn.Module):
"""
完整的安全带错误佩戴检测系统
"""

def __init__(
self,
backbone: str = "mobilenetv3"
):
super().__init__()

# 关键点检测
self.keypoint_detector = BeltKeypointDetector(backbone)

# 分类器
self.classifier = BeltMisuseClassifier()

def forward(
self,
x: torch.Tensor
) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
"""
Args:
x: (batch, 3, H, W)

Returns:
keypoints: (batch, num_keypoints, 3)
belt_mask: (batch, 1, H, W)
class_logits: (batch, num_classes)
"""
# 关键点检测
keypoints, belt_mask = self.keypoint_detector(x)

# 分类
class_logits = self.classifier(keypoints)

return keypoints, belt_mask, class_logits

def detect(
self,
image: np.ndarray,
conf_threshold: float = 0.5
) -> Dict:
"""
检测安全带状态

Args:
image: 输入图像
conf_threshold: 置信度阈值

Returns:
result: 检测结果
"""
import cv2

# 预处理
original_size = image.shape[:2]
input_tensor = self._preprocess(image)

# 推理
with torch.no_grad():
keypoints, belt_mask, logits = self.forward(input_tensor)

# 后处理
keypoints_np = keypoints[0].cpu().numpy()
class_probs = F.softmax(logits, dim=1)[0].cpu().numpy()

# 分析关键点
analysis = self.classifier.analyze_keypoints(
keypoints_np,
(224, 224) # 模型输入尺寸
)

# 结合分类结果
predicted_class = np.argmax(class_probs)

result = {
"is_misuse": analysis["is_misuse"] or predicted_class > 0,
"misuse_type": BeltMisuseClassifier.MISUSE_TYPES[predicted_class],
"confidence": float(class_probs[predicted_class]),
"keypoints": keypoints_np.tolist(),
"class_probabilities": class_probs.tolist()
}

return result

def _preprocess(
self,
image: np.ndarray,
target_size: Tuple[int, int] = (224, 224)
) -> torch.Tensor:
"""图像预处理"""
import cv2

# 缩放
image = cv2.resize(image, target_size)

# BGR -> RGB
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

# 归一化
image = image.astype(np.float32) / 255.0

# HWC -> CHW
image = image.transpose(2, 0, 1)

# 添加batch维度
tensor = torch.from_numpy(image).unsqueeze(0)

return tensor


# 测试
if __name__ == "__main__":
# 创建模型
model = BeltMisuseDetector(backbone="mobilenetv3")

# 模拟输入
x = torch.randn(1, 3, 224, 224)

# 前向传播
keypoints, belt_mask, logits = model(x)

print("=== 安全带错误佩戴检测模型测试 ===")
print(f"输入形状: {x.shape}")
print(f"关键点输出: {keypoints.shape}")
print(f"分割输出: {belt_mask.shape}")
print(f"分类输出: {logits.shape}")
print(f"参数量: {sum(p.numel() for p in model.parameters()):,}")

2. Depth-assisted detection

"""
基于深度的安全带位置判断
用于区分肩带在前还是在后
"""

import torch
import torch.nn as nn
import numpy as np
from typing import Dict, Tuple


class DepthEstimator(nn.Module):
"""
轻量级深度估计网络

用于判断安全带与身体的相对位置
"""

def __init__(
self,
backbone: str = "efficientnet_b0"
):
super().__init__()

# 编码器
from torchvision.models import efficientnet_b0
base = efficientnet_b0(pretrained=True)
self.encoder = nn.Sequential(*list(base.children())[:-1])

# 深度解码器
self.decoder = nn.Sequential(
nn.Conv2d(1280, 256, 3, 1, 1),
nn.BatchNorm2d(256),
nn.ReLU(inplace=True),
nn.Upsample(scale_factor=2, mode='bilinear', align_corners=True),

nn.Conv2d(256, 64, 3, 1, 1),
nn.BatchNorm2d(64),
nn.ReLU(inplace=True),
nn.Upsample(scale_factor=4, mode='bilinear', align_corners=True),

nn.Conv2d(64, 1, 3, 1, 1)
)

def forward(
self,
x: torch.Tensor
) -> torch.Tensor:
"""
Args:
x: (batch, 3, H, W)

Returns:
depth: (batch, 1, H', W') - 相对深度值
"""
features = self.encoder(x)
depth = self.decoder(features)
return depth


class BeltPositionAnalyzer:
"""
安全带位置分析器

结合RGB和深度信息判断安全带状态
"""

def __init__(
self,
depth_threshold: float = 0.1
):
"""
Args:
depth_threshold: 深度差异阈值
"""
self.depth_threshold = depth_threshold

def analyze_belt_position(
self,
rgb_image: np.ndarray,
belt_mask: np.ndarray,
depth_map: np.ndarray,
shoulder_keypoint: Tuple[int, int]
) -> Dict:
"""
分析安全带位置

Args:
rgb_image: RGB图像
belt_mask: 安全带分割掩码
depth_map: 深度图
shoulder_keypoint: 肩部关键点坐标

Returns:
position_info: 位置信息
"""
# 提取安全带区域深度
belt_depths = depth_map[belt_mask > 0.5]

# 提取肩部区域深度
sx, sy = shoulder_keypoint
shoulder_region = depth_map[
max(0, sy-20):sy+20,
max(0, sx-20):sx+20
]
shoulder_depth = np.median(shoulder_region)

# 计算相对位置
belt_median_depth = np.median(belt_depths)
depth_difference = belt_median_depth - shoulder_depth

# 判断
if depth_difference > self.depth_threshold:
# 安全带在肩部前方(正常)
position = "front"
is_correct = True
elif depth_difference < -self.depth_threshold:
# 安全带在肩部后方(错误)
position = "behind"
is_correct = False
else:
# 深度接近,无法确定
position = "ambiguous"
is_correct = None

return {
"belt_position": position,
"is_correct_placement": is_correct,
"depth_difference": float(depth_difference),
"belt_median_depth": float(belt_median_depth),
"shoulder_depth": float(shoulder_depth)
}


# Euro NCAP 测试场景
EURO_NCAP_BELT_TEST_SCENARIOS = [
{
"id": "B-01",
"name": "安全带未系",
"setup": "乘员不系安全带",
"expected_detection_time": 2.0,
"expected_result": "not_worn",
"test_procedure": """
1. 测试人员进入车辆,不系安全带
2. 启动车辆
3. 记录系统检测时间和警告等级
4. 通过条件:≤2秒检测到,一级警告
"""
},
{
"id": "B-02",
"name": "肩带后置",
"setup": "肩带放置在背后",
"expected_detection_time": 3.0,
"expected_result": "shoulder_behind",
"test_procedure": """
1. 测试人员将肩带从背后绕过
2. 卡扣正常扣好
3. 记录系统检测结果
4. 通过条件:≤3秒检测到错误佩戴
"""
},
{
"id": "B-03",
"name": "腋下穿带",
"setup": "肩带从腋下穿过",
"expected_detection_time": 3.0,
"expected_result": "under_arm",
"test_procedure": """
1. 测试人员将肩带从腋下穿过
2. 卡扣正常扣好
3. 记录系统检测结果
4. 通过条件:≤3秒检测到错误佩戴
"""
},
{
"id": "B-04",
"name": "腰带松弛",
"setup": "腰带未紧贴髋部",
"expected_detection_time": 5.0,
"expected_result": "loose_lap",
"test_procedure": """
1. 测试人员正常系安全带
2. 腰带故意留出较大松弛量(>10cm)
3. 记录系统检测结果
4. 通过条件:≤5秒检测到松弛
"""
}
]


# 测试
if __name__ == "__main__":
print("=== Euro NCAP 安全带检测测试场景 ===\n")

for scenario in EURO_NCAP_BELT_TEST_SCENARIOS:
print(f"场景 {scenario['id']}: {scenario['name']}")
print(f" 预期检测时间: ≤{scenario['expected_detection_time']}秒")
print(f" 预期结果: {scenario['expected_result']}")
print(f" 测试流程: {scenario['test_procedure'].strip()}")
print()
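The depth comparison in `BeltPositionAnalyzer` can be exercised without a camera by feeding it synthetic arrays. A standalone sketch of the same median-depth rule (following the analyzer's sign convention, i.e. assuming larger relative-depth values mean closer to the camera):

```python
import numpy as np

def belt_vs_shoulder_depth(depth_map: np.ndarray, belt_mask: np.ndarray,
                           shoulder_xy: tuple, half: int = 20,
                           thresh: float = 0.1) -> str:
    """Compare the belt's median depth with a window around the shoulder.

    Positive difference (belt closer to camera) -> 'front' (correct wear);
    negative -> 'behind' (misuse); otherwise 'ambiguous'.
    """
    belt = depth_map[belt_mask > 0.5]
    if belt.size == 0:
        return "no_belt_visible"
    sx, sy = shoulder_xy
    region = depth_map[max(0, sy - half):sy + half, max(0, sx - half):sx + half]
    diff = float(np.median(belt) - np.median(region))
    if diff > thresh:
        return "front"
    if diff < -thresh:
        return "behind"
    return "ambiguous"
```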

Recommended hardware

Camera selection

| Type | Specs | Use case |
|---|---|---|
| RGB (standard) | 2 MP, 30 fps | Daytime detection |
| RGB-IR | 2 MP, 30 fps, 940 nm | All lighting conditions |
| Depth camera | ToF / stereo, 640x480 | Depth-based judgment |
| IR | 640x480, 30 fps | Night |

Recommended configuration

BELT_DETECTION_HARDWARE_CONFIG = {
    "camera": {
        "type": "RGB-IR",
        "resolution": (1920, 1080),
        "fov": 90,
        "position": "overhead",
        "ir_wavelength": 940  # nm
    },
    "processor": {
        "platform": "QCS8255",
        "npu_tops": 26,
        "expected_latency_ms": 20
    },
    "output": {
        "interface": "CAN-FD",
        "message_rate": 10  # Hz
    }
}
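The `output` entry above implies the detector publishes its verdicts over CAN-FD at 10 Hz. As an illustration, one seat's belt status could be packed into a compact payload like this (the field layout and misuse codes are our own invention, not an OEM message catalogue):

```python
import struct

# Hypothetical numeric codes for the misuse labels used in this article.
MISUSE_CODES = {
    "normal": 0, "not_worn": 1, "shoulder_behind": 2, "under_arm": 3,
    "loose_lap": 4, "twisted": 5, "multiple_users": 6,
}

def pack_belt_status(seat_id: int, misuse: str, confidence: float) -> bytes:
    """Pack one seat's belt status into 4 bytes:
    seat id (u8), misuse code (u8), confidence in percent (u16, big-endian)."""
    conf_pct = max(0, min(100, round(confidence * 100)))
    return struct.pack(">BBH", seat_id, MISUSE_CODES[misuse], conf_pct)
```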

Summary

| Dimension | Content |
|---|---|
| New requirement | Detection of six belt misuse patterns |
| Detection time | 2-5 s, depending on type |
| Accuracy | 90%-99% |
| Approach | Keypoint detection + depth assistance |
| Hardware | RGB-IR camera + NPU |

Published: 2026-04-22
Tags: #SeatbeltDetection #EuroNCAP #BeltMisuse #KeypointDetection #IMS


Seatbelt Misuse Detection: New Euro NCAP Requirements and a Computer Vision Approach
https://dapalm.com/2026/04/22/2026-04-22-seatbelt-misuse-detection-cv/
Author: Mars
Published: April 22, 2026