commit abe9ba35645680eb7a3d36f2a613f12ea7372966 Author: wzj <244142824@qq.com> Date: Fri Apr 25 14:48:24 2025 +0800 first commit diff --git a/LICENSE b/LICENSE new file mode 100644 index 0000000..892a565 --- /dev/null +++ b/LICENSE @@ -0,0 +1,11 @@ +Copyright (c) 2019, SeetaTech, +Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China +All rights reserved. + +Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: + +1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. + +2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. diff --git a/README.md b/README.md new file mode 100644 index 0000000..dc02aae --- /dev/null +++ b/README.md @@ -0,0 +1,99 @@ +# SeetaFace6 + +[![License](https://img.shields.io/badge/license-BSD-blue.svg)](LICENSE) + +[[中文]()] [English] + +# 源码发布 +该项目源码已经发布到 [SeetaFace6Open](https://github.com/SeetaFace6Open/index). 
+
+# 开放模块
+
+`SeetaFace6`是中科视拓最新开放的商业正式级版本,打破了之前社区版和企业版不同步发布的情况。这次发布的 v6 版本正式与商用版本同步。
+
+
+ +
+ +此次开放版包含了一直以来人脸识别的基本部分,如人脸检测、关键点定位、人脸识别。同时增加了活体检测、质量评估、年龄性别估计。并且响应时事,开放了口罩检测以及戴口罩的人脸识别模型。 + +
+ +
+
+
+对比于 SeetaFace2,此次开放版采用了商用版最新的推理引擎 TenniS,ResNet50 的推理速度从 SeetaFace2 在 i7 上的 8 FPS 提升到了 20 FPS。同时人脸识别训练集也大幅度扩充,SeetaFace6 的人脸识别数据量增加到了上亿张图片。
+
+为了应对不同级别的应用需求,SeetaFace6 将开放三个版本模型:
+
+模型名称 | 网络结构 | 速度(i7-6700) | 速度(RK3399) | 特征长度
+-|-|-|-|-
+通用人脸识别 | ResNet-50 | 57ms | 300ms | 1024
+带口罩人脸识别 | ResNet-50 | 34ms | 150ms | 512
+通用人脸识别(小) | Mobile FaceNet | 9ms | 70ms | 512
+
+作为能力兼容的升级,SeetaFace6 仍然能够为众多人脸识别应用提供业务能力。
+
+
+ +
+
+同时,该套算法除适用于高精度的服务器部署外,也可以在终端设备上很好地适配运行。
+
+
+ +
+
+此次开放版将开放标准 C++ 开发接口,包含 x86 和 ARM 架构支持,并逐步开放 Ubuntu、CentOS、macOS、Android、iOS 的支持。同时仍然保持了 SeetaFace 的优良传统:不依赖任何第三方库。
+
+
+ +
+
+# 下载地址
+
+### 百度网盘
+
+开发包:
+Windows: [Download](https://pan.baidu.com/s/1_rFID6k6Istmu8QJkHpbFw) code: `iqjk`. Patch: 1. x86 Pentium support [Download](https://pan.baidu.com/s/1RsXdg2h4Yq-bILdyVSTXDA) code: `0vn3`.
+Ubuntu1604: [Download](https://pan.baidu.com/s/1tOq12SdpUtuybe48cMuwag) code: `lc44`
+CentOS7: [Download](https://pan.baidu.com/s/1-U02a--Xjt-Jvi2QWI-9vQ) code: `1i62`
+Android: [Download](https://pan.baidu.com/s/1nGm5VB2D8OZOlZgcABGA7g) code: `7m2h`
+macOS: [Coming soon]
+iOS: [Download](https://pan.baidu.com/s/1-jKlCpVHoml9TmXq77SXxg) code: `t14x`, [Example](https://pan.baidu.com/s/159EVG8eqX2hPDeu1IrQaqg) code: `dund`.
+ARM-Ubuntu1604(RK3399): [Download](https://pan.baidu.com/s/16fMkI5K02k0TEAOGvIsPuw) code: `wi4q`.
+
+
+模型文件:
+Part I: [Download](https://pan.baidu.com/s/1LlXe2-YsUxQMe-MLzhQ2Aw) code: `ngne`, including: `age_predictor.csta`, `face_landmarker_pts5.csta`, `fas_first.csta`, `pose_estimation.csta`, `eye_state.csta`, `face_landmarker_pts68.csta`, `fas_second.csta`, `quality_lbn.csta`, `face_detector.csta`, `face_recognizer.csta`, `gender_predictor.csta`, `face_landmarker_mask_pts5.csta`, `face_recognizer_mask.csta`, `mask_detector.csta`.
+Part II: [Download](https://pan.baidu.com/s/1xjciq-lkzEBOZsTfVYAT9g) code: `t6j0`, including: `face_recognizer_light.csta`.
+
+### DropBox
+
+[Coming soon]
+
+# 使用入门
+
+关于基本的接口使用,请参见教程:
+[《SeetaFace 入门教程》](http://leanote.com/blog/post/5e7d6cecab64412ae60016ef),GitHub 上有同步的[文档源码](https://github.com/seetafaceengine/SeetaFaceTutorial)。
+
+人脸识别的完整示例 Demo 见 [example/qt](./example/qt)。
+
+在每个压缩包的文档中都包含了对应平台上的调用示例,请解压对应平台的压缩包后分别获取。
+
+关于版本号的额外说明:该开放版本立项时是作为社区版 v3 发布的,而执行过程中调整至按商用版本 v6 发布。版本号不统一是由商用版迭代的版本管理与社区版不一致造成的,现已统一版本为 v6。项目过程中仍存在 `SeetaFace3` 的表述,大家不用担心,v6 和 v3 其实就是同一个版本。
+
+# 接口文档
+
+各模块接口参见 [docs](./docs)
+
+# 开发者社区
+
+欢迎开发者加入 SeetaFace 开发者社区,请先加 SeetaFace 小助手微信,经过审核后邀请入群。
+
+![QR](./asserts/QR.png)
+
+# 联系我们
+
+`SeetaFace` 开放版可以免费用于商业和个人用途。如果需要更多的商业支持,请联系商务邮件 bd@seetatech.com。
+
diff --git a/asserts/QR.png b/asserts/QR.png
new file mode 100644
index 0000000..5013311
Binary files /dev/null and b/asserts/QR.png differ
diff --git a/asserts/api_matrix.png b/asserts/api_matrix.png
new file mode 100644
index 0000000..e3d8a4d
Binary files /dev/null and b/asserts/api_matrix.png differ
diff --git a/asserts/app_matrix.png b/asserts/app_matrix.png
new file mode 100644
index 0000000..b2bf949
Binary files /dev/null and b/asserts/app_matrix.png differ
diff --git a/asserts/endpoints.png b/asserts/endpoints.png
new file mode 100644
index 0000000..6b4b652
Binary files /dev/null and b/asserts/endpoints.png differ
diff --git a/asserts/fas.jpg b/asserts/fas.jpg
new file mode 100644
index 0000000..4687bbd
Binary files /dev/null and b/asserts/fas.jpg differ
diff --git a/asserts/fr_mask.png b/asserts/fr_mask.png
new file mode 100644
index 0000000..b186217
Binary files /dev/null and b/asserts/fr_mask.png differ
diff --git a/docs/人脸检测.md b/docs/人脸检测.md
new file mode 100644
index 0000000..2fea0e4
--- /dev/null
+++ b/docs/人脸检测.md
@@ -0,0 +1,110 @@
+# 人脸检测器
+
+## **1. 接口简介**
+
+人脸检测器会对输入的彩色图像或者灰度图像进行人脸检测,并返回所有检测到的人脸位置。
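下面用一段可独立编译的 C++ 代码示意检测结果的典型用法:调用方遍历返回的 `SeetaFaceInfoArray` 并按置信分数过滤。其中的结构体按本文档 2.1~2.4 节的字段重新声明,仅作示意,实际使用时应包含 SDK 头文件而非自行定义。

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// 按文档 2.1~2.4 节字段示意性重新声明(实际请使用 SDK 头文件中的定义)。
struct SeetaImageData { uint8_t* data; int32_t width; int32_t height; int32_t channels; };
struct SeetaRect { int32_t x, y, width, height; };
struct SeetaFaceInfo { SeetaRect pos; float score; };
struct SeetaFaceInfoArray { const SeetaFaceInfo* data; int size; };

// 模拟 detect() 返回之后的常见后处理:按置信分数过滤检测到的人脸。
std::vector<SeetaFaceInfo> FilterByScore(const SeetaFaceInfoArray& faces, float threshold) {
    std::vector<SeetaFaceInfo> kept;
    for (int i = 0; i < faces.size; ++i) {
        if (faces.data[i].score >= threshold) kept.push_back(faces.data[i]);
    }
    return kept;
}
```

真实流程中 `SeetaFaceInfoArray` 由 `FaceDetector::detect` 返回,这里仅演示其数据布局与遍历方式。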
+ +## **2. 类型说明**
+ +### **2.1 struct SeetaImageData**
+
+|名称 | 类型 | 说明|
+|---|---|---|
+|data|uint8_t* |图像数据|
+|width | int32_t | 图像的宽度|
+|height | int32_t | 图像的高度|
+|channels | int32_t | 图像的通道数|
+说明:存储彩色(三通道)或灰度(单通道)图像,像素连续存储,行优先,采用 BGR888 格式存放彩色图像,单字节灰度值存放灰度图像。
+
+### **2.2 struct SeetaRect**
+ +|名称 | 类型 | 说明| +|---|---|---| +|x|int32_t |人脸区域左上角横坐标| +|y| int32_t | 人脸区域左上角纵坐标| +|width| int32_t | 人脸区域宽度| +|height| int32_t | 人脸区域高度| + +### **2.3 struct SeetaFaceInfo**
+ +|名称 | 类型 | 说明| +|---|---|---| +|pos|SeetaRect|人脸位置| +|score|float|人脸置信分数| + +### **2.4 struct SeetaFaceInfoArray**
+ +|名称 | 类型 | 说明| +|---|---|---| +|data|const SeetaFaceInfo*|人脸信息数组| +|size|int|人脸信息数组长度| + +## 3 class FaceDetector + +人脸检测器。 + +### 3.1 Enum SeetaDevice + +模型运行的计算设备。
+ +|名称 |说明| +|---|---| +|SEETA_DEVICE_AUTO|自动检测,会优先使用 GPU| +|SEETA_DEVICE_CPU|使用CPU计算| +|SEETA_DEVICE_GPU|使用GPU计算| + +### 3.2 struct SeetaModelSetting + +构造人脸检测器需要传入的结构体参数。
+ +|参数 | 类型 |缺省值|说明| +|---|---|---|---| +|model|const char**| |检测器模型| +|id|int| |GPU id| +|device|SeetaDevice|AUTO |计算设备(CPU 或者 GPU)| + +### 3.3 构造函数 + +#### FaceDetector + +|参数 | 类型 |缺省值|说明| +|---|---|---|---| +|setting|const SeetaModelSetting&| |检测器结构参数| + +### 3.4 成员函数 + +#### detect + +输入彩色图像,检测其中的人脸。
+ +|参数 | 类型 |缺省值|说明| +|---|---|---|---| +|image|const SeetaImageData&| |输入的图像数据| +|返回值|SeetaFaceInfoArray| |人脸信息数组| + +#### set +设置人脸检测器相关属性值。其中
+**PROPERTY_MIN_FACE_SIZE**: 表示人脸检测器可以检测到的最小人脸,该值越小,支持检测到的人脸尺寸越小,检测速度越慢,默认值为20;
+**PROPERTY_THRESHOLD**: +表示人脸检测器过滤阈值,默认为 0.90;
+**PROPERTY_MAX_IMAGE_WIDTH** 和 **PROPERTY_MAX_IMAGE_HEIGHT**: +分别表示支持输入的图像的最大宽度和高度;
+**PROPERTY_NUMBER_THREADS**:
+表示人脸检测器计算线程数,默认为 4。
+
+|参数 | 类型 |缺省值|说明|
+|---|---|---|---|
+|property|Property||人脸检测器属性类别|
+|value|double||设置的属性值|
+|返回值|void| | |
+
+#### get
+获取人脸检测器相关属性值。
+ +|参数 | 类型 |缺省值|说明| +|---|---|---|---| +|property|Property||人脸检测器属性类别| +|返回值|double||对应的人脸属性值| + + + diff --git a/docs/人脸识别.md b/docs/人脸识别.md new file mode 100644 index 0000000..4d9d9b0 --- /dev/null +++ b/docs/人脸识别.md @@ -0,0 +1,190 @@ +# 人脸识别器 + +## **1. 接口简介**
+ +人脸识别器要求输入原始图像数据和人脸特征点(或者裁剪好的人脸数据),对输入的人脸提取特征值数组,根据提取的特征值数组对人脸进行相似度比较。
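特征比对环节可以用一段独立的 C++ 代码示意:对两个特征值数组计算余弦相似度。注意,实际相似度请调用识别器的 `CalculateSimilarity` 接口;下面的余弦实现只是一个假设性的示意(假设特征以向量夹角衡量相近程度),分数未必与 SDK 内部一致。

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>

// 1:1 比对流程示意:对两个特征值数组计算余弦相似度。
// 真实比对请使用 FaceRecognizer::CalculateSimilarity,此处仅为独立示意。
float CosineSimilarity(const float* a, const float* b, std::size_t n) {
    double dot = 0.0, na = 0.0, nb = 0.0;
    for (std::size_t i = 0; i < n; ++i) {
        dot += a[i] * b[i];
        na  += a[i] * a[i];
        nb  += b[i] * b[i];
    }
    if (na == 0.0 || nb == 0.0) return 0.0f;  // 空特征直接视为不相似
    return static_cast<float>(dot / (std::sqrt(na) * std::sqrt(nb)));
}
```

特征值数组的长度应通过 `GetExtractFeatureSize` 获取,两个数组必须来自同一个识别模型。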
+ +## **2. 类型说明**
+ +### **2.1 struct SeetaImageData**
+
+|名称 | 类型 | 说明|
+|---|---|---|
+|data|uint8_t* |图像数据|
+|width | int32_t | 图像的宽度|
+|height | int32_t | 图像的高度|
+|channels | int32_t | 图像的通道数|
+说明:存储彩色(三通道)或灰度(单通道)图像,像素连续存储,行优先,采用 BGR888 格式存放彩色图像,单字节灰度值存放灰度图像。
+
+### **2.2 struct SeetaPointF**
+ +|名称 | 类型 | 说明| +|---|---|---| +|x|double|人脸特征点横坐标| +|y|double|人脸特征点纵坐标| + +## 3 class FaceRecognizer +人脸识别器。 + +### 3.1 Enum SeetaDevice + +模型运行的计算设备。 + +|名称 |说明| +|---|---| +|SEETA_DEVICE_AUTO|自动检测,会优先使用 GPU| +|SEETA_DEVICE_CPU|使用CPU计算| +|SEETA_DEVICE_GPU|使用GPU计算| + +### 3.2 struct SeetaModelSetting + +构造人脸识别器需要传入的结构体参数。 + +|参数 | 类型 |缺省值|说明| +|---|---|---|---| +|model|const char**| |识别器模型| +|id|int| |GPU id| +|device|SeetaDevice|AUTO |计算设备(CPU 或者 GPU)| + +### 3.3 构造函数 +#### FaceRecognizer + +|参数 | 类型 |缺省值|说明| +|---|---|---|---| +|setting|const SeetaModelSetting&| |识别器结构参数| + +### 3.4 成员函数 + +#### GetCropFaceWidth +获取裁剪人脸的宽度。 + +|参数 | 类型 |缺省值|说明| +|---|---|---|---| +|返回值|int| |返回的人脸宽度| + +#### GetCropFaceHeight +获取裁剪的人脸高度。 + +|参数 | 类型 |缺省值|说明| +|---|---|---|---| +|返回值|int| |返回的人脸高度| + +#### GetCropFaceChannels +获取裁剪的人脸数据通道数。 + +|参数 | 类型 |缺省值|说明| +|---|---|---|---| +|返回值|int| |返回的人脸数据通道数| + +#### CropFace +裁剪人脸。 + +|参数 | 类型 |缺省值|说明| +|---|---|---|---| +|image|const SeetaImageData&| |原始图像数据| +|points|const SeetaPointF*| |人脸特征点数组| +|face|SeetaImageData&| |返回的裁剪人脸| +|返回值|bool| |true表示人脸裁剪成功| + +#### CropFace +裁剪人脸。 + +|参数 | 类型 |缺省值|说明| +|---|---|---|---| +|image|const SeetaImageData&| |原始图像数据| +|points|const SeetaPointF*| |人脸特征点数组| +|返回值|seeta::ImageData| |返回的裁剪人脸| + +#### GetCropFaceWidthV2 +获取裁剪人脸的宽度。 + +|参数 | 类型 |缺省值|说明| +|---|---|---|---| +|返回值|int| |返回的人脸宽度| + +#### GetCropFaceHeightV2 +获取裁剪的人脸高度。 + +|参数 | 类型 |缺省值|说明| +|---|---|---|---| +|返回值|int| |返回的人脸高度| + +#### GetCropFaceChannelsV2 +获取裁剪的人脸数据通道数。 + +|参数 | 类型 |缺省值|说明| +|---|---|---|---| +|返回值|int| |返回的人脸数据通道数| + +#### CropFaceV2 +裁剪人脸。 + +|参数 | 类型 |缺省值|说明| +|---|---|---|---| +|image|const SeetaImageData&| |原始图像数据| +|points|const SeetaPointF*| |人脸特征点数组| +|face|SeetaImageData&| |返回的裁剪人脸| +|返回值|bool| |true表示人脸裁剪成功| + +#### CropFaceV2 +裁剪人脸。 + +|参数 | 类型 |缺省值|说明| +|---|---|---|---| +|image|const SeetaImageData&| |原始图像数据| +|points|const SeetaPointF*| |人脸特征点数组| +|返回值|seeta::ImageData| |返回的裁剪人脸| + +#### 
GetExtractFeatureSize +获取特征值数组的长度。 + +|参数 | 类型 |缺省值|说明| +|---|---|---|---| +|返回值|int| |特征值数组的长度| + +#### ExtractCroppedFace +输入裁剪后的人脸图像,提取人脸的特征值数组。 + +|参数 | 类型 |缺省值|说明| +|---|---|---|---| +|face|const SeetaImageData&| |裁剪后的人脸图像数据| +|features|float*| |返回的人脸特征值数组| +|返回值|bool| |true表示提取特征成功| + +#### Extract +输入原始图像数据和人脸特征点数组,提取人脸的特征值数组。 + +|参数 | 类型 |缺省值|说明| +|---|---|---|---| +|image|const SeetaImageData&| |原始的人脸图像数据| +|points|const SeetaPointF*| |人脸的特征点数组| +|features|float*| |返回的人脸特征值数组| +|返回值|bool| |true表示提取特征成功| + +#### CalculateSimilarity +比较两人脸的特征值数据,获取人脸的相似度值。 + +|参数 | 类型 |缺省值|说明| +|---|---|---|---| +|features1|const float*| |特征数组一| +|features2|const float*| |特征数组二| +|返回值|float| |相似度值| + +#### set +设置相关属性值。其中
+**PROPERTY_NUMBER_THREADS**:
+表示计算线程数,默认为 4。
+
+|参数 | 类型 |缺省值|说明|
+|---|---|---|---|
+|property|Property||属性类别|
+|value|double||设置的属性值|
+|返回值|void| | |
+
+#### get
+获取相关属性值。
+
+|参数 | 类型 |缺省值|说明|
+|---|---|---|---|
+|property|Property||属性类别|
+|返回值|double||对应的属性值|
\ No newline at end of file
diff --git a/docs/人脸跟踪.md b/docs/人脸跟踪.md
new file mode 100644
index 0000000..3fbf62d
--- /dev/null
+++ b/docs/人脸跟踪.md
@@ -0,0 +1,151 @@
+# 人脸跟踪器
+
+## **1. 接口简介**
+ +人脸跟踪器会对输入的彩色图像或者灰度图像中的人脸进行跟踪,并返回所有跟踪到的人脸信息。
+ +## **2. 类型说明**
+ +### **2.1 struct SeetaImageData**
+
+|名称 | 类型 | 说明|
+|---|---|---|
+|data|uint8_t* |图像数据|
+|width | int32_t | 图像的宽度|
+|height | int32_t | 图像的高度|
+|channels | int32_t | 图像的通道数|
+说明:存储彩色(三通道)或灰度(单通道)图像,像素连续存储,行优先,采用 BGR888 格式存放彩色图像,单字节灰度值存放灰度图像。
+
+### **2.2 struct SeetaRect**
+ +|名称 | 类型 | 说明| +|---|---|---| +|x|int32_t |人脸区域左上角横坐标| +|y| int32_t | 人脸区域左上角纵坐标| +|width| int32_t | 人脸区域宽度| +|height| int32_t | 人脸区域高度| + +### **2.3 struct SeetaTrackingFaceInfo**
+ +|名称 | 类型 | 说明| +|---|---|---| +|pos|SeetaRect|人脸位置| +|score|float|人脸置信分数| +|frame_no|int|视频帧的索引| +|PID|int|跟踪的人脸标识id| + +### **2.4 struct SeetaTrackingFaceInfoArray**
+
+|名称 | 类型 | 说明|
+|---|---|---|
+|data|const SeetaTrackingFaceInfo*|人脸信息数组|
+|size|int|人脸信息数组长度|
+
+## 3 class FaceTracker
+
+人脸跟踪器。
+
+### 3.1 Enum SeetaDevice
+
+模型运行的计算设备。
+
+|名称 |说明|
+|---|---|
+|SEETA_DEVICE_AUTO|自动检测,会优先使用 GPU|
+|SEETA_DEVICE_CPU|使用CPU计算|
+|SEETA_DEVICE_GPU|使用GPU计算|
+
+### 3.2 struct SeetaModelSetting
+
+构造人脸跟踪器需要传入的结构体参数。
+
+|参数 | 类型 |缺省值|说明|
+|---|---|---|---|
+|model|const char**| |跟踪器模型|
+|id|int| |GPU id|
+|device|SeetaDevice|AUTO |计算设备(CPU 或者 GPU)|
+
+### 3.3 构造函数
+
+#### FaceTracker
+
+|参数 | 类型 |缺省值|说明|
+|---|---|---|---|
+|setting|const SeetaModelSetting&| |跟踪器结构参数|
+|video_width|int| |视频的宽度|
+|video_height|int| |视频的高度|
+
+### 3.4 成员函数
+
+#### SetSingleCalculationThreads
+设置底层的计算线程数量。
+
+|参数 | 类型 |缺省值|说明|
+|---|---|---|---|
+|num|int| |线程数量|
+|返回值|void| ||
+
+#### Track
+对视频帧中的人脸进行跟踪。
+
+|参数 | 类型 |缺省值|说明|
+|---|---|---|---|
+|image|const SeetaImageData&| |原始图像数据|
+|返回值|SeetaTrackingFaceInfoArray| |跟踪到的人脸信息数组|
+
+#### Track
+对视频帧中的人脸进行跟踪。
+
+|参数 | 类型 |缺省值|说明|
+|---|---|---|---|
+|image|const SeetaImageData&| |原始图像数据|
+|frame_no|int| |视频帧索引|
+|返回值|SeetaTrackingFaceInfoArray| |跟踪到的人脸信息数组|
+
+#### SetMinFaceSize
+设置检测器的最小人脸大小。
+
+|参数 | 类型 |缺省值|说明|
+|---|---|---|---|
+|size|int32_t| |最小人脸大小|
+|返回值|void| ||
+说明:size 需保证大于等于 20;size 的值越小,能够检测到的人脸的尺寸越小,检测速度越慢。
+
+#### GetMinFaceSize
+获取最小人脸的大小。
+
+|参数 | 类型 |缺省值|说明|
+|---|---|---|---|
+|返回值|int32_t| |最小人脸大小|
+
+#### SetThreshold
+设置检测器的检测阈值。
+
+|参数 | 类型 |缺省值|说明|
+|---|---|---|---|
+|thresh|float| |检测阈值|
+|返回值|void| ||
+
+#### GetScoreThreshold
+获取检测器检测阈值。
+
+|参数 | 类型 |缺省值|说明|
+|---|---|---|---|
+|返回值|float| |检测阈值|
+
+#### SetVideoStable
+设置以稳定模式输出人脸跟踪结果。
+
+|参数 | 类型 |缺省值|说明|
+|---|---|---|---|
+|stable|bool| |是否是稳定模式|
+|返回值|void| ||
+说明:只有在视频中连续跟踪时,才使用此方法。
+
+#### GetVideoStable
+获取当前是否是稳定工作模式。
+
+|参数 | 类型 |缺省值|说明|
+|---|---|---|---|
+|返回值|bool| |是否是稳定模式|
diff --git a/docs/口罩检测.md b/docs/口罩检测.md
new file mode 100644
index 0000000..4a47512
--- /dev/null
+++ b/docs/口罩检测.md
@@ -0,0 +1,70 @@
+# 口罩检测器 + +## **1. 接口简介**
+ +口罩检测器根据输入的图像数据、人脸位置,返回是否佩戴口罩的检测结果。
+ +## **2. 类型说明**
+ +### **2.1 struct SeetaImageData**
+
+|名称 | 类型 | 说明|
+|---|---|---|
+|data|uint8_t* |图像数据|
+|width | int32_t | 图像的宽度|
+|height | int32_t | 图像的高度|
+|channels | int32_t | 图像的通道数|
+说明:存储彩色(三通道)或灰度(单通道)图像,像素连续存储,行优先,采用 BGR888 格式存放彩色图像,单字节灰度值存放灰度图像。
+
+### **2.2 struct SeetaRect**
+ +|名称 | 类型 | 说明| +|---|---|---| +|x|int32_t |人脸区域左上角横坐标| +|y| int32_t | 人脸区域左上角纵坐标| +|width| int32_t | 人脸区域宽度| +|height| int32_t | 人脸区域高度| + +## 3 class MaskDetector +口罩检测器。 + +### 3.1 Enum SeetaDevice + +模型运行的计算设备。 + +|名称 |说明| +|---|---| +|SEETA_DEVICE_AUTO|自动检测,会优先使用 GPU| +|SEETA_DEVICE_CPU|使用CPU计算| +|SEETA_DEVICE_GPU|使用GPU计算| + +### 3.2 struct SeetaModelSetting + +口罩检测器需要传入的结构体参数。 + +|参数 | 类型 |缺省值|说明| +|---|---|---|---| +|model|const char**| |检测器模型| +|id|int| |GPU id| +|device|SeetaDevice|AUTO |计算设备(CPU 或者 GPU)| + +### 3.3 构造函数 + +#### MaskDetector +构造检测器,需要在构造的时候传入检测器结构参数。 + +|参数 | 类型 |缺省值|说明| +|---|---|---|---| +|setting|const SeetaModelSetting&| |识别器接口参数| + +### 3.4 成员函数 + +#### detect +输入图像数据、人脸位置,返回是否佩戴口罩的检测结果。 + +|参数 | 类型 |缺省值|说明| +|---|---|---|---| +|image|const SeetaImageData&| |原始图像数据| +|face|const SeetaRect&| |人脸位置| +|score|float*|nullptr|戴口罩的置信度| +|返回值|bool| |true为佩戴了口罩| diff --git a/docs/年龄估计.md b/docs/年龄估计.md new file mode 100644 index 0000000..005f70d --- /dev/null +++ b/docs/年龄估计.md @@ -0,0 +1,125 @@ +# 年龄估计器 + +## **1. 接口简介**
+ +年龄估计器要求输入原始图像数据和人脸特征点(或者裁剪好的人脸数据),对输入的人脸进行年龄估计。
+ +## **2. 类型说明**
+ +### **2.1 struct SeetaImageData**
+
+|名称 | 类型 | 说明|
+|---|---|---|
+|data|uint8_t* |图像数据|
+|width | int32_t | 图像的宽度|
+|height | int32_t | 图像的高度|
+|channels | int32_t | 图像的通道数|
+说明:存储彩色(三通道)或灰度(单通道)图像,像素连续存储,行优先,采用 BGR888 格式存放彩色图像,单字节灰度值存放灰度图像。
+
+### **2.2 struct SeetaPointF**
+ +|名称 | 类型 | 说明| +|---|---|---| +|x|double|人脸特征点横坐标| +|y|double|人脸特征点纵坐标| + +## 3 class AgePredictor +年龄估计器。 + +### 3.1 Enum SeetaDevice + +模型运行的计算设备。 + +|名称 |说明| +|---|---| +|SEETA_DEVICE_AUTO|自动检测,会优先使用 GPU| +|SEETA_DEVICE_CPU|使用CPU计算| +|SEETA_DEVICE_GPU|使用GPU计算| + +### 3.2 struct SeetaModelSetting + +年龄估计器需要传入的结构体参数。 + +|参数 | 类型 |缺省值|说明| +|---|---|---|---| +|model|const char**| |模型文件| +|id|int| |GPU id| +|device|SeetaDevice|AUTO |计算设备(CPU 或者 GPU)| + +### 3.3 构造函数 +#### AgePredictor + +|参数 | 类型 |缺省值|说明| +|---|---|---|---| +|setting|const SeetaModelSetting&| |结构参数| + +### 3.4 成员函数 + +#### GetCropFaceWidth +获取裁剪人脸的宽度。 + +|参数 | 类型 |缺省值|说明| +|---|---|---|---| +|返回值|int| |返回的人脸宽度| + +#### GetCropFaceHeight +获取裁剪的人脸高度。 + +|参数 | 类型 |缺省值|说明| +|---|---|---|---| +|返回值|int| |返回的人脸高度| + +#### GetCropFaceChannels +获取裁剪的人脸数据通道数。 + +|参数 | 类型 |缺省值|说明| +|---|---|---|---| +|返回值|int| |返回的人脸数据通道数| + +#### CropFace +裁剪人脸。 + +|参数 | 类型 |缺省值|说明| +|---|---|---|---| +|image|const SeetaImageData&| |原始图像数据| +|points|const SeetaPointF*| |人脸特征点数组| +|face|SeetaImageData&| |返回的裁剪人脸| +|返回值|bool| |true表示人脸裁剪成功| + +#### PredictAge +输入裁剪好的人脸,返回估计的年龄。 + +|参数 | 类型 |缺省值|说明| +|---|---|---|---| +|face|const SeetaImageData&| |裁剪好的人脸数据| +|age|int&| |估计的年龄| +|返回值|bool| |true表示估计成功| + +#### PredictAgeWithCrop +输入原始图像数据和人脸特征点,返回估计的年龄。 + +|参数 | 类型 |缺省值|说明| +|---|---|---|---| +|image|const SeetaImageData&| |原始人脸数据| +|points|const SeetaPointF*| |人脸特征点| +|age|int&| |估计的年龄| +|返回值|bool| |true表示估计成功| + +#### set +设置相关属性值。其中
+**PROPERTY_NUMBER_THREADS**:
+表示计算线程数,默认为 4。
+
+|参数 | 类型 |缺省值|说明|
+|---|---|---|---|
+|property|Property||属性类别|
+|value|double||设置的属性值|
+|返回值|void| | |
+
+#### get
+获取相关属性值。
+
+|参数 | 类型 |缺省值|说明|
+|---|---|---|---|
+|property|Property||属性类别|
+|返回值|double||对应的属性值|
\ No newline at end of file
diff --git a/docs/性别估计.md b/docs/性别估计.md
new file mode 100644
index 0000000..0d59223
--- /dev/null
+++ b/docs/性别估计.md
@@ -0,0 +1,128 @@
+# 性别估计器
+
+## **1. 接口简介**
+ +性别估计器要求输入原始图像数据和人脸特征点(或者裁剪好的人脸数据),对输入的人脸进行性别估计。
+ +## **2. 类型说明**
+ +### **2.1 struct SeetaImageData**
+
+|名称 | 类型 | 说明|
+|---|---|---|
+|data|uint8_t* |图像数据|
+|width | int32_t | 图像的宽度|
+|height | int32_t | 图像的高度|
+|channels | int32_t | 图像的通道数|
+说明:存储彩色(三通道)或灰度(单通道)图像,像素连续存储,行优先,采用 BGR888 格式存放彩色图像,单字节灰度值存放灰度图像。
+
+### **2.2 struct SeetaPointF**
+ +|名称 | 类型 | 说明| +|---|---|---| +|x|double|人脸特征点横坐标| +|y|double|人脸特征点纵坐标| + +## 3 class GenderPredictor +性别估计器。 + +### 3.1 Enum SeetaDevice + +模型运行的计算设备。 + +|名称 |说明| +|---|---| +|SEETA_DEVICE_AUTO|自动检测,会优先使用 GPU| +|SEETA_DEVICE_CPU|使用CPU计算| +|SEETA_DEVICE_GPU|使用GPU计算| + +### 3.2 struct SeetaModelSetting + +性别估计器需要传入的结构体参数。 + +|参数 | 类型 |缺省值|说明| +|---|---|---|---| +|model|const char**| |模型文件| +|id|int| |GPU id| +|device|SeetaDevice|AUTO |计算设备(CPU 或者 GPU)| + +### 3.3 构造函数 +#### GenderPredictor + +|参数 | 类型 |缺省值|说明| +|---|---|---|---| +|setting|const SeetaModelSetting&| |结构参数| + +### 3.4 成员函数 + +#### GetCropFaceWidth +获取裁剪人脸的宽度。 + +|参数 | 类型 |缺省值|说明| +|---|---|---|---| +|返回值|int| |返回的人脸宽度| + +#### GetCropFaceHeight +获取裁剪的人脸高度。 + +|参数 | 类型 |缺省值|说明| +|---|---|---|---| +|返回值|int| |返回的人脸高度| + +#### GetCropFaceChannels +获取裁剪的人脸数据通道数。 + +|参数 | 类型 |缺省值|说明| +|---|---|---|---| +|返回值|int| |返回的人脸数据通道数| + +#### CropFace +裁剪人脸。 + +|参数 | 类型 |缺省值|说明| +|---|---|---|---| +|image|const SeetaImageData&| |原始图像数据| +|points|const SeetaPointF*| |人脸特征点数组| +|face|SeetaImageData&| |返回的裁剪人脸| +|返回值|bool| |true表示人脸裁剪成功| + +#### PredictGender +输入裁剪好的人脸,返回估计的性别。 + +|参数 | 类型 |缺省值|说明| +|---|---|---|---| +|face|const SeetaImageData&| |裁剪好的人脸数据| +|gender|GENDER&| |估计的性别| +|返回值|bool| |true表示估计成功| +说明:GENDER可取值MALE(男性)和FEMALE(女性)。 + +#### PredictGenderWithCrop +输入原始图像数据和人脸特征点,返回估计的性别。 + +|参数 | 类型 |缺省值|说明| +|---|---|---|---| +|image|const SeetaImageData&| |原始人脸数据| +|points|const SeetaPointF*| |人脸特征点| +|gender|GENDER&| |估计的性别| +|返回值|bool| |true表示估计成功| +说明:GENDER可取值MALE(男性)和FEMALE(女性)。 + +#### set +设置相关属性值。其中
+
+**PROPERTY_NUMBER_THREADS**:
+表示计算线程数,默认为 4。
+
+|参数 | 类型 |缺省值|说明|
+|---|---|---|---|
+|property|Property||属性类别|
+|value|double||设置的属性值|
+|返回值|void| | |
+
+#### get
+获取相关属性值。
+
+|参数 | 类型 |缺省值|说明|
+|---|---|---|---|
+|property|Property||属性类别|
+|返回值|double||对应的属性值|
\ No newline at end of file
diff --git a/docs/特征点检测.md b/docs/特征点检测.md
new file mode 100644
index 0000000..1b48608
--- /dev/null
+++ b/docs/特征点检测.md
@@ -0,0 +1,113 @@
+# 人脸特征点检测器
+
+## **1. 接口简介**
+ +人脸特征点检测器要求输入原始图像数据和人脸位置,返回人脸 5 个或者其他数量的的特征点的坐标(特征点的数量和加载的模型有关)。
+ +## **2. 类型说明**
+ +### **2.1 struct SeetaImageData**
+
+|名称 | 类型 | 说明|
+|---|---|---|
+|data|uint8_t* |图像数据|
+|width | int32_t | 图像的宽度|
+|height | int32_t | 图像的高度|
+|channels | int32_t | 图像的通道数|
+说明:存储彩色(三通道)或灰度(单通道)图像,像素连续存储,行优先,采用 BGR888 格式存放彩色图像,单字节灰度值存放灰度图像。
+
+### **2.2 struct SeetaRect**
+ +|名称 | 类型 | 说明| +|---|---|---| +|x|int32_t |人脸区域左上角横坐标| +|y| int32_t | 人脸区域左上角纵坐标| +|width| int32_t | 人脸区域宽度| +|height| int32_t | 人脸区域高度| + +### **2.3 struct SeetaPointF**
+
+|名称 | 类型 | 说明|
+|---|---|---|
+|x|double|人脸特征点横坐标|
+|y|double|人脸特征点纵坐标|
+
+## 3 class FaceLandmarker
+
+人脸特征点检测器。
+
+### 3.1 Enum SeetaDevice
+
+模型运行的计算设备。
+
+|名称 |说明|
+|---|---|
+|SEETA_DEVICE_AUTO|自动检测,会优先使用 GPU|
+|SEETA_DEVICE_CPU|使用CPU计算|
+|SEETA_DEVICE_GPU|使用GPU计算|
+
+### 3.2 struct SeetaModelSetting
+
+构造人脸特征点检测器需要传入的结构体参数。
+
+|参数 | 类型 |缺省值|说明|
+|---|---|---|---|
+|model|const char**| |检测器模型|
+|id|int| |GPU id|
+|device|SeetaDevice|AUTO |计算设备(CPU 或者 GPU)|
+
+### 3.3 构造函数
+
+#### FaceLandmarker
+
+|参数 | 类型 |缺省值|说明|
+|---|---|---|---|
+|setting|const SeetaModelSetting&| |检测器结构参数|
+
+### 3.4 成员函数
+
+#### number
+获取模型对应的特征点数组长度。
+
+|参数 | 类型 |缺省值|说明|
+|---|---|---|---|
+|返回值|int| |模型特征点数组长度|
+
+#### mark
+获取人脸特征点。
+
+|参数 | 类型 |缺省值|说明|
+|---|---|---|---|
+|image|const SeetaImageData&| |图像原始数据|
+|face|const SeetaRect&| |人脸位置|
+|points|SeetaPointF*| |获取的人脸特征点数组(需预分配好数组长度,长度为number()返回的值)|
+|返回值|void| | |
+
+#### mark
+获取人脸特征点和遮挡信息。
+
+|参数 | 类型 |缺省值|说明|
+|---|---|---|---|
+|image|const SeetaImageData&| |图像原始数据|
+|face|const SeetaRect&| |人脸位置|
+|points|SeetaPointF*| |获取的人脸特征点数组(需预分配好数组长度,长度为number()返回的值)|
+|mask|int32_t*| |获取人脸特征点位置对应的遮挡信息数组(需预分配好数组长度,长度为number()返回的值),其中值为1表示被遮挡,0表示未被遮挡|
+|返回值|void| | |
+
+#### mark
+获取人脸特征点。
+
+|参数 | 类型 |缺省值|说明|
+|---|---|---|---|
+|image|const SeetaImageData&| |图像原始数据|
+|face|const SeetaRect&| |人脸位置|
+|返回值|`std::vector<SeetaPointF>`| |获取的人脸特征点数组|
+
+#### mark_v2
+获取人脸特征点和遮挡信息。
+
+|参数 | 类型 |缺省值|说明|
+|---|---|---|---|
+|image|const SeetaImageData&| |图像原始数据|
+|face|const SeetaRect&| |人脸位置|
+|返回值|`std::vector<PointWithMask>`| |获取人脸特征点和是否遮挡数组|
\ No newline at end of file
diff --git a/docs/眼睛状态检测.md b/docs/眼睛状态检测.md
new file mode 100644
index 0000000..48bcb07
--- /dev/null
+++ b/docs/眼睛状态检测.md
@@ -0,0 +1,86 @@
+# 眼睛状态检测器
+
+## **1. 接口简介**
+ +眼睛检测器要求输入原始图像数据和人脸特征点,返回左眼和右眼的状态。
+ +## **2. 类型说明**
+ +### **2.1 struct SeetaImageData**
+
+|名称 | 类型 | 说明|
+|---|---|---|
+|data|uint8_t* |图像数据|
+|width | int32_t | 图像的宽度|
+|height | int32_t | 图像的高度|
+|channels | int32_t | 图像的通道数|
+说明:存储彩色(三通道)或灰度(单通道)图像,像素连续存储,行优先,采用 BGR888 格式存放彩色图像,单字节灰度值存放灰度图像。
+
+### **2.2 struct SeetaPointF**
+ +|名称 | 类型 | 说明| +|---|---|---| +|x|double|人脸特征点横坐标| +|y|double|人脸特征点纵坐标| + +## 3 class EyeStateDetector +眼睛状态检测器。 + +### 3.1 Enum SeetaDevice + +模型运行的计算设备。 + +|名称 |说明| +|---|---| +|SEETA_DEVICE_AUTO|自动检测,会优先使用 GPU| +|SEETA_DEVICE_CPU|使用CPU计算| +|SEETA_DEVICE_GPU|使用GPU计算| + +### 3.2 struct SeetaModelSetting + +构造眼睛状态检测器需要传入的结构体参数。 + +|参数 | 类型 |缺省值|说明| +|---|---|---|---| +|model|const char**| |检测器模型| +|id|int| |GPU id| +|device|SeetaDevice|AUTO |计算设备(CPU 或者 GPU)| + +### 3.3 构造函数 +#### EyeStateDetector + +|参数 | 类型 |缺省值|说明| +|---|---|---|---| +|setting|const SeetaModelSetting&| |检测器结构参数| + +### 3.4 成员函数 + +#### Detect +输入原始图像数据和人脸特征点,返回左眼和右眼的状态。 + +|参数 | 类型 |缺省值|说明| +|---|---|---|---| +|image|const SeetaImageData&| |原始图像数据| +|points|const SeetaPointF*| |人脸特征点数组| +|leftState|EYE_STATE| |返回的左眼状态| +|rightState|EYE_STATE| |返回的右眼状态| +说明:EYE_STATE可取值为EYE_CLOSE(闭眼)、EYE_OPEN(睁眼)、EYE_RANDOM(非眼部区域)和EYE_UNKNOWN(未知状态)。 + +#### set +设置相关属性值。其中
+**PROPERTY_NUMBER_THREADS**:
+表示计算线程数,默认为 4。
+
+|参数 | 类型 |缺省值|说明|
+|---|---|---|---|
+|property|Property||属性类别|
+|value|double||设置的属性值|
+|返回值|void| | |
+
+#### get
+获取相关属性值。
+
+|参数 | 类型 |缺省值|说明|
+|---|---|---|---|
+|property|Property||属性类别|
+|返回值|double||对应的属性值|
diff --git a/docs/质量评估器.md b/docs/质量评估器.md
new file mode 100644
index 0000000..63dadce
--- /dev/null
+++ b/docs/质量评估器.md
@@ -0,0 +1,356 @@
+# 质量评估器
+
+## **1. 接口简介**
+ +质量评估器包含不同的质量评估模块,包括人脸亮度、人脸清晰度(非深度方法)、 +人脸清晰度(深度方法)、人脸姿态(非深度方法)、人脸姿态(深度方法)、人脸分辨率和人脸完整度评估模块。
+ +## **2. 类型说明**
+ +### **2.1 struct SeetaImageData**
+
+|名称 | 类型 | 说明|
+|---|---|---|
+|data|uint8_t* |图像数据|
+|width | int32_t | 图像的宽度|
+|height | int32_t | 图像的高度|
+|channels | int32_t | 图像的通道数|
+说明:存储彩色(三通道)或灰度(单通道)图像,像素连续存储,行优先,采用 BGR888 格式存放彩色图像,单字节灰度值存放灰度图像。
+
+### **2.2 struct SeetaRect**
+ +|名称 | 类型 | 说明| +|---|---|---| +|x|int32_t |人脸区域左上角横坐标| +|y| int32_t | 人脸区域左上角纵坐标| +|width| int32_t | 人脸区域宽度| +|height| int32_t | 人脸区域高度| + +### **2.3 struct SeetaPointF**
+
+|名称 | 类型 | 说明|
+|---|---|---|
+|x|double|人脸特征点横坐标|
+|y|double|人脸特征点纵坐标|
+
+### 2.4 enum QualityLevel
+
+|名称 | 类型 | 说明|
+|---|---|---|
+|LOW| |表示人脸质量为低|
+|MEDIUM| |表示人脸质量为中|
+|HIGH| |表示人脸质量为高|
+
+### 2.5 class QualityResult
+
+|名称 | 类型 | 说明|
+|---|---|---|
+|level|QualityLevel|人脸质量等级|
+|score|float|人脸质量分数|
+
+## 3 class QualityOfBrightness
+非深度的人脸亮度评估器。
+
+### 3.1 构造函数
+
+#### QualityOfBrightness
+人脸亮度评估器构造函数。
+
+|参数 | 类型 |缺省值|说明|
+|---|---|---|---|
+|void|| ||
+
+#### QualityOfBrightness
+人脸亮度评估器构造函数。
+
+|参数 | 类型 |缺省值|说明|
+|---|---|---|---|
+|v0|float| |分级参数一|
+|v1|float| |分级参数二|
+|v2|float| |分级参数三|
+|v3|float| |分级参数四|
+说明:分类依据为 [0, v0) and [v3, ~) => LOW;[v0, v1) and [v2, v3) => MEDIUM;[v1, v2) => HIGH。
+
+### 3.2 成员函数
+
+#### check
+检测人脸亮度。
+
+|参数 | 类型 |缺省值|说明|
+|---|---|---|---|
+|image|const SeetaImageData&| |原始图像数据|
+|face|const SeetaRect&| |人脸位置|
+|points|const SeetaPointF*| |人脸5个特征点数组|
+|N|const int32_t| |人脸特征点数组长度|
+|返回值|QualityResult| |人脸亮度检测结果|
+
+## 4 class QualityOfClarity
+非深度学习的人脸清晰度评估器。
+
+### 4.1 构造函数
+
+#### QualityOfClarity
+人脸清晰度评估器构造函数。
+
+|参数 | 类型 |缺省值|说明|
+|---|---|---|---|
+|void|| ||
+
+#### QualityOfClarity
+人脸清晰度评估器构造函数。
+
+|参数 | 类型 |缺省值|说明|
+|---|---|---|---|
+|low|float| |分级参数一|
+|high|float| |分级参数二|
+说明:分类依据为 [0, low) => LOW;[low, high) => MEDIUM;[high, ~) => HIGH。
+
+### 4.2 成员函数
+
+#### check
+检测人脸清晰度。
+
+|参数 | 类型 |缺省值|说明|
+|---|---|---|---|
+|image|const SeetaImageData&| |原始图像数据|
+|face|const SeetaRect&| |人脸位置|
+|points|const SeetaPointF*| |人脸5个特征点数组|
+|N|const int32_t| |人脸特征点数组长度|
+|返回值|QualityResult| |人脸清晰度检测结果|
+
+## 5 class QualityOfLBN
+深度学习的人脸清晰度评估器。
+
+### 5.1 Enum SeetaDevice
+
+模型运行的计算设备。
+
+|名称 |说明|
+|---|---|
+|SEETA_DEVICE_AUTO|自动检测,会优先使用 GPU|
+|SEETA_DEVICE_CPU|使用CPU计算|
+|SEETA_DEVICE_GPU|使用GPU计算|
+
+### 5.2 struct SeetaModelSetting
+
+构造评估器需要传入的结构体参数。
+
+|参数 | 类型 |缺省值|说明|
+|---|---|---|---|
+|model|const char**| |评估器模型|
+|id|int| |GPU id|
+|device|SeetaDevice|AUTO |计算设备(CPU 或者 GPU)|
+
+### 5.3 构造函数
+
+#### QualityOfLBN
+人脸清晰度评估器构造函数。
+|参数 | 类型 |缺省值|说明| +|---|---|---|---| +|setting|const SeetaModelSetting&| |对象构造结构体参数| + +### 5.4 成员函数 + +#### Detect +检测人脸清晰度。 + +|参数 | 类型 |缺省值|说明| +|---|---|---|---| +|image|const SeetaImageData&| |原始图像数据| +|points|const SeetaPointF*| |人脸68个特征点数组| +|light|int*| |亮度返回结果,暂不推荐使用该返回结果| +|blur|int*| |模糊度返回结果| +|noise|int*| |是否有噪声返回结果,暂不推荐使用该返回结果| +|返回值|void| || +说明:blur 结果返回 0 说明人脸是清晰的,blur 为 1 说明人脸是模糊的。 + +#### set +设置相关属性值。其中
+ +**PROPERTY_NUMBER_THREADS**: +表示计算线程数,默认为 4。
+**PROPERTY_ARM_CPU_MODE**:针对移动端,表示设置的 CPU 计算模式。0 表示大核计算模式,1 表示小核计算模式,2 表示平衡模式(默认模式)。
+**PROPERTY_BLUR_THRESH**:表示人脸模糊阈值,默认值大小为 0.80。 + +|参数 | 类型 |缺省值|说明| +|---|---|---|---| +|property|Property||属性类别| +|value|double||设置的属性值| +|返回值|void| | | | + +#### get +获取相关属性值。 + +|参数 | 类型 |缺省值|说明| +|---|---|---|---| +|property|Property||属性类别| +|返回值|double||对应的属性值| + +## 6 class QualityOfPose +非深度学习的人脸姿态评估器。 + +### 6.1 构造函数 + +#### QualityOfPose +人脸姿态评估器构造函数。 + +|参数 | 类型 |缺省值|说明| +|---|---|---|---| +|void|| || + +### 6.2 成员函数 + +#### check +检测人脸姿态。 + +|参数 | 类型 |缺省值|说明| +|---|---|---|---| +|image|const SeetaImageData&| |原始图像数据| +|face|const SeetaRect&| |人脸位置| +|points|const SeetaPointF*| |人脸5个特征点数组| +|N|const int32_t| |人脸特征点数组长度| +|返回值|QualityResult| |人脸姿态检测结果| + +## 7 class QualityOfPoseEx +深度学习的人脸姿态评估器。 + +### 7.1 Enum SeetaDevice + +模型运行的计算设备。 + +|名称 |说明| +|---|---| +|SEETA_DEVICE_AUTO|自动检测,会优先使用 GPU| +|SEETA_DEVICE_CPU|使用CPU计算| +|SEETA_DEVICE_GPU|使用GPU计算| + +### 7.2 struct SeetaModelSetting + +构造评估器需要传入的结构体参数。 + +|参数 | 类型 |缺省值|说明| +|---|---|---|---| +|model|const char**| |评估器模型| +|id|int| |GPU id| +|device|SeetaDevice|AUTO |计算设备(CPU 或者 GPU)| + +### 7.3 构造函数 + +#### QualityOfPoseEx +人脸姿态评估器构造函数。 + +|参数 | 类型 |缺省值|说明| +|---|---|---|---| +|setting|const SeetaModelSetting&| |对象结构体参数| + +### 7.4 成员函数 + +#### check +检测人脸姿态。 + +|参数 | 类型 |缺省值|说明| +|---|---|---|---| +|image|const SeetaImageData&| |原始图像数据| +|face|const SeetaRect&| |人脸位置| +|points|const SeetaPointF*| |人脸5个特征点数组| +|N|const int32_t| |人脸特征点数组长度| +|返回值|QualityResult| |人脸姿态检测结果| + + +#### set +设置相关属性值。其中
+**YAW_HIGH_THRESHOLD**: +yaw方向的分级参数一。
+**YAW_LOW_THRESHOLD**: +yaw方向的分级参数二。
+**PITCH_HIGH_THRESHOLD**: +pitch方向的分级参数一。
+**PITCH_LOW_THRESHOLD**: +pitch方向的分级参数二。
+**ROLL_HIGH_THRESHOLD**: +roll方向的分级参数一。
+**ROLL_LOW_THRESHOLD**: +roll方向的分级参数二。
+
+|参数 | 类型 |缺省值|说明|
+|---|---|---|---|
+|property|Property||属性类别|
+|value|double||设置的属性值|
+|返回值|void| | |
+
+#### get
+获取相关属性值。
+
+|参数 | 类型 |缺省值|说明|
+|---|---|---|---|
+|property|Property||属性类别|
+|返回值|double||对应的属性值|
+
+## 8 class QualityOfResolution
+非深度学习的人脸尺寸评估器。
+
+### 8.1 构造函数
+
+#### QualityOfResolution
+人脸尺寸评估器构造函数。
+
+|参数 | 类型 |缺省值|说明|
+|---|---|---|---|
+|void|| ||
+
+#### QualityOfResolution
+人脸尺寸评估器构造函数。
+
+|参数 | 类型 |缺省值|说明|
+|---|---|---|---|
+|low|float| |分级参数一|
+|high|float| |分级参数二|
+
+### 8.2 成员函数
+
+#### check
+评估人脸尺寸。
+
+|参数 | 类型 |缺省值|说明|
+|---|---|---|---|
+|image|const SeetaImageData&| |原始图像数据|
+|face|const SeetaRect&| |人脸位置|
+|points|const SeetaPointF*| |人脸5个特征点数组|
+|N|const int32_t| |人脸特征点数组长度|
+|返回值|QualityResult| |人脸尺寸评估结果|
+
+## 9 class QualityOfIntegrity
+非深度学习的人脸完整度评估器,评估人脸靠近图像边缘的程度。
+
+### 9.1 构造函数
+
+#### QualityOfIntegrity
+人脸完整度评估器构造函数。
+
+|参数 | 类型 |缺省值|说明|
+|---|---|---|---|
+|void|| ||
+
+#### QualityOfIntegrity
+人脸完整度评估器构造函数。
+
+|参数 | 类型 |缺省值|说明|
+|---|---|---|---|
+|low|float| |分级参数一|
+|high|float| |分级参数二|
+
+说明:low 和 high 主要用来控制人脸位置靠近图像边缘的接受程度。
+
+### 9.2 成员函数
+
+#### check
+评估人脸完整度。
+
+|参数 | 类型 |缺省值|说明|
+|---|---|---|---|
+|image|const SeetaImageData&| |原始图像数据|
+|face|const SeetaRect&| |人脸位置|
+|points|const SeetaPointF*| |人脸5个特征点数组|
+|N|const int32_t| |人脸特征点数组长度|
+|返回值|QualityResult| |人脸完整度评估结果|
\ No newline at end of file
diff --git a/docs/静默活体.md b/docs/静默活体.md
new file mode 100644
index 0000000..a781020
--- /dev/null
+++ b/docs/静默活体.md
@@ -0,0 +1,159 @@
+# 静默活体识别器
+
+## **1. 接口简介**
+ +静默活体识别根据输入的图像数据、人脸位置和人脸特征点,对输入人脸进行活体的判断,并返回人脸活体的状态。
+ +## **2. 类型说明**
+ +### **2.1 struct SeetaImageData**
+
+|名称 | 类型 | 说明|
+|---|---|---|
+|data|uint8_t* |图像数据|
+|width | int32_t | 图像的宽度|
+|height | int32_t | 图像的高度|
+|channels | int32_t | 图像的通道数|
+说明:存储彩色(三通道)或灰度(单通道)图像,像素连续存储,行优先,采用 BGR888 格式存放彩色图像,单字节灰度值存放灰度图像。
+
+### **2.2 struct SeetaRect**
+ +|名称 | 类型 | 说明| +|---|---|---| +|x|int32_t |人脸区域左上角横坐标| +|y| int32_t | 人脸区域左上角纵坐标| +|width| int32_t | 人脸区域宽度| +|height| int32_t | 人脸区域高度| + +### **2.3 struct SeetaPointF**
+
+|名称 | 类型 | 说明|
+|---|---|---|
+|x|double|人脸特征点横坐标|
+|y|double|人脸特征点纵坐标|
+
+## 3 class FaceAntiSpoofing
+活体识别器。
+
+### 3.1 Enum SeetaDevice
+
+模型运行的计算设备。
+
+|名称 |说明|
+|---|---|
+|SEETA_DEVICE_AUTO|自动检测,会优先使用 GPU|
+|SEETA_DEVICE_CPU|使用CPU计算|
+|SEETA_DEVICE_GPU|使用GPU计算|
+
+### 3.2 struct SeetaModelSetting
+
+构造活体识别器需要传入的结构体参数。
+
+|参数 | 类型 |缺省值|说明|
+|---|---|---|---|
+|model|const char**| |识别器模型|
+|id|int| |GPU id|
+|device|SeetaDevice|AUTO |计算设备(CPU 或者 GPU)|
+
+### 3.3 构造函数
+
+#### FaceAntiSpoofing
+构造活体识别器,需要在构造的时候传入识别器结构参数。
+
+|参数 | 类型 |缺省值|说明|
+|---|---|---|---|
+|setting|const SeetaModelSetting&| |识别器接口参数|
+说明:创建活体识别对象时,可以传入一个模型文件(局部活体模型),也可以传入两个模型文件(局部活体模型和全局活体模型,顺序不可颠倒)。只传入一个模型文件时,活体识别速度快于传入两个模型文件,但识别精度也低于传入两个模型文件。
+
+### 3.4 成员函数
+
+#### Predict
+基于单帧图像对人脸是否为活体进行判断。
+
+|参数 | 类型 |缺省值|说明|
+|---|---|---|---|
+|image|const SeetaImageData&| |原始图像数据|
+|face|const SeetaRect&| |人脸位置|
+|points|const SeetaPointF*| |人脸特征点数组|
+|返回值|Status| |人脸活体的状态|
+说明:Status 活体状态可取值为 REAL(真人)、SPOOF(假体)、FUZZY(由于图像质量问题造成的无法判断)和 DETECTING(正在检测),DETECTING 状态针对 PredictVideo 模式。
+
+#### PredictVideo
+基于连续视频序列对人脸是否为活体进行判断。
+
+|参数 | 类型 |缺省值|说明|
+|---|---|---|---|
+|image|const SeetaImageData&| |原始图像数据|
+|face|const SeetaRect&| |人脸位置|
+|points|const SeetaPointF*| |人脸特征点数组|
+|返回值|Status| |人脸活体的状态|
+说明:Status 活体状态可取值为 REAL(真人)、SPOOF(假体)、FUZZY(由于图像质量问题造成的无法判断)和 DETECTING(正在检测),DETECTING 状态针对 PredictVideo 模式。
+
+#### ResetVideo
+重置活体识别结果,开始下一次 PredictVideo 识别过程。
+
+|参数 | 类型 |缺省值|说明|
+|---|---|---|---|
+|返回值|void| ||
+
+#### GetPreFrameScore
+获取活体检测内部分数。
+
+|参数 | 类型 |缺省值|说明|
+|---|---|---|---|
+|clarity|float*| |人脸清晰度分数|
+|reality|float*| |人脸活体分数|
+|返回值|void| ||
+
+#### SetVideoFrameCount
+设置 Video 模式中识别视频帧数,当输入帧数达到该值以后才会有活体的真假结果。
+
+|参数 | 类型 |缺省值|说明|
+|---|---|---|---|
+|number|int32_t| |video模式下活体需求帧数|
+|返回值|void| ||
+
+#### GetVideoFrameCount
+获取 Video 模式下活体需求帧数。
+
+|参数 | 类型 |缺省值|说明|
+|---|---|---|---|
+|返回值|int| ||
+
+#### SetThreshold
+设置阈值。
+
+|参数 | 类型 |缺省值|说明|
+|---|---|---|---|
+|clarity|float| |face clarity threshold| +|reality|float| |face liveness threshold| +|return value|void| || +Note: the face clarity threshold defaults to 0.3 and the face liveness threshold defaults to 0.8. + +#### GetThreshold +Gets the thresholds. + +|Parameter | Type |Default|Description| +|---|---|---|---| +|clarity|float*| |face clarity threshold| +|reality|float*| |face liveness threshold| + +#### set +Sets a property value, where
+**PROPERTY_NUMBER_THREADS**: +the number of computation threads, 4 by default. + +|Parameter | Type |Default|Description| +|---|---|---|---| +|property|Property||property key| +|value|double||value to set| +|return value|void| | | + +#### get +Gets a property value. + +|Parameter | Type |Default|Description| +|---|---|---|---| +|property|Property||property key| +|return value|double||the property's current value| \ No newline at end of file diff --git a/example/qt/README.md b/example/qt/README.md new file mode 100644 index 0000000..73c18d5 --- /dev/null +++ b/example/qt/README.md @@ -0,0 +1,66 @@ +SeetaFaceDemo depends on OpenCV 4 (or OpenCV 3) and the SeetaTech.com SF3.0 libraries. + +Open seetaface_demo.pro and modify the INCLUDEPATH and LIBS parameters: +add the OpenCV and SF3.0 header file paths to INCLUDEPATH, and +add the OpenCV and SF3.0 libraries to LIBS. + + +After modifying and saving seetaface_demo.pro, you must re-run qmake. + +example: + +LINUX: + + +``` +INCLUDEPATH += /wqy/tools/opencv4_home/include/opencv4 \ + /wqy/seeta_sdk/SF3/libs/SF3.0_v1/include + + + +LIBS += -L/wqy/tools/opencv4_home/lib -lopencv_core -lopencv_imgproc -lopencv_highgui -lopencv_videoio -lopencv_imgcodecs \ + -L/wqy/seeta_sdk/SF3/libs/SF3.0_v1/lib64 -lSeetaFaceDetector600 -lSeetaFaceLandmarker600 \ + -lSeetaFaceAntiSpoofingX600 -lSeetaFaceTracking600 -lSeetaFaceRecognizer610 \ + -lSeetaQualityAssessor300 -lSeetaPoseEstimation600 -lSeetaAuthorize -ltennis +``` + +WINDOWS: + +1. Install VS2015. +2. Install Qt 5.9. 
note: when selecting the components to install, check msvc2015 64-bit; + after installation, confirm that the compile tool and build kit are msvc2015 64-bit + + + +3. Configure the parameters: + + SF3.0_ROOT = C:/study/SF3.0/sf3.0_windows/sf3.0_windows + OPENCV_ROOT = C:/thirdparty/opencv4.2/build +``` +INCLUDEPATH += C:/thirdparty/opencv4.2/build/include \ + C:/study/SF3.0/sf3.0_windows/sf3.0_windows/include + +CONFIG(debug, debug|release) { + +LIBS += -LC:/thirdparty/opencv4.2/build/x64/vc14/lib -lopencv_world420d \ + -LC:/study/SF3.0/sf3.0_windows/sf3.0_windows/lib/x64 -lSeetaFaceDetector600d -lSeetaFaceLandmarker600d \ + -lSeetaFaceAntiSpoofingX600d -lSeetaFaceTracking600d -lSeetaFaceRecognizer610d \ + -lSeetaQualityAssessor300d -lSeetaPoseEstimation600d + +} else { + +LIBS += -LC:/thirdparty/opencv4.2/build/x64/vc14/lib -lopencv_world420 \ + -LC:/study/SF3.0/sf3.0_windows/sf3.0_windows/lib/x64 -lSeetaFaceDetector600 -lSeetaFaceLandmarker600 \ + -lSeetaFaceAntiSpoofingX600 -lSeetaFaceTracking600 -lSeetaFaceRecognizer610 \ + -lSeetaQualityAssessor300 -lSeetaPoseEstimation600 + +} +``` + +Note: + + + +Before running seetaface_demo, please download the SF3.0 models and save them into seetaface_demo's models directory. 
+Then copy opencv_world420d.dll and all of the DLL files from the SF3.0 lib directory into the seetaface_demo directory diff --git a/example/qt/seetaface_demo/default.png b/example/qt/seetaface_demo/default.png new file mode 100644 index 0000000..4c2914d Binary files /dev/null and b/example/qt/seetaface_demo/default.png differ diff --git a/example/qt/seetaface_demo/face_resource.qrc b/example/qt/seetaface_demo/face_resource.qrc new file mode 100644 index 0000000..6889d98 --- /dev/null +++ b/example/qt/seetaface_demo/face_resource.qrc @@ -0,0 +1,7 @@ +<RCC> +    <qresource prefix="/new/prefix1"> +        <file>default.png</file> +        <file>white.png</file> +        <file>seetatech_logo.png</file> +    </qresource> +</RCC> diff --git a/example/qt/seetaface_demo/inputfilesprocessdialog.cpp b/example/qt/seetaface_demo/inputfilesprocessdialog.cpp new file mode 100644 index 0000000..4f568fa --- /dev/null +++ b/example/qt/seetaface_demo/inputfilesprocessdialog.cpp @@ -0,0 +1,101 @@ +#include <QLabel> +#include <QProgressBar> +#include <QPushButton> +#include <QVBoxLayout> +#include <QCloseEvent> +#include <QDebug> +#include "inputfilesprocessdialog.h" + +#include "videocapturethread.h" + + +InputFilesProcessDlg::InputFilesProcessDlg(QWidget *parent, InputFilesThread * thread) + : QDialog(parent) +{ + m_exited = false; + workthread = thread; + qDebug() << "------------dlg input----------------"; + // create the child widgets + // tr() marks a string for translation into other languages + // the letter after & is the widget's shortcut key, e.g. Alt+W activates a "Find &what" widget + label = new QLabel("", this); + + progressbar = new QProgressBar(this); + progressbar->setOrientation(Qt::Horizontal); + progressbar->setMinimum(0); + progressbar->setMaximum(100); + progressbar->setValue(5); + progressbar->setFormat(tr("current progress:%1%").arg(QString::number(5, 'f',1))); + progressbar->setAlignment(Qt::AlignLeft| Qt::AlignVCenter); + + cancelButton = new QPushButton(tr("&Cancel")); + cancelButton->setEnabled(true); + + //closeButton = new QPushButton(tr("&Close")); + + + // connect signals and slots + //connect(edit1, SIGNAL(textChanged()), this, SLOT(enableOkButton())); + //connect(okButton, SIGNAL(clicked()), this, SLOT(okClicked())); + //connect(closeButton, SIGNAL(clicked()), this,
SLOT(close())); + connect(workthread, SIGNAL(sigprogress(float)), this, SLOT(setprogressvalue(float))); + connect(workthread, SIGNAL(sigInputFilesEnd()), this, SLOT(setinputfileend())); + connect(cancelButton, SIGNAL(clicked()), this, SLOT(cancelClicked())); // hook up the Cancel button to its slot + + + QHBoxLayout *bottomLayout = new QHBoxLayout; + bottomLayout->addStretch(); + bottomLayout->addWidget(cancelButton); + //bottomLayout->addWidget(closeButton); + bottomLayout->addStretch(); + + QVBoxLayout *mainLayout = new QVBoxLayout; + mainLayout->addWidget(label); + mainLayout->addWidget(progressbar); + mainLayout->addStretch(); + mainLayout->addLayout(bottomLayout); + + this->setLayout(mainLayout); + + setWindowTitle(tr("Input Files Progress")); + + //cancelButton->setEnabled(true); + setFixedSize(400,160); +} + +void InputFilesProcessDlg::closeEvent(QCloseEvent *event) +{ + if(!m_exited) + { + workthread->m_exited = true; + event->ignore(); + }else + { + event->accept(); + } + +} + +void InputFilesProcessDlg::cancelClicked() +{ + workthread->m_exited = true; +} + + +InputFilesProcessDlg::~InputFilesProcessDlg() +{ + +} +void InputFilesProcessDlg::setinputfileend() +{ + hide(); + m_exited = true; + close(); +} + + +void InputFilesProcessDlg::setprogressvalue(float value) +{ + QString str = QString("%1%").arg(QString::number(value, 'f',1)); + progressbar->setValue(value); + progressbar->setFormat(str); +} diff --git a/example/qt/seetaface_demo/inputfilesprocessdialog.h b/example/qt/seetaface_demo/inputfilesprocessdialog.h new file mode 100644 index 0000000..49be5d2 --- /dev/null +++ b/example/qt/seetaface_demo/inputfilesprocessdialog.h @@ -0,0 +1,48 @@ +#ifndef INPUTFILESPROCESSDIALOG_H +#define INPUTFILESPROCESSDIALOG_H + + +#include <QDialog> + + +class QLabel; +class QProgressBar; +class QPushButton; +class InputFilesThread; + +class InputFilesProcessDlg :public QDialog{ + + // Q_OBJECT is required when a dialog class declares its own signals and slots + Q_OBJECT +public: + // constructor and destructor + InputFilesProcessDlg(QWidget *parent, InputFilesThread * thread); + ~InputFilesProcessDlg(); +protected: + void closeEvent(QCloseEvent
*event); + + // the signals and slots this dialog needs are declared below +signals: + // functions under signals: need no implementation in this class; they describe the notifications this object can emit + +// internal slots, declared private +private slots: + void cancelClicked(); + void setprogressvalue(float value); + void setinputfileend(); +// widgets owned by this dialog +private: + QLabel *label; + + QProgressBar *progressbar; + //QLabel *label2; + + QPushButton *cancelButton;//, *closeButton; + + InputFilesThread * workthread; + bool m_exited; +}; + + + +#endif // INPUTFILESPROCESSDIALOG_H diff --git a/example/qt/seetaface_demo/main.cpp b/example/qt/seetaface_demo/main.cpp new file mode 100644 index 0000000..cf52393 --- /dev/null +++ b/example/qt/seetaface_demo/main.cpp @@ -0,0 +1,22 @@ +#include "mainwindow.h" +#include <QApplication> +#include <QDebug> +#include <QIcon> + + +int main(int argc, char *argv[]) +{ + QApplication a(argc, argv); + + + //QTextCodec::setCodecForCStrings(QTextCodec::codecForName("GBK")); + //QTextCodec::setCodecForCStrings(QTextCodec::codecForName("UTF-8")) + MainWindow w; + w.setWindowTitle("SeetaFace Demo"); + w.setWindowIcon(QIcon(":/new/prefix1/seetatech_logo.png")); + w.show(); + + QString str("乱码"); + + qDebug() << str; + return a.exec(); +} diff --git a/example/qt/seetaface_demo/mainwindow.cpp b/example/qt/seetaface_demo/mainwindow.cpp new file mode 100644 index 0000000..df99e3e --- /dev/null +++ b/example/qt/seetaface_demo/mainwindow.cpp @@ -0,0 +1,1310 @@ +#include "mainwindow.h" +#include "ui_mainwindow.h" + +#include "QDir" +#include "QFileDialog" +#include "QDebug" + +#include "qsqlquery.h" +#include "qmessagebox.h" +#include "qsqlerror.h" + +#include "qitemselectionmodel.h" +#include <QStandardItemModel> + +//#include "faceinputdialog.h" + +#include "inputfilesprocessdialog.h" +#include "resetmodelprocessdialog.h" + +#include +#include +#include +#include + +//#include "Common/CStruct.h" +#include <chrono> +using namespace std::chrono; + + + +////////////////////////////////// + + +const QString gcrop_prefix("crop_"); +Config_Paramter gparamters; +std::string gmodelpath; + +///////////////////////////////////// 
+MainWindow::MainWindow(QWidget *parent) : + QMainWindow(parent), + ui(new Ui::MainWindow) +{ + m_currenttab = -1; + ui->setupUi(this); + + + QIntValidator * vfdminfacesize = new QIntValidator(20, 1000); + ui->fdminfacesize->setValidator(vfdminfacesize); + + QDoubleValidator *vfdthreshold = new QDoubleValidator(0.0,1.0, 2); + ui->fdthreshold->setValidator(vfdthreshold); + + QDoubleValidator *vantispoofclarity = new QDoubleValidator(0.0,1.0, 2); + ui->antispoofclarity->setValidator(vantispoofclarity); + + QDoubleValidator *vantispoofreality = new QDoubleValidator(0.0,1.0, 2); + ui->antispoofreality->setValidator(vantispoofreality); + + QDoubleValidator *vyawhigh = new QDoubleValidator(0.0,90, 2); + ui->yawhighthreshold->setValidator(vyawhigh); + + QDoubleValidator *vyawlow = new QDoubleValidator(0.0,90, 2); + ui->yawlowthreshold->setValidator(vyawlow); + + QDoubleValidator *vpitchlow = new QDoubleValidator(0.0,90, 2); + ui->pitchlowthreshold->setValidator(vpitchlow); + + QDoubleValidator *vpitchhigh = new QDoubleValidator(0.0,90, 2); + ui->pitchhighthreshold->setValidator(vpitchhigh); + + QDoubleValidator *vfrthreshold = new QDoubleValidator(0.0,1.0, 2); + ui->fr_threshold->setValidator(vfrthreshold); + + gparamters.MinFaceSize = 100; + gparamters.Fd_Threshold = 0.80; + gparamters.VideoWidth = 400; + gparamters.VideoHeight = 400; + gparamters.AntiSpoofClarity = 0.30; + gparamters.AntiSpoofReality = 0.80; + gparamters.PitchLowThreshold = 20; + gparamters.PitchHighThreshold = 10; + gparamters.YawLowThreshold = 20; + gparamters.YawHighThreshold = 10; + gparamters.Fr_Threshold = 0.6; + gparamters.Fr_ModelPath = "face_recognizer.csta"; + + m_type.type = 0; + m_type.filename = ""; + m_type.title = "Open Camera 0"; + + ui->recognize_label->setText(m_type.title); + + int width = this->width(); + int height = this->height(); + this->setFixedSize(width, height); + + ui->db_editpicture->setStyleSheet("border-image:url(:/new/prefix1/default.png)"); + 
ui->db_editcrop->setStyleSheet("border-image:url(:/new/prefix1/default.png)"); + + ///////////////////////// + + + m_database = QSqlDatabase::addDatabase("QSQLITE"); + QString exepath = QCoreApplication::applicationDirPath(); + QString strdb = exepath + /*QDir::separator()*/ + "/seetaface_demo.db"; + + m_image_tmp_path = exepath + /*QDir::separator()*/ + "/tmp/";// + QDir::separator(); + m_image_path = exepath + /*QDir::separator()*/ + "/images/";// + QDir::separator(); + //m_model_path = exepath + /*QDir::separator()*/ + "/models/";// + QDir::separator(); + gmodelpath = (exepath + /*QDir::separator()*/ + "/models/"/* + QDir::separator()*/).toStdString(); + + QDir dir; + dir.mkpath(m_image_tmp_path); + dir.mkpath(m_image_path); + + m_database.setDatabaseName(strdb); + + if(!m_database.open()) + { + QMessageBox::critical(NULL, "critical", tr("open database failed, exited!"), QMessageBox::Yes); + exit(-1); + } + + QStringList tables = m_database.tables(); + m_table = "face_tab"; + m_config_table = "setting_tab";//"paramter_tab"; + + + + bool bfind = false; + bool bconfigfind = false; + int i =0; + for( i=0; ifdminfacesize->setText(QString::number(gparamters.MinFaceSize)); + ui->fdthreshold->setText(QString::number(gparamters.Fd_Threshold)); + ui->antispoofclarity->setText(QString::number(gparamters.AntiSpoofClarity)); + ui->antispoofreality->setText(QString::number(gparamters.AntiSpoofReality)); + ui->yawlowthreshold->setText(QString::number(gparamters.YawLowThreshold)); + ui->yawhighthreshold->setText(QString::number(gparamters.YawHighThreshold)); + ui->pitchlowthreshold->setText(QString::number(gparamters.PitchLowThreshold)); + ui->pitchhighthreshold->setText(QString::number(gparamters.PitchHighThreshold)); + ui->fr_threshold->setText(QString::number(gparamters.Fr_Threshold)); + ui->fr_modelpath->setText(gparamters.Fr_ModelPath); + qDebug() << "create config table ok!"; + + } + + ui->dbtableview->setSelectionBehavior(QAbstractItemView::SelectRows); + 
ui->dbtableview->setEditTriggers(QAbstractItemView::NoEditTriggers); + ui->dbtableview->verticalHeader()->setDefaultSectionSize(80); + ui->dbtableview->verticalHeader()->hide(); + + connect(ui->dbtableview, SIGNAL(clicked(QModelIndex)), this, SLOT(showfaceinfo())); + + m_model = new QStandardItemModel(this); + QStringList columsTitles; + columsTitles << "ID" << "Name" << "Image" << /*"edit" << */" "; + m_model->setHorizontalHeaderLabels(columsTitles); + ui->dbtableview->setModel(m_model); + ui->dbtableview->setColumnWidth(0, 120); + ui->dbtableview->setColumnWidth(1, 200); + ui->dbtableview->setColumnWidth(2, 104); + ui->dbtableview->setColumnWidth(3, 100); + //ui->dbtableview->setColumnWidth(4, 100); + getdatas(); + /// /////////////////////////// + + gparamters.VideoWidth = ui->previewlabel->width(); + gparamters.VideoHeight = ui->previewlabel->height(); + + if(bconfigfind) + { + //fd_minfacesize, fd_threshold, antispoof_clarity, antispoof_reality, qa_yawlow, qa_yawhigh, qa_pitchlow, qa_pitchhigh + QSqlQuery q("select * from " + m_config_table); + while(q.next()) + { + gparamters.MinFaceSize = q.value("fd_minfacesize").toInt(); + ui->fdminfacesize->setText(QString::number(q.value("fd_minfacesize").toInt())); + + gparamters.Fd_Threshold = q.value("fd_threshold").toFloat(); + ui->fdthreshold->setText(QString::number(q.value("fd_threshold").toFloat())); + + gparamters.AntiSpoofClarity = q.value("antispoof_clarity").toFloat(); + ui->antispoofclarity->setText(QString::number(q.value("antispoof_clarity").toFloat())); + + gparamters.AntiSpoofReality = q.value("antispoof_reality").toFloat(); + ui->antispoofreality->setText(QString::number(q.value("antispoof_reality").toFloat())); + + gparamters.YawLowThreshold = q.value("qa_yawlow").toFloat(); + ui->yawlowthreshold ->setText(QString::number(q.value("qa_yawlow").toFloat())); + + gparamters.YawHighThreshold = q.value("qa_yawhigh").toFloat(); + ui->yawhighthreshold 
->setText(QString::number(q.value("qa_yawhigh").toFloat())); + + gparamters.PitchLowThreshold = q.value("qa_pitchlow").toFloat(); + ui->pitchlowthreshold ->setText(QString::number(q.value("qa_pitchlow").toFloat())); + + gparamters.PitchHighThreshold = q.value("qa_pitchhigh").toFloat(); + ui->pitchhighthreshold ->setText(QString::number(q.value("qa_pitchhigh").toFloat())); + + gparamters.Fr_Threshold = q.value("fr_threshold").toFloat(); + gparamters.Fr_ModelPath = q.value("fr_modelpath").toString(); + + ui->fr_threshold->setText(QString::number(gparamters.Fr_Threshold)); + ui->fr_modelpath->setText(gparamters.Fr_ModelPath); + + } + + } + + + //////////////////////////// + ui->previewtableview->setSelectionBehavior(QAbstractItemView::SelectRows); + ui->previewtableview->setEditTriggers(QAbstractItemView::NoEditTriggers); + ui->previewtableview->verticalHeader()->setDefaultSectionSize(80); + ui->previewtableview->verticalHeader()->hide(); + + //connect(ui->tableView, SIGNAL(clicked(QModelIndex)), this, SLOT(showfaceinfo())); + + m_videomodel = new QStandardItemModel(this); + columsTitles.clear(); + columsTitles << "Name" << "Score" << "Gallery" << "Snapshot" << "PID"; + m_videomodel->setHorizontalHeaderLabels(columsTitles); + ui->previewtableview->setModel(m_videomodel); + ui->previewtableview->setColumnWidth(0, 140); + ui->previewtableview->setColumnWidth(1, 80); + ui->previewtableview->setColumnWidth(2, 84); + ui->previewtableview->setColumnWidth(3, 84); + ui->previewtableview->setColumnWidth(4, 2); + ui->previewtableview->hideColumn(4); + + ///////////////////////// + m_videothread = new VideoCaptureThread(&m_datalst, ui->previewlabel->width(), ui->previewlabel->height()); + m_videothread->setparamter(); + //m_videothread->setMinFaceSize(ui->fdminfacesize->text().toInt()); + connect(m_videothread, SIGNAL(sigUpdateUI(const QImage &)), this, SLOT(onupdateui(const QImage &))); + connect(m_videothread, SIGNAL(sigEnd(int)), this, SLOT(onvideothreadend(int))); + 
connect(m_videothread->m_workthread, SIGNAL(sigRecognize(int, const QString &, const QString &, float, const QImage &, const QRect &)), this, + SLOT(onrecognize(int, const QString &, const QString &, float, const QImage &, const QRect &))); + //m_videothread->start(); + + m_inputfilesthread = new InputFilesThread(m_videothread, m_image_path, m_image_tmp_path); + m_resetmodelthread = new ResetModelThread( m_image_path, m_image_tmp_path); + + connect(m_inputfilesthread, SIGNAL(sigInputFilesUpdateUI(std::vector*)), this, SLOT(oninputfilesupdateui(std::vector *)), Qt::BlockingQueuedConnection); + + ui->dbsavebtn->setEnabled(true); + ui->previewrunbtn->setEnabled(true); + ui->previewstopbtn->setEnabled(false); + + //ui->pushButton_6->setEnabled(false); + /////////////////////// + /////////////////////// + //ui->label->setStyleSheet("QLabel{background-color:rgb(255,255,255);}"); + //ui->label->setStyleSheet("border-image:url(:/new/prefix1/white.png)"); + int a = ui->previewlabel->width(); + int b = ui->previewlabel->height(); + QImage image(":/new/prefix1/white.png"); + QImage ime = image.scaled(a,b); + ui->previewlabel->setPixmap(QPixmap::fromImage(ime)); + + ui->tabWidget->setCurrentIndex(0); + m_currenttab = ui->tabWidget->currentIndex(); + + + if(m_model->rowCount() > 0) + { + ui->dbtableview->scrollToBottom(); + ui->dbtableview->selectRow(m_model->rowCount() - 1); + emit ui->dbtableview->clicked(m_model->index(m_model->rowCount() - 1, 1)); + } +} + +MainWindow::~MainWindow() +{ + + delete ui; + cleardata(); +} + +void MainWindow::cleardata() +{ + std::map::iterator iter = m_datalst.begin(); + for(; iter != m_datalst.end(); ++iter) + { + if(iter->second) + { + delete iter->second; + iter->second = NULL; + } + } + m_datalst.clear(); +} + +void MainWindow::getdatas() +{ + int i = 0; + QSqlQuery q("select * from " + m_table + " order by id asc"); + while(q.next()) + { + //qDebug() << q.value("id").toInt() << "-----" << q.value("name").toString() << "----" << 
q.value("image_path").toString(); + QByteArray data1 = q.value("feature_data").toByteArray(); + float * ptr = (float *)data1.data(); + //qDebug() << ptr[0] << "," << ptr[1] << "," << ptr[2] << "," << ptr[3] ; + + ////////////////////////////////////////////////// + m_model->setItem(i, 0, new QStandardItem(QString::number(q.value("id").toInt()))); + m_model->setItem(i, 1, new QStandardItem(q.value("name").toString())); + // m_model->setItem(i, 2, new QStandardItem(q.value("image_path").toString())); + + QLabel *label = new QLabel(""); + label->setFixedSize(100,80); + label->setStyleSheet("border-image:url(" + m_image_path + q.value("image_path").toString() + ")"); + ui->dbtableview->setIndexWidget(m_model->index(m_model->rowCount() - 1, 2), label); + + /* + QPushButton *button = new QPushButton("edit"); + button->setProperty("id", q.value("id").toInt()); + button->setFixedSize(80, 40); + connect(button, SIGNAL(clicked()), this, SLOT(editrecord())); + ui->dbtableview->setIndexWidget(m_model->index(m_model->rowCount() - 1, 3), button); + */ + + + QPushButton *button2 = new QPushButton("delete"); + button2->setProperty("id", q.value("id").toInt()); + button2->setFixedSize(80, 40); + connect(button2, SIGNAL(clicked()), this, SLOT(deleterecord())); + + QWidget *widget = new QWidget(); + QHBoxLayout *layout = new QHBoxLayout; + layout->addStretch(); + layout->addWidget(button2); + layout->addStretch(); + widget->setLayout(layout); + + ui->dbtableview->setIndexWidget(m_model->index(m_model->rowCount() - 1, 3), widget); + + //ui->dbtableview->setIndexWidget(m_model->index(m_model->rowCount() - 1, 3), button2); + + DataInfo * info = new DataInfo; + info->id = q.value("id").toInt(); + info->name = q.value("name").toString(); + info->image_path = q.value("image_path").toString(); + memcpy(info->features, ptr, 1024 * sizeof(float)); + info->x = q.value("facex").toInt(); + info->y = q.value("facey").toInt(); + info->width = q.value("facewidth").toInt(); + info->height = 
q.value("faceheight").toInt(); + m_datalst.insert(std::map::value_type(info->id, info)); + i++; + } +} + + + +void MainWindow::editrecord() +{ + //QPushButton *button = (QPushButton *)sender(); + //qDebug() << button->property("id").toInt() << ", edit"; +} + +void MainWindow::deleterecord() +{ + QPushButton *button = (QPushButton *)sender(); + qDebug() << button->property("id").toInt() << ",del"; + QMessageBox::StandardButton reply = QMessageBox::question(NULL, "delete", tr("Are you sure delete this record?"), QMessageBox::Yes | QMessageBox::No); + if(reply == QMessageBox::No) + return; + + QModelIndex modelindex = ui->dbtableview->indexAt(button->pos()); + + int id = button->property("id").toInt(); + QStandardItemModel * model = (QStandardItemModel *)ui->dbtableview->model(); + + QSqlQuery query("delete from " + m_table + " where id=" + QString::number(id)); + //qDebug() << "delete from " + m_table + " where id=" + QString::number(id); + if(!query.exec()) + { + QMessageBox::warning(NULL, "warning", tr("delete this record failed!"), QMessageBox::Yes); + return; + } + + int nrows = modelindex.row(); + model->removeRow(modelindex.row()); + std::map::iterator iter = m_datalst.find(id); + if(iter != m_datalst.end()) + { + QFile file(m_image_path + iter->second->image_path); + file.remove(); + delete iter->second; + m_datalst.erase(iter); + } + + if(m_model->rowCount() > 0) + { + nrows--; + if(nrows < 0) + { + nrows = 0; + } + //qDebug() << "delete------------row:" << nrows; + ui->dbtableview->selectRow(nrows); + emit ui->dbtableview->clicked(m_model->index(nrows, 1)); + }else + { + ui->db_editname->setText(""); + ui->db_editid->setText(""); + ui->db_editpicture->setStyleSheet("border-image:url(:/new/prefix1/default.png)"); + ui->db_editcrop->setStyleSheet("border-image:url(:/new/prefix1/default.png)"); + } +} + +void MainWindow::showfaceinfo() +{ + int row = ui->dbtableview->currentIndex().row(); + //qDebug() << "showfaceinfo:" << row ; + if(row >= 0) + { + QModelIndex 
index = m_model->index(row, 0); + int id = ui->db_editid->text().toInt(); + int curid = m_model->data(index).toInt(); + if(id == curid) + return; + + + ui->db_editid->setText(QString::number(m_model->data(index).toInt())); + std::map::iterator iter = m_datalst.find(m_model->data(index).toInt()); + if(iter == m_datalst.end()) + return; + + index = m_model->index(row, 1); + ui->db_editname->setText(m_model->data(index).toString()); + + QString strimage = iter->second->image_path; + //qDebug() << "showfaceinfo:" << strimage; + ui->db_editpicture->setStyleSheet("border-image:url(" + m_image_path + strimage + ")"); + + + //qDebug() << "showfaceinfo:" << strimage; + ui->db_editcrop->setStyleSheet("border-image:url(" + m_image_path + gcrop_prefix + strimage + ")"); + + + iter = m_datalst.find(id); + if(iter == m_datalst.end()) + return; + QFile::remove(m_image_tmp_path + iter->second->image_path); + } +} + +void MainWindow::onrecognize(int pid, const QString & name, const QString & imagepath, float score, const QImage &image, const QRect &rc) +{ + int nrows = m_videomodel->rowCount(); + + if(nrows > 1000) + { + ui->previewtableview->setUpdatesEnabled(false); + m_videomodel->removeRows(0, 200); + ui->previewtableview->setUpdatesEnabled(true); + } + + nrows = m_videomodel->rowCount(); + int i = 0; + for(; iitem(i, 4)->text().toInt() == pid) + { + break; + } + } + + nrows = i; + + m_videomodel->setItem(nrows, 0, new QStandardItem(name)); + //m_videomodel->setItem(nrows, 1, new QStandardItem(QString::number(score, 'f', 3))); + + QLabel *label = new QLabel(""); + label->setFixedSize(80,80); + if(name.isEmpty()) + { + m_videomodel->setItem(nrows, 1, new QStandardItem("")); + label->setText(imagepath); + }else + { + m_videomodel->setItem(nrows, 1, new QStandardItem(QString::number(score, 'f', 3))); + //QLabel *label = new QLabel(""); + //qDebug() << "rows:" << nrows << ",imagepath:" << imagepath << "," << m_image_path + gcrop_prefix + imagepath ; + //label->setFixedSize(80,80); 
+ + QImage srcimage; + srcimage.load( m_image_path + imagepath); + srcimage = srcimage.copy(rc.x(),rc.y(),rc.width(),rc.height()); + srcimage = srcimage.scaled(80,80); + label->setPixmap(QPixmap::fromImage(srcimage)); + //label->setStyleSheet("border-image:url(" + m_image_path + gcrop_prefix + imagepath + ")"); + //ui->previewtableview->setIndexWidget(m_videomodel->index(nrows, 2), label); + } + + ui->previewtableview->setIndexWidget(m_videomodel->index(nrows, 2), label); + + /* + QLabel *label = new QLabel(""); + qDebug() << "rows:" << nrows << ",imagepath:" << imagepath << "," << m_image_path + gcrop_prefix + imagepath ; + label->setFixedSize(80,80); + + QImage srcimage; + srcimage.load( m_image_path + imagepath); + srcimage = srcimage.copy(rc.x(),rc.y(),rc.width(),rc.height()); + srcimage = srcimage.scaled(80,80); + label->setPixmap(QPixmap::fromImage(srcimage)); + //label->setStyleSheet("border-image:url(" + m_image_path + gcrop_prefix + imagepath + ")"); + ui->previewtableview->setIndexWidget(m_videomodel->index(nrows, 2), label); + */ + + QLabel *label2 = new QLabel(""); + label2->setFixedSize(80,80); + QImage img = image.scaled(80,80); + label2->setPixmap(QPixmap::fromImage(img)); + //label2->setStyleSheet("border-image:url(" + m_image_path + gcrop_prefix + imagepath + ")"); + ui->previewtableview->setIndexWidget(m_videomodel->index(nrows, 3), label2); + + m_videomodel->setItem(nrows, 4, new QStandardItem(QString::number(pid))); + ui->previewtableview->scrollToBottom(); + +} + +void MainWindow::onupdateui(const QImage & image) +{ + int a = ui->previewlabel->width(); + int b = ui->previewlabel->height(); + QImage ime = image.scaled(a,b); + ui->previewlabel->setPixmap(QPixmap::fromImage(ime)); + ui->previewlabel->show(); +} + +void MainWindow::onvideothreadend(int value) +{ + qDebug() << "onvideothreadend:" << value; + //ui->label->setStyleSheet("border-image:url(:/new/prefix1/white.png)"); + + if(m_type.type != 2) + { + int a = ui->previewlabel->width(); + 
int b = ui->previewlabel->height(); + QImage image(":/new/prefix1/white.png"); + QImage ime = image.scaled(a,b); + ui->previewlabel->setPixmap(QPixmap::fromImage(ime)); + ui->previewlabel->show(); + } + + ui->previewrunbtn->setEnabled(true); + ui->previewstopbtn->setEnabled(false); +} + +void MainWindow::on_dbsavebtn_clicked() +{ + //input image to database + //phuckDlg *dialog = new phuckDlg(this); + //dialog->setModal(true); + //dialog->show(); + + //qDebug() << "----begin---update"; + if(ui->db_editname->text().isEmpty()) + { + QMessageBox::critical(NULL, "critical", tr("name is empty!"), QMessageBox::Yes); + return; + } + + if(ui->db_editname->text().length() > 64) + { + QMessageBox::critical(NULL, "critical", tr("name length is more than 64!"), QMessageBox::Yes); + return; + } + + int index = 1; + index = ui->db_editid->text().toInt(); + + //qDebug() << "----begin---update---index:" << index; + std::map::iterator iter = m_datalst.find(index); + if(iter == m_datalst.end()) + { + return; + } + + QString str = m_image_tmp_path + iter->second->image_path; + QFileInfo fileinfo(str); + bool imageupdate = false; + float features[1024]; + SeetaRect rect; + + if(fileinfo.isFile()) + { + //imageupdate = true; + QString cropfile = m_image_tmp_path + gcrop_prefix + iter->second->image_path; + + float features[1024]; + int nret = m_videothread->checkimage(str, cropfile, features, rect); + QString strerror; + + if(nret == -2) + { + strerror = "do not find face!"; + }else if(nret == -1) + { + strerror = str + " is invalid!"; + }else if(nret == 1) + { + strerror = "find more than one face!"; + }else if(nret == 2) + { + strerror = "quality check failed!"; + } + + if(!strerror.isEmpty()) + { + QFile::remove(str); + QMessageBox::critical(NULL,"critical", strerror, QMessageBox::Yes); + return; + } + } + + //qDebug() << "---1-begin---update---index:" << index; + + QSqlQuery query; + + if(imageupdate) + { + query.prepare("update " + m_table + " set name = :name, 
feature_data=:feature_data, facex=:facex,facey=:facey,facewidth=:facewidth,faceheight=:faceheight where id=" + QString::number(index)); + QByteArray bytearray; + bytearray.resize(1024 * sizeof(float)); + memcpy(bytearray.data(), features, 1024 * sizeof(float)); + query.bindValue(":feature_data", QVariant(bytearray)); + query.bindValue(":facex", rect.x); + query.bindValue(":facey", rect.y); + query.bindValue(":facewidth", rect.width); + query.bindValue(":faceheight", rect.height); + + }else + { + query.prepare("update " + m_table + " set name = :name where id=" + QString::number(index)); + } + query.bindValue(":name", ui->db_editname->text());//fileinfo.fileName());//strfile); + + if(!query.exec()) + { + if(imageupdate) + { + QFile::remove(str); + QFile::remove(m_image_tmp_path + gcrop_prefix + iter->second->image_path); + } + + //QFile::remove() + //qDebug() << "failed to update table:" << query.lastError(); + QMessageBox::critical(NULL, "critical", tr("update data to database failed!"), QMessageBox::Yes); + return; + } + + //qDebug() << "---ddd-begin---update---index:" << index; + iter->second->name = ui->db_editname->text(); + + + if(imageupdate) + { + memcpy(iter->second->features, features, 1024 * sizeof(float)); + //qDebug() << "---image-begin---update---index:" << index << ",image:" << str; + QFile::remove(m_image_path + iter->second->image_path); + QFile::remove(m_image_path + gcrop_prefix + iter->second->image_path); + QFile::copy(str, m_image_path + iter->second->image_path); + QFile::copy(m_image_tmp_path + gcrop_prefix + iter->second->image_path, m_image_path + gcrop_prefix + iter->second->image_path); + QFile::remove(str); + QFile::remove(m_image_tmp_path + gcrop_prefix + iter->second->image_path); + } + + int row = ui->dbtableview->currentIndex().row(); + //qDebug() << "showfaceinfo:" << row ; + if(row >= 0) + { + QModelIndex index = m_model->index(row, 1); + m_model->itemFromIndex(index)->setText(ui->db_editname->text()); + + //qDebug() << 
"---image-begin---update---index:" << index << ",image:" << str; + if(imageupdate) + { + index = m_model->index(row, 2); + ui->dbtableview->indexWidget(index)->setStyleSheet("border-image:url(" + m_image_path + iter->second->image_path + ")"); + ui->db_editcrop->setStyleSheet("border-image:url(" + m_image_path + gcrop_prefix + iter->second->image_path + ")"); + } + } + QMessageBox::information(NULL, "info", tr("update name to database success!"), QMessageBox::Yes); +} + +void MainWindow::on_previewrunbtn_clicked() +{ + m_videothread->m_exited = false; + m_videothread->start(m_type); + ui->previewrunbtn->setEnabled(false); + ui->previewstopbtn->setEnabled(true); +} + +void MainWindow::on_previewstopbtn_clicked() +{ + m_videothread->m_exited = true; +} + +void MainWindow::on_settingsavebtn_clicked() +{ + /* + ResetModelProcessDlg dialog(this, m_resetmodelthread); + //m_resetmodelthread->start(&m_datalst, m_table, fr); + int nret = dialog.exec(); + + qDebug() << "ResetModelProcessDlg:" << nret; + + if(nret != QDialog::Accepted) + { + + QMessageBox::critical(NULL, "critical", "reset face recognizer model failed!", QMessageBox::Yes); + return; + } + return; + */ + ////////////////////////////////// + + int size = ui->fdminfacesize->text().toInt(); + if(size < 20 || size > 1000) + { + QMessageBox::warning(NULL, "warn", "Face Detector Min Face Size is invalid!", QMessageBox::Yes); + return; + } + + float value = ui->fdthreshold->text().toFloat(); + if(value >= 1.0 || value < 0.0) + { + QMessageBox::warning(NULL, "warn", "Face Detector Threshold is invalid!", QMessageBox::Yes); + return; + } + + value = ui->antispoofclarity->text().toFloat(); + if(value >= 1.0 || value < 0.0) + { + QMessageBox::warning(NULL, "warn", "Anti Spoofing Clarity is invalid!", QMessageBox::Yes); + return; + } + + value = ui->antispoofreality->text().toFloat(); + if(value >= 1.0 || value < 0.0) + { + QMessageBox::warning(NULL, "warn", "Anti Spoofing Reality is invalid!", QMessageBox::Yes); + 
return; + } + + value = ui->yawlowthreshold->text().toFloat(); + if(value >= 90 || value < 0.0) + { + QMessageBox::warning(NULL, "warn", "Quality Yaw Low Threshold is invalid!", QMessageBox::Yes); + return; + } + value = ui->yawhighthreshold->text().toFloat(); + if(value >= 90 || value < 0.0) + { + QMessageBox::warning(NULL, "warn", "Quality Yaw High Threshold is invalid!", QMessageBox::Yes); + return; + } + + value = ui->pitchlowthreshold->text().toFloat(); + if(value >= 90 || value < 0.0) + { + QMessageBox::warning(NULL, "warn", "Quality Pitch Low Threshold is invalid!", QMessageBox::Yes); + return; + } + value = ui->pitchhighthreshold->text().toFloat(); + if(value >= 90 || value < 0.0) + { + QMessageBox::warning(NULL, "warn", "Quality Pitch High Threshold is invalid!", QMessageBox::Yes); + return; + } + + value = ui->fr_threshold->text().toFloat(); + if(value >= 1.0 || value < 0.0) + { + QMessageBox::warning(NULL, "warn", "Face Recognizer Threshold is invalid!", QMessageBox::Yes); + return; + } + + QString strmodel = ui->fr_modelpath->text().trimmed(); + QFileInfo fileinfo(gmodelpath.c_str() + strmodel); + if(QString::compare(fileinfo.suffix(), "csta", Qt::CaseInsensitive) != 0) + { + QMessageBox::warning(NULL, "warn", "Face Recognizer model file is invalid!", QMessageBox::Yes); + return; + } + + QMessageBox::StandardButton result; + if(QString::compare(gparamters.Fr_ModelPath, ui->fr_modelpath->text().trimmed()) != 0) + { + result = QMessageBox::warning(NULL, "warning", "Set new face recognizer model need reset features, Are you sure?", QMessageBox::Yes | QMessageBox::No); + if(result == QMessageBox::No) + { + return; + } + + seeta::FaceRecognizer * fr = m_videothread->CreateFaceRecognizer(ui->fr_modelpath->text().trimmed()); + ResetModelProcessDlg dialog(this, m_resetmodelthread); + m_resetmodelthread->start(&m_datalst, m_table, fr); + int nret = dialog.exec(); + + qDebug() << "ResetModelProcessDlg:" << nret; + + if(nret != QDialog::Accepted) + { + delete fr; 
+ QMessageBox::critical(NULL, "critical", "reset face recognizer model failed!", QMessageBox::Yes); + return; + } + m_videothread->set_fr(fr); + } + + + QString sql("update " + m_config_table + " set fd_minfacesize=%1, fd_threshold=%2, antispoof_clarity=%3, antispoof_reality=%4,"); + sql += "qa_yawlow=%5, qa_yawhigh=%6, qa_pitchlow=%7, qa_pitchhigh=%8, fr_threshold=%9,fr_modelpath=\"%10\""; + sql = QString(sql).arg(ui->fdminfacesize->text()).arg(ui->fdthreshold->text()).arg(ui->antispoofclarity->text()).arg(ui->antispoofreality->text()). + arg(ui->yawlowthreshold->text()).arg(ui->yawhighthreshold->text()).arg(ui->pitchlowthreshold->text()).arg(ui->pitchhighthreshold->text()). + arg(ui->fr_threshold->text()).arg(ui->fr_modelpath->text().trimmed()); + QSqlQuery q(sql); + //qDebug() << sql; + //QSqlQuery q("update " + m_config_table + " set min_face_size =" + ui->fdminfacesize->text() ); + if(!q.exec()) + { + QMessageBox::critical(NULL, "critical", "update setting failed!", QMessageBox::Yes); + return; + } + + + + gparamters.MinFaceSize = ui->fdminfacesize->text().toInt(); + gparamters.Fd_Threshold = ui->fdthreshold->text().toFloat(); + gparamters.AntiSpoofClarity = ui->antispoofclarity->text().toFloat(); + gparamters.AntiSpoofReality = ui->antispoofreality->text().toFloat(); + gparamters.YawLowThreshold = ui->yawlowthreshold->text().toFloat(); + gparamters.YawHighThreshold = ui->yawhighthreshold->text().toFloat(); + gparamters.PitchLowThreshold = ui->pitchlowthreshold->text().toFloat(); + gparamters.PitchHighThreshold = ui->pitchhighthreshold->text().toFloat(); + gparamters.Fr_Threshold = ui->fr_threshold->text().toFloat(); + gparamters.Fr_ModelPath = ui->fr_modelpath->text().trimmed(); + + m_videothread->setparamter(); + + QMessageBox::information(NULL, "info", "update setting ok!", QMessageBox::Yes); + +} + +void MainWindow::on_rotatebtn_clicked() +{ + QMatrix matrix; + matrix.rotate(90); + + int id = ui->db_editid->text().toInt(); + + std::map::iterator iter = 
m_datalst.find(id); + if(iter == m_datalst.end()) + { + return; + } + + //QFile::remove(m_image_tmp_path + iter->second->image_path); + if(!QFile::exists(m_image_tmp_path + iter->second->image_path)) + { + QFile::copy(m_image_path + iter->second->image_path, m_image_tmp_path + iter->second->image_path); + } + + if(!QFile::exists(m_image_tmp_path + gcrop_prefix + iter->second->image_path)) + { + QFile::copy(m_image_path + gcrop_prefix + iter->second->image_path, m_image_tmp_path + gcrop_prefix + iter->second->image_path); + } + //QFile::copy(m_image_path + iter->second->image_path, m_image_tmp_path + iter->second->image_path); + + QImage image(m_image_tmp_path + iter->second->image_path); + if(image.isNull()) + return; + + image = image.transformed(matrix, Qt::FastTransformation); + image.save(m_image_tmp_path + iter->second->image_path); + + ui->db_editpicture->setStyleSheet("border-image:url(" + m_image_tmp_path + iter->second->image_path + ")"); + + /////////////////////// + //QMatrix cropmatrix; + matrix.reset(); + matrix.rotate(90); + QImage cropimage(m_image_tmp_path + gcrop_prefix + iter->second->image_path); + if(cropimage.isNull()) + return; + + cropimage = cropimage.transformed(matrix, Qt::FastTransformation); + cropimage.save(m_image_tmp_path + gcrop_prefix + iter->second->image_path); + + ui->db_editcrop->setStyleSheet("border-image:url(" + m_image_tmp_path + gcrop_prefix + iter->second->image_path + ")"); + +} + + + +void MainWindow::on_tabWidget_currentChanged(int index) +{ + //qDebug() << "cur:" << ui->tabWidget->tabText(index) << ",old:" << ui->tabWidget->tabText(m_currenttab) ; + if(m_currenttab != index) + { + if(m_currenttab == 2) + { + on_previewstopbtn_clicked(); + m_videothread->wait(); + } + m_currenttab = index; + } + //qDebug() << "tab:" << ui->tabWidget->tabText(index) << ",cur:" << index << ",old:" << ui->tabWidget->currentIndex(); +} + +void MainWindow::on_addimagebtn_clicked() +{ + QString fileName = QFileDialog::getOpenFileName(this, 
tr("open image file"), + "./" , + "JPEG Files(*.jpg *.jpeg);;PNG Files(*.png);;BMP Files(*.bmp)"); + //qDebug() << "image:" << fileName; + + QImage image(fileName); + if(image.isNull()) + return; + + QFile file(fileName); + QFileInfo fileinfo(fileName); + + ////////////////////////////// + QSqlQuery query; + query.prepare("insert into " + m_table + " (id, name, image_path, feature_data, facex,facey,facewidth,faceheight) values (:id, :name, :image_path, :feature_data,:facex,:facey,:facewidth,:faceheight)"); + + int index = 1; + if(m_model->rowCount() > 0) + { + index = m_model->item(m_model->rowCount() - 1, 0)->text().toInt() + 1; + } + + + QString strfile = QString::number(index) + "_" + fileinfo.fileName();//m_image_path + QString::number(index) + "_" + m_currentimagefile;//fileinfo.fileName(); + + QString cropfile = m_image_path + gcrop_prefix + strfile; + + float features[1024]; + SeetaRect rect; + int nret = m_videothread->checkimage(fileName, cropfile, features, rect); + QString strerror; + + if(nret == -2) + { + strerror = "do not find face!"; + }else if(nret == -1) + { + strerror = fileName + " is invalid!"; + }else if(nret == 1) + { + strerror = "find more than one face!"; + }else if(nret == 2) + { + strerror = "quality check failed!"; + } + + if(!strerror.isEmpty()) + { + QMessageBox::critical(NULL,"critical", strerror, QMessageBox::Yes); + return; + } + + QString name = fileinfo.completeBaseName();//fileName(); + int n = name.indexOf("_"); + + if(n >= 1) + { + name = name.left(n); + } + + query.bindValue(0, index); + query.bindValue(1,name); + + //query.bindValue(2, "/wqy/Downloads/ap.jpeg"); + query.bindValue(2, strfile);//fileinfo.fileName());//strfile); + + //float data[4] = {0.56,0.223,0.5671,-0.785}; + QByteArray bytearray; + bytearray.resize(1024 * sizeof(float)); + memcpy(bytearray.data(), features, 1024 * sizeof(float)); + + query.bindValue(3, QVariant(bytearray)); + query.bindValue(4, rect.x); + query.bindValue(5, rect.y); + query.bindValue(6, 
rect.width);
+    query.bindValue(7, rect.height);
+
+    if(!query.exec())
+    {
+        QFile::remove(cropfile);
+        qDebug() << "failed to insert table:" << query.lastError();
+        QMessageBox::critical(NULL, "critical", tr("save face data to database failed!"), QMessageBox::Yes);
+        return;
+    }
+
+    file.copy(m_image_path + strfile);
+
+    DataInfo * info = new DataInfo();
+    info->id = index;
+    info->name = name;
+    info->image_path = strfile;
+    memcpy(info->features, features, 1024 * sizeof(float));
+    info->x = rect.x;
+    info->y = rect.y;
+    info->width = rect.width;
+    info->height = rect.height;
+    m_datalst.insert(std::map<int, DataInfo *>::value_type(index, info));
+
+    ////////////////////////////////////////////////////////////
+    int rows = m_model->rowCount();
+    //qDebug() << "rows:" << rows;
+
+    m_model->setItem(rows, 0, new QStandardItem(QString::number(index)));
+    m_model->setItem(rows, 1, new QStandardItem(info->name));
+
+    QLabel *label = new QLabel("");
+
+    label->setStyleSheet("border-image:url(" + m_image_path + strfile + ")");
+    ui->dbtableview->setIndexWidget(m_model->index(rows, 2), label);
+
+    QPushButton *button2 = new QPushButton("delete");
+    button2->setProperty("id", index);
+    button2->setFixedSize(80, 40);
+    connect(button2, SIGNAL(clicked()), this, SLOT(deleterecord()));
+
+    QWidget *widget = new QWidget();
+    QHBoxLayout *layout = new QHBoxLayout;
+    layout->addStretch();
+    layout->addWidget(button2);
+    layout->addStretch();
+    widget->setLayout(layout);
+
+    ui->dbtableview->setIndexWidget(m_model->index(rows, 3), widget);
+    ui->dbtableview->scrollToBottom();
+    ui->dbtableview->selectRow(rows);
+
+    emit ui->dbtableview->clicked(m_model->index(rows, 1));
+    //QMessageBox::information(NULL, "info", tr("add face operator success!"), QMessageBox::Yes);
+}
+
+void MainWindow::on_menufacedbbtn_clicked()
+{
+    ui->tabWidget->setCurrentIndex(1);
+}
+
+void MainWindow::on_menusettingbtn_clicked()
+{
+    ui->tabWidget->setCurrentIndex(3);
+}
+
+void
MainWindow::on_previewclearbtn_clicked() +{ + ui->previewtableview->setUpdatesEnabled(false); + m_videomodel->removeRows(0, m_videomodel->rowCount()); + //m_videomodel->clear(); + ui->previewtableview->setUpdatesEnabled(true); +} + +void MainWindow::on_menuopenvideofile_clicked() +{ + QString fileName = QFileDialog::getOpenFileName(this, tr("open video file"), + "./" , + "MP4 Files(*.mp4 *.MP4);;AVI Files(*.avi);;FLV Files(*.flv);;h265 Files(*.h265);;h263 Files(*.h263)"); + //qDebug() << "image:" << fileName; + m_type.type = 1; + m_type.filename = fileName; + m_type.title = "Open Video: " + fileName; + ui->recognize_label->setText(m_type.title); + ui->tabWidget->setCurrentIndex(2); + emit ui->previewrunbtn->clicked(); +} + +void MainWindow::on_menuopenpicturefile_clicked() +{ + QString fileName = QFileDialog::getOpenFileName(this, tr("open image file"), + "./" , + "JPEG Files(*.jpg *.jpeg);;PNG Files(*.png);;BMP Files(*.bmp)"); + //qDebug() << "image:" << fileName; + m_type.type = 2; + m_type.filename = fileName; + m_type.title = "Open Image: " + fileName; + ui->recognize_label->setText(m_type.title); + ui->tabWidget->setCurrentIndex(2); + emit ui->previewrunbtn->clicked(); +} + +void MainWindow::on_menuopencamera_clicked() +{ + m_type.type = 0; + m_type.filename = ""; + m_type.title = "Open Camera: 0"; + ui->recognize_label->setText(m_type.title); + ui->tabWidget->setCurrentIndex(2); + emit ui->previewrunbtn->clicked(); +} + +static void FindFile(const QString & path, QStringList &files) +{ + QDir dir(path); + if(!dir.exists()) + return; + + dir.setFilter(QDir::Dirs | QDir::Files | QDir::NoDotAndDotDot | QDir::NoSymLinks); + dir.setSorting(QDir::DirsFirst);; + + QFileInfoList list = dir.entryInfoList(); + int i = 0; + while(i < list.size()) + { + QFileInfo info = list.at(i); + //qDebug() << info.absoluteFilePath(); + if(info.isDir()) + { + FindFile(info.absoluteFilePath(), files); + }else + { + QString str = info.suffix(); + if(str.compare("png", 
Qt::CaseInsensitive) == 0 || str.compare("jpg", Qt::CaseInsensitive) == 0 || str.compare("jpeg", Qt::CaseInsensitive) == 0 || str.compare("bmp", Qt::CaseInsensitive) == 0)
+            {
+                files.append(info.absoluteFilePath());
+            }
+        }
+        i++;
+    }
+    return;
+}
+
+void MainWindow::on_addfilesbtn_clicked()
+{
+    QString fileName = QFileDialog::getExistingDirectory(this, tr("Select Directory"), ".");
+    if(fileName.isEmpty())
+    {
+        return;
+    }
+
+    qDebug() << fileName;
+    QStringList files;
+    FindFile(fileName, files);
+    qDebug() << files.size();
+    if(files.size() <= 0)
+        return;
+
+    for(int i=0; i<files.size(); i++)
+    {
+        qDebug() << files[i];
+    }
+
+    int index = 1;
+    if(m_model->rowCount() > 0)
+    {
+        index = m_model->item(m_model->rowCount() - 1, 0)->text().toInt();
+    }
+
+    InputFilesProcessDlg dialog(this, m_inputfilesthread);
+
+    m_inputfilesthread->start(&files, index, m_table);
+    dialog.exec();
+
+    //qDebug() << "------on_addfilesbtn_clicked---end";
+}
+
+void MainWindow::oninputfilesupdateui(std::vector<DataInfo *> * datas)
+{
+    DataInfo * info = NULL;
+    //qDebug() << "----oninputfilesupdateui--" << datas->size();
+    if(datas->size() > 0)
+    {
+        ui->dbtableview->setUpdatesEnabled(false);
+    }
+
+    int rows = 0;
+    for(int i=0; i<datas->size(); i++)
+    {
+        rows = m_model->rowCount();
+        //qDebug() << "rows:" << rows;
+        info = (*datas)[i];
+        m_datalst.insert(std::map<int, DataInfo *>::value_type(info->id, info));
+        m_model->setItem(rows, 0, new QStandardItem(QString::number(info->id)));
+        m_model->setItem(rows, 1, new QStandardItem(info->name));
+
+        QLabel *label = new QLabel("");
+
+        label->setStyleSheet("border-image:url(" + m_image_path + info->image_path + ")");
+        ui->dbtableview->setIndexWidget(m_model->index(rows, 2), label);
+
+        QPushButton *button2 = new QPushButton("delete");
+        button2->setProperty("id", info->id);
+        button2->setFixedSize(80, 40);
+        connect(button2, SIGNAL(clicked()), this, SLOT(deleterecord()));
+        ui->dbtableview->setIndexWidget(m_model->index(rows, 3), button2);
+        //ui->dbtableview->scrollToBottom();
+        //ui->dbtableview->selectRow(rows);
+    }
+    if(datas->size() > 0)
+    {
+        ui->dbtableview->setUpdatesEnabled(true);
+        ui->dbtableview->scrollToBottom();
+        ui->dbtableview->selectRow(rows);
+        emit ui->dbtableview->clicked(m_model->index(rows, 1));
+    }
+}
+
+void MainWindow::on_settingselectmodelbtn_clicked()
+{
+    QString fileName = QFileDialog::getOpenFileName(this, tr("open model file"),
+                                                    "./" ,
+                                                    "CSTA Files(*.csta)");
+    QFileInfo fileinfo(fileName);
+    QString modelfile = fileinfo.fileName();
+
+    QString str = gmodelpath.c_str() + modelfile;
+
+    qDebug() << "------str:" << str;
+    qDebug() << "fileName:" << fileName;
+
+    if(QString::compare(fileName, str) == 0)
+    {
+        ui->fr_modelpath->setText(modelfile);
+        return;
+    }
+    //QFile file(fileName);
+    if(!QFile::copy(fileName, str))
+    {
+        QMessageBox::critical(NULL, "critical", "Copy model file: " + fileName + " to " + gmodelpath.c_str() + " failed, file already exists!", QMessageBox::Yes);
+        return;
+    }
+
+    ui->fr_modelpath->setText(modelfile);
+
+    //m_videothread->reset_fr_model(modelfile);
+    //qDebug() << "image:" << fileName;
+}
+
+void MainWindow::closeEvent(QCloseEvent *event)
+{
+    m_videothread->m_exited = true;
+    m_videothread->wait();
+    QWidget::closeEvent(event);
+}
diff --git a/example/qt/seetaface_demo/mainwindow.h b/example/qt/seetaface_demo/mainwindow.h
new file mode 100644
index 0000000..687c864
--- /dev/null
+++ b/example/qt/seetaface_demo/mainwindow.h
@@ -0,0 +1,134 @@
+#ifndef MAINWINDOW_H
+#define MAINWINDOW_H
+
+#include <QMainWindow>
+//#include
+
+/*
+#include
+#include
+#include
+
+#include "seeta/FaceLandmarker.h"
+#include "seeta/FaceDetector.h"
+#include "seeta/FaceAntiSpoofing.h"
+#include "seeta/Common/Struct.h"
+*/
+
+#include "videocapturethread.h"
+
+#include "qsqldatabase.h"
+#include "qsqltablemodel.h"
+#include "qstandarditemmodel.h"
+
+#include <map>
+
+namespace Ui {
+class MainWindow;
+}
+
+class MainWindow : public QMainWindow
+{
+    Q_OBJECT
+
+public:
+    explicit MainWindow(QWidget *parent = 0);
+    ~MainWindow();
+
+    void getdatas();
+    void cleardata();
+
+protected:
+    void closeEvent(QCloseEvent *event);
+
+private slots:
+    //void on_pushButton_clicked();
+
+    void editrecord();
+    void deleterecord();
+    void onupdateui(const QImage & image);
+    void onrecognize(int pid, const QString & name, const QString & imagepath, float score, const QImage &image, const QRect &rc);
+
+    void onvideothreadend(int value);
+    void on_dbsavebtn_clicked();
+
+    void on_previewrunbtn_clicked();
+
+    void on_previewstopbtn_clicked();
+
+    void on_settingsavebtn_clicked();
+
+    void on_rotatebtn_clicked();
+
+    void showfaceinfo();
+
+    void on_tabWidget_currentChanged(int index);
+
+    void on_addimagebtn_clicked();
+
+    void on_menufacedbbtn_clicked();
+
+    //void on_pushButton_8_clicked();
+
+    void on_menusettingbtn_clicked();
+
+    void on_previewclearbtn_clicked();
+
+    void on_menuopenvideofile_clicked();
+
+    void on_menuopenpicturefile_clicked();
+
+    void on_menuopencamera_clicked();
+
+    void on_addfilesbtn_clicked();
+
+    void oninputfilesupdateui(std::vector<DataInfo *> *);
+
+    void on_settingselectmodelbtn_clicked();
+
+private:
+    Ui::MainWindow *ui;
+
+    /*
+    QTimer *m_timer;
+    cv::VideoCapture * m_capture;
+
+    seeta::FaceDetector * m_fd;
+    seeta::FaceLandmarker * m_pd;
+    seeta::FaceAntiSpoofing * m_spoof;
+    */
+
+    VideoCaptureThread * m_videothread;
+
+    QSqlDatabase m_database;
+
+    // QSqlTableModel * m_model;
+    QString m_table;
+    QString m_config_table;
+    QStandardItemModel * m_model;
+
+    QPixmap m_default_image;
+
+    //QString m_currentimagefile;
+    QString m_image_path;
+    QString m_image_tmp_path;
+    //QString m_model_path;
+
+    std::map<int, DataInfo *> m_datalst;
+
+    int m_currenttab;
+
+    QStandardItemModel * m_videomodel;
+
+    RecognizeType m_type;
+
+    InputFilesThread *m_inputfilesthread;
+    ResetModelThread *m_resetmodelthread;
+};
+
+#endif // MAINWINDOW_H
diff --git a/example/qt/seetaface_demo/mainwindow.ui b/example/qt/seetaface_demo/mainwindow.ui
new file mode 100644
index 0000000..cfae9b6
--- /dev/null
+++ b/example/qt/seetaface_demo/mainwindow.ui
@@ -0,0 +1,774 @@
+<!-- mainwindow.ui: 774-line Qt Designer form for MainWindow (1230x718); the XML
+     markup was lost in extraction. Recoverable content: a QTabWidget with the tabs
+     Menu, Face Database, Preview and Setting; menu buttons &Face Database,
+     Open &Camera, &Setting, Open &Video and Open &Image; a Register group with
+     &Image and &Directory buttons; an Edit group with ID and Name fields, Picture
+     and Face frames, and &Rotate / &Save buttons; a Preview tab with an 800x600
+     video frame, a record table and &Run / &Stop / &Clear buttons; and a Setting
+     tab with the groups Face Detector (Min Face Size, Score Threshold),
+     Anti-Spoofing (Clarity/Reality Threshold), Quality Assessor (Yaw/Pitch Score
+     Between), Face Recognizer (Score Threshold, Model File) and a &Save button. -->
diff --git a/example/qt/seetaface_demo/resetmodelprocessdialog.cpp b/example/qt/seetaface_demo/resetmodelprocessdialog.cpp
new file mode 100644
index 0000000..a1242cc
--- /dev/null
+++ b/example/qt/seetaface_demo/resetmodelprocessdialog.cpp
@@ -0,0 +1,109 @@
+#include
+#include
+#include
+#include
+#include
+#include "resetmodelprocessdialog.h"
+
+#include "videocapturethread.h"
+
+ResetModelProcessDlg::ResetModelProcessDlg(QWidget *parent, ResetModelThread * thread)
+    : QDialog(parent)
+{
+    m_exited = false;
+    workthread = thread;
+    qDebug() << "------------dlg input----------------";
+    // Initialize the widget objects.
+    // tr() marks the string for translation into other languages.
+    // The letter after & is a shortcut-key marker for the widget, e.g. Alt+W activates the "Find &what" widget.
+    label = new QLabel("", this);
+
+    progressbar = new QProgressBar(this);
+    progressbar->setOrientation(Qt::Horizontal);
+    progressbar->setMinimum(0);
+    progressbar->setMaximum(100);
+    progressbar->setValue(5);
+    progressbar->setFormat(tr("current progress:%1%").arg(QString::number(5, 'f',1)));
+    progressbar->setAlignment(Qt::AlignLeft| Qt::AlignVCenter);
+
+    cancelButton = new
QPushButton(tr("&Cancel"));
+    //cancelButton->setEnabled(true);
+
+    //closeButton = new QPushButton(tr("&Close"));
+
+    // Connect signals and slots.
+    connect(cancelButton, SIGNAL(clicked()), this, SLOT(cancelClicked()));
+    //connect(okButton, SIGNAL(clicked()), this, SLOT(okClicked()));
+    //connect(closeButton, SIGNAL(clicked()), this, SLOT(close()));
+    connect(workthread, SIGNAL(sigprogress(float)), this, SLOT(setProgressValue(float)));
+    connect(workthread, SIGNAL(sigResetModelEnd(int)), this, SLOT(setResetModelEnd(int)));
+
+    QHBoxLayout *bottomLayout = new QHBoxLayout;
+    bottomLayout->addStretch();
+    bottomLayout->addWidget(cancelButton);
+    //bottomLayout->addWidget(closeButton);
+    bottomLayout->addStretch();
+
+    QVBoxLayout *mainLayout = new QVBoxLayout;
+    mainLayout->addWidget(label);
+    mainLayout->addWidget(progressbar);
+    mainLayout->addStretch();
+    mainLayout->addLayout(bottomLayout);
+
+    this->setLayout(mainLayout);
+
+    setWindowTitle(tr("Reset Face Recognizer Model Progress"));
+
+    //cancelButton->setEnabled(true);
+    setFixedSize(400,160);
+}
+
+void ResetModelProcessDlg::closeEvent(QCloseEvent *event)
+{
+    if(!m_exited)
+    {
+        workthread->m_exited = true;
+        event->ignore();
+    }else
+    {
+        event->accept();
+    }
+}
+
+void ResetModelProcessDlg::cancelClicked()
+{
+    qDebug() << "ResetModelProcessDlg cancelclicked";
+    workthread->m_exited = true;
+}
+
+ResetModelProcessDlg::~ResetModelProcessDlg()
+{
+    qDebug() << "ResetModelProcessDlg ~ResetModelProcessDlg";
+}
+
+void ResetModelProcessDlg::setResetModelEnd(int value)
+{
+    m_exited = true;
+    this->hide();
+    qDebug() << "setResetModelEnd:" << value;
+    if(value == 0)
+    {
+        accept();
+    }else
+    {
+        reject();
+    }
+}
+
+void ResetModelProcessDlg::setProgressValue(float value)
+{
+    QString str = QString("%1%").arg(QString::number(value, 'f',1));
+    progressbar->setValue(value);
+    progressbar->setFormat(str);
+}
diff --git a/example/qt/seetaface_demo/resetmodelprocessdialog.h
b/example/qt/seetaface_demo/resetmodelprocessdialog.h
new file mode 100644
index 0000000..c60e2c4
--- /dev/null
+++ b/example/qt/seetaface_demo/resetmodelprocessdialog.h
@@ -0,0 +1,50 @@
+#ifndef RESETMODELPROCESSDIALOG_H
+#define RESETMODELPROCESSDIALOG_H
+
+#include <QDialog>
+
+class QLabel;
+class QProgressBar;
+class QPushButton;
+class ResetModelThread;
+
+class ResetModelProcessDlg : public QDialog
+{
+    // Q_OBJECT must be added to the class when the dialog declares custom signals and slots.
+    Q_OBJECT
+public:
+    // Constructor and destructor.
+    ResetModelProcessDlg(QWidget *parent, ResetModelThread * thread);
+    ~ResetModelProcessDlg();
+
+protected:
+    void closeEvent(QCloseEvent *event);
+
+    // The signals and slots this dialog needs are declared below.
+signals:
+    // Functions under signals need no implementation in this class;
+    // they describe which signals objects of this class can emit.
+
+// Slots are declared under private.
+private slots:
+    void cancelClicked();
+    void setProgressValue(float value);
+    void setResetModelEnd(int);
+
+// The widgets this dialog needs.
+private:
+    QLabel *label;
+
+    QProgressBar *progressbar;
+    //QLabel *label2;
+
+    QPushButton *cancelButton;//, *closeButton;
+
+    ResetModelThread * workthread;
+    bool m_exited;
+};
+
+#endif // RESETMODELPROCESSDIALOG_H
diff --git a/example/qt/seetaface_demo/seetaface_demo.pro b/example/qt/seetaface_demo/seetaface_demo.pro
new file mode 100644
index 0000000..d682f83
--- /dev/null
+++ b/example/qt/seetaface_demo/seetaface_demo.pro
@@ -0,0 +1,71 @@
+#-------------------------------------------------
+#
+# Project created by QtCreator 2020-03-16T14:40:38
+#
+#-------------------------------------------------
+
+QT += core gui sql
+
+greaterThan(QT_MAJOR_VERSION, 4): QT += widgets
+
+TARGET = seetaface_demo
+TEMPLATE = app
+
+# The following define makes your compiler emit warnings if you use
+# any feature of Qt which has been marked as deprecated (the exact warnings
+# depend on your compiler). Please consult the documentation of the
+# deprecated API in order to know how to port your code away from it.
+DEFINES += QT_DEPRECATED_WARNINGS
+
+# You can also make your code fail to compile if you use deprecated APIs.
+# In order to do so, uncomment the following line. +# You can also select to disable deprecated APIs only up to a certain version of Qt. +#DEFINES += QT_DISABLE_DEPRECATED_BEFORE=0x060000 # disables all the APIs deprecated before Qt 6.0.0 + + +SOURCES += \ + main.cpp \ + mainwindow.cpp \ + videocapturethread.cpp \ + inputfilesprocessdialog.cpp \ + resetmodelprocessdialog.cpp + +HEADERS += \ + mainwindow.h \ + videocapturethread.h \ + inputfilesprocessdialog.h \ + resetmodelprocessdialog.h + +FORMS += \ + mainwindow.ui + +#windows adm64: + +#INCLUDEPATH += C:/thirdparty/opencv4.2/build/include \ +# C:/study/SF3.0/sf3.0_windows/sf3.0_windows/include + + +#CONFIG(debug, debug|release) { +#LIBS += -LC:/thirdparty/opencv4.2/build/x64/vc14/lib -lopencv_world420d \ +# -LC:/study/SF3.0/sf3.0_windows/sf3.0_windows/lib/x64 -lSeetaFaceDetector600d -lSeetaFaceLandmarker600d \ +# -lSeetaFaceAntiSpoofingX600d -lSeetaFaceTracking600d -lSeetaFaceRecognizer610d \ +# -lSeetaQualityAssessor300d -lSeetaPoseEstimation600d + +#} else { +#LIBS += -LC:/thirdparty/opencv4.2/build/x64/vc14/lib -lopencv_world420 \ +# -LC:/study/SF3.0/sf3.0_windows/sf3.0_windows/lib/x64 -lSeetaFaceDetector600 -lSeetaFaceLandmarker600 \ +# -lSeetaFaceAntiSpoofingX600 -lSeetaFaceTracking600 -lSeetaFaceRecognizer610 \ +# -lSeetaQualityAssessor300 -lSeetaPoseEstimation600 +#} + +#linux: +INCLUDEPATH += /wqy/tools/opencv4_home/include/opencv4 \ + /wqy/seeta_sdk/SF3/libs/SF3.0_v1/include + +LIBS += -L/wqy/tools/opencv4_home/lib -lopencv_core -lopencv_imgproc -lopencv_highgui -lopencv_videoio -lopencv_imgcodecs \ + -L/wqy/seeta_sdk/SF3/libs/SF3.0_v1/lib64 -lSeetaFaceDetector600 -lSeetaFaceLandmarker600 \ + -lSeetaFaceAntiSpoofingX600 -lSeetaFaceTracking600 -lSeetaFaceRecognizer610 \ + -lSeetaQualityAssessor300 -lSeetaPoseEstimation600 -lSeetaAuthorize -ltennis + +RESOURCES += \ + face_resource.qrc diff --git a/example/qt/seetaface_demo/seetaface_demo.pro.user b/example/qt/seetaface_demo/seetaface_demo.pro.user new 
file mode 100644
index 0000000..eb387b9
--- /dev/null
+++ b/example/qt/seetaface_demo/seetaface_demo.pro.user
@@ -0,0 +1,336 @@
+<!-- seetaface_demo.pro.user: 336 lines of Qt Creator-generated per-user project
+     settings (EnvironmentId, kit "Desktop Qt 5.9.2 GCC 64bit", Debug/Release/Profile
+     build configurations under /wqy/test/qtproject/); the XML markup was lost in
+     extraction. -->
diff --git a/example/qt/seetaface_demo/seetatech_logo.png b/example/qt/seetaface_demo/seetatech_logo.png
new file mode 100644
index 0000000..c4bdc58
Binary files /dev/null and b/example/qt/seetaface_demo/seetatech_logo.png differ
diff --git a/example/qt/seetaface_demo/videocapturethread.cpp b/example/qt/seetaface_demo/videocapturethread.cpp
new file mode 100644
index 0000000..bc93db8
--- /dev/null
+++ b/example/qt/seetaface_demo/videocapturethread.cpp
@@ -0,0 +1,1058 @@
+#include "videocapturethread.h"
+
+#include "seeta/QualityOfPoseEx.h"
+#include "seeta/Struct.h"
+#include
+#include
+#include
+#include
+#include
+#include "QDebug"
+
+using namespace std::chrono;
+
+extern const QString gcrop_prefix;
+extern Config_Paramter gparamters;
+extern std::string gmodelpath;// = "/wqy/seeta_sdk/SF3/libs/SF3.0_v1/models/";
+
+void clone_image( const SeetaImageData &src, SeetaImageData &dst)
+{
+    if(src.width != dst.width || src.height != dst.height || src.channels != dst.channels)
+    {
+        if(dst.data)
+        {
+            delete []
dst.data;
+            dst.data = nullptr;
+        }
+        dst.width = src.width;
+        dst.height = src.height;
+        dst.channels = src.channels;
+        dst.data = new unsigned char[src.width * src.height * src.channels];
+    }
+
+    memcpy(dst.data, src.data, src.width * src.height * src.channels);
+}
+
+//////////////////////////////
+WorkThread::WorkThread(VideoCaptureThread * main)
+{
+    m_mainthread = main;
+}
+
+WorkThread::~WorkThread()
+{
+    qDebug() << "WorkThread exited";
+}
+
+int WorkThread::recognize(const SeetaTrackingFaceInfo & faceinfo)//, std::vector & datas)
+{
+    auto points = m_mainthread->m_pd->mark(*m_mainthread->m_mainImage, faceinfo.pos);
+
+    m_mainthread->m_qa->feed(*(m_mainthread->m_mainImage), faceinfo.pos, points.data(), 5);
+    auto result1 = m_mainthread->m_qa->query(seeta::BRIGHTNESS);
+    auto result2 = m_mainthread->m_qa->query(seeta::RESOLUTION);
+    auto result3 = m_mainthread->m_qa->query(seeta::CLARITY);
+    auto result4 = m_mainthread->m_qa->query(seeta::INTEGRITY);
+    auto result = m_mainthread->m_qa->query(seeta::POSE_EX);
+
+    qDebug() << "PID:" << faceinfo.PID;
+    if(result.level == 0 || result1.level == 0 || result2.level == 0 || result3.level == 0 || result4.level == 0 )
+    {
+        qDebug() << "Quality check failed!";
+        return -1;
+    }
+
+    auto status = m_mainthread->m_spoof->Predict( *m_mainthread->m_mainImage, faceinfo.pos, points.data() );
+
+    if( status != seeta::FaceAntiSpoofing::REAL)
+    {
+        qDebug() << "antispoofing check failed!";
+        return -2;
+    }
+    seeta::ImageData cropface = m_mainthread->m_fr->CropFaceV2(*m_mainthread->m_mainImage, points.data() );
+    float features[1024];
+    memset(features, 0, 1024 * sizeof(float));
+    m_mainthread->m_fr->ExtractCroppedFace(cropface, features);
+    std::map<int, DataInfo *>::iterator iter = m_mainthread->m_datalst->begin();
+    //std::vector datas;
+
+    for(; iter != m_mainthread->m_datalst->end(); ++iter)
+    {
+        if(m_mainthread->m_exited)
+        {
+            return -3;
+        }
+        float score = m_mainthread->m_fr->CalculateSimilarity(features, iter->second->features);
qDebug() << "PID:" << faceinfo.PID << ", score:" << score; + if(score >= gparamters.Fr_Threshold) + { + //datas.push_back(faceinfo.PID); + //m_lastpids.push_back(faceinfo.PID); + + int x = faceinfo.pos.x - faceinfo.pos.width / 2; + if((x) < 0) + x = 0; + int y = faceinfo.pos.y - faceinfo.pos.height / 2; + if(y < 0) + y = 0; + + int x2 = faceinfo.pos.x + faceinfo.pos.width * 1.5; + if(x2 >= m_mainthread->m_mainImage->width) + { + x2 = m_mainthread->m_mainImage->width -1; + } + + int y2 = faceinfo.pos.y + faceinfo.pos.height * 1.5; + if(y2 >= m_mainthread->m_mainImage->height) + { + y2 = m_mainthread->m_mainImage->height -1; + } + + //qDebug() << "----x:" << faceinfo.pos.x << ",y:" << faceinfo.pos.y << ",w:" << faceinfo.pos.width << ",h:" << faceinfo.pos.height; + cv::Rect rect(x, y, x2-x, y2 - y); + //qDebug() << "x:" << x << ",y:" << y << ",w:" << x2-x << ",h:" << y2-y; + //cv::Rect rect(faceinfo.pos.x, faceinfo.pos.y, faceinfo.pos.width, faceinfo.pos.height); + + cv::Mat mat = m_mainthread->m_mainmat(rect).clone(); + //cv::imwrite("/tmp/ddd.png",mat); + //qDebug() << "----mat---"; + QImage image((const unsigned char *)mat.data, mat.cols,mat.rows,mat.step, QImage::Format_RGB888); + //image.save("/tmp/wwww.png"); + //qDebug() << "PID:" << faceinfo.PID << ", score:" << score; + QRect rc(iter->second->x, iter->second->y, iter->second->width, iter->second->height); + + emit sigRecognize(faceinfo.PID, iter->second->name, iter->second->image_path, score, image, rc); + return 0; + } + } + + //m_lastpids.clear(); + //m_lastpids.resize(datas.size()); + //std::copy(datas.begin(), datas.end(), m_lastpids.begin()); + + return -3; +} + +void WorkThread::run() +{ + //m_begin = system_clock::now(); + m_lastpids.clear(); + m_lasterrorpids.clear(); + bool bfind = false; + std::vector datas; + std::vector errordatas; + int nret = 0; + while(!m_mainthread->m_exited) + { + if(!m_mainthread->m_readimage) + { + QThread::msleep(1); + continue; + } + + auto end = system_clock::now(); + 
//auto duration = duration_cast(end - m_begin); + //int spent = duration.count(); + //if(spent > 10) + // m_lastpids.clear(); + + datas.clear(); + errordatas.clear(); + for(int i=0; im_mainfaceinfos.size(); i++) + { + if(m_mainthread->m_exited) + { + return; + } + + bfind = false; + for(int k=0; km_mainfaceinfos[i].PID == m_lastpids[k]) + { + datas.push_back(m_lastpids[k]); + bfind = true; + break; + } + } + if(!bfind) + { + SeetaTrackingFaceInfo & faceinfo = m_mainthread->m_mainfaceinfos[i]; + nret = recognize(faceinfo);//(m_mainthread->m_mainfaceinfos[i]);//, datas); + if(nret < 0) + { + Fr_DataInfo info; + info.pid = faceinfo.PID; + info.state = nret; + errordatas.push_back(info); + bool bsend = true; + for(int k=0; k= m_mainthread->m_mainImage->width) + { + x2 = m_mainthread->m_mainImage->width -1; + } + + int y2 = faceinfo.pos.y + faceinfo.pos.height * 1.5; + if(y2 >= m_mainthread->m_mainImage->height) + { + y2 = m_mainthread->m_mainImage->height -1; + } + + //qDebug() << "----x:" << faceinfo.pos.x << ",y:" << faceinfo.pos.y << ",w:" << faceinfo.pos.width << ",h:" << faceinfo.pos.height; + cv::Rect rect(x, y, x2-x, y2 - y); + //qDebug() << "x:" << x << ",y:" << y << ",w:" << x2-x << ",h:" << y2-y; + //cv::Rect rect(faceinfo.pos.x, faceinfo.pos.y, faceinfo.pos.width, faceinfo.pos.height); + + cv::Mat mat = m_mainthread->m_mainmat(rect).clone(); + //cv::imwrite("/tmp/ddd.png",mat); + //qDebug() << "----mat---"; + QImage image((const unsigned char *)mat.data, mat.cols,mat.rows,mat.step, QImage::Format_RGB888); + + QString str; + if(info.state == -1) + { + str = "QA ERROR"; + }else if(info.state == -2) + { + str = "SPOOFING"; + }else if(info.state == -3) + { + str = "MISS"; + } + emit sigRecognize(info.pid, "", str, 0.0, image, QRect(0,0,0,0)); + } + }else + { + datas.push_back(m_mainthread->m_mainfaceinfos[i].PID); + } + } + + } + + m_lastpids.clear(); + m_lastpids.resize(datas.size()); + std::copy(datas.begin(), datas.end(), m_lastpids.begin()); + + 
m_lasterrorpids.clear(); + m_lasterrorpids.resize(errordatas.size()); + std::copy(errordatas.begin(), errordatas.end(), m_lasterrorpids.begin()); + + auto end2 = system_clock::now(); + auto duration2= duration_cast(end2 - end); + int spent2 = duration2.count(); + //qDebug() << "----spent:" << spent2; + m_mainthread->m_mutex.lock(); + m_mainthread->m_readimage = false; + m_mainthread->m_mutex.unlock(); + + } + +} + + +///////////////////////////////// +ResetModelThread::ResetModelThread(const QString &imagepath, const QString & tmpimagepath) +{ + //m_mainthread = main; + m_image_path = imagepath; + m_image_tmp_path = tmpimagepath; + m_exited = false; +} + +ResetModelThread::~ResetModelThread() +{ + qDebug() << "ResetModelThread exited"; +} + +void ResetModelThread::start(std::map *datalst, const QString & table, seeta::FaceRecognizer * fr) +{ + m_table = table; + m_datalst = datalst; + m_fr = fr; + m_exited = false; + + QThread::start(); +} + +typedef struct DataInfoTmp +{ + int id; + float features[ 1024]; +}DataInfoTmp; + +void ResetModelThread::run() +{ + int num = m_datalst->size(); + QString fileName; + + float lastvalue = 0.0; + float value = 0.0; + + + ////////////////////////////////// + + QSqlQuery query; + query.exec("drop table " + m_table + "_tmp"); + if(!query.exec("create table " + m_table + "_tmp (id int primary key, name varchar(64), image_path varchar(256), feature_data blob)")) + { + qDebug() << "failed to create table:" + m_table + "_tmp"<< query.lastError(); + emit sigResetModelEnd(-1); + return; + } + + + //////////////////////////////// + float features[1024]; + + std::vector vecs; + + std::map::iterator iter = m_datalst->begin(); + //std::vector datas; + int i=0; + + for(; iter != m_datalst->end(); ++iter,i++) + { + if(m_exited) + { + break; + } + + value = (i + 1) / num; + value = value * 90; + if(value - lastvalue >= 1.0) + { + emit sigprogress(value); + lastvalue = value; + } + //QString str = QString("current progress : 
%1%").arg(QString::number(value, 'f',1)); + emit sigprogress(value); + + fileName = m_image_path + "crop_" + iter->second->image_path; + cv::Mat mat = cv::imread(fileName.toStdString().c_str()); + if(mat.data == NULL) + { + continue; + } + + SeetaImageData image; + image.height = mat.rows; + image.width = mat.cols; + image.channels = mat.channels(); + image.data = mat.data; + memset(features, 0, 1024 * sizeof(float)); + m_fr->ExtractCroppedFace(image, features); + + + + //////////////////////////////////////////////////////// + /* + /// + QSqlQuery query; + query.prepare("update " + m_table + " set feature_data = :feature_data where id=:id"); + + query.bindValue(":id", iter->second->id); + + QByteArray bytearray; + bytearray.resize(1024 * sizeof(float)); + memcpy(bytearray.data(), features, 1024 * sizeof(float)); + query.bindValue(":feature_data", QVariant(bytearray)); + if(!query.exec()) + { + //vecs.push_back(iter->second->id); + qDebug() << "failed to update table:" << query.lastError(); + continue; + } + */ + ////////////////////////////////////////////////////// + QSqlQuery query2; + query2.prepare("insert into " + m_table + "_tmp (id, name, image_path, feature_data) values (:id, :name, :image_path, :feature_data)"); + + query2.bindValue(":id", iter->second->id); + query2.bindValue(":name",iter->second->name); + query2.bindValue(":image_path", iter->second->image_path); + + QByteArray bytearray; + bytearray.resize(1024 * sizeof(float)); + memcpy(bytearray.data(), features, 1024 * sizeof(float)); + + query2.bindValue(":feature_data", QVariant(bytearray)); + if(!query2.exec()) + { + qDebug() << "failed to update table:" << query.lastError(); + continue; + break; + } + + + /////////////////////////////////////////////// + + + DataInfoTmp * info = new DataInfoTmp; + info->id = iter->second->id; + memcpy(info->features, features, 1024 * sizeof(float)); + vecs.push_back(info); + memcpy(iter->second->features, features, 1024 * sizeof(float)); + } + + if(i < 
m_datalst->size()) + { + + QSqlQuery deltable("drop table " + m_table + "_tmp"); + deltable.exec(); + for(int k=0; kfind(vecs[k]->id); + if(iter != m_datalst->end()) + { + memcpy(iter->second->features, vecs[k]->features, 1024 * sizeof(float)); + delete vecs[k]; + } + } + vecs.clear(); + + } + emit sigprogress(100.0); + qDebug() << "------ResetModelThread---ok:"; + emit sigResetModelEnd(0); +} +/// + + +///////////////////////////////// +InputFilesThread::InputFilesThread(VideoCaptureThread * main, const QString &imagepath, const QString & tmpimagepath) +{ + m_mainthread = main; + m_image_path = imagepath; + m_image_tmp_path = tmpimagepath; + m_exited = false; +} + +InputFilesThread::~InputFilesThread() +{ + qDebug() << "InputFilesThread exited"; +} + +void InputFilesThread::start(const QStringList * files, unsigned int id, const QString & table) +{ + m_table = table; + m_files = files; + m_id = id; + m_exited = false; + QThread::start(); +} + +void InputFilesThread::run() +{ + int num = m_files->size(); + float features[1024]; + QString strerror; + int nret; + QString fileName; + int index; + + float lastvalue = 0.0; + float value = 0.0; + SeetaRect rect; + std::vector datalst; + + for(int i=0; isize(); i++) + { + if(m_exited) + break; + value = (i + 1) / num; + value = value * 100 * 0.8; + if(value - lastvalue >= 1.0) + { + emit sigprogress(value); + lastvalue = value; + } + QString str = QString("current progress : %1%").arg(QString::number(value, 'f',1)); + emit sigprogress(value); + + fileName = m_files->at(i); + + QImage image(fileName); + if(image.isNull()) + continue; + + QFile file(fileName); + QFileInfo fileinfo(fileName); + + ////////////////////////////// + QSqlQuery query; + query.prepare("insert into " + m_table + " (id, name, image_path, feature_data, facex,facey,facewidth,faceheight) values (:id, :name, :image_path, :feature_data,:facex,:facey,:facewidth,:faceheight)"); + + index = m_id + 1; + + QString strfile = QString::number(index) + "_" + 
fileinfo.fileName(); + QString cropfile = m_image_path + "crop_" + strfile; + + memset(features, 0, sizeof(float) * 1024); + nret = m_mainthread->checkimage(fileName, cropfile, features, rect); + strerror = ""; + + if(nret == -2) + { + strerror = "do not find face!"; + }else if(nret == -1) + { + strerror = fileName + " is invalid!"; + }else if(nret == 1) + { + strerror = "find more than one face!"; + }else if(nret == 2) + { + strerror = "quality check failed!"; + } + + if(!strerror.isEmpty()) + { + //QMessageBox::critical(NULL,"critical", strerror, QMessageBox::Yes); + continue; + } + + QString name = fileinfo.completeBaseName();//fileName(); + int n = name.indexOf("_"); + + if(n >= 1) + { + name = name.left(n); + } + + query.bindValue(0, index); + query.bindValue(1,name); + query.bindValue(2, strfile); + + QByteArray bytearray; + bytearray.resize(1024 * sizeof(float)); + memcpy(bytearray.data(), features, 1024 * sizeof(float)); + + query.bindValue(3, QVariant(bytearray)); + query.bindValue(4, rect.x); + query.bindValue(5, rect.y); + query.bindValue(6, rect.width); + query.bindValue(7, rect.height); + if(!query.exec()) + { + QFile::remove(cropfile); + qDebug() << "failed to insert table:" << query.lastError(); + //QMessageBox::critical(NULL, "critical", tr("save face data to database failed!"), QMessageBox::Yes); + continue; + } + + file.copy(m_image_path + strfile); + + + DataInfo * info = new DataInfo(); + info->id = index; + info->name = name; + info->image_path = strfile; + memcpy(info->features, features, 1024 * sizeof(float)); + info->x = rect.x; + info->y = rect.y; + info->width = rect.width; + info->height = rect.height; + datalst.push_back(info); + + m_id++; + } + + if(datalst.size() > 0) + { + emit sigInputFilesUpdateUI( &datalst); + } + + emit sigprogress(100.0); + + datalst.clear(); + emit sigInputFilesEnd(); +} +/// + +VideoCaptureThread::VideoCaptureThread(std::map * datalst, int videowidth, int videoheight) +{ + m_exited = false; + //m_haveimage = 
false; + + m_datalst = datalst; + //m_width = 800; + //m_height = 600; + qDebug() << "video width:" << videowidth << "," << videoheight; + + //std::string modelpath = "/wqy/seeta_sdk/SF3/libs/SF3.0_v1/models/"; + seeta::ModelSetting fd_model; + fd_model.append(gmodelpath + "face_detector.csta"); + fd_model.set_device( seeta::ModelSetting::CPU ); + fd_model.set_id(0); + m_fd = new seeta::FaceDetector(fd_model); + m_fd->set(seeta::FaceDetector::PROPERTY_MIN_FACE_SIZE, 100); + + m_tracker = new seeta::FaceTracker(fd_model, videowidth,videoheight); + m_tracker->SetMinFaceSize(100); //set(seeta::FaceTracker::PROPERTY_MIN_FACE_SIZE, 100); + + seeta::ModelSetting pd_model; + pd_model.append(gmodelpath + "face_landmarker_pts5.csta"); + pd_model.set_device( seeta::ModelSetting::CPU ); + pd_model.set_id(0); + m_pd = new seeta::FaceLandmarker(pd_model); + + + seeta::ModelSetting spoof_model; + spoof_model.append(gmodelpath + "fas_first.csta"); + spoof_model.append(gmodelpath + "fas_second.csta"); + spoof_model.set_device( seeta::ModelSetting::CPU ); + spoof_model.set_id(0); + m_spoof = new seeta::FaceAntiSpoofing(spoof_model); + m_spoof->SetThreshold(0.30, 0.80); + + seeta::ModelSetting fr_model; + fr_model.append(gmodelpath + "face_recognizer.csta"); + fr_model.set_device( seeta::ModelSetting::CPU ); + fr_model.set_id(0); + m_fr = new seeta::FaceRecognizer(fr_model); + + + + /////////////////////////////// + seeta::ModelSetting setting68; + setting68.set_id(0); + setting68.set_device( SEETA_DEVICE_CPU ); + setting68.append(gmodelpath + "face_landmarker_pts68.csta" ); + m_pd68 = new seeta::FaceLandmarker( setting68 ); + + seeta::ModelSetting posemodel; + posemodel.set_device(SEETA_DEVICE_CPU); + posemodel.set_id(0); + posemodel.append(gmodelpath + "pose_estimation.csta"); + m_poseex = new seeta::QualityOfPoseEx(posemodel); + m_poseex->set(seeta::QualityOfPoseEx::YAW_LOW_THRESHOLD, 20); + m_poseex->set(seeta::QualityOfPoseEx::YAW_HIGH_THRESHOLD, 10); + 
m_poseex->set(seeta::QualityOfPoseEx::PITCH_LOW_THRESHOLD, 20); + m_poseex->set(seeta::QualityOfPoseEx::PITCH_HIGH_THRESHOLD, 10); + + seeta::ModelSetting lbnmodel; + lbnmodel.set_device(SEETA_DEVICE_CPU); + lbnmodel.set_id(0); + lbnmodel.append(gmodelpath + "quality_lbn.csta"); + m_lbn = new seeta::QualityOfLBN(lbnmodel); + m_lbn->set(seeta::QualityOfLBN::PROPERTY_BLUR_THRESH, 0.80); + + m_qa = new seeta::QualityAssessor(); + m_qa->add_rule(seeta::INTEGRITY); + m_qa->add_rule(seeta::RESOLUTION); + m_qa->add_rule(seeta::BRIGHTNESS); + m_qa->add_rule(seeta::CLARITY); + m_qa->add_rule(seeta::POSE_EX, m_poseex, true); + + ////////////////////// + + + //m_capture = new cv::VideoCapture(0); + m_capture = NULL;//new cv::VideoCapture; + //m_capture->set( cv::CAP_PROP_FRAME_WIDTH, videowidth ); + //m_capture->set( cv::CAP_PROP_FRAME_HEIGHT, videoheight ); + //int videow = vc.get( CV_CAP_PROP_FRAME_WIDTH ); + //int videoh = vc.get( CV_CAP_PROP_FRAME_HEIGHT ); + + m_workthread = new WorkThread(this); + + m_mainImage = new SeetaImageData(); + //m_curImage = new SeetaImageData(); + m_mainImage->width = m_mainImage->height = m_mainImage->channels= 0; + m_mainImage->data = NULL; + + //m_curImage->width = m_curImage->height = m_curImage->channels= 0; + //m_curImage->data = NULL; +} + +VideoCaptureThread::~VideoCaptureThread() +{ + m_exited = true; + while(!isFinished()) + { + QThread::msleep(1); + } + qDebug() << "VideoCaptureThread exited"; + if( m_capture) + delete m_capture; + delete m_fd; + delete m_pd; + delete m_spoof; + delete m_tracker; + delete m_lbn; + delete m_qa; + + delete m_workthread; + +} + +void VideoCaptureThread::setparamter() +{ + /* + qDebug() << gparamters.MinFaceSize << ", " << gparamters.Fd_Threshold; + qDebug() << gparamters.VideoWidth << ", " << gparamters.VideoHeight; + qDebug() << gparamters.AntiSpoofClarity << ", " << gparamters.AntiSpoofReality; + qDebug() << gparamters.YawLowThreshold << ", " << gparamters.YawHighThreshold; + qDebug() << 
gparamters.PitchLowThreshold << ", " << gparamters.PitchHighThreshold; + */ + m_fd->set(seeta::FaceDetector::PROPERTY_MIN_FACE_SIZE, gparamters.MinFaceSize); + m_fd->set(seeta::FaceDetector::PROPERTY_THRESHOLD, gparamters.Fd_Threshold); + + m_tracker->SetMinFaceSize(gparamters.MinFaceSize); + m_tracker->SetThreshold(gparamters.Fd_Threshold); + m_tracker->SetVideoSize(gparamters.VideoWidth, gparamters.VideoHeight); + + m_spoof->SetThreshold(gparamters.AntiSpoofClarity, gparamters.AntiSpoofReality); + + m_poseex->set(seeta::QualityOfPoseEx::YAW_LOW_THRESHOLD, gparamters.YawLowThreshold); + m_poseex->set(seeta::QualityOfPoseEx::YAW_HIGH_THRESHOLD, gparamters.YawHighThreshold); + m_poseex->set(seeta::QualityOfPoseEx::PITCH_LOW_THRESHOLD, gparamters.PitchLowThreshold); + m_poseex->set(seeta::QualityOfPoseEx::PITCH_HIGH_THRESHOLD, gparamters.PitchHighThreshold); + +} + +seeta::FaceRecognizer * VideoCaptureThread::CreateFaceRecognizer(const QString & modelfile) +{ + + seeta::ModelSetting fr_model; + fr_model.append(gmodelpath + modelfile.toStdString()); + fr_model.set_device( seeta::ModelSetting::CPU ); + fr_model.set_id(0); + seeta::FaceRecognizer * fr = new seeta::FaceRecognizer(fr_model); + return fr; +} + +void VideoCaptureThread::set_fr(seeta::FaceRecognizer * fr) +{ + if(m_fr != NULL) + { + delete m_fr; + } + m_fr = fr; +} + +void VideoCaptureThread::start(const RecognizeType &type) +{ + m_type.type = type.type; + m_type.filename = type.filename; + QThread::start(); +} + +void VideoCaptureThread::run() +{ + int nret = 0; + + + if(m_type.type == 0) + { + m_capture = new cv::VideoCapture; + m_capture->open(m_type.type); + m_capture->set( cv::CAP_PROP_FRAME_WIDTH, gparamters.VideoWidth ); + m_capture->set( cv::CAP_PROP_FRAME_HEIGHT, gparamters.VideoHeight ); + + }else if(m_type.type == 1) + { + m_capture = new cv::VideoCapture; + m_capture->open(m_type.filename.toStdString().c_str()); + m_capture->set( cv::CAP_PROP_FRAME_WIDTH, gparamters.VideoWidth ); + 
m_capture->set( cv::CAP_PROP_FRAME_HEIGHT, gparamters.VideoHeight ); + } + + //m_capture->open("/tmp/test.avi"); + //m_capture->open(0); + //m_capture->set( cv::CAP_PROP_FRAME_WIDTH, gparamters.VideoWidth ); + //m_capture->set( cv::CAP_PROP_FRAME_HEIGHT, gparamters.VideoHeight ); + + + if((m_capture != NULL) && (!m_capture->isOpened())) + { + m_capture->release(); + emit sigEnd(-1); + return; + } + + cv::Mat mat, mat2; + cv::Scalar color; + color = CV_RGB( 0, 255, 0 ); + + m_workthread->start(); + + /* + //mp4,h263,flv + cv::VideoWriter outputvideo; + cv::Size s(800,600); + int codec = outputvideo.fourcc('M', 'P', '4', '2'); + outputvideo.open("/tmp/test.avi", codec, 50.0, s, true); + if(!outputvideo.isOpened()) + { + qDebug() << " write video failed"; + } + */ + + while(!m_exited) + { + if(m_type.type == 2) + { + mat = cv::imread(m_type.filename.toStdString().c_str()); + if(mat.data == NULL) + { + qDebug() << "VideoCapture read failed"; + m_exited = true; + nret = -2; + break; + } + }else + { + if(!m_capture->read(mat)) + { + qDebug() << "VideoCapture read failed"; + m_exited = true; + nret = -2; + break; + } + } + + //(*m_capture) >> mat; + + //cv::imwrite("/tmp/www_test.png",mat); + auto start = system_clock::now(); + if(m_type.type == 1) + { + cv::flip(mat, mat, 1); + }else + { + cv::Size size (gparamters.VideoWidth, gparamters.VideoHeight); + cv::resize(mat, mat2, size, 0, 0, cv::INTER_CUBIC); + mat = mat2.clone(); + } + + if(mat.channels() == 4) + { + cv::cvtColor(mat, mat, cv::COLOR_RGBA2BGR); + } + + SeetaImageData image; + image.height = mat.rows; + image.width = mat.cols; + image.channels = mat.channels(); + image.data = mat.data; + + cv::cvtColor(mat, mat2, cv::COLOR_BGR2RGB); + + auto faces = m_tracker->Track(image); + //qDebug() << "-----track size:" << faces.size; + if( faces.size > 0 ) + { + m_mutex.lock(); + if(!m_readimage) + { + clone_image(image, *m_mainImage); + //cv::Mat tmpmat; + //cv::cvtColor(mat, tmpmat, cv::COLOR_BGR2RGB); + m_mainmat = 
mat2.clone();//tmpmat.clone(); + m_mainfaceinfos.clear(); + for(int i=0; i(end - start); + int spent = duration.count() / 1000; + if(spent - 50 > 0) + { + QThread::msleep(spent - 50); + } + + if(m_type.type == 2) + { + nret = -2; + m_exited = true; + break; + } + } + + if(m_capture != NULL) + { + m_capture->release(); + } + + while(!m_workthread->isFinished()) + { + QThread::msleep(1); + } + + emit sigEnd(nret); +} + +//return 0:success, -1:src image is invalid, -2:do not find face, 1: find more than one face, 2: quality check failed +int VideoCaptureThread::checkimage(const QString & image, const QString & crop, float * features, SeetaRect &rect) +{ + std::string strimage = image.toStdString(); + std::string strcrop = crop.toStdString(); + + cv::Mat mat = cv::imread(strimage.c_str()); + if(mat.empty()) + return -1; + + SeetaImageData img; + img.width = mat.cols; + img.height = mat.rows; + img.channels = mat.channels(); + img.data = mat.data; + + auto face_array = m_fd->detect(img); + + if(face_array.size <= 0) + { + return -2; + }else if(face_array.size > 1) + { + return 1; + } + + SeetaRect& face = face_array.data[0].pos; + SeetaPointF points[5]; + + m_pd->mark(img, face, points); + + m_qa->feed(img, face, points, 5); + auto result1 = m_qa->query(seeta::BRIGHTNESS); + auto result2 = m_qa->query(seeta::RESOLUTION); + auto result3 = m_qa->query(seeta::CLARITY); + auto result4 = m_qa->query(seeta::INTEGRITY); + //auto result5 = m_qa->query(seeta::POSE); + auto result = m_qa->query(seeta::POSE_EX); + + if(result.level == 0 || result1.level == 0 || result2.level == 0 || result3.level == 0 || result4.level == 0 ) + { + return 2; + } + + /* + SeetaPointF points68[68]; + memset( points68, 0, sizeof( SeetaPointF ) * 68 ); + + m_pd68->mark(img, face,points68); + int light, blur, noise; + light = blur = noise = -1; + + m_lbn->Detect( img, points68, &light, &blur, &noise ); + */ + //std::cout << "light:" << light << ", blur:" << blur << ", noise:" << noise << std::endl; + + 
seeta::ImageData cropface = m_fr->CropFaceV2(img, points); + cv::Mat imgmat(cropface.height, cropface.width, CV_8UC(cropface.channels), cropface.data); + + m_fr->ExtractCroppedFace(cropface, features); + + cv::imwrite(strcrop.c_str(), imgmat); + + /////////////////////////////////////////////// + int x = face.x - face.width / 2; + if((x) < 0) + x = 0; + int y = face.y - face.height / 2; + if(y < 0) + y = 0; + + int x2 = face.x + face.width * 1.5; + if(x2 >= img.width) + { + x2 = img.width -1; + } + + int y2 = face.y + face.height * 1.5; + if(y2 >= img.height) + { + y2 = img.height -1; + } + + rect.x = x; + rect.y = y; + rect.width = x2 - x; + rect.height = y2 - y; + + return 0; +} diff --git a/example/qt/seetaface_demo/videocapturethread.h b/example/qt/seetaface_demo/videocapturethread.h new file mode 100644 index 0000000..d3feb15 --- /dev/null +++ b/example/qt/seetaface_demo/videocapturethread.h @@ -0,0 +1,222 @@ +#ifndef VIDEOCAPTURETHREAD_H +#define VIDEOCAPTURETHREAD_H + +#include +#include +#include + +#include "seeta/FaceLandmarker.h" +#include "seeta/FaceDetector.h" +#include "seeta/FaceAntiSpoofing.h" +#include "seeta/Common/Struct.h" +#include "seeta/CTrackingFaceInfo.h" +#include "seeta/FaceTracker.h" +#include "seeta/FaceRecognizer.h" +#include "seeta/QualityAssessor.h" +#include "seeta/QualityOfPoseEx.h" +#include "seeta/QualityOfLBN.h" + + +#include +#include +#include +#include + +#include + +typedef struct RecognizeType +{ + int type; //0: open camera, 1:open video file, 2:open image file + QString filename; //when type is 1 or 2, video file name or image file name + QString title; //windows title +}RecognizeType; + +typedef struct DataInfo +{ + int id; + int x; + int y; + int width; + int height; + QString name; + QString image_path; + float features[1024]; +}DataInfo; + + +typedef struct Config_Paramter +{ + int MinFaceSize; + float Fd_Threshold; + int VideoWidth; + int VideoHeight; + + float YawLowThreshold; + float YawHighThreshold; + float 
PitchLowThreshold; + float PitchHighThreshold; + + float AntiSpoofClarity; + float AntiSpoofReality; + + float Fr_Threshold; + QString Fr_ModelPath; +} Config_Paramter; + + +typedef struct Fr_DataInfo +{ + int pid; + int state; +}Fr_DataInfo; + +class VideoCaptureThread; + +class WorkThread : public QThread +{ + Q_OBJECT +public: + WorkThread(VideoCaptureThread * main); + ~WorkThread(); + +protected: + void run(); + + +signals: + void sigRecognize(int, const QString &, const QString &, float, const QImage &, const QRect &); + +private: + + int recognize(const SeetaTrackingFaceInfo & faceinfo);//, std::vector & datas); + +public: + + VideoCaptureThread * m_mainthread; + std::vector m_lastpids; + std::vector m_lasterrorpids; +}; + + +class ResetModelThread : public QThread +{ + Q_OBJECT +public: + ResetModelThread( const QString &imagepath, const QString & tmpimagepath); + ~ResetModelThread(); + + void start(std::map *datalst, const QString & table, seeta::FaceRecognizer * fr); +protected: + void run(); + + +signals: + //void sigResetModelUpdateUI(std::vector *); + void sigResetModelEnd(int); + void sigprogress(float); + +public: + + seeta::FaceRecognizer * m_fr; + //VideoCaptureThread * m_mainthread; + + std::map * m_datalst; + + QString m_table; + QString m_image_path; + QString m_image_tmp_path; + + bool m_exited; +}; + + +class InputFilesThread : public QThread +{ + Q_OBJECT +public: + InputFilesThread(VideoCaptureThread * main, const QString &imagepath, const QString & tmpimagepath); + ~InputFilesThread(); + + void start(const QStringList * files, unsigned int id, const QString & table); +protected: + void run(); + + +signals: + void sigInputFilesUpdateUI(std::vector *); + void sigInputFilesEnd(); + void sigprogress(float); + +public: + + VideoCaptureThread * m_mainthread; + + const QStringList * m_files; + unsigned int m_id; + QString m_table; + QString m_image_path; + QString m_image_tmp_path; + + bool m_exited; +}; + + +class VideoCaptureThread : public 
QThread +{ + Q_OBJECT +public: + VideoCaptureThread(std::map * datalst, int videowidth, int videoheight); + ~VideoCaptureThread(); + //void setMinFaceSize(int size); + + void setparamter(); + int checkimage(const QString & image, const QString & crop, float * features, SeetaRect &rect); + + void start(const RecognizeType &type); + + seeta::FaceRecognizer * CreateFaceRecognizer(const QString & modelfile); + void set_fr(seeta::FaceRecognizer * fr); +protected: + void run(); + + +signals: + void sigUpdateUI(const QImage & image); + void sigEnd(int); + +private: + + cv::VideoCapture * m_capture; + +public: + seeta::FaceDetector * m_fd; + seeta::FaceLandmarker * m_pd; + seeta::FaceLandmarker * m_pd68; + seeta::FaceAntiSpoofing * m_spoof; + seeta::FaceRecognizer * m_fr; + seeta::FaceTracker * m_tracker; + seeta::QualityAssessor * m_qa; + seeta::QualityOfLBN * m_lbn; + seeta::QualityOfPoseEx * m_poseex; + +public: + bool m_isrun; + bool m_exited; + + + + std::map *m_datalst; + + bool m_readimage; + SeetaImageData *m_mainImage; + cv::Mat m_mainmat; + + std::vector m_mainfaceinfos; + + WorkThread * m_workthread; + QMutex m_mutex; + + RecognizeType m_type; +}; + +#endif // VIDEOCAPTURETHREAD_H diff --git a/example/qt/seetaface_demo/white.png b/example/qt/seetaface_demo/white.png new file mode 100644 index 0000000..7fe9a48 Binary files /dev/null and b/example/qt/seetaface_demo/white.png differ