Developing an Android Face Recognition App with CameraX and the ArcSoft (虹软) SDK
2021/12/13 11:17:46
This article walks through building an Android face recognition app with CameraX on top of the ArcSoft SDK. It should be a useful reference for anyone tackling the same problem, so if that's you, follow along.
一、Preface
Requirement: given the face currently seen by the camera, retrieve the user information registered for that face in the system.
Design:
① Registration: the user takes a selfie in the front-end app and sends it to the backend for registration. The backend extracts face feature data from the photo and stores it in the user database.
② Recognition: the user faces the device camera. When the front end detects a face, it extracts the feature data and sends it to the backend to look up the user. The backend compares the feature data against the database and returns the user with the highest similarity score at or above 0.9 (a comparison sketch follows this list).
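The comparison step could look like the minimal sketch below. It is written against the Android SDK's FaceEngine.compareFaceFeature purely for illustration; a real deployment would normally run this on the server with ArcSoft's server-side SDK. The UserRecord type, the candidates list, and the FaceFeature(byte[]) constructor usage are assumptions, not code from this article.

// Assumed imports: com.arcsoft.face.FaceEngine, FaceFeature, FaceSimilar, ErrorInfo; java.util.List
// Hypothetical helper: return the stored user whose feature best matches the probe,
// if the similarity reaches the 0.9 threshold from the design above; otherwise return null.
public UserRecord findBestMatch(FaceEngine engine, FaceFeature probe, List<UserRecord> candidates) {
    UserRecord best = null;
    float bestScore = 0.9f; // minimum similarity required by the design
    for (UserRecord candidate : candidates) {
        // rebuild a FaceFeature from the bytes saved at registration (assumed byte[] constructor)
        FaceFeature stored = new FaceFeature(candidate.getFeatureData());
        FaceSimilar similar = new FaceSimilar();
        int code = engine.compareFaceFeature(probe, stored, similar);
        if (code == ErrorInfo.MOK && similar.getScore() >= bestScore) {
            bestScore = similar.getScore();
            best = candidate;
        }
    }
    return best; // null means nobody reached the threshold
}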
Preparation: register an ArcSoft account to get an APP_ID and SDK_KEY (the paid edition also needs an Active_Key), and download the SDK.
二、Setting up the SDK
To use the SDK, first add it to the project; it then needs to be activated once, and initialized every time before use.
1、Switch to the Project view and copy the downloaded SDK files into the corresponding locations, then reference them in build.gradle.
2、Activate the SDK in MainActivity. In production, don't activate on every launch; just record that the first activation succeeded (a sketch of that guard follows the code below).
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_main);
    toActiveFaceSDK(); // activate the SDK
}

public void toActiveFaceSDK() {
    int status = FaceEngine.activeOnline(this, YOUR_APP_ID, YOUR_SDK_KEY);
    if (status == ErrorInfo.MOK || status == ErrorInfo.MERR_ASF_ALREADY_ACTIVATED) {
        Toast.makeText(this, "Activation succeeded", Toast.LENGTH_LONG).show();
        initEngine(); // activation succeeded, so initialize the SDK for use
    } else if (status == ErrorInfo.MERR_ASF_VERSION_NOT_SUPPORT) {
        Toast.makeText(this, "This Android version does not support face recognition", Toast.LENGTH_LONG).show();
    } else {
        Toast.makeText(this, "Activation failed, status code: " + status, Toast.LENGTH_LONG).show();
    }
}
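A minimal sketch of the "activate only once" guard mentioned above, using SharedPreferences; the preference file name "face_sdk" and key "activated" are assumptions for illustration.

// Skip activeOnline once a successful activation has been recorded on this device.
public void toActiveFaceSDKOnce() {
    SharedPreferences prefs = getSharedPreferences("face_sdk", MODE_PRIVATE); // assumed preference file name
    if (prefs.getBoolean("activated", false)) {
        initEngine(); // already activated on a previous launch, go straight to init
        return;
    }
    int status = FaceEngine.activeOnline(this, YOUR_APP_ID, YOUR_SDK_KEY);
    if (status == ErrorInfo.MOK || status == ErrorInfo.MERR_ASF_ALREADY_ACTIVATED) {
        prefs.edit().putBoolean("activated", true).apply(); // remember the successful activation
        initEngine();
    }
}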
3、Initialize the SDK before use, and remember to uninitialize it when the page's onDestroy runs (a sketch of that hook follows the code below).
public void initEngine() {
    // initialize the SDK
    faceObj = new FaceEngine();
    DetectFaceOrientPriority ftOrient = DetectFaceOrientPriority.ASF_OP_ALL_OUT; // default: detect faces at every orientation
    // DetectFaceOrientPriority ftOrient = DetectFaceOrientPriority.ASF_OP_270_ONLY; // 270° only
    afCode = faceObj.init(that.getApplicationContext(), DetectMode.ASF_DETECT_MODE_VIDEO, ftOrient, 16, 3,
            FaceEngine.ASF_FACE_DETECT | FaceEngine.ASF_AGE | FaceEngine.ASF_FACE3DANGLE
                    | FaceEngine.ASF_GENDER | FaceEngine.ASF_LIVENESS | FaceEngine.ASF_FACE_RECOGNITION);
    if (afCode != ErrorInfo.MOK) {
        if (afCode == ErrorInfo.MERR_ASF_NOT_ACTIVATED) {
            Toast.makeText(this, "The SDK is not activated on this device", Toast.LENGTH_LONG).show();
        } else {
            Toast.makeText(this, "Initialization failed: " + afCode, Toast.LENGTH_LONG).show();
        }
    }
}

public void unInitEngine() {
    // uninitialize the SDK
    if (afCode == 0) {
        afCode = faceObj.unInit();
    }
}
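For completeness, a minimal onDestroy sketch matching the note above, assuming the Activity owns the faceObj and cameraProvider fields shown in these snippets:

@Override
protected void onDestroy() {
    super.onDestroy();
    if (cameraProvider != null) {
        cameraProvider.unbindAll(); // release the CameraX use cases
    }
    unInitEngine(); // release the ArcSoft engine
}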
三、Configuring the camera
Android has had three generations of Camera APIs so far. The official ArcFace tutorial uses Camera 1, but when I brought that code into Android Studio the classes were struck through and flagged as deprecated, with a suggestion to use Camera2 instead. So I tried rewriting the official Camera1 code with Camera2; I barely got it working, but honestly the compatibility issues never ended, and well over half of the code was device-specific workarounds. In the end I went with CameraX, whose self-adapting preview alone makes it the clear winner. Recommended reading: 《为什么使用CameraX》 (Why use CameraX).
1、CameraX needs a few packages added to build.gradle, as follows:
def camerax_version = "1.1.0-alpha08"
implementation("androidx.camera:camera-core:${camerax_version}")
implementation("androidx.camera:camera-camera2:${camerax_version}")
implementation("androidx.camera:camera-lifecycle:${camerax_version}")
implementation("androidx.camera:camera-view:1.0.0-alpha25")
implementation("androidx.camera:camera-extensions:1.0.0-alpha25")
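One thing these steps don't spell out: the app also needs the CAMERA permission, declared in AndroidManifest.xml and granted at runtime, before the preview will start. A minimal sketch, assuming an arbitrary request code of 100:

// In AndroidManifest.xml: <uses-permission android:name="android.permission.CAMERA" />
private void ensureCameraPermission() {
    if (ContextCompat.checkSelfPermission(this, Manifest.permission.CAMERA)
            != PackageManager.PERMISSION_GRANTED) {
        // ask the user; 100 is an arbitrary request code handled in onRequestPermissionsResult
        ActivityCompat.requestPermissions(this, new String[]{Manifest.permission.CAMERA}, 100);
    } else {
        startCameraX(); // permission already granted, safe to start the camera
    }
}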
2、Create the preview view
// Fields defined on the page ('that' refers to this Activity throughout the article)
public int rational;
private Size mImageReaderSize;
private FaceEngine faceObj;                           // SDK engine object
private int afCode = -1;                              // SDK state
private ProcessCameraProvider cameraProvider = null;  // CameraX provider
private Preview mPreviewBuild = null;                 // CameraX Preview use case
private PreviewView viewFinder = null;                // CameraX's own preview view
private byte[] nv21 = null;                           // live NV21 image stream

protected void onCreate() { // continuing inside the onCreate(Bundle) shown earlier
    RelativeLayout layout = that.findViewById(R.id.camerax_preview);
    viewFinder = new PreviewView(that); // created dynamically here; you can also declare it directly in the layout
    layout.addView(viewFinder);
    Point screenSize = new Point(); // total screen width/height
    getWindowManager().getDefaultDisplay().getSize(screenSize); // getRealSize would include the status bar
    rational = aspectRatio(screenSize.x, screenSize.y);
    startCameraX();
}

private double RATIO_4_3_VALUE = 4.0 / 3.0;
private double RATIO_16_9_VALUE = 16.0 / 9.0;

private int aspectRatio(int width, int height) {
    double previewRatio = Math.max(width, height) * 1.00 / Math.min(width, height);
    if (Math.abs(previewRatio - RATIO_4_3_VALUE) <= Math.abs(previewRatio - RATIO_16_9_VALUE)) {
        return AspectRatio.RATIO_4_3;
    }
    return AspectRatio.RATIO_16_9;
}
3、Initialize the camera
private void startCameraX() {
    ListenableFuture<ProcessCameraProvider> providerFuture = ProcessCameraProvider.getInstance(that);
    providerFuture.addListener(() -> {
        try {
            // check that the CameraProvider is available
            cameraProvider = providerFuture.get();
        } catch (ExecutionException | InterruptedException e) {
            Toast.makeText(this, "CameraX is unavailable", Toast.LENGTH_LONG).show();
            e.printStackTrace();
            return;
        }
        CameraSelector cameraSelector = new CameraSelector.Builder().requireLensFacing(
                CameraSelector.LENS_FACING_FRONT // or CameraSelector.LENS_FACING_BACK
        ).build();

        // bind the preview
        mPreviewBuild = new Preview.Builder()
                // .setTargetRotation(Surface.ROTATION_180) // set the preview rotation
                .build();
        mPreviewBuild.setSurfaceProvider(viewFinder.getSurfaceProvider()); // bind to the display

        // Image analysis: listens to the live image stream.
        // Blocking mode, ImageAnalysis.STRATEGY_BLOCK_PRODUCER: the executor receives frames in order;
        //   if analyze() takes longer than one frame interval, the received frames may no longer be the
        //   latest, because new frames are blocked from entering the pipeline until the method returns.
        // Non-blocking mode, ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST: when analyze() is called, the
        //   executor receives the latest available frame; if the method takes longer than one frame
        //   interval, frames are skipped so the next analyze() gets the newest frame in the pipeline.
        ImageAnalysis imageAnalysis = new ImageAnalysis.Builder()
                .setTargetAspectRatio(rational)
                .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
                .build();
        imageAnalysis.setAnalyzer(
                ContextCompat.getMainExecutor(this),
                new MyAnalyzer() // the image analysis class defined in step 4 below
        );

        // unbind everything before binding to the lifecycle
        cameraProvider.unbindAll();
        // bind
        Camera mCameraX = cameraProvider.bindToLifecycle(
                (LifecycleOwner) this, cameraSelector, imageAnalysis, mPreviewBuild);
    }, ContextCompat.getMainExecutor(this));
}
4、The image analysis class (the ImageUtil code it uses is given at the end)
/**
 * Analyzer that processes the live preview frames
 */
private class MyAnalyzer implements ImageAnalysis.Analyzer {
    private byte[] y;
    private byte[] u;
    private byte[] v;
    private ReentrantLock lock = new ReentrantLock();
    private Object mImageReaderLock = 1; // 1 available, 0 unavailable
    private long lastDrawTime = 0;
    private int timerSpace = 300; // recognition interval in ms

    @Override
    public void analyze(@NonNull ImageProxy imageProxy) {
        @SuppressLint("UnsafeOptInUsageError")
        Image mediaImage = imageProxy.getImage();
        // int rotationDegrees = imageProxy.getImageInfo().getRotationDegrees();
        if (mediaImage != null) {
            synchronized (mImageReaderLock) {
                /* throttle the recognition rate and check the lock state */
                long start = System.currentTimeMillis();
                if (start - lastDrawTime < timerSpace || !mImageReaderLock.equals(1)) {
                    mediaImage.close();
                    imageProxy.close();
                    return;
                }
                lastDrawTime = System.currentTimeMillis();
                /* end of throttling */

                // check the YUV type; the format we requested is YUV_420_888
                if (ImageFormat.YUV_420_888 == mediaImage.getFormat()) {
                    Image.Plane[] planes = mediaImage.getPlanes();
                    if (mImageReaderSize == null) {
                        mImageReaderSize = new Size(planes[0].getRowStride(), mediaImage.getHeight());
                    }
                    lock.lock();
                    if (y == null) {
                        y = new byte[planes[0].getBuffer().limit() - planes[0].getBuffer().position()];
                        u = new byte[planes[1].getBuffer().limit() - planes[1].getBuffer().position()];
                        v = new byte[planes[2].getBuffer().limit() - planes[2].getBuffer().position()];
                    }
                    // copy the y, u and v data out of the planes
                    if (mediaImage.getPlanes()[0].getBuffer().remaining() == y.length) {
                        planes[0].getBuffer().get(y);
                        planes[1].getBuffer().get(u);
                        planes[2].getBuffer().get(v);
                        if (nv21 == null) {
                            nv21 = new byte[planes[0].getRowStride() * mediaImage.getHeight() * 3 / 2];
                        }
                        if (nv21 != null && (nv21.length != planes[0].getRowStride() * mediaImage.getHeight() * 3 / 2)) {
                            // unexpected buffer size: release the lock and the frame before bailing out
                            lock.unlock();
                            mediaImage.close();
                            imageProxy.close();
                            return;
                        }
                        // the frame data is YUV422
                        if (y.length / u.length == 2) {
                            ImageUtil.yuv422ToYuv420sp(y, u, v, nv21, planes[0].getRowStride(), mediaImage.getHeight());
                        }
                        // the frame data is YUV420
                        else if (y.length / u.length == 4) {
                            nv21 = ImageUtil.yuv420ToNv21(mediaImage);
                        }
                        // hand the frame to the ArcSoft engine to extract the face feature
                        getFaceInfo(nv21);
                    }
                    lock.unlock();
                }
            }
        }
        // always close the frame when done
        if (mediaImage != null) {
            mediaImage.close();
        }
        imageProxy.close();
    }
}
5、Run the face feature analysis
/**
 * Extract the face feature
 */
private int lastFaceID = -1;

private void getFaceInfo(byte[] nv21) {
    List<FaceInfo> faceInfoList = new ArrayList<FaceInfo>();
    FaceFeature faceFeature = new FaceFeature();
    // Step 1: feed the frame to the ArcSoft SDK and check whether it contains a face.
    int code = faceObj.detectFaces(nv21, mImageReaderSize.getWidth(), mImageReaderSize.getHeight(), FaceEngine.CP_PAF_NV21, faceInfoList);
    // If the data is valid and a face was found, go on to recognition.
    if (code == ErrorInfo.MOK && faceInfoList.size() > 0) {
        int newFaceID = faceInfoList.get(0).getFaceId(); // the same face keeps the same ID as long as it stays in frame
        if (lastFaceID == newFaceID) {
            // same face as last time, skip it
            Log.i("Same face ID", "old " + lastFaceID); // + ", new " + newFaceID
            return;
        }
        lastFaceID = newFaceID;
        // long start = System.currentTimeMillis(); // feature extraction can be slow on some devices
        code = faceObj.extractFaceFeature(nv21, mImageReaderSize.getWidth(), mImageReaderSize.getHeight(), FaceEngine.CP_PAF_NV21, faceInfoList.get(0), faceFeature);
        if (code != ErrorInfo.MOK) {
            return;
        }
        byte[] featureData = faceFeature.getFeatureData(); // the final feature data!
        post(featureData); // send the data to the backend to look up the user
        // Log.e(TAG, "feature length " + faceFeature.getFeatureData().length);
        // long end = System.currentTimeMillis();
        // Log.e(TAG + lastFaceID, "took " + (end - start) + "ms, feature data length: " + featureData.length);
    }
}
四、ImageUtil code
public class ImageUtil {
    /**
     * Convert Y:U:V == 4:2:2 data to NV21
     *
     * @param y      Y data
     * @param u      U data
     * @param v      V data
     * @param nv21   the resulting NV21 buffer, must be pre-allocated
     * @param stride row stride
     * @param height image height
     */
    public static void yuv422ToYuv420sp(byte[] y, byte[] u, byte[] v, byte[] nv21, int stride, int height) {
        System.arraycopy(y, 0, nv21, 0, y.length);
        // Note: using y.length * 3 / 2 as the length risks an array overflow; compute it from the real data lengths.
        int length = y.length + u.length / 2 + v.length / 2;
        int uIndex = 0, vIndex = 0;
        for (int i = stride * height; i < length; i += 2) {
            nv21[i] = v[vIndex];
            nv21[i + 1] = u[uIndex];
            vIndex += 2;
            uIndex += 2;
        }
    }

    /**
     * Convert Y:U:V == 4:1:1 data to NV21
     *
     * @param y      Y data
     * @param u      U data
     * @param v      V data
     * @param nv21   the resulting NV21 buffer, must be pre-allocated
     * @param stride row stride
     * @param height image height
     */
    public static void yuv420ToYuv420sp(byte[] y, byte[] u, byte[] v, byte[] nv21, int stride, int height) {
        System.arraycopy(y, 0, nv21, 0, y.length);
        // Note: using y.length * 3 / 2 as the length risks an array overflow; compute it from the real data lengths.
        int length = y.length + u.length + v.length;
        int uIndex = 0, vIndex = 0;
        for (int i = stride * height; i < length; i += 2) { // step by 2: each iteration writes one V and one U byte
            nv21[i] = v[vIndex++];
            nv21[i + 1] = u[uIndex++];
        }
    }

    public static byte[] yuv420ToNv21(Image image) {
        Image.Plane[] planes = image.getPlanes();
        ByteBuffer yBuffer = planes[0].getBuffer();
        ByteBuffer uBuffer = planes[1].getBuffer();
        ByteBuffer vBuffer = planes[2].getBuffer();
        int ySize = yBuffer.remaining();
        int uSize = uBuffer.remaining();
        int vSize = vBuffer.remaining();
        int size = image.getWidth() * image.getHeight();
        byte[] nv21 = new byte[size * 3 / 2];
        yBuffer.get(nv21, 0, ySize);
        vBuffer.get(nv21, ySize, vSize);
        byte[] u = new byte[uSize];
        uBuffer.get(u);
        // write U into every other slot after V so the result is VU-interleaved
        int pos = ySize + 1;
        for (int i = 0; i < uSize; i++) {
            if (i % 2 == 0) {
                nv21[pos] = u[i];
                pos += 2;
            }
        }
        return nv21;
    }

    public static Bitmap nv21ToBitmap(byte[] nv21, int w, int h) {
        YuvImage image = new YuvImage(nv21, ImageFormat.NV21, w, h, null);
        ByteArrayOutputStream stream = new ByteArrayOutputStream();
        image.compressToJpeg(new Rect(0, 0, w, h), 80, stream);
        Bitmap newBitmap = BitmapFactory.decodeByteArray(stream.toByteArray(), 0, stream.size());
        try {
            stream.close();
        } catch (Exception e) {
        }
        return newBitmap;
    }
}
That's all
Thanks !!!
Appendix:
Pitfalls hit along the way: 《人脸识别坑》 (Face recognition pitfalls)
For how to send the binary feature data to the backend, see my other article 《安卓form-data形式上传二进制》 (uploading binary data as form-data on Android).