My course project needed an image-processing library. Mobile development is about the only thing within my skill set, so I figured I would just build an app and hand that in. The catch is that any image-processing library that runs on a phone has to meet two requirements: it must be fast (every frame captured by the camera has to be processed quickly, or the app will stutter in practice) and lightweight (you cannot stuff a multi-gigabyte model onto a phone). After some thought, the realistic options came down to OpenCV and TensorFlow Lite (the latter lets you swap models, which seems a bit nicer). This post documents the normal workflow of using the OpenCV library.
Preparation
OpenCV SDK
- Releases - OpenCV (download the SDK here)
- The latest version as of 2023-12-28 is 4.9.0. The package we need is the Android one; just download it.
Android Studio
- This tutorial was written in April 2024, with Android Studio version Android Studio Iguana | 2023.2.1.
- Create a project as usual.
- At the moment OpenCV's camera preview view is still implemented with Camera or Camera2; it has not adopted Google's newer CameraX yet (hopefully that changes soon, since CameraX is much nicer to work with).
- When choosing the Build configuration language, pick Groovy. The OpenCV module we are about to integrate still uses Groovy in its own build.gradle, and using the newer Kotlin DSL will produce build errors (see the short comparison below).
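For context, this wizard option only controls which syntax your project's own build scripts use. The same line looks like this in the two dialects (a trivial sketch; the file names are the ones Android Studio generates):

```gradle
// settings.gradle (Groovy): what you get when Groovy is selected
include ':app'

// settings.gradle.kts (Kotlin DSL): the equivalent line, shown only for comparison
// include(":app")
```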
Getting started
Importing the SDK
- Open Project Structure.
- Choose to import a Module.
- Locate the OpenCV package you downloaded and import it.
- Rename the imported Module if you like; you can keep the default, but I think renaming it is tidier.
- Click Finish and the import completes.
- We also need to add the module to our own app as a Dependency: choose Module Dependency.
- Select opencv and click OK.
- You can see that opencv has been added successfully (the equivalent Gradle entries are sketched below).
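If you prefer to verify (or do) this step by hand, the wizard's changes boil down to two Gradle entries, roughly as sketched below, assuming the module was renamed to opencv:

```gradle
// settings.gradle: registers the imported module with the project
include ':opencv'

// app/build.gradle: declares the module as a dependency of the app module
dependencies {
    implementation project(':opencv')
}
```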
Modifying the files
- The imported opencv module cannot be used right away: many settings in its build.gradle differ from those in our own project's build.gradle, so the two have to be brought in sync.
- The build.gradle under opencv already contains comments telling you how to adjust each setting. I am not a fan of keeping old versions around, so to keep the configuration in sync with my project's build.gradle I bumped every version I reasonably could.
- First, here is the app/build.gradle generated by default:

```gradle
plugins {
alias(libs.plugins.androidApplication)
alias(libs.plugins.jetbrainsKotlinAndroid)
}
android {
namespace 'com.ericmoin.opencv_demo'
compileSdk 34
defaultConfig {
applicationId "com.ericmoin.opencv_demo"
minSdk 24
targetSdk 34
versionCode 1
versionName "1.0"
testInstrumentationRunner "androidx.test.runner.AndroidJUnitRunner"
}
buildTypes {
release {
minifyEnabled false
proguardFiles getDefaultProguardFile('proguard-android-optimize.txt'), 'proguard-rules.pro'
}
}
compileOptions {
sourceCompatibility JavaVersion.VERSION_17
targetCompatibility JavaVersion.VERSION_17
}
kotlinOptions {
jvmTarget = '17'
}
buildFeatures {
viewBinding true
}
}
dependencies {
implementation project(':opencv')
implementation libs.androidx.core.ktx
implementation libs.androidx.appcompat
implementation libs.material
implementation libs.androidx.activity
implementation libs.androidx.constraintlayout
testImplementation libs.junit
androidTestImplementation libs.androidx.junit
androidTestImplementation libs.androidx.espresso.core
}
```
- From this we get minSdk 24 and targetSdk 34; these are the values we will carry over into opencv/build.gradle in a moment.
- We also need to know the kotlin and gradle versions. Every Android Studio release adjusts the gradle setup under the project structure, and in the current release the gradle version number is no longer shown directly; Ctrl+left-click on libs.plugins.jetbrainsKotlinAndroid to jump to the file it links to, and you will end up at the right file.
- There you can find the corresponding version numbers. Next, modify opencv/build.gradle:

```gradle
apply plugin: 'com.android.library'
apply plugin: 'maven-publish'
apply plugin: 'kotlin-android'
def openCVersionName = "4.9.0"
def openCVersionCode = ((4 * 100 + 9) * 100 + 0) * 10 + 0
println "OpenCV: " +openCVersionName + " " + project.buildscript.sourceFile
android {
namespace 'org.opencv'
// compileSdkVersion 31
compileSdkVersion 34
defaultConfig {
minSdkVersion 24
// minSdkVersion 21
// targetSdkVersion 31
targetSdkVersion 34
versionCode openCVersionCode
versionName openCVersionName
externalNativeBuild {
cmake {
arguments "-DANDROID_STL=c++_shared"
targets "opencv_jni_shared"
}
}
}
compileOptions {
// sourceCompatibility JavaVersion.VERSION_1_8
sourceCompatibility JavaVersion.VERSION_17
targetCompatibility JavaVersion.VERSION_17
// targetCompatibility JavaVersion.VERSION_1_8
}
buildTypes {
debug {
packagingOptions {
doNotStrip '**/*.so' // controlled by OpenCV CMake scripts
}
}
release {
packagingOptions {
doNotStrip '**/*.so' // controlled by OpenCV CMake scripts
}
minifyEnabled false
proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.txt'
}
}
buildFeatures {
aidl true
prefabPublishing true
buildConfig true
}
prefab {
opencv_jni_shared {
headers "native/jni/include"
}
}
sourceSets {
main {
jniLibs.srcDirs = ['native/libs']
java.srcDirs = ['java/src']
aidl.srcDirs = ['java/src']
res.srcDirs = ['java/res']
manifest.srcFile 'java/AndroidManifest.xml'
}
}
publishing {
singleVariant('release') {
withSourcesJar()
withJavadocJar()
}
}
externalNativeBuild {
cmake {
path (project.projectDir.toString() + '/libcxx_helper/CMakeLists.txt')
}
}
}
publishing {
publications {
release(MavenPublication) {
groupId = 'org.opencv'
artifactId = 'opencv'
version = '4.9.0'
afterEvaluate {
from components.release
}
}
}
repositories {
maven {
name = 'myrepo'
url = "${project.buildDir}/repo"
}
}
}
dependencies {
}
```
- The commented-out lines are the file's original contents; note the handful of places that were changed. The Java version is up to you as long as the build works; I used JDK 17 here.
- If everything above went smoothly, run the project once right away; if the steps were done correctly the app starts up normally.
- If you hit errors, check whether your gradle versions match and whether the JDK used for compilation is the same (see the note below).
- OpenCV 4.9's build.gradle no longer contains the gradle version number section. If you are on an earlier OpenCV release, pay extra attention to mismatched version numbers and to the differences between the two build.gradle files and settings.gradle.
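A quick way to compare the Gradle side is gradle/wrapper/gradle-wrapper.properties; the JDK used for compilation can be checked under Settings, Build, Execution, Deployment, Build Tools, Gradle (the Gradle JDK field). A sketch of the wrapper file follows; the version number is only an example, keep whatever your Android Studio release generated:

```properties
# gradle/wrapper/gradle-wrapper.properties
distributionBase=GRADLE_USER_HOME
distributionPath=wrapper/dists
# 8.4 is a placeholder; keep the version your project was generated with
distributionUrl=https\://services.gradle.org/distributions/gradle-8.4-bin.zip
zipStoreBase=GRADLE_USER_HOME
zipStorePath=wrapper/dists
```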
Writing the basic layout
- Since this is only a demo, I plan to show just two features here: OpenCV edge detection and image grayscale conversion.
- That keeps the layout very simple: a DetectFragment and an ImageFragment, plus the MainFragment shown at startup, where the permission requests and the navigation between destinations are also handled.
- First, add the required libraries in app/build.gradle:

```gradle
dependencies {
implementation project(':opencv')
// implementation "com.guolindev.permissionx:permissionx:1.7.1"
// def nav_version = "2.7.7"
// implementation "androidx.navigation:navigation-fragment-ktx:$nav_version"
// implementation "androidx.navigation:navigation-ui-ktx:$nav_version"
// implementation "androidx.navigation:navigation-dynamic-features-fragment:$nav_version"
// androidTestImplementation "androidx.navigation:navigation-testing:$nav_version"
// implementation "androidx.navigation:navigation-compose:$nav_version"
implementation libs.permissionx
implementation libs.androidx.navigation.fragment.ktx
implementation libs.androidx.navigation.ui.ktx
implementation libs.androidx.navigation.dynamic.features.fragment
androidTestImplementation libs.androidx.navigation.testing
implementation libs.androidx.navigation.compose
implementation libs.androidx.core.ktx
implementation libs.androidx.appcompat
implementation libs.material
implementation libs.androidx.activity
implementation libs.androidx.constraintlayout
testImplementation libs.junit
androidTestImplementation libs.androidx.junit
androidTestImplementation libs.androidx.espresso.core
}
```
- The commented-out lines above are the older notation; Android Studio will rewrite them into the new form. I include them only to show the import style used in older versions of Android Studio.
- Create MainFragment.kt, DetectFragment.kt, and ImageFragment.kt in turn.
- Write activity_main.xml:

```xml
<?xml version="1.0" encoding="utf-8"?>
<androidx.constraintlayout.widget.ConstraintLayout
xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:app="http://schemas.android.com/apk/res-auto"
xmlns:tools="http://schemas.android.com/tools"
android:id="@+id/main"
android:layout_width="match_parent"
android:layout_height="match_parent"
tools:context=".MainActivity">
<androidx.fragment.app.FragmentContainerView
android:id="@+id/container"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:name="androidx.navigation.fragment.NavHostFragment"
app:defaultNavHost="true"
app:navGraph="@navigation/main_navigation"
/>
</androidx.constraintlayout.widget.ConstraintLayout>
```
- Write fragment_main.xml:

```xml
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout
xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:app="http://schemas.android.com/apk/res-auto"
xmlns:tools="http://schemas.android.com/tools"
android:id="@+id/mainFragment"
android:layout_width="match_parent"
android:layout_height="match_parent">
<Button
android:id="@+id/detectButton"
android:text="Go to detection"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
/>
<Button
android:id="@+id/imageButton"
android:text="Go to image"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
/>
</LinearLayout>
```
- Write fragment_detect.xml:

```xml
<?xml version="1.0" encoding="utf-8"?>
<androidx.constraintlayout.widget.ConstraintLayout
xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:app="http://schemas.android.com/apk/res-auto"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent">
<org.opencv.android.JavaCamera2View
android:id="@+id/cameraView"
android:layout_width="match_parent"
android:layout_height="match_parent"
/>
</androidx.constraintlayout.widget.ConstraintLayout>
```
- Write fragment_image.xml:

```xml
<?xml version="1.0" encoding="utf-8"?>
<androidx.constraintlayout.widget.ConstraintLayout
xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:tools="http://schemas.android.com/tools"
xmlns:app="http://schemas.android.com/apk/res-auto"
android:layout_width="match_parent"
android:layout_height="match_parent">
<ImageView
android:id="@+id/image"
android:background="@drawable/example"
android:layout_width="250dp"
android:layout_height="250dp"
android:layout_marginTop="100dp"
app:layout_constraintTop_toTopOf="parent"
app:layout_constraintStart_toStartOf="parent"
app:layout_constraintEnd_toEndOf="parent"
/>
<Button
android:id="@+id/button"
android:text="Convert"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
app:layout_constraintTop_toBottomOf="@id/image"
app:layout_constraintStart_toStartOf="parent"
app:layout_constraintEnd_toEndOf="parent"
/>
</androidx.constraintlayout.widget.ConstraintLayout>
```
- The android:background="@drawable/example" here is a sample image I had already put under drawable; replace it with your own image.
Navigation graph
- Under src/res, create a folder named navigation and add a new navigation resource file (the name does not matter; I called it main_navigation.xml).
- Open the file and click add destination.
- We want MainFragment to be able to navigate to DetectFragment and ImageFragment, so create the destinations and connect them one by one; the final result looks as follows (if you have not used this editor before, a quick search will cover it; you really just create the destinations and then drag the connections between them). A sketch of the resulting main_navigation.xml is shown below.
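For reference, the generated file should end up roughly like the sketch below (the exact attributes the graph editor writes can differ slightly). The action IDs are the ones referenced later from MainFragment, and the fragment class names assume the com.ericmoin.opencv_demo package used throughout this demo:

```xml
<?xml version="1.0" encoding="utf-8"?>
<navigation xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    android:id="@+id/main_navigation"
    app:startDestination="@id/mainFragment">
    <fragment
        android:id="@+id/mainFragment"
        android:name="com.ericmoin.opencv_demo.MainFragment"
        android:label="MainFragment">
        <!-- used by binding.detectButton -->
        <action
            android:id="@+id/action_mainFragment_to_detectFragment"
            app:destination="@id/detectFragment" />
        <!-- used by binding.imageButton -->
        <action
            android:id="@+id/action_mainFragment_to_imageFragment"
            app:destination="@id/imageFragment" />
    </fragment>
    <fragment
        android:id="@+id/detectFragment"
        android:name="com.ericmoin.opencv_demo.DetectFragment"
        android:label="DetectFragment" />
    <fragment
        android:id="@+id/imageFragment"
        android:name="com.ericmoin.opencv_demo.ImageFragment"
        android:label="ImageFragment" />
</navigation>
```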
- The MainActivity code differs between Android Studio versions; my version is shown below:

```kotlin
package com.ericmoin.opencv_demo
import android.os.Bundle
import android.widget.Toast
import androidx.activity.enableEdgeToEdge
import androidx.appcompat.app.AppCompatActivity
import androidx.core.view.ViewCompat
import androidx.core.view.WindowInsetsCompat
import com.ericmoin.opencv_demo.databinding.ActivityMainBinding
import org.opencv.android.OpenCVLoader
import org.opencv.imgproc.Imgproc
class MainActivity : AppCompatActivity() {
lateinit var binding: ActivityMainBinding
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
binding = ActivityMainBinding.inflate(layoutInflater)
setContentView(binding.root)
if(OpenCVLoader.initLocal()){
Toast.makeText(this,"OpenCV initialized successfully",Toast.LENGTH_SHORT).show()
}
}
}
```
- Now click Run.
- Everything works.
- Write MainFragment:

```kotlin
package com.ericmoin.opencv_demo
import androidx.fragment.app.viewModels
import android.os.Bundle
import androidx.fragment.app.Fragment
import android.view.LayoutInflater
import android.view.View
import android.view.ViewGroup
import androidx.navigation.fragment.findNavController
import com.ericmoin.opencv_demo.databinding.FragmentMainBinding
class MainFragment : Fragment() {
companion object {
fun newInstance() = MainFragment()
}
lateinit var binding: FragmentMainBinding
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
}
override fun onCreateView(
inflater: LayoutInflater, container: ViewGroup?,
savedInstanceState: Bundle?
): View {
binding = FragmentMainBinding.inflate(inflater,container,false)
return binding.root
}
override fun onViewCreated(view: View, savedInstanceState: Bundle?) {
super.onViewCreated(view, savedInstanceState)
binding.detectButton.setOnClickListener {
findNavController().navigate(R.id.action_mainFragment_to_detectFragment)
}
binding.imageButton.setOnClickListener {
findNavController().navigate(R.id.action_mainFragment_to_imageFragment)
}
}
}
```
Image grayscale conversion
OpenCV's image-processing flow basically follows three steps: convert the Bitmap to a Mat, operate on the Mat, then convert the Mat back to a Bitmap and display it.
- Grayscale conversion comes down to a single function, Imgproc.cvtColor, so it needs little explanation.

```kotlin
package com.ericmoin.opencv_demo
import android.graphics.Bitmap
import androidx.fragment.app.viewModels
import android.os.Bundle
import androidx.fragment.app.Fragment
import android.view.LayoutInflater
import android.view.View
import android.view.ViewGroup
import android.widget.Button
import android.widget.ImageView
import androidx.core.view.drawToBitmap
import com.ericmoin.opencv_demo.databinding.FragmentImageBinding
import org.opencv.android.Utils
import org.opencv.core.CvType
import org.opencv.core.Mat
import org.opencv.imgproc.Imgproc
class ImageFragment : Fragment() {
companion object {
fun newInstance() = ImageFragment()
}
lateinit var binding: FragmentImageBinding
override fun onCreateView(
inflater: LayoutInflater, container: ViewGroup?,
savedInstanceState: Bundle?
): View {
binding = FragmentImageBinding.inflate(inflater,container,false)
return binding.root
}
override fun onViewCreated(view: View, savedInstanceState: Bundle?) {
super.onViewCreated(view, savedInstanceState)
initButton()
}
private fun initButton() {
binding.button.setOnClickListener {
changeImage()
}
}
private fun changeImage() {
// get the Bitmap currently shown by the ImageView
val bitmap = binding.image.drawToBitmap().copy(Bitmap.Config.ARGB_8888,false)
// create an empty matrix
val src = Mat()
// convert the Bitmap into the matrix
Utils.bitmapToMat(bitmap,src)
// convert the image to grayscale
Imgproc.cvtColor(src,src, Imgproc.COLOR_BGR2GRAY)
// convert the processed matrix back into a bitmap
Utils.matToBitmap(src,bitmap)
// display the result
binding.image.setImageBitmap(bitmap)
}
}
```
- Before
- After
Edge detection
- I want real-time edge detection, which means we need the camera-related permissions.
- Declare them in AndroidManifest.xml:

```xml
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:tools="http://schemas.android.com/tools">
<uses-feature android:name="android.hardware.camera.any" />
<uses-feature android:name="android.hardware.autofocus" />
<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<application
android:allowBackup="true"
android:dataExtractionRules="@xml/data_extraction_rules"
android:fullBackupContent="@xml/backup_rules"
android:icon="@mipmap/ic_launcher"
android:label="@string/app_name"
android:roundIcon="@mipmap/ic_launcher_round"
android:supportsRtl="true"
android:theme="@style/Theme.Opencv_demo"
tools:targetApi="31">
<activity
android:name=".MainActivity"
android:exported="true">
<intent-filter>
<action android:name="android.intent.action.MAIN" />
<category android:name="android.intent.category.LAUNCHER" />
</intent-filter>
</activity>
</application>
</manifest>
```
- For requesting the permissions I used guolin's PermissionX library, which was already added in the gradle section above, so we can use it directly here.
- Back in MainActivity:

```kotlin
package com.ericmoin.opencv_demo
import android.Manifest
import android.os.Bundle
import android.widget.Toast
import androidx.appcompat.app.AppCompatActivity
import androidx.core.app.ActivityCompat
import com.ericmoin.opencv_demo.databinding.ActivityMainBinding
import com.permissionx.guolindev.PermissionX
import org.opencv.android.OpenCVLoader
class MainActivity : AppCompatActivity() {
companion object{
private const val REQUEST_CODE_PERMISSIONS = 10
private val REQUIRED_PERMISSIONS = arrayOf(
Manifest.permission.CAMERA,
Manifest.permission.READ_EXTERNAL_STORAGE,
Manifest.permission.WRITE_EXTERNAL_STORAGE
)
}
lateinit var binding: ActivityMainBinding
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
binding = ActivityMainBinding.inflate(layoutInflater)
setContentView(binding.root)
if(OpenCVLoader.initLocal()){
Toast.makeText(this,"OpenCV initialized successfully",Toast.LENGTH_SHORT).show()
}
initPermission()
}
private fun initPermission() {
PermissionX.init(this)
.permissions(
REQUIRED_PERMISSIONS.toList()
)
.request { allGranted, _, _ ->
if ( allGranted ){
Toast.makeText(this,"Permissions granted", Toast.LENGTH_SHORT).show()
}
else{
ActivityCompat.requestPermissions(
this,
REQUIRED_PERMISSIONS,
REQUEST_CODE_PERMISSIONS
)
}
}
}
}
```
- Write DetectFragment:

```kotlin
package com.ericmoin.opencv_demo
import androidx.fragment.app.viewModels
import android.os.Bundle
import android.util.Log
import androidx.fragment.app.Fragment
import android.view.LayoutInflater
import android.view.View
import android.view.ViewGroup
import com.ericmoin.opencv_demo.databinding.FragmentDetectBinding
import org.opencv.android.CameraBridgeViewBase
import org.opencv.android.CameraBridgeViewBase.CvCameraViewListener2
import org.opencv.core.Mat
import org.opencv.core.MatOfPoint
import org.opencv.core.Point
import org.opencv.core.Scalar
import org.opencv.core.Size
import org.opencv.imgproc.Imgproc
class DetectFragment : Fragment() {
companion object {
fun newInstance() = DetectFragment()
}
lateinit var binding: FragmentDetectBinding
override fun onCreateView(
inflater: LayoutInflater, container: ViewGroup?,
savedInstanceState: Bundle?
): View {
binding = FragmentDetectBinding.inflate(inflater,container,false)
return binding.root
}
val cameraViewListener2 = object : CvCameraViewListener2{
override fun onCameraViewStarted(width: Int, height: Int) {
}
override fun onCameraViewStopped() {
}
override fun onCameraFrame(inputFrame: CameraBridgeViewBase.CvCameraViewFrame): Mat? {
return drawBorder(inputFrame.rgba())
}
}
override fun onViewCreated(view: View, savedInstanceState: Bundle?) {
super.onViewCreated(view, savedInstanceState)
binding.cameraView.visibility = CameraBridgeViewBase.VISIBLE
binding.cameraView.setCvCameraViewListener(cameraViewListener2)
binding.cameraView.setCameraPermissionGranted()
}
override fun onResume() {
super.onResume()
binding.cameraView.enableView()
}
override fun onDestroy() {
super.onDestroy()
binding.cameraView.disableView()
}
private fun drawBorder(mat:Mat): Mat? {
val result = Mat()
// convert the input matrix to grayscale so edge detection can run on it
Imgproc.cvtColor(mat,result, Imgproc.COLOR_BGR2GRAY)
// Gaussian blur to reduce noise
Imgproc.GaussianBlur(result, result, Size(3.0, 3.0), 0.0)
// Canny edge detection
Imgproc.Canny(result, result, 0.0, 256.0)
val contours: List<MatOfPoint> = ArrayList()
val hierarchy = Mat()
// find the contours of the detected edges
Imgproc.findContours(
result,
contours,
hierarchy,
Imgproc.RETR_EXTERNAL,
Imgproc.CHAIN_APPROX_SIMPLE
)
if (contours.isEmpty()) {
return null
}
val resultMat = mat.clone()
// draw the contours
for( index in contours.indices ){
// pick a random color for each contour
val scalar = Scalar((0..255).random().toDouble(),(0..255).random().toDouble(),(0..255).random().toDouble())
Imgproc.drawContours(resultMat,contours,index,scalar,1,8,hierarchy,0, Point())
}
return resultMat
}
}
```
- You can see that OpenCV's API on Android stays essentially the same as in Python, which means the Python tutorials all over the web should, in theory, be reproducible on Android.
- Let's run it and see the result.
- It looks pretty good.
- However, the image comes out rotated 90 degrees to the left for no obvious reason. OpenCV does not give us a good place to rotate the preview image as soon as it arrives, and if you rotate the matrix inside the onCameraFrame callback shown above it throws an error (try it if you do not believe me), because the Bitmap actually being drawn and the Bitmap produced from the rotated Mat no longer match in width and height.
- In theory we could skip the bundled preview component entirely and use CameraX instead, applying the same image processing to the Bitmaps it delivers. That is beyond the scope of this post; for CameraX usage see my other article: CameraX 简单使用 - 掘金.
- So the only option left is to modify the SDK that OpenCV provides. I found a good article that solves this problem (it may require a VPN to access): Working with the OpenCV Camera for Android: Rotating, Orienting, and Scaling.
- The core of the fix is to change the deliverAndDrawFrame function in CameraBridgeViewBase, as follows:

```java
private final Matrix mMatrix = new Matrix();
private void updateMatrix() {
float mw = this.getWidth();
float mh = this.getHeight();
float hw = this.getWidth() / 2.0f;
float hh = this.getHeight() / 2.0f;
float cw = (float)Resources.getSystem().getDisplayMetrics().widthPixels; //Make sure to import Resources package
float ch = (float)Resources.getSystem().getDisplayMetrics().heightPixels;
float scale = cw / (float)mh;
float scale2 = ch / (float)mw;
if(scale2 > scale){
scale = scale2;
}
boolean isFrontCamera = mCameraIndex == CAMERA_ID_FRONT;
mMatrix.reset();
if (isFrontCamera) {
mMatrix.preScale(-1, 1, hw, hh); //MH - this will mirror the camera
}
mMatrix.preTranslate(hw, hh);
if (isFrontCamera){
mMatrix.preRotate(270);
} else {
mMatrix.preRotate(90);
}
mMatrix.preTranslate(-hw, -hh);
mMatrix.preScale(scale,scale,hw,hh);
}
@Override
public void layout(int l, int t, int r, int b) {
super.layout(l, t, r, b);
updateMatrix();
}
@Override
protected void onMeasure(int widthMeasureSpec, int heightMeasureSpec) {
super.onMeasure(widthMeasureSpec, heightMeasureSpec);
updateMatrix();
}
/**
* This method shall be called by the subclasses when they have valid
* object and want it to be delivered to external client (via callback) and
* then displayed on the screen.
* @param frame - the current frame to be delivered
*/
protected void deliverAndDrawFrame(CvCameraViewFrame frame) { //replaces existing deliverAndDrawFrame
Mat modified;
if (mListener != null) {
modified = mListener.onCameraFrame(frame);
} else {
modified = frame.rgba();
}
boolean bmpValid = true;
if (modified != null) {
try {
Utils.matToBitmap(modified, mCacheBitmap);
} catch(Exception e) {
Log.e(TAG, "Mat type: " + modified);
Log.e(TAG, "Bitmap type: " + mCacheBitmap.getWidth() + "*" + mCacheBitmap.getHeight());
Log.e(TAG, "Utils.matToBitmap() throws an exception: " + e.getMessage());
bmpValid = false;
}
}
if (bmpValid && mCacheBitmap != null) {
Canvas canvas = getHolder().lockCanvas();
if (canvas != null) {
canvas.drawColor(0, android.graphics.PorterDuff.Mode.CLEAR);
int saveCount = canvas.save();
canvas.setMatrix(mMatrix);
if (mScale != 0) {
canvas.drawBitmap(mCacheBitmap, new Rect(0,0,mCacheBitmap.getWidth(), mCacheBitmap.getHeight()),
new Rect((int)((canvas.getWidth() - mScale*mCacheBitmap.getWidth()) / 2),
(int)((canvas.getHeight() - mScale*mCacheBitmap.getHeight()) / 2),
(int)((canvas.getWidth() - mScale*mCacheBitmap.getWidth()) / 2 + mScale*mCacheBitmap.getWidth()),
(int)((canvas.getHeight() - mScale*mCacheBitmap.getHeight()) / 2 + mScale*mCacheBitmap.getHeight())), null);
} else {
canvas.drawBitmap(mCacheBitmap, new Rect(0,0,mCacheBitmap.getWidth(), mCacheBitmap.getHeight()),
new Rect((canvas.getWidth() - mCacheBitmap.getWidth()) / 2,
(canvas.getHeight() - mCacheBitmap.getHeight()) / 2,
(canvas.getWidth() - mCacheBitmap.getWidth()) / 2 + mCacheBitmap.getWidth(),
(canvas.getHeight() - mCacheBitmap.getHeight()) / 2 + mCacheBitmap.getHeight()), null);
}
//Restore canvas after draw bitmap
canvas.restoreToCount(saveCount);
if (mFpsMeter != null) {
mFpsMeter.measure();
mFpsMeter.draw(canvas, 20, 30);
}
getHolder().unlockCanvasAndPost(canvas);
}
}
}
```
Summary
- I went through a lot of tutorials and stepped into plenty of pitfalls before getting the whole setup to this point. If you have read this far, I would appreciate a like or a bookmark. Thanks for reading.