Closed: FantaMagier closed this issue 11 months ago.
Hello, yes, I also encountered this 4 months ago in an experiment. The problem is that the texture and the preview are not actually placed at the same origin. There are two things we can do.
Hello,
I am reworking the preview widget; maybe this can help you.
Can you test with that branch? https://github.com/Apparence-io/CamerAwesome/pull/403
The preview now keeps its proportions in every case. The preview is drawn using the same origin (top left) as a canvas, meaning you can now correctly draw your points even in cover mode.
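For example, with a shared top-left origin, mapping an analysis point onto the preview is just a proportional scale. This is a minimal sketch under that assumption; `scaleToPreview` is a made-up helper for illustration, not part of the plugin:

```dart
import 'dart:ui';

// Hypothetical helper: with both coordinate spaces anchored at the
// top-left corner, mapping a point only needs a proportional scale.
Offset scaleToPreview(Offset point, Size imageSize, Size previewSize) {
  return Offset(
    point.dx * previewSize.width / imageSize.width,
    point.dy * previewSize.height / imageSize.height,
  );
}

void main() {
  // A point at (512, 384) in a 1024x768 analysis image maps to
  // (250, 187.5) in a 500x375 preview.
  final mapped = scaleToPreview(
    const Offset(512, 384),
    const Size(1024, 768),
    const Size(500, 375),
  );
  assert(mapped == const Offset(250, 187.5));
}
```

This only holds when the analysis image and the preview have the same orientation; otherwise the rotation handling shown later in the painter still applies.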
Hey @g-apparence, thank you for your PR. But I am seeing some strange behaviour; maybe my code is wrong.
My result with the PR:
Here is the complete code for it:
# pubspec.yaml
name: flcammltest
description: A new Flutter project.
publish_to: 'none' # Remove this line if you wish to publish to pub.dev
version: 1.0.0+1

environment:
  sdk: '>=3.1.4 <4.0.0'

dependencies:
  flutter:
    sdk: flutter
  google_mlkit_text_recognition: ^0.10.0
  # camerawesome: ^2.0.0-dev.1
  camerawesome:
    git:
      url: https://github.com/Apparence-io/CamerAwesome
      ref: preview_fix
  camera: ^0.10.5+5
//main.dart
import 'package:camera/camera.dart';
import 'package:camerawesome/camerawesome_plugin.dart';
import 'package:flcammltest/painter.dart';
import 'package:flcammltest/text_Ml.dart';
import 'package:flutter/material.dart';
import 'package:google_mlkit_text_recognition/google_mlkit_text_recognition.dart';
void main() {
runApp(const MyApp());
}
class MyApp extends StatelessWidget {
const MyApp({super.key});
@override
Widget build(BuildContext context) {
return MaterialApp(
title: 'Flutter Demo',
theme: ThemeData(
colorScheme: ColorScheme.fromSeed(seedColor: Colors.deepPurple),
useMaterial3: true,
),
home: const MyHomePage(title: 'Flutter Demo Home Page'),
);
}
}
class MyHomePage extends StatefulWidget {
const MyHomePage({super.key, required this.title});
final String title;
@override
State<MyHomePage> createState() => _MyHomePageState();
}
class _MyHomePageState extends State<MyHomePage> {
AnalysisImage? currentImage;
RecognizedText? currentText;
final TextProcessing textProcessing = TextProcessing();
@override
Widget build(BuildContext context) {
return Scaffold(
body: Center(
child: CameraAwesomeBuilder.previewOnly(
previewFit: CameraPreviewFit.cover,
// 2.
imageAnalysisConfig: AnalysisConfig(
androidOptions: const AndroidAnalysisOptions.nv21(
width: 1024,
),
maxFramesPerSecond: 5,
autoStart: true,
),
// 3.
onImageForAnalysis: (img) {
  currentImage = img;
  // Run recognition once and rebuild when the result arrives,
  // instead of analyzing the same frame twice.
  return textProcessing.getTextElements(img).then((value) {
    if (mounted) setState(() => currentText = value);
  });
},
// 4.
builder: (cameraModeState, previewSize, previewRect) {
  // Nothing to draw until the first analysis result has arrived;
  // without this guard, currentImage! throws on the first frame.
  if (currentImage == null || currentText == null) {
    return const SizedBox.shrink();
  }
  return SizedBox(
    width: previewSize.width,
    height: previewSize.height,
    child: CustomPaint(
      painter: TextRecognizerPainter(
        currentText,
        currentImage!.size,
        currentImage!.rotation,
        CameraLensDirection.back,
      ),
    ),
  );
},
),
),
);
}
}
//MLutils.dart
import 'package:camerawesome/camerawesome_plugin.dart';
import 'package:google_mlkit_text_recognition/google_mlkit_text_recognition.dart';
extension MlKitUtils on AnalysisImage {
InputImage toInputImage() {
return when(
nv21: (image) {
return InputImage.fromBytes(
bytes: image.bytes,
metadata: InputImageMetadata(
rotation: inputImageRotation,
format: InputImageFormat.nv21,
size: image.size,
bytesPerRow: image.planes.first.bytesPerRow,
),
);
},
bgra8888: (image) {
final inputImageData = InputImageMetadata(
size: size,
rotation: inputImageRotation,
format: inputImageFormat,
bytesPerRow: image.planes.first.bytesPerRow,
);
return InputImage.fromBytes(
bytes: image.bytes,
metadata: inputImageData,
);
},
)!;
}
InputImageRotation get inputImageRotation =>
InputImageRotation.values.byName(rotation.name);
InputImageFormat get inputImageFormat {
switch (format) {
case InputAnalysisImageFormat.bgra8888:
return InputImageFormat.bgra8888;
case InputAnalysisImageFormat.nv21:
return InputImageFormat.nv21;
default:
return InputImageFormat.yuv420;
}
}
}
//painter.dart
import 'dart:io';
import 'dart:ui';
import 'dart:ui' as ui;
import 'package:camera/camera.dart';
import 'package:camerawesome/camerawesome_plugin.dart';
import 'package:flcammltest/translate.dart';
import 'package:flutter/material.dart';
import 'package:google_mlkit_text_recognition/google_mlkit_text_recognition.dart';
class TextRecognizerPainter extends CustomPainter {
TextRecognizerPainter(
this.recognizedText,
this.imageSize,
this.rotation,
this.cameraLensDirection,
);
final RecognizedText? recognizedText;
final Size imageSize;
final InputAnalysisImageRotation rotation;
final CameraLensDirection cameraLensDirection;
@override
void paint(Canvas canvas, Size size) {
final Paint paint = Paint()
..style = PaintingStyle.stroke
..strokeWidth = 3.0
..color = Colors.lightGreenAccent;
final Paint background = Paint()..color = const Color(0x99000000);
// Nothing to paint until we have a recognition result.
if (recognizedText == null) return;
for (final textBlock in recognizedText!.blocks) {
final ParagraphBuilder builder = ParagraphBuilder(
ParagraphStyle(
textAlign: TextAlign.left,
fontSize: 16,
textDirection: TextDirection.ltr),
);
builder.pushStyle(
ui.TextStyle(color: Colors.lightGreenAccent, background: background));
// builder.addText(textBlock.text);
builder.pop();
final left = translateX(
textBlock.boundingBox.left,
size,
imageSize,
rotation,
cameraLensDirection,
);
final top = translateY(
textBlock.boundingBox.top,
size,
imageSize,
rotation,
cameraLensDirection,
);
final right = translateX(
textBlock.boundingBox.right,
size,
imageSize,
rotation,
cameraLensDirection,
);
// final bottom = translateY(
// textBlock.boundingBox.bottom,
// size,
// imageSize,
// rotation,
// cameraLensDirection,
// );
// canvas.drawRect(
// Rect.fromLTRB(left, top, right, bottom),
// paint,
// );
final List<Offset> cornerPoints = <Offset>[];
for (final point in textBlock.cornerPoints) {
double x = translateX(
point.x.toDouble(),
size,
imageSize,
rotation,
cameraLensDirection,
);
double y = translateY(
point.y.toDouble(),
size,
imageSize,
rotation,
cameraLensDirection,
);
if (Platform.isAndroid) {
switch (cameraLensDirection) {
case CameraLensDirection.front:
switch (rotation) {
case InputAnalysisImageRotation.rotation0deg:
case InputAnalysisImageRotation.rotation90deg:
break;
case InputAnalysisImageRotation.rotation180deg:
x = size.width - x;
y = size.height - y;
break;
case InputAnalysisImageRotation.rotation270deg:
x = translateX(
point.y.toDouble(),
size,
imageSize,
rotation,
cameraLensDirection,
);
y = size.height -
translateY(
point.x.toDouble(),
size,
imageSize,
rotation,
cameraLensDirection,
);
break;
}
break;
case CameraLensDirection.back:
switch (rotation) {
case InputAnalysisImageRotation.rotation0deg:
case InputAnalysisImageRotation.rotation270deg:
break;
case InputAnalysisImageRotation.rotation180deg:
x = size.width - x;
y = size.height - y;
break;
case InputAnalysisImageRotation.rotation90deg:
x = size.width -
translateX(
point.y.toDouble(),
size,
imageSize,
rotation,
cameraLensDirection,
);
y = translateY(
point.x.toDouble(),
size,
imageSize,
rotation,
cameraLensDirection,
);
break;
}
break;
case CameraLensDirection.external:
break;
}
}
cornerPoints.add(Offset(x, y));
}
// Close the polygon by repeating the first point.
if (cornerPoints.isNotEmpty) {
  cornerPoints.add(cornerPoints.first);
  canvas.drawPoints(PointMode.polygon, cornerPoints, paint);
}
canvas.drawParagraph(
builder.build()
..layout(ParagraphConstraints(
width: (right - left).abs(),
)),
Offset(
Platform.isAndroid &&
cameraLensDirection == CameraLensDirection.front
? right
: left,
top),
);
}
}
@override
bool shouldRepaint(TextRecognizerPainter oldDelegate) {
return oldDelegate.recognizedText != recognizedText;
}
}
//translate.dart
import 'dart:io';
import 'dart:ui';
import 'package:camera/camera.dart';
import 'package:camerawesome/camerawesome_plugin.dart';
double translateX(
double x,
Size canvasSize,
Size imageSize,
InputAnalysisImageRotation rotation,
CameraLensDirection cameraLensDirection,
) {
switch (rotation) {
case InputAnalysisImageRotation.rotation90deg:
return x *
canvasSize.width /
(Platform.isIOS ? imageSize.width : imageSize.height);
case InputAnalysisImageRotation.rotation270deg:
return canvasSize.width -
x *
canvasSize.width /
(Platform.isIOS ? imageSize.width : imageSize.height);
case InputAnalysisImageRotation.rotation0deg:
case InputAnalysisImageRotation.rotation180deg:
switch (cameraLensDirection) {
case CameraLensDirection.back:
return x * canvasSize.width / imageSize.width;
default:
return canvasSize.width - x * canvasSize.width / imageSize.width;
}
}
}
double translateY(
double y,
Size canvasSize,
Size imageSize,
InputAnalysisImageRotation rotation,
CameraLensDirection cameraLensDirection,
) {
switch (rotation) {
case InputAnalysisImageRotation.rotation90deg:
case InputAnalysisImageRotation.rotation270deg:
return y *
canvasSize.height /
(Platform.isIOS ? imageSize.height : imageSize.width);
case InputAnalysisImageRotation.rotation0deg:
case InputAnalysisImageRotation.rotation180deg:
return y * canvasSize.height / imageSize.height;
}
}
//text_Ml.dart
import 'package:camerawesome/camerawesome_plugin.dart';
import 'package:flcammltest/MLutils.dart';
import 'package:flutter/material.dart';
import 'package:google_mlkit_text_recognition/google_mlkit_text_recognition.dart';
class TextProcessing {
final textRecognizer = TextRecognizer(script: TextRecognitionScript.latin);
Future<RecognizedText?> getTextElements(AnalysisImage img) async {
final inputImage = img.toInputImage();
try {
final RecognizedText recognizedText =
await textRecognizer.processImage(inputImage);
return recognizedText;
    } catch (e) {
      debugPrint("...sending image to text recognition resulted in an error: $e");
      // Return null explicitly so the Future<RecognizedText?> completes.
      return null;
    }
  }
}
Hi, it seems that there is some padding on the left side of the image, am I right?
Hi,
what do you mean by padding on the left side 😅?
I can show you the full result with cover and contain mode
With Cover:
And with Contain:
I think I have found the solution. I need to work on this a bit more; I should have something cool in a few hours.
My solution works, but it requires some changes. The good news is that getting a point from the analysis onto your preview will be much simpler.
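A rough sketch of what such a mapping could look like, assuming the builder keeps exposing `previewSize` and `previewRect` as in the code above; the helper name and the exact formula are my assumptions, not the final API:

```dart
import 'dart:ui';

// Hypothetical helper: once the preview and the analysis image share a
// top-left origin, placing a detected point on screen should reduce to
// one proportional scale plus the preview rect's offset.
Offset analysisToPreview(Offset point, Size imageSize, Rect previewRect) {
  return Offset(
    previewRect.left + point.dx * previewRect.width / imageSize.width,
    previewRect.top + point.dy * previewRect.height / imageSize.height,
  );
}

void main() {
  // A point at (100, 200) in an 800x600 image, drawn into a 400x300
  // preview rect offset by (10, 20), lands at (60, 120).
  final mapped = analysisToPreview(
    const Offset(100, 200),
    const Size(800, 600),
    const Rect.fromLTWH(10, 20, 400, 300),
  );
  assert(mapped == const Offset(60, 120));
}
```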
A fix is now available on the main branch of the repository. You can test it.
This should now be complete. If you still have problems, please open another issue.
Hello, I am trying to implement ML Kit text recognition and would like to display the text blocks, but their coordinates do not match the preview. It doesn't work with the code from the documentation either. I use the painter from the ML Kit example, but the overlay is always a bit compressed, so the boxes do not line up with the preview. How do I get the correct mapping? What else is wrong here? Thank you very much for your help.
The Problem:
My Code: