I tried to train the network myself. Here is what I have done: I rotated the first image by -10 degrees, then applied a CISpotColor filter to turn the yellow-green color to black and everything else to white. Then I used the extractBlobs function. However, it only found 5 blobs instead of 6; when I checked the blobs, I found that the 4th blob is "qd", two characters merged into one. Since my image has a fixed width, which part should I change to correctly extract the blobs and train the network? Here is the code I use for the image filters. I would really appreciate it if you could help me solve this OCR problem.
let firstImage = transformImage("original.jpg")
let secondImage = myFilter(firstImage)! // force-unwrap: myFilter returns UIImage?
let blobs = ocrInstance.extractBlobs(secondImage)
func transformImage(imagename: String) -> CIImage {
    // Convert degrees to radians for CGAffineTransformMakeRotation.
    let degreesToRadians: (CGFloat) -> CGFloat = {
        return $0 / 180.0 * CGFloat(M_PI)
    }
    let imagex = CIImage(image: UIImage(named: imagename)!)!
    // Rotate the image by -10 degrees with a CIAffineTransform filter.
    let transform = CIFilter(name: "CIAffineTransform")!
    transform.setValue(imagex, forKey: kCIInputImageKey)
    let rotation = NSValue(CGAffineTransform: CGAffineTransformMakeRotation(degreesToRadians(-10)))
    transform.setValue(rotation, forKey: "inputTransform")
    return transform.outputImage!
}
// CIParameters is not a Foundation type; define it as the dictionary type
// that CIFilter(name:withInputParameters:) expects.
typealias CIParameters = [String: AnyObject]

func myFilter(inputImage: CIImage) -> UIImage? {
    self.context = CIContext(options: nil)
    // Map near-black and near-blue regions to white...
    let centerColor1 = CIColor(color: UIColor.blackColor())
    let replacementColor1 = CIColor(color: UIColor.whiteColor())
    let centerColor2 = CIColor(color: UIColor.blueColor())
    let replacementColor2 = CIColor(color: UIColor.whiteColor())
    // ...and the yellow-green text color to black.
    let centerColor3 = CIColor(color: UIColor(red: 0.75, green: 0.87, blue: 0.33, alpha: 1))
    let replacementColor3 = CIColor(color: UIColor.blackColor())
    let closeness1: Float = 0
    let contrast1: Float = 0
    let closeness2: Float = 0
    let contrast2: Float = 0
    let closeness3: Float = 0.5
    let contrast3: Float = 1
    let parameters: CIParameters = [
        "inputCenterColor1": centerColor1,
        "inputReplacementColor1": replacementColor1,
        "inputCloseness1": closeness1,
        "inputContrast1": contrast1,
        "inputCenterColor2": centerColor2,
        "inputReplacementColor2": replacementColor2,
        "inputCloseness2": closeness2,
        "inputContrast2": contrast2,
        "inputCenterColor3": centerColor3,
        "inputReplacementColor3": replacementColor3,
        "inputCloseness3": closeness3,
        "inputContrast3": contrast3,
        kCIInputImageKey: inputImage
    ]
    guard let colorSpot = CIFilter(name: "CISpotColor", withInputParameters: parameters) else {
        print("error filter")
        return nil
    }
    guard let output = colorSpot.outputImage else { return nil }
    self.tempModImage = output
    // Render through the context so the UIImage has real bitmap data.
    let cgimg = context.createCGImage(output, fromRect: output.extent)
    return UIImage(CGImage: cgimg)
}
Hi. I tried extractBlobs with this image: (image attached)
It extracted all blobs correctly after I applied the default preprocessing algorithm. You could try adding a thresholding filter at the end of your custom filter.
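As a minimal sketch of such a thresholding step, assuming GPUImage is available (SwiftOCR's default preprocessing relies on it), something like the following could be appended after myFilter; the blur radius here is an assumed starting value to tune:

import GPUImage

// Binarize after the spot-color pass so every pixel is pure black or white.
func thresholdImage(inputImage: UIImage) -> UIImage {
    let thresholdFilter = GPUImageAdaptiveThresholdFilter()
    thresholdFilter.blurRadiusInPixels = 4.0 // assumed value; tune per image
    return thresholdFilter.imageByFilteringImage(inputImage)
}

Running this on the output of myFilter before extractBlobs should remove any remaining gray pixels that CISpotColor may leave around glyph edges.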
I was able to fix the blob extraction by changing the constant to let xMergeRadius: CGFloat = 0.5
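The idea behind that constant is that blob extraction merges neighboring character boxes whose horizontal gap is within xMergeRadius, so too large a radius fuses touching glyphs such as "q" and "d" into one blob. A simplified, hypothetical sketch of that idea (not SwiftOCR's actual implementation):

import CoreGraphics

// Simplified illustration: merge character boxes that nearly touch.
// A large xMergeRadius fuses adjacent glyphs (e.g. "q" + "d" -> "qd").
func mergeBlobs(rects: [CGRect], xMergeRadius: CGFloat) -> [CGRect] {
    var merged = [CGRect]()
    for rect in rects.sort({ $0.minX < $1.minX }) {
        if let last = merged.last where rect.minX - last.maxX <= xMergeRadius {
            merged[merged.count - 1] = CGRectUnion(last, rect)
        } else {
            merged.append(rect)
        }
    }
    return merged
}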
However, I still couldn't use your trainer to train the network. I then used more than 100 samples to train it; training finished in around 1-2 minutes and it worked with a 100% success rate. I don't know, maybe I was lucky 😄 Thanks for your extractBlobs code, it really helped.
I am trying to train the network using Monofont. I waited around 10 minutes and I still see FFNN console output. Is that normal? Here is the log from the Xcode console.
I stopped it in the middle and saved the training data. I replaced OCR-Network with the produced one, but it failed on the following images. It only recognized the last one, as A5C8U5.
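For context on the long run: SwiftOCR trains its network through Swift-AI's FFNN, whose train call keeps iterating (and printing progress) until the error on the test set drops below the given threshold, so continuous console output for many minutes can be normal with a large sample set. A rough sketch of that call, where the network dimensions, threshold, and enum cases are assumptions based on Swift-AI's Swift 2-era API rather than SwiftOCR's actual values:

// Hypothetical sample data -- substitute your extracted blob inputs
// and one-hot answer vectors.
let trainInputs: [[Float]] = []
let trainAnswers: [[Float]] = []
let testInputs: [[Float]] = []
let testAnswers: [[Float]] = []

let network = FFNN(inputs: 321, hidden: 100, outputs: 36,
                   learningRate: 0.7, momentum: 0.4, weights: nil,
                   activationFunction: .Sigmoid, errorFunction: .CrossEntropy(average: false))
do {
    // Loops until the test-set error falls below errorThreshold;
    // the returned array holds the trained weights.
    let weights = try network.train(inputs: trainInputs, answers: trainAnswers,
                                    testInputs: testInputs, testAnswers: testAnswers,
                                    errorThreshold: 2.0)
    print("Training finished with \(weights.count) weights")
} catch {
    print("Training failed: \(error)")
}

Stopping before the threshold is reached leaves the weights only partially converged, which would be consistent with the poor recognition you saw after swapping in the half-trained OCR-Network.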