Open hebergentilin opened 7 years ago
Would you mind making a pull request with a test that fails because of this?
This is the same issue as #44: the tokenizer is too restrictive and doesn't tolerate special characters in values, i.e. it fails to tokenize the 'anexo' node content.
You can try to fix the regex used to match the next token, which is the core of the problem. It can be found here: https://github.com/bivas/protobuf-java-format/blob/091d247393772e94d64c2d8835ef4cedcdfc244e/src/main/java/com/googlecode/protobuf/format/XmlFormat.java#L320
But for now I could not manage to do it since making the regex more flexible often produces some side effects.
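To illustrate why loosening the regex is tricky, here is a minimal sketch (not the library's actual pattern, which lives at the XmlFormat.java link above) contrasting a restrictive identifier-style token pattern with a permissive "everything up to the next `<`" pattern:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class TokenizerSketch {
    // Hypothetical stand-in for a restrictive token pattern: it only
    // accepts identifier-like characters, so a base64 value such as
    // "AA//BB==" stops matching at the first '/'.
    static final Pattern RESTRICTIVE = Pattern.compile("[a-zA-Z0-9_.+-]+");

    // A more permissive pattern for element text: consume everything up
    // to the next '<', so '/' and '=' in the payload survive intact.
    static final Pattern PERMISSIVE = Pattern.compile("[^<]+");

    static String firstToken(Pattern p, String input) {
        Matcher m = p.matcher(input);
        return m.lookingAt() ? m.group() : "";
    }

    public static void main(String[] args) {
        String value = "AA//BB==</anexo>";
        System.out.println(firstToken(RESTRICTIVE, value)); // stops at '/'
        System.out.println(firstToken(PERMISSIVE, value));  // full payload
    }
}
```

The catch, as noted above, is that a pattern this permissive for values can easily over-match in other tokenizer states (attributes, tag names), which is the kind of side effect that makes the quick fix risky.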
The best solution IMO should be to completely rewrite the XML parser using an existing one, which would be more reliable.
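As a sketch of that approach: the JDK already ships a streaming XML parser (StAX, `javax.xml.stream`), which handles special characters in element text correctly with no custom regex at all. A minimal example of extracting node content with it:

```java
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamReader;
import java.io.StringReader;

public class StaxSketch {
    // Collect all character data from the XML document using the
    // JDK's built-in StAX parser instead of a hand-rolled tokenizer.
    static String extractText(String xml) throws Exception {
        XMLStreamReader r = XMLInputFactory.newInstance()
                .createXMLStreamReader(new StringReader(xml));
        StringBuilder text = new StringBuilder();
        while (r.hasNext()) {
            if (r.next() == XMLStreamConstants.CHARACTERS) {
                text.append(r.getText());
            }
        }
        return text.toString();
    }

    public static void main(String[] args) throws Exception {
        // The '/' and '=' characters that break the current tokenizer
        // are parsed without issue.
        System.out.println(extractText("<anexo>AA//BB==</anexo>")); // AA//BB==
    }
}
```

Rewriting the parser on top of StAX (or SAX) would delegate all escaping and tokenizing rules to a well-tested implementation.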
I'm getting errors when sending special characters like '//' (characters generated from a base64-encoded file) in a proto `bytes` field.
protos
formatFactory.java
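For context on where the '/' characters come from: the standard base64 alphabet includes '+', '/', and '=' padding, so encoded file contents routinely contain exactly the characters the tokenizer rejects. A quick demonstration with the JDK encoder:

```java
import java.util.Base64;

public class Base64Chars {
    public static void main(String[] args) {
        // These three bytes encode to base64 characters from the top of
        // the alphabet, which includes '/' (index 63) and '+' (index 62).
        byte[] data = {(byte) 0xFF, (byte) 0xEF, (byte) 0xBF};
        String encoded = Base64.getEncoder().encodeToString(data);
        System.out.println(encoded);               // contains '/'
        System.out.println(encoded.contains("/")); // true
    }
}
```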
I got a
java.lang.RuntimeException: Can't get here.
exception at XmlJavaxFormat.java:566.
Changing the formatter from XML_JAVAX to XML, I got this exception:
com.googlecode.protobuf.format.ProtobufFormatter$ParseException: 4:22: Expected ">".
Request being sent: