Open javaHelper opened 4 years ago
@mminella - I've implemented something like the code below, since my record (2000 characters per line) is spread across two lines here.
@Configuration
public class JobConfig {

    @Autowired
    private StepBuilderFactory stepBuilderFactory;

    @Autowired
    private JobBuilderFactory jobBuilderFactory;

    @Bean
    public BlankLineRecordSeparatorPolicy recordSeparatorPolicy() {
        return new BlankLineRecordSeparatorPolicy();
    }

    @Bean
    public CacheManager cacheManager() {
        return new ConcurrentMapCacheManager();
    }

    @Bean
    public FlatFileItemReader<Customer> customerItemReader() {
        FixedLengthTokenizer tokenizer = new FixedLengthTokenizer();
        tokenizer.setNames("firstValue", "secondValue", "thirdValue", "fourthValue", "fifthValue", "sixthValue",
                "seventhValue", "eighthValue", "ninethValue", "tenthValue", "dummyRange");
        tokenizer.setColumns(
                new Range(3, 6), new Range(7, 13), new Range(14, 15), new Range(16, 24), new Range(25, 28), new Range(29, 32),
                new Range(33, 36), new Range(1322, 1324), new Range(1406, 1408), new Range(1543, 1548), new Range(1549));

        CompleteFlatFileLineMapper mapper = new CompleteFlatFileLineMapper();
        mapper.setTokenMaxLength(2000);
        mapper.setTokenizer(tokenizer);
        mapper.setFieldSetMapper(new CustomerFieldSetMapper());
        mapper.setCacheManager(cacheManager());

        FlatFileItemReader<Customer> reader = new FlatFileItemReader<>();
        reader.setLinesToSkip(1);
        reader.setResource(new ClassPathResource("/data/test.conv"));
        reader.setLineMapper(mapper);
        // Note: the recordSeparatorPolicy() bean defined above is never wired in here;
        // it would need reader.setRecordSeparatorPolicy(...) to take effect.
        reader.setStrict(false);
        return reader;
    }

    @Bean
    public CustomerItemWriter customerItemWriter() {
        return new CustomerItemWriter();
    }

    @Bean
    public Step step1() {
        return stepBuilderFactory.get("step1")
                .<Customer, Customer>chunk(10)
                .reader(customerItemReader())
                .writer(customerItemWriter())
                .build();
    }

    @Bean
    public Job job() {
        return jobBuilderFactory.get("job")
                .start(step1())
                .build();
    }
}
CompleteFlatFileLineMapper.java
@Data
public class CompleteFlatFileLineMapper implements LineMapper<Customer>, InitializingBean {

    private LineTokenizer tokenizer;
    private FieldSetMapper<Customer> fieldSetMapper;
    private Integer tokenMaxLength;
    private CacheManager cacheManager;
    private boolean isAppend = false;

    @Override
    public void afterPropertiesSet() throws Exception {
        Assert.notNull(tokenizer, "The LineTokenizer must be set");
        Assert.notNull(fieldSetMapper, "The FieldSetMapper must be set");
    }

    @Override
    public Customer mapLine(String line, int lineNumber) throws Exception {
        // If the current line is shorter than the maximum record length,
        // the record continues on another line.
        if (line.length() < tokenMaxLength) {
            if (cacheManager.getCache("row").get("cust") == null) {
                // First fragment: store it in the cache and wait for the next line.
                cacheManager.getCache("row").put("cust", line);
                isAppend = true;
            } else {
                // Continuation: prepend the cached fragment to complete the record.
                line = this.getFullLine(line);
                isAppend = false;
                cacheManager.getCache("row").clear();
            }
        }
        if (!isAppend) {
            Customer c = fieldSetMapper.mapFieldSet(tokenizer.tokenize(line));
            c.setLineNumber(lineNumber);
            return c;
        }
        // Incomplete record: return an empty placeholder that downstream code must ignore.
        return new Customer();
    }

    private String getFullLine(String line) {
        return (String) cacheManager.getCache("row").get("cust").get() + line;
    }
}
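For comparison, the idiomatic Spring Batch way to handle logical records that span several physical lines is a custom RecordSeparatorPolicy rather than caching fragments inside a LineMapper: FlatFileItemReader keeps reading and concatenating lines until the policy reports the record is complete, so the LineMapper only ever sees full records and never has to return placeholder objects. Below is a minimal, self-contained sketch in plain Java that simulates the reader's concatenation loop (the interface and class names here are simplified stand-ins, not the real Spring types, which live in org.springframework.batch.item.file.separator):

```java
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

public class RecordPolicyDemo {

    // Simplified mirror of Spring Batch's RecordSeparatorPolicy contract.
    interface RecordSeparatorPolicy {
        boolean isEndOfRecord(String record);
    }

    // A record is complete once the full fixed width has been accumulated
    // (4000 in the original question; 20 in the demo below).
    static class FixedWidthPolicy implements RecordSeparatorPolicy {
        private final int recordLength;

        FixedWidthPolicy(int recordLength) {
            this.recordLength = recordLength;
        }

        @Override
        public boolean isEndOfRecord(String record) {
            return record.length() >= recordLength;
        }
    }

    // Simplified version of the loop FlatFileItemReader runs internally:
    // keep appending physical lines until the policy says the record is done.
    static String readRecord(Iterator<String> lines, RecordSeparatorPolicy policy) {
        StringBuilder record = new StringBuilder(lines.next());
        while (!policy.isEndOfRecord(record.toString()) && lines.hasNext()) {
            record.append(lines.next());
        }
        return record.toString();
    }

    public static void main(String[] args) {
        // Two physical lines of 10 chars form one 20-char logical record.
        List<String> file = Arrays.asList("AAAAAAAAAA", "BBBBBBBBBB");
        String record = readRecord(file.iterator(), new FixedWidthPolicy(20));
        System.out.println(record.length()); // 20
    }
}
```

With the real API, the equivalent would be subclassing SimpleRecordSeparatorPolicy, overriding isEndOfRecord to check the accumulated length, and wiring it with reader.setRecordSeparatorPolicy(...) — which the recordSeparatorPolicy() bean in the config above currently is not.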
Please confirm whether the above is the correct way of doing it, or suggest the proper solution with a sample example. Thanks in advance!
I am looking to read a fixed-width flat file, generated by a mainframe system, using Spring Batch. The file has no delimiters, and one complete record covers character positions 1-2000 and 2001-4000. The main issue is that for a few records the data is spread across two or three physical lines, and that is where the read fails.
Could you please guide me here: https://stackoverflow.com/questions/63674370/caused-by-org-springframework-batch-item-file-transform-incorrectlinelengthexce