neo4j-contrib / neo4j-etl

Data import from relational databases to Neo4j.
https://neo4j.com/developer/neo4j-etl/

Possible to avoid the LoadCSV step? #46

Open · soneymathew opened this issue 5 years ago

soneymathew commented 5 years ago

Hi @jexp / maintainers, I am trying to do an online import of a database which contains JSON values in its columns. I notice that when the data gets loaded into CSV, it breaks the CSV format.

Can we modify this to use https://neo4j-contrib.github.io/neo4j-apoc-procedures/#load-jdbc to overcome this?
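For context, a minimal direct load (no CSV in between) would look something like this; a sketch, where the connection string and the MyTable label are hypothetical placeholders:

// JSON stays intact because values go straight from JDBC into node properties
CALL apoc.load.jdbc('jdbc:postgresql://localhost/mydb?user=me&password=secret', 'my_table') YIELD row
CREATE (n:MyTable) SET n += row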

I was able to leverage this strategy successfully with hand-rolled Cypher. It would be great if this strategy could be adopted by the ETL tool.

Happy to help raise a PR with some guidance as well.

soneymathew commented 5 years ago

I tested the following strategy on a table with ~1 million rows; the load completed in ~46 seconds.

Set the DB_URL so that it can be referred to later:

CALL apoc.static.set('DB_URL', '<connection url>')

Not sure if this matters, but I set the constraints required for the node first:

CREATE CONSTRAINT ON (t:<TableName>) ASSERT t.id IS UNIQUE;

Use APOC procedures to load the data in parallel with a batch size of 10k:

CALL apoc.periodic.iterate('
  CALL apoc.static.get("DB_URL") YIELD value WITH apoc.convert.toString(value) AS DB_URL
  CALL apoc.load.jdbc(DB_URL, "<TableName>") YIELD row RETURN row',
  'CREATE (c:<TableName>) SET c += row', {batchSize:10000, parallel:true})
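The same pattern should extend to foreign keys once the nodes are in place; a sketch, assuming hypothetical orders/customers tables and Order/Customer labels (note parallel:false here, since concurrent relationship writes touching the same nodes can deadlock):

CALL apoc.periodic.iterate('
  CALL apoc.static.get("DB_URL") YIELD value WITH apoc.convert.toString(value) AS DB_URL
  CALL apoc.load.jdbc(DB_URL, "SELECT id, customer_id FROM orders") YIELD row RETURN row',
  '// look up both endpoints by their unique ids and connect them
   MATCH (o:Order {id: row.id})
   MATCH (c:Customer {id: row.customer_id})
   MERGE (c)-[:PLACED]->(o)',
  {batchSize:10000, parallel:false})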
jexp commented 5 years ago

It's odd, the CSV export/import should handle JSON correctly. I'll have a deeper look at the issue.

What is the data type in your relational database?
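For reference, LOAD CSV copes with embedded JSON as long as the export follows RFC 4180-style quoting: the whole field is wrapped in double quotes and inner quotes are doubled. A hypothetical two-column row with a multiline JSON payload would have to be written like this:

id,payload
1,"{""name"": ""test"",
  ""tags"": [""a"", ""b""]}"

If the exporter writes the raw JSON unquoted, the embedded commas, quotes, and newlines break the row structure, which would match what @soneymathew is seeing.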

Yep, apoc.periodic.iterate rocks :)

jexp commented 5 years ago

It's actually an interesting proposal. Let's discuss this.

So instead of doing the transformation + batching (optionally parallel) in Java, we would do it in APOC.

soneymathew commented 5 years ago

My initial exploration was to manually do what the ETL tool currently facilitates, but as I went through multiple stages I learned that the APOC approach can be amazingly performant and skips the translation overhead of CSV creation and loading.

My sample dataset is a Postgres schema containing a mix of the following:

On a side note, I noticed that the ETL tool silently ignores unsupported data types (examples are oid and tsvector). It would be cool to log these as errors, or as choices that can be pondered, perhaps? A possible workaround is sketched below.
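With the JDBC route, one way around unsupported columns is to cast them in the SQL itself, so the driver only ever sees types it can handle; a sketch, assuming a hypothetical documents table with oid and tsvector columns:

// the casts happen inside Postgres, before the rows reach the JDBC driver
CALL apoc.load.jdbc('<connection url>',
  'SELECT id, attachment::bigint AS attachment_oid, search_vector::text AS search_vector FROM documents')
YIELD row
RETURN row LIMIT 5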

mroiter-larus commented 5 years ago

@soneymathew Can you help me reproduce this issue? For example, can you tell me which sample database you used?

Thank you

soneymathew commented 5 years ago

@mroiter-larus Sorry, I won't be able to share my database. You could probably reproduce this by storing the contents of a big pretty-printed JSON file in a text column. Example: the generated mapping JSON is a good candidate for test data.

mroiter-larus commented 5 years ago

@soneymathew I had a test case, but it didn't reproduce the error. Anyway, I'll try your suggestion.

soneymathew commented 5 years ago

@mroiter-larus is it important that you reproduce it?

Users could have bad data in their databases, with non-printable characters or multiline content that can break your CSV. There is a risk of data loss along the translation steps DB -> CSV -> LOAD CSV. A quick way to spot it is to compare row counts, as sketched below.
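A minimal sanity check, reusing the node-per-table convention from the example above (the <TableName> label and connection URL are placeholders):

// compare the source row count against the imported node count
CALL apoc.load.jdbc('<connection url>', 'SELECT count(*) AS cnt FROM <TableName>') YIELD row
WITH row.cnt AS sourceCount
MATCH (n:<TableName>)
WITH sourceCount, count(n) AS importedCount
RETURN sourceCount, importedCount, sourceCount = importedCount AS countsMatch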

I raised this issue to explore whether neo4j-etl can be updated to bypass the CSV generation step altogether.

I believe it will help your users in the following ways.