Closed StanYaha closed 7 years ago
I'm not 100% sure what you are asking...once you create an index on a property key you should be able to use it normally to retrieve vertices:
v = await g.V().has('indexed_key', value).next()
Thanks! I have solved this problem! Now I want to ask how to insert vertices and edges into the graph database more efficiently. I inserted 2 million vertices and it seems too slow. Do you have sample code for inserting millions of vertices and edges? Sorry about my English, I am not very good at it!
import asyncio

import goblin
from goblin import Vertex
import pymysql


def get_hashable_id(val):
    # Unwrap JanusGraph relation identifiers so they can be used as dict keys
    result = val
    if isinstance(val, dict) and "@type" in val and "@value" in val:
        if val["@type"] == "janusgraph:RelationIdentifier":
            result = val["@value"]["value"]
    return result


class Suspector(Vertex):
    id_number = goblin.Property(goblin.String)


class Relation(goblin.Edge):
    relationship = goblin.Property(goblin.String, default='unknown')


loop = asyncio.get_event_loop()
app = loop.run_until_complete(
    goblin.Goblin.open(loop, get_hashable_id=get_hashable_id))
app.register(Suspector, Relation)  # register element classes once, up front
session = loop.run_until_complete(app.session())

db = pymysql.connect("localhost", "root", "peptalk", "Person")
cursor = db.cursor()
cursor.execute("select * from sus_relation where id > 200387")
data = cursor.fetchall()

count = 1
for i in data:
    count += 1
    print(count)
    suspector1 = Suspector()
    suspector2 = Suspector()
    suspector1.id_number = i[1]
    suspector2.id_number = i[2]
    two_suspector_relationship = Relation(suspector1, suspector2)
    session.add(suspector1, suspector2, two_suspector_relationship)
    loop.run_until_complete(session.flush())
That's my code. It seems very slow.
Hi, I am stuck on this problem. I hope you can help me! Thanks.
Well, Goblin is not made for bulk loading or ETL purposes. Instead, it is designed to make data modeling for applications transparent and intuitive. There are multiple approaches you could take here: some use the aiogremlin.driver module to submit raw scripts to the server, while others use the Gremlin Language Variant (also included with aiogremlin) to submit traversals to the server. Moral of the story: Goblin is not made for bulk loading. You can bulk load, and then use Goblin on the data that already exists in your database, as long as you assign labels properly during the bulk load process. I would start by checking out JanusGraph's docs on bulk loading. Then there are a variety of options. Off the top of my head I can think of:
Write the data to a file, and then use Gremlin I/O to load data into your graph instance. This is quick and easy with small graphs, but I have never done it with a larger sample.
You could try the BulkLoaderVertexProgram implementation provided with TinkerPop. I have never used it, but it presumably works.
Submit a custom gremlin-groovy script to the server that loads your data. There should be tons of examples of this if you look at the mailing lists.
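To make the third option a bit more concrete, here is a rough sketch of the idea: build one parametrized gremlin-groovy script per batch of MySQL rows, so each server round trip inserts many vertices and edges instead of one. The script text, labels, and helper names here are illustrative assumptions, not code taken from this thread; check the aiogremlin driver docs for the actual submission API (Cluster/Client) before running anything against a real server.

```python
# Illustrative sketch only: the groovy script and the 'suspector'/'relation'
# labels are assumptions for this example, not tested against a real server.

# One parametrized gremlin-groovy script that inserts a whole batch of
# (id_number_1, id_number_2) pairs in a single round trip. Parametrized
# scripts also let the server cache the compiled script between calls.
INSERT_BATCH_SCRIPT = """
pairs.each { pair ->
    def v1 = graph.addVertex(label, 'suspector', 'id_number', pair[0])
    def v2 = graph.addVertex(label, 'suspector', 'id_number', pair[1])
    v1.addEdge('relation', v2)
}
graph.tx().commit()
"""

def script_bindings(rows):
    """Turn MySQL rows of the form (id, id_number_1, id_number_2, ...)
    into the bindings dict passed alongside INSERT_BATCH_SCRIPT."""
    return {'pairs': [[row[1], row[2]] for row in rows]}
```

The script would then be submitted once per batch through the aiogremlin driver (a Cluster/Client pair) with the bindings above; see the aiogremlin documentation for the exact submission call.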
I am happy to help you with Goblin/Gremlin related issues, but for stuff like this honestly the best place to start would be the mailing lists:
https://groups.google.com/forum/#!forum/gremlin-users https://groups.google.com/forum/#!forum/janusgraph-users
Hopefully this leads you in the right direction. Again, I would stress, don't use Goblin for bulk loading. Choose a more appropriate tool, and then you can use Goblin later in your application to model data and interact with the graph based on your business requirements.
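Even if you do stay with Goblin for a first pass, one easy win over the loop posted above is to stop flushing once per row: accumulate a few hundred rows, add them all to the session, and flush once per batch. A minimal chunking helper for that (plain Python, no Goblin required; the batch size of 500 is just a starting point to tune):

```python
def chunked(rows, size=500):
    """Yield successive lists of at most `size` rows from any iterable."""
    batch = []
    for row in rows:
        batch.append(row)
        if len(batch) == size:
            yield batch
            batch = []
    if batch:  # emit the final, possibly short, batch
        yield batch

# In the loading loop this would replace the per-row flush, roughly:
#     for batch in chunked(data, 500):
#         for row in batch:
#             ...build elements and session.add(...) as before...
#         loop.run_until_complete(session.flush())
```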
Thank you so much! I will think of another way for bulk loading!
I got this warning.
I have created a composite index, and my property key is 'id_number'.
How do I use the index in Goblin? Please help me.