trinodb / trino

Official repository of Trino, the distributed SQL query engine for big data, formerly known as PrestoSQL (https://trino.io)

Project Swift #22271

Open wendigo opened 5 months ago

wendigo commented 5 months ago

Trino has had its own protocol since its inception in 2012. Both the client and cluster protocols are REST-oriented, using JSON as the only serialization format and HTTP/1.1 as the transport layer. While in 2012 these protocols were good enough for the majority of use cases, the amount of data that clients want to retrieve efficiently from a Trino cluster has since increased significantly.
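
For context, a minimal sketch of the v1 client protocol flow in Java. The endpoint and header name are from the documented protocol; the cluster URL, user, and query are placeholders:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class TrinoV1ProtocolSketch {
    public static void main(String[] args) throws Exception {
        // POST the SQL text to /v1/statement; the X-Trino-User header identifies the user
        HttpClient http = HttpClient.newHttpClient();
        HttpRequest post = HttpRequest.newBuilder(URI.create("http://localhost:8080/v1/statement"))
                .header("X-Trino-User", "alice")
                .POST(HttpRequest.BodyPublishers.ofString("SELECT 1"))
                .build();
        HttpResponse<String> response = http.send(post, HttpResponse.BodyHandlers.ofString());
        // The JSON body carries "columns", "data", and a "nextUri"; the client keeps
        // GETting nextUri until a response no longer contains one.
        System.out.println(response.body());
    }
}
```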

We are starting Project Swift with the goal of improving the existing Trino protocol, both for client and server-to-server communication.

Introducing a v2 protocol is not a goal of this project.

Tasks

### Client protocol improvements
- [ ] #21793
- [ ] #22227
- [ ] https://github.com/trinodb/trino/issues/22662
### Server protocol improvements
- [ ] #21793
- [ ] https://github.com/trinodb/trino/pull/22249
- [ ] https://github.com/trinodb/trino/issues/6552
- [ ] https://github.com/airlift/airlift/pull/1183
- [ ] https://github.com/airlift/airlift/pull/1158
- [ ] https://github.com/airlift/airlift/pull/1161
- [ ] https://github.com/trinodb/trino/pull/22457
- [ ] https://github.com/trinodb/trino/pull/22995
- [ ] https://github.com/trinodb/trino/issues/23237
- [ ] Announce GA of the Spooled Protocol

sajjoseph commented 5 months ago

Wonderful. Thanks for this initiative, even though I would be thrilled if we ever see a green light for a V2 protocol. How about the following:

  1. Add nextUri to the HTTP response headers
  2. Add partialCancelUri to the HTTP response headers (see the sketch after this list)
  3. targetResultSize enhancement
  4. Add a cluster identifier as a request parameter
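
A minimal sketch of what points 1 and 2 could enable, assuming a hypothetical response header that mirrors the JSON field (the header name, URI, and flow below are invented for illustration): a gateway could follow or cancel a query without parsing the body.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class HeaderRoutingSketch {
    public static void main(String[] args) throws Exception {
        HttpClient http = HttpClient.newHttpClient();
        // Hypothetical polling URI for a running query
        URI pageUri = URI.create("http://coordinator:8080/v1/statement/executing/q1/x/0");
        HttpResponse<byte[]> response = http.send(
                HttpRequest.newBuilder(pageUri).GET().build(),
                HttpResponse.BodyHandlers.ofByteArray());
        // If nextUri were mirrored into a response header (name invented here),
        // a proxy could route on it without deserializing the JSON body.
        response.headers().firstValue("X-Trino-Next-Uri")
                .map(URI::create)
                .ifPresent(next -> System.out.println("follow " + next));
    }
}
```
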
wendigo commented 5 months ago

@sajjoseph can you elaborate more on the use-cases for each of the points?

himanshpal commented 5 months ago

In a world where Arrow and Arrow Flight are the new standards and are increasingly being adopted by many databases, do we ever plan to invest in Arrow and integrate it into Trino?

I know that a couple of years ago the Netflix team did a PoC for integrating Arrow into Trino, but it was never completed.

wendigo commented 5 months ago

@himanshpal we are not considering the introduction of an entirely new protocol (like Arrow Flight) at the moment. We are thinking about other serialization formats for client-server communication, and Arrow is one of the candidates.
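
For illustration, a minimal sketch of how a batch of columnar data is serialized with the Arrow Java IPC stream format, one shape an Arrow-encoded result chunk could take (the column name is made up; nothing here reflects an actual Trino API):

```java
import java.io.ByteArrayOutputStream;

import org.apache.arrow.memory.BufferAllocator;
import org.apache.arrow.memory.RootAllocator;
import org.apache.arrow.vector.BigIntVector;
import org.apache.arrow.vector.VectorSchemaRoot;
import org.apache.arrow.vector.ipc.ArrowStreamWriter;

public class ArrowEncodingSketch {
    public static void main(String[] args) throws Exception {
        try (BufferAllocator allocator = new RootAllocator();
                BigIntVector orderkey = new BigIntVector("orderkey", allocator)) {
            orderkey.allocateNew(3);
            orderkey.set(0, 1);
            orderkey.set(1, 2);
            orderkey.set(2, 3);
            orderkey.setValueCount(3);

            ByteArrayOutputStream out = new ByteArrayOutputStream();
            try (VectorSchemaRoot root = VectorSchemaRoot.of(orderkey);
                    ArrowStreamWriter writer = new ArrowStreamWriter(root, null, out)) {
                writer.start();
                writer.writeBatch(); // one columnar batch, analogous to one result chunk
                writer.end();
            }
            // `out` now holds a self-describing Arrow IPC stream a client could decode
        }
    }
}
```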

losipiuk commented 5 months ago

cc: @losipiuk

mosabua commented 4 months ago

@himanshpal just to clarify what @wendigo mentioned: we are considering Arrow as one of the candidates, but in its current form it has significant limitations in its type system, so it cannot cover all data from Trino and its richer type system. We might end up in a situation where Arrow can be used with limitations in place, and another format is used for full support. However, the Arrow project is advancing, and we are still quite a way from even starting on a V2 protocol. There is a lot of room to improve the current protocol, and that is our focus in Project Swift.

mosabua commented 4 months ago

@wendigo I think some of the ideas from @sajjoseph are related to Trino Gateway and other tools being able to redirect more easily by just using info in the HTTP headers rather than having to parse the response. I kinda recall us talking about that in some Trino Gateway dev syncs as well, so maybe @oneonestar @vishalya @willmostly @Chaho12 have a better memory than me and can detail this more.

wendigo commented 4 months ago

@mosabua I recall it.

mosabua commented 4 months ago

Some very interesting numbers from a user, reported in https://github.com/trinodb/trino/issues/22303, related to changing targetResultSize. This could be a great quick win. Maybe it's worth raising the current 16MB default for starters, and maybe we can figure out some way to adjust it automatically.
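
As a rough back-of-envelope illustration of why the page size matters (the sizes below are made up, not the numbers from #22303):

```java
public class TargetResultSizeMath {
    public static void main(String[] args) {
        // Each response page is capped at targetResultSize, so fetching a large
        // result set costs roughly size / targetResultSize round trips.
        long resultBytes = 1L << 30;   // a hypothetical 1 GiB result set
        long smallTarget = 16L << 20;  // 16 MiB, the current default
        long largeTarget = 128L << 20; // 128 MiB, a hypothetical larger default

        System.out.println(resultBytes / smallTarget); // 64 round trips
        System.out.println(resultBytes / largeTarget); // 8 round trips
    }
}
```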

nickalexander053 commented 4 months ago

Does Project Swift include the ability to do parallel reads directly from the worker nodes? I would love to remove all file system access from users and pump everything through Trino, but I need to support use cases where very large full tables are loaded into Spark for model training. Any idea when work on a v2 protocol will begin?

wendigo commented 4 months ago

@nickalexander053 yes, parallel reads are part of the project, but exposing data directly from the workers is not an option, so we've approached it the other way around. The protocol changes that we are planning to introduce will support your use case.
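
For illustration, a hedged sketch of what the "other way around" could look like on the client side, assuming the protocol hands the client a list of independently fetchable segment URIs (the URIs and the flow itself are hypothetical):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.List;
import java.util.concurrent.CompletableFuture;

public class ParallelSegmentFetchSketch {
    public static void main(String[] args) {
        // Hypothetical segment URIs handed out by the coordinator; in practice
        // these could point at data spooled to object storage.
        List<URI> segments = List.of(
                URI.create("https://storage.example.com/query-1/segment-0"),
                URI.create("https://storage.example.com/query-1/segment-1"),
                URI.create("https://storage.example.com/query-1/segment-2"));

        HttpClient http = HttpClient.newHttpClient();
        List<CompletableFuture<HttpResponse<byte[]>>> downloads = segments.stream()
                .map(uri -> http.sendAsync(
                        HttpRequest.newBuilder(uri).GET().build(),
                        HttpResponse.BodyHandlers.ofByteArray()))
                .toList();
        // Each body is one independently decodable chunk of the result set, so
        // multiple readers (for example Spark tasks) can consume them in parallel.
        downloads.forEach(CompletableFuture::join);
    }
}
```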

nickalexander053 commented 4 months ago

@wendigo Thanks, could you elaborate or point me to some documentation/discussion on what the protocol changes are? Any idea when work on the protocol changes may begin? I would love to help.

wendigo commented 4 months ago

@nickalexander053 we will post more details soon; we already have a first iteration of a working prototype.

shohamyamin commented 4 months ago

Could improving the protocol possibly help with creating an ODBC driver for Trino?

For example, if I am not mistaken, someone implemented the FlightSQL protocol in their fork of Trino and used the FlightSQL ODBC driver, and that worked for them.

So maybe taking the need for an ODBC driver into consideration while building the protocol would make it easier to build one in the future.

wendigo commented 4 months ago

@shohamyamin this is not a goal, and FlightSQL is out of scope.

sajjoseph commented 4 months ago

> @sajjoseph can you elaborate more on the use-cases for each of the points?

I added more details here - https://github.com/trinodb/trino/issues/22662#issuecomment-2227562728. Thanks!

FHTMitchell commented 2 weeks ago

@mosabua

Can you please expand on which Trino types you think aren't expressible in Arrow?

wendigo commented 2 weeks ago

@FHTMitchell for example, timestamp with picosecond precision with time zone (Trino's timestamp(12) with time zone)

wendigo commented 2 weeks ago

Arrow only supports timestamp(6)

FHTMitchell commented 2 weeks ago

@wendigo

Thanks for the quick reply!

Yeah, the 64-bit precision of the Arrow timestamp wouldn't fit the full range of the Trino equivalents. Have you considered using Arrow extension types? Arrow should cover 99% of users' use cases, so it feels, to me at least, like building on an established format would be preferable.
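
To make the range point concrete, a back-of-envelope calculation assuming a signed 64-bit offset from the epoch, as Arrow's Timestamp uses:

```java
public class TimestampRangeMath {
    public static void main(String[] args) {
        // Representable ticks in a signed 64-bit value
        double ticks = (double) Long.MAX_VALUE;
        System.out.println(ticks / 1e9 / 86_400 / 365.25); // ns ticks: ~292 years of range
        System.out.println(ticks / 1e12 / 86_400);         // ps ticks: ~107 days of range
        // so picosecond-precision timestamps cannot be represented losslessly in 64 bits
    }
}
```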

mosabua commented 2 weeks ago

From what I know, Arrow also does not support some of our more complex data types. We will essentially have to create a mapping and translation layer between https://arrow.apache.org/docs/python/api/datatypes.html and https://trino.io/docs/current/language/types.html, at first from Trino to Arrow but potentially in both directions.
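
A minimal sketch of what one direction of such a mapping could look like for a few scalar types, using the Arrow Java type POJOs (the function and the set of mappings are hypothetical):

```java
import org.apache.arrow.vector.types.FloatingPointPrecision;
import org.apache.arrow.vector.types.TimeUnit;
import org.apache.arrow.vector.types.pojo.ArrowType;

public class TrinoToArrowSketch {
    // Hypothetical one-way mapping for a few scalar types; real Trino types are
    // parametric objects, not strings, and many more cases would be needed.
    static ArrowType toArrowType(String trinoType) {
        return switch (trinoType) {
            case "boolean" -> new ArrowType.Bool();
            case "integer" -> new ArrowType.Int(32, true);
            case "bigint" -> new ArrowType.Int(64, true);
            case "double" -> new ArrowType.FloatingPoint(FloatingPointPrecision.DOUBLE);
            case "varchar" -> new ArrowType.Utf8();
            case "varbinary" -> new ArrowType.Binary();
            case "timestamp(3)" -> new ArrowType.Timestamp(TimeUnit.MILLISECOND, null);
            // timestamp(12), i.e. picosecond precision, has no lossless Arrow
            // equivalent; it would need an extension type or a down-conversion
            default -> throw new UnsupportedOperationException(trinoType);
        };
    }
}
```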

This might make sense from a compatibility perspective for client tools that work with Arrow directly. From a performance perspective, it might be better to figure out a way to move Trino's memory format (which inspired Arrow) over the wire directly to the clients. That would avoid the translation, but we would have to expand what our client drivers can do to adjust for that.

At this stage we don't know what is better, and we might end up doing both. For now we are already seeing amazing improvements with the spooling protocol and are working on documenting it and getting it supported in all clients. It is flexible enough to support other encodings, so the doors are open.

mosabua commented 2 weeks ago

Also, one last note: we essentially want to do what is best for Trino users first and foremost, and that is mostly related to performance and Trino-specific use cases. When and where integration with Arrow is important (which we don't know at this stage), we would love to find out more about it, and for people with Arrow knowledge and development skills to help.

losipiuk commented 2 weeks ago

> From a performance perspective, it might be better to figure out a way to move Trino's memory format (which inspired Arrow) over the wire directly to the clients. That would avoid the translation

Actually, this is not that simple :) You have very different backward-compatibility requirements for the protocol and for the internal representation. You need to be free to make any internal changes to unlock performance and other improvements, while you are basically not allowed to make any changes to the external protocol. So a translation layer is needed no matter what.