Chainlit / chainlit

Build Conversational AI in minutes ⚡️
https://docs.chainlit.io
Apache License 2.0

Future of Chainlit as fully functional Open Source project #1115

Open andrePankraz opened 2 weeks ago

andrePankraz commented 2 weeks ago

Hello,

This is more a question than a feature request (or maybe a feature request for a clear roadmap).

I have seen that the Prompt Playground has now completely disappeared from the frontend and has been moved to LiteralAI. This significantly increases the dependency on LiteralAI services (cross-UI links for previously integrated functions), since it is hardly possible to develop a more complex agent without this basic function.

Existing open-source features under the Apache 2.0 license are being removed from Chainlit and turned into closed-source LiteralAI features (and possibly paid features in the future; this is also somewhat unclear from the website).

I can certainly understand the dual model of open source plus "enterprise features" when it comes to add-on functionality.

However, the Prompt Playground was there from the beginning and was easy to use even with a custom data layer, so I am now really skeptical about how far this will be taken. Will more and more features be removed from Chainlit step by step and pushed into LiteralAI?

I'm also surprised that so many diverse open-source contributors seem to be totally fine with this. Perhaps I have misunderstood something, or there is a clearly formulated plan somewhere that I am not aware of?

Please don't reply that you simply wanted to make the UI clearer. That could also be achieved in ways other than completely removing existing basic features, without which the project becomes increasingly useless (unless LiteralAI is added). I just want a clear statement, so that we know whether we have backed the wrong horse.

Thx for the great project.

constantinidan commented 2 weeks ago

Thank you for your feedback and for sharing your concerns. We truly appreciate your engagement with Chainlit.

We’re constantly striving to bring the best value to our users. Our aim is to separate application-side functionality (Chainlit) from LLMops (Literal AI). This is why the playground iteration feature, which aligns more with LLMops, has moved to Literal AI with enhanced capabilities. You can think of this as similar to the relationship between ChatGPT and the OpenAI developer platform.

Chainlit’s core mission remains to be an open-source Python framework for building and sharing conversational applications. This is what you should expect from Chainlit moving forward.

We’re expanding our team to deliver more features and respond to user requests more effectively. If you have further questions, feel free to email me at dan@chainlit.io.

hayescode commented 2 weeks ago

Thanks for making this post, @andrePankraz. I share the same concerns.

@constantinidan I think what you're doing with Literal is great and fine. The core issue is that not all devs use (or can use) Literal, so from a pure Chainlit perspective we are watching the features that drew us to Chainlit in the first place being removed. Without a published roadmap, we're anxious, because every new release seems to continue this reduction in capability that nobody asked for. Even CoT (chain of thought) has been severely restricted, and it is a critical component for user adoption. We live in this new AI world every day, but our users do not (many of my new business users have never used ChatGPT), and CoT helps build trust, which again is critical for adoption.

Like @andrePankraz said, there are different ways to accomplish this: building out Literal while maintaining and improving Chainlit. Recently it has instead felt like moving things out of Chainlit and into Literal, i.e. open source --> closed source.

We love Chainlit and the effort you and your team put into this product, and wish to help improve it further through feedback and contributions.

andrePankraz commented 2 weeks ago

Thank you for your feedback. It's reassuring to hear that, as I understand it, no further functionality will be lost.

1) Separation of App/LLMOps: Our users appreciate the ability to delve (I said it ;) into the details within the steps (chain of thought), even checking and refining the prompts themselves. Not everyone does this, of course, but interested power users do. It also generated great feedback in demos for potential customers.

From my perspective, the strategy of completely separating application features from debug/ops features does not work for everyone, especially with complex AI agents. Chainlit is mainly used in areas where we cannot use the cloud (internal company data, citizen inquiries, etc.). It is therefore very inconvenient for our end users that CoT step tracking and playground functionality have been significantly reduced or removed. The LiteralAI frontends are feature-rich, but they require separate logins and frontends with too many functions and too much information. The sweet spot is missing.

2) Separate Cloud/On-Prem Services: From my perspective, the LiteralAI cloud strategy is a definite no-go for data at rest (comprehensive step info, prompts, etc.) in this area.

If we were allowed to use cloud services, we wouldn't use Chainlit at all, but rather an end-to-end, full-service cloud agent system. The strategy feels odd for such detailed persistence functions; this is exactly where the cloud is most critical.

Also, regarding the new Docker container: on-prem deployments as an alternative to cloud services are often very pricey and frequently have unclear prospects (ongoing support? changes in licensing models? vendor lock-in? etc.).

Ultimately, money has to be made with such services. But besides the end-user-focused CoT/Playground in Chainlit, there are still plenty of opportunities to create additional value in an LLMOps-focused LiteralAI. The user groups are genuinely different.

nileshtrivedi commented 4 days ago

@andrePankraz @hayescode Regarding the restrictions on chain-of-thought step tracking: I believe it's being re-enabled in the next release. Here is the announcement from Discord:

[screenshot of the Discord announcement]