langchain-ai / langchainjs

🦜🔗 Build context-aware reasoning applications 🦜🔗
https://js.langchain.com/docs/
MIT License
12.4k stars 2.1k forks

[Feature request] React Native support #4239

Open bmbmjmdm opened 8 months ago

bmbmjmdm commented 8 months ago

Currently LangchainJS doesn't support React Native because the React Native runtime is missing several JS symbols the library relies on. It would be great to see an RN-compatible version of this library!

dosubot[bot] commented 8 months ago

🤖

Hey @bmbmjmdm, great to see you back! 😄 I remember our last interaction. I'm diving into your React Native compatibility request now and will get back to you with a detailed response soon!

jacoblee93 commented 8 months ago

Yes we need this!

dosubot[bot] commented 8 months ago

🤖

Thank you for your feature request. It is indeed possible to add React Native compatibility to the LangchainJS library. The current implementation of LangchainJS is designed to work with React, as seen in the createReactAgent function.

React Native shares a lot of similarities with React, with the main difference being the rendering of components to native platforms instead of the DOM. Therefore, the LangchainJS library can be made compatible with React Native by ensuring that any UI components or APIs used are compatible with React Native.

However, without specific details on the parts of LangchainJS that are currently incompatible with React Native, it's difficult to provide a detailed plan or code examples. In general, to make LangchainJS compatible with React Native, you would need to:

  1. Replace any use of web-specific APIs with equivalent React Native APIs or packages.
  2. Replace any UI components with React Native components.
  3. Test the library thoroughly on both Android and iOS to ensure compatibility.

Regarding the specific JavaScript symbols or features that are currently missing in LangchainJS that prevent it from being compatible with React Native, I wasn't able to find this information within the repository. Could you provide more details on this? This would help in understanding the scope of the changes required.

For reference, here is the source code for the createReactAgent function: langchain/src/agents/react/index.ts

I hope this helps, and I look forward to your response.



This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.

Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.

If you want to continue the conversation, start your reply with @dosu-bot.

david-gettins commented 6 months ago

An issue I am currently facing when using langchain with React Native is that the url used with the fetch function is missing the base url.

If I add my own fetch function that simply prepends the base url to the url arg and calls the React Native fetch, then a forward slash is prepended to the url.

Either way, I get a 404.

I am attempting a basic prompt using the ChatOpenAI class.
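
Roughly what I'm running, for illustration (sketch only; the API key and model name below are placeholders):

import { ChatOpenAI } from '@langchain/openai';

// Sketch of a basic prompt with default client settings; the API key and
// model name are placeholders.
const model = new ChatOpenAI({
  openAIApiKey: '***',
  modelName: 'gpt-4',
});

export async function sayHello() {
  // In React Native this request 404s; the same code works in a browser.
  const response = await model.invoke('Hello');
  console.log(response.content);
}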

jacoblee93 commented 6 months ago

Yes I really want to tackle this! Have continually fallen behind.

For now, does specifying a custom url like this help?

https://js.langchain.com/docs/integrations/chat/openai#custom-urls
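
Something along these lines (just a sketch adapting that docs page; substitute whatever base URL you need):

import { ChatOpenAI } from '@langchain/openai';

// Client options passed as the second constructor argument are forwarded to
// the underlying OpenAI client; baseURL overrides the default endpoint.
const model = new ChatOpenAI(
  {
    openAIApiKey: '***', // placeholder
    modelName: 'gpt-4',
  },
  {
    baseURL: 'https://api.openai.com/v1', // placeholder: point this at your proxy if needed
  }
);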

david-gettins commented 6 months ago

> Yes I really want to tackle this! Have continually fallen behind.
>
> For now, does specifying a custom url like this help?
>
> https://js.langchain.com/docs/integrations/chat/openai#custom-urls

Yes and no. I was playing around with that but it turned out not to be the base url. I got it working by trimming the trailing slash from the url on iOS and Android.

import { ChatOpenAI, ClientOptions, OpenAIChatInput } from '@langchain/openai';
import { Platform } from 'react-native';

export class LangchainService {
  private readonly openAiConfig: Partial<OpenAIChatInput> = {
    openAIApiKey: '***',
    modelName: 'gpt-4',
  };

  private readonly clientConfig = Platform.select<ClientOptions | undefined>({
    web: undefined,
    default: {
      fetch: (url, init) => {
        if (typeof url === 'string') {
          return fetch(
            url.slice(0, url.length - 1), // <-- This fixed it!
            init,
          );
        }
        return fetch(url, init);
      },
    },
  });

  private readonly model = new ChatOpenAI(this.openAiConfig, this.clientConfig);

  async prompt(text: string) {
    return this.model.invoke(text);
  }
}
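
For completeness, calling the service looks something like this (sketch, using the class above, e.g. from a button handler):

// Sketch: uses the LangchainService defined above.
const service = new LangchainService();

async function onSend() {
  const message = await service.prompt('Hello');
  console.log(message.content); // AIMessage content returned by ChatOpenAI
}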

Here is a before and after in Flipper of the network requests:

[Screenshot 2024-03-07 at 17 17 47]

Before - Not Working

[Screenshot 2024-03-07 at 17 15 59]

After - Working

[Screenshot 2024-03-07 at 17 16 18]

david-gettins commented 6 months ago

Long story short: there doesn't seem to be an issue running Langchain JS in the latest version of React Native, 0.73.5. I'm not sure when it started working, and I haven't tried all of your features out yet, but a basic set-up works fine. I'll be trying out the parts that may have issues tomorrow, i.e. streaming, and I'll share my results here.

jacoblee93 commented 6 months ago

Ah gross. Do you think we'd have to override/polyfill fetch to get this working in a reasonable way?

david-gettins commented 6 months ago

> Ah gross. Do you think we'd have to override/polyfill fetch to get this working in a reasonable way?

Maybe? But I don't understand why the trailing slash causes a 404; perhaps it's the OpenAI API.

You might not have to do anything at all if you use a different LLM with Langchain.

I'll try hitting the OpenAI API from Postman with and without the slash tomorrow and see if it is just the OpenAI API.

david-gettins commented 6 months ago

I've played around with the OpenAI API to see where the 404 issue came from. I can confirm that the API does not like trailing slashes. I tried this out in Postman. I'm not sure why the browser fetch implementation removes the trailing slash, but the React Native fetch does not.


https://api.openai.com/v1/chat/completions/

{
    "error": {
        "message": "Invalid URL (POST /v1/chat/completions/)",
        "type": "invalid_request_error",
        "param": null,
        "code": null
    }
}

https://api.openai.com/v1/chat/completions

{
    "id": "chatcmpl-90QkixWp8vQqHSn3I6cqWQsTkyCYP",
    "object": "chat.completion",
    "created": 1709889864,
    "model": "gpt-4-0613",
    "choices": [
        {
            "index": 0,
            "message": {
                "role": "assistant",
                "content": "Hello! How can I assist you today?"
            },
            "logprobs": null,
            "finish_reason": "stop"
        }
    ],
    "usage": {
        "prompt_tokens": 8,
        "completion_tokens": 9,
        "total_tokens": 17
    },
    "system_fingerprint": null
}

I'll be trying out more features from Langchain JS today to see if they are compatible with React Native.

david-gettins commented 6 months ago

It looks like the trailing slash is a known issue in the React Native URL polyfill and isn't going to be fixed. See here. It has been reported in the openai-node project too.

david-gettins commented 6 months ago

Streaming is proving difficult. Use of both ReadableStream and AsyncGenerator presents a big challenge in React Native with Hermes enabled.

david-gettins commented 6 months ago

> Streaming is proving difficult. Use of both ReadableStream and AsyncGenerator presents a big challenge in React Native with Hermes enabled.

Got it working but I don't think anyone will be happy with the solution... Lots of polyfills... Bundle size increases...


Langchain Streaming in React Native

  1. Install react-native-polyfill-globals - you don't need to apply the patches as the readme suggests.
  2. Install react-native-fetch-api
  3. Install core-js
  4. Add the encoding, readable stream and async generator polyfills to your entry point (i.e. where the app is registered index.jsx/main.jsx/index.tsx/main.tsx):

    import { polyfill as polyfillEncoding } from 'react-native-polyfill-globals/src/encoding';
    import { polyfill as polyfillReadableStream } from 'react-native-polyfill-globals/src/readable-stream';
    import 'core-js/modules/es.symbol.async-iterator';
    
    import { AppRegistry } from 'react-native';
    import App from './app/App';
    
    polyfillEncoding();
    polyfillReadableStream();
    
    AppRegistry.registerComponent('MyApp', () => App);
  5. Modify your Langchain client config to provide a custom fetch function (I am using the ChatOpenAI client):

    /* eslint-disable @typescript-eslint/no-explicit-any */
    import { ChatOpenAI, ClientOptions, OpenAIChatInput } from '@langchain/openai';
    import { Platform } from 'react-native';
    // @ts-expect-error No typings available.
    import { fetch } from 'react-native-fetch-api'; // <-- The package readme suggests this isn't required but it didn't work if I didn't import it.
    
    const model = new ChatOpenAI(
      {
        openAIApiKey: '***',
        modelName: 'gpt-4',
      }, 
      {
        fetch: (url, init) => {
          if (typeof url === 'string' && url.endsWith('/')) {
            return fetch(url.slice(0, url.length - 1), {
              ...init,
              reactNative: { textStreaming: true },
            } as any);
          }
          return fetch(url, init);
        },
      }
    );
  6. Call the stream function:
    try {
      const readable = await model.stream(text);
      for await (const chunk of readable) {
        console.log(chunk);
      }
    } catch (reason) {
      console.warn(reason);
    }
  7. Not sure if this is needed, but I reset metro cache and re-ran the app.

Sorry for the TypeScript clutter to those who prefer JavaScript.


I tested with a simple "Hello" prompt; the logger output looks like this:

 LOG  {"id": ["langchain_core", "messages", "AIMessageChunk"], "kwargs": {"additional_kwargs": [Object], "content": ""}, "lc": 1, "type": "constructor"}
 LOG  {"id": ["langchain_core", "messages", "AIMessageChunk"], "kwargs": {"additional_kwargs": [Object], "content": "Hello"}, "lc": 1, "type": "constructor"}
 LOG  {"id": ["langchain_core", "messages", "AIMessageChunk"], "kwargs": {"additional_kwargs": [Object], "content": "!"}, "lc": 1, "type": "constructor"}
 LOG  {"id": ["langchain_core", "messages", "AIMessageChunk"], "kwargs": {"additional_kwargs": [Object], "content": " How"}, "lc": 1, "type": "constructor"}
 LOG  {"id": ["langchain_core", "messages", "AIMessageChunk"], "kwargs": {"additional_kwargs": [Object], "content": " can"}, "lc": 1, "type": "constructor"}
 LOG  {"id": ["langchain_core", "messages", "AIMessageChunk"], "kwargs": {"additional_kwargs": [Object], "content": " I"}, "lc": 1, "type": "constructor"}
 LOG  {"id": ["langchain_core", "messages", "AIMessageChunk"], "kwargs": {"additional_kwargs": [Object], "content": " assist"}, "lc": 1, "type": "constructor"}
 LOG  {"id": ["langchain_core", "messages", "AIMessageChunk"], "kwargs": {"additional_kwargs": [Object], "content": " you"}, "lc": 1, "type": "constructor"}
 LOG  {"id": ["langchain_core", "messages", "AIMessageChunk"], "kwargs": {"additional_kwargs": [Object], "content": " today"}, "lc": 1, "type": "constructor"}
 LOG  {"id": ["langchain_core", "messages", "AIMessageChunk"], "kwargs": {"additional_kwargs": [Object], "content": "?"}, "lc": 1, "type": "constructor"}
 LOG  {"id": ["langchain_core", "messages", "AIMessageChunk"], "kwargs": {"additional_kwargs": [Object], "content": ""}, "lc": 1, "type": "constructor"}

Other Notes

There are many different ways you can polyfill the missing features in React Native fetch. The above was the easiest to implement in my opinion.

core-js was critical to getting AsyncGenerator working when using Hermes, which is now enabled by default and is much better than its predecessor.
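
As a quick sanity check that the async-iterator polyfill actually loaded (hypothetical helper, just logs a warning at startup):

// Hypothetical startup check: `for await ... of` over the model's stream
// needs Symbol.asyncIterator, which the core-js module above provides.
if (typeof Symbol.asyncIterator === 'undefined') {
  console.warn(
    'Symbol.asyncIterator is missing - make sure the core-js polyfill is imported before any Langchain code.',
  );
}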

For React Native Web, you will need to add the below to your webpack config so you use the standard fetch implementation:

module.exports = {
  resolve: {
    alias: {
      'react-native$': 'react-native-web',
      fs: false,
      stream: 'stream-browserify',     // <-- This
      crypto: 'crypto-browserify',     // <-- This
      'react-native-fetch-api': false, // <-- This
    },
    extensions: ['.web.tsx', '.web.ts', '.web.jsx', '.web.js'],
  },
  // Other stuff...
};

Also, you can use Platform.select to fall back to an undefined client config for Langchain on web, like so:

const clientConfig = Platform.select<ClientOptions | undefined>({
  web: undefined,
  default: {
    fetch: (url, init) => {
      if (typeof url === 'string' && url.endsWith('/')) {
        return fetch(url.slice(0, url.length - 1), {
          ...init,
          reactNative: { textStreaming: true },
        } as any);
      }
      return fetch(url, init);
    },
  },
});

jacoblee93 commented 6 months ago

Hoo boy... this is going to be a fun one. Thank you so much for your efforts digging here - hopefully we can work some of this into a guide/possibly the framework directly.

david-gettins commented 5 months ago

After attempting to add more features and other third-party libs (various things), I've found that although my above set-up kind of works, there can be a lot of issues down the line: polyfill conflicts, breaking other libs, outdated packages with audit issues, etc.

@jacoblee93 I wouldn't bother trying to update the docs or lib with React Native support. The ideal solution would be a version of langchain written for React Native, i.e. using Native Modules. Just to clarify, I'm not directly asking you to do this, as it is a big task.

I've started writing a bespoke solution, using Native Modules, that meets the needs of the application I am creating. Unfortunately this means abandoning langchain for now. Hopefully in the future we - the React Native community - can contribute to a React Native version of langchain.

jacoblee93 commented 5 months ago

Damn. I do still really want to dig in deep here. Such a cool use case.

Keep me posted! And thank you for investigating!

jacoblee93 commented 2 months ago

Still planned and have actually made a bit of progress - the following polyfill works at least for basic prompt/model chains:

import 'react-native-get-random-values';
import 'react-native-url-polyfill/auto';
import { ReadableStream } from 'web-streams-polyfill/es6';
(globalThis as any).ReadableStream = ReadableStream;

You need to set web-streams-polyfill to ^3 in package.json:

"web-streams-polyfill": "^3"

TowhidKashem commented 2 months ago

@david-gettins what do you think of the effort made here: https://github.com/julian-hecker/naitive?tab=readme-ov-file

It's not my repo, I just ran into it randomly, but I'm not knowledgeable enough to know whether it solves any of the issues you encountered in a better way. Your input would be useful in deciding if I should look into that repo or not.

depombo commented 3 weeks ago

In case it is useful for this implementation, or for anyone else who lands on this issue like I did: I ended up building an OpenAI React Native client that supports file upload and chat completion streaming without polyfills, by using react-native-sse and the Expo file system. The library uses the official Node SDK everywhere else, along with the same types and API.

https://github.com/backmesh/openai-react-native