The Form Filler Addon is an experimental feature that provides an easy way for Flow users to create forms filled automatically from natural language input sources using GPT technologies.
@Route("test")
public class FormTest extends Div {

    public FormTest() {
        // Give each field a meaningful id so the FormFiller can map data to it
        TextField nameField = new TextField("Name");
        nameField.setId("name");
        TextField addressField = new TextField("Address");
        addressField.setId("address");

        FormLayout fl = new FormLayout();
        fl.add(nameField, addressField);

        // Fill the form from a natural language description
        FormFiller formFiller = new FormFiller(fl);
        formFiller.fill("My name is Bart and I live at 742 Evergreen Terrace, Springfield USA");
        add(fl);
    }
}
This is an experimental feature, and it may be removed, altered, or limited to commercial subscribers in future releases.
Add the Maven dependency to your project:
<dependency>
    <groupId>com.vaadin.flow.ai</groupId>
    <artifactId>form-filler-addon</artifactId>
    <version>0.1.0</version>
</dependency>
To use the Form Filler Addon you need a valid ChatGPT API key. That key is all you need to use the addon in your own application or to test the main demo (route "/"); to run the OCR image demos you also need a Google Vision API key. Check the Views - Image Input section.
These keys can be set as environment variables or specified from the command line with the '-D' flag.
export OPENAI_TOKEN="THE KEY"
export GOOGLE_VISION_API_KEY="THE KEY"
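Alternatively, the keys can be passed with the '-D' flag. As a hedged example, assuming the same key names are accepted as system properties (check the addon documentation for the exact property names), the demo could be started like this:
mvn -Pdev -DOPENAI_TOKEN="THE KEY" -DGOOGLE_VISION_API_KEY="THE KEY"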
The project includes the FormFiller addon itself and some demos that showcase its capabilities.
There are six constructors, all based on the same base constructor; each simply provides default values for the parameters of the base constructor that are not supplied.
These parameters are:
target: the target component or group of components (layout) to fill. This is the only mandatory parameter, and it has no default value.
componentInstructions: additional instructions for the AI module related to a specific component/field (e.g., field format, field explanation, etc.). Use these instructions to provide additional information to the AI module about a specific field when the response of the form filler is not accurate enough. By default this structure is initialized empty.
contextInstructions: additional instructions for the AI module related to the input source or to all components/fields (e.g., target language, vocabulary explanation, current time, etc.). Use these instructions to provide additional information to the AI module about the context of the input source in general. By default this structure is initialized empty.
llmService: the AI module service to use. By default, the FormFiller uses OpenAI ChatGPT with the chat/completion endpoint and the "gpt-3.5-turbo-16k-0613" model. There is another built-in service, also based on ChatGPT, that uses the /completion endpoint and the "text-davinci-003" model. Note that the newest ChatGPT models are not necessarily better for the specific task of the Form Filler; so far, tests have not clearly identified the best model, so don't hesitate to try both services and give us feedback about your results. More models and other LLM providers will be added to the addon in the future. If you want to create your own provider service, you just need to implement the LLMService interface and pass it as a parameter to the Form Filler.
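As a minimal sketch of how these parameters fit together (the collection type of the context instructions and the exact constructor order are assumptions here; check the addon's Javadoc for the real signatures):
// Instructions for specific fields (see the examples later in this document)
HashMap<Component, String> componentInstructions = new HashMap<>();
componentInstructions.put(nameField, "Format this field in Uppercase");

// General context instructions; assumed here to be a list of free-text strings
ArrayList<String> contextInstructions = new ArrayList<>();
contextInstructions.add("Translate items to Spanish");

// Assumed base-constructor order: target, component instructions, context instructions, LLM service
FormFiller formFiller = new FormFiller(formLayout, componentInstructions, contextInstructions, new ChatGPTService());
FormFillerResult result = formFiller.fill(input);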
public FormFillerResult fill(String input)
The main method to call once the FormFiller object has been set up. It fills the registered fields and returns a structure with information about the process, such as the AI module request and response.
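For instance, after filling the form you can inspect what was sent to and received from the AI module. The accessor names below are assumptions, not confirmed by this document; check the FormFillerResult class for the exact API:
FormFillerResult result = formFiller.fill(input);
// Assumed accessor names; verify against the addon's FormFillerResult class
System.out.println("AI request: " + result.getRequest());
System.out.println("AI response: " + result.getResponse());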
For a complete example, check the main demo code.
The only requirement to make a component/field accessible to the Form Filler is to set a (meaningful) id on the component.
Creating the form:
FormLayout formLayout = new FormLayout();
TextField nameField = new TextField("Name");
nameField.setId("name");
formLayout.add(nameField);
TextField addressField = new TextField("Address");
addressField.setId("address");
formLayout.add(addressField);
TextField phoneField = new TextField("Phone");
phoneField.setId("phone");
formLayout.add(phoneField);
TextField emailField = new TextField("Email");
emailField.setId("email");
formLayout.add(emailField);
DateTimePicker dateCreationField = new DateTimePicker("Creation Date");
dateCreationField.setId("creationDate");
formLayout.add(dateCreationField);
DatePicker dueDateField = new DatePicker("Due Date");
dueDateField.setId("dueDate");
formLayout.add(dueDateField);
ComboBox<String> orderEntity = new ComboBox<>("Order Entity");
orderEntity.setId("orderEntity");
orderEntity.setItems("Person", "Company");
formLayout.add(orderEntity);
NumberField orderTotal = new NumberField("Order Total");
orderTotal.setId("orderTotal");
formLayout.add(orderTotal);
TextArea orderDescription = new TextArea("Order Description");
orderDescription.setId("orderDescription");
formLayout.add(orderDescription);
RadioButtonGroup<String> paymentMethod = new RadioButtonGroup<>("Payment Method");
paymentMethod.setItems("Credit Card", "Cash", "Paypal");
paymentMethod.setId("paymentMethod");
formLayout.add(paymentMethod);
Checkbox isFinnishCustomer = new Checkbox("Is Finnish Customer");
isFinnishCustomer.setId("isFinnishCustomer");
formLayout.add(isFinnishCustomer);
CheckboxGroup<String> typeService = new CheckboxGroup<>("Type of Service");
typeService.setItems("Software", "Hardware", "Consultancy");
typeService.setId("typeService");
formLayout.add(typeService);
Grid<OrderItem> orderGrid = new Grid<>(OrderItem.class);
orderGrid.setId("orders");
formLayout.add(orderGrid);
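The Grid above is built from an OrderItem bean. The real demo defines its own bean; a minimal hypothetical version, with illustrative properties only, could look like this:
// Hypothetical bean for the Grid example; property names are illustrative, not the demo's actual ones
public class OrderItem {
    private String itemName;
    private Double itemCost;

    public String getItemName() { return itemName; }
    public void setItemName(String itemName) { this.itemName = itemName; }
    public Double getItemCost() { return itemCost; }
    public void setItemCost(Double itemCost) { this.itemCost = itemCost; }
}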
Filling the form:
// Option 1: only the target layout; all other parameters use their defaults
FormFiller formFiller = new FormFiller(formLayout);
FormFillerResult result = formFiller.fill(input);

// Option 2: extra instructions for specific fields plus general context information
FormFiller formFiller = new FormFiller(formLayout, fieldsInstructions, contextInformation);
FormFillerResult result = formFiller.fill(input);

// Option 3: explicitly selecting the built-in LLM service
FormFiller formFiller = new FormFiller(formLayout, new ChatGPTService());
FormFillerResult result = formFiller.fill(input);
To make a set of components ready to be filled by the FormFiller, the only requirement is that each component has a (meaningful) id, as described above. Remember that you can later add extra information about any component to help the AI module if the id alone is not enough to understand what data you are looking for. You could use a whole sentence as an id, but for cleaner code we recommend short ids combined with extra instructions; it is up to the developer to choose. In most cases a 2-3 word id is enough for the AI module to understand the target. For example:
CheckboxGroup<String> typeService = new CheckboxGroup<>("Type of Service");
typeService.setItems("Software", "Hardware", "Consultancy");
typeService.setId("typeService");
formLayout.add(typeService);
......
HashMap<Component,String> fieldInstructions = new HashMap<>();
fieldInstructions.put(typeService, "This field describes the type of the items of the order");
FormFiller formFiller = new FormFiller(formLayout, fieldInstructions);
FormFillerResult result = formFiller.fill(input);
is better than
CheckboxGroup<String> typeService = new CheckboxGroup<>("Type of Service");
typeService.setItems("Software", "Hardware", "Consultancy");
typeService.setId("the type of the items of the order");
formLayout.add(typeService);
......
FormFiller formFiller = new FormFiller(formLayout);
FormFillerResult result = formFiller.fill(input);
These extra instructions can be used not only for understanding but also for formatting or fixing errors, e.g.:
HashMap<Component,String> fieldInstructions = new HashMap<>();
fieldInstructions.put(nameField, "Format this field in Uppercase");
fieldInstructions.put(emailField, "Format this field as a correct email");
There are some limitations for certain fields, especially the ones containing dates: the FormFiller has its own standard formatting requirements, so be careful when manipulating them.
The demo has 3 built-in views available. In all demos you have preloaded examples that you can use to test them. Of course, you can always use your own examples of input sources.
All demos follow the same layout:
The actions differ between the text input and the document input only in that documents are uploaded instead of using predefined examples. For the text input, you just need to modify the 'Debug Input Source' text area.
The extra instructions tool is simply a set of text fields that let you add more context information to the prompt at runtime. This information can be related to a specific field; for example, in the text demo try "Format this field in Uppercase" for the name field and "Translate items to Spanish" as context information.
The Debug Tool includes text areas to visualize each of the important parts of the process:
Debug Input Source: The exact input data that is sent to ChatGPT
Debug JSON target: The target JSON schema that ChatGPT is asked to use to describe the data
Debug Type target: The information about fields (type, context) shared with ChatGPT
Debug Prompt: Final prompt as it is sent to ChatGPT.
Debug Response: The response received from ChatGPT.
In these examples, snapshots of one-page documents are used to get the text. In both examples you can load your own image to test.
“/invoice” - Example using invoice documents as the input source. These documents are usually well formatted and contain similar information.
“/receipt” - Example using receipt documents as the input source. These documents are usually not well formatted and vary in format and information.
This example demonstrates how to use your mobile camera to capture text:
/camera - Example using any picture (invoice/receipt) to test the mobile camera as an input source. When used on a desktop, it will open the operating system's file manager to upload an image file. On a mobile device, it allows you to take a photo and use it immediately as the input source.
Starting the test/demo server:
mvn -Pdev
Hint: Ensure you activate the development profile. This includes the vaadin-server, making the URLs below accessible. Without this profile, the dependency remains in the provided scope, and the Vaadin demo URL paths won't load or be reachable.
This deploys demos at http://localhost:8080, http://localhost:8080/receipt and http://localhost:8080/invoice
To run the integration tests, execute mvn verify -Pit,production.
Tests run by default in headless mode, to avoid browser windows being opened for every test. This behaviour is always disabled when running the tests in debug mode in the IDE or when running Maven with the -Dmaven.failsafe.debug system property. On normal execution, headless mode can be deactivated using the -Dtest.headless=false system property.
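For example, to run the integration tests with visible browser windows:
mvn verify -Pit,production -Dtest.headless=false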
Note: This guide is a draft; the steps may differ after any change to the key creation flow, so keeping it updated would be nice. As of September 12, 2023, it is consistent with the Google Cloud documentation.
Sign Up or Sign In:
Apply for Free Credits:
Navigate to API Dashboard:
Create a New Project:
Go to API Library:
Search and Enable Vision API:
Credentials:
Create a New API Key:
Copy and Use API Key:
export GOOGLE_VISION_API_KEY="YOUR_API_KEY"
Alternatively, specify it with the -D flag in your application.
Create a Service Account:
Generate Key for Service Account:
Assign Roles for Service Account:
Roles for Project:
Wait for Activation:
You should now be set up to use Google's Vision API.