amosproj / amos2023ws05-pipeline-config-chat-ai

MIT License

Testing output for consistency 🤠 #82

Closed AviKatziuk closed 9 months ago

AviKatziuk commented 10 months ago

User story

  1. As a PO,
  2. I want to make sure the bot's output is consistent in terms of the functionality of the generated code,
  3. so that I can assure the course's staff and IP that we have tested for this.

Acceptance criteria

  1. (This task is only relevant if a solution to the bot's inaccuracy has been found and tested with successful results)
  2. Pick one or two queries and adjust them slightly so they resemble how the end user would actually type them. Test the following:
  3. What must be typed so that the bot is guaranteed to produce contiguous code in each output (with correct headers and parameters!!)?
  4. How loosely can component names be written and still be recognized? What is the threshold for recognition?
  5. Can the components be called without defining which is the destination, transformation, and source?
  6. Is it possible to feed parameter values within the textual input? (Example input: Use RTDIP with source spark component where the server address is 127.0.0.1; Example output: --headers-- SparkServerAddress: "127.0.0.1")
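A consistency check like criterion 6 could be automated along these lines. This is only a sketch: `query_bot` is a hypothetical stand-in for however the chatbot is actually invoked in this project, and the stubbed response merely mirrors the example output above. The idea is to send the same prompt several times and require that the extracted parameter value is identical in every run.

```python
import re

def query_bot(prompt: str) -> str:
    # Hypothetical stand-in for the real chatbot call; stubbed here
    # with the example output from criterion 6 for illustration only.
    return '--headers-- SparkServerAddress: "127.0.0.1"'

def extract_parameter(output: str, name: str):
    """Pull a parameter value like `Name: "value"` out of the bot's output."""
    match = re.search(rf'{name}:\s*"([^"]+)"', output)
    return match.group(1) if match else None

# Criterion 6: a value embedded in the textual input should reappear
# in the generated headers, and do so consistently across repeated runs.
prompt = ("Use RTDIP with source spark component "
          "where the server address is 127.0.0.1")

values = {extract_parameter(query_bot(prompt), "SparkServerAddress")
          for _ in range(5)}
assert values == {"127.0.0.1"}
```

The same loop-and-compare pattern would also cover criteria 3–5 by swapping in prompts with sloppier component names or missing source/transformation/destination labels.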

Definition of done (DoD)

Feature DoD:

  1. Code review has been completed and code has been merged.
  2. User interaction tests pass on all major browsers.

Sprint Release DoD:

  1. Project builds, deploys, and tests successfully.

Project Release DoD:

  1. User interaction tests pass on all major browsers.
  2. Design documentation has been updated.

DoD general criteria