janeHe13 opened 2 years ago
Hi:
I will add the switch task e2e case.
@caishunfeng Could you help me look at the E2E problem? It's on Slack.
I will add the third E2E case for the project list.
@yangyunxi Which e2e case do you want to do? Workflow create or modify should come later.
Number is 3
@yangyunxi It seems we already have an E2E test for creating a project in https://github.com/apache/dolphinscheduler/blob/67cc260d52c1f2e5aa7db76aa8621cdd0f8c4ee0/dolphinscheduler-e2e/dolphinscheduler-e2e-case/src/test/java/org/apache/dolphinscheduler/e2e/cases/ProjectE2ETest.java#L49
Do you mind changing to another one?
Number 5: view project list data
@yangyunxi Yeah, you're right. Maybe you could add a project description case, and also a delete project case.
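To make the suggestion above concrete, here is a minimal sketch of what the extra project E2E coverage (edit the description, then delete the project) could look like with Selenium and JUnit 5. It is not the project's actual page-object API; the base URL and all CSS selectors are hypothetical placeholders.

```java
import org.junit.jupiter.api.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

class ProjectEditDeleteE2ESketch {

    @Test
    void editDescriptionThenDelete() {
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("http://localhost:12345/dolphinscheduler/ui/projects"); // hypothetical URL

            // Edit: open the edit dialog of the first project row and change its description.
            driver.findElement(By.cssSelector(".project-row .btn-edit")).click();          // hypothetical selector
            driver.findElement(By.cssSelector("input[name='description']")).clear();
            driver.findElement(By.cssSelector("input[name='description']")).sendKeys("updated by e2e");
            driver.findElement(By.cssSelector(".dialog .btn-submit")).click();

            // Delete: click delete on the same row and confirm in the dialog.
            driver.findElement(By.cssSelector(".project-row .btn-delete")).click();        // hypothetical selector
            driver.findElement(By.cssSelector(".dialog .btn-confirm")).click();
        } finally {
            driver.quit();
        }
    }
}
```

Once the real selectors are confirmed, this would better live alongside the existing ProjectE2ETest and its page objects.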
Search before asking
Description
2 Project name and description are blank
3 Click "submit" Button
2 Enter the project name, and the description is blank
3 Click "submit" Button
2 Enter project name and description
3 Click "submit" Button
2 Enter the project name and description
3 Click "Cancel" Button
2 Ordinary users can only see the projects created by themselves, but can't see the projects created by others
2 Project name is required and description is optional, the same as when creating a new project
2 Click the "delete" button
2 User B selects the authorized project and clicks the "delete" button
2 Click the "delete" button
1 When the query returns no data, the project list shows no data
2 When the query returns data, the project list displays the matching projects correctly
2 Select 30 records per page with no more than 30 records of data, and check the pagination display
3 Select 50 records per page with no more than 50 records of data, and check the pagination display
2 Select 30 records per page with no more than 30 records of data; a single page is displayed
3 Select 50 records per page with no more than 50 records of data; a single page is displayed
2 Select 30 records per page with more than 30 records of data, and check the pagination display
3 Select 50 records per page with more than 50 records of data, and check the pagination display
2 Select 30 records per page with more than 30 records of data; turning pages displays correctly
3 Select 50 records per page with more than 50 records of data; turning pages displays correctly
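A minimal Selenium sketch of the pagination checks above (30/50 records per page): with pagination working, a single page never shows more rows than the selected page size. The URL and selectors are hypothetical placeholders, not the real DolphinScheduler UI structure.

```java
import static org.junit.jupiter.api.Assertions.assertTrue;

import java.util.List;
import org.junit.jupiter.api.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;

class ProjectListPaginationSketch {

    @Test
    void pageSizeLimitsVisibleRows() {
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("http://localhost:12345/dolphinscheduler/ui/projects"); // hypothetical URL
            for (int pageSize : new int[] {30, 50}) {
                // Choose "<pageSize> / page" in the page-size dropdown (hypothetical selectors).
                driver.findElement(By.cssSelector(".pagination .page-size")).click();
                driver.findElement(By.cssSelector(".page-size-option-" + pageSize)).click();

                List<WebElement> rows = driver.findElements(By.cssSelector(".project-list tbody tr"));
                // A single page must never show more rows than the selected page size.
                assertTrue(rows.size() <= pageSize, "page shows at most " + pageSize + " rows");
            }
        } finally {
            driver.quit();
        }
    }
}
```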
2 View the task status statistics on the project homepage
0: submitted successfully
1: running
2: ready to pause
3: pause
4: ready to stop
5: stop
6: failure
7: success
8: fault tolerance required
9: kill
10: waiting for threads
2 View the process status statistics on the project homepage; the backing query is as follows (a verification sketch follows the state list below):
select t.state, count(0) as count
from t_ds_process_instance t
join t_ds_process_definition d on d.code = t.process_definition_code
join t_ds_project p on p.code = d.project_code
where 1 = 1
and t.is_sub_process = 0
and t.start_time >= '2019-10-10 00:00:00' and t.start_time <= '2022-10-31 11:16:00'
and p.id = 8
group by t.state
0: submitted successfully
1: running
2: ready to pause
3: pause
4: ready to stop
5: stop
6: fail
7: success
8: fault tolerance required
9: kill
10: wait for threads
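As referenced above, a minimal JDBC sketch that runs the process-status query and maps the 0-10 state codes listed in this case to readable names. The connection details and the project id are placeholders; the query mirrors the SQL given above.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

class ProcessStateStatisticsSketch {

    // State codes 0-10 as enumerated in this test case.
    static final String[] STATE_NAMES = {
        "submitted successfully", "running", "ready to pause", "pause", "ready to stop",
        "stop", "failure", "success", "fault tolerance required", "kill", "waiting for threads"
    };

    public static void main(String[] args) throws Exception {
        String sql = "select t.state, count(0) as count "
            + "from t_ds_process_instance t "
            + "join t_ds_process_definition d on d.code = t.process_definition_code "
            + "join t_ds_project p on p.code = d.project_code "
            + "where t.is_sub_process = 0 "
            + "and t.start_time >= ? and t.start_time <= ? and p.id = ? "
            + "group by t.state";
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:mysql://127.0.0.1:3306/dolphinscheduler", "root", "root"); // hypothetical connection
             PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, "2019-10-10 00:00:00");
            ps.setString(2, "2022-10-31 11:16:00");
            ps.setInt(3, 8); // project id used in this test case
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    int state = rs.getInt("state");
                    String name = state >= 0 && state < STATE_NAMES.length ? STATE_NAMES[state] : "state " + state;
                    System.out.printf("%s: %d%n", name, rs.getLong("count"));
                }
            }
        }
    }
}
```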
2 View the process definition statistics on the project homepage
select u.user_name, count(0) as count from t_ds_process_definition d
join t_ds_project p on p.id = d.project_id
join t_ds_user u on u.code = d.user_code
where 1 = 1
and p.id in (1)
group by u.user_name
2 View the task status statistics and the process status statistics
2 The data of the task status statistics and process status statistics within the default time period is correct
2 View the task status statistics and the process status statistics
If a resource file needs to be selected, Hadoop or S3 storage must be configured
2 Drag the shell component onto the canvas
3 Fill in the node name and operation flag, select "normal", and fill in the description
4 Select the priority (from high to low: highest / high / medium / low / lowest), and select one of the priorities
5 Select worker group, environment name, failed retry times, failed retry interval and delayed execution time
7 Select timeout alarm. The timeout policy is checked as timeout alarm, timeout failure, and the timeout duration is 1 minute
8 Edit shell script:
echo "test shell start" ; echo $time; echo $today; echo ${today_ global}; sleep 70; echo "test shell end"
9. Select a resource (shell file must be created in file management), which is not required
10 Custom parameters: time = $[yyyyMMddHHmmss], today = ${today_global}
11. Click "confirm to add" to close the task editing pop-up window
12 Click "save" to pop up the "set DAG name" pop-up box
13 Enter the name and description of the workflow, select the tenant, click the timeout alarm, and set the timeout alarm for 1 minute
15 Set the global parameter today_global = $[yyyy-MM-dd], click + to add a global parameter, and click Delete to delete the new global parameter
16 Online process definition is checked by default
17 Click the "add" button
2 A new record is added to the t_ds_process_definition table, with release_state=1 and process_definition_json.tasks.type=SHELL
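A minimal sketch of the database assertion above: after saving, t_ds_process_definition should contain the new workflow with release_state = 1 and a SHELL task in its definition JSON. The workflow name and connection settings are hypothetical; the column names follow the expectation stated in this case.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

class ShellDefinitionAssertionSketch {

    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:mysql://127.0.0.1:3306/dolphinscheduler", "root", "root"); // hypothetical connection
             PreparedStatement ps = conn.prepareStatement(
                 "select release_state, process_definition_json "
                 + "from t_ds_process_definition where name = ?")) {
            ps.setString(1, "shell_e2e_workflow"); // hypothetical workflow name
            try (ResultSet rs = ps.executeQuery()) {
                if (!rs.next()) {
                    throw new AssertionError("workflow definition was not created");
                }
                if (rs.getInt("release_state") != 1) {
                    throw new AssertionError("expected release_state = 1 (online)");
                }
                if (!rs.getString("process_definition_json").contains("\"type\":\"SHELL\"")) {
                    throw new AssertionError("expected a SHELL task in process_definition_json");
                }
            }
        }
    }
}
```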
2 Click task to pop up the task editing box. Task editing is the same as adding a shell task
3 Save workflow
2 Drag the sub_process component onto the canvas
3 Fill in the node name and operation flag, select "normal", and fill in the description
4 Select the priority (from high to low: highest/high/medium/low /lowest), and select one of the priorities
5 Select worker group and environment name
6 Select timeout alarm. The timeout policy is checked as timeout alarm, timeout failure, and the timeout duration is 1 minute
7 Select the online child node
8 Click "confirm to add" to close the task editing pop-up window
9 Click "save" to pop up the "set DAG name" pop-up box
10 Enter the name and description of the workflow, select the tenant, click the timeout alarm, and set the timeout alarm for 1 minute
11 Online process definition is checked by default
12 Click the "add" button
2 A new record is added to the t_ds_process_definition table, with release_state=1 and process_definition_json.tasks.type=SUB_PROCESS
2 Select a workflow
2 Drag the procedure component into the canvas, and the public field editing is the same as the shell task
3 Select data source type and data source name
4 Enter SQL statement
5 Test the IN and OUT parameter types of the stored procedure's user-defined parameters:
1) for in type, you need to enter parameter name and parameter value
2) for out type, you only need to enter parameter name
2 A new record is added to the t_ds_process_definition table
2 Drag the SQL component into the canvas, and the public field editing is the same as the shell task
3 Select different data source types and addresses
4 The SQL type is query, check send email, enter email subject and alarm group, and select the number of rows of log query result
5 Edit SQL statements (only one is allowed)
6 Pre SQL and post SQL tests (select statements are not supported)
7 User defined parameter test (including local parameters and global parameters)
2 A new record is added to the t_ds_process_definition table, with release_state=1 and process_definition_json.tasks.type=SQL
2 Drag the SQL component into the canvas, and the public field editing is the same as the shell task
3 Select different data source types and addresses
4 The SQL type is query. Do not check send mail. Select the number of rows of log query results
5 Edit SQL statements (only one is allowed)
6 Pre SQL and post SQL tests (select statements are not supported)
7 User defined parameter test (including local parameters and global parameters)
2 A new record is added to the t_ds_process_definition table, with release_state=1 and process_definition_json.tasks.type=SQL
2 Drag the SQL component into the canvas, and the public field editing is the same as the shell task
3 Select different data source types and addresses
4 The SQL type is non query
5 Edit SQL statements (only one is allowed)
6 Pre SQL and post SQL tests (select statements are not supported)
7 User defined parameter test (including local parameters and global parameters)
2 A new record is added to the t_ds_process_definition table, with release_state=1 and process_definition_json.tasks.type=SQL
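For reference, a minimal JDBC sketch (not the actual SqlTask implementation) of the distinction the three SQL-task cases above exercise: a "query" statement returns rows that the task can log or email, while a "non query" statement is executed for its update count only. The connection details and the statements are placeholders.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

class SqlTaskTypeSketch {

    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:mysql://127.0.0.1:3306/test", "root", "root"); // hypothetical connection
             Statement stmt = conn.createStatement()) {

            // SQL type "query": fetch rows, which the task can log or send by email.
            try (ResultSet rs = stmt.executeQuery("select id, name from test limit 5")) {
                while (rs.next()) {
                    System.out.println(rs.getInt("id") + "\t" + rs.getString("name"));
                }
            }

            // SQL type "non query": DML/DDL executed for its side effect only.
            int affected = stmt.executeUpdate("update test set name = 'e2e' where id = 1");
            System.out.println("rows affected: " + affected);
        }
    }
}
```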
2 Drag the spark component into the canvas, and the public field editing is the same as the shell task
3 Program types Java, Scala, python
4 Spark version: select spark1 or spark2
5 Fill in the class of the main function, such as: com.journey.spark.WordCount
6. Select the main package (when the program type is Java and Scala, only jar files can be selected; when python, only py files can be selected)
7 Select the "cluster" or "client" or "local" mode of spark
8 Fill in the number of driver cores, driver memory, executor, executor memory and executor cores
9 Fill in the main program parameters, such as: /jane1/words.txt /jane1/out
10. Fill in option parameters
11 Select a resource (not required)
12 Fill in user-defined parameters (not required)
2 A new record is added to the t_ds_process_definition table, with release_state=1 and process_definition_json.tasks.type=SPARK
2 Drag the flink component into the canvas, and the public field editing is the same as the shell task
3 Program types Java, Scala, python
4 Fill in the class of the main function, such as: org.apache.flink.streaming.examples.wordcount.WordCount
5. Select the main package (when the program type is Java and Scala, only jar files can be selected; when python, only py files can be selected)
6 Select cluster or local mode for deployment
7 Select the Flink version: <1.10 or >=1.10
8 Fill in the task name
9 Fill in the number of jobmanager memory, taskmanager memory, slots, taskmanagers and parallelism
10 Fill in the main program parameters, such as: -ytm flink
11 Fill in option parameters
12 Select a resource (not required)
13 Fill in user-defined parameters (not required)
2 A new record is added to the t_ds_process_definition table, with release_state=1 and process_definition_json.tasks.type=FLINK
2 Drag the MR component into the canvas, and the public field editing is the same as the shell task
3 Program types Java, Scala, python
4 Fill in the class of the main function, such as: com.journey.hadoop.WordCount
5. Select the main package (when the program type is Java and Scala, only jar files can be selected; when python, only py files can be selected)
6 Fill in the task name
7 Fill in the main program parameters, such as: /jane1/words.txt /jane1/MRout1
8. Fill in the option parameter
9 Select a resource (not required)
10 Fill in user-defined parameters (not required)
2 A new record is added to the t_ds_process_definition table, with release_state=1 and process_definition_json.tasks.type=MR
2 Drag the python component into the canvas, and the public field editing is the same as the shell task
3 Writing Python scripts
4 If the script needs to reference resources, create a file in the file management module
5 Fill in user-defined parameters (not required)
2 A new record is added to the t_ds_process_definition table, with release_state=1 and process_definition_json.tasks.type=PYTHON
2 Drag the dependent component into the canvas, and the public field editing is the same as the shell task
3 Add dependency, select project - > workflow - > task
2 Test the offset (previous N) of each interval (month, week, day and hour)
3 Dependent conditions and, or tests
2 A new record is added to the t_ds_process_definition table, with release_state=1 and process_definition_json.tasks.type=DEPENDENT
2 Drag the HTTP component into the canvas, and the public field editing is the same as the shell task
3 Test request address
4 Test request type: get, post, head, put, delete
5 Test request parameters: parameter, body, headers
6 Test verification conditions: default response code 200, user-defined response code, content verification
7 Timeout setting: fill in connection timeout and socket timeout
8 Custom parameter
2 A new record is added to the t_ds_process_definition table, with release_state=1 and process_definition_json.tasks.type=HTTP
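A minimal sketch of what the HTTP task case above verifies: send a request with a connection timeout and a socket timeout, check the response code (200 by default), and verify the body content. The URL, header and expected content are hypothetical placeholders.

```java
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

class HttpTaskCheckSketch {

    public static void main(String[] args) throws Exception {
        HttpURLConnection conn =
            (HttpURLConnection) new URL("http://localhost:8080/health").openConnection(); // hypothetical URL
        conn.setRequestMethod("GET");          // also exercised: POST, HEAD, PUT, DELETE
        conn.setConnectTimeout(60_000);        // "connection timeout"
        conn.setReadTimeout(60_000);           // "socket timeout"
        conn.setRequestProperty("X-Custom-Header", "e2e"); // headers-type request parameter

        int code = conn.getResponseCode();
        if (code != 200) {                     // default check: response code 200
            throw new AssertionError("expected 200 but got " + code);
        }
        try (InputStream in = conn.getInputStream()) {
            String body = new String(in.readAllBytes(), StandardCharsets.UTF_8);
            if (!body.contains("ok")) {        // content verification condition
                throw new AssertionError("response body did not contain expected content");
            }
        }
        conn.disconnect();
    }
}
```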
2 Drag the dataX component into the canvas, and the public field editing is the same as the shell task
3 Close the user-defined template (closed by default), and select different data source types and data sources
4 Write SQL statements
5 Select target database and data source
6 Fill in the target table
7 Write pre SQL and post SQL of target database (optional)
8 Fill in current limit (number of bytes), current limit (number of records) and running memory
2 Drag the dataX component into the canvas, and the public field editing is the same as the shell task
3 Open the custom template
4 Write JSON, JSON reference template: {"job":{"setting":{"speed":{"channel":3},"errorLimit":{"record":0,"percentage":0.02}},"content":[{"reader":{"name":"mysqlreader","parameter":{"username":"root","password":"root","column":["id","name"],"splitPk":"db_id","connection":[{"table":["table"],"jdbcUrl":["jdbc:mysql://127.0.0.1:3306/database"]}]}},"writer":{"name":"mysqlwriter","parameter":{"writeMode":"insert","username":"root","password":"root","column":["id","name"],"session":["set session sql_mode='ANSI'"],"preSql":["delete from test"],"connection":[{"jdbcUrl":"jdbc:mysql://127.0.0.1:3306/datax?useUnicode=true&characterEncoding=gbk","table":["test"]}]}}}]}}
5. Fill in user-defined parameters
6 Fill in the running memory
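Since a malformed custom template makes the DataX task fail, here is a minimal Jackson sketch that validates the JSON above before pasting it into the task editor. The file path is a hypothetical placeholder.

```java
import java.io.File;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

class DataxTemplateCheckSketch {

    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        JsonNode root = mapper.readTree(new File("datax-template.json")); // hypothetical path

        // The template must at least declare a reader and a writer under job.content[0].
        JsonNode content = root.path("job").path("content").path(0);
        if (content.path("reader").isMissingNode() || content.path("writer").isMissingNode()) {
            throw new AssertionError("DataX template needs both a reader and a writer");
        }
        System.out.println("channel = "
            + root.path("job").path("setting").path("speed").path("channel").asInt());
    }
}
```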
2 Drag the sqoop component into the canvas, and the public field editing is the same as the shell task
3 Fill in the task name
4 Select import as the flow direction
5 Fill in Hadoop parameters, such as: mapreduce.map.memory.mb=2048
6 Fill in sqoop parameters, such as: mapreduce.reduce.memory.mb=2048
7 Select MySQL as the data source type, and choose form or SQL mode
8 Add hive type mapping and Java type mapping
9 Select HDFS or hive as the type of data destination, and fill in the corresponding field information according to different types
10 Fill in custom parameters
2 Drag the sqoop component into the canvas, and the public field editing is the same as the shell task
3 Fill in the task name
4 Select export as the flow direction
5 Fill in Hadoop parameters, such as: mapreduce.map.memory.mb=2048
6 Fill in sqoop parameters, such as: mapreduce.reduce.memory.mb=2048
7 Select MySQL as the data source type, and choose form or SQL mode
8 Add hive type mapping and Java type mapping
9 Select HDFS or hive as the type of data destination, and fill in the corresponding field information according to different types
10 Fill in the user-defined parameter
2 Create the conditions task, connect task A before the conditions task, and connect tasks B and C after the conditions task
3 Double-click the conditions task and select branch flow B for the success state and branch flow C for the failure state
4 Click the user-defined parameter and select the state of task A as success or failure
2 If task A fails, the flow moves on to task C after task A runs, and task B is not executed
2 Create task1, task2, task3 and task4
3 task1 connects to task2 and task3 (task2 and task3 run in parallel), and task4 depends on task2 and task3
2 task2 and task3 run in parallel; task4 runs only after both task2 and task3 have succeeded
2 The worker service is not started in the worker group, and the task status is always "submitted successfully"
3 After version 2.1, worker groups are configured in worker.properties (worker.groups=workerGroupName); the default worker group is "default"
2 Select "timeout alarm" for timeout strategy
3 Set "timeout duration"
4 Select alarm group
2 A new record is added to the t_ds_alert table, with alert_status=1
2 Select "timeout failed" for timeout policy
3 Set "timeout duration"
2 Save the workflow and turn on the "timeout alarm" switch
3 Set timeout duration
4 Run the workflow and select alarm group
2 A new record is added to the t_ds_alert table, with alert_status=1
2 Set user-defined parameters in the task (user-defined parameters can reference global variables)
2 Shell script input in Task1
echo ${setValue(trans=Hello trans)}
3 The custom parameter of task1 is trans, and its type is OUT (see the parsing sketch below)
4 Shell script input in task2
echo ${trans}
5 Run the workflow
2 A new record is added to the t_ds_process_definition table, with release_state=1 and process_definition_json.tasks.type=SHELL
2 A new record is added to the t_ds_process_definition table, with release_state=0 and process_definition_json.tasks.type=SHELL
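As referenced in the parameter-passing case above, a minimal sketch of the idea being tested: task1 echoes ${setValue(trans=Hello trans)} and the OUT parameter trans must reach task2's `echo ${trans}`. The regex-based extraction below only illustrates the mechanism; it is not the scheduler's actual parsing code.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

class SetValueParamSketch {

    // Matches ${setValue(key=value)} occurrences in task output.
    private static final Pattern SET_VALUE =
        Pattern.compile("\\$\\{setValue\\(([^=]+)=([^)]*)\\)\\}");

    public static void main(String[] args) {
        String task1Output = "echoed: ${setValue(trans=Hello trans)}";

        Map<String, String> outParams = new HashMap<>();
        Matcher m = SET_VALUE.matcher(task1Output);
        while (m.find()) {
            outParams.put(m.group(1).trim(), m.group(2).trim());
        }

        // task2 runs `echo ${trans}` and should print the value set by task1.
        System.out.println(outParams.get("trans")); // Hello trans
    }
}
```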
2 When saving the workflow, set the global parameter
2 Global parameters can be constants or variables
2 Check one or more workflows and click the "export" button at the bottom of the page to successfully export the workflow
2 Import an existing workflow
3 Cross project import workflow
2 Check one or more workflows, click the "batch copy" button, and select the project name
2 Select the project name
2 The list data is displayed correctly
2 Click the query button
1 Query no data, and the list displays no data temporarily
2 The query has data, and the list is displayed correctly
2 A workflow in online status cannot be edited; a workflow in offline status can be edited
2 Click "close" on DAG page
2 To save the workflow, click the Cancel button in the "set DAG name" pop-up box
2 Enter the workflow definition page and click "run" to pop up a pop-up box
3 Select "continue" for the failure strategy in the pop-up box
2 Workflow instance status is failed
2 Enter the workflow definition page and click "run" to pop up a pop-up box
3 Select "end" for the failure strategy in the pop-up box
2 The workflow instance status is "failed", and the status of the killed tasks is "kill"
2 Select the notification policy from the pop-up box: none, success, failure, success or failure
2 Select send successfully to send a notification after the workflow instance runs successfully. If it fails, it will not be sent
3 Select send failed to send a notification after the workflow instance fails to run. If it succeeds, it will not be sent
4 Select both success and failure; a notification is sent whether the workflow instance succeeds or fails
2 Select the process priority in the pop-up box
2 Select worker group default
2 If multiple worker services are started, the workflow task randomly selects one worker to run
2 Enter the workflow definition page and click "run" to pop up a pop-up box
3 Select the worker group worker_group_188 in the pop-up box
2 Select the environment name
2 If no environment is selected, the environment variables configured in DS are loaded by default from conf/env/dolphinscheduler_env.sh
2 Select the alarm group in the pop-up box
2 Check "complement" in the pop-up box and select "serial execution"
3 Supplement date: 2019-11-05 00:00:00 - 2019-11-07 00:00:00
2 There are only 1 data in the workflow instance page. The running type is "complement number". After scheduling time is completed from 2019-11-05 00:00:00>2019-11-06 00:00:00>2019-11-07 00:00:00 and complement, the scheduling time is 2019-11-07 00:00:00.
2 Check "replenish" in the pop-up box and select "parallel execution"
3 Supplement date: 2019-11-05 00:00:00 - 2019-11-07 00:00:00
2 There are three pieces of data on the workflow instance page. The operation type is "supplement". The scheduling times are 2019-11-05 00:00:00, 2019-11-06 00:00:00 and 2019-11-07
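A minimal sketch of how the complement date range above expands into the three expected scheduling times (2019-11-05, 2019-11-06 and 2019-11-07, all at 00:00:00), assuming a one-day scheduling interval; it is not the scheduler's actual complement logic.

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

class ComplementDatesSketch {

    public static void main(String[] args) {
        DateTimeFormatter fmt = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss");
        LocalDateTime start = LocalDateTime.parse("2019-11-05 00:00:00", fmt);
        LocalDateTime end = LocalDateTime.parse("2019-11-07 00:00:00", fmt);

        // Serial execution runs these scheduling times one after another;
        // parallel execution creates one workflow instance per scheduling time.
        for (LocalDateTime t = start; !t.isAfter(end); t = t.plusDays(1)) {
            System.out.println(t.format(fmt));
        }
    }
}
```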
2 Set the startup parameters in the pop-up box and click the "run" button
2 If the master service fails to execute the command, the data is recorded in the t_ds_error_command table
3 The workflow instance is written to the t_ds_process_instance table
4 Tasks are written to the t_ds_task_instance table
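A minimal JDBC sketch of checking the records described above: after a run, t_ds_process_instance holds the workflow instance and t_ds_task_instance holds its tasks. The join column, instance name and connection settings are assumptions, not verified schema.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

class RunRecordsAssertionSketch {

    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:mysql://127.0.0.1:3306/dolphinscheduler", "root", "root"); // hypothetical connection
             PreparedStatement ps = conn.prepareStatement(
                 "select p.id, p.state, t.name, t.state "
                 + "from t_ds_process_instance p "
                 + "join t_ds_task_instance t on t.process_instance_id = p.id " // assumed join column
                 + "where p.name like ? order by p.id desc")) {
            ps.setString(1, "shell_e2e_workflow%"); // hypothetical instance name prefix
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.printf("process %d state=%d task %s state=%d%n",
                        rs.getInt(1), rs.getInt(2), rs.getString(3), rs.getInt(4));
                }
            }
        }
    }
}
```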
2 Set the startup parameters in the pop-up box and click the "run" button
2 Set the startup parameters in the pop-up box, click the "run" button
2 Set "start and end time" and "timing" expressions in the pop-up box, failure policy, notification policy, process priority, worker group, notification group, recipient and CC
3 Click the "execution time" button
2 The schedule does not take effect until it is brought online
3 When the current system time reaches the scheduled execution time, the workflow will run automatically. A new piece of data will be added to the workflow instance. The operation type is "scheduling execution", and the scheduling time will be displayed correctly
2 Click the "Edit" button
2 Click the "go online" button
2 The edit and delete buttons cannot be clicked, and the "go online" button changes to "go offline"
2 Click the "offline" button
2 The edit and delete buttons can be clicked, and the "offline" button changes to "online"
2 Click the "delete" button
2 The "Edit" and "delete" buttons cannot be clicked
2 "Edit" and "delete" buttons can be clicked, while "run", "timing" and "timing management" cannot be clicked
2 Click the "delete" button
2 Click Delete to delete only the selected workflow on the current page
2 Click the "delete" button
2. Click Delete to delete the selected workflow
2 Select the task, right-click to pop up the pop-up box, and click the "run" button
1) select execute backward for node execution and execute backward from the current task
2) select execute forward for node execution and execute from the first task
3) select execute only the current node for node execution and execute only the current node
2 Workflow is not online and cannot be run
3 The workflow has been online and can be run
2 Select a task, right-click to pop up the pop-up box, and click Edit
2 Select a task, right-click to pop up a pop-up box, and click Copy
2 Select a task, right-click to pop up a pop-up box, and click Delete
2 Select 30 records per page with no more than 30 records of data, and check the pagination display
3 Select 50 records per page with no more than 50 records of data, and check the pagination display
2 Select 30 records per page with no more than 30 records of data; a single page is displayed
3 Select 50 records per page with no more than 50 records of data; a single page is displayed
2 Select 30 records per page with more than 30 records of data, and check the pagination display
3 Select 50 records per page with more than 50 records of data, and check the pagination display
2 Select 30 records per page with more than 30 records of data; turning pages displays correctly
3 Select 50 records per page with more than 50 records of data; turning pages displays correctly
2 The list data is displayed correctly
2 Click the query button
2 Stop, pause and Gantt chart buttons can be clicked
2 Click Edit, rerun, delete and Gantt chart buttons
2 Edit, rerun, restore failed, delete and Gantt chart buttons can be clicked
2 The Gantt chart button can be clicked
2 The Gantt chart button can be clicked
2 Click Edit, rerun, resume operation, delete and Gantt chart buttons
2 The Gantt chart button can be clicked
2 Click Edit, rerun, resume operation, delete and Gantt chart buttons
2 Edit the workflow and click Save to pop up a pop-up box
3 Check "update process definition" in the pop-up box
2 Edit the workflow and click Save to pop up a pop-up box
3 Do not check "update process definition" in the pop-up box
2 Click the "start parameters" and "view variables" buttons in the upper left corner of the page
2 Click the "start parameter" and "view variable" buttons again to collapse the parameter display
2 Double click the task, expand the node settings, and click the "view log" button
2 Double click the task, expand node settings, and click view history button
2. The task being executed is written into the t_ds_command table. The master scans the table and writes the data into the tasks_kill queue in ZooKeeper, and the worker then executes the kill task
3. After killing the task, the process status changes to "stop" and the task status changes to "kill"
2. The submitted tasks will be completed, the unsubmitted tasks will be suspended, and the process status will change to "suspended"
2. Click the "delete" button
2. Click "delete" Button
2. Select 30 records per page with no more than 30 records of data, and check the pagination display
3. Select 50 records per page with no more than 50 records of data, and check the pagination display
2. Select 30 records per page with no more than 30 records of data; a single page is displayed
3. Select 50 records per page with no more than 50 records of data; a single page is displayed
2 Select 30 records per page with more than 30 records of data, and check the pagination display
3 Select 50 records per page with more than 50 records of data, and check the pagination display
2 Select 30 records per page with more than 30 records of data; turning pages displays correctly
3 Select 50 records per page with more than 50 records of data; turning pages displays correctly
2 Click the query button
2 Select 30 records per page with no more than 30 records of data, and check the pagination display
3 Select 50 records per page with no more than 50 records of data, and check the pagination display
2 Select 30 records per page with no more than 30 records of data; a single page is displayed
3 Select 50 records per page with no more than 50 records of data; a single page is displayed
2 Select 30 records per page with more than 30 records of data, and check the pagination display
3 Select 50 records per page with more than 50 records of data, and check the pagination display
2 Select 30 records per page with more than 30 records of data; turning pages displays correctly
3 Select 50 records per page with more than 50 records of data; turning pages displays correctly
Use case
No response
Related issues
No response
Are you willing to submit a PR?
Code of Conduct