Closed: chinglee-iot closed this PR 5 months ago
This PR fails with the following unit tests. Once the unit tests are fixed, this PR will be ready for review.
FreeRTOS/FreeRTOS/Test/CMock/smp/multiple_priorities_no_timeslice_mock/covg_multiple_priorities_no_timeslice_mock_utest.c:520:test_coverage_vTaskDelete_scheduler_not_running:FAIL:Function listLIST_IS_EMPTY. Called more times than expected.
FreeRTOS/FreeRTOS/Test/CMock/smp/multiple_priorities_no_timeslice_mock/covg_multiple_priorities_no_timeslice_mock_utest.c:1185:test_coverage_prvCreateIdleTasks_name_within_max_len:FAIL:Function listINSERT_END. Called earlier than expected.
tasks_1_utest.c:887:test_vTaskDelete_success_current_task:FAIL:Function listLIST_IS_EMPTY. Called more times than expected.
tasks_1_utest.c:905:test_vTaskDelete_success_current_task_ready_empty:FAIL:Function listLIST_IS_EMPTY. Called more times than expected.
tasks_1_utest.c:924:test_vTaskDelete_success_current_task_ready_empty_null_task:FAIL:Function listLIST_IS_EMPTY. Called more times than expected.
This branch is working, however there are still leaks due to the lack of ability to join.
I think what's happening is something like:
* Management thread asks sub-thread X to shut down
* Management thread stops scheduler before sub-thread X calls vTaskDelete(NULL);
* We leak the TCB, stack, and pthread
I'd like graceful shutdown, vs. having the management thread vTaskDelete() the sub-thread.
What I'd do on posix:
* Management thread asks sub-thread X to shut down
* Management thread joins sub-thread X (block until thread is on delete list or has been deleted)
* Management thread stops scheduler
Thoughts on adding join functionality to close this opening or should I be giving up on graceful shutdown and killing the thread directly from the management thread?
@cmorganBE Thanks for your help testing. I will continue working to get this PR merged.
The question you have is about how to gracefully delete application-created tasks. FreeRTOS doesn't have a pthread_join-like function, but FreeRTOS inter-task communication can be used to synchronize the management task and sub-task X in your example. For instance, we can use an event queue: sub-task X sends an event to notify the management task that it is ready to be deleted and then blocks itself. Once the management task receives this event, it can delete sub-task X.
There may be more than one solution to this question, depending on the requirements. A pthread_join-like function would also be a new feature for FreeRTOS, and we should discuss it with you in the FreeRTOS forum. I would like to suggest we move the discussion there, where we can also get suggestions and shared experience from the community.
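Sketched in code, the queue-based handshake described above could look like the following. This is a sketch only: it needs the FreeRTOS kernel to build, and the queue name and task functions are illustrative, not from the PR.

/* Queue of task handles that are ready to be deleted. */
static QueueHandle_t xShutdownQueue;

static void vSubTaskX( void *pvParams )
{
    TaskHandle_t xSelf = xTaskGetCurrentTaskHandle();

    ( void ) pvParams;

    /* ... task work runs here until it is time to shut down ... */

    /* Tell the management task we are ready to be deleted, then block
     * forever; the management task performs the actual vTaskDelete(). */
    xQueueSend( xShutdownQueue, &xSelf, portMAX_DELAY );
    vTaskSuspend( NULL );
}

static void vManagementTask( void *pvParams )
{
    TaskHandle_t xDoomed;

    ( void ) pvParams;

    /* Wait for the sub-task to announce it has finished. */
    if( xQueueReceive( xShutdownQueue, &xDoomed, portMAX_DELAY ) == pdTRUE )
    {
        vTaskDelete( xDoomed ); /* Safe: the sub-task is suspended. */
    }

    vTaskEndScheduler();
}

The key property is the same as a join: the management task does not stop the scheduler until the sub-task has announced, via the queue, that it has finished its work.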
Thoughts on adding join functionality to close this opening or should I be giving up on graceful shutdown and killing the thread directly from the management thread?
You are right that FreeRTOS does not currently support POSIX-like join functionality. Until something like join is implemented in the kernel, we can achieve the same in the application by doing something like the following:
#define TASK_X_BIT    ( 1 << 0 )
#define TASK_Y_BIT    ( 1 << 1 )
#define ALL_TASKS     ( TASK_X_BIT | TASK_Y_BIT )

EventGroupHandle_t xTaskTrackingGroup;

/* Hypothetical application flag, set by the management task when it wants
 * the sub-tasks to shut down. Added so the sub-task loops can exit. */
volatile BaseType_t xShutdownRequested = pdFALSE;

void vSubTaskX( void *pvParams )
{
    while( xShutdownRequested == pdFALSE )
    {
        /* TaskX code. */
    }

    /* Inform the management task. */
    xEventGroupSetBits( xTaskTrackingGroup, TASK_X_BIT );
    vTaskSuspend( NULL );
}

void vSubTaskY( void *pvParams )
{
    while( xShutdownRequested == pdFALSE )
    {
        /* TaskY code. */
    }

    /* Inform the management task. */
    xEventGroupSetBits( xTaskTrackingGroup, TASK_Y_BIT );
    vTaskSuspend( NULL );
}

void vManagementTask( void *pvParams )
{
    /* Ask the sub-tasks to shut down. */
    xShutdownRequested = pdTRUE;

    xEventGroupWaitBits( xTaskTrackingGroup,
                         ALL_TASKS,
                         pdFALSE,
                         pdTRUE, /* Wait for all the tasks to finish. */
                         portMAX_DELAY );

    /* Delete all the sub tasks. */
    vTaskDelete( xSubTaskXhandle );
    vTaskDelete( xSubTaskYhandle );

    vTaskEndScheduler();

    /* The task code here should not be run. */
    configASSERT( pdFALSE );
}

int main( void )
{
    xTaskTrackingGroup = xEventGroupCreate();

    /* Creating management and subtasks. */

    /* Start the scheduler. */
    vTaskStartScheduler();

    /* Assuming the application continues to execute from here after
     * vTaskEndScheduler is called. */

    /* The management task can now be safely deleted here. */
    vTaskDelete( xManagementTaskHandle );
}
@cmorganBE What do you think?
The SonarCloud Quality Gate passed, but some issues were introduced.
* 1 new issue
* 0 security hotspots
* No data about coverage
* 0.0% duplication on new code
Delete kernel-created tasks in vTaskEndScheduler
Description
This PR addresses the discussion in #956 - Valgrind detects TCBs and stacks of deleted tasks are not freed after the scheduler stops.
There are 2 types of kernel objects - those created by the kernel (such as the idle and timer tasks) and those created by the application.
In order to keep the resource ownership clear, the application should delete the kernel objects that it creates and the kernel should delete the ones it creates. In line with that, the timer task and idle tasks are now deleted in vTaskEndScheduler(). vTaskEndScheduler() must be called from a task, and we cannot delete that task from vTaskEndScheduler(). The application is responsible for deleting the task that calls vTaskEndScheduler() after stopping the scheduler.

Example to delete the last running task
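A minimal sketch of this example, assuming the pattern discussed earlier in the thread (the handle name is illustrative and the sketch needs the FreeRTOS kernel to build):

TaskHandle_t xManagementTaskHandle;

int main( void )
{
    /* ... create the management task and any sub-tasks ... */

    vTaskStartScheduler();

    /* Execution resumes here after the management task calls
     * vTaskEndScheduler(). The kernel has already deleted the idle and
     * timer tasks; the application deletes the last running task. */
    vTaskDelete( xManagementTaskHandle );

    return 0;
}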
In this PR:
* vTaskDelete() is updated to delete a task directly when the scheduler is stopped, instead of putting it on the xTasksWaitingTermination list.
* The idle tasks and the timer task are deleted in vTaskEndScheduler().
.Test Steps
Tested with the posix port and test cases contributed by @cmorganBE. The test cases are updated in this branch
The valgrind test result shows no memory leakage after the scheduler stops or is restarted.
The test
Checklist:
Related Issue
956
By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.