I have a FastAPI endpoint that processes a list of files. After processing, the log files are packed into a zip archive and returned to the requester. Finally, I want to clean up all temporary logs to save disk space.
Some code I use here for illustration:
```python
import sys
from pathlib import Path

import loguru


class CustomLogger:
    """Wraps the logger to allow additional attributes."""

    def __init__(self, logger):
        self._logger = logger
        self.filehandlers = []

    def __getattr__(self, name):
        """Redirects attribute access to the wrapped logger when an
        attribute lookup fails (meaning: it was not defined for this class).
        If the attribute isn't found on the CustomLogger object,
        getattr(self._logger, name) is called, attempting to get the
        attribute from the original logger.
        """
        return getattr(self._logger, name)

    def truncate_filehandlers(self):
        for filepath, handler_id in self.filehandlers:
            self._logger.remove(handler_id)
        self.filehandlers.clear()


def init(log_path: Path, script: str, level="INFO"):
    logfile_name = f"{script}.log"
    logfile_path = str(log_path / logfile_name)
    LOG_FORMAT = (
        "<green>{time:YYYY-MM-DD HH:mm:ss.SSS}</green> [{process.id}] "
        "[<level>{level: <8}</level>] [{thread.name:14.14}] "
        "<cyan>{name}</cyan>:<cyan>{function}</cyan>:<cyan>{line}</cyan>: "
        "<level>{message}</level>"
    )
    logger = CustomLogger(loguru.logger)
    logger.remove()  # remove the default handler
    logger.add(sys.stdout,
               level=level,
               format=LOG_FORMAT,
               enqueue=True)  # helps for using ray?
    # track the filehandlers for cleanup
    logger.filehandlers.append((logfile_path,
                                logger.add(logfile_path,
                                           level=level,
                                           format=LOG_FORMAT,
                                           rotation=rotator.should_rotate,
                                           # compression=compress_in_subprocess,
                                           enqueue=True)))
    return logger
```
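The delegation in `__getattr__` can be illustrated standalone. This sketch uses the stdlib `logging` module instead of loguru so it is self-contained; `Wrapper` and `extra` are illustrative names, not part of the original code:

```python
import logging


class Wrapper:
    """Minimal sketch of the same delegation pattern: attribute lookups
    that fail on Wrapper fall through to the wrapped object."""

    def __init__(self, wrapped):
        self._wrapped = wrapped
        self.extra = []  # attribute that exists only on the wrapper

    def __getattr__(self, name):
        # __getattr__ is only called when normal lookup on Wrapper fails
        return getattr(self._wrapped, name)


w = Wrapper(logging.getLogger("demo"))
w.extra.append("x")  # found on the wrapper itself
print(w.name)        # falls through to the wrapped logger; prints "demo"
```

Note that `__getattr__` (unlike `__getattribute__`) is invoked only after the normal lookup fails, which is why `filehandlers` and `truncate_filehandlers` never shadow anything on the wrapped loguru logger.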
FastAPI endpoint (inside a class):
```python
    ...

    def remove_temporary_files(self, directories: List[Path]) -> None:
        for directory in directories:
            logger.info(f"Cleaning {directory}")
            truncate_directory(directory)

    def endpoint(self, background_tasks: BackgroundTasks, ...):
        ...
        logger = init(...)
        logger.truncate_filehandlers()  # free the files so they can be removed
        background_tasks.add_task(self.remove_temporary_files, [log_path, temp_path])
        return response
```
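The ordering that matters here can be sketched in isolation: the archive has to be built, and every handle closed, before the paths are handed to the background task for deletion. This is a sketch of that ordering only; `pack_logs` and the directory layout are illustrative assumptions, not the original code:

```python
import zipfile
from pathlib import Path


def pack_logs(log_dir: Path, archive: Path) -> list[Path]:
    """Zip all *.log files in log_dir into archive, then return the
    packed paths so they can be handed to a deferred cleanup task."""
    logs = list(log_dir.glob("*.log"))
    with zipfile.ZipFile(archive, "w") as zf:
        for log in logs:
            zf.write(log, arcname=log.name)
    # only after the ZipFile context exits is the archive fully written;
    # the caller must still ensure all log handlers are closed before
    # actually deleting these paths
    return logs
```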
When performing the cleanup in the background task I get:

```
PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'script_name.log'
```

It seems that the file handler is removed, but the file is not closed, when calling `logger.remove(...)`. The interesting thing is that it works in a small test script.
So it seems to be related to the multi-threaded environment. Adding some sleeps helps, but it doesn't seem to be deterministic.

Is it possible to actively terminate/close the file that is used by the FileSink?

I redesigned the setup of my loggers and this fixed the issue. It seems that I had some dangling references still around. I will close this, as it is not an issue.
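For anyone hitting the same transient lock: since the timing is non-deterministic, a retry loop around the deletion can ride out the short window in which a sink is still closing the file. This is a defensive sketch, not a fix for the underlying dangling reference; `remove_with_retry` is a hypothetical helper:

```python
import os
import time
from pathlib import Path


def remove_with_retry(path: Path, attempts: int = 5, delay: float = 0.2) -> bool:
    """Try to delete path, retrying on a transient PermissionError
    (e.g. WinError 32 while a log sink is still closing the file).
    Returns True once the file is gone."""
    for _ in range(attempts):
        try:
            os.remove(path)
            return True
        except FileNotFoundError:
            return True  # already gone, nothing to do
        except PermissionError:
            time.sleep(delay)  # file still held; wait and retry
    return False
```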