CGRU / cgru

CGRU - AFANASY
http://cgru.info/
GNU Lesser General Public License v3.0

Simultaneous Redshift Render on separate GPUs #427

eoyilmaz opened this issue 5 years ago

eoyilmaz commented 5 years ago

Hey Timur,

I want to implement the ability to send one Maya Redshift job to a single computer that has multiple GPUs and run each task on a different GPU simultaneously.

My main motivation is that we're using the Redshift renderer for our project, and while everything is good so far, scene translation takes up about 80% of the total render time.

Although Redshift says that scene translation is multi-threaded, I believe I could squeeze a little more juice out of it if I were able to run multiple tasks on the same computer. But then I run into the limitation that all of the active tasks try to use the same GPUs and choke on them.

So it would be great if I could define which task should run on which GPU, or rather control it from Afanasy.

The Redshift command-line tool accepts a -gpu parameter followed by a number, which lets you specify which GPU to use; e.g. -gpu 1 -gpu 3 lets you use two GPUs. And I believe that if you pass this option to the mayarender command, it will pass it on to the Redshift renderer too.
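For example, with the standalone renderer that would look something like the following (the redshiftCmdLine invocation and the scene path are illustrative; only the -gpu flags are as described above):

redshiftCmdLine scene.rs -gpu 1 -gpu 3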

So could you point me in the right direction so I can implement this feature?

lithorus commented 5 years ago

First of all, I'm not sure this is the right place to ask this.

However, I think you can do it by having two separate afrender processes, each with its own environment variables set before starting the process.

e.g. afrender1:

# first afrender instance: GPU_SELECTION is inherited by its tasks; register under a unique host name
export GPU_SELECTION="{1,3}"
export AFHOSTNAME=`hostname`-1
afrender -hostname $AFHOSTNAME

afrender2:

# second afrender instance: GPUs 0 and 2
export GPU_SELECTION="{0,2}"
export AFHOSTNAME=`hostname`-2
afrender -hostname $AFHOSTNAME

and then inject the GPU_SELECTION environment variable into the mayarender command line.
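For instance, a hypothetical Linux counterpart of such a wrapper (the Windows mayarender.cmd version appears later in this thread) could simply forward the variable to Redshift's -gpu flag, assuming Maya's Render binary is on the PATH:

#!/bin/bash
# Hypothetical mayarender wrapper: forward GPU_SELECTION, e.g. "{1,3}",
# to Redshift's -gpu flag; render normally if the variable is not set.
if [ -z "$GPU_SELECTION" ]; then
    exec Render "$@"
else
    exec Render -r redshift -gpu "$GPU_SELECTION" "$@"
fi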

eoyilmaz commented 5 years ago

That's a nice solution, thanks @lithorus

timurhai commented 5 years ago

Hi. You can also alter the task command in a Python service class: add "-gpu 1" to the command and create a marker file. Another task can then check whether that file exists and, if it does, add "-gpu 2" to its command instead.
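As a rough sketch of that idea (the marker-file path and helper name below are illustrative, not part of the CGRU API), the code that builds the task command could do something like:

import os
import tempfile

# Hypothetical marker file used to claim GPU 1 for the first task.
GPU1_MARKER = os.path.join(tempfile.gettempdir(), 'afrender_gpu1_busy')

def append_gpu_flag(command):
    """Append "-gpu 1" if GPU 1 is still free (claiming it via the marker
    file), otherwise fall back to "-gpu 2"."""
    if os.path.exists(GPU1_MARKER):
        return command + ' -gpu 2'
    open(GPU1_MARKER, 'w').close()  # claim GPU 1 for this task
    return command + ' -gpu 1'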

eoyilmaz commented 5 years ago

@lithorus's approach with some modifications worked fine.

So I created one render_GPU#.cmd for each GPU and changed the code to:

rem Name=Local Render...
rem Separator

rem GPU list for this afrender instance; child tasks inherit it and mayarender.cmd reads it
set GPU_LIST={0}
rem capture the machine's hostname; the _GPU0 suffix below makes the render name unique
for /f "delims==" %%i in ('hostname') do set AFHOSTNAME=%%i

call %0\..\_setup.cmd
if defined AF_RENDER_CMD (
   "%AF_RENDER_CMD%" %*
) else (
   "%AF_ROOT%\bin\afrender.exe" -hostname %AFHOSTNAME%_GPU0 %*
)
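(Presumably each render_GPU#.cmd copy differs only in the GPU_LIST value, e.g. set GPU_LIST={1}, and in the hostname suffix, e.g. %AFHOSTNAME%_GPU1.)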

and updated the mayarender.cmd to:

@echo off
call %CGRU_LOCATION%\software_setup\setup_maya.cmd

if "%GPU_LIST%" == "" (
    "%APP_DIR%\bin\Render.exe" %*
) else (
    "%APP_DIR%\bin\Render.exe" -r redshift -gpu %GPU_LIST% %*
)
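(With this in place, each afrender instance's tasks inherit its GPU_LIST value, so tasks picked up by the _GPU0 instance render only on GPU 0, tasks on the _GPU1 instance only on GPU 1, and so on.)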

@timurhai Of course your approach is much more elegant. But how or when would I delete the temp file to mark GPU 1 as free again?

timurhai commented 5 years ago

@eoyilmaz After the task finishes, you can do it in the Python service class destructor, or even a bit earlier: when the process finishes, afrender calls the service class's checkExitStatus(self, i_status) function.
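Continuing the sketch from above, the cleanup could then look roughly like this; only the checkExitStatus(self, i_status) hook comes from the comment, while the import, class name, base class and marker path are assumptions about how a CGRU service module is laid out:

import os
import tempfile

from services import service  # assumed import path for the CGRU service base class

# Same hypothetical marker file as in the earlier sketch.
GPU1_MARKER = os.path.join(tempfile.gettempdir(), 'afrender_gpu1_busy')

class maya(service.service):  # assumed class and base class names
    def checkExitStatus(self, i_status):
        # Free GPU 1 for the next task as soon as the process finishes,
        # regardless of its exit status.
        if os.path.exists(GPU1_MARKER):
            os.remove(GPU1_MARKER)
        return service.service.checkExitStatus(self, i_status)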