PygmalionAI / aphrodite-engine

Large-scale LLM inference engine
https://aphrodite.pygmalion.chat
GNU Affero General Public License v3.0

[Usage]: native nvlink support or not agnostic to mobo #518

Open puppetm4st3r opened 3 months ago

puppetm4st3r commented 3 months ago

Your current environment

running isolated on docker container

How would you like to use Aphrodite?

I have the following question:

NVLink support has been dropped from new motherboards that are not high-end enterprise/server grade, yet some inference stacks such as llama.cpp can still run P2P over the NVLink bridge even when the motherboard does not officially support it. Does Aphrodite engine require a motherboard/chipset with official NVLink support, or does it have its own kernels/implementation for NVLink/P2P that is agnostic to the motherboard/chipset? I have two 3090s on a brand-new ASUS board with four PCIe x16 slots, but the board does not support NVLink.

regards!

AlpinDale commented 3 weeks ago

I believe our p2p detection has improved as of v0.6.0. Can you try again?
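As a quick sanity check outside Aphrodite, you can inspect what link topology the driver actually reports before re-testing. The sketch below parses the matrix printed by `nvidia-smi topo -m` and flags whether any GPU pair is connected via NVLink (`NV#` cells). The parser and helper names here are illustrative assumptions, not part of Aphrodite or the NVIDIA tooling:

```python
def parse_topo(topo_text):
    """Parse the link matrix printed by `nvidia-smi topo -m`.

    Returns {(row_gpu, col_gpu): link_type}, e.g. {("GPU0", "GPU1"): "NV4"}.
    """
    lines = [l for l in topo_text.splitlines() if l.strip()]
    # The header row names the columns; keep only the GPU columns.
    header = lines[0].split()
    gpu_cols = [(i, name) for i, name in enumerate(header) if name.startswith("GPU")]
    links = {}
    for line in lines[1:]:
        cells = line.split()
        if not cells or not cells[0].startswith("GPU"):
            continue  # skip the legend and affinity footer rows
        row = cells[0]
        for i, col in gpu_cols:
            if row != col:
                links[(row, col)] = cells[i + 1]  # +1: first cell is the row label
    return links


def has_nvlink(links):
    """True if any GPU pair reports an NVLink connection (NV1, NV2, NV4, ...)."""
    return any(v.startswith("NV") for v in links.values())
```

On a live machine you would feed it the real output, e.g. `parse_topo(subprocess.check_output(["nvidia-smi", "topo", "-m"], text=True))`. If the cells between your two 3090s read `PHB`/`PXB`/`SYS` rather than `NV#`, traffic is going over PCIe, and P2P availability then depends on what the driver and chipset allow rather than on an NVLink bridge.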