Many laptops now come with 2 GPUs: a slower one that consumes less battery, and a “dedicated” GPU that consumes more battery but is capable of much better speeds, especially with non-trivial 3D data. The CGE editor and your applications now automatically use the “dedicated” GPU.
More information about this feature, including how to disable it and how to upgrade your existing projects, is in the documentation: Dedicated GPU.
This is cool. I thought a game engine, or our CGE, would always use the more powerful GPU. As I read in the link, you can define it yourself inside the code.
My laptop also has 2 GPUs: an Intel one and a powerful Nvidia one. When I start any of my applications (mostly something with 3D), I can choose as a user which GPU I want to use. Do you think it would be a good idea to let the user choose which one? Maybe through a cmd batch file or so.
Hm, it’s probably possible, though it would require another “launcher” application and a bunch of system-specific code.
For example, on Linux I can request using the dedicated Nvidia GPU with optirun, see Bumblebee - ArchWiki.
I don’t know about other GPUs (AMD) or systems (like Windows), though I presume there are similar options.
I’m not sure how useful it would be. E.g. on Linux, GNOME 3 also exposes a “Run this using a dedicated GPU” context menu item for every application. So in practice I don’t need a CGE-specific solution, I already have a solution at the system level to do this. Though this is tied to me using Linux+GNOME in this case.
It’s similar to the option you mention too. So you also already have a system-level solution to this, independent of CGE.
Note that, unfortunately, the application cannot “request a dedicated GPU” at runtime. So we cannot implement in CGE e.g. a standard command-line option for all applications like --dedicated-gpu=yes/no. That’s because of the way both Nvidia and AMD decided to implement this: the EXE of the application exports some special symbols to request using the dedicated GPU, and in effect the whole process from this EXE knows (from the very start, when it runs) which GPU it uses.
All in all, while such a launcher is possible, it seems like a bit of work, and it is system-specific and Nvidia/AMD-specific. So I don’t have plans to develop such a launcher in CGE at this point… though anyone could make one outside CGE.
I did not mean to change the GPU model at runtime, just before you execute the app / game.
But yeah, I guess every operating system handles such stuff differently.
I was thinking of playing a bit longer on my laptop by deciding myself which GPU it should use.
Continue the discussion at Castle Game Engine Forum